Compare commits

...

73 Commits

Author SHA1 Message Date
Elyse
3af07f928f
Merge 670bafe148 into da252d9d32 2024-11-18 02:47:52 +02:00
bashonly
da252d9d32
[cleanup] Misc (#11554)
Closes #6884
Authored by: bashonly, Grub4K, seproDev

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
Co-authored-by: sepro <sepro@sepr0.com>
2024-11-17 23:25:05 +00:00
gillux
e079ffbda6
[ie/litv] Fix extractor (#11071)
Authored by: jiru
2024-11-17 21:37:15 +00:00
bashonly
2009cb27e1
[ie/SonyLIVSeries] Add sort_order extractor-arg (#11569)
Authored by: bashonly
2024-11-17 21:16:22 +00:00
Jackson Humphrey
f351440f1d
[ie/ctvnews] Fix extractor (#11534)
Closes #8689
Authored by: jshumphrey, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2024-11-17 21:06:50 +00:00
bashonly
670bafe148
Merge branch 'yt-dlp:master' into pr/live-sections 2024-11-07 11:48:52 -06:00
bashonly
e05694e550
Merge branch 'yt-dlp:master' into pr/live-sections 2024-10-26 09:35:14 -05:00
bashonly
9dd8574b68
Merge branch 'master' into yt-live-from-start-range 2024-10-09 11:23:14 -05:00
bashonly
160d973aee
Merge branch 'master' into yt-live-from-start-range 2024-08-17 04:40:45 -05:00
bashonly
c0be43d4d7
Merge branch 'yt-dlp:master' into pr/live-sections 2024-07-24 22:58:09 -05:00
bashonly
4f1af12b70
Merge branch 'master' into pr/live-sections 2024-07-21 17:25:36 -05:00
bashonly
724a6cb2cb
Merge branch 'yt-dlp:master' into pr/live-sections 2024-07-10 19:08:37 -05:00
bashonly
66a6e0a686
Merge branch 'yt-dlp:master' into pr/live-sections 2024-07-08 00:18:09 -05:00
bashonly
6208f7be9c
Merge branch 'master' into yt-live-from-start-range 2024-06-12 01:29:53 -05:00
bashonly
6a84199473
Merge branch 'yt-dlp:master' into pr/live-sections 2024-05-28 13:22:13 -05:00
bashonly
54ad67d785
Merge branch 'yt-dlp:master' into pr/live-sections 2024-05-23 09:48:06 -05:00
bashonly
172dfbeaed
Merge branch 'yt-dlp:master' into pr/live-sections 2024-05-10 13:52:35 -05:00
bashonly
cf96b24de6
Merge branch 'master' into yt-live-from-start-range 2024-04-16 11:01:17 -05:00
bashonly
50c943e8a0
Merge branch 'yt-dlp:master' into pr/yt-live-from-start-range 2024-03-19 15:18:22 -05:00
bashonly
6fc6349ef0
Merge branch 'master' into yt-live-from-start-range 2024-02-29 04:58:30 -06:00
bashonly
5156a16cf9
Merge branch 'master' into yt-live-from-start-range 2024-01-19 17:05:19 -06:00
Elyse
fb2b57a773 Merge remote-tracking branch 'github/yt-live-from-start-range' into yt-live-from-start-range 2023-10-08 01:01:31 -06:00
Elyse
2741b5827d Merge remote-tracking branch 'origin' into yt-live-from-start-range 2023-10-08 00:24:29 -06:00
bashonly
bd730470f2
Cleanup 2023-07-22 13:32:10 -05:00
bashonly
194bc49c55
Merge branch 'yt-dlp:master' into pr/6498 2023-07-22 13:23:54 -05:00
bashonly
1416cee726
Update yt_dlp/options.py 2023-07-22 17:59:48 +00:00
Elyse
622c555356 Fix bug after merge 2023-06-24 14:43:50 -06:00
Elyse
99e6074c5d Merge remote-tracking branch 'origin' into yt-live-from-start-range 2023-06-24 14:30:12 -06:00
Elyse
1f7974690e Merge remote-tracking branch 'origin' into yt-live-from-start-range 2023-06-03 14:39:32 -06:00
Elyse
8ee942a9c8 Add warning about --download-sections without --live-from-start 2023-05-13 13:29:28 -06:00
Elyse
444e02ef3b Merge remote-tracking branch 'origin/master' into yt-live-from-start-range 2023-05-07 00:33:18 -06:00
Elyse
4e93198ae6 Restore README.md
I think this is auto-generated by some script
2023-05-06 23:29:40 -06:00
Elyse
78285eea86 Update options docs 2023-05-06 23:24:58 -06:00
Elyse
7f93eb7a28 Support for epoch timestamps 2023-05-06 23:05:38 -06:00
Elyse
128d30492b Always compute last_seq 2023-04-18 23:17:39 -06:00
Elyse
129555b19a Fix return values of _extract_sequence_from_mpd 2023-03-17 22:39:21 -06:00
Elyse
01f672fe27 Lock less aggressively
This gives a performance improvement of about 30%
2023-03-17 22:37:31 -06:00
Elyse
2fbe18557b Add some documentation 2023-03-12 01:42:45 -06:00
Elyse
b131f3d1f1 Improve option documentation 2023-03-12 01:37:33 -06:00
Elyse
544836de83 Allow days in parse_duration 2023-03-12 01:37:21 -06:00
pukkandan
6cea8cbe2d
Merge remote-tracking branch 'origin/master' into pr/6498 2023-03-12 11:57:41 +05:30
Elyse
5e4699a623 Fix linter 2023-03-11 20:02:52 -06:00
Elyse
79ae58a5c4 Fix linter 2023-03-11 20:00:34 -06:00
Elyse
3faa1e33ed Add initial documentation 2023-03-11 19:51:14 -06:00
Elyse
fbae888c65 Add debug for selected section 2023-03-11 19:51:14 -06:00
Elyse
cdac7641d6 Remove tz_aware date code 2023-03-11 19:51:14 -06:00
Elyse
a43ba2eff6 Fix unified_timestamp 2023-03-11 19:51:14 -06:00
Elyse
0ed9a73a73 Add fragment count 2023-03-11 19:51:14 -06:00
Elyse
e40132da09 Revert "[utils] Allow using local timezone for 'now' timestamps"
This reverts commit 1799a6ae36.
2023-03-11 19:51:14 -06:00
Elyse
e6e2eb00f1 Support negative durations 2023-03-11 19:51:14 -06:00
pukkandan
9fc70f3f6d [extractor/youtube] Construct fragment list lazily
Building fragment list for all formats takes significant time for large videos
2023-03-11 19:51:14 -06:00
pukkandan
5ef1a928a7 [extractor/youtube] Add extractor-arg include_duplicate_formats 2023-03-11 19:51:14 -06:00
Lesmiscore
db62ffdafe [extractor/youtube] Add client name to format_note when -v (#6254)
Authored by: Lesmiscore, pukkandan
2023-03-11 19:51:14 -06:00
vampirefrog
f137666451 [extractor/rokfin] Re-construct manifest url (#6507)
Authored by: vampirefrog
2023-03-11 19:51:14 -06:00
Daniel Vogt
e3ffdf76aa [extractor/opencast] Fix format bug (#6512)
Authored by: C0D3D3V
2023-03-11 19:51:14 -06:00
pukkandan
9f717b69b4 [extractor/hidive] Fix login
Fixes https://github.com/yt-dlp/yt-dlp/issues/6493#issuecomment-1462906556
2023-03-11 19:51:14 -06:00
pukkandan
34d3df72e9 Support loading info.json with a list at its root 2023-03-11 19:51:14 -06:00
makeworld
96f5d29db0 [extractor/cbc:gem] Update _VALID_URL (#6499)
Authored by: makeworld-the-better-one
Closes #6395
2023-03-11 19:51:13 -06:00
Elyse
c222f6cbfc [extractor/twitch] Fix is_live (#6500)
Closes #6494
Authored by: elyse0
2023-03-11 19:51:13 -06:00
pukkandan
2d1655493f [extractor/youtube] Bypass throttling for -f17
and related cleanup

Thanks @AudricV for the finding
2023-03-11 19:51:13 -06:00
pukkandan
c376b95f95 [downloader/curl] Fix progress reporting
Bug in 8c53322cda
Closes #6490
2023-03-11 19:51:13 -06:00
Daniel Vogt
8df470761e [extractor/opencast] Add ltitools to _VALID_URL (#6371)
Authored by: C0D3D3V
2023-03-11 19:51:13 -06:00
D0LLYNH0
e3b08bac9c [extractor/iq] Set more language codes (#6476)
Authored by: D0LLYNH0
2023-03-11 19:51:13 -06:00
Elyse
932758707f Fix linter 2023-03-09 18:51:10 -06:00
Elyse
317ba03fdf Improve parse_chapters comments 2023-03-09 18:35:20 -06:00
Elyse
e42e25619f Create last_segment_url only if necessary 2023-03-09 18:24:39 -06:00
Elyse
fba1c397b1 [youtube] Support --download-sections for YT Livestream from start 2023-03-09 17:32:19 -06:00
Elyse
b83d7526f2 Add fixme in modified parse_chapters function
A range like '*(now-1hour)-(now-30minutes)' doesn't work
2023-03-09 17:21:02 -06:00
Elyse
fdb9aaf416 Use local timezone for download sections 2023-03-09 17:19:39 -06:00
Elyse
1799a6ae36 [utils] Allow using local timezone for 'now' timestamps 2023-03-09 17:18:44 -06:00
Elyse
367429e238 [common] Extract start and end keys for Dash fragments 2023-03-09 17:17:16 -06:00
Sophire
439be2b4a4 [utils] Add microseconds to unified_timestamp 2023-03-09 12:07:08 -06:00
Elyse
2fbd6de957 [utils] Add hackish 'now' support for --download-sections 2023-03-09 11:30:40 -06:00
17 changed files with 324 additions and 158 deletions

View File

@@ -342,8 +342,9 @@ If you fork the project on GitHub, you can run your fork's [build workflow](.git
extractor plugins; postprocessor plugins can extractor plugins; postprocessor plugins can
only be loaded from the default plugin only be loaded from the default plugin
directories directories
--flat-playlist Do not extract the videos of a playlist, --flat-playlist Do not extract a playlist's URL result
only list them entries; some entry metadata may be missing
and downloading may be bypassed
--no-flat-playlist Fully extract the videos of a playlist --no-flat-playlist Fully extract the videos of a playlist
(default) (default)
--live-from-start Download livestreams from the start. --live-from-start Download livestreams from the start.
@@ -1869,6 +1870,9 @@ The following extractors use this feature:
#### digitalconcerthall #### digitalconcerthall
* `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats * `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats
#### sonylivseries
* `sort_order`: Episode sort order for series extraction - one of `asc` (ascending, oldest first) or `desc` (descending, newest first). Default is `asc`
**Note**: These options may be changed/removed in the future without concern for backward compatibility **Note**: These options may be changed/removed in the future without concern for backward compatibility
<!-- MANPAGE: MOVE "INSTALLATION" SECTION HERE --> <!-- MANPAGE: MOVE "INSTALLATION" SECTION HERE -->

View File

@@ -234,5 +234,10 @@
"when": "57212a5f97ce367590aaa5c3e9a135eead8f81f7", "when": "57212a5f97ce367590aaa5c3e9a135eead8f81f7",
"short": "[ie/vimeo] Fix API retries (#11351)", "short": "[ie/vimeo] Fix API retries (#11351)",
"authors": ["bashonly"] "authors": ["bashonly"]
},
{
"action": "add",
"when": "52c0ffe40ad6e8404d93296f575007b05b04c686",
"short": "[priority] **Login with OAuth is no longer supported for YouTube**\nDue to a change made by the site, yt-dlp is longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)"
} }
] ]

View File

@@ -452,10 +452,15 @@ class TestUtil(unittest.TestCase):
self.assertEqual(unified_timestamp('2018-03-14T08:32:43.1493874+00:00'), 1521016363) self.assertEqual(unified_timestamp('2018-03-14T08:32:43.1493874+00:00'), 1521016363)
self.assertEqual(unified_timestamp('Sunday, 26 Nov 2006, 19:00'), 1164567600) self.assertEqual(unified_timestamp('Sunday, 26 Nov 2006, 19:00'), 1164567600)
self.assertEqual(unified_timestamp('wed, aug 16, 2008, 12:00pm'), 1218931200) self.assertEqual(unified_timestamp('wed, aug 16, 2008, 12:00pm'), 1218931200)
self.assertEqual(unified_timestamp('2022-10-13T02:37:47.831Z'), 1665628667)
self.assertEqual(unified_timestamp('December 31 1969 20:00:01 EDT'), 1) self.assertEqual(unified_timestamp('December 31 1969 20:00:01 EDT'), 1)
self.assertEqual(unified_timestamp('Wednesday 31 December 1969 18:01:26 MDT'), 86) self.assertEqual(unified_timestamp('Wednesday 31 December 1969 18:01:26 MDT'), 86)
self.assertEqual(unified_timestamp('12/31/1969 20:01:18 EDT', False), 78) self.assertEqual(unified_timestamp('12/31/1969 20:01:18 EDT', False), 78)
self.assertEqual(unified_timestamp('2023-03-09T18:01:33.646Z', with_milliseconds=True), 1678384893.646)
# ISO8601 spec says that if no timezone is specified, we should use local timezone;
# but yt-dlp uses UTC to keep things consistent
self.assertEqual(unified_timestamp('2023-03-11T06:48:34.008'), 1678517314)
def test_determine_ext(self): def test_determine_ext(self):
self.assertEqual(determine_ext('http://example.com/foo/bar.mp4/?download'), 'mp4') self.assertEqual(determine_ext('http://example.com/foo/bar.mp4/?download'), 'mp4')
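A quick sketch of what the new assertions above exercise, assuming this patch is applied: the default path still truncates to whole seconds, while the new flag preserves the fractional part:

```python
from yt_dlp.utils import unified_timestamp

unified_timestamp('2023-03-09T18:01:33.646Z')                          # -> 1678384893
unified_timestamp('2023-03-09T18:01:33.646Z', with_milliseconds=True)  # -> 1678384893.646
```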

View File

@@ -28,7 +28,12 @@ from .cache import Cache
from .compat import urllib # isort: split from .compat import urllib # isort: split
from .compat import urllib_req_to_req from .compat import urllib_req_to_req
from .cookies import CookieLoadError, LenientSimpleCookie, load_cookies from .cookies import CookieLoadError, LenientSimpleCookie, load_cookies
from .downloader import FFmpegFD, get_suitable_downloader, shorten_protocol_name from .downloader import (
DashSegmentsFD,
FFmpegFD,
get_suitable_downloader,
shorten_protocol_name,
)
from .downloader.rtmp import rtmpdump_version from .downloader.rtmp import rtmpdump_version
from .extractor import gen_extractor_classes, get_info_extractor from .extractor import gen_extractor_classes, get_info_extractor
from .extractor.common import UnsupportedURLIE from .extractor.common import UnsupportedURLIE
@@ -3372,7 +3377,7 @@ class YoutubeDL:
fd, success = None, True fd, success = None, True
if info_dict.get('protocol') or info_dict.get('url'): if info_dict.get('protocol') or info_dict.get('url'):
fd = get_suitable_downloader(info_dict, self.params, to_stdout=temp_filename == '-') fd = get_suitable_downloader(info_dict, self.params, to_stdout=temp_filename == '-')
if fd != FFmpegFD and 'no-direct-merge' not in self.params['compat_opts'] and ( if fd not in [FFmpegFD, DashSegmentsFD] and 'no-direct-merge' not in self.params['compat_opts'] and (
info_dict.get('section_start') or info_dict.get('section_end')): info_dict.get('section_start') or info_dict.get('section_end')):
msg = ('This format cannot be partially downloaded' if FFmpegFD.available() msg = ('This format cannot be partially downloaded' if FFmpegFD.available()
else 'You have requested downloading the video partially, but ffmpeg is not installed') else 'You have requested downloading the video partially, but ffmpeg is not installed')

View File

@@ -12,6 +12,7 @@ import itertools
import optparse import optparse
import os import os
import re import re
import time
import traceback import traceback
from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS, CookieLoadError from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS, CookieLoadError
@@ -340,12 +341,13 @@ def validate_options(opts):
(?P<end_sign>-?)(?P<end>[^-]+) (?P<end_sign>-?)(?P<end>[^-]+)
)?''' )?'''
current_time = time.time()
chapters, ranges, from_url = [], [], False chapters, ranges, from_url = [], [], False
for regex in value or []: for regex in value or []:
if advanced and regex == '*from-url': if advanced and regex == '*from-url':
from_url = True from_url = True
continue continue
elif not regex.startswith('*'): elif not regex.startswith('*') and not regex.startswith('#'):
try: try:
chapters.append(re.compile(regex)) chapters.append(re.compile(regex))
except re.error as err: except re.error as err:
@@ -362,11 +364,16 @@
err = 'Must be of the form "*start-end"' err = 'Must be of the form "*start-end"'
elif not advanced and any(signs): elif not advanced and any(signs):
err = 'Negative timestamps are not allowed' err = 'Negative timestamps are not allowed'
else: elif regex.startswith('*'):
dur[0] *= -1 if signs[0] else 1 dur[0] *= -1 if signs[0] else 1
dur[1] *= -1 if signs[1] else 1 dur[1] *= -1 if signs[1] else 1
if dur[1] == float('-inf'): if dur[1] == float('-inf'):
err = '"-inf" is not a valid end' err = '"-inf" is not a valid end'
elif regex.startswith('#'):
dur[0] = dur[0] * (-1 if signs[0] else 1) + current_time
dur[1] = dur[1] * (-1 if signs[1] else 1) + current_time
if dur[1] == float('-inf'):
err = '"-inf" is not a valid end'
if err: if err:
raise ValueError(f'invalid {name} time range "{regex}". {err}') raise ValueError(f'invalid {name} time range "{regex}". {err}')
ranges.append(dur) ranges.append(dur)

View File

@@ -36,6 +36,8 @@ class DashSegmentsFD(FragmentFD):
'filename': fmt.get('filepath') or filename, 'filename': fmt.get('filepath') or filename,
'live': 'is_from_start' if fmt.get('is_from_start') else fmt.get('is_live'), 'live': 'is_from_start' if fmt.get('is_from_start') else fmt.get('is_live'),
'total_frags': fragment_count, 'total_frags': fragment_count,
'section_start': info_dict.get('section_start'),
'section_end': info_dict.get('section_end'),
} }
if real_downloader: if real_downloader:

View File

@@ -1,4 +1,3 @@
from .common import InfoExtractor from .common import InfoExtractor
from ..utils import ( from ..utils import (
ExtractorError, ExtractorError,

View File

@@ -2733,7 +2733,7 @@ class InfoExtractor:
r = int(s.get('r', 0)) r = int(s.get('r', 0))
ms_info['total_number'] += 1 + r ms_info['total_number'] += 1 + r
ms_info['s'].append({ ms_info['s'].append({
't': int(s.get('t', 0)), 't': int_or_none(s.get('t')),
# @d is mandatory (see [1, 5.3.9.6.2, Table 17, page 60]) # @d is mandatory (see [1, 5.3.9.6.2, Table 17, page 60])
'd': int(s.attrib['d']), 'd': int(s.attrib['d']),
'r': r, 'r': r,
@@ -2775,8 +2775,14 @@ class InfoExtractor:
return ms_info return ms_info
mpd_duration = parse_duration(mpd_doc.get('mediaPresentationDuration')) mpd_duration = parse_duration(mpd_doc.get('mediaPresentationDuration'))
availability_start_time = unified_timestamp(
mpd_doc.get('availabilityStartTime'), with_milliseconds=True) or 0
stream_numbers = collections.defaultdict(int) stream_numbers = collections.defaultdict(int)
for period_idx, period in enumerate(mpd_doc.findall(_add_ns('Period'))): for period_idx, period in enumerate(mpd_doc.findall(_add_ns('Period'))):
# segmentIngestTime is completely out of spec, but YT Livestream do this
segment_ingest_time = period.get('{http://youtube.com/yt/2012/10/10}segmentIngestTime')
if segment_ingest_time:
availability_start_time = unified_timestamp(segment_ingest_time, with_milliseconds=True)
period_entry = { period_entry = {
'id': period.get('id', f'period-{period_idx}'), 'id': period.get('id', f'period-{period_idx}'),
'formats': [], 'formats': [],
@@ -2955,13 +2961,17 @@ class InfoExtractor:
'Bandwidth': bandwidth, 'Bandwidth': bandwidth,
'Number': segment_number, 'Number': segment_number,
} }
duration = float_or_none(segment_d, representation_ms_info['timescale'])
start = float_or_none(segment_time, representation_ms_info['timescale'])
representation_ms_info['fragments'].append({ representation_ms_info['fragments'].append({
media_location_key: segment_url, media_location_key: segment_url,
'duration': float_or_none(segment_d, representation_ms_info['timescale']), 'duration': duration,
'start': availability_start_time + start,
'end': availability_start_time + start + duration,
}) })
for s in representation_ms_info['s']: for s in representation_ms_info['s']:
segment_time = s.get('t') or segment_time segment_time = s['t'] if s.get('t') is not None else segment_time
segment_d = s['d'] segment_d = s['d']
add_segment_url() add_segment_url()
segment_number += 1 segment_number += 1
@@ -2977,6 +2987,7 @@ class InfoExtractor:
fragments = [] fragments = []
segment_index = 0 segment_index = 0
timescale = representation_ms_info['timescale'] timescale = representation_ms_info['timescale']
start = 0
for s in representation_ms_info['s']: for s in representation_ms_info['s']:
duration = float_or_none(s['d'], timescale) duration = float_or_none(s['d'], timescale)
for _ in range(s.get('r', 0) + 1): for _ in range(s.get('r', 0) + 1):
@@ -2984,8 +2995,11 @@ class InfoExtractor:
fragments.append({ fragments.append({
location_key(segment_uri): segment_uri, location_key(segment_uri): segment_uri,
'duration': duration, 'duration': duration,
'start': availability_start_time + start,
'end': availability_start_time + start + duration,
}) })
segment_index += 1 segment_index += 1
start += duration
representation_ms_info['fragments'] = fragments representation_ms_info['fragments'] = fragments
elif 'segment_urls' in representation_ms_info: elif 'segment_urls' in representation_ms_info:
# Segment URLs with no SegmentTimeline # Segment URLs with no SegmentTimeline
@@ -3767,7 +3781,7 @@ class InfoExtractor:
""" Merge subtitle dictionaries, language by language. """ """ Merge subtitle dictionaries, language by language. """
if target is None: if target is None:
target = {} target = {}
for d in dicts: for d in filter(None, dicts):
for lang, subs in d.items(): for lang, subs in d.items():
target[lang] = cls._merge_subtitle_items(target.get(lang, []), subs) target[lang] = cls._merge_subtitle_items(target.get(lang, []), subs)
return target return target

View File

@@ -1,11 +1,24 @@
import json
import re import re
import urllib.parse
from .common import InfoExtractor from .common import InfoExtractor
from ..utils import orderedSet from .ninecninemedia import NineCNineMediaIE
from ..utils import extract_attributes, orderedSet
from ..utils.traversal import find_element, traverse_obj
class CTVNewsIE(InfoExtractor): class CTVNewsIE(InfoExtractor):
_VALID_URL = r'https?://(?:.+?\.)?ctvnews\.ca/(?:video\?(?:clip|playlist|bin)Id=|.*?)(?P<id>[0-9.]+)(?:$|[#?&])' _BASE_REGEX = r'https?://(?:[^.]+\.)?ctvnews\.ca/'
_VIDEO_ID_RE = r'(?P<id>\d{5,})'
_PLAYLIST_ID_RE = r'(?P<id>\d\.\d{5,})'
_VALID_URL = [
rf'{_BASE_REGEX}video/c{_VIDEO_ID_RE}',
rf'{_BASE_REGEX}video(?:-gallery)?/?\?clipId={_VIDEO_ID_RE}',
rf'{_BASE_REGEX}video/?\?(?:playlist|bin)Id={_PLAYLIST_ID_RE}',
rf'{_BASE_REGEX}(?!video/)[^?#]*?{_PLAYLIST_ID_RE}/?(?:$|[?#])',
rf'{_BASE_REGEX}(?!video/)[^?#]+\?binId={_PLAYLIST_ID_RE}',
]
_TESTS = [{ _TESTS = [{
'url': 'http://www.ctvnews.ca/video?clipId=901995', 'url': 'http://www.ctvnews.ca/video?clipId=901995',
'md5': 'b608f466c7fa24b9666c6439d766ab7e', 'md5': 'b608f466c7fa24b9666c6439d766ab7e',
@@ -17,13 +30,32 @@ class CTVNewsIE(InfoExtractor):
'timestamp': 1467286284, 'timestamp': 1467286284,
'upload_date': '20160630', 'upload_date': '20160630',
'categories': [], 'categories': [],
'tags': [],
'season_id': 57981,
'duration': 764.631,
'series': 'CTV News National story',
'thumbnail': r're:^https?://.*\.jpg$',
'season': 'Season 0',
'season_number': 0, 'season_number': 0,
'season': 'Season 0',
'tags': [],
'series': 'CTV News National | Archive | Stories 2',
'season_id': '57981',
'thumbnail': r're:https?://.*\.jpg$',
'duration': 764.631,
},
}, {
'url': 'https://barrie.ctvnews.ca/video/c3030933-here_s-what_s-making-news-for-nov--15?binId=1272429',
'md5': '8b8c2b33c5c1803e3c26bc74ff8694d5',
'info_dict': {
'id': '3030933',
'ext': 'flv',
'title': 'Here’s what’s making news for Nov. 15',
'description': 'Here are the top stories we’re working on for CTV News at 11 for Nov. 15',
'thumbnail': 'http://images2.9c9media.com/image_asset/2021_2_22_a602e68e-1514-410e-a67a-e1f7cccbacab_png_2000x1125.jpg',
'season_id': '58104',
'season_number': 0,
'tags': [],
'season': 'Season 0',
'categories': [],
'series': 'CTV News Barrie',
'upload_date': '20241116',
'duration': 42.943,
'timestamp': 1731722452,
}, },
}, { }, {
'url': 'http://www.ctvnews.ca/video?playlistId=1.2966224', 'url': 'http://www.ctvnews.ca/video?playlistId=1.2966224',
@@ -46,6 +78,65 @@ class CTVNewsIE(InfoExtractor):
'id': '1.5736957', 'id': '1.5736957',
}, },
'playlist_mincount': 6, 'playlist_mincount': 6,
}, {
'url': 'https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797',
'md5': '24bc4b88cdc17d8c3fc01dfc228ab72c',
'info_dict': {
'id': '2695026',
'ext': 'flv',
'season_id': '89852',
'series': 'From CTV News Channel',
'description': 'md5:796a985a23cacc7e1e2fafefd94afd0a',
'season': '2023',
'title': 'Bank of Canada asks public about digital currency',
'categories': [],
'tags': [],
'upload_date': '20230526',
'season_number': 2023,
'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
'timestamp': 1685105157,
'duration': 253.553,
},
}, {
'url': 'https://stox.ctvnews.ca/video-gallery?clipId=582589',
'md5': '135cc592df607d29dddc931f1b756ae2',
'info_dict': {
'id': '582589',
'ext': 'flv',
'categories': [],
'timestamp': 1427906183,
'season_number': 0,
'duration': 125.559,
'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
'series': 'CTV News Stox',
'description': 'CTV original footage of the rise and fall of the Berlin Wall.',
'title': 'Berlin Wall',
'season_id': '63817',
'season': 'Season 0',
'tags': [],
'upload_date': '20150401',
},
}, {
'url': 'https://ottawa.ctvnews.ca/features/regional-contact/regional-contact-archive?binId=1.1164587#3023759',
'md5': 'a14c0603557decc6531260791c23cc5e',
'info_dict': {
'id': '3023759',
'ext': 'flv',
'season_number': 2024,
'timestamp': 1731798000,
'season': '2024',
'episode': 'Episode 125',
'description': 'CTV News Ottawa at Six',
'duration': 2712.076,
'episode_number': 125,
'upload_date': '20241116',
'title': 'CTV News Ottawa at Six for Saturday, November 16, 2024',
'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
'categories': [],
'tags': [],
'series': 'CTV News Ottawa at Six',
'season_id': '92667',
},
}, { }, {
'url': 'http://www.ctvnews.ca/1.810401', 'url': 'http://www.ctvnews.ca/1.810401',
'only_matching': True, 'only_matching': True,
@@ -57,29 +148,35 @@
'only_matching': True, 'only_matching': True,
}] }]
def _ninecninemedia_url_result(self, clip_id):
return self.url_result(f'9c9media:ctvnews_web:{clip_id}', NineCNineMediaIE, clip_id)
def _real_extract(self, url): def _real_extract(self, url):
page_id = self._match_id(url) page_id = self._match_id(url)
def ninecninemedia_url_result(clip_id): if mobj := re.fullmatch(self._VIDEO_ID_RE, urllib.parse.urlparse(url).fragment):
return { page_id = mobj.group('id')
'_type': 'url_transparent',
'id': clip_id,
'url': f'9c9media:ctvnews_web:{clip_id}',
'ie_key': 'NineCNineMedia',
}
if page_id.isdigit(): if re.fullmatch(self._VIDEO_ID_RE, page_id):
return ninecninemedia_url_result(page_id) return self._ninecninemedia_url_result(page_id)
else:
webpage = self._download_webpage(f'http://www.ctvnews.ca/{page_id}', page_id, query={ webpage = self._download_webpage(f'https://www.ctvnews.ca/{page_id}', page_id, query={
'ot': 'example.AjaxPageLayout.ot', 'ot': 'example.AjaxPageLayout.ot',
'maxItemsPerPage': 1000000, 'maxItemsPerPage': 1000000,
}) })
entries = [ninecninemedia_url_result(clip_id) for clip_id in orderedSet( entries = [self._ninecninemedia_url_result(clip_id)
re.findall(r'clip\.id\s*=\s*(\d+);', webpage))] for clip_id in orderedSet(re.findall(r'clip\.id\s*=\s*(\d+);', webpage))]
if not entries: if not entries:
webpage = self._download_webpage(url, page_id) webpage = self._download_webpage(url, page_id)
if 'getAuthStates("' in webpage: if 'getAuthStates("' in webpage:
entries = [ninecninemedia_url_result(clip_id) for clip_id in entries = [self._ninecninemedia_url_result(clip_id) for clip_id in
self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')] self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')]
return self.playlist_result(entries, page_id) else:
entries = [
self._ninecninemedia_url_result(clip_id) for clip_id in
traverse_obj(webpage, (
{find_element(tag='jasper-player-container', html=True)},
{extract_attributes}, 'axis-ids', {json.loads}, ..., 'axisId', {str}))
]
return self.playlist_result(entries, page_id)
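A small sketch of the new fragment-id shortcut in `_real_extract`, using the Ottawa URL from the tests above: a bare clip id in the URL fragment overrides the playlist id matched from the path:

```python
import re
import urllib.parse

_VIDEO_ID_RE = r'(?P<id>\d{5,})'

url = 'https://ottawa.ctvnews.ca/features/regional-contact/regional-contact-archive?binId=1.1164587#3023759'
page_id = '1.1164587'  # what _match_id yields for the playlist pattern
if mobj := re.fullmatch(_VIDEO_ID_RE, urllib.parse.urlparse(url).fragment):
    page_id = mobj.group('id')  # -> '3023759', extracted as a single video
```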

View File

@@ -569,7 +569,7 @@ class FacebookIE(InfoExtractor):
if dash_manifest: if dash_manifest:
formats.extend(self._parse_mpd_formats( formats.extend(self._parse_mpd_formats(
compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)), compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)),
mpd_url=url_or_none(video.get('dash_manifest_url')) or mpd_url)) mpd_url=url_or_none(vid_data.get('dash_manifest_url')) or mpd_url))
def process_formats(info): def process_formats(info):
# Downloads with browser's User-Agent are rate limited. Working around # Downloads with browser's User-Agent are rate limited. Working around

View File

@@ -1,30 +1,32 @@
import json import json
import uuid
from .common import InfoExtractor from .common import InfoExtractor
from ..utils import ( from ..utils import (
ExtractorError, ExtractorError,
int_or_none, int_or_none,
join_nonempty,
smuggle_url, smuggle_url,
traverse_obj, traverse_obj,
try_call, try_call,
unsmuggle_url, unsmuggle_url,
urljoin,
) )
class LiTVIE(InfoExtractor): class LiTVIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?litv\.tv/(?:vod|promo)/[^/]+/(?:content\.do)?\?.*?\b(?:content_)?id=(?P<id>[^&]+)' _VALID_URL = r'https?://(?:www\.)?litv\.tv/(?:[^/?#]+/watch/|vod/[^/?#]+/content\.do\?content_id=)(?P<id>[\w-]+)'
_URL_TEMPLATE = 'https://www.litv.tv/%s/watch/%s'
_URL_TEMPLATE = 'https://www.litv.tv/vod/%s/content.do?content_id=%s' _GEO_COUNTRIES = ['TW']
_TESTS = [{ _TESTS = [{
'url': 'https://www.litv.tv/vod/drama/content.do?brc_id=root&id=VOD00041610&isUHEnabled=true&autoPlay=1', 'url': 'https://www.litv.tv/drama/watch/VOD00041610',
'info_dict': { 'info_dict': {
'id': 'VOD00041606', 'id': 'VOD00041606',
'title': '花千骨', 'title': '花千骨',
}, },
'playlist_count': 51, # 50 episodes + 1 trailer 'playlist_count': 51, # 50 episodes + 1 trailer
}, { }, {
'url': 'https://www.litv.tv/vod/drama/content.do?brc_id=root&id=VOD00041610&isUHEnabled=true&autoPlay=1', 'url': 'https://www.litv.tv/drama/watch/VOD00041610',
'md5': 'b90ff1e9f1d8f5cfcd0a44c3e2b34c7a', 'md5': 'b90ff1e9f1d8f5cfcd0a44c3e2b34c7a',
'info_dict': { 'info_dict': {
'id': 'VOD00041610', 'id': 'VOD00041610',
@@ -32,16 +34,15 @@ class LiTVIE(InfoExtractor):
'title': '花千骨第1集', 'title': '花千骨第1集',
'thumbnail': r're:https?://.*\.jpg$', 'thumbnail': r're:https?://.*\.jpg$',
'description': '《花千骨》陸劇線上看。十六年前,平靜的村莊內,一名女嬰隨異相出生,途徑此地的蜀山掌門清虛道長算出此女命運非同一般,她體內散發的異香易招惹妖魔。一念慈悲下,他在村莊周邊設下結界阻擋妖魔入侵,讓其年滿十六後去蜀山,並賜名花千骨。', 'description': '《花千骨》陸劇線上看。十六年前,平靜的村莊內,一名女嬰隨異相出生,途徑此地的蜀山掌門清虛道長算出此女命運非同一般,她體內散發的異香易招惹妖魔。一念慈悲下,他在村莊周邊設下結界阻擋妖魔入侵,讓其年滿十六後去蜀山,並賜名花千骨。',
'categories': ['奇幻', '愛情', '中國', '仙俠'], 'categories': ['奇幻', '愛情', '仙俠', '古裝'],
'episode': 'Episode 1', 'episode': 'Episode 1',
'episode_number': 1, 'episode_number': 1,
}, },
'params': { 'params': {
'noplaylist': True, 'noplaylist': True,
}, },
'skip': 'Georestricted to Taiwan',
}, { }, {
'url': 'https://www.litv.tv/promo/miyuezhuan/?content_id=VOD00044841&', 'url': 'https://www.litv.tv/drama/watch/VOD00044841',
'md5': '88322ea132f848d6e3e18b32a832b918', 'md5': '88322ea132f848d6e3e18b32a832b918',
'info_dict': { 'info_dict': {
'id': 'VOD00044841', 'id': 'VOD00044841',
@@ -55,94 +56,62 @@
def _extract_playlist(self, playlist_data, content_type): def _extract_playlist(self, playlist_data, content_type):
all_episodes = [ all_episodes = [
self.url_result(smuggle_url( self.url_result(smuggle_url(
self._URL_TEMPLATE % (content_type, episode['contentId']), self._URL_TEMPLATE % (content_type, episode['content_id']),
{'force_noplaylist': True})) # To prevent infinite recursion {'force_noplaylist': True})) # To prevent infinite recursion
for episode in traverse_obj(playlist_data, ('seasons', ..., 'episode', lambda _, v: v['contentId']))] for episode in traverse_obj(playlist_data, ('seasons', ..., 'episodes', lambda _, v: v['content_id']))]
return self.playlist_result(all_episodes, playlist_data['contentId'], playlist_data.get('title')) return self.playlist_result(all_episodes, playlist_data['content_id'], playlist_data.get('title'))
def _real_extract(self, url): def _real_extract(self, url):
url, smuggled_data = unsmuggle_url(url, {}) url, smuggled_data = unsmuggle_url(url, {})
video_id = self._match_id(url) video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id) webpage = self._download_webpage(url, video_id)
vod_data = self._search_nextjs_data(webpage, video_id)['props']['pageProps']
if self._search_regex( program_info = traverse_obj(vod_data, ('programInformation', {dict})) or {}
r'(?i)<meta\s[^>]*http-equiv="refresh"\s[^>]*content="[0-9]+;\s*url=https://www\.litv\.tv/"', playlist_data = traverse_obj(vod_data, ('seriesTree'))
webpage, 'meta refresh redirect', default=False, group=0): if playlist_data and self._yes_playlist(program_info.get('series_id'), video_id, smuggled_data):
raise ExtractorError('No such content found', expected=True) return self._extract_playlist(playlist_data, program_info.get('content_type'))
program_info = self._parse_json(self._search_regex( asset_id = traverse_obj(program_info, ('assets', 0, 'asset_id', {str}))
r'var\s+programInfo\s*=\s*([^;]+)', webpage, 'VOD data', default='{}'), if asset_id: # This is a VOD
video_id) media_type = 'vod'
else: # This is a live stream
asset_id = program_info['content_id']
media_type = program_info['content_type']
puid = try_call(lambda: self._get_cookies('https://www.litv.tv/')['PUID'].value)
if puid:
endpoint = 'get-urls'
else:
puid = str(uuid.uuid4())
endpoint = 'get-urls-no-auth'
video_data = self._download_json(
f'https://www.litv.tv/api/{endpoint}', video_id,
data=json.dumps({'AssetId': asset_id, 'MediaType': media_type, 'puid': puid}).encode(),
headers={'Content-Type': 'application/json'})
# In browsers `getProgramInfo` request is always issued. Usually this if error := traverse_obj(video_data, ('error', {dict})):
# endpoint gives the same result as the data embedded in the webpage. error_msg = traverse_obj(error, ('message', {str}))
# If, for some reason, there are no embedded data, we do an extra request. if error_msg and 'OutsideRegionError' in error_msg:
if 'assetId' not in program_info:
program_info = self._download_json(
'https://www.litv.tv/vod/ajax/getProgramInfo', video_id,
query={'contentId': video_id},
headers={'Accept': 'application/json'})
series_id = program_info['seriesId']
if self._yes_playlist(series_id, video_id, smuggled_data):
playlist_data = self._download_json(
'https://www.litv.tv/vod/ajax/getSeriesTree', video_id,
query={'seriesId': series_id}, headers={'Accept': 'application/json'})
return self._extract_playlist(playlist_data, program_info['contentType'])
video_data = self._parse_json(self._search_regex(
r'uiHlsUrl\s*=\s*testBackendData\(([^;]+)\);',
webpage, 'video data', default='{}'), video_id)
if not video_data:
payload = {'assetId': program_info['assetId']}
puid = try_call(lambda: self._get_cookies('https://www.litv.tv/')['PUID'].value)
if puid:
payload.update({
'type': 'auth',
'puid': puid,
})
endpoint = 'getUrl'
else:
payload.update({
'watchDevices': program_info['watchDevices'],
'contentType': program_info['contentType'],
})
endpoint = 'getMainUrlNoAuth'
video_data = self._download_json(
f'https://www.litv.tv/vod/ajax/{endpoint}', video_id,
data=json.dumps(payload).encode(),
headers={'Content-Type': 'application/json'})
if not video_data.get('fullpath'):
error_msg = video_data.get('errorMessage')
if error_msg == 'vod.error.outsideregionerror':
self.raise_geo_restricted('This video is available in Taiwan only') self.raise_geo_restricted('This video is available in Taiwan only')
if error_msg: elif error_msg:
raise ExtractorError(f'{self.IE_NAME} said: {error_msg}', expected=True) raise ExtractorError(f'{self.IE_NAME} said: {error_msg}', expected=True)
raise ExtractorError(f'Unexpected result from {self.IE_NAME}') raise ExtractorError(f'Unexpected error from {self.IE_NAME}')
formats = self._extract_m3u8_formats( formats = self._extract_m3u8_formats(
video_data['fullpath'], video_id, ext='mp4', video_data['result']['AssetURLs'][0], video_id, ext='mp4', m3u8_id='hls')
entry_protocol='m3u8_native', m3u8_id='hls')
for a_format in formats: for a_format in formats:
# LiTV HLS segments doesn't like compressions # LiTV HLS segments doesn't like compressions
a_format.setdefault('http_headers', {})['Accept-Encoding'] = 'identity' a_format.setdefault('http_headers', {})['Accept-Encoding'] = 'identity'
title = program_info['title'] + program_info.get('secondaryMark', '')
description = program_info.get('description')
thumbnail = program_info.get('imageFile')
categories = [item['name'] for item in program_info.get('category', [])]
episode = int_or_none(program_info.get('episode'))
return { return {
'id': video_id, 'id': video_id,
'formats': formats, 'formats': formats,
'title': title, 'title': join_nonempty('title', 'secondary_mark', delim='', from_dict=program_info),
'description': description, **traverse_obj(program_info, {
'thumbnail': thumbnail, 'description': ('description', {str}),
'categories': categories, 'thumbnail': ('picture', {urljoin('https://p-cdnstatic.svc.litv.tv/')}),
'episode_number': episode, 'categories': ('genres', ..., 'name', {str}),
'episode_number': ('episode', {int_or_none}),
}),
} }

View File

@@ -13,7 +13,10 @@ from ..utils import (
unified_timestamp, unified_timestamp,
url_or_none, url_or_none,
) )
from ..utils.traversal import traverse_obj from ..utils.traversal import (
subs_list_to_dict,
traverse_obj,
)
class RutubeBaseIE(InfoExtractor): class RutubeBaseIE(InfoExtractor):
@@ -92,11 +95,11 @@ class RutubeBaseIE(InfoExtractor):
hls_url, video_id, 'mp4', fatal=False, m3u8_id='hls') hls_url, video_id, 'mp4', fatal=False, m3u8_id='hls')
formats.extend(fmts) formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles) self._merge_subtitles(subs, target=subtitles)
for caption in traverse_obj(options, ('captions', lambda _, v: url_or_none(v['file']))): self._merge_subtitles(traverse_obj(options, ('captions', ..., {
subtitles.setdefault(caption.get('code') or 'ru', []).append({ 'id': 'code',
'url': caption['file'], 'url': 'file',
'name': caption.get('langTitle'), 'name': ('langTitle', {str}),
}) }, all, {subs_list_to_dict(lang='ru')})), target=subtitles)
return formats, subtitles return formats, subtitles
def _download_and_extract_formats_and_subtitles(self, video_id, query=None): def _download_and_extract_formats_and_subtitles(self, video_id, query=None):
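A rough sketch of the new caption mapping (sample caption objects assumed; the exact filtering done inside `subs_list_to_dict` may differ in detail): each caption dict is projected onto `id`/`url`/`name`, then grouped by language with `ru` as the fallback:

```python
from yt_dlp.utils.traversal import subs_list_to_dict, traverse_obj

captions = [  # shape assumed from the API fields referenced above
    {'code': 'en', 'file': 'https://example.com/subs/en.vtt', 'langTitle': 'English'},
    {'code': None, 'file': 'https://example.com/subs/unknown.vtt', 'langTitle': None},
]
subtitles = traverse_obj(captions, (..., {
    'id': 'code',
    'url': 'file',
    'name': ('langTitle', {str}),
}, all, {subs_list_to_dict(lang='ru')}))
# -> {'en': [{'url': '...en.vtt', 'name': 'English'}], 'ru': [{'url': '...unknown.vtt'}]}
```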

View File

@@ -199,8 +199,9 @@ class SonyLIVSeriesIE(InfoExtractor):
}, },
}] }]
_API_BASE = 'https://apiv2.sonyliv.com/AGL' _API_BASE = 'https://apiv2.sonyliv.com/AGL'
_SORT_ORDERS = ('asc', 'desc')
def _entries(self, show_id): def _entries(self, show_id, sort_order):
headers = { headers = {
'Accept': 'application/json, text/plain, */*', 'Accept': 'application/json, text/plain, */*',
'Referer': 'https://www.sonyliv.com', 'Referer': 'https://www.sonyliv.com',
@@ -215,6 +216,9 @@
'from': '0', 'from': '0',
'to': '49', 'to': '49',
}), ('resultObj', 'containers', 0, 'containers', lambda _, v: int_or_none(v['id']))) }), ('resultObj', 'containers', 0, 'containers', lambda _, v: int_or_none(v['id'])))
if sort_order == 'desc':
seasons = reversed(seasons)
for season in seasons: for season in seasons:
season_id = str(season['id']) season_id = str(season['id'])
note = traverse_obj(season, ('metadata', 'title', {str})) or 'season' note = traverse_obj(season, ('metadata', 'title', {str})) or 'season'
@@ -226,7 +230,7 @@
'from': str(cursor), 'from': str(cursor),
'to': str(cursor + 99), 'to': str(cursor + 99),
'orderBy': 'episodeNumber', 'orderBy': 'episodeNumber',
'sortOrder': 'asc', 'sortOrder': sort_order,
}), ('resultObj', 'containers', 0, 'containers', lambda _, v: int_or_none(v['id']))) }), ('resultObj', 'containers', 0, 'containers', lambda _, v: int_or_none(v['id'])))
if not episodes: if not episodes:
break break
@@ -237,4 +241,10 @@
def _real_extract(self, url): def _real_extract(self, url):
show_id = self._match_id(url) show_id = self._match_id(url)
return self.playlist_result(self._entries(show_id), playlist_id=show_id)
sort_order = self._configuration_arg('sort_order', [self._SORT_ORDERS[0]])[0]
if sort_order not in self._SORT_ORDERS:
raise ValueError(
f'Invalid sort order "{sort_order}". Allowed values are: {", ".join(self._SORT_ORDERS)}')
return self.playlist_result(self._entries(show_id, sort_order), playlist_id=show_id)

View File

@@ -241,7 +241,7 @@ class SoundcloudBaseIE(InfoExtractor):
format_urls.add(format_url) format_urls.add(format_url)
formats.append({ formats.append({
'format_id': 'download', 'format_id': 'download',
'ext': urlhandle_detect_ext(urlh) or 'mp3', 'ext': urlhandle_detect_ext(urlh, default='mp3'),
'filesize': int_or_none(urlh.headers.get('Content-Length')), 'filesize': int_or_none(urlh.headers.get('Content-Length')),
'url': format_url, 'url': format_url,
'quality': 10, 'quality': 10,

View File

@@ -2869,17 +2869,17 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
microformats = traverse_obj( microformats = traverse_obj(
prs, (..., 'microformat', 'playerMicroformatRenderer'), prs, (..., 'microformat', 'playerMicroformatRenderer'),
expected_type=dict) expected_type=dict)
_, live_status, _, formats, _ = self._list_formats(video_id, microformats, video_details, prs, player_url) with lock:
is_live = live_status == 'is_live' _, live_status, _, formats, _ = self._list_formats(video_id, microformats, video_details, prs, player_url)
start_time = time.time() is_live = live_status == 'is_live'
start_time = time.time()
def mpd_feed(format_id, delay): def mpd_feed(format_id, delay):
""" """
@returns (manifest_url, manifest_stream_number, is_live) or None @returns (manifest_url, manifest_stream_number, is_live) or None
""" """
for retry in self.RetryManager(fatal=False): for retry in self.RetryManager(fatal=False):
with lock: refetch_manifest(format_id, delay)
refetch_manifest(format_id, delay)
f = next((f for f in formats if f['format_id'] == format_id), None) f = next((f for f in formats if f['format_id'] == format_id), None)
if not f: if not f:
@@ -2910,6 +2910,11 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
begin_index = 0 begin_index = 0
download_start_time = ctx.get('start') or time.time() download_start_time = ctx.get('start') or time.time()
section_start = ctx.get('section_start') or 0
section_end = ctx.get('section_end') or math.inf
self.write_debug(f'Selected section: {section_start} -> {section_end}')
lack_early_segments = download_start_time - (live_start_time or download_start_time) > MAX_DURATION lack_early_segments = download_start_time - (live_start_time or download_start_time) > MAX_DURATION
if lack_early_segments: if lack_early_segments:
self.report_warning(bug_reports_message( self.report_warning(bug_reports_message(
@@ -2930,9 +2935,10 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
or (mpd_url, stream_number, False)) or (mpd_url, stream_number, False))
if not refresh_sequence: if not refresh_sequence:
if expire_fast and not is_live: if expire_fast and not is_live:
return False, last_seq return False
elif old_mpd_url == mpd_url: elif old_mpd_url == mpd_url:
return True, last_seq return True
if manifestless_orig_fmt: if manifestless_orig_fmt:
fmt_info = manifestless_orig_fmt fmt_info = manifestless_orig_fmt
else: else:
@@ -2943,14 +2949,13 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
fmts = None fmts = None
if not fmts: if not fmts:
no_fragment_score += 2 no_fragment_score += 2
return False, last_seq return False
fmt_info = next(x for x in fmts if x['manifest_stream_number'] == stream_number) fmt_info = next(x for x in fmts if x['manifest_stream_number'] == stream_number)
fragments = fmt_info['fragments'] fragments = fmt_info['fragments']
fragment_base_url = fmt_info['fragment_base_url'] fragment_base_url = fmt_info['fragment_base_url']
assert fragment_base_url assert fragment_base_url
_last_seq = int(re.search(r'(?:/|^)sq/(\d+)', fragments[-1]['path']).group(1)) return True
return True, _last_seq
self.write_debug(f'[{video_id}] Generating fragments for format {format_id}') self.write_debug(f'[{video_id}] Generating fragments for format {format_id}')
while is_live: while is_live:
@@ -2970,11 +2975,19 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
last_segment_url = None last_segment_url = None
continue continue
else: else:
should_continue, last_seq = _extract_sequence_from_mpd(True, no_fragment_score > 15) should_continue = _extract_sequence_from_mpd(True, no_fragment_score > 15)
no_fragment_score += 2 no_fragment_score += 2
if not should_continue: if not should_continue:
continue continue
last_fragment = fragments[-1]
last_seq = int(re.search(r'(?:/|^)sq/(\d+)', fragments[-1]['path']).group(1))
known_fragment = next(
(fragment for fragment in fragments if f'sq/{known_idx}' in fragment['path']), None)
if known_fragment and known_fragment['end'] > section_end:
break
if known_idx > last_seq: if known_idx > last_seq:
last_segment_url = None last_segment_url = None
continue continue
@@ -2984,20 +2997,36 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
if begin_index < 0 and known_idx < 0: if begin_index < 0 and known_idx < 0:
# skip from the start when it's negative value # skip from the start when it's negative value
known_idx = last_seq + begin_index known_idx = last_seq + begin_index
if lack_early_segments: if lack_early_segments:
known_idx = max(known_idx, last_seq - int(MAX_DURATION // fragments[-1]['duration'])) known_idx = max(known_idx, last_seq - int(MAX_DURATION // last_fragment['duration']))
fragment_count = last_seq - known_idx if section_end == math.inf else int(
(section_end - section_start) // last_fragment['duration'])
try: try:
for idx in range(known_idx, last_seq): for idx in range(known_idx, last_seq):
# do not update sequence here or you'll get skipped some part of it # do not update sequence here or you'll get skipped some part of it
should_continue, _ = _extract_sequence_from_mpd(False, False) should_continue = _extract_sequence_from_mpd(False, False)
if not should_continue: if not should_continue:
known_idx = idx - 1 known_idx = idx - 1
raise ExtractorError('breaking out of outer loop') raise ExtractorError('breaking out of outer loop')
last_segment_url = urljoin(fragment_base_url, f'sq/{idx}')
yield { frag_duration = last_fragment['duration']
'url': last_segment_url, frag_start = last_fragment['start'] - (last_seq - idx) * frag_duration
'fragment_count': last_seq, frag_end = frag_start + frag_duration
}
if frag_start >= section_start and frag_end <= section_end:
last_segment_url = urljoin(fragment_base_url, f'sq/{idx}')
yield {
'url': last_segment_url,
'fragment_count': fragment_count,
'duration': frag_duration,
'start': frag_start,
'end': frag_end,
}
if known_idx == last_seq: if known_idx == last_seq:
no_fragment_score += 5 no_fragment_score += 5
else: else:
@@ -4170,6 +4199,9 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
dct['downloader_options'] = {'http_chunk_size': CHUNK_SIZE} dct['downloader_options'] = {'http_chunk_size': CHUNK_SIZE}
yield dct yield dct
if live_status == 'is_live' and self.get_param('download_ranges') and not self.get_param('live_from_start'):
self.report_warning('For YT livestreams, --download-sections is only supported with --live-from-start')
needs_live_processing = self._needs_live_processing(live_status, duration) needs_live_processing = self._needs_live_processing(live_status, duration)
skip_bad_formats = 'incomplete' not in format_types skip_bad_formats = 'incomplete' not in format_types
if self._configuration_arg('include_incomplete_formats'): if self._configuration_arg('include_incomplete_formats'):
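A simplified sketch of the section filter these hunks introduce (all values assumed): each fragment's wall-clock window is back-computed from the newest fragment in the manifest, and only fragments lying entirely inside `[section_start, section_end]` are yielded:

```python
section_start, section_end = 1678380000.0, 1678383600.0  # from ctx, epoch seconds
last_seq = 1000                                           # newest sequence number
last_fragment = {'start': 1678384000.0, 'duration': 5.0}  # from the parsed MPD

for idx in range(0, last_seq):
    frag_duration = last_fragment['duration']
    frag_start = last_fragment['start'] - (last_seq - idx) * frag_duration
    frag_end = frag_start + frag_duration
    if frag_start >= section_start and frag_end <= section_end:
        pass  # fragment sq/{idx} falls inside the requested section; yield it
```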

View File

@@ -419,7 +419,9 @@ def create_parser():
general.add_option( general.add_option(
'--flat-playlist', '--flat-playlist',
action='store_const', dest='extract_flat', const='in_playlist', default=False, action='store_const', dest='extract_flat', const='in_playlist', default=False,
help='Do not extract the videos of a playlist, only list them') help=(
'Do not extract a playlist\'s URL result entries; '
'some entry metadata may be missing and downloading may be bypassed'))
general.add_option( general.add_option(
'--no-flat-playlist', '--no-flat-playlist',
action='store_false', dest='extract_flat', action='store_false', dest='extract_flat',
@@ -427,7 +429,14 @@
general.add_option( general.add_option(
'--live-from-start', '--live-from-start',
action='store_true', dest='live_from_start', action='store_true', dest='live_from_start',
help='Download livestreams from the start. Currently only supported for YouTube (Experimental)') help=('Download livestreams from the start. Currently only supported for YouTube (Experimental). '
'Time ranges can be specified using --download-sections to download only a part of the stream. '
'Negative values are allowed for specifying a relative previous time, using the # syntax '
'e.g. --download-sections "#-24hours - 0" (download last 24 hours), '
'e.g. --download-sections "#-1h - 30m" (download from 1 hour ago until the next 30 minutes), '
'e.g. --download-sections "#-3days - -2days" (download from 3 days ago until 2 days ago). '
'It is also possible to specify an exact unix timestamp range, using the * syntax, '
'e.g. --download-sections "*1672531200 - 1672549200" (download between those two timestamps)'))
general.add_option( general.add_option(
'--no-live-from-start', '--no-live-from-start',
action='store_false', dest='live_from_start', action='store_false', dest='live_from_start',

View File

@@ -1250,7 +1250,7 @@ def unified_strdate(date_str, day_first=True):
return str(upload_date) return str(upload_date)
def unified_timestamp(date_str, day_first=True): def unified_timestamp(date_str, day_first=True, with_milliseconds=False):
if not isinstance(date_str, str): if not isinstance(date_str, str):
return None return None
@@ -1276,7 +1276,7 @@ def unified_timestamp(date_str, day_first=True):
for expression in date_formats(day_first): for expression in date_formats(day_first):
with contextlib.suppress(ValueError): with contextlib.suppress(ValueError):
dt_ = dt.datetime.strptime(date_str, expression) - timezone + dt.timedelta(hours=pm_delta) dt_ = dt.datetime.strptime(date_str, expression) - timezone + dt.timedelta(hours=pm_delta)
return calendar.timegm(dt_.timetuple()) return calendar.timegm(dt_.timetuple()) + (dt_.microsecond / 1e6 if with_milliseconds else 0)
timetuple = email.utils.parsedate_tz(date_str) timetuple = email.utils.parsedate_tz(date_str)
if timetuple: if timetuple:
@@ -2071,16 +2071,19 @@ def parse_duration(s):
days, hours, mins, secs, ms = [None] * 5 days, hours, mins, secs, ms = [None] * 5
m = re.match(r'''(?x) m = re.match(r'''(?x)
(?P<sign>[+-])?
(?P<before_secs> (?P<before_secs>
(?:(?:(?P<days>[0-9]+):)?(?P<hours>[0-9]+):)?(?P<mins>[0-9]+):)? (?:(?:(?P<days>[0-9]+):)?(?P<hours>[0-9]+):)?(?P<mins>[0-9]+):)?
(?P<secs>(?(before_secs)[0-9]{1,2}|[0-9]+)) (?P<secs>(?(before_secs)[0-9]{1,2}|[0-9]+))
(?P<ms>[.:][0-9]+)?Z?$ (?P<ms>[.:][0-9]+)?Z?$
''', s) ''', s)
if m: if m:
days, hours, mins, secs, ms = m.group('days', 'hours', 'mins', 'secs', 'ms') sign, days, hours, mins, secs, ms = m.group('sign', 'days', 'hours', 'mins', 'secs', 'ms')
else: else:
m = re.match( m = re.match(
r'''(?ix)(?:P? r'''(?ix)(?:
(?P<sign>[+-])?
P?
(?: (?:
[0-9]+\s*y(?:ears?)?,?\s* [0-9]+\s*y(?:ears?)?,?\s*
)? )?
@@ -2104,17 +2107,19 @@ def parse_duration(s):
(?P<secs>[0-9]+)(?P<ms>\.[0-9]+)?\s*s(?:ec(?:ond)?s?)?\s* (?P<secs>[0-9]+)(?P<ms>\.[0-9]+)?\s*s(?:ec(?:ond)?s?)?\s*
)?Z?$''', s) )?Z?$''', s)
if m: if m:
days, hours, mins, secs, ms = m.groups() sign, days, hours, mins, secs, ms = m.groups()
else: else:
m = re.match(r'(?i)(?:(?P<hours>[0-9.]+)\s*(?:hours?)|(?P<mins>[0-9.]+)\s*(?:mins?\.?|minutes?)\s*)Z?$', s) m = re.match(r'(?i)(?P<sign>[+-])?(?:(?P<days>[0-9.]+)\s*(?:days?)|(?P<hours>[0-9.]+)\s*(?:hours?)|(?P<mins>[0-9.]+)\s*(?:mins?\.?|minutes?)\s*)Z?$', s)
if m: if m:
hours, mins = m.groups() sign, days, hours, mins = m.groups()
else: else:
return None return None
sign = -1 if sign == '-' else 1
if ms: if ms:
ms = ms.replace(':', '.') ms = ms.replace(':', '.')
return sum(float(part or 0) * mult for part, mult in ( return sign * sum(float(part or 0) * mult for part, mult in (
(days, 86400), (hours, 3600), (mins, 60), (secs, 1), (ms, 1))) (days, 86400), (hours, 3600), (mins, 60), (secs, 1), (ms, 1)))
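A few illustrative values under this patch (a sketch; the first call shows pre-existing behavior):

```python
from yt_dlp.utils import parse_duration

parse_duration('30minutes')  # -> 1800.0  (pre-existing)
parse_duration('3days')      # -> 259200.0, days accepted in the unit fallback (new)
parse_duration('-24hours')   # -> -86400.0, leading sign honored (new)
parse_duration('-1:30')      # -> -90.0
```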