Mirror of https://github.com/yt-dlp/yt-dlp.git (synced 2024-11-23 15:51:24 +01:00)

Compare commits: 9398d2142b...5c0017c86e (27 commits)
Commits in this range:

- 5c0017c86e
- f919729538
- 7ea2787920
- f7257588bd
- 3095d815c9
- 06bd726ab3
- 52d9594ea6
- e720e8879d
- fe592cd6ab
- 61fd2648d2
- feaefd8ec6
- dcefdfe508
- 1e23756e50
- efe4b7101a
- 365e615d11
- f65ad7f3c2
- 53a7fcc231
- 31c13e92e2
- fe29c67a14
- 60f51dec60
- 28c242d82c
- d9a6507fe6
- 972a2d51ad
- 7398a7cb2f
- 51681d1294
- 41c6125907
- 16974726a4
CONTRIBUTORS (12 changes)

@@ -695,3 +695,15 @@ KBelmin
kesor
MellowKyler
Wesley107772
a13ssandr0
ChocoLZS
doe1080
hugovdev
jshumphrey
julionc
manavchaudhary1
powergold1
Sakura286
SamDecrock
stratus-ss
subrat-lima

Changelog.md (58 changes)

@@ -4,6 +4,64 @@
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->

### 2024.11.18

#### Important changes
- **Login with OAuth is no longer supported for YouTube**
    Due to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)

#### Core changes
- [Catch broken Cryptodome installations](https://github.com/yt-dlp/yt-dlp/commit/b83ca24eb72e1e558b0185bd73975586c0bc0546) ([#11486](https://github.com/yt-dlp/yt-dlp/issues/11486)) by [seproDev](https://github.com/seproDev)
- **utils**
    - [Fix `join_nonempty`, add `**kwargs` to `unpack`](https://github.com/yt-dlp/yt-dlp/commit/39d79c9b9cf23411d935910685c40aa1a2fdb409) ([#11559](https://github.com/yt-dlp/yt-dlp/issues/11559)) by [Grub4K](https://github.com/Grub4K)
    - `subs_list_to_dict`: [Add `lang` default parameter](https://github.com/yt-dlp/yt-dlp/commit/c014fbcddcb4c8f79d914ac5bb526758b540ea33) ([#11508](https://github.com/yt-dlp/yt-dlp/issues/11508)) by [Grub4K](https://github.com/Grub4K)

#### Extractor changes
- [Allow `ext` override for thumbnails](https://github.com/yt-dlp/yt-dlp/commit/eb64ae7d5def6df2aba74fb703e7f168fb299865) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **adobepass**: [Fix provider requests](https://github.com/yt-dlp/yt-dlp/commit/85fdc66b6e01d19a94b4f39b58e3c0cf23600902) ([#11472](https://github.com/yt-dlp/yt-dlp/issues/11472)) by [bashonly](https://github.com/bashonly)
- **archive.org**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/f2a4983df7a64c4e93b56f79dbd16a781bd90206) ([#11527](https://github.com/yt-dlp/yt-dlp/issues/11527)) by [jshumphrey](https://github.com/jshumphrey)
- **bandlab**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/6365e92589e4bc17b8fffb0125a716d144ad2137) ([#11535](https://github.com/yt-dlp/yt-dlp/issues/11535)) by [seproDev](https://github.com/seproDev)
- **chaturbate**
    - [Extract from API and support impersonation](https://github.com/yt-dlp/yt-dlp/commit/720b3dc453c342bc2e8df7dbc0acaab4479de46c) ([#11555](https://github.com/yt-dlp/yt-dlp/issues/11555)) by [powergold1](https://github.com/powergold1) (With fixes in [7cecd29](https://github.com/yt-dlp/yt-dlp/commit/7cecd299e4a5ef1f0f044b2fedc26f17e41f15e3) by [seproDev](https://github.com/seproDev))
    - [Support alternate domains](https://github.com/yt-dlp/yt-dlp/commit/a9f85670d03ab993dc589f21a9ffffcad61392d5) ([#10595](https://github.com/yt-dlp/yt-dlp/issues/10595)) by [manavchaudhary1](https://github.com/manavchaudhary1)
- **cloudflarestream**: [Avoid extraction via videodelivery.net](https://github.com/yt-dlp/yt-dlp/commit/2db8c2e7d57a1784b06057c48e3e91023720d195) ([#11478](https://github.com/yt-dlp/yt-dlp/issues/11478)) by [hugovdev](https://github.com/hugovdev)
- **ctvnews**
    - [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f351440f1dc5b3dfbfc5737b037a869d946056fe) ([#11534](https://github.com/yt-dlp/yt-dlp/issues/11534)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
    - [Fix playlist ID extraction](https://github.com/yt-dlp/yt-dlp/commit/f9d98509a898737c12977b2e2117277bada2c196) ([#8892](https://github.com/yt-dlp/yt-dlp/issues/8892)) by [qbnu](https://github.com/qbnu)
- **digitalconcerthall**: [Support login with access/refresh tokens](https://github.com/yt-dlp/yt-dlp/commit/f7257588bdff5f0b0452635a66b253a783c97357) ([#11571](https://github.com/yt-dlp/yt-dlp/issues/11571)) by [bashonly](https://github.com/bashonly)
- **facebook**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/bacc31b05a04181b63100c481565256b14813a5e) ([#11513](https://github.com/yt-dlp/yt-dlp/issues/11513)) by [bashonly](https://github.com/bashonly)
- **gamedevtv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8) ([#11368](https://github.com/yt-dlp/yt-dlp/issues/11368)) by [bashonly](https://github.com/bashonly), [stratus-ss](https://github.com/stratus-ss)
- **goplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6b43a8d84b881d769b480ba6e20ec691e9d1b92d) ([#11466](https://github.com/yt-dlp/yt-dlp/issues/11466)) by [bashonly](https://github.com/bashonly), [SamDecrock](https://github.com/SamDecrock)
- **kenh14**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/eb15fd5a32d8b35ef515f7a3d1158c03025648ff) ([#3996](https://github.com/yt-dlp/yt-dlp/issues/3996)) by [krichbanana](https://github.com/krichbanana), [pzhlkj6612](https://github.com/pzhlkj6612)
- **litv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e079ffbda66de150c0a9ebef05e89f61bb4d5f76) ([#11071](https://github.com/yt-dlp/yt-dlp/issues/11071)) by [jiru](https://github.com/jiru)
- **mixchmovie**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/0ec9bfed4d4a52bfb4f8733da1acf0aeeae21e6b) ([#10897](https://github.com/yt-dlp/yt-dlp/issues/10897)) by [Sakura286](https://github.com/Sakura286)
- **patreon**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/1d253b0a27110d174c40faf8fb1c999d099e0cde) ([#11530](https://github.com/yt-dlp/yt-dlp/issues/11530)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
- **pialive**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/d867f99622ef7fba690b08da56c39d739b822bb7) ([#10811](https://github.com/yt-dlp/yt-dlp/issues/10811)) by [ChocoLZS](https://github.com/ChocoLZS)
- **radioradicale**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/70c55cb08f780eab687e881ef42bb5c6007d290b) ([#5607](https://github.com/yt-dlp/yt-dlp/issues/5607)) by [a13ssandr0](https://github.com/a13ssandr0), [pzhlkj6612](https://github.com/pzhlkj6612)
- **reddit**: [Improve error handling](https://github.com/yt-dlp/yt-dlp/commit/7ea2787920cccc6b8ea30791993d114fbd564434) ([#11573](https://github.com/yt-dlp/yt-dlp/issues/11573)) by [bashonly](https://github.com/bashonly)
- **redgifsuser**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/d215fba7edb69d4fa665f43663756fd260b1489f) ([#11531](https://github.com/yt-dlp/yt-dlp/issues/11531)) by [jshumphrey](https://github.com/jshumphrey)
- **rutube**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/e398217aae19bb25f91797bfbe8a3243698d7f45) ([#11480](https://github.com/yt-dlp/yt-dlp/issues/11480)) by [seproDev](https://github.com/seproDev)
- **sonylivseries**: [Add `sort_order` extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/2009cb27e17014787bf63eaa2ada51293d54f22a) ([#11569](https://github.com/yt-dlp/yt-dlp/issues/11569)) by [bashonly](https://github.com/bashonly)
- **soop**: [Fix thumbnail extraction](https://github.com/yt-dlp/yt-dlp/commit/c699bafc5038b59c9afe8c2e69175fb66424c832) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **spankbang**: [Support browser impersonation](https://github.com/yt-dlp/yt-dlp/commit/8388ec256f7753b02488788e3cfa771f6e1db247) ([#11542](https://github.com/yt-dlp/yt-dlp/issues/11542)) by [jshumphrey](https://github.com/jshumphrey)
- **spreaker**
    - [Support episode pages and access keys](https://github.com/yt-dlp/yt-dlp/commit/c39016f66df76d14284c705736ca73db8055d8de) ([#11489](https://github.com/yt-dlp/yt-dlp/issues/11489)) by [julionc](https://github.com/julionc)
    - [Support podcast and feed pages](https://github.com/yt-dlp/yt-dlp/commit/c6737310619022248f5d0fd13872073cac168453) ([#10968](https://github.com/yt-dlp/yt-dlp/issues/10968)) by [subrat-lima](https://github.com/subrat-lima)
- **youtube**
    - [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/637d62a3a9fc723d68632c1af25c30acdadeeb85) ([#11528](https://github.com/yt-dlp/yt-dlp/issues/11528)) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)
    - [Remove broken OAuth support](https://github.com/yt-dlp/yt-dlp/commit/52c0ffe40ad6e8404d93296f575007b05b04c686) ([#11558](https://github.com/yt-dlp/yt-dlp/issues/11558)) by [bashonly](https://github.com/bashonly)
    - tab: [Fix podcasts tab extraction](https://github.com/yt-dlp/yt-dlp/commit/37cd7660eaff397c551ee18d80507702342b0c2b) ([#11567](https://github.com/yt-dlp/yt-dlp/issues/11567)) by [seproDev](https://github.com/seproDev)

#### Misc. changes
- **build**
    - [Bump PyInstaller version pin to `>=6.11.1`](https://github.com/yt-dlp/yt-dlp/commit/f9c8deb4e5887ff5150e911ac0452e645f988044) ([#11507](https://github.com/yt-dlp/yt-dlp/issues/11507)) by [bashonly](https://github.com/bashonly)
    - [Enable attestations for trusted publishing](https://github.com/yt-dlp/yt-dlp/commit/f13df591d4d7ca8e2f31b35c9c91e69ba9e9b013) ([#11420](https://github.com/yt-dlp/yt-dlp/issues/11420)) by [bashonly](https://github.com/bashonly)
    - [Pin `websockets` version to >=13.0,<14](https://github.com/yt-dlp/yt-dlp/commit/240a7d43c8a67ffb86d44dc276805aa43c358dcc) ([#11488](https://github.com/yt-dlp/yt-dlp/issues/11488)) by [bashonly](https://github.com/bashonly)
- **cleanup**
    - [Deprecate more compat functions](https://github.com/yt-dlp/yt-dlp/commit/f95a92b3d0169a784ee15a138fbe09d82b2754a1) ([#11439](https://github.com/yt-dlp/yt-dlp/issues/11439)) by [seproDev](https://github.com/seproDev)
    - [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/10fc719bc7f1eef469389c5219102266ef411f29) ([#11566](https://github.com/yt-dlp/yt-dlp/issues/11566)) by [doe1080](https://github.com/doe1080)
    - Miscellaneous: [da252d9](https://github.com/yt-dlp/yt-dlp/commit/da252d9d322af3e2178ac5eae324809502a0a862) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [seproDev](https://github.com/seproDev)

### 2024.11.04

#### Important changes

@@ -1867,9 +1867,6 @@ The following extractors use this feature:

#### bilibili
* `prefer_multi_flv`: Prefer extracting flv formats over mp4 for older videos that still provide legacy formats

#### digitalconcerthall
* `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats

#### sonylivseries
* `sort_order`: Episode sort order for series extraction - one of `asc` (ascending, oldest first) or `desc` (descending, newest first). Default is `asc`
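
For illustration, extractor arguments like these correspond to the `--extractor-args "IE_KEY:ARG=VALUE"` CLI flag and to the `extractor_args` option of the embedding API; a minimal sketch, assuming a valid SonyLIV series URL (the one below is a placeholder):

    # Roughly equivalent to: yt-dlp --extractor-args "sonylivseries:sort_order=desc" <series URL>
    import yt_dlp

    ydl_opts = {
        # Each extractor-arg value is passed as a list of strings
        'extractor_args': {'sonylivseries': {'sort_order': ['desc']}},
    }
    with yt_dlp.YoutubeDL(ydl_opts) as ydl:
        ydl.download(['https://www.sonyliv.com/shows/placeholder-series'])  # placeholder URL
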
@@ -129,6 +129,8 @@
- **Bandcamp:album**
- **Bandcamp:user**
- **Bandcamp:weekly**
- **Bandlab**
- **BandlabPlaylist**
- **BannedVideo**
- **bbc**: [*bbc*](## "netrc machine") BBC
- **bbc.co.uk**: [*bbc*](## "netrc machine") BBC iPlayer

@@ -484,6 +486,7 @@
- **Gab**
- **GabTV**
- **Gaia**: [*gaia*](## "netrc machine")
- **GameDevTVDashboard**: [*gamedevtv*](## "netrc machine")
- **GameJolt**
- **GameJoltCommunity**
- **GameJoltGame**

@@ -651,6 +654,8 @@
- **Karaoketv**
- **Katsomo**: (**Currently broken**)
- **KelbyOne**: (**Currently broken**)
- **Kenh14Playlist**
- **Kenh14Video**
- **Ketnet**
- **khanacademy**
- **khanacademy:unit**

@@ -784,10 +789,6 @@
- **MicrosoftLearnSession**
- **MicrosoftMedius**
- **microsoftstream**: Microsoft Stream
- **mildom**: Record ongoing live by specific user in Mildom
- **mildom:clip**: Clip in Mildom
- **mildom:user:vod**: Download all VODs from specific user in Mildom
- **mildom:vod**: VOD in Mildom
- **minds**
- **minds:channel**
- **minds:group**

@@ -798,6 +799,7 @@
- **MiTele**: mitele.es
- **mixch**
- **mixch:archive**
- **mixch:movie**
- **mixcloud**
- **mixcloud:playlist**
- **mixcloud:user**

@@ -1060,8 +1062,8 @@
- **PhilharmonieDeParis**: Philharmonie de Paris
- **phoenix.de**
- **Photobucket**
- **PiaLive**
- **Piapro**: [*piapro*](## "netrc machine")
- **PIAULIZAPortal**: ulizaportal.jp - PIA LIVE STREAM
- **Picarto**
- **PicartoVod**
- **Piksel**

@@ -1088,8 +1090,6 @@
- **PodbayFMChannel**
- **Podchaser**
- **podomatic**: (**Currently broken**)
- **Pokemon**
- **PokemonWatch**
- **PokerGo**: [*pokergo*](## "netrc machine")
- **PokerGoCollection**: [*pokergo*](## "netrc machine")
- **PolsatGo**

@@ -1160,6 +1160,7 @@
- **RadioJavan**: (**Currently broken**)
- **radiokapital**
- **radiokapital:show**
- **RadioRadicale**
- **RadioZetPodcast**
- **radlive**
- **radlive:channel**

@@ -1367,9 +1368,7 @@
- **spotify**: Spotify episodes (**Currently broken**)
- **spotify:show**: Spotify shows (**Currently broken**)
- **Spreaker**
- **SpreakerPage**
- **SpreakerShow**
- **SpreakerShowPage**
- **SpringboardPlatform**
- **Sprout**
- **SproutVideo**

@@ -1570,6 +1569,8 @@
- **UFCTV**: [*ufctv*](## "netrc machine")
- **ukcolumn**: (**Currently broken**)
- **UKTVPlay**
- **UlizaPlayer**
- **UlizaPortal**: ulizaportal.jp
- **umg:de**: Universal Music Deutschland (**Currently broken**)
- **Unistra**
- **Unity**: (**Currently broken**)

@@ -1587,8 +1588,6 @@
- **Varzesh3**: (**Currently broken**)
- **Vbox7**
- **Veo**
- **Veoh**
- **veoh:user**
- **Vesti**: Вести.Ru (**Currently broken**)
- **Vevo**
- **VevoPlaylist**

@@ -3541,7 +3541,8 @@ class YoutubeDL:
'writing DASH m4a. Only some players support this container',
FFmpegFixupM4aPP)
ffmpeg_fixup(downloader == 'hlsnative' and not self.params.get('hls_use_mpegts')
or info_dict.get('is_live') and self.params.get('hls_use_mpegts') is None,
or info_dict.get('is_live') and self.params.get('hls_use_mpegts') is None
or downloader == 'niconico_live',
'Possible MPEG-TS in MP4 container or malformed AAC timestamps',
FFmpegFixupM3u8PP)
ffmpeg_fixup(downloader == 'dashsegments'

@@ -1,12 +1,22 @@
import contextlib
import json
import math
import threading
import time

from . import get_suitable_downloader
from .common import FileDownloader
from .external import FFmpegFD
from ..downloader.fragment import FragmentFD
from ..networking import Request
from ..utils import DownloadError, str_or_none, try_get
from ..networking.exceptions import network_exceptions
from ..utils import (
DownloadError,
RetryManager,
str_or_none,
traverse_obj,
urljoin,
)


class NiconicoDmcFD(FileDownloader):

@@ -56,34 +66,36 @@ class NiconicoDmcFD(FileDownloader):
return success


class NiconicoLiveFD(FileDownloader):
""" Downloads niconico live without being stopped """
class NiconicoLiveFD(FragmentFD):
""" Downloads niconico live/timeshift VOD """

def real_download(self, filename, info_dict):
video_id = info_dict['video_id']
ws_url = info_dict['url']
ws_extractor = info_dict['ws']
ws_origin_host = info_dict['origin']
live_quality = info_dict.get('live_quality', 'high')
live_latency = info_dict.get('live_latency', 'high')
dl = FFmpegFD(self.ydl, self.params or {})
_PER_FRAGMENT_DOWNLOAD_RATIO = 0.1
_WEBSOCKET_RECONNECT_DELAY = 10

new_info_dict = info_dict.copy()
new_info_dict.update({
'protocol': 'm3u8',
})
@contextlib.contextmanager
def _ws_context(self, info_dict):
""" Hold a WebSocket object and release it when leaving """

def communicate_ws(reconnect):
if reconnect:
ws = self.ydl.urlopen(Request(ws_url, headers={'Origin': f'https://{ws_origin_host}'}))
video_id = info_dict['id']
format_id = info_dict['format_id']
live_latency = info_dict['downloader_options']['live_latency']
ws_url = info_dict['downloader_options']['ws_url']

self.ws = None

self.m3u8_lock = threading.Event()
self.m3u8_url = None

def communicate_ws():
self.ws = self.ydl.urlopen(Request(ws_url, headers=info_dict.get('http_headers')))
if self.ydl.params.get('verbose', False):
self.to_screen('[debug] Sending startWatching request')
ws.send(json.dumps({
self.write_debug('Sending HLS server request')
self.ws.send(json.dumps({
'type': 'startWatching',
'data': {
'stream': {
'quality': live_quality,
'protocol': 'hls+fmp4',
'quality': format_id,
'protocol': 'hls',
'latency': live_latency,
'chasePlay': False,
},

@@ -91,50 +103,147 @@ class NiconicoLiveFD(FileDownloader):
'protocol': 'webSocket',
'commentable': True,
},
'reconnect': True,
},
}))
else:
ws = ws_extractor
with ws:
with self.ws:
while True:
recv = ws.recv()
recv = self.ws.recv()
if not recv:
continue
data = json.loads(recv)
if not data or not isinstance(data, dict):
if not isinstance(data, dict):
continue
if data.get('type') == 'ping':
# pong back
ws.send(r'{"type":"pong"}')
ws.send(r'{"type":"keepSeat"}')
self.ws.send(r'{"type":"pong"}')
self.ws.send(r'{"type":"keepSeat"}')
elif data.get('type') == 'stream':
self.m3u8_url = data['data']['uri']
self.m3u8_lock.set()
elif data.get('type') == 'disconnect':
self.write_debug(data)
return True
return
elif data.get('type') == 'error':
self.write_debug(data)
message = try_get(data, lambda x: x['body']['code'], str) or recv
return DownloadError(message)
message = traverse_obj(data, ('data', 'code')) or recv
raise DownloadError(message)
elif self.ydl.params.get('verbose', False):
if len(recv) > 100:
recv = recv[:100] + '...'
self.to_screen(f'[debug] Server said: {recv}')
self.write_debug(f'Server said: {recv}')

stopped = threading.Event()

def ws_main():
reconnect = False
while True:
while not stopped.is_set():
try:
ret = communicate_ws(reconnect)
if ret is True:
return
except BaseException as e:
self.to_screen('[{}] {}: Connection error occured, reconnecting after 10 seconds: {}'.format('niconico:live', video_id, str_or_none(e)))
time.sleep(10)
continue
finally:
reconnect = True
communicate_ws()
break  # Disconnected
except BaseException as e:  # Including TransportError
if stopped.is_set():
break

self.m3u8_lock.clear()  # m3u8 url may be changed

self.to_screen('[{}] {}: Connection error occured, reconnecting after {} seconds: {}'.format(
'niconico:live', video_id, self._WEBSOCKET_RECONNECT_DELAY, str_or_none(e)))
time.sleep(self._WEBSOCKET_RECONNECT_DELAY)

self.m3u8_lock.set()  # Release possible locks

thread = threading.Thread(target=ws_main, daemon=True)
thread.start()

return dl.download(filename, new_info_dict)
try:
yield self
finally:
stopped.set()
if self.ws:
self.ws.close()
thread.join()

def _master_m3u8_url(self):
""" Get the refreshed manifest url after WebSocket reconnection to prevent HTTP 403 """

self.m3u8_lock.wait()
return self.m3u8_url

def real_download(self, filename, info_dict):
with self._ws_context(info_dict) as ws_context:
# live
if info_dict.get('is_live'):
info_dict = info_dict.copy()
info_dict['protocol'] = 'm3u8'
return FFmpegFD(self.ydl, self.params or {}).download(filename, info_dict)

# timeshift VOD
from ..extractor.niconico import NiconicoIE
ie = NiconicoIE(self.ydl)

video_id = info_dict['id']

# Get video info
total_duration = 0
fragment_duration = 0
for line in ie._download_webpage(info_dict['url'], video_id, note='Downloading m3u8').splitlines():
if '#STREAM-DURATION' in line:
total_duration = int(float(line.split(':')[1]))
if '#EXT-X-TARGETDURATION' in line:
fragment_duration = int(line.split(':')[1])
if not (total_duration and fragment_duration):
raise DownloadError('Unable to get required video info')

ctx = {
'filename': filename,
'total_frags': math.ceil(total_duration / fragment_duration),
}

self._prepare_and_start_frag_download(ctx, info_dict)

downloaded_duration = ctx['fragment_index'] * fragment_duration
while True:
if downloaded_duration > total_duration:
break

retry_manager = RetryManager(self.params.get('fragment_retries'), self.report_retry)
for retry in retry_manager:
try:
# Refresh master m3u8 (if possible) to get the new URL of the previously-chose format
media_m3u8_url = ie._extract_m3u8_formats(
ws_context._master_m3u8_url(), video_id, note=False,
query={'start': downloaded_duration}, live=False)[0]['url']

# Get all fragments
media_m3u8 = ie._download_webpage(
media_m3u8_url, video_id, note=False, errnote='Unable to download media m3u8')
fragment_urls = traverse_obj(media_m3u8.splitlines(), (
lambda _, v: not v.startswith('#'), {lambda url: urljoin(media_m3u8_url, url)}))

with self.DurationLimiter(len(fragment_urls) * fragment_duration * self._PER_FRAGMENT_DOWNLOAD_RATIO):
for fragment_url in fragment_urls:
success = self._download_fragment(ctx, fragment_url, info_dict)
if not success:
return False
self._append_fragment(ctx, self._read_fragment(ctx))
downloaded_duration += fragment_duration

except (DownloadError, *network_exceptions) as err:
retry.error = err
continue

if retry_manager.error:
return False

return self._finish_frag_download(ctx, info_dict)

class DurationLimiter:
def __init__(self, target):
self.target = target

def __enter__(self):
self.start = time.time()

def __exit__(self, *exc):
remaining = self.target - (time.time() - self.start)
if remaining > 0:
time.sleep(remaining)

@@ -1,7 +1,10 @@
import time

from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
jwt_decode_hs256,
parse_codecs,
try_get,
url_or_none,

@@ -13,9 +16,6 @@ from ..utils.traversal import traverse_obj
class DigitalConcertHallIE(InfoExtractor):
IE_DESC = 'DigitalConcertHall extractor'
_VALID_URL = r'https?://(?:www\.)?digitalconcerthall\.com/(?P<language>[a-z]+)/(?P<type>film|concert|work)/(?P<id>[0-9]+)-?(?P<part>[0-9]+)?'
_OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
_USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
_ACCESS_TOKEN = None
_NETRC_MACHINE = 'digitalconcerthall'
_TESTS = [{
'note': 'Playlist with only one video',

@@ -69,59 +69,157 @@ class DigitalConcertHallIE(InfoExtractor):
'params': {'skip_download': 'm3u8'},
'playlist_count': 1,
}]
_LOGIN_HINT = ('Use --username token --password ACCESS_TOKEN where ACCESS_TOKEN '
'is the "access_token_production" from your browser local storage')
_REFRESH_HINT = 'or else use a "refresh_token" with --username refresh --password REFRESH_TOKEN'
_OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
_CLIENT_ID = 'dch.webapp'
_CLIENT_SECRET = '2ySLN+2Fwb'
_USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
_OAUTH_HEADERS = {
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
'Origin': 'https://www.digitalconcerthall.com',
'Referer': 'https://www.digitalconcerthall.com/',
'User-Agent': _USER_AGENT,
}
_access_token = None
_access_token_expiry = 0
_refresh_token = None

def _perform_login(self, username, password):
login_token = self._download_json(
self._OAUTH_URL,
None, 'Obtaining token', errnote='Unable to obtain token', data=urlencode_postdata({
@property
def _access_token_is_expired(self):
return self._access_token_expiry - 30 <= int(time.time())

def _set_access_token(self, value):
self._access_token = value
self._access_token_expiry = traverse_obj(value, ({jwt_decode_hs256}, 'exp', {int})) or 0

def _cache_tokens(self, /):
self.cache.store(self._NETRC_MACHINE, 'tokens', {
'access_token': self._access_token,
'refresh_token': self._refresh_token,
})

def _fetch_new_tokens(self, invalidate=False):
if invalidate:
self.report_warning('Access token has been invalidated')
self._set_access_token(None)

if not self._access_token_is_expired:
return

if not self._refresh_token:
self._set_access_token(None)
self._cache_tokens()
raise ExtractorError(
'Access token has expired or been invalidated. '
'Get a new "access_token_production" value from your browser '
f'and try again, {self._REFRESH_HINT}', expected=True)

# If we only have a refresh token, we need a temporary "initial token" for the refresh flow
bearer_token = self._access_token or self._download_json(
self._OAUTH_URL, None, 'Obtaining initial token', 'Unable to obtain initial token',
data=urlencode_postdata({
'affiliate': 'none',
'grant_type': 'device',
'device_vendor': 'unknown',
# device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio
'device_model': 'unknown' if self._configuration_arg('prefer_combined_hls') else 'Safari',
'app_id': 'dch.webapp',
# device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio,
# but this is no longer effective since actual login is not possible anymore
'device_model': 'unknown',
'app_id': self._CLIENT_ID,
'app_distributor': 'berlinphil',
'app_version': '1.84.0',
'client_secret': '2ySLN+2Fwb',
}), headers={
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
'User-Agent': self._USER_AGENT,
})['access_token']
'app_version': '1.95.0',
'client_secret': self._CLIENT_SECRET,
}), headers=self._OAUTH_HEADERS)['access_token']

try:
login_response = self._download_json(
self._OAUTH_URL,
None, note='Logging in', errnote='Unable to login', data=urlencode_postdata({
'grant_type': 'password',
'username': username,
'password': password,
response = self._download_json(
self._OAUTH_URL, None, 'Refreshing token', 'Unable to refresh token',
data=urlencode_postdata({
'grant_type': 'refresh_token',
'refresh_token': self._refresh_token,
'client_id': self._CLIENT_ID,
'client_secret': self._CLIENT_SECRET,
}), headers={
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
'Referer': 'https://www.digitalconcerthall.com',
'Authorization': f'Bearer {login_token}',
'User-Agent': self._USER_AGENT,
**self._OAUTH_HEADERS,
'Authorization': f'Bearer {bearer_token}',
})
except ExtractorError as error:
if isinstance(error.cause, HTTPError) and error.cause.status == 401:
raise ExtractorError('Invalid username or password', expected=True)
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
self._set_access_token(None)
self._refresh_token = None
self._cache_tokens()
raise ExtractorError('Your tokens have been invalidated', expected=True)
raise
self._ACCESS_TOKEN = login_response['access_token']

self._set_access_token(response['access_token'])
if refresh_token := traverse_obj(response, ('refresh_token', {str})):
self.write_debug('New refresh token granted')
self._refresh_token = refresh_token
self._cache_tokens()

def _perform_login(self, username, password):
self.report_login()

if username == 'refresh':
self._refresh_token = password
self._fetch_new_tokens()

if username == 'token':
if not traverse_obj(password, {jwt_decode_hs256}):
raise ExtractorError(
f'The access token passed to yt-dlp is not valid. {self._LOGIN_HINT}', expected=True)
self._set_access_token(password)
self._cache_tokens()

if username in ('refresh', 'token'):
if self.get_param('cachedir') is not False:
token_type = 'access' if username == 'token' else 'refresh'
self.to_screen(f'Your {token_type} token has been cached to disk. To use the cached '
'token next time, pass --username cache along with any password')
return

if username != 'cache':
raise ExtractorError(
'Login with username and password is no longer supported '
f'for this site. {self._LOGIN_HINT}, {self._REFRESH_HINT}', expected=True)

# Try cached access_token
cached_tokens = self.cache.load(self._NETRC_MACHINE, 'tokens', default={})
self._set_access_token(cached_tokens.get('access_token'))
self._refresh_token = cached_tokens.get('refresh_token')
if not self._access_token_is_expired:
return

# Try cached refresh_token
self._fetch_new_tokens(invalidate=True)

def _real_initialize(self):
if not self._ACCESS_TOKEN:
self.raise_login_required(method='password')
if not self._access_token:
self.raise_login_required(
'All content on this site is only available for registered users. '
f'{self._LOGIN_HINT}, {self._REFRESH_HINT}', method=None)

def _entries(self, items, language, type_, **kwargs):
for item in items:
video_id = item['id']

for should_retry in (True, False):
self._fetch_new_tokens(invalidate=not should_retry)
try:
stream_info = self._download_json(
self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
'Accept': 'application/json',
'Authorization': f'Bearer {self._ACCESS_TOKEN}',
'Authorization': f'Bearer {self._access_token}',
'Accept-Language': language,
'User-Agent': self._USER_AGENT,
})
break
except ExtractorError as error:
if should_retry and isinstance(error.cause, HTTPError) and error.cause.status == 401:
continue
raise

formats = []
for m3u8_url in traverse_obj(stream_info, ('channel', ..., 'stream', ..., 'url', {url_or_none})):

@@ -157,7 +255,6 @@ class DigitalConcertHallIE(InfoExtractor):
'Accept': 'application/json',
'Accept-Language': language,
'User-Agent': self._USER_AGENT,
'Authorization': f'Bearer {self._ACCESS_TOKEN}',
})
videos = [vid_info] if type_ == 'film' else traverse_obj(vid_info, ('_embedded', ..., ...))
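
As a usage sketch for the token-based login described by `_LOGIN_HINT` and `_REFRESH_HINT` above (the access token and concert URL below are placeholders, not real values):

    # Roughly equivalent to: yt-dlp --username token --password ACCESS_TOKEN <concert URL>
    import yt_dlp

    ydl_opts = {
        'username': 'token',  # or 'refresh' to pass a refresh_token, or 'cache' to reuse cached tokens
        'password': 'ACCESS_TOKEN',  # the "access_token_production" value from browser local storage
    }
    with yt_dlp.YoutubeDL(ydl_opts) as ydl:
        ydl.download(['https://www.digitalconcerthall.com/en/concert/00000'])  # placeholder URL
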
@@ -7,7 +7,6 @@ import time
import urllib.parse

from .common import InfoExtractor, SearchInfoExtractor
from ..networking import Request
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,

@@ -32,12 +31,56 @@ from ..utils import (
)


class NiconicoIE(InfoExtractor):
IE_NAME = 'niconico'
IE_DESC = 'ニコニコ動画'
class NiconicoBaseIE(InfoExtractor):
_NETRC_MACHINE = 'niconico'
_GEO_COUNTRIES = ['JP']
_GEO_BYPASS = False

def _perform_login(self, username, password):
login_ok = True
login_form_strs = {
'mail_tel': username,
'password': password,
}
self._request_webpage(
'https://account.nicovideo.jp/login', None,
note='Acquiring Login session')
page = self._download_webpage(
'https://account.nicovideo.jp/login/redirector?show_button_twitter=1&site=niconico&show_button_facebook=1', None,
note='Logging in', errnote='Unable to log in',
data=urlencode_postdata(login_form_strs),
headers={
'Referer': 'https://account.nicovideo.jp/login',
'Content-Type': 'application/x-www-form-urlencoded',
})
if 'oneTimePw' in page:
post_url = self._search_regex(
r'<form[^>]+action=(["\'])(?P<url>.+?)\1', page, 'post url', group='url')
page = self._download_webpage(
urljoin('https://account.nicovideo.jp', post_url), None,
note='Performing MFA', errnote='Unable to complete MFA',
data=urlencode_postdata({
'otp': self._get_tfa_info('6 digits code'),
}), headers={
'Content-Type': 'application/x-www-form-urlencoded',
})
if 'oneTimePw' in page or 'formError' in page:
err_msg = self._html_search_regex(
r'formError["\']+>(.*?)</div>', page, 'form_error',
default='There\'s an error but the message can\'t be parsed.',
flags=re.DOTALL)
self.report_warning(f'Unable to log in: MFA challenge failed, "{err_msg}"')
return False
login_ok = 'class="notice error"' not in page
if not login_ok:
self.report_warning('Unable to log in: bad username or password')
return login_ok


class NiconicoIE(NiconicoBaseIE):
IE_NAME = 'niconico'
IE_DESC = 'ニコニコ動画'

_TESTS = [{
'url': 'http://www.nicovideo.jp/watch/sm22312215',
'info_dict': {

@@ -176,7 +219,6 @@ class NiconicoIE(InfoExtractor):
}]

_VALID_URL = r'https?://(?:(?:www\.|secure\.|sp\.)?nicovideo\.jp/watch|nico\.ms)/(?P<id>(?:[a-z]{2})?[0-9]+)'
_NETRC_MACHINE = 'niconico'
_API_HEADERS = {
'X-Frontend-ID': '6',
'X-Frontend-Version': '0',

@@ -185,46 +227,6 @@ class NiconicoIE(InfoExtractor):
'Origin': 'https://www.nicovideo.jp',
}

def _perform_login(self, username, password):
login_ok = True
login_form_strs = {
'mail_tel': username,
'password': password,
}
self._request_webpage(
'https://account.nicovideo.jp/login', None,
note='Acquiring Login session')
page = self._download_webpage(
'https://account.nicovideo.jp/login/redirector?show_button_twitter=1&site=niconico&show_button_facebook=1', None,
note='Logging in', errnote='Unable to log in',
data=urlencode_postdata(login_form_strs),
headers={
'Referer': 'https://account.nicovideo.jp/login',
'Content-Type': 'application/x-www-form-urlencoded',
})
if 'oneTimePw' in page:
post_url = self._search_regex(
r'<form[^>]+action=(["\'])(?P<url>.+?)\1', page, 'post url', group='url')
page = self._download_webpage(
urljoin('https://account.nicovideo.jp', post_url), None,
note='Performing MFA', errnote='Unable to complete MFA',
data=urlencode_postdata({
'otp': self._get_tfa_info('6 digits code'),
}), headers={
'Content-Type': 'application/x-www-form-urlencoded',
})
if 'oneTimePw' in page or 'formError' in page:
err_msg = self._html_search_regex(
r'formError["\']+>(.*?)</div>', page, 'form_error',
default='There\'s an error but the message can\'t be parsed.',
flags=re.DOTALL)
self.report_warning(f'Unable to log in: MFA challenge failed, "{err_msg}"')
return False
login_ok = 'class="notice error"' not in page
if not login_ok:
self.report_warning('Unable to log in: bad username or password')
return login_ok

def _get_heartbeat_info(self, info_dict):
video_id, video_src_id, audio_src_id = info_dict['url'].split(':')[1].split('/')
dmc_protocol = info_dict['expected_protocol']

@@ -906,7 +908,7 @@ class NiconicoUserIE(InfoExtractor):
return self.playlist_result(self._entries(list_id), list_id)


class NiconicoLiveIE(InfoExtractor):
class NiconicoLiveIE(NiconicoBaseIE):
IE_NAME = 'niconico:live'
IE_DESC = 'ニコニコ生放送'
_VALID_URL = r'https?://(?:sp\.)?live2?\.nicovideo\.jp/(?:watch|gate)/(?P<id>lv\d+)'

@@ -916,17 +918,30 @@ class NiconicoLiveIE(InfoExtractor):
'info_dict': {
'id': 'lv339533123',
'title': '激辛ペヤング食べます\u202a( ;ᯅ; )\u202c(歌枠オーディション参加中)',
'view_count': 1526,
'comment_count': 1772,
'view_count': int,
'comment_count': int,
'description': '初めましてもかって言います❕\nのんびり自由に適当に暮らしてます',
'uploader': 'もか',
'channel': 'ゲストさんのコミュニティ',
'channel_id': 'co5776900',
'channel_url': 'https://com.nicovideo.jp/community/co5776900',
'timestamp': 1670677328,
'is_live': True,
'ext': None,
'live_latency': 'high',
'live_status': 'was_live',
'thumbnail': r're:^https://[\w.-]+/\w+/\w+',
'thumbnails': list,
'upload_date': '20221210',
},
'skip': 'livestream',
'params': {
'skip_download': True,
'ignore_no_formats_error': True,
},
'expected_warnings': [
'The live hasn\'t started yet or already ended.',
'No video formats found!',
'Requested format is not available',
],
}, {
'url': 'https://live2.nicovideo.jp/watch/lv339533123',
'only_matching': True,

@@ -940,36 +955,17 @@ class NiconicoLiveIE(InfoExtractor):

_KNOWN_LATENCY = ('high', 'low')

def _real_extract(self, url):
video_id = self._match_id(url)
webpage, urlh = self._download_webpage_handle(f'https://live.nicovideo.jp/watch/{video_id}', video_id)

embedded_data = self._parse_json(unescapeHTML(self._search_regex(
r'<script\s+id="embedded-data"\s*data-props="(.+?)"', webpage, 'embedded data')), video_id)

ws_url = traverse_obj(embedded_data, ('site', 'relive', 'webSocketUrl'))
if not ws_url:
raise ExtractorError('The live hasn\'t started yet or already ended.', expected=True)
ws_url = update_url_query(ws_url, {
'frontend_id': traverse_obj(embedded_data, ('site', 'frontendId')) or '9',
})

hostname = remove_start(urllib.parse.urlparse(urlh.url).hostname, 'sp.')
latency = try_get(self._configuration_arg('latency'), lambda x: x[0])
if latency not in self._KNOWN_LATENCY:
latency = 'high'

def _yield_formats(self, ws_url, headers, latency, video_id, is_live):
ws = self._request_webpage(
Request(ws_url, headers={'Origin': f'https://{hostname}'}),
video_id=video_id, note='Connecting to WebSocket server')
ws_url, video_id, note='Connecting to WebSocket server', headers=headers)

self.write_debug('[debug] Sending HLS server request')
self.write_debug('Sending HLS server request')
ws.send(json.dumps({
'type': 'startWatching',
'data': {
'stream': {
'quality': 'abr',
'protocol': 'hls+fmp4',
'protocol': 'hls',
'latency': latency,
'chasePlay': False,
},

@@ -977,10 +973,10 @@ class NiconicoLiveIE(InfoExtractor):
'protocol': 'webSocket',
'commentable': True,
},
'reconnect': False,
},
}))

with ws:
while True:
recv = ws.recv()
if not recv:

@@ -993,17 +989,40 @@ class NiconicoLiveIE(InfoExtractor):
qualities = data['data']['availableQualities']
break
elif data.get('type') == 'disconnect':
self.write_debug(recv)
self.write_debug(data)
raise ExtractorError('Disconnected at middle of extraction')
elif data.get('type') == 'error':
self.write_debug(recv)
message = traverse_obj(data, ('body', 'code')) or recv
self.write_debug(data)
message = traverse_obj(data, ('data', 'code')) or recv
raise ExtractorError(message)
elif self.get_param('verbose', False):
if len(recv) > 100:
recv = recv[:100] + '...'
self.write_debug(f'Server said: {recv}')

formats = sorted(self._extract_m3u8_formats(
m3u8_url, video_id, ext='mp4', live=is_live), key=lambda f: f['tbr'], reverse=True)
for fmt, q in zip(formats, qualities[1:]):
fmt.update({
'format_id': q,
'protocol': 'niconico_live',
})
yield fmt

def _real_extract(self, url):
video_id = self._match_id(url)
webpage, urlh = self._download_webpage_handle(f'https://live.nicovideo.jp/watch/{video_id}', video_id)
headers = {'Origin': 'https://' + remove_start(urllib.parse.urlparse(urlh.url).hostname, 'sp.')}

embedded_data = self._parse_json(unescapeHTML(self._search_regex(
r'<script\s+id="embedded-data"\s*data-props="(.+?)"', webpage, 'embedded data')), video_id)

ws_url = traverse_obj(embedded_data, ('site', 'relive', 'webSocketUrl'))
if ws_url:
ws_url = update_url_query(ws_url, {
'frontend_id': traverse_obj(embedded_data, ('site', 'frontendId')) or '9',
})

title = traverse_obj(embedded_data, ('program', 'title')) or self._html_search_meta(
('og:title', 'twitter:title'), webpage, 'live title', fatal=False)

@@ -1028,16 +1047,19 @@ class NiconicoLiveIE(InfoExtractor):
**res,
})

formats = self._extract_m3u8_formats(m3u8_url, video_id, ext='mp4', live=True)
for fmt, q in zip(formats, reversed(qualities[1:])):
fmt.update({
'format_id': q,
'protocol': 'niconico_live',
'ws': ws,
'video_id': video_id,
'live_latency': latency,
'origin': hostname,
})
live_status, availability = self._check_status_and_availability(embedded_data, video_id)

if availability == 'premium_only':
self.raise_login_required('This video requires premium', metadata_available=True)
elif availability == 'subscriber_only':
self.raise_login_required('This video is for members only', metadata_available=True)
elif availability == 'needs_auth':
# PPV or tickets for limited time viewing
self.raise_login_required('This video requires additional steps to watch', metadata_available=True)

latency = try_get(self._configuration_arg('latency'), lambda x: x[0])
if latency not in self._KNOWN_LATENCY:
latency = 'high'

return {
'id': video_id,

@@ -1052,7 +1074,79 @@ class NiconicoLiveIE(InfoExtractor):
}),
'description': clean_html(traverse_obj(embedded_data, ('program', 'description'))),
'timestamp': int_or_none(traverse_obj(embedded_data, ('program', 'openTime'))),
'is_live': True,
'live_status': live_status,
'availability': availability,
'thumbnails': thumbnails,
'formats': formats,
'formats': [*self._yield_formats(
ws_url, headers, latency, video_id, live_status == 'is_live')] if ws_url else None,
'http_headers': headers,
'downloader_options': {
'live_latency': latency,
'ws_url': ws_url,
},
}

def _check_status_and_availability(self, embedded_data, video_id):
live_status = {
'Before': 'is_live',
'Open': 'was_live',
'End': 'was_live',
}.get(traverse_obj(embedded_data, ('programTimeshift', 'publication', 'status', {str})), 'is_live')

if traverse_obj(embedded_data, ('userProgramWatch', 'canWatch', {bool})):
is_member_free = traverse_obj(embedded_data, ('program', 'isMemberFree', {bool}))
is_shown = traverse_obj(embedded_data, ('program', 'trialWatch', 'isShown', {bool}))
self.write_debug(f'.program.isMemberFree: {is_member_free}; .program.trialWatch.isShown: {is_shown}')

if is_member_free is None and is_shown is None:
return live_status, self._availability()

if is_member_free is False:
availability = {'needs_auth': True}
msg = 'Paid content cannot be accessed, the video may be blank.'
else:
availability = {'needs_subscription': True}
msg = 'Restricted content cannot be accessed, a part of the video or the entire video may be blank.'
self.report_warning(msg, video_id)
return live_status, self._availability(**availability)

if traverse_obj(embedded_data, ('userProgramWatch', 'isCountryRestrictionTarget', {bool})):
self.raise_geo_restricted(countries=self._GEO_COUNTRIES, metadata_available=True)
return live_status, self._availability()

rejected_reasons = traverse_obj(embedded_data, ('userProgramWatch', 'rejectedReasons', ..., {str}))
self.write_debug(f'.userProgramWatch.rejectedReasons: {rejected_reasons!r}')

if 'programNotBegun' in rejected_reasons:
self.report_warning('Live has not started', video_id)
live_status = 'is_upcoming'
elif 'timeshiftBeforeOpen' in rejected_reasons:
self.report_warning('Live has ended but timeshift is not yet processed', video_id)
live_status = 'post_live'
elif 'noTimeshiftProgram' in rejected_reasons:
self.report_warning('Timeshift is disabled', video_id)
live_status = 'was_live'
elif any(x in ['timeshiftClosed', 'timeshiftClosedAndNotFollow'] for x in rejected_reasons):
self.report_warning('Timeshift viewing period has ended', video_id)
live_status = 'was_live'

availability = self._availability(needs_premium='notLogin' in rejected_reasons, needs_subscription=any(x in [
'notSocialGroupMember',
'notCommunityMember',
'notChannelMember',
'notCommunityMemberAndNotHaveTimeshiftTicket',
'notChannelMemberAndNotHaveTimeshiftTicket',
] for x in rejected_reasons), needs_auth=any(x in [
'timeshiftTicketExpired',
'notHaveTimeshiftTicket',
'notCommunityMemberAndNotHaveTimeshiftTicket',
'notChannelMemberAndNotHaveTimeshiftTicket',
'notHavePayTicket',
'notActivatedBySerial',
'notHavePayTicketAndNotActivatedBySerial',
'notUseTimeshiftTicket',
'notUseTimeshiftTicketOnOnceTimeshift',
'notUseTimeshiftTicketOnUnlimitedTimeshift',
] for x in rejected_reasons))

return live_status, availability

@@ -259,6 +259,8 @@ class RedditIE(InfoExtractor):
f'https://www.reddit.com/{slug}/.json', video_id, expected_status=403)
except ExtractorError as e:
if isinstance(e.cause, json.JSONDecodeError):
if self._get_cookies('https://www.reddit.com/').get('reddit_session'):
raise ExtractorError('Your IP address is unable to access the Reddit API', expected=True)
self.raise_login_required('Account authentication is required')
raise

@@ -887,7 +887,7 @@ class FFmpegFixupM4aPP(FFmpegFixupPostProcessor):
class FFmpegFixupM3u8PP(FFmpegFixupPostProcessor):
def _needs_fixup(self, info):
yield info['ext'] in ('mp4', 'm4a')
yield info['protocol'].startswith('m3u8')
yield info['protocol'].startswith('m3u8') or info['protocol'] == 'niconico_live'
try:
metadata = self.get_metadata_object(info['filepath'])
except PostProcessingError as e:

@@ -1,8 +1,8 @@
# Autogenerated by devscripts/update-version.py

__version__ = '2024.11.04'
__version__ = '2024.11.18'

RELEASE_GIT_HEAD = '197d0b03b6a3c8fe4fa5ace630eeffec629bf72c'
RELEASE_GIT_HEAD = '7ea2787920cccc6b8ea30791993d114fbd564434'

VARIANT = None

@@ -12,4 +12,4 @@ CHANNEL = 'stable'

ORIGIN = 'yt-dlp/yt-dlp'

_pkg_version = '2024.11.04'
_pkg_version = '2024.11.18'