mirror of
https://github.com/yt-dlp/yt-dlp.git
synced 2024-11-23 15:51:24 +01:00
Compare commits
16 Commits
89a7a3f037
...
72700bb215
72700bb215
f919729538
7ea2787920
f7257588bd
da252d9d32
d2344827c8
302b23a9a3
2ed0b5568e
cd6e20b68e
d2d940ef25
012993fa8d
df63ae4477
cc17902c09
099db78935
3fac07c04a
d4c52a28af
12 CONTRIBUTORS
@@ -695,3 +695,15 @@ KBelmin
 kesor
 MellowKyler
 Wesley107772
+a13ssandr0
+ChocoLZS
+doe1080
+hugovdev
+jshumphrey
+julionc
+manavchaudhary1
+powergold1
+Sakura286
+SamDecrock
+stratus-ss
+subrat-lima
58 Changelog.md
@@ -4,6 +4,64 @@
 # To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
 -->

+### 2024.11.18
+
+#### Important changes
+- **Login with OAuth is no longer supported for YouTube**
+Due to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)
+
+#### Core changes
+- [Catch broken Cryptodome installations](https://github.com/yt-dlp/yt-dlp/commit/b83ca24eb72e1e558b0185bd73975586c0bc0546) ([#11486](https://github.com/yt-dlp/yt-dlp/issues/11486)) by [seproDev](https://github.com/seproDev)
+- **utils**
+    - [Fix `join_nonempty`, add `**kwargs` to `unpack`](https://github.com/yt-dlp/yt-dlp/commit/39d79c9b9cf23411d935910685c40aa1a2fdb409) ([#11559](https://github.com/yt-dlp/yt-dlp/issues/11559)) by [Grub4K](https://github.com/Grub4K)
+    - `subs_list_to_dict`: [Add `lang` default parameter](https://github.com/yt-dlp/yt-dlp/commit/c014fbcddcb4c8f79d914ac5bb526758b540ea33) ([#11508](https://github.com/yt-dlp/yt-dlp/issues/11508)) by [Grub4K](https://github.com/Grub4K)
+
+#### Extractor changes
+- [Allow `ext` override for thumbnails](https://github.com/yt-dlp/yt-dlp/commit/eb64ae7d5def6df2aba74fb703e7f168fb299865) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
+- **adobepass**: [Fix provider requests](https://github.com/yt-dlp/yt-dlp/commit/85fdc66b6e01d19a94b4f39b58e3c0cf23600902) ([#11472](https://github.com/yt-dlp/yt-dlp/issues/11472)) by [bashonly](https://github.com/bashonly)
+- **archive.org**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/f2a4983df7a64c4e93b56f79dbd16a781bd90206) ([#11527](https://github.com/yt-dlp/yt-dlp/issues/11527)) by [jshumphrey](https://github.com/jshumphrey)
+- **bandlab**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/6365e92589e4bc17b8fffb0125a716d144ad2137) ([#11535](https://github.com/yt-dlp/yt-dlp/issues/11535)) by [seproDev](https://github.com/seproDev)
+- **chaturbate**
+    - [Extract from API and support impersonation](https://github.com/yt-dlp/yt-dlp/commit/720b3dc453c342bc2e8df7dbc0acaab4479de46c) ([#11555](https://github.com/yt-dlp/yt-dlp/issues/11555)) by [powergold1](https://github.com/powergold1) (With fixes in [7cecd29](https://github.com/yt-dlp/yt-dlp/commit/7cecd299e4a5ef1f0f044b2fedc26f17e41f15e3) by [seproDev](https://github.com/seproDev))
+    - [Support alternate domains](https://github.com/yt-dlp/yt-dlp/commit/a9f85670d03ab993dc589f21a9ffffcad61392d5) ([#10595](https://github.com/yt-dlp/yt-dlp/issues/10595)) by [manavchaudhary1](https://github.com/manavchaudhary1)
+- **cloudflarestream**: [Avoid extraction via videodelivery.net](https://github.com/yt-dlp/yt-dlp/commit/2db8c2e7d57a1784b06057c48e3e91023720d195) ([#11478](https://github.com/yt-dlp/yt-dlp/issues/11478)) by [hugovdev](https://github.com/hugovdev)
+- **ctvnews**
+    - [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f351440f1dc5b3dfbfc5737b037a869d946056fe) ([#11534](https://github.com/yt-dlp/yt-dlp/issues/11534)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
+    - [Fix playlist ID extraction](https://github.com/yt-dlp/yt-dlp/commit/f9d98509a898737c12977b2e2117277bada2c196) ([#8892](https://github.com/yt-dlp/yt-dlp/issues/8892)) by [qbnu](https://github.com/qbnu)
+- **digitalconcerthall**: [Support login with access/refresh tokens](https://github.com/yt-dlp/yt-dlp/commit/f7257588bdff5f0b0452635a66b253a783c97357) ([#11571](https://github.com/yt-dlp/yt-dlp/issues/11571)) by [bashonly](https://github.com/bashonly)
+- **facebook**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/bacc31b05a04181b63100c481565256b14813a5e) ([#11513](https://github.com/yt-dlp/yt-dlp/issues/11513)) by [bashonly](https://github.com/bashonly)
+- **gamedevtv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8) ([#11368](https://github.com/yt-dlp/yt-dlp/issues/11368)) by [bashonly](https://github.com/bashonly), [stratus-ss](https://github.com/stratus-ss)
+- **goplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6b43a8d84b881d769b480ba6e20ec691e9d1b92d) ([#11466](https://github.com/yt-dlp/yt-dlp/issues/11466)) by [bashonly](https://github.com/bashonly), [SamDecrock](https://github.com/SamDecrock)
+- **kenh14**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/eb15fd5a32d8b35ef515f7a3d1158c03025648ff) ([#3996](https://github.com/yt-dlp/yt-dlp/issues/3996)) by [krichbanana](https://github.com/krichbanana), [pzhlkj6612](https://github.com/pzhlkj6612)
+- **litv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e079ffbda66de150c0a9ebef05e89f61bb4d5f76) ([#11071](https://github.com/yt-dlp/yt-dlp/issues/11071)) by [jiru](https://github.com/jiru)
+- **mixchmovie**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/0ec9bfed4d4a52bfb4f8733da1acf0aeeae21e6b) ([#10897](https://github.com/yt-dlp/yt-dlp/issues/10897)) by [Sakura286](https://github.com/Sakura286)
+- **patreon**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/1d253b0a27110d174c40faf8fb1c999d099e0cde) ([#11530](https://github.com/yt-dlp/yt-dlp/issues/11530)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
+- **pialive**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/d867f99622ef7fba690b08da56c39d739b822bb7) ([#10811](https://github.com/yt-dlp/yt-dlp/issues/10811)) by [ChocoLZS](https://github.com/ChocoLZS)
- **radioradicale**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/70c55cb08f780eab687e881ef42bb5c6007d290b) ([#5607](https://github.com/yt-dlp/yt-dlp/issues/5607)) by [a13ssandr0](https://github.com/a13ssandr0), [pzhlkj6612](https://github.com/pzhlkj6612)
+- **reddit**: [Improve error handling](https://github.com/yt-dlp/yt-dlp/commit/7ea2787920cccc6b8ea30791993d114fbd564434) ([#11573](https://github.com/yt-dlp/yt-dlp/issues/11573)) by [bashonly](https://github.com/bashonly)
+- **redgifsuser**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/d215fba7edb69d4fa665f43663756fd260b1489f) ([#11531](https://github.com/yt-dlp/yt-dlp/issues/11531)) by [jshumphrey](https://github.com/jshumphrey)
+- **rutube**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/e398217aae19bb25f91797bfbe8a3243698d7f45) ([#11480](https://github.com/yt-dlp/yt-dlp/issues/11480)) by [seproDev](https://github.com/seproDev)
+- **sonylivseries**: [Add `sort_order` extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/2009cb27e17014787bf63eaa2ada51293d54f22a) ([#11569](https://github.com/yt-dlp/yt-dlp/issues/11569)) by [bashonly](https://github.com/bashonly)
+- **soop**: [Fix thumbnail extraction](https://github.com/yt-dlp/yt-dlp/commit/c699bafc5038b59c9afe8c2e69175fb66424c832) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
+- **spankbang**: [Support browser impersonation](https://github.com/yt-dlp/yt-dlp/commit/8388ec256f7753b02488788e3cfa771f6e1db247) ([#11542](https://github.com/yt-dlp/yt-dlp/issues/11542)) by [jshumphrey](https://github.com/jshumphrey)
+- **spreaker**
+    - [Support episode pages and access keys](https://github.com/yt-dlp/yt-dlp/commit/c39016f66df76d14284c705736ca73db8055d8de) ([#11489](https://github.com/yt-dlp/yt-dlp/issues/11489)) by [julionc](https://github.com/julionc)
+    - [Support podcast and feed pages](https://github.com/yt-dlp/yt-dlp/commit/c6737310619022248f5d0fd13872073cac168453) ([#10968](https://github.com/yt-dlp/yt-dlp/issues/10968)) by [subrat-lima](https://github.com/subrat-lima)
+- **youtube**
+    - [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/637d62a3a9fc723d68632c1af25c30acdadeeb85) ([#11528](https://github.com/yt-dlp/yt-dlp/issues/11528)) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)
+    - [Remove broken OAuth support](https://github.com/yt-dlp/yt-dlp/commit/52c0ffe40ad6e8404d93296f575007b05b04c686) ([#11558](https://github.com/yt-dlp/yt-dlp/issues/11558)) by [bashonly](https://github.com/bashonly)
+    - tab: [Fix podcasts tab extraction](https://github.com/yt-dlp/yt-dlp/commit/37cd7660eaff397c551ee18d80507702342b0c2b) ([#11567](https://github.com/yt-dlp/yt-dlp/issues/11567)) by [seproDev](https://github.com/seproDev)
+
+#### Misc. changes
+- **build**
+    - [Bump PyInstaller version pin to `>=6.11.1`](https://github.com/yt-dlp/yt-dlp/commit/f9c8deb4e5887ff5150e911ac0452e645f988044) ([#11507](https://github.com/yt-dlp/yt-dlp/issues/11507)) by [bashonly](https://github.com/bashonly)
+    - [Enable attestations for trusted publishing](https://github.com/yt-dlp/yt-dlp/commit/f13df591d4d7ca8e2f31b35c9c91e69ba9e9b013) ([#11420](https://github.com/yt-dlp/yt-dlp/issues/11420)) by [bashonly](https://github.com/bashonly)
+    - [Pin `websockets` version to >=13.0,<14](https://github.com/yt-dlp/yt-dlp/commit/240a7d43c8a67ffb86d44dc276805aa43c358dcc) ([#11488](https://github.com/yt-dlp/yt-dlp/issues/11488)) by [bashonly](https://github.com/bashonly)
+- **cleanup**
+    - [Deprecate more compat functions](https://github.com/yt-dlp/yt-dlp/commit/f95a92b3d0169a784ee15a138fbe09d82b2754a1) ([#11439](https://github.com/yt-dlp/yt-dlp/issues/11439)) by [seproDev](https://github.com/seproDev)
+    - [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/10fc719bc7f1eef469389c5219102266ef411f29) ([#11566](https://github.com/yt-dlp/yt-dlp/issues/11566)) by [doe1080](https://github.com/doe1080)
+    - Miscellaneous: [da252d9](https://github.com/yt-dlp/yt-dlp/commit/da252d9d322af3e2178ac5eae324809502a0a862) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [seproDev](https://github.com/seproDev)
+
 ### 2024.11.04

 #### Important changes
@@ -342,8 +342,9 @@ If you fork the project on GitHub, you can run your fork's [build workflow](.git
                                     extractor plugins; postprocessor plugins can
                                     only be loaded from the default plugin
                                     directories
-    --flat-playlist                 Do not extract the videos of a playlist,
-                                    only list them
+    --flat-playlist                 Do not extract a playlist's URL result
+                                    entries; some entry metadata may be missing
+                                    and downloading may be bypassed
     --no-flat-playlist              Fully extract the videos of a playlist
                                     (default)
     --live-from-start               Download livestreams from the start.
@@ -1866,9 +1867,6 @@ The following extractors use this feature:
 #### bilibili
 * `prefer_multi_flv`: Prefer extracting flv formats over mp4 for older videos that still provide legacy formats

-#### digitalconcerthall
-* `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats
-
 #### sonylivseries
 * `sort_order`: Episode sort order for series extraction - one of `asc` (ascending, oldest first) or `desc` (descending, newest first). Default is `asc`
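The `sort_order` argument above is passed on the command line via `--extractor-args "sonylivseries:sort_order=desc"`. Its only effect is the order in which a series' episodes are emitted; a minimal sketch of that behaviour (`episode_number` is an illustrative field name, not the extractor's internal key):

```python
# `sort_order` only controls emission order of a series' episodes.
episodes = [{'episode_number': n} for n in (3, 1, 2)]

def sorted_episodes(episodes, sort_order='asc'):
    # 'asc' (default): oldest first; 'desc': newest first
    return sorted(episodes, key=lambda e: e['episode_number'],
                  reverse=(sort_order == 'desc'))

assert [e['episode_number'] for e in sorted_episodes(episodes)] == [1, 2, 3]
assert [e['episode_number'] for e in sorted_episodes(episodes, 'desc')] == [3, 2, 1]
```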
@@ -234,5 +234,10 @@
         "when": "57212a5f97ce367590aaa5c3e9a135eead8f81f7",
         "short": "[ie/vimeo] Fix API retries (#11351)",
         "authors": ["bashonly"]
+    },
+    {
+        "action": "add",
+        "when": "52c0ffe40ad6e8404d93296f575007b05b04c686",
+        "short": "[priority] **Login with OAuth is no longer supported for YouTube**\nDue to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)"
     }
 ]
@@ -129,6 +129,8 @@
 - **Bandcamp:album**
 - **Bandcamp:user**
 - **Bandcamp:weekly**
+- **Bandlab**
+- **BandlabPlaylist**
 - **BannedVideo**
 - **bbc**: [*bbc*](## "netrc machine") BBC
 - **bbc.co.uk**: [*bbc*](## "netrc machine") BBC iPlayer
@@ -484,6 +486,7 @@
 - **Gab**
 - **GabTV**
 - **Gaia**: [*gaia*](## "netrc machine")
+- **GameDevTVDashboard**: [*gamedevtv*](## "netrc machine")
 - **GameJolt**
 - **GameJoltCommunity**
 - **GameJoltGame**
@@ -651,6 +654,8 @@
 - **Karaoketv**
 - **Katsomo**: (**Currently broken**)
 - **KelbyOne**: (**Currently broken**)
+- **Kenh14Playlist**
+- **Kenh14Video**
 - **Ketnet**
 - **khanacademy**
 - **khanacademy:unit**
@@ -784,10 +789,6 @@
 - **MicrosoftLearnSession**
 - **MicrosoftMedius**
 - **microsoftstream**: Microsoft Stream
-- **mildom**: Record ongoing live by specific user in Mildom
-- **mildom:clip**: Clip in Mildom
-- **mildom:user:vod**: Download all VODs from specific user in Mildom
-- **mildom:vod**: VOD in Mildom
 - **minds**
 - **minds:channel**
 - **minds:group**
@@ -798,6 +799,7 @@
 - **MiTele**: mitele.es
 - **mixch**
 - **mixch:archive**
+- **mixch:movie**
 - **mixcloud**
 - **mixcloud:playlist**
 - **mixcloud:user**
@@ -1060,8 +1062,8 @@
 - **PhilharmonieDeParis**: Philharmonie de Paris
 - **phoenix.de**
 - **Photobucket**
+- **PiaLive**
 - **Piapro**: [*piapro*](## "netrc machine")
-- **PIAULIZAPortal**: ulizaportal.jp - PIA LIVE STREAM
 - **Picarto**
 - **PicartoVod**
 - **Piksel**
@@ -1088,8 +1090,6 @@
 - **PodbayFMChannel**
 - **Podchaser**
 - **podomatic**: (**Currently broken**)
-- **Pokemon**
-- **PokemonWatch**
 - **PokerGo**: [*pokergo*](## "netrc machine")
 - **PokerGoCollection**: [*pokergo*](## "netrc machine")
 - **PolsatGo**
@@ -1160,6 +1160,7 @@
 - **RadioJavan**: (**Currently broken**)
 - **radiokapital**
 - **radiokapital:show**
+- **RadioRadicale**
 - **RadioZetPodcast**
 - **radlive**
 - **radlive:channel**
@@ -1367,9 +1368,7 @@
 - **spotify**: Spotify episodes (**Currently broken**)
 - **spotify:show**: Spotify shows (**Currently broken**)
 - **Spreaker**
-- **SpreakerPage**
 - **SpreakerShow**
-- **SpreakerShowPage**
 - **SpringboardPlatform**
 - **Sprout**
 - **SproutVideo**
@@ -1570,6 +1569,8 @@
 - **UFCTV**: [*ufctv*](## "netrc machine")
 - **ukcolumn**: (**Currently broken**)
 - **UKTVPlay**
+- **UlizaPlayer**
+- **UlizaPortal**: ulizaportal.jp
 - **umg:de**: Universal Music Deutschland (**Currently broken**)
 - **Unistra**
 - **Unity**: (**Currently broken**)
@@ -1587,8 +1588,6 @@
 - **Varzesh3**: (**Currently broken**)
 - **Vbox7**
 - **Veo**
-- **Veoh**
-- **veoh:user**
 - **Vesti**: Вести.Ru (**Currently broken**)
 - **Vevo**
 - **VevoPlaylist**
51 test/test_mp4parser.py (new file)
@@ -0,0 +1,51 @@
#!/usr/bin/env python

# Allow direct execution
import os
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import io

from yt_dlp.postprocessor.mp4direct import (
    parse_mp4_boxes,
    write_mp4_boxes,
)

TEST_SEQUENCE = [
    ('test', b'123456'),
    ('trak', b''),
    ('helo', b'abcdef'),
    ('1984', b'1q84'),
    ('moov', b''),
    ('keys', b'2022'),
    (None, 'moov'),
    ('topp', b'1991'),
    (None, 'trak'),
]

# on-file representation of the above sequence
TEST_BYTES = b'\x00\x00\x00\x0etest123456\x00\x00\x00Btrak\x00\x00\x00\x0eheloabcdef\x00\x00\x00\x0c19841q84\x00\x00\x00\x14moov\x00\x00\x00\x0ckeys2022\x00\x00\x00\x0ctopp1991'


class TestMP4Parser(unittest.TestCase):
    def test_write_sequence(self):
        with io.BytesIO() as w:
            write_mp4_boxes(w, TEST_SEQUENCE)
            bs = w.getvalue()
        self.assertEqual(TEST_BYTES, bs)

    def test_read_bytes(self):
        with io.BytesIO(TEST_BYTES) as r:
            result = list(parse_mp4_boxes(r))
        self.assertListEqual(TEST_SEQUENCE, result)

    def test_mismatched_box_end(self):
        with io.BytesIO() as w, self.assertRaises(AssertionError):
            write_mp4_boxes(w, [
                ('moov', b''),
                ('trak', b''),
                (None, 'moov'),
                (None, 'trak'),
            ])
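The `TEST_BYTES` blob above follows the ISO BMFF box layout that `parse_mp4_boxes`/`write_mp4_boxes` handle: each box is a 4-byte big-endian size (which counts the 8-byte header itself), a 4-character type code, then the payload. A minimal standalone sketch of the flat case (unlike the module under test, it does not recurse into container boxes such as `trak`, nor emit the `(None, type)` end markers seen in `TEST_SEQUENCE`):

```python
import struct

def parse_boxes(data, offset=0):
    """Yield (type, payload) for consecutive top-level ISO BMFF boxes."""
    while offset < len(data):
        size, = struct.unpack_from('>I', data, offset)  # size includes the 8-byte header
        box_type = data[offset + 4:offset + 8].decode('ascii')
        yield box_type, data[offset + 8:offset + size]
        offset += size

# b'\x00\x00\x00\x0etest123456': size 0x0e == 14 == 8 (header) + 6 (payload)
sample = b'\x00\x00\x00\x0etest123456\x00\x00\x00\x0c19841q84'
assert list(parse_boxes(sample)) == [('test', b'123456'), ('1984', b'1q84')]
```

The `\x00\x00\x00B` prefix of the `trak` box in `TEST_BYTES` (size 66) shows how a container's size spans all of its children, which is why the real parser must track where each container ends.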
@@ -58,6 +58,7 @@ from .postprocessor import (
     FFmpegPostProcessor,
     FFmpegVideoConvertorPP,
     MoveFilesAfterDownloadPP,
+    MP4FixupTimestampPP,
     get_postprocessor,
 )
 from .postprocessor.ffmpeg import resolve_mapping as resolve_recode_mapping
@@ -3548,8 +3549,11 @@ class YoutubeDL:
                 and (info_dict.get('is_live') or info_dict.get('is_dash_periods')),
                 'Possible duplicate MOOV atoms', FFmpegFixupDuplicateMoovPP)

+            is_fmp4 = info_dict.get('protocol') == 'websocket_frag' and info_dict.get('container') == 'fmp4'
             ffmpeg_fixup(downloader == 'web_socket_fragment', 'Malformed timestamps detected', FFmpegFixupTimestampPP)
             ffmpeg_fixup(downloader == 'web_socket_fragment', 'Malformed duration detected', FFmpegFixupDurationPP)
+            ffmpeg_fixup(downloader == 'web_socket_to_file' and is_fmp4, 'Malformed timestamps detected', MP4FixupTimestampPP)
+            ffmpeg_fixup(downloader == 'web_socket_to_file' and is_fmp4, 'Possible duplicate MOOV atoms', FFmpegFixupDuplicateMoovPP)

             fixup()
             try:
@@ -33,7 +33,7 @@ from .mhtml import MhtmlFD
 from .niconico import NiconicoDmcFD, NiconicoLiveFD
 from .rtmp import RtmpFD
 from .rtsp import RtspFD
-from .websocket import WebSocketFragmentFD
+from .websocket import WebSocketFragmentFD, WebSocketToFileFD
 from .youtube_live_chat import YoutubeLiveChatFD

 PROTOCOL_MAP = {
@@ -121,6 +121,9 @@ def _get_suitable_downloader(info_dict, protocol, params, default):
     elif params.get('hls_prefer_native') is False:
         return FFmpegFD

+    if protocol == 'websocket_frag' and info_dict.get('container') == 'fmp4' and external_downloader != 'ffmpeg':
+        return WebSocketToFileFD
+
     return PROTOCOL_MAP.get(protocol, default)
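The hunk above special-cases fMP4-over-websocket before falling back to the protocol map: such streams can be written straight to disk instead of being piped through ffmpeg. The dispatch pattern, sketched with hypothetical stand-in classes (the real ones live in `yt_dlp.downloader`):

```python
# Hypothetical stand-ins for the downloader classes.
class NativeFD: pass
class FFmpegFD: pass
class WebSocketToFileFD: pass

PROTOCOL_MAP = {'websocket_frag': FFmpegFD}

def get_suitable_downloader(protocol, container=None, external_downloader=None):
    # fMP4 frames arriving over a websocket are written straight to file;
    # ffmpeg is only used when the user explicitly requested it
    if protocol == 'websocket_frag' and container == 'fmp4' and external_downloader != 'ffmpeg':
        return WebSocketToFileFD
    return PROTOCOL_MAP.get(protocol, NativeFD)  # map lookup with default fallback

assert get_suitable_downloader('websocket_frag', container='fmp4') is WebSocketToFileFD
assert get_suitable_downloader('websocket_frag') is FFmpegFD
assert get_suitable_downloader('https') is NativeFD
```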
@@ -1,22 +1,16 @@
 import asyncio
 import contextlib
 import os
-import signal
 import threading
+import time

 from .common import FileDownloader
 from .external import FFmpegFD
 from ..dependencies import websockets


-class FFmpegSinkFD(FileDownloader):
-    """ A sink to ffmpeg for downloading fragments in any form """
-
-    def real_download(self, filename, info_dict):
-        info_copy = info_dict.copy()
-        info_copy['url'] = '-'
-
-        async def call_conn(proc, stdin):
+class _WebSocketFD(FileDownloader):
+    async def connect(self, stdin, info_dict):
         try:
             await self.real_connection(stdin, info_dict)
         except OSError:
@@ -25,25 +19,7 @@ class FFmpegSinkFD(FileDownloader):
         with contextlib.suppress(OSError):
             stdin.flush()
             stdin.close()
-        os.kill(os.getpid(), signal.SIGINT)
-
-        class FFmpegStdinFD(FFmpegFD):
-            @classmethod
-            def get_basename(cls):
-                return FFmpegFD.get_basename()
-
-            def on_process_started(self, proc, stdin):
-                thread = threading.Thread(target=asyncio.run, daemon=True, args=(call_conn(proc, stdin), ))
-                thread.start()
-
-        return FFmpegStdinFD(self.ydl, self.params or {}).download(filename, info_copy)
-
-    async def real_connection(self, sink, info_dict):
-        """ Override this in subclasses """
-        raise NotImplementedError('This method must be implemented by subclasses')
-
-
-class WebSocketFragmentFD(FFmpegSinkFD):
     async def real_connection(self, sink, info_dict):
         async with websockets.connect(info_dict['url'], extra_headers=info_dict.get('http_headers', {})) as ws:
             while True:
@@ -51,3 +27,67 @@ class WebSocketFragmentFD(FFmpegSinkFD):
                 if isinstance(recv, str):
                     recv = recv.encode('utf8')
                 sink.write(recv)
+
+
+class WebSocketFragmentFD(_WebSocketFD):
+    """ A sink to ffmpeg for downloading fragments in any form """
+
+    def real_download(self, filename, info_dict):
+        info_copy = info_dict.copy()
+        info_copy['url'] = '-'
+        connect = self.connect
+
+        class FFmpegStdinFD(FFmpegFD):
+            @classmethod
+            def get_basename(cls):
+                return FFmpegFD.get_basename()
+
+            def on_process_started(self, proc, stdin):
+                thread = threading.Thread(target=asyncio.run, daemon=True, args=(connect(stdin, info_dict), ))
+                thread.start()
+
+        return FFmpegStdinFD(self.ydl, self.params or {}).download(filename, info_copy)
+
+
+class WebSocketToFileFD(_WebSocketFD):
+    """ A sink to a file for downloading fragments in any form """
+
+    def real_download(self, filename, info_dict):
+        tempname = self.temp_name(filename)
+        try:
+            with open(tempname, 'wb') as w:
+                started = time.time()
+                status = {
+                    'filename': info_dict.get('_filename'),
+                    'status': 'downloading',
+                    'elapsed': 0,
+                    'downloaded_bytes': 0,
+                }
+                self._hook_progress(status, info_dict)
+
+                thread = threading.Thread(target=asyncio.run, daemon=True, args=(self.connect(w, info_dict), ))
+                thread.start()
+                time_and_size, avg_len = [], 10
+                while thread.is_alive():
+                    time.sleep(0.1)
+
+                    downloaded, curr = w.tell(), time.time()
+                    # taken from ffmpeg attachment
+                    time_and_size.append((downloaded, curr))
+                    time_and_size = time_and_size[-avg_len:]
+                    if len(time_and_size) > 1:
+                        last, early = time_and_size[0], time_and_size[-1]
+                        average_speed = (early[0] - last[0]) / (early[1] - last[1])
+                    else:
+                        average_speed = None
+
+                    status.update({
+                        'downloaded_bytes': downloaded,
+                        'speed': average_speed,
+                        'elapsed': curr - started,
+                    })
+                    self._hook_progress(status, info_dict)
+        except KeyboardInterrupt:
+            pass
+        finally:
+            os.replace(tempname, filename)
+        return True
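The progress loop in `WebSocketToFileFD` keeps only the last 10 `(bytes, timestamp)` samples and derives speed from the window's endpoints, so the reported rate tracks recent throughput rather than the whole download. The same rolling-average logic in isolation (function and variable names are illustrative):

```python
import time

def make_speed_meter(window=10):
    """Rolling average speed over the last `window` (bytes, time) samples."""
    samples = []

    def update(downloaded_bytes, now=None):
        samples.append((downloaded_bytes, time.time() if now is None else now))
        del samples[:-window]                 # keep only the newest `window` samples
        if len(samples) > 1:
            (b0, t0), (b1, t1) = samples[0], samples[-1]
            return (b1 - b0) / (t1 - t0)      # bytes per second across the window
        return None                           # a single sample gives no rate yet

    return update

meter = make_speed_meter()
assert meter(0, now=0.0) is None
assert meter(1000, now=1.0) == 1000.0         # 1000 bytes over 1 s
assert meter(3000, now=2.0) == 1500.0         # 3000 bytes over 2 s
```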
@@ -1,4 +1,3 @@
-
 from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
@@ -3767,7 +3767,7 @@ class InfoExtractor:
         """ Merge subtitle dictionaries, language by language. """
         if target is None:
             target = {}
-        for d in dicts:
+        for d in filter(None, dicts):
             for lang, subs in d.items():
                 target[lang] = cls._merge_subtitle_items(target.get(lang, []), subs)
         return target
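The `filter(None, dicts)` change makes the merge tolerate `None` (or empty) subtitle dicts, which previously failed with an `AttributeError` at `d.items()`. A toy merge showing the effect (simplified; the real method deduplicates via `_merge_subtitle_items`):

```python
def merge_subtitles(*dicts):
    target = {}
    for d in filter(None, dicts):  # drops None and {} before .items() is called
        for lang, subs in d.items():
            target.setdefault(lang, []).extend(subs)
    return target

merged = merge_subtitles(
    {'en': [{'url': 'a.vtt'}]},
    None,  # previously: AttributeError: 'NoneType' object has no attribute 'items'
    {'en': [{'url': 'b.vtt'}], 'de': [{'url': 'c.vtt'}]},
)
assert merged == {'en': [{'url': 'a.vtt'}, {'url': 'b.vtt'}], 'de': [{'url': 'c.vtt'}]}
```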
@@ -176,7 +176,7 @@ class CTVNewsIE(InfoExtractor):
             self._ninecninemedia_url_result(clip_id) for clip_id in
             traverse_obj(webpage, (
                 {find_element(tag='jasper-player-container', html=True)},
-                {extract_attributes}, 'axis-ids', {json.loads}, ..., 'axisId'))
+                {extract_attributes}, 'axis-ids', {json.loads}, ..., 'axisId', {str}))
         ]

         return self.playlist_result(entries, page_id)
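Appending `{str}` to the `traverse_obj` path acts as a type filter: only `axisId` values that are actually strings survive, so a `null` or numeric ID can no longer leak into the playlist entries. A plain-Python equivalent of that step (the sample JSON is made up for illustration):

```python
import json

# attrs_json stands in for the 'axis-ids' attribute scraped from the page
attrs_json = '[{"axisId": "12345"}, {"axisId": null}, {"axisId": 678}]'
axis_ids = [
    item.get('axisId') for item in json.loads(attrs_json)
    if isinstance(item.get('axisId'), str)  # what the {str} step enforces
]
assert axis_ids == ['12345']
```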
@@ -1,7 +1,10 @@
+import time
+
 from .common import InfoExtractor
 from ..networking.exceptions import HTTPError
 from ..utils import (
     ExtractorError,
+    jwt_decode_hs256,
     parse_codecs,
     try_get,
     url_or_none,
@@ -13,9 +16,6 @@ from ..utils.traversal import traverse_obj
 class DigitalConcertHallIE(InfoExtractor):
     IE_DESC = 'DigitalConcertHall extractor'
     _VALID_URL = r'https?://(?:www\.)?digitalconcerthall\.com/(?P<language>[a-z]+)/(?P<type>film|concert|work)/(?P<id>[0-9]+)-?(?P<part>[0-9]+)?'
-    _OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
-    _USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
-    _ACCESS_TOKEN = None
     _NETRC_MACHINE = 'digitalconcerthall'
     _TESTS = [{
         'note': 'Playlist with only one video',
@@ -69,59 +69,157 @@ class DigitalConcertHallIE(InfoExtractor):
         'params': {'skip_download': 'm3u8'},
         'playlist_count': 1,
     }]
+    _LOGIN_HINT = ('Use --username token --password ACCESS_TOKEN where ACCESS_TOKEN '
+                   'is the "access_token_production" from your browser local storage')
+    _REFRESH_HINT = 'or else use a "refresh_token" with --username refresh --password REFRESH_TOKEN'
+    _OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
+    _CLIENT_ID = 'dch.webapp'
+    _CLIENT_SECRET = '2ySLN+2Fwb'
+    _USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
+    _OAUTH_HEADERS = {
+        'Accept': 'application/json',
+        'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
+        'Origin': 'https://www.digitalconcerthall.com',
+        'Referer': 'https://www.digitalconcerthall.com/',
+        'User-Agent': _USER_AGENT,
+    }
+    _access_token = None
+    _access_token_expiry = 0
+    _refresh_token = None

-    def _perform_login(self, username, password):
-        login_token = self._download_json(
-            self._OAUTH_URL,
-            None, 'Obtaining token', errnote='Unable to obtain token', data=urlencode_postdata({
+    @property
+    def _access_token_is_expired(self):
+        return self._access_token_expiry - 30 <= int(time.time())
+
+    def _set_access_token(self, value):
+        self._access_token = value
+        self._access_token_expiry = traverse_obj(value, ({jwt_decode_hs256}, 'exp', {int})) or 0
+
+    def _cache_tokens(self, /):
+        self.cache.store(self._NETRC_MACHINE, 'tokens', {
+            'access_token': self._access_token,
+            'refresh_token': self._refresh_token,
+        })
+
+    def _fetch_new_tokens(self, invalidate=False):
+        if invalidate:
+            self.report_warning('Access token has been invalidated')
+            self._set_access_token(None)
+
+        if not self._access_token_is_expired:
+            return
+
+        if not self._refresh_token:
+            self._set_access_token(None)
+            self._cache_tokens()
+            raise ExtractorError(
+                'Access token has expired or been invalidated. '
+                'Get a new "access_token_production" value from your browser '
+                f'and try again, {self._REFRESH_HINT}', expected=True)
+
+        # If we only have a refresh token, we need a temporary "initial token" for the refresh flow
+        bearer_token = self._access_token or self._download_json(
+            self._OAUTH_URL, None, 'Obtaining initial token', 'Unable to obtain initial token',
+            data=urlencode_postdata({
                 'affiliate': 'none',
                 'grant_type': 'device',
                 'device_vendor': 'unknown',
-                # device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio
-                'device_model': 'unknown' if self._configuration_arg('prefer_combined_hls') else 'Safari',
-                'app_id': 'dch.webapp',
+                # device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio,
+                # but this is no longer effective since actual login is not possible anymore
+                'device_model': 'unknown',
+                'app_id': self._CLIENT_ID,
                 'app_distributor': 'berlinphil',
-                'app_version': '1.84.0',
-                'client_secret': '2ySLN+2Fwb',
-            }), headers={
-                'Accept': 'application/json',
-                'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
-                'User-Agent': self._USER_AGENT,
-            })['access_token']
+                'app_version': '1.95.0',
+                'client_secret': self._CLIENT_SECRET,
+            }), headers=self._OAUTH_HEADERS)['access_token']

         try:
-            login_response = self._download_json(
-                self._OAUTH_URL,
-                None, note='Logging in', errnote='Unable to login', data=urlencode_postdata({
-                    'grant_type': 'password',
-                    'username': username,
-                    'password': password,
+            response = self._download_json(
+                self._OAUTH_URL, None, 'Refreshing token', 'Unable to refresh token',
+                data=urlencode_postdata({
+                    'grant_type': 'refresh_token',
+                    'refresh_token': self._refresh_token,
+                    'client_id': self._CLIENT_ID,
+                    'client_secret': self._CLIENT_SECRET,
                 }), headers={
-                    'Accept': 'application/json',
-                    'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
-                    'Referer': 'https://www.digitalconcerthall.com',
-                    'Authorization': f'Bearer {login_token}',
-                    'User-Agent': self._USER_AGENT,
+                    **self._OAUTH_HEADERS,
+                    'Authorization': f'Bearer {bearer_token}',
                 })
-        except ExtractorError as error:
-            if isinstance(error.cause, HTTPError) and error.cause.status == 401:
-                raise ExtractorError('Invalid username or password', expected=True)
+        except ExtractorError as e:
+            if isinstance(e.cause, HTTPError) and e.cause.status == 401:
+                self._set_access_token(None)
+                self._refresh_token = None
+                self._cache_tokens()
+                raise ExtractorError('Your tokens have been invalidated', expected=True)
             raise
-        self._ACCESS_TOKEN = login_response['access_token']
+
+        self._set_access_token(response['access_token'])
+        if refresh_token := traverse_obj(response, ('refresh_token', {str})):
+            self.write_debug('New refresh token granted')
+            self._refresh_token = refresh_token
+        self._cache_tokens()
+
+    def _perform_login(self, username, password):
+        self.report_login()
+
+        if username == 'refresh':
+            self._refresh_token = password
+            self._fetch_new_tokens()
+
+        if username == 'token':
+            if not traverse_obj(password, {jwt_decode_hs256}):
+                raise ExtractorError(
+                    f'The access token passed to yt-dlp is not valid. {self._LOGIN_HINT}', expected=True)
+            self._set_access_token(password)
+            self._cache_tokens()
+
+        if username in ('refresh', 'token'):
+            if self.get_param('cachedir') is not False:
+                token_type = 'access' if username == 'token' else 'refresh'
+                self.to_screen(f'Your {token_type} token has been cached to disk. To use the cached '
+                               'token next time, pass --username cache along with any password')
+            return
+
+        if username != 'cache':
+            raise ExtractorError(
+                'Login with username and password is no longer supported '
+                f'for this site. {self._LOGIN_HINT}, {self._REFRESH_HINT}', expected=True)
+
+        # Try cached access_token
+        cached_tokens = self.cache.load(self._NETRC_MACHINE, 'tokens', default={})
+        self._set_access_token(cached_tokens.get('access_token'))
+        self._refresh_token = cached_tokens.get('refresh_token')
+        if not self._access_token_is_expired:
+            return
+
+        # Try cached refresh_token
+        self._fetch_new_tokens(invalidate=True)

     def _real_initialize(self):
-        if not self._ACCESS_TOKEN:
-            self.raise_login_required(method='password')
+        if not self._access_token:
+            self.raise_login_required(
+                'All content on this site is only available for registered users. '
+                f'{self._LOGIN_HINT}, {self._REFRESH_HINT}', method=None)

     def _entries(self, items, language, type_, **kwargs):
         for item in items:
             video_id = item['id']
-            stream_info = self._download_json(
-                self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
-                    'Accept': 'application/json',
-                    'Authorization': f'Bearer {self._ACCESS_TOKEN}',
-                    'Accept-Language': language,
-                    'User-Agent': self._USER_AGENT,
-                })
+
+            for should_retry in (True, False):
+                self._fetch_new_tokens(invalidate=not should_retry)
+                try:
+                    stream_info = self._download_json(
+                        self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
+                            'Accept': 'application/json',
+                            'Authorization': f'Bearer {self._access_token}',
+                            'Accept-Language': language,
+                            'User-Agent': self._USER_AGENT,
+                        })
+                    break
+                except ExtractorError as error:
+                    if should_retry and isinstance(error.cause, HTTPError) and error.cause.status == 401:
+                        continue
+                    raise

             formats = []
             for m3u8_url in traverse_obj(stream_info, ('channel', ..., 'stream', ..., 'url', {url_or_none})):
@@ -157,7 +255,6 @@ class DigitalConcertHallIE(InfoExtractor):
                 'Accept': 'application/json',
                 'Accept-Language': language,
                 'User-Agent': self._USER_AGENT,
-                'Authorization': f'Bearer {self._ACCESS_TOKEN}',
             })
         videos = [vid_info] if type_ == 'film' else traverse_obj(vid_info, ('_embedded', ..., ...))
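The new `_access_token_is_expired` property works by reading the `exp` claim out of the JWT access token (yt-dlp does this with its `jwt_decode_hs256` helper). A minimal standalone sketch of the same check, using only the standard library — the helper names here are illustrative, not yt-dlp's API:

```python
import base64
import json
import time


def jwt_exp(token):
    """Return the 'exp' claim from a JWT's payload segment (no signature verification)."""
    payload_b64 = token.split('.')[1]
    # JWT segments are base64url-encoded without padding; restore it before decoding
    payload_b64 += '=' * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload.get('exp') or 0


def is_expired(token, leeway=30):
    """Treat the token as expired `leeway` seconds early, as the extractor does."""
    return jwt_exp(token) - leeway <= int(time.time())
```

Refreshing a little ahead of the real expiry (the 30-second leeway) avoids starting a request with a token that lapses mid-flight.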
@@ -569,7 +569,7 @@ class FacebookIE(InfoExtractor):
             if dash_manifest:
                 formats.extend(self._parse_mpd_formats(
                     compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)),
-                    mpd_url=url_or_none(video.get('dash_manifest_url')) or mpd_url))
+                    mpd_url=url_or_none(vid_data.get('dash_manifest_url')) or mpd_url))

         def process_formats(info):
             # Downloads with browser's User-Agent are rate limited. Working around
@@ -259,6 +259,8 @@ class RedditIE(InfoExtractor):
                 f'https://www.reddit.com/{slug}/.json', video_id, expected_status=403)
         except ExtractorError as e:
             if isinstance(e.cause, json.JSONDecodeError):
+                if self._get_cookies('https://www.reddit.com/').get('reddit_session'):
+                    raise ExtractorError('Your IP address is unable to access the Reddit API', expected=True)
                 self.raise_login_required('Account authentication is required')
             raise
@@ -13,7 +13,10 @@ from ..utils import (
     unified_timestamp,
     url_or_none,
 )
-from ..utils.traversal import traverse_obj
+from ..utils.traversal import (
+    subs_list_to_dict,
+    traverse_obj,
+)


 class RutubeBaseIE(InfoExtractor):
@@ -92,11 +95,11 @@ class RutubeBaseIE(InfoExtractor):
                 hls_url, video_id, 'mp4', fatal=False, m3u8_id='hls')
             formats.extend(fmts)
             self._merge_subtitles(subs, target=subtitles)
-        for caption in traverse_obj(options, ('captions', lambda _, v: url_or_none(v['file']))):
-            subtitles.setdefault(caption.get('code') or 'ru', []).append({
-                'url': caption['file'],
-                'name': caption.get('langTitle'),
-            })
+        self._merge_subtitles(traverse_obj(options, ('captions', ..., {
+            'id': 'code',
+            'url': 'file',
+            'name': ('langTitle', {str}),
+        }, all, {subs_list_to_dict(lang='ru')})), target=subtitles)
         return formats, subtitles

     def _download_and_extract_formats_and_subtitles(self, video_id, query=None):
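For context, `subs_list_to_dict` collapses a list of per-caption dicts into the `{lang: [{'url': ..., 'name': ...}]}` subtitles mapping that extractors return. A rough standalone equivalent of what the new traversal produces for Rutube captions — the function name and exact shape are illustrative, not yt-dlp's API:

```python
def captions_to_subtitles(captions, default_lang='ru'):
    """Group caption dicts into a {lang: [{'url', 'name'}]} subtitles mapping."""
    subtitles = {}
    for caption in captions:
        url = caption.get('file')
        if not url:  # entries without a usable URL are dropped
            continue
        lang = caption.get('code') or default_lang
        entry = {'url': url}
        if caption.get('langTitle'):
            entry['name'] = caption['langTitle']
        subtitles.setdefault(lang, []).append(entry)
    return subtitles
```

Compared with the removed `setdefault`/`append` loop, the traversal form keeps the field mapping declarative and reuses the shared grouping logic.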
@@ -241,7 +241,7 @@ class SoundcloudBaseIE(InfoExtractor):
                 format_urls.add(format_url)
                 formats.append({
                     'format_id': 'download',
-                    'ext': urlhandle_detect_ext(urlh) or 'mp3',
+                    'ext': urlhandle_detect_ext(urlh, default='mp3'),
                     'filesize': int_or_none(urlh.headers.get('Content-Length')),
                     'url': format_url,
                     'quality': 10,
@@ -195,6 +195,7 @@ class TwitCastingIE(InfoExtractor):
                 'source_preference': -10,
                 # TwitCasting simply sends moof atom directly over WS
                 'protocol': 'websocket_frag',
+                'container': 'fmp4',
             })

         infodict = {
@@ -419,7 +419,9 @@ def create_parser():
     general.add_option(
         '--flat-playlist',
         action='store_const', dest='extract_flat', const='in_playlist', default=False,
-        help='Do not extract the videos of a playlist, only list them')
+        help=(
+            'Do not extract a playlist\'s URL result entries; '
+            'some entry metadata may be missing and downloading may be bypassed'))
     general.add_option(
         '--no-flat-playlist',
         action='store_false', dest='extract_flat',
@@ -30,6 +30,7 @@ from .metadataparser import (
 )
 from .modify_chapters import ModifyChaptersPP
 from .movefilesafterdownload import MoveFilesAfterDownloadPP
+from .mp4direct import MP4FixupTimestampPP
 from .sponskrub import SponSkrubPP
 from .sponsorblock import SponsorBlockPP
 from .xattrpp import XAttrMetadataPP
yt_dlp/postprocessor/mp4direct.py (new file, 277 lines)
@@ -0,0 +1,277 @@
+import os
+import struct
+
+from io import BytesIO, RawIOBase
+from math import inf
+from typing import Tuple
+
+from .common import PostProcessor
+from ..utils import prepend_extension
+
+
+class LengthLimiter(RawIOBase):
+    """
+    A bytes IO to limit length to be read.
+    """
+
+    def __init__(self, r: RawIOBase, size: int):
+        self.r = r
+        self.remaining = size
+
+    def read(self, sz: int = None) -> bytes:
+        if self.remaining == 0:
+            return b''
+        if sz in (-1, None):
+            sz = self.remaining
+        sz = min(sz, self.remaining)
+        ret = self.r.read(sz)
+        if ret:
+            self.remaining -= len(ret)
+        return ret
+
+    def readall(self) -> bytes:
+        if self.remaining == 0:
+            return b''
+        ret = self.read(self.remaining)
+        if ret:
+            self.remaining -= len(ret)
+        return ret
+
+    def readable(self) -> bool:
+        return bool(self.remaining)
+
+
+def read_harder(r, size):
+    """
+    Try to read from the stream.
+
+    @params r     byte stream to read
+    @params size  Number of bytes to read in total
+    """
+
+    retry = 0
+    buf = b''
+    while len(buf) < size and retry < 3:
+        ret = r.read(size - len(buf))
+        if not ret:
+            retry += 1
+            continue
+        retry = 0
+        buf += ret
+
+    return buf
+
+
+def pack_be32(value: int) -> bytes:
+    """ Pack value to 4-byte-long bytes in the big-endian byte order """
+    return struct.pack('>I', value)
+
+
+def pack_be64(value: int) -> bytes:
+    """ Pack value to 8-byte-long bytes in the big-endian byte order """
+    return struct.pack('>Q', value)
+
+
+def unpack_be32(value: bytes) -> int:
+    """ Convert 4-byte-long bytes in the big-endian byte order, to an integer value """
+    return struct.unpack('>I', value)[0]
+
+
+def unpack_be64(value: bytes) -> int:
+    """ Convert 8-byte-long bytes in the big-endian byte order, to an integer value """
+    return struct.unpack('>Q', value)[0]
+
+
+def unpack_ver_flags(value: bytes) -> Tuple[int, int]:
+    """
+    Unpack 4-byte-long value into version and flags.
+    @returns (version, flags)
+    """
+
+    ver, up_flag, down_flag = struct.unpack('>BBH', value)
+    return ver, (up_flag << 16 | down_flag)
+# https://github.com/gpac/mp4box.js/blob/4e1bc23724d2603754971abc00c2bd5aede7be60/src/box.js#L13-L40
+MP4_CONTAINER_BOXES = ('moov', 'trak', 'edts', 'mdia', 'minf', 'dinf', 'stbl', 'mvex', 'moof', 'traf', 'vttc', 'tref', 'iref', 'mfra', 'meco', 'hnti', 'hinf', 'strk', 'strd', 'sinf', 'rinf', 'schi', 'trgr', 'udta', 'iprp', 'ipco')
+""" List of boxes that nest other boxes """
+
+
+def parse_mp4_boxes(r: RawIOBase):
+    """
+    Parses an ISO BMFF (which MP4 follows) and yields its boxes as a sequence.
+    This does not interpret the content of these boxes.
+
+    Sequence details:
+        ('atom', b'blablabla'): A box, with content (not container boxes)
+        ('atom', b''):          Possibly a container box (must check MP4_CONTAINER_BOXES) or really an empty box
+        (None, 'atom'):         End of a container box
+
+    Example:                  Path:
+        ('test', b'123456')     /test
+        ('moov', b'')           /moov           (start of container box)
+        ('helo', b'abcdef')     /moov/helo
+        ('1984', b'1q84')       /moov/1984
+        ('trak', b'')           /moov/trak      (start of container box)
+        ('keys', b'2022')       /moov/trak/keys
+        (None , 'trak')         /moov/trak      (end of container box)
+        ('topp', b'1991')       /moov/topp
+        (None , 'moov')         /moov           (end of container box)
+    """
+
+    while True:
+        size_b = read_harder(r, 4)
+        if not size_b:
+            break
+        type_b = r.read(4)
+        # 00 00 00 20 is big-endian
+        box_size = unpack_be32(size_b)
+        type_s = type_b.decode()
+        if type_s in MP4_CONTAINER_BOXES:
+            yield (type_s, b'')
+            yield from parse_mp4_boxes(LengthLimiter(r, box_size - 8))
+            yield (None, type_s)
+            continue
+        # subtract the 8 header bytes already consumed
+        full_body = read_harder(r, box_size - 8)
+        yield (type_s, full_body)
+def write_mp4_boxes(w: RawIOBase, box_iter):
+    """
+    Writes an ISO BMFF file from a given sequence to a given writer.
+    The iterator to be passed must follow parse_mp4_boxes's protocol.
+    """
+
+    stack = [
+        (None, w),  # parent box, IO
+    ]
+    for btype, content in box_iter:
+        if btype in MP4_CONTAINER_BOXES:
+            bio = BytesIO()
+            stack.append((btype, bio))
+            continue
+        elif btype is None:
+            assert stack[-1][0] == content
+            btype, bio = stack.pop()
+            content = bio.getvalue()
+
+        wt = stack[-1][1]
+        wt.write(pack_be32(len(content) + 8))
+        wt.write(btype.encode()[:4])
+        wt.write(content)
+class MP4FixupTimestampPP(PostProcessor):
+
+    @property
+    def available(self):
+        return True
+
+    def analyze_mp4(self, filepath):
+        """ returns (baseMediaDecodeTime offset, sample duration cutoff) """
+        smallest_bmdt, known_sdur = inf, set()
+        with open(filepath, 'rb') as r:
+            for btype, content in parse_mp4_boxes(r):
+                if btype == 'tfdt':
+                    version, _ = unpack_ver_flags(content[0:4])
+                    # baseMediaDecodeTime always comes first
+                    if version == 0:
+                        bmdt = unpack_be32(content[4:8])
+                    else:
+                        bmdt = unpack_be64(content[4:12])
+                    if bmdt == 0:
+                        continue
+                    smallest_bmdt = min(bmdt, smallest_bmdt)
+                elif btype == 'tfhd':
+                    version, flags = unpack_ver_flags(content[0:4])
+                    if not flags & 0x08:
+                        # this box does not contain "sample duration"
+                        continue
+                    # https://github.com/gpac/mp4box.js/blob/4e1bc23724d2603754971abc00c2bd5aede7be60/src/box.js#L203-L209
+                    # https://github.com/gpac/mp4box.js/blob/4e1bc23724d2603754971abc00c2bd5aede7be60/src/parsing/tfhd.js
+                    sdur_start = 8  # header + track id
+                    if flags & 0x01:
+                        sdur_start += 8
+                    if flags & 0x02:
+                        sdur_start += 4
+                    # the next 4 bytes are "sample duration"
+                    sample_dur = unpack_be32(content[sdur_start:sdur_start + 4])
+                    known_sdur.add(sample_dur)
+
+        maximum_sdur = max(known_sdur, default=inf)
+        for multiplier in (0.7, 0.8, 0.9, 0.95):
+            sdur_cutoff = maximum_sdur * multiplier
+            if len(set(x for x in known_sdur if x > sdur_cutoff)) < 3:
+                break
+        else:
+            sdur_cutoff = inf
+
+        return smallest_bmdt, sdur_cutoff
+
+    @staticmethod
+    def transform(r, bmdt_offset, sdur_cutoff):
+        for btype, content in r:
+            if btype == 'tfdt':
+                version, _ = unpack_ver_flags(content[0:4])
+                if version == 0:
+                    bmdt = unpack_be32(content[4:8])
+                else:
+                    bmdt = unpack_be64(content[4:12])
+                if bmdt == 0:
+                    yield (btype, content)
+                    continue
+                # calculate new baseMediaDecodeTime
+                bmdt = max(0, bmdt - bmdt_offset)
+                # pack everything again and insert as a new box
+                if version == 0:
+                    bmdt_b = pack_be32(bmdt)
+                else:
+                    bmdt_b = pack_be64(bmdt)
+                yield ('tfdt', content[0:4] + bmdt_b + content[8 + version * 4:])
+                continue
+            elif btype == 'tfhd':
+                version, flags = unpack_ver_flags(content[0:4])
+                if not flags & 0x08:
+                    yield (btype, content)
+                    continue
+                sdur_start = 8
+                if flags & 0x01:
+                    sdur_start += 8
+                if flags & 0x02:
+                    sdur_start += 4
+                sample_dur = unpack_be32(content[sdur_start:sdur_start + 4])
+                if sample_dur > sdur_cutoff:
+                    sample_dur = 0
+                sd_b = pack_be32(sample_dur)
+                yield ('tfhd', content[:sdur_start] + sd_b + content[sdur_start + 4:])
+                continue
+            yield (btype, content)
+
+    def modify_mp4(self, src, dst, bmdt_offset, sdur_cutoff):
+        with open(src, 'rb') as r, open(dst, 'wb') as w:
+            write_mp4_boxes(w, self.transform(parse_mp4_boxes(r), bmdt_offset, sdur_cutoff))
+
+    def run(self, information):
+        filename = information['filepath']
+        temp_filename = prepend_extension(filename, 'temp')
+
+        self.write_debug('Analyzing MP4')
+        bmdt_offset, sdur_cutoff = self.analyze_mp4(filename)
+        working = inf not in (bmdt_offset, sdur_cutoff)
+        # if any of them are Infinity, there's something wrong
+        # baseMediaDecodeTime = to shift PTS
+        # sample duration = to define duration in each segment
+        self.write_debug(f'baseMediaDecodeTime offset = {bmdt_offset}, sample duration cutoff = {sdur_cutoff}')
+        if bmdt_offset == inf:
+            # safeguard
+            bmdt_offset = 0
+        self.modify_mp4(filename, temp_filename, bmdt_offset, sdur_cutoff)
+        if working:
+            self.to_screen('Duration of the file has been fixed')
+        else:
+            self.report_warning(f'Failed to fix duration of the file. (baseMediaDecodeTime offset = {bmdt_offset}, sample duration cutoff = {sdur_cutoff})')
+
+        os.replace(temp_filename, filename)
+
+        return [], information
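The box layout that `parse_mp4_boxes`/`write_mp4_boxes` above operate on is simple: each box is a 32-bit big-endian total size, a 4-byte type, then the payload (container boxes nest further boxes inside their payload). A self-contained sketch of that framing, independent of the yt-dlp classes — `build_box` and `iter_boxes` are illustrative names, and the 64-bit "largesize" case (size == 1) is ignored here for brevity:

```python
import struct
from io import BytesIO


def build_box(box_type: bytes, payload: bytes) -> bytes:
    """Serialize one MP4 box: 32-bit big-endian total size, 4-byte type, then payload."""
    return struct.pack('>I', 8 + len(payload)) + box_type + payload


def iter_boxes(data: bytes):
    """Yield (type, payload) for each top-level box in `data` (no largesize handling)."""
    r = BytesIO(data)
    while True:
        header = r.read(8)
        if len(header) < 8:
            return
        size, box_type = struct.unpack('>I4s', header)
        yield box_type, r.read(size - 8)


buf = build_box(b'ftyp', b'isom') + build_box(b'free', b'')
boxes = list(iter_boxes(buf))
```

Because the size field counts the 8 header bytes, the parser reads `size - 8` bytes of payload — the same `- 8` adjustment that appears throughout `mp4direct.py`.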
@@ -1,8 +1,8 @@
 # Autogenerated by devscripts/update-version.py

-__version__ = '2024.11.04'
+__version__ = '2024.11.18'

-RELEASE_GIT_HEAD = '197d0b03b6a3c8fe4fa5ace630eeffec629bf72c'
+RELEASE_GIT_HEAD = '7ea2787920cccc6b8ea30791993d114fbd564434'

 VARIANT = None

@@ -12,4 +12,4 @@ CHANNEL = 'stable'

 ORIGIN = 'yt-dlp/yt-dlp'

-_pkg_version = '2024.11.04'
+_pkg_version = '2024.11.18'