Compare commits

...

35 Commits

Author SHA1 Message Date
N/Ame
d58cb6012b
Merge c59ce7d6a6 into f919729538 2024-11-18 16:05:13 +05:30
github-actions[bot]
f919729538 Release 2024.11.18
Created by: bashonly

:ci skip all
2024-11-18 05:45:05 +00:00
bashonly
7ea2787920
[ie/reddit] Improve error handling (#11573)
Authored by: bashonly
2024-11-18 05:36:38 +00:00
bashonly
f7257588bd
[ie/digitalconcerthall] Support login with access/refresh tokens (#11571)
Removes broken support for login with email and password
Removes obsolete `prefer_combined_hls` extractor-arg

Closes #11404, Closes #11436
Authored by: bashonly
2024-11-18 05:16:17 +00:00
grqx_wsl
c59ce7d6a6 [ie/boomplaypodcast] use the base extractor's method to extract title 2024-11-06 01:04:36 +13:00
grqx_wsl
bd857a06a0 fix: do not use classmethod; fix title in the base extractor 2024-11-06 00:10:40 +13:00
grqx_wsl
c58ee488a9 simplify BoomplayGenericPlaylistIE.suitable 2024-11-05 13:31:53 +13:00
grqx_wsl
eacad11a5a code formatting 2024-11-05 00:18:14 +13:00
grqx_wsl
d69a1be537 _urljoin(): let url_or_none sanitize the url;
more classmethods
2024-11-04 23:17:26 +13:00
grqx_wsl
5cbf04763b Merge remote-tracking branch 'upstream/master' into boomplay 2024-11-04 17:57:49 +13:00
grqx_wsl
901e78af62 improve regex 2024-11-04 14:19:52 +13:00
grqx_wsl
9a6f9843c0 use _extract_from_webpage and _extract_embed_urls
- `_extract_playlist_entries` is now a `classmethod`
- case insensitive html tag matching

Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-11-04 14:09:42 +13:00
grqx_wsl
8ef2294282 case insensitive tag matching 2024-11-02 02:18:16 +13:00
grqx_wsl
0e344b806f [ie/boomplaypodcast]extract full description 2024-11-02 02:11:49 +13:00
grqx_termux
60b763c50f _TEST -> _TESTS
actually meant this. working on 2 branches simultanously can lead to
results like this...
2024-10-24 15:49:16 +13:00
grqx_termux
195af478f3 Revert "_TEST -> _TESTS"
This reverts commit aa34d34596.
2024-10-24 15:41:59 +13:00
dirkf
8a1daf41ab [ie/BoomplayEpisode] Make title extraction non-fatal
Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-10-23 23:58:57 +13:00
grqx_wsl
0f9b09842e remove playlist argument from BoomplayBaseIE._extract_page_metadata
Will consider `require_title` later if moving title extraction here
2024-10-23 23:52:44 +13:00
grqx_wsl
1066a94acf Merge remote-tracking branch 'upstream/master' into boomplay 2024-10-23 23:38:20 +13:00
grqx_wsl
aa34d34596 _TEST -> _TESTS 2024-10-23 22:53:24 +13:00
grqx_wsl
0e1851bc34 Merge remote-tracking branch 'refs/remotes/origin/boomplay' into boomplay 2024-10-18 13:35:10 +13:00
grqx_wsl
a886439396 _id -> item_id
Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-10-18 13:34:40 +13:00
N/Ame
38383ea313
use re.sub instead in description extraction
Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-10-18 13:34:07 +13:00
grqx_wsl
28a1163010 consistency: BoomplaySearchPageIE => BoomplaySearchURLIE 2024-10-18 13:28:49 +13:00
grqx_wsl
cee1c763e4 fix the docstring of BoomplayBaseIE.__yield_elements_text_and_html_by_class_and_tag 2024-10-16 23:48:40 +13:00
grqx_wsl
bbb121c2af Correct extractor name: BoomPlay==>Boomplay 2024-10-16 23:47:36 +13:00
grqx_wsl
6beca5eb57 revert 2024-10-15 14:56:37 +13:00
grqx_wsl
82d7e40908 Merge remote-tracking branch 'refs/remotes/origin/boomplay' into boomplay 2024-10-15 14:55:54 +13:00
grqx_wsl
5b1b5bb1b6 updxate _VALID_URL 2024-10-15 14:53:36 +13:00
N/Ame
445531c5a0
Update yt_dlp/extractor/boomplay.py 2024-10-15 13:27:09 +13:00
N/Ame
16d68723dc
Update yt_dlp/extractor/boomplay.py 2024-10-15 13:23:54 +13:00
grqx_wsl
5b962d70de improve metadata extraction, add extractor for search pages
- pass tests&code formatting

Co-authored-by: dirkf <fieldhouse@gmx.net>
Co-authored-by: grqx_wsl <173253225+grqx@users.noreply.github.com>
2024-10-14 23:49:07 +13:00
grqx_wsl
98d9edf823 Merge branch 'master' into boomplay 2024-10-14 16:41:30 +13:00
grqx_wsl
6d2de79b7a BoomPlayGenericPlaylistIE, BoomPlaySearchIE 2024-10-13 23:07:33 +13:00
grqx_wsl
a8769f672b [ie/boomplay] add extractors 2024-10-13 12:46:03 +13:00
9 changed files with 745 additions and 59 deletions

CONTRIBUTORS

@ -695,3 +695,15 @@ KBelmin
kesor
MellowKyler
Wesley107772
a13ssandr0
ChocoLZS
doe1080
hugovdev
jshumphrey
julionc
manavchaudhary1
powergold1
Sakura286
SamDecrock
stratus-ss
subrat-lima

Changelog.md

@ -4,6 +4,64 @@
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->
### 2024.11.18
#### Important changes
- **Login with OAuth is no longer supported for YouTube**
Due to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)
#### Core changes
- [Catch broken Cryptodome installations](https://github.com/yt-dlp/yt-dlp/commit/b83ca24eb72e1e558b0185bd73975586c0bc0546) ([#11486](https://github.com/yt-dlp/yt-dlp/issues/11486)) by [seproDev](https://github.com/seproDev)
- **utils**
- [Fix `join_nonempty`, add `**kwargs` to `unpack`](https://github.com/yt-dlp/yt-dlp/commit/39d79c9b9cf23411d935910685c40aa1a2fdb409) ([#11559](https://github.com/yt-dlp/yt-dlp/issues/11559)) by [Grub4K](https://github.com/Grub4K)
- `subs_list_to_dict`: [Add `lang` default parameter](https://github.com/yt-dlp/yt-dlp/commit/c014fbcddcb4c8f79d914ac5bb526758b540ea33) ([#11508](https://github.com/yt-dlp/yt-dlp/issues/11508)) by [Grub4K](https://github.com/Grub4K)
#### Extractor changes
- [Allow `ext` override for thumbnails](https://github.com/yt-dlp/yt-dlp/commit/eb64ae7d5def6df2aba74fb703e7f168fb299865) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **adobepass**: [Fix provider requests](https://github.com/yt-dlp/yt-dlp/commit/85fdc66b6e01d19a94b4f39b58e3c0cf23600902) ([#11472](https://github.com/yt-dlp/yt-dlp/issues/11472)) by [bashonly](https://github.com/bashonly)
- **archive.org**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/f2a4983df7a64c4e93b56f79dbd16a781bd90206) ([#11527](https://github.com/yt-dlp/yt-dlp/issues/11527)) by [jshumphrey](https://github.com/jshumphrey)
- **bandlab**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/6365e92589e4bc17b8fffb0125a716d144ad2137) ([#11535](https://github.com/yt-dlp/yt-dlp/issues/11535)) by [seproDev](https://github.com/seproDev)
- **chaturbate**
- [Extract from API and support impersonation](https://github.com/yt-dlp/yt-dlp/commit/720b3dc453c342bc2e8df7dbc0acaab4479de46c) ([#11555](https://github.com/yt-dlp/yt-dlp/issues/11555)) by [powergold1](https://github.com/powergold1) (With fixes in [7cecd29](https://github.com/yt-dlp/yt-dlp/commit/7cecd299e4a5ef1f0f044b2fedc26f17e41f15e3) by [seproDev](https://github.com/seproDev))
- [Support alternate domains](https://github.com/yt-dlp/yt-dlp/commit/a9f85670d03ab993dc589f21a9ffffcad61392d5) ([#10595](https://github.com/yt-dlp/yt-dlp/issues/10595)) by [manavchaudhary1](https://github.com/manavchaudhary1)
- **cloudflarestream**: [Avoid extraction via videodelivery.net](https://github.com/yt-dlp/yt-dlp/commit/2db8c2e7d57a1784b06057c48e3e91023720d195) ([#11478](https://github.com/yt-dlp/yt-dlp/issues/11478)) by [hugovdev](https://github.com/hugovdev)
- **ctvnews**
- [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f351440f1dc5b3dfbfc5737b037a869d946056fe) ([#11534](https://github.com/yt-dlp/yt-dlp/issues/11534)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
- [Fix playlist ID extraction](https://github.com/yt-dlp/yt-dlp/commit/f9d98509a898737c12977b2e2117277bada2c196) ([#8892](https://github.com/yt-dlp/yt-dlp/issues/8892)) by [qbnu](https://github.com/qbnu)
- **digitalconcerthall**: [Support login with access/refresh tokens](https://github.com/yt-dlp/yt-dlp/commit/f7257588bdff5f0b0452635a66b253a783c97357) ([#11571](https://github.com/yt-dlp/yt-dlp/issues/11571)) by [bashonly](https://github.com/bashonly)
- **facebook**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/bacc31b05a04181b63100c481565256b14813a5e) ([#11513](https://github.com/yt-dlp/yt-dlp/issues/11513)) by [bashonly](https://github.com/bashonly)
- **gamedevtv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8) ([#11368](https://github.com/yt-dlp/yt-dlp/issues/11368)) by [bashonly](https://github.com/bashonly), [stratus-ss](https://github.com/stratus-ss)
- **goplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6b43a8d84b881d769b480ba6e20ec691e9d1b92d) ([#11466](https://github.com/yt-dlp/yt-dlp/issues/11466)) by [bashonly](https://github.com/bashonly), [SamDecrock](https://github.com/SamDecrock)
- **kenh14**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/eb15fd5a32d8b35ef515f7a3d1158c03025648ff) ([#3996](https://github.com/yt-dlp/yt-dlp/issues/3996)) by [krichbanana](https://github.com/krichbanana), [pzhlkj6612](https://github.com/pzhlkj6612)
- **litv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e079ffbda66de150c0a9ebef05e89f61bb4d5f76) ([#11071](https://github.com/yt-dlp/yt-dlp/issues/11071)) by [jiru](https://github.com/jiru)
- **mixchmovie**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/0ec9bfed4d4a52bfb4f8733da1acf0aeeae21e6b) ([#10897](https://github.com/yt-dlp/yt-dlp/issues/10897)) by [Sakura286](https://github.com/Sakura286)
- **patreon**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/1d253b0a27110d174c40faf8fb1c999d099e0cde) ([#11530](https://github.com/yt-dlp/yt-dlp/issues/11530)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
- **pialive**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/d867f99622ef7fba690b08da56c39d739b822bb7) ([#10811](https://github.com/yt-dlp/yt-dlp/issues/10811)) by [ChocoLZS](https://github.com/ChocoLZS)
- **radioradicale**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/70c55cb08f780eab687e881ef42bb5c6007d290b) ([#5607](https://github.com/yt-dlp/yt-dlp/issues/5607)) by [a13ssandr0](https://github.com/a13ssandr0), [pzhlkj6612](https://github.com/pzhlkj6612)
- **reddit**: [Improve error handling](https://github.com/yt-dlp/yt-dlp/commit/7ea2787920cccc6b8ea30791993d114fbd564434) ([#11573](https://github.com/yt-dlp/yt-dlp/issues/11573)) by [bashonly](https://github.com/bashonly)
- **redgifsuser**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/d215fba7edb69d4fa665f43663756fd260b1489f) ([#11531](https://github.com/yt-dlp/yt-dlp/issues/11531)) by [jshumphrey](https://github.com/jshumphrey)
- **rutube**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/e398217aae19bb25f91797bfbe8a3243698d7f45) ([#11480](https://github.com/yt-dlp/yt-dlp/issues/11480)) by [seproDev](https://github.com/seproDev)
- **sonylivseries**: [Add `sort_order` extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/2009cb27e17014787bf63eaa2ada51293d54f22a) ([#11569](https://github.com/yt-dlp/yt-dlp/issues/11569)) by [bashonly](https://github.com/bashonly)
- **soop**: [Fix thumbnail extraction](https://github.com/yt-dlp/yt-dlp/commit/c699bafc5038b59c9afe8c2e69175fb66424c832) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **spankbang**: [Support browser impersonation](https://github.com/yt-dlp/yt-dlp/commit/8388ec256f7753b02488788e3cfa771f6e1db247) ([#11542](https://github.com/yt-dlp/yt-dlp/issues/11542)) by [jshumphrey](https://github.com/jshumphrey)
- **spreaker**
- [Support episode pages and access keys](https://github.com/yt-dlp/yt-dlp/commit/c39016f66df76d14284c705736ca73db8055d8de) ([#11489](https://github.com/yt-dlp/yt-dlp/issues/11489)) by [julionc](https://github.com/julionc)
- [Support podcast and feed pages](https://github.com/yt-dlp/yt-dlp/commit/c6737310619022248f5d0fd13872073cac168453) ([#10968](https://github.com/yt-dlp/yt-dlp/issues/10968)) by [subrat-lima](https://github.com/subrat-lima)
- **youtube**
- [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/637d62a3a9fc723d68632c1af25c30acdadeeb85) ([#11528](https://github.com/yt-dlp/yt-dlp/issues/11528)) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)
- [Remove broken OAuth support](https://github.com/yt-dlp/yt-dlp/commit/52c0ffe40ad6e8404d93296f575007b05b04c686) ([#11558](https://github.com/yt-dlp/yt-dlp/issues/11558)) by [bashonly](https://github.com/bashonly)
- tab: [Fix podcasts tab extraction](https://github.com/yt-dlp/yt-dlp/commit/37cd7660eaff397c551ee18d80507702342b0c2b) ([#11567](https://github.com/yt-dlp/yt-dlp/issues/11567)) by [seproDev](https://github.com/seproDev)
#### Misc. changes
- **build**
- [Bump PyInstaller version pin to `>=6.11.1`](https://github.com/yt-dlp/yt-dlp/commit/f9c8deb4e5887ff5150e911ac0452e645f988044) ([#11507](https://github.com/yt-dlp/yt-dlp/issues/11507)) by [bashonly](https://github.com/bashonly)
- [Enable attestations for trusted publishing](https://github.com/yt-dlp/yt-dlp/commit/f13df591d4d7ca8e2f31b35c9c91e69ba9e9b013) ([#11420](https://github.com/yt-dlp/yt-dlp/issues/11420)) by [bashonly](https://github.com/bashonly)
- [Pin `websockets` version to >=13.0,<14](https://github.com/yt-dlp/yt-dlp/commit/240a7d43c8a67ffb86d44dc276805aa43c358dcc) ([#11488](https://github.com/yt-dlp/yt-dlp/issues/11488)) by [bashonly](https://github.com/bashonly)
- **cleanup**
- [Deprecate more compat functions](https://github.com/yt-dlp/yt-dlp/commit/f95a92b3d0169a784ee15a138fbe09d82b2754a1) ([#11439](https://github.com/yt-dlp/yt-dlp/issues/11439)) by [seproDev](https://github.com/seproDev)
- [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/10fc719bc7f1eef469389c5219102266ef411f29) ([#11566](https://github.com/yt-dlp/yt-dlp/issues/11566)) by [doe1080](https://github.com/doe1080)
- Miscellaneous: [da252d9](https://github.com/yt-dlp/yt-dlp/commit/da252d9d322af3e2178ac5eae324809502a0a862) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [seproDev](https://github.com/seproDev)
### 2024.11.04
#### Important changes

README.md

@ -1867,9 +1867,6 @@ The following extractors use this feature:
#### bilibili
* `prefer_multi_flv`: Prefer extracting flv formats over mp4 for older videos that still provide legacy formats
#### digitalconcerthall
* `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats
#### sonylivseries
* `sort_order`: Episode sort order for series extraction - one of `asc` (ascending, oldest first) or `desc` (descending, newest first). Default is `asc`
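
For context, `sort_order` is passed like any other extractor-arg on the yt-dlp command line; a minimal usage sketch (the URL is a placeholder):

    yt-dlp --extractor-args "sonylivseries:sort_order=desc" "https://www.sonyliv.com/shows/EXAMPLE"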

supportedsites.md

@ -129,6 +129,8 @@
- **Bandcamp:album**
- **Bandcamp:user**
- **Bandcamp:weekly**
- **Bandlab**
- **BandlabPlaylist**
- **BannedVideo**
- **bbc**: [*bbc*](## "netrc machine") BBC
- **bbc.co.uk**: [*bbc*](## "netrc machine") BBC iPlayer
@ -484,6 +486,7 @@
- **Gab**
- **GabTV**
- **Gaia**: [*gaia*](## "netrc machine")
- **GameDevTVDashboard**: [*gamedevtv*](## "netrc machine")
- **GameJolt**
- **GameJoltCommunity**
- **GameJoltGame**
@ -651,6 +654,8 @@
- **Karaoketv**
- **Katsomo**: (**Currently broken**)
- **KelbyOne**: (**Currently broken**)
- **Kenh14Playlist**
- **Kenh14Video**
- **Ketnet**
- **khanacademy**
- **khanacademy:unit**
@ -784,10 +789,6 @@
- **MicrosoftLearnSession**
- **MicrosoftMedius**
- **microsoftstream**: Microsoft Stream
- **mildom**: Record ongoing live by specific user in Mildom
- **mildom:clip**: Clip in Mildom
- **mildom:user:vod**: Download all VODs from specific user in Mildom
- **mildom:vod**: VOD in Mildom
- **minds**
- **minds:channel**
- **minds:group**
@ -798,6 +799,7 @@
- **MiTele**: mitele.es
- **mixch**
- **mixch:archive**
- **mixch:movie**
- **mixcloud**
- **mixcloud:playlist**
- **mixcloud:user**
@ -1060,8 +1062,8 @@
- **PhilharmonieDeParis**: Philharmonie de Paris
- **phoenix.de**
- **Photobucket**
- **PiaLive**
- **Piapro**: [*piapro*](## "netrc machine")
- **PIAULIZAPortal**: ulizaportal.jp - PIA LIVE STREAM
- **Picarto**
- **PicartoVod**
- **Piksel**
@ -1088,8 +1090,6 @@
- **PodbayFMChannel**
- **Podchaser**
- **podomatic**: (**Currently broken**)
- **Pokemon**
- **PokemonWatch**
- **PokerGo**: [*pokergo*](## "netrc machine")
- **PokerGoCollection**: [*pokergo*](## "netrc machine")
- **PolsatGo**
@ -1160,6 +1160,7 @@
- **RadioJavan**: (**Currently broken**)
- **radiokapital**
- **radiokapital:show**
- **RadioRadicale**
- **RadioZetPodcast**
- **radlive**
- **radlive:channel**
@ -1367,9 +1368,7 @@
- **spotify**: Spotify episodes (**Currently broken**)
- **spotify:show**: Spotify shows (**Currently broken**)
- **Spreaker**
- **SpreakerPage**
- **SpreakerShow**
- **SpreakerShowPage**
- **SpringboardPlatform**
- **Sprout**
- **SproutVideo**
@ -1570,6 +1569,8 @@
- **UFCTV**: [*ufctv*](## "netrc machine")
- **ukcolumn**: (**Currently broken**)
- **UKTVPlay**
- **UlizaPlayer**
- **UlizaPortal**: ulizaportal.jp
- **umg:de**: Universal Music Deutschland (**Currently broken**)
- **Unistra**
- **Unity**: (**Currently broken**)
@ -1587,8 +1588,6 @@
- **Varzesh3**: (**Currently broken**)
- **Vbox7**
- **Veo**
- **Veoh**
- **veoh:user**
- **Vesti**: Вести.Ru (**Currently broken**)
- **Vevo**
- **VevoPlaylist**

yt_dlp/extractor/_extractors.py

@ -285,6 +285,16 @@ from .bloomberg import BloombergIE
from .bluesky import BlueskyIE
from .bokecc import BokeCCIE
from .bongacams import BongaCamsIE
from .boomplay import (
BoomplayEpisodeIE,
BoomplayGenericPlaylistIE,
BoomplayMusicIE,
BoomplayPlaylistIE,
BoomplayPodcastIE,
BoomplaySearchIE,
BoomplaySearchURLIE,
BoomplayVideoIE,
)
from .boosty import BoostyIE
from .bostonglobe import BostonGlobeIE
from .box import BoxIE

yt_dlp/extractor/boomplay.py (new file)

@ -0,0 +1,511 @@
import base64
import functools
import json
import re
import urllib.parse
from .common import InfoExtractor, SearchInfoExtractor
from ..aes import aes_cbc_decrypt_bytes, aes_cbc_encrypt_bytes, unpad_pkcs7
from ..utils import (
ExtractorError,
classproperty,
clean_html,
extract_attributes,
get_elements_text_and_html_by_attribute,
int_or_none,
join_nonempty,
merge_dicts,
parse_count,
parse_duration,
smuggle_url,
strip_or_none,
unified_strdate,
unsmuggle_url,
url_or_none,
urlencode_postdata,
urljoin,
variadic,
)
from ..utils.traversal import traverse_obj
class BoomplayBaseIE(InfoExtractor):
# Calculated from const values, see lhx.AESUtils.encrypt in public.js
# Note that the real key/iv differs from `lhx.AESUtils.key`/`lhx.AESUtils.iv`
_KEY = b'boomplayVr3xopAM'
_IV = b'boomplay8xIsKTn9'
_BASE = 'https://www.boomplay.com'
_MEDIA_TYPES = ('songs', 'video', 'episode', 'podcasts', 'playlists', 'artists', 'albums')
_GEO_COUNTRIES = ['NG']
@staticmethod
def __yield_elements_text_and_html_by_class_and_tag(class_, tag, html):
"""
Yields the text and html of all elements matching `tag.class_` in html
class_ must be re-escaped
"""
# get_elements_text_and_html_by_attribute returns a generator
return get_elements_text_and_html_by_attribute(
attribute='class', value=rf'''[^'"]*(?<=['"\s]){class_}(?=['"\s])[^'"]*''', html=html,
tag=tag, escape_value=False)
@classmethod
def __yield_elements_by_class_and_tag(cls, *args, **kwargs):
return (content for content, _ in cls.__yield_elements_text_and_html_by_class_and_tag(*args, **kwargs))
@classmethod
def __yield_elements_html_by_class_and_tag(cls, *args, **kwargs):
return (whole for _, whole in cls.__yield_elements_text_and_html_by_class_and_tag(*args, **kwargs))
@classmethod
def _get_elements_by_class_and_tag(cls, class_, tag, html):
return list(cls.__yield_elements_by_class_and_tag(class_, tag, html))
@classmethod
def _get_element_by_class_and_tag(cls, class_, tag, html):
return next(cls.__yield_elements_by_class_and_tag(class_, tag, html), None)
@classmethod
def _urljoin(cls, path):
return url_or_none(urljoin(base=cls._BASE, path=path))
def _get_playurl(self, item_id, item_type):
resp = self._download_json(
'https://www.boomplay.com/getResourceAddr', item_id,
note='Downloading play URL', errnote='Failed to download play URL',
data=urlencode_postdata({
'param': base64.b64encode(aes_cbc_encrypt_bytes(json.dumps({
'itemID': item_id,
'itemType': item_type,
}).encode(), self._KEY, self._IV)).decode(),
}), headers={
'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
})
if not (source := resp.get('source')) and (code := resp.get('code')):
if 'unavailable in your country' in (desc := resp.get('desc') or ''):
# since NG must have failed ...
self.raise_geo_restricted(countries=['GH', 'KE', 'TZ', 'CM', 'CI'])
else:
raise ExtractorError(desc or f'Failed to get play url, code: {code}')
return unpad_pkcs7(aes_cbc_decrypt_bytes(
base64.b64decode(source),
self._KEY, self._IV)).decode()
def _extract_formats(self, item_id, item_type='MUSIC', **kwargs):
if url := url_or_none(self._get_playurl(item_id, item_type)):
return [{
'format_id': '0',
'url': url,
'http_headers': {
'Origin': 'https://www.boomplay.com',
'Referer': 'https://www.boomplay.com',
'X-Boomplay-Ref': 'Boomplay_WEBV1',
},
**kwargs,
}]
else:
self.raise_no_formats('No formats found')
def _extract_page_metadata(self, webpage, item_id):
metadata_div = self._get_element_by_class_and_tag('summary', 'div', webpage) or ''
metadata_entries = re.findall(r'(?si)<strong>(?P<entry>.*?)</strong>', metadata_div) or []
description = re.sub(
r'(?i)Listen and download music for free on Boomplay!', '',
clean_html(self._get_element_by_class_and_tag(
'description_content', 'span', webpage)) or '') or None
details_section = self._get_element_by_class_and_tag('songDetailInfo', 'section', webpage) or ''
metadata_entries.extend(re.findall(r'(?si)<li>(?P<entry>.*?)</li>', details_section) or [])
page_metadata = {
'id': item_id,
**self._extract_title_from_webpage(webpage),
'thumbnail': self._html_search_meta(['og:image', 'twitter:image'],
webpage, 'thumbnail', default=None),
'like_count': parse_count(self._get_element_by_class_and_tag('btn_favorite', 'button', metadata_div)),
'repost_count': parse_count(self._get_element_by_class_and_tag('btn_share', 'button', metadata_div)),
'comment_count': parse_count(self._get_element_by_class_and_tag('btn_comment', 'button', metadata_div)),
'duration': parse_duration(self._get_element_by_class_and_tag('btn_duration', 'button', metadata_div)),
'upload_date': unified_strdate(strip_or_none(
self._get_element_by_class_and_tag('btn_pubDate', 'button', metadata_div))),
'description': description,
}
for metadata_entry in metadata_entries:
if ':' not in metadata_entry:
continue
k, v = clean_html(metadata_entry).split(':', 1)
v = v.strip()
if 'artist' in k.lower():
page_metadata['artists'] = [v]
elif 'album' in k.lower():
page_metadata['album'] = v
elif 'genre' in k.lower():
page_metadata['genres'] = [v]
elif 'year of release' in k.lower():
page_metadata['release_year'] = int_or_none(v)
return page_metadata
def _extract_title_from_webpage(self, webpage):
if h1_title := self._html_search_regex(r'(?i)<h1[^>]*>([^<]+)</h1>', webpage, 'title', default=None):
return {'title': h1_title}
else:
return self._fix_title(
self._html_search_meta(['og:title', 'twitter:title'], webpage, 'title', default=None)
or self._html_search_regex(r'(?i)<title[^>]*>([^<]+)</title>', webpage, 'title', default=None))
@staticmethod
def _fix_title(title):
"""
fix various types of titles (og:title, twitter:title, title tag in html head)
"""
if not title:
return {}
title_patterns = (
r'^(?P<title>(?P<artist>.+)) Songs MP3 Download, New Songs \& Albums \| Boomplay$', # artists
r'^(?P<artist>.+?) - (?P<title>.+) MP3\ Download \& Lyrics \| Boomplay$', # music
r'^Download (?P<artist>.+) album songs: (?P<title>.+?) \| Boomplay Music$', # album
r'^Search:(?P<title>.+) \| Boomplay Music$', # search url
r'^(?P<title>.+) \| Podcast \| Boomplay$', # podcast, episode
r'^(?P<title>.+) \| Boomplay(?: Music)?$', # video, playlist, generic playlists
)
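# e.g. (illustrative) a music page <title> such as
#   'fatbunny - Rise of the Fallen Heroes MP3 Download & Lyrics | Boomplay'
# yields {'title': 'Rise of the Fallen Heroes', 'artists': ['fatbunny']}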
for pattern in title_patterns:
if match := re.search(pattern, title):
return {
'title': match.group('title'),
'artists': [match.group('artist')] if 'artist' in match.groupdict() else None,
}
return {'title': title}
@classmethod
def _extract_from_webpage(cls, url, webpage, **kwargs):
if kwargs:
url = smuggle_url(url, kwargs)
return super()._extract_from_webpage(url, webpage)
@classmethod
def _extract_embed_urls(cls, url, webpage):
url, smuggled_data = unsmuggle_url(url)
media_types = variadic(smuggled_data.get('media_types', cls._MEDIA_TYPES))
media_types = join_nonempty(*(
re.escape(v) for v in media_types if v in cls._MEDIA_TYPES),
delim='|')
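# matches site-internal links such as <a href="/songs/165481965"> (illustrative ID)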
for mobj in re.finditer(
rf'''(?ix)
<a
(?:\s(?:[^>"']|"[^"]*"|'[^']*')*)?
(?<=\s)href\s*=\s*(?P<_q>['"])
(?P<href>/(?:{media_types})/\d+/?[\-\w=?&#:;@]*)
(?P=_q)
(?:\s(?:[^>"']|"[^"]*"|'[^']*')*)?
>''', webpage):
if url := cls._urljoin(mobj.group('href')):
yield url
@classmethod
def _extract_playlist_entries(cls, webpage, media_types, warn=True):
song_list = strip_or_none(
cls._get_element_by_class_and_tag('morePart_musics', 'ol', webpage)
or cls._get_element_by_class_and_tag('morePart', 'ol', webpage)
or '')
entries = traverse_obj(cls.__yield_elements_html_by_class_and_tag(
'songName', 'a', song_list),
(..., {extract_attributes}, 'href', {cls._urljoin}, {cls.url_result}))
if not entries:
if warn:
cls.report_warning('Failed to extract playlist entries, finding suitable links instead!')
def strip_ie(entry):
# All our IEs have a _VALID_URL and set a key: don't use it
entry.pop('ie_key', None)
return entry
return (strip_ie(result) for result in
cls._extract_from_webpage(cls._BASE, webpage, media_types=media_types))
return entries
class BoomplayMusicIE(BoomplayBaseIE):
_VALID_URL = r'https?://(?:www\.)?boomplay\.com/songs/(?P<id>\d+)'
_TESTS = [{
'url': 'https://www.boomplay.com/songs/165481965',
'md5': 'c5fb4f23e6aae98064230ef3c39c2178',
'info_dict': {
'title': 'Rise of the Fallen Heroes',
'ext': 'mp3',
'id': '165481965',
'artists': ['fatbunny'],
'thumbnail': 'https://source.boomplaymusic.com/group10/M00/04/29/375ecda38f6f48179a93c72ab909118f_464_464.jpg',
'channel_url': 'https://www.boomplay.com/artists/52723101',
'duration': 125.0,
'release_year': 2024,
'comment_count': int,
'like_count': int,
'repost_count': int,
'album': 'Legendary Battle',
'genres': ['Metal'],
},
}]
def _real_extract(self, url):
song_id = self._match_id(url)
webpage = self._download_webpage(url, song_id)
ld_json_meta = next(self._yield_json_ld(webpage, song_id))
# TODO: extract comments(and lyrics? they don't have timestamps)
# example: https://www.boomplay.com/songs/96352673?from=home
return merge_dicts(
self._extract_page_metadata(webpage, song_id),
traverse_obj(ld_json_meta, {
'title': 'name',
'thumbnail': 'image',
'channel_url': ('byArtist', 0, '@id'),
'artists': ('byArtist', ..., 'name'),
'duration': ('duration', {parse_duration}),
}), {
'formats': self._extract_formats(song_id, 'MUSIC', vcodec='none'),
})
class BoomplayVideoIE(BoomplayBaseIE):
_VALID_URL = r'https?://(?:www\.)?boomplay\.com/video/(?P<id>\d+)'
_TESTS = [{
'url': 'https://www.boomplay.com/video/1154892',
'md5': 'd9b67ad333d2292a82922062d065352d',
'info_dict': {
'id': '1154892',
'ext': 'mp4',
'title': 'Autumn blues',
'thumbnail': 'https://source.boomplaymusic.com/group10/M00/10/10/2171dee9e1f8452e84021560729edb88.jpg',
'upload_date': '20241010',
'timestamp': 1728599214,
'view_count': int,
'duration': 177.0,
'description': 'Autumn blues by Lugo',
},
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
return merge_dicts(
self._extract_page_metadata(webpage, video_id),
self._search_json_ld(webpage, video_id), {
'formats': self._extract_formats(video_id, 'VIDEO', ext='mp4'),
})
class BoomplayEpisodeIE(BoomplayBaseIE):
_VALID_URL = r'https?://(?:www\.)?boomplay\.com/episode/(?P<id>\d+)'
_TESTS = [{
'url': 'https://www.boomplay.com/episode/7132706',
'md5': 'f26e236b764baa53d7a2cbb7e9ce6dc4',
'info_dict': {
'id': '7132706',
'ext': 'mp3',
'title': 'Letting Go',
'repost_count': int,
'thumbnail': 'https://source.boomplaymusic.com/group10/M00/05/06/fc535eaa25714b43a47185a9831887a5_320_320.jpg',
'comment_count': int,
'duration': 921.0,
'upload_date': '20240506',
'description': 'md5:5ec684b281fa0f9e4c31b3ee20c5e57a',
},
}]
def _real_extract(self, url):
ep_id = self._match_id(url)
webpage = self._download_webpage(url, ep_id)
return merge_dicts(
self._extract_page_metadata(webpage, ep_id), {
'description': self._html_search_meta(
['description', 'og:description', 'twitter:description'], webpage),
'formats': self._extract_formats(ep_id, 'EPISODE', vcodec='none'),
})
class BoomplayPodcastIE(BoomplayBaseIE):
_VALID_URL = r'https?://(?:www\.)?boomplay\.com/podcasts/(?P<id>\d+)'
_TESTS = [{
'url': 'https://www.boomplay.com/podcasts/5372',
'playlist_count': 200,
'info_dict': {
'id': '5372',
'title': 'TED Talks Daily',
'description': r're:(?s)Every weekday, TED Talks Daily brings you the latest talks .{328} learn something new\.$',
'thumbnail': 'https://source.boomplaymusic.com/group10/M00/12/22/6f9cf97ad6f846a0a7882c98dfcf4f8c_320_320.jpg',
'repost_count': int,
'comment_count': int,
'like_count': int,
},
}]
def _real_extract(self, url):
playlist_id = self._match_id(url)
webpage = self._download_webpage(url, playlist_id)
song_list = self._get_element_by_class_and_tag('morePart_musics', 'ol', webpage)
song_list = traverse_obj(re.finditer(
r'''(?ix)
<li
(?:\s(?:[^>"']|"[^"]*"|'[^']*')*)?
\sdata-id\s*=\s*
(?P<_q>['"]?)
(?P<id>\d+)
(?P=_q)
(?:\s(?:[^>"']|"[^"]*"|'[^']*')*)?
>''',
song_list),
(..., 'id', {
lambda x: self.url_result(
f'https://www.boomplay.com/episode/{x}', BoomplayEpisodeIE, x),
}))
return self.playlist_result(
song_list, playlist_id,
**self._extract_page_metadata(webpage, playlist_id))
class BoomplayPlaylistIE(BoomplayBaseIE):
_VALID_URL = r'https?://(?:www\.)?boomplay\.com/(?:playlists|artists|albums)/(?P<id>\d+)'
_TESTS = [{
'url': 'https://www.boomplay.com/playlists/33792494',
'info_dict': {
'id': '33792494',
'title': 'Daily Trending Indonesia',
'thumbnail': 'https://source.boomplaymusic.com/group10/M00/08/19/d05d431ee616412caeacd7f78f4f68f5_320_320.jpeg',
'repost_count': int,
'comment_count': int,
'like_count': int,
'description': 'md5:7ebdffc5137c77acb62acb3c89248445',
},
'playlist_count': 10,
}, {
'url': 'https://www.boomplay.com/artists/52723101',
'only_matching': True,
}, {
'url': 'https://www.boomplay.com/albums/89611238?from=home#google_vignette',
'only_matching': True,
}]
def _real_extract(self, url):
playlist_id = self._match_id(url)
webpage = self._download_webpage(url, playlist_id)
json_ld_metadata = next(self._yield_json_ld(webpage, playlist_id))
# schema `MusicGroup` not supported by self._json_ld()
return self.playlist_result(**merge_dicts(
self._extract_page_metadata(webpage, playlist_id),
traverse_obj(json_ld_metadata, {
'entries': ('track', ..., 'url', {
functools.partial(self.url_result, ie=BoomplayMusicIE),
}),
'playlist_title': 'name',
'thumbnail': 'image',
'artists': ('byArtist', ..., 'name'),
'channel_url': ('byArtist', 0, '@id'),
})))
class BoomplayGenericPlaylistIE(BoomplayBaseIE):
_VALID_URL = r'https?://(?:www\.)?boomplay\.com/.+'
_TESTS = [{
'url': 'https://www.boomplay.com/new-songs',
'playlist_mincount': 20,
'info_dict': {
'id': 'new-songs',
'title': 'New Songs',
'thumbnail': 'http://www.boomplay.com/pc/img/og_default_v3.jpg',
},
}, {
'url': 'https://www.boomplay.com/trending-songs',
'playlist_mincount': 20,
'info_dict': {
'id': 'trending-songs',
'title': 'Trending Songs',
'thumbnail': 'http://www.boomplay.com/pc/img/og_default_v3.jpg',
},
}]
@classmethod
def suitable(cls, url):
return super().suitable(url) and all(not ie.suitable(url) for ie in (
BoomplayEpisodeIE,
BoomplayMusicIE,
BoomplayPlaylistIE,
BoomplayPodcastIE,
BoomplaySearchURLIE,
BoomplayVideoIE,
))
def _real_extract(self, url):
playlist_id = self._generic_id(url)
webpage = self._download_webpage(url, playlist_id)
return self.playlist_result(
self._extract_playlist_entries(webpage, self._MEDIA_TYPES),
**self._extract_page_metadata(webpage, playlist_id))
class BoomplaySearchURLIE(BoomplayBaseIE):
_TESTS = [{
'url': 'https://www.boomplay.com/search/default/%20Rise%20of%20the%20Falletesn%20Heroes%20fatbunny',
'md5': 'c5fb4f23e6aae98064230ef3c39c2178',
'info_dict': {
'id': '165481965',
'ext': 'mp3',
'title': 'Rise of the Fallen Heroes',
'duration': 125.0,
'genres': ['Metal'],
'artists': ['fatbunny'],
'thumbnail': 'https://source.boomplaymusic.com/group10/M00/04/29/375ecda38f6f48179a93c72ab909118f_464_464.jpg',
'channel_url': 'https://www.boomplay.com/artists/52723101',
'comment_count': int,
'repost_count': int,
'album': 'Legendary Battle',
'release_year': 2024,
'like_count': int,
},
}, {
'url': 'https://www.boomplay.com/search/video/%20Autumn%20blues',
'md5': 'd9b67ad333d2292a82922062d065352d',
'info_dict': {
'id': '1154892',
'title': 'Autumn blues',
'ext': 'mp4',
'timestamp': 1728599214,
'view_count': int,
'thumbnail': 'https://source.boomplaymusic.com/group10/M00/10/10/2171dee9e1f8452e84021560729edb88.jpg',
'description': 'Autumn blues by Lugo',
'upload_date': '20241010',
'duration': 177.0,
},
'params': {'playlist_items': '1'},
}]
@classproperty
def _VALID_URL(cls):
return r'https?://(?:www\.)?boomplay\.com/search/(?P<media_type>default|video|episode|podcasts|playlists|artists|albums)/(?P<query>[^?&#/]+)'
def _real_extract(self, url):
media_type, query = self._match_valid_url(url).group('media_type', 'query')
if media_type == 'default':
media_type = 'songs'
webpage = self._download_webpage(url, query)
return self.playlist_result(
self._extract_playlist_entries(webpage, media_type, warn=media_type == 'songs'),
**self._extract_page_metadata(webpage, query))
class BoomplaySearchIE(SearchInfoExtractor):
_SEARCH_KEY = 'boomplaysearch'
_RETURN_TYPE = 'url'
_TESTS = [{
'url': 'boomplaysearch:rise of the fallen heroes',
'only_matching': True,
}]
def _search_results(self, query):
yield self.url_result(
f'https://www.boomplay.com/search/default/{urllib.parse.quote(query)}',
BoomplaySearchURLIE)
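
For reference, a few usage sketches for the new extractors, reusing URLs from the test cases above (the search prefix comes from BoomplaySearchIE._SEARCH_KEY):

    yt-dlp "https://www.boomplay.com/songs/165481965"
    yt-dlp "https://www.boomplay.com/podcasts/5372"
    yt-dlp "boomplaysearch:rise of the fallen heroes"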

yt_dlp/extractor/digitalconcerthall.py

@ -1,7 +1,10 @@
+import time
+
 from .common import InfoExtractor
 from ..networking.exceptions import HTTPError
 from ..utils import (
     ExtractorError,
+    jwt_decode_hs256,
     parse_codecs,
     try_get,
     url_or_none,
@ -13,9 +16,6 @@ from ..utils.traversal import traverse_obj
 class DigitalConcertHallIE(InfoExtractor):
     IE_DESC = 'DigitalConcertHall extractor'
     _VALID_URL = r'https?://(?:www\.)?digitalconcerthall\.com/(?P<language>[a-z]+)/(?P<type>film|concert|work)/(?P<id>[0-9]+)-?(?P<part>[0-9]+)?'
-    _OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
-    _USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
-    _ACCESS_TOKEN = None
     _NETRC_MACHINE = 'digitalconcerthall'
     _TESTS = [{
         'note': 'Playlist with only one video',
@ -69,59 +69,157 @@ class DigitalConcertHallIE(InfoExtractor):
         'params': {'skip_download': 'm3u8'},
         'playlist_count': 1,
     }]
+    _LOGIN_HINT = ('Use --username token --password ACCESS_TOKEN where ACCESS_TOKEN '
+                   'is the "access_token_production" from your browser local storage')
+    _REFRESH_HINT = 'or else use a "refresh_token" with --username refresh --password REFRESH_TOKEN'
+    _OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
+    _CLIENT_ID = 'dch.webapp'
+    _CLIENT_SECRET = '2ySLN+2Fwb'
+    _USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
+    _OAUTH_HEADERS = {
+        'Accept': 'application/json',
+        'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
+        'Origin': 'https://www.digitalconcerthall.com',
+        'Referer': 'https://www.digitalconcerthall.com/',
+        'User-Agent': _USER_AGENT,
+    }
+    _access_token = None
+    _access_token_expiry = 0
+    _refresh_token = None

-    def _perform_login(self, username, password):
-        login_token = self._download_json(
-            self._OAUTH_URL,
-            None, 'Obtaining token', errnote='Unable to obtain token', data=urlencode_postdata({
+    @property
+    def _access_token_is_expired(self):
+        return self._access_token_expiry - 30 <= int(time.time())
+
+    def _set_access_token(self, value):
+        self._access_token = value
+        self._access_token_expiry = traverse_obj(value, ({jwt_decode_hs256}, 'exp', {int})) or 0
+
+    def _cache_tokens(self, /):
+        self.cache.store(self._NETRC_MACHINE, 'tokens', {
+            'access_token': self._access_token,
+            'refresh_token': self._refresh_token,
+        })
+
+    def _fetch_new_tokens(self, invalidate=False):
+        if invalidate:
+            self.report_warning('Access token has been invalidated')
+            self._set_access_token(None)
+
+        if not self._access_token_is_expired:
+            return
+
+        if not self._refresh_token:
+            self._set_access_token(None)
+            self._cache_tokens()
+            raise ExtractorError(
+                'Access token has expired or been invalidated. '
+                'Get a new "access_token_production" value from your browser '
+                f'and try again, {self._REFRESH_HINT}', expected=True)
+
+        # If we only have a refresh token, we need a temporary "initial token" for the refresh flow
+        bearer_token = self._access_token or self._download_json(
+            self._OAUTH_URL, None, 'Obtaining initial token', 'Unable to obtain initial token',
+            data=urlencode_postdata({
                 'affiliate': 'none',
                 'grant_type': 'device',
                 'device_vendor': 'unknown',
-                # device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio
-                'device_model': 'unknown' if self._configuration_arg('prefer_combined_hls') else 'Safari',
-                'app_id': 'dch.webapp',
+                # device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio,
+                # but this is no longer effective since actual login is not possible anymore
+                'device_model': 'unknown',
+                'app_id': self._CLIENT_ID,
                 'app_distributor': 'berlinphil',
-                'app_version': '1.84.0',
-                'client_secret': '2ySLN+2Fwb',
-            }), headers={
-                'Accept': 'application/json',
-                'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
-                'User-Agent': self._USER_AGENT,
-            })['access_token']
+                'app_version': '1.95.0',
+                'client_secret': self._CLIENT_SECRET,
+            }), headers=self._OAUTH_HEADERS)['access_token']

         try:
-            login_response = self._download_json(
-                self._OAUTH_URL,
-                None, note='Logging in', errnote='Unable to login', data=urlencode_postdata({
-                    'grant_type': 'password',
-                    'username': username,
-                    'password': password,
+            response = self._download_json(
+                self._OAUTH_URL, None, 'Refreshing token', 'Unable to refresh token',
+                data=urlencode_postdata({
+                    'grant_type': 'refresh_token',
+                    'refresh_token': self._refresh_token,
+                    'client_id': self._CLIENT_ID,
+                    'client_secret': self._CLIENT_SECRET,
                 }), headers={
-                    'Accept': 'application/json',
-                    'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
-                    'Referer': 'https://www.digitalconcerthall.com',
-                    'Authorization': f'Bearer {login_token}',
-                    'User-Agent': self._USER_AGENT,
+                    **self._OAUTH_HEADERS,
+                    'Authorization': f'Bearer {bearer_token}',
                 })
-        except ExtractorError as error:
-            if isinstance(error.cause, HTTPError) and error.cause.status == 401:
-                raise ExtractorError('Invalid username or password', expected=True)
+        except ExtractorError as e:
+            if isinstance(e.cause, HTTPError) and e.cause.status == 401:
+                self._set_access_token(None)
+                self._refresh_token = None
+                self._cache_tokens()
+                raise ExtractorError('Your tokens have been invalidated', expected=True)
             raise
-        self._ACCESS_TOKEN = login_response['access_token']
+
+        self._set_access_token(response['access_token'])
+        if refresh_token := traverse_obj(response, ('refresh_token', {str})):
+            self.write_debug('New refresh token granted')
+            self._refresh_token = refresh_token
+        self._cache_tokens()
+
+    def _perform_login(self, username, password):
+        self.report_login()
+
+        if username == 'refresh':
+            self._refresh_token = password
+            self._fetch_new_tokens()
+
+        if username == 'token':
+            if not traverse_obj(password, {jwt_decode_hs256}):
+                raise ExtractorError(
+                    f'The access token passed to yt-dlp is not valid. {self._LOGIN_HINT}', expected=True)
+            self._set_access_token(password)
+            self._cache_tokens()
+
+        if username in ('refresh', 'token'):
+            if self.get_param('cachedir') is not False:
+                token_type = 'access' if username == 'token' else 'refresh'
+                self.to_screen(f'Your {token_type} token has been cached to disk. To use the cached '
+                               'token next time, pass --username cache along with any password')
+            return
+
+        if username != 'cache':
+            raise ExtractorError(
+                'Login with username and password is no longer supported '
+                f'for this site. {self._LOGIN_HINT}, {self._REFRESH_HINT}', expected=True)
+
+        # Try cached access_token
+        cached_tokens = self.cache.load(self._NETRC_MACHINE, 'tokens', default={})
+        self._set_access_token(cached_tokens.get('access_token'))
+        self._refresh_token = cached_tokens.get('refresh_token')
+        if not self._access_token_is_expired:
+            return
+
+        # Try cached refresh_token
+        self._fetch_new_tokens(invalidate=True)

     def _real_initialize(self):
-        if not self._ACCESS_TOKEN:
-            self.raise_login_required(method='password')
+        if not self._access_token:
+            self.raise_login_required(
+                'All content on this site is only available for registered users. '
+                f'{self._LOGIN_HINT}, {self._REFRESH_HINT}', method=None)

     def _entries(self, items, language, type_, **kwargs):
         for item in items:
             video_id = item['id']
-            stream_info = self._download_json(
-                self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
-                    'Accept': 'application/json',
-                    'Authorization': f'Bearer {self._ACCESS_TOKEN}',
-                    'Accept-Language': language,
-                    'User-Agent': self._USER_AGENT,
-                })
+
+            for should_retry in (True, False):
+                self._fetch_new_tokens(invalidate=not should_retry)
+                try:
+                    stream_info = self._download_json(
+                        self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
+                            'Accept': 'application/json',
+                            'Authorization': f'Bearer {self._access_token}',
+                            'Accept-Language': language,
+                            'User-Agent': self._USER_AGENT,
+                        })
+                    break
+                except ExtractorError as error:
+                    if should_retry and isinstance(error.cause, HTTPError) and error.cause.status == 401:
+                        continue
+                    raise

             formats = []
             for m3u8_url in traverse_obj(stream_info, ('channel', ..., 'stream', ..., 'url', {url_or_none})):
@ -157,7 +255,6 @@ class DigitalConcertHallIE(InfoExtractor):
                     'Accept': 'application/json',
                     'Accept-Language': language,
                     'User-Agent': self._USER_AGENT,
-                    'Authorization': f'Bearer {self._ACCESS_TOKEN}',
                 })

         videos = [vid_info] if type_ == 'film' else traverse_obj(vid_info, ('_embedded', ..., ...))
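
Going by the _LOGIN_HINT/_REFRESH_HINT strings above, the reworked login flow would be driven like this (sketch only; the URL and token values are placeholders):

    yt-dlp --username token --password ACCESS_TOKEN "https://www.digitalconcerthall.com/en/concert/12345"
    yt-dlp --username refresh --password REFRESH_TOKEN "https://www.digitalconcerthall.com/en/concert/12345"
    # once tokens have been cached to disk by either of the above:
    yt-dlp --username cache --password anything "https://www.digitalconcerthall.com/en/concert/12345"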

yt_dlp/extractor/reddit.py

@ -259,6 +259,8 @@ class RedditIE(InfoExtractor):
f'https://www.reddit.com/{slug}/.json', video_id, expected_status=403)
except ExtractorError as e:
if isinstance(e.cause, json.JSONDecodeError):
if self._get_cookies('https://www.reddit.com/').get('reddit_session'):
raise ExtractorError('Your IP address is unable to access the Reddit API', expected=True)
self.raise_login_required('Account authentication is required')
raise

yt_dlp/version.py

@ -1,8 +1,8 @@
# Autogenerated by devscripts/update-version.py
-__version__ = '2024.11.04'
+__version__ = '2024.11.18'
-RELEASE_GIT_HEAD = '197d0b03b6a3c8fe4fa5ace630eeffec629bf72c'
+RELEASE_GIT_HEAD = '7ea2787920cccc6b8ea30791993d114fbd564434'
VARIANT = None
@ -12,4 +12,4 @@ CHANNEL = 'stable'
ORIGIN = 'yt-dlp/yt-dlp'
-_pkg_version = '2024.11.04'
+_pkg_version = '2024.11.18'