Compare commits

...

21 Commits

Author SHA1 Message Date
MrDemocracy
4b0f23b6c2
Merge 7ab6662997 into f919729538 2024-11-18 12:31:11 +03:00
github-actions[bot]
f919729538 Release 2024.11.18
Created by: bashonly

:ci skip all
2024-11-18 05:45:05 +00:00
bashonly
7ea2787920
[ie/reddit] Improve error handling (#11573)
Authored by: bashonly
2024-11-18 05:36:38 +00:00
bashonly
f7257588bd
[ie/digitalconcerthall] Support login with access/refresh tokens (#11571)
Removes broken support for login with email and password
Removes obsolete `prefer_combined_hls` extractor-arg

Closes #11404, Closes #11436
Authored by: bashonly
2024-11-18 05:16:17 +00:00
bashonly
7ab6662997
Merge branch 'yt-dlp:master' into pr/10187 2024-11-15 21:54:22 -06:00
MrDemocracy
2b5eaf8601
[nrk] Wrong file 2024-10-24 17:38:51 +02:00
MrDemocracy
725ab6ef3e
[nrk] Linting 2024-10-24 17:31:01 +02:00
MrDemocracy
17b667c2fa
[nrk] Remove unused import 2024-10-24 17:26:20 +02:00
MrDemocracy
38746cb1af
[nrk] Accidentally removed login function 2024-10-24 17:24:10 +02:00
MrDemocracy
670ac229d9
[nrk] Run Ruff to apply linting fixes in nrk.py 2024-10-24 17:13:20 +02:00
MrDemocracy
3213c07265
[nrk] Restore NRKBaseIE class and remove subclassing from concrete IE 2024-10-24 17:09:06 +02:00
MrDemocracy
5cc9b64268
[nrk] Run autopep8 to format test_subtitles.py 2024-10-24 15:45:09 +02:00
MrDemocracy
0048ed894e
[nrk] Made suggested changes, some slight refactoring and updated subtitles test 2024-10-24 15:37:45 +02:00
MrDemocracy
b691d1dadb
[nrk] Remove unused manifest_type variable 2024-10-06 02:15:46 +02:00
MrDemocracy
4cd8abfc08
[nrk] Run autopep8 to format nrk.py 2024-10-06 02:12:37 +02:00
MrDemocracy
4522cce417
[nrk] Run Ruff to apply linting fixes in nrk.py 2024-10-06 02:05:27 +02:00
MrDemocracy
6b2b7dbc42
[nrk] Standardize string formatting in f-string 2024-10-06 02:00:15 +02:00
MrDemocracy
7e8e6cb621
[nrk] Modify api_url construction logic for season extractor 2024-10-06 01:44:41 +02:00
MrDemocracy
34236d0b95
[nrk] Add 1080p support, linting improvements, and update tests 2024-10-06 01:35:35 +02:00
MrDemocracy
6d7eb0e827
[nrk] Change initial chapters variable from None to empty list 2024-06-15 03:03:54 +02:00
MrDemocracy
b5a111eeb8
[nrk] Add login support and chapter extraction 2024-06-15 02:46:44 +02:00
10 changed files with 534 additions and 196 deletions

CONTRIBUTORS (View File)

@ -695,3 +695,15 @@ KBelmin
kesor
MellowKyler
Wesley107772
a13ssandr0
ChocoLZS
doe1080
hugovdev
jshumphrey
julionc
manavchaudhary1
powergold1
Sakura286
SamDecrock
stratus-ss
subrat-lima

Changelog.md (View File)

@ -4,6 +4,64 @@
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->
### 2024.11.18
#### Important changes
- **Login with OAuth is no longer supported for YouTube**
Due to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)
#### Core changes
- [Catch broken Cryptodome installations](https://github.com/yt-dlp/yt-dlp/commit/b83ca24eb72e1e558b0185bd73975586c0bc0546) ([#11486](https://github.com/yt-dlp/yt-dlp/issues/11486)) by [seproDev](https://github.com/seproDev)
- **utils**
- [Fix `join_nonempty`, add `**kwargs` to `unpack`](https://github.com/yt-dlp/yt-dlp/commit/39d79c9b9cf23411d935910685c40aa1a2fdb409) ([#11559](https://github.com/yt-dlp/yt-dlp/issues/11559)) by [Grub4K](https://github.com/Grub4K)
- `subs_list_to_dict`: [Add `lang` default parameter](https://github.com/yt-dlp/yt-dlp/commit/c014fbcddcb4c8f79d914ac5bb526758b540ea33) ([#11508](https://github.com/yt-dlp/yt-dlp/issues/11508)) by [Grub4K](https://github.com/Grub4K)
#### Extractor changes
- [Allow `ext` override for thumbnails](https://github.com/yt-dlp/yt-dlp/commit/eb64ae7d5def6df2aba74fb703e7f168fb299865) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **adobepass**: [Fix provider requests](https://github.com/yt-dlp/yt-dlp/commit/85fdc66b6e01d19a94b4f39b58e3c0cf23600902) ([#11472](https://github.com/yt-dlp/yt-dlp/issues/11472)) by [bashonly](https://github.com/bashonly)
- **archive.org**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/f2a4983df7a64c4e93b56f79dbd16a781bd90206) ([#11527](https://github.com/yt-dlp/yt-dlp/issues/11527)) by [jshumphrey](https://github.com/jshumphrey)
- **bandlab**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/6365e92589e4bc17b8fffb0125a716d144ad2137) ([#11535](https://github.com/yt-dlp/yt-dlp/issues/11535)) by [seproDev](https://github.com/seproDev)
- **chaturbate**
- [Extract from API and support impersonation](https://github.com/yt-dlp/yt-dlp/commit/720b3dc453c342bc2e8df7dbc0acaab4479de46c) ([#11555](https://github.com/yt-dlp/yt-dlp/issues/11555)) by [powergold1](https://github.com/powergold1) (With fixes in [7cecd29](https://github.com/yt-dlp/yt-dlp/commit/7cecd299e4a5ef1f0f044b2fedc26f17e41f15e3) by [seproDev](https://github.com/seproDev))
- [Support alternate domains](https://github.com/yt-dlp/yt-dlp/commit/a9f85670d03ab993dc589f21a9ffffcad61392d5) ([#10595](https://github.com/yt-dlp/yt-dlp/issues/10595)) by [manavchaudhary1](https://github.com/manavchaudhary1)
- **cloudflarestream**: [Avoid extraction via videodelivery.net](https://github.com/yt-dlp/yt-dlp/commit/2db8c2e7d57a1784b06057c48e3e91023720d195) ([#11478](https://github.com/yt-dlp/yt-dlp/issues/11478)) by [hugovdev](https://github.com/hugovdev)
- **ctvnews**
- [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f351440f1dc5b3dfbfc5737b037a869d946056fe) ([#11534](https://github.com/yt-dlp/yt-dlp/issues/11534)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
- [Fix playlist ID extraction](https://github.com/yt-dlp/yt-dlp/commit/f9d98509a898737c12977b2e2117277bada2c196) ([#8892](https://github.com/yt-dlp/yt-dlp/issues/8892)) by [qbnu](https://github.com/qbnu)
- **digitalconcerthall**: [Support login with access/refresh tokens](https://github.com/yt-dlp/yt-dlp/commit/f7257588bdff5f0b0452635a66b253a783c97357) ([#11571](https://github.com/yt-dlp/yt-dlp/issues/11571)) by [bashonly](https://github.com/bashonly)
- **facebook**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/bacc31b05a04181b63100c481565256b14813a5e) ([#11513](https://github.com/yt-dlp/yt-dlp/issues/11513)) by [bashonly](https://github.com/bashonly)
- **gamedevtv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8) ([#11368](https://github.com/yt-dlp/yt-dlp/issues/11368)) by [bashonly](https://github.com/bashonly), [stratus-ss](https://github.com/stratus-ss)
- **goplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6b43a8d84b881d769b480ba6e20ec691e9d1b92d) ([#11466](https://github.com/yt-dlp/yt-dlp/issues/11466)) by [bashonly](https://github.com/bashonly), [SamDecrock](https://github.com/SamDecrock)
- **kenh14**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/eb15fd5a32d8b35ef515f7a3d1158c03025648ff) ([#3996](https://github.com/yt-dlp/yt-dlp/issues/3996)) by [krichbanana](https://github.com/krichbanana), [pzhlkj6612](https://github.com/pzhlkj6612)
- **litv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e079ffbda66de150c0a9ebef05e89f61bb4d5f76) ([#11071](https://github.com/yt-dlp/yt-dlp/issues/11071)) by [jiru](https://github.com/jiru)
- **mixchmovie**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/0ec9bfed4d4a52bfb4f8733da1acf0aeeae21e6b) ([#10897](https://github.com/yt-dlp/yt-dlp/issues/10897)) by [Sakura286](https://github.com/Sakura286)
- **patreon**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/1d253b0a27110d174c40faf8fb1c999d099e0cde) ([#11530](https://github.com/yt-dlp/yt-dlp/issues/11530)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
- **pialive**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/d867f99622ef7fba690b08da56c39d739b822bb7) ([#10811](https://github.com/yt-dlp/yt-dlp/issues/10811)) by [ChocoLZS](https://github.com/ChocoLZS)
- **radioradicale**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/70c55cb08f780eab687e881ef42bb5c6007d290b) ([#5607](https://github.com/yt-dlp/yt-dlp/issues/5607)) by [a13ssandr0](https://github.com/a13ssandr0), [pzhlkj6612](https://github.com/pzhlkj6612)
- **reddit**: [Improve error handling](https://github.com/yt-dlp/yt-dlp/commit/7ea2787920cccc6b8ea30791993d114fbd564434) ([#11573](https://github.com/yt-dlp/yt-dlp/issues/11573)) by [bashonly](https://github.com/bashonly)
- **redgifsuser**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/d215fba7edb69d4fa665f43663756fd260b1489f) ([#11531](https://github.com/yt-dlp/yt-dlp/issues/11531)) by [jshumphrey](https://github.com/jshumphrey)
- **rutube**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/e398217aae19bb25f91797bfbe8a3243698d7f45) ([#11480](https://github.com/yt-dlp/yt-dlp/issues/11480)) by [seproDev](https://github.com/seproDev)
- **sonylivseries**: [Add `sort_order` extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/2009cb27e17014787bf63eaa2ada51293d54f22a) ([#11569](https://github.com/yt-dlp/yt-dlp/issues/11569)) by [bashonly](https://github.com/bashonly)
- **soop**: [Fix thumbnail extraction](https://github.com/yt-dlp/yt-dlp/commit/c699bafc5038b59c9afe8c2e69175fb66424c832) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **spankbang**: [Support browser impersonation](https://github.com/yt-dlp/yt-dlp/commit/8388ec256f7753b02488788e3cfa771f6e1db247) ([#11542](https://github.com/yt-dlp/yt-dlp/issues/11542)) by [jshumphrey](https://github.com/jshumphrey)
- **spreaker**
- [Support episode pages and access keys](https://github.com/yt-dlp/yt-dlp/commit/c39016f66df76d14284c705736ca73db8055d8de) ([#11489](https://github.com/yt-dlp/yt-dlp/issues/11489)) by [julionc](https://github.com/julionc)
- [Support podcast and feed pages](https://github.com/yt-dlp/yt-dlp/commit/c6737310619022248f5d0fd13872073cac168453) ([#10968](https://github.com/yt-dlp/yt-dlp/issues/10968)) by [subrat-lima](https://github.com/subrat-lima)
- **youtube**
- [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/637d62a3a9fc723d68632c1af25c30acdadeeb85) ([#11528](https://github.com/yt-dlp/yt-dlp/issues/11528)) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)
- [Remove broken OAuth support](https://github.com/yt-dlp/yt-dlp/commit/52c0ffe40ad6e8404d93296f575007b05b04c686) ([#11558](https://github.com/yt-dlp/yt-dlp/issues/11558)) by [bashonly](https://github.com/bashonly)
- **tab**: [Fix podcasts tab extraction](https://github.com/yt-dlp/yt-dlp/commit/37cd7660eaff397c551ee18d80507702342b0c2b) ([#11567](https://github.com/yt-dlp/yt-dlp/issues/11567)) by [seproDev](https://github.com/seproDev)
#### Misc. changes
- **build**
- [Bump PyInstaller version pin to `>=6.11.1`](https://github.com/yt-dlp/yt-dlp/commit/f9c8deb4e5887ff5150e911ac0452e645f988044) ([#11507](https://github.com/yt-dlp/yt-dlp/issues/11507)) by [bashonly](https://github.com/bashonly)
- [Enable attestations for trusted publishing](https://github.com/yt-dlp/yt-dlp/commit/f13df591d4d7ca8e2f31b35c9c91e69ba9e9b013) ([#11420](https://github.com/yt-dlp/yt-dlp/issues/11420)) by [bashonly](https://github.com/bashonly)
- [Pin `websockets` version to >=13.0,<14](https://github.com/yt-dlp/yt-dlp/commit/240a7d43c8a67ffb86d44dc276805aa43c358dcc) ([#11488](https://github.com/yt-dlp/yt-dlp/issues/11488)) by [bashonly](https://github.com/bashonly)
- **cleanup**
- [Deprecate more compat functions](https://github.com/yt-dlp/yt-dlp/commit/f95a92b3d0169a784ee15a138fbe09d82b2754a1) ([#11439](https://github.com/yt-dlp/yt-dlp/issues/11439)) by [seproDev](https://github.com/seproDev)
- [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/10fc719bc7f1eef469389c5219102266ef411f29) ([#11566](https://github.com/yt-dlp/yt-dlp/issues/11566)) by [doe1080](https://github.com/doe1080)
- Miscellaneous: [da252d9](https://github.com/yt-dlp/yt-dlp/commit/da252d9d322af3e2178ac5eae324809502a0a862) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [seproDev](https://github.com/seproDev)
### 2024.11.04
#### Important changes

README.md (View File)

@ -1867,9 +1867,6 @@ The following extractors use this feature:
#### bilibili
* `prefer_multi_flv`: Prefer extracting flv formats over mp4 for older videos that still provide legacy formats
#### digitalconcerthall
* `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats
#### sonylivseries
* `sort_order`: Episode sort order for series extraction - one of `asc` (ascending, oldest first) or `desc` (descending, newest first). Default is `asc`
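
The `sort_order` argument above is user-facing configuration. As a minimal sketch (not part of this diff), assuming yt-dlp's standard `extractor_args` option format and a placeholder series URL, it could be passed through the Python API like so:

import yt_dlp

# Equivalent of the CLI form: --extractor-args "sonylivseries:sort_order=desc"
# The URL below is a placeholder for a real Sony LIV series page.
ydl_opts = {
    'extractor_args': {'sonylivseries': {'sort_order': ['desc']}},
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['<sonyliv-series-url>'])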

supportedsites.md (View File)

@ -129,6 +129,8 @@
- **Bandcamp:album**
- **Bandcamp:user**
- **Bandcamp:weekly**
- **Bandlab**
- **BandlabPlaylist**
- **BannedVideo**
- **bbc**: [*bbc*](## "netrc machine") BBC
- **bbc.co.uk**: [*bbc*](## "netrc machine") BBC iPlayer
@ -484,6 +486,7 @@
- **Gab**
- **GabTV**
- **Gaia**: [*gaia*](## "netrc machine")
- **GameDevTVDashboard**: [*gamedevtv*](## "netrc machine")
- **GameJolt**
- **GameJoltCommunity**
- **GameJoltGame**
@ -651,6 +654,8 @@
- **Karaoketv**
- **Katsomo**: (**Currently broken**)
- **KelbyOne**: (**Currently broken**)
- **Kenh14Playlist**
- **Kenh14Video**
- **Ketnet**
- **khanacademy**
- **khanacademy:unit**
@ -784,10 +789,6 @@
- **MicrosoftLearnSession**
- **MicrosoftMedius**
- **microsoftstream**: Microsoft Stream
- **mildom**: Record ongoing live by specific user in Mildom
- **mildom:clip**: Clip in Mildom
- **mildom:user:vod**: Download all VODs from specific user in Mildom
- **mildom:vod**: VOD in Mildom
- **minds**
- **minds:channel**
- **minds:group**
@ -798,6 +799,7 @@
- **MiTele**: mitele.es
- **mixch**
- **mixch:archive**
- **mixch:movie**
- **mixcloud**
- **mixcloud:playlist**
- **mixcloud:user**
@ -1060,8 +1062,8 @@
- **PhilharmonieDeParis**: Philharmonie de Paris
- **phoenix.de**
- **Photobucket**
- **PiaLive**
- **Piapro**: [*piapro*](## "netrc machine")
- **PIAULIZAPortal**: ulizaportal.jp - PIA LIVE STREAM
- **Picarto**
- **PicartoVod**
- **Piksel**
@ -1088,8 +1090,6 @@
- **PodbayFMChannel**
- **Podchaser**
- **podomatic**: (**Currently broken**)
- **Pokemon**
- **PokemonWatch**
- **PokerGo**: [*pokergo*](## "netrc machine")
- **PokerGoCollection**: [*pokergo*](## "netrc machine")
- **PolsatGo**
@ -1160,6 +1160,7 @@
- **RadioJavan**: (**Currently broken**)
- **radiokapital**
- **radiokapital:show**
- **RadioRadicale**
- **RadioZetPodcast**
- **radlive**
- **radlive:channel**
@ -1367,9 +1368,7 @@
- **spotify**: Spotify episodes (**Currently broken**)
- **spotify:show**: Spotify shows (**Currently broken**)
- **Spreaker**
- **SpreakerPage**
- **SpreakerShow**
- **SpreakerShowPage**
- **SpringboardPlatform**
- **Sprout**
- **SproutVideo**
@ -1570,6 +1569,8 @@
- **UFCTV**: [*ufctv*](## "netrc machine")
- **ukcolumn**: (**Currently broken**)
- **UKTVPlay**
- **UlizaPlayer**
- **UlizaPortal**: ulizaportal.jp
- **umg:de**: Universal Music Deutschland (**Currently broken**)
- **Unistra**
- **Unity**: (**Currently broken**)
@ -1587,8 +1588,6 @@
- **Varzesh3**: (**Currently broken**)
- **Vbox7**
- **Veo**
- **Veoh**
- **veoh:user**
- **Vesti**: Вести.Ru (**Currently broken**)
- **Vevo**
- **VevoPlaylist**

test/test_subtitles.py (View File)

@ -11,7 +11,7 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import FakeYDL, is_download_test, md5
from yt_dlp.extractor import (
NPOIE,
NRKTVIE,
NRKIE,
PBSIE,
CeskaTelevizeIE,
ComedyCentralIE,
@ -299,15 +299,16 @@ class TestMTVSubtitles(BaseTestSubtitles):
@is_download_test
class TestNRKSubtitles(BaseTestSubtitles):
url = 'http://tv.nrk.no/serie/ikke-gjoer-dette-hjemme/DMPV73000411/sesong-2/episode-1'
IE = NRKTVIE
url = 'nrk:DMPV73000411' # http://tv.nrk.no/serie/ikke-gjoer-dette-hjemme/DMPV73000411/sesong-2/episode-1
IE = NRKIE
def test_allsubtitles(self):
self.DL.params['writesubtitles'] = True
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()
self.assertEqual(set(subtitles.keys()), {'nb-ttv'})
self.assertEqual(set(subtitles.keys()), {'nb-ttv', 'no'})
self.assertEqual(md5(subtitles['nb-ttv']), '67e06ff02d0deaf975e68f6cb8f6a149')
self.assertEqual(md5(subtitles['no']), 'fc01036074116d245ddc6ba6f679263b')
@is_download_test

yt_dlp/extractor/_extractors.py (View File)

@ -1402,7 +1402,6 @@ from .nrk import (
NRKSkoleIE,
NRKTVDirekteIE,
NRKTVEpisodeIE,
NRKTVEpisodesIE,
NRKTVSeasonIE,
NRKTVSeriesIE,
)

yt_dlp/extractor/digitalconcerthall.py (View File)

@ -1,7 +1,10 @@
import time
from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
jwt_decode_hs256,
parse_codecs,
try_get,
url_or_none,
@ -13,9 +16,6 @@ from ..utils.traversal import traverse_obj
class DigitalConcertHallIE(InfoExtractor):
IE_DESC = 'DigitalConcertHall extractor'
_VALID_URL = r'https?://(?:www\.)?digitalconcerthall\.com/(?P<language>[a-z]+)/(?P<type>film|concert|work)/(?P<id>[0-9]+)-?(?P<part>[0-9]+)?'
_OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
_USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
_ACCESS_TOKEN = None
_NETRC_MACHINE = 'digitalconcerthall'
_TESTS = [{
'note': 'Playlist with only one video',
@ -69,59 +69,157 @@ class DigitalConcertHallIE(InfoExtractor):
'params': {'skip_download': 'm3u8'},
'playlist_count': 1,
}]
_LOGIN_HINT = ('Use --username token --password ACCESS_TOKEN where ACCESS_TOKEN '
'is the "access_token_production" from your browser local storage')
_REFRESH_HINT = 'or else use a "refresh_token" with --username refresh --password REFRESH_TOKEN'
_OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
_CLIENT_ID = 'dch.webapp'
_CLIENT_SECRET = '2ySLN+2Fwb'
_USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
_OAUTH_HEADERS = {
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
'Origin': 'https://www.digitalconcerthall.com',
'Referer': 'https://www.digitalconcerthall.com/',
'User-Agent': _USER_AGENT,
}
_access_token = None
_access_token_expiry = 0
_refresh_token = None
def _perform_login(self, username, password):
login_token = self._download_json(
self._OAUTH_URL,
None, 'Obtaining token', errnote='Unable to obtain token', data=urlencode_postdata({
@property
def _access_token_is_expired(self):
return self._access_token_expiry - 30 <= int(time.time())
def _set_access_token(self, value):
self._access_token = value
self._access_token_expiry = traverse_obj(value, ({jwt_decode_hs256}, 'exp', {int})) or 0
def _cache_tokens(self, /):
self.cache.store(self._NETRC_MACHINE, 'tokens', {
'access_token': self._access_token,
'refresh_token': self._refresh_token,
})
def _fetch_new_tokens(self, invalidate=False):
if invalidate:
self.report_warning('Access token has been invalidated')
self._set_access_token(None)
if not self._access_token_is_expired:
return
if not self._refresh_token:
self._set_access_token(None)
self._cache_tokens()
raise ExtractorError(
'Access token has expired or been invalidated. '
'Get a new "access_token_production" value from your browser '
f'and try again, {self._REFRESH_HINT}', expected=True)
# If we only have a refresh token, we need a temporary "initial token" for the refresh flow
bearer_token = self._access_token or self._download_json(
self._OAUTH_URL, None, 'Obtaining initial token', 'Unable to obtain initial token',
data=urlencode_postdata({
'affiliate': 'none',
'grant_type': 'device',
'device_vendor': 'unknown',
# device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio
'device_model': 'unknown' if self._configuration_arg('prefer_combined_hls') else 'Safari',
'app_id': 'dch.webapp',
# device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio,
# but this is no longer effective since actual login is not possible anymore
'device_model': 'unknown',
'app_id': self._CLIENT_ID,
'app_distributor': 'berlinphil',
'app_version': '1.84.0',
'client_secret': '2ySLN+2Fwb',
}), headers={
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
'User-Agent': self._USER_AGENT,
})['access_token']
'app_version': '1.95.0',
'client_secret': self._CLIENT_SECRET,
}), headers=self._OAUTH_HEADERS)['access_token']
try:
login_response = self._download_json(
self._OAUTH_URL,
None, note='Logging in', errnote='Unable to login', data=urlencode_postdata({
'grant_type': 'password',
'username': username,
'password': password,
response = self._download_json(
self._OAUTH_URL, None, 'Refreshing token', 'Unable to refresh token',
data=urlencode_postdata({
'grant_type': 'refresh_token',
'refresh_token': self._refresh_token,
'client_id': self._CLIENT_ID,
'client_secret': self._CLIENT_SECRET,
}), headers={
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
'Referer': 'https://www.digitalconcerthall.com',
'Authorization': f'Bearer {login_token}',
'User-Agent': self._USER_AGENT,
**self._OAUTH_HEADERS,
'Authorization': f'Bearer {bearer_token}',
})
except ExtractorError as error:
if isinstance(error.cause, HTTPError) and error.cause.status == 401:
raise ExtractorError('Invalid username or password', expected=True)
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
self._set_access_token(None)
self._refresh_token = None
self._cache_tokens()
raise ExtractorError('Your tokens have been invalidated', expected=True)
raise
self._ACCESS_TOKEN = login_response['access_token']
self._set_access_token(response['access_token'])
if refresh_token := traverse_obj(response, ('refresh_token', {str})):
self.write_debug('New refresh token granted')
self._refresh_token = refresh_token
self._cache_tokens()
def _perform_login(self, username, password):
self.report_login()
if username == 'refresh':
self._refresh_token = password
self._fetch_new_tokens()
if username == 'token':
if not traverse_obj(password, {jwt_decode_hs256}):
raise ExtractorError(
f'The access token passed to yt-dlp is not valid. {self._LOGIN_HINT}', expected=True)
self._set_access_token(password)
self._cache_tokens()
if username in ('refresh', 'token'):
if self.get_param('cachedir') is not False:
token_type = 'access' if username == 'token' else 'refresh'
self.to_screen(f'Your {token_type} token has been cached to disk. To use the cached '
'token next time, pass --username cache along with any password')
return
if username != 'cache':
raise ExtractorError(
'Login with username and password is no longer supported '
f'for this site. {self._LOGIN_HINT}, {self._REFRESH_HINT}', expected=True)
# Try cached access_token
cached_tokens = self.cache.load(self._NETRC_MACHINE, 'tokens', default={})
self._set_access_token(cached_tokens.get('access_token'))
self._refresh_token = cached_tokens.get('refresh_token')
if not self._access_token_is_expired:
return
# Try cached refresh_token
self._fetch_new_tokens(invalidate=True)
def _real_initialize(self):
if not self._ACCESS_TOKEN:
self.raise_login_required(method='password')
if not self._access_token:
self.raise_login_required(
'All content on this site is only available for registered users. '
f'{self._LOGIN_HINT}, {self._REFRESH_HINT}', method=None)
def _entries(self, items, language, type_, **kwargs):
for item in items:
video_id = item['id']
stream_info = self._download_json(
self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
'Accept': 'application/json',
'Authorization': f'Bearer {self._ACCESS_TOKEN}',
'Accept-Language': language,
'User-Agent': self._USER_AGENT,
})
for should_retry in (True, False):
self._fetch_new_tokens(invalidate=not should_retry)
try:
stream_info = self._download_json(
self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
'Accept': 'application/json',
'Authorization': f'Bearer {self._access_token}',
'Accept-Language': language,
'User-Agent': self._USER_AGENT,
})
break
except ExtractorError as error:
if should_retry and isinstance(error.cause, HTTPError) and error.cause.status == 401:
continue
raise
formats = []
for m3u8_url in traverse_obj(stream_info, ('channel', ..., 'stream', ..., 'url', {url_or_none})):
@ -157,7 +255,6 @@ class DigitalConcertHallIE(InfoExtractor):
'Accept': 'application/json',
'Accept-Language': language,
'User-Agent': self._USER_AGENT,
'Authorization': f'Bearer {self._ACCESS_TOKEN}',
})
videos = [vid_info] if type_ == 'film' else traverse_obj(vid_info, ('_embedded', ..., ...))
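
To illustrate the token-based login that `_LOGIN_HINT` and `_REFRESH_HINT` describe, here is a minimal sketch using yt-dlp's Python API (not part of this diff); the token value and concert URL are placeholders, and `username`/`password` are the standard YoutubeDL options that `_perform_login` receives:

import yt_dlp

# Placeholder: the "access_token_production" value copied from the browser's
# local storage for digitalconcerthall.com; alternatively pass username 'refresh'
# with a refresh token, per _REFRESH_HINT.
ACCESS_TOKEN = '<access_token_production>'

ydl_opts = {
    'username': 'token',  # the literal string 'token' selects the access-token login path
    'password': ACCESS_TOKEN,
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['<digitalconcerthall-concert-url>'])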

yt_dlp/extractor/nrk.py (View File)

@ -1,4 +1,5 @@
import itertools
import json
import random
import re
@ -7,11 +8,12 @@ from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
determine_ext,
float_or_none,
int_or_none,
parse_duration,
parse_iso8601,
str_or_none,
try_get,
traverse_obj,
url_or_none,
urljoin,
)
@ -25,18 +27,23 @@ class NRKBaseIE(InfoExtractor):
nrk-od-no\.telenorcdn\.net|
minicdn-od\.nrk\.no/od/nrkhd-osl-rr\.netwerk\.no/no
)/'''
_NETRC_MACHINE = 'nrk'
_LOGIN_URL = 'https://innlogging.nrk.no/logginn'
_AUTH_TOKEN = ''
_API_CALL_HEADERS = {'Accept': 'application/json;device=player-core'}
def _extract_nrk_formats_and_subtitles(self, asset_url, video_id):
def _extract_nrk_formats(self, asset_url, video_id):
if re.match(r'https?://[^/]+\.akamaihd\.net/i/', asset_url):
return self._extract_akamai_formats(asset_url, video_id)
asset_url = re.sub(r'(?:bw_(?:low|high)=\d+|no_audio_only)&?', '', asset_url)
formats = self._extract_m3u8_formats(
asset_url = re.sub(r'(?:bw_(?:low|high)=\d+|no_audio_only|adap=.+?\b)&?', '', asset_url)
formats, subtitles = self._extract_m3u8_formats_and_subtitles(
asset_url, video_id, 'mp4', 'm3u8_native', fatal=False)
if not formats and re.search(self._CDN_REPL_REGEX, asset_url):
formats = self._extract_m3u8_formats(
formats, subtitles = self._extract_m3u8_formats_and_subtitles(
re.sub(self._CDN_REPL_REGEX, '://nrk-od-%02d.akamaized.net/no/' % random.randint(0, 99), asset_url),
video_id, 'mp4', 'm3u8_native', fatal=False)
return formats
return formats, subtitles
def _raise_error(self, data):
MESSAGES = {
@ -47,7 +54,7 @@ class NRKBaseIE(InfoExtractor):
}
message_type = data.get('messageType', '')
# Can be ProgramIsGeoBlocked or ChannelIsGeoBlocked*
if 'IsGeoBlocked' in message_type or try_get(data, lambda x: x['usageRights']['isGeoBlocked']) is True:
if 'IsGeoBlocked' in message_type or traverse_obj(data, ('usageRights', 'isGeoBlocked')) is True:
self.raise_geo_restricted(
msg=MESSAGES.get('ProgramIsGeoBlocked'),
countries=self._GEO_COUNTRIES)
@ -58,7 +65,7 @@ class NRKBaseIE(InfoExtractor):
return self._download_json(
urljoin('https://psapi.nrk.no/', path),
video_id, note or f'Downloading {item} JSON',
fatal=fatal, query=query)
fatal=fatal, query=query, headers=self._API_CALL_HEADERS)
class NRKIE(NRKBaseIE):
@ -73,17 +80,20 @@ class NRKIE(NRKBaseIE):
)
(?P<id>[^?\#&]+)
'''
_TESTS = [{
# video
'url': 'http://www.nrk.no/video/PS*150533',
'md5': 'f46be075326e23ad0e524edfcb06aeb6',
'md5': '2b88a652ad2e275591e61cf550887eec',
'info_dict': {
'id': '150533',
'ext': 'mp4',
'title': 'Dompap og andre fugler i Piip-Show',
'description': 'md5:d9261ba34c43b61c812cb6b0269a5c8f',
'duration': 262,
'timestamp': 1395751833,
'upload_date': '20140325',
'thumbnail': 'https://gfx.nrk.no/0mZgeckEzRU6qTWrbQHD2QcyralHrYB08wBvh-K-AtAQ',
'alt_title': 'md5:d9261ba34c43b61c812cb6b0269a5c8f',
},
}, {
# audio
@ -95,6 +105,10 @@ class NRKIE(NRKBaseIE):
'title': 'Slik høres internett ut når du er blind',
'description': 'md5:a621f5cc1bd75c8d5104cb048c6b8568',
'duration': 20,
'alt_title': 'Cathrine Lie Wathne er blind, og bruker hurtigtaster for å navigere seg rundt på ulike nettsider.',
'upload_date': '20140425',
'timestamp': 1398429565,
'thumbnail': 'https://gfx.nrk.no/urxQMSXF-WnbfjBH5ke2igLGyN27EdJVWZ6FOsEAclhA',
},
}, {
'url': 'nrk:ecc1b952-96dc-4a98-81b9-5296dc7a98d9',
@ -144,18 +158,10 @@ class NRKIE(NRKBaseIE):
def _real_extract(self, url):
video_id = self._match_id(url).split('/')[-1]
def call_playback_api(item, query=None):
try:
return self._call_api(f'playback/{item}/program/{video_id}', video_id, item, query=query)
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 400:
return self._call_api(f'playback/{item}/{video_id}', video_id, item, query=query)
raise
# known values for preferredCdn: akamai, iponly, minicdn and telenor
manifest = call_playback_api('manifest', {'preferredCdn': 'akamai'})
manifest = self._call_api(f'playback/manifest/{video_id}', video_id, 'manifest', query={'preferredCdn': 'akamai'})
video_id = try_get(manifest, lambda x: x['id'], str) or video_id
video_id = manifest.get('id') or video_id
if manifest.get('playability') == 'nonPlayable':
self._raise_error(manifest['nonPlayable'])
@ -163,17 +169,22 @@ class NRKIE(NRKBaseIE):
playable = manifest['playable']
formats = []
for asset in playable['assets']:
if not isinstance(asset, dict):
continue
if asset.get('encrypted'):
subtitles = {}
has_drm = False
for asset in traverse_obj(playable, ('assets', ..., {dict})):
encryption_scheme = asset.get('encryptionScheme')
if encryption_scheme not in (None, 'none', 'statickey'):
self.report_warning(f'Skipping asset with unsupported encryption scheme "{encryption_scheme}"')
has_drm = True
continue
format_url = url_or_none(asset.get('url'))
if not format_url:
continue
asset_format = (asset.get('format') or '').lower()
if asset_format == 'hls' or determine_ext(format_url) == 'm3u8':
formats.extend(self._extract_nrk_formats(format_url, video_id))
fmts, subs = self._extract_nrk_formats_and_subtitles(format_url, video_id)
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
elif asset_format == 'mp3':
formats.append({
'url': format_url,
@ -181,19 +192,22 @@ class NRKIE(NRKBaseIE):
'vcodec': 'none',
})
data = call_playback_api('metadata')
if not formats and has_drm:
self.report_drm(video_id)
preplay = data['preplay']
titles = preplay['titles']
title = titles['title']
data = self._call_api(traverse_obj(manifest, ('_links', 'metadata', 'href', {str})), video_id, 'metadata')
preplay = data.get('preplay')
titles = preplay.get('titles')
title = titles.get('title')
alt_title = titles.get('subtitle')
description = try_get(preplay, lambda x: x['description'].replace('\r', '\n'))
duration = parse_duration(playable.get('duration')) or parse_duration(data.get('duration'))
description = preplay.get('description')
# Use m3u8 vod dueration for NRKSkoleIE because of incorrect duration in metadata
duration = parse_duration(playable.get('duration')) or parse_duration(data.get('duration')) or self._extract_m3u8_vod_duration(formats[0]['url'], video_id)
thumbnails = []
for image in try_get(
preplay, lambda x: x['poster']['images'], list) or []:
for image in traverse_obj(preplay, ('poster', 'images', {list})) or []:
if not isinstance(image, dict):
continue
image_url = url_or_none(image.get('url'))
@ -205,13 +219,13 @@ class NRKIE(NRKBaseIE):
'height': int_or_none(image.get('pixelHeight')),
})
subtitles = {}
for sub in try_get(playable, lambda x: x['subtitles'], list) or []:
for sub in traverse_obj(playable, ('subtitles', {list})) or []:
if not isinstance(sub, dict):
continue
sub_url = url_or_none(sub.get('webVtt'))
if not sub_url:
continue
sub_key = str_or_none(sub.get('language')) or 'nb'
sub_type = str_or_none(sub.get('type'))
if sub_type:
@ -220,8 +234,26 @@ class NRKIE(NRKBaseIE):
'url': sub_url,
})
legal_age = try_get(
data, lambda x: x['legalAge']['body']['rating']['code'], str)
chapters = []
if data.get('skipDialogInfo'):
chapters = [item for item in [{
'start_time': float_or_none(traverse_obj(data, ('skipDialogInfo', 'startIntroInSeconds'))),
'end_time': float_or_none(traverse_obj(data, ('skipDialogInfo', 'endIntroInSeconds'))),
'title': 'Intro',
}, {
'start_time': float_or_none(traverse_obj(data, ('skipDialogInfo', 'startCreditsInSeconds'))),
'end_time': duration,
'title': 'Outro',
}] if item['start_time'] != item['end_time']]
if preplay.get('indexPoints'):
seconds_or_none = lambda x: float_or_none(parse_duration(x))
chapters += traverse_obj(preplay, ('indexPoints', ..., {
'start_time': ('startPoint', {seconds_or_none}),
'end_time': ('endPoint', {seconds_or_none}),
'title': ('title', {lambda x: x}),
}))
chapters = sorted(chapters, key=lambda x: x['start_time']) if chapters else None
legal_age = traverse_obj(data, ('legalAge', 'body', 'rating', 'code'))
# https://en.wikipedia.org/wiki/Norwegian_Media_Authority
age_limit = None
if legal_age:
@ -230,7 +262,7 @@ class NRKIE(NRKBaseIE):
elif legal_age.isdigit():
age_limit = int_or_none(legal_age)
is_series = try_get(data, lambda x: x['_links']['series']['name']) == 'series'
is_series = traverse_obj(data, ('_links', 'series', 'name')) == 'series'
info = {
'id': video_id,
@ -242,13 +274,23 @@ class NRKIE(NRKBaseIE):
'age_limit': age_limit,
'formats': formats,
'subtitles': subtitles,
'timestamp': parse_iso8601(try_get(manifest, lambda x: x['availability']['onDemand']['from'], str)),
'chapters': chapters,
'timestamp': parse_iso8601(traverse_obj(data, ('availability', 'onDemand', 'from'))),
}
if is_series:
series = season_id = season_number = episode = episode_number = None
programs = self._call_api(
f'programs/{video_id}', video_id, 'programs', fatal=False)
matched_dates = [
int(match.group()) // 1000
for date in [
traverse_obj(programs, ('firstTimeTransmitted', 'publicationDate')),
traverse_obj(programs, ('usageRights', 'availableFrom')),
] if date for match in [re.search(r'\d+', date)] if match
]
if matched_dates:
info.update({'timestamp': min(info['timestamp'], *matched_dates)})
if programs and isinstance(programs, dict):
series = str_or_none(programs.get('seriesTitle'))
season_id = str_or_none(programs.get('seasonId'))
@ -284,8 +326,38 @@ class NRKIE(NRKBaseIE):
return info
def _perform_login(self, username, password):
try:
self._download_json(
self._LOGIN_URL, None, headers={'Content-Type': 'application/json; charset=UTF-8', 'accept': 'application/json; charset=utf-8'},
data=json.dumps({
'clientId': '',
'hashedPassword': {'current': {
'hash': password,
'recipe': {
'algorithm': 'cleartext',
'salt': '',
},
},
},
'password': password,
'username': username,
}).encode())
class NRKTVIE(InfoExtractor):
self._download_webpage('https://tv.nrk.no/auth/web/login/opsession', None)
response = self._download_json('https://tv.nrk.no/auth/session/tokenforsub/_', None)
self._AUTH_TOKEN = traverse_obj(response, ('session', 'accessToken'))
self._API_CALL_HEADERS['authorization'] = f'Bearer {self._AUTH_TOKEN}'
except ExtractorError as e:
message = None
if isinstance(e.cause, HTTPError) and e.cause.status in (401, 400):
resp = self._parse_json(
e.cause.response.read().decode(), None, fatal=False) or {}
message = next((error['message'] for error in resp['errors'] if error['field'] == 'Password'), None)
self.report_warning(message or 'Unable to log in')
class NRKTVIE(NRKBaseIE):
IE_DESC = 'NRK TV and NRK Radio'
_EPISODE_RE = r'(?P<id>[a-zA-Z]{4}\d{8})'
_VALID_URL = rf'https?://(?:tv|radio)\.nrk(?:super)?\.no/(?:[^/]+/)*{_EPISODE_RE}'
@ -307,6 +379,14 @@ class NRKTVIE(InfoExtractor):
'ext': 'vtt',
}],
},
'upload_date': '20170627',
'chapters': [{'start_time': 0, 'end_time': 2213.0, 'title': '<Untitled Chapter 1>'}, {'start_time': 2213.0, 'end_time': 2223.44, 'title': 'Outro'}],
'timestamp': 1498591822,
'thumbnail': 'https://gfx.nrk.no/myRSc4vuFlahB60P3n6swwRTQUZI1LqJZl9B7icZFgzA',
'alt_title': 'md5:46923a6e6510eefcce23d5ef2a58f2ce',
},
'params': {
'skip_download': True,
},
}, {
'url': 'https://tv.nrk.no/serie/20-spoersmaal-tv/MUHH48000314/23-05-2014',
@ -318,9 +398,31 @@ class NRKTVIE(InfoExtractor):
'alt_title': '23. mai 2014',
'description': 'md5:bdea103bc35494c143c6a9acdd84887a',
'duration': 1741,
'age_limit': 0,
'series': '20 spørsmål',
'episode': '23. mai 2014',
'age_limit': 0,
'upload_date': '20140523',
'thumbnail': 'https://gfx.nrk.no/u7uCe79SEfPVGRAGVp2_uAZnNc4mfz_kjXg6Bgek8lMQ',
'season_id': '126936',
'season_number': 2014,
'season': 'Season 2014',
'chapters': [
{'start_time': 0.0, 'end_time': 39.0, 'title': 'Intro'},
{'start_time': 0.0, 'title': 'Velkommen', 'end_time': 152.32},
{'start_time': 152.32, 'title': 'Tannpirker', 'end_time': 304.76},
{'start_time': 304.76, 'title': 'Orgelbrus', 'end_time': 513.48},
{'start_time': 513.48, 'title': 'G-streng', 'end_time': 712.96},
{'start_time': 712.96, 'title': 'Medalje', 'end_time': 837.76},
{'start_time': 837.76, 'title': 'Globus', 'end_time': 1124.48},
{'start_time': 1124.48, 'title': 'Primstav', 'end_time': 1417.4},
{'start_time': 1417.4, 'title': 'Fyr', 'end_time': 1721.0},
{'start_time': 1721.0, 'end_time': 1741.0, 'title': 'Outro'},
],
'episode_number': 3,
'timestamp': 1400871900,
},
'params': {
'skip_download': True,
},
}, {
'url': 'https://tv.nrk.no/program/mdfp15000514',
@ -333,6 +435,18 @@ class NRKTVIE(InfoExtractor):
'series': 'Kunnskapskanalen',
'episode': 'Grunnlovsjubiléet - Stor ståhei for ingenting',
'age_limit': 0,
'upload_date': '20140524',
'episode_number': 17,
'chapters': [
{'start_time': 0, 'end_time': 4595.0, 'title': '<Untitled Chapter 1>'},
{'start_time': 4595.0, 'end_time': 4605.08, 'title': 'Outro'},
],
'season': 'Season 2014',
'timestamp': 1400937600,
'thumbnail': 'https://gfx.nrk.no/D2u6-EyVUZpVCq0PdSNHRgdBZCV40ekpk6s9fZWiMtyg',
'season_number': 2014,
'season_id': '39240',
'alt_title': 'Grunnlovsjubiléet - Stor ståhei for ingenting',
},
'params': {
'skip_download': True,
@ -343,23 +457,51 @@ class NRKTVIE(InfoExtractor):
'info_dict': {
'id': 'MSPO40010515',
'ext': 'mp4',
'title': 'Sprint fri teknikk, kvinner og menn 06.01.2015',
'description': 'md5:c03aba1e917561eface5214020551b7a',
'title': 'Tour de Ski - Sprint fri teknikk, kvinner og menn',
'description': 'md5:1f97a41f05a9486ee00c56f35f82993d',
'age_limit': 0,
'episode': 'Sprint fri teknikk, kvinner og menn',
'series': 'Tour de Ski',
'thumbnail': 'https://gfx.nrk.no/s9vNwGPGN-Un-UCvitD09we9HRLDxisnipA9K__d5c3Q',
'season_id': '53512',
'chapters': [
{'start_time': 0, 'end_time': 6938.0, 'title': '<Untitled Chapter 1>'},
{'start_time': 6938.0, 'end_time': 6947.52, 'title': 'Outro'},
],
'season_number': 2015,
'episode_number': 5,
'upload_date': '20150106',
'duration': 6947.52,
'timestamp': 1420545563,
'alt_title': 'Sprint fri teknikk, kvinner og menn',
'season': 'Season 2015',
},
'params': {
'skip_download': True,
},
'expected_warnings': ['Failed to download m3u8 information'],
'skip': 'particular part is not supported currently',
}, {
'url': 'https://tv.nrk.no/serie/tour-de-ski/MSPO40010515/06-01-2015',
'info_dict': {
'id': 'MSPO40010515',
'ext': 'mp4',
'title': 'Sprint fri teknikk, kvinner og menn 06.01.2015',
'description': 'md5:c03aba1e917561eface5214020551b7a',
'title': 'Tour de Ski - Sprint fri teknikk, kvinner og menn',
'description': 'md5:1f97a41f05a9486ee00c56f35f82993d',
'age_limit': 0,
'episode': 'Sprint fri teknikk, kvinner og menn',
'series': 'Tour de Ski',
'thumbnail': 'https://gfx.nrk.no/s9vNwGPGN-Un-UCvitD09we9HRLDxisnipA9K__d5c3Q',
'season_id': '53512',
'chapters': [
{'start_time': 0, 'end_time': 6938.0, 'title': '<Untitled Chapter 1>'},
{'start_time': 6938.0, 'end_time': 6947.52, 'title': 'Outro'},
],
'season_number': 2015,
'episode_number': 5,
'upload_date': '20150106',
'duration': 6947.52,
'timestamp': 1420545563,
'alt_title': 'Sprint fri teknikk, kvinner og menn',
'season': 'Season 2015',
},
'expected_warnings': ['Failed to download m3u8 information'],
'skip': 'Ikke tilgjengelig utenfor Norge',
@ -380,6 +522,7 @@ class NRKTVIE(InfoExtractor):
'params': {
'skip_download': True,
},
'skip': 'ProgramRightsHasExpired',
}, {
'url': 'https://tv.nrk.no/serie/nytt-paa-nytt/MUHH46000317/27-01-2017',
'info_dict': {
@ -413,7 +556,7 @@ class NRKTVIE(InfoExtractor):
f'nrk:{video_id}', ie=NRKIE.ie_key(), video_id=video_id)
class NRKTVEpisodeIE(InfoExtractor):
class NRKTVEpisodeIE(NRKBaseIE):
_VALID_URL = r'https?://tv\.nrk\.no/serie/(?P<id>[^/]+/sesong/(?P<season_number>\d+)/episode/(?P<episode_number>\d+))'
_TESTS = [{
'url': 'https://tv.nrk.no/serie/hellums-kro/sesong/1/episode/2',
@ -421,13 +564,24 @@ class NRKTVEpisodeIE(InfoExtractor):
'id': 'MUHH36005220',
'ext': 'mp4',
'title': 'Hellums kro - 2. Kro, krig og kjærlighet',
'description': 'md5:ad92ddffc04cea8ce14b415deef81787',
'description': 'md5:b32a7dc0b1ed27c8064f58b97bda4350',
'duration': 1563.92,
'series': 'Hellums kro',
'season_number': 1,
'episode_number': 2,
'episode': '2. Kro, krig og kjærlighet',
'age_limit': 6,
'timestamp': 1572584520,
'upload_date': '20191101',
'thumbnail': 'https://gfx.nrk.no/2_4mhU2JhR-8IYRC_OMmAQDbbOHgwcHqgi2sBrNrsjkg',
'alt_title': '2. Kro, krig og kjærlighet',
'season': 'Season 1',
'season_id': '124163',
'chapters': [
{'start_time': 0, 'end_time': 29.0, 'title': '<Untitled Chapter 1>'},
{'start_time': 29.0, 'end_time': 50.0, 'title': 'Intro'},
{'start_time': 1530.0, 'end_time': 1563.92, 'title': 'Outro'},
],
},
'params': {
'skip_download': True,
@ -453,26 +607,14 @@ class NRKTVEpisodeIE(InfoExtractor):
}]
def _real_extract(self, url):
display_id, season_number, episode_number = self._match_valid_url(url).groups()
# HEADRequest(url) only works if a regular GET request was recently made by anyone for the specific URL being requested.
response = self._request_webpage(url, None, expected_status=True)
webpage = self._download_webpage(url, display_id)
nrk_id = self._match_id(url)
info = self._search_json_ld(webpage, display_id, default={})
nrk_id = info.get('@id') or self._html_search_meta(
'nrk:program-id', webpage, default=None) or self._search_regex(
rf'data-program-id=["\']({NRKTVIE._EPISODE_RE})', webpage,
'nrk id')
assert re.match(NRKTVIE._EPISODE_RE, nrk_id)
info.update({
'_type': 'url',
'id': nrk_id,
'url': f'nrk:{nrk_id}',
'ie_key': NRKIE.ie_key(),
'season_number': int(season_number),
'episode_number': int(episode_number),
})
return info
return self.url_result(
response.url, NRKTVIE.ie_key(), nrk_id, url_transparent=True,
)
class NRKTVSerieBaseIE(NRKBaseIE):
@ -482,6 +624,9 @@ class NRKTVSerieBaseIE(NRKBaseIE):
entries = []
for episode in entry_list:
nrk_id = episode.get('prfId') or episode.get('episodeId')
if traverse_obj(episode, ('availability', 'status')) == 'expired':
self.report_warning(episode['availability'].get('label'), nrk_id)
continue
if not nrk_id or not isinstance(nrk_id, str):
continue
entries.append(self.url_result(
@ -508,18 +653,18 @@ class NRKTVSerieBaseIE(NRKBaseIE):
if not assets_key:
break
# Extract entries
entries = try_get(
entries = traverse_obj(
embedded,
(lambda x: x[assets_key]['_embedded'][assets_key],
lambda x: x[assets_key]),
list)
(assets_key, '_embedded', assets_key, {list}),
(assets_key, {list}),
)
yield from self._extract_entries(entries)
# Find next URL
next_url_path = try_get(
next_url_path = traverse_obj(
data,
(lambda x: x['_links']['next']['href'],
lambda x: x['_embedded'][assets_key]['_links']['next']['href']),
str)
('_links', 'next', 'href'),
('_embedded', assets_key, '_links', 'next', 'href'),
)
if not next_url_path:
break
data = self._call_api(
@ -548,6 +693,27 @@ class NRKTVSeasonIE(NRKTVSerieBaseIE):
'title': 'Sesong 1',
},
'playlist_mincount': 30,
}, {
'url': 'https://tv.nrk.no/serie/presten/sesong/ekstramateriale',
'info_dict': {
'id': 'MUHH47005117',
'ext': 'mp4',
'description': '',
'thumbnail': 'https://gfx.nrk.no/sJZroQqD2P8wGMMl5ADznwqiIlAXaCpNofA2pIhe3udA',
'alt_title': 'Bloopers: Episode 1',
'chapters': [
{'start_time': 0, 'end_time': 356.0, 'title': '<Untitled Chapter 1>'},
{'start_time': 356.0, 'end_time': 365.8, 'title': 'Outro'},
],
'upload_date': '20180302',
'timestamp': 1519966800,
'title': 'Presten',
'age_limit': 0,
'duration': 365.8,
},
'params': {
'skip_download': True,
},
}, {
# no /sesong/ in path
'url': 'https://tv.nrk.no/serie/lindmo/2016',
@ -572,6 +738,7 @@ class NRKTVSeasonIE(NRKTVSerieBaseIE):
'title': 'September 2015',
},
'playlist_mincount': 841,
'skip': 'ProgramRightsHasExpired',
}, {
# 180 entries, single page
'url': 'https://tv.nrk.no/serie/spangas/sesong/1',
@ -594,21 +761,20 @@ class NRKTVSeasonIE(NRKTVSerieBaseIE):
else super().suitable(url))
def _real_extract(self, url):
mobj = self._match_valid_url(url)
domain = mobj.group('domain')
serie_kind = mobj.group('serie_kind')
serie = mobj.group('serie')
season_id = mobj.group('id') or mobj.group('id_2')
domain, serie_kind, serie, season_id, season_id_2 = self._match_valid_url(url).group(
'domain', 'serie_kind', 'serie', 'id', 'id_2')
season_id = season_id or season_id_2
display_id = f'{serie}/{season_id}'
api_suffix = f'/seasons/{season_id}' if season_id != 'ekstramateriale' else '/extramaterial'
data = self._call_api(
f'{domain}/catalog/{self._catalog_name(serie_kind)}/{serie}/seasons/{season_id}',
f'{domain}/catalog/{self._catalog_name(serie_kind)}/{serie}{api_suffix}',
display_id, 'season', query={'pageSize': 50})
title = try_get(data, lambda x: x['titles']['title'], str) or display_id
return self.playlist_result(
self._entries(data, display_id),
display_id, title)
self._entries(data, display_id), display_id,
title=traverse_obj(data, ('titles', 'title', {str})))
class NRKTVSeriesIE(NRKTVSerieBaseIE):
@ -666,7 +832,7 @@ class NRKTVSeriesIE(NRKTVSerieBaseIE):
'info_dict': {
'id': 'dickie-dick-dickens',
'title': 'Dickie Dick Dickens',
'description': 'md5:19e67411ffe57f7dce08a943d7a0b91f',
'description': 'md5:605464fab26d06b1ce6a11c3ea37d36d',
},
'playlist_mincount': 8,
}, {
@ -676,6 +842,8 @@ class NRKTVSeriesIE(NRKTVSerieBaseIE):
'url': 'https://radio.nrk.no/podkast/ulrikkes_univers',
'info_dict': {
'id': 'ulrikkes_univers',
'title': 'Ulrikkes univers',
'description': 'md5:8af9fc2ee4aecd7f91777383fde50dcc',
},
'playlist_mincount': 10,
}, {
@ -699,16 +867,18 @@ class NRKTVSeriesIE(NRKTVSerieBaseIE):
series = self._call_api(
f'{domain}/catalog/{self._catalog_name(serie_kind)}/{series_id}',
series_id, 'serie', query={size_prefix + 'ageSize': 50})
titles = try_get(series, [
lambda x: x['titles'],
lambda x: x[x['type']]['titles'],
lambda x: x[x['seriesType']]['titles'],
]) or {}
titles = traverse_obj(
series,
(..., 'titles'),
(..., 'type', 'titles'),
(..., 'seriesType', 'titles'),
get_all=False,
)
entries = []
entries.extend(self._entries(series, series_id))
embedded = series.get('_embedded') or {}
linked_seasons = try_get(series, lambda x: x['_links']['seasons']) or []
linked_seasons = traverse_obj(series, ('_links', 'seasons')) or []
embedded_seasons = embedded.get('seasons') or []
if len(linked_seasons) > len(embedded_seasons):
for season in linked_seasons:
@ -731,7 +901,7 @@ class NRKTVSeriesIE(NRKTVSerieBaseIE):
entries, series_id, titles.get('title'), titles.get('subtitle'))
class NRKTVDirekteIE(NRKTVIE): # XXX: Do not subclass from concrete IE
class NRKTVDirekteIE(NRKBaseIE):
IE_DESC = 'NRK TV Direkte and NRK Radio Direkte'
_VALID_URL = r'https?://(?:tv|radio)\.nrk\.no/direkte/(?P<id>[^/?#&]+)'
@ -743,21 +913,29 @@ class NRKTVDirekteIE(NRKTVIE): # XXX: Do not subclass from concrete IE
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
return self.url_result(
f'nrk:{video_id}', ie=NRKIE.ie_key(), video_id=video_id)
class NRKRadioPodkastIE(InfoExtractor):
class NRKRadioPodkastIE(NRKBaseIE):
_VALID_URL = r'https?://radio\.nrk\.no/pod[ck]ast/(?:[^/]+/)+(?P<id>l_[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})'
_TESTS = [{
'url': 'https://radio.nrk.no/podkast/ulrikkes_univers/l_96f4f1b0-de54-4e6a-b4f1-b0de54fe6af8',
'md5': '8d40dab61cea8ab0114e090b029a0565',
'md5': 'a68c3564be2f4426254f026c95a06348',
'info_dict': {
'id': 'MUHH48000314AA',
'ext': 'mp4',
'title': '20 spørsmål 23.05.2014',
'description': 'md5:bdea103bc35494c143c6a9acdd84887a',
'duration': 1741,
'series': '20 spørsmål',
'episode': '23.05.2014',
'id': 'l_96f4f1b0-de54-4e6a-b4f1-b0de54fe6af8',
'ext': 'mp3',
'timestamp': 1522897200,
'alt_title': 'md5:06eae9f8c8ccf0718b54c83654e65550',
'upload_date': '20180405',
'thumbnail': 'https://gfx.nrk.no/CEDlVkEKxLYiBZ-CXjxSxgduDdaL-a4XTZlar9AoJFOA',
'description': '',
'title': 'Jeg er sinna og det må du tåle!',
'age_limit': 0,
'duration': 1682.0,
},
}, {
'url': 'https://radio.nrk.no/podcast/ulrikkes_univers/l_96f4f1b0-de54-4e6a-b4f1-b0de54fe6af8',
@ -776,15 +954,16 @@ class NRKRadioPodkastIE(InfoExtractor):
f'nrk:{video_id}', ie=NRKIE.ie_key(), video_id=video_id)
class NRKPlaylistBaseIE(InfoExtractor):
class NRKPlaylistBaseIE(NRKBaseIE):
def _extract_description(self, webpage):
pass
def _real_extract(self, url):
playlist_id = self._match_id(url)
webpage = self._download_webpage(url, playlist_id)
# Uses the render HTML endpoint instead of the regular article URL to prevent unrelated videos from being downloaded
# if .rich[data-video-id] elements appear in the "related articles" section too instead of just the main article.
webpage = self._download_webpage(f'https://www.nrk.no/serum/api/render/{playlist_id.split("-")[-1]}', playlist_id)
entries = [
self.url_result(f'nrk:{video_id}', NRKIE.ie_key())
for video_id in re.findall(self._ITEM_RE, webpage)
@ -800,6 +979,8 @@ class NRKPlaylistBaseIE(InfoExtractor):
class NRKPlaylistIE(NRKPlaylistBaseIE):
_VALID_URL = r'https?://(?:www\.)?nrk\.no/(?!video|skole)(?:[^/]+/)+(?P<id>[^/]+)'
_ITEM_RE = r'class="[^"]*\brich\b[^"]*"[^>]+data-video-id="([^"]+)"'
_TITLE_RE = r'class="[^"]*\barticle-title\b[^"]*"[^>]*>([^<]+)<'
_DESCRIPTION_RE = r'class="[^"]*[\s"]article-lead[\s"][^>]*>[^<]*<p>([^<]*)<'
_TESTS = [{
'url': 'http://www.nrk.no/troms/gjenopplev-den-historiske-solformorkelsen-1.12270763',
'info_dict': {
@ -819,42 +1000,29 @@ class NRKPlaylistIE(NRKPlaylistBaseIE):
}]
def _extract_title(self, webpage):
return self._og_search_title(webpage, fatal=False)
return re.search(self._TITLE_RE, webpage).group(1)
def _extract_description(self, webpage):
return self._og_search_description(webpage)
return re.search(self._DESCRIPTION_RE, webpage).group(1)
class NRKTVEpisodesIE(NRKPlaylistBaseIE):
_VALID_URL = r'https?://tv\.nrk\.no/program/[Ee]pisodes/[^/]+/(?P<id>\d+)'
_ITEM_RE = rf'data-episode=["\']{NRKTVIE._EPISODE_RE}'
_TESTS = [{
'url': 'https://tv.nrk.no/program/episodes/nytt-paa-nytt/69031',
'info_dict': {
'id': '69031',
'title': 'Nytt på nytt, sesong: 201210',
},
'playlist_count': 4,
}]
def _extract_title(self, webpage):
return self._html_search_regex(
r'<h1>([^<]+)</h1>', webpage, 'title', fatal=False)
class NRKSkoleIE(InfoExtractor):
class NRKSkoleIE(NRKBaseIE):
IE_DESC = 'NRK Skole'
_VALID_URL = r'https?://(?:www\.)?nrk\.no/skole/?\?.*\bmediaId=(?P<id>\d+)'
_TESTS = [{
'url': 'https://www.nrk.no/skole/?page=search&q=&mediaId=14099',
'md5': '18c12c3d071953c3bf8d54ef6b2587b7',
'md5': '1d54ec4cff70d8f2c7909d1922514af2',
'info_dict': {
'id': '6021',
'ext': 'mp4',
'title': 'Genetikk og eneggede tvillinger',
'description': 'md5:3aca25dcf38ec30f0363428d2b265f8d',
'description': 'md5:7c0cc42d35d99bbc58f45639cdbcc163',
'duration': 399,
'thumbnail': 'https://gfx.nrk.no/5SN-Uq11iR3ADwrCwTv0bAKbbBXXNpVJsaCLGiU8lFoQ',
'timestamp': 1205622000,
'upload_date': '20080315',
'alt_title': '',
},
}, {
'url': 'https://www.nrk.no/skole/?page=objectives&subject=naturfag&objective=K15114&mediaId=19355',
@ -863,9 +1031,14 @@ class NRKSkoleIE(InfoExtractor):
def _real_extract(self, url):
video_id = self._match_id(url)
nrk_id = self._download_json(
response = self._download_json(
f'https://nrkno-skole-prod.kube.nrk.no/skole/api/media/{video_id}',
video_id)['psId']
return self.url_result(f'nrk:{nrk_id}')
video_id)
nrk_id = response['psId']
return self.url_result(
f'nrk:{nrk_id}', NRKIE, nrk_id, url_transparent=True,
**traverse_obj(response, {
'title': ('title', {str}),
'timestamp': ('airedDate', {parse_iso8601}),
'description': ('summary', {str}),
}))

yt_dlp/extractor/reddit.py (View File)

@ -259,6 +259,8 @@ class RedditIE(InfoExtractor):
f'https://www.reddit.com/{slug}/.json', video_id, expected_status=403)
except ExtractorError as e:
if isinstance(e.cause, json.JSONDecodeError):
if self._get_cookies('https://www.reddit.com/').get('reddit_session'):
raise ExtractorError('Your IP address is unable to access the Reddit API', expected=True)
self.raise_login_required('Account authentication is required')
raise

yt_dlp/version.py (View File)

@ -1,8 +1,8 @@
# Autogenerated by devscripts/update-version.py
__version__ = '2024.11.04'
__version__ = '2024.11.18'
RELEASE_GIT_HEAD = '197d0b03b6a3c8fe4fa5ace630eeffec629bf72c'
RELEASE_GIT_HEAD = '7ea2787920cccc6b8ea30791993d114fbd564434'
VARIANT = None
@ -12,4 +12,4 @@ CHANNEL = 'stable'
ORIGIN = 'yt-dlp/yt-dlp'
_pkg_version = '2024.11.04'
_pkg_version = '2024.11.18'