Compare commits

...

12 Commits

Author SHA1 Message Date
Mozi
a459be1f2f
Merge 8779a8897c into f919729538 2024-11-18 12:31:11 +03:00
github-actions[bot]
f919729538 Release 2024.11.18
Created by: bashonly

:ci skip all
2024-11-18 05:45:05 +00:00
bashonly
7ea2787920
[ie/reddit] Improve error handling (#11573)
Authored by: bashonly
2024-11-18 05:36:38 +00:00
bashonly
f7257588bd
[ie/digitalconcerthall] Support login with access/refresh tokens (#11571)
Removes broken support for login with email and password
Removes obsolete `prefer_combined_hls` extractor-arg

Closes #11404, Closes #11436
Authored by: bashonly
2024-11-18 05:16:17 +00:00
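For context, a minimal usage sketch of the token-based login described in the commit above, driven through yt-dlp's Python API; the token value and concert URL below are placeholders, not taken from this changeset:

```python
# Hypothetical sketch: log in to DigitalConcertHall with an access token,
# mirroring the commit's "--username token --password ACCESS_TOKEN" hint.
import yt_dlp

ACCESS_TOKEN = '<access_token_production from browser local storage>'  # placeholder

ydl_opts = {
    'username': 'token',       # special username introduced by this change
    'password': ACCESS_TOKEN,  # the access token is passed in place of a password
    # or, per the extractor's refresh hint:
    # 'username': 'refresh', 'password': '<refresh_token>',
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.digitalconcerthall.com/en/concert/12345'])  # placeholder URL
```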
Mozi
8779a8897c simplify statements in traversal 2024-11-16 07:39:59 +00:00
Mozi
88de6d0c2d merge 'master' 2024-11-16 07:22:46 +00:00
Mozi
628ce197eb merge 'master' 2024-10-28 00:37:58 +00:00
Mozi
c0aa2e8160 fix usage of 'self._merge_subtitles' 2024-10-28 00:37:42 +00:00
Mozi
4b00360b4e [ie/vidio:live] the code I wrote does not seem to work. let's rewrite it
Those two URLs of Premier-exclusive livestreams are still not working!
2024-09-21 12:28:25 +00:00
Mozi
8155ed770b merge branch 'master' 2024-09-21 10:08:41 +00:00
Mozi
20c66ec13e [ie/vidio] Fix login; use new API; check DRM; extract comments 2024-09-21 10:07:45 +00:00
Mozi
3bb739f188 [ie/vidio:live] Add DASH support; use new API 2024-09-01 16:56:51 +00:00
8 changed files with 601 additions and 188 deletions

CONTRIBUTORS

@@ -695,3 +695,15 @@ KBelmin
kesor
MellowKyler
Wesley107772
a13ssandr0
ChocoLZS
doe1080
hugovdev
jshumphrey
julionc
manavchaudhary1
powergold1
Sakura286
SamDecrock
stratus-ss
subrat-lima

Changelog.md

@@ -4,6 +4,64 @@
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->
### 2024.11.18
#### Important changes
- **Login with OAuth is no longer supported for YouTube**
Due to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)
#### Core changes
- [Catch broken Cryptodome installations](https://github.com/yt-dlp/yt-dlp/commit/b83ca24eb72e1e558b0185bd73975586c0bc0546) ([#11486](https://github.com/yt-dlp/yt-dlp/issues/11486)) by [seproDev](https://github.com/seproDev)
- **utils**
    - [Fix `join_nonempty`, add `**kwargs` to `unpack`](https://github.com/yt-dlp/yt-dlp/commit/39d79c9b9cf23411d935910685c40aa1a2fdb409) ([#11559](https://github.com/yt-dlp/yt-dlp/issues/11559)) by [Grub4K](https://github.com/Grub4K)
    - `subs_list_to_dict`: [Add `lang` default parameter](https://github.com/yt-dlp/yt-dlp/commit/c014fbcddcb4c8f79d914ac5bb526758b540ea33) ([#11508](https://github.com/yt-dlp/yt-dlp/issues/11508)) by [Grub4K](https://github.com/Grub4K)
#### Extractor changes
- [Allow `ext` override for thumbnails](https://github.com/yt-dlp/yt-dlp/commit/eb64ae7d5def6df2aba74fb703e7f168fb299865) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **adobepass**: [Fix provider requests](https://github.com/yt-dlp/yt-dlp/commit/85fdc66b6e01d19a94b4f39b58e3c0cf23600902) ([#11472](https://github.com/yt-dlp/yt-dlp/issues/11472)) by [bashonly](https://github.com/bashonly)
- **archive.org**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/f2a4983df7a64c4e93b56f79dbd16a781bd90206) ([#11527](https://github.com/yt-dlp/yt-dlp/issues/11527)) by [jshumphrey](https://github.com/jshumphrey)
- **bandlab**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/6365e92589e4bc17b8fffb0125a716d144ad2137) ([#11535](https://github.com/yt-dlp/yt-dlp/issues/11535)) by [seproDev](https://github.com/seproDev)
- **chaturbate**
    - [Extract from API and support impersonation](https://github.com/yt-dlp/yt-dlp/commit/720b3dc453c342bc2e8df7dbc0acaab4479de46c) ([#11555](https://github.com/yt-dlp/yt-dlp/issues/11555)) by [powergold1](https://github.com/powergold1) (With fixes in [7cecd29](https://github.com/yt-dlp/yt-dlp/commit/7cecd299e4a5ef1f0f044b2fedc26f17e41f15e3) by [seproDev](https://github.com/seproDev))
    - [Support alternate domains](https://github.com/yt-dlp/yt-dlp/commit/a9f85670d03ab993dc589f21a9ffffcad61392d5) ([#10595](https://github.com/yt-dlp/yt-dlp/issues/10595)) by [manavchaudhary1](https://github.com/manavchaudhary1)
- **cloudflarestream**: [Avoid extraction via videodelivery.net](https://github.com/yt-dlp/yt-dlp/commit/2db8c2e7d57a1784b06057c48e3e91023720d195) ([#11478](https://github.com/yt-dlp/yt-dlp/issues/11478)) by [hugovdev](https://github.com/hugovdev)
- **ctvnews**
    - [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f351440f1dc5b3dfbfc5737b037a869d946056fe) ([#11534](https://github.com/yt-dlp/yt-dlp/issues/11534)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
    - [Fix playlist ID extraction](https://github.com/yt-dlp/yt-dlp/commit/f9d98509a898737c12977b2e2117277bada2c196) ([#8892](https://github.com/yt-dlp/yt-dlp/issues/8892)) by [qbnu](https://github.com/qbnu)
- **digitalconcerthall**: [Support login with access/refresh tokens](https://github.com/yt-dlp/yt-dlp/commit/f7257588bdff5f0b0452635a66b253a783c97357) ([#11571](https://github.com/yt-dlp/yt-dlp/issues/11571)) by [bashonly](https://github.com/bashonly)
- **facebook**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/bacc31b05a04181b63100c481565256b14813a5e) ([#11513](https://github.com/yt-dlp/yt-dlp/issues/11513)) by [bashonly](https://github.com/bashonly)
- **gamedevtv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8) ([#11368](https://github.com/yt-dlp/yt-dlp/issues/11368)) by [bashonly](https://github.com/bashonly), [stratus-ss](https://github.com/stratus-ss)
- **goplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6b43a8d84b881d769b480ba6e20ec691e9d1b92d) ([#11466](https://github.com/yt-dlp/yt-dlp/issues/11466)) by [bashonly](https://github.com/bashonly), [SamDecrock](https://github.com/SamDecrock)
- **kenh14**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/eb15fd5a32d8b35ef515f7a3d1158c03025648ff) ([#3996](https://github.com/yt-dlp/yt-dlp/issues/3996)) by [krichbanana](https://github.com/krichbanana), [pzhlkj6612](https://github.com/pzhlkj6612)
- **litv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e079ffbda66de150c0a9ebef05e89f61bb4d5f76) ([#11071](https://github.com/yt-dlp/yt-dlp/issues/11071)) by [jiru](https://github.com/jiru)
- **mixchmovie**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/0ec9bfed4d4a52bfb4f8733da1acf0aeeae21e6b) ([#10897](https://github.com/yt-dlp/yt-dlp/issues/10897)) by [Sakura286](https://github.com/Sakura286)
- **patreon**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/1d253b0a27110d174c40faf8fb1c999d099e0cde) ([#11530](https://github.com/yt-dlp/yt-dlp/issues/11530)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
- **pialive**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/d867f99622ef7fba690b08da56c39d739b822bb7) ([#10811](https://github.com/yt-dlp/yt-dlp/issues/10811)) by [ChocoLZS](https://github.com/ChocoLZS)
- **radioradicale**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/70c55cb08f780eab687e881ef42bb5c6007d290b) ([#5607](https://github.com/yt-dlp/yt-dlp/issues/5607)) by [a13ssandr0](https://github.com/a13ssandr0), [pzhlkj6612](https://github.com/pzhlkj6612)
- **reddit**: [Improve error handling](https://github.com/yt-dlp/yt-dlp/commit/7ea2787920cccc6b8ea30791993d114fbd564434) ([#11573](https://github.com/yt-dlp/yt-dlp/issues/11573)) by [bashonly](https://github.com/bashonly)
- **redgifsuser**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/d215fba7edb69d4fa665f43663756fd260b1489f) ([#11531](https://github.com/yt-dlp/yt-dlp/issues/11531)) by [jshumphrey](https://github.com/jshumphrey)
- **rutube**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/e398217aae19bb25f91797bfbe8a3243698d7f45) ([#11480](https://github.com/yt-dlp/yt-dlp/issues/11480)) by [seproDev](https://github.com/seproDev)
- **sonylivseries**: [Add `sort_order` extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/2009cb27e17014787bf63eaa2ada51293d54f22a) ([#11569](https://github.com/yt-dlp/yt-dlp/issues/11569)) by [bashonly](https://github.com/bashonly)
- **soop**: [Fix thumbnail extraction](https://github.com/yt-dlp/yt-dlp/commit/c699bafc5038b59c9afe8c2e69175fb66424c832) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **spankbang**: [Support browser impersonation](https://github.com/yt-dlp/yt-dlp/commit/8388ec256f7753b02488788e3cfa771f6e1db247) ([#11542](https://github.com/yt-dlp/yt-dlp/issues/11542)) by [jshumphrey](https://github.com/jshumphrey)
- **spreaker**
    - [Support episode pages and access keys](https://github.com/yt-dlp/yt-dlp/commit/c39016f66df76d14284c705736ca73db8055d8de) ([#11489](https://github.com/yt-dlp/yt-dlp/issues/11489)) by [julionc](https://github.com/julionc)
    - [Support podcast and feed pages](https://github.com/yt-dlp/yt-dlp/commit/c6737310619022248f5d0fd13872073cac168453) ([#10968](https://github.com/yt-dlp/yt-dlp/issues/10968)) by [subrat-lima](https://github.com/subrat-lima)
- **youtube**
    - [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/637d62a3a9fc723d68632c1af25c30acdadeeb85) ([#11528](https://github.com/yt-dlp/yt-dlp/issues/11528)) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)
    - [Remove broken OAuth support](https://github.com/yt-dlp/yt-dlp/commit/52c0ffe40ad6e8404d93296f575007b05b04c686) ([#11558](https://github.com/yt-dlp/yt-dlp/issues/11558)) by [bashonly](https://github.com/bashonly)
    - tab: [Fix podcasts tab extraction](https://github.com/yt-dlp/yt-dlp/commit/37cd7660eaff397c551ee18d80507702342b0c2b) ([#11567](https://github.com/yt-dlp/yt-dlp/issues/11567)) by [seproDev](https://github.com/seproDev)
#### Misc. changes
- **build**
    - [Bump PyInstaller version pin to `>=6.11.1`](https://github.com/yt-dlp/yt-dlp/commit/f9c8deb4e5887ff5150e911ac0452e645f988044) ([#11507](https://github.com/yt-dlp/yt-dlp/issues/11507)) by [bashonly](https://github.com/bashonly)
    - [Enable attestations for trusted publishing](https://github.com/yt-dlp/yt-dlp/commit/f13df591d4d7ca8e2f31b35c9c91e69ba9e9b013) ([#11420](https://github.com/yt-dlp/yt-dlp/issues/11420)) by [bashonly](https://github.com/bashonly)
    - [Pin `websockets` version to >=13.0,<14](https://github.com/yt-dlp/yt-dlp/commit/240a7d43c8a67ffb86d44dc276805aa43c358dcc) ([#11488](https://github.com/yt-dlp/yt-dlp/issues/11488)) by [bashonly](https://github.com/bashonly)
- **cleanup**
    - [Deprecate more compat functions](https://github.com/yt-dlp/yt-dlp/commit/f95a92b3d0169a784ee15a138fbe09d82b2754a1) ([#11439](https://github.com/yt-dlp/yt-dlp/issues/11439)) by [seproDev](https://github.com/seproDev)
    - [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/10fc719bc7f1eef469389c5219102266ef411f29) ([#11566](https://github.com/yt-dlp/yt-dlp/issues/11566)) by [doe1080](https://github.com/doe1080)
    - Miscellaneous: [da252d9](https://github.com/yt-dlp/yt-dlp/commit/da252d9d322af3e2178ac5eae324809502a0a862) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [seproDev](https://github.com/seproDev)
### 2024.11.04
#### Important changes

README.md

@@ -1867,9 +1867,6 @@ The following extractors use this feature:
#### bilibili
* `prefer_multi_flv`: Prefer extracting flv formats over mp4 for older videos that still provide legacy formats
#### digitalconcerthall
* `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats
#### sonylivseries
* `sort_order`: Episode sort order for series extraction - one of `asc` (ascending, oldest first) or `desc` (descending, newest first). Default is `asc`
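For illustration, a hedged sketch of passing the new `sort_order` extractor-arg through yt-dlp's Python API (the command-line equivalent would be `--extractor-args "sonylivseries:sort_order=desc"`); the series URL is a placeholder:

```python
# Hypothetical sketch: request newest-first episode ordering for a SonyLIV
# series via the sort_order extractor-arg documented above.
import yt_dlp

ydl_opts = {
    # extractor_args maps an extractor key to {arg_name: [values]}
    'extractor_args': {'sonylivseries': {'sort_order': ['desc']}},
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.sonyliv.com/shows/placeholder-series'])  # placeholder URL
```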

supportedsites.md

@@ -129,6 +129,8 @@
- **Bandcamp:album**
- **Bandcamp:user**
- **Bandcamp:weekly**
- **Bandlab**
- **BandlabPlaylist**
- **BannedVideo**
- **bbc**: [*bbc*](## "netrc machine") BBC
- **bbc.co.uk**: [*bbc*](## "netrc machine") BBC iPlayer
@@ -484,6 +486,7 @@
- **Gab**
- **GabTV**
- **Gaia**: [*gaia*](## "netrc machine")
- **GameDevTVDashboard**: [*gamedevtv*](## "netrc machine")
- **GameJolt**
- **GameJoltCommunity**
- **GameJoltGame**
@@ -651,6 +654,8 @@
- **Karaoketv**
- **Katsomo**: (**Currently broken**)
- **KelbyOne**: (**Currently broken**)
- **Kenh14Playlist**
- **Kenh14Video**
- **Ketnet**
- **khanacademy**
- **khanacademy:unit**
@@ -784,10 +789,6 @@
- **MicrosoftLearnSession**
- **MicrosoftMedius**
- **microsoftstream**: Microsoft Stream
- **mildom**: Record ongoing live by specific user in Mildom
- **mildom:clip**: Clip in Mildom
- **mildom:user:vod**: Download all VODs from specific user in Mildom
- **mildom:vod**: VOD in Mildom
- **minds**
- **minds:channel**
- **minds:group**
@@ -798,6 +799,7 @@
- **MiTele**: mitele.es
- **mixch**
- **mixch:archive**
- **mixch:movie**
- **mixcloud**
- **mixcloud:playlist**
- **mixcloud:user**
@@ -1060,8 +1062,8 @@
- **PhilharmonieDeParis**: Philharmonie de Paris
- **phoenix.de**
- **Photobucket**
- **PiaLive**
- **Piapro**: [*piapro*](## "netrc machine")
- **PIAULIZAPortal**: ulizaportal.jp - PIA LIVE STREAM
- **Picarto**
- **PicartoVod**
- **Piksel**
@@ -1088,8 +1090,6 @@
- **PodbayFMChannel**
- **Podchaser**
- **podomatic**: (**Currently broken**)
- **Pokemon**
- **PokemonWatch**
- **PokerGo**: [*pokergo*](## "netrc machine")
- **PokerGoCollection**: [*pokergo*](## "netrc machine")
- **PolsatGo**
@@ -1160,6 +1160,7 @@
- **RadioJavan**: (**Currently broken**)
- **radiokapital**
- **radiokapital:show**
- **RadioRadicale**
- **RadioZetPodcast**
- **radlive**
- **radlive:channel**
@@ -1367,9 +1368,7 @@
- **spotify**: Spotify episodes (**Currently broken**)
- **spotify:show**: Spotify shows (**Currently broken**)
- **Spreaker**
- **SpreakerPage**
- **SpreakerShow**
- **SpreakerShowPage**
- **SpringboardPlatform**
- **Sprout**
- **SproutVideo**
@@ -1570,6 +1569,8 @@
- **UFCTV**: [*ufctv*](## "netrc machine")
- **ukcolumn**: (**Currently broken**)
- **UKTVPlay**
- **UlizaPlayer**
- **UlizaPortal**: ulizaportal.jp
- **umg:de**: Universal Music Deutschland (**Currently broken**)
- **Unistra**
- **Unity**: (**Currently broken**)
@@ -1587,8 +1588,6 @@
- **Varzesh3**: (**Currently broken**)
- **Vbox7**
- **Veo**
- **Veoh**
- **veoh:user**
- **Vesti**: Вести.Ru (**Currently broken**)
- **Vevo**
- **VevoPlaylist**

yt_dlp/extractor/digitalconcerthall.py

@@ -1,7 +1,10 @@
import time
from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
jwt_decode_hs256,
parse_codecs,
try_get,
url_or_none,
@@ -13,9 +16,6 @@ from ..utils.traversal import traverse_obj
class DigitalConcertHallIE(InfoExtractor):
IE_DESC = 'DigitalConcertHall extractor'
_VALID_URL = r'https?://(?:www\.)?digitalconcerthall\.com/(?P<language>[a-z]+)/(?P<type>film|concert|work)/(?P<id>[0-9]+)-?(?P<part>[0-9]+)?'
_OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
_USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
_ACCESS_TOKEN = None
_NETRC_MACHINE = 'digitalconcerthall'
_TESTS = [{
'note': 'Playlist with only one video',
@@ -69,59 +69,157 @@ class DigitalConcertHallIE(InfoExtractor):
'params': {'skip_download': 'm3u8'},
'playlist_count': 1,
}]
_LOGIN_HINT = ('Use --username token --password ACCESS_TOKEN where ACCESS_TOKEN '
'is the "access_token_production" from your browser local storage')
_REFRESH_HINT = 'or else use a "refresh_token" with --username refresh --password REFRESH_TOKEN'
_OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
_CLIENT_ID = 'dch.webapp'
_CLIENT_SECRET = '2ySLN+2Fwb'
_USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
_OAUTH_HEADERS = {
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
'Origin': 'https://www.digitalconcerthall.com',
'Referer': 'https://www.digitalconcerthall.com/',
'User-Agent': _USER_AGENT,
}
_access_token = None
_access_token_expiry = 0
_refresh_token = None
def _perform_login(self, username, password):
login_token = self._download_json(
self._OAUTH_URL,
None, 'Obtaining token', errnote='Unable to obtain token', data=urlencode_postdata({
@property
def _access_token_is_expired(self):
return self._access_token_expiry - 30 <= int(time.time())
def _set_access_token(self, value):
self._access_token = value
self._access_token_expiry = traverse_obj(value, ({jwt_decode_hs256}, 'exp', {int})) or 0
def _cache_tokens(self, /):
self.cache.store(self._NETRC_MACHINE, 'tokens', {
'access_token': self._access_token,
'refresh_token': self._refresh_token,
})
def _fetch_new_tokens(self, invalidate=False):
if invalidate:
self.report_warning('Access token has been invalidated')
self._set_access_token(None)
if not self._access_token_is_expired:
return
if not self._refresh_token:
self._set_access_token(None)
self._cache_tokens()
raise ExtractorError(
'Access token has expired or been invalidated. '
'Get a new "access_token_production" value from your browser '
f'and try again, {self._REFRESH_HINT}', expected=True)
# If we only have a refresh token, we need a temporary "initial token" for the refresh flow
bearer_token = self._access_token or self._download_json(
self._OAUTH_URL, None, 'Obtaining initial token', 'Unable to obtain initial token',
data=urlencode_postdata({
'affiliate': 'none',
'grant_type': 'device',
'device_vendor': 'unknown',
# device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio
'device_model': 'unknown' if self._configuration_arg('prefer_combined_hls') else 'Safari',
'app_id': 'dch.webapp',
# device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio,
# but this is no longer effective since actual login is not possible anymore
'device_model': 'unknown',
'app_id': self._CLIENT_ID,
'app_distributor': 'berlinphil',
'app_version': '1.84.0',
'client_secret': '2ySLN+2Fwb',
}), headers={
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
'User-Agent': self._USER_AGENT,
})['access_token']
'app_version': '1.95.0',
'client_secret': self._CLIENT_SECRET,
}), headers=self._OAUTH_HEADERS)['access_token']
try:
login_response = self._download_json(
self._OAUTH_URL,
None, note='Logging in', errnote='Unable to login', data=urlencode_postdata({
'grant_type': 'password',
'username': username,
'password': password,
response = self._download_json(
self._OAUTH_URL, None, 'Refreshing token', 'Unable to refresh token',
data=urlencode_postdata({
'grant_type': 'refresh_token',
'refresh_token': self._refresh_token,
'client_id': self._CLIENT_ID,
'client_secret': self._CLIENT_SECRET,
}), headers={
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
'Referer': 'https://www.digitalconcerthall.com',
'Authorization': f'Bearer {login_token}',
'User-Agent': self._USER_AGENT,
**self._OAUTH_HEADERS,
'Authorization': f'Bearer {bearer_token}',
})
except ExtractorError as error:
if isinstance(error.cause, HTTPError) and error.cause.status == 401:
raise ExtractorError('Invalid username or password', expected=True)
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
self._set_access_token(None)
self._refresh_token = None
self._cache_tokens()
raise ExtractorError('Your tokens have been invalidated', expected=True)
raise
self._ACCESS_TOKEN = login_response['access_token']
self._set_access_token(response['access_token'])
if refresh_token := traverse_obj(response, ('refresh_token', {str})):
self.write_debug('New refresh token granted')
self._refresh_token = refresh_token
self._cache_tokens()
def _perform_login(self, username, password):
self.report_login()
if username == 'refresh':
self._refresh_token = password
self._fetch_new_tokens()
if username == 'token':
if not traverse_obj(password, {jwt_decode_hs256}):
raise ExtractorError(
f'The access token passed to yt-dlp is not valid. {self._LOGIN_HINT}', expected=True)
self._set_access_token(password)
self._cache_tokens()
if username in ('refresh', 'token'):
if self.get_param('cachedir') is not False:
token_type = 'access' if username == 'token' else 'refresh'
self.to_screen(f'Your {token_type} token has been cached to disk. To use the cached '
'token next time, pass --username cache along with any password')
return
if username != 'cache':
raise ExtractorError(
'Login with username and password is no longer supported '
f'for this site. {self._LOGIN_HINT}, {self._REFRESH_HINT}', expected=True)
# Try cached access_token
cached_tokens = self.cache.load(self._NETRC_MACHINE, 'tokens', default={})
self._set_access_token(cached_tokens.get('access_token'))
self._refresh_token = cached_tokens.get('refresh_token')
if not self._access_token_is_expired:
return
# Try cached refresh_token
self._fetch_new_tokens(invalidate=True)
def _real_initialize(self):
if not self._ACCESS_TOKEN:
self.raise_login_required(method='password')
if not self._access_token:
self.raise_login_required(
'All content on this site is only available for registered users. '
f'{self._LOGIN_HINT}, {self._REFRESH_HINT}', method=None)
def _entries(self, items, language, type_, **kwargs):
for item in items:
video_id = item['id']
stream_info = self._download_json(
self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
'Accept': 'application/json',
'Authorization': f'Bearer {self._ACCESS_TOKEN}',
'Accept-Language': language,
'User-Agent': self._USER_AGENT,
})
for should_retry in (True, False):
self._fetch_new_tokens(invalidate=not should_retry)
try:
stream_info = self._download_json(
self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
'Accept': 'application/json',
'Authorization': f'Bearer {self._access_token}',
'Accept-Language': language,
'User-Agent': self._USER_AGENT,
})
break
except ExtractorError as error:
if should_retry and isinstance(error.cause, HTTPError) and error.cause.status == 401:
continue
raise
formats = []
for m3u8_url in traverse_obj(stream_info, ('channel', ..., 'stream', ..., 'url', {url_or_none})):
@@ -157,7 +255,6 @@ class DigitalConcertHallIE(InfoExtractor):
'Accept': 'application/json',
'Accept-Language': language,
'User-Agent': self._USER_AGENT,
'Authorization': f'Bearer {self._ACCESS_TOKEN}',
})
videos = [vid_info] if type_ == 'film' else traverse_obj(vid_info, ('_embedded', ..., ...))

yt_dlp/extractor/reddit.py

@@ -259,6 +259,8 @@ class RedditIE(InfoExtractor):
f'https://www.reddit.com/{slug}/.json', video_id, expected_status=403)
except ExtractorError as e:
if isinstance(e.cause, json.JSONDecodeError):
if self._get_cookies('https://www.reddit.com/').get('reddit_session'):
raise ExtractorError('Your IP address is unable to access the Reddit API', expected=True)
self.raise_login_required('Account authentication is required')
raise

yt_dlp/extractor/vidio.py

@@ -1,18 +1,24 @@
import json
from .common import InfoExtractor
from ..utils import (
ExtractorError,
clean_html,
format_field,
extract_attributes,
get_element_by_class,
get_element_html_by_id,
int_or_none,
parse_iso8601,
remove_end,
smuggle_url,
str_or_none,
strip_or_none,
str_to_int,
try_get,
unsmuggle_url,
url_or_none,
urlencode_postdata,
)
from ..utils.traversal import traverse_obj
class VidioBaseIE(InfoExtractor):
@@ -35,6 +41,7 @@ class VidioBaseIE(InfoExtractor):
login_form.update({
'user[login]': username,
'user[password]': password,
'authenticity_token': self._html_search_meta('csrf-token', login_page, fatal=True),
})
login_post, login_post_urlh = self._download_webpage_handle(
self._LOGIN_URL, None, 'Logging in', data=urlencode_postdata(login_form), expected_status=[302, 401])
@@ -58,6 +65,7 @@ class VidioBaseIE(InfoExtractor):
def _initialize_pre_login(self):
self._api_key = self._download_json(
'https://www.vidio.com/auth', None, data=b'')['api_key']
self._ua = self.get_param('http_headers')['User-Agent']
def _call_api(self, url, video_id, note=None):
return self._download_json(url, video_id, note=note, headers={
@@ -67,7 +75,9 @@
class VidioIE(VidioBaseIE):
_GEO_COUNTRIES = ['ID']
_VALID_URL = r'https?://(?:www\.)?vidio\.com/(watch|embed)/(?P<id>\d+)-(?P<display_id>[^/?#&]+)'
_EMBED_REGEX = [rf'(?x)<iframe[^>]+\bsrc=[\'"](?P<url>{_VALID_URL})']
_TESTS = [{
'url': 'http://www.vidio.com/watch/165683-dj_ambred-booyah-live-2015',
'md5': 'abac81b1a205a8d94c609a473b5ea62a',
@@ -77,113 +87,317 @@ class VidioIE(VidioBaseIE):
'ext': 'mp4',
'title': 'DJ_AMBRED - Booyah (Live 2015)',
'description': 'md5:27dc15f819b6a78a626490881adbadf8',
'thumbnail': r're:^https?://.*\.jpg$',
'thumbnail': r're:^https?://thumbor\.prod\.vidiocdn\.com/.+\.jpg$',
'duration': 149,
'like_count': int,
'uploader': 'TWELVE Pic',
'timestamp': 1444902800,
'uploader': 'twelvepictures',
'timestamp': 1444902960,
'upload_date': '20151015',
'uploader_id': 'twelvepictures',
'channel': 'Cover Music Video',
'uploader_id': '270115',
'channel': 'cover-music-video',
'channel_id': '280236',
'view_count': int,
'dislike_count': int,
'comment_count': int,
'channel_url': 'https://www.vidio.com/@twelvepictures/channels/280236-cover-music-video',
'tags': 'count:3',
'uploader_url': 'https://www.vidio.com/@twelvepictures',
'live_status': 'not_live',
'genres': ['vlog', 'comedy', 'edm'],
'season_id': '',
'season_name': '',
'age_limit': 13,
'comment_count': int,
},
'params': {
'getcomments': True,
},
}, {
# DRM protected
'url': 'https://www.vidio.com/watch/7095853-ep-04-sketch-book',
'md5': 'abac81b1a205a8d94c609a473b5ea62a',
'info_dict': {
'id': '7095853',
'display_id': 'ep-04-sketch-book',
'ext': 'mp4',
'title': 'Ep 04 - Sketch Book',
'description': 'md5:9e22b4b1dbd65209c143d7009e899830',
'thumbnail': r're:^https?://thumbor\.prod\.vidiocdn\.com/.+\.jpg$',
'duration': 2784,
'uploader': 'vidiooriginal',
'timestamp': 1658509200,
'upload_date': '20220722',
'uploader_id': '31052580',
'channel': 'cupcake-untuk-rain',
'channel_id': '52332655',
'channel_url': 'https://www.vidio.com/@vidiooriginal/channels/52332655-cupcake-untuk-rain',
'tags': [],
'uploader_url': 'https://www.vidio.com/@vidiooriginal',
'live_status': 'not_live',
'genres': ['romance', 'drama', 'comedy', 'Teen', 'love triangle'],
'season_id': '8220',
'season_name': 'Season 1',
'age_limit': 13,
'availability': 'premium_only',
'comment_count': int,
},
'expected_warnings': ['This video is DRM protected'],
'params': {
'getcomments': True,
'skip_download': True,
'ignore_no_formats_error': True,
},
}, {
'url': 'https://www.vidio.com/watch/7439193-episode-1-magic-5',
'md5': 'b1644c574aeb20c91503be367ac2d211',
'info_dict': {
'id': '7439193',
'display_id': 'episode-1-magic-5',
'ext': 'mp4',
'title': 'Episode 1 - Magic 5',
'description': 'md5:367255f9e8e7ad7192c26218f01b6260',
'thumbnail': r're:^https?://thumbor\.prod\.vidiocdn\.com/.+\.jpg$',
'duration': 6126,
'uploader': 'indosiar',
'timestamp': 1679315400,
'upload_date': '20230320',
'uploader_id': '12',
'channel': 'magic-5',
'channel_id': '52350795',
'channel_url': 'https://www.vidio.com/@indosiar/channels/52350795-magic-5',
'tags': ['basmalah', 'raden-rakha', 'eby-da-5', 'sinetron', 'afan-da-5', 'sridevi-da5'],
'uploader_url': 'https://www.vidio.com/@indosiar',
'live_status': 'not_live',
'genres': ['drama', 'fantasy', 'friendship'],
'season_id': '11017',
'season_name': 'Episode',
'age_limit': 13,
},
}, {
'url': 'https://www.vidio.com/watch/1716926-mas-suka-masukin-aja',
'md5': 'acc4009eeac0033328419aada7bc6925',
'info_dict': {
'id': '1716926',
'display_id': 'mas-suka-masukin-aja',
'ext': 'mp4',
'title': 'Mas Suka, Masukin Aja',
'description': 'md5:667093b08e07b6fb92f68037f81f2267',
'thumbnail': r're:^https?://thumbor\.prod\.vidiocdn\.com/.+\.jpg$',
'duration': 5080,
'uploader': 'vidiopremier',
'timestamp': 1564735560,
'upload_date': '20190802',
'uploader_id': '26094842',
'channel': 'mas-suka-masukin-aja',
'channel_id': '34112289',
'channel_url': 'https://www.vidio.com/@vidiopremier/channels/34112289-mas-suka-masukin-aja',
'tags': [],
'uploader_url': 'https://www.vidio.com/@vidiopremier',
'live_status': 'not_live',
'genres': ['comedy', 'romance'],
'season_id': '663',
'season_name': '',
'age_limit': 18,
'availability': 'premium_only',
},
'params': {
'ignore_no_formats_error': True,
},
'expected_warnings': ['This show isn\'t available in your country'],
}, {
'url': 'https://www.vidio.com/watch/2372948-first-day-of-school-kindergarten-life-song-beabeo-nursery-rhymes-kids-songs',
'md5': 'c6d1bde08eee88bea27cca9dc38bc3df',
'info_dict': {
'id': '2372948',
'display_id': 'first-day-of-school-kindergarten-life-song-beabeo-nursery-rhymes-kids-songs',
'ext': 'mp4',
'title': 'First Day of School | Kindergarten Life Song | BeaBeo Nursery Rhymes & Kids Songs',
'description': 'md5:d505486a67415903f7f3ab61adfd5a91',
'thumbnail': r're:^https?://thumbor\.prod\.vidiocdn\.com/.+\.jpg$',
'duration': 517,
'uploader': 'kidsstartv',
'timestamp': 1638518400,
'upload_date': '20211203',
'uploader_id': '38247189',
'channel': 'beabeo-school-series',
'channel_id': '52311987',
'channel_url': 'https://www.vidio.com/@kidsstartv/channels/52311987-beabeo-school-series',
'tags': [],
'uploader_url': 'https://www.vidio.com/@kidsstartv',
'live_status': 'not_live',
'genres': ['animation', 'Cartoon'],
'season_id': '6023',
'season_name': 'school series',
},
}, {
'url': 'https://www.vidio.com/watch/1550718-stand-by-me-doraemon',
'md5': '405b61a2f06c74e052e0bd67cad6b891',
'info_dict': {
'id': '1550718',
'display_id': 'stand-by-me-doraemon',
'ext': 'mp4',
'title': 'Stand by Me Doraemon',
'description': 'md5:673d899f6a58dd4b0d18aebe30545e2a',
'thumbnail': r're:^https?://thumbor\.prod\.vidiocdn\.com/.+\.jpg$',
'duration': 5429,
'uploader': 'vidiopremier',
'timestamp': 1545815634,
'upload_date': '20181226',
'uploader_id': '26094842',
'channel': 'stand-by-me-doraemon',
'channel_id': '29750953',
'channel_url': 'https://www.vidio.com/@vidiopremier/channels/29750953-stand-by-me-doraemon',
'tags': ['anime-lucu', 'top-10-this-week', 'kids', 'stand-by-me-doraemon-2'],
'uploader_url': 'https://www.vidio.com/@vidiopremier',
'live_status': 'not_live',
'genres': ['anime', 'family', 'adventure', 'comedy', 'coming of age'],
'season_id': '237',
'season_name': '',
'age_limit': 7,
'availability': 'premium_only',
},
'params': {
'ignore_no_formats_error': True,
},
'expected_warnings': ['This show isn\'t available in your country'],
}, {
# 404 Not Found
'url': 'https://www.vidio.com/watch/77949-south-korea-test-fires-missile-that-can-strike-all-of-the-north',
'only_matching': True,
}, {
# Premier-exclusive video
'url': 'https://www.vidio.com/watch/1550718-stand-by-me-doraemon',
'only_matching': True,
}, {
# embed url from https://enamplus.liputan6.com/read/5033648/video-fakta-temuan-suspek-cacar-monyet-di-jawa-tengah
'url': 'https://www.vidio.com/embed/7115874-fakta-temuan-suspek-cacar-monyet-di-jawa-tengah',
}]
_WEBPAGE_TESTS = [{
# embed player: https://www.vidio.com/embed/7115874-fakta-temuan-suspek-cacar-monyet-di-jawa-tengah
'url': 'https://enamplus.liputan6.com/read/5033648/video-fakta-temuan-suspek-cacar-monyet-di-jawa-tengah',
'info_dict': {
'id': '7115874',
'ext': 'mp4',
'channel_id': '40172876',
'comment_count': int,
'uploader_id': 'liputan6',
'view_count': int,
'dislike_count': int,
'upload_date': '20220804',
'uploader': 'Liputan6.com',
'display_id': 'fakta-temuan-suspek-cacar-monyet-di-jawa-tengah',
'channel': 'ENAM PLUS 165',
'timestamp': 1659605520,
'ext': 'mp4',
'title': 'Fakta Temuan Suspek Cacar Monyet di Jawa Tengah',
'duration': 59,
'like_count': int,
'tags': ['monkeypox indonesia', 'cacar monyet menyebar', 'suspek cacar monyet di indonesia', 'fakta', 'hoax atau bukan?', 'jawa tengah'],
'thumbnail': 'https://thumbor.prod.vidiocdn.com/83PN-_BKm5sS7emLtRxl506MLqQ=/640x360/filters:quality(70)/vidio-web-prod-video/uploads/video/image/7115874/fakta-suspek-cacar-monyet-di-jawa-tengah-24555a.jpg',
'uploader_url': 'https://www.vidio.com/@liputan6',
'description': 'md5:6d595a18d3b19ee378e335a6f288d5ac',
'thumbnail': r're:^https?://thumbor\.prod\.vidiocdn\.com/.+\.jpg$',
'duration': 59,
'uploader': 'liputan6',
'timestamp': 1659605693,
'upload_date': '20220804',
'uploader_id': '139',
'channel': 'enam-plus-165',
'channel_id': '40172876',
'channel_url': 'https://www.vidio.com/@liputan6/channels/40172876-enam-plus-165',
'tags': ['monkeypox-indonesia', 'cacar-monyet-menyebar', 'suspek-cacar-monyet-di-indonesia', 'fakta', 'hoax-atau-bukan', 'jawa-tengah'],
'uploader_url': 'https://www.vidio.com/@liputan6',
'live_status': 'not_live',
'genres': ['health'],
'season_id': '',
'season_name': '',
'age_limit': 13,
'comment_count': int,
},
'params': {
'getcomments': True,
},
}]
def _real_extract(self, url):
match = self._match_valid_url(url).groupdict()
video_id, display_id = match.get('id'), match.get('display_id')
data = self._call_api('https://api.vidio.com/videos/' + video_id, display_id)
video = data['videos'][0]
title = video['title'].strip()
is_premium = video.get('is_premium')
video_id, display_id = self._match_valid_url(url).groups()
if is_premium:
sources = self._download_json(
f'https://www.vidio.com/interactions_stream.json?video_id={video_id}&type=videos',
display_id, note='Downloading premier API JSON')
if not (sources.get('source') or sources.get('source_dash')):
self.raise_login_required('This video is only available for registered users with the appropriate subscription')
webpage = self._download_webpage(url, video_id)
api_data = self._call_api(f'https://api.vidio.com/videos/{video_id}', display_id, 'Downloading API data')
interactions_stream = self._download_json(
'https://www.vidio.com/interactions_stream.json', video_id,
query={'video_id': video_id, 'type': 'videos'}, note='Downloading stream info',
errnote='Unable to download stream info')
formats, subs = [], {}
if sources.get('source'):
hls_formats, hls_subs = self._extract_m3u8_formats_and_subtitles(
sources['source'], display_id, 'mp4', 'm3u8_native')
formats.extend(hls_formats)
subs.update(hls_subs)
if sources.get('source_dash'): # TODO: Find video example with source_dash
dash_formats, dash_subs = self._extract_mpd_formats_and_subtitles(
sources['source_dash'], display_id, 'dash')
formats.extend(dash_formats)
subs.update(dash_subs)
else:
hls_url = data['clips'][0]['hls_url']
formats, subs = self._extract_m3u8_formats_and_subtitles(
hls_url, display_id, 'mp4', 'm3u8_native')
attrs = extract_attributes(get_element_html_by_id(f'player-data-{video_id}', webpage))
get_first = lambda x: try_get(data, lambda y: y[x + 's'][0], dict) or {}
channel = get_first('channel')
user = get_first('user')
username = user.get('username')
get_count = lambda x: int_or_none(video.get('total_' + x))
if traverse_obj(attrs, ('data-drm-enabled', {lambda x: x == 'true'})):
self.report_drm(video_id)
if traverse_obj(attrs, ('data-geoblock', {lambda x: x == 'true'})):
self.raise_geo_restricted(
'This show isn\'t available in your country', countries=['ID'], metadata_available=True)
subtitles = dict(traverse_obj(attrs, ('data-subtitles', {json.loads}, ..., {
lambda x: (x['language'], [{'url': x['file']['url']}]),
})))
formats = []
# The playlist URLs contain time-based strings,
# so only try the next URL if no formats were extracted from the previous one.
for m3u8_url in traverse_obj([
interactions_stream.get('source'),
attrs.get('data-vjs-clip-hls-url')], (..., {url_or_none})):
fmt, subs = self._extract_m3u8_formats_and_subtitles(m3u8_url, video_id, ext='mp4', m3u8_id='hls')
formats.extend(fmt)
self._merge_subtitles(subs, target=subtitles)
if fmt:
break
for mpd_url in traverse_obj([
interactions_stream.get('source_dash'),
attrs.get('data-vjs-clip-dash-url')], (..., {url_or_none})):
fmt, subs = self._extract_mpd_formats_and_subtitles(mpd_url, video_id, mpd_id='dash')
formats.extend(fmt)
self._merge_subtitles(subs, target=subtitles)
if fmt:
break
# TODO: extract also short previews of premier-exclusive videos from "attrs['data-content-preview-url']".
uploader = attrs.get('data-video-username')
uploader_url = f'https://www.vidio.com/@{uploader}'
channel = attrs.get('data-video-channel')
channel_id = attrs.get('data-video-channel-id')
return {
'id': video_id,
'display_id': display_id,
'title': title,
'description': strip_or_none(video.get('description')),
'thumbnail': video.get('image_url_medium'),
'duration': int_or_none(video.get('duration')),
'like_count': get_count('likes'),
'title': (traverse_obj(api_data, ('videos', 0, 'title'))
or attrs.get('data-video-title')
or self._html_extract_title(webpage)),
'live_status': 'not_live',
'formats': formats,
'subtitles': subs,
'uploader': user.get('name'),
'timestamp': parse_iso8601(video.get('created_at')),
'uploader_id': username,
'uploader_url': format_field(username, None, 'https://www.vidio.com/@%s'),
'channel': channel.get('name'),
'channel_id': str_or_none(channel.get('id')),
'view_count': get_count('view_count'),
'dislike_count': get_count('dislikes'),
'comment_count': get_count('comments'),
'tags': video.get('tag_list'),
'subtitles': subtitles,
'channel': channel,
'channel_id': channel_id,
'channel_url': f'{uploader_url}/channels/{channel_id}-{channel}',
'genres': traverse_obj(attrs, ('data-genres', {str_or_none}, {str.split(sep=',')}), default=[]),
'season_id': traverse_obj(attrs, ('data-season-id', {str_or_none})),
'season_name': traverse_obj(attrs, ('data-season-name', {str})),
'uploader': uploader,
'uploader_id': traverse_obj(attrs, ('data-video-user-id', {str_or_none})),
'uploader_url': uploader_url,
'thumbnail': traverse_obj(attrs, ('data-video-image-url', {url_or_none})),
'duration': traverse_obj(attrs, ('data-video-duration', {str_to_int})),
'description': traverse_obj(attrs, ('data-video-description', {str})),
'availability': self._availability(needs_premium=(attrs.get('data-access-type') == 'premium')),
'tags': traverse_obj(attrs, ('data-video-tags', {str_or_none}, {str.split(sep=',')}), default=[]),
'timestamp': traverse_obj(attrs, ('data-video-publish-date', {parse_iso8601(delimiter=' ')})),
'age_limit': (traverse_obj(attrs, ('data-adult', {lambda x: 18 if x == 'true' else 0}))
or traverse_obj(attrs, ('data-content-rating-option', {remove_end(end=' or more')}, {str_to_int}))),
'__post_extractor': self.extract_comments(video_id),
}
def _get_comments(self, video_id):
# TODO: extract replies under comments
def extract_comments(comments_data):
users = dict(traverse_obj(comments_data, ('included', ..., {
lambda x: (x['id'], {
'author': x['attributes']['username'],
'author_thumbnail': url_or_none(x['attributes']['avatar_url_big'] or x['attributes']['avatar_url_small']),
'author_url': url_or_none(x['links']['self']),
}),
})))
yield from traverse_obj(comments_data, ('data', ..., {
'id': 'id',
'text': ('attributes', 'content'),
'timestamp': ('attributes', 'created_at', {parse_iso8601}),
'like_count': ('attributes', 'likes'),
'author_id': ('attributes', 'user_id'),
}, {lambda x: {**x, **users.get(x['author_id'])}}))
comment_page_url = f'https://api.vidio.com/videos/{video_id}/comments'
while comment_page_url:
comments_data = self._call_api(comment_page_url, video_id, 'Downloading comments')
comment_page_url = traverse_obj(comments_data, ('links', 'next', {url_or_none}))
yield from extract_comments(comments_data)
class VidioPremierIE(VidioBaseIE):
_VALID_URL = r'https?://(?:www\.)?vidio\.com/premier/(?P<id>\d+)/(?P<display_id>[^/?#&]+)'
@@ -234,10 +448,43 @@ class VidioLiveIE(VidioBaseIE):
'url': 'https://www.vidio.com/live/204-sctv',
'info_dict': {
'id': '204',
'title': 'SCTV',
'uploader': 'SCTV',
'uploader_id': 'sctv',
'thumbnail': r're:^https?://.*\.jpg$',
'ext': 'mp4',
'title': r're:SCTV \d{4}-\d{2}-\d{2} \d{2}:\d{2}',
'display_id': 'sctv',
'uploader': 'sctv',
'uploader_id': '4',
'uploader_url': 'https://www.vidio.com/@sctv',
'thumbnail': r're:^https?://thumbor\.prod\.vidiocdn\.com/.+\.jpg$',
'live_status': 'is_live',
'description': r're:^SCTV merupakan stasiun televisi nasional terkemuka di Indonesia.+',
'like_count': int,
'dislike_count': int,
'timestamp': 1461258000,
'upload_date': '20160421',
'tags': [],
'genres': [],
'age_limit': 13,
},
}, {
'url': 'https://vidio.com/live/733-trans-tv',
'info_dict': {
'id': '733',
'ext': 'mp4',
'title': r're:TRANS TV \d{4}-\d{2}-\d{2} \d{2}:\d{2}',
'display_id': 'trans-tv',
'uploader': 'transtv',
'uploader_id': '551300',
'uploader_url': 'https://www.vidio.com/@transtv',
'thumbnail': r're:^https?://thumbor\.prod\.vidiocdn\.com/.+\.jpg$',
'live_status': 'is_live',
'description': r're:^Trans TV adalah stasiun televisi swasta Indonesia.+',
'like_count': int,
'dislike_count': int,
'timestamp': 1461355080,
'upload_date': '20160422',
'tags': [],
'genres': [],
'age_limit': 13,
},
}, {
# Premier-exclusive livestream
@@ -251,59 +498,60 @@ class VidioLiveIE(VidioBaseIE):
def _real_extract(self, url):
video_id, display_id = self._match_valid_url(url).groups()
stream_data = self._call_api(
f'https://www.vidio.com/api/livestreamings/{video_id}/detail', display_id)
stream_meta = stream_data['livestreamings'][0]
user = stream_data.get('users', [{}])[0]
title = stream_meta.get('title')
username = user.get('username')
webpage = self._download_webpage(url, video_id)
stream_meta = traverse_obj(self._call_api(
f'https://www.vidio.com/api/livestreamings/{video_id}/detail', video_id),
('livestreamings', 0, {dict}), default={})
tokenized_playlist_urls = self._download_json(
f'https://www.vidio.com/live/{video_id}/tokens', video_id,
query={'type': 'dash'}, note='Downloading tokenized playlist',
errnote='Unable to download tokenized playlist', data=b'')
interactions_stream = self._download_json(
'https://www.vidio.com/interactions_stream.json', video_id,
query={'video_id': video_id, 'type': 'videos'}, note='Downloading stream info',
errnote='Unable to download stream info')
attrs = extract_attributes(get_element_html_by_id(f'player-data-{video_id}', webpage))
if traverse_obj(attrs, ('data-drm-enabled', {lambda x: x == 'true'})):
self.report_drm(video_id)
if traverse_obj(attrs, ('data-geoblock', {lambda x: x == 'true'})):
self.raise_geo_restricted(
'This show isn\'t available in your country', countries=['ID'], metadata_available=True)
formats = []
if stream_meta.get('is_drm'):
if not self.get_param('allow_unplayable_formats'):
self.report_drm(video_id)
if stream_meta.get('is_premium'):
sources = self._download_json(
f'https://www.vidio.com/interactions_stream.json?video_id={video_id}&type=livestreamings',
display_id, note='Downloading premier API JSON')
if not (sources.get('source') or sources.get('source_dash')):
self.raise_login_required('This video is only available for registered users with the appropriate subscription')
if str_or_none(sources.get('source')):
token_json = self._download_json(
f'https://www.vidio.com/live/{video_id}/tokens',
display_id, note='Downloading HLS token JSON', data=b'')
formats.extend(self._extract_m3u8_formats(
sources['source'] + '?' + token_json.get('token', ''), display_id, 'mp4', 'm3u8_native'))
if str_or_none(sources.get('source_dash')):
pass
else:
if stream_meta.get('stream_token_url'):
token_json = self._download_json(
f'https://www.vidio.com/live/{video_id}/tokens',
display_id, note='Downloading HLS token JSON', data=b'')
formats.extend(self._extract_m3u8_formats(
stream_meta['stream_token_url'] + '?' + token_json.get('token', ''),
display_id, 'mp4', 'm3u8_native'))
if stream_meta.get('stream_dash_url'):
pass
if stream_meta.get('stream_url'):
formats.extend(self._extract_m3u8_formats(
stream_meta['stream_url'], display_id, 'mp4', 'm3u8_native'))
for m3u8_url in traverse_obj([
tokenized_playlist_urls.get('hls_url'),
interactions_stream.get('source')], (..., {url_or_none})):
formats.extend(self._extract_m3u8_formats(m3u8_url, video_id, ext='mp4', m3u8_id='hls'))
for mpd_url in traverse_obj([
tokenized_playlist_urls.get('dash_url'),
interactions_stream.get('source_dash')], (..., {url_or_none})):
formats.extend(self._extract_mpd_formats(mpd_url, video_id, mpd_id='dash'))
uploader = attrs.get('data-video-username')
uploader_url = f'https://www.vidio.com/@{uploader}'
return {
'id': video_id,
'display_id': display_id,
'title': title,
'is_live': True,
'description': strip_or_none(stream_meta.get('description')),
'thumbnail': stream_meta.get('image'),
'title': attrs.get('data-video-title'),
'live_status': 'is_live',
'formats': formats,
'genres': traverse_obj(attrs, ('data-genres', {str_or_none}, {str.split(sep=',')}), default=[]),
'uploader': uploader,
'uploader_id': traverse_obj(attrs, ('data-video-user-id', {str_or_none})),
'uploader_url': uploader_url,
'thumbnail': traverse_obj(attrs, ('data-video-image-url', {url_or_none})),
'description': traverse_obj(attrs, ('data-video-description', {str})),
'availability': self._availability(needs_premium=(attrs.get('data-access-type') == 'premium')),
'tags': traverse_obj(attrs, ('data-video-tags', {str_or_none}, {str.split(sep=',')}), default=[]),
'age_limit': (traverse_obj(attrs, ('data-adult', {lambda x: 18 if x == 'true' else 0}))
or traverse_obj(attrs, ('data-content-rating-option', {remove_end(end=' or more')}, {str_to_int}))),
'like_count': int_or_none(stream_meta.get('like')),
'dislike_count': int_or_none(stream_meta.get('dislike')),
'formats': formats,
'uploader': user.get('name'),
'timestamp': parse_iso8601(stream_meta.get('start_time')),
'uploader_id': username,
'uploader_url': format_field(username, None, 'https://www.vidio.com/@%s'),
}

yt_dlp/version.py

@@ -1,8 +1,8 @@
# Autogenerated by devscripts/update-version.py
__version__ = '2024.11.04'
__version__ = '2024.11.18'
RELEASE_GIT_HEAD = '197d0b03b6a3c8fe4fa5ace630eeffec629bf72c'
RELEASE_GIT_HEAD = '7ea2787920cccc6b8ea30791993d114fbd564434'
VARIANT = None
@@ -12,4 +12,4 @@ CHANNEL = 'stable'
ORIGIN = 'yt-dlp/yt-dlp'
_pkg_version = '2024.11.04'
_pkg_version = '2024.11.18'