Compare commits


No commits in common. "d492e5b934babc41c44e39d15266e5a335a8f0e5" and "249ae6be8689c6be2fe097706450c96bc0f7d591" have entirely different histories.

58 changed files with 834 additions and 2137 deletions

View File

@@ -12,7 +12,7 @@ body:
       options:
         - label: I'm reporting that yt-dlp is broken on a **supported** site
           required: true
-        - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
+        - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true

View File

@@ -12,7 +12,7 @@ body:
       options:
         - label: I'm reporting a new site support request
           required: true
-        - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
+        - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true

View File

@@ -12,7 +12,7 @@ body:
       options:
         - label: I'm requesting a site-specific feature
           required: true
-        - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
+        - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true

View File

@@ -12,7 +12,7 @@ body:
       options:
         - label: I'm reporting a bug unrelated to a specific site
           required: true
-        - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
+        - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true

View File

@@ -14,7 +14,7 @@ body:
           required: true
         - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
           required: true
-        - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
+        - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true

View File

@@ -20,7 +20,7 @@ body:
           required: true
         - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
           required: true
-        - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
+        - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
           required: true

View File

@@ -30,10 +30,6 @@ on:
       meta_files:
         default: true
         type: boolean
-      origin:
-        required: false
-        default: ''
-        type: string
     secrets:
       GPG_SIGNING_KEY:
         required: false
@@ -41,13 +37,11 @@ on:
   workflow_dispatch:
     inputs:
       version:
-        description: |
-          VERSION: yyyy.mm.dd[.rev] or rev
+        description: Version tag (YYYY.MM.DD[.REV])
         required: true
         type: string
       channel:
-        description: |
-          SOURCE of this build's updates: stable/nightly/master/<repo>
+        description: Update channel (stable/nightly/...)
         required: true
         default: stable
         type: string
@@ -79,34 +73,16 @@ on:
         description: SHA2-256SUMS, SHA2-512SUMS, _update_spec
         default: true
         type: boolean
-      origin:
-        description: .
-        required: false
-        default: ''
-        type: choice
-        options:
-          - ''

 permissions:
   contents: read

 jobs:
-  process:
-    runs-on: ubuntu-latest
-    outputs:
-      origin: ${{ steps.process_origin.outputs.origin }}
-    steps:
-      - name: Process origin
-        id: process_origin
-        run: |
-          echo "origin=${{ inputs.origin || github.repository }}" >> "$GITHUB_OUTPUT"
-
   unix:
-    needs: process
     if: inputs.unix
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
       - uses: actions/setup-python@v4
         with:
           python-version: "3.10"
@@ -120,21 +96,22 @@ jobs:
           auto-activate-base: false
       - name: Install Requirements
         run: |
-          sudo apt -y install zip pandoc man sed
+          sudo apt-get -y install zip pandoc man sed
+          python -m pip install -U pip setuptools wheel
+          python -m pip install -U Pyinstaller -r requirements.txt
           reqs=$(mktemp)
-          cat > "$reqs" << EOF
+          cat > $reqs << EOF
           python=3.10.*
           pyinstaller
           cffi
           brotli-python
-          secretstorage
           EOF
-          sed -E '/^(brotli|secretstorage).*/d' requirements.txt >> "$reqs"
-          mamba create -n build --file "$reqs"
+          sed '/^brotli.*/d' requirements.txt >> $reqs
+          mamba create -n build --file $reqs
       - name: Prepare
         run: |
-          python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
+          python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python devscripts/make_lazy_extractors.py
       - name: Build Unix platform-independent binary
         run: |
@@ -173,7 +150,6 @@ jobs:
            yt-dlp_linux.zip

   linux_arm:
-    needs: process
     if: inputs.linux_arm
     permissions:
       contents: read
@@ -186,7 +162,7 @@ jobs:
          - aarch64

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
        with:
          path: ./repo
      - name: Virtualized Install, Prepare & Build
@@ -209,7 +185,7 @@ jobs:
          run: |
            cd repo
            python3.8 -m pip install -U Pyinstaller -r requirements.txt  # Cached version may be out of date
-            python3.8 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
+            python3.8 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
            python3.8 devscripts/make_lazy_extractors.py
            python3.8 pyinst.py
@@ -230,12 +206,11 @@ jobs:
            repo/dist/yt-dlp_linux_${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}

   macos:
-    needs: process
     if: inputs.macos
     runs-on: macos-11

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
       # NB: Building universal2 does not work with python from actions/setup-python
       - name: Install Requirements
         run: |
@@ -246,7 +221,7 @@ jobs:
       - name: Prepare
         run: |
-          python3 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
+          python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python3 devscripts/make_lazy_extractors.py
       - name: Build
         run: |
@@ -272,12 +247,11 @@ jobs:
            dist/yt-dlp_macos.zip

   macos_legacy:
-    needs: process
     if: inputs.macos_legacy
     runs-on: macos-latest

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
       - name: Install Python
         # We need the official Python, because the GA ones only support newer macOS versions
         env:
@@ -298,7 +272,7 @@ jobs:
       - name: Prepare
         run: |
-          python3 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
+          python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python3 devscripts/make_lazy_extractors.py
       - name: Build
         run: |
@@ -322,12 +296,11 @@ jobs:
            dist/yt-dlp_macos_legacy

   windows:
-    needs: process
     if: inputs.windows
     runs-on: windows-latest

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
       - uses: actions/setup-python@v4
         with:  # 3.8 is used for Win7 support
           python-version: "3.8"
@@ -338,7 +311,7 @@ jobs:
       - name: Prepare
         run: |
-          python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
+          python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python devscripts/make_lazy_extractors.py
       - name: Build
         run: |
@@ -370,12 +343,11 @@ jobs:
            dist/yt-dlp_win.zip

   windows32:
-    needs: process
     if: inputs.windows32
     runs-on: windows-latest

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
       - uses: actions/setup-python@v4
         with:
           python-version: "3.8"
@@ -387,7 +359,7 @@ jobs:
       - name: Prepare
         run: |
-          python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
+          python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
           python devscripts/make_lazy_extractors.py
       - name: Build
         run: |
@@ -415,7 +387,6 @@ jobs:
   meta_files:
     if: inputs.meta_files && always() && !cancelled()
     needs:
-      - process
       - unix
       - linux_arm
       - macos

View File

@@ -29,7 +29,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3

       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL

View File

@@ -27,7 +27,7 @@ jobs:
           python-version: pypy-3.9
           run-tests-ext: bat
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:

View File

@@ -9,7 +9,7 @@ jobs:
     if: "contains(github.event.head_commit.message, 'ci run dl')"
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
@@ -39,7 +39,7 @@ jobs:
           python-version: pypy-3.9
           run-tests-ext: bat
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:

.github/workflows/publish.yml (new file)
View File

@@ -0,0 +1,97 @@
name: Publish
on:
  workflow_call:
    inputs:
      channel:
        default: stable
        required: true
        type: string
      version:
        required: true
        type: string
      target_commitish:
        required: true
        type: string
      prerelease:
        default: false
        required: true
        type: boolean
    secrets:
      ARCHIVE_REPO_TOKEN:
        required: false

permissions:
  contents: write

jobs:
  publish:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: actions/download-artifact@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"

      - name: Generate release notes
        run: |
          printf '%s' \
            '[![Installation](https://img.shields.io/badge/-Which%20file%20should%20I%20download%3F-white.svg?style=for-the-badge)]' \
            '(https://github.com/yt-dlp/yt-dlp#installation "Installation instructions") ' \
            '[![Documentation](https://img.shields.io/badge/-Docs-brightgreen.svg?style=for-the-badge&logo=GitBook&labelColor=555555)]' \
            '(https://github.com/yt-dlp/yt-dlp/tree/2023.03.04#readme "Documentation") ' \
            '[![Donate](https://img.shields.io/badge/_-Donate-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)]' \
            '(https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators "Donate") ' \
            '[![Discord](https://img.shields.io/discord/807245652072857610?color=blue&labelColor=555555&label=&logo=discord&style=for-the-badge)]' \
            '(https://discord.gg/H5MNcFW63r "Discord") ' \
            ${{ inputs.channel != 'nightly' && '"[![Nightly](https://img.shields.io/badge/Get%20nightly%20builds-purple.svg?style=for-the-badge)]" \
            "(https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest \"Nightly builds\")"' || '' }} \
            > ./RELEASE_NOTES
          printf '\n\n' >> ./RELEASE_NOTES
          cat >> ./RELEASE_NOTES << EOF
          #### A description of the various files are in the [README](https://github.com/yt-dlp/yt-dlp#release-files)
          ---
          $(python ./devscripts/make_changelog.py -vv --collapsible)
          EOF
          printf '%s\n\n' '**This is an automated nightly pre-release build**' >> ./NIGHTLY_NOTES
          cat ./RELEASE_NOTES >> ./NIGHTLY_NOTES
          printf '%s\n\n' 'Generated from: https://github.com/${{ github.repository }}/commit/${{ inputs.target_commitish }}' >> ./ARCHIVE_NOTES
          cat ./RELEASE_NOTES >> ./ARCHIVE_NOTES

      - name: Archive nightly release
        env:
          GH_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
          GH_REPO: ${{ vars.ARCHIVE_REPO }}
        if: |
          inputs.channel == 'nightly' && env.GH_TOKEN != '' && env.GH_REPO != ''
        run: |
          gh release create \
            --notes-file ARCHIVE_NOTES \
            --title "yt-dlp nightly ${{ inputs.version }}" \
            ${{ inputs.version }} \
            artifact/*

      - name: Prune old nightly release
        if: inputs.channel == 'nightly' && !vars.ARCHIVE_REPO
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh release delete --yes --cleanup-tag "nightly" || true
          git tag --delete "nightly" || true
          sleep 5 # Enough time to cover deletion race condition

      - name: Publish release${{ inputs.channel == 'nightly' && ' (nightly)' || '' }}
        env:
          GH_TOKEN: ${{ github.token }}
        if: (inputs.channel == 'nightly' && !vars.ARCHIVE_REPO) || inputs.channel != 'nightly'
        run: |
          gh release create \
            --notes-file ${{ inputs.channel == 'nightly' && 'NIGHTLY_NOTES' || 'RELEASE_NOTES' }} \
            --target ${{ inputs.target_commitish }} \
            --title "yt-dlp ${{ inputs.channel == 'nightly' && 'nightly ' || '' }}${{ inputs.version }}" \
            ${{ inputs.prerelease && '--prerelease' || '' }} \
            ${{ inputs.channel == 'nightly' && '"nightly"' || inputs.version }} \
            artifact/*

View File

@@ -9,7 +9,7 @@ jobs:
     if: "!contains(github.event.head_commit.message, 'ci skip all')"
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
      - name: Set up Python 3.11
        uses: actions/setup-python@v4
        with:
@@ -25,7 +25,7 @@ jobs:
     if: "!contains(github.event.head_commit.message, 'ci skip all')"
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
      - name: Install flake8
        run: pip install flake8

View File

@@ -1,28 +0,0 @@
name: Release (master)
on:
  push:
    branches:
      - master
    paths:
      - "yt_dlp/**.py"
      - "!yt_dlp/version.py"
      - "setup.py"
      - "pyinst.py"
concurrency:
  group: release-master
  cancel-in-progress: true
permissions:
  contents: read

jobs:
  release:
    if: vars.BUILD_MASTER != ''
    uses: ./.github/workflows/release.yml
    with:
      prerelease: true
      source: master
    permissions:
      contents: write
      packages: write
      id-token: write # mandatory for trusted publishing
    secrets: inherit

View File

@@ -1,35 +1,52 @@
 name: Release (nightly)
 on:
-  schedule:
-    - cron: '23 23 * * *'
+  push:
+    branches:
+      - master
+    paths:
+      - "yt_dlp/**.py"
+      - "!yt_dlp/version.py"
+concurrency:
+  group: release-nightly
+  cancel-in-progress: true
 permissions:
   contents: read

 jobs:
-  check_nightly:
+  prepare:
     if: vars.BUILD_NIGHTLY != ''
     runs-on: ubuntu-latest
     outputs:
-      commit: ${{ steps.check_for_new_commits.outputs.commit }}
+      version: ${{ steps.get_version.outputs.version }}
     steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-      - name: Check for new commits
-        id: check_for_new_commits
-        run: |
-          relevant_files=("yt_dlp/*.py" ':!yt_dlp/version.py' "setup.py" "pyinst.py")
-          echo "commit=$(git log --format=%H -1 --since="24 hours ago" -- "${relevant_files[@]}")" | tee "$GITHUB_OUTPUT"
+      - uses: actions/checkout@v3
+      - name: Get version
+        id: get_version
+        run: |
+          python devscripts/update-version.py "$(date -u +"%H%M%S")" | grep -Po "version=\d+(\.\d+){3}" >> "$GITHUB_OUTPUT"

-  release:
-    needs: [check_nightly]
-    if: ${{ needs.check_nightly.outputs.commit }}
-    uses: ./.github/workflows/release.yml
+  build:
+    needs: prepare
+    uses: ./.github/workflows/build.yml
     with:
-      prerelease: true
-      source: nightly
+      version: ${{ needs.prepare.outputs.version }}
+      channel: nightly
+    permissions:
+      contents: read
+      packages: write # For package cache
+    secrets:
+      GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
+
+  publish:
+    needs: [prepare, build]
+    uses: ./.github/workflows/publish.yml
+    secrets:
+      ARCHIVE_REPO_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
     permissions:
       contents: write
-      packages: write
-      id-token: write # mandatory for trusted publishing
-    secrets: inherit
+    with:
+      channel: nightly
+      prerelease: true
+      version: ${{ needs.prepare.outputs.version }}
+      target_commitish: ${{ github.sha }}

View File

@@ -1,45 +1,14 @@
 name: Release
 on:
-  workflow_call:
-    inputs:
-      prerelease:
-        required: false
-        default: true
-        type: boolean
-      source:
-        required: false
-        default: ''
-        type: string
-      target:
-        required: false
-        default: ''
-        type: string
-      version:
-        required: false
-        default: ''
-        type: string
   workflow_dispatch:
     inputs:
-      source:
-        description: |
-          SOURCE of this release's updates:
-          channel, repo, tag, or channel/repo@tag
-          (default: <current_repo>)
-        required: false
-        default: ''
-        type: string
-      target:
-        description: |
-          TARGET to publish this release to:
-          channel, tag, or channel@tag
-          (default: <source> if writable else <current_repo>[@source_tag])
-        required: false
-        default: ''
-        type: string
       version:
-        description: |
-          VERSION: yyyy.mm.dd[.rev] or rev
-          (default: auto-generated)
+        description: Version tag (YYYY.MM.DD[.REV])
+        required: false
+        default: ''
+        type: string
+      channel:
+        description: Update channel (stable/nightly/...)
         required: false
         default: ''
         type: string
@@ -57,18 +26,12 @@ jobs:
       contents: write
     runs-on: ubuntu-latest
     outputs:
-      channel: ${{ steps.setup_variables.outputs.channel }}
-      version: ${{ steps.setup_variables.outputs.version }}
-      target_repo: ${{ steps.setup_variables.outputs.target_repo }}
-      target_repo_token: ${{ steps.setup_variables.outputs.target_repo_token }}
-      target_tag: ${{ steps.setup_variables.outputs.target_tag }}
-      pypi_project: ${{ steps.setup_variables.outputs.pypi_project }}
-      pypi_suffix: ${{ steps.setup_variables.outputs.pypi_suffix }}
-      pypi_token: ${{ steps.setup_variables.outputs.pypi_token }}
+      channel: ${{ steps.set_channel.outputs.channel }}
+      version: ${{ steps.update_version.outputs.version }}
       head_sha: ${{ steps.get_target.outputs.head_sha }}

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
@@ -76,133 +39,25 @@ jobs:
        with:
          python-version: "3.10"

-      - name: Process inputs
-        id: process_inputs
+      - name: Set channel
+        id: set_channel
         run: |
-          cat << EOF
-          ::group::Inputs
-          prerelease=${{ inputs.prerelease }}
-          source=${{ inputs.source }}
-          target=${{ inputs.target }}
-          version=${{ inputs.version }}
-          ::endgroup::
-          EOF
-          IFS='@' read -r source_repo source_tag <<<"${{ inputs.source }}"
-          IFS='@' read -r target_repo target_tag <<<"${{ inputs.target }}"
-          cat << EOF >> "$GITHUB_OUTPUT"
-          source_repo=${source_repo}
-          source_tag=${source_tag}
-          target_repo=${target_repo}
-          target_tag=${target_tag}
-          EOF
+          CHANNEL="${{ github.repository == 'yt-dlp/yt-dlp' && 'stable' || github.repository }}"
+          echo "channel=${{ inputs.channel || '$CHANNEL' }}" > "$GITHUB_OUTPUT"

-      - name: Setup variables
-        id: setup_variables
-        env:
-          source_repo: ${{ steps.process_inputs.outputs.source_repo }}
-          source_tag: ${{ steps.process_inputs.outputs.source_tag }}
-          target_repo: ${{ steps.process_inputs.outputs.target_repo }}
-          target_tag: ${{ steps.process_inputs.outputs.target_tag }}
+      - name: Update version
+        id: update_version
         run: |
-          # unholy bash monstrosity (sincere apologies)
-          fallback_token () {
-            if ${{ !secrets.ARCHIVE_REPO_TOKEN }}; then
-              echo "::error::Repository access secret ${target_repo_token^^} not found"
-              exit 1
-            fi
-            target_repo_token=ARCHIVE_REPO_TOKEN
-            return 0
-          }
-          source_is_channel=0
-          [[ "${source_repo}" == 'stable' ]] && source_repo='yt-dlp/yt-dlp'
-          if [[ -z "${source_repo}" ]]; then
-            source_repo='${{ github.repository }}'
-          elif [[ '${{ vars[format('{0}_archive_repo', env.source_repo)] }}' ]]; then
-            source_is_channel=1
-            source_channel='${{ vars[format('{0}_archive_repo', env.source_repo)] }}'
-          elif [[ -z "${source_tag}" && "${source_repo}" != */* ]]; then
-            source_tag="${source_repo}"
-            source_repo='${{ github.repository }}'
-          fi
-          resolved_source="${source_repo}"
-          if [[ "${source_tag}" ]]; then
-            resolved_source="${resolved_source}@${source_tag}"
-          elif [[ "${source_repo}" == 'yt-dlp/yt-dlp' ]]; then
-            resolved_source='stable'
-          fi
-          revision="${{ (inputs.prerelease || !vars.PUSH_VERSION_COMMIT) && '$(date -u +"%H%M%S")' || '' }}"
-          version="$(
-            python devscripts/update-version.py \
-              -c "${resolved_source}" -r "${{ github.repository }}" ${{ inputs.version || '$revision' }} | \
-              grep -Po "version=\K\d+\.\d+\.\d+(\.\d+)?")"
-          if [[ "${target_repo}" ]]; then
-            if [[ -z "${target_tag}" ]]; then
-              if [[ '${{ vars[format('{0}_archive_repo', env.target_repo)] }}' ]]; then
-                target_tag="${source_tag:-${version}}"
-              else
-                target_tag="${target_repo}"
-                target_repo='${{ github.repository }}'
-              fi
-            fi
-            if [[ "${target_repo}" != '${{ github.repository}}' ]]; then
-              target_repo='${{ vars[format('{0}_archive_repo', env.target_repo)] }}'
-              target_repo_token='${{ env.target_repo }}_archive_repo_token'
-              ${{ !!secrets[format('{0}_archive_repo_token', env.target_repo)] }} || fallback_token
-              pypi_project='${{ vars[format('{0}_pypi_project', env.target_repo)] }}'
-              pypi_suffix='${{ vars[format('{0}_pypi_suffix', env.target_repo)] }}'
-              ${{ !secrets[format('{0}_pypi_token', env.target_repo)] }} || pypi_token='${{ env.target_repo }}_pypi_token'
-            fi
-          else
-            target_tag="${source_tag:-${version}}"
-            if ((source_is_channel)); then
-              target_repo="${source_channel}"
-              target_repo_token='${{ env.source_repo }}_archive_repo_token'
-              ${{ !!secrets[format('{0}_archive_repo_token', env.source_repo)] }} || fallback_token
-              pypi_project='${{ vars[format('{0}_pypi_project', env.source_repo)] }}'
-              pypi_suffix='${{ vars[format('{0}_pypi_suffix', env.source_repo)] }}'
-              ${{ !secrets[format('{0}_pypi_token', env.source_repo)] }} || pypi_token='${{ env.source_repo }}_pypi_token'
-            else
-              target_repo='${{ github.repository }}'
-            fi
-          fi
-          if [[ "${target_repo}" == '${{ github.repository }}' ]] && ${{ !inputs.prerelease }}; then
-            pypi_project='${{ vars.PYPI_PROJECT }}'
-          fi
-          if [[ -z "${pypi_token}" && "${pypi_project}" ]]; then
-            if ${{ !secrets.PYPI_TOKEN }}; then
-              pypi_token=OIDC
-            else
-              pypi_token=PYPI_TOKEN
-            fi
-          fi
-          echo "::group::Output variables"
-          cat << EOF | tee -a "$GITHUB_OUTPUT"
-          channel=${resolved_source}
-          version=${version}
-          target_repo=${target_repo}
-          target_repo_token=${target_repo_token}
-          target_tag=${target_tag}
-          pypi_project=${pypi_project}
-          pypi_suffix=${pypi_suffix}
-          pypi_token=${pypi_token}
-          EOF
-          echo "::endgroup::"
+          REVISION="${{ vars.PUSH_VERSION_COMMIT == '' && '$(date -u +"%H%M%S")' || '' }}"
+          REVISION="${{ inputs.prerelease && '$(date -u +"%H%M%S")' || '$REVISION' }}"
+          python devscripts/update-version.py ${{ inputs.version || '$REVISION' }} | \
+            grep -Po "version=\d+\.\d+\.\d+(\.\d+)?" >> "$GITHUB_OUTPUT"

       - name: Update documentation
-        env:
-          version: ${{ steps.setup_variables.outputs.version }}
-          target_repo: ${{ steps.setup_variables.outputs.target_repo }}
-        if: |
-          !inputs.prerelease && env.target_repo == github.repository
         run: |
           make doc
           sed '/### /Q' Changelog.md >> ./CHANGELOG
-          echo '### ${{ env.version }}' >> ./CHANGELOG
+          echo '### ${{ steps.update_version.outputs.version }}' >> ./CHANGELOG
           python ./devscripts/make_changelog.py -vv -c >> ./CHANGELOG
           echo >> ./CHANGELOG
           grep -Poz '(?s)### \d+\.\d+\.\d+.+' 'Changelog.md' | head -n -1 >> ./CHANGELOG
@@ -210,16 +65,12 @@ jobs:
       - name: Push to release
         id: push_release
-        env:
-          version: ${{ steps.setup_variables.outputs.version }}
-          target_repo: ${{ steps.setup_variables.outputs.target_repo }}
-        if: |
-          !inputs.prerelease && env.target_repo == github.repository
+        if: ${{ !inputs.prerelease }}
         run: |
           git config --global user.name github-actions
-          git config --global user.email github-actions@github.com
+          git config --global user.email github-actions@example.com
           git add -u
-          git commit -m "Release ${{ env.version }}" \
+          git commit -m "Release ${{ steps.update_version.outputs.version }}" \
            -m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl"
           git push origin --force ${{ github.event.ref }}:release
@@ -229,10 +80,7 @@ jobs:
          echo "head_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"

       - name: Update master
-        env:
-          target_repo: ${{ steps.setup_variables.outputs.target_repo }}
-        if: |
-          vars.PUSH_VERSION_COMMIT != '' && !inputs.prerelease && env.target_repo == github.repository
+        if: vars.PUSH_VERSION_COMMIT != '' && !inputs.prerelease
         run: git push origin ${{ github.event.ref }}

   build:
@@ -241,159 +89,75 @@ jobs:
     with:
       version: ${{ needs.prepare.outputs.version }}
       channel: ${{ needs.prepare.outputs.channel }}
-      origin: ${{ needs.prepare.outputs.target_repo }}
     permissions:
       contents: read
       packages: write # For package cache
     secrets:
       GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}

-  publish_pypi:
+  publish_pypi_homebrew:
     needs: [prepare, build]
-    if: ${{ needs.prepare.outputs.pypi_project }}
     runs-on: ubuntu-latest
-    permissions:
-      id-token: write # mandatory for trusted publishing

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"

       - name: Install Requirements
         run: |
-          sudo apt -y install pandoc man
+          sudo apt-get -y install pandoc man
           python -m pip install -U pip setuptools wheel twine
           python -m pip install -U -r requirements.txt

       - name: Prepare
-        env:
-          version: ${{ needs.prepare.outputs.version }}
-          suffix: ${{ needs.prepare.outputs.pypi_suffix }}
-          channel: ${{ needs.prepare.outputs.channel }}
-          target_repo: ${{ needs.prepare.outputs.target_repo }}
-          pypi_project: ${{ needs.prepare.outputs.pypi_project }}
         run: |
-          python devscripts/update-version.py -c "${{ env.channel }}" -r "${{ env.target_repo }}" -s "${{ env.suffix }}" "${{ env.version }}"
+          python devscripts/update-version.py ${{ needs.prepare.outputs.version }}
           python devscripts/make_lazy_extractors.py
-          sed -i -E "s/(name=')[^']+(', # package name)/\1${{ env.pypi_project }}\2/" setup.py

-      - name: Build
+      - name: Build and publish on PyPI
+        env:
+          TWINE_USERNAME: __token__
+          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
+        if: env.TWINE_PASSWORD != '' && !inputs.prerelease
         run: |
           rm -rf dist/*
           make pypi-files
           python devscripts/set-variant.py pip -M "You installed yt-dlp with pip or using the wheel from PyPi; Use that to update"
           python setup.py sdist bdist_wheel
-      - name: Publish to PyPI via token
-        env:
-          TWINE_USERNAME: __token__
-          TWINE_PASSWORD: ${{ secrets[needs.prepare.outputs.pypi_token] }}
-        if: |
-          needs.prepare.outputs.pypi_token != 'OIDC' && env.TWINE_PASSWORD
-        run: |
           twine upload dist/*
-      - name: Publish to PyPI via trusted publishing
-        if: |
-          needs.prepare.outputs.pypi_token == 'OIDC'
-        uses: pypa/gh-action-pypi-publish@release/v1
+
+      - name: Checkout Homebrew repository
+        env:
+          BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
+          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
+        if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != '' && !inputs.prerelease
+        uses: actions/checkout@v3
         with:
-          verbose: true
+          repository: yt-dlp/homebrew-taps
+          path: taps
+          ssh-key: ${{ secrets.BREW_TOKEN }}
+      - name: Update Homebrew Formulae
+        env:
+          BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
+          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
+        if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != '' && !inputs.prerelease
+        run: |
+          python devscripts/update-formulae.py taps/Formula/yt-dlp.rb "${{ needs.prepare.outputs.version }}"
+          git -C taps/ config user.name github-actions
+          git -C taps/ config user.email github-actions@example.com
+          git -C taps/ commit -am 'yt-dlp: ${{ needs.prepare.outputs.version }}'
+          git -C taps/ push

   publish:
     needs: [prepare, build]
+    uses: ./.github/workflows/publish.yml
     permissions:
       contents: write
-    runs-on: ubuntu-latest
-
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-      - uses: actions/download-artifact@v3
-      - uses: actions/setup-python@v4
-        with:
-          python-version: "3.10"
-
-      - name: Generate release notes
-        env:
-          head_sha: ${{ needs.prepare.outputs.head_sha }}
-          target_repo: ${{ needs.prepare.outputs.target_repo }}
-          target_tag: ${{ needs.prepare.outputs.target_tag }}
-        run: |
-          printf '%s' \
-            '[![Installation](https://img.shields.io/badge/-Which%20file%20should%20I%20download%3F-white.svg?style=for-the-badge)]' \
-            '(https://github.com/${{ github.repository }}#installation "Installation instructions") ' \
-            '[![Documentation](https://img.shields.io/badge/-Docs-brightgreen.svg?style=for-the-badge&logo=GitBook&labelColor=555555)]' \
-            '(https://github.com/${{ github.repository }}' \
-            '${{ env.target_repo == github.repository && format('/tree/{0}', env.target_tag) || '' }}#readme "Documentation") ' \
-            '[![Donate](https://img.shields.io/badge/_-Donate-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)]' \
-            '(https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators "Donate") ' \
-            '[![Discord](https://img.shields.io/discord/807245652072857610?color=blue&labelColor=555555&label=&logo=discord&style=for-the-badge)]' \
-            '(https://discord.gg/H5MNcFW63r "Discord") ' \
-            ${{ env.target_repo == 'yt-dlp/yt-dlp' && '\
-            "[![Nightly](https://img.shields.io/badge/Get%20nightly%20builds-purple.svg?style=for-the-badge)]" \
-            "(https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest \"Nightly builds\") " \
-            "[![Master](https://img.shields.io/badge/Get%20master%20builds-lightblue.svg?style=for-the-badge)]" \
-            "(https://github.com/yt-dlp/yt-dlp-master-builds/releases/latest \"Master builds\")"' || '' }} > ./RELEASE_NOTES
-          printf '\n\n' >> ./RELEASE_NOTES
-          cat >> ./RELEASE_NOTES << EOF
-          #### A description of the various files are in the [README](https://github.com/${{ github.repository }}#release-files)
-          ---
-          $(python ./devscripts/make_changelog.py -vv --collapsible)
-          EOF
-          printf '%s\n\n' '**This is a pre-release build**' >> ./PRERELEASE_NOTES
-          cat ./RELEASE_NOTES >> ./PRERELEASE_NOTES
-          printf '%s\n\n' 'Generated from: https://github.com/${{ github.repository }}/commit/${{ env.head_sha }}' >> ./ARCHIVE_NOTES
-          cat ./RELEASE_NOTES >> ./ARCHIVE_NOTES
-      - name: Publish to archive repo
-        env:
-          GH_TOKEN: ${{ secrets[needs.prepare.outputs.target_repo_token] }}
-          GH_REPO: ${{ needs.prepare.outputs.target_repo }}
-          version: ${{ needs.prepare.outputs.version }}
-          channel: ${{ needs.prepare.outputs.channel }}
-        if: |
-          inputs.prerelease && env.GH_TOKEN != '' && env.GH_REPO != '' && env.GH_REPO != github.repository
-        run: |
-          title="${{ startswith(env.GH_REPO, 'yt-dlp/') && 'yt-dlp ' || '' }}${{ env.channel }}"
-          gh release create \
-            --notes-file ARCHIVE_NOTES \
-            --title "${title} ${{ env.version }}" \
-            ${{ env.version }} \
-            artifact/*
-      - name: Prune old release
-        env:
-          GH_TOKEN: ${{ github.token }}
-          version: ${{ needs.prepare.outputs.version }}
-          target_repo: ${{ needs.prepare.outputs.target_repo }}
-          target_tag: ${{ needs.prepare.outputs.target_tag }}
-        if: |
-          env.target_repo == github.repository && env.target_tag != env.version
-        run: |
-          gh release delete --yes --cleanup-tag "${{ env.target_tag }}" || true
-          git tag --delete "${{ env.target_tag }}" || true
-          sleep 5 # Enough time to cover deletion race condition
-      - name: Publish release
-        env:
-          GH_TOKEN: ${{ github.token }}
-          version: ${{ needs.prepare.outputs.version }}
-          target_repo: ${{ needs.prepare.outputs.target_repo }}
-          target_tag: ${{ needs.prepare.outputs.target_tag }}
-          head_sha: ${{ needs.prepare.outputs.head_sha }}
-        if: |
-          env.target_repo == github.repository
-        run: |
-          title="${{ github.repository == 'yt-dlp/yt-dlp' && 'yt-dlp ' || '' }}"
-          title+="${{ env.target_tag != env.version && format('{0} ', env.target_tag) || '' }}"
-          gh release create \
-            --notes-file ${{ inputs.prerelease && 'PRERELEASE_NOTES' || 'RELEASE_NOTES' }} \
-            --target ${{ env.head_sha }} \
-            --title "${title}${{ env.version }}" \
-            ${{ inputs.prerelease && '--prerelease' || '' }} \
-            ${{ env.target_tag }} \
-            artifact/*
+    with:
+      channel: ${{ needs.prepare.outputs.channel }}
+      prerelease: ${{ inputs.prerelease }}
+      version: ${{ needs.prepare.outputs.version }}
+      target_commitish: ${{ needs.prepare.outputs.head_sha }}

View File

@@ -121,7 +121,7 @@ yt-dlp is a [youtube-dl](https://github.com/ytdl-org/youtube-dl) fork based on t
 * **Self updater**: The releases can be updated using `yt-dlp -U`, and downgraded using `--update-to` if required
-* **Automated builds**: [Nightly/master builds](#update-channels) can be used with `--update-to nightly` and `--update-to master`
+* **Nightly builds**: [Automated nightly builds](#update-channels) can be used with `--update-to nightly`

 See [changelog](Changelog.md) or [commits](https://github.com/yt-dlp/yt-dlp/commits) for the full list of changes
@@ -157,7 +157,6 @@ Some of yt-dlp's default options are different from that of youtube-dl and youtu
 * yt-dlp's sanitization of invalid characters in filenames is different/smarter than in youtube-dl. You can use `--compat-options filename-sanitization` to revert to youtube-dl's behavior
 * yt-dlp tries to parse the external downloader outputs into the standard progress output if possible (Currently implemented: [~~aria2c~~](https://github.com/yt-dlp/yt-dlp/issues/5931)). You can use `--compat-options no-external-downloader-progress` to get the downloader output as-is
 * yt-dlp versions between 2021.09.01 and 2023.01.02 applies `--match-filter` to nested playlists. This was an unintentional side-effect of [8f18ac](https://github.com/yt-dlp/yt-dlp/commit/8f18aca8717bb0dd49054555af8d386e5eda3a88) and is fixed in [d7b460](https://github.com/yt-dlp/yt-dlp/commit/d7b460d0e5fc710950582baed2e3fc616ed98a80). Use `--compat-options playlist-match-filter` to revert this
-* yt-dlp versions between 2021.11.10 and 2023.06.21 estimated `filesize_approx` values for fragmented/manifest formats. This was added for convenience in [f2fe69](https://github.com/yt-dlp/yt-dlp/commit/f2fe69c7b0d208bdb1f6292b4ae92bc1e1a7444a), but was reverted in [0dff8e](https://github.com/yt-dlp/yt-dlp/commit/0dff8e4d1e6e9fb938f4256ea9af7d81f42fd54f) due to the potentially extreme inaccuracy of the estimated values. Use `--compat-options manifest-filesize-approx` to keep extracting the estimated values
 * yt-dlp uses modern http client backends such as `requests`. Use `--compat-options prefer-legacy-http-handler` to prefer the legacy http handler (`urllib`) to be used for standard http requests.

 For ease of use, a few more compat options are available:
@@ -193,11 +192,9 @@ For other third-party package managers, see [the wiki](https://github.com/yt-dlp
 <a id="update-channels"/>

-There are currently three release channels for binaries: `stable`, `nightly` and `master`.
+There are currently two release channels for binaries, `stable` and `nightly`.

-* `stable` is the default channel, and many of its changes have been tested by users of the `nightly` and `master` channels.
-* The `nightly` channel has releases scheduled to build every day around midnight UTC, for a snapshot of the project's new patches and changes. This is the **recommended channel for regular users** of yt-dlp. The `nightly` releases are available from [yt-dlp/yt-dlp-nightly-builds](https://github.com/yt-dlp/yt-dlp-nightly-builds/releases) or as development releases of the `yt-dlp` PyPI package (which can be installed with pip's `--pre` flag).
-* The `master` channel features releases that are built after each push to the master branch, and these will have the very latest fixes and additions, but may also be more prone to regressions. They are available from [yt-dlp/yt-dlp-master-builds](https://github.com/yt-dlp/yt-dlp-master-builds/releases).
+`stable` is the default channel, and many of its changes have been tested by users of the nightly channel.
+The `nightly` channel has releases built after each push to the master branch, and will have the most recent fixes and additions, but also have more risk of regressions. They are available in [their own repo](https://github.com/yt-dlp/yt-dlp-nightly-builds/releases).

 When using `--update`/`-U`, a release binary will only update to its current channel.
 `--update-to CHANNEL` can be used to switch to a different channel when a newer version is available. `--update-to [CHANNEL@]TAG` can also be used to upgrade or downgrade to specific tags from a channel.
@@ -205,19 +202,10 @@ When using `--update`/`-U`, a release binary will only update to its current cha
 You may also use `--update-to <repository>` (`<owner>/<repository>`) to update to a channel on a completely different repository. Be careful with what repository you are updating to though, there is no verification done for binaries from different repositories.

 Example usage:
-* `yt-dlp --update-to master` switch to the `master` channel and update to its latest release
-* `yt-dlp --update-to stable@2023.07.06` upgrade/downgrade to release to `stable` channel tag `2023.07.06`
-* `yt-dlp --update-to 2023.10.07` upgrade/downgrade to tag `2023.10.07` if it exists on the current channel
-* `yt-dlp --update-to example/yt-dlp@2023.09.24` upgrade/downgrade to the release from the `example/yt-dlp` repository, tag `2023.09.24`
-
-**Important**: Any user experiencing an issue with the `stable` release should install or update to the `nightly` release before submitting a bug report:
-```
-# To update to nightly from stable executable/binary:
-yt-dlp --update-to nightly
-
-# To install nightly with pip:
-python -m pip install -U --pre yt-dlp
-```
+* `yt-dlp --update-to nightly` change to `nightly` channel and update to its latest release
+* `yt-dlp --update-to stable@2023.02.17` upgrade/downgrade to release to `stable` channel tag `2023.02.17`
+* `yt-dlp --update-to 2023.01.06` upgrade/downgrade to tag `2023.01.06` if it exists on the current channel
+* `yt-dlp --update-to example/yt-dlp@2023.03.01` upgrade/downgrade to the release from the `example/yt-dlp` repository, tag `2023.03.01`

 <!-- MANPAGE: BEGIN EXCLUDED SECTION -->
 ## RELEASE FILES
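
An illustrative sketch (not yt-dlp's actual parser, which lives in `yt_dlp/update.py`) of how an `--update-to` target such as `stable@2023.02.17` or `example/yt-dlp@2023.03.01` splits into a channel/repository part and an optional tag:

```python
# Hypothetical helper for illustration only; the real parsing is done
# inside yt_dlp/update.py and handles more edge cases.
def parse_update_target(target):
    prefix, sep, tag = target.partition('@')  # CHANNEL or OWNER/REPO, optionally @TAG
    return prefix or None, (tag if sep else None)

for target in ('nightly', 'stable@2023.02.17', 'example/yt-dlp@2023.03.01'):
    print(target, '->', parse_update_target(target))
```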

View File

@@ -12,6 +12,7 @@ import re
 from devscripts.utils import (
     get_filename_args,
     read_file,
+    read_version,
     write_file,
 )
@@ -34,18 +35,19 @@ VERBOSE_TMPL = '''
       description: |
         It should start like this:
       placeholder: |
-        [debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc']
+        [debug] Command-line config: ['-vU', 'test:youtube']
+        [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe)
+        [debug] yt-dlp version %(version)s [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
+        [debug] Checking exe version: ffmpeg -bsfs
+        [debug] Checking exe version: ffprobe -bsfs
         [debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
-        [debug] Request Handlers: urllib, requests
-        [debug] Loaded 1893 extractors
-        [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
-        yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
-        [youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
+        [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
+        Latest version: %(version)s, Current version: %(version)s
+        yt-dlp is up to date (%(version)s)
         <more lines>
       render: shell
     validations:
@@ -64,7 +66,7 @@ NO_SKIP = '''
 def main():
-    fields = {'no_skip': NO_SKIP}
+    fields = {'version': read_version(), 'no_skip': NO_SKIP}
     fields['verbose'] = VERBOSE_TMPL % fields
     fields['verbose_optional'] = re.sub(r'(\n\s+validations:)?\n\s+required: true', '', fields['verbose'])
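
For context, the `%(version)s` placeholders in the template above are ordinary Python `%`-formatting keys filled from the `fields` dict built in `main()`; a minimal sketch of that substitution (the dict value here is illustrative):

```python
# One placeholder line from the template, formatted the same way
# make_issue_template.py formats the whole template with `VERBOSE_TMPL % fields`.
TMPL = "[debug] yt-dlp version %(version)s [9d339c4] (win32_exe)"
fields = {'version': '2023.03.04'}  # illustrative value; the script uses read_version()
print(TMPL % fields)  # -> [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
```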

View File

@@ -0,0 +1,39 @@
#!/usr/bin/env python3

"""
Usage: python3 ./devscripts/update-formulae.py <path-to-formulae-rb> <version>
version can be either 0-aligned (yt-dlp version) or normalized (PyPi version)
"""

# Allow direct execution
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import json
import re
import urllib.request

from devscripts.utils import read_file, write_file

filename, version = sys.argv[1:]

normalized_version = '.'.join(str(int(x)) for x in version.split('.'))

pypi_release = json.loads(urllib.request.urlopen(
    'https://pypi.org/pypi/yt-dlp/%s/json' % normalized_version
).read().decode())

tarball_file = next(x for x in pypi_release['urls'] if x['filename'].endswith('.tar.gz'))
sha256sum = tarball_file['digests']['sha256']
url = tarball_file['url']

formulae_text = read_file(filename)

formulae_text = re.sub(r'sha256 "[0-9a-f]*?"', 'sha256 "%s"' % sha256sum, formulae_text, count=1)
formulae_text = re.sub(r'url "[^"]*?"', 'url "%s"' % url, formulae_text, count=1)

write_file(filename, formulae_text)
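
The `normalized_version` expression above converts yt-dlp's zero-padded tags into the un-padded form PyPI uses; run on a sample tag:

```python
# Same expression as in update-formulae.py, applied to a sample version tag:
# zero-padded components ('03') become bare integers ('3').
version = '2023.03.04'
normalized_version = '.'.join(str(int(x)) for x in version.split('.'))
print(normalized_version)  # -> 2023.3.4
```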

View File

@@ -20,7 +20,7 @@ def get_new_version(version, revision):
         version = datetime.now(timezone.utc).strftime('%Y.%m.%d')

     if revision:
-        assert revision.isdecimal(), 'Revision must be a number'
+        assert revision.isdigit(), 'Revision must be a number'
     else:
         old_version = read_version().split('.')
         if version.split('.') == old_version[:3]:
@@ -46,10 +46,6 @@ VARIANT = None
 UPDATE_HINT = None

 CHANNEL = {channel!r}
-
-ORIGIN = {origin!r}
-
-_pkg_version = {package_version!r}
 '''

 if __name__ == '__main__':
@@ -57,12 +53,6 @@ if __name__ == '__main__':
     parser.add_argument(
         '-c', '--channel', default='stable',
         help='Select update channel (default: %(default)s)')
-    parser.add_argument(
-        '-r', '--origin', default='local',
-        help='Select origin/repository (default: %(default)s)')
-    parser.add_argument(
-        '-s', '--suffix', default='',
-        help='Add an alphanumeric suffix to the package version, e.g. "dev"')
     parser.add_argument(
         '-o', '--output', default='yt_dlp/version.py',
         help='The output file to write to (default: %(default)s)')
@@ -76,7 +66,6 @@ if __name__ == '__main__':
         args.version if args.version and '.' in args.version
         else get_new_version(None, args.version))
     write_file(args.output, VERSION_TEMPLATE.format(
-        version=version, git_head=git_head, channel=args.channel, origin=args.origin,
-        package_version=f'{version}{args.suffix}'))
+        version=version, git_head=git_head, channel=args.channel))
     print(f'version={version} ({args.channel}), head={git_head}')
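
The `isdecimal()`/`isdigit()` change above is subtle: `isdigit()` also accepts characters such as superscripts that `int()` cannot parse, while `isdecimal()` is restricted to characters that form plain integers. A small demonstration:

```python
print('123456'.isdigit(), '123456'.isdecimal())  # True True
print('²'.isdigit(), '²'.isdecimal())            # True False
# int('²') raises ValueError, so isdecimal() is the safer guard before int()
```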

View File

@@ -13,11 +13,10 @@ def write_file(fname, content, mode='w'):
         return f.write(content)


-def read_version(fname='yt_dlp/version.py', varname='__version__'):
+def read_version(fname='yt_dlp/version.py'):
     """Get the version without importing the package"""
-    items = {}
-    exec(compile(read_file(fname), fname, 'exec'), items)
-    return items[varname]
+    exec(compile(read_file(fname), fname, 'exec'))
+    return locals()['__version__']


 def get_filename_args(has_infile=False, default_outfile=None):
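
The two `read_version` variants differ in how the name assigned by `exec()` is retrieved: inside a function, names created by `exec()` are not reliably visible through `locals()`, which is why the left-hand version passes an explicit namespace dict. A minimal standalone sketch of that pattern:

```python
# `code` stands in for the contents of yt_dlp/version.py
code = "__version__ = '2023.12.30'"

def read_version_explicit():
    items = {}
    exec(compile(code, '<version.py>', 'exec'), items)  # assignments land in `items`
    return items['__version__']

print(read_version_explicit())  # -> 2023.12.30
```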

View File

@@ -1,9 +1,8 @@
 mutagen
 pycryptodomex
 websockets
-brotli; implementation_name=='cpython'
-brotlicffi; implementation_name!='cpython'
+brotli; platform_python_implementation=='CPython'
+brotlicffi; platform_python_implementation!='CPython'
 certifi
 requests>=2.31.0,<3
 urllib3>=1.26.17,<3
-secretstorage; sys_platform=='linux' and (implementation_name!='pypy' or implementation_version>='7.3.10')
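
The two marker spellings above are both standard PEP 508 environment markers, but they compare different values: `implementation_name` is the lowercase `sys.implementation.name`, while `platform_python_implementation` is the CamelCase `platform.python_implementation()`. A quick check on the running interpreter:

```python
import platform
import sys

print(sys.implementation.name)           # e.g. 'cpython' -> implementation_name
print(platform.python_implementation())  # e.g. 'CPython' -> platform_python_implementation
```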

View File

@@ -18,7 +18,7 @@ except ImportError:

 from devscripts.utils import read_file, read_version

-VERSION = read_version(varname='_pkg_version')
+VERSION = read_version()

 DESCRIPTION = 'A youtube-dl fork with additional features and patches'
@@ -142,7 +142,7 @@ def main():
     params = build_params()

     setup(
-        name='yt-dlp',  # package name (do not change/remove comment)
+        name='yt-dlp',
         version=VERSION,
         maintainer='pukkandan',
         maintainer_email='pukkandan.ytdlp@gmail.com',

View File

@@ -1,199 +0,0 @@
#!/usr/bin/env python3

# Allow direct execution
import os
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import FakeYDL, report_warning
from yt_dlp.update import Updater, UpdateInfo

TEST_API_DATA = {
    'yt-dlp/yt-dlp/latest': {
        'tag_name': '2023.12.31',
        'target_commitish': 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb',
        'name': 'yt-dlp 2023.12.31',
        'body': 'BODY',
    },
    'yt-dlp/yt-dlp-nightly-builds/latest': {
        'tag_name': '2023.12.31.123456',
        'target_commitish': 'master',
        'name': 'yt-dlp nightly 2023.12.31.123456',
        'body': 'Generated from: https://github.com/yt-dlp/yt-dlp/commit/cccccccccccccccccccccccccccccccccccccccc',
    },
    'yt-dlp/yt-dlp-master-builds/latest': {
        'tag_name': '2023.12.31.987654',
        'target_commitish': 'master',
        'name': 'yt-dlp master 2023.12.31.987654',
        'body': 'Generated from: https://github.com/yt-dlp/yt-dlp/commit/dddddddddddddddddddddddddddddddddddddddd',
    },
    'yt-dlp/yt-dlp/tags/testing': {
        'tag_name': 'testing',
        'target_commitish': '9999999999999999999999999999999999999999',
        'name': 'testing',
        'body': 'BODY',
    },
    'fork/yt-dlp/latest': {
        'tag_name': '2050.12.31',
        'target_commitish': 'eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee',
        'name': '2050.12.31',
        'body': 'BODY',
    },
    'fork/yt-dlp/tags/pr0000': {
        'tag_name': 'pr0000',
        'target_commitish': 'ffffffffffffffffffffffffffffffffffffffff',
        'name': 'pr1234 2023.11.11.000000',
        'body': 'BODY',
    },
    'fork/yt-dlp/tags/pr1234': {
        'tag_name': 'pr1234',
        'target_commitish': '0000000000000000000000000000000000000000',
        'name': 'pr1234 2023.12.31.555555',
        'body': 'BODY',
    },
    'fork/yt-dlp/tags/pr9999': {
        'tag_name': 'pr9999',
        'target_commitish': '1111111111111111111111111111111111111111',
        'name': 'pr9999',
        'body': 'BODY',
    },
    'fork/yt-dlp-satellite/tags/pr987': {
        'tag_name': 'pr987',
        'target_commitish': 'master',
        'name': 'pr987',
        'body': 'Generated from: https://github.com/yt-dlp/yt-dlp/commit/2222222222222222222222222222222222222222',
    },
}

TEST_LOCKFILE_V1 = '''# This file is used for regulating self-update
lock 2022.08.18.36 .+ Python 3.6
lock 2023.11.13 .+ Python 3.7
'''

TEST_LOCKFILE_V2 = '''# This file is used for regulating self-update
lockV2 yt-dlp/yt-dlp 2022.08.18.36 .+ Python 3.6
lockV2 yt-dlp/yt-dlp 2023.11.13 .+ Python 3.7
'''

TEST_LOCKFILE_V1_V2 = '''# This file is used for regulating self-update
lock 2022.08.18.36 .+ Python 3.6
lock 2023.11.13 .+ Python 3.7
lockV2 yt-dlp/yt-dlp 2022.08.18.36 .+ Python 3.6
lockV2 yt-dlp/yt-dlp 2023.11.13 .+ Python 3.7
lockV2 fork/yt-dlp pr0000 .+ Python 3.6
lockV2 fork/yt-dlp pr1234 .+ Python 3.7
lockV2 fork/yt-dlp pr9999 .+ Python 3.11
'''


class FakeUpdater(Updater):
    current_version = '2022.01.01'
    current_commit = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'

    _channel = 'stable'
    _origin = 'yt-dlp/yt-dlp'

    def _download_update_spec(self, *args, **kwargs):
        return TEST_LOCKFILE_V1_V2

    def _call_api(self, tag):
        tag = f'tags/{tag}' if tag != 'latest' else tag
        return TEST_API_DATA[f'{self.requested_repo}/{tag}']

    def _report_error(self, msg, *args, **kwargs):
        report_warning(msg)


class TestUpdate(unittest.TestCase):
    maxDiff = None

    def test_update_spec(self):
        ydl = FakeYDL()
        updater = FakeUpdater(ydl, 'stable@latest')

        def test(lockfile, identifier, input_tag, expect_tag, exact=False, repo='yt-dlp/yt-dlp'):
            updater._identifier = identifier
            updater._exact = exact
            updater.requested_repo = repo
            result = updater._process_update_spec(lockfile, input_tag)
            self.assertEqual(
                result, expect_tag,
                f'{identifier!r} requesting {repo}@{input_tag} (exact={exact}) '
                f'returned {result!r} instead of {expect_tag!r}')

        test(TEST_LOCKFILE_V1, 'zip Python 3.11.0', '2023.11.13', '2023.11.13')
        test(TEST_LOCKFILE_V1, 'zip stable Python 3.11.0', '2023.11.13', '2023.11.13', exact=True)
        test(TEST_LOCKFILE_V1, 'zip Python 3.6.0', '2023.11.13', '2022.08.18.36')
        test(TEST_LOCKFILE_V1, 'zip stable Python 3.6.0', '2023.11.13', None, exact=True)
        test(TEST_LOCKFILE_V1, 'zip Python 3.7.0', '2023.11.13', '2023.11.13')
        test(TEST_LOCKFILE_V1, 'zip stable Python 3.7.1', '2023.11.13', '2023.11.13')
        test(TEST_LOCKFILE_V1, 'zip Python 3.7.1', '2023.12.31', '2023.11.13')
        test(TEST_LOCKFILE_V1, 'zip stable Python 3.7.1', '2023.12.31', '2023.11.13')
        test(TEST_LOCKFILE_V2, 'zip Python 3.11.1', '2023.11.13', '2023.11.13')
        test(TEST_LOCKFILE_V2, 'zip stable Python 3.11.1', '2023.12.31', '2023.12.31')
        test(TEST_LOCKFILE_V2, 'zip Python 3.6.1', '2023.11.13', '2022.08.18.36')
        test(TEST_LOCKFILE_V2, 'zip stable Python 3.7.2', '2023.11.13', '2023.11.13')
        test(TEST_LOCKFILE_V2, 'zip Python 3.7.2', '2023.12.31', '2023.11.13')
        test(TEST_LOCKFILE_V1_V2, 'zip Python 3.11.2', '2023.11.13', '2023.11.13')
        test(TEST_LOCKFILE_V1_V2, 'zip stable Python 3.11.2', '2023.12.31', '2023.12.31')
        test(TEST_LOCKFILE_V1_V2, 'zip Python 3.6.2', '2023.11.13', '2022.08.18.36')
        test(TEST_LOCKFILE_V1_V2, 'zip stable Python 3.7.3', '2023.11.13', '2023.11.13')
        test(TEST_LOCKFILE_V1_V2, 'zip Python 3.7.3', '2023.12.31', '2023.11.13')
        test(TEST_LOCKFILE_V1_V2, 'zip Python 3.6.3', 'pr0000', None, repo='fork/yt-dlp')
        test(TEST_LOCKFILE_V1_V2, 'zip stable Python 3.7.4', 'pr0000', 'pr0000', repo='fork/yt-dlp')
        test(TEST_LOCKFILE_V1_V2, 'zip Python 3.6.4', 'pr0000', None, repo='fork/yt-dlp')
        test(TEST_LOCKFILE_V1_V2, 'zip Python 3.7.4', 'pr1234', None, repo='fork/yt-dlp')
        test(TEST_LOCKFILE_V1_V2, 'zip stable Python 3.8.1', 'pr1234', 'pr1234', repo='fork/yt-dlp')
        test(TEST_LOCKFILE_V1_V2, 'zip Python 3.7.5', 'pr1234', None, repo='fork/yt-dlp')
        test(TEST_LOCKFILE_V1_V2, 'zip Python 3.11.3', 'pr9999', None, repo='fork/yt-dlp')
        test(TEST_LOCKFILE_V1_V2, 'zip stable Python 3.12.0', 'pr9999', 'pr9999', repo='fork/yt-dlp')
        test(TEST_LOCKFILE_V1_V2, 'zip Python 3.11.4', 'pr9999', None, repo='fork/yt-dlp')

    def test_query_update(self):
        ydl = FakeYDL()

        def test(target, expected, current_version=None, current_commit=None, identifier=None):
            updater = FakeUpdater(ydl, target)
            if current_version:
                updater.current_version = current_version
            if current_commit:
                updater.current_commit = current_commit
            updater._identifier = identifier or 'zip'
            update_info = updater.query_update(_output=True)
            self.assertDictEqual(
                update_info.__dict__ if update_info else {}, expected.__dict__ if expected else {})

        test('yt-dlp/yt-dlp@latest', UpdateInfo(
            '2023.12.31', version='2023.12.31', requested_version='2023.12.31', commit='b' * 40))
        test('yt-dlp/yt-dlp-nightly-builds@latest', UpdateInfo(
            '2023.12.31.123456', version='2023.12.31.123456', requested_version='2023.12.31.123456', commit='c' * 40))
        test('yt-dlp/yt-dlp-master-builds@latest', UpdateInfo(
            '2023.12.31.987654', version='2023.12.31.987654', requested_version='2023.12.31.987654', commit='d' * 40))
        test('fork/yt-dlp@latest', UpdateInfo(
            '2050.12.31', version='2050.12.31', requested_version='2050.12.31', commit='e' * 40))
        test('fork/yt-dlp@pr0000', UpdateInfo(
            'pr0000', version='2023.11.11.000000', requested_version='2023.11.11.000000', commit='f' * 40))
        test('fork/yt-dlp@pr1234', UpdateInfo(
            'pr1234', version='2023.12.31.555555', requested_version='2023.12.31.555555', commit='0' * 40))
test('fork/yt-dlp@pr9999', UpdateInfo(
'pr9999', version=None, requested_version=None, commit='1' * 40))
test('fork/yt-dlp-satellite@pr987', UpdateInfo(
'pr987', version=None, requested_version=None, commit='2' * 40))
test('yt-dlp/yt-dlp', None, current_version='2024.01.01')
test('stable', UpdateInfo(
'2023.12.31', version='2023.12.31', requested_version='2023.12.31', commit='b' * 40))
test('nightly', UpdateInfo(
'2023.12.31.123456', version='2023.12.31.123456', requested_version='2023.12.31.123456', commit='c' * 40))
test('master', UpdateInfo(
'2023.12.31.987654', version='2023.12.31.987654', requested_version='2023.12.31.987654', commit='d' * 40))
test('testing', None, current_commit='9' * 40)
test('testing', UpdateInfo('testing', commit='9' * 40))
if __name__ == '__main__':
unittest.main()

View File

@@ -0,0 +1,30 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import json
from yt_dlp.update import rsa_verify
class TestUpdate(unittest.TestCase):
def test_rsa_verify(self):
UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537)
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'versions.json'), 'rb') as f:
versions_info = f.read().decode()
versions_info = json.loads(versions_info)
signature = versions_info['signature']
del versions_info['signature']
self.assertTrue(rsa_verify(
json.dumps(versions_info, sort_keys=True).encode(),
signature, UPDATES_RSA_KEY))
if __name__ == '__main__':
unittest.main()

test/versions.json Normal file
View File

@@ -0,0 +1,34 @@
{
"latest": "2013.01.06",
"signature": "72158cdba391628569ffdbea259afbcf279bbe3d8aeb7492690735dc1cfa6afa754f55c61196f3871d429599ab22f2667f1fec98865527b32632e7f4b3675a7ef0f0fbe084d359256ae4bba68f0d33854e531a70754712f244be71d4b92e664302aa99653ee4df19800d955b6c4149cd2b3f24288d6e4b40b16126e01f4c8ce6",
"versions": {
"2013.01.02": {
"bin": [
"http://youtube-dl.org/downloads/2013.01.02/youtube-dl",
"f5b502f8aaa77675c4884938b1e4871ebca2611813a0c0e74f60c0fbd6dcca6b"
],
"exe": [
"http://youtube-dl.org/downloads/2013.01.02/youtube-dl.exe",
"75fa89d2ce297d102ff27675aa9d92545bbc91013f52ec52868c069f4f9f0422"
],
"tar": [
"http://youtube-dl.org/downloads/2013.01.02/youtube-dl-2013.01.02.tar.gz",
"6a66d022ac8e1c13da284036288a133ec8dba003b7bd3a5179d0c0daca8c8196"
]
},
"2013.01.06": {
"bin": [
"http://youtube-dl.org/downloads/2013.01.06/youtube-dl",
"64b6ed8865735c6302e836d4d832577321b4519aa02640dc508580c1ee824049"
],
"exe": [
"http://youtube-dl.org/downloads/2013.01.06/youtube-dl.exe",
"58609baf91e4389d36e3ba586e21dab882daaaee537e4448b1265392ae86ff84"
],
"tar": [
"http://youtube-dl.org/downloads/2013.01.06/youtube-dl-2013.01.06.tar.gz",
"fe77ab20a95d980ed17a659aa67e371fdd4d656d19c4c7950e7b720b0c2f1a86"
]
}
}
}

View File

@@ -60,7 +60,7 @@ from .postprocessor import (
     get_postprocessor,
 )
 from .postprocessor.ffmpeg import resolve_mapping as resolve_recode_mapping
-from .update import REPOSITORY, _get_system_deprecation, _make_label, current_git_head, detect_variant
+from .update import REPOSITORY, _get_system_deprecation, current_git_head, detect_variant
 from .utils import (
     DEFAULT_OUTTMPL,
     IDENTITY,
@@ -158,7 +158,7 @@ from .utils.networking import (
     clean_proxies,
     std_headers,
 )
-from .version import CHANNEL, ORIGIN, RELEASE_GIT_HEAD, VARIANT, __version__
+from .version import CHANNEL, RELEASE_GIT_HEAD, VARIANT, __version__

 if compat_os_name == 'nt':
     import ctypes
@@ -2338,7 +2338,7 @@ class YoutubeDL:
             return

         for f in formats:
-            if f.get('has_drm') or f.get('__needs_testing'):
+            if f.get('has_drm'):
                 yield from self._check_formats([f])
             else:
                 yield f
@@ -2764,8 +2764,7 @@ class YoutubeDL:
                 format['dynamic_range'] = 'SDR'
             if format.get('aspect_ratio') is None:
                 format['aspect_ratio'] = try_call(lambda: round(format['width'] / format['height'], 2))
-            # For fragmented formats, "tbr" is often max bitrate and not average
-            if (('manifest-filesize-approx' in self.params['compat_opts'] or not format.get('manifest_url'))
+            if (not format.get('manifest_url')  # For fragmented formats, "tbr" is often max bitrate and not average
                     and info_dict.get('duration') and format.get('tbr')
                     and not format.get('filesize') and not format.get('filesize_approx')):
                 format['filesize_approx'] = int(info_dict['duration'] * format['tbr'] * (1024 / 8))
@@ -3544,14 +3543,14 @@ class YoutubeDL:
             'version': __version__,
             'current_git_head': current_git_head(),
             'release_git_head': RELEASE_GIT_HEAD,
-            'repository': ORIGIN,
+            'repository': REPOSITORY,
         })

         if remove_private_keys:
             reject = lambda k, v: v is None or k.startswith('__') or k in {
                 'requested_downloads', 'requested_formats', 'requested_subtitles', 'requested_entries',
                 'entries', 'filepath', '_filename', 'filename', 'infojson_filename', 'original_url',
-                'playlist_autonumber',
+                'playlist_autonumber', '_format_sort_fields',
             }
         else:
             reject = lambda k, v: False
@@ -3927,8 +3926,8 @@ class YoutubeDL:
             source += '*'
         klass = type(self)
         write_debug(join_nonempty(
-            f'{REPOSITORY.rpartition("/")[2]} version',
-            _make_label(ORIGIN, CHANNEL.partition('@')[2] or __version__, __version__),
+            f'{"yt-dlp" if REPOSITORY == "yt-dlp/yt-dlp" else REPOSITORY} version',
+            f'{CHANNEL}@{__version__}',
             f'[{RELEASE_GIT_HEAD[:9]}]' if RELEASE_GIT_HEAD else '',
             '' if source == 'unknown' else f'({source})',
             '' if _IN_CLI else 'API' if klass == YoutubeDL else f'API:{self.__module__}.{klass.__qualname__}',
@@ -4243,7 +4242,7 @@ class YoutubeDL:
            self.write_debug(f'Skipping writing {label} thumbnail')
            return ret

-        if thumbnails and not self._ensure_dir_exists(filename):
+        if not self._ensure_dir_exists(filename):
            return None

        for idx, t in list(enumerate(thumbnails))[::-1]:

View File

@@ -25,7 +25,7 @@ def get_hidden_imports():
     for module in ('websockets', 'requests', 'urllib3'):
         yield from collect_submodules(module)
     # These are auto-detected, but explicitly add them just in case
-    yield from ('mutagen', 'brotli', 'certifi', 'secretstorage')
+    yield from ('mutagen', 'brotli', 'certifi')


 hiddenimports = list(get_hidden_imports())

View File

@@ -15,15 +15,12 @@ class DashSegmentsFD(FragmentFD):
     FD_NAME = 'dashsegments'

     def real_download(self, filename, info_dict):
-        if 'http_dash_segments_generator' in info_dict['protocol'].split('+'):
-            real_downloader = None  # No external FD can support --live-from-start
-        else:
-            if info_dict.get('is_live'):
-                self.report_error('Live DASH videos are not supported')
-            real_downloader = get_suitable_downloader(
-                info_dict, self.params, None, protocol='dash_frag_urls', to_stdout=(filename == '-'))
+        if info_dict.get('is_live') and set(info_dict['protocol'].split('+')) != {'http_dash_segments_generator'}:
+            self.report_error('Live DASH videos are not supported')

         real_start = time.time()
+        real_downloader = get_suitable_downloader(
+            info_dict, self.params, None, protocol='dash_frag_urls', to_stdout=(filename == '-'))

         requested_formats = [{**info_dict, **fmt} for fmt in info_dict.get('requested_formats', [])]
         args = []

View File

@@ -335,7 +335,7 @@ class Aria2cFD(ExternalFD):
         cmd += ['--auto-file-renaming=false']

         if 'fragments' in info_dict:
-            cmd += ['--uri-selector=inorder']
+            cmd += ['--file-allocation=none', '--uri-selector=inorder']
             url_list_file = '%s.frag.urls' % tmpfilename
             url_list = []
             for frag_index, fragment in enumerate(info_dict['fragments']):

View File

@@ -953,7 +953,6 @@ from .lastfm import (
     LastFMPlaylistIE,
     LastFMUserIE,
 )
-from .laxarxames import LaXarxaMesIE
 from .lbry import (
     LBRYIE,
     LBRYChannelIE,
@@ -1388,10 +1387,7 @@ from .oftv import (
 from .oktoberfesttv import OktoberfestTVIE
 from .olympics import OlympicsReplayIE
 from .on24 import On24IE
-from .ondemandkorea import (
-    OnDemandKoreaIE,
-    OnDemandKoreaProgramIE,
-)
+from .ondemandkorea import OnDemandKoreaIE
 from .onefootball import OneFootballIE
 from .onenewsnz import OneNewsNZIE
 from .oneplace import OnePlacePodcastIE
@@ -1420,7 +1416,6 @@ from .orf import (
     ORFTVthekIE,
     ORFFM4StoryIE,
     ORFRadioIE,
-    ORFPodcastIE,
     ORFIPTVIE,
 )
 from .outsidetv import OutsideTVIE
@@ -1583,10 +1578,6 @@ from .radiocanada import (
     RadioCanadaIE,
     RadioCanadaAudioVideoIE,
 )
-from .radiocomercial import (
-    RadioComercialIE,
-    RadioComercialPlaylistIE,
-)
 from .radiode import RadioDeIE
 from .radiojavan import RadioJavanIE
 from .radiobremen import RadioBremenIE
@@ -1767,11 +1758,6 @@ from .samplefocus import SampleFocusIE
 from .sapo import SapoIE
 from .savefrom import SaveFromIE
 from .sbs import SBSIE
-from .sbscokr import (
-    SBSCoKrIE,
-    SBSCoKrAllvodProgramIE,
-    SBSCoKrProgramsVodIE,
-)
 from .screen9 import Screen9IE
 from .screencast import ScreencastIE
 from .screencastify import ScreencastifyIE
@@ -1916,8 +1902,6 @@ from .srmediathek import SRMediathekIE
 from .stacommu import (
     StacommuLiveIE,
     StacommuVODIE,
-    TheaterComplexTownVODIE,
-    TheaterComplexTownPPVIE,
 )
 from .stanfordoc import StanfordOpenClassroomIE
 from .startv import StarTVIE
@@ -2030,6 +2014,7 @@ from .thestar import TheStarIE
 from .thesun import TheSunIE
 from .theweatherchannel import TheWeatherChannelIE
 from .thisamericanlife import ThisAmericanLifeIE
+from .thisav import ThisAVIE
 from .thisoldhouse import ThisOldHouseIE
 from .thisvid import (
     ThisVidIE,

View File

@@ -21,10 +21,10 @@ class BrilliantpalaBaseIE(InfoExtractor):
     def _get_logged_in_username(self, url, video_id):
         webpage, urlh = self._download_webpage_handle(url, video_id)
-        if urlh.url.startswith(self._LOGIN_API):
+        if self._LOGIN_API == urlh.url:
             self.raise_login_required()
         return self._html_search_regex(
-            r'"username"\s*:\s*"(?P<username>[^"]+)"', webpage, 'logged-in username')
+            r'"username"\s*:\s*"(?P<username>[^"]+)"', webpage, 'stream page info', 'username')

     def _perform_login(self, username, password):
         login_form = self._hidden_inputs(self._download_webpage(

View File

@@ -34,7 +34,6 @@ from ..utils import (
     unified_timestamp,
     unsmuggle_url,
     update_url_query,
-    urlhandle_detect_ext,
     url_or_none,
     urljoin,
     variadic,
@@ -2460,7 +2459,7 @@ class GenericIE(InfoExtractor):
             self.report_detected('direct video link')
             headers = smuggled_data.get('http_headers', {})
             format_id = str(m.group('format_id'))
-            ext = determine_ext(url, default_ext=None) or urlhandle_detect_ext(full_response)
+            ext = determine_ext(url)
             subtitles = {}
             if format_id.endswith('mpegurl') or ext == 'm3u8':
                 formats, subtitles = self._extract_m3u8_formats_and_subtitles(url, video_id, 'mp4', headers=headers)
@@ -2472,7 +2471,6 @@ class GenericIE(InfoExtractor):
                 formats = [{
                     'format_id': format_id,
                     'url': url,
-                    'ext': ext,
                     'vcodec': 'none' if m.group('type') == 'audio' else None
                 }]
                 info_dict['direct'] = True

View File

@@ -1,73 +0,0 @@
import json
from .brightcove import BrightcoveNewIE
from .common import InfoExtractor
from ..utils import ExtractorError
from ..utils.traversal import traverse_obj
class LaXarxaMesIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?laxarxames\.cat/(?:[^/?#]+/)*?(player|movie-details)/(?P<id>\d+)'
_NETRC_MACHINE = 'laxarxames'
_TOKEN = None
_TESTS = [{
'url': 'https://www.laxarxames.cat/player/3459421',
'md5': '0966f46c34275934c19af78f3df6e2bc',
'info_dict': {
'id': '6339612436112',
'ext': 'mp4',
'title': 'Resum | UA Horta — UD Viladecans',
'timestamp': 1697905186,
'thumbnail': r're:https?://.*\.jpg',
'description': '',
'upload_date': '20231021',
'duration': 129.44,
'tags': ['ott', 'esports', '23-24', ' futbol', ' futbol-partits', 'elit', 'resum'],
'uploader_id': '5779379807001',
},
'skip': 'Requires login',
}]
def _perform_login(self, username, password):
if self._TOKEN:
return
login = self._download_json(
'https://api.laxarxames.cat/Authorization/SignIn', None, note='Logging in', headers={
'X-Tenantorigin': 'https://laxarxames.cat',
'Content-Type': 'application/json',
}, data=json.dumps({
'Username': username,
'Password': password,
'Device': {
'PlatformCode': 'WEB',
'Name': 'Mac OS ()',
},
}).encode(), expected_status=401)
self._TOKEN = traverse_obj(login, ('AuthorizationToken', 'Token', {str}))
if not self._TOKEN:
raise ExtractorError('Login failed', expected=True)
def _real_extract(self, url):
video_id = self._match_id(url)
if not self._TOKEN:
self.raise_login_required()
media_play_info = self._download_json(
'https://api.laxarxames.cat/Media/GetMediaPlayInfo', video_id,
data=json.dumps({
'MediaId': int(video_id),
'StreamType': 'MAIN'
}).encode(), headers={
'Authorization': f'Bearer {self._TOKEN}',
'X-Tenantorigin': 'https://laxarxames.cat',
'Content-Type': 'application/json',
})
if not traverse_obj(media_play_info, ('ContentUrl', {str})):
self.raise_no_formats('No video found', expected=True)
return self.url_result(
f'https://players.brightcove.net/5779379807001/default_default/index.html?videoId={media_play_info["ContentUrl"]}',
BrightcoveNewIE, video_id, media_play_info.get('Title'))

View File

@@ -142,9 +142,6 @@ class NetEaseMusicIE(NetEaseMusicBaseIE):
             'subtitles': {'lyrics': [{'ext': 'lrc'}]},
             "duration": 256,
             'thumbnail': r're:^http.*\.jpg',
-            'album': '偶像练习生 表演曲目合集',
-            'average_rating': int,
-            'album_artist': '偶像练习生',
         },
     }, {
         'note': 'No lyrics.',
@@ -158,9 +155,6 @@ class NetEaseMusicIE(NetEaseMusicBaseIE):
             'timestamp': 1202745600,
             'duration': 263,
             'thumbnail': r're:^http.*\.jpg',
-            'album': 'Piano Solos Vol. 2',
-            'album_artist': 'Dustin O\'Halloran',
-            'average_rating': int,
         },
     }, {
         'url': 'https://y.music.163.com/m/song?app_version=8.8.45&id=95670&uct2=sKnvS4+0YStsWkqsPhFijw%3D%3D&dlt=0846',
@@ -177,9 +171,6 @@ class NetEaseMusicIE(NetEaseMusicBaseIE):
             'duration': 268,
             'alt_title': '伴唱:现代人乐队 合唱:总政歌舞团',
             'thumbnail': r're:^http.*\.jpg',
-            'average_rating': int,
-            'album': '红色摇滚',
-            'album_artist': '侯牧人',
         },
     }, {
         'url': 'http://music.163.com/#/song?id=32102397',
@@ -195,9 +186,6 @@ class NetEaseMusicIE(NetEaseMusicBaseIE):
             'subtitles': {'lyrics': [{'ext': 'lrc'}]},
             'duration': 199,
             'thumbnail': r're:^http.*\.jpg',
-            'album': 'Bad Blood',
-            'average_rating': int,
-            'album_artist': 'Taylor Swift',
         },
         'skip': 'Blocked outside Mainland China',
     }, {
@@ -215,9 +203,6 @@ class NetEaseMusicIE(NetEaseMusicBaseIE):
             'duration': 229,
             'alt_title': '说出愿望吧(Genie)',
             'thumbnail': r're:^http.*\.jpg',
-            'average_rating': int,
-            'album': 'Oh!',
-            'album_artist': '少女时代',
         },
         'skip': 'Blocked outside Mainland China',
     }]
@@ -268,15 +253,12 @@ class NetEaseMusicIE(NetEaseMusicBaseIE):
             'formats': formats,
             'alt_title': '/'.join(traverse_obj(info, (('transNames', 'alias'), ...))) or None,
             'creator': ' / '.join(traverse_obj(info, ('artists', ..., 'name'))) or None,
-            'album_artist': ' / '.join(traverse_obj(info, ('album', 'artists', ..., 'name'))) or None,
             **lyric_data,
             **traverse_obj(info, {
                 'title': ('name', {str}),
                 'timestamp': ('album', 'publishTime', {self.kilo_or_none}),
                 'thumbnail': ('album', 'picUrl', {url_or_none}),
                 'duration': ('duration', {self.kilo_or_none}),
-                'album': ('album', 'name', {str}),
-                'average_rating': ('score', {int_or_none}),
             }),
         }

View File

@@ -3,8 +3,6 @@ import re
 from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
-    clean_html,
-    get_element_by_class,
     int_or_none,
     join_nonempty,
     parse_duration,
@@ -47,36 +45,25 @@ class NhkBaseIE(InfoExtractor):
             self.cache.store('nhk', 'api_info', api_info)
         return api_info

-    def _extract_stream_info(self, vod_id):
+    def _extract_formats_and_subtitles(self, vod_id):
         for refresh in (False, True):
             api_info = self._get_api_info(refresh)
             if not api_info:
                 continue

             api_url = api_info.pop('url')
-            meta = traverse_obj(
+            stream_url = traverse_obj(
                 self._download_json(
                     api_url, vod_id, 'Downloading stream url info', fatal=False, query={
                         **api_info,
                         'type': 'json',
                         'optional_id': vod_id,
                         'active_flg': 1,
-                    }), ('meta', 0))
-            stream_url = traverse_obj(
-                meta, ('movie_url', ('mb_auto', 'auto_sp', 'auto_pc'), {url_or_none}), get_all=False)
-
-            if stream_url:
-                formats, subtitles = self._extract_m3u8_formats_and_subtitles(stream_url, vod_id)
-                return {
-                    **traverse_obj(meta, {
-                        'duration': ('duration', {int_or_none}),
-                        'timestamp': ('publication_date', {unified_timestamp}),
-                        'release_timestamp': ('insert_date', {unified_timestamp}),
-                        'modified_timestamp': ('update_date', {unified_timestamp}),
-                    }),
-                    'formats': formats,
-                    'subtitles': subtitles,
-                }
+                    }),
+                ('meta', 0, 'movie_url', ('mb_auto', 'auto_sp', 'auto_pc'), {url_or_none}), get_all=False)
+            if stream_url:
+                return self._extract_m3u8_formats_and_subtitles(stream_url, vod_id)

         raise ExtractorError('Unable to extract stream url')

     def _extract_episode_info(self, url, episode=None):
@@ -90,11 +77,11 @@ class NhkBaseIE(InfoExtractor):
         if fetch_episode:
             episode = self._call_api(
                 episode_id, lang, is_video, True, episode_id[:4] == '9999')[0]
+        title = episode.get('sub_title_clean') or episode['sub_title']

         def get_clean_field(key):
-            return clean_html(episode.get(key + '_clean') or episode.get(key))
+            return episode.get(key + '_clean') or episode.get(key)

-        title = get_clean_field('sub_title')
         series = get_clean_field('title')

         thumbnails = []
@@ -109,30 +96,22 @@ class NhkBaseIE(InfoExtractor):
                 'url': 'https://www3.nhk.or.jp' + img_path,
             })

-        episode_name = title
-        if series and title:
-            title = f'{series} - {title}'
-        elif series and not title:
-            title = series
-            series = None
-            episode_name = None
-        else:  # title, no series
-            episode_name = None
-
         info = {
             'id': episode_id + '-' + lang,
-            'title': title,
+            'title': '%s - %s' % (series, title) if series and title else title,
             'description': get_clean_field('description'),
             'thumbnails': thumbnails,
             'series': series,
-            'episode': episode_name,
+            'episode': title,
         }

         if is_video:
             vod_id = episode['vod_id']
+            formats, subs = self._extract_formats_and_subtitles(vod_id)

             info.update({
-                **self._extract_stream_info(vod_id),
                 'id': vod_id,
+                'formats': formats,
+                'subtitles': subs,
             })

         else:
@@ -169,14 +148,6 @@ class NhkVodIE(NhkBaseIE):
             'thumbnail': 'md5:51bcef4a21936e7fea1ff4e06353f463',
             'episode': 'The Tohoku Shinkansen: Full Speed Ahead',
             'series': 'Japan Railway Journal',
-            'modified_timestamp': 1694243656,
-            'timestamp': 1681428600,
-            'release_timestamp': 1693883728,
-            'duration': 1679,
-            'upload_date': '20230413',
-            'modified_date': '20230909',
-            'release_date': '20230905',
         },
     }, {
         # video clip
@@ -190,13 +161,6 @@ class NhkVodIE(NhkBaseIE):
             'thumbnail': 'md5:d6a4d9b6e9be90aaadda0bcce89631ed',
             'series': 'Dining with the Chef',
             'episode': 'Chef Saito\'s Family recipe: MENCHI-KATSU',
-            'duration': 148,
-            'upload_date': '20190816',
-            'release_date': '20230902',
-            'release_timestamp': 1693619292,
-            'modified_timestamp': 1694168033,
-            'modified_date': '20230908',
-            'timestamp': 1565997540,
         },
     }, {
         # radio
@@ -206,7 +170,7 @@ class NhkVodIE(NhkBaseIE):
             'ext': 'm4a',
             'title': 'Living in Japan - Tips for Travelers to Japan / Ramen Vending Machines',
             'series': 'Living in Japan',
-            'description': 'md5:0a0e2077d8f07a03071e990a6f51bfab',
+            'description': 'md5:850611969932874b4a3309e0cae06c2f',
             'thumbnail': 'md5:960622fb6e06054a4a1a0c97ea752545',
             'episode': 'Tips for Travelers to Japan / Ramen Vending Machines'
         },
@@ -248,23 +212,6 @@ class NhkVodIE(NhkBaseIE):
             'description': 'md5:9c1d6cbeadb827b955b20e99ab920ff0',
         },
         'skip': 'expires 2023-10-15',
-    }, {
-        # a one-off (single-episode series). title from the api is just '<p></p>'
-        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/video/3004952/',
-        'info_dict': {
-            'id': 'nw_vod_v_en_3004_952_20230723091000_01_1690074552',
-            'ext': 'mp4',
-            'title': 'Barakan Discovers AMAMI OSHIMA: Isson\'s Treasure Island',
-            'description': 'md5:5db620c46a0698451cc59add8816b797',
-            'thumbnail': 'md5:67d9ff28009ba379bfa85ad1aaa0e2bd',
-            'release_date': '20230905',
-            'timestamp': 1690103400,
-            'duration': 2939,
-            'release_timestamp': 1693898699,
-            'modified_timestamp': 1698057495,
-            'modified_date': '20231023',
-            'upload_date': '20230723',
-        },
     }]

     def _real_extract(self, url):
@@ -279,15 +226,13 @@ class NhkVodProgramIE(NhkBaseIE):
         'info_dict': {
             'id': 'sumo',
             'title': 'GRAND SUMO Highlights',
-            'description': 'md5:fc20d02dc6ce85e4b72e0273aa52fdbf',
         },
-        'playlist_mincount': 0,
+        'playlist_mincount': 12,
     }, {
         'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/program/video/japanrailway',
         'info_dict': {
             'id': 'japanrailway',
             'title': 'Japan Railway Journal',
-            'description': 'md5:ea39d93af7d05835baadf10d1aae0e3f',
         },
         'playlist_mincount': 12,
     }, {
@@ -296,7 +241,6 @@ class NhkVodProgramIE(NhkBaseIE):
         'info_dict': {
             'id': 'japanrailway',
             'title': 'Japan Railway Journal',
-            'description': 'md5:ea39d93af7d05835baadf10d1aae0e3f',
         },
         'playlist_mincount': 5,
     }, {
@@ -321,11 +265,11 @@ class NhkVodProgramIE(NhkBaseIE):
                 entries.append(self._extract_episode_info(
                     urljoin(url, episode_path), episode))

-        html = self._download_webpage(url, program_id)
-        program_title = clean_html(get_element_by_class('p-programDetail__title', html))
-        program_description = clean_html(get_element_by_class('p-programDetail__text', html))
+        program_title = None
+        if entries:
+            program_title = entries[0].get('series')

-        return self.playlist_result(entries, program_id, program_title, program_description)
+        return self.playlist_result(entries, program_id, program_title)


 class NhkForSchoolBangumiIE(InfoExtractor):
@@ -477,7 +421,6 @@ class NhkRadiruIE(InfoExtractor):
         'skip': 'Episode expired on 2023-04-16',
         'info_dict': {
             'channel': 'NHK-FM',
-            'uploader': 'NHK-FM',
             'description': 'md5:94b08bdeadde81a97df4ec882acce3e9',
             'ext': 'm4a',
             'id': '0449_01_3853544',
@@ -498,7 +441,6 @@ class NhkRadiruIE(InfoExtractor):
             'title': 'ベストオブクラシック',
             'description': '世界中の上質な演奏会をじっくり堪能する本格派クラシック番組。',
             'channel': 'NHK-FM',
-            'uploader': 'NHK-FM',
             'thumbnail': 'https://www.nhk.or.jp/prog/img/458/g458.jpg',
         },
         'playlist_mincount': 3,
@@ -512,7 +454,6 @@ class NhkRadiruIE(InfoExtractor):
             'title': '有島武郎「一房のぶどう」',
             'description': '朗読:川野一宇(ラジオ深夜便アンカー)\r\n\r\n2016年12月8日放送「ラジオ深夜便『アンカー朗読シリーズ』」より',
             'channel': 'NHKラジオ第1、NHK-FM',
-            'uploader': 'NHKラジオ第1、NHK-FM',
             'timestamp': 1635757200,
             'thumbnail': 'https://www.nhk.or.jp/radioondemand/json/F300/img/corner/box_109_thumbnail.jpg',
             'release_date': '20161207',
@@ -528,7 +469,6 @@ class NhkRadiruIE(InfoExtractor):
             'id': 'F261_01_3855109',
             'ext': 'm4a',
             'channel': 'NHKラジオ第1',
-            'uploader': 'NHKラジオ第1',
             'timestamp': 1681635900,
             'release_date': '20230416',
             'series': 'NHKラジオニュース',
@@ -573,7 +513,6 @@ class NhkRadiruIE(InfoExtractor):
         series_meta = traverse_obj(meta, {
             'title': 'program_name',
             'channel': 'media_name',
-            'uploader': 'media_name',
             'thumbnail': (('thumbnail_c', 'thumbnail_p'), {url_or_none}),
         }, get_all=False)
@@ -602,7 +541,6 @@ class NhkRadioNewsPageIE(InfoExtractor):
             'thumbnail': 'https://www.nhk.or.jp/radioondemand/json/F261/img/RADIONEWS_640.jpg',
             'description': 'md5:bf2c5b397e44bc7eb26de98d8f15d79d',
             'channel': 'NHKラジオ第1',
-            'uploader': 'NHKラジオ第1',
             'title': 'NHKラジオニュース',
         }
     }]

View File

@@ -13,7 +13,7 @@ from ..utils import (

 class NovaEmbedIE(InfoExtractor):
-    _VALID_URL = r'https?://media(?:tn)?\.cms\.nova\.cz/embed/(?P<id>[^/?#&]+)'
+    _VALID_URL = r'https?://media\.cms\.nova\.cz/embed/(?P<id>[^/?#&]+)'
     _TESTS = [{
         'url': 'https://media.cms.nova.cz/embed/8o0n0r?autoplay=1',
         'info_dict': {
@@ -37,16 +37,6 @@ class NovaEmbedIE(InfoExtractor):
             'duration': 114,
         },
         'params': {'skip_download': 'm3u8'},
-    }, {
-        'url': 'https://mediatn.cms.nova.cz/embed/EU5ELEsmOHt?autoplay=1',
-        'info_dict': {
-            'id': 'EU5ELEsmOHt',
-            'ext': 'mp4',
-            'title': 'Haptické křeslo, bionická ruka nebo roboti. Reportérka se podívala na Týden inovací',
-            'thumbnail': r're:^https?://.*\.jpg',
-            'duration': 1780,
-        },
-        'params': {'skip_download': 'm3u8'},
     }]

     def _real_extract(self, url):
def _real_extract(self, url): def _real_extract(self, url):

View File

@@ -245,7 +245,7 @@ class NPOIE(InfoExtractor):
                 'quality': 'npoplus',
                 'tokenId': player_token,
                 'streamType': 'broadcast',
-            }, data=b'')  # endpoint requires POST
+            })
             if not streams:
                 continue
             stream = streams.get('stream')

View File

@@ -1,21 +1,21 @@
 import re

 from .common import InfoExtractor
+from ..compat import compat_urlparse
 from ..utils import (
     int_or_none,
     js_to_json,
-    url_or_none,
+    parse_duration,
 )
-from ..utils.traversal import traverse_obj


 class NTVDeIE(InfoExtractor):
     IE_NAME = 'n-tv.de'
-    _VALID_URL = r'https?://(?:www\.)?n-tv\.de/mediathek/(?:videos|magazine)/[^/?#]+/[^/?#]+-article(?P<id>[^/?#]+)\.html'
+    _VALID_URL = r'https?://(?:www\.)?n-tv\.de/mediathek/videos/[^/?#]+/[^/?#]+-article(?P<id>.+)\.html'
     _TESTS = [{
         'url': 'http://www.n-tv.de/mediathek/videos/panorama/Schnee-und-Glaette-fuehren-zu-zahlreichen-Unfaellen-und-Staus-article14438086.html',
-        'md5': '6bcf2a6638cb83f45d5561659a1cb498',
+        'md5': '6ef2514d4b1e8e03ca24b49e2f167153',
         'info_dict': {
             'id': '14438086',
             'ext': 'mp4',
@@ -23,61 +23,51 @@ class NTVDeIE(InfoExtractor):
             'title': 'Schnee und Glätte führen zu zahlreichen Unfällen und Staus',
             'alt_title': 'Winterchaos auf deutschen Straßen',
             'description': 'Schnee und Glätte sorgen deutschlandweit für einen chaotischen Start in die Woche: Auf den Straßen kommt es zu kilometerlangen Staus und Dutzenden Glätteunfällen. In Düsseldorf und München wirbelt der Schnee zudem den Flugplan durcheinander. Dutzende Flüge landen zu spät, einige fallen ganz aus.',
-            'duration': 67,
+            'duration': 4020,
             'timestamp': 1422892797,
             'upload_date': '20150202',
         },
-    }, {
-        'url': 'https://www.n-tv.de/mediathek/magazine/auslandsreport/Juedische-Siedler-wollten-Rache-die-wollten-nur-toeten-article24523089.html',
-        'md5': 'c5c6014c014ccc3359470e1d34472bfd',
-        'info_dict': {
-            'id': '24523089',
-            'ext': 'mp4',
-            'thumbnail': r're:^https?://.*\.jpg$',
-            'title': 'Jüdische Siedler "wollten Rache, die wollten nur töten"',
-            'alt_title': 'Israelische Gewalt fern von Gaza',
-            'description': 'Vier Tage nach dem Massaker der Hamas greifen jüdische Siedler das Haus einer palästinensischen Familie im Westjordanland an. Die Überlebenden berichten, sie waren unbewaffnet, die Angreifer seien nur auf "Rache und Töten" aus gewesen. Als die Toten beerdigt werden sollen, eröffnen die Siedler erneut das Feuer.',
-            'duration': 326,
-            'timestamp': 1699688294,
-            'upload_date': '20231111',
-        },
     }]

     def _real_extract(self, url):
         video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)

-        info = self._search_json(
-            r'article:', webpage, 'info', video_id, transform_source=js_to_json)
-
-        vdata = self._search_json(
-            r'\$\(\s*"#playerwrapper"\s*\)\s*\.data\(\s*"player",',
-            webpage, 'player data', video_id,
-            transform_source=lambda s: js_to_json(re.sub(r'ivw:[^},]+', '', s)))['setup']['source']
+        info = self._parse_json(self._search_regex(
+            r'(?s)ntv\.pageInfo\.article\s*=\s*(\{.*?\});', webpage, 'info'),
+            video_id, transform_source=js_to_json)
+        timestamp = int_or_none(info.get('publishedDateAsUnixTimeStamp'))
+        vdata = self._parse_json(self._search_regex(
+            r'(?s)\$\(\s*"\#player"\s*\)\s*\.data\(\s*"player",\s*(\{.*?\})\);',
+            webpage, 'player data'), video_id,
+            transform_source=lambda s: js_to_json(re.sub(r'advertising:\s*{[^}]+},', '', s)))
+        duration = parse_duration(vdata.get('duration'))

         formats = []
-        if vdata.get('progressive'):
+        if vdata.get('video'):
             formats.append({
-                'format_id': 'http',
-                'url': vdata['progressive'],
+                'format_id': 'flash',
+                'url': 'rtmp://fms.n-tv.de/%s' % vdata['video'],
             })
-        if vdata.get('hls'):
+        if vdata.get('videoMp4'):
+            formats.append({
+                'format_id': 'mobile',
+                'url': compat_urlparse.urljoin('http://video.n-tv.de', vdata['videoMp4']),
+                'tbr': 400,  # estimation
+            })
+        if vdata.get('videoM3u8'):
+            m3u8_url = compat_urlparse.urljoin('http://video.n-tv.de', vdata['videoM3u8'])
             formats.extend(self._extract_m3u8_formats(
-                vdata['hls'], video_id, 'mp4', m3u8_id='hls', fatal=False))
-        if vdata.get('dash'):
-            formats.extend(self._extract_mpd_formats(vdata['dash'], video_id, fatal=False, mpd_id='dash'))
+                m3u8_url, video_id, ext='mp4', entry_protocol='m3u8_native',
+                quality=1, m3u8_id='hls', fatal=False))

         return {
             'id': video_id,
-            **traverse_obj(info, {
-                'title': 'headline',
-                'description': 'intro',
-                'alt_title': 'kicker',
-                'timestamp': ('publishedDateAsUnixTimeStamp', {int_or_none}),
-            }),
-            **traverse_obj(vdata, {
-                'thumbnail': ('poster', {url_or_none}),
-                'duration': ('length', {int_or_none}),
-            }),
+            'title': info['headline'],
+            'description': info.get('intro'),
+            'alt_title': info.get('kicker'),
+            'timestamp': timestamp,
+            'thumbnail': vdata.get('html5VideoPoster'),
+            'duration': duration,
             'formats': formats,
         }

View File

@@ -1,167 +1,87 @@
-import functools
 import re
-import uuid

 from .common import InfoExtractor
-from ..networking import HEADRequest
 from ..utils import (
     ExtractorError,
-    OnDemandPagedList,
-    float_or_none,
-    int_or_none,
-    join_nonempty,
-    parse_age_limit,
-    parse_qs,
-    unified_strdate,
-    url_or_none,
+    js_to_json,
 )
-from ..utils.traversal import traverse_obj


 class OnDemandKoreaIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?ondemandkorea\.com/(?:en/)?player/vod/[a-z0-9-]+\?(?:[^#]+&)?contentId=(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?ondemandkorea\.com/(?P<id>[^/]+)\.html'
     _GEO_COUNTRIES = ['US', 'CA']
     _TESTS = [{
-        'url': 'https://www.ondemandkorea.com/player/vod/ask-us-anything?contentId=686471',
-        'md5': 'e2ff77255d989e3135bde0c5889fbce8',
+        'url': 'https://www.ondemandkorea.com/ask-us-anything-e351.html',
         'info_dict': {
-            'id': '686471',
+            'id': 'ask-us-anything-e351',
             'ext': 'mp4',
-            'title': 'Ask Us Anything: Jung Sung-ho, Park Seul-gi, Kim Bo-min, Yang Seung-won',
-            'thumbnail': r're:^https?://.*\.(jpg|jpeg|png)',
-            'duration': 5486.955,
-            'release_date': '20220924',
-            'series': 'Ask Us Anything',
-            'series_id': 11790,
-            'episode_number': 351,
-            'episode': 'Jung Sung-ho, Park Seul-gi, Kim Bo-min, Yang Seung-won',
+            'title': 'Ask Us Anything : Jung Sung-ho, Park Seul-gi, Kim Bo-min, Yang Seung-won - 09/24/2022',
+            'description': 'A talk show/game show with a school theme where celebrity guests appear as “transfer students.”',
+            'thumbnail': r're:^https?://.*\.jpg$',
         },
+        'params': {
+            'skip_download': 'm3u8 download'
+        }
     }, {
-        'url': 'https://www.ondemandkorea.com/player/vod/breakup-probation-a-week?contentId=1595796',
-        'md5': '57266c720006962be7ff415b24775caa',
+        'url': 'https://www.ondemandkorea.com/work-later-drink-now-e1.html',
         'info_dict': {
-            'id': '1595796',
+            'id': 'work-later-drink-now-e1',
             'ext': 'mp4',
-            'title': 'Breakup Probation, A Week: E08',
-            'thumbnail': r're:^https?://.*\.(jpg|jpeg|png)',
-            'duration': 1586.0,
-            'release_date': '20231001',
-            'series': 'Breakup Probation, A Week',
-            'series_id': 22912,
-            'episode_number': 8,
-            'episode': 'E08',
+            'title': 'Work Later, Drink Now : E01',
+            'description': 'Work Later, Drink First follows three women who find solace in a glass of liquor at the end of the day. So-hee, who gets comfort from a cup of soju af',
+            'thumbnail': r're:^https?://.*\.png$',
+            'subtitles': {
+                'English': 'mincount:1',
+            },
         },
-    }, {
-        'url': 'https://www.ondemandkorea.com/player/vod/the-outlaws?contentId=369531',
-        'md5': 'fa5523b87aa1f6d74fc622a97f2b47cd',
-        'info_dict': {
-            'id': '369531',
-            'ext': 'mp4',
-            'release_date': '20220519',
-            'duration': 7267.0,
-            'title': 'The Outlaws: Main Movie',
-            'thumbnail': r're:^https?://.*\.(jpg|jpeg|png)',
-            'age_limit': 18,
-        },
-    }, {
-        'url': 'https://www.ondemandkorea.com/en/player/vod/capture-the-moment-how-is-that-possible?contentId=1605006',
-        'only_matching': True,
+        'params': {
+            'skip_download': 'm3u8 download'
+        }
     }]

     def _real_extract(self, url):
         video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id, fatal=False)

-        data = self._download_json(
-            f'https://odkmedia.io/odx/api/v3/playback/{video_id}/', video_id, fatal=False,
-            headers={'service-name': 'odk'}, query={'did': str(uuid.uuid4())}, expected_status=(403, 404))
-        if not traverse_obj(data, ('result', {dict})):
-            msg = traverse_obj(data, ('messages', '__default'), 'title', expected_type=str)
-            raise ExtractorError(msg or 'Got empty response from playback API', expected=True)
+        if not webpage:
+            # Page sometimes returns captcha page with HTTP 403
+            raise ExtractorError(
+                'Unable to access page. You may have been blocked.',
+                expected=True)

-        data = data['result']
+        if 'msg_block_01.png' in webpage:
+            self.raise_geo_restricted(
+                msg='This content is not available in your region',
+                countries=self._GEO_COUNTRIES)

-        def try_geo_bypass(url):
-            return traverse_obj(url, ({parse_qs}, 'stream_url', 0, {url_or_none})) or url
+        if 'This video is only available to ODK PLUS members.' in webpage:
+            raise ExtractorError(
+                'This video is only available to ODK PLUS members.',
+                expected=True)

-        def try_upgrade_quality(url):
-            mod_url = re.sub(r'_720(p?)\.m3u8', r'_1080\1.m3u8', url)
-            return mod_url if mod_url != url and self._request_webpage(
-                HEADRequest(mod_url), video_id, note='Checking for higher quality format',
-                errnote='No higher quality format found', fatal=False) else url
+        if 'ODK PREMIUM Members Only' in webpage:
+            raise ExtractorError(
+                'This video is only available to ODK PREMIUM members.',
+                expected=True)

-        formats = []
-        for m3u8_url in traverse_obj(data, (('sources', 'manifest'), ..., 'url', {url_or_none}, {try_geo_bypass})):
-            formats.extend(self._extract_m3u8_formats(try_upgrade_quality(m3u8_url), video_id, fatal=False))
+        title = self._search_regex(
+            r'class=["\']episode_title["\'][^>]*>([^<]+)',
+            webpage, 'episode_title', fatal=False) or self._og_search_title(webpage)

-        subtitles = {}
-        for track in traverse_obj(data, ('text_tracks', lambda _, v: url_or_none(v['url']))):
-            subtitles.setdefault(track.get('language', 'und'), []).append({
-                'url': track['url'],
-                'ext': track.get('codec'),
-                'name': track.get('label'),
-            })
+        jw_config = self._parse_json(
+            self._search_regex((
+                r'(?P<options>{\s*[\'"]tracks[\'"].*?})[)\];]+$',
+                r'playlist\s*=\s*\[(?P<options>.+)];?$',
+                r'odkPlayer\.init.*?(?P<options>{[^;]+}).*?;',
+            ), webpage, 'jw config', flags=re.MULTILINE | re.DOTALL, group='options'),
+            video_id, transform_source=js_to_json)
+        info = self._parse_jwplayer_data(
+            jw_config, video_id, require_title=False, m3u8_id='hls',
+            base_url=url)

-        def if_series(key=None):
-            return lambda obj: obj[key] if key and obj['kind'] == 'series' else None
-
-        return {
-            'id': video_id,
-            'title': join_nonempty(
-                ('episode', 'program', 'title'),
-                ('episode', 'title'), from_dict=data, delim=': '),
-            **traverse_obj(data, {
-                'thumbnail': ('episode', 'images', 'thumbnail', {url_or_none}),
-                'release_date': ('episode', 'release_date', {lambda x: x.replace('-', '')}, {unified_strdate}),
-                'duration': ('duration', {functools.partial(float_or_none, scale=1000)}),
-                'age_limit': ('age_rating', 'name', {lambda x: x.replace('R', '')}, {parse_age_limit}),
-                'series': ('episode', {if_series(key='program')}, 'title'),
-                'series_id': ('episode', {if_series(key='program')}, 'id'),
-                'episode': ('episode', {if_series(key='title')}),
-                'episode_number': ('episode', {if_series(key='number')}, {int_or_none}),
-            }, get_all=False),
-            'formats': formats,
-            'subtitles': subtitles,
-        }
-
-
-class OnDemandKoreaProgramIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?ondemandkorea\.com/(?:en/)?player/vod/(?P<id>[a-z0-9-]+)(?:$|#)'
-    _GEO_COUNTRIES = ['US', 'CA']
-
-    _TESTS = [{
-        'url': 'https://www.ondemandkorea.com/player/vod/uskn-news',
-        'info_dict': {
-            'id': 'uskn-news',
-        },
-        'playlist_mincount': 755,
-    }, {
-        'url': 'https://www.ondemandkorea.com/en/player/vod/the-land',
-        'info_dict': {
-            'id': 'the-land',
-        },
-        'playlist_count': 52,
-    }]
-
-    _PAGE_SIZE = 100
-
-    def _fetch_page(self, display_id, page):
-        page += 1
-        page_data = self._download_json(
-            f'https://odkmedia.io/odx/api/v3/program/{display_id}/episodes/', display_id,
-            headers={'service-name': 'odk'}, query={
-                'page': page,
-                'page_size': self._PAGE_SIZE,
-            }, note=f'Downloading page {page}', expected_status=404)
-        for episode in traverse_obj(page_data, ('result', 'results', ...)):
-            yield self.url_result(
-                f'https://www.ondemandkorea.com/player/vod/{display_id}?contentId={episode["id"]}',
-                ie=OnDemandKoreaIE, video_title=episode.get('title'))
-
-    def _real_extract(self, url):
-        display_id = self._match_id(url)
-
-        entries = OnDemandPagedList(functools.partial(
-            self._fetch_page, display_id), self._PAGE_SIZE)
-
-        return self.playlist_result(entries, display_id)
+        info.update({
+            'title': title,
+            'description': self._og_search_description(webpage),
+            'thumbnail': self._og_search_thumbnail(webpage)
+        })
+        return info

View File

@@ -4,16 +4,15 @@ import re

 from .common import InfoExtractor
 from ..networking import HEADRequest
 from ..utils import (
-    InAdvancePagedList,
     clean_html,
     determine_ext,
     float_or_none,
+    InAdvancePagedList,
     int_or_none,
     join_nonempty,
-    make_archive_id,
-    mimetype2ext,
     orderedSet,
     remove_end,
+    make_archive_id,
     smuggle_url,
     strip_jsonp,
     try_call,
@@ -22,7 +21,6 @@ from ..utils import (
     unsmuggle_url,
     url_or_none,
 )
-from ..utils.traversal import traverse_obj


 class ORFTVthekIE(InfoExtractor):
@@ -336,45 +334,6 @@ class ORFRadioIE(InfoExtractor):
             self._entries(data, station or station2), show_id, data.get('title'), clean_html(data.get('subtitle')))


-class ORFPodcastIE(InfoExtractor):
-    IE_NAME = 'orf:podcast'
-    _STATION_RE = '|'.join(map(re.escape, (
-        'bgl', 'fm4', 'ktn', 'noe', 'oe1', 'oe3',
-        'ooe', 'sbg', 'stm', 'tir', 'tv', 'vbg', 'wie')))
-    _VALID_URL = rf'https?://sound\.orf\.at/podcast/(?P<station>{_STATION_RE})/(?P<show>[\w-]+)/(?P<id>[\w-]+)'
-    _TESTS = [{
-        'url': 'https://sound.orf.at/podcast/oe3/fruehstueck-bei-mir/nicolas-stockhammer-15102023',
-        'md5': '526a5700e03d271a1505386a8721ab9b',
-        'info_dict': {
-            'id': 'nicolas-stockhammer-15102023',
-            'ext': 'mp3',
-            'title': 'Nicolas Stockhammer (15.10.2023)',
-            'duration': 3396.0,
-            'series': 'Frühstück bei mir',
-        },
-        'skip': 'ORF podcasts are only available for a limited time'
-    }]
-
-    def _real_extract(self, url):
-        station, show, show_id = self._match_valid_url(url).group('station', 'show', 'id')
-        data = self._download_json(
-            f'https://audioapi.orf.at/radiothek/api/2.0/podcast/{station}/{show}/{show_id}', show_id)
-
-        return {
-            'id': show_id,
-            'ext': 'mp3',
-            'vcodec': 'none',
-            **traverse_obj(data, ('payload', {
-                'url': ('enclosures', 0, 'url'),
-                'ext': ('enclosures', 0, 'type', {mimetype2ext}),
-                'title': 'title',
-                'description': ('description', {clean_html}),
-                'duration': ('duration', {functools.partial(float_or_none, scale=1000)}),
-                'series': ('podcast', 'title'),
-            })),
-        }
-
-
 class ORFIPTVIE(InfoExtractor):
     IE_NAME = 'orf:iptv'
     IE_DESC = 'iptv.ORF.at'

View File

@@ -4,7 +4,6 @@ from ..utils import (
     parse_iso8601,
     unescapeHTML,
 )
-from ..utils.traversal import traverse_obj


 class PeriscopeBaseIE(InfoExtractor):
@@ -21,25 +20,22 @@ class PeriscopeBaseIE(InfoExtractor):
         title = broadcast.get('status') or 'Periscope Broadcast'
         uploader = broadcast.get('user_display_name') or broadcast.get('username')
         title = '%s - %s' % (uploader, title) if uploader else title
+        is_live = broadcast.get('state').lower() == 'running'

         thumbnails = [{
             'url': broadcast[image],
-        } for image in ('image_url', 'image_url_medium', 'image_url_small') if broadcast.get(image)]
+        } for image in ('image_url', 'image_url_small') if broadcast.get(image)]

         return {
             'id': broadcast.get('id') or video_id,
             'title': title,
-            'timestamp': parse_iso8601(broadcast.get('created_at')) or int_or_none(
-                broadcast.get('created_at_ms'), scale=1000),
-            'release_timestamp': int_or_none(broadcast.get('scheduled_start_ms'), scale=1000),
+            'timestamp': parse_iso8601(broadcast.get('created_at')),
             'uploader': uploader,
             'uploader_id': broadcast.get('user_id') or broadcast.get('username'),
             'thumbnails': thumbnails,
             'view_count': int_or_none(broadcast.get('total_watched')),
             'tags': broadcast.get('tags'),
-            'live_status': {
-                'running': 'is_live',
-                'not_started': 'is_upcoming',
-            }.get(traverse_obj(broadcast, ('state', {str.lower}))) or 'was_live'
+            'is_live': is_live,
         }

     @staticmethod
@staticmethod @staticmethod

View File

@@ -262,14 +262,14 @@ class PolskieRadioAuditionIE(InfoExtractor):
             query=query, headers={'x-api-key': '9bf6c5a2-a7d0-4980-9ed7-a3f7291f2a81'})

     def _entries(self, playlist_id, has_episodes, has_articles):
-        for i in itertools.count(0) if has_episodes else []:
+        for i in itertools.count(1) if has_episodes else []:
             page = self._call_lp3(
                 'AudioArticle/GetListByCategoryId', {
                     'categoryId': playlist_id,
                     'PageSize': 10,
                     'skip': i,
                     'format': 400,
-                }, playlist_id, f'Downloading episode list page {i + 1}')
+                }, playlist_id, f'Downloading episode list page {i}')
             if not traverse_obj(page, 'data'):
                 break
             for episode in page['data']:
@@ -281,14 +281,14 @@ class PolskieRadioAuditionIE(InfoExtractor):
                     'timestamp': parse_iso8601(episode.get('datePublic')),
                 }

-        for i in itertools.count(0) if has_articles else []:
+        for i in itertools.count(1) if has_articles else []:
             page = self._call_lp3(
                 'Article/GetListByCategoryId', {
                     'categoryId': playlist_id,
                     'PageSize': 9,
                     'skip': i,
                     'format': 400,
-                }, playlist_id, f'Downloading article list page {i + 1}')
+                }, playlist_id, f'Downloading article list page {i}')
             if not traverse_obj(page, 'data'):
                 break
             for article in page['data']:

View File

@ -15,7 +15,7 @@ from ..utils import (
class QDanceIE(InfoExtractor): class QDanceIE(InfoExtractor):
_NETRC_MACHINE = 'qdance' _NETRC_MACHINE = 'qdance'
_VALID_URL = r'https?://(?:www\.)?q-dance\.com/network/(?:library|live)/(?P<id>[\w-]+)' _VALID_URL = r'https?://(?:www\.)?q-dance\.com/network/(?:library|live)/(?P<id>\d+)'
_TESTS = [{ _TESTS = [{
'note': 'vod', 'note': 'vod',
'url': 'https://www.q-dance.com/network/library/146542138', 'url': 'https://www.q-dance.com/network/library/146542138',
@@ -53,27 +53,6 @@ class QDanceIE(InfoExtractor):
             'channel_id': 'qdancenetwork.video_149170353',
         },
         'skip': 'Completed livestream',
-    }, {
-        'note': 'vod with alphanumeric id',
-        'url': 'https://www.q-dance.com/network/library/WhDleSIWSfeT3Q9ObBKBeA',
-        'info_dict': {
-            'id': 'WhDleSIWSfeT3Q9ObBKBeA',
-            'ext': 'mp4',
-            'title': 'Aftershock I Defqon.1 Weekend Festival 2023 I Sunday I BLUE',
-            'display_id': 'naam-i-defqon-1-weekend-festival-2023-i-dag-i-podium',
-            'description': 'Relive Defqon.1 Path of the Warrior with Aftershock at the BLUE 🔥',
-            'series': 'Defqon.1',
-            'series_id': '31840378',
-            'season': 'Defqon.1 Weekend Festival 2023',
-            'season_id': '141735599',
-            'duration': 3507,
-            'availability': 'premium_only',
-            'thumbnail': 'https://images.q-dance.network/1698158361-230625-135716-defqon-1-aftershock.jpg',
-        },
-        'params': {'skip_download': 'm3u8'},
-    }, {
-        'url': 'https://www.q-dance.com/network/library/-uRFKXwmRZGVnve7av9uqA',
-        'only_matching': True,
     }]

     _access_token = None
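The `_VALID_URL` change above is about ID shape: library IDs can be URL-safe alphanumeric strings (including `-` and `_`), which `\d+` rejects. A quick check with the two patterns from the diff:

    import re

    BROAD = re.compile(r'https?://(?:www\.)?q-dance\.com/network/(?:library|live)/(?P<id>[\w-]+)')
    NARROW = re.compile(r'https?://(?:www\.)?q-dance\.com/network/(?:library|live)/(?P<id>\d+)')

    url = 'https://www.q-dance.com/network/library/WhDleSIWSfeT3Q9ObBKBeA'
    assert NARROW.match(url) is None  # numeric-only pattern misses it
    assert BROAD.match(url).group('id') == 'WhDleSIWSfeT3Q9ObBKBeA'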

View File

@@ -1,150 +0,0 @@
import itertools
from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
extract_attributes,
get_element_by_class,
get_element_html_by_class,
get_element_text_and_html_by_tag,
get_elements_html_by_class,
int_or_none,
join_nonempty,
try_call,
unified_strdate,
update_url,
urljoin
)
from ..utils.traversal import traverse_obj
class RadioComercialIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?radiocomercial\.pt/podcasts/[^/?#]+/t?(?P<season>\d+)/(?P<id>[\w-]+)'
_TESTS = [{
'url': 'https://radiocomercial.pt/podcasts/o-homem-que-mordeu-o-cao/t6/taylor-swift-entranhando-se-que-nem-uma-espada-no-ventre-dos-fas#page-content-wrapper',
'md5': '5f4fe8e485b29d2e8fd495605bc2c7e4',
'info_dict': {
'id': 'taylor-swift-entranhando-se-que-nem-uma-espada-no-ventre-dos-fas',
'ext': 'mp3',
'title': 'Taylor Swift entranhando-se que nem uma espada no ventre dos fãs.',
'release_date': '20231025',
'thumbnail': r're:https://radiocomercial.pt/upload/[^.]+.jpg',
'season': 6
}
}, {
'url': 'https://radiocomercial.pt/podcasts/convenca-me-num-minuto/t3/convenca-me-num-minuto-que-os-lobisomens-existem',
'md5': '47e96c273aef96a8eb160cd6cf46d782',
'info_dict': {
'id': 'convenca-me-num-minuto-que-os-lobisomens-existem',
'ext': 'mp3',
'title': 'Convença-me num minuto que os lobisomens existem',
'release_date': '20231026',
'thumbnail': r're:https://radiocomercial.pt/upload/[^.]+.jpg',
'season': 3
}
}, {
'url': 'https://radiocomercial.pt/podcasts/inacreditavel-by-ines-castel-branco/t2/o-desastre-de-aviao',
'md5': '69be64255420fec23b7259955d771e54',
'info_dict': {
'id': 'o-desastre-de-aviao',
'ext': 'mp3',
'title': 'O desastre de avião',
'description': 'md5:8a82beeb372641614772baab7246245f',
'release_date': '20231101',
'thumbnail': r're:https://radiocomercial.pt/upload/[^.]+.jpg',
'season': 2
},
'params': {
# inconsistent md5
'skip_download': True,
},
}, {
'url': 'https://radiocomercial.pt/podcasts/tnt-todos-no-top/2023/t-n-t-29-de-outubro',
'md5': '91d32d4d4b1407272068b102730fc9fa',
'info_dict': {
'id': 't-n-t-29-de-outubro',
'ext': 'mp3',
'title': 'T.N.T 29 de outubro',
'release_date': '20231029',
'thumbnail': r're:https://radiocomercial.pt/upload/[^.]+.jpg',
'season': 2023
}
}]
def _real_extract(self, url):
video_id, season = self._match_valid_url(url).group('id', 'season')
webpage = self._download_webpage(url, video_id)
return {
'id': video_id,
'title': self._html_extract_title(webpage),
'description': self._og_search_description(webpage, default=None),
'release_date': unified_strdate(get_element_by_class(
'date', get_element_html_by_class('descriptions', webpage) or '')),
'thumbnail': self._og_search_thumbnail(webpage),
'season': int_or_none(season),
'url': extract_attributes(get_element_html_by_class('audiofile', webpage) or '').get('href'),
}
class RadioComercialPlaylistIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?radiocomercial\.pt/podcasts/(?P<id>[\w-]+)(?:/t?(?P<season>\d+))?/?(?:$|[?#])'
_TESTS = [{
'url': 'https://radiocomercial.pt/podcasts/convenca-me-num-minuto/t3',
'info_dict': {
'id': 'convenca-me-num-minuto_t3',
'title': 'Convença-me num Minuto - Temporada 3',
},
'playlist_mincount': 32
}, {
'url': 'https://radiocomercial.pt/podcasts/o-homem-que-mordeu-o-cao',
'info_dict': {
'id': 'o-homem-que-mordeu-o-cao',
'title': 'O Homem Que Mordeu o Cão',
},
'playlist_mincount': 19
}, {
'url': 'https://radiocomercial.pt/podcasts/as-minhas-coisas-favoritas',
'info_dict': {
'id': 'as-minhas-coisas-favoritas',
'title': 'As Minhas Coisas Favoritas',
},
'playlist_mincount': 131
}, {
'url': 'https://radiocomercial.pt/podcasts/tnt-todos-no-top/t2023',
'info_dict': {
'id': 'tnt-todos-no-top_t2023',
'title': 'TNT - Todos No Top - Temporada 2023',
},
'playlist_mincount': 39
}]
def _entries(self, url, playlist_id):
for page in itertools.count(1):
try:
webpage = self._download_webpage(
f'{url}/{page}', playlist_id, f'Downloading page {page}')
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 404:
break
raise
episodes = get_elements_html_by_class('tm-ouvir-podcast', webpage)
if not episodes:
break
for url_path in traverse_obj(episodes, (..., {extract_attributes}, 'href')):
episode_url = urljoin(url, url_path)
if RadioComercialIE.suitable(episode_url):
yield episode_url
def _real_extract(self, url):
podcast, season = self._match_valid_url(url).group('id', 'season')
playlist_id = join_nonempty(podcast, season, delim='_t')
url = update_url(url, query=None, fragment=None)
webpage = self._download_webpage(url, playlist_id)
name = try_call(lambda: get_element_text_and_html_by_tag('h1', webpage)[0])
title = name if name == season else join_nonempty(name, season, delim=' - Temporada ')
return self.playlist_from_matches(
self._entries(url, playlist_id), playlist_id, title, ie=RadioComercialIE)
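`_entries` above pages through `{url}/{page}` until the site answers 404. The same stop-on-404 loop with only the standard library (a sketch of the pattern, not the extractor's actual plumbing):

    import itertools
    import urllib.error
    import urllib.request

    def pages(base_url):
        for page in itertools.count(1):
            try:
                with urllib.request.urlopen(f'{base_url}/{page}') as resp:
                    yield resp.read()
            except urllib.error.HTTPError as e:
                if e.code == 404:  # ran past the last page
                    return
                raise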

View File

@@ -1,200 +0,0 @@
from .common import InfoExtractor
from ..utils import (
clean_html,
int_or_none,
parse_iso8601,
parse_resolution,
url_or_none,
)
from ..utils.traversal import traverse_obj
class SBSCoKrIE(InfoExtractor):
IE_NAME = 'sbs.co.kr'
_VALID_URL = [r'https?://allvod\.sbs\.co\.kr/allvod/vod(?:Package)?EndPage\.do\?(?:[^#]+&)?mdaId=(?P<id>\d+)',
r'https?://programs\.sbs\.co\.kr/(?:enter|drama|culture|sports|plus|mtv|kth)/[a-z0-9]+/(?:vod|clip|movie)/\d+/(?P<id>(?:OC)?\d+)']
_TESTS = [{
'url': 'https://programs.sbs.co.kr/enter/dongsang2/clip/52007/OC467706746?div=main_pop_clip',
'md5': 'c3f6d45e1fb5682039d94cda23c36f19',
'info_dict': {
'id': 'OC467706746',
'ext': 'mp4',
'title': '‘아슬아슬’ 박군♥한영의 새 집 인테리어 대첩♨',
'description': 'md5:6a71eb1979ee4a94ea380310068ccab4',
'thumbnail': 'https://img2.sbs.co.kr/ops_clip_img/2023/10/10/34c4c0f9-a9a5-4ff6-a92e-9bb4b5f6fa65915w1280.jpg',
'release_timestamp': 1696889400,
'release_date': '20231009',
'view_count': int,
'like_count': int,
'duration': 238,
'age_limit': 15,
'series': '동상이몽2_너는 내 운명',
'episode': '레이디제인, ‘혼전임신설’ 3개월 앞당긴 결혼식 비하인드 스토리 최초 공개!',
'episode_number': 311,
},
}, {
'url': 'https://allvod.sbs.co.kr/allvod/vodPackageEndPage.do?mdaId=22000489324&combiId=PA000000284&packageType=A&isFreeYN=',
'md5': 'bf46b2e89fda7ae7de01f5743cef7236',
'info_dict': {
'id': '22000489324',
'ext': 'mp4',
'title': '[다시보기] 트롤리 15회',
'description': 'md5:0e55d74bef1ac55c61ae90c73ac485f4',
'thumbnail': 'https://img2.sbs.co.kr/img/sbs_cms/WE/2023/02/14/arC1676333794938-1280-720.jpg',
'release_timestamp': 1676325600,
'release_date': '20230213',
'view_count': int,
'like_count': int,
'duration': 5931,
'age_limit': 15,
'series': '트롤리',
'episode': '이거 다 거짓말이야',
'episode_number': 15,
},
}, {
'url': 'https://programs.sbs.co.kr/enter/fourman/vod/69625/22000508948',
'md5': '41e8ae4cc6c8424f4e4d76661a4becbf',
'info_dict': {
'id': '22000508948',
'ext': 'mp4',
'title': '[다시보기] 신발 벗고 돌싱포맨 104회',
'description': 'md5:c6a247383c4dd661e4b956bf4d3b586e',
'thumbnail': 'https://img2.sbs.co.kr/img/sbs_cms/WE/2023/08/30/2vb1693355446261-1280-720.jpg',
'release_timestamp': 1693342800,
'release_date': '20230829',
'view_count': int,
'like_count': int,
'duration': 7036,
'age_limit': 15,
'series': '신발 벗고 돌싱포맨',
'episode': '돌싱포맨 저격수들 등장!',
'episode_number': 104,
},
}]
def _call_api(self, video_id, rscuse=''):
return self._download_json(
f'https://api.play.sbs.co.kr/1.0/sbs_vodall/{video_id}', video_id,
note=f'Downloading m3u8 information {rscuse}',
query={
'platform': 'pcweb',
'protocol': 'download',
'absolute_show': 'Y',
'service': 'program',
'ssl': 'Y',
'rscuse': rscuse,
})
def _real_extract(self, url):
video_id = self._match_id(url)
details = self._call_api(video_id)
source = traverse_obj(details, ('vod', 'source', 'mediasource', {dict})) or {}
formats = []
for stream in traverse_obj(details, (
'vod', 'source', 'mediasourcelist', lambda _, v: v['mediaurl'] or v['mediarscuse']
), default=[source]):
if not stream.get('mediaurl'):
new_source = traverse_obj(
self._call_api(video_id, rscuse=stream['mediarscuse']),
('vod', 'source', 'mediasource', {dict})) or {}
if new_source.get('mediarscuse') == source.get('mediarscuse') or not new_source.get('mediaurl'):
continue
stream = new_source
formats.append({
'url': stream['mediaurl'],
'format_id': stream.get('mediarscuse'),
'format_note': stream.get('medianame'),
**parse_resolution(stream.get('quality')),
'preference': int_or_none(stream.get('mediarscuse'))
})
caption_url = traverse_obj(details, ('vod', 'source', 'subtitle', {url_or_none}))
return {
'id': video_id,
**traverse_obj(details, ('vod', {
'title': ('info', 'title'),
'duration': ('info', 'duration', {int_or_none}),
'view_count': ('info', 'viewcount', {int_or_none}),
'like_count': ('info', 'likecount', {int_or_none}),
'description': ('info', 'synopsis', {clean_html}),
'episode': ('info', 'content', ('contenttitle', 'title')),
'episode_number': ('info', 'content', 'number', {int_or_none}),
'series': ('info', 'program', 'programtitle'),
'age_limit': ('info', 'targetage', {int_or_none}),
'release_timestamp': ('info', 'broaddate', {parse_iso8601}),
'thumbnail': ('source', 'thumbnail', 'origin', {url_or_none}),
}), get_all=False),
'formats': formats,
'subtitles': {'ko': [{'url': caption_url}]} if caption_url else None,
}
class SBSCoKrAllvodProgramIE(InfoExtractor):
IE_NAME = 'sbs.co.kr:allvod_program'
_VALID_URL = r'https?://allvod\.sbs\.co\.kr/allvod/vod(?:Free)?ProgramDetail\.do\?(?:[^#]+&)?pgmId=(?P<id>P?\d+)'
_TESTS = [{
'url': 'https://allvod.sbs.co.kr/allvod/vodFreeProgramDetail.do?type=legend&pgmId=22000010159&listOrder=vodCntAsc',
'info_dict': {
'_type': 'playlist',
'id': '22000010159',
},
'playlist_count': 18,
}, {
'url': 'https://allvod.sbs.co.kr/allvod/vodProgramDetail.do?pgmId=P460810577',
'info_dict': {
'_type': 'playlist',
'id': 'P460810577',
},
'playlist_count': 13,
}]
def _real_extract(self, url):
program_id = self._match_id(url)
details = self._download_json(
'https://allvod.sbs.co.kr/allvod/vodProgramDetail/vodProgramDetailAjax.do',
program_id, note='Downloading program details',
query={
'pgmId': program_id,
'currentCount': '10000',
})
return self.playlist_result(
[self.url_result(f'https://allvod.sbs.co.kr/allvod/vodEndPage.do?mdaId={video_id}', SBSCoKrIE)
for video_id in traverse_obj(details, ('list', ..., 'mdaId'))], program_id)
class SBSCoKrProgramsVodIE(InfoExtractor):
IE_NAME = 'sbs.co.kr:programs_vod'
_VALID_URL = r'https?://programs\.sbs\.co\.kr/(?:enter|drama|culture|sports|plus|mtv)/(?P<id>[a-z0-9]+)/vods'
_TESTS = [{
'url': 'https://programs.sbs.co.kr/culture/morningwide/vods/65007',
'info_dict': {
'_type': 'playlist',
'id': '00000210215',
},
'playlist_mincount': 9782,
}, {
'url': 'https://programs.sbs.co.kr/enter/dongsang2/vods/52006',
'info_dict': {
'_type': 'playlist',
'id': '22000010476',
},
'playlist_mincount': 312,
}]
def _real_extract(self, url):
program_slug = self._match_id(url)
program_id = self._download_json(
f'https://static.apis.sbs.co.kr/program-api/1.0/menu/{program_slug}', program_slug,
note='Downloading program menu data')['program']['programid']
return self.url_result(
f'https://allvod.sbs.co.kr/allvod/vodProgramDetail.do?pgmId={program_id}', SBSCoKrAllvodProgramIE)
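The metadata mapping in `SBSCoKrIE._real_extract` leans on yt-dlp's `traverse_obj`; a toy single-path equivalent, only to show the idea of fail-soft nested lookups (not the real implementation, which handles branching, type filters and more):

    def traverse(obj, *path, default=None):
        for key in path:
            if not isinstance(obj, dict) or key not in obj:
                return default
            obj = obj[key]
        return obj

    details = {'vod': {'info': {'title': 'Episode 15', 'duration': 5931}}}
    assert traverse(details, 'vod', 'info', 'title') == 'Episode 15'
    assert traverse(details, 'vod', 'source', 'subtitle') is None  # missing path, no KeyError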

View File

@@ -38,48 +38,9 @@ class StacommuBaseIE(WrestleUniverseBaseIE):
             return None
         return traverse_obj(encryption_data, {'key': ('key', {decrypt}), 'iv': ('iv', {decrypt})})

-    def _extract_vod(self, url):
-        video_id = self._match_id(url)
-        video_info = self._download_metadata(
-            url, video_id, 'ja', ('dehydratedState', 'queries', 0, 'state', 'data'))
-        hls_info, decrypt = self._call_encrypted_api(
-            video_id, ':watch', 'stream information', data={'method': 1})
-
-        return {
-            'id': video_id,
-            'formats': self._get_formats(hls_info, ('protocolHls', 'url', {url_or_none}), video_id),
-            'hls_aes': self._extract_hls_key(hls_info, 'protocolHls', decrypt),
-            **traverse_obj(video_info, {
-                'title': ('displayName', {str}),
-                'description': ('description', {str}),
-                'timestamp': ('watchStartTime', {int_or_none}),
-                'thumbnail': ('keyVisualUrl', {url_or_none}),
-                'cast': ('casts', ..., 'displayName', {str}),
-                'duration': ('duration', {int}),
-            }),
-        }
-
-    def _extract_ppv(self, url):
-        video_id = self._match_id(url)
-        video_info = self._call_api(video_id, msg='video information', query={'al': 'ja'}, auth=False)
-        hls_info, decrypt = self._call_encrypted_api(
-            video_id, ':watchArchive', 'stream information', data={'method': 1})
-
-        return {
-            'id': video_id,
-            'formats': self._get_formats(hls_info, ('hls', 'urls', ..., {url_or_none}), video_id),
-            'hls_aes': self._extract_hls_key(hls_info, 'hls', decrypt),
-            **traverse_obj(video_info, {
-                'title': ('displayName', {str}),
-                'timestamp': ('startTime', {int_or_none}),
-                'thumbnail': ('keyVisualUrl', {url_or_none}),
-                'duration': ('duration', {int_or_none}),
-            }),
-        }
-

 class StacommuVODIE(StacommuBaseIE):
-    _VALID_URL = r'https?://www\.stacommu\.jp/(?:en/)?videos/episodes/(?P<id>[\da-zA-Z]+)'
+    _VALID_URL = r'https?://www\.stacommu\.jp/videos/episodes/(?P<id>[\da-zA-Z]+)'
     _TESTS = [{
         # not encrypted
         'url': 'https://www.stacommu.jp/videos/episodes/aXcVKjHyAENEjard61soZZ',
@@ -118,19 +79,34 @@ class StacommuVODIE(StacommuBaseIE):
         'params': {
             'skip_download': 'm3u8',
         },
-    }, {
-        'url': 'https://www.stacommu.jp/en/videos/episodes/aXcVKjHyAENEjard61soZZ',
-        'only_matching': True,
     }]

     _API_PATH = 'videoEpisodes'

     def _real_extract(self, url):
-        return self._extract_vod(url)
+        video_id = self._match_id(url)
+        video_info = self._download_metadata(
+            url, video_id, 'ja', ('dehydratedState', 'queries', 0, 'state', 'data'))
+        hls_info, decrypt = self._call_encrypted_api(
+            video_id, ':watch', 'stream information', data={'method': 1})
+
+        return {
+            'id': video_id,
+            'formats': self._get_formats(hls_info, ('protocolHls', 'url', {url_or_none}), video_id),
+            'hls_aes': self._extract_hls_key(hls_info, 'protocolHls', decrypt),
+            **traverse_obj(video_info, {
+                'title': ('displayName', {str}),
+                'description': ('description', {str}),
+                'timestamp': ('watchStartTime', {int_or_none}),
+                'thumbnail': ('keyVisualUrl', {url_or_none}),
+                'cast': ('casts', ..., 'displayName', {str}),
+                'duration': ('duration', {int}),
+            }),
+        }
 class StacommuLiveIE(StacommuBaseIE):
-    _VALID_URL = r'https?://www\.stacommu\.jp/(?:en/)?live/(?P<id>[\da-zA-Z]+)'
+    _VALID_URL = r'https?://www\.stacommu\.jp/live/(?P<id>[\da-zA-Z]+)'
     _TESTS = [{
         'url': 'https://www.stacommu.jp/live/d2FJ3zLnndegZJCAEzGM3m',
         'info_dict': {
@@ -149,83 +125,24 @@ class StacommuLiveIE(StacommuBaseIE):
         'params': {
             'skip_download': 'm3u8',
         },
-    }, {
-        'url': 'https://www.stacommu.jp/en/live/d2FJ3zLnndegZJCAEzGM3m',
-        'only_matching': True,
     }]

     _API_PATH = 'events'

     def _real_extract(self, url):
-        return self._extract_ppv(url)
-
-
-class TheaterComplexTownBaseIE(StacommuBaseIE):
-    _NETRC_MACHINE = 'theatercomplextown'
-    _API_HOST = 'api.theater-complex.town'
-    _LOGIN_QUERY = {'key': 'AIzaSyAgNCqToaIz4a062EeIrkhI_xetVfAOrfc'}
-    _LOGIN_HEADERS = {
-        'Accept': '*/*',
-        'Content-Type': 'application/json',
-        'X-Client-Version': 'Chrome/JsCore/9.23.0/FirebaseCore-web',
-        'Referer': 'https://www.theater-complex.town/',
-        'Origin': 'https://www.theater-complex.town',
-    }
-
-
-class TheaterComplexTownVODIE(TheaterComplexTownBaseIE):
-    _VALID_URL = r'https?://(?:www\.)?theater-complex\.town/(?:en/)?videos/episodes/(?P<id>\w+)'
-    IE_NAME = 'theatercomplextown:vod'
-    _TESTS = [{
-        'url': 'https://www.theater-complex.town/videos/episodes/hoxqidYNoAn7bP92DN6p78',
-        'info_dict': {
-            'id': 'hoxqidYNoAn7bP92DN6p78',
-            'ext': 'mp4',
-            'title': '演劇ドラフトグランプリ2023 劇団『恋のぼり』〜劇団名決定秘話ラジオ',
-            'description': 'md5:a7e2e9cf570379ea67fb630f345ff65d',
-            'cast': ['玉城 裕規', '石川 凌雅'],
-            'thumbnail': 'https://image.theater-complex.town/5URnXX6KCeDysuFrPkP38o/5URnXX6KCeDysuFrPkP38o',
-            'upload_date': '20231103',
-            'timestamp': 1699016400,
-            'duration': 868,
-        },
-        'params': {
-            'skip_download': 'm3u8',
-        },
-    }, {
-        'url': 'https://www.theater-complex.town/en/videos/episodes/6QT7XYwM9dJz5Gf9VB6K5y',
-        'only_matching': True,
-    }]
-
-    _API_PATH = 'videoEpisodes'
-
-    def _real_extract(self, url):
-        return self._extract_vod(url)
-
-
-class TheaterComplexTownPPVIE(TheaterComplexTownBaseIE):
-    _VALID_URL = r'https?://(?:www\.)?theater-complex\.town/(?:en/)?ppv/(?P<id>\w+)'
-    IE_NAME = 'theatercomplextown:ppv'
-    _TESTS = [{
-        'url': 'https://www.theater-complex.town/ppv/wytW3X7khrjJBUpKuV3jen',
-        'info_dict': {
-            'id': 'wytW3X7khrjJBUpKuV3jen',
-            'ext': 'mp4',
-            'title': 'BREAK FREE STARS 11月5日12:30千秋楽公演',
-            'thumbnail': 'https://image.theater-complex.town/5GWEB31JcTUfjtgdeV5t6o/5GWEB31JcTUfjtgdeV5t6o',
-            'upload_date': '20231105',
-            'timestamp': 1699155000,
-            'duration': 8378,
-        },
-        'params': {
-            'skip_download': 'm3u8',
-        },
-    }, {
-        'url': 'https://www.theater-complex.town/en/ppv/wytW3X7khrjJBUpKuV3jen',
-        'only_matching': True,
-    }]
-
-    _API_PATH = 'events'
-
-    def _real_extract(self, url):
-        return self._extract_ppv(url)
+        video_id = self._match_id(url)
+        video_info = self._call_api(video_id, msg='video information', query={'al': 'ja'}, auth=False)
+        hls_info, decrypt = self._call_encrypted_api(
+            video_id, ':watchArchive', 'stream information', data={'method': 1})
+
+        return {
+            'id': video_id,
+            'formats': self._get_formats(hls_info, ('hls', 'urls', ..., {url_or_none}), video_id),
+            'hls_aes': self._extract_hls_key(hls_info, 'hls', decrypt),
+            **traverse_obj(video_info, {
+                'title': ('displayName', {str}),
+                'timestamp': ('startTime', {int_or_none}),
+                'thumbnail': ('keyVisualUrl', {url_or_none}),
+                'duration': ('duration', {int_or_none}),
+            }),
+        }
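The change above inlines `_extract_vod`/`_extract_ppv` back into the two subclasses; the removed version had hoisted them into the shared base so the sibling TheaterComplexTown extractors could reuse them. The shape of that refactor, as a bare sketch with made-up names:

    class BaseIE:
        def _extract_common(self, video_id, api_path):
            # single shared implementation, parameterized by the API path
            return {'id': video_id, 'api': api_path}

    class VODIE(BaseIE):
        _API_PATH = 'videoEpisodes'
        def _real_extract(self, video_id):
            return self._extract_common(video_id, self._API_PATH)

    class LiveIE(BaseIE):
        _API_PATH = 'events'
        def _real_extract(self, video_id):
            return self._extract_common(video_id, self._API_PATH)

    assert VODIE()._real_extract('abc') == {'id': 'abc', 'api': 'videoEpisodes'}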

View File

@@ -0,0 +1,66 @@
from .common import InfoExtractor
from ..utils import remove_end
class ThisAVIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?thisav\.com/video/(?P<id>[0-9]+)/.*'
_TESTS = [{
# jwplayer
'url': 'http://www.thisav.com/video/47734/%98%26sup1%3B%83%9E%83%82---just-fit.html',
'md5': '0480f1ef3932d901f0e0e719f188f19b',
'info_dict': {
'id': '47734',
'ext': 'flv',
'title': '高樹マリア - Just fit',
'uploader': 'dj7970',
'uploader_id': 'dj7970'
}
}, {
# html5 media
'url': 'http://www.thisav.com/video/242352/nerdy-18yo-big-ass-tattoos-and-glasses.html',
'md5': 'ba90c076bd0f80203679e5b60bf523ee',
'info_dict': {
'id': '242352',
'ext': 'mp4',
'title': 'Nerdy 18yo Big Ass Tattoos and Glasses',
'uploader': 'cybersluts',
'uploader_id': 'cybersluts',
},
}]
def _real_extract(self, url):
mobj = self._match_valid_url(url)
video_id = mobj.group('id')
webpage = self._download_webpage(url, video_id)
title = remove_end(self._html_extract_title(webpage), ' - 視頻 - ThisAV.com-世界第一中文成人娛樂網站')
video_url = self._html_search_regex(
r"addVariable\('file','([^']+)'\);", webpage, 'video url', default=None)
if video_url:
info_dict = {
'formats': [{
'url': video_url,
}],
}
else:
entries = self._parse_html5_media_entries(url, webpage, video_id)
if entries:
info_dict = entries[0]
else:
info_dict = self._extract_jwplayer_data(
webpage, video_id, require_title=False)
uploader = self._html_search_regex(
r': <a href="http://www\.thisav\.com/user/[0-9]+/(?:[^"]+)">([^<]+)</a>',
webpage, 'uploader name', fatal=False)
uploader_id = self._html_search_regex(
r': <a href="http://www\.thisav\.com/user/[0-9]+/([^"]+)">(?:[^<]+)</a>',
webpage, 'uploader id', fatal=False)
info_dict.update({
'id': video_id,
'uploader': uploader,
'uploader_id': uploader_id,
'title': title,
})
return info_dict

View File

@@ -1,23 +1,11 @@
-import json
-
 from .common import InfoExtractor
-from .zype import ZypeIE
 from ..networking import HEADRequest
-from ..networking.exceptions import HTTPError
-from ..utils import (
-    ExtractorError,
-    filter_dict,
-    parse_qs,
-    try_call,
-    urlencode_postdata,
-)


 class ThisOldHouseIE(InfoExtractor):
-    _NETRC_MACHINE = 'thisoldhouse'
-    _VALID_URL = r'https?://(?:www\.)?thisoldhouse\.com/(?:watch|how-to|tv-episode|(?:[^/?#]+/)?\d+)/(?P<id>[^/?#]+)'
+    _VALID_URL = r'https?://(?:www\.)?thisoldhouse\.com/(?:watch|how-to|tv-episode|(?:[^/]+/)?\d+)/(?P<id>[^/?#]+)'
     _TESTS = [{
-        'url': 'https://www.thisoldhouse.com/furniture/21017078/how-to-build-a-storage-bench',
+        'url': 'https://www.thisoldhouse.com/how-to/how-to-build-storage-bench',
         'info_dict': {
             'id': '5dcdddf673c3f956ef5db202',
             'ext': 'mp4',
@@ -35,16 +23,13 @@ class ThisOldHouseIE(InfoExtractor):
             'skip_download': True,
         },
     }, {
-        # Page no longer has video
         'url': 'https://www.thisoldhouse.com/watch/arlington-arts-crafts-arts-and-crafts-class-begins',
         'only_matching': True,
     }, {
-        # 404 Not Found
         'url': 'https://www.thisoldhouse.com/tv-episode/ask-toh-shelf-rough-electric',
         'only_matching': True,
     }, {
-        # 404 Not Found
-        'url': 'https://www.thisoldhouse.com/how-to/how-to-build-storage-bench',
+        'url': 'https://www.thisoldhouse.com/furniture/21017078/how-to-build-a-storage-bench',
         'only_matching': True,
     }, {
         'url': 'https://www.thisoldhouse.com/21113884/s41-e13-paradise-lost',
@@ -54,51 +39,17 @@ class ThisOldHouseIE(InfoExtractor):
         'url': 'https://www.thisoldhouse.com/21083431/seaside-transformation-the-westerly-project',
         'only_matching': True,
     }]

-    _LOGIN_URL = 'https://login.thisoldhouse.com/usernamepassword/login'
-
-    def _perform_login(self, username, password):
-        self._request_webpage(
-            HEADRequest('https://www.thisoldhouse.com/insider'), None, 'Requesting session cookies')
-        urlh = self._request_webpage(
-            'https://www.thisoldhouse.com/wp-login.php', None, 'Requesting login info',
-            errnote='Unable to login', query={'redirect_to': 'https://www.thisoldhouse.com/insider'})
-
-        try:
-            auth_form = self._download_webpage(
-                self._LOGIN_URL, None, 'Submitting credentials', headers={
-                    'Content-Type': 'application/json',
-                    'Referer': urlh.url,
-                }, data=json.dumps(filter_dict({
-                    **{('client_id' if k == 'client' else k): v[0] for k, v in parse_qs(urlh.url).items()},
-                    'tenant': 'thisoldhouse',
-                    'username': username,
-                    'password': password,
-                    'popup_options': {},
-                    'sso': True,
-                    '_csrf': try_call(lambda: self._get_cookies(self._LOGIN_URL)['_csrf'].value),
-                    '_intstate': 'deprecated',
-                }), separators=(',', ':')).encode())
-        except ExtractorError as e:
-            if isinstance(e.cause, HTTPError) and e.cause.status == 401:
-                raise ExtractorError('Invalid username or password', expected=True)
-            raise
-
-        self._request_webpage(
-            'https://login.thisoldhouse.com/login/callback', None, 'Completing login',
-            data=urlencode_postdata(self._hidden_inputs(auth_form)))
+    _ZYPE_TMPL = 'https://player.zype.com/embed/%s.html?api_key=hsOk_yMSPYNrT22e9pu8hihLXjaZf0JW5jsOWv4ZqyHJFvkJn6rtToHl09tbbsbe'

     def _real_extract(self, url):
         display_id = self._match_id(url)
         webpage = self._download_webpage(url, display_id)
         if 'To Unlock This content' in webpage:
-            self.raise_login_required(
-                'This video is only available for subscribers. '
-                'Note that --cookies-from-browser may not work due to this site using session cookies')
-
-        video_url, video_id = self._search_regex(
+            self.raise_login_required(method='cookies')
+        video_url = self._search_regex(
             r'<iframe[^>]+src=[\'"]((?:https?:)?//(?:www\.)?thisoldhouse\.(?:chorus\.build|com)/videos/zype/([0-9a-f]{24})[^\'"]*)[\'"]',
-            webpage, 'video url', group=(1, 2))
-        video_url = self._request_webpage(HEADRequest(video_url), video_id, 'Resolving Zype URL').url
-        return self.url_result(video_url, ZypeIE, video_id)
+            webpage, 'video url')
+        if 'subscription_required=true' in video_url or 'c-entry-group-labels__image' in webpage:
+            return self.url_result(self._request_webpage(HEADRequest(video_url), display_id).url, 'Zype', display_id)
+        video_id = self._search_regex(r'(?:https?:)?//(?:www\.)?thisoldhouse\.(?:chorus\.build|com)/videos/zype/([0-9a-f]{24})', video_url, 'video id')
+        return self.url_result(self._ZYPE_TMPL % video_id, 'Zype', video_id)

View File

@@ -142,7 +142,7 @@ class TwitCastingIE(InfoExtractor):
             'https://twitcasting.tv/streamserver.php?target=%s&mode=client' % uploader_id, video_id,
             'Downloading live info', fatal=False)

-        is_live = any(f'data-{x}' in webpage for x in ['is-onlive="true"', 'live-type="live"', 'status="online"'])
+        is_live = 'data-status="online"' in webpage
         if not traverse_obj(stream_server_data, 'llfmp4') and is_live:
             self.raise_login_required(method='cookies')

View File

@@ -1563,7 +1563,7 @@ class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE):
     IE_NAME = 'twitter:broadcast'
     _VALID_URL = TwitterBaseIE._BASE_REGEX + r'i/broadcasts/(?P<id>[0-9a-zA-Z]{13})'

-    _TESTS = [{
+    _TEST = {
         # untitled Periscope video
         'url': 'https://twitter.com/i/broadcasts/1yNGaQLWpejGj',
         'info_dict': {
@@ -1571,42 +1571,11 @@ class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE):
             'ext': 'mp4',
             'title': 'Andrea May Sahouri - Periscope Broadcast',
             'uploader': 'Andrea May Sahouri',
-            'uploader_id': 'andreamsahouri',
-            'uploader_url': 'https://twitter.com/andreamsahouri',
-            'timestamp': 1590973638,
-            'upload_date': '20200601',
+            'uploader_id': '1PXEdBZWpGwKe',
             'thumbnail': r're:^https?://[^?#]+\.jpg\?token=',
             'view_count': int,
         },
-    }, {
-        'url': 'https://twitter.com/i/broadcasts/1ZkKzeyrPbaxv',
-        'info_dict': {
-            'id': '1ZkKzeyrPbaxv',
-            'ext': 'mp4',
-            'title': 'Starship | SN10 | High-Altitude Flight Test',
-            'uploader': 'SpaceX',
-            'uploader_id': 'SpaceX',
-            'uploader_url': 'https://twitter.com/SpaceX',
-            'timestamp': 1614812942,
-            'upload_date': '20210303',
-            'thumbnail': r're:^https?://[^?#]+\.jpg\?token=',
-            'view_count': int,
-        },
-    }, {
-        'url': 'https://twitter.com/i/broadcasts/1OyKAVQrgzwGb',
-        'info_dict': {
-            'id': '1OyKAVQrgzwGb',
-            'ext': 'mp4',
-            'title': 'Starship Flight Test',
-            'uploader': 'SpaceX',
-            'uploader_id': 'SpaceX',
-            'uploader_url': 'https://twitter.com/SpaceX',
-            'timestamp': 1681993964,
-            'upload_date': '20230420',
-            'thumbnail': r're:^https?://[^?#]+\.jpg\?token=',
-            'view_count': int,
-        },
-    }]
+    }

     def _real_extract(self, url):
         broadcast_id = self._match_id(url)
@@ -1616,12 +1585,6 @@ class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE):
         if not broadcast:
             raise ExtractorError('Broadcast no longer exists', expected=True)
         info = self._parse_broadcast_data(broadcast, broadcast_id)
-        info['title'] = broadcast.get('status') or info.get('title')
-        info['uploader_id'] = broadcast.get('twitter_username') or info.get('uploader_id')
-        info['uploader_url'] = format_field(broadcast, 'twitter_username', 'https://twitter.com/%s', default=None)
-        if info['live_status'] == 'is_upcoming':
-            return info
-
         media_key = broadcast['media_key']
         source = self._call_api(
             f'live_video_stream/status/{media_key}', media_key)['source']
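The removed lines fill `uploader_url` with `format_field`, which templates a value only when it is present. Roughly equivalent to the following (signature simplified; the real helper takes more parameters):

    def format_field(obj, field, template='%s', default=None):
        value = obj.get(field)
        return template % value if value else default

    assert format_field({'twitter_username': 'SpaceX'},
                        'twitter_username', 'https://twitter.com/%s') == 'https://twitter.com/SpaceX'
    assert format_field({}, 'twitter_username', 'https://twitter.com/%s') is None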

View File

@@ -164,15 +164,11 @@ class KnownPiracyIE(UnsupportedInfoExtractor):
         r'viewsb\.com',
         r'filemoon\.sx',
         r'hentai\.animestigma\.com',
-        r'thisav\.com',
     )

     _TESTS = [{
         'url': 'http://dood.to/e/5s1wmbdacezb',
         'only_matching': True,
-    }, {
-        'url': 'https://thisav.com/en/terms',
-        'only_matching': True,
     }]

     def _real_extract(self, url):

View File

@@ -1,4 +1,3 @@
-import json
 import random
 import itertools
 import urllib.parse
@@ -19,33 +18,24 @@ from ..utils import (


 class WeiboBaseIE(InfoExtractor):
-    def _update_visitor_cookies(self, visitor_url, video_id):
-        headers = {'Referer': visitor_url}
-        chrome_ver = self._search_regex(
-            r'Chrome/(\d+)', self.get_param('http_headers')['User-Agent'], 'user agent version', default='90')
+    def _update_visitor_cookies(self, video_id):
         visitor_data = self._download_json(
             'https://passport.weibo.com/visitor/genvisitor', video_id,
             note='Generating first-visit guest request',
-            headers=headers, transform_source=strip_jsonp,
+            transform_source=strip_jsonp,
             data=urlencode_postdata({
                 'cb': 'gen_callback',
-                'fp': json.dumps({
-                    'os': '1',
-                    'browser': f'Chrome{chrome_ver},0,0,0',
-                    'fonts': 'undefined',
-                    'screenInfo': '1920*1080*24',
-                    'plugins': ''
-                }, separators=(',', ':'))}))['data']
+                'fp': '{"os":"2","browser":"Gecko57,0,0,0","fonts":"undefined","screenInfo":"1440*900*24","plugins":""}',
+            }))

         self._download_webpage(
             'https://passport.weibo.com/visitor/visitor', video_id,
             note='Running first-visit callback to get guest cookies',
-            headers=headers, query={
+            query={
                 'a': 'incarnate',
-                't': visitor_data['tid'],
-                'w': 3 if visitor_data.get('new_tid') else 2,
-                'c': f'{visitor_data.get("confidence", 100):03d}',
-                'gc': '',
+                't': visitor_data['data']['tid'],
+                'w': 2,
+                'c': '%03d' % visitor_data['data']['confidence'],
                 'cb': 'cross_domain',
                 'from': 'weibo',
                 '_rand': random.random(),
@@ -54,7 +44,7 @@ class WeiboBaseIE(InfoExtractor):
     def _weibo_download_json(self, url, video_id, *args, fatal=True, note='Downloading JSON metadata', **kwargs):
         webpage, urlh = self._download_webpage_handle(url, video_id, *args, fatal=fatal, note=note, **kwargs)
         if urllib.parse.urlparse(urlh.url).netloc == 'passport.weibo.com':
-            self._update_visitor_cookies(urlh.url, video_id)
+            self._update_visitor_cookies(video_id)
             webpage = self._download_webpage(url, video_id, *args, fatal=fatal, note=note, **kwargs)
         return self._parse_json(webpage, video_id, fatal=fatal)
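The guest-cookie bootstrap is a two-step dance: POST to `genvisitor` for a visitor token, then hit the `visitor` callback so the cookies get set. A sketch of step one with urllib, using only the endpoint and fields visible in the diff (the exact JSONP wrapper and response shape are assumptions and may vary in practice):

    import json
    import urllib.parse
    import urllib.request

    def gen_visitor(fp_dict):
        data = urllib.parse.urlencode({'cb': 'gen_callback', 'fp': json.dumps(fp_dict)}).encode()
        with urllib.request.urlopen('https://passport.weibo.com/visitor/genvisitor', data) as resp:
            body = resp.read().decode()
        # strip the gen_callback(...) JSONP wrapper, as strip_jsonp does
        return json.loads(body[body.index('(') + 1:body.rindex(')')])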

View File

@@ -45,10 +45,10 @@ class WeverseBaseIE(InfoExtractor):
             'x-acc-trace-id': str(uuid.uuid4()),
             'x-clog-user-device-id': str(uuid.uuid4()),
         }
-        valid_username = traverse_obj(self._download_json(
-            f'{self._ACCOUNT_API_BASE}/signup/email/status', None, note='Checking username',
-            query={'email': username}, headers=headers, expected_status=(400, 404)), 'hasPassword')
-        if not valid_username:
+        check_username = self._download_json(
+            f'{self._ACCOUNT_API_BASE}/signup/email/status', None,
+            note='Checking username', query={'email': username}, headers=headers)
+        if not check_username.get('hasPassword'):
             raise ExtractorError('Invalid username provided', expected=True)

         headers['content-type'] = 'application/json'
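`expected_status=(400, 404)` in the removed call lets those error responses through so their JSON bodies can still be inspected; without it, the download raises. The underlying idea in plain Python:

    import json
    import urllib.error
    import urllib.request

    def fetch_json(url, expected_status=()):
        try:
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as e:
            if e.code in expected_status:
                return json.load(e)  # HTTPError doubles as a readable response
            raise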

View File

@@ -4560,14 +4560,6 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                     self._parse_time_text(self._get_text(vpir, 'dateText'))) or upload_date
             info['upload_date'] = upload_date

-        if upload_date and live_status not in ('is_live', 'post_live', 'is_upcoming'):
-            # Newly uploaded videos' HLS formats are potentially problematic and need to be checked
-            upload_datetime = datetime_from_str(upload_date).replace(tzinfo=datetime.timezone.utc)
-            if upload_datetime >= datetime_from_str('today-1day'):
-                for fmt in info['formats']:
-                    if fmt.get('protocol') == 'm3u8_native':
-                        fmt['__needs_testing'] = True
-
         for s_k, d_k in [('artist', 'creator'), ('track', 'alt_title')]:
             v = info.get(s_k)
             if v:
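The removed block flags HLS formats of videos uploaded within the last day (`datetime_from_str` is yt-dlp's own helper). The same cutoff computed with only the standard library:

    import datetime as dt

    def uploaded_within_last_day(upload_date):  # e.g. '20231107'
        uploaded = dt.datetime.strptime(upload_date, '%Y%m%d').replace(tzinfo=dt.timezone.utc)
        return uploaded >= dt.datetime.now(dt.timezone.utc) - dt.timedelta(days=1)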

View File

@@ -2,12 +2,10 @@ from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     int_or_none,
+    str_or_none,
     js_to_json,
     parse_filesize,
-    parse_resolution,
-    str_or_none,
     traverse_obj,
-    url_basename,
     urlencode_postdata,
     urljoin,
 )
@@ -43,18 +41,6 @@ class ZoomIE(InfoExtractor):
             'ext': 'mp4',
             'title': 'Timea Andrea Lelik\'s Personal Meeting Room',
         },
-        'skip': 'This recording has expired',
-    }, {
-        # view_with_share URL
-        'url': 'https://cityofdetroit.zoom.us/rec/share/VjE-5kW3xmgbEYqR5KzRgZ1OFZvtMtiXk5HyRJo5kK4m5PYE6RF4rF_oiiO_9qaM.UTAg1MI7JSnF3ZjX',
-        'md5': 'bdc7867a5934c151957fb81321b3c024',
-        'info_dict': {
-            'id': 'VjE-5kW3xmgbEYqR5KzRgZ1OFZvtMtiXk5HyRJo5kK4m5PYE6RF4rF_oiiO_9qaM.UTAg1MI7JSnF3ZjX',
-            'ext': 'mp4',
-            'title': 'February 2022 Detroit Revenue Estimating Conference',
-            'duration': 7299,
-            'formats': 'mincount:3',
-        },
     }]

     def _get_page_data(self, webpage, video_id):
@@ -86,7 +72,6 @@ class ZoomIE(InfoExtractor):
     def _real_extract(self, url):
         base_url, url_type, video_id = self._match_valid_url(url).group('base_url', 'type', 'id')
-        query = {}

         if url_type == 'share':
             webpage = self._get_real_webpage(url, base_url, video_id, 'share')
@@ -95,7 +80,6 @@ class ZoomIE(InfoExtractor):
                 f'{base_url}nws/recording/1.0/play/share-info/{meeting_id}',
                 video_id, note='Downloading share info JSON')['result']['redirectUrl']
             url = urljoin(base_url, redirect_path)
-            query['continueMode'] = 'true'

         webpage = self._get_real_webpage(url, base_url, video_id, 'play')
         file_id = self._get_page_data(webpage, video_id)['fileId']
@@ -104,7 +88,7 @@ class ZoomIE(InfoExtractor):
             raise ExtractorError('Unable to extract file ID')

         data = self._download_json(
-            f'{base_url}nws/recording/1.0/play/info/{file_id}', video_id, query=query,
+            f'{base_url}nws/recording/1.0/play/info/{file_id}', video_id,
             note='Downloading play info JSON')['result']

         subtitles = {}
@@ -120,10 +104,10 @@ class ZoomIE(InfoExtractor):
         if data.get('viewMp4Url'):
             formats.append({
                 'format_note': 'Camera stream',
-                'url': data['viewMp4Url'],
+                'url': str_or_none(data.get('viewMp4Url')),
                 'width': int_or_none(traverse_obj(data, ('viewResolvtions', 0))),
                 'height': int_or_none(traverse_obj(data, ('viewResolvtions', 1))),
-                'format_id': 'view',
+                'format_id': str_or_none(traverse_obj(data, ('recording', 'id'))),
                 'ext': 'mp4',
                 'filesize_approx': parse_filesize(str_or_none(traverse_obj(data, ('recording', 'fileSizeInMB')))),
                 'preference': 0
@@ -132,26 +116,14 @@ class ZoomIE(InfoExtractor):
         if data.get('shareMp4Url'):
             formats.append({
                 'format_note': 'Screen share stream',
-                'url': data['shareMp4Url'],
+                'url': str_or_none(data.get('shareMp4Url')),
                 'width': int_or_none(traverse_obj(data, ('shareResolvtions', 0))),
                 'height': int_or_none(traverse_obj(data, ('shareResolvtions', 1))),
-                'format_id': 'share',
+                'format_id': str_or_none(traverse_obj(data, ('shareVideo', 'id'))),
                 'ext': 'mp4',
                 'preference': -1
             })

-        view_with_share_url = data.get('viewMp4WithshareUrl')
-        if view_with_share_url:
-            formats.append({
-                **parse_resolution(self._search_regex(
-                    r'_(\d+x\d+)\.mp4', url_basename(view_with_share_url), 'resolution', default=None)),
-                'format_note': 'Screen share with camera',
-                'url': view_with_share_url,
-                'format_id': 'view_with_share',
-                'ext': 'mp4',
-                'preference': 1
-            })
-
         return {
             'id': video_id,
             'title': str_or_none(traverse_obj(data, ('meet', 'topic'))),
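The removed `view_with_share` branch recovers width and height from a filename suffix like `..._1280x720.mp4` via `parse_resolution`; the extraction itself boils down to:

    import re

    def resolution_from_url(url):
        m = re.search(r'_(\d+)x(\d+)\.mp4', url.rsplit('/', 1)[-1])
        return {'width': int(m.group(1)), 'height': int(m.group(2))} if m else {}

    assert resolution_from_url('https://example.com/rec/foo_1280x720.mp4') == {'width': 1280, 'height': 720}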

View File

@@ -471,12 +471,12 @@ def create_parser():
             'no-attach-info-json', 'embed-thumbnail-atomicparsley', 'no-external-downloader-progress',
             'embed-metadata', 'seperate-video-versions', 'no-clean-infojson', 'no-keep-subs', 'no-certifi',
             'no-youtube-channel-redirect', 'no-youtube-unavailable-videos', 'no-youtube-prefer-utc-upload-date',
-            'prefer-legacy-http-handler', 'manifest-filesize-approx'
+            'prefer-legacy-http-handler'
         }, 'aliases': {
-            'youtube-dl': ['all', '-multistreams', '-playlist-match-filter', '-manifest-filesize-approx'],
-            'youtube-dlc': ['all', '-no-youtube-channel-redirect', '-no-live-chat', '-playlist-match-filter', '-manifest-filesize-approx'],
+            'youtube-dl': ['all', '-multistreams', '-playlist-match-filter'],
+            'youtube-dlc': ['all', '-no-youtube-channel-redirect', '-no-live-chat', '-playlist-match-filter'],
             '2021': ['2022', 'no-certifi', 'filename-sanitization', 'no-youtube-prefer-utc-upload-date'],
-            '2022': ['no-external-downloader-progress', 'playlist-match-filter', 'prefer-legacy-http-handler', 'manifest-filesize-approx'],
+            '2022': ['no-external-downloader-progress', 'playlist-match-filter', 'prefer-legacy-http-handler'],
         }
     }, help=(
         'Options that can help keep compatibility with youtube-dl or youtube-dlc '
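How the compat-option aliases read: each alias expands to a list where a plain name enables an option, `all` enables everything, and a leading `-` strips one out again, in order. A toy expansion (nested aliases such as '2021' pulling in '2022' are ignored in this sketch):

    def expand(names, all_options):
        selected = set()
        for opt in names:
            if opt == 'all':
                selected |= set(all_options)
            elif opt.startswith('-'):
                selected.discard(opt[1:])
            else:
                selected.add(opt)
        return selected

    assert expand(['all', '-multistreams', '-playlist-match-filter'],
                  {'multistreams', 'playlist-match-filter', 'no-certifi'}) == {'no-certifi'}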

View File

@@ -1,5 +1,3 @@
-from __future__ import annotations
-
 import atexit
 import contextlib
 import hashlib
@@ -9,7 +7,6 @@ import platform
 import re
 import subprocess
 import sys
-from dataclasses import dataclass
 from zipimport import zipimporter

 from .compat import functools  # isort: split
@@ -17,35 +14,24 @@ from .compat import compat_realpath, compat_shlex_quote
 from .networking import Request
 from .networking.exceptions import HTTPError, network_exceptions
 from .utils import (
-    NO_DEFAULT,
     Popen,
+    cached_method,
     deprecation_warning,
-    format_field,
     remove_end,
+    remove_start,
     shell_quote,
     system_identifier,
     version_tuple,
 )
-from .version import (
-    CHANNEL,
-    ORIGIN,
-    RELEASE_GIT_HEAD,
-    UPDATE_HINT,
-    VARIANT,
-    __version__,
-)
+from .version import CHANNEL, UPDATE_HINT, VARIANT, __version__

 UPDATE_SOURCES = {
     'stable': 'yt-dlp/yt-dlp',
     'nightly': 'yt-dlp/yt-dlp-nightly-builds',
-    'master': 'yt-dlp/yt-dlp-master-builds',
 }
 REPOSITORY = UPDATE_SOURCES['stable']
-_INVERSE_UPDATE_SOURCES = {value: key for key, value in UPDATE_SOURCES.items()}

 _VERSION_RE = re.compile(r'(\d+\.)*\d+')
-_HASH_PATTERN = r'[\da-f]{40}'
-_COMMIT_RE = re.compile(rf'Generated from: https://(?:[^/?#]+/){{3}}commit/(?P<hash>{_HASH_PATTERN})')

 API_BASE_URL = 'https://api.github.com/repos'
@@ -126,10 +112,6 @@ def is_non_updateable():
         detect_variant(), _NON_UPDATEABLE_REASONS['unknown' if VARIANT else 'other'])


-def _get_binary_name():
-    return format_field(_FILE_SUFFIXES, detect_variant(), template='yt-dlp%s', ignore=None, default=None)
-
-
 def _get_system_deprecation():
     MIN_SUPPORTED, MIN_RECOMMENDED = (3, 8), (3, 8)
@@ -156,117 +138,73 @@ def _sha256_file(path):
     return h.hexdigest()


-def _make_label(origin, tag, version=None):
-    if '/' in origin:
-        channel = _INVERSE_UPDATE_SOURCES.get(origin, origin)
-    else:
-        channel = origin
-    label = f'{channel}@{tag}'
-    if version and version != tag:
-        label += f' build {version}'
-    if channel != origin:
-        label += f' from {origin}'
-    return label
-
-
-@dataclass
-class UpdateInfo:
-    """
-    Update target information
-
-    Can be created by `query_update()` or manually.
-
-    Attributes:
-        tag                The release tag that will be updated to. If from query_update,
-                           the value is after API resolution and update spec processing.
-                           The only property that is required.
-        version            The actual numeric version (if available) of the binary to be updated to,
-                           after API resolution and update spec processing. (default: None)
-        requested_version  Numeric version of the binary being requested (if available),
-                           after API resolution only. (default: None)
-        commit             Commit hash (if available) of the binary to be updated to,
-                           after API resolution and update spec processing. (default: None)
-                           This value will only match the RELEASE_GIT_HEAD of prerelease builds.
-        binary_name        Filename of the binary to be updated to. (default: current binary name)
-        checksum           Expected checksum (if available) of the binary to be
-                           updated to. (default: None)
-    """
-    tag: str
-    version: str | None = None
-    requested_version: str | None = None
-    commit: str | None = None
-
-    binary_name: str | None = _get_binary_name()
-    checksum: str | None = None
-
-    _has_update = True
-
-
 class Updater:
-    # XXX: use class variables to simplify testing
-    _channel = CHANNEL
-    _origin = ORIGIN
+    _exact = True

-    def __init__(self, ydl, target: str | None = None):
+    def __init__(self, ydl, target=None):
         self.ydl = ydl

-        # For backwards compat, target needs to be treated as if it could be None
-        self.requested_channel, sep, self.requested_tag = (target or self._channel).rpartition('@')
-        # Check if requested_tag is actually the requested repo/channel
-        if not sep and ('/' in self.requested_tag or self.requested_tag in UPDATE_SOURCES):
-            self.requested_channel = self.requested_tag
-            self.requested_tag: str = None  # type: ignore (we set it later)
-        elif not self.requested_channel:
-            # User did not specify a channel, so we are requesting the default channel
-            self.requested_channel = self._channel.partition('@')[0]
-
-        # --update should not be treated as an exact tag request even if CHANNEL has a @tag
-        self._exact = bool(target) and target != self._channel
-        if not self.requested_tag:
-            # User did not specify a tag, so we request 'latest' and track that no exact tag was passed
-            self.requested_tag = 'latest'
-            self._exact = False
+        self.target_channel, sep, self.target_tag = (target or CHANNEL).rpartition('@')
+        # stable => stable@latest
+        if not sep and ('/' in self.target_tag or self.target_tag in UPDATE_SOURCES):
+            self.target_channel = self.target_tag
+            self.target_tag = None
+        elif not self.target_channel:
+            self.target_channel = CHANNEL.partition('@')[0]
+
+        if not self.target_tag:
+            self.target_tag = 'latest'
+            self._exact = False
+        elif self.target_tag != 'latest':
+            self.target_tag = f'tags/{self.target_tag}'

-        if '/' in self.requested_channel:
-            # requested_channel is actually a repository
-            self.requested_repo = self.requested_channel
-            if not self.requested_repo.startswith('yt-dlp/') and self.requested_repo != self._origin:
+        if '/' in self.target_channel:
+            self._target_repo = self.target_channel
+            if self.target_channel not in (CHANNEL, *UPDATE_SOURCES.values()):
                 self.ydl.report_warning(
                     f'You are switching to an {self.ydl._format_err("unofficial", "red")} executable '
-                    f'from {self.ydl._format_err(self.requested_repo, self.ydl.Styles.EMPHASIS)}. '
+                    f'from {self.ydl._format_err(self._target_repo, self.ydl.Styles.EMPHASIS)}. '
                     f'Run {self.ydl._format_err("at your own risk", "light red")}')
                 self._block_restart('Automatically restarting into custom builds is disabled for security reasons')
         else:
-            # Check if requested_channel resolves to a known repository or else raise
-            self.requested_repo = UPDATE_SOURCES.get(self.requested_channel)
-            if not self.requested_repo:
+            self._target_repo = UPDATE_SOURCES.get(self.target_channel)
+            if not self._target_repo:
                 self._report_error(
-                    f'Invalid update channel {self.requested_channel!r} requested. '
+                    f'Invalid update channel {self.target_channel!r} requested. '
                     f'Valid channels are {", ".join(UPDATE_SOURCES)}', True)

-        self._identifier = f'{detect_variant()} {system_identifier()}'
-
-    @property
-    def current_version(self):
-        """Current version"""
-        return __version__
-
-    @property
-    def current_commit(self):
-        """Current commit hash"""
-        return RELEASE_GIT_HEAD
-
-    def _download_asset(self, name, tag=None):
-        if not tag:
-            tag = self.requested_tag
-
-        path = 'latest/download' if tag == 'latest' else f'download/{tag}'
-        url = f'https://github.com/{self.requested_repo}/releases/{path}/{name}'
-        self.ydl.write_debug(f'Downloading {name} from {url}')
-        return self.ydl.urlopen(url).read()
-
-    def _call_api(self, tag):
-        tag = f'tags/{tag}' if tag != 'latest' else tag
-        url = f'{API_BASE_URL}/{self.requested_repo}/releases/{tag}'
+    def _version_compare(self, a, b, channel=CHANNEL):
+        if self._exact and channel != self.target_channel:
+            return False
+
+        if _VERSION_RE.fullmatch(f'{a}.{b}'):
+            a, b = version_tuple(a), version_tuple(b)
+            return a == b if self._exact else a >= b
+        return a == b
+
+    @functools.cached_property
+    def _tag(self):
+        if self._version_compare(self.current_version, self.latest_version):
+            return self.target_tag
+
+        identifier = f'{detect_variant()} {self.target_channel} {system_identifier()}'
+        for line in self._download('_update_spec', 'latest').decode().splitlines():
+            if not line.startswith('lock '):
+                continue
+            _, tag, pattern = line.split(' ', 2)
+            if re.match(pattern, identifier):
+                if not self._exact:
+                    return f'tags/{tag}'
+                elif self.target_tag == 'latest' or not self._version_compare(
+                        tag, self.target_tag[5:], channel=self.target_channel):
+                    self._report_error(
+                        f'yt-dlp cannot be updated above {tag} since you are on an older Python version', True)
+                return f'tags/{self.current_version}'
+        return self.target_tag
+
+    @cached_method
+    def _get_version_info(self, tag):
+        url = f'{API_BASE_URL}/{self._target_repo}/releases/{tag}'
         self.ydl.write_debug(f'Fetching release info: {url}')
         return json.loads(self.ydl.urlopen(Request(url, headers={
             'Accept': 'application/vnd.github+json',
@ -274,175 +212,105 @@ class Updater:
'X-GitHub-Api-Version': '2022-11-28', 'X-GitHub-Api-Version': '2022-11-28',
})).read().decode()) })).read().decode())
def _get_version_info(self, tag: str) -> tuple[str | None, str | None]: @property
if _VERSION_RE.fullmatch(tag): def current_version(self):
return tag, None """Current version"""
return __version__
api_info = self._call_api(tag) @staticmethod
def _label(channel, tag):
"""Label for a given channel and tag"""
return f'{channel}@{remove_start(tag, "tags/")}'
if tag == 'latest': def _get_actual_tag(self, tag):
requested_version = api_info['tag_name'] if tag.startswith('tags/'):
else: return tag[5:]
match = re.search(rf'\s+(?P<version>{_VERSION_RE.pattern})$', api_info.get('name', '')) return self._get_version_info(tag)['tag_name']
requested_version = match.group('version') if match else None
if re.fullmatch(_HASH_PATTERN, api_info.get('target_commitish', '')): @property
target_commitish = api_info['target_commitish'] def new_version(self):
else: """Version of the latest release we can update to"""
match = _COMMIT_RE.match(api_info.get('body', '')) return self._get_actual_tag(self._tag)
target_commitish = match.group('hash') if match else None
if not (requested_version or target_commitish): @property
self._report_error('One of either version or commit hash must be available on the release', expected=True) def latest_version(self):
"""Version of the target release"""
return self._get_actual_tag(self.target_tag)
return requested_version, target_commitish @property
def has_update(self):
"""Whether there is an update available"""
return not self._version_compare(self.current_version, self.new_version)
def _download_update_spec(self, source_tags): @functools.cached_property
for tag in source_tags: def filename(self):
try: """Filename of the executable"""
return self._download_asset('_update_spec', tag=tag).decode() return compat_realpath(_get_variant_and_executable_path()[1])
except network_exceptions as error:
if isinstance(error, HTTPError) and error.status == 404:
continue
self._report_network_error(f'fetch update spec: {error}')
def _download(self, name, tag):
slug = 'latest/download' if tag == 'latest' else f'download/{tag[5:]}'
url = f'https://github.com/{self._target_repo}/releases/{slug}/{name}'
self.ydl.write_debug(f'Downloading {name} from {url}')
return self.ydl.urlopen(url).read()
@functools.cached_property
def release_name(self):
"""The release filename"""
return f'yt-dlp{_FILE_SUFFIXES[detect_variant()]}'
@functools.cached_property
def release_hash(self):
"""Hash of the latest release"""
hash_data = dict(ln.split()[::-1] for ln in self._download('SHA2-256SUMS', self._tag).decode().splitlines())
return hash_data[self.release_name]
def _report_error(self, msg, expected=False):
self.ydl.report_error(msg, tb=False if expected else None)
self.ydl._download_retcode = 100
def _report_permission_error(self, file):
self._report_error(f'Unable to write to {file}; Try running as administrator', True)
def _report_network_error(self, action, delim=';'):
self._report_error( self._report_error(
f'The requested tag {self.requested_tag} does not exist for {self.requested_repo}', True) f'Unable to {action}{delim} visit '
return None f'https://github.com/{self._target_repo}/releases/{self.target_tag.replace("tags/", "tag/")}', True)
def _process_update_spec(self, lockfile: str, resolved_tag: str): def check_update(self):
lines = lockfile.splitlines() """Report whether there is an update available"""
is_version2 = any(line.startswith('lockV2 ') for line in lines) if not self._target_repo:
for line in lines:
if is_version2:
if not line.startswith(f'lockV2 {self.requested_repo} '):
continue
_, _, tag, pattern = line.split(' ', 3)
else:
if not line.startswith('lock '):
continue
_, tag, pattern = line.split(' ', 2)
if re.match(pattern, self._identifier):
if _VERSION_RE.fullmatch(tag):
if not self._exact:
return tag
elif self._version_compare(tag, resolved_tag):
return resolved_tag
elif tag != resolved_tag:
continue
self._report_error(
f'yt-dlp cannot be updated to {resolved_tag} since you are on an older Python version', True)
return None
return resolved_tag
def _version_compare(self, a: str, b: str):
"""
Compare two version strings
This function SHOULD NOT be called if self._exact == True
"""
if _VERSION_RE.fullmatch(f'{a}.{b}'):
return version_tuple(a) >= version_tuple(b)
return a == b
def query_update(self, *, _output=False) -> UpdateInfo | None:
"""Fetches and returns info about the available update"""
if not self.requested_repo:
self._report_error('No target repository could be determined from input')
return None
try:
requested_version, target_commitish = self._get_version_info(self.requested_tag)
except network_exceptions as e:
self._report_network_error(f'obtain version info ({e})', delim='; Please try again later or')
return None
if self._exact and self._origin != self.requested_repo:
has_update = True
elif requested_version:
if self._exact:
has_update = self.current_version != requested_version
else:
has_update = not self._version_compare(self.current_version, requested_version)
elif target_commitish:
has_update = target_commitish != self.current_commit
else:
has_update = False
resolved_tag = requested_version if self.requested_tag == 'latest' else self.requested_tag
current_label = _make_label(self._origin, self._channel.partition("@")[2] or self.current_version, self.current_version)
requested_label = _make_label(self.requested_repo, resolved_tag, requested_version)
latest_or_requested = f'{"Latest" if self.requested_tag == "latest" else "Requested"} version: {requested_label}'
if not has_update:
if _output:
self.ydl.to_screen(f'{latest_or_requested}\nyt-dlp is up to date ({current_label})')
return None
update_spec = self._download_update_spec(('latest', None) if requested_version else (None,))
if not update_spec:
return None
# `result_` prefixed vars == post-_process_update_spec() values
result_tag = self._process_update_spec(update_spec, resolved_tag)
if not result_tag or result_tag == self.current_version:
return None
elif result_tag == resolved_tag:
result_version = requested_version
elif _VERSION_RE.fullmatch(result_tag):
result_version = result_tag
else: # actual version being updated to is unknown
result_version = None
checksum = None
# Non-updateable variants can get update_info but need to skip checksum
if not is_non_updateable():
try:
hashes = self._download_asset('SHA2-256SUMS', result_tag)
except network_exceptions as error:
if not isinstance(error, HTTPError) or error.status != 404:
self._report_network_error(f'fetch checksums: {error}')
return None
self.ydl.report_warning('No hash information found for the release, skipping verification')
else:
for ln in hashes.decode().splitlines():
if ln.endswith(_get_binary_name()):
checksum = ln.split()[0]
break
if not checksum:
self.ydl.report_warning('The hash could not be found in the checksum file, skipping verification')
if _output:
update_label = _make_label(self.requested_repo, result_tag, result_version)
self.ydl.to_screen(
f'Current version: {current_label}\n{latest_or_requested}'
+ (f'\nUpgradable to: {update_label}' if update_label != requested_label else ''))
return UpdateInfo(
tag=result_tag,
version=result_version,
requested_version=requested_version,
commit=target_commitish if result_tag == resolved_tag else None,
checksum=checksum)
def update(self, update_info=NO_DEFAULT):
"""Update yt-dlp executable to the latest version"""
if update_info is NO_DEFAULT:
update_info = self.query_update(_output=True)
if not update_info:
            return False
try:
self.ydl.to_screen((
f'Available version: {self._label(self.target_channel, self.latest_version)}, ' if self.target_tag == 'latest' else ''
) + f'Current version: {self._label(CHANNEL, self.current_version)}')
except network_exceptions as e:
return self._report_network_error(f'obtain version info ({e})', delim='; Please try again later or')
if not is_non_updateable():
self.ydl.to_screen(f'Current Build Hash: {_sha256_file(self.filename)}')
if self.has_update:
return True
if self.target_tag == self._tag:
self.ydl.to_screen(f'yt-dlp is up to date ({self._label(CHANNEL, self.current_version)})')
elif not self._exact:
self.ydl.report_warning('yt-dlp cannot be updated any further since you are on an older Python version')
return False
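The `Current Build Hash` line above comes from `_sha256_file`; a likely-equivalent sketch (chunk size assumed):

import hashlib

def _sha256_file(path):
    # Hash the executable in chunks so large binaries stay cheap on memory
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b''):
            h.update(chunk)
    return h.hexdigest()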
def update(self):
"""Update yt-dlp executable to the latest version"""
if not self.check_update():
return
        err = is_non_updateable()
        if err:
-            self._report_error(err, True)
-            return False
-        self.ydl.to_screen(f'Current Build Hash: {_sha256_file(self.filename)}')
-        update_label = _make_label(self.requested_repo, update_info.tag, update_info.version)
-        self.ydl.to_screen(f'Updating to {update_label} ...')
+            return self._report_error(err, True)
+        self.ydl.to_screen(f'Updating to {self._label(self.target_channel, self.new_version)} ...')
+        if (_VERSION_RE.fullmatch(self.target_tag[5:])
+                and version_tuple(self.target_tag[5:]) < (2023, 3, 2)):
+            self.ydl.report_warning('You are downgrading to a version without --update-to')
+            self._block_restart('Cannot automatically restart to a version without --update-to')
        directory = os.path.dirname(self.filename)
        if not os.access(self.filename, os.W_OK):
@@ -461,17 +329,20 @@ class Updater:
            return self._report_error('Unable to remove the old version')
        try:
-            newcontent = self._download_asset(update_info.binary_name, update_info.tag)
+            newcontent = self._download(self.release_name, self._tag)
        except network_exceptions as e:
            if isinstance(e, HTTPError) and e.status == 404:
                return self._report_error(
-                    f'The requested tag {self.requested_repo}@{update_info.tag} does not exist', True)
-            return self._report_network_error(f'fetch updates: {e}', tag=update_info.tag)
-        if not update_info.checksum:
-            self._block_restart('Automatically restarting into unverified builds is disabled for security reasons')
-        elif hashlib.sha256(newcontent).hexdigest() != update_info.checksum:
-            return self._report_network_error('verify the new executable', tag=update_info.tag)
+                    f'The requested tag {self._label(self.target_channel, self.target_tag)} does not exist', True)
+            return self._report_network_error(f'fetch updates: {e}')
+        try:
+            expected_hash = self.release_hash
+        except Exception:
+            self.ydl.report_warning('no hash information found for the release')
+        else:
+            if hashlib.sha256(newcontent).hexdigest() != expected_hash:
+                return self._report_network_error('verify the new executable')
        try:
            with open(new_filename, 'wb') as outf:
@@ -508,14 +379,9 @@ class Updater:
            return self._report_error(
                f'Unable to set permissions. Run: sudo chmod a+rx {compat_shlex_quote(self.filename)}')
-        self.ydl.to_screen(f'Updated yt-dlp to {update_label}')
+        self.ydl.to_screen(f'Updated yt-dlp to {self._label(self.target_channel, self.new_version)}')
        return True
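Both sides of this hunk reduce to the same SHA-256 gate before installing the downloaded executable; a standalone sketch with an invented payload:

import hashlib

def verify(payload, expected_digest):
    # A missing checksum only degrades safety (skip verification or block
    # auto-restart, depending on the variant above); a mismatch aborts
    if expected_digest is None:
        return True
    return hashlib.sha256(payload).hexdigest() == expected_digest

payload = b'new yt-dlp binary'  # hypothetical downloaded bytes
assert verify(payload, hashlib.sha256(payload).hexdigest())
assert not verify(payload, '0' * 64)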
@functools.cached_property
def filename(self):
"""Filename of the executable"""
return compat_realpath(_get_variant_and_executable_path()[1])
    @functools.cached_property
    def cmd(self):
        """The command-line to run the executable, if known"""
@@ -538,71 +404,6 @@ class Updater:
            return self.ydl._download_retcode
        self.restart = wrapper
def _report_error(self, msg, expected=False):
self.ydl.report_error(msg, tb=False if expected else None)
self.ydl._download_retcode = 100
def _report_permission_error(self, file):
self._report_error(f'Unable to write to {file}; try running as administrator', True)
    def _report_network_error(self, action, delim=';', tag=None):
        if not tag:
            tag = self.requested_tag
        self._report_error(
            f'Unable to {action}{delim} visit https://github.com/{self.requested_repo}/releases/'
            + (tag if tag == 'latest' else f'tag/{tag}'), True)
# XXX: Everything below this line in this class is deprecated / for compat only
@property
def _target_tag(self):
"""Deprecated; requested tag with 'tags/' prepended when necessary for API calls"""
return f'tags/{self.requested_tag}' if self.requested_tag != 'latest' else self.requested_tag
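The `tags/` prefix mirrors GitHub's REST routes, which address releases as `releases/latest` or `releases/tags/<tag>`; a quick sketch of the same convention:

def _target_tag(requested_tag):
    # 'latest' is its own endpoint; every real tag needs the tags/ prefix
    return f'tags/{requested_tag}' if requested_tag != 'latest' else requested_tag

assert _target_tag('latest') == 'latest'
assert _target_tag('2023.10.13') == 'tags/2023.10.13'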
def _check_update(self):
"""Deprecated; report whether there is an update available"""
return bool(self.query_update(_output=True))
def __getattr__(self, attribute: str):
"""Compat getter function for deprecated attributes"""
deprecated_props_map = {
'check_update': '_check_update',
'target_tag': '_target_tag',
'target_channel': 'requested_channel',
}
update_info_props_map = {
'has_update': '_has_update',
'new_version': 'version',
'latest_version': 'requested_version',
'release_name': 'binary_name',
'release_hash': 'checksum',
}
if attribute not in deprecated_props_map and attribute not in update_info_props_map:
raise AttributeError(f'{type(self).__name__!r} object has no attribute {attribute!r}')
msg = f'{type(self).__name__}.{attribute} is deprecated and will be removed in a future version'
if attribute in deprecated_props_map:
source_name = deprecated_props_map[attribute]
if not source_name.startswith('_'):
msg += f'. Please use {source_name!r} instead'
source = self
mapping = deprecated_props_map
else: # attribute in update_info_props_map
msg += '. Please call query_update() instead'
source = self.query_update()
if source is None:
source = UpdateInfo('', None, None, None)
source._has_update = False
mapping = update_info_props_map
deprecation_warning(msg)
for target_name, source_name in mapping.items():
value = getattr(source, source_name)
setattr(self, target_name, value)
return getattr(self, attribute)
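In practice the shim behaves like this (a sketch; `ydl` is assumed to be a configured YoutubeDL instance):

updater = Updater(ydl)
# First access emits a deprecation warning, runs query_update() once,
# and materialises every attribute in the relevant mapping:
print(updater.latest_version)  # mirrors UpdateInfo.requested_version
# Subsequent accesses never reach __getattr__, since the attribute
# now exists on the instance:
print(updater.latest_version)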
def run_update(ydl):
    """Update the program file with the latest version from the repository
@@ -611,4 +412,45 @@ def run_update(ydl):
    return Updater(ydl).update()
# Deprecated
def update_self(to_screen, verbose, opener):
import traceback
deprecation_warning(f'"{__name__}.update_self" is deprecated and may be removed '
f'in a future version. Use "{__name__}.run_update(ydl)" instead')
printfn = to_screen
class FakeYDL():
to_screen = printfn
def report_warning(self, msg, *args, **kwargs):
return printfn(f'WARNING: {msg}', *args, **kwargs)
def report_error(self, msg, tb=None):
printfn(f'ERROR: {msg}')
if not verbose:
return
if tb is None:
# Copied from YoutubeDL.trouble
if sys.exc_info()[0]:
tb = ''
if hasattr(sys.exc_info()[1], 'exc_info') and sys.exc_info()[1].exc_info[0]:
tb += ''.join(traceback.format_exception(*sys.exc_info()[1].exc_info))
tb += traceback.format_exc()
else:
tb_data = traceback.format_list(traceback.extract_stack())
tb = ''.join(tb_data)
if tb:
printfn(tb)
def write_debug(self, msg, *args, **kwargs):
printfn(f'[debug] {msg}', *args, **kwargs)
def urlopen(self, url):
return opener.open(url)
return run_update(FakeYDL())
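For completeness, the legacy call shape this shim still accepts; the opener is a plain urllib one, chosen here only for illustration:

import urllib.request

# Emits the deprecation warning, then delegates to run_update()
# through the FakeYDL adapter above
update_self(
    to_screen=print,
    verbose=False,
    opener=urllib.request.build_opener())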
__all__ = ['Updater']

View File

@@ -9,7 +9,3 @@ VARIANT = None
UPDATE_HINT = None
CHANNEL = 'stable'
ORIGIN = 'yt-dlp/yt-dlp'
_pkg_version = '2023.10.13'