From d17fc42411e60cf724a99650eff2d9ee176990cc Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 21 Apr 2025 11:59:32 +0000 Subject: [PATCH 01/90] Bump pycares from 4.6.0 to 4.6.1 (#10778) Bumps [pycares](https://github.com/saghul/pycares) from 4.6.0 to 4.6.1.
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pycares&package-manager=pip&previous-version=4.6.0&new-version=4.6.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/base.txt | 2 +- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- requirements/runtime-deps.txt | 2 +- requirements/test.txt | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/requirements/base.txt b/requirements/base.txt index b4366c8fa26..60e17822367 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -36,7 +36,7 @@ propcache==0.3.1 # via # -r requirements/runtime-deps.in # yarl -pycares==4.6.0 +pycares==4.6.1 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 9cf1615af28..d50203137a2 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -148,7 +148,7 @@ propcache==0.3.1 # yarl proxy-py==2.4.10 # via -r requirements/test.in -pycares==4.6.0 +pycares==4.6.1 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/dev.txt b/requirements/dev.txt index fb26879cabc..9b93f12fd3d 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -145,7 +145,7 @@ propcache==0.3.1 # yarl proxy-py==2.4.10 # via -r requirements/test.in -pycares==4.6.0 +pycares==4.6.1 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/lint.txt b/requirements/lint.txt index ab419411f50..b01ef0c2978 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -59,7 +59,7 @@ pluggy==1.5.0 # via pytest pre-commit==4.2.0 # via -r requirements/lint.in -pycares==4.6.0 +pycares==4.6.1 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/runtime-deps.txt b/requirements/runtime-deps.txt index a1d1a47cf00..e6bcad92614 100644 --- a/requirements/runtime-deps.txt +++ b/requirements/runtime-deps.txt @@ -32,7 +32,7 @@ propcache==0.3.1 # via # -r requirements/runtime-deps.in # yarl -pycares==4.6.0 +pycares==4.6.1 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/test.txt b/requirements/test.txt index ab2185d9ee7..10abc497509 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -79,7 +79,7 @@ propcache==0.3.1 # yarl proxy-py==2.4.10 # via -r requirements/test.in -pycares==4.6.0 +pycares==4.6.1 # via aiodns pycparser==2.22 # via cffi From b000a88ddfa9eda5dedaeaad7c127d33d22f24f9 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 21 Apr 2025 12:11:24 +0000 Subject: [PATCH 02/90] Bump packaging from 24.2 to 25.0 (#10780) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [packaging](https://github.com/pypa/packaging) from 24.2 to 25.0.
Release notes

Sourced from packaging's releases.

25.0

Full Changelog: https://github.com/pypa/packaging/compare/24.2...25.0

Changelog

Sourced from packaging's changelog.

25.0 - 2025-04-19


* PEP 751: Add support for ``extras`` and ``dependency_groups`` markers.
(:issue:`885`)
* PEP 738: Add support for Android platform tags. (:issue:`880`)
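The two additions above are easiest to see through packaging's own APIs. Below is a minimal sketch, assuming only that the new PEP 751 variable names parse under the normal `Marker` grammar; the evaluation call is left commented because the exact environment/context contract should be checked against the packaging 25.0 docs.

```python
# Minimal sketch of the PEP 751 marker variables added in packaging 25.0.
# Only the variable names ("extras", "dependency_groups") come from the
# changelog above; the commented evaluate() call is an assumption.
from packaging.markers import Marker
from packaging.tags import sys_tags

# These strings fail to parse on packaging < 25.0 and should parse on 25.0+.
m_extras = Marker('"test" in extras')
m_groups = Marker('"dev" in dependency_groups')
print(m_extras, m_groups)

# Assumed evaluation shape -- verify against the packaging 25.0 docs:
# m_extras.evaluate({"extras": {"test", "docs"}})

# PEP 738: on an Android interpreter, sys_tags() should now also yield
# android_* platform tags; on other platforms the output is unchanged.
for tag in list(sys_tags())[:3]:
    print(tag)
```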
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=packaging&package-manager=pip&previous-version=24.2&new-version=25.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/base.txt | 2 +- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- requirements/lint.txt | 2 +- requirements/test.txt | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/requirements/base.txt b/requirements/base.txt index 60e17822367..8539638c3fa 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -30,7 +30,7 @@ multidict==6.4.3 # via # -r requirements/runtime-deps.in # yarl -packaging==24.2 +packaging==25.0 # via gunicorn propcache==0.3.1 # via diff --git a/requirements/constraints.txt b/requirements/constraints.txt index d50203137a2..e3c5315c4b0 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -128,7 +128,7 @@ mypy-extensions==1.0.0 # via mypy nodeenv==1.9.1 # via pre-commit -packaging==24.2 +packaging==25.0 # via # build # gunicorn diff --git a/requirements/dev.txt b/requirements/dev.txt index 9b93f12fd3d..03377ac2cd2 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -125,7 +125,7 @@ mypy-extensions==1.0.0 # via mypy nodeenv==1.9.1 # via pre-commit -packaging==24.2 +packaging==25.0 # via # build # gunicorn diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index fe5d7e5708d..85f4e321835 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -30,7 +30,7 @@ jinja2==3.1.6 # towncrier markupsafe==3.0.2 # via jinja2 -packaging==24.2 +packaging==25.0 # via sphinx pyenchant==3.2.2 # via sphinxcontrib-spelling diff --git a/requirements/doc.txt b/requirements/doc.txt index 086c945725e..4a559724883 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -30,7 +30,7 @@ jinja2==3.1.6 # towncrier markupsafe==3.0.2 # via jinja2 -packaging==24.2 +packaging==25.0 # via sphinx pygments==2.19.1 # via sphinx diff --git a/requirements/lint.txt b/requirements/lint.txt index b01ef0c2978..bee2eb3b2d0 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -51,7 +51,7 @@ mypy-extensions==1.0.0 # via mypy nodeenv==1.9.1 # via pre-commit -packaging==24.2 +packaging==25.0 # via pytest platformdirs==4.3.7 # via virtualenv diff --git a/requirements/test.txt b/requirements/test.txt index 10abc497509..45afe22f063 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -67,7 +67,7 @@ mypy==1.15.0 ; implementation_name == "cpython" # via -r requirements/test.in mypy-extensions==1.0.0 # via mypy -packaging==24.2 +packaging==25.0 # via # gunicorn # pytest From fcfb0d8239ad953da25327fef2ba8411ca6d7afc Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 21 Apr 2025 12:14:52 +0000 Subject: [PATCH 03/90] Bump identify from 2.6.9 to 2.6.10 (#10781) Bumps [identify](https://github.com/pre-commit/identify) from 2.6.9 to 2.6.10.
Commits
  • e200468 v2.6.10
  • 41f40e2 Merge pull request #517 from sebastiaanspeck/patch-1
  • 2ae839d Add support for Magik
  • dc20df2 Merge pull request #516 from pre-commit/pre-commit-ci-update-config
  • cba874f [pre-commit.ci] auto fixes from pre-commit.com hooks
  • e839dfb [pre-commit.ci] pre-commit autoupdate
  • See full diff in compare view
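For context, identify's Python API can be queried directly for the tags behind a filename. The sketch below assumes the new Magik support maps the `.magik` extension, which is not confirmed by the excerpt above.

```python
# Querying identify's tag database. The ".magik" extension is an assumption
# drawn from the "Add support for Magik" commit above; adjust it if upstream
# registered a different extension for the language.
from identify import identify

print(identify.tags_from_filename("app.py"))     # tags include "python"
print(identify.tags_from_filename("app.magik"))  # expected non-empty on >= 2.6.10
```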

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=identify&package-manager=pip&previous-version=2.6.9&new-version=2.6.10)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index e3c5315c4b0..bdb5e135127 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -88,7 +88,7 @@ gidgethub==5.3.0 # via cherry-picker gunicorn==23.0.0 # via -r requirements/base.in -identify==2.6.9 +identify==2.6.10 # via pre-commit idna==3.3 # via diff --git a/requirements/dev.txt b/requirements/dev.txt index 03377ac2cd2..d8608ed720d 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -86,7 +86,7 @@ gidgethub==5.3.0 # via cherry-picker gunicorn==23.0.0 # via -r requirements/base.in -identify==2.6.9 +identify==2.6.10 # via pre-commit idna==3.4 # via diff --git a/requirements/lint.txt b/requirements/lint.txt index bee2eb3b2d0..e68392bef1e 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -33,7 +33,7 @@ forbiddenfruit==0.1.4 # via blockbuster freezegun==1.5.1 # via -r requirements/lint.in -identify==2.6.9 +identify==2.6.10 # via pre-commit idna==3.7 # via trustme From 397bb0ac26efcf7644fad1eefef3bc848e18b9e7 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 21 Apr 2025 18:56:30 +0000 Subject: [PATCH 04/90] Bump setuptools from 78.1.0 to 79.0.0 (#10782) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [setuptools](https://github.com/pypa/setuptools) from 78.1.0 to 79.0.0.
Changelog

Sourced from setuptools's changelog.

v79.0.0

Deprecations and Removals

  • Removed support for 'legacy-editable' installs. (#917)

v78.1.1

Bugfixes

  • More fully sanitized the filename in PackageIndex._download. (#4946)
Commits
  • 56962ec Bump version: 78.1.1 → 79.0.0
  • b137521 Merge pull request #4953 from pypa/debt/917/remove-legacy-editable
  • f89e652 Removed support for the 'legacy-editable' feature.
  • 8e4868a Bump version: 78.1.0 → 78.1.1
  • 100e9a6 Merge pull request #4951
  • 8faf1d7 Add news fragment.
  • 2ca4a9f Rely on re.sub to perform the decision in one expression.
  • e409e80 Extract _sanitize method for sanitizing the filename.
  • 250a6d1 Add a check to ensure the name resolves relative to the tmpdir.
  • d8390fe Extract _resolve_download_filename with test.
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=setuptools&package-manager=pip&previous-version=78.1.0&new-version=79.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index bdb5e135127..a964de9972d 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -299,7 +299,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.0.1 # via pip-tools -setuptools==78.1.0 +setuptools==79.0.0 # via # incremental # pip-tools diff --git a/requirements/dev.txt b/requirements/dev.txt index d8608ed720d..fbc496f6279 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -290,7 +290,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.0.1 # via pip-tools -setuptools==78.1.0 +setuptools==79.0.0 # via # incremental # pip-tools diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index 85f4e321835..a005c791a44 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -76,5 +76,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==78.1.0 +setuptools==79.0.0 # via incremental diff --git a/requirements/doc.txt b/requirements/doc.txt index 4a559724883..28c10d44a0c 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -69,5 +69,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==78.1.0 +setuptools==79.0.0 # via incremental From 60f15a489f84a666990a2cdf984271f30a72bbec Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Mon, 21 Apr 2025 19:05:04 +0000 Subject: [PATCH 05/90] [PR #10774/b0404741 backport][3.12] Rewrite changelog message for #9705 to be in the past tense (#10786) Co-authored-by: J. Nick Koston --- CHANGES/10761.contrib.rst | 1 + CHANGES/9705.contrib.rst | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) create mode 120000 CHANGES/10761.contrib.rst diff --git a/CHANGES/10761.contrib.rst b/CHANGES/10761.contrib.rst new file mode 120000 index 00000000000..3d35184e09d --- /dev/null +++ b/CHANGES/10761.contrib.rst @@ -0,0 +1 @@ +9705.contrib.rst \ No newline at end of file diff --git a/CHANGES/9705.contrib.rst b/CHANGES/9705.contrib.rst index 771fb442629..5d23e964fa1 100644 --- a/CHANGES/9705.contrib.rst +++ b/CHANGES/9705.contrib.rst @@ -1 +1 @@ -Speed up tests by disabling ``blockbuster`` fixture for ``test_static_file_huge`` and ``test_static_file_huge_cancel`` tests -- by :user:`dikos1337`. +Sped up tests by disabling ``blockbuster`` fixture for ``test_static_file_huge`` and ``test_static_file_huge_cancel`` tests -- by :user:`dikos1337`. From f35bc390b064a96e5a5383dc851b7b67140621e0 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 24 Apr 2025 11:10:04 +0000 Subject: [PATCH 06/90] Bump setuptools from 79.0.0 to 79.0.1 (#10793) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [setuptools](https://github.com/pypa/setuptools) from 79.0.0 to 79.0.1.
Changelog

Sourced from setuptools's changelog.

v79.0.1

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=setuptools&package-manager=pip&previous-version=79.0.0&new-version=79.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index a964de9972d..2affe3dd267 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -299,7 +299,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.0.1 # via pip-tools -setuptools==79.0.0 +setuptools==79.0.1 # via # incremental # pip-tools diff --git a/requirements/dev.txt b/requirements/dev.txt index fbc496f6279..ee56630ba79 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -290,7 +290,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.0.1 # via pip-tools -setuptools==79.0.0 +setuptools==79.0.1 # via # incremental # pip-tools diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index a005c791a44..2832f919ec6 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -76,5 +76,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==79.0.0 +setuptools==79.0.1 # via incremental diff --git a/requirements/doc.txt b/requirements/doc.txt index 28c10d44a0c..e71d185f8dd 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -69,5 +69,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==79.0.0 +setuptools==79.0.1 # via incremental From ad5d1393e3680a1a6a38e1af88c1fbc57726d152 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 28 Apr 2025 12:16:21 +0000 Subject: [PATCH 07/90] Bump pip from 25.0.1 to 25.1 (#10804) Bumps [pip](https://github.com/pypa/pip) from 25.0.1 to 25.1.
Changelog

Sourced from pip's changelog.

25.1 (2025-04-26)

Deprecations and Removals

  • Drop support for Python 3.8. ([#12989](https://github.com/pypa/pip/issues/12989))
  • On Python 3.14+, the pkg_resources metadata backend cannot be used anymore. ([#13010](https://github.com/pypa/pip/issues/13010))
  • Hide --no-python-version-warning from CLI help and documentation as it's useless since Python 2 support was removed. Despite being formerly slated for removal, the flag will remain as a no-op to avoid breakage. ([#13303](https://github.com/pypa/pip/issues/13303))
  • A warning is emitted when the deprecated pkg_resources library is used to inspect and discover installed packages. This warning should only be visible to users who set an undocumented environment variable to disable the default importlib.metadata backend. ([#13318](https://github.com/pypa/pip/issues/13318))
  • Deprecate the legacy setup.py bdist_wheel mechanism. To silence the warning, and future-proof their setup, users should enable --use-pep517 or add a pyproject.toml file to the projects they control. ([#13319](https://github.com/pypa/pip/issues/13319))

Features

  • Suggest checking "pip config debug" in case of an InvalidProxyURL error. ([#12649](https://github.com/pypa/pip/issues/12649))

  • Using --debug also enables verbose logging. ([#12710](https://github.com/pypa/pip/issues/12710))

  • Display a transient progress bar during package installation. ([#12712](https://github.com/pypa/pip/issues/12712))

  • Minor performance improvement when installing packages with a large number of dependencies by increasing the requirement string cache size. ([#12873](https://github.com/pypa/pip/issues/12873))

  • Add a --group option which allows installation from :pep:`735` Dependency Groups. --group accepts arguments of the form group or path:group, where the default path is pyproject.toml, and installs the named Dependency Group from the provided pyproject.toml file; see the sketch after this list. ([#12963](https://github.com/pypa/pip/issues/12963))

  • Add support to enable resuming incomplete downloads.

    Control the number of retry attempts using the --resume-retries flag. ([#12991](https://github.com/pypa/pip/issues/12991))

  • Use :pep:`753` "Well-known Project URLs in Metadata" normalization rules when identifying an equivalent project URL to replace a missing Home-Page field in pip show. ([#13135](https://github.com/pypa/pip/issues/13135))

  • Remove experimental warning from pip index versions command. ([#13188](https://github.com/pypa/pip/issues/13188))

  • Add a structured --json output to pip index versions. ([#13194](https://github.com/pypa/pip/issues/13194))

  • Add a new, experimental, pip lock command, implementing :pep:`751`. ([#13213](https://github.com/pypa/pip/issues/13213))

  • Speed up resolution by first only considering the preference of candidates that must be required to complete the resolution. ([#13253](https://github.com/pypa/pip/issues/13253))

  • Improved heuristics for determining the order of dependency resolution. ([#13273](https://github.com/pypa/pip/issues/13273))

  • Provide hint, documentation, and link to the documentation when resolution too deep error occurs. ([#13282](https://github.com/pypa/pip/issues/13282))

  • Include traceback on failure to import setuptools when setup.py is being invoked directly. ([#13290](https://github.com/pypa/pip/issues/13290))

  • Support for :pep:`738` Android wheels. ([#13299](https://github.com/pypa/pip/issues/13299))

  • Display wheel build tag in pip list columns output if set. ([#5210](https://github.com/pypa/pip/issues/5210))

  • Build environment dependencies are no longer compiled to bytecode during

... (truncated)
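Two of the features above, `--group` and `--resume-retries`, are easy to exercise from a script. A hedged sketch follows; the group name, the pyproject.toml layout, and the retry count are illustrative assumptions, not anything this repository ships.

```python
# Sketch: driving pip 25.1's --group and --resume-retries from Python.
# Assumes a pyproject.toml in the working directory along the lines of:
#
#   [dependency-groups]
#   dev = ["pytest", "ruff"]
#
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m", "pip", "install",
        "--group", "dev",         # or the "path/to/pyproject.toml:dev" form
        "--resume-retries", "3",  # retry incomplete downloads up to 3 times
    ],
    check=True,
)
```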

Commits
  • daa7e54 Bump for release
  • 06c3182 Update AUTHORS.txt
  • b88324f Add a news file for the pip lock command
  • 38253a6 Merge pull request #13319 from sbidoul
  • 2791a8b Merge pull request #13344 from pypa/dependabot/pip/build-project/setuptools-7...
  • 24f4600 Remove LRU cache from methods [ruff rule cached-instance-method] (#13306)
  • d852ebd Merge pull request #12308
  • d35c08d Clarify what the removal of the pkg_ressources backend implies
  • e879422 Rename find_linked to find_legacy_editables
  • 4a76560 Fix uninstallation of zipped eggs
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pip&package-manager=pip&previous-version=25.0.1&new-version=25.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 2affe3dd267..239323965e2 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -297,7 +297,7 @@ zlib-ng==0.5.1 # -r requirements/test.in # The following packages are considered to be unsafe in a requirements file: -pip==25.0.1 +pip==25.1 # via pip-tools setuptools==79.0.1 # via diff --git a/requirements/dev.txt b/requirements/dev.txt index ee56630ba79..5c3f99c07e2 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -288,7 +288,7 @@ zlib-ng==0.5.1 # -r requirements/test.in # The following packages are considered to be unsafe in a requirements file: -pip==25.0.1 +pip==25.1 # via pip-tools setuptools==79.0.1 # via From 8207b94a96d21abe30c4a1dabed03d7fab49dbea Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 28 Apr 2025 12:29:35 +0000 Subject: [PATCH 08/90] Bump setuptools from 79.0.1 to 80.0.0 (#10802) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [setuptools](https://github.com/pypa/setuptools) from 79.0.1 to 80.0.0.
Changelog

Sourced from setuptools's changelog.

v80.0.0

Bugfixes

  • Update test to honor new behavior in importlib_metadata 8.7. (#4961)

Deprecations and Removals

  • Removed support for the easy_install command including the sandbox module. (#2908)
  • Develop command no longer uses easy_install, but instead defers execution to pip (which then will re-invoke Setuptools via PEP 517 to build the editable wheel). Most of the options to develop are dropped. This is the final warning before the command is dropped completely in a few months. Use-cases relying on 'setup.py develop' should pin to older Setuptools version or migrate to modern build tooling. (#4955)
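In practice, the migration the last bullet asks for means replacing direct `setup.py develop` invocations with a pip editable install. A minimal sketch, assuming pip is available in the target environment:

```python
# Old, deprecated entry point:  python setup.py develop
# Modern equivalent: an editable install, which (per the changelog above)
# re-invokes setuptools through the PEP 517 hooks to build an editable wheel.
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "pip", "install", "-e", "."],
    check=True,
)
```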
Commits
  • aeea792 Bump version: 79.0.1 → 80.0.0
  • 2c874e7 Merge pull request #4962 from pypa/bugfix/4961-validated-eps
  • 82c588a Update test to honor new behavior in importlib_metadata 8.7
  • ef4cd29 Merge pull request #2908 from pypa/debt/remove-easy-install
  • 85bbad4 Merge branch 'main' into debt/remove-easy-install
  • 9653305 Merge pull request #4955 from pypa/debt/develop-uses-pip
  • da119e7 Set a due date 6 months in advance.
  • a7603da Rename news fragment to reference the pull request for better precise locality.
  • 018a20c Restore a few of the options to develop.
  • a5f02fe Remove another test relying on setup.py develop.
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=setuptools&package-manager=pip&previous-version=79.0.1&new-version=80.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 239323965e2..c64b82e2d96 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -299,7 +299,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1 # via pip-tools -setuptools==79.0.1 +setuptools==80.0.0 # via # incremental # pip-tools diff --git a/requirements/dev.txt b/requirements/dev.txt index 5c3f99c07e2..f4bc6e75475 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -290,7 +290,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1 # via pip-tools -setuptools==79.0.1 +setuptools==80.0.0 # via # incremental # pip-tools diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index 2832f919ec6..af5cdaadead 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -76,5 +76,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==79.0.1 +setuptools==80.0.0 # via incremental diff --git a/requirements/doc.txt b/requirements/doc.txt index e71d185f8dd..0dea484b6be 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -69,5 +69,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==79.0.1 +setuptools==80.0.0 # via incremental From 007a251ee148f801881a9b943fc13fffc3bfb3c5 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 28 Apr 2025 12:39:35 +0000 Subject: [PATCH 09/90] Bump pypa/cibuildwheel from 2.23.2 to 2.23.3 (#10806) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 2.23.2 to 2.23.3.
Release notes

Sourced from pypa/cibuildwheel's releases.

v2.23.3

  • 🛠 Dependency updates, including Python 3.13.3 (#2371)
Changelog

Sourced from pypa/cibuildwheel's changelog.

v2.23.3

26 April 2025

  • 🛠 Dependency updates, including Python 3.13.3 (#2371)
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pypa/cibuildwheel&package-manager=github_actions&previous-version=2.23.2&new-version=2.23.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- .github/workflows/ci-cd.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/ci-cd.yml b/.github/workflows/ci-cd.yml index ec85713319b..564aa1fea14 100644 --- a/.github/workflows/ci-cd.yml +++ b/.github/workflows/ci-cd.yml @@ -414,7 +414,7 @@ jobs: run: | make cythonize - name: Build wheels - uses: pypa/cibuildwheel@v2.23.2 + uses: pypa/cibuildwheel@v2.23.3 env: CIBW_SKIP: pp* ${{ matrix.musl == 'musllinux' && '*manylinux*' || '*musllinux*' }} CIBW_ARCHS_MACOS: x86_64 arm64 universal2 From bc64b83670796bbe4df27eb482916587a23ba787 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Wed, 30 Apr 2025 11:22:02 +0000 Subject: [PATCH 10/90] Bump pydantic from 2.11.3 to 2.11.4 (#10809) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [pydantic](https://github.com/pydantic/pydantic) from 2.11.3 to 2.11.4.
Release notes

Sourced from pydantic's releases.

v2.11.4 2025-04-29

What's Changed

Changes

  • Allow config and bases to be specified together in create_model() by @Viicos in #11714. This change was backported as it was previously possible (although not meant to be supported) to provide model_config as a field, which would make it possible to provide both configuration and bases.

Changelog

Sourced from pydantic's changelog.

v2.11.4 (2025-04-29)

GitHub release

What's Changed

Changes

  • Allow config and bases to be specified together in create_model() by @Viicos in #11714. This change was backported as it was previously possible (although not meant to be supported) to provide model_config as a field, which would make it possible to provide both configuration and bases.
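A minimal sketch of the restored combination described above; the field names, the ConfigDict option, and the expectation that the config applies on top of the base are illustrative assumptions:

```python
# pydantic >= 2.11.4: __config__ and __base__ may be passed together again.
from pydantic import BaseModel, ConfigDict, create_model

class Base(BaseModel):
    name: str

Model = create_model(
    "Model",
    __base__=Base,                                     # inherit from Base
    __config__=ConfigDict(str_strip_whitespace=True),  # extra configuration
    age=(int, 0),                                      # (type, default) field
)

print(Model(name="  aiohttp  ", age=3).name)  # expected: "aiohttp"
```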

Commits
  • d444cd1 Prepare release v2.11.4
  • 828fc48 Add documentation note about common pitfall with the annotated pattern
  • 42bf1fd Bump pydantic-core to v2.33.2 (#11804)
  • 7b3f513 Allow config and bases to be specified together in create_model()
  • fc52138 Traverse function-before schemas during schema gathering
  • 25af789 Fix issue with recursive generic models
  • 91ef6bb Update monthly download count in documentation
  • a830775 Bump mkdocs-llmstxt to v0.2.0
  • f5d1c87 Fix crash when expanding root type in the mypy plugin
  • c80bb35 Remove coercion of decimal constraints
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pydantic&package-manager=pip&previous-version=2.11.3&new-version=2.11.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 4 ++-- requirements/dev.txt | 4 ++-- requirements/lint.txt | 4 ++-- requirements/test.txt | 4 ++-- 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index c64b82e2d96..1842a494464 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -152,9 +152,9 @@ pycares==4.6.1 # via aiodns pycparser==2.22 # via cffi -pydantic==2.11.3 +pydantic==2.11.4 # via python-on-whales -pydantic-core==2.33.1 +pydantic-core==2.33.2 # via pydantic pyenchant==3.2.2 # via sphinxcontrib-spelling diff --git a/requirements/dev.txt b/requirements/dev.txt index f4bc6e75475..fed94b86d7f 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -149,9 +149,9 @@ pycares==4.6.1 # via aiodns pycparser==2.22 # via cffi -pydantic==2.11.3 +pydantic==2.11.4 # via python-on-whales -pydantic-core==2.33.1 +pydantic-core==2.33.2 # via pydantic pygments==2.19.1 # via diff --git a/requirements/lint.txt b/requirements/lint.txt index e68392bef1e..1642cd701dc 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -63,9 +63,9 @@ pycares==4.6.1 # via aiodns pycparser==2.22 # via cffi -pydantic==2.11.3 +pydantic==2.11.4 # via python-on-whales -pydantic-core==2.33.1 +pydantic-core==2.33.2 # via pydantic pygments==2.19.1 # via rich diff --git a/requirements/test.txt b/requirements/test.txt index 45afe22f063..f8519c407e0 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -83,9 +83,9 @@ pycares==4.6.1 # via aiodns pycparser==2.22 # via cffi -pydantic==2.11.3 +pydantic==2.11.4 # via python-on-whales -pydantic-core==2.33.1 +pydantic-core==2.33.2 # via pydantic pygments==2.19.1 # via rich From 299fe00a2b7b1f5def7daa98260263a61bc9772d Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Wed, 30 Apr 2025 11:26:50 +0000 Subject: [PATCH 11/90] Bump setuptools from 80.0.0 to 80.0.1 (#10810) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [setuptools](https://github.com/pypa/setuptools) from 80.0.0 to 80.0.1.
Changelog

Sourced from setuptools's changelog.

v80.0.1

Bugfixes

  • Fixed index_url logic in develop compatibility shim. (#4966)
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=setuptools&package-manager=pip&previous-version=80.0.0&new-version=80.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 1842a494464..8f18ed2443d 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -299,7 +299,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1 # via pip-tools -setuptools==80.0.0 +setuptools==80.0.1 # via # incremental # pip-tools diff --git a/requirements/dev.txt b/requirements/dev.txt index fed94b86d7f..a918a3bf403 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -290,7 +290,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1 # via pip-tools -setuptools==80.0.0 +setuptools==80.0.1 # via # incremental # pip-tools diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index af5cdaadead..a302570ab48 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -76,5 +76,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.0.0 +setuptools==80.0.1 # via incremental diff --git a/requirements/doc.txt b/requirements/doc.txt index 0dea484b6be..2eea796696c 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -69,5 +69,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.0.0 +setuptools==80.0.1 # via incremental From fc614bd2cd764a1e3491a2925362d7a0e6b5f7cc Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 1 May 2025 10:35:41 +0000 Subject: [PATCH 12/90] Bump setuptools from 80.0.1 to 80.1.0 (#10813) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [setuptools](https://github.com/pypa/setuptools) from 80.0.1 to 80.1.0.
Changelog

Sourced from setuptools's changelog.

v80.1.0

Features

  • Added a deadline of Oct 31 to the setup.py install deprecation.

Bugfixes

  • With setup.py install --prefix=..., fall back to distutils install rather than failing. Note that running setup.py install is deprecated. (#3143)
Commits
  • 6f7b6dd Bump version: 80.0.1 → 80.1.0
  • 25ac162 Fix error message string formatting (#4949)
  • 4566569 Merge pull request #4970 from pypa/bugfix/3143-simple-install
  • 7fc5e05 Add a due date on the deprecation.
  • d8071d6 Remove do_egg_install (unused).
  • a1ecac4 Remove run override as it now unconditionally calls super().
  • 2b0d173 Unify the behavior around the return type when calling super(install).
  • 0dc924a Fall back to distutils install rather than failing.
  • See full diff in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=setuptools&package-manager=pip&previous-version=80.0.1&new-version=80.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 8f18ed2443d..fe5c208f216 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -299,7 +299,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1 # via pip-tools -setuptools==80.0.1 +setuptools==80.1.0 # via # incremental # pip-tools diff --git a/requirements/dev.txt b/requirements/dev.txt index a918a3bf403..4f0ab50e631 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -290,7 +290,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1 # via pip-tools -setuptools==80.0.1 +setuptools==80.1.0 # via # incremental # pip-tools diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index a302570ab48..41de69397c0 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -76,5 +76,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.0.1 +setuptools==80.1.0 # via incremental diff --git a/requirements/doc.txt b/requirements/doc.txt index 2eea796696c..328d0cc4cec 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -69,5 +69,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.0.1 +setuptools==80.1.0 # via incremental From 7be295c384e45bdb21188eb405d33681c984ddd3 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 2 May 2025 11:11:42 +0000 Subject: [PATCH 13/90] Bump charset-normalizer from 3.4.1 to 3.4.2 (#10817) Bumps [charset-normalizer](https://github.com/jawah/charset_normalizer) from 3.4.1 to 3.4.2.
Release notes

Sourced from charset-normalizer's releases.

Version 3.4.2

3.4.2 (2025-05-02)

Fixed

  • Addressed the DeprecationWarning in our CLI regarding argparse.FileType by backporting the target class into the package. (#591)
  • Improved the overall reliability of the detector with CJK Ideographs. (#605) (#587)

Changed

  • Optional mypyc compilation upgraded to version 1.15 for Python >= 3.9
Changelog

Sourced from charset-normalizer's changelog.

3.4.2 (2025-05-02)

Fixed

  • Addressed the DeprecationWarning in our CLI regarding argparse.FileType by backporting the target class into the package. (#591)
  • Improved the overall reliability of the detector with CJK Ideographs. (#605) (#587)
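A small sketch exercising the detector on CJK input, the area the second fix targets; the sample text and the expected encoding label are illustrative:

```python
# charset-normalizer 3.4.2: improved detection reliability on CJK ideographs.
from charset_normalizer import from_bytes

payload = "こんにちは、世界。漢字の判定テスト。".encode("utf-8")
match = from_bytes(payload).best()  # .best() may return None for noise input

print(match.encoding)  # expected: "utf_8"
print(str(match))      # the decoded text
```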

Changed

  • Optional mypyc compilation upgraded to version 1.15 for Python >= 3.8
Commits
  • 6422af1 :pencil: update release date
  • 0e60ec1 :bookmark: Release 3.4.2 (#614)
  • f6630ce :arrow_up: Bump pypa/cibuildwheel from 2.23.2 to 2.23.3 (#617)
  • 677c999 :arrow_up: Bump actions/download-artifact from 4.2.1 to 4.3.0 (#618)
  • 960ab1e :arrow_up: Bump actions/setup-python from 5.5.0 to 5.6.0 (#619)
  • 6eb6325 :arrow_up: Bump github/codeql-action from 3.28.10 to 3.28.16 (#620)
  • c99c0f2 :arrow_up: Update coverage requirement from <7.7,>=7.2.7 to >=7.2.7,<7.9 (#606)
  • 270f28e :arrow_up: Bump actions/setup-python from 5.4.0 to 5.5.0 (#607)
  • d4d89a0 :arrow_up: Bump pypa/cibuildwheel from 2.22.0 to 2.23.2 (#608)
  • 905fcf5 :arrow_up: Bump slsa-framework/slsa-github-generator from 2.0.0 to 2.1.0 (#609)
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=charset-normalizer&package-manager=pip&previous-version=3.4.1&new-version=3.4.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index fe5c208f216..53471ff1651 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -43,7 +43,7 @@ cffi==1.17.1 # pytest-codspeed cfgv==3.4.0 # via pre-commit -charset-normalizer==3.4.1 +charset-normalizer==3.4.2 # via requests cherry-picker==2.5.0 # via -r requirements/dev.in diff --git a/requirements/dev.txt b/requirements/dev.txt index 4f0ab50e631..d04934b688a 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -43,7 +43,7 @@ cffi==1.17.1 # pytest-codspeed cfgv==3.4.0 # via pre-commit -charset-normalizer==3.4.1 +charset-normalizer==3.4.2 # via requests cherry-picker==2.5.0 # via -r requirements/dev.in diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index 41de69397c0..3b83cd7fa12 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -12,7 +12,7 @@ babel==2.17.0 # via sphinx certifi==2025.1.31 # via requests -charset-normalizer==3.4.1 +charset-normalizer==3.4.2 # via requests click==8.1.8 # via towncrier diff --git a/requirements/doc.txt b/requirements/doc.txt index 328d0cc4cec..2d6c130c3a5 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -12,7 +12,7 @@ babel==2.17.0 # via sphinx certifi==2025.1.31 # via requests -charset-normalizer==3.4.1 +charset-normalizer==3.4.2 # via requests click==8.1.8 # via towncrier From b932fb46024a79ebdd2a7abb3e841f2357bf8193 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 2 May 2025 11:18:50 +0000 Subject: [PATCH 14/90] Bump pycares from 4.6.1 to 4.7.0 (#10818) Bumps [pycares](https://github.com/saghul/pycares) from 4.6.1 to 4.7.0.
Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pycares&package-manager=pip&previous-version=4.6.1&new-version=4.7.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/base.txt | 2 +- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- requirements/runtime-deps.txt | 2 +- requirements/test.txt | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/requirements/base.txt b/requirements/base.txt index 8539638c3fa..f4e64d57256 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -36,7 +36,7 @@ propcache==0.3.1 # via # -r requirements/runtime-deps.in # yarl -pycares==4.6.1 +pycares==4.7.0 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 53471ff1651..520229fad25 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -148,7 +148,7 @@ propcache==0.3.1 # yarl proxy-py==2.4.10 # via -r requirements/test.in -pycares==4.6.1 +pycares==4.7.0 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/dev.txt b/requirements/dev.txt index d04934b688a..b9c7c5ce797 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -145,7 +145,7 @@ propcache==0.3.1 # yarl proxy-py==2.4.10 # via -r requirements/test.in -pycares==4.6.1 +pycares==4.7.0 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/lint.txt b/requirements/lint.txt index 1642cd701dc..41e3a3f993f 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -59,7 +59,7 @@ pluggy==1.5.0 # via pytest pre-commit==4.2.0 # via -r requirements/lint.in -pycares==4.6.1 +pycares==4.7.0 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/runtime-deps.txt b/requirements/runtime-deps.txt index e6bcad92614..b7cebc81576 100644 --- a/requirements/runtime-deps.txt +++ b/requirements/runtime-deps.txt @@ -32,7 +32,7 @@ propcache==0.3.1 # via # -r requirements/runtime-deps.in # yarl -pycares==4.6.1 +pycares==4.7.0 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/test.txt b/requirements/test.txt index f8519c407e0..aec39654171 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -79,7 +79,7 @@ propcache==0.3.1 # yarl proxy-py==2.4.10 # via -r requirements/test.in -pycares==4.6.1 +pycares==4.7.0 # via aiodns pycparser==2.22 # via cffi From 10fb7cd9c247807edc0dd598cca6c9cc47140bca Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 2 May 2025 11:27:52 +0000 Subject: [PATCH 15/90] Bump certifi from 2025.1.31 to 2025.4.26 (#10803) Bumps [certifi](https://github.com/certifi/python-certifi) from 2025.1.31 to 2025.4.26.
Commits
  • 275c9eb 2025.04.26 (#347)
  • 3788331 Bump actions/setup-python from 5.4.0 to 5.5.0 (#346)
  • 9d1f1b7 Bump actions/download-artifact from 4.1.9 to 4.2.1 (#344)
  • 96b97a5 Bump actions/upload-artifact from 4.6.1 to 4.6.2 (#343)
  • c054ed3 Bump peter-evans/create-pull-request from 7.0.7 to 7.0.8 (#342)
  • 44547fc Bump actions/download-artifact from 4.1.8 to 4.1.9 (#341)
  • 5ea5124 Bump actions/upload-artifact from 4.6.0 to 4.6.1 (#340)
  • 2f142b7 Bump peter-evans/create-pull-request from 7.0.6 to 7.0.7 (#339)
  • 80d2ebd Bump actions/setup-python from 5.3.0 to 5.4.0 (#337)
  • See full diff in compare view
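
A certifi bump only refreshes the bundled CA certificates; no calling code changes. The standard way to see which bundle is in use:

```python
import ssl

import certifi

print(certifi.where())  # path to the bundled cacert.pem that this bump refreshes
context = ssl.create_default_context(cafile=certifi.where())
```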

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=certifi&package-manager=pip&previous-version=2025.1.31&new-version=2025.4.26)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 520229fad25..d847f0b105d 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -34,7 +34,7 @@ brotli==1.1.0 ; platform_python_implementation == "CPython" # via -r requirements/runtime-deps.in build==1.2.2.post1 # via pip-tools -certifi==2025.1.31 +certifi==2025.4.26 # via requests cffi==1.17.1 # via diff --git a/requirements/dev.txt b/requirements/dev.txt index b9c7c5ce797..1e65f484415 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -34,7 +34,7 @@ brotli==1.1.0 ; platform_python_implementation == "CPython" # via -r requirements/runtime-deps.in build==1.2.2.post1 # via pip-tools -certifi==2025.1.31 +certifi==2025.4.26 # via requests cffi==1.17.1 # via diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index 3b83cd7fa12..4a910c84110 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -10,7 +10,7 @@ alabaster==1.0.0 # via sphinx babel==2.17.0 # via sphinx -certifi==2025.1.31 +certifi==2025.4.26 # via requests charset-normalizer==3.4.2 # via requests diff --git a/requirements/doc.txt b/requirements/doc.txt index 2d6c130c3a5..cd4eb34e8e1 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -10,7 +10,7 @@ alabaster==1.0.0 # via sphinx babel==2.17.0 # via sphinx -certifi==2025.1.31 +certifi==2025.4.26 # via requests charset-normalizer==3.4.2 # via requests From e76655a18cbb3b453d2503399b11f06e43f6a4ad Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 2 May 2025 19:04:24 +0000 Subject: [PATCH 16/90] Bump aiodns from 3.2.0 to 3.3.0 (#10820) Bumps [aiodns](https://github.com/saghul/aiodns) from 3.2.0 to 3.3.0.
Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aiodns&package-manager=pip&previous-version=3.2.0&new-version=3.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
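
Within aiohttp itself, aiodns is the backend for the asynchronous DNS resolver pulled in by the speedups extra. A hedged usage sketch (the nameserver addresses are placeholders):

```python
import asyncio

import aiohttp
from aiohttp.resolver import AsyncResolver  # backed by aiodns/pycares


async def main() -> None:
    resolver = AsyncResolver(nameservers=["8.8.8.8", "8.8.4.4"])
    connector = aiohttp.TCPConnector(resolver=resolver)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get("https://example.org") as resp:
            print(resp.status)


asyncio.run(main())
```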
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/base.txt | 2 +- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- requirements/runtime-deps.txt | 2 +- requirements/test.txt | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/requirements/base.txt b/requirements/base.txt index f4e64d57256..e456f72ab22 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/base.txt --strip-extras requirements/base.in # -aiodns==3.2.0 ; sys_platform == "linux" or sys_platform == "darwin" +aiodns==3.3.0 ; sys_platform == "linux" or sys_platform == "darwin" # via -r requirements/runtime-deps.in aiohappyeyeballs==2.6.1 # via -r requirements/runtime-deps.in diff --git a/requirements/constraints.txt b/requirements/constraints.txt index d847f0b105d..c13b965cc39 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/constraints.txt --resolver=backtracking --strip-extras requirements/constraints.in # -aiodns==3.2.0 ; sys_platform == "linux" or sys_platform == "darwin" +aiodns==3.3.0 ; sys_platform == "linux" or sys_platform == "darwin" # via # -r requirements/lint.in # -r requirements/runtime-deps.in diff --git a/requirements/dev.txt b/requirements/dev.txt index 1e65f484415..bc15a7f0713 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/dev.txt --resolver=backtracking --strip-extras requirements/dev.in # -aiodns==3.2.0 ; sys_platform == "linux" or sys_platform == "darwin" +aiodns==3.3.0 ; sys_platform == "linux" or sys_platform == "darwin" # via # -r requirements/lint.in # -r requirements/runtime-deps.in diff --git a/requirements/lint.txt b/requirements/lint.txt index 41e3a3f993f..9b348fa9d47 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/lint.txt --resolver=backtracking --strip-extras requirements/lint.in # -aiodns==3.2.0 +aiodns==3.3.0 # via -r requirements/lint.in annotated-types==0.7.0 # via pydantic diff --git a/requirements/runtime-deps.txt b/requirements/runtime-deps.txt index b7cebc81576..cb69af4ee1f 100644 --- a/requirements/runtime-deps.txt +++ b/requirements/runtime-deps.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/runtime-deps.txt --strip-extras requirements/runtime-deps.in # -aiodns==3.2.0 ; sys_platform == "linux" or sys_platform == "darwin" +aiodns==3.3.0 ; sys_platform == "linux" or sys_platform == "darwin" # via -r requirements/runtime-deps.in aiohappyeyeballs==2.6.1 # via -r requirements/runtime-deps.in diff --git a/requirements/test.txt b/requirements/test.txt index aec39654171..02891dd04e7 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/test.txt --resolver=backtracking --strip-extras requirements/test.in # -aiodns==3.2.0 ; sys_platform == "linux" or sys_platform == "darwin" +aiodns==3.3.0 ; sys_platform == "linux" or sys_platform == "darwin" # via -r requirements/runtime-deps.in aiohappyeyeballs==2.6.1 # via -r requirements/runtime-deps.in From f3d2fbb5119fd610cc0100e3cb9281c782713af4 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: 
Fri, 2 May 2025 19:17:39 +0000 Subject: [PATCH 17/90] Bump pip from 25.1 to 25.1.1 (#10822) Bumps [pip](https://github.com/pypa/pip) from 25.1 to 25.1.1.
Changelog

Sourced from pip's changelog.

25.1.1 (2025-05-02)

Bug Fixes

  • Fix req.source_dir AssertionError when using the legacy resolver. ([#13353](https://github.com/pypa/pip/issues/13353))
  • Fix crash on Python 3.9.6 and lower when pip failed to compile a Python module during installation. ([#13364](https://github.com/pypa/pip/issues/13364))
  • Names in dependency group includes are now normalized before lookup, which fixes incorrect Dependency group '...' not found errors. ([#13372](https://github.com/pypa/pip/issues/13372))
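
The dependency-group fix concerns name normalization: PEP 735 group names are compared the way package names are (PEP 503 rules), so lookups must canonicalize first. A minimal sketch of the rule; canonicalize_name here mirrors packaging.utils.canonicalize_name:

```python
import re


def canonicalize_name(name: str) -> str:
    # PEP 503 normalization: lowercase, collapse runs of "-", "_", "." into "-".
    return re.sub(r"[-_.]+", "-", name).lower()


groups = {"test-group": ["pytest", "coverage"]}
for requested in ("Test.Group", "TEST_GROUP", "test-group"):
    assert canonicalize_name(requested) in groups  # all spellings hit the same group
```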

Vendored Libraries

  • Fix issues with using tomllib from the stdlib if available, rather than tomli
  • Upgrade dependency-groups to 1.3.1
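
The tomllib entry refers to the usual stdlib-first import pattern: use tomllib on Python 3.11+ and fall back to the vendored tomli elsewhere. A sketch:

```python
import sys

if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib  # backport with the same API

with open("pyproject.toml", "rb") as f:  # both parsers require binary mode
    config = tomllib.load(f)
print(config.get("project", {}).get("name"))
```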
Commits
  • 01857ef Bump for release
  • 08d8bb9 Merge pull request #13374 from pfmoore/fixups
  • 2bff84e Merge pull request #13363 from sbidoul/fix-source_dir-assert
  • 644e71d News file fixups
  • 426856f Merge pull request #13364 from ichard26/bugfix/python39
  • b7e3aea Merge pull request #13356 from eli-schwartz/tomllib
  • 8c678fe Merge pull request #13373 from sirosen/update-vendored-dependency-groups
  • 7d00639 Update newsfiles for dependency-groups patch
  • 6d28bbf Update version of dependency-groups to v1.3.1
  • 94bd66d Revert StreamWrapper removal to restore Python 3.9.{0,6} compat
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pip&package-manager=pip&previous-version=25.1&new-version=25.1.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index c13b965cc39..7cb5ccf0c20 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -297,7 +297,7 @@ zlib-ng==0.5.1 # -r requirements/test.in # The following packages are considered to be unsafe in a requirements file: -pip==25.1 +pip==25.1.1 # via pip-tools setuptools==80.1.0 # via diff --git a/requirements/dev.txt b/requirements/dev.txt index bc15a7f0713..ca22e9c09c0 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -288,7 +288,7 @@ zlib-ng==0.5.1 # -r requirements/test.in # The following packages are considered to be unsafe in a requirements file: -pip==25.1 +pip==25.1.1 # via pip-tools setuptools==80.1.0 # via From 1d6961ab648834df29bde4f4d431e9e0cd00dc9d Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Sat, 3 May 2025 18:25:41 -0500 Subject: [PATCH 18/90] [PR #10823/d17f2a4c backport][3.12] Remove constraint that prevented aiodns from being installed on Windows (#10825) Co-authored-by: J. Nick Koston closes #8121 --- CHANGES/10823.packaging.rst | 3 +++ requirements/runtime-deps.in | 2 +- setup.cfg | 3 +-- 3 files changed, 5 insertions(+), 3 deletions(-) create mode 100644 CHANGES/10823.packaging.rst diff --git a/CHANGES/10823.packaging.rst b/CHANGES/10823.packaging.rst new file mode 100644 index 00000000000..c65f8bea795 --- /dev/null +++ b/CHANGES/10823.packaging.rst @@ -0,0 +1,3 @@ +``aiodns`` is now installed on Windows with speedups extra -- by :user:`bdraco`. + +As of ``aiodns`` 3.3.0, ``SelectorEventLoop`` is no longer required when using ``pycares`` 4.7.0 or later. diff --git a/requirements/runtime-deps.in b/requirements/runtime-deps.in index 425abdc85f6..7b0382a7a2b 100644 --- a/requirements/runtime-deps.in +++ b/requirements/runtime-deps.in @@ -1,6 +1,6 @@ # Extracted from `setup.cfg` via `make sync-direct-runtime-deps` -aiodns >= 3.2.0; sys_platform=="linux" or sys_platform=="darwin" +aiodns >= 3.3.0 aiohappyeyeballs >= 2.5.0 aiosignal >= 1.1.2 async-timeout >= 4.0, < 6.0 ; python_version < "3.11" diff --git a/setup.cfg b/setup.cfg index 83b33d01532..649a5aaa4eb 100644 --- a/setup.cfg +++ b/setup.cfg @@ -67,8 +67,7 @@ install_requires = [options.extras_require] speedups = - # required c-ares (aiodns' backend) will not build on windows - aiodns >= 3.2.0; sys_platform=="linux" or sys_platform=="darwin" + aiodns >= 3.3.0 Brotli; platform_python_implementation == 'CPython' brotlicffi; platform_python_implementation != 'CPython' From 2e00ed5db25466ed44b104677450c3d1265fcba9 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Sun, 4 May 2025 01:58:29 +0000 Subject: [PATCH 19/90] [PR #10797/ceed5028 backport][3.12] Build armv7l manylinux wheels (#10827) Co-authored-by: J. 
Nick Koston --- .github/workflows/ci-cd.yml | 3 +++ CHANGES/10797.feature.rst | 1 + 2 files changed, 4 insertions(+) create mode 100644 CHANGES/10797.feature.rst diff --git a/.github/workflows/ci-cd.yml b/.github/workflows/ci-cd.yml index 564aa1fea14..daa701c2aa9 100644 --- a/.github/workflows/ci-cd.yml +++ b/.github/workflows/ci-cd.yml @@ -364,6 +364,9 @@ jobs: - os: ubuntu-latest qemu: s390x musl: musllinux + - os: ubuntu-latest + qemu: armv7l + musl: "" - os: ubuntu-latest qemu: armv7l musl: musllinux diff --git a/CHANGES/10797.feature.rst b/CHANGES/10797.feature.rst new file mode 100644 index 00000000000..fc68d09f34e --- /dev/null +++ b/CHANGES/10797.feature.rst @@ -0,0 +1 @@ +Started building armv7l manylinux wheels -- by :user:`bdraco`. From 9e9b8cd2e097ed0a357adc4b53b8a0cc702debd2 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Sun, 4 May 2025 14:11:17 +0100 Subject: [PATCH 20/90] [PR #10798/73a8de00 backport][3.12] Fix error messages grammar (#10828) **This is a backport of PR #10798 as merged into master (73a8de00014e53ebcd2dded06b0932cca96c0e92).** Co-authored-by: David Xia --- .github/PULL_REQUEST_TEMPLATE.md | 4 ++-- aiohttp/_http_parser.pyx | 4 ++-- aiohttp/http_exceptions.py | 2 +- aiohttp/http_parser.py | 4 ++-- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index d4b1dba4340..7a34e15c9bd 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -24,8 +24,8 @@ entertain early in the review process. Thank you in advance! ## Related issue number - - + + ## Checklist diff --git a/aiohttp/_http_parser.pyx b/aiohttp/_http_parser.pyx index 19dc3e63b74..16893f00e74 100644 --- a/aiohttp/_http_parser.pyx +++ b/aiohttp/_http_parser.pyx @@ -506,10 +506,10 @@ cdef class HttpParser: if self._payload is not None: if self._cparser.flags & cparser.F_CHUNKED: raise TransferEncodingError( - "Not enough data for satisfy transfer length header.") + "Not enough data to satisfy transfer length header.") elif self._cparser.flags & cparser.F_CONTENT_LENGTH: raise ContentLengthError( - "Not enough data for satisfy content length header.") + "Not enough data to satisfy content length header.") elif cparser.llhttp_get_errno(self._cparser) != cparser.HPE_OK: desc = cparser.llhttp_get_error_reason(self._cparser) raise PayloadEncodingError(desc.decode('latin-1')) diff --git a/aiohttp/http_exceptions.py b/aiohttp/http_exceptions.py index b8dda999acf..773830211e6 100644 --- a/aiohttp/http_exceptions.py +++ b/aiohttp/http_exceptions.py @@ -71,7 +71,7 @@ class TransferEncodingError(PayloadEncodingError): class ContentLengthError(PayloadEncodingError): - """Not enough data for satisfy content length header.""" + """Not enough data to satisfy content length header.""" class LineTooLong(BadHttpMessage): diff --git a/aiohttp/http_parser.py b/aiohttp/http_parser.py index 1b8b5b4d49e..db61ab5264c 100644 --- a/aiohttp/http_parser.py +++ b/aiohttp/http_parser.py @@ -804,11 +804,11 @@ def feed_eof(self) -> None: self.payload.feed_eof() elif self._type == ParseState.PARSE_LENGTH: raise ContentLengthError( - "Not enough data for satisfy content length header." + "Not enough data to satisfy content length header." ) elif self._type == ParseState.PARSE_CHUNKED: raise TransferEncodingError( - "Not enough data for satisfy transfer length header." + "Not enough data to satisfy transfer length header." 
) def feed_data( From b227daeab404036d4e256945d49c9872c6dbdcb0 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 5 May 2025 11:38:32 +0000 Subject: [PATCH 21/90] Bump cryptography from 44.0.2 to 44.0.3 (#10833) Bumps [cryptography](https://github.com/pyca/cryptography) from 44.0.2 to 44.0.3.
Changelog

Sourced from cryptography's changelog.

44.0.3 - 2025-05-02

  • Fixed compilation when using LibreSSL 4.1.0.

Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=44.0.2&new-version=44.0.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 4 ++-- requirements/dev.txt | 4 ++-- requirements/lint.txt | 2 +- requirements/test.txt | 4 ++-- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 7cb5ccf0c20..38fad946d5c 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/constraints.txt --resolver=backtracking --strip-extras requirements/constraints.in # -aiodns==3.3.0 ; sys_platform == "linux" or sys_platform == "darwin" +aiodns==3.3.0 # via # -r requirements/lint.in # -r requirements/runtime-deps.in @@ -58,7 +58,7 @@ coverage==7.8.0 # via # -r requirements/test.in # pytest-cov -cryptography==44.0.2 +cryptography==44.0.3 # via # pyjwt # trustme diff --git a/requirements/dev.txt b/requirements/dev.txt index ca22e9c09c0..e9fa57ebecd 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/dev.txt --resolver=backtracking --strip-extras requirements/dev.in # -aiodns==3.3.0 ; sys_platform == "linux" or sys_platform == "darwin" +aiodns==3.3.0 # via # -r requirements/lint.in # -r requirements/runtime-deps.in @@ -58,7 +58,7 @@ coverage==7.8.0 # via # -r requirements/test.in # pytest-cov -cryptography==44.0.2 +cryptography==44.0.3 # via # pyjwt # trustme diff --git a/requirements/lint.txt b/requirements/lint.txt index 9b348fa9d47..8f161696935 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -21,7 +21,7 @@ cfgv==3.4.0 # via pre-commit click==8.1.8 # via slotscheck -cryptography==44.0.2 +cryptography==44.0.3 # via trustme distlib==0.3.9 # via virtualenv diff --git a/requirements/test.txt b/requirements/test.txt index 02891dd04e7..83f10badeac 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/test.txt --resolver=backtracking --strip-extras requirements/test.in # -aiodns==3.3.0 ; sys_platform == "linux" or sys_platform == "darwin" +aiodns==3.3.0 # via -r requirements/runtime-deps.in aiohappyeyeballs==2.6.1 # via -r requirements/runtime-deps.in @@ -31,7 +31,7 @@ coverage==7.8.0 # via # -r requirements/test.in # pytest-cov -cryptography==44.0.2 +cryptography==44.0.3 # via trustme exceptiongroup==1.2.2 # via pytest From 8ac93fc6545555df95b8533f65979cd63692124a Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 5 May 2025 11:41:59 +0000 Subject: [PATCH 22/90] Bump setuptools from 80.1.0 to 80.3.1 (#10834) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [setuptools](https://github.com/pypa/setuptools) from 80.1.0 to 80.3.1.
Changelog

Sourced from setuptools's changelog.

v80.3.1

Bugfixes

  • Restored select attributes in easy_install for temporary pbr compatibility. (#4976)

v80.3.0

Features

v80.2.0

Features

  • Restored support for install_scripts --executable (and classic behavior for the executable for those invocations). Instead, build_editable provides the portable form of the executables for downstream installers to rewrite. (#4934)
Commits
  • f37845b Bump version: 80.3.0 → 80.3.1
  • a6f8db0 Merge pull request #4980 from pypa/debt/4976-pbr-compat
  • 05cf544 Add news fragment.
  • 5b39e4e Add the deprecation warning to attribute access.
  • 30c0038 Render the attributes dynamically.
  • d622935 Restore ScriptWriter and sys_executable properties.
  • 88bd892 Add a failing integration test. Ref #4976
  • 9dccfa4 Moved pbr setup into a fixture.
  • af8b322 Bump version: 80.2.0 → 80.3.0
  • e7b8084 Merge pull request #4963 from pypa/debt/remove-easy-install
  • Additional commits viewable in compare view
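
"Render the attributes dynamically" together with "Add the deprecation warning to attribute access" describes the PEP 562 module-level __getattr__ pattern. A sketch with illustrative names, not setuptools' actual internals:

```python
import warnings

_LEGACY = {"sys_executable": "/usr/bin/python3"}  # hypothetical legacy attribute


def __getattr__(name: str):
    if name in _LEGACY:
        warnings.warn(
            f"{name} is deprecated and will be removed",
            DeprecationWarning,
            stacklevel=2,
        )
        return _LEGACY[name]
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```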

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=setuptools&package-manager=pip&previous-version=80.1.0&new-version=80.3.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 38fad946d5c..6d961fe3bd0 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -299,7 +299,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1.1 # via pip-tools -setuptools==80.1.0 +setuptools==80.3.1 # via # incremental # pip-tools diff --git a/requirements/dev.txt b/requirements/dev.txt index e9fa57ebecd..438e4260559 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -290,7 +290,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1.1 # via pip-tools -setuptools==80.1.0 +setuptools==80.3.1 # via # incremental # pip-tools diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index 4a910c84110..5024465497b 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -76,5 +76,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.1.0 +setuptools==80.3.1 # via incremental diff --git a/requirements/doc.txt b/requirements/doc.txt index cd4eb34e8e1..6c3accba1ae 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -69,5 +69,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.1.0 +setuptools==80.3.1 # via incremental From beaa695184ca2a1fcb2ba6338a10202c1666cddc Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 5 May 2025 19:32:40 +0000 Subject: [PATCH 23/90] Bump mypy-extensions from 1.0.0 to 1.1.0 (#10789) Bumps [mypy-extensions](https://github.com/python/mypy_extensions) from 1.0.0 to 1.1.0.
Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=mypy-extensions&package-manager=pip&previous-version=1.0.0&new-version=1.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- requirements/test.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 6d961fe3bd0..a60e47c7bf1 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -124,7 +124,7 @@ mypy==1.15.0 ; implementation_name == "cpython" # via # -r requirements/lint.in # -r requirements/test.in -mypy-extensions==1.0.0 +mypy-extensions==1.1.0 # via mypy nodeenv==1.9.1 # via pre-commit diff --git a/requirements/dev.txt b/requirements/dev.txt index 438e4260559..3abd04c1316 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -121,7 +121,7 @@ mypy==1.15.0 ; implementation_name == "cpython" # via # -r requirements/lint.in # -r requirements/test.in -mypy-extensions==1.0.0 +mypy-extensions==1.1.0 # via mypy nodeenv==1.9.1 # via pre-commit diff --git a/requirements/lint.txt b/requirements/lint.txt index 8f161696935..c5e09327cac 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -47,7 +47,7 @@ mdurl==0.1.2 # via markdown-it-py mypy==1.15.0 ; implementation_name == "cpython" # via -r requirements/lint.in -mypy-extensions==1.0.0 +mypy-extensions==1.1.0 # via mypy nodeenv==1.9.1 # via pre-commit diff --git a/requirements/test.txt b/requirements/test.txt index 83f10badeac..b48ade24f17 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -65,7 +65,7 @@ multidict==6.4.3 # yarl mypy==1.15.0 ; implementation_name == "cpython" # via -r requirements/test.in -mypy-extensions==1.0.0 +mypy-extensions==1.1.0 # via mypy packaging==25.0 # via From 6be1a5a1354d7483d65a7550f213c943210f7cd7 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 5 May 2025 19:32:51 +0000 Subject: [PATCH 24/90] Bump pycares from 4.7.0 to 4.8.0 (#10832) Bumps [pycares](https://github.com/saghul/pycares) from 4.7.0 to 4.8.0.
Commits
  • 6405d1f Set version to 4.8.0
  • a563896 Add ARES_FLAG_NO_DFLT_SVR and ARES_FLAG_EDNS to API
  • da561b2 Update bundled c-ares to v1.34.5 (#221)
  • 129c07c Cancel previous CI jobs on pull request update
  • See full diff in compare view
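
The flags commit simply exposes two c-ares channel options to Python. Assuming pycares >= 4.8.0 and taking the constant names from the commit message above, enabling them would look roughly like:

```python
import pycares

# ARES_FLAG_EDNS enables EDNS in queries; ARES_FLAG_NO_DFLT_SVR refuses to
# fall back to default servers. Both constants are new in 4.8.0 per the
# commit above; treat this as a sketch, not verified API documentation.
channel = pycares.Channel(flags=pycares.ARES_FLAG_EDNS | pycares.ARES_FLAG_NO_DFLT_SVR)
```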

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pycares&package-manager=pip&previous-version=4.7.0&new-version=4.8.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/base.txt | 4 ++-- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- requirements/runtime-deps.txt | 4 ++-- requirements/test.txt | 2 +- 6 files changed, 8 insertions(+), 8 deletions(-) diff --git a/requirements/base.txt b/requirements/base.txt index e456f72ab22..7542a9d4bc2 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/base.txt --strip-extras requirements/base.in # -aiodns==3.3.0 ; sys_platform == "linux" or sys_platform == "darwin" +aiodns==3.3.0 # via -r requirements/runtime-deps.in aiohappyeyeballs==2.6.1 # via -r requirements/runtime-deps.in @@ -36,7 +36,7 @@ propcache==0.3.1 # via # -r requirements/runtime-deps.in # yarl -pycares==4.7.0 +pycares==4.8.0 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/constraints.txt b/requirements/constraints.txt index a60e47c7bf1..542704645da 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -148,7 +148,7 @@ propcache==0.3.1 # yarl proxy-py==2.4.10 # via -r requirements/test.in -pycares==4.7.0 +pycares==4.8.0 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/dev.txt b/requirements/dev.txt index 3abd04c1316..2211e7101e9 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -145,7 +145,7 @@ propcache==0.3.1 # yarl proxy-py==2.4.10 # via -r requirements/test.in -pycares==4.7.0 +pycares==4.8.0 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/lint.txt b/requirements/lint.txt index c5e09327cac..aef877972af 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -59,7 +59,7 @@ pluggy==1.5.0 # via pytest pre-commit==4.2.0 # via -r requirements/lint.in -pycares==4.7.0 +pycares==4.8.0 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/runtime-deps.txt b/requirements/runtime-deps.txt index cb69af4ee1f..ca591313650 100644 --- a/requirements/runtime-deps.txt +++ b/requirements/runtime-deps.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/runtime-deps.txt --strip-extras requirements/runtime-deps.in # -aiodns==3.3.0 ; sys_platform == "linux" or sys_platform == "darwin" +aiodns==3.3.0 # via -r requirements/runtime-deps.in aiohappyeyeballs==2.6.1 # via -r requirements/runtime-deps.in @@ -32,7 +32,7 @@ propcache==0.3.1 # via # -r requirements/runtime-deps.in # yarl -pycares==4.7.0 +pycares==4.8.0 # via aiodns pycparser==2.22 # via cffi diff --git a/requirements/test.txt b/requirements/test.txt index b48ade24f17..ba44a71270b 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -79,7 +79,7 @@ propcache==0.3.1 # yarl proxy-py==2.4.10 # via -r requirements/test.in -pycares==4.7.0 +pycares==4.8.0 # via aiodns pycparser==2.22 # via cffi From 8c74667d4a8e51458e3b0f2eabe10c5bfbcd4d90 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 6 May 2025 11:00:42 +0000 Subject: [PATCH 25/90] Bump virtualenv from 20.30.0 to 20.31.1 (#10836) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [virtualenv](https://github.com/pypa/virtualenv) from 20.30.0 to 20.31.1.
Release notes

Sourced from virtualenv's releases.

20.31.1

What's Changed

Full Changelog: https://github.com/pypa/virtualenv/compare/20.31.0...20.31.1

20.31.0

What's Changed

New Contributors

Full Changelog: https://github.com/pypa/virtualenv/compare/20.30.0...20.31.0

Changelog

Sourced from virtualenv's changelog.

v20.31.1 (2025-05-05)

Bugfixes - 20.31.1

  • Upgrade embedded wheels:
      • pip to 25.1.1 from 25.1
      • setuptools to 80.3.1 from 78.1.0 (#2880)

v20.31.0 (2025-05-05)

Features - 20.31.0

  • No longer bundle wheel wheels (except on Python 3.8); setuptools now includes native bdist_wheel support. Update pip to 25.1. (#2868)

Bugfixes - 20.31.0

  • get_embed_wheel() no longer fails with a TypeError when it is called with an unknown distribution. (#2877)
  • Fix HelpFormatter error with Python 3.14.0b1. (#2878)
Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=virtualenv&package-manager=pip&previous-version=20.30.0&new-version=20.31.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 542704645da..3a68e5577e0 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -283,7 +283,7 @@ uvloop==0.21.0 ; platform_system != "Windows" # -r requirements/lint.in valkey==6.1.0 # via -r requirements/lint.in -virtualenv==20.30.0 +virtualenv==20.31.1 # via pre-commit wait-for-it==2.3.0 # via -r requirements/test.in diff --git a/requirements/dev.txt b/requirements/dev.txt index 2211e7101e9..37633e5db9e 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -274,7 +274,7 @@ uvloop==0.21.0 ; platform_system != "Windows" and implementation_name == "cpytho # -r requirements/lint.in valkey==6.1.0 # via -r requirements/lint.in -virtualenv==20.30.0 +virtualenv==20.31.1 # via pre-commit wait-for-it==2.3.0 # via -r requirements/test.in diff --git a/requirements/lint.txt b/requirements/lint.txt index aef877972af..3d44d329d5c 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -111,7 +111,7 @@ uvloop==0.21.0 ; platform_system != "Windows" # via -r requirements/lint.in valkey==6.1.0 # via -r requirements/lint.in -virtualenv==20.30.0 +virtualenv==20.31.1 # via pre-commit zlib-ng==0.5.1 # via -r requirements/lint.in From b65daf6b8a71056feb8b55ee8bbeda568d09ee13 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 8 May 2025 11:18:59 +0000 Subject: [PATCH 26/90] Bump platformdirs from 4.3.7 to 4.3.8 (#10840) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [platformdirs](https://github.com/tox-dev/platformdirs) from 4.3.7 to 4.3.8.
Release notes

Sourced from platformdirs's releases.

4.3.8

What's Changed

New Contributors

Full Changelog: https://github.com/tox-dev/platformdirs/compare/4.3.7...4.3.8

Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=platformdirs&package-manager=pip&previous-version=4.3.7&new-version=4.3.8)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 3a68e5577e0..ea02b7fac7c 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -136,7 +136,7 @@ packaging==25.0 # sphinx pip-tools==7.4.1 # via -r requirements/dev.in -platformdirs==4.3.7 +platformdirs==4.3.8 # via virtualenv pluggy==1.5.0 # via pytest diff --git a/requirements/dev.txt b/requirements/dev.txt index 37633e5db9e..fa9b70b829a 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -133,7 +133,7 @@ packaging==25.0 # sphinx pip-tools==7.4.1 # via -r requirements/dev.in -platformdirs==4.3.7 +platformdirs==4.3.8 # via virtualenv pluggy==1.5.0 # via pytest diff --git a/requirements/lint.txt b/requirements/lint.txt index 3d44d329d5c..856101b364c 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -53,7 +53,7 @@ nodeenv==1.9.1 # via pre-commit packaging==25.0 # via pytest -platformdirs==4.3.7 +platformdirs==4.3.8 # via virtualenv pluggy==1.5.0 # via pytest From 3d34912d8c1bf9d4a7a74ba25a2b56b7f24f0431 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 9 May 2025 11:18:28 +0000 Subject: [PATCH 27/90] Bump virtualenv from 20.31.1 to 20.31.2 (#10844) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [virtualenv](https://github.com/pypa/virtualenv) from 20.31.1 to 20.31.2.
Release notes

Sourced from virtualenv's releases.

20.31.2

What's Changed

Full Changelog: https://github.com/pypa/virtualenv/compare/20.31.1...20.31.2

Changelog

Sourced from virtualenv's changelog.

v20.31.2 (2025-05-08)

No significant changes.

Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=virtualenv&package-manager=pip&previous-version=20.31.1&new-version=20.31.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index ea02b7fac7c..1de93e54a91 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -283,7 +283,7 @@ uvloop==0.21.0 ; platform_system != "Windows" # -r requirements/lint.in valkey==6.1.0 # via -r requirements/lint.in -virtualenv==20.31.1 +virtualenv==20.31.2 # via pre-commit wait-for-it==2.3.0 # via -r requirements/test.in diff --git a/requirements/dev.txt b/requirements/dev.txt index fa9b70b829a..45147987469 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -274,7 +274,7 @@ uvloop==0.21.0 ; platform_system != "Windows" and implementation_name == "cpytho # -r requirements/lint.in valkey==6.1.0 # via -r requirements/lint.in -virtualenv==20.31.1 +virtualenv==20.31.2 # via pre-commit wait-for-it==2.3.0 # via -r requirements/test.in diff --git a/requirements/lint.txt b/requirements/lint.txt index 856101b364c..8b36761a25f 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -111,7 +111,7 @@ uvloop==0.21.0 ; platform_system != "Windows" # via -r requirements/lint.in valkey==6.1.0 # via -r requirements/lint.in -virtualenv==20.31.1 +virtualenv==20.31.2 # via pre-commit zlib-ng==0.5.1 # via -r requirements/lint.in From 9671d88df758bed627ba14c6b099f05a47010e01 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 9 May 2025 11:27:41 +0000 Subject: [PATCH 28/90] Bump snowballstemmer from 2.2.0 to 3.0.0.1 (#10846) Bumps [snowballstemmer](https://github.com/snowballstem/snowball) from 2.2.0 to 3.0.0.1.
Changelog

Sourced from snowballstemmer's changelog.

Snowball 3.0.1 (2025-05-09)

Python

  • The `__init__.py` in 3.0.0 was incorrectly generated due to a missing build dependency and the list of algorithms was empty. First reported by laymonage. Thanks to Dmitry Shachnev, Henry Schreiner and Adam Turner for diagnosing and fixing. (#229, #230, #231)

  • Add trove classifiers for Armenian and Yiddish which have now been registered with PyPI. Thanks to Henry Schreiner and Dmitry Shachnev. (#228)

  • Update documented details of Python 2 support in old versions.
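
The broken `__init__.py` described above is easy to smoke-test. A minimal check (hedged sketch; `algorithms()`, `stemmer()` and `stemWord()` are the package's public helpers):

```python
import snowballstemmer

# On the broken 3.0.0 build this list came back empty; 3.0.0.1/3.0.1 restore it.
print(snowballstemmer.algorithms())

stemmer = snowballstemmer.stemmer("english")
print(stemmer.stemWord("running"))  # -> "run"
```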

Snowball 3.0.0 (2025-05-08)

Ada

  • Bug fixes:

    • Fix invalid Ada code generated for Snowball loop (it was partly Pascal!) None of the stemmers shipped in previous releases triggered this bug, but the Turkish stemmer now does.

    • The Ada runtime was not tracking the current length of the string but instead used the current limit value or some other substitute, which manifested as various incorrect behaviours for code inside of setlimit.

    • size was incorrectly returning the difference between the limit and the backwards limit.

    • lenof or sizeof on a string variable generated Ada code that didn't even compile.

    • Fix incorrect preconditions on some methods in the runtime.

    • Fix bug in runtime code used by attach, insert, <- and string variable assignment when a (sub)string was replaced with a larger string. This bug was triggered by code in the Kraaij-Pohlmann Dutch stemmer implementation (which was previously not enabled by default but is now the standard Dutch stemmer).

    • Fix invalid code generated for insert, <- and string variable assignment. This bug was triggered by code in the Kraaij-Pohlmann Dutch stemmer implementation (which was previously not enabled by default but is now the standard Dutch stemmer).

... (truncated)

Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=snowballstemmer&package-manager=pip&previous-version=2.2.0&new-version=3.0.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 1de93e54a91..aa545f6ed8a 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -215,7 +215,7 @@ six==1.17.0 # via python-dateutil slotscheck==0.19.1 # via -r requirements/lint.in -snowballstemmer==2.2.0 +snowballstemmer==3.0.0.1 # via sphinx sphinx==8.1.3 # via diff --git a/requirements/dev.txt b/requirements/dev.txt index 45147987469..2ad0375dafe 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -209,7 +209,7 @@ six==1.17.0 # via python-dateutil slotscheck==0.19.1 # via -r requirements/lint.in -snowballstemmer==2.2.0 +snowballstemmer==3.0.0.1 # via sphinx sphinx==8.1.3 # via diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index 5024465497b..f36508ff1d9 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -40,7 +40,7 @@ requests==2.32.3 # via # sphinx # sphinxcontrib-spelling -snowballstemmer==2.2.0 +snowballstemmer==3.0.0.1 # via sphinx sphinx==8.1.3 # via diff --git a/requirements/doc.txt b/requirements/doc.txt index 6c3accba1ae..5e23790fad8 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -36,7 +36,7 @@ pygments==2.19.1 # via sphinx requests==2.32.3 # via sphinx -snowballstemmer==2.2.0 +snowballstemmer==3.0.0.1 # via sphinx sphinx==8.1.3 # via From 3e1251f0cdb15bf66fdaded383ec7e2cb1ca0588 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 9 May 2025 11:45:46 +0000 Subject: [PATCH 29/90] Bump aiodns from 3.3.0 to 3.4.0 (#10845) Bumps [aiodns](https://github.com/saghul/aiodns) from 3.3.0 to 3.4.0.
Changelog

Sourced from aiodns's changelog.

3.4.0

  • Added fallback to sock_state_cb if event_thread creation fails (#151)
    • Improved reliability on systems with exhausted inotify watches
    • Implemented transparent fallback mechanism to ensure DNS resolution continues to work
  • Implemented strict typing (#138)
    • Added comprehensive type annotations
    • Improved mypy configuration
    • Added py.typed marker file
  • Updated dependencies
    • Bumped pycares from 4.7.0 to 4.8.0 (#149)
  • Added support for Python 3.13 (#153)
    • Updated CI configuration to test with Python 3.13
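
The fallback is the interesting piece here. A rough sketch of the mechanism with pycares (illustrative only, not aiodns's actual internals; the exception type caught below is an assumption):

```python
import pycares

def make_channel(sock_state_cb) -> pycares.Channel:
    """Prefer c-ares' own event thread; fall back to socket-state callbacks."""
    try:
        # event_thread=True lets c-ares drive socket I/O on its own thread.
        return pycares.Channel(event_thread=True)
    except RuntimeError:
        # Thread creation can fail (e.g. exhausted inotify watches); fall
        # back to callback mode and let the caller wire fds into the loop.
        return pycares.Channel(sock_state_cb=sock_state_cb)
```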
Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aiodns&package-manager=pip&previous-version=3.3.0&new-version=3.4.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
--------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Sam Bull --- aiohttp/resolver.py | 7 ++----- requirements/base.txt | 2 +- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- requirements/runtime-deps.txt | 2 +- requirements/test.txt | 2 +- 7 files changed, 8 insertions(+), 11 deletions(-) diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py index e14179cc8a2..a5af5fddda6 100644 --- a/aiohttp/resolver.py +++ b/aiohttp/resolver.py @@ -1,6 +1,6 @@ import asyncio import socket -from typing import Any, Dict, List, Optional, Tuple, Type, Union +from typing import Any, Dict, Final, List, Optional, Tuple, Type, Union from .abc import AbstractResolver, ResolveResult @@ -153,10 +153,7 @@ async def resolve( async def _resolve_with_query( self, host: str, port: int = 0, family: int = socket.AF_INET ) -> List[Dict[str, Any]]: - if family == socket.AF_INET6: - qtype = "AAAA" - else: - qtype = "A" + qtype: Final = "AAAA" if family == socket.AF_INET6 else "A" try: resp = await self._resolver.query(host, qtype) diff --git a/requirements/base.txt b/requirements/base.txt index 7542a9d4bc2..1a0c6fe1046 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/base.txt --strip-extras requirements/base.in # -aiodns==3.3.0 +aiodns==3.4.0 # via -r requirements/runtime-deps.in aiohappyeyeballs==2.6.1 # via -r requirements/runtime-deps.in diff --git a/requirements/constraints.txt b/requirements/constraints.txt index aa545f6ed8a..ba568e73c18 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/constraints.txt --resolver=backtracking --strip-extras requirements/constraints.in # -aiodns==3.3.0 +aiodns==3.4.0 # via # -r requirements/lint.in # -r requirements/runtime-deps.in diff --git a/requirements/dev.txt b/requirements/dev.txt index 2ad0375dafe..e6ade218def 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/dev.txt --resolver=backtracking --strip-extras requirements/dev.in # -aiodns==3.3.0 +aiodns==3.4.0 # via # -r requirements/lint.in # -r requirements/runtime-deps.in diff --git a/requirements/lint.txt b/requirements/lint.txt index 8b36761a25f..97854dddbc5 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/lint.txt --resolver=backtracking --strip-extras requirements/lint.in # -aiodns==3.3.0 +aiodns==3.4.0 # via -r requirements/lint.in annotated-types==0.7.0 # via pydantic diff --git a/requirements/runtime-deps.txt b/requirements/runtime-deps.txt index ca591313650..863d4525cad 100644 --- a/requirements/runtime-deps.txt +++ b/requirements/runtime-deps.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/runtime-deps.txt --strip-extras requirements/runtime-deps.in # -aiodns==3.3.0 +aiodns==3.4.0 # via -r requirements/runtime-deps.in aiohappyeyeballs==2.6.1 # via -r requirements/runtime-deps.in diff --git a/requirements/test.txt b/requirements/test.txt index ba44a71270b..4949defcef3 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/test.txt --resolver=backtracking --strip-extras requirements/test.in # -aiodns==3.3.0 +aiodns==3.4.0 # 
via -r requirements/runtime-deps.in aiohappyeyeballs==2.6.1 # via -r requirements/runtime-deps.in From 34b8d0da148ece386c861173a24b8636fab556ed Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 12 May 2025 11:21:17 +0000 Subject: [PATCH 30/90] Bump snowballstemmer from 3.0.0.1 to 3.0.1 (#10856) Bumps [snowballstemmer](https://github.com/snowballstem/snowball) from 3.0.0.1 to 3.0.1.
Changelog

Sourced from snowballstemmer's changelog.

Snowball 3.0.1 (2025-05-09)

Python

  • The `__init__.py` in 3.0.0 was incorrectly generated due to a missing build dependency and the list of algorithms was empty. First reported by laymonage. Thanks to Dmitry Shachnev, Henry Schreiner and Adam Turner for diagnosing and fixing. (#229, #230, #231)

  • Add trove classifiers for Armenian and Yiddish which have now been registered with PyPI. Thanks to Henry Schreiner and Dmitry Shachnev. (#228)

  • Update documented details of Python 2 support in old versions.

Snowball 3.0.0 (2025-05-08)

Ada

  • Bug fixes:

    • Fix invalid Ada code generated for Snowball loop (it was partly Pascal!) None of the stemmers shipped in previous releases triggered this bug, but the Turkish stemmer now does.

    • The Ada runtime was not tracking the current length of the string but instead used the current limit value or some other substitute, which manifested as various incorrect behaviours for code inside of setlimit.

    • size was incorrectly returning the difference between the limit and the backwards limit.

    • lenof or sizeof on a string variable generated Ada code that didn't even compile.

    • Fix incorrect preconditions on some methods in the runtime.

    • Fix bug in runtime code used by attach, insert, <- and string variable assignment when a (sub)string was replaced with a larger string. This bug was triggered by code in the Kraaij-Pohlmann Dutch stemmer implementation (which was previously not enabled by default but is now the standard Dutch stemmer).

    • Fix invalid code generated for insert, <- and string variable assignment. This bug was triggered by code in the Kraaij-Pohlmann Dutch stemmer implementation (which was previously not enabled by default but is now the standard Dutch stemmer).

... (truncated)

Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=snowballstemmer&package-manager=pip&previous-version=3.0.0.1&new-version=3.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index ba568e73c18..4edcb8b982a 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -215,7 +215,7 @@ six==1.17.0 # via python-dateutil slotscheck==0.19.1 # via -r requirements/lint.in -snowballstemmer==3.0.0.1 +snowballstemmer==3.0.1 # via sphinx sphinx==8.1.3 # via diff --git a/requirements/dev.txt b/requirements/dev.txt index e6ade218def..19f26d4f6e4 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -209,7 +209,7 @@ six==1.17.0 # via python-dateutil slotscheck==0.19.1 # via -r requirements/lint.in -snowballstemmer==3.0.0.1 +snowballstemmer==3.0.1 # via sphinx sphinx==8.1.3 # via diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index f36508ff1d9..24f8b6852b3 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -40,7 +40,7 @@ requests==2.32.3 # via # sphinx # sphinxcontrib-spelling -snowballstemmer==3.0.0.1 +snowballstemmer==3.0.1 # via sphinx sphinx==8.1.3 # via diff --git a/requirements/doc.txt b/requirements/doc.txt index 5e23790fad8..7c7c8833321 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -36,7 +36,7 @@ pygments==2.19.1 # via sphinx requests==2.32.3 # via sphinx -snowballstemmer==3.0.0.1 +snowballstemmer==3.0.1 # via sphinx sphinx==8.1.3 # via From 772de2edf71103bdda96a7a55f1d13e587be6545 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 12 May 2025 11:27:28 +0000 Subject: [PATCH 31/90] Bump setuptools from 80.3.1 to 80.4.0 (#10857) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [setuptools](https://github.com/pypa/setuptools) from 80.3.1 to 80.4.0.
Changelog

Sourced from setuptools's changelog.

v80.4.0

Features

  • Simplified the error reporting in editable installs. (#4984)
Commits
  • a82f96d Bump version: 80.3.1 → 80.4.0
  • aa4bdf8 Merge pull request #4985 from pypa/feature/user-focused-editable-installs
  • af2f2ba Add news fragment.
  • bcc23a2 Implement the editable debugging tips as a reference to the docs.
  • aa911c6 By default, provide a much more concise error message.
  • See full diff in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=setuptools&package-manager=pip&previous-version=80.3.1&new-version=80.4.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 4edcb8b982a..3b028f12035 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -299,7 +299,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1.1 # via pip-tools -setuptools==80.3.1 +setuptools==80.4.0 # via # incremental # pip-tools diff --git a/requirements/dev.txt b/requirements/dev.txt index 19f26d4f6e4..4821baa3595 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -290,7 +290,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1.1 # via pip-tools -setuptools==80.3.1 +setuptools==80.4.0 # via # incremental # pip-tools diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index 24f8b6852b3..282c9ec50a3 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -76,5 +76,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.3.1 +setuptools==80.4.0 # via incremental diff --git a/requirements/doc.txt b/requirements/doc.txt index 7c7c8833321..265dcbb092e 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -69,5 +69,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.3.1 +setuptools==80.4.0 # via incremental From f08a55fdee4eaa098cbcd75a944f44cb06a5a7c3 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 15 May 2025 11:25:58 +0000 Subject: [PATCH 32/90] Bump setuptools from 80.4.0 to 80.7.1 (#10863) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [setuptools](https://github.com/pypa/setuptools) from 80.4.0 to 80.7.1.
Changelog

Sourced from setuptools's changelog.

v80.7.1

Bugfixes

  • Only attempt to fetch eggs for unsatisfied requirements. (#4998)
  • In installer, when discovering egg dists, let metadata discovery search each egg. (#4998)

v80.7.0

Features

  • Removed usage of pkg_resources from installer. Set an official deadline on the installer deprecation to 2025-10-31. (#4997)

Misc

v80.6.0

Features

  • Added a build dependency on coherent.licensed to inject the declared license text at build time. (#4981)

Misc

v80.5.0

Features

  • Replaced more references to pkg_resources with importlib equivalents. (#3085)

Misc

... (truncated)
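
The pkg_resources removal tracks the wider migration to stdlib `importlib` APIs; a generic before/after sketch of that pattern (not setuptools' actual code):

```python
# Before (deprecated):
#   import pkg_resources
#   version = pkg_resources.get_distribution("setuptools").version

# After, using the standard library:
from importlib.metadata import distribution, version

print(version("setuptools"))
print(distribution("setuptools").metadata["Name"])
```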

Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=setuptools&package-manager=pip&previous-version=80.4.0&new-version=80.7.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 3b028f12035..155f431a317 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -299,7 +299,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1.1 # via pip-tools -setuptools==80.4.0 +setuptools==80.7.1 # via # incremental # pip-tools diff --git a/requirements/dev.txt b/requirements/dev.txt index 4821baa3595..8c5b84e4cdc 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -290,7 +290,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1.1 # via pip-tools -setuptools==80.4.0 +setuptools==80.7.1 # via # incremental # pip-tools diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index 282c9ec50a3..e00e4b52226 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -76,5 +76,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.4.0 +setuptools==80.7.1 # via incremental diff --git a/requirements/doc.txt b/requirements/doc.txt index 265dcbb092e..0ee0b84218e 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -69,5 +69,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.4.0 +setuptools==80.7.1 # via incremental From d615013e12eb096e1b464bf485026c3656e1274f Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 16 May 2025 11:21:57 +0000 Subject: [PATCH 33/90] Bump pluggy from 1.5.0 to 1.6.0 (#10865) Bumps [pluggy](https://github.com/pytest-dev/pluggy) from 1.5.0 to 1.6.0.
Changelog

Sourced from pluggy's changelog.

pluggy 1.6.0 (2025-05-15)

Deprecations and Removals

  • [#556](https://github.com/pytest-dev/pluggy/issues/556): Python 3.8 is no longer supported.

Bug Fixes

  • [#504](https://github.com/pytest-dev/pluggy/issues/504): Fix a regression in pluggy 1.1.0 where using `result.get_result()` (`pluggy.Result.get_result`) on the same failed `Result` causes the exception's traceback to get longer and longer.

  • [#544](https://github.com/pytest-dev/pluggy/issues/544): Correctly pass `StopIteration` through hook wrappers.

    Raising a `StopIteration` in a generator triggers a `RuntimeError`.

    If the `RuntimeError` of a generator has the passed-in `StopIteration` as its cause, resume with that `StopIteration` as the normal exception instead of failing with the `RuntimeError`.

  • [#573](https://github.com/pytest-dev/pluggy/issues/573): Fix Python 3.14 SyntaxError by rearranging code.
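
For context on #544, this is the stock behavior pluggy now unwraps (a minimal illustration, not pluggy's code): PEP 479 turns a `StopIteration` raised inside a generator into a `RuntimeError` whose `__cause__` is the original exception.

```python
def wrapper():
    yield
    raise StopIteration("stop")  # PEP 479: surfaces as RuntimeError

gen = wrapper()
next(gen)
try:
    next(gen)
except RuntimeError as exc:
    # pluggy 1.6.0 detects this case and resumes with the original
    # StopIteration instead of failing with the RuntimeError.
    assert isinstance(exc.__cause__, StopIteration)
```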

Commits
  • fd08ab5 Preparing release 1.6.0
  • c240362 [pre-commit.ci] pre-commit autoupdate (#578)
  • 0ceb558 Merge pull request #546 from RonnyPfannschmidt/ronny/hookwrapper-wrap-legacy
  • 1f4872e [pre-commit.ci] auto fixes from pre-commit.com hooks
  • 4be0c55 add changelog
  • 615c6c5 Merge branch 'main' into hookwrapper-wrap-legacy
  • 2acc644 [pre-commit.ci] pre-commit autoupdate (#577)
  • ea5ada0 [pre-commit.ci] pre-commit autoupdate (#576)
  • dfd250b [pre-commit.ci] pre-commit autoupdate (#575)
  • 1e1862f [pre-commit.ci] pre-commit autoupdate (#574)
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pluggy&package-manager=pip&previous-version=1.5.0&new-version=1.6.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- requirements/test.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 155f431a317..88de39cb86f 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -138,7 +138,7 @@ pip-tools==7.4.1 # via -r requirements/dev.in platformdirs==4.3.8 # via virtualenv -pluggy==1.5.0 +pluggy==1.6.0 # via pytest pre-commit==4.2.0 # via -r requirements/lint.in diff --git a/requirements/dev.txt b/requirements/dev.txt index 8c5b84e4cdc..7f005a26ef0 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -135,7 +135,7 @@ pip-tools==7.4.1 # via -r requirements/dev.in platformdirs==4.3.8 # via virtualenv -pluggy==1.5.0 +pluggy==1.6.0 # via pytest pre-commit==4.2.0 # via -r requirements/lint.in diff --git a/requirements/lint.txt b/requirements/lint.txt index 97854dddbc5..f6ac13607c0 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -55,7 +55,7 @@ packaging==25.0 # via pytest platformdirs==4.3.8 # via virtualenv -pluggy==1.5.0 +pluggy==1.6.0 # via pytest pre-commit==4.2.0 # via -r requirements/lint.in diff --git a/requirements/test.txt b/requirements/test.txt index 4949defcef3..1454e96cd07 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -71,7 +71,7 @@ packaging==25.0 # via # gunicorn # pytest -pluggy==1.5.0 +pluggy==1.6.0 # via pytest propcache==0.3.1 # via From 34d259af3eb4c78ab907af14fe513a75889de633 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Fri, 16 May 2025 19:48:18 +0000 Subject: [PATCH 34/90] [PR #10848/97eae194 backport][3.12] Add benchmark requests without session and alternating clients (#10867) Co-authored-by: J. 
Nick Koston resolver object churn in #10847 --- tests/test_benchmarks_client.py | 61 +++++++++++++++++++++++++++++++-- 1 file changed, 59 insertions(+), 2 deletions(-) diff --git a/tests/test_benchmarks_client.py b/tests/test_benchmarks_client.py index ef2a4d88c92..5e205549e9c 100644 --- a/tests/test_benchmarks_client.py +++ b/tests/test_benchmarks_client.py @@ -4,9 +4,10 @@ import pytest from pytest_codspeed import BenchmarkFixture +from yarl import URL -from aiohttp import hdrs, web -from aiohttp.pytest_plugin import AiohttpClient +from aiohttp import hdrs, request, web +from aiohttp.pytest_plugin import AiohttpClient, AiohttpServer def test_one_hundred_simple_get_requests( @@ -34,6 +35,62 @@ def _run() -> None: loop.run_until_complete(run_client_benchmark()) +def test_one_hundred_simple_get_requests_alternating_clients( + loop: asyncio.AbstractEventLoop, + aiohttp_client: AiohttpClient, + benchmark: BenchmarkFixture, +) -> None: + """Benchmark 100 simple GET requests with alternating clients.""" + message_count = 100 + + async def handler(request: web.Request) -> web.Response: + return web.Response() + + app = web.Application() + app.router.add_route("GET", "/", handler) + + async def run_client_benchmark() -> None: + client1 = await aiohttp_client(app) + client2 = await aiohttp_client(app) + for i in range(message_count): + if i % 2 == 0: + await client1.get("/") + else: + await client2.get("/") + await client1.close() + await client2.close() + + @benchmark + def _run() -> None: + loop.run_until_complete(run_client_benchmark()) + + +def test_one_hundred_simple_get_requests_no_session( + loop: asyncio.AbstractEventLoop, + aiohttp_server: AiohttpServer, + benchmark: BenchmarkFixture, +) -> None: + """Benchmark 100 simple GET requests without a session.""" + message_count = 100 + + async def handler(request: web.Request) -> web.Response: + return web.Response() + + app = web.Application() + app.router.add_route("GET", "/", handler) + server = loop.run_until_complete(aiohttp_server(app)) + url = URL(f"http://{server.host}:{server.port}/") + + async def run_client_benchmark() -> None: + for _ in range(message_count): + async with request("GET", url): + pass + + @benchmark + def _run() -> None: + loop.run_until_complete(run_client_benchmark()) + + def test_one_hundred_simple_get_requests_multiple_methods_route( loop: asyncio.AbstractEventLoop, aiohttp_client: AiohttpClient, From 99a2234a294617088882ff5ac7e1226771018dcf Mon Sep 17 00:00:00 2001 From: "J. 
Nick Koston" Date: Sat, 17 May 2025 23:21:51 -0400 Subject: [PATCH 35/90] [PR #10868/323bdcf backport][3.12] Fix unclosed resources in proxy xfail tests (#10870) --- tests/test_proxy_functional.py | 110 +++++++++++++++++++-------------- 1 file changed, 62 insertions(+), 48 deletions(-) diff --git a/tests/test_proxy_functional.py b/tests/test_proxy_functional.py index 02d77700d96..d0e20eec6b4 100644 --- a/tests/test_proxy_functional.py +++ b/tests/test_proxy_functional.py @@ -5,6 +5,7 @@ import ssl import sys from re import match as match_regex +from typing import Awaitable, Callable from unittest import mock from uuid import uuid4 @@ -13,7 +14,7 @@ from yarl import URL import aiohttp -from aiohttp import web +from aiohttp import ClientResponse, web from aiohttp.client_exceptions import ClientConnectionError from aiohttp.helpers import IS_MACOS, IS_WINDOWS @@ -498,17 +499,22 @@ async def xtest_proxy_https_connect_with_port(proxy_test_server, get_request): @pytest.mark.xfail -async def xtest_proxy_https_send_body(proxy_test_server, loop): - sess = aiohttp.ClientSession(loop=loop) - proxy = await proxy_test_server() - proxy.return_value = {"status": 200, "body": b"1" * (2**20)} - url = "https://www.google.com.ua/search?q=aiohttp proxy" +async def xtest_proxy_https_send_body( + proxy_test_server: Callable[[], Awaitable[mock.Mock]], + loop: asyncio.AbstractEventLoop, +) -> None: + sess = aiohttp.ClientSession() + try: + proxy = await proxy_test_server() + proxy.return_value = {"status": 200, "body": b"1" * (2**20)} + url = "https://www.google.com.ua/search?q=aiohttp proxy" - async with sess.get(url, proxy=proxy.url) as resp: - body = await resp.read() - await sess.close() + async with sess.get(url, proxy=proxy.url) as resp: + body = await resp.read() - assert body == b"1" * (2**20) + assert body == b"1" * (2**20) + finally: + await sess.close() @pytest.mark.xfail @@ -592,42 +598,46 @@ async def xtest_proxy_https_auth(proxy_test_server, get_request): async def xtest_proxy_https_acquired_cleanup(proxy_test_server, loop): url = "https://secure.aiohttp.io/path" - conn = aiohttp.TCPConnector(loop=loop) - sess = aiohttp.ClientSession(connector=conn, loop=loop) - proxy = await proxy_test_server() - - assert 0 == len(conn._acquired) + conn = aiohttp.TCPConnector() + sess = aiohttp.ClientSession(connector=conn) + try: + proxy = await proxy_test_server() - async def request(): - async with sess.get(url, proxy=proxy.url): - assert 1 == len(conn._acquired) + assert 0 == len(conn._acquired) - await request() + async def request() -> None: + async with sess.get(url, proxy=proxy.url): + assert 1 == len(conn._acquired) - assert 0 == len(conn._acquired) + await request() - await sess.close() + assert 0 == len(conn._acquired) + finally: + await sess.close() + await conn.close() @pytest.mark.xfail async def xtest_proxy_https_acquired_cleanup_force(proxy_test_server, loop): url = "https://secure.aiohttp.io/path" - conn = aiohttp.TCPConnector(force_close=True, loop=loop) - sess = aiohttp.ClientSession(connector=conn, loop=loop) - proxy = await proxy_test_server() - - assert 0 == len(conn._acquired) + conn = aiohttp.TCPConnector(force_close=True) + sess = aiohttp.ClientSession(connector=conn) + try: + proxy = await proxy_test_server() - async def request(): - async with sess.get(url, proxy=proxy.url): - assert 1 == len(conn._acquired) + assert 0 == len(conn._acquired) - await request() + async def request() -> None: + async with sess.get(url, proxy=proxy.url): + assert 1 == len(conn._acquired) - assert 0 == 
len(conn._acquired) + await request() - await sess.close() + assert 0 == len(conn._acquired) + finally: + await sess.close() + await conn.close() @pytest.mark.xfail @@ -639,26 +649,30 @@ async def xtest_proxy_https_multi_conn_limit(proxy_test_server, loop): sess = aiohttp.ClientSession(connector=conn, loop=loop) proxy = await proxy_test_server() - current_pid = None + try: + current_pid = None - async def request(pid): - # process requests only one by one - nonlocal current_pid + async def request(pid: int) -> ClientResponse: + # process requests only one by one + nonlocal current_pid - async with sess.get(url, proxy=proxy.url) as resp: - current_pid = pid - await asyncio.sleep(0.2, loop=loop) - assert current_pid == pid + async with sess.get(url, proxy=proxy.url) as resp: + current_pid = pid + await asyncio.sleep(0.2) + assert current_pid == pid - return resp + return resp - requests = [request(pid) for pid in range(multi_conn_num)] - responses = await asyncio.gather(*requests, loop=loop) + requests = [request(pid) for pid in range(multi_conn_num)] + responses = await asyncio.gather(*requests, return_exceptions=True) - assert len(responses) == multi_conn_num - assert {resp.status for resp in responses} == {200} - - await sess.close() + # Filter out exceptions to count actual responses + actual_responses = [r for r in responses if isinstance(r, ClientResponse)] + assert len(actual_responses) == multi_conn_num + assert {resp.status for resp in actual_responses} == {200} + finally: + await sess.close() + await conn.close() def _patch_ssl_transport(monkeypatch): @@ -809,7 +823,7 @@ async def xtest_proxy_from_env_https(proxy_test_server, get_request, mocker): url = "https://aiohttp.io/path" proxy = await proxy_test_server() mocker.patch.dict(os.environ, {"https_proxy": str(proxy.url)}) - mock.patch("pathlib.Path.is_file", mock_is_file) + mocker.patch("pathlib.Path.is_file", mock_is_file) await get_request(url=url, trust_env=True) From 5f0902b330e73b238b1b2c680c43a4688728eb96 Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Sun, 18 May 2025 13:46:36 -0400 Subject: [PATCH 36/90] [PR #10851/e5d1240 backport][3.12] remove use of deprecated policy API from tests (#10871) Co-authored-by: Kumar Aditya --- CHANGES/10851.bugfix.rst | 1 + CHANGES/10851.contrib.rst | 2 ++ aiohttp/pytest_plugin.py | 34 +++++++++++++++++++--------------- tests/conftest.py | 14 +++++--------- tests/test_connector.py | 2 +- tests/test_loop.py | 6 +++--- tests/test_proxy_functional.py | 2 +- 7 files changed, 32 insertions(+), 29 deletions(-) create mode 100644 CHANGES/10851.bugfix.rst create mode 100644 CHANGES/10851.contrib.rst diff --git a/CHANGES/10851.bugfix.rst b/CHANGES/10851.bugfix.rst new file mode 100644 index 00000000000..9c47cc95905 --- /dev/null +++ b/CHANGES/10851.bugfix.rst @@ -0,0 +1 @@ +Fixed pytest plugin to not use deprecated :py:mod:`asyncio` policy APIs. diff --git a/CHANGES/10851.contrib.rst b/CHANGES/10851.contrib.rst new file mode 100644 index 00000000000..623f96bc227 --- /dev/null +++ b/CHANGES/10851.contrib.rst @@ -0,0 +1,2 @@ +Updated tests to avoid using deprecated :py:mod:`asyncio` policy APIs and +make it compatible with Python 3.14. 
diff --git a/aiohttp/pytest_plugin.py b/aiohttp/pytest_plugin.py index 128dc46081d..7d59fe820d6 100644 --- a/aiohttp/pytest_plugin.py +++ b/aiohttp/pytest_plugin.py @@ -10,7 +10,6 @@ Iterator, Optional, Protocol, - Type, Union, overload, ) @@ -208,9 +207,13 @@ def pytest_pyfunc_call(pyfuncitem): # type: ignore[no-untyped-def] """Run coroutines in an event loop instead of a normal function call.""" fast = pyfuncitem.config.getoption("--aiohttp-fast") if inspect.iscoroutinefunction(pyfuncitem.function): - existing_loop = pyfuncitem.funcargs.get( - "proactor_loop" - ) or pyfuncitem.funcargs.get("loop", None) + existing_loop = ( + pyfuncitem.funcargs.get("proactor_loop") + or pyfuncitem.funcargs.get("selector_loop") + or pyfuncitem.funcargs.get("uvloop_loop") + or pyfuncitem.funcargs.get("loop", None) + ) + with _runtime_warning_context(): with _passthrough_loop_context(existing_loop, fast=fast) as _loop: testargs = { @@ -227,11 +230,11 @@ def pytest_generate_tests(metafunc): # type: ignore[no-untyped-def] return loops = metafunc.config.option.aiohttp_loop - avail_factories: Dict[str, Type[asyncio.AbstractEventLoopPolicy]] - avail_factories = {"pyloop": asyncio.DefaultEventLoopPolicy} + avail_factories: dict[str, Callable[[], asyncio.AbstractEventLoop]] + avail_factories = {"pyloop": asyncio.new_event_loop} if uvloop is not None: # pragma: no cover - avail_factories["uvloop"] = uvloop.EventLoopPolicy + avail_factories["uvloop"] = uvloop.new_event_loop if loops == "all": loops = "pyloop,uvloop?" @@ -255,11 +258,13 @@ def pytest_generate_tests(metafunc): # type: ignore[no-untyped-def] @pytest.fixture -def loop(loop_factory, fast, loop_debug): # type: ignore[no-untyped-def] +def loop( + loop_factory: Callable[[], asyncio.AbstractEventLoop], + fast: bool, + loop_debug: bool, +) -> Iterator[asyncio.AbstractEventLoop]: """Return an instance of the event loop.""" - policy = loop_factory() - asyncio.set_event_loop_policy(policy) - with loop_context(fast=fast) as _loop: + with loop_context(loop_factory, fast=fast) as _loop: if loop_debug: _loop.set_debug(True) # pragma: no cover asyncio.set_event_loop(_loop) @@ -267,11 +272,10 @@ def loop(loop_factory, fast, loop_debug): # type: ignore[no-untyped-def] @pytest.fixture -def proactor_loop(): # type: ignore[no-untyped-def] - policy = asyncio.WindowsProactorEventLoopPolicy() # type: ignore[attr-defined] - asyncio.set_event_loop_policy(policy) +def proactor_loop() -> Iterator[asyncio.AbstractEventLoop]: + factory = asyncio.ProactorEventLoop # type: ignore[attr-defined] - with loop_context(policy.new_event_loop) as _loop: + with loop_context(factory) as _loop: asyncio.set_event_loop(_loop) yield _loop diff --git a/tests/conftest.py b/tests/conftest.py index de7f8316cb0..27cd5cbd6db 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -230,21 +230,17 @@ def _proto_factory(conn_closing_result=None, **kwargs): @pytest.fixture -def selector_loop(): - policy = asyncio.WindowsSelectorEventLoopPolicy() - asyncio.set_event_loop_policy(policy) - - with loop_context(policy.new_event_loop) as _loop: +def selector_loop() -> Iterator[asyncio.AbstractEventLoop]: + factory = asyncio.SelectorEventLoop + with loop_context(factory) as _loop: asyncio.set_event_loop(_loop) yield _loop @pytest.fixture def uvloop_loop() -> Iterator[asyncio.AbstractEventLoop]: - policy = uvloop.EventLoopPolicy() - asyncio.set_event_loop_policy(policy) - - with loop_context(policy.new_event_loop) as _loop: + factory = uvloop.new_event_loop + with loop_context(factory) as _loop: 
asyncio.set_event_loop(_loop) yield _loop diff --git a/tests/test_connector.py b/tests/test_connector.py index 28a2ae1d1d2..db0514e5f0d 100644 --- a/tests/test_connector.py +++ b/tests/test_connector.py @@ -107,7 +107,7 @@ def create_mocked_conn(conn_closing_result=None, **kwargs): try: loop = asyncio.get_running_loop() except RuntimeError: - loop = asyncio.get_event_loop_policy().get_event_loop() + loop = asyncio.get_event_loop() proto = mock.Mock(**kwargs) proto.closed = loop.create_future() diff --git a/tests/test_loop.py b/tests/test_loop.py index a973efe4c43..944f17e69f0 100644 --- a/tests/test_loop.py +++ b/tests/test_loop.py @@ -37,8 +37,8 @@ def test_default_loop(self) -> None: self.assertIs(self.loop, asyncio.get_event_loop_policy().get_event_loop()) -def test_default_loop(loop) -> None: - assert asyncio.get_event_loop_policy().get_event_loop() is loop +def test_default_loop(loop: asyncio.AbstractEventLoop) -> None: + assert asyncio.get_event_loop() is loop def test_setup_loop_non_main_thread() -> None: @@ -47,7 +47,7 @@ def test_setup_loop_non_main_thread() -> None: def target() -> None: try: with loop_context() as loop: - assert asyncio.get_event_loop_policy().get_event_loop() is loop + assert asyncio.get_event_loop() is loop loop.run_until_complete(test_subprocess_co(loop)) except Exception as exc: nonlocal child_exc diff --git a/tests/test_proxy_functional.py b/tests/test_proxy_functional.py index d0e20eec6b4..c6c6ac67c1b 100644 --- a/tests/test_proxy_functional.py +++ b/tests/test_proxy_functional.py @@ -204,7 +204,6 @@ async def test_https_proxy_unsupported_tls_in_tls( await asyncio.sleep(0.1) -@pytest.mark.usefixtures("uvloop_loop") @pytest.mark.skipif( platform.system() == "Windows" or sys.implementation.name != "cpython", reason="uvloop is not supported on Windows and non-CPython implementations", @@ -216,6 +215,7 @@ async def test_https_proxy_unsupported_tls_in_tls( async def test_uvloop_secure_https_proxy( client_ssl_ctx: ssl.SSLContext, secure_proxy_url: URL, + uvloop_loop: asyncio.AbstractEventLoop, ) -> None: """Ensure HTTPS sites are accessible through a secure proxy without warning when using uvloop.""" conn = aiohttp.TCPConnector() From 94901a221f374b60dfe2b56e1e5ae9cfe6fe303f Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 19 May 2025 10:29:12 +0000 Subject: [PATCH 37/90] Bump cryptography from 44.0.3 to 45.0.2 (#10873) Bumps [cryptography](https://github.com/pyca/cryptography) from 44.0.3 to 45.0.2.
Changelog

Sourced from cryptography's changelog.

45.0.2 - 2025-05-17

  • Fixed using ``mypy`` with ``cryptography`` on older versions of Python.

.. _v45-0-1:

45.0.1 - 2025-05-17

  • Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.0.

.. _v45-0-0:

45.0.0 - 2025-05-17 (YANKED)

  • Support for Python 3.7 is deprecated and will be removed in the next ``cryptography`` release.
  • Updated the minimum supported Rust version (MSRV) to 1.74.0, from 1.65.0.
  • Added support for serialization of PKCS#12 Java truststores in :func:`~cryptography.hazmat.primitives.serialization.pkcs12.serialize_java_truststore`.
  • Added :meth:`~cryptography.hazmat.primitives.kdf.argon2.Argon2id.derive_phc_encoded` and :meth:`~cryptography.hazmat.primitives.kdf.argon2.Argon2id.verify_phc_encoded` methods to support password hashing in the PHC string format.
  • Added support for PKCS7 decryption and encryption using AES-256 as the content algorithm, in addition to AES-128.
  • **BACKWARDS INCOMPATIBLE:** Made SSH private key loading more consistent with other private key loading: :func:`~cryptography.hazmat.primitives.serialization.load_ssh_private_key` now raises a ``TypeError`` if the key is unencrypted but a password is provided (previously no exception was raised), and raises a ``TypeError`` if the key is encrypted but no password is provided (previously a ``ValueError`` was raised).
  • We significantly refactored how private key loading (:func:`~cryptography.hazmat.primitives.serialization.load_pem_private_key` and :func:`~cryptography.hazmat.primitives.serialization.load_der_private_key`) works. This is intended to be backwards compatible for all well-formed keys, therefore if you discover a key that now raises an exception, please file a bug with instructions for reproducing.
  • Added ``unsafe_skip_rsa_key_validation`` keyword-argument to :func:`~cryptography.hazmat.primitives.serialization.load_ssh_private_key`.
  • Added :class:`~cryptography.hazmat.primitives.hashes.XOFHash` to support repeated :meth:`~cryptography.hazmat.primitives.hashes.XOFHash.squeeze` operations on extendable output functions.
  • Added :meth:`~cryptography.x509.ocsp.OCSPResponseBuilder.add_response_by_hash` method to allow creating OCSP responses using certificate hash values rather than full certificates.

... (truncated)
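
The SSH key-loading change is the item most likely to affect callers upgrading from 44.x. A minimal sketch of the new contract (self-contained: it builds an unencrypted OpenSSH key to load):

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.serialization import (
    Encoding,
    NoEncryption,
    PrivateFormat,
    load_ssh_private_key,
)

# An unencrypted OpenSSH-format private key.
key_bytes = ed25519.Ed25519PrivateKey.generate().private_bytes(
    Encoding.PEM, PrivateFormat.OpenSSH, NoEncryption()
)

load_ssh_private_key(key_bytes, password=None)  # fine: no password needed

try:
    load_ssh_private_key(key_bytes, password=b"hunter2")
except TypeError:
    # 45.0+: a password supplied for an unencrypted key is rejected
    # (earlier releases silently ignored it).
    pass
```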

Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=44.0.3&new-version=45.0.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- requirements/test.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 88de39cb86f..b8d832fa429 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -58,7 +58,7 @@ coverage==7.8.0 # via # -r requirements/test.in # pytest-cov -cryptography==44.0.3 +cryptography==45.0.2 # via # pyjwt # trustme diff --git a/requirements/dev.txt b/requirements/dev.txt index 7f005a26ef0..aa5b83d6cec 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -58,7 +58,7 @@ coverage==7.8.0 # via # -r requirements/test.in # pytest-cov -cryptography==44.0.3 +cryptography==45.0.2 # via # pyjwt # trustme diff --git a/requirements/lint.txt b/requirements/lint.txt index f6ac13607c0..fcf5f2d0235 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -21,7 +21,7 @@ cfgv==3.4.0 # via pre-commit click==8.1.8 # via slotscheck -cryptography==44.0.3 +cryptography==45.0.2 # via trustme distlib==0.3.9 # via virtualenv diff --git a/requirements/test.txt b/requirements/test.txt index 1454e96cd07..bd64852cd18 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -31,7 +31,7 @@ coverage==7.8.0 # via # -r requirements/test.in # pytest-cov -cryptography==44.0.3 +cryptography==45.0.2 # via trustme exceptiongroup==1.2.2 # via pytest From cfe426951bfc734fe6c923072b52e6ecbe5c9e9f Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 19 May 2025 10:43:00 +0000 Subject: [PATCH 38/90] Bump exceptiongroup from 1.2.2 to 1.3.0 (#10859) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [exceptiongroup](https://github.com/agronholm/exceptiongroup) from 1.2.2 to 1.3.0.
Release notes

Sourced from exceptiongroup's releases.

1.3.0

  • Added **kwargs to function and method signatures as appropriate to match the signatures in the standard library
  • In line with the stdlib typings in typeshed, updated (Base)ExceptionGroup generic types to define defaults for their generic arguments (defaulting to BaseExceptionGroup[BaseException] and ExceptionGroup[Exception]) (PR by @mikenerone)
  • Changed BaseExceptionGroup.__init__() to directly call BaseException.__init__() instead of the superclass __init__() in order to emulate the CPython behavior (broken or not) (PR by @cfbolz)
  • Changed the exceptions attribute to always return the same tuple of exceptions, created from the original exceptions sequence passed to BaseExceptionGroup to match CPython behavior (#143)
Changelog

Sourced from exceptiongroup's changelog.

Version history

This library adheres to Semantic Versioning 2.0 (http://semver.org/).

1.3.0

  • Added **kwargs to function and method signatures as appropriate to match the signatures in the standard library
  • In line with the stdlib typings in typeshed, updated (Base)ExceptionGroup generic types to define defaults for their generic arguments (defaulting to BaseExceptionGroup[BaseException] and ExceptionGroup[Exception]) (PR by @mikenerone; see the sketch after this list)
  • Changed BaseExceptionGroup.__init__() to directly call BaseException.__init__() instead of the superclass __init__() in order to emulate the CPython behavior (broken or not) (PR by @cfbolz)
  • Changed the exceptions attribute to always return the same tuple of exceptions, created from the original exceptions sequence passed to BaseExceptionGroup to match CPython behavior ([#143](https://github.com/agronholm/exceptiongroup/issues/143))

1.2.2

  • Removed an assert in exceptiongroup._formatting that caused compatibility issues with Sentry ([#123](https://github.com/agronholm/exceptiongroup/issues/123))

1.2.1

  • Updated the copying of __notes__ to match CPython behavior (PR by CF Bolz-Tereick)
  • Corrected the type annotation of the exception handler callback to accept a BaseExceptionGroup instead of BaseException
  • Fixed type errors on Python < 3.10 and the type annotation of suppress() (PR by John Litborn)

1.2.0

  • Added special monkeypatching if Apport (https://github.com/canonical/apport) has overridden sys.excepthook so it will format exception groups correctly (PR by John Litborn)
  • Added a backport of contextlib.suppress() from Python 3.12.1 which also handles suppressing exceptions inside exception groups (see the sketch after this list)
  • Fixed bare raise in a handler reraising the original naked exception rather than an exception group which is what is raised when you do a raise in an except* handler

1.1.3

  • catch() now raises a TypeError if passed an async exception handler instead of just giving a RuntimeWarning about the coroutine never being awaited. (#66, PR by John Litborn)

... (truncated)

Commits
  • 77fba8a Added the release version
  • 5e153aa Revert "Migrated test dependencies to dependency groups"
  • 5000bfe Migrated tox configuration to native TOML
  • 427220d Updated pytest options
  • 4ca264f Migrated test dependencies to dependency groups
  • 163c3a8 Marked test_exceptions_mutate_original_sequence as xfail on pypy3.11
  • a176574 Always create the exceptions tuple at init and return it from the exceptions ...
  • 550b796 Added BaseExceptionGroup.__init__, following CPython (#142)
  • 2a84dfd Added typevar defaults to (Base)ExceptionGroup (#147)
  • fb9133b [pre-commit.ci] pre-commit autoupdate (#145)
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=exceptiongroup&package-manager=pip&previous-version=1.2.2&new-version=1.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 3 ++- requirements/dev.txt | 3 ++- requirements/lint.txt | 3 ++- requirements/test.txt | 3 ++- 4 files changed, 8 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index b8d832fa429..b70023e65d8 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -68,7 +68,7 @@ distlib==0.3.9 # via virtualenv docutils==0.21.2 # via sphinx -exceptiongroup==1.2.2 +exceptiongroup==1.3.0 # via pytest execnet==2.1.1 # via pytest-xdist @@ -264,6 +264,7 @@ trustme==1.2.1 ; platform_machine != "i686" # -r requirements/test.in typing-extensions==4.13.2 # via + # exceptiongroup # multidict # mypy # pydantic diff --git a/requirements/dev.txt b/requirements/dev.txt index aa5b83d6cec..ce52430fbee 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -66,7 +66,7 @@ distlib==0.3.9 # via virtualenv docutils==0.21.2 # via sphinx -exceptiongroup==1.2.2 +exceptiongroup==1.3.0 # via pytest execnet==2.1.1 # via pytest-xdist @@ -255,6 +255,7 @@ trustme==1.2.1 ; platform_machine != "i686" # -r requirements/test.in typing-extensions==4.13.2 # via + # exceptiongroup # multidict # mypy # pydantic diff --git a/requirements/lint.txt b/requirements/lint.txt index fcf5f2d0235..28aa349a511 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -25,7 +25,7 @@ cryptography==45.0.2 # via trustme distlib==0.3.9 # via virtualenv -exceptiongroup==1.2.2 +exceptiongroup==1.3.0 # via pytest filelock==3.18.0 # via virtualenv @@ -99,6 +99,7 @@ trustme==1.2.1 # via -r requirements/lint.in typing-extensions==4.13.2 # via + # exceptiongroup # mypy # pydantic # pydantic-core diff --git a/requirements/test.txt b/requirements/test.txt index bd64852cd18..5b3444b3cc4 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -33,7 +33,7 @@ coverage==7.8.0 # pytest-cov cryptography==45.0.2 # via trustme -exceptiongroup==1.2.2 +exceptiongroup==1.3.0 # via pytest execnet==2.1.1 # via pytest-xdist @@ -127,6 +127,7 @@ trustme==1.2.1 ; platform_machine != "i686" # via -r requirements/test.in typing-extensions==4.13.2 # via + # exceptiongroup # multidict # mypy # pydantic From 6ea542ef4b78174b2b9c39146fc6f72abeaba2ce Mon Sep 17 00:00:00 2001 From: "J. 
Nick Koston" Date: Mon, 19 May 2025 09:02:22 -0400 Subject: [PATCH 39/90] [3.12] Updates for Cython 3.1.1 (#10877) closes #10849 --- CHANGES/10877.packaging.rst | 1 + aiohttp/_websocket/reader_py.py | 2 +- requirements/constraints.txt | 2 +- requirements/cython.in | 2 +- requirements/cython.txt | 2 +- 5 files changed, 5 insertions(+), 4 deletions(-) create mode 100644 CHANGES/10877.packaging.rst diff --git a/CHANGES/10877.packaging.rst b/CHANGES/10877.packaging.rst new file mode 100644 index 00000000000..0bc2ee03984 --- /dev/null +++ b/CHANGES/10877.packaging.rst @@ -0,0 +1 @@ +Fixed compatibility issue with Cython 3.1.1 -- by :user:`bdraco` diff --git a/aiohttp/_websocket/reader_py.py b/aiohttp/_websocket/reader_py.py index 855f9c6d600..f966a1593c5 100644 --- a/aiohttp/_websocket/reader_py.py +++ b/aiohttp/_websocket/reader_py.py @@ -79,7 +79,7 @@ def exception(self) -> Optional[BaseException]: def set_exception( self, - exc: "BaseException", + exc: BaseException, exc_cause: builtins.BaseException = _EXC_SENTINEL, ) -> None: self._eof = True diff --git a/requirements/constraints.txt b/requirements/constraints.txt index b70023e65d8..9a53aaaea12 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -62,7 +62,7 @@ cryptography==45.0.2 # via # pyjwt # trustme -cython==3.0.12 +cython==3.1.1 # via -r requirements/cython.in distlib==0.3.9 # via virtualenv diff --git a/requirements/cython.in b/requirements/cython.in index 6f0238f170d..6b848f6df9e 100644 --- a/requirements/cython.in +++ b/requirements/cython.in @@ -1,3 +1,3 @@ -r multidict.in -Cython +Cython >= 3.1.1 diff --git a/requirements/cython.txt b/requirements/cython.txt index 8686651881b..1dd3cc00fc4 100644 --- a/requirements/cython.txt +++ b/requirements/cython.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/cython.txt --resolver=backtracking --strip-extras requirements/cython.in # -cython==3.0.12 +cython==3.1.1 # via -r requirements/cython.in multidict==6.4.3 # via -r requirements/multidict.in From 5044d537abe1ffff2f731631e4869ba025258b79 Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Mon, 19 May 2025 11:41:30 -0400 Subject: [PATCH 40/90] [PR #9732/1e911ea backport][3.12] Add Client Middleware Support (#10879) Co-authored-by: Sam Bull Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> --- CHANGES/9732.feature.rst | 6 + aiohttp/__init__.py | 4 + aiohttp/client.py | 81 ++- aiohttp/client_middlewares.py | 58 ++ aiohttp/client_reqrep.py | 18 +- docs/client_advanced.rst | 212 ++++++ docs/client_reference.rst | 16 + tests/test_client_functional.py | 29 +- tests/test_client_middleware.py | 1116 +++++++++++++++++++++++++++++++ 9 files changed, 1512 insertions(+), 28 deletions(-) create mode 100644 CHANGES/9732.feature.rst create mode 100644 aiohttp/client_middlewares.py create mode 100644 tests/test_client_middleware.py diff --git a/CHANGES/9732.feature.rst b/CHANGES/9732.feature.rst new file mode 100644 index 00000000000..bf6dd8ebde3 --- /dev/null +++ b/CHANGES/9732.feature.rst @@ -0,0 +1,6 @@ +Added client middleware support -- by :user:`bdraco` and :user:`Dreamsorcerer`. + +This change allows users to add middleware to the client session and requests, enabling features like +authentication, logging, and request/response modification without modifying the core +request logic. Additionally, the ``session`` attribute was added to ``ClientRequest``, +allowing middleware to access the session for making additional requests. 
diff --git a/aiohttp/__init__.py b/aiohttp/__init__.py index 6321e713ed4..d18bab60d2e 100644 --- a/aiohttp/__init__.py +++ b/aiohttp/__init__.py @@ -47,6 +47,7 @@ WSServerHandshakeError, request, ) +from .client_middlewares import ClientHandlerType, ClientMiddlewareType from .compression_utils import set_zlib_backend from .connector import ( AddrInfoType as AddrInfoType, @@ -175,6 +176,9 @@ "NamedPipeConnector", "WSServerHandshakeError", "request", + # client_middleware + "ClientMiddlewareType", + "ClientHandlerType", # cookiejar "CookieJar", "DummyCookieJar", diff --git a/aiohttp/client.py b/aiohttp/client.py index 8ba5e282e2c..2b7afe1344c 100644 --- a/aiohttp/client.py +++ b/aiohttp/client.py @@ -70,6 +70,7 @@ WSMessageTypeError, WSServerHandshakeError, ) +from .client_middlewares import ClientMiddlewareType, build_client_middlewares from .client_reqrep import ( ClientRequest as ClientRequest, ClientResponse as ClientResponse, @@ -191,6 +192,7 @@ class _RequestOptions(TypedDict, total=False): auto_decompress: Union[bool, None] max_line_size: Union[int, None] max_field_size: Union[int, None] + middlewares: Optional[Tuple[ClientMiddlewareType, ...]] @attr.s(auto_attribs=True, frozen=True, slots=True) @@ -258,6 +260,7 @@ class ClientSession: "_default_proxy", "_default_proxy_auth", "_retry_connection", + "_middlewares", "requote_redirect_url", ] ) @@ -298,6 +301,7 @@ def __init__( max_line_size: int = 8190, max_field_size: int = 8190, fallback_charset_resolver: _CharsetResolver = lambda r, b: "utf-8", + middlewares: Optional[Tuple[ClientMiddlewareType, ...]] = None, ) -> None: # We initialise _connector to None immediately, as it's referenced in __del__() # and could cause issues if an exception occurs during initialisation. @@ -410,6 +414,7 @@ def __init__( self._default_proxy = proxy self._default_proxy_auth = proxy_auth self._retry_connection: bool = True + self._middlewares = middlewares def __init_subclass__(cls: Type["ClientSession"]) -> None: warnings.warn( @@ -500,6 +505,7 @@ async def _request( auto_decompress: Optional[bool] = None, max_line_size: Optional[int] = None, max_field_size: Optional[int] = None, + middlewares: Optional[Tuple[ClientMiddlewareType, ...]] = None, ) -> ClientResponse: # NOTE: timeout clamps existing connect and read timeouts. 
We cannot @@ -699,32 +705,33 @@ async def _request( trust_env=self.trust_env, ) - # connection timeout - try: - conn = await self._connector.connect( - req, traces=traces, timeout=real_timeout + # Core request handler - now includes connection logic + async def _connect_and_send_request( + req: ClientRequest, + ) -> ClientResponse: + # connection timeout + assert self._connector is not None + try: + conn = await self._connector.connect( + req, traces=traces, timeout=real_timeout + ) + except asyncio.TimeoutError as exc: + raise ConnectionTimeoutError( + f"Connection timeout to host {req.url}" + ) from exc + + assert conn.protocol is not None + conn.protocol.set_response_params( + timer=timer, + skip_payload=req.method in EMPTY_BODY_METHODS, + read_until_eof=read_until_eof, + auto_decompress=auto_decompress, + read_timeout=real_timeout.sock_read, + read_bufsize=read_bufsize, + timeout_ceil_threshold=self._connector._timeout_ceil_threshold, + max_line_size=max_line_size, + max_field_size=max_field_size, ) - except asyncio.TimeoutError as exc: - raise ConnectionTimeoutError( - f"Connection timeout to host {url}" - ) from exc - - assert conn.transport is not None - - assert conn.protocol is not None - conn.protocol.set_response_params( - timer=timer, - skip_payload=method in EMPTY_BODY_METHODS, - read_until_eof=read_until_eof, - auto_decompress=auto_decompress, - read_timeout=real_timeout.sock_read, - read_bufsize=read_bufsize, - timeout_ceil_threshold=self._connector._timeout_ceil_threshold, - max_line_size=max_line_size, - max_field_size=max_field_size, - ) - - try: try: resp = await req.send(conn) try: @@ -735,6 +742,30 @@ async def _request( except BaseException: conn.close() raise + return resp + + # Apply middleware (if any) - per-request middleware overrides session middleware + effective_middlewares = ( + self._middlewares if middlewares is None else middlewares + ) + + if effective_middlewares: + handler = build_client_middlewares( + _connect_and_send_request, effective_middlewares + ) + else: + handler = _connect_and_send_request + + try: + resp = await handler(req) + # Client connector errors should not be retried + except ( + ConnectionTimeoutError, + ClientConnectorError, + ClientConnectorCertificateError, + ClientConnectorSSLError, + ): + raise except (ClientOSError, ServerDisconnectedError): if retry_persistent_connection: retry_persistent_connection = False diff --git a/aiohttp/client_middlewares.py b/aiohttp/client_middlewares.py new file mode 100644 index 00000000000..6be353c3a40 --- /dev/null +++ b/aiohttp/client_middlewares.py @@ -0,0 +1,58 @@ +"""Client middleware support.""" + +from collections.abc import Awaitable, Callable + +from .client_reqrep import ClientRequest, ClientResponse + +__all__ = ("ClientMiddlewareType", "ClientHandlerType", "build_client_middlewares") + +# Type alias for client request handlers - functions that process requests and return responses +ClientHandlerType = Callable[[ClientRequest], Awaitable[ClientResponse]] + +# Type for client middleware - similar to server but uses ClientRequest/ClientResponse +ClientMiddlewareType = Callable[ + [ClientRequest, ClientHandlerType], Awaitable[ClientResponse] +] + + +def build_client_middlewares( + handler: ClientHandlerType, + middlewares: tuple[ClientMiddlewareType, ...], +) -> ClientHandlerType: + """ + Apply middlewares to request handler. + + The middlewares are applied in reverse order, so the first middleware + in the list wraps all subsequent middlewares and the handler. 
+ + This implementation avoids using partial/update_wrapper to minimize overhead + and doesn't cache to avoid holding references to stateful middleware. + """ + if not middlewares: + return handler + + # Optimize for single middleware case + if len(middlewares) == 1: + middleware = middlewares[0] + + async def single_middleware_handler(req: ClientRequest) -> ClientResponse: + return await middleware(req, handler) + + return single_middleware_handler + + # Build the chain for multiple middlewares + current_handler = handler + + for middleware in reversed(middlewares): + # Create a new closure that captures the current state + def make_wrapper( + mw: ClientMiddlewareType, next_h: ClientHandlerType + ) -> ClientHandlerType: + async def wrapped(req: ClientRequest) -> ClientResponse: + return await mw(req, next_h) + + return wrapped + + current_handler = make_wrapper(middleware, current_handler) + + return current_handler diff --git a/aiohttp/client_reqrep.py b/aiohttp/client_reqrep.py index 43b48063c6e..ef0dd42b969 100644 --- a/aiohttp/client_reqrep.py +++ b/aiohttp/client_reqrep.py @@ -272,7 +272,13 @@ class ClientRequest: auth = None response = None - __writer = None # async task for streaming data + __writer: Optional["asyncio.Task[None]"] = None # async task for streaming data + + # These class defaults help create_autospec() work correctly. + # If autospec is improved in future, maybe these can be removed. + url = URL() + method = "GET" + _continue = None # waiter future for '100 Continue' response _skip_auto_headers: Optional["CIMultiDict[None]"] = None @@ -427,6 +433,16 @@ def request_info(self) -> RequestInfo: RequestInfo, (self.url, self.method, headers, self.original_url) ) + @property + def session(self) -> "ClientSession": + """Return the ClientSession instance. + + This property provides access to the ClientSession that initiated + this request, allowing middleware to make additional requests + using the same session. + """ + return self._session + def update_host(self, url: URL) -> None: """Update destination host, port and connection type (ssl).""" # get host/port diff --git a/docs/client_advanced.rst b/docs/client_advanced.rst index 39cd259dc9e..8795b3d164a 100644 --- a/docs/client_advanced.rst +++ b/docs/client_advanced.rst @@ -98,6 +98,218 @@ background. ``Authorization`` header will be removed if you get redirected to a different host or protocol. +.. _aiohttp-client-middleware: + +Client Middleware +----------------- + +aiohttp client supports middleware to intercept requests and responses. This can be +useful for authentication, logging, request/response modification, and retries. + +To create a middleware, you need to define an async function that accepts the request +and a handler function, and returns the response. 
The middleware must match the +:type:`ClientMiddlewareType` type signature:: + + import logging + from aiohttp import ClientSession, ClientRequest, ClientResponse, ClientHandlerType + + _LOGGER = logging.getLogger(__name__) + + async def my_middleware( + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + # Process request before sending + _LOGGER.debug(f"Request: {request.method} {request.url}") + + # Call the next handler + response = await handler(request) + + # Process response after receiving + _LOGGER.debug(f"Response: {response.status}") + + return response + +You can apply middleware to a client session or to individual requests:: + + # Apply to all requests in a session + async with ClientSession(middlewares=(my_middleware,)) as session: + resp = await session.get('http://example.com') + + # Apply to a specific request + async with ClientSession() as session: + resp = await session.get('http://example.com', middlewares=(my_middleware,)) + +Middleware Examples +^^^^^^^^^^^^^^^^^^^ + +Here's a simple example showing request modification:: + + async def add_api_key_middleware( + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + # Add API key to all requests + request.headers['X-API-Key'] = 'my-secret-key' + return await handler(request) + +.. _client-middleware-retry: + +Middleware Retry Pattern +^^^^^^^^^^^^^^^^^^^^^^^^ + +Client middleware can implement retry logic internally using a ``while`` loop. This allows the middleware to: + +- Retry requests based on response status codes or other conditions +- Modify the request between retries (e.g., refreshing tokens) +- Maintain state across retry attempts +- Control when to stop retrying and return the response + +This pattern is particularly useful for: + +- Refreshing authentication tokens after a 401 response +- Switching to fallback servers or authentication methods +- Adding or modifying headers based on error responses +- Implementing back-off strategies with increasing delays + +The middleware can maintain state between retries to track which strategies have been tried and modify the request accordingly for the next attempt. 
+ +Example: Retrying requests with middleware +"""""""""""""""""""""""""""""""""""""""""" + +:: + + import logging + import aiohttp + + _LOGGER = logging.getLogger(__name__) + + class RetryMiddleware: + def __init__(self, max_retries: int = 3): + self.max_retries = max_retries + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + retry_count = 0 + use_fallback_auth = False + + while True: + # Modify request based on retry state + if use_fallback_auth: + request.headers['Authorization'] = 'Bearer fallback-token' + + response = await handler(request) + + # Retry on 401 errors with different authentication + if response.status == 401 and retry_count < self.max_retries: + retry_count += 1 + use_fallback_auth = True + _LOGGER.debug(f"Retrying with fallback auth (attempt {retry_count})") + continue + + # Retry on 5xx errors + if response.status >= 500 and retry_count < self.max_retries: + retry_count += 1 + _LOGGER.debug(f"Retrying request (attempt {retry_count})") + continue + + return response + +Middleware Chaining +^^^^^^^^^^^^^^^^^^^ + +Multiple middlewares are applied in the order they are listed:: + + import logging + + _LOGGER = logging.getLogger(__name__) + + async def logging_middleware( + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + _LOGGER.debug(f"[LOG] {request.method} {request.url}") + return await handler(request) + + async def auth_middleware( + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + request.headers['Authorization'] = 'Bearer token123' + return await handler(request) + + # Middlewares are applied in order: logging -> auth -> request + async with ClientSession(middlewares=(logging_middleware, auth_middleware)) as session: + resp = await session.get('http://example.com') + +.. note:: + + Client middleware is a powerful feature but should be used judiciously. + Each middleware adds overhead to request processing. For simple use cases + like adding static headers, you can often use request parameters + (e.g., ``headers``) or session configuration instead. + +.. warning:: + + Using the same session from within middleware can cause infinite recursion if + the middleware makes HTTP requests using the same session that has the middleware + applied. + + To avoid recursion, use one of these approaches: + + **Recommended:** Pass ``middlewares=()`` to requests made inside the middleware to + disable middleware for those specific requests:: + + async def log_middleware( + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + async with request.session.post( + "https://logapi.example/log", + json={"url": str(request.url)}, + middlewares=() # This prevents infinite recursion + ) as resp: + pass + + return await handler(request) + + **Alternative:** Check the request contents (URL, path, host) to avoid applying + middleware to certain requests:: + + async def log_middleware( + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + if request.url.host != "logapi.example": # Avoid infinite recursion + async with request.session.post( + "https://logapi.example/log", + json={"url": str(request.url)} + ) as resp: + pass + + return await handler(request) + +Middleware Type +^^^^^^^^^^^^^^^ + +.. type:: ClientMiddlewareType + + Type alias for client middleware functions. Middleware functions must have this signature:: + + Callable[ + [ClientRequest, ClientHandlerType], + Awaitable[ClientResponse] + ] + +.. 
type:: ClientHandlerType + + Type alias for client request handler functions:: + + Callable[ClientRequest, Awaitable[ClientResponse]] + Custom Cookies -------------- diff --git a/docs/client_reference.rst b/docs/client_reference.rst index aa664b24ff4..039419ba965 100644 --- a/docs/client_reference.rst +++ b/docs/client_reference.rst @@ -53,6 +53,7 @@ The client session supports the context manager protocol for self closing. trust_env=False, \ requote_redirect_url=True, \ trace_configs=None, \ + middlewares=None, \ read_bufsize=2**16, \ max_line_size=8190, \ max_field_size=8190, \ @@ -229,6 +230,13 @@ The client session supports the context manager protocol for self closing. disabling. See :ref:`aiohttp-client-tracing-reference` for more information. + :param middlewares: A tuple of middleware instances to apply to all session requests. + Each middleware must match the :type:`ClientMiddlewareType` signature. + ``None`` (default) is used when no middleware is needed. + See :ref:`aiohttp-client-middleware` for more information. + + .. versionadded:: 3.12 + :param int read_bufsize: Size of the read buffer (:attr:`ClientResponse.content`). 64 KiB by default. @@ -387,6 +395,7 @@ The client session supports the context manager protocol for self closing. server_hostname=None, \ proxy_headers=None, \ trace_request_ctx=None, \ + middlewares=None, \ read_bufsize=None, \ auto_decompress=None, \ max_line_size=None, \ @@ -535,6 +544,13 @@ The client session supports the context manager protocol for self closing. .. versionadded:: 3.0 + :param middlewares: A tuple of middleware instances to apply to this request only. + Each middleware must match the :type:`ClientMiddlewareType` signature. + ``None`` by default which uses session middlewares. + See :ref:`aiohttp-client-middleware` for more information. + + .. versionadded:: 3.12 + :param int read_bufsize: Size of the read buffer (:attr:`ClientResponse.content`). ``None`` by default, it means that the session global value is used. 
diff --git a/tests/test_client_functional.py b/tests/test_client_functional.py index 0ea3ce1619a..1154c7e5805 100644 --- a/tests/test_client_functional.py +++ b/tests/test_client_functional.py @@ -12,11 +12,12 @@ import tarfile import time import zipfile -from typing import Any, AsyncIterator, Awaitable, Callable, List, Type +from typing import Any, AsyncIterator, Awaitable, Callable, List, NoReturn, Type from unittest import mock import pytest from multidict import MultiDict +from pytest_mock import MockerFixture from yarl import URL import aiohttp @@ -1065,7 +1066,31 @@ async def handler(request): assert resp.status == 200 -async def test_readline_error_on_conn_close(aiohttp_client) -> None: +async def test_connection_timeout_error( + aiohttp_client: AiohttpClient, mocker: MockerFixture +) -> None: + """Test that ConnectionTimeoutError is raised when connection times out.""" + + async def handler(request: web.Request) -> NoReturn: + assert False, "Handler should not be called" + + app = web.Application() + app.router.add_route("GET", "/", handler) + client = await aiohttp_client(app) + + # Mock the connector's connect method to raise asyncio.TimeoutError + mock_connect = mocker.patch.object( + client.session._connector, "connect", side_effect=asyncio.TimeoutError() + ) + + with pytest.raises(aiohttp.ConnectionTimeoutError) as exc_info: + await client.get("/", timeout=aiohttp.ClientTimeout(connect=0.01)) + + assert "Connection timeout to host" in str(exc_info.value) + mock_connect.assert_called_once() + + +async def test_readline_error_on_conn_close(aiohttp_client: AiohttpClient) -> None: loop = asyncio.get_event_loop() async def handler(request): diff --git a/tests/test_client_middleware.py b/tests/test_client_middleware.py new file mode 100644 index 00000000000..7effa31c9f0 --- /dev/null +++ b/tests/test_client_middleware.py @@ -0,0 +1,1116 @@ +"""Tests for client middleware.""" + +import json +import socket +from typing import Dict, List, NoReturn, Optional, Union + +import pytest + +from aiohttp import ( + ClientError, + ClientHandlerType, + ClientRequest, + ClientResponse, + ClientSession, + ClientTimeout, + TCPConnector, + web, +) +from aiohttp.abc import ResolveResult +from aiohttp.client_middlewares import build_client_middlewares +from aiohttp.client_proto import ResponseHandler +from aiohttp.pytest_plugin import AiohttpServer +from aiohttp.resolver import ThreadedResolver +from aiohttp.tracing import Trace + + +class BlockedByMiddleware(ClientError): + """Custom exception for when middleware blocks a request.""" + + +async def test_client_middleware_called(aiohttp_server: AiohttpServer) -> None: + """Test that client middleware is called.""" + middleware_called = False + request_count = 0 + + async def handler(request: web.Request) -> web.Response: + nonlocal request_count + request_count += 1 + return web.Response(text=f"OK {request_count}") + + async def test_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + nonlocal middleware_called + middleware_called = True + response = await handler(request) + return response + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with ClientSession(middlewares=(test_middleware,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "OK 1" + + assert middleware_called is True + assert request_count == 1 + + +async def 
test_client_middleware_retry(aiohttp_server: AiohttpServer) -> None: + """Test that middleware can trigger retries.""" + request_count = 0 + + async def handler(request: web.Request) -> web.Response: + nonlocal request_count + request_count += 1 + if request_count == 1: + return web.Response(status=503) + return web.Response(text=f"OK {request_count}") + + async def retry_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + retry_count = 0 + while True: + response = await handler(request) + if response.status == 503 and retry_count < 1: + retry_count += 1 + continue + return response + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with ClientSession(middlewares=(retry_middleware,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "OK 2" + + assert request_count == 2 + + +async def test_client_middleware_per_request(aiohttp_server: AiohttpServer) -> None: + """Test that middleware can be specified per request.""" + session_middleware_called = False + request_middleware_called = False + + async def handler(request: web.Request) -> web.Response: + return web.Response(text="OK") + + async def session_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + nonlocal session_middleware_called + session_middleware_called = True + response = await handler(request) + return response + + async def request_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + nonlocal request_middleware_called + request_middleware_called = True + response = await handler(request) + return response + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + # Request with session middleware + async with ClientSession(middlewares=(session_middleware,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + + assert session_middleware_called is True + assert request_middleware_called is False + + # Reset flags + session_middleware_called = False + + # Request with override middleware + async with ClientSession(middlewares=(session_middleware,)) as session: + async with session.get( + server.make_url("/"), middlewares=(request_middleware,) + ) as resp: + assert resp.status == 200 + + assert session_middleware_called is False + assert request_middleware_called is True + + +async def test_multiple_client_middlewares(aiohttp_server: AiohttpServer) -> None: + """Test that multiple middlewares are executed in order.""" + calls: list[str] = [] + + async def handler(request: web.Request) -> web.Response: + return web.Response(text="OK") + + async def middleware1( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + calls.append("before1") + response = await handler(request) + calls.append("after1") + return response + + async def middleware2( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + calls.append("before2") + response = await handler(request) + calls.append("after2") + return response + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with ClientSession(middlewares=(middleware1, middleware2)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + + # Middlewares are applied in reverse order (like server middlewares) + # So 
middleware1 wraps middleware2 + assert calls == ["before1", "before2", "after2", "after1"] + + +async def test_client_middleware_auth_example(aiohttp_server: AiohttpServer) -> None: + """Test an authentication middleware example.""" + + async def handler(request: web.Request) -> web.Response: + auth_header = request.headers.get("Authorization") + if auth_header == "Bearer valid-token": + return web.Response(text="Authenticated") + return web.Response(status=401, text="Unauthorized") + + async def auth_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + # Add authentication header before request + request.headers["Authorization"] = "Bearer valid-token" + response = await handler(request) + return response + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + # Without middleware - should fail + async with ClientSession() as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 401 + + # With middleware - should succeed + async with ClientSession(middlewares=(auth_middleware,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "Authenticated" + + +async def test_client_middleware_challenge_auth(aiohttp_server: AiohttpServer) -> None: + """Test authentication middleware with challenge/response pattern like digest auth.""" + request_count = 0 + challenge_token = "challenge-123" + + async def handler(request: web.Request) -> web.Response: + nonlocal request_count + request_count += 1 + + auth_header = request.headers.get("Authorization") + + # First request - no auth header, return challenge + if request_count == 1 and not auth_header: + return web.Response( + status=401, + headers={ + "WWW-Authenticate": f'Custom realm="test", nonce="{challenge_token}"' + }, + ) + + # Subsequent requests - check for correct auth with challenge + if auth_header == f'Custom response="{challenge_token}-secret"': + return web.Response(text="Authenticated") + + assert False, "Should not reach here - invalid auth scenario" + + async def challenge_auth_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + challenge_data: Dict[str, Union[bool, str, None]] = { + "nonce": None, + "attempted": False, + } + + while True: + # If we have challenge data from previous attempt, add auth header + if challenge_data["nonce"] and challenge_data["attempted"]: + request.headers["Authorization"] = ( + f'Custom response="{challenge_data["nonce"]}-secret"' + ) + + response = await handler(request) + + # If we get a 401 with challenge, store it and retry + if response.status == 401 and not challenge_data["attempted"]: + www_auth = response.headers.get("WWW-Authenticate") + if www_auth and "nonce=" in www_auth: # pragma: no branch + # Extract nonce from authentication header + nonce_start = www_auth.find('nonce="') + 7 + nonce_end = www_auth.find('"', nonce_start) + challenge_data["nonce"] = www_auth[nonce_start:nonce_end] + challenge_data["attempted"] = True + continue + + return response + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with ClientSession(middlewares=(challenge_auth_middleware,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "Authenticated" + + # Should have made 2 requests: initial and retry with auth + assert 
request_count == 2 + + +async def test_client_middleware_multi_step_auth(aiohttp_server: AiohttpServer) -> None: + """Test middleware with multi-step authentication flow.""" + auth_state: dict[str, int] = {} + middleware_state: Dict[str, Optional[Union[int, str]]] = { + "step": 0, + "session": None, + "challenge": None, + } + + async def handler(request: web.Request) -> web.Response: + client_id = request.headers.get("X-Client-ID", "unknown") + auth_header = request.headers.get("Authorization") + step = auth_state.get(client_id, 0) + + # Step 0: No auth, request client ID + if step == 0 and not auth_header: + auth_state[client_id] = 1 + return web.Response( + status=401, headers={"X-Auth-Step": "1", "X-Session": "session-123"} + ) + + # Step 1: Has session, request credentials + if step == 1 and auth_header == "Bearer session-123": + auth_state[client_id] = 2 + return web.Response( + status=401, headers={"X-Auth-Step": "2", "X-Challenge": "challenge-456"} + ) + + # Step 2: Has challenge response, authenticate + if step == 2 and auth_header == "Bearer challenge-456-response": + return web.Response(text="Authenticated") + + assert False, "Should not reach here - invalid multi-step auth flow" + + async def multi_step_auth_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + request.headers["X-Client-ID"] = "test-client" + + while True: + # Apply auth based on current state + if middleware_state["step"] == 1 and middleware_state["session"]: + request.headers["Authorization"] = ( + f"Bearer {middleware_state['session']}" + ) + elif middleware_state["step"] == 2 and middleware_state["challenge"]: + request.headers["Authorization"] = ( + f"Bearer {middleware_state['challenge']}-response" + ) + + response = await handler(request) + + # Handle multi-step auth flow + if response.status == 401: + auth_step = response.headers.get("X-Auth-Step") + + if auth_step == "1": + # First step: store session token + middleware_state["session"] = response.headers.get("X-Session") + middleware_state["step"] = 1 + continue + + elif auth_step == "2": # pragma: no branch + # Second step: store challenge + middleware_state["challenge"] = response.headers.get("X-Challenge") + middleware_state["step"] = 2 + continue + + return response + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with ClientSession(middlewares=(multi_step_auth_middleware,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "Authenticated" + + +async def test_client_middleware_conditional_retry( + aiohttp_server: AiohttpServer, +) -> None: + """Test middleware with conditional retry based on response content.""" + request_count = 0 + token_state: Dict[str, Union[str, bool]] = { + "token": "old-token", + "refreshed": False, + } + + async def handler(request: web.Request) -> web.Response: + nonlocal request_count + request_count += 1 + + auth_token = request.headers.get("X-Auth-Token") + + if request_count == 1: + # First request returns expired token error + return web.json_response( + {"error": "token_expired", "refresh_required": True}, status=401 + ) + + if auth_token == "refreshed-token": + return web.json_response({"data": "success"}) + + assert False, "Should not reach here - invalid token refresh flow" + + async def token_refresh_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + while True: + # Add token to request + 
request.headers["X-Auth-Token"] = str(token_state["token"]) + + response = await handler(request) + + # Check if token needs refresh + if response.status == 401 and not token_state["refreshed"]: + data = await response.json() + if data.get("error") == "token_expired" and data.get( + "refresh_required" + ): # pragma: no branch + # Simulate token refresh + token_state["token"] = "refreshed-token" + token_state["refreshed"] = True + continue + + return response + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with ClientSession(middlewares=(token_refresh_middleware,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + data = await resp.json() + assert data == {"data": "success"} + + assert request_count == 2 # Initial request + retry after refresh + + +async def test_build_client_middlewares_empty() -> None: + """Test build_client_middlewares with empty middlewares.""" + + async def handler(request: ClientRequest) -> NoReturn: + """Dummy handler.""" + assert False + + # Test empty case + result = build_client_middlewares(handler, ()) + assert result is handler # Should return handler unchanged + + +async def test_client_middleware_class_based_auth( + aiohttp_server: AiohttpServer, +) -> None: + """Test middleware using class-based pattern with instance state.""" + + class TokenAuthMiddleware: + """Middleware that handles token-based authentication.""" + + def __init__(self, token: str) -> None: + self.token = token + self.request_count = 0 + + async def __call__( + self, request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + self.request_count += 1 + request.headers["Authorization"] = f"Bearer {self.token}" + return await handler(request) + + async def handler(request: web.Request) -> web.Response: + auth_header = request.headers.get("Authorization") + if auth_header == "Bearer test-token": + return web.Response(text="Authenticated") + assert False, "Should not reach here - class auth should always have token" + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + # Create middleware instance + auth_middleware = TokenAuthMiddleware("test-token") + + async with ClientSession(middlewares=(auth_middleware,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "Authenticated" + + # Verify the middleware was called + assert auth_middleware.request_count == 1 + + +async def test_client_middleware_stateful_retry(aiohttp_server: AiohttpServer) -> None: + """Test retry middleware using class with state management.""" + + class RetryMiddleware: + """Middleware that retries failed requests with backoff.""" + + def __init__(self, max_retries: int = 3) -> None: + self.max_retries = max_retries + self.retry_counts: Dict[int, int] = {} # Track retries per request + + async def __call__( + self, request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + retry_count = 0 + + while True: + response = await handler(request) + + if response.status >= 500 and retry_count < self.max_retries: + retry_count += 1 + continue + + return response + + request_count = 0 + + async def handler(request: web.Request) -> web.Response: + nonlocal request_count + request_count += 1 + + if request_count < 3: + return web.Response(status=503) + return web.Response(text="Success") + + app = web.Application() + app.router.add_get("/", handler) + server = 
await aiohttp_server(app) + + retry_middleware = RetryMiddleware(max_retries=2) + + async with ClientSession(middlewares=(retry_middleware,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "Success" + + assert request_count == 3 # Initial + 2 retries + + +async def test_client_middleware_multiple_instances( + aiohttp_server: AiohttpServer, +) -> None: + """Test using multiple instances of the same middleware class.""" + + class HeaderMiddleware: + """Middleware that adds a header with instance-specific value.""" + + def __init__(self, header_name: str, header_value: str) -> None: + self.header_name = header_name + self.header_value = header_value + self.applied = False + + async def __call__( + self, request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + self.applied = True + request.headers[self.header_name] = self.header_value + return await handler(request) + + headers_received = {} + + async def handler(request: web.Request) -> web.Response: + headers_received.update(dict(request.headers)) + return web.Response(text="OK") + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + # Create two instances with different headers + middleware1 = HeaderMiddleware("X-Custom-1", "value1") + middleware2 = HeaderMiddleware("X-Custom-2", "value2") + + async with ClientSession(middlewares=(middleware1, middleware2)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + + # Both middlewares should have been applied + assert middleware1.applied is True + assert middleware2.applied is True + assert headers_received.get("X-Custom-1") == "value1" + assert headers_received.get("X-Custom-2") == "value2" + + +async def test_client_middleware_disable_with_empty_tuple( + aiohttp_server: AiohttpServer, +) -> None: + """Test that passing middlewares=() to a request disables session-level middlewares.""" + session_middleware_called = False + request_middleware_called = False + + async def handler(request: web.Request) -> web.Response: + auth_header = request.headers.get("Authorization") + if auth_header: + return web.Response(text=f"Auth: {auth_header}") + return web.Response(text="No auth") + + async def session_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + nonlocal session_middleware_called + session_middleware_called = True + request.headers["Authorization"] = "Bearer session-token" + response = await handler(request) + return response + + async def request_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + nonlocal request_middleware_called + request_middleware_called = True + request.headers["Authorization"] = "Bearer request-token" + response = await handler(request) + return response + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + # Create session with middleware + async with ClientSession(middlewares=(session_middleware,)) as session: + # First request uses session middleware + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "Auth: Bearer session-token" + assert session_middleware_called is True + assert request_middleware_called is False + + # Reset flags + session_middleware_called = False + request_middleware_called = False + + # Second request explicitly disables middlewares + async 
with session.get(server.make_url("/"), middlewares=()) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "No auth" + assert session_middleware_called is False + assert request_middleware_called is False + + # Reset flags + session_middleware_called = False + request_middleware_called = False + + # Third request uses request-specific middleware + async with session.get( + server.make_url("/"), middlewares=(request_middleware,) + ) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "Auth: Bearer request-token" + assert session_middleware_called is False + assert request_middleware_called is True + + +@pytest.mark.parametrize( + "exception_class,match_text", + [ + (ValueError, "Middleware error"), + (ClientError, "Client error from middleware"), + (OSError, "OS error from middleware"), + ], +) +async def test_client_middleware_exception_closes_connection( + aiohttp_server: AiohttpServer, + exception_class: type[Exception], + match_text: str, +) -> None: + """Test that connections are closed when middleware raises an exception.""" + + async def handler(request: web.Request) -> NoReturn: + assert False, "Handler should not be reached" + + async def failing_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> NoReturn: + # Raise exception before the handler is called + raise exception_class(match_text) + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + # Create custom connector + connector = TCPConnector() + + async with ClientSession( + connector=connector, middlewares=(failing_middleware,) + ) as session: + # Make request that should fail in middleware + with pytest.raises(exception_class, match=match_text): + await session.get(server.make_url("/")) + + # Check that the connector has no active connections + # If connections were properly closed, _conns should be empty + assert len(connector._conns) == 0 + + await connector.close() + + +async def test_client_middleware_blocks_connection_before_established( + aiohttp_server: AiohttpServer, +) -> None: + """Test that middleware can block connections before they are established.""" + blocked_hosts = {"blocked.example.com", "evil.com"} + connection_attempts: List[str] = [] + + async def handler(request: web.Request) -> web.Response: + return web.Response(text="Reached") + + async def blocking_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + # Record the connection attempt + connection_attempts.append(str(request.url)) + + # Block requests to certain hosts + if request.url.host in blocked_hosts: + raise BlockedByMiddleware(f"Connection to {request.url.host} is blocked") + + # Allow the request to proceed + return await handler(request) + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + connector = TCPConnector() + async with ClientSession( + connector=connector, middlewares=(blocking_middleware,) + ) as session: + # Test allowed request + allowed_url = server.make_url("/") + async with session.get(allowed_url) as resp: + assert resp.status == 200 + assert await resp.text() == "Reached" + + # Test blocked request + with pytest.raises(BlockedByMiddleware) as exc_info: + # Use a fake URL that would fail DNS if connection was attempted + await session.get("https://blocked.example.com/") + + assert "Connection to blocked.example.com is blocked" in str(exc_info.value) + + # Test another blocked host + with 
pytest.raises(BlockedByMiddleware) as exc_info: + await session.get("https://evil.com/path") + + assert "Connection to evil.com is blocked" in str(exc_info.value) + + # Verify that connections were attempted in the correct order + assert len(connection_attempts) == 3 + assert allowed_url.host and allowed_url.host in connection_attempts[0] + assert "blocked.example.com" in connection_attempts[1] + assert "evil.com" in connection_attempts[2] + + # Check that no connections were leaked + assert len(connector._conns) == 0 + + +async def test_client_middleware_blocks_connection_without_dns_lookup( + aiohttp_server: AiohttpServer, +) -> None: + """Test that middleware prevents DNS lookups for blocked hosts.""" + blocked_hosts = {"blocked.domain.tld"} + dns_lookups_made: List[str] = [] + + # Create a simple server for the allowed request + async def handler(request: web.Request) -> web.Response: + return web.Response(text="OK") + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + class TrackingResolver(ThreadedResolver): + async def resolve( + self, + hostname: str, + port: int = 0, + family: socket.AddressFamily = socket.AF_INET, + ) -> List[ResolveResult]: + dns_lookups_made.append(hostname) + return await super().resolve(hostname, port, family) + + async def blocking_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + # Block requests to certain hosts before DNS lookup + if request.url.host in blocked_hosts: + raise BlockedByMiddleware(f"Blocked by policy: {request.url.host}") + + return await handler(request) + + resolver = TrackingResolver() + connector = TCPConnector(resolver=resolver) + async with ClientSession( + connector=connector, middlewares=(blocking_middleware,) + ) as session: + # Test blocked request to non-existent domain + with pytest.raises(BlockedByMiddleware) as exc_info: + await session.get("https://blocked.domain.tld/") + + assert "Blocked by policy: blocked.domain.tld" in str(exc_info.value) + + # Verify that no DNS lookup was made for the blocked domain + assert "blocked.domain.tld" not in dns_lookups_made + + # Test allowed request to existing server - this should trigger DNS lookup + async with session.get(f"http://localhost:{server.port}") as resp: + assert resp.status == 200 + + # Verify that DNS lookup was made for the allowed request + # The server might use a hostname that requires DNS resolution + assert len(dns_lookups_made) > 0 + + # Make sure blocked domain is still not in DNS lookups + assert "blocked.domain.tld" not in dns_lookups_made + + # Clean up + await connector.close() + + +async def test_client_middleware_retry_reuses_connection( + aiohttp_server: AiohttpServer, +) -> None: + """Test that connections are reused when middleware performs retries.""" + + async def handler(request: web.Request) -> web.Response: + return web.Response(text="OK") + + class TrackingConnector(TCPConnector): + """Connector that tracks connection attempts.""" + + connection_attempts = 0 + + async def _create_connection( + self, req: ClientRequest, traces: List["Trace"], timeout: "ClientTimeout" + ) -> ResponseHandler: + self.connection_attempts += 1 + return await super()._create_connection(req, traces, timeout) + + class RetryOnceMiddleware: + """Middleware that retries exactly once.""" + + def __init__(self) -> None: + self.attempt_count = 0 + + async def __call__( + self, request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + retry_count = 0 + while True: + 
self.attempt_count += 1 + if retry_count == 0: + retry_count += 1 + await handler(request) + continue + return await handler(request) + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + connector = TrackingConnector() + middleware = RetryOnceMiddleware() + + async with ClientSession(connector=connector, middlewares=(middleware,)) as session: + # Make initial request + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "OK" + + # Should have made 2 request attempts (initial + 1 retry) + assert middleware.attempt_count == 2 + # Should have created only 1 connection (reused on retry) + assert connector.connection_attempts == 1 + + await connector.close() + + +async def test_middleware_uses_session_avoids_recursion_with_path_check( + aiohttp_server: AiohttpServer, +) -> None: + """Test that middleware can avoid infinite recursion using a path check.""" + log_collector: List[Dict[str, str]] = [] + + async def log_api_handler(request: web.Request) -> web.Response: + """Handle log API requests.""" + data: Dict[str, str] = await request.json() + log_collector.append(data) + return web.Response(text="OK") + + async def main_handler(request: web.Request) -> web.Response: + """Handle main server requests.""" + return web.Response(text=f"Hello from {request.path}") + + # Create log API server + log_app = web.Application() + log_app.router.add_post("/log", log_api_handler) + log_server = await aiohttp_server(log_app) + + # Create main server + main_app = web.Application() + main_app.router.add_get("/{path:.*}", main_handler) + main_server = await aiohttp_server(main_app) + + async def log_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + """Log requests to external API, avoiding recursion with path check.""" + # Avoid infinite recursion by not logging requests to the /log endpoint + if request.url.path != "/log": + # Use the session from the request to make the logging call + async with request.session.post( + f"http://localhost:{log_server.port}/log", + json={"method": str(request.method), "url": str(request.url)}, + ) as resp: + assert resp.status == 200 + + return await handler(request) + + # Create session with the middleware + async with ClientSession(middlewares=(log_middleware,)) as session: + # Make request to main server - should be logged + async with session.get(main_server.make_url("/test")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "Hello from /test" + + # Make direct request to log API - should NOT be logged (avoid recursion) + async with session.post( + log_server.make_url("/log"), + json={"method": "DIRECT_POST", "url": "manual_test_entry"}, + ) as resp: + assert resp.status == 200 + + # Check logs + # The first request should be logged + # The second request (to /log) should also be logged but not the middleware's own log request + assert len(log_collector) == 2 + assert log_collector[0]["method"] == "GET" + assert log_collector[0]["url"] == str(main_server.make_url("/test")) + assert log_collector[1]["method"] == "DIRECT_POST" + assert log_collector[1]["url"] == "manual_test_entry" + + +async def test_middleware_uses_session_avoids_recursion_with_disabled_middleware( + aiohttp_server: AiohttpServer, +) -> None: + """Test that middleware can avoid infinite recursion by disabling middleware.""" + log_collector: List[Dict[str, str]] = [] + request_count = 0 + + async def 
log_api_handler(request: web.Request) -> web.Response: + """Handle log API requests.""" + nonlocal request_count + request_count += 1 + data: Dict[str, str] = await request.json() + log_collector.append(data) + return web.Response(text="OK") + + async def main_handler(request: web.Request) -> web.Response: + """Handle main server requests.""" + return web.Response(text=f"Hello from {request.path}") + + # Create log API server + log_app = web.Application() + log_app.router.add_post("/log", log_api_handler) + log_server = await aiohttp_server(log_app) + + # Create main server + main_app = web.Application() + main_app.router.add_get("/{path:.*}", main_handler) + main_server = await aiohttp_server(main_app) + + async def log_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + """Log all requests using session with disabled middleware.""" + # Use the session from the request to make the logging call + # Disable middleware to avoid infinite recursion + async with request.session.post( + f"http://localhost:{log_server.port}/log", + json={"method": str(request.method), "url": str(request.url)}, + middlewares=(), # This prevents infinite recursion + ) as resp: + assert resp.status == 200 + + return await handler(request) + + # Create session with the middleware + async with ClientSession(middlewares=(log_middleware,)) as session: + # Make request to main server - should be logged + async with session.get(main_server.make_url("/test")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "Hello from /test" + + # Make another request - should also be logged + async with session.get(main_server.make_url("/another")) as resp: + assert resp.status == 200 + + # Check logs - both requests should be logged + assert len(log_collector) == 2 + assert log_collector[0]["method"] == "GET" + assert log_collector[0]["url"] == str(main_server.make_url("/test")) + assert log_collector[1]["method"] == "GET" + assert log_collector[1]["url"] == str(main_server.make_url("/another")) + + # Ensure that log requests were made without the middleware + # (request_count equals number of logged requests, not infinite) + assert request_count == 2 + + +async def test_middleware_can_check_request_body( + aiohttp_server: AiohttpServer, +) -> None: + """Test that middleware can check request body.""" + received_bodies: List[str] = [] + received_headers: List[Dict[str, str]] = [] + + async def handler(request: web.Request) -> web.Response: + """Server handler that receives requests.""" + body = await request.text() + received_bodies.append(body) + received_headers.append(dict(request.headers)) + return web.Response(text="OK") + + app = web.Application() + app.router.add_post("/api", handler) + app.router.add_get("/api", handler) # Add GET handler too + server = await aiohttp_server(app) + + class CustomAuth: + """Middleware that follows the GitHub discussion pattern for authentication.""" + + def __init__(self, secretkey: str) -> None: + self.secretkey = secretkey + + def get_hash(self, request: ClientRequest) -> str: + if request.body: + data = request.body.decode("utf-8") + else: + data = "{}" + + # Simulate authentication hash without using real crypto + signature = f"SIGNATURE-{self.secretkey}-{len(data)}-{data[:10]}" + return signature + + async def __call__( + self, request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + request.headers["CUSTOM-AUTH"] = self.get_hash(request) + return await handler(request) + + middleware = 
CustomAuth("test-secret-key") + + async with ClientSession(middlewares=(middleware,)) as session: + # Test 1: Send JSON data with user/action + data1 = {"user": "alice", "action": "login"} + json_str1 = json.dumps(data1) + async with session.post( + server.make_url("/api"), + data=json_str1, + headers={"Content-Type": "application/json"}, + ) as resp: + assert resp.status == 200 + + # Test 2: Send JSON data with different fields + data2 = {"user": "bob", "value": 42} + json_str2 = json.dumps(data2) + async with session.post( + server.make_url("/api"), + data=json_str2, + headers={"Content-Type": "application/json"}, + ) as resp: + assert resp.status == 200 + + # Test 3: Send GET request with no body + async with session.get(server.make_url("/api")) as resp: + assert resp.status == 200 # GET with empty body still should validate + + # Test 4: Send plain text (non-JSON) + text_data = "plain text body" + async with session.post( + server.make_url("/api"), + data=text_data, + headers={"Content-Type": "text/plain"}, + ) as resp: + assert resp.status == 200 + + # Verify server received the correct headers with authentication + headers1 = received_headers[0] + assert ( + headers1["CUSTOM-AUTH"] + == f"SIGNATURE-test-secret-key-{len(json_str1)}-{json_str1[:10]}" + ) + + headers2 = received_headers[1] + assert ( + headers2["CUSTOM-AUTH"] + == f"SIGNATURE-test-secret-key-{len(json_str2)}-{json_str2[:10]}" + ) + + headers3 = received_headers[2] + # GET request with no body should have empty JSON body + assert headers3["CUSTOM-AUTH"] == "SIGNATURE-test-secret-key-2-{}" + + headers4 = received_headers[3] + assert ( + headers4["CUSTOM-AUTH"] + == f"SIGNATURE-test-secret-key-{len(text_data)}-{text_data[:10]}" + ) + + # Verify all responses were successful + assert received_bodies[0] == json_str1 + assert received_bodies[1] == json_str2 + assert received_bodies[2] == "" # GET request has no body + assert received_bodies[3] == text_data From 6473180d2485efbf1ee1d6b763b4fc64359aa3d3 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Mon, 19 May 2025 17:18:14 +0000 Subject: [PATCH 41/90] [PR #10884/d758b7ae backport][3.12] Fix flakey middleware connection reuse test (#10887) Co-authored-by: J. 
Nick Koston --- tests/test_client_middleware.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/tests/test_client_middleware.py b/tests/test_client_middleware.py index 7effa31c9f0..2f79e4fd774 100644 --- a/tests/test_client_middleware.py +++ b/tests/test_client_middleware.py @@ -847,11 +847,12 @@ async def __call__( retry_count = 0 while True: self.attempt_count += 1 + response = await handler(request) if retry_count == 0: retry_count += 1 - await handler(request) + response.release() # Release the response to enable connection reuse continue - return await handler(request) + return response app = web.Application() app.router.add_get("/", handler) From 50dd4c6444a6061e810c10a15fa5e24c2f8b4b08 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Mon, 19 May 2025 14:16:34 -0400 Subject: [PATCH 42/90] [PR #10880/3c9d7abf backport][3.12] Add invalid content type test docs (#10886) --- docs/client_reference.rst | 7 ++++++- tests/test_web_response.py | 7 +++++++ 2 files changed, 13 insertions(+), 1 deletion(-) diff --git a/docs/client_reference.rst b/docs/client_reference.rst index 039419ba965..afe6c720d78 100644 --- a/docs/client_reference.rst +++ b/docs/client_reference.rst @@ -1506,7 +1506,12 @@ Response object Returns value is ``'application/octet-stream'`` if no Content-Type header present in HTTP headers according to - :rfc:`2616`. To make sure Content-Type header is not present in + :rfc:`9110`. If the *Content-Type* header is invalid (e.g., ``jpg`` + instead of ``image/jpeg``), the value is ``text/plain`` by default + according to :rfc:`2045`. To see the original header check + ``resp.headers['CONTENT-TYPE']``. + + To make sure Content-Type header is not present in the server reply, use :attr:`headers` or :attr:`raw_headers`, e.g. ``'CONTENT-TYPE' not in resp.headers``. diff --git a/tests/test_web_response.py b/tests/test_web_response.py index b7758f46baa..68ffe211f20 100644 --- a/tests/test_web_response.py +++ b/tests/test_web_response.py @@ -1164,6 +1164,13 @@ def test_ctor_content_type_with_extra() -> None: assert resp.headers["content-type"] == "text/plain; version=0.0.4; charset=utf-8" +def test_invalid_content_type_parses_to_text_plain() -> None: + resp = Response(text="test test", content_type="jpeg") + + assert resp.content_type == "text/plain" + assert resp.headers["content-type"] == "jpeg; charset=utf-8" + + def test_ctor_both_content_type_param_and_header_with_text() -> None: with pytest.raises(ValueError): Response( From 1cfe02881b49b3354bc7bbee59b0db920de5ef0d Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Mon, 19 May 2025 14:16:57 -0400 Subject: [PATCH 43/90] [PR #10881/4facc402 backport][3.12] Remove License from setup.cfg (#10883) --- CHANGES/10662.packaging.rst | 1 + CONTRIBUTORS.txt | 1 + setup.cfg | 2 -- 3 files changed, 2 insertions(+), 2 deletions(-) create mode 100644 CHANGES/10662.packaging.rst diff --git a/CHANGES/10662.packaging.rst b/CHANGES/10662.packaging.rst new file mode 100644 index 00000000000..2ed3a69cb56 --- /dev/null +++ b/CHANGES/10662.packaging.rst @@ -0,0 +1 @@ +Removed non SPDX-license description from ``setup.cfg`` -- by :user:`devanshu-ziphq`. 
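The diff below only removes the ``License :: OSI Approved :: Apache Software License`` trove classifier from ``setup.cfg``. For context, a minimal sketch of SPDX-style metadata that could carry the license instead is shown here; the field values are illustrative, not the project's actual configuration::

    [metadata]
    name = aiohttp
    # An SPDX identifier in the ``license`` field replaces the
    # deprecated "License :: OSI Approved" trove classifier.
    license = Apache-2.0
    classifiers =
        Intended Audience :: Developers
        Operating System :: POSIX
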
diff --git a/CONTRIBUTORS.txt b/CONTRIBUTORS.txt index 3815ae6829d..c70c86cf671 100644 --- a/CONTRIBUTORS.txt +++ b/CONTRIBUTORS.txt @@ -100,6 +100,7 @@ Denilson Amorim Denis Matiychuk Denis Moshensky Dennis Kliban +Devanshu Koyalkar Dima Veselov Dimitar Dimitrov Diogo Dutra da Mata diff --git a/setup.cfg b/setup.cfg index 649a5aaa4eb..23e56d61d00 100644 --- a/setup.cfg +++ b/setup.cfg @@ -25,8 +25,6 @@ classifiers = Intended Audience :: Developers - License :: OSI Approved :: Apache Software License - Operating System :: POSIX Operating System :: MacOS :: MacOS X Operating System :: Microsoft :: Windows From ad7ee7cb4bcd9222af99b38c7e92452ee55566b4 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Tue, 20 May 2025 15:05:25 +0000 Subject: [PATCH 44/90] [PR #10725/ab76c5a6 backport][3.12] Digest Authentication Middleware for aiohttp (#10894) Co-authored-by: Tim Menninger Co-authored-by: jf Co-authored-by: J. Nick Koston Co-authored-by: J. Nick Koston Co-authored-by: Sam Bull closes #2213 closes #4939 --- CHANGES/10725.feature.rst | 6 + CHANGES/2213.feature.rst | 1 + CONTRIBUTORS.txt | 1 + aiohttp/__init__.py | 2 + aiohttp/client_middleware_digest_auth.py | 416 ++++++++++ docs/client_advanced.rst | 20 + docs/client_reference.rst | 29 + docs/spelling_wordlist.txt | 1 + examples/digest_auth_qop_auth.py | 68 ++ tests/test_client_middleware_digest_auth.py | 801 ++++++++++++++++++++ 10 files changed, 1345 insertions(+) create mode 100644 CHANGES/10725.feature.rst create mode 120000 CHANGES/2213.feature.rst create mode 100644 aiohttp/client_middleware_digest_auth.py create mode 100644 examples/digest_auth_qop_auth.py create mode 100644 tests/test_client_middleware_digest_auth.py diff --git a/CHANGES/10725.feature.rst b/CHANGES/10725.feature.rst new file mode 100644 index 00000000000..2cb096a58e7 --- /dev/null +++ b/CHANGES/10725.feature.rst @@ -0,0 +1,6 @@ +Added a comprehensive HTTP Digest Authentication client middleware (DigestAuthMiddleware) +that implements RFC 7616. The middleware supports all standard hash algorithms +(MD5, SHA, SHA-256, SHA-512) with session variants, handles both 'auth' and +'auth-int' quality of protection options, and automatically manages the +authentication flow by intercepting 401 responses and retrying with proper +credentials -- by :user:`feus4177`, :user:`TimMenninger`, and :user:`bdraco`. 
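Before the implementation diff below, it may help to see the core RFC 7616 calculation in isolation. This self-contained sketch computes a digest ``response`` for the common MD5 / ``qop=auth`` case; all challenge and request values are made up for illustration and do not come from the middleware::

    import hashlib

    def h(data: str) -> str:
        # RFC 7616 H(data): lowercase hex digest of the hash function
        return hashlib.md5(data.encode()).hexdigest()

    # Illustrative challenge/request values (not from a real server)
    username, password = "user", "pass"
    realm, nonce = "example.com", "abc123"
    method, uri = "GET", "/protected"
    nc, cnonce, qop = "00000001", "0a4f113b", "auth"

    ha1 = h(f"{username}:{realm}:{password}")                 # H(A1)
    ha2 = h(f"{method}:{uri}")                                # H(A2)
    response = h(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")  # KD(H(A1), ...)
    print(response)

The middleware added below performs this same calculation with bytes throughout, and additionally supports the session (``-SESS``) algorithm variants and ``auth-int``, which folds a hash of the request body into A2.
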
diff --git a/CHANGES/2213.feature.rst b/CHANGES/2213.feature.rst new file mode 120000 index 00000000000..d118975e478 --- /dev/null +++ b/CHANGES/2213.feature.rst @@ -0,0 +1 @@ +10725.feature.rst \ No newline at end of file diff --git a/CONTRIBUTORS.txt b/CONTRIBUTORS.txt index c70c86cf671..32e6e119aa7 100644 --- a/CONTRIBUTORS.txt +++ b/CONTRIBUTORS.txt @@ -187,6 +187,7 @@ Jesus Cea Jian Zeng Jinkyu Yi Joel Watts +John Feusi John Parton Jon Nabozny Jonas Krüger Svensson diff --git a/aiohttp/__init__.py b/aiohttp/__init__.py index d18bab60d2e..4bc6a3a2b22 100644 --- a/aiohttp/__init__.py +++ b/aiohttp/__init__.py @@ -47,6 +47,7 @@ WSServerHandshakeError, request, ) +from .client_middleware_digest_auth import DigestAuthMiddleware from .client_middlewares import ClientHandlerType, ClientMiddlewareType from .compression_utils import set_zlib_backend from .connector import ( @@ -187,6 +188,7 @@ # helpers "BasicAuth", "ChainMapProxy", + "DigestAuthMiddleware", "ETag", "set_zlib_backend", # http diff --git a/aiohttp/client_middleware_digest_auth.py b/aiohttp/client_middleware_digest_auth.py new file mode 100644 index 00000000000..e9eb3ba82e2 --- /dev/null +++ b/aiohttp/client_middleware_digest_auth.py @@ -0,0 +1,416 @@ +""" +Digest authentication middleware for aiohttp client. + +This middleware implements HTTP Digest Authentication according to RFC 7616, +providing a more secure alternative to Basic Authentication. It supports all +standard hash algorithms including MD5, SHA, SHA-256, SHA-512 and their session +variants, as well as both 'auth' and 'auth-int' quality of protection (qop) options. +""" + +import hashlib +import os +import re +import time +from typing import ( + Callable, + Dict, + Final, + FrozenSet, + List, + Literal, + Tuple, + TypedDict, + Union, +) + +from yarl import URL + +from . import hdrs +from .client_exceptions import ClientError +from .client_middlewares import ClientHandlerType +from .client_reqrep import ClientRequest, ClientResponse + + +class DigestAuthChallenge(TypedDict, total=False): + realm: str + nonce: str + qop: str + algorithm: str + opaque: str + + +DigestFunctions: Dict[str, Callable[[bytes], "hashlib._Hash"]] = { + "MD5": hashlib.md5, + "MD5-SESS": hashlib.md5, + "SHA": hashlib.sha1, + "SHA-SESS": hashlib.sha1, + "SHA256": hashlib.sha256, + "SHA256-SESS": hashlib.sha256, + "SHA-256": hashlib.sha256, + "SHA-256-SESS": hashlib.sha256, + "SHA512": hashlib.sha512, + "SHA512-SESS": hashlib.sha512, + "SHA-512": hashlib.sha512, + "SHA-512-SESS": hashlib.sha512, +} + + +# Compile the regex pattern once at module level for performance +_HEADER_PAIRS_PATTERN = re.compile( + r'(\w+)\s*=\s*(?:"((?:[^"\\]|\\.)*)"|([^\s,]+))' + # | | | | | | | | | || | + # +----|--|-|-|--|----|------|----|--||-----|--> alphanumeric key + # +--|-|-|--|----|------|----|--||-----|--> maybe whitespace + # | | | | | | | || | + # +-|-|--|----|------|----|--||-----|--> = (delimiter) + # +-|--|----|------|----|--||-----|--> maybe whitespace + # | | | | | || | + # +--|----|------|----|--||-----|--> group quoted or unquoted + # | | | | || | + # +----|------|----|--||-----|--> if quoted... + # +------|----|--||-----|--> anything but " or \ + # +----|--||-----|--> escaped characters allowed + # +--||-----|--> or can be empty string + # || | + # +|-----|--> if unquoted... + # +-----|--> anything but , or + # +--> at least one char req'd +) + + +# RFC 7616: Challenge parameters to extract +CHALLENGE_FIELDS: Final[ + Tuple[Literal["realm", "nonce", "qop", "algorithm", "opaque"], ...] 
+] = ( + "realm", + "nonce", + "qop", + "algorithm", + "opaque", +) + +# Supported digest authentication algorithms +# Use a tuple of sorted keys for predictable documentation and error messages +SUPPORTED_ALGORITHMS: Final[Tuple[str, ...]] = tuple(sorted(DigestFunctions.keys())) + +# RFC 7616: Fields that require quoting in the Digest auth header +# These fields must be enclosed in double quotes in the Authorization header. +# Algorithm, qop, and nc are never quoted per RFC specifications. +# This frozen set is used by the template-based header construction to +# automatically determine which fields need quotes. +QUOTED_AUTH_FIELDS: Final[FrozenSet[str]] = frozenset( + {"username", "realm", "nonce", "uri", "response", "opaque", "cnonce"} +) + + +def escape_quotes(value: str) -> str: + """Escape double quotes for HTTP header values.""" + return value.replace('"', '\\"') + + +def unescape_quotes(value: str) -> str: + """Unescape double quotes in HTTP header values.""" + return value.replace('\\"', '"') + + +def parse_header_pairs(header: str) -> Dict[str, str]: + """ + Parse key-value pairs from WWW-Authenticate or similar HTTP headers. + + This function handles the complex format of WWW-Authenticate header values, + supporting both quoted and unquoted values, proper handling of commas in + quoted values, and whitespace variations per RFC 7616. + + Examples of supported formats: + - key1="value1", key2=value2 + - key1 = "value1" , key2="value, with, commas" + - key1=value1,key2="value2" + - realm="example.com", nonce="12345", qop="auth" + + Args: + header: The header value string to parse + + Returns: + Dictionary mapping parameter names to their values + """ + return { + stripped_key: unescape_quotes(quoted_val) if quoted_val else unquoted_val + for key, quoted_val, unquoted_val in _HEADER_PAIRS_PATTERN.findall(header) + if (stripped_key := key.strip()) + } + + +class DigestAuthMiddleware: + """ + HTTP digest authentication middleware for aiohttp client. + + This middleware intercepts 401 Unauthorized responses containing a Digest + authentication challenge, calculates the appropriate digest credentials, + and automatically retries the request with the proper Authorization header. + + Features: + - Handles all aspects of Digest authentication handshake automatically + - Supports all standard hash algorithms: + - MD5, MD5-SESS + - SHA, SHA-SESS + - SHA256, SHA256-SESS, SHA-256, SHA-256-SESS + - SHA512, SHA512-SESS, SHA-512, SHA-512-SESS + - Supports 'auth' and 'auth-int' quality of protection modes + - Properly handles quoted strings and parameter parsing + - Includes replay attack protection with client nonce count tracking + + Standards compliance: + - RFC 7616: HTTP Digest Access Authentication (primary reference) + - RFC 2617: HTTP Authentication (deprecated by RFC 7616) + - RFC 1945: Section 11.1 (username restrictions) + + Implementation notes: + The core digest calculation is inspired by the implementation in + https://github.com/requests/requests/blob/v2.18.4/requests/auth.py + with added support for modern digest auth features and error handling. 
+ """ + + def __init__( + self, + login: str, + password: str, + ) -> None: + if login is None: + raise ValueError("None is not allowed as login value") + + if password is None: + raise ValueError("None is not allowed as password value") + + if ":" in login: + raise ValueError('A ":" is not allowed in username (RFC 1945#section-11.1)') + + self._login_str: Final[str] = login + self._login_bytes: Final[bytes] = login.encode("utf-8") + self._password_bytes: Final[bytes] = password.encode("utf-8") + + self._last_nonce_bytes = b"" + self._nonce_count = 0 + self._challenge: DigestAuthChallenge = {} + + def _encode(self, method: str, url: URL, body: Union[bytes, str]) -> str: + """ + Build digest authorization header for the current challenge. + + Args: + method: The HTTP method (GET, POST, etc.) + url: The request URL + body: The request body (used for qop=auth-int) + + Returns: + A fully formatted Digest authorization header string + + Raises: + ClientError: If the challenge is missing required parameters or + contains unsupported values + """ + challenge = self._challenge + if "realm" not in challenge: + raise ClientError( + "Malformed Digest auth challenge: Missing 'realm' parameter" + ) + + if "nonce" not in challenge: + raise ClientError( + "Malformed Digest auth challenge: Missing 'nonce' parameter" + ) + + # Empty realm values are allowed per RFC 7616 (SHOULD, not MUST, contain host name) + realm = challenge["realm"] + nonce = challenge["nonce"] + + # Empty nonce values are not allowed as they are security-critical for replay protection + if not nonce: + raise ClientError( + "Security issue: Digest auth challenge contains empty 'nonce' value" + ) + + qop_raw = challenge.get("qop", "") + algorithm = challenge.get("algorithm", "MD5").upper() + opaque = challenge.get("opaque", "") + + # Convert string values to bytes once + nonce_bytes = nonce.encode("utf-8") + realm_bytes = realm.encode("utf-8") + path = URL(url).path_qs + + # Process QoP + qop = "" + qop_bytes = b"" + if qop_raw: + valid_qops = {"auth", "auth-int"}.intersection( + {q.strip() for q in qop_raw.split(",") if q.strip()} + ) + if not valid_qops: + raise ClientError( + f"Digest auth error: Unsupported Quality of Protection (qop) value(s): {qop_raw}" + ) + + qop = "auth-int" if "auth-int" in valid_qops else "auth" + qop_bytes = qop.encode("utf-8") + + if algorithm not in DigestFunctions: + raise ClientError( + f"Digest auth error: Unsupported hash algorithm: {algorithm}. 
" + f"Supported algorithms: {', '.join(SUPPORTED_ALGORITHMS)}" + ) + hash_fn: Final = DigestFunctions[algorithm] + + def H(x: bytes) -> bytes: + """RFC 7616 Section 3: Hash function H(data) = hex(hash(data)).""" + return hash_fn(x).hexdigest().encode() + + def KD(s: bytes, d: bytes) -> bytes: + """RFC 7616 Section 3: KD(secret, data) = H(concat(secret, ":", data)).""" + return H(b":".join((s, d))) + + # Calculate A1 and A2 + A1 = b":".join((self._login_bytes, realm_bytes, self._password_bytes)) + A2 = f"{method.upper()}:{path}".encode() + if qop == "auth-int": + if isinstance(body, str): + entity_str = body.encode("utf-8", errors="replace") + else: + entity_str = body + entity_hash = H(entity_str) + A2 = b":".join((A2, entity_hash)) + + HA1 = H(A1) + HA2 = H(A2) + + # Nonce count handling + if nonce_bytes == self._last_nonce_bytes: + self._nonce_count += 1 + else: + self._nonce_count = 1 + + self._last_nonce_bytes = nonce_bytes + ncvalue = f"{self._nonce_count:08x}" + ncvalue_bytes = ncvalue.encode("utf-8") + + # Generate client nonce + cnonce = hashlib.sha1( + b"".join( + [ + str(self._nonce_count).encode("utf-8"), + nonce_bytes, + time.ctime().encode("utf-8"), + os.urandom(8), + ] + ) + ).hexdigest()[:16] + cnonce_bytes = cnonce.encode("utf-8") + + # Special handling for session-based algorithms + if algorithm.upper().endswith("-SESS"): + HA1 = H(b":".join((HA1, nonce_bytes, cnonce_bytes))) + + # Calculate the response digest + if qop: + noncebit = b":".join( + (nonce_bytes, ncvalue_bytes, cnonce_bytes, qop_bytes, HA2) + ) + response_digest = KD(HA1, noncebit) + else: + response_digest = KD(HA1, b":".join((nonce_bytes, HA2))) + + # Define a dict mapping of header fields to their values + # Group fields into always-present, optional, and qop-dependent + header_fields = { + # Always present fields + "username": escape_quotes(self._login_str), + "realm": escape_quotes(realm), + "nonce": escape_quotes(nonce), + "uri": path, + "response": response_digest.decode(), + "algorithm": algorithm, + } + + # Optional fields + if opaque: + header_fields["opaque"] = escape_quotes(opaque) + + # QoP-dependent fields + if qop: + header_fields["qop"] = qop + header_fields["nc"] = ncvalue + header_fields["cnonce"] = cnonce + + # Build header using templates for each field type + pairs: List[str] = [] + for field, value in header_fields.items(): + if field in QUOTED_AUTH_FIELDS: + pairs.append(f'{field}="{value}"') + else: + pairs.append(f"{field}={value}") + + return f"Digest {', '.join(pairs)}" + + def _authenticate(self, response: ClientResponse) -> bool: + """ + Takes the given response and tries digest-auth, if needed. + + Returns true if the original request must be resent. + """ + if response.status != 401: + return False + + auth_header = response.headers.get("www-authenticate", "") + if not auth_header: + return False # No authentication header present + + method, sep, headers = auth_header.partition(" ") + if not sep: + # No space found in www-authenticate header + return False # Malformed auth header, missing scheme separator + + if method.lower() != "digest": + # Not a digest auth challenge (could be Basic, Bearer, etc.) 
+ return False + + if not headers: + # We have a digest scheme but no parameters + return False # Malformed digest header, missing parameters + + # We have a digest auth header with content + if not (header_pairs := parse_header_pairs(headers)): + # Failed to parse any key-value pairs + return False # Malformed digest header, no valid parameters + + # Extract challenge parameters + self._challenge = {} + for field in CHALLENGE_FIELDS: + if value := header_pairs.get(field): + self._challenge[field] = value + + # Return True only if we found at least one challenge parameter + return bool(self._challenge) + + async def __call__( + self, request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + """Run the digest auth middleware.""" + response = None + for retry_count in range(2): + # Apply authorization header if we have a challenge (on second attempt) + if retry_count > 0: + request.headers[hdrs.AUTHORIZATION] = self._encode( + request.method, request.url, request.body + ) + + # Send the request + response = await handler(request) + + # Check if we need to authenticate + if not self._authenticate(response): + break + elif retry_count < 1: + response.release() # Release the response to enable connection reuse on retry + + # At this point, response is guaranteed to be defined + assert response is not None + return response diff --git a/docs/client_advanced.rst b/docs/client_advanced.rst index 8795b3d164a..9affef7efe2 100644 --- a/docs/client_advanced.rst +++ b/docs/client_advanced.rst @@ -67,6 +67,26 @@ argument. An instance of :class:`BasicAuth` can be passed in like this:: async with ClientSession(auth=auth) as session: ... +For HTTP digest authentication, use the :class:`DigestAuthMiddleware` client middleware:: + + from aiohttp import ClientSession, DigestAuthMiddleware + + # Create the middleware with your credentials + digest_auth = DigestAuthMiddleware(login="user", password="password") + + # Pass it to the ClientSession as a tuple + async with ClientSession(middlewares=(digest_auth,)) as session: + # The middleware will automatically handle auth challenges + async with session.get("https://example.com/protected") as resp: + print(await resp.text()) + +The :class:`DigestAuthMiddleware` implements HTTP Digest Authentication according to RFC 7616, +providing a more secure alternative to Basic Authentication. It supports all +standard hash algorithms including MD5, SHA, SHA-256, SHA-512 and their session +variants, as well as both 'auth' and 'auth-int' quality of protection (qop) options. +The middleware automatically handles the authentication flow by intercepting 401 responses +and retrying with proper credentials. + Note that if the request is redirected and the redirect URL contains credentials, those credentials will supersede any previously set credentials. In other words, if ``http://user@example.com`` redirects to diff --git a/docs/client_reference.rst b/docs/client_reference.rst index afe6c720d78..8e6153bf40c 100644 --- a/docs/client_reference.rst +++ b/docs/client_reference.rst @@ -2010,6 +2010,7 @@ Utilities .. versionadded:: 3.2 + .. class:: BasicAuth(login, password='', encoding='latin1') HTTP basic authentication helper. @@ -2050,6 +2051,34 @@ Utilities :return: encoded authentication data, :class:`str`. + +.. class:: DigestAuthMiddleware(login, password) + + HTTP digest authentication client middleware. 
+ + :param str login: login + :param str password: password + + This middleware supports HTTP digest authentication with both `auth` and + `auth-int` quality of protection (qop) modes, and a variety of hashing algorithms. + + It automatically handles the digest authentication handshake by: + + - Parsing 401 Unauthorized responses with `WWW-Authenticate: Digest` headers + - Generating appropriate `Authorization: Digest` headers on retry + - Maintaining nonce counts and challenge data per request + + Usage:: + + digest_auth_middleware = DigestAuthMiddleware(login="user", password="pass") + async with ClientSession(middlewares=(digest_auth_middleware,)) as session: + async with session.get("http://protected.example.com") as resp: + # The middleware automatically handles the digest auth handshake + assert resp.status == 200 + + .. versionadded:: 3.12 + + .. class:: CookieJar(*, unsafe=False, quote_cookie=True, treat_as_secure_origin = []) The cookie jar instance is available as :attr:`ClientSession.cookie_jar`. diff --git a/docs/spelling_wordlist.txt b/docs/spelling_wordlist.txt index f2321adb708..421ef842678 100644 --- a/docs/spelling_wordlist.txt +++ b/docs/spelling_wordlist.txt @@ -252,6 +252,7 @@ pyflakes pyright pytest Pytest +qop Quickstart quote’s rc diff --git a/examples/digest_auth_qop_auth.py b/examples/digest_auth_qop_auth.py new file mode 100644 index 00000000000..508f444e9f9 --- /dev/null +++ b/examples/digest_auth_qop_auth.py @@ -0,0 +1,68 @@ +#!/usr/bin/env python3 +""" +Example of using digest authentication middleware with aiohttp client. + +This example shows how to use the DigestAuthMiddleware from +aiohttp.client_middleware_digest_auth to authenticate with a server +that requires digest authentication with different qop options. + +In this case, it connects to httpbin.org's digest auth endpoint. 
+""" + +import asyncio +from itertools import product + +from yarl import URL + +from aiohttp import ClientSession +from aiohttp.client_middleware_digest_auth import DigestAuthMiddleware + +# Define QOP options available +QOP_OPTIONS = ["auth", "auth-int"] + +# Algorithms supported by httpbin.org +ALGORITHMS = ["MD5", "SHA-256", "SHA-512"] + +# Username and password for testing +USERNAME = "my" +PASSWORD = "dog" + +# All combinations of QOP options and algorithms +TEST_COMBINATIONS = list(product(QOP_OPTIONS, ALGORITHMS)) + + +async def main() -> None: + # Create a DigestAuthMiddleware instance with appropriate credentials + digest_auth = DigestAuthMiddleware(login=USERNAME, password=PASSWORD) + + # Create a client session with the digest auth middleware + async with ClientSession(middlewares=(digest_auth,)) as session: + # Test each combination of QOP and algorithm + for qop, algorithm in TEST_COMBINATIONS: + print(f"\n\n=== Testing with qop={qop}, algorithm={algorithm} ===\n") + + url = URL( + f"https://httpbin.org/digest-auth/{qop}/{USERNAME}/{PASSWORD}/{algorithm}" + ) + + async with session.get(url) as resp: + print(f"Status: {resp.status}") + print(f"Headers: {resp.headers}") + + # Parse the JSON response + json_response = await resp.json() + print(f"Response: {json_response}") + + # Verify authentication was successful + if resp.status == 200: + print("\nAuthentication successful!") + print(f"Authenticated user: {json_response.get('user')}") + print( + f"Authentication method: {json_response.get('authenticated')}" + ) + else: + print("\nAuthentication failed.") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/tests/test_client_middleware_digest_auth.py b/tests/test_client_middleware_digest_auth.py new file mode 100644 index 00000000000..26118288913 --- /dev/null +++ b/tests/test_client_middleware_digest_auth.py @@ -0,0 +1,801 @@ +"""Test digest authentication middleware for aiohttp client.""" + +from hashlib import md5, sha1 +from typing import Generator, Union +from unittest import mock + +import pytest +from yarl import URL + +from aiohttp import ClientSession, hdrs +from aiohttp.client_exceptions import ClientError +from aiohttp.client_middleware_digest_auth import ( + DigestAuthChallenge, + DigestAuthMiddleware, + DigestFunctions, + escape_quotes, + parse_header_pairs, + unescape_quotes, +) +from aiohttp.client_reqrep import ClientResponse +from aiohttp.pytest_plugin import AiohttpServer +from aiohttp.web import Application, Request, Response + + +@pytest.fixture +def digest_auth_mw() -> DigestAuthMiddleware: + return DigestAuthMiddleware("user", "pass") + + +@pytest.fixture +def basic_challenge() -> DigestAuthChallenge: + """Return a basic digest auth challenge with required fields only.""" + return DigestAuthChallenge(realm="test", nonce="abc") + + +@pytest.fixture +def complete_challenge() -> DigestAuthChallenge: + """Return a complete digest auth challenge with all fields.""" + return DigestAuthChallenge( + realm="test", nonce="abc", qop="auth", algorithm="MD5", opaque="xyz" + ) + + +@pytest.fixture +def qop_challenge() -> DigestAuthChallenge: + """Return a digest auth challenge with qop field.""" + return DigestAuthChallenge(realm="test", nonce="abc", qop="auth") + + +@pytest.fixture +def no_qop_challenge() -> DigestAuthChallenge: + """Return a digest auth challenge without qop.""" + return DigestAuthChallenge(realm="test-realm", nonce="testnonce", algorithm="MD5") + + +@pytest.fixture +def auth_mw_with_challenge( + digest_auth_mw: DigestAuthMiddleware, 
complete_challenge: DigestAuthChallenge +) -> DigestAuthMiddleware: + """Return a digest auth middleware with pre-set challenge.""" + digest_auth_mw._challenge = complete_challenge + digest_auth_mw._last_nonce_bytes = complete_challenge["nonce"].encode("utf-8") + digest_auth_mw._nonce_count = 0 + return digest_auth_mw + + +@pytest.fixture +def mock_sha1_digest() -> Generator[mock.MagicMock, None, None]: + """Mock SHA1 to return a predictable value for testing.""" + mock_digest = mock.MagicMock(spec=sha1()) + mock_digest.hexdigest.return_value = "deadbeefcafebabe" + with mock.patch("hashlib.sha1", return_value=mock_digest) as patched: + yield patched + + +@pytest.fixture +def mock_md5_digest() -> Generator[mock.MagicMock, None, None]: + """Mock MD5 to return a predictable value for testing.""" + mock_digest = mock.MagicMock(spec=md5()) + mock_digest.hexdigest.return_value = "abcdef0123456789" + with mock.patch("hashlib.md5", return_value=mock_digest) as patched: + yield patched + + +@pytest.mark.parametrize( + ("response_status", "headers", "expected_result", "expected_challenge"), + [ + # Valid digest with all fields + ( + 401, + { + "www-authenticate": 'Digest realm="test", nonce="abc", ' + 'qop="auth", opaque="xyz", algorithm=MD5' + }, + True, + { + "realm": "test", + "nonce": "abc", + "qop": "auth", + "algorithm": "MD5", + "opaque": "xyz", + }, + ), + # Valid digest without opaque + ( + 401, + {"www-authenticate": 'Digest realm="test", nonce="abc", qop="auth"'}, + True, + {"realm": "test", "nonce": "abc", "qop": "auth"}, + ), + # Non-401 status + (200, {}, False, {}), # No challenge should be set + ], +) +async def test_authenticate_scenarios( + digest_auth_mw: DigestAuthMiddleware, + response_status: int, + headers: dict[str, str], + expected_result: bool, + expected_challenge: dict[str, str], +) -> None: + """Test different authentication scenarios.""" + response = mock.MagicMock(spec=ClientResponse) + response.status = response_status + response.headers = headers + + result = digest_auth_mw._authenticate(response) + assert result == expected_result + + if expected_result: + challenge_dict = dict(digest_auth_mw._challenge) + for key, value in expected_challenge.items(): + assert challenge_dict[key] == value + + +@pytest.mark.parametrize( + ("challenge", "expected_error"), + [ + ( + DigestAuthChallenge(), + "Malformed Digest auth challenge: Missing 'realm' parameter", + ), + ( + DigestAuthChallenge(nonce="abc"), + "Malformed Digest auth challenge: Missing 'realm' parameter", + ), + ( + DigestAuthChallenge(realm="test"), + "Malformed Digest auth challenge: Missing 'nonce' parameter", + ), + ( + DigestAuthChallenge(realm="test", nonce=""), + "Security issue: Digest auth challenge contains empty 'nonce' value", + ), + ], +) +def test_encode_validation_errors( + digest_auth_mw: DigestAuthMiddleware, + challenge: DigestAuthChallenge, + expected_error: str, +) -> None: + """Test validation errors when encoding digest auth headers.""" + digest_auth_mw._challenge = challenge + with pytest.raises(ClientError, match=expected_error): + digest_auth_mw._encode("GET", URL("http://example.com/resource"), "") + + +def test_encode_digest_with_md5(auth_mw_with_challenge: DigestAuthMiddleware) -> None: + header = auth_mw_with_challenge._encode( + "GET", URL("http://example.com/resource"), "" + ) + assert header.startswith("Digest ") + assert 'username="user"' in header + assert "algorithm=MD5" in header + + +@pytest.mark.parametrize( + "algorithm", ["MD5-SESS", "SHA-SESS", "SHA-256-SESS", "SHA-512-SESS"] 
+) +def test_encode_digest_with_sess_algorithms( + digest_auth_mw: DigestAuthMiddleware, + qop_challenge: DigestAuthChallenge, + algorithm: str, +) -> None: + """Test that all session-based digest algorithms work correctly.""" + # Create a modified challenge with the test algorithm + challenge = qop_challenge.copy() + challenge["algorithm"] = algorithm + digest_auth_mw._challenge = challenge + + header = digest_auth_mw._encode("GET", URL("http://example.com/resource"), "") + assert f"algorithm={algorithm}" in header + + +def test_encode_unsupported_algorithm( + digest_auth_mw: DigestAuthMiddleware, basic_challenge: DigestAuthChallenge +) -> None: + """Test that unsupported algorithm raises ClientError.""" + # Create a modified challenge with an unsupported algorithm + challenge = basic_challenge.copy() + challenge["algorithm"] = "UNSUPPORTED" + digest_auth_mw._challenge = challenge + + with pytest.raises(ClientError, match="Unsupported hash algorithm"): + digest_auth_mw._encode("GET", URL("http://example.com/resource"), "") + + +def test_invalid_qop_rejected( + digest_auth_mw: DigestAuthMiddleware, basic_challenge: DigestAuthChallenge +) -> None: + """Test that invalid Quality of Protection values are rejected.""" + # Use bad QoP value to trigger error + challenge = basic_challenge.copy() + challenge["qop"] = "badvalue" + challenge["algorithm"] = "MD5" + digest_auth_mw._challenge = challenge + + # This should raise an error about unsupported QoP + with pytest.raises(ClientError, match="Unsupported Quality of Protection"): + digest_auth_mw._encode("GET", URL("http://example.com"), "") + + +def compute_expected_digest( + algorithm: str, + username: str, + password: str, + realm: str, + nonce: str, + uri: str, + method: str, + qop: str, + nc: str, + cnonce: str, + body: str = "", +) -> str: + hash_fn = DigestFunctions[algorithm] + + def H(x: str) -> str: + return hash_fn(x.encode()).hexdigest() + + def KD(secret: str, data: str) -> str: + return H(f"{secret}:{data}") + + A1 = f"{username}:{realm}:{password}" + HA1 = H(A1) + + if algorithm.upper().endswith("-SESS"): + HA1 = H(f"{HA1}:{nonce}:{cnonce}") + + A2 = f"{method}:{uri}" + if "auth-int" in qop: + entity_hash = H(body) + A2 = f"{A2}:{entity_hash}" + HA2 = H(A2) + + if qop: + return KD(HA1, f"{nonce}:{nc}:{cnonce}:{qop}:{HA2}") + else: + return KD(HA1, f"{nonce}:{HA2}") + + +@pytest.mark.parametrize("qop", ["auth", "auth-int", "auth,auth-int", ""]) +@pytest.mark.parametrize("algorithm", sorted(DigestFunctions.keys())) +@pytest.mark.parametrize( + ("body", "body_str"), + [ + ("this is a body", "this is a body"), # String case + (b"this is a body", "this is a body"), # Bytes case + ], +) +def test_digest_response_exact_match( + qop: str, + algorithm: str, + body: Union[str, bytes], + body_str: str, + mock_sha1_digest: mock.MagicMock, +) -> None: + # Fixed input values + login = "user" + password = "pass" + realm = "example.com" + nonce = "abc123nonce" + cnonce = "deadbeefcafebabe" + nc = 1 + ncvalue = f"{nc+1:08x}" + method = "GET" + uri = "/secret" + qop = "auth-int" if "auth-int" in qop else "auth" + + # Create the auth object + auth = DigestAuthMiddleware(login, password) + auth._challenge = DigestAuthChallenge( + realm=realm, nonce=nonce, qop=qop, algorithm=algorithm + ) + auth._last_nonce_bytes = nonce.encode("utf-8") + auth._nonce_count = nc + + header = auth._encode(method, URL(f"http://host{uri}"), body) + + # Get expected digest + expected = compute_expected_digest( + algorithm=algorithm, + username=login, + password=password, + 
realm=realm, + nonce=nonce, + uri=uri, + method=method, + qop=qop, + nc=ncvalue, + cnonce=cnonce, + body=body_str, + ) + + # Check that the response digest is exactly correct + assert f'response="{expected}"' in header + + +@pytest.mark.parametrize( + ("header", "expected_result"), + [ + # Normal quoted values + ( + 'realm="example.com", nonce="12345", qop="auth"', + {"realm": "example.com", "nonce": "12345", "qop": "auth"}, + ), + # Unquoted values + ( + "realm=example.com, nonce=12345, qop=auth", + {"realm": "example.com", "nonce": "12345", "qop": "auth"}, + ), + # Mixed quoted/unquoted with commas in quoted values + ( + 'realm="ex,ample", nonce=12345, qop="auth", domain="/test"', + { + "realm": "ex,ample", + "nonce": "12345", + "qop": "auth", + "domain": "/test", + }, + ), + # Header with scheme + ( + 'Digest realm="example.com", nonce="12345", qop="auth"', + {"realm": "example.com", "nonce": "12345", "qop": "auth"}, + ), + # No spaces after commas + ( + 'realm="test",nonce="123",qop="auth"', + {"realm": "test", "nonce": "123", "qop": "auth"}, + ), + # Extra whitespace + ( + 'realm = "test" , nonce = "123"', + {"realm": "test", "nonce": "123"}, + ), + # Escaped quotes + ( + 'realm="test\\"realm", nonce="123"', + {"realm": 'test"realm', "nonce": "123"}, + ), + # Single quotes (treated as regular chars) + ( + "realm='test', nonce=123", + {"realm": "'test'", "nonce": "123"}, + ), + # Empty header + ("", {}), + ], + ids=[ + "fully_quoted_header", + "unquoted_header", + "mixed_quoted_unquoted_with_commas", + "header_with_scheme", + "no_spaces_after_commas", + "extra_whitespace", + "escaped_quotes", + "single_quotes_as_regular_chars", + "empty_header", + ], +) +def test_parse_header_pairs(header: str, expected_result: dict[str, str]) -> None: + """Test parsing HTTP header pairs with various formats.""" + result = parse_header_pairs(header) + assert result == expected_result + + +def test_digest_auth_middleware_callable(digest_auth_mw: DigestAuthMiddleware) -> None: + """Test that DigestAuthMiddleware is callable.""" + assert callable(digest_auth_mw) + + +def test_middleware_invalid_login() -> None: + """Test that invalid login values raise errors.""" + with pytest.raises(ValueError, match="None is not allowed as login value"): + DigestAuthMiddleware(None, "pass") # type: ignore[arg-type] + + with pytest.raises(ValueError, match="None is not allowed as password value"): + DigestAuthMiddleware("user", None) # type: ignore[arg-type] + + with pytest.raises(ValueError, match=r"A \":\" is not allowed in username"): + DigestAuthMiddleware("user:name", "pass") + + +def test_escaping_quotes_in_auth_header() -> None: + """Test that double quotes are properly escaped in auth header.""" + auth = DigestAuthMiddleware('user"with"quotes', "pass") + auth._challenge = DigestAuthChallenge( + realm='realm"with"quotes', + nonce='nonce"with"quotes', + qop="auth", + algorithm="MD5", + opaque='opaque"with"quotes', + ) + + header = auth._encode("GET", URL("http://example.com/path"), "") + + # Check that quotes are escaped in the header + assert 'username="user\\"with\\"quotes"' in header + assert 'realm="realm\\"with\\"quotes"' in header + assert 'nonce="nonce\\"with\\"quotes"' in header + assert 'opaque="opaque\\"with\\"quotes"' in header + + +def test_template_based_header_construction( + auth_mw_with_challenge: DigestAuthMiddleware, + mock_sha1_digest: mock.MagicMock, + mock_md5_digest: mock.MagicMock, +) -> None: + """Test that the template-based header construction works correctly.""" + header = 
auth_mw_with_challenge._encode("GET", URL("http://example.com/test"), "") + + # Split the header into scheme and parameters + scheme, params_str = header.split(" ", 1) + assert scheme == "Digest" + + # Parse the parameters into a dictionary + params = { + key: value[1:-1] if value.startswith('"') and value.endswith('"') else value + for key, value in (param.split("=", 1) for param in params_str.split(", ")) + } + + # Check all required fields are present + assert "username" in params + assert "realm" in params + assert "nonce" in params + assert "uri" in params + assert "response" in params + assert "algorithm" in params + assert "qop" in params + assert "nc" in params + assert "cnonce" in params + assert "opaque" in params + + # Check that fields are quoted correctly + quoted_fields = [ + "username", + "realm", + "nonce", + "uri", + "response", + "opaque", + "cnonce", + ] + unquoted_fields = ["algorithm", "qop", "nc"] + + # Re-check the original header for proper quoting + for field in quoted_fields: + assert f'{field}="{params[field]}"' in header + + for field in unquoted_fields: + assert f"{field}={params[field]}" in header + + # Check specific values + assert params["username"] == "user" + assert params["realm"] == "test" + assert params["algorithm"] == "MD5" + assert params["nc"] == "00000001" # nonce_count = 1 (incremented from 0) + assert params["uri"] == "/test" # path component of URL + + +@pytest.mark.parametrize( + ("test_string", "expected_escaped", "description"), + [ + ('value"with"quotes', 'value\\"with\\"quotes', "Basic string with quotes"), + ("", "", "Empty string"), + ("no quotes", "no quotes", "String without quotes"), + ('with"one"quote', 'with\\"one\\"quote', "String with one quoted segment"), + ( + 'many"quotes"in"string', + 'many\\"quotes\\"in\\"string', + "String with multiple quoted segments", + ), + ('""', '\\"\\"', "Just double quotes"), + ('"', '\\"', "Single double quote"), + ('already\\"escaped', 'already\\\\"escaped', "Already escaped quotes"), + ], +) +def test_quote_escaping_functions( + test_string: str, expected_escaped: str, description: str +) -> None: + """Test that escape_quotes and unescape_quotes work correctly.""" + # Test escaping + escaped = escape_quotes(test_string) + assert escaped == expected_escaped + + # Test unescaping (should return to original) + unescaped = unescape_quotes(escaped) + assert unescaped == test_string + + # Test that they're inverse operations + assert unescape_quotes(escape_quotes(test_string)) == test_string + + +async def test_middleware_retry_on_401( + aiohttp_server: AiohttpServer, digest_auth_mw: DigestAuthMiddleware +) -> None: + """Test that the middleware retries on 401 errors.""" + request_count = 0 + + async def handler(request: Request) -> Response: + nonlocal request_count + request_count += 1 + + if request_count == 1: + # First request returns 401 with digest challenge + challenge = 'Digest realm="test", nonce="abc123", qop="auth", algorithm=MD5' + return Response( + status=401, + headers={"WWW-Authenticate": challenge}, + text="Unauthorized", + ) + + # Second request should have Authorization header + auth_header = request.headers.get(hdrs.AUTHORIZATION) + if auth_header and auth_header.startswith("Digest "): + # Return success response + return Response(text="OK") + + # This branch should not be reached in the tests + assert False, "This branch should not be reached" + + app = Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with 
ClientSession(middlewares=(digest_auth_mw,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text_content = await resp.text() + assert text_content == "OK" + + assert request_count == 2 # Initial request + retry with auth + + +async def test_digest_auth_no_qop( + aiohttp_server: AiohttpServer, + digest_auth_mw: DigestAuthMiddleware, + no_qop_challenge: DigestAuthChallenge, + mock_sha1_digest: mock.MagicMock, +) -> None: + """Test digest auth with a server that doesn't provide a QoP parameter.""" + request_count = 0 + realm = no_qop_challenge["realm"] + nonce = no_qop_challenge["nonce"] + algorithm = no_qop_challenge["algorithm"] + username = "user" + password = "pass" + uri = "/" + + async def handler(request: Request) -> Response: + nonlocal request_count + request_count += 1 + + if request_count == 1: + # First request returns 401 with digest challenge without qop + challenge = ( + f'Digest realm="{realm}", nonce="{nonce}", algorithm={algorithm}' + ) + return Response( + status=401, + headers={"WWW-Authenticate": challenge}, + text="Unauthorized", + ) + + # Second request should have Authorization header + auth_header = request.headers.get(hdrs.AUTHORIZATION) + assert auth_header and auth_header.startswith("Digest ") + + # Successful auth should have no qop param + assert "qop=" not in auth_header + assert "nc=" not in auth_header + assert "cnonce=" not in auth_header + + expected_digest = compute_expected_digest( + algorithm=algorithm, + username=username, + password=password, + realm=realm, + nonce=nonce, + uri=uri, + method="GET", + qop="", # This is the key part - explicitly setting qop="" + nc="", # Not needed for non-qop digest + cnonce="", # Not needed for non-qop digest + ) + # We mock the cnonce, so we can check the expected digest + assert expected_digest in auth_header + + return Response(text="OK") + + app = Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with ClientSession(middlewares=(digest_auth_mw,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text_content = await resp.text() + assert text_content == "OK" + + assert request_count == 2 # Initial request + retry with auth + + +async def test_digest_auth_without_opaque( + aiohttp_server: AiohttpServer, digest_auth_mw: DigestAuthMiddleware +) -> None: + """Test digest auth with a server that doesn't provide an opaque parameter.""" + request_count = 0 + + async def handler(request: Request) -> Response: + nonlocal request_count + request_count += 1 + + if request_count == 1: + # First request returns 401 with digest challenge without opaque + challenge = ( + 'Digest realm="test-realm", nonce="testnonce", ' + 'qop="auth", algorithm=MD5' + ) + return Response( + status=401, + headers={"WWW-Authenticate": challenge}, + text="Unauthorized", + ) + + # Second request should have Authorization header + auth_header = request.headers.get(hdrs.AUTHORIZATION) + assert auth_header and auth_header.startswith("Digest ") + # Successful auth should have no opaque param + assert "opaque=" not in auth_header + + return Response(text="OK") + + app = Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with ClientSession(middlewares=(digest_auth_mw,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text_content = await resp.text() + assert text_content == "OK" + + assert request_count == 2 # Initial 
request + retry with auth + + +@pytest.mark.parametrize( + "www_authenticate", + [ + None, + "DigestWithoutSpace", + 'Basic realm="test"', + "Digest ", + "Digest =invalid, format", + ], +) +async def test_auth_header_no_retry( + aiohttp_server: AiohttpServer, + www_authenticate: str, + digest_auth_mw: DigestAuthMiddleware, +) -> None: + """Test that middleware doesn't retry with invalid WWW-Authenticate headers.""" + request_count = 0 + + async def handler(request: Request) -> Response: + nonlocal request_count + request_count += 1 + + # First (and only) request returns 401 + headers = {} + if www_authenticate is not None: + headers["WWW-Authenticate"] = www_authenticate + + # Use a custom HTTPUnauthorized instead of the helper since + # we're specifically testing malformed headers + return Response(status=401, headers=headers, text="Unauthorized") + + app = Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with ClientSession(middlewares=(digest_auth_mw,)) as session: + async with session.get(server.make_url("/")) as resp: + assert resp.status == 401 + + # No retry should happen + assert request_count == 1 + + +async def test_direct_success_no_auth_needed( + aiohttp_server: AiohttpServer, digest_auth_mw: DigestAuthMiddleware +) -> None: + """Test middleware with a direct 200 response with no auth challenge.""" + request_count = 0 + + async def handler(request: Request) -> Response: + nonlocal request_count + request_count += 1 + + # Return success without auth challenge + return Response(text="OK") + + app = Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + async with ClientSession(middlewares=(digest_auth_mw,)) as session: + async with session.get(server.make_url("/")) as resp: + text = await resp.text() + assert resp.status == 200 + assert text == "OK" + + # Verify only one request was made + assert request_count == 1 + + +async def test_no_retry_on_second_401( + aiohttp_server: AiohttpServer, digest_auth_mw: DigestAuthMiddleware +) -> None: + """Test digest auth does not retry on second 401.""" + request_count = 0 + + async def handler(request: Request) -> Response: + nonlocal request_count + request_count += 1 + + # Always return 401 challenge + challenge = 'Digest realm="test", nonce="abc123", qop="auth", algorithm=MD5' + return Response( + status=401, + headers={"WWW-Authenticate": challenge}, + text="Unauthorized", + ) + + app = Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + # Create a session that uses the digest auth middleware + async with ClientSession(middlewares=(digest_auth_mw,)) as session: + async with session.get(server.make_url("/")) as resp: + await resp.text() + assert resp.status == 401 + + # Verify we made exactly 2 requests (initial + 1 retry) + assert request_count == 2 + + +@pytest.mark.parametrize( + ("status", "headers", "expected"), + [ + (200, {}, False), + (401, {"www-authenticate": ""}, False), + (401, {"www-authenticate": "DigestWithoutSpace"}, False), + (401, {"www-authenticate": "Basic realm=test"}, False), + (401, {"www-authenticate": "Digest "}, False), + (401, {"www-authenticate": "Digest =invalid, format"}, False), + ], + ids=[ + "different_status_code", + "empty_www_authenticate_header", + "no_space_after_scheme", + "different_scheme", + "empty_parameters", + "malformed_parameters", + ], +) +def test_authenticate_with_malformed_headers( + digest_auth_mw: DigestAuthMiddleware, + status: int, + headers: dict[str, str], + 
expected: bool, +) -> None: + """Test _authenticate method with various edge cases.""" + response = mock.MagicMock(spec=ClientResponse) + response.status = status + response.headers = headers + + result = digest_auth_mw._authenticate(response) + assert result == expected From d4eaf550c67515033db3d8fc726dc355662690e0 Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Tue, 20 May 2025 11:54:11 -0400 Subject: [PATCH 45/90] [PR #10891/802152a backport][3.12] Fix flakey signal handling tests (#10896) --- tests/test_web_runner.py | 38 ++++++++++++++++++++++++++++---------- 1 file changed, 28 insertions(+), 10 deletions(-) diff --git a/tests/test_web_runner.py b/tests/test_web_runner.py index b71c34fe912..22ce3d00650 100644 --- a/tests/test_web_runner.py +++ b/tests/test_web_runner.py @@ -41,22 +41,40 @@ async def test_site_for_nonfrozen_app(make_runner: Any) -> None: platform.system() == "Windows", reason="the test is not valid for Windows" ) async def test_runner_setup_handle_signals(make_runner: Any) -> None: - runner = make_runner(handle_signals=True) - await runner.setup() - assert signal.getsignal(signal.SIGTERM) is not signal.SIG_DFL - await runner.cleanup() - assert signal.getsignal(signal.SIGTERM) is signal.SIG_DFL + # Save the original signal handler + original_handler = signal.getsignal(signal.SIGTERM) + try: + # Set a known state for the signal handler to avoid flaky tests + signal.signal(signal.SIGTERM, signal.SIG_DFL) + + runner = make_runner(handle_signals=True) + await runner.setup() + assert signal.getsignal(signal.SIGTERM) is not signal.SIG_DFL + await runner.cleanup() + assert signal.getsignal(signal.SIGTERM) is signal.SIG_DFL + finally: + # Restore original signal handler + signal.signal(signal.SIGTERM, original_handler) @pytest.mark.skipif( platform.system() == "Windows", reason="the test is not valid for Windows" ) async def test_runner_setup_without_signal_handling(make_runner: Any) -> None: - runner = make_runner(handle_signals=False) - await runner.setup() - assert signal.getsignal(signal.SIGTERM) is signal.SIG_DFL - await runner.cleanup() - assert signal.getsignal(signal.SIGTERM) is signal.SIG_DFL + # Save the original signal handler + original_handler = signal.getsignal(signal.SIGTERM) + try: + # Set a known state for the signal handler to avoid flaky tests + signal.signal(signal.SIGTERM, signal.SIG_DFL) + + runner = make_runner(handle_signals=False) + await runner.setup() + assert signal.getsignal(signal.SIGTERM) is signal.SIG_DFL + await runner.cleanup() + assert signal.getsignal(signal.SIGTERM) is signal.SIG_DFL + finally: + # Restore original signal handler + signal.signal(signal.SIGTERM, original_handler) async def test_site_double_added(make_runner: Any) -> None: From c8c3d5f2fd7d91b4af05b2ff848d0b3bd73e0cb2 Mon Sep 17 00:00:00 2001 From: "J. 
Nick Koston" Date: Tue, 20 May 2025 15:06:25 -0400 Subject: [PATCH 46/90] [PR #10898/a4be2cb backport][3.12] Cleanup tests to ensure connector cleanup and resource management (#10900) --- tests/test_connector.py | 106 ++++++++++++++++++--------------- tests/test_proxy_functional.py | 18 +++--- 2 files changed, 69 insertions(+), 55 deletions(-) diff --git a/tests/test_connector.py b/tests/test_connector.py index db0514e5f0d..fd2cdac7a94 100644 --- a/tests/test_connector.py +++ b/tests/test_connector.py @@ -307,80 +307,90 @@ async def test_close(loop) -> None: async def test_get(loop: asyncio.AbstractEventLoop, key: ConnectionKey) -> None: conn = aiohttp.BaseConnector() - assert await conn._get(key, []) is None + try: + assert await conn._get(key, []) is None - proto = create_mocked_conn(loop) - conn._conns[key] = deque([(proto, loop.time())]) - connection = await conn._get(key, []) - assert connection is not None - assert connection.protocol == proto - connection.close() - await conn.close() + proto = create_mocked_conn(loop) + conn._conns[key] = deque([(proto, loop.time())]) + connection = await conn._get(key, []) + assert connection is not None + assert connection.protocol == proto + connection.close() + finally: + await conn.close() async def test_get_unconnected_proto(loop) -> None: conn = aiohttp.BaseConnector() key = ConnectionKey("localhost", 80, False, False, None, None, None) - assert await conn._get(key, []) is None - - proto = create_mocked_conn(loop) - conn._conns[key] = deque([(proto, loop.time())]) - connection = await conn._get(key, []) - assert connection is not None - assert connection.protocol == proto - connection.close() + try: + assert await conn._get(key, []) is None - assert await conn._get(key, []) is None - conn._conns[key] = deque([(proto, loop.time())]) - proto.is_connected = lambda *args: False - assert await conn._get(key, []) is None - await conn.close() + proto = create_mocked_conn(loop) + conn._conns[key] = deque([(proto, loop.time())]) + connection = await conn._get(key, []) + assert connection is not None + assert connection.protocol == proto + connection.close() + + assert await conn._get(key, []) is None + conn._conns[key] = deque([(proto, loop.time())]) + proto.is_connected = lambda *args: False + assert await conn._get(key, []) is None + finally: + await conn.close() async def test_get_unconnected_proto_ssl(loop) -> None: conn = aiohttp.BaseConnector() key = ConnectionKey("localhost", 80, True, False, None, None, None) - assert await conn._get(key, []) is None - - proto = create_mocked_conn(loop) - conn._conns[key] = deque([(proto, loop.time())]) - connection = await conn._get(key, []) - assert connection is not None - assert connection.protocol == proto - connection.close() + try: + assert await conn._get(key, []) is None - assert await conn._get(key, []) is None - conn._conns[key] = deque([(proto, loop.time())]) - proto.is_connected = lambda *args: False - assert await conn._get(key, []) is None - await conn.close() + proto = create_mocked_conn(loop) + conn._conns[key] = deque([(proto, loop.time())]) + connection = await conn._get(key, []) + assert connection is not None + assert connection.protocol == proto + connection.close() + + assert await conn._get(key, []) is None + conn._conns[key] = deque([(proto, loop.time())]) + proto.is_connected = lambda *args: False + assert await conn._get(key, []) is None + finally: + await conn.close() async def test_get_expired(loop: asyncio.AbstractEventLoop) -> None: conn = aiohttp.BaseConnector() key = 
ConnectionKey("localhost", 80, False, False, None, None, None) - assert await conn._get(key, []) is None + try: + assert await conn._get(key, []) is None - proto = mock.Mock() - conn._conns[key] = deque([(proto, loop.time() - 1000)]) - assert await conn._get(key, []) is None - assert not conn._conns - await conn.close() + proto = create_mocked_conn(loop) + conn._conns[key] = deque([(proto, loop.time() - 1000)]) + assert await conn._get(key, []) is None + assert not conn._conns + finally: + await conn.close() @pytest.mark.usefixtures("enable_cleanup_closed") async def test_get_expired_ssl(loop: asyncio.AbstractEventLoop) -> None: conn = aiohttp.BaseConnector(enable_cleanup_closed=True) key = ConnectionKey("localhost", 80, True, False, None, None, None) - assert await conn._get(key, []) is None + try: + assert await conn._get(key, []) is None - proto = mock.Mock() - transport = proto.transport - conn._conns[key] = deque([(proto, loop.time() - 1000)]) - assert await conn._get(key, []) is None - assert not conn._conns - assert conn._cleanup_closed_transports == [transport] - await conn.close() + proto = create_mocked_conn(loop) + transport = proto.transport + conn._conns[key] = deque([(proto, loop.time() - 1000)]) + assert await conn._get(key, []) is None + assert not conn._conns + assert conn._cleanup_closed_transports == [transport] + finally: + await conn.close() async def test_release_acquired(loop, key) -> None: diff --git a/tests/test_proxy_functional.py b/tests/test_proxy_functional.py index c6c6ac67c1b..f86975b7423 100644 --- a/tests/test_proxy_functional.py +++ b/tests/test_proxy_functional.py @@ -220,14 +220,18 @@ async def test_uvloop_secure_https_proxy( """Ensure HTTPS sites are accessible through a secure proxy without warning when using uvloop.""" conn = aiohttp.TCPConnector() sess = aiohttp.ClientSession(connector=conn) - url = URL("https://example.com") - - async with sess.get(url, proxy=secure_proxy_url, ssl=client_ssl_ctx) as response: - assert response.status == 200 + try: + url = URL("https://example.com") - await sess.close() - await conn.close() - await asyncio.sleep(0.1) + async with sess.get( + url, proxy=secure_proxy_url, ssl=client_ssl_ctx + ) as response: + assert response.status == 200 + finally: + await sess.close() + await conn.close() + await asyncio.sleep(0) + await asyncio.sleep(0.1) @pytest.fixture From 64fc60030872b8b64fd676de0ed786512a277497 Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Tue, 20 May 2025 16:30:11 -0400 Subject: [PATCH 47/90] [PR #10897/4624fed backport][3.12] Fix DNS resolver object churn for multiple sessions (#10906) --- CHANGES/10847.feature.rst | 5 + aiohttp/resolver.py | 84 +++++++++++- tests/test_resolver.py | 268 ++++++++++++++++++++++++++++++++++---- 3 files changed, 334 insertions(+), 23 deletions(-) create mode 100644 CHANGES/10847.feature.rst diff --git a/CHANGES/10847.feature.rst b/CHANGES/10847.feature.rst new file mode 100644 index 00000000000..bfa7f6d498a --- /dev/null +++ b/CHANGES/10847.feature.rst @@ -0,0 +1,5 @@ +Implemented shared DNS resolver management to fix excessive resolver object creation +when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures +only one ``DNSResolver`` object is created for default configurations, significantly +reducing resource usage and improving performance for applications using multiple +client sessions simultaneously -- by :user:`bdraco`. 
diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py index a5af5fddda6..8e73beb6e1e 100644 --- a/aiohttp/resolver.py +++ b/aiohttp/resolver.py @@ -1,5 +1,6 @@ import asyncio import socket +import weakref from typing import Any, Dict, Final, List, Optional, Tuple, Type, Union from .abc import AbstractResolver, ResolveResult @@ -93,7 +94,17 @@ def __init__( if aiodns is None: raise RuntimeError("Resolver requires aiodns library") - self._resolver = aiodns.DNSResolver(*args, **kwargs) + self._loop = asyncio.get_running_loop() + self._manager: Optional[_DNSResolverManager] = None + # If custom args are provided, create a dedicated resolver instance + # This means each AsyncResolver with custom args gets its own + # aiodns.DNSResolver instance + if args or kwargs: + self._resolver = aiodns.DNSResolver(*args, **kwargs) + return + # Use the shared resolver from the manager for default arguments + self._manager = _DNSResolverManager() + self._resolver = self._manager.get_resolver(self, self._loop) if not hasattr(self._resolver, "gethostbyname"): # aiodns 1.1 is not available, fallback to DNSResolver.query @@ -180,7 +191,78 @@ async def _resolve_with_query( return hosts async def close(self) -> None: + if self._manager: + # Release the resolver from the manager if using the shared resolver + self._manager.release_resolver(self, self._loop) + self._manager = None # Clear reference to manager + self._resolver = None # type: ignore[assignment] # Clear reference to resolver + return + # Otherwise cancel our dedicated resolver self._resolver.cancel() + self._resolver = None # type: ignore[assignment] # Clear reference + + +class _DNSResolverManager: + """Manager for aiodns.DNSResolver objects. + + This class manages shared aiodns.DNSResolver instances + with no custom arguments across different event loops. + """ + + _instance: Optional["_DNSResolverManager"] = None + + def __new__(cls) -> "_DNSResolverManager": + if cls._instance is None: + cls._instance = super().__new__(cls) + cls._instance._init() + return cls._instance + + def _init(self) -> None: + # Use WeakKeyDictionary to allow event loops to be garbage collected + self._loop_data: weakref.WeakKeyDictionary[ + asyncio.AbstractEventLoop, + tuple["aiodns.DNSResolver", weakref.WeakSet["AsyncResolver"]], + ] = weakref.WeakKeyDictionary() + + def get_resolver( + self, client: "AsyncResolver", loop: asyncio.AbstractEventLoop + ) -> "aiodns.DNSResolver": + """Get or create the shared aiodns.DNSResolver instance for a specific event loop. + + Args: + client: The AsyncResolver instance requesting the resolver. + This is required to track resolver usage. + loop: The event loop to use for the resolver. + """ + # Create a new resolver and client set for this loop if it doesn't exist + if loop not in self._loop_data: + resolver = aiodns.DNSResolver(loop=loop) + client_set: weakref.WeakSet["AsyncResolver"] = weakref.WeakSet() + self._loop_data[loop] = (resolver, client_set) + else: + # Get the existing resolver and client set + resolver, client_set = self._loop_data[loop] + + # Register this client with the loop + client_set.add(client) + return resolver + + def release_resolver( + self, client: "AsyncResolver", loop: asyncio.AbstractEventLoop + ) -> None: + """Release the resolver for an AsyncResolver client when it's closed. + + Args: + client: The AsyncResolver instance to release. + loop: The event loop the resolver was using. 
+ """ + # Remove client from its loop's tracking + resolver, client_set = self._loop_data[loop] + client_set.discard(client) + # If no more clients for this loop, cancel and remove its resolver + if not client_set: + resolver.cancel() + del self._loop_data[loop] _DefaultType = Type[Union[AsyncResolver, ThreadedResolver]] diff --git a/tests/test_resolver.py b/tests/test_resolver.py index b4606067079..9a6a782c06a 100644 --- a/tests/test_resolver.py +++ b/tests/test_resolver.py @@ -1,6 +1,8 @@ import asyncio +import gc import ipaddress import socket +from collections.abc import Generator from ipaddress import ip_address from typing import Any, Awaitable, Callable, Collection, List, NamedTuple, Tuple, Union from unittest.mock import Mock, create_autospec, patch @@ -12,6 +14,7 @@ AsyncResolver, DefaultResolver, ThreadedResolver, + _DNSResolverManager, ) try: @@ -23,6 +26,48 @@ getaddrinfo = False +@pytest.fixture() +def check_no_lingering_resolvers() -> Generator[None, None, None]: + """Verify no resolvers remain after the test. + + This fixture should be used in any test that creates instances of + AsyncResolver or directly uses _DNSResolverManager. + """ + manager = _DNSResolverManager() + before = len(manager._loop_data) + yield + after = len(manager._loop_data) + if after > before: # pragma: no branch + # Force garbage collection to ensure weak references are updated + gc.collect() # pragma: no cover + after = len(manager._loop_data) # pragma: no cover + if after > before: # pragma: no cover + pytest.fail( # pragma: no cover + f"Lingering resolvers found: {(after - before)} " + "new AsyncResolver instances were not properly closed." + ) + + +@pytest.fixture() +def dns_resolver_manager() -> Generator[_DNSResolverManager, None, None]: + """Create a fresh _DNSResolverManager instance for testing. + + Saves and restores the singleton state to avoid affecting other tests. 
+ """ + # Save the original instance + original_instance = _DNSResolverManager._instance + + # Reset the singleton + _DNSResolverManager._instance = None + + # Create and yield a fresh instance + try: + yield _DNSResolverManager() + finally: + # Clean up and restore the original instance + _DNSResolverManager._instance = original_instance + + class FakeAIODNSAddrInfoNode(NamedTuple): family: int @@ -117,7 +162,10 @@ async def fake(*args: Any, **kwargs: Any) -> Tuple[str, int]: @pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") -async def test_async_resolver_positive_ipv4_lookup(loop: Any) -> None: +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_async_resolver_positive_ipv4_lookup( + loop: asyncio.AbstractEventLoop, +) -> None: with patch("aiodns.DNSResolver") as mock: mock().getaddrinfo.return_value = fake_aiodns_getaddrinfo_ipv4_result( ["127.0.0.1"] @@ -132,10 +180,14 @@ async def test_async_resolver_positive_ipv4_lookup(loop: Any) -> None: port=0, type=socket.SOCK_STREAM, ) + await resolver.close() @pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") -async def test_async_resolver_positive_link_local_ipv6_lookup(loop: Any) -> None: +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_async_resolver_positive_link_local_ipv6_lookup( + loop: asyncio.AbstractEventLoop, +) -> None: with patch("aiodns.DNSResolver") as mock: mock().getaddrinfo.return_value = fake_aiodns_getaddrinfo_ipv6_result( ["fe80::1"] @@ -154,46 +206,44 @@ async def test_async_resolver_positive_link_local_ipv6_lookup(loop: Any) -> None type=socket.SOCK_STREAM, ) mock().getnameinfo.assert_called_with(("fe80::1", 0, 0, 3), _NAME_SOCKET_FLAGS) + await resolver.close() @pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") -async def test_async_resolver_multiple_replies(loop: Any) -> None: +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_async_resolver_multiple_replies(loop: asyncio.AbstractEventLoop) -> None: with patch("aiodns.DNSResolver") as mock: ips = ["127.0.0.1", "127.0.0.2", "127.0.0.3", "127.0.0.4"] mock().getaddrinfo.return_value = fake_aiodns_getaddrinfo_ipv4_result(ips) resolver = AsyncResolver() real = await resolver.resolve("www.google.com") - ips = [ipaddress.ip_address(x["host"]) for x in real] - assert len(ips) > 3, "Expecting multiple addresses" - - -@pytest.mark.skipif(aiodns is None, reason="aiodns required") -async def test_async_resolver_query_multiple_replies(loop) -> None: - with patch("aiodns.DNSResolver") as mock: - del mock().gethostbyname - ips = ["127.0.0.1", "127.0.0.2", "127.0.0.3", "127.0.0.4"] - mock().query.return_value = fake_query_result(ips) - resolver = AsyncResolver(loop=loop) - real = await resolver.resolve("www.google.com") - ips = [ipaddress.ip_address(x["host"]) for x in real] + ipaddrs = [ipaddress.ip_address(x["host"]) for x in real] + assert len(ipaddrs) > 3, "Expecting multiple addresses" + await resolver.close() @pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") -async def test_async_resolver_negative_lookup(loop: Any) -> None: +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_async_resolver_negative_lookup(loop: asyncio.AbstractEventLoop) -> None: with patch("aiodns.DNSResolver") as mock: mock().getaddrinfo.side_effect = aiodns.error.DNSError() resolver = AsyncResolver() with pytest.raises(OSError): await resolver.resolve("doesnotexist.bla") + await resolver.close() @pytest.mark.skipif(not getaddrinfo, 
reason="aiodns >=3.2.0 required") -async def test_async_resolver_no_hosts_in_getaddrinfo(loop: Any) -> None: +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_async_resolver_no_hosts_in_getaddrinfo( + loop: asyncio.AbstractEventLoop, +) -> None: with patch("aiodns.DNSResolver") as mock: mock().getaddrinfo.return_value = fake_aiodns_getaddrinfo_ipv4_result([]) resolver = AsyncResolver() with pytest.raises(OSError): await resolver.resolve("doesnotexist.bla") + await resolver.close() async def test_threaded_resolver_positive_lookup() -> None: @@ -294,8 +344,9 @@ async def test_close_for_threaded_resolver(loop) -> None: @pytest.mark.skipif(aiodns is None, reason="aiodns required") -async def test_close_for_async_resolver(loop) -> None: - resolver = AsyncResolver(loop=loop) +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_close_for_async_resolver(loop: asyncio.AbstractEventLoop) -> None: + resolver = AsyncResolver() await resolver.close() @@ -306,7 +357,10 @@ async def test_default_loop_for_threaded_resolver(loop) -> None: @pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") -async def test_async_resolver_ipv6_positive_lookup(loop: Any) -> None: +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_async_resolver_ipv6_positive_lookup( + loop: asyncio.AbstractEventLoop, +) -> None: with patch("aiodns.DNSResolver") as mock: mock().getaddrinfo.return_value = fake_aiodns_getaddrinfo_ipv6_result(["::1"]) resolver = AsyncResolver() @@ -319,6 +373,7 @@ async def test_async_resolver_ipv6_positive_lookup(loop: Any) -> None: port=0, type=socket.SOCK_STREAM, ) + await resolver.close() @pytest.mark.skipif(aiodns is None, reason="aiodns required") @@ -363,6 +418,7 @@ async def test_async_resolver_query_fallback_error_messages_passed_no_hosts( @pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +@pytest.mark.usefixtures("check_no_lingering_resolvers") async def test_async_resolver_error_messages_passed( loop: asyncio.AbstractEventLoop, ) -> None: @@ -374,9 +430,11 @@ async def test_async_resolver_error_messages_passed( await resolver.resolve("x.org") assert excinfo.value.strerror == "Test error message" + await resolver.close() @pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +@pytest.mark.usefixtures("check_no_lingering_resolvers") async def test_async_resolver_error_messages_passed_no_hosts( loop: asyncio.AbstractEventLoop, ) -> None: @@ -388,15 +446,20 @@ async def test_async_resolver_error_messages_passed_no_hosts( await resolver.resolve("x.org") assert excinfo.value.strerror == "DNS lookup failed" + await resolver.close() -async def test_async_resolver_aiodns_not_present(loop: Any, monkeypatch: Any) -> None: +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_async_resolver_aiodns_not_present( + loop: asyncio.AbstractEventLoop, monkeypatch: pytest.MonkeyPatch +) -> None: monkeypatch.setattr("aiohttp.resolver.aiodns", None) with pytest.raises(RuntimeError): AsyncResolver(loop=loop) @pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +@pytest.mark.usefixtures("check_no_lingering_resolvers") def test_aio_dns_is_default() -> None: assert DefaultResolver is AsyncResolver @@ -404,3 +467,164 @@ def test_aio_dns_is_default() -> None: @pytest.mark.skipif(getaddrinfo, reason="aiodns <3.2.0 required") def test_threaded_resolver_is_default() -> None: assert DefaultResolver is ThreadedResolver + + +@pytest.mark.skipif(not getaddrinfo, reason="aiodns 
>=3.2.0 required") +async def test_dns_resolver_manager_sharing( + dns_resolver_manager: _DNSResolverManager, +) -> None: + """Test that the DNSResolverManager shares a resolver among AsyncResolver instances.""" + # Create two default AsyncResolver instances + resolver1 = AsyncResolver() + resolver2 = AsyncResolver() + + # Check that they share the same underlying resolver + assert resolver1._resolver is resolver2._resolver + + # Create an AsyncResolver with custom args + resolver3 = AsyncResolver(nameservers=["8.8.8.8"]) + + # Check that it has its own resolver + assert resolver1._resolver is not resolver3._resolver + + # Cleanup + await resolver1.close() + await resolver2.close() + await resolver3.close() + + +@pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +async def test_dns_resolver_manager_singleton( + dns_resolver_manager: _DNSResolverManager, +) -> None: + """Test that DNSResolverManager is a singleton.""" + # Create a second manager and check it's the same instance + manager1 = dns_resolver_manager + manager2 = _DNSResolverManager() + + assert manager1 is manager2 + + +@pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +async def test_dns_resolver_manager_resolver_lifecycle( + dns_resolver_manager: _DNSResolverManager, +) -> None: + """Test that DNSResolverManager creates and destroys resolver correctly.""" + manager = dns_resolver_manager + + # Initially there should be no resolvers + assert not manager._loop_data + + # Create a mock AsyncResolver for testing + mock_client = Mock(spec=AsyncResolver) + mock_client._loop = asyncio.get_running_loop() + + # Getting resolver should create one + mock_loop = mock_client._loop + resolver = manager.get_resolver(mock_client, mock_loop) + assert resolver is not None + assert manager._loop_data[mock_loop][0] is resolver + + # Getting it again should return the same instance + assert manager.get_resolver(mock_client, mock_loop) is resolver + + # Clean up + manager.release_resolver(mock_client, mock_loop) + assert not manager._loop_data + + +@pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +async def test_dns_resolver_manager_client_registration( + dns_resolver_manager: _DNSResolverManager, +) -> None: + """Test client registration and resolver release logic.""" + with patch("aiodns.DNSResolver") as mock: + # Create resolver instances + resolver1 = AsyncResolver() + resolver2 = AsyncResolver() + + # Both should use the same resolver from the manager + assert resolver1._resolver is resolver2._resolver + + # The manager should be tracking both clients + assert resolver1._manager is resolver2._manager + manager = resolver1._manager + assert manager is not None + loop = asyncio.get_running_loop() + _, client_set = manager._loop_data[loop] + assert len(client_set) == 2 + + # Close one resolver + await resolver1.close() + _, client_set = manager._loop_data[loop] + assert len(client_set) == 1 + + # Resolver should still exist + assert manager._loop_data # Not empty + + # Close the second resolver + await resolver2.close() + assert not manager._loop_data # Should be empty after closing all clients + + # Now all resolvers should be canceled and removed + assert not manager._loop_data # Should be empty + mock().cancel.assert_called_once() + + +@pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +async def test_dns_resolver_manager_multiple_event_loops( + dns_resolver_manager: _DNSResolverManager, +) -> None: + """Test that DNSResolverManager correctly manages resolvers 
across different event loops.""" + # Create separate resolvers for each loop + resolver1 = Mock(name="resolver1") + resolver2 = Mock(name="resolver2") + + # Create a patch that returns different resolvers based on the loop argument + mock_resolver = Mock() + mock_resolver.side_effect = lambda loop=None, **kwargs: ( + resolver1 if loop is asyncio.get_running_loop() else resolver2 + ) + + with patch("aiodns.DNSResolver", mock_resolver): + manager = dns_resolver_manager + + # Create two mock clients on different loops + mock_client1 = Mock(spec=AsyncResolver) + mock_client1._loop = asyncio.get_running_loop() + + # Create a second event loop + loop2 = Mock(spec=asyncio.AbstractEventLoop) + mock_client2 = Mock(spec=AsyncResolver) + mock_client2._loop = loop2 + + # Get resolvers for both clients + loop1 = mock_client1._loop + loop2 = mock_client2._loop + + # Get the resolvers through the manager + manager_resolver1 = manager.get_resolver(mock_client1, loop1) + manager_resolver2 = manager.get_resolver(mock_client2, loop2) + + # Should be different resolvers for different loops + assert manager_resolver1 is resolver1 + assert manager_resolver2 is resolver2 + assert manager._loop_data[loop1][0] is resolver1 + assert manager._loop_data[loop2][0] is resolver2 + + # Release the first resolver + manager.release_resolver(mock_client1, loop1) + + # First loop's resolver should be gone, but second should remain + assert loop1 not in manager._loop_data + assert loop2 in manager._loop_data + + # Release the second resolver + manager.release_resolver(mock_client2, loop2) + + # Both resolvers should be gone + assert not manager._loop_data + + # Verify resolver cleanup + resolver1.cancel.assert_called_once() + resolver2.cancel.assert_called_once() From f543fea73a8f3ea47d9e9de808eda63bed4d9e63 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Tue, 20 May 2025 20:38:08 +0000 Subject: [PATCH 48/90] [PR #10902/94de3f9d backport][3.12] Middleware cleanups (#10904) Co-authored-by: J. 
Nick Koston --- CHANGES/10902.feature.rst | 1 + aiohttp/client.py | 8 +- aiohttp/client_middlewares.py | 7 +- docs/client_advanced.rst | 288 ++++++++++++++++---------------- docs/client_reference.rst | 23 ++- tests/test_client_middleware.py | 126 +++++++++----- 6 files changed, 255 insertions(+), 198 deletions(-) create mode 120000 CHANGES/10902.feature.rst diff --git a/CHANGES/10902.feature.rst b/CHANGES/10902.feature.rst new file mode 120000 index 00000000000..b565aa68ee0 --- /dev/null +++ b/CHANGES/10902.feature.rst @@ -0,0 +1 @@ +9732.feature.rst \ No newline at end of file diff --git a/aiohttp/client.py b/aiohttp/client.py index 2b7afe1344c..bea1c6f61e7 100644 --- a/aiohttp/client.py +++ b/aiohttp/client.py @@ -24,6 +24,7 @@ List, Mapping, Optional, + Sequence, Set, Tuple, Type, @@ -192,7 +193,7 @@ class _RequestOptions(TypedDict, total=False): auto_decompress: Union[bool, None] max_line_size: Union[int, None] max_field_size: Union[int, None] - middlewares: Optional[Tuple[ClientMiddlewareType, ...]] + middlewares: Optional[Sequence[ClientMiddlewareType]] @attr.s(auto_attribs=True, frozen=True, slots=True) @@ -301,7 +302,7 @@ def __init__( max_line_size: int = 8190, max_field_size: int = 8190, fallback_charset_resolver: _CharsetResolver = lambda r, b: "utf-8", - middlewares: Optional[Tuple[ClientMiddlewareType, ...]] = None, + middlewares: Optional[Sequence[ClientMiddlewareType]] = None, ) -> None: # We initialise _connector to None immediately, as it's referenced in __del__() # and could cause issues if an exception occurs during initialisation. @@ -505,7 +506,7 @@ async def _request( auto_decompress: Optional[bool] = None, max_line_size: Optional[int] = None, max_field_size: Optional[int] = None, - middlewares: Optional[Tuple[ClientMiddlewareType, ...]] = None, + middlewares: Optional[Sequence[ClientMiddlewareType]] = None, ) -> ClientResponse: # NOTE: timeout clamps existing connect and read timeouts. We cannot @@ -705,7 +706,6 @@ async def _request( trust_env=self.trust_env, ) - # Core request handler - now includes connection logic async def _connect_and_send_request( req: ClientRequest, ) -> ClientResponse: diff --git a/aiohttp/client_middlewares.py b/aiohttp/client_middlewares.py index 6be353c3a40..3ca2cb202ad 100644 --- a/aiohttp/client_middlewares.py +++ b/aiohttp/client_middlewares.py @@ -1,6 +1,6 @@ """Client middleware support.""" -from collections.abc import Awaitable, Callable +from collections.abc import Awaitable, Callable, Sequence from .client_reqrep import ClientRequest, ClientResponse @@ -17,7 +17,7 @@ def build_client_middlewares( handler: ClientHandlerType, - middlewares: tuple[ClientMiddlewareType, ...], + middlewares: Sequence[ClientMiddlewareType], ) -> ClientHandlerType: """ Apply middlewares to request handler. @@ -28,9 +28,6 @@ def build_client_middlewares( This implementation avoids using partial/update_wrapper to minimize overhead and doesn't cache to avoid holding references to stateful middleware. """ - if not middlewares: - return handler - # Optimize for single middleware case if len(middlewares) == 1: middleware = middlewares[0] diff --git a/docs/client_advanced.rst b/docs/client_advanced.rst index 9affef7efe2..d598a40c6ab 100644 --- a/docs/client_advanced.rst +++ b/docs/client_advanced.rst @@ -123,32 +123,18 @@ background. Client Middleware ----------------- -aiohttp client supports middleware to intercept requests and responses. This can be +The client supports middleware to intercept requests and responses. 
This can be useful for authentication, logging, request/response modification, and retries. -To create a middleware, you need to define an async function that accepts the request -and a handler function, and returns the response. The middleware must match the -:type:`ClientMiddlewareType` type signature:: - - import logging - from aiohttp import ClientSession, ClientRequest, ClientResponse, ClientHandlerType - - _LOGGER = logging.getLogger(__name__) - - async def my_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - # Process request before sending - _LOGGER.debug(f"Request: {request.method} {request.url}") - - # Call the next handler - response = await handler(request) +Creating Middleware +^^^^^^^^^^^^^^^^^^^ - # Process response after receiving - _LOGGER.debug(f"Response: {response.status}") +To create a middleware, define an async function (or callable class) that accepts a request +and a handler function, and returns a response. Middleware must follow the +:type:`ClientMiddlewareType` signature (see :ref:`aiohttp-client-reference` for details). - return response +Using Middleware +^^^^^^^^^^^^^^^^ You can apply middleware to a client session or to individual requests:: @@ -160,175 +146,189 @@ You can apply middleware to a client session or to individual requests:: async with ClientSession() as session: resp = await session.get('http://example.com', middlewares=(my_middleware,)) -Middleware Examples +Middleware Chaining ^^^^^^^^^^^^^^^^^^^ -Here's a simple example showing request modification:: +Multiple middlewares are applied in the order they are listed:: - async def add_api_key_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - # Add API key to all requests - request.headers['X-API-Key'] = 'my-secret-key' - return await handler(request) + # Middlewares are applied in order: logging -> auth -> request + async with ClientSession(middlewares=(logging_middleware, auth_middleware)) as session: + resp = await session.get('http://example.com') + +A key aspect to understand about the flat middleware structure is that the execution flow follows this pattern: + +1. The first middleware in the list is called first and executes its code before calling the handler +2. The handler is the next middleware in the chain (or the actual request handler if there are no more middleware) +3. When the handler returns a response, execution continues in the first middleware after the handler call +4. This creates a nested "onion-like" pattern for execution + +For example, with ``middlewares=(middleware1, middleware2)``, the execution order would be: + +1. Enter ``middleware1`` (pre-request code) +2. Enter ``middleware2`` (pre-request code) +3. Execute the actual request handler +4. Exit ``middleware2`` (post-response code) +5. Exit ``middleware1`` (post-response code) + +This flat structure means that middleware is applied on each retry attempt inside the client's retry loop, not just once before all retries. This allows middleware to modify requests freshly on each retry attempt. + +.. note:: + + Client middleware is a powerful feature but should be used judiciously. + Each middleware adds overhead to request processing. For simple use cases + like adding static headers, you can often use request parameters + (e.g., ``headers``) or session configuration instead. + +Common Middleware Patterns +^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
_client-middleware-retry: -Middleware Retry Pattern -^^^^^^^^^^^^^^^^^^^^^^^^ +Authentication and Retry +"""""""""""""""""""""""" -Client middleware can implement retry logic internally using a ``while`` loop. This allows the middleware to: +There are two recommended approaches for implementing retry logic: -- Retry requests based on response status codes or other conditions -- Modify the request between retries (e.g., refreshing tokens) -- Maintain state across retry attempts -- Control when to stop retrying and return the response +1. **For Loop Pattern (Simple Cases)** -This pattern is particularly useful for: + Use a bounded ``for`` loop when the number of retry attempts is known and fixed:: -- Refreshing authentication tokens after a 401 response -- Switching to fallback servers or authentication methods -- Adding or modifying headers based on error responses -- Implementing back-off strategies with increasing delays + import hashlib + from aiohttp import ClientSession, ClientRequest, ClientResponse, ClientHandlerType -The middleware can maintain state between retries to track which strategies have been tried and modify the request accordingly for the next attempt. + async def auth_retry_middleware( + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + # Try up to 3 authentication methods + for attempt in range(3): + if attempt == 0: + # First attempt: use API key + request.headers["X-API-Key"] = "my-api-key" + elif attempt == 1: + # Second attempt: use Bearer token + request.headers["Authorization"] = "Bearer fallback-token" + else: + # Third attempt: use hash-based signature + secret_key = "my-secret-key" + url_path = str(request.url.path) + signature = hashlib.sha256(f"{url_path}{secret_key}".encode()).hexdigest() + request.headers["X-Signature"] = signature -Example: Retrying requests with middleware -"""""""""""""""""""""""""""""""""""""""""" + # Send the request + response = await handler(request) -:: + # If successful or not an auth error, return immediately + if response.status != 401: + return response - import logging - import aiohttp + # Return the last response if all retries are exhausted + return response - _LOGGER = logging.getLogger(__name__) +2. 
**While Loop Pattern (Complex Cases)** - class RetryMiddleware: - def __init__(self, max_retries: int = 3): - self.max_retries = max_retries + For more complex scenarios, use a ``while`` loop with strict exit conditions:: - async def __call__( - self, - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - retry_count = 0 - use_fallback_auth = False + import logging - while True: - # Modify request based on retry state - if use_fallback_auth: - request.headers['Authorization'] = 'Bearer fallback-token' + _LOGGER = logging.getLogger(__name__) - response = await handler(request) + class RetryMiddleware: + def __init__(self, max_retries: int = 3): + self.max_retries = max_retries - # Retry on 401 errors with different authentication - if response.status == 401 and retry_count < self.max_retries: - retry_count += 1 - use_fallback_auth = True - _LOGGER.debug(f"Retrying with fallback auth (attempt {retry_count})") - continue + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + retry_count = 0 - # Retry on 5xx errors - if response.status >= 500 and retry_count < self.max_retries: - retry_count += 1 - _LOGGER.debug(f"Retrying request (attempt {retry_count})") - continue + # Always have clear exit conditions + while retry_count <= self.max_retries: + # Send the request + response = await handler(request) - return response + # Exit conditions + if 200 <= response.status < 400 or retry_count >= self.max_retries: + return response -Middleware Chaining -^^^^^^^^^^^^^^^^^^^ + # Retry logic for different status codes + if response.status in (401, 429, 500, 502, 503, 504): + retry_count += 1 + _LOGGER.debug(f"Retrying request (attempt {retry_count}/{self.max_retries})") + continue -Multiple middlewares are applied in the order they are listed:: + # For any other status code, don't retry + return response - import logging + # Safety return (should never reach here) + return response - _LOGGER = logging.getLogger(__name__) +Request Modification +"""""""""""""""""""" - async def logging_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - _LOGGER.debug(f"[LOG] {request.method} {request.url}") - return await handler(request) +Modify request properties based on request content:: - async def auth_middleware( + async def content_type_middleware( request: ClientRequest, handler: ClientHandlerType ) -> ClientResponse: - request.headers['Authorization'] = 'Bearer token123' - return await handler(request) + # Examine URL path to determine content-type + if request.url.path.endswith('.json'): + request.headers['Content-Type'] = 'application/json' + elif request.url.path.endswith('.xml'): + request.headers['Content-Type'] = 'application/xml' - # Middlewares are applied in order: logging -> auth -> request - async with ClientSession(middlewares=(logging_middleware, auth_middleware)) as session: - resp = await session.get('http://example.com') + # Add custom headers based on HTTP method + if request.method == 'POST': + request.headers['X-Request-ID'] = f"post-{id(request)}" -.. note:: + return await handler(request) - Client middleware is a powerful feature but should be used judiciously. - Each middleware adds overhead to request processing. For simple use cases - like adding static headers, you can often use request parameters - (e.g., ``headers``) or session configuration instead. +Avoiding Infinite Recursion +^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. 
warning:: Using the same session from within middleware can cause infinite recursion if the middleware makes HTTP requests using the same session that has the middleware - applied. - - To avoid recursion, use one of these approaches: - - **Recommended:** Pass ``middlewares=()`` to requests made inside the middleware to - disable middleware for those specific requests:: - - async def log_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - async with request.session.post( - "https://logapi.example/log", - json={"url": str(request.url)}, - middlewares=() # This prevents infinite recursion - ) as resp: - pass + applied. This is especially risky in token refresh middleware or retry logic. - return await handler(request) + When implementing retry or refresh logic, always use bounded loops + (e.g., ``for _ in range(2):`` instead of ``while True:``) to prevent infinite recursion. - **Alternative:** Check the request contents (URL, path, host) to avoid applying - middleware to certain requests:: +To avoid recursion when making requests inside middleware, use one of these approaches: - async def log_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - if request.url.host != "logapi.example": # Avoid infinite recursion - async with request.session.post( - "https://logapi.example/log", - json={"url": str(request.url)} - ) as resp: - pass +**Option 1:** Disable middleware for internal requests:: - return await handler(request) - -Middleware Type -^^^^^^^^^^^^^^^ - -.. type:: ClientMiddlewareType - - Type alias for client middleware functions. Middleware functions must have this signature:: + async def log_middleware( + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + async with request.session.post( + "https://logapi.example/log", + json={"url": str(request.url)}, + middlewares=() # This prevents infinite recursion + ) as resp: + pass - Callable[ - [ClientRequest, ClientHandlerType], - Awaitable[ClientResponse] - ] + return await handler(request) -.. type:: ClientHandlerType +**Option 2:** Check request details to avoid recursive application:: - Type alias for client request handler functions:: + async def log_middleware( + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + if request.url.host != "logapi.example": # Avoid infinite recursion + async with request.session.post( + "https://logapi.example/log", + json={"url": str(request.url)} + ) as resp: + pass - Callable[ClientRequest, Awaitable[ClientResponse]] + return await handler(request) Custom Cookies -------------- diff --git a/docs/client_reference.rst b/docs/client_reference.rst index 8e6153bf40c..97933ada1ed 100644 --- a/docs/client_reference.rst +++ b/docs/client_reference.rst @@ -230,7 +230,7 @@ The client session supports the context manager protocol for self closing. disabling. See :ref:`aiohttp-client-tracing-reference` for more information. - :param middlewares: A tuple of middleware instances to apply to all session requests. + :param middlewares: A sequence of middleware instances to apply to all session requests. Each middleware must match the :type:`ClientMiddlewareType` signature. ``None`` (default) is used when no middleware is needed. See :ref:`aiohttp-client-middleware` for more information. @@ -544,7 +544,7 @@ The client session supports the context manager protocol for self closing. .. versionadded:: 3.0 - :param middlewares: A tuple of middleware instances to apply to this request only. 
+ :param middlewares: A sequence of middleware instances to apply to this request only. Each middleware must match the :type:`ClientMiddlewareType` signature. ``None`` by default which uses session middlewares. See :ref:`aiohttp-client-middleware` for more information. @@ -2624,3 +2624,22 @@ Hierarchy of exceptions * :exc:`InvalidUrlRedirectClientError` * :exc:`NonHttpUrlRedirectClientError` + + +Client Types +------------ + +.. type:: ClientMiddlewareType + + Type alias for client middleware functions. Middleware functions must have this signature:: + + Callable[ + [ClientRequest, ClientHandlerType], + Awaitable[ClientResponse] + ] + +.. type:: ClientHandlerType + + Type alias for client request handler functions:: + + Callable[[ClientRequest], Awaitable[ClientResponse]] diff --git a/tests/test_client_middleware.py b/tests/test_client_middleware.py index 2f79e4fd774..5894795dc21 100644 --- a/tests/test_client_middleware.py +++ b/tests/test_client_middleware.py @@ -74,13 +74,12 @@ async def handler(request: web.Request) -> web.Response: async def retry_middleware( request: ClientRequest, handler: ClientHandlerType ) -> ClientResponse: - retry_count = 0 - while True: + response = None + for _ in range(2): # pragma: no branch response = await handler(request) - if response.status == 503 and retry_count < 1: - retry_count += 1 - continue - return response + if response.ok: + return response + assert False, "not reachable in test" app = web.Application() app.router.add_get("/", handler) @@ -244,30 +243,28 @@ async def handler(request: web.Request) -> web.Response: async def challenge_auth_middleware( request: ClientRequest, handler: ClientHandlerType ) -> ClientResponse: - challenge_data: Dict[str, Union[bool, str, None]] = { - "nonce": None, - "attempted": False, - } + nonce: Optional[str] = None + attempted: bool = False while True: # If we have challenge data from previous attempt, add auth header - if challenge_data["nonce"] and challenge_data["attempted"]: - request.headers["Authorization"] = ( - f'Custom response="{challenge_data["nonce"]}-secret"' - ) + if nonce and attempted: + request.headers["Authorization"] = f'Custom response="{nonce}-secret"' response = await handler(request) # If we get a 401 with challenge, store it and retry - if response.status == 401 and not challenge_data["attempted"]: + if response.status == 401 and not attempted: www_auth = response.headers.get("WWW-Authenticate") - if www_auth and "nonce=" in www_auth: # pragma: no branch + if www_auth and "nonce=" in www_auth: # Extract nonce from authentication header nonce_start = www_auth.find('nonce="') + 7 nonce_end = www_auth.find('"', nonce_start) - challenge_data["nonce"] = www_auth[nonce_start:nonce_end] - challenge_data["attempted"] = True + nonce = www_auth[nonce_start:nonce_end] + attempted = True continue + else: + assert False, "Should not reach here" return response @@ -324,7 +321,7 @@ async def multi_step_auth_middleware( ) -> ClientResponse: request.headers["X-Client-ID"] = "test-client" - while True: + for _ in range(3): # Apply auth based on current state if middleware_state["step"] == 1 and middleware_state["session"]: request.headers["Authorization"] = ( @@ -347,13 +344,17 @@ async def multi_step_auth_middleware( middleware_state["step"] = 1 continue - elif auth_step == "2": # pragma: no branch + elif auth_step == "2": # Second step: store challenge middleware_state["challenge"] = response.headers.get("X-Challenge") middleware_state["step"] = 2 continue + else: + assert False, "Should not reach here" 
return response + # This should not be reached but keeps mypy happy + assert False, "Should not reach here" app = web.Application() app.router.add_get("/", handler) @@ -396,7 +397,7 @@ async def handler(request: web.Request) -> web.Response: async def token_refresh_middleware( request: ClientRequest, handler: ClientHandlerType ) -> ClientResponse: - while True: + for _ in range(2): # Add token to request request.headers["X-Auth-Token"] = str(token_state["token"]) @@ -407,13 +408,17 @@ async def token_refresh_middleware( data = await response.json() if data.get("error") == "token_expired" and data.get( "refresh_required" - ): # pragma: no branch + ): # Simulate token refresh token_state["token"] = "refreshed-token" token_state["refreshed"] = True continue + else: + assert False, "Should not reach here" return response + # This should not be reached but keeps mypy happy + assert False, "Should not reach here" app = web.Application() app.router.add_get("/", handler) @@ -490,7 +495,6 @@ class RetryMiddleware: def __init__(self, max_retries: int = 3) -> None: self.max_retries = max_retries - self.retry_counts: Dict[int, int] = {} # Track retries per request async def __call__( self, request: ClientRequest, handler: ClientHandlerType @@ -576,10 +580,55 @@ async def handler(request: web.Request) -> web.Response: assert headers_received.get("X-Custom-2") == "value2" -async def test_client_middleware_disable_with_empty_tuple( +async def test_request_middleware_overrides_session_middleware_with_empty( aiohttp_server: AiohttpServer, ) -> None: - """Test that passing middlewares=() to a request disables session-level middlewares.""" + """Test that passing empty middlewares tuple to a request disables session-level middlewares.""" + session_middleware_called = False + + async def handler(request: web.Request) -> web.Response: + auth_header = request.headers.get("Authorization") + if auth_header: + return web.Response(text=f"Auth: {auth_header}") + return web.Response(text="No auth") + + async def session_middleware( + request: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + nonlocal session_middleware_called + session_middleware_called = True + request.headers["Authorization"] = "Bearer session-token" + response = await handler(request) + return response + + app = web.Application() + app.router.add_get("/", handler) + server = await aiohttp_server(app) + + # Create session with middleware + async with ClientSession(middlewares=(session_middleware,)) as session: + # First request uses session middleware + async with session.get(server.make_url("/")) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "Auth: Bearer session-token" + assert session_middleware_called is True + + # Reset flags + session_middleware_called = False + + # Second request explicitly disables middlewares with empty tuple + async with session.get(server.make_url("/"), middlewares=()) as resp: + assert resp.status == 200 + text = await resp.text() + assert text == "No auth" + assert session_middleware_called is False + + +async def test_request_middleware_overrides_session_middleware_with_specific( + aiohttp_server: AiohttpServer, +) -> None: + """Test that passing specific middlewares to a request overrides session-level middlewares.""" session_middleware_called = False request_middleware_called = False @@ -625,19 +674,7 @@ async def request_middleware( session_middleware_called = False request_middleware_called = False - # Second request explicitly disables middlewares - async with 
session.get(server.make_url("/"), middlewares=()) as resp: - assert resp.status == 200 - text = await resp.text() - assert text == "No auth" - assert session_middleware_called is False - assert request_middleware_called is False - - # Reset flags - session_middleware_called = False - request_middleware_called = False - - # Third request uses request-specific middleware + # Second request uses request-specific middleware async with session.get( server.make_url("/"), middlewares=(request_middleware,) ) as resp: @@ -745,9 +782,13 @@ async def blocking_middleware( # Verify that connections were attempted in the correct order assert len(connection_attempts) == 3 - assert allowed_url.host and allowed_url.host in connection_attempts[0] - assert "blocked.example.com" in connection_attempts[1] - assert "evil.com" in connection_attempts[2] + assert allowed_url.host + + assert connection_attempts == [ + str(server.make_url("/")), + "https://blocked.example.com/", + "https://evil.com/path", + ] # Check that no connections were leaked assert len(connector._conns) == 0 @@ -1042,8 +1083,7 @@ def get_hash(self, request: ClientRequest) -> str: data = "{}" # Simulate authentication hash without using real crypto - signature = f"SIGNATURE-{self.secretkey}-{len(data)}-{data[:10]}" - return signature + return f"SIGNATURE-{self.secretkey}-{len(data)}-{data[:10]}" async def __call__( self, request: ClientRequest, handler: ClientHandlerType From 9a2835ae81c7dd085bf3fba22bbe9bfba421ae04 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Tue, 20 May 2025 20:56:32 +0000 Subject: [PATCH 49/90] [PR #10907/b25eca01 backport][3.12] Fix flakey test_uvloop_secure_https_proxy test (#10909) Co-authored-by: J. Nick Koston --- tests/test_proxy_functional.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/tests/test_proxy_functional.py b/tests/test_proxy_functional.py index f86975b7423..78521ae6008 100644 --- a/tests/test_proxy_functional.py +++ b/tests/test_proxy_functional.py @@ -218,7 +218,7 @@ async def test_uvloop_secure_https_proxy( uvloop_loop: asyncio.AbstractEventLoop, ) -> None: """Ensure HTTPS sites are accessible through a secure proxy without warning when using uvloop.""" - conn = aiohttp.TCPConnector() + conn = aiohttp.TCPConnector(force_close=True) sess = aiohttp.ClientSession(connector=conn) try: url = URL("https://example.com") @@ -227,6 +227,8 @@ async def test_uvloop_secure_https_proxy( url, proxy=secure_proxy_url, ssl=client_ssl_ctx ) as response: assert response.status == 200 + # Ensure response body is read to completion + await response.read() finally: await sess.close() await conn.close() From 437dffaa1441b92ee373cadf6b64bc7fa11fe626 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 20 May 2025 21:05:20 +0000 Subject: [PATCH 50/90] Bump multidict from 6.4.3 to 6.4.4 (#10892) Bumps [multidict](https://github.com/aio-libs/multidict) from 6.4.3 to 6.4.4.
Release notes

Sourced from multidict's releases.

6.4.4

Bug fixes

  • Fixed a segmentation fault when calling :py:meth:`multidict.MultiDict.setdefault` with a single argument -- by :user:`bdraco`.

    Related issues and pull requests on GitHub: #1160.

  • Fixed a segmentation fault when attempting to directly instantiate view objects (``multidict._ItemsView``, ``multidict._KeysView``, ``multidict._ValuesView``) -- by :user:`bdraco`.

    View objects now raise a proper :exc:`TypeError` with the message "cannot create '...' instances directly" when direct instantiation is attempted.

    View objects should only be created through the proper methods: :py:meth:`multidict.MultiDict.items`, :py:meth:`multidict.MultiDict.keys`, and :py:meth:`multidict.MultiDict.values` -- see the sketch after this list.

    Related issues and pull requests on GitHub: #1164.
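
A minimal sketch of the two fixed code paths (hypothetical usage; assumes multidict 6.4.4 is installed -- on the affected earlier releases both calls could crash the interpreter)::

    from multidict import MultiDict

    md = MultiDict([("key", "a")])

    # setdefault with a single argument now works instead of segfaulting;
    # it returns the existing value (or inserts None for a missing key).
    assert md.setdefault("key") == "a"

    # Views must be obtained via the mapping methods...
    items = md.items()

    # ...and directly instantiating a view type now raises TypeError
    # instead of segfaulting.
    try:
        type(items)()
    except TypeError as exc:
        print(exc)  # cannot create '...' instances directly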

Miscellaneous internal changes

  • :class:`multidict.MultiDictProxy` was refactored to rely only on the :class:`multidict.MultiDict` public interface, without touching any implementation details.

    Related issues and pull requests on GitHub: #1150.

  • Multidict views were refactored to rely only on the :class:`multidict.MultiDict` API, without touching any implementation details.

    Related issues and pull requests on GitHub: #1152.

  • Dropped the internal ``_Impl`` class from the pure Python implementation; both the pure Python and C extension versions now follow the same design internally.

    Related issues and pull requests on GitHub: #1153.



Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: J. Nick Koston --- requirements/base.txt | 2 +- requirements/constraints.txt | 2 +- requirements/cython.txt | 2 +- requirements/dev.txt | 2 +- requirements/multidict.txt | 2 +- requirements/runtime-deps.txt | 2 +- requirements/test.txt | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/requirements/base.txt b/requirements/base.txt index 1a0c6fe1046..26c18e2f53e 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -26,7 +26,7 @@ gunicorn==23.0.0 # via -r requirements/base.in idna==3.4 # via yarl -multidict==6.4.3 +multidict==6.4.4 # via # -r requirements/runtime-deps.in # yarl diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 9a53aaaea12..3c3cf6cfacf 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -115,7 +115,7 @@ markupsafe==3.0.2 # via jinja2 mdurl==0.1.2 # via markdown-it-py -multidict==6.4.3 +multidict==6.4.4 # via # -r requirements/multidict.in # -r requirements/runtime-deps.in diff --git a/requirements/cython.txt b/requirements/cython.txt index 1dd3cc00fc4..8d7e2dc256c 100644 --- a/requirements/cython.txt +++ b/requirements/cython.txt @@ -6,7 +6,7 @@ # cython==3.1.1 # via -r requirements/cython.in -multidict==6.4.3 +multidict==6.4.4 # via -r requirements/multidict.in typing-extensions==4.13.2 # via multidict diff --git a/requirements/dev.txt b/requirements/dev.txt index ce52430fbee..82750d218f3 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -113,7 +113,7 @@ markupsafe==3.0.2 # via jinja2 mdurl==0.1.2 # via markdown-it-py -multidict==6.4.3 +multidict==6.4.4 # via # -r requirements/runtime-deps.in # yarl diff --git a/requirements/multidict.txt b/requirements/multidict.txt index 41435a67142..abd2e2cc9eb 100644 --- a/requirements/multidict.txt +++ b/requirements/multidict.txt @@ -4,7 +4,7 @@ # # pip-compile --allow-unsafe --output-file=requirements/multidict.txt --resolver=backtracking --strip-extras requirements/multidict.in # -multidict==6.4.3 +multidict==6.4.4 # via -r requirements/multidict.in typing-extensions==4.13.2 # via multidict diff --git a/requirements/runtime-deps.txt b/requirements/runtime-deps.txt index 863d4525cad..58263ab61ed 100644 --- a/requirements/runtime-deps.txt +++ b/requirements/runtime-deps.txt @@ -24,7 +24,7 @@ frozenlist==1.6.0 # aiosignal idna==3.4 # via yarl -multidict==6.4.3 +multidict==6.4.4 # via # -r requirements/runtime-deps.in # yarl diff --git a/requirements/test.txt b/requirements/test.txt index 5b3444b3cc4..683001e8967 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -59,7 +59,7 @@ markdown-it-py==3.0.0 # via rich mdurl==0.1.2 # via markdown-it-py -multidict==6.4.3 +multidict==6.4.4 # via # -r requirements/runtime-deps.in # yarl From a61fcc62f5f4eddcdfa5ca5bd489b7254e6c4290 Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Tue, 20 May 2025 18:01:48 -0400 Subject: [PATCH 51/90] Release 3.12.0b0 (#10911) --- CHANGES.rst | 193 ++++++++++++++++++++++++++++++++++++++++++++ aiohttp/__init__.py | 2 +- 2 files changed, 194 insertions(+), 1 deletion(-) diff --git a/CHANGES.rst b/CHANGES.rst index 11fd19153e3..651437c90bd 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -10,6 +10,199 @@ .. 
towncrier release notes start +3.12.0b0 (2025-05-20) +===================== + +Bug fixes +--------- + +- Response is now always True, instead of using MutableMapping behaviour (False when map is empty) + + + *Related issues and pull requests on GitHub:* + :issue:`10119`. + + + +- Fixed pytest plugin to not use deprecated :py:mod:`asyncio` policy APIs. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + + +Features +-------- + +- Added a comprehensive HTTP Digest Authentication client middleware (DigestAuthMiddleware) + that implements RFC 7616. The middleware supports all standard hash algorithms + (MD5, SHA, SHA-256, SHA-512) with session variants, handles both 'auth' and + 'auth-int' quality of protection options, and automatically manages the + authentication flow by intercepting 401 responses and retrying with proper + credentials -- by :user:`feus4177`, :user:`TimMenninger`, and :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`2213`, :issue:`10725`. + + + +- Added client middleware support -- by :user:`bdraco` and :user:`Dreamsorcerer`. + + This change allows users to add middleware to the client session and requests, enabling features like + authentication, logging, and request/response modification without modifying the core + request logic. Additionally, the ``session`` attribute was added to ``ClientRequest``, + allowing middleware to access the session for making additional requests. + + + *Related issues and pull requests on GitHub:* + :issue:`9732`, :issue:`10902`. + + + +- Allow user setting zlib compression backend -- by :user:`TimMenninger` + + This change allows the user to call :func:`aiohttp.set_zlib_backend()` with the + zlib compression module of their choice. Default behavior continues to use + the builtin ``zlib`` library. + + + *Related issues and pull requests on GitHub:* + :issue:`9798`. + + + +- Added support for overriding the base URL with an absolute one in client sessions + -- by :user:`vivodi`. + + + *Related issues and pull requests on GitHub:* + :issue:`10074`. + + + +- Added ``host`` parameter to ``aiohttp_server`` fixture -- by :user:`christianwbrock`. + + + *Related issues and pull requests on GitHub:* + :issue:`10120`. + + + +- Detect blocking calls in coroutines using BlockBuster -- by :user:`cbornet`. + + + *Related issues and pull requests on GitHub:* + :issue:`10433`. + + + +- Added ``socket_factory`` to :py:class:`aiohttp.TCPConnector` to allow specifying custom socket options + -- by :user:`TimMenninger`. + + + *Related issues and pull requests on GitHub:* + :issue:`10474`, :issue:`10520`. + + + +- Started building armv7l manylinux wheels -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10797`. + + + +- Implemented shared DNS resolver management to fix excessive resolver object creation + when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures + only one ``DNSResolver`` object is created for default configurations, significantly + reducing resource usage and improving performance for applications using multiple + client sessions simultaneously -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10847`. + + + + +Packaging updates and notes for downstreams +------------------------------------------- + +- Removed non SPDX-license description from ``setup.cfg`` -- by :user:`devanshu-ziphq`. + + + *Related issues and pull requests on GitHub:* + :issue:`10662`. 
+ + + +- ``aiodns`` is now installed on Windows with speedups extra -- by :user:`bdraco`. + + As of ``aiodns`` 3.3.0, ``SelectorEventLoop`` is no longer required when using ``pycares`` 4.7.0 or later. + + + *Related issues and pull requests on GitHub:* + :issue:`10823`. + + + +- Fixed compatibility issue with Cython 3.1.1 -- by :user:`bdraco` + + + *Related issues and pull requests on GitHub:* + :issue:`10877`. + + + + +Contributor-facing changes +-------------------------- + +- Sped up tests by disabling ``blockbuster`` fixture for ``test_static_file_huge`` and ``test_static_file_huge_cancel`` tests -- by :user:`dikos1337`. + + + *Related issues and pull requests on GitHub:* + :issue:`9705`, :issue:`10761`. + + + +- Updated tests to avoid using deprecated :py:mod:`asyncio` policy APIs and + make it compatible with Python 3.14. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + + +Miscellaneous internal changes +------------------------------ + +- Added support for the ``partitioned`` attribute in the ``set_cookie`` method. + + + *Related issues and pull requests on GitHub:* + :issue:`9870`. + + + +- Setting :attr:`aiohttp.web.StreamResponse.last_modified` to an unsupported type will now raise :exc:`TypeError` instead of silently failing -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10146`. + + + + +---- + + 3.11.18 (2025-04-20) ==================== diff --git a/aiohttp/__init__.py b/aiohttp/__init__.py index 4bc6a3a2b22..9ca85c654c5 100644 --- a/aiohttp/__init__.py +++ b/aiohttp/__init__.py @@ -1,4 +1,4 @@ -__version__ = "3.12.0.dev0" +__version__ = "3.12.0b0" from typing import TYPE_CHECKING, Tuple From 761a16c26a9d840c0018a7afb257ecf21bf04b47 Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Tue, 20 May 2025 18:22:30 -0400 Subject: [PATCH 52/90] [PR #10910/36a2567 backport][3.12] Remove mocked coro from tests (#10914) --- CONTRIBUTORS.txt | 1 + tests/test_client_request.py | 7 +- tests/test_client_response.py | 3 +- tests/test_client_session.py | 15 ++- tests/test_client_ws.py | 9 +- tests/test_connector.py | 51 +++++----- tests/test_http_writer.py | 21 +++-- tests/test_multipart.py | 9 +- tests/test_proxy.py | 149 ++++++++++++++++-------------- tests/test_run_app.py | 42 +++++---- tests/test_tracing.py | 4 +- tests/test_web_app.py | 5 +- tests/test_web_functional.py | 20 ++-- tests/test_web_request_handler.py | 5 +- tests/test_web_response.py | 12 +-- tests/test_web_sendfile.py | 12 +-- tests/test_web_websocket.py | 25 ++--- tests/test_websocket_writer.py | 3 +- 18 files changed, 202 insertions(+), 191 deletions(-) diff --git a/CONTRIBUTORS.txt b/CONTRIBUTORS.txt index 32e6e119aa7..5ff1eea3da7 100644 --- a/CONTRIBUTORS.txt +++ b/CONTRIBUTORS.txt @@ -287,6 +287,7 @@ Pavol Vargovčík Pawel Kowalski Pawel Miech Pepe Osca +Phebe Polk Philipp A. 
Pierre-Louis Peeters Pieter van Beek diff --git a/tests/test_client_request.py b/tests/test_client_request.py index 6454b42c89b..4706c10a588 100644 --- a/tests/test_client_request.py +++ b/tests/test_client_request.py @@ -24,7 +24,6 @@ ) from aiohttp.compression_utils import ZLibBackend from aiohttp.http import HttpVersion10, HttpVersion11 -from aiohttp.test_utils import make_mocked_coro class WriterMock(mock.AsyncMock): @@ -806,7 +805,7 @@ async def test_content_encoding(loop, conn) -> None: "post", URL("http://python.org/"), data="foo", compress="deflate", loop=loop ) with mock.patch("aiohttp.client_reqrep.StreamWriter") as m_writer: - m_writer.return_value.write_headers = make_mocked_coro() + m_writer.return_value.write_headers = mock.AsyncMock() resp = await req.send(conn) assert req.headers["TRANSFER-ENCODING"] == "chunked" assert req.headers["CONTENT-ENCODING"] == "deflate" @@ -837,7 +836,7 @@ async def test_content_encoding_header(loop, conn) -> None: loop=loop, ) with mock.patch("aiohttp.client_reqrep.StreamWriter") as m_writer: - m_writer.return_value.write_headers = make_mocked_coro() + m_writer.return_value.write_headers = mock.AsyncMock() resp = await req.send(conn) assert not m_writer.return_value.enable_compression.called @@ -887,7 +886,7 @@ async def test_chunked2(loop, conn) -> None: async def test_chunked_explicit(loop, conn) -> None: req = ClientRequest("post", URL("http://python.org/"), chunked=True, loop=loop) with mock.patch("aiohttp.client_reqrep.StreamWriter") as m_writer: - m_writer.return_value.write_headers = make_mocked_coro() + m_writer.return_value.write_headers = mock.AsyncMock() resp = await req.send(conn) assert "chunked" == req.headers["TRANSFER-ENCODING"] diff --git a/tests/test_client_response.py b/tests/test_client_response.py index 18ba6c5149d..4a8000962d1 100644 --- a/tests/test_client_response.py +++ b/tests/test_client_response.py @@ -14,7 +14,6 @@ from aiohttp import ClientSession, http from aiohttp.client_reqrep import ClientResponse, RequestInfo from aiohttp.helpers import TimerNoop -from aiohttp.test_utils import make_mocked_coro class WriterMock(mock.AsyncMock): @@ -1104,7 +1103,7 @@ def test_redirect_history_in_exception() -> None: async def test_response_read_triggers_callback(loop, session) -> None: trace = mock.Mock() - trace.send_response_chunk_received = make_mocked_coro() + trace.send_response_chunk_received = mock.AsyncMock() response_method = "get" response_url = URL("http://def-cl-resp.org") response_body = b"This is response" diff --git a/tests/test_client_session.py b/tests/test_client_session.py index 548af5db551..0656a9ed023 100644 --- a/tests/test_client_session.py +++ b/tests/test_client_session.py @@ -23,7 +23,6 @@ from aiohttp.helpers import DEBUG from aiohttp.http import RawResponseMessage from aiohttp.pytest_plugin import AiohttpServer -from aiohttp.test_utils import make_mocked_coro from aiohttp.tracing import Trace @@ -738,10 +737,10 @@ async def handler(request: web.Request) -> web.Response: trace_config_ctx = mock.Mock() trace_request_ctx = {} body = "This is request body" - gathered_req_headers = CIMultiDict() - on_request_start = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_request_redirect = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_request_end = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) + gathered_req_headers: CIMultiDict[str] = CIMultiDict() + on_request_start = mock.AsyncMock() + on_request_redirect = mock.AsyncMock() + on_request_end = mock.AsyncMock() with io.BytesIO() as 
gathered_req_body, io.BytesIO() as gathered_res_body: @@ -809,7 +808,7 @@ async def redirect_handler(request): app.router.add_get("/", root_handler) app.router.add_get("/redirect", redirect_handler) - mocks = [mock.Mock(side_effect=make_mocked_coro(mock.Mock())) for _ in range(7)] + mocks = [mock.AsyncMock() for _ in range(7)] ( on_request_start, on_request_redirect, @@ -900,8 +899,8 @@ def to_url(path: str) -> URL: async def test_request_tracing_exception() -> None: loop = asyncio.get_event_loop() - on_request_end = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_request_exception = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) + on_request_end = mock.AsyncMock() + on_request_exception = mock.AsyncMock() trace_config = aiohttp.TraceConfig() trace_config.on_request_end.append(on_request_end) diff --git a/tests/test_client_ws.py b/tests/test_client_ws.py index 92b5d117db7..48481055a7f 100644 --- a/tests/test_client_ws.py +++ b/tests/test_client_ws.py @@ -11,7 +11,6 @@ from aiohttp import ClientConnectionResetError, ServerDisconnectedError, client, hdrs from aiohttp.http import WS_KEY from aiohttp.streams import EofStream -from aiohttp.test_utils import make_mocked_coro async def test_ws_connect(ws_key: Any, loop: Any, key_data: Any) -> None: @@ -352,7 +351,7 @@ async def test_close(loop, ws_key, key_data) -> None: m_req.return_value.set_result(resp) writer = mock.Mock() WebSocketWriter.return_value = writer - writer.close = make_mocked_coro() + writer.close = mock.AsyncMock() session = aiohttp.ClientSession(loop=loop) resp = await session.ws_connect("http://test.org") @@ -461,7 +460,7 @@ async def test_close_exc( m_req.return_value.set_result(mresp) writer = mock.Mock() WebSocketWriter.return_value = writer - writer.close = make_mocked_coro() + writer.close = mock.AsyncMock() session = aiohttp.ClientSession(loop=loop) resp = await session.ws_connect("http://test.org") @@ -595,7 +594,7 @@ async def test_reader_read_exception(ws_key, key_data, loop) -> None: writer = mock.Mock() WebSocketWriter.return_value = writer - writer.close = make_mocked_coro() + writer.close = mock.AsyncMock() session = aiohttp.ClientSession(loop=loop) resp = await session.ws_connect("http://test.org") @@ -731,7 +730,7 @@ async def test_ws_connect_deflate_per_message(loop, ws_key, key_data) -> None: m_req.return_value = loop.create_future() m_req.return_value.set_result(resp) writer = WebSocketWriter.return_value = mock.Mock() - send_frame = writer.send_frame = make_mocked_coro() + send_frame = writer.send_frame = mock.AsyncMock() session = aiohttp.ClientSession(loop=loop) resp = await session.ws_connect("http://test.org") diff --git a/tests/test_connector.py b/tests/test_connector.py index fd2cdac7a94..8128b47f02d 100644 --- a/tests/test_connector.py +++ b/tests/test_connector.py @@ -41,7 +41,7 @@ _DNSCacheTable, ) from aiohttp.resolver import ResolveResult -from aiohttp.test_utils import make_mocked_coro, unused_port +from aiohttp.test_utils import unused_port from aiohttp.tracing import Trace @@ -1347,10 +1347,10 @@ def exception_handler(loop, context): async def test_tcp_connector_dns_tracing(loop, dns_response) -> None: session = mock.Mock() trace_config_ctx = mock.Mock() - on_dns_resolvehost_start = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_dns_resolvehost_end = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_dns_cache_hit = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_dns_cache_miss = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) + 
on_dns_resolvehost_start = mock.AsyncMock() + on_dns_resolvehost_end = mock.AsyncMock() + on_dns_cache_hit = mock.AsyncMock() + on_dns_cache_miss = mock.AsyncMock() trace_config = aiohttp.TraceConfig( trace_config_ctx_factory=mock.Mock(return_value=trace_config_ctx) @@ -1392,8 +1392,8 @@ async def test_tcp_connector_dns_tracing(loop, dns_response) -> None: async def test_tcp_connector_dns_tracing_cache_disabled(loop, dns_response) -> None: session = mock.Mock() trace_config_ctx = mock.Mock() - on_dns_resolvehost_start = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_dns_resolvehost_end = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) + on_dns_resolvehost_start = mock.AsyncMock() + on_dns_resolvehost_end = mock.AsyncMock() trace_config = aiohttp.TraceConfig( trace_config_ctx_factory=mock.Mock(return_value=trace_config_ctx) @@ -1447,8 +1447,8 @@ async def test_tcp_connector_dns_tracing_cache_disabled(loop, dns_response) -> N async def test_tcp_connector_dns_tracing_throttle_requests(loop, dns_response) -> None: session = mock.Mock() trace_config_ctx = mock.Mock() - on_dns_cache_hit = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_dns_cache_miss = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) + on_dns_cache_hit = mock.AsyncMock() + on_dns_cache_miss = mock.AsyncMock() trace_config = aiohttp.TraceConfig( trace_config_ctx_factory=mock.Mock(return_value=trace_config_ctx) @@ -1477,8 +1477,8 @@ async def test_tcp_connector_dns_tracing_throttle_requests(loop, dns_response) - async def test_dns_error(loop) -> None: connector = aiohttp.TCPConnector(loop=loop) - connector._resolve_host = make_mocked_coro( - raise_exception=OSError("dont take it serious") + connector._resolve_host = mock.AsyncMock( + side_effect=OSError("dont take it serious") ) req = ClientRequest("GET", URL("http://www.python.org"), loop=loop) @@ -1577,8 +1577,8 @@ async def test_connect(loop, key) -> None: async def test_connect_tracing(loop) -> None: session = mock.Mock() trace_config_ctx = mock.Mock() - on_connection_create_start = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_connection_create_end = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) + on_connection_create_start = mock.AsyncMock() + on_connection_create_end = mock.AsyncMock() trace_config = aiohttp.TraceConfig( trace_config_ctx_factory=mock.Mock(return_value=trace_config_ctx) @@ -2573,8 +2573,8 @@ async def f(): async def test_connect_queued_operation_tracing(loop, key) -> None: session = mock.Mock() trace_config_ctx = mock.Mock() - on_connection_queued_start = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_connection_queued_end = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) + on_connection_queued_start = mock.AsyncMock() + on_connection_queued_end = mock.AsyncMock() trace_config = aiohttp.TraceConfig( trace_config_ctx_factory=mock.Mock(return_value=trace_config_ctx) @@ -2619,7 +2619,7 @@ async def f(): async def test_connect_reuseconn_tracing(loop, key) -> None: session = mock.Mock() trace_config_ctx = mock.Mock() - on_connection_reuseconn = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) + on_connection_reuseconn = mock.AsyncMock() trace_config = aiohttp.TraceConfig( trace_config_ctx_factory=mock.Mock(return_value=trace_config_ctx) @@ -3111,9 +3111,10 @@ async def test_unix_connector_not_found(loop) -> None: @pytest.mark.skipif(not hasattr(socket, "AF_UNIX"), reason="requires UNIX sockets") -async def test_unix_connector_permission(loop) -> None: - loop.create_unix_connection = 
make_mocked_coro(raise_exception=PermissionError()) - connector = aiohttp.UnixConnector("/" + uuid.uuid4().hex, loop=loop) +async def test_unix_connector_permission(loop: asyncio.AbstractEventLoop) -> None: + m = mock.AsyncMock(side_effect=PermissionError()) + with mock.patch.object(loop, "create_unix_connection", m): + connector = aiohttp.UnixConnector("/" + uuid.uuid4().hex) req = ClientRequest("GET", URL("http://www.python.org"), loop=loop) with pytest.raises(aiohttp.ClientConnectorError): @@ -3142,11 +3143,13 @@ async def test_named_pipe_connector_not_found(proactor_loop, pipe_name) -> None: @pytest.mark.skipif( platform.system() != "Windows", reason="Proactor Event loop present only in Windows" ) -async def test_named_pipe_connector_permission(proactor_loop, pipe_name) -> None: - proactor_loop.create_pipe_connection = make_mocked_coro( - raise_exception=PermissionError() - ) - connector = aiohttp.NamedPipeConnector(pipe_name, loop=proactor_loop) +async def test_named_pipe_connector_permission( + proactor_loop: asyncio.AbstractEventLoop, pipe_name: str +) -> None: + m = mock.AsyncMock(side_effect=PermissionError()) + with mock.patch.object(proactor_loop, "create_pipe_connection", m): + asyncio.set_event_loop(proactor_loop) + connector = aiohttp.NamedPipeConnector(pipe_name) req = ClientRequest("GET", URL("http://www.python.org"), loop=proactor_loop) with pytest.raises(aiohttp.ClientConnectorError): diff --git a/tests/test_http_writer.py b/tests/test_http_writer.py index 7f813692571..ec256275d22 100644 --- a/tests/test_http_writer.py +++ b/tests/test_http_writer.py @@ -12,7 +12,6 @@ from aiohttp.base_protocol import BaseProtocol from aiohttp.compression_utils import ZLibBackend from aiohttp.http_writer import _serialize_headers -from aiohttp.test_utils import make_mocked_coro @pytest.fixture @@ -58,7 +57,7 @@ def writelines(chunks: Iterable[bytes]) -> None: @pytest.fixture def protocol(loop, transport): protocol = mock.Mock(transport=transport) - protocol._drain_helper = make_mocked_coro() + protocol._drain_helper = mock.AsyncMock() return protocol @@ -732,7 +731,7 @@ async def test_write_payload_slicing_long_memoryview(buf, protocol, transport, l async def test_write_drain(protocol, transport, loop) -> None: msg = http.StreamWriter(protocol, loop) - msg.drain = make_mocked_coro() + msg.drain = mock.AsyncMock() await msg.write(b"1" * (64 * 1024 * 2), drain=False) assert not msg.drain.called @@ -741,8 +740,12 @@ async def test_write_drain(protocol, transport, loop) -> None: assert msg.buffer_size == 0 -async def test_write_calls_callback(protocol, transport, loop) -> None: - on_chunk_sent = make_mocked_coro() +async def test_write_calls_callback( + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + on_chunk_sent = mock.AsyncMock() msg = http.StreamWriter(protocol, loop, on_chunk_sent=on_chunk_sent) chunk = b"1" await msg.write(chunk) @@ -750,8 +753,12 @@ async def test_write_calls_callback(protocol, transport, loop) -> None: assert on_chunk_sent.call_args == mock.call(chunk) -async def test_write_eof_calls_callback(protocol, transport, loop) -> None: - on_chunk_sent = make_mocked_coro() +async def test_write_eof_calls_callback( + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + on_chunk_sent = mock.AsyncMock() msg = http.StreamWriter(protocol, loop, on_chunk_sent=on_chunk_sent) chunk = b"1" await msg.write_eof(chunk=chunk) diff --git a/tests/test_multipart.py 
b/tests/test_multipart.py index b0ca92fde9e..c76d523ca86 100644 --- a/tests/test_multipart.py +++ b/tests/test_multipart.py @@ -19,7 +19,6 @@ from aiohttp.helpers import parse_mimetype from aiohttp.multipart import MultipartResponseWrapper from aiohttp.streams import StreamReader -from aiohttp.test_utils import make_mocked_coro BOUNDARY = b"--:" @@ -97,21 +96,21 @@ def test_at_eof(self) -> None: async def test_next(self) -> None: wrapper = MultipartResponseWrapper(mock.Mock(), mock.Mock()) - wrapper.stream.next = make_mocked_coro(b"") + wrapper.stream.next = mock.AsyncMock(b"") wrapper.stream.at_eof.return_value = False await wrapper.next() assert wrapper.stream.next.called async def test_release(self) -> None: wrapper = MultipartResponseWrapper(mock.Mock(), mock.Mock()) - wrapper.resp.release = make_mocked_coro(None) + wrapper.resp.release = mock.AsyncMock(None) await wrapper.release() assert wrapper.resp.release.called async def test_release_when_stream_at_eof(self) -> None: wrapper = MultipartResponseWrapper(mock.Mock(), mock.Mock()) - wrapper.resp.release = make_mocked_coro(None) - wrapper.stream.next = make_mocked_coro(b"") + wrapper.resp.release = mock.AsyncMock(None) + wrapper.stream.next = mock.AsyncMock(b"") wrapper.stream.at_eof.return_value = True await wrapper.next() assert wrapper.stream.next.called diff --git a/tests/test_proxy.py b/tests/test_proxy.py index 83457de891f..0e73210f58b 100644 --- a/tests/test_proxy.py +++ b/tests/test_proxy.py @@ -14,7 +14,6 @@ from aiohttp.client_reqrep import ClientRequest, ClientResponse, Fingerprint from aiohttp.connector import _SSL_CONTEXT_VERIFIED from aiohttp.helpers import TimerNoop -from aiohttp.test_utils import make_mocked_coro pytestmark = pytest.mark.skipif( sys.platform == "win32", reason="Proxy tests are unstable on Windows" @@ -27,7 +26,9 @@ class TestProxy(unittest.TestCase): } mocked_response = mock.Mock(**response_mock_attrs) clientrequest_mock_attrs = { - "return_value.send.return_value.start": make_mocked_coro(mocked_response), + "return_value.send.return_value.start": mock.AsyncMock( + return_value=mocked_response + ), } def setUp(self): @@ -61,8 +62,8 @@ async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -79,7 +80,9 @@ async def make_conn(): "transport.get_extra_info.return_value": False, } ) - self.loop.create_connection = make_mocked_coro((proto.transport, proto)) + self.loop.create_connection = mock.AsyncMock( + return_value=(proto.transport, proto) + ) conn = self.loop.run_until_complete( connector.connect(req, None, aiohttp.ClientTimeout()) ) @@ -119,8 +122,8 @@ async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -137,7 +140,9 @@ async def make_conn(): "transport.get_extra_info.return_value": False, } ) - self.loop.create_connection = make_mocked_coro((proto.transport, proto)) + self.loop.create_connection = mock.AsyncMock( + return_value=(proto.transport, proto) + ) conn = self.loop.run_until_complete( connector.connect(req, None, aiohttp.ClientTimeout()) ) @@ -185,8 +190,8 @@ async def make_conn(): return aiohttp.TCPConnector() connector: aiohttp.TCPConnector = 
self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - raise_exception=OSError("dont take it serious") + connector._resolve_host = mock.AsyncMock( + side_effect=OSError("dont take it serious") ) req = ClientRequest( @@ -214,8 +219,8 @@ async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "www.python.org", "host": "127.0.0.1", @@ -226,8 +231,8 @@ async def make_conn(): } ] ) - connector._loop.create_connection = make_mocked_coro( - raise_exception=OSError("dont take it serious") + connector._loop.create_connection = mock.AsyncMock( + side_effect=OSError("dont take it serious") ) req = ClientRequest( @@ -266,15 +271,15 @@ def test_proxy_server_hostname_default( loop=self.loop, session=mock.Mock(), ) - proxy_req.send = make_mocked_coro(proxy_resp) - proxy_resp.start = make_mocked_coro(mock.Mock(status=200)) + proxy_req.send = mock.AsyncMock(return_value=proxy_resp) + proxy_resp.start = mock.AsyncMock(return_value=mock.Mock(status=200)) async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -287,8 +292,8 @@ async def make_conn(): ) tr, proto = mock.Mock(), mock.Mock() - self.loop.create_connection = make_mocked_coro((tr, proto)) - self.loop.start_tls = make_mocked_coro(mock.Mock()) + self.loop.create_connection = mock.AsyncMock(return_value=(tr, proto)) + self.loop.start_tls = mock.AsyncMock(return_value=mock.Mock()) req = ClientRequest( "GET", @@ -335,15 +340,15 @@ def test_proxy_server_hostname_override( loop=self.loop, session=mock.Mock(), ) - proxy_req.send = make_mocked_coro(proxy_resp) - proxy_resp.start = make_mocked_coro(mock.Mock(status=200)) + proxy_req.send = mock.AsyncMock(return_value=proxy_resp) + proxy_resp.start = mock.AsyncMock(return_value=mock.Mock(status=200)) async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -356,8 +361,8 @@ async def make_conn(): ) tr, proto = mock.Mock(), mock.Mock() - self.loop.create_connection = make_mocked_coro((tr, proto)) - self.loop.start_tls = make_mocked_coro(mock.Mock()) + self.loop.create_connection = mock.AsyncMock(return_value=(tr, proto)) + self.loop.start_tls = mock.AsyncMock(return_value=mock.Mock()) req = ClientRequest( "GET", @@ -513,15 +518,15 @@ def test_https_connect( loop=self.loop, session=mock.Mock(), ) - proxy_req.send = make_mocked_coro(proxy_resp) - proxy_resp.start = make_mocked_coro(mock.Mock(status=200)) + proxy_req.send = mock.AsyncMock(return_value=proxy_resp) + proxy_resp.start = mock.AsyncMock(return_value=mock.Mock(status=200)) async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -534,8 +539,8 @@ async def make_conn(): ) tr, proto = mock.Mock(), mock.Mock() - self.loop.create_connection = make_mocked_coro((tr, proto)) - self.loop.start_tls = make_mocked_coro(mock.Mock()) + 
self.loop.create_connection = mock.AsyncMock(return_value=(tr, proto)) + self.loop.start_tls = mock.AsyncMock(return_value=mock.Mock()) req = ClientRequest( "GET", @@ -580,15 +585,15 @@ def test_https_connect_certificate_error( loop=self.loop, session=mock.Mock(), ) - proxy_req.send = make_mocked_coro(proxy_resp) - proxy_resp.start = make_mocked_coro(mock.Mock(status=200)) + proxy_req.send = mock.AsyncMock(return_value=proxy_resp) + proxy_resp.start = mock.AsyncMock(return_value=mock.Mock(status=200)) async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -601,9 +606,11 @@ async def make_conn(): ) # Called on connection to http://proxy.example.com - self.loop.create_connection = make_mocked_coro((mock.Mock(), mock.Mock())) + self.loop.create_connection = mock.AsyncMock( + return_value=(mock.Mock(), mock.Mock()) + ) # Called on connection to https://www.python.org - self.loop.start_tls = make_mocked_coro(raise_exception=ssl.CertificateError) + self.loop.start_tls = mock.AsyncMock(side_effect=ssl.CertificateError) req = ClientRequest( "GET", @@ -641,15 +648,15 @@ def test_https_connect_ssl_error( loop=self.loop, session=mock.Mock(), ) - proxy_req.send = make_mocked_coro(proxy_resp) - proxy_resp.start = make_mocked_coro(mock.Mock(status=200)) + proxy_req.send = mock.AsyncMock(return_value=proxy_resp) + proxy_resp.start = mock.AsyncMock(return_value=mock.Mock(status=200)) async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -662,11 +669,11 @@ async def make_conn(): ) # Called on connection to http://proxy.example.com - self.loop.create_connection = make_mocked_coro( - (mock.Mock(), mock.Mock()), + self.loop.create_connection = mock.AsyncMock( + return_value=(mock.Mock(), mock.Mock()), ) # Called on connection to https://www.python.org - self.loop.start_tls = make_mocked_coro(raise_exception=ssl.SSLError) + self.loop.start_tls = mock.AsyncMock(side_effect=ssl.SSLError) req = ClientRequest( "GET", @@ -704,15 +711,17 @@ def test_https_connect_http_proxy_error( loop=self.loop, session=mock.Mock(), ) - proxy_req.send = make_mocked_coro(proxy_resp) - proxy_resp.start = make_mocked_coro(mock.Mock(status=400, reason="bad request")) + proxy_req.send = mock.AsyncMock(return_value=proxy_resp) + proxy_resp.start = mock.AsyncMock( + return_value=mock.Mock(status=400, reason="bad request") + ) async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -726,7 +735,7 @@ async def make_conn(): tr, proto = mock.Mock(), mock.Mock() tr.get_extra_info.return_value = None - self.loop.create_connection = make_mocked_coro((tr, proto)) + self.loop.create_connection = mock.AsyncMock(return_value=(tr, proto)) req = ClientRequest( "GET", @@ -770,15 +779,15 @@ def test_https_connect_resp_start_error( loop=self.loop, session=mock.Mock(), ) - proxy_req.send = make_mocked_coro(proxy_resp) - proxy_resp.start = make_mocked_coro(raise_exception=OSError("error message")) + proxy_req.send = 
mock.AsyncMock(return_value=proxy_resp) + proxy_resp.start = mock.AsyncMock(side_effect=OSError("error message")) async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -792,7 +801,7 @@ async def make_conn(): tr, proto = mock.Mock(), mock.Mock() tr.get_extra_info.return_value = None - self.loop.create_connection = make_mocked_coro((tr, proto)) + self.loop.create_connection = mock.AsyncMock(return_value=(tr, proto)) req = ClientRequest( "GET", @@ -821,8 +830,8 @@ async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -836,7 +845,7 @@ async def make_conn(): tr, proto = mock.Mock(), mock.Mock() tr.get_extra_info.return_value = None - self.loop.create_connection = make_mocked_coro((tr, proto)) + self.loop.create_connection = mock.AsyncMock(return_value=(tr, proto)) req = ClientRequest( "GET", @@ -893,15 +902,15 @@ def test_https_connect_pass_ssl_context( loop=self.loop, session=mock.Mock(), ) - proxy_req.send = make_mocked_coro(proxy_resp) - proxy_resp.start = make_mocked_coro(mock.Mock(status=200)) + proxy_req.send = mock.AsyncMock(return_value=proxy_resp) + proxy_resp.start = mock.AsyncMock(return_value=mock.Mock(status=200)) async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -914,8 +923,8 @@ async def make_conn(): ) tr, proto = mock.Mock(), mock.Mock() - self.loop.create_connection = make_mocked_coro((tr, proto)) - self.loop.start_tls = make_mocked_coro(mock.Mock()) + self.loop.create_connection = mock.AsyncMock(return_value=(tr, proto)) + self.loop.start_tls = mock.AsyncMock(return_value=mock.Mock()) req = ClientRequest( "GET", @@ -969,15 +978,15 @@ def test_https_auth(self, start_connection: Any, ClientRequestMock: Any) -> None loop=self.loop, session=mock.Mock(), ) - proxy_req.send = make_mocked_coro(proxy_resp) - proxy_resp.start = make_mocked_coro(mock.Mock(status=200)) + proxy_req.send = mock.AsyncMock(return_value=proxy_resp) + proxy_resp.start = mock.AsyncMock(return_value=mock.Mock(status=200)) async def make_conn(): return aiohttp.TCPConnector() connector = self.loop.run_until_complete(make_conn()) - connector._resolve_host = make_mocked_coro( - [ + connector._resolve_host = mock.AsyncMock( + return_value=[ { "hostname": "hostname", "host": "127.0.0.1", @@ -990,8 +999,8 @@ async def make_conn(): ) tr, proto = mock.Mock(), mock.Mock() - self.loop.create_connection = make_mocked_coro((tr, proto)) - self.loop.start_tls = make_mocked_coro(mock.Mock()) + self.loop.create_connection = mock.AsyncMock(return_value=(tr, proto)) + self.loop.start_tls = mock.AsyncMock(return_value=mock.Mock()) self.assertIn("AUTHORIZATION", proxy_req.headers) self.assertNotIn("PROXY-AUTHORIZATION", proxy_req.headers) diff --git a/tests/test_run_app.py b/tests/test_run_app.py index 9332d4aa96c..e269b452f86 100644 --- a/tests/test_run_app.py +++ b/tests/test_run_app.py @@ -14,6 +14,7 @@ Awaitable, Callable, Coroutine, + Iterator, NoReturn, Optional, Set, @@ -25,7 +26,6 @@ import 
pytest from aiohttp import ClientConnectorError, ClientSession, ClientTimeout, WSCloseCode, web -from aiohttp.test_utils import make_mocked_coro from aiohttp.web_runner import BaseRunner # Test for features of OS' socket support @@ -65,15 +65,25 @@ def skip_if_on_windows(): @pytest.fixture -def patched_loop(loop): - server = mock.Mock() - server.wait_closed = make_mocked_coro(None) - loop.create_server = make_mocked_coro(server) - unix_server = mock.Mock() - unix_server.wait_closed = make_mocked_coro(None) - loop.create_unix_server = make_mocked_coro(unix_server) - asyncio.set_event_loop(loop) - return loop +def patched_loop( + loop: asyncio.AbstractEventLoop, +) -> Iterator[asyncio.AbstractEventLoop]: + server = mock.create_autospec(asyncio.Server, spec_set=True, instance=True) + server.wait_closed.return_value = None + unix_server = mock.create_autospec(asyncio.Server, spec_set=True, instance=True) + unix_server.wait_closed.return_value = None + with mock.patch.object( + loop, "create_server", autospec=True, spec_set=True, return_value=server + ): + with mock.patch.object( + loop, + "create_unix_server", + autospec=True, + spec_set=True, + return_value=unix_server, + ): + asyncio.set_event_loop(loop) + yield loop def stopper(loop): @@ -88,9 +98,9 @@ def f(*args): def test_run_app_http(patched_loop) -> None: app = web.Application() - startup_handler = make_mocked_coro() + startup_handler = mock.AsyncMock() app.on_startup.append(startup_handler) - cleanup_handler = make_mocked_coro() + cleanup_handler = mock.AsyncMock() app.on_cleanup.append(cleanup_handler) web.run_app(app, print=stopper(patched_loop), loop=patched_loop) @@ -693,9 +703,9 @@ def test_startup_cleanup_signals_even_on_failure(patched_loop) -> None: patched_loop.create_server = mock.Mock(side_effect=RuntimeError()) app = web.Application() - startup_handler = make_mocked_coro() + startup_handler = mock.AsyncMock() app.on_startup.append(startup_handler) - cleanup_handler = make_mocked_coro() + cleanup_handler = mock.AsyncMock() app.on_cleanup.append(cleanup_handler) with pytest.raises(RuntimeError): @@ -711,9 +721,9 @@ def test_run_app_coro(patched_loop) -> None: async def make_app(): nonlocal startup_handler, cleanup_handler app = web.Application() - startup_handler = make_mocked_coro() + startup_handler = mock.AsyncMock() app.on_startup.append(startup_handler) - cleanup_handler = make_mocked_coro() + cleanup_handler = mock.AsyncMock() app.on_cleanup.append(cleanup_handler) return app diff --git a/tests/test_tracing.py b/tests/test_tracing.py index 809d757f199..845c0ba6ab4 100644 --- a/tests/test_tracing.py +++ b/tests/test_tracing.py @@ -1,9 +1,9 @@ from types import SimpleNamespace +from unittest import mock from unittest.mock import Mock import pytest -from aiohttp.test_utils import make_mocked_coro from aiohttp.tracing import ( Trace, TraceConfig, @@ -104,7 +104,7 @@ class TestTrace: async def test_send(self, signal, params, param_obj) -> None: session = Mock() trace_request_ctx = Mock() - callback = Mock(side_effect=make_mocked_coro(Mock())) + callback = mock.AsyncMock() trace_config = TraceConfig() getattr(trace_config, "on_%s" % signal).append(callback) diff --git a/tests/test_web_app.py b/tests/test_web_app.py index 8c03a6041b2..69655b1a49a 100644 --- a/tests/test_web_app.py +++ b/tests/test_web_app.py @@ -8,7 +8,6 @@ from aiohttp import log, web from aiohttp.abc import AbstractAccessLogger, AbstractRouter from aiohttp.helpers import DEBUG -from aiohttp.test_utils import make_mocked_coro from aiohttp.typedefs import 
Handler @@ -167,8 +166,8 @@ async def test_app_make_handler_raises_deprecation_warning() -> None: async def test_app_register_on_finish() -> None: app = web.Application() - cb1 = make_mocked_coro(None) - cb2 = make_mocked_coro(None) + cb1 = mock.AsyncMock(return_value=None) + cb2 = mock.AsyncMock(return_value=None) app.on_cleanup.append(cb1) app.on_cleanup.append(cb2) app.freeze() diff --git a/tests/test_web_functional.py b/tests/test_web_functional.py index 9cc05a08426..b6caf23df53 100644 --- a/tests/test_web_functional.py +++ b/tests/test_web_functional.py @@ -23,8 +23,7 @@ ) from aiohttp.compression_utils import ZLibBackend, ZLibCompressObjProtocol from aiohttp.hdrs import CONTENT_LENGTH, CONTENT_TYPE, TRANSFER_ENCODING -from aiohttp.pytest_plugin import AiohttpClient -from aiohttp.test_utils import make_mocked_coro +from aiohttp.pytest_plugin import AiohttpClient, AiohttpServer from aiohttp.typedefs import Handler from aiohttp.web_protocol import RequestHandler @@ -2025,15 +2024,14 @@ async def handler(request): assert resp.status == 200 -async def test_request_tracing(aiohttp_server) -> None: - - on_request_start = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_request_end = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_dns_resolvehost_start = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_dns_resolvehost_end = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_request_redirect = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_connection_create_start = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) - on_connection_create_end = mock.Mock(side_effect=make_mocked_coro(mock.Mock())) +async def test_request_tracing(aiohttp_server: AiohttpServer) -> None: + on_request_start = mock.AsyncMock() + on_request_end = mock.AsyncMock() + on_dns_resolvehost_start = mock.AsyncMock() + on_dns_resolvehost_end = mock.AsyncMock() + on_request_redirect = mock.AsyncMock() + on_connection_create_start = mock.AsyncMock() + on_connection_create_end = mock.AsyncMock() async def redirector(request): raise web.HTTPFound(location=URL("/redirected")) diff --git a/tests/test_web_request_handler.py b/tests/test_web_request_handler.py index 4837cab030e..ee30e485f1b 100644 --- a/tests/test_web_request_handler.py +++ b/tests/test_web_request_handler.py @@ -1,7 +1,6 @@ from unittest import mock from aiohttp import web -from aiohttp.test_utils import make_mocked_coro async def serve(request: web.BaseRequest) -> web.Response: @@ -37,7 +36,7 @@ async def test_shutdown_no_timeout() -> None: handler = mock.Mock(spec_set=web.RequestHandler) handler._task_handler = None - handler.shutdown = make_mocked_coro(mock.Mock()) + handler.shutdown = mock.AsyncMock(return_value=mock.Mock()) transport = mock.Mock() manager.connection_made(handler, transport) @@ -52,7 +51,7 @@ async def test_shutdown_timeout() -> None: manager = web.Server(serve) handler = mock.Mock() - handler.shutdown = make_mocked_coro(mock.Mock()) + handler.shutdown = mock.AsyncMock(return_value=mock.Mock()) transport = mock.Mock() manager.connection_made(handler, transport) diff --git a/tests/test_web_response.py b/tests/test_web_response.py index 68ffe211f20..7b048970967 100644 --- a/tests/test_web_response.py +++ b/tests/test_web_response.py @@ -18,7 +18,7 @@ from aiohttp.http_writer import StreamWriter, _serialize_headers from aiohttp.multipart import BodyPartReader, MultipartWriter from aiohttp.payload import BytesPayload, StringPayload -from aiohttp.test_utils import make_mocked_coro, make_mocked_request 
+from aiohttp.test_utils import make_mocked_request from aiohttp.web import ContentCoding, Response, StreamResponse, json_response @@ -858,8 +858,8 @@ async def test_cannot_write_eof_twice() -> None: resp = StreamResponse() writer = mock.Mock() resp_impl = await resp.prepare(make_request("GET", "/")) - resp_impl.write = make_mocked_coro(None) - resp_impl.write_eof = make_mocked_coro(None) + resp_impl.write = mock.AsyncMock(None) + resp_impl.write_eof = mock.AsyncMock(None) await resp.write(b"data") assert resp_impl.write.called @@ -1065,7 +1065,7 @@ async def test_prepare_twice() -> None: async def test_prepare_calls_signal() -> None: app = mock.Mock() - sig = make_mocked_coro() + sig = mock.AsyncMock() on_response_prepare = aiosignal.Signal(app) on_response_prepare.append(sig) req = make_request("GET", "/", app=app, on_response_prepare=on_response_prepare) @@ -1336,8 +1336,8 @@ async def test_send_set_cookie_header(buf, writer) -> None: async def test_consecutive_write_eof() -> None: writer = mock.Mock() - writer.write_eof = make_mocked_coro() - writer.write_headers = make_mocked_coro() + writer.write_eof = mock.AsyncMock() + writer.write_headers = mock.AsyncMock() req = make_request("GET", "/", writer=writer) data = b"data" resp = Response(body=data) diff --git a/tests/test_web_sendfile.py b/tests/test_web_sendfile.py index 58a46ec602c..1776a3aabd3 100644 --- a/tests/test_web_sendfile.py +++ b/tests/test_web_sendfile.py @@ -3,7 +3,7 @@ from unittest import mock from aiohttp import hdrs -from aiohttp.test_utils import make_mocked_coro, make_mocked_request +from aiohttp.test_utils import make_mocked_request from aiohttp.web_fileresponse import FileResponse MOCK_MODE = S_IFREG | S_IRUSR | S_IWUSR @@ -28,7 +28,7 @@ def test_using_gzip_if_header_present_and_file_available(loop) -> None: file_sender = FileResponse(filepath) file_sender._path = filepath - file_sender._sendfile = make_mocked_coro(None) # type: ignore[assignment] + file_sender._sendfile = mock.AsyncMock(return_value=None) # type: ignore[method-assign] loop.run_until_complete(file_sender.prepare(request)) @@ -53,7 +53,7 @@ def test_gzip_if_header_not_present_and_file_available(loop) -> None: file_sender = FileResponse(filepath) file_sender._path = filepath - file_sender._sendfile = make_mocked_coro(None) # type: ignore[assignment] + file_sender._sendfile = mock.AsyncMock(return_value=None) # type: ignore[method-assign] loop.run_until_complete(file_sender.prepare(request)) @@ -76,7 +76,7 @@ def test_gzip_if_header_not_present_and_file_not_available(loop) -> None: file_sender = FileResponse(filepath) file_sender._path = filepath - file_sender._sendfile = make_mocked_coro(None) # type: ignore[assignment] + file_sender._sendfile = mock.AsyncMock(return_value=None) # type: ignore[method-assign] loop.run_until_complete(file_sender.prepare(request)) @@ -101,7 +101,7 @@ def test_gzip_if_header_present_and_file_not_available(loop) -> None: file_sender = FileResponse(filepath) file_sender._path = filepath - file_sender._sendfile = make_mocked_coro(None) # type: ignore[assignment] + file_sender._sendfile = mock.AsyncMock(return_value=None) # type: ignore[method-assign] loop.run_until_complete(file_sender.prepare(request)) @@ -120,7 +120,7 @@ def test_status_controlled_by_user(loop) -> None: file_sender = FileResponse(filepath, status=203) file_sender._path = filepath - file_sender._sendfile = make_mocked_coro(None) # type: ignore[assignment] + file_sender._sendfile = mock.AsyncMock(return_value=None) # type: ignore[method-assign] 
loop.run_until_complete(file_sender.prepare(request)) diff --git a/tests/test_web_websocket.py b/tests/test_web_websocket.py index f9a92d0587f..390d6224d3d 100644 --- a/tests/test_web_websocket.py +++ b/tests/test_web_websocket.py @@ -10,7 +10,7 @@ from aiohttp import WSMessage, WSMessageTypeError, WSMsgType, web from aiohttp.http import WS_CLOSED_MESSAGE from aiohttp.streams import EofStream -from aiohttp.test_utils import make_mocked_coro, make_mocked_request +from aiohttp.test_utils import make_mocked_request from aiohttp.web import HTTPBadRequest, WebSocketResponse from aiohttp.web_ws import WebSocketReady @@ -420,13 +420,11 @@ async def test_receive_eofstream_in_reader(make_request, loop) -> None: ws._reader = mock.Mock() exc = EofStream() - res = loop.create_future() - res.set_exception(exc) - ws._reader.read = make_mocked_coro(res) - ws._payload_writer.drain = mock.Mock() - ws._payload_writer.drain.return_value = loop.create_future() - ws._payload_writer.drain.return_value.set_result(True) - + ws._reader.read = mock.AsyncMock(side_effect=exc) + assert ws._payload_writer is not None + f = loop.create_future() + f.set_result(True) + ws._payload_writer.drain.return_value = f # type: ignore[attr-defined] msg = await ws.receive() assert msg.type == WSMsgType.CLOSED assert ws.closed @@ -439,12 +437,7 @@ async def test_receive_exception_in_reader(make_request: Any, loop: Any) -> None ws._reader = mock.Mock() exc = Exception() - res = loop.create_future() - res.set_exception(exc) - ws._reader.read = make_mocked_coro(res) - ws._payload_writer.drain = mock.Mock() - ws._payload_writer.drain.return_value = loop.create_future() - ws._payload_writer.drain.return_value.set_result(True) + ws._reader.read = mock.AsyncMock(side_effect=exc) msg = await ws.receive() assert msg.type == WSMsgType.ERROR @@ -526,9 +519,7 @@ async def test_receive_timeouterror(make_request: Any, loop: Any) -> None: assert len(ws._req.transport.close.mock_calls) == 0 ws._reader = mock.Mock() - res = loop.create_future() - res.set_exception(asyncio.TimeoutError()) - ws._reader.read = make_mocked_coro(res) + ws._reader.read = mock.AsyncMock(side_effect=asyncio.TimeoutError()) with pytest.raises(asyncio.TimeoutError): await ws.receive() diff --git a/tests/test_websocket_writer.py b/tests/test_websocket_writer.py index b39e411f90d..f5125dde361 100644 --- a/tests/test_websocket_writer.py +++ b/tests/test_websocket_writer.py @@ -8,13 +8,12 @@ from aiohttp import WSMsgType from aiohttp._websocket.reader import WebSocketDataQueue from aiohttp.http import WebSocketReader, WebSocketWriter -from aiohttp.test_utils import make_mocked_coro @pytest.fixture def protocol(): ret = mock.Mock() - ret._drain_helper = make_mocked_coro() + ret._drain_helper = mock.AsyncMock() return ret From 383323d74f7b73245de86b5e2bcc4723fc50ba91 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Wed, 21 May 2025 10:59:03 +0000 Subject: [PATCH 53/90] Bump setuptools from 80.7.1 to 80.8.0 (#10920) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [setuptools](https://github.com/pypa/setuptools) from 80.7.1 to 80.8.0.
Changelog

Sourced from setuptools's changelog.

v80.8.0

Features

  • Replaced more references to pkg_resources with importlib equivalents in wheel module. (#3085)
  • Restore explicit LICENSE file. (#5001)
  • Removed no longer used build dependency on coherent.licensed. (#5003)
Commits
  • b3786cd Bump version: 80.7.1 → 80.8.0
  • 9179b75 Merge pull request #5003 from abravalheri/issue-5002
  • 6f937df Merge pull request #5004 from pypa/feature/remove-more-pkg_resources
  • 1bfd8db Add news fragment.
  • 0e19b82 Replace pkg_resources with importlib.metadata and packaging.requirements.
  • 95145dd Extract a method for converting requires.
  • 57d6fcd Add news fragment
  • 62e4793 Comment out unused build dependency
  • ae480ff Restore explicit LICENSE file
  • See full diff in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=setuptools&package-manager=pip&previous-version=80.7.1&new-version=80.8.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/doc-spelling.txt | 2 +- requirements/doc.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index 3c3cf6cfacf..bcb2e81a5e0 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -300,7 +300,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1.1 # via pip-tools -setuptools==80.7.1 +setuptools==80.8.0 # via # incremental # pip-tools diff --git a/requirements/dev.txt b/requirements/dev.txt index 82750d218f3..0a992d8a1e1 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -291,7 +291,7 @@ zlib-ng==0.5.1 # The following packages are considered to be unsafe in a requirements file: pip==25.1.1 # via pip-tools -setuptools==80.7.1 +setuptools==80.8.0 # via # incremental # pip-tools diff --git a/requirements/doc-spelling.txt b/requirements/doc-spelling.txt index e00e4b52226..142aa6d7edb 100644 --- a/requirements/doc-spelling.txt +++ b/requirements/doc-spelling.txt @@ -76,5 +76,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.7.1 +setuptools==80.8.0 # via incremental diff --git a/requirements/doc.txt b/requirements/doc.txt index 0ee0b84218e..08f24f4175a 100644 --- a/requirements/doc.txt +++ b/requirements/doc.txt @@ -69,5 +69,5 @@ urllib3==2.4.0 # via requests # The following packages are considered to be unsafe in a requirements file: -setuptools==80.7.1 +setuptools==80.8.0 # via incremental From 2e51f406bf575c91557eb631ef890bddcc3621f3 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Wed, 21 May 2025 23:27:55 +0000 Subject: [PATCH 54/90] [PR #10923/19643c9c backport][3.12] Fix weakref garbage collection race condition in DNS resolver manager (#10924) Co-authored-by: J. Nick Koston --- CHANGES/10923.feature.rst | 1 + aiohttp/resolver.py | 5 ++++- tests/test_resolver.py | 46 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 51 insertions(+), 1 deletion(-) create mode 120000 CHANGES/10923.feature.rst diff --git a/CHANGES/10923.feature.rst b/CHANGES/10923.feature.rst new file mode 120000 index 00000000000..879a4227358 --- /dev/null +++ b/CHANGES/10923.feature.rst @@ -0,0 +1 @@ +10847.feature.rst \ No newline at end of file diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py index 8e73beb6e1e..05accd19564 100644 --- a/aiohttp/resolver.py +++ b/aiohttp/resolver.py @@ -257,11 +257,14 @@ def release_resolver( loop: The event loop the resolver was using. 
""" # Remove client from its loop's tracking + if loop not in self._loop_data: + return resolver, client_set = self._loop_data[loop] client_set.discard(client) # If no more clients for this loop, cancel and remove its resolver if not client_set: - resolver.cancel() + if resolver is not None: + resolver.cancel() del self._loop_data[loop] diff --git a/tests/test_resolver.py b/tests/test_resolver.py index 9a6a782c06a..f6963121eb7 100644 --- a/tests/test_resolver.py +++ b/tests/test_resolver.py @@ -628,3 +628,49 @@ async def test_dns_resolver_manager_multiple_event_loops( # Verify resolver cleanup resolver1.cancel.assert_called_once() resolver2.cancel.assert_called_once() + + +@pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +async def test_dns_resolver_manager_weakref_garbage_collection() -> None: + """Test that release_resolver handles None resolver due to weakref garbage collection.""" + manager = _DNSResolverManager() + + # Create a mock resolver that will be None when accessed + mock_resolver = Mock() + mock_resolver.cancel = Mock() + + with patch("aiodns.DNSResolver", return_value=mock_resolver): + # Create an AsyncResolver to get a resolver from the manager + resolver = AsyncResolver() + loop = asyncio.get_running_loop() + + # Manually corrupt the data to simulate garbage collection + # by setting the resolver to None + manager._loop_data[loop] = (None, manager._loop_data[loop][1]) # type: ignore[assignment] + + # This should not raise an AttributeError: 'NoneType' object has no attribute 'cancel' + await resolver.close() + + # Verify no exception was raised and the loop data was cleaned up properly + # Since we set resolver to None and there was one client, the entry should be removed + assert loop not in manager._loop_data + + +@pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +async def test_dns_resolver_manager_missing_loop_data() -> None: + """Test that release_resolver handles missing loop data gracefully.""" + manager = _DNSResolverManager() + + with patch("aiodns.DNSResolver"): + # Create an AsyncResolver + resolver = AsyncResolver() + loop = asyncio.get_running_loop() + + # Manually remove the loop data to simulate race condition + manager._loop_data.clear() + + # This should not raise a KeyError + await resolver.close() + + # Verify no exception was raised + assert loop not in manager._loop_data From d2f682f0aebafaa5923c7024033f2f8037d19619 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 11:22:40 +0000 Subject: [PATCH 55/90] Bump typing-inspection from 0.4.0 to 0.4.1 (#10927) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [typing-inspection](https://github.com/pydantic/typing-inspection) from 0.4.0 to 0.4.1.
Changelog

Sourced from typing-inspection's changelog.

v0.4.1 (2025-05-21)

  • Use list as a type hint for InspectedAnnotation.metadata by @Viicos in #43
Commits
  • 3bc3f96 Prepare release 0.4.1 (#44)
  • 17b939c Bump development dependencies (#46)
  • dcdd318 Add proper reference to dataclasses.InitVar in AnnotationSource.DATACLASS ...
  • 5f86b14 Use list as a type hint for InspectedAnnotation.metadata (#43)
  • b737855 Add SPDX license identifier (#42)
  • 61c25e5 Improve test coverage (#41)
  • a56b8c3 Fix implementation of is_union_origin() (#40)
  • e20451f Add typing_objects.is_forwardref() (#38)
  • eb7654b Fix some typos (#36)
  • 5cb7257 Fix compatibility with latest Python 3.14 release (#37)
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=typing-inspection&package-manager=pip&previous-version=0.4.0&new-version=0.4.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/lint.txt | 2 +- requirements/test.txt | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index bcb2e81a5e0..f63a437a276 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -272,7 +272,7 @@ typing-extensions==4.13.2 # python-on-whales # rich # typing-inspection -typing-inspection==0.4.0 +typing-inspection==0.4.1 # via pydantic uritemplate==4.1.1 # via gidgethub diff --git a/requirements/dev.txt b/requirements/dev.txt index 0a992d8a1e1..704649df008 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -263,7 +263,7 @@ typing-extensions==4.13.2 # python-on-whales # rich # typing-inspection -typing-inspection==0.4.0 +typing-inspection==0.4.1 # via pydantic uritemplate==4.1.1 # via gidgethub diff --git a/requirements/lint.txt b/requirements/lint.txt index 28aa349a511..99fcd3969e3 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -106,7 +106,7 @@ typing-extensions==4.13.2 # python-on-whales # rich # typing-inspection -typing-inspection==0.4.0 +typing-inspection==0.4.1 # via pydantic uvloop==0.21.0 ; platform_system != "Windows" # via -r requirements/lint.in diff --git a/requirements/test.txt b/requirements/test.txt index 683001e8967..da25768851e 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -135,7 +135,7 @@ typing-extensions==4.13.2 # python-on-whales # rich # typing-inspection -typing-inspection==0.4.0 +typing-inspection==0.4.1 # via pydantic uvloop==0.21.0 ; platform_system != "Windows" and implementation_name == "cpython" # via -r requirements/base.in From ee207f530a1c473719f4c1e68e79a1f85baffa98 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 11:24:42 +0000 Subject: [PATCH 56/90] Bump coverage from 7.8.0 to 7.8.1 (#10928) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.8.0 to 7.8.1.
Changelog

Sourced from coverage's changelog.

Version 7.8.1 — 2025-05-21

  • A number of EncodingWarnings were fixed that could appear if you've enabled PYTHONWARNDEFAULTENCODING, fixing issue 1966_. Thanks, Henry Schreiner (pull 1967_).

  • Fixed a race condition when using sys.monitoring with free-threading Python, closing issue 1970_.

.. _issue 1966: nedbat/coveragepy#1966
.. _pull 1967: nedbat/coveragepy#1967
.. _issue 1970: nedbat/coveragepy#1970
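
For readers unfamiliar with the first fix above: PYTHONWARNDEFAULTENCODING (PEP 597) makes CPython 3.10+ emit an EncodingWarning whenever text I/O is opened without an explicit encoding, which is how the warnings addressed in 7.8.1 surface. A minimal sketch of that behavior; the file names here are illustrative, not taken from coverage's code:

```python
# demo.py -- run with: PYTHONWARNDEFAULTENCODING=1 python demo.py
# CPython 3.10+ (PEP 597) then warns whenever text I/O omits an
# explicit encoding argument.
with open("notes.txt", "w") as f:  # EncodingWarning raised here
    f.write("hello\n")

with open("notes.txt", encoding="utf-8") as f:  # explicit encoding, no warning
    print(f.read())
```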

Commits
  • ed98b87 docs: sample HTML for 7.8.1
  • b98bc9b docs: prep for 7.8.1
  • ecbd4da build: make a step more explicit
  • 2774410 test: simplify skipper, and make it suppressable
  • 8b9cecc fix: close a sys.monitoring race condition with free-threading. #1970
  • 66e4f8d test: try to unflake a test
  • 9975d0c build: no need for a separate doc_upgrade target
  • 6dec28b build: delete unused code in igor.py
  • 6376e35 build: clarify a .ignore rule
  • 9bdf366 chore: make upgrade doc_upgrade
  • Additional commits viewable in compare view
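
For context on the sys.monitoring fix: coverage can use CPython 3.12's sys.monitoring API (PEP 669) in place of sys.settrace, and free-threaded builds may invoke its callbacks from several threads at once, which is roughly the setting of the race closed in 7.8.1. Below is a minimal sketch of that API, not coverage's actual implementation; the tool name and traced function are illustrative:

```python
# Minimal sketch of the sys.monitoring API (CPython 3.12+).
import sys
from types import CodeType

TOOL_ID = sys.monitoring.COVERAGE_ID  # tool-id slot reserved for coverage tools
sys.monitoring.use_tool_id(TOOL_ID, "demo-tracer")

hits: list[tuple[str, int]] = []

def on_line(code: CodeType, line_number: int) -> None:
    # Record every executed line; a coverage tool aggregates data like this.
    hits.append((code.co_filename, line_number))

sys.monitoring.register_callback(TOOL_ID, sys.monitoring.events.LINE, on_line)
sys.monitoring.set_events(TOOL_ID, sys.monitoring.events.LINE)

def work() -> int:
    return sum(range(3))

work()
sys.monitoring.set_events(TOOL_ID, sys.monitoring.events.NO_EVENTS)
sys.monitoring.free_tool_id(TOOL_ID)
print(hits)
```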

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=coverage&package-manager=pip&previous-version=7.8.0&new-version=7.8.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 2 +- requirements/dev.txt | 2 +- requirements/test.txt | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index f63a437a276..e79f7008a7d 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -54,7 +54,7 @@ click==8.1.8 # slotscheck # towncrier # wait-for-it -coverage==7.8.0 +coverage==7.8.1 # via # -r requirements/test.in # pytest-cov diff --git a/requirements/dev.txt b/requirements/dev.txt index 704649df008..9b2c3ebeab3 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -54,7 +54,7 @@ click==8.1.8 # slotscheck # towncrier # wait-for-it -coverage==7.8.0 +coverage==7.8.1 # via # -r requirements/test.in # pytest-cov diff --git a/requirements/test.txt b/requirements/test.txt index da25768851e..63cb482c5e0 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -27,7 +27,7 @@ cffi==1.17.1 # pytest-codspeed click==8.1.8 # via wait-for-it -coverage==7.8.0 +coverage==7.8.1 # via # -r requirements/test.in # pytest-cov From 40563751adf02b811d5acf95696c096a4dbd9ed4 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 14:47:14 +0000 Subject: [PATCH 57/90] [PR #10760/4152a083 backport][3.12] Support using system llhttp library (#10929) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Michał Górny Co-authored-by: 🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко) --- CHANGES/10759.packaging.rst | 5 +++++ aiohttp/_cparser.pxd | 2 +- docs/glossary.rst | 11 +++++++++++ docs/spelling_wordlist.txt | 1 + pyproject.toml | 1 + requirements/test.in | 1 + setup.py | 37 +++++++++++++++++++++++++++++++------ 7 files changed, 51 insertions(+), 7 deletions(-) create mode 100644 CHANGES/10759.packaging.rst diff --git a/CHANGES/10759.packaging.rst b/CHANGES/10759.packaging.rst new file mode 100644 index 00000000000..6f41e873229 --- /dev/null +++ b/CHANGES/10759.packaging.rst @@ -0,0 +1,5 @@ +Added support for building against system ``llhttp`` library -- by :user:`mgorny`. + +This change adds support for :envvar:`AIOHTTP_USE_SYSTEM_DEPS` environment variable that +can be used to build aiohttp against the system install of the ``llhttp`` library rather +than the vendored one. diff --git a/aiohttp/_cparser.pxd b/aiohttp/_cparser.pxd index c2cd5a92fda..1b3be6d4efb 100644 --- a/aiohttp/_cparser.pxd +++ b/aiohttp/_cparser.pxd @@ -1,7 +1,7 @@ from libc.stdint cimport int32_t, uint8_t, uint16_t, uint64_t -cdef extern from "../vendor/llhttp/build/llhttp.h": +cdef extern from "llhttp.h": struct llhttp__internal_s: int32_t _index diff --git a/docs/glossary.rst b/docs/glossary.rst index 392ef740cd1..996ea982d58 100644 --- a/docs/glossary.rst +++ b/docs/glossary.rst @@ -151,6 +151,17 @@ Environment Variables ===================== +.. envvar:: AIOHTTP_NO_EXTENSIONS + + If set to a non-empty value while building from source, aiohttp will be built without speedups + written as C extensions. This option is primarily useful for debugging. + +.. envvar:: AIOHTTP_USE_SYSTEM_DEPS + + If set to a non-empty value while building from source, aiohttp will be built against + the system installation of llhttp rather than the vendored library. This option is primarily + meant to be used by downstream redistributors. + .. 
envvar:: NETRC If set, HTTP Basic Auth will be read from the file pointed to by this environment variable, diff --git a/docs/spelling_wordlist.txt b/docs/spelling_wordlist.txt index 421ef842678..b7153c68be8 100644 --- a/docs/spelling_wordlist.txt +++ b/docs/spelling_wordlist.txt @@ -175,6 +175,7 @@ kwargs latin lifecycle linux +llhttp localhost Locator login diff --git a/pyproject.toml b/pyproject.toml index 69f8a6b58b6..3ef37b5978b 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,5 +1,6 @@ [build-system] requires = [ + "pkgconfig", "setuptools >= 46.4.0", ] build-backend = "setuptools.build_meta" diff --git a/requirements/test.in b/requirements/test.in index 91b5e115952..1563689deae 100644 --- a/requirements/test.in +++ b/requirements/test.in @@ -5,6 +5,7 @@ coverage freezegun isal mypy; implementation_name == "cpython" +pkgconfig proxy.py >= 2.4.4rc5 pytest pytest-cov diff --git a/setup.py b/setup.py index 2f024e87ef2..fafb7dc7941 100644 --- a/setup.py +++ b/setup.py @@ -8,6 +8,9 @@ raise RuntimeError("aiohttp 3.x requires Python 3.9+") +USE_SYSTEM_DEPS = bool( + os.environ.get("AIOHTTP_USE_SYSTEM_DEPS", os.environ.get("USE_SYSTEM_DEPS")) +) NO_EXTENSIONS: bool = bool(os.environ.get("AIOHTTP_NO_EXTENSIONS")) HERE = pathlib.Path(__file__).parent IS_GIT_REPO = (HERE / ".git").exists() @@ -17,7 +20,11 @@ NO_EXTENSIONS = True -if IS_GIT_REPO and not (HERE / "vendor/llhttp/README.md").exists(): +if ( + not USE_SYSTEM_DEPS + and IS_GIT_REPO + and not (HERE / "vendor/llhttp/README.md").exists() +): print("Install submodules when building from git clone", file=sys.stderr) print("Hint:", file=sys.stderr) print(" git submodule update --init", file=sys.stderr) @@ -26,6 +33,27 @@ # NOTE: makefile cythonizes all Cython modules +if USE_SYSTEM_DEPS: + import shlex + + import pkgconfig + + llhttp_sources = [] + llhttp_kwargs = { + "extra_compile_args": shlex.split(pkgconfig.cflags("libllhttp")), + "extra_link_args": shlex.split(pkgconfig.libs("libllhttp")), + } +else: + llhttp_sources = [ + "vendor/llhttp/build/c/llhttp.c", + "vendor/llhttp/src/native/api.c", + "vendor/llhttp/src/native/http.c", + ] + llhttp_kwargs = { + "define_macros": [("LLHTTP_STRICT_MODE", 0)], + "include_dirs": ["vendor/llhttp/build"], + } + extensions = [ Extension("aiohttp._websocket.mask", ["aiohttp/_websocket/mask.c"]), Extension( @@ -33,12 +61,9 @@ [ "aiohttp/_http_parser.c", "aiohttp/_find_header.c", - "vendor/llhttp/build/c/llhttp.c", - "vendor/llhttp/src/native/api.c", - "vendor/llhttp/src/native/http.c", + *llhttp_sources, ], - define_macros=[("LLHTTP_STRICT_MODE", 0)], - include_dirs=["vendor/llhttp/build"], + **llhttp_kwargs, ), Extension("aiohttp._http_writer", ["aiohttp/_http_writer.c"]), Extension("aiohttp._websocket.reader_c", ["aiohttp/_websocket/reader_c.c"]), From 10f0cf8d74d59aa293e2d998256e8d16fad7bd7e Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 14:51:24 +0000 Subject: [PATCH 58/90] [PR #10922/5fac5f19 backport][3.12] Add Winloop to test suite if User is using Windows (#10930) Co-authored-by: Vizonex <114684698+Vizonex@users.noreply.github.com> Co-authored-by: J. Nick Koston Co-authored-by: J. 
Nick Koston Co-authored-by: Sam Bull --- CHANGES/10922.contrib.rst | 1 + CONTRIBUTORS.txt | 1 + docs/spelling_wordlist.txt | 1 + requirements/base.in | 1 + requirements/base.txt | 1 + tests/conftest.py | 5 ++++- 6 files changed, 9 insertions(+), 1 deletion(-) create mode 100644 CHANGES/10922.contrib.rst diff --git a/CHANGES/10922.contrib.rst b/CHANGES/10922.contrib.rst new file mode 100644 index 00000000000..e5e1cfd8af6 --- /dev/null +++ b/CHANGES/10922.contrib.rst @@ -0,0 +1 @@ +Added Winloop to test suite to support in the future -- by :user:`Vizonex`. diff --git a/CONTRIBUTORS.txt b/CONTRIBUTORS.txt index 5ff1eea3da7..59edfd7ac3f 100644 --- a/CONTRIBUTORS.txt +++ b/CONTRIBUTORS.txt @@ -359,6 +359,7 @@ Vincent Maillol Vitalik Verhovodov Vitaly Haritonsky Vitaly Magerya +Vizonex Vladimir Kamarzin Vladimir Kozlovski Vladimir Rutsky diff --git a/docs/spelling_wordlist.txt b/docs/spelling_wordlist.txt index b7153c68be8..d0328529cfd 100644 --- a/docs/spelling_wordlist.txt +++ b/docs/spelling_wordlist.txt @@ -368,6 +368,7 @@ websocket’s websockets Websockets wildcard +Winloop Workflow ws wsgi diff --git a/requirements/base.in b/requirements/base.in index 70493b6c83a..816a4e84026 100644 --- a/requirements/base.in +++ b/requirements/base.in @@ -2,3 +2,4 @@ gunicorn uvloop; platform_system != "Windows" and implementation_name == "cpython" # MagicStack/uvloop#14 +winloop; platform_system == "Windows" and implementation_name == "cpython" diff --git a/requirements/base.txt b/requirements/base.txt index 26c18e2f53e..2cd73f52418 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -43,6 +43,7 @@ pycparser==2.22 typing-extensions==4.13.2 # via multidict uvloop==0.21.0 ; platform_system != "Windows" and implementation_name == "cpython" +winloop==0.1.8; platform_system == "Windows" and implementation_name == "cpython" # via -r requirements/base.in yarl==1.20.0 # via -r requirements/runtime-deps.in diff --git a/tests/conftest.py b/tests/conftest.py index 27cd5cbd6db..d9831aea523 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -34,7 +34,10 @@ try: - import uvloop + if sys.platform == "win32": + import winloop as uvloop + else: + import uvloop except ImportError: uvloop = None # type: ignore[assignment] From 0c161025b5f0f15f13e66cc1cba906e2428cc276 Mon Sep 17 00:00:00 2001 From: "J. 
Nick Koston" Date: Thu, 22 May 2025 10:22:46 -0500 Subject: [PATCH 59/90] [PR #10915/545783b backport][3.12] Fix connection reuse for file-like data payloads (#10931) --- CHANGES/10325.bugfix.rst | 1 + CHANGES/10915.bugfix.rst | 3 + aiohttp/client_reqrep.py | 77 +++++- aiohttp/payload.py | 428 ++++++++++++++++++++++++++++---- tests/conftest.py | 16 ++ tests/test_client_functional.py | 170 ++++++++++++- tests/test_client_request.py | 98 +++++++- tests/test_payload.py | 351 ++++++++++++++++++++++++-- 8 files changed, 1060 insertions(+), 84 deletions(-) create mode 120000 CHANGES/10325.bugfix.rst create mode 100644 CHANGES/10915.bugfix.rst diff --git a/CHANGES/10325.bugfix.rst b/CHANGES/10325.bugfix.rst new file mode 120000 index 00000000000..aa085cc590d --- /dev/null +++ b/CHANGES/10325.bugfix.rst @@ -0,0 +1 @@ +10915.bugfix.rst \ No newline at end of file diff --git a/CHANGES/10915.bugfix.rst b/CHANGES/10915.bugfix.rst new file mode 100644 index 00000000000..f564603306b --- /dev/null +++ b/CHANGES/10915.bugfix.rst @@ -0,0 +1,3 @@ +Fixed connection reuse for file-like data payloads by ensuring buffer +truncation respects content-length boundaries and preventing premature +connection closure race -- by :user:`bdraco`. diff --git a/aiohttp/client_reqrep.py b/aiohttp/client_reqrep.py index ef0dd42b969..a50917150c5 100644 --- a/aiohttp/client_reqrep.py +++ b/aiohttp/client_reqrep.py @@ -370,6 +370,23 @@ def __init__( def __reset_writer(self, _: object = None) -> None: self.__writer = None + def _get_content_length(self) -> Optional[int]: + """Extract and validate Content-Length header value. + + Returns parsed Content-Length value or None if not set. + Raises ValueError if header exists but cannot be parsed as an integer. + """ + if hdrs.CONTENT_LENGTH not in self.headers: + return None + + content_length_hdr = self.headers[hdrs.CONTENT_LENGTH] + try: + return int(content_length_hdr) + except ValueError: + raise ValueError( + f"Invalid Content-Length header: {content_length_hdr}" + ) from None + @property def skip_auto_headers(self) -> CIMultiDict[None]: return self._skip_auto_headers or CIMultiDict() @@ -659,9 +676,37 @@ def update_proxy( self.proxy_headers = proxy_headers async def write_bytes( - self, writer: AbstractStreamWriter, conn: "Connection" + self, + writer: AbstractStreamWriter, + conn: "Connection", + content_length: Optional[int], ) -> None: - """Support coroutines that yields bytes objects.""" + """ + Write the request body to the connection stream. + + This method handles writing different types of request bodies: + 1. Payload objects (using their specialized write_with_length method) + 2. Bytes/bytearray objects + 3. 
Iterable body content + + Args: + writer: The stream writer to write the body to + conn: The connection being used for this request + content_length: Optional maximum number of bytes to write from the body + (None means write the entire body) + + The method properly handles: + - Waiting for 100-Continue responses if required + - Content length constraints for chunked encoding + - Error handling for network issues, cancellation, and other exceptions + - Signaling EOF and timeout management + + Raises: + ClientOSError: When there's an OS-level error writing the body + ClientConnectionError: When there's a general connection error + asyncio.CancelledError: When the operation is cancelled + + """ # 100 response if self._continue is not None: await writer.drain() @@ -671,16 +716,30 @@ async def write_bytes( assert protocol is not None try: if isinstance(self.body, payload.Payload): - await self.body.write(writer) + # Specialized handling for Payload objects that know how to write themselves + await self.body.write_with_length(writer, content_length) else: + # Handle bytes/bytearray by converting to an iterable for consistent handling if isinstance(self.body, (bytes, bytearray)): self.body = (self.body,) - for chunk in self.body: - await writer.write(chunk) + if content_length is None: + # Write the entire body without length constraint + for chunk in self.body: + await writer.write(chunk) + else: + # Write with length constraint, respecting content_length limit + # If the body is larger than content_length, we truncate it + remaining_bytes = content_length + for chunk in self.body: + await writer.write(chunk[:remaining_bytes]) + remaining_bytes -= len(chunk) + if remaining_bytes <= 0: + break except OSError as underlying_exc: reraised_exc = underlying_exc + # Distinguish between timeout and other OS errors for better error reporting exc_is_not_timeout = underlying_exc.errno is not None or not isinstance( underlying_exc, asyncio.TimeoutError ) @@ -692,18 +751,20 @@ async def write_bytes( set_exception(protocol, reraised_exc, underlying_exc) except asyncio.CancelledError: - # Body hasn't been fully sent, so connection can't be reused. 
+ # Body hasn't been fully sent, so connection can't be reused conn.close() raise except Exception as underlying_exc: set_exception( protocol, ClientConnectionError( - f"Failed to send bytes into the underlying connection {conn !s}", + "Failed to send bytes into the underlying connection " + f"{conn !s}: {underlying_exc!r}", ), underlying_exc, ) else: + # Successfully wrote the body, signal EOF and start response timeout await writer.write_eof() protocol.start_timeout() @@ -768,7 +829,7 @@ async def send(self, conn: "Connection") -> "ClientResponse": await writer.write_headers(status_line, self.headers) task: Optional["asyncio.Task[None]"] if self.body or self._continue is not None or protocol.writing_paused: - coro = self.write_bytes(writer, conn) + coro = self.write_bytes(writer, conn, self._get_content_length()) if sys.version_info >= (3, 12): # Optimization for Python 3.12, try to write # bytes immediately to avoid having to schedule diff --git a/aiohttp/payload.py b/aiohttp/payload.py index 3f6d3672db2..823940902f5 100644 --- a/aiohttp/payload.py +++ b/aiohttp/payload.py @@ -16,6 +16,7 @@ Final, Iterable, Optional, + Set, TextIO, Tuple, Type, @@ -53,6 +54,9 @@ ) TOO_LARGE_BYTES_BODY: Final[int] = 2**20 # 1 MB +READ_SIZE: Final[int] = 2**16 # 64 KB +_CLOSE_FUTURES: Set[asyncio.Future[None]] = set() + if TYPE_CHECKING: from typing import List @@ -238,10 +242,46 @@ def decode(self, encoding: str = "utf-8", errors: str = "strict") -> str: @abstractmethod async def write(self, writer: AbstractStreamWriter) -> None: - """Write payload. + """Write payload to the writer stream. + + Args: + writer: An AbstractStreamWriter instance that handles the actual writing + + This is a legacy method that writes the entire payload without length constraints. + + Important: + For new implementations, use write_with_length() instead of this method. + This method is maintained for backwards compatibility and will eventually + delegate to write_with_length(writer, None) in all implementations. + + All payload subclasses must override this method for backwards compatibility, + but new code should use write_with_length for more flexibility and control. + """ + + # write_with_length is new in aiohttp 3.12 + # it should be overridden by subclasses + async def write_with_length( + self, writer: AbstractStreamWriter, content_length: Optional[int] + ) -> None: + """ + Write payload with a specific content length constraint. + + Args: + writer: An AbstractStreamWriter instance that handles the actual writing + content_length: Maximum number of bytes to write (None for unlimited) + + This method allows writing payload content with a specific length constraint, + which is particularly useful for HTTP responses with Content-Length header. + + Note: + This is the base implementation that provides backwards compatibility + for subclasses that don't override this method. Specific payload types + should override this method to implement proper length-constrained writing. - writer is an AbstractStreamWriter instance: """ + # Backwards compatibility for subclasses that don't override this method + # and for the default implementation + await self.write(writer) class BytesPayload(Payload): @@ -276,8 +316,40 @@ def decode(self, encoding: str = "utf-8", errors: str = "strict") -> str: return self._value.decode(encoding, errors) async def write(self, writer: AbstractStreamWriter) -> None: + """Write the entire bytes payload to the writer stream. 
+ + Args: + writer: An AbstractStreamWriter instance that handles the actual writing + + This method writes the entire bytes content without any length constraint. + + Note: + For new implementations that need length control, use write_with_length(). + This method is maintained for backwards compatibility and is equivalent + to write_with_length(writer, None). + """ await writer.write(self._value) + async def write_with_length( + self, writer: AbstractStreamWriter, content_length: Optional[int] + ) -> None: + """ + Write bytes payload with a specific content length constraint. + + Args: + writer: An AbstractStreamWriter instance that handles the actual writing + content_length: Maximum number of bytes to write (None for unlimited) + + This method writes either the entire byte sequence or a slice of it + up to the specified content_length. For BytesPayload, this operation + is performed efficiently using array slicing. + + """ + if content_length is not None: + await writer.write(self._value[:content_length]) + else: + await writer.write(self._value) + class StringPayload(BytesPayload): def __init__( @@ -330,15 +402,165 @@ def __init__( if hdrs.CONTENT_DISPOSITION not in self.headers: self.set_content_disposition(disposition, filename=self._filename) + def _read_and_available_len( + self, remaining_content_len: Optional[int] + ) -> Tuple[Optional[int], bytes]: + """ + Read the file-like object and return both its total size and the first chunk. + + Args: + remaining_content_len: Optional limit on how many bytes to read in this operation. + If None, READ_SIZE will be used as the default chunk size. + + Returns: + A tuple containing: + - The total size of the remaining unread content (None if size cannot be determined) + - The first chunk of bytes read from the file object + + This method is optimized to perform both size calculation and initial read + in a single operation, which is executed in a single executor job to minimize + context switches and file operations when streaming content. + + """ + size = self.size # Call size only once since it does I/O + return size, self._value.read( + min(size or READ_SIZE, remaining_content_len or READ_SIZE) + ) + + def _read(self, remaining_content_len: Optional[int]) -> bytes: + """ + Read a chunk of data from the file-like object. + + Args: + remaining_content_len: Optional maximum number of bytes to read. + If None, READ_SIZE will be used as the default chunk size. + + Returns: + A chunk of bytes read from the file object, respecting the + remaining_content_len limit if specified. + + This method is used for subsequent reads during streaming after + the initial _read_and_available_len call has been made. + + """ + return self._value.read(remaining_content_len or READ_SIZE) # type: ignore[no-any-return] + + @property + def size(self) -> Optional[int]: + try: + return os.fstat(self._value.fileno()).st_size - self._value.tell() + except (AttributeError, OSError): + return None + async def write(self, writer: AbstractStreamWriter) -> None: - loop = asyncio.get_event_loop() + """ + Write the entire file-like payload to the writer stream. + + Args: + writer: An AbstractStreamWriter instance that handles the actual writing + + This method writes the entire file content without any length constraint. + It delegates to write_with_length() with no length limit for implementation + consistency. + + Note: + For new implementations that need length control, use write_with_length() directly. 
+ This method is maintained for backwards compatibility with existing code. + + """ + await self.write_with_length(writer, None) + + async def write_with_length( + self, writer: AbstractStreamWriter, content_length: Optional[int] + ) -> None: + """ + Write file-like payload with a specific content length constraint. + + Args: + writer: An AbstractStreamWriter instance that handles the actual writing + content_length: Maximum number of bytes to write (None for unlimited) + + This method implements optimized streaming of file content with length constraints: + + 1. File reading is performed in a thread pool to avoid blocking the event loop + 2. Content is read and written in chunks to maintain memory efficiency + 3. Writing stops when either: + - All available file content has been written (when size is known) + - The specified content_length has been reached + 4. File resources are properly closed even if the operation is cancelled + + The implementation carefully handles both known-size and unknown-size payloads, + as well as constrained and unconstrained content lengths. + + """ + loop = asyncio.get_running_loop() + total_written_len = 0 + remaining_content_len = content_length + try: - chunk = await loop.run_in_executor(None, self._value.read, 2**16) + # Get initial data and available length + available_len, chunk = await loop.run_in_executor( + None, self._read_and_available_len, remaining_content_len + ) + # Process data chunks until done while chunk: - await writer.write(chunk) - chunk = await loop.run_in_executor(None, self._value.read, 2**16) + chunk_len = len(chunk) + + # Write data with or without length constraint + if remaining_content_len is None: + await writer.write(chunk) + else: + await writer.write(chunk[:remaining_content_len]) + remaining_content_len -= chunk_len + + total_written_len += chunk_len + + # Check if we're done writing + if self._should_stop_writing( + available_len, total_written_len, remaining_content_len + ): + return + + # Read next chunk + chunk = await loop.run_in_executor( + None, self._read, remaining_content_len + ) finally: - await loop.run_in_executor(None, self._value.close) + # Handle closing the file without awaiting to prevent cancellation issues + # when the StreamReader reaches EOF + self._schedule_file_close(loop) + + def _should_stop_writing( + self, + available_len: Optional[int], + total_written_len: int, + remaining_content_len: Optional[int], + ) -> bool: + """ + Determine if we should stop writing data. + + Args: + available_len: Known size of the payload if available (None if unknown) + total_written_len: Number of bytes already written + remaining_content_len: Remaining bytes to be written for content-length limited responses + + Returns: + True if we should stop writing data, based on either: + - Having written all available data (when size is known) + - Having written all requested content (when content-length is specified) + + """ + return (available_len is not None and total_written_len >= available_len) or ( + remaining_content_len is not None and remaining_content_len <= 0 + ) + + def _schedule_file_close(self, loop: asyncio.AbstractEventLoop) -> None: + """Schedule file closing without awaiting to prevent cancellation issues.""" + close_future = loop.run_in_executor(None, self._value.close) + # Hold a strong reference to the future to prevent it from being + # garbage collected before it completes. 
+ _CLOSE_FUTURES.add(close_future) + close_future.add_done_callback(_CLOSE_FUTURES.remove) def decode(self, encoding: str = "utf-8", errors: str = "strict") -> str: return "".join(r.decode(encoding, errors) for r in self._value.readlines()) @@ -375,31 +597,60 @@ def __init__( **kwargs, ) - @property - def size(self) -> Optional[int]: - try: - return os.fstat(self._value.fileno()).st_size - self._value.tell() - except OSError: - return None + def _read_and_available_len( + self, remaining_content_len: Optional[int] + ) -> Tuple[Optional[int], bytes]: + """ + Read the text file-like object and return both its total size and the first chunk. + + Args: + remaining_content_len: Optional limit on how many bytes to read in this operation. + If None, READ_SIZE will be used as the default chunk size. + + Returns: + A tuple containing: + - The total size of the remaining unread content (None if size cannot be determined) + - The first chunk of bytes read from the file object, encoded using the payload's encoding + + This method is optimized to perform both size calculation and initial read + in a single operation, which is executed in a single executor job to minimize + context switches and file operations when streaming content. + + Note: + TextIOPayload handles encoding of the text content before writing it + to the stream. If no encoding is specified, UTF-8 is used as the default. + + """ + size = self.size + chunk = self._value.read( + min(size or READ_SIZE, remaining_content_len or READ_SIZE) + ) + return size, chunk.encode(self._encoding) if self._encoding else chunk.encode() + + def _read(self, remaining_content_len: Optional[int]) -> bytes: + """ + Read a chunk of data from the text file-like object. + + Args: + remaining_content_len: Optional maximum number of bytes to read. + If None, READ_SIZE will be used as the default chunk size. + + Returns: + A chunk of bytes read from the file object and encoded using the payload's + encoding. The data is automatically converted from text to bytes. + + This method is used for subsequent reads during streaming after + the initial _read_and_available_len call has been made. It properly + handles text encoding, converting the text content to bytes using + the specified encoding (or UTF-8 if none was provided). 
+ + """ + chunk = self._value.read(remaining_content_len or READ_SIZE) + return chunk.encode(self._encoding) if self._encoding else chunk.encode() def decode(self, encoding: str = "utf-8", errors: str = "strict") -> str: return self._value.read() - async def write(self, writer: AbstractStreamWriter) -> None: - loop = asyncio.get_event_loop() - try: - chunk = await loop.run_in_executor(None, self._value.read, 2**16) - while chunk: - data = ( - chunk.encode(encoding=self._encoding) - if self._encoding - else chunk.encode() - ) - await writer.write(data) - chunk = await loop.run_in_executor(None, self._value.read, 2**16) - finally: - await loop.run_in_executor(None, self._value.close) - class BytesIOPayload(IOBasePayload): _value: io.BytesIO @@ -414,20 +665,55 @@ def size(self) -> int: def decode(self, encoding: str = "utf-8", errors: str = "strict") -> str: return self._value.read().decode(encoding, errors) + async def write(self, writer: AbstractStreamWriter) -> None: + return await self.write_with_length(writer, None) -class BufferedReaderPayload(IOBasePayload): - _value: io.BufferedIOBase + async def write_with_length( + self, writer: AbstractStreamWriter, content_length: Optional[int] + ) -> None: + """ + Write BytesIO payload with a specific content length constraint. - @property - def size(self) -> Optional[int]: + Args: + writer: An AbstractStreamWriter instance that handles the actual writing + content_length: Maximum number of bytes to write (None for unlimited) + + This implementation is specifically optimized for BytesIO objects: + + 1. Reads content in chunks to maintain memory efficiency + 2. Yields control back to the event loop periodically to prevent blocking + when dealing with large BytesIO objects + 3. Respects content_length constraints when specified + 4. Properly cleans up by closing the BytesIO object when done or on error + + The periodic yielding to the event loop is important for maintaining + responsiveness when processing large in-memory buffers. + + """ + loop_count = 0 + remaining_bytes = content_length try: - return os.fstat(self._value.fileno()).st_size - self._value.tell() - except (OSError, AttributeError): - # data.fileno() is not supported, e.g. - # io.BufferedReader(io.BytesIO(b'data')) - # For some file-like objects (e.g. tarfile), the fileno() attribute may - # not exist at all, and will instead raise an AttributeError. - return None + while chunk := self._value.read(READ_SIZE): + if loop_count > 0: + # Avoid blocking the event loop + # if they pass a large BytesIO object + # and we are not in the first iteration + # of the loop + await asyncio.sleep(0) + if remaining_bytes is None: + await writer.write(chunk) + else: + await writer.write(chunk[:remaining_bytes]) + remaining_bytes -= len(chunk) + if remaining_bytes <= 0: + return + loop_count += 1 + finally: + self._value.close() + + +class BufferedReaderPayload(IOBasePayload): + _value: io.BufferedIOBase def decode(self, encoding: str = "utf-8", errors: str = "strict") -> str: return self._value.read().decode(encoding, errors) @@ -486,15 +772,63 @@ def __init__(self, value: _AsyncIterable, *args: Any, **kwargs: Any) -> None: self._iter = value.__aiter__() async def write(self, writer: AbstractStreamWriter) -> None: - if self._iter: - try: - # iter is not None check prevents rare cases - # when the case iterable is used twice - while True: - chunk = await self._iter.__anext__() + """ + Write the entire async iterable payload to the writer stream. 
+ + Args: + writer: An AbstractStreamWriter instance that handles the actual writing + + This method iterates through the async iterable and writes each chunk + to the writer without any length constraint. + + Note: + For new implementations that need length control, use write_with_length() directly. + This method is maintained for backwards compatibility with existing code. + + """ + await self.write_with_length(writer, None) + + async def write_with_length( + self, writer: AbstractStreamWriter, content_length: Optional[int] + ) -> None: + """ + Write async iterable payload with a specific content length constraint. + + Args: + writer: An AbstractStreamWriter instance that handles the actual writing + content_length: Maximum number of bytes to write (None for unlimited) + + This implementation handles streaming of async iterable content with length constraints: + + 1. Iterates through the async iterable one chunk at a time + 2. Respects content_length constraints when specified + 3. Handles the case when the iterable might be used twice + + Since async iterables are consumed as they're iterated, there is no way to + restart the iteration if it's already in progress or completed. + + """ + if self._iter is None: + return + + remaining_bytes = content_length + + try: + while True: + chunk = await self._iter.__anext__() + if remaining_bytes is None: await writer.write(chunk) - except StopAsyncIteration: - self._iter = None + # If we have a content length limit + elif remaining_bytes > 0: + await writer.write(chunk[:remaining_bytes]) + remaining_bytes -= len(chunk) + # We still want to exhaust the iterator even + # if we have reached the content length limit + # since the file handle may not get closed by + # the iterator if we don't do this + except StopAsyncIteration: + # Iterator is exhausted + self._iter = None def decode(self, encoding: str = "utf-8", errors: str = "strict") -> str: raise TypeError("Unable to decode.") diff --git a/tests/conftest.py b/tests/conftest.py index d9831aea523..696f5d0d035 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -17,6 +17,7 @@ import zlib_ng.zlib_ng from blockbuster import blockbuster_ctx +from aiohttp import payload from aiohttp.client_proto import ResponseHandler from aiohttp.compression_utils import ZLibBackend, ZLibBackendProtocol, set_zlib_backend from aiohttp.http import WS_KEY @@ -332,3 +333,18 @@ def parametrize_zlib_backend( yield set_zlib_backend(original_backend) + + +@pytest.fixture() +def cleanup_payload_pending_file_closes( + loop: asyncio.AbstractEventLoop, +) -> Generator[None, None, None]: + """Ensure all pending file close operations complete during test teardown.""" + yield + if payload._CLOSE_FUTURES: + # Only wait for futures from the current loop + loop_futures = [f for f in payload._CLOSE_FUTURES if f.get_loop() is loop] + if loop_futures: + loop.run_until_complete( + asyncio.gather(*loop_futures, return_exceptions=True) + ) diff --git a/tests/test_client_functional.py b/tests/test_client_functional.py index 1154c7e5805..bb4d70ef530 100644 --- a/tests/test_client_functional.py +++ b/tests/test_client_functional.py @@ -12,7 +12,16 @@ import tarfile import time import zipfile -from typing import Any, AsyncIterator, Awaitable, Callable, List, NoReturn, Type +from typing import ( + Any, + AsyncIterator, + Awaitable, + Callable, + List, + NoReturn, + Optional, + Type, +) from unittest import mock import pytest @@ -41,6 +50,13 @@ from aiohttp.typedefs import Handler +@pytest.fixture(autouse=True) +def cleanup( + 
cleanup_payload_pending_file_closes: None, +) -> None: + """Ensure all pending file close operations complete during test teardown.""" + + @pytest.fixture def here(): return pathlib.Path(__file__).parent @@ -1560,7 +1576,10 @@ async def handler(request: web.Request) -> web.Response: original_write_bytes = ClientRequest.write_bytes async def write_bytes( - self: ClientRequest, writer: StreamWriter, conn: Connection + self: ClientRequest, + writer: StreamWriter, + conn: Connection, + content_length: Optional[int] = None, ) -> None: nonlocal write_mock original_write = writer._write @@ -1568,7 +1587,7 @@ async def write_bytes( with mock.patch.object( writer, "_write", autospec=True, spec_set=True, side_effect=original_write ) as write_mock: - await original_write_bytes(self, writer, conn) + await original_write_bytes(self, writer, conn, content_length) with mock.patch.object(ClientRequest, "write_bytes", write_bytes): app = web.Application() @@ -1940,8 +1959,7 @@ async def handler(request): app.router.add_post("/", handler) client = await aiohttp_client(app) - with fname.open("rb") as f: - data_size = len(f.read()) + data_size = len(expected) with pytest.warns(DeprecationWarning): @@ -4146,3 +4164,145 @@ async def handler(request: web.Request) -> web.Response: with pytest.raises(RuntimeError, match="Connection closed"): await resp.read() + + +async def test_content_length_limit_enforced(aiohttp_server: AiohttpServer) -> None: + """Test that Content-Length header value limits the amount of data sent to the server.""" + received_data = bytearray() + + async def handler(request: web.Request) -> web.Response: + # Read all data from the request and store it + data = await request.read() + received_data.extend(data) + return web.Response(text="OK") + + app = web.Application() + app.router.add_post("/", handler) + + server = await aiohttp_server(app) + + # Create data larger than what we'll limit with Content-Length + data = b"X" * 1000 + # Only send 500 bytes even though data is 1000 bytes + headers = {"Content-Length": "500"} + + async with aiohttp.ClientSession() as session: + await session.post(server.make_url("/"), data=data, headers=headers) + + # Verify only 500 bytes (not the full 1000) were received by the server + assert len(received_data) == 500 + assert received_data == b"X" * 500 + + +async def test_content_length_limit_with_multiple_reads( + aiohttp_server: AiohttpServer, +) -> None: + """Test that Content-Length header value limits multi read data properly.""" + received_data = bytearray() + + async def handler(request: web.Request) -> web.Response: + # Read all data from the request and store it + data = await request.read() + received_data.extend(data) + return web.Response(text="OK") + + app = web.Application() + app.router.add_post("/", handler) + + server = await aiohttp_server(app) + + # Create an async generator of data + async def data_generator() -> AsyncIterator[bytes]: + yield b"Chunk1" * 100 # 600 bytes + yield b"Chunk2" * 100 # another 600 bytes + + # Limit to 800 bytes even though we'd generate 1200 bytes + headers = {"Content-Length": "800"} + + async with aiohttp.ClientSession() as session: + await session.post(server.make_url("/"), data=data_generator(), headers=headers) + + # Verify only 800 bytes (not the full 1200) were received by the server + assert len(received_data) == 800 + # First chunk fully sent (600 bytes) + assert received_data.startswith(b"Chunk1" * 100) + + # The rest should be from the second chunk (the exact split might vary by implementation) + 
assert b"Chunk2" in received_data # Some part of the second chunk was sent + # 200 bytes from the second chunk + assert len(received_data) - len(b"Chunk1" * 100) == 200 + + +async def test_post_connection_cleanup_with_bytesio( + aiohttp_client: AiohttpClient, +) -> None: + """Test that connections are properly cleaned up when using BytesIO data.""" + + async def handler(request: web.Request) -> web.Response: + return web.Response(body=b"") + + app = web.Application() + app.router.add_post("/hello", handler) + client = await aiohttp_client(app) + + # Test with direct bytes and BytesIO multiple times to ensure connection cleanup + for _ in range(10): + async with client.post( + "/hello", + data=b"x", + headers={"Content-Length": "1"}, + ) as response: + response.raise_for_status() + + assert client._session.connector is not None + assert len(client._session.connector._conns) == 1 + + x = io.BytesIO(b"x") + async with client.post( + "/hello", + data=x, + headers={"Content-Length": "1"}, + ) as response: + response.raise_for_status() + + assert len(client._session.connector._conns) == 1 + + +async def test_post_connection_cleanup_with_file( + aiohttp_client: AiohttpClient, here: pathlib.Path +) -> None: + """Test that connections are properly cleaned up when using file data.""" + + async def handler(request: web.Request) -> web.Response: + await request.read() + return web.Response(body=b"") + + app = web.Application() + app.router.add_post("/hello", handler) + client = await aiohttp_client(app) + + test_file = here / "data.unknown_mime_type" + + # Test with direct bytes and file multiple times to ensure connection cleanup + for _ in range(10): + async with client.post( + "/hello", + data=b"xx", + headers={"Content-Length": "2"}, + ) as response: + response.raise_for_status() + + assert client._session.connector is not None + assert len(client._session.connector._conns) == 1 + fh = await asyncio.get_running_loop().run_in_executor( + None, open, test_file, "rb" + ) + + async with client.post( + "/hello", + data=fh, + headers={"Content-Length": str(test_file.stat().st_size)}, + ) as response: + response.raise_for_status() + + assert len(client._session.connector._conns) == 1 diff --git a/tests/test_client_request.py b/tests/test_client_request.py index 4706c10a588..70b30dd14f2 100644 --- a/tests/test_client_request.py +++ b/tests/test_client_request.py @@ -4,8 +4,9 @@ import pathlib import sys import urllib.parse +from collections.abc import Callable, Iterable from http.cookies import BaseCookie, Morsel, SimpleCookie -from typing import Any, Callable, Dict, Iterable, Optional +from typing import Any, Optional, Protocol, Union from unittest import mock import pytest @@ -14,6 +15,7 @@ import aiohttp from aiohttp import BaseConnector, hdrs, helpers, payload +from aiohttp.abc import AbstractStreamWriter from aiohttp.client_exceptions import ClientConnectionError from aiohttp.client_reqrep import ( ClientRequest, @@ -23,7 +25,11 @@ _merge_ssl_params, ) from aiohttp.compression_utils import ZLibBackend -from aiohttp.http import HttpVersion10, HttpVersion11 +from aiohttp.http import HttpVersion10, HttpVersion11, StreamWriter + + +class _RequestMaker(Protocol): + def __call__(self, method: str, url: str, **kwargs: Any) -> ClientRequest: ... 
class WriterMock(mock.AsyncMock): @@ -309,7 +315,7 @@ def test_default_loop(loop) -> None: ), ) def test_host_header_fqdn( - make_request: Any, url: str, headers: Dict[str, str], expected: str + make_request: Any, url: str, headers: dict[str, str], expected: str ) -> None: req = make_request("get", url, headers=headers) assert req.headers["HOST"] == expected @@ -995,10 +1001,12 @@ async def gen(): assert req.headers["TRANSFER-ENCODING"] == "chunked" original_write_bytes = req.write_bytes - async def _mock_write_bytes(*args, **kwargs): + async def _mock_write_bytes( + writer: AbstractStreamWriter, conn: mock.Mock, content_length: Optional[int] + ) -> None: # Ensure the task is scheduled await asyncio.sleep(0) - return await original_write_bytes(*args, **kwargs) + await original_write_bytes(writer, conn, content_length) with mock.patch.object(req, "write_bytes", _mock_write_bytes): resp = await req.send(conn) @@ -1197,7 +1205,7 @@ async def test_oserror_on_write_bytes(loop, conn) -> None: writer = WriterMock() writer.write.side_effect = OSError - await req.write_bytes(writer, conn) + await req.write_bytes(writer, conn, None) assert conn.protocol.set_exception.called exc = conn.protocol.set_exception.call_args[0][0] @@ -1522,3 +1530,81 @@ def test_request_info_tuple_new() -> None: ).real_url is url ) + + +def test_get_content_length(make_request: _RequestMaker) -> None: + """Test _get_content_length method extracts Content-Length correctly.""" + req = make_request("get", "http://python.org/") + + # No Content-Length header + assert req._get_content_length() is None + + # Valid Content-Length header + req.headers["Content-Length"] = "42" + assert req._get_content_length() == 42 + + # Invalid Content-Length header + req.headers["Content-Length"] = "invalid" + with pytest.raises(ValueError, match="Invalid Content-Length header: invalid"): + req._get_content_length() + + +async def test_write_bytes_with_content_length_limit( + loop: asyncio.AbstractEventLoop, buf: bytearray, conn: mock.Mock +) -> None: + """Test that write_bytes respects content_length limit for different body types.""" + # Test with bytes data + data = b"Hello World" + req = ClientRequest("post", URL("http://python.org/"), loop=loop) + + req.body = data + + writer = StreamWriter(protocol=conn.protocol, loop=loop) + # Use content_length=5 to truncate data + await req.write_bytes(writer, conn, 5) + + # Verify only the first 5 bytes were written + assert buf == b"Hello" + await req.close() + + +@pytest.mark.parametrize( + "data", + [ + [b"Part1", b"Part2", b"Part3"], + b"Part1Part2Part3", + ], +) +async def test_write_bytes_with_iterable_content_length_limit( + loop: asyncio.AbstractEventLoop, + buf: bytearray, + conn: mock.Mock, + data: Union[list[bytes], bytes], +) -> None: + """Test that write_bytes respects content_length limit for iterable data.""" + # Test with iterable data + req = ClientRequest("post", URL("http://python.org/"), loop=loop) + req.body = data + + writer = StreamWriter(protocol=conn.protocol, loop=loop) + # Use content_length=7 to truncate at the middle of Part2 + await req.write_bytes(writer, conn, 7) + assert len(buf) == 7 + assert buf == b"Part1Pa" + await req.close() + + +async def test_write_bytes_empty_iterable_with_content_length( + loop: asyncio.AbstractEventLoop, buf: bytearray, conn: mock.Mock +) -> None: + """Test that write_bytes handles empty iterable body with content_length.""" + req = ClientRequest("post", URL("http://python.org/"), loop=loop) + req.body = [] # Empty iterable + + writer = 
StreamWriter(protocol=conn.protocol, loop=loop) + # Use content_length=10 with empty body + await req.write_bytes(writer, conn, 10) + + # Verify nothing was written + assert len(buf) == 0 + await req.close() diff --git a/tests/test_payload.py b/tests/test_payload.py index 0e2db91135b..af0230776e5 100644 --- a/tests/test_payload.py +++ b/tests/test_payload.py @@ -1,11 +1,22 @@ import array -import asyncio +import io +import unittest.mock +from collections.abc import AsyncIterator from io import StringIO -from unittest import mock +from typing import Optional, Union import pytest +from multidict import CIMultiDict -from aiohttp import payload, streams +from aiohttp import payload +from aiohttp.abc import AbstractStreamWriter + + +@pytest.fixture(autouse=True) +def cleanup( + cleanup_payload_pending_file_closes: None, +) -> None: + """Ensure all pending file close operations complete during test teardown.""" @pytest.fixture @@ -121,22 +132,326 @@ async def gen(): def test_async_iterable_payload_not_async_iterable() -> None: with pytest.raises(TypeError): - payload.AsyncIterablePayload(object()) + payload.AsyncIterablePayload(object()) # type: ignore[arg-type] + + +class MockStreamWriter(AbstractStreamWriter): + """Mock stream writer for testing payload writes.""" + + def __init__(self) -> None: + self.written: list[bytes] = [] + + async def write( + self, chunk: Union[bytes, bytearray, "memoryview[int]", "memoryview[bytes]"] + ) -> None: + """Store the chunk in the written list.""" + self.written.append(bytes(chunk)) + + async def write_eof(self, chunk: Optional[bytes] = None) -> None: + """write_eof implementation - no-op for tests.""" + + async def drain(self) -> None: + """Drain implementation - no-op for tests.""" + + def enable_compression( + self, encoding: str = "deflate", strategy: Optional[int] = None + ) -> None: + """Enable compression - no-op for tests.""" + + def enable_chunking(self) -> None: + """Enable chunking - no-op for tests.""" + + async def write_headers(self, status_line: str, headers: CIMultiDict[str]) -> None: + """Write headers - no-op for tests.""" + + def get_written_bytes(self) -> bytes: + """Return all written bytes as a single bytes object.""" + return b"".join(self.written) + + +async def test_bytes_payload_write_with_length_no_limit() -> None: + """Test BytesPayload writing with no content length limit.""" + data = b"0123456789" + p = payload.BytesPayload(data) + writer = MockStreamWriter() + + await p.write_with_length(writer, None) + assert writer.get_written_bytes() == data + assert len(writer.get_written_bytes()) == 10 + + +async def test_bytes_payload_write_with_length_exact() -> None: + """Test BytesPayload writing with exact content length.""" + data = b"0123456789" + p = payload.BytesPayload(data) + writer = MockStreamWriter() + + await p.write_with_length(writer, 10) + assert writer.get_written_bytes() == data + assert len(writer.get_written_bytes()) == 10 + + +async def test_bytes_payload_write_with_length_truncated() -> None: + """Test BytesPayload writing with truncated content length.""" + data = b"0123456789" + p = payload.BytesPayload(data) + writer = MockStreamWriter() + + await p.write_with_length(writer, 5) + assert writer.get_written_bytes() == b"01234" + assert len(writer.get_written_bytes()) == 5 + + +async def test_iobase_payload_write_with_length_no_limit() -> None: + """Test IOBasePayload writing with no content length limit.""" + data = b"0123456789" + p = payload.IOBasePayload(io.BytesIO(data)) + writer = MockStreamWriter() + + await 
p.write_with_length(writer, None) + assert writer.get_written_bytes() == data + assert len(writer.get_written_bytes()) == 10 + + +async def test_iobase_payload_write_with_length_exact() -> None: + """Test IOBasePayload writing with exact content length.""" + data = b"0123456789" + p = payload.IOBasePayload(io.BytesIO(data)) + writer = MockStreamWriter() + + await p.write_with_length(writer, 10) + assert writer.get_written_bytes() == data + assert len(writer.get_written_bytes()) == 10 + + +async def test_iobase_payload_write_with_length_truncated() -> None: + """Test IOBasePayload writing with truncated content length.""" + data = b"0123456789" + p = payload.IOBasePayload(io.BytesIO(data)) + writer = MockStreamWriter() + + await p.write_with_length(writer, 5) + assert writer.get_written_bytes() == b"01234" + assert len(writer.get_written_bytes()) == 5 + + +async def test_bytesio_payload_write_with_length_no_limit() -> None: + """Test BytesIOPayload writing with no content length limit.""" + data = b"0123456789" + p = payload.BytesIOPayload(io.BytesIO(data)) + writer = MockStreamWriter() + + await p.write_with_length(writer, None) + assert writer.get_written_bytes() == data + assert len(writer.get_written_bytes()) == 10 + + +async def test_bytesio_payload_write_with_length_exact() -> None: + """Test BytesIOPayload writing with exact content length.""" + data = b"0123456789" + p = payload.BytesIOPayload(io.BytesIO(data)) + writer = MockStreamWriter() + + await p.write_with_length(writer, 10) + assert writer.get_written_bytes() == data + assert len(writer.get_written_bytes()) == 10 + + +async def test_bytesio_payload_write_with_length_truncated() -> None: + """Test BytesIOPayload writing with truncated content length.""" + data = b"0123456789" + payload_bytesio = payload.BytesIOPayload(io.BytesIO(data)) + writer = MockStreamWriter() + + await payload_bytesio.write_with_length(writer, 5) + assert writer.get_written_bytes() == b"01234" + assert len(writer.get_written_bytes()) == 5 + + +async def test_bytesio_payload_write_with_length_remaining_zero() -> None: + """Test BytesIOPayload with content_length smaller than first read chunk.""" + data = b"0123456789" * 10 # 100 bytes + bio = io.BytesIO(data) + payload_bytesio = payload.BytesIOPayload(bio) + writer = MockStreamWriter() + + # Mock the read method to return smaller chunks + original_read = bio.read + read_calls = 0 + + def mock_read(size: Optional[int] = None) -> bytes: + nonlocal read_calls + read_calls += 1 + if read_calls == 1: + # First call: return 3 bytes (less than content_length=5) + return original_read(3) + else: + # Subsequent calls return remaining data normally + return original_read(size) + + with unittest.mock.patch.object(bio, "read", mock_read): + await payload_bytesio.write_with_length(writer, 5) + + assert len(writer.get_written_bytes()) == 5 + assert writer.get_written_bytes() == b"01234" + + +async def test_bytesio_payload_large_data_multiple_chunks() -> None: + """Test BytesIOPayload with large data requiring multiple read chunks.""" + chunk_size = 2**16 # 64KB (READ_SIZE) + data = b"x" * (chunk_size + 1000) # Slightly larger than READ_SIZE + payload_bytesio = payload.BytesIOPayload(io.BytesIO(data)) + writer = MockStreamWriter() + + await payload_bytesio.write_with_length(writer, None) + assert writer.get_written_bytes() == data + assert len(writer.get_written_bytes()) == chunk_size + 1000 -async def test_stream_reader_long_lines() -> None: - loop = asyncio.get_event_loop() - DATA = b"0" * 1024**3 +async def 
test_bytesio_payload_remaining_bytes_exhausted() -> None: + """Test BytesIOPayload when remaining_bytes becomes <= 0.""" + data = b"0123456789abcdef" * 1000 # 16000 bytes + payload_bytesio = payload.BytesIOPayload(io.BytesIO(data)) + writer = MockStreamWriter() - stream = streams.StreamReader(mock.Mock(), 2**16, loop=loop) - stream.feed_data(DATA) - stream.feed_eof() - body = payload.get_payload(stream) + await payload_bytesio.write_with_length(writer, 8000) # Exactly half the data + written = writer.get_written_bytes() + assert len(written) == 8000 + assert written == data[:8000] + + +async def test_iobase_payload_exact_chunk_size_limit() -> None: + """Test IOBasePayload with content length matching exactly one read chunk.""" + chunk_size = 2**16 # 65536 bytes (READ_SIZE) + data = b"x" * chunk_size + b"extra" # Slightly larger than one read chunk + p = payload.IOBasePayload(io.BytesIO(data)) + writer = MockStreamWriter() + + await p.write_with_length(writer, chunk_size) + written = writer.get_written_bytes() + assert len(written) == chunk_size + assert written == data[:chunk_size] + + +async def test_async_iterable_payload_write_with_length_no_limit() -> None: + """Test AsyncIterablePayload writing with no content length limit.""" + + async def gen() -> AsyncIterator[bytes]: + yield b"0123" + yield b"4567" + yield b"89" + + p = payload.AsyncIterablePayload(gen()) + writer = MockStreamWriter() + + await p.write_with_length(writer, None) + assert writer.get_written_bytes() == b"0123456789" + assert len(writer.get_written_bytes()) == 10 + + +async def test_async_iterable_payload_write_with_length_exact() -> None: + """Test AsyncIterablePayload writing with exact content length.""" + + async def gen() -> AsyncIterator[bytes]: + yield b"0123" + yield b"4567" + yield b"89" + + p = payload.AsyncIterablePayload(gen()) + writer = MockStreamWriter() + + await p.write_with_length(writer, 10) + assert writer.get_written_bytes() == b"0123456789" + assert len(writer.get_written_bytes()) == 10 + + +async def test_async_iterable_payload_write_with_length_truncated_mid_chunk() -> None: + """Test AsyncIterablePayload writing with content length truncating mid-chunk.""" + + async def gen() -> AsyncIterator[bytes]: + yield b"0123" + yield b"4567" + yield b"89" # pragma: no cover + + p = payload.AsyncIterablePayload(gen()) + writer = MockStreamWriter() + + await p.write_with_length(writer, 6) + assert writer.get_written_bytes() == b"012345" + assert len(writer.get_written_bytes()) == 6 + + +async def test_async_iterable_payload_write_with_length_truncated_at_chunk() -> None: + """Test AsyncIterablePayload writing with content length truncating at chunk boundary.""" + + async def gen() -> AsyncIterator[bytes]: + yield b"0123" + yield b"4567" # pragma: no cover + yield b"89" # pragma: no cover + + p = payload.AsyncIterablePayload(gen()) + writer = MockStreamWriter() + + await p.write_with_length(writer, 4) + assert writer.get_written_bytes() == b"0123" + assert len(writer.get_written_bytes()) == 4 + + +async def test_bytes_payload_backwards_compatibility() -> None: + """Test BytesPayload.write() backwards compatibility delegates to write_with_length().""" + p = payload.BytesPayload(b"1234567890") + writer = MockStreamWriter() + + await p.write(writer) + assert writer.get_written_bytes() == b"1234567890" + + +async def test_textio_payload_with_encoding() -> None: + """Test TextIOPayload reading with encoding and size constraints.""" + data = io.StringIO("hello world") + p = payload.TextIOPayload(data, 
encoding="utf-8") + writer = MockStreamWriter() + + await p.write_with_length(writer, 8) + # Should write exactly 8 bytes: "hello wo" + assert writer.get_written_bytes() == b"hello wo" + + +async def test_bytesio_payload_backwards_compatibility() -> None: + """Test BytesIOPayload.write() backwards compatibility delegates to write_with_length().""" + data = io.BytesIO(b"test data") + p = payload.BytesIOPayload(data) + writer = MockStreamWriter() + + await p.write(writer) + assert writer.get_written_bytes() == b"test data" + + +async def test_async_iterable_payload_backwards_compatibility() -> None: + """Test AsyncIterablePayload.write() backwards compatibility delegates to write_with_length().""" + + async def gen() -> AsyncIterator[bytes]: + yield b"chunk1" + yield b"chunk2" # pragma: no cover + + p = payload.AsyncIterablePayload(gen()) + writer = MockStreamWriter() + + await p.write(writer) + assert writer.get_written_bytes() == b"chunk1chunk2" + + +async def test_async_iterable_payload_with_none_iterator() -> None: + """Test AsyncIterablePayload with None iterator returns early without writing.""" + + async def gen() -> AsyncIterator[bytes]: + yield b"test" # pragma: no cover + + p = payload.AsyncIterablePayload(gen()) + # Manually set _iter to None to test the guard clause + p._iter = None + writer = MockStreamWriter() - writer = mock.Mock() - writer.write.return_value = loop.create_future() - writer.write.return_value.set_result(None) - await body.write(writer) - writer.write.assert_called_once_with(mock.ANY) - (chunk,), _ = writer.write.call_args - assert len(chunk) == len(DATA) + # Should return early without writing anything + await p.write_with_length(writer, 10) + assert writer.get_written_bytes() == b"" From 11aaa23d5b3716114730cb90a81983d1110cae14 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 11:33:27 -0500 Subject: [PATCH 60/90] [PR #10932/6b3672f0 backport][3.12] Fix flakey test_normal_closure_while_client_sends_msg test (#10935) Co-authored-by: J. Nick Koston --- tests/test_web_websocket_functional.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/tests/test_web_websocket_functional.py b/tests/test_web_websocket_functional.py index 945096a2af3..0229809592a 100644 --- a/tests/test_web_websocket_functional.py +++ b/tests/test_web_websocket_functional.py @@ -1246,13 +1246,13 @@ async def handler(request: web.Request) -> web.WebSocketResponse: async def test_normal_closure_while_client_sends_msg( aiohttp_client: AiohttpClient, ) -> None: - """Test abnormal closure when the server closes and the client doesn't respond.""" + """Test normal closure when the server closes and the client responds properly.""" close_code: Optional[WSCloseCode] = None got_close_code = asyncio.Event() async def handler(request: web.Request) -> web.WebSocketResponse: - # Setting a short close timeout - ws = web.WebSocketResponse(timeout=0.2) + # Setting a longer close timeout to avoid race conditions + ws = web.WebSocketResponse(timeout=1.0) await ws.prepare(request) await ws.close() From 38c23ede00245bcc875746a82aa9635d112781c4 Mon Sep 17 00:00:00 2001 From: "J. 
Nick Koston" Date: Thu, 22 May 2025 11:46:08 -0500 Subject: [PATCH 61/90] [PR #10933/597161d backport][3.12] Fix flakey client functional keep alive tests (#10937) --- tests/test_client_functional.py | 30 ++++++++++++++++-------------- 1 file changed, 16 insertions(+), 14 deletions(-) diff --git a/tests/test_client_functional.py b/tests/test_client_functional.py index bb4d70ef530..c9d62184ef6 100644 --- a/tests/test_client_functional.py +++ b/tests/test_client_functional.py @@ -248,8 +248,8 @@ async def handler(request): assert 0 == len(client._session.connector._conns) -async def test_keepalive_timeout_async_sleep() -> None: - async def handler(request): +async def test_keepalive_timeout_async_sleep(unused_port_socket: socket.socket) -> None: + async def handler(request: web.Request) -> web.Response: body = await request.read() assert b"" == body return web.Response(body=b"OK") @@ -260,17 +260,18 @@ async def handler(request): runner = web.AppRunner(app, tcp_keepalive=True, keepalive_timeout=0.001) await runner.setup() - port = unused_port() - site = web.TCPSite(runner, host="localhost", port=port) + site = web.SockSite(runner, unused_port_socket) await site.start() + host, port = unused_port_socket.getsockname()[:2] + try: - async with aiohttp.client.ClientSession() as sess: - resp1 = await sess.get(f"http://localhost:{port}/") + async with aiohttp.ClientSession() as sess: + resp1 = await sess.get(f"http://{host}:{port}/") await resp1.read() # wait for server keepalive_timeout await asyncio.sleep(0.01) - resp2 = await sess.get(f"http://localhost:{port}/") + resp2 = await sess.get(f"http://{host}:{port}/") await resp2.read() finally: await asyncio.gather(runner.shutdown(), site.stop()) @@ -280,8 +281,8 @@ async def handler(request): sys.version_info[:2] == (3, 11), reason="https://github.com/pytest-dev/pytest/issues/10763", ) -async def test_keepalive_timeout_sync_sleep() -> None: - async def handler(request): +async def test_keepalive_timeout_sync_sleep(unused_port_socket: socket.socket) -> None: + async def handler(request: web.Request) -> web.Response: body = await request.read() assert b"" == body return web.Response(body=b"OK") @@ -292,18 +293,19 @@ async def handler(request): runner = web.AppRunner(app, tcp_keepalive=True, keepalive_timeout=0.001) await runner.setup() - port = unused_port() - site = web.TCPSite(runner, host="localhost", port=port) + site = web.SockSite(runner, unused_port_socket) await site.start() + host, port = unused_port_socket.getsockname()[:2] + try: - async with aiohttp.client.ClientSession() as sess: - resp1 = await sess.get(f"http://localhost:{port}/") + async with aiohttp.ClientSession() as sess: + resp1 = await sess.get(f"http://{host}:{port}/") await resp1.read() # wait for server keepalive_timeout # time.sleep is a more challenging scenario than asyncio.sleep time.sleep(0.01) - resp2 = await sess.get(f"http://localhost:{port}/") + resp2 = await sess.get(f"http://{host}:{port}/") await resp2.read() finally: await asyncio.gather(runner.shutdown(), site.stop()) From 69182c7b1b7d8a5ba0830b2f370c64464e94512a Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 12:17:17 -0500 Subject: [PATCH 62/90] [PR #10938/77c0115e backport][3.12] Fix flakey test_content_length_limit_with_multiple_reads test (#10939) Co-authored-by: J. 
Nick Koston --- tests/test_client_functional.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/tests/test_client_functional.py b/tests/test_client_functional.py index c9d62184ef6..5e1a4d2ddb5 100644 --- a/tests/test_client_functional.py +++ b/tests/test_client_functional.py @@ -4222,7 +4222,10 @@ async def data_generator() -> AsyncIterator[bytes]: headers = {"Content-Length": "800"} async with aiohttp.ClientSession() as session: - await session.post(server.make_url("/"), data=data_generator(), headers=headers) + async with session.post( + server.make_url("/"), data=data_generator(), headers=headers + ) as resp: + await resp.read() # Ensure response is fully read and connection cleaned up # Verify only 800 bytes (not the full 1200) were received by the server assert len(received_data) == 800 From 69a7fd782d937f4ad6f1e92f46b245791e38b264 Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Thu, 22 May 2025 12:48:26 -0500 Subject: [PATCH 63/90] Release 3.12.0b1 (#10940) --- CHANGES.rst | 223 ++++++++++++++++++++++++++++++++++++++++++++ aiohttp/__init__.py | 2 +- 2 files changed, 224 insertions(+), 1 deletion(-) diff --git a/CHANGES.rst b/CHANGES.rst index 651437c90bd..b455c45f7a9 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -10,6 +10,229 @@ .. towncrier release notes start +3.12.0b1 (2025-05-22) +===================== + +Bug fixes +--------- + +- Response is now always True, instead of using MutableMapping behaviour (False when map is empty) + + + *Related issues and pull requests on GitHub:* + :issue:`10119`. + + + +- Fixed connection reuse for file-like data payloads by ensuring buffer + truncation respects content-length boundaries and preventing premature + connection closure race -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10325`, :issue:`10915`. + + + +- Fixed pytest plugin to not use deprecated :py:mod:`asyncio` policy APIs. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + + +Features +-------- + +- Added a comprehensive HTTP Digest Authentication client middleware (DigestAuthMiddleware) + that implements RFC 7616. The middleware supports all standard hash algorithms + (MD5, SHA, SHA-256, SHA-512) with session variants, handles both 'auth' and + 'auth-int' quality of protection options, and automatically manages the + authentication flow by intercepting 401 responses and retrying with proper + credentials -- by :user:`feus4177`, :user:`TimMenninger`, and :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`2213`, :issue:`10725`. + + + +- Added client middleware support -- by :user:`bdraco` and :user:`Dreamsorcerer`. + + This change allows users to add middleware to the client session and requests, enabling features like + authentication, logging, and request/response modification without modifying the core + request logic. Additionally, the ``session`` attribute was added to ``ClientRequest``, + allowing middleware to access the session for making additional requests. + + + *Related issues and pull requests on GitHub:* + :issue:`9732`, :issue:`10902`. + + + +- Allow user setting zlib compression backend -- by :user:`TimMenninger` + + This change allows the user to call :func:`aiohttp.set_zlib_backend()` with the + zlib compression module of their choice. Default behavior continues to use + the builtin ``zlib`` library. + + + *Related issues and pull requests on GitHub:* + :issue:`9798`. 
+ + + +- Added support for overriding the base URL with an absolute one in client sessions + -- by :user:`vivodi`. + + + *Related issues and pull requests on GitHub:* + :issue:`10074`. + + + +- Added ``host`` parameter to ``aiohttp_server`` fixture -- by :user:`christianwbrock`. + + + *Related issues and pull requests on GitHub:* + :issue:`10120`. + + + +- Detect blocking calls in coroutines using BlockBuster -- by :user:`cbornet`. + + + *Related issues and pull requests on GitHub:* + :issue:`10433`. + + + +- Added ``socket_factory`` to :py:class:`aiohttp.TCPConnector` to allow specifying custom socket options + -- by :user:`TimMenninger`. + + + *Related issues and pull requests on GitHub:* + :issue:`10474`, :issue:`10520`. + + + +- Started building armv7l manylinux wheels -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10797`. + + + +- Implemented shared DNS resolver management to fix excessive resolver object creation + when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures + only one ``DNSResolver`` object is created for default configurations, significantly + reducing resource usage and improving performance for applications using multiple + client sessions simultaneously -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10847`, :issue:`10923`. + + + + +Packaging updates and notes for downstreams +------------------------------------------- + +- Removed non SPDX-license description from ``setup.cfg`` -- by :user:`devanshu-ziphq`. + + + *Related issues and pull requests on GitHub:* + :issue:`10662`. + + + +- Added support for building against system ``llhttp`` library -- by :user:`mgorny`. + + This change adds support for :envvar:`AIOHTTP_USE_SYSTEM_DEPS` environment variable that + can be used to build aiohttp against the system install of the ``llhttp`` library rather + than the vendored one. + + + *Related issues and pull requests on GitHub:* + :issue:`10759`. + + + +- ``aiodns`` is now installed on Windows with speedups extra -- by :user:`bdraco`. + + As of ``aiodns`` 3.3.0, ``SelectorEventLoop`` is no longer required when using ``pycares`` 4.7.0 or later. + + + *Related issues and pull requests on GitHub:* + :issue:`10823`. + + + +- Fixed compatibility issue with Cython 3.1.1 -- by :user:`bdraco` + + + *Related issues and pull requests on GitHub:* + :issue:`10877`. + + + + +Contributor-facing changes +-------------------------- + +- Sped up tests by disabling ``blockbuster`` fixture for ``test_static_file_huge`` and ``test_static_file_huge_cancel`` tests -- by :user:`dikos1337`. + + + *Related issues and pull requests on GitHub:* + :issue:`9705`, :issue:`10761`. + + + +- Updated tests to avoid using deprecated :py:mod:`asyncio` policy APIs and + make it compatible with Python 3.14. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + +- Added Winloop to test suite to support in the future -- by :user:`Vizonex`. + + + *Related issues and pull requests on GitHub:* + :issue:`10922`. + + + + +Miscellaneous internal changes +------------------------------ + +- Added support for the ``partitioned`` attribute in the ``set_cookie`` method. + + + *Related issues and pull requests on GitHub:* + :issue:`9870`. + + + +- Setting :attr:`aiohttp.web.StreamResponse.last_modified` to an unsupported type will now raise :exc:`TypeError` instead of silently failing -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10146`. 
+ + + + +---- + + 3.12.0b0 (2025-05-20) ===================== diff --git a/aiohttp/__init__.py b/aiohttp/__init__.py index 9ca85c654c5..972eabeab7e 100644 --- a/aiohttp/__init__.py +++ b/aiohttp/__init__.py @@ -1,4 +1,4 @@ -__version__ = "3.12.0b0" +__version__ = "3.12.0b1" from typing import TYPE_CHECKING, Tuple From e9808c36da4968ff3ff6596a038599ec48a2e045 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 18:29:51 +0000 Subject: [PATCH 64/90] [PR #10941/6512aaa4 backport][3.12] Use anext in AsyncIterablePayload on Python 3.10+ (#10942) Co-authored-by: J. Nick Koston --- CHANGES/10941.bugfix.rst | 1 + aiohttp/payload.py | 5 ++++- 2 files changed, 5 insertions(+), 1 deletion(-) create mode 120000 CHANGES/10941.bugfix.rst diff --git a/CHANGES/10941.bugfix.rst b/CHANGES/10941.bugfix.rst new file mode 120000 index 00000000000..aa085cc590d --- /dev/null +++ b/CHANGES/10941.bugfix.rst @@ -0,0 +1 @@ +10915.bugfix.rst \ No newline at end of file diff --git a/aiohttp/payload.py b/aiohttp/payload.py index 823940902f5..c954091adad 100644 --- a/aiohttp/payload.py +++ b/aiohttp/payload.py @@ -815,7 +815,10 @@ async def write_with_length( try: while True: - chunk = await self._iter.__anext__() + if sys.version_info >= (3, 10): + chunk = await anext(self._iter) + else: + chunk = await self._iter.__anext__() if remaining_bytes is None: await writer.write(chunk) # If we have a content length limit From 9bd43ed9d283425ede643b2ff575e3d5a229b6ed Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Thu, 22 May 2025 13:58:00 -0500 Subject: [PATCH 65/90] [PR #10943/b1e9462 backport][3.12] Small improvements to payload cleanup fixture (#10944) --- CHANGES/10943.bugfix.rst | 1 + tests/conftest.py | 10 ++++------ 2 files changed, 5 insertions(+), 6 deletions(-) create mode 120000 CHANGES/10943.bugfix.rst diff --git a/CHANGES/10943.bugfix.rst b/CHANGES/10943.bugfix.rst new file mode 120000 index 00000000000..aa085cc590d --- /dev/null +++ b/CHANGES/10943.bugfix.rst @@ -0,0 +1 @@ +10915.bugfix.rst \ No newline at end of file diff --git a/tests/conftest.py b/tests/conftest.py index 696f5d0d035..69469b3c793 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -8,7 +8,7 @@ from hashlib import md5, sha1, sha256 from pathlib import Path from tempfile import TemporaryDirectory -from typing import Any, Generator, Iterator +from typing import Any, AsyncIterator, Generator, Iterator from unittest import mock from uuid import uuid4 @@ -336,15 +336,13 @@ def parametrize_zlib_backend( @pytest.fixture() -def cleanup_payload_pending_file_closes( +async def cleanup_payload_pending_file_closes( loop: asyncio.AbstractEventLoop, -) -> Generator[None, None, None]: +) -> AsyncIterator[None]: """Ensure all pending file close operations complete during test teardown.""" yield if payload._CLOSE_FUTURES: # Only wait for futures from the current loop loop_futures = [f for f in payload._CLOSE_FUTURES if f.get_loop() is loop] if loop_futures: - loop.run_until_complete( - asyncio.gather(*loop_futures, return_exceptions=True) - ) + await asyncio.gather(*loop_futures, return_exceptions=True) From 1f1bc8f7fa9d59454a03fde35a89cea315db5f41 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 19:57:52 +0000 Subject: [PATCH 66/90] [PR #10946/3c88f811 backport][3.12] Ensure AsyncResolver.close() can be called multiple times (#10947) Co-authored-by: J. 
Nick Koston --- CHANGES/10946.feature.rst | 1 + aiohttp/resolver.py | 3 ++- tests/test_resolver.py | 37 +++++++++++++++++++++++++++++++++++++ 3 files changed, 40 insertions(+), 1 deletion(-) create mode 120000 CHANGES/10946.feature.rst diff --git a/CHANGES/10946.feature.rst b/CHANGES/10946.feature.rst new file mode 120000 index 00000000000..879a4227358 --- /dev/null +++ b/CHANGES/10946.feature.rst @@ -0,0 +1 @@ +10847.feature.rst \ No newline at end of file diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py index 05accd19564..1dcfca48153 100644 --- a/aiohttp/resolver.py +++ b/aiohttp/resolver.py @@ -198,7 +198,8 @@ async def close(self) -> None: self._resolver = None # type: ignore[assignment] # Clear reference to resolver return # Otherwise cancel our dedicated resolver - self._resolver.cancel() + if self._resolver is not None: + self._resolver.cancel() self._resolver = None # type: ignore[assignment] # Clear reference diff --git a/tests/test_resolver.py b/tests/test_resolver.py index f6963121eb7..17f1227cc72 100644 --- a/tests/test_resolver.py +++ b/tests/test_resolver.py @@ -674,3 +674,40 @@ async def test_dns_resolver_manager_missing_loop_data() -> None: # Verify no exception was raised assert loop not in manager._loop_data + + +@pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_async_resolver_close_multiple_times() -> None: + """Test that AsyncResolver.close() can be called multiple times without error.""" + with patch("aiodns.DNSResolver") as mock_dns_resolver: + mock_resolver = Mock() + mock_resolver.cancel = Mock() + mock_dns_resolver.return_value = mock_resolver + + # Create a resolver with custom args (dedicated resolver) + resolver = AsyncResolver(nameservers=["8.8.8.8"]) + + # Close it once + await resolver.close() + mock_resolver.cancel.assert_called_once() + + # Close it again - should not raise AttributeError + await resolver.close() + # cancel should still only be called once + mock_resolver.cancel.assert_called_once() + + +@pytest.mark.skipif(not getaddrinfo, reason="aiodns >=3.2.0 required") +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_async_resolver_close_with_none_resolver() -> None: + """Test that AsyncResolver.close() handles None resolver gracefully.""" + with patch("aiodns.DNSResolver"): + # Create a resolver with custom args (dedicated resolver) + resolver = AsyncResolver(nameservers=["8.8.8.8"]) + + # Manually set resolver to None to simulate edge case + resolver._resolver = None # type: ignore[assignment] + + # This should not raise AttributeError + await resolver.close() From 31d363823d1096b29772c9703d742be83373d6ce Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Thu, 22 May 2025 15:13:23 -0500 Subject: [PATCH 67/90] Release 3.12.0b2 (#10948) --- CHANGES.rst | 223 ++++++++++++++++++++++++++++++++++++++++++++ aiohttp/__init__.py | 2 +- 2 files changed, 224 insertions(+), 1 deletion(-) diff --git a/CHANGES.rst b/CHANGES.rst index b455c45f7a9..a4b4886d291 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -10,6 +10,229 @@ .. towncrier release notes start +3.12.0b2 (2025-05-22) +===================== + +Bug fixes +--------- + +- Response is now always True, instead of using MutableMapping behaviour (False when map is empty) + + + *Related issues and pull requests on GitHub:* + :issue:`10119`. 
+ + + +- Fixed connection reuse for file-like data payloads by ensuring buffer + truncation respects content-length boundaries and preventing premature + connection closure race -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10325`, :issue:`10915`, :issue:`10941`, :issue:`10943`. + + + +- Fixed pytest plugin to not use deprecated :py:mod:`asyncio` policy APIs. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + + +Features +-------- + +- Added a comprehensive HTTP Digest Authentication client middleware (DigestAuthMiddleware) + that implements RFC 7616. The middleware supports all standard hash algorithms + (MD5, SHA, SHA-256, SHA-512) with session variants, handles both 'auth' and + 'auth-int' quality of protection options, and automatically manages the + authentication flow by intercepting 401 responses and retrying with proper + credentials -- by :user:`feus4177`, :user:`TimMenninger`, and :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`2213`, :issue:`10725`. + + + +- Added client middleware support -- by :user:`bdraco` and :user:`Dreamsorcerer`. + + This change allows users to add middleware to the client session and requests, enabling features like + authentication, logging, and request/response modification without modifying the core + request logic. Additionally, the ``session`` attribute was added to ``ClientRequest``, + allowing middleware to access the session for making additional requests. + + + *Related issues and pull requests on GitHub:* + :issue:`9732`, :issue:`10902`. + + + +- Allow user setting zlib compression backend -- by :user:`TimMenninger` + + This change allows the user to call :func:`aiohttp.set_zlib_backend()` with the + zlib compression module of their choice. Default behavior continues to use + the builtin ``zlib`` library. + + + *Related issues and pull requests on GitHub:* + :issue:`9798`. + + + +- Added support for overriding the base URL with an absolute one in client sessions + -- by :user:`vivodi`. + + + *Related issues and pull requests on GitHub:* + :issue:`10074`. + + + +- Added ``host`` parameter to ``aiohttp_server`` fixture -- by :user:`christianwbrock`. + + + *Related issues and pull requests on GitHub:* + :issue:`10120`. + + + +- Detect blocking calls in coroutines using BlockBuster -- by :user:`cbornet`. + + + *Related issues and pull requests on GitHub:* + :issue:`10433`. + + + +- Added ``socket_factory`` to :py:class:`aiohttp.TCPConnector` to allow specifying custom socket options + -- by :user:`TimMenninger`. + + + *Related issues and pull requests on GitHub:* + :issue:`10474`, :issue:`10520`. + + + +- Started building armv7l manylinux wheels -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10797`. + + + +- Implemented shared DNS resolver management to fix excessive resolver object creation + when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures + only one ``DNSResolver`` object is created for default configurations, significantly + reducing resource usage and improving performance for applications using multiple + client sessions simultaneously -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10847`, :issue:`10923`, :issue:`10946`. + + + + +Packaging updates and notes for downstreams +------------------------------------------- + +- Removed non SPDX-license description from ``setup.cfg`` -- by :user:`devanshu-ziphq`. 
+ + + *Related issues and pull requests on GitHub:* + :issue:`10662`. + + + +- Added support for building against system ``llhttp`` library -- by :user:`mgorny`. + + This change adds support for :envvar:`AIOHTTP_USE_SYSTEM_DEPS` environment variable that + can be used to build aiohttp against the system install of the ``llhttp`` library rather + than the vendored one. + + + *Related issues and pull requests on GitHub:* + :issue:`10759`. + + + +- ``aiodns`` is now installed on Windows with speedups extra -- by :user:`bdraco`. + + As of ``aiodns`` 3.3.0, ``SelectorEventLoop`` is no longer required when using ``pycares`` 4.7.0 or later. + + + *Related issues and pull requests on GitHub:* + :issue:`10823`. + + + +- Fixed compatibility issue with Cython 3.1.1 -- by :user:`bdraco` + + + *Related issues and pull requests on GitHub:* + :issue:`10877`. + + + + +Contributor-facing changes +-------------------------- + +- Sped up tests by disabling ``blockbuster`` fixture for ``test_static_file_huge`` and ``test_static_file_huge_cancel`` tests -- by :user:`dikos1337`. + + + *Related issues and pull requests on GitHub:* + :issue:`9705`, :issue:`10761`. + + + +- Updated tests to avoid using deprecated :py:mod:`asyncio` policy APIs and + make it compatible with Python 3.14. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + +- Added Winloop to test suite to support in the future -- by :user:`Vizonex`. + + + *Related issues and pull requests on GitHub:* + :issue:`10922`. + + + + +Miscellaneous internal changes +------------------------------ + +- Added support for the ``partitioned`` attribute in the ``set_cookie`` method. + + + *Related issues and pull requests on GitHub:* + :issue:`9870`. + + + +- Setting :attr:`aiohttp.web.StreamResponse.last_modified` to an unsupported type will now raise :exc:`TypeError` instead of silently failing -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10146`. + + + + +---- + + 3.12.0b1 (2025-05-22) ===================== diff --git a/aiohttp/__init__.py b/aiohttp/__init__.py index 972eabeab7e..2ab58f23574 100644 --- a/aiohttp/__init__.py +++ b/aiohttp/__init__.py @@ -1,4 +1,4 @@ -__version__ = "3.12.0b1" +__version__ = "3.12.0b2" from typing import TYPE_CHECKING, Tuple From b00739bce116ab5aa6b7064dcfb51196b710b2a2 Mon Sep 17 00:00:00 2001 From: "J. 
Nick Koston" Date: Thu, 22 May 2025 16:56:01 -0500 Subject: [PATCH 68/90] [PR #10949/06e3b36 backport][3.12] Improve connection reuse test coverage (#10950) --- tests/test_client_functional.py | 125 ++++++++++++++++++++++++++++++++ tests/test_web_functional.py | 4 + 2 files changed, 129 insertions(+) diff --git a/tests/test_client_functional.py b/tests/test_client_functional.py index 5e1a4d2ddb5..ff9a33bda1b 100644 --- a/tests/test_client_functional.py +++ b/tests/test_client_functional.py @@ -4311,3 +4311,128 @@ async def handler(request: web.Request) -> web.Response: response.raise_for_status() assert len(client._session.connector._conns) == 1 + + +async def test_post_content_exception_connection_kept( + aiohttp_client: AiohttpClient, +) -> None: + """Test that connections are kept after content.set_exception() with POST.""" + + async def handler(request: web.Request) -> web.Response: + await request.read() + return web.Response( + body=b"x" * 1000 + ) # Larger response to ensure it's not pre-buffered + + app = web.Application() + app.router.add_post("/", handler) + client = await aiohttp_client(app) + + # POST request with body - connection should be closed after content exception + resp = await client.post("/", data=b"request body") + + with pytest.raises(RuntimeError): + async with resp: + assert resp.status == 200 + resp.content.set_exception(RuntimeError("Simulated error")) + await resp.read() + + assert resp.closed + + # Wait for any pending operations to complete + await resp.wait_for_close() + + assert client._session.connector is not None + # Connection is kept because content.set_exception() is a client-side operation + # that doesn't affect the underlying connection state + assert len(client._session.connector._conns) == 1 + + +async def test_network_error_connection_closed( + aiohttp_client: AiohttpClient, +) -> None: + """Test that connections are closed after network errors.""" + + async def handler(request: web.Request) -> NoReturn: + # Read the request body + await request.read() + + # Start sending response but close connection before completing + response = web.StreamResponse() + response.content_length = 1000 # Promise 1000 bytes + await response.prepare(request) + + # Send partial data then force close the connection + await response.write(b"x" * 100) # Only send 100 bytes + # Force close the transport to simulate network error + assert request.transport is not None + request.transport.close() + assert False, "Will not return" + + app = web.Application() + app.router.add_post("/", handler) + client = await aiohttp_client(app) + + # POST request that will fail due to network error + with pytest.raises(aiohttp.ClientPayloadError): + resp = await client.post("/", data=b"request body") + async with resp: + await resp.read() # This should fail + + # Give event loop a chance to process connection cleanup + await asyncio.sleep(0) + + assert client._session.connector is not None + # Connection should be closed due to network error + assert len(client._session.connector._conns) == 0 + + +async def test_client_side_network_error_connection_closed( + aiohttp_client: AiohttpClient, +) -> None: + """Test that connections are closed after client-side network errors.""" + handler_done = asyncio.Event() + + async def handler(request: web.Request) -> NoReturn: + # Read the request body + await request.read() + + # Start sending a large response + response = web.StreamResponse() + response.content_length = 10000 # Promise 10KB + await response.prepare(request) + + # Send some data + 
await response.write(b"x" * 1000) + + # Keep the response open - we'll interrupt from client side + await asyncio.wait_for(handler_done.wait(), timeout=5.0) + assert False, "Will not return" + + app = web.Application() + app.router.add_post("/", handler) + client = await aiohttp_client(app) + + # POST request that will fail due to client-side network error + with pytest.raises(aiohttp.ClientPayloadError): + resp = await client.post("/", data=b"request body") + async with resp: + # Simulate client-side network error by closing the transport + # This simulates connection reset, network failure, etc. + assert resp.connection is not None + assert resp.connection.protocol is not None + assert resp.connection.protocol.transport is not None + resp.connection.protocol.transport.close() + + # This should fail with connection error + await resp.read() + + # Signal handler to finish + handler_done.set() + + # Give event loop a chance to process connection cleanup + await asyncio.sleep(0) + + assert client._session.connector is not None + # Connection should be closed due to client-side network error + assert len(client._session.connector._conns) == 0 diff --git a/tests/test_web_functional.py b/tests/test_web_functional.py index b6caf23df53..c33b3cec1ff 100644 --- a/tests/test_web_functional.py +++ b/tests/test_web_functional.py @@ -1956,6 +1956,10 @@ async def handler(request): await resp.read() assert resp.closed + # Wait for any pending operations to complete + await resp.wait_for_close() + + assert session._connector is not None assert len(session._connector._conns) == 1 await session.close() From 12ff66d5312bd9df894e506f8802b133d0293b91 Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Thu, 22 May 2025 17:00:11 -0500 Subject: [PATCH 69/90] [3.12] Fix AsyncResolver not using the loop argument (#10951) fixes #10787 --- CHANGES/10951.bugfix.rst | 1 + aiohttp/resolver.py | 2 +- tests/test_resolver.py | 34 ++++++++++++++++++++++++++++++++++ 3 files changed, 36 insertions(+), 1 deletion(-) create mode 100644 CHANGES/10951.bugfix.rst diff --git a/CHANGES/10951.bugfix.rst b/CHANGES/10951.bugfix.rst new file mode 100644 index 00000000000..d539fc1a52d --- /dev/null +++ b/CHANGES/10951.bugfix.rst @@ -0,0 +1 @@ +Fixed :py:class:`~aiohttp.resolver.AsyncResolver` not using the ``loop`` argument in versions 3.x where it should still be supported -- by :user:`bdraco`. 
diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py index 1dcfca48153..118bf8cbff7 100644 --- a/aiohttp/resolver.py +++ b/aiohttp/resolver.py @@ -94,7 +94,7 @@ def __init__( if aiodns is None: raise RuntimeError("Resolver requires aiodns library") - self._loop = asyncio.get_running_loop() + self._loop = loop or asyncio.get_running_loop() self._manager: Optional[_DNSResolverManager] = None # If custom args are provided, create a dedicated resolver instance # This means each AsyncResolver with custom args gets its own diff --git a/tests/test_resolver.py b/tests/test_resolver.py index 17f1227cc72..1866939ba6b 100644 --- a/tests/test_resolver.py +++ b/tests/test_resolver.py @@ -711,3 +711,37 @@ async def test_async_resolver_close_with_none_resolver() -> None: # This should not raise AttributeError await resolver.close() + + +@pytest.mark.skipif(aiodns is None, reason="aiodns required") +def test_async_resolver_uses_provided_loop() -> None: + """Test that AsyncResolver uses the loop parameter when provided.""" + # Create a custom event loop + custom_loop = asyncio.new_event_loop() + + try: + # Need to set the loop as current for get_running_loop() to work + asyncio.set_event_loop(custom_loop) + + # Create resolver with explicit loop parameter + resolver = AsyncResolver(loop=custom_loop) + + # Check that the resolver uses the provided loop + assert resolver._loop is custom_loop + finally: + asyncio.set_event_loop(None) + custom_loop.close() + + +@pytest.mark.skipif(aiodns is None, reason="aiodns required") +@pytest.mark.usefixtures("check_no_lingering_resolvers") +async def test_async_resolver_uses_running_loop_when_none_provided() -> None: + """Test that AsyncResolver uses get_running_loop() when no loop is provided.""" + # Create resolver without loop parameter + resolver = AsyncResolver() + + # Check that the resolver uses the current running loop + assert resolver._loop is asyncio.get_running_loop() + + # Clean up + await resolver.close() From 2eb3f6ca1420a2784a95c06f45eba6f73a0434f1 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 17:42:31 -0500 Subject: [PATCH 70/90] [PR #10952/45b74cfc backport][3.12] Remove manual release call in middleware (#10953) Co-authored-by: J. 
Nick Koston closes #10901 --- CHANGES/10952.feature.rst | 1 + aiohttp/client_middleware_digest_auth.py | 2 -- tests/test_client_middleware.py | 1 - 3 files changed, 1 insertion(+), 3 deletions(-) create mode 120000 CHANGES/10952.feature.rst diff --git a/CHANGES/10952.feature.rst b/CHANGES/10952.feature.rst new file mode 120000 index 00000000000..b565aa68ee0 --- /dev/null +++ b/CHANGES/10952.feature.rst @@ -0,0 +1 @@ +9732.feature.rst \ No newline at end of file diff --git a/aiohttp/client_middleware_digest_auth.py b/aiohttp/client_middleware_digest_auth.py index e9eb3ba82e2..b63efaf0142 100644 --- a/aiohttp/client_middleware_digest_auth.py +++ b/aiohttp/client_middleware_digest_auth.py @@ -408,8 +408,6 @@ async def __call__( # Check if we need to authenticate if not self._authenticate(response): break - elif retry_count < 1: - response.release() # Release the response to enable connection reuse on retry # At this point, response is guaranteed to be defined assert response is not None diff --git a/tests/test_client_middleware.py b/tests/test_client_middleware.py index 5894795dc21..883d853d2e8 100644 --- a/tests/test_client_middleware.py +++ b/tests/test_client_middleware.py @@ -891,7 +891,6 @@ async def __call__( response = await handler(request) if retry_count == 0: retry_count += 1 - response.release() # Release the response to enable connection reuse continue return response From b5a061bb556e2e2ab1d16603c9b0fc492eccad6f Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Thu, 22 May 2025 18:05:58 -0500 Subject: [PATCH 71/90] Release 3.12.0b3 (#10955) --- CHANGES.rst | 231 ++++++++++++++++++++++++++++++++++++++++++++ aiohttp/__init__.py | 2 +- 2 files changed, 232 insertions(+), 1 deletion(-) diff --git a/CHANGES.rst b/CHANGES.rst index a4b4886d291..c0a9b20f200 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -10,6 +10,237 @@ .. towncrier release notes start +3.12.0b3 (2025-05-22) +===================== + +Bug fixes +--------- + +- Response is now always True, instead of using MutableMapping behaviour (False when map is empty) + + + *Related issues and pull requests on GitHub:* + :issue:`10119`. + + + +- Fixed connection reuse for file-like data payloads by ensuring buffer + truncation respects content-length boundaries and preventing premature + connection closure race -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10325`, :issue:`10915`, :issue:`10941`, :issue:`10943`. + + + +- Fixed pytest plugin to not use deprecated :py:mod:`asyncio` policy APIs. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + +- Fixed :py:class:`~aiohttp.resolver.AsyncResolver` not using the ``loop`` argument in versions 3.x where it should still be supported -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10951`. + + + + +Features +-------- + +- Added a comprehensive HTTP Digest Authentication client middleware (DigestAuthMiddleware) + that implements RFC 7616. The middleware supports all standard hash algorithms + (MD5, SHA, SHA-256, SHA-512) with session variants, handles both 'auth' and + 'auth-int' quality of protection options, and automatically manages the + authentication flow by intercepting 401 responses and retrying with proper + credentials -- by :user:`feus4177`, :user:`TimMenninger`, and :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`2213`, :issue:`10725`. + + + +- Added client middleware support -- by :user:`bdraco` and :user:`Dreamsorcerer`. 
+ + This change allows users to add middleware to the client session and requests, enabling features like + authentication, logging, and request/response modification without modifying the core + request logic. Additionally, the ``session`` attribute was added to ``ClientRequest``, + allowing middleware to access the session for making additional requests. + + + *Related issues and pull requests on GitHub:* + :issue:`9732`, :issue:`10902`, :issue:`10952`. + + + +- Allow user setting zlib compression backend -- by :user:`TimMenninger` + + This change allows the user to call :func:`aiohttp.set_zlib_backend()` with the + zlib compression module of their choice. Default behavior continues to use + the builtin ``zlib`` library. + + + *Related issues and pull requests on GitHub:* + :issue:`9798`. + + + +- Added support for overriding the base URL with an absolute one in client sessions + -- by :user:`vivodi`. + + + *Related issues and pull requests on GitHub:* + :issue:`10074`. + + + +- Added ``host`` parameter to ``aiohttp_server`` fixture -- by :user:`christianwbrock`. + + + *Related issues and pull requests on GitHub:* + :issue:`10120`. + + + +- Detect blocking calls in coroutines using BlockBuster -- by :user:`cbornet`. + + + *Related issues and pull requests on GitHub:* + :issue:`10433`. + + + +- Added ``socket_factory`` to :py:class:`aiohttp.TCPConnector` to allow specifying custom socket options + -- by :user:`TimMenninger`. + + + *Related issues and pull requests on GitHub:* + :issue:`10474`, :issue:`10520`. + + + +- Started building armv7l manylinux wheels -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10797`. + + + +- Implemented shared DNS resolver management to fix excessive resolver object creation + when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures + only one ``DNSResolver`` object is created for default configurations, significantly + reducing resource usage and improving performance for applications using multiple + client sessions simultaneously -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10847`, :issue:`10923`, :issue:`10946`. + + + + +Packaging updates and notes for downstreams +------------------------------------------- + +- Removed non SPDX-license description from ``setup.cfg`` -- by :user:`devanshu-ziphq`. + + + *Related issues and pull requests on GitHub:* + :issue:`10662`. + + + +- Added support for building against system ``llhttp`` library -- by :user:`mgorny`. + + This change adds support for :envvar:`AIOHTTP_USE_SYSTEM_DEPS` environment variable that + can be used to build aiohttp against the system install of the ``llhttp`` library rather + than the vendored one. + + + *Related issues and pull requests on GitHub:* + :issue:`10759`. + + + +- ``aiodns`` is now installed on Windows with speedups extra -- by :user:`bdraco`. + + As of ``aiodns`` 3.3.0, ``SelectorEventLoop`` is no longer required when using ``pycares`` 4.7.0 or later. + + + *Related issues and pull requests on GitHub:* + :issue:`10823`. + + + +- Fixed compatibility issue with Cython 3.1.1 -- by :user:`bdraco` + + + *Related issues and pull requests on GitHub:* + :issue:`10877`. + + + + +Contributor-facing changes +-------------------------- + +- Sped up tests by disabling ``blockbuster`` fixture for ``test_static_file_huge`` and ``test_static_file_huge_cancel`` tests -- by :user:`dikos1337`. + + + *Related issues and pull requests on GitHub:* + :issue:`9705`, :issue:`10761`. 
+ + + +- Updated tests to avoid using deprecated :py:mod:`asyncio` policy APIs and + make it compatible with Python 3.14. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + +- Added Winloop to test suite to support in the future -- by :user:`Vizonex`. + + + *Related issues and pull requests on GitHub:* + :issue:`10922`. + + + + +Miscellaneous internal changes +------------------------------ + +- Added support for the ``partitioned`` attribute in the ``set_cookie`` method. + + + *Related issues and pull requests on GitHub:* + :issue:`9870`. + + + +- Setting :attr:`aiohttp.web.StreamResponse.last_modified` to an unsupported type will now raise :exc:`TypeError` instead of silently failing -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10146`. + + + + +---- + + 3.12.0b2 (2025-05-22) ===================== diff --git a/aiohttp/__init__.py b/aiohttp/__init__.py index 2ab58f23574..0ca44564e46 100644 --- a/aiohttp/__init__.py +++ b/aiohttp/__init__.py @@ -1,4 +1,4 @@ -__version__ = "3.12.0b2" +__version__ = "3.12.0b3" from typing import TYPE_CHECKING, Tuple From 6ccd3d5b91a9d0b3003a7c15e19542aa25f46a00 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 23:53:40 +0000 Subject: [PATCH 72/90] [PR #10956/5dcb36a4 backport][3.12] Fix some missing connector closes in tests (#10957) Co-authored-by: J. Nick Koston --- tests/test_client_middleware.py | 2 ++ tests/test_proxy_functional.py | 3 +++ tests/test_web_sendfile_functional.py | 1 + 3 files changed, 6 insertions(+) diff --git a/tests/test_client_middleware.py b/tests/test_client_middleware.py index 883d853d2e8..9d49b750333 100644 --- a/tests/test_client_middleware.py +++ b/tests/test_client_middleware.py @@ -793,6 +793,8 @@ async def blocking_middleware( # Check that no connections were leaked assert len(connector._conns) == 0 + await connector.close() + async def test_client_middleware_blocks_connection_without_dns_lookup( aiohttp_server: AiohttpServer, diff --git a/tests/test_proxy_functional.py b/tests/test_proxy_functional.py index 78521ae6008..5b33ed6ca3b 100644 --- a/tests/test_proxy_functional.py +++ b/tests/test_proxy_functional.py @@ -418,6 +418,7 @@ async def test_proxy_http_acquired_cleanup(proxy_test_server, loop) -> None: assert 0 == len(conn._acquired) await sess.close() + await conn.close() @pytest.mark.skip("we need to reconsider how we test this") @@ -439,6 +440,7 @@ async def request(): assert 0 == len(conn._acquired) await sess.close() + await conn.close() @pytest.mark.skip("we need to reconsider how we test this") @@ -470,6 +472,7 @@ async def request(pid): assert {resp.status for resp in responses} == {200} await sess.close() + await conn.close() @pytest.mark.xfail diff --git a/tests/test_web_sendfile_functional.py b/tests/test_web_sendfile_functional.py index 0c3e9ba68b5..0325a4658e2 100644 --- a/tests/test_web_sendfile_functional.py +++ b/tests/test_web_sendfile_functional.py @@ -614,6 +614,7 @@ async def test_static_file_ssl( await resp.release() await client.close() + await conn.close() async def test_static_file_directory_traversal_attack(aiohttp_client) -> None: From 2c55e880b9131da8ea3f6787904937f5fc3f52ef Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Thu, 22 May 2025 21:34:35 -0500 Subject: [PATCH 73/90] [PR #10959/cc234c6d backport][3.12] Change ClientSession middlewares default to be an empty tuple (#10960) Co-authored-by: J. 
Nick Koston closes #10905 --- CHANGES/10959.feature.rst | 1 + aiohttp/client.py | 2 +- docs/client_reference.rst | 4 ++-- 3 files changed, 4 insertions(+), 3 deletions(-) create mode 120000 CHANGES/10959.feature.rst diff --git a/CHANGES/10959.feature.rst b/CHANGES/10959.feature.rst new file mode 120000 index 00000000000..b565aa68ee0 --- /dev/null +++ b/CHANGES/10959.feature.rst @@ -0,0 +1 @@ +9732.feature.rst \ No newline at end of file diff --git a/aiohttp/client.py b/aiohttp/client.py index bea1c6f61e7..811c8f97588 100644 --- a/aiohttp/client.py +++ b/aiohttp/client.py @@ -302,7 +302,7 @@ def __init__( max_line_size: int = 8190, max_field_size: int = 8190, fallback_charset_resolver: _CharsetResolver = lambda r, b: "utf-8", - middlewares: Optional[Sequence[ClientMiddlewareType]] = None, + middlewares: Sequence[ClientMiddlewareType] = (), ) -> None: # We initialise _connector to None immediately, as it's referenced in __del__() # and could cause issues if an exception occurs during initialisation. diff --git a/docs/client_reference.rst b/docs/client_reference.rst index 97933ada1ed..cd825b403a0 100644 --- a/docs/client_reference.rst +++ b/docs/client_reference.rst @@ -53,7 +53,7 @@ The client session supports the context manager protocol for self closing. trust_env=False, \ requote_redirect_url=True, \ trace_configs=None, \ - middlewares=None, \ + middlewares=(), \ read_bufsize=2**16, \ max_line_size=8190, \ max_field_size=8190, \ @@ -232,7 +232,7 @@ The client session supports the context manager protocol for self closing. :param middlewares: A sequence of middleware instances to apply to all session requests. Each middleware must match the :type:`ClientMiddlewareType` signature. - ``None`` (default) is used when no middleware is needed. + ``()`` (empty tuple, default) is used when no middleware is needed. See :ref:`aiohttp-client-middleware` for more information. .. versionadded:: 3.12 From 15bef6ed99cd99d067eaa65566731fd0c01a3da1 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 23 May 2025 10:59:04 +0000 Subject: [PATCH 74/90] Bump pydantic from 2.11.4 to 2.11.5 (#10963) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [pydantic](https://github.com/pydantic/pydantic) from 2.11.4 to 2.11.5.
Release notes (sourced from pydantic's releases):

v2.11.5 (2025-05-22)

Full Changelog: https://github.com/pydantic/pydantic/compare/v2.11.4...v2.11.5

Commits:
  • 5e6d1dc Prepare release v2.11.5
  • 1b63218 Do not duplicate metadata on model rebuild (#11902)
  • 5aefad8 Do not delete mock validator/serializer in model_rebuild()
  • 8fbe658 Check if FieldInfo is complete after applying type variable map
  • 12b371a Update documentation about @dataclass_transform support
  • 3a6aef4 Fix missing link in documentation
  • 0506b9c Fix light/dark mode documentation toggle
  • 58078c8 Fix typo in documentation
  • See full diff in compare view

Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- requirements/constraints.txt | 4 +++- requirements/dev.txt | 4 +++- requirements/lint.txt | 2 +- requirements/test.txt | 4 +++- 4 files changed, 10 insertions(+), 4 deletions(-) diff --git a/requirements/constraints.txt b/requirements/constraints.txt index e79f7008a7d..9bcdeb5ff8b 100644 --- a/requirements/constraints.txt +++ b/requirements/constraints.txt @@ -136,6 +136,8 @@ packaging==25.0 # sphinx pip-tools==7.4.1 # via -r requirements/dev.in +pkgconfig==1.5.5 + # via -r requirements/test.in platformdirs==4.3.8 # via virtualenv pluggy==1.6.0 @@ -152,7 +154,7 @@ pycares==4.8.0 # via aiodns pycparser==2.22 # via cffi -pydantic==2.11.4 +pydantic==2.11.5 # via python-on-whales pydantic-core==2.33.2 # via pydantic diff --git a/requirements/dev.txt b/requirements/dev.txt index 9b2c3ebeab3..26728928cee 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -133,6 +133,8 @@ packaging==25.0 # sphinx pip-tools==7.4.1 # via -r requirements/dev.in +pkgconfig==1.5.5 + # via -r requirements/test.in platformdirs==4.3.8 # via virtualenv pluggy==1.6.0 @@ -149,7 +151,7 @@ pycares==4.8.0 # via aiodns pycparser==2.22 # via cffi -pydantic==2.11.4 +pydantic==2.11.5 # via python-on-whales pydantic-core==2.33.2 # via pydantic diff --git a/requirements/lint.txt b/requirements/lint.txt index 99fcd3969e3..57729254937 100644 --- a/requirements/lint.txt +++ b/requirements/lint.txt @@ -63,7 +63,7 @@ pycares==4.8.0 # via aiodns pycparser==2.22 # via cffi -pydantic==2.11.4 +pydantic==2.11.5 # via python-on-whales pydantic-core==2.33.2 # via pydantic diff --git a/requirements/test.txt b/requirements/test.txt index 63cb482c5e0..007852dbcaa 100644 --- a/requirements/test.txt +++ b/requirements/test.txt @@ -71,6 +71,8 @@ packaging==25.0 # via # gunicorn # pytest +pkgconfig==1.5.5 + # via -r requirements/test.in pluggy==1.6.0 # via pytest propcache==0.3.1 @@ -83,7 +85,7 @@ pycares==4.8.0 # via aiodns pycparser==2.22 # via cffi -pydantic==2.11.4 +pydantic==2.11.5 # via python-on-whales pydantic-core==2.33.2 # via pydantic From 11c7c433df6453cd1d06467d0e325a8256308249 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Fri, 23 May 2025 14:00:51 +0000 Subject: [PATCH 75/90] [PR #10962/84decfe5 backport][3.12] add example of setting network interface in custom socket creation (#10966) Co-authored-by: Cycloctane Co-authored-by: J. 
Nick Koston closes #7132 --- CHANGES/10962.feature.rst | 1 + docs/client_advanced.rst | 13 +++++++++++++ 2 files changed, 14 insertions(+) create mode 120000 CHANGES/10962.feature.rst diff --git a/CHANGES/10962.feature.rst b/CHANGES/10962.feature.rst new file mode 120000 index 00000000000..7c4f9a7b83b --- /dev/null +++ b/CHANGES/10962.feature.rst @@ -0,0 +1 @@ +10520.feature.rst \ No newline at end of file diff --git a/docs/client_advanced.rst b/docs/client_advanced.rst index d598a40c6ab..033b5f5705d 100644 --- a/docs/client_advanced.rst +++ b/docs/client_advanced.rst @@ -714,6 +714,19 @@ make all sockets respect 9*7200 = 18 hours:: return sock conn = aiohttp.TCPConnector(socket_factory=socket_factory) +``socket_factory`` may also be used for binding to the specific network +interface on supported platforms:: + + def socket_factory(addr_info): + family, type_, proto, _, _ = addr_info + sock = socket.socket(family=family, type=type_, proto=proto) + sock.setsockopt( + socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b'eth0' + ) + return sock + + conn = aiohttp.TCPConnector(socket_factory=socket_factory) + Named pipes in Windows ^^^^^^^^^^^^^^^^^^^^^^ From 82497a690746f6c81455f1dc879d33545b07ad99 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Fri, 23 May 2025 14:02:04 +0000 Subject: [PATCH 76/90] [PR #10961/5e68276c backport][3.12] fix example in socket_factory docs (#10967) Co-authored-by: Cycloctane Co-authored-by: J. Nick Koston --- CHANGES/10961.feature.rst | 1 + docs/client_advanced.rst | 3 ++- 2 files changed, 3 insertions(+), 1 deletion(-) create mode 120000 CHANGES/10961.feature.rst diff --git a/CHANGES/10961.feature.rst b/CHANGES/10961.feature.rst new file mode 120000 index 00000000000..7c4f9a7b83b --- /dev/null +++ b/CHANGES/10961.feature.rst @@ -0,0 +1 @@ +10520.feature.rst \ No newline at end of file diff --git a/docs/client_advanced.rst b/docs/client_advanced.rst index 033b5f5705d..c5b542e82fd 100644 --- a/docs/client_advanced.rst +++ b/docs/client_advanced.rst @@ -706,12 +706,13 @@ make all sockets respect 9*7200 = 18 hours:: import socket def socket_factory(addr_info): - family, type_, proto, _, _, _ = addr_info + family, type_, proto, _, _ = addr_info sock = socket.socket(family=family, type=type_, proto=proto) sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, True) sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 7200) sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 9) return sock + conn = aiohttp.TCPConnector(socket_factory=socket_factory) ``socket_factory`` may also be used for binding to the specific network From 6f4f83f04bd98e8b736b767fdd98323fdd578185 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Fri, 23 May 2025 17:39:13 +0200 Subject: [PATCH 77/90] [PR #10945/18785096 backport][3.12] Add Client Middleware Cookbook (#10969) Co-authored-by: J. 
Nick Koston --- CHANGES/10945.feature.rst | 1 + docs/client.rst | 1 + docs/client_advanced.rst | 2 + docs/client_middleware_cookbook.rst | 358 +++++++++++++++++++++++++++ docs/spelling_wordlist.txt | 1 + examples/basic_auth_middleware.py | 190 ++++++++++++++ examples/combined_middleware.py | 320 ++++++++++++++++++++++++ examples/logging_middleware.py | 169 +++++++++++++ examples/retry_middleware.py | 245 ++++++++++++++++++ examples/token_refresh_middleware.py | 336 +++++++++++++++++++++++++ 10 files changed, 1623 insertions(+) create mode 120000 CHANGES/10945.feature.rst create mode 100644 docs/client_middleware_cookbook.rst create mode 100644 examples/basic_auth_middleware.py create mode 100644 examples/combined_middleware.py create mode 100644 examples/logging_middleware.py create mode 100644 examples/retry_middleware.py create mode 100644 examples/token_refresh_middleware.py diff --git a/CHANGES/10945.feature.rst b/CHANGES/10945.feature.rst new file mode 120000 index 00000000000..b565aa68ee0 --- /dev/null +++ b/CHANGES/10945.feature.rst @@ -0,0 +1 @@ +9732.feature.rst \ No newline at end of file diff --git a/docs/client.rst b/docs/client.rst index 78fbeae4ded..9109c3772da 100644 --- a/docs/client.rst +++ b/docs/client.rst @@ -14,6 +14,7 @@ The page contains all information about aiohttp Client API: Quickstart Advanced Usage + Client Middleware Cookbook Reference Tracing Reference The aiohttp Request Lifecycle diff --git a/docs/client_advanced.rst b/docs/client_advanced.rst index c5b542e82fd..5a94e68ec1f 100644 --- a/docs/client_advanced.rst +++ b/docs/client_advanced.rst @@ -126,6 +126,8 @@ Client Middleware The client supports middleware to intercept requests and responses. This can be useful for authentication, logging, request/response modification, and retries. +For practical examples and common middleware patterns, see the :ref:`aiohttp-client-middleware-cookbook`. + Creating Middleware ^^^^^^^^^^^^^^^^^^^ diff --git a/docs/client_middleware_cookbook.rst b/docs/client_middleware_cookbook.rst new file mode 100644 index 00000000000..4b8d6ddd5f8 --- /dev/null +++ b/docs/client_middleware_cookbook.rst @@ -0,0 +1,358 @@ +.. currentmodule:: aiohttp + +.. _aiohttp-client-middleware-cookbook: + +Client Middleware Cookbook +========================== + +This cookbook provides practical examples of implementing client middleware for common use cases. + +.. note:: + + All examples in this cookbook are also available as complete, runnable scripts in the + ``examples/`` directory of the aiohttp repository. Look for files named ``*_middleware.py``. + +.. _cookbook-basic-auth-middleware: + +Basic Authentication Middleware +------------------------------- + +Basic authentication is a simple authentication scheme built into the HTTP protocol. +Here's a middleware that automatically adds Basic Auth headers to all requests: + +.. 
code-block:: python + + import base64 + from aiohttp import ClientRequest, ClientResponse, ClientHandlerType, hdrs + + class BasicAuthMiddleware: + """Middleware that adds Basic Authentication to all requests.""" + + def __init__(self, username: str, password: str) -> None: + self.username = username + self.password = password + self._auth_header = self._encode_credentials() + + def _encode_credentials(self) -> str: + """Encode username and password to base64.""" + credentials = f"{self.username}:{self.password}" + encoded = base64.b64encode(credentials.encode()).decode() + return f"Basic {encoded}" + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + """Add Basic Auth header to the request.""" + # Only add auth if not already present + if hdrs.AUTHORIZATION not in request.headers: + request.headers[hdrs.AUTHORIZATION] = self._auth_header + + # Proceed with the request + return await handler(request) + +Usage example: + +.. code-block:: python + + import aiohttp + import asyncio + import logging + + _LOGGER = logging.getLogger(__name__) + + async def main(): + # Create middleware instance + auth_middleware = BasicAuthMiddleware("user", "pass") + + # Use middleware in session + async with aiohttp.ClientSession(middlewares=(auth_middleware,)) as session: + async with session.get("https://httpbin.org/basic-auth/user/pass") as resp: + _LOGGER.debug("Status: %s", resp.status) + data = await resp.json() + _LOGGER.debug("Response: %s", data) + + asyncio.run(main()) + +.. _cookbook-retry-middleware: + +Simple Retry Middleware +----------------------- + +A retry middleware that automatically retries failed requests with exponential backoff: + +.. code-block:: python + + import asyncio + import logging + from http import HTTPStatus + from typing import Union, Set + from aiohttp import ClientRequest, ClientResponse, ClientHandlerType + + _LOGGER = logging.getLogger(__name__) + + DEFAULT_RETRY_STATUSES = { + HTTPStatus.TOO_MANY_REQUESTS, + HTTPStatus.INTERNAL_SERVER_ERROR, + HTTPStatus.BAD_GATEWAY, + HTTPStatus.SERVICE_UNAVAILABLE, + HTTPStatus.GATEWAY_TIMEOUT + } + + class RetryMiddleware: + """Middleware that retries failed requests with exponential backoff.""" + + def __init__( + self, + max_retries: int = 3, + retry_statuses: Union[Set[int], None] = None, + initial_delay: float = 1.0, + backoff_factor: float = 2.0 + ) -> None: + self.max_retries = max_retries + self.retry_statuses = retry_statuses or DEFAULT_RETRY_STATUSES + self.initial_delay = initial_delay + self.backoff_factor = backoff_factor + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + """Execute request with retry logic.""" + last_response = None + delay = self.initial_delay + + for attempt in range(self.max_retries + 1): + if attempt > 0: + _LOGGER.info( + "Retrying request to %s (attempt %s/%s)", + request.url, + attempt + 1, + self.max_retries + 1 + ) + + # Execute the request + response = await handler(request) + last_response = response + + # Check if we should retry + if response.status not in self.retry_statuses: + return response + + # Don't retry if we've exhausted attempts + if attempt >= self.max_retries: + _LOGGER.warning( + "Max retries (%s) exceeded for %s", + self.max_retries, + request.url + ) + return response + + # Wait before retrying + _LOGGER.debug("Waiting %ss before retry...", delay) + await asyncio.sleep(delay) + delay *= self.backoff_factor + + # Return the last response + return 
last_response + +Usage example: + +.. code-block:: python + + import aiohttp + import asyncio + import logging + from http import HTTPStatus + + _LOGGER = logging.getLogger(__name__) + + RETRY_STATUSES = { + HTTPStatus.TOO_MANY_REQUESTS, + HTTPStatus.INTERNAL_SERVER_ERROR, + HTTPStatus.BAD_GATEWAY, + HTTPStatus.SERVICE_UNAVAILABLE, + HTTPStatus.GATEWAY_TIMEOUT + } + + async def main(): + # Create retry middleware with custom settings + retry_middleware = RetryMiddleware( + max_retries=3, + retry_statuses=RETRY_STATUSES, + initial_delay=0.5, + backoff_factor=2.0 + ) + + async with aiohttp.ClientSession(middlewares=(retry_middleware,)) as session: + # This will automatically retry on server errors + async with session.get("https://httpbin.org/status/500") as resp: + _LOGGER.debug("Final status: %s", resp.status) + + asyncio.run(main()) + +.. _cookbook-combining-middleware: + +Combining Multiple Middleware +----------------------------- + +You can combine multiple middleware to create powerful request pipelines: + +.. code-block:: python + + import aiohttp + import time + import logging + from aiohttp import ClientRequest, ClientResponse, ClientHandlerType + + _LOGGER = logging.getLogger(__name__) + + class LoggingMiddleware: + """Middleware that logs request timing and response status.""" + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + start_time = time.monotonic() + + # Log request + _LOGGER.debug("[REQUEST] %s %s", request.method, request.url) + + # Execute request + response = await handler(request) + + # Log response + duration = time.monotonic() - start_time + _LOGGER.debug("[RESPONSE] %s in %.2fs", response.status, duration) + + return response + + # Combine multiple middleware + async def main(): + # Middleware are applied in order: logging -> auth -> retry -> request + logging_middleware = LoggingMiddleware() + auth_middleware = BasicAuthMiddleware("user", "pass") + retry_middleware = RetryMiddleware(max_retries=2) + + async with aiohttp.ClientSession( + middlewares=(logging_middleware, auth_middleware, retry_middleware) + ) as session: + async with session.get("https://httpbin.org/basic-auth/user/pass") as resp: + text = await resp.text() + _LOGGER.debug("Response text: %s", text) + +.. _cookbook-token-refresh-middleware: + +Token Refresh Middleware +------------------------ + +A more advanced example showing JWT token refresh: + +..
code-block:: python + + import asyncio + import time + from http import HTTPStatus + from typing import Union + from aiohttp import ClientRequest, ClientResponse, ClientHandlerType, hdrs + + class TokenRefreshMiddleware: + """Middleware that handles JWT token refresh automatically.""" + + def __init__(self, token_endpoint: str, refresh_token: str) -> None: + self.token_endpoint = token_endpoint + self.refresh_token = refresh_token + self.access_token: Union[str, None] = None + self.token_expires_at: Union[float, None] = None + self._refresh_lock = asyncio.Lock() + + async def _refresh_access_token(self, session) -> str: + """Refresh the access token using the refresh token.""" + async with self._refresh_lock: + # Check if another coroutine already refreshed the token + if self.token_expires_at and time.time() < self.token_expires_at: + return self.access_token + + # Make refresh request without middleware to avoid recursion + async with session.post( + self.token_endpoint, + json={"refresh_token": self.refresh_token}, + middlewares=() # Disable middleware for this request + ) as resp: + resp.raise_for_status() + data = await resp.json() + + if "access_token" not in data: + raise ValueError("No access_token in refresh response") + + self.access_token = data["access_token"] + # Token expires in 1 hour for demo, refresh 5 min early + expires_in = data.get("expires_in", 3600) + self.token_expires_at = time.time() + expires_in - 300 + return self.access_token + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType + ) -> ClientResponse: + """Add auth token to request, refreshing if needed.""" + # Skip token for refresh endpoint + if str(request.url).endswith('/token/refresh'): + return await handler(request) + + # Refresh token if needed + if not self.access_token or ( + self.token_expires_at and time.time() >= self.token_expires_at + ): + await self._refresh_access_token(request.session) + + # Add token to request + request.headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}" + + # Execute request + response = await handler(request) + + # If we get 401, try refreshing token once + if response.status == HTTPStatus.UNAUTHORIZED: + await self._refresh_access_token(request.session) + request.headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}" + response = await handler(request) + + return response + +Best Practices +-------------- + +1. **Keep middleware focused**: Each middleware should have a single responsibility. + +2. **Order matters**: Middleware execute in the order they're listed. Place logging first, + authentication before retry, etc. + +3. **Avoid infinite recursion**: When making HTTP requests inside middleware, either: + + - Use ``middlewares=()`` to disable middleware for internal requests + - Check the request URL/host to skip middleware for specific endpoints + - Use a separate session for internal requests + +4. **Handle errors gracefully**: Don't let middleware errors break the request flow unless + absolutely necessary. + +5. **Use bounded loops**: Always use ``for`` loops with a maximum iteration count instead + of unbounded ``while`` loops to prevent infinite retries. + +6. **Consider performance**: Each middleware adds overhead. For simple cases like adding + static headers, consider using session or request parameters instead. + +7. **Test thoroughly**: Middleware can affect all requests in subtle ways. Test edge cases + like network errors, timeouts, and concurrent requests. 
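As a small sketch of points 3 and 6 above, the following middleware skips its work
entirely for requests to one internal host and otherwise only sets a static header
(``internal.example.com`` and the token value are placeholders, not endpoints used
elsewhere in this cookbook):

.. code-block:: python

    from aiohttp import ClientRequest, ClientResponse, ClientHandlerType, hdrs

    class ScopedHeaderMiddleware:
        """Add a static auth header, except for requests to an internal host."""

        def __init__(self, token: str, internal_host: str) -> None:
            self._auth_value = f"Bearer {token}"
            self._internal_host = internal_host

        async def __call__(
            self,
            request: ClientRequest,
            handler: ClientHandlerType
        ) -> ClientResponse:
            # Best practice 3: skip middleware logic for specific endpoints
            if request.url.host == self._internal_host:
                return await handler(request)

            # Best practice 6: a static header is cheap here, but could also
            # be passed as session-level ``headers`` instead of middleware
            request.headers[hdrs.AUTHORIZATION] = self._auth_value
            return await handler(request)

    middleware = ScopedHeaderMiddleware("example-token", "internal.example.com")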
+ +See Also +-------- + +- :ref:`aiohttp-client-middleware` - Core middleware documentation +- :ref:`aiohttp-client-advanced` - Advanced client usage +- :class:`DigestAuthMiddleware` - Built-in digest authentication middleware diff --git a/docs/spelling_wordlist.txt b/docs/spelling_wordlist.txt index d0328529cfd..c22e584cadf 100644 --- a/docs/spelling_wordlist.txt +++ b/docs/spelling_wordlist.txt @@ -28,6 +28,7 @@ autoformatters autogenerates autogeneration awaitable +backoff backend backends backport diff --git a/examples/basic_auth_middleware.py b/examples/basic_auth_middleware.py new file mode 100644 index 00000000000..4c30f477505 --- /dev/null +++ b/examples/basic_auth_middleware.py @@ -0,0 +1,190 @@ +#!/usr/bin/env python3 +""" +Example of using basic authentication middleware with aiohttp client. + +This example shows how to implement a middleware that automatically adds +Basic Authentication headers to all requests. The middleware encodes the +username and password in base64 format as required by the HTTP Basic Auth +specification. + +This example includes a test server that validates basic auth credentials. +""" + +import asyncio +import base64 +import binascii +import logging + +from aiohttp import ( + ClientHandlerType, + ClientRequest, + ClientResponse, + ClientSession, + hdrs, + web, +) + +logging.basicConfig(level=logging.DEBUG) +_LOGGER = logging.getLogger(__name__) + + +class BasicAuthMiddleware: + """Middleware that adds Basic Authentication to all requests.""" + + def __init__(self, username: str, password: str) -> None: + self.username = username + self.password = password + self._auth_header = self._encode_credentials() + + def _encode_credentials(self) -> str: + """Encode username and password to base64.""" + credentials = f"{self.username}:{self.password}" + encoded = base64.b64encode(credentials.encode()).decode() + return f"Basic {encoded}" + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType, + ) -> ClientResponse: + """Add Basic Auth header to the request.""" + # Only add auth if not already present + if hdrs.AUTHORIZATION not in request.headers: + request.headers[hdrs.AUTHORIZATION] = self._auth_header + + # Proceed with the request + return await handler(request) + + +class TestServer: + """Test server for basic auth endpoints.""" + + async def handle_basic_auth(self, request: web.Request) -> web.Response: + """Handle basic auth validation.""" + # Get expected credentials from path + expected_user = request.match_info["user"] + expected_pass = request.match_info["pass"] + + # Check if Authorization header is present + auth_header = request.headers.get(hdrs.AUTHORIZATION, "") + + if not auth_header.startswith("Basic "): + return web.Response( + status=401, + text="Unauthorized", + headers={hdrs.WWW_AUTHENTICATE: 'Basic realm="test"'}, + ) + + # Decode the credentials + encoded_creds = auth_header[6:] # Remove "Basic " + try: + decoded = base64.b64decode(encoded_creds).decode() + username, password = decoded.split(":", 1) + except (ValueError, binascii.Error): + return web.Response( + status=401, + text="Invalid credentials format", + headers={hdrs.WWW_AUTHENTICATE: 'Basic realm="test"'}, + ) + + # Validate credentials + if username != expected_user or password != expected_pass: + return web.Response( + status=401, + text="Invalid username or password", + headers={hdrs.WWW_AUTHENTICATE: 'Basic realm="test"'}, + ) + + return web.json_response({"authenticated": True, "user": username}) + + async def handle_protected_resource(self, 
request: web.Request) -> web.Response: + """A protected resource that requires any valid auth.""" + auth_header = request.headers.get(hdrs.AUTHORIZATION, "") + + if not auth_header.startswith("Basic "): + return web.Response( + status=401, + text="Authentication required", + headers={hdrs.WWW_AUTHENTICATE: 'Basic realm="protected"'}, + ) + + return web.json_response( + { + "message": "Access granted to protected resource", + "auth_provided": True, + } + ) + + +async def run_test_server() -> web.AppRunner: + """Run a simple test server with basic auth endpoints.""" + app = web.Application() + server = TestServer() + + app.router.add_get("/basic-auth/{user}/{pass}", server.handle_basic_auth) + app.router.add_get("/protected", server.handle_protected_resource) + + runner = web.AppRunner(app) + await runner.setup() + site = web.TCPSite(runner, "localhost", 8080) + await site.start() + return runner + + +async def run_tests() -> None: + """Run all basic auth middleware tests.""" + # Create middleware instance + auth_middleware = BasicAuthMiddleware("user", "pass") + + # Use middleware in session + async with ClientSession(middlewares=(auth_middleware,)) as session: + # Test 1: Correct credentials endpoint + print("=== Test 1: Correct credentials ===") + async with session.get("http://localhost:8080/basic-auth/user/pass") as resp: + _LOGGER.info("Status: %s", resp.status) + + if resp.status == 200: + data = await resp.json() + _LOGGER.info("Response: %s", data) + print("Authentication successful!") + print(f"Authenticated: {data.get('authenticated')}") + print(f"User: {data.get('user')}") + else: + print("Authentication failed!") + print(f"Status: {resp.status}") + text = await resp.text() + print(f"Response: {text}") + + # Test 2: Wrong credentials endpoint + print("\n=== Test 2: Wrong credentials endpoint ===") + async with session.get("http://localhost:8080/basic-auth/other/secret") as resp: + if resp.status == 401: + print("Authentication failed as expected (wrong credentials)") + text = await resp.text() + print(f"Response: {text}") + else: + print(f"Unexpected status: {resp.status}") + + # Test 3: Protected resource + print("\n=== Test 3: Access protected resource ===") + async with session.get("http://localhost:8080/protected") as resp: + if resp.status == 200: + data = await resp.json() + print("Successfully accessed protected resource!") + print(f"Response: {data}") + else: + print(f"Failed to access protected resource: {resp.status}") + + +async def main() -> None: + # Start test server + server = await run_test_server() + + try: + await run_tests() + finally: + await server.cleanup() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/examples/combined_middleware.py b/examples/combined_middleware.py new file mode 100644 index 00000000000..8646a182b98 --- /dev/null +++ b/examples/combined_middleware.py @@ -0,0 +1,320 @@ +#!/usr/bin/env python3 +""" +Example of combining multiple middleware with aiohttp client. + +This example shows how to chain multiple middleware together to create +a powerful request pipeline. Middleware are applied in order, demonstrating +how logging, authentication, and retry logic can work together. + +The order of middleware matters: +1. Logging (outermost) - logs all attempts including retries +2. Authentication - adds auth headers before retry logic +3. 
Retry (innermost) - retries requests on failure +""" + +import asyncio +import base64 +import binascii +import logging +import time +from http import HTTPStatus +from typing import TYPE_CHECKING, Set, Union + +from aiohttp import ( + ClientHandlerType, + ClientRequest, + ClientResponse, + ClientSession, + hdrs, + web, +) + +logging.basicConfig( + level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s" +) +_LOGGER = logging.getLogger(__name__) + + +class LoggingMiddleware: + """Middleware that logs request timing and response status.""" + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType, + ) -> ClientResponse: + start_time = time.monotonic() + + # Log request + _LOGGER.info("[REQUEST] %s %s", request.method, request.url) + + # Execute request + response = await handler(request) + + # Log response + duration = time.monotonic() - start_time + _LOGGER.info( + "[RESPONSE] %s in %.2fs - Status: %s", + request.url.path, + duration, + response.status, + ) + + return response + + +class BasicAuthMiddleware: + """Middleware that adds Basic Authentication to all requests.""" + + def __init__(self, username: str, password: str) -> None: + self.username = username + self.password = password + self._auth_header = self._encode_credentials() + + def _encode_credentials(self) -> str: + """Encode username and password to base64.""" + credentials = f"{self.username}:{self.password}" + encoded = base64.b64encode(credentials.encode()).decode() + return f"Basic {encoded}" + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType, + ) -> ClientResponse: + """Add Basic Auth header to the request.""" + # Only add auth if not already present + if hdrs.AUTHORIZATION not in request.headers: + request.headers[hdrs.AUTHORIZATION] = self._auth_header + _LOGGER.debug("Added Basic Auth header") + + # Proceed with the request + return await handler(request) + + +DEFAULT_RETRY_STATUSES: Set[HTTPStatus] = { + HTTPStatus.TOO_MANY_REQUESTS, + HTTPStatus.INTERNAL_SERVER_ERROR, + HTTPStatus.BAD_GATEWAY, + HTTPStatus.SERVICE_UNAVAILABLE, + HTTPStatus.GATEWAY_TIMEOUT, +} + + +class RetryMiddleware: + """Middleware that retries failed requests with exponential backoff.""" + + def __init__( + self, + max_retries: int = 3, + retry_statuses: Union[Set[HTTPStatus], None] = None, + initial_delay: float = 1.0, + backoff_factor: float = 2.0, + ) -> None: + self.max_retries = max_retries + self.retry_statuses = retry_statuses or DEFAULT_RETRY_STATUSES + self.initial_delay = initial_delay + self.backoff_factor = backoff_factor + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType, + ) -> ClientResponse: + """Execute request with retry logic.""" + last_response: Union[ClientResponse, None] = None + delay = self.initial_delay + + for attempt in range(self.max_retries + 1): + if attempt > 0: + _LOGGER.info( + "Retrying request (attempt %s/%s)", + attempt + 1, + self.max_retries + 1, + ) + + # Execute the request + response = await handler(request) + last_response = response + + # Check if we should retry + if response.status not in self.retry_statuses: + return response + + # Don't retry if we've exhausted attempts + if attempt >= self.max_retries: + _LOGGER.warning("Max retries exceeded") + return response + + # Wait before retrying + _LOGGER.debug("Waiting %ss before retry...", delay) + await asyncio.sleep(delay) + delay *= self.backoff_factor + + if TYPE_CHECKING: + assert last_response is not None # Always set since 
we loop at least once + return last_response + + +class TestServer: + """Test server with stateful endpoints for middleware testing.""" + + def __init__(self) -> None: + self.flaky_counter = 0 + self.protected_counter = 0 + + async def handle_protected(self, request: web.Request) -> web.Response: + """Protected endpoint that requires authentication and is flaky on first attempt.""" + auth_header = request.headers.get(hdrs.AUTHORIZATION, "") + + if not auth_header.startswith("Basic "): + return web.Response( + status=401, + text="Unauthorized", + headers={hdrs.WWW_AUTHENTICATE: 'Basic realm="test"'}, + ) + + # Decode the credentials + encoded_creds = auth_header[6:] # Remove "Basic " + try: + decoded = base64.b64decode(encoded_creds).decode() + username, password = decoded.split(":", 1) + except (ValueError, binascii.Error): + return web.Response( + status=401, + text="Invalid credentials format", + headers={hdrs.WWW_AUTHENTICATE: 'Basic realm="test"'}, + ) + + # Validate credentials + if username != "user" or password != "pass": + return web.Response(status=401, text="Invalid credentials") + + # Fail with 500 on first attempt to test retry + auth combination + self.protected_counter += 1 + if self.protected_counter == 1: + return web.Response( + status=500, text="Internal server error (first attempt)" + ) + + return web.json_response( + { + "message": "Access granted", + "user": username, + "resource": "protected data", + } + ) + + async def handle_flaky(self, request: web.Request) -> web.Response: + """Endpoint that fails a few times before succeeding.""" + self.flaky_counter += 1 + + # Fail the first 2 requests, succeed on the 3rd + if self.flaky_counter <= 2: + return web.Response( + status=503, + text=f"Service temporarily unavailable (attempt {self.flaky_counter})", + ) + + # Reset counter and return success + self.flaky_counter = 0 + return web.json_response( + { + "message": "Success after retries!", + "data": "Important information retrieved", + } + ) + + async def handle_always_fail(self, request: web.Request) -> web.Response: + """Endpoint that always returns an error.""" + return web.Response(status=500, text="Internal server error") + + async def handle_status(self, request: web.Request) -> web.Response: + """Return the status code specified in the path.""" + status = int(request.match_info["status"]) + return web.Response(status=status, text=f"Status: {status}") + + +async def run_test_server() -> web.AppRunner: + """Run a test server with various endpoints.""" + app = web.Application() + server = TestServer() + + app.router.add_get("/protected", server.handle_protected) + app.router.add_get("/flaky", server.handle_flaky) + app.router.add_get("/always-fail", server.handle_always_fail) + app.router.add_get("/status/{status}", server.handle_status) + + runner = web.AppRunner(app) + await runner.setup() + site = web.TCPSite(runner, "localhost", 8080) + await site.start() + return runner + + +async def run_tests() -> None: + """Run all the middleware tests.""" + # Create middleware instances + logging_middleware = LoggingMiddleware() + auth_middleware = BasicAuthMiddleware("user", "pass") + retry_middleware = RetryMiddleware(max_retries=2, initial_delay=0.5) + + # Combine middleware - order matters! 
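    # Note: the first middleware in the tuple is the outermost wrapper, so
    # logging wraps auth, which wraps retry. Logging therefore times the whole
    # exchange including all retries, and the auth header set on the request
    # object is already present on every retry attempt.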
+ # Applied in order: logging -> auth -> retry -> request + async with ClientSession( + middlewares=(logging_middleware, auth_middleware, retry_middleware) + ) as session: + + print( + "=== Test 1: Protected endpoint with auth (fails once, then succeeds) ===" + ) + print("This tests retry + auth working together...") + async with session.get("http://localhost:8080/protected") as resp: + if resp.status == 200: + data = await resp.json() + print(f"Success after retry! Response: {data}") + else: + print(f"Failed with status: {resp.status}") + + print("\n=== Test 2: Flaky endpoint (fails twice, then succeeds) ===") + print("Watch the logs to see retries in action...") + async with session.get("http://localhost:8080/flaky") as resp: + if resp.status == 200: + data = await resp.json() + print(f"Success after retries! Response: {data}") + else: + text = await resp.text() + print(f"Failed with status {resp.status}: {text}") + + print("\n=== Test 3: Always failing endpoint ===") + async with session.get("http://localhost:8080/always-fail") as resp: + print(f"Final status after retries: {resp.status}") + + print("\n=== Test 4: Non-retryable status (404) ===") + async with session.get("http://localhost:8080/status/404") as resp: + print(f"Status: {resp.status} (no retries for 404)") + + # Test without middleware for comparison + print("\n=== Test 5: Request without middleware ===") + print("Making a request to protected endpoint without middleware...") + async with session.get( + "http://localhost:8080/protected", middlewares=() + ) as resp: + print(f"Status without middleware: {resp.status}") + if resp.status == 401: + print("Failed as expected - no auth header added") + + +async def main() -> None: + # Start test server + server = await run_test_server() + + try: + await run_tests() + + finally: + await server.cleanup() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/examples/logging_middleware.py b/examples/logging_middleware.py new file mode 100644 index 00000000000..b6345953db2 --- /dev/null +++ b/examples/logging_middleware.py @@ -0,0 +1,169 @@ +#!/usr/bin/env python3 +""" +Example of using logging middleware with aiohttp client. + +This example shows how to implement a middleware that logs request timing +and response status. This is useful for debugging, monitoring, and +understanding the flow of HTTP requests in your application. + +This example includes a test server with various endpoints. 
+""" + +import asyncio +import json +import logging +import time +from typing import Any, Coroutine, List + +from aiohttp import ClientHandlerType, ClientRequest, ClientResponse, ClientSession, web + +logging.basicConfig( + level=logging.DEBUG, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s" +) +_LOGGER = logging.getLogger(__name__) + + +class LoggingMiddleware: + """Middleware that logs request timing and response status.""" + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType, + ) -> ClientResponse: + start_time = time.monotonic() + + # Log request + _LOGGER.info("[REQUEST] %s %s", request.method, request.url) + if request.headers: + _LOGGER.debug("[REQUEST HEADERS] %s", request.headers) + + # Execute request + response = await handler(request) + + # Log response + duration = time.monotonic() - start_time + _LOGGER.info( + "[RESPONSE] %s %s - Status: %s - Duration: %.3fs", + request.method, + request.url, + response.status, + duration, + ) + _LOGGER.debug("[RESPONSE HEADERS] %s", response.headers) + + return response + + +class TestServer: + """Test server for logging middleware demo.""" + + async def handle_hello(self, request: web.Request) -> web.Response: + """Simple hello endpoint.""" + name = request.match_info.get("name", "World") + return web.json_response({"message": f"Hello, {name}!"}) + + async def handle_slow(self, request: web.Request) -> web.Response: + """Endpoint that simulates slow response.""" + delay = float(request.match_info.get("delay", 1)) + await asyncio.sleep(delay) + return web.json_response({"message": "Slow response completed", "delay": delay}) + + async def handle_error(self, request: web.Request) -> web.Response: + """Endpoint that returns an error.""" + status = int(request.match_info.get("status", 500)) + return web.Response(status=status, text=f"Error response with status {status}") + + async def handle_json_data(self, request: web.Request) -> web.Response: + """Endpoint that echoes JSON data.""" + try: + data = await request.json() + return web.json_response({"echo": data, "received_at": time.time()}) + except json.JSONDecodeError: + return web.json_response({"error": "Invalid JSON"}, status=400) + + +async def run_test_server() -> web.AppRunner: + """Run a simple test server.""" + app = web.Application() + server = TestServer() + + app.router.add_get("/hello", server.handle_hello) + app.router.add_get("/hello/{name}", server.handle_hello) + app.router.add_get("/slow/{delay}", server.handle_slow) + app.router.add_get("/error/{status}", server.handle_error) + app.router.add_post("/echo", server.handle_json_data) + + runner = web.AppRunner(app) + await runner.setup() + site = web.TCPSite(runner, "localhost", 8080) + await site.start() + return runner + + +async def run_tests() -> None: + """Run all the middleware tests.""" + # Create logging middleware + logging_middleware = LoggingMiddleware() + + # Use middleware in session + async with ClientSession(middlewares=(logging_middleware,)) as session: + # Test 1: Simple GET request + print("\n=== Test 1: Simple GET request ===") + async with session.get("http://localhost:8080/hello") as resp: + data = await resp.json() + print(f"Response: {data}") + + # Test 2: GET with parameter + print("\n=== Test 2: GET with parameter ===") + async with session.get("http://localhost:8080/hello/Alice") as resp: + data = await resp.json() + print(f"Response: {data}") + + # Test 3: Slow request + print("\n=== Test 3: Slow request (2 seconds) ===") + async with 
session.get("http://localhost:8080/slow/2") as resp: + data = await resp.json() + print(f"Response: {data}") + + # Test 4: Error response + print("\n=== Test 4: Error response ===") + async with session.get("http://localhost:8080/error/404") as resp: + text = await resp.text() + print(f"Response: {text}") + + # Test 5: POST with JSON data + print("\n=== Test 5: POST with JSON data ===") + payload = {"name": "Bob", "age": 30, "city": "New York"} + async with session.post("http://localhost:8080/echo", json=payload) as resp: + data = await resp.json() + print(f"Response: {data}") + + # Test 6: Multiple concurrent requests + print("\n=== Test 6: Multiple concurrent requests ===") + coros: List[Coroutine[Any, Any, ClientResponse]] = [] + for i in range(3): + coro = session.get(f"http://localhost:8080/hello/User{i}") + coros.append(coro) + + responses = await asyncio.gather(*coros) + for i, resp in enumerate(responses): + async with resp: + data = await resp.json() + print(f"Concurrent request {i}: {data}") + + +async def main() -> None: + # Start test server + server = await run_test_server() + + try: + await run_tests() + + finally: + # Cleanup server + await server.cleanup() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/examples/retry_middleware.py b/examples/retry_middleware.py new file mode 100644 index 00000000000..c8fa829455a --- /dev/null +++ b/examples/retry_middleware.py @@ -0,0 +1,245 @@ +#!/usr/bin/env python3 +""" +Example of using retry middleware with aiohttp client. + +This example shows how to implement a middleware that automatically retries +failed requests with exponential backoff. The middleware can be configured +with custom retry statuses, maximum retries, and backoff parameters. + +This example includes a test server that simulates various HTTP responses +and can return different status codes on sequential requests. 
+""" + +import asyncio +import logging +from http import HTTPStatus +from typing import TYPE_CHECKING, Dict, List, Set, Union + +from aiohttp import ClientHandlerType, ClientRequest, ClientResponse, ClientSession, web + +logging.basicConfig(level=logging.INFO) +_LOGGER = logging.getLogger(__name__) + +DEFAULT_RETRY_STATUSES: Set[HTTPStatus] = { + HTTPStatus.TOO_MANY_REQUESTS, + HTTPStatus.INTERNAL_SERVER_ERROR, + HTTPStatus.BAD_GATEWAY, + HTTPStatus.SERVICE_UNAVAILABLE, + HTTPStatus.GATEWAY_TIMEOUT, +} + + +class RetryMiddleware: + """Middleware that retries failed requests with exponential backoff.""" + + def __init__( + self, + max_retries: int = 3, + retry_statuses: Union[Set[HTTPStatus], None] = None, + initial_delay: float = 1.0, + backoff_factor: float = 2.0, + ) -> None: + self.max_retries = max_retries + self.retry_statuses = retry_statuses or DEFAULT_RETRY_STATUSES + self.initial_delay = initial_delay + self.backoff_factor = backoff_factor + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType, + ) -> ClientResponse: + """Execute request with retry logic.""" + last_response: Union[ClientResponse, None] = None + delay = self.initial_delay + + for attempt in range(self.max_retries + 1): + if attempt > 0: + _LOGGER.info( + "Retrying request to %s (attempt %s/%s)", + request.url, + attempt + 1, + self.max_retries + 1, + ) + + # Execute the request + response = await handler(request) + last_response = response + + # Check if we should retry + if response.status not in self.retry_statuses: + return response + + # Don't retry if we've exhausted attempts + if attempt >= self.max_retries: + _LOGGER.warning( + "Max retries (%s) exceeded for %s", self.max_retries, request.url + ) + return response + + # Wait before retrying + _LOGGER.debug("Waiting %ss before retry...", delay) + await asyncio.sleep(delay) + delay *= self.backoff_factor + + # Return the last response + if TYPE_CHECKING: + assert last_response is not None # Always set since we loop at least once + return last_response + + +class TestServer: + """Test server with stateful endpoints for retry testing.""" + + def __init__(self) -> None: + self.request_counters: Dict[str, int] = {} + self.status_sequences: Dict[str, List[int]] = { + "eventually-ok": [500, 503, 502, 200], # Fails 3 times, then succeeds + "always-error": [500, 500, 500, 500], # Always fails + "immediate-ok": [200], # Succeeds immediately + "flaky": [503, 200], # Fails once, then succeeds + } + + async def handle_status(self, request: web.Request) -> web.Response: + """Return the status code specified in the path.""" + status = int(request.match_info["status"]) + return web.Response(status=status, text=f"Status: {status}") + + async def handle_status_sequence(self, request: web.Request) -> web.Response: + """Return different status codes on sequential requests.""" + path = request.path + + # Initialize counter for this path if needed + if path not in self.request_counters: + self.request_counters[path] = 0 + + # Get the status sequence for this path + sequence_name = request.match_info["name"] + if sequence_name not in self.status_sequences: + return web.Response(status=404, text="Sequence not found") + + sequence = self.status_sequences[sequence_name] + + # Get the current status based on request count + count = self.request_counters[path] + if count < len(sequence): + status = sequence[count] + else: + # After sequence ends, always return the last status + status = sequence[-1] + + # Increment counter for next request + 
self.request_counters[path] += 1 + + return web.Response( + status=status, text=f"Request #{count + 1}: Status {status}" + ) + + async def handle_delay(self, request: web.Request) -> web.Response: + """Delay response by specified seconds.""" + delay = float(request.match_info["delay"]) + await asyncio.sleep(delay) + return web.json_response({"delay": delay, "message": "Response after delay"}) + + async def handle_reset(self, request: web.Request) -> web.Response: + """Reset request counters.""" + self.request_counters = {} + return web.Response(text="Counters reset") + + +async def run_test_server() -> web.AppRunner: + """Run a simple test server.""" + app = web.Application() + server = TestServer() + + app.router.add_get("/status/{status}", server.handle_status) + app.router.add_get("/sequence/{name}", server.handle_status_sequence) + app.router.add_get("/delay/{delay}", server.handle_delay) + app.router.add_post("/reset", server.handle_reset) + + runner = web.AppRunner(app) + await runner.setup() + site = web.TCPSite(runner, "localhost", 8080) + await site.start() + return runner + + +async def run_tests() -> None: + """Run all retry middleware tests.""" + # Create retry middleware with custom settings + retry_middleware = RetryMiddleware( + max_retries=3, + retry_statuses=DEFAULT_RETRY_STATUSES, + initial_delay=0.5, + backoff_factor=2.0, + ) + + async with ClientSession(middlewares=(retry_middleware,)) as session: + # Reset counters before tests + await session.post("http://localhost:8080/reset") + + # Test 1: Request that succeeds immediately + print("=== Test 1: Immediate success ===") + async with session.get("http://localhost:8080/sequence/immediate-ok") as resp: + text = await resp.text() + print(f"Final status: {resp.status}") + print(f"Response: {text}") + print("Success - no retries needed\n") + + # Test 2: Request that eventually succeeds after retries + print("=== Test 2: Eventually succeeds (500->503->502->200) ===") + async with session.get("http://localhost:8080/sequence/eventually-ok") as resp: + text = await resp.text() + print(f"Final status: {resp.status}") + print(f"Response: {text}") + if resp.status == 200: + print("Success after retries!\n") + else: + print("Failed after retries\n") + + # Test 3: Request that always fails + print("=== Test 3: Always fails (500->500->500->500) ===") + async with session.get("http://localhost:8080/sequence/always-error") as resp: + text = await resp.text() + print(f"Final status: {resp.status}") + print(f"Response: {text}") + print("Failed after exhausting all retries\n") + + # Test 4: Flaky service (fails once then succeeds) + print("=== Test 4: Flaky service (503->200) ===") + await session.post("http://localhost:8080/reset") # Reset counters + async with session.get("http://localhost:8080/sequence/flaky") as resp: + text = await resp.text() + print(f"Final status: {resp.status}") + print(f"Response: {text}") + print("Success after one retry!\n") + + # Test 5: Non-retryable status + print("=== Test 5: Non-retryable status (404) ===") + async with session.get("http://localhost:8080/status/404") as resp: + print(f"Final status: {resp.status}") + print("Failed immediately - not a retryable status\n") + + # Test 6: Delayed response + print("=== Test 6: Testing with delay endpoint ===") + try: + async with session.get("http://localhost:8080/delay/0.5") as resp: + print(f"Status: {resp.status}") + data = await resp.json() + print(f"Response received after delay: {data}\n") + except asyncio.TimeoutError: + print("Request timed out\n") + + 
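# A common refinement, not used by RetryMiddleware above, is to add random
# jitter to the exponential backoff so that many clients failing at the same
# moment do not retry in lockstep. A minimal illustrative helper (hypothetical,
# not wired into the example):


def jittered_delay(base: float, attempt: int, factor: float = 2.0) -> float:
    """Exponential backoff delay with up to 25% random jitter."""
    import random  # local import keeps this optional sketch self-contained

    delay = base * (factor**attempt)
    return delay * (1.0 + random.random() * 0.25)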
+async def main() -> None: + # Start test server + server = await run_test_server() + + try: + await run_tests() + finally: + await server.cleanup() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/examples/token_refresh_middleware.py b/examples/token_refresh_middleware.py new file mode 100644 index 00000000000..8a7ff963850 --- /dev/null +++ b/examples/token_refresh_middleware.py @@ -0,0 +1,336 @@ +#!/usr/bin/env python3 +""" +Example of using token refresh middleware with aiohttp client. + +This example shows how to implement a middleware that handles JWT token +refresh automatically. The middleware: +- Adds bearer tokens to requests +- Detects when tokens are expired +- Automatically refreshes tokens when needed +- Handles concurrent requests during token refresh + +This example includes a test server that simulates a JWT auth system. +Note: This is a simplified example for demonstration purposes. +In production, use proper JWT libraries and secure token storage. +""" + +import asyncio +import hashlib +import json +import logging +import secrets +import time +from http import HTTPStatus +from typing import TYPE_CHECKING, Any, Coroutine, Dict, List, Union + +from aiohttp import ( + ClientHandlerType, + ClientRequest, + ClientResponse, + ClientSession, + hdrs, + web, +) + +logging.basicConfig(level=logging.INFO) +_LOGGER = logging.getLogger(__name__) + + +class TokenRefreshMiddleware: + """Middleware that handles JWT token refresh automatically.""" + + def __init__(self, token_endpoint: str, refresh_token: str) -> None: + self.token_endpoint = token_endpoint + self.refresh_token = refresh_token + self.access_token: Union[str, None] = None + self.token_expires_at: Union[float, None] = None + self._refresh_lock = asyncio.Lock() + + async def _refresh_access_token(self, session: ClientSession) -> str: + """Refresh the access token using the refresh token.""" + async with self._refresh_lock: + # Check if another coroutine already refreshed the token + if ( + self.token_expires_at + and time.time() < self.token_expires_at + and self.access_token + ): + _LOGGER.debug("Token already refreshed by another request") + return self.access_token + + _LOGGER.info("Refreshing access token...") + + # Make refresh request without middleware to avoid recursion + async with session.post( + self.token_endpoint, + json={"refresh_token": self.refresh_token}, + middlewares=(), # Disable middleware for this request + ) as resp: + resp.raise_for_status() + data = await resp.json() + + if "access_token" not in data: + raise ValueError("No access_token in refresh response") + + self.access_token = data["access_token"] + # Token expires in 5 minutes for demo, refresh 30 seconds early + expires_in = data.get("expires_in", 300) + self.token_expires_at = time.time() + expires_in - 30 + + _LOGGER.info( + "Token refreshed successfully, expires in %s seconds", expires_in + ) + if TYPE_CHECKING: + assert self.access_token is not None # Just assigned above + return self.access_token + + async def __call__( + self, + request: ClientRequest, + handler: ClientHandlerType, + ) -> ClientResponse: + """Add auth token to request, refreshing if needed.""" + # Skip token for refresh endpoint to avoid recursion + if str(request.url).endswith("/token/refresh"): + return await handler(request) + + # Refresh token if needed + if not self.access_token or ( + self.token_expires_at and time.time() >= self.token_expires_at + ): + await self._refresh_access_token(request.session) + + # Add token to request + 
request.headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}" + _LOGGER.debug("Added Bearer token to request") + + # Execute request + response = await handler(request) + + # If we get 401, try refreshing token once + if response.status == HTTPStatus.UNAUTHORIZED: + _LOGGER.info("Got 401, attempting token refresh...") + await self._refresh_access_token(request.session) + request.headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}" + response = await handler(request) + + return response + + +class TestServer: + """Test server with JWT-like token authentication.""" + + def __init__(self) -> None: + self.tokens_db: Dict[str, Dict[str, Union[str, float]]] = {} + self.refresh_tokens_db: Dict[str, Dict[str, Union[str, float]]] = { + # Hash of refresh token -> user data + hashlib.sha256(b"demo_refresh_token_12345").hexdigest(): { + "user_id": "user123", + "username": "testuser", + "issued_at": time.time(), + } + } + + def generate_access_token(self) -> str: + """Generate a secure random access token.""" + return secrets.token_urlsafe(32) + + async def _process_token_refresh(self, data: Dict[str, str]) -> web.Response: + """Process the token refresh request.""" + refresh_token = data.get("refresh_token") + + if not refresh_token: + return web.json_response({"error": "refresh_token required"}, status=400) + + # Hash the refresh token to look it up + refresh_token_hash = hashlib.sha256(refresh_token.encode()).hexdigest() + + if refresh_token_hash not in self.refresh_tokens_db: + return web.json_response({"error": "Invalid refresh token"}, status=401) + + user_data = self.refresh_tokens_db[refresh_token_hash] + + # Generate new access token + access_token = self.generate_access_token() + expires_in = 300 # 5 minutes for demo + + # Store the access token with expiry + token_hash = hashlib.sha256(access_token.encode()).hexdigest() + self.tokens_db[token_hash] = { + "user_id": user_data["user_id"], + "username": user_data["username"], + "expires_at": time.time() + expires_in, + "issued_at": time.time(), + } + + # Clean up expired tokens periodically + current_time = time.time() + self.tokens_db = { + k: v + for k, v in self.tokens_db.items() + if isinstance(v["expires_at"], float) and v["expires_at"] > current_time + } + + return web.json_response( + { + "access_token": access_token, + "token_type": "Bearer", + "expires_in": expires_in, + } + ) + + async def handle_token_refresh(self, request: web.Request) -> web.Response: + """Handle token refresh requests.""" + try: + data = await request.json() + return await self._process_token_refresh(data) + except json.JSONDecodeError: + return web.json_response({"error": "Invalid request"}, status=400) + + async def verify_bearer_token( + self, request: web.Request + ) -> Union[Dict[str, Union[str, float]], None]: + """Verify bearer token and return user data if valid.""" + auth_header = request.headers.get(hdrs.AUTHORIZATION, "") + + if not auth_header.startswith("Bearer "): + return None + + token = auth_header[7:] # Remove "Bearer " + token_hash = hashlib.sha256(token.encode()).hexdigest() + + # Check if token exists and is not expired + if token_hash in self.tokens_db: + token_data = self.tokens_db[token_hash] + if ( + isinstance(token_data["expires_at"], float) + and token_data["expires_at"] > time.time() + ): + return token_data + + return None + + async def handle_protected_resource(self, request: web.Request) -> web.Response: + """Protected endpoint that requires valid bearer token.""" + user_data = await self.verify_bearer_token(request) + 
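        # verify_bearer_token returns None for a missing or malformed header,
        # an unknown token, and an expired token alike, so a single falsy
        # check below covers every failure mode.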
+ if not user_data: + return web.json_response({"error": "Invalid or expired token"}, status=401) + + return web.json_response( + { + "message": "Access granted to protected resource", + "user": user_data["username"], + "data": "Secret information", + } + ) + + async def handle_user_info(self, request: web.Request) -> web.Response: + """Another protected endpoint.""" + user_data = await self.verify_bearer_token(request) + + if not user_data: + return web.json_response({"error": "Invalid or expired token"}, status=401) + + return web.json_response( + { + "user_id": user_data["user_id"], + "username": user_data["username"], + "email": f"{user_data['username']}@example.com", + "roles": ["user", "admin"], + } + ) + + +async def run_test_server() -> web.AppRunner: + """Run a test server with JWT auth endpoints.""" + test_server = TestServer() + app = web.Application() + app.router.add_post("/token/refresh", test_server.handle_token_refresh) + app.router.add_get("/api/protected", test_server.handle_protected_resource) + app.router.add_get("/api/user", test_server.handle_user_info) + + runner = web.AppRunner(app) + await runner.setup() + site = web.TCPSite(runner, "localhost", 8080) + await site.start() + return runner + + +async def run_tests() -> None: + """Run all token refresh middleware tests.""" + # Create token refresh middleware + # In a real app, this refresh token would be securely stored + token_middleware = TokenRefreshMiddleware( + token_endpoint="http://localhost:8080/token/refresh", + refresh_token="demo_refresh_token_12345", + ) + + async with ClientSession(middlewares=(token_middleware,)) as session: + print("=== Test 1: First request (will trigger token refresh) ===") + async with session.get("http://localhost:8080/api/protected") as resp: + if resp.status == 200: + data = await resp.json() + print(f"Success! Response: {data}") + else: + print(f"Failed with status: {resp.status}") + + print("\n=== Test 2: Second request (uses cached token) ===") + async with session.get("http://localhost:8080/api/user") as resp: + if resp.status == 200: + data = await resp.json() + print(f"User info: {data}") + else: + print(f"Failed with status: {resp.status}") + + print("\n=== Test 3: Multiple concurrent requests ===") + print("(Should only refresh token once)") + coros: List[Coroutine[Any, Any, ClientResponse]] = [] + for i in range(3): + coro = session.get("http://localhost:8080/api/protected") + coros.append(coro) + + responses = await asyncio.gather(*coros) + for i, resp in enumerate(responses): + async with resp: + if resp.status == 200: + print(f"Request {i + 1}: Success") + else: + print(f"Request {i + 1}: Failed with {resp.status}") + + print("\n=== Test 4: Simulate token expiry ===") + # For demo purposes, force token expiry + token_middleware.token_expires_at = time.time() - 1 + + print("Token expired, next request should trigger refresh...") + async with session.get("http://localhost:8080/api/protected") as resp: + if resp.status == 200: + data = await resp.json() + print(f"Success after token refresh! 
Response: {data}") + else: + print(f"Failed with status: {resp.status}") + + print("\n=== Test 5: Request without middleware (no auth) ===") + # Make a request without any middleware to show the difference + async with session.get( + "http://localhost:8080/api/protected", + middlewares=(), # Bypass all middleware for this request + ) as resp: + print(f"Status: {resp.status}") + if resp.status == 401: + error = await resp.json() + print(f"Failed as expected without auth: {error}") + + +async def main() -> None: + # Start test server + server = await run_test_server() + + try: + await run_tests() + finally: + await server.cleanup() + + +if __name__ == "__main__": + asyncio.run(main()) From ca98b978f09324429fc87d6df74db905a68af71c Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Fri, 23 May 2025 17:39:01 +0100 Subject: [PATCH 78/90] [PR #10972/a023a245 backport][3.12] Upgrade to llhttp 9.3 (#10973) **This is a backport of PR #10972 as merged into master (a023a245f675b77c746d4cac37ac5289e4196070).** Co-authored-by: Sam Bull --- CHANGES/10972.feature.rst | 1 + vendor/llhttp | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) create mode 100644 CHANGES/10972.feature.rst diff --git a/CHANGES/10972.feature.rst b/CHANGES/10972.feature.rst new file mode 100644 index 00000000000..1d3779a3969 --- /dev/null +++ b/CHANGES/10972.feature.rst @@ -0,0 +1 @@ +Upgraded to LLHTTP 9.3.0 -- by :user:`Dreamsorcerer`. diff --git a/vendor/llhttp b/vendor/llhttp index b0b279fb5a6..36151b9a7d6 160000 --- a/vendor/llhttp +++ b/vendor/llhttp @@ -1 +1 @@ -Subproject commit b0b279fb5a617ab3bc2fc11c5f8bd937aac687c1 +Subproject commit 36151b9a7d6320072e24e472a769a5e09f9e969d From 8368069d3f7ac363d15f9312eb1a7edbfdd66736 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Fri, 23 May 2025 12:04:09 -0500 Subject: [PATCH 79/90] [PR #10968/ff7feaf4 backport][3.12] Update Key Features to mention client middleware (#10975) Co-authored-by: J. Nick Koston --- CHANGES/10968.feature.rst | 1 + docs/client_reference.rst | 1 - docs/index.rst | 2 ++ 3 files changed, 3 insertions(+), 1 deletion(-) create mode 120000 CHANGES/10968.feature.rst diff --git a/CHANGES/10968.feature.rst b/CHANGES/10968.feature.rst new file mode 120000 index 00000000000..b565aa68ee0 --- /dev/null +++ b/CHANGES/10968.feature.rst @@ -0,0 +1 @@ +9732.feature.rst \ No newline at end of file diff --git a/docs/client_reference.rst b/docs/client_reference.rst index cd825b403a0..fa0a50425af 100644 --- a/docs/client_reference.rst +++ b/docs/client_reference.rst @@ -2051,7 +2051,6 @@ Utilities :return: encoded authentication data, :class:`str`. - .. class:: DigestAuthMiddleware(login, password) HTTP digest authentication client middleware. diff --git a/docs/index.rst b/docs/index.rst index 4ce20aca643..f9c4a4b2c54 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -23,6 +23,8 @@ Key Features without the Callback Hell. - Web-server has :ref:`aiohttp-web-middlewares`, :ref:`aiohttp-web-signals` and pluggable routing. +- Client supports :ref:`middleware <aiohttp-client-middleware>` for + customizing request/response processing. .. _aiohttp-installation: From bfe0bd18eb3df343f054c9259346d126d1742436 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Fri, 23 May 2025 13:47:30 -0500 Subject: [PATCH 80/90] [PR #10977/48f5324b backport][3.12] Fix flakey test_aiohttp_request_ctx_manager_close_sess_on_error test (#10980) Co-authored-by: J.
Nick Koston --- tests/test_client_functional.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/tests/test_client_functional.py b/tests/test_client_functional.py index ff9a33bda1b..6a031de6a35 100644 --- a/tests/test_client_functional.py +++ b/tests/test_client_functional.py @@ -3393,6 +3393,9 @@ async def handler(request): pass assert cm._session.closed + # Allow event loop to process transport cleanup + # on Python < 3.11 + await asyncio.sleep(0) async def test_aiohttp_request_ctx_manager_not_found() -> None: From b21ae981269fe344bcc570ed443093f2f5f4d4ef Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Fri, 23 May 2025 19:25:37 +0000 Subject: [PATCH 81/90] [PR #10971/1ee187c0 backport][3.12] Fix `WebSocketResponse.prepared` not correctly reflect the WebSocket's prepared state (#10983) Co-authored-by: J. Nick Koston fixes #6009 --- CHANGES/6009.bugfix.rst | 1 + aiohttp/web_ws.py | 4 + tests/test_web_websocket.py | 2 +- tests/test_web_websocket_functional.py | 113 +++++++++++++++++++++++++ 4 files changed, 119 insertions(+), 1 deletion(-) create mode 100644 CHANGES/6009.bugfix.rst diff --git a/CHANGES/6009.bugfix.rst b/CHANGES/6009.bugfix.rst new file mode 100644 index 00000000000..6462da31869 --- /dev/null +++ b/CHANGES/6009.bugfix.rst @@ -0,0 +1 @@ +Fixed ``WebSocketResponse.prepared`` property to correctly reflect the prepared state, especially during timeout scenarios -- by :user:`bdraco` diff --git a/aiohttp/web_ws.py b/aiohttp/web_ws.py index 439b8049987..575f9a3dc85 100644 --- a/aiohttp/web_ws.py +++ b/aiohttp/web_ws.py @@ -354,6 +354,10 @@ def can_prepare(self, request: BaseRequest) -> WebSocketReady: else: return WebSocketReady(True, protocol) + @property + def prepared(self) -> bool: + return self._writer is not None + @property def closed(self) -> bool: return self._closed diff --git a/tests/test_web_websocket.py b/tests/test_web_websocket.py index 390d6224d3d..3a285b76aad 100644 --- a/tests/test_web_websocket.py +++ b/tests/test_web_websocket.py @@ -639,4 +639,4 @@ async def test_get_extra_info( await ws.prepare(req) ws._writer = ws_transport - assert ws.get_extra_info(valid_key, default_value) == expected_result + assert expected_result == ws.get_extra_info(valid_key, default_value) diff --git a/tests/test_web_websocket_functional.py b/tests/test_web_websocket_functional.py index 0229809592a..f7f5c31356c 100644 --- a/tests/test_web_websocket_functional.py +++ b/tests/test_web_websocket_functional.py @@ -1281,3 +1281,116 @@ async def handler(request: web.Request) -> web.WebSocketResponse: ) await client.server.close() assert close_code == WSCloseCode.OK + + +async def test_websocket_prepare_timeout_close_issue( + loop: asyncio.AbstractEventLoop, aiohttp_client: AiohttpClient +) -> None: + """Test that WebSocket can handle prepare with early returns. + + This is a regression test for issue #6009 where the prepared property + incorrectly checked _payload_writer instead of _writer. 
+ """ + + async def handler(request: web.Request) -> web.WebSocketResponse: + ws = web.WebSocketResponse() + assert ws.can_prepare(request) + await ws.prepare(request) + await ws.send_str("test") + await ws.close() + return ws + + app = web.Application() + app.router.add_route("GET", "/ws", handler) + client = await aiohttp_client(app) + + # Connect via websocket + ws = await client.ws_connect("/ws") + msg = await ws.receive() + assert msg.type is WSMsgType.TEXT + assert msg.data == "test" + await ws.close() + + +async def test_websocket_prepare_timeout_from_issue_reproducer( + loop: asyncio.AbstractEventLoop, aiohttp_client: AiohttpClient +) -> None: + """Test websocket behavior when prepare is interrupted. + + This test verifies the fix for issue #6009 where close() would + fail after prepare() was interrupted. + """ + prepare_complete = asyncio.Event() + close_complete = asyncio.Event() + + async def handler(request: web.Request) -> web.WebSocketResponse: + ws = web.WebSocketResponse() + + # Prepare the websocket + await ws.prepare(request) + prepare_complete.set() + + # Send a message to confirm connection works + await ws.send_str("connected") + + # Wait for client to close + msg = await ws.receive() + assert msg.type is WSMsgType.CLOSE + await ws.close() + close_complete.set() + + return ws + + app = web.Application() + app.router.add_route("GET", "/ws", handler) + client = await aiohttp_client(app) + + # Connect and verify the connection works + ws = await client.ws_connect("/ws") + await prepare_complete.wait() + + msg = await ws.receive() + assert msg.type is WSMsgType.TEXT + assert msg.data == "connected" + + # Close the connection + await ws.close() + await close_complete.wait() + + +async def test_websocket_prepared_property( + loop: asyncio.AbstractEventLoop, aiohttp_client: AiohttpClient +) -> None: + """Test that WebSocketResponse.prepared property correctly reflects state.""" + prepare_called = asyncio.Event() + + async def handler(request: web.Request) -> web.WebSocketResponse: + ws = web.WebSocketResponse() + + # Initially not prepared + initial_state = ws.prepared + assert not initial_state + + # After prepare() is called, should be prepared + await ws.prepare(request) + prepare_called.set() + + # Check prepared state + prepared_state = ws.prepared + assert prepared_state + + # Send a message to verify the connection works + await ws.send_str("test") + await ws.close() + return ws + + app = web.Application() + app.router.add_route("GET", "/", handler) + client = await aiohttp_client(app) + + ws = await client.ws_connect("/") + await prepare_called.wait() + msg = await ws.receive() + assert msg.type is WSMsgType.TEXT + assert msg.data == "test" + await ws.close() From 12ce8115da48f7db7f61fb2c267afffc3814ac8b Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Fri, 23 May 2025 19:43:40 +0000 Subject: [PATCH 82/90] [PR #10981/2617ab23 backport][3.12] Fix flakey test_client_middleware_retry_reuses_connection test (#10986) --- tests/test_client_middleware.py | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/tests/test_client_middleware.py b/tests/test_client_middleware.py index 9d49b750333..e698e8ee825 100644 --- a/tests/test_client_middleware.py +++ b/tests/test_client_middleware.py @@ -863,8 +863,13 @@ async def test_client_middleware_retry_reuses_connection( aiohttp_server: AiohttpServer, ) -> None: """Test that connections are reused when middleware performs retries.""" + request_count = 0 async 
def handler(request: web.Request) -> web.Response: + nonlocal request_count + request_count += 1 + if request_count == 1: + return web.Response(status=400) # First request returns 400 with no body return web.Response(text="OK") class TrackingConnector(TCPConnector): @@ -891,7 +896,7 @@ async def __call__( while True: self.attempt_count += 1 response = await handler(request) - if retry_count == 0: + if response.status == 400 and retry_count == 0: retry_count += 1 continue return response From 6a60fb79b78b5b5f64494ff34d7db81390d1d46d Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Fri, 23 May 2025 14:44:10 -0500 Subject: [PATCH 83/90] [PR #10970/bb5fc59 backport][3.12] Add warning about consuming the payload in middleware (#10984) --- CHANGES/2914.doc.rst | 4 ++ docs/web_advanced.rst | 99 ++++++++++++++++++++++++++++++++++++++++--- 2 files changed, 97 insertions(+), 6 deletions(-) create mode 100644 CHANGES/2914.doc.rst diff --git a/CHANGES/2914.doc.rst b/CHANGES/2914.doc.rst new file mode 100644 index 00000000000..25592bf79bc --- /dev/null +++ b/CHANGES/2914.doc.rst @@ -0,0 +1,4 @@ +Improved documentation for middleware by adding warnings and examples about +request body stream consumption. The documentation now clearly explains that +request body streams can only be read once and provides best practices for +sharing parsed request data between middleware and handlers -- by :user:`bdraco`. diff --git a/docs/web_advanced.rst b/docs/web_advanced.rst index 070bae34f10..a4ca513b572 100644 --- a/docs/web_advanced.rst +++ b/docs/web_advanced.rst @@ -569,10 +569,14 @@ A *middleware* is a coroutine that can modify either the request or response. For example, here's a simple *middleware* which appends ``' wink'`` to the response:: - from aiohttp.web import middleware + from aiohttp import web + from typing import Callable, Awaitable - @middleware - async def middleware(request, handler): + @web.middleware + async def middleware( + request: web.Request, + handler: Callable[[web.Request], Awaitable[web.StreamResponse]] + ) -> web.StreamResponse: resp = await handler(request) resp.text = resp.text + ' wink' return resp @@ -614,20 +618,27 @@ post-processing like handling *CORS* and so on. The following code demonstrates middlewares execution order:: from aiohttp import web + from typing import Callable, Awaitable - async def test(request): + async def test(request: web.Request) -> web.Response: print('Handler function called') return web.Response(text="Hello") @web.middleware - async def middleware1(request, handler): + async def middleware1( + request: web.Request, + handler: Callable[[web.Request], Awaitable[web.StreamResponse]] + ) -> web.StreamResponse: print('Middleware 1 called') response = await handler(request) print('Middleware 1 finished') return response @web.middleware - async def middleware2(request, handler): + async def middleware2( + request: web.Request, + handler: Callable[[web.Request], Awaitable[web.StreamResponse]] + ) -> web.StreamResponse: print('Middleware 2 called') response = await handler(request) print('Middleware 2 finished') @@ -646,6 +657,82 @@ Produced output:: Middleware 2 finished Middleware 1 finished +Request Body Stream Consumption +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. warning:: + + When middleware reads the request body (using :meth:`~aiohttp.web.BaseRequest.read`, + :meth:`~aiohttp.web.BaseRequest.text`, :meth:`~aiohttp.web.BaseRequest.json`, or + :meth:`~aiohttp.web.BaseRequest.post`), the body stream is consumed. 
However, these + high-level methods cache their result, so subsequent calls from the handler or other + middleware will return the same cached value. + + The important distinction is: + + - High-level methods (:meth:`~aiohttp.web.BaseRequest.read`, :meth:`~aiohttp.web.BaseRequest.text`, + :meth:`~aiohttp.web.BaseRequest.json`, :meth:`~aiohttp.web.BaseRequest.post`) cache their + results internally, so they can be called multiple times and will return the same value. + - Direct stream access via :attr:`~aiohttp.web.BaseRequest.content` does NOT have this + caching behavior. Once you read from ``request.content`` directly (e.g., using + ``await request.content.read()``), subsequent reads will return empty bytes. + +Consider this middleware that logs request bodies:: + + from aiohttp import web + from typing import Callable, Awaitable + + async def logging_middleware( + request: web.Request, + handler: Callable[[web.Request], Awaitable[web.StreamResponse]] + ) -> web.StreamResponse: + # This consumes the request body stream + body = await request.text() + print(f"Request body: {body}") + return await handler(request) + + async def handler(request: web.Request) -> web.Response: + # This will return the same value that was read in the middleware + # (i.e., the cached result, not an empty string) + body = await request.text() + return web.Response(text=f"Received: {body}") + +In contrast, when accessing the stream directly (not recommended in middleware):: + + async def stream_middleware( + request: web.Request, + handler: Callable[[web.Request], Awaitable[web.StreamResponse]] + ) -> web.StreamResponse: + # Reading directly from the stream - this consumes it! + data = await request.content.read() + print(f"Stream data: {data}") + return await handler(request) + + async def handler(request: web.Request) -> web.Response: + # This will return empty bytes because the stream was already consumed + data = await request.content.read() + # data will be b'' (empty bytes) + + # However, high-level methods would still work if called for the first time: + # body = await request.text() # This would read from internal cache if available + return web.Response(text=f"Received: {data}") + +When working with raw stream data that needs to be shared between middleware and handlers:: + + async def stream_parsing_middleware( + request: web.Request, + handler: Callable[[web.Request], Awaitable[web.StreamResponse]] + ) -> web.StreamResponse: + # Read stream once and store the data + raw_data = await request.content.read() + request['raw_body'] = raw_data + return await handler(request) + + async def handler(request: web.Request) -> web.Response: + # Access the stored data instead of reading the stream again + raw_data = request.get('raw_body', b'') + return web.Response(body=raw_data) + Example ^^^^^^^ From fa088d0782d6a63ac3a1c182626692696b82eea7 Mon Sep 17 00:00:00 2001 From: "patchback[bot]" <45432694+patchback[bot]@users.noreply.github.com> Date: Fri, 23 May 2025 15:44:30 -0500 Subject: [PATCH 84/90] [PR #10988/54f1a84f backport][3.12] Add missing prepared method for WebSocketResponse to docs (#10990) Co-authored-by: J. 
Nick Koston --- CHANGES/10988.bugfix.rst | 1 + CHANGES/6009.bugfix.rst | 2 +- docs/web_reference.rst | 5 +++++ 3 files changed, 7 insertions(+), 1 deletion(-) create mode 120000 CHANGES/10988.bugfix.rst diff --git a/CHANGES/10988.bugfix.rst b/CHANGES/10988.bugfix.rst new file mode 120000 index 00000000000..6e737bb336c --- /dev/null +++ b/CHANGES/10988.bugfix.rst @@ -0,0 +1 @@ +6009.bugfix.rst \ No newline at end of file diff --git a/CHANGES/6009.bugfix.rst b/CHANGES/6009.bugfix.rst index 6462da31869..a530832c8a9 100644 --- a/CHANGES/6009.bugfix.rst +++ b/CHANGES/6009.bugfix.rst @@ -1 +1 @@ -Fixed ``WebSocketResponse.prepared`` property to correctly reflect the prepared state, especially during timeout scenarios -- by :user:`bdraco` +Fixed :py:attr:`~aiohttp.web.WebSocketResponse.prepared` property to correctly reflect the prepared state, especially during timeout scenarios -- by :user:`bdraco` diff --git a/docs/web_reference.rst b/docs/web_reference.rst index f2954b06b51..bcf20817aab 100644 --- a/docs/web_reference.rst +++ b/docs/web_reference.rst @@ -1076,6 +1076,11 @@ and :ref:`aiohttp-web-signals` handlers:: of closing. :const:`~aiohttp.WSMsgType.CLOSE` message has been received from peer. + .. attribute:: prepared + + Read-only :class:`bool` property, ``True`` if :meth:`prepare` has + been called, ``False`` otherwise. + .. attribute:: close_code Read-only property, close code from peer. It is set to ``None`` on From 560ffbfaaaab89d0148ce572cd130782dd662f32 Mon Sep 17 00:00:00 2001 From: "J. Nick Koston" Date: Fri, 23 May 2025 18:33:06 -0500 Subject: [PATCH 85/90] Release 3.12.0rc0 (#10987) --- CHANGES.rst | 262 ++++++++++++++++++++++++++++++++++++++++++++ aiohttp/__init__.py | 2 +- 2 files changed, 263 insertions(+), 1 deletion(-) diff --git a/CHANGES.rst b/CHANGES.rst index c0a9b20f200..3ea3455294d 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -10,6 +10,268 @@ .. towncrier release notes start +3.12.0rc0 (2025-05-23) +====================== + +Bug fixes +--------- + +- Fixed :py:attr:`~aiohttp.web.WebSocketResponse.prepared` property to correctly reflect the prepared state, especially during timeout scenarios -- by :user:`bdraco` + + + *Related issues and pull requests on GitHub:* + :issue:`6009`, :issue:`10988`. + + + +- Response is now always True, instead of using MutableMapping behaviour (False when map is empty) + + + *Related issues and pull requests on GitHub:* + :issue:`10119`. + + + +- Fixed connection reuse for file-like data payloads by ensuring buffer + truncation respects content-length boundaries and preventing premature + connection closure race -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10325`, :issue:`10915`, :issue:`10941`, :issue:`10943`. + + + +- Fixed pytest plugin to not use deprecated :py:mod:`asyncio` policy APIs. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + +- Fixed :py:class:`~aiohttp.resolver.AsyncResolver` not using the ``loop`` argument in versions 3.x where it should still be supported -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10951`. + + + + +Features +-------- + +- Added a comprehensive HTTP Digest Authentication client middleware (DigestAuthMiddleware) + that implements RFC 7616. 
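A hedged usage sketch of the middleware described here (the top-level import
and the ``login``/``password`` keyword names are assumed from this series'
additions; the URL is illustrative)::

    from aiohttp import ClientSession, DigestAuthMiddleware

    digest_auth = DigestAuthMiddleware(login="user", password="pass")

    async with ClientSession(middlewares=(digest_auth,)) as session:
        # The middleware intercepts the server's initial 401 challenge and
        # transparently retries the request with proper Digest credentials.
        async with session.get("https://example.com/protected") as resp:
            assert resp.status == 200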
The middleware supports all standard hash algorithms + (MD5, SHA, SHA-256, SHA-512) with session variants, handles both 'auth' and + 'auth-int' quality of protection options, and automatically manages the + authentication flow by intercepting 401 responses and retrying with proper + credentials -- by :user:`feus4177`, :user:`TimMenninger`, and :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`2213`, :issue:`10725`. + + + +- Added client middleware support -- by :user:`bdraco` and :user:`Dreamsorcerer`. + + This change allows users to add middleware to the client session and requests, enabling features like + authentication, logging, and request/response modification without modifying the core + request logic. Additionally, the ``session`` attribute was added to ``ClientRequest``, + allowing middleware to access the session for making additional requests. + + + *Related issues and pull requests on GitHub:* + :issue:`9732`, :issue:`10902`, :issue:`10945`, :issue:`10952`, :issue:`10959`, :issue:`10968`. + + + +- Allow user setting zlib compression backend -- by :user:`TimMenninger` + + This change allows the user to call :func:`aiohttp.set_zlib_backend()` with the + zlib compression module of their choice. Default behavior continues to use + the builtin ``zlib`` library. + + + *Related issues and pull requests on GitHub:* + :issue:`9798`. + + + +- Added support for overriding the base URL with an absolute one in client sessions + -- by :user:`vivodi`. + + + *Related issues and pull requests on GitHub:* + :issue:`10074`. + + + +- Added ``host`` parameter to ``aiohttp_server`` fixture -- by :user:`christianwbrock`. + + + *Related issues and pull requests on GitHub:* + :issue:`10120`. + + + +- Detect blocking calls in coroutines using BlockBuster -- by :user:`cbornet`. + + + *Related issues and pull requests on GitHub:* + :issue:`10433`. + + + +- Added ``socket_factory`` to :py:class:`aiohttp.TCPConnector` to allow specifying custom socket options + -- by :user:`TimMenninger`. + + + *Related issues and pull requests on GitHub:* + :issue:`10474`, :issue:`10520`, :issue:`10961`, :issue:`10962`. + + + +- Started building armv7l manylinux wheels -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10797`. + + + +- Implemented shared DNS resolver management to fix excessive resolver object creation + when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures + only one ``DNSResolver`` object is created for default configurations, significantly + reducing resource usage and improving performance for applications using multiple + client sessions simultaneously -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10847`, :issue:`10923`, :issue:`10946`. + + + +- Upgraded to LLHTTP 9.3.0 -- by :user:`Dreamsorcerer`. + + + *Related issues and pull requests on GitHub:* + :issue:`10972`. + + + + +Improved documentation +---------------------- + +- Improved documentation for middleware by adding warnings and examples about + request body stream consumption. The documentation now clearly explains that + request body streams can only be read once and provides best practices for + sharing parsed request data between middleware and handlers -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`2914`. 
+ + + + +Packaging updates and notes for downstreams +------------------------------------------- + +- Removed non SPDX-license description from ``setup.cfg`` -- by :user:`devanshu-ziphq`. + + + *Related issues and pull requests on GitHub:* + :issue:`10662`. + + + +- Added support for building against system ``llhttp`` library -- by :user:`mgorny`. + + This change adds support for :envvar:`AIOHTTP_USE_SYSTEM_DEPS` environment variable that + can be used to build aiohttp against the system install of the ``llhttp`` library rather + than the vendored one. + + + *Related issues and pull requests on GitHub:* + :issue:`10759`. + + + +- ``aiodns`` is now installed on Windows with speedups extra -- by :user:`bdraco`. + + As of ``aiodns`` 3.3.0, ``SelectorEventLoop`` is no longer required when using ``pycares`` 4.7.0 or later. + + + *Related issues and pull requests on GitHub:* + :issue:`10823`. + + + +- Fixed compatibility issue with Cython 3.1.1 -- by :user:`bdraco` + + + *Related issues and pull requests on GitHub:* + :issue:`10877`. + + + + +Contributor-facing changes +-------------------------- + +- Sped up tests by disabling ``blockbuster`` fixture for ``test_static_file_huge`` and ``test_static_file_huge_cancel`` tests -- by :user:`dikos1337`. + + + *Related issues and pull requests on GitHub:* + :issue:`9705`, :issue:`10761`. + + + +- Updated tests to avoid using deprecated :py:mod:`asyncio` policy APIs and + make it compatible with Python 3.14. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + +- Added Winloop to test suite to support in the future -- by :user:`Vizonex`. + + + *Related issues and pull requests on GitHub:* + :issue:`10922`. + + + + +Miscellaneous internal changes +------------------------------ + +- Added support for the ``partitioned`` attribute in the ``set_cookie`` method. + + + *Related issues and pull requests on GitHub:* + :issue:`9870`. + + + +- Setting :attr:`aiohttp.web.StreamResponse.last_modified` to an unsupported type will now raise :exc:`TypeError` instead of silently failing -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10146`. + + + + +---- + + 3.12.0b3 (2025-05-22) ===================== diff --git a/aiohttp/__init__.py b/aiohttp/__init__.py index 0ca44564e46..0de2fb48b1b 100644 --- a/aiohttp/__init__.py +++ b/aiohttp/__init__.py @@ -1,4 +1,4 @@ -__version__ = "3.12.0b3" +__version__ = "3.12.0rc0" from typing import TYPE_CHECKING, Tuple From c265228701988a0357ecf6f863b460335a161b74 Mon Sep 17 00:00:00 2001 From: "J. 
Nick Koston" Date: Sat, 24 May 2025 12:23:32 -0500 Subject: [PATCH 86/90] [PR #10991/452458a backport][3.12] Optimize small HTTP requests/responses by coalescing headers and body into a single packet (#10992) --- CHANGES/10991.feature.rst | 7 + aiohttp/abc.py | 7 + aiohttp/client_reqrep.py | 5 + aiohttp/http_writer.py | 158 ++++++- aiohttp/web_response.py | 5 + docs/spelling_wordlist.txt | 1 + tests/test_client_functional.py | 79 +++- tests/test_http_writer.py | 786 +++++++++++++++++++++++++++++++- tests/test_web_response.py | 43 ++ tests/test_web_sendfile.py | 30 ++ tests/test_web_server.py | 5 +- 11 files changed, 1099 insertions(+), 27 deletions(-) create mode 100644 CHANGES/10991.feature.rst diff --git a/CHANGES/10991.feature.rst b/CHANGES/10991.feature.rst new file mode 100644 index 00000000000..687a1a752f6 --- /dev/null +++ b/CHANGES/10991.feature.rst @@ -0,0 +1,7 @@ +Optimized small HTTP requests/responses by coalescing headers and body into a single TCP packet -- by :user:`bdraco`. + +This change enhances network efficiency by reducing the number of packets sent for small HTTP payloads, improving latency and reducing overhead. Most importantly, this fixes compatibility with memory-constrained IoT devices that can only perform a single read operation and expect HTTP requests in one packet. The optimization uses zero-copy ``writelines`` when coalescing data and works with both regular and chunked transfer encoding. + +When ``aiohttp`` uses client middleware to communicate with an ``aiohttp`` server, connection reuse is more likely to occur since complete responses arrive in a single packet for small payloads. + +This aligns ``aiohttp`` with other popular HTTP clients that already coalesce small requests. diff --git a/aiohttp/abc.py b/aiohttp/abc.py index 3c4f8c61b00..c1bf5032d0d 100644 --- a/aiohttp/abc.py +++ b/aiohttp/abc.py @@ -232,6 +232,13 @@ async def write_headers( ) -> None: """Write HTTP headers""" + def send_headers(self) -> None: + """Force sending buffered headers if not already sent. + + Required only if write_headers() buffers headers instead of sending immediately. + For backwards compatibility, this method does nothing by default. 
+ """ + class AbstractAccessLogger(ABC): """Abstract writer to access log.""" diff --git a/aiohttp/client_reqrep.py b/aiohttp/client_reqrep.py index a50917150c5..fb83eefd51f 100644 --- a/aiohttp/client_reqrep.py +++ b/aiohttp/client_reqrep.py @@ -709,6 +709,8 @@ async def write_bytes( """ # 100 response if self._continue is not None: + # Force headers to be sent before waiting for 100-continue + writer.send_headers() await writer.drain() await self._continue @@ -826,7 +828,10 @@ async def send(self, conn: "Connection") -> "ClientResponse": # status + headers status_line = f"{self.method} {path} HTTP/{v.major}.{v.minor}" + + # Buffer headers for potential coalescing with body await writer.write_headers(status_line, self.headers) + task: Optional["asyncio.Task[None]"] if self.body or self._continue is not None or protocol.writing_paused: coro = self.write_bytes(writer, conn, self._get_content_length()) diff --git a/aiohttp/http_writer.py b/aiohttp/http_writer.py index 3e05628238d..a140b218b25 100644 --- a/aiohttp/http_writer.py +++ b/aiohttp/http_writer.py @@ -3,6 +3,7 @@ import asyncio import sys from typing import ( # noqa + TYPE_CHECKING, Any, Awaitable, Callable, @@ -66,6 +67,8 @@ def __init__( self.loop = loop self._on_chunk_sent: _T_OnChunkSent = on_chunk_sent self._on_headers_sent: _T_OnHeadersSent = on_headers_sent + self._headers_buf: Optional[bytes] = None + self._headers_written: bool = False @property def transport(self) -> Optional[asyncio.Transport]: @@ -106,6 +109,49 @@ def _writelines(self, chunks: Iterable[bytes]) -> None: else: transport.writelines(chunks) + def _write_chunked_payload( + self, chunk: Union[bytes, bytearray, "memoryview[int]", "memoryview[bytes]"] + ) -> None: + """Write a chunk with proper chunked encoding.""" + chunk_len_pre = f"{len(chunk):x}\r\n".encode("ascii") + self._writelines((chunk_len_pre, chunk, b"\r\n")) + + def _send_headers_with_payload( + self, + chunk: Union[bytes, bytearray, "memoryview[int]", "memoryview[bytes]"], + is_eof: bool, + ) -> None: + """Send buffered headers with payload, coalescing into single write.""" + # Mark headers as written + self._headers_written = True + headers_buf = self._headers_buf + self._headers_buf = None + + if TYPE_CHECKING: + # Safe because callers (write() and write_eof()) only invoke this method + # after checking that self._headers_buf is truthy + assert headers_buf is not None + + if not self.chunked: + # Non-chunked: coalesce headers with body + if chunk: + self._writelines((headers_buf, chunk)) + else: + self._write(headers_buf) + return + + # Coalesce headers with chunked data + if chunk: + chunk_len_pre = f"{len(chunk):x}\r\n".encode("ascii") + if is_eof: + self._writelines((headers_buf, chunk_len_pre, chunk, b"\r\n0\r\n\r\n")) + else: + self._writelines((headers_buf, chunk_len_pre, chunk, b"\r\n")) + elif is_eof: + self._writelines((headers_buf, b"0\r\n\r\n")) + else: + self._write(headers_buf) + async def write( self, chunk: Union[bytes, bytearray, memoryview], @@ -113,7 +159,8 @@ async def write( drain: bool = True, LIMIT: int = 0x10000, ) -> None: - """Writes chunk of data to a stream. + """ + Writes chunk of data to a stream. write_eof() indicates end of stream. writer can't be used after write_eof() method being called. 
@@ -142,31 +189,75 @@ async def write( if not chunk: return + # Handle buffered headers for small payload optimization + if self._headers_buf and not self._headers_written: + self._send_headers_with_payload(chunk, False) + if drain and self.buffer_size > LIMIT: + self.buffer_size = 0 + await self.drain() + return + if chunk: if self.chunked: - self._writelines( - (f"{len(chunk):x}\r\n".encode("ascii"), chunk, b"\r\n") - ) + self._write_chunked_payload(chunk) else: self._write(chunk) - if self.buffer_size > LIMIT and drain: + if drain and self.buffer_size > LIMIT: self.buffer_size = 0 await self.drain() async def write_headers( self, status_line: str, headers: "CIMultiDict[str]" ) -> None: - """Write request/response status and headers.""" + """Write headers to the stream.""" if self._on_headers_sent is not None: await self._on_headers_sent(headers) - # status + headers buf = _serialize_headers(status_line, headers) - self._write(buf) + self._headers_written = False + self._headers_buf = buf + + def send_headers(self) -> None: + """Force sending buffered headers if not already sent.""" + if not self._headers_buf or self._headers_written: + return + + self._headers_written = True + headers_buf = self._headers_buf + self._headers_buf = None + + if TYPE_CHECKING: + # Safe because we only enter this block when self._headers_buf is truthy + assert headers_buf is not None + + self._write(headers_buf) def set_eof(self) -> None: """Indicate that the message is complete.""" + if self._eof: + return + + # If headers haven't been sent yet, send them now + # This handles the case where there's no body at all + if self._headers_buf and not self._headers_written: + self._headers_written = True + headers_buf = self._headers_buf + self._headers_buf = None + + if TYPE_CHECKING: + # Safe because we only enter this block when self._headers_buf is truthy + assert headers_buf is not None + + # Combine headers and chunked EOF marker in a single write + if self.chunked: + self._writelines((headers_buf, b"0\r\n\r\n")) + else: + self._write(headers_buf) + elif self.chunked and self._headers_written: + # Headers already sent, just send the final chunk marker + self._write(b"0\r\n\r\n") + self._eof = True async def write_eof(self, chunk: bytes = b"") -> None: @@ -176,6 +267,7 @@ async def write_eof(self, chunk: bytes = b"") -> None: if chunk and self._on_chunk_sent is not None: await self._on_chunk_sent(chunk) + # Handle body/compression if self._compress: chunks: List[bytes] = [] chunks_len = 0 @@ -188,6 +280,26 @@ async def write_eof(self, chunk: bytes = b"") -> None: chunks.append(flush_chunk) assert chunks_len + # Send buffered headers with compressed data if not yet sent + if self._headers_buf and not self._headers_written: + self._headers_written = True + headers_buf = self._headers_buf + self._headers_buf = None + + if self.chunked: + # Coalesce headers with compressed chunked data + chunk_len_pre = f"{chunks_len:x}\r\n".encode("ascii") + self._writelines( + (headers_buf, chunk_len_pre, *chunks, b"\r\n0\r\n\r\n") + ) + else: + # Coalesce headers with compressed data + self._writelines((headers_buf, *chunks)) + await self.drain() + self._eof = True + return + + # Headers already sent, just write compressed data if self.chunked: chunk_len_pre = f"{chunks_len:x}\r\n".encode("ascii") self._writelines((chunk_len_pre, *chunks, b"\r\n0\r\n\r\n")) @@ -195,16 +307,34 @@ async def write_eof(self, chunk: bytes = b"") -> None: self._writelines(chunks) else: self._write(chunks[0]) - elif self.chunked: + await self.drain() 
+ self._eof = True + return + + # No compression - send buffered headers if not yet sent + if self._headers_buf and not self._headers_written: + # Use helper to send headers with payload + self._send_headers_with_payload(chunk, True) + await self.drain() + self._eof = True + return + + # Handle remaining body + if self.chunked: if chunk: - chunk_len_pre = f"{len(chunk):x}\r\n".encode("ascii") - self._writelines((chunk_len_pre, chunk, b"\r\n0\r\n\r\n")) + # Write final chunk with EOF marker + self._writelines( + (f"{len(chunk):x}\r\n".encode("ascii"), chunk, b"\r\n0\r\n\r\n") + ) else: self._write(b"0\r\n\r\n") - elif chunk: - self._write(chunk) + await self.drain() + self._eof = True + return - await self.drain() + if chunk: + self._write(chunk) + await self.drain() self._eof = True diff --git a/aiohttp/web_response.py b/aiohttp/web_response.py index 8a940ef43bf..84ad18e8b4f 100644 --- a/aiohttp/web_response.py +++ b/aiohttp/web_response.py @@ -89,6 +89,7 @@ class StreamResponse(BaseClass, HeadersMixin): _must_be_empty_body: Optional[bool] = None _body_length = 0 _cookies: Optional[SimpleCookie] = None + _send_headers_immediately = True def __init__( self, @@ -542,6 +543,9 @@ async def _write_headers(self) -> None: version = request.version status_line = f"HTTP/{version[0]}.{version[1]} {self._status} {self._reason}" await writer.write_headers(status_line, self._headers) + # Send headers immediately if not opted into buffering + if self._send_headers_immediately: + writer.send_headers() async def write(self, data: Union[bytes, bytearray, memoryview]) -> None: assert isinstance( @@ -619,6 +623,7 @@ def __bool__(self) -> bool: class Response(StreamResponse): _compressed_body: Optional[bytes] = None + _send_headers_immediately = False def __init__( self, diff --git a/docs/spelling_wordlist.txt b/docs/spelling_wordlist.txt index c22e584cadf..3f67df33159 100644 --- a/docs/spelling_wordlist.txt +++ b/docs/spelling_wordlist.txt @@ -153,6 +153,7 @@ initializer inline intaking io +IoT ip IP ipdb diff --git a/tests/test_client_functional.py b/tests/test_client_functional.py index 6a031de6a35..29838c39a71 100644 --- a/tests/test_client_functional.py +++ b/tests/test_client_functional.py @@ -1575,6 +1575,7 @@ async def handler(request: web.Request) -> web.Response: return web.json_response({"ok": True}) write_mock = None + writelines_mock = None original_write_bytes = ClientRequest.write_bytes async def write_bytes( @@ -1583,12 +1584,26 @@ async def write_bytes( conn: Connection, content_length: Optional[int] = None, ) -> None: - nonlocal write_mock + nonlocal write_mock, writelines_mock original_write = writer._write - - with mock.patch.object( - writer, "_write", autospec=True, spec_set=True, side_effect=original_write - ) as write_mock: + original_writelines = writer._writelines + + with ( + mock.patch.object( + writer, + "_write", + autospec=True, + spec_set=True, + side_effect=original_write, + ) as write_mock, + mock.patch.object( + writer, + "_writelines", + autospec=True, + spec_set=True, + side_effect=original_writelines, + ) as writelines_mock, + ): await original_write_bytes(self, writer, conn, content_length) with mock.patch.object(ClientRequest, "write_bytes", write_bytes): @@ -1601,9 +1616,20 @@ async def write_bytes( content = await resp.json() assert content == {"ok": True} - assert write_mock is not None - # No chunks should have been sent for an empty body. 
- write_mock.assert_not_called() + # With packet coalescing, headers are buffered and may be written + # during write_bytes if there's an empty body to process. + # The test should verify no body chunks are written, but headers + # may be written as part of the coalescing optimization. + # If _write was called, it should only be for headers ending with \r\n\r\n + # and not any body content + for call in write_mock.call_args_list: # type: ignore[union-attr] + data = call[0][0] + assert data.endswith( + b"\r\n\r\n" + ), "Only headers should be written, not body chunks" + + # No body data should be written via writelines either + writelines_mock.assert_not_called() # type: ignore[union-attr] async def test_GET_DEFLATE_no_body(aiohttp_client: AiohttpClient) -> None: @@ -4439,3 +4465,40 @@ async def handler(request: web.Request) -> NoReturn: assert client._session.connector is not None # Connection should be closed due to client-side network error assert len(client._session.connector._conns) == 0 + + +async def test_empty_response_non_chunked(aiohttp_client: AiohttpClient) -> None: + """Test non-chunked response with empty body.""" + + async def handler(request: web.Request) -> web.Response: + # Return empty response with Content-Length: 0 + return web.Response(body=b"", headers={"Content-Length": "0"}) + + app = web.Application() + app.router.add_get("/empty", handler) + client = await aiohttp_client(app) + + resp = await client.get("/empty") + assert resp.status == 200 + assert resp.headers.get("Content-Length") == "0" + data = await resp.read() + assert data == b"" + resp.close() + + +async def test_set_eof_on_empty_response(aiohttp_client: AiohttpClient) -> None: + """Test that triggers set_eof() method.""" + + async def handler(request: web.Request) -> web.Response: + # Return response that completes immediately + return web.Response(status=204) # No Content + + app = web.Application() + app.router.add_get("/no-content", handler) + client = await aiohttp_client(app) + + resp = await client.get("/no-content") + assert resp.status == 204 + data = await resp.read() + assert data == b"" + resp.close() diff --git a/tests/test_http_writer.py b/tests/test_http_writer.py index ec256275d22..ffd20a0d677 100644 --- a/tests/test_http_writer.py +++ b/tests/test_http_writer.py @@ -87,7 +87,100 @@ def test_payloadwriter_properties(transport, protocol, loop) -> None: assert writer.transport == transport -async def test_write_payload_eof(transport, protocol, loop) -> None: +async def test_write_headers_buffered_small_payload( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + msg = http.StreamWriter(protocol, loop) + headers = CIMultiDict({"Content-Length": "11", "Host": "example.com"}) + + # Write headers - should be buffered + await msg.write_headers("GET / HTTP/1.1", headers) + assert len(buf) == 0 # Headers not sent yet + + # Write small body - should coalesce with headers + await msg.write(b"Hello World", drain=False) + + # Verify content + assert b"GET / HTTP/1.1\r\n" in buf + assert b"Host: example.com\r\n" in buf + assert b"Content-Length: 11\r\n" in buf + assert b"\r\n\r\nHello World" in buf + + +async def test_write_headers_chunked_coalescing( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + msg = http.StreamWriter(protocol, loop) + msg.enable_chunking() + headers = CIMultiDict({"Transfer-Encoding": "chunked", "Host": "example.com"}) + + # Write 
headers - should be buffered + await msg.write_headers("POST /upload HTTP/1.1", headers) + assert len(buf) == 0 # Headers not sent yet + + # Write first chunk - should coalesce with headers + await msg.write(b"First chunk", drain=False) + + # Verify content + assert b"POST /upload HTTP/1.1\r\n" in buf + assert b"Transfer-Encoding: chunked\r\n" in buf + # "b" is hex for 11 (length of "First chunk") + assert b"\r\n\r\nb\r\nFirst chunk\r\n" in buf + + +async def test_write_eof_with_buffered_headers( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + msg = http.StreamWriter(protocol, loop) + headers = CIMultiDict({"Content-Length": "9", "Host": "example.com"}) + + # Write headers - should be buffered + await msg.write_headers("POST /data HTTP/1.1", headers) + assert len(buf) == 0 + + # Call write_eof with body - should coalesce + await msg.write_eof(b"Last data") + + # Verify content + assert b"POST /data HTTP/1.1\r\n" in buf + assert b"\r\n\r\nLast data" in buf + + +async def test_set_eof_sends_buffered_headers( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + msg = http.StreamWriter(protocol, loop) + headers = CIMultiDict({"Host": "example.com"}) + + # Write headers - should be buffered + await msg.write_headers("GET /empty HTTP/1.1", headers) + assert len(buf) == 0 + + # Call set_eof without body - headers should be sent + msg.set_eof() + + # Headers should be sent + assert len(buf) > 0 + assert b"GET /empty HTTP/1.1\r\n" in buf + + +async def test_write_payload_eof( + transport: asyncio.Transport, + protocol: BaseProtocol, + loop: asyncio.AbstractEventLoop, +) -> None: write = transport.write = mock.Mock() msg = http.StreamWriter(protocol, loop) @@ -825,14 +918,66 @@ async def test_set_eof_after_write_headers( msg = http.StreamWriter(protocol, loop) status_line = "HTTP/1.1 200 OK" good_headers = CIMultiDict({"Set-Cookie": "abc=123"}) + + # Write headers - should be buffered await msg.write_headers(status_line, good_headers) + assert not transport.write.called # Headers are buffered + + # set_eof should send the buffered headers + msg.set_eof() assert transport.write.called + + # Subsequent write_eof should do nothing transport.write.reset_mock() - msg.set_eof() await msg.write_eof() assert not transport.write.called +async def test_write_headers_does_not_write_immediately( + protocol: BaseProtocol, + transport: mock.Mock, + loop: asyncio.AbstractEventLoop, +) -> None: + msg = http.StreamWriter(protocol, loop) + status_line = "HTTP/1.1 200 OK" + headers = CIMultiDict({"Content-Type": "text/plain"}) + + # write_headers should buffer, not write immediately + await msg.write_headers(status_line, headers) + assert not transport.write.called + assert not transport.writelines.called + + # Headers should be sent when set_eof is called + msg.set_eof() + assert transport.write.called + + +async def test_write_headers_with_compression_coalescing( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + msg = http.StreamWriter(protocol, loop) + msg.enable_compression("deflate") + headers = CIMultiDict({"Content-Encoding": "deflate", "Host": "example.com"}) + + # Write headers - should be buffered + await msg.write_headers("POST /data HTTP/1.1", headers) + assert len(buf) == 0 + + # Write compressed data via write_eof - should coalesce + await msg.write_eof(b"Hello World") + + # Verify 
headers are present + assert b"POST /data HTTP/1.1\r\n" in buf + assert b"Content-Encoding: deflate\r\n" in buf + + # Verify compressed data is present + # The data should contain headers + compressed payload + assert len(buf) > 50 # Should have headers + some compressed data + + @pytest.mark.parametrize( "char", [ @@ -857,3 +1002,640 @@ def test_serialize_headers_raises_on_new_line_or_carriage_return(char: str) -> N ), ): _serialize_headers(status_line, headers) + + +async def test_write_compressed_data_with_headers_coalescing( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test that headers are coalesced with compressed data in write() method.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_compression("deflate") + headers = CIMultiDict({"Content-Encoding": "deflate", "Host": "example.com"}) + + # Write headers - should be buffered + await msg.write_headers("POST /data HTTP/1.1", headers) + assert len(buf) == 0 + + # Write compressed data - should coalesce with headers + await msg.write(b"Hello World") + + # Headers and compressed data should be written together + assert b"POST /data HTTP/1.1\r\n" in buf + assert b"Content-Encoding: deflate\r\n" in buf + assert len(buf) > 50 # Headers + compressed data + + +async def test_write_compressed_chunked_with_headers_coalescing( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test headers coalescing with compressed chunked data.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_compression("deflate") + msg.enable_chunking() + headers = CIMultiDict( + {"Content-Encoding": "deflate", "Transfer-Encoding": "chunked"} + ) + + # Write headers - should be buffered + await msg.write_headers("POST /data HTTP/1.1", headers) + assert len(buf) == 0 + + # Write compressed chunked data - should coalesce + await msg.write(b"Hello World") + + # Check headers are present + assert b"POST /data HTTP/1.1\r\n" in buf + assert b"Transfer-Encoding: chunked\r\n" in buf + + # Should have chunk size marker for compressed data + output = buf.decode("latin-1", errors="ignore") + assert "\r\n" in output # Should have chunk markers + + +async def test_write_multiple_compressed_chunks_after_headers_sent( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test multiple compressed writes after headers are already sent.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_compression("deflate") + headers = CIMultiDict({"Content-Encoding": "deflate"}) + + # Write headers and send them immediately by writing first chunk + await msg.write_headers("POST /data HTTP/1.1", headers) + assert len(buf) == 0 # Headers buffered + + # Write first chunk - this will send headers + compressed data + await msg.write(b"First chunk of data that should compress") + len_after_first = len(buf) + assert len_after_first > 0 # Headers + first chunk written + + # Write second chunk and force flush via EOF + await msg.write(b"Second chunk of data that should also compress well") + await msg.write_eof() + + # After EOF, all compressed data should be flushed + final_len = len(buf) + assert final_len > len_after_first + + +async def test_write_eof_empty_compressed_with_buffered_headers( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test write_eof with no data but 
compression enabled and buffered headers.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_compression("deflate") + headers = CIMultiDict({"Content-Encoding": "deflate"}) + + # Write headers - should be buffered + await msg.write_headers("GET /data HTTP/1.1", headers) + assert len(buf) == 0 + + # Write EOF with no data - should still coalesce headers with compression flush + await msg.write_eof() + + # Headers should be present + assert b"GET /data HTTP/1.1\r\n" in buf + assert b"Content-Encoding: deflate\r\n" in buf + # Should have compression flush data + assert len(buf) > 40 + + +async def test_write_compressed_gzip_with_headers_coalescing( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test gzip compression with header coalescing.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_compression("gzip") + headers = CIMultiDict({"Content-Encoding": "gzip"}) + + # Write headers - should be buffered + await msg.write_headers("POST /data HTTP/1.1", headers) + assert len(buf) == 0 + + # Write gzip compressed data via write_eof + await msg.write_eof(b"Test gzip compression") + + # Verify coalescing happened + assert b"POST /data HTTP/1.1\r\n" in buf + assert b"Content-Encoding: gzip\r\n" in buf + # Gzip typically produces more overhead than deflate + assert len(buf) > 60 + + +async def test_compression_with_content_length_constraint( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test compression respects content length constraints.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_compression("deflate") + msg.length = 5 # Set small content length + headers = CIMultiDict({"Content-Length": "5"}) + + await msg.write_headers("POST /data HTTP/1.1", headers) + # Write some initial data to trigger headers to be sent + await msg.write(b"12345") # This matches our content length of 5 + headers_and_first_chunk_len = len(buf) + + # Try to write more data than content length allows + await msg.write(b"This is a longer message") + + # The second write should not add any data since content length is exhausted + # After writing 5 bytes, length becomes 0, so additional writes are ignored + assert len(buf) == headers_and_first_chunk_len # No additional data written + + +async def test_write_compressed_zero_length_chunk( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test writing empty chunk with compression.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_compression("deflate") + + await msg.write_headers("POST /data HTTP/1.1", CIMultiDict()) + # Force headers to be sent by writing something + await msg.write(b"x") # Write something to trigger header send + buf.clear() + + # Write empty chunk - compression may still produce output + await msg.write(b"") + + # With compression, even empty input might produce small output + # due to compression state, but it should be minimal + assert len(buf) < 10 # Should be very small if anything + + +async def test_chunked_compressed_eof_coalescing( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test chunked compressed data with EOF marker coalescing.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_compression("deflate") + msg.enable_chunking() + headers = CIMultiDict( + {"Content-Encoding": "deflate", 
"Transfer-Encoding": "chunked"} + ) + + # Buffer headers + await msg.write_headers("POST /data HTTP/1.1", headers) + assert len(buf) == 0 + + # Write compressed chunked data with EOF + await msg.write_eof(b"Final compressed chunk") + + # Should have headers + assert b"POST /data HTTP/1.1\r\n" in buf + + # Should end with chunked EOF marker + assert buf.endswith(b"0\r\n\r\n") + + # Should have chunk size in hex before the compressed data + output = buf + # Verify we have chunk markers - look for \r\n followed by hex digits + # The chunk size should be between the headers and the compressed data + assert b"\r\n\r\n" in output # End of headers + # After headers, we should have a hex chunk size + headers_end = output.find(b"\r\n\r\n") + 4 + chunk_data = output[headers_end:] + # Should start with hex digits followed by \r\n + assert ( + chunk_data[:10] + .strip() + .decode("ascii", errors="ignore") + .replace("\r\n", "") + .isalnum() + ) + + +async def test_compression_different_strategies( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test compression with different strategies.""" + # Test with best speed strategy (default) + msg1 = http.StreamWriter(protocol, loop) + msg1.enable_compression("deflate") # Default strategy + + await msg1.write_headers("POST /fast HTTP/1.1", CIMultiDict()) + await msg1.write_eof(b"Test data for compression test data for compression") + + buf1_len = len(buf) + + # Both should produce output + assert buf1_len > 0 + # Headers should be present + assert b"POST /fast HTTP/1.1\r\n" in buf + + # Since we can't easily test different compression strategies + # (the compressor initialization might not support strategy parameter), + # we just verify that compression works + + +async def test_chunked_headers_single_write_with_set_eof( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test that set_eof combines headers and chunked EOF in single write.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_chunking() + + # Write headers - should be buffered + headers = CIMultiDict({"Transfer-Encoding": "chunked", "Host": "example.com"}) + await msg.write_headers("GET /test HTTP/1.1", headers) + assert len(buf) == 0 # Headers not sent yet + assert not transport.writelines.called # type: ignore[attr-defined] # No writelines calls yet + + # Call set_eof - should send headers + chunked EOF in single write call + msg.set_eof() + + # Should have exactly one write call (since payload is small, writelines falls back to write) + assert transport.write.call_count == 1 # type: ignore[attr-defined] + assert transport.writelines.call_count == 0 # type: ignore[attr-defined] # Not called for small payloads + + # The write call should have the combined headers and chunked EOF marker + write_data = transport.write.call_args[0][0] # type: ignore[attr-defined] + assert write_data.startswith(b"GET /test HTTP/1.1\r\n") + assert b"Transfer-Encoding: chunked\r\n" in write_data + assert write_data.endswith(b"\r\n\r\n0\r\n\r\n") # Headers end + chunked EOF + + # Verify final output + assert b"GET /test HTTP/1.1\r\n" in buf + assert b"Transfer-Encoding: chunked\r\n" in buf + assert buf.endswith(b"0\r\n\r\n") + + +async def test_send_headers_forces_header_write( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test that send_headers() forces writing buffered 
headers.""" + msg = http.StreamWriter(protocol, loop) + headers = CIMultiDict({"Content-Length": "10", "Host": "example.com"}) + + # Write headers (should be buffered) + await msg.write_headers("GET /test HTTP/1.1", headers) + assert len(buf) == 0 # Headers buffered + + # Force send headers + msg.send_headers() + + # Headers should now be written + assert b"GET /test HTTP/1.1\r\n" in buf + assert b"Content-Length: 10\r\n" in buf + assert b"Host: example.com\r\n" in buf + + # Writing body should not resend headers + buf.clear() + await msg.write(b"0123456789") + assert b"GET /test" not in buf # Headers not repeated + assert buf == b"0123456789" # Just the body + + +async def test_send_headers_idempotent( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test that send_headers() is idempotent and safe to call multiple times.""" + msg = http.StreamWriter(protocol, loop) + headers = CIMultiDict({"Content-Length": "5", "Host": "example.com"}) + + # Write headers (should be buffered) + await msg.write_headers("GET /test HTTP/1.1", headers) + assert len(buf) == 0 # Headers buffered + + # Force send headers + msg.send_headers() + headers_output = bytes(buf) + + # Call send_headers again - should be no-op + msg.send_headers() + assert buf == headers_output # No additional output + + # Call send_headers after headers already sent - should be no-op + await msg.write(b"hello") + msg.send_headers() + assert buf[len(headers_output) :] == b"hello" # Only body added + + +async def test_send_headers_no_buffered_headers( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test that send_headers() is safe when no headers are buffered.""" + msg = http.StreamWriter(protocol, loop) + + # Call send_headers without writing headers first + msg.send_headers() # Should not crash + assert len(buf) == 0 # No output + + +async def test_write_drain_condition_with_small_buffer( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test that drain is not called when buffer_size <= LIMIT.""" + msg = http.StreamWriter(protocol, loop) + + # Write headers first + await msg.write_headers("GET /test HTTP/1.1", CIMultiDict()) + msg.send_headers() # Send headers to start with clean state + + # Reset buffer size manually since send_headers doesn't do it + msg.buffer_size = 0 + + # Reset drain helper mock + protocol._drain_helper.reset_mock() # type: ignore[attr-defined] + + # Write small amount of data with drain=True but buffer under limit + small_data = b"x" * 100 # Much less than LIMIT (2**16) + await msg.write(small_data, drain=True) + + # Drain should NOT be called because buffer_size <= LIMIT + assert not protocol._drain_helper.called # type: ignore[attr-defined] + assert msg.buffer_size == 100 + assert small_data in buf + + +async def test_write_drain_condition_with_large_buffer( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test that drain is called only when drain=True AND buffer_size > LIMIT.""" + msg = http.StreamWriter(protocol, loop) + + # Write headers first + await msg.write_headers("GET /test HTTP/1.1", CIMultiDict()) + msg.send_headers() # Send headers to start with clean state + + # Reset buffer size manually since send_headers doesn't do it + msg.buffer_size = 0 + + # Reset drain helper mock + 
protocol._drain_helper.reset_mock() # type: ignore[attr-defined] + + # Write large amount of data with drain=True + large_data = b"x" * (2**16 + 1) # Just over LIMIT + await msg.write(large_data, drain=True) + + # Drain should be called because drain=True AND buffer_size > LIMIT + assert protocol._drain_helper.called # type: ignore[attr-defined] + assert msg.buffer_size == 0 # Buffer reset after drain + assert large_data in buf + + +async def test_write_no_drain_with_large_buffer( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test that drain is not called when drain=False even with large buffer.""" + msg = http.StreamWriter(protocol, loop) + + # Write headers first + await msg.write_headers("GET /test HTTP/1.1", CIMultiDict()) + msg.send_headers() # Send headers to start with clean state + + # Reset buffer size manually since send_headers doesn't do it + msg.buffer_size = 0 + + # Reset drain helper mock + protocol._drain_helper.reset_mock() # type: ignore[attr-defined] + + # Write large amount of data with drain=False + large_data = b"x" * (2**16 + 1) # Just over LIMIT + await msg.write(large_data, drain=False) + + # Drain should NOT be called because drain=False + assert not protocol._drain_helper.called # type: ignore[attr-defined] + assert msg.buffer_size == (2**16 + 1) # Buffer not reset + assert large_data in buf + + +async def test_set_eof_idempotent( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test that set_eof() is idempotent and can be called multiple times safely.""" + msg = http.StreamWriter(protocol, loop) + + # Test 1: Multiple set_eof calls with buffered headers + headers = CIMultiDict({"Content-Length": "0"}) + await msg.write_headers("GET /test HTTP/1.1", headers) + + # First set_eof should send headers + msg.set_eof() + first_output = buf + assert b"GET /test HTTP/1.1\r\n" in first_output + assert b"Content-Length: 0\r\n" in first_output + + # Second set_eof should be no-op + msg.set_eof() + assert bytes(buf) == first_output # No additional output + + # Third set_eof should also be no-op + msg.set_eof() + assert bytes(buf) == first_output # Still no additional output + + # Test 2: set_eof with chunked encoding + buf.clear() + msg2 = http.StreamWriter(protocol, loop) + msg2.enable_chunking() + + headers2 = CIMultiDict({"Transfer-Encoding": "chunked"}) + await msg2.write_headers("POST /data HTTP/1.1", headers2) + + # First set_eof should send headers + chunked EOF + msg2.set_eof() + chunked_output = buf + assert b"POST /data HTTP/1.1\r\n" in buf + assert b"Transfer-Encoding: chunked\r\n" in buf + assert b"0\r\n\r\n" in buf # Chunked EOF marker + + # Second set_eof should be no-op + msg2.set_eof() + assert buf == chunked_output # No additional output + + # Test 3: set_eof after headers already sent + buf.clear() + msg3 = http.StreamWriter(protocol, loop) + + headers3 = CIMultiDict({"Content-Length": "5"}) + await msg3.write_headers("PUT /update HTTP/1.1", headers3) + + # Send headers by writing some data + await msg3.write(b"hello") + headers_and_body = buf + + # set_eof after headers sent should be no-op + msg3.set_eof() + assert buf == headers_and_body # No additional output + + # Another set_eof should still be no-op + msg3.set_eof() + assert buf == headers_and_body # Still no additional output + + +async def test_non_chunked_write_empty_body( + buf: bytearray, + protocol: BaseProtocol, + transport: mock.Mock, 
+ loop: asyncio.AbstractEventLoop, +) -> None: + """Test non-chunked response with empty body.""" + msg = http.StreamWriter(protocol, loop) + + # Non-chunked response with Content-Length: 0 + headers = CIMultiDict({"Content-Length": "0"}) + await msg.write_headers("GET /empty HTTP/1.1", headers) + + # Write empty body + await msg.write(b"") + + # Check the output + assert b"GET /empty HTTP/1.1\r\n" in buf + assert b"Content-Length: 0\r\n" in buf + + +async def test_chunked_headers_sent_with_empty_chunk_not_eof( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test chunked encoding where headers are sent without data and not EOF.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_chunking() + + headers = CIMultiDict({"Transfer-Encoding": "chunked"}) + await msg.write_headers("POST /upload HTTP/1.1", headers) + + # This should trigger the else case in _send_headers_with_payload + # by having no chunk data and is_eof=False + await msg.write(b"") + + # Headers should be sent alone + assert b"POST /upload HTTP/1.1\r\n" in buf + assert b"Transfer-Encoding: chunked\r\n" in buf + # Should not have any chunk markers yet + assert b"0\r\n" not in buf + + +async def test_chunked_set_eof_after_headers_sent( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test chunked encoding where set_eof is called after headers already sent.""" + msg = http.StreamWriter(protocol, loop) + msg.enable_chunking() + + headers = CIMultiDict({"Transfer-Encoding": "chunked"}) + await msg.write_headers("POST /data HTTP/1.1", headers) + + # Send headers by writing some data + await msg.write(b"test data") + buf.clear() # Clear buffer to check only what set_eof writes + + # This should trigger writing chunked EOF when headers already sent + msg.set_eof() + + # Should only have the chunked EOF marker + assert buf == b"0\r\n\r\n" + + +@pytest.mark.usefixtures("enable_writelines") +@pytest.mark.usefixtures("force_writelines_small_payloads") +async def test_write_eof_chunked_with_data_using_writelines( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test write_eof with chunked data that uses writelines (line 336).""" + msg = http.StreamWriter(protocol, loop) + msg.enable_chunking() + + headers = CIMultiDict({"Transfer-Encoding": "chunked"}) + await msg.write_headers("POST /data HTTP/1.1", headers) + + # Send headers first + await msg.write(b"initial") + transport.writelines.reset_mock() # type: ignore[attr-defined] + + # This should trigger writelines for final chunk with EOF + await msg.write_eof(b"final chunk data") + + # Should have used writelines + assert transport.writelines.called # type: ignore[attr-defined] + # Get the data from writelines call + writelines_data = transport.writelines.call_args[0][0] # type: ignore[attr-defined] + combined = b"".join(writelines_data) + + # Should have chunk size, data, and EOF marker + assert b"10\r\n" in combined # hex for 16 (length of "final chunk data") + assert b"final chunk data" in combined + assert b"0\r\n\r\n" in combined + + +async def test_send_headers_with_payload_chunked_eof_no_data( + buf: bytearray, + protocol: BaseProtocol, + transport: asyncio.Transport, + loop: asyncio.AbstractEventLoop, +) -> None: + """Test _send_headers_with_payload with chunked, is_eof=True but no chunk data.""" + msg = http.StreamWriter(protocol, loop) + 
msg.enable_chunking() + + headers = CIMultiDict({"Transfer-Encoding": "chunked"}) + await msg.write_headers("GET /test HTTP/1.1", headers) + + # This triggers the elif is_eof branch in _send_headers_with_payload + # by calling write_eof with empty chunk + await msg.write_eof(b"") + + # Should have headers and chunked EOF marker together + assert b"GET /test HTTP/1.1\r\n" in buf + assert b"Transfer-Encoding: chunked\r\n" in buf + assert buf.endswith(b"0\r\n\r\n") diff --git a/tests/test_web_response.py b/tests/test_web_response.py index 7b048970967..c07bf671d8c 100644 --- a/tests/test_web_response.py +++ b/tests/test_web_response.py @@ -1540,3 +1540,46 @@ async def test_passing_cimultidict_to_web_response_not_mutated( await resp.prepare(req) assert resp.content_length == 6 assert not headers + + +async def test_stream_response_sends_headers_immediately() -> None: + """Test that StreamResponse sends headers immediately.""" + writer = mock.create_autospec(StreamWriter, spec_set=True) + writer.write_headers = mock.AsyncMock() + writer.send_headers = mock.Mock() + writer.write_eof = mock.AsyncMock() + + req = make_request("GET", "/", writer=writer) + resp = StreamResponse() + + # StreamResponse should have _send_headers_immediately = True + assert resp._send_headers_immediately is True + + # Prepare the response + await resp.prepare(req) + + # Headers should be sent immediately + writer.send_headers.assert_called_once() + + +async def test_response_buffers_headers() -> None: + """Test that Response buffers headers for packet coalescing.""" + writer = mock.create_autospec(StreamWriter, spec_set=True) + writer.write_headers = mock.AsyncMock() + writer.send_headers = mock.Mock() + writer.write_eof = mock.AsyncMock() + + req = make_request("GET", "/", writer=writer) + resp = Response(body=b"hello") + + # Response should have _send_headers_immediately = False + assert resp._send_headers_immediately is False + + # Prepare the response + await resp.prepare(req) + + # Headers should NOT be sent immediately + writer.send_headers.assert_not_called() + + # But write_headers should have been called + writer.write_headers.assert_called_once() diff --git a/tests/test_web_sendfile.py b/tests/test_web_sendfile.py index 1776a3aabd3..61c3b49834f 100644 --- a/tests/test_web_sendfile.py +++ b/tests/test_web_sendfile.py @@ -3,6 +3,7 @@ from unittest import mock from aiohttp import hdrs +from aiohttp.http_writer import StreamWriter from aiohttp.test_utils import make_mocked_request from aiohttp.web_fileresponse import FileResponse @@ -125,3 +126,32 @@ def test_status_controlled_by_user(loop) -> None: loop.run_until_complete(file_sender.prepare(request)) assert file_sender._status == 203 + + +async def test_file_response_sends_headers_immediately() -> None: + """Test that FileResponse sends headers immediately (inherits from StreamResponse).""" + writer = mock.create_autospec(StreamWriter, spec_set=True) + writer.write_headers = mock.AsyncMock() + writer.send_headers = mock.Mock() + writer.write_eof = mock.AsyncMock() + + request = make_mocked_request("GET", "http://python.org/logo.png", writer=writer) + + filepath = mock.create_autospec(Path, spec_set=True) + filepath.name = "logo.png" + filepath.stat.return_value.st_size = 1024 + filepath.stat.return_value.st_mtime_ns = 1603733507222449291 + filepath.stat.return_value.st_mode = MOCK_MODE + + file_sender = FileResponse(filepath) + file_sender._path = filepath + file_sender._sendfile = mock.AsyncMock(return_value=None) # type: ignore[method-assign] + + # FileResponse 
inherits from StreamResponse, so should send immediately
+    assert file_sender._send_headers_immediately is True
+
+    # Prepare the response
+    await file_sender.prepare(request)
+
+    # Headers should be sent immediately
+    writer.send_headers.assert_called_once()

diff --git a/tests/test_web_server.py b/tests/test_web_server.py
index d2f1341afe0..09b7d0bc71b 100644
--- a/tests/test_web_server.py
+++ b/tests/test_web_server.py
@@ -261,9 +261,8 @@ async def handler(request):
     server = await aiohttp_raw_server(handler, logger=logger)
     cli = await aiohttp_client(server)

-    resp = await cli.get("/path/to")
-    with pytest.raises(client.ClientPayloadError):
-        await resp.read()
+    with pytest.raises(client.ServerDisconnectedError):
+        await cli.get("/path/to")

     logger.debug.assert_called_with("Ignored premature client disconnection")

From 1b5b0d9f0d825c68de0f75bc786cd0af14c99a77 Mon Sep 17 00:00:00 2001
From: "J. Nick Koston" 
Date: Sat, 24 May 2025 12:38:09 -0500
Subject: [PATCH 87/90] Release 3.12.0rc1 (#10993)

---
 CHANGES.rst         | 276 ++++++++++++++++++++++++++++++++++++++++++++
 aiohttp/__init__.py |   2 +-
 2 files changed, 277 insertions(+), 1 deletion(-)

diff --git a/CHANGES.rst b/CHANGES.rst
index 3ea3455294d..176dcf88179 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -10,6 +10,282 @@

 .. towncrier release notes start

+3.12.0rc1 (2025-05-24)
+======================
+
+Bug fixes
+---------
+
+- Fixed the :py:attr:`~aiohttp.web.WebSocketResponse.prepared` property to correctly reflect the prepared state, especially during timeout scenarios -- by :user:`bdraco`.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`6009`, :issue:`10988`.
+
+
+
+- ``Response`` objects are now always truthy, instead of following ``MutableMapping`` behaviour (which made a response evaluate to ``False`` when its mapping was empty).
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10119`.
+
+
+
+- Fixed connection reuse for file-like data payloads by ensuring buffer
+  truncation respects content-length boundaries and preventing a premature
+  connection closure race -- by :user:`bdraco`.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10325`, :issue:`10915`, :issue:`10941`, :issue:`10943`.
+
+
+
+- Fixed the pytest plugin to not use deprecated :py:mod:`asyncio` policy APIs.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10851`.
+
+
+
+- Fixed :py:class:`~aiohttp.resolver.AsyncResolver` not using the ``loop`` argument in versions 3.x, where it should still be supported -- by :user:`bdraco`.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10951`.
+
+
+
+
+Features
+--------
+
+- Added a comprehensive HTTP Digest Authentication client middleware (DigestAuthMiddleware)
+  that implements RFC 7616. The middleware supports all standard hash algorithms
+  (MD5, SHA, SHA-256, SHA-512) with session variants, handles both 'auth' and
+  'auth-int' quality of protection options, and automatically manages the
+  authentication flow by intercepting 401 responses and retrying with proper
+  credentials -- by :user:`feus4177`, :user:`TimMenninger`, and :user:`bdraco`.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`2213`, :issue:`10725`.
+
+
+
+- Added client middleware support -- by :user:`bdraco` and :user:`Dreamsorcerer`.
+
+  This change allows users to add middleware to the client session and requests, enabling features like
+  authentication, logging, and request/response modification without modifying the core
+  request logic. 
Additionally, the ``session`` attribute was added to ``ClientRequest``, + allowing middleware to access the session for making additional requests. + + + *Related issues and pull requests on GitHub:* + :issue:`9732`, :issue:`10902`, :issue:`10945`, :issue:`10952`, :issue:`10959`, :issue:`10968`. + + + +- Allow user setting zlib compression backend -- by :user:`TimMenninger` + + This change allows the user to call :func:`aiohttp.set_zlib_backend()` with the + zlib compression module of their choice. Default behavior continues to use + the builtin ``zlib`` library. + + + *Related issues and pull requests on GitHub:* + :issue:`9798`. + + + +- Added support for overriding the base URL with an absolute one in client sessions + -- by :user:`vivodi`. + + + *Related issues and pull requests on GitHub:* + :issue:`10074`. + + + +- Added ``host`` parameter to ``aiohttp_server`` fixture -- by :user:`christianwbrock`. + + + *Related issues and pull requests on GitHub:* + :issue:`10120`. + + + +- Detect blocking calls in coroutines using BlockBuster -- by :user:`cbornet`. + + + *Related issues and pull requests on GitHub:* + :issue:`10433`. + + + +- Added ``socket_factory`` to :py:class:`aiohttp.TCPConnector` to allow specifying custom socket options + -- by :user:`TimMenninger`. + + + *Related issues and pull requests on GitHub:* + :issue:`10474`, :issue:`10520`, :issue:`10961`, :issue:`10962`. + + + +- Started building armv7l manylinux wheels -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10797`. + + + +- Implemented shared DNS resolver management to fix excessive resolver object creation + when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures + only one ``DNSResolver`` object is created for default configurations, significantly + reducing resource usage and improving performance for applications using multiple + client sessions simultaneously -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10847`, :issue:`10923`, :issue:`10946`. + + + +- Upgraded to LLHTTP 9.3.0 -- by :user:`Dreamsorcerer`. + + + *Related issues and pull requests on GitHub:* + :issue:`10972`. + + + +- Optimized small HTTP requests/responses by coalescing headers and body into a single TCP packet -- by :user:`bdraco`. + + This change enhances network efficiency by reducing the number of packets sent for small HTTP payloads, improving latency and reducing overhead. Most importantly, this fixes compatibility with memory-constrained IoT devices that can only perform a single read operation and expect HTTP requests in one packet. The optimization uses zero-copy ``writelines`` when coalescing data and works with both regular and chunked transfer encoding. + + When ``aiohttp`` uses client middleware to communicate with an ``aiohttp`` server, connection reuse is more likely to occur since complete responses arrive in a single packet for small payloads. + + This aligns ``aiohttp`` with other popular HTTP clients that already coalesce small requests. + + + *Related issues and pull requests on GitHub:* + :issue:`10991`. + + + + +Improved documentation +---------------------- + +- Improved documentation for middleware by adding warnings and examples about + request body stream consumption. The documentation now clearly explains that + request body streams can only be read once and provides best practices for + sharing parsed request data between middleware and handlers -- by :user:`bdraco`. 
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`2914`.
+
+
+
+
+Packaging updates and notes for downstreams
+-------------------------------------------
+
+- Removed non-SPDX license description from ``setup.cfg`` -- by :user:`devanshu-ziphq`.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10662`.
+
+
+
+- Added support for building against the system ``llhttp`` library -- by :user:`mgorny`.
+
+  This change adds support for the :envvar:`AIOHTTP_USE_SYSTEM_DEPS` environment variable, which
+  can be used to build aiohttp against the system install of the ``llhttp`` library rather
+  than the vendored one.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10759`.
+
+
+
+- ``aiodns`` is now installed on Windows with the speedups extra -- by :user:`bdraco`.
+
+  As of ``aiodns`` 3.3.0, ``SelectorEventLoop`` is no longer required when using ``pycares`` 4.7.0 or later.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10823`.
+
+
+
+- Fixed a compatibility issue with Cython 3.1.1 -- by :user:`bdraco`.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10877`.
+
+
+
+
+Contributor-facing changes
+--------------------------
+
+- Sped up tests by disabling the ``blockbuster`` fixture for the ``test_static_file_huge`` and ``test_static_file_huge_cancel`` tests -- by :user:`dikos1337`.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`9705`, :issue:`10761`.
+
+
+
+- Updated tests to avoid using deprecated :py:mod:`asyncio` policy APIs and
+  make them compatible with Python 3.14.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10851`.
+
+
+
+- Added Winloop to the test suite, in preparation for supporting it in the future -- by :user:`Vizonex`.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10922`.
+
+
+
+
+Miscellaneous internal changes
+------------------------------
+
+- Added support for the ``partitioned`` attribute in the ``set_cookie`` method.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`9870`.
+
+
+
+- Setting :attr:`aiohttp.web.StreamResponse.last_modified` to an unsupported type will now raise :exc:`TypeError` instead of silently failing -- by :user:`bdraco`.
+
+
+  *Related issues and pull requests on GitHub:*
+  :issue:`10146`.
+
+
+
+
+----
+
+
 3.12.0rc0 (2025-05-23)
 ======================

diff --git a/aiohttp/__init__.py b/aiohttp/__init__.py
index 0de2fb48b1b..fdad4aac495 100644
--- a/aiohttp/__init__.py
+++ b/aiohttp/__init__.py
@@ -1,4 +1,4 @@
-__version__ = "3.12.0rc0"
+__version__ = "3.12.0rc1"

 from typing import TYPE_CHECKING, Tuple

From df30c550251cfe90da13100ca91ec72b01245841 Mon Sep 17 00:00:00 2001
From: Sam Bull 
Date: Sat, 24 May 2025 22:25:57 +0100
Subject: [PATCH 88/90] Cookbook changes (#10978)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: J. 
Nick Koston --- .mypy.ini | 2 +- docs/client_advanced.rst | 195 ++----------- docs/client_middleware_cookbook.rst | 351 ++++-------------------- docs/client_reference.rst | 127 +++++++++ docs/code/client_middleware_cookbook.py | 143 ++++++++++ docs/conf.py | 3 +- docs/spelling_wordlist.txt | 1 + setup.cfg | 1 + 8 files changed, 363 insertions(+), 460 deletions(-) create mode 100644 docs/code/client_middleware_cookbook.py diff --git a/.mypy.ini b/.mypy.ini index 2167434fff4..a4e914e757d 100644 --- a/.mypy.ini +++ b/.mypy.ini @@ -1,5 +1,5 @@ [mypy] -files = aiohttp, examples, tests +files = aiohttp, docs/code, examples, tests check_untyped_defs = True follow_imports_for_stubs = True disallow_any_decorated = True diff --git a/docs/client_advanced.rst b/docs/client_advanced.rst index dcbd743d6fb..09ec0f1f356 100644 --- a/docs/client_advanced.rst +++ b/docs/client_advanced.rst @@ -131,29 +131,33 @@ Client Middleware ----------------- The client supports middleware to intercept requests and responses. This can be -useful for authentication, logging, request/response modification, and retries. +useful for authentication, logging, request/response modification, retries etc. -For practical examples and common middleware patterns, see the :ref:`aiohttp-client-middleware-cookbook`. +For more examples and common middleware patterns, see the :ref:`aiohttp-client-middleware-cookbook`. -Creating Middleware -^^^^^^^^^^^^^^^^^^^ +Creating a middleware +^^^^^^^^^^^^^^^^^^^^^ -To create a middleware, define an async function (or callable class) that accepts a request -and a handler function, and returns a response. Middleware must follow the -:type:`ClientMiddlewareType` signature (see :ref:`aiohttp-client-reference` for details). +To create a middleware, define an async function (or callable class) that accepts a request object +and a handler function, and returns a response. Middlewares must follow the +:type:`ClientMiddlewareType` signature:: -Using Middleware -^^^^^^^^^^^^^^^^ + async def auth_middleware(req: ClientRequest, handler: ClientHandlerType) -> ClientResponse: + req.headers["Authorization"] = get_auth_header() + return await handler(req) + +Using Middlewares +^^^^^^^^^^^^^^^^^ -You can apply middleware to a client session or to individual requests:: +You can apply middlewares to a client session or to individual requests:: # Apply to all requests in a session async with ClientSession(middlewares=(my_middleware,)) as session: - resp = await session.get('http://example.com') + resp = await session.get("http://example.com") # Apply to a specific request async with ClientSession() as session: - resp = await session.get('http://example.com', middlewares=(my_middleware,)) + resp = await session.get("http://example.com", middlewares=(my_middleware,)) Middleware Chaining ^^^^^^^^^^^^^^^^^^^ @@ -162,13 +166,14 @@ Multiple middlewares are applied in the order they are listed:: # Middlewares are applied in order: logging -> auth -> request async with ClientSession(middlewares=(logging_middleware, auth_middleware)) as session: - resp = await session.get('http://example.com') + async with session.get("http://example.com") as resp: + ... -A key aspect to understand about the flat middleware structure is that the execution flow follows this pattern: +A key aspect to understand about the middleware sequence is that the execution flow follows this pattern: 1. The first middleware in the list is called first and executes its code before calling the handler -2. 
The handler is the next middleware in the chain (or the actual request handler if there are no more middleware) -3. When the handler returns a response, execution continues in the first middleware after the handler call +2. The handler is the next middleware in the chain (or the request handler if there are no more middlewares) +3. When the handler returns a response, execution continues from the last middleware right after the handler call 4. This creates a nested "onion-like" pattern for execution For example, with ``middlewares=(middleware1, middleware2)``, the execution order would be: @@ -179,7 +184,12 @@ For example, with ``middlewares=(middleware1, middleware2)``, the execution orde 4. Exit ``middleware2`` (post-response code) 5. Exit ``middleware1`` (post-response code) -This flat structure means that middleware is applied on each retry attempt inside the client's retry loop, not just once before all retries. This allows middleware to modify requests freshly on each retry attempt. +This flat structure means that a middleware is applied on each retry attempt inside the client's retry loop, +not just once before all retries. This allows middleware to modify requests freshly on each retry attempt. + +For example, if we had a retry middleware and a logging middleware, and we want every retried request to be +logged separately, then we'd need to specify ``middlewares=(retry_mw, logging_mw)``. If we reversed the order +to ``middlewares=(logging_mw, retry_mw)``, then we'd only log once regardless of how many retries are done. .. note:: @@ -188,157 +198,6 @@ This flat structure means that middleware is applied on each retry attempt insid like adding static headers, you can often use request parameters (e.g., ``headers``) or session configuration instead. -Common Middleware Patterns -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. _client-middleware-retry: - -Authentication and Retry -"""""""""""""""""""""""" - -There are two recommended approaches for implementing retry logic: - -1. **For Loop Pattern (Simple Cases)** - - Use a bounded ``for`` loop when the number of retry attempts is known and fixed:: - - import hashlib - from aiohttp import ClientSession, ClientRequest, ClientResponse, ClientHandlerType - - async def auth_retry_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - # Try up to 3 authentication methods - for attempt in range(3): - if attempt == 0: - # First attempt: use API key - request.headers["X-API-Key"] = "my-api-key" - elif attempt == 1: - # Second attempt: use Bearer token - request.headers["Authorization"] = "Bearer fallback-token" - else: - # Third attempt: use hash-based signature - secret_key = "my-secret-key" - url_path = str(request.url.path) - signature = hashlib.sha256(f"{url_path}{secret_key}".encode()).hexdigest() - request.headers["X-Signature"] = signature - - # Send the request - response = await handler(request) - - # If successful or not an auth error, return immediately - if response.status != 401: - return response - - # Return the last response if all retries are exhausted - return response - -2. 
**While Loop Pattern (Complex Cases)** - - For more complex scenarios, use a ``while`` loop with strict exit conditions:: - - import logging - - _LOGGER = logging.getLogger(__name__) - - class RetryMiddleware: - def __init__(self, max_retries: int = 3): - self.max_retries = max_retries - - async def __call__( - self, - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - retry_count = 0 - - # Always have clear exit conditions - while retry_count <= self.max_retries: - # Send the request - response = await handler(request) - - # Exit conditions - if 200 <= response.status < 400 or retry_count >= self.max_retries: - return response - - # Retry logic for different status codes - if response.status in (401, 429, 500, 502, 503, 504): - retry_count += 1 - _LOGGER.debug(f"Retrying request (attempt {retry_count}/{self.max_retries})") - continue - - # For any other status code, don't retry - return response - - # Safety return (should never reach here) - return response - -Request Modification -"""""""""""""""""""" - -Modify request properties based on request content:: - - async def content_type_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - # Examine URL path to determine content-type - if request.url.path.endswith('.json'): - request.headers['Content-Type'] = 'application/json' - elif request.url.path.endswith('.xml'): - request.headers['Content-Type'] = 'application/xml' - - # Add custom headers based on HTTP method - if request.method == 'POST': - request.headers['X-Request-ID'] = f"post-{id(request)}" - - return await handler(request) - -Avoiding Infinite Recursion -^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. warning:: - - Using the same session from within middleware can cause infinite recursion if - the middleware makes HTTP requests using the same session that has the middleware - applied. This is especially risky in token refresh middleware or retry logic. - - When implementing retry or refresh logic, always use bounded loops - (e.g., ``for _ in range(2):`` instead of ``while True:``) to prevent infinite recursion. - -To avoid recursion when making requests inside middleware, use one of these approaches: - -**Option 1:** Disable middleware for internal requests:: - - async def log_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - async with request.session.post( - "https://logapi.example/log", - json={"url": str(request.url)}, - middlewares=() # This prevents infinite recursion - ) as resp: - pass - - return await handler(request) - -**Option 2:** Check request details to avoid recursive application:: - - async def log_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - if request.url.host != "logapi.example": # Avoid infinite recursion - async with request.session.post( - "https://logapi.example/log", - json={"url": str(request.url)} - ) as resp: - pass - - return await handler(request) - Custom Cookies -------------- diff --git a/docs/client_middleware_cookbook.rst b/docs/client_middleware_cookbook.rst index 4b8d6ddd5f8..33994160fba 100644 --- a/docs/client_middleware_cookbook.rst +++ b/docs/client_middleware_cookbook.rst @@ -5,331 +5,102 @@ Client Middleware Cookbook ========================== -This cookbook provides practical examples of implementing client middleware for common use cases. +This cookbook provides examples of how client middlewares can be used for common use cases. -.. 
note::
+Simple Retry Middleware
+-----------------------
 
-   All examples in this cookbook are also available as complete, runnable scripts in the
-   ``examples/`` directory of the aiohttp repository. Look for files named ``*_middleware.py``.
+It's very easy to create middlewares that retry a request when a given condition is met:
 
-.. _cookbook-basic-auth-middleware:
+.. literalinclude:: code/client_middleware_cookbook.py
+   :pyobject: retry_middleware
 
-Basic Authentication Middleware
--------------------------------
+.. warning::
 
-Basic authentication is a simple authentication scheme built into the HTTP protocol.
-Here's a middleware that automatically adds Basic Auth headers to all requests:
+   It is recommended to ensure loops are bounded (e.g. using a ``for`` loop) to avoid
+   creating an infinite loop.
 
-.. code-block:: python
+Logging to an external service
+------------------------------
 
-   import base64
-   from aiohttp import ClientRequest, ClientResponse, ClientHandlerType, hdrs
+If we needed to log our requests via an API call to an external server or similar, we could
+create a simple middleware like this:
 
-   class BasicAuthMiddleware:
-       """Middleware that adds Basic Authentication to all requests."""
+.. literalinclude:: code/client_middleware_cookbook.py
+   :pyobject: api_logging_middleware
 
-       def __init__(self, username: str, password: str) -> None:
-           self.username = username
-           self.password = password
-           self._auth_header = self._encode_credentials()
+.. warning::
 
-       def _encode_credentials(self) -> str:
-           """Encode username and password to base64."""
-           credentials = f"{self.username}:{self.password}"
-           encoded = base64.b64encode(credentials.encode()).decode()
-           return f"Basic {encoded}"
+   Using the same session from within a middleware can cause infinite recursion if
+   that request gets processed again by the middleware.
 
-       async def __call__(
-           self,
-           request: ClientRequest,
-           handler: ClientHandlerType
-       ) -> ClientResponse:
-           """Add Basic Auth header to the request."""
-           # Only add auth if not already present
-           if hdrs.AUTHORIZATION not in request.headers:
-               request.headers[hdrs.AUTHORIZATION] = self._auth_header
+   To avoid such recursion, a middleware should typically make requests with
+   ``middlewares=()`` or else contain some condition to stop the request triggering
+   the same logic when it is processed again by the middleware (e.g. by whitelisting
+   the API domain of the request).
 
-           # Proceed with the request
-           return await handler(request)
+Token Refresh Middleware
+------------------------
 
-Usage example:
+If you need to refresh access tokens to continue accessing an API, this is also a good
+candidate for a middleware. For example, you could check for a 401 response, then
+refresh the token and retry:
 
-.. code-block:: python
+.. literalinclude:: code/client_middleware_cookbook.py
+   :pyobject: TokenRefresh401Middleware
 
-   import aiohttp
-   import asyncio
-   import logging
+If you have an expiry time for the token, you could refresh at the expiry time, to avoid the
+failed request:
 
-   _LOGGER = logging.getLogger(__name__)
+.. literalinclude:: code/client_middleware_cookbook.py
+   :pyobject: TokenRefreshExpiryMiddleware
 
-   async def main():
-       # Create middleware instance
-       auth_middleware = BasicAuthMiddleware("user", "pass")
+Or you could even refresh preemptively in a background task to avoid any API delays. 
This is probably more +efficient to implement without a middleware: - # Use middleware in session - async with aiohttp.ClientSession(middlewares=(auth_middleware,)) as session: - async with session.get("https://httpbin.org/basic-auth/user/pass") as resp: - _LOGGER.debug("Status: %s", resp.status) - data = await resp.json() - _LOGGER.debug("Response: %s", data) +.. literalinclude:: code/client_middleware_cookbook.py + :pyobject: token_refresh_preemptively_example - asyncio.run(main()) +Or combine the above approaches to create a more robust solution. -.. _cookbook-retry-middleware: +.. note:: -Simple Retry Middleware ------------------------ + These can also be adjusted to handle proxy auth by modifying + :attr:`ClientRequest.proxy_headers`. -A retry middleware that automatically retries failed requests with exponential backoff: - -.. code-block:: python - - import asyncio - import logging - from http import HTTPStatus - from typing import Union, Set - from aiohttp import ClientRequest, ClientResponse, ClientHandlerType - - _LOGGER = logging.getLogger(__name__) - - DEFAULT_RETRY_STATUSES = { - HTTPStatus.TOO_MANY_REQUESTS, - HTTPStatus.INTERNAL_SERVER_ERROR, - HTTPStatus.BAD_GATEWAY, - HTTPStatus.SERVICE_UNAVAILABLE, - HTTPStatus.GATEWAY_TIMEOUT - } - - class RetryMiddleware: - """Middleware that retries failed requests with exponential backoff.""" - - def __init__( - self, - max_retries: int = 3, - retry_statuses: Union[Set[int], None] = None, - initial_delay: float = 1.0, - backoff_factor: float = 2.0 - ) -> None: - self.max_retries = max_retries - self.retry_statuses = retry_statuses or DEFAULT_RETRY_STATUSES - self.initial_delay = initial_delay - self.backoff_factor = backoff_factor - - async def __call__( - self, - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - """Execute request with retry logic.""" - last_response = None - delay = self.initial_delay - - for attempt in range(self.max_retries + 1): - if attempt > 0: - _LOGGER.info( - "Retrying request to %s (attempt %s/%s)", - request.url, - attempt + 1, - self.max_retries + 1 - ) - - # Execute the request - response = await handler(request) - last_response = response - - # Check if we should retry - if response.status not in self.retry_statuses: - return response - - # Don't retry if we've exhausted attempts - if attempt >= self.max_retries: - _LOGGER.warning( - "Max retries (%s) exceeded for %s", - self.max_retries, - request.url - ) - return response - - # Wait before retrying - _LOGGER.debug("Waiting %ss before retry...", delay) - await asyncio.sleep(delay) - delay *= self.backoff_factor - - # Return the last response - return last_response - -Usage example: - -.. code-block:: python - - import aiohttp - import asyncio - import logging - from http import HTTPStatus - - _LOGGER = logging.getLogger(__name__) - - RETRY_STATUSES = { - HTTPStatus.TOO_MANY_REQUESTS, - HTTPStatus.INTERNAL_SERVER_ERROR, - HTTPStatus.BAD_GATEWAY, - HTTPStatus.SERVICE_UNAVAILABLE, - HTTPStatus.GATEWAY_TIMEOUT - } - - async def main(): - # Create retry middleware with custom settings - retry_middleware = RetryMiddleware( - max_retries=3, - retry_statuses=RETRY_STATUSES, - initial_delay=0.5, - backoff_factor=2.0 - ) - - async with aiohttp.ClientSession(middlewares=(retry_middleware,)) as session: - # This will automatically retry on server errors - async with session.get("https://httpbin.org/status/500") as resp: - _LOGGER.debug("Final status: %s", resp.status) - - asyncio.run(main()) - -.. 
_cookbook-combining-middleware: - -Combining Multiple Middleware ------------------------------ - -You can combine multiple middleware to create powerful request pipelines: - -.. code-block:: python - - import time - import logging - from aiohttp import ClientRequest, ClientResponse, ClientHandlerType - - _LOGGER = logging.getLogger(__name__) - - class LoggingMiddleware: - """Middleware that logs request timing and response status.""" - - async def __call__( - self, - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - start_time = time.monotonic() - - # Log request - _LOGGER.debug("[REQUEST] %s %s", request.method, request.url) - - # Execute request - response = await handler(request) +Server-side Request Forgery Protection +-------------------------------------- - # Log response - duration = time.monotonic() - start_time - _LOGGER.debug("[RESPONSE] %s in %.2fs", response.status, duration) +To provide protection against server-side request forgery, we could blacklist any internal +IPs or domains. We could create a middleware that rejects requests made to a blacklist: - return response +.. literalinclude:: code/client_middleware_cookbook.py + :pyobject: ssrf_middleware - # Combine multiple middleware - async def main(): - # Middleware are applied in order: logging -> auth -> retry -> request - logging_middleware = LoggingMiddleware() - auth_middleware = BasicAuthMiddleware("user", "pass") - retry_middleware = RetryMiddleware(max_retries=2) +.. warning:: - async with aiohttp.ClientSession( - middlewares=(logging_middleware, auth_middleware, retry_middleware) - ) as session: - async with session.get("https://httpbin.org/basic-auth/user/pass") as resp: - text = await resp.text() - _LOGGER.debug("Response text: %s", text) + The above example is simplified for demonstration purposes. A production-ready + implementation should also check IPv6 addresses (``::1``), private IP ranges, + link-local addresses, and other internal hostnames. Consider using a well-tested + library for SSRF protection in production environments. -.. _cookbook-token-refresh-middleware: +If you know that your services correctly reject requests with an incorrect `Host` header, then +that may provide sufficient protection. Otherwise, we still have a concern with an attacker's +own domain resolving to a blacklisted IP. To provide complete protection, we can also +create a custom resolver: -Token Refresh Middleware ------------------------- +.. literalinclude:: code/client_middleware_cookbook.py + :pyobject: SSRFConnector + +Using both of these together in a session should provide full SSRF protection. -A more advanced example showing JWT token refresh: - -.. 
code-block:: python - - import asyncio - import time - from http import HTTPStatus - from typing import Union - from aiohttp import ClientRequest, ClientResponse, ClientHandlerType, hdrs - - class TokenRefreshMiddleware: - """Middleware that handles JWT token refresh automatically.""" - - def __init__(self, token_endpoint: str, refresh_token: str) -> None: - self.token_endpoint = token_endpoint - self.refresh_token = refresh_token - self.access_token: Union[str, None] = None - self.token_expires_at: Union[float, None] = None - self._refresh_lock = asyncio.Lock() - - async def _refresh_access_token(self, session) -> str: - """Refresh the access token using the refresh token.""" - async with self._refresh_lock: - # Check if another coroutine already refreshed the token - if self.token_expires_at and time.time() < self.token_expires_at: - return self.access_token - - # Make refresh request without middleware to avoid recursion - async with session.post( - self.token_endpoint, - json={"refresh_token": self.refresh_token}, - middlewares=() # Disable middleware for this request - ) as resp: - resp.raise_for_status() - data = await resp.json() - - if "access_token" not in data: - raise ValueError("No access_token in refresh response") - - self.access_token = data["access_token"] - # Token expires in 1 hour for demo, refresh 5 min early - expires_in = data.get("expires_in", 3600) - self.token_expires_at = time.time() + expires_in - 300 - return self.access_token - - async def __call__( - self, - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - """Add auth token to request, refreshing if needed.""" - # Skip token for refresh endpoint - if str(request.url).endswith('/token/refresh'): - return await handler(request) - - # Refresh token if needed - if not self.access_token or ( - self.token_expires_at and time.time() >= self.token_expires_at - ): - await self._refresh_access_token(request.session) - - # Add token to request - request.headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}" - - # Execute request - response = await handler(request) - - # If we get 401, try refreshing token once - if response.status == HTTPStatus.UNAUTHORIZED: - await self._refresh_access_token(request.session) - request.headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}" - response = await handler(request) - - return response Best Practices -------------- 1. **Keep middleware focused**: Each middleware should have a single responsibility. -2. **Order matters**: Middleware execute in the order they're listed. Place logging first, +2. **Order matters**: Middlewares execute in the order they're listed. Place logging first, authentication before retry, etc. 3. **Avoid infinite recursion**: When making HTTP requests inside middleware, either: diff --git a/docs/client_reference.rst b/docs/client_reference.rst index 40b4f7bcbf9..faae389f95c 100644 --- a/docs/client_reference.rst +++ b/docs/client_reference.rst @@ -1848,6 +1848,133 @@ manually. :raise TypeError: if message is :const:`~aiohttp.WSMsgType.BINARY`. :raise ValueError: if message is not valid JSON. +ClientRequest +------------- + +.. class:: ClientRequest + + Represents an HTTP request to be sent by the client. + + This object encapsulates all the details of an HTTP request before it is sent. + It is primarily used within client middleware to inspect or modify requests. + + .. note:: + + You typically don't create ``ClientRequest`` instances directly. 
They are + created internally by :class:`ClientSession` methods and passed to middleware. + + For more information about using middleware, see :ref:`aiohttp-client-middleware`. + + .. attribute:: body + :type: Payload | FormData + + The request body payload. This can be: + + - A :class:`Payload` object for raw data (default is empty bytes ``b""``) + - A :class:`FormData` object for form submissions + + .. attribute:: chunked + :type: bool | None + + Whether to use chunked transfer encoding: + + - ``True``: Use chunked encoding + - ``False``: Don't use chunked encoding + - ``None``: Automatically determine based on body + + .. attribute:: compress + :type: str | None + + The compression encoding for the request body. Common values include + ``'gzip'`` and ``'deflate'``, but any string value is technically allowed. + ``None`` means no compression. + + .. attribute:: headers + :type: multidict.CIMultiDict + + The HTTP headers that will be sent with the request. This is a case-insensitive + multidict that can be modified by middleware. + + .. code-block:: python + + # Add or modify headers + request.headers['X-Custom-Header'] = 'value' + request.headers['User-Agent'] = 'MyApp/1.0' + + .. attribute:: is_ssl + :type: bool + + ``True`` if the request uses a secure scheme (e.g., HTTPS, WSS), ``False`` otherwise. + + .. attribute:: method + :type: str + + The HTTP method of the request (e.g., ``'GET'``, ``'POST'``, ``'PUT'``, etc.). + + .. attribute:: original_url + :type: yarl.URL + + The original URL passed to the request method, including any fragment. + This preserves the exact URL as provided by the user. + + .. attribute:: proxy + :type: yarl.URL | None + + The proxy URL if the request will be sent through a proxy, ``None`` otherwise. + + .. attribute:: proxy_headers + :type: multidict.CIMultiDict | None + + Headers to be sent to the proxy server (e.g., ``Proxy-Authorization``). + Only set when :attr:`proxy` is not ``None``. + + .. attribute:: response_class + :type: type[ClientResponse] + + The class to use for creating the response object. Defaults to + :class:`ClientResponse` but can be customized for special handling. + + .. attribute:: server_hostname + :type: str | None + + Override the hostname for SSL certificate verification. Useful when + connecting through proxies or to IP addresses. + + .. attribute:: session + :type: ClientSession + + The client session that created this request. Useful for accessing + session-level configuration or making additional requests within middleware. + + .. warning:: + Be careful when making requests with the same session inside middleware + to avoid infinite recursion. Use ``middlewares=()`` parameter when needed. + + .. attribute:: ssl + :type: ssl.SSLContext | bool | Fingerprint + + SSL validation configuration for this request: + + - ``True``: Use default SSL verification + - ``False``: Skip SSL verification + - :class:`ssl.SSLContext`: Custom SSL context + - :class:`Fingerprint`: Verify specific certificate fingerprint + + .. attribute:: url + :type: yarl.URL + + The target URL of the request with the fragment (``#...``) part stripped. + This is the actual URL that will be used for the connection. + + .. note:: + To access the original URL with fragment, use :attr:`original_url`. + + .. attribute:: version + :type: HttpVersion + + The HTTP version to use for the request (e.g., ``HttpVersion(1, 1)`` for HTTP/1.1). 
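+
+   A minimal sketch of a middleware tying these attributes together (the
+   ``X-Request-Host`` header below is purely illustrative, not a header
+   aiohttp itself sets)::
+
+      async def tag_middleware(
+          req: ClientRequest, handler: ClientHandlerType
+      ) -> ClientResponse:
+          # Mutating req.headers before calling the handler changes
+          # what is sent on the wire for this attempt.
+          req.headers["X-Request-Host"] = req.url.host or ""
+          return await handler(req)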
+ + Utilities --------- diff --git a/docs/code/client_middleware_cookbook.py b/docs/code/client_middleware_cookbook.py new file mode 100644 index 00000000000..b013e76b206 --- /dev/null +++ b/docs/code/client_middleware_cookbook.py @@ -0,0 +1,143 @@ +"""This is a collection of semi-complete examples that get included into the cookbook page.""" + +import asyncio +import logging +import time +from collections.abc import AsyncIterator, Sequence +from contextlib import asynccontextmanager, suppress + +from aiohttp import ( + ClientError, + ClientHandlerType, + ClientRequest, + ClientResponse, + ClientSession, + TCPConnector, +) +from aiohttp.abc import ResolveResult +from aiohttp.tracing import Trace + + +class SSRFError(ClientError): + """A request was made to a blacklisted host.""" + + +async def retry_middleware( + req: ClientRequest, handler: ClientHandlerType +) -> ClientResponse: + for _ in range(3): # Try up to 3 times + resp = await handler(req) + if resp.ok: + return resp + return resp # type: ignore[possibly-undefined] + + +async def api_logging_middleware( + req: ClientRequest, handler: ClientHandlerType +) -> ClientResponse: + # We use middlewares=() to avoid infinite recursion. + async with req.session.post("/log", data=req.url.host, middlewares=()) as resp: + if not resp.ok: + logging.warning("Log endpoint failed") + + return await handler(req) + + +class TokenRefresh401Middleware: + def __init__(self, refresh_token: str, access_token: str): + self.access_token = access_token + self.refresh_token = refresh_token + self.lock = asyncio.Lock() + + async def __call__( + self, req: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + for _ in range(2): # Retry at most one time + token = self.access_token + req.headers["Authorization"] = f"Bearer {token}" + resp = await handler(req) + if resp.status != 401: + return resp + async with self.lock: + if token != self.access_token: # Already refreshed + continue + url = "https://api.example/refresh" + async with req.session.post(url, data=self.refresh_token) as resp: + # Add error handling as needed + data = await resp.json() + self.access_token = data["access_token"] + return resp # type: ignore[possibly-undefined] + + +class TokenRefreshExpiryMiddleware: + def __init__(self, refresh_token: str): + self.access_token = "" + self.expires_at = 0 + self.refresh_token = refresh_token + self.lock = asyncio.Lock() + + async def __call__( + self, req: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + if self.expires_at <= time.time(): + token = self.access_token + async with self.lock: + if token == self.access_token: # Still not refreshed + url = "https://api.example/refresh" + async with req.session.post(url, data=self.refresh_token) as resp: + # Add error handling as needed + data = await resp.json() + self.access_token = data["access_token"] + self.expires_at = data["expires_at"] + + req.headers["Authorization"] = f"Bearer {self.access_token}" + return await handler(req) + + +async def token_refresh_preemptively_example() -> None: + async def set_token(session: ClientSession, event: asyncio.Event) -> None: + while True: + async with session.post("/refresh") as resp: + token = await resp.json() + session.headers["Authorization"] = f"Bearer {token['auth']}" + event.set() + await asyncio.sleep(token["valid_duration"]) + + @asynccontextmanager + async def auto_refresh_client() -> AsyncIterator[ClientSession]: + async with ClientSession() as session: + ready = asyncio.Event() + t = 
asyncio.create_task(set_token(session, ready)) + await ready.wait() + yield session + t.cancel() + with suppress(asyncio.CancelledError): + await t + + async with auto_refresh_client() as sess: + ... + + +async def ssrf_middleware( + req: ClientRequest, handler: ClientHandlerType +) -> ClientResponse: + # WARNING: This is a simplified example for demonstration purposes only. + # A complete implementation should also check: + # - IPv6 loopback (::1) + # - Private IP ranges (10.x.x.x, 192.168.x.x, 172.16-31.x.x) + # - Link-local addresses (169.254.x.x, fe80::/10) + # - Other internal hostnames and aliases + if req.url.host in {"127.0.0.1", "localhost"}: + raise SSRFError(req.url.host) + return await handler(req) + + +class SSRFConnector(TCPConnector): + async def _resolve_host( + self, host: str, port: int, traces: Sequence[Trace] | None = None + ) -> list[ResolveResult]: + res = await super()._resolve_host(host, port, traces) + # WARNING: This is a simplified example - should also check ::1, private ranges, etc. + if any(r["host"] in {"127.0.0.1"} for r in res): + raise SSRFError() + return res diff --git a/docs/conf.py b/docs/conf.py index 0be0e21eaef..505359b917e 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -396,8 +396,9 @@ ("py:class", "aiohttp.web.RequestHandler"), # undocumented ("py:class", "aiohttp.NamedPipeConnector"), # undocumented ("py:class", "aiohttp.protocol.HttpVersion"), # undocumented - ("py:class", "aiohttp.ClientRequest"), # undocumented + ("py:class", "HttpVersion"), # undocumented ("py:class", "aiohttp.payload.Payload"), # undocumented + ("py:class", "Payload"), # undocumented ("py:class", "aiohttp.resolver.AsyncResolver"), # undocumented ("py:class", "aiohttp.resolver.ThreadedResolver"), # undocumented ("py:func", "aiohttp.ws_connect"), # undocumented diff --git a/docs/spelling_wordlist.txt b/docs/spelling_wordlist.txt index 68d4693bac0..ff8bfb8b508 100644 --- a/docs/spelling_wordlist.txt +++ b/docs/spelling_wordlist.txt @@ -146,6 +146,7 @@ HTTPException HttpProcessingError httpretty https +hostname impl incapsulates Indices diff --git a/setup.cfg b/setup.cfg index 30aa2e87838..c4ab069f396 100644 --- a/setup.cfg +++ b/setup.cfg @@ -98,6 +98,7 @@ max-line-length = 88 per-file-ignores = # I900: Shouldn't appear in requirements for examples. examples/*:I900 + docs/code/*:F841 # flake8-requirements known-modules = proxy.py:[proxy] From cfe3d219d0016571a2fe239a24da1421a1f947da Mon Sep 17 00:00:00 2001 From: "J. 
Nick Koston" Date: Sat, 24 May 2025 16:38:56 -0500 Subject: [PATCH 89/90] [PR #10978/df30c55 backport][3.12] Cookbook changes (#10995) Co-authored-by: Sam Bull Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> --- .mypy.ini | 2 +- docs/client_advanced.rst | 195 ++----------- docs/client_middleware_cookbook.rst | 351 ++++-------------------- docs/client_reference.rst | 127 +++++++++ docs/code/client_middleware_cookbook.py | 143 ++++++++++ docs/conf.py | 3 +- docs/spelling_wordlist.txt | 1 + setup.cfg | 1 + 8 files changed, 363 insertions(+), 460 deletions(-) create mode 100644 docs/code/client_middleware_cookbook.py diff --git a/.mypy.ini b/.mypy.ini index 78001c36e8f..e91bd30d58f 100644 --- a/.mypy.ini +++ b/.mypy.ini @@ -1,5 +1,5 @@ [mypy] -files = aiohttp, examples +files = aiohttp, docs/code, examples check_untyped_defs = True follow_imports_for_stubs = True #disallow_any_decorated = True diff --git a/docs/client_advanced.rst b/docs/client_advanced.rst index 5a94e68ec1f..18c274ca7f5 100644 --- a/docs/client_advanced.rst +++ b/docs/client_advanced.rst @@ -124,29 +124,33 @@ Client Middleware ----------------- The client supports middleware to intercept requests and responses. This can be -useful for authentication, logging, request/response modification, and retries. +useful for authentication, logging, request/response modification, retries etc. -For practical examples and common middleware patterns, see the :ref:`aiohttp-client-middleware-cookbook`. +For more examples and common middleware patterns, see the :ref:`aiohttp-client-middleware-cookbook`. -Creating Middleware -^^^^^^^^^^^^^^^^^^^ +Creating a middleware +^^^^^^^^^^^^^^^^^^^^^ -To create a middleware, define an async function (or callable class) that accepts a request -and a handler function, and returns a response. Middleware must follow the -:type:`ClientMiddlewareType` signature (see :ref:`aiohttp-client-reference` for details). +To create a middleware, define an async function (or callable class) that accepts a request object +and a handler function, and returns a response. Middlewares must follow the +:type:`ClientMiddlewareType` signature:: -Using Middleware -^^^^^^^^^^^^^^^^ + async def auth_middleware(req: ClientRequest, handler: ClientHandlerType) -> ClientResponse: + req.headers["Authorization"] = get_auth_header() + return await handler(req) + +Using Middlewares +^^^^^^^^^^^^^^^^^ -You can apply middleware to a client session or to individual requests:: +You can apply middlewares to a client session or to individual requests:: # Apply to all requests in a session async with ClientSession(middlewares=(my_middleware,)) as session: - resp = await session.get('http://example.com') + resp = await session.get("http://example.com") # Apply to a specific request async with ClientSession() as session: - resp = await session.get('http://example.com', middlewares=(my_middleware,)) + resp = await session.get("http://example.com", middlewares=(my_middleware,)) Middleware Chaining ^^^^^^^^^^^^^^^^^^^ @@ -155,13 +159,14 @@ Multiple middlewares are applied in the order they are listed:: # Middlewares are applied in order: logging -> auth -> request async with ClientSession(middlewares=(logging_middleware, auth_middleware)) as session: - resp = await session.get('http://example.com') + async with session.get("http://example.com") as resp: + ... 
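+
+As a quick illustration of the resulting order (``middleware1`` and ``middleware2``
+here are placeholder names, matching the numbered walkthrough below), each
+middleware brackets the call to its handler::
+
+    async def middleware1(req: ClientRequest, handler: ClientHandlerType) -> ClientResponse:
+        print("enter middleware1")  # runs first
+        resp = await handler(req)   # runs middleware2, then the request
+        print("exit middleware1")   # runs last
+        return resp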
-A key aspect to understand about the flat middleware structure is that the execution flow follows this pattern: +A key aspect to understand about the middleware sequence is that the execution flow follows this pattern: 1. The first middleware in the list is called first and executes its code before calling the handler -2. The handler is the next middleware in the chain (or the actual request handler if there are no more middleware) -3. When the handler returns a response, execution continues in the first middleware after the handler call +2. The handler is the next middleware in the chain (or the request handler if there are no more middlewares) +3. When the handler returns a response, execution continues from the last middleware right after the handler call 4. This creates a nested "onion-like" pattern for execution For example, with ``middlewares=(middleware1, middleware2)``, the execution order would be: @@ -172,7 +177,12 @@ For example, with ``middlewares=(middleware1, middleware2)``, the execution orde 4. Exit ``middleware2`` (post-response code) 5. Exit ``middleware1`` (post-response code) -This flat structure means that middleware is applied on each retry attempt inside the client's retry loop, not just once before all retries. This allows middleware to modify requests freshly on each retry attempt. +This flat structure means that a middleware is applied on each retry attempt inside the client's retry loop, +not just once before all retries. This allows middleware to modify requests freshly on each retry attempt. + +For example, if we had a retry middleware and a logging middleware, and we want every retried request to be +logged separately, then we'd need to specify ``middlewares=(retry_mw, logging_mw)``. If we reversed the order +to ``middlewares=(logging_mw, retry_mw)``, then we'd only log once regardless of how many retries are done. .. note:: @@ -181,157 +191,6 @@ This flat structure means that middleware is applied on each retry attempt insid like adding static headers, you can often use request parameters (e.g., ``headers``) or session configuration instead. -Common Middleware Patterns -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. _client-middleware-retry: - -Authentication and Retry -"""""""""""""""""""""""" - -There are two recommended approaches for implementing retry logic: - -1. **For Loop Pattern (Simple Cases)** - - Use a bounded ``for`` loop when the number of retry attempts is known and fixed:: - - import hashlib - from aiohttp import ClientSession, ClientRequest, ClientResponse, ClientHandlerType - - async def auth_retry_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - # Try up to 3 authentication methods - for attempt in range(3): - if attempt == 0: - # First attempt: use API key - request.headers["X-API-Key"] = "my-api-key" - elif attempt == 1: - # Second attempt: use Bearer token - request.headers["Authorization"] = "Bearer fallback-token" - else: - # Third attempt: use hash-based signature - secret_key = "my-secret-key" - url_path = str(request.url.path) - signature = hashlib.sha256(f"{url_path}{secret_key}".encode()).hexdigest() - request.headers["X-Signature"] = signature - - # Send the request - response = await handler(request) - - # If successful or not an auth error, return immediately - if response.status != 401: - return response - - # Return the last response if all retries are exhausted - return response - -2. 
**While Loop Pattern (Complex Cases)** - - For more complex scenarios, use a ``while`` loop with strict exit conditions:: - - import logging - - _LOGGER = logging.getLogger(__name__) - - class RetryMiddleware: - def __init__(self, max_retries: int = 3): - self.max_retries = max_retries - - async def __call__( - self, - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - retry_count = 0 - - # Always have clear exit conditions - while retry_count <= self.max_retries: - # Send the request - response = await handler(request) - - # Exit conditions - if 200 <= response.status < 400 or retry_count >= self.max_retries: - return response - - # Retry logic for different status codes - if response.status in (401, 429, 500, 502, 503, 504): - retry_count += 1 - _LOGGER.debug(f"Retrying request (attempt {retry_count}/{self.max_retries})") - continue - - # For any other status code, don't retry - return response - - # Safety return (should never reach here) - return response - -Request Modification -"""""""""""""""""""" - -Modify request properties based on request content:: - - async def content_type_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - # Examine URL path to determine content-type - if request.url.path.endswith('.json'): - request.headers['Content-Type'] = 'application/json' - elif request.url.path.endswith('.xml'): - request.headers['Content-Type'] = 'application/xml' - - # Add custom headers based on HTTP method - if request.method == 'POST': - request.headers['X-Request-ID'] = f"post-{id(request)}" - - return await handler(request) - -Avoiding Infinite Recursion -^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. warning:: - - Using the same session from within middleware can cause infinite recursion if - the middleware makes HTTP requests using the same session that has the middleware - applied. This is especially risky in token refresh middleware or retry logic. - - When implementing retry or refresh logic, always use bounded loops - (e.g., ``for _ in range(2):`` instead of ``while True:``) to prevent infinite recursion. - -To avoid recursion when making requests inside middleware, use one of these approaches: - -**Option 1:** Disable middleware for internal requests:: - - async def log_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - async with request.session.post( - "https://logapi.example/log", - json={"url": str(request.url)}, - middlewares=() # This prevents infinite recursion - ) as resp: - pass - - return await handler(request) - -**Option 2:** Check request details to avoid recursive application:: - - async def log_middleware( - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - if request.url.host != "logapi.example": # Avoid infinite recursion - async with request.session.post( - "https://logapi.example/log", - json={"url": str(request.url)} - ) as resp: - pass - - return await handler(request) - Custom Cookies -------------- diff --git a/docs/client_middleware_cookbook.rst b/docs/client_middleware_cookbook.rst index 4b8d6ddd5f8..33994160fba 100644 --- a/docs/client_middleware_cookbook.rst +++ b/docs/client_middleware_cookbook.rst @@ -5,331 +5,102 @@ Client Middleware Cookbook ========================== -This cookbook provides practical examples of implementing client middleware for common use cases. +This cookbook provides examples of how client middlewares can be used for common use cases. -.. 
note::
+Simple Retry Middleware
+-----------------------
 
-   All examples in this cookbook are also available as complete, runnable scripts in the
-   ``examples/`` directory of the aiohttp repository. Look for files named ``*_middleware.py``.
+It's very easy to create middlewares that retry a request when a given condition is met:
 
-.. _cookbook-basic-auth-middleware:
+.. literalinclude:: code/client_middleware_cookbook.py
+   :pyobject: retry_middleware
 
-Basic Authentication Middleware
-------------------------------
+.. warning::
 
-Basic authentication is a simple authentication scheme built into the HTTP protocol.
-Here's a middleware that automatically adds Basic Auth headers to all requests:
+   It is recommended to ensure loops are bounded (e.g. using a ``for`` loop) to avoid
+   creating an infinite loop.
 
-.. code-block:: python
+Logging to an external service
+------------------------------
 
-   import base64
-   from aiohttp import ClientRequest, ClientResponse, ClientHandlerType, hdrs
+If we needed to log our requests via an API call to an external server or similar, we could
+create a simple middleware like this:
 
-   class BasicAuthMiddleware:
-       """Middleware that adds Basic Authentication to all requests."""
+.. literalinclude:: code/client_middleware_cookbook.py
+   :pyobject: api_logging_middleware
 
-       def __init__(self, username: str, password: str) -> None:
-           self.username = username
-           self.password = password
-           self._auth_header = self._encode_credentials()
+.. warning::
 
-       def _encode_credentials(self) -> str:
-           """Encode username and password to base64."""
-           credentials = f"{self.username}:{self.password}"
-           encoded = base64.b64encode(credentials.encode()).decode()
-           return f"Basic {encoded}"
+   Using the same session from within a middleware can cause infinite recursion if
+   that request gets processed again by the middleware.
 
-       async def __call__(
-           self,
-           request: ClientRequest,
-           handler: ClientHandlerType
-       ) -> ClientResponse:
-           """Add Basic Auth header to the request."""
-           # Only add auth if not already present
-           if hdrs.AUTHORIZATION not in request.headers:
-               request.headers[hdrs.AUTHORIZATION] = self._auth_header
+   To avoid such recursion, a middleware should typically make requests with
+   ``middlewares=()`` or else contain some condition to stop the request triggering
+   the same logic when it is processed again by the middleware (e.g. by whitelisting
+   the API domain of the request).
 
-           # Proceed with the request
-           return await handler(request)
+Token Refresh Middleware
+------------------------
 
-Usage example:
+If you need to refresh access tokens to continue accessing an API, this is also a good
+candidate for a middleware. For example, you could check for a 401 response, then
+refresh the token and retry:
 
-.. code-block:: python
+.. literalinclude:: code/client_middleware_cookbook.py
+   :pyobject: TokenRefresh401Middleware
 
-   import aiohttp
-   import asyncio
-   import logging
+If you have an expiry time for the token, you could refresh at the expiry time, to avoid the
+failed request:
 
-   _LOGGER = logging.getLogger(__name__)
+.. literalinclude:: code/client_middleware_cookbook.py
+   :pyobject: TokenRefreshExpiryMiddleware
 
-   async def main():
-       # Create middleware instance
-       auth_middleware = BasicAuthMiddleware("user", "pass")
+Or you could even refresh preemptively in a background task to avoid any API delays. 
This is probably more +efficient to implement without a middleware: - # Use middleware in session - async with aiohttp.ClientSession(middlewares=(auth_middleware,)) as session: - async with session.get("https://httpbin.org/basic-auth/user/pass") as resp: - _LOGGER.debug("Status: %s", resp.status) - data = await resp.json() - _LOGGER.debug("Response: %s", data) +.. literalinclude:: code/client_middleware_cookbook.py + :pyobject: token_refresh_preemptively_example - asyncio.run(main()) +Or combine the above approaches to create a more robust solution. -.. _cookbook-retry-middleware: +.. note:: -Simple Retry Middleware ------------------------ + These can also be adjusted to handle proxy auth by modifying + :attr:`ClientRequest.proxy_headers`. -A retry middleware that automatically retries failed requests with exponential backoff: - -.. code-block:: python - - import asyncio - import logging - from http import HTTPStatus - from typing import Union, Set - from aiohttp import ClientRequest, ClientResponse, ClientHandlerType - - _LOGGER = logging.getLogger(__name__) - - DEFAULT_RETRY_STATUSES = { - HTTPStatus.TOO_MANY_REQUESTS, - HTTPStatus.INTERNAL_SERVER_ERROR, - HTTPStatus.BAD_GATEWAY, - HTTPStatus.SERVICE_UNAVAILABLE, - HTTPStatus.GATEWAY_TIMEOUT - } - - class RetryMiddleware: - """Middleware that retries failed requests with exponential backoff.""" - - def __init__( - self, - max_retries: int = 3, - retry_statuses: Union[Set[int], None] = None, - initial_delay: float = 1.0, - backoff_factor: float = 2.0 - ) -> None: - self.max_retries = max_retries - self.retry_statuses = retry_statuses or DEFAULT_RETRY_STATUSES - self.initial_delay = initial_delay - self.backoff_factor = backoff_factor - - async def __call__( - self, - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - """Execute request with retry logic.""" - last_response = None - delay = self.initial_delay - - for attempt in range(self.max_retries + 1): - if attempt > 0: - _LOGGER.info( - "Retrying request to %s (attempt %s/%s)", - request.url, - attempt + 1, - self.max_retries + 1 - ) - - # Execute the request - response = await handler(request) - last_response = response - - # Check if we should retry - if response.status not in self.retry_statuses: - return response - - # Don't retry if we've exhausted attempts - if attempt >= self.max_retries: - _LOGGER.warning( - "Max retries (%s) exceeded for %s", - self.max_retries, - request.url - ) - return response - - # Wait before retrying - _LOGGER.debug("Waiting %ss before retry...", delay) - await asyncio.sleep(delay) - delay *= self.backoff_factor - - # Return the last response - return last_response - -Usage example: - -.. code-block:: python - - import aiohttp - import asyncio - import logging - from http import HTTPStatus - - _LOGGER = logging.getLogger(__name__) - - RETRY_STATUSES = { - HTTPStatus.TOO_MANY_REQUESTS, - HTTPStatus.INTERNAL_SERVER_ERROR, - HTTPStatus.BAD_GATEWAY, - HTTPStatus.SERVICE_UNAVAILABLE, - HTTPStatus.GATEWAY_TIMEOUT - } - - async def main(): - # Create retry middleware with custom settings - retry_middleware = RetryMiddleware( - max_retries=3, - retry_statuses=RETRY_STATUSES, - initial_delay=0.5, - backoff_factor=2.0 - ) - - async with aiohttp.ClientSession(middlewares=(retry_middleware,)) as session: - # This will automatically retry on server errors - async with session.get("https://httpbin.org/status/500") as resp: - _LOGGER.debug("Final status: %s", resp.status) - - asyncio.run(main()) - -.. 
_cookbook-combining-middleware: - -Combining Multiple Middleware ------------------------------ - -You can combine multiple middleware to create powerful request pipelines: - -.. code-block:: python - - import time - import logging - from aiohttp import ClientRequest, ClientResponse, ClientHandlerType - - _LOGGER = logging.getLogger(__name__) - - class LoggingMiddleware: - """Middleware that logs request timing and response status.""" - - async def __call__( - self, - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - start_time = time.monotonic() - - # Log request - _LOGGER.debug("[REQUEST] %s %s", request.method, request.url) - - # Execute request - response = await handler(request) +Server-side Request Forgery Protection +-------------------------------------- - # Log response - duration = time.monotonic() - start_time - _LOGGER.debug("[RESPONSE] %s in %.2fs", response.status, duration) +To provide protection against server-side request forgery, we could blacklist any internal +IPs or domains. We could create a middleware that rejects requests made to a blacklist: - return response +.. literalinclude:: code/client_middleware_cookbook.py + :pyobject: ssrf_middleware - # Combine multiple middleware - async def main(): - # Middleware are applied in order: logging -> auth -> retry -> request - logging_middleware = LoggingMiddleware() - auth_middleware = BasicAuthMiddleware("user", "pass") - retry_middleware = RetryMiddleware(max_retries=2) +.. warning:: - async with aiohttp.ClientSession( - middlewares=(logging_middleware, auth_middleware, retry_middleware) - ) as session: - async with session.get("https://httpbin.org/basic-auth/user/pass") as resp: - text = await resp.text() - _LOGGER.debug("Response text: %s", text) + The above example is simplified for demonstration purposes. A production-ready + implementation should also check IPv6 addresses (``::1``), private IP ranges, + link-local addresses, and other internal hostnames. Consider using a well-tested + library for SSRF protection in production environments. -.. _cookbook-token-refresh-middleware: +If you know that your services correctly reject requests with an incorrect ``Host`` header, then +that may provide sufficient protection. Otherwise, we still have a concern with an attacker's +own domain resolving to a blacklisted IP. To provide complete protection, we can also +create a custom resolver: -Token Refresh Middleware ------------------------ +.. literalinclude:: code/client_middleware_cookbook.py + :pyobject: SSRFConnector + +Using both of these together in a session should provide full SSRF protection. -A more advanced example showing JWT token refresh: - -.. 
code-block:: python - - import asyncio - import time - from http import HTTPStatus - from typing import Union - from aiohttp import ClientRequest, ClientResponse, ClientHandlerType, hdrs - - class TokenRefreshMiddleware: - """Middleware that handles JWT token refresh automatically.""" - - def __init__(self, token_endpoint: str, refresh_token: str) -> None: - self.token_endpoint = token_endpoint - self.refresh_token = refresh_token - self.access_token: Union[str, None] = None - self.token_expires_at: Union[float, None] = None - self._refresh_lock = asyncio.Lock() - - async def _refresh_access_token(self, session) -> str: - """Refresh the access token using the refresh token.""" - async with self._refresh_lock: - # Check if another coroutine already refreshed the token - if self.token_expires_at and time.time() < self.token_expires_at: - return self.access_token - - # Make refresh request without middleware to avoid recursion - async with session.post( - self.token_endpoint, - json={"refresh_token": self.refresh_token}, - middlewares=() # Disable middleware for this request - ) as resp: - resp.raise_for_status() - data = await resp.json() - - if "access_token" not in data: - raise ValueError("No access_token in refresh response") - - self.access_token = data["access_token"] - # Token expires in 1 hour for demo, refresh 5 min early - expires_in = data.get("expires_in", 3600) - self.token_expires_at = time.time() + expires_in - 300 - return self.access_token - - async def __call__( - self, - request: ClientRequest, - handler: ClientHandlerType - ) -> ClientResponse: - """Add auth token to request, refreshing if needed.""" - # Skip token for refresh endpoint - if str(request.url).endswith('/token/refresh'): - return await handler(request) - - # Refresh token if needed - if not self.access_token or ( - self.token_expires_at and time.time() >= self.token_expires_at - ): - await self._refresh_access_token(request.session) - - # Add token to request - request.headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}" - - # Execute request - response = await handler(request) - - # If we get 401, try refreshing token once - if response.status == HTTPStatus.UNAUTHORIZED: - await self._refresh_access_token(request.session) - request.headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}" - response = await handler(request) - - return response Best Practices -------------- 1. **Keep middleware focused**: Each middleware should have a single responsibility. -2. **Order matters**: Middleware execute in the order they're listed. Place logging first, +2. **Order matters**: Middlewares execute in the order they're listed. Place logging first, authentication before retry, etc. 3. **Avoid infinite recursion**: When making HTTP requests inside middleware, either: diff --git a/docs/client_reference.rst b/docs/client_reference.rst index fa0a50425af..606df6acc0a 100644 --- a/docs/client_reference.rst +++ b/docs/client_reference.rst @@ -1864,6 +1864,133 @@ manually. :raise TypeError: if message is :const:`~aiohttp.WSMsgType.BINARY`. :raise ValueError: if message is not valid JSON. +ClientRequest +------------- + +.. class:: ClientRequest + + Represents an HTTP request to be sent by the client. + + This object encapsulates all the details of an HTTP request before it is sent. + It is primarily used within client middleware to inspect or modify requests. + + .. note:: + + You typically don't create ``ClientRequest`` instances directly. 
They are + created internally by :class:`ClientSession` methods and passed to middleware. + + For more information about using middleware, see :ref:`aiohttp-client-middleware`. + + .. attribute:: body + :type: Payload | FormData + + The request body payload. This can be: + + - A :class:`Payload` object for raw data (default is empty bytes ``b""``) + - A :class:`FormData` object for form submissions + + .. attribute:: chunked + :type: bool | None + + Whether to use chunked transfer encoding: + + - ``True``: Use chunked encoding + - ``False``: Don't use chunked encoding + - ``None``: Automatically determine based on body + + .. attribute:: compress + :type: str | None + + The compression encoding for the request body. Common values include + ``'gzip'`` and ``'deflate'``, but any string value is technically allowed. + ``None`` means no compression. + + .. attribute:: headers + :type: multidict.CIMultiDict + + The HTTP headers that will be sent with the request. This is a case-insensitive + multidict that can be modified by middleware. + + .. code-block:: python + + # Add or modify headers + request.headers['X-Custom-Header'] = 'value' + request.headers['User-Agent'] = 'MyApp/1.0' + + .. attribute:: is_ssl + :type: bool + + ``True`` if the request uses a secure scheme (e.g., HTTPS, WSS), ``False`` otherwise. + + .. attribute:: method + :type: str + + The HTTP method of the request (e.g., ``'GET'``, ``'POST'``, ``'PUT'``, etc.). + + .. attribute:: original_url + :type: yarl.URL + + The original URL passed to the request method, including any fragment. + This preserves the exact URL as provided by the user. + + .. attribute:: proxy + :type: yarl.URL | None + + The proxy URL if the request will be sent through a proxy, ``None`` otherwise. + + .. attribute:: proxy_headers + :type: multidict.CIMultiDict | None + + Headers to be sent to the proxy server (e.g., ``Proxy-Authorization``). + Only set when :attr:`proxy` is not ``None``. + + .. attribute:: response_class + :type: type[ClientResponse] + + The class to use for creating the response object. Defaults to + :class:`ClientResponse` but can be customized for special handling. + + .. attribute:: server_hostname + :type: str | None + + Override the hostname for SSL certificate verification. Useful when + connecting through proxies or to IP addresses. + + .. attribute:: session + :type: ClientSession + + The client session that created this request. Useful for accessing + session-level configuration or making additional requests within middleware. + + .. warning:: + Be careful when making requests with the same session inside middleware + to avoid infinite recursion. Use ``middlewares=()`` parameter when needed. + + .. attribute:: ssl + :type: ssl.SSLContext | bool | Fingerprint + + SSL validation configuration for this request: + + - ``True``: Use default SSL verification + - ``False``: Skip SSL verification + - :class:`ssl.SSLContext`: Custom SSL context + - :class:`Fingerprint`: Verify specific certificate fingerprint + + .. attribute:: url + :type: yarl.URL + + The target URL of the request with the fragment (``#...``) part stripped. + This is the actual URL that will be used for the connection. + + .. note:: + To access the original URL with fragment, use :attr:`original_url`. + + .. attribute:: version + :type: HttpVersion + + The HTTP version to use for the request (e.g., ``HttpVersion(1, 1)`` for HTTP/1.1). 
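+   A minimal sketch of how these attributes might be used together from a client
+   middleware (the header name and tagging condition are illustrative, not part of
+   the API):
+
+   .. code-block:: python
+
+      from aiohttp import ClientHandlerType, ClientRequest, ClientResponse
+
+      async def tag_requests(
+          request: ClientRequest, handler: ClientHandlerType
+      ) -> ClientResponse:
+          # Read-only inspection of the outgoing request...
+          if request.is_ssl and request.proxy is None:
+              # ...and mutation of its headers before it is sent.
+              request.headers["X-Request-Tag"] = f"{request.method} {request.url.host}"
+          return await handler(request)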
+ + Utilities --------- diff --git a/docs/code/client_middleware_cookbook.py b/docs/code/client_middleware_cookbook.py new file mode 100644 index 00000000000..5bd84c68ac7 --- /dev/null +++ b/docs/code/client_middleware_cookbook.py @@ -0,0 +1,143 @@ +"""This is a collection of semi-complete examples that get included into the cookbook page.""" + +import asyncio +import logging +import time +from collections.abc import AsyncIterator, Sequence +from contextlib import asynccontextmanager, suppress + +from aiohttp import ( + ClientError, + ClientHandlerType, + ClientRequest, + ClientResponse, + ClientSession, + TCPConnector, +) +from aiohttp.abc import ResolveResult +from aiohttp.tracing import Trace + + +class SSRFError(ClientError): + """A request was made to a blacklisted host.""" + + +async def retry_middleware( + req: ClientRequest, handler: ClientHandlerType +) -> ClientResponse: + for _ in range(3): # Try up to 3 times + resp = await handler(req) + if resp.ok: + return resp + return resp + + +async def api_logging_middleware( + req: ClientRequest, handler: ClientHandlerType +) -> ClientResponse: + # We use middlewares=() to avoid infinite recursion. + async with req.session.post("/log", data=req.url.host, middlewares=()) as resp: + if not resp.ok: + logging.warning("Log endpoint failed") + + return await handler(req) + + +class TokenRefresh401Middleware: + def __init__(self, refresh_token: str, access_token: str): + self.access_token = access_token + self.refresh_token = refresh_token + self.lock = asyncio.Lock() + + async def __call__( + self, req: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + for _ in range(2): # Retry at most one time + token = self.access_token + req.headers["Authorization"] = f"Bearer {token}" + resp = await handler(req) + if resp.status != 401: + return resp + async with self.lock: + if token != self.access_token: # Already refreshed + continue + url = "https://api.example/refresh" + async with req.session.post(url, data=self.refresh_token) as resp: + # Add error handling as needed + data = await resp.json() + self.access_token = data["access_token"] + return resp + + +class TokenRefreshExpiryMiddleware: + def __init__(self, refresh_token: str): + self.access_token = "" + self.expires_at = 0 + self.refresh_token = refresh_token + self.lock = asyncio.Lock() + + async def __call__( + self, req: ClientRequest, handler: ClientHandlerType + ) -> ClientResponse: + if self.expires_at <= time.time(): + token = self.access_token + async with self.lock: + if token == self.access_token: # Still not refreshed + url = "https://api.example/refresh" + async with req.session.post(url, data=self.refresh_token) as resp: + # Add error handling as needed + data = await resp.json() + self.access_token = data["access_token"] + self.expires_at = data["expires_at"] + + req.headers["Authorization"] = f"Bearer {self.access_token}" + return await handler(req) + + +async def token_refresh_preemptively_example() -> None: + async def set_token(session: ClientSession, event: asyncio.Event) -> None: + while True: + async with session.post("/refresh") as resp: + token = await resp.json() + session.headers["Authorization"] = f"Bearer {token['auth']}" + event.set() + await asyncio.sleep(token["valid_duration"]) + + @asynccontextmanager + async def auto_refresh_client() -> AsyncIterator[ClientSession]: + async with ClientSession() as session: + ready = asyncio.Event() + t = asyncio.create_task(set_token(session, ready)) + await ready.wait() + yield session + t.cancel() + with 
suppress(asyncio.CancelledError): + await t + + async with auto_refresh_client() as sess: + ... + + +async def ssrf_middleware( + req: ClientRequest, handler: ClientHandlerType +) -> ClientResponse: + # WARNING: This is a simplified example for demonstration purposes only. + # A complete implementation should also check: + # - IPv6 loopback (::1) + # - Private IP ranges (10.x.x.x, 192.168.x.x, 172.16-31.x.x) + # - Link-local addresses (169.254.x.x, fe80::/10) + # - Other internal hostnames and aliases + if req.url.host in {"127.0.0.1", "localhost"}: + raise SSRFError(req.url.host) + return await handler(req) + + +class SSRFConnector(TCPConnector): + async def _resolve_host( + self, host: str, port: int, traces: Sequence[Trace] | None = None + ) -> list[ResolveResult]: + res = await super()._resolve_host(host, port, traces) + # WARNING: This is a simplified example - should also check ::1, private ranges, etc. + if any(r["host"] in {"127.0.0.1"} for r in res): + raise SSRFError() + return res diff --git a/docs/conf.py b/docs/conf.py index 84dadfc8442..a449f223e1d 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -397,8 +397,9 @@ ("py:class", "aiohttp.web.RequestHandler"), # undocumented ("py:class", "aiohttp.NamedPipeConnector"), # undocumented ("py:class", "aiohttp.protocol.HttpVersion"), # undocumented - ("py:class", "aiohttp.ClientRequest"), # undocumented + ("py:class", "HttpVersion"), # undocumented ("py:class", "aiohttp.payload.Payload"), # undocumented + ("py:class", "Payload"), # undocumented ("py:class", "aiohttp.resolver.AsyncResolver"), # undocumented ("py:class", "aiohttp.resolver.ThreadedResolver"), # undocumented ("py:func", "aiohttp.ws_connect"), # undocumented diff --git a/docs/spelling_wordlist.txt b/docs/spelling_wordlist.txt index 3f67df33159..8b389cc11f6 100644 --- a/docs/spelling_wordlist.txt +++ b/docs/spelling_wordlist.txt @@ -145,6 +145,7 @@ HTTPException HttpProcessingError httpretty https +hostname impl incapsulates Indices diff --git a/setup.cfg b/setup.cfg index 23e56d61d00..4adfde579a0 100644 --- a/setup.cfg +++ b/setup.cfg @@ -99,6 +99,7 @@ max-line-length = 88 per-file-ignores = # I900: Shouldn't appear in requirements for examples. examples/*:I900 + docs/code/*:F841 # flake8-requirements known-modules = proxy.py:[proxy] From 7a6ee687e945902d55eb08027987ac34c5f02840 Mon Sep 17 00:00:00 2001 From: "J. 
Nick Koston" Date: Sat, 24 May 2025 16:53:33 -0500 Subject: [PATCH 90/90] Release 3.12.0 (#10996) --- CHANGES.rst | 276 ++++++++++++++++++++++++++++++++++++ CHANGES/10074.feature.rst | 2 - CHANGES/10119.bugfix.rst | 1 - CHANGES/10120.feature.rst | 1 - CHANGES/10146.misc.rst | 1 - CHANGES/10325.bugfix.rst | 1 - CHANGES/10433.feature.rst | 1 - CHANGES/10474.feature.rst | 1 - CHANGES/10520.feature.rst | 2 - CHANGES/10662.packaging.rst | 1 - CHANGES/10725.feature.rst | 6 - CHANGES/10759.packaging.rst | 5 - CHANGES/10761.contrib.rst | 1 - CHANGES/10797.feature.rst | 1 - CHANGES/10823.packaging.rst | 3 - CHANGES/10847.feature.rst | 5 - CHANGES/10851.bugfix.rst | 1 - CHANGES/10851.contrib.rst | 2 - CHANGES/10877.packaging.rst | 1 - CHANGES/10902.feature.rst | 1 - CHANGES/10915.bugfix.rst | 3 - CHANGES/10922.contrib.rst | 1 - CHANGES/10923.feature.rst | 1 - CHANGES/10941.bugfix.rst | 1 - CHANGES/10943.bugfix.rst | 1 - CHANGES/10945.feature.rst | 1 - CHANGES/10946.feature.rst | 1 - CHANGES/10951.bugfix.rst | 1 - CHANGES/10952.feature.rst | 1 - CHANGES/10959.feature.rst | 1 - CHANGES/10961.feature.rst | 1 - CHANGES/10962.feature.rst | 1 - CHANGES/10968.feature.rst | 1 - CHANGES/10972.feature.rst | 1 - CHANGES/10988.bugfix.rst | 1 - CHANGES/10991.feature.rst | 7 - CHANGES/2213.feature.rst | 1 - CHANGES/2914.doc.rst | 4 - CHANGES/6009.bugfix.rst | 1 - CHANGES/9705.contrib.rst | 1 - CHANGES/9732.feature.rst | 6 - CHANGES/9798.feature.rst | 5 - CHANGES/9870.misc.rst | 1 - aiohttp/__init__.py | 2 +- 44 files changed, 277 insertions(+), 81 deletions(-) delete mode 100644 CHANGES/10074.feature.rst delete mode 100644 CHANGES/10119.bugfix.rst delete mode 100644 CHANGES/10120.feature.rst delete mode 100644 CHANGES/10146.misc.rst delete mode 120000 CHANGES/10325.bugfix.rst delete mode 100644 CHANGES/10433.feature.rst delete mode 120000 CHANGES/10474.feature.rst delete mode 100644 CHANGES/10520.feature.rst delete mode 100644 CHANGES/10662.packaging.rst delete mode 100644 CHANGES/10725.feature.rst delete mode 100644 CHANGES/10759.packaging.rst delete mode 120000 CHANGES/10761.contrib.rst delete mode 100644 CHANGES/10797.feature.rst delete mode 100644 CHANGES/10823.packaging.rst delete mode 100644 CHANGES/10847.feature.rst delete mode 100644 CHANGES/10851.bugfix.rst delete mode 100644 CHANGES/10851.contrib.rst delete mode 100644 CHANGES/10877.packaging.rst delete mode 120000 CHANGES/10902.feature.rst delete mode 100644 CHANGES/10915.bugfix.rst delete mode 100644 CHANGES/10922.contrib.rst delete mode 120000 CHANGES/10923.feature.rst delete mode 120000 CHANGES/10941.bugfix.rst delete mode 120000 CHANGES/10943.bugfix.rst delete mode 120000 CHANGES/10945.feature.rst delete mode 120000 CHANGES/10946.feature.rst delete mode 100644 CHANGES/10951.bugfix.rst delete mode 120000 CHANGES/10952.feature.rst delete mode 120000 CHANGES/10959.feature.rst delete mode 120000 CHANGES/10961.feature.rst delete mode 120000 CHANGES/10962.feature.rst delete mode 120000 CHANGES/10968.feature.rst delete mode 100644 CHANGES/10972.feature.rst delete mode 120000 CHANGES/10988.bugfix.rst delete mode 100644 CHANGES/10991.feature.rst delete mode 120000 CHANGES/2213.feature.rst delete mode 100644 CHANGES/2914.doc.rst delete mode 100644 CHANGES/6009.bugfix.rst delete mode 100644 CHANGES/9705.contrib.rst delete mode 100644 CHANGES/9732.feature.rst delete mode 100644 CHANGES/9798.feature.rst delete mode 100644 CHANGES/9870.misc.rst diff --git a/CHANGES.rst b/CHANGES.rst index 176dcf88179..ddbebd82369 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ 
-10,6 +10,282 @@ .. towncrier release notes start +3.12.0 (2025-05-24) +=================== + +Bug fixes +--------- + +- Fixed :py:attr:`~aiohttp.web.WebSocketResponse.prepared` property to correctly reflect the prepared state, especially during timeout scenarios -- by :user:`bdraco` + + + *Related issues and pull requests on GitHub:* + :issue:`6009`, :issue:`10988`. + + + +- Response is now always True, instead of using MutableMapping behaviour (False when map is empty) + + + *Related issues and pull requests on GitHub:* + :issue:`10119`. + + + +- Fixed connection reuse for file-like data payloads by ensuring buffer + truncation respects content-length boundaries and preventing premature + connection closure race -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10325`, :issue:`10915`, :issue:`10941`, :issue:`10943`. + + + +- Fixed pytest plugin to not use deprecated :py:mod:`asyncio` policy APIs. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + +- Fixed :py:class:`~aiohttp.resolver.AsyncResolver` not using the ``loop`` argument in versions 3.x where it should still be supported -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10951`. + + + + +Features +-------- + +- Added a comprehensive HTTP Digest Authentication client middleware (DigestAuthMiddleware) + that implements RFC 7616. The middleware supports all standard hash algorithms + (MD5, SHA, SHA-256, SHA-512) with session variants, handles both 'auth' and + 'auth-int' quality of protection options, and automatically manages the + authentication flow by intercepting 401 responses and retrying with proper + credentials -- by :user:`feus4177`, :user:`TimMenninger`, and :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`2213`, :issue:`10725`. + + + +- Added client middleware support -- by :user:`bdraco` and :user:`Dreamsorcerer`. + + This change allows users to add middleware to the client session and requests, enabling features like + authentication, logging, and request/response modification without modifying the core + request logic. Additionally, the ``session`` attribute was added to ``ClientRequest``, + allowing middleware to access the session for making additional requests. + + + *Related issues and pull requests on GitHub:* + :issue:`9732`, :issue:`10902`, :issue:`10945`, :issue:`10952`, :issue:`10959`, :issue:`10968`. + + + +- Allow user setting zlib compression backend -- by :user:`TimMenninger` + + This change allows the user to call :func:`aiohttp.set_zlib_backend()` with the + zlib compression module of their choice. Default behavior continues to use + the builtin ``zlib`` library. + + + *Related issues and pull requests on GitHub:* + :issue:`9798`. + + + +- Added support for overriding the base URL with an absolute one in client sessions + -- by :user:`vivodi`. + + + *Related issues and pull requests on GitHub:* + :issue:`10074`. + + + +- Added ``host`` parameter to ``aiohttp_server`` fixture -- by :user:`christianwbrock`. + + + *Related issues and pull requests on GitHub:* + :issue:`10120`. + + + +- Detect blocking calls in coroutines using BlockBuster -- by :user:`cbornet`. + + + *Related issues and pull requests on GitHub:* + :issue:`10433`. + + + +- Added ``socket_factory`` to :py:class:`aiohttp.TCPConnector` to allow specifying custom socket options + -- by :user:`TimMenninger`. + + + *Related issues and pull requests on GitHub:* + :issue:`10474`, :issue:`10520`, :issue:`10961`, :issue:`10962`. 
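+
+  A minimal sketch of the new hook (this assumes the factory receives the resolved
+  ``addrinfo`` tuple and returns the socket to use; the names here are illustrative):
+
+  .. code-block:: python
+
+      import socket
+
+      import aiohttp
+
+      def keepalive_factory(addr_info) -> socket.socket:
+          family, type_, proto, _, _ = addr_info
+          sock = socket.socket(family=family, type=type_, proto=proto)
+          # Example option: enable TCP keepalive on every new connection.
+          sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
+          return sock
+
+      async def main() -> None:
+          conn = aiohttp.TCPConnector(socket_factory=keepalive_factory)
+          async with aiohttp.ClientSession(connector=conn) as session:
+              ...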
+ + + +- Started building armv7l manylinux wheels -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10797`. + + + +- Implemented shared DNS resolver management to fix excessive resolver object creation + when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures + only one ``DNSResolver`` object is created for default configurations, significantly + reducing resource usage and improving performance for applications using multiple + client sessions simultaneously -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10847`, :issue:`10923`, :issue:`10946`. + + + +- Upgraded to LLHTTP 9.3.0 -- by :user:`Dreamsorcerer`. + + + *Related issues and pull requests on GitHub:* + :issue:`10972`. + + + +- Optimized small HTTP requests/responses by coalescing headers and body into a single TCP packet -- by :user:`bdraco`. + + This change enhances network efficiency by reducing the number of packets sent for small HTTP payloads, improving latency and reducing overhead. Most importantly, this fixes compatibility with memory-constrained IoT devices that can only perform a single read operation and expect HTTP requests in one packet. The optimization uses zero-copy ``writelines`` when coalescing data and works with both regular and chunked transfer encoding. + + When ``aiohttp`` uses client middleware to communicate with an ``aiohttp`` server, connection reuse is more likely to occur since complete responses arrive in a single packet for small payloads. + + This aligns ``aiohttp`` with other popular HTTP clients that already coalesce small requests. + + + *Related issues and pull requests on GitHub:* + :issue:`10991`. + + + + +Improved documentation +---------------------- + +- Improved documentation for middleware by adding warnings and examples about + request body stream consumption. The documentation now clearly explains that + request body streams can only be read once and provides best practices for + sharing parsed request data between middleware and handlers -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`2914`. + + + + +Packaging updates and notes for downstreams +------------------------------------------- + +- Removed non SPDX-license description from ``setup.cfg`` -- by :user:`devanshu-ziphq`. + + + *Related issues and pull requests on GitHub:* + :issue:`10662`. + + + +- Added support for building against system ``llhttp`` library -- by :user:`mgorny`. + + This change adds support for :envvar:`AIOHTTP_USE_SYSTEM_DEPS` environment variable that + can be used to build aiohttp against the system install of the ``llhttp`` library rather + than the vendored one. + + + *Related issues and pull requests on GitHub:* + :issue:`10759`. + + + +- ``aiodns`` is now installed on Windows with speedups extra -- by :user:`bdraco`. + + As of ``aiodns`` 3.3.0, ``SelectorEventLoop`` is no longer required when using ``pycares`` 4.7.0 or later. + + + *Related issues and pull requests on GitHub:* + :issue:`10823`. + + + +- Fixed compatibility issue with Cython 3.1.1 -- by :user:`bdraco` + + + *Related issues and pull requests on GitHub:* + :issue:`10877`. + + + + +Contributor-facing changes +-------------------------- + +- Sped up tests by disabling ``blockbuster`` fixture for ``test_static_file_huge`` and ``test_static_file_huge_cancel`` tests -- by :user:`dikos1337`. + + + *Related issues and pull requests on GitHub:* + :issue:`9705`, :issue:`10761`. 
+ + + +- Updated tests to avoid using deprecated :py:mod:`asyncio` policy APIs and + make it compatible with Python 3.14. + + + *Related issues and pull requests on GitHub:* + :issue:`10851`. + + + +- Added Winloop to test suite to support in the future -- by :user:`Vizonex`. + + + *Related issues and pull requests on GitHub:* + :issue:`10922`. + + + + +Miscellaneous internal changes +------------------------------ + +- Added support for the ``partitioned`` attribute in the ``set_cookie`` method. + + + *Related issues and pull requests on GitHub:* + :issue:`9870`. + + + +- Setting :attr:`aiohttp.web.StreamResponse.last_modified` to an unsupported type will now raise :exc:`TypeError` instead of silently failing -- by :user:`bdraco`. + + + *Related issues and pull requests on GitHub:* + :issue:`10146`. + + + + +---- + + 3.12.0rc1 (2025-05-24) ====================== diff --git a/CHANGES/10074.feature.rst b/CHANGES/10074.feature.rst deleted file mode 100644 index d956c38af57..00000000000 --- a/CHANGES/10074.feature.rst +++ /dev/null @@ -1,2 +0,0 @@ -Added support for overriding the base URL with an absolute one in client sessions --- by :user:`vivodi`. diff --git a/CHANGES/10119.bugfix.rst b/CHANGES/10119.bugfix.rst deleted file mode 100644 index 86d2511f5b5..00000000000 --- a/CHANGES/10119.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -Response is now always True, instead of using MutableMapping behaviour (False when map is empty) diff --git a/CHANGES/10120.feature.rst b/CHANGES/10120.feature.rst deleted file mode 100644 index 98cee5650d6..00000000000 --- a/CHANGES/10120.feature.rst +++ /dev/null @@ -1 +0,0 @@ -Added ``host`` parameter to ``aiohttp_server`` fixture -- by :user:`christianwbrock`. diff --git a/CHANGES/10146.misc.rst b/CHANGES/10146.misc.rst deleted file mode 100644 index bee4ef68fb3..00000000000 --- a/CHANGES/10146.misc.rst +++ /dev/null @@ -1 +0,0 @@ -Setting :attr:`aiohttp.web.StreamResponse.last_modified` to an unsupported type will now raise :exc:`TypeError` instead of silently failing -- by :user:`bdraco`. diff --git a/CHANGES/10325.bugfix.rst b/CHANGES/10325.bugfix.rst deleted file mode 120000 index aa085cc590d..00000000000 --- a/CHANGES/10325.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -10915.bugfix.rst \ No newline at end of file diff --git a/CHANGES/10433.feature.rst b/CHANGES/10433.feature.rst deleted file mode 100644 index 11a29d6e368..00000000000 --- a/CHANGES/10433.feature.rst +++ /dev/null @@ -1 +0,0 @@ -Detect blocking calls in coroutines using BlockBuster -- by :user:`cbornet`. diff --git a/CHANGES/10474.feature.rst b/CHANGES/10474.feature.rst deleted file mode 120000 index 7c4f9a7b83b..00000000000 --- a/CHANGES/10474.feature.rst +++ /dev/null @@ -1 +0,0 @@ -10520.feature.rst \ No newline at end of file diff --git a/CHANGES/10520.feature.rst b/CHANGES/10520.feature.rst deleted file mode 100644 index 3d2877b5c09..00000000000 --- a/CHANGES/10520.feature.rst +++ /dev/null @@ -1,2 +0,0 @@ -Added ``socket_factory`` to :py:class:`aiohttp.TCPConnector` to allow specifying custom socket options --- by :user:`TimMenninger`. diff --git a/CHANGES/10662.packaging.rst b/CHANGES/10662.packaging.rst deleted file mode 100644 index 2ed3a69cb56..00000000000 --- a/CHANGES/10662.packaging.rst +++ /dev/null @@ -1 +0,0 @@ -Removed non SPDX-license description from ``setup.cfg`` -- by :user:`devanshu-ziphq`. 
diff --git a/CHANGES/10725.feature.rst b/CHANGES/10725.feature.rst deleted file mode 100644 index 2cb096a58e7..00000000000 --- a/CHANGES/10725.feature.rst +++ /dev/null @@ -1,6 +0,0 @@ -Added a comprehensive HTTP Digest Authentication client middleware (DigestAuthMiddleware) -that implements RFC 7616. The middleware supports all standard hash algorithms -(MD5, SHA, SHA-256, SHA-512) with session variants, handles both 'auth' and -'auth-int' quality of protection options, and automatically manages the -authentication flow by intercepting 401 responses and retrying with proper -credentials -- by :user:`feus4177`, :user:`TimMenninger`, and :user:`bdraco`. diff --git a/CHANGES/10759.packaging.rst b/CHANGES/10759.packaging.rst deleted file mode 100644 index 6f41e873229..00000000000 --- a/CHANGES/10759.packaging.rst +++ /dev/null @@ -1,5 +0,0 @@ -Added support for building against system ``llhttp`` library -- by :user:`mgorny`. - -This change adds support for :envvar:`AIOHTTP_USE_SYSTEM_DEPS` environment variable that -can be used to build aiohttp against the system install of the ``llhttp`` library rather -than the vendored one. diff --git a/CHANGES/10761.contrib.rst b/CHANGES/10761.contrib.rst deleted file mode 120000 index 3d35184e09d..00000000000 --- a/CHANGES/10761.contrib.rst +++ /dev/null @@ -1 +0,0 @@ -9705.contrib.rst \ No newline at end of file diff --git a/CHANGES/10797.feature.rst b/CHANGES/10797.feature.rst deleted file mode 100644 index fc68d09f34e..00000000000 --- a/CHANGES/10797.feature.rst +++ /dev/null @@ -1 +0,0 @@ -Started building armv7l manylinux wheels -- by :user:`bdraco`. diff --git a/CHANGES/10823.packaging.rst b/CHANGES/10823.packaging.rst deleted file mode 100644 index c65f8bea795..00000000000 --- a/CHANGES/10823.packaging.rst +++ /dev/null @@ -1,3 +0,0 @@ -``aiodns`` is now installed on Windows with speedups extra -- by :user:`bdraco`. - -As of ``aiodns`` 3.3.0, ``SelectorEventLoop`` is no longer required when using ``pycares`` 4.7.0 or later. diff --git a/CHANGES/10847.feature.rst b/CHANGES/10847.feature.rst deleted file mode 100644 index bfa7f6d498a..00000000000 --- a/CHANGES/10847.feature.rst +++ /dev/null @@ -1,5 +0,0 @@ -Implemented shared DNS resolver management to fix excessive resolver object creation -when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures -only one ``DNSResolver`` object is created for default configurations, significantly -reducing resource usage and improving performance for applications using multiple -client sessions simultaneously -- by :user:`bdraco`. diff --git a/CHANGES/10851.bugfix.rst b/CHANGES/10851.bugfix.rst deleted file mode 100644 index 9c47cc95905..00000000000 --- a/CHANGES/10851.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -Fixed pytest plugin to not use deprecated :py:mod:`asyncio` policy APIs. diff --git a/CHANGES/10851.contrib.rst b/CHANGES/10851.contrib.rst deleted file mode 100644 index 623f96bc227..00000000000 --- a/CHANGES/10851.contrib.rst +++ /dev/null @@ -1,2 +0,0 @@ -Updated tests to avoid using deprecated :py:mod:`asyncio` policy APIs and -make it compatible with Python 3.14. 
diff --git a/CHANGES/10877.packaging.rst b/CHANGES/10877.packaging.rst deleted file mode 100644 index 0bc2ee03984..00000000000 --- a/CHANGES/10877.packaging.rst +++ /dev/null @@ -1 +0,0 @@ -Fixed compatibility issue with Cython 3.1.1 -- by :user:`bdraco` diff --git a/CHANGES/10902.feature.rst b/CHANGES/10902.feature.rst deleted file mode 120000 index b565aa68ee0..00000000000 --- a/CHANGES/10902.feature.rst +++ /dev/null @@ -1 +0,0 @@ -9732.feature.rst \ No newline at end of file diff --git a/CHANGES/10915.bugfix.rst b/CHANGES/10915.bugfix.rst deleted file mode 100644 index f564603306b..00000000000 --- a/CHANGES/10915.bugfix.rst +++ /dev/null @@ -1,3 +0,0 @@ -Fixed connection reuse for file-like data payloads by ensuring buffer -truncation respects content-length boundaries and preventing premature -connection closure race -- by :user:`bdraco`. diff --git a/CHANGES/10922.contrib.rst b/CHANGES/10922.contrib.rst deleted file mode 100644 index e5e1cfd8af6..00000000000 --- a/CHANGES/10922.contrib.rst +++ /dev/null @@ -1 +0,0 @@ -Added Winloop to test suite to support in the future -- by :user:`Vizonex`. diff --git a/CHANGES/10923.feature.rst b/CHANGES/10923.feature.rst deleted file mode 120000 index 879a4227358..00000000000 --- a/CHANGES/10923.feature.rst +++ /dev/null @@ -1 +0,0 @@ -10847.feature.rst \ No newline at end of file diff --git a/CHANGES/10941.bugfix.rst b/CHANGES/10941.bugfix.rst deleted file mode 120000 index aa085cc590d..00000000000 --- a/CHANGES/10941.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -10915.bugfix.rst \ No newline at end of file diff --git a/CHANGES/10943.bugfix.rst b/CHANGES/10943.bugfix.rst deleted file mode 120000 index aa085cc590d..00000000000 --- a/CHANGES/10943.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -10915.bugfix.rst \ No newline at end of file diff --git a/CHANGES/10945.feature.rst b/CHANGES/10945.feature.rst deleted file mode 120000 index b565aa68ee0..00000000000 --- a/CHANGES/10945.feature.rst +++ /dev/null @@ -1 +0,0 @@ -9732.feature.rst \ No newline at end of file diff --git a/CHANGES/10946.feature.rst b/CHANGES/10946.feature.rst deleted file mode 120000 index 879a4227358..00000000000 --- a/CHANGES/10946.feature.rst +++ /dev/null @@ -1 +0,0 @@ -10847.feature.rst \ No newline at end of file diff --git a/CHANGES/10951.bugfix.rst b/CHANGES/10951.bugfix.rst deleted file mode 100644 index d539fc1a52d..00000000000 --- a/CHANGES/10951.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -Fixed :py:class:`~aiohttp.resolver.AsyncResolver` not using the ``loop`` argument in versions 3.x where it should still be supported -- by :user:`bdraco`. 
diff --git a/CHANGES/10952.feature.rst b/CHANGES/10952.feature.rst deleted file mode 120000 index b565aa68ee0..00000000000 --- a/CHANGES/10952.feature.rst +++ /dev/null @@ -1 +0,0 @@ -9732.feature.rst \ No newline at end of file diff --git a/CHANGES/10959.feature.rst b/CHANGES/10959.feature.rst deleted file mode 120000 index b565aa68ee0..00000000000 --- a/CHANGES/10959.feature.rst +++ /dev/null @@ -1 +0,0 @@ -9732.feature.rst \ No newline at end of file diff --git a/CHANGES/10961.feature.rst b/CHANGES/10961.feature.rst deleted file mode 120000 index 7c4f9a7b83b..00000000000 --- a/CHANGES/10961.feature.rst +++ /dev/null @@ -1 +0,0 @@ -10520.feature.rst \ No newline at end of file diff --git a/CHANGES/10962.feature.rst b/CHANGES/10962.feature.rst deleted file mode 120000 index 7c4f9a7b83b..00000000000 --- a/CHANGES/10962.feature.rst +++ /dev/null @@ -1 +0,0 @@ -10520.feature.rst \ No newline at end of file diff --git a/CHANGES/10968.feature.rst b/CHANGES/10968.feature.rst deleted file mode 120000 index b565aa68ee0..00000000000 --- a/CHANGES/10968.feature.rst +++ /dev/null @@ -1 +0,0 @@ -9732.feature.rst \ No newline at end of file diff --git a/CHANGES/10972.feature.rst b/CHANGES/10972.feature.rst deleted file mode 100644 index 1d3779a3969..00000000000 --- a/CHANGES/10972.feature.rst +++ /dev/null @@ -1 +0,0 @@ -Upgraded to LLHTTP 9.3.0 -- by :user:`Dreamsorcerer`. diff --git a/CHANGES/10988.bugfix.rst b/CHANGES/10988.bugfix.rst deleted file mode 120000 index 6e737bb336c..00000000000 --- a/CHANGES/10988.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -6009.bugfix.rst \ No newline at end of file diff --git a/CHANGES/10991.feature.rst b/CHANGES/10991.feature.rst deleted file mode 100644 index 687a1a752f6..00000000000 --- a/CHANGES/10991.feature.rst +++ /dev/null @@ -1,7 +0,0 @@ -Optimized small HTTP requests/responses by coalescing headers and body into a single TCP packet -- by :user:`bdraco`. - -This change enhances network efficiency by reducing the number of packets sent for small HTTP payloads, improving latency and reducing overhead. Most importantly, this fixes compatibility with memory-constrained IoT devices that can only perform a single read operation and expect HTTP requests in one packet. The optimization uses zero-copy ``writelines`` when coalescing data and works with both regular and chunked transfer encoding. - -When ``aiohttp`` uses client middleware to communicate with an ``aiohttp`` server, connection reuse is more likely to occur since complete responses arrive in a single packet for small payloads. - -This aligns ``aiohttp`` with other popular HTTP clients that already coalesce small requests. diff --git a/CHANGES/2213.feature.rst b/CHANGES/2213.feature.rst deleted file mode 120000 index d118975e478..00000000000 --- a/CHANGES/2213.feature.rst +++ /dev/null @@ -1 +0,0 @@ -10725.feature.rst \ No newline at end of file diff --git a/CHANGES/2914.doc.rst b/CHANGES/2914.doc.rst deleted file mode 100644 index 25592bf79bc..00000000000 --- a/CHANGES/2914.doc.rst +++ /dev/null @@ -1,4 +0,0 @@ -Improved documentation for middleware by adding warnings and examples about -request body stream consumption. The documentation now clearly explains that -request body streams can only be read once and provides best practices for -sharing parsed request data between middleware and handlers -- by :user:`bdraco`. 
diff --git a/CHANGES/6009.bugfix.rst b/CHANGES/6009.bugfix.rst deleted file mode 100644 index a530832c8a9..00000000000 --- a/CHANGES/6009.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -Fixed :py:attr:`~aiohttp.web.WebSocketResponse.prepared` property to correctly reflect the prepared state, especially during timeout scenarios -- by :user:`bdraco` diff --git a/CHANGES/9705.contrib.rst b/CHANGES/9705.contrib.rst deleted file mode 100644 index 5d23e964fa1..00000000000 --- a/CHANGES/9705.contrib.rst +++ /dev/null @@ -1 +0,0 @@ -Sped up tests by disabling ``blockbuster`` fixture for ``test_static_file_huge`` and ``test_static_file_huge_cancel`` tests -- by :user:`dikos1337`. diff --git a/CHANGES/9732.feature.rst b/CHANGES/9732.feature.rst deleted file mode 100644 index bf6dd8ebde3..00000000000 --- a/CHANGES/9732.feature.rst +++ /dev/null @@ -1,6 +0,0 @@ -Added client middleware support -- by :user:`bdraco` and :user:`Dreamsorcerer`. - -This change allows users to add middleware to the client session and requests, enabling features like -authentication, logging, and request/response modification without modifying the core -request logic. Additionally, the ``session`` attribute was added to ``ClientRequest``, -allowing middleware to access the session for making additional requests. diff --git a/CHANGES/9798.feature.rst b/CHANGES/9798.feature.rst deleted file mode 100644 index c1584b04491..00000000000 --- a/CHANGES/9798.feature.rst +++ /dev/null @@ -1,5 +0,0 @@ -Allow user setting zlib compression backend -- by :user:`TimMenninger` - -This change allows the user to call :func:`aiohttp.set_zlib_backend()` with the -zlib compression module of their choice. Default behavior continues to use -the builtin ``zlib`` library. diff --git a/CHANGES/9870.misc.rst b/CHANGES/9870.misc.rst deleted file mode 100644 index caa8f45e522..00000000000 --- a/CHANGES/9870.misc.rst +++ /dev/null @@ -1 +0,0 @@ -Added support for the ``partitioned`` attribute in the ``set_cookie`` method. diff --git a/aiohttp/__init__.py b/aiohttp/__init__.py index fdad4aac495..bd797bcf6ef 100644 --- a/aiohttp/__init__.py +++ b/aiohttp/__init__.py @@ -1,4 +1,4 @@ -__version__ = "3.12.0rc1" +__version__ = "3.12.0" from typing import TYPE_CHECKING, Tuple