Introduce resumable downloads with --resume-retries #12991
Conversation
- Added --resume-retries option to allow resuming incomplete downloads
- Setting --resume-retries=N allows pip to make N attempts to resume downloading, in case of dropped or timed-out connections
- Each resume attempt uses the values specified for --retries and --timeout internally

Signed-off-by: gmargaritis <[email protected]>
Force-pushed from 16fb735 to dbc6a64
I'm guessing the CI fails because of the new linter rules introduced in 102d818

Does this do rsync-style checksums? That would increase reliability.
Signed-off-by: gmargaritis <[email protected]>
Hey @notatallshaw 👋 Is there anything that I can do to move this one forward?

A pip maintainer needs to take up the task of reviewing it; as we're all volunteers, it's a matter of finding time. I think my main concern would be the behavior when interacting with index servers that behave badly, e.g. give the wrong content length (usually 0). Your description looks good to me, but I haven't had time to look over the code yet.

Yeah, I know how it goes, so no worries! If you need any clarifications or would like me to make changes, I'd be happy to help!

Any chance that it'll be merged soon?

I've had an initial cursory glance at this PR and it appears to be sufficiently high quality. I've also run the functionality locally (select a large wheel to download and then disconnect my WiFi midway through the download) and it has a good UX. My main concern, although this is a ship that has probably sailed, is that it would be nice for pip not to have to directly handle HTTP intricacies and leave that to a separate library. I can't promise a full review or that other maintainers will agree, but I am adding it to the 25.1 milestone for it to be tracked.

The PR looks good, although I'm not an HTTP expert so I can't comment on details like status and header handling. Like @notatallshaw, I wish we could leave this sort of detail to a 3rd-party library, but that would be a major refactoring. Add this PR (along with cert handling, parallel downloads, etc.) to the list of reasons we should consider such a refactoring, but in the meantime I'm in favour of adding this.
There isn't an "approve with conditions" button, but I approve this change on the basis that someone who understands HTTP should check the header and status handling.

I'll tack this onto my to-do list. Not sure if I can call myself an HTTP expert, but I've done a fair bit of webdev as a hobby, so I'm decently familiar with HTTP statuses and header handling. Sorry for taking so long to review. Large PRs like these are appreciated since they often implement major improvements, but they're also tedious to review and pretty daunting. Not really a good excuse, but that's how it feels. Thanks @notatallshaw for the initial pass and confirming this is worth the look.
Awesome! Thank you for all your efforts! Don’t worry about it, I know how it feels! Let me know if you need anything ✌️ |
Hopefully this gets added soon; downloading GBs of stuff over slow internet and then having to restart from the beginning is not an experience I would recommend.
@Ibrahima-prog I hear ya! This is on my radar to review. I haven't gotten around to it yet. And truthfully, I probably won't find the time until at least next Thursday. This will make it into the pip 25.1 release. |
I've read through the code a couple of times and tried to educate myself on the relevant HTTP headers, as well as trying it against PyPI. Overall I'm happy with this PR, but I have left a few comments on edge cases that I would like you to address or provide thoughts on.

On the topic of edge cases, here's an example where a range request is possible on the index but the HEAD request to check returns a 405: astral-sh/uv#11379. I don't think this PR is affected by this behavior, but it's an interesting example of how edge-casey all this behavior can be.
```python
if bytes_received < total_length:
    self._attempt_resume(
        resp, link, content_file, total_length, bytes_received, filepath
    )
```
I have one concern and one nitpick here:

Concern: It looks like previously pip was only using the total_length for the progress bar; it was not validating that the download actually matched the total length. Should we be concerned that there are people using buggy HTTP servers that provide the wrong `Content-Length`?

While I do like erroring out on the download clearly being incomplete/wrong, and I think it's the correct default behavior, if users do complain that this breaks using pip for them, what do we tell them? Should we provide an escape hatch for users of broken HTTP servers? I appreciate this is a hypothetical.

Nitpick: It would be nice if, when `self._resume_retries` is 0, `self._attempt_resume` is never called.
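The size validation under discussion amounts to a simple check once the response stream is drained. Here is a minimal sketch, assuming hypothetical names (this is not pip's actual internals):

```python
# Sketch of validating a download against the server-reported
# Content-Length. All names here are hypothetical, not pip internals.
from typing import Optional


class IncompleteDownloadError(Exception):
    pass


def validate_download(bytes_received: int, total_length: Optional[int]) -> None:
    # Servers that omit Content-Length (total_length is None) cannot be
    # validated, so the check is skipped rather than erroring out.
    if total_length is not None and bytes_received < total_length:
        raise IncompleteDownloadError(
            f"received {bytes_received} of {total_length} bytes"
        )
```

An escape hatch for broken servers could simply skip this call, which is why keeping the check in one place like this is convenient.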
> But if users do complain that this breaks using pip for them, what do we tell them? Should we provide an escape hatch for users of broken HTTP servers? I appreciate this is a hypothetical.

I'm going to say that we can release this as-is, and if enough people complain, then we can add an escape hatch. I don't want to add flags prematurely. My gut is that at least one person is going to complain, but they really should fix their HTTP server.

> Nitpick: It would be nice if, when `self._resume_retries` is 0, `self._attempt_resume` is never called.

Agreed 👍
> I'm going to say that we can release this as-is, and if enough people complain, then we can add an escape hatch. I don't want to add flags prematurely. My gut is that at least one person is going to complain, but they really should fix their HTTP server.

Those were my thoughts as well 👆
`self._resume_retries` cannot be explicitly set to 0 by the user, thus `self._attempt_resume` is not called. Also, we terminate the loop when there are 0 retries left.
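As a rough illustration, a bounded resume loop of this shape stops as soon as the retry budget is exhausted. This is a sketch under assumed names, not the code in this PR:

```python
# Sketch of a bounded resume loop. All names are hypothetical;
# the real implementation lives in pip's download machinery.

def download_with_resume(fetch_chunk, total_length: int, resume_retries: int) -> int:
    """fetch_chunk(offset) returns the number of bytes read from offset."""
    bytes_received = fetch_chunk(0)
    retries_left = resume_retries
    # With resume_retries == 0 the loop body never runs, so no resume
    # attempt is ever made and the partial count is returned as-is.
    while bytes_received < total_length and retries_left > 0:
        retries_left -= 1
        bytes_received += fetch_chunk(bytes_received)
    return bytes_received
```

With chunks of 50, 30, and 20 bytes and two resume retries the loop completes the 100-byte download; with zero retries it returns the partial count after the first attempt.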
src/pip/_internal/exceptions.py
```python
def __init__(self, link: str, resume_retries: int) -> None:
    message = (
        f"Download failed after {resume_retries} attempts because not enough"
        " bytes were received. The incomplete file has been cleaned up."
    )
    hint = "Use --resume-retries to configure resume retry limit."
```
Can you add the number of bytes downloaded, or at least special-case when there are 0 bytes downloaded, and let the user know that no data was downloaded?
I have commonly seen the error when a corporate firewall allows an HTTP GET to start but blocks all the data and no data is downloaded, resulting in an empty file.
Implemented in c93c72a
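The suggested message could special-case the empty-download scenario along these lines. This is a sketch only; the names and exact wording are illustrative, not what landed in c93c72a:

```python
# Sketch of an error message that reports how much data arrived and
# special-cases 0 bytes (e.g. a firewall allowing the GET to start but
# blocking the body). Names and wording are illustrative only.

def incomplete_download_message(resume_retries: int, bytes_received: int) -> str:
    if bytes_received == 0:
        # Distinguish "nothing arrived at all" from a partial download.
        detail = "no data was received"
    else:
        detail = f"only {bytes_received} bytes were received"
    return (
        f"Download failed after {resume_retries} attempts because {detail}. "
        "The incomplete file has been cleaned up."
    )
```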
```python
except (ConnectionError, ReadTimeoutError):
    continue

# ...

if total_length and bytes_received < total_length:
```
What do you think about changing the `<` to `!=`?

I have seen corporate proxies return completely different data, such as a page saying the download is not allowed, with the full text of the internal policies related to network traffic.

This would change the semantics of the error, to something like `DownloadError` instead of `IncompleteDownloadError`, with verbiage related to it possibly being incomplete or a blocked network request.
I'd prefer if we more strongly suggest that the download was incomplete. I agree that we should mention the possibility that the response is total nonsense (yay for enterprise proxies), but the error message should emphasize the more likely culprit of an incomplete download. Perhaps we could check the response `Content-Type`, and if it isn't what we're expecting, then we can assume the response is total nonsense?
Since this logic is within the scope of resumable downloads, I’d suggest leaving this out of scope for now, as it affects pip’s overall download behavior, not just retries.
Changing `<` to `!=` would alter the semantics of the error, making it more about detecting completely different responses rather than just incomplete downloads. If we want to handle cases where proxies return unexpected content (like policy pages), that should be considered holistically across all downloads, not just resumable ones.
For now, the retry mechanism should continue treating incomplete downloads as the primary concern. If the connection is stable, pip won’t crash, and a broader discussion would be needed for verifying response integrity (e.g., checking Content-Type).
(This is a preliminary review consisting of feedback that immediately came to mind. I will need to review this again in more detail.)
Thank you so much for working on this! It's great to see the resumption of incomplete downloads make progress! I left some initial comments, but I have a larger question: why are the resume retries set to `0` by default?

I'd prefer if pip's behaviour emulated that of a browser, where it caches the incomplete download so it can be resumed at some later point, but I realize that would significantly increase the complexity of the implementation (and it'd also introduce some friction, as the download would have to be manually restarted).

However, defaulting to not resuming here results in poor UX IMO. Imagine I download half of a $very-large-wheel (e.g., PyTorch @ ~1 GB) and then the download is dropped. I get an `incomplete-download-error` explaining what happened. Good! I try again with `--resume-retries 5`. What's not so good is that pip will have to download all of said $very-large-wheel again.
If I'm on a slow or metered connection, I'd be frustrated that I have to download everything again. Doubly so if the connection failed at 90% or similar. In addition, it's not immediately clear how many resumption retries I should pass to pip. 1? 2?
Would it be possible to default to allowing a few (1-3) resume attempts? That way, if the download fails halfway through, the download will be given another shot. It may not be enough if the connection is so unstable that it requires a ton of resumes, but for one-off failures, it would still be a major improvement. As long as the messaging is clear, I don't think automatic resumes would be that annoying to the user.[^1] I consider resumes the preferred option and opting out of resumption to be an exceptional (but still important to support!) case.
Anyway, thank you again for your patience! I also appreciate all of the tests (although I have only scanned through them very briefly). Despite the flaws and my critiques, this is a major step forward, giving users a fighting chance to download large distributions on unstable connections.
Footnotes

[^1]: Although if resumes are the default, perhaps we shouldn't allow the download to restart from zero (i.e., range requests are NOT supported) multiple times? Downloading the whole file numerous times over could be very slow and surprising (especially for users on metered connections) and thus be something they need to opt into... although that would make the default special. A default of `--resume-retries 3` would be treated differently from the user specifying `--resume-retries 3`.
```python
# request a partial download
if range_start:
    headers["Range"] = f"bytes={range_start}-"
# make sure the file hasn't changed
if if_range:
    headers["If-Range"] = if_range
```
Can we enforce that these two parameters must be given simultaneously? While it is permissible to issue a range request without `If-Range`, it is generally inadvisable, as we then lose the protection that the file hasn't been changed in between retries. For PyPI, this is unlikely to be a problem, as the distribution files never change and the index pages are so small that they will rarely be retried, but unless there is good reason not to, I'd prefer requiring the safety net of `If-Range`.
Implemented in 66f68ca
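The requested coupling can be sketched as a small helper that only issues a range request when a validator is available. The names below are hypothetical, not the code from 66f68ca:

```python
# Sketch of building resume headers so that Range is only ever sent
# together with an If-Range validator (an ETag or Last-Modified value).
# Function and parameter names are hypothetical.
from typing import Dict, Optional


def build_resume_headers(range_start: int, if_range: Optional[str]) -> Dict[str, str]:
    headers: Dict[str, str] = {}
    if range_start and if_range:
        # Ask for the remainder of the file...
        headers["Range"] = f"bytes={range_start}-"
        # ...but only if the file is unchanged; otherwise the server
        # replies 200 with the full body instead of 206 Partial Content.
        headers["If-Range"] = if_range
    return headers
```

Without a validator the helper falls back to a plain full-file request, which is exactly the "safety net" behaviour being asked for.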
```python
        return bytes_received

    def _attempt_resume(
```
Is it possible for `total_length` to be None and resumption to still function? AFAICT by reading the current logic, no. The annotation can be changed to `int` and the tests for `total_length` can be dropped in the function's body.
It is indeed not possible for the resumption to still function if `total_length` is `None`, but since `_attempt_resume` relies on `_get_http_response_size` among other things, we have to define it as `Optional[int]`.
```python
except (ConnectionError, ReadTimeoutError):
    continue

# ...

if total_length and bytes_received < total_length:
```
@thk686 I'm very late to the party, but could you elaborate on how checksums come into play? AFAIK, indices don't serve the checksums of their distributions, so there is no way pip could double-check the download wasn't corrupted unless the checksums were given by the user. This PR uses conditional range requests (via the `If-Range` HTTP request header), which will avoid the issue of the file being changed on the server in-between requests.
I was thinking of over-the-wire corruption. In some cases it can be beneficial to checksum on both ends.
There has been some discussion around this in the past and I'd pretty much prefer it. However, I think that it's out of scope for this first step in implementing resumable downloads, considering the amount of work needed.

I initially set the default to `0` in order to keep backwards compatibility. We have two options:
Signed-off-by: gmargaritis <[email protected]> (cherry picked from commit f2e48c3f5885305369b88761ab74cd16a0869667)
Signed-off-by: gmargaritis <[email protected]> (cherry picked from commit 53ce184348de1af4937dc04de7a1aedbe4ede19a)
Signed-off-by: gmargaritis <[email protected]> (cherry picked from commit 1f8d7fe0b0a5c7b53719bd8713619f982c042dbf)
Signed-off-by: gmargaritis <[email protected]> (cherry picked from commit af6b7ac624ebc18035d2da217c4c1850a6850cd7)
Signed-off-by: gmargaritis <[email protected]> (cherry picked from commit 67e366aec42d913436159ca3bf877c46a0d5cd2c)
…_download Signed-off-by: gmargaritis <[email protected]>
Just so everyone is on the same page, I plan on re-reviewing this PR sometime this week. I'm working on prototyping some code style changes which I'll share soon. Beyond that, I'd like to review the other parts of the resuming UX. After that, I should be happy enough with this to merge it and let any other suggestions be handled at a later date.
Resolves #4796

Introduced the `--resume-retries` option in order to allow resuming incomplete downloads in case of dropped or timed-out connections.

This option additionally uses the values specified for `--retries` and `--timeout` for each resume attempt, since they are passed in the session.

Used `0` as the default in order to keep backwards compatibility.

This PR is based on #11180