Python multi-connection downloader: resuming after pausing makes the download run endlessly

I have written a Python script that downloads a single file using 32 connections if available.

I have written a multi-connection downloader that works fine without pausing, but it won’t stop downloading after resuming; the progress goes beyond 100%…

Like this:

[progress output omitted]

After progress exceeds 100%, there will be error messages like this:

[error messages omitted]

(The above doesn’t include all of the error message)

I have encountered all sorts of errors after resuming, but most importantly, the server often sends extra bytes from the previous request, whose connection is dead, and needless to say this breaks the whole thing.

How should I implement pause and resume correctly?

I am thinking about multiprocessing. I assume the sessions and connections are all tied to the PID and port number, and so far no new run of the script has ever received extra bytes from a previous run, so I guess using another process with a new PID and new port number, plus requests.session() plus {'connection': 'close'} for each download, should guarantee that no extra bytes from previous connections are received. I just don’t know how to share variables between processes…
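A minimal sketch of the kind of thing I mean, using multiprocessing.Value for the shared progress counter (the URL, byte range, and all the names here are placeholders, not my real code):

    import multiprocessing
    import requests

    def download_part(url, start, end, progress):
        # a new process gets a new PID and new source ports, so stale bytes
        # from an old connection cannot show up on this one
        with requests.session() as s:
            headers = {'Range': f'bytes={start}-{end}', 'connection': 'close'}
            r = s.get(url, headers=headers, stream=True)
            for data in r.iter_content(chunk_size=1024):
                with progress.get_lock():
                    progress.value += len(data)  # counter shared with the parent

    if __name__ == '__main__':
        progress = multiprocessing.Value('q', 0)  # shared 64-bit byte counter
        p = multiprocessing.Process(target=download_part,
                                    args=('http://example.com/file', 0, 1048575, progress))
        p.start()
        p.join()
        print(progress.value)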

The code: downloader.py

[code omitted]

For testing purposes this is a dumbed-down version of the script, with all checks removed while retaining the same functionality (sorry it really takes all these lines to show the download information):

[code omitted]

The URL isn’t accessible without a VPN in my locale. Test 0 always results in True, that is, the connection hasn’t gone dead during the download; test 1 sometimes results in True, sometimes in False, and sometimes it doesn’t finish at all (the progress bar goes beyond 100%)…

How can my code be salvaged?


Answer

This might not be your only problem, but you have a race condition that can show up if you pause and resume quickly (where the definition of “quickly” varies greatly depending on your circumstances). Consider that you’ve got 32 threads, each requesting a megabyte chunk; call them threads 0-31. They are sitting there downloading and you pause. The threads do not know that you paused until they receive a chunk of data, because they are sitting in blocking I/O. I’m not sure what speed your connection is or how many cores your machine has (threads can act in parallel when they don’t need the GIL), but this can take a lot longer than you expect. Then you unpause, and your code creates new threads 32-63 while some or all of threads 0-31 are still waiting for their next chunk. You set threads 32-63 in motion and then turn off your pause flag. Whichever of threads 0-31 didn’t end then wake up and see that things aren’t paused. Now you have multiple threads accessing the same state variables,


so if thread 0 is downloading the same chunk as thread 31, they both keep updating the same state, adding to position and count even though they are downloading overlapping parts of the file. You even reuse the objects the threads live inside of, so the state can get really, really messed up.

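The original snippet isn’t preserved here, but the pattern described above is roughly this (a reconstruction, not your actual code; names like self.paused and self.position are assumed):

    # methods on the shared per-chunk object, reconstructed from the description
    def worker(self):
        # the pause flag is only checked after a chunk arrives, because
        # iter_content blocks until there is data to hand over
        for data in self.response.iter_content(chunk_size=1024):
            if self.paused:
                break
            self.file.seek(self.position)
            self.file.write(data)
            self.position += len(data)  # races when two threads share this object
            self.count += len(data)     # progress double-counts overlapping bytes

    def unpause(self):
        # the buggy order: new workers first, flag second
        self.create_threads()  # threads 32-63 start on the same, reused objects
        self.paused = False    # threads 0-31 wake, see "not paused", keep writing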

There might be some other problems in your code, and it is a lot to sort through, so I suggest taking the time to do some refactoring to eliminate duplicate code and organise things into more functions. I don’t believe in crazy-tiny functions, but you could use a few sub-functions, like download_multi(download_state) and maybe download_single. I am relatively confident, however, that your current problem will be solved if you ensure that the threads you have running actually end after you pause. To do so, you need to actually hold references to your threads somewhere.

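For instance (a sketch; the attribute name is assumed, since the original snippet isn’t preserved):

    # one list that owns every live worker thread, so they can be joined later
    self.threads = []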

When you create your threads, do it the first time and again after you unpause; preferably this would be in a function that builds the list and returns it.

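A sketch, with the worker target and its arguments assumed:

    import threading

    def create_threads(parts):
        # start one worker per chunk and return the list so the caller keeps it
        threads = []
        for part in parts:
            t = threading.Thread(target=part.download)
            t.start()
            threads.append(t)
        return threads

    # inside your downloader object, wherever the download (re)starts:
    self.threads = create_threads(self.parts)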

Then when you unpause:

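Again a sketch; what matters is the order, with every old worker dead before any new one exists:

    # wait for each old worker to notice the pause flag and fall out of its loop
    for t in self.threads:
        t.join()
    # only now is it safe to clear the flag and build replacements
    self.paused = False
    self.threads = create_threads(self.parts)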

This way the threads do the work, they quit when you pause, and you rebuild them afterwards. join should return as soon as each thread breaks out of the blocking I/O call to iter_content, so the old threads are always dead before you make the new ones.

What I would do myself, however, is create a socket from each thread to the main process. When a pause is detected, each thread shuts down its request, saves any data already waiting in the OS buffer, and then goes into a blocking receive on the socket (there might be a way to use select with a socket and requests so you can break out of the blocking I/O in r.iter_content immediately, but I leave that for your research). When the program is unpaused, the main process sends a value telling each thread to restart. You’d want at least two signals the threads recognise, one for quitting gracefully and one for resuming; the codes can be single characters. When the value arrives, the thread unblocks and can restart the download using requests and its previous state, as though nothing happened.
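A rough sketch of that design, assuming socket.socketpair() for the per-thread channel and single-character codes (download_until_paused and everything else here is made up for illustration):

    import socket
    import threading

    RESUME, QUIT = b"r", b"q"

    def worker(ctrl):
        # ctrl is this thread's end of a socketpair to the main thread
        while True:
            download_until_paused()  # assumed: shuts down the request and saves
                                     # buffered data when the pause flag is seen
            cmd = ctrl.recv(1)       # blocking receive: sleep until signalled
            if cmd == QUIT:
                return
            # cmd == RESUME: loop around and pick up from the saved state

    # main thread: one control socket per worker
    pairs = [socket.socketpair() for _ in range(32)]
    threads = [threading.Thread(target=worker, args=(theirs,))
               for _, theirs in pairs]
    for t in threads:
        t.start()

    # on unpause, wake every worker:
    for mine, _ in pairs:
        mine.send(RESUME)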

User contributions licensed under: CC BY-SA