I’m trying to create a Python function that does the same thing as this wget command:
wget -c --read-timeout=5 --tries=0 "$URL"
-c – Continue from where you left off if the download is interrupted.
--read-timeout=5 – If no new data comes in for over 5 seconds, give up and try again. Combined with -c, this means it will retry from where it left off.
--tries=0 – Retry forever.
Used in tandem, those three options result in a download that effectively cannot fail.
I want to duplicate those features in my Python script, but I don’t know where to begin…
Answer
urllib.request should work. Just set it up in a while (not done) loop: check whether the local file already exists, and if it does, send a GET with a Range header specifying how far you already got. Then keep calling read() and appending to the local file until the download completes or an error occurs, at which point you loop around and retry.
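A minimal sketch of that approach. It assumes the server supports HTTP range requests; the function name download_forever, the 8 KB chunk size, and the one-second retry pause are illustrative choices, not part of the original answer:

import os
import time
import urllib.error
import urllib.request

def download_forever(url, filename, read_timeout=5, chunk_size=8192):
    """Resumable download that retries forever, roughly matching
    wget -c --read-timeout=5 --tries=0."""
    while True:
        # Resume from however many bytes are already on disk (like wget -c).
        start = os.path.getsize(filename) if os.path.exists(filename) else 0
        request = urllib.request.Request(url)
        if start > 0:
            request.add_header("Range", f"bytes={start}-")
        try:
            with urllib.request.urlopen(request, timeout=read_timeout) as response:
                # 206 Partial Content means the server honored the Range
                # header; a plain 200 means it did not, so start over.
                mode = "ab" if response.status == 206 else "wb"
                with open(filename, mode) as f:
                    while True:
                        chunk = response.read(chunk_size)
                        if not chunk:
                            return  # end of stream: download complete
                        f.write(chunk)
        except urllib.error.HTTPError as e:
            if e.code == 416:
                return  # range not satisfiable: file is already complete
            time.sleep(1)  # other HTTP error: back off briefly, then retry
        except OSError:
            # Covers URLError, dropped connections, and read timeouts
            # (socket.timeout is an OSError). Retry from where we left off.
            time.sleep(1)

download_forever("http://example.com/big.iso", "big.iso")

Checking the status code before appending matters: a server that ignores the Range header replies with 200 and the full body, and blindly appending that would corrupt the local file.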
This is also potentially a duplicate of "Python urllib2 resume download doesn’t work when network reconnects".