
request.urlopen(url) not return website response or timeout

I want to fetch the source of some websites for a project. When I try to get a response, the program just hangs and waits; no matter how long I wait, there is no timeout and no response. Here is my code:

import urllib.request

link = "https://eu.mouser.com/"
linkResponse = urllib.request.urlopen(link)   # hangs here and never returns
readedResponse = linkResponse.readlines()
writer = open("html.txt", "w")
for line in readedResponse:
    writer.write(str(line))
    writer.write("\n")
writer.close()

With other websites, urlopen returns their response. But when I try to get "eu.mouser.com" and "uk.farnell.com", it never returns a response, and urlopen does not even raise a timeout. What is the problem here? Is there another way to get a website's source? (Sorry for my bad English)


Answer

The urllib.request.urlopen docs claim that

The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This actually only works for HTTP, HTTPS and FTP connections.

without explaining how to find said default. I managed to provoke a timeout by passing 5 (seconds) directly as the timeout:

import urllib.request
url = "https://uk.farnell.com"
urllib.request.urlopen(url, timeout=5)

gives

socket.timeout: The read operation timed out
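
If you want the call to fail gracefully instead of propagating the exception, you can wrap it in a try/except. Below is a minimal sketch, assuming Python 3; it catches socket.timeout (as seen in the traceback above) along with urllib.error.URLError, and also shows socket.setdefaulttimeout as one way to set the "global default timeout setting" the docs mention:

import socket
import urllib.error
import urllib.request

url = "https://uk.farnell.com"

# Per-call timeout: urlopen gives up once 5 seconds pass without data
try:
    response = urllib.request.urlopen(url, timeout=5)
    html = response.read().decode("utf-8", errors="replace")
except (socket.timeout, urllib.error.URLError) as exc:
    print("request failed or timed out:", exc)

# Alternatively, set the global default timeout, which applies to
# urlopen calls made without an explicit timeout argument
socket.setdefaulttimeout(5)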