I am new to Python and web scraping, but for the past two weeks I have been periodically scraping one website and successfully downloading images from it. I use different proxies and change them from time to time. Starting yesterday, however, all my proxies suddenly stopped working with a timeout error. I have tried a whole list of them and every one fails. Could this be some kind of anti-scraping protection on the site? If so, is there a way to overcome it?
Python

import requests
from bs4 import BeautifulSoup

header = {
    "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'
}

proxies = {
    "http": "http://188.114.99.153",
    "https": "http://180.94.69.66:8080"
}

url = 'https://parovoz.com/newgallery/index.php?&LNG=RU&NO_ICONS=0&CATEG=-1&HOWMANY=192'

html = requests.get(url, headers=header, proxies=proxies, timeout=10).text

soup = BeautifulSoup(html, 'lxml')
Error message:
ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001536A8E7190>, 'Connection to 180.94.69.66 timed out. (connect timeout=10)'))
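A ConnectTimeoutError naming 180.94.69.66 means the connection to the proxy itself timed out before the target site was ever contacted, which points at dead proxies rather than (or in addition to) site-side blocking. One way to tell the two apart is to test each proxy against an unrelated endpoint first. A minimal sketch, assuming a hypothetical proxy_list and using httpbin.org as the test endpoint:

Python

import requests

# Hypothetical list of proxies to test; substitute your own.
proxy_list = [
    "http://188.114.99.153",
    "http://180.94.69.66:8080",
]

for proxy in proxy_list:
    proxies = {"http": proxy, "https": proxy}
    try:
        # httpbin.org/ip echoes the IP the request arrived from,
        # so a successful response confirms the proxy is alive.
        r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=5)
        print(proxy, "OK", r.json())
    except requests.exceptions.RequestException as exc:
        print(proxy, "FAILED:", type(exc).__name__)

If the proxies fail here too, they are simply down; if they work here but time out against the target site, the site may indeed be blocking them.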
Answer
This will GET the URL and retry up to 3 times on ConnectTimeoutError. It also applies an increasing delay between attempts, which helps avoid failing again if the timeouts are caused by a periodic request quota. Take a look at urllib3.util.retry.Retry; it has many options for fine-tuning retries.
Python

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from bs4 import BeautifulSoup

header = {
    "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'
}

url = 'https://parovoz.com/newgallery/index.php?&LNG=RU&NO_ICONS=0&CATEG=-1&HOWMANY=192'

session = requests.Session()
# Retry up to 3 times on connection errors, with an exponentially
# growing delay between attempts (controlled by backoff_factor).
retry = Retry(connect=3, backoff_factor=0.5)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)

html = session.get(url, headers=header).text
soup = BeautifulSoup(html, 'lxml')
print(soup)
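Retry can also recover from unhelpful HTTP responses, not just connection errors. A minimal sketch of a few of its other options, assuming you also want to retry on rate-limit and server-error status codes (the codes listed are illustrative):

Python

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(
    total=5,              # overall cap on retries of any kind
    connect=3,            # retries for connection errors, as above
    status=3,             # retries triggered by the status codes below
    backoff_factor=0.5,   # exponential delay between attempts
    status_forcelist=[429, 500, 502, 503, 504],  # retry on these responses
)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)

By default Retry also honours a Retry-After header on 429 and 503 responses (respect_retry_after_header), which is exactly what you want when a site enforces a request quota.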