
Python – Iterate through a list of websites and scrape data – failing at requests.get

I have a list of items that I scraped from GitHub. It is stored in df_actionname['ActionName']. Each 'ActionName' can then be converted into a 'Weblink', which gives the website link for that action. I am trying to loop through each Weblink and scrape data from it.

My code:

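A minimal sketch of the kind of code described above (df_actionname, the column names, the marketplace URL prefix, and the sample data are assumptions based on the description, not the exact original snippet):

    # Sketch of the setup that produces the error shown below; names and
    # sample data are assumptions, not the poster's exact code.
    import pandas as pd
    import requests

    df_actionname = pd.DataFrame({'ActionName': ['example-action-one',
                                                  'example-action-two']})  # hypothetical data

    # Turn each action name into a GitHub Marketplace link
    df_actionname['Weblink'] = ('https://github.com/marketplace/actions/'
                                + df_actionname['ActionName'])

    for website in df_actionname['Weblink']:
        URL = df_actionname['Weblink']     # whole Series, not a single link
        detailpage = requests.get(URL)     # raises InvalidSchema: URL is not a str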

My code is failing at detailpage = requests.get(URL). The error message I am getting is:

    in get_adapter
        raise InvalidSchema(f"No connection adapters were found for {url!r}")
    requests.exceptions.InvalidSchema: No connection adapters were found for
    '0    https://github.com/marketplace/actions/Truffle…
    1    https://github.com/marketplace/actions/Metrics…
    2    https://github.com/marketplace/actions/Super-L…
    3    https://github.com/marketplace/actions/Swift-Doc
    Name: Weblink, dtype: object'


Answer

You need to pass requests.get a single valid URL on each iteration, so the request should use the loop variable rather than the whole Weblink column.

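A sketch of the corrected loop (website and detailpage follow the names used in the question; the print line is an assumption about how the per-page output was produced):

    for website in df_actionname['Weblink']:
        detailpage = requests.get(website)        # one valid URL per request
        print(website, detailpage.status_code)    # show which page was fetched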

With that change, every link in the column is requested one at a time, and the loop runs without the InvalidSchema error.

The way you were doing it, not only was your code repeatedly sending the same GET request on every loop iteration (since URL did not depend on website at all), but the value passed to requests.get was not a single URL, as you can see by adding a print before the request.
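A quick check along these lines (a sketch, assuming the same df_actionname and column name as above) makes the problem visible:

    URL = df_actionname['Weblink']
    print(type(URL))   # <class 'pandas.core.series.Series'>
    print(URL)         # the whole multi-line Series, not one "https://..." string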

User contributions licensed under: CC BY-SA