I am learning web scraping since I need it for my work. I wrote the following code:
from selenium import webdriver
import pandas as pd

chromedriver = '/home/es/drivers/chromedriver'
driver = webdriver.Chrome(chromedriver)
driver.implicitly_wait(30)
driver.get('http://crdd.osdd.net/raghava/hemolytik/submitkey_browse.php?ran=1955')
df = pd.read_html(driver.find_element_by_id("table.example.display.datatable").get_attribute('example'))[0]
However, it is showing the following error:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="table.example.display.datatable"]"} (Session info: chrome=103.0.5060.134)
Then I inspected the table that I want to scrape on this page.
What attribute needs to be passed to the get_attribute() function in the following line?
df = pd.read_html(driver.find_element_by_id("table.example.display.datatable").get_attribute('example'))[0]
And what should I pass to driver.find_element_by_id()?
EDITED:
Some tables have many records spread across multiple pages.
For example, this page has 2,246 entries, shown 100 entries per page. When I tried to web-scrape it, df contained only 320 entries, with record IDs from 1232 to 1713, which means it picked up entries from a few pages in the middle rather than going from the first page through to the last.
What can we do in such cases?
Answer
You need to get the outerHTML attribute of the table element first, then parse it with pandas.read_html().
You also need to wait for the element to be visible. Use an explicit wait such as WebDriverWait().
driver.get('http://crdd.osdd.net/raghava/hemolytik/submitkey_browse.php?ran=1955')
table = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "table#example")))
tableRows = table.get_attribute("outerHTML")
df = pd.read_html(tableRows)[0]
print(df)
Import the libraries below.
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
import pandas as pd
Output:
      ID      PMID  YEAR  ...                                 DSSP Natural Structure Final Structure
0   1643  16137634  2005  ...                     CCCCCCCCCCCSCCCC               NaN             NaN
1   1644  16137634  2005  ...                        CCTTSCCSSCCCC               NaN             NaN
2   1645  16137634  2005  ...                   CTTTCGGGHHHHHHHHCC               NaN             NaN
3   1646  16137634  2005  ...                   CGGGTTTHHHHHHHGGGC               NaN             NaN
4   1647  16137634  2005  ...                CCSCCCSSCHHHHHHHHHTTC               NaN             NaN
5   1910  16730859  2006  ...  CCCCCCCSSCCSHHHHHHHHTTHHHHHHHHSSCCC               NaN             NaN
6   1911  16730859  2006  ...                                CCSCC               NaN             NaN
7   1912  16730859  2006  ...                            CCSSSCSCC               NaN             NaN
8   1913  16730859  2006  ...       CCCSSCCSSCCSHHHHHTTHHHHTTTCSCC               NaN             NaN
9   1914  16730859  2006  ...                 CCSHHHHHHHHHHHHHCCCC               NaN             NaN
10  2110  11226440  2001  ...              CCCSSCCCBTTBTSSSSSSCSCC               NaN             NaN
11  3799   9204560  1997  ...                               CCSSCC               NaN             NaN
12  4149  16137634  2005  ...                       CCHHHHHHHHHHHC               NaN             NaN

[13 rows x 17 columns]
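For the multi-page case mentioned in the edit, one option is to page through the table with Selenium and concatenate the results. This is only a sketch: it assumes the table uses DataTables-style pagination with a "Next" control whose id is example_next (the usual DataTables naming for a table with id example) and that the control gets a disabled class on the last page. Check the actual element ids and classes on the page before relying on them.

from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
import pandas as pd
import time

dfs = []
while True:
    # Re-read the rows currently rendered in the table
    table = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "table#example")))
    dfs.append(pd.read_html(table.get_attribute("outerHTML"))[0])

    # Assumed id "example_next": DataTables usually names the "Next" button
    # after the table id. Verify this in the page's HTML.
    next_button = driver.find_element(By.ID, "example_next")
    if "disabled" in next_button.get_attribute("class"):
        break  # last page reached
    next_button.click()
    time.sleep(1)  # give the table a moment to re-render

df = pd.concat(dfs, ignore_index=True)
print(len(df))

Each iteration reads only the rows displayed on the current page, so the concatenated DataFrame should cover every page in order rather than a slice from the middle.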