I’m a noob trying to learn Python by scraping a website to track fund parameters. So far, the following code isolates and shows the data that I need:
```python
from bs4 import BeautifulSoup
import requests

source = requests.get('https://www.fundaggregatorurl.com/path/to/fund').text
soup = BeautifulSoup(source, 'lxml')
# print(soup.prettify())

print("\n1Y growth rate vs S&P BSE 500 TRI\n")
# Pinpoints the 1Y growth rate of the scheme and the S&P BSE 500 TRI
for snippet in soup.find_all('div', class_='scheme_per_amt prcntreturn 1Y'):
    print(snippet.text.lstrip())

print("\nNAV, AUM and Expense Ratio\n")
# Pinpoints NAV, AUM and Expense Ratio
for snippet in soup.find_all('span', class_='amt'):
    print(snippet.text)

# Get the risk analysis data
source = requests.get('https://www.fundaggregatorurl.com/path/to/fund/riskanalysis').text
soup = BeautifulSoup(source, 'lxml')

print("\nRisk Ratios\n")
# Pinpoints the fund vs category risk ratios
for snippet in soup.find_all('div', class_='percentage'):
    split_data = snippet.text.split('vs')
    print(*split_data, sep=" ")
    print()
```
This code shows the following data:
```
1Y growth rate vs S&P BSE 500 TRI

68.83%
50.85%

NAV, AUM and Expense Ratio

185.9414
2704.36
1.5%

Risk Ratios

19.76 17.95

0.89 0.93

0.77 0.72

0.17 0.14

4.59 2.32
```
How can I write this data to a CSV with the following headers?
```
Fund growth, Category Growth, Current NAV, AUM, Expense Ratio, Fund std dev, Category std dev, Fund beta, Category beta, Fund Sharpe ratio, Category Sharpe ratio, Fund Treynor's ratio, Category Treynor's ratio, Fund Jensen's Alpha, Category Jensen's Alpha
68.83%, 50.85%, 185.9414, 2704.36, 1.5%, 19.76, 17.95, 0.89, 0.93, 0.77, 0.72, 0.17, 0.14, 4.59, 2.32
```
This is for a single fund, and I need to get this data for about 100 more funds. I’ll experiment more on my own; any issues there are perhaps for another question at a later time :) Since I’m a newbie, any other improvements (and why you’d make them) would also be appreciated!
Answer
Assemble the data for each fund in a list so you can write it out in CSV format with Python’s built-in csv module:
```python
import csv

import requests
from bs4 import BeautifulSoup

funds = ['fund1', 'fund2']

# The header should match the number of data items collected per fund
header = ['Fund growth', 'Category Growth', 'Current NAV', 'AUM']

with open('funds.csv', 'w', newline='') as csvfile:
    fund_writer = csv.writer(csvfile)
    fund_writer.writerow(header)
    for fund in funds:
        fund_data = []
        source = requests.get('https://www.fundaggregatorurl.com/path/to/' + fund).text
        soup = BeautifulSoup(source, 'lxml')
        for snippet in soup.find_all('div', class_='scheme_per_amt prcntreturn 1Y'):
            fund_data.append(snippet.text.lstrip())
        # ...do the remaining parsing, appending each value to fund_data...
        fund_writer.writerow(fund_data)
```
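To see the row/header mechanics in isolation (without any scraping), here is a minimal sketch that writes the full 15-column header from your question plus the one example row, using an in-memory `io.StringIO` buffer in place of a file. The values are copied from your sample output; swap the buffer for `open('funds.csv', 'w', newline='')` in real use.

```python
import csv
import io

# Full header matching the 15 data items collected for one fund
header = [
    'Fund growth', 'Category Growth', 'Current NAV', 'AUM', 'Expense Ratio',
    'Fund std dev', 'Category std dev', 'Fund beta', 'Category beta',
    'Fund Sharpe ratio', 'Category Sharpe ratio',
    "Fund Treynor's ratio", "Category Treynor's ratio",
    "Fund Jensen's Alpha", "Category Jensen's Alpha",
]

# Example row, values taken from the question's sample output
row = ['68.83%', '50.85%', '185.9414', '2704.36', '1.5%',
       '19.76', '17.95', '0.89', '0.93', '0.77', '0.72',
       '0.17', '0.14', '4.59', '2.32']

buf = io.StringIO()          # stands in for the CSV file
writer = csv.writer(buf)
writer.writerow(header)      # one header row
writer.writerow(row)         # one data row per fund
csv_text = buf.getvalue()
```

A quick sanity check worth keeping in your loop: `len(fund_data)` should equal `len(header)` before each `writerow`, otherwise the columns silently drift out of alignment.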