Below is a Selenium web scraper that loops through the different tabs of this site's leaderboard page (https://www.fangraphs.com/leaders.aspx?pos=all&stats=bat&lg=all&qual=y&type=8&season=2018&month=0&season1=2018&ind=0), clicks the "Export Data" button, downloads the data, adds a Yearid column, and then loads the data into a MySQL table.

import sys
import pandas as pd
import os
import time
from datetime import datetime
from selenium import webdriver
from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
from sqlalchemy import create_engine


button_text_to_url_type = {
    'dashboard': 8,
    'standard': 0,
    'advanced': 1,
    'batted_ball': 2,
    'win_probability': 3,
    'pitch_type': 4,
    'pitch_values': 7,
    'plate_discipline': 5,
    'value': 6
}

download_dir = os.getcwd()
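# Configure Firefox to save CSV downloads silently into the current working directory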
profile = FirefoxProfile("C:/Users/PATHTOFIREFOX")
profile.set_preference("browser.helperApps.neverAsk.saveToDisk", 'text/csv')
profile.set_preference("browser.download.manager.showWhenStarting", False)
profile.set_preference("browser.download.dir", download_dir)
profile.set_preference("browser.download.folderList", 2)
driver = webdriver.Firefox(firefox_profile=profile)


today = datetime.today()
for button_text, url_type in button_text_to_url_type.items():

    default_filepath = os.path.join(download_dir, 'Fangraphs Leaderboard.csv')
    desired_filepath = os.path.join(download_dir,
                                    '{}_{}_{}_Leaderboard_{}.csv'.format(today.year, today.month, today.day,
                                                                         button_text))

    driver.get(
        "https://www.fangraphs.com/leaders.aspx?pos=all&stats=bat&lg=all&qual=0&type={}&season=2018&month=0&season1=2018&ind=0&team=&rost=&age=&filter=&players=".format(
            url_type))
    driver.find_element_by_link_text('Export Data').click()
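    # Firefox saves the export under its default file name; rename it to the dated, per-tab file built above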
    if os.path.isfile(default_filepath):
        os.rename(default_filepath, desired_filepath)
        print('Renamed file {} to {}'.format(default_filepath, desired_filepath))
    else:
        sys.exit('Error, unable to locate file at {}'.format(default_filepath))

    df = pd.read_csv(desired_filepath)
    df["yearid"] = datetime.today().year
    df.to_csv(desired_filepath)

    engine = create_engine("mysql+pymysql://{user}:{pw}@localhost/{db}"
                           .format(user="walker",
                                   pw="password",
                                   db="data"))
    df.to_sql(con=engine, name='fg_test_hitting_{}'.format(button_text), if_exists='replace')

time.sleep(10)
driver.quit()


The scraper runs fine; however, when I download the data, some columns come through with a % sign after the number (e.g. 25%), which breaks my formatting in MySQL. When reading the scraped data into a pandas DataFrame, is it possible to change the columns that contain a % sign so that they show only the number? If so, how would I do this inside the loop I created to scrape the various tabs of the site? I would also like to exclude the first row of the data from that process, since that is the row where I keep the column names. Thanks in advance!

Best Answer

Once you have scraped everything and stored it in a pandas DataFrame, you can simply strip all of the % signs with replace:

df = df.replace('%', '', regex=True)
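If the goal is to end up with plain numbers in MySQL, a minimal sketch of how this could slot into the existing loop might look like the following. The regex=True flag strips the character from inside each cell rather than only matching cells that contain nothing but '%'; the 'BB%'/'K%' column names are only illustrative, and the to_numeric conversion and index=False are optional extras, not part of the original code:

df = pd.read_csv(desired_filepath)      # the first row becomes the header, so column names are left alone
df = df.replace('%', '', regex=True)    # remove the % character from every string cell
for col in ['BB%', 'K%']:               # illustrative column names; adjust to the tab being scraped
    if col in df.columns:
        df[col] = pd.to_numeric(df[col], errors='coerce')   # optionally store the values as numbers
df["yearid"] = datetime.today().year
df.to_csv(desired_filepath, index=False)    # index=False keeps the extra index column out of the saved CSV

Because pd.read_csv already uses the first row of the file as the column names, the replace call only ever touches the data rows, so the header does not need to be excluded explicitly.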

Regarding "python - Removing the % sign when using a Selenium scraper (Python)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50867356/
