I've written a script in Python in combination with Selenium to scrape some information from a webpage. To reach the content, it is necessary to click on the + sign next to each name in the bigger table. Once those + signs are clicked, all the tables related to each name show up. My script can do this flawlessly. However, the next step is to parse the data from those tables, and that is where I'm stuck. The table data do get parsed, but lots of blank rows come along with them everywhere.

How can I kick out those blank rows and go on parsing only the real table data?

link to that site

Here is my script:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

url = "replace with above link"

def get_info(driver, link):
    driver.get(link)
    # Click every expand (+) icon so the nested tables become visible
    for items in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table.tableagmark img[style^='cursor:']"))):
        items.location  # bare property access; has no effect here
        items.click()
        wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR, "table[style='font-size:16px;']")))
    fetch_table()

def fetch_table():
    # Collect the text of every cell in every row of the nested tables
    for items in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table[style='font-size:16px;'] tr"))):
        data = [item.text for item in items.find_elements_by_css_selector("td")]
        print(data)

if __name__ == '__main__':
    driver = webdriver.Chrome()
    wait = WebDriverWait(driver, 10)
    try:
        get_info(driver, url)
    finally:
        driver.quit()


This is what the output looks like (before and after the actual table content):

['', '']
['', '']
['', '']
['', '']
['', '']
['', '']
['', '']
['', '']
['', '']
['', '']
[]
['Achanta', 'Apr 16 2018 11:24AM']
['Addanki', 'Apr 13 2018 6:00PM']
['Adoni', 'Apr 18 2018 12:17PM']
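A generic way to drop these blank rows after the fact is to keep only the rows that have at least one non-blank cell. A minimal sketch, using sample rows shaped like the output above:

```python
# Rows as produced by fetch_table(): many are empty lists or
# lists containing only empty strings
rows = [
    ['', ''],
    ['', ''],
    [],
    ['Achanta', 'Apr 16 2018 11:24AM'],
    ['Addanki', 'Apr 13 2018 6:00PM'],
]

# Keep a row only if at least one of its cells has non-whitespace text
clean = [row for row in rows if any(cell.strip() for cell in row)]
print(clean)
# → [['Achanta', 'Apr 16 2018 11:24AM'], ['Addanki', 'Apr 13 2018 6:00PM']]
```

This post-filters the scraped data; the accepted answer below avoids producing the blank rows in the first place by selecting better.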

Best answer

You can skip the processing of empty text nodes by applying some sort of filter, and using the right selectors in the first place will save you a lot of time:

def get_info(driver, link):
    driver.get(link)
    # Target the plus icons directly by their image source
    for items in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "img[src='../images/plus.png']"))):
        items.click()
    fetch_table()

def fetch_table():
    # The XPath picks only rows of the nested tables that contain no header cells
    for items in wait.until(EC.presence_of_all_elements_located((By.XPATH, "//td/table//tr[not(th)]"))):
        data = [item.text for item in items.find_elements_by_css_selector("td")]
        print(data)

Regarding python - Unable to get rid of lots of blank rows in the output, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/51388221/

10-10 18:15