Is there a way to capture the columns of a table that spans several pages using Python?
Problem description
I am trying to get the tickers of ETFs from a table that spans over 46 pages:
My code is:
import bs4 as bs
import pickle
import requests

def save_ETF_tickers():
    resp = requests.get('http://etfdb.com/type/region/north-america/us/#etfs&sort_name=assets_under_management&sort_order=desc&page=1')
    soup = bs.BeautifulSoup(resp.text, "lxml")
    table = soup.find('table', {'class': 'table mm-mobile-table table-module2 table-default table-striped table-hover table-pagination'})
    tickers = []
    # Rows 1-25 are the data rows; row 0 is the header.
    for row in table.findAll('tr')[1:26]:
        ticker = row.findAll('td')[0].text
        tickers.append(ticker)
    # Persist the list so it can be reloaded without re-scraping.
    with open("ETFtickers.pickle", "wb") as f:
        pickle.dump(tickers, f)
    print(tickers)
    return tickers

save_ETF_tickers()
I know that this one only checks "page=1", but I couldn't figure out how to retrieve data from all 46 pages.
Any help is much appreciated.
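For reference, here is a minimal sketch of one way to loop the scraper above over every page. It assumes the listing accepts the page number as a real query parameter; the #etfs&...&page=1 fragment in the URL above is resolved client-side by the browser, so the exact endpoint is an assumption and may need adjusting:

import bs4 as bs
import pickle
import requests

def save_ETF_tickers_all(pages=46):
    # Hypothetical URL pattern: passing ?page=N to the server is an
    # assumption, not confirmed against the live site.
    base_url = ('http://etfdb.com/type/region/north-america/us/'
                '?sort_name=assets_under_management&sort_order=desc&page={}')
    tickers = []
    for page in range(1, pages + 1):
        resp = requests.get(base_url.format(page))
        soup = bs.BeautifulSoup(resp.text, 'lxml')
        table = soup.find('table')  # narrow with the class string above if needed
        if table is None:
            break  # no table on this page; stop early
        for row in table.findAll('tr')[1:]:
            cells = row.findAll('td')
            if cells:
                tickers.append(cells[0].text.strip())
    with open('ETFtickers.pickle', 'wb') as f:
        pickle.dump(tickers, f)
    return tickers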
Recommended answer
You could use the etfdb-api Node.js package: https://www.npmjs.com/package/etfdb-api
It gives you:
- Pagination
- Sorting by: YTD return, price, AUM, average volume, etc.
- Sort order: DESC | ASC
Here is a sample JSON response:
[
  {
    "symbol": {
      "type": "link",
      "text": "VIXM",
      "url": "/etf/VIXM/"
    },
    "name": {
      "type": "link",
      "text": "ProShares VIX Mid-Term Futures ETF",
      "url": "/etf/VIXM/"
    },
    "mobile_title": "VIXM - ProShares VIX Mid-Term Futures ETF",
    "price": "$26.47",
    "assets": "$48.21",
    "average_volume": "69,873",
    "ytd": "25.15%",
    "overall_rating": {
      "type": "restricted",
      "url": "/members/join/"
    },
    "asset_class": "Volatility"
  },
  {
    "symbol": {
      "type": "link",
      "text": "DGBP",
      "url": "/etf/DGBP/"
    },
    "name": {
      "type": "link",
      "text": "VelocityShares Daily 4x Long USD vs GBP ETN",
      "url": "/etf/DGBP/"
    },
    "mobile_title": "DGBP - VelocityShares Daily 4x Long USD vs GBP ETN",
    "price": "$30.62",
    "assets": "$4.85",
    "average_volume": "1,038",
    "ytd": "25.13%",
    "overall_rating": {
      "type": "restricted",
      "url": "/members/join/"
    },
    "asset_class": "Currency"
  }
]
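If you pull a response of this shape back into Python, extracting the ticker symbols is straightforward. A small sketch, assuming the records arrive as a JSON array like the excerpt above:

import json

# A trimmed-down stand-in for the sample response above.
sample = '''[
  {"symbol": {"type": "link", "text": "VIXM", "url": "/etf/VIXM/"}},
  {"symbol": {"type": "link", "text": "DGBP", "url": "/etf/DGBP/"}}
]'''

etfs = json.loads(sample)
tickers = [etf["symbol"]["text"] for etf in etfs]
print(tickers)  # ['VIXM', 'DGBP']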
Disclaimer: I'm the author. :)