This article explains how to call CNBC's back-end API from Python; it should be a useful reference for anyone facing the same problem.

Problem Description

As a follow-up to this question, how can I locate the XHR request used to retrieve the data from the back-end API on CNBC News, so that I can scrape this CNBC search query?

The end goal is to have a doc with: headline, date, full article and URL.

I found this: https://api.sail-personalize.com/v1/personalize/initialize?pageviews=1&isMobile=0&query=coronavirus&qsearchterm=coronavirus

It tells me I don't have access. Is there a way to access the information anyway?

Recommended Answer

Actually, my previous answer for you already addressed your question regarding the XHR request:

But here is the code based on the XHR request shown in the screenshot:

import requests

params = {
    "queryly_key": "31a35d40a9a64ab3",
    "query": "coronavirus",
    "endindex": "0",
    "batchsize": "100",
    "callback": "",
    "showfaceted": "true",
    "timezoneoffset": "-120",
    "facetedfields": "formats",
    "facetedkey": "formats|",
    "facetedvalue": "!Press Release|",
    "needtoptickers": "1",
    "additionalindexes": "4cd6f71fbf22424d,937d600b0d0d4e23,3bfbe40caee7443e,626fdfcd96444f28"
}

goal = ["cn:title", "_pubDate", "cn:liveURL", "description"]


def main(url):
    with requests.Session() as req:
        for page, item in enumerate(range(0, 1100, 100)):
            print(f"Extracting Page# {page + 1}")
            params["endindex"] = item  # advance the pagination offset by batchsize
            r = req.get(url, params=params).json()
            for loop in r['results']:
                print([loop[x] for x in goal])


main("https://api.queryly.com/cnbc/json.aspx")
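Not every result object returned by the API is guaranteed to carry all four fields; a defensive variant of the inner loop could use `dict.get` so a missing key yields `None` instead of raising a `KeyError`. This is a sketch on made-up sample data, assuming the same `goal` list as above:

```python
def extract_rows(results, goal):
    """Pull the goal fields from each result dict, substituting None for missing keys."""
    return [[item.get(field) for field in goal] for item in results]


goal = ["cn:title", "_pubDate", "cn:liveURL", "description"]

# Hypothetical sample: the second result lacks a "description" field.
sample = [
    {"cn:title": "Headline one", "_pubDate": "2020-04-01",
     "cn:liveURL": "https://www.cnbc.com/a", "description": "Summary"},
    {"cn:title": "Headline two", "_pubDate": "2020-04-02",
     "cn:liveURL": "https://www.cnbc.com/b"},
]

rows = extract_rows(sample, goal)
```

Swapping `loop[x]` for `loop.get(x)` in the script above has the same effect.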

Pandas DataFrame version:

import requests
import pandas as pd

params = {
    "queryly_key": "31a35d40a9a64ab3",
    "query": "coronavirus",
    "endindex": "0",
    "batchsize": "100",
    "callback": "",
    "showfaceted": "true",
    "timezoneoffset": "-120",
    "facetedfields": "formats",
    "facetedkey": "formats|",
    "facetedvalue": "!Press Release|",
    "needtoptickers": "1",
    "additionalindexes": "4cd6f71fbf22424d,937d600b0d0d4e23,3bfbe40caee7443e,626fdfcd96444f28"
}

goal = ["cn:title", "_pubDate", "cn:liveURL", "description"]


def main(url):
    with requests.Session() as req:
        allin = []
        for page, item in enumerate(range(0, 1100, 100)):
            print(f"Extracting Page# {page + 1}")
            params["endindex"] = item  # advance the pagination offset by batchsize
            r = req.get(url, params=params).json()
            for loop in r['results']:
                allin.append([loop[x] for x in goal])
        new = pd.DataFrame(
            allin, columns=["Title", "Date", "Url", "Description"])
        new.to_csv("data.csv", index=False)


main("https://api.queryly.com/cnbc/json.aspx")
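Once `data.csv` is written, the `Date` column can be parsed into real datetimes for sorting or filtering. A minimal sketch with pandas, using the same column names as the script above (the sample date string is hypothetical; the actual `_pubDate` format from the API may differ):

```python
import pandas as pd

# Tiny frame in the same shape the script writes to data.csv.
df = pd.DataFrame(
    [["Headline", "2020-04-01T10:30:00", "https://www.cnbc.com/a", "Summary"],
     ["Older",    "2020-03-15T08:00:00", "https://www.cnbc.com/b", "Summary"]],
    columns=["Title", "Date", "Url", "Description"],
)

# Parse the Date column and sort newest-first.
df["Date"] = pd.to_datetime(df["Date"])
df = df.sort_values("Date", ascending=False).reset_index(drop=True)
```

In the full pipeline one would read the CSV back with `pd.read_csv("data.csv")` first.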

Output: view online


11-03 05:58