Problem description
I am using the twitteR package in R to extract tweets based on their IDs, but I am unable to do this for multiple tweet IDs without hitting either a rate limit or a 404 error. This is because I am using showStatus(), which handles one tweet ID at a time. I am looking for a function similar to getStatuses() that accepts multiple tweet IDs per request.
Is there an efficient way to perform this action? I believe only 60 requests can be made in a 15-minute window using OAuth.
So, how do I ensure that:
1. Multiple tweet IDs are retrieved per request, repeating these requests as needed.
2. The rate limit stays in check.
3. Errors are handled for tweets that are not found.
P.S.: This activity is not user-based.
Thanks
Recommended answer
I have come across the same issue recently. For retrieving tweets in bulk, Twitter recommends using the lookup method provided by its API. That way you can get up to 100 tweets per request.
Unfortunately, this has not been implemented in the twitteR package yet, so I've tried to hack together a quick function (by re-using lots of code from the twitteR package) to use that API method:
lookupStatus <- function(ids, ...) {
  # Validate each ID, then query the statuses/lookup endpoint
  # in batches of up to 100 IDs per request.
  lapply(ids, twitteR:::check_id)
  batches <- split(ids, ceiling(seq_along(ids) / 100))
  results <- lapply(batches, function(batch) {
    params <- parseIDs(batch)
    statuses <- twitteR:::twInterfaceObj$doAPICall(
      paste("statuses", "lookup", sep = "/"),
      params = params, ...
    )
    twitteR:::import_statuses(statuses)
  })
  return(unlist(results))
}

parseIDs <- function(ids) {
  # Collapse a vector of IDs into the comma-separated "id" parameter
  # expected by the lookup endpoint.
  id_list <- list()
  if (length(ids) > 0) {
    id_list$id <- paste(ids, collapse = ",")
  }
  return(id_list)
}
Make sure that your vector of ids is of class character (otherwise there can be some problems with very large IDs).
Use the function like this:
ids <- c("432656548536401920", "332526548546401821")
tweets <- lookupStatus(ids, retryOnRateLimit=100)
Setting a high retryOnRateLimit ensures you get all your tweets, even if your vector of IDs has more than 18,000 entries (100 IDs per request, 180 requests per 15-minute window).
As usual, you can turn the tweets into a data frame with twListToDF(tweets).