Question
The table I am interested in is the Wikipedia table of Michelin-starred restaurants in NYC, and the number of stars awarded is indicated by pictures.
I was able to scrape the data using the rvest package. I scraped the table in two steps (first getting the text in the "Name" and "Borough" columns, then getting the alt tags in the table body), but I would like to know whether it can be done in one step.
Since the Wikipedia page can't be read by the XML::readHTMLTable function, I tried the htmltab package, with no luck, because I couldn't figure out the function needed for its bodyFun argument. Truth be told, I am a newbie to web scraping... and to functions.
Questions I referenced:
Here is my code:
library(stringr)
library(rvest)
library(data.table)
url <- "http://en.wikipedia.org/wiki/List_of_Michelin_starred_restaurants_in_New_York_City"
#Scrape the first two columns, restaurant name and borough
name.boro <- url %>% read_html() %>% html_nodes("table") %>% html_table(fill = TRUE)
name.boro <- as.data.table(name.boro[[1]])
#Drop every column after Name and Borough
name.boro[, 3:length(name.boro) := NULL]
135 * 13 #135 rows * 13 columns = 1,755 cells in the first table
#scrape tables for img alt
#note that because I used the "td" node, entries for all cells in all tables were pulled
stars <- url %>% read_html() %>% html_nodes("td") %>% html_node("img") %>% html_attr("alt")
stars
#Make a list of index vectors, one per column of the 13-column table
df <- vector("list", 13)
for (i in 1:13){
  df[[i]] <- seq(i, 1755, 13)
}
#Put everything together
Mich.Guide <- name.boro
Mich.Guide[, c("X2006", "X2007", "X2008", "X2009", "X2010", "X2011", "X2012", "X2013", "X2014", "X2015",
"X2016") := .(stars[unlist(df[3])], stars[unlist(df[4])], stars[unlist(df[5])],
stars[unlist(df[6])], stars[unlist(df[7])], stars[unlist(df[8])],
stars[unlist(df[9])], stars[unlist(df[10])], stars[unlist(df[11])],
stars[unlist(df[12])], stars[unlist(df[13])] )]
Thanks!
Answer
You can try the following:
require(rvest)
url <- "http://en.wikipedia.org/wiki/List_of_Michelin_starred_restaurants_in_New_York_City"
doc <- read_html(url)
#Header row: the <th> cells of the first table row
col_names <- doc %>% html_nodes("#mw-content-text > table > tr:nth-child(1) > th") %>% html_text()
#Body rows: every <tr> except the header
tbody <- doc %>% html_nodes("#mw-content-text > table > tr:not(:first-child)")
extract_tr <- function(tr){
  scope <- tr %>% html_children()
  #First two cells are plain text (Name, Borough); the rest hold star images
  c(scope[1:2] %>% html_text(),
    scope[3:length(scope)] %>% html_node("img") %>% html_attr("alt"))
}
#sapply returns a character matrix with one column per row, so transpose it
res <- tbody %>% sapply(extract_tr)
res <- as.data.frame(t(res), stringsAsFactors = FALSE)
colnames(res) <- col_names
Now you have the raw table. I leave parsing the columns to integers and cleaning up the column names to you.
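As a minimal sketch of that last step (the exact wording of the alt text, e.g. "1 Michelin star" vs. "2 Michelin stars", is an assumption not confirmed above), the star counts could be pulled out of the alt strings with a regex:

```r
#Sketch only: assumes each alt string starts with the star count as a digit,
#e.g. "1 Michelin star" or "2 Michelin stars"; cells without an image stay NA
to_stars <- function(x) as.integer(sub("^(\\d).*$", "\\1", x))

#Apply to every column after Name and Borough
res[, -(1:2)] <- lapply(res[, -(1:2), drop = FALSE], to_stars)
```

Non-matching strings become NA (with a coercion warning), so it is worth checking `unique()` of the raw alt text first.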