Question
I wanted to make a Java-based web crawler for an experiment. I heard that making a web crawler in Java was the way to go if this is your first time. However, I have two important questions.
1. How will my program 'visit' or 'connect' to web pages? Please give a brief explanation. (I understand the basics of the layers of abstraction from the hardware up to the software; here I am interested in the Java abstractions.)

2. What libraries should I use? I would assume I need a library for connecting to web pages, a library for the HTTP/HTTPS protocol, and a library for HTML parsing.
Recommended Answer
This is how your program can 'visit' or 'connect' to web pages:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.nio.charset.StandardCharsets;

try {
    URL url = new URL("http://stackoverflow.com/");
    // openStream() throws an IOException; try-with-resources closes the
    // reader (and the underlying stream) automatically.
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
    }
} catch (MalformedURLException mue) {
    mue.printStackTrace(); // the URL string was not well-formed
} catch (IOException ioe) {
    ioe.printStackTrace(); // connection or read failure
}
This will download the source of the HTML page.
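The snippet above uses the classic java.net.URL API. On Java 11 and newer, the built-in java.net.http.HttpClient also covers the HTTP/HTTPS part of your question without any third-party library; a minimal sketch (the class name is just a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchPage { // placeholder name
    public static void main(String[] args) throws Exception {
        // A client that follows redirects (e.g. http -> https)
        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NORMAL)
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://stackoverflow.com/"))
                .GET()
                .build();

        // Read the whole response body into a String
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}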
For HTML parsing, use a dedicated HTML parser library rather than picking the markup apart by hand.
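jsoup is one widely used Java HTML parser (my suggestion here, not necessarily the library the original answer linked to); a small sketch of fetching a page and pulling out its links:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ParseExample { // placeholder name
    public static void main(String[] args) throws Exception {
        // Fetch and parse the page in one step
        Document doc = Jsoup.connect("http://stackoverflow.com/").get();

        System.out.println("Title: " + doc.title());

        // Extract every link on the page -- these are what a crawler follows
        for (Element link : doc.select("a[href]")) {
            // "abs:href" resolves relative URLs against the page's base URL
            System.out.println(link.attr("abs:href"));
        }
    }
}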
Also check out the existing Java web crawler libraries.
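If you do want to build the crawler yourself, the core is just a queue of URLs to visit plus a set of URLs already seen; a rough sketch reusing jsoup from above (it ignores politeness rules such as robots.txt and crawl delays, which a real crawler must respect):

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class MiniCrawler { // placeholder name
    public static void main(String[] args) {
        Queue<String> frontier = new ArrayDeque<>(); // URLs waiting to be fetched
        Set<String> visited = new HashSet<>();       // URLs already attempted
        frontier.add("http://stackoverflow.com/");

        int maxPages = 10; // hard cap so the sketch terminates
        while (!frontier.isEmpty() && visited.size() < maxPages) {
            String url = frontier.poll();
            if (!visited.add(url)) continue; // skip URLs we have already seen

            try {
                Document doc = Jsoup.connect(url).get();
                System.out.println(visited.size() + ": " + doc.title() + " (" + url + ")");

                // Enqueue every absolute link found on the page
                for (Element link : doc.select("a[href]")) {
                    String next = link.attr("abs:href");
                    if (next.startsWith("http") && !visited.contains(next)) {
                        frontier.add(next);
                    }
                }
            } catch (Exception e) {
                // Skip pages that fail to download or parse
                System.err.println("Failed: " + url);
            }
        }
    }
}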