I've been playing around with Crawler4j and have successfully crawled some pages, but failed to crawl others. For example, I successfully crawled Reddit with this code:

public class Controller {
    public static void main(String[] args) throws Exception {
        String crawlStorageFolder = "//home/user/Documents/Misc/Crawler/test";
        int numberOfCrawlers = 1;

        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder(crawlStorageFolder);

        /*
         * Instantiate the controller for this crawl.
         */
        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        /*
         * For each crawl, you need to add some seed urls. These are the first
         * URLs that are fetched and then the crawler starts following links
         * which are found in these pages
         */
        controller.addSeed("https://www.reddit.com/r/movies");
        controller.addSeed("https://www.reddit.com/r/politics");


        /*
         * Start the crawl. This is a blocking operation, meaning that your code
         * will reach the line after this only when crawling is finished.
         */
        controller.start(MyCrawler.class, numberOfCrawlers);
    }


}

with:
@Override
public boolean shouldVisit(Page referringPage, WebURL url) {
    String href = url.getURL().toLowerCase();
    return !FILTERS.matcher(href).matches()
           && href.startsWith("https://www.reddit.com/");
}

in MyCrawler.java. However, when I try to crawl http://www.ratemyprofessors.com/, the program just hangs with no output and doesn't crawl anything. I use the following seeds in myController.java:
controller.addSeed("http://www.ratemyprofessors.com/campusRatings.jsp?sid=1222");
controller.addSeed("http://www.ratemyprofessors.com/ShowRatings.jsp?tid=136044");

And in MyCrawler.java:
@Override
public boolean shouldVisit(Page referringPage, WebURL url) {
    String href = url.getURL().toLowerCase();
    return !FILTERS.matcher(href).matches()
           && href.startsWith("http://www.ratemyprofessors.com/");
}

So I'm wondering:
  • Can some servers recognize crawlers immediately and prevent them from collecting data?
  • I noticed the RateMyProfessors pages are .jsp; could that have something to do with it?
  • Is there any way to debug this better? The console doesn't output anything.
Best answer

crawler4j respects crawler politeness, such as robots.txt. In your case, that file is http://www.ratemyprofessors.com/robots.txt.

Inspecting this file reveals that it disallows crawling your given seed points:

     Disallow: /ShowRatings.jsp
     Disallow: /campusRatings.jsp
    
The crawler4j log output supports this theory:
    2015-12-15 19:47:18,791 WARN  [main] CrawlController (430): Robots.txt does not allow this seed: http://www.ratemyprofessors.com/campusRatings.jsp?sid=1222
    2015-12-15 19:47:18,793 WARN  [main] CrawlController (430): Robots.txt does not allow this seed: http://www.ratemyprofessors.com/ShowRatings.jsp?tid=136044
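If you decide to crawl those pages anyway (and accept the site's terms), crawler4j can be told to skip robots.txt handling by calling RobotstxtConfig.setEnabled(false) before constructing the RobotstxtServer. Independently, you can pre-check seeds against a site's Disallow rules yourself; a minimal sketch of such a check, assuming simple prefix-only matching (real robots.txt matching also honors Allow rules and wildcards):

```java
import java.util.List;

public class RobotsCheck {
    // Returns true if the URL path is blocked by any Disallow prefix rule.
    // Simplified: ignores Allow rules, wildcards, and user-agent sections.
    static boolean isDisallowed(String path, List<String> disallowPrefixes) {
        for (String prefix : disallowPrefixes) {
            if (!prefix.isEmpty() && path.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // The two rules quoted from ratemyprofessors.com's robots.txt above.
        List<String> rules = List.of("/ShowRatings.jsp", "/campusRatings.jsp");
        System.out.println(isDisallowed("/campusRatings.jsp?sid=1222", rules));  // true
        System.out.println(isDisallowed("/about", rules));                       // false
    }
}
```

Running a check like this before addSeed() would have surfaced the problem without digging through the logs.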
    
