Open Source Web Crawlers in Java

Heritrix
Heritrix is the Internet Archive’s open-source, extensible, web-scale, archival-quality web crawler project.
WebSPHINX
WebSPHINX (Website-Specific Processors for HTML INformation eXtraction) is a Java class library and interactive development environment for Web crawlers that browse and process Web pages automatically.
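WebSPHINX follows a subclass-and-override pattern: you extend its Crawler class and implement callbacks that decide which links to follow and what to do with each fetched page. A minimal sketch, assuming the classic websphinx.Crawler API (Crawler, Link, and Page; example.com is a placeholder host), so verify the signatures against the version you download:

```java
import java.net.MalformedURLException;

import websphinx.Crawler;
import websphinx.Link;
import websphinx.Page;

// Sketch of WebSPHINX's subclass-and-override pattern.
public class TitleCrawler extends Crawler {

    @Override
    public boolean shouldVisit(Link link) {
        // Illustrative policy: stay on the seed host.
        return "example.com".equals(link.getURL().getHost());
    }

    @Override
    public void visit(Page page) {
        // Called for each fetched page.
        System.out.println(page.getURL() + " : " + page.getTitle());
    }

    public static void main(String[] args) throws MalformedURLException {
        TitleCrawler crawler = new TitleCrawler();
        crawler.setRoot(new Link("http://example.com/"));
        crawler.run(); // crawl synchronously from the root link
    }
}
```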
JSpider
A highly configurable and customizable Web Spider engine, developed under the LGPL Open Source license in 100% pure Java.
Web-Harvest
Web-Harvest is an open source Web data extraction tool written in Java. It offers a way to collect desired Web pages and extract useful data from them. To do that, it leverages well-established techniques and technologies for text/XML manipulation such as XSLT, XQuery, and regular expressions. Web-Harvest mainly focuses on HTML/XML-based web sites, which still make up the vast majority of Web content. It can also be easily supplemented by custom Java libraries to augment its extraction capabilities.
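In practice the extraction pipeline (HTTP fetches, XPath/XQuery queries, regular expressions) is declared in an XML configuration file, and Java code only loads and runs it. A minimal sketch, assuming the Web-Harvest 1.x API; config.xml and the working directory are placeholders:

```java
import org.webharvest.definition.ScraperConfiguration;
import org.webharvest.runtime.Scraper;

// Hedged sketch: the scraping logic lives in config.xml; this code
// just loads that configuration and executes its processor chain.
public class HarvestRunner {
    public static void main(String[] args) throws Exception {
        ScraperConfiguration config = new ScraperConfiguration("config.xml");
        Scraper scraper = new Scraper(config, "/tmp/webharvest");
        scraper.execute(); // run the processors defined in config.xml
    }
}
```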
WebEater
A 100% pure Java program for web site retrieval and offline viewing.
Bixo
Bixo is an open source web mining toolkit that runs as a series of Cascading pipes on top of Hadoop. By building a customized Cascading pipe assembly, you can quickly create specialized web mining applications that are optimized for a particular use case.
Java Web Crawler
Java Web Crawler is a simple Web crawling utility written in Java. It supports the robots exclusion standard.
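Supporting the robots exclusion standard means fetching /robots.txt from each host and honouring its rules before requesting pages. The sketch below is purely illustrative (it is not Java Web Crawler's own API) and handles only Disallow lines in the wildcard user-agent group; real parsers also handle Allow, Crawl-delay, and per-agent groups:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

// Illustrative robots.txt check: collect Disallow paths for "User-agent: *".
public class RobotsCheck {

    static List<String> disallowed(String host) throws Exception {
        List<String> rules = new ArrayList<>();
        URL robots = new URL("http://" + host + "/robots.txt");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(robots.openStream()))) {
            boolean wildcardGroup = false;
            String line;
            while ((line = in.readLine()) != null) {
                line = line.trim();
                if (line.toLowerCase().startsWith("user-agent:")) {
                    // Track whether we are inside the "*" group.
                    wildcardGroup = line.substring(11).trim().equals("*");
                } else if (wildcardGroup
                        && line.toLowerCase().startsWith("disallow:")) {
                    String path = line.substring(9).trim();
                    if (!path.isEmpty()) rules.add(path);
                }
            }
        }
        return rules;
    }

    public static void main(String[] args) throws Exception {
        for (String path : disallowed("example.com")) {
            System.out.println("Disallow: " + path);
        }
    }
}
```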
WebLech
WebLech is a fully featured web site download/mirror tool written in Java. It supports many of the features required to download websites and emulates standard web-browser behaviour as much as possible. WebLech is multithreaded, and a GUI console is planned.
Arachnid
Arachnid is a Java-based web spider framework. It includes a simple HTML parser object that parses an input stream containing HTML content. Simple Web spiders can be created by subclassing Arachnid and adding a few lines of code that are called after each page of a Web site is parsed.
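A sketch of that subclass pattern follows. The handleLink callback, the PageInfo accessors, and the traverse() entry point are assumptions drawn from the project's example spider rather than verified signatures, and Arachnid declares further abstract callbacks (bad links, non-HTML links, external links) that a real subclass must also implement:

```java
import java.net.MalformedURLException;

// Hedged sketch of an Arachnid subclass; method names are assumptions,
// so check them against the abstract methods in the Arachnid source.
public class PrintSpider extends Arachnid {

    public PrintSpider(String base) throws MalformedURLException {
        super(base); // base URL of the site to spider
    }

    @Override
    protected void handleLink(PageInfo page) {
        // Called after each page of the site is parsed.
        System.out.println(page.getUrl() + " -> " + page.getTitle());
    }

    public static void main(String[] args) throws MalformedURLException {
        new PrintSpider("http://example.com/").traverse();
    }
}
```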
JoBo
JoBo is a simple program for downloading complete websites to your local computer; internally it is basically a web spider. Its main advantage over other download tools is that it can automatically fill out forms (e.g. for automated login) and use cookies for session handling. Compared to other products the GUI seems very simple, but the internal features matter: few download tools can log in to a web server and download content when that server uses web forms for login and cookies for session handling. It also features very flexible rules for limiting downloads by URL, size, and/or MIME type.
Crawler4j
Crawler4j is a Java library which provides a simple interface for crawling the web. Using it, you can set up a multi-threaded web crawler in 5 minutes! It is also very efficient: it has been able to download and parse 200 pages per second on a quad-core PC with a cable connection.
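A minimal sketch following crawler4j's quick-start: a WebCrawler subclass supplies the shouldVisit/visit callbacks, and a CrawlController wires up the page fetcher, robots.txt handling, seeds, and worker threads. Signatures vary slightly across releases (in older versions shouldVisit takes only the WebURL), and the storage folder and seed URL are placeholders:

```java
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {

    @Override
    public boolean shouldVisit(Page referringPage, WebURL url) {
        // Illustrative policy: stay within the seed site.
        return url.getURL().startsWith("http://example.com/");
    }

    @Override
    public void visit(Page page) {
        System.out.println("Visited: " + page.getWebURL().getURL());
    }

    public static void main(String[] args) throws Exception {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/tmp/crawler4j"); // intermediate data
        PageFetcher fetcher = new PageFetcher(config);
        RobotstxtServer robots = new RobotstxtServer(new RobotstxtConfig(), fetcher);
        CrawlController controller = new CrawlController(config, fetcher, robots);
        controller.addSeed("http://example.com/");
        controller.start(MyCrawler.class, 4); // 4 crawler threads
    }
}
```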
Ex-Crawler
Ex-Crawler is divided into three subprojects. The Ex-Crawler server daemon is a highly configurable, flexible (Web) crawler with distributed grid/volunteer computing features, written in Java. Crawled information is stored in a MySQL, MSSQL, or PostgreSQL database. It supports plugins through multiple plugin interfaces and comes with its own socket server, through which you can configure it, add URLs, and much more, including user accounts and user levels that are shared with the web front-end search engine. With the Ex-Crawler distributed crawling graphical client, other people and computers can crawl and analyse websites, images, and more for the crawler. The third part of the project is the web front-end search engine.

Author: Gilbert Tan TS

IT expert with more than 20 years of experience in multiple OSes, security, data and the Internet; interests include AI, Big Data, the Internet and multimedia. Also an experienced real estate agent, insurance agent and futures trader. I am capable of finding any answers in the world you want, as long as there are reports available online for me to do my own research, to bring you closest to all the unsolved mysteries in this world, because I can find all the paths to the Truth and what the Future holds. All I need is to observe, test and probe to research anything I want; what takes you months to achieve takes me only a few hours.
