
pyspider

A Powerful Spider (Web Crawler) System in Python. TRY IT NOW!

  • Write scripts in Python
  • Powerful WebUI with script editor, task monitor, project manager and result viewer
  • MySQL, MongoDB, Redis, SQLite, Elasticsearch; PostgreSQL with SQLAlchemy as database backend
  • RabbitMQ, Redis and Kombu as message queue
  • Task priority, retry, periodical crawling, re-crawl by age, etc.
  • Distributed architecture, JavaScript page crawling, Python 2.{6,7} and 3.{3,4,5,6} support, etc.

Tutorial: http://docs.pyspider.org/en/latest/tutorial/
Documentation: http://docs.pyspider.org/
Release notes: https://github.com/binux/pyspider/releases

Sample Code

from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {
    }

    @every(minutes=24 * 60)  # run on_start once a day
    def on_start(self):
        self.crawl('http://scrapy.org/', callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)  # treat a fetched page as fresh for 10 days
    def index_page(self, response):
        # follow every absolute link found on the page
        for each in response.doc('a[href^="http"]').items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        # the returned dict is stored as the result for this task
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
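Under the hood, `response.doc` wraps the fetched page in a PyQuery object, so the CSS selector `a[href^="http"]` in `index_page` matches only absolute links. As a rough illustration of what that selection does, here is a standard-library-only sketch (the `AbsoluteLinkParser` class is a hypothetical helper for illustration, not part of pyspider):

```python
from html.parser import HTMLParser


class AbsoluteLinkParser(HTMLParser):
    """Collect href values that start with "http",
    mimicking the a[href^="http"] selector above."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("http"):
                self.links.append(href)


parser = AbsoluteLinkParser()
parser.feed('<a href="http://scrapy.org/">Scrapy</a> <a href="/local">skip me</a>')
print(parser.links)  # relative links like /local are filtered out
```

In the real handler each collected link would be passed back to `self.crawl()` to schedule a new task.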

Demo

Installation

WARNING: By default, the WebUI is open to the public and can be used to execute arbitrary commands, which may harm your system. Please run it on an internal network or enable need-auth for the WebUI.
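One way to enable authentication is through the JSON config file (see `config_example.json` in this repo). A minimal sketch, assuming the documented `webui` options; the username and password values below are placeholders:

```json
{
  "webui": {
    "need-auth": true,
    "username": "admin",
    "password": "change-me"
  }
}
```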

Quickstart: http://docs.pyspider.org/en/latest/Quickstart/

Contribute

TODO

v0.4.0

  • A visual scraping interface like Portia

License

Licensed under the Apache License, Version 2.0