
pyspider

A Powerful Spider (Web Crawler) System in Python.

  • Write scripts in Python
  • Powerful WebUI with script editor, task monitor, project manager, and result viewer
  • MySQL, MongoDB, Redis, SQLite, Elasticsearch, and PostgreSQL (with SQLAlchemy) as database backends
  • RabbitMQ, Redis, and Kombu as message queues
  • Task priority, retry, periodic crawling, recrawl by age, and more
  • Distributed architecture, crawling of JavaScript pages, Python 2.{6,7} and 3.{3,4,5,6} support, and more

Tutorial: http://docs.pyspider.org/en/latest/tutorial/
Documentation: http://docs.pyspider.org/
Release notes: https://github.com/binux/pyspider/releases

Sample Code

from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {
    }

    @every(minutes=24 * 60)  # run on_start once a day
    def on_start(self):
        self.crawl('http://scrapy.org/', callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)  # treat a crawled page as fresh for 10 days
    def index_page(self, response):
        # follow every absolute link found on the page
        for each in response.doc('a[href^="http"]').items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        # the returned dict is stored as the result of this task
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
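The selector a[href^="http"] in index_page keeps only links whose href begins with "http", i.e. absolute http/https URLs, skipping relative paths and mailto: links. A plain-Python sketch of that same prefix filter (the sample hrefs below are made up for illustration):

```python
# Hypothetical hrefs scraped from a page; only absolute http(s) URLs
# should survive, mirroring the CSS attribute-prefix selector.
links = [
    "http://scrapy.org/download/",
    "https://docs.scrapy.org/",
    "/community/",
    "mailto:info@example.com",
]
absolute = [href for href in links if href.startswith("http")]
print(absolute)  # the two absolute URLs only
```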

Installation

WARNING: By default, the WebUI is open to the public and can be used to execute arbitrary commands, which may harm your system. Run it only on an internal network, or enable need-auth for the WebUI.
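One way to enable authentication is through the configuration file; a minimal sketch in the style of the repository's config_example.json (the key names are assumed to mirror the CLI options, and admin/change-me are placeholder credentials):

```json
{
  "webui": {
    "username": "admin",
    "password": "change-me",
    "need-auth": true
  }
}
```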

Quickstart: http://docs.pyspider.org/en/latest/Quickstart/

Contribute

TODO

v0.4.0

  • A visual scraping interface like Portia

License

Licensed under the Apache License, Version 2.0