
pyspider

A Powerful Spider (Web Crawler) System in Python. TRY IT NOW!

  • Write scripts in Python
  • Powerful WebUI with script editor, task monitor, project manager and result viewer
  • MySQL, MongoDB, Redis, SQLite and PostgreSQL (via SQLAlchemy) as database backends
  • RabbitMQ, Beanstalk, Redis and Kombu as message queues
  • Task priority, retry, periodic crawling, recrawl by age, etc.
  • Distributed architecture, crawling of JavaScript pages, Python 2 & 3, etc.
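Task priority boils down to the scheduler serving more urgent tasks first, which can be pictured as a priority queue. A minimal stdlib sketch of the idea (the `Task` class and `pop_next` helper are illustrative, not pyspider's actual API):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    sort_key: int                  # smaller value pops first in a heapq
    url: str = field(compare=False)

def pop_next(queue):
    """Return the URL of the most urgent pending task."""
    return heapq.heappop(queue).url

queue = []
# Negate the priority so that higher-priority tasks pop first.
heapq.heappush(queue, Task(sort_key=-1, url="http://example.com/normal"))
heapq.heappush(queue, Task(sort_key=-5, url="http://example.com/urgent"))
```

After both pushes, `pop_next(queue)` yields the urgent URL before the normal one.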

Tutorial: http://docs.pyspider.org/en/latest/tutorial/
Documentation: http://docs.pyspider.org/
Release notes: https://github.com/binux/pyspider/releases

Sample Code

```python
from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {
    }

    @every(minutes=24 * 60)
    def on_start(self):
        self.crawl('http://scrapy.org/', callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        for each in response.doc('a[href^="http"]').items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
```
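In `index_page`, `response.doc('a[href^="http"]')` is a PyQuery/CSS selection of every link whose `href` starts with `http`. As a rough stdlib analogy (not pyspider's implementation; the `LinkExtractor` name is illustrative), the same extraction can be sketched with `html.parser`:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values starting with 'http', like a[href^="http"]."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("http"):
                self.links.append(href)

parser = LinkExtractor()
parser.feed('<a href="http://scrapy.org/">Scrapy</a> <a href="/local">skip</a>')
```

Only the absolute link survives in `parser.links`; the relative `/local` link is skipped, just as the CSS selector in the sample above skips it.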

Demo

Installation

Quickstart: http://docs.pyspider.org/en/latest/Quickstart/
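Per the Quickstart docs, pyspider can run all components in one process. A minimal setup sketch (assumes Python and pip are installed; crawling JavaScript pages additionally requires PhantomJS):

```shell
pip install pyspider   # install from PyPI
pyspider               # start all components; WebUI at http://localhost:5000/
```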

Contribute

TODO

v0.4.0

  • local mode, load script from file.
  • works as a framework (all components running in one process, no threads)
  • redis
  • shell mode like scrapy shell
  • a visual scraping interface like portia

more

License

Licensed under the Apache License, Version 2.0