
pyspider

A Powerful Spider (Web Crawler) System in Python. TRY IT NOW!

Tutorial: http://docs.pyspider.org/en/latest/tutorial/
Documentation: http://docs.pyspider.org/
Release notes: https://github.com/binux/pyspider/releases

Sample Code

from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {
    }

    @every(minutes=24 * 60)  # run on_start once a day
    def on_start(self):
        self.crawl('http://scrapy.org/', callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)  # treat a fetched page as fresh for 10 days
    def index_page(self, response):
        # follow every absolute http(s) link found on the page
        for each in response.doc('a[href^="http"]').items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        # the returned dict is saved as the result of this task
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
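In the handler above, `response.doc('a[href^="http"]')` is a jQuery-style selector (pyspider wraps pages with PyQuery) that matches anchor tags whose href starts with "http". As a rough illustration of what that selection does, here is a standard-library sketch that extracts the same kind of absolute links with `html.parser` (the class name and sample HTML are made up for this example):

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect href values of absolute http(s) links, similar in spirit
    to response.doc('a[href^="http"]') in the handler above."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("http"):
                self.links.append(href)


parser = LinkExtractor()
parser.feed('<a href="http://example.com/a">A</a><a href="/relative">B</a>')
print(parser.links)  # ['http://example.com/a']
```

Relative links like `/relative` are skipped, matching the `^="http"` prefix filter in the selector.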

Demo

Installation

WARNING: The WebUI is open to the public by default; it can be used to execute arbitrary commands, which may harm your system. Run it on an internal network only, or enable need-auth for the WebUI.

Quickstart: http://docs.pyspider.org/en/latest/Quickstart/
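Following the Quickstart docs, pyspider installs from PyPI and starts all components with a single command (the WebUI listens on port 5000 by default):

```shell
# install pyspider from PyPI
pip install pyspider

# start all components (scheduler, fetcher, processor, webui)
pyspider

# then open http://localhost:5000/ in a browser to write and run scripts
```

See the Quickstart page above for running components separately or in a distributed deployment.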

Contribute

TODO

v0.4.0

  • a visual scraping interface like Portia

License

Licensed under the Apache License, Version 2.0