pyspider

A Powerful Spider(Web Crawler) System in Python.

Tutorial: http://docs.pyspider.org/en/latest/tutorial/
Documentation: http://docs.pyspider.org/
Release notes: https://github.com/binux/pyspider/releases

Sample Code

from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {
    }

    @every(minutes=24 * 60)
    def on_start(self):
        self.crawl('http://scrapy.org/', callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        for each in response.doc('a[href^="http"]').items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
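In `index_page`, `response.doc` is a PyQuery object, so `a[href^="http"]` selects every anchor whose `href` begins with `http`. Outside pyspider, the same selection can be sketched with the standard library alone (an illustration of the selector's semantics, not pyspider's actual implementation):

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect href values of <a> tags that start with 'http',
    mirroring the CSS selector a[href^="http"] used in index_page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("http"):
                self.links.append(href)


parser = LinkExtractor()
parser.feed('<a href="http://scrapy.org/">Scrapy</a> <a href="/local">skip</a>')
print(parser.links)  # → ['http://scrapy.org/']
```

Each matched link would then be scheduled via `self.crawl(...)` with `detail_page` as the callback.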

Installation

WARNING: The WebUI is open to the public by default and can be used to execute arbitrary commands, which may harm your system. Run it only on an internal network, or enable `need-auth` for the WebUI.
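Authentication can be enabled through the JSON config file (a minimal sketch; the key names below mirror the WebUI command-line options `--need-auth`, `--username`, and `--password`, and the credentials are placeholders):

```json
{
    "webui": {
        "need-auth": true,
        "username": "admin",
        "password": "change-me"
    }
}
```

See `config_example.json` in the repository for a fuller configuration.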

Quickstart: http://docs.pyspider.org/en/latest/Quickstart/

Contribute

TODO

v0.4.0

  • a visual scraping interface like portia

License

Licensed under the Apache License, Version 2.0