# crawling-scale-up

Repository with the final code for the blog post *Mastering Web Scraping in Python: Scaling to Distributed Crawling*.

## Installation

You will need Redis and Python 3 installed. After that, install the necessary Python libraries with pip, then let Playwright download its browser binaries:

```bash
pip install requests beautifulsoup4 playwright "celery[redis]"
npx playwright install
```

## Execute

Configure the Redis connection in the repo file and the Celery broker in the tasks file (`tasks.py`).
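
As a reference, a minimal Celery-over-Redis setup might look like the sketch below. The broker URL is an assumption (a local Redis on its default port); the real values belong in the repo's own configuration:

```python
# tasks.py — minimal sketch, assuming Redis runs locally on the default port.
from celery import Celery

app = Celery(
    "tasks",
    broker="redis://127.0.0.1:6379/0",  # assumed local Redis instance
)
```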

Start a Celery worker, then run the main script, which will start queueing pages to crawl:

```bash
celery -A tasks worker
python3 main.py
```
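
For orientation, a stripped-down version of the two sides might look like this; the task name `crawl` and the seed URL are illustrative, not taken from the repo:

```python
# Sketch of a worker task (it would live in tasks.py) and how main.py
# would enqueue pages through Redis.
import requests
from bs4 import BeautifulSoup

from tasks import app  # the Celery app from the sketch above


@app.task
def crawl(url):
    """Illustrative task: fetch a page and return the links found on it."""
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)]


# In main.py, queueing a page is a single call; the worker started above
# picks it up via the Redis broker:
# crawl.delay("https://example.com")
```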

## Contributing

Pull requests are welcome. For significant changes, please open an issue first to discuss what you would like to change.

## License

MIT