# BigBanyanTree


BigBanyanTree is an initiative to empower engineering colleges to set up their own data engineering clusters and to drive interest in data processing and analysis using tools such as Apache Spark.

This project was made in collaboration with Suchit under the guidance of Mr. Harsh Singhal.

The endeavour comprised three main steps:

- Set up a dedicated Apache Spark cluster, along with a JupyterLab interface, to run Spark jobs.
- Parse a random 1% sample of the Common Crawl data dumps spanning the years 2018 to 2024, extracting various attributes.
- Perform various analyses on the extracted datasets and open-source our findings.

Check out the open-sourced Hugging Face datasets we created at [huggingface.co/big-banyan-tree](https://huggingface.co/big-banyan-tree).
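
Datasets on the Hub can be pulled directly with the `datasets` library. A minimal sketch follows; the dataset identifier below is a hypothetical placeholder, so check the org page for the actual names.

```python
from datasets import load_dataset

# "big-banyan-tree/example-dataset" is a hypothetical identifier;
# see https://huggingface.co/big-banyan-tree for the real dataset names.
ds = load_dataset("big-banyan-tree/example-dataset", split="train")
print(ds[0])
```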

## Apache Cluster Setup

We first set up an Apache Spark cluster in standalone mode on a dedicated Hetzner server. Docker and Docker Compose kept the entire server setup simple, straightforward, and reproducible.
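
As a rough illustration of how a JupyterLab notebook talks to such a cluster, here is a minimal sketch. The hostname `spark-master` and port `7077` are assumed Docker Compose service values, not necessarily our actual configuration.

```python
from pyspark.sql import SparkSession

# Attach to the standalone master. "spark-master:7077" is an assumed
# Compose service name and the default standalone master port.
spark = (
    SparkSession.builder
    .appName("bbt-smoke-test")
    .master("spark://spark-master:7077")
    .getOrCreate()
)

# Quick sanity check that the cluster actually executes jobs.
print(spark.range(1_000_000).count())
```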

For a more in-depth understanding of our Apache Spark cluster setup, check out the following resources:

## CommonCrawl Data Processing

Common Crawl releases data dumps every few months containing the raw HTML source of webpages from across the Internet, and open-sources this data in archival file formats such as WARC (Web ARChive).
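
For a sense of what working with this format looks like, here is a minimal sketch that iterates over one WARC file with the `warcio` library. The filename is a placeholder, and this is not necessarily how our pipeline reads the dumps.

```python
from warcio.archiveiterator import ArchiveIterator

# Placeholder filename; the WARC paths for each Common Crawl dump are
# published in that dump's warc.paths.gz index.
with open("example.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == "response":
            url = record.rec_headers.get_header("WARC-Target-URI")
            html = record.content_stream().read()  # raw HTTP response body
            print(url, len(html))
```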

Under the BigBanyanTree project, we undertook two main data processing tasks (a sketch of each follows the list):

- Extracting the JavaScript libraries a webpage loads from the `src` attributes of its HTML `script` tags, among other fields.
- Enriching server IPs with geolocation data using the MaxMind database.
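
A minimal sketch of the first task, using BeautifulSoup; our actual extraction logic and output schema may differ.

```python
from bs4 import BeautifulSoup

def extract_script_srcs(html: str) -> list[str]:
    """Return the src attribute of every <script> tag in the page."""
    soup = BeautifulSoup(html, "html.parser")
    return [tag["src"] for tag in soup.find_all("script", src=True)]

print(extract_script_srcs('<script src="https://cdn.example.com/jquery.min.js"></script>'))
```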
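
And a sketch of the second task, using MaxMind's official `geoip2` reader against a locally downloaded City database; the `.mmdb` path and the IP are placeholders.

```python
import geoip2.database

# Placeholder path to a locally downloaded GeoLite2/GeoIP2 City database.
with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    response = reader.city("128.101.101.101")  # example IP
    print(response.country.iso_code, response.city.name,
          response.location.latitude, response.location.longitude)
```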

For a deep dive into both of these topics, check out our blogs:

## Extracted Data Analysis

TODO