cookandy/cloudflare-elk

This project allows you to quickly analyze logs from your Cloudflare domains using the ELK stack.

This project is similar to Cloudflare's Elasticsearch log integration, but is small and easy enough to run on your local machine.

[screenshot]

Prerequisites

  1. An enterprise Cloudflare account (required to use the log API)
  2. Your API email address and key (found on your Cloudflare profile page)
  3. Docker and Docker Compose
  4. On Linux, you may need to run sysctl -w vm.max_map_count=262144 before starting the stack (see here for more info; an example is shown after this list)
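
On most Linux hosts the setting can be applied immediately and made persistent across reboots roughly as follows (the sysctl.d file name below is just an example):

    # apply immediately (lost on reboot)
    sudo sysctl -w vm.max_map_count=262144

    # persist across reboots
    echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
    sudo sysctl --system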

Quick Start

  1. Clone this project

    git clone https://github.com/cookandy/cloudflare-elk.git

  2. From the cloudflare-elk directory, edit docker-compose.yml and set the following required fields (an example snippet is shown after these steps):

    • CF_EMAIL: your Cloudflare email address
    • CF_API_KEY: your Cloudflare API key
    • CF_ZONES: a comma-separated list of Cloudflare zone IDs to retrieve logs from (found on your domain's page)
    • CF_FIELDS: a comma-separated list of fields to be retrieved for your logs (see all available fields here)
  3. Run docker-compose up -d to start the container

  4. Wait a minute or two for everything to start up, and then create the geopoint data and import the dashboards by running this command:

    docker exec cf-elk /scripts/import-dashboard.sh

  5. Go to http://localhost:5601 and view your Cloudflare logs
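
As a reference for step 2, the environment block might look roughly like the following. The values are placeholders, the cf-elk service name simply mirrors the container name used above, and the field list is only an example of valid Cloudflare log field names; check docker-compose.yml in this repo for the exact layout and defaults.

    services:
      cf-elk:
        environment:
          # Cloudflare API credentials (placeholders)
          - CF_EMAIL=you@example.com
          - CF_API_KEY=0123456789abcdef0123456789abcdef
          # comma-separated zone IDs and log fields to pull
          - CF_ZONES=zoneid1,zoneid2
          - CF_FIELDS=ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestURI,EdgeResponseStatus,EdgeStartTimestamp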

Details

This container is built on top of the sebp/elk project, with some additional startup scripts. The startup script in this project does the following:

  • Sets system variables
  • Updates the cron schedule for fetching logs and cleaning old indices
  • Loads the cron schedule
  • Downloads the GeoLite DB
  • Runs the original ELK start script

The container takes a couple of minutes to fully start Elasticsearch, Logstash, and Kibana. After the ELK server has started, you can run /scripts/import-dashboard.sh from within the container to set up the ES geohash and import the saved objects. If the import is successful, you'll see

{"acknowledged":true}{"success":true,"successCount":16}

Because the Cloudflare logging API requires the end time to be at least 1 minute in the past, logs will always be delayed by at least 1 minute.
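
For illustration, a raw request against the log API looks roughly like this; the fetch script in this image builds a similar call, but the exact parameters it passes are not shown here (GNU date syntax and a single illustrative CF_ZONE variable are assumed):

    # the end time must be at least one minute in the past
    START=$(date -u -d '6 minutes ago' +%Y-%m-%dT%H:%M:%SZ)
    END=$(date -u -d '1 minute ago' +%Y-%m-%dT%H:%M:%SZ)

    curl -s \
      -H "X-Auth-Email: $CF_EMAIL" \
      -H "X-Auth-Key: $CF_API_KEY" \
      "https://api.cloudflare.com/client/v4/zones/$CF_ZONE/logs/received?start=$START&end=$END&fields=$CF_FIELDS"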

Scheduled times

There are two environment variables that control how often the scripts run; both are expressed in cron syntax:

  • CF_LOGS_FETCH_SCHEDULE: how often to fetch logs from the Cloudflare API. The default is every 5 minutes.
  • ES_CLEAN_INDICES_SCHEDULE: how often to run the clean-indices script. The default is once per day. The cleanup script also uses ES_INDEX_RETENTION_DAYS to determine how many days' worth of indices to keep.
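
For example, to fetch logs every 10 minutes, clean indices at 03:00 each day, and keep a week of data, the environment section could be extended like this (values are illustrative; if you lengthen the fetch interval, keep CF_LOGS_FETCH_MIN from the next section in step with it so no log window is skipped):

    environment:
      # fetch logs every 10 minutes
      - CF_LOGS_FETCH_SCHEDULE=*/10 * * * *
      # clean old indices daily at 03:00
      - ES_CLEAN_INDICES_SCHEDULE=0 3 * * *
      # keep 7 days of indices
      - ES_INDEX_RETENTION_DAYS=7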

Fetching logs

The environment variable CF_LOGS_FETCH_MIN determines how many minutes of logs are fetched with each call. The default is 5. The logs are temporarily downloaded as gz files inside the container and are removed once ingested, via Logstash's file_completed_action option.
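
As a sketch of the ingestion side (not the pipeline actually shipped in this image, whose path and codec may differ), a Logstash file input that reads those gz files and deletes them once read looks roughly like this:

    input {
      file {
        # path is an assumption; the real pipeline defines its own
        path => "/data/logstash-logs/**/*.gz"
        # read mode handles gzipped files and finishes each file once consumed
        mode => "read"
        # remove each gz file after Logstash is done with it
        file_completed_action => "delete"
        # Cloudflare logs arrive as newline-delimited JSON
        codec => "json_lines"
      }
    }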

Volume mappings

The data directory contains data from Elasticsearch and Logstash, and is persisted across container restarts.

  • /data/es-data: this contains Elasticsearch data
  • /data/logstash-logs: this contains the logs downloaded from Cloudflare. Logs are put into subdirectories named <CF_ZONE>/<date>/<time_from>-<time_to>.gz
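
Assuming the data directory sits at the root of this repo (check the volume mappings in docker-compose.yml for where it actually lives), you can wipe the persisted state for a fresh start like this:

    # stop the stack, then remove persisted indices and downloaded logs
    docker-compose down
    sudo rm -rf data/es-data data/logstash-logs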
