InfluxDB 2 (#1)
* Boerderij#203

* Update docker compose to specify influxdb:1.8.4

* Update requirements to use urllib3==1.26.5

* Updated to support Radarr and Sonarr V3 API

* bump requirements for requests

* Fix Sonarr & Radarr V3 API /queue endpoint (Boerderij#220)

* Fix lint issues

* More lint fixes

* Update Sonarr structures

* Add Overseerr Support (Boerderij#210)

* Remove duplicate structures

* update changelog to reflect v1.7.7 changes

* Add IP data to tautulli Boerderij#202

* add missing ip address in tautulli

* Fixed: Streamlined API calls to Radarr and Sonarr (Boerderij#221)

* Fixed: Sonarr Data pull issues (Boerderij#222)

* Fix Sonarr calendar

* Update lidarr structure (Boerderij#225)

Added missing arguments to Lidarr structure

Fixes Boerderij#223

* Clean up request totals. Upstream change sct/overseerr#2426

* Cleanup blank space

* Fix requested_date syntax.

* Fix requested_date for Overseerr tv and movie

* Fix overseerr config references

* Fix overseerr structures

* Update intparser to accommodate changes to config structure

* Cleanup overseerr data collection

* Fix SERVICES_ENABLED in Varken.py to accommodate overseerr

* Fixed: Sonarr/Lidarr Queues (Boerderij#227)

* Change sonarr queue structures to str

* Fixed: Multipage queue fetching

* Update historical tautulli import (Boerderij#226)

* Fixed: Sonarr params ordering

* Fixed: Proper warnings for missing data in sonarr and radarr

* Added: Overseerr ENVs to docker compose.

* Added: Logging for empty/no-data returns

* Update Sonarr & Lidarr Structs to match latest API changes (Boerderij#231)

* Add support for estimatedCompletionTime in LidarrQueue

* Add support for tvdbId in SonarrEpisode struct

* Fix typo in docker yml

* Rename example url for overseerr in docker yml

* Update radarr structures to include originalLanguage

* Update radarr structures to include addOptions

* Update radarr structures to include popularity

* fix(ombi): Update structures.py (Boerderij#238)

* feat(docker): remove envs from example

* fix(logging): remove deprecation warning; add var for debug mode (Boerderij#240)

* Support InfluxDB 2.x in addition to 1.8

* Document that influxdb 2.x is supported

* Include influxdb username/password for v2 server support

* Support an optional v prefix for influxdb version strings

---------

Co-authored-by: mal5305 <[email protected]>
Co-authored-by: samwiseg0 <[email protected]>
Co-authored-by: Robin <[email protected]>
Co-authored-by: tigattack <[email protected]>
Co-authored-by: Stewart Thomson <[email protected]>
Co-authored-by: Cameron Stephen <[email protected]>
Co-authored-by: MDHMatt <[email protected]>
Co-authored-by: Nathan Adams <[email protected]>
9 people authored Jun 22, 2023
1 parent b5a83f0 commit 555f7d1
Showing 14 changed files with 527 additions and 197 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -17,7 +17,7 @@ ecosystem into InfluxDB using Grafana for a frontend
Requirements:
* [Python 3.6.7+](https://www.python.org/downloads/release/python-367/)
* [Python3-pip](https://pip.pypa.io/en/stable/installing/)
* [InfluxDB 1.8.x](https://www.influxdata.com/)
* [InfluxDB 1.8.x or 2.x](https://www.influxdata.com/)
* [Grafana](https://grafana.com/)

<p align="center">
@@ -50,7 +50,7 @@ Please read [Asking for Support](https://wiki.cajun.pro/books/varken/chapter/ask

### InfluxDB
[InfluxDB Installation Documentation](https://wiki.cajun.pro/books/varken/page/influxdb-d1f)
Note: Only v1.8.x is currently supported.
Note: Only v1.8.x or v2.x are supported.

Influxdb is required but not packaged as part of Varken. Varken will create
its database on its own. If you choose to give varken user permissions that
21 changes: 18 additions & 3 deletions Varken.py
@@ -1,19 +1,21 @@
import platform
import schedule
import distro
from time import sleep
from queue import Queue
from sys import version
from threading import Thread
from os import environ as env
from os import access, R_OK, getenv
from distro import linux_distribution
from os.path import isdir, abspath, dirname, join
from argparse import ArgumentParser, RawTextHelpFormatter
from logging import getLogger, StreamHandler, Formatter, DEBUG


# Needed to check version of python
from varken import structures # noqa
from varken.ombi import OmbiAPI
from varken.overseerr import OverseerrAPI
from varken.unifi import UniFiAPI
from varken import VERSION, BRANCH, BUILD_DATE
from varken.sonarr import SonarrAPI
@@ -27,7 +29,7 @@
from varken.varkenlogger import VarkenLogger


PLATFORM_LINUX_DISTRO = ' '.join(x for x in linux_distribution() if x)
PLATFORM_LINUX_DISTRO = ' '.join(x for x in (distro.id(), distro.version(), distro.name()) if x)


def thread(job, **kwargs):
@@ -156,6 +158,18 @@ def thread(job, **kwargs):
at_time = schedule.every(server.issue_status_run_seconds).seconds
at_time.do(thread, OMBI.get_issue_counts).tag("ombi-{}-get_issue_counts".format(server.id))

if CONFIG.overseerr_enabled:
for server in CONFIG.overseerr_servers:
OVERSEER = OverseerrAPI(server, DBMANAGER)
if server.get_request_total_counts:
at_time = schedule.every(server.request_total_run_seconds).seconds
at_time.do(thread, OVERSEER.get_request_counts).tag("overseerr-{}-get_request_counts"
.format(server.id))
if server.num_latest_requests_to_fetch > 0:
at_time = schedule.every(server.num_latest_requests_seconds).seconds
at_time.do(thread, OVERSEER.get_latest_requests).tag("overseerr-{}-get_latest_requests"
.format(server.id))

if CONFIG.sickchill_enabled:
for server in CONFIG.sickchill_servers:
SICKCHILL = SickChillAPI(server, DBMANAGER)
@@ -171,7 +185,8 @@ def thread(job, **kwargs):

# Run all on startup
SERVICES_ENABLED = [CONFIG.ombi_enabled, CONFIG.radarr_enabled, CONFIG.tautulli_enabled, CONFIG.unifi_enabled,
CONFIG.sonarr_enabled, CONFIG.sickchill_enabled, CONFIG.lidarr_enabled]
CONFIG.sonarr_enabled, CONFIG.sickchill_enabled, CONFIG.lidarr_enabled,
CONFIG.overseerr_enabled]
if not [enabled for enabled in SERVICES_ENABLED if enabled]:
vl.logger.error("All services disabled. Exiting")
exit(1)
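The Overseerr block added above follows the same pattern as every other collector in Varken.py: each enabled poll is registered as a schedule job and handed to the thread() helper so one slow API call cannot hold up the others. The helper's body is collapsed in this diff, so the sketch below uses an assumed minimal version of it, with a stand-in function in place of OVERSEER.get_request_counts:

    import schedule
    from time import sleep
    from threading import Thread

    def thread(job, **kwargs):
        # Assumed minimal version of Varken's helper: run the job in its own thread.
        Thread(target=job, kwargs=dict(**kwargs)).start()

    def get_request_counts():
        # Stand-in for OVERSEER.get_request_counts
        print("polling overseerr...")

    schedule.every(30).seconds.do(thread, get_request_counts).tag("overseerr-1-get_request_counts")

    while True:
        schedule.run_pending()
        sleep(1)
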
15 changes: 14 additions & 1 deletion data/varken.example.ini
@@ -3,7 +3,8 @@ sonarr_server_ids = 1,2
radarr_server_ids = 1,2
lidarr_server_ids = false
tautulli_server_ids = 1
ombi_server_ids = 1
ombi_server_ids = false
overseerr_server_ids = 1
sickchill_server_ids = false
unifi_server_ids = false
maxmind_license_key = xxxxxxxxxxxxxxxx
@@ -15,6 +16,7 @@ ssl = false
verify_ssl = false
username = root
password = root
org = -

[tautulli-1]
url = tautulli.domain.tld:8181
@@ -95,6 +97,17 @@ request_total_run_seconds = 300
get_issue_status_counts = true
issue_status_run_seconds = 300

[overseerr-1]
url = overseerr.domain.tld
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = false
get_request_total_counts = true
request_total_run_seconds = 30
get_latest_requests = true
num_latest_requests_to_fetch = 10
num_latest_requests_seconds = 30

[sickchill-1]
url = sickchill.domain.tld:8081
apikey = xxxxxxxxxxxxxxxx
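The new [overseerr-1] block is read like the other service sections: the same keys can also be supplied as environment variables, and the intparser coerces them into typed settings. As a rough illustration only (Varken's real parser does more validation and env handling than shown here), the section maps onto a settings dict roughly like this:

    from configparser import ConfigParser

    config = ConfigParser()
    config.read('data/varken.example.ini')

    section = config['overseerr-1']
    overseerr = {
        'url': section.get('url'),
        'apikey': section.get('apikey'),
        'ssl': section.getboolean('ssl'),
        'verify_ssl': section.getboolean('verify_ssl'),
        'get_request_total_counts': section.getboolean('get_request_total_counts'),
        'request_total_run_seconds': section.getint('request_total_run_seconds'),
        'get_latest_requests': section.getboolean('get_latest_requests'),
        'num_latest_requests_to_fetch': section.getint('num_latest_requests_to_fetch'),
        'num_latest_requests_seconds': section.getint('num_latest_requests_seconds'),
    }
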
91 changes: 3 additions & 88 deletions docker-compose.yml
@@ -6,7 +6,7 @@ services:
influxdb:
hostname: influxdb
container_name: influxdb
image: influxdb
image: influxdb:1.8
networks:
- internal
volumes:
@@ -22,91 +22,6 @@
- /path/to/docker-varken/config-folder:/config
environment:
- TZ=America/Chicago
- VRKN_GLOBAL_SONARR_SERVER_IDS=1,2
- VRKN_GLOBAL_RADARR_SERVER_IDS=1,2
- VRKN_GLOBAL_LIDARR_SERVER_IDS=false
- VRKN_GLOBAL_TAUTULLI_SERVER_IDS=1
- VRKN_GLOBAL_OMBI_SERVER_IDS=1
- VRKN_GLOBAL_SICKCHILL_SERVER_IDS=false
- VRKN_GLOBAL_UNIFI_SERVER_IDS=false
- VRKN_GLOBAL_MAXMIND_LICENSE_KEY=xxxxxxxxxxxxxxxx
- VRKN_INFLUXDB_URL=influxdb.domain.tld
- VRKN_INFLUXDB_PORT=8086
- VRKN_INFLUXDB_SSL=false
- VRKN_INFLUXDB_VERIFY_SSL=false
- VRKN_INFLUXDB_USERNAME=root
- VRKN_INFLUXDB_PASSWORD=root
- VRKN_TAUTULLI_1_URL=tautulli.domain.tld:8181
- VRKN_TAUTULLI_1_FALLBACK_IP=1.1.1.1
- VRKN_TAUTULLI_1_APIKEY=xxxxxxxxxxxxxxxx
- VRKN_TAUTULLI_1_SSL=false
- VRKN_TAUTULLI_1_VERIFY_SSL=false
- VRKN_TAUTULLI_1_GET_ACTIVITY=true
- VRKN_TAUTULLI_1_GET_ACTIVITY_RUN_SECONDS=30
- VRKN_TAUTULLI_1_GET_STATS=true
- VRKN_TAUTULLI_1_GET_STATS_RUN_SECONDS=3600
- VRKN_SONARR_1_URL=sonarr1.domain.tld:8989
- VRKN_SONARR_1_APIKEY=xxxxxxxxxxxxxxxx
- VRKN_SONARR_1_SSL=false
- VRKN_SONARR_1_VERIFY_SSL=false
- VRKN_SONARR_1_MISSING_DAYS=7
- VRKN_SONARR_1_MISSING_DAYS_RUN_SECONDS=300
- VRKN_SONARR_1_FUTURE_DAYS=1
- VRKN_SONARR_1_FUTURE_DAYS_RUN_SECONDS=300
- VRKN_SONARR_1_QUEUE=true
- VRKN_SONARR_1_QUEUE_RUN_SECONDS=300
- VRKN_SONARR_2_URL=sonarr2.domain.tld:8989
- VRKN_SONARR_2_APIKEY=yyyyyyyyyyyyyyyy
- VRKN_SONARR_2_SSL=false
- VRKN_SONARR_2_VERIFY_SSL=false
- VRKN_SONARR_2_MISSING_DAYS=7
- VRKN_SONARR_2_MISSING_DAYS_RUN_SECONDS=300
- VRKN_SONARR_2_FUTURE_DAYS=1
- VRKN_SONARR_2_FUTURE_DAYS_RUN_SECONDS=300
- VRKN_SONARR_2_QUEUE=true
- VRKN_SONARR_2_QUEUE_RUN_SECONDS=300
- VRKN_RADARR_1_URL=radarr1.domain.tld
- VRKN_RADARR_1_APIKEY=xxxxxxxxxxxxxxxx
- VRKN_RADARR_1_SSL=false
- VRKN_RADARR_1_VERIFY_SSL=false
- VRKN_RADARR_1_QUEUE=true
- VRKN_RADARR_1_QUEUE_RUN_SECONDS=300
- VRKN_RADARR_1_GET_MISSING=true
- VRKN_RADARR_1_GET_MISSING_RUN_SECONDS=300
- VRKN_RADARR_2_URL=radarr2.domain.tld
- VRKN_RADARR_2_APIKEY=yyyyyyyyyyyyyyyy
- VRKN_RADARR_2_SSL=false
- VRKN_RADARR_2_VERIFY_SSL=false
- VRKN_RADARR_2_QUEUE=true
- VRKN_RADARR_2_QUEUE_RUN_SECONDS=300
- VRKN_RADARR_2_GET_MISSING=true
- VRKN_RADARR_2_GET_MISSING_RUN_SECONDS=300
- VRKN_LIDARR_1_URL=lidarr1.domain.tld:8686
- VRKN_LIDARR_1_APIKEY=xxxxxxxxxxxxxxxx
- VRKN_LIDARR_1_SSL=false
- VRKN_LIDARR_1_VERIFY_SSL=false
- VRKN_LIDARR_1_MISSING_DAYS=30
- VRKN_LIDARR_1_MISSING_DAYS_RUN_SECONDS=300
- VRKN_LIDARR_1_FUTURE_DAYS=30
- VRKN_LIDARR_1_FUTURE_DAYS_RUN_SECONDS=300
- VRKN_LIDARR_1_QUEUE=true
- VRKN_LIDARR_1_QUEUE_RUN_SECONDS=300
- VRKN_OMBI_1_URL=ombi.domain.tld
- VRKN_OMBI_1_APIKEY=xxxxxxxxxxxxxxxx
- VRKN_OMBI_1_SSL=false
- VRKN_OMBI_1_VERIFY_SSL=false
- VRKN_OMBI_1_GET_REQUEST_TYPE_COUNTS=true
- VRKN_OMBI_1_REQUEST_TYPE_RUN_SECONDS=300
- VRKN_OMBI_1_GET_REQUEST_TOTAL_COUNTS=true
- VRKN_OMBI_1_REQUEST_TOTAL_RUN_SECONDS=300
- VRKN_OMBI_1_GET_ISSUE_STATUS_COUNTS=true
- VRKN_OMBI_1_ISSUE_STATUS_RUN_SECONDS=300
- VRKN_SICKCHILL_1_URL=sickchill.domain.tld:8081
- VRKN_SICKCHILL_1_APIKEY=xxxxxxxxxxxxxxxx
- VRKN_SICKCHILL_1_SSL=false
- VRKN_SICKCHILL_1_VERIFY_SSL=false
- VRKN_SICKCHILL_1_GET_MISSING=true
- VRKN_SICKCHILL_1_GET_MISSING_RUN_SECONDS=300
depends_on:
- influxdb
restart: unless-stopped
@@ -118,7 +33,7 @@ services:
- internal
ports:
- 3000:3000
volumes:
volumes:
- /path/to/docker-grafana/config-folder:/config
environment:
- GF_PATHS_DATA=/config/data
@@ -128,4 +43,4 @@
depends_on:
- influxdb
- varken
restart: unless-stopped
restart: unless-stopped
5 changes: 3 additions & 2 deletions requirements.txt
@@ -2,9 +2,10 @@
# Potential requirements.
# pip3 install -r requirements.txt
#---------------------------------------------------------
requests==2.21
requests==2.25.1
geoip2==2.9.0
influxdb==5.2.0
influxdb-client==1.30.0
schedule==0.6.0
distro==1.4.0
urllib3==1.24.2
urllib3==1.26.5
2 changes: 1 addition & 1 deletion utilities/historical_tautulli_import.py
@@ -41,7 +41,7 @@
DBMANAGER = DBManager(CONFIG.influx_server)

if CONFIG.tautulli_enabled:
GEOIPHANDLER = GeoIPHandler(DATA_FOLDER)
GEOIPHANDLER = GeoIPHandler(DATA_FOLDER, CONFIG.tautulli_servers[0].maxmind_license_key)
for server in CONFIG.tautulli_servers:
TAUTULLI = TautulliAPI(server, DBMANAGER, GEOIPHANDLER)
TAUTULLI.get_historical(days=opts.days)
1 change: 1 addition & 0 deletions varken.xml
@@ -51,5 +51,6 @@
<Labels/>
<Config Name="PGID" Target="PGID" Default="" Mode="" Description="Container Variable: PGID" Type="Variable" Display="always" Required="true" Mask="false">99</Config>
<Config Name="PUID" Target="PUID" Default="" Mode="" Description="Container Variable: PUID" Type="Variable" Display="always" Required="true" Mask="false">100</Config>
<Config Name="Debug" Target="DEBUG" Default="False" Mode="" Description="Turn Debug on or off" Type="Variable" Display="always" Required="false" Mask="false">False</Config>
<Config Name="Varken DataDir" Target="/config" Default="" Mode="rw" Description="Container Path: /config" Type="Path" Display="advanced-hide" Required="true" Mask="false">/mnt/user/appdata/varken</Config>
</Container>
80 changes: 63 additions & 17 deletions varken/dbmanager.py
@@ -1,45 +1,91 @@
import re
from sys import exit
from logging import getLogger
from influxdb import InfluxDBClient
from requests.exceptions import ConnectionError
from influxdb.exceptions import InfluxDBServerError
from influxdb_client import InfluxDBClient, BucketRetentionRules
from influxdb_client.client.write_api import SYNCHRONOUS
from influxdb_client.client.exceptions import InfluxDBError
from urllib3.exceptions import NewConnectionError


class DBManager(object):
def __init__(self, server):
self.server = server
self.logger = getLogger()
self.bucket = "varken"

if self.server.url == "influxdb.domain.tld":
self.logger.critical("You have not configured your varken.ini. Please read Wiki page for configuration")
exit()
self.influx = InfluxDBClient(host=self.server.url, port=self.server.port, username=self.server.username,
password=self.server.password, ssl=self.server.ssl, database='varken',
verify_ssl=self.server.verify_ssl)

url = self.server.url
if 'http' not in url:
scheme = 'http'
if self.server.ssl:
scheme = 'https'
url = "{}://{}:{}".format(scheme, self.server.url, self.server.port)
token = f'{self.server.username}:{self.server.password}'

self.influx = InfluxDBClient(url=url, token=token,
verify_ssl=self.server.verify_ssl, org=self.server.org)

try:
version = self.influx.request('ping', expected_response_code=204).headers['X-Influxdb-Version']
version = self.influx.version()
self.logger.info('Influxdb version: %s', version)
except ConnectionError:
self.logger.critical("Error testing connection to InfluxDB. Please check your url/hostname")
match = re.match(r'v?(\d+)\.', version)
if match:
self.version = int(match[1])
self.logger.info("Using InfluxDB API v%s", self.version)
else:
self.logger.critical("Unknown influxdb version")
exit(1)
except NewConnectionError:
self.logger.critical("Error getting InfluxDB version number. Please check your url/hostname are valid")
exit(1)

databases = [db['name'] for db in self.influx.get_list_database()]
if self.version >= 2:
# If we pass username/password to a v1 server, it breaks :(
self.influx = InfluxDBClient(url=url, username=self.server.username,
password=self.server.password,
verify_ssl=self.server.verify_ssl, org=self.server.org)
self.create_v2_bucket()
else:
self.create_v1_database()

if 'varken' not in databases:
def create_v2_bucket(self):
if not self.influx.buckets_api().find_bucket_by_name(self.bucket):
self.logger.info("Creating varken bucket")

retention = BucketRetentionRules(type="expire", every_seconds=60 * 60 * 24 * 30,
shard_group_duration_seconds=60 * 60)
self.influx.buckets_api().create_bucket(bucket_name=self.bucket,
retention_rules=retention)

def create_v1_database(self):
from influxdb import InfluxDBClient
client = InfluxDBClient(host=self.server.url, port=self.server.port, username=self.server.username,
password=self.server.password, ssl=self.server.ssl, database=self.bucket,
verify_ssl=self.server.verify_ssl)
databases = [db['name'] for db in client.get_list_database()]

if self.bucket not in databases:
self.logger.info("Creating varken database")
self.influx.create_database('varken')
client.create_database(self.bucket)

retention_policies = [policy['name'] for policy in
self.influx.get_list_retention_policies(database='varken')]
client.get_list_retention_policies(database=self.bucket)]
if 'varken 30d-1h' not in retention_policies:
self.logger.info("Creating varken retention policy (30d-1h)")
self.influx.create_retention_policy(name='varken 30d-1h', duration='30d', replication='1',
database='varken', default=True, shard_duration='1h')
client.create_retention_policy(name='varken 30d-1h', duration='30d', replication='1',
database=self.bucket, default=True, shard_duration='1h')

self.bucket = f'{self.bucket}/varken 30d-1h'

def write_points(self, data):
d = data
self.logger.debug('Writing Data to InfluxDB %s', d)
write_api = self.influx.write_api(write_options=SYNCHRONOUS)
try:
self.influx.write_points(d)
except (InfluxDBServerError, ConnectionError) as e:
write_api.write(bucket=self.bucket, record=data)
except (InfluxDBError, NewConnectionError) as e:
self.logger.error('Error writing data to influxdb. Dropping this set of data. '
'Check your database! Error: %s', e)
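With this rewrite, DBManager talks to both server generations through the single influxdb-client library: on a 2.x server it writes straight into the varken bucket, while on 1.8 it goes through the v2 compatibility API, where username:password becomes the token and database/retention policy becomes the bucket name. A condensed sketch of that write path, with placeholder endpoint, credentials, and payload:

    import re
    from influxdb_client import InfluxDBClient
    from influxdb_client.client.write_api import SYNCHRONOUS

    client = InfluxDBClient(url="http://influxdb.local:8086",  # placeholder endpoint
                            token="root:root",                 # 1.8 compat: "username:password"
                            org="-",                           # ignored by 1.8, required by the client
                            verify_ssl=False)

    # Accept both "1.8.10" and "v2.7.1" style version strings, as the regex above does.
    major = int(re.match(r'v?(\d+)\.', client.version())[1])

    # On 1.8 the "bucket" is really "database/retention policy".
    bucket = "varken" if major >= 2 else "varken/varken 30d-1h"

    write_api = client.write_api(write_options=SYNCHRONOUS)
    write_api.write(bucket=bucket, record=[{
        "measurement": "varken_example",   # placeholder point, not Varken's real schema
        "tags": {"server": "1"},
        "fields": {"value": 1},
    }])
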
