| Status | |
| --- | --- |
| Stability | beta |
| Distributions | contrib |
| Issues | |
| Code Owners | @djaglowski \| Seeking more code owners! |
The File Storage extension can persist state to the local file system.
The extension requires read and write access to a directory. A default directory can be used, but it must already exist in order for the extension to operate.
`directory` is the relative or absolute path to the dedicated data storage directory.
The default directory is `%ProgramData%\Otelcol\FileStorage` on Windows and `/var/lib/otelcol/file_storage` otherwise.

`timeout` is the maximum time to wait for a file lock. This value does not need to be modified in most circumstances.
The default timeout is `1s`.
`fsync`, when set, will force the database to perform an fsync after each write. This helps to ensure database integrity if there is an interruption to the database process, but at the cost of performance. See DB.NoSync for more information.
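For example, a minimal configuration using only these basic settings might look like the following sketch (the directory path is illustrative):

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage  # example path to an existing directory
    timeout: 1s                               # default file-lock timeout
    fsync: false                              # enable for stronger durability at a performance cost
```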
`compaction` defines how and when files should be compacted. There are two modes of compaction available (both of which can be set concurrently):

- `compaction.on_start` (default: false), which happens when the collector starts
- `compaction.on_rebound` (default: false), which happens online when certain criteria are met; it's discussed in more detail below
`compaction.directory` specifies the directory used for compaction (as an intermediate step).

`compaction.max_transaction_size` (default: 65536) defines the maximum size of the compaction transaction.
A value of zero will ignore transaction sizes.

`compaction.cleanup_on_start` (default: false) specifies whether compaction temporary files are removed on start.
It will remove all temporary files in the compaction directory (those which start with `tempdb`);
such files can be left behind if a previous run of the process was killed while compacting.
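As an illustration, here is a sketch that enables compaction at startup together with cleanup of leftover temporary files (the directories are placeholders):

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage   # placeholder data directory
    compaction:
      on_start: true           # compact the database when the collector starts
      cleanup_on_start: true   # remove leftover tempdb* files from interrupted compactions
      directory: /tmp/         # intermediate directory used during compaction
```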
For rebound compaction, the following additional parameters are available (a configuration sketch follows the diagram below):

- `compaction.rebound_needed_threshold_mib` (default: 100) - when allocated data exceeds this amount, the "compaction needed" flag will be enabled
- `compaction.rebound_trigger_threshold_mib` (default: 10) - if the "compaction needed" flag is set and allocated data drops below this amount, compaction will begin and the "compaction needed" flag will be cleared
- `compaction.check_interval` (default: 5s) - specifies how frequently the conditions for compaction are checked
The idea behind rebound compaction is that in certain workloads (e.g. a persistent queue) the storage might grow significantly (e.g. when the exporter is unable to send data due to a network problem) and is then emptied once the underlying issue is resolved (e.g. network connectivity is restored). This leaves a significant amount of space that needs to be reclaimed (this space is also reported in memory usage, since mmap() is used underneath). The optimal time for this to happen online is after the storage is largely drained, which is controlled by `rebound_trigger_threshold_mib`. To make sure this is not too sensitive, there's also `rebound_needed_threshold_mib`, which specifies the total claimed space that must be exceeded before online compaction is even considered. Consider the following diagram for an example of meeting the rebound (online) compaction conditions.
```
  ▲
  │
  │             XX.............
m │            XXXX............
e ├───────────XXXXXXX..........──────────── rebound_needed_threshold_mib
m │         XXXXXXXXX..........
o │       XXXXXXXXXXX.........
r │     XXXXXXXXXXXXXXXXX....
y ├─────XXXXXXXXXXXXXXXXXXXXX..──────────── rebound_trigger_threshold_mib
  │   XXXXXXXXXXXXXXXXXXXXXXXXXX.........
  │ XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  └──────────────── time ─────────────────►

     │            |          |
  issue        draining    compaction happens
  starts       begins      and reclaims space

 X - actually used space
 . - claimed but no longer used space
```
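A sketch of enabling rebound (online) compaction follows; the threshold values shown are simply the documented defaults written out explicitly, and the directories are placeholders:

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage   # placeholder data directory
    compaction:
      on_rebound: true                    # compact online when the thresholds below are met
      rebound_needed_threshold_mib: 100   # mark "compaction needed" above this allocation
      rebound_trigger_threshold_mib: 10   # compact once usage drops back below this amount
      check_interval: 5s                  # how often the conditions are evaluated
      directory: /tmp/                    # intermediate directory used during compaction
```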
```yaml
extensions:
  file_storage:
  file_storage/all_settings:
    directory: /var/lib/otelcol/mydir
    timeout: 1s
    compaction:
      on_start: true
      directory: /tmp/
      max_transaction_size: 65_536
    fsync: false

service:
  extensions: [file_storage, file_storage/all_settings]
  pipelines:
    traces:
      receivers: [nop]
      processors: [nop]
      exporters: [nop]

# Data pipeline is required to load the config.
receivers:
  nop:
processors:
  nop:
exporters:
  nop:
```
The extension uses the type and name of the component using the extension to create a file where the component's data is stored.
For example, if a Filelog receiver named `filelog/logs` uses the extension, its data is stored in a file named `receiver_filelog_logs`.
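For illustration, here is a minimal sketch of such a `filelog/logs` receiver referencing the extension through its `storage` setting (the log path is a placeholder); with this configuration, the receiver's checkpoints end up in a file named `receiver_filelog_logs` inside the storage directory:

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage   # placeholder data directory

receivers:
  filelog/logs:
    include: [/var/log/example/*.log]   # placeholder log path
    storage: file_storage               # persist checkpoints via the extension
```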
Sometimes the component name contains characters that either have special meaning in paths - like `/` - or are problematic or even forbidden in file names (depending on the host operating system), like `?` or `|`.
To prevent surprising or erroneous behavior, some characters in the component names are replaced before creating the file name to store data by the extension.
For example, for a Filelog receiver named `filelog/logs/container`, the component name `logs/container` is sanitized into `logs~002Fcontainer` and the data is stored in a file named `receiver_filelog_logs~002Fcontainer`.
Every unsafe character is replaced with a tilde `~` and the character's Unicode number in hex.
The only safe characters are: uppercase and lowercase ASCII letters `A-Z` and `a-z`, digits `0-9`, dot `.`, hyphen `-`, and underscore `_`.
The tilde `~` character is also replaced even though it is a safe character, to make sure that the sanitized component name never overlaps with a component name that does not require sanitization.
Currently, the File Storage extension uses bbolt to store and read data on disk. The following troubleshooting method works for bbolt-managed files. As such, there is no guarantee that this method will continue to work in the future, particularly if the extension switches away from bbolt.
When troubleshooting components that use the File Storage extension, it is sometimes helpful to read the raw contents of files created by the extension for the component. The simplest way to read files created by the File Storage extension is to use the `strings` utility (Linux, Windows).
For example, here are the contents of the file created by the File Storage extension when it's configured as the `storage` for the `filelog` receiver.
```
$ strings /tmp/otelcol/file_storage/filelogreceiver/receiver_filelog_
default
file_input.knownFiles2
{"Fingerprint":{"first_bytes":"MzEwNzkKMjE5Cg=="},"Offset":10,"FileAttributes":{"log.file.name":"1.log"},"HeaderFinalized":false,"FlushState":{"LastDataChange":"2024-03-20T18:16:18.164331-07:00","LastDataLength":0}}
{"Fingerprint":{"first_bytes":"MjQ0MDMK"},"Offset":6,"FileAttributes":{"log.file.name":"2.log"},"HeaderFinalized":false,"FlushState":{"LastDataChange":"2024-03-20T18:16:39.96429-07:00","LastDataLength":0}}
default
file_input.knownFiles2
{"Fingerprint":{"first_bytes":"MzEwNzkKMjE5Cg=="},"Offset":10,"FileAttributes":{"log.file.name":"1.log"},"HeaderFinalized":false,"FlushState":{"LastDataChange":"2024-03-20T18:16:18.164331-07:00","LastDataLength":0}}
{"Fingerprint":{"first_bytes":"MjQ0MDMK"},"Offset":6,"FileAttributes":{"log.file.name":"2.log"},"HeaderFinalized":false,"FlushState":{"LastDataChange":"2024-03-20T18:16:39.96429-07:00","LastDataLength":0}}
default
file_input.knownFiles2
{"Fingerprint":{"first_bytes":"MzEwNzkKMjE5Cg=="},"Offset":10,"FileAttributes":{"log.file.name":"1.log"},"HeaderFinalized":false,"FlushState":{"LastDataChange":"2024-03-20T18:16:18.164331-07:00","LastDataLength":0}}
{"Fingerprint":{"first_bytes":"MjQ0MDMK"},"Offset":6,"FileAttributes":{"log.file.name":"2.log"},"HeaderFinalized":false,"FlushState":{"LastDataChange":"2024-03-20T18:16:39.96429-07:00","LastDataLength":0}}
```