The incoming Airbyte data is structured depending on the target Redis cache/data type. The connector maps incoming data from a namespace and stream to a unique Redis key.
When the hash data type is used, the keys and hashes are structured as follows:

key:

    namespace:stream:id

hash:

- `_airbyte_ab_id`: a sequential id for a given key, generated with the Redis `INCR` command.
- `_airbyte_emitted_at`: a timestamp representing when the event was received from the data source.
- `_airbyte_data`: a JSON text/object representing the data that was received from the data source.
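As a sketch of the layout above, the following pure-Python snippet builds the key and hash for a record (no Redis server involved; the in-process counter stands in for the `INCR` command, and the namespace/stream/record values are illustrative):

```python
import json
import time

counters = {}  # per-(namespace:stream) counters, as INCR would maintain in Redis

def build_record(namespace, stream, record):
    """Return the Redis key and hash value for one incoming record."""
    prefix = f"{namespace}:{stream}"
    counters[prefix] = counters.get(prefix, 0) + 1
    ab_id = counters[prefix]                    # sequential id (INCR equivalent)
    redis_key = f"{prefix}:{ab_id}"             # full key: namespace:stream:id
    hash_value = {
        "_airbyte_ab_id": ab_id,
        "_airbyte_emitted_at": int(time.time() * 1000),
        "_airbyte_data": json.dumps(record),    # raw record serialized as JSON text
    }
    return redis_key, hash_value

key, value = build_record("public", "users", {"name": "alice"})
print(key)  # public:users:1
```

In an actual deployment these fields would be written with a single `HSET` call against the generated key.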
| Feature | Support | Notes |
| --- | --- | --- |
| Full Refresh Sync | ✅ | Existing keys in the Redis cache are deleted and replaced with the new keys. |
| Incremental - Append Sync | ✅ | New keys are inserted in the same keyspace without touching the existing keys. |
| Incremental - Deduped History | ❌ | |
| Namespaces | ✅ | Namespaces are used to determine the correct Redis key. |
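The difference between the two supported sync modes can be sketched with a plain dict standing in for the Redis keyspace (key names and values are illustrative): a full refresh deletes the existing keys under the stream's prefix before writing, while an incremental append only adds new keys.

```python
def full_refresh(keyspace, prefix, new_items):
    """Delete every existing key under namespace:stream, then insert the new keys."""
    for k in [k for k in keyspace if k.startswith(prefix)]:
        del keyspace[k]
    keyspace.update(new_items)

def incremental_append(keyspace, new_items):
    """Insert new keys; existing keys are left untouched."""
    keyspace.update(new_items)

cache = {"public:users:1": "old"}
full_refresh(cache, "public:users:", {"public:users:1": "new"})
incremental_append(cache, {"public:users:2": "newer"})
print(sorted(cache))  # ['public:users:1', 'public:users:2']
```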
Since the data is stored in-memory (with the option to save snapshots to disk periodically), Redis can handle millions of records without issue, as long as your cache has sufficient memory capacity.
- The connector is fully compatible with Redis 2.8.x, 3.x.x, and above.
- Configuration
  - host: Hostname or address of the Redis server to connect to.
  - port: Port of the Redis server to connect to.
  - username: Username for authenticating with the Redis server.
  - password: Password for authenticating with the Redis server.
  - cache_type: Redis cache/data type to use when storing the incoming messages, e.g. hash, set, list, stream, etc.
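A minimal configuration sketch using the fields above (the host, port, and credential values are placeholders, not defaults):

```json
{
  "host": "localhost",
  "port": 6379,
  "username": "default",
  "password": "example-password",
  "cache_type": "hash"
}
```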