Update README.md
odinserj committed Apr 23, 2024
1 parent 342637d commit e332254
Showing 1 changed file, README.md, with 27 additions and 8 deletions.

[![Build status](https://ci.appveyor.com/api/projects/status/yq82w8ji419c61vy?svg=true)](https://ci.appveyor.com/project/HangfireIO/hangfire-inmemory)

This is an efficient implementation of in-memory job storage for Hangfire with data structures close to their optimal representation.

The goal of this storage is to provide developers a fast path to start using Hangfire without setting up any additional infrastructure like SQL Server or Redis, with the possibility of swapping the implementation in a production environment. The non-goal of this implementation is to compete with TPL or other in-memory processing libraries, since serialization alone adds significant overhead.

This implementation uses proper synchronization, and everything, including locks, queues, and queries, is implemented as blocking operations, so there is no active polling in these cases. Read and write queries are processed by a dedicated background thread to avoid additional synchronization between threads, keeping everything as simple as possible with a future async-based implementation in mind. The internal state uses `SortedDictionary`, `SortedSet`, and `LinkedList` data structures to keep even huge collections off the Large Object Heap, reducing potential `OutOfMemoryException`-related issues caused by memory fragmentation.
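
For illustration, here is a minimal sketch of that single-threaded dispatcher idea; the type and member names are hypothetical and do not mirror the actual Hangfire.InMemory internals:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical sketch only: a single background thread owns all in-memory state,
// so individual queries need no additional locking between them.
internal sealed class QueryDispatcher : IDisposable
{
    private readonly BlockingCollection<Action> _queries = new BlockingCollection<Action>();
    private readonly Thread _thread;

    public QueryDispatcher()
    {
        _thread = new Thread(ProcessQueries) { IsBackground = true, Name = "Query Dispatcher" };
        _thread.Start();
    }

    // Callers enqueue a read or write query and block until it has been applied.
    public void Execute(Action query)
    {
        using (var completed = new ManualResetEventSlim(false))
        {
            _queries.Add(() => { query(); completed.Set(); });
            completed.Wait();
        }
    }

    private void ProcessQueries()
    {
        // Every query runs here, one at a time, against the in-memory state.
        foreach (var query in _queries.GetConsumingEnumerable())
        {
            query();
        }
    }

    public void Dispose() => _queries.CompleteAdding();
}
```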

This storage also uses a monotonic clock whenever possible by leveraging the `Stopwatch.GetTimestamp` method. So, expiration rules don't break when the clock suddenly jumps to the future or the past due to synchronization issues or manual updates.
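
As a rough illustration of the idea (not the library's actual code), an expiration check based on `Stopwatch.GetTimestamp` could look like this:

```csharp
using System;
using System.Diagnostics;

// Rough illustration only: timestamps come from a monotonic source, so the check
// is unaffected by wall-clock adjustments such as NTP corrections or manual changes.
public static class MonotonicExpiration
{
    public static long CurrentTimestamp() => Stopwatch.GetTimestamp();

    public static bool IsExpired(long createdAtTimestamp, TimeSpan expireIn)
    {
        var elapsedTicks = Stopwatch.GetTimestamp() - createdAtTimestamp;
        var elapsed = TimeSpan.FromSeconds((double)elapsedTicks / Stopwatch.Frequency);
        return elapsed >= expireIn;
    }
}
```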

## Requirements

The minimal supported version of Hangfire is 1.8.0. The latest version that supports previous Hangfire versions is Hangfire.InMemory 0.3.7.
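
If a project is still on an older Hangfire version, the earlier package can be pinned explicitly, for example:

```powershell
> dotnet add package Hangfire.InMemory --version 0.3.7
```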

## Installation

[Hangfire.InMemory](https://www.nuget.org/packages/Hangfire.InMemory/) is available on NuGet, so we can install it as usual using our favorite package manager.

```powershell
> dotnet add package Hangfire.InMemory
```
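
If the Package Manager Console in Visual Studio is preferred instead, the equivalent command is:

```powershell
PM> Install-Package Hangfire.InMemory
```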

## Configuration

After the package is installed, we can use the new `UseInMemoryStorage` method for the `IGlobalConfiguration` interface to register the storage.

```csharp
GlobalConfiguration.Configuration.UseInMemoryStorage();
```
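
In an ASP.NET Core application, the same storage can also be registered through dependency injection; the following is a minimal sketch that assumes the Hangfire.AspNetCore package is installed:

```csharp
using Hangfire;

// Minimal sketch for an ASP.NET Core app, assuming the Hangfire.AspNetCore package
// is installed and the project uses the web SDK with top-level statements.
var builder = WebApplication.CreateBuilder(args);

// Register the in-memory storage and a server that processes jobs in the same process.
builder.Services.AddHangfire(configuration => configuration.UseInMemoryStorage());
builder.Services.AddHangfireServer();

var app = builder.Build();
app.Run();
```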

### Maximum Expiration Time

Starting from version 0.7.0, the package controls the maximum expiration time for storage entries and sets it to *3 hours* by default when a higher expiration time is passed. For example, the default expiration time for background jobs is *24 hours*, and for batch jobs and their contents it is *7 days*, which can be too long for in-memory storage that runs side-by-side with the application.

We can control this behavior or even turn it off with the `MaxExpirationTime` option available in the `InMemoryStorageOptions` class in the following way:

```csharp
GlobalConfiguration.Configuration.UseInMemoryStorage(new InMemoryStorageOptions
{
  MaxExpirationTime = TimeSpan.FromHours(3) // Default value, we can also set it to `null` to disable.
});
```

It is also possible to use `TimeSpan.Zero` as a value for this option. In this case, entries are removed immediately instead of relying on the time-based eviction implementation. Please note that some unwanted side effects may appear when using a low value: for example, an antecedent background job may be created, processed, and expired before its continuation is created, resulting in exceptions.
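
As a hypothetical illustration of that race using the standard Hangfire client API, the second call below could fail if the first job has already been processed and evicted:

```csharp
// Hypothetical illustration (assumes `using Hangfire;` and `using System;`): with
// MaxExpirationTime set to TimeSpan.Zero, the antecedent job may already be evicted
// from storage before the continuation is registered, so this call may throw.
var parentId = BackgroundJob.Enqueue(() => Console.WriteLine("Antecedent"));
BackgroundJob.ContinueJobWith(parentId, () => Console.WriteLine("Continuation"));
```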

### Comparing Keys

Different storages use different rules for comparing keys. Some of them, like Redis, use case-sensitive comparisons, while others, like SQL Server, may use a case-insensitive comparison implementation. It is possible to set this behavior explicitly and simplify moving to another storage implementation in a production environment by configuring the `StringComparer` option in the `InMemoryStorageOptions` class in the following way:

```csharp
GlobalConfiguration.Configuration.UseInMemoryStorage(new InMemoryStorageOptions
{
  StringComparer = StringComparer.Ordinal // Default value, case-sensitive.
});
```
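
For example, if the application is expected to move to SQL Server storage with its typical case-insensitive collation later, a matching comparer can be configured up front; this is only a sketch, not a recommendation:

```csharp
GlobalConfiguration.Configuration.UseInMemoryStorage(new InMemoryStorageOptions
{
  // Sketch: mimic a case-insensitive storage, such as SQL Server with a default collation.
  StringComparer = StringComparer.OrdinalIgnoreCase
});
```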
