
High-performance setup

This is the exact opposite of the low-memory setup. Follow these tips if you want to further increase ASF performance (in terms of CPU speed), at the potential cost of increased memory usage.


ASF already prefers performance in its general, balanced tuning, so there is not a lot you can do to further increase it, although you're not completely out of options either. Keep in mind that the options below are not enabled by default, which means they were not considered balanced enough for the majority of use cases; you should decide for yourself whether the memory increase they bring is acceptable to you.


Runtime tuning (advanced)

The tricks below involve a serious increase in memory usage and startup time, and should therefore be used with caution.

The recommended way of applying these settings is through DOTNET_ environment properties. You could also use other methods, e.g. runtimeconfig.json, but some settings cannot be set that way, and on top of that ASF will replace your custom runtimeconfig.json with its own on the next update. We therefore recommend environment properties, which you can easily set prior to launching the process.
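If you nevertheless prefer the runtimeconfig.json route for the few settings that do support it, a minimal sketch of the file could look as follows, using the standard .NET configProperties schema (System.GC.Server shown purely as an example); keep in mind that ASF will overwrite this file on the next update:

{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  }
}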

The .NET runtime allows you to tweak the garbage collector in a lot of ways, effectively fine-tuning the GC process according to your needs.

DOTNET_gcServer

Configures whether the application uses workstation garbage collection or server garbage collection.

You can read the exact specifics of server GC in the fundamentals of garbage collection.

ASF uses workstation garbage collection by default. This is mainly because of its good balance between memory usage and performance, which is more than enough for just a few bots, as usually a single concurrent background GC thread is fast enough to handle the entire memory allocated by ASF.

However, machines today have a lot of CPU cores that ASF can greatly benefit from by having a dedicated GC thread for each available CPU vCore. This can greatly improve performance during heavy ASF tasks such as parsing badge pages or the inventory, since every CPU vCore can help, as opposed to just 2 (main and GC). Server GC is recommended for machines with 3 or more CPU vCores; workstation GC is automatically forced if your machine has just 1 CPU vCore, and if you have exactly 2, you can consider trying both (results may vary). If you're unsure how many vCores you have, see the sketch below.
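To check the vCore count before deciding, you can query the OS directly; the commands below are generic system utilities, not anything ASF-specific:

nproc                            # Linux: prints the number of available vCores
echo $Env:NUMBER_OF_PROCESSORS   # Windows (PowerShell)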

Server GC itself does not result in a huge memory increase just by being active, but it has much bigger generation sizes and is therefore far more lazy about giving memory back to the OS. You may find yourself in a sweet spot where server GC increases performance significantly and you'd like to keep using it, but at the same time you can't afford the large memory increase that comes with it. Luckily, there is a "best of both worlds" setting: using server GC with the GCLatencyLevel configuration property set to 0, which will still enable server GC, but limit generation sizes and focus more on memory. Alternatively, you might also experiment with another property, GCHeapHardLimitPercent, or even combine both of them, as sketched below.
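For illustration, a minimal sketch of such a combination through environment properties, assuming the usual DOTNET_ prefix mapping applies to these GC knobs as well; note that, when set through the environment, the heap hard limit percentage is expected as a hexadecimal value:

export DOTNET_gcServer=1                    # keep server GC enabled
export DOTNET_GCLatencyLevel=0              # limit generation sizes, prefer lower memory usage
export DOTNET_GCHeapHardLimitPercent=0x4B   # optionally cap the GC heap at roughly 75% of memory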

However, if memory is not a problem for you, it's a much better idea not to change those properties at all (the GC still takes your available memory into account and tweaks itself), achieving superior performance as a result.

DOTNET_TieredPGO

Enables dynamic or tiered profile-guided optimization (PGO) in .NET 6 and later versions.

Disabled by default. In a nutshell, this will cause the JIT to spend more time analyzing ASF's code and its patterns in order to generate superior code optimized for your typical usage. If you want to learn more about this setting, visit performance improvements in .NET 6.

DOTNET_ReadyToRun

Configures whether the .NET Core runtime uses pre-compiled code for images with available ReadyToRun data. Disabling this option forces the runtime to JIT-compile framework code.

Enabled by default. Disabling this in combination with enabling DOTNET_TieredPGO allows you to extend tiered profile-guided optimization to the whole .NET platform, and not just ASF code.

DOTNET_TC_QuickJitForLoops

Configures whether the JIT compiler uses quick JIT on methods that contain loops. Enabling quick JIT for loops may improve startup performance. However, long-running loops can get stuck in less-optimized code for long periods.

Disabled by default. While the description doesn't make it obvious, enabling this allows methods with loops to go through an additional compilation tier, which lets DOTNET_TieredPGO do a better job by analyzing their usage data.


You can enable selected properties by setting appropriate environment variables. For example, on Linux (shell):

export DOTNET_gcServer=1
export DOTNET_TieredPGO=1
export DOTNET_ReadyToRun=0
export DOTNET_TC_QuickJitForLoops=1

./ArchiSteamFarm # For OS-specific build

Or on Windows (PowerShell):

$Env:DOTNET_gcServer=1
$Env:DOTNET_TieredPGO=1
$Env:DOTNET_ReadyToRun=0
$Env:DOTNET_TC_QuickJitForLoops=1

.\ArchiSteamFarm.exe # For OS-specific build
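The same environment properties apply if you run ASF through other means; for example, with the official Docker image (assuming the justarchi/archisteamfarm image name) you'd pass them via -e switches, and in a hypothetical systemd unit via Environment= lines:

docker run -e DOTNET_gcServer=1 -e DOTNET_TieredPGO=1 justarchi/archisteamfarm

# In a systemd unit file, inside the [Service] section:
# Environment=DOTNET_gcServer=1
# Environment=DOTNET_TieredPGO=1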

Recommended optimization

  • Ensure that you're using the default value of OptimizationMode, which is MaxPerformance (see the sketch after this list). This is by far the most important setting, as using the MinMemoryUsage value has dramatic effects on performance.
  • Enable server GC. You can immediately tell that server GC is active by the significant memory increase compared to workstation GC. It spawns a GC thread for every CPU thread your machine has, in order to perform GC operations in parallel at maximum speed.
  • If you can't afford the memory increase caused by server GC, consider tweaking GCLatencyLevel and/or GCHeapHardLimitPercent to achieve "the best of both worlds". However, if your memory can afford it, then it's better to keep the defaults; server GC already tweaks itself during runtime and is smart enough to use less memory when your OS truly needs it.
  • You can also trade a longer startup time for increased optimization with additional tweaking through the DOTNET_ properties explained above.
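For reference, a minimal sketch of the relevant fragment of ASF's global config (ASF.json), assuming the usual numeric enum notation where 0 stands for MaxPerformance; since this is the default, simply omitting the property has the same effect:

{
  "OptimizationMode": 0
}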

Applying the recommendations above gives you superior ASF performance that should be blazing fast even with hundreds or thousands of enabled bots. The CPU should no longer be a bottleneck, as ASF is able to use your entire CPU power when needed, cutting the required time to a bare minimum. The next step would be CPU and RAM upgrades.
