Releases: codedeliveryservice/Reckless
Reckless v0.7.0
Release Notes
The NNUE hidden layer size has been increased from 128 to 384, further improved by adding 4 output buckets (#56) and material scaling (#62). The final architecture is (768 -> 384)x2 -> 1x4.
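For a rough picture of how the output buckets fit in, here is a minimal sketch of a bucketed NNUE forward pass. The quantization constants, the piece-count bucketing scheme, and all names are illustrative assumptions rather than Reckless's actual code, and the material scaling step (#62) is omitted.

```rust
// A sketch of a (768 -> 384)x2 -> 1x4 forward pass with SCReLU activation.
const HIDDEN: usize = 384;
const BUCKETS: usize = 4;
const QA: i32 = 255; // assumed activation quantization
const QB: i32 = 64; // assumed weight quantization
const SCALE: i32 = 400; // assumed centipawn scale

/// Squared clipped ReLU: clamp to [0, QA], then square.
fn screlu(x: i16) -> i32 {
    let clipped = i32::from(x).clamp(0, QA);
    clipped * clipped
}

struct Network {
    output_weights: [[i16; 2 * HIDDEN]; BUCKETS],
    output_bias: [i16; BUCKETS],
}

impl Network {
    /// Evaluate from the two perspective accumulators, selecting one of the
    /// four output buckets by the number of pieces left on the board.
    fn evaluate(&self, stm: &[i16; HIDDEN], nstm: &[i16; HIDDEN], pieces: u32) -> i32 {
        // Hypothetical bucketing: roughly 8 piece counts per bucket.
        let bucket = (pieces.saturating_sub(1) as usize / 8).min(BUCKETS - 1);
        let weights = &self.output_weights[bucket];

        let mut sum = 0i32;
        for i in 0..HIDDEN {
            sum += screlu(stm[i]) * i32::from(weights[i]);
            sum += screlu(nstm[i]) * i32::from(weights[HIDDEN + i]);
        }
        // Undo the extra QA factor introduced by squaring, then rescale.
        (sum / QA + i32::from(self.output_bias[bucket])) * SCALE / (QA * QB)
    }
}
```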
Changelog
Performance optimizations
- Allocate a quiet move list on the stack (#51); see the sketch after this list
- Implement operation fusion for NNUE (#58)
- Optimize accumulator handling (#59)
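The stack-allocated quiet move list (#51) boils down to replacing a heap-backed Vec with a fixed-size array plus a length counter. A minimal sketch, with an assumed `Move` encoding and an illustrative capacity:

```rust
// A fixed-capacity move list living entirely on the stack, avoiding heap
// allocation in the move-generation hot path. The capacity is illustrative.
#[derive(Copy, Clone, Default)]
struct Move(u16); // assumed compact move encoding

const MAX_QUIETS: usize = 64;

struct QuietList {
    moves: [Move; MAX_QUIETS],
    len: usize,
}

impl QuietList {
    fn new() -> Self {
        Self { moves: [Move::default(); MAX_QUIETS], len: 0 }
    }

    fn push(&mut self, mv: Move) {
        self.moves[self.len] = mv;
        self.len += 1;
    }

    fn as_slice(&self) -> &[Move] {
        &self.moves[..self.len]
    }
}
```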
Various search improvements
- History tuning (#39)
- Null Move Pruning tuning (#54)
- Check extensions before the move loop (#40)
- Disable quiescence search pruning for recaptures (#35)
- Treat non-winning captures as unfavorable in quiescence search (#57)
- Static Exchange Evaluation (#36, #37, #44, and #61)
- Fully fractional LMR (#60)
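As a sketch of what "fully fractional" LMR means in practice: the reduction is computed and adjusted in fractional units, and only truncated to whole plies at the point of use. The constants and adjustment terms below are illustrative, not Reckless's tuned values.

```rust
// Fractional LMR: keep the reduction as a float (or fixed-point value) and
// truncate to whole plies only at the end. Assumes depth and move_count >= 1.
fn lmr_reduction(depth: i32, move_count: i32, is_quiet: bool, improving: bool) -> i32 {
    // Logarithmic base reduction, kept fractional.
    let mut r = 0.77 + (depth as f64).ln() * (move_count as f64).ln() / 2.36;

    // Heuristic adjustments in fractional steps rather than whole plies.
    if !is_quiet {
        r -= 0.8;
    }
    if !improving {
        r += 0.5;
    }

    // Truncation to whole plies happens only here.
    (r as i32).clamp(0, depth - 1)
}
```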
Full Changelog: v0.6.0...v0.7.0
Self-Play Benchmark Against v0.6.0
STC 8.0+0.08s
Elo | 172.12 +- 11.31 (95%)
Conf | 8.0+0.08s Threads=1 Hash=32MB
Games | N: 2000 W: 1007 L: 90 D: 903
Penta | [6, 38, 229, 487, 240]
LTC 40.0+0.4s
Elo | 154.77 +- 13.83 (95%)
Conf | 40.0+0.40s Threads=1 Hash=128MB
Games | N: 1002 W: 449 L: 30 D: 523
Penta | [0, 16, 136, 263, 86]
Reckless v0.6.0
Release Notes
Alongside numerous search improvements and adjustments, Reckless now supports multi-threaded search, implemented using the Lazy SMP approach of sharing a lockless transposition table between search threads (#20, #27).
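"Lockless" here typically means each table slot is a single atomic word that racing threads read and write without synchronization, validating entries by a stored key fragment. A minimal sketch under that assumption; the entry layout and replacement policy are illustrative, not Reckless's actual scheme:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Each slot packs a 16-bit key fragment with 48 bits of entry data into one
// AtomicU64, so Lazy SMP threads can share the table without locks; a stale
// or torn entry is simply rejected by the key check on probe.
struct TranspositionTable {
    entries: Vec<AtomicU64>,
}

impl TranspositionTable {
    fn new(slots: usize) -> Self {
        Self { entries: (0..slots).map(|_| AtomicU64::new(0)).collect() }
    }

    fn store(&self, key: u64, data: u64) {
        let index = (key as usize) % self.entries.len();
        let packed = (key >> 48) << 48 | (data & 0xFFFF_FFFF_FFFF);
        self.entries[index].store(packed, Ordering::Relaxed);
    }

    fn probe(&self, key: u64) -> Option<u64> {
        let index = (key as usize) % self.entries.len();
        let entry = self.entries[index].load(Ordering::Relaxed);
        (entry >> 48 == key >> 48).then(|| entry & 0xFFFF_FFFF_FFFF)
    }
}
```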
The activation function has been switched to SCReLU (bee8f74), and three other networks (#14, #26, and #33) have been trained and used during the development process.
Changelog
Time management
- Time adjustment based on the distribution of root nodes (#1); see the sketch after this list
- Cyclic TC improvements (#31)
- Fischer TC improvements (#32)
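One common interpretation of root-node-based time adjustment (as in #1) is scaling the soft limit by how dominant the best root move is: if most of the search effort agrees on one move, stop earlier. The formula below is an illustrative sketch, not the engine's actual one.

```rust
// Scale the soft time limit by the fraction of nodes spent on the best root
// move: concentrated effort shortens the search, spread-out effort extends
// it. The constants and clamping are illustrative.
fn scaled_soft_limit(soft_limit_ms: u64, best_move_nodes: u64, total_nodes: u64) -> u64 {
    let fraction = best_move_nodes as f64 / total_nodes.max(1) as f64;
    let scale = (1.5 - fraction).max(0.5);
    (soft_limit_ms as f64 * scale) as u64
}
```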
History heuristics
- Follow-up move history (#11)
- Counter move history (#12); see the sketch after this list
- Linear history formula (#13)
- Separate bonus and malus (#21)
- Index by side to move in main history (#23)
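Counter-move and follow-up history both index quiet-move scores by an earlier move in the line (the opponent's previous move, or our own move two plies back). A minimal sketch of the table shape; the [piece][to-square] indexing and dimensions are assumptions:

```rust
// One history table indexed by a previous move's (piece, to-square) and the
// current move's (piece, to-square); instantiate it once for counter-move
// history (one ply back) and once for follow-up history (two plies back).
struct ContinuationHistory {
    table: Vec<[[i16; 64]; 12]>, // 12 pieces x 64 squares outer slots
}

impl ContinuationHistory {
    fn new() -> Self {
        Self { table: vec![[[0; 64]; 12]; 12 * 64] }
    }

    fn get(&self, prev_piece: usize, prev_to: usize, piece: usize, to: usize) -> i16 {
        self.table[prev_piece * 64 + prev_to][piece][to]
    }
}
```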
Performance optimizations
- Transposition table prefetching (#4)
- Handwritten SIMD for AVX2 instructions (#16); see the sketch after this list
- Faster repetition detection (#28)
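As an illustration of what handwritten AVX2 code looks like in this context, here is a minimal dot product over i16 values, the kind of kernel at the heart of NNUE inference. It is a generic sketch, not the engine's actual kernel:

```rust
// AVX2 dot product: _mm256_madd_epi16 multiplies adjacent i16 pairs and adds
// them into i32 lanes, processing 16 elements per iteration.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn dot_i16_avx2(a: &[i16], b: &[i16]) -> i32 {
    use std::arch::x86_64::*;
    debug_assert!(a.len() == b.len() && a.len() % 16 == 0);

    let mut acc = _mm256_setzero_si256();
    for i in (0..a.len()).step_by(16) {
        let va = _mm256_loadu_si256(a.as_ptr().add(i) as *const __m256i);
        let vb = _mm256_loadu_si256(b.as_ptr().add(i) as *const __m256i);
        acc = _mm256_add_epi32(acc, _mm256_madd_epi16(va, vb));
    }

    // Horizontally sum the eight i32 lanes.
    let upper = _mm256_extracti128_si256(acc, 1);
    let mut sum = _mm_add_epi32(_mm256_castsi256_si128(acc), upper);
    sum = _mm_add_epi32(sum, _mm_srli_si128(sum, 8));
    sum = _mm_add_epi32(sum, _mm_srli_si128(sum, 4));
    _mm_cvtsi128_si32(sum)
}
```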
Various search improvements
- Introduce razoring (#3)
- Fail-soft null move pruning (#6)
- Probe transposition table before stand pat (#7)
- Adaptive NMP based on static evaluation (#8)
- Use transposition table score to adjust eval (#10)
- SPSA tuning session (#17)
- Move check extension inside move loop (#19)
- Update aspiration search delta function (#15); see the sketch after this list
- Reset killer moves for child nodes (#22)
- Avoid using static evaluation when in check (#24)
- Increase re-search depth when LMR search results are promising (#30)
- Reset killer moves before null move pruning (#34)
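For the aspiration delta change (#15), the general pattern is a narrow window around the previous iteration's score that widens on every fail; how the delta grows is the tunable part. A minimal sketch with an illustrative growth function and the search passed in as a callback:

```rust
// Aspiration search loop: re-search with a widened window whenever the score
// lands on or outside a bound. Initial delta and growth rate are illustrative.
fn aspiration_search(mut search: impl FnMut(i32, i32) -> i32, prev_score: i32) -> i32 {
    let mut delta = 25;
    let mut alpha = prev_score - delta;
    let mut beta = prev_score + delta;

    loop {
        let score = search(alpha, beta);

        if score <= alpha {
            alpha -= delta; // fail low: widen the lower bound
        } else if score >= beta {
            beta += delta; // fail high: widen the upper bound
        } else {
            return score; // inside the window: done
        }

        delta += delta / 2; // grow the window for the next re-search
    }
}
```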
Full Changelog: v0.5.0...v0.6.0
Acknowledgments
Special thanks to @AndyGrant for kindly sharing his CPU time and for developing OpenBench, which is actively used in the development process.
Self-Play Benchmark Against v0.5.0
STC 8.0+0.08s
Elo | 155.12 +- 11.83 (95%)
Conf | 8.0+0.08s Threads=1 Hash=32MB
Games | N: 2000 W: 994 L: 156 D: 850
Penta | [8, 53, 272, 427, 240]
LTC 40.0+0.4s
Elo | 157.43 +- 15.49 (95%)
Conf | 40.0+0.40s Threads=1 Hash=128MB
Games | N: 1006 W: 474 L: 47 D: 485
Penta | [0, 20, 145, 229, 109]
Reckless v0.5.0
Release Notes
This release introduces NNUE (Efficiently Updatable Neural Network), which completely replaces the previously used HCE (Handcrafted Evaluation).
The training data was generated through self-play, starting from a randomly initialized network. The network was then retrained iteratively on newly generated data, with each iteration improving both its strength and the quality of the data. Training was carried out using a custom NNUE trainer.
Additionally, a few minor changes and refactoring have been made.
Full Changelog: v0.4.0...v0.5.0
UCI Support
- Added support for the UCI `go nodes <x>` command.
- Added the custom `eval` command.
Self-Play Benchmark Against v0.4.0
STC 8+0.08s
Score of Reckless 0.5.0 vs Reckless 0.4.0: 811 - 46 - 143 [0.882] 1000
... Reckless 0.5.0 playing White: 411 - 22 - 67 [0.889] 500
... Reckless 0.5.0 playing Black: 400 - 24 - 76 [0.876] 500
... White vs Black: 435 - 422 - 143 [0.506] 1000
Elo difference: 350.3 +/- 27.2, LOS: 100.0 %, DrawRatio: 14.3 %
LTC 40+0.4s
Score of Reckless 0.5.0 vs Reckless 0.4.0: 376 - 18 - 106 [0.858] 500
... Reckless 0.5.0 playing White: 203 - 3 - 44 [0.900] 250
... Reckless 0.5.0 playing Black: 173 - 15 - 62 [0.816] 250
... White vs Black: 218 - 176 - 106 [0.542] 500
Elo difference: 312.5 +/- 33.0, LOS: 100.0 %, DrawRatio: 21.2 %
Reckless v0.4.0
Search Improvements
- Main Search: Implemented internal iterative reductions, futility pruning, and the improving heuristic.
- Late Move Reductions: Implemented a logarithmic formula, adjusted based on the history heuristic.
- Null Move Pruning: Adjusted based on depth, with added zugzwang risk minimization.
- History Heuristic: Now persistent between searches and utilizes a gravity formula (sketched after this list).
- Quiescence Search: Enhanced with delta pruning and transposition table utilization.
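The gravity formula keeps history values bounded: each update damps the bonus in proportion to how close the entry already is to saturation, so scores decay toward zero rather than growing without limit. A minimal sketch with an illustrative bound:

```rust
// History gravity update: as |entry| approaches MAX_HISTORY, the damping term
// cancels more of the bonus, so entries saturate at +/-MAX_HISTORY.
const MAX_HISTORY: i32 = 16384; // illustrative bound

fn update_history(entry: &mut i32, bonus: i32) {
    *entry += bonus - *entry * bonus.abs() / MAX_HISTORY;
}
```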
Evaluation Improvements
- Enemy king-relative PST
- Passed pawns
- Isolated pawns
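As an illustration of the pawn-structure terms above, a minimal isolated-pawn test using bitboard file masks, assuming a little-endian square encoding (A1 = 0):

```rust
// A pawn is isolated when no friendly pawn stands on an adjacent file.
const FILE_A: u64 = 0x0101_0101_0101_0101;

fn is_isolated(square: u32, friendly_pawns: u64) -> bool {
    let file = square % 8;
    let mut adjacent_files = 0u64;
    if file > 0 {
        adjacent_files |= FILE_A << (file - 1);
    }
    if file < 7 {
        adjacent_files |= FILE_A << (file + 1);
    }
    friendly_pawns & adjacent_files == 0
}
```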
Other Changes
- Implemented a Triangular PV table to report a full-length principal variation line.
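A minimal sketch of the triangular PV table idea: row `ply` holds the best line found from that ply, and when a new best move is found, the child row's line is copied up behind it. The move encoding and bounds are illustrative:

```rust
const MAX_PLY: usize = 128;

// Row `ply` needs at most MAX_PLY - ply entries, giving the table its
// triangular shape; a square array is used here for simplicity. Callers are
// assumed to reset `lengths[ply]` to 0 at leaf nodes.
struct PvTable {
    lines: Box<[[u16; MAX_PLY]; MAX_PLY]>, // moves as an assumed u16 encoding
    lengths: [usize; MAX_PLY],
}

impl PvTable {
    fn update(&mut self, ply: usize, best_move: u16) {
        self.lines[ply][0] = best_move;
        let child_len = self.lengths[ply + 1];
        // Append the child's PV after the new best move.
        for i in 0..child_len {
            self.lines[ply][i + 1] = self.lines[ply + 1][i];
        }
        self.lengths[ply] = child_len + 1;
    }
}
```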
Bug Fixes
- Fixed a formatting bug when reporting mating scores.
- Fixed a cache size reset bug when the `ucinewgame` command is received.
Self-Play Benchmark Against v0.3.0
STC 10+0.1s
Score of Reckless v0.4.0 vs Reckless v0.3.0: 539 - 47 - 164 [0.828] 750
... Reckless v0.4.0 playing White: 284 - 17 - 74 [0.856] 375
... Reckless v0.4.0 playing Black: 255 - 30 - 90 [0.800] 375
... White vs Black: 314 - 272 - 164 [0.528] 750
Elo difference: 273.0 +/- 25.9, LOS: 100.0 %, DrawRatio: 21.9 %
LTC 60+0.6s
Score of Reckless v0.4.0 vs Reckless v0.3.0: 287 - 15 - 98 [0.840] 400
... Reckless v0.4.0 playing White: 152 - 5 - 43 [0.868] 200
... Reckless v0.4.0 playing Black: 135 - 10 - 55 [0.813] 200
... White vs Black: 162 - 140 - 98 [0.527] 400
Elo difference: 288.1 +/- 34.5, LOS: 100.0 %, DrawRatio: 24.5 %
Reckless v0.3.0
Evaluation Improvements
- King-relative PST has replaced material evaluation and traditional piece-square tables.
- Weight tuning has been performed using a gradient descent tuner.
- A tempo bonus for the side to move has been added.
Search Improvements
- Optimal Time Management has been introduced for games with incremental time controls.
- Adaptive Late Move Reductions have been implemented, replacing a constant reduction value.
- Quiet Late Move Pruning has been implemented (see the sketch after this list).
- Penalties have been introduced for quiet moves searched in fail-high nodes that did not cause the beta cutoff.
- Minor search enhancements, along with other improvements, have also contributed additional Elo gains.
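A minimal sketch of quiet late move pruning: at shallow depth, once enough quiet moves have been searched, the remaining ones are skipped outright. The threshold formula is illustrative, not the engine's tuned one:

```rust
// Skip the remaining quiet moves once the move count exceeds a
// depth-dependent threshold at shallow depths; constants are illustrative.
fn should_late_move_prune(depth: i32, move_count: i32, is_quiet: bool) -> bool {
    is_quiet && depth <= 4 && move_count > 3 + depth * depth
}
```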
Self-Play Benchmark Against v0.2.0
STC 10+0.1s
Score of Reckless v0.3.0 vs Reckless v0.2.0: 398 - 38 - 64 [0.860] 500
... Reckless v0.3.0 playing White: 205 - 14 - 31 [0.882] 250
... Reckless v0.3.0 playing Black: 193 - 24 - 33 [0.838] 250
... White vs Black: 229 - 207 - 64 [0.522] 500
Elo difference: 315.3 +/- 37.9, LOS: 100.0 %, DrawRatio: 12.8 %
LTC 60+0.6s
Score of Reckless v0.3.0 vs Reckless v0.2.0: 248 - 12 - 40 [0.893] 300
... Reckless v0.3.0 playing White: 131 - 4 - 15 [0.923] 150
... Reckless v0.3.0 playing Black: 117 - 8 - 25 [0.863] 150
... White vs Black: 139 - 121 - 40 [0.530] 300
Elo difference: 369.2 +/- 52.4, LOS: 100.0 %, DrawRatio: 13.3 %
Reckless v0.2.0
Elo Rating Improvements
Significant Elo rating improvements were achieved through the implementation of tapered evaluation with weight tuning and the introduction of Reverse Futility Pruning. Additional minor gains came from code refactoring and other improvements.
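Minimal sketches of both ideas, with illustrative constants: tapered evaluation blends separate midgame and endgame scores by a game-phase value, and Reverse Futility Pruning cuts a node when the static evaluation exceeds beta by a depth-scaled margin.

```rust
// Tapered evaluation: interpolate between midgame and endgame scores by the
// remaining game phase (0 = endgame, MAX_PHASE = opening).
fn tapered_eval(mg: i32, eg: i32, phase: i32) -> i32 {
    const MAX_PHASE: i32 = 24; // assumed phase scale
    (mg * phase + eg * (MAX_PHASE - phase)) / MAX_PHASE
}

// Reverse futility pruning: at shallow depth, if the static evaluation beats
// beta by a comfortable margin, assume the node will fail high.
fn reverse_futility_prune(static_eval: i32, beta: i32, depth: i32) -> bool {
    const MARGIN_PER_PLY: i32 = 75; // assumed margin
    depth <= 6 && static_eval - MARGIN_PER_PLY * depth >= beta
}
```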
Code Refactoring and Organization
Efforts have been made to refine the codebase, focusing on enhancing organization and maintainability. Notably, the project has been merged into a single crate, and several bugs encountered during the refactoring process have been resolved.
Engine Approach Changes
In this release, some changes have been made to the engine's approach:
- Switched from a Fail-Hard to a Fail-Soft framework for alpha-beta pruning.
- Switched from Make/Undo to Copy/Make for improved performance and simplified code.
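A minimal sketch showing both changes together: the loop returns the best score found even when it falls outside the window (Fail-Soft), and `make` returns a fresh position copy instead of mutating and undoing (Copy/Make). `Position`, `evaluate`, and `generate_moves` are stand-ins for the engine's real types, and mate/stalemate handling is omitted:

```rust
const INFINITY: i32 = 30_000;

#[derive(Clone)]
struct Position; // stand-in for the real board state

fn evaluate(_pos: &Position) -> i32 { 0 } // stub static evaluation
fn generate_moves(_pos: &Position) -> Vec<u16> { Vec::new() } // stub movegen

impl Position {
    // Copy/Make: build a new position instead of mutating in place.
    fn make(&self, _mv: u16) -> Position {
        self.clone()
    }
}

fn alpha_beta(pos: &Position, mut alpha: i32, beta: i32, depth: i32) -> i32 {
    if depth == 0 {
        return evaluate(pos);
    }

    // Fail-soft: track the best score seen, even if it escapes the window.
    let mut best = -INFINITY;
    for mv in generate_moves(pos) {
        let child = pos.make(mv);
        let score = -alpha_beta(&child, -beta, -alpha, depth - 1);
        best = best.max(score);
        alpha = alpha.max(score);
        if alpha >= beta {
            break; // beta cutoff; `best` may exceed beta
        }
    }
    best
}
```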
UCI Protocol Improvements
The UCI protocol implementation now includes support for reporting the `seldepth` and `hashfull` parameters.
Self-Play Benchmark Against v0.1.0
STC 10+0.1s
Score of Reckless v0.2.0 vs Reckless v0.1.0: 237 - 29 - 34 [0.847] 300
... Reckless v0.2.0 playing White: 120 - 15 - 15 [0.850] 150
... Reckless v0.2.0 playing Black: 117 - 14 - 19 [0.843] 150
... White vs Black: 134 - 132 - 34 [0.503] 300
Elo difference: 296.8 +/- 48.9, LOS: 100.0 %, DrawRatio: 11.3 %
LTC 60+0.6s
Score of Reckless v0.2.0 vs Reckless v0.1.0: 87 - 5 - 8 [0.910] 100
... Reckless v0.2.0 playing White: 44 - 3 - 3 [0.910] 50
... Reckless v0.2.0 playing Black: 43 - 2 - 5 [0.910] 50
... White vs Black: 46 - 46 - 8 [0.500] 100
Elo difference: 401.9 +/- 114.5, LOS: 100.0 %, DrawRatio: 8.0 %
Reckless v0.1.0
Introducing Reckless, a Rust-based UCI chess engine.
The engine features basic search techniques, including iterative deepening, aspiration windows, and principal variation search, as well as simple implementations of selectivity techniques like null move pruning and late move reductions, and move ordering strategies such as TT-move ordering, MVV-LVA, the killer heuristic, and the history heuristic.
The static evaluation function combines material, mobility, and piece-square tables, utilizing the Simplified Evaluation Function as a guide. For more information, please visit the Chess Programming Wiki.
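For a flavor of that evaluation, a minimal material-plus-PST term; the values and the A1 = 0 square indexing are illustrative, loosely in the spirit of the Simplified Evaluation Function:

```rust
// Score a single pawn as material value plus a piece-square bonus; real
// tables cover every piece type, and mobility is scored separately.
fn score_pawn(square: usize) -> i32 {
    const PAWN_VALUE: i32 = 100;
    let pst_bonus = match square {
        27 | 28 | 35 | 36 => 20, // d4, e4, d5, e5: central pawns
        _ => 0,
    };
    PAWN_VALUE + pst_bonus
}
```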
Elo Estimation
The engine hasn't been tested in CCRL, and therefore doesn't have an assigned Elo rating. However, it has undergone approximate local testing against two other engines at a time control of 60+0.6s with a 10-move opening book.
In comparison, it outperformed both the Halogen 3.0 engine, which holds a CCRL Blitz rating of 2093, and the BadChessEngine 0.4.4 engine (2044 Elo). While an exact rating cannot be determined based on these results alone, it's estimated to be around 2100 Elo.
| Rank | Name | Elo | +/- | Games | Wins | Losses | Draws | Points | Score | Draw |
| ---- | ---- | --- | --- | ----- | ---- | ------ | ----- | ------ | ----- | ---- |
| 1 | Reckless 0.1.0 | 44 | 56 | 120 | 55 | 40 | 25 | 67.5 | 56.3% | 20.8% |
| 2 | Halogen 3.0 | -14 | 56 | 120 | 46 | 51 | 23 | 57.5 | 47.9% | 19.2% |
| 3 | BadChessEngine 0.4.4 | -29 | 56 | 120 | 43 | 53 | 24 | 55.0 | 45.8% | 20.0% |