diff --git a/.github/workflows/test-suite.yml b/.github/workflows/test-suite.yml
index 8da46ed8eea..bba670cc225 100644
--- a/.github/workflows/test-suite.yml
+++ b/.github/workflows/test-suite.yml
@@ -363,6 +363,8 @@ jobs:
run: CARGO_HOME=$(readlink -f $HOME) make vendor
- name: Markdown-linter
run: make mdlint
+ - name: Spell-check
+ uses: rojopolis/spellcheck-github-actions@v0
check-msrv:
name: check-msrv
runs-on: ubuntu-latest
diff --git a/.spellcheck.yml b/.spellcheck.yml
new file mode 100644
index 00000000000..692bc4d176c
--- /dev/null
+++ b/.spellcheck.yml
@@ -0,0 +1,35 @@
+matrix:
+- name: Markdown
+ sources:
+ - './book/**/*.md'
+ - 'README.md'
+ - 'CONTRIBUTING.md'
+ - 'SECURITY.md'
+ - './scripts/local_testnet/README.md'
+ default_encoding: utf-8
+ aspell:
+ lang: en
+ dictionary:
+ wordlists:
+ - wordlist.txt
+ encoding: utf-8
+ pipeline:
+ - pyspelling.filters.url:
+ - pyspelling.filters.markdown:
+ markdown_extensions:
+ - pymdownx.superfences:
+ - pymdownx.highlight:
+ - pymdownx.striphtml:
+ - pymdownx.magiclink:
+ - pyspelling.filters.html:
+ comments: false
+ ignores:
+ - code
+ - pre
+ - pyspelling.filters.context:
+ context_visible_first: true
+ delimiters:
+ # Ignore hex strings
+ - open: '0x[a-fA-F0-9]'
+ close: '[^a-fA-F0-9]'
+
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 3c53558a100..4cad219c89f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -85,7 +85,7 @@ steps:
5. Commit your changes and push them to your fork with `$ git push origin
your_feature_name`.
6. Go to your fork on github.com and use the web interface to create a pull
- request into the sigp/lighthouse repo.
+ request into the sigp/lighthouse repository.
From there, the repository maintainers will review the PR and either accept it
or provide some constructive feedback.
diff --git a/README.md b/README.md
index 4b22087bcdc..147a06e5040 100644
--- a/README.md
+++ b/README.md
@@ -26,7 +26,7 @@ Lighthouse is:
- Built in [Rust](https://www.rust-lang.org), a modern language providing unique safety guarantees and
excellent performance (comparable to C++).
- Funded by various organisations, including Sigma Prime, the
- Ethereum Foundation, ConsenSys, the Decentralization Foundation and private individuals.
+ Ethereum Foundation, Consensys, the Decentralization Foundation and private individuals.
- Actively involved in the specification and security analysis of the
Ethereum proof-of-stake consensus specification.
diff --git a/book/src/advanced_database.md b/book/src/advanced_database.md
index d8d6ea61a18..c275d0ed96f 100644
--- a/book/src/advanced_database.md
+++ b/book/src/advanced_database.md
@@ -14,7 +14,7 @@ more detail below).
The full states upon which blocks are replayed are referred to as _snapshots_ in the case of the
freezer DB, and _epoch boundary states_ in the case of the hot DB.
-The frequency at which the hot database stores full `BeaconState`s is fixed to one-state-per-epoch
+The frequency at which the hot database stores full `BeaconState` objects is fixed to one-state-per-epoch
in order to keep loads of recent states performant. For the freezer DB, the frequency is
configurable via the `--hierarchy-exponents` CLI flag, which is the topic of the next section.
@@ -56,7 +56,7 @@ that we have observed are:
_a lot_ of space. It's even possible to push beyond that with `--hierarchy-exponents 0` which
would store a full state every single slot (NOT RECOMMENDED).
- **Less diff layers are not necessarily faster**. One might expect that the fewer diff layers there
- are, the less work Lighthouse would have to do to reconstruct any particular state. In practise
+ are, the less work Lighthouse would have to do to reconstruct any particular state. In practice
this seems to be offset by the increased size of diffs in each layer making the diffs take longer
to apply. We observed no significant performance benefit from `--hierarchy-exponents 5,7,11`, and
a substantial increase in space consumed.
diff --git a/book/src/advanced_networking.md b/book/src/advanced_networking.md
index 732b4f51e65..c0f6b5485ef 100644
--- a/book/src/advanced_networking.md
+++ b/book/src/advanced_networking.md
@@ -68,7 +68,7 @@ The steps to do port forwarding depends on the router, but the general steps are
1. Determine the default gateway IP:
- On Linux: open a terminal and run `ip route | grep default`, the result should look something similar to `default via 192.168.50.1 dev wlp2s0 proto dhcp metric 600`. The `192.168.50.1` is your router management default gateway IP.
- - On MacOS: open a terminal and run `netstat -nr|grep default` and it should return the default gateway IP.
+ - On macOS: open a terminal and run `netstat -nr|grep default` and it should return the default gateway IP.
- On Windows: open a command prompt and run `ipconfig` and look for the `Default Gateway` which will show you the gateway IP.
The default gateway IP usually looks like 192.168.X.X. Once you obtain the IP, enter it to a web browser and it will lead you to the router management page.
@@ -91,7 +91,7 @@ The steps to do port forwarding depends on the router, but the general steps are
- Internal port: `9001`
- IP address: Choose the device that is running Lighthouse.
-1. To check that you have successfully opened the ports, go to [yougetsignal](https://www.yougetsignal.com/tools/open-ports/) and enter `9000` in the `port number`. If it shows "open", then you have successfully set up port forwarding. If it shows "closed", double check your settings, and also check that you have allowed firewall rules on port 9000. Note: this will only confirm if port 9000/TCP is open. You will need to ensure you have correctly setup port forwarding for the UDP ports (`9000` and `9001` by default).
+1. To check that you have successfully opened the ports, go to [`yougetsignal`](https://www.yougetsignal.com/tools/open-ports/) and enter `9000` in the `port number`. If it shows "open", then you have successfully set up port forwarding. If it shows "closed", double check your settings, and also check that you have allowed firewall rules on port 9000. Note: this will only confirm if port 9000/TCP is open. You will need to ensure you have correctly set up port forwarding for the UDP ports (`9000` and `9001` by default).
## ENR Configuration
@@ -141,7 +141,7 @@ To listen over both IPv4 and IPv6:
- Set two listening addresses using the `--listen-address` flag twice ensuring
the two addresses are one IPv4, and the other IPv6. When doing so, the
`--port` and `--discovery-port` flags will apply exclusively to IPv4. Note
- that this behaviour differs from the Ipv6 only case described above.
+ that this behaviour differs from the IPv6 only case described above.
- If necessary, set the `--port6` flag to configure the port used for TCP and
UDP over IPv6. This flag has no effect when listening over IPv6 only.
- If necessary, set the `--discovery-port6` flag to configure the IPv6 UDP
diff --git a/book/src/api-lighthouse.md b/book/src/api-lighthouse.md
index b63505c4901..5428ab8f9ae 100644
--- a/book/src/api-lighthouse.md
+++ b/book/src/api-lighthouse.md
@@ -508,23 +508,31 @@ curl "http://localhost:5052/lighthouse/database/info" | jq
```json
{
- "schema_version": 18,
+ "schema_version": 22,
"config": {
- "slots_per_restore_point": 8192,
- "slots_per_restore_point_set_explicitly": false,
"block_cache_size": 5,
+ "state_cache_size": 128,
+ "compression_level": 1,
"historic_state_cache_size": 1,
+ "hdiff_buffer_cache_size": 16,
"compact_on_init": false,
"compact_on_prune": true,
"prune_payloads": true,
+ "hierarchy_config": {
+ "exponents": [
+ 5,
+ 7,
+ 11
+ ]
+ },
"prune_blobs": true,
"epochs_per_blob_prune": 1,
"blob_prune_margin_epochs": 0
},
"split": {
- "slot": "7454656",
- "state_root": "0xbecfb1c8ee209854c611ebc967daa77da25b27f1a8ef51402fdbe060587d7653",
- "block_root": "0x8730e946901b0a406313d36b3363a1b7091604e1346a3410c1a7edce93239a68"
+ "slot": "10530592",
+ "state_root": "0xd27e6ce699637cf9b5c7ca632118b7ce12c2f5070bb25a27ac353ff2799d4466",
+ "block_root": "0x71509a1cb374773d680cd77148c73ab3563526dacb0ab837bb0c87e686962eae"
},
"anchor": {
"anchor_slot": "7451168",
@@ -543,8 +551,19 @@ curl "http://localhost:5052/lighthouse/database/info" | jq
For more information about the split point, see the [Database Configuration](./advanced_database.md)
docs.
-The `anchor` will be `null` unless the node has been synced with checkpoint sync and state
-reconstruction has yet to be completed. For more information
+For archive nodes, the `anchor` will be:
+
+```json
+"anchor": {
+ "anchor_slot": "0",
+ "oldest_block_slot": "0",
+ "oldest_block_parent": "0x0000000000000000000000000000000000000000000000000000000000000000",
+ "state_upper_limit": "0",
+ "state_lower_limit": "0"
+ },
+```
+
+indicating that all states with slots `>= 0` are available, i.e., full state history. For more information
on the specific meanings of these fields see the docs on [Checkpoint
Sync](./checkpoint-sync.md#reconstructing-states).
diff --git a/book/src/faq.md b/book/src/faq.md
index 04e5ce5bc8f..d23951c8c77 100644
--- a/book/src/faq.md
+++ b/book/src/faq.md
@@ -92,7 +92,7 @@ If the reason for the error message is caused by no. 1 above, you may want to lo
- Power outage. If power outages are an issue at your place, consider getting a UPS to avoid ungraceful shutdown of services.
- The service file is not stopped properly. To overcome this, make sure that the process is stopped properly, e.g., during client updates.
-- Out of memory (oom) error. This can happen when the system memory usage has reached its maximum and causes the execution engine to be killed. To confirm that the error is due to oom, run `sudo dmesg -T | grep killed` to look for killed processes. If you are using geth as the execution client, a short term solution is to reduce the resources used. For example, you can reduce the cache by adding the flag `--cache 2048`. If the oom occurs rather frequently, a long term solution is to increase the memory capacity of the computer.
+- Out of memory (oom) error. This can happen when the system memory usage has reached its maximum and causes the execution engine to be killed. To confirm that the error is due to oom, run `sudo dmesg -T | grep killed` to look for killed processes. If you are using Geth as the execution client, a short-term solution is to reduce the resources used. For example, you can reduce the cache by adding the flag `--cache 2048`. If the oom occurs rather frequently, a long-term solution is to increase the memory capacity of the computer.
### I see beacon logs showing `Error during execution engine upcheck`, what should I do?
@@ -302,7 +302,7 @@ An example of the log: (debug logs can be found under `$datadir/beacon/logs`):
Delayed head block, set_as_head_time_ms: 27, imported_time_ms: 168, attestable_delay_ms: 4209, available_delay_ms: 4186, execution_time_ms: 201, blob_delay_ms: 3815, observed_delay_ms: 3984, total_delay_ms: 4381, slot: 1886014, proposer_index: 733, block_root: 0xa7390baac88d50f1cbb5ad81691915f6402385a12521a670bbbd4cd5f8bf3934, service: beacon, module: beacon_chain::canonical_head:1441
```
-The field to look for is `attestable_delay`, which defines the time when a block is ready for the validator to attest. If the `attestable_delay` is greater than 4s which has past the window of attestation, the attestation wil fail. In the above example, the delay is mostly caused by late block observed by the node, as shown in `observed_delay`. The `observed_delay` is determined mostly by the proposer and partly by your networking setup (e.g., how long it took for the node to receive the block). Ideally, `observed_delay` should be less than 3 seconds. In this example, the validator failed to attest the block due to the block arriving late.
+The field to look for is `attestable_delay`, which defines the time when a block is ready for the validator to attest. If the `attestable_delay` is greater than 4s, which is past the attestation window, the attestation will fail. In the above example, the delay is mostly caused by a late block observed by the node, as shown in `observed_delay`. The `observed_delay` is determined mostly by the proposer and partly by your networking setup (e.g., how long it took for the node to receive the block). Ideally, `observed_delay` should be less than 3 seconds. In this example, the validator failed to attest the block due to the block arriving late.
Another example of log:
@@ -315,7 +315,7 @@ In this example, we see that the `execution_time_ms` is 4694ms. The `execution_t
### Sometimes I miss the attestation head vote, resulting in penalty. Is this normal?
-In general, it is unavoidable to have some penalties occasionally. This is particularly the case when you are assigned to attest on the first slot of an epoch and if the proposer of that slot releases the block late, then you will get penalised for missing the target and head votes. Your attestation performance does not only depend on your own setup, but also on everyone elses performance.
+In general, it is unavoidable to have some penalties occasionally. This is particularly the case when you are assigned to attest on the first slot of an epoch: if the proposer of that slot releases the block late, you will get penalised for missing the target and head votes. Your attestation performance depends not only on your own setup, but also on everyone else's performance.
You could also check for the sync aggregate participation percentage on block explorers such as [beaconcha.in](https://beaconcha.in/). A low sync aggregate participation percentage (e.g., 60-70%) indicates that the block that you are assigned to attest to may be published late. As a result, your validator fails to correctly attest to the block.
diff --git a/book/src/graffiti.md b/book/src/graffiti.md
index ba9c7d05d70..7b402ea866f 100644
--- a/book/src/graffiti.md
+++ b/book/src/graffiti.md
@@ -4,7 +4,7 @@ Lighthouse provides four options for setting validator graffiti.
## 1. Using the "--graffiti-file" flag on the validator client
-Users can specify a file with the `--graffiti-file` flag. This option is useful for dynamically changing graffitis for various use cases (e.g. drawing on the beaconcha.in graffiti wall). This file is loaded once on startup and reloaded everytime a validator is chosen to propose a block.
+Users can specify a file with the `--graffiti-file` flag. This option is useful for dynamically changing graffitis for various use cases (e.g. drawing on the beaconcha.in graffiti wall). This file is loaded once on startup and reloaded every time a validator is chosen to propose a block.
Usage:
`lighthouse vc --graffiti-file graffiti_file.txt`
diff --git a/book/src/homebrew.md b/book/src/homebrew.md
index da92dcb26ce..f94764889e6 100644
--- a/book/src/homebrew.md
+++ b/book/src/homebrew.md
@@ -31,6 +31,6 @@ Alternatively, you can find the `lighthouse` binary at:
The [formula][] is kept up-to-date by the Homebrew community and a bot that lists for new releases.
-The package source can be found in the [homebrew-core](https://github.com/Homebrew/homebrew-core/blob/master/Formula/l/lighthouse.rb) repo.
+The package source can be found in the [homebrew-core](https://github.com/Homebrew/homebrew-core/blob/master/Formula/l/lighthouse.rb) repository.
[formula]: https://formulae.brew.sh/formula/lighthouse
diff --git a/book/src/late-block-re-orgs.md b/book/src/late-block-re-orgs.md
index 4a00f33aa44..fca156bda3f 100644
--- a/book/src/late-block-re-orgs.md
+++ b/book/src/late-block-re-orgs.md
@@ -46,24 +46,31 @@ You can track the reasons for re-orgs being attempted (or not) via Lighthouse's
A pair of messages at `INFO` level will be logged if a re-org opportunity is detected:
-> INFO Attempting re-org due to weak head threshold_weight: 45455983852725, head_weight: 0, parent: 0x09d953b69041f280758400c671130d174113bbf57c2d26553a77fb514cad4890, weak_head: 0xf64f8e5ed617dc18c1e759dab5d008369767c3678416dac2fe1d389562842b49
-
-> INFO Proposing block to re-org current head head_to_reorg: 0xf64f…2b49, slot: 1105320
+```text
+INFO Attempting re-org due to weak head threshold_weight: 45455983852725, head_weight: 0, parent: 0x09d953b69041f280758400c671130d174113bbf57c2d26553a77fb514cad4890, weak_head: 0xf64f8e5ed617dc18c1e759dab5d008369767c3678416dac2fe1d389562842b49
+INFO Proposing block to re-org current head head_to_reorg: 0xf64f…2b49, slot: 1105320
+```
This should be followed shortly after by a `INFO` log indicating that a re-org occurred. This is
expected and normal:
-> INFO Beacon chain re-org reorg_distance: 1, new_slot: 1105320, new_head: 0x72791549e4ca792f91053bc7cf1e55c6fbe745f78ce7a16fc3acb6f09161becd, previous_slot: 1105319, previous_head: 0xf64f8e5ed617dc18c1e759dab5d008369767c3678416dac2fe1d389562842b49
+```text
+INFO Beacon chain re-org reorg_distance: 1, new_slot: 1105320, new_head: 0x72791549e4ca792f91053bc7cf1e55c6fbe745f78ce7a16fc3acb6f09161becd, previous_slot: 1105319, previous_head: 0xf64f8e5ed617dc18c1e759dab5d008369767c3678416dac2fe1d389562842b49
+```
In case a re-org is not viable (which should be most of the time), Lighthouse will just propose a
block as normal and log the reason the re-org was not attempted at debug level:
-> DEBG Not attempting re-org reason: head not late
+```text
+DEBG Not attempting re-org reason: head not late
+```
If you are interested in digging into the timing of `forkchoiceUpdated` messages sent to the
execution layer, there is also a debug log for the suppression of `forkchoiceUpdated` messages
when Lighthouse thinks that a re-org is likely:
-> DEBG Fork choice update overridden slot: 1105320, override: 0x09d953b69041f280758400c671130d174113bbf57c2d26553a77fb514cad4890, canonical_head: 0xf64f8e5ed617dc18c1e759dab5d008369767c3678416dac2fe1d389562842b49
+```text
+DEBG Fork choice update overridden slot: 1105320, override: 0x09d953b69041f280758400c671130d174113bbf57c2d26553a77fb514cad4890, canonical_head: 0xf64f8e5ed617dc18c1e759dab5d008369767c3678416dac2fe1d389562842b49
+```
[the spec]: https://github.com/ethereum/consensus-specs/pull/3034
diff --git a/book/src/ui-faqs.md b/book/src/ui-faqs.md
index efa6d3d4ab2..08878753161 100644
--- a/book/src/ui-faqs.md
+++ b/book/src/ui-faqs.md
@@ -6,7 +6,7 @@ Yes, the most current Siren version requires Lighthouse v4.3.0 or higher to func
## 2. Where can I find my API token?
-The required Api token may be found in the default data directory of the validator client. For more information please refer to the lighthouse ui configuration [`api token section`](./api-vc-auth-header.md).
+The required API token may be found in the default data directory of the validator client. For more information please refer to the Lighthouse UI configuration [`api token section`](./api-vc-auth-header.md).
## 3. How do I fix the Node Network Errors?
diff --git a/book/src/ui-installation.md b/book/src/ui-installation.md
index 1444c0d6331..b7c5557b3c3 100644
--- a/book/src/ui-installation.md
+++ b/book/src/ui-installation.md
@@ -1,6 +1,6 @@
# 📦 Installation
-Siren supports any operating system that supports container runtimes and/or NodeJS 18, this includes Linux, MacOS, and Windows. The recommended way of running Siren is by launching the [docker container](https://hub.docker.com/r/sigp/siren) , but running the application directly is also possible.
+Siren supports any operating system that supports container runtimes and/or NodeJS 18, including Linux, macOS, and Windows. The recommended way of running Siren is by launching the [docker container](https://hub.docker.com/r/sigp/siren), but running the application directly is also possible.
## Version Requirement
diff --git a/book/src/validator-inclusion.md b/book/src/validator-inclusion.md
index 092c813a1ea..eef563dcdb7 100644
--- a/book/src/validator-inclusion.md
+++ b/book/src/validator-inclusion.md
@@ -56,7 +56,6 @@ The following fields are returned:
able to vote) during the current epoch.
- `current_epoch_target_attesting_gwei`: the total staked gwei that attested to
the majority-elected Casper FFG target epoch during the current epoch.
-- `previous_epoch_active_gwei`: as per `current_epoch_active_gwei`, but during the previous epoch.
- `previous_epoch_target_attesting_gwei`: see `current_epoch_target_attesting_gwei`.
- `previous_epoch_head_attesting_gwei`: the total staked gwei that attested to a
head beacon block that is in the canonical chain.
diff --git a/book/src/validator-manager.md b/book/src/validator-manager.md
index a71fab1e3ad..11df2af0378 100644
--- a/book/src/validator-manager.md
+++ b/book/src/validator-manager.md
@@ -32,3 +32,4 @@ The `validator-manager` boasts the following features:
- [Creating and importing validators using the `create` and `import` commands.](./validator-manager-create.md)
- [Moving validators between two VCs using the `move` command.](./validator-manager-move.md)
+- [Managing validators, such as deleting, importing and listing validators.](./validator-manager-api.md)
diff --git a/book/src/validator-monitoring.md b/book/src/validator-monitoring.md
index 6439ea83a32..bbc95460ec9 100644
--- a/book/src/validator-monitoring.md
+++ b/book/src/validator-monitoring.md
@@ -134,7 +134,7 @@ validator_monitor_attestation_simulator_source_attester_hit_total
validator_monitor_attestation_simulator_source_attester_miss_total
```
-A grafana dashboard to view the metrics for attestation simulator is available [here](https://github.com/sigp/lighthouse-metrics/blob/master/dashboards/AttestationSimulator.json).
+A Grafana dashboard to view the metrics for attestation simulator is available [here](https://github.com/sigp/lighthouse-metrics/blob/master/dashboards/AttestationSimulator.json).
The attestation simulator provides an insight into the attestation performance of a beacon node. It can be used as an indication of how expediently the beacon node has completed importing blocks within the 4s time frame for an attestation to be made.
diff --git a/scripts/local_testnet/README.md b/scripts/local_testnet/README.md
index ca701eb7e91..159c89badbc 100644
--- a/scripts/local_testnet/README.md
+++ b/scripts/local_testnet/README.md
@@ -1,6 +1,6 @@
# Simple Local Testnet
-These scripts allow for running a small local testnet with a default of 4 beacon nodes, 4 validator clients and 4 geth execution clients using Kurtosis.
+These scripts allow for running a small local testnet with a default of 4 beacon nodes, 4 validator clients and 4 Geth execution clients using Kurtosis.
This setup can be useful for testing and development.
## Installation
@@ -9,7 +9,7 @@ This setup can be useful for testing and development.
1. Install [Kurtosis](https://docs.kurtosis.com/install/). Verify that Kurtosis has been successfully installed by running `kurtosis version` which should display the version.
-1. Install [yq](https://github.com/mikefarah/yq). If you are on Ubuntu, you can install `yq` by running `snap install yq`.
+1. Install [`yq`](https://github.com/mikefarah/yq). If you are on Ubuntu, you can install `yq` by running `snap install yq`.
## Starting the testnet
@@ -22,7 +22,7 @@ cd ./scripts/local_testnet
It will build a Lighthouse docker image from the root of the directory and will take an approximately 12 minutes to complete. Once built, the testing will be started automatically. You will see a list of services running and "Started!" at the end.
You can also select your own Lighthouse docker image to use by specifying it in `network_params.yml` under the `cl_image` key.
-Full configuration reference for kurtosis is specified [here](https://github.com/ethpandaops/ethereum-package?tab=readme-ov-file#configuration).
+Full configuration reference for Kurtosis is specified [here](https://github.com/ethpandaops/ethereum-package?tab=readme-ov-file#configuration).
To view all running services:
@@ -36,7 +36,7 @@ To view the logs:
kurtosis service logs local-testnet $SERVICE_NAME
```
-where `$SERVICE_NAME` is obtained by inspecting the running services above. For example, to view the logs of the first beacon node, validator client and geth:
+where `$SERVICE_NAME` is obtained by inspecting the running services above. For example, to view the logs of the first beacon node, validator client and Geth:
```bash
kurtosis service logs local-testnet -f cl-1-lighthouse-geth
diff --git a/wordlist.txt b/wordlist.txt
new file mode 100644
index 00000000000..f06c278866d
--- /dev/null
+++ b/wordlist.txt
@@ -0,0 +1,235 @@
+APIs
+ARMv
+AUR
+Backends
+Backfilling
+Beaconcha
+Besu
+Broadwell
+BIP
+BLS
+BN
+BNs
+BTC
+BTEC
+Casper
+CentOS
+Chiado
+CMake
+CoinCashew
+Consensys
+CORS
+CPUs
+DBs
+DES
+DHT
+DNS
+Dockerhub
+DoS
+EIP
+ENR
+Erigon
+Esat's
+ETH
+EthDocker
+Ethereum
+Ethstaker
+Exercism
+Extractable
+FFG
+Geth
+Gitcoin
+Gnosis
+Goerli
+Grafana
+Holesky
+Homebrew
+Infura
+IPs
+IPv
+JSON
+KeyManager
+Kurtosis
+LMDB
+LLVM
+LRU
+LTO
+Mainnet
+MDBX
+Merkle
+MEV
+MSRV
+NAT's
+Nethermind
+NodeJS
+NullLogger
+PathBuf
+PowerShell
+PPA
+Pre
+Proto
+PRs
+Prysm
+QUIC
+RasPi
+README
+RESTful
+Reth
+RHEL
+Ropsten
+RPC
+Ryzen
+Sepolia
+Somer
+SSD
+SSL
+SSZ
+Styleguide
+TCP
+Teku
+TLS
+TODOs
+UDP
+UI
+UPnP
+USD
+UX
+Validator
+VC
+VCs
+VPN
+Withdrawable
+WSL
+YAML
+aarch
+anonymize
+api
+attester
+backend
+backends
+backfill
+backfilling
+beaconcha
+bitfield
+blockchain
+bn
+cli
+clippy
+config
+cpu
+cryptocurrencies
+cryptographic
+danksharding
+datadir
+datadirs
+de
+decrypt
+decrypted
+dest
+dir
+disincentivise
+doppelgänger
+dropdown
+else's
+env
+eth
+ethdo
+ethereum
+ethstaker
+filesystem
+frontend
+gapped
+github
+graffitis
+gwei
+hdiffs
+homebrew
+hostname
+html
+http
+https
+hDiff
+implementers
+interoperable
+io
+iowait
+jemalloc
+json
+jwt
+kb
+keymanager
+keypair
+keypairs
+keystore
+keystores
+linter
+linux
+localhost
+lossy
+macOS
+mainnet
+makefile
+mdBook
+mev
+misconfiguration
+mkcert
+namespace
+natively
+nd
+ness
+nginx
+nitty
+oom
+orging
+orgs
+os
+paul
+pem
+performant
+pid
+pre
+pubkey
+pubkeys
+rc
+reimport
+resync
+roadmap
+runtime
+rustfmt
+rustup
+schemas
+sigmaprime
+sigp
+slashable
+slashings
+spec'd
+src
+stakers
+subnet
+subnets
+systemd
+testnet
+testnets
+th
+toml
+topologies
+tradeoffs
+transactional
+tweakers
+ui
+unadvanced
+unaggregated
+unencrypted
+unfinalized
+untrusted
+uptimes
+url
+validator
+validators
+validator's
+vc
+virt
+webapp
+withdrawable
+yaml
+yml