Fix Typographical Errors in Documentation Files #35723

Closed · wants to merge 4 commits
4 changes: 2 additions & 2 deletions docs/src/clusters/metrics.md
@@ -10,7 +10,7 @@ Each cluster node maintains various counters that are incremented on certain events

## TPS

-Each node's bank runtime maintains a count of transactions that it has processed. The dashboard first calculates the median count of transactions across all metrics enabled nodes in the cluster. The median cluster transaction count is then averaged over a 2 second period and displayed in the TPS time series graph. The dashboard also shows the Mean TPS, Max TPS and Total Transaction Count stats which are all calculated from the median transaction count.
+Each node's bank runtime maintains a count of transactions that it has processed. The dashboard first calculates the median count of transactions across all metrics enabled nodes in the cluster. The median cluster transaction count is then averaged over a 2-second period and displayed in the TPS time series graph. The dashboard also shows the Mean TPS, Max TPS and Total Transaction Count stats which are all calculated from the median transaction count.

## Confirmation Time

@@ -24,4 +24,4 @@ The validator software is deployed to GCP n1-standard-16 instances with 1TB pd-ssd

solana-bench-tps is started after the network converges from a client machine with n1-standard-16 CPU-only instance with the following arguments: `--tx\_count=50000 --thread-batch-sleep 1000`

-TPS and confirmation metrics are captured from the dashboard numbers over a 5 minute average of when the bench-tps transfer stage begins.
+TPS and confirmation metrics are captured from the dashboard numbers over a 5-minute average of when the bench-tps transfer stage begins.
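
For readers skimming the first hunk above: the TPS figure it describes is derived by taking the median transaction count across all metrics-enabled nodes and then averaging that median over the 2-second sampling window. A rough Rust sketch of that aggregation follows; the function names and sample counts are made up for illustration and are not taken from the dashboard code.

```rust
// Illustrative sketch of the median-then-average TPS aggregation described
// in the docs hunk above; function names and sample data are hypothetical.

fn median(mut counts: Vec<u64>) -> u64 {
    counts.sort_unstable();
    counts[counts.len() / 2] // upper median, good enough for a sketch
}

// One TPS value per sampling interval: the median count across nodes divided
// by the interval length in seconds (2 seconds in the dashboard description).
fn cluster_tps(samples: &[Vec<u64>], interval_secs: u64) -> Vec<f64> {
    samples
        .iter()
        .map(|per_node| median(per_node.clone()) as f64 / interval_secs as f64)
        .collect()
}

fn main() {
    // Three 2-second intervals, four metrics-enabled nodes reporting each time.
    let samples = vec![
        vec![4000, 4200, 3900, 4100],
        vec![4300, 4250, 4400, 4100],
        vec![3800, 3900, 4000, 3700],
    ];
    let tps = cluster_tps(&samples, 2);
    let mean_tps = tps.iter().sum::<f64>() / tps.len() as f64;
    let max_tps = tps.iter().cloned().fold(f64::MIN, f64::max);
    println!("TPS per interval: {:?}  mean: {:.1}  max: {:.1}", tps, mean_tps, max_tps);
}
```

Taking the median rather than the mean keeps a single misreporting node from skewing the cluster-wide figure.
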
2 changes: 1 addition & 1 deletion docs/src/consensus/leader-rotation.md
@@ -38,7 +38,7 @@ In this unstable scenario, multiple valid leader schedules exist.
- A leader schedule is generated for every fork whose direct parent is in the previous epoch.
- The leader schedule is valid after the start of the next epoch for descendant forks until it is updated.

-Each partition's schedule will diverge after the partition lasts more than an epoch. For this reason, the epoch duration should be selected to be much much larger then slot time and the expected length for a fork to be committed to root.
+Each partition's schedule will diverge after the partition lasts more than an epoch. For this reason, the epoch duration should be selected to be much larger then slot time and the expected length for a fork to be committed to root.

After observing the cluster for a sufficient amount of time, the leader schedule offset can be selected based on the median partition duration and its standard deviation. For example, an offset longer then the median partition duration plus six standard deviations would reduce the likelihood of an inconsistent ledger schedule in the cluster to 1 in 1 million.
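
The context line above mentions choosing the leader schedule offset as the median observed partition duration plus six standard deviations. A minimal Rust sketch of that calculation, with a hypothetical function name and made-up durations, might look like this:

```rust
// Hypothetical sketch of the "median partition duration plus six standard
// deviations" offset heuristic quoted in the context line above; the
// function name and the sample durations are made up for illustration.

fn suggested_offset(mut durations: Vec<f64>) -> f64 {
    durations.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let median = durations[durations.len() / 2];
    let mean = durations.iter().sum::<f64>() / durations.len() as f64;
    let variance =
        durations.iter().map(|d| (d - mean).powi(2)).sum::<f64>() / durations.len() as f64;
    median + 6.0 * variance.sqrt()
}

fn main() {
    // Observed partition durations, in slots (made-up numbers).
    let observed = vec![8.0, 12.0, 9.0, 15.0, 11.0, 10.0, 13.0];
    println!("suggested leader schedule offset: {:.1} slots", suggested_offset(observed));
}
```
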

2 changes: 1 addition & 1 deletion docs/src/implemented-proposals/rpc-transaction-history.md
@@ -42,7 +42,7 @@ different tables for quick searching.

New data may be copied into the instance at anytime without affecting the
existing data, and all data is immutable. Generally the expectation is that new
-data will be uploaded once an current epoch completes but there is no limitation
+data will be uploaded once a current epoch completes but there is no limitation
on the frequency of data dumps.

Cleanup of old data is automatic by configuring the data retention policy of the