Fix a number of typos (#34385)
* Update vote-accounts.md

* Update what-is-a-validator.md

* Update what-is-a-validator.md

* Update accounts-db-replication.md

* Update blockstore-rocksdb-compaction.md

* Update rip-curl.md

* Update ledger-replication-to-implement.md

* Update optimistic_confirmation.md

* Update return-data.md

* Update handle-duplicate-block.md

* Update timely-vote-credits.md

* Update optimistic-transaction-propagation-signal.md

* Update simple-payment-and-state-verification.md

* Update off-chain-message-signing.md

* Update mod.rs

* Update elgamal.rs

* Update ledger.md

* Update deploy-a-program.md

* Update staking-rewards.md

* Update reliable-vote-transmission.md

* Update repair-service.md

* Update abi-management.md

* Update testing-programs.md

* Update docs/src/implemented-proposals/staking-rewards.md

Co-authored-by: Tyera <[email protected]>

---------

Co-authored-by: Tyera <[email protected]>
pandabadger and CriesofCarrots authored Dec 12, 2023
1 parent 05dae59 commit 549c3e7
Showing 22 changed files with 43 additions and 43 deletions.
2 changes: 1 addition & 1 deletion docs/src/cli/examples/deploy-a-program.md
@@ -279,7 +279,7 @@ $ sha256sum extended.so dump.so

Instead of deploying directly to the program account, the program can be written
to an intermediary buffer account. Intermediary accounts can be useful for
-things like multi-entity governed programs where the governing members fist
+things like multi-entity governed programs where the governing members first
verify the intermediary buffer contents and then vote to allow an upgrade using
it.

2 changes: 1 addition & 1 deletion docs/src/cli/wallets/hardware/ledger.md
@@ -94,7 +94,7 @@ solana balance 7cvkjYAkUYs4W8XcXsca7cBrEGFeSUjeZmKoNBvEwyri

You can also view the balance of any account address on the Accounts tab in the
[Explorer](https://explorer.solana.com/accounts) and paste the address in the
-box to view the balance in you web browser.
+box to view the balance in your web browser.

Note: Any address with a balance of 0 SOL, such as a newly created one on your
Ledger, will show as "Not Found" in the explorer. Empty accounts and
6 changes: 3 additions & 3 deletions docs/src/implemented-proposals/abi-management.md
@@ -130,7 +130,7 @@ name suggests, there is no need to implement `AbiEnumVisitor` for other types.
To summarize this interplay, `serde` handles the recursive serialization control
flow in tandem with `AbiDigester`. The initial entry point in tests and child
`AbiDigester`s use `AbiExample` recursively to create an example object
-hierarchal graph. And `AbiDigester` uses `AbiEnumVisitor` to inquiry the actual
+hierarchical graph. And `AbiDigester` uses `AbiEnumVisitor` to inquiry the actual
ABI information using the constructed sample.

`Default` isn't enough for `AbiExample`. Various collection's `::default()` is
@@ -142,7 +142,7 @@ On the other hand, ABI digesting can't be done only with `AbiExample`, either.
`AbiEnumVisitor` is required because all variants of an `enum` cannot be
traversed just with a single variant of it as a ABI example.

-Digestable information:
+Digestible information:

- rust's type name
- `serde`'s data type name
Expand All @@ -152,7 +152,7 @@ Digestable information:
- `enum`: normal variants and `struct`- and `tuple`- styles.
- attributes: `serde(serialize_with=...)` and `serde(skip)`

-Not digestable information:
+Not digestible information:

- Any custom serialize code path not touched by the sample provided by
`AbiExample`. (technically not possible)
2 changes: 1 addition & 1 deletion docs/src/implemented-proposals/reliable-vote-transmission.md
@@ -8,7 +8,7 @@ Validator votes are messages that have a critical function for consensus and con

1. Leader rotation is triggered by PoH, which is clock with high drift. So many nodes are likely to have an incorrect view if the next leader is active in realtime or not.
2. The next leader may be easily be flooded. Thus a DDOS would not only prevent delivery of regular transactions, but also consensus messages.
-3. UDP is unreliable, and our asynchronous protocol requires any message that is transmitted to be retransmitted until it is observed in the ledger. Retransmittion could potentially cause an unintentional _thundering herd_ against the leader with a large number of validators. Worst case flood would be `(num_nodes * num_retransmits)`.
+3. UDP is unreliable, and our asynchronous protocol requires any message that is transmitted to be retransmitted until it is observed in the ledger. Retransmission could potentially cause an unintentional _thundering herd_ against the leader with a large number of validators. Worst case flood would be `(num_nodes * num_retransmits)`.
4. Tracking if the vote has been transmitted or not via the ledger does not guarantee it will appear in a confirmed block. The current observed block may be unrolled. Validators would need to maintain state for each vote and fork.

## Design
2 changes: 1 addition & 1 deletion docs/src/implemented-proposals/repair-service.md
@@ -54,7 +54,7 @@ The different protocol strategies to address the above challenges:
Blockstore tracks the latest root slot. RepairService will then periodically
iterate every fork in blockstore starting from the root slot, sending repair
requests to validators for any missing shreds. It will send at most some `N`
-repair reqeusts per iteration. Shred repair should prioritize repairing
+repair requests per iteration. Shred repair should prioritize repairing
forks based on the leader's fork weight. Validators should only send repair
requests to validators who have marked that slot as completed in their
EpochSlots. Validators should prioritize repairing shreds in each slot
2 changes: 1 addition & 1 deletion docs/src/implemented-proposals/staking-rewards.md
@@ -30,4 +30,4 @@ Solana's trustless sense of time and ordering provided by its PoH data structure

As discussed in the [Economic Design](ed_overview/ed_overview.md) section, annual validator interest rates are to be specified as a function of total percentage of circulating supply that has been staked. The cluster rewards validators who are online and actively participating in the validation process throughout the entirety of their _validation period_. For validators that go offline/fail to validate transactions during this period, their annual reward is effectively reduced.

-Similarly, we may consider an algorithmic reduction in a validator's active amount staked amount in the case that they are offline. I.e. if a validator is inactive for some amount of time, either due to a partition or otherwise, the amount of their stake that is considered ‘active’ \(eligible to earn rewards\) may be reduced. This design would be structured to help long-lived partitions to eventually reach finality on their respective chains as the % of non-voting total stake is reduced over time until a supermajority can be achieved by the active validators in each partition. Similarly, upon re-engaging, the ‘active’ amount staked will come back online at some defined rate. Different rates of stake reduction may be considered depending on the size of the partition/active set.
+Similarly, we may consider an algorithmic reduction in a validator's active staked amount in the case that they are offline. I.e. if a validator is inactive for some amount of time, either due to a partition or otherwise, the amount of their stake that is considered ‘active’ \(eligible to earn rewards\) may be reduced. This design would be structured to help long-lived partitions to eventually reach finality on their respective chains as the % of non-voting total stake is reduced over time until a supermajority can be achieved by the active validators in each partition. Similarly, upon re-engaging, the ‘active’ amount staked will come back online at some defined rate. Different rates of stake reduction may be considered depending on the size of the partition/active set.
2 changes: 1 addition & 1 deletion docs/src/implemented-proposals/testing-programs.md
@@ -32,7 +32,7 @@ trait SyncClient {
}
```

-Users send transactions and asynchrounously and synchrounously await results.
+Users send transactions and asynchronously and synchronously await results.

### ThinClient for Clusters

2 changes: 1 addition & 1 deletion docs/src/operations/guides/vote-accounts.md
@@ -165,7 +165,7 @@ Rotating the vote account authority keys requires special handling when dealing
with a live validator.

Note that vote account key rotation has no effect on the stake accounts that
-have been delegate to the vote account. For example it is possible to use key
+have been delegated to the vote account. For example it is possible to use key
rotation to transfer all authority of a vote account from one entity to another
without any impact to staking rewards.

2 changes: 1 addition & 1 deletion docs/src/proposals/accounts-db-replication.md
@@ -88,7 +88,7 @@ During replication we also need to replicate the information of accounts that ha
up due to zero lamports, i.e. we need to be able to tell the difference between an account in a
given slot which was not updated and hence has no storage entry in that slot, and one that
holds 0 lamports and has been cleaned up through the history. We may record this via some
"Tombstone" mechanism -- recording the dead accounts cleaned up fora slot. The tombstones
"Tombstone" mechanism -- recording the dead accounts cleaned up for a slot. The tombstones
themselves can be removed after exceeding the retention period expressed as epochs. Any
attempt to replicate slots with tombstones removed will fail and the replica should skip
this slot and try later ones.
4 changes: 2 additions & 2 deletions docs/src/proposals/blockstore-rocksdb-compaction.md
@@ -109,7 +109,7 @@ close to 1 read amplification. As each key is only inserted once, we have
space amplification 1.

### Use Current Settings for Metadata Column Families
-The second type of the column families related to shred insertion is medadata
+The second type of the column families related to shred insertion is metadata
column families. These metadata column families contributes ~1% of the shred
insertion data in size. The largest metadata column family here is the Index
column family, which occupies 0.8% of the shred insertion data.
@@ -160,7 +160,7 @@ in Solana's BlockStore use case:
Here we discuss Level to FIFO and FIFO to Level migrations:

### Level to FIFO
-heoretically, FIFO compaction is the superset of all other compaction styles,
+Theoretically, FIFO compaction is the superset of all other compaction styles,
as it does not have any assumption of the LSM tree structure. However, the
current RocksDB implementation does not offer such flexibility while it is
theoretically doable.
6 changes: 3 additions & 3 deletions docs/src/proposals/handle-duplicate-block.md
@@ -25,7 +25,7 @@ Before a duplicate slot `S` is `duplicate_confirmed`, it's first excluded from t
Some notes about the `DUPLICATE_THRESHOLD`. In the cases below, assume `DUPLICATE_THRESHOLD = 52`:

a) If less than `2 * DUPLICATE_THRESHOLD - 1` percentage of the network is malicious, then there can only be one such `duplicate_confirmed` version of the slot. With `DUPLICATE_THRESHOLD = 52`, this is
-a malcious tolerance of `4%`
+a malicious tolerance of `4%`

b) The liveness of the network is at most `1 - DUPLICATE_THRESHOLD - SWITCH_THRESHOLD`. This is because if you need at least `SWITCH_THRESHOLD` percentage of the stake voting on a different fork in order to switch off of a duplicate fork that has `< DUPLICATE_THRESHOLD` stake voting on it, and is *not* `duplicate_confirmed`. For `DUPLICATE_THRESHOLD = 52` and `DUPLICATE_THRESHOLD = 38`, this implies a liveness tolerance of `10%`.

@@ -38,7 +38,7 @@ For example in the situation below, validators that voted on `2` can't vote any
```

-3. Switching proofs need to be extended to allow including vote hashes from different versions of the same same slot (detected through 1). Right now this is not supported since switching proofs can
+3. Switching proofs need to be extended to allow including vote hashes from different versions of the same slot (detected through 1). Right now this is not supported since switching proofs can
only be built using votes from banks in BankForks, and two different versions of the same slot cannot
simultaneously exist in BankForks. For instance:

@@ -73,7 +73,7 @@ This problem we need to solve is modeled simply by the below scenario:
```
Assume the following:

-1. Due to gossiping duplciate proofs, we assume everyone will eventually see duplicate proofs for 2 and 4, so everyone agrees to remove them from fork choice until they are `duplicate_confirmed`.
+1. Due to gossiping duplicate proofs, we assume everyone will eventually see duplicate proofs for 2 and 4, so everyone agrees to remove them from fork choice until they are `duplicate_confirmed`.

2. Due to lockouts, `> DUPLICATE_THRESHOLD` of the stake votes on 4, but not 2. This means at least `DUPLICATE_THRESHOLD` of people have the "correct" version of both slots 2 and 4.

2 changes: 1 addition & 1 deletion docs/src/proposals/ledger-replication-to-implement.md
@@ -219,7 +219,7 @@ For each turn of the PoRep game, both Validators and Archivers evaluate each sta

For any random seed, we force everyone to use a signature that is derived from a PoH hash at the turn boundary. Everyone uses the same count, so the same PoH hash is signed by every participant. The signatures are then each cryptographically tied to the keypair, which prevents a leader from grinding on the resulting value for more than 1 identity.

-Since there are many more client identities then encryption identities, we need to split the reward for multiple clients, and prevent Sybil attacks from generating many clients to acquire the same block of data. To remain BFT we want to avoid a single human entity from storing all the replications of a single chunk of the ledger.
+Since there are many more client identities than encryption identities, we need to split the reward for multiple clients, and prevent Sybil attacks from generating many clients to acquire the same block of data. To remain BFT we want to avoid a single human entity from storing all the replications of a single chunk of the ledger.

Our solution to this is to force the clients to continue using the same identity. If the first round is used to acquire the same block for many client identities, the second round for the same client identities will force a redistribution of the signatures, and therefore PoRep identities and blocks. Thus to get a reward for archivers need to store the first block for free and the network can reward long lived client identities more than new ones.

2 changes: 1 addition & 1 deletion docs/src/proposals/off-chain-message-signing.md
@@ -64,7 +64,7 @@ This may be any arbitrary bytes. For instance the on-chain address of a program,
DAO instance, Candy Machine, etc.

This field **SHOULD** be displayed to users as a base58-encoded ASCII string rather
-than interpretted otherwise.
+than interpreted otherwise.

#### Message Format

8 changes: 4 additions & 4 deletions docs/src/proposals/optimistic-transaction-propagation-signal.md
@@ -13,7 +13,7 @@ concatenating (1), (2), and (3)
deduplicating this list of entries by pubkey favoring entries with contact info
filtering this list by entries with contact info

-This list is then is randomly shuffled by stake weight.
+This list is then randomly shuffled by stake weight.

Shreds are then retransmitted to up to FANOUT neighbors and up to FANOUT
children.
@@ -37,7 +37,7 @@ First, only epoch staked nodes will be considered regardless of presence of
contact info (and possibly including the validator node itself).

A deterministic ordering of the epoch staked nodes will be created based on the
-derministic shred seed using weighted_shuffle.
+deterministic shred seed using weighted_shuffle.

Let `neighbor_set` be selected from up to FANOUT neighbors of the current node.
Let `child_set` be selected from up to FANOUT children of the current node.
@@ -73,7 +73,7 @@ distribution levels.
distribution levels because of lack of contact info.
- Current node was part of original epoch staked shuffle from retransmitter
but was filtered out because of missing contact info. Current node subsequently
-receives retransmisison of shred and assumes that the retransmit was a result
+receives retransmission of shred and assumes that the retransmit was a result
of the deterministic tree calculation and not from subsequent random selection.
This should be benign because the current node will underestimate prior stake
weight in the retransmission tree.
@@ -105,5 +105,5 @@ Practically, signals should fall into the following buckets:
1.2. can signal layer 1 + subset of layer 2 when retransmit is sent
3. layer 2
3.1. can signal layer 2 when shred is received
-3.2. can signal layer 2 + subset of layer 3 when retrnasmit is sent
+3.2. can signal layer 2 + subset of layer 3 when retransmit is sent
4. current node not a member of epoch staked nodes, no signal can be sent
8 changes: 4 additions & 4 deletions docs/src/proposals/optimistic_confirmation.md
@@ -86,7 +86,7 @@ the votes must satisfy:

- `X <= S.last`, `X' <= S'.last`
- All `s` in `S` are ancestors/descendants of one another,
-all `s'` in `S'` are ancsestors/descendants of one another,
+all `s'` in `S'` are ancestors/descendants of one another,
-
- `X == X'` implies `S` is parent of `S'` or `S'` is a parent of `S`
- `X' > X` implies `X' > S.last` and `S'.last > S.last`
@@ -312,7 +312,7 @@ true that `B' > X`
```

`Proof`: Let `Vote(X, S)` be a vote in the `Optimistic Votes` set. Then by
definition, given the "optimistcally confirmed" block `B`, `X <= B <= S.last`.
definition, given the "optimistically confirmed" block `B`, `X <= B <= S.last`.

Because `X` is a parent of `B`, and `B'` is not a parent or ancestor of `B`,
then:
@@ -322,7 +322,7 @@

Now consider if `B'` < `X`:

-`Case B' < X`: We wll show this is a violation of lockouts.
+`Case B' < X`: We will show this is a violation of lockouts.
From above, we know `B'` is not a parent of `X`. Then because `B'` was rooted,
and `B'` is not a parent of `X`, then the validator should not have been able
to vote on the higher slot `X` that does not descend from `B'`.
@@ -361,7 +361,7 @@ By `Lemma 2` we know `B' > X`, and from above `S_v.last > B'`, so then
From above, `S.last >= B >= X` so for all such "switching votes", `X_v > B`.

Now ordering all these "switching votes" in time, let `V` to be the validator
-in `Optimistic Validators` that first submitted such a "swtching vote"
+in `Optimistic Validators` that first submitted such a "switching vote"
`Vote(X', S')`, where `X' > B`. We know that such a validator exists because
we know from above that all delinquent validators must have submitted such
a vote, and the delinquent validators are a subset of the
2 changes: 1 addition & 1 deletion docs/src/proposals/return-data.md
@@ -136,7 +136,7 @@ strings in the [stable log](https://github.com/solana-labs/solana/blob/952928419

Solidity on Ethereum allows the contract to return an error in the return data. In this case, all
the account data changes for the account should be reverted. On Solana, any non-zero exit code
-for a SBF prorgram means the entire transaction fails. We do not wish to support an error return
+for a SBF program means the entire transaction fails. We do not wish to support an error return
by returning success and then returning an error in the return data. This would mean we would have
to support reverting the account data changes; this too expensive both on the VM side and the SBF
contract side.
2 changes: 1 addition & 1 deletion docs/src/proposals/rip-curl.md
@@ -39,7 +39,7 @@ Easier for validators to support:
has no significant resource constraints.
- Transaction status is never stored in memory and cannot be polled for.
- Signatures are only stored in memory until the desired commitment level or
-until the blockhash expires, which ever is later.
+until the blockhash expires, whichever is later.

How it works:
