diff --git a/docs/clusters.md b/docs/clusters.md
new file mode 100644
index 000000000..df0f774ff
--- /dev/null
+++ b/docs/clusters.md
@@ -0,0 +1,176 @@
+---
+title: Solana Clusters
+---
+
+Solana maintains several different clusters with different purposes.
+
+Before you begin, make sure you have first
+[installed the Solana command line tools](cli/install-solana-cli-tools.md).
+
+Explorers:
+
+- [https://explorer.solana.com/](https://explorer.solana.com/).
+- [http://solanabeach.io/](http://solanabeach.io/).
+
+## Devnet
+
+- Devnet serves as a playground for anyone who wants to take Solana for a test
+ drive, as a user, token holder, app developer, or validator.
+- Application developers should target Devnet.
+- Potential validators should first target Devnet.
+- Key differences between Devnet and Mainnet Beta:
+ - Devnet tokens are **not real**
+ - Devnet includes a token faucet for airdrops for application testing
+ - Devnet may be subject to ledger resets
+ - Devnet typically runs the same software release branch version as Mainnet
+ Beta, but may run a newer minor release version than Mainnet Beta.
+- Gossip entrypoint for Devnet: `entrypoint.devnet.solana.com:8001`
+- Metrics environment variable for Devnet:
+
+```bash
+export SOLANA_METRICS_CONFIG="host=https://metrics.solana.com:8086,db=devnet,u=scratch_writer,p=topsecret"
+```
+
+- RPC URL for Devnet: `https://api.devnet.solana.com`
+
+##### Example `solana` command-line configuration
+
+```bash
+solana config set --url https://api.devnet.solana.com
+```
+
+##### Example `solana-validator` command-line
+
+```bash
+$ solana-validator \
+ --identity validator-keypair.json \
+ --vote-account vote-account-keypair.json \
+ --known-validator dv1ZAGvdsz5hHLwWXsVnM94hWf1pjbKVau1QVkaMJ92 \
+ --known-validator dv2eQHeP4RFrJZ6UeiZWoc3XTtmtZCUKxxCApCDcRNV \
+ --known-validator dv4ACNkpYPcE3aKmYDqZm9G5EB3J4MRoeE7WNDRBVJB \
+ --known-validator dv3qDFk1DTF36Z62bNvrCXe9sKATA6xvVy6A798xxAS \
+ --only-known-rpc \
+ --ledger ledger \
+ --rpc-port 8899 \
+ --dynamic-port-range 8000-8020 \
+ --entrypoint entrypoint.devnet.solana.com:8001 \
+ --entrypoint entrypoint2.devnet.solana.com:8001 \
+ --entrypoint entrypoint3.devnet.solana.com:8001 \
+ --entrypoint entrypoint4.devnet.solana.com:8001 \
+ --entrypoint entrypoint5.devnet.solana.com:8001 \
+ --expected-genesis-hash EtWTRABZaYq6iMfeYKouRu166VU2xqa1wcaWoxPkrZBG \
+ --wal-recovery-mode skip_any_corrupted_record \
+ --limit-ledger-size
+```
+
+The
+[`--known-validator`s](running-validator/validator-start.md#known-validators)
+are operated by Solana Labs.
+
+## Testnet
+
+- Testnet is where the Solana core contributors stress test recent release
+ features on a live cluster, particularly focused on network performance,
+ stability and validator behavior.
+- Testnet tokens are **not real**
+- Testnet may be subject to ledger resets.
+- Testnet includes a token faucet for airdrops for application testing
+- Testnet typically runs a newer software release branch than both Devnet and
+ Mainnet Beta
+- Gossip entrypoint for Testnet: `entrypoint.testnet.solana.com:8001`
+- Metrics environment variable for Testnet:
+
+```bash
+export SOLANA_METRICS_CONFIG="host=https://metrics.solana.com:8086,db=tds,u=testnet_write,p=c4fa841aa918bf8274e3e2a44d77568d9861b3ea"
+```
+
+- RPC URL for Testnet: `https://api.testnet.solana.com`
+
+##### Example `solana` command-line configuration
+
+```bash
+solana config set --url https://api.testnet.solana.com
+```
+
+##### Example `solana-validator` command-line
+
+```bash
+$ solana-validator \
+ --identity validator-keypair.json \
+ --vote-account vote-account-keypair.json \
+ --known-validator 5D1fNXzvv5NjV1ysLjirC4WY92RNsVH18vjmcszZd8on \
+ --known-validator dDzy5SR3AXdYWVqbDEkVFdvSPCtS9ihF5kJkHCtXoFs \
+ --known-validator Ft5fbkqNa76vnsjYNwjDZUXoTWpP7VYm3mtsaQckQADN \
+ --known-validator eoKpUABi59aT4rR9HGS3LcMecfut9x7zJyodWWP43YQ \
+ --known-validator 9QxCLckBiJc783jnMvXZubK4wH86Eqqvashtrwvcsgkv \
+ --only-known-rpc \
+ --ledger ledger \
+ --rpc-port 8899 \
+ --dynamic-port-range 8000-8020 \
+ --entrypoint entrypoint.testnet.solana.com:8001 \
+ --entrypoint entrypoint2.testnet.solana.com:8001 \
+ --entrypoint entrypoint3.testnet.solana.com:8001 \
+ --expected-genesis-hash 4uhcVJyU9pJkvQyS88uRDiswHXSCkY3zQawwpjk2NsNY \
+ --wal-recovery-mode skip_any_corrupted_record \
+ --limit-ledger-size
+```
+
+The identities of the
+[`--known-validator`s](running-validator/validator-start.md#known-validators)
+are:
+
+- `5D1fNXzvv5NjV1ysLjirC4WY92RNsVH18vjmcszZd8on` - Solana Labs
+- `dDzy5SR3AXdYWVqbDEkVFdvSPCtS9ihF5kJkHCtXoFs` - MonkeDAO
+- `Ft5fbkqNa76vnsjYNwjDZUXoTWpP7VYm3mtsaQckQADN` - Certus One
+- `eoKpUABi59aT4rR9HGS3LcMecfut9x7zJyodWWP43YQ` - SerGo
+- `9QxCLckBiJc783jnMvXZubK4wH86Eqqvashtrwvcsgkv` - Algo|Stake
+
+## Mainnet Beta
+
+A permissionless, persistent cluster for Solana users, builders, validators and
+token holders.
+
+- Tokens that are issued on Mainnet Beta are **real** SOL
+- Gossip entrypoint for Mainnet Beta: `entrypoint.mainnet-beta.solana.com:8001`
+- Metrics environment variable for Mainnet Beta:
+
+```bash
+export SOLANA_METRICS_CONFIG="host=https://metrics.solana.com:8086,db=mainnet-beta,u=mainnet-beta_write,p=password"
+```
+
+- RPC URL for Mainnet Beta: `https://api.mainnet-beta.solana.com`
+
+##### Example `solana` command-line configuration
+
+```bash
+solana config set --url https://api.mainnet-beta.solana.com
+```
+
+##### Example `solana-validator` command-line
+
+```bash
+$ solana-validator \
+ --identity ~/validator-keypair.json \
+ --vote-account ~/vote-account-keypair.json \
+ --known-validator 7Np41oeYqPefeNQEHSv1UDhYrehxin3NStELsSKCT4K2 \
+ --known-validator GdnSyH3YtwcxFvQrVVJMm1JhTS4QVX7MFsX56uJLUfiZ \
+ --known-validator DE1bawNcRJB9rVm3buyMVfr8mBEoyyu73NBovf2oXJsJ \
+ --known-validator CakcnaRDHka2gXyfbEd2d3xsvkJkqsLw2akB3zsN1D2S \
+ --only-known-rpc \
+ --ledger ledger \
+ --rpc-port 8899 \
+ --private-rpc \
+ --dynamic-port-range 8000-8020 \
+ --entrypoint entrypoint.mainnet-beta.solana.com:8001 \
+ --entrypoint entrypoint2.mainnet-beta.solana.com:8001 \
+ --entrypoint entrypoint3.mainnet-beta.solana.com:8001 \
+ --entrypoint entrypoint4.mainnet-beta.solana.com:8001 \
+ --entrypoint entrypoint5.mainnet-beta.solana.com:8001 \
+ --expected-genesis-hash 5eykt4UsFv8P8NJdTREpY1vzqKqZKvdpKuc147dw2N9d \
+ --wal-recovery-mode skip_any_corrupted_record \
+ --limit-ledger-size
+```
+
+All four
+[`--known-validator`s](running-validator/validator-start.md#known-validators)
+are operated by Solana Labs.
diff --git a/docs/clusters/rpc-endpoints.md b/docs/clusters/rpc-endpoints.md
new file mode 100644
index 000000000..a2bd93d9d
--- /dev/null
+++ b/docs/clusters/rpc-endpoints.md
@@ -0,0 +1,65 @@
+---
+title: Solana Cluster RPC Endpoints
+---
+
+Solana maintains dedicated API nodes to fulfill [JSON-RPC](/api) requests for
+each public cluster, and third parties may as well. Here are the public RPC
+endpoints currently available and recommended for each public cluster:
+
+## Devnet
+
+#### Endpoint
+
+- `https://api.devnet.solana.com` - single Solana-hosted API node; rate-limited
+
+#### Rate Limits
+
+- Maximum number of requests per 10 seconds per IP: 100
+- Maximum number of requests per 10 seconds per IP for a single RPC: 40
+- Maximum concurrent connections per IP: 40
+- Maximum connection rate per 10 seconds per IP: 40
+- Maximum amount of data per 30 seconds: 100 MB
+
+## Testnet
+
+#### Endpoint
+
+- `https://api.testnet.solana.com` - single Solana-hosted API node; rate-limited
+
+#### Rate Limits
+
+- Maximum number of requests per 10 seconds per IP: 100
+- Maximum number of requests per 10 seconds per IP for a single RPC: 40
+- Maximum concurrent connections per IP: 40
+- Maximum connection rate per 10 seconds per IP: 40
+- Maximum amount of data per 30 seconds: 100 MB
+
+## Mainnet Beta
+
+#### Endpoints\*
+
+- `https://api.mainnet-beta.solana.com` - Solana-hosted API node cluster, backed
+ by a load balancer; rate-limited
+
+#### Rate Limits
+
+- Maximum number of requests per 10 seconds per IP: 100
+- Maximum number of requests per 10 seconds per IP for a single RPC: 40
+- Maximum concurrent connections per IP: 40
+- Maximum connection rate per 10 seconds per IP: 40
+- Maximum amount of data per 30 seconds: 100 MB
+
+\*The public RPC endpoints are not intended for production applications. Please
+use dedicated/private RPC servers when you launch your application, drop NFTs,
+etc. The public services are subject to abuse and rate limits may change without
+prior notice. Likewise, high-traffic websites may be blocked without prior
+notice.
+
+## Common HTTP Error Codes
+
+- 403 -- Your IP address or website has been blocked. It is time to run your own
+ RPC server(s) or find a private service.
+- 429 -- Your IP address is exceeding the rate limits. Slow down! Use the
+ [Retry-After](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After)
+ HTTP response header to determine how long to wait before making another
+ request.
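+
+As a brief, hedged sketch of respecting that header, the snippet below retries
+a JSON-RPC call against the public Devnet endpoint when it receives a 429
+response; the `getHealth` request and retry count are only illustrative:
+
+```javascript
+// Minimal sketch: retry a JSON-RPC request, honoring the Retry-After header
+// returned alongside HTTP 429 (rate limited) responses
+const RPC_URL = "https://api.devnet.solana.com";
+
+async function rpcRequestWithBackoff(body, maxRetries = 3) {
+  for (let attempt = 0; attempt <= maxRetries; attempt++) {
+    const response = await fetch(RPC_URL, {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify(body),
+    });
+
+    if (response.status !== 429) {
+      return response.json();
+    }
+
+    // Wait the number of seconds the server asks for (fall back to 10s)
+    const retryAfter = Number(response.headers.get("Retry-After")) || 10;
+    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
+  }
+  throw new Error("Rate limited: retries exhausted");
+}
+
+rpcRequestWithBackoff({ jsonrpc: "2.0", id: 1, method: "getHealth" }).then(
+  console.log,
+);
+```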
diff --git a/docs/developing/clients/javascript-api.md b/docs/developing/clients/javascript-api.md
new file mode 100644
index 000000000..59e6d1a5e
--- /dev/null
+++ b/docs/developing/clients/javascript-api.md
@@ -0,0 +1,403 @@
+---
+title: Web3 JavaScript API
+---
+
+## What is Solana-Web3.js?
+
+The Solana-Web3.js library aims to provide complete coverage of Solana. The
+library was built on top of the [Solana JSON RPC API](/api).
+
+You can find the full documentation for the `@solana/web3.js` library
+[here](https://solana-labs.github.io/solana-web3.js/).
+
+## Common Terminology
+
+| Term | Definition |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Program | Stateless executable code written to interpret instructions. Programs are capable of performing actions based on the instructions provided. |
+| Instruction | The smallest unit of a program that a client can include in a transaction. Within its processing code, an instruction may contain one or more cross-program invocations. |
+| Transaction | One or more instructions signed by the client using one or more Keypairs and executed atomically with only two possible outcomes: success or failure. |
+
+For the full list of terms, see
+[Solana terminology](../../terminology#cross-program-invocation)
+
+## Getting Started
+
+### Installation
+
+#### yarn
+
+```bash
+$ yarn add @solana/web3.js
+```
+
+#### npm
+
+```bash
+$ npm install --save @solana/web3.js
+```
+
+#### Bundle
+
+```html
+<!-- Load the browser bundle from a CDN (unpkg shown here; verify the path for the version you use) -->
+<script src="https://unpkg.com/@solana/web3.js@latest/lib/index.iife.min.js"></script>
+```
+
+### Usage
+
+#### JavaScript
+
+```javascript
+const solanaWeb3 = require("@solana/web3.js");
+console.log(solanaWeb3);
+```
+
+#### ES6
+
+```javascript
+import * as solanaWeb3 from "@solana/web3.js";
+console.log(solanaWeb3);
+```
+
+#### Browser Bundle
+
+```javascript
+// solanaWeb3 is provided in the global namespace by the bundle script
+console.log(solanaWeb3);
+```
+
+## Quickstart
+
+### Connecting to a Wallet
+
+To allow users to use your dApp or application on Solana, they will need to get
+access to their Keypair. A Keypair is a private key with a matching public key,
+used to sign transactions.
+
+There are two ways to obtain a Keypair:
+
+1. Generate a new Keypair
+2. Obtain a Keypair using the secret key
+
+You can obtain a new Keypair with the following:
+
+```javascript
+const { Keypair } = require("@solana/web3.js");
+
+let keypair = Keypair.generate();
+```
+
+This will generate a brand new Keypair for a user to fund and use within your
+application.
+
+You can allow entry of the secretKey using a textbox, and obtain the Keypair
+with `Keypair.fromSecretKey(secretKey)`.
+
+```javascript
+const { Keypair } = require("@solana/web3.js");
+
+let secretKey = Uint8Array.from([
+ 202, 171, 192, 129, 150, 189, 204, 241, 142, 71, 205, 2, 81, 97, 2, 176, 48,
+ 81, 45, 1, 96, 138, 220, 132, 231, 131, 120, 77, 66, 40, 97, 172, 91, 245, 84,
+ 221, 157, 190, 9, 145, 176, 130, 25, 43, 72, 107, 190, 229, 75, 88, 191, 136,
+ 7, 167, 109, 91, 170, 164, 186, 15, 142, 36, 12, 23,
+]);
+
+let keypair = Keypair.fromSecretKey(secretKey);
+```
+
+Many wallets today allow users to bring their Keypair using a variety of
+extensions or web wallets. The general recommendation is to use wallets, not
+Keypairs, to sign transactions. The wallet creates a layer of separation between
+the dApp and the Keypair, ensuring that the dApp never has access to the secret
+key. You can find ways to connect to external wallets with the
+[wallet-adapter](https://github.com/solana-labs/wallet-adapter) library.
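+
+As a rough sketch of that wallet-based flow (assuming a React app already
+wrapped in the wallet-adapter `ConnectionProvider` and `WalletProvider`), the
+`useWallet()` hook exposes the connected wallet's public key and a
+`sendTransaction` helper, so the secret key never touches your code; the
+`useTransferOneSol` helper name below is just for illustration:
+
+```javascript
+import { useConnection, useWallet } from "@solana/wallet-adapter-react";
+import { SystemProgram, Transaction, LAMPORTS_PER_SOL } from "@solana/web3.js";
+
+// Call inside a React component rendered under the wallet-adapter providers
+function useTransferOneSol(recipientPubkey) {
+  const { connection } = useConnection();
+  const { publicKey, sendTransaction } = useWallet();
+
+  return async () => {
+    // The wallet signs the transaction; the dApp never sees the secret key
+    const transaction = new Transaction().add(
+      SystemProgram.transfer({
+        fromPubkey: publicKey,
+        toPubkey: recipientPubkey,
+        lamports: LAMPORTS_PER_SOL,
+      }),
+    );
+    const signature = await sendTransaction(transaction, connection);
+    await connection.confirmTransaction(signature, "processed");
+    return signature;
+  };
+}
+```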
+
+### Creating and Sending Transactions
+
+To interact with programs on Solana, you create, sign, and send transactions to
+the network. Transactions are collections of instructions with signatures. The
+order in which instructions appear in a transaction determines the order in
+which they are executed.
+
+A transaction in Solana-Web3.js is created using the
+[`Transaction`](javascript-api.md#Transaction) object and adding desired
+messages, addresses, or instructions.
+
+Take the example of a transfer transaction:
+
+```javascript
+const {
+ Keypair,
+ Transaction,
+ SystemProgram,
+ LAMPORTS_PER_SOL,
+} = require("@solana/web3.js");
+
+let fromKeypair = Keypair.generate();
+let toKeypair = Keypair.generate();
+let transaction = new Transaction();
+
+transaction.add(
+ SystemProgram.transfer({
+ fromPubkey: fromKeypair.publicKey,
+ toPubkey: toKeypair.publicKey,
+ lamports: LAMPORTS_PER_SOL,
+ }),
+);
+```
+
+The above code creates a transaction ready to be signed and broadcast to the
+network. The `SystemProgram.transfer` instruction was added to the transaction,
+containing the amount of lamports to send, and the `to` and `from` public keys.
+
+All that is left is to sign the transaction with the sending keypair and submit
+it to the network. You can accomplish sending a transaction by using
+`sendAndConfirmTransaction` if you wish to alert the user or do something after
+a transaction is finished, or use `sendTransaction` if you don't need to wait
+for the transaction to be confirmed.
+
+```javascript
+const {
+  sendAndConfirmTransaction,
+  clusterApiUrl,
+  Connection,
+} = require("@solana/web3.js");
+
+let connection = new Connection(clusterApiUrl("testnet"));
+
+// Sign with the keypair that owns the `fromPubkey` account used in the
+// transfer instruction above
+sendAndConfirmTransaction(connection, transaction, [fromKeypair]);
+```
+
+The above code takes in a `TransactionInstruction` using `SystemProgram`,
+creates a `Transaction`, and sends it over the network. You use `Connection` in
+order to define which Solana network you are connecting to, namely
+`mainnet-beta`, `testnet`, or `devnet`.
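+
+As a small illustration, `clusterApiUrl` simply resolves a cluster name to its
+public RPC endpoint, and you can also pass any RPC URL directly to
+`Connection`; the dedicated-provider URL below is a hypothetical placeholder:
+
+```javascript
+const { clusterApiUrl, Connection } = require("@solana/web3.js");
+
+// Resolve the public RPC endpoint for a named cluster
+console.log(clusterApiUrl("devnet")); // https://api.devnet.solana.com
+console.log(clusterApiUrl("mainnet-beta")); // https://api.mainnet-beta.solana.com
+
+// Or point Connection at any RPC endpoint directly, such as a dedicated
+// provider (placeholder URL shown here)
+let connection = new Connection("https://my-private-rpc.example.com", "confirmed");
+```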
+
+### Interacting with Custom Programs
+
+The previous section covered sending basic transactions. In Solana, everything
+you do interacts with different programs, including the previous section's
+transfer transaction. At the time of writing, programs on Solana are written in
+either Rust or C.
+
+Let's look at the `SystemProgram`. The method signature for allocating space in
+your account on Solana in Rust looks like this:
+
+```rust
+pub fn allocate(
+ pubkey: &Pubkey,
+ space: u64
+) -> Instruction
+```
+
+In Solana, when you want to interact with a program you must first know all the
+accounts you will be interacting with.
+
+You must always provide every account that the program will interact with in
+the instruction. Not only that, but you must declare whether each account is
+`isSigner` or `isWritable`.
+
+In the `allocate` method above, a single account `pubkey` is required, as well
+as an amount of `space` for allocation. We know that the `allocate` method
+writes to the account by allocating space within it, making the `pubkey`
+required to be `isWritable`. `isSigner` is required when you are designating the
+account that is running the instruction. In this case, the signer is the account
+calling to allocate space within itself.
+
+Let's look at how to call this instruction using solana-web3.js:
+
+```javascript
+let keypair = web3.Keypair.generate();
+let payer = web3.Keypair.generate();
+let connection = new web3.Connection(web3.clusterApiUrl("testnet"));
+
+let airdropSignature = await connection.requestAirdrop(
+ payer.publicKey,
+ web3.LAMPORTS_PER_SOL,
+);
+
+await connection.confirmTransaction({ signature: airdropSignature });
+```
+
+First, we set up the account Keypair and connection so that we have an account
+to call allocate on the testnet. We also create a payer Keypair and airdrop
+some SOL so we can pay for the allocate transaction.
+
+```javascript
+let allocateTransaction = new web3.Transaction({
+ feePayer: payer.publicKey,
+});
+let keys = [{ pubkey: keypair.publicKey, isSigner: true, isWritable: true }];
+let params = { space: 100 };
+```
+
+We create the transaction `allocateTransaction`, keys, and params objects.
+`feePayer` is an optional field when creating a transaction that specifies who
+is paying for the transaction, defaulting to the pubkey of the first signer in
+the transaction. `keys` represents all accounts that the program's `allocate`
+function will interact with. Since the `allocate` function also requires space,
+we create `params` to be used later when invoking the `allocate` function.
+
+```javascript
+let allocateStruct = {
+ index: 8,
+ layout: struct([u32("instruction"), ns64("space")]),
+};
+```
+
+The above is created using `u32` and `ns64` from `@solana/buffer-layout` to
+facilitate the payload creation. The `allocate` function takes in the parameter
+`space`. To interact with the function, we must provide the data as a Buffer.
+The `buffer-layout` library helps with allocating the buffer and
+encoding it correctly for Rust programs on Solana to interpret.
+
+Let's break down this struct.
+
+```javascript
+{
+ index: 8, /* <-- */
+ layout: struct([
+ u32('instruction'),
+ ns64('space'),
+ ])
+}
+```
+
+`index` is set to 8 because the function `allocate` is at index 8 (zero-based)
+in the instruction enum for `SystemProgram`.
+
+```rust
+/* https://github.com/solana-labs/solana/blob/21bc43ed58c63c827ba4db30426965ef3e807180/sdk/program/src/system_instruction.rs#L142-L305 */
+pub enum SystemInstruction {
+ /** 0 **/CreateAccount {/**/},
+ /** 1 **/Assign {/**/},
+ /** 2 **/Transfer {/**/},
+ /** 3 **/CreateAccountWithSeed {/**/},
+ /** 4 **/AdvanceNonceAccount,
+ /** 5 **/WithdrawNonceAccount(u64),
+ /** 6 **/InitializeNonceAccount(Pubkey),
+ /** 7 **/AuthorizeNonceAccount(Pubkey),
+ /** 8 **/Allocate {/**/},
+ /** 9 **/AllocateWithSeed {/**/},
+ /** 10 **/AssignWithSeed {/**/},
+ /** 11 **/TransferWithSeed {/**/},
+ /** 12 **/UpgradeNonceAccount,
+}
+```
+
+Next up is `u32('instruction')`.
+
+```javascript
+{
+ index: 8,
+ layout: struct([
+ u32('instruction'), /* <-- */
+ ns64('space'),
+ ])
+}
+```
+
+The `layout` in the allocate struct must always have `u32('instruction')` first
+when you are using it to call an instruction.
+
+```javascript
+{
+ index: 8,
+ layout: struct([
+ u32('instruction'),
+ ns64('space'), /* <-- */
+ ])
+}
+```
+
+`ns64('space')` is the argument for the `allocate` function. You can see in the
+original `allocate` function in Rust that space was of the type `u64`. `u64` is
+an unsigned 64-bit integer. JavaScript by default only provides up to 53-bit
+integers. `ns64` comes from `@solana/buffer-layout` to help with type
+conversions between Rust and JavaScript. You can find more type conversions
+between Rust and JavaScript at
+[solana-labs/buffer-layout](https://github.com/solana-labs/buffer-layout).
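+
+As a quick illustration of why a plain JavaScript number cannot hold a full
+`u64` value:
+
+```javascript
+// The largest integer a plain JavaScript number can represent exactly
+console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 (2^53 - 1)
+
+// A u64 can be as large as 2^64 - 1, which only BigInt represents exactly
+console.log((2n ** 64n - 1n).toString()); // 18446744073709551615
+```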
+
+```javascript
+let data = Buffer.alloc(allocateStruct.layout.span);
+let layoutFields = Object.assign({ instruction: allocateStruct.index }, params);
+allocateStruct.layout.encode(layoutFields, data);
+```
+
+Using the previously created buffer layout, we can allocate a data buffer. We
+then assign our params `{ space: 100 }` so that it maps correctly to the layout,
+and encode it to the data buffer. Now the data is ready to be sent to the
+program.
+
+```javascript
+allocateTransaction.add(
+ new web3.TransactionInstruction({
+ keys,
+ programId: web3.SystemProgram.programId,
+ data,
+ }),
+);
+
+await web3.sendAndConfirmTransaction(connection, allocateTransaction, [
+ payer,
+ keypair,
+]);
+```
+
+Finally, we add the transaction instruction with all the account keys, payer,
+data, and programId and broadcast the transaction to the network.
+
+The full code can be found below.
+
+```javascript
+const { struct, u32, ns64 } = require("@solana/buffer-layout");
+const { Buffer } = require("buffer");
+const web3 = require("@solana/web3.js");
+
+let keypair = web3.Keypair.generate();
+let payer = web3.Keypair.generate();
+
+let connection = new web3.Connection(web3.clusterApiUrl("testnet"));
+
+let airdropSignature = await connection.requestAirdrop(
+ payer.publicKey,
+ web3.LAMPORTS_PER_SOL,
+);
+
+await connection.confirmTransaction({ signature: airdropSignature });
+
+let allocateTransaction = new web3.Transaction({
+ feePayer: payer.publicKey,
+});
+let keys = [{ pubkey: keypair.publicKey, isSigner: true, isWritable: true }];
+let params = { space: 100 };
+
+let allocateStruct = {
+ index: 8,
+ layout: struct([u32("instruction"), ns64("space")]),
+};
+
+let data = Buffer.alloc(allocateStruct.layout.span);
+let layoutFields = Object.assign({ instruction: allocateStruct.index }, params);
+allocateStruct.layout.encode(layoutFields, data);
+
+allocateTransaction.add(
+ new web3.TransactionInstruction({
+ keys,
+ programId: web3.SystemProgram.programId,
+ data,
+ }),
+);
+
+await web3.sendAndConfirmTransaction(connection, allocateTransaction, [
+ payer,
+ keypair,
+]);
+```
diff --git a/docs/developing/clients/javascript-reference.md b/docs/developing/clients/javascript-reference.md
new file mode 100644
index 000000000..ce9ff6c6a
--- /dev/null
+++ b/docs/developing/clients/javascript-reference.md
@@ -0,0 +1,858 @@
+---
+title: Web3 API Reference
+---
+
+## Web3 API Reference Guide
+
+The `@solana/web3.js` library is a package that provides coverage of the
+[Solana JSON RPC API](/api).
+
+You can find the full documentation for the `@solana/web3.js` library
+[here](https://solana-labs.github.io/solana-web3.js/).
+
+## General
+
+### Connection
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Connection.html)
+
+Connection is used to interact with the [Solana JSON RPC](/api). You can use
+Connection to confirm transactions, get account info, and more.
+
+You create a connection by defining the JSON RPC cluster endpoint and the
+desired commitment. Once this is complete, you can use this connection object to
+interact with any of the Solana JSON RPC API methods.
+
+#### Example Usage
+
+```javascript
+const web3 = require("@solana/web3.js");
+
+let connection = new web3.Connection(web3.clusterApiUrl("devnet"), "confirmed");
+
+let slot = await connection.getSlot();
+console.log(slot);
+// 93186439
+
+let blockTime = await connection.getBlockTime(slot);
+console.log(blockTime);
+// 1630747045
+
+let block = await connection.getBlock(slot);
+console.log(block);
+
+/*
+{
+ blockHeight: null,
+ blockTime: 1630747045,
+ blockhash: 'AsFv1aV5DGip9YJHHqVjrGg6EKk55xuyxn2HeiN9xQyn',
+ parentSlot: 93186438,
+ previousBlockhash: '11111111111111111111111111111111',
+ rewards: [],
+ transactions: []
+}
+*/
+
+let slotLeader = await connection.getSlotLeader();
+console.log(slotLeader);
+//49AqLYbpJYc2DrzGUAH1fhWJy62yxBxpLEkfJwjKy2jr
+```
+
+The above example shows only a few of the methods on Connection. Please see the
+[source generated docs](https://solana-labs.github.io/solana-web3.js/classes/Connection.html)
+for the full list.
+
+### Transaction
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Transaction.html)
+
+A transaction is used to interact with programs on the Solana blockchain. These
+transactions are constructed with TransactionInstructions, containing all the
+accounts they may interact with, as well as any needed data or program
+addresses. Each TransactionInstruction consists of keys, data, and a programId.
+You can include multiple instructions in a single transaction, interacting with
+multiple programs at once.
+
+#### Example Usage
+
+```javascript
+const web3 = require("@solana/web3.js");
+const nacl = require("tweetnacl");
+
+// Airdrop SOL for paying transactions
+let payer = web3.Keypair.generate();
+let connection = new web3.Connection(web3.clusterApiUrl("devnet"), "confirmed");
+
+let airdropSignature = await connection.requestAirdrop(
+ payer.publicKey,
+ web3.LAMPORTS_PER_SOL,
+);
+
+await connection.confirmTransaction({ signature: airdropSignature });
+
+let toAccount = web3.Keypair.generate();
+
+// Create Simple Transaction
+let transaction = new web3.Transaction();
+
+// Add an instruction to execute
+transaction.add(
+ web3.SystemProgram.transfer({
+ fromPubkey: payer.publicKey,
+ toPubkey: toAccount.publicKey,
+ lamports: 1000,
+ }),
+);
+
+// Send and confirm transaction
+// Note: feePayer is by default the first signer, or payer, if the parameter is not set
+await web3.sendAndConfirmTransaction(connection, transaction, [payer]);
+
+// Alternatively, manually construct the transaction
+let recentBlockhash = await connection.getRecentBlockhash();
+let manualTransaction = new web3.Transaction({
+ recentBlockhash: recentBlockhash.blockhash,
+ feePayer: payer.publicKey,
+});
+manualTransaction.add(
+ web3.SystemProgram.transfer({
+ fromPubkey: payer.publicKey,
+ toPubkey: toAccount.publicKey,
+ lamports: 1000,
+ }),
+);
+
+let transactionBuffer = manualTransaction.serializeMessage();
+let signature = nacl.sign.detached(transactionBuffer, payer.secretKey);
+
+manualTransaction.addSignature(payer.publicKey, signature);
+
+let isVerifiedSignature = manualTransaction.verifySignatures();
+console.log(`The signatures were verified: ${isVerifiedSignature}`);
+
+// The signatures were verified: true
+
+let rawTransaction = manualTransaction.serialize();
+
+await web3.sendAndConfirmRawTransaction(connection, rawTransaction);
+```
+
+### Keypair
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Keypair.html)
+
+The keypair is used to create an account with a public key and secret key
+within Solana. You can generate a new keypair, generate one from a seed, or
+create one from a secret key.
+
+#### Example Usage
+
+```javascript
+const { Keypair } = require("@solana/web3.js");
+
+let account = Keypair.generate();
+
+console.log(account.publicKey.toBase58());
+console.log(account.secretKey);
+
+// 2DVaHtcdTf7cm18Zm9VV8rKK4oSnjmTkKE6MiXe18Qsb
+// Uint8Array(64) [
+// 152, 43, 116, 211, 207, 41, 220, 33, 193, 168, 118,
+// 24, 176, 83, 206, 132, 47, 194, 2, 203, 186, 131,
+// 197, 228, 156, 170, 154, 41, 56, 76, 159, 124, 18,
+// 14, 247, 32, 210, 51, 102, 41, 43, 21, 12, 170,
+// 166, 210, 195, 188, 60, 220, 210, 96, 136, 158, 6,
+// 205, 189, 165, 112, 32, 200, 116, 164, 234
+// ]
+
+let seed = Uint8Array.from([
+ 70, 60, 102, 100, 70, 60, 102, 100, 70, 60, 102, 100, 70, 60, 102, 100, 70,
+ 60, 102, 100, 70, 60, 102, 100, 70, 60, 102, 100, 70, 60, 102, 100,
+]);
+let accountFromSeed = Keypair.fromSeed(seed);
+
+console.log(accountFromSeed.publicKey.toBase58());
+console.log(accountFromSeed.secretKey);
+
+// 3LDverZtSC9Duw2wyGC1C38atMG49toPNW9jtGJiw9Ar
+// Uint8Array(64) [
+// 70, 60, 102, 100, 70, 60, 102, 100, 70, 60, 102,
+// 100, 70, 60, 102, 100, 70, 60, 102, 100, 70, 60,
+// 102, 100, 70, 60, 102, 100, 70, 60, 102, 100, 34,
+// 164, 6, 12, 9, 193, 196, 30, 148, 122, 175, 11,
+// 28, 243, 209, 82, 240, 184, 30, 31, 56, 223, 236,
+// 227, 60, 72, 215, 47, 208, 209, 162, 59
+// ]
+
+let accountFromSecret = Keypair.fromSecretKey(account.secretKey);
+
+console.log(accountFromSecret.publicKey.toBase58());
+console.log(accountFromSecret.secretKey);
+
+// 2DVaHtcdTf7cm18Zm9VV8rKK4oSnjmTkKE6MiXe18Qsb
+// Uint8Array(64) [
+// 152, 43, 116, 211, 207, 41, 220, 33, 193, 168, 118,
+// 24, 176, 83, 206, 132, 47, 194, 2, 203, 186, 131,
+// 197, 228, 156, 170, 154, 41, 56, 76, 159, 124, 18,
+// 14, 247, 32, 210, 51, 102, 41, 43, 21, 12, 170,
+// 166, 210, 195, 188, 60, 220, 210, 96, 136, 158, 6,
+// 205, 189, 165, 112, 32, 200, 116, 164, 234
+// ]
+```
+
+Using `generate` generates a random Keypair for use as an account on Solana.
+Using `fromSeed`, you can generate a Keypair using a deterministic constructor.
+`fromSecret` creates a Keypair from a secret Uint8array. You can see that the
+publicKey for the `generate` Keypair and `fromSecret` Keypair are the same
+because the secret from the `generate` Keypair is used in `fromSecret`.
+
+**Warning**: Do not use `fromSeed` unless you are creating a seed with high
+entropy. Do not share your seed. Treat the seed like you would a private key.
+
+### PublicKey
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/PublicKey.html)
+
+PublicKey is used throughout `@solana/web3.js` in transactions, keypairs, and
+programs. A public key is required when listing each account in a transaction
+and is used as a general identifier on Solana.
+
+A PublicKey can be created from a base58-encoded string, a buffer, a
+Uint8Array, a number, or an array of numbers.
+
+#### Example Usage
+
+```javascript
+const { Buffer } = require("buffer");
+const web3 = require("@solana/web3.js");
+const crypto = require("crypto");
+
+// Create a PublicKey with a base58 encoded string
+let base58publicKey = new web3.PublicKey(
+ "5xot9PVkphiX2adznghwrAuxGs2zeWisNSxMW6hU6Hkj",
+);
+console.log(base58publicKey.toBase58());
+
+// 5xot9PVkphiX2adznghwrAuxGs2zeWisNSxMW6hU6Hkj
+
+// Create a Program Address
+let highEntropyBuffer = crypto.randomBytes(31);
+let programAddressFromKey = await web3.PublicKey.createProgramAddress(
+ [highEntropyBuffer.slice(0, 31)],
+ base58publicKey,
+);
+console.log(`Generated Program Address: ${programAddressFromKey.toBase58()}`);
+
+// Generated Program Address: 3thxPEEz4EDWHNxo1LpEpsAxZryPAHyvNVXJEJWgBgwJ
+
+// Find Program address given a PublicKey
+let validProgramAddress = await web3.PublicKey.findProgramAddress(
+ [Buffer.from("", "utf8")],
+ programAddressFromKey,
+);
+console.log(`Valid Program Address: ${validProgramAddress}`);
+
+// Valid Program Address: C14Gs3oyeXbASzwUpqSymCKpEyccfEuSe8VRar9vJQRE,253
+```
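+
+Since the example above only shows the base58 form, here is a brief sketch of
+constructing the same PublicKey from raw bytes and comparing the two:
+
+```javascript
+const web3 = require("@solana/web3.js");
+
+let fromString = new web3.PublicKey(
+  "5xot9PVkphiX2adznghwrAuxGs2zeWisNSxMW6hU6Hkj",
+);
+
+// A PublicKey can also be constructed from a 32-byte Uint8Array
+let fromBytes = new web3.PublicKey(fromString.toBytes());
+
+// Both values refer to the same key
+console.log(fromBytes.equals(fromString)); // true
+console.log(fromBytes.toBase58());
+// 5xot9PVkphiX2adznghwrAuxGs2zeWisNSxMW6hU6Hkj
+```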
+
+### SystemProgram
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/SystemProgram.html)
+
+The SystemProgram grants the ability to create accounts, allocate account data,
+assign an account to programs, work with nonce accounts, and transfer lamports.
+You can use the SystemInstruction class to help with decoding and reading
+individual instructions.
+
+#### Example Usage
+
+```javascript
+const web3 = require("@solana/web3.js");
+
+// Airdrop SOL for paying transactions
+let payer = web3.Keypair.generate();
+let connection = new web3.Connection(web3.clusterApiUrl("devnet"), "confirmed");
+
+let airdropSignature = await connection.requestAirdrop(
+ payer.publicKey,
+ web3.LAMPORTS_PER_SOL,
+);
+
+await connection.confirmTransaction({ signature: airdropSignature });
+
+// Allocate Account Data
+let allocatedAccount = web3.Keypair.generate();
+let allocateInstruction = web3.SystemProgram.allocate({
+ accountPubkey: allocatedAccount.publicKey,
+ space: 100,
+});
+let transaction = new web3.Transaction().add(allocateInstruction);
+
+await web3.sendAndConfirmTransaction(connection, transaction, [
+ payer,
+ allocatedAccount,
+]);
+
+// Create Nonce Account
+let nonceAccount = web3.Keypair.generate();
+let minimumAmountForNonceAccount =
+ await connection.getMinimumBalanceForRentExemption(web3.NONCE_ACCOUNT_LENGTH);
+let createNonceAccountTransaction = new web3.Transaction().add(
+ web3.SystemProgram.createNonceAccount({
+ fromPubkey: payer.publicKey,
+ noncePubkey: nonceAccount.publicKey,
+ authorizedPubkey: payer.publicKey,
+ lamports: minimumAmountForNonceAccount,
+ }),
+);
+
+await web3.sendAndConfirmTransaction(
+ connection,
+ createNonceAccountTransaction,
+ [payer, nonceAccount],
+);
+
+// Advance nonce - Used to create transactions as an account custodian
+let advanceNonceTransaction = new web3.Transaction().add(
+ web3.SystemProgram.nonceAdvance({
+ noncePubkey: nonceAccount.publicKey,
+ authorizedPubkey: payer.publicKey,
+ }),
+);
+
+await web3.sendAndConfirmTransaction(connection, advanceNonceTransaction, [
+ payer,
+]);
+
+// Transfer lamports between accounts
+let toAccount = web3.Keypair.generate();
+
+let transferTransaction = new web3.Transaction().add(
+ web3.SystemProgram.transfer({
+ fromPubkey: payer.publicKey,
+ toPubkey: toAccount.publicKey,
+ lamports: 1000,
+ }),
+);
+await web3.sendAndConfirmTransaction(connection, transferTransaction, [payer]);
+
+// Assign a new account to a program
+let programId = web3.Keypair.generate();
+let assignedAccount = web3.Keypair.generate();
+
+let assignTransaction = new web3.Transaction().add(
+ web3.SystemProgram.assign({
+ accountPubkey: assignedAccount.publicKey,
+ programId: programId.publicKey,
+ }),
+);
+
+await web3.sendAndConfirmTransaction(connection, assignTransaction, [
+ payer,
+ assignedAccount,
+]);
+```
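+
+The example above only builds and sends SystemProgram instructions. As a short
+sketch of the decoding direction mentioned earlier, the `SystemInstruction`
+class can identify and decode an instruction you already have:
+
+```javascript
+const web3 = require("@solana/web3.js");
+
+// Build a transfer instruction (any SystemProgram instruction works here)
+let from = web3.Keypair.generate();
+let to = web3.Keypair.generate();
+let transferInstruction = web3.SystemProgram.transfer({
+  fromPubkey: from.publicKey,
+  toPubkey: to.publicKey,
+  lamports: 1000,
+});
+
+// Identify which SystemProgram instruction this is
+console.log(web3.SystemInstruction.decodeInstructionType(transferInstruction));
+// Transfer
+
+// Decode the instruction back into its parameters
+let decoded = web3.SystemInstruction.decodeTransfer(transferInstruction);
+console.log(decoded.lamports.toString()); // 1000
+console.log(decoded.toPubkey.equals(to.publicKey)); // true
+```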
+
+### Secp256k1Program
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Secp256k1Program.html)
+
+The Secp256k1Program is used to verify Secp256k1 signatures, which are used by
+both Bitcoin and Ethereum.
+
+#### Example Usage
+
+```javascript
+const { keccak_256 } = require("js-sha3");
+const web3 = require("@solana/web3.js");
+const secp256k1 = require("secp256k1");
+
+// Create an Ethereum Address from secp256k1
+let secp256k1PrivateKey;
+do {
+ secp256k1PrivateKey = web3.Keypair.generate().secretKey.slice(0, 32);
+} while (!secp256k1.privateKeyVerify(secp256k1PrivateKey));
+
+let secp256k1PublicKey = secp256k1
+ .publicKeyCreate(secp256k1PrivateKey, false)
+ .slice(1);
+
+let ethAddress =
+ web3.Secp256k1Program.publicKeyToEthAddress(secp256k1PublicKey);
+console.log(`Ethereum Address: 0x${ethAddress.toString("hex")}`);
+
+// Ethereum Address: 0xadbf43eec40694eacf36e34bb5337fba6a2aa8ee
+
+// Fund a keypair to create instructions
+let fromPublicKey = web3.Keypair.generate();
+let connection = new web3.Connection(web3.clusterApiUrl("devnet"), "confirmed");
+
+let airdropSignature = await connection.requestAirdrop(
+ fromPublicKey.publicKey,
+ web3.LAMPORTS_PER_SOL,
+);
+
+await connection.confirmTransaction({ signature: airdropSignature });
+
+// Sign Message with Ethereum Key
+let plaintext = Buffer.from("string address");
+let plaintextHash = Buffer.from(keccak_256.update(plaintext).digest());
+let { signature, recid: recoveryId } = secp256k1.ecdsaSign(
+ plaintextHash,
+ secp256k1PrivateKey,
+);
+
+// Create transaction to verify the signature
+let transaction = new web3.Transaction().add(
+ web3.Secp256k1Program.createInstructionWithEthAddress({
+ ethAddress: ethAddress.toString("hex"),
+ plaintext,
+ signature,
+ recoveryId,
+ }),
+);
+
+// Transaction will succeed if the message is verified to be signed by the address
+await web3.sendAndConfirmTransaction(connection, transaction, [fromPublicKey]);
+```
+
+### Message
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Message.html)
+
+Message is used as another way to construct transactions. You can construct a
+message using the accounts, header, instructions, and recentBlockhash that are a
+part of a transaction. A [Transaction](javascript-api.md#Transaction) is a
+Message plus the list of signatures required to execute the transaction.
+
+#### Example Usage
+
+```javascript
+const { Buffer } = require("buffer");
+const bs58 = require("bs58");
+const web3 = require("@solana/web3.js");
+
+let toPublicKey = web3.Keypair.generate().publicKey;
+let fromPublicKey = web3.Keypair.generate();
+
+let connection = new web3.Connection(web3.clusterApiUrl("devnet"), "confirmed");
+
+let airdropSignature = await connection.requestAirdrop(
+ fromPublicKey.publicKey,
+ web3.LAMPORTS_PER_SOL,
+);
+
+await connection.confirmTransaction({ signature: airdropSignature });
+
+let type = web3.SYSTEM_INSTRUCTION_LAYOUTS.Transfer;
+let data = Buffer.alloc(type.layout.span);
+let layoutFields = Object.assign({ instruction: type.index });
+type.layout.encode(layoutFields, data);
+
+let recentBlockhash = await connection.getRecentBlockhash();
+
+let messageParams = {
+ accountKeys: [
+ fromPublicKey.publicKey.toString(),
+ toPublicKey.toString(),
+ web3.SystemProgram.programId.toString(),
+ ],
+ header: {
+ numReadonlySignedAccounts: 0,
+ numReadonlyUnsignedAccounts: 1,
+ numRequiredSignatures: 1,
+ },
+ instructions: [
+ {
+ accounts: [0, 1],
+ data: bs58.encode(data),
+ programIdIndex: 2,
+ },
+ ],
+ recentBlockhash,
+};
+
+let message = new web3.Message(messageParams);
+
+let transaction = web3.Transaction.populate(message, [
+ fromPublicKey.publicKey.toString(),
+]);
+
+await web3.sendAndConfirmTransaction(connection, transaction, [fromPublicKey]);
+```
+
+### Struct
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Struct.html)
+
+The Struct class is used to create Rust-compatible structs in JavaScript. This
+class is only compatible with Borsh-encoded Rust structs.
+
+#### Example Usage
+
+Struct in Rust:
+
+```rust
+pub struct Fee {
+ pub denominator: u64,
+ pub numerator: u64,
+}
+```
+
+Using web3:
+
+```javascript
+import BN from "bn.js";
+import { Struct } from "@solana/web3.js";
+
+export class Fee extends Struct {
+ denominator: BN;
+ numerator: BN;
+}
+```
+
+### Enum
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Enum.html)
+
+The Enum class is used to represent a Rust-compatible enum in JavaScript. The
+enum will just be a string representation when logged, but can be properly
+encoded/decoded when used in conjunction with
+[Struct](javascript-api.md#Struct). This class is only compatible with
+Borsh-encoded Rust enumerations.
+
+#### Example Usage
+
+Rust:
+
+```rust
+pub enum AccountType {
+ Uninitialized,
+ StakePool,
+ ValidatorList,
+}
+```
+
+Web3:
+
+```javascript
+import { Enum } from "@solana/web3.js";
+
+export class AccountType extends Enum {}
+```
+
+### NonceAccount
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/NonceAccount.html)
+
+Normally a transaction is rejected if its `recentBlockhash` field is too old.
+To provide for certain custodial services, Nonce Accounts are used.
+Transactions which use a `recentBlockhash` captured on-chain by a Nonce Account
+do not expire as long as the Nonce Account is not advanced.
+
+You can create a nonce account by first creating a normal account, then using
+`SystemProgram` to make the account a Nonce Account.
+
+#### Example Usage
+
+```javascript
+const web3 = require("@solana/web3.js");
+
+// Create connection
+let connection = new web3.Connection(web3.clusterApiUrl("devnet"), "confirmed");
+
+// Generate accounts
+let account = web3.Keypair.generate();
+let nonceAccount = web3.Keypair.generate();
+
+// Fund account
+let airdropSignature = await connection.requestAirdrop(
+ account.publicKey,
+ web3.LAMPORTS_PER_SOL,
+);
+
+await connection.confirmTransaction({ signature: airdropSignature });
+
+// Get Minimum amount for rent exemption
+let minimumAmount = await connection.getMinimumBalanceForRentExemption(
+ web3.NONCE_ACCOUNT_LENGTH,
+);
+
+// Form CreateNonceAccount transaction
+let transaction = new web3.Transaction().add(
+ web3.SystemProgram.createNonceAccount({
+ fromPubkey: account.publicKey,
+ noncePubkey: nonceAccount.publicKey,
+ authorizedPubkey: account.publicKey,
+ lamports: minimumAmount,
+ }),
+);
+// Create Nonce Account
+await web3.sendAndConfirmTransaction(connection, transaction, [
+ account,
+ nonceAccount,
+]);
+
+let nonceAccountData = await connection.getNonce(
+ nonceAccount.publicKey,
+ "confirmed",
+);
+
+console.log(nonceAccountData);
+// NonceAccount {
+// authorizedPubkey: PublicKey {
+// _bn:
+// },
+// nonce: '93zGZbhMmReyz4YHXjt2gHsvu5tjARsyukxD4xnaWaBq',
+// feeCalculator: { lamportsPerSignature: 5000 }
+// }
+
+let nonceAccountInfo = await connection.getAccountInfo(
+ nonceAccount.publicKey,
+ "confirmed",
+);
+
+let nonceAccountFromInfo = web3.NonceAccount.fromAccountData(
+ nonceAccountInfo.data,
+);
+
+console.log(nonceAccountFromInfo);
+// NonceAccount {
+// authorizedPubkey: PublicKey {
+// _bn:
+// },
+// nonce: '93zGZbhMmReyz4YHXjt2gHsvu5tjARsyukxD4xnaWaBq',
+// feeCalculator: { lamportsPerSignature: 5000 }
+// }
+```
+
+The above example shows both how to create a `NonceAccount` using
+`SystemProgram.createNonceAccount`, as well as how to retrieve the
+`NonceAccount` from accountInfo. Using the nonce, you can create transactions
+offline with the nonce in place of the `recentBlockhash`.
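+
+As a rough sketch of that offline pattern (continuing from the example above,
+and assuming the nonce authority `account` also pays the fee), the transaction
+sets `recentBlockhash` to the stored nonce and makes a nonce advance its first
+instruction:
+
+```javascript
+let toAccount = web3.Keypair.generate();
+
+let durableTransaction = new web3.Transaction();
+
+// Use the durable nonce in place of a recent blockhash
+durableTransaction.recentBlockhash = nonceAccountData.nonce;
+durableTransaction.feePayer = account.publicKey;
+
+// The nonce advance must be the first instruction in the transaction
+durableTransaction.add(
+  web3.SystemProgram.nonceAdvance({
+    noncePubkey: nonceAccount.publicKey,
+    authorizedPubkey: account.publicKey,
+  }),
+);
+
+// Then add the instruction(s) you actually want to execute
+durableTransaction.add(
+  web3.SystemProgram.transfer({
+    fromPubkey: account.publicKey,
+    toPubkey: toAccount.publicKey,
+    lamports: 1000,
+  }),
+);
+
+// The signed transaction can be held and submitted later; it remains valid
+// until the nonce account is advanced
+await web3.sendAndConfirmTransaction(connection, durableTransaction, [account]);
+```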
+
+### VoteAccount
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/VoteAccount.html)
+
+VoteAccount is an object that grants the capability of decoding vote accounts
+from the native vote account program on the network.
+
+#### Example Usage
+
+```javascript
+const web3 = require("@solana/web3.js");
+
+let voteAccountInfo = await connection.getProgramAccounts(web3.VOTE_PROGRAM_ID);
+let voteAccountFromData = web3.VoteAccount.fromAccountData(
+ voteAccountInfo[0].account.data,
+);
+console.log(voteAccountFromData);
+/*
+VoteAccount {
+ nodePubkey: PublicKey {
+ _bn:
+ },
+ authorizedWithdrawer: PublicKey {
+ _bn:
+ },
+ commission: 10,
+ rootSlot: 104570885,
+ votes: [
+ { slot: 104570886, confirmationCount: 31 },
+ { slot: 104570887, confirmationCount: 30 },
+ { slot: 104570888, confirmationCount: 29 },
+ { slot: 104570889, confirmationCount: 28 },
+ { slot: 104570890, confirmationCount: 27 },
+ { slot: 104570891, confirmationCount: 26 },
+ { slot: 104570892, confirmationCount: 25 },
+ { slot: 104570893, confirmationCount: 24 },
+ { slot: 104570894, confirmationCount: 23 },
+ ...
+ ],
+ authorizedVoters: [ { epoch: 242, authorizedVoter: [PublicKey] } ],
+ priorVoters: [
+ [Object], [Object], [Object],
+ [Object], [Object], [Object],
+ [Object], [Object], [Object],
+ [Object], [Object], [Object],
+ [Object], [Object], [Object],
+ [Object], [Object], [Object],
+ [Object], [Object], [Object],
+ [Object], [Object], [Object],
+ [Object], [Object], [Object],
+ [Object], [Object], [Object],
+ [Object], [Object]
+ ],
+ epochCredits: [
+ { epoch: 179, credits: 33723163, prevCredits: 33431259 },
+ { epoch: 180, credits: 34022643, prevCredits: 33723163 },
+ { epoch: 181, credits: 34331103, prevCredits: 34022643 },
+ { epoch: 182, credits: 34619348, prevCredits: 34331103 },
+ { epoch: 183, credits: 34880375, prevCredits: 34619348 },
+ { epoch: 184, credits: 35074055, prevCredits: 34880375 },
+ { epoch: 185, credits: 35254965, prevCredits: 35074055 },
+ { epoch: 186, credits: 35437863, prevCredits: 35254965 },
+ { epoch: 187, credits: 35672671, prevCredits: 35437863 },
+ { epoch: 188, credits: 35950286, prevCredits: 35672671 },
+ { epoch: 189, credits: 36228439, prevCredits: 35950286 },
+ ...
+ ],
+ lastTimestamp: { slot: 104570916, timestamp: 1635730116 }
+}
+*/
+```
+
+## Staking
+
+### StakeProgram
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/StakeProgram.html)
+
+The StakeProgram facilitates staking SOL and delegating it to any validators on
+the network. You can use StakeProgram to create a stake account, stake some
+SOL, authorize accounts for withdrawal of your stake, deactivate your stake, and
+withdraw your funds. The StakeInstruction class is used to decode and read
+instructions from transactions calling the StakeProgram.
+
+#### Example Usage
+
+```javascript
+const web3 = require("@solana/web3.js");
+
+// Fund a key to create transactions
+let fromPublicKey = web3.Keypair.generate();
+let connection = new web3.Connection(web3.clusterApiUrl("devnet"), "confirmed");
+
+let airdropSignature = await connection.requestAirdrop(
+ fromPublicKey.publicKey,
+ web3.LAMPORTS_PER_SOL,
+);
+await connection.confirmTransaction({ signature: airdropSignature });
+
+// Create Account
+let stakeAccount = web3.Keypair.generate();
+let authorizedAccount = web3.Keypair.generate();
+/* Note: This is the minimum amount for a stake account -- Add additional Lamports for staking
+ For example, we add 50 lamports as part of the stake */
+let lamportsForStakeAccount =
+ (await connection.getMinimumBalanceForRentExemption(
+ web3.StakeProgram.space,
+ )) + 50;
+
+let createAccountTransaction = web3.StakeProgram.createAccount({
+ fromPubkey: fromPublicKey.publicKey,
+ authorized: new web3.Authorized(
+ authorizedAccount.publicKey,
+ authorizedAccount.publicKey,
+ ),
+ lamports: lamportsForStakeAccount,
+ lockup: new web3.Lockup(0, 0, fromPublicKey.publicKey),
+ stakePubkey: stakeAccount.publicKey,
+});
+await web3.sendAndConfirmTransaction(connection, createAccountTransaction, [
+ fromPublicKey,
+ stakeAccount,
+]);
+
+// Check that stake is available
+let stakeBalance = await connection.getBalance(stakeAccount.publicKey);
+console.log(`Stake balance: ${stakeBalance}`);
+// Stake balance: 2282930
+
+// We can verify the state of our stake. This may take some time to become active
+let stakeState = await connection.getStakeActivation(stakeAccount.publicKey);
+console.log(`Stake state: ${stakeState.state}`);
+// Stake state: inactive
+
+// To delegate our stake, we get the current vote accounts and choose the first
+let voteAccounts = await connection.getVoteAccounts();
+let voteAccount = voteAccounts.current.concat(voteAccounts.delinquent)[0];
+let votePubkey = new web3.PublicKey(voteAccount.votePubkey);
+
+// We can then delegate our stake to the voteAccount
+let delegateTransaction = web3.StakeProgram.delegate({
+ stakePubkey: stakeAccount.publicKey,
+ authorizedPubkey: authorizedAccount.publicKey,
+ votePubkey: votePubkey,
+});
+await web3.sendAndConfirmTransaction(connection, delegateTransaction, [
+ fromPublicKey,
+ authorizedAccount,
+]);
+
+// To withdraw our funds, we first have to deactivate the stake
+let deactivateTransaction = web3.StakeProgram.deactivate({
+ stakePubkey: stakeAccount.publicKey,
+ authorizedPubkey: authorizedAccount.publicKey,
+});
+await web3.sendAndConfirmTransaction(connection, deactivateTransaction, [
+ fromPublicKey,
+ authorizedAccount,
+]);
+
+// Once deactivated, we can withdraw our funds
+let withdrawTransaction = web3.StakeProgram.withdraw({
+ stakePubkey: stakeAccount.publicKey,
+ authorizedPubkey: authorizedAccount.publicKey,
+ toPubkey: fromPublicKey.publicKey,
+ lamports: stakeBalance,
+});
+
+await web3.sendAndConfirmTransaction(connection, withdrawTransaction, [
+ fromPublicKey,
+ authorizedAccount,
+]);
+```
+
+### Authorized
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Authorized.html)
+
+Authorized is an object used when creating an authorized account for staking
+within Solana. You can designate a `staker` and `withdrawer` separately,
+allowing an account other than the staker to withdraw the stake.
+
+You can find more usage of the `Authorized` object under
+[`StakeProgram`](javascript-api.md#StakeProgram)
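+
+As a minimal sketch, separate staker and withdrawer keys can be supplied like
+this:
+
+```javascript
+const { Authorized, Keypair } = require("@solana/web3.js");
+
+let stakerAccount = Keypair.generate();
+let withdrawerAccount = Keypair.generate();
+
+// The first argument authorizes staking actions, the second authorizes withdrawals
+let authorized = new Authorized(
+  stakerAccount.publicKey,
+  withdrawerAccount.publicKey,
+);
+```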
+
+### Lockup
+
+[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Lockup.html)
+
+Lockup is used in conjunction with the
+[StakeProgram](javascript-api.md#StakeProgram) to create an account. The Lockup
+is used to determine how long the stake will be locked, or unable to be
+retrieved. If the Lockup is set to 0 for both epoch and the Unix timestamp, the
+lockup will be disabled for the stake account.
+
+#### Example Usage
+
+```javascript
+const {
+ Authorized,
+ Keypair,
+ Lockup,
+ StakeProgram,
+} = require("@solana/web3.js");
+
+let account = Keypair.generate();
+let stakeAccount = Keypair.generate();
+let authorized = new Authorized(account.publicKey, account.publicKey);
+let lockup = new Lockup(0, 0, account.publicKey);
+
+let createStakeAccountInstruction = StakeProgram.createAccount({
+ fromPubkey: account.publicKey,
+ authorized: authorized,
+ lamports: 1000,
+ lockup: lockup,
+ stakePubkey: stakeAccount.publicKey,
+});
+```
+
+The above code creates a `createStakeAccountInstruction` to be used when
+creating an account with the `StakeProgram`. The Lockup is set to 0 for both the
+epoch and Unix timestamp, disabling lockup for the account.
+
+See [StakeProgram](javascript-api.md#StakeProgram) for more.
diff --git a/docs/developing/clients/rust-api.md b/docs/developing/clients/rust-api.md
new file mode 100644
index 000000000..68ab93773
--- /dev/null
+++ b/docs/developing/clients/rust-api.md
@@ -0,0 +1,36 @@
+---
+title: Rust API
+---
+
+Solana's Rust crates are [published to crates.io][crates.io] and can be found
+[on docs.rs with the "solana-" prefix][docs.rs].
+
+[crates.io]: https://crates.io/search?q=solana-
+[docs.rs]: https://docs.rs/releases/search?query=solana-
+
+Some important crates:
+
+- [`solana-program`] — Imported by programs running on Solana, compiled to
+ SBF. This crate contains many fundamental data types and is re-exported from
+ [`solana-sdk`], which cannot be imported from a Solana program.
+
+- [`solana-sdk`] — The basic off-chain SDK; it re-exports
+  [`solana-program`] and adds more APIs on top of that. Most Solana code that
+  does not run on-chain will import this.
+
+- [`solana-client`] — For interacting with a Solana node via the
+ [JSON RPC API](/api).
+
+- [`solana-cli-config`] — Loading and saving the Solana CLI configuration
+ file.
+
+- [`solana-clap-utils`] — Routines for setting up a CLI, using [`clap`],
+ as used by the main Solana CLI. Includes functions for loading all types of
+ signers supported by the CLI.
+
+[`solana-program`]: https://docs.rs/solana-program
+[`solana-sdk`]: https://docs.rs/solana-sdk
+[`solana-client`]: https://docs.rs/solana-client
+[`solana-cli-config`]: https://docs.rs/solana-cli-config
+[`solana-clap-utils`]: https://docs.rs/solana-clap-utils
+[`clap`]: https://docs.rs/clap
diff --git a/docs/developing/guides/compressed-nfts.md b/docs/developing/guides/compressed-nfts.md
new file mode 100644
index 000000000..3dd613dfa
--- /dev/null
+++ b/docs/developing/guides/compressed-nfts.md
@@ -0,0 +1,862 @@
+---
+title: Creating Compressed NFTs with JavaScript
+description:
+ "Compressed NFTs use the Bubblegum program from Metaplex to cheaply and
+ securely store NFT metadata using State Compression on Solana."
+keywords:
+ - compression
+ - merkle tree
+ - read api
+ - metaplex
+---
+
+Compressed NFTs on Solana use the
+[Bubblegum](https://docs.metaplex.com/programs/compression/) program from
+Metaplex to cheaply and securely store NFT metadata using
+[State Compression](../../learn/state-compression.md).
+
+This developer guide will use JavaScript/TypeScript to demonstrate:
+
+- [how to create a tree for compressed NFTs](#create-a-tree),
+- [how to mint compressed NFTs into a tree](#mint-compressed-nfts),
+- [how to get compressed NFT metadata from the Read API](#reading-compressed-nfts-metadata),
+ and
+- [how to transfer compressed NFTs](#transfer-compressed-nfts)
+
+## Intro to Compressed NFTs
+
+Compressed NFTs use [State Compression](../../learn/state-compression.md) and
+[merkle trees](../../learn/state-compression.md#what-is-a-merkle-tree) to
+drastically reduce the storage cost for NFTs. Instead of storing an NFT's
+metadata in a typical Solana account, compressed NFTs store the metadata within
+the ledger. This allows compressed NFTs to still inherit the security and speed
+of the Solana blockchain, while at the same time reducing the overall storage
+costs.
+
+Even though the on-chain data storage mechanism is different from that of their
+uncompressed counterparts, compressed NFTs still follow the exact same
+[Metadata](https://docs.metaplex.com/programs/token-metadata/accounts#metadata)
+schema/structure, allowing you to define your Collection and NFT in an
+identical way.
+
+However, the process to mint and transfer compressed NFTs is different from
+that of uncompressed NFTs. Aside from using a different on-chain program,
+compressed NFTs are minted into a merkle tree and require verification of a
+"proof" to transfer. More on this below.
+
+### Compressed NFTs and indexers
+
+Since compressed NFTs store all of their metadata in the
+[ledger](../../terminology.md#ledger), instead of in traditional
+[accounts](../../terminology.md#account) like uncompressed NFTs, we will need
+the help of indexing services to quickly fetch our compressed NFT's metadata.
+
+Supporting RPC providers are using the Digital Asset Standard Read API (or "Read
+API" for short) to add additional RPC methods that developers can call. These
+additional, NFT-oriented methods are loaded with all the information about
+particular NFTs, and they support **BOTH** compressed NFTs **AND** uncompressed
+NFTs.
+
+:::caution Metadata is secured by the ledger and cached by indexers
+
+Since validators do not keep a very long history of the recent ledger data,
+these indexers effectively "cache" the compressed NFT metadata passed through
+the Solana ledger, quickly serving it back on request to improve the speed and
+user experience of applications.
+
+However, since the metadata was already secured by the ledger when minting the
+compressed NFT, anyone could re-index the metadata directly from the secure
+ledger, allowing for independent verification of the data should the need or
+desire arise.
+
+:::
+
+These indexing services are already available from some of the common RPC
+providers, with more rolling out support in the near future. To name a few of
+the RPC providers that already support the Read API:
+
+- Helius
+- Triton
+- SimpleHash
+
+### How to mint compressed NFTs
+
+The process to create or mint compressed NFTs on Solana is similar to creating a
+"traditional NFT collection", with a few differences. The mint process will
+happen in 3 primary steps:
+
+- create an NFT collection (or use an existing one)
+- create a
+ [concurrent merkle tree](../../learn/state-compression.md#what-is-a-concurrent-merkle-tree)
+ (using the `@solana/spl-account-compression` SDK)
+- mint compressed NFTs into your tree (to any owner's address you want)
+
+### How to transfer a compressed NFT
+
+Once your compressed NFT exists on the Solana blockchain, the process to
+transfer ownership of a compressed NFT happens in a few broad steps:
+
+1. get the NFT "asset" information (from the indexer)
+2. get the NFT's "proof" (from the indexer)
+3. get the Merkle tree account (from the Solana blockchain)
+4. prepare the asset proof (by parsing and formatting it)
+5. build and send the transfer instruction
+
+The first three steps primarily involve gathering specific pieces of information
+(the `proof` and the tree's canopy depth) for the NFT to be transferred. These
+pieces of information are needed to correctly parse/format the `proof` to
+actually be sent within the transfer instruction itself.
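+
+As a brief, hedged sketch of step 2, a Read API enabled endpoint can be queried
+with a plain JSON-RPC request; the RPC URL and `assetId` below are hypothetical
+placeholders:
+
+```javascript
+// Fetch the compressed NFT's "proof" from a Read API enabled RPC endpoint
+const RPC_URL = "https://my-read-api-rpc.example.com";
+const assetId = "ASSET_ID_BASE58_ADDRESS";
+
+async function getAssetProof(id) {
+  const response = await fetch(RPC_URL, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({
+      jsonrpc: "2.0",
+      id: "my-id",
+      method: "getAssetProof",
+      params: { id },
+    }),
+  });
+  const { result } = await response.json();
+  // `result.proof` (the list of node hashes) is later trimmed using the
+  // tree's canopy depth and passed to the transfer instruction
+  return result;
+}
+
+getAssetProof(assetId).then(console.log);
+```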
+
+## Getting started
+
+For this guide, we are going to make a few assumptions about the compressed NFT
+collection we are going to create:
+
+- we are going to use TypeScript and NodeJS for this example
+- we will use a single, **new** Metaplex collection
+
+### Project Setup
+
+Before we start creating our compressed NFT collection, we need to install a few
+packages:
+
+- [`@solana/web3.js`](https://www.npmjs.com/package/@solana/web3.js) - the base
+ Solana JS SDK for interacting with the blockchain, including making our RPC
+ connection and sending transactions
+- [`@solana/spl-token`](https://www.npmjs.com/package/@solana/spl-token) - used
+ in creating our collection and mint on-chain
+- [`@solana/spl-account-compression`](https://www.npmjs.com/package/@solana/spl-account-compression) -
+ used to create the on-chain tree to store our compressed NFTs
+- [`@metaplex-foundation/mpl-bubblegum`](https://www.npmjs.com/package/@metaplex-foundation/mpl-bubblegum) -
+ used to get the types and helper functions for minting and transferring
+ compressed NFTs on-chain
+- [`@metaplex-foundation/mpl-token-metadata`](https://www.npmjs.com/package/@metaplex-foundation/mpl-token-metadata) -
+  used to get the types and helper functions for our NFT's metadata
+
+Using your preferred package manager (e.g. npm, yarn, pnpm, etc), install these
+packages into your project:
+
+```sh
+yarn add @solana/web3.js @solana/spl-token @solana/spl-account-compression
+```
+
+```sh
+yarn add @metaplex-foundation/mpl-bubblegum @metaplex-foundation/mpl-token-metadata
+```
+
+## Create a Collection
+
+NFTs are normally grouped together into a
+[Collection](https://docs.metaplex.com/programs/token-metadata/certified-collections#collection-nfts)
+using the Metaplex standard. This is true for **BOTH** traditional NFTs **AND**
+compressed NFTs. The NFT Collection will store all the broad metadata for our
+NFT grouping, such as the collection image and name that will appear in wallets
+and explorers.
+
+Under the hood, an NFT collection acts similarly to any other token on Solana.
+More specifically, a Collection is effectively an uncompressed NFT. So we
+actually create one following the same process used to create an
+[SPL token](https://spl.solana.com/token):
+
+- create a new token "mint"
+- create an associated token account (`ata`) for our token mint
+- actually mint a single token
+- store the collection's metadata in an Account on-chain
+
+Since NFT Collections have nothing special to do with
+[State Compression](../../learn/state-compression.md) or
+[compressed NFTs](./compressed-nfts.md), we will not cover creating one in this
+guide.
+
+### Collection addresses
+
+Even though this guide does not cover creating one, we will need many of your
+Collection's addresses (see the sketch after this list), including:
+
+- `collectionAuthority` - this may be your `payer` but it also might not be
+- `collectionMint` - the collection's mint address
+- `collectionMetadata` - the collection's metadata account
+- `editionAccount` - for example, the `masterEditionAccount` created for your
+ collection
+
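+As a point of reference, a minimal sketch of gathering these values might look
+like the following (the placeholder strings are not real addresses; replace
+them with your own collection's accounts):
+
+```ts
+// NOTE: placeholder strings; replace with your collection's actual addresses
+const collectionMint = new PublicKey("INSERT_COLLECTION_MINT_ADDRESS");
+const collectionMetadata = new PublicKey("INSERT_COLLECTION_METADATA_ADDRESS");
+const collectionMasterEditionAccount = new PublicKey(
+  "INSERT_MASTER_EDITION_ADDRESS",
+);
+
+// the authority over the collection (often, but not always, the `payer`)
+const collectionAuthority = payer.publicKey;
+```
+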
+## Create a tree
+
+One of the most important decisions to make when creating compressed NFTs is
+[how to set up your tree](../../learn/state-compression.md#sizing-a-concurrent-merkle-tree).
+This is especially important since the values used to size your tree determine
+the overall cost of creation and **CANNOT** be changed after creation.
+
+:::caution
+
+A tree is **NOT** the same thing as a collection. A single collection can use
+_any_ number of trees. In fact, this is usually recommended for larger
+collections due to smaller trees having greater composability.
+
+Conversely, even though a tree **could** be used in multiple collections, it is
+generally considered an anti-pattern and is not recommended.
+
+:::
+
+Using the helper functions provided by the
+[`@solana/spl-account-compression`](https://www.npmjs.com/package/@solana/spl-account-compression)
+SDK, we can create our tree in the following steps:
+
+- decide on our tree size
+- generate a new Keypair and allocate space for the tree on-chain
+- actually create the tree (making it owned by the Bubblegum program)
+
+### Size your tree
+
+Your tree size is set by 3 values, each serving a very specific purpose:
+
+1. `maxDepth` - used to determine how many NFTs we can have in the tree
+2. `maxBufferSize` - used to determine how many updates to your tree are
+ possible in the same block
+3. `canopyDepth` - used to store a portion of the proof on chain, and as such is
+   a large factor in the cost and composability of your compressed NFT collection
+
+:::info
+
+Read more about the details of
+[State Compression](../../learn/state-compression.md), including
+[how to size a tree](../../learn/state-compression.md#sizing-a-concurrent-merkle-tree)
+and potential composability concerns.
+
+:::
+
+Let's assume we are going to create a compressed NFT collection with 10k NFTs in
+it. Since our collection is relatively small, we only need a single, smaller
+tree to store all the NFTs:
+
+```ts
+// define the depth and buffer size of our tree to be created
+const maxDepthSizePair: ValidDepthSizePair = {
+ // max=16,384 nodes (for a `maxDepth` of 14)
+ maxDepth: 14,
+ maxBufferSize: 64,
+};
+
+// define the canopy depth of our tree to be created
+const canopyDepth = 10;
+```
+
+Setting a `maxDepth` of `14` will allow our tree to hold up to `16,384`
+compressed NFTs, comfortably exceeding our `10k` collection size.
+
+Since only specific
+[`ValidDepthSizePair`](https://solana-labs.github.io/solana-program-library/account-compression/sdk/docs/modules/index.html#ValidDepthSizePair)
+pairs are allowed, simply set the `maxBufferSize` to the corresponding value
+tied to your desired `maxDepth`.
+
+Next, setting a `canopyDepth` of `10` tells our tree to store `10` of our "proof
+node hashes" on-chain, requiring us to always include `4` proof node values
+(i.e. `maxDepth - canopyDepth`) in every compressed NFT transfer instruction.
+
+### Generate addresses for the tree
+
+When creating a new tree, we need to generate a new
+[Keypair](../../terminology.md#keypair) to act as the tree's address:
+
+```ts
+const treeKeypair = Keypair.generate();
+```
+
+Since our tree will be used for compressed NFTs, we will also need to derive the
+tree's authority Account, a PDA owned by the Bubblegum program:
+
+```ts
+// derive the tree's authority (PDA), owned by Bubblegum
+const [treeAuthority, _bump] = PublicKey.findProgramAddressSync(
+ [treeKeypair.publicKey.toBuffer()],
+ BUBBLEGUM_PROGRAM_ID,
+);
+```
+
+### Build the tree creation instructions
+
+With our tree size values defined, and our addresses generated, we need to build
+two related instructions:
+
+1. allocate enough space on-chain for our tree
+2. actually create the tree, owned by the Bubblegum program
+
+Using the
+[`createAllocTreeIx`](https://solana-labs.github.io/solana-program-library/account-compression/sdk/docs/modules/index.html#createAllocTreeIx)
+helper function, we allocate enough space on-chain for our tree.
+
+```ts
+// allocate the tree's account on chain with the `space`
+const allocTreeIx = await createAllocTreeIx(
+ connection,
+ treeKeypair.publicKey,
+ payer.publicKey,
+ maxDepthSizePair,
+ canopyDepth,
+);
+```
+
+Then using the
+[`createCreateTreeInstruction`](https://metaplex-foundation.github.io/metaplex-program-library/docs/bubblegum/functions/createCreateTreeInstruction.html)
+from the Bubblegum SDK, we actually create the tree on-chain, making it owned by
+the Bubblegum program.
+
+```ts
+// create the instruction to actually create the tree
+const createTreeIx = createCreateTreeInstruction(
+ {
+ payer: payer.publicKey,
+ treeCreator: payer.publicKey,
+ treeAuthority,
+ merkleTree: treeKeypair.publicKey,
+ compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID,
+ // NOTE: this is used for some on chain logging
+ logWrapper: SPL_NOOP_PROGRAM_ID,
+ },
+ {
+ maxBufferSize: maxDepthSizePair.maxBufferSize,
+ maxDepth: maxDepthSizePair.maxDepth,
+ public: false,
+ },
+ BUBBLEGUM_PROGRAM_ID,
+);
+```
+
+### Build and send the transaction
+
+With our two instructions built, we can add them into a transaction and send
+them to the blockchain, making sure both the `payer` and generated `treeKeypair`
+sign the transaction:
+
+```ts
+// build the transaction
+const tx = new Transaction().add(allocTreeIx).add(createTreeIx);
+tx.feePayer = payer.publicKey;
+
+// send the transaction
+const txSignature = await sendAndConfirmTransaction(
+ connection,
+ tx,
+  // ensuring the `treeKeypair` and the `payer` are BOTH signers
+ [treeKeypair, payer],
+ {
+ commitment: "confirmed",
+ skipPreflight: true,
+ },
+);
+```
+
+After a few short moments, and once the transaction is confirmed, we are ready
+to start minting compressed NFTs into our tree.
+
+## Mint compressed NFTs
+
+Since compressed NFTs follow the same Metaplex
+[metadata standards](https://docs.metaplex.com/programs/token-metadata/accounts#metadata)
+as traditional NFTs, we can define our NFT's actual data the same way.
+
+The primary difference is that with compressed NFTs the metadata is actually
+stored in the ledger (unlike traditional NFTs, which store it in accounts). The
+metadata gets "hashed" and stored in our tree, and by association, secured by
+the Solana ledger.
+
+This allows us to cryptographically verify that our original metadata has not
+changed (unless we want it to).
+
+:::info
+
+Learn more about how State Compression uses
+[concurrent merkle trees](../../learn/state-compression.md#what-is-a-concurrent-merkle-tree)
+to cryptographically secure off-chain data using the Solana ledger.
+
+:::
+
+### Define our NFT's metadata
+
+We can define the specific metadata for the single NFT we are about to mint:
+
+```ts
+const compressedNFTMetadata: MetadataArgs = {
+ name: "NFT Name",
+ symbol: "ANY",
+ // specific json metadata for each NFT
+ uri: "https://supersweetcollection.notarealurl/token.json",
+ creators: null,
+ editionNonce: 0,
+ uses: null,
+ collection: null,
+ primarySaleHappened: false,
+ sellerFeeBasisPoints: 0,
+ isMutable: false,
+ // these values are taken from the Bubblegum package
+ tokenProgramVersion: TokenProgramVersion.Original,
+ tokenStandard: TokenStandard.NonFungible,
+};
+```
+
+In this demo, the key pieces of our NFT's metadata to note are:
+
+- `name` - this is the actual name of our NFT that will be displayed in wallets
+ and on explorers.
+- `uri` - this is the address for your NFT's metadata JSON file.
+- `creators` - for this example, we are not storing a list of creators. If you
+ want your NFTs to have royalties, you will need to store actual data here. You
+  can check out the Metaplex docs for more info.
+
+### Derive the Bubblegum signer
+
+When minting new compressed NFTs, the Bubblegum program needs a PDA to perform a
+[cross-program invocation](../programming-model/calling-between-programs#cross-program-invocations)
+(`cpi`) to the SPL compression program.
+
+:::caution
+
+This `bubblegumSigner` PDA is derived using a hard coded seed string of
+`collection_cpi` and owned by the Bubblegum program. If this hard coded value is
+not provided correctly, your compressed NFT minting will fail.
+
+:::
+
+Below, we derive this PDA using the **required** hard coded seed string of
+`collection_cpi`:
+
+```ts
+// derive a PDA (owned by Bubblegum) to act as the signer of the compressed minting
+const [bubblegumSigner, _bump2] = PublicKey.findProgramAddressSync(
+ // `collection_cpi` is a custom prefix required by the Bubblegum program
+ [Buffer.from("collection_cpi", "utf8")],
+ BUBBLEGUM_PROGRAM_ID,
+);
+```
+
+### Create the mint instruction
+
+Now we should have all the information we need to actually mint our compressed
+NFT.
+
+Using the `createMintToCollectionV1Instruction` helper function provided in the
+Bubblegum SDK, we can craft the instruction to actually mint our compressed NFT
+directly into our collection.
+
+If you have minted traditional NFTs on Solana, this will look fairly similar. We
+are creating a new instruction, giving several of the account addresses you
+might expect (e.g. the `payer`, `tokenMetadataProgram`, and various collection
+addresses), and then some tree specific addresses.
+
+The addresses to pay special attention to are:
+
+- `leafOwner` - this will be the owner of the compressed NFT. You can either
+  mint it to yourself (i.e. the `payer`), or airdrop it to any other Solana
+  address
+- `leafDelegate` - this is the delegated authority of the specific NFT we are
+  about to mint. If you do not want a delegated authority for this NFT, set this
+  value to the same address as `leafOwner`.
+
+```ts
+const compressedMintIx = createMintToCollectionV1Instruction(
+ {
+ payer: payer.publicKey,
+
+ merkleTree: treeAddress,
+ treeAuthority,
+ treeDelegate: payer.publicKey,
+
+ // set the receiver of the NFT
+ leafOwner: receiverAddress || payer.publicKey,
+ // set a delegated authority over this NFT
+ leafDelegate: payer.publicKey,
+
+ // collection details
+ collectionAuthority: payer.publicKey,
+ collectionAuthorityRecordPda: BUBBLEGUM_PROGRAM_ID,
+ collectionMint: collectionMint,
+ collectionMetadata: collectionMetadata,
+ editionAccount: collectionMasterEditionAccount,
+
+ // other accounts
+ bubblegumSigner: bubblegumSigner,
+ compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID,
+ logWrapper: SPL_NOOP_PROGRAM_ID,
+ tokenMetadataProgram: TOKEN_METADATA_PROGRAM_ID,
+ },
+ {
+ metadataArgs: Object.assign(compressedNFTMetadata, {
+ collection: { key: collectionMint, verified: false },
+ }),
+ },
+);
+```
+
+Some of the other tree specific addresses are:
+
+- `merkleTree` - the address of our tree we created
+- `treeAuthority` - the authority of the tree
+- `treeDelegate` - the delegated authority of the entire tree
+
+Then we also have all of our NFT collection's addresses, including the mint
+address, metadata account, and edition account. These addresses are also
+standard to pass in when minting uncompressed NFTs.
+
+#### Sign and send the transaction
+
+Once our compressed mint instruction has been created, we can add it to a
+transaction and send it to the Solana network:
+
+```ts
+const tx = new Transaction().add(compressedMintIx);
+tx.feePayer = payer.publicKey;
+
+// send the transaction to the cluster
+const txSignature = await sendAndConfirmTransaction(connection, tx, [payer], {
+ commitment: "confirmed",
+ skipPreflight: true,
+});
+```
+
+## Reading compressed NFTs metadata
+
+With the help of a supporting RPC provider, developers can use the Digital Asset
+Standard Read API (or "Read API" for short) to fetch the metadata of NFTs.
+
+:::info
+
+The Read API supports both compressed NFTs and traditional/uncompressed NFTs.
+You can use the same RPC endpoints to retrieve all the assorted information for
+both types of NFTs, including auto-fetching the NFTs' JSON URI.
+
+:::
+
+### Using the Read API
+
+When working with the Read API and a supporting RPC provider, developers can
+make `POST` requests to the RPC endpoint using their preferred method of making
+such requests (e.g. `curl`, JavaScript `fetch()`, etc).
+
+:::warning Asset ID
+
+Within the Read API, digital assets (i.e. NFTs) are indexed by their `id`. This
+asset `id` value differs slightly between traditional NFTs and compressed NFTs:
+
+- for traditional/uncompressed NFTs: this is the token's address for the actual
+ Account on-chain that stores the metadata for the asset.
+- for compressed NFTs: this is the `id` of the compressed NFT within the tree
+ and is **NOT** an actual on-chain Account address. While a compressed NFT's
+ `assetId` resembles a traditional Solana Account address, it is not.
+
+:::
+
+### Common Read API Methods
+
+While the Read API supports more than these listed below, the most commonly used
+methods are:
+
+- `getAsset` - get a specific NFT asset by its `id`
+- `getAssetProof` - returns the merkle proof that is required to transfer a
+ compressed NFT, by its asset `id`
+- `getAssetsByOwner` - get the assets owned by a specific address
+- `getAssetsByGroup` - get the assets by a specific grouping (i.e. a collection)
+
+:::info Read API Methods, Schema, and Specification
+
+Explore all the additional RPC methods added by Digital Asset Standard Read API
+on [Metaplex's RPC Playground](https://metaplex-read-api.surge.sh/). Here you
+will also find the expected inputs and response schema for each supported RPC
+method.
+
+:::
+
+### Example Read API Request
+
+For demonstration, below is an example request for the `getAsset` method using
+the
+[JavaScript Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API),
+which is built into modern JavaScript runtimes:
+
+```ts
+// make a POST request to the RPC using the JavaScript `fetch` api
+const response = await fetch(rpcEndpointUrl, {
+ method: "POST",
+ headers: {
+ "Content-Type": "application/json",
+ },
+ body: JSON.stringify({
+ jsonrpc: "2.0",
+ id: "rpd-op-123",
+ method: "getAsset",
+ params: {
+ id: "5q7qQ4FWYyj4vnFrivRBe6beo6p88X8HTkkyVPjPkQmF",
+ },
+ }),
+});
+```
+
+### Example Read API Response
+
+With a successful response from the RPC, you should see data similar to this:
+
+```ts
+{
+ interface: 'V1_NFT',
+ id: '5q7qQ4FWYyj4vnFrivRBe6beo6p88X8HTkkyVPjPkQmF',
+ content: [Object],
+ authorities: [Array],
+ compression: [Object],
+ grouping: [],
+ royalty: [Object],
+ creators: [],
+ ownership: [Object],
+ supply: [Object],
+ mutable: false
+}
+```
+
+The response fields to pay special attention to are:
+
+- `id` - this is your asset's `id`
+- `grouping` - can tell you the collection address that the NFT belongs to. The
+ collection address will be the `group_value`.
+- `metadata` - contains the actual metadata for the NFT, including the auto
+ fetched JSON uri set when the NFT was minted
+- `ownership` - gives you the NFT owner's address (and also if the NFT has
+ delegated authority to another address)
+- `compression` - tells you if this NFT is actually using compression or not.
+ For compressed NFTs, this will also give you the tree address that is storing
+ the compressed NFT on chain.
+
+:::caution
+
+Some of the returned values may be empty if the NFT is **not** a compressed NFT,
+such as many of the `compression` fields. This is expected.
+
+:::
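+
+Beyond `getAsset`, the same request pattern works for the other Read API
+methods. As a hedged sketch, fetching all assets owned by a wallet with
+`getAssetsByOwner` might look like the following (the optional pagination
+parameters may vary slightly between RPC providers, so consult the RPC
+Playground linked above):
+
+```ts
+// fetch the assets (compressed and uncompressed) owned by a given wallet
+const response = await fetch(rpcEndpointUrl, {
+  method: "POST",
+  headers: {
+    "Content-Type": "application/json",
+  },
+  body: JSON.stringify({
+    jsonrpc: "2.0",
+    id: "rpd-op-124",
+    method: "getAssetsByOwner",
+    params: {
+      // replace with the owner wallet address you want to query
+      ownerAddress: "INSERT_AN_OWNER_WALLET_ADDRESS",
+      page: 1,
+      limit: 100,
+    },
+  }),
+});
+
+const { result } = await response.json();
+console.log("total assets returned:", result.total);
+```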
+
+## Transfer compressed NFTs
+
+Transferring compressed NFTs is different from transferring uncompressed NFTs.
+Aside from using a different on-chain program, compressed NFTs require the use
+of an asset's "merkle proof" (or `proof` for short) to actually change ownership.
+
+:::info What is a merkle proof?
+
+An asset's "merkle proof" is a listing of all the "adjacent hashes" within the
+tree that are required to validate a specific leaf within said tree.
+
+These proof hashes themselves, and the specific asset's leaf data, are hashed
+together in a deterministic way to compute the "root hash", allowing for
+cryptographic validation of an asset within the merkle tree.
+
+**NOTE:** While each of these hash values resemble a Solana Account's
+[address/public key](../../terminology.md#public-key-pubkey), they are not
+addresses.
+
+:::
+
+Transferring ownership of a compressed NFT happens in 5 broad steps:
+
+1. get the NFT's "asset" data (from the indexer)
+2. get the NFT's proof (from the indexer)
+3. get the Merkle tree account (directly from the Solana blockchain)
+4. prepare the asset proof
+5. build and send the transfer instruction
+
+The first three steps primarily involve gathering specific pieces of information
+(the `proof` and the tree's canopy depth) for the NFT to be transferred. These
+pieces of information are needed to correctly parse/format the `proof` to
+actually be sent within the transfer instruction itself.
+
+### Get the asset
+
+To perform the transfer of our compressed NFT, we will need to retrieve a few
+pieces of information about the NFT.
+
+For starters, we will need to get some of the asset's information in order to allow
+the on-chain compression program to correctly perform validation and security
+checks.
+
+We can use the `getAsset` RPC method to retrieve two important pieces of
+information for the compressed NFT: the `data_hash` and `creator_hash`.
+
+#### Example response from the `getAsset` method
+
+Below is an example response from the `getAsset` method:
+
+```ts
+compression: {
+ eligible: false,
+ compressed: true,
+ data_hash: 'D57LAefACeaJesajt6VPAxY4QFXhHjPyZbjq9efrt3jP',
+ creator_hash: '6Q7xtKPmmLihpHGVBA6u1ENE351YKoyqd3ssHACfmXbn',
+ asset_hash: 'F3oDH1mJ47Z7tNBHvrpN5UFf4VAeQSwTtxZeJmn7q3Fh',
+ tree: 'BBUkS4LZQ7mU8iZXYLVGNUjSxCYnB3x44UuPVHVXS9Fo',
+ seq: 3,
+ leaf_id: 0
+}
+```
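+
+Putting this together, a minimal sketch (assuming a Read API enabled
+`rpcEndpointUrl` and the `assetId` of the compressed NFT being transferred) of
+fetching the asset and reading these values might look like this:
+
+```ts
+// request the asset data from a Read API enabled RPC endpoint
+const assetResponse = await fetch(rpcEndpointUrl, {
+  method: "POST",
+  headers: {
+    "Content-Type": "application/json",
+  },
+  body: JSON.stringify({
+    jsonrpc: "2.0",
+    id: "get-asset",
+    method: "getAsset",
+    params: { id: assetId },
+  }),
+});
+
+// extract the `result` from the JSON RPC response
+const asset = (await assetResponse.json()).result;
+
+// values used later when building the transfer instruction
+const { data_hash, creator_hash, leaf_id } = asset.compression;
+```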
+
+### Get the asset proof
+
+The next step in preparing your compressed NFT transfer instruction is to get a
+**valid** asset `proof` to perform the transfer. This proof is required by the
+on-chain compression program to validate on-chain information.
+
+We can use the `getAssetProof` RPC method to retrieve two important pieces of
+information:
+
+- `proof` - the "full proof" that is required to perform the transfer (more on
+ this below)
+- `tree_id` - the on-chain address of the compressed NFT's tree
+
+:::info Full proof is returned
+
+The `getAssetProof` RPC method returns the complete listing of "proof hashes"
+that are used to perform the compressed NFT transfer. Since this "full proof" is
+returned from the RPC, we will need to remove the portion of the "full proof"
+that is stored on-chain via the tree's `canopy`.
+
+:::
+
+#### Example response from the `getAssetProof` method
+
+Below is an example response from the `getAssetProof` method:
+
+```ts
+{
+ root: '7dy5bzgaRcUnNH2KMExwNXXNaCJnf7wQqxc2VrGXy9qr',
+ proof: [
+ 'HdvzZ4hrPEdEarJfEzAavNJEZcCS1YU1fg2uBvQGwAAb',
+ ...
+ '3e2oBSLfSDVdUdS7jRGFKa8nreJUA9sFPEELrHaQyd4J'
+ ],
+ node_index: 131072,
+ leaf: 'F3oDH1mJ47Z7tNBHvrpN5UFf4VAeQSwTtxZeJmn7q3Fh',
+ tree_id: 'BBUkS4LZQ7mU8iZXYLVGNUjSxCYnB3x44UuPVHVXS9Fo'
+}
+```
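+
+Similarly, a minimal sketch (again assuming the same `rpcEndpointUrl` and
+`assetId`) of requesting the proof might look like this:
+
+```ts
+// request the asset's proof from a Read API enabled RPC endpoint
+const proofResponse = await fetch(rpcEndpointUrl, {
+  method: "POST",
+  headers: {
+    "Content-Type": "application/json",
+  },
+  body: JSON.stringify({
+    jsonrpc: "2.0",
+    id: "get-asset-proof",
+    method: "getAssetProof",
+    params: { id: assetId },
+  }),
+});
+
+// extract the `result`, containing the "full proof" and the `tree_id`
+const assetProof = (await proofResponse.json()).result;
+
+// the tree's on-chain address, used to fetch the merkle tree account below
+const treeAddress = new PublicKey(assetProof.tree_id);
+```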
+
+### Get the Merkle tree account
+
+Since `getAssetProof` will always return the "full proof", we will have to
+reduce it down in order to remove the proof hashes that are stored on-chain in
+the tree's canopy. But in order to remove the correct number of proof addresses,
+we need to know the tree's `canopyDepth`.
+
+Once we have our compressed NFT's tree address (the `tree_id` value from
+`getAssetProof`), we can use the
+[`ConcurrentMerkleTreeAccount`](https://solana-labs.github.io/solana-program-library/account-compression/sdk/docs/classes/index.ConcurrentMerkleTreeAccount.html)
+class, from the `@solana/spl-account-compression` SDK:
+
+```ts
+// retrieve the merkle tree's account from the blockchain
+const treeAccount = await ConcurrentMerkleTreeAccount.fromAccountAddress(
+ connection,
+ treeAddress,
+);
+
+// extract the needed values for our transfer instruction
+const treeAuthority = treeAccount.getAuthority();
+const canopyDepth = treeAccount.getCanopyDepth();
+```
+
+For the transfer instruction, we will also need the current `treeAuthority`
+address which we can also get via the `treeAccount`.
+
+### Prepare the asset proof
+
+With our "full proof" and `canopyDepth` values on hand, we can correctly format
+the `proof` to be submitted within the transfer instruction itself.
+
+Since we will use the `createTransferInstruction` helper function from the
+Bubblegum SDK to actually build our transfer instruction, we need to:
+
+- remove the proof values that are already stored on-chain in the
+ [tree's canopy](../../learn/state-compression.md#canopy-depth), and
+- convert the remaining proof values into the valid `AccountMeta` structure that
+ the instruction builder function accepts
+
+```ts
+// parse the list of proof addresses into a valid AccountMeta[]
+const proof: AccountMeta[] = assetProof.proof
+ .slice(0, assetProof.proof.length - (!!canopyDepth ? canopyDepth : 0))
+ .map((node: string) => ({
+ pubkey: new PublicKey(node),
+ isSigner: false,
+ isWritable: false,
+ }));
+```
+
+In the TypeScript code example above, we are first taking a `slice` of our "full
+proof", starting at the beginning of the array, and ensuring we only have
+`proof.length - canopyDepth` number of proof values. This will remove the
+portion of the proof that is already stored on-chain in the tree's canopy.
+
+Then we are structuring each of the remaining proof values as a valid
+`AccountMeta`, since the proof is submitted on-chain in the form of "extra
+accounts" within the transfer instruction.
+
+### Build the transfer instruction
+
+Finally, with all the required pieces of data about our tree and compressed
+NFTs, and a correctly formatted proof, we are ready to actually create the
+transfer instruction.
+
+Build your transfer instruction using the
+[`createTransferInstruction`](https://metaplex-foundation.github.io/metaplex-program-library/docs/bubblegum/functions/createTransferInstruction.html)
+helper function from the Bubblegum SDK:
+
+```ts
+// create the NFT transfer instruction (via the Bubblegum package)
+const transferIx = createTransferInstruction(
+ {
+ merkleTree: treeAddress,
+ treeAuthority,
+ leafOwner,
+ leafDelegate,
+ newLeafOwner,
+ logWrapper: SPL_NOOP_PROGRAM_ID,
+ compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID,
+ anchorRemainingAccounts: proof,
+ },
+ {
+ root: [...new PublicKey(assetProof.root.trim()).toBytes()],
+ dataHash: [...new PublicKey(asset.compression.data_hash.trim()).toBytes()],
+ creatorHash: [
+ ...new PublicKey(asset.compression.creator_hash.trim()).toBytes(),
+ ],
+ nonce: asset.compression.leaf_id,
+ index: asset.compression.leaf_id,
+ },
+ BUBBLEGUM_PROGRAM_ID,
+);
+```
+
+Aside from passing in our assorted Account addresses and the asset's proof, we
+are converting the string values of our `data_hash`, `creator_hash`, `root` hash
+into an array of bytes that is accepted by the `createTransferInstruction`
+helper function.
+
+Since each of these hash values resemble and are formatted similar to
+PublicKeys, we can use the
+[`PublicKey`](https://solana-labs.github.io/solana-web3.js/classes/PublicKey.html)
+class in web3.js to convert them into a accepted byte array format.
+
+#### Send the transaction
+
+With our transfer instruction built, we can add it to a transaction and send it
+to the blockchain similar to before, making sure either the current `leafOwner`
+or the `leafDelegate` signs the transaction.
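+
+A minimal sketch (assuming the `payer` keypair is also the NFT's current
+`leafOwner`) might look like this:
+
+```ts
+// build the transaction with our transfer instruction
+const tx = new Transaction().add(transferIx);
+tx.feePayer = payer.publicKey;
+
+// send the transaction, signed by the current `leafOwner` (here, the `payer`)
+const txSignature = await sendAndConfirmTransaction(connection, tx, [payer], {
+  commitment: "confirmed",
+  skipPreflight: true,
+});
+```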
+
+:::note
+
+After each successful transfer of a compressed NFT, the `leafDelegate` should
+reset to an empty value, meaning the specific asset will not have delegated
+authority to an address other than its owner.
+
+:::
+
+And once confirmed by the cluster, we will have successfully transferred a
+compressed NFT.
+
+## Example code repository
+
+You can find an example code repository for this developer guide on the Solana
+Developers GitHub: https://github.com/solana-developers/compressed-nfts
diff --git a/docs/developing/intro/programs.md b/docs/developing/intro/programs.md
new file mode 100644
index 000000000..886723a2d
--- /dev/null
+++ b/docs/developing/intro/programs.md
@@ -0,0 +1,89 @@
+---
+title: What are Solana Programs?
+description:
+ "A Solana Program, aka smart contract, is the executable code that interprets
+ the instructions on the blockchain. There are two types: Native and on chain."
+---
+
+Solana Programs, often referred to as "_smart contracts_" on other blockchains,
+are the executable code that interprets the instructions sent inside of each
+transaction on the blockchain. They can be deployed directly into the core of
+the network as [Native Programs](#native-programs), or published by anyone as
+[On Chain Programs](#on-chain-programs). Programs are the core building blocks
+of the network and handle everything from sending tokens between wallets, to
+accepting votes for a DAO, to tracking ownership of NFTs.
+
+Both types of programs run on top of the
+[Sealevel runtime](https://medium.com/solana-labs/sealevel-parallel-processing-thousands-of-smart-contracts-d814b378192),
+which is Solana's _parallel processing_ model that helps to enable the high
+transaction speeds of the blockchain.
+
+## Key points
+
+- Programs are essentially a special type of
+  [Account](../programming-model/accounts.md) that is marked as "_executable_"
+- Programs can own other Accounts
+- Programs can only _change the data_ or _debit_ accounts they own
+- Any program can _read_ or _credit_ another account
+- Programs are considered stateless since the primary data stored in a program
+ account is the compiled SBF code
+- Programs can be upgraded by their owner (see more on that below)
+
+## Types of programs
+
+The Solana blockchain has two types of programs:
+
+- Native programs
+- On chain programs
+
+### On chain programs
+
+These user written programs, often referred to as "_smart contracts_" on other
+blockchains, are deployed directly to the blockchain for anyone to interact with
+and execute. Hence the name "on chain"!
+
+In effect, "on chain programs" are any program that is not baked directly into
+the Solana cluster's core code (like the native programs discussed below).
+
+And even though Solana Labs maintains a small subset of these on chain programs
+(collectively known as the [Solana Program Library](https://spl.solana.com/)),
+anyone can create or publish one. On chain programs can also be updated directly
+on the blockchain by the respective program's Account owner.
+
+### Native programs
+
+_Native programs_ are programs that are built directly into the core of the
+Solana blockchain.
+
+Similar to other "on chain" programs in Solana, native programs can be called by
+any other program/user. However, they can only be upgraded as part of the core
+blockchain and cluster updates. These native program upgrades are controlled via
+the releases to the [different clusters](../../cluster/overview.md).
+
+#### Examples of native programs include:
+
+- [System Program](../runtime-facilities/programs.md#system-program): Create new
+ accounts, transfer tokens, and more
+- [BPF Loader Program](../runtime-facilities/programs.md#bpf-loader): Deploys,
+ upgrades, and executes programs on chain
+- [Vote program](../runtime-facilities/programs.md#vote-program): Create and
+ manage accounts that track validator voting state and rewards.
+
+## Executable
+
+When a Solana program is deployed onto the network, it is marked as "executable"
+by the [BPF Loader Program](../runtime-facilities/programs.md#bpf-loader). This
+allows the Solana runtime to efficiently and properly execute the compiled
+program code.
+
+## Upgradable
+
+Unlike other blockchains, Solana programs can be upgraded after they are
+deployed to the network.
+
+Native programs can only be upgraded as part of cluster updates when new
+software releases are made.
+
+On chain programs can be upgraded by the account that is marked as the "_Upgrade
+Authority_", which is usually the Solana account/address that deployed the
+program to begin with.
diff --git a/docs/developing/intro/rent.md b/docs/developing/intro/rent.md
new file mode 100644
index 000000000..b0802f569
--- /dev/null
+++ b/docs/developing/intro/rent.md
@@ -0,0 +1,70 @@
+---
+title: What is rent?
+description:
+ "Rent: the small fee Solana accounts incur to store data on the blockchain.
+ Accounts with >2 years of rent are rent exempt and do not pay the periodic
+ fee."
+---
+
+The fee for every Solana Account to store data on the blockchain is called
+"_rent_". This _time and space_ based fee is required to keep an account, and
+therefore its data, alive on the blockchain since
+[clusters](../../cluster/overview.md) must actively maintain this data.
+
+All Solana Accounts (and therefore Programs) are required to maintain a high
+enough LAMPORT balance to become [rent exempt](#rent-exempt) and remain on the
+Solana blockchain.
+
+When an Account no longer has enough LAMPORTS to pay its rent, it will be
+removed from the network in a process known as
+[Garbage Collection](#garbage-collection).
+
+> **Note:** Rent is different from
+> [transaction fees](../../transaction_fees.md). Rent is paid (or held in an
+> Account) to keep data stored on the Solana blockchain. Whereas transaction
+> fees are paid to process
+> [instructions](../developing/../programming-model/transactions.md#instructions)
+> on the network.
+
+### Rent rate
+
+The Solana rent rate is set on a network wide basis, primarily based on the set
+LAMPORTS _per_ byte _per_ year.
+
+Currently, the rent rate is a static amount and stored in the
+[Rent sysvar](../runtime-facilities/sysvars.md#rent).
+
+## Rent exempt
+
+Accounts that maintain a minimum LAMPORT balance greater than 2 years worth of
+rent payments are considered "_rent exempt_" and will not incur a rent
+collection.
+
+> At the time of writing this, new Accounts and Programs **are required** to be
+> initialized with enough LAMPORTS to become rent-exempt. The RPC endpoints can
+> calculate this
+> [estimated rent exempt balance](../../api/http#getminimumbalanceforrentexemption)
+> for you, and it is recommended that you use them.
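+
+For example, a minimal web3.js sketch (assuming an existing `Connection`, and
+using 1500 bytes purely as an illustrative account size) of fetching this rent
+exempt minimum:
+
+```ts
+import { Connection, clusterApiUrl } from "@solana/web3.js";
+
+const connection = new Connection(clusterApiUrl("devnet"));
+
+// number of bytes of data the new account will store (illustrative value)
+const dataLength = 1500;
+
+// minimum balance (in lamports) the account must hold to be rent exempt
+const rentExemptBalance =
+  await connection.getMinimumBalanceForRentExemption(dataLength);
+
+console.log(`rent exempt minimum for ${dataLength} bytes:`, rentExemptBalance);
+```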
+
+Every time an account's balance is reduced, a check is performed to see if the
+account is still rent exempt. Transactions that would cause an account's balance
+to drop below the rent exempt threshold will fail.
+
+## Garbage collection
+
+Accounts that do not maintain their rent exempt status, and do not have a balance
+high enough to pay rent, are removed from the network in a process known as _garbage
+collection_. This process is done to help reduce the network wide storage of no
+longer used/maintained data.
+
+You can learn more about
+[garbage collection here](../../implemented-proposals/persistent-account-storage.md#garbage-collection)
+in this implemented proposal.
+
+## Learn more about Rent
+
+You can learn more about Solana Rent with the following articles and
+documentation:
+
+- [Implemented Proposals - Rent](../../implemented-proposals/rent.md)
+- [Implemented Proposals - Account Storage](../../implemented-proposals/persistent-account-storage.md)
diff --git a/docs/developing/intro/transaction_fees.md b/docs/developing/intro/transaction_fees.md
new file mode 100644
index 000000000..d32bbf653
--- /dev/null
+++ b/docs/developing/intro/transaction_fees.md
@@ -0,0 +1,128 @@
+---
+title: Transaction Fees
+description:
+ "Transaction fees are the small fees paid to process instructions on the
+ network. These fees are based on computation and an optional prioritization
+ fee."
+keywords:
+ - instruction fee
+ - processing fee
+ - storage fee
+ - low fee blockchain
+ - gas
+ - gwei
+ - cheap network
+ - affordable blockchain
+---
+
+The small fees paid to process
+[instructions](./../../terminology.md#instruction) on the Solana blockchain are
+known as "_transaction fees_".
+
+As each transaction (which contains one or more instructions) is sent through
+the network, it gets processed by the current leader validation-client. Once
+confirmed as a global state transaction, this _transaction fee_ is paid to the
+network to help support the economic design of the Solana blockchain.
+
+> NOTE: Transactions fees are different from the blockchain's data storage fee
+> called [rent](./rent.md)
+
+### Transaction Fee Calculation
+
+Currently, the amount of resources consumed by a transaction does not impact fees
+in any way. This is because the runtime imposes a small cap on the amount of
+resources that transaction instructions can use, not to mention that the size of
+transactions is limited as well. So right now, transaction fees are solely
+determined by the number of signatures that need to be verified in a
+transaction. The only limit on the number of signatures in a transaction is the
+max size of the transaction itself. Each signature (64 bytes) in a transaction (max
+1232 bytes) must reference a unique public key (32 bytes) so a single
+transaction could contain as many as 12 signatures (not sure why you would do
+that). The fee per transaction signature can be fetched with the `solana` cli:
+
+```bash
+$ solana fees
+Blockhash: 8eULQbYYp67o5tGF2gxACnBCKAE39TetbYYMGTx3iBFc
+Lamports per signature: 5000
+Last valid block height: 94236543
+```
+
+The `solana` cli `fees` subcommand calls the `getFees` RPC API method to
+retrieve the above output information, so your application can call that method
+directly as well:
+
+```bash
+$ curl http://api.mainnet-beta.solana.com -H "Content-Type: application/json" -d '
+ {"jsonrpc":"2.0","id":1, "method":"getFees"}
+'
+
+# RESULT (lastValidSlot removed since it's inaccurate)
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "context": {
+ "slot": 106818885
+ },
+ "value": {
+ "blockhash": "78e3YBCMXJBiPD1HpyVtVfFzZFPG6nUycnQcyNMSUQzB",
+ "feeCalculator": {
+ "lamportsPerSignature": 5000
+ },
+ "lastValidBlockHeight": 96137823
+ }
+ },
+ "id": 1
+}
+```
+
+### Fee Determinism
+
+It's important to keep in mind that fee rates (such as `lamports_per_signature`)
+are subject to change from block to block (though that hasn't happened in the
+full history of the `mainnet-beta` cluster). Despite the fact that fees can
+fluctuate, fees for a transaction can still be calculated deterministically when
+creating (and before signing) a transaction. This determinism comes from the
+fact that fees are applied using the rates from the block whose blockhash
+matches the `recent_blockhash` field in a transaction. Blockhashes can only be
+referenced by a transaction for a few minutes before they expire.
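+
+For example, a minimal web3.js sketch (assuming a recent `@solana/web3.js`
+release that exposes `getFeeForMessage`, and an existing `payer` keypair) of
+calculating a transaction's fee before signing it:
+
+```ts
+import {
+  Connection,
+  SystemProgram,
+  Transaction,
+  clusterApiUrl,
+} from "@solana/web3.js";
+
+const connection = new Connection(clusterApiUrl("devnet"));
+
+// fetch a recent blockhash; the fee rates of its block determine the fee
+const { blockhash } = await connection.getLatestBlockhash();
+
+// build (but do not sign) a simple transfer transaction
+const tx = new Transaction({
+  feePayer: payer.publicKey,
+  recentBlockhash: blockhash,
+}).add(
+  SystemProgram.transfer({
+    fromPubkey: payer.publicKey,
+    toPubkey: payer.publicKey,
+    lamports: 1_000,
+  }),
+);
+
+// the fee only depends on the compiled message, so it can be computed before signing
+const { value: fee } = await connection.getFeeForMessage(tx.compileMessage());
+console.log("expected fee in lamports:", fee); // e.g. 5000 for a single signature
+```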
+
+Transactions with expired blockhashes will be ignored and dropped by the
+cluster, so it's important to understand how expiration actually works. Before
+transactions are added to a block and during block validation,
+[each transaction's recent blockhash is checked](https://github.com/solana-labs/solana/blob/647aa926673e3df4443d8b3d9e3f759e8ca2c44b/runtime/src/bank.rs#L3482)
+to ensure it hasn't expired yet. The max age of a transaction's blockhash is
+only 150 blocks. This means that if no slots are skipped in between, the
+blockhash for block 100 would be usable by transactions processed in blocks 101
+to 252, inclusive (during block 101 the age of block 100 is "0" and during block
+252 its age is "150"). However, it's important to remember that slots may be
+skipped and that age checks use "block height" _not_ "slot height". Since slots
+are skipped occasionally, the actual age of a blockhash can be a bit longer than
+150 slots. At the time of writing, slot times are about 500ms and skip rate is
+about 5% so the expected lifetime of a transaction which uses the most recent
+blockhash is about 1min 19s.
+
+### Fee Collection
+
+Transactions are required to have at least one account which has signed the
+transaction and is writable. Writable signer accounts are serialized first in
+the list of transaction accounts and the first of these accounts is always used
+as the "fee payer".
+
+Before any transaction instructions are processed, the fee payer account balance
+will be deducted to pay for transaction fees. If the fee payer balance is not
+sufficient to cover transaction fees, the transaction will be dropped by the
+cluster. If the balance was sufficient, the fees will be deducted whether the
+transaction is processed successfully or not. In fact, if any of the transaction
+instructions return an error or violate runtime restrictions, all account
+changes _except_ the transaction fee deduction will be rolled back.
+
+### Fee Distribution
+
+Transaction fees are partially burned and the remaining fees are collected by
+the validator that produced the block that the corresponding transactions were
+included in. The transaction fee burn rate was initialized as 50% when inflation
+rewards were enabled at the beginning of 2021 and has not changed so far. These
+fees incentivize a validator to process as many transactions as possible during
+its slots in the leader schedule. Collected fees are deposited in the
+validator's account (listed in the leader schedule for the current slot) after
+processing all of the transactions included in a block.
diff --git a/docs/developing/lookup-tables.md b/docs/developing/lookup-tables.md
new file mode 100644
index 000000000..9c2a79e59
--- /dev/null
+++ b/docs/developing/lookup-tables.md
@@ -0,0 +1,188 @@
+---
+title: Address Lookup Tables
+description: ""
+---
+
+Address Lookup Tables, commonly referred to as "_lookup tables_" or "_ALTs_" for
+short, allow developers to create a collection of related addresses to
+efficiently load more addresses in a single transaction.
+
+Since each transaction on the Solana blockchain requires a listing of every
+address that is interacted with as part of the transaction, this listing would
+effectively be capped at 32 addresses per transaction. With the help of
+[Address Lookup Tables](./lookup-tables.md), a transaction would now be able to
+raise that limit to 256 addresses per transaction.
+
+## Compressing on chain addresses
+
+After all the desired addresses have been stored on chain in an Address Lookup
+Table, each address can be referenced inside a transaction by its 1-byte index
+within the table (instead of their full 32-byte address). This lookup method
+effectively "_compresses_" a 32-byte address into a 1-byte index value.
+
+This "_compression_" enables storing up to 256 addresses in a single lookup
+table for use inside any given transaction.
+
+## Versioned Transactions
+
+To utilize an Address Lookup Table inside a transaction, developers must use v0
+transactions that were introduced with the new
+[Versioned Transaction format](./versioned-transactions.md).
+
+## How to create an address lookup table
+
+Creating a new lookup table with the `@solana/web3.js` library is similar to the
+older `legacy` transactions, but with some differences.
+
+Using the `@solana/web3.js` library, you can use the
+[`createLookupTable`](https://solana-labs.github.io/solana-web3.js/classes/AddressLookupTableProgram.html#createLookupTable)
+function to construct the instruction needed to create a new lookup table, as
+well as determine its address:
+
+```js
+const web3 = require("@solana/web3.js");
+
+// connect to a cluster and get the current `slot`
+const connection = new web3.Connection(web3.clusterApiUrl("devnet"));
+const slot = await connection.getSlot();
+
+// Assumption:
+// `payer` is a valid `Keypair` with enough SOL to pay for the execution
+
+const [lookupTableInst, lookupTableAddress] =
+ web3.AddressLookupTableProgram.createLookupTable({
+ authority: payer.publicKey,
+ payer: payer.publicKey,
+ recentSlot: slot,
+ });
+
+console.log("lookup table address:", lookupTableAddress.toBase58());
+
+// To create the Address Lookup Table on chain:
+// send the `lookupTableInst` instruction in a transaction
+```
+
+> NOTE: Address lookup tables can be **created** with either a `v0` transaction
+> or a `legacy` transaction. But the Solana runtime can only retrieve and handle
+> the additional addresses within a lookup table while using
+> [v0 Versioned Transactions](./versioned-transactions.md#current-transaction-versions).
+
+## Add addresses to a lookup table
+
+Adding addresses to a lookup table is known as "_extending_". Using the
+`@solana/web3.js` library, you can create a new _extend_ instruction using the
+[`extendLookupTable`](https://solana-labs.github.io/solana-web3.js/classes/AddressLookupTableProgram.html#extendLookupTable)
+method:
+
+```js
+// add addresses to the `lookupTableAddress` table via an `extend` instruction
+const extendInstruction = web3.AddressLookupTableProgram.extendLookupTable({
+ payer: payer.publicKey,
+ authority: payer.publicKey,
+ lookupTable: lookupTableAddress,
+ addresses: [
+ payer.publicKey,
+ web3.SystemProgram.programId,
+ // list more `publicKey` addresses here
+ ],
+});
+
+// Send this `extendInstruction` in a transaction to the cluster
+// to insert the listing of `addresses` into your lookup table with address `lookupTableAddress`
+```
+
+> NOTE: Due to the same memory limits of `legacy` transactions, any transaction
+> used to _extend_ an Address Lookup Table is also limited in how many addresses
+> can be added at a time. Because of this, you will need to use multiple
+> transactions to _extend_ any table with more addresses (~20) than can fit
+> within a single transaction's memory limits.
+
+Once these addresses have been inserted into the table, and stored on chain, you
+will be able to utilize the Address Lookup Table in future transactions,
+enabling up to 256 addresses in those future transactions.
+
+## Fetch an Address Lookup Table
+
+Similar to requesting another account (or PDA) from the cluster, you can fetch a
+complete Address Lookup Table with the
+[`getAddressLookupTable`](https://solana-labs.github.io/solana-web3.js/classes/Connection.html#getAddressLookupTable)
+method:
+
+```js
+// define the `PublicKey` of the lookup table to fetch
+const lookupTableAddress = new web3.PublicKey("");
+
+// get the table from the cluster
+const lookupTableAccount = (
+ await connection.getAddressLookupTable(lookupTableAddress)
+).value;
+
+// `lookupTableAccount` will now be an `AddressLookupTableAccount` object
+
+console.log("Table address from cluster:", lookupTableAccount.key.toBase58());
+```
+
+Our `lookupTableAccount` variable will now be an `AddressLookupTableAccount`
+object which we can parse to read the listing of all the addresses stored on
+chain in the lookup table:
+
+```js
+// loop through and parse all the addresses stored in the table
+for (let i = 0; i < lookupTableAccount.state.addresses.length; i++) {
+ const address = lookupTableAccount.state.addresses[i];
+ console.log(i, address.toBase58());
+}
+```
+
+## How to use an address lookup table in a transaction
+
+After you have created your lookup table, and stored your needed address on
+chain (via extending the lookup table), you can create a `v0` transaction to
+utilize the on chain lookup capabilities.
+
+Just like older `legacy` transactions, you can create all the
+[instructions](./../terminology.md#instruction) your transaction will execute on
+chain. You can then provide an array of these instructions to the
+[Message](./../terminology.md#message) used in the `v0` transaction.
+
+> NOTE: The instructions used inside a `v0` transaction can be constructed using
+> the same methods and functions used to create the instructions in the past.
+> There is no required change to the instructions used involving an Address
+> Lookup Table.
+
+```js
+// Assumptions:
+// - `arrayOfInstructions` has been created as an `array` of `TransactionInstruction`
+// - we are using the `lookupTableAccount` obtained above
+
+// construct a v0 compatible transaction `Message`
+const messageV0 = new web3.TransactionMessage({
+ payerKey: payer.publicKey,
+ recentBlockhash: blockhash,
+ instructions: arrayOfInstructions, // note this is an array of instructions
+}).compileToV0Message([lookupTableAccount]);
+
+// create a v0 transaction from the v0 message
+const transactionV0 = new web3.VersionedTransaction(messageV0);
+
+// sign the v0 transaction using the file system wallet we created named `payer`
+transactionV0.sign([payer]);
+
+// send and confirm the transaction
+// (NOTE: There is NOT an array of Signers here; see the note below...)
+const txid = await web3.sendAndConfirmTransaction(connection, transactionV0);
+
+console.log(
+ `Transaction: https://explorer.solana.com/tx/${txid}?cluster=devnet`,
+);
+```
+
+> NOTE: When sending a `VersionedTransaction` to the cluster, it must be signed
+> BEFORE calling the `sendAndConfirmTransaction` method. If you pass an array of
+> `Signer` (like with `legacy` transactions) the method will trigger an error!
+
+## More Resources
+
+- Read the [proposal](./../proposals/versioned-transactions.md) for Address
+ Lookup Tables and Versioned transactions
+- [Example Rust program using Address Lookup Tables](https://github.com/TeamRaccoons/address-lookup-table-multi-swap)
diff --git a/docs/developing/on-chain-programs/debugging.md b/docs/developing/on-chain-programs/debugging.md
new file mode 100644
index 000000000..3a8d684c1
--- /dev/null
+++ b/docs/developing/on-chain-programs/debugging.md
@@ -0,0 +1,268 @@
+---
+title: "Debugging Programs"
+---
+
+Solana programs run on-chain, so debugging them in the wild can be challenging.
+To make debugging programs easier, developers can write unit tests that directly
+test their program's execution via the Solana runtime, or run a local cluster
+that will allow RPC clients to interact with their program.
+
+## Running unit tests
+
+- [Testing with Rust](developing-rust.md#how-to-test)
+- [Testing with C](developing-c.md#how-to-test)
+
+## Logging
+
+During program execution both the runtime and the program log status and error
+messages.
+
+For information about how to log from a program see the language specific
+documentation:
+
+- [Logging from a Rust program](developing-rust.md#logging)
+- [Logging from a C program](developing-c.md#logging)
+
+When running a local cluster the logs are written to stdout as long as they are
+enabled via the `RUST_LOG` log mask. From the perspective of program development
+it is helpful to focus on just the runtime and program logs and not the rest of
+the cluster logs. To focus in on program specific information the following log
+mask is recommended:
+
+`export RUST_LOG=solana_runtime::system_instruction_processor=trace,solana_runtime::message_processor=info,solana_bpf_loader=debug,solana_rbpf=debug`
+
+Log messages coming directly from the program (not the runtime) will be
+displayed in the form:
+
+`Program log: <user defined message>`
+
+## Error Handling
+
+The amount of information that can be communicated via a transaction error is
+limited but there are many points of possible failures. The following are
+possible failure points and information about what errors to expect and where to
+get more information:
+
+- The SBF loader may fail to parse the program; this should not happen since the
+ loader has already _finalized_ the program's account data.
+ - `InstructionError::InvalidAccountData` will be returned as part of the
+ transaction error.
+- The SBF loader may fail to setup the program's execution environment
+ - `InstructionError::Custom(0x0b9f_0001)` will be returned as part of the
+ transaction error. "0x0b9f_0001" is the hexadecimal representation of
+ [`VirtualMachineCreationFailed`](https://github.com/solana-labs/solana/blob/bc7133d7526a041d1aaee807b80922baa89b6f90/programs/bpf_loader/src/lib.rs#L44).
+- The SBF loader may have detected a fatal error during program executions
+ (things like panics, memory violations, system call errors, etc...)
+ - `InstructionError::Custom(0x0b9f_0002)` will be returned as part of the
+ transaction error. "0x0b9f_0002" is the hexadecimal representation of
+ [`VirtualMachineFailedToRunProgram`](https://github.com/solana-labs/solana/blob/bc7133d7526a041d1aaee807b80922baa89b6f90/programs/bpf_loader/src/lib.rs#L46).
+- The program itself may return an error
+  - `InstructionError::Custom(<user defined value>)` will be returned. The "user
+ defined value" must not conflict with any of the
+ [builtin runtime program errors](https://github.com/solana-labs/solana/blob/bc7133d7526a041d1aaee807b80922baa89b6f90/sdk/program/src/program_error.rs#L87).
+ Programs typically use enumeration types to define error codes starting at
+ zero so they won't conflict.
+
+In the case of `VirtualMachineFailedToRunProgram` errors, more information about
+the specifics of what failed are written to the
+[program's execution logs](debugging.md#logging).
+
+For example, an access violation involving the stack will look something like
+this:
+
+`SBF program 4uQeVj5tqViQh7yWWGStvkEG1Zmhx6uasJtWCJziofM failed: out of bounds memory store (insn #615), addr 0x200001e38/8`
+
+## Monitoring Compute Budget Consumption
+
+The program can log the remaining number of compute units it will be allowed
+before program execution is halted. Programs can use these logs to wrap
+operations they wish to profile.
+
+- [Log the remaining compute units from a Rust program](developing-rust.md#compute-budget)
+- [Log the remaining compute units from a C program](developing-c.md#compute-budget)
+
+See [compute budget](developing/programming-model/runtime.md#compute-budget) for
+more information.
+
+## ELF Dump
+
+The SBF shared object internals can be dumped to a text file to gain more
+insight into a program's composition and what it may be doing at runtime.
+
+- [Create a dump file of a Rust program](developing-rust.md#elf-dump)
+- [Create a dump file of a C program](developing-c.md#elf-dump)
+
+## Instruction Tracing
+
+During execution the runtime SBF interpreter can be configured to log a trace
+message for each SBF instruction executed. This can be very helpful for things
+like pin-pointing the runtime context leading up to a memory access violation.
+
+The trace logs together with the [ELF dump](#elf-dump) can provide a lot of
+insight (though the traces produce a lot of information).
+
+To turn on SBF interpreter trace messages in a local cluster configure the
+`solana_rbpf` level in `RUST_LOG` to `trace`. For example:
+
+`export RUST_LOG=solana_rbpf=trace`
+
+## Source level debugging
+
+Source level debugging of on-chain programs written in Rust or C can be done
+using the `program run` subcommand of `solana-ledger-tool` and lldb, which is
+distributed with the Solana Rust and Clang compiler binary package,
+platform-tools.
+
+The `solana-ledger-tool program run` subcommand loads a compiled on-chain
+program, executes it in RBPF virtual machine and runs a gdb server that accepts
+incoming connections from LLDB or GDB. Once lldb is connected to
+`solana-ledger-tool` gdbserver, it can control execution of an on-chain program.
+Run `solana-ledger-tool program run --help` for an example of specifying input
+data for parameters of the program entrypoint function.
+
+To compile a program for debugging, use the cargo-build-sbf build utility with
+the command line option `--debug`. The utility will generate two loadable files:
+one is a usual loadable module with the extension `.so`, and the other is the
+same loadable module but containing Dwarf debug information, a file with the
+extension `.debug`.
+
+To execute a program in the debugger, run `solana-ledger-tool program run` with
+the `-e debugger` command line option. For example, if a crate named
+'helloworld' is compiled and an executable program is built in the
+`target/deploy` directory, there should be three files in that directory:
+
+- helloworld-keypair.json -- a keypair for deploying the program,
+- helloworld.debug -- a binary file containing debug information,
+- helloworld.so -- an executable file loadable into the virtual machine.
+
+The command line for running `solana-ledger-tool` would be something like this
+
+```
+solana-ledger-tool program run -l test-ledger -e debugger target/deploy/helloworld.so
+```
+
+Note that `solana-ledger-tool` always loads a ledger database. Most on-chain
+programs interact with a ledger in some manner. Even if a ledger is not needed
+for debugging purposes, it has to be provided to `solana-ledger-tool`. A minimal
+ledger database can be created by running `solana-test-validator`, which creates
+a ledger in `test-ledger` subdirectory.
+
+In debugger mode `solana-ledger-tool program run` loads an `.so` file and starts
+listening for an incoming connection from a debugger
+
+```
+Waiting for a Debugger connection on "127.0.0.1:9001"...
+```
+
+To connect to `solana-ledger-tool` and execute the program, run lldb. For
+debugging Rust programs, it may be beneficial to run the solana-lldb wrapper for
+lldb. At a new shell prompt (other than the one used to start
+`solana-ledger-tool`), run the command
+
+```
+solana-lldb
+```
+
+This script is installed in the platform-tools path. If that path is not added
+to the `PATH` environment variable, it may be necessary to specify the full
+path, e.g.
+
+```
+~/.cache/solana/v1.35/platform-tools/llvm/bin/solana-lldb
+```
+
+After starting the debugger, load the .debug file by entering the following
+command at the debugger prompt
+
+```
+(lldb) file target/deploy/helloworld.debug
+```
+
+If the debugger finds the file, it will print something like this
+
+```
+Current executable set to '/path/helloworld.debug' (bpf).
+```
+
+Now, connect to the gdb server that `solana-ledger-tool` implements, and debug
+the program as usual. Enter the following command at lldb prompt
+
+```
+(lldb) gdb-remote 127.0.0.1:9001
+```
+
+If the debugger and the gdb server establish a connection, the execution of the
+program will be stopped at the entrypoint function, and lldb should print
+several lines of the source code around the entrypoint function signature. From
+this point on, normal lldb commands can be used to control execution of the
+program being debugged.
+
+### Debugging in an IDE
+
+To debug on-chain programs in Visual Studio Code, install the CodeLLDB
+extension. Open the CodeLLDB Extension Settings. In Advanced settings, change
+the value of the `Lldb: Library` field to the path of `liblldb.so` (or
+liblldb.dylib on macOS). For example, on Linux a possible path to the Solana
+customized lldb can be
+`/home/<username>/.cache/solana/v1.33/platform-tools/llvm/lib/liblldb.so`, where
+`<username>` is your Linux system username. This can also be added directly to
+the `~/.config/Code/User/settings.json` file, e.g.
+
+```
+{
+ "lldb.library": "/home//.cache/solana/v1.35/platform-tools/llvm/lib/liblldb.so"
+}
+```
+
+In the `.vscode` subdirectory of your on-chain project, create two files:
+
+The first file is `tasks.json`, with the following content:
+
+```
+{
+ "version": "2.0.0",
+ "tasks": [
+ {
+ "label": "build",
+ "type": "shell",
+ "command": "cargo build-sbf --debug",
+ "problemMatcher": [],
+ "group": {
+ "kind": "build",
+ "isDefault": true
+ }
+ },
+ {
+ "label": "solana-debugger",
+ "type": "shell",
+ "command": "solana-ledger-tool program run -l test-ledger -e debugger ${workspaceFolder}/target/deploy/helloworld.so"
+ }
+ ]
+}
+```
+
+The first task is to build the on-chain program using cargo-build-sbf utility.
+The second task is to run `solana-ledger-tool program run` in debugger mode.
+
+The second file is `launch.json`, with the following content:
+
+```
+{
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "type": "lldb",
+ "request": "custom",
+ "name": "Debug",
+ "targetCreateCommands": ["target create ${workspaceFolder}/target/deploy/helloworld.debug"],
+ "processCreateCommands": ["gdb-remote 127.0.0.1:9001"]
+ }
+ ]
+}
+```
+
+This file specifies how to run the debugger and connect it to the gdb server
+implemented by `solana-ledger-tool`.
+
+To start debugging a program, first build it by running the build task. The next
+step is to run the `solana-debugger` task. The tasks specified in the
+`tasks.json` file are started from the `Terminal >> Run Task...` menu of VSCode.
+When `solana-ledger-tool` is running and listening for incoming connections,
+it's time to start the debugger. Launch it from the VSCode `Run and Debug` menu.
+If everything is set up correctly, VSCode will start a debugging session and the
+program execution should stop on the entrance into the `entrypoint` function.
diff --git a/docs/developing/on-chain-programs/deploying.md b/docs/developing/on-chain-programs/deploying.md
new file mode 100644
index 000000000..f75570627
--- /dev/null
+++ b/docs/developing/on-chain-programs/deploying.md
@@ -0,0 +1,258 @@
+---
+title: "Deploying Programs"
+description:
+ "Deploying on-chain programs can be done using the Solana CLI using the
+ Upgradable BPF loader to upload the compiled byte-code to the Solana
+ blockchain."
+---
+
+Solana on-chain programs (otherwise known as "smart contracts") are stored in
+"executable" accounts on Solana. These accounts are identical to any other
+account but with the exception of:
+
+- having the "executable" flag enabled, and
+- the owner being assigned to a BPF loader
+
+Besides those exceptions, they are governed by the same runtime rules as
+non-executable accounts, hold SOL tokens for rent fees, and store a data buffer
+which is managed by the BPF loader program. The latest BPF loader is called the
+"Upgradeable BPF Loader".
+
+## Overview of the Upgradeable BPF Loader
+
+### State accounts
+
+The Upgradeable BPF loader program supports three different types of state
+accounts:
+
+1. [Program account](https://github.com/solana-labs/solana/blob/master/sdk/program/src/bpf_loader_upgradeable.rs#L34):
+ This is the main account of an on-chain program and its address is commonly
+   referred to as a "program id." Program ids are what transaction instructions
+ reference in order to invoke a program. Program accounts are immutable once
+ deployed, so you can think of them as a proxy account to the byte-code and
+ state stored in other accounts.
+2. [Program data account](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/sdk/program/src/bpf_loader_upgradeable.rs#L39):
+ This account is what stores the executable byte-code of an on-chain program.
+ When a program is upgraded, this account's data is updated with new
+ byte-code. In addition to byte-code, program data accounts are also
+ responsible for storing the slot when it was last modified and the address of
+ the sole account authorized to modify the account (this address can be
+ cleared to make a program immutable).
+3. [Buffer accounts](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/sdk/program/src/bpf_loader_upgradeable.rs#L27):
+ These accounts temporarily store byte-code while a program is being actively
+ deployed through a series of transactions. They also each store the address
+ of the sole account which is authorized to do writes.
+
+### Instructions
+
+The state accounts listed above can only be modified with one of the following
+instructions supported by the Upgradeable BPF Loader program:
+
+1. [Initialize buffer](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/sdk/program/src/loader_upgradeable_instruction.rs#L21):
+ Creates a buffer account and stores an authority address which is allowed to
+ modify the buffer.
+2. [Write](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/sdk/program/src/loader_upgradeable_instruction.rs#L28):
+ Writes byte-code at a specified byte offset inside a buffer account. Writes
+ are processed in small chunks due to a limitation of Solana transactions
+ having a maximum serialized size of 1232 bytes.
+3. [Deploy](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/sdk/program/src/loader_upgradeable_instruction.rs#L77):
+ Creates both a program account and a program data account. It fills the
+ program data account by copying the byte-code stored in a buffer account. If
+ the byte-code is valid, the program account will be set as executable,
+ allowing it to be invoked. If the byte-code is invalid, the instruction will
+ fail and all changes are reverted.
+4. [Upgrade](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/sdk/program/src/loader_upgradeable_instruction.rs#L102):
+ Fills an existing program data account by copying executable byte-code from a
+ buffer account. Similar to the deploy instruction, it will only succeed if
+ the byte-code is valid.
+5. [Set authority](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/sdk/program/src/loader_upgradeable_instruction.rs#L114):
+ Updates the authority of a program data or buffer account if the account's
+ current authority has signed the transaction being processed. If the
+ authority is deleted without replacement, it can never be set to a new
+ address and the account can never be closed.
+6. [Close](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/sdk/program/src/loader_upgradeable_instruction.rs#L127):
+ Clears the data of a program data account or buffer account and reclaims the
+ SOL used for the rent exemption deposit.
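+
+For illustration, a client could construct a few of these instructions with the
+`bpf_loader_upgradeable` helpers in the `solana_program` crate. This is a
+sketch with placeholder addresses, not the code path the CLI actually uses:
+
+```rust
+use solana_program::{bpf_loader_upgradeable, instruction::Instruction, pubkey::Pubkey};
+
+// Placeholder addresses purely for illustration.
+fn example_instructions(
+    program: &Pubkey,
+    buffer: &Pubkey,
+    authority: &Pubkey,
+    recipient: &Pubkey,
+) -> Vec<Instruction> {
+    vec![
+        // Upgrade: copy the byte-code staged in `buffer` into the program data
+        // account, refunding the buffer's excess lamports to `recipient`.
+        bpf_loader_upgradeable::upgrade(program, buffer, authority, recipient),
+        // Set authority: pass `None` to clear the authority and make the
+        // program immutable.
+        bpf_loader_upgradeable::set_upgrade_authority(program, authority, None),
+        // Close: clear the buffer account and reclaim its rent deposit.
+        bpf_loader_upgradeable::close(buffer, recipient, authority),
+    ]
+}
+```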
+
+## How `solana program deploy` works
+
+Deploying a program on Solana requires hundreds, if not thousands, of
+transactions, due to the max size limit of 1232 bytes for Solana transactions.
+The Solana CLI takes care of this rapid firing of transactions with the
+`solana program deploy` subcommand. The process can be broken down into the
+following 3 phases:
+
+1. [Buffer initialization](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/cli/src/program.rs#L2113):
+ First, the CLI sends a transaction which
+ [creates a buffer account](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/cli/src/program.rs#L1903)
+ large enough for the byte-code being deployed. It also invokes the
+ [initialize buffer instruction](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/programs/bpf_loader/src/lib.rs#L320)
+ to set the buffer authority to restrict writes to the deployer's chosen
+ address.
+2. [Buffer writes](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/cli/src/program.rs#L2129):
+ Once the buffer account is initialized, the CLI
+ [breaks up the program byte-code](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/cli/src/program.rs#L1940)
+ into ~1KB chunks and
+ [sends transactions at a rate of 100 transactions per second](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/client/src/tpu_client.rs#L133)
+ to write each chunk with
+ [the write buffer instruction](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/programs/bpf_loader/src/lib.rs#L334).
+ These transactions are sent directly to the current leader's transaction
+ processing (TPU) port and are processed in parallel with each other. Once all
+ transactions have been sent, the CLI
+ [polls the RPC API with batches of transaction signatures](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/client/src/tpu_client.rs#L216)
+ to ensure that every write was successful and confirmed.
+3. [Finalization](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/cli/src/program.rs#L1807):
+ Once writes are completed, the CLI
+ [sends a final transaction](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/cli/src/program.rs#L2150)
+ to either
+ [deploy a new program](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/programs/bpf_loader/src/lib.rs#L362)
+ or
+ [upgrade an existing program](https://github.com/solana-labs/solana/blob/7409d9d2687fba21078a745842c25df805cdf105/programs/bpf_loader/src/lib.rs#L513).
+ In either case, the byte-code written to the buffer account will be copied
+ into a program data account and verified.
+
+## Reclaim rent from program accounts
+
+The storage of data on the Solana blockchain requires the payment of
+[rent](./../intro/rent.md), including for the byte-code for on-chain programs.
+Therefore as you deploy more or larger programs, the amount of rent paid to
+remain rent-exempt will also become larger.
+
+Using the current rent cost model configuration, a rent-exempt account requires
+a deposit of ~0.7 SOL per 100KB stored. These costs can have an outsized impact
+on developers who deploy their own programs since
+[program accounts](./../programming-model/accounts.md#executable) are among the
+largest we typically see on Solana.
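+
+To get a feel for that number, here is a minimal sketch that computes the
+deposit with the `solana_program` crate's `Rent` type. It uses
+`Rent::default()`, so treat the result as an approximation of the documented
+cost model rather than the live cluster's value:
+
+```rust
+use solana_program::{native_token::LAMPORTS_PER_SOL, rent::Rent};
+
+fn main() {
+    // Default rent parameters; the live values come from the Rent sysvar.
+    let rent = Rent::default();
+    let deposit = rent.minimum_balance(100 * 1024);
+    println!(
+        "rent-exempt deposit for 100KB: {} lamports (~{:.2} SOL)",
+        deposit,
+        deposit as f64 / LAMPORTS_PER_SOL as f64
+    );
+}
+```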
+
+#### Example of how much data is used for programs
+
+As a data point of the number of accounts and potential data stored on-chain,
+below is the distribution of the largest accounts (at least 100KB) at slot
+`103,089,804` on `mainnet-beta` by assigned on-chain program:
+
+1. **Serum Dex v3**: 1798 accounts
+2. **Metaplex Candy Machine**: 1089 accounts
+3. **Serum Dex v2**: 864 accounts
+4. **Upgradeable BPF Program Loader**: 824 accounts
+5. **BPF Program Loader v2**: 191 accounts
+6. **BPF Program Loader v1**: 150 accounts
+
+> _Note: this data was pulled with a modified `solana-ledger-tool` built from
+> this branch:
+> [https://github.com/jstarry/solana/tree/large-account-stats](https://github.com/jstarry/solana/tree/large-account-stats)_
+
+### Reclaiming buffer accounts
+
+Buffer accounts are used by the Upgradeable BPF loader to temporarily store
+byte-code that is in the process of being deployed on-chain. This temporary
+buffer is required when upgrading programs because the currently deployed
+program's byte-code cannot be affected by an in-progress upgrade.
+
+Unfortunately, deploys fail occasionally and instead of reusing the buffer
+account, developers might retry their deployment with a new buffer and not
+realize that they stored a good chunk of SOL in a forgotten buffer account from
+an earlier deploy.
+
+> As of slot `103,089,804` on `mainnet-beta` there are 276 abandoned buffer
+> accounts that could be reclaimed!
+
+Developers can check if they own any abandoned buffer accounts by using the
+Solana CLI:
+
+```bash
+solana program show --buffers --keypair ~/.config/solana/MY_KEYPAIR.json
+
+Buffer Address | Authority | Balance
+9vXW2c3qo6DrLHa1Pkya4Mw2BWZSRYs9aoyoP3g85wCA | 2nr1bHFT86W9tGnyvmYW4vcHKsQB3sVQfnddasz4kExM | 3.41076888 SOL
+```
+
+And they can close those buffers to reclaim the SOL balance with the following
+command:
+
+```bash
+solana program close --buffers --keypair ~/.config/solana/MY_KEYPAIR.json
+```
+
+#### Fetch the owners of buffer accounts via RPC API
+
+The owners of all abandoned program deploy buffer accounts can be fetched via
+the RPC API:
+
+```bash
+curl http://api.mainnet-beta.solana.com -H "Content-Type: application/json" \
+--data-binary @- << EOF | jq --raw-output '.result | .[] | .account.data[0]'
+{
+ "jsonrpc":"2.0", "id":1, "method":"getProgramAccounts",
+ "params":[
+ "BPFLoaderUpgradeab1e11111111111111111111111",
+ {
+ "dataSlice": {"offset": 5, "length": 32},
+ "filters": [{"memcmp": {"offset": 0, "bytes": "2UzHM"}}],
+ "encoding": "base64"
+ }
+ ]
+}
+EOF
+```
+
+After re-encoding the base64 encoded keys into base58 and grouping by key, we
+see some accounts have over 10 buffer accounts they could close, yikes!
+
+```bash
+'BE3G2F5jKygsSNbPFKHHTxvKpuFXSumASeGweLcei6G3' => 10 buffer accounts
+'EsQ179Q8ESroBnnmTDmWEV4rZLkRc3yck32PqMxypE5z' => 10 buffer accounts
+'6KXtB89kAgzW7ApFzqhBg5tgnVinzP4NSXVqMAWnXcHs' => 12 buffer accounts
+'FinVobfi4tbdMdfN9jhzUuDVqGXfcFnRGX57xHcTWLfW' => 15 buffer accounts
+'TESAinbTL2eBLkWqyGA82y1RS6kArHvuYWfkL9dKkbs' => 42 buffer accounts
+```
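+
+As a rough sketch of that re-encoding step, assuming the third-party `base64`
+(v0.13-style API) and `bs58` crates are added as dependencies, the keys printed
+by the `jq` pipeline above could be converted like this:
+
+```rust
+use std::io::{self, BufRead};
+
+// Reads base64-encoded 32-byte authority keys from stdin (one per line) and
+// prints them re-encoded as base58, ready for grouping and counting.
+fn main() {
+    for line in io::stdin().lock().lines() {
+        let line = line.unwrap();
+        let bytes = base64::decode(line.trim()).expect("invalid base64");
+        println!("{}", bs58::encode(bytes).into_string());
+    }
+}
+```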
+
+### Reclaiming program data accounts
+
+You may now realize that program data accounts (the accounts that store the
+executable byte-code for an on-chain program) can also be closed.
+
+> **Note:** This does _not_ mean that _program accounts_ can be closed (those
+> are immutable and can never be reclaimed, but that's fine since they're quite
+> small). It's also important to keep in mind that once program data accounts
+> are deleted, they can never be recreated for an existing program. Therefore,
+> the corresponding program (and its program id) for any closed program data
+> account is effectively disabled forever and may not be re-deployed.
+
+While it would be uncommon for developers to need to close program data
+accounts since they can be rewritten during upgrades, one potential scenario
+arises because program data accounts can't be _resized_: you may wish to deploy
+your program at a new address to accommodate a larger executable.
+
+The ability to reclaim program data account rent deposits also makes testing and
+experimentation on the `mainnet-beta` cluster a lot less costly since you could
+reclaim everything except the transaction fees and a small amount of rent for
+the program account. Lastly, this could help developers recover most of their
+funds if they mistakenly deploy a program at an unintended address or on the
+wrong cluster.
+
+To view the programs which are owned by your wallet address, you can run:
+
+```bash
+solana -V # must be 1.7.11 or higher!
+solana program show --programs --keypair ~/.config/solana/MY_KEYPAIR.json
+
+Program Id | Slot | Authority | Balance
+CN5x9WEusU6pNH66G22SnspVx4cogWLqMfmb85Z3GW7N | 53796672 | 2nr1bHFT86W9tGnyvmYW4vcHKsQB3sVQfnddasz4kExM | 0.54397272 SOL
+```
+
+To close those program data accounts and reclaim their SOL balance, you can run:
+
+```bash
+solana program close --programs --keypair ~/.config/solana/MY_KEYPAIR.json
+```
+
+You might be concerned about this feature allowing malicious actors to close a
+program in a way that negatively impacts end users. While this is a valid
+concern in general, closing program data accounts doesn't make this any more
+exploitable than was already possible.
+
+Even without the ability to close a program data account, any upgradeable
+program could be upgraded to a no-op implementation and then have its upgrade
+authority cleared to make it immutable forever. This new feature for closing
+program data accounts merely adds the ability to reclaim the rent deposit;
+disabling a program was already technically possible.
diff --git a/docs/developing/on-chain-programs/developing-c.md b/docs/developing/on-chain-programs/developing-c.md
new file mode 100644
index 000000000..d5e74f898
--- /dev/null
+++ b/docs/developing/on-chain-programs/developing-c.md
@@ -0,0 +1,193 @@
+---
+title: "Developing with C"
+---
+
+Solana supports writing on-chain programs using the C and C++ programming
+languages.
+
+## Project Layout
+
+C projects are laid out as follows:
+
+```
+/src/<C files>
+/makefile
+```
+
+The `makefile` should contain the following:
+
+```bash
+OUT_DIR :=
+include ~/.local/share/solana/install/active_release/bin/sdk/sbf/c/sbf.mk
+```
+
+The sbf-sdk may not be in the exact place specified above, but if you set up
+your environment per [How to Build](#how-to-build) then it should be.
+
+## How to Build
+
+First setup the environment:
+
+- Install the latest Rust stable from https://rustup.rs
+- Install the latest
+ [Solana command-line tools](../../cli/install-solana-cli-tools.md)
+
+Then build using make:
+
+```bash
+make -C <program directory>
+```
+
+## How to Test
+
+Solana uses the [Criterion](https://github.com/Snaipe/Criterion) test framework
+and tests are executed each time the program is built (see
+[How to Build](#how-to-build)).
+
+To add tests, create a new file next to your source file named
+`test_<program name>.c` and populate it with Criterion test cases. See the
+[Criterion docs](https://criterion.readthedocs.io/en/master) for information on
+how to write a test case.
+
+## Program Entrypoint
+
+Programs export a known entrypoint symbol which the Solana runtime looks up and
+calls when invoking a program. Solana supports multiple versions of the SBF
+loader and the entrypoints may vary between them. Programs must be written for
+and deployed to the same loader. For more details see the
+[FAQ section on Loaders](./faq.md#loaders).
+
+Currently there are two supported loaders:
+[SBF Loader](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/bpf_loader.rs#L17)
+and
+[SBF loader deprecated](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/bpf_loader_deprecated.rs#L14).
+
+They both have the same raw entrypoint definition; the following is the raw
+symbol that the runtime looks up and calls:
+
+```c
+extern uint64_t entrypoint(const uint8_t *input)
+```
+
+This entrypoint takes a generic byte array which contains the serialized
+program parameters (program id, accounts, instruction data, etc...). To
+deserialize the parameters, each loader contains its own
+[helper function](#serialization).
+
+### Serialization
+
+Each loader provides a helper function that deserializes the program's input
+parameters into C types:
+
+- [SBF Loader deserialization](https://github.com/solana-labs/solana/blob/d2ee9db2143859fa5dc26b15ee6da9c25cc0429c/sdk/sbf/c/inc/solana_sdk.h#L304)
+- [SBF Loader deprecated deserialization](https://github.com/solana-labs/solana/blob/8415c22b593f164020adc7afe782e8041d756ddf/sdk/sbf/c/inc/deserialize_deprecated.h#L25)
+
+Some programs may want to perform deserialization themselves, and they can do
+so by providing their own implementation of the
+[raw entrypoint](#program-entrypoint). Take note that the provided
+deserialization functions retain references back to the serialized byte array
+for variables that the program is allowed to modify (lamports, account data).
+The reason for this is that upon return the loader will read those
+modifications so they may be committed. If a program implements its own
+deserialization function, it needs to ensure that any modifications it wishes
+to commit are written back into the input byte array.
+
+Details on how the loader serializes the program inputs can be found in the
+[Input Parameter Serialization](./faq.md#input-parameter-serialization) docs.
+
+## Data Types
+
+The loader's deserialization helper function populates the
+[SolParameters](https://github.com/solana-labs/solana/blob/8415c22b593f164020adc7afe782e8041d756ddf/sdk/sbf/c/inc/solana_sdk.h#L276)
+structure:
+
+```c
+/**
+ * Structure that the program's entrypoint input data is deserialized into.
+ */
+typedef struct {
+ SolAccountInfo* ka; /** Pointer to an array of SolAccountInfo, must already
+ point to an array of SolAccountInfos */
+ uint64_t ka_num; /** Number of SolAccountInfo entries in `ka` */
+ const uint8_t *data; /** pointer to the instruction data */
+ uint64_t data_len; /** Length in bytes of the instruction data */
+ const SolPubkey *program_id; /** program_id of the currently executing program */
+} SolParameters;
+```
+
+`ka` is an ordered array of the accounts referenced by the instruction,
+represented as
+[SolAccountInfo](https://github.com/solana-labs/solana/blob/8415c22b593f164020adc7afe782e8041d756ddf/sdk/sbf/c/inc/solana_sdk.h#L173)
+structures. An account's place in the array signifies its meaning, for example,
+when transferring lamports an instruction may define the first account as the
+source and the second as the destination.
+
+The members of the `SolAccountInfo` structure are read-only except for
+`lamports` and `data`. Both may be modified by the program in accordance with
+the
+[runtime enforcement policy](developing/programming-model/accounts.md#policy).
+When an instruction references the same account multiple times, there may be
+duplicate `SolAccountInfo` entries in the array, but they all point back to the
+original input byte array. A program should handle these cases delicately to
+avoid overlapping read/writes to the same buffer. If a program implements its
+own deserialization function, care should be taken to handle duplicate accounts
+appropriately.
+
+`data` is the general purpose byte array from the
+[instruction's instruction data](developing/programming-model/transactions.md#instruction-data)
+being processed.
+
+`program_id` is the public key of the currently executing program.
+
+## Heap
+
+C programs can allocate memory via the system call
+[`calloc`](https://github.com/solana-labs/solana/blob/c3d2d2134c93001566e1e56f691582f379b5ae55/sdk/sbf/c/inc/solana_sdk.h#L245)
+or implement their own heap on top of the 32KB heap region starting at virtual
+address 0x300000000. The heap region is also used by `calloc`, so if a program
+implements its own heap it should not also call `calloc`.
+
+## Logging
+
+The runtime provides two system calls that take data and log it to the program
+logs.
+
+- [`sol_log(const char*)`](https://github.com/solana-labs/solana/blob/d2ee9db2143859fa5dc26b15ee6da9c25cc0429c/sdk/sbf/c/inc/solana_sdk.h#L128)
+- [`sol_log_64(uint64_t, uint64_t, uint64_t, uint64_t, uint64_t)`](https://github.com/solana-labs/solana/blob/d2ee9db2143859fa5dc26b15ee6da9c25cc0429c/sdk/sbf/c/inc/solana_sdk.h#L134)
+
+The [debugging](debugging.md#logging) section has more information about working
+with program logs.
+
+## Compute Budget
+
+Use the system call `sol_remaining_compute_units()` to return a `u64` indicating
+the number of compute units remaining for this transaction.
+
+Use the system call
+[`sol_log_compute_units()`](https://github.com/solana-labs/solana/blob/d3a3a7548c857f26ec2cb10e270da72d373020ec/sdk/sbf/c/inc/solana_sdk.h#L140)
+to log a message containing the remaining number of compute units the program
+may consume before execution is halted.
+
+See [compute budget](developing/programming-model/runtime.md#compute-budget) for
+more information.
+
+## ELF Dump
+
+The SBF shared object internals can be dumped to a text file to gain more
+insight into a program's composition and what it may be doing at runtime. The
+dump will contain both the ELF information as well as a list of all the symbols
+and the instructions that implement them. Some of the SBF loader's error log
+messages will reference specific instruction numbers where the error occurred.
+These references can be looked up in the ELF dump to identify the offending
+instruction and its context.
+
+To create a dump file:
+
+```bash
+$ cd <program directory>
+$ make dump_<program name>
+```
+
+## Examples
+
+The
+[Solana Program Library github](https://github.com/solana-labs/solana-program-library/tree/master/examples/c)
+repo contains a collection of C examples.
diff --git a/docs/developing/on-chain-programs/developing-rust.md b/docs/developing/on-chain-programs/developing-rust.md
new file mode 100644
index 000000000..263ec25e5
--- /dev/null
+++ b/docs/developing/on-chain-programs/developing-rust.md
@@ -0,0 +1,384 @@
+---
+title: "Developing with Rust"
+---
+
+Solana supports writing on-chain programs using the
+[Rust](https://www.rust-lang.org/) programming language.
+
+## Project Layout
+
+Solana Rust programs follow the typical
+[Rust project layout](https://doc.rust-lang.org/cargo/guide/project-layout.html):
+
+```
+/inc/
+/src/
+/Cargo.toml
+```
+
+Solana Rust programs may depend directly on each other in order to gain access
+to instruction helpers when making
+[cross-program invocations](developing/programming-model/calling-between-programs.md#cross-program-invocations).
+When doing so it's important to not pull in the dependent program's entrypoint
+symbols because they may conflict with the program's own. To avoid this,
+programs should define a `no-entrypoint` feature in `Cargo.toml` and use it to
+exclude the entrypoint.
+
+- [Define the feature](https://github.com/solana-labs/solana-program-library/blob/fca9836a2c8e18fc7e3595287484e9acd60a8f64/token/program/Cargo.toml#L12)
+- [Exclude the entrypoint](https://github.com/solana-labs/solana-program-library/blob/fca9836a2c8e18fc7e3595287484e9acd60a8f64/token/program/src/lib.rs#L12)
+
+Then when other programs include this program as a dependency, they should do so
+using the `no-entrypoint` feature.
+
+- [Include without entrypoint](https://github.com/solana-labs/solana-program-library/blob/fca9836a2c8e18fc7e3595287484e9acd60a8f64/token-swap/program/Cargo.toml#L22)
+
+## Project Dependencies
+
+At a minimum, Solana Rust programs must pull in the
+[solana-program](https://crates.io/crates/solana-program) crate.
+
+Solana SBF programs have some [restrictions](#restrictions) that may prevent the
+inclusion of some crates as dependencies or require special handling.
+
+For example:
+
+- Crates that require the architecture be a subset of the ones supported by the
+  official toolchain. There is no workaround for this unless that crate is
+  forked and SBF added to those architecture checks.
+- Crates may depend on `rand` which is not supported in Solana's deterministic
+ program environment. To include a `rand` dependent crate refer to
+ [Depending on Rand](#depending-on-rand).
+- Crates may overflow the stack even if the stack overflowing code isn't
+ included in the program itself. For more information refer to
+ [Stack](./faq.md#stack).
+
+## How to Build
+
+First setup the environment:
+
+- Install the latest Rust stable from https://rustup.rs/
+- Install the latest
+ [Solana command-line tools](../../cli/install-solana-cli-tools.md)
+
+The normal cargo build is available for building programs against your host
+machine which can be used for unit testing:
+
+```bash
+$ cargo build
+```
+
+To build a specific program, such as SPL Token, for the Solana SBF target which
+can be deployed to the cluster:
+
+```bash
+$ cd <program directory>
+$ cargo build-bpf
+```
+
+## How to Test
+
+Solana programs can be unit tested via the traditional `cargo test` mechanism by
+exercising program functions directly.
+
+To help facilitate testing in an environment that more closely matches a live
+cluster, developers can use the
+[`program-test`](https://crates.io/crates/solana-program-test) crate. The
+`program-test` crate starts up a local instance of the runtime and allows tests
+to send multiple transactions while keeping state for the duration of the test.
+
+For more information, the
+[functional test in the sysvar example](https://github.com/solana-labs/solana-program-library/blob/master/examples/rust/sysvar/tests/functional.rs)
+shows how an instruction containing a sysvar account is sent and processed by
+the program.
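+
+As a rough sketch, assuming a hypothetical program crate named `helloworld`
+that exposes `process_instruction` and an `id()` helper (e.g. via
+`declare_id!`), a `program-test` based functional test might look like this:
+
+```rust
+use solana_program_test::{processor, tokio, ProgramTest};
+use solana_sdk::{instruction::Instruction, signature::Signer, transaction::Transaction};
+
+#[tokio::test]
+async fn test_helloworld() {
+    // Register the program with the local test runtime.
+    let program_test = ProgramTest::new(
+        "helloworld",
+        helloworld::id(),
+        processor!(helloworld::process_instruction),
+    );
+    let (mut banks_client, payer, recent_blockhash) = program_test.start().await;
+
+    // Invoke the program with no accounts and empty instruction data.
+    let instruction = Instruction::new_with_bytes(helloworld::id(), &[], vec![]);
+    let mut transaction =
+        Transaction::new_with_payer(&[instruction], Some(&payer.pubkey()));
+    transaction.sign(&[&payer], recent_blockhash);
+    banks_client.process_transaction(transaction).await.unwrap();
+}
+```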
+
+## Program Entrypoint
+
+Programs export a known entrypoint symbol which the Solana runtime looks up and
+calls when invoking a program. Solana supports multiple versions of the BPF
+loader and the entrypoints may vary between them. Programs must be written for
+and deployed to the same loader. For more details see the
+[FAQ section on Loaders](./faq.md#loaders).
+
+Currently there are two supported loaders:
+[BPF Loader](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/bpf_loader.rs#L17)
+and
+[BPF loader deprecated](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/bpf_loader_deprecated.rs#L14).
+
+They both have the same raw entrypoint definition; the following is the raw
+symbol that the runtime looks up and calls:
+
+```rust
+#[no_mangle]
+pub unsafe extern "C" fn entrypoint(input: *mut u8) -> u64;
+```
+
+This entrypoint takes a generic byte array which contains the serialized program
+parameters (program id, accounts, instruction data, etc...). To deserialize the
+parameters each loader contains its own wrapper macro that exports the raw
+entrypoint, deserializes the parameters, calls a user defined instruction
+processing function, and returns the results.
+
+You can find the entrypoint macros here:
+
+- [BPF Loader's entrypoint macro](https://github.com/solana-labs/solana/blob/9b1199cdb1b391b00d510ed7fc4866bdf6ee4eb3/sdk/program/src/entrypoint.rs#L42)
+- [BPF Loader deprecated's entrypoint macro](https://github.com/solana-labs/solana/blob/9b1199cdb1b391b00d510ed7fc4866bdf6ee4eb3/sdk/program/src/entrypoint_deprecated.rs#L38)
+
+The program defined instruction processing function that the entrypoint macros
+call must be of this form:
+
+```rust
+pub type ProcessInstruction =
+ fn(program_id: &Pubkey, accounts: &[AccountInfo], instruction_data: &[u8]) -> ProgramResult;
+```
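+
+For example, a minimal program targeting the latest BPF loader registers a
+processing function of this form with the `entrypoint!` macro. This is a sketch
+with a trivial body:
+
+```rust
+use solana_program::{
+    account_info::AccountInfo, entrypoint, entrypoint::ProgramResult, msg, pubkey::Pubkey,
+};
+
+// Exports the raw `entrypoint` symbol and wires up parameter deserialization.
+entrypoint!(process_instruction);
+
+fn process_instruction(
+    program_id: &Pubkey,
+    accounts: &[AccountInfo],
+    instruction_data: &[u8],
+) -> ProgramResult {
+    msg!(
+        "program {} invoked with {} accounts and {} bytes of data",
+        program_id,
+        accounts.len(),
+        instruction_data.len()
+    );
+    Ok(())
+}
+```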
+
+### Parameter Deserialization
+
+Each loader provides a helper function that deserializes the program's input
+parameters into Rust types. The entrypoint macros automatically call the
+deserialization helper:
+
+- [BPF Loader deserialization](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/entrypoint.rs#L146)
+- [BPF Loader deprecated deserialization](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/entrypoint_deprecated.rs#L57)
+
+Some programs may want to perform deserialization themselves, and they can do
+so by providing their own implementation of the
+[raw entrypoint](#program-entrypoint). Take note that the provided
+deserialization functions retain references back to the serialized byte array
+for variables that the program is allowed to modify (lamports, account data).
+The reason for this is that upon return the loader will read those
+modifications so they may be committed. If a program implements its own
+deserialization function, it needs to ensure that any modifications it wishes
+to commit are written back into the input byte array.
+
+Details on how the loader serializes the program inputs can be found in the
+[Input Parameter Serialization](./faq.md#input-parameter-serialization) docs.
+
+### Data Types
+
+The loader's entrypoint macros call the program defined instruction processor
+function with the following parameters:
+
+```rust
+program_id: &Pubkey,
+accounts: &[AccountInfo],
+instruction_data: &[u8]
+```
+
+The program id is the public key of the currently executing program.
+
+The accounts parameter is an ordered slice of the accounts referenced by the
+instruction, represented as
+[AccountInfo](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/account_info.rs#L12)
+structures. An account's place in the array signifies its meaning, for example,
+when transferring lamports an instruction may define the first account as the
+source and the second as the destination.
+
+The members of the `AccountInfo` structure are read-only except for `lamports`
+and `data`. Both may be modified by the program in accordance with the
+[runtime enforcement policy](developing/programming-model/accounts.md#policy).
+Both of these members are protected by the Rust `RefCell` construct, so they
+must be borrowed to read or write to them. The reason for this is they both
+point back to the original input byte array, but there may be multiple entries
+in the accounts slice that point to the same account. Using `RefCell` ensures
+that the program does not accidentally perform overlapping read/writes to the
+same underlying data via multiple `AccountInfo` structures. If a program
+implements its own deserialization function, care should be taken to handle
+duplicate accounts appropriately.
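+
+As an illustration of borrowing these members, here is a hypothetical processor
+that moves a single lamport between the first two accounts, omitting the
+ownership and signer checks a real program would need:
+
+```rust
+use solana_program::{
+    account_info::{next_account_info, AccountInfo},
+    entrypoint::ProgramResult,
+    msg,
+    pubkey::Pubkey,
+};
+
+fn process_instruction(
+    _program_id: &Pubkey,
+    accounts: &[AccountInfo],
+    _instruction_data: &[u8],
+) -> ProgramResult {
+    let account_iter = &mut accounts.iter();
+    let source = next_account_info(account_iter)?;
+    let destination = next_account_info(account_iter)?;
+
+    // `lamports` sits behind a RefCell, so it is borrowed rather than accessed
+    // directly; a production program would also use checked arithmetic.
+    **source.try_borrow_mut_lamports()? -= 1;
+    **destination.try_borrow_mut_lamports()? += 1;
+
+    // `data` is borrowed the same way.
+    msg!("destination holds {} bytes of data", destination.try_borrow_data()?.len());
+    Ok(())
+}
+```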
+
+The instruction data is the general purpose byte array from the
+[instruction's instruction data](developing/programming-model/transactions.md#instruction-data)
+being processed.
+
+## Heap
+
+Rust programs implement the heap directly by defining a custom
+[`global_allocator`](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/entrypoint.rs#L72).
+
+Programs may implement their own `global_allocator` based on their specific
+needs. Refer to the [custom heap example](#examples) for more information.
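+
+As a sketch of what a custom allocator can look like, assuming the program
+defines a `custom-heap` feature so the `entrypoint!` macro does not also
+register its default allocator, the following bump allocator hands out memory
+from the heap region and never frees it:
+
+```rust
+use solana_program::entrypoint::{HEAP_LENGTH, HEAP_START_ADDRESS};
+use std::{
+    alloc::{GlobalAlloc, Layout},
+    mem::size_of,
+    ptr::null_mut,
+};
+
+const HEAP_START: usize = HEAP_START_ADDRESS as usize;
+
+// The first word of the heap region tracks the next free offset.
+struct BumpAllocator;
+
+unsafe impl GlobalAlloc for BumpAllocator {
+    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
+        let pos_ptr = HEAP_START as *mut usize;
+        let mut pos = *pos_ptr;
+        if pos == 0 {
+            // First allocation: start just past the bump pointer itself.
+            pos = HEAP_START + size_of::<usize>();
+        }
+        // Round up to the requested alignment.
+        pos = (pos + layout.align() - 1) & !(layout.align() - 1);
+        let end = pos.saturating_add(layout.size());
+        if end > HEAP_START + HEAP_LENGTH {
+            return null_mut(); // out of heap space
+        }
+        *pos_ptr = end;
+        pos as *mut u8
+    }
+    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
+        // A bump allocator never frees.
+    }
+}
+
+#[cfg(target_os = "solana")]
+#[global_allocator]
+static ALLOCATOR: BumpAllocator = BumpAllocator;
+```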
+
+## Restrictions
+
+On-chain Rust programs support most of Rust's libstd, libcore, and liballoc, as
+well as many 3rd party crates.
+
+There are some limitations since these programs run in a resource-constrained,
+single-threaded environment, as well as being deterministic:
+
+- No access to
+ - `rand`
+ - `std::fs`
+ - `std::net`
+ - `std::future`
+ - `std::process`
+ - `std::sync`
+ - `std::task`
+ - `std::thread`
+ - `std::time`
+- Limited access to:
+ - `std::hash`
+ - `std::os`
+- Bincode is extremely computationally expensive in both cycles and call depth
+  and should be avoided.
+- String formatting should be avoided since it is also computationally
+  expensive.
+- No support for `println!` or `print!`; the Solana [logging helpers](#logging)
+  should be used instead.
+- The runtime enforces a limit on the number of instructions a program can
+ execute during the processing of one instruction. See
+ [computation budget](developing/programming-model/runtime.md#compute-budget)
+ for more information.
+
+## Depending on Rand
+
+Programs are constrained to run deterministically, so random numbers are not
+available. Sometimes a program may depend on a crate that itself depends on
+`rand` even if the program does not use any of the random number functionality.
+If a program depends on `rand`, the compilation will fail because there is no
+`getrandom` support for Solana. The error will typically look like this:
+
+```
+error: target is not supported, for more information see: https://docs.rs/getrandom/#unsupported-targets
+ --> /Users/jack/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.14/src/lib.rs:257:9
+ |
+257 | / compile_error!("\
+258 | | target is not supported, for more information see: \
+259 | | https://docs.rs/getrandom/#unsupported-targets\
+260 | | ");
+ | |___________^
+```
+
+To work around this dependency issue, add the following dependency to the
+program's `Cargo.toml`:
+
+```
+getrandom = { version = "0.1.14", features = ["dummy"] }
+```
+
+or if the dependency is on getrandom v0.2 add:
+
+```
+getrandom = { version = "0.2.2", features = ["custom"] }
+```
+
+## Logging
+
+Rust's `println!` macro is computationally expensive and not supported. Instead
+the helper macro
+[`msg!`](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/log.rs#L33)
+is provided.
+
+`msg!` has two forms:
+
+```rust
+msg!("A string");
+```
+
+or
+
+```rust
+msg!(0_u64, 1_u64, 2_u64, 3_u64, 4_u64);
+```
+
+Both forms output the results to the program logs. If a program so wishes, it
+can emulate `println!` by using `format!`-style arguments:
+
+```rust
+msg!("Some variable: {:?}", variable);
+```
+
+The [debugging](debugging.md#logging) section has more information about
+working with program logs, and the [Rust examples](#examples) contain a logging
+example.
+
+## Panicking
+
+Rust's `panic!`, `assert!`, and internal panic results are printed to the
+[program logs](debugging.md#logging) by default.
+
+```
+INFO solana_runtime::message_processor] Finalized account CGLhHSuWsp1gT4B7MY2KACqp9RUwQRhcUFfVSuxpSajZ
+INFO solana_runtime::message_processor] Call SBF program CGLhHSuWsp1gT4B7MY2KACqp9RUwQRhcUFfVSuxpSajZ
+INFO solana_runtime::message_processor] Program log: Panicked at: 'assertion failed: `(left == right)`
+ left: `1`,
+ right: `2`', rust/panic/src/lib.rs:22:5
+INFO solana_runtime::message_processor] SBF program consumed 5453 of 200000 units
+INFO solana_runtime::message_processor] SBF program CGLhHSuWsp1gT4B7MY2KACqp9RUwQRhcUFfVSuxpSajZ failed: BPF program panicked
+```
+
+### Custom Panic Handler
+
+Programs can override the default panic handler by providing their own
+implementation.
+
+First define the `custom-panic` feature in the program's `Cargo.toml`
+
+```toml
+[features]
+default = ["custom-panic"]
+custom-panic = []
+```
+
+Then provide a custom implementation of the panic handler:
+
+```rust
+#[cfg(all(feature = "custom-panic", target_os = "solana"))]
+#[no_mangle]
+fn custom_panic(info: &core::panic::PanicInfo<'_>) {
+ solana_program::msg!("program custom panic enabled");
+ solana_program::msg!("{}", info);
+}
+```
+
+In the above snippet, the default implementation is shown, but developers may
+replace that with something that better suits their needs.
+
+One of the side effects of supporting full panic messages by default is that
+programs incur the cost of pulling more of Rust's `libstd` implementation into
+the program's shared object. Typical programs will already be pulling in a fair
+amount of `libstd` and may not notice much of an increase in the shared object
+size. But programs that explicitly attempt to be very small by avoiding `libstd`
+may see a significant increase (~25KB). To eliminate that impact, programs can
+provide their own custom panic handler with an empty implementation.
+
+```rust
+#[cfg(all(feature = "custom-panic", target_os = "solana"))]
+#[no_mangle]
+fn custom_panic(info: &core::panic::PanicInfo<'_>) {
+ // Do nothing to save space
+}
+```
+
+## Compute Budget
+
+Use the system call `sol_remaining_compute_units()` to return a `u64` indicating
+the number of compute units remaining for this transaction.
+
+Use the system call
+[`sol_log_compute_units()`](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/log.rs#L141)
+to log a message containing the remaining number of compute units the program
+may consume before execution is halted.
+
+See [compute budget](developing/programming-model/runtime.md#compute-budget) for
+more information.
+
+## ELF Dump
+
+The SBF shared object internals can be dumped to a text file to gain more
+insight into a program's composition and what it may be doing at runtime. The
+dump will contain both the ELF information as well as a list of all the symbols
+and the instructions that implement them. Some of the BPF loader's error log
+messages will reference specific instruction numbers where the error occurred.
+These references can be looked up in the ELF dump to identify the offending
+instruction and its context.
+
+To create a dump file:
+
+```bash
+$ cd <program directory>
+$ cargo build-bpf --dump
+```
+
+## Examples
+
+The
+[Solana Program Library github](https://github.com/solana-labs/solana-program-library/tree/master/examples/rust)
+repo contains a collection of Rust examples.
diff --git a/docs/developing/on-chain-programs/examples.md b/docs/developing/on-chain-programs/examples.md
new file mode 100644
index 000000000..1aaf154c7
--- /dev/null
+++ b/docs/developing/on-chain-programs/examples.md
@@ -0,0 +1,37 @@
+---
+title: "Program Examples"
+---
+
+## Break
+
+[Break](https://break.solana.com/) is a React app that gives users a visceral
+feeling for just how fast and high-performance the Solana network really is. Can
+you _break_ the Solana blockchain? During a 15 second play-though, each click of
+a button or keystroke sends a new transaction to the cluster. Smash the keyboard
+as fast as you can and watch your transactions get finalized in real time while
+the network takes it all in stride!
+
+Break can be played on our Devnet, Testnet and Mainnet Beta networks. Plays are
+free on Devnet and Testnet, where the session is funded by a network faucet. On
+Mainnet Beta, users pay to play 0.08 SOL per game. The session account can be
+funded by a local keystore wallet or by scanning a QR code from Trust Wallet to
+transfer the tokens.
+
+[Click here to play Break](https://break.solana.com/)
+
+### Build and Run
+
+First fetch the latest version of the example code:
+
+```bash
+$ git clone https://github.com/solana-labs/break.git
+$ cd break
+```
+
+Next, follow the steps in the git repository's
+[README](https://github.com/solana-labs/break/blob/master/README.md).
+
+## Language Specific
+
+- [Rust](developing-rust.md#examples)
+- [C](developing-c.md#examples)
diff --git a/docs/developing/on-chain-programs/faq.md b/docs/developing/on-chain-programs/faq.md
new file mode 100644
index 000000000..7496173bb
--- /dev/null
+++ b/docs/developing/on-chain-programs/faq.md
@@ -0,0 +1,226 @@
+---
+title: "FAQ"
+---
+
+When writing or interacting with Solana programs, there are common questions or
+challenges that often come up. Below are resources to help answer these
+questions.
+
+If not addressed here, ask on
+[StackExchange](https://solana.stackexchange.com/questions/ask?tags=solana-program)
+with the `solana-program` tag.
+
+## Limitations
+
+Developing programs on the Solana blockchain has some inherent limitations
+associated with it. Below is a list of common limitations that you may run
+into.
+
+See [Limitations of developing programs](./limitations.md) for more details.
+
+## Berkeley Packet Filter (BPF)
+
+Solana on-chain programs are compiled via the
+[LLVM compiler infrastructure](https://llvm.org/) to an
+[Executable and Linkable Format (ELF)](https://en.wikipedia.org/wiki/Executable_and_Linkable_Format)
+containing a variation of the
+[Berkeley Packet Filter (BPF)](https://en.wikipedia.org/wiki/Berkeley_Packet_Filter)
+bytecode.
+
+Because Solana uses the LLVM compiler infrastructure, a program may be written
+in any programming language that can target the LLVM's BPF backend.
+
+BPF provides an efficient
+[instruction set](https://github.com/iovisor/bpf-docs/blob/master/eBPF.md) that
+can be executed in an interpreted virtual machine or as efficient just-in-time
+compiled native instructions.
+
+## Memory map
+
+The virtual address memory map used by Solana SBF programs is fixed and laid out
+as follows
+
+- Program code starts at 0x100000000
+- Stack data starts at 0x200000000
+- Heap data starts at 0x300000000
+- Program input parameters start at 0x400000000
+
+The above virtual addresses are start addresses but programs are given access to
+a subset of the memory map. The program will panic if it attempts to read or
+write to a virtual address that it was not granted access to, and an
+`AccessViolation` error will be returned that contains the address and size of
+the attempted violation.
+
+## InvalidAccountData
+
+This program error can happen for a lot of reasons. Usually, it's caused by
+passing an account to the program that the program is not expecting, either in
+the wrong position in the instruction or as an account not compatible with the
+instruction being executed.
+
+An implementation of a program might also cause this error when performing a
+cross-program invocation and forgetting to provide the account for the program
+that you are calling.
+
+## InvalidInstructionData
+
+This program error can occur while trying to deserialize the instruction; check
+that the structure passed in matches the instruction exactly. There may be some
+padding between fields. If the program implements the Rust `Pack` trait, then
+try packing and unpacking the instruction type `T` to determine the exact
+encoding the program expects:
+
+https://github.com/solana-labs/solana/blob/v1.4/sdk/program/src/program_pack.rs
+
+## MissingRequiredSignature
+
+Some instructions require the account to be a signer; this error is returned if
+an account is expected to be signed but is not.
+
+An implementation of a program might also cause this error when performing a
+cross-program invocation that requires a signed program address, but the signer
+seeds passed to
+[`invoke_signed`](developing/programming-model/calling-between-programs.md)
+don't match the signer seeds used to create the program address with
+[`create_program_address`](developing/programming-model/calling-between-programs.md#program-derived-addresses).
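+
+For illustration, a hypothetical helper that signs a cross-program invocation
+with a program derived address might look like the following. The `b"vault"`
+seed and `bump` byte are placeholders; they must be exactly the seeds used to
+derive the address, or the invoked instruction's signature check fails:
+
+```rust
+use solana_program::{
+    account_info::AccountInfo, entrypoint::ProgramResult, instruction::Instruction,
+    program::invoke_signed,
+};
+
+fn invoke_as_vault(
+    instruction: &Instruction,
+    account_infos: &[AccountInfo],
+    bump: u8,
+) -> ProgramResult {
+    // The runtime re-derives the address from these seeds and treats it as a
+    // signer only if it matches an account the instruction requires to sign.
+    invoke_signed(instruction, account_infos, &[&[b"vault", &[bump]]])
+}
+```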
+
+## `rand` Rust dependency causes compilation failure
+
+See [Rust Project Dependencies](developing-rust.md#project-dependencies)
+
+## Rust restrictions
+
+See [Rust restrictions](developing-rust.md#restrictions)
+
+## Stack
+
+SBF uses stack frames instead of a variable stack pointer. Each stack frame is
+4KB in size.
+
+If a program violates that stack frame size, the compiler will report the
+overrun as a warning.
+
+For example:
+
+```
+Error: Function _ZN16curve25519_dalek7edwards21EdwardsBasepointTable6create17h178b3d2411f7f082E Stack offset of -30728 exceeded max offset of -4096 by 26632 bytes, please minimize large stack variables
+```
+
+The message identifies which symbol is exceeding its stack frame, but the name
+might be mangled if it is a Rust or C++ symbol.
+
+> To demangle a Rust symbol use [rustfilt](https://github.com/luser/rustfilt).
+
+The above warning came from a Rust program, so the demangled symbol name is:
+
+```bash
+rustfilt _ZN16curve25519_dalek7edwards21EdwardsBasepointTable6create17h178b3d2411f7f082E
+curve25519_dalek::edwards::EdwardsBasepointTable::create
+```
+
+To demangle a C++ symbol use `c++filt` from binutils.
+
+The reason a warning is reported rather than an error is because some dependent
+crates may include functionality that violates the stack frame restrictions even
+if the program doesn't use that functionality. If the program violates the stack
+size at runtime, an `AccessViolation` error will be reported.
+
+SBF stack frames occupy a virtual address range starting at `0x200000000`.
+
+## Heap size
+
+Programs have access to a runtime heap either directly in C or via the Rust
+`alloc` APIs. To facilitate fast allocations, a simple 32KB bump heap is
+utilized. The heap does not support `free` or `realloc` so use it wisely.
+
+Internally, programs have access to the 32KB memory region starting at virtual
+address 0x300000000 and may implement a custom heap based on the program's
+specific needs.
+
+- [Rust program heap usage](developing-rust.md#heap)
+- [C program heap usage](developing-c.md#heap)
+
+## Loaders
+
+Programs are deployed with and executed by runtime loaders. Currently there are
+two supported loaders:
+[BPF Loader](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/bpf_loader.rs#L17)
+and
+[BPF loader deprecated](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/bpf_loader_deprecated.rs#L14).
+
+Loaders may support different application binary interfaces, so developers must
+write their programs for and deploy them to the same loader. If a program
+written for one loader is deployed to a different one, the result is usually an
+`AccessViolation` error due to mismatched deserialization of the program's input
+parameters.
+
+For all practical purposes programs should always be written to target the
+latest BPF loader, and the latest loader is the default for the command-line
+interface and the JavaScript APIs.
+
+For language specific information about implementing a program for a particular
+loader see:
+
+- [Rust program entrypoints](developing-rust.md#program-entrypoint)
+- [C program entrypoints](developing-c.md#program-entrypoint)
+
+### Deployment
+
+SBF program deployment is the process of uploading an SBF shared object into a
+program account's data and marking the account executable. A client breaks the
+SBF shared object into smaller pieces and sends them as the instruction data of
+[`Write`](https://github.com/solana-labs/solana/blob/bc7133d7526a041d1aaee807b80922baa89b6f90/sdk/program/src/loader_instruction.rs#L13)
+instructions to the loader, which writes that data into the program's account
+data. Once all the pieces are received, the client sends a
+[`Finalize`](https://github.com/solana-labs/solana/blob/bc7133d7526a041d1aaee807b80922baa89b6f90/sdk/program/src/loader_instruction.rs#L30)
+instruction to the loader; the loader then validates that the SBF data is valid
+and marks the program account as _executable_. Once the program account is
+marked executable, subsequent transactions may issue instructions for that
+program to process.
+
+When an instruction is directed at an executable SBF program the loader
+configures the program's execution environment, serializes the program's input
+parameters, calls the program's entrypoint, and reports any errors encountered.
+
+For further information, see [deploying](deploying.md).
+
+### Input Parameter Serialization
+
+SBF loaders serialize the program input parameters into a byte array that is
+then passed to the program's entrypoint, where the program is responsible for
+deserializing it on-chain. One of the changes between the deprecated loader and
+the current loader is that the input parameters are serialized in a way that
+results in various parameters falling on aligned offsets within the aligned byte
+array. This allows deserialization implementations to directly reference the
+byte array and provide aligned pointers to the program.
+
+For language specific information about serialization see:
+
+- [Rust program parameter deserialization](developing-rust.md#parameter-deserialization)
+- [C program parameter deserialization](developing-c.md#serialization)
+
+The latest loader serializes the program input parameters as follows (all
+encoding is little endian):
+
+- 8 bytes unsigned number of accounts
+- For each account
+ - 1 byte indicating if this is a duplicate account, if not a duplicate then
+ the value is 0xff, otherwise the value is the index of the account it is a
+ duplicate of.
+ - If duplicate: 7 bytes of padding
+ - If not duplicate:
+ - 1 byte boolean, true if account is a signer
+ - 1 byte boolean, true if account is writable
+ - 1 byte boolean, true if account is executable
+ - 4 bytes of padding
+ - 32 bytes of the account public key
+ - 32 bytes of the account's owner public key
+ - 8 bytes unsigned number of lamports owned by the account
+ - 8 bytes unsigned number of bytes of account data
+ - x bytes of account data
+ - 10k bytes of padding, used for realloc
+ - enough padding to align the offset to 8 bytes.
+ - 8 bytes rent epoch
+- 8 bytes of unsigned number of instruction data
+- x bytes of instruction data
+- 32 bytes of the program id
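+
+As a minimal sketch of consuming this layout (not the SDK's actual
+implementation), a custom raw entrypoint could read just the leading
+little-endian account count like this:
+
+```rust
+use solana_program::{entrypoint::SUCCESS, log::sol_log_64};
+
+#[no_mangle]
+pub unsafe extern "C" fn entrypoint(input: *mut u8) -> u64 {
+    // First 8 bytes: unsigned little-endian number of accounts.
+    let num_accounts = u64::from_le_bytes(*(input as *const [u8; 8]));
+    // A full implementation would walk each account entry, the instruction
+    // data, and the trailing program id described above.
+    sol_log_64(num_accounts, 0, 0, 0, 0);
+    SUCCESS
+}
+```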
diff --git a/docs/developing/on-chain-programs/limitations.md b/docs/developing/on-chain-programs/limitations.md
new file mode 100644
index 000000000..c17c5be97
--- /dev/null
+++ b/docs/developing/on-chain-programs/limitations.md
@@ -0,0 +1,87 @@
+---
+title: "Limitations"
+---
+
+Developing programs on the Solana blockchain has some inherent limitations
+associated with it. Below is a list of common limitations that you may run
+into.
+
+## Rust libraries
+
+Since Rust-based on-chain programs must be deterministic while running in a
+resource-constrained, single-threaded environment, they have some limitations
+on various libraries.
+
+See [Developing with Rust - Restrictions](./developing-rust.md#restrictions)
+for a detailed breakdown of these restrictions and limitations.
+
+## Compute budget
+
+To prevent abuse of the blockchain's computational resources, each transaction
+is allocated a [compute budget](./../../terminology.md#compute-budget).
+Exceeding this compute budget will result in the transaction failing.
+
+See [computational constraints](../programming-model/runtime.md#compute-budget)
+in the Runtime for more specific details.
+
+## Call stack depth - `CallDepthExceeded` error
+
+Solana programs are constrained to run quickly, and to facilitate this, the
+program's call stack is limited to a max depth of **64 frames**.
+
+When a program exceeds the allowed call stack depth limit, it will receive the
+`CallDepthExceeded` error.
+
+## CPI call depth - `CallDepth` error
+
+Cross-program invocations allow programs to invoke other programs directly, but
+the depth is constrained currently to `4`.
+
+When a program exceeds the allowed
+[cross-program invocation call depth](../programming-model/calling-between-programs.md#call-depth),
+it will receive a `CallDepth` error.
+
+## Float Rust types support
+
+Programs support a limited subset of Rust's float operations. If a program
+attempts to use a float operation that is not supported, the runtime will report
+an unresolved symbol error.
+
+Float operations are performed via software libraries, specifically LLVM's
+float built-ins. Due to the software emulation, they consume more compute units
+than integer operations. In general, fixed point operations are recommended
+where possible.
+
+The Solana Program Library math tests will report the performance of some math
+operations:
+https://github.com/solana-labs/solana-program-library/tree/master/libraries/math
+
+To run the test, sync the repo and run:
+
+```sh
+cargo test-sbf -- --nocapture --test-threads=1
+```
+
+Recent results show the float operations take more instructions compared to
+their integer equivalents. Fixed point implementations may vary but will also
+cost less than the float equivalents:
+
+```
+ u64 f32
+Multiply 8 176
+Divide 9 219
+```
+
+## Static writable data
+
+Program shared objects do not support writable shared data. Programs are shared
+between multiple parallel executions using the same shared read-only code and
+data. This means that developers should not include any static writable or
+global variables in programs. In the future a copy-on-write mechanism could be
+added to support writable data.
+
+## Signed division
+
+The SBF instruction set does not support
+[signed division](https://www.kernel.org/doc/html/latest/bpf/bpf_design_QA.html#q-why-there-is-no-bpf-sdiv-for-signed-divide-operation).
+Adding a signed division instruction is under consideration.
diff --git a/docs/developing/on-chain-programs/overview.md b/docs/developing/on-chain-programs/overview.md
new file mode 100644
index 000000000..939438077
--- /dev/null
+++ b/docs/developing/on-chain-programs/overview.md
@@ -0,0 +1,94 @@
+---
+title: "Overview of Writing Programs"
+sidebarLabel: "Overview"
+---
+
+Developers can write and deploy their own programs to the Solana blockchain.
+While developing these "on-chain" programs can seem cumbersome, the entire
+process can be broadly summarized into a few key steps.
+
+## Solana Development Lifecycle
+
+1. Setup your development environment
+2. Write your program
+3. Compile the program
+4. Generate the program's public address
+5. Deploy the program
+
+### 1. Setup your development environment
+
+The most robust way of getting started with Solana development is
+[installing the Solana CLI](./../../cli/install-solana-cli-tools.md) tools on
+your local computer. This will give you the most powerful development
+environment.
+
+Some developers may also opt for using
+[Solana Playground](https://beta.solpg.io/), a browser based IDE that lets you
+write, build, and deploy on-chain programs, all from your browser with no
+installation needed.
+
+### 2. Write your program
+
+Writing Solana programs is most commonly done using the Rust language. These
+Rust programs are effectively the same as creating a traditional
+[Rust library](https://doc.rust-lang.org/rust-by-example/crates/lib.html).
+
+> You can read more about other [supported languages](#supported-languages) below.
+
+### 3. Compile the program
+
+Once the program is written, it must be compiled down to
+[Berkeley Packet Filter](./faq.md#berkeley-packet-filter-bpf) byte-code that
+will then be deployed to the blockchain.
+
+### 4. Generate the program's public address
+
+Using the [Solana CLI](./../../cli/install-solana-cli-tools.md), the developer
+will generate a new unique [Keypair](./../../terminology.md#keypair) for the new
+program. The public address (aka
+[Pubkey](./../../terminology.md#public-key-pubkey)) from this Keypair will be
+used on-chain as the program's public address (aka
+[`programId`](./../../terminology.md#program-id)).
+
+### 5. Deploy the program
+
+Then again using the CLI, the compiled program can be deployed to the selected
+blockchain cluster by creating many transactions containing the program's
+byte-code. Due to the transaction memory size limitations, each transaction
+effectively sends small chunks of the program to the blockchain in a rapid-fire
+manner.
+
+Once the entire program has been sent to the blockchain, a final transaction is
+sent to write all of the buffered byte-code to the program's data account. This
+either marks the new program as
+[`executable`](./../programming-model/accounts.md#executable) or completes the
+process of upgrading an existing program (if it already existed).
+
+## Supported languages
+
+Solana programs are typically written in the
+[Rust language](./developing-rust.md), but [C/C++](./developing-c.md) are also
+supported.
+
+There are also various community driven efforts to enable writing on-chain
+programs using other languages, including:
+
+- Python via [Seahorse](https://seahorse-lang.org/) (which acts as a wrapper
+  around the Rust-based Anchor framework)
+
+## Example programs
+
+You can also explore the [Program Examples](./examples.md) for examples of
+on-chain programs.
+
+## Limitations
+
+As you dive deeper into program development, it is important to understand some
+of the important limitations associated with on-chain programs.
+
+Read more details on the [Limitations](./limitations.md) page.
+
+## Frequently asked questions
+
+Discover many of the [frequently asked questions](./faq.md) other developers
+have about writing/understanding Solana programs.
diff --git a/docs/developing/programming-model/accounts.md b/docs/developing/programming-model/accounts.md
new file mode 100644
index 000000000..d5aa73a6f
--- /dev/null
+++ b/docs/developing/programming-model/accounts.md
@@ -0,0 +1,174 @@
+---
+title: "Accounts"
+---
+
+## Storing State between Transactions
+
+If the program needs to store state between transactions, it does so using
+_accounts_. Accounts are similar to files in operating systems such as Linux in
+that they may hold arbitrary data that persists beyond the lifetime of a
+program. Also like a file, an account includes metadata that tells the runtime
+who is allowed to access the data and how.
+
+Unlike a file, the account includes metadata for its own lifetime. That
+lifetime is expressed by a number of fractional native tokens called _lamports_.
+Accounts are held in validator memory and pay ["rent"](#rent) to stay there.
+Each validator periodically scans all accounts and collects rent. Any account
+that drops to zero lamports is purged. Accounts can also be marked
+[rent-exempt](#rent-exemption) if they contain a sufficient number of lamports.
+
+In the same way that a Linux user uses a path to look up a file, a Solana client
+uses an _address_ to look up an account. The address is a 256-bit public key.
+
+## Signers
+
+Transactions include one or more digital [signatures](terminology.md#signature)
+each corresponding to an account address referenced by the transaction. Each of
+these addresses must be the public key of an ed25519 keypair, and the signature
+signifies that the holder of the matching private key signed, and thus,
+"authorized" the transaction. In this case, the account is referred to as a
+_signer_. Whether an account is a signer or not is communicated to the program
+as part of the account's metadata. Programs can then use that information to
+make authority decisions.
+
+## Read-only
+
+Transactions can [indicate](transactions.md#message-header-format) that some of
+the accounts they reference be treated as _read-only accounts_ in order to enable
+parallel account processing between transactions. The runtime permits read-only
+accounts to be read concurrently by multiple programs. If a program attempts to
+modify a read-only account, the transaction is rejected by the runtime.
+
+## Executable
+
+If an account is marked "executable" in its metadata, then it is considered a
+program which can be executed by including the account's public key in an
+instruction's [program id](transactions.md#program-id). Accounts are marked as
+executable during a successful program deployment process by the loader that
+owns the account. When a program is deployed to the execution engine (SBF
+deployment), the loader determines that the bytecode in the account's data is
+valid. If so, the loader permanently marks the program account as executable.
+
+If a program is marked as final (non-upgradeable), the runtime enforces that the
+account's data (the program) is immutable. Through the upgradeable loader, it is
+possible to upload a totally new program to an existing program address.
+
+## Creating
+
+To create an account, a client generates a _keypair_ and registers its public
+key using the `SystemProgram::CreateAccount` instruction with a fixed storage
+size in bytes preallocated. The current maximum size of an account's data is 10
+MiB. An account's data size can be changed (increased or decreased), subject to
+a limit of 20 MiB of total change across all accounts per transaction, and an
+individual account's size can be increased by at most 10 KiB per instruction.
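+
+As a hedged sketch, a client using the `solana-sdk` and `solana-client` crates
+might create a new account like this (the RPC endpoint, the preallocated size,
+and the owning `my_program_id` are placeholders):
+
+```rust,ignore
+use solana_client::rpc_client::RpcClient;
+use solana_sdk::{
+    signature::{Keypair, Signer},
+    system_instruction,
+    transaction::Transaction,
+};
+
+let rpc = RpcClient::new("https://api.devnet.solana.com".to_string());
+let payer = Keypair::new(); // assumed to already hold enough lamports
+let new_account = Keypair::new(); // the account being created
+let space: u64 = 1024; // bytes of data to preallocate
+
+// Fund the new account with enough lamports to be rent-exempt.
+let lamports = rpc.get_minimum_balance_for_rent_exemption(space as usize)?;
+
+let instruction = system_instruction::create_account(
+    &payer.pubkey(),
+    &new_account.pubkey(),
+    lamports,
+    space,
+    &my_program_id, // placeholder: the program that will own the new account
+);
+
+let blockhash = rpc.get_latest_blockhash()?;
+let transaction = Transaction::new_signed_with_payer(
+    &[instruction],
+    Some(&payer.pubkey()),
+    &[&payer, &new_account],
+    blockhash,
+);
+rpc.send_and_confirm_transaction(&transaction)?;
+```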
+
+An account address can be any arbitrary 256-bit value, and there are mechanisms
+for advanced users to create derived addresses
+(`SystemProgram::CreateAccountWithSeed`,
+[`Pubkey::CreateProgramAddress`](calling-between-programs.md#program-derived-addresses)).
+
+Accounts that have never been created via the system program can also be passed
+to programs. When an instruction references an account that hasn't been
+previously created, the program will be passed an account with no data and zero
+lamports that is owned by the system program.
+
+Such newly created accounts reflect whether they sign the transaction, and
+therefore, can be used as an authority. Authorities in this context convey to
+the program that the holder of the private key associated with the account's
+public key signed the transaction. The account's public key may be known to the
+program or recorded in another account, signifying some kind of ownership or
+authority over an asset or operation the program controls or performs.
+
+## Ownership and Assignment to Programs
+
+A created account is initialized to be _owned_ by a built-in program called the
+System program and is aptly called a _system account_. An account includes
+"owner" metadata. The owner is a program id. The runtime grants the program
+write access to the account if its id matches the owner. For the case of the
+System program, the runtime allows clients to transfer lamports and importantly
+_assign_ account ownership, meaning changing the owner to a different program
+id. If an account is not owned by a program, the program is only permitted to
+read its data and credit the account.
+
+## Verifying validity of unmodified, reference-only accounts
+
+For security purposes, it is recommended that programs check the validity of
+any account they read but do not modify.
+
+This is because a malicious user could create accounts with arbitrary data and
+then pass these accounts to the program in place of valid accounts. The
+arbitrary data could be crafted in a way that leads to unexpected or harmful
+program behavior.
+
+The security model enforces that an account's data can only be modified by the
+account's `Owner` program. This allows programs to trust the data passed to
+them via accounts they own. The runtime enforces this by rejecting any
+transaction containing a program that attempts to write to an account it does
+not own.
+
+If a program were to not check account validity, it might read an account it
+thinks it owns, but doesn't. Anyone can issue instructions to a program, and the
+runtime does not know that those accounts are expected to be owned by the
+program.
+
+To check an account's validity, the program should either check the account's
+address against a known value, or check that the account is indeed owned
+correctly (usually owned by the program itself).
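+
+A minimal sketch of such an ownership check inside an instruction processor
+(the `config_account` name is illustrative):
+
+```rust,ignore
+use solana_program::{account_info::AccountInfo, program_error::ProgramError, pubkey::Pubkey};
+
+// `config_account` is an account the program reads but does not modify.
+fn check_config_account(
+    config_account: &AccountInfo,
+    program_id: &Pubkey,
+) -> Result<(), ProgramError> {
+    // Reject accounts not owned by this program; a malicious user could
+    // otherwise pass in a look-alike account filled with arbitrary data.
+    if config_account.owner != program_id {
+        return Err(ProgramError::IncorrectProgramId);
+    }
+    Ok(())
+}
+```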
+
+One example is when programs use a sysvar account. Unless the program checks the
+account's address or owner, it's impossible to be sure whether it's a real and
+valid sysvar account merely by successful deserialization of the account's data.
+
+Accordingly, the Solana SDK
+[checks the sysvar account's validity during deserialization](https://github.com/solana-labs/solana/blob/a95675a7ce1651f7b59443eb146b356bc4b3f374/sdk/program/src/sysvar/mod.rs#L65).
+An alternative and safer way to read a sysvar is via the sysvar's
+[`get()` function](https://github.com/solana-labs/solana/blob/64bfc14a75671e4ec3fe969ded01a599645080eb/sdk/program/src/sysvar/mod.rs#L73)
+which doesn't require these checks.
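+
+For example, the `Clock` sysvar can be read with `get()` instead of
+deserializing a passed-in sysvar account (a short sketch):
+
+```rust,ignore
+use solana_program::{clock::Clock, sysvar::Sysvar};
+
+// Reads the Clock sysvar directly from the runtime; no account to validate.
+let clock = Clock::get()?;
+let current_slot = clock.slot;
+```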
+
+If the program always modifies the account in question, the address/owner check
+isn't required because modifying an unowned account will be rejected by the
+runtime, and the containing transaction will be thrown out.
+
+## Rent
+
+Keeping accounts alive on Solana incurs a storage cost called _rent_ because the
+blockchain cluster must actively maintain the data to process any future
+transactions. This is different from Bitcoin and Ethereum, where storing
+accounts doesn't incur any costs.
+
+Currently, all new accounts are required to be rent-exempt.
+
+### Rent exemption
+
+An account is considered rent-exempt if it holds at least 2 years' worth of rent.
+This is checked every time an account's balance is reduced, and transactions
+that would reduce the balance to below the minimum amount will fail.
+
+Program executable accounts are required by the runtime to be rent-exempt to
+avoid being purged.
+
+:::info
+Use the
+[`getMinimumBalanceForRentExemption`](../../api/http#getminimumbalanceforrentexemption)
+RPC endpoint to calculate the minimum balance for a particular account size. The
+following calculation is illustrative only.
+:::
+
+For example, a program executable with the size of 15,000 bytes requires a
+balance of 105,290,880 lamports (=~ 0.105 SOL) to be rent-exempt:
+
+```text
+105,290,880 = 19.055441478439427 (fee rate) * (128 + 15_000)(account size including metadata) * ((365.25/2) * 2)(epochs in 2 years)
+```
+
+Rent can also be estimated via the
+[`solana rent` CLI subcommand](cli/usage.md#solana-rent)
+
+```text
+$ solana rent 15000
+Rent per byte-year: 0.00000348 SOL
+Rent per epoch: 0.000288276 SOL
+Rent-exempt minimum: 0.10529088 SOL
+```
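+
+Inside a program, the same minimum can be computed from the `Rent` sysvar; a
+short sketch for the 15,000 byte example above:
+
+```rust,ignore
+use solana_program::{rent::Rent, sysvar::Sysvar};
+
+// Minimum lamports required for a 15,000 byte account to be rent-exempt.
+let rent = Rent::get()?;
+let minimum_balance = rent.minimum_balance(15_000);
+```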
+
+Note: Rest assured that, should the storage rent rate need to be increased at
+some point in the future, steps will be taken to ensure that accounts that are
+rent-exempt before the increase will remain rent-exempt afterwards.
diff --git a/docs/developing/programming-model/calling-between-programs.md b/docs/developing/programming-model/calling-between-programs.md
new file mode 100644
index 000000000..0d6ce4290
--- /dev/null
+++ b/docs/developing/programming-model/calling-between-programs.md
@@ -0,0 +1,361 @@
+---
+title: Calling Between Programs
+---
+
+## Cross-Program Invocations
+
+The Solana runtime allows programs to call each other via a mechanism called
+cross-program invocation. Calling between programs is achieved by one program
+invoking an instruction of the other. The invoking program is halted until the
+invoked program finishes processing the instruction.
+
+For example, a client could create a transaction that modifies two accounts,
+each owned by separate on-chain programs:
+
+```rust,ignore
+let message = Message::new(vec![
+ token_instruction::pay(&alice_pubkey),
+ acme_instruction::launch_missiles(&bob_pubkey),
+]);
+client.send_and_confirm_message(&[&alice_keypair, &bob_keypair], &message);
+```
+
+A client may instead allow the `acme` program to conveniently invoke `token`
+instructions on the client's behalf:
+
+```rust,ignore
+let message = Message::new(vec![
+ acme_instruction::pay_and_launch_missiles(&alice_pubkey, &bob_pubkey),
+]);
+client.send_and_confirm_message(&[&alice_keypair, &bob_keypair], &message);
+```
+
+Given two on-chain programs, `token` and `acme`, each implementing instructions
+`pay()` and `launch_missiles()` respectively, `acme` can be implemented with a
+call to a function defined in the `token` module by issuing a cross-program
+invocation:
+
+```rust,ignore
+mod acme {
+ use token_instruction;
+
+ fn launch_missiles(accounts: &[AccountInfo]) -> Result<()> {
+ ...
+ }
+
+ fn pay_and_launch_missiles(accounts: &[AccountInfo]) -> Result<()> {
+ let alice_pubkey = accounts[1].key;
+ let instruction = token_instruction::pay(&alice_pubkey);
+ invoke(&instruction, accounts)?;
+
+ launch_missiles(accounts)?;
+ }
+```
+
+`invoke()` is built into Solana's runtime and is responsible for routing the
+given instruction to the `token` program via the instruction's `program_id`
+field.
+
+Note that `invoke` requires the caller to pass all the accounts required by the
+instruction being invoked, except for the executable account (the `program_id`).
+
+Before invoking `pay()`, the runtime must ensure that `acme` didn't modify any
+accounts owned by `token`. It does this by applying the runtime's policy to the
+current state of the accounts at the time `acme` calls `invoke` vs. the initial
+state of the accounts at the beginning of the `acme` instruction. After
+`pay()` completes, the runtime must again ensure that `token` didn't modify any
+accounts owned by `acme` by again applying the runtime's policy, but this time
+with the `token` program ID. Lastly, after `pay_and_launch_missiles()`
+completes, the runtime must apply the runtime policy one more time where it
+normally would, but using all updated `pre_*` variables. If executing
+`pay_and_launch_missiles()` up to `pay()` made no invalid account changes,
+`pay()` made no invalid changes, and executing from `pay()` until
+`pay_and_launch_missiles()` returns made no invalid changes, then the runtime
+can transitively assume `pay_and_launch_missiles()` as a whole made no invalid
+account changes, and therefore commit all these account modifications.
+
+### Instructions that require privileges
+
+The runtime uses the privileges granted to the caller program to determine what
+privileges can be extended to the callee. Privileges in this context refer to
+signers and writable accounts. For example, if the instruction the caller is
+processing contains a signer or writable account, then the caller can invoke an
+instruction that also contains that signer and/or writable account.
+
+This privilege extension relies on the fact that programs are immutable, except
+during the special case of program upgrades.
+
+In the case of the `acme` program, the runtime can safely treat the
+transaction's signature as a signature of a `token` instruction. When the
+runtime sees the `token` instruction references `alice_pubkey`, it looks up the
+key in the `acme` instruction to see if that key corresponds to a signed
+account. In this case, it does and thereby authorizes the `token` program to
+modify Alice's account.
+
+### Program signed accounts
+
+Programs can issue instructions that contain signed accounts that were not
+signed in the original transaction by using
+[Program derived addresses](#program-derived-addresses).
+
+To sign an account with program derived addresses, a program may
+`invoke_signed()`.
+
+```rust,ignore
+ invoke_signed(
+ &instruction,
+ accounts,
+ &[&["First addresses seed"],
+ &["Second addresses first seed", "Second addresses second seed"]],
+ )?;
+```
+
+### Call Depth
+
+Cross-program invocations allow programs to invoke other programs directly, but
+the depth is currently constrained to 4.
+
+### Reentrancy
+
+Reentrancy is currently limited to direct self recursion, capped at a fixed
+depth. This restriction prevents situations where a program might invoke another
+from an intermediary state without the knowledge that it might later be called
+back into. Direct recursion gives the program full control of its state at the
+point that it gets called back.
+
+## Program Derived Addresses
+
+Program derived addresses allow programmatically generated signatures to be used
+when [calling between programs](#cross-program-invocations).
+
+Using a program derived address, a program may be given the authority over an
+account and later transfer that authority to another. This is possible because
+the program can act as the signer in the transaction that gives authority.
+
+For example, if two users want to make a wager on the outcome of a game in
+Solana, they must each transfer their wager's assets to some intermediary that
+will honor their agreement. Without program derived addresses, there would be no
+way to implement this intermediary as a program in Solana because the
+intermediary program cannot transfer the assets to the winner.
+
+This capability is necessary for many DeFi applications since they require
+assets to be transferred to an escrow agent until some event occurs that
+determines the new owner.
+
+- Decentralized Exchanges that transfer assets between matching bid and ask
+ orders.
+
+- Auctions that transfer assets to the winner.
+
+- Games or prediction markets that collect and redistribute prizes to the
+ winners.
+
+Program derived addresses:
+
+1. Allow programs to control specific addresses, called program addresses, in
+ such a way that no external user can generate valid transactions with
+ signatures for those addresses.
+
+2. Allow programs to programmatically sign for program addresses that are
+ present in instructions invoked via
+ [Cross-Program Invocations](#cross-program-invocations).
+
+Given the two conditions, users can securely transfer or assign the authority of
+on-chain assets to program addresses, and the program can then assign that
+authority elsewhere at its discretion.
+
+### Private keys for program addresses
+
+A program address does not lie on the ed25519 curve and therefore has no valid
+private key associated with it, and thus generating a signature for it is
+impossible. While it has no private key of its own, it can be used by a program
+to issue an instruction that includes the program address as a signer.
+
+### Hash-based generated program addresses
+
+Program addresses are deterministically derived from a collection of seeds and a
+program id using a 256-bit pre-image resistant hash function. Program addresses
+must not lie on the ed25519 curve to ensure there is no associated private key.
+During generation, an error will be returned if the address is found to lie on
+the curve. There is about a 50/50 chance of this happening for a given
+collection of seeds and program id. If this occurs, a different set of seeds or
+a seed bump (an additional 8-bit seed) can be used to find a valid program
+address off the curve.
+
+Deterministic program addresses for programs follow a similar derivation path as
+Accounts created with `SystemInstruction::CreateAccountWithSeed` which is
+implemented with `Pubkey::create_with_seed`.
+
+For reference, that implementation is as follows:
+
+```rust,ignore
+pub fn create_with_seed(
+ base: &Pubkey,
+ seed: &str,
+ program_id: &Pubkey,
+) -> Result {
+ if seed.len() > MAX_ADDRESS_SEED_LEN {
+ return Err(SystemError::MaxSeedLengthExceeded);
+ }
+
+ Ok(Pubkey::new(
+ hashv(&[base.as_ref(), seed.as_ref(), program_id.as_ref()]).as_ref(),
+ ))
+}
+```
+
+Programs can deterministically derive any number of addresses by using seeds.
+These seeds can symbolically identify how the addresses are used.
+
+From `Pubkey`:
+
+```rust,ignore
+/// Generate a derived program address
+/// * seeds, symbolic keywords used to derive the key
+/// * program_id, program that the address is derived for
+pub fn create_program_address(
+ seeds: &[&[u8]],
+ program_id: &Pubkey,
+) -> Result
+
+/// Find a valid off-curve derived program address and its bump seed
+/// * seeds, symbolic keywords used to derive the key
+/// * program_id, program that the address is derived for
+pub fn find_program_address(
+ seeds: &[&[u8]],
+ program_id: &Pubkey,
+) -> Option<(Pubkey, u8)> {
+ let mut bump_seed = [std::u8::MAX];
+ for _ in 0..std::u8::MAX {
+ let mut seeds_with_bump = seeds.to_vec();
+ seeds_with_bump.push(&bump_seed);
+ if let Ok(address) = create_program_address(&seeds_with_bump, program_id) {
+ return Some((address, bump_seed[0]));
+ }
+ bump_seed[0] -= 1;
+ }
+ None
+}
+```
+
+**Warning**: Because of the way the seeds are hashed there is a potential for
+program address collisions for the same program id. The seeds are hashed
+sequentially which means that seeds {"abcdef"}, {"abc", "def"}, and {"ab", "cd",
+"ef"} will all result in the same program address given the same program id.
+Since the chance of collision is local to a given program id, the developer of
+that program must take care to choose seeds that do not collide with each other.
+For seed schemes that are susceptible to this type of hash collision, a common
+remedy is to insert separators between seeds, e.g. transforming {"abc", "def"}
+into {"abc", "-", "def"}.
+
+### Using program addresses
+
+Clients can use the `create_program_address` function to generate a destination
+address. In this example, we assume that
+`create_program_address(&[&["escrow"]], &escrow_program_id)` generates a valid
+program address that is off the curve.
+
+```rust,ignore
+// deterministically derive the escrow key
+let escrow_pubkey = create_program_address(&[&["escrow"]], &escrow_program_id);
+
+// construct a transfer message using that key
+let message = Message::new(vec![
+ token_instruction::transfer(&alice_pubkey, &escrow_pubkey, 1),
+]);
+
+// process the message, which transfers 1 token to the escrow
+client.send_and_confirm_message(&[&alice_keypair], &message);
+```
+
+Programs can use the same function to generate the same address. In the function
+below the program issues a `token_instruction::transfer` from a program address
+as if it had the private key to sign the transaction.
+
+```rust,ignore
+fn transfer_one_token_from_escrow(
+ program_id: &Pubkey,
+ accounts: &[AccountInfo],
+) -> ProgramResult {
+ // User supplies the destination
+    let alice_pubkey = accounts[1].key;
+
+ // Deterministically derive the escrow pubkey.
+ let escrow_pubkey = create_program_address(&[&["escrow"]], program_id);
+
+ // Create the transfer instruction
+    let instruction = token_instruction::transfer(&escrow_pubkey, alice_pubkey, 1);
+
+ // The runtime deterministically derives the key from the currently
+ // executing program ID and the supplied keywords.
+ // If the derived address matches a key marked as signed in the instruction
+ // then that key is accepted as signed.
+ invoke_signed(&instruction, accounts, &[&["escrow"]])
+}
+```
+
+Note that the address generated using `create_program_address` is not guaranteed
+to be a valid program address off the curve. For example, let's assume that the
+seed `"escrow2"` does not generate a valid program address.
+
+To generate a valid program address using `"escrow2"` as a seed, use
+`find_program_address`, iterating through possible bump seeds until a valid
+combination is found. The preceding example becomes:
+
+```rust,ignore
+// find the escrow key and valid bump seed
+let (escrow_pubkey2, escrow_bump_seed) = find_program_address(&[&["escrow2"]], &escrow_program_id);
+
+// construct a transfer message using that key
+let message = Message::new(vec![
+ token_instruction::transfer(&alice_pubkey, &escrow_pubkey2, 1),
+]);
+
+// process the message, which transfers 1 token to the escrow
+client.send_and_confirm_message(&[&alice_keypair], &message);
+```
+
+Within the program, this becomes:
+
+```rust,ignore
+fn transfer_one_token_from_escrow2(
+ program_id: &Pubkey,
+ accounts: &[AccountInfo],
+) -> ProgramResult {
+ // User supplies the destination
+    let alice_pubkey = accounts[1].key;
+
+ // Iteratively derive the escrow pubkey
+ let (escrow_pubkey2, bump_seed) = find_program_address(&[&["escrow2"]], program_id);
+
+ // Create the transfer instruction
+    let instruction = token_instruction::transfer(&escrow_pubkey2, alice_pubkey, 1);
+
+ // Include the generated bump seed to the list of all seeds
+ invoke_signed(&instruction, accounts, &[&["escrow2", &[bump_seed]]])
+}
+```
+
+Since `find_program_address` requires iterating over a number of calls to
+`create_program_address`, it may use more
+[compute budget](developing/programming-model/runtime.md#compute-budget) when
+used on-chain. To reduce the compute cost, use `find_program_address` off-chain
+and pass the resulting bump seed to the program.
+
+### Instructions that require signers
+
+The addresses generated with `create_program_address` and `find_program_address`
+are indistinguishable from any other public key. The only way for the runtime to
+verify that the address belongs to a program is for the program to supply the
+seeds used to generate the address.
+
+The runtime will internally call `create_program_address`, and compare the
+result against the addresses supplied in the instruction.
+
+## Examples
+
+Refer to
+[Developing with Rust](developing/on-chain-programs/../../../on-chain-programs/developing-rust.md#examples)
+and
+[Developing with C](developing/on-chain-programs/../../../on-chain-programs/developing-c.md#examples)
+for examples of how to use cross-program invocation.
diff --git a/docs/developing/programming-model/overview.md b/docs/developing/programming-model/overview.md
new file mode 100644
index 000000000..43375b529
--- /dev/null
+++ b/docs/developing/programming-model/overview.md
@@ -0,0 +1,17 @@
+---
+title: "Overview"
+---
+
+An [app](terminology.md#app) interacts with a Solana cluster by sending it
+[transactions](transactions.md) with one or more
+[instructions](transactions.md#instructions). The Solana [runtime](runtime.md)
+passes those instructions to [programs](terminology.md#program) deployed by app
+developers beforehand. An instruction might, for example, tell a program to
+transfer [lamports](terminology.md#lamport) from one [account](accounts.md) to
+another or create an interactive contract that governs how lamports are
+transferred. Instructions are executed sequentially and atomically for each
+transaction. If any instruction is invalid, all account changes in the
+transaction are discarded.
+
+To start developing immediately you can build, deploy, and run one of the
+[examples](developing/on-chain-programs/examples.md).
diff --git a/docs/developing/programming-model/runtime.md b/docs/developing/programming-model/runtime.md
new file mode 100644
index 000000000..1ef245451
--- /dev/null
+++ b/docs/developing/programming-model/runtime.md
@@ -0,0 +1,173 @@
+---
+title: "Runtime"
+---
+
+## Capability of Programs
+
+The runtime only permits the owner program to debit the account or modify its
+data. The program then defines additional rules for whether the client can
+modify accounts it owns. In the case of the System program, it allows users to
+transfer lamports by recognizing transaction signatures. If it sees the client
+signed the transaction using the keypair's _private key_, it knows the client
+authorized the token transfer.
+
+In other words, the entire set of accounts owned by a given program can be
+regarded as a key-value store, where a key is the account address and value is
+program-specific arbitrary binary data. A program author can decide how to
+manage the program's whole state, possibly as many accounts.
+
+After the runtime executes each of the transaction's instructions, it uses the
+account metadata to verify that the access policy was not violated. If a program
+violates the policy, the runtime discards all account changes made by all
+instructions in the transaction, and marks the transaction as failed.
+
+### Policy
+
+After a program has processed an instruction, the runtime verifies that the
+program only performed operations it was permitted to, and that the results
+adhere to the runtime policy.
+
+The policy is as follows:
+
+- Only the owner of the account may change owner.
+ - And only if the account is writable.
+ - And only if the account is not executable.
+ - And only if the data is zero-initialized or empty.
+- An account not assigned to the program cannot have its balance decrease.
+- The balance of read-only and executable accounts may not change.
+- Only the owner may change account size and data.
+  - And only if the account is writable.
+  - And only if the account is not executable.
+- Executable is one-way (false->true) and only the account owner may set it.
+- No one can modify the rent_epoch associated with an account.
+
+## Balancing the balances
+
+Before and after each instruction, the sum of all account balances must stay the
+same. E.g. if one account's balance is increased, another's must be decreased by
+the same amount. Because the runtime cannot see changes to accounts which were
+not passed to it, all accounts for which the balances were modified must be
+passed, even if they are not needed in the called instruction.
+
+## Compute Budget
+
+To prevent abuse of computational resources, each transaction is allocated a
+compute budget. The budget specifies a maximum number of compute units that a
+transaction can consume, the costs associated with different types of operations
+the transaction may perform, and operational bounds the transaction must adhere
+to.
+
+As the transaction is processed, compute units are consumed by the programs its
+instructions invoke as they perform operations such as executing SBF
+instructions, calling syscalls, and so on. When the transaction consumes its
+entire budget, or exceeds a bound such as attempting a call stack that is too
+deep or loading more account data than its limit allows, the runtime halts
+transaction processing and returns an error.
+
+The following operations incur a compute cost:
+
+- Executing SBF instructions
+- Passing data between programs
+- Calling system calls
+ - logging
+ - creating program addresses
+ - cross-program invocations
+ - ...
+
+For cross-program invocations, the instructions invoked inherit the budget of
+their parent. If an invoked instruction consumes the transaction's remaining
+budget, or exceeds a bound, the entire invocation chain and the top level
+transaction processing are halted.
+
+The current
+[compute budget](https://github.com/solana-labs/solana/blob/090e11210aa7222d8295610a6ccac4acda711bb9/program-runtime/src/compute_budget.rs#L26-L87)
+can be found in the Solana Program Runtime.
+
+#### Example Compute Budget
+
+For example, if the compute budget set in the Solana runtime is:
+
+```rust
+max_units: 1,400,000,
+log_u64_units: 100,
+create_program_address_units: 1500,
+invoke_units: 1000,
+max_invoke_stack_height: 5,
+max_instruction_trace_length: 64,
+max_call_depth: 64,
+stack_frame_size: 4096,
+log_pubkey_units: 100,
+...
+```
+
+Then any transaction:
+
+- Could execute 1,400,000 SBF instructions, if it did nothing else.
+- Cannot exceed 4k of stack usage.
+- Cannot exceed an SBF call depth of 64.
+- Cannot exceed invoke stack height of 5 (4 levels of cross-program
+ invocations).
+
+> **NOTE:** Since the compute budget is consumed incrementally as the
+> transaction executes, the total budget consumption will be a combination of
+> the various costs of the operations it performs.
+
+At runtime a program may log how much of the compute budget remains. See
+[debugging](developing/on-chain-programs/debugging.md#monitoring-compute-budget-consumption)
+for more information.
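+
+For example, a program can print its remaining compute units to the program log
+with the `sol_log_compute_units()` helper:
+
+```rust,ignore
+use solana_program::log::sol_log_compute_units;
+
+// Logs the number of compute units remaining for this invocation.
+sol_log_compute_units();
+```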
+
+### Prioritization fees
+
+As part of the Compute Budget, a transaction may include an **optional** fee to
+prioritize itself against other transactions, known as a
+[prioritization fee](./../../transaction_fees.md#prioritization-fee).
+
+This _prioritization fee_ is calculated by multiplying the number of _compute
+units_ by the _compute unit price_ (measured in micro-lamports). These values
+may be set via the Compute Budget instructions `SetComputeUnitLimit` and
+`SetComputeUnitPrice` once per transaction.
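+
+As a sketch, a client could prepend these two instructions to a transaction as
+follows (the limit, the price, and the `transfer_ix` instruction are
+illustrative placeholders):
+
+```rust,ignore
+use solana_sdk::compute_budget::ComputeBudgetInstruction;
+
+// Request up to 300,000 compute units and offer 1 micro-lamport per unit.
+let limit_ix = ComputeBudgetInstruction::set_compute_unit_limit(300_000);
+let price_ix = ComputeBudgetInstruction::set_compute_unit_price(1);
+
+// Place the compute budget instructions ahead of the transaction's other
+// instructions when building the message.
+let instructions = vec![limit_ix, price_ix, transfer_ix];
+```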
+
+:::info
+You can learn more of the specifics of _how_ and _when_ to set a prioritization
+fee on the [transaction fees](./../../transaction_fees.md#prioritization-fee)
+page.
+:::
+
+### Accounts data size limit
+
+A transaction should request the maximum bytes of account data it is allowed to
+load by including a `SetLoadedAccountsDataSizeLimit` instruction; the requested
+limit is capped by `MAX_LOADED_ACCOUNTS_DATA_SIZE_BYTES`. If no
+`SetLoadedAccountsDataSizeLimit` is provided, the transaction defaults to a
+limit of `MAX_LOADED_ACCOUNTS_DATA_SIZE_BYTES`.
+
+The `ComputeBudgetInstruction::set_loaded_accounts_data_size_limit` function can
+be used to create this instruction:
+
+```rust
+let instruction = ComputeBudgetInstruction::set_loaded_accounts_data_size_limit(100_000);
+```
+
+## New Features
+
+As Solana evolves, new features or patches may be introduced that change the
+behavior of the cluster and how programs run. Changes in behavior must be
+coordinated between the various nodes of the cluster. If nodes do not
+coordinate, then these changes can result in a break-down of consensus. Solana
+supports a mechanism called runtime features to facilitate the smooth adoption
+of changes.
+
+Runtime features are epoch-coordinated events where one or more behavior
+changes to the cluster will occur. New changes to Solana that will change
+behavior are wrapped with feature gates and disabled by default. The Solana
+tools are then used to activate a feature, which marks it pending; once marked
+pending, the feature will be activated at the next epoch.
+
+To determine which features are activated use the
+[Solana command-line tools](cli/install-solana-cli-tools.md):
+
+```bash
+solana feature status
+```
+
+If you encounter problems, first ensure that the Solana tools version you are
+using matches the version returned by `solana cluster-version`. If they do not
+match, [install the correct tool suite](cli/install-solana-cli-tools.md).
diff --git a/docs/developing/programming-model/transactions.md b/docs/developing/programming-model/transactions.md
new file mode 100644
index 000000000..23ea794df
--- /dev/null
+++ b/docs/developing/programming-model/transactions.md
@@ -0,0 +1,238 @@
+---
+title: "Transactions"
+description:
+ "A Solana transaction consists of one or more instructions, an array of
+ accounts to read and write data from, and one or more signatures."
+---
+
+On the Solana blockchain, program execution begins with a
+[transaction](./../../terminology.md#transaction) being submitted to the
+cluster. Each transaction consists of one or more
+[instructions](./../../terminology.md#instruction), and the runtime processes
+each of the instructions contained within the transaction, in order, and
+atomically. If any part of an instruction fails, then the entire transaction
+will fail.
+
+## Overview of a Transaction
+
+On Solana, clients update the runtime (for example, debiting an account) by
+submitting a transaction to the cluster.
+
+This transaction consists of three parts:
+
+- one or more instructions
+- an array of accounts to read or write from
+- one or more signatures
+
+An [instruction](./../../terminology.md#instruction) is the smallest execution
+logic on Solana. Instructions are basically a call to update the global Solana
+state. Instructions invoke programs that make calls to the Solana runtime to
+update the state (for example, calling the token program to transfer tokens from
+your account to another account).
+
+[Programs](./../intro/programs.md) on Solana don’t store data/state; rather,
+data/state is stored in accounts.
+
+[Signatures](./../../terminology.md#signature) verify that we have the authority
+to read or write data to the accounts that we list.
+
+## Anatomy of a Transaction
+
+This section covers the binary format of a transaction.
+
+### Transaction Format
+
+A transaction contains a [compact-array](#compact-array-format) of signatures,
+followed by a [message](#message-format). Each item in the signatures array is a
+[digital signature](#signature-format) of the given message. The Solana runtime
+verifies that the number of signatures matches the number in the first 8 bits of
+the [message header](#message-header-format). It also verifies that each
+signature was signed by the private key corresponding to the public key at the
+same index in the message's account addresses array.
+
+#### Signature Format
+
+Each digital signature is in the ed25519 binary format and consumes 64 bytes.
+
+### Message Format
+
+A message contains a [header](#message-header-format), followed by a
+compact-array of [account addresses](#account-addresses-format), followed by a
+recent [blockhash](#blockhash-format), followed by a compact-array of
+[instructions](#instruction-format).
+
+#### Message Header Format
+
+The message header contains three unsigned 8-bit values. The first value is the
+number of required signatures in the containing transaction. The second value is
+the number of those corresponding account addresses that are read-only. The
+third value in the message header is the number of read-only account addresses
+not requiring signatures.
+
+#### Account Addresses Format
+
+The addresses that require signatures appear at the beginning of the account
+address array, with addresses requesting read-write access first, and read-only
+accounts following. The addresses that do not require signatures follow the
+addresses that do, again with read-write accounts first and read-only accounts
+following.
+
+#### Blockhash Format
+
+A blockhash contains a 32-byte SHA-256 hash. It is used to indicate when a
+client last observed the ledger. Validators will reject transactions when the
+blockhash is too old.
+
+### Instruction Format
+
+An instruction contains a program id index, followed by a compact-array of
+account address indexes, followed by a compact-array of opaque 8-bit data. The
+program id index is used to identify an on-chain program that can interpret the
+opaque data. The program id index is an unsigned 8-bit index to an account
+address in the message's array of account addresses. The account address indexes
+are each an unsigned 8-bit index into that same array.
+
+### Compact-Array Format
+
+A compact-array is serialized as the array length, followed by each array item.
+The array length is a special multi-byte encoding called compact-u16.
+
+#### Compact-u16 Format
+
+A compact-u16 is a multi-byte encoding of 16 bits. The first byte contains the
+lower 7 bits of the value in its lower 7 bits. If the value is above 0x7f, the
+high bit is set and the next 7 bits of the value are placed into the lower 7
+bits of a second byte. If the value is above 0x3fff, the high bit is set and the
+remaining 2 bits of the value are placed into the lower 2 bits of a third byte.
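+
+The following standalone sketch encodes a `u16` according to the rules above
+(it is an illustration, not the SDK's implementation):
+
+```rust
+/// Encode a u16 as a compact-u16 byte sequence.
+fn encode_compact_u16(mut value: u16) -> Vec<u8> {
+    let mut bytes = Vec::new();
+    loop {
+        // Take the lowest 7 bits of the remaining value.
+        let mut byte = (value & 0x7f) as u8;
+        value >>= 7;
+        if value != 0 {
+            byte |= 0x80; // set the high bit to signal that another byte follows
+        }
+        bytes.push(byte);
+        if value == 0 {
+            return bytes;
+        }
+    }
+}
+
+fn main() {
+    assert_eq!(encode_compact_u16(0x7f), vec![0x7f]); // one byte
+    assert_eq!(encode_compact_u16(0x80), vec![0x80, 0x01]); // two bytes
+    assert_eq!(encode_compact_u16(0x4000), vec![0x80, 0x80, 0x01]); // three bytes
+}
+```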
+
+### Account Address Format
+
+An account address is 32-bytes of arbitrary data. When the address requires a
+digital signature, the runtime interprets it as the public key of an ed25519
+keypair.
+
+## Instructions
+
+Each [instruction](terminology.md#instruction) specifies a single program, a
+subset of the transaction's accounts that should be passed to the program, and a
+data byte array that is passed to the program. The program interprets the data
+array and operates on the accounts specified by the instructions. The program
+can return successfully, or with an error code. An error return causes the
+entire transaction to fail immediately.
+
+Programs typically provide helper functions to construct instructions they
+support. For example, the system program provides the following Rust helper to
+construct a
+[`SystemInstruction::CreateAccount`](https://github.com/solana-labs/solana/blob/6606590b8132e56dab9e60b3f7d20ba7412a736c/sdk/program/src/system_instruction.rs#L63)
+instruction:
+
+```rust
+pub fn create_account(
+ from_pubkey: &Pubkey,
+ to_pubkey: &Pubkey,
+ lamports: u64,
+ space: u64,
+ owner: &Pubkey,
+) -> Instruction {
+ let account_metas = vec![
+ AccountMeta::new(*from_pubkey, true),
+ AccountMeta::new(*to_pubkey, true),
+ ];
+ Instruction::new_with_bincode(
+ system_program::id(),
+ &SystemInstruction::CreateAccount {
+ lamports,
+ space,
+ owner: *owner,
+ },
+ account_metas,
+ )
+}
+```
+
+The full implementation of this helper can be found here:
+
+https://github.com/solana-labs/solana/blob/6606590b8132e56dab9e60b3f7d20ba7412a736c/sdk/program/src/system_instruction.rs#L220
+
+### Program Id
+
+The instruction's [program id](./../../terminology.md#program-id) specifies
+which program will process this instruction. The program's account's owner
+specifies which loader should be used to load and execute the program, and the
+data contains information about how the runtime should execute the program.
+
+In the case of [on-chain SBF programs](./../on-chain-programs/overview.md), the
+owner is the SBF Loader and the account data holds the BPF bytecode. Program
+accounts are permanently marked as executable by the loader once they are
+successfully deployed. The runtime will reject transactions that specify
+programs that are not executable.
+
+Unlike on-chain programs, [Native Programs](../runtime-facilities/programs.md)
+are handled differently in that they are built directly into the Solana runtime.
+
+### Accounts
+
+The accounts referenced by an instruction represent on-chain state and serve as
+both the inputs and outputs of a program. More information about accounts can be
+found in the [Accounts](./accounts.md) section.
+
+### Instruction data
+
+Each instruction carries a general purpose byte array that is passed to the
+program along with the accounts. The contents of the instruction data are program
+specific and typically used to convey what operations the program should
+perform, and any additional information those operations may need above and
+beyond what the accounts contain.
+
+Programs are free to specify how information is encoded into the instruction
+data byte array. The choice of how data is encoded should consider the overhead
+of decoding, since that step is performed by the program on-chain. It's been
+observed that some common encodings (Rust's bincode for example) are very
+inefficient.
+
+The
+[Solana Program Library's Token program](https://github.com/solana-labs/solana-program-library/tree/master/token)
+gives one example of how instruction data can be encoded efficiently, but note
+that this method only supports fixed sized types. Token utilizes the
+[Pack](https://github.com/solana-labs/solana/blob/master/sdk/program/src/program_pack.rs)
+trait to encode/decode instruction data for both token instructions as well as
+token account states.
+
+### Multiple instructions in a single transaction
+
+A transaction can contain instructions in any order. This means a malicious user
+could craft transactions with instructions in an order that the program
+has not been protected against. Programs should be hardened to properly and
+safely handle any possible instruction sequence.
+
+One not so obvious example is account deinitialization. Some programs may
+attempt to deinitialize an account by setting its lamports to zero, with the
+assumption that the runtime will delete the account. This assumption may be
+valid between transactions, but it is not between instructions or cross-program
+invocations. To harden against this, the program should also explicitly zero out
+the account's data.
+
+An example of where this could be a problem is if a token program, upon
+transferring the token out of an account, sets the account's lamports to zero,
+assuming it will be deleted by the runtime. If the program does not zero out the
+account's data, a malicious user could trail this instruction with another that
+transfers the tokens a second time.
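+
+A hedged sketch of this hardening inside a program (names are illustrative; a
+real program would also verify ownership and signers before closing the
+account):
+
+```rust,ignore
+use solana_program::{account_info::AccountInfo, entrypoint::ProgramResult};
+
+// Defensively "close" an account: credit its lamports to `destination` and
+// zero its data so a later instruction in the same transaction cannot reuse
+// the stale state.
+fn close_account(closed_account: &AccountInfo, destination: &AccountInfo) -> ProgramResult {
+    let lamports = closed_account.lamports();
+    **destination.try_borrow_mut_lamports()? += lamports;
+    **closed_account.try_borrow_mut_lamports()? = 0;
+    closed_account.try_borrow_mut_data()?.fill(0);
+    Ok(())
+}
+```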
+
+## Signatures
+
+Each transaction explicitly lists all account public keys referenced by the
+transaction's instructions. A subset of those public keys are each accompanied
+by a transaction signature. Those signatures signal on-chain programs that the
+account holder has authorized the transaction. Typically, the program uses the
+authorization to permit debiting the account or modifying its data. More
+information about how the authorization is communicated to a program can be
+found in [Accounts](./accounts.md#signers).
+
+## Recent Blockhash
+
+A transaction includes a recent [blockhash](../../terminology.md#blockhash) to
+prevent duplication and to give transactions lifetimes. Any transaction that is
+completely identical to a previous one is rejected, so adding a newer blockhash
+allows multiple transactions to repeat the exact same action. Transactions also
+have lifetimes that are defined by the blockhash, as any transaction whose
+blockhash is too old will be rejected.
diff --git a/docs/developing/transaction_confirmation.md b/docs/developing/transaction_confirmation.md
new file mode 100644
index 000000000..28f2e7c47
--- /dev/null
+++ b/docs/developing/transaction_confirmation.md
@@ -0,0 +1,374 @@
+---
+title: "Transaction Confirmation"
+---
+
+Problems relating to
+[transaction confirmation](./../terminology.md#transaction-confirmations) are
+common with many newer developers while building applications. This article aims
+to boost the overall understanding of the confirmation mechanism used on the
+Solana blockchain, including some recommended best practices.
+
+## Brief background on transactions
+
+Let’s first make sure we’re all on the same page and thinking about the same
+things...
+
+### What is a transaction?
+
+Transactions consist of two components: a [message](./../terminology.md#message)
+and a [list of signatures](./../terminology.md#signature). The transaction
+message is where the magic happens and at a high level it consists of three
+components:
+
+- a **list of instructions** to invoke,
+- a **list of accounts** to load, and
+- a **“recent blockhash.”**
+
+In this article, we’re going to be focusing a lot on a transaction’s
+[recent blockhash](./../terminology.md#blockhash) because it plays a big role in
+transaction confirmation.
+
+### Transaction lifecycle refresher
+
+Below is a high level view of the lifecycle of a transaction. This article will
+touch on everything except steps 1 and 4.
+
+1. Create a list of instructions along with the list of accounts that
+ instructions need to read and write
+2. Fetch a recent blockhash and use it to prepare a transaction message
+3. Simulate the transaction to ensure it behaves as expected
+4. Prompt user to sign the prepared transaction message with their private key
+5. Send the transaction to an RPC node which attempts to forward it to the
+ current block producer
+6. Hope that a block producer validates and commits the transaction into their
+ produced block
+7. Confirm the transaction has either been included in a block or detect when it
+ has expired
+
+## What is a Blockhash?
+
+A [“blockhash”](./../terminology.md#blockhash) refers to the last Proof of
+History (PoH) hash for a [“slot”](./../terminology.md#slot) (description below).
+Since Solana uses PoH as a trusted clock, a transaction’s recent blockhash can
+be thought of as a **timestamp**.
+
+### Proof of History refresher
+
+Solana’s Proof of History mechanism uses a very long chain of recursive SHA-256
+hashes to build a trusted clock. The “history” part of the name comes from the
+fact that block producers hash transaction id’s into the stream to record which
+transactions were processed in their block.
+
+[PoH hash calculation](https://github.com/solana-labs/solana/blob/9488a73f5252ad0d7ea830a0b456d9aa4bfbb7c1/entry/src/poh.rs#L82):
+`next_hash = hash(prev_hash, hash(transaction_ids))`
+
+PoH can be used as a trusted clock because each hash must be produced
+sequentially. Each produced block contains a blockhash and a list of hash
+checkpoints called “ticks” so that validators can verify the full chain of
+hashes in parallel and prove that some amount of time has actually passed. The
+stream of hashes can be broken up into discrete time units such as ticks and slots.
+
+# Transaction Expiration
+
+By default, all Solana transactions will expire if not committed to a block in a
+certain amount of time. The **vast majority** of transaction confirmation issues
+are related to how RPC nodes and validators detect and handle **expired**
+transactions. A solid understanding of how transaction expiration works should
+help you diagnose the bulk of your transaction confirmation issues.
+
+## How does transaction expiration work?
+
+Each transaction includes a “recent blockhash” which is used as a PoH clock
+timestamp and expires when that blockhash is no longer “recent” enough. More
+concretely, Solana validators look up the corresponding slot number for each
+transaction’s blockhash that they wish to process in a block. If the validator
+[can’t find a slot number for the blockhash](https://github.com/solana-labs/solana/blob/9488a73f5252ad0d7ea830a0b456d9aa4bfbb7c1/runtime/src/bank.rs#L3687)
+or if the looked up slot number is more than 151 slots lower than the slot
+number of the block being processed, the transaction will be rejected.
+
+Slots are configured to last about
+[400ms](https://github.com/solana-labs/solana/blob/47b938e617b77eb3fc171f19aae62222503098d7/sdk/program/src/clock.rs#L12)
+but often fluctuate between 400ms and 600ms, so a given blockhash can only be
+used by transactions for about 60 to 90 seconds.
+
+Transaction has expired pseudocode:
+`currentBankSlot > slotForTxRecentBlockhash + 151`
+
+Transaction not expired pseudocode:
+`currentBankSlot - slotForTxRecentBlockhash < 152`
+
+### Example of transaction expiration
+
+Let’s walk through a quick example:
+
+1. A validator is producing a new block for slot #1000
+2. The validator receives a transaction with recent blockhash `1234...` from a
+ user
+3. The validator checks the `1234...` blockhash against the list of recent
+ blockhashes leading up to its new block and discovers that it was the
+ blockhash for slot #849
+4. Since slot #849 is exactly 151 slots lower than slot #1000, the transaction
+ hasn’t expired yet and can still be processed!
+5. But wait, before actually processing the transaction, the validator finishes
+   the block for slot #1000 and starts producing the block for slot #1001
+ (validators get to produce blocks for 4 consecutive slots).
+6. The validator checks the same transaction again and finds that it’s now too
+ old and drops it because it’s now 152 slots lower than the current slot :(
+
+## Why do transactions expire?
+
+There’s a very good reason for this, actually: it’s to help validators avoid
+processing the same transaction twice.
+
+A naive brute force approach to prevent double processing could be to check
+every new transaction against the blockchain’s entire transaction history. But
+by having transactions expire after a short amount of time, validators only need
+to check if a new transaction is in a relatively small set of _recently_
+processed transactions.
+
+### Other blockchains
+
+Solana’s approach to preventing double processing is quite different from other
+blockchains. For example, Ethereum tracks a counter (nonce) for each transaction
+sender and will only process transactions that use the next valid nonce.
+
+Ethereum’s approach is simple for validators to implement, but it can be
+problematic for users. Many people have encountered situations when their
+Ethereum transactions got stuck in a _pending_ state for a long time and all the
+later transactions, which used higher nonce values, were blocked from
+processing.
+
+### Advantages on Solana
+
+There are a few advantages to Solana’s approach:
+
+1. A single fee payer can submit multiple transactions at the same time that are
+ allowed to be processed in any order. This might happen if you’re using
+ multiple applications at the same time.
+2. If a transaction doesn’t get committed to a block and expires, users can try
+ again knowing that their previous transaction won’t ever be processed.
+
+Because Solana does not use counters, the wallet experience may be easier for
+users to understand: transactions can quickly reach success, failure, or
+expiration states and avoid annoying pending states.
+
+### Disadvantages on Solana
+
+Of course there are some disadvantages too:
+
+1. Validators have to actively track a set of all processed transaction id’s to
+ prevent double processing.
+2. If the expiration time period is too short, users might not be able to submit
+ their transaction before it expires.
+
+These disadvantages highlight a tradeoff in how transaction expiration is
+configured. If the expiration time of a transaction is increased, validators
+need to use more memory to track more transactions. If the expiration time is
+decreased, users may not have enough time to submit their transaction.
+
+Currently, Solana clusters require that transactions use blockhashes that are no
+more than
+[151 slots](https://github.com/solana-labs/solana/blob/9488a73f5252ad0d7ea830a0b456d9aa4bfbb7c1/sdk/program/src/clock.rs#L65)
+old.
+
+> This [Github issue](https://github.com/solana-labs/solana/issues/23582)
+> contains some calculations that estimate that mainnet-beta validators need
+> about 150MB of memory to track transactions. This could be slimmed down in the
+> future if necessary without decreasing expiration time as I’ve detailed in
+> that issue.
+
+## Transaction confirmation tips
+
+As mentioned before, blockhashes expire after a time period of only 151 slots
+which can pass as quickly as **one minute** when slots are processed within the
+target time of 400ms.
+
+One minute is not a lot of time considering that a client needs to fetch a
+recent blockhash, wait for the user to sign, and finally hope that the
+broadcasted transaction reaches a leader that is willing to accept it. Let’s go
+through some tips to help avoid confirmation failures due to transaction
+expiration!
+
+### Fetch blockhashes with the appropriate commitment level
+
+Given the short expiration time frame, it’s imperative that clients help users
+create transactions with a blockhash that is as recent as possible.
+
+When fetching blockhashes, the current recommended RPC API is called
+[`getLatestBlockhash`](/api/http#getlatestblockhash). By default, this API uses
+the `"finalized"` commitment level to return the most recently finalized block’s
+blockhash. However, you can override this behavior by
+[setting the `commitment` parameter](/api/http#configuring-state-commitment) to
+a different commitment level.
+
+**Recommendation**
+
+The `"confirmed"` commitment level should almost always be used for RPC requests
+because it’s usually only a few slots behind the `"processed"` commitment and
+has a very low chance of belonging to a dropped
+[fork](./../cluster/fork-generation.md).
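+
+For example, using the Rust RPC client (the endpoint URL is a placeholder), a
+blockhash can be fetched at the `"confirmed"` commitment level along with the
+last block height at which it will be valid:
+
+```rust,ignore
+use solana_client::rpc_client::RpcClient;
+use solana_sdk::commitment_config::CommitmentConfig;
+
+let rpc = RpcClient::new_with_commitment(
+    "https://api.devnet.solana.com".to_string(),
+    CommitmentConfig::confirmed(),
+);
+
+// Returns the blockhash and the last block height at which it will be valid.
+let (recent_blockhash, last_valid_block_height) =
+    rpc.get_latest_blockhash_with_commitment(CommitmentConfig::confirmed())?;
+```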
+
+But feel free to consider the other options:
+
+- Choosing `"processed"` will let you fetch the most recent blockhash compared
+ to other commitment levels and therefore gives you the most time to prepare
+ and process a transaction. But due to the prevalence of forking in the Solana
+ protocol, roughly 5% of blocks don’t end up being finalized by the cluster so
+ there’s a real chance that your transaction uses a blockhash that belongs to a
+ dropped fork. Transactions that use blockhashes for abandoned blocks won’t
+ ever be considered recent by any blocks that are in the finalized blockchain.
+- Using the default commitment level `"finalized"` will eliminate any risk that
+ the blockhash you choose will belong to a dropped fork. The tradeoff is that
+ there is typically at least a 32 slot difference between the most recent
+  confirmed block and the most recent finalized block. This tradeoff is pretty
+  severe and effectively reduces the usable lifetime of your transactions by
+  about 13 seconds, and this could be even more during unstable cluster
+  conditions.
+
+### Use an appropriate preflight commitment level
+
+If your transaction uses a blockhash that was fetched from one RPC node, and you
+then send or simulate that transaction with a different RPC node, you could run
+into issues due to one node lagging behind the other.
+
+When RPC nodes receive a `sendTransaction` request, they will attempt to
+determine the expiration block of your transaction using the most recent
+finalized block or with the block selected by the `preflightCommitment`
+parameter. A **VERY** common issue is that a received transaction’s blockhash
+was produced after the block used to calculate the expiration for that
+transaction. If an RPC node can’t determine when your transaction expires, it
+will only forward your transaction **one time** and then will **drop** the
+transaction.
+
+Similarly, when RPC nodes receive a `simulateTransaction` request, they will
+simulate your transaction using the most recent finalized block or with the
+block selected by the `preflightCommitment` parameter. If the block chosen for
+simulation is older than the block used for your transaction’s blockhash, the
+simulation will fail with the dreaded “blockhash not found” error.
+
+**Recommendation**
+
+Even if you use `skipPreflight`, **ALWAYS** set the `preflightCommitment`
+parameter to the same commitment level used to fetch your transaction’s
+blockhash for both `sendTransaction` and `simulateTransaction` requests.
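+
+A sketch with the Rust RPC client (assuming `transaction` was built with a
+blockhash fetched at the `"confirmed"` commitment level):
+
+```rust,ignore
+use solana_client::{rpc_client::RpcClient, rpc_config::RpcSendTransactionConfig};
+use solana_sdk::commitment_config::{CommitmentConfig, CommitmentLevel};
+
+let rpc = RpcClient::new_with_commitment(
+    "https://api.devnet.solana.com".to_string(),
+    CommitmentConfig::confirmed(),
+);
+
+// Match the preflight commitment to the commitment used to fetch the blockhash.
+let signature = rpc.send_transaction_with_config(
+    &transaction,
+    RpcSendTransactionConfig {
+        preflight_commitment: Some(CommitmentLevel::Confirmed),
+        ..RpcSendTransactionConfig::default()
+    },
+)?;
+```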
+
+### Be wary of lagging RPC nodes when sending transactions
+
+When your application uses an RPC pool service or when the RPC endpoint differs
+between creating a transaction and sending a transaction, you need to be wary of
+situations where one RPC node is lagging behind the other. For example, if you
+fetch a transaction blockhash from one RPC node then you send that transaction
+to a second RPC node for forwarding or simulation, the second RPC node might be
+lagging behind the first.
+
+**Recommendation**
+
+For `sendTransaction` requests, clients should keep resending a transaction to
+an RPC node on a frequent interval so that if an RPC node is slightly lagging
+behind the cluster, it will eventually catch up and detect your transaction’s
+expiration properly.
+
+For `simulateTransaction` requests, clients should use the
+[`replaceRecentBlockhash`](/api/http#simulatetransaction) parameter to tell the
+RPC node to replace the simulated transaction’s blockhash with a blockhash that
+will always be valid for simulation.
+
+### Avoid reusing stale blockhashes
+
+Even if your application has fetched a very recent blockhash, be sure that
+you’re not reusing that blockhash in transactions for too long. The ideal
+scenario is that a recent blockhash is fetched right before a user signs their
+transaction.
+
+**Recommendation for applications**
+
+Poll for new recent blockhashes on a frequent basis to ensure that whenever a
+user triggers an action that creates a transaction, your application already has
+a fresh blockhash that’s ready to go.
+
+**Recommendation for wallets**
+
+Poll for new recent blockhashes on a frequent basis and replace a transaction’s
+recent blockhash right before the user signs the transaction to ensure the
+blockhash is as fresh as possible.
+
+### Use healthy RPC nodes when fetching blockhashes
+
+When you fetch the latest blockhash with the `"confirmed"` commitment level
+from an RPC node, it responds with the blockhash of the latest confirmed block
+that it’s aware of. Solana’s block propagation protocol prioritizes sending
+blocks to staked nodes, so RPC nodes naturally lag about a block behind the
+rest of the cluster. They also have to do more work to handle application
+requests and can lag a lot more under heavy user traffic.
+
+Lagging RPC nodes can therefore respond to blockhash requests with blockhashes
+that were confirmed by the cluster quite a while ago. By default, a lagging RPC
+node that detects it is more than 150 slots behind the cluster will stop
+responding to requests, but just before hitting that threshold it can still
+return a blockhash that is just about to expire.
+
+**Recommendation**
+
+Monitor the health of your RPC nodes to ensure that they have an up-to-date view
+of the cluster state with one of the following methods:
+
+1. Fetch your RPC node’s highest processed slot by using the
+   [`getSlot`](/api/http#getslot) RPC API with the `"processed"` commitment
+   level and then call the
+   [`getMaxShredInsertSlot`](/api/http#getmaxshredinsertslot) RPC API to get
+   the highest slot that your RPC node has received a “shred” of a block for.
+   If the difference between these responses is very large, the cluster is
+   producing blocks far ahead of what the RPC node has processed (see the
+   sketch after this list).
+2. Call the `getLatestBlockhash` RPC API with the `"confirmed"` commitment level
+ on a few different RPC API nodes and use the blockhash from the node that
+ returns the highest slot for its
+ [context slot](/api/http#rpcresponse-structure).
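+
+A minimal sketch of the first check, assuming an existing `connection` and an
+illustrative lag threshold:
+
+```js
+// compare the highest processed slot with the highest slot a shred has been
+// received for; a large gap means the node is falling behind the cluster
+const processedSlot = await connection.getSlot("processed");
+const shredInsertSlot = await connection.getMaxShredInsertSlot();
+
+const MAX_SLOT_LAG = 50; // hypothetical tolerance, tune for your application
+if (shredInsertSlot - processedSlot > MAX_SLOT_LAG) {
+  console.warn("RPC node is processing blocks well behind what it receives");
+}
+```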
+
+### Wait long enough for expiration
+
+**Recommendation**
+
+When calling [`getLatestBlockhash`](/api/http#getlatestblockhash) RPC API to get
+a recent blockhash for your transaction, take note of the
+`"lastValidBlockHeight"` in the response.
+
+Then, poll the [`getBlockHeight`](/api/http#getblockheight) RPC API with the
+`"confirmed"` commitment level until it returns a block height greater than the
+previously returned last valid block height.
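+
+A minimal sketch of this polling loop, assuming an existing `connection`:
+
+```js
+const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));
+
+const { blockhash, lastValidBlockHeight } =
+  await connection.getLatestBlockhash("confirmed");
+
+// ... build, sign, and send a transaction that uses `blockhash` ...
+
+// only treat the transaction as expired once the confirmed block height has
+// passed its last valid block height
+while ((await connection.getBlockHeight("confirmed")) <= lastValidBlockHeight) {
+  await sleep(1000);
+}
+```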
+
+### Consider using “durable” transactions
+
+Sometimes transaction expiration issues are really hard to avoid (e.g. offline
+signing, cluster instability). If the previous tips are still not sufficient for
+your use-case, you can switch to using durable transactions (they just require a
+bit of setup).
+
+To start using durable transactions, a user first needs to submit a transaction
+that
+[invokes instructions that create a special on-chain “nonce” account](https://docs.rs/solana-program/latest/solana_program/system_instruction/fn.create_nonce_account.html)
+and stores a “durable blockhash” inside of it. At any point in the future (as
+long as the nonce account hasn’t been used yet), the user can create a durable
+transaction by following these 2 rules (see the sketch after this list):
+
+1. The instruction list must start with an
+ [“advance nonce” system instruction](https://docs.rs/solana-program/latest/solana_program/system_instruction/fn.advance_nonce_account.html)
+ which loads their on-chain nonce account
+2. The transaction’s blockhash must be equal to the durable blockhash stored by
+ the on-chain nonce account
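+
+A minimal sketch of building such a durable transaction with
+`@solana/web3.js`, assuming an existing `connection`, a `payer` keypair that is
+also the nonce authority, and an already created nonce account at
+`noncePubkey`:
+
+```js
+const web3 = require("@solana/web3.js");
+
+// read the durable blockhash stored in the on-chain nonce account
+const nonceAccount = await connection.getNonce(noncePubkey, "confirmed");
+
+const transaction = new web3.Transaction({
+  feePayer: payer.publicKey,
+  // rule 2: use the durable blockhash stored by the nonce account
+  recentBlockhash: nonceAccount.nonce,
+});
+
+transaction.add(
+  // rule 1: the first instruction must advance the nonce account
+  web3.SystemProgram.nonceAdvance({
+    noncePubkey,
+    authorizedPubkey: payer.publicKey,
+  }),
+  // ...followed by the instructions you actually want to run
+);
+
+transaction.sign(payer);
+```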
+
+Here’s how these transactions are processed by the Solana runtime:
+
+1. If the transaction’s blockhash is no longer “recent”, the runtime checks if
+ the transaction’s instruction list begins with an “advance nonce” system
+ instruction
+2. If so, it then loads the nonce account specified by the “advance nonce”
+ instruction
+3. Then it checks that the stored durable blockhash matches the transaction’s
+ blockhash
+4. Lastly it makes sure to advance the nonce account’s stored blockhash to the
+ latest recent blockhash to ensure that the same transaction can never be
+ processed again
+
+For more details about how these durable transactions work, you can read the
+[original proposal](./../implemented-proposals/durable-tx-nonces.md) and
+[check out an example](./clients/javascript-reference#nonceaccount) in the
+Solana docs.
diff --git a/docs/developing/versioned-transactions.md b/docs/developing/versioned-transactions.md
new file mode 100644
index 000000000..e74b58558
--- /dev/null
+++ b/docs/developing/versioned-transactions.md
@@ -0,0 +1,184 @@
+---
+title: Versioned Transactions
+description: ""
+---
+
+[Versioned Transactions](./versioned-transactions.md) are the new transaction
+format that allows for additional functionality in the Solana runtime,
+including [Address Lookup Tables](./lookup-tables.md).
+
+While changes to [on chain](./on-chain-programs/overview.md) programs are
+**NOT** required to support the new functionality of versioned transactions (or
+for backwards compatibility), developers **WILL** need to update their client
+side code to prevent
+[errors due to different transaction versions](#max-supported-transaction-version).
+
+## Current Transaction Versions
+
+The Solana runtime supports two transaction versions:
+
+- `legacy` - older transaction format with no additional benefit
+- `0` - added support for [Address Lookup Tables](./lookup-tables.md)
+
+## Max supported transaction version
+
+All RPC requests that return a transaction **_should_** specify the highest
+transaction version your application supports with the
+`maxSupportedTransactionVersion` option, including
+[`getBlock`](../api/http#getblock) and
+[`getTransaction`](../api/http#gettransaction).
+
+An RPC request will fail if it returns a
+[Versioned Transaction](./versioned-transactions.md) with a version higher than
+the set `maxSupportedTransactionVersion` (i.e. if a version `0` transaction is
+returned when `legacy` is selected).
+
+> WARNING: If no `maxSupportedTransactionVersion` value is set, then only
+> `legacy` transactions will be allowed in the RPC response. Therefore, your RPC
+> requests **WILL** fail if any version `0` transactions are returned.
+
+## How to set max supported version
+
+You can set the `maxSupportedTransactionVersion` using both the
+[`@solana/web3.js`](https://solana-labs.github.io/solana-web3.js/) library and
+JSON formatted requests directly to an RPC endpoint.
+
+### Using web3.js
+
+Using the [`@solana/web3.js`](https://solana-labs.github.io/solana-web3.js/)
+library, you can retrieve the most recent block or get a specific transaction:
+
+```js
+// connect to the `devnet` cluster and get the current `slot`
+const connection = new web3.Connection(web3.clusterApiUrl("devnet"));
+const slot = await connection.getSlot();
+
+// get the latest block (allowing for v0 transactions)
+const block = await connection.getBlock(slot, {
+ maxSupportedTransactionVersion: 0,
+});
+
+// get a specific transaction (allowing for v0 transactions)
+const getTx = await connection.getTransaction(
+ "3jpoANiFeVGisWRY5UP648xRXs3iQasCHABPWRWnoEjeA93nc79WrnGgpgazjq4K9m8g2NJoyKoWBV1Kx5VmtwHQ",
+ {
+ maxSupportedTransactionVersion: 0,
+ },
+);
+```
+
+### JSON requests to the RPC
+
+Using a standard JSON formatted POST request, you can set the
+`maxSupportedTransactionVersion` when retrieving a specific block:
+
+```bash
+curl http://localhost:8899 -X POST -H "Content-Type: application/json" -d \
+'{"jsonrpc": "2.0", "id":1, "method": "getBlock", "params": [430, {
+ "encoding":"json",
+ "maxSupportedTransactionVersion":0,
+ "transactionDetails":"full",
+ "rewards":false
+}]}'
+```
+
+## How to create a Versioned Transaction
+
+Versioned transactions can be created in a manner similar to the older method
+of creating transactions. There are differences in using certain libraries that
+should be noted.
+
+Below is an example of how to create a Versioned Transaction, using the
+`@solana/web3.js` library, to perform a SOL transfer between two accounts.
+
+#### Notes:
+
+- `payer` is a valid `Keypair` wallet, funded with SOL
+- `toAccount` is a valid `Keypair`
+
+Firstly, import the web3.js library and create a `connection` to your desired
+cluster.
+
+We then define the recent `blockhash` and `minRent` we will need for our
+transaction and the account:
+
+```js
+const web3 = require("@solana/web3.js");
+
+// connect to the cluster and get the minimum rent for rent exempt status
+const connection = new web3.Connection(web3.clusterApiUrl("devnet"));
+let minRent = await connection.getMinimumBalanceForRentExemption(0);
+let blockhash = await connection
+ .getLatestBlockhash()
+ .then(res => res.blockhash);
+```
+
+Create an `array` of all the `instructions` you desire to send in your
+transaction. In this example below, we are creating a simple SOL transfer
+instruction:
+
+```js
+// create an array with your desired `instructions`
+const instructions = [
+ web3.SystemProgram.transfer({
+ fromPubkey: payer.publicKey,
+ toPubkey: toAccount.publicKey,
+ lamports: minRent,
+ }),
+];
+```
+
+Next, construct a `MessageV0` formatted transaction message with your desired
+`instructions`:
+
+```js
+// create v0 compatible message
+const messageV0 = new web3.TransactionMessage({
+ payerKey: payer.publicKey,
+ recentBlockhash: blockhash,
+ instructions,
+}).compileToV0Message();
+```
+
+Then, create a new `VersionedTransaction`, passing in our v0 compatible message:
+
+```js
+const transaction = new web3.VersionedTransaction(messageV0);
+
+// sign your transaction with the required `Signers`
+transaction.sign([payer]);
+```
+
+You can sign the transaction by either:
+
+- passing an array of `signatures` into the `VersionedTransaction` method, or
+- calling the `transaction.sign()` method, passing an array of the required
+  `Signers`
+
+> NOTE: After calling the `transaction.sign()` method, all the previous
+> transaction `signatures` will be fully replaced by new signatures created from
+> the provided `Signers`.
+
+After your `VersionedTransaction` has been signed by all required accounts, you
+can send it to the cluster and `await` the response:
+
+```js
+// send our v0 transaction to the cluster
+const txid = await connection.sendTransaction(transaction);
+console.log(`https://explorer.solana.com/tx/${txid}?cluster=devnet`);
+```
+
+> NOTE: Unlike `legacy` transactions, sending a `VersionedTransaction` via
+> `sendTransaction` does **NOT** support transaction signing via passing in an
+> array of `Signers` as the second parameter. You will need to sign the
+> transaction before calling `connection.sendTransaction()`.
+
+## More Resources
+
+- using
+ [Versioned Transactions for Address Lookup Tables](./lookup-tables.md#how-to-create-an-address-lookup-table)
+- view an
+ [example of a v0 transaction](https://explorer.solana.com/tx/3jpoANiFeVGisWRY5UP648xRXs3iQasCHABPWRWnoEjeA93nc79WrnGgpgazjq4K9m8g2NJoyKoWBV1Kx5VmtwHQ/?cluster=devnet)
+ on Solana Explorer
+- read the [accepted proposal](./../proposals/versioned-transactions.md) for
+ Versioned Transaction and Address Lookup Tables
diff --git a/docs/economics_overview.md b/docs/economics_overview.md
new file mode 100644
index 000000000..90e002a37
--- /dev/null
+++ b/docs/economics_overview.md
@@ -0,0 +1,47 @@
+---
+title: Solana Economics Overview
+---
+
+**Subject to change.**
+
+Solana’s crypto-economic system is designed to promote a healthy, long term
+self-sustaining economy with participant incentives aligned to the security and
+decentralization of the network. The main participants in this economy are
+validation-clients. Their contributions to the network, state validation, and
+their requisite incentive mechanisms are discussed below.
+
+The main channels of participant remittances are referred to as protocol-based
+rewards and transaction fees. Protocol-based rewards are generated from
+inflationary issuances from a protocol-defined inflation schedule. These
+rewards, together with transaction fees, constitute the total reward delivered
+to validation clients. In the early days of the network, it is likely that
+protocol-based rewards, deployed based on a predefined issuance schedule, will
+drive the majority of participant incentives to participate in the network.
+
+These protocol-based rewards are calculated per epoch and distributed across the
+active delegated stake and validator set (per validator commission). As
+discussed further below, the per annum inflation rate is based on a
+pre-determined disinflationary schedule. This provides the network with supply
+predictability which supports long term economic stability and security.
+
+Transaction fees are participant-to-participant transfers, attached to network
+interactions as a motivation and compensation for the inclusion and execution of
+a proposed transaction. A mechanism for long-term economic stability and forking
+protection through partial burning of each transaction fee is also discussed
+below.
+
+First, an overview of the inflation design is presented. This section starts
+with defining and clarifying [Terminology](inflation/terminology.md) commonly
+used subsequently in the discussion of inflation and the related components.
+Following that, we outline Solana's proposed
+[Inflation Schedule](inflation/inflation_schedule.md), i.e. the specific
+parameters that uniquely parameterize the protocol-driven inflationary issuance
+over time. Next is a brief section on
+[Adjusted Staking Yield](inflation/adjusted_staking_yield.md), and how token
+dilution might influence staking behavior.
+
+An overview of [Transaction Fees](transaction_fees.md) on Solana is followed by
+a discussion of [Storage Rent Economics](storage_rent_economics.md) in which we
+describe an implementation of storage rent to account for the externality costs
+of maintaining the active state of the ledger.
diff --git a/docs/getstarted/hello-world.md b/docs/getstarted/hello-world.md
new file mode 100644
index 000000000..b7d6af35d
--- /dev/null
+++ b/docs/getstarted/hello-world.md
@@ -0,0 +1,302 @@
+---
+title: "Hello World Quickstart Guide"
+description:
+ 'This "hello world" quickstart guide will demonstrate how to setup, build, and
+ deploy your first Solana program in your browser with Solana Playground.'
+keywords:
+ - playground
+ - solana pg
+ - on chain
+ - rust
+ - native program
+ - tutorial
+ - intro to solana development
+ - blockchain developer
+ - blockchain tutorial
+ - web3 developer
+---
+
+For this "hello world" quickstart guide, we will use
+[Solana Playground](https://beta.solpg.io), a browser based IDE to develop and
+deploy our Solana program. To use it, you do **NOT** have to install any
+software on your computer. Simply open Solana Playground in your browser of
+choice, and you are ready to write and deploy Solana programs.
+
+## What you will learn
+
+- How to get started with Solana Playground
+- How to create a Solana wallet on Playground
+- How to program a basic Solana program in Rust
+- How to build and deploy a Solana Rust program
+- How to interact with your on chain program using JavaScript
+
+## Using Solana Playground
+
+[Solana Playground](https://beta.solpg.io) is a browser based application that
+will let you write, build, and deploy on chain Solana programs. All from your
+browser. No installation needed.
+
+It is a great developer resource for getting started with Solana development,
+especially on Windows.
+
+### Import our example project
+
+In a new tab in your browser, open our example "_Hello World_" project on Solana
+Playground: https://beta.solpg.io/6314a69688a7fca897ad7d1d
+
+Next, import the project into your local workspace by clicking the "**Import**"
+icon and naming your project `hello_world`.
+
+![Import the get started Solana program on Solana Playground](/img/quickstarts/solana-get-started-import-on-playground.png)
+
+> If you do **not** import the program into **your** Solana Playground, then you
+> will **not** be able to make changes to the code. But you **will** still be
+> able to build and deploy the code to a Solana cluster.
+
+### Create a Playground wallet
+
+Normally with [local development](./local.md), you will need to create a file
+system wallet for use with the Solana CLI. But with the Solana Playground, you
+only need to click a few buttons to create a browser based wallet.
+
+:::caution Your _Playground Wallet_ will be saved in your browser's local
+storage. Clearing your browser cache will remove your saved wallet. When
+creating a new wallet, you will have the option to save a local copy of your
+wallet's keypair file. :::
+
+Click on the red status indicator button at the bottom left of the screen,
+(optionally) save your wallet's keypair file to your computer for backup, then
+click "**Continue**".
+
+After your Playground Wallet is created, you will notice the bottom of the
+window now states your wallet's address, your SOL balance, and the Solana
+cluster you are connected to (Devnet is usually the default/recommended, but a
+"localhost" [test validator](./local.md) is also acceptable).
+
+## Create a Solana program
+
+The code for your Rust based Solana program will live in your `src/lib.rs` file.
+Inside `src/lib.rs` you will be able to import your Rust crates and define your
+logic. Open your `src/lib.rs` file within Solana Playground.
+
+### Import the `solana_program` crate
+
+At the top of `lib.rs`, we import the `solana-program` crate and bring our
+needed items into the local namespace:
+
+```rust
+use solana_program::{
+ account_info::AccountInfo,
+ entrypoint,
+ entrypoint::ProgramResult,
+ pubkey::Pubkey,
+ msg,
+};
+```
+
+### Write your program logic
+
+Every Solana program must define an `entrypoint` that tells the Solana runtime
+where to start executing your on chain code. Your program's
+[entrypoint](../developing/on-chain-programs/developing-rust#program-entrypoint)
+should provide a public function named `process_instruction`:
+
+```rust
+// declare and export the program's entrypoint
+entrypoint!(process_instruction);
+
+// program entrypoint's implementation
+pub fn process_instruction(
+ program_id: &Pubkey,
+ accounts: &[AccountInfo],
+ instruction_data: &[u8]
+) -> ProgramResult {
+ // log a message to the blockchain
+ msg!("Hello, world!");
+
+ // gracefully exit the program
+ Ok(())
+}
+```
+
+Every on chain program should return the `Ok`
+[result enum](https://doc.rust-lang.org/std/result/) with a value of `()`. This
+tells the Solana runtime that your program executed successfully without errors.
+
+Our program above will simply
+[log a message](../developing/on-chain-programs/debugging#logging) of "_Hello,
+world!_" to the blockchain cluster, then gracefully exit with `Ok(())`.
+
+### Build your program
+
+On the left sidebar, select the "**Build & Deploy**" tab. Next, click the
+"Build" button.
+
+If you look at the Playground's terminal, you should see your Solana program
+begin to compile. Once complete, you will see a success message.
+
+![Viewing a successful build of your Rust based program](/img/quickstarts/solana-get-started-successful-build.png)
+
+:::caution You may receive _warnings_ when your program is compiled due to
+unused variables. Don't worry, these warnings will not affect your build. They
+are due to our very simple program not using all the variables we declared in
+the `process_instruction` function. :::
+
+### Deploy your program
+
+You can click the "Deploy" button to deploy your first program to the Solana
+blockchain. Specifically to your selected cluster (e.g. Devnet, Testnet, etc).
+
+After each deployment, you will see your Playground Wallet balance change. By
+default, Solana Playground will automatically request SOL airdrops on your
+behalf to ensure your wallet has enough SOL to cover the cost of deployment.
+
+> Note: If you need more SOL, you can airdrop more by typing the
+> `solana airdrop` command in the playground terminal:
+
+```sh
+solana airdrop 2
+```
+
+![Build and deploy your Solana program to the blockchain](/img/quickstarts/solana-get-started-build-and-deploy.png)
+
+### Find your program id
+
+When executing a program using
+[web3.js](../developing/clients/javascript-reference.md) or from
+[another Solana program](../developing/programming-model/calling-between-programs.md),
+you will need to provide the `program id` (aka public address of your program).
+
+Inside Solana Playground's **Build & Deploy** sidebar, you can find your
+`program id` under the **Program Credentials** dropdown.
+
+#### Congratulations!
+
+You have successfully setup, built, and deployed a Solana program using the Rust
+language directly in your browser. Next, we will demonstrate how to interact
+with your on chain program.
+
+## Interact with your on chain program
+
+Once you have successfully deployed a Solana program to the blockchain, you will
+want to be able to interact with that program.
+
+Like most developers creating dApps and websites, we will interact with our on
+chain program using JavaScript. Specifically, we will use the open source
+[NPM package](https://www.npmjs.com/package/@solana/web3.js) `@solana/web3.js`
+to aid in our client application.
+
+:::info This web3.js package is an abstraction layer on top of the
+[JSON RPC API](/api) that reduces the need to rewrite common boilerplate,
+helping to simplify your client side application code. :::
+
+### Initialize client
+
+We will be using Solana Playground for the client generation. Create a client
+folder by running the `run` command in the playground terminal:
+
+```bash
+run
+```
+
+This creates a `client` folder and a default `client.ts` file. This is where we
+will work for the rest of our `hello world` program.
+
+### Playground globals
+
+In playground, there are many utilities that are globally available for us to
+use without installing or setting up anything. The most important ones for our
+`hello world` program are `web3` for `@solana/web3.js` and `pg` for Solana
+Playground utilities.
+
+:::info You can go over all of the available globals by pressing `CTRL+SPACE`
+(or `CMD+SPACE` on macOS) inside the editor. :::
+
+### Call the program
+
+To execute your on chain program, you must send a
+[transaction](../developing/programming-model/transactions.md) to it. Each
+transaction submitted to the Solana blockchain contains a listing of
+instructions (and the programs those instructions will interact with).
+
+Here we create a new transaction and add a single `instruction` to it:
+
+```js
+// create an empty transaction
+const transaction = new web3.Transaction();
+
+// add a hello world program instruction to the transaction
+transaction.add(
+ new web3.TransactionInstruction({
+ keys: [],
+ programId: new web3.PublicKey(pg.PROGRAM_ID),
+ }),
+);
+```
+
+Each `instruction` must include all the keys involved in the operation and the
+program ID we want to execute. In this example `keys` is empty because our
+program only logs `hello world` and doesn't need any accounts.
+
+With our transaction created, we can submit it to the cluster:
+
+```js
+// send the transaction to the Solana cluster
+console.log("Sending transaction...");
+const txHash = await web3.sendAndConfirmTransaction(
+ pg.connection,
+ transaction,
+ [pg.wallet.keypair],
+);
+console.log("Transaction sent with hash:", txHash);
+```
+
+:::info The first signer in the signers array is the transaction fee payer by
+default. We are signing with our keypair `pg.wallet.keypair`. :::
+
+### Run the application
+
+With the client application written, you can run the code via the same `run`
+command.
+
+Once your application completes, you will see output similar to this:
+
+```sh
+Running client...
+ client.ts:
+ My address: GkxZRRNPfaUfL9XdYVfKF3rWjMcj5md6b6mpRoWpURwP
+ My balance: 5.7254472 SOL
+ Sending transaction...
+ Transaction sent with hash: 2Ra7D9JoqeNsax9HmNq6MB4qWtKPGcLwoqQ27mPYsPFh3h8wignvKB2mWZVvdzCyTnp7CEZhfg2cEpbavib9mCcq
+```
+
+### Get transaction logs
+
+We will be using `solana-cli` directly in playground to get the information
+about any transaction:
+
+```sh
+solana confirm -v <TRANSACTION_HASH>
+```
+
+Replace `<TRANSACTION_HASH>` with the hash you received from calling the
+`hello world` program.
+
+You should see `Hello, world!` in the **Log Messages** section of the output. 🎉
+
+#### Congratulations!!!
+
+You have now written a client application for your on chain program. You are now
+a Solana developer!
+
+PS: Try to update your program's message then re-build, re-deploy, and
+re-execute your program.
+
+## Next steps
+
+See the links below to learn more about writing Solana programs:
+
+- [Setup your local development environment](./local.md)
+- [Overview of writing Solana programs](../developing/on-chain-programs/overview)
+- [Learn more about developing Solana programs with Rust](../developing/on-chain-programs/developing-Rust)
+- [Debugging on chain programs](../developing/on-chain-programs/debugging)
diff --git a/docs/getstarted/local.md b/docs/getstarted/local.md
new file mode 100644
index 000000000..006ce572f
--- /dev/null
+++ b/docs/getstarted/local.md
@@ -0,0 +1,186 @@
+---
+title: "Local Development Quickstart"
+description:
+ "This quickstart guide will demonstrate how to quickly install and setup your
+ local Solana development environment."
+keywords:
+ - rust
+ - cargo
+ - toml
+ - program
+ - tutorial
+ - intro to solana development
+ - blockchain developer
+ - blockchain tutorial
+ - web3 developer
+---
+
+This quickstart guide will demonstrate how to quickly install and setup your
+local development environment, getting you ready to start developing and
+deploying Solana programs to the blockchain.
+
+## What you will learn
+
+- How to install the Solana CLI locally
+- How to setup a localhost Solana cluster/validator
+- How to create a Solana wallet for developing
+- How to airdrop SOL tokens for your wallet
+
+## Install the Solana CLI
+
+To interact with the Solana network from your terminal, you will need to install
+the [Solana CLI tool suite](./../cli/install-solana-cli-tools) on your local
+system.
+
+
+### macOS / Linux / Windows Subsystem for Linux (WSL)
+
+Open your favorite terminal application and install the CLI by running:
+
+```bash
+sh -c "$(curl -sSfL https://release.solana.com/stable/install)"
+```
+
+Depending on your system, the end of the installer messaging may prompt you to
+
+```bash
+Please update your PATH environment variable to include the solana programs:
+```
+
+If you get the above message, copy and paste the recommended command below it
+to update `PATH`.
+
+Confirm you have the desired version of `solana` installed by running:
+
+```bash
+solana --version
+```
+
+After a successful install, `solana-install update` may be used to easily update
+the Solana software to a newer version at any time.
+
+### Windows
+
+:::caution [WSL](https://learn.microsoft.com/en-us/windows/wsl/install) is the
+recommended environment for Windows users. :::
+
+- Open a Command Prompt (`cmd.exe`) as an Administrator
+
+ - Search for Command Prompt in the Windows search bar. When the Command Prompt
+ app appears, right-click and select “Open as Administrator”. If you are
+ prompted by a pop-up window asking “Do you want to allow this app to make
+ changes to your device?”, click Yes.
+
+- Copy and paste the following command, then press Enter to download the Solana
+ installer into a temporary directory:
+
+```bash
+cmd /c "curl https://release.solana.com/stable/solana-install-init-x86_64-pc-windows-msvc.exe --output C:\solana-install-tmp\solana-install-init.exe --create-dirs"
+```
+
+- Copy and paste the following command, then press Enter to install the latest
+ version of Solana. If you see a security pop-up by your system, please select
+ to allow the program to run.
+
+```bash
+C:\solana-install-tmp\solana-install-init.exe stable
+```
+
+- When the installer is finished, press Enter.
+
+- Close the command prompt window and re-open a new command prompt window as a
+ normal user
+- Confirm you have the desired version of `solana` installed by entering:
+
+```bash
+solana --version
+```
+
+After a successful install, `solana-install update` may be used to easily update
+the Solana software to a newer version at any time.
+
+## Setup a localhost blockchain cluster
+
+The Solana CLI comes with the
+[test validator](./../developing/test-validator.md) built in. This command line
+tool will allow you to run a full blockchain cluster on your machine.
+
+```bash
+solana-test-validator
+```
+
+> **PRO TIP:** Run the Solana test validator in a new/separate terminal window
+> that will remain open. The command line program must remain running for your
+> localhost cluster to remain online and ready for action.
+
+Configure your Solana CLI to use your localhost validator for all your future
+terminal commands:
+
+```bash
+solana config set --url localhost
+```
+
+At any time, you can view your current Solana CLI configuration settings:
+
+```bash
+solana config get
+```
+
+## Create a file system wallet
+
+To deploy a program with Solana CLI, you will need a Solana wallet with SOL
+tokens to pay for the cost of transactions.
+
+Let's create a simple file system wallet for testing:
+
+```bash
+solana-keygen new
+```
+
+By default, the `solana-keygen` command will create a new file system wallet
+located at `~/.config/solana/id.json`. You can manually specify the output file
+location using the `--outfile /path` option.
+
+> **NOTE:** If you already have a file system wallet saved at the default
+> location, this command will **NOT** override it (unless you explicitly force
+> override using the `--force` flag).
+
+### Set your new wallet as default
+
+With your new file system wallet created, you must tell the Solana CLI to use
+this wallet to deploy and take ownership of your on chain program:
+
+```bash
+solana config set -k ~/.config/solana/id.json
+```
+
+## Airdrop SOL tokens to your wallet
+
+Once your new wallet is set as the default, you can request a free airdrop of
+SOL tokens to it:
+
+```bash
+solana airdrop 2
+```
+
+> **NOTE:** The `solana airdrop` command has a limit of how many SOL tokens can
+> be requested _per airdrop_ for each cluster (localhost, testnet, or devnet).
+> If your airdrop transaction fails, lower your airdrop request quantity and try
+> again.
+
+You can check your current wallet's SOL balance any time:
+
+```bash
+solana balance
+```
+
+## Next steps
+
+See the links below to learn more about writing Rust based Solana programs:
+
+- [Create and deploy a Solana Rust program](./rust.md)
+- [Overview of writing Solana programs](../developing/on-chain-programs/overview)
diff --git a/docs/getstarted/overview.md b/docs/getstarted/overview.md
new file mode 100644
index 000000000..ddc0aa94f
--- /dev/null
+++ b/docs/getstarted/overview.md
@@ -0,0 +1,240 @@
+---
+title: "Introduction to Solana Development"
+description:
+ "Learn about the basic development concepts of the Solana blockchain."
+keywords:
+ - accounts
+ - transactions
+ - nft
+ - solana basics
+ - tutorial
+ - intro to solana development
+ - blockchain developer
+ - blockchain tutorial
+ - web3 developer
+---
+
+Welcome to the Solana developer docs!
+
+This guide contains step-by-step instructions on how to get started. Before we
+get into the hands on part of the guide, we'll cover basic concepts that all
+developers need to be familiar with to build on Solana:
+
+- Transactions
+- Accounts
+- Programs
+
+## What you will learn
+
+- What the developer workflows look like
+- What transactions, accounts, and programs are
+- Test networks and other tools
+
+## An overview of Solana developer workflows
+
+The Solana network can be thought of as one massive global computer where anyone
+can store and execute code for a fee. Deployed code is called a program, often
+referred to as a "smart contract" on other blockchains. To interact with a
+program, you need to send a transaction on the blockchain from a client.
+
+Here's a high level representation of this. It’s important to note that this is
+an oversimplification of the Solana network for the purposes of learning in an
+easy-to-understand way.
+
+![Solana developer workflows program-client model](/img/quickstarts/solana-overview-client-program.png)
+
+### Program development
+
+The first development workflow allows you to create and deploy custom Rust, C
+and C++ programs directly to the blockchain. Once these programs are deployed,
+anyone who knows how to communicate with them can use them.
+
+You can communicate with these programs by writing dApps with any of the
+available client SDKs (or the [CLI](../cli.md)), all of which use the
+[JSON RPC API](../api) under the hood.
+
+### Client development
+
+The second development workflow is the dApp side where you can write dApps that
+communicate with deployed programs. Your apps can submit transactions with
+instructions to these programs via a client SDK to create a wide variety of
+applications such as wallets, exchanges and more. The most popular apps are
+browser extension wallets and web apps, but you can build mobile/desktop apps or
+anything that can communicate with the JSON RPC API.
+
+These two pieces work together to create a network of dApps and programs that
+can communicate with each other to update the state and query the blockchain.
+
+## Wallets
+
+A wallet is a pair of public and private keys that are used to verify actions on
+the blockchain. The public key is used to identify the account and the private
+key is used to sign transactions.
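+
+For illustration, here is a minimal sketch of generating a keypair with the
+`@solana/web3.js` library:
+
+```js
+const web3 = require("@solana/web3.js");
+
+// generate a brand new keypair: the public key is the wallet's address and the
+// secret key is used to sign transactions
+const keypair = web3.Keypair.generate();
+console.log("address:", keypair.publicKey.toBase58());
+```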
+
+## Transactions
+
+A transaction is the fundamental unit of activity on the Solana blockchain: it's
+a signed data structure that contains instructions for the network to perform a
+particular operation like transferring tokens.
+
+You need a transaction to create, update or delete data on-chain. You can read
+data without a transaction.
+
+All transactions interact with programs on the network - these can be system
+programs or user built programs. Transactions tell the program what they want to
+do with a bunch of instructions, and if they're valid, the program will execute
+them and update the state of the blockchain. Think of it like a write command
+that can be rejected if certain conditions aren't met.
+
+Here's a visual representation of what a transaction contains:
+![Visual layout of a transaction](/img/transaction.svg)
+
+- Signatures: An array of digital signatures from the transaction's signers.
+- Message: The actual instructions that the transaction is issuing to the
+ network.
+ - Message header: 3 `uint8s` describing how many accounts will sign the
+ payload, how many won’t, and how many are read-only.
+ - Account addresses: an array of addresses of the accounts that will be used
+ in the transaction.
+ - Recent blockhash: a unique value that identifies a recent block - this
+ ensures the transaction is not too old and is not re-processed.
+ - Instructions: which program to call, which accounts to use, and any
+ additional data needed for the program to execute the instruction.
+
+Transactions can be created and signed using clients via SDKs, or even on-chain
+programs.
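+
+As an illustrative sketch with `@solana/web3.js`, assuming an existing
+`connection`, a funded `payer` keypair, and a `recipient` public key:
+
+```js
+const web3 = require("@solana/web3.js");
+
+// a transaction with a single instruction: transfer 0.001 SOL
+const transaction = new web3.Transaction().add(
+  web3.SystemProgram.transfer({
+    fromPubkey: payer.publicKey,
+    toPubkey: recipient,
+    lamports: 1_000_000, // 0.001 SOL
+  }),
+);
+
+// signing and sending fills in a recent blockhash and the signature array
+const signature = await web3.sendAndConfirmTransaction(connection, transaction, [
+  payer,
+]);
+```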
+
+You can learn more about transactions
+[here](../developing/programming-model/transactions.md).
+
+### Instructions
+
+Instructions are the most basic operational unit on Solana. A transaction can
+contain one or more instructions. Instructions are executed sequentially in the
+order they are provided in the transaction by programs on the blockchain. If any
+part of an instruction fails, the entire transaction will fail.
+
+Here's what an instruction looks like:
+
+| Item | Description |
+| ------------ | -------------------------------------------------------------------------------------------------------- |
+| `Program ID` | The ID of the program being called |
+| `Accounts` | The accounts that the instruction wants to read or modify |
+| `Data` | Input data provided to the program as additional information or parameters in the format of a byte array |
+
+You can read more about instructions
+[here](../developing/programming-model/transactions#instructions).
+
+### Transaction Fees
+
+Every time you submit a transaction, somebody on the network is providing space
+and processing power to make it happen. To facilitate this, transactions on
+Solana require a fee to be paid in Lamports, which are the smallest units of SOL
+(like cents to a dollar or paise to a rupee). One SOL is equal to 1,000,000,000
+Lamports, and one Lamport has a value of 0.000000001 SOL. This fee is paid to
+the validators who process the transaction.
+
+Transaction fees are calculated based on two main parts:
+
+- a statically set base fee per signature, and
+- the computational resources used during the transaction, measured in
+ "[_compute units_](../terminology.md#compute-units)"
+
+The more work a transaction requires, the more compute units it will use, and
+the more it will cost.
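+
+As a rough sketch (assuming an existing `connection` and a compiled transaction
+`message`), you can ask the cluster what a given message would cost:
+
+```js
+// returns the fee in lamports that the cluster would charge for this message
+const { value: feeInLamports } = await connection.getFeeForMessage(
+  message,
+  "confirmed",
+);
+console.log(`fee: ${feeInLamports} lamports (${feeInLamports / 1e9} SOL)`);
+```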
+
+You can read more about transaction fees [here](../transaction_fees.md).
+
+## Accounts
+
+Accounts on Solana are storage spaces that can hold arbitrary data up to 10MB.
+They're used to store data, user programs, and native system programs.
+
+If a program needs to store state between transactions, it does so using
+accounts. This means that all programs on Solana are stateless - they don't
+store any state data, only code. If an account stores program code, it's marked
+"executable" and can process instructions.
+
+The easiest way to think of an account is like a file. Users can have many
+different files. Developers can write programs that can "talk" to these files.
+In the same way that a Linux user uses a path to look up a file, a Solana client
+uses an address to look up an account. The address is a 256-bit public key. Also
+like a file, an account includes metadata that tells the runtime who is allowed
+to access the data and how. This prevents unauthorized changes to the data in
+the account.
+
+Unlike a file, an account also includes metadata about its own lifetime. Solana
+accounts have a unique lifecycle. When an account is created, it needs to be
+assigned some space, and tokens are required to rent this space. If an account
+doesn't have enough tokens to cover the rent, it will be removed. However, if
+the account does hold enough tokens to cover the rent for two years, it's
+considered "rent-exempt" and won't be deleted.
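+
+A minimal sketch of reading an account with `@solana/web3.js`, assuming an
+existing `connection` and an `address` public key:
+
+```js
+// fetch the account's metadata and raw data (null if the account doesn't exist)
+const accountInfo = await connection.getAccountInfo(address);
+
+if (accountInfo) {
+  console.log("owner program:", accountInfo.owner.toBase58());
+  console.log("lamports:", accountInfo.lamports);
+  console.log("executable:", accountInfo.executable);
+  console.log("data bytes:", accountInfo.data.length);
+}
+```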
+
+You can read more about accounts
+[here](../developing/programming-model/accounts.md).
+
+## Programs
+
+Programs are the foundation of the Solana blockchain. They're responsible for
+everything that happens on the network: creating accounts, processing
+transactions, collecting fees, and more.
+
+Programs process instructions from both end users and other programs. All
+programs are stateless: any data they interact with is stored in separate
+accounts that are passed in via instructions.
+
+There are two sets of programs that are maintained by the Solana Labs team:
+[Native Programs](../developing/runtime-facilities/programs.md) and the
+[Solana Program Library (SPL)](https://spl.solana.com/). These serve as core
+building blocks for on-chain interactions. Native programs are used for core
+blockchain functionality like creating new accounts, assigning ownership,
+transferring SOL, and more. SPL programs are used for creating, swapping, and
+lending tokens, as well as generating stake pools and maintaining an on-chain
+name service.
+
+You can interact with both native programs and SPL programs easily using the
+Solana CLI and the SDKs, allowing you to create complete dApps without writing
+Rust. You can also build on top of any user programs that have been deployed to
+the network - all you need is the program's address and how it works: the
+account structures, instructions, and error codes.
+
+Developers most commonly write programs in Rust using frameworks such as Anchor.
+However, programs can be written in any language that compiles to BPF, including
+C++ and Move.
+
+You can learn more about programs [here](../developing/intro/programs.md).
+
+## Testing and developing environments
+
+When developing on Solana you have a few options for environments.
+
+The easiest and quickest way to get started is the
+[Solana Playground](https://beta.solpg.io) - a browser based IDE that allows you
+to write, deploy, and test programs.
+
+The most popular setup is [local development](local.md) with a local validator
+that you run on your machine - this allows you to test your programs locally
+before deploying them to any network.
+
+In each environment, you'll be using one of three networks:
+
+- Mainnet Beta - the "production" network where all the action happens.
+ Transactions cost real money here.
+- Testnet - used for stress testing recent releases. Focused on network
+ performance, stability, and validator behavior.
+- Devnet - the primary network for development. Most closely resembles Mainnet
+ Beta, but tokens are not real.
+
+Devnet has a faucet that allows you to get free SOL to test with. It costs $0 to
+do development on Solana.
+
+Check out the [clusters page](../clusters.md) for more information on these.
+
+## Next steps
+
+You're now ready to get started building on Solana!
+
+- [Deploy your first Solana program in the browser](./hello-world.md)
+- [Setup your local development environment](./local.md)
+- [Get started building programs locally with Rust](./rust.md)
+- [Overview of writing Solana programs](../developing/on-chain-programs/overview)
diff --git a/docs/getstarted/rust.md b/docs/getstarted/rust.md
new file mode 100644
index 000000000..c4dd23159
--- /dev/null
+++ b/docs/getstarted/rust.md
@@ -0,0 +1,188 @@
+---
+title: "Rust Program Quickstart"
+description:
+ "This quickstart guide will demonstrate how to quickly setup, build, and
+ deploy your first Rust based Solana program to the blockchain."
+keywords:
+ - rust
+ - cargo
+ - toml
+ - program
+ - tutorial
+ - intro to solana development
+ - blockchain developer
+ - blockchain tutorial
+ - web3 developer
+---
+
+Rust is the most common programming language to write Solana programs with. This
+quickstart guide will demonstrate how to quickly setup, build, and deploy your
+first Rust based Solana program to the blockchain.
+
+> **NOTE:** This guide uses the Solana CLI and assumes you have set up your
+> local development environment. Check out our
+> [local development quickstart guide](./local.md) to quickly get set up.
+
+## What you will learn
+
+- How to install the Rust language locally
+- How to initialize a new Solana Rust program
+- How to code a basic Solana program in Rust
+- How to build and deploy your Rust program
+
+## Install Rust and Cargo
+
+To be able to compile Rust based Solana programs, install the Rust language and
+Cargo (the Rust package manager) using [Rustup](https://rustup.rs/):
+
+```bash
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+```
+
+## Run your localhost validator
+
+The Solana CLI comes with the [test validator](../developing/test-validator.md)
+built in. This command line tool will allow you to run a full blockchain cluster
+on your machine.
+
+```bash
+solana-test-validator
+```
+
+> **PRO TIP:** Run the Solana test validator in a new/separate terminal window
+> that will remain open. This command line program must remain running for your
+> localhost validator to remain online and ready for action.
+
+Configure your Solana CLI to use your localhost validator for all your future
+terminal commands and Solana program deployment:
+
+```bash
+solana config set --url localhost
+```
+
+## Create a new Rust library with Cargo
+
+Solana programs written in Rust are _libraries_ which are compiled to
+[BPF bytecode](../developing/on-chain-programs/faq.md#berkeley-packet-filter-bpf)
+and saved in the `.so` format.
+
+Initialize a new Rust library named `hello_world` via the Cargo command line:
+
+```bash
+cargo init hello_world --lib
+cd hello_world
+```
+
+Add the `solana-program` crate to your new Rust library:
+
+```bash
+cargo add solana-program
+```
+
+Open your `Cargo.toml` file and add these required Rust library configuration
+settings, updating your project name as appropriate:
+
+```toml
+[lib]
+name = "hello_world"
+crate-type = ["cdylib", "lib"]
+```
+
+## Create your first Solana program
+
+The code for your Rust based Solana program will live in your `src/lib.rs` file.
+Inside `src/lib.rs` you will be able to import your Rust crates and define your
+logic. Open your `src/lib.rs` file in your favorite editor.
+
+At the top of `lib.rs`, import the `solana-program` crate and bring our needed
+items into the local namespace:
+
+```rust
+use solana_program::{
+ account_info::AccountInfo,
+ entrypoint,
+ entrypoint::ProgramResult,
+ pubkey::Pubkey,
+ msg,
+};
+```
+
+Every Solana program must define an `entrypoint` that tells the Solana runtime
+where to start executing your on chain code. Your program's
+[entrypoint](../developing/on-chain-programs/developing-rust#program-entrypoint)
+should provide a public function named `process_instruction`:
+
+```rust
+// declare and export the program's entrypoint
+entrypoint!(process_instruction);
+
+// program entrypoint's implementation
+pub fn process_instruction(
+ program_id: &Pubkey,
+ accounts: &[AccountInfo],
+ instruction_data: &[u8]
+) -> ProgramResult {
+ // log a message to the blockchain
+ msg!("Hello, world!");
+
+ // gracefully exit the program
+ Ok(())
+}
+```
+
+Every on chain program should return the `Ok`
+[result enum](https://doc.rust-lang.org/std/result/) with a value of `()`. This
+tells the Solana runtime that your program executed successfully without errors.
+
+The program above will simply
+[log a message](../developing/on-chain-programs/debugging#logging) of "_Hello,
+world!_" to the blockchain cluster, then gracefully exit with `Ok(())`.
+
+## Build your Rust program
+
+Inside a terminal window, you can build your Solana Rust program by running in
+the root of your project (i.e. the directory with your `Cargo.toml` file):
+
+```bash
+cargo build-bpf
+```
+
+> **NOTE:** After each time you build your Solana program, the above command
+> will output the build path of your compiled program's `.so` file and the
+> default keyfile that will be used for the program's address. `cargo build-bpf`
+> installs the toolchain from the currently installed solana CLI tools. You may
+> need to upgrade those tools if you encounter any version incompatibilities.
+
+## Deploy your Solana program
+
+Using the Solana CLI, you can deploy your program to your currently selected
+cluster:
+
+```bash
+solana program deploy ./target/deploy/hello_world.so
+```
+
+Once your Solana program has been deployed (and the transaction
+[finalized](../cluster/commitments.md)), the above command will output your
+program's public address (aka its "program id").
+
+```bash
+# example output
+Program Id: EFH95fWg49vkFNbAdw9vy75tM7sWZ2hQbTTUmuACGip3
+```
+
+#### Congratulations!
+
+You have successfully setup, built, and deployed a Solana program using the Rust
+language.
+
+> PS: Check your Solana wallet's balance again after you deployed. See how much
+> SOL it cost to deploy your simple program?
+
+## Next steps
+
+See the links below to learn more about writing Rust based Solana programs:
+
+- [Overview of writing Solana programs](../developing/on-chain-programs/overview)
+- [Learn more about developing Solana programs with Rust](../developing/on-chain-programs/developing-Rust)
+- [Debugging on chain programs](../developing/on-chain-programs/debugging)
diff --git a/docs/history.md b/docs/history.md
new file mode 100644
index 000000000..a08c70f5d
--- /dev/null
+++ b/docs/history.md
@@ -0,0 +1,60 @@
+---
+title: History
+---
+
+In November of 2017, Anatoly Yakovenko published a whitepaper describing Proof
+of History, a technique for keeping time between computers that do not trust one
+another. From Anatoly's previous experience designing distributed systems at
+Qualcomm, Mesosphere and Dropbox, he knew that a reliable clock makes network
+synchronization very simple. When synchronization is simple the resulting
+network can be blazing fast, bound only by network bandwidth.
+
+Anatoly watched as blockchain systems without clocks, such as Bitcoin and
+Ethereum, struggled to scale beyond 15 transactions per second worldwide when
+centralized payment systems such as Visa required peaks of 65,000 tps. Without a
+clock, it was clear they'd never graduate to being the global payment system or
+global supercomputer most had dreamed them to be. When Anatoly solved the
+problem of getting computers that don’t trust each other to agree on time, he
+knew he had the key to bring 40 years of distributed systems research to the
+world of blockchain. The resulting cluster wouldn't be just 10 times faster, or
+a 100 times, or a 1,000 times, but 10,000 times faster, right out of the gate!
+
+Anatoly's implementation began in a private codebase and was implemented in the
+C programming language. Greg Fitzgerald, who had previously worked with Anatoly
+at semiconductor giant Qualcomm Incorporated, encouraged him to reimplement the
+project in the Rust programming language. Greg had worked on the LLVM compiler
+infrastructure, which underlies both the Clang C/C++ compiler as well as the
+Rust compiler. Greg claimed that the language's safety guarantees would improve
+software productivity and that its lack of a garbage collector would allow
+programs to perform as well as those written in C. Anatoly gave it a shot and
+just two weeks later, had migrated his entire codebase to Rust. Sold. With plans
+to weave all the world's transactions together on a single, scalable blockchain,
+Anatoly called the project Loom.
+
+On February 13th of 2018, Greg began prototyping the first open source
+implementation of Anatoly's whitepaper. The project was published to GitHub
+under the name Silk in the loomprotocol organization. On February 28th, Greg
+made his first release, demonstrating 10 thousand signed transactions could be
+verified and processed in just over half a second. Shortly after, another former
+Qualcomm cohort, Stephen Akridge, demonstrated throughput could be massively
+improved by offloading signature verification to graphics processors. Anatoly
+recruited Greg, Stephen and three others to co-found a company, then called
+Loom.
+
+Around the same time, Ethereum-based project Loom Network sprung up and many
+people were confused about whether they were the same project. The Loom team
+decided it would rebrand. They chose the name Solana, a nod to a small beach
+town North of San Diego called Solana Beach, where Anatoly, Greg and Stephen
+lived and surfed for three years when they worked for Qualcomm. On March 28th,
+the team created the Solana GitHub organization and renamed Greg's prototype
+Silk to Solana.
+
+In June of 2018, the team scaled up the technology to run on cloud-based
+networks and on July 19th, published a 50-node, permissioned, public testnet
+consistently supporting bursts of 250,000 transactions per second. In a later
+release in December, called v0.10 Pillbox, the team published a permissioned
+testnet running 150 nodes on a gigabit network and demonstrated soak tests
+processing an _average_ of 200 thousand transactions per second with bursts over
+500 thousand. The project was also extended to support on-chain programs written
+in the C programming language and run concurrently in a safe execution
+environment called SBF.
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 000000000..fc8949ec1
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,65 @@
+---
+title: Home
+sidebarLabel: Home
+description: "Solana is a high performance network that is utilized for a range
+ of use cases, \
+ including finance, NFTs, payments, and gaming."
+# displayed_sidebar: introductionSidebar
+---
+
+# Solana Documentation
+
+Solana is a blockchain built for mass adoption. It's a high performance network
+that is utilized for a range of use cases, including finance, NFTs, payments,
+and gaming. Solana operates as a single global state machine, and is open,
+interoperable and decentralized.
+
+## Getting started
+
+Dive right into Solana to start building or set up your tooling.
+
+- [Setup local environment](/cli) - Install the Solana CLI to get your local
+ development environment setup
+- [Hello World in your browser](getstarted/hello-world) - Build and deploy your
+ first on-chain Solana program, directly in your browser using Solana
+ Playground
+
+## Start learning
+
+Build a strong understanding of the core concepts that make Solana different
+from other blockchains.
+
+- [Transactions](./developing/programming-model/transactions) - Collection of
+ instructions for the blockchain to execute
+- [Accounts](./developing/programming-model/accounts) - Data and state storage
+ mechanism for Solana
+- [Programs](./developing/intro/programs) - The executable code used to perform
+ actions on the blockchain
+- [Cross-Program Invocation](./developing/programming-model/calling-between-programs) -
+ Core of the "composability" of Solana, this is how programs can "call" each
+ other.
+
+## Understanding the architecture
+
+Get to know the underlying architecture of how the proof-of-stake blockchain
+works.
+
+- [Validators](./validator/anatomy) - the individual nodes that are the backbone
+ of the network
+- [Clusters](./cluster/overview) - a collection of validators that work together
+ for consensus
+
+## Running a validator
+
+Explore what it takes to operate a Solana validator and help secure the network.
+
+- [System requirements](./running-validator/validator-reqs) - Recommended
+ hardware requirements and expected SOL needed to operate a validator
+- [Quick start guide](./validator/get-started/setup-a-validator) - Setup a
+ validator and get connected to a cluster for the first time
+
+## Learn more
+
+import HomeCtaLinks from "../components/HomeCtaLinks";
+
+<HomeCtaLinks />
diff --git a/docs/inflation/adjusted_staking_yield.md b/docs/inflation/adjusted_staking_yield.md
new file mode 100644
index 000000000..77103ddf2
--- /dev/null
+++ b/docs/inflation/adjusted_staking_yield.md
@@ -0,0 +1,170 @@
+---
+title: Adjusted Staking Yield
+---
+
+### Token Dilution
+
+Similarly we can look at the expected _Staked Dilution_ (i.e. _Adjusted Staking
+Yield_) and _Un-staked Dilution_ as previously defined. Again, _dilution_ in
+this context is defined as the change in fractional representation (i.e.
+ownership) of a set of tokens within a larger set. In this sense, dilution can
+be a positive value: an increase in fractional ownership (staked dilution /
+_Adjusted Staking Yield_), or a negative value: a decrease in fractional
+ownership (un-staked dilution).
+
+We are interested in the relative change in ownership of staked vs un-staked
+tokens as the overall token pool increases with inflation issuance. As
+discussed, this issuance is distributed only to staked token holders, increasing
+the staked token fractional representation of the _Total Current Supply_.
+
+Continuing with the same _Inflation Schedule_ parameters as above, we see the
+fraction of staked supply grow as shown below.
+
+![](/img/p_ex_staked_supply_w_range_initial_stake.png)
+
+Due to this relative change in representation, the proportion of stake of any
+token holder will also change as a function of the _Inflation Schedule_ and the
+proportion of all tokens that are staked.
+
+Of initial interest, however, is the _dilution of **un-staked** tokens_, or
+$D_{us}$. In the case of un-staked tokens, token dilution is only a function of
+the _Inflation Schedule_ because the amount of un-staked tokens doesn't change
+over time.
+
+This can be seen by explicitly calculating un-staked dilution as $D_{us}$. The
+un-staked proportion of the token pool at time $t$ is $P_{us}(t_{N})$ and
+$I_{t}$ is the incremental inflation rate applied between any two consecutive
+time points. $SOL_{us}(t)$ and $SOL_{total}(t)$ is the amount of un-staked and
+total SOL on the network, respectively, at time $t$. Therefore
+$P_{us}(t) = SOL_{us}(t)/SOL_{total}(t)$.
+
+$$
+\begin{aligned}
+  D_{us} &= \left( \frac{P_{us}(t_{2}) - P_{us}(t_{1})}{P_{us}(t_{1})} \right)\\
+  &= \left( \frac{ \left( \frac{SOL_{us}(t_{2})}{SOL_{total}(t_{2})} \right) - \left( \frac{SOL_{us}(t_{1})}{SOL_{total}(t_{1})} \right)}{ \left( \frac{SOL_{us}(t_{1})}{SOL_{total}(t_{1})} \right) } \right)\\
+\end{aligned}
+$$
+
+However, because inflation issuance only increases the total amount and the
+un-staked supply doesn't change:
+
+$$
+\begin{aligned}
+ SOL_{us}(t_2) &= SOL_{us}(t_1)\\
+ SOL_{total}(t_2) &= SOL_{total}(t_1)\times (1 + I_{t_1})\\
+\end{aligned}
+$$
+
+So $D_{us}$ becomes:
+
+$$
+\begin{aligned}
+  D_{us} &= \left( \frac{ \left( \frac{SOL_{us}(t_{1})}{SOL_{total}(t_{1})\times (1 + I_{t_1})} \right) - \left( \frac{SOL_{us}(t_{1})}{SOL_{total}(t_{1})} \right)}{ \left( \frac{SOL_{us}(t_{1})}{SOL_{total}(t_{1})} \right) } \right)\\
+  D_{us} &= \frac{1}{(1 + I_{t_1})} - 1\\
+\end{aligned}
+$$
+
+Or generally, dilution for un-staked tokens over any time frame undergoing
+inflation $I$:
+
+$$
+D_{us} = -\frac{I}{I + 1} \\
+$$
+
+So, as expected, this dilution is independent of the total proportion of staked
+tokens and depends only on the inflation rate. This can be seen with our example
+_Inflation Schedule_ here:
+
+![p_ex_unstaked_dilution](/img/p_ex_unstaked_dilution.png)
+
+### Estimated Adjusted Staked Yield
+
+We can do a similar calculation to determine the _dilution_ of staked token
+holders, or as we've defined here as the **_Adjusted Staked Yield_**, keeping in
+mind that dilution in this context is an _increase_ in proportional ownership
+over time. We'll use the terminology _Adjusted Staked Yield_ to avoid confusion
+going forward.
+
+To see the functional form, we calculate $Y_{adj}$, or the _Adjusted Staked
+Yield_ (to be compared to $D_{us}$, the dilution of un-staked tokens above),
+where $P_{s}(t)$ is the staked proportion of the token pool at time $t$ and $I_{t}$
+is the incremental inflation rate applied between any two consecutive time
+points. The definition of $Y_{adj}$ is therefore:
+
+$$
+ Y_{adj} = \frac{P_s(t_2) - P_s(t_1)}{P_s(t_1)}\\
+$$
+
+As seen in the plot above, the proportion of staked tokens increases with
+inflation issuance. Letting $SOL_s(t)$ and $SOL_{\text{total}}(t)$ represent the
+amount of staked and total SOL at time $t$ respectively:
+
+$$
+ P_s(t_2) = \frac{SOL_s(t_1) + SOL_{\text{total}}(t_1)\times I(t_1)}{SOL_{\text{total}}(t_1)\times (1 + I(t_1))}\\
+$$
+
+Where $SOL_{\text{total}}(t_1)\times I(t_1)$ is the additional inflation
+issuance added to the staked token pool. Now we can write $Y_{adj}$ in common
+terms $t_1 = t$:
+
+$$
+\begin{aligned}
+Y_{adj} &= \frac{\frac{SOL_s(t) + SOL_{\text{total}}(t)\times I(t)}{SOL_{\text{total}}(t)\times (1 + I(t))} - \frac{SOL_s(t)}{SOL_{\text{total}}(t)} }{ \frac{SOL_s(t)}{SOL_{\text{total}}(t)} } \\
+ &= \frac{ SOL_{\text{total}}(t)\times (SOL_s(t) + SOL_{\text{total}}(t)\times I(t)) }{ SOL_s(t)\times SOL_{\text{total}}(t)\times (1 + I(t)) } -1 \\
+\end{aligned}
+$$
+
+which simplifies to:
+
+$$
+Y_{adj} = \frac{ 1 + I(t)/P_s(t) }{ 1 + I(t) } - 1\\
+$$
+
+So we see that the _Adjusted Staked Yield_ is a function of the inflation rate
+and the percent of staked tokens on the network. We can see this plotted for
+various staking fractions here:
+
+![p_ex_adjusted_staked_yields](/img/p_ex_adjusted_staked_yields.png)
+
+It is also clear that in all cases, dilution of un-staked tokens $>$ adjusted
+staked yield (i.e. dilution of staked tokens). Explicitly we can look at the
+_relative dilution of un-staked tokens to staked tokens:_ $D_{us}/Y_{adj}$. Here
+the relationship to inflation drops out and the relative dilution, i.e. the
+impact of staking tokens vs not staking tokens, is purely a function of the % of
+the total token supply staked. From above
+
+$$
+\begin{aligned}
+Y_{adj} &= \frac{ 1 + I/P_s }{ 1 + I } - 1,~\text{and}\\
+D_{us} &= -\frac{I}{I + 1},~\text{so} \\
+\frac{D_{us}}{Y_{adj}} &= \frac{ \frac{I}{I + 1} }{ \frac{ 1 + I/P_s }{ 1 + I } - 1 } \\
+\end{aligned}
+$$
+
+which simplifies as,
+
+$$
+ \begin{aligned}
+ \frac{D_{us}}{Y_{adj}} &= \frac{ I }{ 1 + \frac{I}{P_s} - (1 + I)}\\
+ &= \frac{ I }{ \frac{I}{P_s} - I}\\
+ \frac{D_{us}}{Y_{adj}}&= \frac{ P_s }{ 1 - P_s}\\
+ \end{aligned}
+$$
+
+We can see that the relative dilution of un-staked tokens to staked tokens
+depends primarily on the proportion of total tokens staked. As shown above, the
+proportion of total tokens staked changes over time (i.e. $P_s = P_s(t)$) due to
+the re-staking of inflation issuance, and thus we see relative dilution grow
+over time as:
+
+![p_ex_relative_dilution](/img/p_ex_relative_dilution.png)
+
+As might be intuitive, as the total fraction of staked tokens increases the
+relative dilution of un-staked tokens grows dramatically. E.g. with $80\%$ of
+the network tokens staked, an un-staked token holder will experience ~$400\%$
+more dilution than a staked holder.
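+
+As a quick arithmetic check of this claim, take the example inflation rate of
+$I = 8\%$ with $P_s = 0.8$:
+
+$$
+\begin{aligned}
+D_{us} &= -\frac{0.08}{1.08} \approx -7.4\%\\
+Y_{adj} &= \frac{1 + 0.08/0.8}{1 + 0.08} - 1 \approx 1.9\%\\
+\frac{|D_{us}|}{Y_{adj}} &\approx \frac{0.8}{1 - 0.8} = 4\\
+\end{aligned}
+$$
+
+i.e. the un-staked holder is diluted roughly four times ($\sim 400\%$) more than
+the staked holder, independent of the particular inflation rate chosen.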
+
+Again, this represents the change in fractional ownership of staked tokens and
+illustrates the built-in incentive for token holders to stake their tokens to
+earn _Staked Yield_ and avoid _Un-staked Dilution_.
diff --git a/docs/inflation/inflation_schedule.md b/docs/inflation/inflation_schedule.md
new file mode 100644
index 000000000..1e4d97892
--- /dev/null
+++ b/docs/inflation/inflation_schedule.md
@@ -0,0 +1,84 @@
+---
+title: Solana's Proposed Inflation Schedule
+---
+
+As mentioned above, the network's _Inflation Schedule_ is uniquely described by
+three parameters: _Initial Inflation Rate_, _Disinflation Rate_ and _Long-term
+Inflation Rate_. When considering these numbers, there are many factors to take
+into account:
+
+- A large portion of the SOL issued via inflation will be distributed to
+ stake-holders in proportion to the SOL they have staked. We want to ensure
+ that the _Inflation Schedule_ design results in reasonable _Staking Yields_
+ for token holders who delegate SOL and for validation service providers (via
+ commissions taken from _Staking Yields_).
+- The primary driver of _Staked Yield_ is the amount of SOL staked divided by
+ the total amount of SOL (% of total SOL staked). Therefore the distribution
+ and delegation of tokens across validators are important factors to understand
+ when determining initial inflation parameters.
+- Yield throttling is a current area of research that would impact
+ _staking-yields_. This is not taken into consideration in the discussion here
+ or the modeling below.
+- Overall token issuance - i.e. what do we expect the Current Total Supply to be
+ in 10 years, or 20 years?
+- Long-term, steady-state inflation is an important consideration not only for
+ sustainable support for the validator ecosystem and the Solana Foundation
+ grant programs, but also should be tuned in consideration with expected token
+ losses and burning over time.
+- The rate at which we expect network usage to grow, as a consideration to the
+ disinflationary rate. Over time, we plan for inflation to drop and expect that
+ usage will grow.
+
+Based on these considerations and the community discussions following the
+initial design, the Solana Foundation proposes the following Inflation Schedule
+parameters:
+
+- Initial Inflation Rate: $8\%$
+- Disinflation Rate: $-15\%$
+- Long-term Inflation Rate: $1.5\%$
+
+These parameters define the proposed _Inflation Schedule_. Below we show
+implications of these parameters. These plots only show the impact of inflation
+issuances given the Inflation Schedule as parameterized above. They _do not
+account_ for other factors that may impact the Total Supply such as fee/rent
+burning, slashing or other unforeseen future token destruction events.
+Therefore, what is presented here is an **upper limit** on the amount of SOL
+issued via inflation.
+
+![](/img/p_inflation_schedule.png)
+
+In the above graph we see the annual inflation rate [$\%$] over time, given the
+inflation parameters proposed above.
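+
+As a rough illustration only, the annual rate implied by these three parameters
+can be sketched as below, assuming the disinflation compounds once per year;
+this is an illustration of the parameters above, not the protocol's exact
+implementation, and the constant and function names are ours:
+
+```ts
+// Sketch of the proposed Inflation Schedule under a yearly-compounding
+// assumption: start at 8%, decrease 15% per year, floor at 1.5%.
+const INITIAL_RATE = 0.08;
+const DISINFLATION = 0.15;
+const LONG_TERM_RATE = 0.015;
+
+function inflationRate(year: number): number {
+  const rate = INITIAL_RATE * Math.pow(1 - DISINFLATION, year);
+  return Math.max(rate, LONG_TERM_RATE);
+}
+
+for (let year = 0; year <= 12; year++) {
+  console.log(`year ${year}: ${(inflationRate(year) * 100).toFixed(2)}%`);
+}
+```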
+
+![](/img/p_total_supply.png)
+
+Similarly, here we see the _Total Current Supply_ of SOL [MM] over time,
+assuming an initial _Total Current Supply_ of `488,587,349 SOL` (i.e. for this
+example, taking the _Total Current Supply_ as of `2020-01-25` and simulating
+inflation starting from that day).
+
+Setting aside validator uptime and commissions, the expected Staking Yield and
+Adjusted Staking Yield metrics are then primarily a function of the % of total
+SOL staked on the network. Therefore we can model _Staking Yield_ if we
+introduce an additional parameter _% of Staked SOL_:
+
+$$
+\%~\text{SOL Staked} = \frac{\text{Total SOL Staked}}{\text{Total Current Supply}}
+$$
+
+This parameter must be estimated because it is a dynamic property of the token
+holders and staking incentives. The values of _% of Staked SOL_ presented here
+range from $60\% - 90\%$, which we feel covers the likely range we expect to
+observe, based on feedback from the investor and validator communities as well
+as what is observed on comparable Proof-of-Stake protocols.
+
+![](/img/p_ex_staked_yields.png)
+
+Again, the above shows an example _Staked Yield_ that a staker might expect over
+time on the Solana network with the _Inflation Schedule_ as specified. This is
+an idealized _Staked Yield_ as it neglects validator uptime impact on rewards,
+validator commissions, potential yield throttling and potential slashing
+incidents. It additionally ignores that _% of Staked SOL_ is dynamic by design -
+the economic incentives set up by this _Inflation Schedule_ are more clearly
+seen when _Token Dilution_ is taken into account (see the **Adjusted Staking
+Yield** section below).
diff --git a/docs/inflation/terminology.md b/docs/inflation/terminology.md
new file mode 100644
index 000000000..24ffb19dc
--- /dev/null
+++ b/docs/inflation/terminology.md
@@ -0,0 +1,111 @@
+---
+title: Terminology
+---
+
+Many terms are thrown around when discussing inflation and the related
+components (e.g. rewards/yield/interest). We try to define and clarify some
+commonly used concepts here:
+
+### Total Current Supply [SOL]
+
+The total amount of tokens (locked or unlocked) that have been generated (via
+genesis block or protocol inflation) minus any tokens that have been burnt (via
+transaction fees or other mechanism) or slashed. At network launch, 500,000,000
+SOL were instantiated in the genesis block. Since then the Total Current Supply
+has been reduced by the burning of transaction fees and a planned token
+reduction event. Solana’s _Total Current Supply_ can be found at
+https://explorer.solana.com/supply
+
+### Inflation Rate [%]
+
+The Solana protocol will automatically create new tokens on a predetermined
+inflation schedule (discussed below). The _Inflation Rate [%]_ is the annualized
+growth rate of the _Total Current Supply_ at any point in time.
+
+### Inflation Schedule
+
+A deterministic description of token issuance over time. The Solana Foundation
+is proposing a disinflationary _Inflation Schedule_. I.e. Inflation starts at
+its highest value, the rate reduces over time until stabilizing at a
+predetermined long-term inflation rate (see discussion below). This schedule is
+completely and uniquely parameterized by three numbers:
+
+- **Initial Inflation Rate [%]**: The starting _Inflation Rate_ for when
+ inflation is first enabled. Token issuance rate can only decrease from this
+ point.
+- **Disinflation Rate [%]**: The rate at which the _Inflation Rate_ is reduced.
+- **Long-term Inflation Rate [%]**: The stable, long-term _Inflation Rate_ to be
+ expected.
+
+### Effective Inflation Rate [%]
+
+The inflation rate actually observed on the Solana network after accounting for
+other factors that might decrease the _Total Current Supply_. Note that it is
+not possible for tokens to be created outside of what is described by the
+_Inflation Schedule_.
+
+- While the _Inflation Schedule_ determines how the protocol issues SOL, this
+ neglects the concurrent elimination of tokens in the ecosystem due to various
+ factors. The primary token burning mechanism is the burning of a portion of
+ each transaction fee. $50\%$ of each transaction fee is burned, with the
+ remaining fee retained by the validator that processes the transaction.
+- Additional factors such as loss of private keys and slashing events should
+ also be considered in a holistic analysis of the _Effective Inflation Rate_.
+ For example, it’s estimated that $10-20\%$ of all BTC have been lost and are
+ unrecoverable and that networks may experience similar yearly losses at the
+ rate of $1-2\%$.
+
+### Staking Yield [%]
+
+The rate of return (aka _interest_) earned on SOL staked on the network. It is
+often quoted as an annualized rate (e.g. "the network _staking yield_ is
+currently $10\%$ per year").
+
+- _Staking yield_ is of great interest to validators and token holders who wish
+ to delegate their tokens to avoid token dilution due to inflation (the extent
+ of which is discussed below).
+- $100\%$ of inflationary issuances are to be distributed to staked
+ token-holders in proportion to their staked SOL and to validators who charge a
+ commission on the rewards earned by their delegated SOL.
+  - There may be future consideration for an additional split of inflation
+    issuance with the introduction of _Archivers_ into the economy. _Archivers_
+    are network participants who provide a decentralized storage service and
+    should also be incentivized with token distribution from inflation issuances
+    for this service.
+  - Similarly, early designs specified a fixed percentage of inflationary
+    issuance to be delivered to the Foundation treasury for operational expenses
+    and future grants. However, inflation will be launching without any portion
+    allocated to the Foundation.
+- _Staking yield_ can be calculated from the _Inflation Schedule_ along with the
+ fraction of the _Total Current Supply_ that is staked at any given time. The
+ explicit relationship is given by:
+
+$$
+\begin{aligned}
+\text{Staking Yield} =~&\text{Inflation Rate}\times\text{Validator Uptime}~\times \\
+&\left( 1 - \text{Validator Fee} \right) \times \left( \frac{1}{\%~\text{SOL Staked}} \right) \\
+\text{where:}\\
+\%~\text{SOL Staked} &= \frac{\text{Total SOL Staked}}{\text{Total Current Supply}}
+\end{aligned}
+$$
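+
+For example (illustrative numbers only), with an inflation rate of $8\%$,
+perfect validator uptime, a $10\%$ validator fee, and $75\%$ of all SOL staked:
+
+$$
+\text{Staking Yield} = 0.08 \times 1.0 \times (1 - 0.10) \times \frac{1}{0.75} = 0.096 = 9.6\%
+$$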
+
+### Token Dilution [%]
+
+Dilution is defined here as the change in proportional representation of a set
+of tokens within a larger set due to the introduction of new tokens. In
+practical terms, we discuss the dilution of staked or un-staked tokens due to
+the introduction and distribution of inflation issuance across the network. As
+will be shown below, while dilution impacts every token holder, the _relative_
+dilution between staked and un-staked tokens should be the primary concern to
+un-staked token holders. Staking tokens, which will receive their proportional
+distribution of inflation issuance, should assuage any dilution concerns for
+staked token holders. I.e. dilution from 'inflation' is offset by the
+distribution of new tokens to staked token holders, nullifying the 'dilutive'
+effects of the inflation for that group.
+
+### Adjusted Staking Yield [%]
+
+A complete appraisal of earning potential from staking tokens should take into
+account staked _Token Dilution_ and its impact on the _Staking Yield_. For this,
+we define the _Adjusted Staking Yield_ as the change in fractional token supply
+ownership of staked tokens due to the distribution of inflation issuance. I.e.
+the positive dilutive effects of inflation.
diff --git a/docs/integrations/exchange.md b/docs/integrations/exchange.md
new file mode 100644
index 000000000..08f7016bf
--- /dev/null
+++ b/docs/integrations/exchange.md
@@ -0,0 +1,946 @@
+---
+title: Add Solana to Your Exchange
+---
+
+This guide describes how to add Solana's native token SOL to your cryptocurrency
+exchange.
+
+## Node Setup
+
+We highly recommend setting up at least two nodes on high-grade computers/cloud
+instances, upgrading to newer versions promptly, and keeping an eye on service
+operations with a bundled monitoring tool.
+
+This setup enables you:
+
+- to have a self-administered gateway to the Solana mainnet-beta cluster to get
+ data and submit withdrawal transactions
+- to have full control over how much historical block data is retained
+- to maintain your service availability even if one node fails
+
+Solana nodes demand relatively high computing power to handle our fast blocks
+and high TPS. For specific requirements, please see
+[hardware recommendations](../running-validator/validator-reqs.md).
+
+To run an api node:
+
+1. [Install the Solana command-line tool suite](../cli/install-solana-cli-tools.md)
+2. Start the validator with at least the following parameters:
+
+```bash
+solana-validator \
+  --ledger <LEDGER_PATH> \
+  --identity <VALIDATOR_IDENTITY_KEYPAIR> \
+  --entrypoint <CLUSTER_ENTRYPOINT> \
+  --expected-genesis-hash <EXPECTED_GENESIS_HASH> \
+  --rpc-port 8899 \
+  --no-voting \
+  --enable-rpc-transaction-history \
+  --limit-ledger-size \
+  --known-validator <VALIDATOR_ADDRESS> \
+  --only-known-rpc
+```
+
+Customize `--ledger` to your desired ledger storage location, and `--rpc-port`
+to the port you want to expose.
+
+The `--entrypoint` and `--expected-genesis-hash` parameters are all specific to
+the cluster you are joining.
+[Current parameters for Mainnet Beta](../clusters.md#example-solana-validator-command-line-2)
+
+The `--limit-ledger-size` parameter allows you to specify how many ledger
+[shreds](../terminology.md#shred) your node retains on disk. If you do not
+include this parameter, the validator will keep the entire ledger until it runs
+out of disk space. The default value attempts to keep the ledger disk usage
+under 500GB. More or less disk usage may be requested by adding an argument to
+`--limit-ledger-size` if desired. Check `solana-validator --help` for the
+default limit value used by `--limit-ledger-size`. More information about
+selecting a custom limit value is
+[available here](https://github.com/solana-labs/solana/blob/583cec922b6107e0f85c7e14cb5e642bc7dfb340/core/src/ledger_cleanup_service.rs#L15-L26).
+
+Specifying one or more `--known-validator` parameters can protect you from
+booting from a malicious snapshot.
+[More on the value of booting with known validators](../running-validator/validator-start.md#known-validators)
+
+Optional parameters to consider:
+
+- `--private-rpc` prevents your RPC port from being published for use by other
+ nodes
+- `--rpc-bind-address` allows you to specify a different IP address to bind the
+ RPC port
+
+### Automatic Restarts and Monitoring
+
+We recommend configuring each of your nodes to restart automatically on exit, to
+ensure you miss as little data as possible. Running the solana software as a
+systemd service is one great option.
+
+For monitoring, we provide
+[`solana-watchtower`](https://github.com/solana-labs/solana/blob/master/watchtower/README.md),
+which can monitor your validator and detect when the `solana-validator` process
+is unhealthy. It can be configured to alert you via Slack, Telegram, Discord, or
+Twilio. For details, run `solana-watchtower --help`.
+
+```bash
+solana-watchtower --validator-identity <YOUR VALIDATOR IDENTITY>
+```
+
+> You can find more information about the
+> [best practices for Solana Watchtower](../validator/best-practices/monitoring.md#solana-watchtower)
+> here in the docs.
+
+#### New Software Release Announcements
+
+We release new software frequently (around 1 release / week). Sometimes newer
+versions include incompatible protocol changes, which necessitate timely
+software updates to avoid errors in processing blocks.
+
+Our official release announcements for all kinds of releases (normal and
+security) are communicated via a [discord](https://solana.com/discord) channel
+called `#mb-announcement` (`mb` stands for `mainnet-beta`).
+
+As with staked validators, we expect exchange-operated validators to be updated
+at your earliest convenience, within a business day or two of a normal release
+announcement. For security-related releases, more urgent action may be needed.
+
+### Ledger Continuity
+
+By default, each of your nodes will boot from a snapshot provided by one of your
+known validators. This snapshot reflects the current state of the chain, but
+does not contain the complete historical ledger. If one of your nodes exits and
+boots from a new snapshot, there may be a gap in the ledger on that node. In
+order to prevent this issue, add the `--no-snapshot-fetch` parameter to your
+`solana-validator` command to receive historical ledger data instead of a
+snapshot.
+
+Do not pass the `--no-snapshot-fetch` parameter on your initial boot as it's not
+possible to boot the node all the way from the genesis block. Instead boot from
+a snapshot first and then add the `--no-snapshot-fetch` parameter for reboots.
+
+It is important to note that the amount of historical ledger available to your
+nodes from the rest of the network is limited at any point in time. Once
+operational, if your validators experience significant downtime they may not be
+able to catch up to the network and will need to download a new snapshot from a
+known validator. In doing so your validators will now have a gap in their
+historical ledger data that cannot be filled.
+
+### Minimizing Validator Port Exposure
+
+The validator requires that various UDP and TCP ports be open for inbound
+traffic from all other Solana validators. While this is the most efficient mode
+of operation, and is strongly recommended, it is possible to restrict the
+validator to only require inbound traffic from one other Solana validator.
+
+First add the `--restricted-repair-only-mode` argument. This will cause the
+validator to operate in a restricted mode where it will not receive pushes from
+the rest of the validators, and instead will need to continually poll other
+validators for blocks. The validator will only transmit UDP packets to other
+validators using the _Gossip_ and _ServeR_ ("serve repair") ports, and only
+receive UDP packets on its _Gossip_ and _Repair_ ports.
+
+The _Gossip_ port is bi-directional and allows your validator to remain in
+contact with the rest of the cluster. Your validator transmits on the _ServeR_
+port to make repair requests to obtain new blocks from the rest of the network,
+since Turbine is now disabled. Your validator will then receive repair responses
+on the _Repair_ port from other validators.
+
+To further restrict the validator to only requesting blocks from one or more
+validators, first determine the identity pubkey for that validator and add the
+`--gossip-pull-validator PUBKEY --repair-validator PUBKEY` arguments for each
+PUBKEY. This will cause your validator to be a resource drain on each validator
+that you add, so please do this sparingly and only after consulting with the
+target validator.
+
+Your validator should now only be communicating with the explicitly listed
+validators and only on the _Gossip_, _Repair_ and _ServeR_ ports.
+
+## Setting up Deposit Accounts
+
+Solana accounts do not require any on-chain initialization; once they contain
+some SOL, they exist. To set up a deposit account for your exchange, simply
+generate a Solana keypair using any of our
+[wallet tools](../wallet-guide/cli.md).
+
+We recommend using a unique deposit account for each of your users.
+
+Solana accounts must be made rent-exempt by containing two years' worth of
+[rent](developing/programming-model/accounts.md#rent) in SOL. In order to find
+the minimum rent-exempt balance for your deposit accounts, query the
+[`getMinimumBalanceForRentExemption` endpoint](../api/http#getminimumbalanceforrentexemption):
+
+```bash
+curl localhost:8899 -X POST -H "Content-Type: application/json" -d '{
+ "jsonrpc": "2.0",
+ "id": 1,
+ "method": "getMinimumBalanceForRentExemption",
+ "params":[0]
+}'
+
+# Result
+{"jsonrpc":"2.0","result":890880,"id":1}
+```
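+
+The same values can also be obtained programmatically. Below is a minimal
+sketch using `@solana/web3.js` that generates a deposit keypair and queries the
+rent-exempt minimum; the RPC URL is a placeholder and key storage is up to your
+own infrastructure:
+
+```ts
+import { Connection, Keypair } from "@solana/web3.js";
+
+(async () => {
+  const connection = new Connection("https://api.devnet.solana.com", "confirmed");
+
+  // Generate a fresh deposit keypair for a user; persist the secret key securely.
+  const depositAccount = Keypair.generate();
+  console.log("deposit address:", depositAccount.publicKey.toBase58());
+
+  // Minimum balance (in lamports) for a 0-byte account to be rent exempt.
+  const minBalance = await connection.getMinimumBalanceForRentExemption(0);
+  console.log("rent-exempt minimum:", minBalance, "lamports");
+})();
+```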
+
+### Offline Accounts
+
+You may wish to keep the keys for one or more collection accounts offline for
+greater security. If so, you will need to move SOL to hot accounts using our
+[offline methods](../offline-signing.md).
+
+## Listening for Deposits
+
+When a user wants to deposit SOL into your exchange, instruct them to send a
+transfer to the appropriate deposit address.
+
+### Versioned Transaction Migration
+
+When the Mainnet Beta network starts processing versioned transactions,
+exchanges **MUST** make changes. If no changes are made, deposit detection will
+no longer work properly because fetching a versioned transaction or a block
+containing versioned transactions will return an error.
+
+- `{"maxSupportedTransactionVersion": 0}`
+
+ The `maxSupportedTransactionVersion` parameter must be added to `getBlock` and
+ `getTransaction` requests to avoid disruption to deposit detection. The latest
+ transaction version is `0` and should be specified as the max supported
+ transaction version value.
+
+It's important to understand that versioned transactions allow users to create
+transactions that use another set of account keys loaded from on-chain address
+lookup tables.
+
+- `{"encoding": "jsonParsed"}`
+
+ When fetching blocks and transactions, it's now recommended to use the
+ `"jsonParsed"` encoding because it includes all transaction account keys
+ (including those from lookup tables) in the message `"accountKeys"` list. This
+ makes it straightforward to resolve balance changes detailed in `preBalances`
+ / `postBalances` and `preTokenBalances` / `postTokenBalances`.
+
+ If the `"json"` encoding is used instead, entries in `preBalances` /
+ `postBalances` and `preTokenBalances` / `postTokenBalances` may refer to
+ account keys that are **NOT** in the `"accountKeys"` list and need to be
+ resolved using `"loadedAddresses"` entries in the transaction metadata.
+
+### Poll for Blocks
+
+To track all the deposit accounts for your exchange, poll for each confirmed
+block and inspect for addresses of interest, using the JSON-RPC service of your
+Solana API node.
+
+- To identify which blocks are available, send a
+ [`getBlocks`](../api/http#getblocks) request, passing the last block you have
+ already processed as the start-slot parameter:
+
+```bash
+curl https://api.devnet.solana.com -X POST -H "Content-Type: application/json" -d '{
+ "jsonrpc": "2.0",
+ "id": 1,
+ "method": "getBlocks",
+ "params": [160017005, 160017015]
+}'
+
+# Result
+{"jsonrpc":"2.0","result":[160017005,160017006,160017007,160017012,160017013,160017014,160017015],"id":1}
+```
+
+Not every slot produces a block, so there may be gaps in the sequence of
+integers.
+
+- For each block, request its contents with a [`getBlock`](../api/http#getblock)
+ request:
+
+### Block Fetching Tips
+
+- `{"rewards": false}`
+
+By default, fetched blocks will return information about validator fees on each
+block and staking rewards on epoch boundaries. If you don't need this
+information, disable it with the "rewards" parameter.
+
+- `{"transactionDetails": "accounts"}`
+
+By default, fetched blocks will return a lot of transaction info and metadata
+that isn't necessary for tracking account balances. Set the "transactionDetails"
+parameter to speed up block fetching.
+
+```bash
+curl https://api.devnet.solana.com -X POST -H 'Content-Type: application/json' -d '{
+ "jsonrpc": "2.0",
+ "id": 1,
+ "method": "getBlock",
+ "params": [
+ 166974442,
+ {
+ "encoding": "jsonParsed",
+ "maxSupportedTransactionVersion": 0,
+ "transactionDetails": "accounts",
+ "rewards": false
+ }
+ ]
+}'
+
+# Result
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "blockHeight": 157201607,
+ "blockTime": 1665070281,
+ "blockhash": "HKhao674uvFc4wMK1Cm3UyuuGbKExdgPFjXQ5xtvsG3o",
+ "parentSlot": 166974441,
+ "previousBlockhash": "98CNLU4rsYa2HDUyp7PubU4DhwYJJhSX9v6pvE7SWsAo",
+ "transactions": [
+ ... (omit)
+ {
+ "meta": {
+ "err": null,
+ "fee": 5000,
+ "postBalances": [
+ 1110663066,
+ 1,
+ 1040000000
+ ],
+ "postTokenBalances": [],
+ "preBalances": [
+ 1120668066,
+ 1,
+ 1030000000
+ ],
+ "preTokenBalances": [],
+ "status": {
+ "Ok": null
+ }
+ },
+ "transaction": {
+ "accountKeys": [
+ {
+ "pubkey": "9aE476sH92Vz7DMPyq5WLPkrKWivxeuTKEFKd2sZZcde",
+ "signer": true,
+ "source": "transaction",
+ "writable": true
+ },
+ {
+ "pubkey": "11111111111111111111111111111111",
+ "signer": false,
+ "source": "transaction",
+ "writable": false
+ },
+ {
+ "pubkey": "G1wZ113tiUHdSpQEBcid8n1x8BAvcWZoZgxPKxgE5B7o",
+ "signer": false,
+ "source": "lookupTable",
+ "writable": true
+ }
+ ],
+ "signatures": [
+ "2CxNRsyRT7y88GBwvAB3hRg8wijMSZh3VNYXAdUesGSyvbRJbRR2q9G1KSEpQENmXHmmMLHiXumw4dp8CvzQMjrM"
+ ]
+ },
+ "version": 0
+ },
+ ... (omit)
+ ]
+ },
+ "id": 1
+}
+```
+
+The `preBalances` and `postBalances` fields allow you to track the balance
+changes in every account without having to parse the entire transaction. They
+list the starting and ending balances of each account in
+[lamports](../terminology.md#lamport), indexed to the `accountKeys` list. For
+example, if the deposit address of interest is
+`G1wZ113tiUHdSpQEBcid8n1x8BAvcWZoZgxPKxgE5B7o`, this transaction represents a
+transfer of 1040000000 - 1030000000 = 10,000,000 lamports = 0.01 SOL
+
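+Tying this together, here is a minimal sketch (TypeScript, using the built-in
+`fetch`) that requests a block with the options shown above and scans
+`preBalances` / `postBalances` for deposits to a hypothetical set of watched
+addresses:
+
+```ts
+// Hypothetical set of exchange deposit addresses to watch.
+const DEPOSIT_ADDRESSES = new Set(["G1wZ113tiUHdSpQEBcid8n1x8BAvcWZoZgxPKxgE5B7o"]);
+
+async function scanBlockForDeposits(slot: number): Promise<void> {
+  const response = await fetch("https://api.devnet.solana.com", {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({
+      jsonrpc: "2.0",
+      id: 1,
+      method: "getBlock",
+      params: [
+        slot,
+        {
+          encoding: "jsonParsed",
+          maxSupportedTransactionVersion: 0,
+          transactionDetails: "accounts",
+          rewards: false,
+        },
+      ],
+    }),
+  });
+  const { result } = await response.json();
+  if (!result) return; // skipped slot or block not yet available
+
+  for (const tx of result.transactions) {
+    if (tx.meta.err !== null) continue; // ignore failed transactions
+    tx.transaction.accountKeys.forEach((key: { pubkey: string }, index: number) => {
+      if (!DEPOSIT_ADDRESSES.has(key.pubkey)) return;
+      const delta = tx.meta.postBalances[index] - tx.meta.preBalances[index];
+      if (delta > 0) {
+        console.log(`deposit of ${delta} lamports to ${key.pubkey} in slot ${slot}`);
+      }
+    });
+  }
+}
+```
+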
+If you need more information about the transaction type or other specifics, you
+can request the block from RPC in binary format, and parse it using either our
+[Rust SDK](https://github.com/solana-labs/solana) or
+[Javascript SDK](https://github.com/solana-labs/solana-web3.js).
+
+### Address History
+
+You can also query the transaction history of a specific address. This is
+generally _not_ a viable method for tracking all your deposit addresses over all
+slots, but may be useful for examining a few accounts for a specific period of
+time.
+
+- Send a [`getSignaturesForAddress`](../api/http#getsignaturesforaddress)
+ request to the api node:
+
+```bash
+curl localhost:8899 -X POST -H "Content-Type: application/json" -d '{
+ "jsonrpc": "2.0",
+ "id": 1,
+ "method": "getSignaturesForAddress",
+ "params": [
+ "3M2b3tLji7rvscqrLAHMukYxDK2nB96Q9hwfV6QkdzBN",
+ {
+ "limit": 3
+ }
+ ]
+}'
+
+# Result
+{
+ "jsonrpc": "2.0",
+ "result": [
+ {
+ "blockTime": 1662064640,
+ "confirmationStatus": "finalized",
+ "err": null,
+ "memo": null,
+ "signature": "3EDRvnD5TbbMS2mCusop6oyHLD8CgnjncaYQd5RXpgnjYUXRCYwiNPmXb6ZG5KdTK4zAaygEhfdLoP7TDzwKBVQp",
+ "slot": 148697216
+ },
+ {
+ "blockTime": 1662064434,
+ "confirmationStatus": "finalized",
+ "err": null,
+ "memo": null,
+ "signature": "4rPQ5wthgSP1kLdLqcRgQnkYkPAZqjv5vm59LijrQDSKuL2HLmZHoHjdSLDXXWFwWdaKXUuryRBGwEvSxn3TQckY",
+ "slot": 148696843
+ },
+ {
+ "blockTime": 1662064341,
+ "confirmationStatus": "finalized",
+ "err": null,
+ "memo": null,
+ "signature": "36Q383JMiqiobuPV9qBqy41xjMsVnQBm9rdZSdpbrLTGhSQDTGZJnocM4TQTVfUGfV2vEX9ZB3sex6wUBUWzjEvs",
+ "slot": 148696677
+ }
+ ],
+ "id": 1
+}
+```
+
+- For each signature returned, get the transaction details by sending a
+ [`getTransaction`](../api/http#gettransaction) request:
+
+```bash
+curl https://api.devnet.solana.com -X POST -H 'Content-Type: application/json' -d '{
+ "jsonrpc":"2.0",
+ "id":1,
+ "method":"getTransaction",
+ "params":[
+ "2CxNRsyRT7y88GBwvAB3hRg8wijMSZh3VNYXAdUesGSyvbRJbRR2q9G1KSEpQENmXHmmMLHiXumw4dp8CvzQMjrM",
+ {
+ "encoding":"jsonParsed",
+ "maxSupportedTransactionVersion":0
+ }
+ ]
+}'
+
+# Result
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "blockTime": 1665070281,
+ "meta": {
+ "err": null,
+ "fee": 5000,
+ "innerInstructions": [],
+ "logMessages": [
+ "Program 11111111111111111111111111111111 invoke [1]",
+ "Program 11111111111111111111111111111111 success"
+ ],
+ "postBalances": [
+ 1110663066,
+ 1,
+ 1040000000
+ ],
+ "postTokenBalances": [],
+ "preBalances": [
+ 1120668066,
+ 1,
+ 1030000000
+ ],
+ "preTokenBalances": [],
+ "rewards": [],
+ "status": {
+ "Ok": null
+ }
+ },
+ "slot": 166974442,
+ "transaction": {
+ "message": {
+ "accountKeys": [
+ {
+ "pubkey": "9aE476sH92Vz7DMPyq5WLPkrKWivxeuTKEFKd2sZZcde",
+ "signer": true,
+ "source": "transaction",
+ "writable": true
+ },
+ {
+ "pubkey": "11111111111111111111111111111111",
+ "signer": false,
+ "source": "transaction",
+ "writable": false
+ },
+ {
+ "pubkey": "G1wZ113tiUHdSpQEBcid8n1x8BAvcWZoZgxPKxgE5B7o",
+ "signer": false,
+ "source": "lookupTable",
+ "writable": true
+ }
+ ],
+ "addressTableLookups": [
+ {
+ "accountKey": "4syr5pBaboZy4cZyF6sys82uGD7jEvoAP2ZMaoich4fZ",
+ "readonlyIndexes": [],
+ "writableIndexes": [
+ 3
+ ]
+ }
+ ],
+ "instructions": [
+ {
+ "parsed": {
+ "info": {
+ "destination": "G1wZ113tiUHdSpQEBcid8n1x8BAvcWZoZgxPKxgE5B7o",
+ "lamports": 10000000,
+ "source": "9aE476sH92Vz7DMPyq5WLPkrKWivxeuTKEFKd2sZZcde"
+ },
+ "type": "transfer"
+ },
+ "program": "system",
+ "programId": "11111111111111111111111111111111"
+ }
+ ],
+ "recentBlockhash": "BhhivDNgoy4L5tLtHb1s3TP19uUXqKiy4FfUR34d93eT"
+ },
+ "signatures": [
+ "2CxNRsyRT7y88GBwvAB3hRg8wijMSZh3VNYXAdUesGSyvbRJbRR2q9G1KSEpQENmXHmmMLHiXumw4dp8CvzQMjrM"
+ ]
+ },
+ "version": 0
+ },
+ "id": 1
+}
+```
+
+## Sending Withdrawals
+
+To accommodate a user's request to withdraw SOL, you must generate a Solana
+transfer transaction, and send it to the api node to be forwarded to your
+cluster.
+
+### Synchronous
+
+Sending a synchronous transfer to the Solana cluster allows you to easily ensure
+that a transfer is successful and finalized by the cluster.
+
+Solana's command-line tool offers a simple command, `solana transfer`, to
+generate, submit, and confirm transfer transactions. By default, this method
+will wait and track progress on stderr until the transaction has been finalized
+by the cluster. If the transaction fails, it will report any transaction errors.
+
+```bash
+solana transfer <USER_ADDRESS> <AMOUNT> --allow-unfunded-recipient --keypair <KEYPAIR> --url http://localhost:8899
+```
+
+The [Solana Javascript SDK](https://github.com/solana-labs/solana-web3.js)
+offers a similar approach for the JS ecosystem. Use the `SystemProgram` to build
+a transfer transaction, and submit it using the `sendAndConfirmTransaction`
+method.
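+
+A minimal sketch of that approach (the keypair and recipient address here are
+placeholders):
+
+```ts
+import {
+  Connection,
+  Keypair,
+  LAMPORTS_PER_SOL,
+  PublicKey,
+  SystemProgram,
+  Transaction,
+  sendAndConfirmTransaction,
+} from "@solana/web3.js";
+
+(async () => {
+  const connection = new Connection("http://localhost:8899", "confirmed");
+
+  // Placeholder signer; in practice, load your withdrawal authority keypair.
+  const withdrawalAuthority = Keypair.generate();
+  const recipient = new PublicKey("G1wZ113tiUHdSpQEBcid8n1x8BAvcWZoZgxPKxgE5B7o");
+
+  const tx = new Transaction().add(
+    SystemProgram.transfer({
+      fromPubkey: withdrawalAuthority.publicKey,
+      toPubkey: recipient,
+      lamports: 0.01 * LAMPORTS_PER_SOL,
+    }),
+  );
+
+  // Signs, submits, and waits for confirmation at the connection's commitment.
+  const signature = await sendAndConfirmTransaction(connection, tx, [withdrawalAuthority]);
+  console.log("withdrawal confirmed:", signature);
+})();
+```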
+
+### Asynchronous
+
+For greater flexibility, you can submit withdrawal transfers asynchronously. In
+these cases, it is your responsibility to verify that the transaction succeeded
+and was finalized by the cluster.
+
+**Note:** Each transaction contains a
+[recent blockhash](developing/programming-model/transactions.md#blockhash-format)
+to indicate its liveness. It is **critical** to wait until this blockhash
+expires before retrying a withdrawal transfer that does not appear to have been
+confirmed or finalized by the cluster. Otherwise, you risk a double spend. See
+more on [blockhash expiration](#blockhash-expiration) below.
+
+First, get a recent blockhash using the [`getFees`](../api/http#getfees)
+endpoint or the CLI command:
+
+```bash
+solana fees --url http://localhost:8899
+```
+
+In the command-line tool, pass the `--no-wait` argument to send a transfer
+asynchronously, and include your recent blockhash with the `--blockhash`
+argument:
+
+```bash
+solana transfer <USER_ADDRESS> <AMOUNT> --no-wait --allow-unfunded-recipient --blockhash <RECENT_BLOCKHASH> --keypair <KEYPAIR> --url http://localhost:8899
+```
+
+You can also build, sign, and serialize the transaction manually, and fire it
+off to the cluster using the JSON-RPC
+[`sendTransaction`](../api/http#sendtransaction) endpoint.
+
+#### Transaction Confirmations & Finality
+
+Get the status of a batch of transactions using the
+[`getSignatureStatuses`](../api/http#getsignaturestatuses) JSON-RPC endpoint.
+The `confirmations` field reports how many
+[confirmed blocks](../terminology.md#confirmed-block) have elapsed since the
+transaction was processed. If `confirmations: null`, it is
+[finalized](../terminology.md#finality).
+
+```bash
+curl localhost:8899 -X POST -H "Content-Type: application/json" -d '{
+ "jsonrpc":"2.0",
+ "id":1,
+ "method":"getSignatureStatuses",
+ "params":[
+ [
+ "5VERv8NMvzbJMEkV8xnrLkEaWRtSz9CosKDYjCJjBRnbJLgp8uirBgmQpjKhoR4tjF3ZpRzrFmBV6UjKdiSZkQUW",
+ "5j7s6NiJS3JAkvgkoc18WVAsiSaci2pxB2A6ueCJP4tprA2TFg9wSyTLeYouxPBJEMzJinENTkpA52YStRW5Dia7"
+ ]
+ ]
+}'
+
+# Result
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "context": {
+ "slot": 82
+ },
+ "value": [
+ {
+ "slot": 72,
+ "confirmations": 10,
+ "err": null,
+ "status": {
+ "Ok": null
+ }
+ },
+ {
+ "slot": 48,
+ "confirmations": null,
+ "err": null,
+ "status": {
+ "Ok": null
+ }
+ }
+ ]
+ },
+ "id": 1
+}
+```
+
+#### Blockhash Expiration
+
+You can check whether a particular blockhash is still valid by sending a
+[`getFeeCalculatorForBlockhash`](../api/http#getfeecalculatorforblockhash)
+request with the blockhash as a parameter. If the response value is `null`, the
+blockhash is expired, and the withdrawal transaction using that blockhash should
+never succeed.
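+
+A minimal sketch of this check using the built-in `fetch` (the localhost URL is
+a placeholder for your own RPC node):
+
+```ts
+async function isBlockhashStillValid(blockhash: string): Promise<boolean> {
+  const response = await fetch("http://localhost:8899", {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({
+      jsonrpc: "2.0",
+      id: 1,
+      method: "getFeeCalculatorForBlockhash",
+      params: [blockhash],
+    }),
+  });
+  const { result } = await response.json();
+  // A null value means the blockhash has expired.
+  return result.value !== null;
+}
+```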
+
+### Validating User-supplied Account Addresses for Withdrawals
+
+As withdrawals are irreversible, it may be a good practice to validate a
+user-supplied account address before authorizing a withdrawal in order to
+prevent accidental loss of user funds.
+
+#### Basic verification
+
+A Solana address is a 32-byte array, encoded with the bitcoin base58 alphabet.
+This results in an ASCII text string matching the following regular expression:
+
+```
+[1-9A-HJ-NP-Za-km-z]{32,44}
+```
+
+This check is insufficient on its own as Solana addresses are not checksummed,
+so typos cannot be detected. To further validate the user's input, the string
+can be decoded and the resulting byte array's length confirmed to be 32.
+However, some addresses can still decode to 32 bytes despite a typo such as a
+single missing character, reversed characters, or ignored case.
+
+#### Advanced verification
+
+Due to the vulnerability to typos described above, it is recommended that the
+balance be queried for candidate withdraw addresses and the user prompted to
+confirm their intentions if a non-zero balance is discovered.
+
+#### Valid ed25519 pubkey check
+
+The address of a normal account in Solana is a Base58-encoded string of a
+256-bit ed25519 public key. Not all bit patterns are valid public keys for the
+ed25519 curve, so it is possible to ensure user-supplied account addresses are
+at least correct ed25519 public keys.
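+
+For JavaScript/TypeScript integrations, `@solana/web3.js` exposes a curve check
+that can be used for this purpose; a minimal sketch:
+
+```ts
+import { PublicKey } from "@solana/web3.js";
+
+// Returns true only if the string is valid base58, decodes to 32 bytes, and
+// the resulting point lies on the ed25519 curve (i.e. a plausible wallet key).
+function isValidEd25519Pubkey(address: string): boolean {
+  try {
+    const pubkey = new PublicKey(address); // throws on bad base58 / wrong length
+    return PublicKey.isOnCurve(pubkey.toBytes());
+  } catch {
+    return false;
+  }
+}
+```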
+
+#### Java
+
+Here is a Java example of validating a user-supplied address as a valid ed25519
+public key:
+
+The following code sample assumes you're using Maven.
+
+`pom.xml`:
+
+```xml
+<repositories>
+  ...
+  <repository>
+    <id>spring</id>
+    <url>https://repo.spring.io/libs-release/</url>
+  </repository>
+</repositories>
+
+...
+
+<dependencies>
+  ...
+  <dependency>
+    <groupId>io.github.novacrypto</groupId>
+    <artifactId>Base58</artifactId>
+    <version>0.1.3</version>
+  </dependency>
+  <dependency>
+    <groupId>cafe.cryptography</groupId>
+    <artifactId>curve25519-elisabeth</artifactId>
+    <version>0.1.0</version>
+  </dependency>
+</dependencies>
+```
+
+```java
+import io.github.novacrypto.base58.Base58;
+import cafe.cryptography.curve25519.CompressedEdwardsY;
+
+public class PubkeyValidator
+{
+ public static boolean verifyPubkey(String userProvidedPubkey)
+ {
+ try {
+ return _verifyPubkeyInternal(userProvidedPubkey);
+ } catch (Exception e) {
+ return false;
+ }
+ }
+
+ public static boolean _verifyPubkeyInternal(String maybePubkey) throws Exception
+ {
+ byte[] bytes = Base58.base58Decode(maybePubkey);
+ return !(new CompressedEdwardsY(bytes)).decompress().isSmallOrder();
+ }
+}
+```
+
+## Minimum Deposit & Withdrawal Amounts
+
+Every deposit and withdrawal of SOL must be greater than or equal to the
+minimum rent-exempt balance for the account at the wallet address (a basic SOL
+account holding no data), currently: 0.000890880 SOL
+
+Similarly, every deposit account must contain at least this balance.
+
+```bash
+curl localhost:8899 -X POST -H "Content-Type: application/json" -d '{
+ "jsonrpc": "2.0",
+ "id": 1,
+ "method": "getMinimumBalanceForRentExemption",
+ "params": [0]
+}'
+
+# Result
+{"jsonrpc":"2.0","result":890880,"id":1}
+```
+
+## Supporting the SPL Token Standard
+
+[SPL Token](https://spl.solana.com/token) is the standard for wrapped/synthetic
+token creation and exchange on the Solana blockchain.
+
+The SPL Token workflow is similar to that of native SOL tokens, but there are a
+few differences which will be discussed in this section.
+
+### Token Mints
+
+Each _type_ of SPL Token is declared by creating a _mint_ account. This account
+stores metadata describing token features like the supply, number of decimals,
+and various authorities with control over the mint. Each SPL Token account
+references its associated mint and may only interact with SPL Tokens of that
+type.
+
+### Installing the `spl-token` CLI Tool
+
+SPL Token accounts are queried and modified using the `spl-token` command line
+utility. The examples provided in this section depend upon having it installed
+on the local system.
+
+`spl-token` is distributed from Rust
+[crates.io](https://crates.io/crates/spl-token) via the Rust `cargo` command
+line utility. The latest version of `cargo` can be installed using a handy
+one-liner for your platform at [rustup.rs](https://rustup.rs). Once `cargo` is
+installed, `spl-token` can be obtained with the following command:
+
+```
+cargo install spl-token-cli
+```
+
+You can then check the installed version to verify
+
+```
+spl-token --version
+```
+
+Which should result in something like
+
+```text
+spl-token-cli 2.0.1
+```
+
+### Account Creation
+
+SPL Token accounts carry additional requirements that native System Program
+accounts do not:
+
+1. SPL Token accounts must be created before an amount of tokens can be
+ deposited. Token accounts can be created explicitly with the
+ `spl-token create-account` command, or implicitly by the
+ `spl-token transfer --fund-recipient ...` command.
+1. SPL Token accounts must remain
+ [rent-exempt](developing/programming-model/accounts.md#rent-exemption) for
+ the duration of their existence and therefore require a small amount of
+ native SOL tokens be deposited at account creation. For SPL Token v2
+ accounts, this amount is 0.00203928 SOL (2,039,280 lamports).
+
+#### Command Line
+
+To create an SPL Token account with the following properties:
+
+1. Associated with the given mint
+1. Owned by the funding account's keypair
+
+```
+spl-token create-account <TOKEN_MINT_ADDRESS>
+```
+
+#### Example
+
+```
+$ spl-token create-account AkUFCWTXb3w9nY2n6SFJvBV6VwvFUCe4KBMCcgLsa2ir
+Creating account 6VzWGL51jLebvnDifvcuEDec17sK6Wupi4gYhm5RzfkV
+Signature: 4JsqZEPra2eDTHtHpB4FMWSfk3UgcCVmkKkP7zESZeMrKmFFkDkNd91pKP3vPVVZZPiu5XxyJwS73Vi5WsZL88D7
+```
+
+Or to create an SPL Token account with a specific keypair:
+
+```
+$ solana-keygen new -o token-account.json
+$ spl-token create-account AkUFCWTXb3w9nY2n6SFJvBV6VwvFUCe4KBMCcgLsa2ir token-account.json
+Creating account 6VzWGL51jLebvnDifvcuEDec17sK6Wupi4gYhm5RzfkV
+Signature: 4JsqZEPra2eDTHtHpB4FMWSfk3UgcCVmkKkP7zESZeMrKmFFkDkNd91pKP3vPVVZZPiu5XxyJwS73Vi5WsZL88D7
+```
+
+### Checking an Account's Balance
+
+#### Command Line
+
+```
+spl-token balance <TOKEN_ACCOUNT_ADDRESS>
+```
+
+#### Example
+
+```
+$ spl-token balance 6VzWGL51jLebvnDifvcuEDec17sK6Wupi4gYhm5RzfkV
+0
+```
+
+### Token Transfers
+
+The source account for a transfer is the actual token account that contains the
+amount.
+
+The recipient address however can be a normal wallet account. If an associated
+token account for the given mint does not yet exist for that wallet, the
+transfer will create it provided that the `--fund-recipient` argument is
+provided.
+
+#### Command Line
+
+```
+spl-token transfer <SENDER_TOKEN_ACCOUNT_ADDRESS> <AMOUNT> <RECIPIENT_WALLET_ADDRESS> --fund-recipient
+```
+
+#### Example
+
+```
+$ spl-token transfer 6B199xxzw3PkAm25hGJpjj3Wj3WNYNHzDAnt1tEqg5BN 1 6VzWGL51jLebvnDifvcuEDec17sK6Wupi4gYhm5RzfkV
+Transfer 1 tokens
+ Sender: 6B199xxzw3PkAm25hGJpjj3Wj3WNYNHzDAnt1tEqg5BN
+ Recipient: 6VzWGL51jLebvnDifvcuEDec17sK6Wupi4gYhm5RzfkV
+Signature: 3R6tsog17QM8KfzbcbdP4aoMfwgo6hBggJDVy7dZPVmH2xbCWjEj31JKD53NzMrf25ChFjY7Uv2dfCDq4mGFFyAj
+```
+
+### Depositing
+
+Since each `(wallet, mint)` pair requires a separate account on chain, it is
+recommended that the addresses for these accounts be derived from SOL deposit
+wallets using the
+[Associated Token Account](https://spl.solana.com/associated-token-account)
+(ATA) scheme and that _only_ deposits from ATA addresses be accepted.
+
+Monitoring for deposit transactions should follow the
+[block polling](#poll-for-blocks) method described above. Each new block should
+be scanned for successful transactions referencing user token-account derived
+addresses. The `preTokenBalance` and `postTokenBalance` fields from the
+transaction's metadata must then be used to determine the effective balance
+change. These fields will identify the token mint and account owner (main wallet
+address) of the affected account.
+
+Note that if a receiving account is created during the transaction, it will have
+no `preTokenBalance` entry as there is no existing account state. In this case,
+the initial balance can be assumed to be zero.
+
+### Withdrawing
+
+The withdrawal address a user provides must be that of their SOL wallet.
+
+Before executing a withdrawal [transfer](#token-transfers), the exchange should
+check the address as
+[described above](#validating-user-supplied-account-addresses-for-withdrawals).
+Additionally, this address must be owned by the System Program and have no
+account data. If the address has no SOL balance, user confirmation should be
+obtained before proceeding with the withdrawal. All other withdrawal addresses
+must be rejected.
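+
+One way to express these checks with `@solana/web3.js` (a sketch; the
+user-confirmation flow and error handling are up to your application):
+
+```ts
+import { Connection, PublicKey, SystemProgram } from "@solana/web3.js";
+
+type WithdrawalAddressStatus = "ok" | "confirm-with-user" | "reject";
+
+async function checkWithdrawalAddress(
+  connection: Connection,
+  address: string,
+): Promise<WithdrawalAddressStatus> {
+  const pubkey = new PublicKey(address);
+  const info = await connection.getAccountInfo(pubkey);
+
+  // No account yet: zero SOL balance, so obtain user confirmation first.
+  if (info === null) return "confirm-with-user";
+
+  // Must be a plain, data-free account owned by the System Program.
+  const isPlainWallet =
+    info.owner.equals(SystemProgram.programId) && info.data.length === 0;
+  return isPlainWallet ? "ok" : "reject";
+}
+```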
+
+From the withdrawal address, the
+[Associated Token Account](https://spl.solana.com/associated-token-account)
+(ATA) for the correct mint is derived and the transfer issued to that account
+via a
+[TransferChecked](https://github.com/solana-labs/solana-program-library/blob/fc0d6a2db79bd6499f04b9be7ead0c400283845e/token/program/src/instruction.rs#L268)
+instruction. Note that it is possible that the ATA address does not yet exist,
+at which point the exchange should fund the account on behalf of the user. For
+SPL Token v2 accounts, funding the withdrawal account will require 0.00203928
+SOL (2,039,280 lamports).
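+
+The ATA can be derived with nothing more than `@solana/web3.js`; the sketch
+below hard-codes the well-known SPL Token and Associated Token Account program
+ids:
+
+```ts
+import { PublicKey } from "@solana/web3.js";
+
+// Well-known program ids, identical on all clusters.
+const TOKEN_PROGRAM_ID = new PublicKey("TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA");
+const ASSOCIATED_TOKEN_PROGRAM_ID = new PublicKey("ATokenGPvbdGVxr1b2hvZbsiqW5xWH25efTNsLJA8knL");
+
+// Derive the Associated Token Account for a (wallet, mint) pair.
+async function deriveAssociatedTokenAccount(
+  wallet: PublicKey,
+  mint: PublicKey,
+): Promise<PublicKey> {
+  const [ata] = await PublicKey.findProgramAddress(
+    [wallet.toBuffer(), TOKEN_PROGRAM_ID.toBuffer(), mint.toBuffer()],
+    ASSOCIATED_TOKEN_PROGRAM_ID,
+  );
+  return ata;
+}
+```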
+
+Template `spl-token transfer` command for a withdrawal:
+
+```
+$ spl-token transfer <EXCHANGE_TOKEN_ACCOUNT> <WITHDRAWAL_AMOUNT> <USER_WALLET_ADDRESS> --fund-recipient
+```
+
+### Other Considerations
+
+#### Freeze Authority
+
+For regulatory compliance reasons, an SPL Token issuing entity may optionally
+choose to hold "Freeze Authority" over all accounts created in association with
+its mint. This allows them to
+[freeze](https://spl.solana.com/token#freezing-accounts) the assets in a given
+account at will, rendering the account unusable until thawed. If this feature is
+in use, the freeze authority's pubkey will be registered in the SPL Token's mint
+account.
+
+## Testing the Integration
+
+Be sure to test your complete workflow on Solana devnet and testnet
+[clusters](../clusters.md) before moving to production on mainnet-beta. Devnet
+is the most open and flexible, and ideal for initial development, while testnet
+offers more realistic cluster configuration. Both devnet and testnet support a
+faucet; run `solana airdrop 1` to obtain some devnet or testnet SOL for
+development and testing.
diff --git a/docs/integrations/retrying-transactions.md b/docs/integrations/retrying-transactions.md
new file mode 100644
index 000000000..4676a8070
--- /dev/null
+++ b/docs/integrations/retrying-transactions.md
@@ -0,0 +1,330 @@
+---
+title: Retrying Transactions
+---
+
+# Retrying Transactions
+
+On some occasions, a seemingly valid transaction may be dropped before it is
+included in a block. This most often occurs during periods of network
+congestion, when an RPC node fails to rebroadcast the transaction to the
+[leader](../terminology#leader). To an end-user, it may appear as if their
+transaction disappears entirely. While RPC nodes are equipped with a generic
+rebroadcasting algorithm, application developers are also capable of developing
+their own custom rebroadcasting logic.
+
+## Facts
+
+:::note Fact Sheet
+
+- RPC nodes will attempt to rebroadcast transactions using a generic algorithm
+- Application developers can implement their own custom rebroadcasting logic
+- Developers should take advantage of the `maxRetries` parameter on the
+ `sendTransaction` JSON-RPC method
+- Developers should enable preflight checks to raise errors before transactions
+ are submitted
+- Before re-signing any transaction, it is **very important** to ensure that the
+ initial transaction’s blockhash has expired
+
+:::
+
+## The Journey of a Transaction
+
+### How Clients Submit Transactions
+
+In Solana, there is no concept of a mempool. All transactions, whether they are
+initiated programmatically or by an end-user, are efficiently routed to leaders
+so that they can be processed into a block. There are two main ways in which a
+transaction can be sent to leaders:
+
+1. By proxy via an RPC server and the
+ [sendTransaction](../api/http#sendtransaction) JSON-RPC method
+2. Directly to leaders via a
+ [TPU Client](https://docs.rs/solana-client/1.7.3/solana_client/tpu_client/index.html)
+
+The vast majority of end-users will submit transactions via an RPC server. When
+a client submits a transaction, the receiving RPC node will in turn attempt to
+broadcast the transaction to both the current and next leaders. Until the
+transaction is processed by a leader, there is no record of the transaction
+outside of what the client and the relaying RPC nodes are aware of. In the case
+of a TPU client, rebroadcast and leader forwarding is handled entirely by the
+client software.
+
+![Transaction Journey](../../static/img/rt-tx-journey.png)
+
+
+
+### How RPC Nodes Broadcast Transactions
+
+After an RPC node receives a transaction via `sendTransaction`, it will convert
+the transaction into a
+[UDP](https://en.wikipedia.org/wiki/User_Datagram_Protocol) packet before
+forwarding it to the relevant leaders. UDP allows validators to quickly
+communicate with one another, but does not provide any guarantees regarding
+transaction delivery.
+
+Because Solana’s leader schedule is known in advance of every
+[epoch](../terminology#epoch) (~2 days), an RPC node will broadcast its
+transaction directly to the current and next leaders. This is in contrast to
+other gossip protocols such as Ethereum that propagate transactions randomly and
+broadly across the entire network. By default, RPC nodes will try to forward
+transactions to leaders every two seconds until either the transaction is
+finalized or the transaction’s blockhash expires (150 blocks or ~1 minute 19
+seconds as of the time of this writing). If the outstanding rebroadcast queue
+size is greater than
+[10,000 transactions](https://github.com/solana-labs/solana/blob/bfbbc53dac93b3a5c6be9b4b65f679fdb13e41d9/send-transaction-service/src/send_transaction_service.rs#L20),
+newly submitted transactions are dropped. There are command-line
+[arguments](https://github.com/solana-labs/solana/blob/bfbbc53dac93b3a5c6be9b4b65f679fdb13e41d9/validator/src/main.rs#L1172)
+that RPC operators can adjust to change the default behavior of this retry
+logic.
+
+When an RPC node broadcasts a transaction, it will attempt to forward the
+transaction to a leader’s
+[Transaction Processing Unit (TPU)](https://github.com/solana-labs/solana/blob/cd6f931223181d5a1d47cba64e857785a175a760/core/src/validator.rs#L867).
+The TPU processes transactions in five distinct phases:
+
+- [Fetch Stage](https://github.com/solana-labs/solana/blob/cd6f931223181d5a1d47cba64e857785a175a760/core/src/fetch_stage.rs#L21)
+- [SigVerify Stage](https://github.com/solana-labs/solana/blob/cd6f931223181d5a1d47cba64e857785a175a760/core/src/tpu.rs#L91)
+- [Banking Stage](https://github.com/solana-labs/solana/blob/cd6f931223181d5a1d47cba64e857785a175a760/core/src/banking_stage.rs#L249)
+- [Proof of History Service](https://github.com/solana-labs/solana/blob/cd6f931223181d5a1d47cba64e857785a175a760/poh/src/poh_service.rs)
+- [Broadcast Stage](https://github.com/solana-labs/solana/blob/cd6f931223181d5a1d47cba64e857785a175a760/core/src/tpu.rs#L136)
+
+![TPU Overview](../../static/img/rt-tpu-jito-labs.png)
+
+Of these five phases, the Fetch Stage is responsible for receiving transactions.
+Within the Fetch Stage, validators will categorize incoming transactions
+according to three ports:
+
+- [tpu](https://github.com/solana-labs/solana/blob/cd6f931223181d5a1d47cba64e857785a175a760/gossip/src/contact_info.rs#L27)
+ handles regular transactions such as token transfers, NFT mints, and program
+ instructions
+- [tpu_vote](https://github.com/solana-labs/solana/blob/cd6f931223181d5a1d47cba64e857785a175a760/gossip/src/contact_info.rs#L31)
+ focuses exclusively on voting transactions
+- [tpu_forwards](https://github.com/solana-labs/solana/blob/cd6f931223181d5a1d47cba64e857785a175a760/gossip/src/contact_info.rs#L29)
+ forwards unprocessed packets to the next leader if the current leader is
+ unable to process all transactions
+
+For more information on the TPU, please refer to
+[this excellent writeup by Jito Labs](https://jito-labs.medium.com/solana-validator-101-transaction-processing-90bcdc271143).
+
+## How Transactions Get Dropped
+
+Throughout a transaction’s journey, there are a few scenarios in which the
+transaction can be unintentionally dropped from the network.
+
+### Before a transaction is processed
+
+If the network drops a transaction, it will most likely do so before the
+transaction is processed by a leader. UDP
+[packet loss](https://en.wikipedia.org/wiki/Packet_loss) is the simplest reason
+why this might occur. During times of intense network load, it’s also possible
+for validators to become overwhelmed by the sheer number of transactions
+required for processing. While validators are equipped to forward surplus
+transactions via `tpu_forwards`, there is a limit to the amount of data that can
+be
+[forwarded](https://github.com/solana-labs/solana/blob/master/core/src/banking_stage.rs#L389).
+Furthermore, each forward is limited to a single hop between validators. That
+is, transactions received on the `tpu_forwards` port are not forwarded on to
+other validators.
+
+There are also two lesser known reasons why a transaction may be dropped before
+it is processed. The first scenario involves transactions that are submitted via
+an RPC pool. Occasionally, part of the RPC pool can be sufficiently ahead of the
+rest of the pool. This can cause issues when nodes within the pool are required
+to work together. In this example, the transaction’s
+[recentBlockhash](../developing/programming-model/transactions#recent-blockhash)
+is queried from the advanced part of the pool (Backend A). When the transaction
+is submitted to the lagging part of the pool (Backend B), the nodes will not
+recognize the advanced blockhash and will drop the transaction. This can be
+detected upon transaction submission if developers enable
+[preflight checks](../api/http#sendtransaction) on `sendTransaction`.
+
+![Dropped via RPC Pool](../../static/img/rt-dropped-via-rpc-pool.png)
+
+Temporary network forks can also result in dropped transactions. If a
+validator is slow to replay its blocks within the Banking Stage, it may end up
+creating a minority fork. When a client builds a transaction, it’s possible for
+the transaction to reference a `recentBlockhash` that only exists on the
+minority fork. After the transaction is submitted, the cluster can then switch
+away from its minority fork before the transaction is processed. In this
+scenario, the transaction is dropped due to the blockhash not being found.
+
+![Dropped due to Minority Fork (Before Processed)](../../static/img/rt-dropped-minority-fork-pre-process.png)
+
+### After a transaction is processed and before it is finalized
+
+In the event a transaction references a `recentBlockhash` from a minority fork,
+it’s still possible for the transaction to be processed. In this case, however,
+it would be processed by the leader on the minority fork. When this leader
+attempts to share its processed transactions with the rest of the network, it
+would fail to reach consensus with the majority of validators that do not
+recognize the minority fork. At this time, the transaction would be dropped
+before it could be finalized.
+
+![Dropped due to Minority Fork (After Processed)](../../static/img/rt-dropped-minority-fork-post-process.png)
+
+## Handling Dropped Transactions
+
+While RPC nodes will attempt to rebroadcast transactions, the algorithm they
+employ is generic and often ill-suited for the needs of specific applications.
+To prepare for times of network congestion, application developers should
+customize their own rebroadcasting logic.
+
+### An In-Depth Look at sendTransaction
+
+When it comes to submitting transactions, the `sendTransaction` RPC method is
+the primary tool available to developers. `sendTransaction` is only responsible
+for relaying a transaction from a client to an RPC node. If the node receives
+the transaction, `sendTransaction` will return the transaction id that can be
+used to track the transaction. A successful response does not indicate whether
+the transaction will be processed or finalized by the cluster.
+
+:::note
+
+### Request Parameters
+
+- `transaction`: `string` - fully-signed Transaction, as encoded string
+- (optional) `configuration object`: `object`
+ - `skipPreflight`: `boolean` - if true, skip the preflight transaction checks
+ (default: false)
+ - (optional) `preflightCommitment`: `string` -
+ [Commitment](../api/http#configuring-state-commitment) level to use for
+ preflight simulations against the bank slot (default: "finalized").
+ - (optional) `encoding`: `string` - Encoding used for the transaction data.
+    Either "base58" (slow) or "base64" (default: "base58").
+ - (optional) `maxRetries`: `usize` - Maximum number of times for the RPC node
+ to retry sending the transaction to the leader. If this parameter is not
+ provided, the RPC node will retry the transaction until it is finalized or
+ until the blockhash expires.
+
+Response
+
+- `transaction id`: `string` - First transaction signature embedded in the
+ transaction, as base-58 encoded string. This transaction id can be used with
+ [`getSignatureStatuses`](../api/http#getsignaturestatuses) to poll for status
+ updates.
+
+:::
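+
+As a brief illustration of these parameters (a sketch only, assuming an
+already-signed `rawTransaction` buffer and an existing `Connection`), a client
+using `@solana/web3.js` might submit a transaction through `sendRawTransaction`,
+which invokes the `sendTransaction` RPC method under the hood, and treat the
+returned value purely as an id to poll:
+
+```ts
+import { Connection } from "@solana/web3.js";
+
+async function submit(connection: Connection, rawTransaction: Buffer) {
+  // returns the transaction id (first signature); it does NOT guarantee the
+  // transaction will be processed or finalized by the cluster
+  return await connection.sendRawTransaction(rawTransaction, {
+    skipPreflight: false, // run the signature/blockhash/simulation checks
+    preflightCommitment: "confirmed", // bank slot to simulate against
+    maxRetries: 2, // cap the RPC node's own rebroadcast attempts
+  });
+}
+```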
+
+## Customizing Rebroadcast Logic
+
+In order to develop their own rebroadcasting logic, developers should take
+advantage of `sendTransaction`’s `maxRetries` parameter. If provided,
+`maxRetries` will override an RPC node’s default retry logic, allowing
+developers to manually control the retry process
+[within reasonable bounds](https://github.com/solana-labs/solana/blob/98707baec2385a4f7114d2167ef6dfb1406f954f/validator/src/main.rs#L1258-L1274).
+
+A common pattern for manually retrying transactions involves temporarily storing
+the `lastValidBlockHeight` that comes from
+[getLatestBlockhash](../api/http#getlatestblockhash). Once stashed, an
+application can then
+[poll the cluster’s blockheight](../api/http#getblockheight) and manually retry
+the transaction at an appropriate interval. In times of network congestion, it’s
+advantageous to set `maxRetries` to 0 and manually rebroadcast via a custom
+algorithm. While some applications may employ an
+[exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff)
+algorithm, others such as [Mango](https://www.mango.markets/) opt to
+[continuously resubmit](https://github.com/blockworks-foundation/mango-ui/blob/b6abfc6c13b71fc17ebbe766f50b8215fa1ec54f/src/utils/send.tsx#L713)
+transactions at a constant interval until some timeout has occurred.
+
+```ts
+import {
+ Keypair,
+ Connection,
+ LAMPORTS_PER_SOL,
+ SystemProgram,
+ Transaction,
+} from "@solana/web3.js";
+import * as nacl from "tweetnacl";
+
+const sleep = async (ms: number) => {
+ return new Promise(r => setTimeout(r, ms));
+};
+
+(async () => {
+ const payer = Keypair.generate();
+ const toAccount = Keypair.generate().publicKey;
+
+ const connection = new Connection("http://127.0.0.1:8899", "confirmed");
+
+ const airdropSignature = await connection.requestAirdrop(
+ payer.publicKey,
+ LAMPORTS_PER_SOL,
+ );
+
+ await connection.confirmTransaction({ signature: airdropSignature });
+
+ const blockhashResponse = await connection.getLatestBlockhashAndContext();
+ const lastValidBlockHeight = blockhashResponse.context.slot + 150;
+
+ const transaction = new Transaction({
+ feePayer: payer.publicKey,
+ blockhash: blockhashResponse.value.blockhash,
+ lastValidBlockHeight: lastValidBlockHeight,
+ }).add(
+ SystemProgram.transfer({
+ fromPubkey: payer.publicKey,
+ toPubkey: toAccount,
+ lamports: 1000000,
+ }),
+ );
+ const message = transaction.serializeMessage();
+ const signature = nacl.sign.detached(message, payer.secretKey);
+ transaction.addSignature(payer.publicKey, Buffer.from(signature));
+ const rawTransaction = transaction.serialize();
+ let blockheight = await connection.getBlockHeight();
+
+ while (blockheight < lastValidBlockHeight) {
+ connection.sendRawTransaction(rawTransaction, {
+ skipPreflight: true,
+ });
+ await sleep(500);
+ blockheight = await connection.getBlockHeight();
+ }
+})();
+```
+
+When polling via `getLatestBlockhash`, applications should specify their
+intended [commitment](../api/http#configuring-state-commitment) level. By
+setting its commitment to `confirmed` (voted on) or `finalized` (~30 blocks
+after `confirmed`), an application can avoid polling a blockhash from a minority
+fork.
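+
+For example, with `@solana/web3.js` (a minimal sketch reusing a `connection`
+like the one in the earlier example), an application might fetch a blockhash at
+the `confirmed` commitment level:
+
+```ts
+// fetch a blockhash that a supermajority of the cluster has voted on,
+// along with the block height at which it expires
+const { blockhash, lastValidBlockHeight } =
+  await connection.getLatestBlockhash("confirmed");
+```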
+
+If an application has access to RPC nodes behind a load balancer, it can also
+choose to divide its workload amongst specific nodes. RPC nodes that serve
+data-intensive requests such as
+[getProgramAccounts](https://solanacookbook.com/guides/get-program-accounts.html)
+may be prone to falling behind and can be ill-suited for also forwarding
+transactions. For applications that handle time-sensitive transactions, it may
+be prudent to have dedicated nodes that only handle `sendTransaction`.
+
+### The Cost of Skipping Preflight
+
+By default, `sendTransaction` will perform three preflight checks prior to
+submitting a transaction. Specifically, `sendTransaction` will:
+
+- Verify that all signatures are valid
+- Check that the referenced blockhash is within the last 150 blocks
+- Simulate the transaction against the bank slot specified by the
+ `preflightCommitment`
+
+In the event that any of these three preflight checks fail, `sendTransaction`
+will raise an error prior to submitting the transaction. Preflight checks can
+often be the difference between losing a transaction and allowing a client to
+gracefully handle an error. To ensure that these common errors are accounted
+for, it is recommended that developers keep `skipPreflight` set to `false`.
+
+### When to Re-Sign Transactions
+
+Despite all attempts to rebroadcast, there may be times in which a client is
+required to re-sign a transaction. Before re-signing any transaction, it is
+**very important** to ensure that the initial transaction’s blockhash has
+expired. If the initial blockhash is still valid, it is possible for both
+transactions to be accepted by the network. To an end-user, this would appear as
+if they unintentionally sent the same transaction twice.
+
+In Solana, a dropped transaction can be safely discarded once the blockhash it
+references is older than the `lastValidBlockHeight` received from
+`getLatestBlockhash`. Developers should keep track of this
+`lastValidBlockHeight` by querying [`getEpochInfo`](../api/http#getepochinfo)
+and comparing with `blockHeight` in the response. Once a blockhash is
+invalidated, clients may re-sign with a newly-queried blockhash.
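+
+The sketch below (reusing the `connection` from the earlier example and a
+hypothetical `lastValidBlockHeight` that was stashed when the original
+transaction was built) shows one way to gate re-signing on blockhash expiry:
+
+```ts
+// `lastValidBlockHeight` was returned by getLatestBlockhash when the original
+// transaction was built and signed
+const currentBlockHeight = await connection.getBlockHeight("confirmed");
+// the `blockHeight` field returned by getEpochInfo works equivalently here
+if (currentBlockHeight > lastValidBlockHeight) {
+  // the original blockhash has expired: the dropped transaction can be safely
+  // discarded and re-signed with a newly queried blockhash
+}
+```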
diff --git a/docs/introduction.md b/docs/introduction.md
new file mode 100644
index 000000000..486075686
--- /dev/null
+++ b/docs/introduction.md
@@ -0,0 +1,108 @@
+---
+title: Introduction
+---
+
+## What is Solana?
+
+Solana is an open source project implementing a new, high-performance,
+permissionless blockchain. The Solana Foundation is based in Geneva, Switzerland
+and maintains the open source project.
+
+## Why Solana?
+
+It is possible for a centralized database to process 710,000 transactions per
+second on a standard gigabit network if the transactions are, on average, no
+more than 176 bytes. A centralized database can also replicate itself and
+maintain high availability without significantly compromising that transaction
+rate using the distributed system technique known as Optimistic Concurrency
+Control
+[\[H.T.Kung, J.T.Robinson (1981)\]](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.4735).
+At Solana, we are demonstrating that these same theoretical limits apply just as
+well to blockchain on an adversarial network. The key ingredient? Finding a way
+to share time when nodes cannot rely upon one another. Once nodes can rely upon
+time, suddenly ~40 years of distributed systems research becomes applicable to
+blockchain!
+
+> Perhaps the most striking difference between algorithms obtained by our method
+> and ones based upon timeout is that using timeout produces a traditional
+> distributed algorithm in which the processes operate asynchronously, while our
+> method produces a globally synchronous one in which every process does the
+> same thing at (approximately) the same time. Our method seems to contradict
+> the whole purpose of distributed processing, which is to permit different
+> processes to operate independently and perform different functions. However,
+> if a distributed system is really a single system, then the processes must be
+> synchronized in some way. Conceptually, the easiest way to synchronize
+> processes is to get them all to do the same thing at the same time. Therefore,
+> our method is used to implement a kernel that performs the necessary
+> synchronization--for example, making sure that two different processes do not
+> try to modify a file at the same time. Processes might spend only a small
+> fraction of their time executing the synchronizing kernel; the rest of the
+> time, they can operate independently--e.g., accessing different files. This is
+> an approach we have advocated even when fault-tolerance is not required. The
+> method's basic simplicity makes it easier to understand the precise properties
+> of a system, which is crucial if one is to know just how fault-tolerant the
+> system is.
+> [\[L.Lamport (1984)\]](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.1078)
+
+Furthermore, and much to our surprise, it can be implemented using a mechanism
+that has existed in Bitcoin since day one. The Bitcoin feature is called
+nLocktime and it can be used to postdate transactions using block height instead
+of a timestamp. As a Bitcoin client, you would use block height instead of a
+timestamp if you don't rely upon the network. Block height turns out to be an
+instance of what's being called a Verifiable Delay Function in cryptography
+circles. It's a cryptographically secure way to say time has passed. In Solana,
+we use a far more granular verifiable delay function, a SHA 256 hash chain, to
+checkpoint the ledger and coordinate consensus. With it, we implement Optimistic
+Concurrency Control and are now well en route towards that theoretical limit of
+710,000 transactions per second.
+
+## Documentation Overview
+
+The Solana docs describe the Solana open source project, a blockchain built from
+the ground up for scale. They cover why Solana is useful, how to use it, how it
+works, and why it will continue to work long after the company Solana closes its
+doors. The goal of the Solana architecture is to demonstrate there exists a set
+of software algorithms that when used in combination to implement a blockchain,
+removes software as a performance bottleneck, allowing transaction throughput to
+scale proportionally with network bandwidth. The architecture goes on to satisfy
+all three desirable properties of a proper blockchain: it is scalable, secure
+and decentralized.
+
+The architecture describes a theoretical upper bound of 710 thousand
+transactions per second \(tps\) on a standard gigabit network and 28.4 million
+tps on 40 gigabit. Furthermore, the architecture supports safe, concurrent
+execution of programs authored in general-purpose programming languages such as
+C or Rust.
+
+## What is a Solana Cluster?
+
+A cluster is a set of computers that work together and can be viewed from the
+outside as a single system. A Solana cluster is a set of independently owned
+computers working together \(and sometimes against each other\) to verify the
+output of untrusted, user-submitted programs. A Solana cluster can be utilized
+any time a user wants to preserve an immutable record of events in time or
+programmatic interpretations of those events. One use is to track which of the
+computers did meaningful work to keep the cluster running. Another use might be
+to track the possession of real-world assets. In each case, the cluster produces
+a record of events called the ledger. It will be preserved for the lifetime of
+the cluster. As long as someone somewhere in the world maintains a copy of the
+ledger, the output of its programs \(which may contain a record of who possesses
+what\) will forever be reproducible, independent of the organization that
+launched it.
+
+## What are SOLs?
+
+A SOL is the name of Solana's native token, which can be passed to nodes in a
+Solana cluster in exchange for running an on-chain program or validating its
+output. The system may perform micropayments of fractional SOLs, which are
+called _lamports_. They are named in honor of Solana's biggest technical
+influence, [Leslie Lamport](https://en.wikipedia.org/wiki/Leslie_Lamport). A
+lamport has a value of 0.000000001 SOL.
+
+## Disclaimer
+
+All claims, content, designs, algorithms, estimates, roadmaps, specifications,
+and performance measurements described in this project are done with the
+author's best effort. It is up to the reader to check and validate their
+accuracy and truthfulness. Furthermore, nothing in this project constitutes a
+solicitation for investment.
diff --git a/docs/learn/state-compression.md b/docs/learn/state-compression.md
new file mode 100644
index 000000000..993544944
--- /dev/null
+++ b/docs/learn/state-compression.md
@@ -0,0 +1,334 @@
+---
+title: State Compression
+description:
+ 'State Compression is the method of cheaply and securely storing
+ "fingerprints" of off-chain data in the Solana leger, instead of expensive
+ accounts.'
+---
+
+On Solana, [State Compression](./state-compression.md) is the method of creating
+a "fingerprint" (or hash) of off-chain data and storing this fingerprint
+on-chain for secure verification. This effectively uses the security of the
+Solana ledger to validate off-chain data and verify that it has not been
+tampered with.
+
+This method of "compression" allows Solana programs and dApps to use cheap
+blockchain [ledger](./../terminology.md#ledger) space, instead of the more
+expensive [account](./../terminology.md#account) space, to securely store data.
+
+This is accomplished by using a special binary tree structure, known as a
+[concurrent merkle tree](#what-is-a-concurrent-merkle-tree), to create a hash of
+each piece of data (called a `leaf`), hashing those together, and only storing
+this final hash on-chain.
+
+## What is State Compression?
+
+In simple terms, state compression uses "**_tree_**" structures to
+cryptographically hash off-chain data together, in a deterministic way, to
+compute a single final hash that gets stored on-chain.
+
+These _trees_ are created in this "_deterministic_" process by:
+
+- taking any piece of data
+- creating a hash of this data
+- storing this hash as a `leaf` at the bottom of the tree
+- each `leaf` pair is then hashed together, creating a `branch`
+- each `branch` is then hashed together
+- continually climbing the tree and hashing adjacent branches together
+- once at the top of the tree, a final `root hash` is produced
+
+This `root hash` is then stored on chain as a verifiable **_proof_** of all of
+the data within every leaf. It allows anyone to cryptographically verify all the
+off-chain data within the tree, while only actually storing a **minimal** amount
+of data on-chain, significantly reducing the cost to store/prove large amounts
+of data through this "state compression".
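+
+As a rough, off-chain illustration of this hashing process (a sketch only, using
+Node's built-in `crypto` module; it is not the on-chain implementation and
+ignores proof generation), a root hash could be computed like this:
+
+```ts
+import { createHash } from "crypto";
+
+const hash = (data: Buffer): Buffer =>
+  createHash("sha256").update(data).digest();
+
+// hash each piece of data into a leaf, then hash adjacent pairs together,
+// level by level, until a single root hash remains
+function computeRootHash(items: Buffer[]): Buffer {
+  let level = items.map(hash);
+  while (level.length > 1) {
+    const next: Buffer[] = [];
+    for (let i = 0; i < level.length; i += 2) {
+      const left = level[i];
+      const right = level[i + 1] ?? left; // duplicate the last node if unpaired
+      next.push(hash(Buffer.concat([left, right])));
+    }
+    level = next;
+  }
+  return level[0];
+}
+```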
+
+## Merkle trees and concurrent merkle trees
+
+Solana's state compression uses a special type of
+[merkle tree](#what-is-a-merkle-tree) that allows for multiple changes to any
+given tree to happen, while still maintaining the integrity and validity of the
+tree.
+
+This special tree, known as a
+"[concurrent merkle tree](#what-is-a-concurrent-merkle-tree)", effectively
+retains a "changelog" of the tree on-chain. Allowing for multiple rapid changes
+to the same tree (i.e. all in the same block), before a proof is invalidated.
+
+### What is a merkle tree?
+
+A [merkle tree](https://en.wikipedia.org/wiki/merkle_tree), sometimes called a
+"hash tree", is a hash based binary tree structure where each `leaf` node is
+represented as a cryptographic hash of its inner data. And every node that is
+**not** a leaf, called a `branch`, is represented as a hash of its child leaf
+hashes.
+
+Each branch is then also hashed together, climbing the tree, until eventually
+only a single hash remains. This final hash, called the `root hash` or "root",
+can then be used in combination with a "proof path" to verify any piece of data
+stored within a leaf node.
+
+Once a final `root hash` has been computed, any piece of data stored within a
+`leaf` node can be verified by rehashing the specific leaf's data and the hash
+label of each adjacent branch climbing the tree (known as the `proof` or "proof
+path"). Comparing this "rehash" to the `root hash` is the verification of the
+underlying leaf data. If they match, the data is verified accurate. If they do
+not match, the leaf data was changed.
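+
+Continuing the sketch above (reusing its `hash` helper; the left/right flags are
+an assumption about how the proof path is laid out), verification might look
+like this:
+
+```ts
+// rehash the leaf with each sibling in the proof path and compare to the root
+function verifyLeaf(
+  leafData: Buffer,
+  proof: Buffer[],
+  proofIsLeftSibling: boolean[],
+  rootHash: Buffer,
+): boolean {
+  let node = hash(leafData);
+  proof.forEach((sibling, i) => {
+    node = proofIsLeftSibling[i]
+      ? hash(Buffer.concat([sibling, node]))
+      : hash(Buffer.concat([node, sibling]));
+  });
+  return node.equals(rootHash);
+}
+```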
+
+Whenever desired, the original leaf data can be changed by simply hashing the
+**new leaf** data and recomputing the root hash in the same manner of the
+original root. This **new root hash** is then used to verify any of the data,
+and effectively invalidates the previous root hash and previous proof.
+Therefore, each change to these _traditional merkle trees_ is required to be
+performed in series.
+
+:::info
+
+This process of changing leaf data and computing a new root hash can be a
+**very common** thing when using merkle trees! While it is one of the design
+points of the tree, it also leads to one of the most notable drawbacks: rapid
+changes are not possible, because each change must be processed in series.
+
+:::
+
+### What is a Concurrent merkle tree?
+
+In high throughput applications, like within the
+[Solana runtime](/src/validator/runtime.md), requests to change an on-chain
+_traditional merkle tree_ could be received by validators in relatively rapid
+succession (e.g. within the same slot). Each leaf data change would still be
+required to be performed in series, resulting in each subsequent change request
+failing because the root hash and proof were invalidated by the previous change
+request in the slot.
+
+Enter, Concurrent merkle trees.
+
+A **Concurrent merkle tree** stores a **secure changelog** of the most recent
+changes, their root hash, and the proof to derive it. This changelog "buffer" is
+stored on-chain in an account specific to each tree, with a maximum number of
+changelog "records" (aka `maxBufferSize`).
+
+When multiple leaf data change requests are received by validators in the same
+slot, the on-chain _concurrent merkle tree_ can use this "changelog buffer" as a
+source of truth for acceptable proofs, effectively allowing for up to
+`maxBufferSize` changes to the same tree in the same slot and significantly
+boosting throughput.
+
+## Sizing a concurrent merkle tree
+
+When creating one of these on-chain trees, there are 3 values that will
+determine the size of your tree, the cost to create your tree, and the number of
+concurrent changes to your tree:
+
+1. max depth
+2. max buffer size
+3. canopy depth
+
+### Max depth
+
+The "max depth" of a tree is the **maximum number** of hops to get from any data
+`leaf` to the `root` of the tree.
+
+Since merkle trees are binary trees, every leaf is connected to **only one**
+other leaf; existing as a `leaf pair`.
+
+Therefore, the `maxDepth` of a tree is used to determine the maximum number of
+nodes (aka pieces of data or `leafs`) to store within the tree using a simple
+calculation:
+
+```
+nodes_count = 2 ^ maxDepth
+```
+
+Since a tree's depth must be set at tree creation, you must decide how many
+pieces of data you want your tree to store. Then using the simple calculation
+above, you can determine the lowest `maxDepth` to store your data.
+
+#### Example 1: minting 100 NFTs
+
+If you wanted to create a tree to store 100 compressed NFTs, you would need a
+minimum of "100 leafs" or "100 nodes".
+
+```
+// maxDepth=6 -> 64 nodes
+2^6 = 64
+
+// maxDepth=7 -> 128 nodes
+2^7 = 128
+```
+
+We must use a `maxDepth` of `7` to ensure we can store all of our data.
+
+#### Example 2: minting 15000 NFTs
+
+If you wanted to create a tree to store 15000 compressed NFTs, you would need a
+minimum of "15000 leafs" or "15000 nodes".
+
+```
+// maxDepth=13 -> 8192 nodes
+2^13 = 8192
+
+// maxDepth=14 -> 16384 nodes
+2^14 = 16384
+```
+
+We must use a `maxDepth` of `14` to ensure we can store all of our data.
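+
+The same sizing calculation can be expressed as a small helper (a sketch, not
+part of any SDK):
+
+```ts
+// smallest maxDepth such that 2 ^ maxDepth >= desiredLeafCount
+function minimumMaxDepth(desiredLeafCount: number): number {
+  return Math.ceil(Math.log2(desiredLeafCount));
+}
+
+minimumMaxDepth(100); // 7
+minimumMaxDepth(15000); // 14
+```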
+
+#### The higher the max depth, the higher the cost
+
+The `maxDepth` value will be one of the primary drivers of cost when creating a
+tree since you will pay this cost upfront at tree creation. The higher the max
+tree depth, the more data fingerprints (aka hashes) you can store, and the
+higher the cost.
+
+### Max buffer size
+
+The "max buffer size" is effectively the maximum number of changes that can
+occur on a tree, with the `root hash` still being valid.
+
+Due to the root hash effectively being a single hash of all leaf data, changing
+any single leaf would invalidate the proof needed for all subsequent attempts to
+change any leaf of a regular tree.
+
+But with a [concurrent tree](#what-is-a-concurrent-merkle-tree), there is
+effectively a changelog of updates for these proofs. This changelog buffer is
+sized and set at tree creation via this `maxBufferSize` value.
+
+### Canopy depth
+
+The "canopy depth", sometimes called the canopy size, is the number of proof
+nodes that are cached/stored on-chain for any given proof path.
+
+When performing an update action on a `leaf`, like transferring ownership (e.g.
+selling a compressed NFT), the **complete** proof path must be used to verify
+original ownership of the leaf and therefore allow for the update action. This
+verification is performed using the **complete** proof path to correctly compute
+the current `root hash` (or any cached `root hash` via the on-chain "concurrent
+buffer").
+
+The larger a tree's max depth is, the more proof nodes are required to perform
+this verification. For example, if your max depth is `14`, then `14` total
+proof nodes are required to verify a leaf. As a tree gets larger, the complete
+proof path gets larger.
+
+Normally, each of these proof nodes would be required to be included within each
+tree update transaction. Since each proof node value takes up `32 bytes` in a
+transaction (similar to providing a Public Key), larger trees would very quickly
+exceed the maximum transaction size limit.
+
+Enter the canopy. The canopy enables storing a set number of proof nodes on
+chain (for any given proof path), allowing fewer proof nodes to be included
+within each update transaction and therefore keeping the overall transaction
+size below the limit.
+
+For example, a tree with a max depth of `14` would require `14` total proof
+nodes. With a canopy of `10`, only `4` proof nodes are required to be submitted
+per update transaction.
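+
+In other words (a simplification that ignores any additional instruction data),
+the number of proof nodes a transaction must carry is roughly the max depth
+minus the canopy depth:
+
+```ts
+// proof nodes that must be submitted per update transaction, assuming the
+// top `canopyDepth` levels of every proof path are cached on-chain
+const proofNodesPerTransaction = (maxDepth: number, canopyDepth: number) =>
+  maxDepth - canopyDepth;
+
+proofNodesPerTransaction(14, 10); // 4
+```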
+
+#### The larger the canopy depth value, the higher the cost
+
+The `canopyDepth` value is also a primary factor of cost when creating a tree
+since you will pay this cost upfront at tree creation. The higher the canopy
+depth, the more proof nodes are stored on-chain, and the higher the cost.
+
+#### Smaller canopy limits composability
+
+While a tree's creation costs are higher with a higher canopy, having a lower
+`canopyDepth` will require more proof nodes to be included within each update
+transaction. The more nodes required to be submitted, the larger the transaction
+size, and therefore the easier it is to exceed the transaction size limits.
+
+This will also be the case for any other Solana program or dApp that attempts to
+interact with your tree/leafs. If your tree requires too many proof nodes
+(because of a low canopy depth), then any other additional actions another
+on-chain program **could** offer will be **limited** by their specific
+instruction size plus your proof node list size, limiting composability and
+potential additional utility for your specific tree.
+
+For example, if your tree is being used for compressed NFTs and has a very low
+canopy depth, an NFT marketplace may only be able to support simple NFT
+transfers and may not be able to support an on-chain bidding system.
+
+## Cost of creating a tree
+
+The cost of creating a concurrent merkle tree is based on the tree's size
+parameters: `maxDepth`, `maxBufferSize`, and `canopyDepth`. These values are all
+used to calculate the on-chain storage (in bytes) required for a tree to exist
+on chain.
+
+Once the required space (in bytes) has been calculated, use the
+[`getMinimumBalanceForRentExemption`](/api/http#getminimumbalanceforrentexemption)
+RPC method to request the cost (in lamports) to allocate this number of bytes
+on-chain.
+
+### Calculate tree cost in JavaScript
+
+Within the
+[`@solana/spl-account-compression`](https://www.npmjs.com/package/@solana/spl-account-compression)
+package, developers can use the
+[`getConcurrentMerkleTreeAccountSize`](https://solana-labs.github.io/solana-program-library/account-compression/sdk/docs/modules/index.html#getConcurrentMerkleTreeAccountSize)
+function to calculate the required space for a given set of tree size
+parameters.
+
+Then use the
+[`getMinimumBalanceForRentExemption`](https://solana-labs.github.io/solana-web3.js/classes/Connection.html#getMinimumBalanceForRentExemption)
+function to get the final cost (in lamports) to make an account of this size
+rent exempt, just as you would for any other account creation. The snippet below
+assumes arbitrary example size parameters and a Devnet connection.
+
+```ts
+import { Connection, clusterApiUrl } from "@solana/web3.js";
+import { getConcurrentMerkleTreeAccountSize } from "@solana/spl-account-compression";
+
+const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
+
+// example tree size parameters (max depth 14, max buffer size 64, canopy depth 0)
+const maxDepth = 14;
+const maxBufferSize = 64;
+const canopyDepth = 0;
+
+// calculate the space required for the tree
+const requiredSpace = getConcurrentMerkleTreeAccountSize(
+  maxDepth,
+  maxBufferSize,
+  canopyDepth,
+);
+
+// get the cost (in lamports) to store the tree on-chain
+const storageCost = await connection.getMinimumBalanceForRentExemption(
+  requiredSpace,
+);
+```
+
+### Example costs
+
+Listed below are several example costs, for different tree sizes, including how
+many leaf nodes are possible for each:
+
+**Example #1: 16,384 nodes costing 0.222 SOL**
+
+- max depth of `14` and max buffer size of `64`
+- maximum number of leaf nodes: `16,384`
+- canopy depth of `0` costs approximately `0.222 SOL` to create
+
+**Example #2: 16,384 nodes costing 1.134 SOL**
+
+- max depth of `14` and max buffer size of `64`
+- maximum number of leaf nodes: `16,384`
+- canopy depth of `11` costs approximately `1.134 SOL` to create
+
+**Example #3: 1,048,576 nodes costing 1.673 SOL**
+
+- max depth of `20` and max buffer size of `256`
+- maximum number of leaf nodes: `1,048,576`
+- canopy depth of `10` costs approximately `1.673 SOL` to create
+
+**Example #4: 1,048,576 nodes costing 15.814 SOL**
+
+- max depth of `20` and max buffer size of `256`
+- maximum number of leaf nodes: `1,048,576`
+- canopy depth of `15` costs approximately `15.814 SOL` to create
+
+## Compressed NFTs
+
+Compressed NFTs are one of the most popular use cases for State Compression on
+Solana. With compression, a one million NFT collection could be minted for
+`~50 SOL`, versus `~12,000 SOL` for its uncompressed equivalent collection.
+
+:::info Developer Guide
+
+Read our developer guide for
+[minting and transferring compressed NFTs](./../developing/guides/compressed-nfts).
+
+:::
diff --git a/docs/staking.md b/docs/staking.md
new file mode 100644
index 000000000..003c9252b
--- /dev/null
+++ b/docs/staking.md
@@ -0,0 +1,99 @@
+---
+title: Staking on Solana
+---
+
+_Note before reading: All references to increases in values are in absolute
+terms with regards to balance of SOL. This document makes no suggestion as to
+the monetary value of SOL at any time._
+
+By staking your SOL tokens, you help secure the network and
+[earn rewards](implemented-proposals/staking-rewards.md) while doing so.
+
+You can stake by delegating your tokens to validators who process transactions
+and run the network.
+
+Delegating stake is a shared-risk shared-reward financial model that may provide
+returns to holders of tokens delegated for a long period. This is achieved by
+aligning the financial incentives of the token-holders (delegators) and the
+validators to whom they delegate.
+
+The more stake delegated to a validator, the more often this validator is chosen
+to write new transactions to the ledger. The more transactions the validator
+writes, the more rewards the validator and its delegators earn. Validators who
+configure their systems to be able to process more transactions earn
+proportionally more rewards because they keep the network running as fast
+and as smoothly as possible.
+
+Validators incur costs by running and maintaining their systems, and this is
+passed on to delegators in the form of a fee collected as a percentage of
+rewards earned. This fee is known as a _commission_. Since validators earn more
+rewards the more stake is delegated to them, they may compete with one another
+to offer the lowest commission for their services.
+
+You risk losing tokens when staking through a process known as _slashing_.
+Slashing involves the removal and destruction of a portion of a validator's
+delegated stake in response to intentional malicious behavior, such as creating
+invalid transactions or censoring certain types of transactions or network
+participants.
+
+When a validator is slashed, all token holders who have delegated stake to that
+validator lose a portion of their delegation. While this means an immediate loss
+for the token holder, it also is a loss of future rewards for the validator due
+to their reduced total delegation. More details on the slashing roadmap can be
+found
+[here](proposals/optimistic-confirmation-and-slashing.md#slashing-roadmap).
+
+Rewards and slashing align validator and token holder interests which helps keep
+the network secure, robust and performant.
+
+## How do I stake my SOL tokens?
+
+You can stake SOL by moving your tokens into a wallet that supports staking. The
+wallet provides steps to create a stake account and do the delegation.
+
+#### Supported Wallets
+
+Many web and mobile wallets support Solana staking operations. Please check with
+your favorite wallet's maintainers regarding status.
+
+#### Solana command line tools
+
+- Solana command line tools can perform all stake operations in conjunction with
+ a CLI-generated keypair file wallet, a paper wallet, or with a connected
+ Ledger Nano.
+ [Staking commands using the Solana Command Line Tools](cli/delegate-stake.md).
+
+#### Create a Stake Account
+
+Follow the wallet's instructions for creating a staking account. This account
+will be of a different type than one used to simply send and receive tokens.
+
+#### Select a Validator
+
+Follow the wallet's instructions for selecting a validator. You can get
+information about potentially performant validators from the links below. The
+Solana Foundation does not recommend any particular validator.
+
+The site solanabeach.io is built and maintained by one of our validators,
+Staking Facilities. It provides some high-level graphical information about
+the network as a whole, as well as a list of each validator and some recent
+performance statistics about each one.
+
+- https://solanabeach.io
+
+To view block production statistics, use the Solana command-line tools:
+
+- `solana validators`
+- `solana block-production`
+
+The Solana team does not make recommendations on how to interpret this
+information. Do your own due diligence.
+
+#### Delegate your Stake
+
+Follow the wallet's instructions for delegating your stake to your chosen
+validator.
+
+## Stake Account Details
+
+For more information about the operations and permissions associated with a
+stake account, please see [Stake Accounts](staking/stake-accounts.md).
diff --git a/docs/staking/stake-accounts.md b/docs/staking/stake-accounts.md
new file mode 100644
index 000000000..48b7ba854
--- /dev/null
+++ b/docs/staking/stake-accounts.md
@@ -0,0 +1,145 @@
+---
+title: Stake Account Structure
+---
+
+A stake account on Solana can be used to delegate tokens to validators on the
+network to potentially earn rewards for the owner of the stake account. Stake
+accounts are created and managed differently than a traditional wallet address,
+known as a _system account_. A system account is only able to send and receive
+SOL from other accounts on the network, whereas a stake account supports more
+complex operations needed to manage a delegation of tokens.
+
+Stake accounts on Solana also work differently than those of other
+Proof-of-Stake blockchain networks that you may be familiar with. This document
+describes the high-level structure and functions of a Solana stake account.
+
+#### Account Address
+
+Each stake account has a unique address which can be used to look up the account
+information in the command line or in any network explorer tools. However,
+unlike a wallet address in which the holder of the address's keypair controls
+the wallet, the keypair associated with a stake account address does not
+necessarily have any control over the account. In fact, a keypair or private key
+may not even exist for a stake account's address.
+
+The only time a stake account's address has a keypair file is when
+[creating a stake account using the command line tools](../cli/delegate-stake.md#create-a-stake-account).
+A new keypair file is created first only to ensure that the stake account's
+address is new and unique.
+
+#### Understanding Account Authorities
+
+Certain types of accounts may have one or more _signing authorities_ associated
+with a given account. An account authority is used to sign certain transactions
+for the account it controls. This is different from some other blockchain
+networks where the holder of the keypair associated with the account's address
+controls all of the account's activity.
+
+Each stake account has two signing authorities specified by their respective
+address, each of which is authorized to perform certain operations on the stake
+account.
+
+The _stake authority_ is used to sign transactions for the following operations:
+
+- Delegating stake
+- Deactivating the stake delegation
+- Splitting the stake account, creating a new stake account with a portion of
+ the funds in the first account
+- Merging two stake accounts into one
+- Setting a new stake authority
+
+The _withdraw authority_ signs transactions for the following:
+
+- Withdrawing un-delegated stake into a wallet address
+- Setting a new withdraw authority
+- Setting a new stake authority
+
+The stake authority and withdraw authority are set when the stake account is
+created, and they can be changed to authorize a new signing address at any time.
+The stake and withdraw authority can be the same address or two different
+addresses.
+
+The withdraw authority keypair holds more control over the account as it is
+needed to liquidate the tokens in the stake account, and can be used to reset
+the stake authority if the stake authority keypair becomes lost or compromised.
+
+Securing the withdraw authority against loss or theft is of utmost importance
+when managing a stake account.
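+
+As an illustration of how these authorities are exercised programmatically (a
+sketch using `@solana/web3.js`; the keypairs and stake account address below are
+placeholders), a client could build a transaction that rotates the stake
+authority. The withdraw authority could be rotated the same way with
+`StakeAuthorizationLayout.Withdrawer`:
+
+```ts
+import {
+  Keypair,
+  StakeProgram,
+  StakeAuthorizationLayout,
+} from "@solana/web3.js";
+
+// placeholders: in practice these would be your existing stake account's
+// address and the keypair of its current stake authority
+const stakeAccountPubkey = Keypair.generate().publicKey;
+const currentStakeAuthority = Keypair.generate();
+const newStakeAuthority = Keypair.generate();
+
+// builds (but does not send) a transaction that must be signed by the
+// current stake authority
+const authorizeTx = StakeProgram.authorize({
+  stakePubkey: stakeAccountPubkey,
+  authorizedPubkey: currentStakeAuthority.publicKey,
+  newAuthorizedPubkey: newStakeAuthority.publicKey,
+  stakeAuthorizationType: StakeAuthorizationLayout.Staker,
+});
+```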
+
+#### Multiple Delegations
+
+Each stake account may only be used to delegate to one validator at a time. All
+of the tokens in the account are either delegated or un-delegated, or in the
+process of becoming delegated or un-delegated. To delegate a fraction of your
+tokens to a validator, or to delegate to multiple validators, you must create
+multiple stake accounts.
+
+This can be accomplished by creating multiple stake accounts from a wallet
+address containing some tokens, or by creating a single large stake account and
+using the stake authority to split the account into multiple accounts with token
+balances of your choosing.
+
+The same stake and withdraw authorities can be assigned to multiple stake
+accounts.
+
+#### Merging stake accounts
+
+Two stake accounts that have the same authorities and lockup can be merged into
+a single resulting stake account. A merge is possible between two stakes in the
+following states with no additional conditions:
+
+- two deactivated stakes
+- an inactive stake into an activating stake during its activation epoch
+
+For the following cases, the voter pubkey and vote credits observed must match:
+
+- two activated stakes
+- two activating accounts that share an activation epoch, during the activation
+ epoch
+
+All other combinations of stake states will fail to merge, including all
+"transient" states, where a stake is activating or deactivating with a non-zero
+effective stake.
+
+#### Delegation Warmup and Cooldown
+
+When a stake account is delegated, or a delegation is deactivated, the operation
+does not take effect immediately.
+
+A delegation or deactivation takes several [epochs](../terminology.md#epoch) to
+complete, with a fraction of the delegation becoming active or inactive at each
+epoch boundary after the transaction containing the instructions has been
+submitted to the cluster.
+
+There is also a limit on how much total stake can become delegated or
+deactivated in a single epoch, to prevent large sudden changes in stake across
+the network as a whole. Since warmup and cooldown are dependent on the behavior
+of other network participants, their exact duration is difficult to predict.
+Details on the warmup and cooldown timing can be found
+[here](../cluster/stake-delegation-and-rewards.md#stake-warmup-cooldown-withdrawal).
+
+#### Lockups
+
+Stake accounts can have a lockup which prevents the tokens they hold from being
+withdrawn before a particular date or epoch has been reached. While locked up,
+the stake account can still be delegated, un-delegated, or split, and its stake
+authority can be changed as normal. Only withdrawal into another wallet or
+updating the withdraw authority is not allowed.
+
+A lockup can only be added when a stake account is first created, but it can be
+modified later, by the _lockup authority_ or _custodian_, the address of which
+is also set when the account is created.
+
+#### Destroying a Stake Account
+
+Like other types of accounts on the Solana network, a stake account that has a
+balance of 0 SOL is no longer tracked. If a stake account is not delegated and
+all of the tokens it contains are withdrawn to a wallet address, the account at
+that address is effectively destroyed, and will need to be manually re-created
+for the address to be used again.
+
+#### Viewing Stake Accounts
+
+Stake account details can be viewed on the
+[Solana Explorer](http://explorer.solana.com/accounts) by copying and pasting an
+account address into the search bar.
diff --git a/docs/staking/stake-programming.md b/docs/staking/stake-programming.md
new file mode 100644
index 000000000..afac9e315
--- /dev/null
+++ b/docs/staking/stake-programming.md
@@ -0,0 +1,28 @@
+---
+title: Stake Programming
+---
+
+To maximize stake distribution, decentralization, and censorship resistance on
+the Solana network, staking can be performed programmatically. The team and
+community have developed several on-chain and off-chain programs to make stakes
+easier to manage.
+
+#### Stake-o-matic aka Auto-delegation Bots
+
+This off-chain program manages a large population of validators staked by a
+central authority. The Solana Foundation uses an auto-delegation bot to
+regularly delegate its stake to "non-delinquent" validators that meet specified
+performance requirements.
+
+#### Stake Pools
+
+This on-chain program pools together SOL to be staked by a manager, allowing SOL
+holders to stake and earn rewards without managing stakes. Users deposit SOL in
+exchange for SPL tokens (staking derivatives) that represent their ownership in
+the stake pool. The pool manager stakes deposited SOL according to their
+strategy, perhaps using a variant of an auto-delegation bot as described above.
+As stakes earn rewards, the pool and pool tokens grow proportionally in value.
+Finally, pool token holders can send SPL tokens back to the stake pool to redeem
+SOL, thereby participating in decentralization with much less work required.
+More information can be found at the
+[SPL stake pool documentation](https://spl.solana.com/stake-pool).
diff --git a/docs/storage_rent_economics.md b/docs/storage_rent_economics.md
new file mode 100644
index 000000000..eac4b8314
--- /dev/null
+++ b/docs/storage_rent_economics.md
@@ -0,0 +1,39 @@
+---
+title: Storage Rent Economics
+---
+
+Each transaction that is submitted to the Solana ledger imposes costs.
+Transaction fees paid by the submitter, and collected by a validator, in theory,
+account for the acute, transactional, costs of validating and adding that data
+to the ledger. Unaccounted in this process is the mid-term storage of active
+ledger state, necessarily maintained by the rotating validator set. This type of
+storage imposes costs not only on validators but also on the broader network:
+as active state grows, so does data transmission and validation overhead. To account
+for these costs, we describe here our preliminary design and implementation of
+storage rent.
+
+Storage rent can be paid via one of two methods:
+
+Method 1: Set it and forget it
+
+With this approach, accounts with two years' worth of rent deposits secured are
+exempt from network rent charges. By maintaining this minimum-balance, the
+broader network benefits from reduced liquidity and the account holder can rest
+assured that their `Account::data` will be retained for continual access/usage.
+
+Method 2: Pay per byte
+
+If an account has less than two years' worth of deposited rent, the network
+charges rent on a per-epoch basis, in credit for the next epoch. This rent is
+deducted at a rate specified in genesis, in lamports per kilobyte-year.
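+
+As a practical aside, the rent-exempt minimum for a given account size can be
+queried over RPC. The following is a minimal `@solana/web3.js` sketch (the
+cluster URL and account size are arbitrary examples):
+
+```ts
+import { Connection, clusterApiUrl } from "@solana/web3.js";
+
+(async () => {
+  const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
+
+  // minimum balance (in lamports) for a 1000-byte account to be rent exempt
+  const minimumBalance = await connection.getMinimumBalanceForRentExemption(1000);
+  console.log(`rent-exempt minimum: ${minimumBalance} lamports`);
+})();
+```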
+
+For information on the technical implementation details of this design, see the
+[Rent](implemented-proposals/rent.md) section.
+
+**Note:** New accounts now **are required** to be initialized with enough
+lamports to be rent exempt. Additionally, transactions that leave an account's
+balance below the rent exempt minimum (and non-zero) will **fail**. This
+essentially renders all accounts rent exempt. Rent-paying accounts that were
+created before this requirement will continue paying rent until either (1) their
+balance falls to zero, or (2) a transaction increases the account's balance to
+be rent exempt.
diff --git a/docs/terminology.md b/docs/terminology.md
new file mode 100644
index 000000000..21044fce9
--- /dev/null
+++ b/docs/terminology.md
@@ -0,0 +1,526 @@
+---
+title: Terminology
+description:
+ "Learn the essential terminology used throughout the Solana blockchain and
+ development models."
+keywords:
+ - terms
+ - dictionary
+ - definitions
+ - define
+ - programming models
+---
+
+The following terms are used throughout the Solana documentation and development
+ecosystem.
+
+## account
+
+A record in the Solana ledger that either holds data or is an executable
+program.
+
+Like an account at a traditional bank, a Solana account may hold funds called
+[lamports](#lamport). Like a file in Linux, it is addressable by a key, often
+referred to as a [public key](#public-key-pubkey) or pubkey.
+
+The key may be one of:
+
+- an ed25519 public key
+- a program-derived account address (a 32-byte value forced off the ed25519 curve)
+- a hash of an ed25519 public key with a 32-character string
+
+## account owner
+
+The address of the program that owns the account. Only the owning program is
+capable of modifying the account.
+
+## app
+
+A front-end application that interacts with a Solana cluster.
+
+## bank state
+
+The result of interpreting all programs on the ledger at a given
+[tick height](#tick-height). It includes at least the set of all
+[accounts](#account) holding nonzero [native tokens](#native-token).
+
+## block
+
+A contiguous set of [entries](#entry) on the ledger covered by a
+[vote](#ledger-vote). A [leader](#leader) produces at most one block per
+[slot](#slot).
+
+## blockhash
+
+A unique value ([hash](#hash)) that identifies a record (block). Solana computes
+a blockhash from the last [entry id](#entry-id) of the block.
+
+## block height
+
+The number of [blocks](#block) beneath the current block. The first block after
+the [genesis block](#genesis-block) has height one.
+
+## bootstrap validator
+
+The [validator](#validator) that produces the genesis (first) [block](#block) of
+a block chain.
+
+## BPF loader
+
+The Solana program that owns and loads
+[BPF](developing/on-chain-programs/faq#berkeley-packet-filter-bpf) smart
+contract programs, allowing the program to interface with the runtime.
+
+## client
+
+A computer program that accesses the Solana server network [cluster](#cluster).
+
+## commitment
+
+A measure of the network confirmation for the [block](#block).
+
+## cluster
+
+A set of [validators](#validator) maintaining a single [ledger](#ledger).
+
+## compute budget
+
+The maximum number of [compute units](#compute-units) consumed per transaction.
+
+## compute units
+
+The smallest unit of measure for consumption of computational resources of the
+blockchain.
+
+## confirmation time
+
+The wallclock duration between a [leader](#leader) creating a
+[tick entry](#tick) and creating a [confirmed block](#confirmed-block).
+
+## confirmed block
+
+A [block](#block) that has received a [super majority](#supermajority) of
+[ledger votes](#ledger-vote).
+
+## control plane
+
+A gossip network connecting all [nodes](#node) of a [cluster](#cluster).
+
+## cooldown period
+
+Some number of [epochs](#epoch) after [stake](#stake) has been deactivated while
+it progressively becomes available for withdrawal. During this period, the stake
+is considered to be "deactivating". More info about:
+[warmup and cooldown](implemented-proposals/staking-rewards.md#stake-warmup-cooldown-withdrawal)
+
+## credit
+
+See [vote credit](#vote-credit).
+
+## cross-program invocation (CPI)
+
+A call from one smart contract program to another. For more information, see
+[calling between programs](developing/programming-model/calling-between-programs.md).
+
+## data plane
+
+A multicast network used to efficiently validate [entries](#entry) and gain
+consensus.
+
+## drone
+
+An off-chain service that acts as a custodian for a user's private key. It
+typically serves to validate and sign transactions.
+
+## entry
+
+An entry on the [ledger](#ledger): either a [tick](#tick) or a
+[transaction's entry](#transactions-entry).
+
+## entry id
+
+A preimage resistant [hash](#hash) over the final contents of an entry, which
+acts as the [entry's](#entry) globally unique identifier. The hash serves as
+evidence of:
+
+- The entry being generated after a duration of time
+- The specified [transactions](#transaction) are those included in the entry
+- The entry's position with respect to other entries in the [ledger](#ledger)
+
+See [proof of history](#proof-of-history-poh).
+
+## epoch
+
+The time, i.e. number of [slots](#slot), for which a
+[leader schedule](#leader-schedule) is valid.
+
+## fee account
+
+The fee account in the transaction is the account that pays for the cost of
+including the transaction in the ledger. This is the first account in the
+transaction. This account must be declared as Read-Write (writable) in the
+transaction since paying for the transaction reduces the account balance.
+
+## finality
+
+When nodes representing 2/3rd of the [stake](#stake) have a common
+[root](#root).
+
+## fork
+
+A [ledger](#ledger) derived from common entries but then diverged.
+
+## genesis block
+
+The first [block](#block) in the chain.
+
+## genesis config
+
+The configuration file that prepares the [ledger](#ledger) for the
+[genesis block](#genesis-block).
+
+## hash
+
+A digital fingerprint of a sequence of bytes.
+
+## inflation
+
+An increase in token supply over time used to fund rewards for validation and to
+fund continued development of Solana.
+
+## inner instruction
+
+See [cross-program invocation](#cross-program-invocation-cpi).
+
+## instruction
+
+The smallest contiguous unit of execution logic in a [program](#program). An
+instruction specifies which program it is calling, which accounts it wants to
+read or modify, and additional data that serves as auxiliary input to the
+program. A [client](#client) can include one or multiple instructions in a
+[transaction](#transaction). An instruction may contain one or more
+[cross-program invocations](#cross-program-invocation-cpi).
+
+## keypair
+
+A [public key](#public-key-pubkey) and corresponding [private key](#private-key)
+for accessing an account.
+
+## lamport
+
+A fractional [native token](#native-token) with the value of 0.000000001
+[sol](#sol).
+
+:::info
+
+Within the compute budget, a quantity of
+_[micro-lamports](https://github.com/solana-labs/solana/blob/ced8f6a512c61e0dd5308095ae8457add4a39e94/program-runtime/src/prioritization_fee.rs#L1-L2)_
+is used in the calculation of [prioritization fees](#prioritization-fee).
+
+:::
+
+## leader
+
+The role of a [validator](#validator) when it is appending [entries](#entry) to
+the [ledger](#ledger).
+
+## leader schedule
+
+A sequence of [validator](#validator) [public keys](#public-key-pubkey) mapped
+to [slots](#slot). The cluster uses the leader schedule to determine which
+validator is the [leader](#leader) at any moment in time.
+
+## ledger
+
+A list of [entries](#entry) containing [transactions](#transaction) signed by
+[clients](#client). Conceptually, this can be traced back to the
+[genesis block](#genesis-block), but an actual [validator](#validator)'s ledger
+may have only newer [blocks](#block) to reduce storage, as older ones are not
+needed for validation of future blocks by design.
+
+## ledger vote
+
+A [hash](#hash) of the [validator's state](#bank-state) at a given
+[tick height](#tick-height). It comprises a [validator's](#validator)
+affirmation that a [block](#block) it has received has been verified, as well as
+a promise not to vote for a conflicting [block](#block) \(i.e. [fork](#fork)\)
+for a specific amount of time, the [lockout](#lockout) period.
+
+## light client
+
+A type of [client](#client) that can verify it's pointing to a valid
+[cluster](#cluster). It performs more ledger verification than a
+[thin client](#thin-client) and less than a [validator](#validator).
+
+## loader
+
+A [program](#program) with the ability to interpret the binary encoding of other
+on-chain programs.
+
+## lockout
+
+The duration of time for which a [validator](#validator) is unable to
+[vote](#ledger-vote) on another [fork](#fork).
+
+## message
+
+The structured contents of a [transaction](#transaction). Generally containing a
+header, array of account addresses, recent [blockhash](#blockhash), and an array
+of [instructions](#instruction).
+
+Learn more about the
+[message formatting inside of transactions](./developing/programming-model/transactions.md#message-format)
+here.
+
+## native token
+
+The [token](#token) used to track work done by [nodes](#node) in a cluster.
+
+## node
+
+A computer participating in a [cluster](#cluster).
+
+## node count
+
+The number of [validators](#validator) participating in a [cluster](#cluster).
+
+## PoH
+
+See [Proof of History](#proof-of-history-poh).
+
+## point
+
+A weighted [credit](#credit) in a rewards regime. In the [validator](#validator)
+[rewards regime](cluster/stake-delegation-and-rewards.md), the number of points
+owed to a [stake](#stake) during redemption is the product of the
+[vote credits](#vote-credit) earned and the number of lamports staked.
+
+## private key
+
+The private key of a [keypair](#keypair).
+
+## program
+
+The executable code that interprets the [instructions](#instruction) sent inside
+of each [transaction](#transaction) on the Solana blockchain. These programs are
+often
+referred to as "[_smart contracts_](./developing//intro/programs.md)" on other
+blockchains.
+
+## program derived account (PDA)
+
+An account whose signing authority is a program and thus is not controlled by a
+private key like other accounts.
+
+## program id
+
+The public key of the [account](#account) containing a [program](#program).
+
+## proof of history (PoH)
+
+A stack of proofs, each of which proves that some data existed before the proof
+was created and that a precise duration of time passed before the previous
+proof. Like a [VDF](#verifiable-delay-function-vdf), a Proof of History can be
+verified in less time than it took to produce.
+
+## prioritization fee
+
+An additional fee a user can specify in the compute budget
+[instruction](#instruction) to prioritize their [transactions](#transaction).
+
+The prioritization fee is calculated by multiplying the requested maximum
+compute units by the compute-unit price (specified in increments of 0.000001
+lamports per compute unit) rounded up to the nearest lamport.
+
+Transactions should request the minimum amount of compute units required for
+execution to minimize fees.
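+
+For example (arbitrary values, shown only to make the arithmetic concrete):
+
+```ts
+// 200,000 requested compute units at a price of 300 micro-lamports per unit
+const computeUnitLimit = 200_000;
+const microLamportsPerComputeUnit = 300;
+
+// 200,000 * 300 / 1,000,000 = 60 lamports, rounded up to the nearest lamport
+const prioritizationFeeLamports = Math.ceil(
+  (computeUnitLimit * microLamportsPerComputeUnit) / 1_000_000,
+);
+```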
+
+## public key (pubkey)
+
+The public key of a [keypair](#keypair).
+
+## rent
+
+Fee paid by [Accounts](#account) and [Programs](#program) to store data on the
+blockchain. When accounts do not have enough balance to pay rent, they may be
+Garbage Collected.
+
+See also [rent exempt](#rent-exempt) below. Learn more about rent here:
+[What is rent?](../src/developing/intro/rent.md).
+
+## rent exempt
+
+Accounts that maintain more than 2 years' worth of rent payments in their
+account
+are considered "_rent exempt_" and will not incur the
+[collection of rent](../src/developing/intro/rent.md#collecting-rent).
+
+## root
+
+A [block](#block) or [slot](#slot) that has reached maximum [lockout](#lockout)
+on a [validator](#validator). The root is the highest block that is an ancestor
+of all active forks on a validator. All ancestor blocks of a root are also
+transitively a root. Blocks that are not an ancestor and not a descendant of the
+root are excluded from consideration for consensus and can be discarded.
+
+## runtime
+
+The component of a [validator](#validator) responsible for [program](#program)
+execution.
+
+## Sealevel
+
+Solana's parallel smart contracts run-time.
+
+## shred
+
+A fraction of a [block](#block); the smallest unit sent between
+[validators](#validator).
+
+## signature
+
+A 64-byte ed25519 signature of R (32 bytes) and S (32 bytes), with the
+requirement that R is a packed Edwards point not of small order and S is a
+scalar in the range 0 <= S < L. This requirement ensures no signature
+malleability. Each transaction must have at least one signature for the
+[fee account](terminology#fee-account). Thus, the first signature in a
+transaction can be treated as the [transaction id](#transaction-id).
+
+## skip rate
+
+The percentage of [skipped slots](#skipped-slot) out of the total leader slots
+in the current epoch. This metric can be misleading as it has high variance
+after the epoch boundary when the sample size is small, as well as for
+validators with a low number of leader slots; however, it can also be useful in
+identifying node misconfigurations at times.
+
+## skipped slot
+
+A past [slot](#slot) that did not produce a [block](#block), because the leader
+was offline or the [fork](#fork) containing the slot was abandoned for a better
+alternative by cluster consensus. A skipped slot will not appear as an ancestor
+for blocks at subsequent slots, nor increment the
+[block height](terminology#block-height), nor expire the oldest
+`recent_blockhash`.
+
+Whether a slot has been skipped can only be determined when it becomes older
+than the latest [rooted](#root) (thus not-skipped) slot.
+
+## slot
+
+The period of time for which each [leader](#leader) ingests transactions and
+produces a [block](#block).
+
+Collectively, slots create a logical clock. Slots are ordered sequentially and
+non-overlapping, comprising roughly equal real-world time as per
+[PoH](#proof-of-history-poh).
+
+## smart contract
+
+A program on a blockchain that can read and modify accounts over which it has
+control.
+
+## sol
+
+The [native token](#native-token) of a Solana [cluster](#cluster).
+
+## Solana Program Library (SPL)
+
+A [library of programs](https://spl.solana.com/) on Solana such as spl-token
+that facilitates tasks such as creating and using tokens.
+
+## stake
+
+Tokens forfeit to the [cluster](#cluster) if malicious [validator](#validator)
+behavior can be proven.
+
+## supermajority
+
+2/3 of a [cluster](#cluster).
+
+## sysvar
+
+A system [account](#account).
+[Sysvars](developing/runtime-facilities/sysvars.md) provide cluster state
+information such as current tick height, rewards [points](#point) values, etc.
+Programs can access Sysvars via a Sysvar account (pubkey) or by querying via a
+syscall.
+
+## thin client
+
+A type of [client](#client) that trusts it is communicating with a valid
+[cluster](#cluster).
+
+## tick
+
+A ledger [entry](#entry) that estimates wallclock duration.
+
+## tick height
+
+The Nth [tick](#tick) in the [ledger](#ledger).
+
+## token
+
+A digitally transferable asset.
+
+## tps
+
+[Transactions](#transaction) per second.
+
+## tpu
+
+[Transaction processing unit](validator/tpu.md).
+
+## transaction
+
+One or more [instructions](#instruction) signed by a [client](#client) using one
+or more [keypairs](#keypair) and executed atomically with only two possible
+outcomes: success or failure.
+
+## transaction id
+
+The first [signature](#signature) in a [transaction](#transaction), which can be
+used to uniquely identify the transaction across the complete [ledger](#ledger).
+
+## transaction confirmations
+
+The number of [confirmed blocks](#confirmed-block) since the transaction was
+accepted onto the [ledger](#ledger). A transaction is finalized when its block
+becomes a [root](#root).
+
+## transactions entry
+
+A set of [transactions](#transaction) that may be executed in parallel.
+
+## tvu
+
+[Transaction validation unit](validator/tvu.md).
+
+## validator
+
+A full participant in a Solana network [cluster](#cluster) that produces new
+[blocks](#block). A validator validates the transactions added to the
+[ledger](#ledger).
+
+## VDF
+
+See [verifiable delay function](#verifiable-delay-function-vdf).
+
+## verifiable delay function (VDF)
+
+A function that takes a fixed amount of time to execute and produces a proof
+that it ran, which can then be verified in less time than it took to produce.
+
+## vote
+
+See [ledger vote](#ledger-vote).
+
+## vote credit
+
+A reward tally for [validators](#validator). A vote credit is awarded to a
+validator in its vote account when the validator reaches a [root](#root).
+
+## wallet
+
+A collection of [keypairs](#keypair) that allows users to manage their funds.
+
+## warmup period
+
+Some number of [epochs](#epoch) after [stake](#stake) has been delegated while
+it progressively becomes effective. During this period, the stake is considered
+to be "activating". More info about:
+[warmup and cooldown](cluster/stake-delegation-and-rewards.md#stake-warmup-cooldown-withdrawal)
diff --git a/docs/transaction_fees.md b/docs/transaction_fees.md
new file mode 100644
index 000000000..e3a65fb93
--- /dev/null
+++ b/docs/transaction_fees.md
@@ -0,0 +1,234 @@
+---
+title: Transaction Fees
+description:
+ "Transaction fees are the small fees paid to process instructions on the
+ network. These fees are based on computation and an optional prioritization
+ fee."
+keywords:
+ - instruction fee
+ - processing fee
+ - storage fee
+ - low fee blockchain
+ - gas
+ - gwei
+ - cheap network
+ - affordable blockchain
+---
+
+The small fees paid to process [instructions](./terminology.md#instruction) on
+the Solana blockchain are known as "_transaction fees_".
+
+As each transaction (which contains one or more instructions) is sent through
+the network, it gets processed by the current leader validator. Once confirmed
+as a global state transition, the _transaction fee_ is paid to the network to
+help support the [economic design](#basic-economic-design) of the Solana
+blockchain.
+
+> **NOTE:** Transaction fees are different from
+> [account rent](./terminology.md#rent)! While transaction fees are paid to
+> process instructions on the Solana network, rent is paid to store data on the
+> blockchain.
+
+> You can learn more about rent here:
+> [What is rent?](./developing/intro/rent.md)
+
+## Why pay transaction fees?
+
+Transaction fees offer many benefits in the Solana
+[economic design](#basic-economic-design) described below. Mainly:
+
+- they provide compensation to the validator network for the CPU/GPU resources
+ necessary to process transactions,
+- reduce network spam by introducing real cost to transactions,
+- and provide long-term economic stability to the network through a
+  protocol-captured minimum fee amount per transaction.
+
+> **NOTE:** Network consensus votes are sent as normal system transfers, which
+> means that validators pay transaction fees to participate in consensus.
+
+## Basic economic design
+
+Many blockchain networks (e.g. Bitcoin and Ethereum) rely on inflationary
+_protocol-based rewards_ to secure the network in the short term. Over the
+long term, these networks will increasingly rely on _transaction fees_ to
+sustain security.
+
+The same is true on Solana. Specifically:
+
+- A fixed proportion (initially 50%) of each transaction fee is _burned_
+  (destroyed), with the remainder going to the current
+  [leader](./terminology.md#leader) processing the transaction.
+- A scheduled global inflation rate provides a source for
+  [rewards](./implemented-proposals/staking-rewards.md) distributed to
+  [Solana Validators](./running-validator.md).
+
+### Why burn some fees?
+
+As mentioned above, a fixed proportion of each transaction fee is _burned_
+(destroyed). This is intended to cement the economic value of SOL and thus
+sustain the network's security. Unlike a scheme where transaction fees are
+completely burned, leaders are still incentivized to include as many
+transactions as possible in their slots.
+
+Because burnt fees are considered in [fork](./terminology.md#fork) selection,
+they can also help deter malicious validators from censoring transactions.
+
+#### Example of an attack
+
+In the case of a [Proof of History (PoH)](./terminology.md#proof-of-history-poh)
+fork with a malicious, censoring leader:
+
+- due to the fees lost from censoring, we would expect the total fees burned to
+ be **_less than_** a comparable honest fork
+- if the censoring leader is to compensate for these lost protocol fees, they
+ would have to replace the burnt fees on their fork themselves
+- thus potentially reducing the incentive to censor in the first place
+
+## Calculating transaction fees
+
+Transaction fees are calculated based on two main parts:
+
+- a statically set base fee per signature, and
+- the computational resources used during the transaction, measured in
+ "[_compute units_](./terminology.md#compute-units)"
+
+Since each transaction may require a different amount of computational
+resources, each is allotted a maximum number of _compute units_ per transaction
+known as the "[_compute budget_](./terminology.md#compute-budget)".
+
+The execution of each instruction within a transaction consumes a different
+number of _compute units_. After the maximum number of _compute units_ has been
+consumed (also known as compute budget exhaustion), the runtime will halt the
+transaction and return an error, resulting in a failed transaction.
+
+> **Learn more:** see the
+> [Compute Budget](./developing/programming-model/runtime#compute-budget)
+> section of the Runtime docs for details on compute units, and
+> [requesting a fee estimate](../api/http#getfeeformessage) from the RPC.
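+
+As a rough, non-authoritative sketch of how a client might estimate this fee
+ahead of time (assuming the `@solana/web3.js` library and a Devnet RPC
+endpoint; the keypairs and transfer amount below are purely illustrative), a
+transaction message can be compiled locally and passed to the
+`getFeeForMessage` RPC method:
+
+```js
+import {
+  Connection,
+  Keypair,
+  SystemProgram,
+  Transaction,
+} from "@solana/web3.js";
+
+const connection = new Connection("https://api.devnet.solana.com", "confirmed");
+
+// Illustrative keypairs only; in practice the fee payer is a funded wallet
+const payer = Keypair.generate();
+const recipient = Keypair.generate();
+
+// Build a transaction locally without sending it
+const transaction = new Transaction().add(
+  SystemProgram.transfer({
+    fromPubkey: payer.publicKey,
+    toPubkey: recipient.publicKey,
+    lamports: 1_000,
+  }),
+);
+transaction.feePayer = payer.publicKey;
+transaction.recentBlockhash = (await connection.getLatestBlockhash()).blockhash;
+
+// Ask the cluster what fee this message would currently cost, in lamports
+const { value: fee } = await connection.getFeeForMessage(
+  transaction.compileMessage(),
+);
+console.log("estimated fee (lamports):", fee);
+```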
+
+## Prioritization fee
+
+A Solana transaction can include an **optional** fee, known as a
+"_[prioritization fee](./terminology.md#prioritization-fee)_", to prioritize
+itself against other transactions. Paying this additional fee helps boost how a
+transaction is prioritized against others, resulting in faster execution times.
+
+### How the prioritization fee is calculated
+
+A transaction's [prioritization fee](./terminology.md#prioritization-fee) is
+calculated by multiplying the maximum number of **_compute units_** by the
+**_compute unit price_** (measured in _micro-lamports_).
+
+Each transaction can set the maximum number of compute units it is allowed to
+consume and the compute unit price by including a `SetComputeUnitLimit` and
+`SetComputeUnitPrice` compute budget instruction respectively.
+
+:::info
+[Compute Budget instructions](https://github.com/solana-labs/solana/blob/master/sdk/src/compute_budget.rs)
+do **not** require any accounts.
+:::
+
+If no `SetComputeUnitLimit` instruction is provided, the limit will be
+calculated as the product of the number of instructions in the transaction and
+the default per-instruction units, which is currently
+[200k](https://github.com/solana-labs/solana/blob/4293f11cf13fc1e83f1baa2ca3bb2f8ea8f9a000/program-runtime/src/compute_budget.rs#L13).
+
+If no `SetComputeUnitPrice` instruction is provided, the transaction will
+default to no additional elevated fee and the lowest priority.
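+
+As a purely illustrative calculation (the numbers are arbitrary): a transaction
+that sets a compute unit limit of 300,000 units and a compute unit price of
+1 micro-lamport would pay a maximum prioritization fee of 300,000 micro-lamports
+(0.3 lamports), in addition to the base fee per signature.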
+
+### How to set the prioritization fee
+
+A transaction's prioritization fee is set by including a `SetComputeUnitPrice`
+instruction, and optionally a `SetComputeUnitLimit` instruction. The runtime
+will use these values to calculate the prioritization fee, which will be used to
+prioritize the given transaction within the block.
+
+You can craft each of these instructions via the Rust `solana-sdk` crate or the
+`@solana/web3.js` library, as shown below. Each of these instructions can then
+be included in the transaction and sent to the cluster like normal. See also
+the [best practices](#prioritization-fee-best-practices) below.
+
+:::caution
+Transactions can only contain **one of each type** of compute budget
+instruction. Duplicate instruction types will result in a
+[`TransactionError::DuplicateInstruction`](https://github.com/solana-labs/solana/blob/master/sdk/src/transaction/error.rs#L144-145)
+error, and ultimately transaction failure.
+:::
+
+#### Rust
+
+The Rust `solana-sdk` crate includes functions within
+[`ComputeBudgetInstruction`](https://docs.rs/solana-sdk/latest/solana_sdk/compute_budget/enum.ComputeBudgetInstruction.html)
+to craft instructions for setting the _compute unit limit_ and _compute unit
+price_:
+
+```rust
+let instruction = ComputeBudgetInstruction::set_compute_unit_limit(300_000);
+```
+
+```rust
+let instruction = ComputeBudgetInstruction::set_compute_unit_price(1);
+```
+
+#### JavaScript
+
+The `@solana/web3.js` library includes functions within the
+[`ComputeBudgetProgram`](https://solana-labs.github.io/solana-web3.js/classes/ComputeBudgetProgram.html)
+class to craft instructions for setting the _compute unit limit_ and _compute
+unit price_:
+
+```js
+const instruction = ComputeBudgetProgram.setComputeUnitLimit({
+ units: 300_000,
+});
+```
+
+```js
+const instruction = ComputeBudgetProgram.setComputeUnitPrice({
+ microLamports: 1,
+});
+```
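+
+For instance, a minimal end-to-end sketch (assuming a funded `payer` keypair on
+Devnet; the recipient and amounts are illustrative, not prescriptive) might
+attach both compute budget instructions to an ordinary transfer and send it
+like any other transaction:
+
+```js
+import {
+  ComputeBudgetProgram,
+  Connection,
+  Keypair,
+  LAMPORTS_PER_SOL,
+  SystemProgram,
+  Transaction,
+  sendAndConfirmTransaction,
+} from "@solana/web3.js";
+
+const connection = new Connection("https://api.devnet.solana.com", "confirmed");
+
+// Assumed to be a funded keypair; generated here only for illustration
+const payer = Keypair.generate();
+const recipient = Keypair.generate();
+
+const transaction = new Transaction()
+  // set the maximum compute units this transaction may consume
+  .add(ComputeBudgetProgram.setComputeUnitLimit({ units: 300_000 }))
+  // set the price per compute unit, in micro-lamports (the prioritization fee)
+  .add(ComputeBudgetProgram.setComputeUnitPrice({ microLamports: 1 }))
+  // the instruction the transaction actually exists to execute
+  .add(
+    SystemProgram.transfer({
+      fromPubkey: payer.publicKey,
+      toPubkey: recipient.publicKey,
+      lamports: 0.001 * LAMPORTS_PER_SOL,
+    }),
+  );
+
+// sign and send like any other transaction
+const signature = await sendAndConfirmTransaction(connection, transaction, [
+  payer,
+]);
+console.log("transaction signature:", signature);
+```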
+
+### Prioritization fee best practices
+
+#### Request the minimum compute units
+
+Transactions should request the minimum amount of compute units required for
+execution to minimize fees. Also note that fees are not adjusted when the number
+of requested compute units exceeds the number of compute units actually consumed
+by an executed transaction.
+
+#### Get recent prioritization fees
+
+Prior to sending a transaction to the cluster, you can use the
+[`getRecentPrioritizationFees`](/api/http#getrecentprioritizationfees) RPC
+method to get a list of the prioritization fees paid in recent blocks processed
+by the node.
+
+You could then use this data to estimate an appropriate prioritization fee for
+your transaction to both (a) better ensure it gets processed by the cluster and
+(b) minimize the fees paid.
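+
+A sketch of that flow (assuming `@solana/web3.js` exposes the RPC method as
+`connection.getRecentPrioritizationFees`; the account address and the
+"take the maximum" heuristic are illustrative, not a recommendation):
+
+```js
+import { Connection, PublicKey } from "@solana/web3.js";
+
+const connection = new Connection("https://api.devnet.solana.com", "confirmed");
+
+// Placeholder: the writable account(s) your transaction will lock
+const writableAccount = new PublicKey("11111111111111111111111111111111");
+
+// Each entry reports the minimum prioritization fee paid in a recent slot
+const recentFees = await connection.getRecentPrioritizationFees({
+  lockedWritableAccounts: [writableAccount],
+});
+
+// One simple heuristic: use the highest fee observed across those recent slots
+const suggestedMicroLamports = Math.max(
+  0,
+  ...recentFees.map((fee) => fee.prioritizationFee),
+);
+console.log(
+  "suggested compute unit price (micro-lamports):",
+  suggestedMicroLamports,
+);
+```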
+
+## Fee Collection
+
+Transactions are required to have at least one account which has signed the
+transaction and is writable. Writable signer accounts are serialized first in
+the list of transaction accounts and the first of these accounts is always used
+as the "fee payer".
+
+Before any transaction instructions are processed, the transaction fee will be
+deducted from the fee payer's account balance. If the fee payer's balance is
+not sufficient to cover transaction fees, the transaction will be dropped by
+the cluster. If the balance is sufficient, the fees will be deducted whether
+the transaction is processed successfully or not. In fact, if any of the
+transaction instructions return an error or violate runtime restrictions, all
+account changes _except_ the transaction fee deduction will be rolled back.
+
+## Fee Distribution
+
+Transaction fees are partially burned and the remaining fees are collected by
+the validator that produced the block that the corresponding transactions were
+included in. The transaction fee burn rate was initialized as 50% when inflation
+rewards were enabled at the beginning of 2021 and has not changed so far. These
+fees incentivize a validator to process as many transactions as possible during
+its slots in the leader schedule. Collected fees are deposited in the
+validator's account (listed in the leader schedule for the current slot) after
+processing all of the transactions included in a block.
diff --git a/docs/wallet-guide.md b/docs/wallet-guide.md
new file mode 100644
index 000000000..40eea6861
--- /dev/null
+++ b/docs/wallet-guide.md
@@ -0,0 +1,56 @@
+---
+title: Solana Wallet Guide
+---
+
+This document describes the different wallet options that are available to users
+of Solana who want to be able to send, receive and interact with SOL tokens on
+the Solana blockchain.
+
+## What is a Wallet?
+
+A crypto wallet is a device or application that stores a collection of keys and
+can be used to send, receive, and track ownership of cryptocurrencies. Wallets
+can take many forms. A wallet might be a directory or file in your computer's
+file system, a piece of paper, or a specialized device called a _hardware
+wallet_. There are also various smartphone apps and computer programs that
+provide a user-friendly way to create and manage wallets.
+
+A _keypair_ is a securely generated _private key_ and its cryptographically
+derived _public key_. A wallet contains a collection of one or more keypairs
+and provides some means to interact with them.
+
+The _public key_ (commonly shortened to _pubkey_) is known as the wallet's
+_receiving address_ or simply its _address_. The wallet address **may be shared
+and displayed freely**. When another party is going to send some amount of
+cryptocurrency to a wallet, they need to know the wallet's receiving address.
+Depending on a blockchain's implementation, the address can also be used to
+view certain information about a wallet, such as its balance, but it confers no
+ability to change anything about the wallet or withdraw any tokens.
+
+The _private key_ is required to digitally sign any transactions to send
+cryptocurrencies to another address or to make any changes to the wallet. The
+private key **must never be shared**. If someone gains access to the private key
+to a wallet, they can withdraw all the tokens it contains. If the private key
+for a wallet is lost, any tokens that have been sent to that wallet's address
+are **permanently lost**.
+
+Different wallet solutions offer different approaches to keypair security,
+interacting with the keypair, and signing transactions to use/spend the tokens.
+Some are easier to use than others. Some store and back up private keys more
+securely. Solana supports multiple types of wallets so you can choose the right
+balance of security and convenience.
+
+**If you want to be able to receive SOL tokens on the Solana blockchain, you
+will first need to create a wallet.**
+
+## Supported Wallets
+
+Several browser and mobile app based wallets support Solana. Find the right one
+for you on the
+[Solana Ecosystem](https://solana.com/ecosystem/explore?categories=wallet) page.
+
+For advanced users or developers, the
+[command-line wallets](wallet-guide/cli.md) may be more appropriate, as new
+features on the Solana blockchain will always be supported on the command line
+first before being integrated into third-party solutions.