diff --git a/docs/architecture.mdx b/docs/architecture.mdx index 1aae651d7..3a150d7cf 100644 --- a/docs/architecture.mdx +++ b/docs/architecture.mdx @@ -151,7 +151,7 @@ Optionally, this node can open its RPC interface to serve different kinds of req For more information about the architecture of Tezos, see: -- [Accounts](./architecture/accounts) +- [Accounts and addresses](./architecture/accounts) - [Tokens](./architecture/tokens) - [Smart Optimistic Rollups](./architecture/smart-rollups) - [Governance](./architecture/governance) diff --git a/docs/architecture/accounts.md b/docs/architecture/accounts.md index 1dad01d8c..f196d96db 100644 --- a/docs/architecture/accounts.md +++ b/docs/architecture/accounts.md @@ -1,21 +1,27 @@ --- -title: Accounts +title: Accounts and addresses authors: "Tim McMackin" last_update: - date: 29 December 2023 + date: 10 January 2024 --- +## Accounts + Tezos uses these types of accounts: -- Classic accounts (also known as _implicit accounts_) store tez (ꜩ) and tickets. -These accounts have addresses that start with "tz1", "tz2", "tz3" or "tz4." -Any wallet application or the Octez command-line tool can create implicit accounts. +- User accounts (sometimes known as _implicit accounts_) store tez (ꜩ) and tickets. +Any wallet application or the Octez command-line tool can create user accounts. -- Smart contract accounts (also known as _originated accounts_) store immutable code, mutable storage, tez (ꜩ), and tickets. -Smart contracts have addresses that start with "KT1." +- Smart contract accounts (sometimes known as _originated accounts_) store immutable code, mutable storage, tez (ꜩ), and tickets. See [Smart contracts](../smart-contracts). -- Smart Rollup accounts are another type of originated account. -Their addresses start with `SR1`. +## Addresses + +- User accounts have addresses that start with "tz1", "tz2", "tz3" or "tz4." + +- Smart contracts have addresses that start with "KT1." 
- Smart Rollups have addresses, but they are not accounts because they cannot store tez.
Their addresses start with "sr1".
They have a tree of commitments attached to them.
See [Smart Optimistic Rollups](./smart-rollups).

diff --git a/docs/architecture/smart-rollups.md b/docs/architecture/smart-rollups.md
deleted file mode 100644
index 553da21fb..000000000
--- a/docs/architecture/smart-rollups.md
+++ /dev/null
@@ -1,1306 +0,0 @@
---
title: Smart Optimistic Rollups
authors: 'Nomadic Labs, TriliTech'
last_update:
  date: 30 June 2023
---

Rollups play a crucial part in providing next-generation scaling on Tezos. This page gives a technical introduction to Smart Rollups, their optimistic nature, and an introduction to developing your own WASM kernel.

## Examples

For examples of Smart Rollups, see this repository: https://gitlab.com/tezos/kernel-gallery.

## What is a rollup?

A **rollup** is a processing unit that receives, retrieves, and interprets input messages to update its local state and to produce output messages targeting the Tezos blockchain. In this documentation, we generally refer to the rollup under consideration as layer 2, running on top of the Tezos blockchain, which we refer to as layer 1.

Rollups are a permissionless scaling solution for the Tezos blockchain. Indeed, anyone can originate and operate one or more rollups, making it possible to increase the throughput of the Tezos blockchain (almost) arbitrarily.

The integration of these rollups in the Tezos protocol is *optimistic*: this means that when an operator publishes a claim about the state of the rollup, this claim is *a priori* trusted. However, a refutation mechanism allows anyone to economically punish a participant who has published an invalid claim. Therefore, thanks to the refutation mechanism, a single honest participant is enough to guarantee that the input messages are correctly interpreted.

In the Tezos protocol, the subsystem of Smart Rollups is generic with respect to the syntax and the semantics of the input messages. More precisely, the originator of a smart rollup provides a program (in one of the languages supported by Tezos) responsible for interpreting input messages. During the refutation mechanism, the execution of this program is handled by a **proof-generating virtual machine (PVM)** for this language, provided by the Tezos protocol, which makes it possible to prove that the result of applying an input message to the rollup context is correct. The rest of the time, any VM implementation of the chosen language can be used to run the smart rollup program, provided that it is compliant with the PVM.

The smart rollup infrastructure currently supports the WebAssembly language. A WASM rollup runs a WASM program named a **kernel**. The role of the kernel is to process input messages, to update a state, and to output messages targeting layer 1 following a user-defined logic.

Anyone can develop a kernel or reuse existing kernels. A typical use case of WASM rollups is to deploy a kernel that implements the Ethereum Virtual Machine (EVM) and to get as a result an EVM-compatible layer 2 running on top of the Tezos blockchain. WASM rollups are not limited to this use case though: they are fully programmable, hence their name, smart optimistic rollups, as they are very close to smart contracts in terms of expressiveness.

The purpose of this documentation is to give:

- an overview of the terminology and basic principles of Smart Rollups
- a complete tour of Smart Rollup-related workflows
- reference documentation for the development of a WASM kernel.

# Overview

Just like smart contracts, Smart Rollups are decentralized software components. However, unlike smart contracts, which are processed by the network validators automatically, a smart rollup requires a dedicated *rollup node* to function.

Any user can originate, operate, and interact with a rollup. For the sake of clarity, we distinguish three kinds of users in this documentation: operators, kernel developers, and end-users. An operator deploys the rollup node to make the rollup progress. A kernel developer writes a kernel to be executed within a rollup. An end-user interacts with the rollup through layer 1 operations or layer 2 input messages.

## Address

When a smart rollup is originated on layer 1, a unique address is generated to identify it. A smart rollup address starts with the prefix `sr1`.

## Inputs

There are two channels of communication to interact with Smart Rollups:

1. a global **rollups inbox** allows layer 1 to transmit information to all the rollups. This unique inbox contains two kinds of messages: *external* messages are pushed through a layer 1 manager operation, while *internal* messages are pushed by layer 1 smart contracts or the protocol itself.
2. a **reveal data channel** allows the rollup to retrieve data coming from data sources external to layer 1.

### External messages

Anyone can push a message to the rollups inbox. This message is a mere sequence of bytes following no particular underlying format. The interpretation of this sequence of bytes is the responsibility of each kernel.

There are two ways for end-users to push an external message to the rollups inbox: first, they can inject the dedicated layer 1 operation using the Octez client; second, they can use the batcher of a smart rollup node (see [Sending an External Inbox Message](#sending-an-external-inbox-message)).

### Internal messages

Contrary to external messages, which are submitted by the end users, internal messages are constructed by layer 1.

At the beginning of every Tezos block, layer 1 pushes two internal messages:

- `"Start of level"`: no associated payload.
- `"Info per level"`: provides to the kernel the timestamp and block hash of the predecessor of the current Tezos block.

A rollup is identified by an address and has an associated Michelson type (defined at origination time). Any layer 1 smart contract can perform a transfer to this address with a payload of this type. This transfer is realized as an internal message pushed to the rollups inbox.

Finally, after the application of the operations of the Tezos block, layer 1 pushes one final internal message, `"End of level"`. Similarly to `"Start of level"`, this internal message does not come with any payload.

### Reveal data channel

The reveal data channel is a communication interface that allows the rollup to request data from sources that are external to the inbox and can be unknown to layer 1. It is the rollup node's responsibility to answer these requests.

A rollup can make the following requests through the reveal data channel:

1. **preimage requests**: The rollup can request arbitrary data of at most 4 kB, provided that it knows its (blake2b) hash. The request is fulfilled by the rollup node (see [Populating the Reveal Channel](#populating-the-reveal-channel)).
2. **metadata requests**: The rollup can request information from the protocol, namely the address and the origination level of the rollup itself. The rollup node retrieves this information through RPCs to answer the rollup.

Information passing through the reveal data channel does not have to be processed by layer 1: for this reason, the volume of information is not limited by the bandwidth of layer 1. Thus, the reveal data channel can be used to upload large volumes of data to the rollup.
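As a sketch of how a larger payload could be prepared for the preimage mechanism, the following toy shell fragment (not part of the Octez tooling; it assumes GNU coreutils `split` and `b2sum`, and all file names are made up) cuts a payload into pages of at most 4096 bytes and names each page after its blake2b hash, which is how a rollup node looks pages up:

```sh
# Toy preparation of reveal-channel pages (hypothetical file names).
head -c 10000 /dev/urandom > large_payload.bin   # sample payload

mkdir -p pages
# Cut the payload into chunks of at most 4096 bytes (the page size limit).
split -b 4096 -d large_payload.bin pages/chunk_
# Name each chunk after its blake2b-256 hash: a preimage request for
# hash H is served by reading the file named H.
for chunk in pages/chunk_*; do
  hash=$(b2sum -l 256 "$chunk" | cut -d ' ' -f 1)
  mv "$chunk" "pages/$hash"
done
ls pages
```

In a real kernel, the pages would also need to reference each other (for example, index pages listing the hashes of leaf pages), as described in [Populating the Reveal Channel](#populating-the-reveal-channel).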

## Origination

When originated, a rollup is characterized by the device it runs, namely the proof-generating virtual machine (PVM); by the source code of the rollup running under this device; and by the Michelson type of the entrypoint used by layer 1 smart contracts to communicate with the rollup through internal messages.

## Processing

Each time a Tezos block is finalized, a rollup reacts to three kinds of events: the beginning of the block, the input messages contained in that block, and the end of the block. A **rollup node** implements this reactive process: it downloads the Tezos block and interprets it according to the semantics of the PVM. This interpretation can require updating a state, downloading data from other sources, or performing some cryptographic verifications. The state of the rollup contains an **outbox**, a sequence of latent calls to layer 1 contracts.

The behavior of the rollup node is deterministic and fully specified by a reference implementation of the PVM embedded in the protocol. Notice that the PVM implementation is meant for verification, not performance: for this reason, a rollup node does not normally run a PVM to process inputs but a **fast execution engine** (e.g., Wasmer for the WASM PVM in the case of the rollup node distributed with Octez). This fast execution engine implements the exact same semantics as the PVM.

## Commitments

Starting from the rollup origination level, levels are partitioned into **commitment periods** of 60 consecutive blocks.

A **commitment** claims that the interpretation of all inbox messages published during a given commitment period and applied on the state of a parent commitment leads to a given new state by performing a given number of execution steps of the PVM. Execution steps are called **ticks** in the Smart Rollups terminology. A commitment must be published on layer 1 after each commitment period to make the rollup progress.
A commitment is always based on a parent commitment (except for the genesis commitment, which is automatically published at origination time).

Since the PVM is deterministic and the inputs are completely determined by the layer 1 rollups inbox and the reveal channel, there is only one honest commitment. In other words, if two distinct commitments are published for the same commitment period, one of them must be wrong.

Notice that, to publish a commitment, an operator must provide a deposit of 10,000 tez. For this reason, the operator is said to be a **staker**. Several users can stake on the same commitment. When a staker *S* publishes a new commitment based on a commitment *S* is staking on, *S* does not have to provide a new deposit: the deposit also applies to this new commitment.

There is no need to synchronize between operators: if two honest operators publish the same commitment for a given commitment period, the commitment will be published with two stakes on it.

A commitment is optimistically trusted, but it can be refuted until it is said to be **cemented** (i.e., final, unchangeable). Indeed, right after a commitment is published, a two-week refutation period starts. During the refutation period, anyone noticing that a commitment for a given commitment period is invalid can post a concurrent commitment for the same commitment period to force the removal of the invalid commitment. If no one posts such a concurrent commitment during the refutation period, the commitment can be cemented with a dedicated operation injected in layer 1, and the outbox messages can be executed through an explicit layer 1 operation, typically to transfer assets from the rollup to layer 1 (see [Triggering Execution of an Outbox Message](#triggering-execution-of-an-outbox-message)).
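The arithmetic of commitment periods can be sketched as follows (the level numbers are made up for illustration; only the 60-block period length comes from the protocol):

```sh
# Toy commitment-period arithmetic (hypothetical levels).
ORIGINATION_LEVEL=1000
COMMITMENT_PERIOD=60
CURRENT_LEVEL=1234

# Index of the commitment period the current level belongs to.
PERIOD_INDEX=$(( (CURRENT_LEVEL - ORIGINATION_LEVEL) / COMMITMENT_PERIOD ))
# First level of the next period, i.e., the earliest level at which a
# commitment covering the current period can be published.
NEXT_PERIOD_START=$(( ORIGINATION_LEVEL + (PERIOD_INDEX + 1) * COMMITMENT_PERIOD ))

echo "period index: $PERIOD_INDEX"              # period 3 covers levels 1180-1239
echo "next period starts at: $NEXT_PERIOD_START"
```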

## Refutation

Because of concurrent commitments, a rollup is generally related to a **commitment tree** where branches correspond to different claims about the rollup state.

By construction, only one view of the rollup state is valid (as the PVM is deterministic). When two concurrent branches exist in the commitment tree, the cementation process is stopped at the first fork in the tree. To unfreeze the cementation process, a **refutation game** must be started between *two concurrent stakers* of these branches. Refutation games are automatically played by rollup nodes to defend their stakes: honest participants are guaranteed to win these games. Therefore, an honest participant should not have to worry about refutation games. Finally, a running refutation game does not prevent new commitments from being published on top of the disputed commitments.

A refutation game is decomposed into two main steps: a dissection mechanism and a final conflict resolution phase. During the first phase, the two stakers exchange hashes of intermediate states of the rollup in a way that allows them to converge to the very first tick on which they disagree. The exact number of hashes exchanged at a given step is PVM-dependent. During the final phase, the stakers must provide a proof that they correctly interpreted this conflicting tick.

The layer 1 PVM then determines whether these proofs are valid. There are only two possible outcomes: either one of the stakers has provided a valid proof, in which case that staker wins the game and is rewarded with half of the opponent's deposit (the other half being burnt); or both stakers have provided an invalid proof, and they both lose their deposit. In the end, at most one stake will be kept in the commitment tree. When a commitment has no more stake on it (because all stakers have lost the related refutation games), it is removed from the tree.
An honest player *H* must therefore play as many refutation games as there are stakes on the commitments in conflict with *H*'s own commitment.

Finally, notice that each player is subject to a timer similar to a chess clock, allowing each player to play only up to one week: after this time has elapsed, a player can be dismissed by any layer 1 user submitting a timeout operation. Thus, the refutation game played by the two players can last at most two weeks.

There is no timeout for starting a refutation game after having published a concurrent commitment. However, assuming the existence of an honest participant, that participant will start the refutation game with all concurrent stakers to avoid the rollup being stuck.

# Workflows

## Tools

Smart Rollups come with two new executable programs: the Octez rollup node and the Octez rollup client.

The Octez rollup node is used by a rollup operator to deploy a rollup. The rollup node is responsible for making the rollup progress by publishing commitments and by playing refutation games.

Just like the Octez node, the Octez rollup node provides an [RPC interface](../api/openapi). The services of this interface can be called directly with HTTP requests or indirectly using the Octez rollup client.

## Prerequisites

An Octez rollup node needs an Octez node to run. We assume that an Octez node has been launched locally:

```sh
octez-node config init --data-dir "${ONODE_DIR}" --network "${NETWORK}"
octez-node run --data-dir "${ONODE_DIR}" --network "${NETWORK}" --rpc-addr 127.0.0.1
```

Finally, you need to check that your balance is greater than 10,000 tez to make sure that staking is possible. If your balance is not sufficient, you can get test tokens from a faucet.

```sh
octez-client get balance for "${OPERATOR_ADDR}"
```

## Origination

Anyone can originate a smart rollup with the following invocation of the Octez client:

```sh
octez-client originate smart rollup "${SOR_ALIAS}" \
  from "${OPERATOR_ADDR}" \
  of kind wasm_2_0_0 \
  of type bytes \
  with kernel "${KERNEL}" \
  --burn-cap 999
```

where `${SOR_ALIAS}` is an alias to memorize the smart rollup address in the client. This alias can be used in any command where a smart rollup address is expected. `${KERNEL}` is a hex representation of a WebAssembly bytecode serving as the initial program to boot from.

You can obtain this representation from the WASM bytecode file named `kernel.wasm`:

```sh
xxd -ps -c 0 kernel.wasm | tr -d '\n'
```

To experiment, we propose that you use the value `${KERNEL}` defined in the file `sr_boot_kernel.sh`.

```sh
source sr_boot_kernel.sh
```

If everything went well, the origination command results in:

```sh
  This sequence of operations was run:
    Manager signed operations:
      From: tz1fp5ncDmqYwYC568fREYz9iwQTgGQuKZqX
      Fee to the baker: ꜩ0.000357
      Expected counter: 36
      Gas limit: 1000
      Storage limit: 0 bytes
      Balance updates:
        tz1fp5ncDmqYwYC568fREYz9iwQTgGQuKZqX ... -ꜩ0.000357
        payload fees(the block proposer) ....... +ꜩ0.000357
    Revelation of manager public key:
      Contract: tz1fp5ncDmqYwYC568fREYz9iwQTgGQuKZqX
      Key: edpkukxtw4fHmffj4wtZohVKwNwUZvYm6HMog5QMe9EyYK3QwRwBjp
      This revelation was successfully applied
      Consumed gas: 1000
    Manager signed operations:
      From: tz1fp5ncDmqYwYC568fREYz9iwQTgGQuKZqX
      Fee to the baker: ꜩ0.000956
      Expected counter: 37
      Gas limit: 2849
      Storage limit: 6572 bytes
      Balance updates:
        tz1fp5ncDmqYwYC568fREYz9iwQTgGQuKZqX ... -ꜩ0.000956
        payload fees(the block proposer) .......
 +ꜩ0.000956
    Smart rollup origination:
      Kind: wasm_2_0_0
      Parameter type: bytes
      Kernel Blake2B hash: '24df9e3c520dd9a9c49b447766e8a604d31138c1aacb4a67532499c6a8b348cc'
      This smart rollup origination was successfully applied
      Consumed gas: 2748.269
      Storage size: 6552 bytes
      Address: sr1RYurGZtN8KNSpkMcCt9CgWeUaNkzsAfXf
      Genesis commitment hash: src13wCGc2nMVfN7rD1rgeG3g1q7oXYX2m5MJY5ZRooVhLt7JwKXwX
      Balance updates:
        tz1fp5ncDmqYwYC568fREYz9iwQTgGQuKZqX ... -ꜩ1.638
        storage fees ........................... +ꜩ1.638
```

The address `sr1RYurGZtN8KNSpkMcCt9CgWeUaNkzsAfXf` is the smart rollup address. Let's refer to it as `${SOR_ADDR}` from now on.

## Deploying a rollup node

Now that the rollup is originated, anyone can deploy a rollup node to advance the rollup.

First, we need to decide on a directory where the rollup node stores its data. Let us assign this path to `${ROLLUP_NODE_DIR}`.

The rollup node can be run with:

```sh
octez-smart-rollup-node-alpha --base-dir "${OCLIENT_DIR}" \
  run operator for "${SOR_ALIAS_OR_ADDR}" \
  with operators "${OPERATOR_ADDR}" \
  --data-dir "${ROLLUP_NODE_DIR}"
```

The log should show that the rollup node follows the layer 1 chain and is processing the inbox of each level.

:::note Distinct layer 1 addresses
Distinct layer 1 addresses could be used for the layer 1 operations issued by the rollup node, simply by editing the configuration file to set different addresses for `publish`, `add_messages`, `cement`, and `refute`.
:::

In addition, a rollup node can run under different modes:

1. `operator` activates a full-fledged rollup node. This means that the rollup node will do everything needed to make the rollup progress. This includes following the layer 1 chain, reconstructing inboxes, updating the state, publishing and cementing commitments regularly, and playing refutation games. In this mode, the rollup node will accept transactions in its queue and batch them on layer 1.
2. `batcher` means that the rollup node will accept transactions in its queue and batch them on layer 1. In this mode, the rollup node follows the layer 1 chain, but it does not update its state and does not reconstruct inboxes. Consequently, it publishes no commitments and plays no refutation games.
3. `observer` means that the rollup node follows the layer 1 chain to reconstruct inboxes and to update its state. However, it neither publishes commitments nor plays refutation games. It does not include the message batching service either.
4. `maintenance` is the same as the operator mode except that it does not include the message batching service.
5. `accuser` follows the layer 1 chain and computes commitments but does not publish them. Only when a conflicting commitment (published by another staker) is detected will the accuser node publish a commitment and participate in the subsequent refutation game.

The following table summarizes the operation modes, focusing on the L1 operations which are injected by the rollup node in each mode.

  | Add Messages | Publish | Cement | Refute
---|---|---|---|---
Operator | Yes | Yes | Yes | Yes
Batcher | Yes | No | No | No
Observer | No | No | No | No
Maintenance | No | Yes | Yes | Yes
Accuser | No | Yes* | No | Yes

:::note When does an accuser publish commitments?
An accuser node will publish commitments only when it detects conflicts. In this case, it must deposit 10,000 tez.
:::

### Configuration file

The rollup node can also be configured with the following command, which uses the same arguments as the `run` command:

```sh
octez-smart-rollup-node-alpha --base-dir "${OCLIENT_DIR}" \
  init operator config for "${SOR_ALIAS_OR_ADDR}" \
  with operators "${OPERATOR_ADDR}" \
  --data-dir "${ROLLUP_NODE_DIR}"
```

This creates a configuration file at `${ROLLUP_NODE_DIR}/config.json`:

```sh
{
  "data-dir": "${ROLLUP_NODE_DIR}",
  "smart-rollup-address": "${SOR_ADDR}",
  "smart-rollup-node-operator": {
    "publish": "${OPERATOR_ADDR}",
    "add_messages": "${OPERATOR_ADDR}",
    "cement": "${OPERATOR_ADDR}",
    "refute": "${OPERATOR_ADDR}"
  },
  "fee-parameters": {},
  "mode": "operator"
}
```

The rollup node can now be run with:

```sh
octez-smart-rollup-node-alpha -d "${OCLIENT_DIR}" run --data-dir ${ROLLUP_NODE_DIR}
```

The configuration will be read from `${ROLLUP_NODE_DIR}/config.json`.

### Rollup node in a sandbox

The node can also be tested locally with a sandbox environment.

Once you have initialized the **sandboxed** client data with:

```sh
./src/bin_client/octez-init-sandboxed-client.sh
```

you can run a sandboxed rollup node with:

```sh
octez-smart-rollup-node-Pt${CURRENT_PROTOCOL} run
```

where `${CURRENT_PROTOCOL}` represents the current latest protocol, e.g., `PtMumbai`, `PtNairob`, etc.

A temporary directory `/tmp/tezos-smart-rollup-node.xxxxxxxx` will be used. However, a specific data directory can be set with the environment variable `SCORU_DATA_DIR`.

## Sending an External Inbox Message

The Octez client can be used to send an external message into the rollup inbox.
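To make "hexadecimal representation of the message payload" concrete, here is a minimal sketch (the payload text is arbitrary; recall that external messages are just raw bytes):

```sh
# Hex-encode an arbitrary payload for use as an external message.
EMESSAGE=$(printf 'hello, rollup' | xxd -ps -c 0)
echo "$EMESSAGE"   # 68656c6c6f2c20726f6c6c7570
```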
Assuming that `${EMESSAGE}` is the hexadecimal representation of the message payload, to inject an external message, run:

```sh
octez-client -d "${OCLIENT_DIR}" -p Pt${CURRENT_PROTOCOL} \
  send smart rollup message "hex:[ \"${EMESSAGE}\" ]" \
  from "${OPERATOR_ADDR}"
```

Let's now produce some viable contents for `${EMESSAGE}`. The kernel used previously in our running example is a simple "echo" kernel that copies its input as a new message to its outbox. Therefore, the input must be a valid binary encoding of an outbox message to make this work.

Specifically, assuming that we have originated a layer 1 smart contract as follows:

```sh
octez-client -d "${OCLIENT_DIR}" -p Pt${CURRENT_PROTOCOL} \
  originate contract go transferring 1 from "${OPERATOR_ADDR}" \
  running 'parameter string; storage string; code {CAR; NIL operation; PAIR};' \
  --init '""' --burn-cap 0.4
```

and that this contract is identified by an address `${CONTRACT}`, then one can encode an outbox transaction using the Octez rollup client as follows:

```sh
MESSAGE='[ { \
  "destination" : "${CONTRACT}", \
  "parameters" : "\"Hello world\"", \
  "entrypoint" : "%default" } ]'

EMESSAGE=$(octez-smart-rollup-client-Pt${CURRENT_PROTOCOL} encode outbox message "${MESSAGE}")
```

## Triggering Execution of an Outbox Message

Once an outbox message has been pushed to the outbox by the kernel at some level `${L}`, the user needs to wait for the commitment that includes this level to be cemented. On Dailynet, the cementation process of a non-disputed commitment is 40 blocks long, while on Mainnet it is two weeks long.
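Waiting for cementation is usually scripted as a polling loop. Here is a hypothetical helper (the function name, retry count, and delay are made up; in practice the delay would be closer to the block time and the command would be an RPC query such as the one shown below):

```sh
# Hypothetical polling helper: retry a command until it succeeds.
wait_for() {
  attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    [ "$attempts" -lt 5 ] || return 1
    sleep 1
  done
}
```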

When the commitment is cemented, one can observe that the outbox is populated as follows:

```sh
octez-smart-rollup-client-Pt${CURRENT_PROTOCOL} rpc get \
  /global/block/cemented/outbox/${L}/messages
```

Here is the output for this command:

```
[ { "outbox_level": ${L}, "message_index": "0",
    "message":
      { "transactions":
          [ { "parameters": { "string": "Hello world" },
              "destination": "${CONTRACT}",
              "entrypoint": "%default" } ] } } ]
```

At this point, the actual execution of a given outbox message can be triggered. This requires precomputing a proof that this outbox message is indeed in the outbox. In the case of our running example, this proof is retrieved as follows:

```sh
PROOF=$(octez-smart-rollup-client-Pt${CURRENT_PROTOCOL} get proof for message 0 \
  of outbox at level "${L}" \
  transferring "${MESSAGE}")
```

Finally, the execution of the outbox message is done as follows:

```sh
"${TEZOS_PATH}/octez-client" -d "${OCLIENT_DIR}" -p Pt${CURRENT_PROTOCOL} \
  execute outbox message of smart rollup "${SOR_ALIAS_OR_ADDR}" \
  from "${OPERATOR_ADDR}" for commitment hash "${LCC}" \
  and output proof "${PROOF}"
```

where `${LCC}` is the hash of the latest cemented commitment.

:::note Who can trigger the execution of an outbox message?
Anyone can trigger the execution of an outbox message (not only an operator).
:::

To check that the contract has indeed been called with the parameter `"Hello world"` through an internal operation, we can check the receipt. More complex parameters, typically assets represented as tickets, can be used as long as they match the type of the entrypoint of the destination smart contract.

## Sending An Internal Inbox Message

A smart contract can push an internal message in the rollup inbox using the Michelson `TRANSFER_TOKENS` instruction targeting a specific rollup address. The parameter of this transfer must be a value of the Michelson type declared at the origination of this rollup.

Remember that our running example rollup has been originated with:

```sh
octez-client originate smart rollup "${SOR_ALIAS}" \
  from "${OPERATOR_ADDR}" \
  of kind wasm_2_0_0 \
  of type bytes \
  with kernel "${KERNEL}" \
  --burn-cap 999
```

The fragment `of type bytes` declares that the rollup is expecting values of type `bytes`. Any Michelson type could have been used. To transfer tickets to a rollup, this type must mention tickets.

Here is an example of a Michelson script that sends an internal message to the rollup of our running example. The payload of the internal message is the value of type `bytes` passed as parameter to the script.

```sh
parameter bytes;
storage unit;
code
  {
    UNPAIR;
    PUSH address "${SOR_ADDR}";
    CONTRACT bytes;
    IF_NONE { PUSH string "Invalid address"; FAILWITH } {};
    PUSH mutez 0;
    DIG 2;
    TRANSFER_TOKENS;
    NIL operation;
    SWAP;
    CONS;
    PAIR;
  }
```

## Populating the Reveal Channel

It is the responsibility of rollup node operators to provide the data passed through the reveal data channel when the rollup requests it.

To answer a request for a page of hash `H`, the rollup node tries to read the content of a file named `H` in the directory `${ROLLUP_NODE_DIR}/wasm_2_0_0`.

Notice that a page cannot exceed 4 kB. Hence, larger pieces of data must be represented with multiple pages that reference each other through hashes. It is up to the kernel to decide how to implement this. For instance, one can classify pages into two categories: index pages that contain hashes of other pages, and leaf pages that contain actual payloads.

## Configure WebAssembly Fast Execution

When the rollup node advances its internal rollup state under normal operation, it does so in a mode called `Fast Execution`.

This mode uses [Wasmer](https://wasmer.io) to run WebAssembly code, which allows you to configure the compiler used to execute it.
It can be done using the `OCTEZ_WASMER_COMPILER` environment variable, which is picked up by the smart rollup node.

The performance of the WebAssembly execution is affected primarily by the choice of compiler. Some compilers offer additional security guarantees, which might be attractive to you.

Here are some compiler options:

Compiler | `OCTEZ_WASMER_COMPILER` | Description
--- | --- | ---
Singlepass | `singlepass` | [When to use Singlepass](https://github.com/wasmerio/wasmer/tree/master/lib/compiler-singlepass#when-to-use-singlepass)
Cranelift | `cranelift` | [When to use Cranelift](https://github.com/wasmerio/wasmer/tree/master/lib/compiler-cranelift#when-to-use-cranelift)

## Developing WASM Kernels

A rollup is primarily characterized by the semantics given to the input messages it processes. These semantics are provided at origination time as a WASM program (of kind `wasm_2_0_0`) called a **kernel**. The kernel is a WASM module encoded in the binary format as defined by the WASM standard.

A key requirement for any web3 technology is determinism. To ensure determinism, the following restrictions are in place:

1. Instructions and types related to floating-point arithmetic are not supported. This is because IEEE floats are not deterministic: the standard leaves the results of some operations unspecified.
2. The length of the call stack of the WASM kernel is restricted to 300.

Otherwise, we support the full WASM language. A valid kernel is a WASM module that satisfies the following constraints:

1. It exports a function `kernel_run` that takes no arguments and returns nothing.
2. It declares and exports exactly one memory.
3. It only imports the host functions exported by the (virtual) module `smart_rollup_core`.

For instance, an example of a simple `Hello World` kernel is the following WASM program in text format.

```sh
(module
  (import "smart_rollup_core" "write_debug"
          (func $write_debug (param i32 i32) (result i32)))
  (memory 1)
  (export "mem" (memory 0))
  (data (i32.const 100) "hello, world!")
  (func (export "kernel_run")
        (local $hello_address i32)
        (local $hello_length i32)
        (local.set $hello_address (i32.const 100))
        (local.set $hello_length (i32.const 13))
        (drop (call $write_debug (local.get $hello_address)
                                 (local.get $hello_length)))))
```

This program can be compiled to the WASM binary format with a general-purpose tool like [WABT](https://github.com/WebAssembly/wabt).

```sh
wat2wasm hello.wat -o hello.wasm
```

The content of the resulting `hello.wasm` file is a valid WASM kernel. One of the benefits of choosing WASM as the programming language for Smart Rollups is that WASM has gradually become a ubiquitous compilation target over the years. Its popularity has grown to the point where mainstream, industrial languages like Go or Rust now natively compile to WASM. For example, `cargo`, the official Rust package manager, provides an official target to compile Rust to `.wasm` binary files, which are valid WASM kernels. This means that, for this particular example, one can build a WASM kernel while enjoying the strengths and convenience of the Rust language and the Rust ecosystem.

In the context of Smart Rollups, Rust has become the primary language, and the WASM backend has been tested extensively with it. However, the WASM VM has not been modified in any way to favor this language. We fully expect that other mainstream languages, such as Go, are also great candidates for implementing WASM kernels.

Let's move on and continue by:

1. explaining the execution environment of a WASM kernel (i.e., when it is parsed, executed, etc.)
2. explaining, in detail, the API at the disposal of WASM kernel developers
3. demonstrating how Rust can be used to implement a WASM kernel.
-
-### Execution Environment
-
-Fundamentally, the life cycle of a smart rollup is a never-ending loop
-of fetching inputs from layer 1 and executing the `kernel_run` function exposed by the WASM kernel.
-
-### State
-
-The smart rollup carries two states:
-
-1. A transient state that is reset after each call to the `kernel_run` function, similar to RAM.
-2. A persistent state that is preserved across `kernel_run` calls. It consists of:
-   - The **inbox**, which is regularly populated with the inputs coming from layer 1.
-   - The **outbox**, which the kernel can populate with contract calls targeting smart contracts in layer 1.
-   - The **durable storage**, which can be thought of as persistent memory similar to a file system.
-
-The durable storage is a persistent tree whose contents are addressed by path-like keys.
-
-A path in the storage may contain:
-- a value (also called a file) consisting of a sequence of raw bytes
-- and/or any number of subtrees (also called directories), i.e., the paths in the storage prefixed by the current path.
-
-Thus, unlike in most file systems, a path in the durable storage may be at the same time a file and a directory (a set of sub-paths).
-
-The WASM kernel can write and read the raw bytes stored under a given
-path (the file), but can also interact (delete, copy, move, etc.) with
-subtrees (directories).
-
-:::note Read-only values and subtrees
-The values and subtrees under the key `/readonly` are not writable by a
-kernel, but can be used by the PVM to give information to the kernel.
-:::
-
-### Control Flow
-
-When a new block is published on Tezos, the inbox exposed to the smart
-rollup is populated with all the inputs published on Tezos in this
-block. Keep in mind that all Smart Rollups originated on Tezos share the same inbox. As a consequence, a WASM kernel has to filter the inputs that are relevant for its purpose from the ones it does not need to process.
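As a sketch of this filtering step, the model below classifies raw inbox messages and keeps only the ones addressed to a particular rollup. The one-byte internal/external tag and the 4-byte application prefix are illustrative assumptions, not the protocol's actual message encoding:

```rust
/// Messages from the shared inbox, as one kernel might classify them.
/// The one-byte framing tag used here is an illustrative assumption,
/// not the actual protocol encoding.
#[derive(Debug, PartialEq)]
enum InboxMessage<'a> {
    /// Message pushed by the protocol or by a layer 1 smart contract.
    Internal(&'a [u8]),
    /// Message pushed by a user.
    External(&'a [u8]),
}

fn classify(raw: &[u8]) -> Option<InboxMessage<'_>> {
    match raw.split_first() {
        Some((&0x00, rest)) => Some(InboxMessage::Internal(rest)),
        Some((&0x01, rest)) => Some(InboxMessage::External(rest)),
        _ => None,
    }
}

/// Keep only the external messages that start with this rollup's own
/// 4-byte application prefix (an assumed convention); everything else
/// in the shared inbox is simply skipped.
fn relevant<'a>(inbox: &'a [Vec<u8>], prefix: &[u8; 4]) -> Vec<&'a [u8]> {
    inbox
        .iter()
        .filter_map(|raw| match classify(raw) {
            Some(InboxMessage::External(body)) if body.starts_with(prefix) => {
                Some(&body[4..])
            }
            _ => None,
        })
        .collect()
}

fn main() {
    let inbox = vec![
        vec![0x00, 0xAA],                                      // internal
        [b"\x01MYRO".as_slice(), b"mint".as_slice()].concat(), // ours
        [b"\x01OTHR".as_slice(), b"noop".as_slice()].concat(), // another rollup's
    ];
    let mine = relevant(&inbox, b"MYRO");
    assert_eq!(mine, vec![b"mint".as_slice()]);
}
```

A real kernel would perform the same kind of filtering on the payloads returned by the `read_input` host function.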
-
-Once the inbox has been populated with the inputs of the Tezos block,
-the `kernel_run` function is called from a clean transient state.
-More precisely, the WASM kernel is re-initialized, then `kernel_run` is
-called.
-
-By default, the WASM kernel yields when `kernel_run` returns. In this case:
-
-- The WASM kernel execution is put on hold while the inputs of the next inbox are loaded.
-- The inputs that were not consumed by `kernel_run` are dropped.
-
-However, `kernel_run` can prevent the WASM kernel from yielding by writing arbitrary data under the path `/kernel/env/reboot`
-in its durable storage.
-In such a case (known as a _reboot_), `kernel_run` is called again without dropping unread inputs.
-The value at `/kernel/env/reboot` is removed between each call of `kernel_run`, and `kernel_run` can postpone yielding for at most 1,000 reboots for each Tezos level.
-
-A call to `kernel_run` cannot take an arbitrary amount of time to
-complete, because diverging computations are not compatible with the
-optimistic rollup infrastructure of Tezos. To sidestep the halting problem, the reference interpreter of WASM (used during the refutation game) enforces a bound on the number of ticks used in a call to `kernel_run`. Once the maximum number of ticks is reached, the execution of `kernel_run` is trapped (*i.e.*, interrupted with an error). By contrast, the fast execution engine does not enforce this tick limit. Hence, it is the responsibility of the kernel developer to implement a `kernel_run` function that does not exceed its tick budget.
-
-The current bound is set to 11,000,000,000 ticks.
-`octez-smart-rollup-wasm-debugger` is the best tool available
-to verify that the `kernel_run` function does not go over this tick limit.
-
-The direct consequence of this setup is that it might be necessary for a WASM kernel to span a long computation across several calls to
-`kernel_run`, requiring serialization of any data it needs in the
-durable storage to avoid loss.
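This pattern of spanning a computation across several calls can be sketched as follows. The durable storage is mocked as a hash map and the driver loop plays the role of the PVM; in a real kernel, reads and writes would go through the `store_read`/`store_write` host functions, and the paths under `/state` are assumed application choices:

```rust
use std::collections::HashMap;

/// Stand-in for the durable storage (path -> raw bytes); a real kernel
/// would go through the `store_read`/`store_write` host functions.
type Storage = HashMap<String, Vec<u8>>;

const REBOOT_FLAG: &str = "/kernel/env/reboot";
const CURSOR: &str = "/state/cursor"; // assumed application path
const ACC: &str = "/state/acc";       // assumed application path
const BUDGET_PER_RUN: usize = 3;      // items we pretend fit in one tick budget

fn read_u64(storage: &Storage, key: &str) -> u64 {
    storage
        .get(key)
        .map(|b| u64::from_le_bytes(b[..8].try_into().unwrap()))
        .unwrap_or(0)
}

/// One `kernel_run` invocation: resume from the persisted cursor,
/// process a bounded amount of work, persist intermediate results,
/// and write the reboot flag if work remains.
fn kernel_run(storage: &mut Storage, work: &[u64]) {
    let cursor = read_u64(storage, CURSOR) as usize;
    let mut acc = read_u64(storage, ACC);
    let end = (cursor + BUDGET_PER_RUN).min(work.len());
    for item in &work[cursor..end] {
        acc += item;
    }
    storage.insert(ACC.into(), acc.to_le_bytes().to_vec());
    if end < work.len() {
        storage.insert(CURSOR.into(), (end as u64).to_le_bytes().to_vec());
        // Writing any data here asks for `kernel_run` to be called again.
        storage.insert(REBOOT_FLAG.into(), vec![1]);
    }
}

fn main() {
    let work: Vec<u64> = (1..=10).collect();
    let mut storage = Storage::new();
    // The PVM removes the reboot flag before each call and allows at
    // most 1,000 reboots per Tezos level.
    for _ in 0..1_000 {
        storage.remove(REBOOT_FLAG);
        kernel_run(&mut storage, &work);
        if !storage.contains_key(REBOOT_FLAG) {
            break;
        }
    }
    assert_eq!(read_u64(&storage, ACC), 55); // 1 + 2 + ... + 10
}
```

Because the transient memory is wiped between calls, both the cursor and the partial accumulator must live in the durable storage, exactly as the text above requires.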
-
-Finally, the kernel can find out whether the previous `kernel_run` invocation
-was trapped by checking whether some data is stored under the path `/kernel/env/stuck`.
-
-### Host Functions
-
-At its core, the WASM machine defined in the WASM standard is an
-evolved arithmetic machine. It needs to be enriched with so-called
-"host" functions to be used for greater purposes. The host
-functions provide an API for the WASM program to interact with the outside world.
-
-For Smart Rollups, the host functions exposed to a WASM kernel allow
-it to interact with the components of persistent state:
-
-- `read_input` - loads the oldest input still present in the inbox of the smart rollup into the transient memory of the WASM kernel. This means that the input is lost at the next invocation of `kernel_run` if it is not written to the durable storage.
-
-- `write_output` - writes an in-memory buffer to the outbox of the smart rollup. If the content of the buffer follows the expected encoding, it can be interpreted in layer 1 as a smart contract call, once a commitment acknowledging the call to this host function is cemented.
-
-- `write_debug` - can be used by the WASM kernel to log events which can potentially be interpreted by an instrumented rollup node.
-
-- `store_has` - returns the kind of data (if any) stored in the durable storage under a given path: a directory, a file, neither, or both.
-
-- `store_delete` - cuts the subtree under a given path out of the durable storage.
-
-- `store_copy` - copies the subtree under a given path to another key.
-
-- `store_move` - behaves like `store_copy`, but also cuts the original subtree out of the tree.
-
-- `store_read` - loads at most 2,048 bytes from a file of the durable storage into a buffer in the memory of the WASM kernel.
-
-- `store_write` - writes at most 2,048 bytes from a buffer in the memory of the WASM kernel to a file of the durable storage, increasing its size if necessary.
Note that files in the durable storage cannot exceed $$2^{31} - 1$$ bytes (i.e., 2GB - 1).
-
-- `store_value_size` - returns the size (in bytes) of a file under a given key in the durable storage.
-
-- `store_list_size` - returns the number of child objects (either directories or files) under a given key.
-
-- `reveal_preimage` - loads in memory the preimage of a hash. The size of the hash in bytes must be specified as an input to the function.
-
-- `reveal_metadata` - loads in memory the address of the smart rollup (20 bytes) and the Tezos level of its origination (4 bytes).
-
-These host functions use a "C-like" API. In particular, most of them
-return a signed 32-bit integer, where negative values are reserved for
-conveying errors, as shown in the next table.
-
-Code | Description
---- | ---
-`-1` | Input is too large to be a valid key of the durable storage
-`-2` | Input cannot be parsed as a valid key of the durable storage
-`-3` | There is no file under the requested key
-`-4` | The host function tried to read or write an invalid section (determined by an offset and a length) of the value stored under a given key
-`-5` | Cannot write a value beyond the 2GB size limit
-`-6` | Invalid memory access (segmentation fault)
-`-7` | Tried to read from the inbox or write to the outbox more than 4,096 bytes
-`-8` | Unknown error due to an invalid access
-`-9` | Attempt to modify a read-only value
-`-10` | Key has no tree in the storage
-`-11` | Outbox is full, no new message can be appended
-
-## Implementing a WASM Kernel in Rust
-
-:::note Rust Familiarity
-This document is not a tutorial about Rust. Familiarity with the
-language and its ecosystem (in particular, how Rust crates are structured) is assumed.
-:::
-
-Though WASM is a good fit for efficiently executing computation-intensive, arbitrary programs, it is a low-level,
-stack-based, memory-unsafe language.
Fortunately, it was designed to be
-a compilation target, not a language in which developers would directly
-write their programs.
-
-Rust has several advantages that make it a good candidate for writing
-the kernel of a smart rollup. Not only does the Rust compiler treat WASM as a first-class citizen when it comes to compilation targets, but its approach to memory safety eliminates large classes of bugs and
-vulnerabilities that arbitrary WASM programs may suffer from.
-
-### Setting up Rust
-
-[rustup](https://rustup.rs) is the standard way to get Rust. Once
-`rustup` is installed, enabling WASM as a compilation target is as
-simple as running the following command.
-
-```sh
-rustup target add wasm32-unknown-unknown
-```
-
-Rust also provides the `wasm64-unknown-unknown` compilation target. This target is **not** compatible with Tezos Smart Rollups, which only
-provide a 32-bit address space.
-
-The simplest kernel one can implement in Rust (one that returns directly after being called, doing nothing) is the following Rust file (by convention named `lib.rs`).
-
-```rust
-#[no_mangle]
-pub extern "C" fn kernel_run() {
-}
-```
-
-This code can be compiled with `cargo` with the following
-`Cargo.toml`.
-
-```toml
-[package]
-name = 'noop'
-version = '0.1.0'
-edition = '2021'
-
-[lib]
-crate-type = ["cdylib"]
-```
-
-The key line to spot is the `crate-type` definition set to `cdylib`. When writing a library that will eventually be consumed by a WASM kernel crate, this line must be modified to:
-
-```toml
-crate-type = ["cdylib", "rlib"]
-```
-
-Compiling our "noop" kernel is done by calling `cargo` with the correct
-argument:
-
-```sh
-cargo build --target wasm32-unknown-unknown
-```
-
-It is also possible to use the `--release` CLI flag to tell `cargo` to
-optimize the kernel. To make the use of the `--target` option optional, it is possible to create a `.cargo/config.toml` file containing the following lines.
-
-```toml
-[build]
-target = "wasm32-unknown-unknown"
-```
-
-The resulting project looks as follows.
-
-```sh
-.
-├── .cargo
-│   └── config.toml
-├── Cargo.toml
-└── src
-    └── lib.rs
-```
-
-and the kernel can be found in the `target/` directory: `./target/wasm32-unknown-unknown/release/noop.wasm`.
-
-By default, Rust binaries (including WASM binaries) contain a lot of
-debugging information and possibly unused code that we do not want to
-deploy in our rollup. For instance, our `noop` kernel is 1.7MB. We can use [wasm-strip](https://github.com/WebAssembly/wabt) to reduce
-the size of the kernel, down to 115 bytes in this case.
-
-### Host Functions in Rust
-
-The host functions exported by the WASM runtime to Rust programs are
-exposed by the API below. The `link` pragma is used to specify the
-module that exports them (in our case, `smart_rollup_core`).
-
-```rust
-#[repr(C)]
-pub struct ReadInputMessageInfo {
-    pub level: i32,
-    pub id: i32,
-}
-
-#[link(wasm_import_module = "smart_rollup_core")]
-extern "C" {
-    /// Returns the number of bytes written to `dst`, or an error code.
-    pub fn read_input(
-        message_info: *mut ReadInputMessageInfo,
-        dst: *mut u8,
-        max_bytes: usize,
-    ) -> i32;
-
-    /// Returns 0 in case of success, or an error code.
-    pub fn write_output(src: *const u8, num_bytes: usize) -> i32;
-
-    /// Does nothing. Does not check the correctness of its argument.
-    pub fn write_debug(src: *const u8, num_bytes: usize);
-
-    /// Returns
-    /// - 0 if the key is missing
-    /// - 1 if only a file is stored under the path
-    /// - 2 if only directories are stored under the path
-    /// - 3 if both a file and directories are stored under the path
-    pub fn store_has(path: *const u8, path_len: usize) -> i32;
-
-    /// Returns 0 in case of success, or an error code.
-    pub fn store_delete(path: *const u8, path_len: usize) -> i32;
-
-    /// Returns the number of children (files and directories) under a
-    /// given key.
-    pub fn store_list_size(path: *const u8, path_len: usize) -> i64;
-
-    /// Returns 0 in case of success, or an error code.
-    pub fn store_copy(
-        src_path: *const u8,
-        src_path_len: usize,
-        dst_path: *const u8,
-        dst_path_len: usize,
-    ) -> i32;
-
-    /// Returns 0 in case of success, or an error code.
-    pub fn store_move(
-        src_path: *const u8,
-        src_path_len: usize,
-        dst_path: *const u8,
-        dst_path_len: usize,
-    ) -> i32;
-
-    /// Returns the number of bytes copied to `dst` (at most
-    /// `num_bytes`), or an error code.
-    pub fn store_read(
-        path: *const u8,
-        path_len: usize,
-        offset: usize,
-        dst: *mut u8,
-        num_bytes: usize,
-    ) -> i32;
-
-    /// Returns 0 in case of success, or an error code.
-    pub fn store_write(
-        path: *const u8,
-        path_len: usize,
-        offset: usize,
-        src: *const u8,
-        num_bytes: usize,
-    ) -> i32;
-
-    /// Returns the number of bytes written at `dst`, or an error
-    /// code.
-    pub fn reveal_metadata(
-        dst: *mut u8,
-        max_bytes: usize,
-    ) -> i32;
-
-    /// Returns the number of bytes written at `dst`, or an error
-    /// code.
-    pub fn reveal_preimage(
-        hash_addr: *const u8,
-        hash_size: u8,
-        dst: *mut u8,
-        max_bytes: usize,
-    ) -> i32;
-}
-```
-
-These functions are marked as `unsafe` for Rust. It is possible to
-provide a safe API on top of them. For instance, the `read_input` host
-function can be used to declare a safe function which allocates a fresh
-Rust vector to receive the input.
-
-```rust
-// Assuming the host functions are defined in a module `host`.
-
-pub const MAX_MESSAGE_SIZE: u32 = 4096u32;
-
-pub struct Input {
-    pub level: u32,
-    pub id: u32,
-    pub payload: Vec<u8>,
-}
-
-pub fn next_input() -> Option<Input> {
-    let mut payload = Vec::with_capacity(MAX_MESSAGE_SIZE as usize);
-
-    // Placeholder values
-    let mut message_info = ReadInputMessageInfo { level: 0, id: 0 };
-
-    let size = unsafe {
-        host::read_input(
-            &mut message_info,
-            payload.as_mut_ptr(),
-            MAX_MESSAGE_SIZE as usize,
-        )
-    };
-
-    if 0 < size {
-        unsafe { payload.set_len(size as usize) };
-        Some(Input {
-            level: message_info.level as u32,
-            id: message_info.id as u32,
-            payload,
-        })
-    } else {
-        None
-    }
-}
-```
-
-Coupling `Vec::with_capacity` with the unsafe `set_len` function
-is a good approach to avoid initializing the 4,096 bytes of memory every time you want to load data of arbitrary size into the WASM memory.
-
-### Testing your Kernel
-
-:::note Smart Rollup WASM Debugger
-`octez-smart-rollup-wasm-debugger` is available in the Octez
-distribution starting with version 16.
-:::
-
-Testing a kernel without having to start a rollup node on a test network is very convenient. We provide a debugger as a means to evaluate the WASM PVM without relying on any node and network:
-
-```sh
-octez-smart-rollup-wasm-debugger "${WASM_FILE}" --inputs "${JSON_INPUTS}" --rollup "${SOR_ADDR}"
-```
-
-`octez-smart-rollup-wasm-debugger` takes as its argument the WASM kernel to be debugged, either as a `.wasm` file (the binary representation of WebAssembly modules) or as a `.wast` file (its textual representation), and actually parses and typechecks the kernel before giving it to the PVM.
-
-Besides the kernel file, the debugger can optionally take an input file
-containing inboxes and a rollup address. The expected contents of the
-inboxes is a JSON value, with the following schema:
-
-```json
-[
-  [ { "payload" : <michelson data>,
-      "sender" : <contract hash>,
-      "source" : <public key hash>,
-      "destination" : <smart rollup address> }
-    ..
-    // or
-    { "external" : <hex encoded payload> }
-    ..
-  ]
-]
-```
-
-The contents of the input file is a JSON array of arrays of inputs,
-which encodes a sequence of inboxes, where an inbox is a set of
-messages. These inboxes are read in the same order as they appear in the JSON file.
-
-For example, here is a valid input file that defines two inboxes: the first array encodes an inbox containing only an external message, while the second array encodes an inbox containing two messages:
-
-```json
-[
-  [
-    {
-      "external":
-      "0000000023030b01d1a37c088a1221b636bb5fccb35e05181038ba7c000000000764656661756c74"
-    }
-  ],
-  [
-    {
-      "payload" : "0",
-      "sender" : "KT1ThEdxfUcWUwqsdergy3QnbCWGHSUHeHJq",
-      "source" : "tz1RjtZUVeLhADFHDL8UwDZA6vjWWhojpu5w",
-      "destination" : "sr1RYurGZtN8KNSpkMcCt9CgWeUaNkzsAfXf"
-    },
-    { "payload" : "Pair Unit False" }
-  ]
-]
-```
-
-Note that the `sender`, `source`, and `destination` fields are optional
-and will be given default values by the debugger, respectively:
-
-- `KT18amZmM5W7qDWVt2pH6uj7sCEd3kbzLrHT`
-- `tz1Ke2h7sDdakHJQh8WX4Z372du1KChsksyU`
-- `sr163Lv22CdE8QagCwf48PWDTquk6isQwv57`
-
-If no input file is given, the inbox is assumed to be empty. If the option `--rollup` is given, it replaces the default value for the rollup address.
-
-`octez-smart-rollup-wasm-debugger` is a debugger; as such, it waits for
-user inputs to continue its execution. Its initial state is exactly the
-same as right after its origination. Its current state can be inspected
-with the command `show status`:
-
-```sh
-> show status
-Status: Waiting for inputs
-Internal state: Collect
-```
-
-When started, the kernel is in collection mode internally. This means
-that it is not executing any WASM code, and is waiting for inputs in
-order to proceed. The command `load inputs` will load the first inbox
-from the file given with the option `--inputs`, putting `Start_of_level` and `Info_per_level` before these inputs and `End_of_level` after the inputs.
-
-```sh
-> load inputs
-Loaded 3 inputs at level 0
-
-> show status
-Status: Evaluating
-Internal state: Snapshot
-```
-
-The internal input buffer can be inspected with `show inbox`:
-
-```sh
-> show inbox
-Inbox has 3 messages:
-{ raw_level: 0;
-  counter: 0
-  payload: Start_of_level }
-{ raw_level: 0;
-  counter: 1
-  payload: 0000000023030b01d1a37c088a1221b636bb5fccb35e05181038ba7c000000000764656661756c74 }
-{ raw_level: 0;
-  counter: 2
-  payload: End_of_level }
-```
-
-The first input of an inbox at the beginning of a level is
-`Start_of_level`, and is represented by the message `\000\001` on the
-kernel side. We can now start a `kernel_run` evaluation:
-
-```sh
-> step kernel_run
-Evaluation took 11000000000 ticks so far
-Status: Waiting for inputs
-Internal state: Collect
-```
-
-The memory of the interpreter is flushed between two `kernel_run` calls
-(at the `Snapshot` and `Collect` internal states); however, the durable
-storage can be used as a persistent memory. Let's assume this kernel
-wrote data at key `/store/key`:
-
-```sh
-> show key /store/key
-<value>
-```
-
-Since the representation of values is decided by the kernel, the
-debugger can only return its raw value. Please note that the command
-`show keys <path>` will return the keys for the given path. This can
-help you navigate the durable storage.
-
-```sh
-> show keys /store
-/key
-/another_key
-...
-```
-
-It is also possible to inspect the memory by stopping the PVM before its snapshot internal state with `step result`, inspecting the memory at pointer `p` for `l` bytes, and finally evaluating until the next `kernel_run`:
-
-```sh
-> step result
-Evaluation took 2500 ticks so far
-Status: Evaluating
-Internal state: Evaluation succeeded
-
-> show memory at p for l bytes
-<value>
-
-> step kernel_run
-Evaluation took 7500 ticks so far
-Status: Evaluating
-Internal state: Snapshot
-```
-
-Once again, note that values from the memory are output as is, since the representation is internal to WASM.
-
-Finally, it is possible to evaluate the whole inbox with `step inbox`.
-It takes care of any reboots requested by the kernel (through
-the `/kernel/env/reboot` flag) and stops at the next
-collection phase:
-
-```sh
-> step inbox
-Evaluation took 44000000000 ticks
-Status: Waiting for inputs
-Internal state: Collect
-```
-
-It is also possible to show the outbox for any given level:
-
-```sh
-> show outbox at level 0
-Outbox has N messages:
-{ unparsed_parameters: ..;
-  destination: ..;
-  entrypoint: ..; }
-..
-```
-
-The reveal channel described previously is available in the debugger,
-either automatically or through specific commands. The debugger can automatically fill preimages from files in a specific directory on the disk, by default in the `preimage` subdirectory of the working directory. It can be configured with the option `--preimage-dir <directory>`.
-
-If no corresponding file is found for the requested preimage, the debugger asks for the hexadecimal value of the preimage:
-
-```sh
-> step inbox
-Preimage for hash 0000[..] not found.
-> 48656c6c6f207468657265210a
-Hello there!
-...
-```
-
-Metadata is automatically filled with level `0` as the origination level
-and the configured smart rollup address (or the default one).
-
-Note that when stepping tick by tick (using the `step tick` command), it is possible to end up in a situation where the evaluation stops on
-`Waiting for reveal`. If the expected value is metadata, the command
-`reveal metadata` will give the default metadata to the kernel. If the
-expected value is the preimage of a given hash, there are two possible
-solutions:
-
-- `reveal preimage` - read the value from the disk. In that case, the
-  debugger will look for a file of the same name as the expected hash
-  in the `preimage` subdirectory.
-- `reveal preimage of <hex encoded value>` - used to feed a
-  custom preimage value.
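To make the paging behind preimage reveals concrete, here is a sketch of single-level hash paging: data is split into pages of at most 4KB, and a root page lists the hashes of the leaves. It uses the standard library's `DefaultHasher` as a stand-in for the protocol's blake2b hashes, and the root-page layout is an illustrative assumption:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

const PAGE_SIZE: usize = 4096;

// The real protocol addresses a page by its blake2b hash; the std
// library's DefaultHasher is used here only as a stand-in.
fn page_hash(page: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    page.hash(&mut h);
    h.finish()
}

/// Split `data` into leaf pages of at most PAGE_SIZE bytes, then build
/// a root page listing the hash of each leaf (one level of indirection;
/// the root page itself must also stay under PAGE_SIZE).
fn paginate(data: &[u8]) -> (u64, HashMap<u64, Vec<u8>>) {
    let mut store = HashMap::new();
    let mut root = Vec::new();
    for chunk in data.chunks(PAGE_SIZE) {
        let h = page_hash(chunk);
        store.insert(h, chunk.to_vec());
        root.extend_from_slice(&h.to_le_bytes());
    }
    let root_hash = page_hash(&root);
    store.insert(root_hash, root);
    (root_hash, store)
}

/// What a kernel does with `reveal_preimage`: request the root page by
/// hash, then each leaf page it points to, and reassemble the data.
fn reveal_all(root_hash: u64, store: &HashMap<u64, Vec<u8>>) -> Vec<u8> {
    let root = &store[&root_hash];
    let mut out = Vec::new();
    for h in root.chunks(8) {
        let h = u64::from_le_bytes(h.try_into().unwrap());
        out.extend_from_slice(&store[&h]);
    }
    out
}

fn main() {
    let data = vec![7u8; 10_000]; // needs 3 leaf pages
    let (root, store) = paginate(&data);
    assert_eq!(reveal_all(root, &store), data);
}
```

The `store` map plays the role of the debugger's `preimage` directory: a file per hash, revealed on demand.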
- -## Glossary - -- **PVM**: A Proof-generating Virtual Machine is a reference - implementation for a device on top of which a smart rollup can be - executed. This reference implementation is part of the Tezos - protocol and is the unique source of truth regarding the semantics - of rollups. The PVM is able to produce proofs enforcing this truth. - This ability is used during the final step of refutation games. -- **Inbox**: A sequence of messages from layer 1 to Smart Rollups. - The contents of the inbox is determined by the consensus of the - Tezos protocol. -- **Outbox**: A sequence of messages from a smart rollup to - layer 1. Messages are smart contract calls, potentially containing - tickets. These calls can be triggered only when the related - commitment is cemented (hence, at least two weeks after the actual - execution of the operation). -- **Commitment period**: A period of 60 blocks during which all inbox - messages must be processed by the rollup node state to compute a - commitment. A commitment must be published for each commitment - period. -- **Refutation period**: At the end of each commitment period, a - period of two weeks starts to allow any commitment related to this - commitment period to be challenged. -- **Staker**: An implicit account that has made a deposit on a - commitment. -- **Refutation game**: A process by which the Tezos protocol solves a - conflict between two stakers. diff --git a/docs/architecture/smart-rollups.mdx b/docs/architecture/smart-rollups.mdx new file mode 100644 index 000000000..0ba3a2826 --- /dev/null +++ b/docs/architecture/smart-rollups.mdx @@ -0,0 +1,172 @@ +--- +title: Smart Rollups +authors: 'Nomadic Labs, TriliTech, Tim McMackin' +last_update: + date: 18 January 2024 +--- + +import LucidDiagram from '@site/src/components/LucidDiagram'; + +Smart Rollups play a crucial part in providing high scalability on Tezos. 
+They handle logic in a separate environment that can run transactions at a much higher rate and can use larger amounts of data than the main Tezos network.
+
+The transactions and logic that Smart Rollups run are called _layer 2_ to differentiate them from the main network, which is called _layer 1_.
+
+Anyone can run a node based on a Smart Rollup to execute its code and verify that other nodes are running it correctly, just like anyone can run nodes, bakers, and accusers on layer 1.
+This code, called the _kernel_, runs in a deterministic manner and according to a given semantics, which guarantees that results are reproducible by any rollup node with the same kernel.
+The semantics is precisely defined by a reference virtual machine called a proof-generating virtual machine (PVM), able to generate a proof that executing a program in a given context results in a given state.
+During normal execution, the Smart Rollup can use any virtual machine that is compatible with the PVM semantics, which allows the Smart Rollup to be more efficient.
+
+Using the PVM and optionally a compatible VM guarantees that if a divergence in results is found, it can be tracked down to a single elementary step that was not executed correctly by some node.
+In this way, multiple nodes can run the same rollup and each node can verify the state of the rollup.
+
+For a tutorial on Smart Rollups, see [Deploy a Smart Rollup](../tutorials/smart-rollup).
+
+For reference on Smart Rollups, see [Smart Optimistic Rollups](https://tezos.gitlab.io/active/smart_rollups.html) in the Octez documentation.
+
+This diagram shows a high-level view of how Smart Rollups interact with layer 1:
+
+
+## Uses for Smart Rollups
+
+- Smart Rollups allow you to run large amounts of processing and manipulate large amounts of data that would be too slow or expensive to run on layer 1.
+
+- Smart Rollups can run far more transactions per second than layer 1.
+
+- Smart Rollups allow you to avoid some transaction fees and storage fees.
+
+- Smart Rollups can retrieve data from outside the blockchain in specific ways that smart contracts can't.
+
+- Smart Rollups can implement different execution environments, such as execution environments that are compatible with other blockchains.
+For example, Smart Rollups enable [Etherlink](https://www.etherlink.com/), which makes it possible to run EVM applications (originally written for Ethereum) on Tezos.
+
+## Communication
+
+Smart Rollups have access to two sources of information: the rollup inbox and the reveal data channel.
+These are the only sources of information that rollups can use.
+In particular, Smart Rollup nodes cannot communicate directly with each other; they do not have a peer-to-peer communication channel like layer 1 nodes.
+
+### Rollup inbox
+
+Each layer 1 block has a _rollup inbox_ that contains messages from layer 1 to all rollups.
+Anyone can add a message to this inbox and all messages are visible to all rollups.
+Smart Rollups filter the inbox to the messages that they are interested in and act on them accordingly.
+
+The messages that users add to the rollup inbox are called _external messages_.
+For example, users can add messages to the inbox with the Octez client `send smart rollup message` command.
+
+Similarly, smart contracts can add messages in a way similar to calling a smart contract entrypoint, by using the Michelson `TRANSFER_TOKENS` instruction.
+The messages that smart contracts add to the inbox are called _internal messages_.
+
+Each block also contains the following internal messages, which are created by the protocol:
+
+- `Start of level`, which indicates the beginning of the block
+- `Info per level`, which includes the timestamp and block hash of the preceding block
+- `End of level`, which indicates the end of the block
+
+Smart Rollup nodes can use these internal messages to know when blocks begin and end.
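As a sketch of how a kernel can use these markers, the model below walks one block's inbox and counts the user messages between `Start of level` and `End of level`. The `Message` enum is a simplified stand-in for the actual binary encoding of inbox messages:

```rust
/// Simplified model of inbox messages; the real protocol encodes
/// these as tagged binary payloads.
#[derive(Debug)]
enum Message {
    StartOfLevel,
    InfoPerLevel { timestamp: u64 },
    External(Vec<u8>),
    EndOfLevel,
}

/// Walk one block's inbox, using the protocol markers to delimit the
/// block and count the user messages in between.
fn process_block(inbox: &[Message]) -> Result<usize, &'static str> {
    let mut iter = inbox.iter();
    match iter.next() {
        Some(Message::StartOfLevel) => {}
        _ => return Err("inbox must open with Start of level"),
    }
    let mut externals = 0;
    for msg in iter {
        match msg {
            Message::EndOfLevel => return Ok(externals),
            Message::External(_) => externals += 1,
            Message::InfoPerLevel { .. } => {} // data about the preceding block
            Message::StartOfLevel => return Err("unexpected Start of level"),
        }
    }
    Err("inbox must close with End of level")
}

fn main() {
    let inbox = vec![
        Message::StartOfLevel,
        Message::InfoPerLevel { timestamp: 1_705_000_000 },
        Message::External(b"hello".to_vec()),
        Message::EndOfLevel,
    ];
    assert_eq!(process_block(&inbox), Ok(1));
}
```

A real kernel would drain these messages one by one through the `read_input` host function rather than receiving them as a slice.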
+
+### Reveal data channel
+
+Smart Rollups can request arbitrary information through the _reveal data channel_.
+Importantly, as opposed to internal and external messages, the information that passes through the reveal data channel does not pass through layer 1, so it is not limited by the bandwidth of layer 1 and can include large amounts of data.
+
+The reveal data channel supports these requests:
+
+- A rollup node can request an arbitrary data page of up to 4KB if it knows the blake2b hash of the page; these are known as _preimage requests_.
+To transfer more than 4KB of data, rollups must use multiple pages, which may contain hashes that point to other pages.
+
+- A rollup node can request information about the rollup, including the address and origination level of the rollup; these are known as _metadata requests_.
+
+{/*
+TODO how is this data provided?
+Where does it come from?
+Do we need instructions on how to provide data?
+Eventually include:
+  - importing data from a DAC certificate (which can contain anything ultimately, including a kernel to upgrade to)
+  - revealing data from the (WIP) DAL
+*/}
+
+## Smart Rollup lifecycle
+
+The general flow of a Smart Rollup goes through these phases:
+
+1. Origination: A user called the _rollup operator_ originates the Smart Rollup to layer 1 and one or more users start nodes based on that Smart Rollup to independently verify its operation.
+1. Commitment periods: The Smart Rollup nodes receive the messages in the Smart Rollup inbox, run processing based on those messages, generate but do not run outbox messages, and publish a hash of their state at the end of the period, called a commitment.
+1. Refutation periods: Nodes can publish a concurrent commitment to refute a published commitment.
+1. Triggering outbox messages: When the commitment can no longer be refuted, any client can trigger outbox messages, which create transactions.
+
+Here is more information on each of these phases:
+
+{/* TODO diagram of commitment periods and refutation periods? */}
+
+### Origination
+
+Like smart contracts, Smart Rollups are deployed to layer 1 in a process called _origination_.
+
+The origination process stores data about the rollup on layer 1, including:
+
+- An address for the rollup, which starts with `sr1`
+- The type of proof-generating virtual machine (PVM) for the rollup, which defines the execution engine of the rollup kernel; currently only the `wasm_2_0_0` PVM is supported
+- The installer kernel, which is a WebAssembly program that allows nodes to download and install the complete rollup kernel
+- The Michelson data type of the messages it receives from layer 1
+- The genesis commitment that forms the basis for commitments that rollup nodes publish in the future
+
+After it is originated, anyone can run a Smart Rollup node based on this information.
+
+### Commitment periods
+
+Starting from the rollup origination level, levels are partitioned into _commitment periods_ of 60 consecutive layer 1 blocks.
+During each commitment period, each rollup node receives the messages in the rollup inbox, processes them, and updates its state.
+
+Because Smart Rollup nodes behave in a deterministic manner, their states should all be the same if they have processed the same inbox messages with the same kernel starting from the same origination level.
+This state is referred to as the "state of the rollup."
+
+At the end of a commitment period, the next commitment period starts.
+
+Any time after each commitment period, at least one rollup node must publish a hash of its state to layer 1, which is called its _commitment_.
+Each commitment builds on the previous commitment, and so on, back to the genesis commitment from when the Smart Rollup was originated.
+
+Nodes must stake 10,000 tez along with their commitments.
+When nodes make identical commitments, their stakes are combined into a single stake for the commitment. + +### Refutation periods + +Because the PVM is deterministic and all of the inputs are the same for all nodes, any honest node that runs the same Smart Rollup produces the same commitment. +As long as nodes publish matching commitments, they continue running normally. + +When the first commitment for a past commitment period is published, a refutation period starts, during which any rollup node can publish its own commitment for the same commitment period, especially if it did not achieve the same state. +During the refutation period for a commitment period, if two or more nodes publish different commitments, two of them play a _refutation game_ to identify the correct commitment. +The nodes automatically play the refutation game by stepping through their logic using the PVM to identify the point at which they differ. +At this point, the PVM is used to identify the correct commitment, if any. + +Each refutation game has one of two results: + +- Neither commitment is correct. +In this case, the protocol burns both commitments' stakes and eliminates both commitments. + +- One commitment is correct and the other is not. +In this case, the protocol eliminates the incorrect commitment, burns half of the incorrect commitment's stake, and gives the other half to the correct commitment's stake. + +This refutation game happens as many times as is necessary to eliminate incorrect commitments. +Because the node that ran the PVM correctly is guaranteed to win the refutation game, a single honest node is enough to ensure that the Smart Rollup is running correctly. +This kind of Smart Rollup is called a Smart Optimistic Rollup because the commitments are assumed to be correct until they are proven wrong by an honest rollup node. 
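The stake arithmetic of a single refutation game can be summarized in a short sketch (amounts in mutez; this two-player settlement is purely illustrative and follows the rules described above):

```rust
/// Stakes in mutez (1 tez = 1,000,000 mutez); each commitment
/// carries a 10,000-tez stake.
const STAKE: u64 = 10_000 * 1_000_000;

enum GameResult {
    /// Neither commitment was correct: both stakes are burned.
    BothWrong,
    /// Player 0 or player 1 won: half of the loser's stake is burned
    /// and the other half goes to the winner.
    Winner(usize),
}

/// Returns (stake of player 0, stake of player 1, burned amount)
/// after one refutation game, following the rules in the text.
fn settle(result: GameResult) -> (u64, u64, u64) {
    match result {
        GameResult::BothWrong => (0, 0, 2 * STAKE),
        GameResult::Winner(0) => (STAKE + STAKE / 2, 0, STAKE / 2),
        GameResult::Winner(_) => (0, STAKE + STAKE / 2, STAKE / 2),
    }
}

fn main() {
    // Honest node (player 0) wins: it keeps its own stake and gains
    // half of the dishonest stake; the other half is burned.
    let (p0, p1, burned) = settle(GameResult::Winner(0));
    assert_eq!(p0, 15_000 * 1_000_000);
    assert_eq!(p1, 0);
    assert_eq!(burned, 5_000 * 1_000_000);
    // No correct commitment: everything is burned.
    assert_eq!(settle(GameResult::BothWrong), (0, 0, 20_000 * 1_000_000));
}
```

Note how the total (winner's stake plus burned amount) always equals the two original stakes, so no tez is created by the game.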
+ +When there is only one commitment left, either because all nodes published identical commitments during the whole refutation period or because this commitment won the refutation games and eliminated all other commitments, then this correct commitment can be _cemented_ by a dedicated layer 1 operation and becomes final and unchangeable. +The commitments for the next commitment period build on the last cemented commitment. + +The refutation period lasts 2 weeks on Mainnet; it can be different on other networks. + +### Triggering outbox messages + +Smart Rollups can generate transactions to run on layer 1, but those transactions do not run immediately. +When a commitment includes layer 1 transactions, these transactions go into the Smart Rollup outbox and wait for the commitment to be cemented. + +After the commitment is cemented, clients can trigger transactions in the outbox with the Octez client `execute outbox message` command. +When they trigger a transaction, it runs like any other call to a smart contract. +For more information, see [Triggering the execution of an outbox message](https://tezos.gitlab.io/shell/smart_rollup_node.html?highlight=triggering) in the Octez documentation. + +## Examples + +For examples of Smart Rollups, see this repository: https://gitlab.com/tezos/kernel-gallery. 
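As a back-of-the-envelope illustration of the commitment periods described above, the 60-block partitioning can be sketched in a few lines of Python. This is a toy calculation with made-up levels, not part of any Octez API:

```python
# Sketch of the commitment-period arithmetic described above, using the
# 60-block period. The level values and helper names are illustrative only.

COMMITMENT_PERIOD = 60  # consecutive layer 1 blocks per commitment period

def commitment_period(origination_level, level):
    """0-based index of the commitment period that `level` falls in."""
    assert level >= origination_level
    return (level - origination_level) // COMMITMENT_PERIOD

def period_bounds(origination_level, period_index):
    """First and last layer 1 level covered by a given commitment period."""
    start = origination_level + period_index * COMMITMENT_PERIOD
    return start, start + COMMITMENT_PERIOD - 1

origination = 4_000_000  # hypothetical origination level
print(commitment_period(origination, 4_000_125))  # -> 2
print(period_bounds(origination, 2))              # -> (4000120, 4000179)
```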
diff --git a/docs/architecture/tokens.md b/docs/architecture/tokens.md index 5ba3e6f94..78235e5f8 100644 --- a/docs/architecture/tokens.md +++ b/docs/architecture/tokens.md @@ -21,7 +21,7 @@ One exception is tickets, which are directly stored and managed by smart contrac To learn about tokens, see these tutorials: - [Create an NFT](../tutorials/create-an-nft) -- [Build your first app on Tezos](../tutorials/build-your-first-app) +- [Build a simple web application](../tutorials/build-your-first-app) ## Fungible and non-fungible tokens diff --git a/docs/dApps.mdx b/docs/dApps.mdx index f8f808787..ded06035e 100644 --- a/docs/dApps.mdx +++ b/docs/dApps.mdx @@ -47,6 +47,6 @@ For information on typical tasks that dApps do, see: These tutorials cover dApps of different complexities: -- For a simple dApp, see [Build your first app on Tezos](./tutorials/build-your-first-app) +- For a simple dApp, see [Build a simple web application](./tutorials/build-your-first-app) - For a dApp that mints NFTs, see [Mint NFTs from a web app](./tutorials/create-an-nft/nft-web-app) - For a large dApp that allows users to buy and sell NFTs, see [Build an NFT marketplace](./tutorials/build-an-nft-marketplace) diff --git a/docs/dApps/sending-transactions.md b/docs/dApps/sending-transactions.md index ce64a1615..fda2b0529 100644 --- a/docs/dApps/sending-transactions.md +++ b/docs/dApps/sending-transactions.md @@ -82,7 +82,7 @@ try { } ``` -For examples of calling smart contracts, see tutorials such as [Build your first app on Tezos](../tutorials/build-your-first-app) or [Create a contract and web app that mints NFTs](../tutorials/create-an-nft/nft-taquito). +For examples of calling smart contracts, see tutorials such as [Build a simple web application](../tutorials/build-your-first-app) or [Create a contract and web app that mints NFTs](../tutorials/create-an-nft/nft-taquito). 
For more information about using Taquito, see [Smart contracts](https://tezostaquito.io/docs/smartcontracts) in the Taquito documentation. diff --git a/docs/dApps/taquito.md b/docs/dApps/taquito.md index c12c122c0..b418094f7 100644 --- a/docs/dApps/taquito.md +++ b/docs/dApps/taquito.md @@ -89,13 +89,13 @@ Tezos.setWalletProvider(wallet) ## Getting data from the Tezos blockchain -Taquito provides methods to get different types of data from the Tezos blockchain, for example, the balance of an implicit account, the storage of a contract or token metadata. +Taquito provides methods to get different types of data from the Tezos blockchain, for example, the balance of a user account, the storage of a contract or token metadata. > Note: querying data from the blockchain doesn't create a new transaction. ### Getting the balance of an account -Taquito allows developers to get the current balance in tez of an implicit account. The `getBalance` method is available on the instance of the TezosToolkit and requires a parameter of type `string` that represents the address of the account. +Taquito allows developers to get the current balance in tez of a user account. The `getBalance` method is available on the instance of the TezosToolkit and requires a parameter of type `string` that represents the address of the account. The returned value is of type `BigNumber`: diff --git a/docs/dApps/wallets.md b/docs/dApps/wallets.md index 877f20c65..4c8ddbc05 100644 --- a/docs/dApps/wallets.md +++ b/docs/dApps/wallets.md @@ -29,7 +29,7 @@ The primary tools that dApps use to connect to wallets are: ## Beacon and Taquito Most of the time, dApps use Beacon and Taquito together for a straightforward way to connect to wallets and submit transactions. -For an example, see the tutorial [Build your first app on Tezos](../tutorials/build-your-first-app). +For an example, see the tutorial [Build a simple web application](../tutorials/build-your-first-app). 
### Connecting to wallets @@ -49,7 +49,7 @@ const address = await wallet.getPKH(); When this code runs, Beacon opens a popup window that guides the user through connecting their wallet. Then the application can send transactions to Tezos. -See [Part 3: Sending transactions](../tutorials/build-your-first-app/sending-transactions) in the tutorial [Build your first app on Tezos](../tutorials/build-your-first-app). +See [Part 3: Sending transactions](../tutorials/build-your-first-app/sending-transactions) in the tutorial [Build a simple web application](../tutorials/build-your-first-app). ### Reconnecting to wallets diff --git a/docs/overview/common-applications.md b/docs/overview/common-applications.md index db0ee374a..135f2348a 100644 --- a/docs/overview/common-applications.md +++ b/docs/overview/common-applications.md @@ -22,7 +22,7 @@ Tezos is being used by the French Armies and Gendarmerie's Information Center to In recent years, the concept of Central Bank Digital Currencies (CBDCs) has gained traction, with several countries around the world exploring their own CBDC projects. Société Générale carried out a series of successful tests [using Tezos](https://decrypt.co/112127/societe-generales-crypto-division-lands-regulatory-approval-france) to explore the potential of CBDCs. In September 2020, the bank announced that it had completed a pilot program using a custom-built version of the Tezos blockchain to simulate the issuance and circulation of CBDCs. The pilot involved testing the technology's ability to handle transactions, make payments, and settle transactions in a digital environment. -The Califonia DMV is also using Tezos for its project to [put car titles on the blockchain](https://fortune.com/crypto/2023/01/26/california-announces-dmv-run-blockchain-through-partnership-with-tezos/). 
+The California DMV is also using Tezos for its project to [put car titles on the blockchain](https://fortune.com/crypto/2023/01/26/california-announces-dmv-run-blockchain-through-partnership-with-tezos/).

[Sword Group](https://www.sword-group.com/2020/09/28/sword-launches-tezos-digisign/), an international technology company, launched DigiSign, an open-source tool built on Tezos that enables users to digitally sign, certify, and verify the authenticity of digital documents.
diff --git a/docs/overview/glossary.md b/docs/overview/glossary.md
index 9092cadde..dbdb12eb2 100644
--- a/docs/overview/glossary.md
+++ b/docs/overview/glossary.md
@@ -129,7 +129,7 @@ The following is adapted from this [Agora post](https://forum.tezosagora.org/t/n
 In the context, each account is associated with a balance (an amount of tez available).

-  An account can be either an originated account or an implicit account.
+  An account can be a user account or a smart contract.

- **Baker**

@@ -190,9 +190,9 @@ The following is adapted from this [Agora post](https://forum.tezosagora.org/t/n

- **Delegate**

-  An implicit account that can participate in consensus and in governance.
+  A user account that can participate in consensus and in governance.
  Actual participation is under further provisions, like having a minimal stake.
-  An implicit account becomes a delegate by registering as such.
+  A user account becomes a delegate by registering as such.
  Through delegation, other accounts can delegate their rights to a delegate account.
  The delegate's rights are calculated based on its stake.
  Note that `tz4` accounts cannot be delegates.
@@ -252,15 +252,7 @@ The following is adapted from this [Agora post](https://forum.tezosagora.org/t/n

- **Implicit account**

-  An account that is linked to a public key. Contrary to a smart
-  contract, an implicit account cannot include a script and it
-  cannot reject incoming transactions.
-
-  If *registered*, an implicit account can act as a delegate.
- - The address of an implicit account always starts with the - letters tz followed by 1, 2, 3, or 4 (depending on the - signature scheme) and finally the hash of the public key. + See [User account](#user-account). - **Layer 1** @@ -345,6 +337,20 @@ The following is adapted from this [Agora post](https://forum.tezosagora.org/t/n An operation to transfer tez between two accounts, or to run the code of a smart contract. + + +- **User account** + + An account that is linked to a public key. Contrary to a smart + contract, a user account cannot include a script and it + cannot reject incoming transactions. + + If *registered*, a user account can act as a delegate. + + The address of a user account always starts with the + letters tz followed by 1, 2, 3, or 4 (depending on the + signature scheme) and finally the hash of the public key. + - **Validation pass** An index (a natural number) associated with a particular kind of diff --git a/docs/overview/quickstart.md b/docs/overview/quickstart.md index 10bd53e33..c7aeb3743 100644 --- a/docs/overview/quickstart.md +++ b/docs/overview/quickstart.md @@ -3,4 +3,4 @@ Simple page to provide links to tutorials and what you learn from each one. - To learn about smart contracts, go to 'Deploy your first smart contract' -- To learn about dApps, go to 'Build your first app on Tezos' \ No newline at end of file +- To learn about dApps, go to 'Build a simple web application' \ No newline at end of file diff --git a/docs/smart-contracts/creating.md b/docs/smart-contracts/creating.md index 1803df43d..63f788fbd 100644 --- a/docs/smart-contracts/creating.md +++ b/docs/smart-contracts/creating.md @@ -10,7 +10,7 @@ This documentation provides step-by-step instructions for creating smart contrac ## Choosing your smart contract language Tezos supports a variety of smart contract [languages](./languages): Michelson, SmartPy, LIGO, Archetype. 
-You can select a language based on your familarity with programming paragims, the complexity of the contract you want to deploy, and the specific features you require. Here's a more detailed table for each language: +You can select a language based on your familiarity with programming paradigms, the complexity of the contract you want to deploy, and the specific features you require. Here's a more detailed table for each language: | | **Michelson** | **SmartPy** | **LIGO** | **Archetype** | |:----------------:|:----------------------------------------------------------:|:-----------------------------------------------------:|:-------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------:| @@ -18,7 +18,7 @@ You can select a language based on your familarity with programming paragims, th | **Capabilities** | Full control over contract, optimal for gas efficiency | Easy to write, automatically manages stack operations | Statically-typed, strong error checking | Specialized for formal verification and correctness | | **Use Cases** | Optimized contracts, developers with blockchain experience | Python developers, rapid prototyping | Developers familiar with static typing, variety of mainstream programming backgrounds | High-security contracts, developers looking for formal proof of contract behavior | -For beginners, we recommand **SmartPy** or **LIGO** for their higher-level more abstracted approach. +For beginners, we recommend **SmartPy** or **LIGO** for their higher-level more abstracted approach. 
## Making a strategic choice diff --git a/docs/smart-contracts/data-types/complex-data-types.md b/docs/smart-contracts/data-types/complex-data-types.md index 801c9c15b..c120d688f 100644 --- a/docs/smart-contracts/data-types/complex-data-types.md +++ b/docs/smart-contracts/data-types/complex-data-types.md @@ -392,7 +392,7 @@ The ticket's information is public and can be read by any contract that holds th Contracts can pass tickets to entrypoints to change which contract is in control of the ticket. If contract A passes a ticket to contract B, contract A loses all access to the ticket. -Contracts can pass tickets only to other contracts (implicit accounts) because the entrypoint must accept a ticket of the correct type; contracts cannot pass tickets to user accounts. +Contracts can pass tickets to other contracts via entrypoints accepting a ticket of the correct type; contracts can also pass tickets to user accounts. ### Ticket features diff --git a/docs/smart-contracts/data-types/primitive-data-types.md b/docs/smart-contracts/data-types/primitive-data-types.md index bb3920e74..629d5cfce 100644 --- a/docs/smart-contracts/data-types/primitive-data-types.md +++ b/docs/smart-contracts/data-types/primitive-data-types.md @@ -137,8 +137,8 @@ For more information about serialization, see [Serialization](../serialization). Boolean types on Tezos (`bool`) work the same way as in most programming languages. -- A boolean value can be `True` or `False` -- Comparison operators produce boolean values +- A Boolean value can be `True` or `False` +- Comparison operators produce Boolean values - Boolean values can be used in conditional statements or `while` loops - The usual logic operators are supported: `AND`, `OR`, `XOR`, `NOT` @@ -160,16 +160,16 @@ The following operations are supported on timestamps: ## Addresses {#addresses} -On Tezos, each account is uniquely identified by its `address`, whether it is a user account (implicit account) or a contract (originated account). 
+On Tezos, each account is uniquely identified by its `address`. Internally, addresses take the form of a `string` type. -For implicit accounts, the string starts with "tz1", "tz2", "tz3" or "tz4". -For originated accounts, the string starts with "KT1". +For user accounts, the string starts with "tz1", "tz2", "tz3" or "tz4". +For smart contract accounts, the string starts with "KT1". | Type of Account | Example | | --- | --- | -| Implicit Account | `tz1YWK1gDPQx9N1Jh4JnmVre7xN6xhGGM4uC` | -| Originated Account | `KT1S5hgipNSTFehZo7v81gq6fcLChbRwptqy` | +| User account | `tz1YWK1gDPQx9N1Jh4JnmVre7xN6xhGGM4uC` | +| Smart contract | `KT1S5hgipNSTFehZo7v81gq6fcLChbRwptqy` | The next part of the string is a `Base58` encoded hash, followed by a 4-byte checksum. diff --git a/docs/smart-contracts/deploying.md b/docs/smart-contracts/deploying.md index 9e3656b26..68a8d3333 100644 --- a/docs/smart-contracts/deploying.md +++ b/docs/smart-contracts/deploying.md @@ -5,7 +5,7 @@ last_update: date: 6 November 2023 --- ## Introduction -In Tezos, deploying a smart contract is often referred to as “origination”. This process essentially creates a new account that holds the smart contract's script. Contracts originated in this manner have addresses that start with `KT1` (known as originated accounts), which distinguishes them from the implicit accounts with addresses beginning with `tz1`, `tz2`, or `tz3`. +In Tezos, deploying a smart contract is often referred to as “origination”. This process essentially creates a new account that holds the smart contract's script. Contracts originated in this manner have addresses that start with `KT1`, which distinguishes them from the user accounts with addresses beginning with `tz1`, `tz2`, or `tz3`. 
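The prefix conventions above can be illustrated with a short sketch. This is simplified: a real check would also decode the Base58 payload and verify the 4-byte checksum, and it only looks at the leading characters of the address:

```python
# Simplified classification of a Tezos address by its prefix, as described
# above. Real validation would also check the Base58 payload and checksum.

def account_kind(address):
    if address.startswith(("tz1", "tz2", "tz3", "tz4")):
        return "user account"
    if address.startswith("KT1"):
        return "smart contract"
    if address.startswith("sr1"):
        return "smart rollup"
    return "unknown"

print(account_kind("tz1YWK1gDPQx9N1Jh4JnmVre7xN6xhGGM4uC"))  # -> user account
print(account_kind("KT1S5hgipNSTFehZo7v81gq6fcLChbRwptqy"))  # -> smart contract
```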
## Prerequisites
- Compile your contract and its initial storage
diff --git a/docs/smart-contracts/events.md b/docs/smart-contracts/events.md
index 4619a8a51..a1f01f2f4 100644
--- a/docs/smart-contracts/events.md
+++ b/docs/smart-contracts/events.md
@@ -23,7 +23,27 @@ The event can also include these optional fields:
 Each high-level language has its own way of creating events.
 The compiled Michelson code uses the `EMIT` command to emit the event.

-For example, this SmartPy contract stores a number and emits events when that amount changes:
+For example, this contract stores a number and emits events when that amount changes:
+
+JsLIGO
+
+```ligolang
+type storage = int;
+
+@entry
+const add = (addAmount: int, s: storage): [list<operation>, storage] =>
+  [list([Tezos.emit("%add", { source: Tezos.get_source(), addAmount: addAmount })]),
+   s + addAmount
+  ];
+
+@entry
+const reset = (_: unit, s: storage): [list<operation>, storage] =>
+  [list([Tezos.emit("%reset", { source: Tezos.get_source(), previousValue: s })]),
+   0
+  ];
+```
+
+SmartPy

 ```python
 import smartpy as sp
@@ -82,7 +82,7 @@ Tezos.setStreamProvider(
   Tezos.getFactory(PollingSubscribeProvider)({
     shouldObservableSubscriptionRetry: true,
     pollingIntervalMilliseconds: 1500,
-  }),
+  })
 );

 try {
@@ -91,7 +91,7 @@ try {
     address: contractAddress,
   });

-  sub.on("data", console.log);
+  sub.on('data', console.log);
 } catch (e) {
   console.log(e);
 }
@@ -103,50 +103,47 @@ The event data is in Michelson format, so an event from the `reset` entrypoint o
 ```json
 {
-    "opHash": "onw8EwWVnZbx2yBHhL72ECRdCPBbw7z1d5hVCJxp7vzihVELM2m",
-    "blockHash": "BM1avumf2rXSFYKf4JS7YJePAL3gutRJwmazvqcSAoaqVBPAmTf",
-    "level": 4908983,
-    "kind": "event",
-    "source": "KT1AJ6EjaJHmH6WiExCGc3PgHo3JB5hBMhEx",
-    "nonce": 0,
-    "type": {
-        "prim": "pair",
-        "args": [
-            {
-                "prim": "int",
-                "annots": [
-                    "%previousValue"
-                ]
-            },
-            {
-                "prim": "address",
-                "annots": [
-                    "%source"
-                ]
-            }
-        ]
-    },
-    "tag": "reset",
-    "payload": {
-        "prim": "Pair",
-        "args": [
-            {
-                "int": 
"17" - }, - { - "bytes": "000032041dca76bac940b478aae673e362bd15847ed8" - } - ] - }, - "result": { - "status": "applied", - "consumed_milligas": "100000" - } + "opHash": "onw8EwWVnZbx2yBHhL72ECRdCPBbw7z1d5hVCJxp7vzihVELM2m", + "blockHash": "BM1avumf2rXSFYKf4JS7YJePAL3gutRJwmazvqcSAoaqVBPAmTf", + "level": 4908983, + "kind": "event", + "source": "KT1AJ6EjaJHmH6WiExCGc3PgHo3JB5hBMhEx", + "nonce": 0, + "type": { + "prim": "pair", + "args": [ + { + "prim": "int", + "annots": ["%previousValue"] + }, + { + "prim": "address", + "annots": ["%source"] + } + ] + }, + "tag": "reset", + "payload": { + "prim": "Pair", + "args": [ + { + "int": "17" + }, + { + "bytes": "000032041dca76bac940b478aae673e362bd15847ed8" + } + ] + }, + "result": { + "status": "applied", + "consumed_milligas": "100000" + } } ``` Note that the address field is returned as a byte value. To convert the bytes to an address, use the `encodePubKey` function in `@taquito/utils`. + You can see the complete content of the event operation by looking up the operation hash in a block explorer. diff --git a/docs/smart-contracts/languages/ligo.md b/docs/smart-contracts/languages/ligo.md index 9ac6b4da6..9da88a75c 100644 --- a/docs/smart-contracts/languages/ligo.md +++ b/docs/smart-contracts/languages/ligo.md @@ -8,7 +8,7 @@ LIGO is a functional programming language that is intended to be both user-frien LIGO offers two syntaxes: -- JsLIGO, a sytax that is inspired by TypeScript/JavaScript +- JsLIGO, a syntax that is inspired by TypeScript/JavaScript - CameLIGO, a syntax that is inspired by OCaml You can use either syntax and compile to Michelson to run on Tezos. @@ -16,7 +16,7 @@ You can use either syntax and compile to Michelson to run on Tezos. 
To learn LIGO, see these tutorials:

- [Deploy a smart contract with CameLIGO](../../tutorials/smart-contract/cameligo)
-- [Deploy a smart contract with jsLIGO](../../tutorials/smart-contract/jsligo)
+- [Deploy a smart contract with JsLIGO](../../tutorials/smart-contract/jsligo)

Let's define a LIGO contract in the two flavours above.
diff --git a/docs/smart-contracts/logic/comparing.md b/docs/smart-contracts/logic/comparing.md
index b9843a120..e7791c02e 100644
--- a/docs/smart-contracts/logic/comparing.md
+++ b/docs/smart-contracts/logic/comparing.md
@@ -17,7 +17,7 @@ How values are compared depends on the type of the values:
- Strings, `bytes`, `key_hash`, `key`, `signature` and `chain_id` values are compared lexicographically.
- Boolean values are compared so that false is strictly less than true.
- Addresses are compared as follows:
-  - Addresses of implicit accounts are strictly less than addresses of originated accounts.
+  - Addresses of user accounts are strictly less than addresses of smart contracts.
  - Addresses of the same type are compared lexicographically.
- Pair values (and therefore records) are compared component by component, starting with the first component.
- Options are compared as follows:
diff --git a/docs/smart-contracts/multisig-usage.md b/docs/smart-contracts/multisig-usage.md
index 6b65c1ac5..67bdf6456 100644
--- a/docs/smart-contracts/multisig-usage.md
+++ b/docs/smart-contracts/multisig-usage.md
@@ -58,7 +58,7 @@ Note, this section uses the
The Generic Multisig allows us to set administrators of the contract
(`signerKeys`) and the number of those administrators required to sign
-(`threshold`). As of writing, the command line tool only allows `tz1` implicit
+(`threshold`). As of writing, the command line tool only allows `tz1` user
accounts to be administrators, though the contract allows `KT1` originated
accounts as well.
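The threshold rule that `signerKeys` and `threshold` express can be sketched in a few lines. This is a toy illustration of the counting logic only, with made-up addresses and signature values, not the Generic Multisig contract's actual Michelson (which also verifies each signature cryptographically):

```python
# Toy sketch of the multisig threshold rule described above: an action is
# approved once at least `threshold` of the registered signers have signed.
# Addresses and signature strings are hypothetical; a real contract would
# verify each signature against the signer's public key.

def is_approved(signer_keys, signatures, threshold):
    """Count signatures from registered signers only; ignore the rest."""
    valid = sum(1 for key in signer_keys if signatures.get(key))
    return valid >= threshold

signers = ["tz1alice", "tz1bob", "tz1carol"]          # hypothetical signers
sigs = {"tz1alice": "sig-a", "tz1carol": "sig-c"}     # hypothetical signatures

print(is_approved(signers, sigs, 2))  # -> True  (2 of 3 signed)
print(is_approved(signers, sigs, 3))  # -> False (only 2 of 3 signed)
```

Signatures from keys outside `signer_keys` never count toward the threshold, which mirrors why the set of administrators must be fixed in the contract.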
diff --git a/docs/smart-contracts/special-values.md b/docs/smart-contracts/special-values.md index ae3a3b5c7..1b25057ce 100644 --- a/docs/smart-contracts/special-values.md +++ b/docs/smart-contracts/special-values.md @@ -27,7 +27,7 @@ For example, assume that user A called contract B that in turn called contract C When C runs, `source` is the address of A, while `caller` is the address of B. :::warning Access permissions -It is best practice to implement permissioning based on `caller` instead of `source` because any implicit account can call any entrypoint on Tezos. +It is best practice to implement permissioning based on `caller` instead of `source` because any user account can call any entrypoint on Tezos. ::: - `self`: The address of the contract itself. diff --git a/docs/tutorials.mdx b/docs/tutorials.mdx index 8149c111a..16bfc8711 100644 --- a/docs/tutorials.mdx +++ b/docs/tutorials.mdx @@ -54,7 +54,7 @@ These tutorials contain multiple parts and are intended for developers with some /> + + + + diff --git a/docs/tutorials/build-an-nft-marketplace/part-1.md b/docs/tutorials/build-an-nft-marketplace/part-1.md index 418247299..b837790e4 100644 --- a/docs/tutorials/build-an-nft-marketplace/part-1.md +++ b/docs/tutorials/build-an-nft-marketplace/part-1.md @@ -386,7 +386,7 @@ To save time, this tutorial provides a starter React application. ``` This application contains basic navigation and the ability to connect to wallets. - For a tutorial that includes connecting to wallets, see [Build your first app on Tezos](../build-your-first-app). + For a tutorial that includes connecting to wallets, see [Build a simple web application](../build-your-first-app). Because Taqueria automatically keeps track of your deployed contract, the application automatically accesses the contract and shows that there are no NFTs in it yet. 
The application looks like this: diff --git a/docs/tutorials/build-files-archive-with-dal.mdx b/docs/tutorials/build-files-archive-with-dal.mdx new file mode 100644 index 000000000..d4f9f41a2 --- /dev/null +++ b/docs/tutorials/build-files-archive-with-dal.mdx @@ -0,0 +1,151 @@ +--- +title: Implement a file archive with the DAL and a Smart Rollup +authors: 'Tezos Core Developers' +last_update: + date: 22 January 2024 +--- + +import LucidDiagram from '@site/src/components/LucidDiagram'; + +:::note Experimental +The data availability layer is an experimental feature that is not yet available on Tezos Mainnet. +The way the DAL works may change significantly before it is generally available. +::: + +The data availability layer (DAL) is a companion peer-to-peer network for the Tezos blockchain, designed to provide additional data bandwidth to Smart Rollups. +It allows users to share large amounts of data in a way that is decentralized and permissionless, because anyone can join the network and post and read data on it. + +In this tutorial, you will set up a file archive that stores and retrieves files with the DAL. +You will learn: + +- How data is organized and shared with the DAL and the reveal data channel +- How to read data from the DAL in a Smart Rollup +- How to host a DAL node +- How to publish data and files with the DAL + +Because the DAL is not yet available on Tezos Mainnet, this tutorial uses the [Weeklynet test network](https://teztnets.com/weeklynet-about), which runs on a newer version of the protocol that includes the DAL. + +See these links for more information about the DAL: + +- For technical information about how the DAL works, see [Data Availability Layer](https://tezos.gitlab.io/shell/dal.html) in the Octez documentation. +- For more information about the approach for the DAL, see [The Rollup Booster: A Data-Availability Layer for Tezos](https://research-development.nomadic-labs.com/data-availability-layer-tezos.html). 
+
+## Prerequisites
+
+This article assumes some familiarity with Smart Rollups.
+If you are new to Smart Rollups, see the tutorial [Deploy a Smart Rollup](./smart-rollup).
+
+### Set up a Weeklynet environment and account
+
+Because Weeklynet requires a specific version of the Octez suite, you can't use most wallet applications and installations of the Octez suite with it.
+Instead, you must set up an environment with a specific version of the Octez suite and use it to create and fund an account.
+Note that Weeklynet is reset every Wednesday, so you must recreate your environment and account after the network resets.
+
+The easiest way to do this is to use the Docker image that is generated each time Weeklynet is reset and recreated.
+As another option, you can build the specific version of the Octez suite locally.
+For instructions, see the Weeklynet page at https://teztnets.com/weeklynet-about.
+
+To set up an environment and account in a Docker container, follow these steps:
+
+1. From the [Weeklynet](https://teztnets.com/weeklynet-about) page, find the Docker command to create a container from the correct Docker image, as in this example:
+
+   ```bash
+   docker run -it --entrypoint=/bin/sh tezos/tezos:master_7f3bfc90_20240116181914
+   ```
+
+   The image tag in this command changes each time the network is reset.
+
+1. Copy the URL of the public RPC endpoint for Weeklynet, such as `https://rpc.weeklynet-2024-01-17.teztnets.com`.
+This endpoint also changes each time the network is reset.
+
+1. For convenience, you may want to set this endpoint as the value of the `ENDPOINT` environment variable.
+
+1. In the container, initialize the Octez client with that endpoint, as in this example:
+
+   ```bash
+   octez-client -E https://rpc.weeklynet-2024-01-17.teztnets.com config init
+   ```
+
+1. Create an account with the command `octez-client gen keys $MY_ACCOUNT`, where `$MY_ACCOUNT` is an alias for your account.
+
+1. 
Get the public key hash of the new account by running the command `octez-client show address $MY_ACCOUNT`.
+
+1. From the [Weeklynet](https://teztnets.com/weeklynet-about) page, open the Weeklynet faucet and send some tez to the account.
+
+Now you can use this account to deploy Smart Rollups.
+
+### Install Rust
+
+To run this tutorial, install Rust by running the following command.
+The application in this tutorial uses Rust because of its support for WebAssembly (WASM), the language in which Smart Rollup kernels are written.
+Rollups can use any language that has WASM compilation support.
+
+```bash
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+```
+
+Then, add WASM as a compilation target for Rust by running this command:
+
+```bash
+rustup target add wasm32-unknown-unknown
+```
+
+You can see other ways of installing Rust at https://www.rust-lang.org.
+
+## Why the DAL?
+
+The DAL has earned the nickname "Rollup Booster" from its ability to address
+the last bottleneck Smart Rollup developers could not overcome without
+sacrificing decentralization: block space. Smart Rollups offload
+*computation* from layer 1, but the transactions that they process still need to
+originate from somewhere.
+
+By default, that "somewhere" is the layer 1 blocks, yet the size of a Tezos
+block is limited to around 500KBytes. In this model, while Smart Rollups do not
+compete for layer 1 gas anymore, they still compete for block space.
+
+{/* Is this info about the reveal data channel needed here? */}
+Additionally, a Smart Rollup can fetch data from an additional source called the
+reveal data channel, which allows it to retrieve arbitrary data.
+The reveal channel is a powerful way to share data, because it allows a Smart Rollup
+operator to post hashes instead of full data files on layer 1. But it is a
+double-edged sword, because nothing enforces the availability of the data in the
+first place. 
[Solutions exist to address this
+challenge](https://research-development.nomadic-labs.com/introducing-data-availability-committees.html),
+but they are purely off-chain ones, coming with no guarantee from layer 1.
+
+The DAL allows third parties to publish data and have bakers attest that the data is available.
+When enough bakers have attested that the data is available, Smart Rollups can retrieve the data without the need for additional trusted third parties.
+
+## How the DAL works
+
+In this tutorial, you create a file archive application that allows clients to upload data to the DAL.
+You also create a Smart Rollup that listens to the DAL and responds to that data.
+
+The DAL works like this:
+
+1. Users post data to a DAL node.
+1. The DAL node returns a certificate.
+This certificate includes a commitment that the data is available and a proof of the data.
+1. Users post the certificate to layer 1 via the Octez client, which is much cheaper than posting the complete data.
+1. When the certificate is confirmed in a block, layer 1 splits the data into shards and assigns those shards to bakers.
+1. Bakers verify that their assigned shards are available and attest to that availability in their usual block attestations to layer 1.
+They have a certain number of blocks to do so, known as the _attestation lag_; if they have not attested to the data by the end of this period, the certificate is considered bogus and the related data is dropped.
+1. Other DAL nodes get the data from the initial DAL node through the peer-to-peer network.
+1. The Smart Rollup node monitors the blocks and when it sees attested DAL data, it connects to a DAL node to request the data.
+1. The Smart Rollup node stores the data in its durable storage, addressed by its hash.
+Smart Rollups must store the data because it is available on the DAL for only a short time.
+1. Users who know the hash of the data can download it from the Smart Rollup node. 
+ +The overall workflow is summarized in the following figure: + + + +There are many steps in the DAL process, but the most complicated parts (storing and sharing data) are handled automatically by the various daemons in the Octez suite. + +:::note +When you install a Smart Rollup, you provide only the installer kernel on layer 1 and the full kernel via the reveal data channel. +Currently, you cannot send the full kernel data over the data availability layer, so this tutorial relies on the reveal data channel to install the kernel as usual. +::: + +When your environment is ready, get started by going to [Part 1: Getting the DAL parameters](./build-files-archive-with-dal/get-dal-params). diff --git a/docs/tutorials/build-files-archive-with-dal/get-dal-params.mdx b/docs/tutorials/build-files-archive-with-dal/get-dal-params.mdx new file mode 100644 index 000000000..2676e9be4 --- /dev/null +++ b/docs/tutorials/build-files-archive-with-dal/get-dal-params.mdx @@ -0,0 +1,173 @@ +--- +title: "Part 1: Getting the DAL parameters" +authors: 'Tezos Core Developers' +last_update: + date: 17 January 2024 +--- + +import LucidDiagram from '@site/src/components/LucidDiagram'; + +The data availability layer stores information about the available data in layer 1 blocks. +Each block has several byte-vectors called _slots_, each with a maximum size. +DAL users can add information about the available data as _pages_ in these slots, as shown in this figure: + + + +The data in a slot is broken into pages to ensure that each piece of data can fit in a single Tezos operation. +This data must fit in a single operation to allow the Smart Rollup refutation game to work, in which every execution step of the Smart Rollup must be provable to layer 1. +{/* TODO link to Smart Rollup topic for more info on the refutation game */} + +When clients add data, they must specify which slot to add it to. 
+Note that because the DAL is permissionless, clients may try to add data to the same slot in the same block. +In this case, the first operation in the block takes precedence, which leaves the baker that creates the block in control of which data makes it into the block. +Other operations that try to add data to the same slot fail. + +The number and size of these slots can change. +Different networks can have different DAL parameters. +Future changes to the protocol may allow the DAL to resize dynamically based on usage. + +Therefore, clients must get information about the DAL before sending data to it. +In these steps, you set up a simple Smart Rollup to get the current DAL parameters and print them to the log. + +## Prerequisites + +Before you begin, make sure that you have installed the prerequisites and set up an environment and an account as described in [Implement a file archive with the DAL and a Smart Rollup](../build-files-archive-with-dal). + +## Fetching the DAL parameters in a kernel + +To get the DAL parameters, you can use built-in functions in the Tezos [Rust SDK](https://crates.io/crates/tezos-smart-rollup). + +1. In a folder for your project, create a file named `Cargo.toml` with this code: + + ```toml + [package] + name = "files_archive" + version = "0.1.0" + edition = "2021" + + [lib] + crate-type = ["cdylib", "lib"] + + [dependencies] + tezos-smart-rollup = { version = "0.2.2", features = [ "proto-alpha" ] } + ``` + + As a reminder, the kernel of a Smart Rollup is a WASM program. + The `proto-alpha` feature is necessary to get access to the functions specific to the DAL. + +1. Create a file named `src/lib.rs` to be the kernel. + +1. 
In the `src/lib.rs` file, add this code:
+
+   ```rust
+   use tezos_smart_rollup::{kernel_entry, prelude::*};
+
+   pub fn entry<R: Runtime>(host: &mut R) {
+       let param = host.reveal_dal_parameters();
+       debug_msg!(host, "{:?}\n", param);
+   }
+
+   kernel_entry!(entry);
+   ```
+
+   This function gets the DAL parameters of the currently connected network and prints them to the log.
+
+1. Build the kernel:
+
+   ```bash
+   cargo build --release --target wasm32-unknown-unknown
+   cp target/wasm32-unknown-unknown/release/files_archive.wasm .
+   ```
+
+1. Get the installer kernel:
+
+   ```bash
+   cargo install tezos-smart-rollup-installer
+   export PATH="${HOME}/.local/bin:${PATH}"
+   smart-rollup-installer get-reveal-installer \
+     -P _rollup_node/wasm_2_0_0 \
+     -u files_archive.wasm \
+     -o installer.hex
+   ```
+
+Now the Smart Rollup is ready to deploy.
+
+## Deploying the Smart Rollup and starting a node
+
+Follow these steps to deploy the Smart Rollup to Weeklynet and start a node:
+
+1. Run this command to deploy the Smart Rollup, replacing `$MY_ACCOUNT` with your account alias and `$ENDPOINT` with the RPC endpoint:
+
+   ```bash
+   octez-client --endpoint ${ENDPOINT} \
+     originate smart rollup files_archive from ${MY_ACCOUNT} \
+     of kind wasm_2_0_0 of type unit with kernel "$(cat installer.hex)" \
+     --burn-cap 2.0 --force
+   ```
+
+1. Start the node with this command:
+
+   ```bash
+   octez-smart-rollup-node --endpoint ${ENDPOINT} \
+     run observer for files_archive with operators \
+     --data-dir ./_rollup_node --log-kernel-debug
+   ```
+
+   For simplicity, this command runs the Smart Rollup in observer mode, which does not require a stake of 10,000 tez to publish commitments.
+
+1. 
Open a new terminal window and run this command to watch the node's log: + + ```bash + tail -F _rollup_node/kernel.log + ``` + +The log prints the current DAL parameters, as in this example: + +``` +RollupDalParameters { number_of_slots: 32, attestation_lag: 4, slot_size: 65536, page_size: 4096 } +RollupDalParameters { number_of_slots: 32, attestation_lag: 4, slot_size: 65536, page_size: 4096 } +RollupDalParameters { number_of_slots: 32, attestation_lag: 4, slot_size: 65536, page_size: 4096 } +RollupDalParameters { number_of_slots: 32, attestation_lag: 4, slot_size: 65536, page_size: 4096 } +RollupDalParameters { number_of_slots: 32, attestation_lag: 4, slot_size: 65536, page_size: 4096 } +``` + +These parameters are: + +- `number_of_slots`: The number of slots in each block +- `slot_size`: The size of each slot in bytes +- `page_size`: The size of each page in bytes +- `attestation_lag`: The number of subsequent blocks in which bakers can attest that the data is available; if enough attestations are available by the time this number of blocks have been created, the data becomes available to Smart Rollups + +## Setting up a deployment script + +In later parts of this tutorial, you will update and redeploy the Smart Rollup multiple times. +To simplify the process, you can use this script. +To use it, pass the alias of your account in the Octez client: + +```bash +#!/usr/bin/bash + +alias="${1}" + +set -e + +cargo build --release --target wasm32-unknown-unknown + +rm -rf _rollup_node + +cp target/wasm32-unknown-unknown/release/files_archive.wasm . 
+ +smart-rollup-installer get-reveal-installer -P _rollup_node/wasm_2_0_0 \ + -u files_archive.wasm -o installer.hex + +octez-client --endpoint ${ENDPOINT} \ + originate smart rollup files_archive from "${alias}" of kind wasm_2_0_0 \ + of type unit with kernel "$(cat installer.hex)" --burn-cap 2.0 --force + +octez-smart-rollup-node --endpoint ${ENDPOINT} \ + run observer for files_archive with operators --data-dir _rollup_node \ + --dal-node http://localhost:10732 --log-kernel-debug +``` + +In the next section, you will get information about the state of slots in the DAL. +See [Part 2: Getting slot information](./get-slot-info). diff --git a/docs/tutorials/build-files-archive-with-dal/get-slot-info.mdx b/docs/tutorials/build-files-archive-with-dal/get-slot-info.mdx new file mode 100644 index 000000000..281ef074f --- /dev/null +++ b/docs/tutorials/build-files-archive-with-dal/get-slot-info.mdx @@ -0,0 +1,138 @@ +--- +title: "Part 2: Getting slot information" +authors: 'Tezos Core Developers' +last_update: + date: 17 January 2024 +--- + +When clients send data to the DAL, they must choose which slot to put it in. +This can cause conflicts, because only one client can write data to a given slot in a single block. +If more than one client tries to write to the same slot and a baker includes those operations in the same block, only the first operation in the block succeeds in writing data to the slot. +The other operations fail and the clients must re-submit the data to be included in a future block. + +For this reason, clients should check the status of slots to avoid conflicts. +For example, slots 0, 30, and 31 are often used for regression tests. + +To see which slots are in use, you can use the Explorus indexer at https://explorus.io/dal and select Weeklynet. 
+For example, this screenshot shows that slots 10 and 25 are in use:
+
+![The Explorus indexer, showing the slots that are in use in each block](/img/tutorials/dal-explorus-slots.png)
+
+You can also see the state of the DAL slots by running a DAL node.
+To reduce the amount of data that they have to manage, DAL nodes can subscribe to certain slots and ignore the data in others.
+Similarly, the protocol assigns bakers to monitor certain slots.
+
+## Starting a DAL node
+
+To run a DAL node, use the Octez `octez-dal-node` command and pass the slots to monitor in the `--producer-profiles` argument.
+
+Run this command to start a DAL node and monitor slot 0:
+
+```bash
+octez-dal-node run --endpoint ${ENDPOINT} \
+  --producer-profiles=0 --data-dir _dal_node
+```
+
+## Accessing the slot data from a Smart Rollup
+
+Follow these steps to update the Smart Rollup to access information about slot 0:
+
+1. Update the `src/lib.rs` file to have this code:
+
+   ```rust
+   use tezos_smart_rollup::{host::RuntimeError, kernel_entry, prelude::*};
+   use tezos_smart_rollup_host::dal_parameters::RollupDalParameters;
+
+   pub fn run<R: Runtime>(
+       host: &mut R,
+       param: &RollupDalParameters,
+       slot_index: u8,
+   ) -> Result<(), RuntimeError> {
+       let sol = host.read_input()?.unwrap();
+
+       let target_level = sol.level as usize - param.attestation_lag as usize;
+
+       let mut buffer = vec![0u8; param.page_size as usize];
+
+       let bytes_read = host.reveal_dal_page(target_level as i32, slot_index, 0, &mut buffer)?;
+
+       if 0 < bytes_read {
+           debug_msg!(
+               host,
+               "Attested slot at index {} for level {}: {:?}\n",
+               slot_index,
+               target_level,
+               &buffer.as_slice()[0..10]
+           );
+       } else {
+           debug_msg!(
+               host,
+               "No attested slot at index {} for level {}\n",
+               slot_index,
+               target_level
+           );
+       }
+
+       Ok(())
+   }
+
+   pub fn entry<R: Runtime>(host: &mut R) {
+       let param = host.reveal_dal_parameters();
+       debug_msg!(host, "{:?}\n", param);
+
+       match run(host, &param, 0) {
+           Ok(()) => debug_msg!(host, "See you in the next level\n"),
+           Err(_) => debug_msg!(host, "Something went wrong for some reason"),
+       }
+   }
+
+   kernel_entry!(entry);
+   ```
+
+   The key change is the addition of the function `run`.
+   Moving the logic into a function that returns a `Result` type allows the code to use Rust's `?` operator.
+
+   The `run` function proceeds as follows:
+
+   1. First, it uses the DAL parameters to know the first level where a slot might be used.
+   It subtracts the attestation lag from the current level, which it gets from the Smart Rollup inbox; the result is the most recent block that may have attested data in it.
+   1. It allocates a `Vec<u8>` buffer of the current page size.
+   1. It attempts to fill the buffer with the `reveal_dal_page` function provided by the SDK.
+   1. It checks the value returned by the function, which is the number of bytes read.
+   Zero bytes mean that the slot has no attested data in it.
+   Otherwise, it is necessarily the size of the page, because that's the size of the buffer.
+
+1. Update the `Cargo.toml` file to add this dependency at the end:
+
+   ```toml
+   tezos-smart-rollup-host = { version = "0.2.2", features = [ "proto-alpha" ] }
+   ```
+
+1. Run the commands to build and deploy the Smart Rollup and start the node.
+You can use the script in [Part 1: Getting the DAL parameters](./get-dal-params) to simplify the process.
+
+1. In another terminal window, view the log with the command `tail -F _rollup_node/kernel.log`.
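The level arithmetic in step 1 of the `run` function can be sketched on its own. This is an illustration only; the real kernel reads the current level from the inbox's start-of-level message rather than taking it as a parameter:

```rust
// Sketch: the most recent level whose slots can already be revealed is
// the current level minus the attestation lag, because bakers get
// `attestation_lag` blocks to attest the published data.
fn target_level(current_level: u32, attestation_lag: u32) -> Option<u32> {
    // `checked_sub` guards against the first few levels of a chain,
    // where no earlier slot can exist yet.
    current_level.checked_sub(attestation_lag)
}

fn main() {
    // With the attestation lag of 4 shown in the example logs, at level
    // 56880 the kernel asks for the slot published at level 56876.
    assert_eq!(target_level(56880, 4), Some(56876));
    assert_eq!(target_level(2, 4), None);
}
```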
+
+The log shows information about slot 0, as in this example:
+
+```
+RollupDalParameters { number_of_slots: 32, attestation_lag: 4, slot_size: 65536, page_size: 4096 }
+No attested slot at index 0 for level 56875
+See you in the next level
+RollupDalParameters { number_of_slots: 32, attestation_lag: 4, slot_size: 65536, page_size: 4096 }
+Attested slot at index 0 for level 56876: [16, 0, 0, 2, 89, 87, 0, 0, 0, 0]
+See you in the next level
+RollupDalParameters { number_of_slots: 32, attestation_lag: 4, slot_size: 65536, page_size: 4096 }
+No attested slot at index 0 for level 56877
+See you in the next level
+```
+
+For the first 4 Tezos blocks produced after the origination of the Smart Rollup, the kernel reports that no slot has been attested for the targeted level, _even if Explorus states the opposite_.
+This is because, as of January 2024, a Smart Rollup cannot fetch the content of a slot published before it is originated.
+This is why you must wait for 4 blocks before seeing slot page contents being logged.
+
+Now that you can see the state of the slots, you can find an unused slot and publish data to it.
+When you are ready, continue to [Part 3: Publishing on the DAL](./publishing-on-the-dal).
diff --git a/docs/tutorials/build-files-archive-with-dal/publishing-on-the-dal.mdx b/docs/tutorials/build-files-archive-with-dal/publishing-on-the-dal.mdx
new file mode 100644
index 000000000..cf33e9751
--- /dev/null
+++ b/docs/tutorials/build-files-archive-with-dal/publishing-on-the-dal.mdx
@@ -0,0 +1,141 @@
+---
+title: "Part 3: Publishing on the DAL"
+authors: 'Tezos Core Developers'
+last_update:
+  date: 17 January 2024
+---
+
+Now that you can get information about the DAL, the next step is to publish data to it and verify that the kernel can access it.
+
+:::note Planning ahead
+Before trying to run the code yourself, look at [Explorus](https://explorus.io/dal), select Weeklynet, and choose a slot that is not currently being used.
+:::
+
+The examples in this tutorial use slot 10.
+
+## Switching slots
+
+When you have selected a slot that does not appear to be in use, follow these steps to restart the Smart Rollup and DAL node:
+
+1. Stop the DAL node and restart it with a new `--producer-profiles` argument.
+For example, this command uses slot 10:
+
+   ```bash
+   octez-dal-node run --endpoint ${ENDPOINT} \
+     --producer-profiles=10 --data-dir _dal_node
+   ```
+
+1. Update the kernel to monitor that slot by updating this line:
+
+   ```rust
+   match run(host, &param, 0) {
+   ```
+
+   For example, to monitor slot 10, change the 0 to a 10, as in this code:
+
+   ```rust
+   match run(host, &param, 10) {
+   ```
+
+1. Run the commands to build and deploy the Smart Rollup and start the node.
+You can use the script in [Part 1: Getting the DAL parameters](./get-dal-params) to simplify the process.
+
+## Publishing messages
+
+The DAL node provides an RPC endpoint for clients to send data to be added to a slot: `POST /slot`, whose body is the contents of the slot.
+
+1. Run this command to publish a message to the DAL:
+
+   ```bash
+   curl localhost:10732/slot --data '"Hello, world!"' -H 'Content-Type: application/json'
+   ```
+
+   This command assumes that you have not changed the default RPC server address.
+
+   The command returns the certificate from the DAL node, which looks like this example:
+
+   ```json
+   {
+     "commitment": "sh1u3tr3YKPDYUp2wWKCfmV5KZb82FREhv8GtDeR3EJccsBerWGwJYKufsDNH8rk4XqGrXdooZ",
+     "commitment_proof": "8229c63b8e858d9a96321c80a204756020dd13243621c11bec61f182a23714cf6e0985675fff45f1164657ad0c7b9418"
+   }
+   ```
+
+   Note that the value of the message is in double quotes because it must be a valid JSON string, as hinted by the `Content-Type` header.
+
+1. 
Using the values of the commitment and proof from the previous command, post the certificate to layer 1 with this command:
+
+   ```bash
+   commitment="sh1u3tr3YKPDYUp2wWKCfmV5KZb82FREhv8GtDeR3EJccsBerWGwJYKufsDNH8rk4XqGrXdooZ"
+   proof="8229c63b8e858d9a96321c80a204756020dd13243621c11bec61f182a23714cf6e0985675fff45f1164657ad0c7b9418"
+   octez-client --endpoint ${ENDPOINT} \
+     publish dal commitment "${commitment}" from ${MY_ACCOUNT} for slot 10 \
+     with proof "${proof}"
+   ```
+
+   After 4 blocks, you should see a message in the kernel log that looks like this:
+
+   ```
+   RollupDalParameters { number_of_slots: 32, attestation_lag: 4, slot_size: 65536, page_size: 4096 }
+   Attested slot at index 10 for level 57293: [72, 101, 108, 108, 111, 44, 32, 119, 111, 114]
+   See you in the next level
+   ```
+
+   You can verify your message by converting the bytes in the message back to the first 10 characters of the string "Hello, world!"
+
+   If you see a message that says "A slot header for this slot was already proposed," another transaction tried to write to that slot in the same block, so you must try again.
+
+   If you don't see information about the attested slot, check the page at https://explorus.io/dal.
+   If that page shows red (unattested) slots, it's possible that the attesters for the network are offline.
+
+## Publishing files
+
+You can also send raw bytes to the DAL node with the header `Content-Type: application/octet-stream`.
+In this case, you must prefix the data with its size due to limitations of the DAL.
+
+1. Install the `jq` and `xxd` programs.
+
+1. 
Create a file named `upload_file.sh` and add this code: + + ```bash + #!/usr/bin/bash + + path="${1}" + alias="${2}" + index="${3}" + + target="$(mktemp)" + echo "storing temporary file at ${target}" + file_size="$(cat "${path}" | wc -c)" + slot_size_bin="$(printf "%08x" "${file_size}")" + slot_contents="$(cat ${path} | xxd -p)" + + echo -n "${slot_size_bin}${slot_contents}" | xxd -p -r > "${target}" + + certificate="$(curl localhost:10732/slot --data-binary "@${target}" -H 'Content-Type: application/octet-stream')" + + echo "${certificate}" + + commitment="$(echo -n ${certificate} | jq '.commitment' -r)" + proof="$(echo -n ${certificate} | jq '.commitment_proof' -r)" + + octez-client --endpoint ${ENDPOINT} \ + publish dal commitment "${commitment}" from "${alias}" \ + for slot "${index}" with proof "${proof}" + + rm "${target}" + ``` + + The script accepts three arguments: the file to send, the account alias to use and the slot index to use. + This script also assumes that the `PATH` and `ENDPOINT` environment variables are correctly set. + For example: + + ```bash + ./upload_file.sh myFile.txt $MY_ACCOUNT 10 + ``` + + Again, by inspecting the kernel logs, you should be able to see that the file that you wanted to publish is indeed the one fetched by the Smart Rollup. + +Now you can publish data to the DAL and use it in a Smart Rollup. +In the next section, you write to and retrieve the entire slot. +When you are ready, go to [Part 4: Using the entire slot](./using-full-slot). diff --git a/docs/tutorials/build-files-archive-with-dal/using-full-slot.mdx b/docs/tutorials/build-files-archive-with-dal/using-full-slot.mdx new file mode 100644 index 000000000..6ee53c95b --- /dev/null +++ b/docs/tutorials/build-files-archive-with-dal/using-full-slot.mdx @@ -0,0 +1,146 @@ +--- +title: "Part 4: Using the entire slot" +authors: 'Tezos Core Developers' +last_update: + date: 18 January 2024 +--- + +In some cases, you may want to retrieve the entire contents of a slot. 
+For example, it can be convenient to get the entire slot because it has a fixed size, while the data in the slot may be smaller and padded to fit the slot.
+
+## Fetching and storing the full slot
+
+Retrieving the full slot is similar to retrieving any data from the slot.
+In this case, you change the kernel to retrieve data of the exact size of the slot.
+
+1. Update the `run` function in the `src/lib.rs` file to this code:
+
+   ```rust
+   pub fn run<R: Runtime>(
+       host: &mut R,
+       param: &RollupDalParameters,
+       slot_index: u8,
+   ) -> Result<(), RuntimeError> {
+       // Reading one message from the shared inbox is always safe,
+       // because the shared inbox contains at least 3 messages per
+       // Tezos block.
+       let sol = host.read_input()?.unwrap();
+
+       let target_level = sol.level as usize - param.attestation_lag as usize;
+
+       let mut buffer = vec![0u8; param.slot_size as usize];
+
+       let bytes_read = host.reveal_dal_page(target_level as i32, slot_index, 0, &mut buffer)?;
+
+       if bytes_read == 0 {
+           debug_msg!(
+               host,
+               "No attested slot at index {} for level {}\n",
+               slot_index,
+               target_level
+           );
+
+           return Ok(());
+       }
+
+       debug_msg!(
+           host,
+           "Attested slot at index {} for level {}\n",
+           slot_index,
+           target_level
+       );
+
+       let num_pages = param.slot_size / param.page_size;
+
+       for page_index in 1..num_pages {
+           let _result = host.reveal_dal_page(
+               target_level as i32,
+               slot_index,
+               page_index.try_into().unwrap(),
+               &mut buffer[page_index as usize * (param.page_size as usize)
+                   ..(page_index as usize + 1) * (param.page_size as usize)],
+           );
+       }
+
+       let hash = blake2b::digest(&buffer, 32).unwrap();
+       let key = hex::encode(hash);
+       let path = OwnedPath::try_from(format!("/{}", key)).unwrap();
+
+       debug_msg!(host, "Saving slot under `{}'\n", path);
+
+       let () = host.store_write_all(&path, &buffer)?;
+
+       Ok(())
+   }
+   ```
+
+   Now the `run` function works like this:
+
+   1. It allocates a buffer of the size of a slot, not the size of a page.
+   1. 
It tries to fetch the contents of the first page. + If 0 bytes are written by `reveal_dal_page`, the targeted slot has not been + attested for this block. + 1. If the targeted slot has been attested, the function reads as many pages as necessary to get the full slot data. + 1. It stores the data in the durable storage, using the Blake2B hash (encoded in hexadecimal) as its key. + +1. Add these `use` statements to the beginning of the file: + + ```rust + use tezos_crypto_rs::blake2b; + use tezos_smart_rollup::storage::path::OwnedPath; + ``` + + These dependencies use `tezos_crypto_rs` for hashing, and `hex` for encoding. + +1. Add the matching dependencies to the `Cargo.toml` file: + + ```toml + tezos_crypto_rs = { version = "0.5.2", default-features = false } + hex = "0.4.3" + ``` + + Adding `default-features = false` for `tezos_crypto_rs` is necessary for the crate to be compatible with Smart Rollups. + +1. Deploy the Smart Rollup again, publish a file as you did in the previous section, and wait for enough levels to pass. +The Smart Rollup log shows the hash of the data, as in this example: + + ``` + RollupDalParameters { number_of_slots: 32, attestation_lag: 4, slot_size: 65536, page_size: 4096 } + Attested slot at index 10 for level 15482 + Saving slot under `/6a578d1e6746d29243ff81923bcea6375e9344d719ca118e14cd9f3d3b00cd96' + See you in the next level + ``` + +1. Get the data from the slot by passing the hash, as in this example: + + ```bash + hash=6a578d1e6746d29243ff81923bcea6375e9344d719ca118e14cd9f3d3b00cd96 + curl "http://localhost:8932/global/block/head/durable/wasm_2_0_0/value?key=/${hash}" \ + -H 'Content-Type: application/octet-stream' \ + -o slot.bin + ``` + +1. Convert the contents of the slot to text by running this command: + + ```bash + xxd -r -p slot.bin + ``` + + The console shows your message in text, such as "Hi! This is a message to go on the DAL." 
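Because a slot has a fixed size, the bytes you download include zero padding after the file contents. Since the upload script from the previous part prefixed the file with its size as a 4-byte big-endian integer (the `printf "%08x"` encoding), you can strip that padding. A sketch of the decoding (illustrative only; `unpad` is not part of the Octez tooling):

```rust
// Sketch: recover the original file from the padded slot contents.
// Assumes the slot was uploaded with the script from the previous part,
// which prefixes the data with its size as a 4-byte big-endian integer.
fn unpad(slot: &[u8]) -> Option<&[u8]> {
    let size_bytes: [u8; 4] = slot.get(..4)?.try_into().ok()?;
    let size = u32::from_be_bytes(size_bytes) as usize;
    slot.get(4..4 + size)
}

fn main() {
    // A 16-byte "slot" holding the 5-byte payload "hello" plus zero padding.
    let mut slot = vec![0u8; 16];
    slot[..4].copy_from_slice(&5u32.to_be_bytes());
    slot[4..9].copy_from_slice(b"hello");
    assert_eq!(unpad(&slot), Some(&b"hello"[..]));
}
```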
+ +:::note Why `diff` won't work +You cannot use `diff` to ensure that the file you originally published and the one that you downloaded from the rollup node are equal. +Indeed, they are not: because the size of a slot is fixed, the DAL node pads the value it receives from `POST /slot` in order to ensure that it has the correct slot size. +::: + +## Next steps + +Now you know how to send files to the DAL and use a Smart Rollup to store the data. + +From there, the sky's the limit. +You can implement many other features, such as: + +- Handling more than one file per level +- Having file publishers pay for the storage that they are using in layer 2 by allowing them to deposit tez to the Smart Rollup and sign the files they publish +- Building a frontend to visualize the files in the archive +- Providing the original size of the file by modifying the script to prefix the file with its size diff --git a/docs/tutorials/build-your-first-app.md b/docs/tutorials/build-your-first-app.md index 56ec332e4..badc62fee 100644 --- a/docs/tutorials/build-your-first-app.md +++ b/docs/tutorials/build-your-first-app.md @@ -1,5 +1,5 @@ --- -title: Build your first app on Tezos +title: Build a simple web application authors: 'Claude Barde, Tim McMackin' last_update: date: 17 October 2023 diff --git a/docs/tutorials/create-an-nft.md b/docs/tutorials/create-an-nft.md index 3daca3f93..35a4d6210 100644 --- a/docs/tutorials/create-an-nft.md +++ b/docs/tutorials/create-an-nft.md @@ -5,6 +5,6 @@ title: Create an NFT There are many ways to create (or "mint") NFTs; in particular, you can create all of the NFTs in a collection at once or set up an application that can create NFTs as users request them. 
Try one of these tutorials to see different ways of minting NFTs: -- [Create NFTs with the `tznft` tool](./create-an-nft/nft-tznft): Create metadata files to describe NFTs and then use a command-line tool to mint them and manipulate them +- [Create NFTs from the command line](./create-an-nft/nft-tznft): Create metadata files to describe NFTs and then use a command-line tool to mint them and manipulate them - [Mint NFTs from a web app](./create-an-nft/nft-web-app): Use a pre-existing contract to create NFTs from a web application - [Create a contract and web app that mints NFTs](./create-an-nft/nft-taquito): Set up a web application that authorized users can use to create NFTs and all users can use to see the NFTs in a collection diff --git a/docs/tutorials/create-an-nft/nft-tznft.md b/docs/tutorials/create-an-nft/nft-tznft.md index d18029228..3b5ffea1f 100644 --- a/docs/tutorials/create-an-nft/nft-tznft.md +++ b/docs/tutorials/create-an-nft/nft-tznft.md @@ -1,5 +1,5 @@ --- -title: Create NFTs with the `tznft` tool +title: Create NFTs from the command line authors: 'Sol Lederer, Tim McMackin' last_update: date: 18 September 2023 @@ -505,7 +505,7 @@ The command is the same as for the sandbox: The block explorer shows information about the contract that manages the NFTs, including a list of all NFTs in the contract, who owns them, and a list of recent transactions. -Now the NFTs are on Tezos ghostnet and you can transfer and manipulate them just like you did in the sandbox. +Now the NFTs are on Tezos Ghostnet and you can transfer and manipulate them just like you did in the sandbox. You may need to create and fund more account aliases to transfer them, but the commands are the same. 
For example, to transfer NFTs to an account with the alias `other-account`, run this command: diff --git a/docs/tutorials/create-an-nft/nft-web-app.md b/docs/tutorials/create-an-nft/nft-web-app.md index cff2304c5..6985c2410 100644 --- a/docs/tutorials/create-an-nft/nft-web-app.md +++ b/docs/tutorials/create-an-nft/nft-web-app.md @@ -16,7 +16,7 @@ You will learn: ## Prerequisites -This tutorial uses [Javascript](https://www.javascript.com/), so it will be easier if you are familiar with JavaScript. +This tutorial uses [JavaScript](https://www.javascript.com/), so it will be easier if you are familiar with JavaScript. - You do not need any familiarity with any of the libraries in the tutorial, including [Taquito](https://tezostaquito.io/), a JavaScript library that helps developers access Tezos. diff --git a/docs/tutorials/dapp.md b/docs/tutorials/dapp.md index a99d3b936..c5ffe475e 100644 --- a/docs/tutorials/dapp.md +++ b/docs/tutorials/dapp.md @@ -22,7 +22,7 @@ sequenceDiagram You will learn : - How to create a Tezos project with Taqueria. -- How to create a smart contract in jsLIGO. +- How to create a smart contract in JsLIGO. - How to deploy the smart contract a real testnet named Ghostnet. - How to create a frontend dApp using Taquito library and interact with a Tezos browser wallet. - How to use an indexer like TZKT. diff --git a/docs/tutorials/dapp/part-1.md b/docs/tutorials/dapp/part-1.md index 1854d7d23..27415200b 100644 --- a/docs/tutorials/dapp/part-1.md +++ b/docs/tutorials/dapp/part-1.md @@ -164,7 +164,7 @@ Taqueria is generating the `.tz` Michelson file on the `artifacts` folder. The M The default Tezos testing testnet is called **Ghostnet**. -> :warning: You need an account to deploy a contract with some `tez` (the Tezos native currency). The first time you deploy a contract with Taqueria, it is generating a new implicit account with `0 tez`. +> :warning: You need an account to deploy a contract with some `tez` (the Tezos native currency). 
The first time you deploy a contract with Taqueria, it generates a new user account with `0 tez`.
 
 1. Deploy your contract to the `testing` environment. Ut forces Taqueria to generate a default account on a testing config file.
diff --git a/docs/tutorials/dapp/part-2.md b/docs/tutorials/dapp/part-2.md
index d2d8a93ee..eace5e0d6 100644
--- a/docs/tutorials/dapp/part-2.md
+++ b/docs/tutorials/dapp/part-2.md
@@ -84,7 +84,7 @@ sequenceDiagram
 Explanation:
 
-  - `...store` do a copy by value of your object. [Have a look on the Functional updates documentation](https://ligolang.org/docs/language-basics/maps-records/#functional-updates). Note: you cannot do assignment like this `store.pokeTraces=...` in jsLIGO, there are no concept of Classes, use `Functional updates` instead.
+  - `...store` does a copy by value of your object. [Have a look at the Functional updates documentation](https://ligolang.org/docs/language-basics/maps-records/#functional-updates). Note: you cannot do an assignment like `store.pokeTraces=...` in JsLIGO; there is no concept of classes, use `Functional updates` instead.
   - `Map.add(...`: Add a key, value entry to a map. For more information about [Map](https://ligolang.org/docs/language-basics/maps-records/#maps).
   - `export type storage = {...};` a `Record` type is declared, it is an [object structure](https://ligolang.org/docs/language-basics/maps-records#records).
   - `Tezos.get_self_address()` is a native function that returns the current contract address running this code. Have a look on [Tezos native functions](https://ligolang.org/docs/reference/current-reference).
@@ -201,7 +201,7 @@ sequenceDiagram
 
   - `#import "./pokeGame.jsligo" "PokeGame"` to import the source file as module in order to call functions and use object definitions.
   - `export type main_fn` it will be useful later for the mutation tests to point to the main function to call/mutate.
-  - `Test.reset_state ( 2...` this creates two implicit accounts on the test environment.
+  - `Test.reset_state ( 2...` this creates two user accounts on the test environment.
   - `Test.nth_bootstrap_account` this return the nth account from the environment.
   - `Test.to_contract(taddr)` and `Tezos.address(contr)` are util functions to convert typed addresses, contract and contract addresses.
   - `let _testPoke = (s : address) : unit => {...}` declaring function starting with `_` is escaping the test for execution. Use this to factorize tests changing only the parameters of the function for different scenarios.
diff --git a/docs/tutorials/dapp/part-3.md b/docs/tutorials/dapp/part-3.md
index ac207fc76..baba11aac 100644
--- a/docs/tutorials/dapp/part-3.md
+++ b/docs/tutorials/dapp/part-3.md
@@ -49,7 +49,7 @@ Tickets features:
 
 - Not comparable: it makes no sense to compare tickets because tickets from same type are all equals and can be merged into a single ticket. When ticket types are different then it is no more comparable.
 - Transferable: you can send ticket into a Transaction parameter.
-- Storable: only on smart contract storage for the moment (Note: a new protocol release will enable it for implicit account soon).
+- Storable: only on smart contract storage for the moment (Note: a new protocol release will enable it for user accounts soon).
 - Non dupable: you cannot copy or duplicate a ticket, it is a unique singleton object living in specific blockchain instance.
 - Splittable: if amount is > 2 then you can split ticket object into 2 objects.
 - Mergeable: you can merge ticket from same ticketer and same type.
diff --git a/docs/tutorials/join-dal-baker.md b/docs/tutorials/join-dal-baker.md
new file mode 100644
index 000000000..5894d3070
--- /dev/null
+++ b/docs/tutorials/join-dal-baker.md
@@ -0,0 +1,26 @@
+# How to join the Tezos DAL as a baker, in 5 steps
+
+Tezos' [Data-Availability Layer](https://tezos.gitlab.io/shell/dal.html) (DAL for short) is an experimental feature which is, at the time of writing, not available on Tezos Mainnet but planned to be proposed in a protocol amendment in the near future.
+
+The DAL is a key component for the scalability of Tezos. In a nutshell, the DAL is about increasing the data bandwidth available for Tezos Smart Rollups thanks to a new parallel P2P network to which rollups can connect to fetch inputs without compromising their security.
+
+In order for the DAL to be as secure as the Tezos layer 1 itself, bakers need to play a very important role in it. Currently, bakers on the L1 network are responsible not only for producing blocks but also for attesting that blocks are published on the L1 network. They are rewarded for this contribution through protocol incentives. Similarly, the role of bakers in the DAL would be to attest the publication of data on the DAL's P2P network. They would in turn be rewarded for this through (potentially different) protocol incentives.
+
+Given that setting up a new P2P network with several hundred active participants may take some time, the first proposed version of the DAL for Tezos Mainnet will not provide any participation incentives. This gives bakers plenty of time to join the DAL network without risking any reward loss, ensuring a smooth transition.
+
+This incentive-free version of the DAL is already available on the Weeklynet test network. In this tutorial, you will learn how to join Weeklynet as a baker and attest the publication of data on the DAL network.
+
+:::warning
+This tutorial uses a very simple setup running all required daemons on the same machine.
In a production environment, we advise against running a DAL attester node under the same IP address as a baker's node, because the DAL node may leak the IP address, easing DoS attacks on the baker. See also [the DAL documentation page on baking](https://tezos.gitlab.io/shell/dal_bakers.html).
+:::
+
+:::warning
+The UX of the DAL components is subject to change based on feedback from the testers following this tutorial, so this tutorial will be updated accordingly. Feel free to file issues if it is not up to date.
+:::
+
+- [Step 1: get a Weeklynet-compatible Octez version](./join-dal-baker/get-octez)
+- [Step 2: run an Octez node on Weeklynet](./join-dal-baker/run-node)
+- [Step 3: set up a baker account on Weeklynet](./join-dal-baker/prepare-account)
+- [Step 4: run an Octez DAL node on Weeklynet](./join-dal-baker/run-dal-node)
+- [Step 5: run an Octez baking daemon on Weeklynet](./join-dal-baker/run-baker)
+- [Conclusion](./join-dal-baker/conclusion)
diff --git a/docs/tutorials/join-dal-baker/conclusion.md b/docs/tutorials/join-dal-baker/conclusion.md new file mode 100644 index 000000000..58c297504 --- /dev/null +++ b/docs/tutorials/join-dal-baker/conclusion.md @@ -0,0 +1,3 @@
+# Conclusion
+
+In this tutorial, we have gone through all the steps needed to participate in the Weeklynet test network as a baker and DAL attester. We could further improve the setup by defining system services so that the daemons are automatically launched when the machine starts or when the network restarts on Wednesday. We could also plug in a monitoring solution such as the Prometheus + Grafana combo; a Grafana dashboard template for DAL nodes is available in Grafazos. The interactions between our baker and the Weeklynet chain can be observed on the Explorus block explorer, which is aware of the DAL and can, in particular, display which DAL slots are being used at each level.
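The system-service idea mentioned in the conclusion can be sketched as a systemd unit. The unit name, user, and binary path below are assumptions to adapt to your installation; only the `octez-node run` invocation comes from this tutorial:

```
# /etc/systemd/system/octez-node.service (illustrative sketch)
[Unit]
Description=Octez node on Weeklynet
After=network-online.target

[Service]
User=tezos
ExecStart=/usr/local/bin/octez-node run --rpc-addr 127.0.0.1:8732
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Similar units can be written for the DAL node and the baking daemon, with `After=` dependencies so they start once the node is up.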
diff --git a/docs/tutorials/join-dal-baker/get-octez.md b/docs/tutorials/join-dal-baker/get-octez.md new file mode 100644 index 000000000..a0f2d991a --- /dev/null +++ b/docs/tutorials/join-dal-baker/get-octez.md @@ -0,0 +1,13 @@
+# Step 1: Get a Weeklynet-compatible Octez version
+
+The Weeklynet test network is restarted once every Wednesday at 0h UTC, and for most of its lifetime (from level 512) it runs a development version of the Tezos protocol, called Alpha, which is not part of any released version of Octez. For this reason, baking on Weeklynet requires running Octez either with Docker, using a specific Docker image, or building it from source at a specific git commit.
+
+To get this specific Docker image, or the hash of this specific commit, see https://teztnets.com/weeklynet-about. This page also contains the proper `octez-node config init` incantation to configure the Octez node with the current network parameters of Weeklynet, the URL of a public RPC endpoint, and a link to a faucet distributing free testnet tez.
+
+For example, for the Weeklynet launched on January 17, 2024, the commands to start a Docker image and configure the Octez node were:
+
+```
+docker run -it --entrypoint=/bin/sh tezos/tezos:master_7f3bfc90_20240116181914
+
+octez-node config init --network https://teztnets.com/weeklynet-2024-01-17
+```
diff --git a/docs/tutorials/join-dal-baker/prepare-account.md b/docs/tutorials/join-dal-baker/prepare-account.md new file mode 100644 index 000000000..064fbd00e --- /dev/null +++ b/docs/tutorials/join-dal-baker/prepare-account.md @@ -0,0 +1,57 @@
+# Step 3: Set up a baker account on Weeklynet
+
+Our baker needs a user account (also known as an implicit account) consisting of a pair of keys and an address.
The simplest way to get them is to ask the Octez client to randomly generate them and associate them with the `my_baker` alias:
+
+```
+octez-client gen keys my_baker
+```
+
+The address of the generated account can be obtained with the following command:
+
+```
+octez-client show address my_baker
+```
+
+Let's record this address in a shell variable; it will be useful for some commands that cannot look up addresses by their Octez client aliases.
+
+```
+MY_BAKER="$(octez-client show address my_baker | head -n 1 | cut -d ' ' -f 2)"
+```
+
+At this point, the balance of the `my_baker` account is still empty, as can be seen with the following command:
+
+```
+octez-client --endpoint "$ENDPOINT" get balance for my_baker
+```
+
+In order to get some consensus and DAL rights, we need to put some tez on the account. Fortunately, getting free testnet tez is easy thanks to the testnet faucet. To use it, we need to enter the generated address in the Weeklynet faucet linked from https://teztnets.com/weeklynet-about. We need at least 6k tez to run a baker, and the more tez we have, the more rights we get and the less time we have to wait to produce blocks and attestations. That said, baking with too much stake would prevent us from leaving the network without disturbing or even halting it, so to avoid breaking the network for all the other testers, let's not be too greedy. 50k tez should be enough to get enough rights to easily check that our baker behaves as expected, while not disturbing the network too much when our baker stops operating.
+
+Once the tez are obtained from the faucet, we can check with the same `get balance` command that they have been received:
+
+```
+octez-client --endpoint "$ENDPOINT" get balance for my_baker
+```
+
+At this point, the `my_baker` account owns enough stake to bake but still has no consensus or DAL rights, because we haven't declared our intention to become a baker to the Tezos protocol.
This can be achieved with the following command:
+
+```
+octez-client --endpoint "$ENDPOINT" register key my_baker as delegate
+```
+
+Seven cycles later (about 1h40 on Weeklynet), our baker will start receiving rights. To see, for instance, its consensus attestation rights in the current cycle, we can use the following RPC:
+
+```
+octez-client --endpoint "$ENDPOINT" rpc get /chains/main/blocks/head/helpers/attestation_rights\?delegate="$MY_BAKER"
+```
+
+To see the DAL attestation rights of all bakers, we can use the following RPC:
+
+```
+octez-client --endpoint "$ENDPOINT" rpc get /chains/main/blocks/head/context/dal/shards
+```
+
+This command returns an array of DAL attestation rights. The 2048 shards which are expected to be attested at this level are shared among active bakers in proportion to their stake. Each baker is assigned a slice of shard indices, represented in the output of this command by a pair consisting of the first index and the length of the slice. So, to check whether some rights were assigned to us, we can filter the array for our baker by running this command:
+
+```
+octez-client --endpoint "$ENDPOINT" rpc get /chains/main/blocks/head/context/dal/shards | grep "$MY_BAKER"
+```
diff --git a/docs/tutorials/join-dal-baker/run-baker.md b/docs/tutorials/join-dal-baker/run-baker.md new file mode 100644 index 000000000..79180c327 --- /dev/null +++ b/docs/tutorials/join-dal-baker/run-baker.md @@ -0,0 +1,26 @@
+# Step 5: Run an Octez baking daemon on Weeklynet
+
+The baking daemon is launched almost as usual; the only difference is that we use the `--dal-node http://127.0.0.1` option to tell it to connect to the DAL node that we just launched in the previous step.
+ +``` +octez-baker-alpha run with local node "$HOME/.tezos-node" my_baker --liquidity-baking-toggle-vote on --adaptive-issuance-vote on --dal-node http://127.0.0.1 >> "$HOME/octez-baker.log" 2>&1 +``` + +We can check that the DAL is now subscribed to the relevant topics by retrying the following RPC, which should now return all the topics of the form `{"slot_index":,"pkh":"
"}` where `index` varies between `0` included and the number of slot indexes (`32` on Weeklynet) excluded: + +``` +curl http://localhost:10732/p2p/gossipsub/topics +``` + +We can also look at the baker logs to see if it manages to inject the expected operations. At each level, the baker is expected to: +- receive a block proposal (log message: "received new proposal ... at level ..., round ...") +- inject a preattestation for it (log message: "injected preattestation ... for my_baker (<address>) for level ..., round ...") +- receive a block (log message: "received new head ... at level ..., round ...") +- inject an attestation for it (log message: "injected attestation ... for my_baker (<address>) for level ..., round ...") +- inject a DAL attestation indicating which of the shards assigned to the baker have been seen on the DAL network (log message: "injected DAL attestation ... for level ..., round ..., with bitset ... for my_baker (<address>) to attest slots published at level ..."); if no shard was seen (either because they did not reach the DAL node for some reason or simply because nothing was published on the DAL at the targeted level), the operation is skipped (log message: "Skipping the injection of the DAL attestation for attestation level ..., round ..., as currently no slot published at level ... is attestable.") + +Optionally, we can also launch an accuser which will monitor the behaviour of the other Weeklynet bakers and denounce them to the Tezos protocol if they are caught double-signing any block or consensus operation. 
+ +``` +octez-accuser-alpha run >> "$HOME/octez-accuser.log" 2>&1 +``` diff --git a/docs/tutorials/join-dal-baker/run-dal-node.md b/docs/tutorials/join-dal-baker/run-dal-node.md new file mode 100644 index 000000000..cfbd5fbca --- /dev/null +++ b/docs/tutorials/join-dal-baker/run-dal-node.md @@ -0,0 +1,21 @@ +# Step 4: Run an Octez DAL node on Weeklynet + +``` +octez-dal-node run >> "$HOME/octez-dal-node.log" 2>&1 +``` + +This, too, may take some time to launch the first time because it needs to generate a new identity file, this time for the DAL network. + +When running normally, the logs of the DAL node should contain one line per block applied by the layer 1 node looking like: + +``` +: layer 1 node's block at level , round is final +``` + +The DAL node we have launched connects to the DAL network but it is not yet subscribed to any Gossipsub topic. We can observe this by requesting the topics it registered to, using the following RPC: + +``` +curl http://localhost:10732/p2p/gossipsub/topics +``` + +In particular, it won't collect the shards assigned to our baker until it is subscribed to the corresponding topics. Don't worry, the baker daemon will automatically ask the DAL to subscribe to the relevant topics. diff --git a/docs/tutorials/join-dal-baker/run-node.md b/docs/tutorials/join-dal-baker/run-node.md new file mode 100644 index 000000000..540bf3c20 --- /dev/null +++ b/docs/tutorials/join-dal-baker/run-node.md @@ -0,0 +1,20 @@ +# Step 2: Run an Octez node on Weeklynet + +Once the Octez node has been configured to join Weeklynet, we can launch it and make its RPC available: + +``` +octez-node run --rpc-addr 127.0.0.1:8732 --log-output="$HOME/octez-node.log" +``` + +At first launch, the node will generate a fresh identity file used to identify itself on the Weeklynet L1 network, it then bootstraps the chain which means that it downloads and applies all the blocks. 
This takes a variable amount of time depending on when during the week these instructions are followed; at worst, on a Tuesday evening, it takes a few hours. Fortunately, we can continue to set up our Weeklynet baking infrastructure while the node is bootstrapping; all we have to do is use another, already bootstrapped node as the RPC endpoint for `octez-client` when we want to interact with the chain.
+
+A public RPC endpoint URL for Weeklynet is linked from the https://teztnets.com/weeklynet-about page; let's record it in a shell variable:
+```
+ENDPOINT=""
+```
+
+For example, for the Weeklynet launched on January 17, 2024, the endpoint was:
+
+```
+ENDPOINT=https://rpc.weeklynet-2024-01-17.teztnets.com
+```
diff --git a/docs/tutorials/mobile/part-1.md b/docs/tutorials/mobile/part-1.md index 64bced378..59c2bf7f5 100644 --- a/docs/tutorials/mobile/part-1.md +++ b/docs/tutorials/mobile/part-1.md @@ -7,7 +7,7 @@ last_update: On this first section, you will: -- Create the game smart contract importing an existing Ligo library +- Create the game smart contract importing an existing LIGO library - Deploy your smart contract to the Ghostnet - Get the Shifumi Git repository folders to copy the game UI and CSS for the second party @@ -22,7 +22,7 @@ On this first section, you will: taq install @taqueria/plugin-ligo ``` -1. Download the Ligo Shifumi template, and copy the files to Taqueria **contracts** folder: +1. Download the LIGO Shifumi template, and copy the files to Taqueria **contracts** folder: ```bash TAQ_LIGO_IMAGE=ligolang/ligo:1.2.0 taq ligo --command "init contract --template shifumi-jsligo shifumiTemplate" ``` diff --git a/docs/tutorials/mobile/part-2.md b/docs/tutorials/mobile/part-2.md index ce96dc543..eda7a7d5e 100644 --- a/docs/tutorials/mobile/part-2.md +++ b/docs/tutorials/mobile/part-2.md @@ -22,7 +22,7 @@ A web3 mobile application is not different from a web2 one in terms of its basic ionic start app blank --type react ``` -1.
Generate smart contract types from the taqueria plugin: +1. Generate smart contract types from the Taqueria plugin: This command generates TypeScript classes from the smart contract interface definition that is used on the frontend. @@ -452,7 +452,7 @@ A web3 mobile application is not different from a web2 one in terms of its basic - `import "@ionic..."`: Default standard Ionic imports. - `import ... from "@airgap/beacon-types" ... from "@taquito/beacon-wallet" ... from "@taquito/taquito"`: Require libraries to interact with the Tezos node and the wallet. - - `export class Action implements ActionCisor, ActionPaper, ActionStone {...}`: Representation of the Ligo variant `Action` in Typescript, which is needed when passing arguments on `Play` function. + - `export class Action implements ActionCisor, ActionPaper, ActionStone {...}`: Representation of the LIGO variant `Action` in TypeScript, which is needed when passing arguments to the `Play` function. - `export type Session = {...}`: Taqueria exports the global storage type but not this sub-type from the storage type; it is needed for later, so extract a copy. - `export const UserContext = React.createContext(null)`: Global React context that is passed along pages. More info on React context [here](https://beta.reactjs.org/learn/passing-data-deeply-with-context). - `const refreshStorage = async (event?: CustomEvent): Promise => {...`: A useful function to force the smart contract storage to refresh on React state changes (user balance, state of the game). diff --git a/docs/tutorials/smart-contract.md b/docs/tutorials/smart-contract.md index 93f8ea809..84ee56f22 100644 --- a/docs/tutorials/smart-contract.md +++ b/docs/tutorials/smart-contract.md @@ -24,6 +24,6 @@ You can run the tutorial with the version of the language you are most familiar You do not need an experience in these languages to run the tutorial.
- To use SmartPy, a language similar to Python, see [Deploy a smart contract with SmartPy](./smart-contract/smartpy) -- To use jsLIGO, a language similar to JavaScript and TypeScript, see [Deploy a smart contract with jsLIGO](./smart-contract/jsligo) +- To use JsLIGO, a language similar to JavaScript and TypeScript, see [Deploy a smart contract with JsLIGO](./smart-contract/jsligo) - To use CameLIGO, a language similar to OCaml, see [Deploy a smart contract with CameLIGO](./smart-contract/cameligo) - To learn the Archetype language, try [Deploy a smart contract with Archetype](./smart-contract/archetype). diff --git a/docs/tutorials/smart-contract/archetype.md b/docs/tutorials/smart-contract/archetype.md index 73a694d97..05c958f4e 100644 --- a/docs/tutorials/smart-contract/archetype.md +++ b/docs/tutorials/smart-contract/archetype.md @@ -10,7 +10,7 @@ It uses the completium-cli command-line tool, which lets you work with Archetype - If you are more familiar with Python, try [Deploy a smart contract with SmartPy](./smartpy). - If you are more familiar with OCaml, try [Deploy a smart contract with CameLIGO](./cameligo). -- If you are more familiar with JavaScript, try [Deploy a smart contract with jsLIGO](./jsligo). +- If you are more familiar with JavaScript, try [Deploy a smart contract with JsLIGO](./jsligo). In this tutorial, you will learn how to: @@ -324,5 +324,5 @@ Then, you can verify the updated storage on the block explorer or by running the Now the contract is running on the Tezos blockchain. You or any other user can call it from any source that can send transactions to Tezos, including command-line clients, dApps, and other contracts. -If you want to continue working with this contract, try creating a dApp to call it from a web application, similar to the dApp that you create in the tutorial [Build your first app on Tezos](../build-your-first-app/). 
+If you want to continue working with this contract, try creating a dApp to call it from a web application, similar to the dApp that you create in the tutorial [Build a simple web application](../build-your-first-app/). You can also try adding your own endpoints and originating a new contract, but you cannot update the existing contract after it is deployed. diff --git a/docs/tutorials/smart-contract/cameligo.mdx b/docs/tutorials/smart-contract/cameligo.mdx index dcb19a6d2..77524c426 100644 --- a/docs/tutorials/smart-contract/cameligo.mdx +++ b/docs/tutorials/smart-contract/cameligo.mdx @@ -8,7 +8,7 @@ last_update: This tutorial covers writing and deploying a simple smart contract with the LIGO programming language. Specifically, this tutorial uses the CameLIGO version of LIGO, which has syntax similar to OCaml, but you don't need any experience with OCaml or LIGO to do this tutorial. -- If you are more familiar with JavaScript, try [Deploy a smart contract with jsLIGO](./jsligo). +- If you are more familiar with JavaScript, try [Deploy a smart contract with JsLIGO](./jsligo). - If you are more familiar with Python, try [Deploy a smart contract with SmartPy](./smartpy). - To learn the Archetype language, try [Deploy a smart contract with Archetype](./archetype). @@ -288,5 +288,5 @@ It also allows you to call the contract. Now the contract is running on the Tezos blockchain. You or any other user can call it from any source that can send transactions to Tezos, including Octez, dApps, and other contracts. -If you want to continue working with this contract, try creating a dApp to call it from a web application, similar to the dApp that you create in the tutorial [Build your first app on Tezos](../build-your-first-app/). +If you want to continue working with this contract, try creating a dApp to call it from a web application, similar to the dApp that you create in the tutorial [Build a simple web application](../build-your-first-app/). 
You can also try adding your own endpoints and originating a new contract, but you cannot update the existing contract after it is deployed. diff --git a/docs/tutorials/smart-contract/jsligo.mdx b/docs/tutorials/smart-contract/jsligo.mdx index eeb26626d..2d98625d1 100644 --- a/docs/tutorials/smart-contract/jsligo.mdx +++ b/docs/tutorials/smart-contract/jsligo.mdx @@ -1,12 +1,12 @@ --- -title: Deploy a smart contract with jsLIGO +title: Deploy a smart contract with JsLIGO authors: 'John Joubert, Sasha Aldrick, Claude Barde, Tim McMackin' last_update: date: 3 January 2024 --- This tutorial covers writing and deploying a simple smart contract with the LIGO programming language. -Specifically, this tutorial uses the jsLIGO version of LIGO, which has syntax similar to JavaScript, but you don't need any experience with JavaScript or LIGO to do this tutorial. +Specifically, this tutorial uses the JsLIGO version of LIGO, which has syntax similar to JavaScript, but you don't need any experience with JavaScript or LIGO to do this tutorial. - If you are more familiar with Python, try [Deploy a smart contract with SmartPy](./smartpy). - If you are more familiar with OCaml, try [Deploy a smart contract with CameLIGO](./cameligo). @@ -303,5 +303,5 @@ It also allows you to call the contract. Now the contract is running on the Tezos blockchain. You or any other user can call it from any source that can send transactions to Tezos, including Octez, dApps, and other contracts. -If you want to continue working with this contract, try creating a dApp to call it from a web application, similar to the dApp that you create in the tutorial [Build your first app on Tezos](../build-your-first-app/). +If you want to continue working with this contract, try creating a dApp to call it from a web application, similar to the dApp that you create in the tutorial [Build a simple web application](../build-your-first-app/). 
You can also try adding your own endpoints and originating a new contract, but you cannot update the existing contract after it is deployed. diff --git a/docs/tutorials/smart-contract/smartpy.mdx b/docs/tutorials/smart-contract/smartpy.mdx index 50da77700..a51717c5e 100644 --- a/docs/tutorials/smart-contract/smartpy.mdx +++ b/docs/tutorials/smart-contract/smartpy.mdx @@ -9,7 +9,7 @@ This tutorial covers writing and deploying a simple smart contract with the Smar SmartPy has syntax similar to Python, but you don't need any experience with Python or SmartPy to do this tutorial. - If you are more familiar with OCaml, try [Deploy a smart contract with CameLIGO](./cameligo). -- If you are more familiar with JavaScript, try [Deploy a smart contract with jsLIGO](./jsligo). +- If you are more familiar with JavaScript, try [Deploy a smart contract with JsLIGO](./jsligo). - To learn the Archetype language, try [Deploy a smart contract with Archetype](./archetype). SmartPy is a high-level programming language that you can use to write smart contracts for the Tezos blockchain. @@ -290,5 +290,5 @@ It will not be shown again. Now the contract is running on the Tezos blockchain. You or any other user can call it from any source that can send transactions to Tezos, including Octez, dApps, and other contracts. -If you want to continue working with this contract, try creating a dApp to call it from a web application, similar to the dApp that you create in the tutorial [Build your first app on Tezos](../build-your-first-app/). +If you want to continue working with this contract, try creating a dApp to call it from a web application, similar to the dApp that you create in the tutorial [Build a simple web application](../build-your-first-app/). You can also try adding your own endpoints and originating a new contract, but you cannot update the existing contract after it is deployed. 
diff --git a/docs/tutorials/smart-rollup.mdx b/docs/tutorials/smart-rollup.mdx index 0cb4c192f..3ff19b49b 100644 --- a/docs/tutorials/smart-rollup.mdx +++ b/docs/tutorials/smart-rollup.mdx @@ -21,12 +21,12 @@ Smart Rollups are processing units that run outside the Tezos network but commun These processing units can run arbitrarily large amounts of code without waiting for Tezos baking nodes to run and verify that code. Smart Rollups use Tezos for information and transactions but can run large applications at their own speed, independently of the Tezos baking system. -In this way, Smart Rollups allow Tezos to scale to support large, complex applications without slowing Tezos itself. +In this way, Smart Rollups allow Tezos to scale to support large, complex applications without slowing Tezos itself or incurring large transaction and storage fees. The processing that runs on Tezos itself via smart contracts is referred to as _layer 1_ and the processing that Smart Rollups run is referred to as _layer 2_. To learn about running code in smart contracts, see the tutorial [Deploy a smart contract](./smart-contract). Rollups also have an outbox, which consists of calls to smart contracts on layer 1. -These calls are how rollups send messages back to Tezos. +These calls are how rollups send messages back to layer 1. Smart Rollups can run any kind of applications that they want, such as: diff --git a/docs/unity/quickstart.md b/docs/unity/quickstart.md index 000e94883..d15fb60a5 100644 --- a/docs/unity/quickstart.md +++ b/docs/unity/quickstart.md @@ -19,7 +19,7 @@ These instructions cover: 1. In your Unity project, in the Package Manager panel, click the `+` symbol and then click **Add package from git URL**. -1. Enter the url `https://github.com/trilitech/tezos-unity-sdk.git` and click **Add**. +1. Enter the URL `https://github.com/trilitech/tezos-unity-sdk.git` and click **Add**. The Package Manager panel downloads and installs the SDK. 
You can see its assets in the Project panel under Packages > Tezos Unity SDK. diff --git a/docs/unity/reference/DAppMetadata.md b/docs/unity/reference/DAppMetadata.md index 2e9a3dc4f..6bb60fce2 100644 --- a/docs/unity/reference/DAppMetadata.md +++ b/docs/unity/reference/DAppMetadata.md @@ -15,7 +15,7 @@ These properties are read-only: - `Name`: The name of the project, which is shown in wallet applications when users connect to the project - `Url`: The home page of the project - `Icon`: The URL to a favicon for the project -- `Description`: A description of hte project +- `Description`: A description of the project ## Methods diff --git a/docs/unity/scenes.md b/docs/unity/scenes.md index 7a27b7864..75bb36ed6 100644 --- a/docs/unity/scenes.md +++ b/docs/unity/scenes.md @@ -56,7 +56,7 @@ This scene includes buttons that link to the other scenes. ## Wallet Connection scene -This scene shows how to to use the TezosAuthenticator prefab to connect to a user's wallet and get information about their account. +This scene shows how to use the TezosAuthenticator prefab to connect to a user's wallet and get information about their account. The scene uses the platform type to determine how to connect to a user's wallet. 
In the TezosAuthenticator `SetPlatformFlags` function, it checks what platform it is running on: diff --git a/docusaurus.config.js b/docusaurus.config.js index b6195480e..f2ba4c536 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -22,7 +22,7 @@ img-src 'self' https://*.googletagmanager.com https://*.google-analytics.com dat media-src 'self'; form-action 'self'; connect-src 'self' https://*.algolia.net https://*.algolianet.com https://*.googletagmanager.com https://*.google-analytics.com https://*.analytics.google.com; -frame-src https://tezosbot.vercel.app https://calendly.com/ lucid.app;`; +frame-src https://tezosbot.vercel.app lucid.app;`; /** @type {import('@docusaurus/types').Config} */ const config = { @@ -121,7 +121,7 @@ const config = { }, prism: { theme: require('prism-react-renderer/themes/github'), - additionalLanguages: ['csharp'], + additionalLanguages: ['csharp', 'toml'], }, // https://github.com/flexanalytics/plugin-image-zoom // Enable click to zoom in to large images diff --git a/package-lock.json b/package-lock.json index 037d12193..2b672c0b9 100644 --- a/package-lock.json +++ b/package-lock.json @@ -20,7 +20,6 @@ "plugin-image-zoom": "github:flexanalytics/plugin-image-zoom", "prism-react-renderer": "1.3.5", "react": "18.2", - "react-calendly": "4.3.0", "react-dom": "18.2", "rehype-katex": "7.0.0", "remark-math": "6.0.0", @@ -14204,19 +14203,6 @@ "node": ">=0.10.0" } }, - "node_modules/react-calendly": { - "version": "4.3.0", - "resolved": "https://registry.npmjs.org/react-calendly/-/react-calendly-4.3.0.tgz", - "integrity": "sha512-JFZzYhyJBaoZDseB3UqzeOx1rbzCK24nr5pqH/6zJEh7CZ/pn5R49rkIJ0g5E7j5WQ3K7xBSgBD7WgM36v3gZw==", - "engines": { - "node": ">=8", - "npm": ">=5" - }, - "peerDependencies": { - "react": ">=16.8.0", - "react-dom": ">=16.8.0" - } - }, "node_modules/react-dev-utils": { "version": "12.0.1", "resolved": "https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-12.0.1.tgz", diff --git a/package.json b/package.json 
index 695725b0d..de6a30869 100644 --- a/package.json +++ b/package.json @@ -27,7 +27,6 @@ "plugin-image-zoom": "github:flexanalytics/plugin-image-zoom", "prism-react-renderer": "1.3.5", "react": "18.2", - "react-calendly": "4.3.0", "react-dom": "18.2", "rehype-katex": "7.0.0", "remark-math": "6.0.0", diff --git a/sidebars.js b/sidebars.js index d6797a7d1..7bcb34483 100644 --- a/sidebars.js +++ b/sidebars.js @@ -249,126 +249,154 @@ const sidebars = { }, ], tutorialsSidebar: [ + { + type: 'doc', + label: 'Tutorials home', + id: 'tutorials', + }, + { + type: 'html', + value: '
Beginner
',
+      className: 'menu__divider',
+    },
     {
       type: 'category',
-      label: 'Tutorials',
+      label: 'Deploy a smart contract',
       link: {
         type: 'doc',
-        id: 'tutorials',
+        id: 'tutorials/smart-contract',
       },
       items: [
-        {
-          type: 'category',
-          label: 'Deploy a smart contract',
-          link: {
-            type: 'doc',
-            id: 'tutorials/smart-contract',
-          },
-          items: [
-            'tutorials/smart-contract/jsligo',
-            'tutorials/smart-contract/cameligo',
-            'tutorials/smart-contract/smartpy',
-            'tutorials/smart-contract/archetype',
-          ],
-        },
-        {
-          type: 'category',
-          label: 'Create an NFT',
-          link: {
-            type: 'doc',
-            id: 'tutorials/create-an-nft',
-          },
-          items: [
-            'tutorials/create-an-nft/nft-tznft',
-            'tutorials/create-an-nft/nft-taquito',
-            {
-              type: 'category',
-              label: 'Mint NFTs from a web app',
-              link: {
-                type: 'doc',
-                id: 'tutorials/create-an-nft/nft-web-app',
-              },
-              items: [
-                'tutorials/create-an-nft/nft-web-app/setting-up-app',
-                'tutorials/create-an-nft/nft-web-app/defining-functions',
-                'tutorials/create-an-nft/nft-web-app/lets-play',
-              ],
-            },
-          ],
-        },
-        {
-          type: 'category',
-          label: 'Build your first app',
-          link: {
-            type: 'doc',
-            id: 'tutorials/build-your-first-app',
-          },
-          items: [
-            'tutorials/build-your-first-app/setting-up-app',
-            'tutorials/build-your-first-app/wallets-tokens',
-            'tutorials/build-your-first-app/sending-transactions',
-            'tutorials/build-your-first-app/getting-information',
-          ],
-        },
-
-        {
-          type: 'category',
-          label: 'Start with a minimum dApp and add new features',
-          link: {
-            type: 'doc',
-            id: 'tutorials/dapp',
-          },
-          items: [
-            'tutorials/dapp/part-1',
-            'tutorials/dapp/part-2',
-            'tutorials/dapp/part-3',
-            'tutorials/dapp/part-4',
-          ],
-        },
-
-        {
-          type: 'category',
-          label: 'Deploy a smart rollup',
-          link: {
-            type: 'doc',
-            id: 'tutorials/smart-rollup',
-          },
-          items: [
-            'tutorials/smart-rollup/set-up',
-            'tutorials/smart-rollup/debug',
-            'tutorials/smart-rollup/optimize',
-            'tutorials/smart-rollup/deploy',
-            'tutorials/smart-rollup/run',
-          ],
-        },
-        {
-          type: 'category',
-          label: 'Build an NFT marketplace',
-          link: {
-            type: 'doc',
-            id: 'tutorials/build-an-nft-marketplace',
-          },
-          items: [
-            'tutorials/build-an-nft-marketplace/part-1',
-            'tutorials/build-an-nft-marketplace/part-2',
-            'tutorials/build-an-nft-marketplace/part-3',
-            'tutorials/build-an-nft-marketplace/part-4',
-          ],
-        },
-        {
-          type: 'category',
-          label: 'Create a mobile game',
-          link: {
-            type: 'doc',
-            id: 'tutorials/mobile',
-          },
-          items: [
-            'tutorials/mobile/part-1',
-            'tutorials/mobile/part-2',
-            'tutorials/mobile/part-3',
-            'tutorials/mobile/part-4',
-          ],
-        },
+        'tutorials/smart-contract/jsligo',
+        'tutorials/smart-contract/cameligo',
+        'tutorials/smart-contract/smartpy',
+        'tutorials/smart-contract/archetype',
+      ],
+    },
+    {
+      type: 'category',
+      label: 'Mint NFTs from a web app',
+      link: {
+        type: 'doc',
+        id: 'tutorials/create-an-nft/nft-web-app',
+      },
+      items: [
+        'tutorials/create-an-nft/nft-web-app/setting-up-app',
+        'tutorials/create-an-nft/nft-web-app/defining-functions',
+        'tutorials/create-an-nft/nft-web-app/lets-play',
+      ],
+    },
+    {
+      type: 'category',
+      label: 'Start with a minimum dApp and add new features',
+      link: {
+        type: 'doc',
+        id: 'tutorials/dapp',
+      },
+      items: [
+        'tutorials/dapp/part-1',
+        'tutorials/dapp/part-2',
+        'tutorials/dapp/part-3',
+        'tutorials/dapp/part-4',
+      ],
+    },
+    {
+      type: 'html',
+      value: 'Intermediate',
+      className: 'menu__divider',
+    },
+    {
+      type: 'category',
+      label: 'Build a simple web application',
+      link: {
+        type: 'doc',
+        id: 'tutorials/build-your-first-app',
+      },
+      items: [
+        'tutorials/build-your-first-app/setting-up-app',
+        'tutorials/build-your-first-app/wallets-tokens',
+        'tutorials/build-your-first-app/sending-transactions',
+        'tutorials/build-your-first-app/getting-information',
+      ],
+    },
+    'tutorials/create-an-nft/nft-taquito',
+    'tutorials/create-an-nft/nft-tznft',
+    {
+      type: 'html',
+      value: 'Advanced',
+      className: 'menu__divider',
+    },
+    {
+      type: 'category',
+      label: 'Deploy a Smart Rollup',
+      link: {
+        type: 'doc',
+        id: 'tutorials/smart-rollup',
+      },
+      items: [
+        'tutorials/smart-rollup/set-up',
+        'tutorials/smart-rollup/debug',
+        'tutorials/smart-rollup/optimize',
+        'tutorials/smart-rollup/deploy',
+        'tutorials/smart-rollup/run',
+      ],
+    },
+    {
+      type: 'category',
+      label: 'Build an NFT marketplace',
+      link: {
+        type: 'doc',
+        id: 'tutorials/build-an-nft-marketplace',
+      },
+      items: [
+        'tutorials/build-an-nft-marketplace/part-1',
+        'tutorials/build-an-nft-marketplace/part-2',
+        'tutorials/build-an-nft-marketplace/part-3',
+        'tutorials/build-an-nft-marketplace/part-4',
+      ],
+    },
+    {
+      type: 'category',
+      label: 'Create a mobile game',
+      link: {
+        type: 'doc',
+        id: 'tutorials/mobile',
+      },
+      items: [
+        'tutorials/mobile/part-1',
+        'tutorials/mobile/part-2',
+        'tutorials/mobile/part-3',
+        'tutorials/mobile/part-4',
+      ],
+    },
+    {
+      type: 'category',
+      label: 'Implement a file archive with the DAL',
+      link: {
+        type: 'doc',
+        id: 'tutorials/build-files-archive-with-dal',
+      },
+      items: [
+        'tutorials/build-files-archive-with-dal/get-dal-params',
+        'tutorials/build-files-archive-with-dal/get-slot-info',
+        'tutorials/build-files-archive-with-dal/publishing-on-the-dal',
+        'tutorials/build-files-archive-with-dal/using-full-slot',
+      ],
+    },
+    {
+      type: 'category',
+      label: 'Join the DAL as a Weeklynet baker',
+      link: {
+        type: 'doc',
+        id: 'tutorials/join-dal-baker',
+      },
+      items: [
+        'tutorials/join-dal-baker/get-octez',
+        'tutorials/join-dal-baker/run-node',
+        'tutorials/join-dal-baker/prepare-account',
+        'tutorials/join-dal-baker/run-dal-node',
+        'tutorials/join-dal-baker/run-baker',
+        'tutorials/join-dal-baker/conclusion',
       ],
     },
   ],
diff --git a/src/components/BuildSection/styles.module.css b/src/components/BuildSection/styles.module.css
index f8a6b35ca..b6943967c 100644
--- a/src/components/BuildSection/styles.module.css
+++ b/src/components/BuildSection/styles.module.css
@@ -6,7 +6,6 @@
   height: auto;
   padding: 100px 81px;
   background: #F2F3F7;
-  margin-top: 90px;
 }
 
 .container {
diff --git a/src/components/CalendlyEmbed.jsx b/src/components/CalendlyEmbed.jsx
deleted file mode 100644
index c0ceab351..000000000
--- a/src/components/CalendlyEmbed.jsx
+++ /dev/null
@@ -1,36 +0,0 @@
-import React, { useState, useEffect } from 'react';
-import { InlineWidget } from 'react-calendly';
-
-export default function CalendlyEmbed() {
-  const url = 'https://calendly.com/developer-success-on-tezos/15min';
-  const [isMobile, setIsMobile] = useState(false);
-
-  useEffect(() => {
-    const handleResize = () => {
-      setIsMobile(window.innerWidth < 1000);
-    };
-    window.addEventListener('resize', handleResize);
-    handleResize();
-    return () => window.removeEventListener('resize', handleResize);
-  }, []);
-
-  return (
-    <div>
-      {isMobile ? (
-        <InlineWidget url={url} />
-      ) : (
-        <InlineWidget url={url} />
-      )}
-    </div>
-  );
-}
diff --git a/src/css/custom.css b/src/css/custom.css
index 2423fd415..9426cf4df 100644
--- a/src/css/custom.css
+++ b/src/css/custom.css
@@ -148,6 +148,18 @@ nav.navbar {
   color: #0D61FF;
 }
 
+/* Headings for tutorials sidebar */
+.menu__divider {
+  font-family: 'GT Eesti Display', sans-serif;
+  font-weight: bold;
+  font-size: 16px;
+  line-height: 26px;
+  color: #4A4E52;
+  transition: color 0.3s;
+  padding-left: var(--ifm-menu-link-padding-horizontal);
+  padding-top: 30px;
+}
+
 /* breadcrumbs menu */
 .breadcrumbs__item--active .breadcrumbs__link {
   color: #0D61FF;
diff --git a/src/pages/index.js b/src/pages/index.js
index d23bdbaef..36ff855e1 100644
--- a/src/pages/index.js
+++ b/src/pages/index.js
@@ -3,7 +3,6 @@ import clsx from 'clsx';
 import useDocusaurusContext from '@docusaurus/useDocusaurusContext';
 import Layout from '@theme/Layout';
 import HomepageFeatures from '@site/src/components/HomepageFeatures';
-import CalendlyEmbed from '@site/src/components/CalendlyEmbed.jsx';
 import styles from './index.module.css';
 import BuildSection from '@site/src/components/BuildSection';
 import Footer from '@site/src/components/Footer';
@@ -30,11 +29,6 @@ export default function Home() {
-        <div>
-          <h2>We are here for you</h2>
-          <p>Book a 15 min, 1 to 1 session hosted by the TriliTech Developer Success team to discuss and answer your technical questions.</p>
-          <CalendlyEmbed />
-        </div>