Commit

add AlgoKit docs
ryanRfox authored Oct 18, 2023
2 parents a754898 + 60e80ec commit b0763e5
Showing 49 changed files with 3,606 additions and 106 deletions.
2 changes: 1 addition & 1 deletion .go-algorand-beta.version
@@ -1 +1 @@
-v3.18.1-beta
+v3.19.0-beta
4 changes: 2 additions & 2 deletions docs/get-details/.pages
@@ -2,13 +2,13 @@ title: Get details

 arrange:
 - index.md
+- algokit
 - accounts
 - transactions
-- asa.md
 - atomic_transfers.md
 - atc.md
+- asa.md
 - dapps
-- algokit.md
 - indexer.md
 - conduit.md
 - stateproofs
9 changes: 9 additions & 0 deletions docs/get-details/algokit/.pages
@@ -0,0 +1,9 @@
title: AlgoKit

arrange:
- index.md
- features
- cli-reference.md
- tutorials
- architecture-decisions
- articles
@@ -0,0 +1,94 @@
title: sandbox approach

- **Status**: Approved
- **Owner**: Rob Moore
- **Deciders**: Anne Kenyon (Algorand Inc.), Alessandro Cappellato (Algorand Foundation), Will Winder (Algorand Inc.)
- **Date created**: 2022-11-14
- **Date decided**: 2022-11-14
- **Date updated**: 2022-11-16

## Context

For AlgoKit to facilitate a productive development experience, it needs to provide a managed Algorand sandbox experience. This allows developers to run an offline (local-only) private instance of Algorand that they can experiment with privately, run automated tests against, and reset at will.

## Requirements

- The sandbox works cross-platform (i.e. runs natively on Windows, Mac and Linux)
- You can spin up algod and indexer, since both are useful when developing
- The sandbox is kept up to date with the latest version of algod / indexer
- There is access to KMD so that you can programmatically fund accounts, improving the developer experience and reducing manual effort (see the sketch after this list)
- There is access to the tealdbg port outside of algod so you can attach a debugger to it
- The sandbox is isolated and (once running) works offline so the workload is private, allows development when there is no internet (e.g. when on a plane) and allows for multiple instances to be run in parallel (e.g. when developing multiple independent projects simultaneously)
- Works in continuous integration and local development environments so you can facilitate automated testing
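
To make the KMD requirement concrete, here is a minimal Python sketch of programmatically funding an account with algosdk. It assumes Sandbox-style defaults (algod on port 4001, KMD on port 4002, all-`a` tokens, and a pre-funded `unencrypted-default-wallet` with an empty password); adjust for your local configuration:

```python
from algosdk import kmd, transaction
from algosdk.v2client.algod import AlgodClient

# Assumed Sandbox-style defaults; not guaranteed for every local setup
ALGOD = AlgodClient("a" * 64, "http://localhost:4001")
KMD = kmd.KMDClient("a" * 64, "http://localhost:4002")


def fund_account(receiver: str, microalgos: int) -> None:
    """Send `microalgos` to `receiver` from the first pre-funded local account."""
    wallet_id = next(
        w["id"] for w in KMD.list_wallets()
        if w["name"] == "unencrypted-default-wallet"
    )
    handle = KMD.init_wallet_handle(wallet_id, "")
    try:
        sender = KMD.list_keys(handle)[0]
        private_key = KMD.export_key(handle, "", sender)
        params = ALGOD.suggested_params()
        txn = transaction.PaymentTxn(sender, params, receiver, microalgos)
        ALGOD.send_transaction(txn.sign(private_key))
    finally:
        KMD.release_wallet_handle(handle)
```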

## Principles

- **[AlgoKit Guiding Principles](../index.md#Guiding-Principles)** - specifically Seamless onramp, Leverage existing ecosystem, Meet devs where they are
- **Lightweight** - the solution should have as low an impact as possible on the resources of the developer's machine
- **Fast** - the solution should start quickly, which makes for a nicer experience locally and also allows it to be used for continuous integration automation testing

## Options

### Option 1 - Pre-built DockerHub images

Pre-built application developer-optimised DockerHub images that work cross-platform; aka an evolved AlgoKit version of <https://github.com/MakerXStudio/algorand-sandbox-dev>.

**Pros**

- It's quick to download the images and quick to start the container, since you don't need to compile algod / indexer and the images are optimised for small size
- The only dependency needed is Docker, which is a fairly common dependency for most developers to use these days
- The images are reasonably lightweight
- The images provide an optimised application developer experience with: (devmode) algod, KMD, tealdbg, indexer
- It natively works cross-platform

**Cons**

- Some people have reported problems running WSL 2 (needed for the latest Docker experience) on a small proportion of Windows environments
- Docker-in-Docker can be a problem in CI environments whose build agents themselves run in Docker
- Work needs to be done to create an automated CI/CD pipeline that releases new image versions to keep up with the latest algod/indexer versions

### Option 2 - Lightweight algod client implementation

Work with the Algorand Inc. team to get a lightweight algod client that can run outside of a Docker container cross-platform.

**Pros**

- Likely to be the most lightweight and fastest option - opening up better/easier isolated/parallelised automated testing options
- Wouldn't need Docker as a dependency

**Cons**

- Indexer wouldn't be supported (Postgres would require Docker anyway)
- Algorand Inc. does not distribute Windows binaries.

### Option 3 - Sandbox

Use the existing [Algorand Sandbox](https://github.com/algorand/sandbox).

**Pros**

- Implicitly kept up to date with Algorand - nothing extra to maintain
- Battle-tested by the core Algorand team day-in-day-out
- Supports all environments including unreleased feature branches (because it can target a git repo / commit hash)

**Cons**

- Sandbox is designed for network testing, not application development - it's much more complex than the needs of application developers
- Slow to start because it has to download and build algod and indexer (this is particularly problematic for ephemeral CI/CD build agents)
- It's not cross-platform (it requires bash to run sandbox.sh, although a sandbox.ps1 version could be created)

## Preferred option

Option 1 and Option 2.

Option 1 provides a fully-featured experience that will work great in most scenarios; having Option 2 as well would open up more advanced parallel automated testing scenarios on top of that.

## Selected option

Option 1

We're aiming to release the first version of AlgoKit within a short timeframe, which won't give time for Option 2 to be developed. Sandbox itself has been ruled out since it's not cross-platform and is too slow for both development and continuous integration.

Option 1 also behaves much like running Sandbox, so existing Algorand documentation, libraries and approaches should work well with it, making it a good slot-in replacement for Sandbox for application developers (see the sketch below).
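
As a rough illustration of that slot-in point, code written against Sandbox's default endpoints should connect to the managed containers unchanged. The ports and tokens below follow the Sandbox conventions and are an assumption about the AlgoKit images:

```python
from algosdk.v2client.algod import AlgodClient
from algosdk.v2client.indexer import IndexerClient

# Sandbox-convention endpoints (assumed to be preserved by the AlgoKit images)
algod = AlgodClient("a" * 64, "http://localhost:4001")
indexer = IndexerClient("a" * 64, "http://localhost:8980")

print(algod.status()["last-round"])  # liveness check against the local network
print(indexer.health())              # confirms the indexer is reachable too
```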

AlgoKit is designed to be modular: we can add in other approaches over time such as Option 2 when/if it becomes available.
Original file line number Diff line number Diff line change
@@ -0,0 +1,95 @@
title: Beaker testing strategy

- **Status**: Draft
- **Owner**: Rob Moore
- **Deciders**: Anne Kenyon (Algorand Inc.), Alessandro Cappellato (Algorand Foundation), Michael Diamant (Algorand Inc.), Benjamin Guidarelli (Algorand Foundation)
- **Date created**: 2022-11-22
- **Date decided**: TBD
- **Date updated**: 2022-11-28

## Context

AlgoKit will be providing a smart contract development experience built on top of [PyTEAL](https://pyteal.readthedocs.io/en/stable/) and [Beaker](https://developer.algorand.org/articles/hello-beaker/). Beaker is currently in a pre-production state and needs to be productionised to provide confidence for use in generating production-ready smart contracts by AlgoKit users. One of the key steps in productionising Beaker is improving its automated test coverage.

Beaker itself is currently split into the PyTEAL generation related code and the deployment and invocation related code (including interacting with Sandbox). This decision is solely focussed on the PyTEAL generation components of Beaker. The current automated test coverage of this part of the codebase is ~50% and is largely based on compiling and/or executing smart contracts against Algorand Sandbox. While it's generally not best practice to chase a specific code coverage percentage, coverage of ~80%+ is likely indicative of good coverage in a dynamic language such as Python.

The Sandbox tests provide a great deal of confidence, but are also slow to execute, which can impair the Beaker development and maintenance experience, especially as coverage grows and/or features are added over time.

Beaker, like PyTEAL, can be considered a transpiler targeting TEAL. When generating smart contracts, the individual TEAL opcodes are significant, since security audits will often consider the impact at that level, and opcode choice affects the (limited!) resource usage of the smart contract. As such, "output stability" is potentially an important characteristic to test for.

## Requirements

- We have a high degree of confidence that writing smart contracts in Beaker leads to expected results for production smart contracts
- We have reasonable regression coverage so features are unlikely to break as new features and refactorings are added over time
- We have a level of confidence in the "output stability" of the TEAL code generated from a Beaker smart contract

## Principles

- **Fast development feedback loops** - The feedback loop during normal development should be as fast as possible to improve the development experience of developing Beaker itself
- **Low overhead** - The overhead of writing and maintaining tests is as low as possible; tests should be quick to read and write
- **Implementation decoupled** - Tests aren't testing the implementation details of Beaker, but rather the user-facing experience and output of it; this reduces the likelihood of needing to rewrite tests when performing refactoring of the codebase

## Options

### Option 1: TEAL Approval tests

Writing [approval tests](https://approvaltests.com/) of the TEAL output generated from a given Beaker smart contract (a sketch follows the pros and cons below).

**Pros**

- Ensures TEAL output stability and focusses on asserting the output of Beaker rather than testing whether the Algorand Protocol is working
- Runs in-memory/in-process so executes in the low tens of milliseconds, making it easy to provide high coverage with low developer feedback-loop overhead
- Tests are easy to write - the assertion is a single line of code (no tedious assertions)
- The tests go from Beaker contract -> TEAL approval so don't bake in implementation detail, and thus allow full Beaker refactoring with regression confidence without needing to modify the tests
- Excellent regression coverage characteristics - fast test run and quick to write allows for high coverage and anchoring assertions to TEAL output is a very clear regression marker

**Cons**

- The tests rely on the approver to understand the TEAL opcodes that are emitted and verify they match the intent of the Beaker contract - anecdotally this can be difficult at times even for experienced (Py)TEAL developers
- Doesn't assert the correctness of the TEAL output, just that it matches the previously manually approved output
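
To make the idea concrete, here is a minimal hand-rolled sketch of such a test in pytest; the `Calculator` app and file layout are hypothetical, and a library such as approvaltests could replace the helper:

```python
from pathlib import Path

import pytest

from contracts import Calculator  # hypothetical Beaker app under test

APPROVED_DIR = Path(__file__).parent / "approved"  # hypothetical snapshot folder


def verify_teal(name: str, actual_teal: str) -> None:
    """Fail if the generated TEAL differs from the previously approved snapshot."""
    approved = APPROVED_DIR / f"{name}.approved.teal"
    received = APPROVED_DIR / f"{name}.received.teal"
    if approved.exists() and approved.read_text() == actual_teal:
        return
    APPROVED_DIR.mkdir(parents=True, exist_ok=True)
    received.write_text(actual_teal)  # written out for a human to review and promote
    pytest.fail(f"TEAL for '{name}' does not match {approved.name}; review {received.name}")


def test_calculator_approval_program():
    app = Calculator()
    # pre-1.0 Beaker exposed the compiled TEAL via `approval_program` (assumption)
    verify_teal("calculator_approval", app.approval_program)
```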

### Option 2: Sandbox compile tests

Writing Beaker smart contracts and checking that the TEAL output successfully compiles against algod (a sketch follows the pros and cons below).

**Pros**

- Ensures that the TEAL output compiles, giving some assurance that it is well-formed, and focusses on asserting the output of Beaker rather than testing whether the Algorand Protocol is working
- Faster than executing the contract
- Tests are easy to write - the assertion is a single line of code (no tedious assertions)

**Cons**

- An order of magnitude slower than asserting TEAL output (out-of-process communication)
- Doesn't assert the correctness of the TEAL output, just that it compiles
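
A sketch of what such a check could look like with algosdk's algod client, reusing the hypothetical `APPROVED_DIR` layout from the approval-test sketch and the Sandbox-style endpoint assumption:

```python
from pathlib import Path

from algosdk.v2client.algod import AlgodClient

APPROVED_DIR = Path(__file__).parent / "approved"  # same hypothetical layout as above


def test_approved_teal_compiles():
    # Sandbox-style defaults assumed: algod on port 4001 with an all-"a" token
    algod = AlgodClient("a" * 64, "http://localhost:4001")
    teal = (APPROVED_DIR / "calculator_approval.approved.teal").read_text()
    result = algod.compile(teal)  # algod raises if the TEAL fails to compile
    assert result["result"]       # the compiled program comes back base64-encoded
```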

### Option 3: Sandbox execution tests

Execute the smart contracts and assert the output is as expected. This can be done using dry run and/or actual transactions (a sketch follows the pros and cons below).

**Pros**

- Asserts that the TEAL output _executes_ correctly giving the highest confidence
- Doesn't require the test writer to understand the TEAL output
- Tests don't bake in implementation detail and assert on output, so they give a reasonable degree of refactoring confidence without modifying tests

**Cons**

- Tests are more complex to write
- Tests take an order of magnitude longer to run than compilation alone (and two orders of magnitude longer than checking TEAL output)
- Harder to get high regression coverage, since the tests are slower to write and run, making full coverage impractical
- Doesn't ensure output stability
- Tests that the Algorand Protocol itself works (TEAL `x` when executed does `y`), so the testing scope is broader than Beaker alone
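
For completeness, an execution-based test might look roughly like the following; the module and method names reflect Beaker's pre-1.0 sandbox helpers and `ApplicationClient` and should be treated as an assumption, as should the `Calculator` app:

```python
from beaker import sandbox
from beaker.client import ApplicationClient

from contracts import Calculator  # hypothetical Beaker app with an `add` ABI method


def test_add_executes_correctly():
    algod = sandbox.get_algod_client()   # pre-configured client for the local node
    account = sandbox.get_accounts()[0]  # a pre-funded sandbox account
    app_client = ApplicationClient(algod, Calculator(), signer=account.signer)
    app_client.create()                  # deploy the app to the local network
    result = app_client.call(Calculator.add, a=1, b=2)
    assert result.return_value == 3      # ABI return value decoded by the client
```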

## Preferred option

Option 1 for the bulk of testing, combined with Option 2 to ensure the approved TEAL actually compiles (potentially run only on CI by default to keep the local dev loop fast). This provides a rapid feedback loop for developers as well as output stability and great regression coverage.

## Selected option

Combination of Options 1, 2 and 3:

- While Options 1 + 2 provide high confidence with a fast feedback loop, they rely on the approver being able to determine that the TEAL output does what they think it does, which isn't always the case
- Option 3 will be used judiciously to provide that extra level of confidence that the fundamentals of the Beaker output are correct for each main feature; key scenarios will be covered with execution-based tests. The goal isn't combinatorial coverage, which would be slow and time-consuming, but a higher degree of confidence
- The decision of when to use Option 3 as well as Options 1 + 2 will be made on a per-feature basis and reviewed via pull request; over time a set of principles may be devised that outlines a clear delineation
- Use PyTest markers to separate execution-based tests so that by default the dev feedback loop stays fast, while the full suite always runs against pull requests and merges to main (a sketch follows this list)
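
A minimal sketch of that marker split; the marker name and config layout are assumptions:

```python
import pytest


@pytest.mark.sandbox  # custom marker for tests that need a running local network
def test_add_executes_correctly():
    ...


# Registered once, e.g. in pyproject.toml (layout assumed):
#   [tool.pytest.ini_options]
#   markers = ["sandbox: requires a running local sandbox"]
#
# Local dev loop (fast): pytest -m "not sandbox"
# CI / pull requests:    pytest
```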
