This is the abridged developer documentation for Algorand Developer Portal
# Algorand Developer Portal
> Everything you need to build solutions powered by the Algorand blockchain network.
Start your journey today

## Become an Algorand Developer

Follow our quick start guide to install Algorand’s developer toolkit and go from zero to deploying your "Hello, world" smart contract in mere minutes using the TypeScript or Python pathways.

Join the network

## Run an Algorand node

Join the Algorand network with a validator node on accessible commodity hardware in a matter of minutes. Experience how easy it is to become a node runner so you can participate in staking rewards, validate blocks, submit transactions, and read chain data.
# Intro to AlgoKit
AlgoKit is a comprehensive software development kit designed to streamline and accelerate the process of building decentralized applications on the Algorand blockchain. At its core, AlgoKit features a powerful command-line interface (CLI) tool that provides developers with an array of functionalities to simplify blockchain development. Along with the CLI, AlgoKit offers a suite of libraries, templates, and tools that facilitate rapid prototyping and deployment of secure, scalable, and efficient applications. Whether you’re a seasoned blockchain developer or new to the ecosystem, AlgoKit offers everything you need to harness the full potential of Algorand’s impressive tech and innovative consensus algorithm.

[Introduction to AlgoKit](https://www.youtube.com/embed/pojEI-8h0lg?rel=0)

## AlgoKit CLI

AlgoKit CLI is a powerful set of command line tools for Algorand developers. Its goal is to help developers build and launch secure, automated, production-ready applications rapidly.

### AlgoKit CLI commands

Here is the list of commands that you can use with AlgoKit CLI.
* `bootstrap` - Bootstrap AlgoKit project dependencies
* `compile` - Compile Algorand Python code
* `completions` - Install shell completions for AlgoKit
* `deploy` - Deploy your smart contracts effortlessly to various networks
* `dispenser` - Fund your TestNet account with ALGOs from the AlgoKit TestNet Dispenser
* `doctor` - Check AlgoKit installation and dependencies
* `explore` - Explore Algorand blockchains using lora
* `generate` - Generate code for an Algorand project
* `goal` - Run the Algorand goal CLI against the AlgoKit Sandbox
* `init` - Quickly initialize new projects using official Algorand templates or community-provided templates
* `localnet` - Manage a locally sandboxed private Algorand network
* `project` - Perform a variety of AlgoKit project workspace related operations, like bootstrapping the development environment, deploying smart contracts, running custom commands, and more
* `task` - Perform a variety of useful operations on the Algorand blockchain, like signing and sending transactions, minting ASAs, creating vanity addresses, and more

To learn more about AlgoKit CLI, refer to the following resources:

Learn more about using and configuring AlgoKit CLI

Explore the codebase and contribute to its development

## Algorand Python

If you are a Python developer, you no longer need to learn a complex smart contract language to write smart contracts. Algorand Python is a semantically and syntactically compatible, typed Python language that works with standard Python tooling and allows you to write Algorand smart contracts (apps) and logic signatures in Python. Since the code runs on the Algorand Virtual Machine (AVM), there are limitations and minor differences in behavior from standard Python, but all code you write with Algorand Python is Python code. Here is an example of a simple Hello World smart contract written in Algorand Python:

```py
from algopy import ARC4Contract, String, arc4


class HelloWorld(ARC4Contract):
    @arc4.abimethod()
    def hello(self, name: String) -> String:
        return "Hello, " + name + "!"
```

To learn more about Algorand Python, refer to the following resources:

Learn more about the design and implementation of Algorand Python

Explore the codebase and contribute to its development

## Algorand TypeScript

If you are a TypeScript developer, you no longer need to learn a complex smart contract language to write smart contracts. Algorand TypeScript is a semantically and syntactically compatible, typed TypeScript language that works with standard TypeScript tooling and allows you to write Algorand smart contracts (apps) and logic signatures in TypeScript. Since the code runs on the Algorand Virtual Machine (AVM), there are limitations and minor differences in behavior from standard TypeScript, but all code you write with Algorand TypeScript is TypeScript code. Here is an example of a simple Hello World smart contract written in Algorand TypeScript:

```ts
import { Contract } from '@algorandfoundation/algorand-typescript';

export class HelloWorld extends Contract {
  hello(name: string): string {
    return `Hello, ${name}`;
  }
}
```

To learn more about Algorand TypeScript, refer to the following resources:

Learn more about the design and implementation of Algorand TypeScript

Explore the codebase and contribute to its development

## AlgoKit Utils

AlgoKit Utils is a utility library recommended for all chain interactions, such as sending transactions, creating tokens (ASAs), calling smart contracts, and reading blockchain records. The goal of this library is to provide intuitive, productive utility functions that make it easier, quicker, and safer to build applications on Algorand. Largely, these functions wrap the underlying Algorand SDK but provide a higher-level interface with sensible defaults and capabilities for common tasks. AlgoKit Utils is available in TypeScript and Python.
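One of the ergonomics the library focuses on is avoiding unit mistakes between Algos and microAlgos (one Algo is 1,000,000 microAlgos, the unit used on-chain). To make that relationship concrete, here is a minimal plain-Python sketch of the conversion idea — illustrative only, not the actual AlgoKit Utils amount API:

```python
# Minimal sketch of the Algo/microAlgo relationship (1 Algo = 1_000_000 microAlgos).
# Illustrative only; AlgoKit Utils ships its own dedicated amount types.
MICRO_ALGOS_PER_ALGO = 1_000_000


def algos_to_micro_algos(algos: float) -> int:
    """Convert a whole-Algo amount to microAlgos (the on-chain unit)."""
    return round(algos * MICRO_ALGOS_PER_ALGO)


def micro_algos_to_algos(micro_algos: int) -> float:
    """Convert microAlgos back to Algos for display."""
    return micro_algos / MICRO_ALGOS_PER_ALGO


print(algos_to_micro_algos(1.5))     # 1500000
print(micro_algos_to_algos(250_000))  # 0.25
```

Having a dedicated type for amounts (rather than bare integers) is what lets the library catch mixed-unit mistakes at the API boundary.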
### Capabilities

The library helps you interact with and develop against the Algorand blockchain with a series of end-to-end capabilities as described below:

* `AlgorandClient` - The key entrypoint to the AlgoKit Utils functionality
* Core capabilities
  * Client management - Creation of (auto-retry) algod, indexer, and kmd clients against various networks resolved from environment or specified configuration
  * Account management - Creation and use of accounts, including mnemonic, rekeyed, multisig, transaction signer (for dApps and Atomic Transaction Composer compatible signers), idempotent KMD, and environment-variable-injected accounts
  * Algo amount handling - Reliable and terse specification of microAlgo and Algo amounts and conversion between them
  * Transaction management - Ability to send single, grouped, or Atomic Transaction Composer transactions with consistent and highly configurable semantics, including configurable control of transaction notes (including ARC-0002), logging, fees, multiple sender account types, and sending behavior
* Higher-order use cases
  * App management - Creation, updating, deleting, and calling (ABI and otherwise) smart contract apps and the metadata associated with them (including state and boxes)
  * App deployment - Idempotent (safely retryable) deployment of an app, including deploy-time immutability and permanence control and TEAL template substitution
  * App client - Builds on top of the app management and app deployment capabilities to provide a high-productivity application client that works with ARC-0032 application spec defined smart contracts (e.g. via Beaker)
  * Algo transfers - Ability to easily initiate Algo transfers between accounts, including dispenser management and idempotent account funding
  * Automated testing - Terse, robust automated testing primitives that work across any testing framework (including jest and vitest) to facilitate fixture management, quickly generating isolated and funded test accounts, transaction logging, indexer wait management, and log capture
  * Indexer lookups - Type-safe indexer API wrappers (no more `Record` pain), including automatic pagination control

To learn more about AlgoKit Utils, refer to the following resources:

Learn more about the design and implementation of AlgoKit Utils

Explore the codebase and contribute to its development

[Introduction to AlgoKit Utils](https://www.youtube.com/embed/AkUj1GgcMig?rel=0)

## AlgoKit LocalNet

The AlgoKit LocalNet feature allows you to start, stop, reset, and otherwise manage a locally sandboxed private Algorand network. This allows you to interact with and deploy changes against your own Algorand network without needing to worry about funding TestNet accounts, whether the information you submit is publicly visible, or whether you are connected to an active Internet connection (once the network has been started).

AlgoKit LocalNet uses Docker images optimized for a great developer experience. This means the Docker images are small and start fast. It also means that features suited to developers are enabled, such as KMD (so you can programmatically get faucet private keys).

To learn more about AlgoKit LocalNet, refer to the following resources:

Learn more about using and configuring AlgoKit LocalNet

Explore the source code and technical implementation details

## AVM Debugger

The AlgoKit AVM VS Code debugger extension provides a convenient way to debug any Algorand smart contracts written in TEAL.
To learn more about the AVM debugger, refer to the following resources:

Learn more about using and configuring the AVM Debugger

Explore the AVM Debugger codebase and contribute to its development

## Language Servers

The Algorand VS Code language extensions provide developers with enhanced capabilities to build Algorand smart contracts efficiently within Visual Studio Code. Designed to work alongside the standard Python and TypeScript language servers, these extensions extend core IDE functionality by adding Algorand-specific diagnostics, validation, and intelligent code actions. The Python language extension integrates seamlessly with the official Python extension and automatically detects the PuyaPy environment to offer real-time contract-aware analysis and quick fixes, helping developers catch errors early and improve code quality. Similarly, the TypeScript language extension supports Algorand’s specialized TypeScript and smart contract utilities, providing targeted diagnostics and validation in a familiar developer workflow. Both extensions simplify the development process by offering immediate feedback relevant to Algorand’s unique blockchain environment, accelerating learning and reducing common mistakes. Currently in beta, they require Visual Studio Code 1.80.0 or later and are designed to complement existing language tooling, making them essential tools for any developer working on Algorand smart contracts.

Learn more about the TypeScript Language Server

Learn more about the Python Language Server

## Client Generator

The client generator produces a type-safe smart contract client for the Algorand blockchain that wraps the application client in AlgoKit Utils and tailors it to a specific smart contract. It does this by reading an ARC-0032 application spec file and generating a client that exposes methods for each ABI method in the target smart contract, along with helpers to create, update, and delete the application.
To learn more about the client generator, refer to the following resources:

Learn more about the TypeScript client generator for Algorand smart contracts

Explore the TypeScript client generator codebase and contribute to its development

Learn more about the Python client generator for Algorand smart contracts

Explore the Python client generator codebase and contribute to its development

## TestNet Dispenser

The AlgoKit TestNet Dispenser API provides functionality to interact with the Dispenser service, which enables users to fund and refund assets.

To learn more about the TestNet Dispenser, refer to the following resources:

Learn more about using and configuring the AlgoKit TestNet Dispenser

Explore the technical implementation and contribute to its development

## AlgoKit Tools and Versions

While AlgoKit as a *collection* was bumped to version 3.0 on March 26, 2025, it is important to note that the individual tools in the kit are on different package version numbers. In the future this may be changed to epoch versioning so that it is clear that all packages are part of the same epoch release.
| Tool                                       | Repository                      | AlgoKit 3.0 Min Version |
| ------------------------------------------ | ------------------------------- | ----------------------- |
| Command Line Interface (CLI)               | algokit-cli                     | 2.6.0                   |
| Utils (Python)                             | algokit-utils-py                | 4.0.0                   |
| Utils (TypeScript)                         | algokit-utils-ts                | 9.0.0                   |
| Client Generator (Python)                  | algokit-client-generator-py     | 2.1.0                   |
| Client Generator (TypeScript)              | algokit-client-generator-ts     | 5.0.0                   |
| Subscriber (Python)                        | algokit-subscriber-py           | 1.0.0                   |
| Subscriber (TypeScript)                    | algokit-subscriber-ts           | 3.2.0                   |
| Puya Compiler                              | puya                            | 4.5.3                   |
| Puya Compiler, TypeScript                  | puya-ts                         | 1.0.0-beta.58           |
| AVM Unit Testing (Python)                  | algorand-python-testing         | 0.5.0                   |
| AVM Unit Testing (TypeScript)              | algorand-typescript-testing     | 1.0.0-beta.30           |
| Lora the Explorer                          | algokit-lora                    | 1.2.0                   |
| AVM VSCode Debugger                        | algokit-avm-vscode-debugger     | 1.1.5                   |
| Utils Add-On for TypeScript Debugging      | algokit-utils-ts-debug          | 1.0.4                   |
| Base Project Template                      | algokit-base-template           | 1.1.0                   |
| Python Smart Contract Project Template     | algokit-python-template         | 1.6.0                   |
| TypeScript Smart Contract Project Template | algokit-typescript-template     | 0.3.1                   |
| React Vite Frontend Project Template       | algokit-react-frontend-template | 1.1.1                   |
| Fullstack Project Template                 | algokit-fullstack-template      | 2.1.4                   |

## Install

### Prerequisites

The installation prerequisites vary depending on the installation method; refer to the OS-specific instructions below. Depending on the features you choose to leverage from the AlgoKit CLI, additional dependencies may be required. The AlgoKit CLI will tell you if you are missing one for a given command. These optional dependencies are:

* **Git**: Essential for creating and updating projects from templates.
* **Docker**: Necessary for running the AlgoKit LocalNet environment. Docker Compose version 2.5.0 or higher is required.
* **Python**: For those installing the AlgoKit CLI via `pipx` or building contracts using Algorand Python. **The minimum required version is Python 3.12 when working with Algorand Python**.
* **Node.js**: For those working on frontend templates or building contracts using Algorand TypeScript or TEALScript. **The minimum required versions are Node.js `v22` and npm `v10`**.

### Cross-platform installation

AlgoKit can be installed using OS-specific package managers, or using the Python tool `pipx`. See below for specific installation instructions.

#### Installation Methods

### Install AlgoKit on Windows

1. Ensure prerequisites are installed
   * winget (should be installed by default on recent Windows 10 or later)
   * Git (or `winget install git.git`)
   * Docker (or `winget install docker.dockerdesktop`)
2. Install using winget

   ```shell
   winget install algokit
   ```

#### Maintenance

Some useful commands for updating or removing AlgoKit in the future:

* To update AlgoKit: `winget upgrade algokit`
* To remove AlgoKit: `winget uninstall algokit`

### Install AlgoKit on Mac

1. Ensure prerequisites are installed
   * Git (should already be available if `brew` is installed)
   * Docker (or `brew install --cask docker`)
2. Install using Homebrew

   ```shell
   brew install algorandfoundation/tap/algokit
   ```

3. Restart the terminal to ensure AlgoKit is available on the path

#### Maintenance

Some useful commands for updating or removing AlgoKit in the future:

* To update AlgoKit: `brew upgrade algokit`
* To remove AlgoKit: `brew uninstall algokit`

### Install AlgoKit on Linux

1. Ensure prerequisites are installed
   * snap (should be installed by default on Ubuntu 16.04.4 LTS (Xenial Xerus) or later)
2. Install using snap

   ```shell
   sudo snap install algokit --classic
   ```

   > Detailed guidelines are available for each supported Linux distro.

#### Maintenance

Some useful commands for updating or removing AlgoKit in the future:
* To update AlgoKit: `snap refresh algokit`
* To remove AlgoKit: `snap remove --purge algokit`

### Install AlgoKit with pipx on any OS

1. Ensure desired prerequisites are installed
2. Install using pipx

   ```shell
   pipx install algokit
   ```

3. Restart the terminal to ensure AlgoKit is available on the path

#### Maintenance

Some useful commands for updating or removing AlgoKit in the future:

* To update AlgoKit: `pipx upgrade algokit`
* To remove AlgoKit: `pipx uninstall algokit`

### Verify installation

Verify AlgoKit is installed correctly by running `algokit --version`; you should see output similar to:

```plaintext
algokit, version 1.0.1
```

It is also recommended that you run `algokit doctor` to verify there are no issues in your local environment and to diagnose any problems if you do have difficulties running AlgoKit. The output of this command will look similar to:

```plaintext
timestamp: 2023-03-27T01:23:45+00:00
AlgoKit: 1.0.1
AlgoKit Python: 3.11.1 (main, Dec 23 2022, 09:28:24) [Clang 14.0.0 (clang-1400.0.29.202)] (location: /Users/algokit/.local/pipx/venvs/algokit)
OS: macOS-13.1-arm64-arm-64bit
docker: 20.10.21
docker compose: 2.13.0
git: 2.37.1
python: 3.10.9 (location: /opt/homebrew/bin/python)
python3: 3.10.9 (location: /opt/homebrew/bin/python3)
pipx: 1.1.0
poetry: 1.3.2
node: 18.12.1
npm: 8.19.2
brew: 3.6.18

If you are experiencing a problem with AlgoKit, feel free to submit an issue via:
https://github.com/algorandfoundation/algokit-cli/issues/new
Please include this output, if you want to populate this message in your clipboard, run `algokit doctor -c`
```

As the above output shows, the doctor command is a helpful tool if you need to ask for support or file an issue.

### Troubleshooting

This section addresses specific edge cases and issues that some users might encounter when interacting with the CLI.
The following table provides solutions to known edge cases:

| Issue Description | OS(s) with observed behaviour | Steps to mitigate | References |
| ----------------- | ----------------------------- | ----------------- | ---------- |
| This scenario may arise if the installed `python` was built without the `--with-ssl` flag enabled, causing pip to fail when trying to install dependencies. | Debian 12 | Run `sudo apt-get install -y libssl-dev` to install the required openssl dependency, then reinstall python with the `--with-ssl` flag enabled. | |
| `poetry install` invoked directly or via `algokit project bootstrap all` fails on `Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)`. | `MacOS` >=14 using `python` 3.13 installed via `homebrew` | Install dependencies deprecated in `3.13` and the latest MacOS versions via `brew install pkg-config`, delete the virtual environment folder, and retry the `poetry install` command. | N/A |
# AVM Debugger
> Tutorial on how to debug a smart contract using AVM Debugger
The AVM VSCode debugger enables inspection of blockchain logic through simulate traces - JSON files containing detailed transaction execution data without on-chain deployment. The extension requires both trace files and source maps that link original code (TEAL or Puya) to compiled instructions. While the extension works independently, projects created with AlgoKit templates include utilities that automatically generate these debugging artifacts. For a full list of the debugger extension’s capabilities, refer to its documentation.

This tutorial demonstrates the workflow using a Python-based Algorand project. We will walk through identifying and fixing a bug in an Algorand smart contract using the Algorand Virtual Machine (AVM) Debugger. We’ll start with a simple smart contract containing a deliberate bug, and by using the AVM Debugger, we’ll pinpoint and fix the issue. This guide will walk you through setting up, debugging, and fixing a smart contract using this extension.

[Debugging with AlgoKit 3.0](https://www.youtube.com/embed/yPWRlmmTSHA?rel=0)

## Prerequisites

* Visual Studio Code (version 1.80.0 or higher)
* Node.js (version 18.x or higher)
* AlgoKit installed
* The AlgoKit AVM VSCode Debugger extension installed
* Basic understanding of Algorand smart contracts

## Step 1: Setup the Debugging Environment

Install the AlgoKit AVM VSCode Debugger extension from the VSCode Marketplace by going to Extensions in VSCode, searching for "AlgoKit AVM Debugger", and clicking install.

## Step 2: Set Up the Example Smart Contract

We aim to debug smart contract code in a project generated via `algokit init`. We will use a `TicTacToe` smart contract written in Algorand Python. The bug is in the `move` method, where `games_played` is updated by `2` for the guest and `1` for the host (it should be updated by `1` for both guest and host).
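The contract listing itself is not reproduced here. To make the bug concrete before stepping through it in the debugger, here is a hypothetical plain-Python model of the flawed `games_played` bookkeeping — not the actual Algorand Python contract, just an illustration of the logic error we will hunt down:

```python
# Hypothetical plain-Python model of the buggy `games_played` bookkeeping.
# NOT the real Algorand Python contract; illustrative only.


class TicTacToeModel:
    def __init__(self) -> None:
        self.games_played = {"host": 0, "guest": 0}

    def finish_game(self, player: str) -> None:
        if player == "guest":
            self.games_played["guest"] += 2  # BUG: should be += 1
        else:
            self.games_played["host"] += 1   # correct

game = TicTacToeModel()
game.finish_game("guest")
game.finish_game("host")
print(game.games_played)  # {'host': 1, 'guest': 2} - guest incorrectly jumped by 2
```

This is exactly the kind of off-by-one state error that is easy to miss by reading the code but obvious once you watch the state change step by step in the debugger.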
1. Remove the `hello_world` folder.
2. Create a new tic-tac-toe smart contract starter via `algokit generate smart-contract -a contract_name "TicTacToe"`.
3. Replace the content of `contract.py` with the tic-tac-toe contract code.
4. Add the deployment code to the `deploy.config` file.

## Step 3: Compile & Deploy the Smart Contract

To enable debugging mode and full tracing for each step in the execution, go to the `main.py` file and add:

```python
from algokit_utils.config import config

config.configure(debug=True, trace_all=True)
```

Next, compile the smart contract using AlgoKit:

```bash
algokit project run build
```

This will generate the following files in the artifacts folder: `approval.teal`, `clear.teal`, `clear.puya.map`, `approval.puya.map`, and `arc32.json`. The `.puya.map` files are the result of running the puyapy compiler (which the `project run build` command invokes automatically). The compiler has an option called `--output-source-maps`, which is enabled by default.

Deploy the smart contract on LocalNet:

```bash
algokit project deploy localnet
```

This will automatically generate `*.appln.trace.avm.json` files in the `debug_traces` folder, and `.teal` and `.teal.map` files in sources. The `.teal.map` files are source maps for TEAL, and they are automatically generated every time an app is deployed via `algokit-utils`. Even if you are only interested in debugging with Puya source maps, the TEAL source maps are always available as a backup in case you need to fall back to a lower-level source map.

### Expected Behavior

`games_played` should be updated by `1` for both guest and host.

### Bug

When the `move` method is called, `games_played` is updated incorrectly for the guest player.

## Step 4: Start the debugger

In VSCode, go to Run and Debug on the left side. This will load the compiled smart contract into the debugger. In Run and Debug, select "Debug TEAL via AlgoKit AVM Debugger".
It will ask you to select the appropriate `debug_traces` file.

Figure: Load Debugger in VSCode

Next, it will ask you to select the source map file. Select the `approval.puya.map` file. This indicates to the debug extension that you would like to debug the given trace file using Puya source maps, allowing you to step through high-level Python code. If you need to switch the debugger to use TEAL source maps, or Puya source maps for other frontends such as TypeScript, remove the individual record from the `.algokit/sources/sources.avm.json` file.

## Step 5: Debugging the smart contract

Let’s now debug the issue. Enter the `app_id` from the `transaction_group.json` file. This opens the contract. Set a breakpoint in the `move` method. You can also add additional breakpoints.

On the left side, you can see Program State, which includes the program counter, opcode, stack, and scratch space. In On-chain State you will be able to see the global, local, and box storage for the application ID deployed on LocalNet.

:::note
We have used LocalNet, but the contracts can be deployed on any other network. A trace file is, in a sense, agnostic of the network in which it was generated. As long as it is a complete simulate trace that contains state, stack, and scratch states in the execution trace, the debugger will work just fine.
:::

Once you start stepping through the debugger, these views are populated according to the contract. Now you can step into the code.

## Step 6: Analyze the Output

Observe that the `games_played` variable for the guest is increased by 2 (incorrectly), whereas for the host it is increased correctly.

## Step 7: Fix the Bug

Now that we’ve identified the bug, let’s fix it in the `move` method of our original smart contract.

## Step 8: Re-deploy

Re-compile and re-deploy the contract as in Step 3.

## Step 9: Verify again using Debugger

Reset the `sources.avm.json` file, then restart the debugger, selecting the `approval.puya.map` file.
Run through steps 4 to 6 to verify that `games_played` now updates as expected, confirming the bug has been fixed.

## Summary

In this tutorial, we walked through the process of using the AVM Debugger to debug an Algorand smart contract. We set up a debugging environment, loaded a smart contract with a planted bug, stepped through the execution, and identified the issue. This process can be invaluable when developing and testing smart contracts on the Algorand blockchain. It is highly recommended to thoroughly test your smart contracts before deploying them to MainNet, to ensure they function as expected and to prevent costly errors in production.

## Next steps

To learn more about debugging sessions, refer to the extension’s documentation.
# Application Client Usage
After using the CLI tool to generate an application client, you will end up with a TypeScript file containing several type definitions, an application factory class, and an application client class named after the target smart contract. For example, if the contract name is `HelloWorldApp` then you will end up with `HelloWorldAppFactory` and `HelloWorldAppClient` classes. The contract name will also be used to prefix a number of other types in the generated file, which allows you to generate clients for multiple smart contracts in one project without ambiguous type names.

> [!NOTE]
>
> If you are confused about when to use the factory vs the client, the mental model is: use the client if you know the app ID; use the factory if you don’t know the app ID (deferred knowledge, or the instance doesn’t exist yet on the blockchain) or if you have multiple app IDs.

## Creating an application client instance

The first step to using the factory/client is to create an instance, which can be done via the constructor or, more easily, via `algorand.client.get_typed_app_factory()` and `algorand.client.get_typed_app_client()` (see code examples below). Once you have an instance, if you want an escape hatch to the underlying untyped factory/client, you can access them as a property:

```python
# Untyped `AppFactory`
untyped_factory = factory.app_factory
# Untyped `AppClient`
untyped_client = client.app_client
```

### Get a factory

The factory allows you to create and deploy one or more app instances, and to create one or more app clients to interact with those (or other) app instances when you need to create clients for multiple apps. If you only need a single client for a single, known app then you can skip the factory and just use a client directly.
```python
# Via AlgorandClient
factory = algorand.client.get_typed_app_factory(HelloWorldAppFactory)

# Or, using the options:
factory_with_optional_params = algorand.client.get_typed_app_factory(
    HelloWorldAppFactory,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenName",
    compilation_params={
        "deletable": True,
        "updatable": False,
        "deploy_time_params": {
            "VALUE": "1",
        },
    },
    version="2.0",
)

# Or via the constructor
factory = HelloWorldAppFactory(algorand)

# with options:
factory = HelloWorldAppFactory(
    algorand,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenName",
    compilation_params={
        "deletable": True,
        "updatable": False,
        "deploy_time_params": {
            "VALUE": "1",
        },
    },
    version="2.0",
)
```

### Get a client by app ID

The typed client can be retrieved by ID. You can get one using a previously created app factory, from an `AlgorandClient` instance, or using the constructor:

```python
# Via factory
factory = algorand.client.get_typed_app_factory(HelloWorldAppFactory)
client = factory.get_app_client_by_id(app_id=123)
client_with_optional_params = factory.get_app_client_by_id(
    app_id=123,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map` and `clear_source_map`
)

# Via AlgorandClient
client = algorand.client.get_typed_app_client_by_id(HelloWorldAppClient, app_id=123)
client_with_optional_params = algorand.client.get_typed_app_client_by_id(
    HelloWorldAppClient,
    app_id=123,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map` and `clear_source_map`
)

# Via constructor
client = HelloWorldAppClient(
    algorand=algorand,
    app_id=123,
)
client_with_optional_params = HelloWorldAppClient(
    algorand=algorand,
    app_id=123,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map` and `clear_source_map`
)
```

### Get a client by creator address and name

The typed client can be
retrieved by looking up apps by name for the given creator address, if they were deployed using the deployment capabilities described below. You can get one by using a previously created app factory:

```python
factory = algorand.client.get_typed_app_factory(HelloWorldAppFactory)
client = factory.get_app_client_by_creator_and_name(creator_address="CREATORADDRESS")
client_with_optional_params = factory.get_app_client_by_creator_and_name(
    creator_address="CREATORADDRESS",
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map` and `clear_source_map`
)
```

Or you can get one using an `AlgorandClient` instance:

```python
client = algorand.client.get_typed_app_client_by_creator_and_name(
    HelloWorldAppClient,
    creator_address="CREATORADDRESS",
)
client_with_optional_params = algorand.client.get_typed_app_client_by_creator_and_name(
    HelloWorldAppClient,
    creator_address="CREATORADDRESS",
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    ignore_cache=True,
    # Can also pass in `app_lookup_cache`, `approval_source_map`, and `clear_source_map`
)
```

### Get a client by network

The typed client can be retrieved by network, using any network IDs included within the ARC-56 app spec that match the current network.
You can get one by using a static method on the app client:

```python
client = HelloWorldAppClient.from_network(algorand)
client_with_optional_params = HelloWorldAppClient.from_network(
    algorand,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map` and `clear_source_map`
)
```

Or you can get one using an `AlgorandClient` instance:

```python
client = algorand.client.get_typed_app_client_by_network(HelloWorldAppClient)
client_with_optional_params = algorand.client.get_typed_app_client_by_network(
    HelloWorldAppClient,
    default_sender="DEFAULTSENDERADDRESS",
    app_name="OverriddenAppName",
    # Can also pass in `approval_source_map` and `clear_source_map`
)
```

## Deploying a smart contract (create, update, delete, deploy)

The app factory and client will variously include methods for creating (factory), updating (client), and deleting (client) the smart contract, based on the presence of the relevant on-completion actions and call config values in the ARC-32 / ARC-56 application spec file. If a smart contract does not support being updated, for instance, then no update methods will be generated in the client.

In addition, the app factory includes a `deploy` method which will:

* create the application if it doesn’t already exist
* update or recreate the application if it does exist, but differs from the version the client is built on
* recreate the application (and optionally delete the old version) if the deployed version is incompatible with being updated to the client version
* do nothing if the application is already deployed and up to date

You can find more specifics of this behaviour in the AlgoKit Utils app deployment docs.

### Create

To create an app you need to use the factory. The return value will include a typed client instance for the created app.
```python
factory = algorand.client.get_typed_app_factory(HelloWorldAppFactory)

# Create the application using a bare call
result, client = factory.send.create.bare()

# Pass in some compilation flags
factory.send.create.bare(compilation_params={
    "updatable": True,
    "deletable": True,
})

# Create the application using a specific on completion action (i.e. not a no_op)
factory.send.create.bare(params=CommonAppFactoryCallParams(on_complete=OnApplicationComplete.OptIn))

# Create the application using an ABI method (i.e. not a bare call)
factory.send.create.named_create(
    args=NamedCreateArgs(
        arg1=123,
        arg2="foo",
    ),
)

# Pass compilation flags and on completion actions to an ABI create call
factory.send.create.named_create(
    args=NamedCreateArgs(  # Note: also available as a typed tuple argument
        arg1=123,
        arg2="foo",
    ),
    compilation_params={
        "updatable": True,
        "deletable": True,
    },
    params=CommonAppFactoryCallParams(on_complete=OnApplicationComplete.OptIn),
)
```

If you want to get a built transaction without sending it you can use `factory.create_transaction.create...` rather than `factory.send.create...`. If you want to receive transaction parameters ready to pass in as an ABI argument or to a `TransactionComposer` call, you can use `factory.params.create...`.

### Update and Delete calls

To update or delete an app you need to use the client.
```python
client = algorand.client.get_typed_app_client_by_id(HelloWorldAppClient, app_id=123)

# Update the application using a bare call
client.send.update.bare()

# Pass in compilation flags
client.send.update.bare(compilation_params={
    "updatable": True,
    "deletable": False,
})

# Update the application using an ABI method
client.send.update.named_update(
    args=NamedUpdateArgs(
        arg1=123,
        arg2="foo",
    ),
)

# Pass compilation flags and call parameters
client.send.update.named_update(
    args=NamedUpdateArgs(
        arg1=123,
        arg2="foo",
    ),
    compilation_params={
        "updatable": True,
        "deletable": True,
    },
    params=CommonAppCallParams(on_complete=OnApplicationComplete.OptIn),
)

# Delete the application using a bare call
client.send.delete.bare()

# Delete the application using an ABI method
client.send.delete.named_delete()
```

If you want to get a built transaction without sending it you can use `client.create_transaction.update...` / `client.create_transaction.delete...` rather than `client.send.update...` / `client.send.delete...`. If you want to receive transaction parameters ready to pass in as an ABI argument or to a `TransactionComposer` call, you can use `client.params.update...` / `client.params.delete...`.

### Deploy call

The deploy call will make a create call, an update call, a delete-and-create pair of calls, or no call at all, depending on what is required for the deployed application to match the client’s contract version and the configured `on_update` and `on_schema_break` parameters. As such, the deploy method allows you to configure arguments for each potential call it may make (via `create_params`, `update_params` and `delete_params`). If the smart contract is not updatable or deletable, those parameters will be omitted.

These params values (`create_params`, `update_params` and `delete_params`) will only allow you to specify valid calls that are defined in the ARC-32 / ARC-56 app spec. You can control what call is made via the `method` parameter in these objects.
If it’s left out (or set to `None`) then it will be a bare call; if set to the ABI signature of a call it will perform that ABI call. If there are arguments required for that ABI call then the type of the arguments will automatically populate in IntelliSense.

```python
factory.deploy(
    create_params={
        "on_complete": OnApplicationComplete.OptIn,
    },
    update_params={
        "method": "named_update(uint64,string)string",
        "args": {
            "arg1": 123,
            "arg2": "foo",
        },
    },
    # Can leave this out and it will do an argumentless bare call (if that call is allowed)
    # delete_params={},
    allow_update=True,
    allow_delete=True,
    on_update="update",
    on_schema_break="replace",
)
```

## Opt in and close out

Methods with an `opt_in` or `close_out` on completion action are grouped under properties of the same name within the `send`, `create_transaction` and `params` properties of the client. If the smart contract does not handle one of these on completion actions, the corresponding property will be omitted.

```python
# Opt in with bare call
client.send.opt_in.bare()
# Opt in with ABI method
client.create_transaction.opt_in.named_opt_in(args=NamedOptInArgs(arg1=123))
# Close out with bare call
client.params.close_out.bare()
# Close out with ABI method
client.send.close_out.named_close_out(args=NamedCloseOutArgs(arg1="foo"))
```

## Clear state

All clients will have a clear state method which will call the clear state program of the smart contract.

```python
client.send.clear_state()
client.create_transaction.clear_state()
client.params.clear_state()
```

## No-op calls

The remaining ABI methods, which should all have an on completion action of `OnApplicationComplete.NoOp`, will be available on the `send`, `create_transaction` and `params` properties of the client. If a bare no-op call is allowed it will be available via `bare`.
These methods will allow you to optionally pass in `on_complete`, and if the method happens to allow on-completes other than no-op these can also be provided (those methods will also be available via the on-complete sub-property, per above).

```python
# Call an ABI method which takes no args
client.send.some_method()

# Call a no-op bare call
client.create_transaction.bare()

# Call an ABI method, passing args in as a dataclass
client.params.some_other_method(args=SomeOtherMethodArgs(arg1=123, arg2="foo"))
```

## Method and argument naming

By default, method names, types and arguments will be transformed to `snake_case` to match Python idiomatic semantics (names of classes are converted to idiomatic `PascalCase` as per Python conventions). If you want to keep the names the same as in the ARC-32 / ARC-56 app spec file, you can pass the `-p` or `--preserve-names` option to the type generator.

### Method name clashes

The ARC-32 / ARC-56 specification allows two methods to have the same name, as long as they have different ABI signatures. On the client these methods will be emitted with a unique name made up of the method’s full signature, e.g. `create_string_uint32_void`.

## ABI arguments

Each generated method will accept ABI method call arguments as both a `tuple` and a `dataclass`, so you can use whichever feels more comfortable. The types that are accepted will automatically translate from the specified ABI types in the app spec to an equivalent Python type.

```python
# ABI method which takes no args
client.send.no_args_method()

# ABI method with args
client.send.other_method(args=OtherMethodArgs(arg1=123, arg2="foo", arg3=bytes([1, 2, 3, 4])))

# Call an ABI method, passing args in as a tuple
client.send.yet_another_method(args=(1, 2, "foo"))
```

## Structs

If the method takes a struct as a parameter, or returns a struct as an output, then the generated client will accept the struct object as input and return the parsed struct object as output.
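As an illustration (with hypothetical names — the real classes are generated from your app spec), a struct like `{x: uint64, y: uint64}` in the app spec surfaces as a plain dataclass that you pass in as an argument and get back as a parsed return value:

```python
from dataclasses import dataclass

# Hypothetical generated struct for an app-spec struct {x: uint64, y: uint64}
@dataclass
class Vector:
    x: int
    y: int

# Passing a struct argument and receiving a parsed struct back would look like
# (assuming a hypothetical `add_vectors` ABI method on the generated client):
# result = client.send.add_vectors(args=AddVectorsArgs(a=Vector(1, 2), b=Vector(3, 4)))
# assert result.abi_return == Vector(4, 6)

v = Vector(x=1, y=2)
assert v.x == 1 and v.y == 2
```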
## Additional parameters

Each ABI method and bare call on the client allows the consumer to provide additional parameters as well as the core method / args / etc. parameters. This models the parameters that are available in the underlying untyped app client.

```python
client.send.some_method(
    args=SomeMethodArgs(arg1=123),
    # Additional parameters go here
)

client.send.opt_in.bare(
    # Additional parameters go here
)
```

## Composing transactions

Algorand allows multiple transactions to be composed into a single atomic transaction group that is committed (or rejected) as one.

### Using the fluent composer

The client exposes a fluent transaction composer which allows you to build up a group before sending it. The return values will be strongly typed based on the methods you add to the composer.

```python
result = (
    client.new_group()
    .method_one(args=SomeMethodArgs(arg1=123), box_references=["V"])
    # Non-ABI transactions can still be added to the group
    .add_transaction(
        client.app_client.create_transaction.fund_app_account(
            FundAppAccountParams(amount=AlgoAmount.from_micro_algos(5000))
        )
    )
    .method_two(args=SomeOtherMethodArgs(arg1="foo"))
    .send()
)

# Strongly typed as the return type of method_one
result_of_method_one = result.returns[0]
# Strongly typed as the return type of method_two
result_of_method_two = result.returns[1]
```

### Manually with the TransactionComposer

Multiple transactions can also be composed using the `TransactionComposer` class.
```python
result = (
    algorand.new_group()
    .add_app_call_method_call(
        client.params.method_one(args=SomeMethodArgs(arg1=123), box_references=["V"])
    )
    .add_payment(
        client.app_client.params.fund_app_account(
            FundAppAccountParams(amount=AlgoAmount.from_micro_algos(5000))
        )
    )
    .add_app_call_method_call(client.params.method_two(args=SomeOtherMethodArgs(arg1="foo")))
    .send()
)

# returns will contain a result object for each ABI method call in the transaction group
for return_value in result.returns:
    print(return_value)
```

## State

You can access local, global and box storage for any state values that are defined in the ARC-32 / ARC-56 app spec. You can do this via the `state` property, which has sub-properties for the three kinds of state: `state.global_state`, `state.local(address)` and `state.box`. Each one has a series of methods defined for each registered key or map from the app spec. Maps have a `value(key)` method to get a single value from the map by key and a `get_map()` method to return all map values as a dictionary. Keys have a `{key_name}()` method to get the value for the key, and there is also a `get_all()` method to get an object with all key values. The properties will return values of the corresponding Python type for the type in the app spec, and any structs will be parsed as the struct object.
```python
factory = algorand.client.get_typed_app_factory(Arc56TestFactory, default_sender="SENDER")

result, client = factory.send.create.create_application(
    args=[],
    compilation_params={"deploy_time_params": {"some_number": 1337}},
)

assert client.state.global_state.global_key() == 1337
assert another_app_client.state.global_state.global_key() == 1338
assert client.state.global_state.global_map.value("foo") == {
    "foo": 13,
    "bar": 37,
}

client.app_client.fund_app_account(
    FundAppAccountParams(amount=AlgoAmount.from_micro_algos(1_000_000))
)
client.send.opt_in.opt_in_to_application(
    args=[],
)

assert client.state.local(default_sender).local_key() == 1337
assert client.state.local(default_sender).local_map.value("foo") == "bar"
assert client.state.box.box_key() == "baz"
assert client.state.box.box_map.value({
    "add": {"a": 1, "b": 2},
    "subtract": {"a": 4, "b": 3},
}) == {
    "sum": 3,
    "difference": 1,
}
```
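The ABI-to-Python type translation mentioned in the ABI arguments section can be summarised with a small illustrative mapping (a sketch for common scalar types only; the real generator covers the full ABI type system, including arrays, tuples and structs):

```python
# Illustrative mapping of common ABI types to the Python types the
# generated clients accept and return (not the generator's exhaustive table).
ABI_TO_PYTHON = {
    "uint64": int,
    "uint32": int,
    "string": str,
    "byte[]": bytes,
    "bool": bool,
    "address": str,  # addresses are passed as their base32 string form
}

def python_type_for(abi_type: str) -> type:
    """Look up the Python equivalent for a simple ABI type."""
    return ABI_TO_PYTHON[abi_type]

assert python_type_for("uint64") is int
assert python_type_for("string") is str
```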
# Application Client Usage
After using the CLI tool to generate an application client you will end up with a TypeScript file containing several type definitions, an application factory class and an application client class that is named after the target smart contract. For example, if the contract name is `HelloWorldApp` then you will end up with `HelloWorldAppFactory` and `HelloWorldAppClient` classes. The contract name will also be used to prefix a number of other types in the generated file, which allows you to generate clients for multiple smart contracts in the one project without ambiguous type names.

> [!NOTE]
>
> If you are confused about when to use the factory vs the client, the mental model is: use the client if you know the app ID; use the factory if you don’t know the app ID (deferred knowledge, or the instance doesn’t exist yet on the blockchain) or you have multiple app IDs.

## Creating an application client instance

The first step to using the factory/client is to create an instance, which can be done via the constructor or, more easily, via an `AlgorandClient` instance using `algorand.client.getTypedAppFactory()` and `algorand.client.getTypedAppClient*()` (see code examples below).

Once you have an instance, if you want an escape hatch to the underlying untyped classes you can access them as a property:

```typescript
// Untyped `AppFactory`
const untypedFactory = factory.appFactory;
// Untyped `AppClient`
const untypedClient = client.appClient;
```

### Get a factory

The factory allows you to create and deploy one or more app instances, and to create one or more app clients to interact with those (or other) app instances, when you need to create clients for multiple apps. If you only need a single client for a single, known app then you can skip using the factory and just create a client directly.
```typescript
// Via AlgorandClient
const factory = algorand.client.getTypedAppFactory(HelloWorldAppFactory);

// Or, using the options:
const factoryWithOptionalParams = algorand.client.getTypedAppFactory(HelloWorldAppFactory, {
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenName',
  deletable: true,
  updatable: false,
  deployTimeParams: {
    VALUE: '1',
  },
  version: '2.0',
});

// Or via the constructor
const factory = new HelloWorldAppFactory({
  algorand,
});

// with options:
const factory = new HelloWorldAppFactory({
  algorand,
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenName',
  deletable: true,
  updatable: false,
  deployTimeParams: {
    VALUE: '1',
  },
  version: '2.0',
});
```

### Get a client by app ID

The typed client can be retrieved by ID. You can get one by using a previously created app factory, from an `AlgorandClient` instance, or using the constructor:

```typescript
// Via factory
const factory = algorand.client.getTypedAppFactory(HelloWorldAppFactory);
const client = factory.getAppClientById({ appId: 123n });
const clientWithOptionalParams = factory.getAppClientById({
  appId: 123n,
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});

// Via AlgorandClient
const client = algorand.client.getTypedAppClientById(HelloWorldAppClient, {
  appId: 123n,
});
const clientWithOptionalParams = algorand.client.getTypedAppClientById(HelloWorldAppClient, {
  appId: 123n,
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});

// Via constructor
const client = new HelloWorldAppClient({
  algorand,
  appId: 123n,
});
const clientWithOptionalParams = new HelloWorldAppClient({
  algorand,
  appId: 123n,
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});
```

### Get a client by creator address and name

The typed client can be retrieved by looking up
apps by name for the given creator address if they were deployed using AlgoKit's deploy functionality. You can get one by using a previously created app factory:

```typescript
const factory = algorand.client.getTypedAppFactory(HelloWorldAppFactory);

const client = factory.getAppClientByCreatorAndName({ creatorAddress: 'CREATORADDRESS' });

const clientWithOptionalParams = factory.getAppClientByCreatorAndName({
  creatorAddress: 'CREATORADDRESS',
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});
```

Or you can get one using an `AlgorandClient` instance:

```typescript
const client = algorand.client.getTypedAppClientByCreatorAndName(HelloWorldAppClient, {
  creatorAddress: 'CREATORADDRESS',
});

const clientWithOptionalParams = algorand.client.getTypedAppClientByCreatorAndName(
  HelloWorldAppClient,
  {
    creatorAddress: 'CREATORADDRESS',
    defaultSender: 'DEFAULTSENDERADDRESS',
    appName: 'OverriddenAppName',
    ignoreCache: true,
    // Can also pass in `appLookupCache`, `approvalSourceMap`, and `clearSourceMap`
  },
);
```

### Get a client by network

The typed client can be retrieved by network using any network IDs included within the ARC-56 app spec that match the current network.
You can get one by using a static method on the app client:

```typescript
const client = HelloWorldAppClient.fromNetwork({ algorand });

const clientWithOptionalParams = HelloWorldAppClient.fromNetwork({
  algorand,
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});
```

Or you can get one using an `AlgorandClient` instance:

```typescript
const client = algorand.client.getTypedAppClientByNetwork(HelloWorldAppClient);

const clientWithOptionalParams = algorand.client.getTypedAppClientByNetwork(HelloWorldAppClient, {
  defaultSender: 'DEFAULTSENDERADDRESS',
  appName: 'OverriddenAppName',
  // Can also pass in `approvalSourceMap`, and `clearSourceMap`
});
```

## Deploying a smart contract (create, update, delete, deploy)

The app factory and client will variously include methods for creating (factory), updating (client), and deleting (client) the smart contract, based on the presence of the relevant on completion actions and call config values in the ARC-32 / ARC-56 application spec file. If a smart contract does not support being updated, for instance, then no update methods will be generated in the client.

In addition, the app factory will also include a `deploy` method which will:

* create the application if it doesn’t already exist
* update or recreate the application if it does exist but differs from the version the client is built on
* recreate the application (and optionally delete the old version) if the deployed version is incompatible with being updated to the client version
* do nothing if the application is already deployed and up to date

You can find more specifics of this behaviour in the docs.

### Create

To create an app you need to use the factory. The return value will include a typed client instance for the created app.
```ts
const factory = algorand.client.getTypedAppFactory(HelloWorldAppFactory);

// Create the application using a bare call
const { result, appClient: client } = factory.send.create.bare();

// Pass in some compilation flags
factory.send.create.bare({
  updatable: true,
  deletable: true,
});

// Create the application using a specific on completion action (i.e. not a no_op)
factory.send.create.bare({
  onComplete: OnApplicationComplete.OptIn,
});

// Create the application using an ABI method (i.e. not a bare call)
factory.send.create.namedCreate({
  args: {
    arg1: 123,
    arg2: 'foo',
  },
});

// Pass compilation flags and on completion actions to an ABI create call
factory.send.create.namedCreate({
  args: {
    arg1: 123,
    arg2: 'foo',
  },
  updatable: true,
  deletable: true,
  onComplete: OnApplicationComplete.OptIn,
});
```

If you want to get a built transaction without sending it you can use `factory.createTransaction.create...` rather than `factory.send.create...`. If you want to receive transaction parameters ready to pass in as an ABI argument or to a `TransactionComposer` call, you can use `factory.params.create...`.

### Update and Delete calls

To update or delete an app you need to use the client.
```ts
const client = algorand.client.getTypedAppClientById(HelloWorldAppClient, {
  appId: 123n,
});

// Update the application using a bare call
client.send.update.bare();

// Pass in compilation flags
client.send.update.bare({
  updatable: true,
  deletable: false,
});

// Update the application using an ABI method
client.send.update.namedUpdate({
  args: {
    arg1: 123,
    arg2: 'foo',
  },
});

// Pass compilation flags
client.send.update.namedUpdate({
  args: {
    arg1: 123,
    arg2: 'foo',
  },
  updatable: true,
  deletable: true,
});

// Delete the application using a bare call
client.send.delete.bare();

// Delete the application using an ABI method
client.send.delete.namedDelete();
```

If you want to get a built transaction without sending it you can use `client.createTransaction.update...` / `client.createTransaction.delete...` rather than `client.send.update...` / `client.send.delete...`. If you want to receive transaction parameters ready to pass in as an ABI argument or to a `TransactionComposer` call, you can use `client.params.update...` / `client.params.delete...`.

### Deploy call

The deploy call will make a create call, an update call, a delete-and-create pair of calls, or no call at all, depending on what is required for the deployed application to match the client’s contract version and the configured `onUpdate` and `onSchemaBreak` parameters. As such, the deploy method allows you to configure arguments for each potential call it may make (via `createParams`, `updateParams` and `deleteParams`). If the smart contract is not updatable or deletable, those parameters will be omitted.

These params values (`createParams`, `updateParams` and `deleteParams`) will only allow you to specify valid calls that are defined in the ARC-32 / ARC-56 app spec. You can control what call is made via the `method` parameter in these objects. If it’s left out (or set to `undefined`) then it will be a bare call; if set to the ABI signature of a call it will perform that ABI call.
If there are arguments required for that ABI call then the type of the arguments will automatically populate in IntelliSense.

```ts
client.deploy({
  createParams: {
    onComplete: OnApplicationComplete.OptIn,
  },
  updateParams: {
    method: 'named_update(uint64,string)string',
    args: {
      arg1: 123,
      arg2: 'foo',
    },
  },
  // Can leave this out and it will do an argumentless bare call (if that call is allowed)
  //deleteParams: {}
  allowUpdate: true,
  allowDelete: true,
  onUpdate: 'update',
  onSchemaBreak: 'replace',
});
```

## Opt in and close out

Methods with an `opt_in` or `close_out` on completion action are grouped under properties of the same name within the `send`, `createTransaction` and `params` properties of the client. If the smart contract does not handle one of these on completion actions, the corresponding property will be omitted.

```ts
// Opt in with bare call
client.send.optIn.bare();
// Opt in with ABI method
client.createTransaction.optIn.namedOptIn({ args: { arg1: 123 } });
// Close out with bare call
client.params.closeOut.bare();
// Close out with ABI method
client.send.closeOut.namedCloseOut({ args: { arg1: 'foo' } });
```

## Clear state

All clients will have a clear state method which will call the clear state program of the smart contract.

```ts
client.send.clearState();
client.createTransaction.clearState();
client.params.clearState();
```

## No-op calls

The remaining ABI methods, which should all have an on completion action of `OnApplicationComplete.NoOp`, will be available on the `send`, `createTransaction` and `params` properties of the client. If a bare no-op call is allowed it will be available via `bare`. These methods will allow you to optionally pass in `onComplete`, and if the method happens to allow on-completes other than no-op these can also be provided (those methods will also be available via the on-complete sub-property, per above).
```ts
// Call an ABI method which takes no args
client.send.someMethod();

// Call a no-op bare call
client.createTransaction.bare();

// Call an ABI method, passing args in as an object
client.params.someOtherMethod({ args: { arg1: 123, arg2: 'foo' } });
```

## Method and argument naming

By default, method names, types and arguments will be transformed to `camelCase` to match TypeScript idiomatic semantics. If you want to keep the names the same as in the ARC-32 / ARC-56 app spec file (e.g. `snake_case` etc.) then you can pass the `-p` or `--preserve-names` option to the type generator.

### Method name clashes

The ARC-32 / ARC-56 specification allows two methods to have the same name, as long as they have different ABI signatures. On the client these methods will be emitted with a unique name made up of the method’s full signature, e.g. `createStringUint32Void`. Whilst TypeScript supports method overloading, in practice it would be impossible to reliably resolve the desired overload at run time once you factor in methods with default parameters.

## ABI arguments

Each generated method will accept ABI method call arguments in both a tuple and an object format, so you can use whichever feels more comfortable. The types that are accepted will automatically translate from the specified ABI types in the app spec to an equivalent TypeScript type.

```ts
// ABI method which takes no args
client.send.noArgsMethod({ args: {} });
client.send.noArgsMethod({ args: [] });

// ABI method with args
client.send.otherMethod({ args: { arg1: 123, arg2: 'foo', arg3: new Uint8Array([1, 2, 3, 4]) } });

// Call an ABI method, passing args in as a tuple
client.send.yetAnotherMethod({ args: [1, 2, 'foo'] });
```

## Structs

If the method takes a struct as a parameter, or returns a struct as an output, then the generated client will accept the struct object as input and return the parsed struct object as output.
## Additional parameters

Each ABI method and bare call on the client allows the consumer to provide additional parameters as well as the core method / args / etc. parameters. This models the parameters that are available in the underlying untyped app client.

```ts
client.send.someMethod({
  args: {
    arg1: 123,
  },
  /* Additional parameters go here */
});

client.send.optIn.bare({
  /* Additional parameters go here */
});
```

## Composing transactions

Algorand allows multiple transactions to be composed into a single atomic transaction group that is committed (or rejected) as one.

### Using the fluent composer

The client exposes a fluent transaction composer which allows you to build up a group before sending it. The return values will be strongly typed based on the methods you add to the composer.

```ts
const result = await client
  .newGroup()
  .methodOne({ args: { arg1: 123 }, boxReferences: ['V'] })
  // Non-ABI transactions can still be added to the group
  .addTransaction(client.appClient.createTransaction.fundAppAccount({ amount: (5000).microAlgo() }))
  .methodTwo({ args: { arg1: 'foo' } })
  .execute();

// Strongly typed as the return type of methodOne
const resultOfMethodOne = result.returns[0];
// Strongly typed as the return type of methodTwo
const resultOfMethodTwo = result.returns[1];
```

### Manually with the TransactionComposer

Multiple transactions can also be composed using the `TransactionComposer` class.
```ts
const result = await algorand
  .newGroup()
  .addAppCallMethodCall(client.params.methodOne({ args: { arg1: 123 }, boxReferences: ['V'] }))
  .addPayment(client.appClient.params.fundAppAccount({ amount: (5000).microAlgo() }))
  .addAppCallMethodCall(client.params.methodTwo({ args: { arg1: 'foo' } }))
  .execute();

// returns will contain a result object for each ABI method call in the transaction group
for (const { returnValue } of result.returns) {
  console.log(returnValue);
}
```

## State

You can access local, global and box storage for any state values that are defined in the ARC-32 / ARC-56 app spec. You can do this via the `state` property, which has three sub-properties for the three kinds of state: `state.global`, `state.local(address)` and `state.box`. Each one has a series of methods defined for each registered key or map from the app spec. Maps have a `value(key)` method to get a single value from the map by key and a `getMap()` method to return all map values as a map. Keys have a `{keyName}()` method to get the value for the key, and there is also a `getAll()` method to get an object with all key values. The properties will return values of the corresponding TypeScript type for the type in the app spec, and any structs will be parsed as the struct object.
```typescript
const factory = algorand.client.getTypedAppFactory(Arc56TestFactory, { defaultSender: 'SENDER' });

const { appClient: client } = await factory.send.create.createApplication({
  args: [],
  deployTimeParams: { someNumber: 1337n },
});

expect(await client.state.global.globalKey()).toBe(1337n);
expect(await anotherAppClient.state.global.globalKey()).toBe(1338n);
expect(await client.state.global.globalMap.value('foo')).toEqual({ foo: 13n, bar: 37n });

await client.appClient.fundAppAccount({ amount: microAlgos(1_000_000) });
await client.send.optIn.optInToApplication({ args: [], populateAppCallResources: true });

expect(await client.state.local(defaultSender).localKey()).toBe(1337n);
expect(await client.state.local(defaultSender).localMap.value('foo')).toBe('bar');
expect(await client.state.box.boxKey()).toBe('baz');
expect(
  await client.state.box.boxMap.value({
    add: { a: 1n, b: 2n },
    subtract: { a: 4n, b: 3n },
  }),
).toEqual({
  sum: 3n,
  difference: 1n,
});
```
# AlgoKit Templates
> Overview of AlgoKit templates
## Using a Custom AlgoKit Template

To initialize a community AlgoKit template, you can either provide a URL to the community template during the interactive wizard or pass `--template-url` to `algokit init`. For example:

```shell
algokit init --template-url https://github.com/algorandfoundation/algokit-python-template # This is the URL of the official Python template. Replace with the community template URL.
# or
algokit init # and select the Custom Template option
```

When you select the `Custom Template` option during the interactive wizard, you will be prompted to provide the URL of the custom template.

```shell
Community templates have not been reviewed, and can execute arbitrary code.
Please inspect the template repository, and pay particular attention to the values of _tasks, _migrations and _jinja_extensions in copier.yml
Enter a custom project URL, or leave blank and press enter to go back to official template selection.
Note that you can use gh: as a shorthand for github.com and likewise gl: for gitlab.com
Valid examples:
- gh:copier-org/copier
- gl:copier-org/copier
- git@github.com:copier-org/copier.git
- git+https://mywebsiteisagitrepo.example.com/
- /local/path/to/git/repo
- /local/path/to/git/bundle/file.bundle
- ~/path/to/git/repo
- ~/path/to/git/repo.bundle

? Custom template URL: # Enter the URL of the custom template here
```

The `--template-url` option can be combined with `--template-url-ref` to specify a specific commit, branch or tag.
For example:

```shell
algokit init --template-url https://github.com/algorandfoundation/algokit-python-template --template-url-ref 9985005b7389c90c6afed685d75bb8e7608b2a96
```

If the URL is not an official template there is a potential security risk, so to continue you must either acknowledge this prompt, or, if you are in a non-interactive environment, pass the `--UNSAFE-SECURITY-accept-template-url` option (we generally don’t recommend this option, so that users can review the warning message first), e.g.

```shell
Community templates have not been reviewed, and can execute arbitrary code.
Please inspect the template repository, and pay particular attention to the values of _tasks, _migrations and _jinja_extensions in copier.yml
? Continue anyway? Yes
```

## Creating Custom AlgoKit Templates

If the official templates do not serve your needs, you can create custom AlgoKit templates tailored to your project requirements or industry needs. These custom templates can be used for your future projects or contributed to the Algorand developer community, enhancing the ecosystem with specialized solutions.

Creating templates in AlgoKit involves using various configuration files and a templating engine to generate project structures tailored to your needs. This guide will cover the key concepts and best practices for creating templates in AlgoKit. We will also refer to the official Python template as an example.

### Quick Start

For users who are keen on getting started with creating AlgoKit templates, you can follow these quick steps:

1. Click on `Use this template` -> `Create a new repository` on the template’s GitHub page. This will create a new reference repository with clean git history, allowing you to modify and transform the base Python template into your custom template.
2. Modify the cloned template according to your specific needs.
The remainder of this tutorial will help you understand the behavior expected by AlgoKit, Copier (the templating framework), and the key concepts behind the default files you will encounter in the reference template.

### Overview of AlgoKit Templates

AlgoKit templates are project scaffolds that can initialize new smart contract projects. These templates can include code files, configuration files, and scripts. AlgoKit uses Copier and the Jinja templating engine to create new projects based on these templates.

#### Copier/Jinja

AlgoKit uses Copier templates. Copier is a library that allows you to create project templates that can be easily replicated and customized, and it is often used along with Jinja. Jinja is a modern and designer-friendly templating engine for the Python programming language. It is used in Copier templates to substitute variables in files and file names. You can find more information in the Copier and Jinja documentation.

#### AlgoKit Functionality with Templates

AlgoKit provides the `algokit init` command to initialize a new project using a template. You can pass the template name using the `-t` flag or select a template from a list.

### Key Concepts

#### .algokit.toml

This file is the AlgoKit configuration file for the project, and it can be used to specify the minimum version of AlgoKit that the template requires. This is essential to ensure that projects created with your template are always compatible with the version of AlgoKit they are using. Example from `algokit-python-template`:

```toml
[algokit]
min_version = "v1.1.0-beta.4"
```

This specifies that the template requires at least version `v1.1.0-beta.4` of AlgoKit.

#### Python Support: `pyproject.toml`

Python projects in AlgoKit can leverage various tools for dependency management and project configuration. While Poetry and the `pyproject.toml` file are common choices, they are not the only options. If you opt to use Poetry, you’ll rely on the `pyproject.toml` file to define the project’s metadata and dependencies.
This configuration file can utilize the Jinja templating syntax for customization. Example snippet from `algokit-python-template`:

```toml
[tool.poetry]
name = "{{ project_name }}"
version = "0.1.0"
description = "Algorand smart contracts"
authors = ["{{ author_name }} <{{ author_email }}>"]
readme = "README.md"
...
```

This example shows how project metadata and dependencies are defined in `pyproject.toml`, using Jinja syntax to allow placeholders for project metadata.

#### TypeScript Support: `package.json`

For TypeScript projects, the `package.json` file plays a role similar to `pyproject.toml` in Python projects. It specifies metadata about the project and lists the dependencies required for smart contract development. Example snippet:

```json
{
  "name": "{{ project_name }}",
  "version": "1.0.0",
  "description": "{{ project_description }}",
  "scripts": { "build": "tsc" },
  "devDependencies": {
    "typescript": "^4.2.4",
    "tslint": "^6.1.3",
    "tslint-config-prettier": "^1.18.0"
  }
}
```

This example shows how Jinja syntax is used within `package.json` to allow placeholders for project metadata and dependencies.

#### Bootstrap Option

When instantiating your template via the AlgoKit CLI, it will optionally prompt the user to automatically run the bootstrap command after the project is initialized, which can perform various setup tasks like installing dependencies or setting up databases.

* `env`: Searches for and copies a `.env*.template` file to an equivalent `.env*` file in the current working directory, prompting for any unspecified values. This feature is integral for securely managing environment variables, preventing sensitive data from inadvertently ending up in version control. By default, AlgoKit will scan for network-prefixed `.env` variables (e.g., `.env.localnet`), which can be particularly useful when relying on the AlgoKit LocalNet. If no such prefixed files are located, AlgoKit will then attempt to load default `.env` files.
This functionality provides greater flexibility for different network configurations. * `poetry`: If your Python project uses Poetry for dependency management, the `poetry` command installs Poetry (if not present) and runs `poetry install` in the current working directory to install Python dependencies. * `npm`: If you’re developing a JavaScript or TypeScript project, the `npm` command runs npm install in the current working directory to install Node.js dependencies. * `all`: The `all` command runs all the aforementioned bootstrap sub-commands in the current directory and its subdirectories. This command is a comprehensive way to ensure all project dependencies and environment variables are correctly set up. #### Predefined Copier Answers Copier can prompt the user for input when initializing a new project, which is then passed to the template as variables. This is useful for customizing the new project based on user input. Example: copier.yaml ```yaml project_name: type: str help: What is the name of this project? placeholder: 'algorand-app' ``` This would prompt the user for the project name, and the input can then be used in the template using the Jinja syntax `{{ project_name }}`. ##### Default Behaviors When creating an AlgoKit template, there are a few default behaviors that you can expect to be provided by algokit-cli itself without introducing any extra code to your templates: * **Git**: If Git is installed on the user’s system and the user’s working directory is a Git repository, AlgoKit CLI will commit the newly created project as a new commit in the repository. This feature helps to maintain a clean version history for the project. If you wish to add a specific commit message for this action, you can specify a `commit_message` in the `_commit` option in your `copier.yaml` file. 
* **VSCode**: If the user has Visual Studio Code (VSCode) installed and the path to VSCode is added to their system’s PATH, AlgoKit CLI will automatically open the newly created project in VSCode unless the user passes specific flags to the `init` command.
* **Bootstrap**: AlgoKit CLI is equipped to execute a bootstrap script after a project has been initialized. This script, included in AlgoKit templates, can be automatically run to perform various setup tasks, such as installing dependencies or setting up databases. This is managed by AlgoKit CLI and not within the user-created codebase. By default, if a `bootstrap` task is defined in the `copier.yaml`, AlgoKit CLI will execute it unless the user opts out during the prompt.

By combining predefined Copier answers with these default behaviors, you can create a smooth, efficient, and intuitive initialization experience for the users of your template.

#### Executing Python Tasks in Templates

If you need to use Python scripts as tasks within your Copier templates, ensure that Python is installed on the host machine. By convention, AlgoKit automatically detects the Python installation on the machine and fills in the `python_path` variable accordingly. This ensures that any Python scripts included as tasks within your Copier templates will execute using the system’s Python interpreter. It’s important to note that the use of `_copier_python` is not recommended.
Here’s an example of specifying a Python script execution in your `copier.yaml` without needing to explicitly use `_copier_python`:

```yaml
- '{{ python_path }} your_python_script.py'
```

If you’d like your template to be backwards compatible with versions of `algokit-cli` older than `v1.11.3` when executing custom Python scripts via `copier` tasks, you can use a conditional statement to determine the Python path:

```yaml
- '{{ python_path if python_path else _copier_python }} your_python_script.py' # _copier_python above is used for backwards compatibility with versions < v1.11.3 of the algokit cli
```

And to define `python_path` in your Copier questions:

```yaml
# Auto determined by algokit-cli from v1.11.3 to allow execution of python script
# in binary mode.
python_path:
  type: str
  help: Path to the sys.executable.
  when: false
```

#### Working with Generators

After mastering the use of `copier` and building your templates based on the official AlgoKit template repositories, you can enhance your proficiency by learning to define `custom generators`. Essentially, generators are smaller-scope `copier` templates designed to provide additional functionality after a project has been initialized from the template. For example, the official Python template incorporates a generator in the `.algokit/generators` directory. This generator can be used to execute auxiliary tasks on AlgoKit projects that are initiated from this template, like adding new smart contracts to an existing project. For a comprehensive understanding, please consult the Copier and AlgoKit CLI documentation.

##### How to Create a Generator

Outlined below are the fundamental steps to create a generator. Although `copier` gives you complete autonomy in structuring your template, you may prefer to define your generator to meet your specific needs. Nevertheless, as a starting point, we suggest: 1.
Generate a new directory hierarchy within your template directory under the `.algokit/generators` folder (this is merely a suggestion; you can define a custom path if necessary and point to it via the `.algokit.toml` file). 2. Develop a `copier.yaml` file within the generator directory and outline the generator’s behavior. This file bears similarities to the root `copier.yaml` file in your template directory, but it is exclusively for the generator. The `tasks` section of the `copier.yaml` file is where you determine the generator’s behavior. Here’s an example of a generator that copies the `smart-contract` directory from the template to the current working directory:

```yaml
_tasks:
  - "echo '==== Successfully initialized new smart contract 🚀 ===='"

contract_name:
  type: str
  help: Name of your new contract.
  placeholder: 'my-new-contract'
  default: 'my-new-contract'

_templates_suffix: '.j2'
```

Note that `_templates_suffix` must be different from the `_templates_suffix` defined in the root `copier.yaml` file. This is because the generator’s `copier.yaml` file is processed separately from the root `copier.yaml` file. 3. Develop your `generator` copier content and, when ready, test it by initiating a new project from your template and executing the generator command:

```shell
algokit generate
```

This should dynamically load and display your generator as an optional `cli` command that your template users can execute.

### Recommendations

* **Modularity**: Break your templates into modular components that can be combined in different ways.
* **Documentation**: Include README files and comments in your templates to explain how they should be used.
* **Versioning**: Use `.algokit.toml` to specify the minimum compatible version of AlgoKit.
* **Testing**: Include test configurations and scripts in your templates to encourage testing best practices.
* **Linting and Formatting**: Integrate linters and code formatters in your templates to ensure code quality.
* **AlgoKit Principles**: For general principles on designing templates, refer to the AlgoKit design principles documentation.

### Conclusion

Creating custom templates in AlgoKit is a powerful way to streamline your development workflow for Algorand smart contracts using Python or TypeScript. Leveraging Copier and Jinja for templating and incorporating best practices for modularity, documentation, and coding standards can result in robust, flexible, and user-friendly templates that can be valuable to your projects and the broader Algorand community. Happy coding!
# Algorand Python Language Server
The Algorand Python language extension brings language server-powered capabilities to your smart contract authoring experience in Visual Studio Code. It extends the results from your installed Python language server to provide Algorand Python-specific diagnostics and code actions. This tutorial demonstrates how to set up and use the Algorand Python extension to identify and resolve common issues early in your development workflow. We’ll walk through identifying and fixing bugs in an Algorand Python smart contract using the extension’s diagnostic features.

## Prerequisites

* VS Code 1.80.0 or higher
* Python 3.12 or higher
* PuyaPy 5.3.0 or higher
* Basic understanding of Algorand Python

Caution The Algorand Python extension is currently in **beta**. It works alongside your existing Python language server (Pylance is recommended) to provide additional Algorand-specific diagnostics and code actions for smart contract development. There is currently some latency between making code changes and seeing updated diagnostics, which will be addressed in a future update.

## Step 1: Install the Extension

Install the Algorand Python language extension from the VSCode Marketplace:

1. Open the Extensions view in VSCode (`Ctrl+Shift+X` or `Cmd+Shift+X`)
2. Search for `Algorand Python`
3. Click `Install` on the extension published by the Algorand Foundation

Alternatively, install it directly from the marketplace listing.

## Step 2: Set Up Your Project

### Initialize an AlgoKit Project

If you’re starting a new project, use AlgoKit to generate a Python smart contract project:

```bash
algokit init
```

Select options for a Python smart contract project from the interactive prompts. If you haven’t installed AlgoKit, follow the AlgoKit installation instructions. Create your first AlgoKit project

### Install PuyaPy

The extension requires PuyaPy version `5.3.0` or higher.
Install it as a dev dependency in your project’s virtual environment: ```bash # Activate your virtual environment first pip install puyapy ``` We recommend installing PuyaPy in your project’s virtual environment to ensure the extension can automatically discover it. To check the version use: ```bash puyapy --version ``` It should display `5.3.0` or higher. ## Step 3: Enable the Language Server For new AlgoKit projects, the language server is enabled by default. For existing projects, you need to enable it manually: 1. Open your workspace settings (`File` > `Preferences` > `Settings` or `Cmd+,`) 2. Search for **algorandPython.languageServer.enable** 3. Check the box to enable the language server Alternatively, add this to your `.vscode/settings.json`: ```json { "algorandPython.languageServer.enable": true } ``` To see detailed information about what the language server is doing: 1. Open the `Output panel` (`View` > `Output` or `Ctrl+Shift+U`) 2. Select `Algorand Python Language Server` from the dropdown 3. Review logs for diagnostics and extension activity  ## Step 4: Create an Example Smart Contract Let’s create a simple contract with a deliberate bug to demonstrate the extension’s capabilities. Replace the Hello World contract with the below contract: This contract contains an intentional bug when updating the `votes` BoxMap in the `cast_vote` function. ## Step 5: Observe Real-Time Diagnostics Once you save the file, the Algorand Python extension will analyze your code. You should see a red squiggly line under `current_votes` in the if condition. The extension will display the error `mutable reference to ARC-4-encoded value must be copied using .copy() when being assigned to another variable` in the contract when you hover over the red line.  ## Step 6: Apply Quick Fixes The extension also provides quick fixes for the issue. Look for the lightbulb icon (💡) that appears. It suggests the fix `💡 Add .copy()`. Click on the suggestion to add the fix.  
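The diagnostic in Steps 5 and 6 exists because assigning an ARC-4 mutable value creates a second reference to the same underlying data rather than an independent value. This mirrors ordinary Python reference semantics for mutable objects. A plain-Python analogy (illustrative only; this is not Algorand Python code):

```python
# A dict of lists stands in for a BoxMap of mutable vote tallies.
votes = {"proposal-1": [1, 0]}

# Bug: this creates a second reference to the SAME mutable list.
current_votes = votes["proposal-1"]
current_votes[0] += 1
print(votes["proposal-1"])  # [2, 0] -- the stored "box" value changed too

# Fix: take an explicit copy, as the extension's quick fix suggests with .copy().
votes = {"proposal-1": [1, 0]}
current_votes = votes["proposal-1"].copy()
current_votes[0] += 1
print(votes["proposal-1"])  # [1, 0] -- the stored value is untouched
```

In Algorand Python the compiler turns this silent aliasing into a hard error, because on the AVM an unintended alias can corrupt persisted state.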
## Step 7: Fixed Smart Contract Based on the extension’s diagnostics, your contract should now be updated as follows to address the identified issue: ## Step 8: Verify the Fixes After applying the fixes, verify that warnings and errors have cleared in the Problems Panel. The extension will continue to provide real-time feedback as you progress in your development.  ## Configuration Options The extension provides additional configuration options for customizing your experience: ### Enable/Disable Language Server ```json { "algorandPython.languageServer.enable": true } ``` ### Log Level Adjust the verbosity of messages in the Output panel: ```json { "algorandPython.languageServer.logLevel": "info" } ``` Available levels: `error`, `warning`, `info`, `debug` ### Debounce Interval Configure the delay between code changes and diagnostic updates: ```json { "algorandPython.languageServer.debounceInterval": 500 } ``` Value in milliseconds. Lower values provide faster feedback but may impact performance. ## Troubleshooting If the extension isn’t working as expected: ### Extension Not Providing Diagnostics 1. Verify the extension is installed and enabled: * Check Extensions view for `Algorand Python` * Ensure it shows as `Enabled` 2. Confirm both extensions are installed: * Python extension for Visual Studio Code * Algorand Python extension 3. Verify the language server is enabled: * Check workspace settings for `algorandPython.languageServer.enable` * Should be set to `true` 4. Verify PuyaPy installation: ```bash pip show puyapy ``` * Ensure version `5.3.0` or higher is installed * Confirm it’s in the same virtual environment VS Code is using ### Check Python Interpreter Make sure VS Code is using the correct Python interpreter: 1. Click on the Python version in the status bar (bottom right) 2. Select the interpreter from your project’s virtual environment 3. 
The interpreter should have PuyaPy installed ### File Not Recognized The extension activates for `.py` files in Algorand Python projects. Ensure: * Your file is a Python file with `.py` extension * The file contains Algorand Python imports (e.g., `from algopy import ...`) ### Check Language Server Logs 1. Open Output panel (`View` > `Output`) 2. Select `Algorand Python Language Server` from the dropdown 3. Look for error messages or warnings 4. Set log level to `debug` for more detailed information ## Summary In this tutorial, we covered: * Installing and configuring the Algorand Python language extension * Setting up a project * Using real-time diagnostics to identify issues * Applying quick fixes to resolve common problems * Troubleshooting common extension issues The Algorand Python extension provides valuable assistance throughout your development process, helping you write more correct and robust smart contracts by catching issues early and suggesting improvements as you code. ## Next Steps Learn Algorand Python concepts, write and test smart contracts, debug with AVM Debugger, and follow best practices. Explore Algorand Python concepts Learn about unit testing Python contracts Try the AVM Debugger
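One closing footnote for this chapter: the `debounceInterval` setting described under Configuration Options follows the classic debounce pattern, deferring analysis until edits pause. A minimal Python sketch of the idea (hypothetical; this is not the extension's implementation):

```python
import threading
import time

class Debouncer:
    """Run `fn` only after `interval` seconds with no further calls."""

    def __init__(self, interval: float, fn):
        self.interval = interval
        self.fn = fn
        self._timer: threading.Timer | None = None

    def trigger(self) -> None:
        # Each new "keystroke" cancels the pending run and restarts the clock.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.interval, self.fn)
        self._timer.start()

results = []
d = Debouncer(0.05, lambda: results.append("diagnostics refreshed"))
for _ in range(3):  # three rapid edits in quick succession
    d.trigger()
time.sleep(0.2)
print(results)  # only the last trigger fires: ['diagnostics refreshed']
```

This is why lowering the interval gives faster feedback but re-runs analysis more often, which can impact performance.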
# Algorand TypeScript Language Server
The Algorand TypeScript language extension brings language server-powered capabilities to your smart contract authoring experience in Visual Studio Code. It extends the results from your installed TypeScript language server to provide Algorand TypeScript-specific diagnostics and code actions. This tutorial demonstrates how to set up and use the Algorand TypeScript extension to identify and resolve common issues early in your development workflow. We’ll walk through identifying and fixing bugs in an Algorand TypeScript smart contract using the extension’s diagnostic features.

## Prerequisites

* VS Code 1.80.0 or higher
* puya-ts 1.0.1 or higher
* Basic understanding of Algorand TypeScript

Caution The Algorand TypeScript extension is currently in **beta**. It works alongside your existing TypeScript language server to provide additional Algorand-specific diagnostics and code actions for smart contract development.

## Step 1: Install the Extension

Install the Algorand TypeScript language extension from the VSCode Marketplace:

1. Open the Extensions view in VSCode (`Ctrl+Shift+X` or `Cmd+Shift+X`)
2. Search for `Algorand TypeScript`
3. Click `Install` on the extension published by the Algorand Foundation

Alternatively, install it directly from the marketplace listing.

## Step 2: Set Up Your Project

### Initialize an AlgoKit Project

If you’re starting a new project, use AlgoKit to generate a TypeScript smart contract project:

```bash
algokit init
```

Select options for a TypeScript smart contract project from the interactive prompts. If you haven’t installed AlgoKit, follow the AlgoKit installation instructions. Create your first AlgoKit project

### Install puya-ts

The extension requires `puya-ts` version `1.0.1` or higher. Install it as a dev dependency in your project:

```bash
npm install --save-dev @algorandfoundation/puya-ts
```

## Step 3: Enable the Language Server

For new AlgoKit projects, the language server is enabled by default. For existing projects, you need to enable it manually: 1.
Open your workspace settings (`File` > `Preferences` > `Settings` or `Cmd+,`) 2. Search for `algorandTypeScript.languageServer.enable` 3. Check the box to enable the language server Alternatively, add this to your `.vscode/settings.json`: ```json { "algorandTypeScript.languageServer.enable": true } ``` To see detailed information about what the language server is doing: 1. Open the `Output panel` (`View` > `Output` or `Ctrl+Shift+U`) 2. Select `Algorand TypeScript Language Server` from the dropdown 3. Review logs for diagnostics and extension activity  ## Step 4: Create an Example Smart Contract Let’s create a simple contract with a deliberate bug to demonstrate the extension’s capabilities. Replace the Hello World contract with the below contract: This contract contains an intentional bug when updating the `current_vote` in the `cast_vote` function. ## Step 5: Observe Real-Time Diagnostics Once you save the file, the Algorand TypeScript extension will analyze your code. You should see a red squiggly line under `current_votes` in the if condition. The extension will display the error `cannot create multiple references to a mutable stack type, the value must be copied using clone(...) when being assigned to another variable` in the contract when you hover over the red line.  ## Step 6: Apply Quick Fixes The extension also provides quick fixes for the issue. Look for the lightbulb icon (💡) that appears. It suggests the fix `💡 Wrap expression in clone(...)`. Click on the suggestion to add the fix.  ## Step 7: Fix the Smart Contract Based on the extension’s diagnostics, your contract should now be updated as follows to address the identified issue: ## Step 8: Verify the Fixes After applying the fixes, verify that warnings and errors have cleared in the Problems Panel. The extension will continue to provide real-time feedback as you progress in your development.  
## Configuration Options The extension provides additional configuration options for customizing your experience: ### Log Level Adjust the verbosity of messages in the Output panel: ```json { "algorandTypeScript.languageServer.logLevel": "info" } ``` Available levels: `off`, `error`, `warn`, `info`, `debug`, `trace` ## Troubleshooting If the extension isn’t working as expected: ### Extension Not Providing Diagnostics 1. Verify the extension is installed and enabled: * Check Extensions view for `Algorand TypeScript` * Ensure it shows as `Enabled` 2. Confirm the language server is enabled: * Check workspace settings for `algorandTypeScript.languageServer.enable` * Should be set to `true` 3. Verify puya-ts installation: ```bash npm list @algorandfoundation/puya-ts ``` * Ensure version `1.0.1` or higher is installed ### File Not Recognized The extension only activates for `.algo.ts` files. Ensure your smart contract files use this extension. ### Conflicting Diagnostics If you see duplicate or conflicting messages: * The extension works alongside the standard TypeScript language server * Some messages come from TypeScript, others from the Algorand extension * Both sets of diagnostics are valuable for different aspects of your code ### Check Language Server Logs 1. Open Output panel (`View` > `Output`) 2. Select `Algorand TypeScript Language Server` from dropdown 3. Look for error messages or warnings 4. Set log level to `debug` for more detailed information ## Summary In this tutorial, we covered: * Installing and configuring the Algorand TypeScript language extension * Setting up a project with puya-ts * Using real-time diagnostics to identify issues * Applying quick fixes to resolve common problems * Troubleshooting common extension issues The Algorand TypeScript extension provides valuable assistance throughout your development process, helping you write more correct and robust smart contracts by catching issues early and suggesting improvements as you code. 
## Next Steps Learn Algorand TypeScript concepts, write and test smart contracts, debug with AVM Debugger, and follow best practices. Explore Algorand TypeScript concepts Learn about unit testing TypeScript contracts Try the AVM Debugger
# PuyaPy compiler
The PuyaPy compiler is a multi-stage, optimising compiler that takes Algorand Python and prepares it for execution on the AVM. PuyaPy ensures the resulting AVM bytecode has execution semantics that match the given Python code. PuyaPy produces output (among other formats) that is directly compatible with AlgoKit, making deployment and calling easy. The PuyaPy compiler is based on the Puya compiler architecture, which allows multiple frontend languages to leverage the majority of the compiler logic, so adding new frontend languages for execution on Algorand is relatively easy.

## Compiler installation

The minimum supported Python version for running the PuyaPy compiler is 3.12. There are three ways of installing the PuyaPy compiler:

1. You can install the AlgoKit CLI and then use the `algokit compile py` command.
2. You can install the PuyaPy compiler into your project and thus lock the compiler version for that project:

```shell
pip install puyapy
# OR
poetry add puyapy --group=dev
```

Note: if you do this, then when you use `algokit compile py` within that project directory it will invoke the installed compiler rather than a global one.

3. You can install the compiler globally using pipx:

```shell
pipx install puyapy
```

If you just want to play with some examples, you can clone the repo and have a poke around:

```shell
git clone https://github.com/algorandfoundation/puya.git
cd puya
poetry install
poetry shell
# compile the "Hello World" example
puyapy examples/hello_world
```

## Using the compiler

To check that you can run the compiler successfully after installation, you can run the help command:

```default
puyapy -h
# OR
algokit compile py -h
```

To compile a contract or contracts, just supply the path(s) - either to the .py files themselves, or the containing directories.
In the case of containing directories, any (non-abstract) contracts discovered therein will be compiled, allowing you to compile multiple contracts at once. You can also supply more than one path at a time to the compiler. E.g. either `puyapy my_project/` or `puyapy my_project/contract.py` will work to compile a single contract.

## Type checking

The first and second steps of the compilation pipeline are significant to note, because that is where type checking is performed. We leverage MyPy to do this, so we recommend that you install and use the latest version of MyPy in your development environment to get the best typing information that aligns with what the PuyaPy compiler expects. This should work with standard Python tooling, e.g. with Visual Studio Code, PyCharm, et al. The easiest way to get a productive development environment with Algorand Python is to instantiate a template with AlgoKit via `algokit init -t python`. This will give you a full development environment with intellisense, linting, automatic formatting, breakpoint debugging, deployment and CI/CD. Alternatively, you can construct your own environment by configuring MyPy, Ruff, etc. with the same configuration files. The MyPy config that PuyaPy uses can be found in the PuyaPy repository.

## Compiler usage

The options available for the compiler can be seen by executing `puyapy -h` or `algokit compile py -h`:

```default
Usage: puyapy [ARGS] [OPTIONS]

PuyaPy compiler for compiling Algorand Python to TEAL

╭─ Commands ─────────────────────────────────────────────────────────────────────────────╮
│ --help    -h  Display this message and exit.                                           │
│ --version     Display application version.
│ ╰────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Arguments ────────────────────────────────────────────────────────────────────────────╮ │ * PATH Files or directories to compile [required] │ ╰────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Outputs ──────────────────────────────────────────────────────────────────────────────╮ │ Options for controlling what is output and where │ │ │ │ --out-dir Path for outputting artefacts │ │ --log-level Minimum level to log to console [choices: │ │ notset, debug, info, warning, error, critical] │ │ [default: info] │ │ --output-teal --no-output-teal Output TEAL code [default: True] │ │ --output-source-map Output debug source maps [default: True] │ │ --no-output-source-map │ │ --output-arc56 --no-output-arc56 Output {contract}.arc56.json ARC-56 app spec │ │ file [default: True] │ │ --output-arc32 --no-output-arc32 Output {contract}.arc32.json ARC-32 app spec │ │ file [default: False] │ │ --output-bytecode Output AVM bytecode [default: False] │ │ --no-output-bytecode │ │ --output-client Output Algorand Python contract client for typed │ │ --no-output-client ARC-4 ABI calls [default: False] │ │ --debug-level -g Output debug information level, 0 = none, 1 = │ │ debug, 2 = reserved for future use [choices: 0, │ │ 1, 2] [default: 1] │ ╰────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Compilation ──────────────────────────────────────────────────────────────────────────╮ │ Options that affect the compilation process, such as optimisation options etc. 
│ │ │ │ --optimization-level -O Set optimization level of output TEAL / AVM bytecode │ │ [choices: 0, 1, 2] [default: 1] │ │ --target-avm-version Target AVM version [choices: 10, 11, 12, 13] │ │ [default: 11] │ │ --resource-encoding If "index", then resource types (Application, Asset, │ │ Account) in ABI methods should be passed as an index │ │ into their appropriate foreign array. The default │ │ option "value", as of PuyaPy 5.0, means these values │ │ will be passed directly. [choices: index, value] │ │ [default: value] │ │ --locals-coalescing-strategy Strategy choice for out-of-ssa local variable │ │ coalescing. The best choice for your app is best │ │ determined through experimentation [choices: │ │ root-operand, root-operand-excluding-args, │ │ aggressive] [default: root-operand] │ │ --validate-abi-args Validates ABI transaction arguments by ensuring they │ │ --no-validate-abi-args are encoded correctly [default: True] │ │ --validate-abi-return Validates encoding of ABI return values when using │ │ --no-validate-abi-return .from_log(), arc4.abi_call, arc4.arc4_create and │ │ arc4.arc4_update [default: True] │ ╰────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Templating ───────────────────────────────────────────────────────────────────────────╮ │ Options for controlling the generation of TEAL template files │ │ │ │ --template-var -T Define template vars for use when assembling via │ │ --output-bytecode. Should be specified without the prefix │ │ (see --template-vars-prefix), e.g. -T SOME_INT=1234 │ │ SOME_BYTES=0x1A2B SOME_BOOL=True -T SOME_STR="hello" │ │ --template-vars-prefix Define the prefix to use with --template-var [default: │ │ TMPL_] │ ╰────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Additional outputs ───────────────────────────────────────────────────────────────────╮ │ Controls additional compiler outputs that may be useful to compiler developers. 
│                                                                                        │
│ --output-awst                     Output parsed result of AWST [default: False]        │
│ --output-awst-json                Output parsed result of AWST as JSON [default:       │
│                                   False]                                               │
│ --output-source-annotations-json  Output source annotations result of AWST parse as    │
│                                   JSON [default: False]                                │
│ --output-ssa-ir                   Output IR (in SSA form) before optimizations         │
│                                   [default: False]                                     │
│ --output-optimization-ir          Output IR after each optimization [default: False]   │
│ --output-destructured-ir          Output IR after SSA destructuring and before MIR     │
│                                   [default: False]                                    │
│ --output-memory-ir                Output MIR before lowering to TEAL [default: False]  │
│ --output-teal-intermediates       Output TEAL before peephole optimization and before  │
│                                   block optimization [default: False]                  │
│ --output-op-statistics            Output statistics about ops used for each program    │
│                                   compiled [default: False]                            │
╰────────────────────────────────────────────────────────────────────────────────────────╯
```

### Defining template values

Template variables, declared in Algorand Python via `algopy.TemplateVar`, can be replaced with literal values during compilation to bytecode using the `--template-var` option. Additionally, Algorand Python functions that create AVM bytecode, such as those used to compile contracts and logic signatures at runtime, can also provide the specified values.

#### Examples of Variable Definitions

The table below illustrates how different variables and values can be defined:

| Variable Type | Example Algorand Python                 | Value definition example |
| ------------- | --------------------------------------- | ------------------------ |
| `UInt64`      | `algopy.TemplateVar[UInt64]("SOME_INT")` | `SOME_INT=1234`          |
| `Bytes`       | `algopy.TemplateVar[Bytes]("SOME_BYTES")` | `SOME_BYTES=0x1A2B`     |
| `String`      | `algopy.TemplateVar[String]("SOME_STR")` | `SOME_STR="hello"`       |

All template values specified via the command line are prefixed with “TMPL\_” by default.
The default prefix can be modified using the `--template-vars-prefix` option. ## Sample `pyproject.toml` A sample `pyproject.toml` file with known good configuration is: ```ini [tool.poetry] name = "algorand_python_contract" version = "0.1.0" description = "Algorand smart contracts" authors = ["Name "] readme = "README.md" [tool.poetry.dependencies] python = "^3.12" algokit-utils = "^2.2.0" python-dotenv = "^1.0.0" algorand-python = "^1.0.0" [tool.poetry.group.dev.dependencies] black = { extras = ["d"], version = "*" } ruff = "^0.1.6" mypy = "*" pytest = "*" pytest-cov = "*" pip-audit = "*" pre-commit = "*" puyapy = "^1.0" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" [tool.ruff] line-length = 120 select = [ "E", "F", "ANN", "UP", "N", "C4", "B", "A", "YTT", "W", "FBT", "Q", "RUF", "I", ] ignore = [ "ANN101", # no type for self "ANN102", # no type for cls ] unfixable = ["B", "RUF"] [tool.ruff.flake8-annotations] allow-star-arg-any = true suppress-none-returning = true [tool.pytest.ini_options] pythonpath = ["smart_contracts", "tests"] [tool.mypy] files = "smart_contracts/" python_version = "3.12" disallow_any_generics = true disallow_subclassing_any = true disallow_untyped_calls = true disallow_untyped_defs = true disallow_incomplete_defs = true check_untyped_defs = true disallow_untyped_decorators = true warn_redundant_casts = true warn_unused_ignores = true warn_return_any = true strict_equality = true strict_concatenate = true disallow_any_unimported = true disallow_any_expr = true disallow_any_decorated = true disallow_any_explicit = true ```
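As a closing note on template variables: conceptually, each `-T NAME=value` definition passed on the command line is matched to a `TemplateVar` reference after the prefix (default `TMPL_`) is applied. A rough sketch of that mapping (a hypothetical helper, not the compiler's actual code):

```python
def resolve_template_vars(definitions: list[str], prefix: str = "TMPL_") -> dict[str, str]:
    """Turn CLI-style NAME=value pairs into prefixed template variables,
    mimicking how --template-var / --template-vars-prefix combine."""
    resolved: dict[str, str] = {}
    for definition in definitions:
        name, _, value = definition.partition("=")
        resolved[prefix + name] = value
    return resolved

print(resolve_template_vars(["SOME_INT=1234", "SOME_STR=hello"]))
# {'TMPL_SOME_INT': '1234', 'TMPL_SOME_STR': 'hello'}
print(resolve_template_vars(["SOME_INT=1234"], prefix="MY_"))
# {'MY_SOME_INT': '1234'}
```

So `-T SOME_INT=1234` satisfies `algopy.TemplateVar[UInt64]("SOME_INT")`, because the `TMPL_` prefix is supplied by the CLI, not written in your contract code.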
# Language Guide
Algorand Python is conceptually two things:

1. A partial implementation of the Python programming language that runs on the AVM.
2. A framework for development of Algorand smart contracts and logic signatures, with Pythonic interfaces to underlying AVM functionality.

You can install the Algorand Python types from PyPI:

> `pip install algorand-python`

or

> `poetry add algorand-python`

***

As a partial implementation of the Python programming language, it maintains the syntax and semantics of Python. The subset of the language that is supported will grow over time, but it will never be a complete implementation due to the restricted nature of the AVM as an execution environment. As a trivial example, the `async` and `await` keywords, and all associated features, do not make sense to implement.

Being a partial implementation of Python means that existing developer tooling such as IDE syntax highlighting, static type checkers, linters, and auto-formatters works out-of-the-box. This is in contrast to approaches to smart contract development that add or alter language elements or semantics, which then require custom developer tooling support and, more importantly, require the developer to learn and understand the potentially non-obvious differences from regular Python.

The greatest advantage of maintaining semantic and syntactic compatibility, however, is only realised in combination with the framework approach. Supplying a set of interfaces representing the required smart contract development and AVM functionality allows for the possibility of implementing those interfaces in pure Python. This will make it possible in the near future to execute tests against your smart contracts without deploying them to Algorand, and even to step into and break-point debug your code from those tests.

The framework provides interfaces to the underlying AVM types and operations.
By virtue of the AVM being statically typed, these interfaces are also statically typed, and require your code to be as well. The most basic types on the AVM are `uint64` and `bytes[]`, representing unsigned 64-bit integers and byte arrays respectively. These are represented by `UInt64` and `Bytes` in Algorand Python.

There are further “bounded” types supported by the AVM which are backed by these two simple primitives. For example, `bigint` represents a variably sized (up to 512-bit) unsigned integer, but is actually backed by a `bytes[]`. This is represented by `BigUInt` in Algorand Python.

Unfortunately, none of these types map to standard Python primitives. In Python, an `int` is signed and effectively unbounded. A `bytes` similarly is limited only by the available memory, whereas an AVM `bytes[]` has a maximum length of 4096. In order to both maintain semantic compatibility and allow for a framework implementation in plain Python that will fail under the same conditions as when deployed to the AVM, support for Python primitives is limited.

For more information on the philosophy and design of Algorand Python, please see the project documentation. If you aren't familiar with Python, a good place to start before continuing below is the official Python tutorial. Just beware that, as mentioned above, only a subset of the Python language is supported.
# ARC-28: Structured event logging
ARC-28 provides a methodology for structured logging by Algorand smart contracts. It introduces the concept of Events, where data contained in logs may be categorized and structured. Each Event is identified by a unique 4-byte identifier derived from its `Event Signature`. The Event Signature is a UTF-8 string comprised of the event's name, followed by the names of the data types contained in the event, all enclosed in parentheses (`EventName(type1,type2,...)`), e.g.:

```default
Swapped(uint64,uint64)
```

Events are emitted by including them in the log output of the smart contract. The metadata that identifies the event should then be included in the ARC-4 contract output so that a calling client can parse the structured data out of the logs. This part of the ARC-28 spec isn't yet implemented in Algorand Python, but it's on the roadmap.

## Emitting Events

To emit an ARC-28 event in Algorand Python you can use the `emit` function, which lives in the `algopy.arc4` namespace for convenience, since it heavily uses ARC-4 types and is essentially an extension of the ARC-4 specification.
This function takes care of encoding the event payload to conform to the ARC-28 specification, and there are three overloads:

* An ARC-4 struct instance, from which the name of the struct will be used as the event name and the struct fields will be used as the event fields - `arc4.emit(Swapped(a, b))`
* An event signature as a string, followed by the values - `arc4.emit("Swapped(uint64,uint64)", a, b)`
* An event name as a string, followed by the values - `arc4.emit("Swapped", a, b)`

Here's an example contract that emits events:

```python
from algopy import ARC4Contract, arc4


class Swapped(arc4.Struct):
    a: arc4.UInt64
    b: arc4.UInt64


class EventEmitter(ARC4Contract):
    @arc4.abimethod
    def emit_swapped(self, a: arc4.UInt64, b: arc4.UInt64) -> None:
        arc4.emit(Swapped(b, a))
        arc4.emit("Swapped(uint64,uint64)", b, a)
        arc4.emit("Swapped", b, a)
```

It's worth noting that the ARC-28 event signature needs to be known at compile time, so the event name can't be a dynamic value and must be a static string literal or string module constant. If you want to emit dynamic events you can do so using the `log` function, but you'd need to manually construct the correct series of bytes, and the compiler won't be able to emit the ARC-28 metadata, so you'll also need to manually parse the logs in your client. Examples of manually constructing an event:

```python
# This is essentially what the `emit` method is doing, noting that a,b need to be encoded
# as a tuple, so the below (simple concat) only works for static ARC-4 types
log(arc4.arc4_signature("Swapped(uint64,uint64)"), a, b)

# or, if you wanted it to be truly dynamic for some reason
# (noting this has a non-trivial opcode cost), and assuming in this case
# that `event_suffix` is already defined as a `String`:
event_name = String("Event") + event_suffix
event_selector = op.sha512_256((event_name + "(uint64)").bytes)[:4]
log(event_selector, UInt64(6))
```
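The 4-byte event identifier is the first 4 bytes of the SHA-512/256 hash of the event signature string, the same hashing scheme used for ARC-4 method selectors. Off-chain, a client can reproduce this in plain Python with the standard `hashlib` module (a rough sketch, not algopy; the `event_selector` helper name is my own, and `sha512_256` availability depends on the OpenSSL build backing `hashlib`):

```python
import hashlib


# Plain-Python sketch of ARC-28 event selector derivation: the first 4 bytes
# of the SHA-512/256 hash of the UTF-8 encoded event signature string.
def event_selector(signature: str) -> bytes:
    digest = hashlib.new("sha512_256", signature.encode("utf-8")).digest()
    return digest[:4]


selector = event_selector("Swapped(uint64,uint64)")
assert len(selector) == 4
# Deterministic: the same signature always yields the same selector
assert selector == event_selector("Swapped(uint64,uint64)")
```

A client parsing logs would compare the first 4 bytes of each log entry against the selectors of the events it knows about, then decode the remainder as the ARC-4 encoded event payload.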
# ARC-4: Application Binary Interface
ARC-4 defines a set of encodings and behaviors for authoring and interacting with an Algorand smart contract. It is not the only way to author a smart contract, but adhering to it will make it easier for other clients and users to interoperate with your contract.

To author an ARC-4 contract, extend the `ARC4Contract` base class.

```python
from algopy import ARC4Contract


class HelloWorldContract(ARC4Contract):
    ...
```

## ARC-32 and ARC-56

ARC-32, and its successor ARC-56, extend the concepts in ARC-4 to include an Application Specification which more holistically describes a smart contract and its associated state. ARC-32/ARC-56 Application Specification files are automatically generated by the compiler for ARC-4 contracts as `.arc32.json` or `.arc56.json`.

## Methods

Individual methods on a smart contract should be annotated with the `abimethod` decorator. This decorator indicates a method that should be externally callable. The decorator itself includes properties to restrict when the method should be callable, for instance only when the application is being created or only when the OnComplete action is OptIn. A method that should not be externally available should be annotated with the `subroutine` decorator.

Method docstrings are used when outputting ARC-32 or ARC-56 application specifications; the ReST, Google, Numpydoc and Epydoc docstring styles are supported.

```python
from algopy import ARC4Contract, subroutine, arc4


class HelloWorldContract(ARC4Contract):
    @arc4.abimethod(create=False, allow_actions=["NoOp", "OptIn"], name="external_name")
    def hello(self, name: arc4.String) -> arc4.String:
        return self.internal_method() + name

    @subroutine
    def internal_method(self) -> arc4.String:
        return arc4.String("Hello, ")
```

## Router

Algorand smart contracts have only two possible programs that are invoked when making an ApplicationCall transaction (`appl`).
The “clear state” program is called when using an OnComplete action of `ClearState`, and the “approval” program is called for all other OnComplete actions. Routing is required to dispatch calls handled by the approval program to the relevant ABI methods. When extending `ARC4Contract`, the routing code is automatically generated for you by the PuyaPy compiler.

## Types

ARC-4 defines a number of encoded types which can be used in an ARC-4 compatible contract and details how these types should be encoded in binary. Algorand Python exposes these through a number of types which can be imported from the `algopy.arc4` module. These types represent binary encoded values following the rules prescribed in the ARC, which can mean operations performed directly on these types are not as efficient as ones performed on natively supported types (such as `algopy.UInt64` or `algopy.Bytes`).

Where supported, the native equivalent of an ARC-4 type can be obtained via the `.native` property. It is possible to use native types in an ABI method, and the router will automatically encode and decode these types to their ARC-4 equivalents.

### Booleans

**Type:** `algopy.arc4.Bool`\
**Encoding:** A single byte where the most significant bit is `1` for `True` and `0` for `False`\
**Native equivalent:** `builtins.bool`

### Unsigned ints

**Types:** `algopy.arc4.UIntN` (<= 64 bits) and `algopy.arc4.BigUIntN` (> 64 bits)\
**Encoding:** A big endian byte array of N bits\
**Native equivalent:** `algopy.UInt64` or `algopy.BigUInt`

Common bit sizes have also been aliased under `algopy.arc4.UInt8`, `algopy.arc4.UInt16` etc. A uint of any size between 8 and 512 bits (in intervals of 8 bits) can be created using a generic parameter. It can be helpful to define your own alias for this type.
```python
import typing as t

from algopy import arc4

UInt40: t.TypeAlias = arc4.UIntN[t.Literal[40]]
```

### Unsigned fixed point decimals

**Types:** `algopy.arc4.UFixedNxM` (<= 64 bits) and `algopy.arc4.BigUFixedNxM` (> 64 bits)\
**Encoding:** A big endian byte array of N bits where `encoded_value = value * (10^M)`\
**Native equivalent:** *none*

```python
import typing as t

from algopy import arc4

Decimal: t.TypeAlias = arc4.UFixedNxM[t.Literal[64], t.Literal[10]]
```

### Bytes and strings

**Types:** `algopy.arc4.DynamicBytes` and `algopy.arc4.String`\
**Encoding:** A variable length byte array prefixed with a 16-bit big endian header indicating the length of the data\
**Native equivalent:** `algopy.Bytes` and `algopy.String`

Strings are assumed to be utf-8 encoded, and the length of a string is the total number of bytes, *not the total number of characters*.

### Static arrays

**Type:** `algopy.arc4.StaticArray`\
**Encoding:** See the container packing rules below\
**Native equivalent:** *none*

An ARC-4 StaticArray is an array of a fixed size. The item type is specified by the first generic parameter and the size is specified by the second.

```python
import typing as t

from algopy import arc4

FourBytes: t.TypeAlias = arc4.StaticArray[arc4.Byte, t.Literal[4]]
```

### Address

**Type:** `algopy.arc4.Address`\
**Encoding:** A byte array 32 bytes long\
**Native equivalent:** `algopy.Account`

Address represents an Algorand address's public key, and can be used instead of `algopy.Account` when needing to reference an address in an ARC-4 struct, tuple or return type. It is a subclass of `arc4.StaticArray[arc4.Byte, typing.Literal[32]]`.

### Dynamic arrays

**Type:** `algopy.arc4.DynamicArray`\
**Encoding:** See the container packing rules below\
**Native equivalent:** *none*

An ARC-4 DynamicArray is an array of a variable size. The item type is specified by the first generic parameter. Items can be added and removed via `.pop`, `.append`, and `.extend`.
The current length of the array is encoded in a 16-bit prefix, similar to the `arc4.DynamicBytes` and `arc4.String` types.

```python
import typing as t

from algopy import arc4

UInt64Array: t.TypeAlias = arc4.DynamicArray[arc4.UInt64]
```

### Tuples

**Type:** `algopy.arc4.Tuple`\
**Encoding:** See the container packing rules below\
**Native equivalent:** `builtins.tuple`

ARC-4 Tuples are immutable, statically sized arrays of mixed item types. Item types can be specified via generic parameters or inferred from constructor parameters.

### Structs

**Type:** `algopy.arc4.Struct`\
**Encoding:** See the container packing rules below\
**Native equivalent:** `typing.NamedTuple`

ARC-4 Structs are named tuples. The class keyword `frozen` can be used to indicate whether a struct can be mutated. Items can be accessed and mutated via names instead of indexes. Structs do not have a `.native` property, but a NamedTuple can be used in ABI methods and will be encoded/decoded to an ARC-4 struct automatically.

```python
import typing

from algopy import arc4

Decimal: typing.TypeAlias = arc4.UFixedNxM[typing.Literal[64], typing.Literal[9]]


class Vector(arc4.Struct, kw_only=True, frozen=True):
    x: Decimal
    y: Decimal
```

### ARC-4 Container Packing

ARC-4 encoding rules are detailed explicitly in the ARC-4 specification; a summary is included here.

Containers are composed of a head and a tail portion:

* For dynamic arrays, the head is prefixed with the length of the array encoded as a 16-bit number. This prefix is not included in offset calculations
* For fixed sized items (eg. Bool, UIntN, or a StaticArray of UIntN), the item is included in the head
* Consecutive Bool items are compressed into the minimum number of whole bytes possible, using a single bit to represent each Bool
* For variable sized items (eg. DynamicArray, String etc.), a pointer is included in the head and the data is added to the tail. This pointer represents the offset from the start of the head to the start of the item data in the tail
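The head/tail rule can be sketched in plain Python for a `(uint64,string)` tuple (an illustrative sketch, not algopy or puyapy code; the helper names are invented for this example):

```python
# Plain-Python sketch of ARC-4 head/tail packing for the tuple (uint64, string).
def encode_uint64(value: int) -> bytes:
    # Fixed-size item: an 8-byte big-endian integer, stored entirely in the head
    return value.to_bytes(8, "big")


def encode_string(value: str) -> bytes:
    # Dynamic item: utf-8 bytes prefixed with a 16-bit big-endian length
    data = value.encode("utf-8")
    return len(data).to_bytes(2, "big") + data


def encode_uint64_string_tuple(number: int, text: str) -> bytes:
    # Head: 8 bytes for the uint64, plus a 2-byte offset pointer for the string.
    # The pointer's value is the offset from the start of the head to the
    # string's data in the tail, i.e. the total head size of 10 bytes.
    head_size = 8 + 2
    head = encode_uint64(number) + head_size.to_bytes(2, "big")
    tail = encode_string(text)
    return head + tail


encoded = encode_uint64_string_tuple(1, "hi")
# head: 8-byte uint64 then offset 10; tail: 2-byte length then b"hi"
assert encoded == bytes([0, 0, 0, 0, 0, 0, 0, 1, 0, 10, 0, 2]) + b"hi"
```

The fixed-size `uint64` lives wholly in the head, while the dynamic `string` contributes only a 2-byte pointer to the head, with its length-prefixed data appended to the tail.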
### Reference types

**Types:** `algopy.Account`, `algopy.Application`, `algopy.Asset`, `algopy.gtxn.PaymentTransaction`, `algopy.gtxn.KeyRegistrationTransaction`, `algopy.gtxn.AssetConfigTransaction`, `algopy.gtxn.AssetTransferTransaction`, `algopy.gtxn.AssetFreezeTransaction`, `algopy.gtxn.ApplicationCallTransaction`

The ARC-4 specification allows a number of reference types to be used in an ABI method signature, where the reference type refers to:

* another transaction in the group
* an account in the accounts array (`apat` property of the transaction)
* an asset in the foreign assets array (`apas` property of the transaction)
* an application in the foreign apps array (`apfa` property of the transaction)

These types can only be used as parameters, not as return types.

```python
from algopy import (
    Account,
    Application,
    ARC4Contract,
    Asset,
    arc4,
    gtxn,
)


class Reference(ARC4Contract):
    @arc4.abimethod
    def with_transactions(
        self,
        asset: Asset,
        pay: gtxn.PaymentTransaction,
        account: Account,
        app: Application,
        axfr: gtxn.AssetTransferTransaction,
    ) -> None: ...
```

### Mutability

To ensure semantic compatibility, the compiler will also check for any usages of mutable ARC-4 types (arrays and structs) and ensure that any additional references are copied using the `.copy()` method.

Python values are passed by reference, and when an object (eg. an array or struct) is mutated in one place, all references to that object see the mutated version. In Python this is managed via the heap. In Algorand Python these mutable values are instead stored on the stack, so when an additional reference is made (i.e. by assigning to another variable) a copy is added to the stack, which means that if one reference is mutated, the other references do not see the change. In order to keep the semantics the same, the compiler forces the addition of `.copy()` each time a new reference to the same object is made, to match what will happen on the AVM.
Struct types can be marked as `frozen`, which eliminates the need for `.copy()` as long as the struct contains no mutable fields (such as arrays or another mutable struct).
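The aliasing behaviour that `.copy()` guards against can be seen in ordinary Python, where mutable objects live on the heap and extra names alias the same object (a plain-Python illustration, not algopy):

```python
import copy

# Two references to one mutable object observe each other's changes.
a = [1, 2, 3]
b = a  # aliased: both names point at the same list
b.append(4)
assert a == [1, 2, 3, 4]  # a sees b's mutation

# An explicit copy, analogous to algopy's mandatory .copy(), breaks the alias.
c = copy.copy(a)
c.append(5)
assert a == [1, 2, 3, 4]  # a is unaffected by c's mutation
assert c == [1, 2, 3, 4, 5]
```

On the AVM the second behaviour (independent copies) is what naturally happens for stack values, which is why the compiler requires an explicit `.copy()` so the Python source reads the same way it executes.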
# Python builtins
Some common Python builtins have equivalent `algopy` versions that use an `algopy.UInt64` instead of a native `int`.

## len

The `len()` builtin is not supported. Instead, `algopy` types that have a length have a `.length` property of type `algopy.UInt64`. This is primarily due to `len()` always returning `int`, with the CPython implementation enforcing that it returns *exactly* `int`.

## range

The `range()` builtin has the equivalent `algopy.urange`. This behaves the same as the Python builtin except that it returns an iteration of `algopy.UInt64` values instead of `int`.

## enumerate

The `enumerate()` builtin has the equivalent `algopy.uenumerate`. This behaves the same as the Python builtin except that it returns an iteration of `algopy.UInt64` index values and the corresponding items.

## reversed

The `reversed()` builtin is supported when iterating within a `for` loop and behaves the same as the Python builtin.

## types

See the `algopy` API reference for the equivalents of Python's built-in types.
# Calling other applications
The preferred way to call other smart contracts is using `arc4.abi_call`, `arc4.arc4_create`, or `arc4.arc4_update`. These methods support type checking and encoding of arguments, decoding of results, group transactions, and, in the case of `arc4_create` and `arc4_update`, automatic inclusion of approval and clear state programs.

## `algopy.arc4.abi_call`

`abi_call` can be used to call other ARC-4 contracts. The first argument should refer to an ARC-4 method, either by referencing an Algorand Python method, an `ARC4Client` method generated from an ARC-32/ARC-56 app spec, or a string representing the ARC-4 method signature or name. The following arguments should then be the arguments required for the call; these arguments will be type checked and converted where appropriate. Any other related transaction parameters, such as `app_id`, `fee` etc., can also be provided as keyword arguments.

If the ARC-4 method returns an ARC-4 result, then the result will be a tuple of the ARC-4 result and the inner transaction. If the ARC-4 method does not return a result, or if the result type is not fully qualified, then just the inner transaction is returned.

```python
from algopy import Application, ARC4Contract, String, arc4, subroutine


class HelloWorld(ARC4Contract):
    @arc4.abimethod()
    def greet(self, name: String) -> String:
        return "Hello " + name


@subroutine
def call_existing_application(app: Application) -> None:
    greeting, greet_txn = arc4.abi_call(HelloWorld.greet, "there", app_id=app)

    assert greeting == "Hello there"
    assert greet_txn.app_id == 1234
```

### Alternative ways to use `arc4.abi_call`

#### ARC4Client method

An `ARC4Client` represents the ARC-4 abimethods of a smart contract and can be used to call those abimethods in a type safe way.

ARC4Clients can be produced by using `puyapy --output-client=True` when compiling a smart contract (this is useful if you want to publish a client for consumption by other smart contracts). An ARC4Client can also be generated from an ARC-32/ARC-56 application.json using `puyapy-clientgen`, e.g.
`puyapy-clientgen examples/hello_world_arc4/out/HelloWorldContract.arc56.json`. This is the recommended approach for calling another smart contract that is not written in Algorand Python or does not provide its source.

```python
from algopy import arc4, subroutine


class HelloWorldClient(arc4.ARC4Client):
    def hello(self, name: arc4.String) -> arc4.String: ...


@subroutine
def call_another_contract() -> None:
    # can reference another algopy contract method
    result, txn = arc4.abi_call(HelloWorldClient.hello, arc4.String("World"), app=...)
    assert result == "Hello, World"
```

#### Method signature or name

An ARC-4 method selector can be used, e.g. `"hello(string)string"`, along with a type index to specify the return type. Alternatively, just a name can be provided and the method signature will be inferred, e.g.:

```python
from algopy import arc4, subroutine


@subroutine
def call_another_contract() -> None:
    # can reference a method selector
    result, txn = arc4.abi_call[arc4.String]("hello(string)string", arc4.String("Algo"), app=...)
    assert result == "Hello, Algo"

    # can reference a method name; the method selector is inferred from arguments and return type
    result, txn = arc4.abi_call[arc4.String]("hello", "There", app=...)
    assert result == "Hello, There"
```

## `algopy.arc4.arc4_create`

`arc4_create` can be used to create ARC-4 applications, and will automatically populate required fields for app creation (such as approval program, clear state program, and global/local state allocation). Like `abi_call`, it handles ARC-4 arguments and provides ARC-4 return values.

If the compiled programs and state allocation fields need to be customized (for example due to template variables), this can be done by passing a `CompiledContract` via the `compiled` keyword argument.
```python
from algopy import ARC4Contract, String, arc4, subroutine


class HelloWorld(ARC4Contract):
    @arc4.abimethod()
    def greet(self, name: String) -> String:
        return "Hello " + name


@subroutine
def create_new_application() -> None:
    hello_world_app = arc4.arc4_create(HelloWorld).created_app

    greeting, _txn = arc4.abi_call(HelloWorld.greet, "there", app_id=hello_world_app)

    assert greeting == "Hello there"
```

## `algopy.arc4.arc4_update`

`arc4_update` is used to update an existing ARC-4 application and will automatically populate the required approval and clear state program fields. Like `abi_call`, it handles ARC-4 arguments and provides ARC-4 return values.

If the compiled programs need to be customized (for example due to template variables), this can be done by passing a `CompiledContract` via the `compiled` keyword argument.

```python
from algopy import Application, ARC4Contract, String, arc4, subroutine


class NewApp(ARC4Contract):
    @arc4.abimethod()
    def greet(self, name: String) -> String:
        return "Hello " + name


@subroutine
def update_existing_application(existing_app: Application) -> None:
    hello_world_app = arc4.arc4_update(NewApp, app_id=existing_app)

    greeting, _txn = arc4.abi_call(NewApp.greet, "there", app_id=hello_world_app)

    assert greeting == "Hello there"
```

## Using `itxn.ApplicationCall`

If the application being called is not an ARC-4 contract, or an application specification is not available, then `itxn.ApplicationCall` can be used. This approach is generally more verbose than the above approaches, so it should only be used when required. See the inner transactions documentation for an example.
# Compiling to AVM bytecode
The PuyaPy compiler can compile Algorand Python smart contracts directly into AVM bytecode. Once compiled, this bytecode can be used to construct AVM Application Call transactions both on and off chain.

## Outputting AVM bytecode from CLI

The `--output-bytecode` option can be used to generate `.bin` files for smart contracts and logic signatures, producing an approval and clear program for each smart contract.

## Obtaining bytecode within other contracts

The `compile_contract` function takes an Algorand Python smart contract class and returns a `CompiledContract`. The global state, local state and program pages allocation parameters are derived from the contract by default, but can be overridden. This compiled contract can then be used to create an `itxn.ApplicationCall` transaction, or used with the `arc4.arc4_create` and `arc4.arc4_update` functions.

The `compile_logicsig` function takes an Algorand Python logic signature and returns a `CompiledLogicSig`, which can be used to verify if a transaction has been signed by a particular logic signature.

## Template variables

Algorand Python supports defining variables that can be substituted during compilation. For example, the following contract has `UInt64` and `Bytes` template variables.

```python
from algopy import ARC4Contract, Bytes, TemplateVar, UInt64, arc4


class TemplatedContract(ARC4Contract):
    @arc4.abimethod
    def my_method(self) -> UInt64:
        return TemplateVar[UInt64]("SOME_UINT")

    @arc4.abimethod
    def my_other_method(self) -> Bytes:
        return TemplateVar[Bytes]("SOME_BYTES")
```

When compiling to bytecode, the values for these template variables must be provided. These values can be provided via the CLI, or through the `template_vars` parameter of the `compile_contract` and `compile_logicsig` functions.

### CLI

The `--template-var` option can be used to define each variable.
For example, to provide the values for the above example contract, the following command could be used:

`puyapy --template-var SOME_UINT=123 --template-var SOME_BYTES=0xABCD templated_contract.py`

### Within other contracts

The `compile_contract` and `compile_logicsig` functions both have an optional `template_vars` parameter which can be used to define template variables. Variables defined in this manner take priority over variables defined on the CLI.

```python
from algopy import Bytes, UInt64, arc4, compile_contract, subroutine

from templated_contract import TemplatedContract


@subroutine
def create_templated_contract() -> None:
    compiled = compile_contract(
        TemplatedContract,
        global_uints=2,  # customize allocated global uints
        template_vars={  # provide template vars
            "SOME_UINT": UInt64(123),
            "SOME_BYTES": Bytes(b"\xAB\xCD"),
        },
    )
    arc4.arc4_create(TemplatedContract, compiled=compiled)
```
# Control flow structures
Control flow in Algorand Python is similar to standard Python control flow, with support for if statements, while loops, for loops, and match statements.

## If statements

If statements work the same as in Python. The condition must be an expression that evaluates to `bool`.

```python
if condition:
    # block of code to execute if condition is True
elif condition2:
    # block of code to execute if condition is False and condition2 is True
else:
    # block of code to execute if condition and condition2 are both False
```

## Ternary conditions

Ternary conditions work the same as in Python. The condition must be an expression that evaluates to `bool`.

```python
value1 = UInt64(5)
value2 = String(">6") if value1 > 6 else String("<=6")
```

## While loops

While loops work the same as in Python. The condition must be an expression that evaluates to `bool`. You can use `break` and `continue`.

```python
while condition:
    # block of code to execute if condition is True
```

## For Loops

For loops are used to iterate over sequences and ranges. They work the same as in Python. Algorand Python provides functions like `uenumerate` and `urange` to facilitate creating sequences and ranges; the built-in Python `reversed` function works with these.

* `uenumerate` is similar to Python's built-in `enumerate` function, but for `UInt64` numbers; it allows you to loop over a sequence and have an automatic counter.
* `urange` is a function that generates a sequence of `UInt64` numbers, which you can iterate over.
* `reversed` returns a reversed iterator of a sequence.

Here is an example of how you can use these functions in a contract:

```python
test_array = arc4.StaticArray(arc4.UInt8(), arc4.UInt8(), arc4.UInt8(), arc4.UInt8())

# urange: reversed items, forward index
for index, item in uenumerate(reversed(urange(4))):
    test_array[index] = arc4.UInt8(item)

assert test_array.bytes == Bytes.from_hex("03020100")
```
## Match Statements

Match statements work the same as in Python, with support for basic case/switch functionality. Captures, patterns, and guard clauses are not currently supported.

```python
match value:
    case pattern1:
        # block of code to execute if pattern1 matches
    case pattern2:
        # block of code to execute if pattern2 matches
    case _:
        # fallback
```
# Data structures
In terms of data structures, Algorand Python currently provides support for data types and arrays. In a restricted and costly computing environment such as a blockchain application, making the correct choice of data structure is crucial.

All ARC-4 data types are supported, and initially they were the only choice of data structures in Algorand Python 1.0, other than statically sized native Python tuples. However, ARC-4 encoding is not an efficient encoding for mutations, and additionally these types were restricted in that they could only contain other ARC-4 types. As of Algorand Python 2.7, two new array types were introduced: `Array`, a mutable array type that supports statically sized native and ARC-4 elements, and `ImmutableArray`, which has an immutable API and supports dynamically sized native and ARC-4 elements.

## Mutability vs Immutability

A value with an immutable type cannot be modified. Some examples are `UInt64`, `String`, `tuple` and `typing.NamedTuple`. Aggregate immutable types such as `tuple` or `ImmutableArray` provide a way to produce modified values: a copy of the original value is returned with the specified changes applied, e.g.

```python
import typing

import algopy


# update a named tuple with _replace
class MyTuple(typing.NamedTuple):
    foo: algopy.UInt64
    bar: algopy.String


tup1 = MyTuple(foo=algopy.UInt64(12), bar=algopy.String("Hello"))
# this does not modify tup1
tup2 = tup1._replace(foo=algopy.UInt64(34))
assert tup1.foo != tup2.foo

# update an immutable array by appending and reassigning
arr = algopy.ImmutableArray[MyTuple]()
arr = arr.append(tup1)
arr = arr.append(tup2)
```

Mutable types allow direct modification of a value, and all references to this value are able to observe the change, e.g.
```python
import algopy

# my_arr and my_arr2 both point to the same array
my_arr = algopy.Array[algopy.UInt64]()
my_arr2 = my_arr

my_arr.append(algopy.UInt64(12))
assert my_arr.length == 1
assert my_arr2.length == 1

my_arr2.append(algopy.UInt64(34))
assert my_arr2.length == 2
assert my_arr.length == 2
```

## Static size vs Dynamic size

A statically sized type is one whose total size in memory is determinable at compile time; for example, `UInt64` is always 8 bytes of memory. Aggregate types such as `tuple`, `typing.NamedTuple`, `arc4.Struct` and `arc4.Tuple` are statically sized if all their members are also statically sized, e.g. `tuple[UInt64, UInt64]` is statically sized as it contains two statically sized members.

Any type whose size is not statically defined is dynamically sized, e.g. `Bytes`, `String`, `tuple[UInt64, String]` and `Array[UInt64]` are all dynamically sized.

## Size constraints

No `bytes` value on the AVM stack can exceed 4096 bytes in length, which means arrays and structs cannot exceed this size either. Boxes are an exception: the contents of a box can be up to 32KB. However, loading an entire box of that size into a variable is not possible, as it would exceed the AVM limit of 4096 bytes.
However, Puya will support reading and writing parts of a box:

```python
import typing

from algopy import Box, FixedArray, Struct, UInt64, arc4, size_of


class BigStruct(Struct):
    count: UInt64  # 8 bytes
    large_array: FixedArray[UInt64, typing.Literal[512]]  # 4096 bytes


class Contract(arc4.ARC4Contract):
    def __init__(self) -> None:
        self.box = Box(BigStruct)
        self.box.create()

    @arc4.abimethod()
    def read_box_fails(self) -> UInt64:
        assert size_of(BigStruct) == 4104
        # this fails to compile because size_of(BigStruct) exceeds the 4096-byte limit
        big_struct = self.box.value
        assert big_struct.count > 0, ""
```

## Algorand Python composite types

### `tuple`

This is a regular Python tuple.

* Immutable
* Members can be of any type
* Most useful as an anonymous type
* Each member is stored on the stack, which makes tuples quite efficient within a function. However, passing one to another function can require a lot of stack manipulation to order all the members correctly on the stack

### `typing.NamedTuple`

* Immutable
* Members can be of any type
* Members are described by a field name and type
* Modified copies can be made using `._replace`
* Each member is stored on the stack, which makes them quite efficient within a function.
However, passing one to another function can require a lot of stack manipulation to order all the members correctly on the stack.

### `Struct`

* Can contain any type except transactions
* Members are described by a field name and type
* Can be immutable if using the `frozen` class option and all members are also immutable
* Requires `.copy()` when mutable and creating additional references
* Encoded as a single ARC-4 value on the stack

### `arc4.Tuple`

* Can only contain other ARC-4 types
* Can be immutable if all members are also immutable
* Requires `.copy()` when mutable and creating additional references
* Encoded as a single ARC-4 value on the stack

### `arc4.Struct`

* Can only contain other ARC-4 types
* Members are described by a field name and type
* Can be immutable if using the `frozen` class option and all members are also immutable
* Requires `.copy()` when mutable and creating additional references
* Encoded as a single ARC-4 value on the stack

## Algorand Python array types

### `algopy.FixedArray`

* Can contain any type except transactions
* Can only contain a fixed number of elements
* Most efficient array type
* Requires `.copy()` if making additional references to the array or any mutable elements

### `algopy.Array`

* Can contain any type except transactions
* Dynamically sized; efficient for reading (when assembled off-chain), but inefficient to manipulate on-chain
* Requires `.copy()` if making additional references to the array or any mutable elements

### `algopy.ReferenceArray`

* Mutable; all references see modifications
* Only supports statically sized immutable types. Note: supporting mutable elements could quickly exhaust scratch slots in a program, so this type is limited to immutable elements only
* May use scratch slots to store the data
* Cannot be put in storage or used in ABI method signatures
* An immutable copy can be made for storage, or for returning from a contract, by using the `.freeze()` method, e.g.
```python
import algopy

class SomeContract(algopy.arc4.ARC4Contract):
    @algopy.arc4.abimethod()
    def get_array(self) -> algopy.ImmutableArray[algopy.UInt64]:
        arr = algopy.ReferenceArray[algopy.UInt64]()
        # modify arr as required
        ...
        # return immutable copy
        return arr.freeze()
```

### `algopy.ImmutableArray`

* Immutable version of `algopy.Array`
* Modifications are done by reassigning a modified copy of the original array
* Can only contain immutable types
* Can be put in storage or used in ABI method signatures

### `algopy.arc4.DynamicArray` / `algopy.arc4.StaticArray`

* Only supports ARC-4 elements
* Elements often require conversion to native types; use `algopy.Array` / `algopy.FixedArray` to avoid explicit conversions
* Dynamically sized types are efficient for reading, but not writing
* Requires `.copy()` if making additional references to the array or mutable elements

## Tips

* Avoid using dynamically sized types, as they are less efficient and can obfuscate constraints of the AVM (`algopy.Bytes`, `algopy.String`, `algopy.Array`, `algopy.arc4.DynamicArray`, `algopy.arc4.DynamicBytes`, `algopy.arc4.String`)
* Prefer frozen structs where possible to avoid `.copy()` requirements
* If a function needs just a few values of a tuple, it is more efficient to pass just those members rather than the whole tuple
* For passing composite values between functions there can be different trade-offs in terms of op budget and program size between a tuple and a struct; if this is a concern, test and confirm which suits your contract best
* All array types except `algopy.ReferenceArray` can be used in storage and ABI methods, and will be viewed externally (i.e. in ARC-56 definitions) as the equivalent ARC-4 encoded type
* Use `freeze()` to convert a `algopy.ReferenceArray` to an `algopy.ImmutableArray` for storage
# Error handling and assertions
In Algorand Python, error handling and assertions play a crucial role in ensuring the correctness and robustness of smart contracts.

## Assertions

Assertions allow you to immediately fail a smart contract if a condition evaluates to `False`. If an assertion fails, it immediately stops the execution of the contract and marks the call as a failure. In Algorand Python, you can use the Python built-in `assert` statement to make assertions in your code. For example:

```python
@subroutine
def set_value(value: UInt64) -> None:
    assert value > 4, "Value must be > 4"
```

### Assertion error handling

The optional string message provided with an assertion will be added as a TEAL comment at the end of the assertion line. This works in concert with default AlgoKit Utils app client behaviour to show a TEAL stack trace of an error, and thus surface the error message to the caller (when source maps have been loaded).

## Explicit failure

For scenarios where you need to fail a contract explicitly, you can use the `op.err()` operation. This operation causes the TEAL program to immediately and unconditionally fail. Alternatively, `op.exit(0)` will achieve the same result; a non-zero value will do the opposite and immediately succeed.

## Exception handling

The AVM doesn't provide error trapping semantics, so it's not possible to implement `raise` and `except`.
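Since `assert` keeps its standard Python semantics, the failure behaviour can be sketched in plain Python (this is ordinary Python, not contract code):

```python
def set_value(value: int) -> None:
    # same condition as the contract subroutine above
    assert value > 4, "Value must be > 4"

try:
    set_value(3)
except AssertionError as e:
    # this message is what surfaces via the TEAL comment in a stack trace
    print(e)
```

In a contract, the equivalent failure aborts the whole transaction rather than raising a catchable exception.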
# Logging
Algorand Python provides a `log` method that allows you to emit debugging and event information, as well as return values from your contracts to the caller. This `log` method is a superset of the AVM `log` op that adds extra functionality:

* You can log multiple items rather than a single item
* Items are concatenated together with an optional separator (which defaults to `""`)
* Items are automatically converted to bytes for you
* Support for:
  * `int` literals / module variables (encoded as raw bytes, not ASCII)
  * `UInt64` values (encoded as raw bytes, not ASCII)
  * `str` literals / module variables (encoded as UTF-8)
  * `bytes` literals / module variables (encoded as is)
  * `Bytes` values (encoded as is)
  * `BytesBacked` values, which includes `String`, `BigUInt`, and all of the ARC-4 types (encoded as their underlying bytes values)

Logged values are attached to the transaction record stored on the blockchain ledger. If you want to emit ARC-28 events in the logs then there is the `arc4.emit` method.

Here's an example contract that uses the log method in various ways:

```python
from algopy import BigUInt, Bytes, Contract, log, op

class MyContract(Contract):
    def approval_program(self) -> bool:
        log(0)
        log(b"1")
        log("2")
        log(op.Txn.num_app_args + 3)
        log(Bytes(b"4") if op.Txn.num_app_args else Bytes())
        log(
            b"5",
            6,
            op.Txn.num_app_args + 7,
            BigUInt(8),
            Bytes(b"9") if op.Txn.num_app_args else Bytes(),
            sep="_",
        )
        return True

    def clear_state_program(self) -> bool:
        return True
```
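The conversion rules above can be sketched in plain Python (a hypothetical `encode_log_item` helper; `int` values are assumed to encode as 8-byte big-endian, matching the AVM's `itob`):

```python
def encode_log_item(item):
    """Convert an int | str | bytes item to bytes, mirroring the documented log() rules."""
    # int -> raw 8-byte big-endian (itob), not ASCII
    if isinstance(item, int):
        return item.to_bytes(8, "big")
    # str -> UTF-8
    if isinstance(item, str):
        return item.encode("utf-8")
    # bytes -> as-is
    return item

def log_concat(*items, sep=b""):
    """Items are converted, then joined with the (optional) separator."""
    return sep.join(encode_log_item(i) for i in items)
```

For example, `log_concat(b"5", 6, sep=b"_")` yields `b"5"`, the separator, then the 8-byte encoding of `6`.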
# PuyaPy migration from 4.x to 5.0
PuyaPy 5.0 and the accompanying Algorand Python 3.0 `algopy` stubs have some breaking changes from prior versions. This document outlines those changes and how to resolve them.

## `algopy.Array` to `algopy.ReferenceArray`

The `algopy.Array` type present in 4.x has been renamed to `algopy.ReferenceArray` to make it clearer how it differs from the other array types. If a contract was using this type in 4.x, it could encounter one of the following errors after upgrading to 5.0:

* `No overload variant of "Array" matches argument types`
* `expression is not valid as an assignment target`
* `unsupported assignment target`
* `mutable values cannot be passed more than once to a subroutine`

A simple way to solve this for existing contracts using the old name is to alias the `ReferenceArray` type as `Array`, e.g.

```python
from algopy import ReferenceArray as Array
```

If you need to use both `algopy.ReferenceArray` and the new `algopy.Array`, then it is better to update existing `algopy.Array` references to `algopy.ReferenceArray`, e.g. code that was using `algopy.Array` prior to 5.0:

```python
from algopy import *

@subroutine
def some_method(arr: Array[UInt64]) -> None: ...
```

After migrating to 5.0, existing code should use `algopy.ReferenceArray`, and new code is free to use `algopy.Array`:

```python
from algopy import *

@subroutine
def some_method(arr: ReferenceArray[UInt64]) -> None: ...

@subroutine
def a_new_method(arr: Array[UInt64]) -> None: ...
```

## `algopy.Account`, `algopy.Asset` and `algopy.Application` routing behaviour

The default routing behaviour for the resource types `algopy.Account`, `algopy.Asset` and `algopy.Application` has changed in 5.0: the new behaviour treats these types as their underlying ARC-4 value type when constructing ABI method signatures. This allows for more efficient resource packing when using the `algokit_utils` populate-resource functionality.
| Type                 | 4.x (`foreign_index`) | 5.0 (`value`) |
| -------------------- | --------------------- | ------------- |
| `algopy.Account`     | `account`             | `address`     |
| `algopy.Asset`       | `asset`               | `uint64`      |
| `algopy.Application` | `application`         | `uint64`      |

There are two methods to return to the 4.x behaviour for these types:

1. Use the original behaviour for the entire compilation via a CLI option. The original behaviour can be restored by using the `--resource-encoding` CLI option on `puyapy`, e.g. `puyapy --resource-encoding=index path/to/contracts`
2. Use the original behaviour for specific methods via an `abimethod` option. Individual methods can be forced to use the original behaviour by setting the `resource_encoding` option on `arc4.abimethod`, e.g.

```python
from algopy import arc4, Account, Application, Asset

class MyContract(arc4.ARC4Contract):
    @arc4.abimethod(resource_encoding="index")
    def my_abi_method(self, app: Application, asset: Asset, account: Account) -> None:
        # has an ARC-4 signature of my_abi_method(application,asset,account)void
        ...
```

## Constructor signatures of `ImmutableArray` and `ReferenceArray`

With the introduction of the new mutable native arrays (`Array`, `FixedArray`) to PuyaPy, we chose to follow standard Python idioms, in that these arrays can be initialized with an iterable (a tuple, another array, e.g. `Array((UInt64(1), UInt64(2)))`, `Array(existing_arr)`). The constructor signatures of `ImmutableArray` and `ReferenceArray` (which was called `Array` prior to 5.0) have been changed to be consistent with the new mutable native arrays. E.g. code that constructs `algopy.ImmutableArray` prior to 5.0:

```python
arr = ImmutableArray(UInt64(1), UInt64(2), UInt64(3))
```

After migrating to 5.0, construct `algopy.ImmutableArray` using an iterable parameter:

```python
arr = ImmutableArray((UInt64(1), UInt64(2), UInt64(3)))
```
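The 5.0 constructor change follows the same idiom as Python's built-in collections, which take a single iterable rather than variadic elements; in plain Python terms:

```python
# one iterable argument, as with ImmutableArray((UInt64(1), UInt64(2))) in 5.0
assert list((1, 2, 3)) == [1, 2, 3]

# variadic construction is rejected, just as
# ImmutableArray(UInt64(1), UInt64(2)) no longer compiles in 5.0
try:
    list(1, 2, 3)  # type: ignore[call-arg]
    raise RuntimeError("unreachable")
except TypeError:
    pass
```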
# Module level constructs
You can write compile-time constant code at a module level and then use it in place of repeated literal values.

## Module constants

Module constants are compile-time constant, and can contain `bool`, `int`, `str` and `bytes` values. You can use f-strings and other compile-time constant expressions in module constants too. For example:

```python
from algopy import UInt64, subroutine

SCALE = 100000
SCALED_PI = 314159

@subroutine
def circle_area(radius: UInt64) -> UInt64:
    scaled_result = SCALED_PI * radius**2
    result = scaled_result // SCALE
    return result

@subroutine
def circle_area_100() -> UInt64:
    return circle_area(UInt64(100))
```

## If statements

You can use `if` statements with compile-time constant conditions to define module constants. For example:

```python
FOO = 42

if FOO > 12:
    BAR = 123
else:
    BAR = 456
```

## Integer math

Module constants can also be defined using common integer expressions. For example:

```python
SEVEN = 7
TEN = 7 + 3
FORTY_NINE = 7 ** 2
```

## Strings

Module `str` constants can use f-string formatting and other common string expressions. For example:

```python
NAME = "There"
MY_FORMATTED_STRING = f"Hello {NAME}"  # Hello There
PADDED = f"{123:05}"  # "00123"
DUPLICATED = "5" * 3  # "555"
```

## Type aliases

You can create type aliases to make your contract terser and more expressive. For example:

```python
import typing

from algopy import arc4

VoteIndexArray: typing.TypeAlias = arc4.DynamicArray[arc4.UInt8]
Row: typing.TypeAlias = arc4.StaticArray[arc4.UInt8, typing.Literal[3]]
Game: typing.TypeAlias = arc4.StaticArray[Row, typing.Literal[3]]
Move: typing.TypeAlias = tuple[arc4.UInt64, arc4.UInt64]
Bytes32: typing.TypeAlias = arc4.StaticArray[arc4.Byte, typing.Literal[32]]
Proof: typing.TypeAlias = arc4.DynamicArray[Bytes32]
```
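The fixed-point arithmetic used by `circle_area` can be checked with plain Python integers, since module constants are just ordinary Python values (same constants as above):

```python
SCALE = 100000
SCALED_PI = 314159

def circle_area(radius: int) -> int:
    # all-integer math: multiply by scaled pi, then divide the scale back out
    return SCALED_PI * radius**2 // SCALE

# radius 100: the true area is ~31415.9; truncating division gives 31415
print(circle_area(100))
```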
# Opcode budgets
Algorand Python provides a helper method, `ensure_budget`, for increasing the available opcode budget.
# AVM operations
Algorand Python allows you to express nearly all AVM ops, apart from ops that manipulate the stack (to avoid conflicts with the compiler), and `log` (to avoid confusion with the superior `log` method). These ops are exposed via the `algopy.op` submodule. We generally recommend importing this entire submodule so you can use intellisense to discover the available methods:

```python
from algopy import UInt64, op, subroutine

@subroutine
def sqrt_16() -> UInt64:
    return op.sqrt(16)
```

All ops are typed using Algorand Python types and have correct static type representations. Many ops have higher-order functionality exposed by Algorand Python that limits the need to reach for the underlying ops. For instance, there is first-class support for local and global storage, so there is little need to use the likes of `app_local_get` et al. But they are still exposed in case you want to do something that Algorand Python's abstractions don't support.

## Txn

The `Txn` opcodes are so commonly used that they are exposed directly in the `algopy` module and can be imported to make access terser:

```python
from algopy import subroutine, Txn

@subroutine
def has_no_app_args() -> bool:
    return Txn.num_app_args == 0
```

## Global

The `Global` opcodes are likewise exposed directly in the `algopy` module:

```python
from algopy import subroutine, Global, Txn

@subroutine
def only_allow_creator() -> None:
    assert Txn.sender == Global.creator_address, "Only the contract creator can perform this operation"
```
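`op.sqrt` maps to the AVM `sqrt` opcode, which returns the largest integer `i` such that `i*i <= n` (an integer square root); the plain-Python equivalent is `math.isqrt`:

```python
import math

# sqrt_16() above returns UInt64(4); AVM sqrt truncates rather than rounds
print(math.isqrt(16))  # 4
print(math.isqrt(17))  # 4
```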
# Storing data on-chain
Algorand smart contracts can utilise on-chain storage: global storage, local storage, and box storage. They also have access to a transient form of storage in scratch space.

The life-cycle of a smart contract matches the semantics of Python classes when you consider deploying a smart contract as "instantiating" the class. Any calls to that smart contract are made to that instance of the smart contract, and any state assigned to `self.` variables will persist across different invocations (provided the transaction it was a part of succeeds, of course). You can deploy the same contract class multiple times; each will become a distinct and isolated instance.

During a single smart contract execution there is also the ability to use "temporary" storage, either global to the contract execution via scratch storage, or local to the current method via local variables and subroutine parameters.

## Global storage

Global storage is state that is stored against the contract instance and can be retrieved by key. There are AVM limits on how many global storage values each application can have. This is represented in Algorand Python by either:

1. Assigning any value to an instance variable (e.g. `self.value = UInt64(3)`).
   * Use this approach if you just require a terse API for getting and setting a state value
2. Using an instance of `GlobalState`, which gives you the ability to understand and control the value and its metadata (which propagates to the ARC-32/ARC-56 app spec file).
   * Use this approach if you need to:
     * Omit a default/initial value
     * Delete the stored value
     * Check if a value exists
     * Specify the exact key bytes
     * Include a description to be included in app spec files (ARC-32/ARC-56)

For example:

```python
self.global_int_full = GlobalState(UInt64(55), key="gif", description="Global int full")
self.global_int_simplified = UInt64(33)
self.global_int_no_default = GlobalState(UInt64)
self.global_bytes_full = GlobalState(Bytes(b"Hello"))
self.global_bytes_simplified = Bytes(b"Hello")
self.global_bytes_no_default = GlobalState(Bytes)

global_int_full_set = bool(self.global_int_full)
bytes_with_default_specified = self.global_bytes_no_default.get(b"Default if no value set")
error_if_not_set = self.global_int_no_default.value
```

These values can be assigned anywhere you have access to `self`, i.e. any instance methods/subroutines. The information about global storage is automatically included in the ARC-32/ARC-56 app spec file and thus will automatically appear within any generated typed app clients.

## Local storage

Local storage is state that is stored against the contract instance for a specific account and can be retrieved by key and account address. There are AVM limits on how many local storage values an application can have per account. This is represented in Algorand Python by using an instance of `LocalState`.
For example:

```python
def __init__(self) -> None:
    self.local = LocalState(Bytes)
    self.local_with_metadata = LocalState(UInt64, key="lwm", description="Local with metadata")

@subroutine
def get_guaranteed_data(self, for_account: Account) -> Bytes:
    return self.local[for_account]

@subroutine
def get_data_with_default(self, for_account: Account, default: Bytes) -> Bytes:
    return self.local.get(for_account, default)

@subroutine
def get_data_or_assert(self, for_account: Account) -> Bytes:
    result, exists = self.local.maybe(for_account)
    assert exists, "no data for account"
    return result

@subroutine
def set_data(self, for_account: Account, value: Bytes) -> None:
    self.local[for_account] = value

@subroutine
def delete_data(self, for_account: Account) -> None:
    del self.local[for_account]
```

These values can be assigned anywhere you have access to `self`, i.e. any instance methods/subroutines. The information about local storage is automatically included in the ARC-32/ARC-56 app spec file and thus will automatically appear within any generated typed app clients.

## Box storage

We provide two different types for accessing box storage: `Box` and `BoxMap`. We also expose raw operations via the `op` module. Before using box storage, be sure to familiarise yourself with the constraints of the underlying API.

The `Box` type provides an abstraction over storing a single value in a single box. A box can be declared against `self` in an `__init__` method (in which case the key must be a compile-time constant), or as a local variable within any subroutine. `Box` proxy instances can be passed around like any other value. Once declared, you can interact with the box via its instance methods.
```python
import typing as t

from algopy import Box, Contract, arc4, op

class MyContract(Contract):
    def __init__(self) -> None:
        self.box_a = Box(arc4.StaticArray[arc4.UInt32, t.Literal[20]], key=b"a")

    def approval_program(self) -> bool:
        box_b = Box(arc4.String, key=b"b")
        box_b.value = arc4.String("Hello")

        # Check if the box exists
        if self.box_a:
            # Reassign the value
            self.box_a.value[2] = arc4.UInt32(40)
        else:
            # Assign a new value
            self.box_a.value = arc4.StaticArray[arc4.UInt32, t.Literal[20]].from_bytes(op.bzero(20 * 4))
        # Read a value
        return self.box_a.value[4] == arc4.UInt32(2)
```

In addition to being able to set and read the box value, there are operations for extracting and replacing just a portion of the box data, which is useful for minimizing the number of reads and writes required, and also allows you to interact with byte arrays longer than the AVM can otherwise support (currently 4096 bytes).

```python
from algopy import Box, Bytes, Contract, Global, Txn

class MyContract(Contract):
    def approval_program(self) -> bool:
        my_blob = Box(Bytes, key=b"blob")
        sender_bytes = Txn.sender.bytes
        app_address = Global.current_application_address.bytes
        assert my_blob.create(size=8000)
        my_blob.replace(0, sender_bytes)
        my_blob.splice(0, 0, app_address)
        first_64 = my_blob.extract(0, 32 * 2)
        assert first_64 == app_address + sender_bytes
        value, exists = my_blob.maybe()
        assert exists
        del my_blob.value
        value, exists = my_blob.maybe()
        assert not exists
        assert my_blob.get(default=sender_bytes) == sender_bytes
        my_blob.create(size=64)  # size of sender_bytes + app_address
        assert my_blob, "Blob exists"
        assert my_blob.length == 64
        return True
```

`BoxMap` is similar to the `Box` type, but allows for grouping a set of boxes with a common key prefix and content type. A custom `key_prefix` can optionally be provided, with the default being to use the variable name as the prefix. The key can be a `Bytes` value, or anything that can be converted to `Bytes`. The final box name is the combination of `key_prefix + key`.
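Box name composition for a `BoxMap` can be sketched in plain Python bytes. For example, with `key_prefix=b"a_"` and an `Account` key (an Algorand account is encoded as its 32-byte public key):

```python
key_prefix = b"a_"

# a placeholder 32-byte account public key (real keys come from e.g. Txn.sender)
account_key = bytes(32)

# final box name = key_prefix + encoded key
box_name = key_prefix + account_key
print(len(box_name))  # 34
```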
```python
from algopy import Account, BoxMap, Contract, String, Txn

class MyContract(Contract):
    def __init__(self) -> None:
        self.my_map = BoxMap(Account, String, key_prefix=b"a_")

    def approval_program(self) -> bool:
        # Check if the box exists
        if Txn.sender in self.my_map:
            # Append to the existing value
            self.my_map[Txn.sender] += String(" World")
        else:
            # Assign a new value
            self.my_map[Txn.sender] = String("Hello")
        # Read a value
        return self.my_map[Txn.sender] == String("Hello World")
```

If none of these abstractions suit your needs, you can use the raw box storage ops to interact with box storage. These ops match closely to the opcodes available on the AVM. For example:

```python
op.Box.create(b"key", size)
op.Box.put(Txn.sender.bytes, answer_ids.bytes)
(votes, exists) = op.Box.get(Txn.sender.bytes)
op.Box.replace(TALLY_BOX_KEY, index, op.itob(current_vote + 1))
```

## Scratch storage

To use scratch storage, you first need to reserve the slots your contract will use via the `scratch_slots` class option, and then you can use the scratch storage ops (`op.Scratch`). For example:

```python
from algopy import Bytes, Contract, UInt64, op, urange

TWO = 2
TWENTY = 20

class MyContract(Contract, scratch_slots=(1, TWO, urange(3, TWENTY))):
    def approval_program(self) -> bool:
        op.Scratch.store(1, UInt64(5))
        op.Scratch.store(2, Bytes(b"Hello World"))
        for i in urange(3, 20):
            op.Scratch.store(i, i)
        assert op.Scratch.load_uint64(1) == UInt64(5)
        assert op.Scratch.load_bytes(2) == b"Hello World"
        assert op.Scratch.load_uint64(5) == UInt64(5)
        return True

    def clear_state_program(self) -> bool:
        return True
```
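`urange` behaves like Python's `range` (the end is exclusive), so `scratch_slots=(1, TWO, urange(3, TWENTY))` above reserves slots 1 through 19; a plain-Python sketch of the reserved set:

```python
TWO = 2
TWENTY = 20

# slot IDs reserved by scratch_slots=(1, TWO, urange(3, TWENTY));
# urange(3, 20) covers 3..19, end-exclusive like range
reserved = {1, TWO} | set(range(3, TWENTY))
print(sorted(reserved))
```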
# Program structure
An Algorand Python smart contract is defined within a single class. You can extend other contracts (through inheritance), and also define standalone functions and reference them. This also works across different Python packages - in other words, you can have a Python library with common functions and re-use that library across multiple projects!

## Modules

Algorand Python modules are files that end in `.py`, as with standard Python. Sub-modules are supported as well, so you're free to organise your Algorand Python code however you see fit. The standard Python import rules apply. A given module can contain zero, one, or many smart contracts and/or logic signatures. A module can also contain subroutines, module constants, and type aliases.

## Typing

Algorand Python code must be fully typed with type annotations. In practice, this mostly means annotating the arguments and return types of all functions.

## Subroutines

Subroutines are "internal" or "private" methods of a contract. They can exist as part of a contract class, or at the module level so they can be used by multiple classes or even across multiple projects. You can pass parameters to subroutines and define local variables, both of which automatically get managed for you with semantics that match Python semantics. All subroutines must be decorated with `algopy.subroutine`, like so:

```python
def foo() -> None:  # compiler error: not decorated with subroutine
    ...

@algopy.subroutine
def bar() -> None:
    ...
```

#### NOTE

Requiring this decorator serves two key purposes:

1. You get an understandable error message if you try to use a third-party package that wasn't built for Algorand Python
2. It provides the ability to modify the functions on the fly when running in Python itself, in a future testing framework

Argument and return types of a subroutine can be any Algorand Python variable type (except for inner transaction types; see the transaction limitations).
Returning multiple values is allowed; this is annotated in the standard Python way with `tuple`:

```python
@algopy.subroutine
def return_two_things() -> tuple[algopy.UInt64, algopy.String]:
    ...
```

Keyword-only and positional-only argument list modifiers are supported:

```python
@algopy.subroutine
def my_method(a: algopy.UInt64, /, b: algopy.UInt64, *, c: algopy.UInt64) -> None:
    ...
```

In this example, `a` can only be passed positionally, `b` can be passed either by position or by name, and `c` can only be passed by name.

The following argument/return types are not currently supported:

* Type unions
* Variadic args like `*args`, `**kwargs`
* Python types such as `int`
* Default argument values

## Contract classes

An Algorand smart contract consists of two distinct "programs": an approval program and a clear-state program. These are tied together in Algorand Python as a single class. All contracts must inherit from the base class `algopy.Contract` - either directly or indirectly, which can include inheriting from `algopy.ARC4Contract`.

The life-cycle of a smart contract matches the semantics of Python classes when you consider deploying a smart contract as "instantiating" the class. Any calls to that smart contract are made to that instance of the smart contract, and any state assigned to `self.` will persist across different invocations (provided the transaction it was a part of succeeds, of course). You can deploy the same contract class multiple times; each will become a distinct and isolated instance.

Contract classes can optionally implement an `__init__` method, which will be executed exactly once, on first deployment. This method takes no arguments, but can contain arbitrary code, including reading directly from the transaction arguments via `Txn.application_args`. This makes it a good place to put common initialisation code, particularly in ARC-4 contracts with multiple methods that allow for creation.
The contract class body should not contain any logic or variable initialisations, only method definitions. Forward type declarations are allowed. Example:

```python
class MyContract(algopy.Contract):
    foo: algopy.UInt64  # okay

    bar = algopy.UInt64(1)  # not allowed

    if True:  # also not allowed
        bar = algopy.UInt64(2)
```

Only concrete (i.e. non-abstract) classes produce output artifacts for deployment. To mark a class as explicitly abstract, inherit from `abc.ABC`.

#### NOTE

The compiler will produce a warning if a Contract class is implicitly abstract, i.e. if any abstract methods are unimplemented.

### Contract class configuration

When defining a contract subclass you can pass configuration options as class keyword arguments to the `algopy.Contract` base class. Namely you can pass in:

* `name` - affects the output TEAL file name if there are multiple non-abstract contracts in the same file, and is also used as the contract name in the ARC-32/ARC-56 application.json instead of the class name.
* `scratch_slots` - allows you to mark a slot ID or range of slot IDs as "off limits" to Puya so you can use them manually.
* `state_totals` - allows defining what values should be used for global and local uint and bytes storage values when creating a contract; these will appear in the ARC-32/ARC-56 app spec.

Full example:

```python
GLOBAL_UINTS = 3

class MyContract(
    algopy.Contract,
    name="CustomName",
    scratch_slots=[5, 25, algopy.urange(110, 115)],
    state_totals=algopy.StateTotals(local_bytes=1, local_uints=2, global_bytes=4, global_uints=GLOBAL_UINTS),
):
    ...
```

### Example: Simplest possible `algopy.Contract` implementation

For a non-ARC-4 contract, the contract class must implement an `approval_program` and a `clear_state_program` method.
As an example, this is a valid contract that always approves:

```python
class Contract(algopy.Contract):
    def approval_program(self) -> bool:
        return True

    def clear_state_program(self) -> bool:
        return True
```

The return value of these methods can be either a `bool` indicating whether the transaction should approve or not, or an `algopy.UInt64` value, where `UInt64(0)` indicates that the transaction should be rejected and any other value indicates that it should be approved.

### Example: Simple call counter

Here is a very simple example contract that maintains a counter of how many times it has been called (including on create).

```python
class Counter(algopy.Contract):
    def __init__(self) -> None:
        self.counter = algopy.UInt64(0)

    def approval_program(self) -> bool:
        match algopy.Txn.on_completion:
            case algopy.OnCompleteAction.NoOp:
                self.increment_counter()
                return True
            case _:
                # reject all OnCompletionActions other than NoOp
                return False

    def clear_state_program(self) -> bool:
        return True

    @algopy.subroutine
    def increment_counter(self) -> None:
        self.counter += 1
```

Some things to note:

* `self.counter` will be stored in the application's global state.
* The return type of `__init__` must be `None`, per standard typed Python.
* Any methods other than `__init__`, `approval_program` or `clear_state_program` must be decorated with `@subroutine`.

### Example: Simplest possible `algopy.ARC4Contract` implementation

And here is a valid ARC-4 contract:

```python
class ABIContract(algopy.ARC4Contract):
    pass
```

A default `@algopy.arc4.baremethod` that allows contract creation is automatically inserted if no other public method allows execution on create. The approval program is always automatically generated, and consists of a router which delegates, based on the transaction application args, to the correct public method. A default `clear_state_program` is implemented which always approves, but this can be overridden.
### Example: An ARC-4 call counter

```python
import algopy

class ARC4Counter(algopy.ARC4Contract):
    def __init__(self) -> None:
        self.counter = algopy.UInt64(0)

    @algopy.arc4.abimethod(create="allow")
    def invoke(self) -> algopy.arc4.UInt64:
        self.increment_counter()
        return algopy.arc4.UInt64(self.counter)

    @algopy.subroutine
    def increment_counter(self) -> None:
        self.counter += 1
```

This functions very similarly to the simple `Counter` example above. Things to note here:

* Since the `invoke` method has `create="allow"`, it can be called both as the method to create the app and also to invoke it after creation. This also means that no default bare-method create will be generated, so the only way to create the contract is through this method.
* The default option for `abimethod` is to only allow `NoOp` as an on-completion action, so we don't need to check this manually.
* The current call count is returned from the `invoke` method.
* Every method in an `ARC4Contract`, except for the optional `__init__` and `clear_state_program` methods, must be decorated with one of `algopy.arc4.abimethod`, `algopy.arc4.baremethod`, or `algopy.subroutine`. Subroutines won't be directly callable through the default router.

## Logic signatures

Logic signatures are stateless and consist of a single program. As such, they are implemented as functions in Algorand Python rather than classes.

```python
@algopy.logicsig
def my_log_sig() -> bool:
    ...
```

Similar to `approval_program` or `clear_state_program` methods, the function must take no arguments and return either `bool` or `algopy.UInt64`. The meaning is the same: a `True` value or non-zero `UInt64` value indicates success; `False` or `UInt64(0)` indicates failure. Logic signatures can make use of subroutines that are not nested in contract classes.
# Transactions
Algorand Python provides types for accessing fields of other transactions in a group, as well as creating and submitting inner transactions from your smart contract. Three kinds of types are available: group transactions (`algopy.gtxn`), inner transaction field sets (`algopy.itxn` parameter types), and inner transaction results.

## Group Transactions

Group transactions can be used as ARC-4 parameters or instantiated from a group index.

### ARC-4 parameter

Group transactions can be used as parameters in ARC-4 methods. For example, to require a payment transaction in an ARC-4 ABI method:

```python
import algopy

class MyContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod()
    def process_payment(self, payment: algopy.gtxn.PaymentTransaction) -> None:
        ...
```

### Group Index

Group transactions can also be created using the group index of the transaction. If instantiating one of the type-specific transactions, it will be checked to ensure the transaction is of the expected type. `gtxn.Transaction` is not checked for a specific type and provides access to all transaction fields.

For example, to obtain a reference to a payment transaction:

```python
import algopy

@algopy.subroutine()
def process_payment(group_index: algopy.UInt64) -> None:
    pay_txn = algopy.gtxn.PaymentTransaction(group_index)
    ...
```

## Inner Transactions

Inner transactions are defined using the `itxn` parameter types, and can then be submitted individually by calling the `.submit()` method, or as a group by calling `itxn.submit_txns`.

### Examples

#### Create and submit an inner transaction

```python
from algopy import Account, UInt64, itxn, subroutine

@subroutine
def example(amount: UInt64, receiver: Account) -> None:
    itxn.Payment(
        amount=amount,
        receiver=receiver,
        fee=0,
    ).submit()
```

#### Accessing result of a submitted inner transaction

```python
from algopy import Asset, itxn, subroutine

@subroutine
def example() -> Asset:
    asset_txn = itxn.AssetConfig(
        asset_name=b"Puya",
        unit_name=b"PYA",
        total=1000,
        decimals=3,
        fee=0,
    ).submit()
    return asset_txn.created_asset
```

#### Submitting multiple transactions

```python
from algopy import Asset, Bytes, itxn, log, subroutine

@subroutine
def example() -> tuple[Asset, Bytes]:
    asset1_params = itxn.AssetConfig(
        asset_name=b"Puya",
        unit_name=b"PYA",
        total=1000,
        decimals=3,
        fee=0,
    )
    app_params = itxn.ApplicationCall(
        app_id=1234,
        app_args=(Bytes(b"arg1"), Bytes(b"arg1")),
    )
    asset1_txn, app_txn = itxn.submit_txns(asset1_params, app_params)

    # log some details
    log(app_txn.logs(0))
    log(asset1_txn.txn_id)
    log(app_txn.txn_id)

    return asset1_txn.created_asset, app_txn.logs(1)
```

#### Create an ARC-4 application, and then call it

```python
from algopy import Bytes, arc4, itxn, subroutine

HELLO_WORLD_APPROVAL: bytes = ...
HELLO_WORLD_CLEAR: bytes = ...
@subroutine
def example() -> None:
    # create an application
    application_txn = itxn.ApplicationCall(
        approval_program=HELLO_WORLD_APPROVAL,
        clear_state_program=HELLO_WORLD_CLEAR,
        fee=0,
    ).submit()
    app = application_txn.created_app

    # invoke an ABI method
    call_txn = itxn.ApplicationCall(
        app_id=app,
        app_args=(arc4.arc4_signature("hello(string)string"), arc4.String("World")),
        fee=0,
    ).submit()
    # extract result
    hello_world_result = arc4.String.from_log(call_txn.last_log)
```

#### Create and submit transactions in a loop

```python
from algopy import Account, UInt64, itxn, subroutine

@subroutine
def example(receivers: tuple[Account, Account, Account]) -> None:
    for receiver in receivers:
        itxn.Payment(
            amount=UInt64(1_000_000),
            receiver=receiver,
            fee=0,
        ).submit()
```

### Limitations

Inner transactions are powerful, but currently do have some restrictions on how they are used.

#### Inner transaction objects cannot be passed to or returned from subroutines

```python
from algopy import Application, Bytes, itxn, subroutine

@subroutine
def parameter_not_allowed(txn: itxn.PaymentInnerTransaction) -> None:  # this is a compile error
    ...

@subroutine
def return_not_allowed() -> itxn.PaymentInnerTransaction:  # this is a compile error
    ...

@subroutine
def passing_fields_allowed() -> Application:
    txn = itxn.ApplicationCall(...).submit()
    do_something(txn.txn_id, txn.logs(0))  # this is ok
    return txn.created_app  # and this is ok

@subroutine
def do_something(txn_id: Bytes, log0: Bytes) -> None:  # this is just a regular subroutine
    ...
```

#### Inner transaction parameters cannot be reassigned without a `.copy()`

```python
from algopy import itxn, subroutine

@subroutine
def example() -> None:
    payment = itxn.Payment(...)
reassigned_payment = payment # this is an error copied_payment = payment.copy() # this is ok ``` #### Inner transactions cannot be reassigned ```python from algopy import itxn, subroutine @subroutine def example() -> None: payment_txn = itxn.Payment(...).submit() reassigned_payment_txn = payment_txn # this is an error txn_id = payment_txn.txn_id # this is ok ``` #### Inner transactions methods cannot be called if there is a subsequent inner transaction submitted or another subroutine is called ```python from algopy import itxn, subroutine @subroutine def example() -> None: app_1 = itxn.ApplicationCall(...).submit() log_from_call1 = app_1.logs(0) # this is ok # another inner transaction is submitted itxn.ApplicationCall(...).submit() # or another subroutine is called call_some_other_subroutine() app1_txn_id = app_1.txn_id # this is ok, properties are still available another_log_from_call1 = app_1.logs(1) # this is not allowed as the array results may no longer be available, instead assign to a variable before submitting another transaction ```
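The `arc4.arc4_signature` helper used in the ARC-4 application example derives the method selector, which per the ARC-4 specification is the first four bytes of the SHA-512/256 hash of the method signature. As a plain-Python sketch of that derivation (outside the AVM, and assuming your Python build exposes SHA-512/256 through `hashlib`):

```python
import hashlib


def arc4_selector(signature: str) -> bytes:
    """First 4 bytes of the SHA-512/256 hash of an ARC-4 method signature."""
    digest = hashlib.new("sha512_256", signature.encode("utf-8")).digest()
    return digest[:4]


# selectors are deterministic, 4 bytes long, and differ per signature
sel = arc4_selector("hello(string)string")
assert len(sel) == 4
assert sel == arc4_selector("hello(string)string")
assert sel != arc4_selector("add(uint64,uint64)uint64")
```

In a contract, PuyaPy computes this selector at compile time, so there is no runtime hashing cost when routing ABI method calls.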
# Types
Algorand Python exposes a number of types that provide a statically typed representation of the behaviour that is possible on the Algorand Virtual Machine.

## AVM types

The most basic types are `uint64` and `bytes[]`, representing unsigned 64-bit integers and byte arrays respectively. These are represented by `UInt64` and `Bytes` in Algorand Python.

There are further “bounded” types supported by the AVM, which are backed by these two simple primitives. For example, `bigint` represents a variably sized (up to 512-bit), unsigned integer, but is actually backed by a `bytes[]`. This is represented by `BigUInt` in Algorand Python.

### UInt64

`UInt64` represents the underlying AVM `uint64` type. It supports all the same operators as `int`, except for `/`; you must use `//` for truncating division instead.

```python
# you can instantiate with an integer literal
num = algopy.UInt64(1)

# no arguments default to the zero value
zero = algopy.UInt64()

# zero is False, any other value is True
assert not zero
assert num

# Like Python's `int`, `UInt64` is immutable, so augmented assignment operators return new values
one = num
num += 1
assert one == 1
assert num == 2

# note that once you have a variable of type UInt64, you don't need to type any variables
# derived from that or wrap int literals
num2 = num + 200 // 3
```

### Bytes

`Bytes` represents the underlying AVM `bytes[]` type. It is intended to represent binary data; for UTF-8 data it might be preferable to use `String`.
```python
# you can instantiate with a bytes literal
data = algopy.Bytes(b"abc")

# no arguments defaults to an empty value
empty = algopy.Bytes()

# empty is False, non-empty is True
assert data
assert not empty

# Like Python's `bytes`, `Bytes` is immutable, augmented assignment operators return new values
abc = data
data += b"def"
assert abc == b"abc"
assert data == b"abcdef"

# indexing and slicing are supported, and both return a Bytes
assert abc[0] == b"a"
assert data[:3] == abc

# check if a bytes sequence occurs within another
assert abc in data
```

#### HINT

Indexing a `Bytes` returning a `Bytes` differs from the behaviour of Python’s `bytes` type, which returns an `int`.

```python
# you can iterate
for i in abc:
    ...

# construct from encoded values
base32_seq = algopy.Bytes.from_base32('74======')
base64_seq = algopy.Bytes.from_base64('RkY=')
hex_seq = algopy.Bytes.from_hex('FF')

# binary manipulations ^, &, |, and ~ are supported
data ^= ~((base32_seq & base64_seq) | hex_seq)

# access the length via the .length property
assert abc.length == 3
```

#### NOTE

See the documentation for an explanation of why `len()` isn’t supported.

### String

`String` is a special Algorand Python type that represents a UTF-8 encoded string. It’s backed by `Bytes`, which can be accessed through the `.bytes` property.

It works similarly to `Bytes`, except that it works with `str` literals rather than `bytes` literals. Additionally, due to a lack of AVM support for Unicode data, indexing and length operations are not currently supported (simply getting the length of a UTF-8 string is an `O(N)` operation, which would be quite costly in a smart contract). If you are happy using the length as the number of bytes, then you can call `.bytes.length`.
```python
# you can instantiate with a string literal
data = algopy.String("abc")

# no arguments defaults to an empty value
empty = algopy.String()

# empty is False, non-empty is True
assert data
assert not empty

# Like Python's `str`, `String` is immutable, augmented assignment operators return new values
abc = data
data += "def"
assert abc == "abc"
assert data == "abcdef"

# whilst indexing and slicing are not supported, the following tests are:
assert abc.startswith("ab")
assert abc.endswith("bc")
assert abc in data

# you can also join multiple Strings together with a separator:
assert algopy.String(", ").join((abc, abc)) == "abc, abc"

# access the underlying bytes
assert abc.bytes == b"abc"
```

### BigUInt

`BigUInt` represents a variable length (max 512-bit) unsigned integer, stored as `bytes[]` in the AVM. It supports all the same operators as `int`, except for power (`**`), left and right shift (`<<` and `>>`), and `/` (as with `UInt64`, you must use `//` for truncating division instead).

Note that the op code costs for `bigint` math are an order of magnitude higher than those for `uint64` math. If you just need to handle overflow, take a look at the wide ops such as `addw`, `mulw`, etc., all of which are exposed through the `algopy.op` module.

Another contrast between `bigint` and `uint64` math is that `bigint` math ops don’t immediately error on overflow: if the result exceeds 512 bits, you can still access the value via `.bytes`, but any further math operations will fail.
```python
# you can instantiate with an integer literal
num = algopy.BigUInt(1)

# no arguments default to the zero value
zero = algopy.BigUInt()

# zero is False, any other value is True
assert not zero
assert num

# Like Python's `int`, `BigUInt` is immutable, so augmented assignment operators return new values
one = num
num += 1
assert one == 1
assert num == 2

# note that once you have a variable of type BigUInt, you don't need to type any variables
# derived from that or wrap int literals
num2 = num + 200 // 3
```

### bool

The semantics of the AVM `bool` bounded type exactly match the semantics of Python’s built-in `bool` type, and thus Algorand Python uses the built-in `bool` type from Python.

Per the behaviour in normal Python, Algorand Python automatically converts various types to `bool` when they appear in statements that expect a `bool` (e.g. `if`/`while`/`assert` statements), appear in Boolean expressions (e.g. next to the `and` or `or` keywords), or are explicitly cast to a `bool`. The semantics of `not`, `and` and `or` are special (e.g. short-circuiting).

```python
a = UInt64(1)
b = UInt64(2)
c = a or b
d = b and a
e = self.expensive_op(UInt64(0)) or self.side_effecting_op(UInt64(1))
f = self.expensive_op(UInt64(3)) or self.side_effecting_op(UInt64(42))
g = self.side_effecting_op(UInt64(0)) and self.expensive_op(UInt64(42))
h = self.side_effecting_op(UInt64(2)) and self.expensive_op(UInt64(3))
i = a if b < c else d + e

if a:
    log("a is True")
```

### Account

`Account` represents a logical account, backed by a `bytes[32]` representing the bytes of the public key (without the checksum). It has various account-related methods that can be called from the type. Also see `arc4.Address` if needing to represent the address as a distinct type.

### Asset

`Asset` represents a logical asset, backed by a `uint64` ID. It has various asset-related methods that can be called from the type.

### Application

`Application` represents a logical application, backed by a `uint64` ID.
It has various application-related methods that can be called from the type.

## Python built-in types

Unfortunately, the AVM types don’t map to standard Python primitives. For instance, in Python, an `int` is signed and effectively unbounded. A `bytes` similarly is limited only by the memory available, whereas an AVM `bytes[]` has a maximum length of 4096.

In order to both maintain semantic compatibility and allow for a framework implementation in plain Python that will fail under the same conditions as when deployed to the AVM, support for Python primitives is limited. That said, there are many places where built-in Python types can be used, and over time the places these types can be used are expected to increase.

### bool

Algorand Python has full support for `bool`.

### tuple

Python tuples are supported as arguments to subroutines, local variables, and return types.

### typing.NamedTuple

Python named tuples are also supported, using `typing.NamedTuple`.

#### NOTE

Default field values and subclassing a `NamedTuple` are not supported.

```python
import typing

import algopy


class Pair(typing.NamedTuple):
    foo: algopy.Bytes
    bar: algopy.Bytes
```

### None

`None` is not supported as a value, but is supported as a type annotation to indicate a function or subroutine returns no value.

### int, str, bytes, float

The `int`, `str` and `bytes` built-in types are currently only supported as module-level constants or literals. They can be passed as arguments to various Algorand Python methods that support them, or used with certain operators, e.g. adding a number to a `UInt64`. `float` is not supported.

## Template variables

Template variables can be used to represent a placeholder for a deploy-time provided value. They can be declared using the `TemplateVar[TYPE]` type, where `TYPE` is the Algorand Python type that the value will be interpreted as.
```python
from algopy import BigUInt, Bytes, TemplateVar, UInt64, arc4
from algopy.arc4 import UInt512


class TemplateVariablesContract(arc4.ARC4Contract):
    @arc4.abimethod()
    def get_bytes(self) -> Bytes:
        return TemplateVar[Bytes]("SOME_BYTES")

    @arc4.abimethod()
    def get_big_uint(self) -> UInt512:
        x = TemplateVar[BigUInt]("SOME_BIG_UINT")
        return UInt512(x)

    @arc4.baremethod(allow_actions=["UpdateApplication"])
    def on_update(self) -> None:
        assert TemplateVar[bool]("UPDATABLE")

    @arc4.baremethod(allow_actions=["DeleteApplication"])
    def on_delete(self) -> None:
        assert TemplateVar[UInt64]("DELETABLE")
```

The resulting TEAL code that PuyaPy emits has placeholders named `TMPL_{template variable name}`, each of which expects either an integer value or an encoded bytes value to be substituted at deploy time.

## ARC-4 types

ARC-4 data types are a first class concept in Algorand Python. They can be passed into ARC-4 methods (which will translate to the relevant ARC-4 method signature), passed into subroutines, or instantiated into local variables.

A limited set of operations are exposed on some ARC-4 types, but often it may make sense to convert the ARC-4 value to a native AVM type, in which case you can use the `native` property to retrieve the value. Most of the ARC-4 types also allow for mutation, e.g. you can edit values in arrays by index.

## Type Validation

Most high-order types (i.e. not `UInt64` or `Bytes`) supported by Algorand Python exist as a single byte array value with a specific encoding. When reading one of these values from an untrusted source, it is important to validate the encoding of this value before using it.
For example, when expecting an `Account`, one should validate that there are exactly 32 bytes in the underlying value. PuyaPy automatically validates some value sources for you, whilst leaving others to be explicitly validated by the developer. You should always validate untrusted sources (such as ABI args from untrusted clients), but may wish to omit validation for performance/efficiency reasons for trusted sources (such as a global state value only your application accesses). For more detailed information on the impacts of type validation, refer to the developer portal.

### Validated Sources of Values

The following sources of ABI values are always validated by the compiler by default:

* ABI method arguments (when called externally)
* ABI return values
* `Bytes.to_fixed` (with the `assert-length` strategy)

**NOTE**: Argument validation can be disabled globally via the `--validate-abi-args` flag. Similarly, return value validation can be disabled via the `--validate-abi-return` flag.

It is also possible for a method implementation to disable validation for its own arguments via the `validate_encoding` option on the `abimethod` decorator. Per-method argument validation settings override the global compiler settings.

If one wishes to disable the return validation, you can parse the return value directly from the inner transaction’s last log and use an unsafe method (`.from_bytes`) for converting the bytes to the desired ABI type.

### Non-Validated Sources

There are certain places where one can get an ABI value that is not fully validated:

* Global state
* Local state
* Boxes
* Subroutine arguments
* Subroutine return values
* `from_bytes` methods on ABI types

There are no automatic validation steps taken for these values, because it is assumed that the value was validated by the compiler before reaching this point.
For example, if a method takes an ABI value as an argument and stores it in a box, the value is validated when taken as input from the method arguments, but not again when placed in the box. By default, all sources of ABI values other than those listed above do have ABI validation, so it would be inefficient to perform validation again every time the value is used.

It should be noted, however, that all of the validation the PuyaPy compiler performs automatically can be disabled on a per-method basis. This means it is theoretically possible for an incorrectly encoded value to come from one of the listed sources, but it will always be clear in the source code that this is the case. For example, given the following contract:

```py
class BoxReadWrite(ARC4Contract):
    def __init__(self) -> None:
        self.acct_box = Box(Account)

    @abimethod()
    def write_to_box(self, acct: Account) -> None:
        self.acct_box.value = acct

    @abimethod()
    def read_from_box(self) -> Account:
        return self.acct_box.value
```

One can be sure that the value in `acct_box` is always valid, because the only source of the value is an ABI argument (`acct` in `write_to_box`). If validation was disabled, however, then one cannot trust that it is properly encoded, and should perform a manual validation if required:

```py
class BoxReadWrite(ARC4Contract):
    def __init__(self) -> None:
        self.acct_box = Box(Account)

    @abimethod(validate_encoding="unsafe_disabled")
    def write_to_box(self, acct: Account) -> None:
        acct.validate()
        self.acct_box.value = acct

    @abimethod()
    def read_from_box(self) -> Account:
        return self.acct_box.value
```

Similarly, if the `Account` is constructed from bytes, a manual validation should be performed:

```py
def write_to_box(self, acct_bytes: Bytes) -> None:
    acct = Account.from_bytes(acct_bytes)
    acct.validate()
    self.acct_box.value = acct
```
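As a plain-Python illustration (outside the AVM), the manual validation in the examples above conceptually reduces to checking the encoded length of the value: an account is backed by a 32-byte Ed25519 public key, so any other length is invalid. The helper below is hypothetical and only sketches the idea:

```python
# Hypothetical sketch: what validating an ABI-encoded account value amounts to.
ACCOUNT_LENGTH = 32  # bytes in an Ed25519 public key


def validate_account_bytes(raw: bytes) -> bytes:
    """Return the value unchanged if it is a plausibly encoded account, else raise."""
    if len(raw) != ACCOUNT_LENGTH:
        raise ValueError(f"expected {ACCOUNT_LENGTH} bytes, got {len(raw)}")
    return raw


# a well-formed value passes through unchanged
acct = validate_account_bytes(b"\x00" * 32)
assert acct == b"\x00" * 32

# a truncated value is rejected rather than silently stored
rejected = False
try:
    validate_account_bytes(b"\x00" * 31)
except ValueError:
    rejected = True
assert rejected
```

On the AVM, a failed validation rejects the transaction outright rather than raising a catchable error, but the decision point is the same: check the encoding at the trust boundary, before the value is persisted.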
# Unsupported Python features
## raise, try/except/finally

Exception raising and exception handling constructs are not supported. Supporting user exceptions would be costly to implement in terms of op codes. Furthermore, AVM errors and exceptions are not “catch-able”; they immediately terminate the program. There is therefore little to no benefit in supporting exceptions and exception handling. The preferred method of raising an error that terminates execution is through the use of `assert` statements.

## with

Context managers are redundant without exception handling support.

## async

The AVM is not just single threaded; all operations are effectively “blocking”, rendering asynchronous programming effectively useless.

## closures & lambdas

Without support for function pointers, or other methods of invoking an arbitrary function, it’s not possible to return a function as a closure. Nested functions/lambdas as a means of repeating common operations within a given function may be supported in the future.

## global keyword

Module-level values are only allowed to be constants; no rebinding of module constants is allowed. It’s not clear what the meaning here would be, since there’s no real arbitrary means of storing state without associating it with a particular contract. If you do have need of such a thing, take a look at or if the contracts are within the same transaction, otherwise and .

## Inheritance (outside of contract classes)

Polymorphism is impossible to support without function pointers, so data classes don’t currently allow for inheritance. Member functions on data classes are not supported because we’re not sure yet whether it’s better to not have inheritance but allow functions on data classes, or to allow inheritance and disallow member functions.

Contract inheritance is a special case: since each concrete contract is compiled separately, true polymorphism isn’t required, as all references can be resolved at compile time.
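In standard Python terms, the guard-style error handling described above looks like the following sketch (plain Python, not algopy; the `transfer` function is hypothetical, and on the AVM a failed assertion rejects the whole transaction rather than raising a catchable exception):

```python
def transfer(balance: int, amount: int) -> int:
    # guard conditions up front; on the AVM a failed assert halts the program
    assert amount > 0, "amount must be positive"
    assert amount <= balance, "insufficient balance"
    return balance - amount


# a valid call proceeds normally
assert transfer(10, 3) == 7

# an invalid call trips the guard; there is no AVM equivalent of catching this
failed = False
try:
    transfer(1, 5)
except AssertionError:
    failed = True
assert failed
```

Because nothing can catch an AVM error, guards like these belong at the start of a method, before any state is read or written.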
# Algorand Python
Algorand Python is a partial implementation of the Python programming language that runs on the AVM. It includes a statically typed framework for development of Algorand smart contracts and logic signatures, with Pythonic interfaces to underlying AVM functionality that works with standard Python tooling.

Algorand Python is compiled for execution on the AVM by PuyaPy, an optimising compiler that ensures the resulting AVM bytecode has execution semantics that match the given Python code. PuyaPy produces output that is directly compatible with AlgoKit tooling to make deployment and calling easy.

## Quick start

The easiest way to use Algorand Python is to instantiate a template with AlgoKit via `algokit init -t python`. This will give you a full development environment with intellisense, linting, automatic formatting, breakpoint debugging, deployment and CI/CD.

Alternatively, if you want to start from scratch you can do the following:

1. Ensure you have Python 3.12+
2. Install AlgoKit
3. Check you can run the compiler:
   ```shell
   algokit compile py -h
   ```
4. Install Algorand Python into your project: `poetry add algorand-python`
5. Create a contract in a (e.g.) `contract.py` file:
   ```python
   from algopy import ARC4Contract, arc4


   class HelloWorldContract(ARC4Contract):
       @arc4.abimethod
       def hello(self, name: arc4.String) -> arc4.String:
           return "Hello, " + name
   ```
6. Compile the contract:
   ```shell
   algokit compile py contract.py
   ```
7. You should now have `HelloWorldContract.approval.teal` and `HelloWorldContract.clear.teal` on the file system!
8. We generally recommend using ARC-56 to have the most optimal deployment and consumption experience; PuyaPy produces an ARC-56 compatible app spec file by default:
   ```shell
   algokit compile py contract.py --no-output-teal
   ```
9. You should now have `HelloWorldContract.arc56.json`, which can be generated into a client, e.g. using AlgoKit CLI:
   ```shell
   algokit generate client HelloWorldContract.arc56.json --output client.py
   ```
From here you can dive into the rest of the documentation or look at the examples.

## Programming with Algorand Python

To get started developing with Algorand Python, please take a look at the language guide.

## Using the PuyaPy compiler

To see detailed guidance for using the PuyaPy compiler, please take a look at the compiler documentation.
# Principles & Background
## Background

**Smart contracts** on the Algorand blockchain run on the Algorand Virtual Machine (AVM). This is a stack-based virtual machine, which executes AVM bytecode as part of an Application Call transaction. The official mechanism for generating this bytecode is by submitting TEAL (Transaction Execution Approval Language) to an Algorand node to compile.

**Smart signatures** have the same basis in the AVM and TEAL, but have a different execution model, one not involving Application Call transactions. Our focus will primarily be on smart contracts, since they are strictly more powerful in terms of available AVM functions.

TEAL is a low-level, assembly-like language (albeit one with support for procedure calls that can isolate stack changes since v8 with `proto`). Writing TEAL is very similar to writing assembly code. It goes without saying that this is NOT a particularly common or well-practiced model for programming these days.

As it stands today, developers wanting to write smart contracts specifically for Algorand have the option of writing TEAL directly, or using some other mechanism of generating TEAL, such as the officially supported PyTEAL or the community supported Tealish.

PyTEAL follows an expression-building paradigm, which is a form of metaprogramming. Naturally, writing programs to generate programs presents an additional hurdle for developers looking to pick up smart contract development. Tooling support for this is also suboptimal: many classes of errors resulting from the interaction between the procedural elements of the Python language and the PyTEAL expression-building framework go unnoticed until the point of TEAL generation, or worse, go completely unnoticed; and even when PyTEAL does provide an error, it can be difficult to understand.

Tealish provides a higher level procedural language, bearing a passing resemblance to Python, that compiles down to TEAL. However, it’s still lower level than most developers are used to. For example, the expression `1 + 2 + 3` is not valid Tealish; nested binary operations must be explicitly parenthesised, e.g. `1 + (2 + 3)`.
In essence, Tealish abstracts away many difficulties with writing plain TEAL, but it is still essentially more of a transpiler than a compiler. Furthermore, whilst appearing to have syntax inspired by Python, it both adds and removes many fundamental syntax elements, presenting an additional learning curve to developers looking to learn blockchain development on Algorand. Being a bespoke language also means it has a much smaller ecosystem of tooling built around it compared to languages like Python or JavaScript.

To most developers, the Python programming language needs no introduction. First released in 1991, its popularity has grown steadily over the decades, and as of June 2023 it is consistently ranked as either the most popular language, or second most popular following JavaScript.

The AlgoKit project is an Algorand Foundation initiative to improve the developer experience on Algorand. Within this broad remit, two of the key principles are to “meet developers where they are” and “leverage existing ecosystem”. Building a compiler that allows developers to write smart contracts using an idiomatic subset of a high level language such as Python would make great strides towards both of these goals.

Wyvern was the original internal code name for just such a compiler (now called Puya), one that transforms Python code into valid TEAL smart contracts. In line with the principle of meeting developers where they are, and recognising the popularity of JavaScript and TypeScript, a parallel initiative to build a TypeScript to TEAL compiler is also underway.

## Principles

The principles listed here should form the basis of our decision-making, both in design and implementation.

### Least surprise

Our primary objective is to assist developers in creating accurate smart contracts right from the start.
The often immutable nature of these contracts - although not always the case - and the substantial financial value they frequently safeguard underline the importance of this goal.

This principle ensures that the code behaves as anticipated by the developer. Specifically, if you’re a Python developer writing Python smart contract code, you can expect the code to behave identically to its execution in a standard Python environment.

Furthermore, we believe in promoting explicitness and correctness in contract code and its associated typing. This approach reduces potential errors and enhances the overall integrity of our smart contracts. Our commitment is to provide a user-friendly platform that aligns with the developer’s intuition and experience, ultimately simplifying their work and minimizing the potential for mistakes.

### Inherited from AlgoKit

As a part of the AlgoKit project, the principles outlined there also apply, to the extent that this project is just one component of AlgoKit.

#### “Leverage existing ecosystem”

> AlgoKit functionality gets into the hands of Algorand developers quickly by building on top of the existing ecosystem wherever possible and aligned to these principles.

In order to leverage as much existing Python tooling as possible, we should strive to maintain the highest level of compatibility with the Python language (and the reference implementation: CPython).

#### “Meet developers where they are”

> Make Blockchain development mainstream by giving all developers an idiomatic development experience in the operating system, IDE and language they are comfortable with so they can dive in quickly and have less they need to learn before being productive.

Python is a very idiomatic language. We should embrace accepted patterns and practices as much as possible, such as those listed in PEP 20 (aka “The Zen of Python”).
#### “Extensible”

> Be extensible for community contribution rather than stifling innovation, bottle-necking all changes through the Algorand Foundation and preventing the opportunity for other ecosystems being represented (e.g. Go, Rust, etc.). This helps make developers feel welcome and is part of the developer experience, plus it makes it easier to add features sustainably

One way to support this principle in the broader AlgoKit context is by building in a mechanism for reusing common code between smart contracts, to allow the community to build their own Python packages.

#### “Sustainable”

> AlgoKit should be built in a flexible fashion with long-term maintenance in mind. Updates to latest patches in dependencies, Algorand protocol development updates, and community contributions and feedback will all feed in to the evolution of the software.

Taking this principle further, ensuring the compiler is well-designed (e.g. frontend/backend separation, with a well-thought-out IR) will help with maintaining and improving the implementation over time. For example, adding new TEAL language features will be easier, and the same goes for implementing new optimisation strategies.

Looking to the future, best practices for smart contract development are rapidly evolving. We shouldn’t tie the implementation too tightly to a current standard such as ARC-4 - although in that specific example we would still aim for first class support, it shouldn’t be assumed as the only way to write smart contracts.

#### “Modular components”

> Solution components should be modular and loosely coupled to facilitate efficient parallel development by small, effective teams, reduced architectural complexity and allowing developers to pick and choose the specific tools and capabilities they want to use based on their needs and what they are comfortable with.

We will focus on the language and compiler design itself.
An example of a very useful feature that is strongly related, but could be implemented separately, is the ability to run the user’s code in a unit-testing context, without compilation and deployment first. This would require implementing, in Python, some level of simulation of Algorand node / AVM behaviour.

#### “Secure by default”

> Include defaults, patterns and tooling that help developers write secure code and reduce the likelihood of security incidents in the Algorand ecosystem. This solution should help Algorand be the most secure Blockchain ecosystem.

Enforcing security (which is multi-faceted) at a compiler level is difficult, and in some cases impossible. The best application of this principle here is to support auditing, which is important and nuanced enough to be listed below as a separate principle.

#### “Cohesive developer tool suite” + “Seamless onramp”

> Cohesive developer tool suite: Using AlgoKit should feel professional and cohesive, like it was designed to work together, for the developer; not against them. Developers are guided towards delivering end-to-end, high quality outcomes on MainNet so they and Algorand are more likely to be successful.

> Seamless onramp: New developers have a seamless experience to get started and they are guided into a pit of success with best practices, supported by great training collateral; you should be able to go from nothing to debugging code in 5 minutes.

These principles relate more to AlgoKit as a whole, so we can respect them by considering the impacts of our decisions there more broadly.

### Abstraction without obfuscation

Algorand Python is a high level language, with support for things such as branching logic, operator precedence, etc., and is not a set of “macros” for generating TEAL. As such, developers will not be able to directly influence specific TEAL output; if this is desirable, a language such as Tealish is more appropriate.
Whilst this will abstract away certain aspects of the underlying TEAL language, there are certain AVM concerns (such as op code budgets) that should not be abstracted away. That said, we should strive to generate code that is cost-effective and unsurprising. Python mechanisms such as dynamic (runtime) dispatch, and many of the built-in functions on types such as `str` that are taken for granted, would require large amounts of ops compared to the Python code they represent.

### Support auditing

Auditing is a critical part of the security process for deploying smart contracts. We want to support this function, and can do so in two ways:

1. By ensuring the same Python code as input generates identical output each time the compiler is run, regardless of the system it’s running on. Ensuring a consistent output regardless of the system it’s run on (assuming the same compiler version) means that auditing the lower level (i.e. TEAL) code is possible.
2. Although auditing the TEAL code should be possible, being able to easily identify and relate it back to the higher level code can make auditing the contract logic simpler and easier.

### Revolution, not evolution

This is a new and groundbreaking way of developing for Algorand, not a continuation of the PyTEAL/Beaker approach. By allowing developers to write procedural code, as opposed to constructing an expression tree, we can (among other things) significantly reduce the barrier to entry for developing smart contracts on the Algorand platform.

Since the programming paradigm will be fundamentally different, providing a smooth migration experience from PyTEAL to this new world is not an intended goal, and shouldn’t be a factor in our decisions.
For example, it is not a goal of this project to produce a step-by-step “migrating from PyTEAL” document, as it is not a requirement for users to switch to this new paradigm in the short to medium term - support for PyTEAL should continue in parallel.
# Lora Overview
> Overview of Lora, a live on-chain resource analyzer for Algorand
AlgoKit lora is a live on-chain resource analyzer that enables developers to explore and interact with a configured Algorand network in a visual way.

## What is Lora?

AlgoKit lora is a powerful visual tool designed to streamline the Algorand local development experience. It acts as both a network explorer and a tool for building and testing your Algorand applications. You can access lora in your browser, or by running `algokit explore` when you have the AlgoKit CLI installed.

## Key features

* Explore blocks, transactions, transaction groups, assets, accounts and applications on LocalNet, TestNet or MainNet.
* Visualise and understand complex transactions and transaction groups with the visual transaction view.
* View blocks in real time as they are produced on the connected network.
* Monitor and inspect real-time transactions related to an asset, account, or application with the live transaction view.
* Review historical transactions related to an asset, account, or application through the historical transaction view.
* Access detailed asset information and metadata when the asset complies with one of the ASA ARCs.
* Connect to your Algorand wallet and perform context-specific actions.
* Fund an account on LocalNet or TestNet.
* Visually deploy, populate, simulate and call an app by uploading an ARC-4, ARC-32 or ARC-56 app spec via App lab.
* Craft, simulate and send transaction groups using the Transaction wizard.
* Seamless integration into the existing AlgoKit ecosystem.

## Why Did We Build Lora?

An explorer is an essential tool for making blockchain data accessible; it enables users to inspect and understand on-chain activities. Without such tools, it’s difficult to interpret data or gather the information and insights needed to fully harness the potential of the blockchain. It therefore makes sense to have a high quality, officially supported and fully open-source tool available to the community.
Before developing Lora, we evaluated the existing tools in the community, but none fully met our needs. As part of this evaluation we came up with several design goals:

* **Developer-Centric User Experience**: Offer a rich user experience tailored for developers, with support for LocalNet, TestNet, and MainNet.
* **Open Source**: Fully open source and actively maintained.
* **Operationally Simple**: Operate using algod and indexer directly, eliminating the need for additional setup, deployment, or maintenance.
* **Visualize Complexity**: Enable Algorand developers to understand complex transactions and transaction groups by visually representing them.
* **Contextual Linking**: Allow users to see live and historical transactions in the context of related accounts, assets, or applications.
* **Performant**: Ensure a fast and seamless experience by minimizing requests to upstream services and utilizing caching to prevent unnecessary data fetching. Whenever possible, ancillary data should be fetched just in time with minimal over-fetching.
* **Support the Learning Journey**: Assist developers in discovering and learning about the Algorand ecosystem.
* **Seamless Integration**: Use and integrate seamlessly with the existing AlgoKit tools and enhance their usefulness.
* **Local Installation**: Allow local installation alongside the AlgoKit CLI and your existing dev tools.
# AlgoKit Templates
> Overview of AlgoKit templates
AlgoKit offers a curated collection of production-ready and starter templates, streamlining front-end and smart contract development. These templates provide a comprehensive suite of pre-configured tools and integrations, from boilerplate React projects with Algorand wallet integration to smart contract projects for Python and TypeScript. This enables developers to prototype and deploy robust, production-ready applications rapidly. By leveraging AlgoKit templates, developers can significantly reduce setup time, ensure best practices in testing, compiling, and deploying smart contracts, and focus on building innovative blockchain solutions with confidence.

This page provides an overview of the official AlgoKit templates and guidance on creating and sharing your own custom templates, whether to better suit your needs or to contribute to the community.

## Official Templates

AlgoKit provides several official templates to cater to different development needs, including smart contract templates.

## How to initialize a template

**To initialize using the `algokit` CLI**:

1. Install AlgoKit and all the prerequisites mentioned in the installation guide.
2. Execute the command `algokit init`. This initiates an interactive wizard that assists in selecting the most appropriate template for your project requirements.

```shell
algokit init # This command will start an interactive wizard to select a template
```

**To initialize within GitHub Codespaces**:

1. Go to the repository.
2. Initiate a new codespace by selecting the `Create codespace on main` option. You can find this by clicking the `Code` button and then navigating to the `Codespaces` tab.
3. Upon codespace preparation, `algokit` will automatically start `LocalNet` and present a prompt with the next steps. Executing `algokit init` will initiate the interactive wizard.

## Algorand Python Smart Contract Template

This template provides a production-ready baseline for developing and deploying smart contracts.
To use it, install AlgoKit and then either pass `-t python` to `algokit init` or select the `python` template interactively.

```shell
algokit init -t python
# or
algokit init # and select the Smart Contracts & Python template
```

### Features

This template supports the following features:

* Compilation of multiple Algorand Python contracts to a predictable folder location and file layout where they can be deployed
* Deploy-time immutability and permanence control
* Python dependency management and virtual environment management
* Linting
* Formatting
* Type checking
* Testing via pytest (not yet used)
* Dependency vulnerability scanning via pip-audit (not yet used)
* VS Code configuration (linting, formatting, breakpoint debugging)
* dotenv (.env) file for configuration
* Automated testing of the compiled smart contracts, including tests of the TEAL output
* CI/CD pipeline using GitHub Actions, with optional deployments to Netlify or Vercel

### Getting started

Once the template is instantiated, you can follow the `README.md` file for instructions on how to use the template.

## Algorand TypeScript Smart Contract Template

This template provides a baseline TealScript smart contract development environment. To use it, install AlgoKit and then either pass `-t tealscript` to `algokit init` or select the `TypeScript` language option interactively during `algokit init`.

```shell
algokit init -t tealscript
# or
algokit init # and select the Smart Contracts & TypeScript template
```

### Getting started

Once the template is instantiated, you can follow the `README.md` file for instructions on how to use it.

## DApp Frontend React Template

This template provides a baseline React web app for developing and integrating with any compliant Algorand smart contracts. To use it, install AlgoKit and then either pass `-t react` to `algokit init` or select the `react` template interactively during `algokit init`.
```shell
algokit init -t react
# or
algokit init # and select the DApp Frontend template
```

### Features

This template supports the following features:

* React web app with TypeScript and Tailwind CSS
* Styled, framework-agnostic CSS components
* Starter Jest unit tests for TypeScript functions (can be turned off if not needed)
* Starter end-to-end tests (can be turned off if not needed)
* Wallet integration for connecting to Algorand wallets such as Pera, Defly, and Exodus
* Example of performing a transaction
* Dotenv support for environment variables and a local-only KMD provider that can connect the frontend component to an `algokit localnet` instance (Docker required)
* CI/CD pipeline using GitHub Actions (Vercel or Netlify for hosting)

### Getting started

Once the template is instantiated, you can follow the `README.md` file to see instructions on how to use the template.

## Fullstack (Smart Contract + Frontend) Template

This full-stack template provides both a baseline React web app and a production-ready baseline for developing and deploying `Algorand Python` and `TypeScript` smart contracts. It's suitable for developing and integrating with any compliant Algorand smart contracts. To use this template, install AlgoKit and then either pass `-t fullstack` to `algokit init` or select the relevant template interactively during `algokit init`.

```shell
algokit init -t fullstack
# or
algokit init # and select the Smart Contracts & DApp Frontend template
```

### Features

This template supports many features for developing full-stack applications using official AlgoKit templates. The full-stack template currently creates a workspace that combines the following frontend template:

* A React web app with TypeScript, Tailwind CSS, and all Algorand-specific integrations pre-configured and ready for you to build.

And the following backend templates:

* An official starter for developing and deploying Algorand Python smart contracts.
* An official starter for developing and deploying TealScript smart contracts.

Initializing a fullstack AlgoKit project will create an AlgoKit workspace with a frontend React web app and an Algorand smart contract project inside the `projects` folder:

* .algokit.toml
* README.md
* {your\_workspace/project\_name}.code-workspace
# Project Structure
> Learn about the different types of AlgoKit projects and how to create them.
AlgoKit streamlines configuring components for development, testing, and deploying smart contracts to the blockchain, and effortlessly sets up a project with all the necessary components. In this guide, we'll explore what an AlgoKit project is and how you can use it to kickstart your own Algorand project.

## What is an AlgoKit Project?

In the context of AlgoKit, a "project" refers to a structured standalone or monorepo workspace that includes all the necessary components for developing, testing, and deploying Algorand applications, such as smart contracts, frontend applications, and any associated configurations.

## Two Types of AlgoKit Projects

AlgoKit supports two main types of project structures: workspaces and standalone projects. This flexibility caters to the diverse needs of developers, whether managing multiple related projects or focusing on a single application.

* **Monorepo Workspace**: Ideal for complex applications comprising multiple subprojects. It facilitates the organized management of these subprojects under a single root directory, streamlining dependency management and shared configurations.
* **Standalone Project**: Suitable for simpler applications or when working on a single component. It offers straightforward project management, with each project residing in its own directory, independent of others.

## AlgoKit Monorepo Workspace

Workspaces are designed to manage multiple related projects under a single root directory. This approach benefits complex applications with multiple sub-projects, such as a smart contract and a corresponding frontend application. Workspaces help organize these sub-projects in a structured manner, making dependencies and shared configurations easier to manage.
Simply put, workspaces contain multiple AlgoKit standalone project folders within the `projects` folder and manage them from a single root directory:

* .algokit.toml
* README.md
* {your\_workspace/project\_name}.code-workspace

### Creating an AlgoKit Monorepo Workspace

To create an AlgoKit monorepo workspace, run the following command:

```shell
algokit init # Creates a workspace by default
# or
algokit init --workspace
```

### Adding a Sub-Project to an AlgoKit Workspace

Once established, new projects can be added to the workspace, allowing centralized management. To add another sub-project, run the following command at the root directory of the related AlgoKit workspace:

```shell
algokit init
```

### Marking a Project as a Workspace

To mark your project as a workspace, fill in the following in your `.algokit.toml` file:

```toml
[project]
type = 'workspace' # type specifying if the project is a workspace or standalone
projects_root_path = 'projects' # path to the root folder containing all sub-projects in the workspace
```

### VSCode optimizations

AlgoKit has a set of minor optimizations for VSCode users that are useful to be aware of:

* Templates created with the `--workspace` flag automatically include a VSCode code-workspace file. New projects added to an AlgoKit workspace are also integrated into an existing VSCode workspace.
* Using the `--ide` flag with `init` triggers automatic prompts to open the project and, if available, the code workspace in VSCode.

### Handling of the .github Folder

A key aspect of using the `--workspace` flag is how the `.github` folder is managed. This folder, which contains GitHub-specific configurations such as workflows and issue templates, is moved from the project directory to the root of the workspace. This move is necessary because GitHub does not recognize workflows located in subdirectories. Here's a simplified overview of what happens:

1. If a `.github` folder is found in your project, its contents are transferred to the workspace's root `.github` folder.
2. Files with matching names in the destination are not overwritten; they're skipped.
3. The original `.github` folder is removed if left empty after the move.
4. A notification is displayed advising you to review the moved `.github` contents to ensure everything is in order.

This process ensures that your GitHub configurations are appropriately recognized at the workspace level, allowing you to utilize GitHub Actions and other features seamlessly across your projects.

## Standalone Projects

Standalone projects are suitable for more straightforward applications or when working on a single component. This structure is straightforward, with each project residing in its own directory, independent of others. Standalone projects are ideal for developers who prefer simplicity, focus on a single aspect of their application, and are sure they will not need to add more sub-projects in the future.

### Creating a Standalone Project

To create a standalone project, use the `--no-workspace` flag during initialization:

```shell
algokit init --no-workspace
```

This instructs AlgoKit to bypass the workspace structure and set up the project as an isolated entity.

### Marking a Project as a Standalone Project

To mark your project as a standalone project, fill in the following in your `.algokit.toml` file:

```toml
[project]
type = {'backend' | 'contract' | 'frontend'} # currently supports 3 generic categories for standalone projects
name = 'my-project' # unique name for the project inside the workspace
```

Both workspaces and standalone projects are fully supported by AlgoKit's suite of tools, ensuring developers can choose the structure that best fits their workflow without compromising on functionality.
# Algorand transaction subscription / indexing
## Quick start

```{testcode}
# Import necessary modules
from algokit_subscriber import AlgorandSubscriber
from algokit_utils import get_algod_client, get_algonode_config

# Create an Algod client (TestNet used for demo purposes)
algod_client = get_algod_client(get_algonode_config("testnet", "algod", ""))

# Create the subscriber (example with filters)
subscriber = AlgorandSubscriber(
    config={
        "filters": [
            {
                "name": "filter1",
                "filter": {
                    "type": "pay",
                    "sender": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ",
                },
            },
        ],
        "watermark_persistence": {
            "get": lambda: 0,
            "set": lambda x: None
        },
        "sync_behaviour": "skip-sync-newest",
        "max_rounds_to_sync": 100,
    },
    algod_client=algod_client,
)

# Set up subscription(s)
subscriber.on("filter1", lambda transaction, _: print(f"Received transaction: {transaction['id']}"))

# Set up error handling
subscriber.on_error(lambda error, _: print(f"Error occurred: {error}"))

# Either: Start the subscriber (if in a long-running process)
# subscriber.start()

# OR: Poll the subscriber once (if in a cron job / periodic lambda)
result = subscriber.poll_once()
print(f"Polled {len(result['subscribed_transactions'])} transactions")
```

```{testoutput}
Polled 0 transactions
```

## Capabilities

### Notification *and* indexing

This library supports staying at the tip of the chain to power notification / alerting scenarios through the `sync_behaviour` parameter. For example, to stay at the tip of the chain for notification/alerting scenarios you could do:

```python
subscriber = AlgorandSubscriber({"sync_behaviour": 'skip-sync-newest', "max_rounds_to_sync": 100, ...}, ...)
# or:
get_subscribed_transactions({"sync_behaviour": "skip-sync-newest", "max_rounds_to_sync": 100, ...}, ...)
```

The `current_round` parameter (available when calling `get_subscribed_transactions`) can be used to set the tip of the chain. If not specified, the tip will be automatically detected.
Whilst this is generally not needed, it is useful in scenarios where the tip is being detected as part of another process and you only want to sync to that point and no further.

The `max_rounds_to_sync` parameter controls how many rounds are processed when first starting, when the subscriber is not yet caught up to the tip of the chain. While it is caught up, it will keep processing as many rounds as are available from the last round it processed to when it next tries to sync (see below for how to control that).

If you expect your service to stay running resiliently and never fall more than `max_rounds_to_sync` behind the tip of the chain, and you consider it a problem if it processes old records, you can set the `sync_behaviour` parameter to `fail` so that it throws an error when it loses track of the tip of the chain rather than continuing or skipping to the newest rounds.

The `sync_behaviour` parameter can also be set to `sync-oldest-start-now` if you want to process all transactions from the moment you start alerting/notifying. This requires that your service keeps running, otherwise it could fall behind and start processing old records or take a while to catch back up with the tip of the chain. This is also a useful setting if you are creating an indexer that only needs to process from the moment the indexer is deployed rather than from the beginning of the chain. Note: this requires the watermark to start at 0 to work.

The `sync_behaviour` parameter can also be set to `sync-oldest`, which is a more traditional indexing scenario where you want to process every single block from the beginning of the chain. This can take a long time to process by default (e.g. days). If you don't want to start from the beginning of the chain, you can set the watermark to a round number higher than 0 to start indexing from that point.
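As a rough mental model, the behaviours above differ mainly in what happens when the watermark has fallen more than `max_rounds_to_sync` behind the tip. The following is a conceptual sketch of those documented semantics, not the library's internal code:

```python
def first_round_to_sync(behaviour: str, watermark: int, current_round: int,
                        max_rounds_to_sync: int) -> int:
    """Sketch of which round each sync_behaviour resumes from; illustrative only."""
    behind = current_round - watermark
    if behind <= max_rounds_to_sync:
        return watermark + 1  # close enough to the tip: continue from the watermark
    if behaviour == "skip-sync-newest":
        return current_round - max_rounds_to_sync + 1  # skip ahead to the newest rounds
    if behaviour == "fail":
        raise RuntimeError(f"{behind} rounds behind the tip; refusing to continue")
    if behaviour == "sync-oldest":
        return watermark + 1  # grind forward from the watermark, however far behind
    raise ValueError(f"unhandled sync behaviour: {behaviour}")


# Far behind the tip, skip-sync-newest jumps ahead...
print(first_round_to_sync("skip-sync-newest", 0, 10_000, 100))  # 9901
# ...whereas sync-oldest processes everything from the watermark onwards
print(first_round_to_sync("sync-oldest", 0, 10_000, 100))  # 1
```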
### Low latency processing

You can control the polling semantics of the library when using the subscriber by either specifying the `frequency_in_seconds` parameter, to control the duration between polls, or using the `wait_for_block_when_at_tip` parameter, to indicate the subscriber should wait via algod for the next round to be produced so it can immediately process that round with much lower latency. When this mode is set, the subscriber intelligently uses this option only when it's caught up to the tip of the chain, and otherwise uses `frequency_in_seconds` while catching up. e.g.

```python
# When catching up to the tip of the chain the subscriber will poll every 1s for the next 1000 blocks,
# but when caught up it will wait for algod to produce a new block so it can be processed immediately with low latency
subscriber = AlgorandSubscriber(config={
    "frequency_in_seconds": 1,
    "wait_for_block_when_at_tip": True,
    "max_rounds_to_sync": 1000,
    # ... other configuration options
}, ...)
...
subscriber.start()
```

If you are using `get_subscribed_transactions` or the `poll_once` method on `AlgorandSubscriber`, then you can use your infrastructure and/or surrounding orchestration code to take control of the polling duration.

If you want to manually run code that waits for a given round to become available, you can execute the following algosdk code:

```python
algod.status_after_block(round_number_to_wait_for)
```

### Watermarking and resilience

You can create reliable syncing / indexing services through a simple round watermarking capability that allows those services to recover from an outage. This works through the `watermark_persistence` parameter of `AlgorandSubscriber` and the `watermark` parameter of `get_subscribed_transactions`:

```python
def get_saved_watermark() -> int:
    # Return the watermark from a persistence store e.g. database, redis, file system, etc.
    pass

def save_watermark(new_watermark: int) -> None:
    # Save the watermark to a persistence store e.g. database, redis, file system, etc.
    pass

...

subscriber = AlgorandSubscriber({
    "watermark_persistence": {
        "get": get_saved_watermark,
        "set": save_watermark
    },
    # ... other configuration options
}, ...)

# or:
watermark = get_saved_watermark()
result = get_subscribed_transactions(watermark=watermark, ...)
save_watermark(result["new_watermark"])
```

By using a persistence store, you can gracefully respond to an outage of your subscriber. The next time it starts, it will pick back up from the point where it last persisted. It's worth noting this provides at-least-once delivery semantics, so you need to handle duplicate events.

Alternatively, if you want at-most-once delivery semantics, you could wrap a unit of work from an ACID persistence store (e.g. a SQL database with a serializable or repeatable read transaction) around the watermark retrieval, transaction processing and watermark persistence, so the processing of transactions and watermarking of a single poll happens in a single atomic transaction. In this model, you would then process the transactions in a separate process from the persistence store (and likely have a flag on each transaction to indicate whether it has been processed or not). You would need to be careful to ensure that you only have one subscriber actively running at a time to guarantee this delivery semantic. To ensure resilience you may want to have multiple subscribers running, but a primary node that actually executes based on retrieval of a distributed semaphore / lease.

If you are doing a quick test or creating an ephemeral subscriber that just needs to exist in-memory and doesn't need to recover resiliently (useful with a `sync_behaviour` of `skip-sync-newest` for instance), then you can use an in-memory variable instead of a persistence store, e.g.:

```python
watermark = 0

subscriber = AlgorandSubscriber(
    config={
        "watermark_persistence": {
            "get": lambda: watermark,
            "set": lambda new_watermark: globals().update(watermark=new_watermark)
        },
        # ... other configuration options
    },
    # ... other arguments
)

# or:
watermark = 0
result = get_subscribed_transactions(watermark=watermark, ...)
watermark = result["new_watermark"]
```

### Extensive subscription filtering

This library has extensive filtering options so you can have fine-grained control over which transactions you are interested in. A core filter type is used to specify the filters:

```python
subscriber = AlgorandSubscriber(config={'filters': [{'name': 'filterName', 'filter': {...}}], ...}, ...)
# or:
get_subscribed_transactions(filters=[{'name': 'filterName', 'filter': {...}}], ...)
```

Currently this allows you to filter based on any combination (AND logic) of:

* Transaction type, e.g. `'filter': {'type': 'axfer'}` or `'filter': {'type': ['axfer', 'pay']}`
* Account (sender and receiver), e.g. `'filter': {'sender': 'ABCDE..F'}` or `'filter': {'sender': ['ABCDE..F', 'ZYXWV..A']}`, and `'filter': {'receiver': '12345..6'}` or `'filter': {'receiver': ['ABCDE..F', 'ZYXWV..A']}`
* Note prefix, e.g. `'filter': {'note_prefix': 'xyz'}`
* Apps
  * ID, e.g. `'filter': {'app_id': 54321}` or `'filter': {'app_id': [54321, 12345]}`
  * Creation, e.g. `'filter': {'app_create': True}`
  * Call on-complete(s), e.g. `'filter': {'app_on_complete': 'optin'}` or `'filter': {'app_on_complete': ['optin', 'noop']}`
  * ARC-4 method signature(s), e.g. `'filter': {'method_signature': 'MyMethod(uint64,string)'}` or `'filter': {'method_signature': ['MyMethod(uint64,string)uint64', 'MyMethod2(uint64)']}`
  * Call arguments, e.g.
    ```python
    'filter': {
        'app_call_arguments_match': lambda app_call_arguments: len(app_call_arguments) > 1
            and app_call_arguments[1].decode('utf-8') == 'hello_world'
    }
    ```
  * Emitted ARC-28 event(s), e.g.
    ```python
    'filter': {
        'arc28_events': [{'group_name': 'group1', 'event_name': 'MyEvent'}]
    }
    ```
    Note: for this to work you need to define the corresponding ARC-28 event group (see below).
* Assets
  * ID, e.g. `'filter': {'asset_id': 123456}` or `'filter': {'asset_id': [123456, 456789]}`
  * Creation, e.g. `'filter': {'asset_create': True}`
  * Amount transferred (min and/or max), e.g. `'filter': {'type': 'axfer', 'min_amount': 1, 'max_amount': 100}`
  * Balance changes (asset ID, sender, receiver, close to, min and/or max change), e.g. `'filter': {'balance_changes': [{'asset_id': [15345, 36234], 'roles': [BalanceChangeRole.Sender], 'address': 'ABC...', 'min_amount': 1, 'max_amount': 2}]}`
* Algo transfers (pay transactions)
  * Amount transferred (min and/or max), e.g. `'filter': {'type': 'pay', 'min_amount': 1, 'max_amount': 100}`
  * Balance changes (sender, receiver, close to, min and/or max change), e.g. `'filter': {'balance_changes': [{'roles': [BalanceChangeRole.Sender], 'address': 'ABC...', 'min_amount': 1, 'max_amount': 2}]}`

You can supply multiple, named filters. When subscribed transactions are returned, each transaction will have a `filters_matched` property with an array of any filter(s) that caused that transaction to be returned. When using `AlgorandSubscriber`, you can subscribe to events that are emitted with the filter name.

### ARC-28 event subscription and reads

You can subscribe to ARC-28 events for a smart contract, and you can receive any ARC-28 events that a smart contract call you subscribe to emitted. Both subscription and receiving ARC-28 events work through the `arc28_events` parameter:

```python
group1_events = {
    "group_name": "group1",
    "events": [
        {
            "name": "MyEvent",
            "args": [
                {"type": "uint64"},
                {"type": "string"},
            ]
        }
    ]
}

subscriber = AlgorandSubscriber(arc28_events=[group1_events], ...)
# or:
result = get_subscribed_transactions(arc28_events=[group1_events], ...)
```

The `Arc28EventGroup` type has the following definition:

```python
class Arc28EventGroup(TypedDict):
    """
    Specifies a group of ARC-28 event definitions along with instructions
    for when to attempt to process the events.
    """

    group_name: str
    """The name to designate for this group of events."""

    process_for_app_ids: list[int]
    """Optional list of app IDs that this event should apply to."""

    process_transaction: NotRequired[Callable[[TransactionResult], bool]]
    """Optional predicate to indicate if these ARC-28 events should be processed for the given transaction."""

    continue_on_error: bool
    """Whether or not to silently (with warning log) continue if an error is encountered processing the ARC-28 event data; default = False."""

    events: list[Arc28Event]
    """The list of ARC-28 event definitions."""

class Arc28Event(TypedDict):
    """
    The definition of metadata for an ARC-28 event as per the ARC-28 specification.
    """

    name: str
    """The name of the event"""

    desc: NotRequired[str]
    """An optional, user-friendly description for the event"""

    args: list[Arc28EventArg]
    """The arguments of the event, in order"""
```

Each group allows you to apply logic to the applicability and processing of a set of events. This structure allows you to safely process the events from multiple contracts in the same subscriber, or to apply more advanced filtering logic to event processing. When specifying an event filter, you specify both the `group_name` and `event_name`(s) to narrow down which event(s) you want to subscribe to. If you want to emit an ARC-28 event from your smart contract, see the "Emit ARC-28 events" section below.

### First-class inner transaction support

When you subscribe to transactions, any subscription that covers an inner transaction will pick up that inner transaction and return it to you correctly. Note: the behaviour of Algorand Indexer is to return the parent transaction, not the inner transaction; this library will always return the actual transaction you subscribed to. If you receive an inner transaction, there will be a `parent_transaction_id` field populated, which allows you to see that it was an inner transaction and identify its parent transaction.
The `id` of an inner transaction will be set to `{parent_transaction_id}/inner/{index-of-child-within-parent}`, where `{index-of-child-within-parent}` is calculated by uniquely walking the tree of potentially nested inner transactions (this library uses the same approach as other Algorand tooling for allocating inner transaction indexes). All transactions will have an `inner-txns` property with any inner transactions of that transaction populated (recursively).

The `intra-round-offset` field of a subscribed transaction is calculated by walking the full tree depth-first from the first transaction in the block, through any inner transactions recursively, starting from an index of 0. This algorithm matches the one in Algorand Indexer and ensures that all transactions have a unique index, but the top-level transactions in the block don't necessarily have sequential indexes.

### State-proof support

You can subscribe to state proof transactions using this subscriber library. At the time of writing, state proof transactions are not supported by algosdk v2, and custom handling has been added to ensure this valuable type of transaction can be subscribed to, with comprehensive field-level documentation of the state proof transaction model.

### Simple programming model

This library is easy to use and consume; subscribed transactions have a type with all relevant/useful information about the transaction (including things like transaction ID, round number, created asset/app ID, app logs, etc.) modelled on the indexer data model, which is used regardless of whether the transactions come from indexer or algod, so it's a consistent experience.

### Easy to deploy

Because the entry points of this library are simple Python methods, to execute it you simply need to run it in a valid Python execution environment. For instance, you could run it within a web service process if you want a user-facing app to show real-time transaction notifications, or in a Python process running in the myriad of ways Python can be run. Because of that, you have full control over how you want to deploy and use the subscriber; it will work with whatever persistence (e.g. SQL, NoSQL, etc.), queuing/messaging (e.g. queues, topics, buses, webhooks, WebSockets) and compute (e.g. serverless periodic lambdas, continually running containers, virtual machines, etc.) services you want to use.

### Fast initial index

When subscribing for the purposes of building an index, you will often want to start at the beginning of the chain, or a substantial time in the past when the solution you are subscribing for started. This kind of catch-up takes days to process, since algod only lets you retrieve a single block at a time and retrieving a block takes 0.5-1s. Given there are millions of blocks on MainNet, it doesn't take long to do the math to see why it takes so long to catch up.

This subscriber library has a unique, optional indexer catch-up mode that allows you to use indexer to catch up to the tip of the chain in seconds or minutes rather than days for your specific filter. This is really handy when you are doing local development or spinning up a new environment and don't want to wait for days.

To make use of this feature, set the `sync_behaviour` config to `catchup-with-indexer` and ensure that you pass an `indexer` client in along with `algod`. Any filters you apply will be seamlessly translated to indexer searches to get the historic transactions in the most efficient way possible based on the APIs indexer exposes. Once the subscriber is within `max_rounds_to_sync` of the tip of the chain, it will switch to subscribing using `algod`.

To see this in action, you can run the Data History Museum example in this repository against MainNet and see it sync millions of rounds in seconds.
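Putting the pieces together, a catch-up configuration might look like the sketch below. The config keys follow the examples in this document; the asset ID is made up, and the commented-out constructor argument names are assumptions rather than confirmed API:

```python
# Assumed config shape, following the AlgorandSubscriber examples in this document
watermark = 0

config = {
    "filters": [
        # Hypothetical filter: transfers of a made-up asset ID
        {"name": "my-asset", "filter": {"type": "axfer", "asset_id": 123456}},
    ],
    "watermark_persistence": {
        "get": lambda: watermark,
        "set": lambda new: globals().update(watermark=new),
    },
    # Use indexer to catch up quickly, then hand over to algod once within
    # max_rounds_to_sync of the tip of the chain
    "sync_behaviour": "catchup-with-indexer",
    "max_rounds_to_sync": 100,
}

# Hypothetical construction (client parameter names are an assumption):
# subscriber = AlgorandSubscriber(config=config, algod_client=algod_client, indexer_client=indexer_client)
# subscriber.start()
print(config["sync_behaviour"])  # catchup-with-indexer
```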
The indexer catch-up isn't magic: if the filter you are trying to catch up with generates an enormous number of transactions (e.g. hundreds of thousands or millions), it will run very slowly and has the potential to run out of compute and memory, depending on the constraints of the deployment environment you are running in. In that instance, there is a config parameter you can use, `max_indexer_rounds_to_sync`, that lets you break the indexer catch-up into multiple "polls", e.g. 100,000 rounds at a time. This allows a smaller batch of transactions to be retrieved and persisted in multiple batches.

To understand whether you are likely to generate a lot of transactions, it's worth understanding the architecture of the indexer catch-up, which runs in two stages:

1. **Pre-filtering**: Any filters that can be translated to an indexer transaction search are. This query is then run between the rounds that need to be synced and paginated in the max number of results (1000) at a time until all of the transactions are retrieved. This ensures round-based transactional consistency. This is the filter that can easily explode out and take a long time when using indexer catch-up. For the avoidance of doubt, the following filters are the ones converted to a pre-filter:
   * `sender` (single value)
   * `receiver` (single value)
   * `type` (single value)
   * `note_prefix`
   * `app_id` (single value)
   * `asset_id` (single value)
   * `min_amount` (when `type = pay` or an `asset_id` is provided)
   * `max_amount` (when `max_amount` is below the maximum safe integer value, and `type = pay` or an `asset_id` is provided with `min_amount > 0`)
2. **Post-filtering**: All remaining filters are then applied in-memory to the resulting list of transactions returned from the pre-filter, before being returned as subscribed transactions.

## Entry points

There are two entry points into the subscriber functionality.
The lower level `get_subscribed_transactions` method contains the raw subscription logic for a single “poll”, and the `AlgorandSubscriber` class provides a higher level interface that is easier to use and takes care of a lot more orchestration logic for you (particularly around the ability to continuously poll). Both are first-class supported ways of using this library, but we generally recommend starting with the `AlgorandSubscriber` since it’s easier to use and will cover the majority of use cases.

## Reference docs

## Emit ARC-28 events

To emit ARC-28 events from your smart contract you can use the following syntax.

### Algorand Python

```python
@arc4.abimethod
def emit_swapped(self, a: arc4.UInt64, b: arc4.UInt64) -> None:
    arc4.emit("MyEvent", a, b)
```

OR:

```python
class MyEvent(arc4.Struct):
    a: arc4.String
    b: arc4.UInt64

# ...

@arc4.abimethod
def emit_swapped(self, a: arc4.String, b: arc4.UInt64) -> None:
    arc4.emit(MyEvent(a, b))
```

### TealScript

```typescript
MyEvent = new EventLogger<{
  stringField: string;
  intField: uint64;
}>();

// ...

this.MyEvent.log({
  stringField: 'a',
  intField: 2,
});
```

### PyTEAL

```python
class MyEvent(pt.abi.NamedTuple):
    stringField: pt.abi.Field[pt.abi.String]
    intField: pt.abi.Field[pt.abi.Uint64]

# ...

@app.external()
def myMethod(a: pt.abi.String, b: pt.abi.Uint64) -> pt.Expr:
    # ...
    return pt.Seq(
        # ...
        (event := MyEvent()).set(a, b),
        pt.Log(pt.Concat(pt.MethodSignature("MyEvent(byte[],uint64)"), event._stored_value.load())),
        pt.Approve(),
    )
```

Note: if your event doesn’t have any dynamic ARC-4 types in it then you can simplify that to something like this:

```python
pt.Log(pt.Concat(pt.MethodSignature("MyEvent(byte[],uint64)"), a.get(), pt.Itob(b.get()))),
```

### TEAL

```teal
method "MyEvent(byte[],uint64)"
frame_dig 0 // or any other command to put the ARC-4 encoded bytes for the event on the stack
concat
log
```

## Next steps

To dig deeper into the capabilities of `algokit-subscriber`, continue with the following sections.
```{toctree}
---
maxdepth: 2
caption: Contents
hidden: true
---

subscriber
subscriptions
api
```
# AlgorandSubscriber
`AlgorandSubscriber` is a class that allows you to easily subscribe to the Algorand Blockchain, define a series of events that you are interested in, and react to those events.

## Creating a subscriber

To create an `AlgorandSubscriber` you can use the constructor:

```python
class AlgorandSubscriber:
    def __init__(self, config: AlgorandSubscriberConfig, algod_client: AlgodClient, indexer_client: IndexerClient | None = None):
        """
        Create a new `AlgorandSubscriber`.
        :param config: The subscriber configuration
        :param algod_client: An algod client
        :param indexer_client: An (optional) indexer client; only needed if `subscription.sync_behaviour` is `catchup-with-indexer`
        """
```

`watermark_persistence` allows you to ensure reliability against outages in your code, since you can persist the last block your code processed up to and then provide it again the next time your code runs. `max_rounds_to_sync` and `sync_behaviour` allow you to control the subscription semantics as your code falls behind the tip of the chain (either on first run or after an outage). `frequency_in_seconds` allows you to control the polling frequency, and by association your latency tolerance for new events once you’ve caught up to the tip of the chain. Alternatively, you can set `wait_for_block_when_at_tip` to get the subscriber to ask algod to tell it when there is a new block ready, reducing latency when it’s caught up to the tip of the chain. `arc28_events` are any ARC-28 event definitions to process from app call logs.
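As a minimal sketch of `watermark_persistence`, the two callables can be backed by any durable store; the file path and helper names below are illustrative assumptions, not part of the library:

```python
# A minimal watermark_persistence sketch backed by a local file; the path and
# helper names are illustrative - any durable store (database, redis, etc.) works.
import tempfile
from pathlib import Path

WATERMARK_FILE = Path(tempfile.gettempdir()) / "subscriber_watermark.txt"

def get_watermark() -> int:
    # First run: no file yet, so start from round 0
    if WATERMARK_FILE.exists():
        return int(WATERMARK_FILE.read_text())
    return 0

def set_watermark(new_watermark: int) -> None:
    # Persist only after the poll's transactions have been processed
    WATERMARK_FILE.write_text(str(new_watermark))

# Then pass into the subscriber config as:
# 'watermark_persistence': {'get': get_watermark, 'set': set_watermark}
```

Because the watermark is only advanced after processing, a crash between processing and persisting results in reprocessing (at-least-once delivery) rather than lost events.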
Filters define the different subscription(s) you want to make, and are defined by the following interface:

```python
class NamedTransactionFilter(TypedDict):
    """Specify a named filter to apply to find transactions of interest."""

    name: str
    """The name to give the filter."""

    filter: TransactionFilter
    """The filter itself."""

class SubscriberConfigFilter(NamedTransactionFilter):
    """A single event to subscribe to / emit."""

    mapper: NotRequired[Callable[[list['SubscribedTransaction']], list[Any]]]
    """
    An optional data mapper if you want the event data to take a certain shape when subscribing to events with this filter name.
    """
```

The event name is a unique name that describes the event you are subscribing to. The filter defines how to interpret transactions on the chain as being “collected” by that event, and the mapper gives you an optional ability to map from the raw transaction to a more targeted type for your event subscribers to consume.

## Subscribing to events

Once you have created the `AlgorandSubscriber`, you can register handlers/listeners for the filters you have defined, or for each poll as a whole batch. You can do this via the `on`, `on_batch` and `on_poll` methods:

```python
def on(self, filter_name: str, listener: EventListener) -> 'AlgorandSubscriber':
    """
    Register an event handler to run on every subscribed transaction matching the given filter name.
    """

def on_batch(self, filter_name: str, listener: EventListener) -> 'AlgorandSubscriber':
    """
    Register an event handler to run on all subscribed transactions matching the given filter name for each subscription poll.
    """

def on_before_poll(self, listener: EventListener) -> 'AlgorandSubscriber':
    """
    Register an event handler to run before each subscription poll.
    """

def on_poll(self, listener: EventListener) -> 'AlgorandSubscriber':
    """
    Register an event handler to run after each subscription poll.
""" def on_error(self, listener: EventListener) -> 'AlgorandSubscriber': """ Register an event handler to run when an error occurs. """ ``` The `EventListener` type is defined as: ```python EventListener = Callable[[SubscribedTransaction, str], None] """ A function that takes a SubscribedTransaction and the event name. """ ``` When you define an event listener it will be called, one-by-one in the order the registrations occur. If you call `on_batch` it will be called first, with the full set of transactions that were found in the current poll (0 or more). Following that, each transaction in turn will then be passed to the listener(s) that subscribed with `on` for that event. The default type that will be received is a `SubscribedTransaction`, which can be imported like so: ```python from algokit_subscriber import SubscribedTransaction ``` See the . Alternatively, if you defined a mapper against the filter then it will be applied before passing the objects through. If you call `on_poll` it will be called last (after all `on` and `on_batch` listeners) for each poll, with the full set of transactions for that poll and . This allows you to process the entire poll batch in one transaction or have a hook to call after processing individual listeners (e.g. to commit a transaction). If you want to run code before a poll starts (e.g. to log or start a transaction) you can do so with `on_before_poll`. ## Poll the chain There are two methods to poll the chain for events: `pollOnce` and `start`: ```python def poll_once(self) -> TransactionSubscriptionResult: """ Execute a single subscription poll. """ def start(self, inspect: Callable | None = None, suppress_log: bool = False) -> None: # noqa: FBT001, FBT002 """ Start the subscriber in a loop until `stop` is called. This is useful when running in the context of a long-running process / container. If you want to inspect or log what happens under the covers you can pass in an `inspect` callable that will be called for each poll. 
""" ``` `poll_once` is useful when you want to take control of scheduling the different polls, such as when running a Lambda on a schedule or a process via cron, etc. - it will do a single poll of the chain and return the result of that poll. `start` is useful when you have a long-running process or container and you want it to loop infinitely at the specified polling frequency from the constructor config. If you want to inspect or log what happens under the covers you can pass in an `inspect` lambda that will be called for each poll. If you use `start` then you can stop the polling by calling `stop`, which will ensure everything is cleaned up nicely. ## Handling errors To handle errors, you can register error handlers/listeners using the `on_error` method. This works in a similar way to the other `on*` methods. When no error listeners have been registered, a default listener is used to re-throw any exception, so they can be caught by global uncaught exception handlers. Once an error listener has been registered, the default listener is removed and it’s the responsibility of the registered error listener to perform any error handling. ## Examples See the .
# get_subscribed_transactions
`get_subscribed_transactions` is the core building block at the centre of this library. It’s a simple but flexible mechanism that allows you to enact a single subscription “poll” of the Algorand blockchain. This is a lower level building block; you likely don’t want to use it directly, but instead use the `AlgorandSubscriber` class.

You can use this method to orchestrate everything from an index of all relevant data from the start of the chain through to simply subscribing to relevant transactions as they emerge at the tip of the chain. It allows you to have reliable at-least-once delivery even if your code has outages, through the use of watermarking.

```python
def get_subscribed_transactions(
    subscription: TransactionSubscriptionParams,
    algod: AlgodClient,
    indexer: IndexerClient | None = None
) -> TransactionSubscriptionResult:
    """
    Executes a single pull/poll to subscribe to transactions on the configured Algorand blockchain for the given subscription context.
    """
```

## TransactionSubscriptionParams

Specifying a subscription requires passing in a `TransactionSubscriptionParams` object, which configures the behaviour:

```python
class CoreTransactionSubscriptionParams(TypedDict):
    filters: list['NamedTransactionFilter']
    """The filter(s) to apply to find transactions of interest."""

    arc28_events: NotRequired[list['Arc28EventGroup']]
    """Any ARC-28 event definitions to process from app call logs"""

    max_rounds_to_sync: NotRequired[int | None]
    """
    The maximum number of rounds to sync from algod for each subscription pull/poll.
    Defaults to 500.
    """

    max_indexer_rounds_to_sync: NotRequired[int | None]
    """
    The maximum number of rounds to sync from indexer when using `sync_behaviour: 'catchup-with-indexer'`.
    """

    sync_behaviour: str
    """
    If the current tip of the configured Algorand blockchain is more than `max_rounds_to_sync` past `watermark` then how should that be handled.
""" class TransactionSubscriptionParams(CoreTransactionSubscriptionParams): watermark: int """ The current round watermark that transactions have previously been synced to. """ current_round: NotRequired[int] """ The current tip of the configured Algorand blockchain. If not provided, it will be resolved on demand. """ ``` ## TransactionFilter The allows you to specify a set of filters to return a subset of transactions you are interested in. Each filter contains a `filter` property of type `TransactionFilter`, which matches the following type: ```typescript class TransactionFilter(TypedDict): type: NotRequired[str | list[str]] """Filter based on the given transaction type(s).""" sender: NotRequired[str | list[str]] """Filter to transactions sent from the specified address(es).""" receiver: NotRequired[str | list[str]] """Filter to transactions being received by the specified address(es).""" note_prefix: NotRequired[str | bytes] """Filter to transactions with a note having the given prefix.""" app_id: NotRequired[int | list[int]] """Filter to transactions against the app with the given ID(s).""" app_create: NotRequired[bool] """Filter to transactions that are creating an app.""" app_on_complete: NotRequired[str | list[str]] """Filter to transactions that have given on complete(s).""" asset_id: NotRequired[int | list[int]] """Filter to transactions against the asset with the given ID(s).""" asset_create: NotRequired[bool] """Filter to transactions that are creating an asset.""" min_amount: NotRequired[int] """ Filter to transactions where the amount being transferred is greater than or equal to the given minimum (microAlgos or decimal units of an ASA if type: axfer). """ max_amount: NotRequired[int] """ Filter to transactions where the amount being transferred is less than or equal to the given maximum (microAlgos or decimal units of an ASA if type: axfer). 
""" method_signature: NotRequired[str | list[str]] """ Filter to app transactions that have the given ARC-0004 method selector(s) for the given method signature as the first app argument. """ app_call_arguments_match: NotRequired[Callable[[list[bytes] | None], bool]] """Filter to app transactions that meet the given app arguments predicate.""" arc28_events: NotRequired[list[dict[str, str]]] """ Filter to app transactions that emit the given ARC-28 events. Note: the definitions for these events must be passed in to the subscription config via `arc28_events`. """ balance_changes: NotRequired[list[dict[str, Union[int, list[int], str, list[str], 'BalanceChangeRole', list['BalanceChangeRole']]]]] """Filter to transactions that result in balance changes that match one or more of the given set of balance changes.""" custom_filter: NotRequired[Callable[[TransactionResult], bool]] """Catch-all custom filter to filter for things that the rest of the filters don't provide.""" ``` Each filter you provide within this type will apply an AND logic between the specified filters, e.g. ```typescript "filter": { "type": "axfer", "sender": "ABC..." } ``` Will return transactions that are `axfer` type AND have a sender of `"ABC..."`. ### NamedTransactionFilter You can specify multiple filters in an array, where each filter is a `NamedTransactionFilter`, which consists of: ```python class NamedTransactionFilter(TypedDict): """Specify a named filter to apply to find transactions of interest.""" name: str """The name to give the filter.""" filter: TransactionFilter """The filter itself.""" ``` This gives you the ability to detect which filter got matched when a transaction is returned, noting that you can use the same name multiple times if there are multiple filters (aka OR logic) that comprise the same logical filter. 
## Arc28EventGroup

The `Arc28EventGroup` type allows you to define any ARC-28 events that may appear in subscribed transactions so they can either be subscribed to, or be processed and added to the resulting `SubscribedTransaction`.

## TransactionSubscriptionResult

The result of calling `get_subscribed_transactions` is a `TransactionSubscriptionResult`:

```python
class TransactionSubscriptionResult(TypedDict):
    """The result of a single subscription pull/poll."""

    synced_round_range: tuple[int, int]
    """The round range that was synced from/to"""

    current_round: int
    """The current detected tip of the configured Algorand blockchain."""

    starting_watermark: int
    """The watermark value that was retrieved at the start of the subscription poll."""

    new_watermark: int
    """
    The new watermark value to persist for the next call to `get_subscribed_transactions` to continue the sync.
    Will be equal to `synced_round_range[1]`.
    Only persist this after processing (or in the same atomic transaction as) subscribed transactions to keep it reliable.
    """

    subscribed_transactions: list['SubscribedTransaction']
    """
    Any transactions that matched the given filter within the synced round range.
    This substantively uses the indexer transaction format to represent the data with some additional fields.
    """

    block_metadata: NotRequired[list['BlockMetadata']]
    """
    The metadata about any blocks that were retrieved from algod as part of the subscription poll.
""" class BlockMetadata(TypedDict): """Metadata about a block that was retrieved from algod.""" hash: NotRequired[str | None] """The base64 block hash.""" round: int """The round of the block.""" timestamp: int """Block creation timestamp in seconds since epoch""" genesis_id: str """The genesis ID of the chain.""" genesis_hash: str """The base64 genesis hash of the chain.""" previous_block_hash: NotRequired[str | None] """The base64 previous block hash.""" seed: str """The base64 seed of the block.""" rewards: NotRequired['BlockRewards'] """Fields relating to rewards""" parent_transaction_count: int """Count of parent transactions in this block""" full_transaction_count: int """Full count of transactions and inner transactions (recursively) in this block.""" txn_counter: int """Number of the next transaction that will be committed after this block. It is 0 when no transactions have ever been committed (since TxnCounter started being supported).""" transactions_root: str """ Root of transaction merkle tree using SHA512_256 hash function. This commitment is computed based on the PaysetCommit type specified in the block's consensus protocol. """ transactions_root_sha256: str """ TransactionsRootSHA256 is an auxiliary TransactionRoot, built using a vector commitment instead of a merkle tree, and SHA256 hash function instead of the default SHA512_256. This commitment can be used on environments where only the SHA256 function exists. """ upgrade_state: NotRequired['BlockUpgradeState'] """Fields relating to a protocol upgrade.""" ``` ## SubscribedTransaction The common model used to expose a transaction that is returned from a subscription is a `SubscribedTransaction`, which can be imported like so: ```python from algokit_subscriber import SubscribedTransaction ``` This type is substantively, based on the Indexer format. 
While the indexer type is used, the subscriber itself doesn’t have to use indexer - any transactions it retrieves from algod are transformed to this common model type. Beyond the indexer type it has some modifications to:

* Add the `parent_transaction_id` field so inner transactions have a reference to their parent
* Override the type of `inner_txns` to be `SubscribedTransaction[]` so inner transactions (recursively) get these extra fields too
* Add emitted ARC-28 events via `arc28_events`
* Add the list of filter(s) that caused the transaction to be matched

The definition of the type is:

```python
TransactionResult = TypedDict("TransactionResult", {
    "id": str,
    "tx-type": str,
    "fee": int,
    "sender": str,
    "first-valid": int,
    "last-valid": int,
    "confirmed-round": NotRequired[int],
    "group": NotRequired[None | str],
    "note": NotRequired[str],
    "logs": NotRequired[list[str]],
    "round-time": NotRequired[int],
    "intra-round-offset": NotRequired[int],
    "signature": NotRequired['TransactionSignature'],
    "application-transaction": NotRequired['ApplicationTransactionResult'],
    "created-application-index": NotRequired[None | int],
    "asset-config-transaction": NotRequired['AssetConfigTransactionResult'],
    "created-asset-index": NotRequired[None | int],
    "asset-freeze-transaction": NotRequired['AssetFreezeTransactionResult'],
    "asset-transfer-transaction": NotRequired['AssetTransferTransactionResult'],
    "keyreg-transaction": NotRequired['KeyRegistrationTransactionResult'],
    "payment-transaction": NotRequired['PaymentTransactionResult'],
    "state-proof-transaction": NotRequired['StateProofTransactionResult'],
    "auth-addr": NotRequired[None | str],
    "closing-amount": NotRequired[None | int],
    "genesis-hash": NotRequired[str],
    "genesis-id": NotRequired[str],
    "inner-txns": NotRequired[list['TransactionResult']],
    "rekey-to": NotRequired[str],
    "lease": NotRequired[str],
    "local-state-delta": NotRequired[list[dict]],
    "global-state-delta": NotRequired[list[dict]],
    "receiver-rewards": NotRequired[int],
    "sender-rewards": NotRequired[int],
    "close-rewards": NotRequired[int]
})

class SubscribedTransaction(TransactionResult):
    """
    The common model used to expose a transaction that is returned from a subscription.

    Substantively, based on the Indexer `TransactionResult` model format with some modifications to:
    * Add the `parent_transaction_id` field so inner transactions have a reference to their parent
    * Override the type of `inner_txns` to be `SubscribedTransaction[]` so inner transactions (recursively) get these extra fields too
    * Add emitted ARC-28 events via `arc28_events`
    * Balance changes in algo or assets
    """

    parent_transaction_id: NotRequired[None | str]
    """The transaction ID of the parent of this transaction (if it's an inner transaction)."""

    inner_txns: NotRequired[list['SubscribedTransaction']]
    """Inner transactions produced by application execution."""

    arc28_events: NotRequired[list[EmittedArc28Event]]
    """Any ARC-28 events emitted from an app call."""

    filters_matched: NotRequired[list[str]]
    """The names of any filters that matched the given transaction to result in it being 'subscribed'."""

    balance_changes: NotRequired[list['BalanceChange']]
    """The balance changes in the transaction."""

class BalanceChange(TypedDict):
    """Represents a balance change effect for a transaction."""

    address: str
    """The address that the balance change is for."""

    asset_id: int
    """The asset ID of the balance change, or 0 for Algos."""

    amount: int
    """The amount of the balance change in smallest divisible unit or microAlgos."""

    roles: list['BalanceChangeRole']
    """The roles the account was playing that led to the balance change"""

class Arc28EventToProcess(TypedDict):
    """
    Represents an ARC-28 event to be processed.
    """

    group_name: str
    """The name of the ARC-28 event group the event belongs to"""

    event_name: str
    """The name of the ARC-28 event that was triggered"""

    event_signature: str
    """The signature of the event e.g. `EventName(type1,type2)`"""

    event_prefix: str
    """The 4-byte hex prefix for the event"""

    event_definition: Arc28Event
    """The ARC-28 definition of the event"""

class EmittedArc28Event(Arc28EventToProcess):
    """
    Represents an ARC-28 event that was emitted.
    """

    args: list[Any]
    """The ordered arguments extracted from the event that was emitted"""

    args_by_name: dict[str, Any]
    """The named arguments extracted from the event that was emitted (where the arguments had a name defined)"""
```

## Examples

Here are some examples of how to use this method:

### Real-time notification of transactions of interest at the tip of the chain discarding stale records

If you ran the following code on a cron schedule of (say) every 5 seconds it would notify you every time the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`) sent a transaction. If the service stopped working for a period of time and fell behind then it would drop old records and restart notifications from the new tip.
```python
from algokit_subscriber import AlgorandSubscriber, SubscribedTransaction
from algokit_utils.beta.algorand_client import AlgorandClient

algorand = AlgorandClient.test_net()
watermark = 0

def get_watermark() -> int:
    return watermark

def set_watermark(new_watermark: int) -> None:
    global watermark  # noqa: PLW0603
    watermark = new_watermark

subscriber = AlgorandSubscriber(algod_client=algorand.client.algod, config={
    'filters': [
        {
            'name': 'filter1',
            'filter': {
                'sender': 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU'
            }
        }
    ],
    'wait_for_block_when_at_tip': True,
    'watermark_persistence': {
        'get': get_watermark,
        'set': set_watermark
    },
    'sync_behaviour': 'skip-sync-newest',
    'max_rounds_to_sync': 100
})

def notify_transactions(transaction: SubscribedTransaction, _: str) -> None:
    # Implement your notification logic here
    print(f"New transaction from {transaction['sender']}")  # noqa: T201

subscriber.on('filter1', notify_transactions)
subscriber.start()
```

### Real-time notification of transactions of interest at the tip of the chain with at least once delivery

If you ran the following code on a cron schedule of (say) every 5 seconds it would notify you every time the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`) sent a transaction. If the service stopped working for a period of time and fell behind then it would pick up where it left off and catch up using algod (note: you need to connect it to an archival node).
```python
from algokit_subscriber import AlgorandSubscriber, SubscribedTransaction
from algokit_utils.beta.algorand_client import AlgorandClient

algorand = AlgorandClient.test_net()
watermark = 0

def get_watermark() -> int:
    return watermark

def set_watermark(new_watermark: int) -> None:
    global watermark  # noqa: PLW0603
    watermark = new_watermark

subscriber = AlgorandSubscriber(algod_client=algorand.client.algod, config={
    'filters': [
        {
            'name': 'filter1',
            'filter': {
                'sender': 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU'
            }
        }
    ],
    'wait_for_block_when_at_tip': True,
    'watermark_persistence': {
        'get': get_watermark,
        'set': set_watermark
    },
    'sync_behaviour': 'sync-oldest-start-now',
    'max_rounds_to_sync': 100
})

def notify_transactions(transaction: SubscribedTransaction, _: str) -> None:
    # Implement your notification logic here
    print(f"New transaction from {transaction['sender']}")  # noqa: T201

subscriber.on('filter1', notify_transactions)
subscriber.start()
```

### Quickly building a reliable, up-to-date cache index of all transactions of interest from the beginning of the chain

If you ran the following code on a cron schedule of (say) every 30 - 60 seconds it would create a cached index of all assets created by the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`). Given it uses indexer to catch up you can deploy this into a fresh environment with an empty database and it will catch up in seconds rather than days.
```python
from algokit_subscriber import AlgorandSubscriber, SubscribedTransaction
from algokit_utils.beta.algorand_client import AlgorandClient

algorand = AlgorandClient.test_net()
watermark = 0

def get_watermark() -> int:
    return watermark

def set_watermark(new_watermark: int) -> None:
    global watermark  # noqa: PLW0603
    watermark = new_watermark

def save_transactions(transactions: list[SubscribedTransaction]) -> None:
    # Implement your logic to save transactions here
    pass

subscriber = AlgorandSubscriber(algod_client=algorand.client.algod, indexer_client=algorand.client.indexer, config={
    'filters': [
        {
            'name': 'filter1',
            'filter': {
                'type': 'acfg',
                'sender': 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU',
                'asset_create': True
            }
        }
    ],
    'wait_for_block_when_at_tip': True,
    'watermark_persistence': {
        'get': get_watermark,
        'set': set_watermark
    },
    'sync_behaviour': 'catchup-with-indexer',
    'max_rounds_to_sync': 1000
})

def process_transactions(transaction: SubscribedTransaction, _: str) -> None:
    save_transactions([transaction])

subscriber.on('filter1', process_transactions)
subscriber.start()
```
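The examples above all persist a watermark after each poll. To see why that gives at-least-once delivery when you drive polls yourself (e.g. from cron), here is a small self-contained simulation; `fake_poll` is a purely illustrative stand-in for a real poll, mirroring the `new_watermark` semantics of `TransactionSubscriptionResult`:

```python
# Simulates cron-style polling with watermark persistence. fake_poll stands in
# for a real subscriber poll; it returns the synced round range and the new
# watermark, which the caller persists only after processing.
watermark = 0

def fake_poll(current_watermark: int, chain_tip: int, max_rounds: int = 100) -> dict:
    start = current_watermark + 1
    end = min(current_watermark + max_rounds, chain_tip)
    return {"synced_round_range": (start, end), "new_watermark": end}

seen_rounds = []
for _ in range(3):  # three scheduled runs
    result = fake_poll(watermark, chain_tip=250)
    seen_rounds.extend(range(result["synced_round_range"][0], result["new_watermark"] + 1))
    watermark = result["new_watermark"]  # persist only after processing

print(watermark)        # 250 - caught up to the tip
print(len(seen_rounds)) # 250 - every round processed exactly once across polls
```

If a run crashed after processing but before the watermark was persisted, the next run would simply re-sync from the old watermark, which is why your handlers need to tolerate duplicates.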
# Algorand transaction subscription / indexing
## Quick start

```typescript
// Create subscriber
const subscriber = new AlgorandSubscriber(
  {
    filters: [
      {
        name: 'filter1',
        filter: {
          type: TransactionType.pay,
          sender: 'ABC...',
        },
      },
    ],
    /* ... other options (use intellisense to explore) */
  },
  algod,
  optionalIndexer,
);

// Set up subscription(s)
subscriber.on('filter1', async transaction => {
  // ...
});
//...

// Set up error handling
subscriber.onError(e => {
  // ...
});

// Either: Start the subscriber (if in long-running process)
subscriber.start();

// OR: Poll the subscriber (if in cron job / periodic lambda)
subscriber.pollOnce();
```

## Capabilities

### Notification *and* indexing

This library supports the ability to stay at the tip of the chain and power notification / alerting type scenarios through the use of the `syncBehaviour` parameter in both `AlgorandSubscriber` and `getSubscribedTransactions`. For example, to stay at the tip of the chain for notification/alerting scenarios you could do:

```typescript
const subscriber = new AlgorandSubscriber({syncBehaviour: 'skip-sync-newest', maxRoundsToSync: 100, ...}, ...)
// or:
getSubscribedTransactions({syncBehaviour: 'skip-sync-newest', maxRoundsToSync: 100, ...}, ...)
```

The `currentRound` parameter (available when calling `getSubscribedTransactions`) can be used to set the tip of the chain. If not specified, the tip will be automatically detected. Whilst this is generally not needed, it is useful in scenarios where the tip is being detected as part of another process and you only want to sync to that point and no further.

The `maxRoundsToSync` parameter controls how many rounds it will process when first starting when it’s not caught up to the tip of the chain. While it’s caught up to the chain it will keep processing as many rounds as are available from the last round it processed to when it next tries to sync (see below for how to control that).
If you expect your service to resiliently stay running and never fall more than `maxRoundsToSync` behind the tip of the chain, and it would be a problem if it processed old records, you can set the `syncBehaviour` parameter to `fail` so that it throws an error when it loses track of the tip of the chain rather than continuing or skipping to the newest rounds.

The `syncBehaviour` parameter can also be set to `sync-oldest-start-now` if you want to process all transactions from the moment you start alerting/notifying. This requires that your service keeps running, otherwise it could fall behind and start processing old records / take a while to catch back up with the tip of the chain. This is also a useful setting if you are creating an indexer that only needs to process from the moment the indexer is deployed rather than from the beginning of the chain. Note: this requires the watermark to start at 0 to work.

The `syncBehaviour` parameter can also be set to `sync-oldest`, which is a more traditional indexing scenario where you want to process every single block from the beginning of the chain. This can take a long time to process by default (e.g. days), noting there is a faster option via the indexer catchup described above. If you don’t want to start from the beginning of the chain you can set the watermark to a round number higher than 0 to start indexing from that point.

### Low latency processing

You can control the polling semantics of the library when using the `AlgorandSubscriber` by either specifying the `frequencyInSeconds` parameter to control the duration between polls, or using the `waitForBlockWhenAtTip` parameter to indicate the subscriber should wait for algod to signal that a new round is available, so the subscriber can immediately process that round with much lower latency. When this mode is set, the subscriber intelligently uses this option only when it’s caught up to the tip of the chain, and otherwise uses `frequencyInSeconds` while catching up. e.g.
```typescript
// When catching up to the tip of the chain it will poll every 1s for the next 1000 blocks,
// but when caught up it will wait for algod to signal a new block so it can be processed
// immediately with low latency
const subscriber = new AlgorandSubscriber({frequencyInSeconds: 1, waitForBlockWhenAtTip: true, maxRoundsToSync: 1000, ...}, ...)
...
subscriber.start()
```

If you are using `getSubscribedTransactions` or the `pollOnce` method on `AlgorandSubscriber` then you can use your infrastructure and/or surrounding orchestration code to take control of the polling duration.

If you want to manually run code that waits for a given round to become available you can execute the following algosdk code:

```typescript
await algod.statusAfterBlock(roundNumberToWaitFor).do();
```

It’s worth noting special care has been taken in the subscriber library to properly handle abort signalling. All asynchronous operations, including algod polls and polling waits, have abort signal handling in place, so if you call `subscriber.stop()` at any point in time it should exit almost immediately and cleanly, and if you want to wait for the stop to finish you can `await subscriber.stop()`. If you want to hook this up to Node.js process signals you can include code like this in your service entrypoint:

```typescript
['SIGINT', 'SIGTERM', 'SIGQUIT'].forEach(signal =>
  process.on(signal, () => {
    // eslint-disable-next-line no-console
    console.log(`Received ${signal}; stopping subscriber...`);
    subscriber.stop(signal);
  }),
);
```

### Watermarking and resilience

You can create reliable syncing / indexing services through a simple round watermarking capability that allows you to recover from an outage. This works through the use of the `watermarkPersistence` parameter in `AlgorandSubscriber` and the `watermark` parameter in `getSubscribedTransactions`:

```typescript
async function getSavedWatermark(): Promise<bigint> {
  // Return the watermark from a persistence store e.g. database, redis, file system, etc.
}

async function saveWatermark(newWatermark: bigint): Promise<void> {
  // Save the watermark to a persistence store e.g. database, redis, file system, etc.
}

...

const subscriber = new AlgorandSubscriber({watermarkPersistence: { get: getSavedWatermark, set: saveWatermark }, ...}, ...)

// or:
const watermark = await getSavedWatermark()
const result = await getSubscribedTransactions({watermark, ...}, ...)
await saveWatermark(result.newWatermark)
```

By using a persistence store, you can gracefully respond to an outage of your subscriber. The next time it starts it will pick back up from the point where it last persisted. It’s worth noting this provides at-least-once delivery semantics, so you need to handle duplicate events. Alternatively, if you want at-most-once delivery semantics you can use `getSubscribedTransactions` directly and wrap a unit of work from an ACID persistence store (e.g. a SQL database with a serializable or repeatable read transaction) around the watermark retrieval, transaction processing and watermark persistence, so the processing of transactions and watermarking of a single poll happens in a single atomic transaction. In this model, you would then process the transactions in a separate process from the persistence store (and likely have a flag on each transaction to indicate if it has been processed or not). You would need to be careful to ensure that you only have one subscriber actively running at a time to guarantee this delivery semantic. To ensure resilience you may want to have multiple subscribers running, but with a primary node that actually executes, based on retrieval of a distributed semaphore / lease.
If you are doing a quick test or creating an ephemeral subscriber that just needs to exist in-memory and doesn’t need to recover resiliently (useful with a `syncBehaviour` of `skip-sync-newest`, for instance) then you can use an in-memory variable instead of a persistence store, e.g.:

```typescript
let watermark = 0n
const subscriber = new AlgorandSubscriber({watermarkPersistence: { get: () => watermark, set: (newWatermark: bigint) => watermark = newWatermark }, ...}, ...)

// or:
let watermark = 0n
const result = await getSubscribedTransactions({watermark, ...}, ...)
watermark = result.newWatermark
```

### Extensive subscription filtering

This library has extensive filtering options available so you can have fine-grained control over which transactions you are interested in. Filters are specified via the core `NamedTransactionFilter` type:

```typescript
const subscriber = new AlgorandSubscriber({filters: [{name: 'filterName', filter: {/* Filter properties */}}], ...}, ...)

// or:
getSubscribedTransactions({filters: [{name: 'filterName', filter: {/* Filter properties */}}], ... }, ...)
```

Currently this allows you to filter based on any combination (AND logic) of: * Transaction type e.g. `filter: { type: TransactionType.axfer }` or `filter: {type: [TransactionType.axfer, TransactionType.pay] }` * Account (sender and receiver) e.g. `filter: { sender: "ABCDE..F" }` or `filter: { sender: ["ABCDE..F", "ZYXWV..A"] }` and `filter: { receiver: "12345..6" }` or `filter: { receiver: ["ABCDE..F", "ZYXWV..A"] }` * Note prefix e.g. `filter: { notePrefix: "xyz" }` * Apps * ID e.g. `filter: { appId: 54321 }` or `filter: { appId: [54321, 12345] }` * Creation e.g. `filter: { appCreate: true }` * Call on-complete(s) e.g. `filter: { appOnComplete: ApplicationOnComplete.optin }` or `filter: { appOnComplete: [ApplicationOnComplete.optin, ApplicationOnComplete.noop] }` * ARC4 method signature(s) e.g.
`filter: { methodSignature: "MyMethod(uint64,string)" }` or `filter: { methodSignature: ["MyMethod(uint64,string)uint64", "MyMethod2(uint64)"] }` * Call arguments e.g. ```typescript filter: { appCallArgumentsMatch: appCallArguments => appCallArguments.length > 1 && Buffer.from(appCallArguments[1]).toString('utf-8') === 'hello_world' } ``` * Emitted ARC-28 event(s) e.g. ```typescript filter: { arc28Events: [{ groupName: 'group1', eventName: 'MyEvent' }] } ``` Note: for this to work you need to define the corresponding `arc28Events` event group (see the next section). * Assets * ID e.g. `filter: { assetId: 123456n }` or `filter: { assetId: [123456n, 456789n] }` * Creation e.g. `filter: { assetCreate: true }` * Amount transferred (min and/or max) e.g. `filter: { type: TransactionType.axfer, minAmount: 1, maxAmount: 100 }` * Balance changes (asset ID, sender, receiver, close to, min and/or max change) e.g. `filter: { balanceChanges: [{assetId: [15345n, 36234n], roles: [BalanceChangeRole.sender], address: "ABC...", minAmount: 1, maxAmount: 2}]}` * Algo transfers (pay transactions) * Amount transferred (min and/or max) e.g. `filter: { type: TransactionType.pay, minAmount: 1, maxAmount: 100 }` * Balance changes (sender, receiver, close to, min and/or max change) e.g. `filter: { balanceChanges: [{roles: [BalanceChangeRole.sender], address: "ABC...", minAmount: 1, maxAmount: 2}]}` You can supply multiple, named filters via the `NamedTransactionFilter` type. When subscribed transactions are returned, each transaction will have a `filtersMatched` property with an array of any filter(s) that caused that transaction to be returned. When using `AlgorandSubscriber`, you can subscribe to the events that are emitted with the filter name. ### ARC-28 event subscription and reads You can subscribe to ARC-28 events emitted by a smart contract. Furthermore, any ARC-28 events that a smart contract call you subscribe to emitted will be included in the subscribed transaction.
Both subscription and receiving of ARC-28 events work through the use of the `arc28Events` parameter in `AlgorandSubscriber` and `getSubscribedTransactions`:

```typescript
const group1Events: Arc28EventGroup = {
  groupName: 'group1',
  events: [
    {
      name: 'MyEvent',
      args: [
        {type: 'uint64'},
        {type: 'string'},
      ]
    }
  ]
}

const subscriber = new AlgorandSubscriber({arc28Events: [group1Events], ...}, ...)

// or:
const result = await getSubscribedTransactions({arc28Events: [group1Events], ...}, ...)
```

The `Arc28EventGroup` type has the following definition:

```typescript
/** Specifies a group of ARC-28 event definitions along with instructions for when to attempt to process the events. */
export interface Arc28EventGroup {
  /** The name to designate for this group of events. */
  groupName: string;
  /** Optional list of app IDs that this event should apply to */
  processForAppIds?: bigint[];
  /** Optional predicate to indicate if these ARC-28 events should be processed for the given transaction */
  processTransaction?: (transaction: TransactionResult) => boolean;
  /** Whether or not to silently (with warning log) continue if an error is encountered processing the ARC-28 event data; default = false */
  continueOnError?: boolean;
  /** The list of ARC-28 event definitions */
  events: Arc28Event[];
}

/**
 * The definition of metadata for an ARC-28 event per https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0028.md#event.
 */
export interface Arc28Event {
  /** The name of the event */
  name: string;
  /** Optional, user-friendly description for the event */
  desc?: string;
  /** The arguments of the event, in order */
  args: Array<{
    /** The type of the argument */
    type: string;
    /** Optional, user-friendly name for the argument */
    name?: string;
    /** Optional, user-friendly description for the argument */
    desc?: string;
  }>;
}
```

Each group allows you to apply logic to the applicability and processing of a set of events.
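As a sketch of how groups can scope event processing, the example below (with invented app IDs, addresses and event names) declares two groups: one scoped via `processForAppIds` and one via a `processTransaction` predicate. The local type declarations mirror the interfaces above so the snippet stands alone; in an app you would import the library’s types instead.

```typescript
// Minimal local re-declarations of the shapes shown above (illustrative only)
type TransactionResult = { sender: string };

interface Arc28EventGroup {
  groupName: string;
  processForAppIds?: bigint[];
  processTransaction?: (transaction: TransactionResult) => boolean;
  continueOnError?: boolean;
  events: { name: string; args: { type: string; name?: string }[] }[];
}

// Only decode logs coming from a known app ID
const dexEvents: Arc28EventGroup = {
  groupName: 'dex',
  processForAppIds: [1234n], // hypothetical app ID
  events: [{ name: 'Swapped', args: [{ type: 'uint64' }, { type: 'uint64' }] }],
};

// Fall back to a predicate when the app ID isn't known up front
const registryEvents: Arc28EventGroup = {
  groupName: 'registry',
  processTransaction: (txn) => txn.sender === 'CREATOR_ADDRESS', // hypothetical check
  continueOnError: true, // tolerate malformed logs from this contract
  events: [{ name: 'Registered', args: [{ type: 'string' }] }],
};

// Both groups can then be passed together, e.g.:
// new AlgorandSubscriber({ arc28Events: [dexEvents, registryEvents], ... }, algod)
```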
This structure allows you to safely process the events from multiple contracts in the same subscriber, or perform more advanced filtering logic for event processing. When specifying an ARC-28 event filter, you specify both the `groupName` and `eventName`(s) to narrow down what event(s) you want to subscribe to. If you want to emit an ARC-28 event from your smart contract you can follow the examples in the Emit ARC-28 events section below. ### First-class inner transaction support When you subscribe to transactions, any subscription that covers an inner transaction will pick up that inner transaction and return it to you correctly. Note: the behaviour of Algorand Indexer is to return the parent transaction, not the inner transaction; this library will always return the actual transaction you subscribed to. If you receive an inner transaction then there will be a `parentTransactionId` field populated that allows you to see that it was an inner transaction and how to identify the parent transaction. The `id` of an inner transaction will be set to `{parentTransactionId}/inner/{index-of-child-within-parent}` where `{index-of-child-within-parent}` is calculated by uniquely walking the tree of potentially nested inner transactions; this library uses the same approach as Algorand Indexer for allocating inner transaction indexes. All transactions will have an `inner-txns` property with any inner transactions of that transaction populated (recursively). The `intra-round-offset` field in a subscribed transaction is calculated by walking the full tree depth-first from the first transaction in the block, through any inner transactions recursively, starting from an index of 0. This algorithm matches the one in Algorand Indexer and ensures that all transactions have a unique index, but the top-level transactions in the block don’t necessarily have sequential indexes. ### State-proof support You can subscribe to state proof transactions using this subscriber library.
At the time of writing state proof transactions are not supported by algosdk v2, so custom handling has been added to ensure this valuable type of transaction can be subscribed to. The state proof transaction fields are comprehensively documented at the field level. By exposing this functionality, this library can be used to build state proof aware solutions. ### Simple programming model This library is easy to use and consume, and subscribed transactions have a comprehensive model with all relevant/useful information about the transaction (including things like transaction id, round number, created asset/app id, app logs, etc.) modelled on the indexer data model (which is used regardless of whether the transactions come from indexer or algod, so it’s a consistent experience). Furthermore, the `AlgorandSubscriber` class has a familiar programming model based on Node.js `EventEmitter`, but with async methods. For more examples of how to use it see the examples in the repository. ### Easy to deploy Because the entry points of this library are simple TypeScript methods, to execute it you simply need to run it in a valid JavaScript execution environment. For instance, you could run it within a web browser if you want a user facing app to show real-time transaction notifications in-app, or in a Node.js process running in the myriad of ways Node.js can be run. Because of that, you have full control over how you want to deploy and use the subscriber; it will work with whatever persistence (e.g. sql, no-sql, etc.), queuing/messaging (e.g. queues, topics, buses, web hooks, web sockets) and compute (e.g. serverless periodic lambdas, continually running containers, virtual machines, etc.) services you want to use. ### Fast initial index When subscribing for the purposes of building an index you will often want to start at the beginning of the chain, or a substantial time in the past when the given solution you are subscribing for started. This kind of catch up takes days to process since algod only lets you retrieve a single block at a time and retrieving a block takes 0.5-1s.
Given there are millions of blocks on MainNet it doesn’t take long to do the math to see why catching up takes so long. This subscriber library has a unique, optional indexer catch up mode that allows you to use indexer to catch up to the tip of the chain in seconds or minutes rather than days for your specific filter. This is really handy when you are doing local development or spinning up a new environment and don’t want to wait for days. To make use of this feature, you need to set the `syncBehaviour` config to `catchup-with-indexer` and ensure that you pass `indexer` in along with `algod`. Any filters you apply will be seamlessly translated to indexer searches to get the historic transactions in the most efficient way possible based on the APIs indexer exposes. Once the subscriber is within `maxRoundsToSync` of the tip of the chain it will switch to subscribing using `algod`. To see this in action, you can run the Data History Museum example in this repository against MainNet and see it sync millions of rounds in seconds. The indexer catchup isn’t magic - if the filter you are trying to catch up with generates an enormous number of transactions (e.g. hundreds of thousands or millions) then it will run very slowly and has the potential to run out of compute and memory, depending on the constraints of the deployment environment you are running in. In that instance, there is a config parameter, `maxIndexerRoundsToSync`, that lets you break the indexer catchup into multiple “polls”, e.g. 100,000 rounds at a time. This allows a smaller batch of transactions to be retrieved and persisted in multiple batches. To know whether you are likely to generate a lot of transactions it’s worth understanding the architecture of the indexer catchup, which runs in two stages: 1. **Pre-filtering**: Any filters that can be translated to the indexer transaction search API are converted into a single indexer query.
This query is then run between the rounds that need to be synced and paginated in the max number of results (1000) at a time until all of the transactions are retrieved. This ensures we get round-based transactional consistency. This is the filter that can easily explode out though and take a long time when using indexer catchup. For avoidance of doubt, the following filters are the ones that are converted to a pre-filter: * `sender` (single value) * `receiver` (single value) * `type` (single value) * `notePrefix` * `appId` (single value) * `assetId` (single value) * `minAmount` (and `type = pay` or `assetId` provided) * `maxAmount` (and `maxAmount < Number.MAX_SAFE_INTEGER` and `type = pay` or (`assetId` provided and `minAmount > 0`)) 2. **Post-filtering**: All remaining filters are then applied in-memory to the resulting list of transactions that are returned from the pre-filter before being returned as subscribed transactions. ## Entry points There are two entry points into the subscriber functionality. The lower level `getSubscribedTransactions` method contains the raw subscription logic for a single “poll”, and the `AlgorandSubscriber` class provides a higher level interface that is easier to use and takes care of a lot more orchestration logic for you (particularly around the ability to continuously poll). Both are first-class supported ways of using this library, but we generally recommend starting with `AlgorandSubscriber` since it’s easier to use and will cover the majority of use cases. ## Reference docs ## Emit ARC-28 events To emit ARC-28 events from your smart contract you can use the following syntax. ### Algorand Python

```python
@arc4.abimethod
def emit_swapped(self, a: arc4.UInt64, b: arc4.UInt64) -> None:
    arc4.emit("MyEvent", a, b)
```

OR:

```python
class MyEvent(arc4.Struct):
    a: arc4.String
    b: arc4.UInt64

# ...
@arc4.abimethod
def emit_swapped(self, a: arc4.String, b: arc4.UInt64) -> None:
    arc4.emit(MyEvent(a, b))
```

### TealScript

```typescript
MyEvent = new EventLogger<{
  stringField: string
  intField: uint64
}>();

// ...

this.MyEvent.log({
  stringField: "a",
  intField: 2
})
```

### PyTEAL

```python
class MyEvent(pt.abi.NamedTuple):
    stringField: pt.abi.Field[pt.abi.String]
    intField: pt.abi.Field[pt.abi.Uint64]

# ...

@app.external()
def myMethod(a: pt.abi.String, b: pt.abi.Uint64) -> pt.Expr:
    # ...
    return pt.Seq(
        # ...
        (event := MyEvent()).set(a, b),
        pt.Log(pt.Concat(pt.MethodSignature("MyEvent(byte[],uint64)"), event._stored_value.load())),
        pt.Approve(),
    )
```

Note: if your event doesn’t have any dynamic ARC-4 types in it then you can simplify that to something like this:

```python
pt.Log(pt.Concat(pt.MethodSignature("MyEvent(byte[],uint64)"), a.get(), pt.Itob(b.get()))),
```

### TEAL

```teal
method "MyEvent(byte[],uint64)"
frame_dig 0 // or any other command to put the ARC-4 encoded bytes for the event on the stack
concat
log
```
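On the consuming side, ARC-28 events are recognised in app call logs by a 4-byte prefix: the first 4 bytes of the SHA-512/256 hash of the event signature (the same derivation the `method` pseudo-opcode performs in the TEAL example above). Here is a sketch of that matching logic, with helper names invented for illustration:

```typescript
import { createHash } from 'node:crypto';

// First 4 bytes of SHA-512/256 of the event signature, per ARC-28
function eventSelector(signature: string): Buffer {
  return createHash('sha512-256').update(signature).digest().subarray(0, 4);
}

// A log entry carries the event if it starts with the selector; the remaining
// bytes are the ARC-4 encoded event arguments
function isEventLog(log: Uint8Array, signature: string): boolean {
  return log.length >= 4 && eventSelector(signature).equals(Buffer.from(log.subarray(0, 4)));
}
```

For example, `eventSelector('MyEvent(byte[],uint64)')` yields the same 4 bytes the TEAL `method "MyEvent(byte[],uint64)"` line pushes onto the stack.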
# `AlgorandSubscriber`
`AlgorandSubscriber` is a class that allows you to easily subscribe to the Algorand Blockchain, define a series of events that you are interested in, and react to those events. It has a similar programming model to Node.js `EventEmitter`, but also supports async/await. ## Creating a subscriber To create an `AlgorandSubscriber` you can use the following constructor:

```typescript
/**
 * Create a new `AlgorandSubscriber`.
 * @param config The subscriber configuration
 * @param algod An algod client
 * @param indexer An (optional) indexer client; only needed if `subscription.syncBehaviour` is `catchup-with-indexer`
 */
constructor(config: AlgorandSubscriberConfig, algod: Algodv2, indexer?: Indexer)
```

The key configuration is the `AlgorandSubscriberConfig` interface:

````typescript
/** Configuration for an `AlgorandSubscriber`. */
export interface AlgorandSubscriberConfig extends CoreTransactionSubscriptionParams {
  /** The set of filters to subscribe to / emit events for, along with optional data mappers. */
  filters: SubscriberConfigFilter[];
  /** The frequency to poll for new blocks in seconds; defaults to 1s */
  frequencyInSeconds?: number;
  /** Whether to wait via algod `/status/wait-for-block-after` endpoint when at the tip of the chain; reduces latency of subscription */
  waitForBlockWhenAtTip?: boolean;
  /** Methods to retrieve and persist the current watermark so syncing is resilient and maintains
   * its position in the chain */
  watermarkPersistence: {
    /** Returns the current watermark that syncing has previously been processed to */
    get: () => Promise<bigint>;
    /** Persist the new watermark that has been processed */
    set: (newWatermark: bigint) => Promise<void>;
  };
}

/** Common parameters to control a single subscription pull/poll for both `AlgorandSubscriber` and `getSubscribedTransactions`. */
export interface CoreTransactionSubscriptionParams {
  /** The filter(s) to apply to find transactions of interest.
   * A list of filters with corresponding names.
* * @example * ```typescript * filter: [{ * name: 'asset-transfers', * filter: { * type: TransactionType.axfer, * //... * } * }, { * name: 'payments', * filter: { * type: TransactionType.pay, * //... * } * }] * ``` * */ filters: NamedTransactionFilter[]; /** Any ARC-28 event definitions to process from app call logs */ arc28Events?: Arc28EventGroup[]; /** The maximum number of rounds to sync from algod for each subscription pull/poll. * * Defaults to 500. * * This gives you control over how many rounds you wait for at a time, * your staleness tolerance when using `skip-sync-newest` or `fail`, and * your catchup speed when using `sync-oldest`. **/ maxRoundsToSync?: number; /** * The maximum number of rounds to sync from indexer when using `syncBehaviour: 'catchup-with-indexer'. * * By default there is no limit and it will paginate through all of the rounds. * Sometimes this can result in an incredibly long catchup time that may break the service * due to execution and memory constraints, particularly for filters that result in a large number of transactions. * * Instead, this allows indexer catchup to be split into multiple polls, each with a transactionally consistent * boundary based on the number of rounds specified here. */ maxIndexerRoundsToSync?: number; /** If the current tip of the configured Algorand blockchain is more than `maxRoundsToSync` * past `watermark` then how should that be handled: * * `skip-sync-newest`: Discard old blocks/transactions and sync the newest; useful * for real-time notification scenarios where you don't care about history and * are happy to lose old transactions. * * `sync-oldest`: Sync from the oldest rounds forward `maxRoundsToSync` rounds * using algod; note: this will be slow if you are starting from 0 and requires * an archival node. * * `sync-oldest-start-now`: Same as `sync-oldest`, but if the `watermark` is `0` * then start at the current round i.e. 
don't sync historical records, but once * subscribing starts sync everything; note: if it falls behind it requires an * archival node. * * `catchup-with-indexer`: Sync to round `currentRound - maxRoundsToSync + 1` * using indexer (much faster than using algod for long time periods) and then * use algod from there. * * `fail`: Throw an error. **/ syncBehaviour: | 'skip-sync-newest' | 'sync-oldest' | 'sync-oldest-start-now' | 'catchup-with-indexer' | 'fail'; } ```` `watermarkPersistence` allows you to ensure reliability against your code having outages since you can persist the last block your code processed up to and then provide it again the next time your code runs. `maxRoundsToSync` and `syncBehaviour` allow you to control the subscription semantics as your code falls behind the tip of the chain (either on first run or after an outage). `frequencyInSeconds` allows you to control the polling frequency and by association your latency tolerance for new events once you’ve caught up to the tip of the chain. Alternatively, you can set `waitForBlockWhenAtTip` to get the subscriber to ask algod to tell it when there is a new block ready to reduce latency when it’s caught up to the tip of the chain. `arc28Events` are any . Filters defines the different subscription(s) you want to make, and is defined by the following interface: ```typescript /** A single event to subscribe to / emit. */ export interface SubscriberConfigFilter extends NamedTransactionFilter { /** An optional data mapper if you want the event data to take a certain shape when subscribing to events with this filter name. * * If not specified, then the event will simply receive a `SubscribedTransaction`. * * Note: if you provide multiple filters with the same name then only the mapper of the first matching filter will be used */ mapper?: (transaction: SubscribedTransaction[]) => Promise; } /** Specify a named filter to apply to find transactions of interest. 
*/ export interface NamedTransactionFilter { /** The name to give the filter. */ name: string; /** The filter itself. */ filter: TransactionFilter; } ``` The event name is a unique name that describes the event you are subscribing to. The filter defines how to interpret transactions on the chain as being “collected” by that event, and the mapper is an optional ability to map from the raw transaction to a more targeted type for your event subscribers to consume. ## Subscribing to events Once you have created the `AlgorandSubscriber`, you can register handlers/listeners for the filters you have defined, or for each poll as a whole batch. You can do this via the `on`, `onBatch` and `onPoll` methods: ````typescript /** * Register an event handler to run on every subscribed transaction matching the given filter name. * * The listener can be async and it will be awaited if so. * @example **Non-mapped** * ```typescript * subscriber.on('my-filter', async (transaction) => { console.log(transaction.id) }) * ``` * @example **Mapped** * ```typescript * new AlgorandSubscriber({filters: [{name: 'my-filter', filter: {...}, mapper: (t) => t.id}], ...}, algod) * .on('my-filter', async (transactionId) => { console.log(transactionId) }) * ``` * @param filterName The name of the filter to subscribe to * @param listener The listener function to invoke with the subscribed event * @returns The subscriber so `on*` calls can be chained */ on(filterName: string, listener: TypedAsyncEventListener) {} /** * Register an event handler to run on all subscribed transactions matching the given filter name * for each subscription poll. * * This is useful when you want to efficiently process / persist events * in bulk rather than one-by-one. * * The listener can be async and it will be awaited if so.
* @example **Non-mapped** * ```typescript * subscriber.onBatch('my-filter', async (transactions) => { console.log(transactions.length) }) * ``` * @example **Mapped** * ```typescript * new AlgorandSubscriber({filters: [{name: 'my-filter', filter: {...}, mapper: (t) => t.id}], ...}, algod) * .onBatch('my-filter', async (transactionIds) => { console.log(transactionIds) }) * ``` * @param filterName The name of the filter to subscribe to * @param listener The listener function to invoke with the subscribed events * @returns The subscriber so `on*` calls can be chained */ onBatch(filterName: string, listener: TypedAsyncEventListener) {} /** * Register an event handler to run before every subscription poll. * * This is useful when you want to do pre-poll logging or start a transaction etc. * * The listener can be async and it will be awaited if so. * @example * ```typescript * subscriber.onBeforePoll(async (metadata) => { console.log(metadata.watermark) }) * ``` * @param listener The listener function to invoke with the pre-poll metadata * @returns The subscriber so `on*` calls can be chained */ onBeforePoll(listener: TypedAsyncEventListener) {} /** * Register an event handler to run after every subscription poll. * * This is useful when you want to process all subscribed transactions * in a transactionally consistent manner rather than piecemeal for each * filter, or to have a hook that occurs at the end of each poll to commit * transactions etc. * * The listener can be async and it will be awaited if so. 
* @example * ```typescript * subscriber.onPoll(async (pollResult) => { console.log(pollResult.subscribedTransactions.length, pollResult.syncedRoundRange) }) * ``` * @param listener The listener function to invoke with the poll result * @returns The subscriber so `on*` calls can be chained */ onPoll(listener: TypedAsyncEventListener) {} ```` The `TypedAsyncEventListener` type is defined as: ```typescript type TypedAsyncEventListener<T> = (event: T, eventName: string | symbol) => Promise<void> | void; ``` This allows you to use async or sync event listeners. Event listeners are called one-by-one (and awaited) in the order the registrations occur. If you call `onBatch` it will be called first, with the full set of transactions that were found in the current poll (0 or more). Following that, each transaction in turn will then be passed to the listener(s) that subscribed with `on` for that event. The default type that will be received is a `SubscribedTransaction`, which can be imported like so: ```typescript import type { SubscribedTransaction } from '@algorandfoundation/algokit-subscriber/types'; ``` See the `SubscribedTransaction` type for more detail. Alternatively, if you defined a mapper against the filter then it will be applied before passing the objects through. If you call `onPoll` it will be called last (after all `on` and `onBatch` listeners) for each poll, with the full poll result. This allows you to process the entire poll batch in one transaction or have a hook to call after processing individual listeners (e.g. to commit a transaction). If you want to run code before a poll starts (e.g. to log or start a transaction) you can do so with `onBeforePoll`. ## Poll the chain There are two methods to poll the chain for events: `pollOnce` and `start`: ```typescript /** * Execute a single subscription poll. * * This is useful when executing in the context of a process * triggered by a recurring schedule / cron.
* @returns The poll result */ async pollOnce(): Promise<TransactionSubscriptionResult> {} /** * Start the subscriber in a loop until `stop` is called. * * This is useful when running in the context of a long-running process / container. * @param inspect A function that is called for each poll so the inner workings can be inspected / logged / etc. * @returns An object that contains a promise you can wait for after calling stop */ start(inspect?: (pollResult: TransactionSubscriptionResult) => void, suppressLog?: boolean): void {} ``` `pollOnce` is useful when you want to take control of scheduling the different polls, such as when running a Lambda on a schedule or a process via cron, etc. - it will do a single poll of the chain and return the result of that poll. `start` is useful when you have a long-running process or container and you want it to loop infinitely at the polling frequency specified in the constructor config. If you want to inspect or log what happens under the covers you can pass in an `inspect` lambda that will be called for each poll. If you use `start` then you can stop the polling by calling `stop`, which can be awaited to wait until everything is cleaned up. You may want to subscribe to Node.js kill signals to exit cleanly:

```typescript
['SIGINT', 'SIGTERM', 'SIGQUIT'].forEach(signal =>
  process.on(signal, () => {
    // eslint-disable-next-line no-console
    console.log(`Received ${signal}; stopping subscriber...`);
    subscriber.stop(signal).then(() => console.log('Subscriber stopped'));
  }),
);
```

## Handling errors Because `start` isn’t a blocking method, you can’t simply wrap it in a try/catch. To handle errors, you can register error handlers/listeners using the `onError` method. This works in a similar way to the other `on*` methods. ````typescript /** * Register an error handler to run if an error is thrown during processing or event handling. * * This is useful to handle any errors that occur and can be used to perform retries, logging or cleanup activities.
* * The listener can be async and it will be awaited if so. * @example * ```typescript * subscriber.onError((error) => { console.error(error) }) * ``` * @example * ```typescript * const maxRetries = 3 * let retryCount = 0 * subscriber.onError(async (error) => { * retryCount++ * if (retryCount > maxRetries) { * console.error(error) * return * } * console.log(`Error occurred, retrying in 2 seconds (${retryCount}/${maxRetries})`) * await new Promise((r) => setTimeout(r, 2_000)) * subscriber.start() * }) * ``` * @param listener The listener function to invoke with the error that was thrown * @returns The subscriber so `on*` calls can be chained */ onError(listener: ErrorListener) {} ```` The `ErrorListener` type is defined as: ```typescript type ErrorListener = (error: unknown) => Promise<void> | void; ``` This allows you to use async or sync error listeners. Multiple error listeners can be added, and each will be called one-by-one (and awaited) in the order the registrations occur. When no error listeners have been registered, a default listener is used to re-throw any exception so it can be caught by global uncaught exception handlers. Once an error listener has been registered, the default listener is removed and it’s the responsibility of the registered error listener to perform any error handling. ## Examples See the examples in the library repository.
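To illustrate the listener ordering described above (`onBatch` first with the whole batch, then `on` per transaction, then `onPoll` last), here is a toy dispatcher that reproduces that sequencing. It is not the library’s implementation (for instance, it ignores filter names), just a minimal model of the dispatch order:

```typescript
// Toy model of the per-poll dispatch order: batch listeners, then per-transaction
// listeners, then poll listeners. All names here are illustrative.
type Txn = { id: string };
type Listener<T> = (event: T) => Promise<void> | void;

class MiniDispatcher {
  private batchListeners: Listener<Txn[]>[] = [];
  private eachListeners: Listener<Txn>[] = [];
  private pollListeners: Listener<Txn[]>[] = [];

  onBatch(l: Listener<Txn[]>) { this.batchListeners.push(l); return this; }
  on(l: Listener<Txn>) { this.eachListeners.push(l); return this; }
  onPoll(l: Listener<Txn[]>) { this.pollListeners.push(l); return this; }

  async dispatch(txns: Txn[]): Promise<void> {
    for (const l of this.batchListeners) await l(txns); // batch listeners first
    for (const txn of txns) {
      for (const l of this.eachListeners) await l(txn); // then one-by-one, awaited in order
    }
    for (const l of this.pollListeners) await l(txns); // poll hook last
  }
}
```

Each listener is awaited before the next runs, mirroring the sequential, in-registration-order semantics the subscriber documents.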
# `getSubscribedTransactions`
`getSubscribedTransactions` is the core building block at the centre of this library. It’s a simple but flexible mechanism that allows you to enact a single subscription “poll” of the Algorand blockchain. This is a lower level building block; you likely don’t want to use it directly, but instead use the `AlgorandSubscriber` class. You can use this method to orchestrate everything from an index of all relevant data from the start of the chain through to simply subscribing to relevant transactions as they emerge at the tip of the chain. It allows you to have reliable at-least-once delivery, even if your code has outages, through the use of watermarking. ```typescript /** * Executes a single pull/poll to subscribe to transactions on the configured Algorand * blockchain for the given subscription context. * @param subscription The subscription context. * @param algod An Algod client. * @param indexer An optional indexer client, only needed when `onMaxRounds` is `catchup-with-indexer`. * @returns The result of this subscription pull/poll. */ export async function getSubscribedTransactions( subscription: TransactionSubscriptionParams, algod: Algodv2, indexer?: Indexer, ): Promise<TransactionSubscriptionResult>; ``` ## TransactionSubscriptionParams Specifying a subscription requires passing in a `TransactionSubscriptionParams` object, which configures the behaviour: ````typescript /** Parameters to control a single subscription pull/poll. */ export interface TransactionSubscriptionParams { /** The filter(s) to apply to find transactions of interest. * A list of filters with corresponding names. * * @example * ```typescript * filter: [{ * name: 'asset-transfers', * filter: { * type: TransactionType.axfer, * //... * } * }, { * name: 'payments', * filter: { * type: TransactionType.pay, * //... * } * }] * ``` * */ filters: NamedTransactionFilter[]; /** Any ARC-28 event definitions to process from app call logs */ arc28Events?: Arc28EventGroup[]; /** The current round watermark that transactions have previously been synced to.
* * Persist this value as you process transactions processed from this method * to allow for resilient and incremental syncing. * * Syncing will start from `watermark + 1`. * * Start from 0 if you want to start from the beginning of time, noting that * will be slow if `onMaxRounds` is `sync-oldest`. **/ watermark: bigint; /** The maximum number of rounds to sync for each subscription pull/poll. * * Defaults to 500. * * This gives you control over how many rounds you wait for at a time, * your staleness tolerance when using `skip-sync-newest` or `fail`, and * your catchup speed when using `sync-oldest`. **/ maxRoundsToSync?: number; /** * The maximum number of rounds to sync from indexer when using `syncBehaviour: 'catchup-with-indexer'. * * By default there is no limit and it will paginate through all of the rounds. * Sometimes this can result in an incredibly long catchup time that may break the service * due to execution and memory constraints, particularly for filters that result in a large number of transactions. * * Instead, this allows indexer catchup to be split into multiple polls, each with a transactionally consistent * boundary based on the number of rounds specified here. */ maxIndexerRoundsToSync?: number; /** If the current tip of the configured Algorand blockchain is more than `maxRoundsToSync` * past `watermark` then how should that be handled: * * `skip-sync-newest`: Discard old blocks/transactions and sync the newest; useful * for real-time notification scenarios where you don't care about history and * are happy to lose old transactions. * * `sync-oldest`: Sync from the oldest rounds forward `maxRoundsToSync` rounds * using algod; note: this will be slow if you are starting from 0 and requires * an archival node. * * `sync-oldest-start-now`: Same as `sync-oldest`, but if the `watermark` is `0` * then start at the current round i.e. 
don't sync historical records, but once * subscribing starts sync everything; note: if it falls behind it requires an * archival node. * * `catchup-with-indexer`: Sync to round `currentRound - maxRoundsToSync + 1` * using indexer (much faster than using algod for long time periods) and then * use algod from there. * * `fail`: Throw an error. **/ syncBehaviour: | 'skip-sync-newest' | 'sync-oldest' | 'sync-oldest-start-now' | 'catchup-with-indexer' | 'fail'; } ```` ## TransactionFilter The allows you to specify a set of filters to return a subset of transactions you are interested in. Each filter contains a `filter` property of type `TransactionFilter`, which matches the following type: ````typescript /** Common parameters to control a single subscription pull/poll for both `AlgorandSubscriber` and `getSubscribedTransactions`. */ export interface CoreTransactionSubscriptionParams { /** The filter(s) to apply to find transactions of interest. * A list of filters with corresponding names. * * @example * ```typescript * filter: [{ * name: 'asset-transfers', * filter: { * type: TransactionType.axfer, * //... * } * }, { * name: 'payments', * filter: { * type: TransactionType.pay, * //... * } * }] * ``` * */ filters: NamedTransactionFilter[]; /** Any ARC-28 event definitions to process from app call logs */ arc28Events?: Arc28EventGroup[]; /** The maximum number of rounds to sync from algod for each subscription pull/poll. * * Defaults to 500. * * This gives you control over how many rounds you wait for at a time, * your staleness tolerance when using `skip-sync-newest` or `fail`, and * your catchup speed when using `sync-oldest`. **/ maxRoundsToSync?: number; /** * The maximum number of rounds to sync from indexer when using `syncBehaviour: 'catchup-with-indexer'. * * By default there is no limit and it will paginate through all of the rounds. 
* Sometimes this can result in an incredibly long catchup time that may break the service * due to execution and memory constraints, particularly for filters that result in a large number of transactions. * * Instead, this allows indexer catchup to be split into multiple polls, each with a transactionally consistent * boundary based on the number of rounds specified here. */ maxIndexerRoundsToSync?: number; /** If the current tip of the configured Algorand blockchain is more than `maxRoundsToSync` * past `watermark` then how should that be handled: * * `skip-sync-newest`: Discard old blocks/transactions and sync the newest; useful * for real-time notification scenarios where you don't care about history and * are happy to lose old transactions. * * `sync-oldest`: Sync from the oldest rounds forward `maxRoundsToSync` rounds * using algod; note: this will be slow if you are starting from 0 and requires * an archival node. * * `sync-oldest-start-now`: Same as `sync-oldest`, but if the `watermark` is `0` * then start at the current round i.e. don't sync historical records, but once * subscribing starts sync everything; note: if it falls behind it requires an * archival node. * * `catchup-with-indexer`: Sync to round `currentRound - maxRoundsToSync + 1` * using indexer (much faster than using algod for long time periods) and then * use algod from there. * * `fail`: Throw an error. **/ syncBehaviour: | 'skip-sync-newest' | 'sync-oldest' | 'sync-oldest-start-now' | 'catchup-with-indexer' | 'fail'; } ```` Each filter you provide within this type will apply an AND logic between the specified filters, e.g. ```typescript filter: { type: TransactionType.axfer, sender: "ABC..." } ``` Will return transactions that are `axfer` type AND have a sender of `"ABC..."`. 
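To build intuition for this matching behaviour, here is a simplified sketch (not the library's actual implementation; `SimpleTxn`, `SimpleFilter`, and `matchedFilterNames` are made-up names for illustration): properties within a single filter combine with AND, while supplying several named filters gives OR semantics across them, with each match recorded against the filter's name.

```typescript
// Simplified sketch of filter matching semantics (hypothetical helper, not library code).
type SimpleTxn = { type: string; sender: string };
type SimpleFilter = Partial<SimpleTxn>;
type NamedFilter = { name: string; filter: SimpleFilter };

function matchedFilterNames(txn: SimpleTxn, filters: NamedFilter[]): string[] {
  return filters
    .filter(({ filter }) =>
      // AND: every property specified on a single filter must match
      Object.entries(filter).every(([key, value]) => txn[key as keyof SimpleTxn] === value),
    )
    // OR: any named filter can independently match; the names record which did
    .map(({ name }) => name);
}

const txn: SimpleTxn = { type: 'axfer', sender: 'ABC' };
const names = matchedFilterNames(txn, [
  { name: 'asset-transfers', filter: { type: 'axfer' } },
  { name: 'abc-payments', filter: { type: 'pay', sender: 'ABC' } },
]);
// names === ['asset-transfers']: only the first filter matched all of its properties
```

This mirrors how the subscriber reports matches back per filter name, which the `NamedTransactionFilter` section below relies on.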
### NamedTransactionFilter

You can specify multiple filters in an array, where each filter is a `NamedTransactionFilter`, which consists of:

```typescript
/** Specify a named filter to apply to find transactions of interest. */
export interface NamedTransactionFilter {
  /** The name to give the filter. */
  name: string;
  /** The filter itself. */
  filter: TransactionFilter;
}
```

This gives you the ability to detect which filter got matched when a transaction is returned, noting that you can use the same name multiple times if there are multiple filters (aka OR logic) that comprise the same logical filter.

## Arc28EventGroup

The `Arc28EventGroup` type allows you to define any ARC-28 events that may appear in subscribed transactions so they can either be subscribed to, or be processed and added to the resulting `TransactionSubscriptionResult`.

## TransactionSubscriptionResult

The result of calling `getSubscribedTransactions` is a `TransactionSubscriptionResult`:

```typescript
/** The result of a single subscription pull/poll. */
export interface TransactionSubscriptionResult {
  /** The round range that was synced from/to */
  syncedRoundRange: [startRound: bigint, endRound: bigint];
  /** The current detected tip of the configured Algorand blockchain. */
  currentRound: bigint;
  /** The watermark value that was retrieved at the start of the subscription poll. */
  startingWatermark: bigint;
  /** The new watermark value to persist for the next call to
   * `getSubscribedTransactions` to continue the sync.
   * Will be equal to `syncedRoundRange[1]`. Only persist this
   * after processing (or in the same atomic transaction as)
   * subscribed transactions to keep it reliable. */
  newWatermark: bigint;
  /** Any transactions that matched the given filter within
   * the synced round range. This substantively uses the [indexer transaction
   * format](https://dev.algorand.co/reference/rest-apis/indexer#transaction)
   * to represent the data with some additional fields.
   */
  subscribedTransactions: SubscribedTransaction[];
  /** The metadata about any blocks that were retrieved from algod as part
   * of the subscription poll. */
  blockMetadata?: BlockMetadata[];
}

/** Metadata about a block that was retrieved from algod. */
export interface BlockMetadata {
  /** The base64 block hash. */
  hash?: string;
  /** The round of the block. */
  round: bigint;
  /** Block creation timestamp in seconds since epoch */
  timestamp: number;
  /** The genesis ID of the chain. */
  genesisId: string;
  /** The base64 genesis hash of the chain. */
  genesisHash: string;
  /** The base64 previous block hash. */
  previousBlockHash?: string;
  /** The base64 seed of the block. */
  seed: string;
  /** Fields relating to rewards */
  rewards?: BlockRewards;
  /** Count of parent transactions in this block */
  parentTransactionCount: number;
  /** Full count of transactions and inner transactions (recursively) in this block. */
  fullTransactionCount: number;
  /** Number of the next transaction that will be committed after this block. It is 0 when no
   * transactions have ever been committed (since TxnCounter started being supported). */
  txnCounter: bigint;
  /** TransactionsRoot authenticates the set of transactions appearing in the block. More
   * specifically, it's the root of a merkle tree whose leaves are the block's Txids, in
   * lexicographic order. For the empty block, it's 0. Note that the TxnRoot does not
   * authenticate the signatures on the transactions, only the transactions themselves.
   * Two blocks with the same transactions but in a different order and with different
   * signatures will have the same TxnRoot.
   * Pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" */
  transactionsRoot: string;
  /** TransactionsRootSHA256 is an auxiliary TransactionRoot, built using a vector commitment
   * instead of a merkle tree, and SHA256 hash function instead of the default SHA512_256.
   * This commitment can be used on environments where only the SHA256 function exists. */
  transactionsRootSha256: string;
  /** Fields relating to a protocol upgrade. */
  upgradeState?: BlockUpgradeState;
  /** Tracks the status of state proofs. */
  stateProofTracking?: BlockStateProofTracking[];
  /** Fields relating to voting for a protocol upgrade. */
  upgradeVote?: BlockUpgradeVote;
  /** Participation account data that needs to be checked/acted on by the network. */
  participationUpdates?: ParticipationUpdates;
  /** Address of the proposer of this block */
  proposer?: string;
}
```

## SubscribedTransaction

The common model used to expose a transaction that is returned from a subscription is a `SubscribedTransaction`, which can be imported like so:

```typescript
import type { SubscribedTransaction } from '@algorandfoundation/algokit-subscriber/types';
```

This type is substantively based on `algosdk.indexerModels.Transaction`. While the indexer type is used, the subscriber itself doesn't have to use indexer; any transactions it retrieves from algod are transformed to this common model type. Beyond the indexer type it has some modifications to:

* Make `id` required
* Add the `parentTransactionId` field so inner transactions have a reference to their parent
* Override the type of `innerTxns` to be `SubscribedTransaction[]` so inner transactions (recursively) get these extra fields too
* Add emitted ARC-28 events via `arc28Events`
* Add the list of filter(s) that caused the transaction to be matched
* Add the list of balanceChange(s) that occurred in the transaction

The definition of the type is:

```typescript
export class SubscribedTransaction extends algosdk.indexerModels.Transaction {
  id: string;
  /** The intra-round offset of the parent of this transaction (if it's an inner transaction). */
  parentIntraRoundOffset?: number;
  /** The transaction ID of the parent of this transaction (if it's an inner transaction). */
  parentTransactionId?: string;
  /** Inner transactions produced by application execution.
   */
  innerTxns?: SubscribedTransaction[];
  /** Any ARC-28 events emitted from an app call. */
  arc28Events?: EmittedArc28Event[];
  /** The names of any filters that matched the given transaction to result in it being 'subscribed'. */
  filtersMatched?: string[];
  /** The balance changes in the transaction. */
  balanceChanges?: BalanceChange[];

  constructor({
    id,
    parentIntraRoundOffset,
    parentTransactionId,
    innerTxns,
    arc28Events,
    filtersMatched,
    balanceChanges,
    ...rest
  }: Omit) {
    super(rest);
    this.id = id;
    this.parentIntraRoundOffset = parentIntraRoundOffset;
    this.parentTransactionId = parentTransactionId;
    this.innerTxns = innerTxns;
    this.arc28Events = arc28Events;
    this.filtersMatched = filtersMatched;
    this.balanceChanges = balanceChanges;
  }
}

/** An emitted ARC-28 event extracted from an app call log. */
export interface EmittedArc28Event extends Arc28EventToProcess {
  /** The ordered arguments extracted from the event that was emitted */
  args: ABIValue[];
  /** The named arguments extracted from the event that was emitted (where the arguments had a name defined) */
  argsByName: Record<string, ABIValue>;
}

/** An ARC-28 event to be processed */
export interface Arc28EventToProcess {
  /** The name of the ARC-28 event group the event belongs to */
  groupName: string;
  /** The name of the ARC-28 event that was triggered */
  eventName: string;
  /** The signature of the event e.g. `EventName(type1,type2)` */
  eventSignature: string;
  /** The 4-byte hex prefix for the event */
  eventPrefix: string;
  /** The ARC-28 definition of the event */
  eventDefinition: Arc28Event;
}

/** Represents a balance change effect for a transaction. */
export interface BalanceChange {
  /** The address that the balance change is for. */
  address: string;
  /** The asset ID of the balance change, or 0 for Algos. */
  assetId: bigint;
  /** The amount of the balance change in smallest divisible unit or microAlgos.
   */
  amount: bigint;
  /** The roles the account was playing that led to the balance change */
  roles: BalanceChangeRole[];
}

/** The role that an account was playing for a given balance change. */
export enum BalanceChangeRole {
  /** Account was sending a transaction (sending asset and/or spending fee if asset `0`) */
  Sender,
  /** Account was receiving a transaction */
  Receiver,
  /** Account was having an asset amount closed to it */
  CloseTo,
}
```

## Examples

Here are some examples of how to use this method:

### Real-time notification of transactions of interest at the tip of the chain, discarding stale records

If you ran the following code on a cron schedule of (say) every 5 seconds it would notify you every time the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`) sent a transaction. If the service stopped working for a period of time and fell behind, it would drop old records and restart notifications from the new tip.
```typescript
const algorand = AlgorandClient.defaultLocalNet();

// You would need to implement getLastWatermark() to retrieve from a persistence store
const watermark = await getLastWatermark();

const subscription = await getSubscribedTransactions(
  {
    filters: [
      {
        name: 'filter1',
        filter: {
          sender: 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU',
        },
      },
    ],
    watermark,
    maxRoundsToSync: 100,
    onMaxRounds: 'skip-sync-newest',
  },
  algorand.client.algod,
);

if (subscription.subscribedTransactions.length > 0) {
  // You would need to implement notifyTransactions to action the transactions
  await notifyTransactions(subscription.subscribedTransactions);
}

// You would need to implement saveWatermark to persist the watermark to the persistence store
await saveWatermark(subscription.newWatermark);
```

### Real-time notification of transactions of interest at the tip of the chain with at-least-once delivery

If you ran the following code on a cron schedule of (say) every 5 seconds it would notify you every time the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`) sent a transaction. If the service stopped working for a period of time and fell behind, it would pick up where it left off and catch up using algod (note: you need to connect it to an archival node).
```typescript
const algorand = AlgorandClient.defaultLocalNet();

// You would need to implement getLastWatermark() to retrieve from a persistence store
const watermark = await getLastWatermark();

const subscription = await getSubscribedTransactions(
  {
    filters: [
      {
        name: 'filter1',
        filter: {
          sender: 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU',
        },
      },
    ],
    watermark,
    maxRoundsToSync: 100,
    onMaxRounds: 'sync-oldest-start-now',
  },
  algorand.client.algod,
);

if (subscription.subscribedTransactions.length > 0) {
  // You would need to implement notifyTransactions to action the transactions
  await notifyTransactions(subscription.subscribedTransactions);
}

// You would need to implement saveWatermark to persist the watermark to the persistence store
await saveWatermark(subscription.newWatermark);
```

### Quickly building a reliable, up-to-date cache index of all transactions of interest from the beginning of the chain

If you ran the following code on a cron schedule of (say) every 30-60 seconds it would create a cached index of all assets created by the account (in this case the Data History Museum TestNet account `ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU`). Given it uses indexer to catch up, you can deploy this into a fresh environment with an empty database and it will catch up in seconds rather than days.
```typescript
const algorand = AlgorandClient.defaultLocalNet();

// You would need to implement getLastWatermark() to retrieve from a persistence store
const watermark = await getLastWatermark();

const subscription = await getSubscribedTransactions(
  {
    filters: [
      {
        name: 'filter1',
        filter: {
          type: TransactionType.acfg,
          sender: 'ER7AMZRPD5KDVFWTUUVOADSOWM4RQKEEV2EDYRVSA757UHXOIEKGMBQIVU',
          assetCreate: true,
        },
      },
    ],
    watermark,
    maxRoundsToSync: 1000,
    onMaxRounds: 'catchup-with-indexer',
  },
  algorand.client.algod,
  algorand.client.indexer,
);

if (subscription.subscribedTransactions.length > 0) {
  // You would need to implement saveTransactions to persist the transactions
  await saveTransactions(subscription.subscribedTransactions);
}

// You would need to implement saveWatermark to persist the watermark to the persistence store
await saveWatermark(subscription.newWatermark);
```
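After a poll returns, you may want to aggregate the `balanceChanges` attached to subscribed transactions into net per-account totals. The helper below is a hypothetical post-processing sketch (not part of the library; `netBalanceChanges` and the simplified `SimpleBalanceChange` shape are illustrative assumptions):

```typescript
// Hypothetical helper: fold balance-change entries into net totals keyed by
// "address:assetId" (assetId 0n means Algo, amounts are in microAlgo).
interface SimpleBalanceChange {
  address: string;
  assetId: bigint;
  amount: bigint;
}

function netBalanceChanges(changes: SimpleBalanceChange[]): Map<string, bigint> {
  const totals = new Map<string, bigint>();
  for (const { address, assetId, amount } of changes) {
    const key = `${address}:${assetId}`;
    // Sum positive (received) and negative (sent/fee) amounts per account+asset
    totals.set(key, (totals.get(key) ?? 0n) + amount);
  }
  return totals;
}

// e.g. a 1 Algo payment where the sender also pays a 0.001 Algo fee
const totals = netBalanceChanges([
  { address: 'SENDER', assetId: 0n, amount: -1_000_000n },
  { address: 'SENDER', assetId: 0n, amount: -1_000n },
  { address: 'RECEIVER', assetId: 0n, amount: 1_000_000n },
]);
// totals.get('SENDER:0') === -1001000n
```

Because each `BalanceChange` also carries `roles`, the same fold can be restricted to, say, `Sender` entries only if you want fee accounting separated from transfers.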
# Testing Guide
The Algorand Python Testing framework provides powerful tools for testing Algorand Python smart contracts within a Python interpreter. This guide covers the main features and concepts of the framework, helping you write effective tests for your Algorand applications.

```{note}
For all code examples in the _Testing Guide_ section, assume `context` is an instance of `AlgopyTestContext` obtained using the `algopy_testing_context()` context manager. All subsequent code is executed within this context.
```

```{mermaid}
graph TD
    subgraph GA["Your Development Environment"]
        A["algopy (type stubs)"]
        B["algopy_testing (testing framework)
        (You are here 📍)"]
        C["puya (compiler)"]
    end

    subgraph GB["Your Algorand Project"]
        D[Your Algorand Python contract]
    end

    D -->|type hints inferred from| A
    D -->|compiled using| C
    D -->|tested via| B
```

> *High-level overview of the relationship between your smart contracts project, Algorand Python Testing framework, Algorand Python type stubs, and the compiler*

The Algorand Python Testing framework streamlines unit testing of your Algorand Python smart contracts by offering functionality to:

1. Simulate the Algorand Virtual Machine (AVM) environment
2. Create and manipulate test accounts, assets, applications, transactions, and ARC4 types
3. Test smart contract classes, including their states, variables, and methods
4. Verify logic signatures and subroutines
5. Manage global state, local state, scratch slots, and boxes in test contexts
6. Simulate transactions and transaction groups, including inner transactions
7. Verify opcode behavior

By using this framework, you can ensure your Algorand Python smart contracts function correctly before deploying them to a live network.

Key features of the framework include:

* `AlgopyTestContext`: The main entry point for testing, providing access to various testing utilities and simulated blockchain state
* AVM Type Simulation: Accurate representations of AVM types like `UInt64` and `Bytes`
* ARC4 Support: Tools for testing ARC4 contracts and methods, including struct definitions and ABI encoding/decoding
* Transaction Simulation: Ability to create and execute various transaction types
* State Management: Tools for managing and verifying global and local state changes
* Opcode Simulation: Implementations of AVM opcodes for accurate smart contract behavior testing

The framework is designed to work seamlessly with Algorand Python smart contracts, allowing developers to write comprehensive unit tests that closely mimic the behavior of contracts on the Algorand blockchain.
## Table of Contents

```{toctree}
---
maxdepth: 3
---

concepts
avm-types
arc4-types
transactions
contract-testing
signature-testing
state-management
subroutines
opcodes
```
# ARC4 Types
These types are available under the `algopy.arc4` namespace. Refer to the ARC-4 specification for more details on the spec.

```{hint}
Test context manager provides _value generators_ for ARC4 types. To access their _value generators_, use the `{context_instance}.any.arc4` property. See more examples below.
```

```{note}
For all `algopy.arc4` types, with and without a respective _value generator_, instantiation can be performed directly. If you have a suggestion for a new _value generator_ implementation, please open an issue in the [`algorand-python-testing`](https://github.com/algorandfoundation/algorand-python-testing) repository or contribute by following the [contribution guide](https://github.com/algorandfoundation/algorand-python-testing/blob/main/CONTRIBUTING).
```

```{testsetup}
import algopy
from algopy_testing import algopy_testing_context

# Create the context manager for snippets below
ctx_manager = algopy_testing_context()

# Enter the context
context = ctx_manager.__enter__()
```

## Unsigned Integers

```{testcode}
from algopy import arc4

# Integer types
uint8_value = arc4.UInt8(255)
uint16_value = arc4.UInt16(65535)
uint32_value = arc4.UInt32(4294967295)
uint64_value = arc4.UInt64(18446744073709551615)

...
# instantiate test context
...

# Generate a random unsigned arc4 integer with default range
uint8 = context.any.arc4.uint8()
uint16 = context.any.arc4.uint16()
uint32 = context.any.arc4.uint32()
uint64 = context.any.arc4.uint64()
biguint128 = context.any.arc4.biguint128()
biguint256 = context.any.arc4.biguint256()
biguint512 = context.any.arc4.biguint512()

# Generate a random unsigned arc4 integer with specified range
uint8_custom = context.any.arc4.uint8(min_value=10, max_value=100)
uint16_custom = context.any.arc4.uint16(min_value=1000, max_value=5000)
uint32_custom = context.any.arc4.uint32(min_value=100000, max_value=1000000)
uint64_custom = context.any.arc4.uint64(min_value=1000000000, max_value=10000000000)
biguint128_custom = context.any.arc4.biguint128(min_value=1000000000000000, max_value=10000000000000000)
biguint256_custom = context.any.arc4.biguint256(min_value=1000000000000000000000000, max_value=10000000000000000000000000)
biguint512_custom = context.any.arc4.biguint512(min_value=10000000000000000000000000000000000, max_value=10000000000000000000000000000000000)
```

## Address

```{testcode}
from algopy import arc4

# Address type
address_value = arc4.Address("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ")

# Generate a random address
random_address = context.any.arc4.address()

# Access native underlying type
native = random_address.native
```

## Dynamic Bytes

```{testcode}
from algopy import arc4

# Dynamic byte string
bytes_value = arc4.DynamicBytes(b"Hello, Algorand!")

# Generate random dynamic bytes
random_dynamic_bytes = context.any.arc4.dynamic_bytes(n=123)  # n is the number of bits in the arc4 dynamic bytes
```

## String

```{testcode}
from algopy import arc4

# UTF-8 encoded string
string_value = arc4.String("Hello, Algorand!")

# Generate random string
random_string = context.any.arc4.string(n=12)  # n is the number of bits in the arc4 string
```

```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# AVM Types
These types are available directly under the `algopy` namespace. They represent the basic AVM primitive types and can be instantiated directly or via *value generators*:

```{note}
For primitive `algopy` types such as `Account`, `Application`, `Asset`, `UInt64`, `BigUInt`, `Bytes`, `String`, with and without a respective _value generator_, instantiation can be performed directly. If you have a suggestion for a new _value generator_ implementation, please open an issue in the [`algorand-python-testing`](https://github.com/algorandfoundation/algorand-python-testing) repository or contribute by following the [contribution guide](https://github.com/algorandfoundation/algorand-python-testing/blob/main/CONTRIBUTING).
```

```{testsetup}
import algopy
from algopy_testing import algopy_testing_context

# Create the context manager for snippets below
ctx_manager = algopy_testing_context()

# Enter the context
context = ctx_manager.__enter__()
```

## UInt64

```{testcode}
# Direct instantiation
uint64_value = algopy.UInt64(100)

# Instantiate test context
...

# Generate a random UInt64 value
random_uint64 = context.any.uint64()

# Specify a range
random_uint64 = context.any.uint64(min_value=1000, max_value=9999)
```

## Bytes

```{testcode}
# Direct instantiation
bytes_value = algopy.Bytes(b"Hello, Algorand!")

# Instantiate test context
...

# Generate random byte sequences
random_bytes = context.any.bytes()

# Specify the length
random_bytes = context.any.bytes(length=32)
```

## String

```{testcode}
# Direct instantiation
string_value = algopy.String("Hello, Algorand!")

# Generate random strings
random_string = context.any.string()

# Specify the length
random_string = context.any.string(length=16)
```

## BigUInt

```{testcode}
# Direct instantiation
biguint_value = algopy.BigUInt(100)

# Generate a random BigUInt value
random_biguint = context.any.biguint()
```

## Asset

```{testcode}
# Direct instantiation
asset = algopy.Asset(asset_id=1001)

# Instantiate test context
...
# Generate a random asset
random_asset = context.any.asset(
    creator=...,  # Optional: Creator account
    name=...,  # Optional: Asset name
    unit_name=...,  # Optional: Unit name
    total=...,  # Optional: Total supply
    decimals=...,  # Optional: Number of decimals
    default_frozen=...,  # Optional: Default frozen state
    url=...,  # Optional: Asset URL
    metadata_hash=...,  # Optional: Metadata hash
    manager=...,  # Optional: Manager address
    reserve=...,  # Optional: Reserve address
    freeze=...,  # Optional: Freeze address
    clawback=...  # Optional: Clawback address
)

# Get an asset by ID
asset = context.ledger.get_asset(asset_id=random_asset.id)

# Update an asset
context.ledger.update_asset(
    random_asset,
    name=...,  # Optional: New asset name
    total=...,  # Optional: New total supply
    decimals=...,  # Optional: Number of decimals
    default_frozen=...,  # Optional: Default frozen state
    url=...,  # Optional: New asset URL
    metadata_hash=...,  # Optional: New metadata hash
    manager=...,  # Optional: New manager address
    reserve=...,  # Optional: New reserve address
    freeze=...,  # Optional: New freeze address
    clawback=...  # Optional: New clawback address
)
```

## Account

```{testcode}
# Direct instantiation
raw_address = 'PUYAGEGVCOEBP57LUKPNOCSMRWHZJSU4S62RGC2AONDUEIHC6P7FOPJQ4I'
account = algopy.Account(raw_address)  # zero address by default

# Instantiate test context
...
# Generate a random account
random_account = context.any.account(
    address=str(raw_address),  # Optional: Specify a custom address, defaults to a random address
    opted_asset_balances={},  # Optional: Specify opted asset balances as dict of assets to balance
    opted_apps=[],  # Optional: Specify opted apps as sequence of algopy.Application objects
    balance=...,  # Optional: Specify an initial balance
    min_balance=...,  # Optional: Specify a minimum balance
    auth_address=...,  # Optional: Specify an auth address
    total_assets=...,  # Optional: Specify the total number of assets
    total_assets_created=...,  # Optional: Specify the total number of created assets
    total_apps_created=...,  # Optional: Specify the total number of created applications
    total_apps_opted_in=...,  # Optional: Specify the total number of applications opted into
    total_extra_app_pages=...,  # Optional: Specify the total number of extra application pages
)

# Generate a random account that is opted into a specific asset
mock_asset = context.any.asset()
mock_account = context.any.account(
    opted_asset_balances={mock_asset: 123}
)

# Get an account by address
account = context.ledger.get_account(str(mock_account))

# Update an account
context.ledger.update_account(
    mock_account,
    balance=...,  # Optional: New balance
    min_balance=...,  # Optional: New minimum balance
    auth_address=context.any.account(),  # Optional: New auth address
    total_assets=...,  # Optional: New total number of assets
    total_created_assets=...,  # Optional: New total number of created assets
    total_apps_created=...,  # Optional: New total number of created applications
    total_apps_opted_in=...,  # Optional: New total number of applications opted into
    total_extra_app_pages=...,  # Optional: New total number of extra application pages
    rewards=...,  # Optional: New rewards
    status=...
    # Optional: New account status
)

# Check if an account is opted into a specific asset
opted_in = account.is_opted_in(mock_asset)
```

## Application

```{testcode}
# Direct instantiation
application = algopy.Application()

# Instantiate test context
...

# Generate a random application
random_app = context.any.application(
    approval_program=algopy.Bytes(b''),  # Optional: Specify a custom approval program
    clear_state_program=algopy.Bytes(b''),  # Optional: Specify a custom clear state program
    global_num_uint=algopy.UInt64(1),  # Optional: Number of global uint values
    global_num_bytes=algopy.UInt64(1),  # Optional: Number of global byte values
    local_num_uint=algopy.UInt64(1),  # Optional: Number of local uint values
    local_num_bytes=algopy.UInt64(1),  # Optional: Number of local byte values
    extra_program_pages=algopy.UInt64(1),  # Optional: Number of extra program pages
    creator=context.default_sender  # Optional: Specify the creator account
)

# Get an application by ID
app = context.ledger.get_app(app_id=random_app.id)

# Update an application
context.ledger.update_app(
    random_app,
    approval_program=...,  # Optional: New approval program
    clear_state_program=...,  # Optional: New clear state program
    global_num_uint=...,  # Optional: New number of global uint values
    global_num_bytes=...,  # Optional: New number of global byte values
    local_num_uint=...,  # Optional: New number of local uint values
    local_num_bytes=...,  # Optional: New number of local byte values
    extra_program_pages=...,  # Optional: New number of extra program pages
    creator=...  # Optional: New creator account
)

# Patch logs for an application. When accessed via transactions or inner transaction
# related opcodes, the patched logs will be returned unless new logs were added into
# the transaction during execution.
test_app = context.any.application(logs=b"log entry" or [b"log entry 1", b"log entry 2"])

# Get app associated with the active contract
class MyContract(algopy.ARC4Contract):
    ...
contract = MyContract()
active_app = context.ledger.get_app(contract)
```

```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# Concepts
The following sections provide an overview of key concepts and features in the Algorand Python Testing framework. ## Test Context The main abstraction for interacting with the testing framework is the . It creates an emulated Algorand environment that closely mimics AVM behavior relevant to unit testing the contracts and provides a Pythonic interface for interacting with the emulated environment. ```python from algopy_testing import algopy_testing_context def test_my_contract(): # Recommended way to instantiate the test context with algopy_testing_context() as ctx: # Your test code here pass # ctx is automatically reset after the test code is executed ``` The context manager interface exposes three main properties: 1. `ledger`: An instance of `LedgerContext` for interacting with and querying the emulated Algorand ledger state. 2. `txn`: An instance of `TransactionContext` for creating and managing transaction groups, submitting transactions, and accessing transaction results. 3. `any`: An instance of `AlgopyValueGenerator` for generating randomized test data. For detailed method signatures, parameters, and return types, refer to the following API sections: The `any` property provides access to different value generators: * `AVMValueGenerator`: Base abstractions for AVM types. All methods are available directly on the instance returned from `any`. * `TxnValueGenerator`: Accessible via `any.txn`, for transaction-related data. * `ARC4ValueGenerator`: Accessible via `any.arc4`, for ARC4 type data. These generators allow creation of constrained random values for various AVM entities (accounts, assets, applications, etc.) when specific values are not required. ```{hint} Value generators are powerful tools for generating test data for specified AVM types. They allow further constraints on random value generation via arguments, making it easier to generate test data when exact values are not necessary. 
When used with the 'Arrange, Act, Assert' pattern, value generators can be especially useful in setting up clear and concise test data in arrange steps. They can also serve as a base building block that can be integrated/reused with popular Python property-based testing frameworks like [`hypothesis`](https://hypothesis.readthedocs.io/en/latest/).
```

## Types of `algopy` stub implementations

As explained in the introduction, `algorand-python-testing` *injects* test implementations for stubs available in the `algorand-python` package. However, not all of the stubs are implemented in the same manner:

1. **Native**: Fully matches AVM computation in Python. For example, `algopy.op.sha256` and other cryptographic operations behave identically in AVM and unit tests. This implies that the majority of opcodes that are 'pure' functions in AVM also have a native Python implementation provided by this package. These abstractions and opcodes can be used within and outside of the testing context.
2. **Emulated**: Uses `AlgopyTestContext` to mimic AVM behavior. For example, `Box.put` on an `algopy.Box` within a test context stores data in the test manager, not the real Algorand network, but provides the same interface.
3. **Mockable**: Not implemented, but can be mocked or patched. For example, `algopy.abi_call` can be mocked to return specific values or behaviors; otherwise, it raises a `NotImplementedError`. This category covers cases where a native or emulated implementation in a unit test context is impractical or overly complex.

For a full list of all public `algopy` types and their corresponding implementation category, refer to the coverage section.
# Smart Contract Testing
This guide provides an overview of how to test smart contracts using the Algorand Python SDK (`algopy`). We will cover the basics of testing `ARC4Contract` and `Contract` classes, focusing on `abimethod` and `baremethod` decorators.

```{note}
The code snippets showcasing the contract testing capabilities use [pytest](https://docs.pytest.org/en/latest/) as the test framework. However, note that the `algorand-python-testing` package can be used with any other test framework that supports Python. `pytest` is used for demonstration purposes in this documentation.
```

```{testsetup}
import algopy
import algopy_testing
from algopy_testing import algopy_testing_context

# Create the context manager for snippets below
ctx_manager = algopy_testing_context()

# Enter the context
context = ctx_manager.__enter__()
```

The following video includes a practical tutorial on testing Algorand smart contracts: [Algorand Smart Contract Testing - Python](https://www.youtube.com/embed/B4mzNmQB5mU?rel=0)

## `algopy.ARC4Contract`

Subclasses of `algopy.ARC4Contract` are **required** to be instantiated with an active test context. As part of instantiation, the test context will automatically create a matching `algopy.Application` object instance. Within the class implementation, methods decorated with `algopy.arc4.abimethod` and `algopy.arc4.baremethod` will automatically assemble an `algopy.gtxn.ApplicationCallTransaction` transaction to emulate the AVM application call. This behavior can be overridden by setting the transaction group manually as part of test setup, which is done via implicit invocation of the `algopy_testing.context.any_application()` *value generator* (refer to the value generators section for more details).
```{testcode}
class SimpleVotingContract(algopy.ARC4Contract):
    def __init__(self) -> None:
        self.topic = algopy.GlobalState(
            algopy.Bytes(b"default_topic"), key="topic", description="Voting topic"
        )
        self.votes = algopy.GlobalState(
            algopy.UInt64(0),
            key="votes",
            description="Votes for the option",
        )
        self.voted = algopy.LocalState(
            algopy.UInt64, key="voted", description="Tracks if an account has voted"
        )

    @algopy.arc4.abimethod(create="require")
    def create(self, initial_topic: algopy.Bytes) -> None:
        self.topic.value = initial_topic
        self.votes.value = algopy.UInt64(0)

    @algopy.arc4.abimethod
    def vote(self) -> algopy.UInt64:
        assert self.voted[algopy.Txn.sender] == algopy.UInt64(0), "Account has already voted"
        self.votes.value += algopy.UInt64(1)
        self.voted[algopy.Txn.sender] = algopy.UInt64(1)
        return self.votes.value

    @algopy.arc4.abimethod(readonly=True)
    def get_votes(self) -> algopy.UInt64:
        return self.votes.value

    @algopy.arc4.abimethod
    def change_topic(self, new_topic: algopy.Bytes) -> None:
        assert algopy.Txn.sender == algopy.Txn.application_id.creator, "Only creator can change topic"
        self.topic.value = new_topic
        self.votes.value = algopy.UInt64(0)
        # Reset user's vote (this is simplified per single user for the sake of example)
        self.voted[algopy.Txn.sender] = algopy.UInt64(0)

# Arrange
initial_topic = algopy.Bytes(b"initial_topic")
contract = SimpleVotingContract()
contract.voted[context.default_sender] = algopy.UInt64(0)

# Act - Create the contract
contract.create(initial_topic)

# Assert - Check initial state
assert contract.topic.value == initial_topic
assert contract.votes.value == algopy.UInt64(0)

# Act - Vote
# The method `.vote()` is decorated with `algopy.arc4.abimethod`, which means
# it will assemble a transaction to emulate the AVM application call
result = contract.vote()

# Assert - you can access the corresponding auto generated application call
# transaction via the test context
assert len(context.txn.last_group.txns) == 1

# Assert - Note how local and global state are accessed via regular python instance attributes
assert result == algopy.UInt64(1)
assert contract.votes.value == algopy.UInt64(1)
assert contract.voted[context.default_sender] == algopy.UInt64(1)

# Act - Change topic
new_topic = algopy.Bytes(b"new_topic")
contract.change_topic(new_topic)

# Assert - Check topic changed and votes reset
assert contract.topic.value == new_topic
assert contract.votes.value == algopy.UInt64(0)
assert contract.voted[context.default_sender] == algopy.UInt64(0)

# Act - Get votes (should be 0 after reset)
votes = contract.get_votes()

# Assert - Check votes
assert votes == algopy.UInt64(0)
```

For more examples of tests using `algopy.ARC4Contract`, see the examples section.

## `algopy.Contract`

Subclasses of `algopy.Contract` are **required** to be instantiated with an active test context. As part of instantiation, the test context will automatically create a matching `algopy.Application` object instance. This behavior is identical to `algopy.ARC4Contract` class instances.

Unlike `algopy.ARC4Contract`, `algopy.Contract` requires manual setup of the transaction context and explicit method calls. Alternatively, if your aim is only to patch metadata on the active transaction, you can use `active_txn_overrides` to specify application arguments and foreign arrays without needing to create a full transaction group.
Here's an example demonstrating how to test a `Contract` class:

```{testcode}
import algopy
import pytest
from algopy_testing import AlgopyTestContext, algopy_testing_context

class CounterContract(algopy.Contract):
    def __init__(self):
        self.counter = algopy.UInt64(0)

    @algopy.subroutine
    def increment(self):
        self.counter += algopy.UInt64(1)
        return algopy.UInt64(1)

    @algopy.arc4.baremethod
    def approval_program(self):
        return self.increment()

    @algopy.arc4.baremethod
    def clear_state_program(self):
        return algopy.UInt64(1)

@pytest.fixture()
def context():
    with algopy_testing_context() as ctx:
        yield ctx

def test_counter_contract(context: AlgopyTestContext):
    # Instantiate contract
    contract = CounterContract()

    # Set up the transaction context using active_txn_overrides
    with context.txn.create_group(
        active_txn_overrides={
            "sender": context.default_sender,
            "app_args": [algopy.Bytes(b"increment")],
        }
    ):
        # Invoke approval program
        result = contract.approval_program()

    # Assert approval program result
    assert result == algopy.UInt64(1)

    # Assert counter value
    assert contract.counter == algopy.UInt64(1)

    # Test clear state program
    assert contract.clear_state_program() == algopy.UInt64(1)

def test_counter_contract_multiple_txns(context: AlgopyTestContext):
    contract = CounterContract()

    # For scenarios with multiple transactions, you can still use gtxns
    extra_payment = context.any.txn.payment()
    with context.txn.create_group(
        gtxns=[
            extra_payment,
            context.any.txn.application_call(
                sender=context.default_sender,
                app_id=contract.app_id,
                app_args=[algopy.Bytes(b"increment")],
            ),
        ],
        active_txn_index=1,  # Set the application call as the active transaction
    ):
        result = contract.approval_program()

    assert result == algopy.UInt64(1)
    assert contract.counter == algopy.UInt64(1)
    assert len(context.txn.last_group.txns) == 2
```

In this example:

1. We use `context.txn.create_group()` with `active_txn_overrides` to set up the transaction context for a single application call. This simplifies the process when you don't need to specify a full transaction group.
2. The `active_txn_overrides` parameter allows you to specify `app_args` and other transaction fields directly, without creating a full `ApplicationCallTransaction` object.
3. For scenarios involving multiple transactions, you can still use the `gtxns` parameter to create a transaction group, as shown in the `test_counter_contract_multiple_txns` function.
4. The `app_id` is automatically set to the contract's application ID, so you don't need to specify it explicitly when using `active_txn_overrides`.

This approach provides flexibility in setting up the transaction context for testing `Contract` classes, allowing for both simple single-transaction scenarios and more complex multi-transaction tests.

## Defer contract method invocation

You can create deferred application calls for more complex testing scenarios where the order of transactions needs to be controlled:

```python
def test_deferred_call(context):
    contract = MyARC4Contract()
    extra_payment = context.any.txn.payment()
    extra_asset_transfer = context.any.txn.asset_transfer()
    implicit_payment = context.any.txn.payment()
    deferred_call = context.txn.defer_app_call(contract.some_method, implicit_payment)

    with context.txn.create_group([extra_payment, deferred_call, extra_asset_transfer]):
        result = deferred_call.submit()

    print(context.txn.last_group)  # [extra_payment, implicit_payment, app call, extra_asset_transfer]
```

A deferred application call prepares the application call transaction without immediately executing it. The call can be executed later by invoking the `.submit()` method on the deferred application call instance. As demonstrated in the example, you can also include the deferred call in a transaction group creation context manager to execute it as part of a larger transaction group. When `.submit()` is called, only the specific method passed to `defer_app_call()` will be executed.
```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# AVM Opcodes
The coverage file provides a comprehensive list of all opcodes and their respective types, categorized as *Mockable*, *Emulated*, or *Native* within the `algorand-python-testing` package. This section highlights a **subset** of opcodes and types that typically require interaction with the test context manager.

`Native` opcodes are assumed to function as they do in the Algorand Virtual Machine, given their stateless nature. If you encounter issues with any `Native` opcodes, please raise an issue in the repository or contribute a PR following the contribution guide.

```{testsetup}
import algopy
from algopy_testing import algopy_testing_context

# Create the context manager for snippets below
ctx_manager = algopy_testing_context()

# Enter the context
context = ctx_manager.__enter__()
```

## Implemented Types

These types are fully implemented in Python and behave identically to their AVM counterparts:

### 1. Cryptographic Operations

The following opcodes are demonstrated:

* `op.sha256`
* `op.keccak256`
* `op.ecdsa_verify`

```{testcode}
from algopy import op

# SHA256 hash
data = algopy.Bytes(b"Hello, World!")
hashed = op.sha256(data)

# Keccak256 hash
keccak_hashed = op.keccak256(data)

# ECDSA verification
message_hash = bytes.fromhex("f809fd0aa0bb0f20b354c6b2f86ea751957a4e262a546bd716f34f69b9516ae1")
sig_r = bytes.fromhex("18d96c7cda4bc14d06277534681ded8a94828eb731d8b842e0da8105408c83cf")
sig_s = bytes.fromhex("7d33c61acf39cbb7a1d51c7126f1718116179adebd31618c4604a1f03b5c274a")
pubkey_x = bytes.fromhex("f8140e3b2b92f7cbdc8196bc6baa9ce86cf15c18e8ad0145d50824e6fa890264")
pubkey_y = bytes.fromhex("bd437b75d6f1db67155a95a0da4b41f2b6b3dc5d42f7db56238449e404a6c0a3")

result = op.ecdsa_verify(op.ECDSA.Secp256r1, message_hash, sig_r, sig_s, pubkey_x, pubkey_y)
assert result
```

### 2. Arithmetic and Bitwise Operations

The following opcodes are demonstrated:

* `op.addw`
* `op.bitlen`
* `op.getbit`
* `op.setbit_uint64`

```{testcode}
from algopy import op

# Addition with carry
result, carry = op.addw(algopy.UInt64(2**63), algopy.UInt64(2**63))

# Bitwise operations
value = algopy.UInt64(42)
bit_length = op.bitlen(value)
is_bit_set = op.getbit(value, 3)
new_value = op.setbit_uint64(value, 2, 1)
```

For a comprehensive list of all opcodes and types, refer to the coverage page.

## Emulated Types Requiring Transaction Context

These types necessitate interaction with the transaction context:

### algopy.op.Global

```{testcode}
from algopy import op

class MyContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def check_globals(self) -> algopy.UInt64:
        return op.Global.min_txn_fee + op.Global.min_balance

...

# setup context (below assumes available under 'ctx' variable)
context.ledger.patch_global_fields(
    min_txn_fee=algopy.UInt64(1000),
    min_balance=algopy.UInt64(100000)
)

contract = MyContract()
result = contract.check_globals()
assert result == algopy.UInt64(101000)
```

### algopy.op.Txn

```{testcode}
from algopy import op

class MyContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def check_txn_fields(self) -> algopy.arc4.Address:
        return algopy.arc4.Address(op.Txn.sender)

...

# setup context (below assumes available under 'ctx' variable)
contract = MyContract()
custom_sender = context.any.account()

with context.txn.create_group(active_txn_overrides={"sender": custom_sender}):
    result = contract.check_txn_fields()

assert result == custom_sender
```

### algopy.op.AssetHoldingGet

```{testcode}
from algopy import op

class AssetContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def check_asset_holding(self, account: algopy.Account, asset: algopy.Asset) -> algopy.UInt64:
        balance, _ = op.AssetHoldingGet.asset_balance(account, asset)
        return balance

...

# setup context (below assumes available under 'ctx' variable)
asset = context.any.asset(total=algopy.UInt64(1000000))
account = context.any.account(opted_asset_balances={asset.id: algopy.UInt64(5000)})

contract = AssetContract()
result = contract.check_asset_holding(account, asset)
assert result == algopy.UInt64(5000)
```

### algopy.op.AppGlobal

```{testcode}
from algopy import op

class StateContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def set_and_get_state(self, key: algopy.Bytes, value: algopy.UInt64) -> algopy.UInt64:
        op.AppGlobal.put(key, value)
        return op.AppGlobal.get_uint64(key)

...

# setup context (below assumes available under 'ctx' variable)
contract = StateContract()
key, value = algopy.Bytes(b"test_key"), algopy.UInt64(42)

result = contract.set_and_get_state(key, value)
assert result == value

stored_value = context.ledger.get_global_state(contract, key)
assert stored_value == 42
```

### algopy.op.Block

```{testcode}
from algopy import op

class BlockInfoContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def get_block_seed(self) -> algopy.Bytes:
        return op.Block.blk_seed(1000)

...

# setup context (below assumes available under 'ctx' variable)
context.ledger.set_block(1000, seed=123456, timestamp=1625097600)

contract = BlockInfoContract()
seed = contract.get_block_seed()
assert seed == algopy.op.itob(123456)
```

### algopy.op.AcctParamsGet

```{testcode}
from algopy import op

class AccountParamsContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def get_account_balance(self, account: algopy.Account) -> algopy.UInt64:
        balance, exists = op.AcctParamsGet.acct_balance(account)
        assert exists
        return balance

...

# setup context (below assumes available under 'ctx' variable)
account = context.any.account(balance=algopy.UInt64(1000000))

contract = AccountParamsContract()
balance = contract.get_account_balance(account)
assert balance == algopy.UInt64(1000000)
```

### algopy.op.AppParamsGet

```{testcode}
class AppParamsContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def get_app_creator(self, app_id: algopy.Application) -> algopy.arc4.Address:
        creator, exists = algopy.op.AppParamsGet.app_creator(app_id)
        assert exists
        return algopy.arc4.Address(creator)

...

# setup context (below assumes available under 'ctx' variable)
contract = AppParamsContract()
app = context.any.application()

creator = contract.get_app_creator(app)
assert creator == context.default_sender
```

### algopy.op.AssetParamsGet

```{testcode}
from algopy import op

class AssetParamsContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def get_asset_total(self, asset_id: algopy.UInt64) -> algopy.UInt64:
        total, exists = op.AssetParamsGet.asset_total(asset_id)
        assert exists
        return total

...

# setup context (below assumes available under 'ctx' variable)
asset = context.any.asset(total=algopy.UInt64(1000000), decimals=algopy.UInt64(6))

contract = AssetParamsContract()
total = contract.get_asset_total(asset.id)
assert total == algopy.UInt64(1000000)
```

### algopy.op.Box

```{testcode}
from algopy import op

class BoxStorageContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def store_and_retrieve(self, key: algopy.Bytes, value: algopy.Bytes) -> algopy.Bytes:
        op.Box.put(key, value)
        retrieved_value, exists = op.Box.get(key)
        assert exists
        return retrieved_value

...

# setup context (below assumes available under 'ctx' variable)
contract = BoxStorageContract()
key, value = algopy.Bytes(b"test_key"), algopy.Bytes(b"test_value")

result = contract.store_and_retrieve(key, value)
assert result == value

stored_value = context.ledger.get_box(contract, key)
assert stored_value == value.value
```

## Mockable Opcodes

These opcodes are mockable in `algorand-python-testing`, allowing for controlled testing of complex operations:

### algopy.compile\_contract

```{testcode}
from unittest.mock import patch, MagicMock

import algopy

mocked_response = MagicMock()
mocked_response.local_bytes = algopy.UInt64(4)

class MockContract(algopy.Contract):
    ...

class ContractFactory(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def compile_and_get_bytes(self) -> algopy.UInt64:
        contract_response = algopy.compile_contract(MockContract)
        return contract_response.local_bytes

...

# setup context (below assumes available under 'ctx' variable)
contract = ContractFactory()
with patch('algopy.compile_contract', return_value=mocked_response):
    assert contract.compile_and_get_bytes() == 4
```

### algopy.arc4.abi\_call

```{testcode}
import typing
import unittest
from unittest.mock import MagicMock, patch

import algopy

class MockAbiCall:
    def __call__(
        self, *args: typing.Any, **_kwargs: typing.Any
    ) -> tuple[typing.Any, typing.Any]:
        return (
            algopy.arc4.UInt64(11),
            MagicMock(),
        )

    def __getitem__(self, _item: object) -> typing.Self:
        return self

class MyContract(algopy.ARC4Contract):
    @algopy.arc4.abimethod
    def my_method(self, arg1: algopy.UInt64, arg2: algopy.UInt64) -> algopy.UInt64:
        return algopy.arc4.abi_call[algopy.arc4.UInt64]("my_other_method", arg1, arg2)[0].native

...

# setup context (below assumes available under 'ctx' variable)
contract = MyContract()

with patch('algopy.arc4.abi_call', MockAbiCall()):
    result = contract.my_method(algopy.UInt64(10), algopy.UInt64(1))

assert result == 11
```

### algopy.op.vrf\_verify

```{testcode}
from unittest.mock import patch, MagicMock

import algopy

def test_mock_vrf_verify():
    mock_result = (algopy.Bytes(b'mock_output'), True)
    with patch('algopy.op.vrf_verify', return_value=mock_result) as mock_vrf_verify:
        result = algopy.op.vrf_verify(
            algopy.op.VrfVerify.VrfAlgorand,
            algopy.Bytes(b'proof'),
            algopy.Bytes(b'message'),
            algopy.Bytes(b'public_key')
        )

    assert result == mock_result
    mock_vrf_verify.assert_called_once_with(
        algopy.op.VrfVerify.VrfAlgorand,
        algopy.Bytes(b'proof'),
        algopy.Bytes(b'message'),
        algopy.Bytes(b'public_key')
    )

test_mock_vrf_verify()
```

### algopy.op.EllipticCurve

```{testcode}
from unittest.mock import patch, MagicMock

import algopy

def test_mock_elliptic_curve_add():
    mock_result = algopy.Bytes(b'result')
    with patch('algopy.op.EllipticCurve.add', return_value=mock_result) as mock_add:
        result = algopy.op.EllipticCurve.add(
            algopy.op.EC.BN254g1,
            algopy.Bytes(b'a'),
            algopy.Bytes(b'b')
        )

    assert result == mock_result
    mock_add.assert_called_once_with(
        algopy.op.EC.BN254g1,
        algopy.Bytes(b'a'),
        algopy.Bytes(b'b'),
    )

test_mock_elliptic_curve_add()
```

These examples demonstrate how to mock key mockable opcodes in `algorand-python-testing`. Use similar techniques (in your preferred testing framework) for other mockable opcodes like `algopy.compile_logicsig`, `algopy.arc4.arc4_create`, and `algopy.arc4.arc4_update`.

Mocking these opcodes allows you to:

1. Control the behavior of complex operations not covered by *implemented* and *emulated* types.
2. Test edge cases and error conditions.
3. Isolate contract logic from external dependencies.

```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# Testing Guide
The Algorand Python Testing framework provides powerful tools for testing Algorand Python smart contracts within a Python interpreter. This guide covers the main features and concepts of the framework, helping you write effective tests for your Algorand applications.

```{note}
For all code examples in the _Testing Guide_ section, assume `context` is an instance of `AlgopyTestContext` obtained using the `algopy_testing_context()` context manager. All subsequent code is executed within this context.
```

```{mermaid}
graph TD
    subgraph GA["Your Development Environment"]
        A["algopy (type stubs)"]
        B["algopy_testing (testing framework)
(You are here 📍)"]
        C["puya (compiler)"]
    end

    subgraph GB["Your Algorand Project"]
        D[Your Algorand Python contract]
    end

    D -->|type hints inferred from| A
    D -->|compiled using| C
    D -->|tested via| B
```

> *High-level overview of the relationship between your smart contracts project, the Algorand Python Testing framework, the Algorand Python type stubs, and the compiler*

The Algorand Python Testing framework streamlines unit testing of your Algorand Python smart contracts by offering functionality to:

1. Simulate the Algorand Virtual Machine (AVM) environment
2. Create and manipulate test accounts, assets, applications, transactions, and ARC4 types
3. Test smart contract classes, including their states, variables, and methods
4. Verify logic signatures and subroutines
5. Manage global state, local state, scratch slots, and boxes in test contexts
6. Simulate transactions and transaction groups, including inner transactions
7. Verify opcode behavior

By using this framework, you can ensure your Algorand Python smart contracts function correctly before deploying them to a live network.

Key features of the framework include:

* `AlgopyTestContext`: The main entry point for testing, providing access to various testing utilities and simulated blockchain state
* AVM Type Simulation: Accurate representations of AVM types like `UInt64` and `Bytes`
* ARC4 Support: Tools for testing ARC4 contracts and methods, including struct definitions and ABI encoding/decoding
* Transaction Simulation: Ability to create and execute various transaction types
* State Management: Tools for managing and verifying global and local state changes
* Opcode Simulation: Implementations of AVM opcodes for accurate smart contract behavior testing

The framework is designed to work seamlessly with Algorand Python smart contracts, allowing developers to write comprehensive unit tests that closely mimic the behavior of contracts on the Algorand blockchain.
The following video includes a practical tutorial on writing unit tests for Algorand: [Algorand Smart Contract Testing - Python](https://www.youtube.com/embed/B4mzNmQB5mU?rel=0)

## Table of Contents

```{toctree}
---
maxdepth: 3
---

concepts
avm-types
arc4-types
transactions
contract-testing
signature-testing
state-management
subroutines
opcodes
```
# Smart Signature Testing
Test Algorand smart signatures (LogicSigs) with ease using the Algorand Python Testing framework.

```{testsetup}
import algopy
from algopy_testing import algopy_testing_context

# Create the context manager for snippets below
ctx_manager = algopy_testing_context()

# Enter the context
context = ctx_manager.__enter__()
```

## Define a LogicSig

Use the `@logicsig` decorator to create a LogicSig:

```{testcode}
from algopy import logicsig, Account, Txn, Global, UInt64, Bytes

@logicsig
def hashed_time_locked_lsig() -> bool:
    # LogicSig code here
    return True  # Approve transaction
```

## Execute and Test

Use `AlgopyTestContext.execute_logicsig()` to run and verify LogicSigs:

```{testcode}
with context.txn.create_group([
    context.any.txn.payment(),
]):
    result = context.execute_logicsig(hashed_time_locked_lsig, algopy.Bytes(b"secret"))

assert result is True
```

`execute_logicsig()` returns a boolean:

* `True`: Transaction approved
* `False`: Transaction rejected

## Pass Arguments

Provide arguments to LogicSigs using `execute_logicsig()`:

```{testcode}
result = context.execute_logicsig(hashed_time_locked_lsig, algopy.Bytes(b"secret"))
```

Access arguments in the LogicSig with the `algopy.op.arg()` opcode:

```{testcode}
@logicsig
def hashed_time_locked_lsig() -> bool:
    secret = algopy.op.arg(0)
    expected_hash = algopy.op.sha256(algopy.Bytes(b"secret"))
    return algopy.op.sha256(secret) == expected_hash

# Example usage
secret = algopy.Bytes(b"secret")
assert context.execute_logicsig(hashed_time_locked_lsig, secret)
```

For more details on available operations, see the opcodes section.

```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# State Management
`algorand-python-testing` provides tools to test state-related abstractions in Algorand smart contracts. This guide covers global state, local state, boxes, and scratch space management.

```{testsetup}
import algopy
from algopy_testing import algopy_testing_context

# Create the context manager for snippets below
ctx_manager = algopy_testing_context()

# Enter the context
context = ctx_manager.__enter__()
```

## Global State

Global state is represented as instance attributes on `algopy.Contract` and `algopy.ARC4Contract` classes.

```{testcode}
class MyContract(algopy.ARC4Contract):
    def __init__(self):
        self.state_a = algopy.GlobalState(algopy.UInt64, key="global_uint64")
        self.state_b = algopy.UInt64(1)

# In your test
contract = MyContract()
contract.state_a.value = algopy.UInt64(10)
contract.state_b = algopy.UInt64(20)  # plain AVM-typed attributes are assigned directly, without `.value`
```

## Local State

Local state is defined similarly to global state, but accessed using account addresses as keys.

```{testcode}
class MyContract(algopy.ARC4Contract):
    def __init__(self):
        self.local_state_a = algopy.LocalState(algopy.UInt64, key="state_a")

# In your test
contract = MyContract()
account = context.any.account()
contract.local_state_a[account] = algopy.UInt64(10)
```

## Boxes

The framework supports various Box abstractions available in `algorand-python`.
```{testcode}
class MyContract(algopy.ARC4Contract):
    def __init__(self):
        self.box_map = algopy.BoxMap(algopy.Bytes, algopy.UInt64)

    @algopy.arc4.abimethod()
    def some_method(self, key_a: algopy.Bytes, key_b: algopy.Bytes, key_c: algopy.Bytes) -> None:
        self.box = algopy.Box(algopy.UInt64, key=key_a)
        self.box.value = algopy.UInt64(1)
        self.box_map[key_b] = algopy.UInt64(1)
        self.box_map[key_c] = algopy.UInt64(2)

# In your test
contract = MyContract()
key_a = b"key_a"
key_b = b"key_b"
key_c = b"key_c"
contract.some_method(algopy.Bytes(key_a), algopy.Bytes(key_b), algopy.Bytes(key_c))

# Access boxes
box_content = context.ledger.get_box(contract, key_a)
assert context.ledger.box_exists(contract, key_a)

# Set box content manually
with context.txn.create_group():
    context.ledger.set_box(contract, key_a, algopy.op.itob(algopy.UInt64(1)))
```

## Scratch Space

Scratch space is represented as a list of 256 slots for each transaction.

```{testcode}
class MyContract(algopy.Contract, scratch_slots=(1, 2, algopy.urange(3, 20))):
    def approval_program(self):
        algopy.op.Scratch.store(1, algopy.UInt64(5))
        assert algopy.op.Scratch.load_uint64(1) == algopy.UInt64(5)
        return True

# In your test
contract = MyContract()
result = contract.approval_program()
assert result

scratch_space = context.txn.last_group.get_scratch_space()
assert scratch_space[1] == algopy.UInt64(5)
```

For more detailed information, explore the example contracts in the `examples/` directory and the related reference pages.

```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# Subroutines
Subroutines allow direct testing of internal contract logic without full application calls.

```{testsetup}
import algopy
import algopy_testing
from algopy_testing import algopy_testing_context

# Create the context manager for snippets below
ctx_manager = algopy_testing_context()

# Enter the context
context = ctx_manager.__enter__()
```

## Overview

The `@algopy.subroutine` decorator exposes contract methods for isolated testing within the Algorand Python Testing framework. This enables focused validation of core business logic without the overhead of full application deployment and execution.

## Usage

1. Decorate internal methods with `@algopy.subroutine`:

```{testcode}
from algopy import subroutine, UInt64

class MyContract:
    @subroutine
    def calculate_value(self, input: UInt64) -> UInt64:
        return input * UInt64(2)
```

2. Test the subroutine directly:

```{testcode}
def test_calculate_value(context: algopy_testing.AlgopyTestContext):
    contract = MyContract()
    result = contract.calculate_value(UInt64(5))
    assert result == UInt64(10)
```

## Benefits

* Faster test execution
* Simplified debugging
* Focused unit testing of core logic

## Best Practices

* Use subroutines for complex internal calculations
* Prefer writing `pure` subroutines in ARC4Contract classes
* Combine with full application tests for comprehensive coverage
* Maintain realistic input and output types (e.g., `UInt64`, `Bytes`)

## Example

For a complete example, see the `simple_voting` contract in the examples section.

```{testcleanup}
ctx_manager.__exit__(None, None, None)
```
# Transactions
The testing framework follows the Transaction definitions described in the reference documentation. This section focuses on *value generators* and interactions with inner transactions; it also explains how the framework identifies the *active* transaction group during contract method/subroutine/logicsig invocation.

```{testsetup}
import algopy
import algopy_testing
from algopy_testing import algopy_testing_context

# Create the context manager for snippets below
ctx_manager = algopy_testing_context()

# Enter the context
context = ctx_manager.__enter__()
```

## Group Transactions

Refers to the test implementation of transaction stubs available under the `algopy.gtxn.*` namespace. Available under the instance accessible via the `context.any.txn` property:

```{mermaid}
graph TD
    A[TxnValueGenerator] --> B[payment]
    A --> C[asset_transfer]
    A --> D[application_call]
    A --> E[asset_config]
    A --> F[key_registration]
    A --> G[asset_freeze]
    A --> H[transaction]
```

```{testcode}
...  # instantiate test context

# Generate a random payment transaction
pay_txn = context.any.txn.payment(
    sender=context.any.account(),  # Optional: Defaults to context's default sender if not provided
    receiver=context.any.account(),  # Required
    amount=algopy.UInt64(1000000)  # Required
)

# Generate a random asset transfer transaction
asset_transfer_txn = context.any.txn.asset_transfer(
    sender=context.any.account(),  # Optional: Defaults to context's default sender if not provided
    receiver=context.any.account(),  # Required
    asset_id=algopy.UInt64(1),  # Required
    amount=algopy.UInt64(1000)  # Required
)

# Generate a random application call transaction
app_call_txn = context.any.txn.application_call(
    app_id=context.any.application(),  # Required
    app_args=[algopy.Bytes(b"arg1"), algopy.Bytes(b"arg2")],  # Optional: Defaults to empty list if not provided
    accounts=[context.any.account()],  # Optional: Defaults to empty list if not provided
    assets=[context.any.asset()],  # Optional: Defaults to empty list if not provided
    apps=[context.any.application()],  # Optional: Defaults to empty list if not provided
    approval_program_pages=[algopy.Bytes(b"approval_code")],  # Optional: Defaults to empty list if not provided
    clear_state_program_pages=[algopy.Bytes(b"clear_code")],  # Optional: Defaults to empty list if not provided
    scratch_space={0: algopy.Bytes(b"scratch")}  # Optional: Defaults to empty dict if not provided
)

# Generate a random asset config transaction
asset_config_txn = context.any.txn.asset_config(
    sender=context.any.account(),  # Optional: Defaults to context's default sender if not provided
    asset_id=algopy.UInt64(1),  # Optional: If not provided, creates a new asset
    total=1000000,  # Required for new assets
    decimals=0,  # Required for new assets
    default_frozen=False,  # Optional: Defaults to False if not provided
    unit_name="UNIT",  # Optional: Defaults to empty string if not provided
    asset_name="Asset",  # Optional: Defaults to empty string if not provided
    url="http://asset-url",  # Optional: Defaults to empty string if not provided
    metadata_hash=b"metadata_hash",  # Optional: Defaults to empty bytes if not provided
    manager=context.any.account(),  # Optional: Defaults to sender if not provided
    reserve=context.any.account(),  # Optional: Defaults to zero address if not provided
    freeze=context.any.account(),  # Optional: Defaults to zero address if not provided
    clawback=context.any.account()  # Optional: Defaults to zero address if not provided
)

# Generate a random key registration transaction
key_reg_txn = context.any.txn.key_registration(
    sender=context.any.account(),  # Optional: Defaults to context's default sender if not provided
    vote_pk=algopy.Bytes(b"vote_pk"),  # Optional: Defaults to empty bytes if not provided
    selection_pk=algopy.Bytes(b"selection_pk"),  # Optional: Defaults to empty bytes if not provided
    vote_first=algopy.UInt64(1),  # Optional: Defaults to 0 if not provided
    vote_last=algopy.UInt64(1000),  # Optional: Defaults to 0 if not provided
    vote_key_dilution=algopy.UInt64(10000)  # Optional: Defaults to 0 if not provided
) # Generate a random asset freeze transaction asset_freeze_txn = context.any.txn.asset_freeze( sender=context.any.account(), # Optional: Defaults to context's default sender if not provided asset_id=algopy.UInt64(1), # Required freeze_target=context.any.account(), # Required freeze_state=True # Required ) # Generate a random transaction of a specified type generic_txn = context.any.txn.transaction( type=algopy.TransactionType.Payment, # Required sender=context.any.account(), # Optional: Defaults to context's default sender if not provided receiver=context.any.account(), # Required for Payment amount=algopy.UInt64(1000000) # Required for Payment ) ``` ## Preparing for execution On the Algorand network, interacting with a smart contract instance (application) always happens in relation to a specific transaction or transaction group, where one or more transactions are application calls to the target smart contract instances. To emulate this behaviour, the `create_group` context manager is available on the `context.txn` instance; it allows setting temporary transaction fields within a specific scope, passing in emulated transaction objects, and identifying the active transaction index within the transaction group. ```{testcode} import algopy from algopy_testing import AlgopyTestContext, algopy_testing_context class SimpleContract(algopy.ARC4Contract): @algopy.arc4.abimethod def check_sender(self) -> algopy.arc4.Address: return algopy.arc4.Address(algopy.Txn.sender) ... 
# Create a contract instance contract = SimpleContract() # Use active_txn_overrides to change the sender test_sender = context.any.account() with context.txn.create_group(active_txn_overrides={"sender": test_sender}): # Call the contract method result = contract.check_sender() assert result == test_sender # Assert that the sender is the test_sender after exiting the # transaction group context assert context.txn.last_active.sender == test_sender # Assert the size of the last transaction group assert len(context.txn.last_group.txns) == 1 ``` ## Inner Transaction Inner transactions are AVM transactions that are signed and executed by AVM applications (instances of deployed smart contracts or signatures). When testing smart contracts, to stay consistent with the AVM, the framework *does not allow* submitting inner transactions outside of contract/subroutine invocation, but you can interact with and manage inner transactions using the test context manager as follows: ```{testcode} class MyContract(algopy.ARC4Contract): @algopy.arc4.abimethod def pay_via_itxn(self, asset: algopy.Asset) -> None: algopy.itxn.Payment( receiver=algopy.Txn.sender, amount=algopy.UInt64(1) ).submit() ... # setup context (below assumes available under 'context' variable) # Create a contract instance contract = MyContract() # Generate a random asset asset = context.any.asset() # Execute the contract method contract.pay_via_itxn(asset=asset) # Access the last submitted inner transaction payment_txn = context.txn.last_group.last_itxn.payment # Assert properties of the inner transaction assert payment_txn.receiver == context.txn.last_active.sender assert payment_txn.amount == algopy.UInt64(1) # Access all inner transactions in the last group for itxn in context.txn.last_group.itxn_groups[-1]: # Perform assertions on each inner transaction ... 
# Access a specific inner transaction group first_itxn_group = context.txn.last_group.get_itxn_group(0) first_payment_txn = first_itxn_group.payment(0) ``` In this example, we define a contract method `pay_via_itxn` that creates and submits an inner payment transaction. The test context automatically captures and stores the inner transactions submitted by the contract method. Note that we don’t need to wrap the execution in a `create_group` context manager because the method is decorated with `@algopy.arc4.abimethod`, which automatically creates a transaction group for the method. The `create_group` context manager is only needed when you want to create more complex transaction groups or patch transaction fields for various transaction-related opcodes in AVM. To access the submitted inner transactions: 1. Use `context.txn.last_group.last_itxn` to access the last submitted inner transaction of a specific type. 2. Iterate over all inner transactions in the last group using `context.txn.last_group.itxn_groups[-1]`. 3. Access a specific inner transaction group using `context.txn.last_group.get_itxn_group(index)`. These methods provide type validation and will raise an error if the requested transaction type doesn’t match the actual type of the inner transaction. ## References * for more details on the test context manager and inner transactions related methods that perform implicit inner transaction type validation. * for more examples of smart contracts and associated tests that interact with inner transactions. ```{testcleanup} ctx_manager.__exit__(None, None, None) ```
# Testing Guide
The Algorand TypeScript Testing framework provides powerful tools for testing Algorand TypeScript smart contracts within a Node.js environment. This guide covers the main features and concepts of the framework, helping you write effective tests for your Algorand applications. ```{note} For all code examples in the _Testing Guide_ section, assume `context` is an instance of `TestExecutionContext` obtained by initialising an instance of the `TestExecutionContext` class. All subsequent code is executed within this context. ``` The Algorand TypeScript Testing framework streamlines unit testing of your Algorand TypeScript smart contracts by offering functionality to: 1. Simulate the Algorand Virtual Machine (AVM) environment 2. Create and manipulate test accounts, assets, applications, transactions, and ARC4 types 3. Test smart contract classes, including their states, variables, and methods 4. Verify logic signatures and subroutines 5. Manage global state, local state, scratch slots, and boxes in test contexts 6. Simulate transactions and transaction groups, including inner transactions 7. Verify opcode behavior By using this framework, you can ensure your Algorand TypeScript smart contracts function correctly before deploying them to a live network. 
Key features of the framework include: * `TestExecutionContext`: The main entry point for testing, providing access to various testing utilities and simulated blockchain state * AVM Type Simulation: Accurate representations of AVM types like `uint64` and `bytes` * ARC4 Support: Tools for testing ARC4 contracts and methods, including struct definitions and ABI encoding/decoding * Transaction Simulation: Ability to create and execute various transaction types * State Management: Tools for managing and verifying global and local state changes * Opcode Simulation: Implementations of AVM opcodes for accurate smart contract behavior testing The framework is designed to work seamlessly with Algorand TypeScript smart contracts, allowing developers to write comprehensive unit tests that closely mimic the behavior of contracts on the Algorand blockchain. 
# ARC4 Types
These types are available under the `arc4` namespace. Refer to the for more details on the spec. ```{hint} The test execution context provides _value generators_ for ARC4 types. To access them, use the `{context_instance}.any.arc4` property. See more examples below. ``` ```{note} All `arc4` types, with or without a respective _value generator_, can also be instantiated directly. If you have a suggestion for a new _value generator_ implementation, please open an issue in the [`algorand-typescript-testing`](https://github.com/algorandfoundation/algorand-typescript-testing) repository or contribute by following the [contribution guide](https://github.com/algorandfoundation/algorand-typescript-testing/blob/main/CONTRIBUTING). ``` ```ts import { arc4 } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; // Create the context manager for snippets below const ctx = new TestExecutionContext(); ``` ## Unsigned Integers ```ts // Integer types const uint8Value = new arc4.UintN8(255); const uint16Value = new arc4.UintN16(65535); const uint32Value = new arc4.UintN32(4294967295); const uint64Value = new arc4.UintN64(18446744073709551615n); // Generate a random unsigned arc4 integer with default range const uint8 = ctx.any.arc4.uintN8(); const uint16 = ctx.any.arc4.uintN16(); const uint32 = ctx.any.arc4.uintN32(); const uint64 = ctx.any.arc4.uintN64(); const biguint128 = ctx.any.arc4.uintN128(); const biguint256 = ctx.any.arc4.uintN256(); const biguint512 = ctx.any.arc4.uintN512(); // Generate a random unsigned arc4 integer with specified range const uint8Custom = ctx.any.arc4.uintN8(10, 100); const uint16Custom = ctx.any.arc4.uintN16(1000, 5000); const uint32Custom = ctx.any.arc4.uintN32(100000, 1000000); const uint64Custom = ctx.any.arc4.uintN64(1000000000, 10000000000); const biguint128Custom = ctx.any.arc4.uintN128(1000000000000000, 10000000000000000n); const 
biguint256Custom = ctx.any.arc4.uintN256( 1000000000000000000000000n, 10000000000000000000000000n, ); const biguint512Custom = ctx.any.arc4.uintN512( 10000000000000000000000000000000000n, 10000000000000000000000000000000000n, ); ``` ## Address ```ts // Address type const addressValue = new arc4.Address('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ'); // Generate a random address const randomAddress = ctx.any.arc4.address(); // Access the native underlying type const native = randomAddress.native; ``` ## Dynamic Bytes ```ts // Dynamic byte string const bytesValue = new arc4.DynamicBytes('Hello, Algorand!'); // Generate random dynamic bytes const randomDynamicBytes = ctx.any.arc4.dynamicBytes(123); // n is the number of bits in the arc4 dynamic bytes ``` ## String ```ts // UTF-8 encoded string const stringValue = new arc4.Str('Hello, Algorand!'); // Generate random string const randomString = ctx.any.arc4.str(12); // n is the number of bits in the arc4 string ``` ```ts // test cleanup ctx.reset(); ```
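The fixed-width types above map directly onto the ARC-4 ABI encoding, where a `uintN` value is stored as an N/8-byte big-endian byte string. As a rough, self-contained sketch of that encoding in plain TypeScript (the `encodeUintN` helper is illustrative, not part of `algorand-typescript`):

```typescript
// Illustrative helper: ARC-4 encodes uintN as an N/8-byte big-endian value.
function encodeUintN(value: bigint, bits: number): Uint8Array {
  if (bits % 8 !== 0 || bits < 8 || bits > 512) {
    throw new Error('N must be a multiple of 8 between 8 and 512');
  }
  const out = new Uint8Array(bits / 8);
  let v = value;
  for (let i = out.length - 1; i >= 0; i--) {
    out[i] = Number(v & 0xffn); // lowest byte goes last (big-endian)
    v >>= 8n;
  }
  if (v !== 0n) {
    throw new Error(`value does not fit in uint${bits}`);
  }
  return out;
}
```

For instance, `new arc4.UintN16(65535)` corresponds to the two bytes `0xff 0xff` on the wire.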
# AVM Types
These types are available directly under the `algorand-typescript` namespace. They represent the basic AVM primitive types and can be instantiated directly or via *value generators*: ```{note} Primitive `algorand-typescript` types such as `Account`, `Application`, `Asset`, `uint64`, `biguint`, `bytes`, and `string`, with or without a respective _value generator_, can be instantiated directly. If you have a suggestion for a new _value generator_ implementation, please open an issue in the [`algorand-typescript-testing`](https://github.com/algorandfoundation/algorand-typescript-testing) repository or contribute by following the [contribution guide](https://github.com/algorandfoundation/algorand-typescript-testing/blob/main/CONTRIBUTING). ``` ```ts import * as algots from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; // Create the context manager for snippets below const ctx = new TestExecutionContext(); ``` ## uint64 ```ts // Direct instantiation const uint64Value = algots.Uint64(100); // Generate a random UInt64 value const randomUint64 = ctx.any.uint64(); // Specify a range const randomUint64InRange = ctx.any.uint64(1000, 9999); ``` ## bytes ```ts // Direct instantiation const bytesValue = algots.Bytes('Hello, Algorand!'); // Generate random byte sequences const randomBytes = ctx.any.bytes(); // Specify the length const randomBytesOfLength = ctx.any.bytes(32); ``` ## string ```ts // Direct instantiation const stringValue = 'Hello, Algorand!'; // Generate random strings const randomString = ctx.any.string(); // Specify the length const randomStringOfLength = ctx.any.string(16); ``` ## biguint ```ts // Direct instantiation const biguintValue = algots.BigUint(100); // Generate a random BigUInt value const randomBiguint = ctx.any.biguint(); // Specify the min value const randomBiguintOver = ctx.any.biguint(100n); ``` ## Asset ```ts // Direct instantiation const asset = 
algots.Asset(1001); // Generate a random asset const randomAsset = ctx.any.asset({ clawback: ctx.any.account(), // Optional: Clawback address creator: ctx.any.account(), // Optional: Creator account decimals: 6, // Optional: Number of decimals defaultFrozen: false, // Optional: Default frozen state freeze: ctx.any.account(), // Optional: Freeze address manager: ctx.any.account(), // Optional: Manager address metadataHash: ctx.any.bytes(32), // Optional: Metadata hash name: algots.Bytes(ctx.any.string()), // Optional: Asset name reserve: ctx.any.account(), // Optional: Reserve address total: 1000000, // Optional: Total supply unitName: algots.Bytes(ctx.any.string()), // Optional: Unit name url: algots.Bytes(ctx.any.string()), // Optional: Asset URL }); // Get an asset by ID const fetchedAsset = ctx.ledger.getAsset(randomAsset.id); // Update an asset ctx.ledger.patchAssetData(randomAsset, { clawback: ctx.any.account(), // Optional: New clawback address creator: ctx.any.account(), // Optional: Creator account decimals: 6, // Optional: New number of decimals defaultFrozen: false, // Optional: Default frozen state freeze: ctx.any.account(), // Optional: New freeze address manager: ctx.any.account(), // Optional: New manager address metadataHash: ctx.any.bytes(32), // Optional: New metadata hash name: algots.Bytes(ctx.any.string()), // Optional: New asset name reserve: ctx.any.account(), // Optional: New reserve address total: 1000000, // Optional: New total supply unitName: algots.Bytes(ctx.any.string()), // Optional: Unit name url: algots.Bytes(ctx.any.string()), // Optional: New asset URL }); ``` ## Account ```ts // Direct instantiation const rawAddress = algots.Bytes.fromBase32( 'PUYAGEGVCOEBP57LUKPNOCSMRWHZJSU4S62RGC2AONDUEIHC6P7FOPJQ4I', ); const account = algots.Account(rawAddress); // Account() with no argument defaults to the zero address // Generate a random account const randomAccount = ctx.any.account({ address: rawAddress, // Optional: Specify a custom address, defaults to a random address 
optedAssetBalances: new Map([]), // Optional: Specify opted asset balances as a map of asset IDs to balances optedApplications: [], // Optional: Specify opted apps as a sequence of `Application` objects totalAppsCreated: 0, // Optional: Specify the total number of created applications totalAppsOptedIn: 0, // Optional: Specify the total number of applications opted into totalAssets: 0, // Optional: Specify the total number of assets totalAssetsCreated: 0, // Optional: Specify the total number of created assets totalBoxBytes: 0, // Optional: Specify the total number of box bytes totalBoxes: 0, // Optional: Specify the total number of boxes totalExtraAppPages: 0, // Optional: Specify the total number of extra application pages totalNumByteSlice: 0, // Optional: Specify the total number of byte slices totalNumUint: 0, // Optional: Specify the total number of uints minBalance: 0, // Optional: Specify a minimum balance balance: 0, // Optional: Specify an initial balance authAddress: algots.Account(), // Optional: Specify an auth address, }); // Generate a random account that is opted into a specific asset const mockAsset = ctx.any.asset(); const mockAccount = ctx.any.account({ optedAssetBalances: new Map([[mockAsset.id, 123]]), }); // Get an account by address const fetchedAccount = ctx.ledger.getAccount(mockAccount); // Update an account ctx.ledger.patchAccountData(mockAccount, { account: { balance: 0, // Optional: New balance minBalance: 0, // Optional: New minimum balance authAddress: ctx.any.account(), // Optional: New auth address totalAssets: 0, // Optional: New total number of assets totalAssetsCreated: 0, // Optional: New total number of created assets totalAppsCreated: 0, // Optional: New total number of created applications totalAppsOptedIn: 0, // Optional: New total number of applications opted into totalExtraAppPages: 0, // Optional: New total number of extra application pages }, }); // Check if an account is opted into a specific asset const optedIn = fetchedAccount.isOptedIn(mockAsset); ``` 
## Application ```ts // Direct instantiation const application = algots.Application(); // Generate a random application const randomApp = ctx.any.application({ approvalProgram: algots.Bytes(''), // Optional: Specify a custom approval program clearStateProgram: algots.Bytes(''), // Optional: Specify a custom clear state program globalNumUint: 1, // Optional: Number of global uint values globalNumBytes: 1, // Optional: Number of global byte values localNumUint: 1, // Optional: Number of local uint values localNumBytes: 1, // Optional: Number of local byte values extraProgramPages: 1, // Optional: Number of extra program pages creator: ctx.defaultSender, // Optional: Specify the creator account }); // Get an application by ID const app = ctx.ledger.getApplication(randomApp.id); // Update an application ctx.ledger.patchApplicationData(randomApp, { application: { approvalProgram: algots.Bytes(''), // Optional: New approval program clearStateProgram: algots.Bytes(''), // Optional: New clear state program globalNumUint: 1, // Optional: New number of global uint values globalNumBytes: 1, // Optional: New number of global byte values localNumUint: 1, // Optional: New number of local uint values localNumBytes: 1, // Optional: New number of local byte values extraProgramPages: 1, // Optional: New number of extra program pages creator: ctx.defaultSender, // Optional: New creator account }, }); // Patch logs for an application. When accessed via transaction or inner-transaction related opcodes, the patched logs are returned unless new logs were added during execution. const testApp = ctx.any.application({ appLogs: [algots.Bytes('log entry 1'), algots.Bytes('log entry 2')], }); // Get app associated with the active contract class MyContract extends algots.arc4.Contract {} const contract = ctx.contract.create(MyContract); const activeApp = ctx.ledger.getApplicationForContract(contract); ``` ```ts // test context clean up ctx.reset(); ```
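The `Account` examples above pass base32 address strings around. For background, an Algorand address string is the 32-byte public key followed by a 4-byte checksum (the last 4 bytes of the key's SHA-512/256 digest), base32-encoded without padding. A self-contained sketch of that derivation, using only Node's `crypto` module (the helper names are illustrative, not part of the testing package):

```typescript
import { createHash } from 'node:crypto';

const B32_ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';

// RFC 4648 base32 encoding, without '=' padding.
function base32Encode(data: Uint8Array): string {
  let bits = 0;
  let value = 0;
  let out = '';
  for (const byte of data) {
    value = (value << 8) | byte;
    bits += 8;
    while (bits >= 5) {
      out += B32_ALPHABET[(value >>> (bits - 5)) & 31];
      bits -= 5;
    }
  }
  if (bits > 0) {
    out += B32_ALPHABET[(value << (5 - bits)) & 31];
  }
  return out;
}

// Address = base32(publicKey || last 4 bytes of sha512_256(publicKey)), 58 characters.
function encodeAddress(publicKey: Uint8Array): string {
  const checksum = createHash('sha512-256').update(publicKey).digest().subarray(28);
  const payload = new Uint8Array(36);
  payload.set(publicKey, 0);
  payload.set(checksum, 32);
  return base32Encode(payload);
}

// By definition, encoding 32 zero bytes derives Algorand's zero address
const zeroAddress = encodeAddress(new Uint8Array(32));
```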
# Concepts
The following sections provide an overview of key concepts and features in the Algorand TypeScript Testing framework. ## Test Context The main abstraction for interacting with the testing framework is the . It creates an emulated Algorand environment that closely mimics AVM behavior relevant to unit testing the contracts and provides a TypeScript interface for interacting with the emulated environment. ```typescript import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; import { afterEach, describe, it } from 'vitest'; describe('MyContract', () => { // Recommended way to instantiate the test context const ctx = new TestExecutionContext(); afterEach(() => { // ctx should be reset after each test is executed ctx.reset(); }); it('test my contract', () => { // Your test code here }); }); ``` The test execution context exposes four main properties: 1. `contract`: An instance of `ContractContext` for creating instances of the Contract under test and registering them with the test execution context. 2. `ledger`: An instance of `LedgerContext` for interacting with and querying the emulated Algorand ledger state. 3. `txn`: An instance of `TransactionContext` for creating and managing transaction groups, submitting transactions, and accessing transaction results. 4. `any`: An instance of `AlgopyValueGenerator` for generating randomized test data. The `any` property provides access to different value generators: * `AvmValueGenerator`: Base abstractions for AVM types. All methods are available directly on the instance returned from `any`. * `TxnValueGenerator`: Accessible via `any.txn`, for transaction-related data. * `Arc4ValueGenerator`: Accessible via `any.arc4`, for ARC4 type data. These generators allow creation of constrained random values for various AVM entities (accounts, assets, applications, etc.) when specific values are not required. ```{hint} Value generators are powerful tools for generating test data for specified AVM types. 
They allow further constraints on random value generation via arguments, making it easier to generate test data when exact values are not necessary. When used with the 'Arrange, Act, Assert' pattern, value generators can be especially useful in setting up clear and concise test data in arrange steps. ``` ## Types of `algorand-typescript` stub implementations As explained in the , `algorand-typescript-testing` *injects* test implementations for stubs available in the `algorand-typescript` package. However, not all of the stubs are implemented in the same manner: 1. **Native**: Fully matches AVM computation in TypeScript. For example, `op.sha256` and other cryptographic operations behave identically in AVM and unit tests. This implies that the majority of opcodes that are 'pure' functions in AVM also have a native TypeScript implementation provided by this package. These abstractions and opcodes can be used within and outside of the testing context. 2. **Emulated**: Uses `TestExecutionContext` to mimic AVM behavior. For example, `Box.put` on a `Box` within a test context stores data in the test manager, not the real Algorand network, but provides the same interface. 3. **Mockable**: Not implemented, but can be mocked or patched. For example, `op.onlineStake` can be mocked to return specific values or behaviors; otherwise, it throws a `NotImplementedError`. This category covers cases where a native or emulated implementation in a unit test context is impractical or overly complex.
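To make the 'Native' category concrete: because these opcodes are pure functions, their test implementations are simply the same computation run locally. The snippet below illustrates the idea with a standalone SHA-256 helper built on Node's `crypto` module; it mirrors what a native `op.sha256` stub does, but is not the package's actual code:

```typescript
import { createHash } from 'node:crypto';

// A 'native' stub is an ordinary pure function: the same bytes in,
// the same digest out, whether on-chain or in a unit test.
function sha256(data: Uint8Array): Uint8Array {
  return new Uint8Array(createHash('sha256').update(data).digest());
}

const digest = sha256(new TextEncoder().encode('Hello, World!'));
// digest is always the same 32-byte value for this input
```

Because there is no ledger or transaction state involved, such opcodes work identically inside and outside a test execution context.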
# Smart Contract Testing
This guide provides an overview of how to test smart contracts using the . We will cover the basics of testing `arc4.Contract` and `BaseContract` classes, focusing on `abimethod` and `baremethod` decorators. ```{note} The code snippets showcasing the contract testing capabilities use [vitest](https://vitest.dev/) as the test framework. However, note that the `algorand-typescript-testing` package can be used with any other test framework that supports TypeScript. `vitest` is used for demonstration purposes in this documentation. ``` ```ts import { arc4 } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; // Create the context manager for snippets below const ctx = new TestExecutionContext(); ``` The following video includes a practical tutorial on testing Algorand smart contracts: [Algorand Smart Contract Testing - Typescript](https://www.youtube.com/embed/6SSga2FCg-c?rel=0) ## `arc4.Contract` Subclasses of `arc4.Contract` are **required** to be instantiated with an active test context. As part of instantiation, the test context will automatically create a matching `Application` object instance. Within the class implementation, methods decorated with `arc4.abimethod` and `arc4.baremethod` will automatically assemble a `gtxn.ApplicationTxn` transaction to emulate the AVM application call. This behavior can be overridden by setting the transaction group manually as part of test setup; this is done via implicit invocation of the `ctx.any.txn.applicationCall` *value generator* (refer to for more details). 
```ts class SimpleVotingContract extends arc4.Contract { topic = GlobalState({ initialValue: Bytes('default_topic'), key: 'topic' }); votes = GlobalState({ initialValue: Uint64(0), key: 'votes', }); voted = LocalState({ key: 'voted' }); @arc4.abimethod({ onCreate: 'require' }) create(initialTopic: bytes) { this.topic.value = initialTopic; this.votes.value = Uint64(0); } @arc4.abimethod() vote(): uint64 { assert(this.voted(Txn.sender).value === 0, 'Account has already voted'); this.votes.value = this.votes.value + 1; this.voted(Txn.sender).value = Uint64(1); return this.votes.value; } @arc4.abimethod({ readonly: true }) getVotes(): uint64 { return this.votes.value; } @arc4.abimethod() changeTopic(newTopic: bytes) { assert(Txn.sender === Txn.applicationId.creator, 'Only creator can change topic'); this.topic.value = newTopic; this.votes.value = Uint64(0); // Reset user's vote (this is simplified per single user for the sake of example) this.voted(Txn.sender).value = Uint64(0); } } // Arrange const initialTopic = Bytes('initial_topic'); const contract = ctx.contract.create(SimpleVotingContract); contract.voted(ctx.defaultSender).value = Uint64(0); // Act - Create the topic contract.create(initialTopic); // Assert - Check initial state expect(contract.topic.value).toEqual(initialTopic); expect(contract.votes.value).toEqual(Uint64(0)); // Act - Vote // The method `.vote()` is decorated with `arc4.abimethod`, which means it will assemble a transaction to emulate the AVM application call const result = contract.vote(); // Assert - you can access the corresponding auto-generated application call transaction via the test context expect(ctx.txn.lastGroup.transactions.length).toEqual(1); // Assert - Note how local and global state are accessed via regular class properties expect(result).toEqual(1); expect(contract.votes.value).toEqual(1); expect(contract.voted(ctx.defaultSender).value).toEqual(1); // Act - Change topic const newTopic = Bytes('new_topic'); 
contract.changeTopic(newTopic); // Assert - Check topic changed and votes reset expect(contract.topic.value).toEqual(newTopic); expect(contract.votes.value).toEqual(0); expect(contract.voted(ctx.defaultSender).value).toEqual(0); // Act - Get votes (should be 0 after reset) const votes = contract.getVotes(); // Assert - Check votes expect(votes).toEqual(0); ``` For more examples of tests using `arc4.Contract`, see the section. ## `BaseContract` Subclasses of `BaseContract` are **required** to be instantiated with an active test context. As part of instantiation, the test context will automatically create a matching `Application` object instance. This behavior is identical to `arc4.Contract` class instances. Unlike `arc4.Contract`, `BaseContract` requires manual setup of the transaction context and explicit method calls. Here's an example demonstrating how to test a `BaseContract` class: ```ts import { BaseContract, Bytes, GlobalState, Uint64 } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; import { afterEach, expect, test } from 'vitest'; class CounterContract extends BaseContract { counter = GlobalState({ initialValue: Uint64(0) }); increment() { this.counter.value = this.counter.value + 1; return Uint64(1); } approvalProgram() { return this.increment(); } clearStateProgram() { return Uint64(1); } } const ctx = new TestExecutionContext(); afterEach(() => { ctx.reset(); }); test('increment', () => { // Instantiate contract const contract = ctx.contract.create(CounterContract); // Set up the transaction context with an application call ctx.txn .createScope([ ctx.any.txn.applicationCall({ appId: contract, sender: ctx.defaultSender, appArgs: [Bytes('increment')], }), ]) .execute(() => { // Invoke approval program const result = contract.approvalProgram(); // Assert approval program result expect(result).toEqual(1); // Assert counter value 
expect(contract.counter.value).toEqual(1); }); // Test clear state program expect(contract.clearStateProgram()).toEqual(1); }); test('increment with multiple txns', () => { const contract = ctx.contract.create(CounterContract); // For scenarios with multiple transactions, you can still use gtxns const extraPayment = ctx.any.txn.payment(); ctx.txn .createScope( [ extraPayment, ctx.any.txn.applicationCall({ sender: ctx.defaultSender, appId: contract, appArgs: [Bytes('increment')], }), ], 1, // Set the application call as the active transaction ) .execute(() => { const result = contract.approvalProgram(); expect(result).toEqual(1); expect(contract.counter.value).toEqual(1); }); expect(ctx.txn.lastGroup.transactions.length).toEqual(2); }); ``` In this updated example: 1. We use `ctx.txn.createScope()` with `ctx.any.txn.applicationCall` to set up the transaction context for a single application call. 2. For scenarios involving multiple transactions, you can still use the `group` parameter to create a transaction group, as shown in the `test('increment with multiple txns', () => {})` function. This approach provides more flexibility in setting up the transaction context for testing `Contract` classes, allowing for both simple single-transaction scenarios and more complex multi-transaction tests. 
## Defer contract method invocation You can create deferred application calls for more complex testing scenarios where the order of transactions needs to be controlled: ```ts class MyARC4Contract extends arc4.Contract { someMethod(payment: gtxn.PaymentTxn) { return Uint64(1); } } const ctx = new TestExecutionContext(); test('deferred call', () => { const contract = ctx.contract.create(MyARC4Contract); const extraPayment = ctx.any.txn.payment(); const extraAssetTransfer = ctx.any.txn.assetTransfer(); const implicitPayment = ctx.any.txn.payment(); const deferredCall = ctx.txn.deferAppCall( contract, contract.someMethod, 'someMethod', implicitPayment, ); ctx.txn.createScope([extraPayment, deferredCall, extraAssetTransfer]).execute(() => { const result = deferredCall.submit(); }); console.log(ctx.txn.lastGroup); // [extraPayment, implicitPayment, app call, extraAssetTransfer] }); ``` A deferred application call prepares the application call transaction without immediately executing it. The call can be executed later by invoking the `.submit()` method on the deferred application call instance. As demonstrated in the example, you can also include the deferred call in a transaction group creation context manager to execute it as part of a larger transaction group. When `.submit()` is called, only the specific method passed to `deferAppCall()` will be executed. ```ts // test cleanup ctx.reset(); ```
# AVM Opcodes
The file provides a comprehensive list of all opcodes and their respective types, categorized as *Mockable*, *Emulated*, or *Native* within the `algorand-typescript-testing` package. This section highlights a **subset** of opcodes and types that typically require interaction with the test execution context. `Native` opcodes are assumed to function as they do in the Algorand Virtual Machine, given their stateless nature. If you encounter issues with any `Native` opcodes, please raise an issue in the or contribute a PR following the guide. ```ts import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; // Create the context manager for snippets below const ctx = new TestExecutionContext(); ``` ## Implemented Types These types are fully implemented in TypeScript and behave identically to their AVM counterparts: ### 1. Cryptographic Operations The following opcodes are demonstrated: * `op.sha256` * `op.keccak256` * `op.ecdsaVerify` ```ts import { op } from '@algorandfoundation/algorand-typescript'; // SHA256 hash const data = Bytes('Hello, World!'); const hashed = op.sha256(data); // Keccak256 hash const keccakHashed = op.keccak256(data); // ECDSA verification const messageHash = Bytes.fromHex( 'f809fd0aa0bb0f20b354c6b2f86ea751957a4e262a546bd716f34f69b9516ae1', ); const sigR = Bytes.fromHex('18d96c7cda4bc14d06277534681ded8a94828eb731d8b842e0da8105408c83cf'); const sigS = Bytes.fromHex('7d33c61acf39cbb7a1d51c7126f1718116179adebd31618c4604a1f03b5c274a'); const pubkeyX = Bytes.fromHex('f8140e3b2b92f7cbdc8196bc6baa9ce86cf15c18e8ad0145d50824e6fa890264'); const pubkeyY = Bytes.fromHex('bd437b75d6f1db67155a95a0da4b41f2b6b3dc5d42f7db56238449e404a6c0a3'); const result = op.ecdsaVerify(op.Ecdsa.Secp256r1, messageHash, sigR, sigS, pubkeyX, pubkeyY); expect(result).toBe(true); ``` ### 2. 
Arithmetic and Bitwise Operations The following opcodes are demonstrated: * `op.addw` * `op.bitLength` * `op.getBit` * `op.setBit` ```ts import { op, Uint64 } from '@algorandfoundation/algorand-typescript'; // Addition with carry const [result, carry] = op.addw(Uint64(2n ** 63n), Uint64(2n ** 63n)); // Bitwise operations const value = Uint64(42); const bitLength = op.bitLength(value); const isBitSet = op.getBit(value, 3); const newValue = op.setBit(value, 2, 1); ``` For a comprehensive list of all opcodes and types, refer to the coverage page. ## Emulated Types Requiring Transaction Context These types necessitate interaction with the transaction context: ### op.Global ```ts import { op, arc4, uint64 } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class MyContract extends arc4.Contract { @arc4.abimethod() checkGlobals(): uint64 { return op.Global.minTxnFee + op.Global.minBalance; } } // Create the context manager for snippets below const ctx = new TestExecutionContext(); ctx.ledger.patchGlobalData({ minTxnFee: 1000, minBalance: 100000, }); const contract = ctx.contract.create(MyContract); const result = contract.checkGlobals(); expect(result).toEqual(101000); ``` ### op.Txn ```ts import { op, arc4 } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class MyContract extends arc4.Contract { @arc4.abimethod() checkTxnFields(): arc4.Address { return new arc4.Address(op.Txn.sender); } } // Create the context manager for snippets below const ctx = new TestExecutionContext(); const contract = ctx.contract.create(MyContract); const customSender = ctx.any.account(); ctx.txn.createScope([ctx.any.txn.applicationCall({ sender: customSender })]).execute(() => { const result = contract.checkTxnFields();
expect(result).toEqual(customSender); }); ``` ### op.AssetHoldingGet ```ts import { Account, arc4, Asset, op, uint64, Uint64 } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class AssetContract extends arc4.Contract { @arc4.abimethod() checkAssetHolding(account: Account, asset: Asset): uint64 { const [balance, _] = op.AssetHolding.assetBalance(account, asset); return balance; } } // Create the context manager for snippets below const ctx = new TestExecutionContext(); const contract = ctx.contract.create(AssetContract); const asset = ctx.any.asset({ total: 1000000 }); const account = ctx.any.account({ optedAssetBalances: new Map([[asset.id, Uint64(5000)]]) }); const result = contract.checkAssetHolding(account, asset); expect(result).toEqual(5000); ``` ### op.AppGlobal ```ts import { arc4, bytes, Bytes, op, uint64, Uint64 } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class StateContract extends arc4.Contract { @arc4.abimethod() setAndGetState(key: bytes, value: uint64): uint64 { op.AppGlobal.put(key, value); return op.AppGlobal.getUint64(key); } } // Create the context manager for snippets below const ctx = new TestExecutionContext(); const contract = ctx.contract.create(StateContract); const key = Bytes('test_key'); const value = Uint64(42); const result = contract.setAndGetState(key, value); expect(result).toEqual(value); const [storedValue, _] = ctx.ledger.getGlobalState(contract, key); expect(storedValue?.value).toEqual(42); ``` ### op.Block ```ts import { arc4, bytes, op } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class BlockInfoContract extends arc4.Contract { @arc4.abimethod() getBlockSeed(): bytes { return op.Block.blkSeed(1000); } } // Create the context manager for
snippets below const ctx = new TestExecutionContext(); const contract = ctx.contract.create(BlockInfoContract); ctx.ledger.patchBlockData(1000, { seed: op.itob(123456), timestamp: 1625097600 }); const seed = contract.getBlockSeed(); expect(seed).toEqual(op.itob(123456)); ``` ### op.AcctParamsGet ```ts import type { Account, uint64 } from '@algorandfoundation/algorand-typescript'; import { arc4, assert, op, Uint64 } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class AccountParamsContract extends arc4.Contract { @arc4.abimethod() getAccountBalance(account: Account): uint64 { const [balance, exists] = op.AcctParams.acctBalance(account); assert(exists); return balance; } } // Create the context manager for snippets below const ctx = new TestExecutionContext(); const contract = ctx.contract.create(AccountParamsContract); const account = ctx.any.account({ balance: 1000000 }); const balance = contract.getAccountBalance(account); expect(balance).toEqual(Uint64(1000000)); ``` ### op.AppParamsGet ```ts import type { Application } from '@algorandfoundation/algorand-typescript'; import { arc4, assert, op } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class AppParamsContract extends arc4.Contract { @arc4.abimethod() getAppCreator(appId: Application): arc4.Address { const [creator, exists] = op.AppParams.appCreator(appId); assert(exists); return new arc4.Address(creator); } } // Create the context manager for snippets below const ctx = new TestExecutionContext(); const contract = ctx.contract.create(AppParamsContract); const app = ctx.any.application(); const creator = contract.getAppCreator(app); expect(creator).toEqual(ctx.defaultSender); ``` ### op.AssetParamsGet ```ts import type { uint64 } from '@algorandfoundation/algorand-typescript'; import { arc4, assert, op } from
'@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class AssetParamsContract extends arc4.Contract { @arc4.abimethod() getAssetTotal(assetId: uint64): uint64 { const [total, exists] = op.AssetParams.assetTotal(assetId); assert(exists); return total; } } // Create the context manager for snippets below const ctx = new TestExecutionContext(); const contract = ctx.contract.create(AssetParamsContract); const asset = ctx.any.asset({ total: 1000000, decimals: 6 }); const total = contract.getAssetTotal(asset.id); expect(total).toEqual(1000000); ``` ### op.Box ```ts import type { bytes } from '@algorandfoundation/algorand-typescript'; import { arc4, assert, Bytes, op } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class BoxStorageContract extends arc4.Contract { @arc4.abimethod() storeAndRetrieve(key: bytes, value: bytes): bytes { op.Box.put(key, value); const [retrievedValue, exists] = op.Box.get(key); assert(exists); return retrievedValue; } } // Create the context manager for snippets below const ctx = new TestExecutionContext(); const contract = ctx.contract.create(BoxStorageContract); const key = Bytes('test_key'); const value = Bytes('test_value'); const result = contract.storeAndRetrieve(key, value); expect(result).toEqual(value); const storedValue = ctx.ledger.getBox(contract, key); expect(storedValue).toEqual(value); ``` ### compile ```ts import { arc4, compile, uint64 } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class MockContract extends arc4.Contract {} class ContractFactory extends arc4.Contract { @arc4.abimethod() compileAndGetBytes(): uint64 { const contractResponse = compile(MockContract); return contractResponse.localBytes; } } // Create the context manager for snippets below const
ctx = new TestExecutionContext(); const contract = ctx.contract.create(ContractFactory); const mockApp = ctx.any.application({ localNumBytes: 4 }); ctx.setCompiledApp(MockContract, mockApp.id); const result = contract.compileAndGetBytes(); expect(result).toBe(4); ``` ## Mockable Opcodes These opcodes are mockable in `algorand-typescript-testing`, allowing for controlled testing of complex operations. Note that the module being mocked is `@algorandfoundation/algorand-typescript-testing/internal`, which holds the stub implementations of `algorand-typescript` functions to be executed in a Node.js environment. ### op.vrfVerify ```ts import { expect, Mock, test, vi } from 'vitest'; import { bytes, Bytes, op, VrfVerify } from '@algorandfoundation/algorand-typescript'; vi.mock( import('@algorandfoundation/algorand-typescript-testing/internal'), async importOriginal => { const mod = await importOriginal(); return { ...mod, op: { ...mod.op, vrfVerify: vi.fn(), }, }; }, ); test('mock vrfVerify', () => { const mockedVrfVerify = op.vrfVerify as Mock; const mockResult = [Bytes('mock_output'), true] as readonly [bytes, boolean]; mockedVrfVerify.mockReturnValue(mockResult); const result = op.vrfVerify( VrfVerify.VrfAlgorand, Bytes('proof'), Bytes('message'), Bytes('public_key'), ); expect(result).toEqual(mockResult); }); ``` ### op.EllipticCurve ```ts import { expect, Mock, test, vi } from 'vitest'; import { Bytes, op } from '@algorandfoundation/algorand-typescript'; vi.mock( import('@algorandfoundation/algorand-typescript-testing/internal'), async importOriginal => { const mod = await importOriginal(); return { ...mod, op: { ...mod.op, EllipticCurve: { ...mod.op.EllipticCurve, add: vi.fn(), }, }, }; }, ); test('mock EllipticCurve', () => { const mockedEllipticCurveAdd = op.EllipticCurve.add as Mock; const mockResult = Bytes('mock_output'); mockedEllipticCurveAdd.mockReturnValue(mockResult); const result = op.EllipticCurve.add(op.Ec.BN254g1, Bytes('A'), Bytes('B'));
expect(result).toEqual(mockResult); }); ``` These examples demonstrate how to mock key mockable opcodes in `algorand-typescript-testing`. Use similar techniques (in your preferred testing framework) for other mockable opcodes such as `mimc` and `JsonRef`. Mocking these opcodes allows you to: 1. Control the behavior of complex operations not covered by *implemented* and *emulated* types. 2. Test edge cases and error conditions. 3. Isolate contract logic from external dependencies. ```ts // test cleanup ctx.reset(); ```
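Stripped of the vitest machinery, the mocking pattern above amounts to spreading the real module object and overriding one export with a recording stub. A framework-free sketch (the types and names below are illustrative, not part of any Algorand package):

```typescript
// The vi.mock factory pattern in miniature: spread the real module, override one member.
type OpModule = {
  sha512_256: (data: string) => string;
  vrfVerify: (proof: string, message: string) => [string, boolean];
};

const realOp: OpModule = {
  sha512_256: data => `hash(${data})`,
  vrfVerify: () => {
    throw new Error('unavailable outside the AVM'); // why a mock is needed at all
  },
};

// Hand-rolled stub that records calls, like vi.fn().mockReturnValue(...)
function makeStub<T>(returnValue: T) {
  const calls: unknown[][] = [];
  const fn = (...args: unknown[]): T => {
    calls.push(args);
    return returnValue;
  };
  return { fn, calls };
}

const stub = makeStub<[string, boolean]>(['mock_output', true]);
const mockedOp: OpModule = { ...realOp, vrfVerify: stub.fn };

// Untouched members still work; the overridden one returns the canned value
const [output, verified] = mockedOp.vrfVerify('proof', 'message');
console.log(output, verified, stub.calls.length); // mock_output true 1
```

Spreading `...mod` first is what keeps every other opcode behaving normally; only the single overridden export is replaced.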
# Testing Guide
The Algorand TypeScript Testing framework provides powerful tools for testing Algorand TypeScript smart contracts within a Node.js environment. This guide covers the main features and concepts of the framework, helping you write effective tests for your Algorand applications. ```{note} For all code examples in the _Testing Guide_ section, assume `ctx` is an instance of the `TestExecutionContext` class, obtained by initialising it as shown in the setup snippets. All subsequent code is executed within this context. ``` The Algorand TypeScript Testing framework streamlines unit testing of your Algorand TypeScript smart contracts by offering functionality to: 1. Simulate the Algorand Virtual Machine (AVM) environment 2. Create and manipulate test accounts, assets, applications, transactions, and ARC4 types 3. Test smart contract classes, including their states, variables, and methods 4. Verify logic signatures and subroutines 5. Manage global state, local state, scratch slots, and boxes in test contexts 6. Simulate transactions and transaction groups, including inner transactions 7. Verify opcode behavior By using this framework, you can ensure your Algorand TypeScript smart contracts function correctly before deploying them to a live network.
Key features of the framework include: * `TestExecutionContext`: The main entry point for testing, providing access to various testing utilities and simulated blockchain state * AVM Type Simulation: Accurate representations of AVM types like `uint64` and `bytes` * ARC4 Support: Tools for testing ARC4 contracts and methods, including struct definitions and ABI encoding/decoding * Transaction Simulation: Ability to create and execute various transaction types * State Management: Tools for managing and verifying global and local state changes * Opcode Simulation: Implementations of AVM opcodes for accurate smart contract behavior testing The framework is designed to work seamlessly with Algorand TypeScript smart contracts, allowing developers to write comprehensive unit tests that closely mimic the behavior of contracts on the Algorand blockchain. The following video includes a practical tutorial on testing Algorand smart contracts: [Algorand Smart Contract Testing - Typescript](https://www.youtube.com/embed/6SSga2FCg-c?rel=0) ## Table of Contents
# Smart Signature Testing
Test Algorand smart signatures (LogicSigs) with ease using the Algorand TypeScript Testing framework. ```ts import * as algots from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; // Create the context manager for snippets below const ctx = new TestExecutionContext(); ``` ## Define a LogicSig Extend the `algots.LogicSig` class to create a LogicSig: ```ts import * as algots from '@algorandfoundation/algorand-typescript'; class HashedTimeLockedLogicSig extends algots.LogicSig { program(): boolean { // LogicSig code here return true; // Approve transaction } } ``` ## Execute and Test Use `ctx.executeLogicSig()` to run and verify LogicSigs: ```ts ctx.txn.createScope([ctx.any.txn.payment()]).execute(() => { const result = ctx.executeLogicSig(new HashedTimeLockedLogicSig(), algots.Bytes('secret')); expect(result).toBe(true); }); ``` `executeLogicSig()` returns a boolean: * `true`: Transaction approved * `false`: Transaction rejected ## Pass Arguments Provide arguments to LogicSigs using `executeLogicSig()`: ```ts const result = ctx.executeLogicSig(new HashedTimeLockedLogicSig(), algots.Bytes('secret')); ``` Access arguments in the LogicSig with the `algots.op.arg()` opcode: ```ts import * as algots from '@algorandfoundation/algorand-typescript'; class HashedTimeLockedLogicSig extends algots.LogicSig { program(): boolean { // LogicSig code here const secret = algots.op.arg(0); const expectedHash = algots.op.sha256(algots.Bytes('secret')); return algots.op.sha256(secret) === expectedHash; } } // Example usage const secret = algots.Bytes('secret'); expect(ctx.executeLogicSig(new HashedTimeLockedLogicSig(), secret)).toBe(true); ``` For more details on available operations, see the opcodes documentation. ```ts // test cleanup ctx.reset(); ```
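The preimage check this LogicSig performs can be reproduced off-chain with the same primitive. A plain Node.js sketch, using `node:crypto`'s SHA-256 in place of `op.sha256` (`wouldApprove` is an illustrative name, not part of the framework):

```typescript
import { createHash } from 'node:crypto';

// Off-chain model of the LogicSig's check: approve only if sha256(arg)
// matches the hash committed in the program.
function sha256(data: Buffer): Buffer {
  return createHash('sha256').update(data).digest();
}

const committedHash = sha256(Buffer.from('secret')); // baked into the program

function wouldApprove(providedArg: Buffer): boolean {
  // Compare digests byte-for-byte, like the AVM's bytes equality
  return sha256(providedArg).equals(committedHash);
}

console.log(wouldApprove(Buffer.from('secret'))); // true
console.log(wouldApprove(Buffer.from('wrong'))); // false
```

This is the essence of a hash time-lock's "reveal the preimage" condition: the program commits to a digest, and only the matching argument unlocks approval.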
# State Management
`algorand-typescript-testing` provides tools to test state-related abstractions in Algorand smart contracts. This guide covers global state, local state, boxes, and scratch space management. ```ts import * as algots from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; // Create the context manager for snippets below const ctx = new TestExecutionContext(); ``` ## Global State Global state is represented as instance attributes on `algots.Contract` and `algots.arc4.Contract` classes. ```ts class MyContract extends algots.arc4.Contract { stateA = algots.GlobalState({ key: 'globalStateA' }); stateB = algots.GlobalState({ initialValue: algots.Uint64(1), key: 'globalStateB' }); } // In your test const contract = ctx.contract.create(MyContract); contract.stateA.value = algots.Uint64(10); contract.stateB.value = algots.Uint64(20); ``` ## Local State Local state is defined similarly to global state, but accessed using account addresses as keys. ```ts class MyContract extends algots.arc4.Contract { localStateA = algots.LocalState({ key: 'localStateA' }); } // In your test const contract = ctx.contract.create(MyContract); const account = ctx.any.account(); contract.localStateA(account).value = algots.Uint64(10); ``` ## Boxes The framework supports various Box abstractions available in `algorand-typescript`. 
```ts class MyContract extends algots.arc4.Contract { box: algots.Box<algots.uint64> | undefined; boxMap = algots.BoxMap<algots.bytes, algots.uint64>({ keyPrefix: 'boxMap' }); @algots.arc4.abimethod() someMethod(keyA: algots.bytes, keyB: algots.bytes, keyC: algots.bytes) { this.box = algots.Box<algots.uint64>({ key: keyA }); this.box.value = algots.Uint64(1); this.boxMap.set(keyB, algots.Uint64(1)); this.boxMap.set(keyC, algots.Uint64(2)); } } // In your test const contract = ctx.contract.create(MyContract); const keyA = algots.Bytes('keyA'); const keyB = algots.Bytes('keyB'); const keyC = algots.Bytes('keyC'); contract.someMethod(keyA, keyB, keyC); // Access boxes const boxContent = ctx.ledger.getBox(contract, keyA); expect(ctx.ledger.boxExists(contract, keyA)).toBe(true); // Set box content manually ctx.ledger.setBox(contract, keyA, algots.op.itob(algots.Uint64(1))); ``` ## Scratch Space Scratch space is represented as a list of 256 slots for each transaction. ```ts @algots.contract({ scratchSlots: [1, 2, { from: 3, to: 20 }] }) class MyContract extends algots.Contract { approvalProgram(): boolean { algots.op.Scratch.store(1, algots.Uint64(5)); algots.assert(algots.op.Scratch.loadUint64(1) === algots.Uint64(5)); return true; } } // In your test const contract = ctx.contract.create(MyContract); const result = contract.approvalProgram(); expect(result).toBe(true); const scratchSpace = ctx.txn.lastGroup.getScratchSpace(); expect(scratchSpace[1]).toEqual(5); ``` For more detailed information, explore the example contracts in the `examples/` directory. ```ts // test cleanup ctx.reset(); ```
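The scratch space model described above — 256 zero-initialized slots per transaction — can be sketched as a plain data structure. This is a conceptual model only, not the framework's internal representation:

```typescript
// Minimal model of per-transaction scratch space: 256 slots, each holding
// either a uint64 (bigint here) or bytes, all zero-initialized.
type ScratchValue = bigint | Uint8Array;

class ScratchSpace {
  private slots: ScratchValue[] = new Array(256).fill(0n);

  store(slot: number, value: ScratchValue): void {
    if (slot < 0 || slot > 255) throw new Error('slot out of range');
    this.slots[slot] = value;
  }

  // Typed load that fails if the slot holds the wrong kind of value,
  // mirroring the distinction between loadUint64 and loadBytes.
  loadUint64(slot: number): bigint {
    const v = this.slots[slot];
    if (typeof v !== 'bigint') throw new Error('slot does not hold a uint64');
    return v;
  }
}

const scratch = new ScratchSpace();
scratch.store(1, 5n);
console.log(scratch.loadUint64(1)); // 5n
console.log(scratch.loadUint64(0)); // 0n — untouched slots read as zero
```

Zero-initialization matters: on the AVM, reading a slot you never wrote yields zero rather than an error, and the model above preserves that behavior.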
# Transactions
The testing framework follows the transaction definitions of `algorand-typescript`. This section focuses on *value generators* and interactions with inner transactions; it also explains how the framework identifies the *active* transaction group during contract method, subroutine, or logic signature invocation. ```ts import * as algots from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; // Create the context manager for snippets below const ctx = new TestExecutionContext(); ``` ## Group Transactions Group transactions are the test implementations of the transaction stubs available under the `algots.gtxn.*` namespace. They are generated via the value generator accessible through the `ctx.any.txn` property: ```ts // Generate a random payment transaction const payTxn = ctx.any.txn.payment({ sender: ctx.any.account(), // Optional: Defaults to context's default sender if not provided receiver: ctx.any.account(), // Required amount: 1000000, // Required }); // Generate a random asset transfer transaction const assetTransferTxn = ctx.any.txn.assetTransfer({ sender: ctx.any.account(), // Optional: Defaults to context's default sender if not provided assetReceiver: ctx.any.account(), // Required xferAsset: ctx.any.asset({ assetId: 1 }), // Required assetAmount: 1000, // Required }); // Generate a random application call transaction const appCallTxn = ctx.any.txn.applicationCall({ appId: ctx.any.application(), // Required appArgs: [algots.Bytes('arg1'), algots.Bytes('arg2')], // Optional: Defaults to empty list if not provided accounts: [ctx.any.account()], // Optional: Defaults to empty list if not provided assets: [ctx.any.asset()], // Optional: Defaults to empty list if not provided apps: [ctx.any.application()], // Optional: Defaults to empty list if not provided approvalProgramPages: [algots.Bytes('approval_code')], // Optional: Defaults to empty list if not provided clearStateProgramPages: [algots.Bytes('clear_code')], // Optional: Defaults to empty list if not
provided scratchSpace: { 0: algots.Bytes('scratch') }, // Optional: Defaults to an empty object if not provided }); // Generate a random asset config transaction const assetConfigTxn = ctx.any.txn.assetConfig({ sender: ctx.any.account(), // Optional: Defaults to context's default sender if not provided configAsset: undefined, // Optional: If not provided, creates a new asset total: 1000000, // Required for new assets decimals: 0, // Required for new assets defaultFrozen: false, // Optional: Defaults to false if not provided unitName: algots.Bytes('UNIT'), // Optional: Defaults to empty string if not provided assetName: algots.Bytes('Asset'), // Optional: Defaults to empty string if not provided url: algots.Bytes('http://asset-url'), // Optional: Defaults to empty string if not provided metadataHash: algots.Bytes('metadata_hash'), // Optional: Defaults to empty bytes if not provided manager: ctx.any.account(), // Optional: Defaults to sender if not provided reserve: ctx.any.account(), // Optional: Defaults to zero address if not provided freeze: ctx.any.account(), // Optional: Defaults to zero address if not provided clawback: ctx.any.account(), // Optional: Defaults to zero address if not provided }); // Generate a random key registration transaction const keyRegTxn = ctx.any.txn.keyRegistration({ sender: ctx.any.account(), // Optional: Defaults to context's default sender if not provided voteKey: algots.Bytes('vote_pk'), // Optional: Defaults to empty bytes if not provided selectionKey: algots.Bytes('selection_pk'), // Optional: Defaults to empty bytes if not provided voteFirst: 1, // Optional: Defaults to 0 if not provided voteLast: 1000, // Optional: Defaults to 0 if not provided voteKeyDilution: 10000, // Optional: Defaults to 0 if not provided }); // Generate a random asset freeze transaction const assetFreezeTxn = ctx.any.txn.assetFreeze({ sender: ctx.any.account(), // Optional: Defaults to context's default sender if not provided freezeAsset:
ctx.ledger.getAsset(algots.Uint64(1)), // Required freezeAccount: ctx.any.account(), // Required frozen: true, // Required }); ``` ## Preparing for execution When a smart contract instance (application) is interacted with on the Algorand network, it must be performed in relation to a specific transaction or transaction group, where one or many transactions are application calls to target smart contract instances. To emulate this behaviour, the `createScope` context manager is available on the `ctx.txn` instance. It allows setting temporary transaction fields within a specific scope, passing in emulated transaction objects, and identifying the active transaction index within the transaction group: ```ts import { arc4, Txn } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class SimpleContract extends arc4.Contract { @arc4.abimethod() checkSender(): arc4.Address { return new arc4.Address(Txn.sender); } } const ctx = new TestExecutionContext(); // Create a contract instance const contract = ctx.contract.create(SimpleContract); // Override the sender of the active application call const testSender = ctx.any.account(); ctx.txn .createScope([ctx.any.txn.applicationCall({ appId: contract, sender: testSender })]) .execute(() => { // Call the contract method const result = contract.checkSender(); expect(result).toEqual(testSender); }); // Assert that the sender is testSender after exiting the // transaction group context expect(ctx.txn.lastActive.sender).toEqual(testSender); // Assert the size of the last transaction group expect(ctx.txn.lastGroup.transactions.length).toEqual(1); ``` ## Inner Transaction Inner transactions are AVM transactions that are signed and executed by AVM applications (instances of deployed smart contracts or signatures).
When testing smart contracts, to stay consistent with the AVM, the framework *does not* allow you to submit inner transactions outside of a contract/subroutine invocation, but you can interact with and manage inner transactions using the test execution context as follows: ```ts import { arc4, Asset, itxn, TransactionType, Txn } from '@algorandfoundation/algorand-typescript'; import { TestExecutionContext } from '@algorandfoundation/algorand-typescript-testing'; class MyContract extends arc4.Contract { @arc4.abimethod() payViaItxn(asset: Asset) { itxn .payment({ receiver: Txn.sender, amount: 1, }) .submit(); } } // setup context const ctx = new TestExecutionContext(); // Create a contract instance const contract = ctx.contract.create(MyContract); // Generate a random asset const asset = ctx.any.asset(); // Execute the contract method contract.payViaItxn(asset); // Access the last submitted inner transaction const paymentTxn = ctx.txn.lastGroup.lastItxnGroup().getPaymentInnerTxn(); // Assert properties of the inner transaction expect(paymentTxn.receiver).toEqual(ctx.txn.lastActive.sender); expect(paymentTxn.amount).toEqual(1); // Access all inner transactions in the last group ctx.txn.lastGroup.itxnGroups.at(-1)?.itxns.forEach(itxn => { // Perform assertions on each inner transaction expect(itxn.type).toEqual(TransactionType.Payment); }); // Access a specific inner transaction group const firstItxnGroup = ctx.txn.lastGroup.getItxnGroup(0); const firstPaymentTxn = firstItxnGroup.getPaymentInnerTxn(0); expect(firstPaymentTxn.type).toEqual(TransactionType.Payment); ``` In this example, we define a contract method `payViaItxn` that creates and submits an inner payment transaction. The test execution context automatically captures and stores the inner transactions submitted by the contract method.
Note that we don’t need to wrap the execution in a `createScope` context manager because the method is decorated with `@arc4.abimethod`, which automatically creates a transaction group for the method. The `createScope` context manager is only needed when you want to create more complex transaction groups or patch transaction fields for various transaction-related opcodes in the AVM. To access the submitted inner transactions: 1. Use `ctx.txn.lastGroup.lastItxnGroup().getPaymentInnerTxn()` to access the last submitted inner transaction of a specific type, in this case a payment transaction. 2. Iterate over all inner transactions in the last group using `ctx.txn.lastGroup.itxnGroups.at(-1)?.itxns`. 3. Access a specific inner transaction group using `ctx.txn.lastGroup.getItxnGroup(index)`. These methods provide type validation and will raise an error if the requested transaction type doesn’t match the actual type of the inner transaction. ## References * See the API reference for more details on the test context manager and the inner-transaction methods that perform implicit type validation. * See the `examples/` directory for more examples of smart contracts and associated tests that interact with inner transactions. ```ts // test cleanup ctx.reset(); ```
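The capture-and-validate behavior described above can be modeled as a simple recorder: each invocation appends a group of inner transactions, and typed getters raise when the requested type doesn't match. A conceptual sketch (the `ItxnRecorder` name and types are hypothetical, not the framework's API):

```typescript
// Conceptual model of how a test context records submitted inner transactions:
// each top-level invocation appends one group; typed getters validate on access.
type InnerTxn = { type: 'pay' | 'axfer'; amount: bigint };

class ItxnRecorder {
  private groups: InnerTxn[][] = [];

  record(group: InnerTxn[]): void {
    this.groups.push(group);
  }

  lastGroup(): InnerTxn[] {
    const group = this.groups.at(-1);
    if (!group) throw new Error('no inner transactions were submitted');
    return group;
  }

  // Typed accessor that raises if the transaction is not of the requested type,
  // mirroring getPaymentInnerTxn's implicit validation.
  getPaymentInnerTxn(index = 0): InnerTxn {
    const txn = this.lastGroup()[index];
    if (txn?.type !== 'pay') throw new Error(`transaction ${index} is not a payment`);
    return txn;
  }
}

const recorder = new ItxnRecorder();
recorder.record([{ type: 'pay', amount: 1n }]);
console.log(recorder.getPaymentInnerTxn().amount); // 1n
```

Failing loudly on a type mismatch, rather than returning `undefined`, is what makes the typed getters useful in assertions: a wrong transaction type surfaces as a test error at the access site.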
# AlgoKit Clients
When building on Algorand, you need reliable ways to communicate with the blockchain—sending transactions, interacting with smart contracts, and accessing blockchain data. AlgoKit Utils clients provide straightforward, developer-friendly interfaces for these interactions, reducing the complexity typically associated with blockchain development. This guide explains how to use these clients to simplify common Algorand development tasks, whether you’re sending a basic transaction or deploying complex smart contracts. AlgoKit offers two main types of clients to interact with the Algorand blockchain: 1. **Algorand Client** - A general-purpose client for all Algorand interactions, including: * Crafting, grouping, and sending transactions through a fluent interface of chained methods * Accessing network services through REST API clients for algod, indexer, and kmd * Configuring connection and transaction parameters with sensible defaults and optional overrides 2. **Typed Application Client** - A specialized, auto-generated client for interacting with specific smart contracts: * Provides type-safe interfaces generated from contract specification files * Enables an IntelliSense-driven development experience that surfaces the smart contract’s methods * Reduces errors through real-time type checking of arguments provided to smart contract methods Let’s explore each client type in detail. ## Algorand Client: Gateway to the Blockchain The `AlgorandClient` serves as your primary entry point for all Algorand operations. Think of it as your Swiss Army knife for blockchain interactions. ### Getting Started with AlgorandClient You can create an AlgorandClient instance in several ways, depending on your needs: These factory methods make it easy to connect to different Algorand networks without manually configuring connection details.
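Conceptually, each factory method resolves a network name to an algod endpoint configuration. The sketch below models that resolution using commonly cited defaults (AlgoKit LocalNet's fixed developer token on port 4001, AlgoNode's public endpoints) — treat these as illustrative assumptions and verify them against your own environment:

```typescript
// Sketch of what the factory methods resolve to: each network maps to an
// algod endpoint config. Endpoints below are conventional defaults, not
// authoritative values.
type AlgodConfig = { server: string; port: number; token: string };

function resolveAlgodConfig(network: 'localnet' | 'testnet' | 'mainnet'): AlgodConfig {
  switch (network) {
    case 'localnet':
      // AlgoKit LocalNet ships with a fixed, well-known developer token
      return { server: 'http://localhost', port: 4001, token: 'a'.repeat(64) };
    case 'testnet':
      // Public AlgoNode endpoints require no token
      return { server: 'https://testnet-api.algonode.cloud', port: 443, token: '' };
    case 'mainnet':
      return { server: 'https://mainnet-api.algonode.cloud', port: 443, token: '' };
  }
}

const local = resolveAlgodConfig('localnet');
console.log(local.port, local.token.length); // 4001 64
```

This is why the factory methods save boilerplate: the network name alone determines server, port, and token, so there is nothing left to configure by hand for the common cases.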
Once you have an `AlgorandClient` instance, you can access the REST API clients for the various Algorand APIs via the `AlgorandClient.client` property: For more information about the functionalities of the REST API clients, refer to the following pages: Interact with Algorand nodes, submit transactions, and get blockchain status Query historical transactions, account information, and blockchain data Manage wallets and keys (primarily for development environments) ### Understanding AlgorandClient’s Stateful Design The `AlgorandClient` is “stateful”, meaning that it caches various pieces of information that are reused multiple times. This allows the `AlgorandClient` to avoid redundant requests to the blockchain and to provide a more efficient interface for interacting with the blockchain. This is an important concept to understand before using the `AlgorandClient`. #### Account Signer Caching When sending transactions, you need to sign them with a private key. `AlgorandClient` can cache these signing capabilities, eliminating the need to provide signing information for every transaction, as you can see in the following example: The same example, but with different approaches to signer caching demonstrated: This caching mechanism simplifies your code, especially when sending multiple transactions from the same account. #### Suggested Parameter Caching `AlgorandClient` caches network-provided transaction values for you automatically to reduce network traffic. It has a set of default configurations that control this behavior, but you have the ability to override and change the configuration of this behavior. ##### What Are Suggested Parameters? In Algorand, every transaction requires a set of network-specific parameters that define how the transaction should be processed.
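Signer caching can be pictured as an address-to-signer map with a default fallback, which is what lets subsequent sends omit signing information. A framework-free sketch (the `SignerCache` class and `Signer` type are illustrative, not the AlgoKit Utils API):

```typescript
// Model of signer caching: register a signer once per address; subsequent
// sends look it up instead of requiring signing info every time.
type Signer = (txnBytes: Uint8Array) => Uint8Array;

class SignerCache {
  private signers = new Map<string, Signer>();
  private defaultSigner?: Signer;

  setSigner(address: string, signer: Signer): void {
    this.signers.set(address, signer);
  }

  setDefaultSigner(signer: Signer): void {
    this.defaultSigner = signer;
  }

  // Resolution order: per-address signer first, then the default, else an error
  getSigner(address: string): Signer {
    const signer = this.signers.get(address) ?? this.defaultSigner;
    if (!signer) throw new Error(`no signer registered for ${address}`);
    return signer;
  }
}

const cache = new SignerCache();
const noopSigner: Signer = bytes => bytes; // stand-in for a real signing function
cache.setSigner('ALICE...', noopSigner);
console.log(cache.getSigner('ALICE...') === noopSigner); // true
```

Registering once and resolving by sender address is the pattern that keeps multi-transaction code free of repeated signing boilerplate.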
These “suggested parameters” include: * **Fee:** The transaction fee (in microAlgos) * **First Valid Round:** The first blockchain round where the transaction can be processed * **Last Valid Round:** The last blockchain round where the transaction can be processed (after this, the transaction expires) * **Genesis ID:** The identifier for the Algorand network (e.g., “mainnet-v1.0”) * **Genesis Hash:** The hash of the genesis block for the network * **Min Fee:** The minimum fee required by the network These parameters are called “suggested” because the network provides recommended values, but developers can modify them (for example, to increase the fee during network congestion). ##### Why Cache These Parameters? Without caching, your application would need to request these parameters from the network before every transaction, which: * **Increases latency:** Each transaction would require an additional network request * **Increases network load:** Both for your application and the Algorand node * **Slows down user experience:** Especially when creating multi-transaction groups Since these parameters only change every few seconds (when new blocks are created), repeatedly requesting them wastes resources. ##### How Parameter Caching Works The `AlgorandClient` automatically: 1. Requests suggested parameters when needed 2. Caches them for a configurable time period (default: 3 seconds) 3. Reuses the cached values for subsequent transactions 4. 
Refreshes the cache when it expires ##### Customized Parameter Caching `AlgorandClient` has a set of default configurations that control this behavior, but you can override them: * `algorand.setDefaultValidityWindow(validityWindow)` - Set the default validity window (the number of rounds from the current round for which a transaction remains valid). A smallish value is usually ideal: it avoids transactions that stay valid far into the future and could still be submitted after you had assumed submission failed. The validity window defaults to 10, except in automated testing against LocalNet, where it’s set to 1000. * `algorand.setSuggestedParams(suggestedParams, until?)` - Set the suggested network parameters to use (optionally until the given time) * `algorand.setSuggestedParamsTimeout(timeout)` - Set the timeout that is used to cache the suggested network parameters (by default 3 seconds) * `algorand.getSuggestedParams()` - Get the current suggested network parameters object: either the cached value or, if the cache has expired, a fresh value By understanding and properly configuring suggested parameter caching, you can optimize your application’s performance while ensuring transactions are processed correctly by the Algorand network. ## Typed App Clients: Smart Contract Interaction Simplified While the `AlgorandClient` handles general blockchain interactions, typed app clients provide specialized interfaces for deployed applications. These clients are generated from contract specifications (ARC-32/ARC-56) and offer: * Type-safe method calls * Automatic parameter validation * IntelliSense code completion support ### Generating App Clients The relevant smart contract’s app client is generated using the *ARC56/ARC32* ABI file.
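The caching behavior described above — fetch once, reuse until a 3-second default timeout elapses, then refresh — can be sketched as a small time-based cache. This is a conceptual model of the mechanism, not AlgoKit Utils internals:

```typescript
// Model of suggested-parameter caching: fetch once, reuse until the timeout
// (3 seconds by default) elapses, then refresh.
type SuggestedParams = { fee: bigint; firstValid: bigint; lastValid: bigint };

class SuggestedParamsCache {
  private cached?: { params: SuggestedParams; fetchedAt: number };
  private timeoutMs = 3_000; // default cache lifetime

  constructor(
    private readonly fetchFromNode: () => SuggestedParams,
    private readonly now: () => number = Date.now, // injectable clock for testing
  ) {}

  // Analogous to overriding the cache timeout configuration
  setTimeoutMs(ms: number): void {
    this.timeoutMs = ms;
  }

  getSuggestedParams(): SuggestedParams {
    const t = this.now();
    if (!this.cached || t - this.cached.fetchedAt >= this.timeoutMs) {
      this.cached = { params: this.fetchFromNode(), fetchedAt: t };
    }
    return this.cached.params;
  }
}

// With a fake clock, two calls inside the window hit the node only once
let fetches = 0;
let clock = 0;
const cache = new SuggestedParamsCache(
  () => (fetches++, { fee: 1000n, firstValid: 1n, lastValid: 11n }),
  () => clock,
);
cache.getSuggestedParams();
cache.getSuggestedParams();
console.log(fetches); // 1
clock = 3000; // cache expired
cache.getSuggestedParams();
console.log(fetches); // 2
```

The injectable clock is only for demonstration; the point is that repeated sends within the cache window cost zero extra network round-trips.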
There are two different ways to generate an application client for a smart contract: #### 1. Using the AlgoKit Build CLI Command When you are using the AlgoKit smart contract template for your project, compiling your *ARC4* smart contract written in either TypeScript or Python will automatically generate a TypeScript or Python application client, depending on which language you chose for contract interaction. Simply run the following command to generate the artifacts, including the typed application client:

```shell
algokit project run build
```

After running the command, you should see the generated artifacts, including the typed application client, in the `artifacts` directory under the `smart_contracts` directory. #### 2. Using the AlgoKit Generate CLI Command There is also an AlgoKit CLI command to generate the app client for a smart contract. You can also use it to define custom commands inside the `.algokit.toml` file in your project directory. Note that you can specify which language you want for the application client with the file extensions `.ts` for TypeScript and `.py` for Python.

```shell
# To output a single arc32.json to a TypeScript typed app client:
algokit generate client path/to/arc32.json --output client.ts

# To process multiple arc32.json in a directory structure and output a TypeScript app client for each in the current directory:
algokit generate client smart_contracts/artifacts --output {contract_name}.ts

# To process multiple arc32.json in a directory structure and output a Python client alongside each arc32.json:
algokit generate client smart_contracts/artifacts --output {app_spec_path}/client.py
```

When compiled, all *ARC-4* smart contracts generate an `arc56.json` or `arc32.json` file, depending on which app spec was used. This file contains the smart contract’s extended ABI, following the *ARC-56* or *ARC-32* standard respectively.
### Working with a Typed App Client Object To get an instance of a typed client you can use an `AlgorandClient` instance or a typed app `Factory` instance. The approach to obtaining a client instance depends on how many app clients you require for a given app spec and if the app has already been deployed, which is summarised below: #### App is Already Deployed #### App is not Deployed For applications that need to work with multiple instances of the same smart contract spec, factories provide a convenient way to manage multiple clients: ### Calling a Smart Contract Method To call a smart contract method using the application client instance, follow these steps: The typed app client ensures you provide the correct parameters and handles all the underlying transaction construction and submission. ### Example: Deploying and Interacting with a Smart Contract For a simple example that deploys a contract and calls a `hello` method, see below: ## When to Use Each Client Type * Use the `AlgorandClient` when you need to: * Send basic transactions (payments, asset transfers) * Work with blockchain data in a general way * Interact with contracts you don’t have specifications for * Use Typed App Clients when you need to: * Deploy and interact with specific smart contracts * Benefit from type safety and IntelliSense * Build applications that leverage contract-specific functionality For most Algorand applications, you’ll likely use both: `AlgorandClient` for general blockchain operations and Typed App Clients for smart contract interactions. ## Next Steps Now that you understand AlgoKit Utils Clients, you’re ready to start building on Algorand with confidence. Remember: * Start with the AlgorandClient for general blockchain interactions * Generate Typed Application Clients for your smart contracts * Leverage the stateful design of these clients to simplify your code
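As a closing illustration, the deploy-and-call flow described above might look like the following sketch. It assumes AlgoKit Utils for TypeScript, a running LocalNet, and a generated `HelloWorldFactory` client for the Hello World contract; the factory name, artifact path, and environment account name are assumptions based on the AlgoKit template defaults and may differ in your project:

```typescript
// Sketch only: assumes a LocalNet node and a client generated from the
// HelloWorld contract's app spec (names may differ in your project).
import { AlgorandClient } from '@algorandfoundation/algokit-utils';
import { HelloWorldFactory } from './artifacts/hello_world/HelloWorldClient';

async function main() {
  // Point the client at LocalNet and pick a funded account to deploy from.
  const algorand = AlgorandClient.defaultLocalNet();
  const deployer = await algorand.account.fromEnvironment('DEPLOYER');

  // Get a typed factory for the app spec and deploy idempotently.
  const factory = algorand.client.getTypedAppFactory(HelloWorldFactory, {
    defaultSender: deployer.addr,
  });
  const { appClient } = await factory.deploy({
    onUpdate: 'append',
    onSchemaBreak: 'append',
  });

  // Call the ABI method through the typed client; parameters are type-checked.
  const response = await appClient.send.hello({ args: { name: 'world' } });
  console.log(response.return);
}

main();
```

The typed client handles transaction construction, signing, and submission; you only supply the typed method arguments.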
# Algorand ARCs
> To discuss ARC drafts, use the corresponding issue in the issue tracker.
Welcome to the Algorand ARCs (Algorand Request for Comments) page. Here you’ll find information on Algorand’s improvement proposals, the ARCs. New ideas for ARCs are discussed through — you can find and contribute to them in the . ## Living ARCs ## Final ARCs ## Last Call ARCs ## Withdrawn ARCs ## Deprecated ARCs ## Draft ARCs ## ARC Status Terms * **Idea** - An idea that is pre-draft. This is not tracked within the ARC Repository. * **Draft** - The first formally tracked stage of an ARC in development. An ARC is merged by an ARC Editor into the ARC repository when properly formatted. * **Review** - An ARC Author marks an ARC as ready for and requesting Peer Review. * **Last Call** - This is the final review window for an ARC before moving to FINAL. An ARC editor will assign Last Call status and set a review end date (`last-call-deadline`), typically 14 days later. If this period results in necessary normative changes it will revert the ARC to Review. * **Final** - This ARC represents the final standard. A Final ARC exists in a state of finality and should only be updated to correct errata and add non-normative clarifications.
* **Stagnant** - Any ARC in Draft or Review if inactive for a period of 6 months or greater is moved to Stagnant. An ARC may be resurrected from this state by Authors or ARC Editors through moving it back to Draft. * **Withdrawn** - The ARC Author(s) have withdrawn the proposed ARC. This state has finality and can no longer be resurrected using this ARC number. If the idea is pursued at a later date it is considered a new proposal. * **Deprecated** - This ARC has been deprecated. It has been replaced by another one or is now obsolete. * **Living** - A special status for ARCs that are designed to be continually updated and not reach a state of finality.
# ARC Purpose and Guidelines
> Guide explaining how to write a new ARC
## Abstract ### What is an ARC? ARC stands for Algorand Request for Comments. An ARC is a design document providing information to the Algorand community or describing a new feature for Algorand or its processes or environment. The ARC should provide a concise technical specification and a rationale for the feature. The ARC author is responsible for building consensus within the community and documenting dissenting opinions. We intend ARCs to be the primary mechanisms for proposing new features and collecting community technical input on an issue. We maintain ARCs as text files in a versioned repository. Their revision history is the historical record of the feature proposal. ## Specification ### ARC Types There are three types of ARC: * A **Standards track ARC**: application-level standards and conventions, including contract standards such as NFT standards, Algorand ABI, URI schemes, library/package formats, and wallet formats. * A **Meta ARC** describes a process surrounding Algorand or proposes a change to (or an event in) a process. Process ARCs are like Standards track ARCs but apply to areas other than the Algorand protocol. They may propose an implementation, but not to Algorand’s codebase; they often require community consensus; unlike Informational ARCs, they are more than recommendations, and users are typically not free to ignore them. Examples include procedures, guidelines, changes to the decision-making process, and changes to the tools or environment used in Algorand development. Any meta-ARC is also considered a Process ARC. * An **Informational ARC** describes an Algorand design issue or provides general guidelines or information to the Algorand community but does not propose a new feature. Informational ARCs do not necessarily represent Algorand community consensus or a recommendation, so users and implementers are free to ignore Informational ARCs or follow their advice. We recommend that a single ARC contains a single key proposal or new idea. 
The more focused the ARC, the more successful it tends to be. A change to one client does not require an ARC; a change that affects multiple clients, or defines a standard for multiple apps to use, does. An ARC must meet specific minimum criteria. It must be a clear and complete description of the proposed enhancement. The enhancement must represent a net improvement. If applicable, the proposed implementation must be solid and not complicate the protocol unduly. ### Shepherding an ARC Parties involved in the process are you, the champion or *ARC author*, the , the , and the . Before writing a formal ARC, you should vet your idea. Ask the Algorand community first if an idea is original to avoid wasting time on something that will be rejected based on prior research. You **MUST** open an issue on the to do this. You **SHOULD** also share the idea on the . Once the idea has been vetted, your next responsibility will be to create a to present (through an ARC) the idea to the reviewers and all interested parties and invite editors, developers, and the community to give feedback on the aforementioned issue. The pull request with the **DRAFT** status **MUST**: * Have been vetted on the forum. * Be editable by ARC Editors; it will be closed otherwise. You should try and gauge whether the interest in your ARC is commensurate with both the work involved in implementing it and how many parties will have to conform to it. Negative community feedback will be considered and may prevent your ARC from moving past the Draft stage. To facilitate the discussion between each party involved in an ARC, you **SHOULD** use the specific . The ARC author is in charge of creating the PR and changing the status to **REVIEW**. The pull request with the **REVIEW** status **MUST**: * Contain a reference implementation. * Have garnered the interest of multiple projects; it will be set to **STAGNANT** otherwise. 
To update the status of an ARC from **REVIEW** to **LAST CALL**, a discussion will occur with all parties involved in the process. Any stakeholder **SHOULD** implement the ARC to point out any flaws that might occur. *In short, the role of a champion is to write the ARC using the style and format described below, shepherd the discussions in the appropriate forums, build community consensus around the idea, and gather projects with similar needs who will implement it.* ### ARC Process The following is the standardization process for all ARCs in all tracks:  **Idea** - An idea that is pre-draft. This is not tracked within the ARC Repository. **Draft** - The first formally tracked stage of an ARC in development. An ARC is merged by an ARC Editor into the ARC repository when adequately formatted. **Review** - An ARC Author marks an ARC as ready for and requests Peer Review. **Last Call** - The final review window for an ARC before moving to `FINAL`. An ARC editor will assign `Last Call` status and set a review end date (last-call-deadline), typically 1 month later. If this period results in necessary normative change, it will revert the ARC to `REVIEW`. **Final** - This ARC represents the final standard. A Final ARC exists in a state of finality and should only be updated to correct errata and add non-normative clarifications. **Stagnant** - Any ARC in `DRAFT`,`REVIEW` or `LAST CALL`, if inactive for 6 months or greater, is moved to `STAGNANT`. An ARC may be resurrected from this state by Authors or ARC Editors by moving it back to `DRAFT`. > An ARC with the status **STAGNANT** which does not have any activity for 1 month will be closed. *ARC Authors are notified of any algorithmic change to the status of their ARC* **Withdrawn** - The ARC Author(s)/Editor(s) has withdrawn the proposed ARC. This state has finality and can no longer be resurrected using this ARC number. If the idea is pursued later, it is considered a new proposal. 
**Idle** - Any ARC in `FINAL` or `LIVING` that has not been widely adopted by the ecosystem within 12 months. It will be moved to `DEPRECATED` after 6 months of `IDLE`, and can go back to `FINAL` or `LIVING` if adoption starts. **Living** - A special status for ARCs which, by design, will be continually updated and **MIGHT** not reach a state of finality. **Deprecated** - A status for ARCs that are no longer aligned with our ecosystem or have been superseded by another ARC. ### What belongs in a successful ARC? Each ARC should have the following parts: * Preamble - style headers containing metadata about the ARC, including the ARC number, a short descriptive title (limited to a maximum of 44 characters), a description (limited to a maximum of 140 characters), and the author details. Irrespective of the category, the title and description should not include ARC numbers. See for details. * Abstract - A multi-sentence (short paragraph) technical summary. It should be a very terse and human-readable version of the specification section. Someone should be able to read only the abstract to get the gist of what this specification does. * Specification - The technical specification should describe the syntax and semantics of any new feature. The specification should be detailed enough to allow competing, interoperable implementations for any of the current Algorand clients. * Rationale - The rationale fleshes out the specification by describing what motivated the design and why particular design decisions were made. It should describe alternate designs that were considered and related work, e.g., how the feature is supported in other languages. The rationale may also provide evidence of consensus within the community and should discuss significant objections or concerns raised during discussions. * Backwards Compatibility - All ARCs that introduce backward incompatibilities must include a section describing these incompatibilities and their severity.
The ARC must explain how the author proposes to deal with these incompatibilities. ARC submissions without a sufficient backward compatibility treatise may be rejected outright. * Test Cases - Test cases for implementation are mandatory for ARCs that affect consensus changes. Tests should either be inlined in the ARC as data (such as input/expected-output pairs) or included in `https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-###/`. * Reference Implementation - A section that contains a reference/example implementation that people **MUST** use to assist in understanding or implementing this specification. If the reference implementation is too complex, it **MUST** be included in `https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-###/` * Security Considerations - All ARCs must contain a section that discusses the security implications/considerations relevant to the proposed change. Include information that might be important for security discussions, surfaces risks, and can be used throughout the life-cycle of the proposal. E.g., include security-relevant design decisions, concerns, essential discussions, implementation-specific guidance and pitfalls, an outline of threats and risks, and how they are being addressed. ARC submissions missing the “Security Considerations” section will be rejected. An ARC cannot proceed to status “Final” without a Security Considerations discussion deemed sufficient by the reviewers. * Copyright Waiver - All ARCs must be in the public domain. See the bottom of this ARC for an example copyright waiver. ### ARC Formats and Templates ARCs should be written in format. There is a to follow. ### ARC Header Preamble Each ARC must begin with an style header preamble, preceded and followed by three hyphens (`---`). This header is also termed “front matter” by . The headers must appear in the following order.
Headers marked with ”\*” are optional and are described below. All other headers are required. `arc:` *ARC number* (It is determined by the ARC editor) `title:` *The ARC title is a few words, not a complete sentence* `description:` *Description is one full (short) sentence* `author:` *A list of the author’s or authors’ name(s) and/or username(s), or name(s) and email(s). Details are below.* > The `author` header lists the names, email addresses, or usernames of the authors/owners of the ARC. Those who prefer anonymity may use a username only or a first name and a username. The format of the `author` header value must be: Random J. User <> or Random J. User (@username) At least one author must use a GitHub username so they can be notified of change requests and approve or reject them. `* discussions-to:` *A url pointing to the official discussion thread* While an ARC is in state `Idea`, a `discussions-to` header will indicate the URL where the ARC is being discussed. As mentioned above, an example of a place to discuss your ARC is the Algorand forum, but you can also use the Algorand Discord #arcs chat room. When the ARC reaches the state `Draft`, the `discussions-to` header will redirect to the discussion in . `status:` *Draft, Review, Last Call, Final, Stagnant, Withdrawn, Living* `* last-call-deadline:` *Date review period ends* `type:` *Standards Track, Meta, or Informational* `* category:` *Core, Networking, Interface, or ARC* (Only needed for Standards Track ARCs) `* sub-category:` *General, Asa, Application, Explorer or Wallet* An optional header classifying the ARC into one of these sub-categories. `created:` *Date created on* > The `created` header records the date that the ARC was assigned a number. Both headers should be in yyyy-mm-dd format, e.g. 2001-08-14. `* updated:` *Comma separated list of dates* The `updated` header records the date(s) when the ARC was updated with “substantial” changes.
This header is only valid for ARCs of Draft and Active status. `* requires:` *ARC number(s)* ARCs may have a `requires` header, indicating the ARC numbers that this ARC depends on. `* replaces:` *ARC number(s)* `* superseded-by:` *ARC number(s)* ARCs may also have a `superseded-by` header indicating that an ARC has been rendered obsolete by a later document; the value is the number of the ARC that replaces the current document. The newer ARC must have a `replaces` header containing the number of the ARC that it rendered obsolete. > ARCs may also have an `extended-by` header indicating that functionalities have been added to the existing, still active ARC; the value is the number of the ARC that updates the current document. The newer ARC must have an `extends` header containing the number of the ARC that it extends. `* resolution:` *A url pointing to the resolution of this ARC* Headers that permit lists must separate elements with commas. Headers requiring dates will always do so in the format of ISO 8601 (yyyy-mm-dd). ### Style Guide When referring to an ARC by number, it should be written in the hyphenated form `ARC-X` where `X` is the ARC’s assigned number. ### Linking to other ARCs References to other ARCs should follow the format `ARC-N`, where `N` is the ARC number you are referring to. Each ARC that is referenced in an ARC **MUST** be accompanied by a relative markdown link the first time it is referenced, and **MAY** be accompanied by a link on subsequent references. The link **MUST** always be done via relative paths so that the links work in this GitHub repository, forks of this repository, the main ARCs site, mirrors of the main ARC site, etc. For example, you would link to this ARC with `[ARC-0](./arc-0000.md)`. ### Auxiliary Files Images, diagrams, and auxiliary files should be included in a subdirectory of the `assets` folder for that ARC as follows: `assets/arc-N` (where **N** is to be replaced with the ARC number). 
When linking to an image in the ARC, use relative links such as `../assets/arc-1/image.png`. ### Application’s Method Names To indicate which ARCs have been implemented on a particular application, a namespace with the ARC number should be used before every method name: `arc<N>_methodName`. > Where `<N>` represents the specific ARC number associated with the standard. eg:

```json
{
  "name": "Method naming convention",
  "desc": "Example",
  "methods": [
    {
      "name": "arc0_method1",
      "desc": "Method 1",
      "args": [
        { "type": "uint64", "name": "Number", "desc": "A number" }
      ],
      "returns": { "type": "void[]" }
    },
    {
      "name": "arc0_method2",
      "desc": "Method 2",
      "args": [
        { "type": "byte[]", "name": "user_data", "desc": "Some characters" }
      ],
      "returns": { "type": "void[]" }
    }
  ]
}
```

### Application’s Event Names To indicate which ARCs have been implemented on a particular application, a namespace with the ARC number should be used before every event name: `arc<N>_EventName`. > Where `<N>` represents the specific ARC number associated with the standard. eg:

```json
{
  "name": "Event naming convention",
  "desc": "Example",
  "events": [
    {
      "name": "arc0_Event1",
      "desc": "Event 1",
      "args": [
        { "type": "uint64", "name": "Number", "desc": "A number" }
      ]
    },
    {
      "name": "arc0_Event2",
      "desc": "Event 2",
      "args": [
        { "type": "byte[]", "name": "user_data", "desc": "Some characters" }
      ]
    }
  ]
}
```

## Rationale This document was derived heavily from , which was written by Martin Becze and Hudson Jameson, which in turn was derived from written by Amir Taaki, which in turn was derived from . In many places, text was copied and modified. Although the PEP-0001 text was written by Barry Warsaw, Jeremy Hylton, and David Goodger, they are not responsible for its use in the Algorand Request for Comments. They should not be bothered with technical questions specific to Algorand or the ARC. Please direct all comments to the ARC editors.
## Security Considerations ### Usage of relative links Every link **SHOULD** be relative.

| OK | `[ARC-0](./arc-0000.md)` |
| :-- | -------------------------------------------------------------------------------: |
| NOK | `[ARC-0](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0000.md)` |

If you are using many links you **SHOULD** use this format: ### Usage of non-relative links If for some reason (CC0, RFC, …) you need to refer to something outside of the repository, you **MUST** use the following syntax

| OK | `ARCS` |
| :-- | --------------------------------------------------------------: |
| NOK | `[ARCS](https://github.com/algorandfoundation/ARCs)` |

### Transferring ARC Ownership It occasionally becomes necessary to transfer ownership of ARCs to a new champion. In general, we would like to retain the original author as a co-author of the transferred ARC, but that is really up to the original author. A good reason to transfer ownership is that the original author no longer has the time or interest in updating it or following through with the ARC process, or has fallen off the face of the ‘net (i.e., is unreachable or not responding to email). A bad reason to transfer ownership is that you disagree with the direction of the ARC. We try to build consensus around an ARC, but if that is not possible, you can always submit a competing ARC. If you are interested in assuming ownership of an ARC, send a message asking to take over, addressed to both the original author and the ARC editor. If the original author does not respond in a timely manner, the ARC editor will make a unilateral decision (it’s not like such decisions can’t be reversed :)). ### ARC Editors The current ARC editor is: * Stéphane Barroso (@sudoweezy) ### ARC Editor Responsibilities For each new ARC that comes in, an editor does the following: * Read the ARC to check if it is ready: sound and complete.
The ideas must make technical sense, even if they do not seem likely to reach final status. * The title should accurately describe the content. * Check the ARC for language (spelling, grammar, sentence structure, etc.), markup (GitHub-flavored Markdown), and code style. If the ARC is not ready, the editor will send it back to the author for revision with specific instructions. Once the ARC is ready for the repository, the ARC editor will: * Assign an ARC number * Create a living discussion in the Issues section of this repository > The issue will be closed when the ARC reaches the status *Final* or *Withdrawn* * Merge the corresponding pull request * Send a message back to the ARC author with the next steps. The editors do not pass judgment on ARCs. We merely do the administrative & editorial part. ## Copyright Copyright and related rights waived via .
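Putting the preamble headers described earlier together, a complete front matter block might look like the following. All values here are illustrative placeholders, not taken from a real ARC:

```yaml
---
arc: 9999                 # assigned by an ARC editor
title: Example Token Naming Convention
description: A convention for naming example tokens on Algorand.
author: Random J. User (@username)
discussions-to: https://github.com/algorandfoundation/ARCs/issues/9999
status: Draft
type: Standards Track
category: ARC
sub-category: Asa
created: 2001-08-14
---
```

Note that the headers appear in the order required by the preamble section, and the optional headers (`discussions-to`, `category`, `sub-category`) are included only because this hypothetical ARC needs them.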
# Algorand Wallet Transaction Signing API
> An API for a function used to sign a list of transactions.
## Abstract The goal of this API is to propose a standard way for a dApp to request that an Algorand wallet sign a list of transactions. This document also includes detailed security requirements to reduce the risk of users being tricked into signing dangerous transactions. As the Algorand blockchain adds new features, these requirements may change. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. ### Overview > This overview section is non-normative. After this overview, the syntax of the interfaces is described, followed by the semantics and the security requirements. At a high level, the API allows signing: * A valid group of transactions (aka an atomic transfer). * (**OPTIONAL**) A list of groups of transactions. Signatures are requested by calling a function `signTxns(txns)` on a list `txns` of transactions. The dApp may also provide an optional parameter `opts`. Each transaction is represented by a `WalletTransaction` object. The only required field of a `WalletTransaction` is `txn`, a base64 encoding of the canonical msgpack encoding of the unsigned transaction. There are three main use cases: 1. The transaction needs to be signed and the sender of the transaction is an account known by the wallet. This is the most common case. Example:

```json
{ "txn": "iaNhbXT..." }
```

The wallet is free to generate the resulting signed transaction in any way it wants. In particular, the signature may be a multisig, may involve rekeying, or for very advanced wallets may use logicsigs. > Remark: If the wallet uses a large logicsig to sign the transaction and there is congestion, the fee estimated by the dApp may be too low. A future standard may provide a wallet API allowing the dApp to correctly compute the estimated fee.
Before such a standard exists, the dApp may need to retry with a higher fee when this issue arises. 2. The transaction does not need to be signed. This happens when the transaction is part of a group of transactions and is signed by another party or by a logicsig. In that case, the field `signers` is set to an empty array. Example:

```json
{ "txn": "iaNhbXT...", "signers": [] }
```

3. (**OPTIONAL**) The transaction needs to be signed but the sender of the transaction is *not* an account known by the wallet. This happens when the dApp uses a sender account derived from one or more accounts of the wallet. For example, the sender account may be a multisig account with public keys corresponding to some accounts of the wallet, or the sender account may be rekeyed to an account of the wallet. Example:

```json
{ "txn": "iaNhbXT...", "authAddr": "HOLQV2G65F6PFM36MEUKZVHK3XM7UEIFLG35UJGND77YDXHKXHKX4UXUQU", "msig": { "version": 1, "threshold": 2, "addrs": [ "5MF575NQUDMRWOTS27KIBL2MFPJHKQEEF4LZEN6H3CZDAYVUKESMGZPK3Q", "FS7G3AHTDVMQNQQBHZYMGNWAX7NV2XAQSACQH3QDBDOW66DYTAQQW76RYA", "DRSHY5ONWKVMWWASTB7HOELVF5HRUTRQGK53ZK3YNMESZJR6BBLMNH4BBY" ] }, "signers": ... }
```

Note that in both the first and the third use cases, the wallet may sign the transaction using a multisig and may use a different authorized address (`authAddr`) than the sender address (i.e., rekeying). The main difference is that in the first case, the wallet knows how to sign the transaction (i.e., whether the sender address is a multisig and/or rekeyed), while in the third case, the wallet may not know it. ### Syntax and Interfaces > Interfaces are defined in TypeScript. All the objects that are defined are valid JSON objects.
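To make the use cases above concrete, here is an illustrative sketch of how a dApp might assemble the `txns` argument for `signTxns` for use cases 1 and 2. The base64 payloads are the truncated placeholders from the examples above, not real transactions:

```typescript
// Illustrative only: the `txns` list a dApp would pass to signTxns(),
// combining use case 1 (sign with a known account) and use case 2
// (do not sign; another party or a logicsig signs).
const txns: { txn: string; signers?: string[] }[] = [
  // 1. Only `txn` is required; the wallet decides how to sign.
  { txn: "iaNhbXT..." },
  // 2. `signers: []` tells the wallet NOT to sign this transaction.
  { txn: "iaNhbXT...", signers: [] },
];

console.log(txns.length); // 2
```

The wallet would then return a signed transaction for the first entry and `null` (or a provided `stxn`) for the second.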
#### Interface `SignTxnsFunction` A wallet transaction signing function `signTxns` is defined by the following interface:

```typescript
export type SignTxnsFunction = (
  txns: WalletTransaction[],
  opts?: SignTxnsOpts
) => Promise<(SignedTxnStr | null)[]>;
```

where: * `txns` is a non-empty list of `WalletTransaction` objects (defined below). * `opts` is an optional parameter object `SignTxnsOpts` (defined below). In case of error, the wallet (i.e., the `signTxns` function in this document) **MUST** reject the promise with an error object `SignTxnsError` defined below. This ARC uses interchangeably the terms “throw an error” and “reject a promise with an error”. #### Interface `AlgorandAddress` An Algorand address is represented by a 58-character base32 string. It includes the checksum.

```typescript
export type AlgorandAddress = string;
```

An Algorand address is *valid* if it is a valid base32 string without padding and if the checksum is valid. > Example: `"6BJ32SU3ABLWSBND7U5H2QICQ6GGXVD7AXSSMRYM2GO3RRNHCZIUT4ISAQ"` is a valid Algorand address. #### Interface `SignedTxnStr` `SignedTxnStr` is the base64 encoding of the canonical msgpack encoding of a `SignedTxn` object, as defined in the Algorand specifications (for Algorand version 2.5.5, see the corresponding section of the specs).

```typescript
export type SignedTxnStr = string;
```

#### Interface `MultisigMetadata` A `MultisigMetadata` object specifies the parameters of an Algorand multisig address.

```typescript
export interface MultisigMetadata {
  /**
   * Multisig version.
   */
  version: number;
  /**
   * Multisig threshold value. Authorization requires a subset of signatures,
   * equal to or greater than the threshold value.
   */
  threshold: number;
  /**
   * List of Algorand addresses of possible signers for this
   * multisig. Order is important.
   */
  addrs: AlgorandAddress[];
}
```

* `version` should always be 1. * `threshold` should be between 1 and the length of `addrs`.
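The two constraints above can be checked mechanically. A minimal self-contained sketch (the helper name `isValidMultisigMetadata` is ours, not part of the ARC; the interface is repeated here so the snippet stands alone):

```typescript
// Minimal validity check for a MultisigMetadata object, per the
// constraints above: version must be 1, and threshold must lie
// between 1 and the number of addresses.
interface MultisigMetadata {
  version: number;
  threshold: number;
  addrs: string[];
}

function isValidMultisigMetadata(msig: MultisigMetadata): boolean {
  return (
    msig.version === 1 &&
    Number.isInteger(msig.threshold) &&
    msig.threshold >= 1 &&
    msig.threshold <= msig.addrs.length
  );
}

const example: MultisigMetadata = {
  version: 1,
  threshold: 2,
  addrs: ["ADDR_A", "ADDR_B", "ADDR_C"], // placeholders, not real addresses
};
console.log(isValidMultisigMetadata(example)); // true
```

A wallet receiving an `msig` field that fails such a check would reject the request (see the error standards below).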
> Interface originally from github.com/algorand/js-algorand-sdk/blob/e07d99a2b6bd91c4c19704f107cfca398aeb9619/src/types/multisig.ts, where `string` has been replaced by `AlgorandAddress`. #### Interface `WalletTransaction` A `WalletTransaction` object represents a transaction to be signed by a wallet.

```typescript
export interface WalletTransaction {
  /**
   * Base64 encoding of the canonical msgpack encoding of a Transaction.
   */
  txn: string;
  /**
   * Optional authorized address used to sign the transaction when the account
   * is rekeyed. Also called the signer/sgnr.
   */
  authAddr?: AlgorandAddress;
  /**
   * Multisig metadata used to sign the transaction
   */
  msig?: MultisigMetadata;
  /**
   * Optional list of addresses that must sign the transactions
   */
  signers?: AlgorandAddress[];
  /**
   * Optional base64 encoding of the canonical msgpack encoding of a
   * SignedTxn corresponding to txn, when signers=[]
   */
  stxn?: SignedTxnStr;
  /**
   * Optional message explaining the reason of the transaction
   */
  message?: string;
  /**
   * Optional message explaining the reason of this group of transaction
   * Field only allowed in the first transaction of a group
   */
  groupMessage?: string;
}
```

#### Interface `SignTxnsOpts` A `SignTxnsOpts` object specifies optional parameters of the `signTxns` function:

```typescript
export type SignTxnsOpts = {
  /**
   * Optional message explaining the reason of the group of transactions
   */
  message?: string;
}
```

#### Error Interface `SignTxnsError` In case of error, the `signTxns` function **MUST** throw a `SignTxnsError` object

```typescript
interface SignTxnsError extends Error {
  code: number;
  data?: any;
}
```

where: * `message`: * **MUST** be a human-readable string * **SHOULD** adhere to the specifications in the Error Standards section below * `code`: * **MUST** be an integer number * **MUST** adhere to the specifications in the Error Standards section below * `data`: * **SHOULD** contain any other useful information about the error > Inspired from
github.com/ethereum/EIPs/blob/master/EIPS/eip-1193.md

### Error Standards

| Status Code | Name                  | Description                                                                 |
| ----------- | --------------------- | --------------------------------------------------------------------------- |
| 4001        | User Rejected Request | The user rejected the request.                                              |
| 4100        | Unauthorized          | The requested operation and/or account has not been authorized by the user. |
| 4200        | Unsupported Operation | The wallet does not support the requested operation.                        |
| 4201        | Too Many Transactions | The wallet does not support signing that many transactions at a time.       |
| 4202        | Uninitialized Wallet  | The wallet was not initialized properly beforehand.                         |
| 4300        | Invalid Input         | The input provided is invalid.                                              |

### Wallet-specific extensions

Wallets **MAY** use specific extension fields in `WalletTransaction` and in `SignTxnsOpts`. These fields must start with `_walletName`, where `walletName` is the name of the wallet. Wallet designers **SHOULD** ensure that their wallet name is not already used.

> Example of a wallet-specific field in `opts` (for the wallet `theBestAlgorandWallet`): `_theBestAlgorandWalletIcon` for displaying an icon related to the transactions.

Wallet-specific extensions **MUST** be designed such that a wallet not understanding them would not provide a lower security level.

> Example of a forbidden wallet-specific field in `WalletTransaction`: `_theWorstAlgorandWalletDisable`, which would require that the transaction not be signed. This is dangerous for security, as any signed transaction may leak and be committed by an attacker. Therefore, a dApp should never submit transactions that must not be signed, since a wallet that does not support this extension may still sign them.

### Semantic and Security Requirements

The call `signTxns(txns, opts)` **MUST** either throw an error or return an array `ret` of the same length as the `txns` array:
1. If `txns[i].signers` is an empty array, the wallet **MUST NOT** sign the transaction `txns[i]`, and:
   * if `txns[i].stxn` is not present, `ret[i]` **MUST** be set to `null`.
   * if `txns[i].stxn` is present and is a valid `SignedTxnStr` with the underlying transaction exactly matching `txns[i].txn`, `ret[i]` **MUST** be set to `txns[i].stxn`. (See the section on the semantic of `WalletTransaction` for the exact requirements on `txns[i].stxn`.)
   * otherwise, the wallet **MUST** throw a `4300` error.
2. Otherwise, the wallet **MUST** sign the transaction `txns[i].txn` and `ret[i]` **MUST** be set to the corresponding `SignedTxnStr`.

Note that if any transaction `txns[i]` that should be signed (i.e., where `txns[i].signers` is not an empty array) cannot be signed for any reason, the wallet **MUST** throw an error.

#### Terminology: Validation, Warnings, Fields

All the field names below are the ones used in the Algorand specs. Fields of the actual transaction are prefixed with `txn.` (as opposed to fields of the `WalletTransaction`, such as `signers`). For example, the sender of a transaction is `txn.Sender`.

**Rejecting** means throwing a `4300` error.

Strong warning / warning / weak warning / informational messages are different levels of alerts. Strong warnings **MUST** be displayed in such a way that the user cannot miss their importance.

#### Semantic of `WalletTransaction`

* `txn`:
  * Must be the base64 encoding of the canonical msgpack encoding of a `Transaction` object as defined in the Algorand specs. For Algorand version 2.5.5, see the corresponding section of the specs.
  * If `txn` is not a base64 string or cannot be decoded into a `Transaction` object, the wallet **MUST** reject.
* `authAddr`:
  * The wallet **MAY** not support this field. In that case, it **MUST** throw a `4200` error.
  * If specified, it must be a valid Algorand address. If this is not the case, the wallet **MUST** reject.
  * If specified and supported, the wallet **MUST** sign the transaction using this authorized address *even if it sees that the sender address `txn.Sender` was not rekeyed to `authAddr`*. This is because the sender may be rekeyed before the transaction is committed. The wallet **SHOULD** display an informational message.
* `msig`:
  * The wallet **MAY** not support this field. In that case, it **MUST** throw a `4200` error.
  * If specified, it must be a valid `MultisigMetadata` object. If this is not the case, the wallet **MUST** reject.
  * If specified and supported, the wallet **MUST** verify that `msig` matches `authAddr` (if `authAddr` is specified and supported) or the sender address `txn.Sender` (otherwise). The wallet **MUST** reject if this is not the case.
  * If specified and supported and if `signers` is not specified, the wallet **MUST** return a `SignedTxn` with all the subsigs that it can provide and that the wallet user agrees to provide. If the wallet can sign more subsigs than the requested threshold (`msig.threshold`), it **MAY** provide only `msig.threshold` subsigs. It is also possible that the wallet cannot provide at least `msig.threshold` subsigs (either because the user prevented signing with some keys or because the wallet does not know enough keys). In that case, the wallet just provides the subsigs it can. However, the wallet **MUST** provide at least one subsig or throw an error.
* `signers`:
  * If specified and not a list of valid Algorand addresses, the wallet **MUST** reject.
  * If `signers` is an empty array, the transaction is for informational purposes only and the wallet **SHALL NOT** sign it, even if it can (e.g., it knows the secret key of the sender address).
  * If `signers` is an array with more than one Algorand address:
    * The wallet **MUST** reject if `msig` is not specified.
    * The wallet **MUST** reject if `signers` is not a subset of `msig.addrs`.
    * The wallet **MUST** try to return a `SignedTxn` with all the subsigs corresponding to `signers` signed. If it cannot, it **SHOULD** throw a `4001` error. Note that this is different from when `signers` is not provided, where the signing is only “best effort”.
  * If `signers` is an array with a single Algorand address:
    * If `msig` is specified, the same rules apply as when `signers` is an array with more than one Algorand address.
    * If `authAddr` is specified but `msig` is not, the wallet **MUST** reject if `signers[0]` is not equal to `authAddr`.
    * If neither `authAddr` nor `msig` is specified, the wallet **MUST** reject if `signers[0]` is not the sender address `txn.Sender`.
    * In all cases, the wallet **MUST** only try to provide signatures for `signers[0]`. In particular, if the sender address `txn.Sender` was rekeyed or is a multisig and `authAddr` and `msig` are not specified, then the wallet **MUST** reject.
* `stxn`:
  * If specified and `signers` is not the empty array, the wallet **MUST** reject.
  * If specified:
    * It must be a valid `SignedTxnStr`. The wallet **MUST** reject if this is not the case.
    * The wallet **MUST** reject if the field `txn` inside the `SignedTxn` object does not exactly match the `Transaction` object in `txn`.
    * The wallet **MAY** not check whether the other fields of the `SignedTxn` are valid. In particular, it **MAY** accept `stxn` even in the following cases: it contains an invalid signature `sig`; it contains both a signature `sig` and a logicsig `lsig`; it contains a logicsig `lsig` that always rejects.
* `message`:
  * The wallet **MAY** decide to never print the message, to only print the first characters, or to make any changes to the message that may be used to ensure a higher level of security. The wallet **MUST** be designed to ensure that the message cannot easily be used to trick the user into performing an incorrect action.
In particular, if displayed, the message must appear in an area that is easily and clearly identifiable as not trusted by the wallet.
  * The wallet **MUST** prevent HTML/JS injection and must only display plaintext messages.
* `groupMessage` obeys the same rules as `message`, except that it is a message common to all the transactions of the group containing the current transaction. In addition, the wallet **MUST** reject if `groupMessage` is provided for a transaction that is not the first transaction of the group. Note that `txns` may contain multiple groups of transactions, one after the other (see the Group Validation section for details).

##### Particular Case without `signers`, `msig`, or `authAddr`

When neither `signers`, nor `msig`, nor `authAddr` is specified, the wallet **MAY** still sign the transaction using a multisig or a different authorized address than the sender address `txn.Sender`. It may also sign the transaction using a logicsig. However, in all these cases, the resulting `SignedTxn` **MUST** be such that it can be committed to the blockchain (assuming the transaction itself can be executed and that the account is not rekeyed in the meantime). In particular, if a multisig is used, the number of subsigs provided must be at least equal to the multisig threshold. This is different from the case where `msig` is provided, where the wallet **MAY** provide fewer subsigs than the threshold.

#### Semantic of `SignTxnsOpts`

* `message` obeys the same rules as `WalletTransaction.message`, except that it is a message common to all transactions.

#### General Validation

The goal is to ensure the highest level of security for the end-user, even when the transaction is generated by a malicious dApp. Every input must be validated. Validation:

* **SHALL NOT** rely on TypeScript typing, as this can be bypassed. Types **MUST** be manually verified.
* **SHALL NOT** assume the Algorand SDK does any validation, as the Algorand SDK is not meant to receive maliciously generated inputs. Furthermore, the SDK allows for dangerous transactions (such as rekeying).

The only exception to the above rule is for de-serialization of transactions. Once de-serialized, every field of the transaction must be manually validated.

> Note: We will be working with the algosdk team to provide helper functions for validation in some cases and to ensure the security of the de-serialization of potentially malicious transactions.

If there is any unexpected field at any level (either in the transaction itself or in the `WalletTransaction` object), the wallet **MUST** immediately reject. The only exception is for the “wallet-specific extension” fields (see above).

#### Group Validation

The wallet should support the following two use cases:

1. (**REQUIRED**) `txns` is a non-empty array of transactions that belong to the same group of transactions. In other words, either `txns` is an array of a single transaction with a zero group ID (`txn.Group`), or `txns` is an array of one or more transactions with the *same* non-zero group ID. The wallet **MUST** reject if the transactions do not match their group ID. (The dApp must provide the transactions in the order defined by the group ID.)

   > An early draft of this ARC required that the size of a group of transactions be greater than 1 but, since the Algorand protocol supports groups of size 1, this requirement has been changed so dApps don’t have to have special cases for single transactions and can always send a group to the wallet.

2. (**OPTIONAL**) `txns` is a concatenation of `txns` arrays of transactions of type 1:
   * All transactions with the *same* non-zero group ID must be consecutive and must match their group ID. The wallet **MUST** reject if the above is not satisfied.
   * The wallet UI **MUST** be designed so that it is clear to the user when transactions are grouped (i.e., form an atomic transfer) and when they are not. It **SHOULD** provide very clear explanations that are understandable by beginner users, so that they cannot easily be tricked into signing what they believe is an atomic exchange while it is in actuality a one-sided payment.

If `txns` does not match either of the formats above, the wallet **MUST** reject.

The wallet **MAY** choose to restrict the maximum size of the array `txns`. The maximum size allowed by a wallet **MUST** be at least the maximum size of a group of transactions in the current Algorand protocol on MainNet. (When this ARC was published, this maximum size was 16.) If the wallet rejects `txns` because of its size, it **MUST** throw a 4201 error.

An early draft of this API allowed signing single transactions in a group without providing the other transactions in the group. For security reasons, this use case is now deprecated and **SHALL NOT** be allowed in new implementations. Existing implementations may continue allowing single transactions to be signed if a very clear warning is displayed to the user. The warning **MUST** stress that signing the transaction may incur losses much higher than the amount of tokens indicated in the transaction. That is because potential future features of Algorand may later have such consequences (e.g., a signature of a transaction may actually authorize the full group under some circumstances).

#### Transaction Validation

##### Inputs that Must Be Systematically Rejected

* Transactions `WalletTransaction.txn` with fields that are not known by the wallet **MUST** be systematically rejected. In particular:
  * Every field **MUST** be validated.
  * Any extra field **MUST** systematically make the wallet reject.
  * This is to prevent any security issue in case of the introduction of new dangerous fields (such as `txn.RekeyTo` or `txn.CloseRemainderTo`).
* Transactions of an unknown type (field `txn.Type`) **MUST** be rejected.
* Transactions containing fields of a different transaction type (e.g., `txn.Receiver` in an asset transfer transaction) **MUST** be rejected.

##### Inputs that Warrant Display of Warnings

The wallet **MUST**:

* Display a strong warning message when signing a transaction with one of the following fields: `txn.RekeyTo`, `txn.CloseRemainderTo`, `txn.AssetCloseTo`. The warning message **MUST** clearly explain the risks. No warning message is necessary for transactions that are provided for informational purposes in a group and are not signed (i.e., transactions with `signers=[]`).
* Display a strong warning message in case the transaction is only valid in the future (the first valid round is after the current round plus some number, e.g., 500). This is to prevent surprises where a user forgot that they signed a transaction and the dApp maliciously plays it later.
* Display a warning message when the fee is too high. The threshold **MAY** depend on the load of the Algorand network.
* Display a weak warning message when signing a transaction that can increase the minimum balance in a way that may be hard or impossible to undo (asset creation or application creation).
* Display an informational message when signing a transaction that can increase the minimum balance in a way that can be undone (opt-in to an asset or an application).

The above is for version 2.5.6 of the Algorand software. Future consensus versions may require additional checks. Before supporting any new transaction field or type (for a new version of the Algorand blockchain), the wallet authors **MUST** perform a careful security analysis.

#### Genesis Validation

The wallet **MUST** check that the genesis hash (field `txn.GenesisHash`) and the genesis ID (field `txn.GenesisID`, if provided) match the network used by the wallet. If the wallet supports multiple networks, it **MUST** make clear to the user which network is used.
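As a sketch of this genesis check, the helper below compares the genesis fields of a decoded transaction against the network the wallet is configured for. The `KNOWN_NETWORKS` values are the commonly published MainNet/TestNet genesis parameters (the TestNet values also appear in the reference implementation in this document) and should be verified independently; all names here are illustrative.

```typescript
// Illustrative genesis-field check; names are not part of the ARC.

interface GenesisFields {
  genesisHash: string;  // base64 genesis hash, always present in a transaction
  genesisID?: string;   // optional genesis ID
}

// Commonly published network parameters; verify against your target network.
const KNOWN_NETWORKS: Record<string, GenesisFields> = {
  mainnet: {
    genesisHash: "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
    genesisID: "mainnet-v1.0",
  },
  testnet: {
    genesisHash: "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=",
    genesisID: "testnet-v1.0",
  },
};

function matchesNetwork(txn: GenesisFields, network: string): boolean {
  const expected = KNOWN_NETWORKS[network];
  if (expected === undefined) return false;
  if (txn.genesisHash !== expected.genesisHash) return false;
  // GenesisID is optional in transactions, but if provided it must match too.
  return txn.genesisID === undefined || txn.genesisID === expected.genesisID;
}
```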
#### UI

In general, the UI **MUST** ensure that the user cannot be confused by the dApp into performing dangerous operations. In particular, the wallet **MUST** make clear to the user what is part of the wallet UI and what is provided by the dApp. Special care **MUST** be taken when:

* Displaying the `message` field of `WalletTransaction` and of `SignTxnsOpts`.
* Displaying any arbitrary field of transactions, including the note field (`txn.Note`), genesis ID (`txn.GenesisID`), and asset configuration fields (`txn.AssetName`, `txn.UnitName`, `txn.URL`, …).
* Displaying messages hidden in fields that are expected to be base32/base64 strings or addresses.

Using a different font for those fields **MAY** be an option to prevent such confusion. Usual precautions **MUST** be taken regarding the fact that the inputs are provided by an untrusted dApp (e.g., preventing code injection and so on).

## Rationale

The API was designed to:

* Be easily implementable by all Algorand wallets.
* Rely on the official Algorand SDK and specs.
* Only use types supported by JSON to simplify interoperability (avoiding `Uint8Array`, for example) and to allow easy serialization/deserialization.
* Be easy to extend to support future features of Algorand.
* Be secure by design: making it hard for malicious dApps to cause the wallet to sign a transaction without the user understanding the implications of their signature.

The API was not designed to:

* Directly support SDK objects. SDK objects must first be serialized.
* Support listing accounts, connecting to the wallet, sending transactions, …
* Support signing of logic signatures.

The last two items are expected to be defined in other documents.

### Rationale for Group Validation

The requirements around group validation have been designed to prevent the following attack. The dApp pretends to buy 1 Algo for 10 USDC, but instead creates an atomic transfer with the user sending 1 Algo to the dApp and the dApp sending 0.01 USDC to the user.
However, it sends to the wallet transactions for 1 Algo and 10 USDC. If the wallet does not verify that this is a valid group, it will make the user believe that they are signing the correct atomic transfer.

## Reference Implementation

> This section is non-normative.

### Sign a Group of Two Transactions

Here is an example in Node.js of how to use the wallet interface to sign a group of two transactions and send them to the network. The function `signTxns` is assumed to be a method of `algorandWallet`.

> Note: We will be working with the algosdk team to add two helper functions to facilitate the use of the wallet. The current idea is to add `Transaction.toBase64`, which does the same as `Transaction.toByte` except it outputs a base64 string, and `Algodv2.sendBase64RawTransactions`, which does the same as `Algodv2.sendRawTransactions` except it takes an array of base64 strings instead of an array of `Uint8Array`.

```typescript
import algosdk from 'algosdk';
import * as algorandWallet from './wallet';
import {Buffer} from "buffer";

const firstRound = 13809129;
const suggestedParams = {
  flatFee: false,
  fee: 0,
  firstRound: firstRound,
  lastRound: firstRound + 1000,
  genesisID: 'testnet-v1.0',
  genesisHash: 'SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI='
};

const txn1 = algosdk.makePaymentTxnWithSuggestedParamsFromObject({
  from: "37MSZIPXHGNCKTDJTJDSYIOF4C57JAL2FTKESD2HBVELXYHEIXVZ4JVGFU",
  to: "PKSE2TARC645D4O2IO6QNWVW6PLJDTR6IOKNKMGSHQL7JIJHNGNFVISUHI",
  amount: 1000,
  suggestedParams,
});
const txn2 = algosdk.makePaymentTxnWithSuggestedParamsFromObject({
  from: "37MSZIPXHGNCKTDJTJDSYIOF4C57JAL2FTKESD2HBVELXYHEIXVZ4JVGFU",
  to: "PKSE2TARC645D4O2IO6QNWVW6PLJDTR6IOKNKMGSHQL7JIJHNGNFVISUHI",
  amount: 2000,
  suggestedParams,
});

const txs = [txn1, txn2];
algosdk.assignGroupID(txs);

const txn1B64 = Buffer.from(txn1.toByte()).toString("base64");
const txn2B64 = Buffer.from(txn2.toByte()).toString("base64");

(async () => {
  // Both transactions of the group are signed by the wallet.
  const signedTxs = await algorandWallet.signTxns([
    {txn: txn1B64},
    {txn: txn2B64}
  ]);
  const algodClient = new algosdk.Algodv2("", "...", "");
  await algodClient.sendRawTransaction(
    signedTxs.map(stxB64 => Buffer.from(stxB64!, "base64"))
  ).do();
})();
```

## Security Considerations

None.

## Copyright

Copyright and related rights waived via .
# Algorand Transaction Note Field Conventions
> Conventions for encoding data in the note field at application-level
## Abstract

The goal of these conventions is to make it simpler for block explorers and indexers to parse the data in note fields and to filter transactions of certain dApps.

## Specification

Note fields should be formatted as follows:

for dApps:

```plaintext
<dapp-name>:<data-format><data>
```

for ARCs:

```plaintext
arc<arc-number>:<data-format><data>
```

where:

* `<dapp-name>` is the name of the dApp:
  * Regexp to satisfy: `[a-zA-Z0-9][a-zA-Z0-9_/@.-]{4,31}`

    In other words, a name should:
    * only contain alphanumerical characters or `_`, `/`, `-`, `@`, `.`
    * start with an alphanumerical character
    * be at least 5 characters long
    * be at most 32 characters long
  * Names starting with `a/` and `af/` are reserved for Algorand protocol and Algorand Foundation use.
* `<arc-number>` is the number of the ARC:
  * Regexp to satisfy: `\b(0|[1-9]\d*)\b`

    In other words, an arc-number should:
    * only contain digits, without any padding
* `<data-format>` is one of the following:
  * `m`: MsgPack
  * `j`: JSON
  * `b`: arbitrary bytes
  * `u`: utf-8 string
* `<data>` is the actual data in the format specified by `<data-format>`

**WARNING**: Any user can create transactions with arbitrary data and may impersonate other dApps. In particular, the fact that a note field starts with `<dapp-name>` does not guarantee that it indeed comes from that dApp. The value `<dapp-name>` cannot be relied upon to ensure provenance and validity of the `<data>`.

**WARNING**: Any user can create transactions with arbitrary data, including ARC numbers, which may not correspond to the intended standard. An ARC number included in a note field does not ensure compliance with the corresponding standard. The value of the ARC number cannot be relied upon to ensure the provenance and validity of the `<data>`.

### Versioning

This document suggests the following convention for the names of dApps with multiple versions: `mydapp/v1`, `mydapp/v2`, … However, dApps are free to use any other convention and may include the version inside the `<data>` part instead of the `<dapp-name>` part.
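A minimal sketch of building and parsing notes that follow this convention (the helper names are illustrative, not part of the convention):

```typescript
// Illustrative helpers for the <dapp-name>:<data-format><data> convention.

const DAPP_NAME = /^[a-zA-Z0-9][a-zA-Z0-9_/@.-]{4,31}$/; // name rules above
const FORMATS = new Set(["m", "j", "b", "u"]);            // msgpack/json/bytes/utf-8

function buildNote(dappName: string, format: string, data: string): string {
  if (!DAPP_NAME.test(dappName)) throw new Error("invalid dApp name");
  if (!FORMATS.has(format)) throw new Error("unknown data format");
  return `${dappName}:${format}${data}`;
}

// Split a note into its parts; returns null when the prefix does not parse.
function parseNote(
  note: string
): { dappName: string; format: string; data: string } | null {
  const m = note.match(/^([a-zA-Z0-9][a-zA-Z0-9_/@.-]{4,31}):([mjbu])([\s\S]*)$/);
  if (m === null) return null;
  return { dappName: m[1], format: m[2], data: m[3] };
}
```

Remember that, as the warnings above stress, a parsed prefix proves nothing about provenance; it is only a filtering convenience.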
## Rationale

The goal of these conventions is to facilitate displaying notes by block explorers and filtering of transactions by notes. However, the note field **cannot be trusted**, as any user can create transactions with arbitrary note fields. An external mechanism needs to be used to ensure the validity and provenance of the data. For example:

* Some dApps may only send transactions from a small set of accounts controlled by the dApps. In that case, the sender of the transaction should be checked.
* Some dApps may fund escrow accounts created from some template TEAL script. In that case, the note field may contain the template parameters, and the escrow account address should be checked to correspond to the resulting TEAL script.
* Some dApps may include a signature in the `<data>` part of the note field. The `<data>` may be an MsgPack encoding of a structure of the form:

  ```json
  {
    "d": ...,  // actual data
    "sig": ... // signature of the actual data (encoded using MsgPack)
  }
  ```

  In that case, the signature should be checked.

The conventions were designed to support multiple use cases of the notes. Some dApps may just record data on the blockchain without using any smart contracts. Such dApps typically would use JSON or MsgPack encoding. On the other hand, dApps that need to read note fields from smart contracts most likely require easier-to-parse data formats, which would most likely consist of application-specific byte strings.

Since `<dapp-name>:` is a prefix of the note, transactions for a given dApp can easily be filtered by the indexer. The restrictions on dApp names were chosen to allow most usual names while avoiding any encoding or displaying issues. The maximum length (32) matches the maximum length of an ASA asset name on Algorand, while the minimum length (5) has been chosen to limit collisions.

## Reference Implementation

> This section is non-normative.

Consider ARC-20, which provides information about a Smart ASA’s Application.
Here is a potential note indicating that the Application ID is 123:

* JSON without version:

  ```plaintext
  arc20:j{"application-id":123}
  ```

Consider a dApp named `algoCityTemp` that stores temperatures from cities on the blockchain. Here are some potential notes indicating that Singapore’s temperature is 35 degrees Celsius:

* JSON without version:

  ```plaintext
  algoCityTemp:j{"city":"Singapore","temp":35}
  ```

* JSON with version in the name:

  ```plaintext
  algoCityTemp/v1:j{"city":"Singapore","temp":35}
  ```

* JSON with version in the name with index lookup:

  ```plaintext
  algoCityTemp/v1/35:j{"city":"Singapore","temp":35}
  ```

* JSON with version in the data:

  ```plaintext
  algoCityTemp:j{"city":"Singapore","temp":35,"ver":1}
  ```

* UTF-8 string without version:

  ```plaintext
  algoCityTemp:uSingapore|35
  ```

* Bytes where the temperature is encoded as a signed 1-byte integer in the first position:

  ```plaintext
  algoCityTemp:b#Singapore
  ```

  (`#` is the ASCII character for 35.)

* MsgPack corresponding to the JSON example with version in the name. The string is encoded in base64 as it contains characters that cannot be printed in this document. But the note should contain the actual bytes and not the base64 encoding of them:

  ```plaintext
  YWxnb0NpdHlUZW1wL3YxOoKkY2l0ealTaW5nYXBvcmWkdGVtcBg=
  ```

## Security Considerations

> Not Applicable

## Copyright

Copyright and related rights waived via .
# Conventions for Fungible/Non-Fungible Tokens
> Parameters Conventions for Algorand Standard Assets (ASAs) for fungible tokens and non-fungible tokens (NFTs).
## Abstract

The goal of these conventions is to make it simpler for block explorers, wallets, exchanges, marketplaces, and more generally, client software to display the properties of a given ASA.

## Specification

The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in RFC 2119.

> Comments like this are non-normative.

An ASA has an associated JSON Metadata file, formatted as specified below, that is stored off-chain.

### ASA Parameters Conventions

The ASA parameters should follow the following conventions:

* *Unit Name* (`un`): no restriction, but **SHOULD** be related to the name in the JSON Metadata file.
* *Asset Name* (`an`): **MUST** be:
  * (**NOT RECOMMENDED**) either exactly `arc3` (without any space),
  * (**NOT RECOMMENDED**) or `<name>@arc3`, where `<name>` **SHOULD** be closely related to the name in the JSON Metadata file:
    * If the resulting asset name can fit the *Asset Name* field, then `<name>` **SHOULD** be equal to the name in the JSON Metadata file.
    * If the resulting asset name cannot fit the *Asset Name* field, then `<name>` **SHOULD** be a reasonably shortened version of the name in the JSON Metadata file.
  * (**RECOMMENDED**) or `<name>`, where `<name>` is defined as above. In this case, the Asset URL **MUST** end with `#arc3`.
* *Asset URL* (`au`): a URI pointing to a JSON Metadata file.
  * This URI, as well as any URI in the JSON Metadata file:
    * **SHOULD** be persistent and allow downloading the JSON Metadata file forever.
    * **MAY** contain the string `{id}`. If `{id}` exists in the URI, clients **MUST** replace it with the asset ID in decimal form. The rules below apply after such a replacement.
    * **MUST** follow RFC 3986 and **MUST NOT** contain any whitespace character.
    * **SHOULD** use one of the following URI schemes (for compatibility and security): *https* and *ipfs*:
      * When the file is stored on IPFS, the `ipfs://...` URI **SHOULD** be used. IPFS Gateway URIs (such as `https://ipfs.io/ipfs/...`) **SHOULD NOT** be used.
    * **SHOULD NOT** use the following URI scheme: *http* (due to security concerns).
    * **MUST** be such that the returned resource includes the CORS header

      ```plaintext
      Access-Control-Allow-Origin: *
      ```

      if the URI scheme is *https*.

      > This requirement is to ensure that client JavaScript can load all resources pointed to by *https* URIs inside an ARC-3 ASA.

    * **MAY** be a relative URI when inside the JSON Metadata file. In that case, the relative URI is relative to the Asset URL. The Asset URL **SHALL NOT** be relative. A relative URI **MUST NOT** contain the character `:`. Clients **MUST** consider a URI as relative if and only if it does not contain the character `:`.
  * If the Asset Name is neither `arc3` nor of the form `<name>@arc3`, then the Asset URL **MUST** end with `#arc3`.
  * If the Asset URL ends with `#arc3`, clients **MUST** remove `#arc3` when linking to the URL. When displaying the URL, they **MAY** display `#arc3` in a different style (e.g., a lighter color).
  * If the Asset URL ends with `#arc3`, the full URL with `#arc3` **SHOULD** be valid and point to the same resource as the URL without `#arc3`.

    > This recommendation is to ensure backward compatibility with wallets that do not support ARC-3.

* *Asset Metadata Hash* (`am`):
  * If the JSON Metadata file specifies extra metadata `e` (property `extra_metadata`), then `am` is defined as:

    ```plain
    am = SHA-512/256("arc0003/am" || SHA-512/256("arc0003/amj" || content of JSON Metadata file) || e)
    ```

    where `||` denotes concatenation and SHA-512/256 is defined in NIST FIPS 180-4. The above definition of `am` **MUST** be used when the property `extra_metadata` is specified, even if its value `e` is the empty string.
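The formula above can be sketched in TypeScript for Node.js as follows. The helper names are illustrative; the prefixes `arc0003/am` and `arc0003/amj` come from the formula, and this assumes a Node.js build whose OpenSSL exposes the `sha512-256` digest.

```typescript
// Illustrative computation of the Asset Metadata Hash (am) when
// extra_metadata is present; helper names are not part of the ARC.
import { createHash } from "crypto";

function sha512_256(data: Buffer): Buffer {
  return createHash("sha512-256").update(data).digest();
}

// `json` is the raw bytes of the JSON Metadata file; `extraB64` is the
// base64 value of its extra_metadata property.
function assetMetadataHash(json: Buffer, extraB64: string): Buffer {
  const e = Buffer.from(extraB64, "base64");
  // inner = SHA-512/256("arc0003/amj" || content of JSON Metadata file)
  const inner = sha512_256(Buffer.concat([Buffer.from("arc0003/amj"), json]));
  // am = SHA-512/256("arc0003/am" || inner || e)
  return sha512_256(Buffer.concat([Buffer.from("arc0003/am"), inner, e]));
}
```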
Python code to compute the hash and a full example are provided below (see “Sample with Extra Metadata”).

> Extra metadata can be used to store data about the asset that needs to be accessed from a smart contract. The smart contract would not be able to directly read the metadata. But, if provided with the hash of the JSON Metadata file and with the extra metadata `e`, the smart contract can check that `e` is indeed valid.

* If the JSON Metadata file does not specify the property `extra_metadata`, then `am` is defined as the SHA-256 digest of the JSON Metadata file, as a 32-byte string.

There are no requirements regarding the manager account of the ASA, or its reserve account, freeze account, or clawback account.

> Clients recognize ARC-3 ASAs by looking at the Asset Name and Asset URL. If the Asset Name is `arc3` or ends with `@arc3`, or if the Asset URL ends with `#arc3`, the ASA is to be considered an ARC-3 ASA.

#### Pure and Fractional NFTs

An ASA is said to be a *pure non-fungible token* (*pure NFT*) if and only if it has the following properties:

* *Total Number of Units* (`t`) **MUST** be 1.
* *Number of Digits after the Decimal Point* (`dc`) **MUST** be 0.

An ASA is said to be a *fractional non-fungible token* (*fractional NFT*) if and only if it has the following properties:

* *Total Number of Units* (`t`) **MUST** be a power of 10 larger than 1: 10, 100, 1000, …
* *Number of Digits after the Decimal Point* (`dc`) **MUST** be equal to the logarithm in base 10 of the total number of units.

> In other words, the total supply of the ASA is exactly 1.

### JSON Metadata File Schema

> The JSON Metadata file schema follows the Ethereum ERC-1155 Metadata URI JSON Schema with the following main differences:
>
> * Support for integrity fields for any file pointed to by any URI field, as well as for localized JSON Metadata files.
> * Support for mimetype fields for any file pointed to by any URI field.
> * Support for extra metadata that is hashed as part of the Asset Metadata Hash (`am`) of the ASA. > * Adding the fields `external_url`, `background_color`, `animation_url` used by . Similarly to ERC-1155, the URI does support ID substitution. If the URI contains `{id}`, clients **MUST** substitute it by the asset ID in *decimal*. > Contrary to ERC-1155, the ID is represented in decimal (instead of hexadecimal) to match what current APIs and block explorers use on the Algorand blockchain. The JSON Metadata schema is as follows: ```json { "title": "Token Metadata", "type": "object", "properties": { "name": { "type": "string", "description": "Identifies the asset to which this token represents" }, "decimals": { "type": "integer", "description": "The number of decimal places that the token amount should display - e.g. 18, means to divide the token amount by 1000000000000000000 to get its user representation." }, "description": { "type": "string", "description": "Describes the asset to which this token represents" }, "image": { "type": "string", "description": "A URI pointing to a file with MIME type image/* representing the asset to which this token represents. Consider making any images at a width between 320 and 1080 pixels and aspect ratio between 1.91:1 and 4:5 inclusive." }, "image_integrity": { "type": "string", "description": "The SHA-256 digest of the file pointed by the URI image. The field value is a single SHA-256 integrity metadata as defined in the W3C subresource integrity specification (https://w3c.github.io/webappsec-subresource-integrity)." }, "image_mimetype": { "type": "string", "description": "The MIME type of the file pointed by the URI image. MUST be of the form 'image/*'." }, "background_color": { "type": "string", "description": "Background color do display the asset. MUST be a six-character hexadecimal without a pre-pended #." }, "external_url": { "type": "string", "description": "A URI pointing to an external website presenting the asset." 
}, "external_url_integrity": { "type": "string", "description": "The SHA-256 digest of the file pointed by the URI external_url. The field value is a single SHA-256 integrity metadata as defined in the W3C subresource integrity specification (https://w3c.github.io/webappsec-subresource-integrity)." }, "external_url_mimetype": { "type": "string", "description": "The MIME type of the file pointed by the URI external_url. It is expected to be 'text/html' in almost all cases." }, "animation_url": { "type": "string", "description": "A URI pointing to a multi-media file representing the asset." }, "animation_url_integrity": { "type": "string", "description": "The SHA-256 digest of the file pointed by the URI animation_url. The field value is a single SHA-256 integrity metadata as defined in the W3C subresource integrity specification (https://w3c.github.io/webappsec-subresource-integrity)." }, "animation_url_mimetype": { "type": "string", "description": "The MIME type of the file pointed by the URI animation_url. If the MIME type is not specified, clients MAY guess the MIME type from the file extension or MAY decide not to display the asset at all. It is STRONGLY RECOMMENDED to include the MIME type." }, "properties": { "type": "object", "description": "Arbitrary properties (also called attributes). Values may be strings, numbers, objects, or arrays." }, "extra_metadata": { "type": "string", "description": "Extra metadata in base64. If the field is specified (even if it is an empty string) the asset metadata (am) of the ASA is computed differently than if it is not specified." }, "localization": { "type": "object", "required": ["uri", "default", "locales"], "properties": { "uri": { "type": "string", "description": "The URI pattern to fetch localized data from. This URI should contain the substring `{locale}` which will be replaced with the appropriate locale value before sending the request."
}, "default": { "type": "string", "description": "The locale of the default data within the base JSON" }, "locales": { "type": "array", "description": "The list of locales for which data is available. These locales should conform to those defined in the Unicode Common Locale Data Repository (http://cldr.unicode.org/)." }, "integrity": { "type": "object", "patternProperties": { ".*": { "type": "string" } }, "description": "The SHA-256 digests of the localized JSON files (except the default one). The field name is the locale. The field value is a single SHA-256 integrity metadata as defined in the W3C subresource integrity specification (https://w3c.github.io/webappsec-subresource-integrity)." } } } } } ``` All the fields are **OPTIONAL**, but if provided, they **MUST** match the description in the JSON schema. The field `decimals` is **OPTIONAL**. If provided, it **MUST** match the ASA parameter `dc`. URI fields (`image`, `external_url`, `animation_url`, and `localization.uri`) in the JSON Metadata file are defined similarly to the Asset URL parameter `au`. However, contrary to the Asset URL, they **MAY** be relative (to the Asset URL). See Asset URL above. #### Integrity Fields Compared to ERC-1155, the JSON Metadata schema allows specifying the digests of the files pointed by any URI field. This is to ensure the integrity of all the files referenced by the ASA. Concretely, every URI field `xxx` is allowed to have an optional associated field `xxx_integrity` that specifies the digest of the file pointed by the URI. The digests are represented as a single SHA-256 integrity metadata as defined in the . Details on how to generate those digests can be found on the (where `sha384` or `384` are to be replaced by `sha256` and `256` respectively, as only SHA-256 is supported by this ARC). It is **RECOMMENDED** to specify all the `xxx_integrity` fields of all the `xxx` URI fields, except for `external_url_integrity` when it points to a potentially mutable website.
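The integrity metadata format can be reproduced in a few lines of Python. This is a sketch, assuming the referenced file's content has already been fetched as bytes; the helper name `sri_sha256` is illustrative, not part of any SDK:

```python
import base64
import hashlib

def sri_sha256(data: bytes) -> str:
    """Compute a W3C subresource-integrity string ("sha256-" followed by the
    base64 digest) for the raw bytes of a file referenced by a URI field."""
    digest = hashlib.sha256(data).digest()
    return "sha256-" + base64.b64encode(digest).decode("ascii")

# The digest of an empty file, matching the placeholder value used in the
# examples of this document:
print(sri_sha256(b""))  # sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
```

The resulting string is placed verbatim in the corresponding `xxx_integrity` field.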
Any field with a name ending with `_integrity` **MUST** match a corresponding field containing a URI to a file with a matching digest. For example, if the field `hello_integrity` is specified, the field `hello` **MUST** exist and **MUST** be a URI pointing to a file with a digest equal to the digest specified by `hello_integrity`. #### MIME Type Fields Compared to ERC-1155, the JSON Metadata schema allows specifying the MIME type of the files pointed by any URI field. This is to allow clients to display the resource appropriately without having to first query it to find out the MIME type. Concretely, every URI field `xxx` is allowed to have an optional associated field `xxx_mimetype` that specifies the MIME type of the file pointed by the URI. It is **STRONGLY RECOMMENDED** to specify all the `xxx_mimetype` fields of all the `xxx` URI fields, except for `external_url_mimetype` when it points to a website. If the MIME type is not specified, clients **MAY** guess the MIME type from the file extension or **MAY** decide not to display the asset at all. Clients **MUST NOT** rely on the `xxx_mimetype` fields from a security perspective and **MUST NOT** break or fail if the fields are incorrect (beyond not displaying the asset image or animation correctly). In particular, clients **MUST** take all necessary security measures to protect users against remote code execution or cross-site scripting attacks, even when the MIME type looks innocuous (like `image/png`). > The above restriction is to protect clients and users against malformed or malicious ARC-3 ASAs. Any field with a name ending with `_mimetype` **MUST** match a corresponding field containing a URI to a file with a matching MIME type. For example, if the field `hello_mimetype` is specified, the field `hello` **MUST** exist and **MUST** be a URI pointing to a file with a MIME type equal to the one specified by `hello_mimetype`.
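A client-side fallback along these lines can be sketched with Python's standard `mimetypes` module (the helper name `display_mimetype` is illustrative; a real client would add the security precautions described above):

```python
import mimetypes

def display_mimetype(uri, declared=None):
    """Prefer the declared xxx_mimetype field; otherwise guess from the file
    extension. Returning None signals the client's "do not display" option."""
    if declared:
        return declared
    guessed, _encoding = mimetypes.guess_type(uri)
    return guessed  # may be None: the client MAY decide not to display the asset

print(display_mimetype("mysong.png"))                  # image/png (guessed)
print(display_mimetype("mysong.png", "image/webp"))    # image/webp (declared)
```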
#### Localization If the JSON Metadata file contains a `localization` attribute, its content **MAY** be used to provide localized values for fields that need it. The `localization` attribute should be a sub-object with three **REQUIRED** attributes: `uri`, `default`, `locales`, and one **RECOMMENDED** attribute: `integrity`. If the string `{locale}` exists in any URI, it **MUST** be replaced with the chosen locale by all client software. > Compared to ERC-1155, the `localization` attribute contains an additional optional `integrity` field that specifies the digests of the localized JSON files. It is **RECOMMENDED** that `integrity` contains the digests of all the locales but the default one. #### Examples ##### Basic Example An example of an ARC-3 JSON Metadata file for a song follows. The `properties` object proposes some **SUGGESTED** formatting for token-specific display properties and metadata. ```json { "name": "My Song", "description": "My first and best song!", "image": "https://s3.amazonaws.com/your-bucket/song/cover/mysong.png", "image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", "image_mimetype": "image/png", "external_url": "https://mysongs.com/song/mysong", "animation_url": "https://s3.amazonaws.com/your-bucket/song/preview/mysong.ogg", "animation_url_integrity": "sha256-LwArA6xMdnFF3bvQjwODpeTG/RVn61weQSuoRyynA1I=", "animation_url_mimetype": "audio/ogg", "properties": { "simple_property": "example value", "rich_property": { "name": "Name", "value": "123", "display_value": "123 Example Value", "class": "emphasis", "css": { "color": "#ffffff", "font-weight": "bold", "text-decoration": "underline" } }, "array_property": { "name": "Name", "value": [1,2,3,4], "class": "emphasis" } } } ``` In the example, the `image` field **MAY** be the album cover, while the `animation_url` **MAY** be the full song or may just be a small preview.
In the latter case, the full song **MAY** be specified by three additional properties inside the `properties` field: ```json { ... "properties": { ... "file_url": "https://s3.amazonaws.com/your-bucket/song/full/mysong.ogg", "file_url_integrity": "sha256-7IGatqxLhUYkruDsEva52Ku43up6774yAmf0k98MXnU=", "file_url_mimetype": "audio/ogg" } } ``` An example of possible ASA parameters would be: * *Asset Unit*: `mysong` for example * *Asset Name*: `My Song` * *Asset URL*: `https://example.com/mypict#arc3` or `https://arweave.net/MAVgEMO3qlqe-qHNVs00qgwwbCb6FY2k15vJP3gBLW4#arc3` * *Metadata Hash*: the 32 bytes of the SHA-256 digest of the above JSON file * *Total Number of Units*: 100 * *Number of Digits after the Decimal Point*: 2 > IPFS URLs of the form `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT#arc3` may be used too but may cause issues with clients that do not support ARC-3 and that do not handle fragments in IPFS URLs. Example of alternative versions for *Asset Name* and *Asset URL*: * *Asset Name*: `My Song@arc3` or `arc3` * *Asset URL*: `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT` or `https://example.com/mypict` or `https://arweave.net/MAVgEMO3qlqe-qHNVs00qgwwbCb6FY2k15vJP3gBLW4` > These alternative versions are less recommended as they make the asset name harder to read for clients that do not support ARC-3. The above parameters define a fractional NFT with 100 shares. The JSON Metadata file **MAY** contain the field `decimals: 2`: ```json { ... "decimals": 2 } ``` ##### Example with Relative URI and IPFS > When using IPFS, it is convenient to bundle the JSON Metadata file with other files referenced by the JSON Metadata file.
In this case, because of circularity, it is necessary to use relative URIs. An example of an ARC-3 JSON Metadata file using IPFS and relative URIs is provided below: ```json { "name": "My Song", "description": "My first and best song!", "image": "mysong.png", "image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", "image_mimetype": "image/png", "external_url": "https://mysongs.com/song/mysong", "animation_url": "mysong.ogg", "animation_url_integrity": "sha256-LwArA6xMdnFF3bvQjwODpeTG/RVn61weQSuoRyynA1I=", "animation_url_mimetype": "audio/ogg" } ``` If the Asset URL is `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT/metadata.json`: * the `image` URI is `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT/mysong.png`. * the `animation_url` URI is `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT/mysong.ogg`. ##### Example with Extra Metadata and `{id}` An example of an ARC-3 JSON Metadata file with extra metadata and `{id}` is provided below. ```json { "name": "My Picture", "description": "Lorem ipsum...", "image": "https://s3.amazonaws.com/your-bucket/images/{id}.png", "image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", "image_mimetype": "image/png", "external_url": "https://mysongs.com/song/{id}", "extra_metadata": "iHcUslDaL/jEM/oTxqEX++4CS8o3+IZp7/V5Rgchqwc=" } ``` The possible ASA parameters are the same as with the basic example, except for the metadata hash that would be the 32-byte string corresponding to the base64 string `xsmZp6lGW9ktTWAt22KautPEqAmiXxow/iIuJlRlHIg=`.
> For completeness, we provide below a Python program that computes this metadata hash: ```python import base64 import hashlib extra_metadata_base64 = "iHcUslDaL/jEM/oTxqEX++4CS8o3+IZp7/V5Rgchqwc=" extra_metadata = base64.b64decode(extra_metadata_base64) json_metadata = """{ "name": "My Picture", "description": "Lorem ipsum...", "image": "https://s3.amazonaws.com/your-bucket/images/{id}.png", "image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", "image_mimetype": "image/png", "external_url": "https://mysongs.com/song/{id}", "extra_metadata": "iHcUslDaL/jEM/oTxqEX++4CS8o3+IZp7/V5Rgchqwc=" }""" h = hashlib.new("sha512_256") h.update(b"arc0003/amj") h.update(json_metadata.encode("utf-8")) json_metadata_hash = h.digest() h = hashlib.new("sha512_256") h.update(b"arc0003/am") h.update(json_metadata_hash) h.update(extra_metadata) am = h.digest() print("Asset metadata in base64: ") print(base64.b64encode(am).decode("utf-8")) ``` ##### Localized Example An example of an ARC-3 JSON Metadata file with localized metadata is presented below. Base metadata file: ```json { "name": "Advertising Space", "description": "Each token represents a unique Ad space in the city.", "localization": { "uri": "ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT/{locale}.json", "default": "en", "locales": [ "en", "es", "fr" ], "integrity": { "es": "sha256-T0UofLOqdamWQDLok4vy/OcetEFzD8dRLig4229138Y=", "fr": "sha256-UUM89QQlXRlerdzVfatUzvNrEI/gwsgsN/lGkR13CKw=" } } } ``` File `es.json`: ```json { "name": "Espacio Publicitario", "description": "Cada token representa un espacio publicitario único en la ciudad." } ``` File `fr.json`: ```json { "name": "Espace Publicitaire", "description": "Chaque jeton représente un espace publicitaire unique dans la ville."
} ``` Note that if the base metadata file URI (i.e., the Asset URL) is `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT/metadata.json`, then the `uri` field inside the `localization` field may be the relative URI `{locale}.json`. ## Rationale These conventions are heavily based on Ethereum Improvement Proposal to facilitate interoperability. The main differences are highlighted below: * Asset Name and Asset Unit can be optionally specified in the ASA parameters. This is to allow wallets that are not aware of ARC-3 or that are not able to retrieve the JSON file to still display meaningful information. * A digest of the JSON Metadata file is included in the ASA parameters to ensure integrity of this file. This is redundant with the URI when IPFS is used, but it is important to ensure the integrity of the JSON file when IPFS is not used. * Similarly, the JSON Metadata schema is changed to allow specifying the SHA-256 digests of the localized versions as well as the SHA-256 digests of any file pointed by a URI property. * MIME type fields are added to help clients know how to display the files pointed by URIs. * When extra metadata is provided, the Asset Metadata Hash parameter is computed using SHA-512/256 with a prefix for proper domain separation. SHA-512/256 is the hash function used in Algorand in general (see the list of prefixes in ). Domain separation is especially important in this case to avoid mixing the hash of the JSON Metadata file with the extra metadata. However, since SHA-512/256 is less common and since not every tool or library supports computing SHA-512/256, when no extra metadata is specified, SHA-256 is used instead. * Support for relative URIs is added to allow storing both the JSON Metadata file and the files it refers to in the same IPFS directory. Valid JSON Metadata files for ERC-1155 are valid JSON Metadata files for ARC-3.
However, it is highly recommended that users always include the additional RECOMMENDED fields, such as the integrity fields. The asset name is either `arc3` or suffixed by `@arc3` to allow client software to know when an asset follows the conventions. ## Security Considerations > Not Applicable ## Copyright Copyright and related rights waived via .
# Application Binary Interface (ABI)
> Conventions for encoding method calls in Algorand Application
## Abstract This document introduces conventions for encoding method calls, including argument and return value encoding, in Algorand Application call transactions. The goal is to allow clients, such as wallets and dapp frontends, to properly encode call transactions based on a description of the interface. Further, explorers will be able to show details of these method invocations. ### Definitions * **Application:** an Algorand Application, aka “smart contract”, “stateful contract”, “contract”, or “app”. * **HLL:** a higher-level language that compiles to TEAL bytecode. * **dapp (frontend)**: a decentralized application frontend, interpreted here to mean an off-chain frontend (a webapp, native app, etc.) that interacts with Applications on the blockchain. * **wallet**: an off-chain application that stores secret keys for on-chain accounts and can display and sign transactions for these accounts. * **explorer**: an off-chain application that allows browsing the blockchain, showing details of transactions. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. Interfaces are defined in TypeScript. All the objects that are defined are valid JSON objects, and all JSON `string` types are UTF-8 encoded. ### Overview This document makes recommendations for encoding method invocations as Application call transactions, and for describing methods for access by higher-level entities. Encoding recommendations are intended to be minimal, only enough to allow interoperability among Applications. Higher-level recommendations are intended to enhance user-facing interfaces, such as high-level languages, dapps, and wallets. Applications that follow the recommendations described here are called *ARC-4 Applications*.
### Methods A method is a section of code intended to be invoked externally with an Application call transaction. A method must have a name; it may take a list of arguments as input when it is invoked, and it may return a single value (which may be a tuple) when it finishes running. The possible types for arguments and return values are described later in the section. Invoking a method involves creating an Application call transaction to specifically call that method. Methods are different from internal subroutines, which may exist in a contract but are not externally callable. Methods may be invoked by a top-level Application call transaction from an off-chain caller, or by an Application call inner transaction created by another Application. #### Method Signature A method signature is a unique identifier for a method. The signature is a string that consists of the method’s name, an open parenthesis, a comma-separated list of the types of its arguments, a closing parenthesis, and the method’s return type, or `void` if it does not return a value. The names of the arguments **MUST NOT** be included in a method’s signature, and the signature **MUST NOT** contain any whitespace. For example, `add(uint64,uint64)uint128` is the method signature for a method named `add` which takes two uint64 parameters and returns a uint128. Signatures are encoded in ASCII. For the benefit of universal interoperability (especially in HLLs), names **MUST** satisfy the regular expression `[_A-Za-z][A-Za-z0-9_]*`. Names starting with an underscore are reserved and **MUST** only be used as specified in this ARC or future ABI-related ARCs. #### Method Selector Method signatures contain all the information needed to identify a method; however, the length of a signature is unbounded. Rather than consume program space with such strings, a method selector is used to identify methods in calls. A method selector is the first four bytes of the SHA-512/256 hash of the method signature.
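This computation can be sketched in Python (the helper name `method_selector` is illustrative; `hashlib` exposes SHA-512/256 via `hashlib.new` on OpenSSL-backed builds):

```python
import hashlib

def method_selector(signature: str) -> bytes:
    """First four bytes of the SHA-512/256 hash of the ASCII method signature."""
    return hashlib.new("sha512_256", signature.encode("ascii")).digest()[:4]

# Matches the worked example in this document:
print(method_selector("add(uint64,uint64)uint128").hex())  # 8aa3b61f
```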
For example, the method selector for a method named `add` which takes two uint64 parameters and returns a uint128 can be computed as follows: ```plaintext Method signature: add(uint64,uint64)uint128 SHA-512/256 hash (in hex): 8aa3b61f0f1965c3a1cbfa91d46b24e54c67270184ff89dc114e877b1753254a Method selector (in hex): 8aa3b61f ``` #### Method Description A method description provides further information about a method beyond its signature. This description is encoded in JSON and consists of a method’s name, description (optional), arguments (their types, and optional names and descriptions), and its return type with an optional description. From this structure, the method’s signature and selector can be calculated. The Algorand SDKs provide convenience functions to calculate signatures and selectors from such JSON files. These details will enable high-level languages and dapps/wallets to properly encode arguments, call methods, and decode return values. This description can populate UIs in dapps, wallets, and explorers with descriptions of parameters, as well as populate information about methods in IDEs for HLLs. The JSON structure for such an object is: ```typescript interface Method { /** The name of the method */ name: string; /** Optional, user-friendly description for the method */ desc?: string; /** The arguments of the method, in order */ args: Array<{ /** The type of the argument */ type: string; /** Optional, user-friendly name for the argument */ name?: string; /** Optional, user-friendly description for the argument */ desc?: string; }>; /** Information about the method's return value */ returns: { /** The type of the return value, or "void" to indicate no return value.
*/ type: string; /** Optional, user-friendly description for the return value */ desc?: string; }; } ``` For example: ```json { "name": "add", "desc": "Calculate the sum of two 64-bit integers", "args": [ { "type": "uint64", "name": "a", "desc": "The first term to add" }, { "type": "uint64", "name": "b", "desc": "The second term to add" } ], "returns": { "type": "uint128", "desc": "The sum of a and b" } } ``` ### Interfaces An Interface is a logically grouped set of methods. All method selectors in an Interface **MUST** be unique. Method names need not be unique, as long as the corresponding method selectors are different. Method names in Interfaces **MUST NOT** begin with an underscore. An Algorand Application *implements* an Interface if it supports all of the methods from that Interface. An Application **MAY** implement zero, one, or multiple Interfaces. Interface designers **SHOULD** try to prevent collisions of method selectors between Interfaces that are likely to be implemented together by the same Application. > For example, an Interface `Calculator` providing integer addition and subtraction methods and an Interface `NumberFormatting` providing formatting methods for numbers into strings are likely to be used together. Interface designers should ensure that all the methods in `Calculator` and `NumberFormatting` have distinct method selectors. #### Interface Description An Interface description is a JSON object containing the JSON descriptions for each of the methods in the Interface. The JSON structure for such an object is: ```typescript interface Interface { /** A user-friendly name for the interface */ name: string; /** Optional, user-friendly description for the interface */ desc?: string; /** All of the methods that the interface contains */ methods: Method[]; } ``` Interface names **MUST** satisfy the regular expression `[_A-Za-z][A-Za-z0-9_]*`. Interface names starting with `ARC` are reserved for interfaces defined in ARCs.
Interfaces defined in `ARC-XXXX` (where `XXXX` is a 0-padded number) **SHOULD** start with `ARC_XXXX`. For example: ```json { "name": "Calculator", "desc": "Interface for a basic calculator supporting additions and multiplications", "methods": [ { "name": "add", "desc": "Calculate the sum of two 64-bit integers", "args": [ { "type": "uint64", "name": "a", "desc": "The first term to add" }, { "type": "uint64", "name": "b", "desc": "The second term to add" } ], "returns": { "type": "uint128", "desc": "The sum of a and b" } }, { "name": "multiply", "desc": "Calculate the product of two 64-bit integers", "args": [ { "type": "uint64", "name": "a", "desc": "The first factor to multiply" }, { "type": "uint64", "name": "b", "desc": "The second factor to multiply" } ], "returns": { "type": "uint128", "desc": "The product of a and b" } } ] } ``` ### Contracts A Contract is a declaration of what an Application implements. It includes the complete list of the methods implemented by the related Application. It is similar to an Interface, but it may include further details about the concrete implementation, as well as implementation-specific methods that do not belong to any Interface. All methods in a Contract **MUST** be unique; specifically, each method **MUST** have a unique method selector. Method names in Contracts **MAY** begin with underscore, but these names are reserved for use by this ARC and future extensions of this ARC. #### OnCompletion Actions and Creation In addition to the set of methods from the Contract’s definition, a Contract **MAY** allow Application calls with zero arguments, also known as bare Application calls. Since method invocations with zero arguments still encode the method selector as the first Application call argument, bare Application calls are always distinguishable from method invocations. 
The primary purpose of bare Application calls is to allow the execution of an OnCompletion (`apan`) action which requires no inputs and has no return value. A Contract **MAY** allow this for all of the OnCompletion actions listed below, for only a subset of them, or for none at all. Great care should be taken when allowing these operations. Allowed OnCompletion actions: * 0: NoOp * 1: OptIn * 2: CloseOut * 4: UpdateApplication * 5: DeleteApplication Note that OnCompletion action 3, ClearState, is **NOT** allowed to be invoked as a bare Application call. > While ClearState is a valid OnCompletion action, its behavior differs significantly from the other actions. Namely, an Application running during ClearState that wishes to have any effect on the state of the chain must never fail: due to the unique behavior of ClearState failure, failing would revert any effect made by that Application. Because of this, Applications running during ClearState are incentivized to never fail. Accepting any user input, whether that is an ABI method selector, method arguments, or even relying on the absence of Application arguments to indicate a bare Application call, is therefore a dangerous operation, since there is no way to enforce properties or even the existence of data that is supplied by the user. If a Contract elects to allow bare Application calls for some OnCompletion actions, then that Contract **SHOULD** also allow any of its methods to be called with those OnCompletion actions, as long as this would not cause undesirable or nonsensical behavior. > The reason for this is that if it’s acceptable to allow an OnCompletion action to take place in isolation inside of a bare Application call, then it’s most likely acceptable to allow the same action to take place at the same time as an ABI method call. And since the latter can be accomplished in just one transaction, it can be more efficient.
If a Contract requires an OnCompletion action to take inputs or to return a value, then the **RECOMMENDED** behavior of the Contract is to not allow bare Application calls for that OnCompletion action. Rather, the Contract should have one or more methods that are meant to be called with the appropriate OnCompletion action set in order to process that action. A Contract **MUST NOT** allow any of its methods to be called with the ClearState OnCompletion action. > To reinforce an earlier point, it is unsafe for a ClearState program to read any user input, whether that is a method argument or even relying on a certain method selector to be present. This behavior makes it unsafe to use ABI calling conventions during ClearState. If an Application is called with more than zero Application call arguments (i.e. **NOT** a bare Application call) and the OnCompletion action is **NOT** ClearState, the Application **MUST** always treat the first argument as a method selector and invoke the specified method. This behavior **MUST** be followed for all OnCompletion actions, except for ClearState. This applies to Application creation transactions as well, where the supplied Application ID is 0. Similar to OnCompletion actions, if a Contract requires its creation transaction to take inputs or to return a value, then the **RECOMMENDED** behavior of the Contract is to not allow bare Application calls for creation. Rather, the Contract should have one or more methods that are meant to be called in order to create the Contract. #### Contract Description A Contract description is a JSON object containing the JSON descriptions for each of the methods in the Contract.
The JSON structure for such an object is: ```typescript interface Contract { /** A user-friendly name for the contract */ name: string; /** Optional, user-friendly description for the contract */ desc?: string; /** * Optional object listing the contract instances across different networks */ networks?: { /** * The key is the base64 genesis hash of the network, and the value contains * information about the deployed contract in the network indicated by the * key */ [network: string]: { /** The app ID of the deployed contract in this network */ appID: number; } } /** All of the methods that the contract implements */ methods: Method[]; } ``` Contract names **MUST** satisfy the regular expression `[_A-Za-z][A-Za-z0-9_]*`. The `desc` fields of the Contract and the methods inside the Contract **SHOULD** contain information that is not explicitly encoded in the other fields, such as support of bare Application calls, requirement of specific OnCompletion action for specific methods, and methods to call for creation (if creation cannot be done via a bare Application call). For example: ```json { "name": "Calculator", "desc": "Contract of a basic calculator supporting additions and multiplications.
Implements the Calculator interface.", "networks": { "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=": { "appID": 1234 }, "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=": { "appID": 5678 } }, "methods": [ { "name": "add", "desc": "Calculate the sum of two 64-bit integers", "args": [ { "type": "uint64", "name": "a", "desc": "The first term to add" }, { "type": "uint64", "name": "b", "desc": "The second term to add" } ], "returns": { "type": "uint128", "desc": "The sum of a and b" } }, { "name": "multiply", "desc": "Calculate the product of two 64-bit integers", "args": [ { "type": "uint64", "name": "a", "desc": "The first factor to multiply" }, { "type": "uint64", "name": "b", "desc": "The second factor to multiply" } ], "returns": { "type": "uint128", "desc": "The product of a and b" } } ] } ``` ### Method Invocation In order for a caller to invoke a method, the caller and the method implementation (callee) must agree on how information will be passed to and from the method. This ABI defines a standard for where this information should be stored and for its format. This standard does not apply to Application calls with the ClearState OnCompletion action, since it is unsafe for ClearState programs to rely on user input. #### Standard Format The method selector must be the first Application call argument (index 0), accessible as `txna ApplicationArgs 0` from TEAL (except for bare Application calls, which use zero application call arguments). If a method has 15 or fewer arguments, each argument **MUST** be placed in order in the following Application call argument slots (indexes 1 through 15). The arguments **MUST** be encoded as defined in the section. Otherwise, if a method has 16 or more arguments, the first 14 **MUST** be placed in order in the following Application call argument slots (indexes 1 through 14), and the remaining arguments **MUST** be encoded as a tuple in the final Application call argument slot (index 15).
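The slot-layout rule can be sketched as follows. This is illustrative only (the helper name `pack_app_args` is hypothetical, and the ABI encoding of the individual values and of the overflow tuple is elided; real SDKs perform that encoding):

```python
def pack_app_args(selector: bytes, args: list) -> list:
    """Lay out a method selector and its arguments into Application call
    argument slots. Shows only the slot layout, not the byte encoding."""
    if len(args) <= 15:
        # Selector in slot 0, one argument per slot in slots 1..15.
        return [selector, *args]
    # 16 or more arguments: first 14 in slots 1..14, the rest grouped
    # as a tuple in the final slot (index 15).
    return [selector, *args[:14], tuple(args[14:])]

slots = pack_app_args(b"\x8a\xa3\xb6\x1f", list(range(20)))
print(len(slots))   # 16
print(slots[15])    # (14, 15, 16, 17, 18, 19)
```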
The arguments must be encoded as defined in the section. If a method has a non-void return type, then the return value of the method **MUST** be located in the final logged value of the method’s execution, using the `log` opcode. The logged value **MUST** contain a specific 4 byte prefix, followed by the encoding of the return value as defined in the section. The 4 byte prefix is defined as the first 4 bytes of the SHA-512/256 hash of the ASCII string `return`. In hex, this is `151f7c75`. > For example, if the method `add(uint64,uint64)uint128` wanted to return the value 4160, it would log the byte array `151f7c7500000000000000000000000000001040` (shown in hex). #### Implementing a Method An ARC-4 Application implementing a method: 1. **MUST** check if `txn NumAppArgs` equals 0. If true, then this is a bare Application call. If the Contract supports bare Application calls for the current transaction parameters (it **SHOULD** check the OnCompletion action and whether the transaction is creating the application), it **MUST** handle the call appropriately and either approve or reject the transaction. The following steps **MUST** be ignored in this case. Otherwise, if the Contract does not support this bare application call, the Contract **MUST** reject the transaction. 2. **MUST** examine `txna ApplicationArgs 0` to identify the selector of the method being invoked. If the contract does not implement a method with that selector, the Contract **MUST** reject the transaction. 3. **MUST** execute the actions required to implement the method being invoked. In general, this works by branching to the body of the method indicated by the selector. 4. The code for that method **MAY** extract the arguments it needs, if any, from the application call arguments as described in the section. 
If the method has more than 15 arguments and the contract needs to extract an argument beyond the 14th, it **MUST** decode `txna ApplicationArgs 15` as a tuple to access the arguments contained in it. 5. If the method is non-void, the Application **MUST** encode the return value as described in the section and then `log` it with the prefix `151f7c75`. Other values **MAY** be logged before the return value, but other values **MUST NOT** be logged after the return value. #### Calling a Method from Off-Chain To invoke an ARC-4 Application, an off-chain system, such as a dapp or wallet, would first obtain the Interface or Contract description JSON object for the app. The client may now: 1. Create an Application call transaction with the following parameters: 1. Use the ID of the desired Application whose program code implements the method being invoked, or 0 if they wish to create the Application. 2. Use the selector of the method being invoked as the first Application call argument. 3. Encode all arguments for the method, if any, as described in the section. If the method has more than 15 arguments, encode all arguments beyond (but not including) the 14th as a tuple into the final Application call argument. 2. Submit this transaction and wait until it successfully commits to the blockchain. 3. Decode the return value, if any, from the ApplyData’s log information. Clients **MAY** ignore the return value. An exception to the above instructions is if the app supports bare Application calls for some transaction parameters, and the client wishes to invoke this functionality. Then the client may simply create and submit to the network an Application call transaction with the ID of the Application (or 0 if they wish to create the application) and the desired OnCompletion value set. Application arguments **MUST NOT** be present. ### Encoding This section describes how ABI types can be represented as byte strings. 
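As a concrete instance of the return-value convention above, here is a small sketch (hypothetical helper name; it assumes a `uint<N>` return type and hex-encoded log data):

```typescript
// ARC-4 return values are logged with the 4-byte prefix 151f7c75
// (the first 4 bytes of SHA-512/256 of the ASCII string "return").
const RETURN_PREFIX = "151f7c75";

// Decode the final log entry of a method execution as an N-bit uint
// return value. `logHex` is the lowercase hex encoding of the logged bytes.
function decodeUintReturn(logHex: string, nBits: number): bigint {
  if (!logHex.startsWith(RETURN_PREFIX)) {
    throw new Error("final log entry is not an ABI return value");
  }
  const payload = logHex.slice(RETURN_PREFIX.length);
  if (payload.length !== nBits / 4) {
    throw new Error(`expected ${nBits / 8} bytes of return data`);
  }
  return BigInt("0x" + payload); // big-endian uint<N>
}

// The worked example from the spec: add(uint64,uint64)uint128 returning 4160.
decodeUintReturn("151f7c7500000000000000000000000000001040", 128); // 4160n
```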
Like the , this encoding specification is designed to have the following two properties: 1. The number of non-sequential “reads” necessary to access a value is at most the depth of that value inside the encoded array structure. For example, at most 4 reads are needed to retrieve a value at `a[i][k][l][r]`. 2. The encoding of a value or array element is not interleaved with other data and it is relocatable, i.e. only relative “addresses” (indexes to other parts of the encoding) are used. #### Types The following types are supported in the Algorand ABI. * `uint<N>`: An `N`-bit unsigned integer, where `8 <= N <= 512` and `N % 8 = 0`. When this type is used as part of a method signature, `N` must be written as a base 10 number without any leading zeros. * `byte`: An alias for `uint8`. * `bool`: A boolean value that is restricted to either 0 or 1. When encoded, up to 8 consecutive `bool` values will be packed into a single byte. * `ufixed<N>x<M>`: An `N`-bit unsigned fixed-point decimal number with precision `M`, where `8 <= N <= 512`, `N % 8 = 0`, and `0 < M <= 160`, which denotes a value `v` as `v / (10^M)`. When this type is used as part of a method signature, `N` and `M` must be written as base 10 numbers without any leading zeros. * `<type>[<N>]`: A fixed-length array of length `N`, where `N >= 0`. `type` can be any other type. When this type is used as part of a method signature, `N` must be written as a base 10 number without any leading zeros, *unless* `N` is zero, in which case only a single 0 character should be used. * `address`: Used to represent a 32-byte Algorand address. This is equivalent to `byte[32]`. * `<type>[]`: A variable-length array. `type` can be any other type. * `string`: A variable-length byte array (`byte[]`) assumed to contain UTF-8 encoded content. * `(T1,T2,…,TN)`: A tuple of the types `T1`, `T2`, …, `TN`, `N >= 0`. * reference types `account`, `asset`, `application`: **MUST NOT** be used as the return type. For encoding purposes they are an alias for `uint8`. 
See section “Reference Types” below. Additional special use types are defined in and . #### Static vs Dynamic Types For encoding purposes, the types are divided into two categories: static and dynamic. The dynamic types are: * `<type>[]` for any `type` * This includes `string` since it is an alias for `byte[]`. * `<type>[<N>]` for any dynamic `type` * `(T1,T2,...,TN)` if `Ti` is dynamic for some `1 <= i <= N` All other types are static. For a static type, all encoded values of that type have the same length, irrespective of their actual value. #### Encoding Rules Let `len(a)` be the number of bytes in the binary string `a`. The returned value shall be considered to have the ABI type `uint16`. Let `enc` be a mapping from values of the ABI types to binary strings. This mapping defines the encoding of the ABI. For any ABI value `x`, we recursively define `enc(x)` to be as follows: * If `x` is a tuple of `N` types, `(T1,T2,...,TN)`, where `x[i]` is the value at index `i`, starting at 1: * `enc(x) = head(x[1]) ... head(x[N]) tail(x[1]) ... tail(x[N])` * Let `head` and `tail` be mappings from values in this tuple to binary strings. For each `i` such that `1 <= i <= N`, these mappings are defined as: * If `Ti` (the type of `x[i]`) is static: * If `Ti` is `bool`: * Let `after` be the largest integer such that all `T(i+j)` are `bool`, for `0 <= j <= after`. * Let `before` be the largest integer such that all `T(i-j)` are `bool`, for `0 <= j <= before`. * If `before % 8 == 0`: * `head(x[i]) = enc(x[i]) | (enc(x[i+1]) >> 1) | ... | (enc(x[i + min(after,7)]) >> min(after,7))`, where `>>` is bitwise right shift which pads with 0, `|` is bitwise or, and `min(x,y)` returns the minimum value of the integers `x` and `y`. * `tail(x[i]) = ""` (the empty string) * Otherwise: * `head(x[i]) = ""` (the empty string) * `tail(x[i]) = ""` (the empty string) * Otherwise: * `head(x[i]) = enc(x[i])` * `tail(x[i]) = ""` (the empty string) * Otherwise: * `head(x[i]) = enc(len( head(x[1]) ... 
head(x[N]) tail(x[1]) ... tail(x[i-1]) ))` * `tail(x[i]) = enc(x[i])` * If `x` is a fixed-length array `T[N]`: * `enc(x) = enc((x[0], ..., x[N-1]))`, i.e. it’s encoded as if it were an `N` element tuple where every element is type `T`. * If `x` is a variable-length array `T[]` with `k` elements: * `enc(x) = enc(k) enc([x[0], ..., x[k-1]])`, i.e. it’s encoded as if it were a fixed-length array of `k` elements, prefixed with its length, `k` encoded as a `uint16`. * If `x` is an `N`-bit unsigned integer, `uint<N>`: * `enc(x)` is the `N`-bit big-endian encoding of `x`. * If `x` is an `N`-bit unsigned fixed-point decimal number with precision `M`, `ufixed<N>x<M>`: * `enc(x) = enc(x * 10^M)`, where `x * 10^M` is interpreted as a `uint<N>`. * If `x` is a boolean value `bool`: * `enc(x)` is a single byte whose **most significant bit** is either 1 or 0, if `x` is true or false respectively. All other bits are 0. Note: this means that a value of true will be encoded as `0x80` (`10000000` in binary) and a value of false will be encoded as `0x00`. This is in contrast to most other encoding schemes, where a value of true is encoded as `0x01`. Other aliased types’ encodings are already covered: * `string` and `address` are aliases for `byte[]` and `byte[32]` respectively * `byte` is an alias for `uint8` * each of the reference types is an alias for `uint8` #### Reference Types Three special types are supported *only* as the type of an argument. They *cannot* be embedded in arrays and tuples. * `account` represents an Algorand account, stored in the Accounts (`apat`) array * `asset` represents an Algorand Standard Asset (ASA), stored in the Foreign Assets (`apas`) array * `application` represents an Algorand Application, stored in the Foreign Apps (`apfa`) array Some AVM opcodes require specific values to be placed in the “foreign arrays” of the Application call transaction. These three types allow methods to describe these requirements. 
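The integer and boolean rules above can be sketched as follows (a minimal illustration covering only `uint<N>`, a standalone `bool`, and the packing of consecutive `bool`s; helper names are hypothetical):

```typescript
// Encode x as an N-bit big-endian unsigned integer (N % 8 == 0).
function encodeUint(x: bigint, nBits: number): Uint8Array {
  if (x < 0n || x >= 1n << BigInt(nBits)) throw new Error("out of range");
  const out = new Uint8Array(nBits / 8);
  for (let i = out.length - 1; i >= 0; i--) {
    out[i] = Number(x & 0xffn);
    x >>= 8n;
  }
  return out;
}

// A standalone bool encodes as one byte with only the MOST significant
// bit set: true -> 0x80, false -> 0x00.
function encodeBool(b: boolean): Uint8Array {
  return new Uint8Array([b ? 0x80 : 0x00]);
}

// Up to 8 consecutive bools pack into a single byte, first bool in the
// most significant bit: enc(x[i]) | (enc(x[i+1]) >> 1) | ...
function packBools(bs: boolean[]): number {
  if (bs.length > 8) throw new Error("at most 8 bools per byte");
  return bs.reduce((acc, b, j) => acc | (b ? 0x80 >> j : 0), 0);
}

encodeUint(4160n, 128); // 16 bytes, ending in 0x10, 0x40
packBools([true, false, true]); // 0b10100000 == 0xa0
```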
To encode method calls that have these types as arguments, the value in question is placed in the Accounts (`apat`), Foreign Assets (`apas`), or Foreign Apps (`apfa`) arrays, respectively, and a `uint8` containing the index of the value in the appropriate array is encoded in the normal location for this argument. Note that the Accounts and Foreign Apps arrays have an implicit value at index 0, the Sender of the transaction or the called Application, respectively. Therefore, indexes of any additional values begin at 1. Additionally, for efficiency, callers of a method that wish to pass the transaction Sender as an `account` value or the called Application as an `application` value **SHOULD** use 0 as the index of these values and not explicitly add them to Accounts or Foreign Apps arrays. When passing addresses, ASAs, or apps that are *not* required to be accessed by such opcodes, ARC-4 Contracts **SHOULD** use the base types for passing these types: `address` for accounts and `uint64` for asset or Application IDs. #### Transaction Types Some apps require that they are invoked as part of a larger transaction group, containing specific additional transactions. Seven additional special types are supported (only) as argument types to describe such requirements. * `txn` represents any Algorand transaction * `pay` represents a PaymentTransaction (algo transfer) * `keyreg` represents a KeyRegistration transaction (configure consensus participation) * `acfg` represents an AssetConfig transaction (create, configure, or destroy ASAs) * `axfer` represents an AssetTransfer transaction (ASA transfer) * `afrz` represents an AssetFreezeTx transaction (freeze or unfreeze ASAs) * `appl` represents an ApplicationCallTx transaction (create/invoke an Application) Arguments of these types are encoded as consecutive transactions in the same transaction group as the Application call, placed in the position immediately preceding the Application call. 
Unlike “foreign” references, these special types are not encoded in ApplicationArgs as small integers “pointing” to the associated object. In fact, they occupy no space at all in the Application Call transaction itself. Allowing explicit references would create opportunities for multiple transaction “values” to point to the same transaction in the group, which is undesirable. Instead, the locations of the transactions are implied entirely by the placement of the transaction types in the argument list. For example, to invoke the method `deposit(string,axfer,pay,uint32)void`, a client would create a transaction group containing, in this order: 1. an asset transfer 2. a payment 3. the actual Application call When encoding the other (non-transaction) arguments, the client **MUST** act as if the transaction arguments were completely absent from the method signature. The Application call would contain the method selector in ApplicationArgs\[0], the first (string) argument in ApplicationArgs\[1], and the fourth (uint32) argument in ApplicationArgs\[2]. ARC-4 Applications **SHOULD** be constructed to allow their invocations to be combined with other contract invocations in a single atomic group if they can do so safely. For example, they **SHOULD** use `gtxns` to examine the previous index in the group for a required `pay` transaction, rather than hardcode an index with `gtxn`. In general, an ARC-4 Application method with `n` transactions as arguments **SHOULD** only inspect the `n` previous transactions. In particular, it **SHOULD NOT** inspect transactions after and it **SHOULD NOT** check the size of a transaction group (if this can be done safely). In addition, a given method **SHOULD** always expect the same number of transactions before itself. For example, the method `deposit(string,axfer,pay,uint32)void` is always preceded by two transactions. It is never the case that it can be called only with one asset transfer but no payment transfer. 
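The argument layout for the `deposit(string,axfer,pay,uint32)void` example can be sketched as follows (hypothetical helper; it computes only where each argument goes, not the encodings themselves):

```typescript
const TXN_TYPES = new Set(["txn", "pay", "keyreg", "acfg", "axfer", "afrz", "appl"]);

// For a method's argument types, compute the position of each argument:
// transaction arguments become group positions immediately before the app
// call, and the remaining arguments are numbered as if the transaction
// arguments were completely absent from the signature.
function layoutArgs(argTypes: string[]) {
  const groupTxns: string[] = [];
  const appArgs: { type: string; appArgIndex: number }[] = [];
  for (const t of argTypes) {
    if (TXN_TYPES.has(t)) {
      groupTxns.push(t); // placed before the app call, in order
    } else {
      // ApplicationArgs[0] is the method selector, so non-txn args start at 1
      appArgs.push({ type: t, appArgIndex: appArgs.length + 1 });
    }
  }
  return { groupTxns, appArgs };
}

layoutArgs(["string", "axfer", "pay", "uint32"]);
// -> groupTxns: ["axfer", "pay"] (then the app call itself)
//    appArgs: string at ApplicationArgs[1], uint32 at ApplicationArgs[2]
```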
> The reason for the above recommendation is to provide minimal composability support while preventing obvious dangerous attacks. For example, if some apps expect payment transactions after them while others expect payment transactions before them, then the same payment may be counted twice. ## Rationale ## Security Considerations None. ## Copyright Copyright and related rights waived via .
# Wallet Transaction Signing API (Functional)
> An API for a function used to sign a list of transactions.
> This ARC is intended to be completely compatible with . ## Abstract ARC-1 defines a standard for signing transactions with security in mind. This proposal is a strict subset of ARC-1 that outlines only the minimum functionality required in order to be usable. Wallets that conform to ARC-1 already conform to this API. Wallets conforming to but not ARC-1 **MUST** only be used for testing purposes and **MUST NOT** be used on MainNet. This is because this ARC-5 does not provide the same security guarantees as ARC-1 to properly protect wallet users. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. ### Interface `SignTxnsFunction` Signatures are requested by calling a function `signTxns(txns)` on a list `txns` of transactions. The dApp may also provide an optional parameter `opts`. A wallet transaction signing function `signTxns` is defined by the following interface:

```ts
export type SignTxnsFunction = (
  txns: WalletTransaction[],
  opts?: SignTxnsOpts,
) => Promise<(SignedTxnStr | null)[]>;
```

* `SignTxnsOpts` is as specified by . * `SignedTxnStr` is as specified by . A `SignTxnsFunction`: * expects `txns` to be in the correct format as specified by `WalletTransaction`. ### Interface `WalletTransaction`

```ts
export interface WalletTransaction {
  /**
   * Base64 encoding of the canonical msgpack encoding of a Transaction.
   */
  txn: string;
}
```

### Semantic requirements * The call `signTxns(txns, opts)` **MUST** either throw an error or return an array `ret` of the same length as the `txns` array. * Each element of `ret` **MUST** be a valid `SignedTxnStr` with the underlying transaction exactly matching `txns[i].txn`. This ARC uses interchangeably the terms “throw an error” and “reject a promise with an error”. 
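The semantic requirements above can be checked mechanically; a sketch (hypothetical helper name) of validating the shape of a `signTxns` result against its input. Verifying that each `SignedTxnStr` actually matches `txns[i].txn` would additionally require decoding the msgpack payload, which this sketch omits:

```typescript
interface WalletTransaction { txn: string; }
type SignedTxnStr = string;

// Check that a signTxns result has the shape required by this ARC:
// same length as the input, each element a signed-txn string or null.
function validateSignTxnsResult(
  txns: WalletTransaction[],
  ret: (SignedTxnStr | null)[],
): void {
  if (ret.length !== txns.length) {
    throw new Error("result must have the same length as txns");
  }
  for (const r of ret) {
    if (r !== null && typeof r !== "string") {
      throw new Error("each element must be a SignedTxnStr or null");
    }
  }
}
```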
`signTxns` **SHOULD** follow the error standard specified in . ### UI requirements Wallets satisfying this ARC but not **MUST** clearly display a warning to the user that they **MUST** not be used with real funds on MainNet. ## Rationale This simplified version of ARC-0001 exists for two main reasons: 1. To outline the minimum amount of functionality needed in order to be useful. 2. To serve as a stepping stone towards full ARC-0001 compatibility. While this ARC **MUST** not be used by users with real funds on MainNet for security reasons, this simplified API sets a lower bar and acts as a signpost for which wallets can even be used at all. ## Security Considerations None. ## Copyright Copyright and related rights waived via .
# Algorand Wallet Address Discovery API
> API function, enable, which allows the discovery of accounts
## Abstract A function, `enable`, which allows the discovery of accounts. Optional functions, `enableNetwork` and `enableAccounts`, which handle the multiple capabilities of `enable` separately. This document requires nothing else, but further semantic meaning is prescribed to these functions in which builds off of this one and a few others. The caller of this function is usually a dApp. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. ### Interface `EnableFunction`

```ts
export type AlgorandAddress = string;
export type GenesisHash = string;

export type EnableNetworkFunction = (
  opts?: EnableNetworkOpts
) => Promise<EnableNetworkResult>;

export type EnableAccountsFunction = (
  opts?: EnableAccountsOpts
) => Promise<EnableAccountsResult>;

export type EnableFunction = (
  opts?: EnableOpts
) => Promise<EnableResult>;

export type EnableOpts = (
  EnableNetworkOpts & EnableAccountsOpts
);

export interface EnableNetworkOpts {
  genesisID?: string;
  genesisHash?: GenesisHash;
};

export interface EnableAccountsOpts {
  accounts?: AlgorandAddress[];
};

export type EnableResult = (
  EnableNetworkResult & EnableAccountsResult
);

export interface EnableNetworkResult {
  genesisID: string;
  genesisHash: GenesisHash;
}

export interface EnableAccountsResult {
  accounts: AlgorandAddress[];
}

export interface EnableError extends Error {
  code: number;
  data?: any;
}
```

An `EnableFunction` with optional input argument `opts:EnableOpts` **MUST** return a value `ret:EnableResult` or **MUST** throw an exception object of type `EnableError`. #### String specification: `GenesisID` and `GenesisHash` A `GenesisID` is an ASCII string. A `GenesisHash` is a base64 string representing a 32-byte genesis hash. #### String specification: `AlgorandAddress` Defined as in : > An Algorand address is represented by a 58-character base32 string. 
It includes the checksum. #### Error Standards `EnableError` follows the same rules as `SignTxnsError` from and uses the same status error codes. ### Interface `WalletAccountManager`

```ts
export interface WalletAccountManager {
  switchAccount: (addr: AlgorandAddress) => Promise<void>
  switchNetwork: (genesisID: string) => Promise<void>
  onAccountSwitch: (hook: (addr: AlgorandAddress) => void) => void
  onNetworkSwitch: (hook: (genesisID: string, genesisHash: GenesisHash) => void) => void
}
```

Wallets SHOULD expose a `switchAccount` function to allow an app to switch an account to another one managed by the wallet. The `switchAccount` function should return a promise which is fulfilled when the wallet has effectively switched the account. The function must throw an `Error` exception when the wallet cannot execute the switch (for example, when the provided address is not managed by the wallet or when the address is not a valid Algorand address). Similarly, wallets SHOULD expose a `switchNetwork` function to instruct a wallet to switch to another network. The function must throw an `Error` exception when the wallet cannot execute the switch (for example, when the provided genesis ID is not recognized by the wallet). Very often, webapps have their own state with information about the user (provided by the account address) and a network. For example, a webapp can list all compatible Smart Contracts for a given network. For decent integration with a wallet, we must be able to react in a webapp to account and network switches made from the wallet interface. For that we define 2 functions which MUST be exposed by wallets: `onAccountSwitch` and `onNetworkSwitch`. These functions register a hook and call it whenever the user switches an account or network, respectively, from the wallet interface. ### Semantic requirements This ARC uses interchangeably the terms “throw an error” and “reject a promise with an error”. 
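A minimal sketch of how a wallet might back the two hook registries described above (the internals here are hypothetical; a real wallet would trigger the notifications from its own UI):

```typescript
type AlgorandAddress = string;
type GenesisHash = string;

class WalletAccountHooks {
  private accountHooks: ((addr: AlgorandAddress) => void)[] = [];
  private networkHooks: ((genesisID: string, genesisHash: GenesisHash) => void)[] = [];

  // Registration functions exposed to the webapp.
  onAccountSwitch(hook: (addr: AlgorandAddress) => void): void {
    this.accountHooks.push(hook);
  }

  onNetworkSwitch(hook: (genesisID: string, genesisHash: GenesisHash) => void): void {
    this.networkHooks.push(hook);
  }

  // Called by the wallet UI when the user switches account or network.
  notifyAccountSwitch(addr: AlgorandAddress): void {
    for (const h of this.accountHooks) h(addr);
  }

  notifyNetworkSwitch(genesisID: string, genesisHash: GenesisHash): void {
    for (const h of this.networkHooks) h(genesisID, genesisHash);
  }
}
```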
#### First call to `enable` Regarding a first call by a caller to `enable(opts)` or `enable()` (where `opts` is `undefined`), with potential promised return value `ret`: When `genesisID` and/or `genesisHash` is specified in `opts`: * The call `enable(opts)` **MUST** either throw an error or return an object `ret` where `ret.genesisID` and `ret.genesisHash` match `opts.genesisID` and `opts.genesisHash` (i.e., `ret.genesisID` is identical to `opts.genesisID` if `opts.genesisID` is specified, and `ret.genesisHash` is identical to `opts.genesisHash` if `opts.genesisHash` is specified). * The user **SHOULD** be prompted for permission to acknowledge control of accounts on that specific network (defined by `ret.genesisID` and `ret.genesisHash`). * In the case only `opts.genesisID` is provided, several networks may match this ID and the user **SHOULD** be prompted to select the network they wish to use. When neither `genesisID` nor `genesisHash` is specified in `opts`: * The user **SHOULD** be prompted to select the network they wish to use. * The call `enable(opts)` **MUST** either throw an error or return an object `ret` where `ret.genesisID` and `ret.genesisHash` **SHOULD** represent the user’s selection of network. * The function **MAY** throw an error if it does not support user selection of network. When `accounts` is specified in `opts`: * The call `enable(opts)` **MUST** either throw an error or return an object `ret` where `ret.accounts` is an array that starts with all the same elements as `opts.accounts`, in the same order. * The user **SHOULD** be prompted for permission to acknowledge their control of the specified accounts. The wallet **MAY** allow the user to provide more accounts than those listed. The wallet **MAY** allow the user to select fewer accounts than those listed, in which case the wallet **MUST** return an error which **SHOULD** be a user rejected error and contain the rejected accounts in `data.accounts`. 
When `accounts` is not specified in `opts`: * The user **SHOULD** be prompted to select the accounts they wish to reveal on the selected network. * The call `enable(opts)` **MUST** either throw an error or return an object `ret` where `ret.accounts` is an empty or non-empty array. * If `ret.accounts` is not empty, the caller **MAY** assume that `ret.accounts[0]` is the user’s “currently-selected” or “default” account, for DApps that only require access to one account. > An empty `ret.accounts` array is used to allow a DApp to get access to an Algorand node but not to signing capabilities. #### Network In addition to the above rules, in all cases, if `ret.genesisID` is one of the official networks `mainnet-v1.0`, `testnet-v1.0`, or `betanet-v1.0`, `ret.genesisHash` **MUST** match the genesis hash of those networks:

| Genesis ID | Genesis Hash |
| -------------- | ---------------------------------------------- |
| `mainnet-v1.0` | `wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=` |
| `testnet-v1.0` | `SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=` |
| `betanet-v1.0` | `mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=` |

When using a genesis ID that is not one of the above, the caller **SHOULD** always provide a `genesisHash`. This is because a `genesisID` does not uniquely define a network in that case. If a caller does not provide a `genesisHash`, multiple calls to `enable` may return a different network with the same `genesisID` but a different `genesisHash`. #### Identification of the caller The `enable` function **MAY** remember the choices made by the user for a specific caller and use them every time the same caller calls the function. The function **MUST** ensure that the caller can be securely identified. In particular, by default, the function **MUST NOT** allow webapps on the http protocol to call it, as such webapps can easily be modified by a man-in-the-middle attacker. 
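The network-matching rule from the table above can be sketched as follows (hypothetical helper name):

```typescript
// Genesis hashes of the official networks, from the table above.
const OFFICIAL_NETWORKS: Record<string, string> = {
  "mainnet-v1.0": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
  "testnet-v1.0": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=",
  "betanet-v1.0": "mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=",
};

// If ret.genesisID names an official network, ret.genesisHash MUST be
// that network's genesis hash. Other IDs carry no such constraint.
function checkEnableResultNetwork(ret: { genesisID: string; genesisHash: string }): void {
  const expected = OFFICIAL_NETWORKS[ret.genesisID];
  if (expected !== undefined && ret.genesisHash !== expected) {
    throw new Error(`genesis hash does not match ${ret.genesisID}`);
  }
}
```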
In the case of callers that are https websites, the caller **SHOULD** be identified by its fully qualified domain name. The function **MAY** offer the user some “developer mode” or “advanced” options to allow calls from insecure dApps. In that case, the fact that the caller is insecure and/or the fact that the wallet is in “developer mode” **MUST** be clearly displayed by the wallet. #### Multiple calls to `enable` The same caller **MAY** call the `enable` function multiple times. When the caller is a dApp, it **SHOULD** call the `enable()` function every time it is refreshed. The `enable` function is not guaranteed to return the same value every time it is called, even when called with the exact same argument `opts`. The caller **MUST NOT** assume that the `enable` function will always return the same value, and **MUST** properly handle changes of available accounts and/or changes of network. For example, a user may want to change network or accounts for a dApp. That is why, upon refresh, the dApp **SHOULD** automatically switch network and perform all required changes. Examples of required changes include but are not limited to changes of the list of accounts, changes of the statuses of the accounts (e.g., opted in or not), and changes of the balances of the accounts. ### `enableNetwork` and `enableAccounts` It may be desirable for a dapp to perform network queries prior to requesting that the user enable an account for use with the dapp. Wallets may provide the functionality of `enable` in two parts: `enableNetwork` for network discovery, and `enableAccounts` for account discovery, which together are the equivalent of calling `enable`. ## Rationale This API puts power in the user’s hands to choose a preferred network and account to use when interacting with a dApp. It also allows dApp developers to suggest a specific network, or specific accounts, as appropriate. 
The user still maintains the ability to reject the dApp’s suggestions, which corresponds to rejecting the promise returned by `enable()`. ## Security Considerations None. ## Copyright Copyright and related rights waived via .
# Algorand Wallet Post Transactions API
> API function to Post Signed Transactions to the network.
## Abstract A function, `postTxns`, which accepts an array of `SignedTransaction`s, and posts them to the network. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. This ARC uses interchangeably the terms “throw an error” and “reject a promise with an error”. ### Interface `PostTxnsFunction`

```ts
export type TxnID = string;
export type SignedTxnStr = string;

export type PostTxnsFunction = (
  stxns: SignedTxnStr[],
) => Promise<PostTxnsResult>;

export interface PostTxnsResult {
  txnIDs: TxnID[];
}

export interface PostTxnsError extends Error {
  code: number;
  data?: any;
  successTxnIDs: (TxnID | null)[];
}
```

A `PostTxnsFunction` with input argument `stxns:string[]` and promised return value `ret:PostTxnsResult`: * expects `stxns` to be in the correct string format as specified by `SignedTxnStr` (defined below). * **MUST**, if successful, return an object `ret` such that `ret.txnIDs` is an array of strings in the correct format as specified by `TxnID`. > The algod API itself uses the name `txId` for a transaction ID; this ARC uses `txnID` for consistency with its other names. ### String specification: `SignedTxnStr` Defined as in : > \[`SignedTxnStr` is] the base64 encoding of the canonical msgpack encoding of the `SignedTxn` corresponding object, as defined in the . ### String specification: `TxnID` A `TxnID` is a 52-character base32 string (without padding) corresponding to a 32-byte string. For example: `H2KKVITXKWL2VBZBWNHSYNU3DBLYBXQAVPFPXBCJ6ZZDVXQPSRTQ`. 
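The `TxnID` format above can be checked with a simple regular expression (a sketch; base32 here uses the RFC 4648 alphabet `A`–`Z`, `2`–`7`, and the sketch ignores the further restriction the 256-bit length places on the final character):

```typescript
// A TxnID is 52 unpadded base32 characters (A-Z, 2-7) encoding 32 bytes.
const TXN_ID_RE = /^[A-Z2-7]{52}$/;

function isTxnID(s: string): boolean {
  return TXN_ID_RE.test(s);
}

isTxnID("H2KKVITXKWL2VBZBWNHSYNU3DBLYBXQAVPFPXBCJ6ZZDVXQPSRTQ"); // true
```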
### Error standard `PostTxnsError` follows the same rules as `SignTxnsError` from and uses the same status codes as well as the following status codes:

| Status Code | Name | Description |
| ----------- | --------------------------------- | ----------------------------------------- |
| 4400 | Failure Sending Some Transactions | Some transactions were not sent properly. |

### Semantic requirements Regarding a call to `postTxns(stxns)` with promised return value `ret`: * `postTxns` **MAY** assume that `stxns` is an array of valid `SignedTxnStr` strings that represent correctly signed transactions such that: * Either all transactions belong to the same group of transactions and are in the correct order. In other words, either `stxns` is an array of a single transaction with a zero group ID (`txn.Group`), or `stxns` is an array of one or more transactions with the *same* non-zero group ID. The function **MUST** reject if the transactions do not match their group ID. (The caller must provide the transactions in the order defined by the group ID.) > An early draft of this ARC required that the size of a group of transactions must be greater than 1 but, since the Algorand protocol supports groups of size 1, this requirement has been changed so dApps don’t have to have special cases for single transactions and can always send a group to the wallet. * Or `stxns` is a concatenation of arrays satisfying the above. * `postTxns` **MUST** attempt to post all transactions together. With the `algod` v2 API, this implies splitting the transactions into groups and making an API call per transaction group. `postTxns` **SHOULD NOT** wait after each transaction group but post all of them without pause in-between. * `postTxns` **MAY** ask the user whether they approve posting those transactions. > A dApp can always post transactions itself without the help of `postTxns` when a public network is used. 
However, when a private network is used, a dApp may need `postTxns`, and in this case, asking the user’s approval can make sense. Another such use case is when the user uses a specific trusted node that has some legal restrictions. * `postTxns` **MUST** wait for confirmation that the transactions are finalized. > TODO: Decide whether to add an optional flag to not wait for that. * If successful, `postTxns` **MUST** resolve the returned promise with the list of transaction IDs `txnIDs` of the posted transactions `stxns`. * If unsuccessful, `postTxns` **MUST** reject the promise with an error `err` of type `PostTxnsError` such that: * `err.code=4400` if there was a failure sending the transactions or a code as specified in if the user or function disallowed posting the transactions. * `err.message` **SHOULD** describe what went wrong in as much detail as possible. * `err.successTxnIDs` **MUST** be an array such that `err.successTxnIDs[i]` is the transaction ID of `stxns[i]` if `stxns[i]` was successfully committed to the blockchain, and `null` otherwise. ### Security considerations In case the wallet uses an API service that is secret or provided by the user, the wallet **MUST** ensure that the URL of the service and the potential tokens/headers are not leaked to the dApp. > Leakage may happen by accidentally including too much information in responses or errors returned by the various methods. For example, if the Node.js superagent library is used without filtering errors and responses, errors and responses may include the request object, which includes the potentially secret API service URL / secret token headers. ## Rationale This API allows DApps to use a user’s preferred connection in order to submit transactions to the network. The user may wish to use a specific trusted node, or a particular paid service with their own secret token. This API protects the user’s secrets by not exposing connection details to the DApp. ## Security Considerations None. 
## Copyright Copyright and related rights waived via .
# Algorand Wallet Sign and Post API
> A function used to simultaneously sign and post transactions to the network.
## Abstract A function `signAndPostTxns`, which accepts an array of `WalletTransaction`s, and posts them to the network. Accepts the inputs to ’s / ’s `signTxns`, and produces the output of ’s `postTxns`. ## Specification ### Interface `SignAndPostTxnsFunction`

```ts
export type SignAndPostTxnsFunction = (
  txns: WalletTransaction[],
  opts?: any,
) => Promise<PostTxnsResult>;
```

* `WalletTransaction` is as specified by . * `PostTxnsResult` is as specified by . Errors are handled exactly as specified by and ## Rationale Allows the user to be sure that what they are signing is in fact all that is being sent. Doesn’t necessarily grant the DApp direct access to the signed txns, though they are posted to the network, so they should not be considered private. Exposing only this API instead of exposing `postTxns` directly is potentially safer for the wallet user, since it only allows the posting of transactions which the user has explicitly approved. ## Security Considerations In case the wallet uses an API service that is secret or provided by the user, the wallet **MUST** ensure that the URL of the service and the potential tokens/headers are not leaked to the dApp. > Leakage may happen by accidentally including too much information in responses or errors returned by the various methods. For example, if the Node.js superagent library is used without filtering errors and responses, errors and responses may include the request object, which includes the potentially secret API service URL / secret token headers. For dApps using the `signAndPostTxns` function, it is **RECOMMENDED** to display a Waiting/Loading Screen to wait until the transaction is confirmed to prevent potential issues. > The reasoning is the following: the pop-up/window in which the wallet is showing the waiting/loading screen may disappear in some cases (e.g., if the user clicks away from it). If it disappears, the user may be tempted to perform the action again, causing significant damage. 
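Conceptually, `signAndPostTxns` is the composition of the two earlier functions; a sketch of how a wallet might wire it together (hypothetical names, assuming `signTxns` and `postTxns` shaped as in the previous ARCs):

```typescript
interface WalletTransaction { txn: string; }
type SignedTxnStr = string;
interface PostTxnsResult { txnIDs: string[]; }

type SignTxnsFunction = (txns: WalletTransaction[]) => Promise<(SignedTxnStr | null)[]>;
type PostTxnsFunction = (stxns: SignedTxnStr[]) => Promise<PostTxnsResult>;

// Sign everything, then post; reject if the wallet declined to sign any
// transaction, since a partially signed group cannot be posted.
function makeSignAndPostTxns(signTxns: SignTxnsFunction, postTxns: PostTxnsFunction) {
  return async (txns: WalletTransaction[]): Promise<PostTxnsResult> => {
    const signed = await signTxns(txns);
    if (signed.some((s) => s === null)) {
      throw new Error("all transactions must be signed before posting");
    }
    return postTxns(signed as SignedTxnStr[]);
  };
}
```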
## Copyright Copyright and related rights waived via .
# Algorand Wallet Algodv2 and Indexer API
> An API for accessing Algod and Indexer through a user's preferred connection.
## Abstract Functions `getAlgodv2Client` and `getIndexerClient` which return a `BaseHTTPClient` that can be used to construct an `Algodv2Client` and an `IndexerClient` respectively (from the ). ## Specification ### Interface `GetAlgodv2ClientFunction` ```ts type GetAlgodv2ClientFunction = () => Promise<BaseHTTPClient> ``` Returns a promised `BaseHTTPClient` that can then be used to build an `Algodv2Client`, where `BaseHTTPClient` is an interface matching the interface `algosdk.BaseHTTPClient` from the . ### Interface `GetIndexerClientFunction` ```ts type GetIndexerClientFunction = () => Promise<BaseHTTPClient> ``` Returns a promised `BaseHTTPClient` that can then be used to build an `Indexer`, where `BaseHTTPClient` is an interface matching the interface `algosdk.BaseHTTPClient` from the . ### Security considerations The returned `BaseHTTPClient` **SHOULD** filter the queries made to prevent potential attacks, and reject (i.e., throw an exception) if a check is not satisfied. A non-exhaustive list of checks is provided below: * Check that the relative path does not contain `..`. * Check that the only provided headers are the ones used by the SDK (when this ARC was written: `accept` and `content-type`) and that their values are the ones provided by the SDK. `BaseHTTPClient` **MAY** impose rate limits. For higher security, `BaseHTTPClient` **MAY** also check the queries against the OpenAPI specification of the node and the indexer. In case the wallet uses an API service that is secret or provided by the user, the wallet **MUST** ensure that the URL of the service and the potential tokens/headers are not leaked to the dApp. > Leakage may happen by accidentally including too much information in responses or errors returned by the various methods. For example, if the nodeJS superagent library is used without filtering errors and responses, errors and responses may include the request object, which includes the potentially secret API service URL / secret token headers.
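The path and header checks above can be sketched as standalone guards. Function names here are illustrative; a real wallet would run such checks inside its `BaseHTTPClient` wrapper before forwarding any request to its (possibly secret) endpoint:

```typescript
// Illustrative guards a wallet's BaseHTTPClient wrapper could run
// before forwarding a query. The header set reflects the SDK headers
// named above (accept, content-type).
const ALLOWED_HEADERS = new Set(["accept", "content-type"]);

function checkRelativePath(relativePath: string): void {
  if (relativePath.split("/").indexOf("..") !== -1) {
    throw new Error("Rejected: path traversal in query path");
  }
}

function checkHeaders(headers: Record<string, string>): void {
  for (const name of Object.keys(headers)) {
    if (!ALLOWED_HEADERS.has(name.toLowerCase())) {
      throw new Error("Rejected: unexpected header " + name);
    }
  }
}
```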
## Rationale Nontrivial dApps often require the ability to query the network for activity. Algorand dApps written without regard to wallets are likely written using `Algodv2` and `Indexer` from `algosdk`. This document allows dApps to instantiate `Algodv2` and `Indexer` for a wallet API service, making it easy for JavaScript dApp authors to port their code to work with wallets. ## Security Considerations None. ## Copyright Copyright and related rights waived via .
# Algorand Wallet Reach Minimum Requirements
> Minimum requirements for Reach to function with a given wallet.
## Abstract An amalgamation of APIs which comprise the minimum requirements for Reach to be able to function correctly with a given wallet. ## Specification A group of related functions: * `enable` (**REQUIRED**) * `enableNetwork` (**OPTIONAL**) * `enableAccounts` (**OPTIONAL**) * `signAndPostTxns` (**REQUIRED**) * `getAlgodv2Client` (**REQUIRED**) * `getIndexerClient` (**REQUIRED**) * `signTxns` (**OPTIONAL**) * `postTxns` (**OPTIONAL**) * `enable`: as specified in . * `signAndPostTxns`: as specified in . * `getAlgodv2Client` and `getIndexerClient`: as specified in . * `signTxns`: as specified in / . * `postTxns`: as specified in . There are additional semantics for using these functions together. ### Semantic Requirements * `enable` **SHOULD** be called before calling the other functions and upon refresh of the dApp. * Calling `enableNetwork` and then `enableAccounts` **MUST** be equivalent to calling `enable`. * If used instead of `enable`: `enableNetwork` **SHOULD** be called before `enableAccounts` and `getIndexerClient`. Both `enableNetwork` and `enableAccounts` **SHOULD** be called before the other functions. * If `signAndPostTxns`, `getAlgodv2Client`, `getIndexerClient`, `signTxns`, or `postTxns` are called before `enable` (or `enableAccounts`), they **SHOULD** throw an error object with property `code=4202`. (See Error Standards in ). * `getAlgodv2Client` and `getIndexerClient` **MUST** return connections to the network indicated by the `network` result of `enable`. * `signAndPostTxns` **MUST** post transactions to the network indicated by the `network` result of `enable` * The result of `getAlgodv2Client` **SHOULD** only be used to query the network. `postTxns` (if available) and `signAndPostTxns` **SHOULD** be used to send transactions to the network. The `Algodv2Client` object **MAY** be modified to throw exceptions if the caller tries to use it to post transactions. * `signTxns` and `postTxns` **MAY** or **MAY NOT** be provided. 
When one is provided, they both **MUST** be provided. In addition, `signTxns` **MAY** display a warning that the transactions are returned to the dApp rather than posted directly to the blockchain. ### Additional requirements regarding LogicSigs `signAndPostTxns` **MUST** also be able to handle logic sigs, and more generally transactions signed by the DApp itself. In the case of logic sigs, callers are expected to sign the logic sig by themselves, rather than expecting the wallet to do so on their behalf. To handle these cases, we adopt and extend the format for `WalletTransaction`s that do not need to be signed: ```json { "txn": "...", "signers": [], "stxn": "..." } ``` * `stxn` is a `SignedTxnStr`, as specified in . * For production wallets, `stxn` **MUST** be checked to match `txn`, as specified in . `signAndPostTxns` **MAY** reject when none of the transactions need to be signed by the user. ## Rationale In order for a wallet to be usable by a DApp, it must support features for account discovery, signing and posting transactions, and querying the network. To whatever extent possible, the end users of a DApp should be empowered to select their own wallet, accounts, and network to be used with the DApp. Furthermore, said users should be able to use their preferred network node connection, without exposing their connection details and secrets (such as endpoint URLs and API tokens) to the DApp. The APIs presented in this document and related documents are sufficient to cover the needed functionality, while protecting user choice and remaining compatible with best security practices. Most DApps indeed always need to post transactions immediately after signing.
`signAndPostTxns` allows this goal without revealing the signed transactions to the DApp, which prevents surprises to the user: there is no risk that the DApp keeps the transactions in memory and posts them later without the user knowing (either to achieve a malicious goal such as forcing double spending, or just because the DApp has a bug). However, there are cases where `signTxns` and `postTxns` need to be used: for example when multiple users need to coordinate to sign an atomic transfer. ## Reference Implementation ```js async function main(wallet) { // Account discovery const enabled = await wallet.enable({genesisID: 'testnet-v1.0'}); const from = enabled.accounts[0]; // Querying const algodv2 = new algosdk.Algodv2(await wallet.getAlgodv2Client()); const suggestedParams = await algodv2.getTransactionParams().do(); const txns = makeTxns(from, suggestedParams); // Sign and post const res = await wallet.signAndPostTxns(txns); console.log(res); } ``` Where `makeTxns` is comparable to what is seen in ’s sample code. ## Security Considerations None. ## Copyright Copyright and related rights waived via .
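Tying the LogicSig rules above to practice, a group mixing a user-signed transaction with one pre-signed by the dApp could be assembled as follows. The `WalletTransaction` shape is abbreviated and the base64 payloads are placeholders:

```typescript
// Abbreviated WalletTransaction shape from the referenced ARCs.
interface WalletTransaction { txn: string; signers?: string[]; stxn?: string; }

// Build a group where the first transaction needs the user's signature
// and the second was pre-signed by the dApp (e.g., a logic sig): empty
// `signers` tells the wallet not to sign, and `stxn` carries the
// dApp-provided signed transaction.
function makeMixedGroup(
  userTxnB64: string,
  lsigTxnB64: string,
  lsigStxnB64: string,
): WalletTransaction[] {
  return [
    { txn: userTxnB64 },
    { txn: lsigTxnB64, signers: [], stxn: lsigStxnB64 },
  ];
}
```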
# Algorand Wallet Reach Browser Spec
> Convention for DApps to discover Algorand wallets in browser
## Abstract A common convention for DApps to discover Algorand wallets in browser code: `window.algorand`. A property `algorand` attached to the `window` browser object, with all the features defined in . ## Specification ```ts interface WindowAlgorand { enable: EnableFunction; enableNetwork?: EnableNetworkFunction; enableAccounts?: EnableAccountsFunction; signAndPostTxns: SignAndPostTxnsFunction; getAlgodv2Client: GetAlgodv2ClientFunction; getIndexerClient: GetIndexerClientFunction; signTxns?: SignTxnsFunction; postTxns?: PostTxnsFunction; } ``` With the specifications and semantics for each function as stated in . ## Rationale DApps should be unopinionated about which wallet they are used with. End users should be able to inject their wallet of choice into the DApp. Therefore, in browser contexts, we reserve `window.algorand` for this purpose. ## Security Considerations None. ## Copyright Copyright and related rights waived via .
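A dApp can defensively check for an injected wallet before use. This sketch validates only the two REQUIRED members it names; the reduced interface is illustrative, not part of the convention:

```typescript
// Reduced, illustrative view of the injected interface: only the two
// REQUIRED members are checked here.
interface WindowAlgorandLike {
  enable: (...args: any[]) => Promise<any>;
  signAndPostTxns: (...args: any[]) => Promise<any>;
}

// Defensive discovery: throw rather than call into a missing or
// non-conforming window.algorand.
function getInjectedWallet(globalObj: any): WindowAlgorandLike {
  const wallet = globalObj && globalObj.algorand;
  if (!wallet || typeof wallet.enable !== "function" || typeof wallet.signAndPostTxns !== "function") {
    throw new Error("No conforming wallet found at window.algorand");
  }
  return wallet as WindowAlgorandLike;
}
```

In a browser, the dApp would pass `window` as `globalObj`.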
# Claimable ASA from vault application
> A smart signature contract account that can receive & disburse claimable Algorand Smart Assets (ASA) to an intended recipient account.
## Abstract The goal of this standard is to establish a mechanism in the Algorand ecosystem by which ASAs can be sent to an intended receiver even if their account is not opted in to the ASA. An on-chain application, called a vault, will be used to custody assets on behalf of a given user, with only that user being able to withdraw assets. A master application will use box storage to keep track of the vault for any given Algorand account. If integrated into ecosystem technologies including wallets, explorers, and dApps, this standard can provide enhanced capabilities around ASAs, which are otherwise strictly bound at the protocol level to require opting in to be received. This also enables the ability to “burn” ASAs by sending them to the vault associated with the global Zero Address. ## Motivation Algorand requires accounts to opt in to receive any ASA, a fact which simultaneously: 1. Grants account holders fine-grained control over their holdings by allowing them to select which assets to allow and preventing receipt of unwanted tokens. 2. Frustrates users and developers when accounting for this requirement, especially since other blockchains do not have this requirement. This ARC lays out a new way to navigate the ASA opt-in requirement. ### Contemplated Use Cases The following use cases help explain how this capability can enhance the possibilities within the Algorand ecosystem. #### Airdrops An ASA creator who wants to send their asset to a set of accounts faces the challenge of needing their intended receivers to opt in to the ASA ahead of time, which requires non-trivial communication efforts and precludes the possibility of completing the airdrop as a surprise. This claimable ASA standard creates the ability to send an airdrop out to individual addresses so that the receivers can opt in and claim the asset at their convenience—or not, if they so choose.
#### Reducing New User On-boarding Friction An application operator who wants to on-board users to their game or business may want to reduce the friction of getting people started by decoupling their application on-boarding process from the process of funding a non-custodial Algorand wallet, if users are wholly new to the Algorand ecosystem. As long as the receiver’s address is known, an ASA can be sent to them ahead of them having ALGOs in their wallet to cover the minimum balance requirement and opt in to the asset. #### Token Burning Similarly to any regular account, the global Zero Address also has a corresponding vault to which one can send a quantity of any ASA to effectively “burn” it, rendering it lost forever. No one controls the Zero Address, so while it cannot opt into any ASA to receive it directly, it also cannot make any claims from its corresponding vault, which thus functions as a purgatory account for unclaimable ASAs. By utilizing this approach, anyone can verifiably and irreversibly take a quantity of any ASA out of circulation forever. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. ### Definitions * **Claimable ASA**: An Algorand Standard Asset (ASA) which has been transferred to a vault following the standard set forth in this proposal such that only the intended receiver account can claim it at their convenience. * **Vault**: An Algorand application used to hold claimable ASAs for a given account. * **Master**: An Algorand application used to keep track of all of the vaults created for Algorand accounts. * **dApp**: A decentralized application frontend, interpreted here to mean an off-chain frontend (a webapp, native app, etc.) that interacts with applications on the blockchain.
* **Explorer**: An off-chain application that allows browsing the blockchain, showing details of transactions. * **Wallet**: An off-chain application that stores secret keys for on-chain accounts and can display and sign transactions for these accounts. * **Mainnet ID**: The ID for the application that should be called upon claiming an asset on mainnet. * **Testnet ID**: The ID for the application that should be called upon claiming an asset on testnet. * **Minimum Balance Requirement (MBR)**: The minimum amount of Algos which must be held by an account on the ledger, which is currently 0.1A + 0.1A per ASA opted into. ### TEAL Smart Contracts There are two smart contracts being utilized: The and the . #### Vault ##### Storage | Type | Key | Value | Description | | ------ | ---------- | -------------- | ----------------------------------------------------- | | Global | “creator” | Account | The account that funded the creation of the vault | | Global | “master” | Application ID | The application ID that created the vault | | Global | “receiver” | Account | The account that can claim/reject ASAs from the vault | | Box | Asset ID | Account | The account that funded the MBR for the given ASA | ##### Methods ###### Opt-In * Opts vault into ASA * Creates box: ASA -> “funder” * “funder” being the account that initiates the opt-in * “funder” is the one covering the ASA MBR ###### Claim * Transfers ASA from vault to “receiver” * Deletes box: ASA -> “funder” * Returns ASA and box MBR to “funder” ###### Reject * Sends ASA to ASA creator * Refunds rejector all fees incurred (thus rejecting is free) * Deletes box: ASA -> “funder” * Remaining balance sent to fee sink #### Master ##### Storage | Type | Key | Value | Description | | ---- | ------- | -------------- | ------------------------------- | | Box | Account | Application ID | The vault for the given account | ##### Methods ###### Create Vault * Creates a vault for a given account (“receiver”) * Creates box: “receiver” ->
vault ID * App/box MBR funded by vault creator ###### Delete Vault * Deletes vault app * Deletes box: “receiver” -> vault ID * App/box MBR returned to vault creator ###### Verify Axfer * Verifies asset is going to correct vault for “receiver” ###### getVaultID * Returns vault ID for “receiver” * Fails if “receiver” does not have vault ###### getVaultAddr * Returns vault address for “receiver” * Fails if “receiver” does not have vault ###### hasVault * Determines if “receiver” has a vault ## Rationale This design was created to offer a standard mechanism by which wallets, explorers, and dApps could enable users to send, receive, and find claimable ASAs without requiring any changes to the core protocol. ## Backwards Compatibility This ARC makes no changes to the consensus protocol and creates no backwards compatibility issues. ## Reference Implementation ### Source code ## Security Considerations Neither application (the vault nor the master) has been audited. ## Copyright Copyright and related rights waived via .
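The box MBR amounts referred to above (covered by the funder and later refunded) can be estimated from current protocol parameters: 2,500 microALGO per box plus 400 microALGO per byte of key and value. The key/value sizes below follow the storage tables (an 8-byte asset ID key with a 32-byte account value for the vault, and the reverse for the master); this is a sketch for illustration, not part of the ARC:

```typescript
// Estimate the MBR a single box consumes, using current protocol
// parameters: 2,500 microALGO per box plus 400 microALGO per byte of
// key + value. (Parameters could change; this is an estimate only.)
const BOX_FLAT_MBR = 2_500;
const PER_BYTE_MBR = 400;

function boxMbr(keyLen: number, valueLen: number): number {
  return BOX_FLAT_MBR + PER_BYTE_MBR * (keyLen + valueLen);
}

// Vault box: 8-byte asset ID key -> 32-byte funder account value.
const vaultBoxMbr = boxMbr(8, 32);
// Master box: 32-byte receiver account key -> 8-byte vault app ID value.
const masterBoxMbr = boxMbr(32, 8);
```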
# Encrypted Short Messages
> Scheme for encryption/decryption that allows for private messages.
## Abstract The goal of this convention is to have a standard way for block explorers, wallets, exchanges, marketplaces, and more generally, client software to send, read & delete short encrypted messages. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. ### Account’s message Application To receive a message, an Account **MUST** create an application that follows this convention: * A Local State named `public_key` **MUST** contain a *NACL Public Key (Curve 25519)* key * A Local State named `arc` **MUST** contain the value `arc15-nacl-curve25519` * A Box `inbox` where: * Keys are the ABI encoding of the tuple `(address,uint64)` containing the address of the sender and the round when the message is sent * Values are the encrypted **text** > With this design, a sender can only write one message per round. For the same round, an account can receive multiple messages if distinct senders send them ### ABI Interface The associated smart contract **MUST** implement the following ABI interface: ```json { "name": "ARC_0015", "desc": "Interface for an encrypted messages application", "methods": [ { "name": "write", "desc": "Write encrypted text to the box inbox", "args": [ { "type": "byte[]", "name": "text", "desc": "Encrypted text provided by the sender." } ], "returns": { "type": "void" } }, { "name": "authorize", "desc": "Authorize an address to send a message", "args": [ { "type": "byte[]", "name": "address_to_add", "desc": "Address of a sender" }, { "type": "byte[]", "name": "info", "desc": "Information about the sender" } ], "returns": { "type": "void" } }, { "name": "remove", "desc": "Delete the encrypted text sent by an account on a particular round.
Send the MBR used for a box to the Application's owner.", "args": [ { "type": "byte[]", "name": "address", "desc": "Address of the sender"}, { "type": "uint64", "name": "round", "desc": "Round when the message was sent"} ], "returns": { "type": "void" } }, { "name": "set_public_key", "desc": "Register a NACL Public Key (Curve 25519) to the global value public_key", "args": [ { "type": "byte[]", "name": "public_key", "desc": "NACL Public Key (Curve 25519)" } ], "returns": { "type": "void" } } ] } ``` > Warning: The remove method only removes the box used for a message, but it is still possible to access it by looking at the indexer. ## Rationale The Algorand blockchain unlocks many new use cases - anonymous user login to dApps and classical WEB2.0 solutions being one of them. For many use-cases, anonymous users still require asynchronous event notifications, and email seems to be the only standard option at the time of the creation of this ARC. With wallet adoption of this standard, users will enjoy real-time encrypted A2P (application-to-person) notifications without having to provide their email addresses and without any vendor lock-in. There is also a possibility to do a similar version of this ARC with one App which would store every message for every Account. Another approach was to use the note field for messages, but with box storage available, boxes were a more practical and secure design. ## Reference Implementation The following code is not audited and is only here for information purposes. It **MUST NOT** be used in production. Here is an example of how the code can be run in Python: . > The delete method is only for test purposes; it is not part of the ABI for an `ARC-15` Application. An example of the application created using Beaker can be found here: . ## Security Considerations Even if the message is encrypted, it will stay on the blockchain. If the secret key used to decrypt is compromised at one point, every related message is at risk.
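The `inbox` box key described above is the ABI encoding of `(address,uint64)`: the 32-byte public key behind the sender's address followed by the round as an 8-byte big-endian integer. A sketch (decoding the Algorand address to its 32-byte public key is assumed to happen elsewhere):

```typescript
// Build the inbox box key for a message: ABI encoding of the tuple
// (address,uint64), i.e. the sender's 32-byte public key followed by
// the round as an 8-byte big-endian integer. Safe for rounds < 2^53.
function inboxBoxKey(senderPublicKey: Uint8Array, round: number): Uint8Array {
  if (senderPublicKey.length !== 32) throw new Error("expected a 32-byte public key");
  const key = new Uint8Array(40);
  key.set(senderPublicKey, 0);
  for (let i = 0; i < 8; i++) {
    key[39 - i] = Math.floor(round / 2 ** (8 * i)) % 256;
  }
  return key;
}
```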
## Copyright Copyright and related rights waived via .
# Convention for declaring traits of an NFT
> This is a convention for declaring traits in an NFT's metadata.
## Abstract The goal is to establish a standard for how traits are declared inside an NFT’s metadata, for example as specified in (), () or (). ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. If the property `traits` is provided anywhere in the metadata, it **MUST** adhere to the schema below. If the NFT is a part of a larger collection and that collection has traits, all the available traits for the collection **MUST** be listed as a property of the `traits` object. If the NFT does not have a particular trait, its value **MUST** be “none”. The JSON schema for `traits` is as follows: ```json { "title": "Traits for Non-Fungible Token", "type": "object", "properties": { "traits": { "type": "object", "description": "Traits (attributes) that can be used to calculate things like rarity. Values may be strings or numbers" } } } ``` #### Examples ##### Example of an NFT that has traits ```json { "name": "NFT With Traits", "description": "NFT with traits", "image": "https://s3.amazonaws.com/your-bucket/images/two.png", "image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", "properties": { "creator": "Tim Smith", "created_at": "January 2, 2022", "traits": { "background": "red", "shirt_color": "blue", "glasses": "none", "tattoos": 4 } } } ``` ##### Example of an NFT that has no traits ```json { "name": "NFT Without Traits", "description": "NFT without traits", "image": "https://s3.amazonaws.com/your-bucket/images/one.png", "image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", "properties": { "creator": "John Smith", "created_at": "January 1, 2022" } } ``` ## Rationale A standard for traits is needed so programs know what to expect in order to calculate things like rarity.
## Backwards Compatibility If the metadata does not have the field `traits`, each value of `properties` should be considered a trait. ## Security Considerations None. ## Copyright Copyright and related rights waived via .
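A consumer honoring both the `traits` schema and the backwards-compatibility rule above might read traits like this (a sketch; the function name is illustrative):

```typescript
// Traits may be strings or numbers per the schema above.
type Traits = Record<string, string | number>;

// Read traits from compliant metadata: prefer `properties.traits`;
// otherwise fall back to treating every value of `properties` as a
// trait (backwards compatibility).
function extractTraits(metadata: { properties?: Record<string, any> }): Traits {
  const props = metadata.properties || {};
  if (props.traits && typeof props.traits === "object") {
    return props.traits as Traits;
  }
  return props as Traits;
}
```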
# Royalty Enforcement Specification
> An ARC to specify the methods and mechanisms to enforce Royalty payments as part of ASA transfers
## Abstract A specification to describe a set of methods that offer an API to enforce Royalty Payments to a Royalty Receiver given a policy describing the royalty shares, both on primary and secondary sales. This is an implementation of a specification, and other methods may be implemented in the same contract according to that specification. ## Motivation This ARC is defined to provide a consistent set of asset configurations and ABI methods that, together, enable a royalty payment to a Royalty Receiver. An example may include some music rights where the label, the artist, and any investors have some assigned royalty percentage that should be enforced on transfer. During the sale transaction, the appropriate royalty payments should be included or the transaction must be rejected. ## Specification The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in . - The name for the settings that define how royalty payments are collected. - The application that enforces the royalty payments given the Royalty Policy and performs transfers of the assets. - The account that may call administrative level methods against the Royalty Enforcer. - The account that receives the royalty payment. It can be any valid Algorand account. - The share of a payment that is due to the Royalty Receiver - The ASA that should have royalties enforced during a transfer. - A data structure stored in local state for the current owner representing the number of units of the asset being offered and the authorizing account for any transfer requests. - A third party marketplace may be any marketplace that implements the appropriate methods to initiate transfers.
### Royalty Policy ```ts interface RoyaltyPolicy { royalty_basis: number // The percentage of the payment due, specified in basis points (0-10,000) royalty_receiver: string // The address that should collect the payment } ``` A Royalty Share consists of a `royalty_receiver` that should receive a Royalty payment and a `royalty_basis` representing some share of the total payment amount. ### Royalty Enforcer The Royalty Enforcer is an instance of the contract, an Application, that controls the transfer of ASAs subject to the Royalty Policy. This is accomplished by exposing an interface defined as a set of allowing a grouped transaction call containing a payment and a request. ### Royalty Enforcer Administrator The Royalty Enforcer Administrator is the account that has privileges to call administrative actions against the Royalty Enforcer. If one is not set, the account that created the application **MUST** be used. To update the Royalty Enforcer Administrator, the method is called by the current administrator and passed the address of the new administrator. An implementation of this spec may choose how it wishes to enforce that the method is called by the administrator. ### Royalty Receiver The Royalty Receiver is a generic account that could be set to a Single Signature, a Multi Signature, a Smart Signature or even to another Smart Contract. The Royalty Receiver is then responsible for any further royalty distribution logic, making the Royalty Enforcement Specification more general and composable. ### Royalty Basis The Royalty Basis is a value representing the percentage of the payment made during a transfer that is due to the Royalty Receiver. The Royalty Basis **MUST** be specified in terms of basis points of the payment amount. ### Royalty Asset The Royalty Asset is an ASA subject to royalty payment collection and **MUST** be created with the .
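The basis-point arithmetic implied by `royalty_basis` can be sketched as a simple payment split. Rounding down on the royalty share is an assumption of this sketch, not something the specification mandates:

```typescript
// Split a payment according to a royalty_basis given in basis points
// (out of 10,000), so a 5% policy is royalty_basis = 500.
function splitPayment(
  paymentAmount: number,
  royaltyBasis: number,
): { royalty: number; seller: number } {
  if (royaltyBasis < 0 || royaltyBasis > 10_000) throw new Error("royalty_basis out of range");
  // Assumption of this sketch: round the royalty share down.
  const royalty = Math.floor((paymentAmount * royaltyBasis) / 10_000);
  return { royalty, seller: paymentAmount - royalty };
}
```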
> Because the protocol does not allow updating an address parameter after it’s been deleted, if the asset creator thinks they may want to modify them later, they must be set to some non-zero address. #### Asset Offer The Asset Offer is a data structure stored in the owner’s local state. It is keyed in local storage by the byte string representing the ASA ID. ```ts interface AssetOffer { auth_address: string // The address of a marketplace or account that may issue a transfer request offered_amount: number // The number of units being offered } ``` This concept is important to this specification because we use the clawback feature to transfer the assets. Without some signal that the current owner is willing to have their assets transferred, it may be possible to transfer the asset without their permission. In order for a transfer to occur, this field **MUST** be set and the parameters of the transfer request **MUST** match the value set. > A transfer matching the offer would require the transfer amount <= offered amount and that the transfer is sent by auth\_address. After the transfer is completed, this value **MUST** be wiped from the local state of the owner’s account. #### Royalty Asset Parameters The Clawback parameter **MUST** be set to the Application Address of the Royalty Enforcer. > Since the Royalty Enforcer relies on using the Clawback mechanism to perform the transfer, the Clawback should NEVER be set to the zero address. The Freeze parameter **MUST** be set to the Application Address of the Royalty Enforcer if `FreezeAddr != ZeroAddress`, else set to `ZeroAddress`. If the asset creator wants to allow an ASA to be Royalty Free after some conditions are met, it should be set to the Application Address The Manager parameter **MUST** be set to the Application Address of the Royalty Enforcer if `ManagerAddr != ZeroAddress`, else set to `ZeroAddress`.
If the asset creator wants to update the Freeze parameter, this should be set to the application address. The Reserve parameter **MAY** be set to anything. The `DefaultFrozen` **MUST** be set to true. ### Third Party Marketplace In order to support secondary sales on external markets, this spec was designed such that the Royalty Asset may be listed without transferring it from the current owner’s account. A Marketplace may call the transfer request as long as the address initiating the transfer has been set as the `auth_address` through the method in some previous transaction by the current owner. ### ABI Methods The following is a set of methods that conform to the specification meant to enable the configuration of a Royalty Policy and perform transfers. Any Inner Transactions that may be performed as part of the execution of the Royalty Enforcer application **SHOULD** set the fee to 0 and enforce fee payment through fee pooling by the caller. #### Set Administrator: *OPTIONAL* ```plaintext set_administrator( administrator: address, ) ``` Sets the administrator for the Royalty Enforcer contract. If this method is never called, the creator of the application **MUST** be considered the administrator. This method **SHOULD** have checks to ensure it is being called by the current administrator. The `administrator` parameter is the address of the account that should be set as the new administrator for this Royalty Enforcer application. #### Set Policy: *REQUIRED* ```plaintext set_policy( royalty_basis: uint64, royalty_receiver: account, ) ``` Sets the policy for any assets using this application as a Royalty Enforcer. The `royalty_basis` is the percentage for royalty payment collection, specified in basis points (e.g., 1% is 100). A Royalty Basis **SHOULD** be immutable; if an application call is made that would overwrite an existing value, it **SHOULD** fail. See for more details on how to handle this parameter’s mutability.
The `royalty_receiver` is the address of the account that should receive a partial share of the payment for any transfer of an asset subject to royalty collection. #### Set Payment Asset: *REQUIRED* ```plaintext set_payment_asset( payment_asset: asset, allowed: boolean, ) ``` The `payment_asset` argument represents the ASA ID that is acceptable for payment. The contract logic **MUST** opt into the asset specified in order to accept it as payment as part of a transfer. This method **SHOULD** have checks to ensure it is being called by the current administrator. The `allowed` argument is a boolean representing whether or not this asset is allowed. The Royalty Receiver **MUST** be opted into the full set of assets contained in this list of payment\_assets. > In the case that an account is not opted into an asset, any transfers where payment is specified for that asset will fail until the account opts into the asset or the policy is updated. #### Transfer: *REQUIRED* ```plaintext transfer_algo_payment( royalty_asset: asset, royalty_asset_amount: uint64, from: account, to: account, royalty_receiver: account, payment: pay, current_offer_amount: uint64, ) ``` And ```plaintext transfer_asset_payment( royalty_asset: asset, royalty_asset_amount: uint64, from: account, to: account, royalty_receiver: account, payment: axfer, payment_asset: asset, current_offer_amount: uint64, ) ``` Transfers the Asset after checking that the royalty policy is adhered to. This call **MUST** be sent by the `auth_address` specified by the current offer. There **MUST** be a royalty policy defined prior to attempting a transfer. There are two different method signatures specified, one for simple Algo payments and one for an Asset as payment. The appropriate method should be called depending on the circumstance. The `royalty_asset` is the ASA ID to be transferred. The `from` parameter is the account the ASA is transferred from. The `to` parameter is the account the ASA is transferred to.
The `royalty_receiver` parameter is the account that collects the royalty payment. The `royalty_asset_amount` parameter is the number of units of this ASA ID to transfer. The amount **MUST** be less than or equal to the amount offered by the `from` account. The `payment` parameter is a reference to the transaction that is transferring some asset (ASA or Algos) from the buyer to the Application Address of the Royalty Enforcer. The `payment_asset` parameter is specified in the case that the payment is being made with some ASA rather than with Algos. It **MUST** match the Asset ID of the AssetTransfer payment transaction. The `current_offer_amount` parameter is the current amount of the Royalty Asset offered by the `from` account. The transfer call **SHOULD** be part of a group with a size of 2 (payment/asset transfer + app call). > See for details on how this check may be circumvented. Prior to each transfer, the Royalty Enforcer **SHOULD** assert that the Seller (the `from` parameter) and the Buyer (the `to` parameter) have blank or unset `AuthAddr`. The reasoning for this check is described in . It is purposely left to the implementor to decide if it should be checked. #### Offer: *REQUIRED* ```plaintext offer( royalty_asset: asset, royalty_asset_amount: uint64, auth_address: account, offered_amount: uint64, offered_auth_addr: account, ) ``` Flags the asset as transferrable and sets the address that may initiate the transfer request. The `royalty_asset` is the ASA ID that is being offered. The `royalty_asset_amount` is the number of units of the ASA ID that are offered. The account making this call **MUST** have at least this amount. The `auth_address` is the address that may initiate a . > This address may be any valid address in the Algorand network including an Application Account’s address. The `offered_amount` is the number of units of the ASA ID that are currently offered. In the case that this is an update, it should be the amount being replaced.
In the case that this is a new offer it should be 0. The `offered_auth_address` is the address that may currently initiate a . In the case that this is an update, it should be the address being replaced. In the case that this is a new offer it should be the zero address. If any transfer is initiated by an address that is *not* listed as the `auth_address` for this asset ID from this account, the transfer **MUST** be rejected. If this method is called when there is an existing entry for the same `royalty_asset`, the call is treated as an update. In the case of an update, the contract **MUST** compare the `offered_amount` and `offered_auth_addr` with what is currently set. If the values differ, the call **MUST** be rejected. This requirement is meant to prevent a race condition where the `auth_address` has a `transfer` accepted before the `offer`-ing account sees the update. In that case the offering account might try to offer more than they would otherwise want to. An example is offered in . To rescind an offer, this method is called with 0 as the new offered amount. If a or is called successfully, the `offer` **SHOULD** be updated or deleted from local state. Exactly how to update the offer is left to the implementer. In the case of a partially filled offer, the amount may be updated to reflect some new amount that represents `offered_amount - amount transferred`, or the offer may be deleted completely. #### Royalty Free Move: *OPTIONAL* ```plaintext royalty_free_move( royalty_asset: asset, royalty_asset_amount: uint64, from: account, to: account, offered_amount: uint64, ) ``` Moves an asset to the new address without collecting any royalty payment. Prior to this method being called the current owner **MUST** offer their asset to be moved. The `auth_address` of the offer **SHOULD** be set to the address of the Royalty Enforcer Administrator, and calling this method **SHOULD** have checks to ensure it is being called by the current administrator.
> This may be useful in the case of a marketplace where the NFT must be placed in some escrow account. Any logic may be used to validate this is an authorized transfer. The `royalty_asset` is the asset being transferred without applying the Royalty Enforcement logic. The `royalty_asset_amount` is the number of units of this ASA ID that should be moved. The `from` parameter is the current owner of the asset. The `to` parameter is the intended receiver of the asset. The `offered_amount` is the number of units of this asset currently offered. This value **MUST** be greater than or equal to the amount being transferred. The `offered_amount` value is passed to prevent the race or attack described in . ### Read Only Methods Three methods are specified here as `read-only` as defined in . #### Get Policy: *REQUIRED* ```plaintext get_policy()(address,uint64) ``` Gets the current setting for this Royalty Enforcer. The return value is a tuple of type `(address,uint64)`, where the `address` is the and the `uint64` is the . #### Get Offer: *REQUIRED* ```plaintext get_offer( royalty_asset: asset, from: account, )(address,uint64) ``` Gets the current for a given asset as set by its owner. The `royalty_asset` parameter is the asset id of the that has been offered. The `from` parameter is the account that placed the offer. The return value is a tuple of type `(address,uint64)`, where `address` is the authorizing address that may make a transfer request and the `uint64` is the amount offered. #### Get Administrator: *OPTIONAL*, unless `set_administrator` is implemented, in which case it is *REQUIRED* ```plaintext get_administrator()address ``` Gets the set for this Royalty Enforcer.
The return value is of type `address` and represents the address of the account that may call administrative methods for this Royalty Enforcer application ### Storage While the details of storage are described here, `readonly` methods are specified to provide callers with a method to retrieve the information without having to write parsing logic. The exact location and encoding of these fields are left to the implementer. #### Global Storage The parameters that describe a policy are stored in Global State. The relevant keys are: `royalty_basis` - The percentage specified in basis points of the payment `royalty_receiver` - The account that should be paid the royalty Another key is used to store the current administrator account: `administrator` - The account that is allowed to make administrative calls to this Royalty Enforcer application #### Local Storage For an offered Asset, the authorizing address and amount offered should be stored in a Local State field for the account offering the Asset. 
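The storage layout described above can be sketched as a minimal in-memory model, including the stale-offer check the Offer method requires. All names and structures here are illustrative assumptions; the spec deliberately leaves the exact keys and encodings to the implementer.

```python
# Illustrative model of ARC-18 storage: global policy/admin state plus
# per-account local offers. Not normative; keys and encodings are
# implementation-defined per the spec.
from dataclasses import dataclass, field

ZERO_ADDRESS = "A" * 58  # placeholder standing in for the Algorand zero address


@dataclass
class RoyaltyEnforcerState:
    # Global Storage
    royalty_basis: int = 0               # basis points of each payment
    royalty_receiver: str = ZERO_ADDRESS
    administrator: str = ZERO_ADDRESS
    # Local Storage: (offering account, royalty asset id) -> (auth_addr, amount)
    offers: dict = field(default_factory=dict)

    def set_offer(self, account, asset_id, auth_addr, amount,
                  prev_auth, prev_amount):
        # The update MUST be rejected when the caller's view of the current
        # offer differs from what is stored (prevents the race described
        # in the Offer section).
        current = self.offers.get((account, asset_id), (ZERO_ADDRESS, 0))
        if current != (prev_auth, prev_amount):
            raise ValueError("stale offer: current values differ")
        if amount == 0:
            self.offers.pop((account, asset_id), None)  # rescind the offer
        else:
            self.offers[(account, asset_id)] = (auth_addr, amount)
```

A transfer handler would read the `(auth_addr, amount)` tuple for the seller and reject calls from any other address, then shrink or delete the entry after a successful transfer.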
### Full ABI Spec ```json { "name": "ARC18", "methods": [ { "name": "set_policy", "args": [ { "type": "uint64", "name": "royalty_basis" }, { "type": "address", "name": "royalty_receiver" } ], "returns": { "type": "void" }, "desc": "Sets the royalty basis and royalty receiver for this royalty enforcer" }, { "name": "set_administrator", "args": [ { "type": "address", "name": "new_admin" } ], "returns": { "type": "void" }, "desc": "Sets the administrator for this royalty enforcer" }, { "name": "set_payment_asset", "args": [ { "type": "asset", "name": "payment_asset" }, { "type": "bool", "name": "is_allowed" } ], "returns": { "type": "void" }, "desc": "Triggers the contract account to opt in or out of an asset that may be used for payment of royalties" }, { "name": "set_offer", "args": [ { "type": "asset", "name": "royalty_asset" }, { "type": "uint64", "name": "royalty_asset_amount" }, { "type": "address", "name": "auth_address" }, { "type": "uint64", "name": "prev_offer_amt" }, { "type": "address", "name": "prev_offer_auth" } ], "returns": { "type": "void" }, "desc": "Flags that an asset is offered for sale and sets address authorized to submit the transfer" }, { "name": "transfer_asset_payment", "args": [ { "type": "asset", "name": "royalty_asset" }, { "type": "uint64", "name": "royalty_asset_amount" }, { "type": "account", "name": "owner" }, { "type": "account", "name": "buyer" }, { "type": "account", "name": "royalty_receiver" }, { "type": "axfer", "name": "payment_txn" }, { "type": "asset", "name": "payment_asset" }, { "type": "uint64", "name": "offered_amt" } ], "returns": { "type": "void" }, "desc": "Transfers an Asset from one account to another and enforces royalty payments. This instance of the `transfer` method requires an AssetTransfer transaction and an Asset to be passed corresponding to the Asset id of the transfer transaction." 
}, { "name": "transfer_algo_payment", "args": [ { "type": "asset", "name": "royalty_asset" }, { "type": "uint64", "name": "royalty_asset_amount" }, { "type": "account", "name": "owner" }, { "type": "account", "name": "buyer" }, { "type": "account", "name": "royalty_receiver" }, { "type": "pay", "name": "payment_txn" }, { "type": "uint64", "name": "offered_amt" } ], "returns": { "type": "void" }, "desc": "Transfers an Asset from one account to another and enforces royalty payments. This instance of the `transfer` method requires a PaymentTransaction for payment in algos" }, { "name": "royalty_free_move", "args": [ { "type": "asset", "name": "royalty_asset" }, { "type": "uint64", "name": "royalty_asset_amount" }, { "type": "account", "name": "owner" }, { "type": "account", "name": "receiver" }, { "type": "uint64", "name": "offered_amt" } ], "returns": { "type": "void" }, "desc": "Moves the asset passed from one account to another" }, { "name": "get_offer", "args": [ { "type": "uint64", "name": "royalty_asset" }, { "type": "account", "name": "owner" } ], "returns": { "type": "(address,uint64)" }, "read-only":true }, { "name": "get_policy", "args": [], "returns": { "type": "(address,uint64)" }, "read-only":true }, { "name": "get_administrator", "args": [], "returns": { "type": "address" }, "read-only":true } ], "desc": "ARC18 Contract providing an interface to create and enforce a royalty policy over a given ASA. 
See https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0018.md for details.", "networks": {} } ``` #### Example Flow for a Marketplace ```plaintext Let Alice be the creator of the Royalty Enforcer and Royalty Asset Let Alice also be the Royalty Receiver Let Bob be the Royalty Asset holder Let Carol be a buyer of a Royalty Asset ``` ```mermaid sequenceDiagram Alice->>Royalty Enforcer: set_policy with Royalty Basis and Royalty Receiver Alice->>Royalty Enforcer: set_payment_asset with any asset that should be accepted as payment par List Bob->>Royalty Enforcer: offer Bob->>Marketplace: list end par Buy Carol->>Marketplace: buy Marketplace->>Royalty Enforcer: transfer Bob->>Carol: clawback issued by Royalty Enforcer Royalty Enforcer->>Alice: royalty payment end par Delist Bob->>Royalty Enforcer: offer 0 Bob->>Marketplace: delist end ``` ### Metadata The metadata associated with an asset **SHOULD** conform to any ARC that supports an additional field in the `properties` section specifying the specific information relevant for off-chain applications like wallets or Marketplace dApps. The metadata **MUST** be immutable. The fields that should be specified are the `application-id` as described in and `rekey-checked` which describes whether or not this application implements the rekey checks during transfers. Example: ```js //... "properties":{ //... "arc-20":{ "application-id":123 }, "arc-18":{ "rekey-checked":true // Defaults to false if not set, see *Rekey to swap* below for reasoning } } //... ``` ## Rationale The motivation behind defining a Royalty Enforcement specification is the need to guarantee that a portion of a payment is received by a designated royalty collector on sale of an asset. Current royalty implementations are either platform specific or are only honored when an honest seller complies with them, allowing for the exchange of an asset without necessarily paying the royalties.
The use of a smart contract as a clawback address is a guaranteed way to know an asset transfer is only ever made when certain conditions are met, or made in conjunction with additional transactions. The Royalty Enforcer is responsible for the calculations required in dividing up and dispensing the payments to the respective parties. The present specification does not impose any restriction on the Royalty Receiver distribution logic (if any), which could be achieved through a Multi Signature account, a Smart Signature or even through another Smart Contract. On Ethereum the EIP-2981 standard allows for ERC-721 and ERC-1155 interfaces to signal a royalty amount to be paid, however this is not enforced and requires marketplaces to implement and adhere to it. ## Backwards Compatibility Existing ASAs with an unset clawback address or unset manager address (in case the clawback address is not the application account of a smart contract that is updatable - which is most likely the case) will be incompatible with this specification. ## Reference Implementation ## Security Considerations There are a number of security considerations that implementers and users should be aware of. *Royalty policy mutability* The immutability of a royalty basis is important to consider since mutability introduces the possibility for a situation where, after an initial sale, the royalty policy is updated from 1% to 100%, for example. This would make any further sales have the full payment amount sent to the royalty recipient and the seller would receive nothing. This specification is written with the recommendation that the royalty policy **SHOULD** be immutable. This is not a **MUST** so that an implementation may allow the royalty basis to decrease over time. Caution should be taken by users and implementers when evaluating how to implement the exact logic.
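To make the stakes of the mutability discussion concrete, royalty amounts follow from the basis points stored in the policy. Here is a hypothetical split function (the name and rounding rule are assumptions; the spec leaves the exact arithmetic to the implementer) applied to the 1% vs 100% scenario above:

```python
# Hypothetical royalty split using a basis-point royalty_basis as stored in
# the policy (10000 basis points = 100%). Integer division mirrors AVM
# uint64 arithmetic; the exact rounding rule is implementation-defined.
def split_payment(payment_amount: int, royalty_basis: int):
    assert 0 <= royalty_basis <= 10_000
    royalty = payment_amount * royalty_basis // 10_000
    return royalty, payment_amount - royalty

# A 1% policy on a 1,000,000 microAlgo sale leaves the seller 990,000;
# if the basis were later raised to 100%, the seller would receive 0.
print(split_payment(1_000_000, 100))
print(split_payment(1_000_000, 10_000))
```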
*Spoofed payment* While it’s possible to enforce the group size limit, it is still possible to circumvent the royalty enforcement logic by simply making an Inner Transaction application call with the appropriate parameters and a small payment, then including the “real” payment in the same outer group. The counter-party risk remains the same since the inner transaction is atomic with the outer transactions. In addition, it is always possible to circumvent the royalty enforcement logic by using an escrow account in the middle: * Alice wants to sell asset A to Bob for 1M USDC. * Alice and Bob create an escrow ESCROW (smart signature). * Alice sends A for 1 μAlgo to ESCROW. * Bob sends 1M USDC to ESCROW. * Then ESCROW sends 1M USDC to Alice and sends A to Bob for 1 μAlgo. Some ways to prevent a small royalty payment and a larger payment in a later transaction of the same group might be by using an `allow` list that is checked against the `auth_addr` of the offer call. The `allow` list would be comprised of known and trusted marketplaces that do not attempt to circumvent the royalty policy. The `allow` list may be implicit as well, by transferring a specific asset to the `auth_addr` as frozen; on `offer` the balance must be > 0 to allow the `auth_addr` to be persisted. The exact logic that should determine *if* a transfer should be allowed is left to the implementer. *Rekey to swap* Rekeying an account can also be seen as circumventing this logic since there is no counter-party risk given that a rekey can be grouped with a payment. We address this by suggesting the `auth_addr` on the buyer and seller accounts are both set to the zero address. *Offer for unintended clawback* Because we use the clawback mechanism to move the asset, we need to be sure that the current owner is actually interested in making the sale. We address this by requiring the method is called to set an authorized address OR that the AssetSender is the one making the application call.
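The group-level checks discussed above (group size, payment destination, authorized caller) can be sketched as a small validation routine. The field names and dict shapes are illustrative assumptions for the sketch, not part of the ABI:

```python
# Illustrative validation of a transfer group: the spec suggests the
# transfer SHOULD sit in a group of size 2 (payment + app call), with the
# payment going to the Royalty Enforcer's application address and the app
# call issued by the offer's auth_address. Field names are assumptions.
def validate_transfer_group(group, app_address, offer_auth):
    if len(group) != 2:
        raise ValueError("expected group of size 2: payment + app call")
    payment, app_call = group
    if payment["receiver"] != app_address:
        raise ValueError("payment must be sent to the application address")
    if app_call["sender"] != offer_auth:
        raise ValueError("transfer must be initiated by the offer's auth_address")
    return True
```

As the Spoofed payment section notes, this check alone is not sufficient, since an inner-transaction app call can satisfy it while the real payment travels elsewhere in the outer group.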
*Offer double spend* If the method did not require the current value to be passed, a possible attack or race condition could be taken advantage of: * There’s an open offer for N. * The owner decides to lower it to M, where 0 < M < N. * I see that and decide to “frontrun” the second tx, first getting N \[here the ledger should apply the change of offer, which overwrites the previous value — now 0 — with M], then I can get another M of the asset. *Mutable asset parameters* If the ASA has its manager parameter set, it is possible to change the other address parameters. Namely, the clawback and freeze roles could be changed to allow an address that is *not* the Royalty Enforcer’s application address. For that reason the manager **MUST** be set to the zero address or to the Royalty Enforcer’s address. *Compatibility of existing ASAs* In the case of an ASA, the manager is the account that may issue `acfg` transactions to update metadata or to change the reserve address. For the purposes of this spec the manager **MUST** be the application address, so the logic to issue appropriate `acfg` transactions should be included in the application logic if there is a need to update them. > When evaluating whether or not an existing ASA may be compatible with this spec, note that the `clawback` address needs to be set to the application address of the Royalty Enforcer. The `freeze` address and `manager` address may be empty or, if set, must be the application address. If these addresses aren’t set correctly, the royalty enforcer will not be able to issue the transactions required and there may be security considerations. The `reserve` address has no requirements in this spec, so ASAs should have no issue assuming the rest of the addresses are set correctly. ## Copyright Copyright and related rights waived via .
# Templating of NFT ASA URLs for mutability
> Templating mechanism of the URL so that changeable data in an asset can be substituted by a client, providing a mutable URL.
## Abstract This ARC describes a template substitution for URLs in ASAs, initially for ipfs\:// scheme URLs, allowing mutable CID replacement in rendered URLs. The proposed template-XXX scheme has substitutions like: ```plaintext template-ipfs://{ipfscid:<version>:<multicodec>:<field name>:<hash type>}[/...] ``` This will allow modifying the 32-byte ‘Reserve address’ in an ASA to represent a new IPFS content-id hash. Changing the reserve address via an asset-config transaction will be all that is needed to point an ASA URL to new IPFS content. The client reading this URL will compose a fully formed IPFS Content-ID based on the version, multicodec, and hash arguments provided in the ipfscid substitution. ## Motivation While immutability for many NFTs is appropriate (see link), there are cases where some type of mutability is desired for NFT metadata and/or digital media. The data being referenced by the pointer should be immutable, but the pointer may be updated to provide a kind of mutability. The data being referenced may be of any size. Algorand ASAs support mutation of several parameters, namely the role address fields (Manager, Clawback, Freeze, and Reserve addresses), unless previously cleared. These are changed via an asset-config transaction from the Manager account. An asset-config transaction may include a note, but it is limited to 1KB and accessing this value requires clients to use an indexer to iterate/retrieve the values. Of the parameters that are mutable, the Reserve address is somewhat distinct in that it is not used for anything directly as part of the protocol. It is used solely for determining what is in/out of circulation (by subtracting supply from that held by the reserve address). With a (pure) NFT, the Reserve address is irrelevant as it is a 1 of 1 unit. Thus, the Reserve address may be repurposed as a 32-byte ‘bitbucket’.
These 32 bytes can, for example, hold a SHA2-256 hash uniquely referencing the desired content for the ASA (ARC-3-like metadata, for example). Using the reserve address in this way means that what an ASA ‘points to’ for metadata can be changed with a single asset-config transaction, changing only the 32 bytes of the reserve address. The new value is accessible via even non-archival nodes with a single call to the `/v2/assets/xxx` REST endpoint. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . This proposal specifies a method to provide mutability for IPFS hosted content-ids. The intention is that FUTURE ARCs could define additional template substitutions, but this is not meant to be a kitchen sink of templates, only to establish a possible baseline of syntax. An indication that this ARC is in use is defined by an ASA URL’s “scheme” having the prefix “**template-**”. An Asset conforming to this specification **MUST** have: 1. **URL Scheme of “template-ipfs”** The URL of the asset must be of the form: ```plaintext template-ipfs://(...) ``` > The ipfs\:// scheme is already somewhat of a meta scheme in that clients interpret the ipfs scheme as referencing an IPFS CID (version 0/base58 or 1/base32 currently) followed by an optional path within certain types of IPFS DAG content (IPLD CAR content for example). The clients take the CID and use it to fetch content from the IPFS network directly via IPFS nodes, or via various IPFS gateways (…, Pinata, etc.). 2.
**An “ipfscid” *template* argument in place of the normal CID.** The format of templates is `{<template type>:<arg>:<arg>:...}`. The ipfscid template definition is based on properties within the IPFS CID spec: ```plaintext ipfscid:<version>:<multicodec>:<field name>:<hash type> ``` > The intent is to recompose a complete CID based on the content-hash contained within the 32-byte reserve address, but using the correct multicodec content type, ipfs content-id version, and hash type to match how the asset creator will seed the IPFS content. If a single file is added using the ‘ipfs’ CLI via `ipfs add --cid-version=1 metadata.json` then the resulting content will be encoded using the ‘raw’ multicodec type. If a directory is added containing one or more files, then it will be encoded using the dag-pb multicodec. CAR content will also be dag-pb. Thus, based on the method used to post content to IPFS, the ipfscid template should match. The parameters to the ipfscid template are: 1. `<version>` **MUST** be a valid IPFS CID version. Client implementations **MUST** support ‘0’ or ‘1’ and **SHOULD** support future versions. 2. `<multicodec>` **MUST** be an IPFS multicodec name. Client implementations **MUST** support ‘raw’ or ‘dag-pb’. Other codecs **SHOULD** be supported but are beyond the scope of this proposal. 3. `<field name>` **MUST** be ‘reserve’. > This is to represent that the reserve address is used for the 32-byte hash. It is specified here so future iterations of the specification may allow other fields or syntaxes to reference other mutable field types. 4. `<hash type>` **MUST** be the multihash hash function type (as defined in ). Client implementations **MUST** support ‘sha2-256’ and **SHOULD** support future hash types when introduced by IPFS. > IPFS may add future versions of the cid spec, and add additional multicodec types or hash types. Implementations **SHOULD** use IPFS libraries where possible that accept multicodec and hash types as named values and allow a CID to be composed generically. ### Examples > This whole section is non-normative.
* ASA URL: `template-ipfs://{ipfscid:0:dag-pb:reserve:sha2-256}/arc3.json` * ASA URL: `template-ipfs://{ipfscid:1:raw:reserve:sha2-256}` * ASA URL: `template-ipfs://{ipfscid:1:dag-pb:reserve:sha2-256}/metadata.json` #### Deployed Testnet Example An example was pushed to TestNet, converting from an existing ARC-3 MainNet ASA (asset ID 560421434, ) With IPFS URL: ```plaintext ipfs://QmQZyq4b89RfaUw8GESPd2re4hJqB8bnm4kVHNtyQrHnnK ``` The TestNet ASA was minted with the URL: ```plaintext template-ipfs://{ipfscid:0:dag-pb:reserve:sha2-256} ``` as the original CID is a V0 / dag-pb CID. A helpful link to ‘visualize’ CIDs, and this specific id, is . Using the example encoding implementation results in a virtual ‘reserve address’ of ```plaintext EEQYWGGBHRDAMTEVDPVOSDVX3HJQIG6K6IVNR3RXHYOHV64ZWAEISS4CTI ``` which is the address (with checksum) corresponding to the 32-byte public key with the hexadecimal value: ```plaintext 21218B18C13C46064C951BEAE90EB7D9D3041BCAF22AD8EE373E1C7AFB99B008 ``` (The transformation from a 32-byte public key to an address can be found on the developer website .) The resulting ASA can be seen on . Using the forked , with testnet selected, and the /nft/66753108 URL, the browser will display the original content as-is, using only the Reserve address as the source of the content hash. ### Interactions with ARC-3 This ARC is compatible with with the following notable exception: the ASA Metadata Hash (`am`) is no longer necessarily a valid hash of the JSON Metadata File pointed to by the URL. As such, clients cannot be strictly compatible with both ARC-3 and . An ARC-3 and ARC-19 client **SHOULD** ignore validation of the ASA Metadata Hash when the Asset URL follows ARC-19. ARC-3 clients **SHOULD** clearly indicate to the user when displaying an ARC-19 ASA since, contrary to a strict ARC-3 ASA, the asset may arbitrarily change over time (even after being bought). ASAs that follow both ARC-3 and ARC-19 **MUST NOT** use the extra metadata hash (from ARC-3).
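The digest-to-address transformation in the Deployed Testnet Example above can be reproduced with a short stdlib-only sketch: Algorand addresses are the base32 encoding of the 32-byte public key followed by a 4-byte checksum taken from the tail of its SHA-512/256 hash, with the `=` padding stripped. The function name is illustrative:

```python
import base64
import hashlib


def digest_to_algorand_address(digest32: bytes) -> str:
    # Algorand address = base32(pubkey || last 4 bytes of
    # SHA-512/256(pubkey)), with '=' padding removed (58 characters).
    assert len(digest32) == 32
    checksum = hashlib.new("sha512_256", digest32).digest()[-4:]
    return base64.b32encode(digest32 + checksum).decode().rstrip("=")


# The 32-byte hash from the TestNet example above.
digest = bytes.fromhex(
    "21218B18C13C46064C951BEAE90EB7D9D3041BCAF22AD8EE373E1C7AFB99B008"
)
print(digest_to_algorand_address(digest))
# → EEQYWGGBHRDAMTEVDPVOSDVX3HJQIG6K6IVNR3RXHYOHV64ZWAEISS4CTI
```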
## Rationale See the motivation section above for the general rationale. ### Backwards Compatibility The ‘template-’ prefix of the scheme is intended to break clients reading these ASA URLs outright. Clients interpreting these URLs as-is would likely yield unusual errors. Code checking for an explicit ‘ipfs’ scheme, for example, will not see this as compatible with any of the default processing and should treat the URL as if it were simply unknown/empty. ## Reference Implementation ### Encoding #### Go implementation ```go import ( "fmt" "github.com/algorand/go-algorand-sdk/types" "github.com/ipfs/go-cid" "github.com/multiformats/go-multihash" ) // ... func ReserveAddressFromCID(cidToEncode cid.Cid) (string, error) { decodedMultiHash, err := multihash.Decode(cidToEncode.Hash()) if err != nil { return "", fmt.Errorf("failed to decode ipfs cid: %w", err) } return types.EncodeAddress(decodedMultiHash.Digest) } // .... ``` ### Decoding #### Go implementation ```go import ( "errors" "fmt" "regexp" "strings" "github.com/algorand/go-algorand-sdk/types" "github.com/ipfs/go-cid" "github.com/multiformats/go-multicodec" "github.com/multiformats/go-multihash" ) var ( ErrUnknownSpec = errors.New("unsupported template-ipfs spec") ErrUnsupportedField = errors.New("unsupported ipfscid field, only reserve is currently supported") ErrUnsupportedCodec = errors.New("unknown multicodec type in ipfscid spec") ErrUnsupportedHash = errors.New("unknown hash type in ipfscid spec") ErrInvalidV0 = errors.New("cid v0 must always be dag-pb and sha2-256 codec/hash type") ErrHashEncoding = errors.New("error encoding new hash") templateIPFSRegexp = regexp.MustCompile(`template-ipfs://{ipfscid:(?P<version>[01]):(?P<codec>[a-z0-9\-]+):(?P<field>[a-z0-9\-]+):(?P<hash>[a-z0-9\-]+)}`) ) func ParseASAUrl(asaUrl string, reserveAddress types.Address) (string, error) { matches := templateIPFSRegexp.FindStringSubmatch(asaUrl) if matches == nil { if strings.HasPrefix(asaUrl, "template-ipfs://") { return "", ErrUnknownSpec } return asaUrl, nil
} if matches[templateIPFSRegexp.SubexpIndex("field")] != "reserve" { return "", ErrUnsupportedField } var ( codec multicodec.Code multihashType uint64 hash []byte err error cidResult cid.Cid ) if err = codec.Set(matches[templateIPFSRegexp.SubexpIndex("codec")]); err != nil { return "", ErrUnsupportedCodec } multihashType = multihash.Names[matches[templateIPFSRegexp.SubexpIndex("hash")]] if multihashType == 0 { return "", ErrUnsupportedHash } hash, err = multihash.Encode(reserveAddress[:], multihashType) if err != nil { return "", ErrHashEncoding } if matches[templateIPFSRegexp.SubexpIndex("version")] == "0" { if codec != multicodec.DagPb { return "", ErrInvalidV0 } if multihashType != multihash.SHA2_256 { return "", ErrInvalidV0 } cidResult = cid.NewCidV0(hash) } else { cidResult = cid.NewCidV1(uint64(codec), hash) } return fmt.Sprintf("ipfs://%s", strings.ReplaceAll(asaUrl, matches[0], cidResult.String())), nil } ``` #### Typescript Implementation A modified version of a simple ARC-3 viewer can be found , specifically the code segment at . This is a fork of . ## Security Considerations There should be no specific security issues beyond those of any client accessing any remote content and the risks linked to assets changing (even after the ASA is bought). The latter is handled in the section “Interactions with ARC-3” above. Regarding the former, URLs within ASAs could point to malicious content, whether that is an http/https link or whether fetched through ipfs protocols or ipfs gateways. As the template changes nothing other than the resulting URL and defines nothing more than the generation of an IPFS CID hash value, no security concerns derived from this specific proposal are known. ## Copyright Copyright and related rights waived via .
# Smart ASA
> An ARC for an ASA controlled by an Algorand Smart Contract
## Abstract A “Smart ASA” is an Algorand Standard Asset (ASA) controlled by a Smart Contract that exposes methods to create, configure, transfer, freeze, and destroy the asset. This ARC defines the ABI interface of such a Smart Contract, the required metadata, and suggests a reference implementation. ## Motivation The Algorand Standard Asset (ASA) is an excellent building block for on-chain applications. It is battle-tested and widely supported by SDKs, wallets, and dApps. However, the ASA lacks flexibility and configurability. For instance, once issued, it can’t be re-configured (its unit name, decimals, maximum supply). Also, it is freely transferable (unless frozen). This prevents developers from specifying additional business logic to be checked while transferring it (think of royalties or vesting). Enforcing transfer conditions requires freezing the asset and transferring it through a clawback operation, which results in a process that is opaque to users and wallets and a bad experience for users. The Smart ASA defined by this ARC extends the ASA to increase its expressiveness and flexibility. By introducing this as a standard, both developers and users (marketplaces, wallets, dApps, SDKs, etc.) can confidently and consistently recognize Smart ASAs and adjust their flows and user experiences accordingly. ## Specification The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in . The following sections describe: * The ABI interface for a controlling Smart Contract (the Smart Contract that controls a Smart ASA). * The metadata required to denote a Smart ASA and define the association between an ASA and its controlling Smart Contract. ### ABI Interface The ABI interface specified here draws inspiration from the transaction reference of an Algorand Standard Asset (ASA).
To provide a unified and familiar interface between the Algorand Standard Asset and the Smart ASA, method names and parameters have been adapted to the ABI types but left otherwise unchanged. #### Asset Creation ```json { "name": "asset_create", "args": [ { "type": "uint64", "name": "total" }, { "type": "uint32", "name": "decimals" }, { "type": "bool", "name": "default_frozen" }, { "type": "string", "name": "unit_name" }, { "type": "string", "name": "name" }, { "type": "string", "name": "url" }, { "type": "byte[]", "name": "metadata_hash" }, { "type": "address", "name": "manager_addr" }, { "type": "address", "name": "reserve_addr" }, { "type": "address", "name": "freeze_addr" }, { "type": "address", "name": "clawback_addr" } ], "returns": { "type": "uint64" } } ``` Calling `asset_create` creates a new Smart ASA and returns the identifier of the ASA. The describes its required properties. > Upon a call to `asset_create`, a reference implementation SHOULD: > > * Mint an Algorand Standard Asset (ASA) that MUST specify the properties defined in the . In addition: > > * The `manager`, `reserve` and `freeze` addresses SHOULD be set to the account of the controlling Smart Contract. > * The remaining fields are left to the implementation, which MAY set `total` to `2 ** 64 - 1` to enable dynamically increasing the max circulating supply of the Smart ASA. > * `name` and `unit_name` MAY be set to `SMART-ASA` and `S-ASA`, to denote that this ASA is Smart and has a controlling application. > > * Persist the `total`, `decimals`, `default_frozen`, etc. fields for later use/retrieval. > > * Return the ID of the created ASA. > > It is RECOMMENDED for calls to this method to be permissioned, e.g. to only approve transactions issued by the controlling Smart Contract creator. 
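For reference, ARC-4 app calls identify a method by a 4-byte selector derived from its signature string, which can be assembled from the `asset_create` ABI declaration above. A stdlib-only sketch (the variable names are illustrative; ARC-4 defines the selector as the first 4 bytes of the SHA-512/256 hash of the signature):

```python
import hashlib

# ARC-4 method selector: first 4 bytes of SHA-512/256 over the method
# signature "name(argtypes)returntype". The signature below is assembled
# from the asset_create ABI JSON declared above.
signature = (
    "asset_create(uint64,uint32,bool,string,string,string,byte[],"
    "address,address,address,address)uint64"
)
selector = hashlib.new("sha512_256", signature.encode()).digest()[:4]
print(selector.hex())  # 4-byte selector, prepended to the app call args
```

SDKs such as py-algorand-sdk compute this for you from the ABI JSON; the sketch only shows what the selector is made of.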
#### Asset Configuration ```json [ { "name": "asset_config", "args": [ { "type": "asset", "name": "config_asset" }, { "type": "uint64", "name": "total" }, { "type": "uint32", "name": "decimals" }, { "type": "bool", "name": "default_frozen" }, { "type": "string", "name": "unit_name" }, { "type": "string", "name": "name" }, { "type": "string", "name": "url" }, { "type": "byte[]", "name": "metadata_hash" }, { "type": "address", "name": "manager_addr" }, { "type": "address", "name": "reserve_addr" }, { "type": "address", "name": "freeze_addr" }, { "type": "address", "name": "clawback_addr" } ], "returns": { "type": "void" } }, { "name": "get_asset_config", "readonly": true, "args": [{ "type": "asset", "name": "asset" }], "returns": { "type": "(uint64,uint32,bool,string,string,string,byte[],address,address,address,address)", "desc": "`total`, `decimals`, `default_frozen`, `unit_name`, `name`, `url`, `metadata_hash`, `manager_addr`, `reserve_addr`, `freeze_addr`, `clawback_addr`" } } ] ``` Calling `asset_config` configures an existing Smart ASA. > Upon a call to `asset_config`, a reference implementation SHOULD: > > * Fail if `config_asset` does not correspond to an ASA controlled by this smart contract. > * Succeed iff the `sender` of the transaction corresponds to the `manager_addr` that was previously persisted for `config_asset` by a previous call to this method or, if this method was never called, by `asset_create`. > * Update the persisted `total`, `decimals`, `default_frozen`, etc. fields for later use/retrieval. > > The business logic associated with the update of the other parameters is left to the implementation. An implementation that maximizes similarities with ASAs SHOULD NOT allow modifying the `clawback_addr` or `freeze_addr` after they have been set to the special value `ZeroAddress`. > > The implementation MAY provide flexibility on the fields of an ASA that cannot be updated after initial configuration.
For instance, it MAY update the `total` parameter to enable minting of new units or restricting the maximum supply; when doing so, the implementation SHOULD ensure that the updated `total` is not lower than the current circulating supply of the asset. Calling `get_asset_config` reads and returns the `asset`’s configuration as specified in: * The most recent invocation of `asset_config`; or * if `asset_config` was never invoked for `asset`, the invocation of `asset_create` that originally created it. > Upon a call to `get_asset_config`, a reference implementation SHOULD: > > * Fail if `asset` does not correspond to an ASA controlled by this smart contract (see `asset_config`). > * Return `total`, `decimals`, `default_frozen`, `unit_name`, `name`, `url`, `metadata_hash`, `manager_addr`, `reserve_addr`, `freeze_addr`, `clawback` as persisted by `asset_create` or `asset_config`. #### Asset Transfer ```json { "name": "asset_transfer", "args": [ { "type": "asset", "name": "xfer_asset" }, { "type": "uint64", "name": "asset_amount" }, { "type": "account", "name": "asset_sender" }, { "type": "account", "name": "asset_receiver" } ], "returns": { "type": "void" } } ``` Calling `asset_transfer` transfers a Smart ASA. > Upon a call to `asset_transfer`, a reference implementation SHOULD: > > * Fail if `xfer_asset` does not correspond to an ASA controlled by this smart contract. > > * Succeed if: > > * the `sender` of the transaction is the `asset_sender` and > * `xfer_asset` is not in a frozen state (see ) and > * `asset_sender` and `asset_receiver` are not in a frozen state (see ) > > * Succeed if the `sender` of the transaction corresponds to the `clawback_addr`, as persisted by the controlling Smart Contract. This enables clawback operations on the Smart ASA. > > Internally, the controlling Smart Contract SHOULD issue a clawback inner transaction that transfers the `asset_amount` from `asset_sender` to `asset_receiver`. 
The inner transaction will fail on the usual conditions (e.g. not enough balance). > > Note that the method interface does not specify `asset_close_to`, because holders of a Smart ASA will need two transactions (RECOMMENDED in an Atomic Transfer) to close their position: > > * A call to this method to transfer their outstanding balance (possibly as a `CloseOut` operation if the controlling Smart Contract required opt in); and > * an additional transaction to close out of the ASA. #### Asset Freeze ```json [ { "name": "asset_freeze", "args": [ { "type": "asset", "name": "freeze_asset" }, { "type": "bool", "name": "asset_frozen" } ], "returns": { "type": "void" } }, { "name": "account_freeze", "args": [ { "type": "asset", "name": "freeze_asset" }, { "type": "account", "name": "freeze_account" }, { "type": "bool", "name": "asset_frozen" } ], "returns": { "type": "void" } } ] ``` Calling `asset_freeze` prevents any transfer of a Smart ASA. Calling `account_freeze` prevents a specific account from transferring or receiving a Smart ASA. > Upon a call to `asset_freeze` or `account_freeze`, a reference implementation SHOULD: > > * Fail if `freeze_asset` does not correspond to an ASA controlled by this smart contract. > * Succeed iff the `sender` of the transaction corresponds to the `freeze_addr`, as persisted by the controlling Smart Contract. > > In addition: > > * Upon a call to `asset_freeze`, the controlling Smart Contract SHOULD persist the tuple `(freeze_asset, asset_frozen)` (for instance, by setting a `frozen` flag in *global* storage). > * Upon a call to `account_freeze` the controlling Smart Contract SHOULD persist the tuple `(freeze_asset, freeze_account, asset_frozen)` (for instance by setting a `frozen` flag in the *local* storage of the `freeze_account`). See the for how to ensure that Smart ASA holders cannot reset their `frozen` flag by clearing out their state at the controlling Smart Contract. 
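The freeze checks above can be summarized with a non-normative sketch. Global and local storage are modeled as plain dictionaries, and all names are hypothetical; note that for a `default_frozen` Smart ASA a *missing* local-state entry is treated as frozen, so holders cannot unfreeze themselves by clearing their local state (see the Security Considerations).

```python
# Hypothetical model of the freeze gates applied before asset_transfer.
asset_frozen = {}    # asset_id -> bool, set by asset_freeze (global storage)
account_frozen = {}  # (asset_id, account) -> bool, set by account_freeze (local storage)

def is_account_frozen(asset_id, account):
    # Missing entry counts as frozen: clearing local state must not unfreeze.
    return account_frozen.get((asset_id, account), True)

def transfer_allowed(asset_id, sender, asset_sender, receiver, clawback_addr):
    if sender == clawback_addr:
        return True  # clawback transfers bypass the freeze checks
    if sender != asset_sender:
        return False  # only asset_sender (or clawback) may move the funds
    if asset_frozen.get(asset_id, False):
        return False  # the whole asset is frozen
    return not (is_account_frozen(asset_id, asset_sender)
                or is_account_frozen(asset_id, receiver))
```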
```json [ { "name": "get_asset_is_frozen", "readonly": true, "args": [{ "type": "asset", "name": "freeze_asset" }], "returns": { "type": "bool" } }, { "name": "get_account_is_frozen", "readonly": true, "args": [ { "type": "asset", "name": "freeze_asset" }, { "type": "account", "name": "freeze_account" } ], "returns": { "type": "bool" } } ] ``` The value returned by `get_asset_is_frozen` (respectively, `get_account_is_frozen`) indicates whether any account (respectively, `freeze_account`) is frozen with respect to `freeze_asset`. A `true` value indicates that transfers will be rejected. > Upon a call to `get_asset_is_frozen`, a reference implementation SHOULD retrieve the tuple `(freeze_asset, asset_frozen)` as stored by `asset_freeze` and return the value corresponding to `asset_frozen`. Upon a call to `get_account_is_frozen`, a reference implementation SHOULD retrieve the tuple `(freeze_asset, freeze_account, asset_frozen)` as stored by `account_freeze` and return the value corresponding to `asset_frozen`. #### Asset Destroy ```json { "name": "asset_destroy", "args": [{ "type": "asset", "name": "destroy_asset" }], "returns": { "type": "void" } } ``` Calling `asset_destroy` destroys a Smart ASA. > Upon a call to `asset_destroy`, a reference implementation SHOULD: > > * Fail if `destroy_asset` does not correspond to an ASA controlled by this smart contract. > > It is RECOMMENDED for calls to this method to be permissioned (see `asset_create`). > > The controlling Smart Contract SHOULD perform an asset destroy operation on the ASA with ID `destroy_asset`. The operation will fail if the asset is still in circulation. #### Circulating Supply ```json { "name": "get_circulating_supply", "readonly": true, "args": [{ "type": "asset", "name": "asset" }], "returns": { "type": "uint64" } } ``` Calling `get_circulating_supply` returns the circulating supply of a Smart ASA. 
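A reference implementation defines the circulating supply as the ASA `total` minus the balance held by the reserve address, and ties it to the `asset_config` guard on `total`. A non-normative sketch (helper names are hypothetical):

```python
# Non-normative sketch of the circulating-supply definition.
def get_circulating_supply(total: int, reserve_balance: int) -> int:
    # Circulating supply = ASA total minus the reserve's holdings.
    return total - reserve_balance

def can_set_new_total(new_total: int, total: int, reserve_balance: int) -> bool:
    # asset_config MAY change `total`, but the updated value SHOULD NOT
    # fall below the current circulating supply.
    return new_total >= get_circulating_supply(total, reserve_balance)
```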
> Upon a call to `get_circulating_supply`, a reference implementation SHOULD: > > * Fail if `asset` does not correspond to an ASA controlled by this smart contract. > * Return the circulating supply of `asset`, defined by the difference between the ASA `total` and the balance held by its `reserve_addr` (see ). #### Full ABI Spec ```json { "name": "arc-0020", "methods": [ { "name": "asset_create", "args": [ { "type": "uint64", "name": "total" }, { "type": "uint32", "name": "decimals" }, { "type": "bool", "name": "default_frozen" }, { "type": "string", "name": "unit_name" }, { "type": "string", "name": "name" }, { "type": "string", "name": "url" }, { "type": "byte[]", "name": "metadata_hash" }, { "type": "address", "name": "manager_addr" }, { "type": "address", "name": "reserve_addr" }, { "type": "address", "name": "freeze_addr" }, { "type": "address", "name": "clawback_addr" } ], "returns": { "type": "uint64" } }, { "name": "asset_config", "args": [ { "type": "asset", "name": "config_asset" }, { "type": "uint64", "name": "total" }, { "type": "uint32", "name": "decimals" }, { "type": "bool", "name": "default_frozen" }, { "type": "string", "name": "unit_name" }, { "type": "string", "name": "name" }, { "type": "string", "name": "url" }, { "type": "byte[]", "name": "metadata_hash" }, { "type": "address", "name": "manager_addr" }, { "type": "address", "name": "reserve_addr" }, { "type": "address", "name": "freeze_addr" }, { "type": "address", "name": "clawback_addr" } ], "returns": { "type": "void" } }, { "name": "get_asset_config", "readonly": true, "args": [ { "type": "asset", "name": "asset" } ], "returns": { "type": "(uint64,uint32,bool,string,string,string,byte[],address,address,address,address)", "desc": "`total`, `decimals`, `default_frozen`, `unit_name`, `name`, `url`, `metadata_hash`, `manager_addr`, `reserve_addr`, `freeze_addr`, `clawback`" } }, { "name": "asset_transfer", "args": [ { "type": "asset", "name": "xfer_asset" }, { "type": "uint64", "name": 
"asset_amount" }, { "type": "account", "name": "asset_sender" }, { "type": "account", "name": "asset_receiver" } ], "returns": { "type": "void" } }, { "name": "asset_freeze", "args": [ { "type": "asset", "name": "freeze_asset" }, { "type": "bool", "name": "asset_frozen" } ], "returns": { "type": "void" } }, { "name": "account_freeze", "args": [ { "type": "asset", "name": "freeze_asset" }, { "type": "account", "name": "freeze_account" }, { "type": "bool", "name": "asset_frozen" } ], "returns": { "type": "void" } }, { "name": "get_asset_is_frozen", "readonly": true, "args": [ { "type": "asset", "name": "freeze_asset" } ], "returns": { "type": "bool" } }, { "name": "get_account_is_frozen", "readonly": true, "args": [ { "type": "asset", "name": "freeze_asset" }, { "type": "account", "name": "freeze_account" } ], "returns": { "type": "bool" } }, { "name": "asset_destroy", "args": [ { "type": "asset", "name": "destroy_asset" } ], "returns": { "type": "void" } }, { "name": "get_circulating_supply", "readonly": true, "args": [ { "type": "asset", "name": "asset" } ], "returns": { "type": "uint64" } } ] } ``` ### Metadata #### ASA Metadata The ASA underlying a Smart ASA: * MUST be `DefaultFrozen`. * MUST specify the ID of the controlling Smart Contract (see below); and * MUST set the `ClawbackAddr` to the account of such Smart Contract. The metadata **MUST** be immutable. #### Specifying the controlling Smart Contract A Smart ASA MUST specify the ID of its controlling Smart Contract. If the Smart ASA also conforms to any ARC that supports additional `properties` (, ), then it MUST include an `arc-20` key and set the corresponding value to a map, including the ID of the controlling Smart Contract as a value for the key `application-id`. For example: ```javascript { //... "properties": { //... "arc-20": { "application-id": 123 } } //... } ``` > To avoid ecosystem fragmentation, this ARC does NOT propose any new method to specify the metadata of an ASA. 
Instead, it only extends already existing standards. ### Handling opt in and close out A Smart ASA MUST require users to opt in to the ASA and MAY require them to opt in to the controlling Smart Contract. This MAY be performed at two separate times. The remainder of this section is non-normative. > Smart ASAs SHOULD NOT require users to opt in to the controlling Smart Contract, unless the implementation requires storing information into their local schema (for instance, to implement ; also see ). > > Clients MAY inspect the local state schema of the controlling Smart Contract to infer whether opt in is required. > > If a Smart ASA requires opt in, then clients SHOULD prevent users from closing out the controlling Smart Contract while they still hold a balance of any of the ASAs controlled by the Smart Contract. ## Rationale This ARC builds on the strengths of the ASA to enable a Smart Contract to control its operations and flexibly update its configuration. The rationale is to have a “Smart ASA” that is as widely adopted as the ASA both by the community and by the surrounding ecosystem. Wallets, dApps, and marketplaces: * Will display a user’s Smart ASA balance out-of-the-box (because of the underlying ASA). * SHOULD recognize Smart ASAs and inform the users accordingly by displaying the name, unit name, URL, etc. from the controlling Smart Contract. * SHOULD enable users to transfer the Smart ASA by constructing the appropriate transactions, which call the ABI methods of the controlling Smart Contract. With this in mind, this standard optimizes for: * Community adoption, by minimizing the that need to be set and the requirements of a conforming implementation. * Developer adoption, by re-using the familiar ASA transaction reference in the methods’ specification. * Ecosystem integration, by minimizing the amount of work that a wallet, dApp or service should perform to support the Smart ASA. 
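To construct such app calls, a client needs each method's 4-byte ARC-4 selector: the first four bytes of the SHA-512/256 hash of the method signature. A non-normative sketch (assumes Python's `hashlib` exposes `sha512_256`, which it does on OpenSSL-backed builds):

```python
import hashlib

def method_selector(signature: str) -> bytes:
    # ARC-4 method selector: first 4 bytes of SHA-512/256 of the signature.
    return hashlib.new("sha512_256", signature.encode("utf-8")).digest()[:4]

# e.g. the selector a wallet would place as the first app argument when
# calling the controlling Smart Contract's asset_transfer method
sel = method_selector("asset_transfer(asset,uint64,account,account)void")
assert len(sel) == 4
```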
## Backwards Compatibility Existing ASAs MAY adopt this standard if issued or re-configured to match the requirements in the . This requires: * The ASA to be `DefaultFrozen`. * Deploying a Smart Contract that will manage, control and operate on the asset(s). * Re-configuring the ASA, by setting its `ClawbackAddr` to the account of the controlling Smart Contract. * Associating the ID of the Smart Contract to the ASA (see ). Assets implementing MAY also be compatible with this ARC if the Smart Contract implementing royalties enforcement exposes the ABI methods specified here and the corresponding ASAs and their metadata are compliant with this standard. ## Reference Implementation A reference implementation is available ## Security Considerations Keep in mind that the rules governing a Smart ASA are only in place as long as: * The ASA remains frozen; * the `ClawbackAddr` of the ASA is set to a controlling Smart Contract, as specified in the ; * the controlling Smart Contract is not updatable, nor deletable, nor re-keyable. ### Local State If your controlling Smart Contract implementation writes information to a user’s local state, keep in mind that users can close out the application and (worse) clear their state at all times. This requires careful consideration. For instance, if you determine a user’s state by reading a flag from their local state, you should consider the flag *set* and the user *frozen* if the corresponding local state key is *missing*. For a `default_frozen` Smart ASA this means: * Set the `frozen` flag (to `1`) at opt in. * Explicitly verify that a user’s `frozen` flag is not set (is `0`) before approving transfers. * If the key `frozen` is missing from the user’s local state, then consider the flag to be set and reject all transfers. This prevents users from resetting their `frozen` flag by clearing their state and then opting into the controlling Smart Contract again. ## Copyright Copyright and related rights waived via .
# Round based datafeed oracles on Algorand
> Conventions for building round based datafeed oracles on Algorand
## Abstract The following document introduces conventions for building round based datafeed oracles on Algorand using the ABI defined in ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. An oracle **MUST** have an associated smart-contract implementing the ABI interface described below. ### ABI Interface Round based datafeed oracles allow smart-contracts to get data relevant to a specific round, for example the ALGO price at a specific round. The associated smart contract **MUST** implement the following ABI interface: ```json { "name": "ARC_0021", "desc": "Interface for a round based datafeed oracle", "methods": [ { "name": "get", "desc": "Get data from the oracle for a specific round", "args": [ { "type": "uint64", "name": "round", "desc": "The desired round" }, { "type": "byte[]", "name": "user_data", "desc": "Optional: Extra data provided by the user. Pass an empty slice if not used." } ], "returns": { "type": "byte[]", "desc": "The oracle's response. If the data doesn't exist, the response is an empty slice." } }, { "name": "must_get", "desc": "Get data from the oracle for a specific round. Panics if the data doesn't exist.", "args": [ { "type": "uint64", "name": "round", "desc": "The desired round" }, { "type": "byte[]", "name": "user_data", "desc": "Optional: Extra data provided by the user. Pass an empty slice if not used." } ], "returns": { "type": "byte[]", "desc": "The oracle's response" } }, /** Optional */ { "name": "get_closest", "desc": "Get data from the oracle closest to a specified round by searching over past rounds.", "args": [ { "type": "uint64", "name": "round", "desc": "The desired round" }, { "type": "uint64", "name": "search_span", "desc": "Threshold for number of rounds in the past to search on." 
}, { "type": "byte[]", "name": "user_data", "desc": "Optional: Extra data provided by the user. Pass an empty slice if not used." } ], "returns": { "type": "(uint64,byte[])", "desc": "The closest round and the oracle's response for that round. If the data doesn't exist, the round is set to 0 and the response is an empty slice." } }, /** Optional */ { "name": "must_get_closest", "desc": "Get data from the oracle closest to a specified round by searching over past rounds. Panics if no data is found within the specified range.", "args": [ { "type": "uint64", "name": "round", "desc": "The desired round" }, { "type": "uint64", "name": "search_span", "desc": "Threshold for number of rounds in the past to search on." }, { "type": "byte[]", "name": "user_data", "desc": "Optional: Extra data provided by the user. Pass an empty slice if not used." } ], "returns": { "type": "(uint64,byte[])", "desc": "The closest round and the oracle's response for that round." } } ] } ``` ### Method boundaries * The `get`, `must_get`, `get_closest`, and `must_get_closest` functions **MUST NOT** use local state. * Optional arguments of type `byte[]` that are not used are expected to be passed as an empty byte slice. ## Rationale The goal of these conventions is to make it easier for smart-contracts to interact with off-chain data sources. ## Security Considerations None. ## Copyright Copyright and related rights waived via .
# Add `read-only` annotation to ABI methods
> Convention for creating methods which don't mutate state
The following document introduces a convention for creating methods (as described in ) which don’t mutate state. ## Abstract The goal of this convention is to allow smart contract developers to distinguish between methods which mutate state and methods which don’t by introducing a new property to the `Method` descriptor. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. ### Read-only functions A `read-only` function is a function with no side-effects. In particular, a `read-only` function **SHOULD NOT** include: * local/global state modifications * calls to non `read-only` functions * inner-transactions It is **RECOMMENDED** for a `read-only` function to not access transactions in a group or metadata of the group. > The goal is to allow algod to easily execute `read-only` functions without broadcasting a transaction In order to support this annotation, the following `Method` descriptor is suggested: ```typescript interface Method { /** The name of the method */ name: string; /** Optional, user-friendly description for the method */ desc?: string; /** Optional, is it a read-only method (according to ARC-22) */ readonly?: boolean /** The arguments of the method, in order */ args: Array<{ /** The type of the argument */ type: string; /** Optional, user-friendly name for the argument */ name?: string; /** Optional, user-friendly description for the argument */ desc?: string; }>; /** Information about the method's return value */ returns: { /** The type of the return value, or "void" to indicate no return value. */ type: string; /** Optional, user-friendly description for the return value */ desc?: string; }; } ``` ## Rationale ## Security Considerations None. ## Copyright Copyright and related rights waived via .
# Sharing Application Information
> Append application information to compiled TEAL applications
## Abstract The following document introduces a convention for appending information (stored in various files) to the compiled application’s bytes. The goal of this convention is to standardize the process of verifying and adding this information. The encoded information byte string is `arc23` followed by the IPFS CID v1 of a folder containing the files with the information. The minimum required file is `contract.json` representing the contract metadata (as described in , and as extended by future potential ARCs). ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. ### Files containing Application Information Application information is represented by various files in a folder that: * **MUST** contain a file `contract.json` representing the contract metadata (as described in , and as extended by future potential ARCs). * **MAY** contain a file with the basename `application` followed by the extension of the high-level language the application is written in (e.g., `application.py` for PyTeal). > To allow the verification of your contract, be sure to write the version used to compile the file after the import, e.g.: `from pyteal import * #pyteal==0.20.1` * **MAY** contain the files `approval.teal` and `clear.teal`, that are the compiled versions of the approval and clear programs in TEAL. * Note that `approval.teal` will not be able to contain the application information as this would create circularity. If `approval.teal` is provided, it is assumed that the *actual* `approval.teal` that is deployed corresponds to `approval.teal` with the proper `bytecblock` (defined below) appended at the end. * **MAY** contain other files as defined by other ARCs. 
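As a non-normative illustration, a toolchain might sanity-check the folder against these requirements before computing its CID. The helper name below is hypothetical; only `contract.json` is mandatory under this ARC.

```python
from pathlib import Path

def check_information_folder(folder: str) -> None:
    # Hypothetical helper: verify the application-information folder
    # satisfies the one MUST of this section (contract.json present).
    # approval.teal, clear.teal, application.* are all optional (MAY).
    names = {p.name for p in Path(folder).iterdir() if p.is_file()}
    if "contract.json" not in names:
        raise ValueError("application information folder must contain contract.json")
```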
### CID, Pinning, and CAR of the Application Information The allows access to the corresponding application information files using . The CID **MUST**: * Represent a folder of files, even if only `contract.json` is present. > You may need to use the option `--wrap-with-directory` of `ipfs add` * Be a v1 CID > E.g., use the option `--cid-version=1` of `ipfs add` * Use the SHA-256 hash algorithm > E.g., use the option `--hash=sha2-256` of `ipfs add` Since the exact CID depends on the options provided when creating it and on the IPFS software version (if default options are used), for any production application, the folder of files **SHOULD** be published and pinned on IPFS. > All examples in this ARC assume the use of Kubo IPFS version 0.17.0 with default options apart from those explicitly stated. If the folder is not pinned on IPFS, any production application **SHOULD** provide a CAR file of the folder, obtained using `ipfs dag export`. For public networks (e.g., MainNet, TestNet, BetaNet), block explorers and wallets (that support this ARC) **SHOULD** try to recover application information files from IPFS, and if not possible, **SHOULD** allow developers to upload a CAR file. If a CAR file is used, these tools **MUST** validate that the CAR file matches the CID. For development purposes, on private networks, the application information files **MAY** instead be provided as a .zip or .tar.gz containing at the root all the required files. Block explorers and wallets for *private* networks **MAY** allow uploading the application information as a .zip or .tar.gz. They still **SHOULD** validate the files. > The validation of .zip or .tar.gz files will work if the same version of the IPFS software is used with the same options. Since, for development purposes, the same machine is normally used to code the dApp and run the block explorer/wallet, this is most likely not an issue. 
However, for production purposes, we cannot assume the same IPFS software is used, and a CAR file is the best solution to ensure that the application information files will always be available and possible to validate. > Example: For the example stored in `/asset/arc-0023/application_information`, the CID is `bafybeiavazvdva6uyxqudfsh57jbithx7r7juzvxhrylnhg22aeqau6wte`, which can be obtained with the command: > > ```plaintext > ipfs add --cid-version=1 --hash=sha2-256 --recursive --quiet --wrap-with-directory --only-hash application_information > ``` ### Associated Encoded Information Byte String The (encoded) information byte string is `arc23` concatenated with the 36 bytes of the binary CID. The information byte string is always 41 bytes long and always starts, in hexadecimal, with `0x6172633233` (corresponding to `arc23`). > Example: for the above CID `bafybeiavazvdva6uyxqudfsh57jbithx7r7juzvxhrylnhg22aeqau6wte`, the binary CID corresponds to the following hexadecimal value: > > ```plaintext > 0x0170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699 > ``` > > and hence the encoded information byte string has the following hexadecimal value: > > ```plaintext > 0x61726332330170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699 > ``` ### Inclusion of the Encoded Information Byte String in Programs The encoded information byte string is included in the *approval program* of the application via a `bytecblock` with a unique byte string equal to the encoded information byte string. 
> For the example above, the `bytecblock` is: > > ```plaintext > bytecblock 0x61726332330170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699 > ``` > > and when compiled this gives the following byte string (at least with TEAL v8 and before): > > ```plaintext > 0x26012961726332330170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699 > ``` The size of the compiled application plus the bytecblock **MUST** be, at most, the maximum size of a compiled application according to the latest consensus parameters supported by the compiler. > At least with TEAL v8 and before, appending the `bytecblock` to the end of the program should add exactly 44 bytes (1 byte for the `bytecblock` opcode, 1 byte for `0x01`, the number of byte strings, 1 byte for `0x29`, the length of the encoded information byte string, and 41 bytes for the encoded information byte string itself). The `bytecblock` **MAY** be placed anywhere in the TEAL source code as long as it does not modify the semantics of the TEAL source code. However, if `approval.teal` is provided as an application information file, the `bytecblock` **SHOULD** be the last opcode of the deployed TEAL program. Developers **MUST** check that, when adding the `bytecblock` to their program, the semantics are not changed. > At least with TEAL v8 and before, adding a `bytecblock` opcode at the end of the approval program does not change the semantics of the program, as long as opcodes are correctly aligned, there is no jump after the last position (that would make the program fail without the `bytecblock`), and there is enough space left to add the opcode. However, though very unlikely, future versions of TEAL may not satisfy this property. The `bytecblock` **MUST NOT** contain any additional byte string beyond the encoded information byte string. 
> For example, the following `bytecblock` is **INVALID**: > > ```plaintext > bytecblock 0x61726332330170122015066a3a83d4c5e1419647efd2144cf7fc7e9a66b73c70b69cdad0090053d699 0x42 > ``` ### Retrieving the Encoded Information Byte String and CID from Compiled TEAL Programs For programs up to TEAL v8, a way to find the encoded information byte string is to search for the prefix: ```plaintext 0x2601296172633233 ``` which is then followed by the 36 bytes of the binary CID. Indeed, this prefix is composed of: * 0x26, the `bytecblock` opcode * 0x01, the number of byte strings provided in the `bytecblock` * 0x29, the length of the encoded information byte string * 0x6172633233, the hexadecimal of `arc23` Software retrieving the encoded information byte string **SHOULD** check the TEAL version and only perform retrieval for supported TEAL versions. It **SHOULD** also gracefully handle false positives, that is, when the above prefix is found multiple times. One solution is to allow multiple possible CIDs for a given compiled program. Note that opcode encoding may change with the TEAL version (though this has not happened up to TEAL v8 at least). If the `bytecblock` opcode encoding changes, software that extracts the encoded information byte string from compiled TEAL programs **MUST** be updated. ## Rationale By appending the IPFS CID of the folder containing information about the Application, any user with access to the blockchain can easily verify the Application and its ABI and interact with it. Using IPFS has several advantages: * Allows automatic retrieval of the application information when pinned. * Allows easy archival using CAR. * Allows support of multiple files. ## Reference Implementation The following code is not audited and is provided for information purposes only. Here is an example of a Python script that can generate the hash and append it to the compiled application, according to this ARC: . A folder containing: * an example of the application . 
* an example of the contract metadata that follows . The files are accessible through the following IPFS commands: ```console $ ipfs cat bafybeiavazvdva6uyxqudfsh57jbithx7r7juzvxhrylnhg22aeqau6wte/contract.json $ ipfs cat bafybeiavazvdva6uyxqudfsh57jbithx7r7juzvxhrylnhg22aeqau6wte/application.py ``` > If they are not accessible, be sure to remove `--only-hash` (`-n`) from your command or check your IPFS node. ## Security Considerations CIDs are unique; however, the related files **MUST** be checked to ensure that the application conforms. An `arc-23` CID added at the end of an application is there to share information, not to prove anything. In particular, nothing ensures that a provided `approval.teal` matches the actual program on chain. ## Copyright Copyright and related rights waived via .
# Algorand WalletConnect v1 API
> API for communication between Dapps and wallets using WalletConnect
This document specifies a standard API for communication between Algorand decentralized applications and wallets using the WalletConnect v1 protocol. ## Abstract WalletConnect is an open protocol to communicate securely between mobile wallets and decentralized applications (dApps) using QR code scanning (desktop) or deep linking (mobile). Its main use case allows users to sign transactions on web apps using a mobile wallet. This document aims to establish a standard API for using the WalletConnect v1 protocol on Algorand, leveraging the existing transaction signing APIs defined in . ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. It is strongly recommended to read and understand the entirety of before reading this ARC. ### Overview This overview section is non-normative. It offers a brief overview of the WalletConnect v1 lifecycle. A more in-depth description can be found in the WalletConnect v1 documentation . In order for a dApp and wallet to communicate using WalletConnect, a WalletConnect session must be established between them. The dApp is responsible for initiating this session and producing a session URI, which it will communicate to the wallet, typically in the form of a QR code or a deep link. This process is described in the section. Once a session is established between a dApp and a wallet, the dApp is able to send requests to the wallet. The wallet is responsible for listening for requests, performing the appropriate actions to fulfill requests, and sending responses back to the dApp with the results of requests. This process is described in the section. 
### Session Creation The dApp is responsible for initializing a WalletConnect session and producing a WalletConnect URI that communicates the necessary session information to the wallet. This process is as described in the WalletConnect documentation , with one addition. In order for wallets to be able to easily and immediately recognize an Algorand WalletConnect session, dApps **SHOULD** add an additional URI query parameter to the WalletConnect URI. If present, the name of this parameter **MUST** be `algorand` and its value **MUST** be `true`. This query parameter can appear in any order relative to the other query parameters in the URI. > For example, here is a standard WalletConnect URI: > > ```plaintext > wc:4015f93f-b88d-48fc-8bfe-8b063cc325b6@1?bridge=https%3A%2F%2F9.bridge.walletconnect.org&key=b0576e0880e17f8400bfff92d4caaf2158cccc0f493dcf455ba76d448c9b5655 > ``` > > And here is that same URI with the Algorand-specific query parameter: > > ```plaintext > wc:4015f93f-b88d-48fc-8bfe-8b063cc325b6@1?bridge=https%3A%2F%2F9.bridge.walletconnect.org&key=b0576e0880e17f8400bfff92d4caaf2158cccc0f493dcf455ba76d448c9b5655&algorand=true > ``` It is **RECOMMENDED** that dApps include this query parameter, but it is not **REQUIRED**. Wallets **MAY** reject sessions if the session URI does not contain this query parameter. #### Chain IDs WalletConnect v1 sessions are associated with a numeric chain ID. Since Algorand chains do not have numeric identifiers (instead, the genesis hash or ID is used for this purpose), this document defines the following chain IDs for the Algorand ecosystem: * MainNet (genesis hash `wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=`): 416001 * TestNet (genesis hash `SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=`): 416002 * BetaNet (genesis hash `mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=`): 416003 At the time of writing, these chain IDs do not conflict with any known chain that also uses WalletConnect. 
In the unfortunate event that this were to happen, the `algorand` query parameter discussed above would be used to differentiate Algorand chains from others. Future Algorand chains, if introduced, **MUST** be assigned new chain IDs. Wallets and dApps **MAY** support all of the above chain IDs or only a subset of them. If a chain ID is presented to a wallet or dApp that does not support that chain ID, they **MUST** terminate the session. For compatibility with WalletConnect usage prior to this ARC, the following catch-all chain ID is also defined: * All Algorand Chains (legacy value): 4160 Wallets and dApps **SHOULD** support this chain ID as well for backwards compatibility. Unfortunately this ID alone is not enough to identify which Algorand chain is being used, so extra fields in message requests (i.e. the genesis hash field in a transaction to sign) **SHOULD** be consulted as well to determine this. ### Message Schema Note: interfaces are defined in TypeScript. These interfaces are designed to be serializable to and from valid JSON objects. The WalletConnect message schema is a set of JSON-RPC 2.0 requests and responses. Decentralized applications will send requests to the wallets and will receive responses as JSON-RPC messages. All requests **MUST** adhere to the following structure: ```typescript interface JsonRpcRequest { /** * An identifier established by the Client. Numbers SHOULD NOT contain fractional parts. */ id: number; /** * A String specifying the version of the JSON-RPC protocol. MUST be exactly "2.0". */ jsonrpc: "2.0"; /** * A String containing the name of the RPC method to be invoked. */ method: string; /** * A Structured value that holds the parameter values to be used during the invocation of the method. */ params: any[]; } ``` The Algorand WalletConnect schema consists of a single RPC method, `algo_signTxn`, as described in the following section. 
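The session-level conventions above (the `algorand` URI marker and the chain ID registry) can be sketched in TypeScript. This is a non-normative illustration; the helper and constant names are invented here, not part of this specification:

```typescript
// Non-normative sketch. Helper names are illustrative and not part of this ARC.
// Maps the chain IDs defined above to their genesis hashes, and detects the
// `algorand=true` marker in a WalletConnect v1 session URI.

const ALGORAND_CHAIN_IDS: Record<number, string> = {
  416001: "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=", // MainNet
  416002: "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", // TestNet
  416003: "mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=", // BetaNet
};

const LEGACY_CHAIN_ID = 4160; // catch-all: consult per-request genesis hash fields instead

function isAlgorandSessionUri(uri: string): boolean {
  // The query component follows the "topic@version" segment of a wc: URI.
  const queryStart = uri.indexOf("?");
  if (queryStart === -1) return false;
  const params = new URLSearchParams(uri.slice(queryStart + 1));
  return params.get("algorand") === "true";
}

function genesisHashForChainId(chainId: number): string | null {
  // Returns null for the legacy catch-all ID (4160) and for unknown chain IDs;
  // callers must then fall back to per-request fields, or terminate the session.
  return ALGORAND_CHAIN_IDS[chainId] ?? null;
}
```

Note that a wallet receiving a URI without the `algorand` marker may still accept the session, since the parameter is only **RECOMMENDED**.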
All responses, whether successful or unsuccessful, **MUST** adhere to the following structure: ```typescript interface JsonRpcResponse { /** * This member is REQUIRED. * It MUST be the same as the value of the id member in the Request Object. * If there was an error in detecting the id in the Request object (e.g. Parse error/Invalid Request), it MUST be Null. */ id: number; /** * A String specifying the version of the JSON-RPC protocol. MUST be exactly "2.0". */ jsonrpc: "2.0"; /** * This member is REQUIRED on success. * This member MUST NOT exist if there was an error invoking the method. * The value of this member is determined by the method invoked on the Server. */ result?: any; /** * This member is REQUIRED on error. * This member MUST NOT exist if the requested method was invoked successfully. */ error?: JsonRpcError; } interface JsonRpcError { /** * A Number that indicates the error type that occurred. * This MUST be an integer. */ code: number; /** * A String providing a short description of the error. * The message SHOULD be limited to a concise single sentence. */ message: string; /** * A Primitive or Structured value that contains additional information about the error. * This may be omitted. * The value of this member is defined by the Server (e.g. detailed error information, nested errors etc.). */ data?: any; } ``` #### `algo_signTxn` This request is used to ask a wallet to sign one or more transactions in one or more atomic groups. ##### Request This request **MUST** adhere to the following structure: ```typescript interface AlgoSignTxnRequest { /** * As described in JsonRpcRequest. */ id: number; /** * As described in JsonRpcRequest. */ jsonrpc: "2.0"; /** * The method to invoke, MUST be "algo_signTxn". */ method: "algo_signTxn"; /** * Parameters for the transaction signing request. */ params: SignTxnParams; } /** * The first element is an array of `WalletTransaction` objects which contain the transaction(s) to be signed. 
* If transactions from an atomic transaction group are being signed, then all transactions in the group (even the ones not being signed by the wallet) MUST appear in this array. * * The second element, if present, contains additional options specified with the `SignTxnOpts` structure. */ type SignTxnParams = [WalletTransaction[], SignTxnOpts?]; ``` > `SignTxnParams` is a tuple with an optional element, meaning its length can be 1 or 2. The and types are defined in . All specifications, restrictions, and guidelines declared in ARC-1 for these types apply to their usage here as well. Additionally, all security requirements and restrictions for processing transaction signing requests from ARC-1 apply to this request as well. > For more information, see and . ##### Response To respond to a request, the wallet **MUST** send back the following response object: ```typescript interface AlgoSignTxnResponse { /** * As described in JsonRpcResponse. */ id: number; /** * As described in JsonRpcResponse. */ jsonrpc: "2.0"; /** * An array containing signed transactions at specific indexes. */ result?: Array<string | null>; /** * As described in JsonRpcResponse. */ error?: JsonRpcError; } ``` type is defined in . In this response, `result` **MUST** be an array with the same length as the number of `WalletTransaction`s in the request (i.e. `.params[0].length`). For every integer `i` such that `0 <= i < result.length`: * If the transaction at index `i` in the group should be signed by the wallet (i.e. `.params[0][i].signers` is not an empty array): `result[i]` **MUST** be a base64-encoded string containing the msgpack-encoded signed transaction `params[0][i].txn`. * Otherwise: `result[i]` **MUST** be `null`, since the wallet was not requested to sign this transaction. If the wallet does not approve signing every transaction whose signature is being requested, the request **MUST** fail. All request failures **MUST** use the error codes defined in . ## Rationale ## Security Considerations None. 
## Copyright Copyright and related rights waived via .
# URI scheme
> A specification for encoding Transactions in a URI format.
## Abstract This URI specification represents a standardized way for applications and websites to send requests and information through deeplinks, QR codes, etc. It is heavily based on Bitcoin’s and should be seen as a derivative of it. The decision to base it on BIP-0021 was made to make it as easy and compatible as possible for other applications. ## Specification ### General format Algorand URIs follow the general format for URIs as set forth in . The path component consists of an Algorand address, and the query component provides additional payment options. Elements of the query component may contain characters outside the valid range. These must first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence must be percent-encoded as described in RFC 3986. ### ABNF Grammar ```plaintext algorandurn = "algorand://" algorandaddress [ "?" algorandparams ] algorandaddress = *base32 algorandparams = algorandparam [ "&" algorandparams ] algorandparam = [ amountparam / labelparam / noteparam / assetparam / otherparam ] amountparam = "amount=" *digit labelparam = "label=" *qchar assetparam = "asset=" *digit noteparam = (xnote | note) xnote = "xnote=" *qchar note = "note=" *qchar otherparam = qchar *qchar [ "=" *qchar ] ``` Here, “qchar” corresponds to valid characters of an RFC 3986 URI query component, excluding the ”=” and ”&” characters, which this specification takes as separators. The scheme component (“algorand:”) is case-insensitive, and implementations must accept any combination of uppercase and lowercase letters. The rest of the URI is case-sensitive, including the query parameter keys. !!! Caveat When it comes to generating an address’s QR code, many exchanges and wallets encode the address without the scheme component (“algorand:”). The result is not a URI, but this is acceptable. ### Query Keys * label: Label for that address (e.g. 
name of receiver) * address: Algorand address * xnote: A URL-encoded notes field value that must not be modifiable by the user when displayed to users. * note: A URL-encoded default notes field value that the user interface may optionally make editable by the user. * amount: microAlgos or smallest unit of asset * asset: The asset id this request refers to (if Algos, simply omit this parameter) * (others): optional, for future extensions ### Transfer amount/size !!! Note This is DIFFERENT from Bitcoin’s BIP-0021 If an amount is provided, it MUST be specified in the basic unit of the asset. For example, if it is Algos (Algorand’s native unit), the amount should be specified in microAlgos. Amounts MUST NOT contain commas or a period (.); they are strictly non-negative integers. For example, for 100 Algos the amount needs to be 100000000, and for 54.1354 Algos the amount needs to be 54135400. Algorand clients should display the amount in whole Algos. Where needed, microAlgos can be used as well. In any case, the units shall be clear to the user. ### Appendix This section contains several examples. address - ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4 ``` address with label - ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?label=Silvio ``` Request 150.5 Algos from an address ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?amount=150500000 ``` Request 150 units of Asset ID 45 from an address ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?amount=150&asset=45 ``` ## Rationale ## Security Considerations None. ## Copyright Copyright and related rights waived via .
# Provider Message Schema
> A comprehensive message schema for communication between clients and providers.
## Abstract Building on the work of the previous ARCs relating to provider transaction signing (), provider address discovery (), provider transaction network posting () and provider transaction signing & posting (), this proposal aims to comprehensively outline a common message schema between clients and providers. Furthermore, this proposal extends the aforementioned methods to encompass new functionality such as: * Extending the message structure to target specific networks, thereby supporting multiple AVM (Algorand Virtual Machine) chains. * Adding a new method that disables clients on providers. * Adding a new method to discover provider capabilities, such as what networks and methods are supported. This proposal serves as a formalization of the message schema and leaves the implementation details to the prerogative of the clients and providers. ## Motivation The previous ARCs relating to client/provider communication (, , and ) serve as the foundation of this proposal. However, this proposal attempts to bring these previous ARCs together and extend their functionality, as some of the previous formats offered little robustness when targeting a specific AVM chain. More methods have been added in an attempt to “fill in the gaps” of the previous client/provider communication ARCs. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Comments like this are non-normative. ### Definitions This section is non-normative. * Client * An end-user application that interacts with a provider; e.g. a dApp. * Provider * An application that manages private keys and performs signing operations; e.g. a wallet. ### Message Reference Naming In order for each message to be identifiable, each message **MUST** contain a `reference` property. 
Furthermore, this `reference` property **MUST** conform to the following naming convention: ```plaintext [namespace]:[method]:[type] ``` where: * `namespace`: * **MUST** be `arc0027` * `method`: * **MUST** be in snake case * **MUST** be one of `disable`, `discover`, `enable`, `post_transactions`, `sign_and_post_transactions`, `sign_message` or `sign_transactions` * `type`: * **MUST** be one of `request` or `response` This convention ensures that each message can be identified and handled. ### Supported Methods | Name | Summary | Example | | ---------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | | `disable` | Removes access for the clients on the provider. What this looks like is the prerogative of the provider. | | | `discover` | Sent by a client to discover the available provider(s). If the `params.providerId` property is supplied, only the provider with the matching ID **SHOULD** respond. This method is usually called before other methods as it allows the client to identify provider(s), the networks the provider(s) support and the methods the provider(s) support on each network. | | | `enable` | Requests that a provider allow a client access to the provider’s accounts. The response **MUST** return a user-curated list of available addresses. Providers **SHOULD** create a “session” for the requesting client; what this should look like is the prerogative of the provider(s) and is beyond the scope of this proposal. | | | `post_transactions` | Sends a list of signed transactions to be posted to the network by the provider. 
| | | `sign_and_post_transactions` | Sends a list of transactions to be signed and posted to the network by the provider. | | | `sign_message` | Sends a UTF-8 encoded message to be signed by the provider. | | | `sign_transactions` | Sends a list of transactions to be signed by the provider. | | ### Request Message Schema ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/request-message", "title": "Request Message", "description": "Outlines the structure of a request message", "type": "object", "properties": { "id": { "type": "string", "description": "A globally unique identifier for the message", "format": "uuid" }, "reference": { "description": "Identifies the purpose of the message", "enum": [ "arc0027:disable:request", "arc0027:discover:request", "arc0027:enable:request", "arc0027:post_transactions:request", "arc0027:sign_and_post_transactions:request", "arc0027:sign_message:request", "arc0027:sign_transactions:request" ] } }, "allOf": [ { "if": { "properties": { "reference": { "const": "arc0027:disable:request" } }, "required": ["id", "reference"] }, "then": { "properties": { "params": { "$ref": "/schemas/disable-params" } } } }, { "if": { "properties": { "reference": { "const": "arc0027:discover:request" } }, "required": ["id", "reference"] }, "then": { "properties": { "params": { "$ref": "/schemas/discover-params" } } } }, { "if": { "properties": { "reference": { "const": "arc0027:enable:request" } }, "required": ["id", "reference"] }, "then": { "properties": { "params": { "$ref": "/schemas/enable-params" } } } }, { "if": { "properties": { "reference": { "const": "arc0027:post_transactions:request" } }, "required": ["id", "params", "reference"] }, "then": { "properties": { "params": { "$ref": "/schemas/post-transactions-params" } } } }, { "if": { "properties": { "reference": { "const": "arc0027:sign_and_post_transactions:request" } }, "required": ["id", "params", "reference"] }, "then": { "properties": { "params": { "$ref": 
"/schemas/sign-and-post-transactions-params" } } } }, { "if": { "properties": { "reference": { "const": "arc0027:sign_message:request" } }, "required": ["id", "params", "reference"] }, "then": { "properties": { "params": { "$ref": "/schemas/sign-message-params" } } } }, { "if": { "properties": { "reference": { "const": "arc0027:sign_transactions:request" } }, "required": ["id", "params", "reference"] }, "then": { "properties": { "params": { "$ref": "/schemas/sign-transactions-params" } } } } ] } ``` where: * `id`: * **MUST** be a compliant string * `reference`: * **MUST** be a string that conforms to the convention #### Param Definitions ##### Disable Params ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/disable-params", "title": "Disable Params", "description": "Disables a previously enabled client with any provider(s)", "type": "object", "properties": { "genesisHash": { "type": "string", "description": "The unique identifier for the network that is the hash of the genesis block" }, "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" }, "sessionIds": { "type": "array", "description": "A list of specific session IDs to remove", "items": { "type": "string" } } }, "required": ["providerId"] } ``` where: * `genesisHash`: * **OPTIONAL** if omitted, the provider **SHOULD** assume the “default” network * **MUST** be a base64 encoded hash of the genesis block of the network * `providerId`: * **MUST** be a compliant string * `sessionIds`: * **OPTIONAL** if omitted, all sessions must be removed * **MUST** remove all sessions if the list is empty ##### Discover Params ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/discover-params", "title": "Discover Params", "description": "Gets a list of available providers", "type": "object", "properties": { "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": 
"uuid" } } } ``` where: * `providerId`: * **OPTIONAL** if omitted, all providers **MAY** respond * **MUST** be a compliant string ##### Enable Params ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/enable-params", "title": "Enable Params", "description": "Asks provider(s) to enable the requesting client", "type": "object", "properties": { "genesisHash": { "type": "string", "description": "The unique identifier for the network that is the hash of the genesis block" }, "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" } }, "required": ["providerId"] } ``` where: * `genesisHash`: * **OPTIONAL** if omitted, the provider **SHOULD** assume the “default” network * **MUST** be a base64 encoded hash of the genesis block of the network * `providerId`: * **MUST** be a compliant string ##### Post Transactions Params ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/post-transactions-params", "title": "Post Transactions Params", "description": "Sends a list of signed transactions to be posted to the network by the provider(s)", "type": "object", "properties": { "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" }, "stxns": { "type": "array", "description": "A list of signed transactions to be posted to the network by the provider(s)", "items": { "type": "string" } } }, "required": [ "providerId", "stxns" ] } ``` where: * `providerId`: * **MUST** be a compliant string * `stxns`: * **MUST** be the base64 encoding of the canonical msgpack encoding of a signed transaction as defined in * **MAY** be empty ##### Sign And Post Transactions Params ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/sign-and-post-transactions-params", "title": "Sign And Post Transactions Params", "description": "Sends a list of transactions to be signed and posted to the network by the 
provider(s)", "type": "object", "properties": { "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" }, "txns": { "type": "array", "description": "A list of transactions to be signed and posted to the network by the provider(s)", "items": { "type": "object", "properties": { "authAddr": { "type": "string", "description": "The auth address if the sender has rekeyed" }, "msig": { "type": "object", "description": "Extra metadata needed when sending multisig transactions", "properties": { "addrs": { "type": "array", "description": "A list of Algorand addresses representing possible signers for the multisig", "items": { "type": "string" } }, "threshold": { "type": "integer", "description": "Multisig threshold value" }, "version": { "type": "integer", "description": "Multisig version" } } }, "signers": { "type": "array", "description": "A list of addresses to sign with", "items": { "type": "string" } }, "stxn": { "type": "string", "description": "The base64 encoded signed transaction" }, "txn": { "type": "string", "description": "The base64 encoded unsigned transaction" } }, "required": ["txn"] } } }, "required": [ "providerId", "txns" ] } ``` where: * `providerId`: * **MUST** be a compliant string * `txns`: * **MUST** have each item conform to the semantic of a transaction in * **MAY** be empty ##### Sign Message Params ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/sign-message-params", "title": "Sign Message Params", "description": "Sends a UTF-8 encoded message to be signed by the provider(s)", "type": "object", "properties": { "message": { "type": "string", "description": "The string to be signed by the provider" }, "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" }, "signer": { "type": "string", "description": "The address to be used to sign the message" } }, "required": [ "message", "providerId" ] } ``` where: * 
`message`: * **MUST** be a string that is compatible with the UTF-8 character set as defined in * `providerId`: * **MUST** be a compliant string * `signer`: * **MUST** be a base32 encoded public key with a 4 byte checksum appended as defined in ##### Sign Transactions Params ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/sign-transactions-params", "title": "Sign Transactions Params", "description": "Sends a list of transactions to be signed by the provider(s)", "type": "object", "properties": { "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" }, "txns": { "type": "array", "description": "A list of transactions to be signed by the provider(s)", "items": { "type": "object", "properties": { "authAddr": { "type": "string", "description": "The auth address if the sender has rekeyed" }, "msig": { "type": "object", "description": "Extra metadata needed when sending multisig transactions", "properties": { "addrs": { "type": "array", "description": "A list of Algorand addresses representing possible signers for the multisig", "items": { "type": "string" } }, "threshold": { "type": "integer", "description": "Multisig threshold value" }, "version": { "type": "integer", "description": "Multisig version" } } }, "signers": { "type": "array", "description": "A list of addresses to sign with", "items": { "type": "string" } }, "stxn": { "type": "string", "description": "The base64 encoded signed transaction" }, "txn": { "type": "string", "description": "The base64 encoded unsigned transaction" } }, "required": ["txn"] } } }, "required": [ "providerId", "txns" ] } ``` where: * `providerId`: * **MUST** be a compliant string * `txns`: * **MUST** have each item conform to the semantic of a transaction in * **MAY** be empty ### Response Message Schema ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/response-message", "title": "Response Message", 
"description": "Outlines the structure of a response message", "type": "object", "properties": { "id": { "type": "string", "description": "A globally unique identifier for the message", "format": "uuid" }, "reference": { "description": "Identifies the purpose of the message", "enum": [ "arc0027:disable:response", "arc0027:discover:response", "arc0027:enable:response", "arc0027:post_transactions:response", "arc0027:sign_and_post_transactions:response", "arc0027:sign_message:response", "arc0027:sign_transactions:response" ] }, "requestId": { "type": "string", "description": "The ID of the request message", "format": "uuid" } }, "allOf": [ { "if": { "properties": { "reference": { "const": "arc0027:disable:response" } }, "required": ["id", "reference", "requestId"] }, "then": { "oneOf": [ { "properties": { "result": { "$ref": "/schemas/disable-result" } } }, { "properties": { "error": { "$ref": "/schemas/error" } } } ] } }, { "if": { "properties": { "reference": { "const": "arc0027:discover:response" } }, "required": ["id", "reference", "requestId"] }, "then": { "oneOf": [ { "properties": { "result": { "$ref": "/schemas/discover-result" } } }, { "properties": { "error": { "$ref": "/schemas/error" } } } ] } }, { "if": { "properties": { "reference": { "const": "arc0027:enable:response" } }, "required": ["id", "reference", "requestId"] }, "then": { "oneOf": [ { "properties": { "result": { "$ref": "/schemas/enable-result" } } }, { "properties": { "error": { "$ref": "/schemas/error" } } } ] } }, { "if": { "properties": { "reference": { "const": "arc0027:post_transactions:response" } }, "required": ["id", "reference", "requestId"] }, "then": { "oneOf": [ { "properties": { "result": { "$ref": "/schemas/post-transactions-result" } } }, { "properties": { "error": { "$ref": "/schemas/error" } } } ] } }, { "if": { "properties": { "reference": { "const": "arc0027:sign_and_post_transactions:response" } }, "required": ["id", "reference", "requestId"] }, "then": { "oneOf": [ { 
"properties": { "result": { "$ref": "/schemas/sign-and-post-transactions-result" } } }, { "properties": { "error": { "$ref": "/schemas/error" } } } ] } }, { "if": { "properties": { "reference": { "const": "arc0027:sign_message:response" } }, "required": ["id", "reference", "requestId"] }, "then": { "oneOf": [ { "properties": { "result": { "$ref": "/schemas/sign-message-result" } } }, { "properties": { "error": { "$ref": "/schemas/error" } } } ] } }, { "if": { "properties": { "reference": { "const": "arc0027:sign_transactions:response" } }, "required": ["id", "reference", "requestId"] }, "then": { "oneOf": [ { "properties": { "result": { "$ref": "/schemas/sign-transactions-result" } } }, { "properties": { "error": { "$ref": "/schemas/error" } } } ] } } ] } ``` * `id`: * **MUST** be a compliant string * `reference`: * **MUST** be a string that conforms to the convention * `requestId`: * **MUST** be the ID of the origin request message #### Result Definitions ##### Disable Result ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/disable-result", "title": "Disable Result", "description": "The response from a disable request", "type": "object", "properties": { "genesisHash": { "type": "string", "description": "The unique identifier for the network that is the hash of the genesis block" }, "genesisId": { "type": "string", "description": "A human-readable identifier for the network" }, "providerId": { "type": "number", "description": "A unique identifier for the provider", "format": "uuid" }, "sessionIds": { "type": "array", "description": "A list of specific session IDs that have been removed", "items": { "type": "string" } } }, "required": [ "genesisHash", "genesisId", "providerId" ] } ``` where: * `genesisHash`: * **MUST** be a base64 encoded hash of the genesis block of the network * `providerId`: * **MUST** be a compliant string * **MUST** uniquely identify the provider ##### Discover Result ```json { "$schema": 
"https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/discover-result", "title": "Discover Result", "description": "The response from a discover request", "type": "object", "properties": { "host": { "type": "string", "description": "A domain name of the provider" }, "icon": { "type": "string", "description": "A URI pointing to an image" }, "name": { "type": "string", "description": "A human-readable canonical name of the provider" }, "networks": { "type": "array", "description": "A list of networks available for the provider", "items": { "type": "object", "properties": { "genesisHash": { "type": "string", "description": "The unique identifier for the network that is the hash of the genesis block" }, "genesisId": { "type": "string", "description": "A human-readable identifier for the network" }, "methods": { "type": "array", "description": "A list of methods available from the provider for the chain", "items": { "enum": [ "disable", "enable", "post_transactions", "sign_and_post_transactions", "sign_message", "sign_transactions" ] } } }, "required": [ "genesisHash", "genesisId", "methods" ] } }, "providerId": { "type": "string", "description": "A globally unique identifier for the provider", "format": "uuid" } }, "required": [ "name", "networks", "providerId" ] } ``` where: * `host`: * **RECOMMENDED** a URL that points to a live website * `icon`: * **RECOMMENDED** be a URI that conforms to * **SHOULD** be a URI that points to a square image with a 96x96px minimum resolution * **RECOMMENDED** image format to be either lossless or vector based such as PNG, WebP or SVG * `name`: * **SHOULD** be human-readable to allow for display to a user * `networks`: * **MAY** be empty * `networks.genesisHash`: * **MUST** be a base64 encoded hash of the genesis block of the network * `networks.methods`: * **SHOULD** be one or all of `disable`, `enable`, `post_transactions`, `sign_and_post_transactions`, `sign_message` or `sign_transactions` * **MAY** be empty * 
`providerId`: * **MUST** be a compliant string * **MUST** uniquely identify the provider ##### Enable Result ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/enable-result", "title": "Enable Result", "description": "The response from an enable request", "type": "object", "properties": { "accounts": { "type": "array", "description": "A list of accounts available for the provider", "items": { "type": "object", "properties": { "address": { "type": "string", "description": "The address of the account" }, "name": { "type": "string", "description": "A human-readable name for this account" } }, "required": ["address"] } }, "genesisHash": { "type": "string", "description": "The unique identifier for the network that is the hash of the genesis block" }, "genesisId": { "type": "string", "description": "A human-readable identifier for the network" }, "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" }, "sessionId": { "type": "string", "description": "A globally unique identifier for the session as defined by the provider" } }, "required": [ "accounts", "genesisHash", "genesisId", "providerId" ] } ``` where: * `accounts`: * **MAY** be empty * `accounts.address`: * **MUST** be a base32 encoded public key with a 4 byte checksum appended as defined in * `genesisHash`: * **MUST** be a base64 encoded hash of the genesis block of the network * `providerId`: * **MUST** be a compliant string * **MUST** uniquely identify the provider * `sessionId`: * **RECOMMENDED** to be a compliant string ##### Post Transactions Result ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/post-transactions-result", "title": "Post Transactions Result", "description": "The response from a post transactions request", "type": "object", "properties": { "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" }, "txnIDs": { "type": 
"array", "description": "A list of IDs for all of the transactions posted to the network", "items": { "type": "string" } } }, "required": [ "providerId", "txnIDs" ] } ``` where: * `providerId`: * **MUST** be a compliant string * **MUST** uniquely identify the provider * `txnIDs`: * **MUST** contain items that are a 52-character base32 string (without padding) corresponding to a 32-byte string transaction ID * **MAY** be empty ##### Sign And Post Transactions Result ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/sign-and-post-transactions-result", "title": "Sign And Post Transactions Result", "description": "The response from a sign and post transactions request", "type": "object", "properties": { "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" }, "txnIDs": { "type": "array", "description": "A list of IDs for all of the transactions posted to the network", "items": { "type": "string" } } }, "required": [ "providerId", "txnIDs" ] } ``` where: * `providerId`: * **MUST** be a compliant string * **MUST** uniquely identify the provider * `txnIDs`: * **MUST** contain items that are a 52-character base32 string (without padding) corresponding to a 32-byte string transaction ID * **MAY** be empty ##### Sign Message Result ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/sign-message-result", "title": "Sign Message Result", "description": "The response from a sign message request", "type": "object", "properties": { "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" }, "signature": { "type": "string", "description": "The signature of the signed message signed by the private key of the intended signer" }, "signer": { "type": "string", "description": "The address of the signer used to sign the message" } }, "required": ["providerId", "signature", "signer"] } ``` where: * `providerId`: * 
**MUST** be a compliant string * **MUST** uniquely identify the provider * `signature`: * **MUST** be a base64 encoded string * `signer`: * **MUST** be a base32 encoded public key with a 4 byte checksum appended as defined in ##### Sign Transactions Result ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/sign-transactions-result", "title": "Sign Transactions Result", "description": "The response from a sign transactions request", "type": "object", "properties": { "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" }, "stxns": { "type": "array", "description": "A list of signed transactions that are ready to be posted to the network", "items": { "type": "string" } } }, "required": ["providerId", "stxns"] } ``` where: * `providerId`: * **MUST** be a compliant string * **MUST** uniquely identify the provider * `stxns`: * **MUST** be the base64 encoding of the canonical msgpack encoding of a signed transaction as defined in * **MAY** be empty #### Error Definition ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "$id": "/schemas/error", "title": "Error", "description": "Details the type of error and a human-readable message that can be displayed to the user", "type": "object", "properties": { "code": { "description": "An integer that defines the type of error", "enum": [ 4000, 4001, 4002, 4003, 4004, 4100, 4200, 4201, 4300 ] }, "data": { "type": "object", "description": "Additional information about the error" }, "message": { "type": "string", "description": "A human-readable message about the error" }, "providerId": { "type": "string", "description": "A unique identifier for the provider", "format": "uuid" } }, "required": [ "code", "message" ] } ``` where: * `code`: * **MUST** be a code of one of the * `message`: * **SHOULD** be human-readable to allow for display to a user * `providerId`: * **MUST** be a compliant string * **MUST** be present if the error
originates from the provider

### Errors

#### Summary

| Code | Name | Summary |
| ---- | ----------------------------------- | ----------------------------------------------------------------------------------------- |
| 4000 | `UnknownError`                      | The default error response, usually indicates something is not quite right.                |
| 4001 | `MethodCanceledError`               | When a user has rejected the method.                                                       |
| 4002 | `MethodTimedOutError`               | The requested method has timed out.                                                        |
| 4003 | `MethodNotSupportedError`           | The provider does not support this method.                                                 |
| 4004 | `NetworkNotSupportedError`          | Network is not supported.                                                                  |
| 4100 | `UnauthorizedSignerError`           | The provider has not given permission to use a specified signer.                           |
| 4200 | `InvalidInputError`                 | The input for signing transactions is malformed.                                           |
| 4201 | `InvalidGroupIdError`               | The computed group ID of the atomic transactions is different from the assigned group ID.  |
| 4300 | `FailedToPostSomeTransactionsError` | When some transactions were not sent properly.                                             |

#### 4000 `UnknownError` This error is the default error and serves as the “catch all” error. This usually occurs when something has happened that is outside the bounds of graceful handling. You can check the `UnknownError.message` string for more information. The code **MUST** be 4000. #### 4001 `MethodCanceledError` This error is thrown when a user has rejected or canceled the requested method on the provider. For example, the user decides to cancel the signing of a transaction. **Additional Data** | Name | Type | Value | Description | | ------ | -------- | ----- | ----------------------------------------- | | method | `string` | - | The name of the method that was canceled. | The code **MUST** be 4001. #### 4002 `MethodTimedOutError` This can be thrown by most methods and indicates that the method has timed out. **Additional Data** | Name | Type | Value | Description | | ------ | -------- | ----- | -------------------------------------- | | method | `string` | - | The name of the method that timed out. | The code **MUST** be 4002. #### 4003 `MethodNotSupportedError` This can be thrown by most methods and indicates that the provider does not support the method you are trying to perform.
The code **MUST** be 4003. **Additional Data** | Name | Type | Value | Description | | ------ | -------- | ----- | --------------------------------------------- | | method | `string` | - | The name of the method that is not supported. | #### 4004 `NetworkNotSupportedError` This error is thrown when the requested genesis hash is not supported by the provider. The code **MUST** be 4004. **Additional Data** | Name | Type | Value | Description | | ----------- | -------- | ----- | ------------------------------------------------------ | | genesisHash | `string` | - | The genesis hash of the network that is not supported. | #### 4100 `UnauthorizedSignerError` This error is thrown when an account has been specified as a signer, but the provider has not given permission to use that account. The code **MUST** be 4100. **Additional Data** | Name | Type | Value | Description | | ------ | -------- | ----- | ------------------------------------------------- | | signer | `string` | - | The address of the signer that is not authorized. | #### 4200 `InvalidInputError` This error is thrown when the provider attempts to sign transaction(s), but the input is malformed. The code **MUST** be 4200. #### 4201 `InvalidGroupIdError` This error is thrown when the provider attempts to sign atomic transactions in which the computed group ID is different from the assigned group ID. The code **MUST** be 4201. **Additional Data** | Name | Type | Value | Description | | --------------- | -------- | ----- | ---------------------------------------------------- | | computedGroupId | `string` | - | The computed ID of the supplied atomic transactions. | #### 4300 `FailedToPostSomeTransactionsError` This error is thrown when some transactions failed to be posted to the network. The code **MUST** be 4300.
**Additional Data** | Name | Type | Value | Description | | ------------- | -------------------- | ----- | ----------- | | successTxnIDs | `(string \| null)[]` | - | This will correspond to the `stxns` list sent in `post_transactions` & `sign_and_post_transactions` and will contain the ID of those transactions that were successfully committed to the blockchain, or null if they failed. | ## Rationale An original vision for Algorand was that multiple AVM chains could co-exist. Extending the base of each message schema with a targeted network (referenced by its genesis hash) ensures the schema can remain AVM chain-agnostic and be adapted to work with any AVM-compatible chain. The schema adds a few methods that are not mentioned in previous ARCs; these methods were born out of needs observed by providers and clients alike. The latest JSON schema (as of writing, the 2020-12 draft) was chosen as the format due to its wide support across multiple platforms and languages, and due to its popularity.
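As a companion to the error table above, here is a minimal TypeScript sketch that collects the error codes and builds an object conforming to the Error definition. The constant and helper names are illustrative only, not part of any official SDK:

```typescript
// Sketch only: error codes from the ARC-0027 error summary table above.
const ERROR_CODES = {
  UnknownError: 4000,
  MethodCanceledError: 4001,
  MethodTimedOutError: 4002,
  MethodNotSupportedError: 4003,
  NetworkNotSupportedError: 4004,
  UnauthorizedSignerError: 4100,
  InvalidInputError: 4200,
  InvalidGroupIdError: 4201,
  FailedToPostSomeTransactionsError: 4300,
} as const;

// Shape of the Error definition: code and message are required,
// data and providerId are optional.
interface ProviderError {
  code: number;
  message: string;
  data?: Record<string, unknown>;
  providerId?: string;
}

// Hypothetical helper: build a 4003 error for an unsupported method,
// attaching the method name as additional data.
function methodNotSupportedError(method: string, providerId: string): ProviderError {
  return {
    code: ERROR_CODES.MethodNotSupportedError,
    message: `Method "${method}" is not supported`,
    data: { method },
    providerId,
  };
}
```

A provider could return such an object whenever a client requests a method missing from the provider's `discover` result.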
## Reference Implementation ### Disable Example **Request** ```json { "id": "e44f5bde-37f4-44b0-94d5-1daff41bc984d", "params": { "genesisHash": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "sessionIds": ["ab476381-c1f4-4665-b89c-9f386fb6f15d", "7b02d412-6a27-4d97-b091-d5c26387e644"] }, "reference": "arc0027:disable:request" } ``` **Response** ```json { "id": "e6696507-6a6c-4df8-98c4-356d5351207c", "reference": "arc0027:disable:response", "requestId": "e44f5bde-37f4-44b0-94d5-1daff41bc984d", "result": { "genesisHash": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "genesisId": "testnet-v1.0", "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "sessionIds": ["ab476381-c1f4-4665-b89c-9f386fb6f15d", "7b02d412-6a27-4d97-b091-d5c26387e644"] } } ``` ### Discover Example **Request** ```json { "id": "5d5186fc-2091-4e88-8ef9-05a5d4da24ed", "params": { "providerId": "85533948-4d0b-4727-904e-dd35305d49aa" }, "reference": "arc0027:discover:request" } ``` **Response** ```json { "id": "6695f990-e3d7-41c4-bb26-64ab8da0653b", "reference": "arc0027:discover:response", "requestId": "5d5186fc-2091-4e88-8ef9-05a5d4da24ed", "result": { "host": "https://awesome-wallet.com", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUh...", "name": "Awesome Wallet", "networks": [ { "genesisHash": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=", "genesisId": "mainnet-v1.0", "methods": [ "disable", "enable", "post_transactions", "sign_and_post_transactions", "sign_message", "sign_transactions" ] }, { "genesisHash": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "genesisId": "testnet-v1.0", "methods": [ "disable", "enable", "post_transactions", "sign_message", "sign_transactions" ] } ], "providerId": "85533948-4d0b-4727-904e-dd35305d49aa" } } ``` ### Enable Example **Request** ```json { "id": "4dd4ccdf-a918-4e33-a675-073330db4c99", "params": { "genesisHash": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "providerId": 
"85533948-4d0b-4727-904e-dd35305d49aa" }, "reference": "arc0027:enable:request" } ``` **Response** ```json { "id": "cdf43d9e-1158-400b-b2fb-ba45e39548ff", "reference": "arc0027:enable:response", "requestId": "4dd4ccdf-a918-4e33-a675-073330db4c99", "result": { "accounts": [{ "address": "ARC27GVTJO27GGSWHZR2S3E7UY46KXFLBC6CLEMF7GY3UYF7YWGWC6NPTA", "name": "Main Account" }], "genesisHash": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "genesisId": "testnet-v1.0", "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "sessionId": "6eb74cf1-93e8-400c-94b5-4928807a3ab1" } } ``` ### Post Transactions Example **Request** ```json { "id": "e555ccb3-4730-474c-92e3-1e42868e0c0d", "params": { "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "stxns": [ "iaNhbXT..." ] }, "reference": "arc0027:post_transactions:request" } ``` **Response** ```json { "id": "13b115fb-2966-4a21-b6f7-8aca118ac008", "reference": "arc0027:post_transactions:response", "requestId": "e555ccb3-4730-474c-92e3-1e42868e0c0d", "result": { "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "txnIDs": [ "H2KKVI..." ] } } ``` ### Sign And Post Transactions Example **Request** ```json { "id": "43adafeb-d455-4264-a1c0-d86d9e1d75d9", "params": { "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "txns": [ { "txn": "iaNhbXT..." 
}, { "txn": "iaNhbXT...", "signers": [] } ] }, "reference": "arc0027:sign_and_post_transactions:request" } ``` **Response** ```json { "id": "973df300-f149-4004-9718-b04b5f3991bd", "reference": "arc0027:sign_and_post_transactions:response", "requestId": "43adafeb-d455-4264-a1c0-d86d9e1d75d9", "result": { "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "stxns": [ "iaNhbXT...", null ] } } ``` ### Sign Message Example **Request** ```json { "id": "8f4aa9e5-d039-4272-95ac-6e972967e0cb", "params": { "message": "Hello humie!", "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "signer": "ARC27GVTJO27GGSWHZR2S3E7UY46KXFLBC6CLEMF7GY3UYF7YWGWC6NPTA" }, "reference": "arc0027:sign_message:request" } ``` **Response** ```json { "id": "9bdf72bf-218e-462a-8f64-3a40ef4a4963", "reference": "arc0027:sign_message:response", "requestId": "8f4aa9e5-d039-4272-95ac-6e972967e0cb", "result": { "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "signature": "iaNhbXT...", "signer": "ARC27GVTJO27GGSWHZR2S3E7UY46KXFLBC6CLEMF7GY3UYF7YWGWC6NPTA" } } ``` ### Sign Transactions Example **Request** ```json { "id": "464e6b88-8860-403c-891d-7de6d0425686", "params": { "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "txns": [ { "txn": "iaNhbXT..." }, { "txn": "iaNhbXT...", "signers": [] } ] }, "reference": "arc0027:sign_transactions:request" } ``` **Response** ```json { "id": "f5a56135-5cd2-4f3f-8757-7b89d32d67e0", "reference": "arc0027:sign_transactions:response", "requestId": "464e6b88-8860-403c-891d-7de6d0425686", "result": { "providerId": "85533948-4d0b-4727-904e-dd35305d49aa", "stxns": [ "iaNhbXT...", null ] } } ``` ## Security Considerations As this ARC only serves as the formalization of the message schema, the end-to-end security of the actual messages is beyond the scope of this ARC. It is **RECOMMENDED** that another ARC be proposed to advise in this topic, with reference to this ARC. ## Copyright Copyright and related rights waived via .
# Algorand Event Log Spec
> A methodology for structured logging by Algorand dapps.
## Abstract Algorand dapps can use the primitive to attach information about an application call. This ARC proposes the concept of Events, which are merely a way in which data contained in these logs may be categorized and structured. In short: to emit an Event, a dapp calls `log` with ABI formatting of the log data, and a 4-byte prefix to indicate which Event it is. ## Specification Each kind of Event emitted by a given dapp has a unique 4-byte identifier. This identifier is derived from its name and the structure of its contents, like so: ### Event Signature An Event Signature is a utf8 string, comprised of: the name of the event, followed by an open paren, followed by the comma-separated data types contained in the event (the types supported are the same as in ), followed by a close paren. This follows naming conventions similar to ABI signatures, but does not include the return type. ### Deriving the 4-byte prefix from the Event Signature To derive the 4-byte prefix from the Event Signature, perform the `sha512/256` hash algorithm on the signature, and select the first 4 bytes of the result. This is the same process that is used by the as specified in ARC-4. ### Argument Encoding The arguments to an event **MUST** be encoded as if they were a single ABI tuple (as opposed to concatenating the encoded values together). For example, an event with signature `foo(string,string)` would contain the 4-byte prefix and a `(string,string)` encoded byteslice.
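The prefix derivation above can be sketched in TypeScript using Node's built-in `sha512-256` digest; the function name here is illustrative, not part of any SDK:

```typescript
import { createHash } from "node:crypto";

// Derive the 4-byte event prefix (as hex) from an event signature:
// sha512/256 over the utf8 signature, keeping the first 4 bytes.
function eventPrefixHex(signature: string): string {
  return createHash("sha512-256")
    .update(signature, "utf8")
    .digest("hex")
    .slice(0, 8); // 8 hex characters = 4 bytes
}
```

For the `Swapped(uint64,uint64)` signature used in the reference implementation below, this yields the prefix `1ccbd925`.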
### ARC-4 Extension

#### Event

An event is represented as follows:

```typescript
interface Event {
  /** The name of the event */
  name: string;
  /** Optional, user-friendly description for the event */
  desc?: string;
  /** The arguments of the event, in order */
  args: Array<{
    /** The type of the argument */
    type: string;
    /** Optional, user-friendly name for the argument */
    name?: string;
    /** Optional, user-friendly description for the argument */
    desc?: string;
  }>;
}
```

#### Method

This ARC extends ARC-4 by adding an array `events` of type `Event[]` to the `Method` interface. Concretely, this gives the following extended `Method` interface:

```typescript
interface Method {
  /** The name of the method */
  name: string;
  /** Optional, user-friendly description for the method */
  desc?: string;
  /** The arguments of the method, in order */
  args: Array<{
    /** The type of the argument */
    type: string;
    /** Optional, user-friendly name for the argument */
    name?: string;
    /** Optional, user-friendly description for the argument */
    desc?: string;
  }>;
  /** All of the events that the method uses */
  events: Event[];
  /** Information about the method's return value */
  returns: {
    /** The type of the return value, or "void" to indicate no return value. */
    type: string;
    /** Optional, user-friendly description for the return value */
    desc?: string;
  };
}
```

#### Contract

> Even though events are already listed inside `Method`, the contract **MUST** also provide a top-level array of `Event`s to improve readability.
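As a concrete illustration of the extended interface, a hypothetical `swap` method declaring a `Swapped` event might be described like this (the method and argument names are made up for this example):

```typescript
// Hypothetical Method entry using the extended interface above.
const swapMethod = {
  name: "swap",
  desc: "Swaps one asset for another",
  args: [
    { type: "uint64", name: "amountIn", desc: "Amount offered for the swap" },
  ],
  events: [
    {
      name: "Swapped",
      desc: "Emitted when a swap completes",
      args: [
        { type: "uint64", name: "inAmount" },
        { type: "uint64", name: "outAmount" },
      ],
    },
  ],
  returns: { type: "uint64", desc: "Amount received from the swap" },
};

// Rebuild the event signature from the declaration:
// event name, open paren, comma-separated argument types, close paren.
const signature = `${swapMethod.events[0].name}(${swapMethod.events[0].args
  .map((a) => a.type)
  .join(",")})`; // "Swapped(uint64,uint64)"
```

Note that the event signature is built from the argument *types* only; argument names and descriptions are purely informational.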
```typescript interface Contract { /** A user-friendly name for the contract */ name: string; /** Optional, user-friendly description for the interface */ desc?: string; /** * Optional object listing the contract instances across different networks */ networks?: { /** * The key is the base64 genesis hash of the network, and the value contains * information about the deployed contract in the network indicated by the * key */ [network: string]: { /** The app ID of the deployed contract in this network */ appID: number; } } /** All of the methods that the contract implements */ methods: Method[]; /** All of the events that the contract contains */ events: Event[]; } ``` ## Rationale Event logging allows a dapp to convey useful information about the things it is doing. Well-designed Event logs allow observers to more easily interpret the history of interactions with the dapp. A structured approach to Event logging could also allow for indexers to more efficiently store and serve queryable data exposed by the dapp about its history. ## Reference Implementation ### Sample interpretation of Event log data An exchange dapp might emit a `Swapped` event with two `uint64` values representing quantities of currency swapped. The event signature would be: `Swapped(uint64,uint64)`. Suppose that dapp emits the following log data (seen here as base64 encoded): `HMvZJQAAAAAAAAAqAAAAAAAAAGQ=`. Suppose also that the dapp developers have declared that it follows this spec for Events, and have published the signature `Swapped(uint64,uint64)`. We can attempt to parse this log data to see if it is one of these events, as follows. (This example is written in JavaScript.) 
First, we can determine the expected 4-byte prefix by following the spec above:

```js
> { sha512_256 } = require('js-sha512')
> sig = 'Swapped(uint64,uint64)'
'Swapped(uint64,uint64)'
> hash = sha512_256(sig)
'1ccbd9254b9f2e1caf190c6530a8d435fc788b69954078ab937db9b5540d9567'
> prefix = hash.slice(0,8) // 8 nibbles = 4 bytes
'1ccbd925'
```

Next, we can inspect the data to see if it matches the expected format: 4 bytes for the prefix, 8 bytes for the first uint64, and 8 bytes for the next.

```js
> b = Buffer.from('HMvZJQAAAAAAAAAqAAAAAAAAAGQ=', 'base64')
> b.slice(0,4).toString('hex')
'1ccbd925'
> b.slice(4, 12)
<Buffer 00 00 00 00 00 00 00 2a>
> b.slice(12,20)
<Buffer 00 00 00 00 00 00 00 64>
```

We see that the 4-byte prefix matches the signature for `Swapped(uint64,uint64)`, and that the rest of the data can be interpreted using the types declared for that signature. We interpret the above Event data to be: `Swapped(0x2a,0x64)`, meaning `Swapped(42,100)`. ## Security Considerations As specified in ARC-4, methods which have a `return` value MUST NOT emit an event after they log their `return` value. ## Copyright Copyright and related rights waived via .
# Application Specification
> A specification for fully describing an Application, useful for Application clients.
## Abstract > \[!NOTE] This specification will eventually be deprecated by the specification. An Application is partially defined by its , but further information about the Application should be available. Other descriptive elements of an application may include its State Schema, the original TEAL source programs, default method arguments, and custom data types. This specification defines the descriptive elements of an Application that should be available to clients to provide useful information for an Application Client. ## Motivation As more complex Applications are created and deployed, some consistent way to specify the details of the application and how to interact with it becomes more important. A specification that allows a consistent and complete definition of an application will help developers attempting to integrate an application they’ve never worked with before. ## Specification The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in . ### Definitions * : The object containing the elements describing the Application. * : The object containing a description of the TEAL source programs that are evaluated when this Application is called. * : The object containing a description of the schema required by the Application. * : The object containing a map of on completion actions to allowable calls for bare methods * : The object containing a map of method signatures to meta data about each method ### Application Specification The Application Specification is composed of a number of elements that serve to fully describe the Application.
```ts
type AppSpec = {
  // embedded contract fields, see ARC-0004 for more
  contract: ARC4Contract;
  // the original teal source, containing annotations, base64 encoded
  source?: SourceSpec;
  // the schema this application requires/provides
  schema?: SchemaSpec;
  // supplemental information for calling bare methods
  bare_call_config?: CallConfigSpec;
  // supplemental information for calling ARC-0004 ABI methods
  hints: HintsSpec;
  // storage requirements
  state?: StateSpec;
}
```

### Source Specification

Contains the source TEAL files including comments and other annotations.

```ts
// Object containing the original TEAL source files
type SourceSpec = {
  // b64 encoded approval program
  approval: string;
  // b64 encoded clear state program
  clear: string;
}
```

### Schema Specification

The schema of an application is critical to know prior to creation since it is immutable after creation. It also helps clients of the application understand the data that is available to be queried from off chain. Individual fields can be referenced from the to provide input data to a given ABI method. While some fields are possible to know ahead of time, others may be keyed dynamically. In both cases the data type being stored MUST be known and declared ahead of time.

```ts
// The complete schema for this application
type SchemaSpec = {
  local: Schema;
  global: Schema;
}

// Schema fields may be declared explicitly or reserved
type Schema = {
  declared: Record<string, DeclaredSchemaValueSpec>;
  reserved: Record<string, ReservedSchemaValueSpec>;
}

// Types supported for encoding/decoding
enum AVMType { uint64, bytes }

// string encoded datatype name defined in arc-4
type ABIType = string;

// Fields that have an explicit key
type DeclaredSchemaValueSpec = {
  type: AVMType | ABIType;
  key: string;
  descr: string;
}

// Fields that have an undetermined key
type ReservedSchemaValueSpec = {
  type: AVMType | ABIType;
  descr: string;
  max_keys: number;
}
```

### Bare call specification

Describes the supported OnComplete actions for bare calls on the contract.
```ts
// describes under what conditions an associated OnCompletion type can be used with a particular method
// NEVER: Never handle the specified on completion type
// CALL: Only handle the specified on completion type for application calls
// CREATE: Only handle the specified on completion type for application create calls
// ALL: Handle the specified on completion type for both create and normal application calls
type CallConfig = 'NEVER' | 'CALL' | 'CREATE' | 'ALL'

type CallConfigSpec = {
  // lists the supported CallConfig for each on completion type; if not specified, a CallConfig of NEVER is assumed
  no_op?: CallConfig
  opt_in?: CallConfig
  close_out?: CallConfig
  update_application?: CallConfig
  delete_application?: CallConfig
}
```

### Hints specification

Contains supplemental information about ABI methods; each record represents a single method in the contract. The record key should be the corresponding ABI signature. NOTE: Ideally this information would be part of the ABI specification.

```ts
type HintSpec = {
  // indicates the method has no side-effects and can be called via dry-run/simulate
  read_only?: boolean;
  // describes the structure of arguments, key represents the argument name
  structs?: Record<string, StructSpec>;
  // describes the source of default values for arguments, key represents the argument name
  default_arguments?: Record<string, DefaultArgumentSpec>;
  // describes which OnCompletion types are supported
  call_config: CallConfigSpec;
}

// key represents the method signature for an ABI method defined in 'contracts'
type HintsSpec = Record<string, HintSpec>
```

#### Readonly Specification Indicates the method has no side-effects and can be called via dry-run/simulate. NOTE: This property is made obsolete by but is included as it is currently used by existing reference implementations such as Beaker. #### Struct Specification Each defined type is specified as an array of `StructElement`s. The ABI encoding is exactly as if an ABI Tuple type defined the same element types in the same order.
It is important to encode the struct elements as an array since it preserves the order of fields, which is critical to encoding/decoding the data properly.

```ts
// Type aliases for readability
type FieldName = string

// string encoded datatype name defined in ARC-0004
type ABIType = string

// Each field in the struct contains a name and ABI type
type StructElement = [FieldName, ABIType]

// Type aliases for readability
type ContractDefinedType = StructElement[]
type ContractDefinedTypeName = string;

// represents an input/output structure
type StructSpec = {
  name: ContractDefinedTypeName
  elements: ContractDefinedType
}
```

For example, a `ContractDefinedType` provides an array of `StructElement`s. Given the PyTeal:

```py
from pyteal import abi

class Thing(abi.NamedTuple):
    addr: abi.Field[abi.Address]
    balance: abi.Field[abi.Uint64]
```

the equivalent ABI type is `(address,uint64)` and an element in the TypeSpec is:

```js
{
  // ...
  "Thing": [["addr", "address"], ["balance", "uint64"]],
  // ...
}
```

#### Default Argument Defines how default argument values can be obtained. The `source` field defines how a default value is obtained; the `data` field contains additional information based on the `source` value. Valid values for `source` are: * “constant” - `data` is the value to use * “global-state” - `data` is the global state key * “local-state” - `data` is the local state key * “abi-method” - `data` is a reference to the ABI method to call. The method should be read only and return a value of the appropriate type Two scenarios where providing default arguments can be useful: 1. Providing a default value for optional arguments 2.
Providing a value for required arguments such as foreign asset or application references without requiring the client to explicitly determine these values when calling the contract

```ts
// ARC-0004 ABI method definition
type ABIMethod = {};

type DefaultArgumentSpec = {
  // Where to look for the default arg value
  source: "constant" | "global-state" | "local-state" | "abi-method"
  // extra data to include when looking up the value
  data: string | bigint | number | ABIMethod
}
```

### State Specifications

Describes the total storage requirements for both global and local storage; this should include both the declared and reserved fields described in the SchemaSpec. NOTE: If the Schema specification contained additional information such that the size could be calculated, then this specification would not be required.

```ts
type StateSchema = {
  // how many byte slices are required
  num_byte_slices: number
  // how many uints are required
  num_uints: number
}

type StateSpec = {
  // schema specification for global storage
  global: StateSchema
  // schema specification for local storage
  local: StateSchema
}
```

### Reference schema

A full JSON schema for application.json can be found in .

## Rationale

The rationale fleshes out the specification by describing what motivated the design and why particular design decisions were made. It should describe alternate designs that were considered and related work, e.g. how the feature is supported in other languages.

## Backwards Compatibility

All ARCs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ARC must explain how the author proposes to deal with these incompatibilities. ARC submissions without a sufficient backwards compatibility treatise may be rejected outright.

## Test Cases

Test cases for an implementation are mandatory for ARCs that are affecting consensus changes.
If the test suite is too large to reasonably be included inline, then consider adding it as one or more files in `https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-####/`. ## Reference Implementation `algokit-utils-py` and `algokit-utils-ts` both provide reference implementations for the specification structure and using the data in an `ApplicationClient` `Beaker` provides a reference implementation for creating an application.json from a smart contract. ## Security Considerations All ARCs must contain a section that discusses the security implications/considerations relevant to the proposed change. Include information that might be important for security discussions, surfaces risks and can be used throughout the life cycle of the proposal. E.g. include security-relevant design decisions, concerns, important discussions, implementation-specific guidance and pitfalls, an outline of threats and risks and how they are being addressed. ARC submissions missing the “Security Considerations” section will be rejected. An ARC cannot proceed to status “Final” without a Security Considerations discussion deemed sufficient by the reviewers. ## Copyright Copyright and related rights waived via .
# xGov Pilot - Becoming an xGov
> Explanation on how to become Expert Governors.
## Abstract This ARC proposes a standard for achieving xGov status in the Algorand governance process. xGov status grants the right to vote on proposals raised by the community, specifically spending a previously specified amount of Algo in a given Term on particular initiatives. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in .

| Algorand xGovernor Summary | |
| -------------------------- | - |
| Enrolment | At the start of each governance period |
| How to become eligible | Having completed participation in the previous governance period through official or approved decentralized finance governance. |
| Requisite | Commit of governance reward for one year |
| Duration | 1 Year |
| Voting Power | 1 Algo committed = 1 Vote, as per REWARDS DEPOSIT |
| Duty | Spend all available votes each time a voting period occurs. (In case there is no proposal that aligns with an xGov's preference, a mock proposal can be used as an alternative.) |
| Disqualification | Forfeit rewards pledged |

### What is an xGov? xGovs, or Expert Governors, are a **self-selected** group of decentralized decision makers who demonstrate an enduring commitment to the Algorand community, possess a deep understanding of the blockchain’s inner workings and realities of the Algorand community, and whose interests are aligned with the good of the Algorand blockchain. These individuals have the ability to participate in the designation **and** approval of proposals, and play an instrumental role in shaping the future of the Algorand ecosystem.
### Requirement to become an xGov To become an xGov, or Expert Governor, an account: * **MUST** first be deemed eligible by having fully participated in the previous governance period, either through official or approved decentralized finance governance. * At the start of each governance period, eligible participants will have the option to enrol in the xGov program * To gain voting power as an xGov, the eligible **governor rewards for the period of the enrolment** **MUST** be committed to the xGov Term Pool and locked for a period of 12 months. > Only the GP rewards are deposited to the xGov Term Pool. The principal Algo committed remains in the gov wallet (or DeFi protocol) and can be used in subsequent Governance Periods. Rewards deposited to the xGov Term Pool will be called the **REWARDS DEPOSIT**. ### Voting Power Voting power in the xGov process is determined by the amount of Algo an eligible participant commits. Voting power is 1 Algo = 1 Vote, as per the REWARDS DEPOSIT, and it renews at the start of every quarter - provided the xGov remains eligible. This ensures that the weight of each vote is directly proportional to the level of investment and commitment to the Algorand ecosystem. ### Duty of an xGov As an xGov, you **MUST** actively participate in the governance process by using all available votes amongst proposals each time a voting period occurs. Failing to do so will result in disqualification. > eg. For 100 Algo as per the REWARDS DEPOSIT, 100 votes are available; they can be spent like this: > > * 50 on proposal A > * 20 on proposal B > * 30 on proposal C > * 0 on every other proposal > In case no proposal aligns with an xGov’s preference, a mock proposal can be used as an alternative. ### Disqualification As an xGov, it is important to understand your role in the governance process and to fulfill the responsibilities that come with it. Failure to do so will result in disqualification.
The consequences of disqualification are significant, as the xGov will lose the rewards that were committed when they entered the xGov process. It is important to take your role as an xGov seriously and fulfill your responsibilities to ensure the success of the governance process.

> The rewards will remain in the xGov reward pools & will be distributed among the remaining xGovs

## Rationale

This proposal provides a clear and simple method for participation in the xGov process, while also providing incentives for long-term commitment to the network. Separate pools for xGov and Gov allow for a more diverse range of participation, with the xGov pool providing an additional incentive for longer-term commitment. The requirement to spend 100% of your votes on proposals will ensure that participants are actively engaged in the decision-making process.

After weeks of engagement with the community, it has been decided:

* That the xGov process will not utilize tokens or NFTs.
* There will be no minimum or maximum amount of Algo required to participate in the xGov process.
* In the future, the possibility of node operation being considered as a form of participation eligibility is being explored.

This approach aims to make the xGov process accessible and inclusive for all members of the community. We encourage the community to continue to provide input on this topic through the submission of questions and ideas in this ARC document.

> **Important**: The xGov program is still a work in progress, and changes are expected to happen over the next few years with community input and design consultation. Criteria to ENTER the program will only be applied forward, which means Term Pools already in place will not be affected by any new ENTRY criteria. However, other ELIGIBILITY criteria could be added and applied to all pools.
For example, if the majority of the community deems it necessary to have more than 1 voting session per quarter, this type of change could be applied to all Term pools, given ample notice and time for preparation.

## Security Considerations

No funds need to leave the user’s wallet in order to become an xGov.

## Copyright

Copyright and related rights waived via .
# xGov Pilot - Proposal Process
> Criteria for the creation of proposals.
## Abstract

The goal of this ARC is to clearly define the steps involved in submitting proposals for the xGov Program, to increase transparency and efficiency, ensuring all proposals are given proper consideration.

The goal of this grants scheme is to fund proposals that will help increase the adoption of the Algorand network, as the most advanced layer 1 blockchain to date. The program aims to fund proposals to develop open source software, including tooling, as well as educational resources to help inform and grow the Algorand community.

## Specification

The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in .

### What is a proposal

The xGov program aims to provide funding for individuals or teams to:

* Develop open source applications and tools (eg. an open source AMM or contributing content to an Algorand software library).
* Develop Algorand education resources, preferably in languages where the resources are not yet available (eg. a video series teaching developers about Algorand in Portuguese or Indonesian).

The remainder of the xGov program pilot will not fund proposals for:

* Supplying liquidity.
* Reserving funds to pay for ad-hoc open-source development (devs can apply directly for an xGov grant).
* Buying ASAs, including NFTs.

Proposals **SHALL NOT** be divided into small chunks.

> Issues requiring resolution may have been discussed on various online platforms such as forums, Discord, and social media networks.

Proposals requesting a large amount of funds **MUST BE** split into a milestone-based plan. See

### Duty of a proposer

Having the ability to propose measures for a vote is a significant privilege, which requires:

* A thorough understanding of the needs of the community.
* Alignment of personal interests with the advancement of the Algorand ecosystem.
* Promoting good behavior amongst proposers and discouraging “gaming the system”.
* Reporting flaws and discussing possible solutions with the AF team and community using either the Algorand Forum or the xGov Discord channels.

### Life of a proposal

The proposal process will follow the steps below:

* Anyone can submit a proposal at any time.
* Proposals will be evaluated and refined by the community and xGovs before they are available for voting.
* Up to one month is allocated for voting on proposals.
* The community will vote on proposals that have passed the refinement and temperature check stage.

> If too many proposals are received in a short period of time, xGovs can elect to close proposals, in order to be able to handle the volume appropriately.

### Submit a proposal

In order to submit a proposal, a proposer needs to create a pull request on the following repository: .

Proposals **MUST**:

* Be posted on the (using tags: Governance and xGov Proposals) and discussed with the community during the review phase. Proposals without a discussion thread WILL NOT be included in the voting session.
* Follow the , filling in all the template sections.
* Follow the rules of the xGov Proposals Repository.
* Request a minimum amount of 10,000 Algo.
* Have the status `Final` before the end of the temperature check.
* Be either Proactive (the content of the proposal is yet to be created) or Retroactive (the content of the proposal is already created).
* Milestone-based grants must submit a proposal for one milestone at a time.
* Milestones need to follow the governance period cycle. With the current 3-month cycle, a milestone could be 3 months, 6 months, 9 months, etc.
* The proposal must display all milestones with clear deliverables, and the amount requested must match the 1st milestone. If a second milestone proposal is submitted, it must display the first completed milestone, linking all deliverables.
If a third milestone proposal is submitted, it must display the first and second completed milestones, linking all deliverables. This repeats until all milestones are completed.

* Funding will only be disbursed upon the completion of deliverables.
* A proposal must specify how its delivery can be verified, so that it can be checked prior to payment.
* Proposals must include clear, non-technical descriptions of deliverables. We encourage the use of multimedia (blog/video) to help explain your proposal’s benefits to the community.
* Contain the maintenance period, availability, and sustainability plans. This includes information on potential costs and the duration for which services will be offered at no or reduced cost.

Proposals **MUST NOT**:

* Request funds for marketing campaigns or organizing future meetups.

> Each entity, individual, or project can submit at most two proposals (one proactive proposal and one retroactive proposal). Attempts to circumvent this rule may lead to disqualification or denial of funds.

### Disclaimer: jurisdictions and exclusions

To be eligible to apply for a grant, projects must abide by the (in particular the “Excluded Jurisdictions” section) and be willing to enter into . Additionally, applications promoting gambling, adult content, drug use, and violence of any kind are not permitted.

> We are currently accepting grant applications from US-based individuals/businesses. If the grant is approved, Algo will be converted to USDCa upon payment. This exception will be reviewed periodically.

### Voting Power

When an account participates in its first session, the voting power assigned to it will be equivalent to the total governance rewards it would have received. For all following sessions, the account’s voting power will adjust based on the rewards lost by members in their pool who did not meet their obligations.
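The voting-power adjustment just described, together with the approval threshold defined later in this section, can be sketched as follows. The figures in the first assertion are purely illustrative; the second assertion matches this ARC’s own worked example (2 000 000 Algo available, 200 000 000 votes used, a 100 000 Algo request):

```python
def adjusted_voting_power(initial_pool_vp: int, initial_account_vp: int,
                          pool_vp_used: int) -> float:
    """Voting power for the next session: a pool's unused voting power is
    redistributed pro rata to the accounts that met their obligations."""
    return initial_pool_vp * initial_account_vp / pool_vp_used


def voting_power_needed(amount_requested: int, amount_available: int,
                        session_vp_used: int) -> float:
    """Votes required to pass a proposal, proportional to the funds requested."""
    return amount_requested / amount_available * session_vp_used


# Illustrative: a pool that started with 1,000,000 votes but used only
# 800,000 last session boosts a 100-vote account to 125 votes.
assert adjusted_voting_power(1_000_000, 100, 800_000) == 125.0

# From the ARC's worked example: a 100,000 Algo request (5% of the 2,000,000
# Algo available) needs 5% of the 200,000,000 used votes.
assert voting_power_needed(100_000, 2_000_000, 200_000_000) == 10_000_000.0
```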
The voting power for an upcoming session is computed as:

`new_account_voting_power = (initial_pool_voting_power * initial_account_voting_power) / pool_voting_power_used`

Where:

* `new_account_voting_power`: Voting power allocated to an account for the next session.
* `initial_account_voting_power`: The voting power originally assigned to an account, based on the governance rewards.
* `initial_pool_voting_power`: The total voting power of the pool during its initial phase. This is the sum of governance rewards for all pool participants.
* `pool_voting_power_used`: The voting power from the pool that was actually used in the last session.

### Proposal Approval Threshold

In order for a proposal to be approved, the number of votes in favor of the proposal must be proportionate to the amount of funds requested. This ensures that the allocation of funds is in line with the community’s consensus and in accordance with democratic principles. The formula to calculate the voting power needed to pass a proposal is as follows:

`voting_power_needed = (amount_requested) / (amount_available) * (current_session_voting_power_used)`

Where:

* `voting_power_needed`: Voting power required for a proposal to be accepted.
* `amount_requested`: The requested amount a proposal is seeking.
* `amount_available`: The entire grant funds available for the current session.
* `current_session_voting_power_used`: The voting power used in the current session.

> eg. 2 000 000 Algo are available to be given away as grants, 300 000 000 Algo are committed to the xGov Process, and 200 000 000 Algo are used during the vote:
>
> * Proposal A requests 100 000 Algo (5 % of the amount available)
> * Proposal A needs 5 % of the used votes (10 000 000 Votes) to go through

### Voting on proposals

At the start of the voting period, xGovs will vote on proposals using the voting tool hosted at . Votes will refer to the PR number and a CID hash of the proposal itself.

The CID MUST:

* Represent the file.
* Be a version V1 CID
* E.g., use the option `--cid-version=1` of `ipfs add`
* Use the SHA-256 hash algorithm
* E.g., use the option `--hash=sha2-256` of `ipfs add`

### Grants calculation

The allocation of grants will consider the funding request amounts and the available amount of ALGO to be distributed.

### Grants contract & payment

* Once grants are approved, the Algorand Foundation team will handle the applicable contract and payment.
* **Before submitting your grant proposal**, review the contract template and ensure you’re comfortable with its terms: .

> For milestone-based grants, please also refer to the

## Rationale

The current status of the proposal process includes the following elements:

* Proposals will be submitted off-chain and linked to the on-chain voting through a hash.
* Projects that require multiple funding rounds will need to submit separate proposals.
* The allocation of funds will be subject to review and adjustment during each governance period.
* Voting on proposals will take place on-chain.

We encourage the community to continue to provide input on this topic through the submission of questions and ideas in this ARC document.

## Security Considerations

None

## Copyright

Copyright and related rights waived via .
# Algorand Offline Wallet Backup Protocol
> Wallet-agnostic backup protocol for multiple accounts
## Abstract

This document outlines the high-level requirements for a wallet-agnostic backup protocol that can be used across all wallets in the Algorand ecosystem.

## Specification

The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in .

### Requirements

At a high level, the offline wallet backup protocol has the following requirements:

* Wallet applications should allow backing up and storing multiple accounts at the same time. Account information should be encrypted with a user-defined secret key, utilizing the NaCl SecretBox method (audited and endorsed by Algorand).
* The final encrypted string should be easily copyable so that it can be stored digitally. When importing, wallet applications should be able to detect already imported accounts and gracefully ignore them.

### Format

Before encryption, account information should be converted to the following JSON format:

```plaintext
{
  "device_id": "UNIQUE IDENTIFIER FOR DEVICE (OPTIONAL)",
  "provider_name": "PROVIDER NAME (OPTIONAL, i.e. Pera Wallet)",
  "accounts": [
    {
      "address": "ACCOUNT PUBLIC ADDRESS (REQUIRED)",
      "name": "USER DEFINED ACCOUNT NAME (OPTIONAL)",
      "account_type": "TYPE OF ACCOUNT: single, multisig, watch, contact, ledger (REQUIRED)",
      "private_key": "PRIVATE KEY AS BASE64 ENCODING OF 64 BYTE ALGORAND PRIVATE KEY as encoded by algosdk (NOT PASSPHRASE, REQUIRED for user-owned accounts, can be omitted in case of watch, contact, multisig, ledger accounts)",
      "metadata": "ANY ADDITIONAL CONTENT (OPTIONAL)",
      "multisig": "Multisig information (only required if the account_type is multisig)",
      "ledger": {
        "device_id": "device id",
        "index": ,
        "connection_type": "bluetooth|usb"
      },
    },
    ...
  ]
}
```

*Clients must accept additional fields in the JSON document.*

Here is an example with a single account:

```plaintext
{
  "device_id": "2498232091970170817",
  "provider_name": "Pera Wallet",
  "accounts": [
    {
      "address": "ELWRE6HZ7KIUT46EQ6PBISGD3ND6QSCBVWICYR2QR2Y7LOBRZRCAIKLWDE",
      "name": "My NFT Account",
      "account_type": "single",
      "private_key": "w0HG2VH7tAYz9PD4SYX0flC4CKh1OONCB6U5bP7cXGci7RJ4+fqRSfPEh54USMPbR+hIQa2QLEdQjrH1uDHMRA=="
    }
  ],
}
```

Here is an example with a single multi-sig account:

```plaintext
{
  "device_id": "2498232091970170817",
  "provider_name": "Pera Wallet",
  "accounts": [
    {
      "address": "ELWRE6HZ7KIUT46EQ6PBISGD3ND6QSCBVWICYR2QR2Y7LOBRZRCAIKLWDE",
      "name": "Our Multisig Account",
      "account_type": "multisig",
      "multisig": {
        version: 1,
        threshold: 2,
        addrs: [
          account1.addr,
          account2.addr,
          account3.addr,
        ],
      },
    }
  ],
}
```

### Encryption

Once the input JSON is ready, as specified above, it needs to be encrypted. Even if it is assumed that the user is going to store this information in a secure location, copy-pasting it without encryption is not secure, since multiple applications can access the clipboard. The information needs to be encrypted using a very long passphrase. A 12-word mnemonic will be used as the key. A 12-word mnemonic is secure, and it will not create confusion with the 25-word mnemonics that are used for accounts. The wallet applications should not allow users to copy the 12-word mnemonic nor allow taking screenshots. Users should note it down manually.

The encryption should be performed as follows:

1. The wallet generates a random 16-byte string S (using a cryptographically secure random number generator)
2. The wallet derives a 32-byte key: `key = HMAC-SHA256(key="Algorand export 1.0", input=S)` On libsodium, use `crypto_auth_hmacsha256_init` / `crypto_auth_hmacsha256_update` / `crypto_auth_hmacsha256_final`
3. The wallet encrypts the input JSON using `crypto_secretbox_easy` from libsodium ()
4.
The wallet outputs the following output JSON:

```plaintext
{
  "version": "1.0",
  "suite": "HMAC-SHA256:sodium_secretbox_easy",
  "ciphertext":
}
```

This JSON document (referred to as the ciphertext envelope JSON) needs to be encoded with base64 again in order to make it easier to copy-paste & store.

5. S is encoded as a 12-word mnemonic (according to BIP-39) and displayed to the user.

The user will be responsible for keeping the 12-word mnemonic and the base64 output of the ciphertext envelope JSON in safe locations. Note that step 5 is the default approach; however, wallets can support methods other than mnemonics as well, as long as they are secure.

### Importing

When importing, wallet applications should ask the user for the base64 output of the envelope JSON and the 12-word mnemonic. After getting these values, the application should attempt to decrypt the encrypted string using the 12-word mnemonic. On successful decryption, the accounts contained in the backup can be processed and imported.

## Rationale

There are many benefits to having an openly documented format:

* Better interoperability across wallets, allowing users to use multiple wallets easily by importing all of their accounts using a single format.
* Easy and secure backup of all wallet data at a user-defined location, including secure storage in digital environments.
* Ability to transfer data from device to device securely, such as when moving data from one mobile device to another.

## Security Considerations

TBD

## Copyright

Copyright and related rights waived via .
# Convention for declaring filters of an NFT
> This is a convention for declaring filters in an NFT metadata
## Abstract

The goal is to establish a standard for how filters are declared inside non-fungible token (NFT) metadata.

## Specification

The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in .

> Comments like this are non-normative.

If the property `filters` is provided anywhere in the metadata of an NFT, it **MUST** adhere to the schema below. If the NFT is part of a larger collection and that collection has filters, all the available filters for the collection **MUST** be listed as a property of the `filters` object. If the NFT does not have a particular filter, its value **MUST** be “none”.

The JSON schema for `filters` is as follows:

```json
{
  "title": "Filters for Non-Fungible Token",
  "type": "object",
  "properties": {
    "filters": {
      "type": "object",
      "description": "Filters can be used to filter nfts of a collection. Values must be an array of strings or numbers."
    }
  }
}
```

#### Examples

##### Example of an NFT that has traits & filters

```json
{
  "name": "NFT With Traits & filters",
  "description": "NFT with traits & filters",
  "image": "https://s3.amazonaws.com/your-bucket/images/two.png",
  "image_integrity": "sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
  "properties": {
    "creator": "Tim Smith",
    "created_at": "January 2, 2022",
    "traits": {
      "background": "yellow",
      "head": "curly"
    },
    "filters": {
      "xp": 120,
      "state": "REM"
    }
  }
}
```

## Rationale

A standard for filters is needed so that programs know what to expect in order to filter NFTs without relying on rarity.

## Backwards Compatibility

If `filters` is added on top of existing metadata fields, `traits` and `filters` should be inside the `properties` object. (eg: )

## Security Considerations

None.

## Copyright

Copyright and related rights waived via .
# xGov Pilot - Integration
> Integration of xGov Process
## Abstract

This ARC aims to explain how the xGov process can be integrated within dApps.

## Motivation

Leveraging the decentralization of the xGov process can improve the overall efficiency of this initiative.

## Specification

The keywords “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in .

### How to register

#### How to find the xGov Escrow address

The xGov Escrow address can be extracted using this endpoint: `https://governance.algorand.foundation/api/periods/active/`.

```json
{
  ...
  "xgov_escrow_address": "string",
  ...
}
```

#### Registration

Governors should specify the xGov-related fields. Specifically, governors can sign up to be xGovs by designating the xGov escrow address (which changes from one governance period to the next) as their beneficiary. They can also designate an xGov-controller address that would participate on their behalf in xGov votes via the optional parameter `"xGv":"aaa"`.

Namely, the Notes field has the form:

`af/gov1:j{"com":nnn,"mmm1":nnn1,"mmm2":nnn2,"bnf":"XYZ","xGv":"ABC"}`

Where:

* `"com":nnn` is the Algo commitment;
* `"mmm":nnn` is a commitment for the LP token with asset ID mmm;
* `"bnf":"XYZ"` designates the address “XYZ” as the recipient of rewards (“XYZ” must equal the xGov escrow in order to sign up as an xGov);
* The optional `"xGv":"ABC"` designates address “ABC” as the xGov-controller of this xGov account.
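As a minimal sketch, the Notes field described above could be assembled like this; “XYZ” and “ABC” are the placeholder addresses from the format description, not real accounts, and LP-token commitments are omitted:

```python
import json

# Placeholders from the format description above, not real addresses.
XGOV_ESCROW = "XYZ"      # must be the xGov escrow address for the period
XGOV_CONTROLLER = "ABC"  # optional xGov-controller address

fields = {
    "com": 1000000,          # Algo commitment
    "bnf": XGOV_ESCROW,      # rewards beneficiary: the xGov escrow
    "xGv": XGOV_CONTROLLER,  # controller that votes on this account's behalf
}

# The note is the "af/gov1:j" prefix followed by compact JSON.
note = "af/gov1:j" + json.dumps(fields, separators=(",", ":"))
assert note == 'af/gov1:j{"com":1000000,"bnf":"XYZ","xGv":"ABC"}'
```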
#### Goal example

```shell
goal clerk send -a 0 -f ALDJ4R2L2PNDGQFSP4LZY4HATIFKZVOKTBKHDGI2PKAFZJSWC4L3UY5HN4 -t RFKCBRTPO76KTY7KSJ3HVWCH5HLBPNBHQYDC52QH3VRS2KIM7N56AS44M4 -n 'af/gov1:j{"com":1000000,"12345":2,"67890":30,"bnf":"DRWUX3L5EW7NAYCFL3NWGDXX4YC6Y6NR2XVYIC6UNOZUUU2ERQEAJHOH4M","xGv":"ALDJ4R2L2PNDGQFSP4LZY4HATIFKZVOKTBKHDGI2PKAFZJSWC4L3UY5HN4"}'
```

### How to Interact with the Voting Application

#### How to get the Application ID

Every voting session will have a different application ID. To find it, search for all apps created by the account used and check the global state to see whether `is_bootstrapped` is 1.

#### ABI

The ABI is available . A working test example of how to call the application’s methods is here:

## Rationale

This integration will improve the usage of the process.

## Backwards Compatibility

None

## Security Considerations

None

## Copyright

Copyright and related rights waived via .
# Logic Signature Templates
> Defining templated logic signatures so wallets can safely sign them.
## Abstract

This standard allows wallets to sign known logic signatures and clearly tell the user what they are signing.

## Motivation

Currently, most Algorand wallets do not enable the signing of logic signature programs for the purpose of delegation. The rationale is to prevent users from signing malicious programs, but this limitation also prevents non-malicious delegated logic signatures from being used in the Algorand ecosystem. As such, there needs to be a safe way for wallets to sign logic signatures without putting users at risk.

## Specification

A logic signature **MUST** be described via the following JSON interface(s):

### Interface

```typescript
interface LogicSignatureDescription {
  name: string,
  description: string,
  program: string,
  variables: {
    variable: string,
    name: string,
    type: string,
    description: string
  }[]
}
```

| Key | Description |
| ----------------------- | ------------------------------------------------------------------------- |
| `name` | The name of the logic signature. **SHOULD** be short and descriptive |
| `description` | A description of what the logic signature does |
| `program` | base64 encoding of the TEAL program source |
| `variables` | An array of variables in the program |
| `variables.variable` | The name of the variable in the templated program. |
| `variables.name` | Human-friendly name for the variable. **SHOULD** be short and descriptive |
| `variables.type` | **MUST** be a type defined below in the `type` section |
| `variables.description` | A description of how this variable is used in the program |

### Variables

A variable in the program **MUST** start with `TMPL_`

#### Types

All non-reference ABI types **MUST** be supported by the client.
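Before turning to value encoding, here is a hypothetical description conforming to the `LogicSignatureDescription` interface above; the name, program bytes, and variable are invented for illustration and are not part of the standard:

```typescript
// The interface from this ARC, repeated so the example is self-contained.
interface LogicSignatureDescription {
  name: string,
  description: string,
  program: string,
  variables: {
    variable: string,
    name: string,
    type: string,
    description: string
  }[]
}

// Hypothetical example: a description for a periodic-payment lsig.
const periodicPayment: LogicSignatureDescription = {
  name: "Periodic Payment",
  description: "Allows a fixed payment to a chosen receiver at fixed intervals",
  program: "I3ByYWdtYSB2ZXJzaW9uIDk=", // base64 of the TEAL source
  variables: [
    {
      variable: "TMPL_RECEIVER",
      name: "Receiver",
      type: "address",
      description: "The address allowed to receive the periodic payment",
    },
  ],
};
```

Note that the variable name carries the required `TMPL_` prefix and uses one of the types defined in the table below.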
ABI values **MUST** be encoded in base16 (with the leading `0x`) with the following exceptions:

| Type | Description |
| ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `address` | 58-character base32 Algorand public address. Typically to be used as an argument to the `addr` opcode. Front-ends **SHOULD** provide a link to the address on an explorer |
| `application` | Application ID. Alias for `uint64`. Front-ends **SHOULD** provide a link to the app on an explorer |
| `asset` | Asset ID. Alias for `uint64`. Front-ends **SHOULD** provide a link to the asset on an explorer |
| `string` | UTF-8 string. Typically used as an argument to `byte`, `method`, or a branching opcode. |
| `hex` | base16 encoding of binary data. Typically used as an argument to `byte`. **MUST** be prefixed with `0x` |

For all other values, front-ends **MUST** decode the ABI value and display the human-readable value to the user.

### Input Validation

All ABI values **MUST** be encoded as base16 and prefixed with `0x`, with the exception of `uint64`, which should be provided as an integer. String values **MUST NOT** include any unescaped `"` to ensure there is no TEAL injection. All values **MUST** be validated to ensure they are encoded properly. This includes the following checks:

* An `address` value must be a valid Algorand address
* A `uint64`, `application`, or `asset` value must be a valid unsigned 64-bit integer

### Unique Identification

To enable unique identification of a description, clients **MUST** calculate the SHA256 hash of the JSON description canonicalized in accordance with .

### WalletConnect Method

For wallets to support this ARC, they need to support the `algo_templatedLsig` method.
The method expects three parameters, described by the interface below:

```ts
interface TemplatedLsigParams {
  /** The canonicalized ARC47 templated lsig JSON as described in this ARC */
  arc47: string
  /** The values of the templated variables, if there are any */
  values?: {[variable: string]: string | number}
  /** The hash of the expected program. Wallets should compile the lsig with the given values to verify the program hash matches */
  hash: string
}
```

## Rationale

This provides a way for front-ends to clearly display to the user what is being signed when signing a logic signature.

Template variables must be immediate arguments. Otherwise a string variable could specify the opcode in the program, which could have unintended and unclear consequences.

The `TMPL_` prefix is used to align with existing template variable tooling. Hashing canonicalized JSON is useful for ensuring clients, such as wallets, can create an allowlist of templated logic signatures.

## Backwards Compatibility

N/A

## Test Cases

N/A

## Reference Implementation

A reference implementation can be found in the `https://raw.githubusercontent.com/algorandfoundation/ARCs/main/assets/arc-0047` folder.

contains the templated TEAL code for a logic signature that allows payments of a specific amount every 25,000 blocks. contains a TypeScript script showcasing how a dapp would form a wallet connect request for a templated logic signature. contains a TypeScript script showcasing how a wallet would handle a request for signing a templated logic signature. contains a TypeScript script showcasing how one could validate templated TEAL and variable values.

### String Variables

#### Invalid: Partial Argument

```plaintext
#pragma version 9
byte "Hello, TMPL_NAME"
```

This is not valid because `TMPL_NAME` is not the full immediate argument.

#### Invalid: Not An Argument

```plaintext
#pragma version 9
TMPL_PUSH_HELLO_NAME
```

This is not valid because `TMPL_PUSH_HELLO_NAME` is not an immediate argument to an opcode.
#### Valid

```plaintext
#pragma version 9
byte TMPL_HELLO_NAME
```

This is valid as `TMPL_HELLO_NAME` is the entire immediate argument of the `byte` opcode. A possible value could be `Hello, AlgoDev`

### Hex Variables

#### Valid

```plaintext
#pragma version 9
byte TMPL_DEAD_BEEF
```

This is valid as `TMPL_DEAD_BEEF` is the full immediate argument to the `byte` opcode. A possible value could be `0xdeadbeef`.

## Security Considerations

It should be made clear that this standard alone does not define how front-ends, particularly wallets, should deem a logic signature to be safe. Which logic signatures may be signed is a decision made solely by the front-ends. It is **RECOMMENDED** to only support the signing of audited or otherwise trusted logic signatures.

## Copyright

Copyright and related rights waived via .
# Targeted DeFi Rewards
> Targeted DeFi Rewards, Terms and Conditions
## Abstract

The Targeted DeFi Rewards program is a temporary incentive program that distributes Algo to be deployed in targeted activities to attract new DeFi users from within and outside the ecosystem. The goal is to give DeFi projects more flexibility in how these rewards are structured and distributed among their user base, targeting rapid growth, deeper DEX liquidity, and incentives for users who come to Algorand in the middle of a governance period.

## Specification

The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in .

### Eligibility Criteria

To be eligible to apply to this program, projects must abide by the (in particular the “Excluded Jurisdictions” section) and be willing to enter into a binding contract in the form of the template provided by the Algorand Foundation.

> The Algorand Foundation is temporarily allowing US-based entities to apply for this program. Approved projects will have their rewards swapped to USDCa on the day of the payment. This exception will be reviewed periodically.

Projects must have at least 500K Algo equivalent in TVL of white-listed assets at the time of the quarterly snapshot block, which happens on the 15th day of the last month of each calendar quarter. All related wallet addresses will be provided in advance for peer scrutiny. The DeFi Advisory Committee will review applications to verify each TVL claim, thus ensuring that claims are valid prior to application approval.

For AMMs, we will leverage the Eligible Liquidity Pool list that is currently adopted to allow the governors’ commitment of LP tokens in the DeFi Rewards program, with extension to the assets defined below. For Lending/Borrowing protocols, each project will provide a list of their assets and their holding wallet address(es).
For Bridges, each project will provide a list of the bridged assets and their holding wallet address(es).

### Assets Selection

The metrics used to select eligible assets to be used for Eligibility TVL Calculation (as per Eligibility Criteria above) were chosen to ensure that the selected tokens have a strong reputation, are difficult to manipulate, and are valuable to the ecosystem. This reputation is built on a combination of factors, including Total Value Locked (TVL), Market Cap, and listings.

> Assets are expected to meet at least two of the three criteria below to be included in the white-list.

| Criteria | |
| :--------- | :-- |
| TVL | The total value locked in different Algorand protocols plays a key role. It’s a good indicator of the token’s popularity. Minimum TVL requirement: $100K across all the protocols. |
| Market Cap | Market cap is a measure of a crypto token’s total circulating supply multiplied by its current market price. This parameter can be used to consider the positioning of the tokens on the entire crypto market. Minimum Market Cap requirement: USD 1MM. |
| Listing | Tokens listed on multiple stable and respected exchanges are often seen as more established and trustworthy. This can also contribute to increased demand for the token and further the growth of its reputation within the ecosystem. |

The following assets are qualified and meet the above criteria:

* ALGO
* gALGO - ASA ID 793124631
* USDC - ASA ID 31566704
* USDT - ASA ID 312769
* goBTC - ASA ID 386192725
* goETH - ASA ID 386195940
* PLANETS - ASA ID 27165954
* OPUL - ASA ID 287867876
* VESTIGE - ASA ID 700965019
* CHIPS - ASA ID 388592191
* DEFLY - ASA ID 470842789
* goUSD - ASA ID 672913181
* WBTC - ASA ID 1058926737
* WETH - ASA ID 887406851
* GOLD$ - ASA ID 246516580
* SILVER$ - ASA ID 246519683
* PEPE - ASA ID 1096015467
* COOP - ASA ID 796425061
* GORA - ASA ID 1138500612

> Applications for the above list can be submitted at any time . The cut-off for the applications review is the 7th day of the last month of each calendar quarter, or one week before the quarterly snapshot date.

### Rewards Distribution

Projects will receive 11,250 Algo for each 500K Algo of TVL as defined above, rounded down. In the event that the available Algo are not sufficient for all the projects, Algo rewards will be distributed to each protocol based on their weighted contribution of TVL to Algorand DeFi. Rewards per project are capped at 25% of the total rewards distributed under this program for that period. In the event of partial distribution of the allocated 7.5MM, the remaining funds will be distributed as regular DeFi governance rewards.

For Governance Period 8, AMM TVL counts double compared to lending/borrowing and bridge projects, in recognition of their strategic role in providing liquidity for the ecosystem. This modification was approved by the DeFi Committee.

Rewards under this program will be distributed to projects within 4 weeks of the scheduled start date of the new governance period and the project(s). The usage of these rewards will be made public, and they will be entirely dedicated to protocol provision, user rewards, and user engagement. The use of rewards and methodology for payment must be made public and approved by the Algorand DeFi advisory committee prior to distribution.
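The base reward figure described above can be checked with a quick sketch; this is an illustration of the stated formula and cap, not official distribution code, and the pro-rata fallback for insufficient funds is omitted:

```python
def base_reward(tvl_algo: int) -> int:
    """11,250 Algo for each full 500K Algo of eligible TVL, rounded down."""
    return (tvl_algo // 500_000) * 11_250


def capped_reward(reward: int, total_distributed: int) -> int:
    """Per-project rewards are capped at 25% of the total distributed."""
    return min(reward, total_distributed // 4)


# A project with 1.25MM Algo of eligible TVL has two full 500K tranches;
# the partial third tranche is rounded down and earns nothing.
assert base_reward(1_250_000) == 22_500
```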
## Rationale This document was previously versioned using Google Docs; it made more sense to move it to GitHub. ## Security Considerations Disclaimer: This document may be revised until the day before the voting session opens, as we are still collecting community feedback. ## Copyright Copyright and related rights waived via .
# NFT Rewards
> NFT Rewards, Terms and Conditions
## Abstract NFT Rewards is a temporary incentive program that distributes ALGO to be deployed in targeted activities to attract new NFT users from within and outside the ecosystem. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . ### Pilot program qualification for NFT marketplaces To be eligible to apply to this program, projects must abide by the (in particular the “Excluded Jurisdictions” section) and be willing to enter into a binding contract in the form of the template provided by the Algorand Foundation. NFT marketplaces applying for this program: * Must be an NFT marketplace on Algorand that coordinates the selling of NFTs. An NFT marketplace is defined as an online platform that facilitates third-party non-fungible token listings and transactions in ALGO on the Algorand blockchain. * Must have transaction volume (over the previous 6 months leading up to the application for the program) that is equivalent to at least 10% of the total rewards being distributed. For example, if the total rewards amount is 500K ALGO, then the minimum volume must be 50K ALGO. #### Important Note *NFT Rewards Program for US entities:* > For 2024 | Q2 we will be allowing US-based entities that fit the Program Criteria to apply for the NFT Rewards program. Their allocated ALGO will be converted to USDCa prior to the payment transfer. This change will be reviewed on a periodic basis. ### Allocation of rewards * Rewards will be allocated proportionally based on volume for each qualified NFT marketplace. * For qualifying marketplaces with more than 50% of total NFT marketplace volume, rewards will be capped at 35%. ### Requirements for initiatives 1. The rewards (ALGO) must ultimately go to NFT collectors/end users and creators. 2.
NFT marketplaces must share their campaign plans publicly in advance in order to qualify for the rewards. 3. The rewards (ALGO) should be held in a separate wallet from operating funds to allow on-chain tracking of how funds are being spent. 4. The NFT marketplace must make public data that shows its trading volume in the last quarter. 5. Proposals that incentivize wash trading\* will not be approved to participate in the Program. 6. NFT marketplaces must reward creators whose NFTs are purchased with a 5% minimum royalty. > * By definition, the term “wash trading” means a form of market manipulation where the same user simultaneously buys and sells the same asset with the intention of giving false or misleading signals about its demand or price. ### Process for launching initiative * To apply, a qualifying NFT marketplace must provide detailed information on the specifics of the initiatives it is planning in that period, as well as any documentation proving the location of its headquarters. * If approved by the Algorand Foundation team, rewards will be distributed proportionally based on the allocation defined above. * Following the initiative, the qualifying NFT marketplaces must provide a detailed 1-page report to the Algorand Foundation and on the Forum, covering: 1. Summary of the initiatives implemented; 2. Amount of rewards paid out (including any unspent rewards, which must be returned), and wallet addresses; 3. Total volume of transactions directly resulting from the campaign; 4. New wallets interacting with the marketplace; 5. Total volume of transactions compared to the previous quarter; 6. Any other relevant information. ### Evaluation From GP10 (Q1/2024), proposals will be added to the governance portal and approved or rejected directly by the community. A proposal passes when it reaches a majority of “Yes” votes. The proposals and results are available at . NFT marketplaces that do not fulfill their campaign plan cannot apply for further incentives.
The NFT team will review overall results and discuss whether this program is having the desired impact and, together with the community, will help evaluate whether it should be extended and expanded to the next period. ### Important to note * Marketplaces that fit the above criteria will be required to sign a legal contract with the Algorand Foundation. * Rewards are only paid out in ALGO, or in USDCa for US-based entities. * Legal entities based in other jurisdictions where receiving ALGO is not allowed are not able to partake in this program. * Participants and the Algorand Foundation will all agree on the source of data and metrics to be used for calculating the allocation and measuring the results. ## Rationale This document was previously versioned using Google Docs; it made more sense to move it to GitHub. ## Security Considerations Disclaimer: This document may be revised until the day before the voting session opens, as we are still collecting community feedback. ## Copyright Copyright and related rights waived via .
# Metadata Declarations
> A specification for decentralized, self-declared & verifiable tokens, collections & metadata
## Abstract This ARC describes a standard for a self-sovereign, on-chain project & info declaration. The declaration is an IPFS link to a JSON document, attached to a smart contract with multi-wallet verification capabilities, that contains information about a project, including project tokens, FAQ, NFT collections, team members, and more. ## Motivation In our current ecosystem we have a number of centralized implementations for communicating parts of this vital information to other relevant parties. All NFT marketplaces implement their own collection listing systems & requirements. Block explorers all take different approaches to sourcing images for ASAs, the most common being a GitHub repository that the Tinyman team controls & maintains. This ARC aims to standardize the way that projects communicate this information to other parts of our ecosystem. We can use a smart contract with multi-wallet verification to store this information in a decentralized, self-sovereign & verifiable way by using custom field metadata & IPFS. A chain parser can be used to read the stored information & verify the details against the verified wallets attached to the contract. ## Specification The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in . This proposal specifies an associated off-chain JSON metadata file, displayed below. This metadata file contains several separate sections & escape hatches to include unique metadata about various businesses & projects. To require as few files & IPFS uploads as possible, the sections are all included within the same file. The file is then added to IPFS and the link saved in a custom field on the smart contract under the key `project`.
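As a sketch of how a project might assemble and serialize such a declaration before pinning it to IPFS (the pinning step itself is omitted, and all field values below are illustrative):

```py
import json

# Minimal sketch: build a declaration, serialize it, and (outside this
# sketch) add the bytes to IPFS, saving the resulting ipfs:// link on
# the smart contract under the custom key `project`.
declaration = {
    "version": "0.0.2",  # the only required top-level field
    "associates": [
        {
            "address": "W5MD3VTDUN3H2FFYJR2NDXGAAV2SJ44XEEDGBWHIZKH6ZZXF44SE7KEPVP",
            "role": "Project Founder",
        }
    ],
    "tokens": [{"asset_id": 123456789}],
}

# Compact serialization keeps the IPFS payload small.
blob = json.dumps(declaration, separators=(",", ":")).encode("utf-8")
```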
| Field | Schema | Description | Required | | ----------- | -------------- | ---------------------------------------------------------------------------------- | -------- | | version | string | The version of the standard that the metadata is following. | true | | associates | array\<object> | An array of objects that represent the associates of the project. | false | | collections | array\<object> | An array of objects that represent the collections of the project. | false | | tokens | array\<object> | An array of objects that represent the tokens of the project. | false | | faq | array\<object> | An array of objects that represent the FAQ of the project. | false | | extras | object | An object that represents any extra information that the project wants to include. | false | ##### Top Level JSON Example ```json { "version": "0.0.2", "associates": [...], "collections": [...], "tokens": [...], "faq": [...], "extras": {...} } ``` ### Version We envision this as an evolving, living standard that allows the community to add new sections & metadata as needed. The version field will be used to determine which version of the standard the metadata is following. This will allow for backwards compatibility & future proofing as the standard changes & grows. At the top level, `version` is the only required field. ### Associates Associates are a list of wallets & roles that are associated with the project. This can be used to display the team members of a project, or the owners of a collection. The associates field is an array of objects that contain the following fields: | Field | Schema | Description | Required | | ------- | ------ | ------------------------------------------------------------------ | -------- | | address | string | The Algorand wallet address of the associated person | true | | role | string | A short title for the role the associate plays within the project.
| true | eg: ```json "associates": [ { "address": "W5MD3VTDUN3H2FFYJR2NDXGAAV2SJ44XEEDGBWHIZKH6ZZXF44SE7KEPVP", "role": "Project Founder" }, ... ] ``` ### Collections NFT collections have no formal standard for how they should be declared. This section aims to standardize the way that collections are declared & categorized. The collections field is an array of objects that contain the following fields: | Field | Schema | Description | Required | | ------------------- | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | -------- | | name | string | The name of the collection | true | | network | string | The blockchain network that the collection is minted on. *Default*: `algorand` *Special*: `multichain` | false | | prefixes | array\<string> | An array of strings that represent the prefixes to match against the `unit_name` of the NFTs in the collection. | false | | addresses | array\<string> | An array of strings that represent the addresses that minted the NFTs in the collection. | false | | assets | array\<string> | An array of strings that represent the asset\_ids of the NFTs in the collection. | false | | excluded\_assets | array\<string> | An array of strings that represent the asset\_ids of the NFTs in the collection that should be excluded. | false | | artists | array\<string> | An array of strings that represent the addresses of the artists that created the NFTs in the collection. | false | | banner\_image | string | An IPFS link to an image that represents the collection. *if set, `banner_id` should be unset & vice-versa* | false | | banner\_id | uint64 | An asset\_id that represents the collection. | false | | avatar\_image | string | An IPFS link to an image that represents the collection. *if set, `avatar_id` should be unset & vice-versa* | false | | avatar\_id | uint64 | An asset\_id that represents the collection.
| false | | explicit | boolean | A boolean that represents whether or not the collection contains explicit content. | false | | royalty\_percentage | uint64 | A uint64 with a value ranging from 0-10000 that represents the royalty percentage that the collection would prefer to take on secondary sales. | false | | properties | array\<object> | An array of objects that represent traits from an entire collection. | false | | extras | object | An object of key value pairs for any extra information that the project wants to include for the collection. | false | eg: ```json "collections": [ { "name": "My Collection", "network": "algorand", "prefixes": [ "AKC", ... ], "addresses": [ "W5MD3VTDUN3H2FFYJR2NDXGAAV2SJ44XEEDGBWHIZKH6ZZXF44SE7KEPVP", ... ], "assets": [ 123456789, ... ], "excluded_assets": [ 123456789, ... ], "artists": [ "W5MD3VTDUN3H2FFYJR2NDXGAAV2SJ44XEEDGBWHIZKH6ZZXF44SE7KEPVP", ... ], "banner_image": "ipfs://...", "avatar_id": 123456789, "explicit": false, "royalty_percentage": 750, // ie: 7.5% "properties": [ { "name": "Fur", "values": [ { "name": "Red", "image": "ipfs://...", "image_integrity": "sha256-...", "image_mimetype": "image/png", "animation_url": "ipfs://...", "animation_url_integrity": "sha256-...", "animation_url_mimetype": "image/gif", "extras": { "key": "value", ... } }, ... ] } ... ], "extras": { "key": "value", ... } }, ... ] ``` #### Collection Scoping Not all collections have been consistent with their naming conventions. Some collections are minted across multiple wallets due to prior ASA minting limitations. The following fields used together offer great flexibility in creating a group of NFTs to include in a collection: `prefixes`, `addresses`, `assets`, `excluded_assets`. Combined, these fields allow for maximum flexibility for mints that may have mistakes or exist across wallets & don’t all conform to a consistent standard. `prefixes` allows for simple grouping of a set of NFTs based on the beginning part of the ASA’s `unit_name`.
This is useful for collections that have a consistent naming convention for their NFTs. Every other scoping field modifies this rule. `addresses` scopes the collection down to only include ASAs minted by the addresses listed in this field. This is useful for projects that mint different collections across multiple wallets that utilize the same prefix. `assets` is a direct entry in the collection for NFTs that don’t conform to any of the prefix rules. `excluded_assets` is a direct exclusion of an NFT that may conform to a prefix but should be excluded from the collection. `banner_image`, `banner_id`, `avatar_image`, `avatar_id` are all self-explanatory. They allow for a glanceable preview of the collection to display on NFT marketplaces, analytics sites & others. For both the `banner` & `avatar` field groups, set one field or the other, not both: `banner_image` or `banner_id` (likely an ASA ID from the creator), and `avatar_image` or `avatar_id` (likely an ASA ID from the collection). `explicit` is a boolean that indicates whether or not the collection contains explicit content. This is useful for sites that want to filter out explicit content. `properties` is an array of objects that represent traits from an entire collection. Many new NFT collections are choosing to mint their NFTs as blank slates. This can prevent sniping but also has the adverse effect of obscuring the trait information of a collection. This field allows a collection to declare its traits, their values, image previews of the traits referenced & extra metadata. #### Collection Properties | Field | Schema | Description | Required | | ------ | -------------- | -------------------------------------------------------------- | -------- | | name | string | The name of the property | true | | values | array\<object> | An array of objects that represent the values of the property.
| true | #### Collection Property Values | Field | Schema | Description | Required | | ------------------------- | ------ | ---------------------------------------------------------------------------------------------------------------- | -------- | | name | string | The name of the value | true | | image | string | An IPFS link to an image that represents the value. | false | | image\_integrity | string | A sha256 hash of the image that represents the value. | false | | image\_mimetype | string | The mimetype of the image that represents the value. | false | | animation\_url | string | An IPFS link to an animation that represents the value. | false | | animation\_url\_integrity | string | A sha256 hash of the animation that represents the value. | false | | animation\_url\_mimetype | string | The mimetype of the animation that represents the value. | false | | extras | object | An object of key value pairs for any extra information that the project wants to include for the property value. | false | ### Tokens Tokens are a list of assets that are associated with the project. This can be used to verify the tokens of a project and for others to easily source images to represent the token on their own platforms. | Field | Schema | Description | Required | | ---------------- | ------ | ----------------------------------------------------- | -------- | | asset\_id | uint64 | The asset\_id of the token | true | | image | string | An IPFS link to an image that represents the token. | false | | image\_integrity | string | A sha256 hash of the image that represents the token. | false | | image\_mimetype | string | The mimetype of the image that represents the token. | false | eg: ```json "tokens": [ { "asset_id": 123456789, "image": "ipfs://...", "image_integrity": "sha256-...", "image_mimetype": "image/png" } ... ] ``` ### FAQ Frequently Asked Questions for the project to address the common questions people have about their project and help inform the community.
| Field | Schema | Description | Required | | ----- | ------ | ------------ | -------- | | q | string | The question | true | | a | string | The answer | true | eg: ```json "faq": [ { "q": "What is XYZ Collection?", "a": "XYZ Collection is a premier NFT project that..." }, ... ] ``` ### Extras Custom metadata for extending & customizing the declaration for your own use cases. This object can be found at several levels throughout the specification: the top level, within collections & within collection property value objects. | Field | Schema | Description | Required | | ----- | ------ | ---------------------------------- | -------- | | key | string | The key of the extra information | true | | value | string | The value of the extra information | true | eg: ```json "extras": { "key": "value", ... } ``` ### Contract Providers Custom metadata needs to be verifiable, and many projects use many wallets as a means of separating concerns. Providers are smart contracts that have the capability of verifying multiple wallets & thus provide evidence to parsers of the authenticity of such data. Providers that support this standard will be listed on the site. ## Rationale See the motivation section above for the general rationale. ## Security Considerations None ## Copyright Copyright and related rights waived via .
# ASA Burning App
> Standardized Application for Burning ASAs
## Abstract This ARC provides TEAL which deploys an application that can be used for burning Algorand Standard Assets. The goal is to have apps deployed on the public networks using this TEAL to provide a standardized burn address and app ID. ## Motivation Currently there is no official way to burn ASAs. While one can deploy their own app or rekey an account holding the asset to some other address, having a standardized address for burned assets enables explorers and dapps to easily calculate and display burnt supply for any ASA burned here. ### Definitions Related to Token Supply & Burning It is important to note that assets with clawback enabled are effectively impossible to “burn” and could at any point be clawed back from any account or contract. The definitions below attempt to clarify some terminology around tokens and what can be considered burned. | Token Type | Clawback | No Clawback | | ------------------ | ---------------------------------------------------- | ---------------------------------------------------- | | Total Supply | Total | Total | | Circulating Supply | Total - Qty in Reserve Address - Qty in burn address | Total - Qty in Reserve Address - Qty in burn address | | Available Supply | Total | Total - Qty in burn address | | Burned Supply | N/A (Impossible to burn) | Qty in burn address | ## Specification ### `ARC-4` JSON Description ```json { "name": "ARC54", "desc": "Standardized application for burning ASAs", "methods": [ { "name": "arc54_optIntoASA", "args": [ { "name": "asa", "type": "asset", "desc": "The asset to which the contract will opt in" } ], "desc": "A method to opt the contract into an ASA", "returns": { "type": "void", "desc": "" } }, { "name": "createApplication", "desc": "", "returns": { "type": "void", "desc": "" }, "args": [] } ] } ``` ## Rationale This simple application is only able to opt in to ASAs, not send them. As such, once an ASA has been sent to the app address it is effectively burnt.
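The supply definitions in the table above can be sketched numerically; the helper below is a hypothetical illustration, not part of the ARC:

```py
# Sketch of the supply definitions from the table above. With clawback
# enabled, burned supply is N/A (modelled here as 0) and the burn
# address does not reduce available supply, since any quantity can
# always be clawed back.

def supply_view(total: int, reserve: int, burned_qty: int, has_clawback: bool) -> dict:
    return {
        "total": total,
        "circulating": total - reserve - burned_qty,
        "available": total if has_clawback else total - burned_qty,
        "burned": 0 if has_clawback else burned_qty,
    }

view = supply_view(total=1_000_000, reserve=400_000, burned_qty=50_000, has_clawback=False)
# circulating: 550_000, available: 950_000, burned: 50_000
```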
If the burned ASA does not have clawback enabled, it will remain permanently in this account and can be considered out of circulation. The app will accept ASAs which have clawback enabled, but any such assets can never be considered permanently burned. Users may use the burning app as a convenient receptacle to remove ASAs from their account rather than returning them to the creator account. The app will, of course, only be able to opt into a new ASA if it has sufficient Algo balance to cover the increased minimum balance requirement (MBR). Callers should fund the contract account as needed to cover the opt-in requests. It is possible for the contract to be funded by donated Algo so that subsequent callers need not pay the MBR requirement to request new ASA opt-ins. ## Reference Implementation ### TEAL Approval Program ```plaintext #pragma version 9 // This TEAL was generated by TEALScript v0.62.2 // https://github.com/algorandfoundation/TEALScript // This contract is compliant with and/or implements the following ARCs: [ ARC4 ] // The following ten lines of TEAL handle initial program flow // This pattern is used to make it easy for anyone to parse the start of the program and determine if a specific action is allowed // Here, action refers to the OnComplete in combination with whether the app is being created or called // Every possible action for this contract is represented in the switch statement // If the action is not implemented in the contract, its respective branch will be "NOT_IMPLEMENTED" which just contains "err" txn ApplicationID int 0 > int 6 * txn OnCompletion + switch create_NoOp NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED call_NoOp NOT_IMPLEMENTED: err // arc54_optIntoASA(asset)void // // /* // Sends an inner transaction to opt the contract account into an ASA. // The fee for the inner transaction must be covered by the caller.
// // @param asa The ASA to opt in to abi_route_arc54_optIntoASA: // asa: asset txna ApplicationArgs 1 btoi txnas Assets // execute arc54_optIntoASA(asset)void callsub arc54_optIntoASA int 1 return arc54_optIntoASA: proto 1 0 // contracts/arc54.algo.ts:13 // sendAssetTransfer({ // assetReceiver: globals.currentApplicationAddress, // xferAsset: asa, // assetAmount: 0, // fee: 0, // }) itxn_begin int axfer itxn_field TypeEnum // contracts/arc54.algo.ts:14 // assetReceiver: globals.currentApplicationAddress global CurrentApplicationAddress itxn_field AssetReceiver // contracts/arc54.algo.ts:15 // xferAsset: asa frame_dig -1 // asa: asset itxn_field XferAsset // contracts/arc54.algo.ts:16 // assetAmount: 0 int 0 itxn_field AssetAmount // contracts/arc54.algo.ts:17 // fee: 0 int 0 itxn_field Fee // Submit inner transaction itxn_submit retsub abi_route_createApplication: int 1 return create_NoOp: method "createApplication()void" txna ApplicationArgs 0 match abi_route_createApplication err call_NoOp: method "arc54_optIntoASA(asset)void" txna ApplicationArgs 0 match abi_route_arc54_optIntoASA err ``` ### TealScript Source Code ```plaintext import { Contract } from '@algorandfoundation/tealscript'; // eslint-disable-next-line no-unused-vars class ARC54 extends Contract { /* * Sends an inner transaction to opt the contract account into an ASA. * The fee for the inner transaction must be covered by the caller. 
* * @param asa The ASA to opt in to */ arc54_optIntoASA(asa: Asset): void { sendAssetTransfer({ assetReceiver: globals.currentApplicationAddress, xferAsset: asa, assetAmount: 0, fee: 0, }); } } ``` ### Deployments An application per the above reference implementation has been deployed to each of Algorand’s networks at these app IDs: | Network | App ID | Address | | ------- | ---------- | ---------------------------------------------------------- | | MainNet | 1257620981 | BNFIREKGRXEHCFOEQLTX3PU5SUCMRKDU7WHNBGZA4SXPW42OAHZBP7BPHY | | TestNet | 497806551 | 3TKF2GMZJ5VZ4BQVQGC72BJ63WFN4QBPU2EUD4NQYHFLC3NE5D7GXHXYOQ | | BetaNet | 2019020358 | XRXCALSRDVUY2OQXWDYCRMHPCF346WKIV5JPAHXQ4MZADSROJGDIHZP7AI | ## Security Considerations It should be noted that once an asset is sent to the contract there will be no way to recover the asset unless it has clawback enabled. Due to the simplicity of the TEAL, an audit is not needed. The contract has no code paths which can send tokens, thus there is no concern of an exploit that undoes the burning of ASAs without clawback. ## Copyright Copyright and related rights waived via .
# On-Chain storage/transfer for Multisig
> A smart contract that stores transactions and signatures for simplified multisignature use on Algorand.
## Abstract This ARC proposes the utilization of on-chain smart contracts to facilitate the storage and transfer of Algorand multisignature metadata, transactions, and corresponding signatures for the respective multisignature sub-accounts. ## Motivation Multisignature (multisig) accounts play a crucial role in enhancing security and control within the Algorand ecosystem. However, the management of multisig accounts often involves intricate off-chain coordination and the distribution of transactions among authorized signers. There exists a pressing need for a more streamlined and simplified approach to multisig utilization, along with an efficient transaction signing workflow. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . ### ABI A compliant smart contract, conforming to this ARC, **MUST** implement the following interface: ```json { "name": "ARC-55", "desc": "On-Chain Msig App", "methods": [ { "name": "arc55_getThreshold", "desc": "Retrieve the signature threshold required for the multisignature to be submitted", "readonly": true, "args": [], "returns": { "type": "uint64", "desc": "Multisignature threshold" } }, { "name": "arc55_getAdmin", "desc": "Retrieves the admin address, responsible for calling arc55_setup", "readonly": true, "args": [], "returns": { "type": "address", "desc": "Admin address" } }, { "name": "arc55_nextTransactionGroup", "readonly": true, "args": [], "returns": { "type": "uint64", "desc": "Next expected Transaction Group nonce" } }, { "name": "arc55_getTransaction", "desc": "Retrieve a transaction from a given transaction group", "readonly": true, "args": [ { "name": "transactionGroup", "type": "uint64", "desc": "Transaction Group nonce" }, { "name": "transactionIndex", "type": "uint8", "desc": "Index of transaction within group" } ], "returns": { 
"type": "byte[]", "desc": "A single transaction at the specified index for the transaction group nonce" } }, { "name": "arc55_getSignatures", "desc": "Retrieve a list of signatures for a given transaction group nonce and address", "readonly": true, "args": [ { "name": "transactionGroup", "type": "uint64", "desc": "Transaction Group nonce" }, { "name": "signer", "type": "address", "desc": "Address you want to retrieve signatures for" } ], "returns": { "type": "byte[64][]", "desc": "Array of signatures" } }, { "name": "arc55_getSignerByIndex", "desc": "Find out which address is at this index of the multisignature", "readonly": true, "args": [ { "name": "index", "type": "uint64", "desc": "Address at this index of the multisignature" } ], "returns": { "type": "address", "desc": "Address at index" } }, { "name": "arc55_isSigner", "desc": "Check if an address is a member of the multisignature", "readonly": true, "args": [ { "name": "address", "type": "address", "desc": "Address to check is a signer" } ], "returns": { "type": "bool", "desc": "True if address is a signer" } }, { "name": "arc55_mbrSigIncrease", "desc": "Calculate the minimum balance requirement for storing a signature", "readonly": true, "args": [ { "name": "signaturesSize", "type": "uint64", "desc": "Size (in bytes) of the signatures to store" } ], "returns": { "type": "uint64", "desc": "Minimum balance requirement increase" } }, { "name": "arc55_mbrTxnIncrease", "desc": "Calculate the minimum balance requirement for storing a transaction", "readonly": true, "args": [ { "name": "transactionSize", "type": "uint64", "desc": "Size (in bytes) of the transaction to store" } ], "returns": { "type": "uint64", "desc": "Minimum balance requirement increase" } }, { "name": "arc55_setup", "desc": "Setup On-Chain Msig App. 
This can only be called whilst no transaction groups have been created.", "args": [ { "name": "threshold", "type": "uint8", "desc": "Initial multisig threshold, must be greater than 0" }, { "name": "addresses", "type": "address[]", "desc": "Array of addresses that make up the multisig" } ], "returns": { "type": "void" } }, { "name": "arc55_newTransactionGroup", "desc": "Generate a new transaction group nonce for holding pending transactions", "args": [], "returns": { "type": "uint64", "desc": "transactionGroup Transaction Group nonce" } }, { "name": "arc55_addTransaction", "desc": "Add a transaction to an existing group. Only one transaction should be included per call", "args": [ { "name": "costs", "type": "pay", "desc": "Minimum Balance Requirement for associated box storage costs: (2500) + (400 * (9 + transaction.length))" }, { "name": "transactionGroup", "type": "uint64", "desc": "Transaction Group nonce" }, { "name": "index", "type": "uint8", "desc": "Transaction position within atomic group to add" }, { "name": "transaction", "type": "byte[]", "desc": "Transaction to add" } ], "returns": { "type": "void" }, "events": [ { "name": "TransactionAdded", "args": [ { "name": "transactionGroup", "type": "uint64" }, { "name": "transactionIndex", "type": "uint8" } ], "desc": "Emitted when a new transaction is added to a transaction group" } ] }, { "name": "arc55_addTransactionContinued", "args": [ { "name": "transaction", "type": "byte[]" } ], "returns": { "type": "void" } }, { "name": "arc55_removeTransaction", "desc": "Remove transaction from the app. 
The MBR associated with the transaction will be returned to the transaction sender.", "args": [ { "name": "transactionGroup", "type": "uint64", "desc": "Transaction Group nonce" }, { "name": "index", "type": "uint8", "desc": "Transaction position within atomic group to remove" } ], "returns": { "type": "void" }, "events": [ { "name": "TransactionRemoved", "args": [ { "name": "transactionGroup", "type": "uint64" }, { "name": "transactionIndex", "type": "uint8" } ], "desc": "Emitted when a transaction has been removed from a transaction group" } ] }, { "name": "arc55_setSignatures", "desc": "Set signatures for a particular transaction group. Signatures must be included as an array of byte-arrays", "args": [ { "name": "costs", "type": "pay", "desc": "Minimum Balance Requirement for associated box storage costs: (2500) + (400 * (40 + signatures.length))" }, { "name": "transactionGroup", "type": "uint64", "desc": "Transaction Group nonce" }, { "name": "signatures", "type": "byte[64][]", "desc": "Array of signatures" } ], "returns": { "type": "void" }, "events": [ { "name": "SignatureSet", "args": [ { "name": "transactionGroup", "type": "uint64" }, { "name": "signer", "type": "address" } ], "desc": "Emitted when a new signature is added to a transaction group" } ] }, { "name": "arc55_clearSignatures", "desc": "Clear signatures for an address. 
Be aware this only removes it from the current state of the ledger, and indexers will still know and could use your signature", "args": [ { "name": "transactionGroup", "type": "uint64", "desc": "Transaction Group nonce" }, { "name": "address", "type": "address", "desc": "Address whose signatures to clear" } ], "returns": { "type": "void" }, "events": [ { "name": "SignatureSet", "args": [ { "name": "transactionGroup", "type": "uint64" }, { "name": "signer", "type": "address" } ], "desc": "Emitted when a new signature is added to a transaction group" } ] }, { "name": "createApplication", "args": [], "returns": { "type": "void" } } ], "events": [ { "name": "TransactionAdded", "args": [ { "name": "transactionGroup", "type": "uint64" }, { "name": "transactionIndex", "type": "uint8" } ], "desc": "Emitted when a new transaction is added to a transaction group" }, { "name": "TransactionRemoved", "args": [ { "name": "transactionGroup", "type": "uint64" }, { "name": "transactionIndex", "type": "uint8" } ], "desc": "Emitted when a transaction has been removed from a transaction group" }, { "name": "SignatureSet", "args": [ { "name": "transactionGroup", "type": "uint64" }, { "name": "signer", "type": "address" } ], "desc": "Emitted when a new signature is added to a transaction group" }, { "name": "SignatureCleared", "args": [ { "name": "transactionGroup", "type": "uint64" }, { "name": "signer", "type": "address" } ], "desc": "Emitted when a signature has been removed from a transaction group" } ] } ``` ### Usage The deployment of an -compliant contract is not covered by the ARC and is instead left to the implementer for their own use-case. An internal function `arc55_setAdmin` **SHOULD** be used to initialize an address which will be administering the setup. If left unset, then the admin defaults to the creator address. Once the application exists on-chain it must be setup before it can be used. 
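The minimum balance figures quoted in the `arc55_addTransaction` and `arc55_setSignatures` descriptions above match Algorand's box storage MBR of 2,500 microAlgo per box plus 400 microAlgo per byte of key and value; the constants 9 and 40 plausibly correspond to the box key sizes. A sketch:

```py
# Sketch of the box-storage MBR formulas quoted in the ABI above.
PER_BOX = 2_500   # microAlgo per box
PER_BYTE = 400    # microAlgo per byte of box key + value

def txn_mbr_increase(transaction_size: int) -> int:
    """MBR increase for storing a transaction: 2500 + 400 * (9 + size)."""
    return PER_BOX + PER_BYTE * (9 + transaction_size)

def sig_mbr_increase(signatures_size: int) -> int:
    """MBR increase for storing signatures: 2500 + 400 * (40 + size)."""
    return PER_BOX + PER_BYTE * (40 + signatures_size)

# e.g. storing a single 64-byte signature:
cost = sig_mbr_increase(64)  # 2500 + 400 * 104 = 44_100 microAlgo
```

This MBR is paid via the mandatory `pay` transaction that accompanies each call, and is recoverable when the stored transaction or signatures are removed.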
The ARC-55 admin is responsible for setting up the multisignature metadata using the `arc55_setup(uint8,address[])void` method, passing in the signature threshold and the signer accounts that will make up the multisignature address. After successful deployment and configuration, the application ID **SHOULD** be distributed among the involved parties (signers) as a one-time off-chain exchange. The setup process may be called multiple times to amend the multisignature metadata, as long as no one has created a new transaction group nonce. Once a transaction group nonce has been generated, the metadata is immutable. Before any transactions or signatures can be stored, a new “transaction group nonce” must be generated using the `arc55_newTransactionGroup()uint64` method. This returns a unique value which **MUST** be used for all further interactions. This nonce value allows multiple pending transaction groups to be available simultaneously under the same contract deployment. Do not confuse this value with a transaction group hash. It’s entirely possible to add multiple non-grouped transactions, or multiple different groups, to a single transaction group nonce, up to a limit of 255 transactions; however, it’s unlikely ARC-55 clients will facilitate this. Using a transaction group nonce, the admin or any signer **MAY** add transactions one at a time to that transaction group by providing the transaction data and the index of that transaction within the group using `arc55_addTransaction(pay,uint64,uint8,byte[])void`. A mandatory payment transaction **MUST** be included before the application call and will cover any minimum balance requirements resulting from storing the transaction data. When adding transactions, the index **MUST** start at 0. Once a transaction has successfully been used or is no longer needed, any signer **MAY** remove the transaction data from the group using the `arc55_removeTransaction(uint64,uint8)void` method. 
This will result in the minimum balance requirement being freed up and sent to the transaction sender. Signers **MAY** provide their signature for a particular transaction group by using the `arc55_setSignatures(pay,uint64,byte[64][])void` method. This requires paying the minimum balance requirement used to store their signature, which will be returned to them once their signature is removed. Any signer **MAY** also remove their own or others’ signatures from the contract using the `arc55_clearSignatures(uint64,address)void` method; however, this may not prevent someone from using those signatures. Once a signature has been shared publicly, anyone who can meet the signature threshold can use it to submit the transaction. Once a transaction receives enough signatures to meet the threshold and falls within the valid rounds of the transaction, anyone **MAY** construct the multisignature transaction by including all the signatures and submitting it to the network. Subsequently, participants **SHOULD** clear the signatures and transaction data from the contract. Whilst it’s not part of the ARC, an ARC-55-compliant contract **MAY** be destroyed once it is no longer needed. The process **SHOULD** be performed by the admin and/or application creator, by first reclaiming any outstanding Algo funds through removing transactions and clearing signatures, which avoids permanently locking Algo on the network, and then issuing the `DeleteApplication` call and closing out the application address. It’s important to note that destroying the application does not render the multisignature account inaccessible, as a new deployment with the same multisignature metadata can be configured and used. Below is a typical expected lifecycle: * Creator deploys an ARC-55 compliant smart contract. * Admin performs setup: setting the threshold to 2 and including 2 signer addresses. * Either signer can now generate a new transaction group. 
* Either signer can add a new transaction to the transaction group for signing, providing the MBR. * Signer 1 provides their signature to the transaction group, providing their MBR. * Signer 2 provides their signature to the transaction group, providing their MBR. * Anyone can now submit the transaction to the network. * Either signer can now clear the signatures of each signer, refunding their MBR to each account. * Either signer can remove the transaction since it’s now committed to the network, refunding the MBR to the transaction sender. ### Storage ```plaintext n = Transaction group nonce (uint64) i = Transaction index within group (uint8) addr = signer's address (byte[32]) ``` | Type | Key | Value | Description | | ------ | ----------------- | ------- | ------------------------------------------------------------ | | Global | `arc55_threshold` | uint64 | The multisig signature threshold | | Global | `arc55_nonce` | uint64 | The ARC-55 transaction group nonce | | Global | `arc55_admin` | Address | The admin responsible for calling `arc55_setup` | | Box | n+i | byte\[] | The ith transaction data for the nth transaction group nonce | | Box | n+addr | byte\[] | The signatures for the nth transaction group | | Global | uint8 | Address | The signer address index for the multisig | | Global | Address | uint64 | The number of times this signer appears in the multisig | Whilst the data can be read directly from the application’s storage, there are also read-only methods for use with Algod’s simulate to retrieve the data. Below is a summary of each piece of data, how and where it’s stored, and its associated method call. #### Threshold The threshold is stored in the global state of the application as a uint64 value. It’s immutable once the first transaction group nonce has been generated. The associated read-only method is `arc55_getThreshold()uint64`, which will return the signature threshold for the multisignature account. 
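As a sketch (helper names are ours, not part of the ARC), the box keys from the storage table above, and the signature-box MBR formula quoted in the ABI description earlier, can be derived off-chain like so, assuming the uint64 nonce is encoded big-endian:

```typescript
// Sketch: derive ARC-55 box keys and the signature-box MBR described above.
// Helper names are illustrative; the uint64 nonce is assumed big-endian.

// Box key for the i-th transaction of group nonce n: uint64(n) ++ uint8(i)
function transactionBoxKey(nonce: bigint, index: number): Uint8Array {
  const key = new Uint8Array(9);
  new DataView(key.buffer).setBigUint64(0, nonce); // big-endian by default
  key[8] = index;
  return key;
}

// Box key for a signer's signature box: uint64(n) ++ 32-byte public key
function signatureBoxKey(nonce: bigint, publicKey: Uint8Array): Uint8Array {
  const key = new Uint8Array(40);
  new DataView(key.buffer).setBigUint64(0, nonce);
  key.set(publicKey, 8);
  return key;
}

// MBR (in microAlgos) for storing signatures, per the formula in the ABI
// description: (2500) + (400 * (40 + signatures.length)), with a 40-byte key
function signatureStorageMbr(signaturesByteLength: number): number {
  return 2500 + 400 * (40 + signaturesByteLength);
}
```

For example, `transactionBoxKey(1n, 0)` base64-encodes to `AAAAAAAAAAEA`, and storing a single 64-byte signature costs `2500 + 400 * (40 + 64) = 44100` microAlgos.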
#### Multisig Signer Addresses A multisignature address is made up of one or more addresses. The contract stores these addresses in global state twice: once as the positional index, and a second time to identify how many times they’re being used. This allows for simpler on-chain processing within the smart contract to identify 1) if the account is used, and 2) where the account should be used when reconstructing the multisignature. There are two associated read-only methods for obtaining and checking multisignature signer addresses. To retrieve the list of indexed addresses, you **SHOULD** use `arc55_getSignerByIndex(uint64)address`, which will return the signer address at the given multisignature index. This can be done incrementally until you reach the end of the available indexes. To check if an address is a signer for the multisignature account, you **SHOULD** use `arc55_isSigner(address)boolean`, which will return a `true` or `false` value. #### Transactions All transactions are stored individually within boxes, where each box name identifies its related transaction group nonce. The box names are a concatenation of a uint64 and a uint8, representing the transaction group nonce and transaction index. This allows off-chain services to list all boxes belonging to an application and quickly group and identify how many transaction groups and transactions are available. The associated read-only method is `arc55_getTransaction(uint64,uint8)byte[]`, which will return the transaction for a given transaction group nonce and transaction index. Note: To retrieve data larger than 1024 bytes, simulate must be called with `AllowMoreLogging` set to true. Example Group Transaction Nonce: `1` (uint64) Transaction Index: `0` (uint8) Hex: `000000000000000100` Box name: `AAAAAAAAAAEA` (base64) #### Signatures Signers store their signatures in a single box per transaction group nonce. 
Multiple signatures **MUST** be concatenated together in the same order as the transactions within the group. The box name is made up of the transaction group nonce and the signer’s public key, which is later used when removing the signatures to identify where to refund the minimum balance requirement. The associated read-only method is `arc55_getSignatures(uint64,address)byte[64][]`, which will return the signatures for a given transaction group nonce and signer address. Example Group Transaction Nonce: `1` (uint64) Signer: `ALICE7Y2JOFGG2VGUC64VINB75PI56O6M2XW233KG2I3AIYJFUD4QMYTJM` (address) Hex: `000000000000000102d0227f1a4b8a636aa6a0bdcaa1a1ff5e8ef9de66af6d6f6a3691b023092d07` Box name: `AAAAAAAAAAEC0CJ/GkuKY2qmoL3KoaH/Xo753mavbW9qNpGwIwktBw==` (base64) ## Rationale Establishing individual deployments for distinct user groups, as opposed to relying on a singular instance accessible to all, presents numerous advantages. First, this approach facilitates the implementation and expansion of functionalities well beyond the scope initially envisioned by the ARC. It enables the integration of entirely customized smart contracts that adhere to the ARC while not being constrained by it. Furthermore, in the context of third-party infrastructures, the management of numerous boxes for a singular monolithic application can become increasingly cumbersome over time. In contrast, by empowering small groups to create their own multisig applications, each group can subscribe exclusively to its unique application ID, streamlining the monitoring of it for new transactions and signatures. ### Limitations and Design Decisions The available transaction size is the most critical limitation within this implementation. 
For transactions larger than 2048 bytes (the maximum application argument size), additional transactions using the method `arc55_addTransactionContinued(byte[])void` can be used and sent within the same group as the `arc55_addTransaction(pay,uint64,uint8,byte[])void` call. This allows the storing of up to 4096 bytes per transaction. Note: The minimum balance requirement must be paid in full by the preceding payment transaction of the `addTransaction` call. This ARC inherently promotes transparency of transactions and signers. If an additional layer of anonymity is required, an extension to this ARC **SHOULD** be proposed, outlining how to store and share encrypted data. The current design necessitates that all transactions within the group be exclusively signed by the constituents of the multisig account. If a group transaction requires a separate signature from another account or a logicsig, this design does not support it. An extension to this ARC **SHOULD** be considered to address such scenarios. ## Reference Implementation A TEALScript reference implementation is available at . This version has been written as an inheritable class, so it can be included on top of an existing project to give you an ARC-55-compliant interface. Others are encouraged to implement this standard in their preferred smart contract language and even extend its capabilities whilst adhering to the provided ABI specification. ## Security Considerations This ARC’s design solely involves storing existing data structures and does not have the capability to create or use multisignature accounts. Therefore, the security implications are minimal. End users are expected to review each transaction before generating a signature for it. 
If a smart contract implementing this ARC lacks proper security checks, the worst-case scenario would involve incorrect transactions and invalid signatures being stored on-chain, along with the potential loss of the minimum balance requirement from the application account. ## Copyright Copyright and related rights waived via .
# Extended App Description
> Adds information to the ABI JSON description
## Abstract This ARC takes the existing JSON description of a contract as described in ARC-4 and adds more fields for the purpose of client interaction. ## Motivation The data provided by ARC-4 is missing a lot of critical information that clients should know when interacting with an app. This means ARC-4 is insufficient to generate type-safe clients that provide a superior developer experience. On the other hand, ARC-32 provides the vast majority of useful information that can be used to , but requires a separate JSON file on top of the ARC-4 JSON file, which adds extra complexity and cognitive overhead. ## Specification ### Contract Interface Every application is described via the following interface, which is an extension of the `Contract` interface described in ARC-4. ```ts /** Describes the entire contract. This interface is an extension of the interface described in ARC-4 */ interface Contract { /** The ARCs used and/or supported by this contract. All contracts implicitly support ARC4 and ARC56 */ arcs: number[]; /** A user-friendly name for the contract */ name: string; /** Optional, user-friendly description for the interface */ desc?: string; /** * Optional object listing the contract instances across different networks. * The key is the base64 genesis hash of the network, and the value contains * information about the deployed contract in the network indicated by the * key. A key containing the human-readable name of the network MAY be * included, but the corresponding genesis hash key MUST also be defined */ networks?: { [network: string]: { /** The app ID of the deployed contract in this network */ appID: number; }; }; /** Named structs used by the application. Each struct field appears in the same order as ABI encoding. 
*/ structs: { [structName: StructName]: StructField[] }; /** All of the methods that the contract implements */ methods: Method[]; state: { /** Defines the values that should be used for GlobalNumUint, GlobalNumByteSlice, LocalNumUint, and LocalNumByteSlice when creating the application */ schema: { global: { ints: number; bytes: number; }; local: { ints: number; bytes: number; }; }; /** Mapping of human-readable names to StorageKey objects */ keys: { global: { [name: string]: StorageKey }; local: { [name: string]: StorageKey }; box: { [name: string]: StorageKey }; }; /** Mapping of human-readable names to StorageMap objects */ maps: { global: { [name: string]: StorageMap }; local: { [name: string]: StorageMap }; box: { [name: string]: StorageMap }; }; }; /** Supported bare actions for the contract. An action is a combination of call/create and an OnComplete */ bareActions: { /** OnCompletes this method allows when appID === 0 */ create: ("NoOp" | "OptIn" | "DeleteApplication")[]; /** OnCompletes this method allows when appID !== 0 */ call: ( | "NoOp" | "OptIn" | "CloseOut" | "UpdateApplication" | "DeleteApplication" )[]; }; /** Information about the TEAL programs */ sourceInfo?: { /** Approval program information */ approval: ProgramSourceInfo; /** Clear program information */ clear: ProgramSourceInfo; }; /** The pre-compiled TEAL that may contain template variables. MUST be omitted if included as part of ARC23 */ source?: { /** The approval program */ approval: string; /** The clear program */ clear: string; }; /** The compiled bytecode for the application. MUST be omitted if included as part of ARC23 */ byteCode?: { /** The approval program */ approval: string; /** The clear program */ clear: string; }; /** Information used to get the given byteCode and/or PC values in sourceInfo. 
MUST be given if byteCode or PC values are present */ compilerInfo?: { /** The name of the compiler */ compiler: "algod" | "puya"; /** Compiler version information */ compilerVersion: { major: number; minor: number; patch: number; commitHash?: string; }; }; /** ARC-28 events that MAY be emitted by this contract */ events?: Array<Event>; /** A mapping of template variable names as they appear in the TEAL (not including TMPL_ prefix) to their respective types and values (if applicable) */ templateVariables?: { [name: string]: { /** The type of the template variable */ type: ABIType | AVMType | StructName; /** If given, the base64 encoded value used for the given app/program */ value?: string; }; }; /** The scratch variables used during runtime */ scratchVariables?: { [name: string]: { slot: number; type: ABIType | AVMType | StructName; }; }; } ``` ### Method Interface Every method in the contract is described via a `Method` interface. This interface is an extension of the one defined in ARC-4. ```ts /** Describes a method in the contract. This interface is an extension of the interface described in ARC-4 */ interface Method { /** The name of the method */ name: string; /** Optional, user-friendly description for the method */ desc?: string; /** The arguments of the method, in order */ args: Array<{ /** The type of the argument. The `struct` field should also be checked to determine if this arg is a struct. */ type: ABIType; /** If the type is a struct, the name of the struct */ struct?: StructName; /** Optional, user-friendly name for the argument */ name?: string; /** Optional, user-friendly description for the argument */ desc?: string; /** The default value that clients should use. 
*/ defaultValue?: { /** Where the default value is coming from * - box: The data key signifies the box key to read the value from * - global: The data key signifies the global state key to read the value from * - local: The data key signifies the local state key to read the value from (for the sender) * - literal: the value is a literal and should be passed directly as the argument * - method: The utf8 signature of the method in this contract to call to get the default value. If the method has arguments, they all must have default values. The method **MUST** be readonly so simulate can be used to get the default value. */ source: "box" | "global" | "local" | "literal" | "method"; /** Base64 encoded bytes, base64 ARC4 encoded uint64, or UTF-8 method selector */ data: string; /** How the data is encoded. This is the encoding for the data provided here, not the arg type. Undefined if the data is method selector */ type?: ABIType | AVMType; }; }>; /** Information about the method's return value */ returns: { /** The type of the return value, or "void" to indicate no return value. The `struct` field should also be checked to determine if this return value is a struct. 
*/ type: ABIType; /** If the type is a struct, the name of the struct */ struct?: StructName; /** Optional, user-friendly description for the return value */ desc?: string; }; /** An action is a combination of call/create and an OnComplete */ actions: { /** OnCompletes this method allows when appID === 0 */ create: ("NoOp" | "OptIn" | "DeleteApplication")[]; /** OnCompletes this method allows when appID !== 0 */ call: ( | "NoOp" | "OptIn" | "CloseOut" | "UpdateApplication" | "DeleteApplication" )[]; }; /** If this method does not write anything to the ledger (ARC-22) */ readonly?: boolean; /** ARC-28 events that MAY be emitted by this method */ events?: Array<Event>; /** Information that clients can use when calling the method */ recommendations?: { /** The number of inner transactions the caller should cover the fees for */ innerTransactionCount?: number; /** Recommended box references to include */ boxes?: { /** The app ID for the box */ app?: number; /** The base64 encoded box key */ key: string; /** The number of bytes being read from the box */ readBytes: number; /** The number of bytes being written to the box */ writeBytes: number; }; /** Recommended foreign accounts */ accounts?: string[]; /** Recommended foreign apps */ apps?: number[]; /** Recommended foreign assets */ assets?: number[]; }; } ``` ### Event Interface ARC-28 events are described using an extension of the original interface described in that ARC, with the addition of an optional struct field for arguments. ```ts interface Event { /** The name of the event */ name: string; /** Optional, user-friendly description for the event */ desc?: string; /** The arguments of the event, in order */ args: Array<{ /** The type of the argument. The `struct` field should also be checked to determine if this arg is a struct. 
*/ type: ABIType; /** Optional, user-friendly name for the argument */ name?: string; /** Optional, user-friendly description for the argument */ desc?: string; /** If the type is a struct, the name of the struct */ struct?: StructName; }>; } ``` ### Type Interfaces The types defined in ARC-4 may not fully describe the best way to use the ABI values as intended by the contract developers. These type interfaces are intended to supplement ABI types so clients can interact with the contract as intended. ```ts /** An ABI-encoded type */ type ABIType = string; /** The name of a defined struct */ type StructName = string; /** Raw byteslice without the length prefix that is specified in ARC-4 */ type AVMBytes = "AVMBytes"; /** A utf-8 string without the length prefix that is specified in ARC-4 */ type AVMString = "AVMString"; /** A 64-bit unsigned integer */ type AVMUint64 = "AVMUint64"; /** A native AVM type */ type AVMType = AVMBytes | AVMString | AVMUint64; /** Information about a single field in a struct */ interface StructField { /** The name of the struct field */ name: string; /** The type of the struct field's value */ type: ABIType | StructName | StructField[]; } ``` ### Storage Interfaces These interfaces properly describe how app storage is accessed within the contract. ```ts /** Describes a single key in app storage */ interface StorageKey { /** Description of what this storage key holds */ desc?: string; /** The type of the key */ keyType: ABIType | AVMType | StructName; /** The type of the value */ valueType: ABIType | AVMType | StructName; /** The bytes of the key encoded as base64 */ key: string; } /** Describes a mapping of key-value pairs in storage */ interface StorageMap { /** Description of what the key-value pairs in this mapping hold */ desc?: string; /** The type of the keys in the map */ keyType: ABIType | AVMType | StructName; /** The type of the values in the map */ valueType: ABIType | AVMType | StructName; /** The base64-encoded prefix of the map 
keys*/ prefix?: string; } ``` ### SourceInfo Interface These interfaces give clients more information about the contract’s source code. ```ts interface ProgramSourceInfo { /** The source information for the program */ sourceInfo: SourceInfo[]; /** How the program counter offset is calculated * - none: The pc values in sourceInfo are not offset * - cblocks: The pc values in sourceInfo are offset by the PC of the first op following the last cblock at the top of the program */ pcOffsetMethod: "none" | "cblocks"; } interface SourceInfo { /** The program counter value(s). Could be offset if pcOffsetMethod is not "none" */ pc: Array<number>; /** A human-readable string that describes the error when the program fails at the given PC */ errorMessage?: string; /** The TEAL line number that corresponds to the given PC. RECOMMENDED to be used for development purposes, but not required for clients */ teal?: number; /** The original source file and line number that corresponds to the given PC. RECOMMENDED to be used for development purposes, but not required for clients */ source?: string; } ``` ### Template Variables Template variables are variables in the TEAL that should be substituted prior to compilation. The usage of the variable **MUST** appear in the TEAL starting with `TMPL_`. Template variables **MUST** be an argument to either `bytecblock` or `intcblock`. If a program has template variables, `bytecblock` and `intcblock` **MUST** be the first two opcodes in the program (unless one is not used). #### Example ```js #pragma version 10 bytecblock 0xdeadbeef TMPL_FOO intcblock 0x12345678 TMPL_BAR ``` ### Dynamic Template Variables When a program has a template variable with a dynamic length, the `pcOffsetMethod` in `ProgramSourceInfo` **MUST** be `cblocks`. The `pc` value in each `SourceInfo` **MUST** be the pc determined at compilation minus the last `pc` value of the last `cblock` at compilation. 
When a client is leveraging a source map with `cblocks` as the `pcOffsetMethod`, it **MUST** determine the `pc` value by parsing the bytecode to get the PC value of the first op following the last `cblock` at the top of the program. See the reference implementation section for an example of how to do this. ## Rationale ARC-32 essentially addresses the same problem, but it requires the generation of two separate JSON files, and the ARC-32 JSON file contains the ARC-4 JSON file within it (redundant information). The goal of this ARC is to create one JSON schema that is backwards compatible with ARC-4 clients, but contains the relevant information needed to automatically generate comprehensive client experiences. ### State Describes all of the state that MAY exist in the app and how one should decode values. The schema object provides the values required when creating the app. ### Named Structs It is common for high-level languages to support named structs, which give names to the indexes of elements in an ABI tuple. The same structs should be usable on the client-side just as they are used in the contract. ### Action This is one of the biggest deviations from ARC-32, but provides a much simpler interface to describe and understand what any given method can do. ## Backwards Compatibility The JSON schema defined in this ARC should be compatible with all ARC-4 clients, provided they don’t do any strict schema checking for extraneous fields. ## Test Cases NA ## Reference Implementation ### Calculating cblock Offsets Below is an example of how to determine the TEAL/source line for a PC from an algod error message when the `pcOffsetMethod` is `cblocks`. 
```ts /** An ARC56 JSON file */ import arc56Json from "./arc56.json"; /** The bytecblock opcode */ const BYTE_CBLOCK = 38; /** The intcblock opcode */ const INT_CBLOCK = 32; /** * Get the offset of the last constant block at the beginning of the program * This value is used to calculate the program counter for an ARC56 program that has a pcOffsetMethod of "cblocks" * * @param program The program to parse * @returns The PC value of the opcode after the last constant block */ function getConstantBlockOffset(program: Uint8Array) { const bytes = [...program]; const programSize = bytes.length; bytes.shift(); // remove version /** The PC of the opcode after the bytecblock */ let bytecblockOffset: number | undefined; /** The PC of the opcode after the intcblock */ let intcblockOffset: number | undefined; while (bytes.length > 0) { /** The current byte from the beginning of the byte array */ const byte = bytes.shift()!; // If the byte is a constant block... if (byte === BYTE_CBLOCK || byte === INT_CBLOCK) { const isBytecblock = byte === BYTE_CBLOCK; /** The byte following the opcode is the number of values in the constant block */ const valuesRemaining = bytes.shift()!; // Iterate over all the values in the constant block for (let i = 0; i < valuesRemaining; i++) { if (isBytecblock) { /** The byte following the opcode is the length of the next element */ const length = bytes.shift()!; bytes.splice(0, length); } else { // intcblock is a uvarint, so we need to keep reading until we find the end (MSB is not set) while ((bytes.shift()! & 0x80) !== 0) { // Do nothing... } } } if (isBytecblock) bytecblockOffset = programSize - bytes.length - 1; else intcblockOffset = programSize - bytes.length - 1; if (bytes[0] !== BYTE_CBLOCK && bytes[0] !== INT_CBLOCK) { // if the next opcode isn't a constant block, we're done break; } } } return Math.max(bytecblockOffset ?? 0, intcblockOffset ?? 0); } /** The error message from algod */ const algodError = "Network request error. 
Received status 400 (Bad Request): TransactionPool.Remember: transaction ZR2LAFLRQYFZFV6WVKAPH6CANJMIBLLH5WRTSWT5CJHFVMF4UIFA: logic eval error: assert failed pc=162. Details: app=11927, pc=162, opcodes=log; intc_0 // 0; assert"; /** The PC of the error */ const pc = Number(algodError.match(/pc=(\d+)/)![1]); // Parse the ARC56 JSON to determine if the PC values are offset by the constant blocks if (arc56Json.sourceInfo.approval.pcOffsetMethod === "cblocks") { /** The program can either be cached locally OR retrieved via the algod API */ const program = new Uint8Array([ 10, 32, 3, 0, 1, 6, 38, 3, 64, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 32, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 3, 102, 111, 111, 40, 41, 34, 42, 49, 24, 20, 129, 6, 11, 49, 25, 8, 141, 12, 0, 85, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 71, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 136, 0, 3, 129, 1, 67, 138, 0, 0, 42, 176, 34, 68, 137, 136, 0, 3, 129, 1, 67, 138, 0, 0, 42, 40, 41, 132, 137, 136, 0, 3, 129, 1, 67, 138, 0, 0, 0, 137, 128, 4, 21, 31, 124, 117, 136, 0, 13, 73, 21, 22, 87, 6, 2, 76, 80, 80, 176, 129, 1, 67, 138, 0, 1, 34, 22, 137, 129, 1, 67, 128, 4, 184, 68, 123, 54, 54, 26, 0, 142, 1, 255, 240, 0, 128, 4, 154, 113, 210, 180, 128, 4, 223, 77, 92, 59, 128, 4, 61, 135, 13, 135, 128, 4, 188, 11, 23, 6, 54, 26, 0, 142, 4, 255, 135, 255, 149, 255, 163, 255, 174, 0, ]); /** Get the offset of the last constant block */ const offset = getConstantBlockOffset(program); /** Find the source info object that corresponds to the error's PC */ const sourceInfoObject = arc56Json.sourceInfo.approval.sourceInfo.find((s) => s.pc.includes(pc - offset) )!; /** Get the TEAL line and source line that corresponds to the error 
*/ console.log( `Error at PC ${pc} corresponds to TEAL line ${sourceInfoObject.teal} and source line ${sourceInfoObject.source}` ); } ``` ## Security Considerations The type values used in methods **MUST** be correct, because if they were not then the method would not be callable. For state, however, it is possible to have an incorrect type encoding defined. Any significant security concern from this possibility is not immediately evident, but it is worth considering. ## Copyright Copyright and related rights waived via .
# ASA Inbox Router
> An application that can route ASAs to users or hold them to later be claimed
## Abstract The goal of this ARC is to establish a standard in the Algorand ecosystem by which ASAs can be sent to an intended receiver even if their account is not opted in to the ASA. A wallet custodied by an application will be used to custody assets on behalf of a given user, with only that user being able to withdraw assets. A master application will be used to map inbox addresses to user addresses. This master application can route ASAs to users, performing whatever actions are necessary. If integrated into ecosystem technologies including wallets, explorers, and dApps, this standard can provide enhanced capabilities around ASAs, which are otherwise strictly bound at the protocol level to require opting in to be received. ## Motivation Algorand requires accounts to opt in to receive any ASA, a fact which simultaneously: 1. Grants account holders fine-grained control over their holdings by allowing them to select which assets to allow and preventing receipt of unwanted tokens. 2. Frustrates users and developers when accounting for this requirement, especially since other blockchains do not have this requirement. This ARC lays out a new way to navigate the ASA opt-in requirement. ### Contemplated Use Cases The following use cases help explain how this capability can enhance the possibilities within the Algorand ecosystem. #### Airdrops An ASA creator who wants to send their asset to a set of accounts faces the challenge of needing their intended receivers to opt in to the ASA ahead of time, which requires non-trivial communication efforts and precludes the possibility of completing the airdrop as a surprise. This claimable ASA standard creates the ability to send an airdrop out to individual addresses so that the receivers can opt in and claim the asset at their convenience—or not, if they so choose. 
#### Reducing New User On-boarding Friction An application operator who wants to on-board users to their game or business may want to reduce the friction of getting people started by decoupling their application on-boarding process from the process of funding a non-custodial Algorand wallet, if users are wholly new to the Algorand ecosystem. As long as the receiver’s address is known, an ASA can be sent to them ahead of them having ALGOs in their wallet to cover the minimum balance requirement and opt in to the asset. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . ### Deployments This ARC works best when there is a singleton deployment per network. Below are the app IDs for the canonical deployments: | Network | App ID | | ------- | ------------ | | Mainnet | `2449590623` | | Testnet | `643020148` | ### Router Contract JSON ```json { "name": "ARC59", "desc": "", "methods": [ { "name": "createApplication", "desc": "Deploy ARC59 contract", "args": [], "returns": { "type": "void" }, "actions": { "create": ["NoOp"], "call": [] } }, { "name": "arc59_optRouterIn", "desc": "Opt the ARC59 router into the ASA. 
This is required before this app can be used to send the ASA to anyone.", "args": [ { "name": "asa", "type": "uint64", "desc": "The ASA to opt into" } ], "returns": { "type": "void" }, "actions": { "create": [], "call": ["NoOp"] } }, { "name": "arc59_getOrCreateInbox", "desc": "Gets the existing inbox for the receiver or creates a new one if it does not exist", "args": [ { "name": "receiver", "type": "address", "desc": "The address to get or create the inbox for" } ], "returns": { "type": "address", "desc": "The inbox address" }, "actions": { "create": [], "call": ["NoOp"] } }, { "name": "arc59_getSendAssetInfo", "args": [ { "name": "receiver", "type": "address", "desc": "The address to send the asset to" }, { "name": "asset", "type": "uint64", "desc": "The asset to send" } ], "returns": { "type": "(uint64,uint64,bool,bool,uint64,uint64)", "desc": "Returns the following information for sending an asset:\nThe number of itxns required, the MBR required, whether the router is opted in, whether the receiver is opted in,\nand how much ALGO the receiver would need to claim the asset", "struct": "SendAssetInfo" }, "actions": { "create": [], "call": ["NoOp"] } }, { "name": "arc59_sendAsset", "desc": "Send an asset to the receiver", "args": [ { "name": "axfer", "type": "axfer", "desc": "The asset transfer to this app" }, { "name": "receiver", "type": "address", "desc": "The address to send the asset to" }, { "name": "additionalReceiverFunds", "type": "uint64", "desc": "The amount of ALGO to send to the receiver/inbox in addition to the MBR" } ], "returns": { "type": "address", "desc": "The address that the asset was sent to (either the receiver or their inbox)" }, "actions": { "create": [], "call": ["NoOp"] } }, { "name": "arc59_claim", "desc": "Claim an ASA from the inbox", "args": [ { "name": "asa", "type": "uint64", "desc": "The ASA to claim" } ], "returns": { "type": "void" }, "actions": { "create": [], "call": ["NoOp"] } }, { "name": "arc59_reject", "desc": "Reject the 
ASA by closing it out to the ASA creator. Always sends two inner transactions.\nAll non-MBR ALGO balance in the inbox will be sent to the caller.", "args": [ { "name": "asa", "type": "uint64", "desc": "The ASA to reject" } ], "returns": { "type": "void" }, "actions": { "create": [], "call": ["NoOp"] } }, { "name": "arc59_getInbox", "desc": "Get the inbox address for the given receiver", "args": [ { "name": "receiver", "type": "address", "desc": "The receiver to get the inbox for" } ], "returns": { "type": "address", "desc": "Zero address if the receiver does not yet have an inbox, otherwise the inbox address" }, "actions": { "create": [], "call": ["NoOp"] } }, { "name": "arc59_claimAlgo", "desc": "Claim any extra algo from the inbox", "args": [], "returns": { "type": "void" }, "actions": { "create": [], "call": ["NoOp"] } } ], "arcs": [4, 56], "structs": { "SendAssetInfo": [ { "name": "itxns", "type": "uint64" }, { "name": "mbr", "type": "uint64" }, { "name": "routerOptedIn", "type": "bool" }, { "name": "receiverOptedIn", "type": "bool" }, { "name": "receiverAlgoNeededForClaim", "type": "uint64" }, { "name": "receiverAlgoNeededForWorstCaseClaim", "type": "uint64" } ] }, "state": { "schema": { "global": { "bytes": 0, "ints": 0 }, "local": { "bytes": 0, "ints": 0 } }, "keys": { "global": {}, "local": {}, "box": {} }, "maps": { "global": {}, "local": {}, "box": { "inboxes": { "keyType": "address", "valueType": "address" } } } }, "bareActions": { "create": [], "call": [] } } ``` **NOTE:** This ARC-56 spec does not include the source information, including error mapping, because the deployment used a version of TEALScript to compile the contract prior to ARC-56 support. ### Sending an Asset When sending an asset, the sender **SHOULD** call `ARC59_getSendAssetInfo` to determine relevant information about the receiver and the router. 
This information is returned as a tuple, described below:

| Index | Object Property            | Description                                                                  | Type   |
| ----- | -------------------------- | ---------------------------------------------------------------------------- | ------ |
| 0     | itxns                      | The number of itxns required                                                 | uint64 |
| 1     | mbr                        | The amount of ALGO the sender **MUST** send to the router contract to cover MBR | uint64 |
| 2     | routerOptedIn              | Whether the router is already opted in to the asset                          | bool   |
| 3     | receiverOptedIn            | Whether the receiver is already directly opted in to the asset               | bool   |
| 4     | receiverAlgoNeededForClaim | The amount of ALGO the receiver would currently need to claim the asset      | uint64 |

This information can then be used to send the asset. An example of using this information to send an asset is shown in .

### Claiming an Asset

When claiming an asset, the claimer **MUST** call `arc59_claim` to claim the asset from their inbox. This will transfer the asset to the claimer, and any extra ALGO in the inbox will be sent to the claimer as well. Prior to sending the `arc59_claim` app call, a call to `arc59_claimAlgo` **SHOULD** be made to claim any extra ALGO in the inbox if the inbox balance is above its minimum balance. An example of claiming an asset is shown in .

## Rationale

This design was created to offer a standard mechanism by which wallets, explorers, and dapps could enable users to send, receive, and find claimable ASAs without requiring any changes to the core protocol. This ARC is intended to replace . This ARC is simpler than , the main feature lost being that senders do not get their MBR back. Given the significant reduction in complexity, this is considered a worthwhile tradeoff. Having no way to recover MBR also serves to disincentivize spam.

### Rejection

The initial proposal for this ARC included a method for burning that leveraged . After further consideration, it was decided to replace the burn functionality with a reject method.
The reject method does not burn the ASA. It simply closes it out to the ASA creator. This decision was made to reduce the additional complexity and potential user friction that opt-ins introduced.

### Router MBR

It should be noted that the MBR for the router contract itself is non-recoverable. This was an intentional decision that results in more predictable costs for assets that may frequently be sent through the router, such as stablecoins.

## Test Cases

Test cases for the JavaScript client and the smart contract implementation can be found

## Reference Implementation

A project with the full reference implementation, including the smart contract and JavaScript library (used for testing), can be found .

### Router Contract

This contract is written using TEALScript v0.90.3

```ts
/* eslint-disable max-classes-per-file */
// eslint-disable-next-line import/no-unresolved, import/extensions
import { Contract } from "@algorandfoundation/tealscript";

type SendAssetInfo = {
  /**
   * The total number of inner transactions required to send the asset through the router.
   * This should be used to add extra fees to the app call
   */
  itxns: uint64;
  /** The total MBR the router needs to send the asset through the router. */
  mbr: uint64;
  /** Whether the router is already opted in to the asset or not */
  routerOptedIn: boolean;
  /** Whether the receiver is already directly opted in to the asset or not */
  receiverOptedIn: boolean;
  /** The amount of ALGO the receiver would currently need to claim the asset */
  receiverAlgoNeededForClaim: uint64;
};

class ControlledAddress extends Contract {
  @allow.create("DeleteApplication")
  new(): Address {
    sendPayment({
      rekeyTo: this.txn.sender,
    });

    return this.app.address;
  }
}

export class ARC59 extends Contract {
  inboxes = BoxMap<Address, Address>();

  /**
   * Deploy ARC59 contract
   */
  createApplication(): void {}

  /**
   * Opt the ARC59 router into the ASA. This is required before this app can be used to send the ASA to anyone.
   *
   * @param asa The ASA to opt into
   */
  arc59_optRouterIn(asa: AssetID): void {
    sendAssetTransfer({
      assetReceiver: this.app.address,
      assetAmount: 0,
      xferAsset: asa,
    });
  }

  /**
   * Gets the existing inbox for the receiver or creates a new one if it does not exist
   *
   * @param receiver The address to get or create the inbox for
   * @returns The inbox address
   */
  arc59_getOrCreateInbox(receiver: Address): Address {
    if (this.inboxes(receiver).exists) return this.inboxes(receiver).value;

    const inbox = sendMethodCall<typeof ControlledAddress.prototype.new>({
      onCompletion: OnCompletion.DeleteApplication,
      approvalProgram: ControlledAddress.approvalProgram(),
      clearStateProgram: ControlledAddress.clearProgram(),
    });

    this.inboxes(receiver).value = inbox;

    return inbox;
  }

  /**
   * @param receiver The address to send the asset to
   * @param asset The asset to send
   *
   * @returns Returns the following information for sending an asset:
   * The number of itxns required, the MBR required, whether the router is opted in, whether the receiver is opted in,
   * and how much ALGO the receiver would need to claim the asset
   */
  arc59_getSendAssetInfo(receiver: Address, asset: AssetID): SendAssetInfo {
    const routerOptedIn = this.app.address.isOptedInToAsset(asset);
    const receiverOptedIn = receiver.isOptedInToAsset(asset);
    const info: SendAssetInfo = {
      itxns: 1,
      mbr: 0,
      routerOptedIn: routerOptedIn,
      receiverOptedIn: receiverOptedIn,
      receiverAlgoNeededForClaim: 0,
    };

    if (receiverOptedIn) return info;

    const algoNeededToClaim = receiver.minBalance + globals.assetOptInMinBalance + globals.minTxnFee;

    // Determine how much ALGO the receiver needs to claim the asset
    if (receiver.balance < algoNeededToClaim) {
      info.receiverAlgoNeededForClaim += algoNeededToClaim - receiver.balance;
    }

    // Add mbr and transaction for opting the router in
    if (!routerOptedIn) {
      info.mbr += globals.assetOptInMinBalance;
      info.itxns += 1;
    }

    if (!this.inboxes(receiver).exists) {
      // Two itxns to create inbox (create + rekey)
      // One itxns to send MBR
      // One itxn to opt in
      info.itxns += 4;

      // Calculate the MBR for the inbox box
      const preMBR = globals.currentApplicationAddress.minBalance;
      this.inboxes(receiver).value = globals.zeroAddress;
      const boxMbrDelta = globals.currentApplicationAddress.minBalance - preMBR;
      this.inboxes(receiver).delete();

      // MBR = MBR for the box + min balance for the inbox + ASA MBR
      info.mbr += boxMbrDelta + globals.minBalance + globals.assetOptInMinBalance;

      return info;
    }

    const inbox = this.inboxes(receiver).value;

    if (!inbox.isOptedInToAsset(asset)) {
      // One itxn to opt in
      info.itxns += 1;

      if (!(inbox.balance >= inbox.minBalance + globals.assetOptInMinBalance)) {
        // One itxn to send MBR
        info.itxns += 1;

        // MBR = ASA MBR
        info.mbr += globals.assetOptInMinBalance;
      }
    }

    return info;
  }

  /**
   * Send an asset to the receiver
   *
   * @param receiver The address to send the asset to
   * @param axfer The asset transfer to this app
   * @param additionalReceiverFunds The amount of ALGO to send to the receiver/inbox in addition to the MBR
   *
   * @returns The address that the asset was sent to (either the receiver or their inbox)
   */
  arc59_sendAsset(
    axfer: AssetTransferTxn,
    receiver: Address,
    additionalReceiverFunds: uint64
  ): Address {
    verifyAssetTransferTxn(axfer, {
      assetReceiver: this.app.address,
    });

    // If the receiver is opted in, send directly to their account
    if (receiver.isOptedInToAsset(axfer.xferAsset)) {
      sendAssetTransfer({
        assetReceiver: receiver,
        assetAmount: axfer.assetAmount,
        xferAsset: axfer.xferAsset,
      });

      if (additionalReceiverFunds !== 0) {
        sendPayment({
          receiver: receiver,
          amount: additionalReceiverFunds,
        });
      }

      return receiver;
    }

    const inboxExisted = this.inboxes(receiver).exists;
    const inbox = this.arc59_getOrCreateInbox(receiver);

    if (additionalReceiverFunds !== 0) {
      sendPayment({
        receiver: inbox,
        amount: additionalReceiverFunds,
      });
    }

    if (!inbox.isOptedInToAsset(axfer.xferAsset)) {
      let inboxMbrDelta = globals.assetOptInMinBalance;
      if (!inboxExisted) inboxMbrDelta += globals.minBalance;

      // Ensure the inbox has enough balance to opt in
      if (inbox.balance < inbox.minBalance + inboxMbrDelta) {
        sendPayment({
          receiver: inbox,
          amount: inboxMbrDelta,
        });
      }

      // Opt the inbox in
      sendAssetTransfer({
        sender: inbox,
        assetReceiver: inbox,
        assetAmount: 0,
        xferAsset: axfer.xferAsset,
      });
    }

    // Transfer the asset to the inbox
    sendAssetTransfer({
      assetReceiver: inbox,
      assetAmount: axfer.assetAmount,
      xferAsset: axfer.xferAsset,
    });

    return inbox;
  }

  /**
   * Claim an ASA from the inbox
   *
   * @param asa The ASA to claim
   */
  arc59_claim(asa: AssetID): void {
    const inbox = this.inboxes(this.txn.sender).value;

    sendAssetTransfer({
      sender: inbox,
      assetReceiver: this.txn.sender,
      assetAmount: inbox.assetBalance(asa),
      xferAsset: asa,
      assetCloseTo: this.txn.sender,
    });

    sendPayment({
      sender: inbox,
      receiver: this.txn.sender,
      amount: inbox.balance - inbox.minBalance,
    });
  }

  /**
   * Reject the ASA by closing it out to the ASA creator. Always sends two inner transactions.
   * All non-MBR ALGO balance in the inbox will be sent to the caller.
   *
   * @param asa The ASA to reject
   */
  arc59_reject(asa: AssetID) {
    const inbox = this.inboxes(this.txn.sender).value;

    sendAssetTransfer({
      sender: inbox,
      assetReceiver: asa.creator,
      assetAmount: inbox.assetBalance(asa),
      xferAsset: asa,
      assetCloseTo: asa.creator,
    });

    sendPayment({
      sender: inbox,
      receiver: this.txn.sender,
      amount: inbox.balance - inbox.minBalance,
    });
  }

  /**
   * Get the inbox address for the given receiver
   *
   * @param receiver The receiver to get the inbox for
   *
   * @returns Zero address if the receiver does not yet have an inbox, otherwise the inbox address
   */
  arc59_getInbox(receiver: Address): Address {
    return this.inboxes(receiver).exists ?
this.inboxes(receiver).value : globals.zeroAddress;
  }

  /** Claim any extra algo from the inbox */
  arc59_claimAlgo() {
    const inbox = this.inboxes(this.txn.sender).value;

    assert(inbox.balance - inbox.minBalance !== 0);

    sendPayment({
      sender: inbox,
      receiver: this.txn.sender,
      amount: inbox.balance - inbox.minBalance,
    });
  }
}
```

### TypeScript Send Asset Function

```ts
/**
 * Send an asset to a receiver using the ARC59 router
 *
 * @param appClient The ARC59 client generated by algokit
 * @param assetId The ID of the asset to send
 * @param sender The address of the sender
 * @param receiver The address of the receiver
 * @param algorand The AlgorandClient instance to use to send transactions
 */
async function arc59SendAsset(
  appClient: Arc59Client,
  assetId: bigint,
  sender: string,
  receiver: string,
  algorand: algokit.AlgorandClient
) {
  // Get the address of the ARC59 router
  const arc59RouterAddress = (await appClient.appClient.getAppReference()).appAddress;

  // Call arc59GetSendAssetInfo to get the following:
  // itxns - The number of transactions needed to send the asset
  // mbr - The minimum balance that must be sent to the router
  // routerOptedIn - Whether the router has opted in to the asset
  // receiverOptedIn - Whether the receiver has opted in to the asset
  // receiverAlgoNeededForClaim - How much ALGO the receiver needs to claim the asset
  const [itxns, mbr, routerOptedIn, receiverOptedIn, receiverAlgoNeededForClaim] = (
    await appClient.arc59GetSendAssetInfo({ asset: assetId, receiver })
  ).return!;

  // If the receiver has opted in, just send the asset directly
  if (receiverOptedIn) {
    await algorand.send.assetTransfer({
      sender,
      receiver,
      assetId,
      amount: 1n,
    });

    return;
  }

  // Create a composer to form an atomic transaction group
  const composer = appClient.compose();

  const signer = algorand.account.getSigner(sender);

  // If the MBR is non-zero, send the MBR to the router
  if (mbr || receiverAlgoNeededForClaim) {
    const mbrPayment = await algorand.transactions.payment({
      sender,
      receiver: arc59RouterAddress,
      amount: algokit.microAlgos(Number(mbr + receiverAlgoNeededForClaim)),
    });

    composer.addTransaction({ txn: mbrPayment, signer });
  }

  // If the router is not opted in, add a call to arc59OptRouterIn to do so
  if (!routerOptedIn) composer.arc59OptRouterIn({ asa: assetId });

  /** The box of the receiver's pubkey will always be needed */
  const boxes = [algosdk.decodeAddress(receiver).publicKey];

  /** The address of the receiver's inbox */
  const inboxAddress = (
    await appClient.compose().arc59GetInbox({ receiver }, { boxes }).simulate()
  ).returns[0];

  // The transfer of the asset to the router
  const axfer = await algorand.transactions.assetTransfer({
    sender,
    receiver: arc59RouterAddress,
    assetId,
    amount: 1n,
  });

  // An extra itxn is needed if we are also sending ALGO for the receiver's claim
  const totalItxns = itxns + (receiverAlgoNeededForClaim === 0n ? 0n : 1n);

  composer.arc59SendAsset(
    { axfer, receiver, additionalReceiverFunds: receiverAlgoNeededForClaim },
    {
      sendParams: { fee: algokit.microAlgos(1000 + 1000 * Number(totalItxns)) },
      boxes, // The receiver's pubkey
      // Always good to include both accounts here, even if we think only the
      // receiver is needed. This is to help protect against race conditions
      // within a block.
      accounts: [receiver, inboxAddress],
      // Even though the asset is available in the group, we need to explicitly
      // define it here because we will be checking the asset balance of the receiver
      assets: [Number(assetId)],
    }
  );

  // Disable resource population to ensure that our manually defined resources are correct
  algokit.Config.configure({ populateAppCallResources: false });

  // Send the transaction group
  await composer.execute();

  // Re-enable resource population
  algokit.Config.configure({ populateAppCallResources: true });
}
```

### TypeScript Claim Function

```ts
/**
 * Claim an asset from the ARC59 inbox
 *
 * @param appClient The ARC59 client generated by algokit
 * @param assetId The ID of the asset to claim
 * @param claimer The address of the account claiming the asset
 * @param algorand The AlgorandClient instance to use to send transactions
 */
async function arc59Claim(
  appClient: Arc59Client,
  assetId: bigint,
  claimer: string,
  algorand: algokit.AlgorandClient
) {
  const composer = appClient.compose();

  // Check if the claimer has opted in to the asset
  let claimerOptedIn = false;
  try {
    await algorand.account.getAssetInformation(claimer, assetId);
    claimerOptedIn = true;
  } catch (e) {
    // Do nothing
  }

  const inbox = (
    await appClient
      .compose()
      .arc59GetInbox({ receiver: claimer })
      .simulate({ allowUnnamedResources: true })
  ).returns[0];

  let totalTxns = 3;

  // If the inbox has extra ALGO, claim it
  const inboxInfo = await algorand.account.getInformation(inbox);
  if (inboxInfo.minBalance < inboxInfo.amount) {
    totalTxns += 2;
    composer.arc59ClaimAlgo(
      {},
      {
        sender: algorand.account.getAccount(claimer),
        sendParams: { fee: algokit.algos(0) },
      }
    );
  }

  // If the claimer hasn't already opted in, add a transaction to do so
  if (!claimerOptedIn) {
    composer.addTransaction({
      txn: await algorand.transactions.assetOptIn({ assetId, sender: claimer }),
      signer: algorand.account.getSigner(claimer),
    });
  }

  composer.arc59Claim(
    { asa: assetId },
    {
      sender: algorand.account.getAccount(claimer),
      sendParams: { fee: algokit.microAlgos(1000 * totalTxns) },
    }
  );

  await composer.execute();
}
```

## Security Considerations

The router application controls all user inboxes. If this contract is compromised, user assets might also be compromised.

## Copyright

Copyright and related rights waived via .
# Algorand Wallet Arbitrary Signing API
> API function for signing data
## Abstract

This ARC proposes a standard for arbitrary data signing. It is designed to be a simple and flexible standard that can be used in a wide variety of applications.

## Specification

The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in .

> Comments like this are non-normative

## Rationale

Signing data is a common and critical operation. Users may need to sign data for multiple reasons (e.g. delegated signatures, DIDs, signing documents, authentication). Algorand wallets need a standard approach to byte signing to unlock self-custodial services and protect users from malicious and attack-prone signing workflows. This ARC provides a standard API for byte signing. The API encodes the byte arrays to be signed into well-structured JSON schemas together with additional metadata. It requires wallets to validate the signing inputs, notify users about what they are signing, and warn them in case of dangerous signing requests.

### Overview

This ARC defines a function `signData(signingData, metadata)` for signing data. `signingData` is a `StdSigData` object composed of the signing `data` that instantiates a known JSON Schema and the `signer`’s public key.

### Signing Flow

When connected to a specific `domain` (i.e. an app or other identifier), the wallet will receive a request to sign some `data` alongside some `authenticatorData`, which will look like random bytes. With this information, the wallet should follow these steps:

1. Hash the `data` field with `sha256`.
2. Hash the connected `domain` with `sha256` and compare the result with the first 32 bytes of `authenticatorData`. 2.1. If the hashes do not match, the wallet **MUST** return an error.
3. Append the `authenticatorData` to the resulting hash of the `data` field.
4.
Sign the result.

### `Scopes`

Supported scopes are:

* `AUTH` (1): This scope is used for authentication purposes. It is used to sign data that will be used to authenticate the user to a specific domain. The `data` field **MUST** be a JSON object that represents the content to be signed. The `authenticatorData` field **MUST** include, at least, the `sha256` hash of the `domain` requesting a signature. The wallet **MUST** do an integrity check on the first 32 bytes of `authenticatorData` to match the hash. The `hdPath` field is **optional** and **MUST** be a BIP44 path used to derive the private key that signs the `data`. The wallet **MUST** validate the path before signing.

Summarized signing process for the `AUTH` scope:

```plaintext
EdDSA(SHA256(data) + authenticatorData)
```

* **`note`**: Other scopes could be added in the future.

#### Parameters

##### `StdSigData`

Must be a JSON object with the following properties:

| Field | Type | Description |
| ----- | ---- | ----------- |
| `data` | `string` | String representing the content to be signed for the specific `Scope`. This can be an encoded JSON object or any other data. It **MUST** be presented to the user in a human-readable format. |
| `signer` | `bytes` | Public key of the signer. This can be the public key related to an Algorand address or any other Ed25519 public key. |
| `domain` | `string` | The domain requesting the signature. It can be a URL, a DID, or any other identifier. It **MUST** be presented to the user to inform them about the context of the signature. |
| `requestId` | `string` | This field is **optional**. It is used to identify the request and **MUST** be unique for each request. |
| `authenticatorData` | `bytes` | It **MUST** include, at least, the `sha256` hash of the `domain` requesting a signature. The wallet **MUST** do an integrity check on the first 32 bytes of `authenticatorData` to match the hash. It **MAY** also include signature counters, network flags, or any other unique data to prevent replay attacks or attempts to trick the user into signing data unrelated to the scope. The wallet **SHOULD** validate every field in `authenticatorData` before signing. Each `Scope` **MUST** specify whether `authenticatorData` should be appended to the hash of the `data` before signing. |
| `hdPath` | `string` | This field is **optional**. It is required if the wallet supports BIP39 / BIP32 / BIP44. It **MUST** be a BIP44 path used to derive the private key that signs the `data`. The wallet **MUST** validate the path before signing. |

##### `metadata`

Must be a JSON object with the following properties:

| Field | Type | Description |
| ----- | ---- | ----------- |
| `scope` | `integer` | Defines the purpose of the signature. It **MUST** be one of the following values: `1` (AUTH) |
| `encoding` | `string` | Defines the encoding of the `data` field. `base64` is the recommended encoding. |

##### `authenticatorData`

| Name | Length | Description | Optional |
| ---- | ------ | ----------- | -------- |
| `rpIdHash` | 32 bytes | SHA256 hash of the domain requesting the signature. | No |
| `flags` | 1 byte | Flags (bit 0 is the least significant bit): Bit 0: User Present (UP) - 0 means the user is not present. Bit 1: Reserved for future use (RFU1). Bit 2: User Verified (UV) result - 1 means the user is verified, 0 means the user is not verified. Bits 3-5: Reserved for future use (RFU2). Bit 6: Attested credential data included (AT) - indicates whether the authenticator added attested credential data. Bit 7: Extension data included (ED) - indicates whether the authenticator added extension data. | Yes |
| `signCount` | 4 bytes | Signature counter: a monotonically increasing counter that is incremented each time the user successfully authenticates. The counter is reset to 0 when the authenticator is reset and is used to prevent replay attacks. | Yes |
| `attestedCredentialData` | variable | Attested credential data (if present). See | Yes |
| `extensions` | variable | Extension data (if present): a key-value JSON structure that may or may not be included. See for full details | Yes |

This follows the FIDO WebAuthn specification for the `authenticatorData` field. The wallet **MUST** validate the `authenticatorData` field before signing. For more information on the `authenticatorData` field, please refer to the .
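The `AUTH` signing flow above can be sketched with Node's built-in Ed25519 support. This is a minimal illustration, not part of the standard: the function and variable names (`signDataAuth`, `rpIdHash`) are hypothetical, and the sketch follows the numbered flow steps (hash the `data`, check `sha256(domain)` against the first 32 bytes of `authenticatorData`, then sign the concatenation).

```typescript
import { createHash, generateKeyPairSync, sign, verify, KeyObject } from "crypto";

// Illustrative sketch of the AUTH scope signing flow; names are hypothetical.
function signDataAuth(
  data: Buffer,              // decoded `data` field
  domain: string,            // the domain the wallet is connected to
  authenticatorData: Buffer, // rpIdHash plus any optional flags/counters
  privateKey: KeyObject      // Ed25519 private key held by the wallet
): Buffer {
  // Step 1: hash the data field with sha256
  const dataHash = createHash("sha256").update(data).digest();

  // Step 2: the first 32 bytes of authenticatorData must equal sha256(domain)
  const rpIdHash = createHash("sha256").update(domain).digest();
  if (!rpIdHash.equals(authenticatorData.subarray(0, 32))) {
    throw new Error("ERROR_FAILED_DOMAIN_AUTH");
  }

  // Steps 3-4: append authenticatorData to the data hash and sign with Ed25519
  return sign(null, Buffer.concat([dataHash, authenticatorData]), privateKey);
}
```

A relying party can verify the result by rebuilding the same `SHA256(data) || authenticatorData` message and calling `verify` with the signer's public key.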
##### `Errors`

These are the possible errors that the wallet **MUST** handle:

| Error | Description |
| ----- | ----------- |
| `ERROR_INVALID_SCOPE` | The `scope` is not valid. |
| `ERROR_FAILED_DECODING` | The `data` field could not be decoded. |
| `ERROR_INVALID_SIGNER` | Unable to find the public key related to the signer in the wallet. |
| `ERROR_MISSING_DOMAIN` | The `domain` field is missing. |
| `ERROR_MISSING_AUTHENTICATED_DATA` | The `authenticatorData` field is missing. |
| `ERROR_BAD_JSON` | The `data` field is not a valid JSON object. |
| `ERROR_FAILED_DOMAIN_AUTH` | The `authenticatorData` field does not match the hash of the `domain`. |
| `ERROR_FAILED_HD_PATH` | The `hdPath` field is not a valid BIP44 path. |

## Backwards Compatibility

N / A

## Reference Implementation

Available in the `assets/arc-0060` folder.

### Sample Use cases

#### Generic AUTH

```ts
const authData: Uint8Array = new Uint8Array(createHash('sha256').update("arc60.io").digest())

const authRequest: StdSigData = {
  data: Buffer.from("{[jsonfields....]}").toString('base64'),
  signer: publicKey,
  domain: "arc60.io",
  requestId: Buffer.from(randomBytes(32)).toString('base64'),
  authenticationData: authData,
  hdPath: "m/44'/60'/0'/0/0"
}

const signResponse = await arc60wallet.signData(authRequest, { scope: ScopeType.AUTH, encoding: 'base64' })
```

#### CAIP-122

```ts
const caip122Request: CAIP122 = {
  domain: "arc60.io",
  chain_id: "283",
  account_address: ...
  type: "ed25519",
  statement: "We are requesting you to sign this message to authenticate to arc60.io",
  uri: "https://arc60.io",
  version: "1",
  nonce: Buffer.from(randomBytes(32)).toString('base64'),
  ...
}

// Display message title according to EIP-4361
const msgTitle: string = `Sign this message to authenticate to ${caip122Request.domain} with account ${caip122Request.account_address}`

// Display message body according to EIP-4361
const msgBodyPlaceHolders: string =
  `URI: ${caip122Request.uri}\n` +
  `Chain ID: ${caip122Request.chain_id}\n` +
  `Type: ${caip122Request.type}\n` +
  `Nonce: ${caip122Request.nonce}\n` +
  `Statement: ${caip122Request.statement}\n` +
  `Expiration Time: ${caip122Request["expiration-time"]}\n` +
  `Not Before: ${caip122Request["not-before"]}\n` +
  `Issued At: ${caip122Request["issued-at"]}\n` +
  `Resources: ${(caip122Request.resources ?? []).join(' , \n')}\n`

// Display message according to EIP-4361
const msg: string = `${msgTitle}\n\n${msgBodyPlaceHolders}`
console.log(msg)

// authenticationData
const authenticationData: Uint8Array = new Uint8Array(createHash('sha256').update(caip122Request.domain).digest())

const signData: StdSigData = {
  data: Buffer.from(JSON.stringify(caip122Request)).toString('base64'),
  signer: publicKey,
  domain: caip122Request.domain, // should be same as origin / authenticationData
  // random unique id, to help RP / Client match requests
  requestId: Buffer.from(randomBytes(32)).toString('base64'),
  authenticationData: authenticationData
}

const signResponse = await arc60wallet.signData(signData, { scope: ScopeType.AUTH, encoding: 'base64' })
expect(signResponse).toBeDefined() // reply
```

## Security Considerations

Wallets are free to make their own UX choices, but they **SHOULD** show the user the purpose (i.e. `scope`) of the signature, the domain that is requesting the signature, and the data that is being signed. This is to prevent users from signing data that they do not understand. Additionally, wallets **MUST** show the user the data being signed in a human-readable format, as well as the `authenticatorData` and how it was calculated, so that the user can verify the hash, for example when signing with a Ledger.
## Copyright Copyright and related rights waived via .
# ASA Circulating Supply
> Getter method for ASA circulating supply
## Abstract

This ARC introduces a standard for the definition of circulating supply for Algorand Standard Assets (ASA) and its client-side retrieval. A reference implementation is suggested.

## Motivation

Algorand Standard Asset (ASA) `total` supply is *defined* upon ASA creation. Creating an ASA on the ledger *does not* imply its `total` supply is immediately “minted” or “circulating”. In fact, the semantics of token “minting” on Algorand differ slightly from other blockchains: minting does not coincide with the creation of token units on the ledger.

The Reserve Address, one of the 4 addresses of the ASA Role-Based Access Control (RBAC), is conventionally used to identify the portion of `total` supply not yet in circulation. The Reserve Address has no “privilege” over the token: it is just a “logical” label used (client-side) to classify an existing amount of an ASA as “not in circulation”. According to this convention, “minting” an amount of ASA units is equivalent to *moving that amount out of the Reserve Address*.

> An ASA may have the Reserve Address assigned to a Smart Contract to enforce specific “minting” policies, if needed.

This convention led to a simple and unsophisticated semantic of ASA circulating supply, widely adopted by clients (wallets, explorers, etc.) to provide standard information:

```text
circulating_supply = total - reserve_balance
```

Where `reserve_balance` is the ASA balance held by the Reserve Address.

However, the simplicity of this convention, which fostered adoption across the Algorand ecosystem, poses some limitations. Complex and sophisticated use-cases of ASA, such as regulated stable-coins and tokenized securities among others, require more detailed and expressive definitions of circulating supply. As an example, an ASA could have “burned”, “locked” or “pre-minted” amounts of tokens, not held in the Reserve Address, which *should not* be counted as “circulating” supply. This is not possible with the basic ASA protocol convention.
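As a minimal sketch of the conventional computation, assuming `total` (from the ASA's params) and the Reserve Address balance have already been fetched from algod; the function name and inputs are illustrative:

```typescript
// Conventional (pre-ARC-62) circulating supply, computed client-side.
// Both values are in the ASA's base units.
function conventionalCirculatingSupply(total: bigint, reserveBalance: bigint): bigint {
  return total - reserveBalance;
}

// Example: an ASA with a total of 10,000,000 base units, of which
// 4,000,000 still sit in the Reserve Address, has 6,000,000 circulating.
```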
This ARC proposes a standard ABI *read-only* method (getter) to provide the circulating supply of an ASA. ## Specification The keywords “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Notes like this are non-normative. ### ABI Method A compliant ASA, whose circulating supply definition conforms to this ARC, **MUST** implement the following method on an Application (referred to as the *Circulating Supply App* in this specification): ```json { "name": "arc62_get_circulating_supply", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "ASA ID of the circulating supply" } ], "returns": { "type": "uint64", "desc": "ASA circulating supply" }, "desc": "Get ASA circulating supply" } ``` The `arc62_get_circulating_supply` **MUST** be a *read-only* () method (getter). ### Usage Getter calls **SHOULD** be *simulated*. Any external resources used by the implementation **SHOULD** be discovered and auto-populated by the simulated getter call. #### Example 1 > Let the ASA have `total` supply and a Reserve Address (i.e. not set to `ZeroAddress`). > > Let the Reserve Address be assigned to an account different from the Circulating Supply App Account. > > Let `burned` be an external Burned Address dedicated to ASA burned supply. > > Let `locked` be an external Locked Address dedicated to ASA locked supply. > > The ASA issuer defines the *circulating supply* as: > > ```text > circulating_supply = total - reserve_balance - burned_balance - locked_balance > ``` > > In this case the simulated read-only method call would auto-populate 1 external reference for the ASA and 3 external reference accounts (Reserve, Burned and Locked). #### Example 2 > Let the ASA have `total` supply and *no* Reserve Address (i.e. set to `ZeroAddress`). 
> > Let `non_circulating_amount` be a UInt64 Global Var defined by the implementation of the Circulating Supply App. > > The ASA issuer defines the *circulating supply* as: > > ```text > circulating_supply = total - non_circulating_amount > ``` > > In this case the simulated read-only method call would auto-populate just 1 external reference for the ASA. ### Circulating Supply Application discovery > Given an ASA ID, clients (wallet, explorer, etc.) need to discover the related Circulating Supply App. An ASA conforming to this ARC **MUST** specify the Circulating Supply App ID. > To avoid ecosystem fragmentation, this ARC does not propose any new method to specify the metadata of an ASA. Instead, it only extends already existing standards. If the ASA also conforms to any ARC that supports additional `properties` (, , etc.) as metadata declared in the ASA URL field, then it **MUST** include an `arc-62` key and set the corresponding value to a map, including the ID of the Circulating Supply App as the value for the key `application-id`. #### Example: ARC-3 Property ```json { //... "properties": { //... "arc-62": { "application-id": 123 } } //... } ``` ## Rationale The definition of *circulating supply* for sophisticated use-cases is usually ASA-specific. It could involve, for example, complex math or external accounts’ balances, variables stored in boxes or in global state, etc. For this reason, the proposed method’s signature does not require any reference to external resources, apart from the `asset_id` of the ASA for which the circulating supply is defined. Any external resources can be discovered and auto-populated directly by the simulated method call. The rationale of this design choice is to avoid fragmentation and integration overhead for clients (wallets, explorers, etc.). Clients just need to know: 1. The ASA ID; 2. The Circulating Supply App ID implementing the `arc62_get_circulating_supply` method for that ASA. 
## Backwards Compatibility Existing ASAs willing to conform to this ARC **MUST** specify the Circulating Supply App ID in the `AssetConfig` transaction note field, as follows: * The `` **MUST** be equal to `62`; * The **RECOMMENDED** `` are (`m`) or (`j`); * The `` **MUST** specify `application-id` equal to the Circulating Supply App ID. > **WARNING**: To preserve the existing ASA RBAC (e.g. Manager Address, Freeze Address, etc.) it is necessary to **include all the existing role addresses** in the `AssetConfig`. Not doing so would irreversibly disable the RBAC roles! ### Example - JSON without version ```text arc62:j{"application-id":123} ``` ## Reference Implementation > This section is non-normative. This section suggests a reference implementation of the Circulating Supply App. An Algorand-Python example is available . ### Recommendations An ASA using the reference implementation **SHOULD NOT** assign the Reserve Address to the Circulating Supply App Account. A reference implementation **SHOULD** target a version of the AVM that supports foreign resources pooling (version 9 or greater). A reference implementation **SHOULD** use 3 external addresses, in addition to the Reserve Address, to define the non-circulating supply. > ⚠️The specification *is not limited* to 3 external addresses. The implementations **MAY** extend the non-circulating labels using more addresses, global storage, box storage, etc. The **RECOMMENDED** labels for non-circulating balances are: `burned`, `locked` and `generic`. 
> To change the labels of the non-circulating addresses, it is sufficient to rename the following constants in `smart_contracts/circulating_supply/config.py`: > > ```python > NOT_CIRCULATING_LABEL_1: Final[str] = "burned" > NOT_CIRCULATING_LABEL_2: Final[str] = "locked" > NOT_CIRCULATING_LABEL_3: Final[str] = "generic" > ``` ### State Schema A reference implementation **SHOULD** allocate, at least, the following Global State variables: * `asset_id` as UInt64, initialized to `0` and set **only once** by the ASA Manager Address; * Not circulating address 1 (`burned`) as Bytes, initialized to the Global `Zero Address` and set by the ASA Manager Address; * Not circulating address 2 (`locked`) as Bytes, initialized to the Global `Zero Address` and set by the ASA Manager Address; * Not circulating address 3 (`generic`) as Bytes, initialized to the Global `Zero Address` and set by the ASA Manager Address. A reference implementation **SHOULD** enforce that, upon setting the `burned`, `locked` and `generic` addresses, the latter are already opted in to `asset_id`. ```json "state": { "global": { "num_byte_slices": 3, "num_uints": 1 }, "local": { "num_byte_slices": 0, "num_uints": 0 } }, "schema": { "global": { "declared": { "asset_id": { "type": "uint64", "key": "asset_id" }, "not_circulating_label_1": { "type": "bytes", "key": "burned" }, "not_circulating_label_2": { "type": "bytes", "key": "locked" }, "not_circulating_label_3": { "type": "bytes", "key": "generic" } }, "reserved": {} }, "local": { "declared": {}, "reserved": {} } }, ``` ### Circulating Supply Getter A reference implementation **SHOULD** enforce that the `asset_id` Global Variable is equal to the `asset_id` argument of the `arc62_get_circulating_supply` getter method. > Alternatively, the reference implementation could ignore the `asset_id` argument and directly use the `asset_id` Global Variable. 
A reference implementation **SHOULD** return the ASA *circulating supply* as: ```text circulating_supply = total - reserve_balance - burned_balance - locked_balance - generic_balance ``` Where: * `total` is the total supply of the ASA (`asset_id`); * `reserve_balance` is the ASA balance held by the Reserve Address, or `0` if the address is set to the Global `ZeroAddress` or is not opted in to `asset_id`; * `burned_balance` is the ASA balance held by the Burned Address, or `0` if the address is set to the Global `ZeroAddress` or is not opted in to `asset_id`; * `locked_balance` is the ASA balance held by the Locked Address, or `0` if the address is set to the Global `ZeroAddress` or is not opted in to `asset_id`; * `generic_balance` is the ASA balance held by the Generic Address, or `0` if the address is set to the Global `ZeroAddress` or is not opted in to `asset_id`. > ⚠️The implementations **MAY** extend the calculation of `circulating_supply` using global storage, box storage, etc. See for reference. ## Security Considerations Permissions over the Circulating Supply App setting and update **SHOULD** be granted to the ASA Manager Address. > The ASA trust-model (i.e. who sets the Reserve Address) is extended to the generalized ASA circulating supply definition. ## Copyright Copyright and related rights waived via .
# AVM Run Time Errors In Program
> Informative AVM run time errors based on program bytecode
## Abstract This document introduces a convention for raising informative run time errors on the Algorand Virtual Machine (AVM) directly from the program bytecode. ## Motivation The AVM does not offer native opcodes to catch and raise run time errors. The lack of native error handling semantics could lead to fragmentation of tooling and friction for AVM clients, who are unable to retrieve informative and useful hints about the run time failures that occurred. This ARC formalizes a convention to raise AVM run time errors based just on the program bytecode. ## Specification The keywords “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > Notes like this are non-normative. ### Error format > AVM program bytecode has a limited size. In this convention, the errors are part of the bytecode, therefore it is good practice to mind error formatting and size. > Errors consist of a *code* and an optional *short message*. Errors **MUST** be prefixed either with: * `ERR:` for custom errors; * `AER:` reserved for future ARC standard errors. Errors **MUST** use `:` as domain separator. It is **RECOMMENDED** to use `UTF-8` for the error bytes string encoding. It is **RECOMMENDED** to use *short* error messages. It is **RECOMMENDED** to use for alphanumeric error codes. It is **RECOMMENDED** to avoid error byte strings of *exactly* 8 or 32 bytes. ### In Program Errors When a program wants to emit informative run time errors directly from the bytecode, it **MUST**: 1. Push to the stack the bytes string containing the error; 2. Execute the `log` opcode to use the bytes from the top of the stack; 3. Execute the `err` opcode to immediately terminate the program. Upon a program run time failure, the Algod API response contains both the failed *program counter* (`pc`) and the `logs` array with the *errors*. 
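The client-side retrieval of these errors can be sketched in Python; the regular expression below is an illustrative reading of the error format (prefix, code, optional short message), not normative:

```python
import base64
import re

# Illustrative pattern: ERR or AER prefix, alphanumeric code,
# optional short message, with ':' as the domain separator.
ERROR_RE = re.compile(r"^(ERR|AER):[A-Za-z0-9]+(:.+)?$")

def extract_errors(logs_b64: list[str]) -> list[str]:
    """Decode base64 `logs` entries and keep only conforming errors."""
    errors = []
    for entry in logs_b64:
        try:
            decoded = base64.b64decode(entry).decode("utf-8")
        except (ValueError, UnicodeDecodeError):
            continue  # not a UTF-8 error string
        if ERROR_RE.match(decoded):
            errors.append(decoded)
    return errors

logs = ["RVJSOjAwMTpJbnZhbGlkIE1ldGhvZA=="]
print(extract_errors(logs))  # ['ERR:001:Invalid Method']
```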
The program **MAY** return multiple errors in the same failed execution. The errors **MUST** be retrieved by: 1. Decoding the `base64` elements of the `logs` array; 2. Validating the decoded elements against the error regexp. ### Error examples > Errors conforming to this specification are always prefixed with `ERR:`. Error with a *numeric code*: `ERR:042`. Error with an *alphanumeric code*: `ERR:BadRequest`. Error with a *numeric code* and *short message*: `ERR:042:AFunnyError`. ### Program example The following program example raises the error `ERR:001:Invalid Method` for any application call to a method other than `m1()void`.

```teal
#pragma version 10
txn ApplicationID
bz end
method "m1()void"
txn ApplicationArgs 0
match method1
byte "ERR:001:Invalid Method"
log
err
method1:
b end
end:
int 1
```

Full Algod API response of a failed execution: ```json { "data": { "app-index":1004, "eval-states": [ { "logs": ["RVJSOjAwMTpJbnZhbGlkIE1ldGhvZA=="] } ], "group-index":0, "pc":41 }, "message":"TransactionPool.Remember: transaction ESI4GHAZY46MCUCLPBSB5HBRZPGO6V7DDUM5XKMNVPIRJK6DDAGQ: logic eval error: err opcode executed. Details: app=1004, pc=41" } ``` The `logs` array contains the `base64` encoded error `ERR:001:Invalid Method`. The `logs` array **MAY** contain elements that are not errors (as specified by the regexp). It is **NOT RECOMMENDED** to use the `message` field to retrieve errors. ### AVM Compilers AVM compilers (and related tools) **SHOULD** provide two error compiling options: 1. The one specified in this ARC as the **default**; 2. The one specified in as a fallback, if the compiled bytecode size exceeds the AVM limits. > Compilers **MAY** optimize for program bytecode size by storing the error prefixes in the `bytecblock` and concatenating the error message, at the cost of some extra opcodes. ## Rationale This convention for AVM run time errors presents the following PROS and CONS. 
**PROS:** * No additional artifacts required to return informative run time errors; * Errors are directly returned in the Algod API response, which can be filtered with the specified error regexp. **CONS:** * Errors consume program bytecode size. ## Security Considerations > Not applicable. ## Copyright Copyright and related rights waived via .
# ASA Parameters Conventions, Digital Media
> Alternative conventions for ASAs containing digital media.
We introduce community conventions for the parameters of Algorand Standard Assets (ASAs) containing digital media. ## Abstract The goal of these conventions is to make it simpler to display the properties of a given ASA. This ARC differs from by focusing on optimization for fetching of digital media, as well as the use of onchain metadata. Furthermore, since asset configuration transactions are used to store the metadata, this ARC can be applied to existing ASAs. While mutability helps with backwards compatibility and other use cases, like leveling up an RPG character, some use cases call for immutability. In these cases, the ASA manager MAY remove the manager address, after which point the Algorand network won’t allow anyone to send asset configuration transactions for the ASA. This effectively makes the latest valid metadata immutable. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . An ARC-69 ASA has an associated JSON Metadata file, formatted as specified below, that is stored on-chain in the note field of the most recent asset configuration transaction (that contains a note field with valid ARC-69 JSON metadata). ### ASA Parameters Conventions The ASA parameters should follow the following conventions: * *Unit Name* (`un`): no restriction. * *Asset Name* (`an`): no restriction. * *Asset URL* (`au`): a URI pointing to a digital media file. This URI: * **SHOULD** be persistent. * **SHOULD** link to a file small enough to fetch quickly in a gallery view. * **MUST** follow and **MUST NOT** contain any whitespace character. * **SHOULD** specify the media type with a `#` fragment identifier at the end of the URL. This format **MUST** follow: `#i` for images, `#v` for videos, `#a` for audio, `#p` for PDF, or `#h` for HTML/interactive digital media. If unspecified, assume Image. 
* **SHOULD** use one of the following URI schemes (for compatibility and security): *https* and *ipfs*: * When the file is stored on IPFS, the `ipfs://...` URI **SHOULD** be used. IPFS Gateway URIs (such as `https://ipfs.io/ipfs/...`) **SHOULD NOT** be used. * **SHOULD NOT** use the following URI scheme: *http* (due to security concerns). * *Asset Metadata Hash* (`am`): the SHA-256 digest of the full resolution media file as a 32-byte string (as defined in ) * **OPTIONAL** * *Freeze Address* (`f`): * **SHOULD** be empty, unless needed for royalties or other use cases * *Clawback Address* (`c`): * **SHOULD** be empty, unless needed for royalties or other use cases There are no requirements regarding the manager account of the ASA, or the reserve account. However, if immutability is required, the manager address **MUST** be removed. Furthermore, the manager address, if present, **SHOULD** be under the control of the ASA creator, as the manager address can unilaterally change the metadata. Some advanced use cases **MAY** use a logicsig as ASA manager, if the logicsig only allows the ASA creator to set the note field. ### JSON Metadata File Schema ```json { "title": "Token Metadata", "type": "object", "properties": { "standard": { "type": "string", "value": "arc69", "description": "(Required) Describes the standard used." }, "description": { "type": "string", "description": "Describes the asset that this token represents." }, "external_url": { "type": "string", "description": "A URI pointing to an external website. Borrowed from Open Sea's metadata format (https://docs.opensea.io/docs/metadata-standards)." }, "media_url": { "type": "string", "description": "A URI pointing to a high resolution version of the asset's media." }, "properties": { "type": "object", "description": "Properties following the EIP-1155 'simple properties' format. 
(https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1155.md#erc-1155-metadata-uri-json-schema)" }, "mime_type": { "type": "string", "description": "Describes the MIME type of the ASA's URL (`au` field)." }, "attributes": { "type": "array", "description": "(Deprecated. New NFTs should define attributes with the simple `properties` object. Marketplaces should support both the `properties` object and the `attributes` array). The `attributes` array follows Open Sea's format: https://docs.opensea.io/docs/metadata-standards#attributes" } }, "required":[ "standard" ] } ``` The `standard` field is **REQUIRED** and **MUST** equal `arc69`. All other fields are **OPTIONAL**. If provided, the other fields **MUST** match the description in the JSON schema. The URI field (`external_url`) is defined similarly to the Asset URL parameter `au`. However, contrary to the Asset URL, the `external_url` does not need to link to the digital media file. #### MIME Type In addition to specifying a data type in the ASA’s URL (`au` field) with a URI fragment (ex: `#v` for video), the JSON Metadata schema also allows indication of the URL’s MIME type (ex: `video/mp4`) via the `mime_type` field. #### Examples ##### Basic Example An example of an ARC-69 JSON Metadata file for a song follows. The properties array proposes some **SUGGESTED** formatting for token-specific display properties and metadata. ```json { "standard": "arc69", "description": "arc69 theme song", "external_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "mime_type": "video/mp4", "properties": { "Bass":"Groovy", "Vibes":"Funky", "Overall":"Good stuff" } } ``` An example of possible ASA parameters would be: * *Asset Name*: `ARC-69 theme song` for example. * *Unit Name*: `69TS` for example. * *Asset URL*: `ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT#v` * *Metadata Hash*: the 32 bytes of the SHA-256 digest of the high resolution media file. 
* *Total Number of Units*: 1 * *Number of Digits after the Decimal Point*: 0 #### Mutability ##### Rendering Clients **SHOULD** render an ASA’s latest ARC-69 metadata. Clients **MAY** render an ASA’s previous ARC-69 metadata for changelogs or other historical features. ##### Updating ARC-69 metadata If an ASA has a manager address, then the manager **MAY** update an ASA’s ARC-69 metadata. To do so, the manager sends a new `acfg` transaction with the entire metadata represented as JSON in the transaction’s `note` field. ##### Making ARC-69 metadata immutable Managers MAY make an ASA’s ARC-69 metadata immutable. To do so, they MUST remove the ASA’s manager address with an `acfg` transaction. ##### ARC-69 attribute deprecation The initial version of ARC-69 followed the Open Sea attributes format , as illustrated below: ```plaintext "attributes": { "type": "array", "description": "Attributes following Open Sea's attributes format (https://docs.opensea.io/docs/metadata-standards#attributes)." } ``` This format is now deprecated. New NFTs **SHOULD** use the simple `properties` format, since it significantly reduces the metadata size. To be fully compliant with the ARC-69 standard, both the `properties` object and the `attributes` array **SHOULD** be supported. ## Rationale These conventions take inspiration from and to facilitate interoperability. The main differences are highlighted below: * Asset Name, Unit Name, and URL are specified in the ASA parameters. This allows applications to efficiently display meaningful information, even if they aren’t aware of ARC-69 metadata. * MIME types help clients more effectively fetch and render media. * All asset metadata is stored onchain. * Metadata can be either mutable or immutable. ## Security Considerations None. ## Copyright Copyright and related rights waived via .
# Non-Transferable ASA
> Parameter conventions for a Non-Transferable Algorand Standard Asset
## Abstract The goal is to make it simpler for block explorers, wallets, exchanges, marketplaces, and more generally, client software to identify & interact with a Non-transferable ASA (NTA). This defines an interface extending & non fungible ASA to create Non-transferable ASA. Before issuance, both parties (issuer and receiver) have to agree on who has (if any) the authorization to burn this ASA. > This spec is compatible with to create an updatable Non-transferable ASA. ## Motivation The idea of Non-transferable ASAs has garnered significant attention, inspired by the concept of Soul Bound Tokens. However, without a clear definition, Non-transferable ASAs cannot achieve interoperability. Developing universal services targeting Non-transferable ASAs remains challenging without a minimal consensus on their implementation and lifecycle management. This ARC envisions Non-transferable ASAs as specialized assets, akin to Soul Bound ASAs, that will serve as identities, credentials, credit records, loan histories, memberships, and much more. To provide the necessary flexibility in these use cases, Non-transferable ASAs must feature an application-specific burn method and a distinct way to differentiate themselves from regular ASAs. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . * There are 2 NTA actor roles: **Issuer** and **Holder**. * There are 3 NTA ASA states: **Issued**, **Held**, and **Revoked**. * **Claimed** and **Revoked** NTAs reside in the holder’s wallet after claim, forever! * The ASA parameter decimal places **MUST** be 0 (fractional NFTs are not allowed). * The ASA parameter total supply **MUST** be 1 (true non-fungible token). 
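The last two parameter constraints above can be checked with a trivial helper (the helper name and the plain-dict shape of the asset parameters are illustrative, not part of this ARC):

```python
def is_valid_nta_shape(asset_params: dict) -> bool:
    """Check the two basic NTA constraints: zero decimal places and
    a total supply of exactly 1 unit (true non-fungible token)."""
    return (
        asset_params.get("decimals") == 0
        and asset_params.get("total") == 1
    )

print(is_valid_nta_shape({"decimals": 0, "total": 1}))    # True
print(is_valid_nta_shape({"decimals": 0, "total": 100}))  # False
```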
Note: On Algorand, in order to prioritize end users and empower decentralization, the final say on holding any ASA is given to the user. Unless the user is the creator (in which case token deletion is required), the user can close out the token back to the creator even if the token is frozen. After much discussion and feedback, and many great solutions proposed by experts in the field, this ARC embraces this convention in respect of Algorand's design and leaves the holder the right even to detach a Non-transferable ASA and close it back to the creator. In summary, an NTA respects the account holder’s right to close out the ASA back to the creator address. ### ASA Parameters Conventions The Issued state is the starting state of the ASA. The Claimed state is when the NTA is sent to the destination wallet (claimed), and the Revoked state is the state where the NTA ASA is revoked by the issuer after issuance and therefore no longer valid for any use case except for provenance and historical data reference. * NTAs in the Revoked state are no longer valid and cannot be used as proof of any credentials. * The Manager address is able to revoke the NTA ASA by setting the Manager address to `ZeroAddress`. * The Issuer **MUST** be an Algorand Smart Contract Account. #### Issued Non-transferable ASA * The Creator parameter: the ASA **MAY** be created by any address. * The Clawback parameter **MUST** be the `ZeroAddress`. * The Freeze parameter **MUST** be set to the Issuer Address. * The Manager parameter **MAY** be set to any address but is **RECOMMENDED** to be the Issuer. * The Reserve parameter **MUST** be set to either metadata or the NTA Issuer’s address. #### Held (claimed) Non-transferable ASA * The Creator parameter: the ASA **MAY** be created by any address. * The Clawback parameter **MUST** be the `ZeroAddress`. * The Freeze parameter **MUST** be set to the `ZeroAddress`. * The asset **MUST** be frozen for the holder (claimer) account address. 
* The Manager parameter **MAY** be set to any address but is **RECOMMENDED** to be the Issuer. * The Reserve parameter **MUST** be set to either ARC-19 metadata or NTA Issuer’s address. #### Revoked Non-transferable ASA * The Manager parameter **MUST** be set to `ZeroAddress`. ## Rationale ### Non-transferable ASA NFT Non-transferable ASA serves as a specialized subset of the existing ASAs. The advantage of such design is seamless compatibility of Non-transferable ASA with existing NFT services. Service providers can treat Non-transferable ASA NFTs like other ASAs and do not need to make drastic changes to their existing codebase. ### Revoking vs Burning Rationale for Revocation Over Burning in Non-Transferable ASAs (NTAs): The concept of Non-Transferable ASAs (NTAs) is rooted in permanence and attachment to the holder. Introducing a “burn” mechanism for NTAs fundamentally contradicts this concept because it involves removing the token from the holder’s wallet entirely. Burning suggests destruction and detachment, which is inherently incompatible with the idea of something being bound to the holder for life. In contrast, a revocation mechanism aligns more closely with both the Non-Transferable philosophy and established W3C standards, particularly in the context of Verifiable Credentials (VCs). Revocation allows for NTAs to remain in the user’s wallet, maintaining provenance, historical data, and records of the token’s existence, while simultaneously marking the token as inactive or revoked by its issuer. This is achieved by setting the Manager address of the token to the ZeroAddress, effectively signaling that the token is no longer valid without removing it from the wallet. For example, in cases where a Verifiable Credential (VC) issued as an NTA expires or needs to be invalidated (e.g., a driver’s license), revocation becomes an essential operation. 
The token can be revoked by the issuer without being deleted from the user’s wallet, preserving a clear record of its prior existence and revocation status. This is beneficial for provenance tracking and compliance, as historical records are crucial in many scenarios. Furthermore, the token can be used as a reference for re-issued or updated credentials without breaking its attachment to the holder. This approach has clear benefits: * **Provenance and Historical Data**: Keeping the NTA in the wallet allows dApps and systems to track the history of revoked tokens, enabling insights into previous credentials or claims. * **Re-usability and Compatibility**: NTAs with revocation fit well into W3C and DIF standards around re-usable DIDs (Decentralized Identifiers) and VCs, allowing credentials to evolve (e.g., switching from one issuer to another) without breaking the underlying identity or trust models. * **Immutable Attachment**: The token does not leave the wallet, making it clear that the NTA is still part of the user’s identity, but with a revoked status. In contrast, burning would not allow for these records to be maintained, and would break the “bound” nature of the NTA by removing the token from the holder’s possession entirely, which defeats the core idea behind NTAs. In summary, revocation offers a more interoperable alternative to burning for NTAs. It ensures that NTAs remain Non-Transferable while allowing for expiration, invalidation, or issuer changes, all while maintaining a record of the token’s lifecycle and status. ## Backwards Compatibility , , ASAs can be converted into an NTA ASA, only if the manager address & freeze address are still available. ## Security Considerations * Claiming/Receiving an NTA ASA will lock Algo forever, until the user decides to close it out back to the creator address. * For security critical implementations it is vital to take into account that, according to Algorand's design, the user has the right to close out the ASA back to the creator address. 
This is permanently kept in the on-chain transaction history and indexers. ## Copyright Copyright and related rights waived via .
# Algorand Smart Contract NFT Specification
> Base specification for non-fungible tokens implemented as smart contracts.
## Abstract This specifies an interface for non-fungible tokens (NFTs) to be implemented on Algorand as smart contracts. This interface defines a minimal interface for NFTs to be owned and traded, to be augmented by other standard interfaces and custom methods. ## Motivation Currently most NFTs in the Algorand ecosystem are implemented as ASAs. However, to provide rich extra functionality, it can be desirable to implement NFTs as a smart contract instead. To foster an interoperable NFT ecosystem, it is necessary that the core interfaces for NFTs be standardized. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . ### Core NFT specification A smart contract NFT that is compliant with this standard must implement the interface detection standard defined in . Additionally, the smart contract MUST implement the following interface: ```json { "name": "ARC-72", "desc": "Smart Contract NFT Base Interface", "methods": [ { "name": "arc72_ownerOf", "desc": "Returns the address of the current owner of the NFT with the given tokenId", "readonly": true, "args": [ { "type": "uint256", "name": "tokenId", "desc": "The ID of the NFT" }, ], "returns": { "type": "address", "desc": "The current owner of the NFT." 
} }, { "name": "arc72_transferFrom", "desc": "Transfers ownership of an NFT", "readonly": false, "args": [ { "type": "address", "name": "from" }, { "type": "address", "name": "to" }, { "type": "uint256", "name": "tokenId" } ], "returns": { "type": "void" } }, ], "events": [ { "name": "arc72_Transfer", "desc": "Transfer ownership of an NFT", "args": [ { "type": "address", "name": "from", "desc": "The current owner of the NFT" }, { "type": "address", "name": "to", "desc": "The new owner of the NFT" }, { "type": "uint256", "name": "tokenId", "desc": "The ID of the transferred NFT" } ] } ] } ``` Ownership of a token ID by the zero address indicates that ID is invalid. The `arc72_ownerOf` method MUST return the zero address for invalid token IDs. The `arc72_transferFrom` method MUST error when `from` is not the owner of `tokenId`. The `arc72_transferFrom` method MUST error unless called by the owner of `tokenId` or an approved operator as defined by an extension such as the transfer management extension defined in this ARC. The `arc72_transferFrom` method MUST emit an `arc72_Transfer` event when a transfer is successful. An `arc72_Transfer` event SHOULD be emitted, with `from` being the zero address, when a token is first minted. An `arc72_Transfer` event SHOULD be emitted, with `to` being the zero address, when a token is destroyed. All methods in this and other interfaces defined throughout this standard that are marked as `readonly` MUST be read-only as defined by . The ARC-73 interface selector for this core interface is `0x53f02a40`. 
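The ownership and transfer rules above can be modelled as a small in-memory sketch (plain Python, not an AVM contract; the class name and the string placeholder for the zero address are illustrative):

```python
class Arc72Core:
    """In-memory model of the ARC-72 core ownership rules."""

    ZERO = "ZERO_ADDRESS"  # stands in for the Algorand zero address

    def __init__(self) -> None:
        self._owners: dict[int, str] = {}
        self.events: list[tuple] = []

    def owner_of(self, token_id: int) -> str:
        # MUST return the zero address for invalid token IDs
        return self._owners.get(token_id, self.ZERO)

    def mint(self, to: str, token_id: int) -> None:
        self._owners[token_id] = to
        # SHOULD emit arc72_Transfer with `from` = zero address on mint
        self.events.append(("arc72_Transfer", self.ZERO, to, token_id))

    def transfer_from(self, caller: str, frm: str, to: str, token_id: int) -> None:
        # MUST error when `from` is not the owner of tokenId
        if self.owner_of(token_id) != frm:
            raise PermissionError("`from` is not the owner")
        # MUST error unless called by the owner (operator approvals are
        # handled by the transfer management extension, omitted here)
        if caller != frm:
            raise PermissionError("caller is not authorized")
        self._owners[token_id] = to
        # MUST emit arc72_Transfer when a transfer is successful
        self.events.append(("arc72_Transfer", frm, to, token_id))

nft = Arc72Core()
nft.mint("ALICE", 1)
nft.transfer_from("ALICE", "ALICE", "BOB", 1)
print(nft.owner_of(1))   # BOB
print(nft.owner_of(99))  # ZERO_ADDRESS
```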
### Metadata Extension A smart contract NFT that is compliant with this metadata extension MUST implement the interfaces required to comply with the Core NFT Specification, as well as the following interface: ```json { "name": "ARC-72 Metadata Extension", "desc": "Smart Contract NFT Metadata Interface", "methods": [ { "name": "arc72_tokenURI", "desc": "Returns a URI pointing to the NFT metadata", "readonly": true, "args": [ { "type": "uint256", "name": "tokenId", "desc": "The ID of the NFT" }, ], "returns": { "type": "byte[256]", "desc": "URI to token metadata." } } ], } ``` URIs shorter than the return length MUST be padded with zero bytes at the end of the URI. The token URI returned SHOULD be an `ipfs://...` URI so the metadata can’t expire or be changed by a lapse or takeover of a DNS registration. The token URI SHOULD NOT be an `http://` URI due to security concerns. The URI SHOULD resolve to a JSON file following : * the JSON Metadata File Schema defined in . * the standard for declaring traits defined in . Future standards could define new recommended URI or file formats for metadata. The ARC-73 interface selector for this metadata extension interface is `0xc3c1fc00`. 
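Because the return type is a fixed-size `byte[256]`, implementations zero-pad the URI and clients strip the padding; a minimal sketch of both directions (helper names are illustrative):

```python
def pad_uri(uri: str) -> bytes:
    """Zero-pad a token URI to the fixed 256-byte ABI return size."""
    raw = uri.encode("utf-8")
    if len(raw) > 256:
        raise ValueError("URI does not fit in byte[256]")
    return raw + b"\x00" * (256 - len(raw))

def unpad_uri(ret: bytes) -> str:
    """Strip trailing zero-byte padding from an arc72_tokenURI return."""
    return ret.rstrip(b"\x00").decode("utf-8")

ret = pad_uri("ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT")
print(len(ret))        # 256
print(unpad_uri(ret))  # ipfs://QmWS1VAdMD353A6SDk9wNyvkT14kyCiZrNDYAad4w1tKqT
```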
### Transfer Management Extension A smart contract NFT that is compliant with this transfer management extension MUST implement the interfaces required to comply with the Core NFT Specification, as well as the following interface: ```json { "name": "ARC-72 Transfer Management Extension", "desc": "Smart Contract NFT Transfer Management Interface", "methods": [ { "name": "arc72_approve", "desc": "Approve a controller for a single NFT", "readonly": false, "args": [ { "type": "address", "name": "approved", "desc": "Approved controller address" }, { "type": "uint256", "name": "tokenId", "desc": "The ID of the NFT" }, ], "returns": { "type": "void" } }, { "name": "arc72_setApprovalForAll", "desc": "Approve an operator for all NFTs for a user", "readonly": false, "args": [ { "type": "address", "name": "operator", "desc": "Approved operator address" }, { "type": "bool", "name": "approved", "desc": "true to give approval, false to revoke" }, ], "returns": { "type": "void" } }, { "name": "arc72_getApproved", "desc": "Get the current approved address for a single NFT", "readonly": true, "args": [ { "type": "uint256", "name": "tokenId", "desc": "The ID of the NFT" }, ], "returns": { "type": "address", "desc": "address of approved user or zero" } }, { "name": "arc72_isApprovedForAll", "desc": "Query if an address is an authorized operator for another address", "readonly": true, "args": [ { "type": "address", "name": "owner" }, { "type": "address", "name": "operator" }, ], "returns": { "type": "bool", "desc": "whether operator is authorized for all NFTs of owner" } }, ], "events": [ { "name": "arc72_Approval", "desc": "An address has been approved to transfer ownership of the NFT", "args": [ { "type": "address", "name": "owner", "desc": "The current owner of the NFT" }, { "type": "address", "name": "approved", "desc": "The approved user for the NFT" }, { "type": "uint256", "name": "tokenId", "desc": "The ID of the NFT" } ] }, { "name": "arc72_ApprovalForAll", "desc": "Operator 
set or unset for all NFTs defined by this contract for an owner", "args": [ { "type": "address", "name": "owner", "desc": "The current owner of the NFT" }, { "type": "address", "name": "operator", "desc": "The approved user for the NFT" }, { "type": "bool", "name": "approved", "desc": "Whether operator is authorized for all NFTs of owner" } ] } ] } ``` The `arc72_Approval` event MUST be emitted when the `arc72_approve` method is called successfully. The zero address for the `arc72_approve` method and the `arc72_Approval` event indicates no approval, including revocation of a previous single-NFT controller. When an `arc72_Transfer` event is emitted, this also indicates that the approved address for that NFT (if any) is reset to none. The `arc72_ApprovalForAll` event MUST be emitted when the `arc72_setApprovalForAll` method is called successfully. The contract MUST allow multiple operators per owner. The `arc72_transferFrom` method, when its `tokenId` argument is owned by its `from` argument, MUST succeed when called by an address that is approved for the given NFT or approved as an operator for the owner. The ARC-73 interface selector for this transfer management extension interface is `0xb9c6f696`. 
### Enumeration Extension A smart contract NFT that is compliant with this enumeration extension MUST implement the interfaces required to comply with the Core NFT Specification, as well as the following interface: ```json { "name": "ARC-72 Enumeration Extension", "desc": "Smart Contract NFT Enumeration Interface", "methods": [ { "name": "arc72_balanceOf", "desc": "Returns the number of NFTs owned by an address", "readonly": true, "args": [ { "type": "address", "name": "owner" } ], "returns": { "type": "uint256" } }, { "name": "arc72_totalSupply", "desc": "Returns the number of NFTs currently defined by this contract", "readonly": true, "args": [], "returns": { "type": "uint256" } }, { "name": "arc72_tokenByIndex", "desc": "Returns the token ID of the token with the given index among all NFTs defined by the contract", "readonly": true, "args": [ { "type": "uint256", "name": "index" } ], "returns": { "type": "uint256" } } ] } ``` The sort order for NFT indices is not specified. The `arc72_tokenByIndex` method MUST error when `index` is greater than `arc72_totalSupply`. The ARC-73 interface selector for this enumeration extension interface is `0xa57d4679`. ## Rationale This specification is based on , with some differences. ### Core Specification The core specification differs from ERC-721 by: * removing `safeTransferFrom`, since there is no way to test whether an address on Algorand corresponds to a smart contract * moving management functionality out of the base specification into an extension * moving balance query functionality out of the base specification into the enumeration extension Moving functionality out of the core specification into extensions allows the base specification to be much simpler, and allows extensions for extra capabilities to evolve separately from the core idea of owning and transferring ownership of non-fungible tokens. It is recommended that NFT contract authors make use of extensions to enrich the capabilities of their NFTs. 
### Metadata Extension The metadata extension differs from the ERC-721 metadata extension by using a fixed-length URI return and removing the `symbol` and `name` operations. Metadata such as symbol or name can be included in the metadata pointed to by the URI. ### Transfer Management Extension The transfer management extension is taken from the set of methods and events from the base ERC-721 specification that deal with approving other addresses to transfer ownership of an NFT. This functionality is important for trusted NFT galleries like OpenSea to list and sell NFTs on behalf of users while allowing the owner to maintain on-chain ownership. However, this set of functionality is the bulk of the complexity of the ERC-721 standard, and moving it into an extension vastly simplifies the core NFT specification. Additionally, other interfaces have been proposed to allow for the sale of NFTs in decentralized manners without needing to give transfer control to a trusted third party. ### Enumeration Extension The enumeration extension is taken from the ERC-721 enumeration extension. However, it also includes the `arc72_balanceOf` function that is included in the base ERC-721 specification. This change simplifies the core standard and groups the `arc72_balanceOf` function with related functionality for contracts where supply details are desired. ## Backwards Compatibility This standard introduces a new kind of NFT that is incompatible with NFTs defined as ASAs. Applications that want to index, manage, or view NFTs on Algorand will need to add code to handle both these new smart contract NFTs and the already popular ASA implementation of NFTs, and existing smart contracts that handle ASA-based NFTs will not work with these new smart contract NFTs. While this is a severe backwards incompatibility, smart contract NFTs are necessary to provide richer and more diverse functionality for NFTs. 
## Security Considerations The fact that anybody can create a new implementation of a smart contract NFT standard opens the door for many of those implementations to contain security bugs. Additionally, malicious NFT implementations could contain hidden anti-features unexpected by users. As with other smart contract domains, it is difficult for users to verify or understand security properties of smart contract NFTs. This is a tradeoff compared with ASA NFTs, which share a smaller set of security properties that are easier to validate, to gain the possibility of adding novel features. ## Copyright Copyright and related rights waived via .
# Algorand Interface Detection Spec
> A specification for smart contracts and indexers to detect interfaces of smart contracts.
## Abstract This ARC specifies an interface detection interface based on . This interface allows smart contracts and indexers to detect whether a smart contract implements a particular interface based on an interface selector. ## Motivation applications have associated Contract or Interface description JSON objects that allow users to call their methods. However, these JSON objects are communicated outside of the consensus network. Therefore indexers can not reliably identify contract instances of a particular interface, and smart contracts have no way to detect whether another contract supports a particular interface. An on-chain method to detect interfaces allows greater composability for smart contracts, and allows indexers to automatically detect implementations of interfaces of interest. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . ### How Interfaces are Identified The specification for interfaces is defined by . This specification extends ARC-4 to define the concept of an interface selector. We define the interface selector as the XOR of all selectors in the interface. Selectors in the interface include selectors for methods, selectors for events as defined by , and selectors for potential future kinds of interface components. As an example, consider an interface that has two methods and one event, `add(uint64,uint64)uint128`, `add3(uint64,uint64,uint64)uint128`, and `alert(uint64)`. The method selector for the `add` method is the first 4 bytes of the method signature’s SHA-512/256 hash. The SHA-512/256 hash of `add(uint64,uint64)uint128` is `0x8aa3b61f0f1965c3a1cbfa91d46b24e54c67270184ff89dc114e877b1753254a`, so its method selector is `0x8aa3b61f`. 
The SHA-512/256 hash of `add3(uint64,uint64,uint64)uint128` is `0xa6fd1477731701dd2126f24facf3492d470cf526e7d4d849fea33d102b45f03d`, so its method selector is `0xa6fd1477`. The SHA-512/256 hash of `alert(uint64)` is `0xc809efe9fd45417226d52b605658b83fff27850a01efeea30f694d1e112d5463`, so its method selector is `0xc809efe9`. The interface selector is defined as the bitwise exclusive or of all method and event selectors, so the interface selector is `0x8aa3b61f XOR 0xa6fd1477 XOR 0xc809efe9`, which is `0xe4574d81`. ### How a Contract will Publish the Interfaces it Implements for Detection In addition to out-of-band JSON contract or interface description data, a contract that is compliant with this specification shall implement the following interface: ```json { "name": "ARC-73", "desc": "Interface for interface detection", "methods": [ { "name": "supportsInterface", "desc": "Detects support for an interface specified by selector.", "readonly": true, "args": [ { "type": "byte[4]", "name": "interfaceID", "desc": "The selector of the interface to detect." } ], "returns": { "type": "bool", "desc": "Whether the contract supports the interface." } } ] } ``` The `supportsInterface` method must be `readonly` as specified by . The implementing contract must have a `supportsInterface` method that returns: * `true` when `interfaceID` is `0x4e22a3ba` (the selector for this interface) * `false` when `interfaceID` is `0xffffffff` * `true` for any other `interfaceID` the contract implements * `false` for any other `interfaceID` ## Rationale This specification is nearly identical to the related specification for Ethereum, , merely adapted to Algorand. ## Security Considerations It is possible that a malicious contract may lie about interface support. This interface makes it easier for all kinds of actors, including malicious ones, to interact with smart contracts that implement it. ## Copyright Copyright and related rights waived via .
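The selector arithmetic in the worked example above can be cross-checked with a short script. This is an illustrative sketch, not part of the ARC (the helper names are ours); it assumes Python's `hashlib` exposes SHA-512/256 via `hashlib.new`, which OpenSSL-backed builds do:

```python
import hashlib

def method_selector(signature: str) -> bytes:
    # First 4 bytes of the SHA-512/256 hash of the method/event signature.
    return hashlib.new("sha512_256", signature.encode()).digest()[:4]

def interface_selector(signatures: list[str]) -> bytes:
    # The interface selector is the XOR of all selectors in the interface.
    out = bytes(4)
    for sig in signatures:
        out = bytes(a ^ b for a, b in zip(out, method_selector(sig)))
    return out

sigs = [
    "add(uint64,uint64)uint128",
    "add3(uint64,uint64,uint64)uint128",
    "alert(uint64)",
]
print(interface_selector(sigs).hex())  # e4574d81, matching the worked example
```

Because XOR is order-independent, the same selector results no matter how the interface's methods and events are listed.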
# NFT Indexer API
> REST API for reading data about Application's NFTs.
## Abstract This specifies a REST interface that can be implemented by indexing services to provide data about NFTs conforming to the standard. ## Motivation While most data is available on-chain, reading and analyzing on-chain logs to get a complete and current picture about NFT ownership and history is slow and impractical for many uses. This REST interface standard allows analysis of NFT contracts to be done in a centralized manner to provide fast, up-to-date responses to queries, while allowing users to pick from any indexing provider. ## Specification This specification defines two REST endpoints: `/nft-indexer/v1/tokens` and `/nft-indexer/v1/transfers`. Both endpoints respond only to `GET` requests, take no path parameters, and consume no request body, but both accept a variety of query parameters. ### `GET /nft-indexer/v1/tokens` Produces `application/json`. Optional Query Parameters: | Name | Schema | Description | | -------------- | ------- | ------------------------------------------------------------------------------------------------------------------------ | | round | integer | Include results for the specified round. For performance reasons, this parameter may be disabled on some configurations. | | next | string | Token for the next page of results. Use the `next-token` provided by the previous page of results. | | limit | integer | Maximum number of results to return. There could be additional pages even if the limit is not reached. | | contractId | integer | Limit results to NFTs implemented by the given contract ID. | | tokenId | integer | Limit results to NFTs with the given token ID. | | owner | address | Limit results to NFTs owned by the given owner. | | mint-min-round | integer | Limit results to NFTs minted on or after the given round. | | mint-max-round | integer | Limit results to NFTs minted on or before the given round. | When successful, returns a response with code 200 and an object with the schema: | Name | Required? 
| Schema | Description | | ------------- | --------- | ------- | -------------------------------------------------------------------------------------------- | | tokens | required | array | Array of Token objects that fit the query parameters, as defined below. | | current-round | required | integer | Round at which the results were computed. | | next-token | optional | string | Used for pagination, when making another request provide this token as the `next` parameter. | The `Token` object has the following schema: | Name | Required? | Schema | Description | | ----------- | --------- | ------- | -------------------------------------------------------------------------------------------------------------------------- | | owner | required | address | The current owner of the NFT. | | contractId | required | integer | The ID of the ARC-72 contract that defines the NFT. | | tokenId | required | integer | The tokenID of the NFT, which along with the contractId addresses a unique ARC-72 token. | | mint-round | optional | integer | The round at which the NFT was minted (IE the round at which it was transferred from the zero address to the first owner). | | metadataURI | optional | string | The URI given for the token by the `metadataURI` API of the contract, if applicable. | | metadata | optional | object | The result of resolving the `metadataURI`, if applicable and available. | When unsuccessful, returns a response with code 400 or 500 and an object with the schema: | Name | Required? | Schema | | ------- | --------- | ------ | | data | optional | object | | message | required | string | ### `GET /nft-indexer/v1/transfers` Produces `application/json`. Optional Query Parameters: | Name | Schema | Description | | ---------- | ------- | ------------------------------------------------------------------------------------------------------------------------ | | round | integer | Include results for the specified round. 
For performance reasons, this parameter may be disabled on some configurations. | | next | string | Token for the next page of results. Use the `next-token` provided by the previous page of results. | | limit | integer | Maximum number of results to return. There could be additional pages even if the limit is not reached. | | contractId | integer | Limit results to NFTs implemented by the given contract ID. | | tokenId | integer | Limit results to NFTs with the given token ID. | | user | address | Limit results to transfers where the user is either the sender or receiver. | | from | address | Limit results to transfers with the given address as the sender. | | to | address | Limit results to transfers with the given address as the receiver. | | min-round | integer | Limit results to transfers that were executed on or after the given round. | | max-round | integer | Limit results to transfers that were executed on or before the given round. | When successful, returns a response with code 200 and an object with the schema: | Name | Required? | Schema | Description | | ------------- | --------- | ------- | -------------------------------------------------------------------------------------------- | | transfers | required | array | Array of Transfer objects that fit the query parameters, as defined below. | | current-round | required | integer | Round at which the results were computed. | | next-token | optional | string | Used for pagination, when making another request provide this token as the `next` parameter. | The `Transfer` object has the following schema: | Name | Required? | Schema | Description | | ---------- | --------- | ------- | ---------------------------------------------------------------------------------------- | | contractId | required | integer | The ID of the ARC-72 contract that defines the NFT. | | tokenId | required | integer | The tokenID of the NFT, which along with the contractId addresses a unique ARC-72 token. 
| | from | required | address | The sender of the transaction. | | to | required | address | The receiver of the transaction. | | round | required | integer | The round of the transfer. | When unsuccessful, returns a response with code 400 or 500 and an object with the schema: | Name | Required? | Schema | | ------- | --------- | ------ | | data | optional | object | | message | required | string | ## Rationale This standard was designed to feel similar to the Algorand indexer API, and uses the same query parameters and results where applicable. ## Backwards Compatibility This standard presents a versioned REST interface, allowing future extensions to change the interface in incompatible ways while allowing for the old service to run in tandem. ## Security Considerations All data available through this indexer API is publicly available. ## Copyright Copyright and related rights waived via .
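Clients can assemble queries against these endpoints with ordinary URL tooling. A minimal sketch in Python; the base URL is a hypothetical indexer host (any service implementing this interface works), and the helper name is ours:

```python
from urllib.parse import urlencode

# Hypothetical host of a service implementing this indexer interface.
BASE_URL = "https://indexer.example.com"

def tokens_url(owner=None, contract_id=None, limit=None, next_token=None):
    """Build a GET URL for /nft-indexer/v1/tokens from optional filters."""
    params = {
        "owner": owner,
        "contractId": contract_id,
        "limit": limit,
        "next": next_token,
    }
    # Drop unset filters; all query parameters are optional per the spec.
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{BASE_URL}/nft-indexer/v1/tokens" + (f"?{query}" if query else "")

print(tokens_url(contract_id=13579, limit=25))
```

Paging follows the usual indexer pattern: pass the previous response's `next-token` value back as the `next` parameter until no `next-token` is returned.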
# Password Account
> Password account using PBKDF2
## Abstract This standard specifies a computation for seed bytes for a Password Account. For general adoption, it is easier for people to remember a passphrase than a mnemonic. With this standard, a person can hash the passphrase and receive the seed bytes for an Ed25519 Algorand account. ## Motivation By providing a clear and precise computation process, Password Account empowers individuals to effortlessly obtain their seed bytes for an Algorand account. For practicality and widespread adoption, the standard highlights the advantages of utilizing a passphrase rather than a mnemonic: individuals can take control of their Ed25519 Algorand account by simply hashing their passphrase and receiving the corresponding seed bytes. This standard also seeks synchronization between wallets that may provide password-protected accounts. ## Specification Seed bytes are generated with the following algorithm: ```js const init = `ARC-0076-${password}-{slotId}-PBKDF2-999999`; const salt = `ARC-0076-{slotId}-PBKDF2-999999`; const iterations = 999999; const cryptoKey = await window.crypto.subtle.importKey( "raw", Buffer.from(init, "utf-8"), "PBKDF2", false, ["deriveBits", "deriveKey"] ); const masterBits = await window.crypto.subtle.deriveBits( { name: "PBKDF2", hash: "SHA-256", salt: Buffer.from(salt, "utf-8"), iterations: iterations, }, cryptoKey, 256 ); const uint8 = new Uint8Array(masterBits); const mnemonic = algosdk.mnemonicFromSeed(uint8); const genAccount = algosdk.mnemonicToSecretKey(mnemonic); ``` The data section SHOULD be at least 16 bytes long. 
Slot ID is the account iteration. Default is “0”. ### Email Password account An Email Password account is an account generated from the modified data ```js const init = `ARC-0076-${email}-${password}-{slotId}-PBKDF2-999999`; const salt = `ARC-0076-${email}-{slotId}-PBKDF2-999999`; ``` The email part can be published to the service provider backend and verified by the service provider. The password MUST NOT be transferred over the network. The password SHOULD be at least 16 bytes long. ### Sample data This sample data may be used for verification of an `ARC-0076` implementation. ```js const email = "email@example.com"; const password = "12345678901234567890123456789012345678901234567890"; const slotId = "0"; const init = `ARC-0076-${email}-${password}-{slotId}-PBKDF2-999999`; const salt = `ARC-0076-${email}-{slotId}-PBKDF2-999999`; ``` Results in: ```plaintext masterBits = [225,7,139,154,245,210,181,138,188,129,145,53,246,184,243,88,163,163,109,208,77,71,7,235,81,244,129,215,102,168,105,21] account.addr = "5AHWQJ5D52K4GRW4JWQ5GMR53F7PDSJEGT4PXVFSBQYE7VXDVG3WSPWSBM" ``` ## Rationale This standard was designed to allow wallets to provide password-protected accounts that do not require the general population to store a mnemonic. The email extension allows service providers to bind a specific account to an email address, and gives users the familiar email-and-password authentication experience they know from web2 use cases. ## Backwards Compatibility We expect future extensions to be compatible with Password Account. The hash mechanism for future algorithms should be suffixed in the same way, such as `-PBKDF2-999999`. ## Security Considerations This standard moves the security of the account to the strength of the user-generated password. It relies on the randomness and collision resistance of PBKDF2 and SHA-256. The user MUST be informed about the risks associated with this type of account. 
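The key derivation can be reproduced outside the browser with Python's `hashlib`, since WebCrypto's PBKDF2 with SHA-256 computes the same function as `hashlib.pbkdf2_hmac("sha256", ...)`. This is a sketch under the assumption that the spec's `{slotId}` placeholder is taken literally, as it is in the JavaScript template strings above (only `${email}` and `${password}` are interpolated); the function name is ours:

```python
import hashlib

def arc76_email_seed(email: str, password: str) -> bytes:
    """Derive the 32 seed bytes for the email variant described above.

    Assumption: "{slotId}" is kept as literal text, mirroring the spec's
    JavaScript template strings, which only interpolate email and password.
    """
    init = f"ARC-0076-{email}-{password}-{{slotId}}-PBKDF2-999999"
    salt = f"ARC-0076-{email}-{{slotId}}-PBKDF2-999999"
    return hashlib.pbkdf2_hmac("sha256", init.encode(), salt.encode(),
                               999999, dklen=32)

seed = arc76_email_seed(
    "email@example.com",
    "12345678901234567890123456789012345678901234567890",
)
print(list(seed))  # compare against the masterBits sample data above
```

Feeding the resulting bytes to `algosdk.mnemonicFromSeed` should then yield the sample address, allowing an end-to-end check of an implementation.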
## Copyright Copyright and related rights waived via .
# URI scheme, keyreg Transactions extension
> A specification for encoding Key Registration Transactions in a URI format.
## Abstract This URI specification represents an extension to the base Algorand URI encoding standard () that specifies encoding of key registration transactions through deeplinks, QR codes, etc. ## Specification ### General format As in , URIs follow the general format for URIs as set forth in . The path component consists of an Algorand address, and the query component provides additional transaction parameters. Elements of the query component may contain characters outside the valid range. These are encoded differently depending on their expected character set. The text components (note, xnote) must first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence must be percent-encoded as described in RFC 3986. The binary components (votekey, selkey, etc.) must be encoded with base64url as specified in . ### Scope This ARC explicitly supports the two major subtypes of key registration transactions: * Online keyreg transaction * Declares intent to participate in consensus and configures required keys * Offline keyreg transaction * Declares intent to stop participating in consensus The following variants of keyreg transactions are not defined: * Non-participating keyreg transaction * This transaction subtype is considered deprecated * Heartbeat keyreg transaction * This transaction subtype will be included in the future block incentives protocol. The protocol specifies that this transaction type must be submitted by a node in response to a programmatic “liveness challenge”. It is not meant to be signed or submitted by an end user. ### ABNF Grammar ```plaintext algorandurn = "algorand://" algorandaddress [ "?" 
keyregparams ] algorandaddress = *base32 keyregparams = keyregparam [ "&" keyregparams ] keyregparam = [ typeparam / votekeyparam / selkeyparam / sprfkeyparam / votefstparam / votelstparam / votekdparam / noteparam / feeparam / otherparam ] typeparam = "type=keyreg" votekeyparam = "votekey=" *qbase64url selkeyparam = "selkey=" *qbase64url sprfkeyparam = "sprfkey=" *qbase64url votefstparam = "votefst=" *qdigit votelstparam = "votelst=" *qdigit votekdparam = "votekd=" *qdigit noteparam = (xnote / note) xnote = "xnote=" *qchar note = "note=" *qchar feeparam = "fee=" *qdigit otherparam = qchar *qchar [ "=" *qchar ] ``` * “qbase64url” corresponds to valid characters of “base64url” encoding, as defined in * “qchar” corresponds to valid characters of an RFC 3986 URI query component, excluding the ”=” and ”&” characters, which this specification takes as separators. As in the base standard, the scheme component (“algorand:”) is case-insensitive, and implementations must accept any combination of uppercase and lowercase letters. The rest of the URI is case-sensitive, including the query parameter keys. ### Query Keys * address: Algorand address of transaction sender. Required. * type: fixed to “keyreg”. Used to disambiguate the transaction type from the base standard and other possible extensions. Required. * votekey: The vote key parameter to use in the transaction. Encoded with encoding. Required for keyreg online transactions. * selkey: The selection key parameter to use in the transaction. Encoded with encoding. Required for keyreg online transactions. * sprfkey: The state proof key parameter to use in the transaction. Encoded with encoding. Required for keyreg online transactions. * votefst: The first round on which the voting keys will be valid. Required for keyreg online transactions. * votelst: The last round on which the voting keys will be valid. Required for keyreg online transactions. * votekd: The vote key dilution parameter to use. 
Required for keyreg online transactions. * xnote: As in . A URL-encoded notes field value that must not be modifiable by the user when displayed to users. Optional. * note: As in . A URL-encoded default notes field value that the user interface may optionally make editable by the user. Optional. * fee: A static fee to set for the transaction in microAlgos. Useful to signal intent to receive participation incentives (e.g. with a 2,000,000 microAlgo transaction fee). Optional. * (others): optional, for future extensions ### Appendix This section contains encoding examples. The raw transaction object is presented along with the resulting URI encoding. #### Encoding keyreg online transaction with minimum fee The following raw keyreg transaction: ```plaintext { "txn": { "fee": 1000, "fv": 1345, "gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=", "lv": 2345, "selkey:b64": "+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c=", "snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=", "sprfkey:b64": "3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W/iy/JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg==", "type": "keyreg", "votefst": 1300, "votekd": 100, "votekey:b64": "UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI=", "votelst": 11300 } } ``` Will result in this ARC-78 encoded URI: ```plaintext algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY? type=keyreg &selkey=-lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c &sprfkey=3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W_iy_JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg &votefst=1300 &votekd=100 &votekey=UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI &votelst=11300 ``` Note: newlines added for readability. Note the difference between base64 encoding in the raw object and base64url encoding in the URI parameters. For example, the selection key parameter `selkey` that begins with `+lfw+` in the raw object is encoded in base64url to `-lfw-`. 
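The base64-to-base64url re-encoding shown for `selkey` can be done with standard library tools. A small sketch (the helper name is ours): decode the standard base64 value, re-encode with the URL-safe alphabet, and strip the `=` padding, which the URI parameters omit:

```python
import base64

def b64_to_b64url(value: str) -> str:
    """Re-encode a standard-base64 string as unpadded base64url for URI params."""
    raw = base64.b64decode(value)
    # urlsafe_b64encode swaps "+" for "-" and "/" for "_"; padding is dropped.
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

print(b64_to_b64url("+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c="))
# -lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c
```

A wallet parsing the URI would apply the inverse: restore the padding, then decode with the URL-safe alphabet.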
Note: Here, the fee is omitted from the URI (due to being set to the minimum 1,000 microAlgos). When the fee is omitted, it is left up to the application or wallet to decide. This is for demonstrative purposes - the ARC-78 standard does not require this behavior. #### Encoding keyreg offline transaction The following raw keyreg transaction: ```plaintext { "txn": { "fee": 1000, "fv": 1776240, "gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=", "lv": 1777240, "snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=", "type": "keyreg" } } ``` Will result in this ARC-78 encoded URI: ```plaintext algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY?type=keyreg ``` This offline keyreg transaction encoding is the smallest compatible ARC-78 representation. #### Encoding keyreg online transaction with custom fee and note The following raw keyreg transaction: ```plaintext { "txn": { "fee": 2000000, "fv": 1345, "gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=", "lv": 2345, "note:b64": "Q29uc2Vuc3VzIHBhcnRpY2lwYXRpb24gZnR3", "selkey:b64": "+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c=", "snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=", "sprfkey:b64": "3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W/iy/JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg==", "type": "keyreg", "votefst": 1300, "votekd": 100, "votekey:b64": "UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI=", "votelst": 11300 } } ``` Will result in this ARC-78 encoded URI: ```plaintext algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY? type=keyreg &selkey=-lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c &sprfkey=3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W_iy_JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg &votefst=1300 &votekd=100 &votekey=UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI &votelst=11300 &fee=2000000 &note=Consensus%2Bparticipation%2Bftw ``` Note: newlines added for readability. 
## Rationale The present standard aims to provide a standardized way to encode key registration transactions in order to enhance the user experience of signing key registration transactions in general, and in particular in the use case of an Algorand node runner that does not have their spending keys resident on their node (as is best practice). The parameter names were chosen to match the corresponding names in encoded key registration transactions. ## Security Considerations None. ## Copyright Copyright and related rights waived via .
# URI scheme, App NoOp call extension
> A specification for encoding NoOp Application call Transactions in a URI format.
## Abstract NoOp calls are generic application calls used to execute the ApprovalProgram of an Algorand smart contract. This URI specification proposes an extension to the base Algorand URI encoding standard () that specifies encoding of application NoOp transactions into standard URIs. ## Specification ### General format As in , URIs follow the general format for URIs as set forth in . The path component consists of an Algorand address, and the query component provides additional transaction parameters. Elements of the query component may contain characters outside the valid range. These are encoded differently depending on their expected character set. The text components (note, xnote) must first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence **MUST** be percent-encoded as described in RFC 3986. The binary components (args, refs, etc.) **MUST** be encoded with base64url as specified in . ### ABNF Grammar ```plaintext algorandurn = "algorand://" algorandaddress [ "?" noopparams ] algorandaddress = *base32 noopparams = noopparam [ "&" noopparams ] noopparam = [ typeparam / appparam / methodparam / argparam / boxparam / assetparam / accountparam / feeparam / otherparam ] typeparam = "type=appl" appparam = "app=" *digit methodparam = "method=" *qchar boxparam = "box=" *qbase64url argparam = "arg=" (*qchar / *digit) feeparam = "fee=" *digit accountparam = "account=" *base32 assetparam = "asset=" *digit otherparam = qchar *qchar [ "=" *qchar ] ``` * “qchar” corresponds to valid characters of an RFC 3986 URI query component, excluding the ”=” and ”&” characters, which this specification takes as separators. * “qbase64url” corresponds to valid characters of “base64url” encoding, as defined in * All params from the base standard are supported and usable if they fit the NoOp application call context (e.g. 
note) * As in the base standard, the scheme component (“algorand:”) is case-insensitive, and implementations **MUST** accept any combination of uppercase and lowercase letters. The rest of the URI is case-sensitive, including the query parameter keys. ### Query Keys * address: Algorand address of the transaction sender * type: fixed to “appl”. Used to disambiguate the transaction type from the base standard and other possible extensions * app: The first reference specifies the called application (Algorand smart contract) ID and is mandatory. Additional references are optional and will be used in the Application NoOp call’s foreign applications array. * method: Specify the full method expression (e.g. “example\_method(uint64,uint64)void”). * arg: Specify the arguments used for calling the NoOp method, to be encoded within the URI. * box: Box references to be used in the Application NoOp method call box array. * asset: Asset references to be used in the Application NoOp method call foreign assets array. * account: Account or NFD address to be used in the Application NoOp method call foreign accounts array. * fee: Optional. A static fee to set for the transaction in microAlgos. * (others): optional, for future extensions Note 1: If the fee is omitted, the minimum fee is used for the transaction. ### Template URI vs actionable URI If the URI is constructed so that other dApps, wallets or protocols could use it with their runtime Algorand entities of interest, then: * The placeholder account/app address in the URI **MUST** be the ZeroAddress (“AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ”). Since the ZeroAddress cannot initiate any action, this approach is considered non-vulnerable and secure.
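The query keys above can be decoded with standard URL parsing; a minimal sketch, assuming Python's standard library and a hypothetical `parse_noop_uri` helper name (not part of the specification):

```python
from urllib.parse import urlparse, parse_qsl

def parse_noop_uri(uri: str) -> dict:
    """Decode an application NoOp call URI into its components (illustrative)."""
    parsed = urlparse(uri)
    if parsed.scheme.lower() != "algorand":
        raise ValueError("not an algorand: URI")
    # parse_qsl preserves repeated keys (app, arg, box, asset may repeat)
    params = parse_qsl(parsed.query)
    if ("type", "appl") not in params:
        raise ValueError("not an application call URI")
    apps = [int(v) for k, v in params if k == "app"]
    fees = [int(v) for k, v in params if k == "fee"]
    return {
        "sender": parsed.netloc,           # Algorand address in the path component
        "app": apps[0],                    # first app reference: the called application
        "foreign_apps": apps[1:],          # additional references: foreign apps array
        "method": next((v for k, v in params if k == "method"), None),
        "args": [v for k, v in params if k == "arg"],
        "fee": fees[0] if fees else None,  # None means: use the minimum fee
    }

call = parse_noop_uri(
    "algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4"
    "?type=appl&app=11111111&method=claim(uint64,uint64)byte[]"
    "&arg=20000&arg=474567&asset=45&fee=10000"
)
# call["app"] == 11111111, call["args"] == ["20000", "474567"], call["fee"] == 10000
```

Note that `parse_qsl` also percent-decodes the text components, matching the encoding rules above.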
### Example Call the claim(uint64,uint64)byte\[] method on contract 11111111, paying a fee of 10000 microAlgos, from a specific address ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?type=appl&app=11111111&method=claim(uint64,uint64)byte[]&arg=20000&arg=474567&asset=45&fee=10000 ``` Call the same claim(uint64,uint64)byte\[] method on contract 11111111, paying the default minimum fee and passing applications 22222222 and 33333333 in the foreign applications array ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?type=appl&app=11111111&method=claim(uint64,uint64)byte[]&arg=20000&arg=474567&asset=45&app=22222222&app=33333333 ``` ## Rationale Algorand application NoOp method calls cover the majority of application transactions on Algorand and have a wide range of use cases. For use cases where the runtime knows exactly what the called application needs in terms of arguments and transaction arrays and there are no direct interactions, this extension will be required since the ARC-26 standard does not currently support application calls. ## Security Considerations None. ## Copyright Copyright and related rights waived via .
# URI scheme blockchain information
> Querying blockchain information using a URI format
## Abstract This URI specification defines a standardized method for querying application and asset data on Algorand. It enables applications, websites, and QR code implementations to construct URIs that allow users to retrieve data such as application state and asset metadata in a structured format. This specification is inspired by and follows similar principles, with adjustments specific to read-only queries for applications and assets. ## Specification ### General Format Algorand URIs in this standard follow the general format for URIs as defined in . The scheme component specifies whether the URI is querying an application (`algorand://app`) or an asset (`algorand://asset`). Query parameters define the specific data fields being requested. Parameters may contain characters outside the valid range. These must first be encoded in UTF-8, then percent-encoded according to RFC 3986. ### Application Query URI (`algorand://app`) The application URI allows querying the state of an application, including data from the application’s box storage, global storage, and local storage, as well as the associated TEAL program. Each storage type has specific requirements. ### Asset Query URI (`algorand://asset`) The asset URI enables retrieval of metadata and configuration details for a specific asset, such as its name, total supply, decimal precision, and associated addresses. ### ABNF Grammar ```abnf algorandappurn = "algorand://app/" appid [ "?" noopparams ] appid = *digit noopparams = noopparam [ "&" noopparams ] noopparam = [ boxparam / globalparam / localparam / tealcodeparam ] boxparam = "box=" *qbase64url globalparam = "global=" *qbase64url localparam = "local=" *qbase64url "&algorandaddress=" *base32 tealcodeparam = "tealcode" algorandasseturn = "algorand://asset/" assetid [ "?" 
assetparam ] assetid = *digit assetparam = [ totalparam / decimalsparam / frozenparam / unitnameparam / assetnameparam / urlparam / metadatahashparam / managerparam / reserveparam / freezeparam / clawbackparam ] totalparam = "total" decimalsparam = "decimals" frozenparam = "frozen" unitnameparam = "unitname" assetnameparam = "assetname" urlparam = "url" metadatahashparam = "metadatahash" managerparam = "manager" reserveparam = "reserve" freezeparam = "freeze" clawbackparam = "clawback" ``` ### Parameter Definitions #### Application Parameters * **`boxparam`**: Queries the application’s box storage with a key encoded in `base64url`. * **`globalparam`**: Queries the global storage of the application using a `base64url`-encoded key. * **`localparam`**: Queries local storage for a specified account. Requires an additional `algorandaddress` parameter, representing the account whose local storage is queried. #### Asset Parameters * **`totalparam`** (`total`): Queries the total supply of the asset. * **`decimalsparam`** (`decimals`): Queries the number of decimal places used for the asset. * **`frozenparam`** (`frozen`): Queries whether the asset is frozen by default. * **`unitnameparam`** (`unitname`): Queries the short name or unit symbol of the asset (e.g., “USDT”). * **`assetnameparam`** (`assetname`): Queries the full name of the asset (e.g., “Tether”). * **`urlparam`** (`url`): Queries the URL associated with the asset, providing more information. * **`metadatahashparam`** (`metadatahash`): Queries the metadata hash associated with the asset. * **`managerparam`** (`manager`): Queries the address of the asset manager. * **`reserveparam`** (`reserve`): Queries the reserve address holding non-minted units of the asset. * **`freezeparam`** (`freeze`): Queries the freeze address for the asset. * **`clawbackparam`** (`clawback`): Queries the clawback address for the asset. 
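The parameters above can be assembled into query URIs with a few lines of Python; a minimal sketch, where the helper names (`make_box_query`, `make_asset_query`) are illustrative and not part of the specification:

```python
import base64

def make_box_query(app_id: int, box_key: bytes) -> str:
    # Box keys are base64url-encoded, per the qbase64url rule in the grammar
    encoded = base64.urlsafe_b64encode(box_key).decode("ascii")
    return f"algorand://app/{app_id}?box={encoded}"

def make_asset_query(asset_id: int, field: str) -> str:
    # Asset fields (total, decimals, unitname, ...) are bare query keys
    return f"algorand://asset/{asset_id}?{field}"

print(make_box_query(2345, b"algorond"))   # algorand://app/2345?box=YWxnb3JvbmQ=
print(make_asset_query(67890, "total"))    # algorand://asset/67890?total
```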
### Query Key Descriptions For each parameter, the query key name is listed, followed by its purpose: * **box**: Retrieves information from the specified box storage key. * **global**: Retrieves data from the specified global storage key. * **local**: Retrieves data from the specified local storage key. Requires `algorandaddress` to specify the account. * **total**: Retrieves the asset’s total supply. * **decimals**: Retrieves the number of decimal places for the asset. * **frozen**: Retrieves the default frozen status of the asset. * **unitname**: Retrieves the asset’s short name or symbol. * **assetname**: Retrieves the full name of the asset. * **url**: Retrieves the URL associated with the asset. * **metadatahash**: Retrieves the metadata hash for the asset. * **manager**: Retrieves the manager address of the asset. * **reserve**: Retrieves the reserve address for the asset. * **freeze**: Retrieves the freeze address of the asset. * **clawback**: Retrieves the clawback address of the asset. ### Example URIs 1. **Querying an Application’s Box Storage**: ```plaintext algorand://app/2345?box=YWxnb3JvbmQ= ``` Queries box storage with a `base64url`-encoded key. 2. **Querying Global Storage**: ```plaintext algorand://app/12345?global=Z2xvYmFsX2tleQ== ``` Queries global storage with a `base64url`-encoded key. 3. **Querying Local Storage**: ```plaintext algorand://app/12345?local=bG9jYWxfa2V5&algorandaddress=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567 ``` Queries local storage with a `base64url`-encoded key and specifies the associated account. 4. **Querying Asset Details**: ```plaintext algorand://asset/67890?total ``` Queries the total supply of an asset. ## Rationale Previously, the Algorand URI scheme was primarily used to create transactions on the chain. This version allows using a URI scheme to directly retrieve information from the chain, specifically for applications and assets. 
This URI scheme provides a unified, standardized method for querying Algorand application and asset data, allowing interoperability across applications and services. ## Security Considerations Since these URIs are intended for read-only operations, they do not alter application or asset state, mitigating many security risks. However, data retrieved from these URIs should be validated to ensure it meets user expectations and that any displayed data cannot be tampered with. ## Copyright Copyright and related rights waived via .
# xGov Council - Application Process
> How to run for an xGov Council seat.
## Abstract The goal of this ARC is to clearly define the process for running for an xGov Council seat. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . ### How to apply In order to apply, a pull request needs to be created on the following repository: . Candidates must explain why they are applying to become an xGov Council member, their motivation for participating in the review process, and how their involvement can contribute to the Algorand ecosystem. * Follow the of the xGov Council Repository. * Follow the , complete all sections, and submit your application using the following file format: `Council/xgov_council-.md`. #### Header Preamble The `id` field is unique and incremented for each new submission. (The `id` should match the file name; e.g., for `id: 1`, the related file is `xgov_council-1.md`.) The `author` field must include the candidate’s full name and their GitHub username in parentheses. > Example: Jane Doe (@janedoe) The `email` field must include a valid email address where the candidate can be contacted regarding the KYC (Know Your Customer) process. The `address` field represents an Algorand wallet address. This address will be used for verification or any token distribution if applicable. The `status` field indicates the current status of the submission: * `Draft`: In Pull request stage but not ready to be merged. * `Final`: In Pull request stage and ready to be merged. * `Elected`: The candidate has been elected. * `Not Elected`: The candidate has not been selected. ### Timeline * Applications will open 4-6 weeks before the election. A call for applications will be posted on the . 
### xGov Council Duties and Powers #### Eligibility Criteria * Any Algorand holder, including xGovs, with Algorand technical expertise and/or a strong reputation can run for the council. * Candidates must disclose their real name, have an identified Algorand address, and undergo the KYC process with the Algorand Foundation. #### Duties * Review and understand the terms and conditions of the program. * Evaluate proposals to check compliance with terms and conditions, provide general guidance, and outline benefits or issues to help kick off the proposal discussion. * Hold public discussions about the proposals review process above. #### Powers * Once a proposal passes, the xGov council can block it ONLY if it doesn’t comply with the terms and conditions. * Expel fellow council members for misconduct by a supermajority vote of at least 85%. * Also, by a majority vote, block fellow council members’ remuneration if they are not performing their duties. ## Rationale The xGov Council is a fundamental component of the xGov Program, tasked with reviewing proposals. A structured, transparent application process ensures that only qualified and committed individuals are elected to the Council. ### Governance measures related to the xGov Council * . * . ## Security Considerations ### Disclaimer jurisdictions and exclusions To be eligible to apply for the xGov council, the applicant must not be a resident of, or located in, the following jurisdictions: Cuba, Iran, North Korea and the Crimea, Donetsk, and Luhansk regions of Ukraine, Syria, Russia, and Belarus. ## Copyright Copyright and related rights waived via .
# xGov status and voting power
> xGov status and voting power for the Algorand Governance
## Abstract This ARC defines the Expert Governor (xGov) status and voting power in the Algorand Expert Governance. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . The notation `(x, y)` denotes a pair of elements, while `(x; y)` (with `;`) denotes the interval of real numbers between `x` and `y` (including neither `x` nor `y`). ### xGov Registry The xGov Registry is the Application that manages the Algorand Expert Governance on the Algorand blockchain. Let * `g` be the Genesis Hash of the Algorand blockchain; * `R` the xGov Registry Application ID; * `Bc` the block number at which the xGov Registry `R` was created on `g`. The xGov Registry is created by the Algorand Foundation and is identified by the tuple `(g, R, Bc)`. > On the Algorand MainNet the xGov Registry is created by the Algorand Foundation address `I7OP7WFSK57IFDHJA6DM5TJC2IFY4M3XSBV4R4PVOV4YWF7K57BZFUVQ5E` and identified by: > > * `g`: `wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=` > * `R`: `3147789458` > * `Bc`: `52307574` ### Governance Period A *governance period* is identified by a pair `(Bi, Bf)` such that * `Bi = 0 mod 1,000,000`; * `Bf = 0 mod 1,000,000`; * `Bf > Bi`; * `Bf > Bc`. And is intended as a range of blocks `[Bi; Bf)` (`Bi` included, `Bf` excluded). > Note that `Bi < Bc` is valid and denotes a period across the xGov Registry creation. ### xGov Status xGovs (Expert Governors) are decision makers in the Algorand Expert Governance, who acquire voting power by securing the network and producing blocks. These individuals can participate in the designation and approval of proposals submitted to the Algorand Expert Governance process. An xGov is associated with an Algorand Address (`a`) subscribing to the Algorand Expert Governance by acknowledging the xGov Registry. 
Once the xGov Registry confirms the acknowledgement on block `h` for the address `a`, the address acquires an xGov status and is considered an xGov. The xGov status **MAY** be revoked on block `k` from address `a`, either by the address itself (unsubscribing from the xGov Registry) or by the xGov Registry rules. The xGov status `(a, h, k)` of an address `a` **SHOULD** be persisted on the xGov Registry state. If the xGov status `(a, h, k)` is revoked (`k ≠ 0`) it **SHOULD NOT** be deleted from xGov Registry state. ### xGov Voting Power Given a governance period `(Bi, Bf)`, an xGov `(a, h, k)` is *eligible* to acquire voting power for that period if and only if: * `h ∈ [Bc; Bf)` (xGov status **acknowledged** before `Bf`), and * `k = 0` or `k ≥ Bf` (xGov status **not revoked** at `Bf`), and * `a` has proposed at least one block in `[Bi; Bf)`. The *voting power* assigned to each xGov `(a, h, k)` is equal to the number of blocks proposed by its Algorand Address (`a`) over the governance period `[Bi; Bf)`. > If an address `a` has acknowledged the xGov Registry at some `h ∈ [Bc; Bf)` and has proposed one or more blocks in `[Bi; Bf)`, then all such proposals in `[Bi; Bf)` contribute to its voting power, including those that occurred before `h`. > The *eligibility* of address `a` holds for all the governance periods `(Bi, Bf)` such that `h ∈ [Bc; Bf)` and the xGov status is not revoked at `Bf` (i.e., `k = 0` or `k ≥ Bf`), with no need to reacknowledge the xGov Registry for each period. ### xGov Committee An xGov Committee is a group of *eligible* xGovs that have acquired voting power in a governance period. Given the xGov Registry `(g, R, Bc)` and a governance period `(Bi, Bf)` as above, an *xGov Committee* for `(g, R, Bc, Bi, Bf)` is a finite set `C` of address–weight pairs `(a, v)` such that the following three conditions hold: 1. 
**Eligibility**: For all `(a, v)` in `C`, there exists an xGov status `(a, h, k)` such that: * `h ∈ [Bc; Bf)`, and * `k = 0` or `k ≥ Bf`, and * `a` has proposed at least one block in `[Bi; Bf)`. 2. **Voting Power**: For all `(a, v)` in `C`, `v` is equal to the voting power of `a` in `[Bi; Bf)`; 3. **Uniqueness**: The addresses `a` in `C` are all distinct. **Eligibility** at `Bf` **MUST** be evaluated on the xGov Registry state immediately after processing block `Bf-1` (i.e., the state at the end of block `Bf-1`). An xGov Committee is defined by the tuple `(g, R, Bc, Bi, Bf, C)`. If `C` is empty, then the xGov Committee for the governance period has no voting power. #### xGov Committee Members The *number of xGov Committee members* `M` is the cardinality of `C`, more formally `M = |C|`. #### xGov Committee Voting Power The *total voting power* of an xGov Committee `V` is the sum of votes (`v`) over all its members (`a`), more formally `V = Σ_{(a,v) ∈ C} v`. ### xGov Committee Selection Procedure The xGov Committee selection is repeated periodically to select new xGov Committees over time. To build the xGov Committee `(g, R, Bc, Bi, Bf, C)`, the selection is executed with the following procedure: 1. Collect all proposed blocks in the governance period `[Bi; Bf)` to build the *potential committee* set `P` (note that not all the Block Proposers hold the xGov status). 2. For each Block Proposer address (`a`) in `P`, assign a voting power (`v`) equal to the number of blocks proposed in the governance period `[Bi; Bf)`. 3. Determine the set of xGov statuses `(a, h, k)` that are *eligible* at `Bf`, i.e. those such that: * `h ∈ [Bc; Bf)`, and * `k = 0` or `k ≥ Bf`. (**OPTIONAL**) If the xGov Registry state does not persist sufficient information to determine `(a, h, k)` at `Bf` from a state snapshot, replay the xGov Registry state transitions up to `Bf` to reconstruct xGov statuses at `Bf`. 4. 
Collect all the *eligible* xGovs in the governance period `[Bc; Bf)` to build the *eligible xGovs* set `E(Bi, Bf)`. 5. Filter `P ∩ E` to obtain the *xGov Committee* `C`. > The Committee for the governance period `(Bi, Bf)` is a pure function of: > > * The blocks history up to `Bf`; > * The fixed registry identity `(g,R,Bc)`; > > And it does not depend on the time at which the Committee is elected. ### Representation The xGov Committee **MUST** be represented with the canonical UTF-8 encoded JSON object with the following schema: ```json { "title": "xGov Committee", "description": "Selected xGov Committee with voting power and validity", "type": "object", "properties": { "xGovs": { "description": "xGovs with voting power, sorted lexicographically with respect to addresses", "type": "array", "items": { "type": "object", "properties": { "address": { "description": "xGov address used on xGov Registry in base32", "type": "string" }, "votes": { "description": "xGov voting power", "type": "integer", "minimum": 1 } }, "required": ["address", "votes"] }, "uniqueItems": true }, "periodStart": { "description": "First block of the Committee selection period, must ≡ 0 mod 1,000,000", "type": "integer", "multipleOf": 1000000 }, "periodEnd": { "description": "Last block of the Committee selection period, must ≡ 0 mod 1,000,000 and greater than periodStart", "type": "integer", "multipleOf": 1000000 }, "totalMembers": { "description": "Total number of Committee members", "type": "integer" }, "networkGenesisHash": { "description": "The genesis hash of the network in base64", "type": "string" }, "registryId": { "description": "xGov Registry application ID", "type": "integer" }, "totalVotes": { "description": "Total number of Committee votes", "type": "integer" } }, "required": ["networkGenesisHash", "periodEnd", "periodStart", "registryId", "totalMembers", "totalVotes", "xGovs"], "additionalProperties": false } ``` For a valid xGov Committee JSON object: * The number of entries in 
the `xGovs` array **MUST** equal `totalMembers`. * The sum of the `votes` fields of all `xGovs` entries **MUST** equal `totalVotes`. * All `address` values in the `xGovs` array **MUST** be distinct. The following rules aim to create a deterministic outcome of the committee file and its resulting hash. The object keys **MUST** be sorted in lexicographical order. The `xGovs` array **MUST** be sorted in lexicographical order with respect to the *unique* `address` keys. The canonical representation of the committee object **MUST NOT** include decorative white-space (pretty printing) or a trailing newline. An xGov Committee is identified by the following identifier: `SHA-512/256(arc0086||SHA-512/256(xGov Committee JSON))` The ASCII string `"arc0086"` **MUST** be encoded as the UTF-8 byte sequence `0x61 0x72 0x63 0x30 0x30 0x38 0x36`. ### Trust Model The Algorand Foundation is responsible for executing the Committee selection algorithm described above and publishing the resulting Committee ID on the xGov Registry. The correctness of the process is auditable post-facto via: * The block proposers’ history (on-chain) * The xGov Registry history and state (on-chain) * The published Committee JSON (hash verifiable) Any actor can recompute and verify the selected committee independently from on-chain data. Clients **SHOULD** use a trusted provider for both the block proposer history and the xGov Registry state. ## Rationale Given the shift of the Algorand protocol towards consensus incentivization, the xGov process could be an additional way to push consensus participation. ## Security Considerations Recomputing the xGov Committee requires access to block proposer history for the entire governance period `[Bi; Bf)` and to the xGov Registry state. Implementations **MUST** ensure that this historical data remains available (for example, via archival nodes or indexer services), or document any assumptions about third-party infrastructure. 
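An independent verifier can recompute the Committee identifier from the canonicalization rules above; a minimal sketch in Python, where `committee_identifier` is an illustrative helper name and SHA-512/256 availability via `hashlib.new` depends on the linked OpenSSL build:

```python
import hashlib
import json

def committee_identifier(committee: dict) -> bytes:
    """Compute SHA-512/256("arc0086" || SHA-512/256(canonical JSON)) (illustrative)."""
    canonical_obj = dict(committee)
    # xGovs sorted lexicographically by address, per the canonicalization rules
    canonical_obj["xGovs"] = sorted(committee["xGovs"], key=lambda x: x["address"])
    # Keys sorted, no decorative whitespace, no trailing newline, UTF-8 encoded
    canonical = json.dumps(
        canonical_obj, sort_keys=True, separators=(",", ":")
    ).encode("utf-8")
    inner = hashlib.new("sha512_256", canonical).digest()
    return hashlib.new("sha512_256", b"arc0086" + inner).digest()
```

Because the canonical form is deterministic, recomputing from a differently ordered input yields the same 32-byte identifier, which can then be compared against the published Committee ID.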
Clients **SHOULD** notify the Algorand Foundation if: * The xGov Committee for period `[Bi; Bf)` is not published by the Algorand Foundation within `10,000` blocks of the end of the period. * A published Committee ID does not match any recomputed xGov Committee using the agreed `(g, R, Bc, Bi, Bf, C)`. ## Copyright Copyright and related rights waived via .
# Key Name Specification
> A system for addressable values
## Abstract Adopt a standard key name specification for complex data. This defines key names that can be used to represent JSON, Blobs, or other structures that do not fit neatly into the state. ## Motivation This pattern has emerged over time as a way to circumvent constraints with state storage. This seeks to codify the practice into a shared definition which can be leveraged as a primitive in the ecosystem. This greatly simplifies the cross-cutting concerns when integrating with complex structures by directly addressing values on-chain. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . > All bullet points are in reference to Key Names * **SHOULD** be prefixed with `o_` (for discovery/indexing) * **MUST** separate nested object keys with `.` * **MUST** index collections with `[N]` * **SHALL** be escaped by starting with `${` and ending in `}` ## Rationale Multiple variants of this pattern create downstream cycles which could be avoided. ### JSON/Objects The following object: ```json { "alice": "APZK5I5UAURBDSGFBEHYK3B235CDGYGXG6BAGC34ZHGDPQOJTBM5OSG6IE", "bob": "BX2RWWE77PA7JNNIPWUBQYX44LDHDE6EBRFEPLJUOMBNLT4ATQ3SA7UGEQ", "metadata": { "rp": "algorand.co" } } ``` represents the following Key/Value pairs: | key | value | | -------------- | ---------------------------------------------------------- | | o\_alice | APZK5I5UAURBDSGFBEHYK3B235CDGYGXG6BAGC34ZHGDPQOJTBM5OSG6IE | | o\_bob | BX2RWWE77PA7JNNIPWUBQYX44LDHDE6EBRFEPLJUOMBNLT4ATQ3SA7UGEQ | | o\_metadata.rp | algorand.co | ### Blob/File Assuming the blob is greater than the state storage limit, chunking is required and can be represented in an object ```json { "index": 2, "mime": "text/plain", "blobs": [ "...", "..." 
] } ``` Would produce the following keys | key | value | | ------------ | ---------- | | o\_index | 2 | | o\_mime | text/plain | | o\_blobs\[0] | … | | o\_blobs\[1] | … | This is only illustrative of the value size constraints; a dedicated specification would be more robust for bespoke Objects. This is out of scope for this key name specification. ### Templatization Assuming the key names are greater than 64 bytes, mapping of the names to values is required. ```json { "this is a really long key that for some reason is extra long even though it probably doesn't need to be this long but idk maybe someone has a key this long": "data for super long key" } ``` Would produce the following keys | key | value | | ------------------- | ----------------------- | | o\_${path.to.value} | data for super long key | This is only illustrative of the key size constraints; a dedicated specification would be more robust for applying templates. This is out of scope for this key name specification. ### Encoding / Containers Assuming the values are encoded, further processing is required with knowledge of the types ```json { "APZK5I5UAURBDSGFBEHYK3B235CDGYGXG6BAGC34ZHGDPQOJTBM5OSG6IE": [0,1,2,3,...] } ``` Given the value type ```typescript class PackedValue extends Struct<{ a: uint64 b: uint64 }> {} ``` Would be represented as the following object ```json { "APZK5I5UAURBDSGFBEHYK3B235CDGYGXG6BAGC34ZHGDPQOJTBM5OSG6IE": { "a": 1234, "b": 1234 } } ``` And would produce the following keys | key | value | | ------------------------------------------------------------- | ----- | | o\_APZK5I5UAURBDSGFBEHYK3B235CDGYGXG6BAGC34ZHGDPQOJTBM5OSG6IE | bytes | This is only illustrative of the current encoding practices; a mapping of ARC-4 Containers to key paths could be done at a future date. This is out of scope for this key name specification. ## Backwards Compatibility All backwards compatibility must be done with an Adapter. 
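The prefixing, nesting, and indexing rules above can be sketched as a small flattening helper; `to_key_values` is an illustrative name, and the `${...}` escaping/templatization rule is intentionally out of scope of the sketch:

```python
def to_key_values(obj: dict, prefix: str = "o_") -> dict:
    """Flatten an object into key/value pairs: `o_` prefix,
    `.` for nested object keys, `[N]` for collection indices."""
    out = {}

    def walk(value, path):
        if isinstance(value, dict):
            for key, child in value.items():
                walk(child, f"{path}.{key}" if path else key)
        elif isinstance(value, list):
            for i, child in enumerate(value):
                walk(child, f"{path}[{i}]")
        else:
            out[prefix + path] = value

    walk(obj, "")
    return out

keys = to_key_values({"index": 2, "mime": "text/plain", "blobs": ["a", "b"]})
# keys == {"o_index": 2, "o_mime": "text/plain", "o_blobs[0]": "a", "o_blobs[1]": "b"}
```

This reproduces the key tables from the JSON/Objects and Blob/File examples above.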
## Reference Implementation * See ## Security Considerations * This does not account for private data ## Copyright Copyright and related rights waived via .
# ASA Metadata Registry
> Singleton Application providing ASA metadata via Algod API or the AVM
## Abstract This ARC defines the interface and the implementation of a singleton Application that provides Algorand Standard Assets metadata through the Algod API or the AVM. ## Motivation Algorand Standard Assets (ASA) lack a dedicated metadata field on the Algorand ledger for storing additional asset information. Although it’s generally not advisable to use Algorand as a distributed storage system for data that could easily reside elsewhere, the absence of a native metadata store on the ledger has led the ecosystem to adopt less-than-ideal solutions for discovering and fetching off-chain asset data, involving the usage of an Indexer or external infrastructure (such as IPFS), or hacking on the ASA RBAC roles to get asset metadata mutability. While storing huge data, such as images, off-chain is a practical (and recommended) approach, smaller, more pertinent data should not incur the expenses, availability challenges, and latency typically associated with external infrastructure. This ARC establishes a standardized URI within the ASA URL field to solve this simple use case: *directly retrieving ASA metadata using the Algod API or the AVM*. ## Specification The keywords “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . The data types (like `uint64`, `byte[]`, etc.) in this document are to be interpreted as specified in . > Notes like this are non-normative. ### ASA Metadata Registry The ASA Metadata Registry is an *immutable singleton* Application that stores *mutable* or *immutable* Asset Metadata. 
The trusted deployments of ASA Metadata Registry are: | NETWORK | GENESIS HASH (`base64`) | APP ID | CREATOR ADDRESS | | :------- | :--------------------------------------------: | :---------: | :----------------------------------------------------------- | | Main Net | `wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=` | `TBD` | `XODGWLOMKUPTGL3ZV53H3GZZWMCTJVQ5B2BZICFD3STSLA2LPSH6V6RW3I` | | Test Net | `SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=` | `753324084` | `QYK5DXJ27Y7WIWUJMP3FFOTEU56L4KTRP4CY2GAKRXZHHKLNWV6M7JLYJM` | > Refer to the for the detailed Application Specification of the singleton reference implementation. The initial Minimum Balance Requirement (MBR) for the ASA Metadata Registry Application Account **SHOULD** be provided *before* enabling the creation of any Asset Metadata. Once deployed, the ASA Metadata Registry **MUST NOT** be updated. #### Asset Metadata Box The ASA Metadata, along with some ancillary information, are stored in a dedicated Box of the ASA Metadata Registry, called *Asset Metadata Box*. There **MUST** be at most one Asset Metadata Box per ASA. The Asset Metadata Box Name **MUST** be equal to the raw 8-byte big-endian encoding of the *Asset ID* (`uint64`) (`ASSET_METADATA_BOX_KEY_SIZE = 8` bytes). The Asset Metadata Box Value **MUST** be defined as follows: | FIELD | SCOPE | IN METADATA HASH | TYPE | BYTE OFFSET | BYTE SIZE | | :----- | :------------- | :--------------: | :--: | :-----------------------: | :-------: | | Metadata Identifiers | Header | Yes | `byte` | `0` | `1` | | Reversible Flags | Header | Yes | `byte` | `1` | `1` | | Irreversible Flags | Header | Yes | `byte` | `2` | `1` | | Metadata Hash | Header | No (Recursive) | `byte[32]` | `3` | `32` | | | Header | No | `uint64` | `35` | `8` | | | Header | No | `uint64` | `43` | `8` | | Metadata | Body | Yes | `byte[]` | `51` | up to `MAX_METADATA_SIZE` | > See the for more details about the Metadata encoding and size limits. 
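The Box Name rule above can be sketched in a few lines; the constant and function names are illustrative, not part of the specification:

```python
ASSET_METADATA_BOX_KEY_SIZE = 8  # bytes, per the specification above

def asset_metadata_box_name(asset_id: int) -> bytes:
    # Box name: raw 8-byte big-endian encoding of the uint64 Asset ID
    return asset_id.to_bytes(ASSET_METADATA_BOX_KEY_SIZE, byteorder="big")

assert asset_metadata_box_name(1) == b"\x00\x00\x00\x00\x00\x00\x00\x01"
```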
#### Metadata Header The Metadata Header is a byte-array of fixed length (`HEADER_SIZE`), encoding ancillary attributes of the Asset Metadata. The `HEADER_SIZE` (`uint16`) is a parameter of the ASA Metadata Registry that is equal to the sum of the Header fields byte sizes (`51` bytes). The maximum `HEADER_SIZE` depends on: * The AVM size limit of the `log` opcode `MAX_LOG_SIZE` (`1024` bytes); * The return prefix (`151f7c75`) size `ARC4_RETURN_PREFIX_SIZE` (`4` bytes); Therefore, `HEADER_SIZE ≤ MAX_LOG_SIZE - ARC4_RETURN_PREFIX_SIZE = 1020` bytes. ##### Metadata Identifiers The Metadata Identifiers (`byte`) are a set of boolean switches set by the ASA Metadata Registry. The Metadata Identifiers are defined as follows: | BIT | DESCRIPTION | DEFAULT | STATE TRANSITION | | :---: | :---------- | :------: | :--------------- | | `LSB` | Not used | - | - | | `1` | Not used | - | - | | `2` | Not used | - | - | | `3` | Not used | - | - | | `4` | Not used | - | - | | `5` | Not used | - | - | | `6` | Not used | - | - | | `MSB` | Short Metadata | `False` | Two-ways | The `MSB` is the leftmost bit in the byte stored in the Asset Metadata Box. The Metadata Identifiers **SHALL NOT** be updated if the Metadata is *immutable*. ###### Short Metadata The Metadata **MAY** be identified as *short* on creation or after, by setting the `MSB` in the Metadata Identifiers to `True`. The *short* Metadata identifier is derived from the `metadata_size`. It is set to `True` if and only if `metadata_size ≤ SHORT_METADATA_SIZE`, and `False` otherwise. Its value **MAY** change on update. Clients **MUST NOT** assume the *short* identifier persists across updates, since the Metadata size is not guaranteed to be constant (if the Metadata is not *immutable*). > If the Metadata is identified as *short*, clients are aware that all AVM opcodes can operate directly on the whole Metadata, for example: decoding (`json_ref`, `base64_decode`), cryptography (`sha256`, `keccak256`, `sha512_256`, `sha3_256`), byte manipulations, etc. 
> For further details on identification rules, refer to the .

##### Metadata Flags

The Metadata Flags (**reversible** and **irreversible**) are two *distinct* sets of boolean switches set by the ASA Manager Address.

* **Reversible Flags**: are *two-way* switches; they can be set (to `True`) or unset (to `False`). For further details, see the .
* **Irreversible Flags**: are *one-way* switches; they can only be set (to `True`). For further details, see the .

> Metadata Flags can be used for bitwise operations with a bitmask.

The Metadata Flags **MAY** be set by the ASA Manager Address **on creation** or **later**.

The Metadata Flags **SHALL NOT** be updated if the Metadata is immutable.

###### Reversible Flags

The Reversible Flags (`byte`) are defined as follows:

| BIT | DESCRIPTION | DEFAULT | SET TIME |
| :---: | :----------------------------------------- | :-----: | :------- |
| `LSB` | Smart ASA | `False` | Any |
| `1` | Circulating Supply | `False` | Any |
| `2` | Native Token Transfers (NTT) supported | `False` | Any |
| `3` | Custom, should be reserved for future ARCs | `False` | Any |
| `4` | Custom, should be reserved for future ARCs | `False` | Any |
| `5` | Custom, should be reserved for future ARCs | `False` | Any |
| `6` | Custom, should be reserved for future ARCs | `False` | Any |
| `MSB` | Custom, should be reserved for future ARCs | `False` | Any |

The `MSB` is the leftmost bit in the byte stored in the Asset Metadata Box. The bits `2 ... MSB` are reserved for future ARCs (default `False` if not used).

An ASA **MAY** be declared to be a Smart ASA on creation or after, by setting the `LSB` in the Reversible Flags to `True`.

If the ASA is declared to be a Smart ASA:

* The ASA **MUST** conform with the specification, and
* The Metadata **MUST** be used for the ASA Controlling Application discovery, conforming to the specification.

The ASA Circulating Supply **MAY** be enabled on creation or after, by setting bit `1` in the Reversible Flags to `True`.
If the support is *enabled*:

* The ASA **MUST** conform with the specification, and
* The Metadata **MUST** be used for the ASA Circulating Supply Application discovery, conforming to the specification.

An ASA **MAY** declare to support on creation or after, by setting bit `2` in the Reversible Flags to `True`.

###### Irreversible Flags

The Irreversible Flags (`byte`) are defined as follows:

| BIT | DESCRIPTION | DEFAULT | SET TIME |
| :---: | :----------------------------------------- | :-----: | :------------------- |
| `LSB` | Compliant | `False` | At metadata creation |
| `1` | Native ASA | `False` | At metadata creation |
| `2` | Burnable ASA | `False` | Any |
| `3` | Custom, should be reserved for future ARCs | `False` | Any |
| `4` | Custom, should be reserved for future ARCs | `False` | Any |
| `5` | Custom, should be reserved for future ARCs | `False` | Any |
| `6` | Custom, should be reserved for future ARCs | `False` | Any |
| `MSB` | Immutable | `False` | Any |

The `MSB` is the leftmost bit in the byte stored in the Asset Metadata Box. The bits `3 ... 6` are reserved for future ARCs (default `False` if not used).

The Metadata **MAY** be declared as *compliant* on creation, by setting the `LSB` in the Irreversible Flags to `True`.

The ASA **MAY** be declared as a *native* ASA on creation, by setting bit `1` in the Irreversible Flags to `True`.

The ASA **MAY** be declared as a *burnable* ASA on creation or after, by setting bit `2` in the Irreversible Flags to `True`, if the ASA has no Clawback Address.

###### Metadata Immutability

The Metadata **MAY** be declared as *immutable* on creation or after, by setting the `MSB` in the Irreversible Flags to `True`.

> ⚠️ WARNING: If the ASA Manager Address is set to the Zero Address, this implies that the ASA is effectively *immutable*, regardless of the Metadata Immutability flag (`MSB`) setting.

##### Metadata Hash

The Metadata Hash (`byte[32]`) is a 256-bit hash computed as defined in the .
The Metadata Hash **MUST** be set on Asset Metadata creation.

If the Asset Metadata is not immutable, the Metadata Hash **MUST** be updated on any modification of either:

* Metadata Identifiers, or
* Metadata Flags, or
* Metadata (body).

##### Last Modified Round

The Last Modified Round (`uint64`) records the block in which the Metadata Header or the Metadata was last modified (or created).

If the Asset Metadata is not immutable, the Last Modified Round **MUST** be updated on any modification of either:

* Metadata Identifiers, or
* Metadata Flags, or
* Metadata (body).

> The Last Modified Round is guaranteed to be monotonically increasing.

#### Deprecated By

The Deprecated By (`uint64`) is the Application ID of the new ASA Metadata Registry version.

The Deprecated By field **MUST** be set to `0` if the ASA Metadata Registry is not deprecated.

The ASA Manager Address **MAY** migrate *mutable* metadata to a new ASA Metadata Registry version by setting the Deprecated By field to the Application ID of the new ASA Metadata Registry version. *Immutable* metadata **MUST NOT** be migrated.

#### Metadata

The Metadata (`byte[]`) is a byte-array of variable length (`metadata_size`).

The `metadata_size` **MAY** be `0`, representing *empty* Metadata. In this case, the Metadata Body is the empty byte string (and `total_pages = 0`, see ). The Metadata Header still exists and can be retrieved by clients.
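Putting the Header fields together, a non-normative client-side sketch can split a raw Box Value into its Header fields and Metadata body using the byte offsets of the Box Value table (the function name `parse_box_value` is illustrative):

```python
import struct

HEADER_SIZE = 51  # 1 + 1 + 1 + 32 + 8 + 8 bytes

def parse_box_value(value: bytes) -> dict:
    """Split an Asset Metadata Box Value into Header fields and Metadata body,
    following the byte offsets of the Box Value table."""
    if len(value) < HEADER_SIZE:
        raise ValueError("box value shorter than HEADER_SIZE")
    # Single-byte fields at offsets 0, 1, 2
    identifiers, reversible, irreversible = value[0], value[1], value[2]
    # 32-byte Metadata Hash at offset 3
    metadata_hash = value[3:35]
    # Two big-endian uint64 fields at offsets 35 and 43
    last_modified_round, deprecated_by = struct.unpack(">QQ", value[35:51])
    return {
        "identifiers": identifiers,
        "reversible_flags": reversible,
        "irreversible_flags": irreversible,
        "metadata_hash": metadata_hash,
        "last_modified_round": last_modified_round,
        "deprecated_by": deprecated_by,
        "metadata": value[HEADER_SIZE:],  # empty bytes when metadata_size == 0
    }
```

Note that for *empty* Metadata the body slice is simply the empty byte string, while the Header fields remain available.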
The `MAX_METADATA_SIZE` (`uint16`) is a parameter of the ASA Metadata Registry that depends on:

* The maximum byte size of an AVM Box `MAX_BOX_SIZE` (`32768` bytes);
* The maximum Application Call arguments size `MAX_ARG_SIZE` (`2048` bytes);
* The maximum number of transactions per Group `MAX_GROUP_SIZE` (`16`);
* The `HEADER_SIZE`;
* The method selector size `ARC4_METHOD_SELECTOR_SIZE` (`4` bytes);
* The available payload for the method `arc89_create_metadata(uint64,byte,byte,uint16,byte[],pay)` (`FIRST_PAYLOAD_MAX_SIZE = MAX_ARG_SIZE - (ARC4_METHOD_SELECTOR_SIZE + 8 + 1 + 1 + 2 + 2 + 0) = 2030` bytes), which consumes an extra `pay` transaction in the Group (the `pay` transaction is not encoded as argument bytes, hence the `+ 0` in the formula);
* The available payload for the method `arc89_extra_payload(uint64,byte[])` (`EXTRA_PAYLOAD_MAX_SIZE = MAX_ARG_SIZE - (ARC4_METHOD_SELECTOR_SIZE + 8 + 2) = 2034` bytes).

> The `MAX_METADATA_SIZE` is not constrained by the first head payload of the methods `arc89_replace_metadata(...)` and `arc89_replace_metadata_larger(...)`, since their available payloads are larger than the one of `arc89_create_metadata(...)`.

> Refer to the section for details about the method signatures.

Therefore, `MAX_METADATA_SIZE = FIRST_PAYLOAD_MAX_SIZE + 14 * EXTRA_PAYLOAD_MAX_SIZE = 30506` bytes.

The condition `MAX_METADATA_SIZE ≤ MAX_BOX_SIZE - HEADER_SIZE` **MUST** hold.

The `metadata_size` **MUST** hold the condition: `metadata_size ≤ MAX_METADATA_SIZE`.

The `SHORT_METADATA_SIZE` (`uint16`) is a parameter of the ASA Metadata Registry that is equal to the maximum AVM Stack element length (`4096` bytes).

If `metadata_size ≤ SHORT_METADATA_SIZE`, the Metadata **MUST** be declared as *short*.

The Metadata **MUST NOT** be updated if immutable.

> The available payload for the method `arc89_replace_metadata_slice(uint64,uint16,byte[])` is `REPLACE_PAYLOAD_MAX_SIZE = MAX_ARG_SIZE - (ARC4_METHOD_SELECTOR_SIZE + 8 + 2 + 2) = 2032` bytes.
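The size parameters above can be re-derived arithmetically; this non-normative sketch checks the constants against the formulas in this section:

```python
# AVM and ABI constants, as listed above
MAX_BOX_SIZE = 32768
MAX_ARG_SIZE = 2048
MAX_GROUP_SIZE = 16
HEADER_SIZE = 51
ARC4_METHOD_SELECTOR_SIZE = 4

# arc89_create_metadata(uint64,byte,byte,uint16,byte[],pay):
# selector + 8 + 1 + 1 + 2 + 2-byte dynamic-array length prefix
# (+ 0 bytes for the pay transaction, which is not an argument)
FIRST_PAYLOAD_MAX_SIZE = MAX_ARG_SIZE - (ARC4_METHOD_SELECTOR_SIZE + 8 + 1 + 1 + 2 + 2 + 0)

# arc89_extra_payload(uint64,byte[]): selector + 8 + 2-byte length prefix
EXTRA_PAYLOAD_MAX_SIZE = MAX_ARG_SIZE - (ARC4_METHOD_SELECTOR_SIZE + 8 + 2)

# The head call plus its pay transaction occupy 2 of the 16 Group slots,
# leaving 14 slots for arc89_extra_payload calls.
MAX_METADATA_SIZE = FIRST_PAYLOAD_MAX_SIZE + (MAX_GROUP_SIZE - 2) * EXTRA_PAYLOAD_MAX_SIZE

assert FIRST_PAYLOAD_MAX_SIZE == 2030
assert EXTRA_PAYLOAD_MAX_SIZE == 2034
assert MAX_METADATA_SIZE == 30506
assert MAX_METADATA_SIZE <= MAX_BOX_SIZE - HEADER_SIZE  # single-box constraint
```

The single-box constraint (`30506 ≤ 32768 - 51`) is what keeps the whole Asset Metadata readable with one Algod box request.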
##### Encoding

The Metadata **MUST** be a sequence of bytes representing a valid UTF-8 encoded JSON *object*, as defined in , without Byte Order Mark (BOM).

If the Metadata is *empty* (`metadata_size == 0`), clients **MUST** treat it as an empty JSON object for parsing purposes.

If the Metadata is a valid JSON object, it **SHOULD** conform to the . This is the **RECOMMENDED** schema for maximum interoperability with the ecosystem (e.g., explorers, wallets, etc.).

##### Pagination

A Metadata Page is a byte-array of variable length (`page_size`) that contains a portion of (or the entire) Metadata.

The `PAGE_SIZE` (`uint16`) is a parameter of the ASA Metadata Registry that depends on:

* The AVM size of the `log` opcode `MAX_LOG_SIZE` (`1024` bytes);
* The return prefix (`151f7c75`) size `ARC4_RETURN_PREFIX_SIZE` (`4` bytes);
* The maximum *return type* encoding overhead, which depends on the return type `(bool,uint64,byte[])`; ABI tuples are encoded as `head(...) || tail(...)`.

Therefore, `PAGE_SIZE = MAX_LOG_SIZE - ARC4_RETURN_PREFIX_SIZE - (1 + 8 + 2 + 2) = 1007` bytes.

The `page_size` **MUST** hold the condition `page_size ≤ PAGE_SIZE`.

A `page` **MUST** be identified by a 0-based index (`uint8`) from the head of the Metadata. Page `p` covers the byte range `[p*PAGE_SIZE, min((p+1)*PAGE_SIZE, metadata_size))`. The final `page` **MAY** be shorter; all intermediate pages **SHOULD** have `page_size = PAGE_SIZE`.

> A `uint8` is enough as a `page` index since `ceil(MAX_METADATA_SIZE/PAGE_SIZE) = 31`; `0` pages are allowed (i.e., empty Metadata).

> Empty Metadata: when `total_pages == 0`, there are no Metadata Pages for hashing purposes; however, the method accepts `page = 0` and returns an empty page (and `has_next_page = False`) as a convenience (any `page != 0` fails).

##### MBR Delta

The *MBR Delta* is the variation of the ASA Metadata Registry Application Account MBR due to the creation, update, or deletion of the Asset Metadata Box.
It is a tuple of two elements, encoding:

* The *sign* (`uint8`) enum:

  | ENUM | VALUE | DESCRIPTION |
  | :----- | :---: | :---------- |
  | `NULL` | `0` | Null |
  | `POS` | `1` | Positive |
  | `NEG` | `255` | Negative |

* The *amount* (`uint64`) of MBR, expressed in microALGO.

The MBR Delta is calculated based on the following contextual information:

* The *existence* of the Asset Metadata Box for the ASA, and
* The relative byte sizes (`delta_size`) between a *new* Metadata (`new_metadata_size`) and the *existing* Metadata (`metadata_size`, if any).

#### Metadata Hash Computation

If the Asset Metadata Hash (`am`) field of the ASA is set (i.e., not zero), then:

* It takes precedence over the hash computation, and it is copied verbatim as the Metadata Hash, and
* The Asset Metadata **MUST** be flagged as at creation, and
* The ASA Metadata Registry **SHALL** validate it (according to the following specification) if the Asset Metadata is flagged as and not as .

> Refer to the section for details about the *Asset Metadata Hash* (`am`) field.

Otherwise, the Metadata Hash is computed as follows:

1. Compute the Metadata Header Hash (`hh`):

   ```plain
   hh = SHA-512/256("arc0089/header" || Asset ID || Metadata Identifiers || Reversible Flags || Irreversible Flags || Metadata Size)
   ```

2. If `total_pages > 0`, compute the Page Hashes (`ph[i]`) for each Metadata Page (`i = 0 ... total_pages - 1`):

   ```plain
   ph[i] = SHA-512/256("arc0089/page" || Asset ID || Page Index || Page Size || Page Content)
   ```

3. If `total_pages > 0`, compute the Asset Metadata Hash (`am`) as:

   ```plain
   am = SHA-512/256("arc0089/am" || hh || ph[0] || ph[1] || ... || ph[total_pages - 1])
   ```

   otherwise, if `total_pages == 0`, compute the Asset Metadata Hash (`am`) as:

   ```plain
   am = SHA-512/256("arc0089/am" || hh)
   ```

Where:

* `||` denotes concatenation;
* `Asset ID` is the 8-byte encoding of the Asset ID (`uint64`), serialized in network byte order (big-endian);
* `Metadata Identifiers` is the 1-byte encoding of the (`byte`);
* `Reversible Flags` is the 1-byte encoding of the (`byte`);
* `Irreversible Flags` is the 1-byte encoding of the (`byte`);
* `Metadata Size` is the 2-byte encoding of the Metadata Size (`uint16`), serialized in network byte order (big-endian);
* `Page Index` is the 1-byte encoding of the 0-based Metadata Page Index (`uint8`);
* `Page Size` is the 2-byte encoding of the i-th Page byte size (`uint16`), serialized in network byte order (big-endian);
* `Page Content` is the *exact raw bytes* content of the i-th Metadata Page, unpadded if `len(page) < PAGE_SIZE`;
* `SHA-512/256` is defined in .

Hash components **MUST NOT** be reinterpreted as a signed integer, bitset string, or multibyte integer prior to hashing.

> ⚠️ The Last Modified Round and the Deprecated By fields are **NOT** included in the Metadata Hash computation.

### ASA Creation

Care has to be taken when creating an , specifically:

* The *Asset URL* (`au`) field is defined at ASA creation time, and it is *immutable*,
* The *Asset Metadata Hash* (`am`) field is defined at ASA creation time, and it is *immutable*,
* The ASA Manager Address **MUST NOT** be set to the Zero Address on creation.

#### Asset URL

The *Asset URL* (`au`) field is used as a *partial* URI pointing to the Asset Metadata on the Algorand ledger.

The *Asset URL* (`au`) **MUST** begin with the *partial* URI: `algorand:///app/?box=` and **MAY** declare the at the end of the *partial* URI: `algorand:///app/?box=#arc++...` where ``, ``, ``, etc. are the ARC numbers of the compliance fragments.

The **MUST** be set to `True`.
Clients **MUST** resolve the *partial* Asset URL (`au`) to a *complete* before using it.

> Refer to the for details about the *complete* *Asset Metadata URI*.

#### Asset Metadata Hash

The *Asset Metadata Hash* (`am`) field is used as a hash-lock invariant on ASA creation.

The ASA Creator **SHOULD** compute the *Asset Metadata Hash* (`am`) field as specified by:

* The , if the ASA is ,
* Otherwise, the .

> Since the Metadata Identifiers are set by the ASA Metadata Registry on creation, the ASA Creator needs to pre-identify the ASA based on the creation parameters, specifically:
>
> * Whether the Metadata size at creation time is less than or equal to `SHORT_METADATA_SIZE`.

#### Compliance

The compliance with is **OPTIONAL** but **RECOMMENDED** to maximize interoperability with the ecosystem.

If the ASA conforms to , then:

* The ASA **MUST** comply with the for the *Asset Name* (`an`) and the *Asset URL* (`au`) fields.
* It is **RECOMMENDED** to use the *Asset URL* (`au`) suffix option; in this case the *partial* URI would be: `algorand:///app/?box=#arc3`
* The ASA **MUST** comply with the for the *Asset Metadata Hash* (`am`) field if the Asset Metadata is set as at creation; otherwise the *Asset Metadata Hash* (`am`) field **MUST NOT** be set (i.e., set to zero).
* The Asset Metadata **MUST** comply with the .
* The **MUST** be set to `True`.

> Refer to the for details about the *complete* *Asset Metadata URI*.

> The ASA Metadata Registry does not enforce *Asset Metadata Hash* (`am`) validation for ASA.

#### Creation Process

Two **RECOMMENDED** creation processes are provided.

##### ARC-89 Native ASA Creation

The **RECOMMENDED** creation process for a *native* ASA is:

1. The ASA Creator Address defines the and the ,
2. The ASA Creator Address creates an ASA as follows:
   * The *Asset URL* (`au`) field is set to `algorand:///app/?box=#arc89`,
   * If the Asset Metadata is , the *Asset Metadata Hash* (`am`) field is computed according to the using the , the defined and the Metadata (raw bytes),
   * The ASA Manager Address is *not* set to the Zero Address.
3. The ASA Manager Address creates the Asset Metadata on the ASA Metadata Registry, using the defined Metadata Flags and Metadata.

##### ARC-89 Native ASA Creation with ARC-3 Compliant Metadata

The **RECOMMENDED** creation process for a *native* ASA with Metadata is:

1. The ASA Creator Address defines the and the ,
2. The ASA Creator Address creates an ASA as follows:
   * The *Asset URL* (`au`) field is set to `algorand:///app/?box=#arc3`,
   * If the Asset Metadata is , the *Asset Metadata Hash* (`am`) field is set according to the ,
   * The ASA Manager Address is *not* set to the Zero Address.
3. The ASA Manager Address creates the Asset Metadata on the ASA Metadata Registry, using the defined Metadata Flags and Metadata.

If the ASA configuration (Role-Based Access Control and destroyability) needs to be locked (by disabling the ASA Manager Address), the Asset Metadata **MUST** be created first.

> The compliance fragment for **MUST NOT** contain additional elements (i.e., `#arc3+89` is not allowed); see the section for details.

### Asset Metadata URI

To get the *Asset Metadata URI*, clients **SHALL** complete the *Asset URL* with the `boxparam` filled with the Asset Metadata Box Name, equal to the *Asset ID* (big-endian `uint64` encoded as `base64url`, URL-safe with padding):

`algorand:///app/?box=#arc++...`

where ``, ``, ``, etc. are the ARC numbers of the compliance fragments as defined by .

If the compliance fragment is used, it **MUST** be the only fragment as defined by (i.e., `#arc3` **is valid**, `#arc3+89` **is not valid**).
> The **MainNet** `netauth` is empty, therefore:
>
> * the **Asset URL** is: `algorand://app/?box=#arc++...`
>
> * the **Asset Metadata URI** is: `algorand://app/?box=#arc++...`

> The **TestNet** deployment uses `testnet` as `netlabel` for the `netauth` selector, therefore:
>
> * the **Asset URL** is: `algorand://net:testnet/app/?box=#arc++...`
>
> * the **Asset Metadata URI** is: `algorand://net:testnet/app/?box=#arc++...`

Clients **MUST** encode the Asset Metadata Box Name with URL-safe `base64url` (with padding) in URIs, and with Standard `base64` when calling Algod API endpoints with the `/box?name=` query parameter.

> For further details on the `base64` Standard and URL-safe encodings, refer to the .

> The *Asset ID* (`uint64`) used as the Asset Metadata Box Name (`boxparam`) in the Asset Metadata URI is encoded as `base64url` for two reasons: (1) the Box Name is assumed to be the raw big-endian 8-byte encoding of a `uint64`, and (2) the Algod API requires the `/box?name=` query parameter to be Standard `base64` encoded, while the URI requires the URL-safe `base64url` encoding.
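The two encodings of the Box Name can be produced with the standard library; a non-normative sketch (helper names are illustrative):

```python
import base64
import struct

def box_param(asset_id: int) -> str:
    """URL-safe base64url (with padding) of the big-endian Asset ID,
    used as the `boxparam` in the Asset Metadata URI."""
    return base64.urlsafe_b64encode(struct.pack(">Q", asset_id)).decode()

def algod_box_name(asset_id: int) -> str:
    """Standard base64 of the big-endian Asset ID,
    used with the Algod `/box?name=` query parameter."""
    return base64.b64encode(struct.pack(">Q", asset_id)).decode()

# Example: ASA 12345 (both encodings coincide when no '+'/'/' occurs)
assert box_param(12345) == "AAAAAAAAMDk="
assert algod_box_name(12345) == "AAAAAAAAMDk="
# The two alphabets only differ in the '+/' vs '-_' characters
assert box_param(2**63 - 1) == algod_box_name(2**63 - 1).replace("/", "_")
```

The `2^63−1` row of the examples table below shows the case where the two encodings actually diverge.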
#### Examples > | Asset ID (`uint64`) | 8-byte big-endian (hex) | Algod `/box?name=` (Standard `base64`) | ARC-90 `box=` (URL-safe `base64url`) | > | ------------------: | :---------------------: | :------------------------------------: | :----------------------------------: | > | `0` | `0000000000000000` | `AAAAAAAAAAA=` | `AAAAAAAAAAA=` | > | `1` | `0000000000000001` | `AAAAAAAAAAE=` | `AAAAAAAAAAE=` | > | `2^32` | `0000000100000000` | `AAAAAQAAAAA=` | `AAAAAQAAAAA=` | > | `2^63−1` | `7fffffffffffffff` | `f/////////8=` | `f_________8=` | > > The *Asset Metadata URI* for the ASA `12345` would be: > > `algorand:///app/?box=AAAAAAAAMDk#arc89` > > * **MainNet**: `algorand://app/?box=AAAAAAAAMDk#arc89` > * **TestNet**: `algorand://net:testnet/app/?box=AAAAAAAAMDk#arc89` > > The *Asset Metadata URI* for the ASA `12345` would be: > > `algorand:///app/?box=AAAAAAAAMDk#arc3` > > * **MainNet**: `algorand://app/?box=AAAAAAAAMDk#arc3` > * **TestNet**: `algorand://net:testnet/app/?box=AAAAAAAAMDk#arc3` ### Deprecation and ASA migration The ASA Metadata Registry singleton application is *immutable*. Any eventual future version **MUST** be deployed as a new Application ID. The decision to migrate existing ASA Metadata to a new version **MUST** be made by the ASA Manager Address, by declaring the new Application ID in the Deprecated By field of the Metadata Header. If the Deprecated By field is not `0`: * The ASA Manager **SHOULD** leave the (body) **empty** (i.e., `metadata_size = 0`), * Clients **SHALL** point to the new Asset Metadata URI: `algorand:///app/?box=#arc++...` and complete it as specified in the . ## Rationale This ARC standardizes an on-chain, Algod/AVM-addressable metadata source for Algorand Standard Assets (ASAs). The design goals are: 1. Direct retrieval without Indexer or external storage for small but important metadata, 2. Predictable costs and limits via a single-box layout and strict pagination caps, 3. 
Interoperability with the existing ecosystem through conditional ARC hooks, 4. Forward compatibility with future ARC standards, and 5. A precise deprecation strategy for new ASA Metadata Registry versions.

### ASA Metadata Registry Application + ARC-90 URI discovery

By fixing a *singleton* application per Algorand network and using a partial URI in the Asset URL (`au`) field, any client can deterministically compute the query parameter pointing to the Asset Metadata (`/box?name=` as the big-endian *Asset ID*) and retrieve the metadata through (a) the Algod REST API (`GetApplicationBoxByName`) or (b) direct AVM calls to the ASA Metadata Registry. The standard supports two different entrypoints for Metadata discovery and retrieval: the *Asset ID* (available on the Algorand ledger) or the *Asset Metadata URI* (which could be distributed on the Web or by other external channels).

> Refer to the for details.

### Metadata Header/Body split

A compact header (Identifiers, Flags, Hash, Last Modified Round, Deprecated By) precedes the body (JSON). The Last Modified Round provides a monotonic version marker so readers can detect mid-stream changes. The Deprecated By field allows ASA Managers to migrate existing ASA Metadata to a new future version of the ASA Metadata Registry.

### Identifiers vs Flags

The ASA Metadata Registry sets Identifiers (short-metadata hint) while the ASA Manager Address governs Flags (ARC-3, ARC-20, ARC-62, ARC-89, immutability). One-way transitions (e.g., immutability) are enforced on-chain. This mirrors ASA trust roles and prevents metadata rewrites after lock.

### Pagination with hard bounds

Metadata pagination is provided for AVM clients (Algod clients can read the entire Metadata in a single request). A fixed `PAGE_SIZE` keeps each response within AVM limits. The registry guarantees `len(page) ≤ PAGE_SIZE` and supplies a `has_next` boolean.
AVM clients can read paginated Metadata either *atomically* (**RECOMMENDED**), using Group Transactions of Inner Transactions, or with *sequential* Application Calls. If the *sequential* read is used, the Last Modified Round supports streaming and parallel fetch with drift detection. A separate pagination head exposes the total metadata size, the page size, and the total pages for preallocation and progress UIs.

### Hash-lock for immutable Metadata

When the ASA is declared *immutable* at creation, the Asset Metadata Hash (`am`) field can commit to the on-chain bytes (domain-separated SHA-512/256 over Flags and Metadata). This binds the ledger state to a wallet-verifiable hash without requiring JSON normalization.

### Scope and limits

The registry intentionally caps data to a single box (\~32 KiB minus header). Large artifacts (images, media) remain off-chain; their URIs (e.g., `ipfs://...`, `https://...`) live in the JSON. This strikes a balance between availability and ledger hygiene, discouraging chain-as-a-drive patterns.

### Operability

Metadata deletion returns excess MBR; third-party cleanup of metadata for destroyed ASAs is permitted to prevent abandoned state. Network-specific singleton IDs are published by the ARC.

### AVM Operations

The registry turns ASA metadata into on-chain first-class citizens, using the full potential of AVM opcodes ( and ). ASA metadata on the registry can be read and written programmatically on-chain, making it part of the AVM runtime (e.g., an Application can decide to pay a different amount based on some ASA metadata property).

## Backwards Compatibility

Backwards compatibility for existing ASAs is possible, as long as the size of their metadata does not exceed `MAX_METADATA_SIZE`.

Existing ASAs **SHOULD NOT** be flagged as an .

The ASA Metadata Registry can be used by existing ASAs as a fallback option in addition to the existing URIs requiring external infrastructures (e.g., Indexer, IPFS, etc.).
Since the Asset URL (`au`) field is immutable, the Asset Metadata cannot be discovered through an ASA look-up. Existing ASAs willing to backport metadata to the ASA Metadata Registry **MUST** publish the as message, as follows:

* The `` **MUST** be equal to `89`;
* The **RECOMMENDED** `` are (`m`) or (`j`);
* The `` **MUST** specify a `uri` key value equal to the .

> **WARNING**: To preserve the existing ASA RBAC (e.g., Manager Address, Freeze Address, etc.) it is necessary to **include all the existing role addresses** in the `AssetConfig`. Not doing so would irreversibly disable the RBAC roles!

Clients discover the backport message by inspecting the ASA `AssetConfig` transaction history. Clients **SHOULD** optimistically check ASA metadata existence on the ASA Metadata Registry first, to avoid inspecting the transaction history.

### Backporting Message Example - JSON without a version

> The message to backport existing ASA `12345` metadata to the ASA Metadata Registry would be:
>
> ```text
> arc89:j{"uri": "algorand:///app/?box=AAAAAAAAMDk#arc3"}
> ```

## Reference Implementation

### Interface

```json
{ "name": "ASA Metadata Registry", "desc": "Singleton Application providing ASA metadata via Algod API and AVM", "methods": [ { "name": "arc89_create_metadata", "desc": "Create Asset Metadata for an existing ASA, restricted to the ASA Manager Address", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to create the Asset Metadata for" }, { "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" }, { "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags. WARNING: LSB and 1 can be set only at creation time. If the MSB is True the Asset Metadata is IMMUTABLE" }, { "type": "uint16", "name": "metadata_size", "desc": "The Metadata byte size to be created" }, { "type": "byte[]", "name": "payload", "desc": "The Metadata payload (without Header). 
WARNING: Payload larger than args capacity must be provided with arc89_extra_payload calls in the Group" }, { "type": "pay", "name": "mbr_delta_payment", "desc": "Payment of the MBR Delta amount (microALGO) for the Asset Metadata Box creation" } ], "events": [ { "name": "Arc89MetadataUpdated", "desc": "Event emitted when Asset Metadata is created or updated", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" }, { "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" }, { "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" }, { "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" }, { "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" }, { "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" }, { "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" } ] } ], "returns": { "type": "(uint8,uint64)", "desc": "MBR Delta: sign enum, and amount (microALGO)" } }, { "name": "arc89_replace_metadata", "desc": "Replace mutable Metadata with smaller or equal size payload for an existing ASA, restricted to the ASA Manager Address", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to replace the Asset Metadata for" }, { "type": "uint16", "name": "metadata_size", "desc": "The new Asset Metadata byte size" }, { "type": "byte[]", "name": "payload", "desc": "The Metadata payload (without Header). 
WARNING: Payload larger than args capacity must be provided with arc89_extra_payload calls in the Group" } ], "events": [ { "name": "Arc89MetadataUpdated", "desc": "Event emitted when Asset Metadata is created or updated", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" }, { "type": "uint64", "name": "round", "desc": "Round of the Metadata creation or update" }, { "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" }, { "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" }, { "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" }, { "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" }, { "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" } ] } ], "returns": { "type": "(uint8,uint64)", "desc": "MBR Delta: sign enum, and amount (microALGO)" } }, { "name": "arc89_replace_metadata_larger", "desc": "Replace mutable Metadata with larger size payload for an existing ASA, restricted to the ASA Manager Address", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to replace the Asset Metadata for" }, { "type": "uint16", "name": "metadata_size", "desc": "The new Metadata byte size" }, { "type": "byte[]", "name": "payload", "desc": "The Metadata payload (without Header). 
WARNING: Payload larger than args capacity must be provided with arc89_extra_payload calls in the Group" }, { "type": "pay", "name": "mbr_delta_payment", "desc": "Payment of the MBR Delta amount (microALGO) for the larger Asset Metadata Box replace" } ], "events": [ { "name": "Arc89MetadataUpdated", "desc": "Event emitted when Asset Metadata is created or updated", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" }, { "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" }, { "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" }, { "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" }, { "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" }, { "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" }, { "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" } ] } ], "returns": { "type": "(uint8,uint64)", "desc": "MBR Delta: sign enum, and amount (microALGO)" } }, { "name": "arc89_replace_metadata_slice", "desc": "Replace a slice of the Asset Metadata for an ASA with a payload of the same size, restricted to the ASA Manager Address", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to replace the Asset Metadata slice for" }, { "type": "uint16", "name": "offset", "desc": "The 0-based byte offset within the Metadata (body) bytes" }, { "type": "byte[]", "name": "payload", "desc": "The slice payload" } ], "events": [ { "name": "Arc89MetadataUpdated", "desc": "Event emitted when Asset Metadata is created or updated", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" }, { "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" }, { "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset 
Metadata creation or update" }, { "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" }, { "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" }, { "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" }, { "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" } ] } ], "returns": { "type": "void" } }, { "name": "arc89_migrate_metadata", "desc": "Migrate the Asset Metadata for an ASA to a new ASA Metadata Registry version, restricted to the ASA Manager Address", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to migrate the Asset Metadata for" }, { "type": "uint64", "name": "new_registry_id", "desc": "The Application ID of the new ASA Metadata Registry version" } ], "events": [ { "name": "Arc89MetadataMigrated", "desc": "Event emitted when Asset Metadata has been migrated to a new ASA Metadata Registry version", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" }, { "type": "uint64", "name": "new_registry_id", "desc": "The Application ID of the new ASA Metadata Registry version" }, { "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata migration" }, { "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata migration" } ] } ], "returns": { "type": "void" } }, { "name": "arc89_delete_metadata", "desc": "Delete Asset Metadata for an ASA, restricted to the ASA Manager Address (if the ASA still exists)", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to delete the Asset Metadata for" } ], "events": [ { "name": "Arc89MetadataDeleted", "desc": "Event emitted when Asset Metadata is deleted", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the deleted Asset Metadata" }, { "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata delete" }, { "type": "uint64", "name": "timestamp", 
"desc": "Timestamp of the Asset Metadata deletion" } ] } ], "returns": { "type": "(uint8,uint64)", "desc": "MBR Delta: sign enum, and amount (microALGO)" } }, { "name": "arc89_extra_payload", "desc": "Concatenate extra payload to Asset Metadata head call methods (creation or replacement)", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to provide Metadata extra payload for" }, { "type": "byte[]", "name": "payload", "desc": "The Metadata extra payload to concatenate" } ], "returns": { "type": "void" } }, { "name": "arc89_set_reversible_flag", "desc": "Set a reversible Asset Metadata Flag, restricted to the ASA Manager Address", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to set the Metadata Flag for" }, { "type": "uint8", "name": "flag", "desc": "The reversible flag index to set" }, { "type": "bool", "name": "value", "desc": "The flag value to set" } ], "events": [ { "name": "Arc89MetadataUpdated", "desc": "Event emitted when Asset Metadata is created or updated", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" }, { "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" }, { "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" }, { "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" }, { "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" }, { "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" }, { "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" } ] } ], "returns": { "type": "void" } }, { "name": "arc89_set_irreversible_flag", "desc": "Set an irreversible Asset Metadata Flag, restricted to the ASA Manager Address", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to set the Metadata Flag for" }, { "type": "uint8", "name": "flag", 
"desc": "The irreversible flag index to set. WARNING: must be in 2 ... 6" } ], "events": [ { "name": "Arc89MetadataUpdated", "desc": "Event emitted when Asset Metadata is created or updated", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" }, { "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" }, { "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" }, { "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" }, { "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" }, { "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" }, { "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" } ] } ], "returns": { "type": "void" } }, { "name": "arc89_set_immutable", "desc": "Set Asset Metadata as immutable, restricted to the ASA Manager Address", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to set immutable Asset Metadata for" } ], "events": [ { "name": "Arc89MetadataUpdated", "desc": "Event emitted when Asset Metadata is created or updated", "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID of the created or updated Asset Metadata" }, { "type": "uint64", "name": "round", "desc": "Round of the Asset Metadata creation or update" }, { "type": "uint64", "name": "timestamp", "desc": "Timestamp of the Asset Metadata creation or update" }, { "type": "byte", "name": "reversible_flags", "desc": "The Reversible Flags" }, { "type": "byte", "name": "irreversible_flags", "desc": "The Irreversible Flags" }, { "type": "bool", "name": "is_short", "desc": "True if the Asset Metadata is identified as short" }, { "type": "byte[32]", "name": "hash", "desc": "The Metadata Hash" } ] } ], "returns": { "type": "void" } }, { "name": "arc89_get_metadata_registry_parameters", "desc": "Return the ASA 
Metadata Registry parameters", "readonly": true, "args": [], "returns": { "type": "(uint8,uint16,uint16,uint16,uint16,uint16,uint16,uint16,uint64,uint64)", "desc": "Tuple of (ASSET_METADATA_BOX_KEY_SIZE, HEADER_SIZE, MAX_METADATA_SIZE, SHORT_METADATA_SIZE, PAGE_SIZE, FIRST_PAYLOAD_MAX_SIZE, EXTRA_PAYLOAD_MAX_SIZE, REPLACE_PAYLOAD_MAX_SIZE, FLAT_MBR, BYTE_MBR)" } }, { "name": "arc89_get_metadata_partial_uri", "desc": "Return the Asset Metadata ARC-90 partial URI, without compliance fragment (optional)", "readonly": true, "args": [], "returns": { "type": "string", "desc": "Asset Metadata ARC-90 partial URI, without compliance fragment" } }, { "name": "arc89_get_metadata_mbr_delta", "desc": "Return the Asset Metadata Box MBR Delta for an ASA, given a new Asset Metadata byte size. If the Asset Metadata Box does not exist, the creation MBR Delta is returned.", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to calculate the Asset Metadata MBR Delta for" }, { "type": "uint16", "name": "new_metadata_size", "desc": "The new Asset Metadata byte size" } ], "returns": { "type": "(uint8,uint64)", "desc": "MBR Delta: sign enum, and amount (microALGO)" } }, { "name": "arc89_check_metadata_exists", "desc": "Checks whether the specified ASA exists and whether its associated Asset Metadata is available", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to check the ASA and Asset Metadata existence for" } ], "returns": { "type": "(bool,bool)", "desc": "Tuple of (ASA exists, Asset Metadata exists)" } }, { "name": "arc89_is_metadata_immutable", "desc": "Return True if the Asset Metadata for an ASA is immutable, False otherwise", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to check the Asset Metadata immutability for" } ], "returns": { "type": "bool", "desc": "Asset Metadata for the ASA is immutable" } }, { "name": "arc89_is_metadata_short", "desc": "Return 
True if Asset Metadata for an ASA is short (up to 4096 bytes), False otherwise", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to check the Asset Metadata size classification for" } ], "returns": { "type": "(bool,uint64)", "desc": "Tuple of (Is Short Metadata, Metadata Last Modified Round)" } }, { "name": "arc89_get_metadata_header", "desc": "Return the Asset Metadata Header for an ASA", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Asset Metadata Header for" } ], "returns": { "type": "(byte,byte,byte,byte[32],uint64,uint64)", "desc": "Asset Metadata Header (Identifiers, Reversible Flags, Irreversible Flags, Hash, Last Modified Round, Deprecated By)" } }, { "name": "arc89_get_metadata_pagination", "desc": "Return the Asset Metadata pagination for an ASA", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Asset Metadata pagination for" } ], "returns": { "type": "(uint16,uint16,uint8)", "desc": "Tuple of (total metadata byte size, PAGE_SIZE, total number of pages)" } }, { "name": "arc89_get_metadata", "desc": "Return paginated Asset Metadata (without Header) for an ASA", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Asset Metadata for" }, { "type": "uint8", "name": "page", "desc": "The 0-based Metadata page number" } ], "returns": { "type": "(bool,uint64,byte[])", "desc": "Tuple of (has next page, Metadata Last Modified Round, page content)" } }, { "name": "arc89_get_metadata_slice", "desc": "Return a slice of the Asset Metadata for an ASA", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Asset Metadata slice for" }, { "type": "uint16", "name": "offset", "desc": "The 0-based byte offset within the Metadata (body) bytes" }, { "type": "uint16", "name": "size", "desc": "The slice bytes size to return" } ], "returns": 
{ "type": "byte[]", "desc": "Asset Metadata slice (size limited to PAGE_SIZE)" } }, { "name": "arc89_get_metadata_header_hash", "desc": "Return the Metadata Header Hash for an ASA", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Metadata Header Hash for" } ], "returns": { "type": "byte[32]", "desc": "Asset Metadata Header Hash" } }, { "name": "arc89_get_metadata_page_hash", "desc": "Return the SHA512-256 of a Metadata page for an ASA", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Asset Metadata page hash for" }, { "type": "uint8", "name": "page", "desc": "The 0-based Metadata page number" } ], "returns": { "type": "byte[32]", "desc": "The SHA512-256 of the Metadata page" } }, { "name": "arc89_get_metadata_hash", "desc": "Return the Metadata Hash for an ASA", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the Metadata Hash for" } ], "returns": { "type": "byte[32]", "desc": "Asset Metadata Hash" } }, { "name": "arc89_get_metadata_string_by_key", "desc": "Return the UTF‑8 string value for a top‑level JSON key of type JSON String from short Metadata for an ASA; errors if the key does not exist or is not a JSON String", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the key value for" }, { "type": "string", "name": "key", "desc": "The top‑level JSON key whose string value to fetch" } ], "returns": { "type": "string", "desc": "The string value from valid UTF‑8 JSON Metadata (size limited to PAGE_SIZE)" } }, { "name": "arc89_get_metadata_uint64_by_key", "desc": "Return the uint64 value for a top‑level JSON key of type JSON Uint64 from short Metadata for an ASA; errors if the key does not exist or is not a JSON Uint64", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the key value for" }, { "type": "string", "name": 
"key", "desc": "The top‑level JSON key whose uint64 value to fetch" } ], "returns": { "type": "uint64", "desc": "The uint64 value from valid UTF‑8 JSON Metadata" } }, { "name": "arc89_get_metadata_object_by_key", "desc": "Return the UTF-8 object value for a top‑level JSON key of type JSON Object from short Metadata for an ASA; errors if the key does not exist or is not a JSON Object", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the key value for" }, { "type": "string", "name": "key", "desc": "The top‑level JSON key whose object value to fetch" } ], "returns": { "type": "string", "desc": "The object value from valid UTF‑8 JSON Metadata (size limited to PAGE_SIZE)" } }, { "name": "arc89_get_metadata_b64_bytes_by_key", "desc": "Return the base64-decoded bytes for a top-level JSON key of type JSON String from short Metadata for an ASA; errors if the key does not exist, is not a JSON String, or is not valid base64 for the chosen encoding", "readonly": true, "args": [ { "type": "uint64", "name": "asset_id", "desc": "The Asset ID to get the key value for" }, { "type": "string", "name": "key", "desc": "The top-level JSON key whose base64 string value to fetch and decode" }, { "type": "uint8", "name": "b64_encoding", "desc": "base64 encoding enum: 0 = URLEncoding, 1 = StdEncoding" } ], "returns": { "type": "byte[]", "desc": "The base64-decoded bytes from valid UTF‑8 JSON Metadata (size limited to PAGE_SIZE)" } } ] } ``` The ASA Metadata Registry **MUST** validate method arguments size according to their types. > Refer to the for the detailed Application Specification of the singleton reference implementation. ##### Create Metadata To create the Asset Metadata: * The ASA **MUST** *exist*, and * The authorization **MUST** be restricted to the ASA Manager Address, and * The Asset Metadata Box **MUST NOT** *exist*, and If the provided `metadata_size > MAX_METADATA_SIZE` the creation **MUST** be rejected. 
If the provided `metadata_size ≤ SHORT_METADATA_SIZE`, the *Is Short* identifier **MUST** be set to `True`.

The Metadata Body **MUST** be initialized with the provided `payload` value (empty is allowed). If the creation is part of a Group, the extra payloads provided by *later* transactions for the same `asset_id` in the same Group **MUST** be concatenated in order. The creation **MUST** be rejected as soon as the cumulative staged size for the same `asset_id` in the same Group exceeds `metadata_size`. The cumulative staged payload **MUST** be equal to the provided `metadata_size` (no truncation), otherwise the creation **MUST** be rejected.

The Reversible Flags **MUST** be initialized with the provided `reversible_flags` value (`byte`). The Irreversible Flags **MUST** be initialized with the provided `irreversible_flags` value (`byte`). The Metadata Hash **MUST** be initialized according to its computation rules. The Last Modified Round **MUST** be initialized to the current round. The Deprecated By field **MUST** be initialized to `0`.

If the ASA is declared as compliant, the *Asset Name* (`an`) or the *Asset URL* (`au`) **MUST** comply with the . If the ASA is declared as , the *Asset URL* (`au`) **MUST** comply with the specified (no `#arc` fragment validation enforced). If the ASA is declared as , the ASA **MUST NOT** have a Clawback Address.

The *amount* of the created Asset Metadata Box **MUST** be provided contextually to the ASA Metadata Registry Address. An `Arc89MetadataUpdated` event **MUST** be emitted.

> ⚠️ WARNING: If the MSB of the Irreversible Flags is `True` the Asset Metadata is *immutable*, for further details refer to the .

##### Replace Metadata

To replace the Asset Metadata for an ASA with smaller or equal size Metadata:

* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*.

If the provided `metadata_size > MAX_METADATA_SIZE` the update **MUST** be rejected. If the provided `metadata_size > existing_metadata_size` the update **MUST** be rejected.
If the provided `metadata_size ≤ SHORT_METADATA_SIZE`, the *Is Short* identifier **MUST** be set to `True`.

The Metadata Body **MUST** be replaced with the provided `payload` value (empty is allowed). If the replacement is part of a Group, the extra payloads provided by *later* transactions for the same `asset_id` in the same Group **MUST** be concatenated in order. The replacement **MUST** be rejected as soon as the cumulative staged payload for the same `asset_id` in the same Group exceeds `metadata_size`. The cumulative staged payload **MUST** be equal to the provided `metadata_size` (no truncation), otherwise the replacement **MUST** be rejected.

The Metadata Hash **MUST** be updated according to its computation rules. The Last Modified Round **MUST** be updated to the current round.

The *amount* of the updated Asset Metadata Box **MUST** be managed contextually:

* If *sign* is `NULL`, no MBR management is required;
* If *sign* is `NEG`, the excess MBR amount **MUST** be returned from the ASA Metadata Registry Address to the ASA Manager Address.

An `Arc89MetadataUpdated` event **MUST** be emitted.

> MBR is returned with an Inner Transaction whose fee is externally provided.

##### Replace Metadata Larger

To replace the Asset Metadata for an ASA with larger size Metadata:

* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*.

If the provided `metadata_size > MAX_METADATA_SIZE` the update **MUST** be rejected. If the provided `metadata_size ≤ existing_metadata_size` the update **MUST** be rejected. If the provided `metadata_size ≤ SHORT_METADATA_SIZE`, the *Is Short* identifier **MUST** be set to `True`.

The Metadata Body **MUST** be replaced with the provided `payload` value (empty is allowed). If the replacement is part of a Group, the extra payloads provided by *later* transactions for the same `asset_id` in the same Group **MUST** be concatenated in order.
The replacement **MUST** be rejected as soon as the cumulative staged payload for the same `asset_id` in the same Group exceeds `metadata_size`. The cumulative staged payload **MUST** be equal to the provided `metadata_size` (no truncation), otherwise the replacement **MUST** be rejected.

The Metadata Hash **MUST** be updated according to its computation rules. The Last Modified Round **MUST** be updated to the current round.

The *amount* of the updated Asset Metadata Box **MUST** be provided contextually to the ASA Metadata Registry Address. An `Arc89MetadataUpdated` event **MUST** be emitted.

##### Replace Metadata Slice

To replace the Metadata slice for an ASA:

* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*, and
* The byte range specified by `offset` (`uint16`) and `payload` length **MUST NOT** exceed the `metadata_size`.

The Metadata slice **MUST** be replaced with the provided `payload` value. The Metadata slice replacement **MUST** preserve the `metadata_size`. The Metadata Hash **MUST** be updated according to its computation rules. The Last Modified Round **MUST** be updated to the current round. An `Arc89MetadataUpdated` event **MUST** be emitted.

> A group transaction can be used to replace a large Metadata slice atomically.

##### Migrate Metadata

To migrate the Asset Metadata for an ASA to a new ASA Metadata Registry version:

* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*, and
* The `new_registry_id` (`uint64`) **MUST** be different from the current ASA Metadata Registry Application ID (`uint64`).

The Deprecated By field **MUST** be set to the `new_registry_id` (`uint64`) value. An `Arc89MetadataMigrated` event **MUST** be emitted.

> The migration can be performed more than once and reverted.
> ⚠️ The Deprecated By field is not included in the Metadata Hash computation, and does not affect the Last Modified Round.

##### Delete Metadata

To delete the Asset Metadata for an ASA:

* The Asset Metadata Box **MUST** *exist*, and
* If the ASA still *exists*:
  * The Asset Metadata **MUST NOT** be *immutable*, and
  * The authorization **MUST** be restricted to the ASA Manager Address.

> ⚠️ WARNING: Not even the ASA Manager Address can delete the *immutable* Asset Metadata of an *existing* ASA, while anyone can delete Asset Metadata if the ASA has been *destroyed*, regardless of it being *immutable* or not.

The Asset Metadata Box **MUST** be deleted. The *amount* of the deleted Asset Metadata Box **MUST** be managed contextually:

* If the ASA *exists*, it **MUST** be returned to the ASA Manager Address, otherwise
* It **MUST** be returned to the caller.

An `Arc89MetadataDeleted` event **MUST** be emitted.

> MBR is returned with an Inner Transaction whose fee is externally provided.

> ⚠️ The ASA Metadata Registry is not aware of ASA destruction events, therefore it cannot guarantee a grace period in favor of the ASA Manager Address. The ASA Manager Address **SHOULD** group the ASA destruction and Asset Metadata deletion transactions in the same Group to avoid any race condition.

##### Extra Payload

To provide an extra payload to append to an Asset Metadata creation or replacement for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address.

The extra payload calls **MUST** appear *after* the corresponding header call (create or replace) for that same `asset_id` in the same Group (top-level or inner). Concatenation order is transaction-index order. All extra payload calls for a given `asset_id` **MUST** be top-level if the header call is top-level, or inner if the header is inner.
> The Asset Metadata Box already exists since the extra payload call is always preceded by a header call (create or replace).

> The header call (create or replace) checks that the extra payload call is keyed to the same Asset ID to manage interleaving and idempotence within the *same* Group. Interleaving across different Group levels (top-level / inner) is **not supported**.
>
> **Example:** Creating and updating different Assets Metadata in the same Group
>
> ```plain
> [Tx1: Create Payload A, Extra Payload A1, Update Payload B, Extra Payload A2, Extra Payload B1]
> ```
>
> Would result in the following Asset Metadata Boxes:
>
> * Asset ID A: `[Header A, Create Payload A || Extra Payload A1 || Extra Payload A2]`
> * Asset ID B: `[Header B, Update Payload B || Extra Payload B1]`

##### Set Reversible Flag

To set a *reversible* Asset Metadata Flag for an ASA:

* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*, and
* The *reversible* `flag` (`uint8`) **MUST** be in `0 ... 7`.

The reversible `flag` **MUST** be set to the provided `value` (`bool`). The Metadata Hash **MUST** be updated according to its computation rules. The Last Modified Round **MUST** be updated to the current round. An `Arc89MetadataUpdated` event **MUST** be emitted if not idempotent.

##### Set Irreversible Flag

To set an *irreversible* Asset Metadata Flag for an ASA:

* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*, and
* The *irreversible* `flag` (`uint8`) **MUST** be in `2 ... 6`.

The irreversible `flag` **MUST** be set to `True` (idempotent). The Metadata Hash **MUST** be updated according to its computation rules. The Last Modified Round **MUST** be updated to the current round. If the ASA is declared as , the ASA **MUST NOT** have a Clawback Address.
An `Arc89MetadataUpdated` event **MUST** be emitted if not idempotent.

> ⚠️ WARNING: flags 0, 1 are set only at creation time, for further details refer to the .

##### Set Immutable

To set the Asset Metadata as *immutable*:

* The ASA **MUST** still *exist*, and
* The authorization **MUST** be restricted to the ASA Manager Address, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST NOT** be *immutable*.

The Asset Metadata *immutability* flag in the Irreversible Flags **MUST** be set to `True`. The Metadata Hash **MUST** be updated according to its computation rules. The Last Modified Round **MUST** be updated to the current round. An `Arc89MetadataUpdated` event **MUST** be emitted.

> ⚠️ WARNING: Asset Metadata immutability cannot be revoked once set.

##### Get Metadata Registry Parameters

The method **MUST** return the ASA Metadata Registry parameters as a tuple:

* The first value is the `ASSET_METADATA_BOX_KEY_SIZE` (`uint8`),
* The second value is the `HEADER_SIZE` (`uint16`),
* The third value is the `MAX_METADATA_SIZE` (`uint16`),
* The fourth value is the `SHORT_METADATA_SIZE` (`uint16`),
* The fifth value is the `PAGE_SIZE` (`uint16`),
* The sixth value is the `FIRST_PAYLOAD_MAX_SIZE` (`uint16`),
* The seventh value is the `EXTRA_PAYLOAD_MAX_SIZE` (`uint16`),
* The eighth value is the `REPLACE_PAYLOAD_MAX_SIZE` (`uint16`),
* The ninth value is the `FLAT_MBR` (`uint64`),
* The tenth value is the `BYTE_MBR` (`uint64`).

Clients **SHOULD** use these parameter values and avoid locally computed constants.

##### Get Metadata Partial URI

The method **MUST** return the Asset Metadata Partial URI (`string`) without the optional `#arc` compliance fragment: `algorand:///app/?box=`

Clients **SHOULD** use this value and avoid locally computed constants.

##### Get Metadata MBR Delta

To get the Asset Metadata MBR Delta for an ASA:

The `new_metadata_size` (`uint16`) **MUST** be less than or equal to `MAX_METADATA_SIZE`.
* If the Asset Metadata Box *exists*, `flat_mbr = 0` and then:
  * If `new_metadata_size == metadata_size`, then:
    * The returned *sign* **MUST** be `NULL`, and
    * `delta_size = 0`.
  * If `new_metadata_size > metadata_size`, then:
    * The returned *sign* **MUST** be `POS`, and
    * `delta_size = new_metadata_size - metadata_size`.
  * If `new_metadata_size < metadata_size`, then:
    * The returned *sign* **MUST** be `NEG`, and
    * `delta_size = metadata_size - new_metadata_size`.
* If the Asset Metadata Box *does not exist*, `flat_mbr = FLAT_MBR` and then:
  * The returned *sign* **MUST** be `POS`, and
  * `delta_size = ASSET_METADATA_BOX_KEY_SIZE + HEADER_SIZE + new_metadata_size`.

The returned *amount* **MUST** be `flat_mbr + BYTE_MBR * delta_size`.

> The *static* MBR Delta calculation provided to the clients is based on:
>
> * `FLAT_MBR` (`uint64`), a parameter of the ASA Metadata Registry (microALGO) equal to the AVM MBR for Box creation;
> * `BYTE_MBR` (`uint64`), a parameter of the ASA Metadata Registry (microALGO) equal to the AVM MBR per byte used by the Box.

> The *dynamic* (**RECOMMENDED**) MBR Delta calculation is provided to the clients by simulating the create, update, or delete methods.

##### Check Metadata Exists

The method **MUST** return a pair of booleans (`(bool,bool)`):

* The first value is `True` if the ASA *still exists*, `False` otherwise;
* The second value is `True` if the Asset Metadata for the ASA *exists*, `False` otherwise.

##### Is Metadata Immutable

To check if the Asset Metadata is *immutable*:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.

The method **MUST** return `True` if the Asset Metadata for an ASA is *immutable* or the ASA Manager Address is set to the Zero Address, `False` otherwise.

##### Is Metadata Short

To check if the Asset Metadata is *short*:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.

The method **MUST** return the value of the *Is Short* identifier and the Last Modified Round (`uint64`).
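The *static* MBR Delta rules under *Get Metadata MBR Delta* above can be sketched in Python. The sign-enum encoding and the sample parameter values in the assertions are illustrative assumptions, not normative Registry constants:

```python
# Sketch of the ARC-89 static MBR Delta rules. The sign-enum integer values
# below are an illustrative assumption; real clients should read the Registry
# parameters via arc89_get_metadata_registry_parameters.
NULL, NEG, POS = 0, 1, 2  # assumed encoding of the sign enum

def metadata_mbr_delta(box_exists: bool, metadata_size: int, new_metadata_size: int,
                       key_size: int, header_size: int,
                       flat_mbr: int, byte_mbr: int) -> tuple[int, int]:
    """Return (sign, amount in microALGO) per the spec's static calculation."""
    if box_exists:
        flat = 0
        if new_metadata_size == metadata_size:
            sign, delta_size = NULL, 0
        elif new_metadata_size > metadata_size:
            sign, delta_size = POS, new_metadata_size - metadata_size
        else:
            sign, delta_size = NEG, metadata_size - new_metadata_size
    else:
        # Creation: flat MBR plus key, header, and body bytes.
        flat = flat_mbr
        sign = POS
        delta_size = key_size + header_size + new_metadata_size
    return sign, flat + byte_mbr * delta_size
```

The dynamic (simulation-based) calculation the spec recommends supersedes this sketch whenever the two disagree.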
##### Get Metadata Header

To get the Asset Metadata Header for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.

The Metadata Header **MUST** be returned as a tuple `(byte,byte,byte,byte[32],uint64,uint64)`, where:

* The first value (`byte`) is the Identifiers,
* The second value (`byte`) is the Reversible Flags,
* The third value (`byte`) is the Irreversible Flags,
* The fourth value (`byte[32]`) is the Metadata Hash,
* The fifth value (`uint64`) is the Last Modified Round,
* The sixth value (`uint64`) is the Deprecated By field.

##### Get Metadata Pagination

To get the Asset Metadata pagination for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.

The pagination **MUST** be returned as a tuple `(uint16,uint16,uint8)`, where:

* The first value (`uint16`) is the Metadata *total length* (`metadata_size`, in bytes),
* The second value (`uint16`) is the `PAGE_SIZE` (in bytes, as defined in the ASA Metadata Registry parameters),
* The third value (`uint8`) is the total number of Metadata pages (`total_pages`).

##### Get Metadata

To get the Asset Metadata for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.

Let `total_pages` be as returned by `arc89_get_metadata_pagination`.

* If `total_pages > 0`, the provided 0-indexed `page` (`uint8`) **MUST** satisfy `page < total_pages`.
* If `total_pages == 0`, the provided `page` **MUST** be `0`.

The paginated Asset Metadata **MUST** be returned as a tuple `(bool,uint64,byte[])`, where:

* The first value (`bool`) is a flag indicating if the Metadata *has next page*,
* The second value (`uint64`) is the Last Modified Round of the Metadata,
* The third value (`byte[]`) is the content of the Metadata page, with length equal to `content_size` bytes.

If `total_pages == 0` (i.e., `metadata_size == 0`), the implementation **MUST** return an empty `byte[]`. The empty value does **NOT** imply the existence of a Metadata Page Hash (see *Get Metadata Page Hash*).

The *has next page* flag **MUST** be `True` if, at the time of serving the request, `(page + 1) * PAGE_SIZE < metadata_size`, and `False` otherwise.
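The total-pages and *has next page* arithmetic above can be sketched as follows (a minimal sketch; `PAGE_SIZE` is a Registry parameter, and the values in the assertions are illustrative only):

```python
# Pagination arithmetic implied by "Get Metadata Pagination" and "Get Metadata".
def total_pages(metadata_size: int, page_size: int) -> int:
    # Ceiling division; zero-size Metadata has zero pages.
    return (metadata_size + page_size - 1) // page_size

def has_next_page(page: int, metadata_size: int, page_size: int) -> bool:
    # Per the spec: True iff (page + 1) * PAGE_SIZE < metadata_size.
    return (page + 1) * page_size < metadata_size
```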
The *content* byte size **MUST NOT** exceed the `PAGE_SIZE`. The implementation **MUST** ensure that `content_size ≤ PAGE_SIZE` for every response. For pages `p` where `(p+1)*PAGE_SIZE ≤ metadata_size` at serve time, the implementation **SHOULD** return `content_size = PAGE_SIZE`. The final page **MUST** return `content_size = metadata_size − PAGE_SIZE*(total_pages−1)`.

> This invariant guarantees the read operation remains within protocol return-size limits, enables deterministic computation of total pages and *has next page*, and allows client implementations to safely preallocate buffers and parallelize fetches without risk of oversized responses.

It is **RECOMMENDED** to group the `total_pages` reading and the page reads in a single *atomic read* using a Group Transaction or Inner Transactions. If the reading is *not atomic*, clients **MUST** verify that the Last Modified Round remains constant across pages; if it changes, clients **SHOULD** call the `arc89_get_metadata_pagination` method again and restart reading from page `0`. Clients **MAY** simulate the *sequential* calls to guarantee atomicity under their own round expectation.

> For further details refer to the .

##### Get Metadata Slice

To get a Metadata Slice for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The condition `size ≤ PAGE_SIZE` **MUST** hold, and
* The byte range specified with `offset` (`uint16`) and `size` (`uint16`) **MUST NOT** exceed the `metadata_size`.

The slice extracted from the Metadata **MUST** be returned.

##### Get Metadata Header Hash

To get the Metadata Header Hash (`hh`) for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.

The Metadata Header Hash (`hh`) **MUST** be returned according to its computation rules.

##### Get Metadata Page Hash

To get the Metadata Page Hash for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.

Let `total_pages` be as returned by `arc89_get_metadata_pagination`.
* If `total_pages > 0`, the provided 0-indexed `page` (`uint8`) **MUST** satisfy `page < total_pages`.
* If `total_pages == 0`, the method **MUST** fail.

The Metadata Page Hash (`ph[page]`) **MUST** be returned according to its computation rules.

##### Get Metadata Hash

To get the Metadata Hash for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*.

The Metadata Hash (`am`) **MUST** be returned according to its computation rules.

##### Get Metadata String By Key

To get a Metadata JSON String value by top-level key for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST** be *short*, and
* The key’s value length **MUST NOT** exceed `PAGE_SIZE`.

The top-level key’s value (JSON String) extracted from the JSON Metadata object **MUST** be returned (as `string`).

> ⚠️ WARNING: This getter does not provide pagination or truncation of the returned value.

> ⚠️ WARNING: The following conditions cause a *runtime error*:
>
> * The Metadata (body) is not a valid UTF-8 encoded JSON object,
> * The top-level key does not exist,
> * The top-level key’s value is not a JSON String.

##### Get Metadata Uint64 By Key

To get a Metadata uint64 value by top-level JSON key for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST** be *short*.

The top-level key’s value (JSON Uint64) extracted from the JSON Metadata object **MUST** be returned (as `uint64`).

> ⚠️ WARNING: The following conditions cause a *runtime error*:
>
> * The Metadata (body) is not a valid UTF-8 encoded JSON object,
> * The top-level key does not exist,
> * The top-level key’s value is not a JSON Uint64.
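The non-atomic *sequential* read flow described under *Get Metadata* above, including the Last Modified Round drift check, can be sketched as a client loop. The `registry` object and its method wrappers are hypothetical stand-ins for an actual Application-call (or simulation) client:

```python
# Sketch of a non-atomic sequential paginated read with the Last Modified
# Round drift check. `registry.arc89_get_metadata_pagination(asset_id)` is
# assumed to return (metadata_size, page_size, total_pages) and
# `registry.arc89_get_metadata(asset_id, page)` to return
# (has_next, last_modified_round, page_content) — hypothetical wrappers.
def read_metadata(registry, asset_id: int) -> bytes:
    size, page_size, pages = registry.arc89_get_metadata_pagination(asset_id)
    body = b""
    last_round = None
    page = 0
    while True:
        has_next, modified_round, content = registry.arc89_get_metadata(asset_id, page)
        if last_round is not None and modified_round != last_round:
            # Metadata changed mid-read: restart from page 0 per the spec guidance.
            return read_metadata(registry, asset_id)
        last_round = modified_round
        body += content
        if not has_next:
            return body
        page += 1
```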
##### Get Metadata Object By Key

To get a Metadata object value by top-level JSON key for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST** be *short*, and
* The key’s value length **MUST NOT** exceed `PAGE_SIZE`.

The top-level key’s value (JSON Object) extracted from the JSON Metadata object **MUST** be returned (as `string`).

> ⚠️ WARNING: This getter does not provide pagination or truncation of the returned value.

> ⚠️ WARNING: The following conditions cause a *runtime error*:
>
> * The Metadata (body) is not a valid UTF-8 encoded JSON object,
> * The top-level key does not exist,
> * The top-level key’s value is not a JSON Object.

##### Get Metadata b64 Bytes By Key

To get a Metadata base64-decoded value by top-level JSON key for an ASA:

* The ASA **MUST** still *exist*, and
* The Asset Metadata Box **MUST** *exist*, and
* The Asset Metadata **MUST** be *short*, and
* The `b64_encoding` enum (`uint8`) **MUST** be either `0` (`URLEncoding`) or `1` (`StdEncoding`), and
* The key’s base64-decoded value length **MUST NOT** exceed `PAGE_SIZE`.

The top-level key’s value (JSON String) extracted from the JSON Metadata object **MUST** be base64-decoded using the selected `b64_encoding` and returned (as `byte[]`).

> ⚠️ WARNING: This getter does not provide pagination or truncation of the returned value.

> ⚠️ WARNING: The following conditions cause a *runtime error*:
>
> * The Metadata (body) is not a valid UTF-8 encoded JSON object,
> * The top-level key does not exist,
> * The top-level key’s value is not a JSON String,
> * The top-level key’s value is not a valid base64-encoded string for the chosen encoding.

> For further details on the base64 encodings refer to the AVM `base64_decode` opcode.
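ARC-28 event selectors, like those tabulated in the Events section below, are derived the same way as ABI method selectors: the first 4 bytes of the SHA-512/256 hash of the event signature. A minimal sketch, assuming a Python build whose OpenSSL exposes the `sha512_256` algorithm:

```python
import hashlib

# Compute an ARC-28 event selector: first 4 bytes of SHA-512/256 of the
# event signature string, hex-encoded (same derivation as ABI method selectors).
def event_selector(signature: str) -> str:
    digest = hashlib.new("sha512_256", signature.encode("utf-8")).digest()
    return digest[:4].hex()
```

If the derivation assumption holds, `event_selector("Arc89MetadataDeleted(uint64,uint64,uint64)")` reproduces the selector listed for that event in the Events table.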
### Events

| ARC-28 EVENT SIGNATURE | 4-BYTE SELECTOR (HEX) |
| -------------------------------------------------------------------- | --------------------- |
| `Arc89MetadataUpdated(uint64,uint64,uint64,byte,byte,bool,byte[32])` | `8b035084` |
| `Arc89MetadataMigrated(uint64,uint64,uint64,uint64)` | `c87023bf` |
| `Arc89MetadataDeleted(uint64,uint64,uint64)` | `bc3f20d1` |

### AppSpec The ASA Metadata Registry AppSpec is published in the reference implementation . ### Usage The ASA Metadata Registry has two modes of operation: * **Algod API**: the *entire* Asset Metadata is retrieved via a single request to the Algod REST API endpoints (or via SDK wrappers); * **AVM**: the *paginated* Asset Metadata is retrieved via *grouped* (**RECOMMENDED**) or *sequential* Application Calls (real or simulated) to the ASA Metadata Registry. #### Usage Mode 1: Algod API The Algod clients retrieve the Asset Metadata from two entrypoints: 1. The *Asset ID*; 2. The *Asset Metadata URI*. > A minimal . is provided with the reference implementation. ##### Example 1: Get Metadata from the Asset ID Given the *Asset ID* `12345`, the client: 1. Calls the Algod API endpoint to get the *Asset URL* field (`url`) from the response and drops the `#arc3` suffix (if present), obtaining: `algorand:///app/?box=`; 2. Encodes the *Asset ID* as `base64url` to get the Asset Metadata Box Name (``); 3. Calls the Algod API endpoint to get the content of the *Asset Metadata Box* from the response: ```shell curl -X GET http://localhost/v2/applications//box?name= \ -H 'Accept: application/json' \ -H 'X-Algo-API-Token: API_KEY' ``` The `value` field of the response contains the Asset Metadata Box content as a concatenation of the following fields: * Metadata Header (`byte[HEADER_SIZE]`); * Metadata Body (`byte[]`): JSON Metadata. > Clients **MUST** strip the Metadata Header (`byte[HEADER_SIZE]`) from the Asset Metadata Box value before parsing the JSON Metadata. 
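Steps 2 and 3 above can be sketched in Python. The 8-byte big-endian box-name layout and the concrete `header_size` value are assumptions for illustration; take both from the registry's actual definitions:

```python
import base64
import json


def box_name_for_asset(asset_id: int) -> str:
    """Step 2: encode the Asset ID as the base64url Asset Metadata Box Name.

    The 8-byte big-endian layout is an assumption about the registry's
    box-name encoding; the result is unpadded for use in a URL query value.
    """
    raw = asset_id.to_bytes(8, "big")
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")


def parse_box_value(box_value: bytes, header_size: int) -> dict:
    """Step 3 (post-fetch): strip the Metadata Header, then parse the JSON body."""
    body = box_value[header_size:]  # drop byte[HEADER_SIZE] per the rule above
    return json.loads(body.decode("utf-8"))
```

`header_size` corresponds to the spec's `HEADER_SIZE` constant; the stripped body is the JSON Metadata the client parses.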
##### Example 2: Get Metadata from Asset Metadata URI Given the *Asset Metadata URI* `algorand:///app/?box=#arc3`, the client: 1. Calls the Algod API endpoint to get the content of the *Asset Metadata Box* from the response: ```shell curl -X GET http://localhost/v2/applications//box?name= \ -H 'Accept: application/json' \ -H 'X-Algo-API-Token: API_KEY' ``` The `value` field of the response contains the Asset Metadata Box content as concatenation of the following fields: * Metadata Header (`byte[HEADER_SIZE]`); * Metadata Body (`byte[]`): JSON Metadata. > Clients **MUST** strip the Metadata Header (`byte[HEADER_SIZE]`) from the Asset Metadata Box value before parsing the JSON Metadata. #### Usage Mode 2: AVM The AVM clients issue Application calls (real or simulated) to the ASA Metadata Registry in two ways: 1. (**RECOMMENDED**) Atomically, via grouped (Top-level or Inner) Application Calls; 2. Sequentially, via standalone Application Calls; ##### Example 1: Atomic read with Top-level Group Given the *Asset ID* `12345`, the client: 1. Call `arc89_get_metadata_pagination` with `asset_id=12345`, ASA Metadata Registry returns: * The total Metadata byte size (`uint16`); * The `PAGE_SIZE` (`uint16`), as defined in the ; * The total Metadata pages `N` (`uint8`). 2. Check that `N ≤ MAX_TXN_PER_GROUP`. 3. Group call `N * arc89_get_metadata` with `asset_id=12345` and `page=0...N-1` (0-based). **PROS:** * Best UX, no delay, atomic fetch guarantees integrity and no data drift. **CONS:** * The fetchable `metadata_size` is capped by `MAX_TXN_PER_GROUP` capacity for a Top-level Group, (while Inner Groups can fetch up to `MAX_METADATA_SIZE`). ##### Example 2: Sequential read, while “has next” page Given the *Asset ID* `12345`, the client: 1. 
Call `arc89_get_metadata` with `asset_id=12345` and `page=0` (0-based), ASA Metadata Registry returns: * A *has next* (`bool`) flag indicating if more pages exist; * The monotonic counter; * Exactly `PAGE_SIZE` bytes of Metadata (or fewer on the last page), as defined in the . 2. While *has next* page, call `arc89_get_metadata` with `asset_id=12345` and incremented `page`, verifying Last Modified Round is unchanged. **PROS:** * No arithmetic on the caller; just loop while *has next*. **CONS:** * Caller doesn’t know the total Metadata length or pages upfront. * Callers that want progress bars have to either read page `0` first or call the separate `arc89_get_metadata_pagination` method. **BEST FOR:** * Wallets and explorers that stream progressively and don’t care about total Metadata length until finished. ##### Example 3: Sequential read, two-call pattern Given the *Asset ID* `12345`, the client: 1. Call `arc89_get_metadata_pagination` with `asset_id=12345`, ASA Metadata Registry returns: * The total Metadata byte size (`uint16`); * The `PAGE_SIZE` (`uint16`), as defined in the ; * The total Metadata pages `N` (`uint8`). 2. Loop calls `arc89_get_metadata` with `asset_id=12345` and `page=0...N-1` (0-based), verifying is unchanged. **PROS:** * Changes of `PAGE_SIZE` in the future won’t break readers; * Improves UX (progress, preallocation). **CONS:** * Requires two round trips in the common case. **BEST FOR:** * Latency-tolerant clients and SDKs that value clarity and future-proofing. ## Security Considerations The authorization to create the Asset Metadata and update and delete *mutable* Asset Metadata is granted to the ASA Manager Address to preserve the ASA trust model. The authorization is not granted to the ASA Creator Address, since this role could be performed programmatically by Applications and is not supposed to be the long-lasting maintainer of the ASA. ## Copyright Copyright and related rights waived via .
# URI scheme
> Consolidated specification for encoding Algorand transactions and queries as URIs.
## Abstract This ARC defines a unified Algorand URI scheme that covers payment transactions, key registration, application NoOp calls, and read-only blockchain queries. It expands on earlier URI specifications to support deeplinks, QR codes, and other contexts where structured URIs communicate transaction intent or state queries. ## Motivation This ARC consolidates and supersedes , , , and . Unifying their technical details avoids divergence across implementations, ensures extensions share consistent encoding rules, and provides a single reference for wallet, application, and tooling authors. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . Algorand URIs follow the general format for URIs as set forth in . The path component consists of an Algorand address, and the query component provides additional parameters specific to the encoded intent. ### ABNF Overview The productions below consolidate the syntax for all Algorand URI variants. Scheme-specific sections reference these shared rules when describing their parameters. ```abnf ; Core algorandaddress = *base32 appid = *digit assetid = *digit ; qchar corresponds to RFC 3986 query characters excluding "=" and "&" ; qbase64url matches the unpadded base64url alphabet from RFC 4648 section 5 qbase64url = 1*(ALPHA / DIGIT / "-" / "_") alabel = 1*(ALPHA / DIGIT / "-" / "_" / ".") noteparam = "note=" *qchar xnote = "xnote=" *qchar feeparam = "fee=" *digit otherparam = qchar *qchar [ "=" *qchar ] ; Network authority selectors netauth = ghlabel / netlabel ghlabel = "gh:" 1*qbase64url netlabel = "net:" ( "testnet" / "betanet" / alabel ) ; Payment transactions (ARC-26) paymenturn = "algorand://" [ netauth "/" ] algorandaddress [ "?" 
paymentparams ] paymentparams = paymentparam *( "&" paymentparam ) paymentparam = amountparam / labelparam / noteparam / xnote / assetparam / otherparam amountparam = "amount=" *digit labelparam = "label=" *qchar assetparam = "asset=" *digit ; Key registration transactions (ARC-78) keyregurn = "algorand://" [ netauth "/" ] algorandaddress [ "?" keyregparams ] keyregparams = keyregparam *( "&" keyregparam ) keyregparam = typekeyreg / votekeyparam / selkeyparam / sprfkeyparam / votefstparam / votelstparam / votekdparam / noteparam / xnote / feeparam / otherparam typekeyreg = "type=keyreg" votekeyparam = "votekey=" *qbase64url selkeyparam = "selkey=" *qbase64url sprfkeyparam = "sprfkey=" *qbase64url votefstparam = "votefst=" *digit votelstparam = "votelst=" *digit votekdparam = "votekd=" *digit ; Application NoOp call transactions (ARC-79) noopurn = "algorand://" [ netauth "/" ] algorandaddress [ "?" noopparams ] noopparams = noopparam *( "&" noopparam ) noopparam = typeappl / appparam / methodparam / argparam / boxparam / assetparam / accountparam / feeparam / noteparam / xnote / otherparam typeappl = "type=appl" appparam = "app=" *digit methodparam = "method=" *qchar argparam = "arg=" *qchar boxparam = "box=" *qbase64url accountparam = "account=" *base32 ; Application state queries (ARC-82 application mode) appqueryurn = "algorand://" [ netauth "/" ] "app/" appid [ "?" appqueryparams ] appqueryparams = appqueryparam *( "&" appqueryparam ) appqueryparam = boxparam / globalparam / localparam / algaddrparam / tealcodeparam / otherparam globalparam = "global=" *qbase64url localparam = "local=" *qbase64url algaddrparam = "algorandaddress=" *base32 tealcodeparam = "tealcode" ; Asset metadata queries (ARC-82 asset mode) assetqueryurn = "algorand://" [ netauth "/" ] "asset/" assetid [ "?" 
assetqueryparams ] assetqueryparams = assetqueryparam *( "&" assetqueryparam ) assetqueryparam = totalparam / decimalsparam / frozenparam / unitnameparam / assetnameparam / urlparam / metadatahashparam / managerparam / reserveparam / freezeparam / clawbackparam / otherparam totalparam = "total" decimalsparam = "decimals" frozenparam = "frozen" unitnameparam = "unitname" assetnameparam = "assetname" urlparam = "url" metadatahashparam = "metadatahash" managerparam = "manager" reserveparam = "reserve" freezeparam = "freeze" clawbackparam = "clawback" ``` Elements of the query component may contain characters outside the valid range. These must first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence must be percent-encoded as described in RFC 3986. Here, “qchar” corresponds to valid characters of an RFC 3986 URI query component, excluding the ”=” and ”&” characters, which this specification takes as separators. The scheme component (“algorand:”) is case-insensitive, and implementations MUST accept any combination of uppercase and lowercase letters. The rest of the URI is case-sensitive, including the query parameter keys. > Encoding Rules for qchar Values > > Parameters containing text or binary data (e.g. note) MUST be encoded according to . Characters outside the unreserved URI set — including ”=”, ”&”, ”%”, and any non-ASCII or binary bytes — MUST first be UTF-8 encoded and then percent-encoded (%XX format). > > Implementations MUST NOT treat raw ”=” or ”&” inside values as literal characters, since these delimit query parameters. ```plaintext note=foo%3Dbar%26baz ; represents "foo=bar&baz" note=%00%FF%AA ; arbitrary binary bytes (hex 00 FF AA) note=Donation%20for%20Event ; spaces encoded as %20 ``` ### Common URI Format #### Network Selection via Authority All Algorand URI variants encode the target network in the authority component instead of relying on query parameters. 
The authority, when present, MUST use one of the following prefixes: * `gh:`: Authoritative selector carrying the unpadded base64url encoding (per ) of the 32-byte genesis hash. Clients MUST validate and honor this selector. * `net:`: Advisory alias (e.g., `testnet`, `betanet`, or deployment-specific labels) that clients MAY resolve to a known genesis hash. When the authority is absent (i.e., `algorand://` is followed immediately by the resource path), clients MUST assume the canonical Algorand network implied by legacy authority-free URIs. For avoidance of doubt, the canonical network corresponds to Algorand MainNet. To preserve backward compatibility, emitters targeting that canonical network MUST omit the authority entirely. This rearrangement moves `app` and `asset` identifiers out of the authority and into the leading path segment, so resolvers that previously looked for those tokens in the authority MUST be updated. Because authority-free URIs remain unchanged, applications that exclusively target the canonical network continue to generate and parse the exact same strings. Only resolvers and emitters that work with alternative networks need to adopt the `gh:` or `net:` authorities introduced here. #### Client Resolution Algorithm * If the authority begins with `gh:`, resolve to that network and validate the hash length. * Else if the authority begins with `net:`, map the alias to a locally known genesis hash; if the alias is unknown, treat the URI as invalid. * Else (no authority present), assume the client’s configured canonical Algorand genesis hash; if such configuration is missing, treat the URI as invalid. 
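The resolution algorithm above can be sketched as follows. The alias table is illustrative (a real client ships its own known networks); the hash constants are the widely published MainNet and TestNet genesis hashes, included only so the example is concrete:

```python
import base64
from typing import Optional

# Illustrative tables only; a real client configures its own networks.
CANONICAL_GENESIS = "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8="  # MainNet
KNOWN_ALIASES = {"testnet": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI="}


def resolve_network(authority: Optional[str]) -> str:
    """Resolve a URI authority to a genesis hash (standard base64 form).

    Follows the three-step algorithm above; raises ValueError for URIs
    that the algorithm says to treat as invalid.
    """
    if authority is None:
        # No authority: assume the configured canonical network.
        return CANONICAL_GENESIS
    if authority.startswith("gh:"):
        b64url = authority[len("gh:"):]
        raw = base64.urlsafe_b64decode(b64url + "=" * (-len(b64url) % 4))
        if len(raw) != 32:  # validate the hash length
            raise ValueError("genesis hash must be 32 bytes")
        return base64.b64encode(raw).decode("ascii")
    if authority.startswith("net:"):
        alias = authority[len("net:"):]
        if alias not in KNOWN_ALIASES:  # unknown alias: treat URI as invalid
            raise ValueError(f"unknown network alias: {alias}")
        return KNOWN_ALIASES[alias]
    raise ValueError(f"unrecognized authority: {authority}")
```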
#### Network Selection Examples Implicit default network: ```plaintext algorand://asset/31566704?total ``` TestNet authoritative hash: ```plaintext algorand://gh:/app/421337?local=bG9j ``` Private network alias: ```plaintext algorand://net:myco-devnet/asset/31566704?total ``` Conflict example (invalid alias): ```plaintext algorand://net:unknown-net/asset/31566704?total ``` #### Migration & Compatibility * Parsers that expect `app` or `asset` in the authority for non-canonical networks will break. Implementations SHOULD accept legacy query-based selectors during a transition period but MUST emit the new authority-based form. * When a legacy form is detected, apply the legacy query-selector semantics (`gh`/`net` query parameters with a canonical default) to preserve backwards compatibility. * Authority-free canonical network URIs remain valid and identical, so tooling that only targets that network requires no changes. * Emitters targeting the canonical network MUST omit the authority. #### Trade-offs * **Pros**: Encodes the network as part of the hierarchical identity, keeps the default authority-free form compact, and aligns with resolver or gateway architectures that are keyed by network. * **Cons**: Requires ecosystem updates because the authority semantics change and the migration story is more involved than the query-based approach. ### Compliance Fragment Implementations that emit Algorand URIs and need to declare conformance with multiple ARCs MUST encode that declaration in the URI fragment using the pattern `#arc++...`, where each entry is an unpadded decimal ARC number and entries are listed in strictly ascending order. Only the first entry MAY carry the `arc` literal; subsequent entries MUST be bare numbers separated by `+`; no other separators or padding are allowed. For example, `#arc26+27` is valid, while `#arc26+arc27`, `#arc026+27`, and `#arc27+26` are invalid. 
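These ordering and padding rules can be checked with a small parser. A sketch that enforces the canonical strictly ascending form (it does not cover every rule in this section, and a lenient client may relax the ordering check):

```python
import re

# Single "arc" literal, no leading zeros, "+"-separated decimal numbers.
_FRAGMENT_RE = re.compile(r"^arc(?!0\d)\d+(?:\+(?!0\d)\d+)*$")


def parse_compliance_fragment(fragment: str) -> list:
    """Validate a compliance fragment (without the leading '#') and return
    the declared ARC numbers.

    Enforces the canonical strictly-ascending order described above;
    raises ValueError for malformed or out-of-order declarations.
    """
    if not _FRAGMENT_RE.fullmatch(fragment):
        raise ValueError(f"malformed fragment: {fragment!r}")
    numbers = [int(n) for n in fragment[len("arc"):].split("+")]
    if any(a >= b for a, b in zip(numbers, numbers[1:])):
        raise ValueError("ARC numbers must be strictly ascending")
    return numbers
```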
**As a special case, declarations that include MUST list ARC-3 as the sole entry, using the fragment `#arc3` without any additional ARC identifiers.** Consumers of this scheme SHOULD treat fragments that do not follow this structure as non-compliant and ignore the multi-ARC declaration they attempt to convey. #### Example — Declaring Multi-ARC Compliance Implementations MAY use the URI fragment to indicate which ARC standards the resource conforms to. The following illustration uses two hypothetical future ARCs. A resource that conforms to hypothetical ARC-X and ARC-Y could expose a URI such as: `algorand://app/123456?box=AAAAAAAAAAAAA#arcX+Y` In this case, the fragment value `arcX+Y` declares that the asset metadata conforms to both hypothetical ARC-X and ARC-Y, where `X` and `Y` are ARC numbers. Clients interpreting this fragment SHOULD: 1. Strip the `#arc` prefix and split the remainder by the `"+"` separator.\ Example: `"arcX+Y"` → `[X, Y]` 2. Treat the resulting list as the set of supported ARC identifiers. 3. Optionally fetch or reference the corresponding ARC specification documents,\ e.g.: * ARC-X (hypothetical) * ARC-Y (hypothetical) 4. Preserve order if present, though ARC numbers **SHOULD** be listed in ascending order for canonical form.\ Clients **MUST** accept any order. Implementations MAY use the following regular expression to validate fragment values while enforcing the no-leading-zero requirement and forbidding duplicate `#arc` prefixes: ```regex ^(?!.*#arc.*#arc).*#arc(?!0\d)\d+(?:\+(?!0\d)\d+)*$ ``` This mechanism ensures that multiple ARC declarations in a URI fragment can be parsed, validated, and cross-referenced unambiguously. ### Transaction URIs The base payment URI encoding provides a standardized way for applications and websites to express payment intent through deeplinks, QR codes, and similar transports. It is heavily based on Bitcoin’s so existing tooling can adapt with minimal changes. 
The optional URI authority selects the network (`gh:` or `net:`); when omitted, clients assume the canonical network. The ABNF overview defines `paymenturn`, `paymentparams`, and `paymentparam`, which extend the shared productions with the payment-specific keys described below. #### Query Keys * label: Label for that address (e.g. name of receiver) * address: Algorand address * xnote: A URL-encoded notes field value that must not be modifiable by the user when displayed to users. * note: A URL-encoded default notes field value that the user interface may optionally make editable by the user. * amount: microAlgos or smallest unit of asset * asset: The asset id this request refers to (if Algos, simply omit this parameter) * (others): optional, for future extensions #### Transfer Amount and Size !!! Note This is DIFFERENT from Bitcoin’s BIP-0021 If an amount is provided, it MUST be specified in the basic unit of the asset. For example, if it’s Algos (Algorand native unit), the amount MUST be specified in microAlgos. All amounts MUST NOT contain commas or a period (.); they are strictly non-negative integers. For 100 Algos, the amount needs to be 100000000. For 54.1354 Algos, the amount needs to be 54135400. Algorand clients SHOULD display the amount in whole Algos. Where needed, microAlgos MAY be used as well. In any case, the units SHALL be clear for the user. 
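The conversion above (whole Algos to microAlgos, or an ASA amount to its base units) is easy to get wrong with floating point. A sketch of a safe helper using exact decimal arithmetic (`decimals=6` for Algos):

```python
from decimal import Decimal


def to_base_units(amount: str, decimals: int) -> int:
    """Convert a human-readable amount into the integer base units required
    by the `amount` parameter (microAlgos when decimals=6 for Algos).

    Raises if the amount is negative or has more fractional digits than the
    asset supports, so no silent rounding can occur.
    """
    scaled = Decimal(amount).scaleb(decimals)  # shift by `decimals` places
    if scaled != scaled.to_integral_value():
        raise ValueError(f"{amount} has too many decimal places")
    if scaled < 0:
        raise ValueError("amount must be non-negative")
    return int(scaled)
```

For example, `to_base_units("54.1354", 6)` yields the `54135400` shown above, while `to_base_units("0.0000001", 6)` raises instead of rounding.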
#### Examples Address: ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4 ``` Address with label: ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?label=Silvio ``` Request 150.5 Algos from an address: ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?amount=150500000 ``` Request 150 units of Asset ID 45 from an address: ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?amount=150&asset=45 ``` ### Key Registration Transaction URIs This extension to the base Algorand URI scheme defines how to encode key registration transactions so they can be shared through deeplinks, QR codes, and similar mechanisms while remaining compatible with the payment format introduced in . Network selection uses the same authority-based mechanism described in the common format section. The ABNF overview defines `keyregurn`, `keyregparams`, and `keyregparam`, which extend the shared productions with the key registration-specific keys enumerated below. #### Scope This section explicitly supports the two major subtypes of key registration transactions: * Online keyreg transaction * Declares intent to participate in consensus and configures required keys * Offline keyreg transaction * Declares intent to stop participating in consensus The following variants of keyreg transactions are not defined: * Non-participating keyreg transaction * This transaction subtype is considered deprecated * Heartbeat keyreg transaction * This transaction subtype will be included in the future block incentives protocol. The protocol specifies that this transaction type must be submitted by a node in response to a programmatic “liveness challenge”. It is not meant to be signed or submitted by an end user. #### Query Keys * address: Algorand address of transaction sender. Required. * type: fixed to “keyreg”. 
Used to disambiguate the transaction type from the base standard and other possible extensions. Required. * votekeyparam: The vote key parameter to use in the transaction. Encoded with . Required for keyreg online transactions. * selkeyparam: The selection key parameter to use in the transaction. Encoded with base64url. Required for keyreg online transactions. * sprfkeyparam: The state proof key parameter to use in the transaction. Encoded with base64url. Required for keyreg online transactions. * votefstparam: The first round on which the voting keys will be valid. Required for keyreg online transactions. * votelstparam: The last round on which the voting keys will be valid. Required for keyreg online transactions. * votekdparam: The key dilution key parameter to use. Required for keyreg online transactions. * xnote: As in . A URL-encoded notes field value that must not be modifiable by the user when displayed to users. Optional. * note: As in . A URL-encoded default notes field value that the user interface may optionally make editable by the user. Optional. * fee: OPTIONAL. A static fee to set for the transaction in microAlgos. Useful to signal intent to receive participation incentives (e.g. with a 2,000,000 microAlgo transaction fee.) * (others): optional, for future extensions #### Examples Encoding keyreg online transaction with minimum fee: ```plaintext { "txn": { "fee": 1000, "fv": 1345, "gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=", "lv": 2345, "selkey:b64": "+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c=", "snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=", "sprfkey:b64": "3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W/iy/JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg==", "type": "keyreg", "votefst": 1300, "votekd": 100, "votekey:b64": "UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI=", "votelst": 11300 } } ``` Results in: ```plaintext algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY? 
type=keyreg &selkey=-lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c &sprfkey=3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W_iy_JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg &votefst=1300 &votekd=100 &votekey=UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI &votelst=11300 ``` Note: newlines added for readability. Note the difference between base64 encoding in the raw object and base64url encoding in the URI parameters. For example, the selection key parameter `selkey` that begins with `+lfw+` in the raw object is encoded in base64url to `-lfw-` in the URI. Note: Here, the fee is omitted from the URI (due to being set to the minimum 1,000 microAlgos.) When the fee is omitted, it is left up to the application or wallet to decide. This is for demonstrative purposes; the specification does not require this behavior. Encoding keyreg offline transaction: ```plaintext { "txn": { "fee": 1000, "fv": 1776240, "gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=", "lv": 1777240, "snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=", "type": "keyreg" } } ``` Results in: ```plaintext algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY?type=keyreg ``` This offline keyreg transaction encoding is the smallest compatible representation. Encoding keyreg online transaction with custom fee and note: ```plaintext { "txn": { "fee": 2000000, "fv": 1345, "gh:b64": "kUt08LxeVAAGHnh4JoAoAMM9ql/hBwSoiFtlnKNeOxA=", "lv": 2345, "note:b64": "Q29uc2Vuc3VzIHBhcnRpY2lwYXRpb24gZnR3", "selkey:b64": "+lfw+Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c=", "snd:b64": "+gJAXOr2rkSCdPQ5DEBDLjn+iIptzLxB3oSMJdWMVyQ=", "sprfkey:b64": "3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W/iy/JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg==", "type": "keyreg", "votefst": 1300, "votekd": 100, "votekey:b64": "UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI=", "votelst": 11300 } } ``` Results in: ```plaintext algorand://7IBEAXHK62XEJATU6Q4QYQCDFY475CEKNXGLYQO6QSGCLVMMK4SLVTYLMY? 
type=keyreg &selkey=-lfw-Y04lTnllJfncgMjXuAePe8i8YyVeoR9c1Xi78c &sprfkey=3NoXc2sEWlvQZ7XIrwVJjgjM30ndhvwGgcqwKugk1u5W_iy_JITXrykuy0hUvAxbVv0njOgBPtGFsFif3yLJpg &votefst=1300 &votekd=100 &votekey=UU8zLMrFVfZPnzbnL6ThAArXFsznV3TvFVAun2ONcEI &votelst=11300 &fee=2000000 &note=Consensus%2Bparticipation%2Bftw ``` Note: newlines added for readability. ### Application NoOp Call URIs NoOp calls are generic application calls that execute an Algorand smart contract’s approval program. This URI extension encodes the transactions so wallets, dApps, and services can invoke specific application methods using deeplinks and QR codes while remaining consistent with . As with other URI types, the optional authority selects the network. The ABNF overview defines `noopurn`, `noopparams`, and `noopparam`, which extend the shared productions with the application call keys described below. As in , URIs follow the general format for URIs as set forth in RFC 3986. The path component consists of an Algorand address, and the query component provides additional transaction parameters. Elements of the query component may contain characters outside the valid range. These are encoded differently depending on their expected character set. The text components (note, xnote) MUST first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence MUST be percent-encoded as described in RFC 3986. The binary components (args, refs, etc.) MUST be encoded with base64url as specified in . #### Query Keys * address: Algorand address of transaction sender * type: fixed to “appl”. Used to disambiguate the transaction type from the base standard and other possible extensions * app: The first reference is set to specify the called application (Algorand Smart Contract) ID and is mandatory. Additional references are optional and will be used in the Application NoOp call’s foreign applications array. * method: Specify the full method expression (e.g. “example\_method(uint64,uint64)void”). 
* arg: Specify arguments used for calling the NoOp method, to be encoded within URI. * box: Box references to be used in Application NoOp method call box array. * asset: Asset reference to be used in Application NoOp method call foreign assets array. * account: Account or NFD address to be used in Application NoOp method call foreign accounts array. * fee: OPTIONAL. A static fee to set for the transaction in microAlgos. * (others): optional, for future extensions Note: If the fee is omitted, it means that the Minimum Fee is preferred for the transaction. #### Template URI vs Actionable URI If the URI is constructed so that other dApps, wallets or protocols could use it with their runtime Algorand entities of interest, then the placeholder account/app address in the URI MUST be ZeroAddress (“AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ”). Since ZeroAddress cannot initiate any action, this approach is considered secure. #### Examples Call `claim(uint64,uint64)byte[]` on contract 11111111 paying a fee of 10000 microAlgos from a specific address: ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?type=appl&app=11111111&method=claim(uint64,uint64)byte[]&arg=20000&arg=474567&asset=45&fee=10000 ``` Call the same method paying the default 1000 microAlgo fee while providing additional foreign applications: ```plaintext algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?type=appl&app=11111111&method=claim(uint64,uint64)byte[]&arg=20000&arg=474567&asset=45&app=22222222&app=33333333 ``` ### Blockchain Query URIs This read-only URI extension defines a standardized method for querying application and asset data on Algorand. It enables applications, websites, and QR code implementations to construct URIs that retrieve application state, box data, and asset metadata in a structured format. The design is inspired by and reuses its core URI principles for consistency. 
Algorand URIs in this section follow the general format for URIs as defined in RFC 3986. The authority optionally selects the network (`gh:` or `net:`), while the leading path segment specifies whether the URI targets an application (`.../app/`) or an asset (`.../asset/`). Query parameters define the specific data fields being requested. Parameters MAY contain characters outside the valid range. These MUST first be encoded in UTF-8, then percent-encoded according to RFC 3986. The ABNF overview defines `appqueryurn`, `appqueryparam`, `assetqueryurn`, and `assetqueryparam`, which extend the shared productions with the query keys summarized below. #### Application Query URIs (`algorand://app`) The application URI allows querying the state of an application, including data from the application’s box storage, global storage, and local storage, as well as the TEAL program associated with it. Each storage type has specific requirements. #### Asset Query URIs (`algorand://asset`) The asset URI enables retrieval of metadata and configuration details for a specific asset, such as its name, total supply, decimal precision, and associated addresses. #### Parameter Definitions **Application parameters** * `box`: Queries the application’s box storage with a key encoded in base64url. * `global`: Queries the global storage of the application using a base64url-encoded key. * `local`: Queries local storage for a specified account. Requires an additional `algorandaddress` parameter, representing the account whose local storage is queried. * `algorandaddress`: Supplies the account whose local storage should be inspected when paired with `local`. * `tealcode`: Requests the TEAL program associated with the application. **Asset parameters** * `total`: Queries the total supply of the asset. * `decimals`: Queries the number of decimal places used for the asset. * `frozen`: Queries whether the asset is frozen by default. 
* `unitname`: Queries the short name or unit symbol of the asset (e.g., “USDT”). * `assetname`: Queries the full name of the asset (e.g., “Tether”). * `url`: Queries the URL associated with the asset, providing more information. * `metadatahash`: Queries the metadata hash associated with the asset. * `manager`: Queries the address of the asset manager. * `reserve`: Queries the reserve address holding non-minted units of the asset. * `freeze`: Queries the freeze address for the asset. * `clawback`: Queries the clawback address for the asset. #### Query Key Descriptions For each parameter, the query key name is listed, followed by its purpose: * `box`: Retrieves information from the specified box storage key. * `global`: Retrieves data from the specified global storage key. * `local`: Retrieves data from the specified local storage key. Requires `algorandaddress` to specify the account. * `total`: Retrieves the asset’s total supply. * `decimals`: Retrieves the number of decimal places for the asset. * `frozen`: Retrieves the default frozen status of the asset. * `unitname`: Retrieves the asset’s short name or symbol. * `assetname`: Retrieves the full name of the asset. * `url`: Retrieves the URL associated with the asset. * `metadatahash`: Retrieves the metadata hash for the asset. * `manager`: Retrieves the manager address of the asset. * `reserve`: Retrieves the reserve address for the asset. * `freeze`: Retrieves the freeze address of the asset. * `clawback`: Retrieves the clawback address of the asset. 
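Building these query URIs mostly reduces to base64url-encoding storage keys and appending value-less parameters. A sketch emitting the unpadded form required by the `qbase64url` production (helper names are illustrative):

```python
import base64


def b64url(key: bytes) -> str:
    """Unpadded base64url, as the qbase64url production requires for
    box/global/local keys."""
    return base64.urlsafe_b64encode(key).decode("ascii").rstrip("=")


def app_box_query_uri(app_id: int, box_key: bytes) -> str:
    """Build an application box-storage query URI (canonical, authority-free form)."""
    return f"algorand://app/{app_id}?box={b64url(box_key)}"


def asset_query_uri(asset_id: int, *fields: str) -> str:
    """Build an asset query URI from one or more value-less field parameters,
    e.g. asset_query_uri(67890, "total", "decimals")."""
    return f"algorand://asset/{asset_id}?{'&'.join(fields)}"
```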
#### Examples Query an application’s box storage: ```plaintext algorand://app/2345?box=YWxnb3JvbmQ= ``` Query global storage: ```plaintext algorand://app/12345?global=Z2xvYmFsX2tleQ== ``` Query local storage for a specific address: ```plaintext algorand://app/12345?local=bG9jYWxfa2V5&algorandaddress=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567 ``` Query the total supply of an asset: ```plaintext algorand://asset/67890?total ``` ## Rationale The present ARC aims to provide a standardized way to encode key registration transactions in order to enhance the user experience of signing key registration transactions in general, and in particular in the use case of an Algorand node runner that does not have their spending keys resident on their node (as is best practice). The parameter names were chosen to match the corresponding names in encoded key registration transactions. Algorand application NoOp method calls cover the majority of application transactions in Algorand and have a wide range of use-cases. For use-cases where the runtime knows exactly what the called application needs in terms of arguments and transaction arrays and there are no direct interactions, this extension is required since the original ARC-26 standard did not support application calls. Previously, the Algorand URI scheme was primarily used to create transactions on the chain. Extending it to cover read-only queries allows a URI scheme to directly retrieve information from the chain, specifically for applications and assets. This provides a unified, standardized method for querying Algorand application and asset data, allowing interoperability across applications and services. ## Backwards Compatibility This ARC replaces ARCs 26, 78, 79, and 82 without invalidating previously generated URIs. Existing URIs that conform to the earlier specifications remain valid under this consolidated definition, so no backwards incompatibilities are introduced beyond the deprecation of the superseded documents. 
For network selection, implementations MAY continue to accept the legacy query-based selectors during a migration period but SHOULD emit the authority-based form specified above. ## Reference Implementation None. ## Security Considerations The transaction-related sections of this specification introduce no additional security considerations beyond those identified in the originating ARCs. Since the blockchain query URIs are intended for read-only operations, they do not alter application or asset state, mitigating many security risks. However, data retrieved from these URIs should be validated to ensure it meets user expectations and that any displayed data cannot be tampered with. ## Copyright Copyright and related rights waived via .
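The query URIs above can be parsed with ordinary URL tooling. The sketch below is a minimal, illustrative parser (not part of the specification): it assumes the authority component names the query target (`app` or `asset`), the path carries the numeric ID, and that `box`/`global`/`local` values are base64-encoded, as in the examples in this document.

```python
import base64
from urllib.parse import urlparse, parse_qs

def parse_algorand_query_uri(uri: str) -> dict:
    """Parse an algorand:// read-only query URI into its components.

    Illustrative sketch only; key names follow the examples in this ARC.
    """
    parts = urlparse(uri)
    if parts.scheme != "algorand":
        raise ValueError("not an algorand URI")
    result = {
        "target": parts.netloc,             # "app" or "asset"
        "id": int(parts.path.lstrip("/")),  # application or asset ID
        "query": {},
    }
    # keep_blank_values so value-less keys like "?total" survive parsing
    for key, values in parse_qs(parts.query, keep_blank_values=True).items():
        result["query"][key] = values[0]
    return result

q = parse_algorand_query_uri("algorand://app/12345?global=Z2xvYmFsX2tleQ==")
print(q["target"], q["id"])                    # app 12345
print(base64.b64decode(q["query"]["global"]))  # b'global_key'
```

A real wallet or explorer would then map the parsed `target`, `id`, and query key onto the corresponding algod or indexer lookup.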
# Algorand Smart Contract Token Specification
> Base specification for tokens implemented as smart contracts
## Abstract This ARC (Algorand Request for Comments) specifies an interface for tokens to be implemented on Algorand as smart contracts. The interface defines a minimal interface required for tokens to be held and transferred, with the potential for further augmentation through additional standard interfaces and custom methods. ## Motivation Currently, most tokens in the Algorand ecosystem are represented by ASAs (Algorand Standard Assets). However, to provide rich extra functionality, it can be desirable to implement tokens as smart contracts instead. To foster an interoperable token ecosystem, it is necessary that the core interfaces for tokens be standardized. ## Specification The key words “**MUST**”, “**MUST NOT**”, “**REQUIRED**”, “**SHALL**”, “**SHALL NOT**”, “**SHOULD**”, “**SHOULD NOT**”, “**RECOMMENDED**”, “**MAY**”, and “**OPTIONAL**” in this document are to be interpreted as described in . ### Core Token specification A smart contract token that is compliant with this standard MUST implement the following interface: ```json { "name": "ARC-200", "desc": "Smart Contract Token Base Interface", "methods": [ { "name": "arc200_name", "desc": "Returns the name of the token", "readonly": true, "args": [], "returns": { "type": "byte[32]", "desc": "The name of the token" } }, { "name": "arc200_symbol", "desc": "Returns the symbol of the token", "readonly": true, "args": [], "returns": { "type": "byte[8]", "desc": "The symbol of the token" } }, { "name": "arc200_decimals", "desc": "Returns the decimals of the token", "readonly": true, "args": [], "returns": { "type": "uint8", "desc": "The decimals of the token" } }, { "name": "arc200_totalSupply", "desc": "Returns the total supply of the token", "readonly": true, "args": [], "returns": { "type": "uint256", "desc": "The total supply of the token" } }, { "name": "arc200_balanceOf", "desc": "Returns the current balance of the owner of the token", "readonly": true, "args": [ { "type": "address", "name": "owner", 
"desc": "The address of the owner of the token" } ], "returns": { "type": "uint256", "desc": "The current balance of the holder of the token" } }, { "name": "arc200_transfer", "desc": "Transfers tokens", "readonly": false, "args": [ { "type": "address", "name": "to", "desc": "The destination of the transfer" }, { "type": "uint256", "name": "value", "desc": "Amount of tokens to transfer" } ], "returns": { "type": "bool", "desc": "Success" } }, { "name": "arc200_transferFrom", "desc": "Transfers tokens from source to destination as approved spender", "readonly": false, "args": [ { "type": "address", "name": "from", "desc": "The source of the transfer" }, { "type": "address", "name": "to", "desc": "The destination of the transfer" }, { "type": "uint256", "name": "value", "desc": "Amount of tokens to transfer" } ], "returns": { "type": "bool", "desc": "Success" } }, { "name": "arc200_approve", "desc": "Approve spender for a token", "readonly": false, "args": [ { "type": "address", "name": "spender" }, { "type": "uint256", "name": "value" } ], "returns": { "type": "bool", "desc": "Success" } }, { "name": "arc200_allowance", "desc": "Returns the current allowance of the spender of the tokens of the owner", "readonly": true, "args": [ { "type": "address", "name": "owner" }, { "type": "address", "name": "spender" } ], "returns": { "type": "uint256", "desc": "The remaining allowance" } } ], "events": [ { "name": "arc200_Transfer", "desc": "Transfer of tokens", "args": [ { "type": "address", "name": "from", "desc": "The source of transfer of tokens" }, { "type": "address", "name": "to", "desc": "The destination of transfer of tokens" }, { "type": "uint256", "name": "value", "desc": "The amount of tokens transferred" } ] }, { "name": "arc200_Approval", "desc": "Approval of tokens", "args": [ { "type": "address", "name": "owner", "desc": "The owner of the tokens" }, { "type": "address", "name": "spender", "desc": "The approved spender of tokens" }, { "type": "uint256", "name": 
"value", "desc": "The amount of tokens approve" } ] } ] } ``` Ownership of a token by a zero address indicates that a token is out of circulation indefinitely, or otherwise burned or destroyed. The methods `arc200_transfer` and `arc200_transferFrom` method MUST error when the balance of `from` is insufficient. In the case of the `arc200_transfer` method, from is implied as the `owner` of the token. The `arc200_transferFrom` method MUST error unless called by an `spender` approved by an `owner`. The methods `arc200_transfer` and `arc200_transferFrom` MUST emit a `Transfer` event. A `arc200_Transfer` event SHOULD be emitted, with `from` being the zero address, when a token is minted. A `arc200_Transfer` event SHOULD be emitted, with `to` being the zero address, when a token is destroyed. The `arc200_Approval` event MUST be emitted when an `arc200_approve` or `arc200_transferFrom` method is called successfully. A value of zero for the `arc200_approve` method and the `arc200_Approval` event indicates no approval. The `arc200_transferFrom` method and the `arc200_Approval` event indicates the approval value after it is decremented. The contract MUST allow multiple operators per owner. All methods in this standard that are marked as `readonly` MUST be read-only as defined by . ## Rationale This specification is based on . ### Core Specification The core specification identical to ERC-20. ## Backwards Compatibility This standard introduces a new kind of token that is incompatible with tokens defined as ASAs. Applications that want to index, manage, or view tokens on Algorand will need to handle these new smart tokens as well as the already popular ASA implementation of tokens will need to add code to handle both, and existing smart contracts that handle ASA-based tokens will not work with these new smart contract tokens. While this is a severe backward incompatibility, smart contract tokens are necessary to provide richer and more diverse functionality for tokens. 
## Security Considerations The fact that anybody can create a new implementation of a smart contract token standard opens the door for many of those implementations to contain security bugs. Additionally, malicious token implementations could contain hidden anti-features unexpected by users. As with other smart contract domains, it is difficult for users to verify or understand the security properties of smart contract tokens. This is a tradeoff compared with ASA tokens, which share a smaller set of security properties that are easier to validate; smart contract tokens give up this uniformity to gain the possibility of novel features. ## Copyright Copyright and related rights waived via .
# ARC Category Guidelines
> ARCs by categories
Welcome to the guidelines. Here you’ll find information on which ARCs to use for your project. ## General ARCs ### ARC 0 - ARC Purpose and Guidelines #### What is an ARC? ARC stands for Algorand Request for Comments. An ARC is a design document providing information to the Algorand community or describing a new feature for Algorand or its processes or environment. The ARC should provide a concise technical specification and a rationale for the feature. The ARC author is responsible for building consensus within the community and documenting dissenting opinions. We intend ARCs to be the primary mechanisms for proposing new features and collecting community technical input on an issue. We maintain ARCs as text files in a versioned repository. Their revision history is the historical record of the feature proposal. ### ARC 26 - URI scheme This URI specification represents a standardized way for applications and websites to send requests and information through deeplinks, QR codes, etc. It is heavily based on Bitcoin’s and should be seen as a derivative of it. The decision to base it on BIP-0021 was made to make it as easy and compatible as possible for other applications. ### ARC 65 - AVM Run Time Errors In Program This document introduces a convention for raising informative run time errors on the Algorand Virtual Machine (AVM) directly from the program bytecode. ### ARC 78 - URI scheme, keyreg Transactions extension This URI specification represents an extension to the base Algorand URI encoding standard () that specifies encoding of key registration transactions through deeplinks, QR codes, etc. ### ARC 79 - URI scheme, App NoOp call extension NoOp calls are generic application calls to execute the Algorand smart contract ApprovalPrograms. This URI specification proposes an extension to the base Algorand URI encoding standard () that specifies encoding of application NoOp transactions into standard URIs. 
### ARC 82 - URI scheme blockchain information This URI specification defines a standardized method for querying application and asset data on Algorand. It enables applications, websites, and QR code implementations to construct URIs that allow users to retrieve data such as application state and asset metadata in a structured format. This specification is inspired by and follows similar principles, with adjustments specific to read-only queries for applications and assets. ### ARC 83 - xGov Council - Application Process The goal of this ARC is to clearly define the process for running for an xGov Council seat. ### ARC 86 - xGov status and voting power This ARC defines the Expert Governor (xGov) status and voting power in the Algorand Expert Governance. ### ARC 90 - URI scheme This ARC defines a unified Algorand URI scheme that covers payment transactions, key registration, application NoOp calls, and read-only blockchain queries. It expands on earlier URI specifications to support deeplinks, QR codes, and other contexts where structured URIs communicate transaction intent or state queries. ## Asa ARCs ### ARC 3 - Conventions Fungible/Non-Fungible Tokens The goal of these conventions is to make it simpler for block explorers, wallets, exchanges, marketplaces, and more generally, client software to display the properties of a given ASA. ### ARC 16 - Convention for declaring traits of an NFT’s The goal is to establish a standard for how traits are declared inside a non-fungible token’s (NFT’s) metadata, for example as specified in (), () or (). ### ARC 19 - Templating of NFT ASA URLs for mutability This ARC describes a template substitution for URLs in ASAs, initially for ipfs:// scheme URLs, allowing mutable CID replacement in rendered URLs. The proposed template-XXX scheme has substitutions like:

```plaintext
template-ipfs://{ipfscid::::}[/...]
```

This will allow modifying the 32-byte ‘Reserve address’ in an ASA to represent a new IPFS content-id hash. 
Changing the reserve address via an asset-config transaction will be all that is needed to point an ASA URL to new IPFS content. The client reading this URL will compose a fully formed IPFS Content-ID based on the version, multicodec, and hash arguments provided in the ipfscid substitution. ### ARC 20 - Smart ASA A “Smart ASA” is an Algorand Standard Asset (ASA) controlled by a Smart Contract that exposes methods to create, configure, transfer, freeze, and destroy the asset. This ARC defines the ABI interface of such a Smart Contract, the required metadata, and suggests a reference implementation. ### ARC 36 - Convention for declaring filters of an NFT The goal is to establish a standard for how filters are declared inside a non-fungible token’s (NFT’s) metadata. ### ARC 62 - ASA Circulating Supply This ARC introduces a standard for the definition of circulating supply for Algorand Standard Assets (ASA) and its client-side retrieval. A reference implementation is suggested. ### ARC 69 - ASA Parameters Conventions, Digital Media The goal of these conventions is to make it simpler to display the properties of a given ASA. This ARC differs from by focusing on optimization for fetching of digital media, as well as the use of onchain metadata. Furthermore, since asset configuration transactions are used to store the metadata, this ARC can be applied to existing ASAs. While mutability helps with backwards compatibility and other use cases, like leveling up an RPG character, some use cases call for immutability. In these cases, the ASA manager MAY remove the manager address, after which point the Algorand network won’t allow anyone to send asset configuration transactions for the ASA. This effectively makes the latest valid metadata immutable. ### ARC 71 - Non-Transferable ASA The goal is to make it simpler for block explorers, wallets, exchanges, marketplaces, and more generally, client software to identify & interact with a Non-transferable ASA (NTA). 
This defines an interface extending & non-fungible ASA to create Non-transferable ASAs. Before issuance, both parties (issuer and receiver) have to agree on who (if anyone) has the authorization to burn this ASA. > This spec is compatible with to create an updatable Non-transferable ASA. ### ARC 89 - ASA Metadata Registry This ARC defines the interface and the implementation of a singleton Application that provides Algorand Standard Assets metadata through the Algod API or the AVM. ## Application ARCs ### ARC 4 - Application Binary Interface (ABI) This document introduces conventions for encoding method calls, including argument and return value encoding, in Algorand Application call transactions. The goal is to allow clients, such as wallets and dapp frontends, to properly encode call transactions based on a description of the interface. Further, explorers will be able to show details of these method invocations. #### Definitions

* **Application:** an Algorand Application, aka “smart contract”, “stateful contract”, “contract”, or “app”.
* **HLL:** a higher level language that compiles to TEAL bytecode.
* **dapp (frontend)**: a decentralized application frontend, interpreted here to mean an off-chain frontend (a webapp, native app, etc.) that interacts with Applications on the blockchain.
* **wallet**: an off-chain application that stores secret keys for on-chain accounts and can display and sign transactions for these accounts.
* **explorer**: an off-chain application that allows browsing the blockchain, showing details of transactions.

### ARC 18 - Royalty Enforcement Specification A specification to describe a set of methods that offer an API to enforce Royalty Payments to a Royalty Receiver given a policy describing the royalty shares, both on primary and secondary sales. This is an implementation of a specification and other methods may be implemented in the same contract according to that specification. 
### ARC 21 - Round based datafeed oracles on Algorand The following document introduces conventions for building round-based datafeed oracles on Algorand using the ABI defined in ### ARC 22 - Add `read-only` annotation to ABI methods The goal of this convention is to allow smart contract developers to distinguish between methods which mutate state and methods which don’t by introducing a new property to the `Method` descriptor. ### ARC 23 - Sharing Application Information The following document introduces a convention for appending information (stored in various files) to the compiled application’s bytes. The goal of this convention is to standardize the process of verifying and adding this information. The encoded information byte string is `arc23` followed by the IPFS CID v1 of a folder containing the files with the information. The minimum required file is `contract.json`, representing the contract metadata (as described in , and as extended by future potential ARCs). ### ARC 28 - Algorand Event Log Spec Algorand dapps can use the primitive to attach information about an application call. This ARC proposes the concept of Events, which are merely a way in which data contained in these logs may be categorized and structured. In short: to emit an Event, a dapp calls `log` with ABI formatting of the log data, and a 4-byte prefix to indicate which Event it is. ### ARC 32 - Application Specification > \[!NOTE] This specification will be eventually deprecated by the specification. An Application is partially defined by its but further information about the Application should be available. Other descriptive elements of an application may include its State Schema, the original TEAL source programs, default method arguments, and custom data types. This specification defines the descriptive elements of an Application that should be available to clients to provide useful information for an Application Client. 
### ARC 54 - ASA Burning App This ARC provides TEAL which would deploy an application that can be used for burning Algorand Standard Assets. The goal is to have the apps deployed on the public networks using this TEAL to provide a standardized burn address and app ID. ### ARC 56 - Extended App Description This ARC takes the existing JSON description of a contract as described in and adds more fields for the purpose of client interaction. ### ARC 72 - Algorand Smart Contract NFT Specification This specifies an interface for non-fungible tokens (NFTs) to be implemented on Algorand as smart contracts. This interface defines a minimal interface for NFTs to be owned and traded, to be augmented by other standard interfaces and custom methods. ### ARC 73 - Algorand Interface Detection Spec This ARC specifies an interface detection interface based on . This interface allows smart contracts and indexers to detect whether a smart contract implements a particular interface based on an interface selector. ### ARC 74 - NFT Indexer API This specifies a REST interface that can be implemented by indexing services to provide data about NFTs conforming to the standard. ### ARC 87 - Key Name Specification Adopt a standard key name specification for complex data. This defines key names that can be used to represent JSON, Blobs, or other structures that do not fit neatly into the state. ### ARC 200 - Algorand Smart Contract Token Specification This ARC (Algorand Request for Comments) specifies an interface for tokens to be implemented on Algorand as smart contracts. The interface defines a minimal interface required for tokens to be held and transferred, with the potential for further augmentation through additional standard interfaces and custom methods. ## Explorer ARCs ### ARC 2 - Algorand Transaction Note Field Conventions The goal of these conventions is to make it simpler for block explorers and indexers to parse the data in the note fields and filter transactions of certain dApps. 
## Wallet ARCs ### ARC 1 - Algorand Wallet Transaction Signing API The goal of this API is to propose a standard way for a dApp to request the signature of a list of transactions to an Algorand wallet. This document also includes detailed security requirements to reduce the risks of users being tricked into signing dangerous transactions. As the Algorand blockchain adds new features, these requirements may change. ### ARC 5 - Wallet Transaction Signing API (Functional) ARC-1 defines a standard for signing transactions with security in mind. This proposal is a strict subset of ARC-1 that outlines only the minimum functionality required in order to be usable. Wallets that conform to ARC-1 already conform to this API. Wallets conforming to but not ARC-1 **MUST** only be used for testing purposes and **MUST NOT** be used on MainNet. This is because ARC-5 does not provide the same security guarantees as ARC-1 to properly protect wallet users. ### ARC 25 - Algorand WalletConnect v1 API WalletConnect is an open protocol to communicate securely between mobile wallets and decentralized applications (dApps) using QR code scanning (desktop) or deep linking (mobile). Its main use case allows users to sign transactions on web apps using a mobile wallet. This document aims to establish a standard API for using the WalletConnect v1 protocol on Algorand, leveraging the existing transaction signing APIs defined in . ### ARC 27 - Provider Message Schema Building off of the work of the previous ARCs relating to provider transaction signing (ARC-0005), provider address discovery (ARC-0006), provider transaction network posting (ARC-0007), and provider transaction signing & posting (ARC-0008), this proposal aims to comprehensively outline a common message schema between clients and providers. 
Furthermore, this proposal extends the aforementioned methods to encompass new functionality such as:

* Extending the message structure to target specific networks, thereby supporting multiple AVM (Algorand Virtual Machine) chains.
* Adding a new method that disables clients on providers.
* Adding a new method to discover provider capabilities, such as what networks and methods are supported.

This proposal serves as a formalization of the message schema and leaves the implementation details to the prerogative of the clients and providers. ### ARC 35 - Algorand Offline Wallet Backup Protocol This document outlines the high-level requirements for a wallet-agnostic backup protocol that can be used across all wallets in the Algorand ecosystem. ### ARC 47 - Logic Signature Templates This standard allows wallets to sign known logic signatures and clearly tell the user what they are signing. ### ARC 55 - On-Chain storage/transfer for Multisig This ARC proposes the utilization of on-chain smart contracts to facilitate the storage and transfer of Algorand multisignature metadata, transactions, and corresponding signatures for the respective multisignature sub-accounts. ### ARC 59 - ASA Inbox Router The goal of this standard is to establish a mechanism in the Algorand ecosystem by which ASAs can be sent to an intended receiver even if their account is not opted in to the ASA. A wallet custodied by an application will be used to custody assets on behalf of a given user, with only that user being able to withdraw assets. A master application will be used to map inbox addresses to user addresses. This master application can route ASAs to users, performing whatever actions are necessary. If integrated into ecosystem technologies including wallets, explorers, and dApps, this standard can provide enhanced capabilities around ASAs, which are otherwise strictly bound at the protocol level to require opting in to be received. 
### ARC 60 - Algorand Wallet Arbitrary Signing API This ARC proposes a standard for arbitrary data signing. It is designed to be a simple and flexible standard that can be used in a wide variety of applications.
# Disclosure of Vulnerabilities in Puya Smart Contract Compiler
This disclosure report contains technical details of two vulnerabilities in the Puya smart contract compiler. **Date reported:** October 10, 2025 **Affected Versions:** * PuyaPy: Versions `<5.3.2` and `<4.11.0` for the 4.x major version * Puya-TS: Versions `<1.0.0-alpha.96` or `<1.0.0-beta.74` ### Summary of Vulnerability Two separate vulnerabilities that could affect smart contracts were discovered in the Puya smart contract compiler: 1. **Missing Assert:** An optimization bug affecting the Puya compiler for Algorand Python & Algorand TypeScript in a narrow version window could remove a final assert before a return. 2. **ARC-4 Encoding Length Check:** A class of bugs where ARC-4 Application Binary Interface (ABI) values were not always validated by default, and this behavior was not clearly documented. Smart contract developers should use the resources in the **Steps to Reproduce** section below to assess their smart contract code for potential vulnerabilities and take immediate corrective action if any are discovered. ### Impact Any smart contract written in Algorand Python or Algorand TypeScript and compiled with a vulnerable version of the Puya compiler could potentially suffer from insecure TEAL code in certain scenarios. Smart contract developers should review their code following the guidance in the **Steps to Reproduce** section to assess if contracts were compiled with affected versions of Puya and, if so, review the code carefully to identify conditions for which the smart contract may have vulnerabilities in the compiled TEAL. As of the publication date, no direct impacts have been reported from the ecosystem. ### Technical Details #### Discovery On October 10, 2025, the Algorand Foundation received a report that a smart contract compiled with Puya was not checking ABI method arguments in the compiled TEAL code. Upon investigation by the engineering team, this was confirmed to be true and the full extent of missing validations was determined. 
In this process, the second vulnerability, the missing assert, was also identified. Further investigation found that multiple other Algorand smart contract languages, such as PyTeal, TEALScript, and Tealish, also performed partial or no validation of ABI values, with varying degrees of documentation about compiler behavior in this regard. #### Root Cause The “missing assert” bug was caused by a human error in coding a peephole optimization in the compiler, and the error was not caught by a second reviewer. The ARC-4 encoding length check vulnerability can be traced to insufficient documentation of the lack of validation, which was the Puya compiler’s default behavior by design. #### Remediation Going forward, Puya’s design will be secure by default, and security recommendations in the specs will be normative. Automatic validation will be applied during the compilation process unless the developer explicitly chooses to disable this behavior with a compiler flag. Additionally, an enhancement enables developers to apply ABI validations selectively to individual methods by using a new decorator. ### Strategic Mitigation Initiatives The Algorand Foundation engineering team has implemented multiple strategic measures to prevent future issues. These include strengthening regression tests for the Puya compiler, implementing clearer warnings when automatic validation is disabled, and improving release processes to require additional reviewers through standard operating procedures and automated CI/CD controls. 
### Steps to Reproduce Two detailed guides for understanding each type of vulnerability and assessing whether it may affect contracts compiled with affected versions of Puya have been published on GitHub: ### Fixes / Patches Available

The fix for both issues is available in the following package versions:

* PuyaPy: Versions `≥5.3.2` or `≥4.11.0` for the 4.x major version
* Puya-TS: Versions `≥1.0.0-alpha.96` or `≥1.0.0-beta.74`

Upgrade Puya, recompile all contracts, and verify that the ARC-56 JSON shows Puya `≥4.11.0` or `≥5.3.2`. Developers are strongly encouraged to create tests to verify that oversized inputs are rejected and that previously missing asserts are now enforced. All projects are advised to avoid older versions:

* PuyaPy: Versions `<5.3.2` and `<4.11.0` for the 4.x major version
* Puya-TS: Versions `<1.0.0-alpha.96` or `<1.0.0-beta.74`

### Additional Information

The ARC-4 Encoding Length Check vulnerability can also affect other high-level smart contract languages. PyTeal does not perform validation by default; apply the recommendations for manual validation found in the PyTeal documentation. TEALScript does not perform validation automatically for dynamic tuples or ABI return values, but does for static method arguments. Tealish supports fixed-size structs, but the compiler does not check them automatically; this behavior, however, is documented in the language guide. Developers should also review any smart contracts written directly in TEAL to ensure the appropriate checks are performed around ABI values. ### Acknowledgements Thanks to Folks Finance for discovering the vulnerabilities and reporting them responsibly. Additionally, thanks to the Algorand Foundation Engineering team and MakerX for their swift and thorough response to the issues and assistance in reviewing smart contracts for various applications. And, as always, thanks to the global Algorand community of validators, developers, and contributors who keep the network running, safe, and secure. 
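As a quick triage aid, the compiler version recorded in a contract's app spec can be compared against the patched releases. The sketch below is illustrative only: it assumes an ARC-56-style `compilerInfo` object containing a `compilerVersion` with `major`/`minor`/`patch` fields — verify the actual shape of your app spec JSON before relying on it.

```python
import json

# Patched PuyaPy releases per major version (from this bulletin)
FIXED = {4: (4, 11, 0), 5: (5, 3, 2)}

def is_patched(app_spec_json: str) -> bool:
    """Return True if the recorded compiler version is at or above the fix.

    Assumes an ARC-56-style `compilerInfo` field; adjust for your spec.
    """
    info = json.loads(app_spec_json)["compilerInfo"]
    v = info["compilerVersion"]
    version = (v["major"], v["minor"], v["patch"])
    fixed = FIXED.get(v["major"])
    return fixed is not None and version >= fixed

spec = '{"compilerInfo": {"compiler": "puya", "compilerVersion": {"major": 5, "minor": 3, "patch": 2}}}'
print(is_patched(spec))  # True
```

Tuple comparison handles the per-major-version cutoffs: `(5, 3, 1)` sorts below `(5, 3, 2)`, so any earlier 5.x release is flagged as unpatched.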
### References The GitHub repositories for Puya can be found at: * Puya compiler back end and Algorand Python front end: * Puya-TS front end for Algorand TypeScript: ### Contact For technical assistance related to the vulnerabilities, please contact the Developer Relations team at . To report further issues, please contact . For general discussion of the bulletin, please join the server. ### Incident Response Timeline

| Key Actions | Date | Description |
| --- | --- | --- |
| ABI validation issue discovered | October 10, 2025 | Algorand Foundation received the initial report about issues with ABI value validation in Puya-compiled contracts. |
| Missing assert issue discovered | October 12, 2025 | Algorand Foundation received the second report of a missing assert bug in Puya. |
| Mitigation communications | October 21, 2025 | Vulnerabilities and remediation actions were communicated privately to key ecosystem protocols to mitigate risk. |
| All fixes released | October 24, 2025 | Fixed versions of Puya for Python and TypeScript were published. |
| Affected versions unpublished | October 27, 2025 | Vulnerable versions of Puya were taken down from PyPI and NPM (`5.0.0`, `5.0.1` puyapy; puya-ts betas not affected). |
# Creating an account
Algorand offers multiple approaches to account creation. In this guide, we’ll explore the various methods available for creating accounts on the Algorand blockchain. Algorand supports multiple account types tailored to different use cases, from simple transactions to programmable smart contracts. (single key) are ideal for basic transfers, while offer secure key storage for applications. enable shared control with configurable thresholds, and Logic Signature accounts allow for stateless programmatic control by compiling TEAL logic into a dedicated address. This section explores how to utilize them in `algokit-utils`, `goal`, `algokey`, `SDKs`, and `Pera Wallet`, and the reasons you might want to choose one method over another for your application. Another approach to account creation is using logic signature accounts, which are contract-based accounts that operate using a logic signature instead of a private key. To create a logic signature account, you write transaction validation logic, compile it to obtain the corresponding address, and fund it with the required minimum balance. Accounts participating in transactions are required to maintain a minimum balance of 100,000 microAlgos. Before using a newly created account in transactions, make sure that it has a sufficient balance by transferring at least 100,000 microAlgos to it. An initial transfer of under that amount will fail due to the minimum balance constraint. Refer for more details. ## Standalone A standalone account is an Algorand address and private key pair that is not stored on disk. The private key is most often in the 25-word mnemonic form. Algorand’s mobile wallet uses standalone accounts. Use the 25-word mnemonic to import accounts into the mobile wallet. 
| **When to Use Standalone Accounts** | **When Not to Use Standalone Accounts** |
| --- | --- |
| Low setup cost: No need to connect to a separate client or hardware; all you need is the 25-word human-readable mnemonic of the relevant account. | Limited direct import/export options: Developers relying on import and export functions may find kmd more suitable, as it provides import and export capabilities. |
| Supports offline signing: Since private keys are not stored on disk, standalone accounts can be used in secure offline-signing procedures where hardware constraints may make using kmd more difficult. | |
| Widely supported: Standalone account mnemonics are commonly used across various Algorand developer tools and services. | |

### How to generate a standalone account

There are different ways to create a standalone account:

#### Algokey

algokey is a command-line utility for managing Algorand keys; it is used for generating, exporting, and importing keys.

```shell
$ algokey generate
Private key mnemonic: [PASSPHRASE]
Public key: [ADDRESS]
```

#### AlgoKit Utils

Developers can programmatically create accounts without depending on external key management systems, making this ideal for lightweight applications, offline signing, and minimal-setup scenarios. AlgoKit Utils offers multiple ways to create and manage standalone accounts.

##### Random Account Generation

Developers can generate random accounts dynamically, each with a unique public/private key pair.
##### Mnemonic-Based Account Recovery

Developers can create accounts from an existing 25-word mnemonic phrase, allowing seamless account recovery and reuse of predefined test accounts.

Caution: You can also create an account from environment variables as a standalone account. If the network is not LocalNet, the account is treated as standalone and loaded using its mnemonic secret. Ensure the mnemonic is handled securely and not committed to source control.

#### Pera Wallet

Pera Wallet is a popular non-custodial wallet for the Algorand blockchain. See Pera Wallet’s getting started guide on how to create a new Algorand account.

#### Vault Wallet

A HashiCorp Vault implementation can also be used for managing Algorand standalone accounts securely. By leveraging Vault, you can store private keys and 25-word mnemonics securely, ensuring sensitive data is protected from unauthorized access. This implementation provides a streamlined way to create and manage standalone accounts while maintaining best practices for key management. The integration is particularly useful for developers and enterprises seeking a secure, API-driven approach to managing Algorand accounts at scale, without relying on local storage or manual handling of sensitive credentials.

## KMD-Managed Accounts

The Key Management Daemon (kmd) is a process that runs on Algorand nodes, so if you are using a third-party API service this process likely will not be available to you. kmd is the underlying key storage mechanism used with `goal`.
| **When to Use KMD** | **When Not to Use KMD** |
| --- | --- |
| Single Master Derivation Key – Public/private key pairs are generated from a single master derivation key. You only need to remember the wallet passphrase/mnemonic to regenerate all accounts in the wallet. | Resource Intensive – Running `kmd` requires an active process and storing keys on disk. If you lack access to a node or need a lightweight solution, a standalone account may be a better option. |
| Enhanced Privacy – There is no way to determine that two addresses originate from the same master derivation key, allowing applications to implement anonymous spending without requiring users to store multiple passphrases. | |

Caution: KMD is not recommended for production.

### How to use kmd

#### Start the kmd process

To initiate the kmd process and generate the required `kmd.net` and `kmd.token` files, use the `goal` or `kmd` command line utilities. To run kmd, you need to have the kmd library installed, which comes with the node. Start kmd using goal with a 3600-second timeout:

```shell
$ goal kmd start -t 3600
Successfully started kmd
```

kmd can also be started directly with the following command:

```shell
$ kmd -d data/kmd-v/ -t 3600
```

Once kmd has started, retrieve the kmd IP address and access token:

```shell
$ echo "kmd IP address: " `cat $ALGORAND_DATA/kmd-v/kmd.net`
kmd IP address: [ip-address]:[port]
$ echo "kmd token: " `cat $ALGORAND_DATA/kmd-v/kmd.token`
kmd token: [token]
```

#### Create a wallet and generate an account

Wallets and accounts can be created in different ways.
##### goal

The following commands create a new wallet and generate an account using goal:

```shell
$ goal wallet new testwallet
Please choose a password for wallet 'testwallet':
Please confirm the password:
Creating wallet...
Created wallet 'testwallet'
Your new wallet has a backup phrase that can be used for recovery.
Keeping this backup phrase safe is extremely important.
Would you like to see it now? (Y/n): y
Your backup phrase is printed below.
Keep this information safe -- never share it with anyone!
[25-word mnemonic]
$ goal account new
Created new account with address [address]
```

##### AlgoKit Utils

###### KMD Client-Based Account Creation

We can also use the utils to create a wallet and account with the KMD client. Other operations, like creating and renaming wallets, can also be performed.

###### Environment Variable-Based Account Creation

Creating an account from an environment variable loads the account from a KMD wallet with the given name. When running against a local Algorand network, a funded wallet can be automatically created if it doesn’t exist.

#### Recover wallet and regenerate account

To recover a wallet and any previously generated accounts, use the wallet backup phrase, also called the wallet mnemonic or passphrase. The master derivation key for the wallet will always generate the same addresses in the same order; therefore, the process of recovering an account within the wallet looks exactly like generating a new account.

```shell
$ goal wallet new -r
Please type your recovery mnemonic below, and hit return when you are done:
[25-word wallet mnemonic]
Please choose a password for wallet [RECOVERED_WALLET_NAME]:
Please confirm the password:
Creating wallet...
Created wallet [RECOVERED_WALLET_NAME]
$ goal account new -w
Created new account with address [RECOVERED_ADDRESS]
```

An offline wallet may not accurately reflect account balances, but the state for those accounts (e.g., balance and online status) is safely stored on the blockchain.
kmd will repopulate those balances when connected to a node.

Caution: For compatibility with other developer tools, `goal` provides functions to import and export accounts into kmd wallets. However, keep in mind that an imported account cannot be recovered/derived from the wallet-level mnemonic. You must always keep track of the account-level mnemonics that you import into kmd wallets.

#### HD Wallets

Algorand’s Hierarchical Deterministic (HD) wallet implementation, based on the ARC-0052 standard, enables the creation of multiple accounts from a single master seed. The API implementations are in TypeScript, Kotlin, and Swift, providing a consistent and efficient solution for managing multiple accounts with a single mnemonic. HD wallets are especially beneficial for applications that require streamlined account generation and enhanced privacy. By using this approach, developers can ensure all accounts are deterministically derived from a single seed phrase, making wallet management more convenient for both users and applications.
# Funding an Account
To use the Algorand blockchain, accounts need to be funded with ALGO tokens. This guide explains different methods of funding accounts across Algorand’s various networks. You can also transfer ALGO tokens from an existing funded account to a new account using the Algorand SDK or through wallet applications. All Algorand accounts require a minimum balance to be registered in the ledger. The specific method you choose will depend on whether you’re working with MainNet, TestNet, or LocalNet.

## Choosing the Right Funding Method

The appropriate funding method depends on your specific needs:

* Development and Testing: Use the TestNet faucet or LocalNet’s pre-funded accounts
* Production Applications: Use MainNet on-ramps to acquire real ALGO tokens
* Automated Deployments: Use AlgoKit’s ensure-funded utilities
* CI/CD Environments: Use the TestNet Dispenser API with appropriate credentials

By selecting the right funding mechanism for your use case, you can streamline development and ensure your Algorand applications have the resources they need to operate effectively.

## LocalNet Funding Options

LocalNet provides pre-funded accounts for development and testing. You can use these existing accounts or create and fund new ones using various mechanisms.

### Retrieving the Default LocalNet Dispenser

This utils function retrieves the default LocalNet dispenser account, which is pre-funded and can be used to provide ALGOs to other accounts in a local development environment. The LocalNet dispenser is automatically available and is designed for testing purposes, making it easy to create and fund new accounts without external dependencies.

### Environment-Based Dispenser

The function below retrieves a dispenser account configured through environment variables. It allows developers to specify a custom funding account for different environments (e.g., development, testing, staging).
The function looks for environment variables containing the dispenser’s private key or mnemonic, making it flexible for dynamic funding configurations across various deployments. The dispenser here is managed by the developer and is not a pre-existing public dispenser.

## TestNet Funding Options

### TestNet Faucet

Algorand provides a faucet for funding TestNet accounts with test ALGO tokens for development purposes.

1. Visit and choose the network (LocalNet or TestNet), or visit the
2. Sign in with your Google account and complete the reCAPTCHA
3. Enter your Algorand TestNet address
4. Click “Dispense” to receive test ALGOs

### TestNet Dispenser API

For developers needing programmatic access to TestNet funds, AlgoKit provides utils to interact with the TestNet Dispenser API.

#### Ensuring Funds from TestNet Dispenser

The `ensureFundedFromTestNetDispenserApi` function checks whether a specified Algorand account has enough funds on TestNet. If the balance is below the required threshold, it automatically requests additional ALGOs from the TestNet Dispenser API. The dispenser client is initialized using the `ALGOKIT_DISPENSER_ACCESS_TOKEN` environment variable for authentication. This is particularly useful for CI/CD pipelines and automated tests, ensuring accounts remain funded without manual intervention.

#### Directly Funding an Account

The utils function below sends a fixed amount of ALGOs (1,000,000 microAlgos = 1 ALGO) to a specified account using the TestNet Dispenser API. Unlike the `ensureFundedFromTestNetDispenserApi` method, which checks the balance before funding, this function transfers funds immediately. It is useful when you need to top up an account with a specific amount without verifying its current balance.

### Using AlgoKit CLI

The AlgoKit CLI provides a simple command-line interface for funding accounts. This command directly funds the specified receiver address with the requested amount of ALGOs using the TestNet Dispenser.
It’s convenient for quick funding operations without writing code.

```shell
algokit dispenser fund --receiver <ADDRESS> --amount <AMOUNT>
```

## MainNet On-Ramps

For MainNet transactions, users must acquire real ALGO tokens through cryptocurrency exchanges or other on-ramp services; these are required for real-world transactions and decentralized applications. Common on-ramps include centralized exchanges like Coinbase, decentralized exchanges like Tinyman, and other DeFi protocols like Folks Finance.

## AlgoKit Utils Funding Helpers

AlgoKit provides utility functions to help ensure accounts have sufficient funds, which is particularly useful for automation and deployment scripts.

### Ensure Funded

The code below checks the balance of a specified account and transfers ALGOs from a dispenser if the balance falls below the required threshold (1 ALGO in this example). It ensures the account has enough funds before executing transactions, making it useful for automated scripts that depend on a minimum balance.

### Funding from Environment Variables

This code combines the ensure-funded mechanism with an environment-configured dispenser. It retrieves a dispenser account from environment variables and uses it to top up the target account if its balance is below 1 ALGO. This approach makes the code more flexible and portable by allowing different dispensers to be used across various environments without hardcoding account details.
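The ensure-funded pattern described above can be sketched in a few lines of plain Python. The `get_balance` and `transfer` callables here are hypothetical stand-ins for an algod client query and a funded dispenser transfer; real code would use the AlgoKit Utils ensure-funded helpers instead.

```python
# Minimal sketch of the ensure-funded pattern (stand-in callables, not
# the AlgoKit Utils API itself).
from typing import Callable

MIN_FUNDING = 1_000_000  # 1 ALGO expressed in microAlgos


def ensure_funded(
    address: str,
    get_balance: Callable[[str], int],   # hypothetical balance lookup
    transfer: Callable[[str, int], None],  # hypothetical dispenser payment
    min_balance: int = MIN_FUNDING,
) -> int:
    """Top up `address` if it is below `min_balance`.

    Returns the amount transferred in microAlgos (0 if no top-up needed).
    """
    shortfall = min_balance - get_balance(address)
    if shortfall <= 0:
        return 0
    transfer(address, shortfall)
    return shortfall


# Usage with an in-memory stub ledger:
balances = {"ADDR": 250_000}
sent = ensure_funded(
    "ADDR",
    lambda a: balances[a],
    lambda a, amt: balances.__setitem__(a, balances[a] + amt),
)
print(sent, balances["ADDR"])  # 750000 1000000
```

The design point is that funding is idempotent: running the helper again once the account meets the threshold transfers nothing, which is what makes it safe for CI/CD pipelines.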
# Keys and Signing
Algorand uses **Ed25519 elliptic-curve signatures** to ensure high-speed, secure cryptographic operations. Every account in Algorand is built upon a **public/private key pair**, which plays a crucial role in signing and verifying transactions. To simplify key management and enhance security, Algorand provides various tools and transformations that make key handling more accessible to developers and end users. This guide explores how public/private key pairs are generated and transformed into user-friendly formats like Algorand addresses, base64 private keys, and mnemonics. It also covers various methods for signing transactions, including direct key management through command-line tools like algokey, programmatic signing using AlgoKit Utils in Python and TypeScript, and wallet-based signing with Pera Wallet integration. By understanding these key management and signing methods, developers can ensure secure and efficient transactions on the Algorand network.

### Keys and Addresses

Algorand uses Ed25519 high-speed, high-security elliptic-curve signatures. The keys are produced through standard, open-source cryptographic libraries packaged with each of the SDKs. The key generation algorithm takes a random value as input and outputs two 32-byte arrays, representing a public key and its associated private key. These are also referred to as a public/private key pair. These keys perform essential cryptographic functions like signing data and verifying signatures.

Public/Private Key Generation

For reasons that include the need to make the keys human-readable and robust to human error when transferred, both the public and private keys are transformed. The output of these transformations is what most developers, and usually all end-users, see. The Algorand developer tools actively seek to mask the complexity involved in these transformations.
So unless you are a protocol-level developer modifying cryptographic-related source code, you may never actually encounter the raw public/private key pair.

#### Transformation: Public Key to Algorand Address

The public key is transformed into an Algorand address by adding a 4-byte checksum to the end of the public key and then encoding it in base32. The result is what the developer and end-user recognize as an Algorand address. The address is 58 characters long.

Public Key to Algorand Address

#### Transformation: Private Key to base64 Private Key

A base64-encoded concatenation of the private and public keys is the representation of the private key most commonly used by developers interfacing with the SDKs. It is likely not a representation familiar to end users.

Base64 Private Key

#### Transformation: Private Key to 25-Word Mnemonic

The 25-word mnemonic is the most user-friendly representation of the private key. It is generated by converting the private key bytes into 11-bit integers and then mapping those integers onto the word list, where integer *n* maps to the word in the *n*th position in the list. By itself, this creates a 24-word mnemonic. A checksum is added by taking the first two bytes of the hash of the private key, converting them to 11-bit integers, and then mapping them to the corresponding word in the word list. This word is added to the end of the 24 words to create a 25-word mnemonic. This representation is called the private key mnemonic. You may also see it referred to as a passphrase.

Private Key Mnemonic

To manage the keys of an Algorand account and use them for signing, there are several methods and tools available. Here’s an overview of key management and signing processes:

## Signing using accounts

### Using algokey

algokey is a command-line tool provided by Algorand for managing cryptographic keys. It enables users to generate, export, import, and sign transactions using private keys.
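The public-key-to-address transformation described above can be reproduced with nothing but the standard library: append the last 4 bytes of the SHA-512/256 hash of the 32-byte public key as a checksum, then base32-encode and drop the padding, yielding a 58-character address. This is a sketch of the documented encoding, not a substitute for the SDKs.

```python
# Sketch: public key -> Algorand address using only the stdlib.
import base64
import hashlib


def encode_address(public_key: bytes) -> str:
    """Encode a 32-byte Ed25519 public key as a 58-char Algorand address."""
    assert len(public_key) == 32
    # Checksum = last 4 bytes of the SHA-512/256 hash of the public key.
    checksum = hashlib.new("sha512_256", public_key).digest()[-4:]
    # Base32-encode key + checksum and strip the '=' padding.
    return base64.b32encode(public_key + checksum).decode().rstrip("=")


# The all-zero public key yields the well-known ZeroAddress:
print(encode_address(bytes(32)))
```

As a sanity check, running this on 32 zero bytes should print the ZeroAddress `AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ`.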
To sign a transaction, users need access to their private key, either in the form of a key file or a mnemonic phrase. The signed transaction can then be submitted to the Algorand network for validation and execution. This process ensures that transactions remain tamper-proof and are executed only by authorized entities. To sign a transaction using an account with algokey, use the following command:

```plaintext
algokey sign -t transaction.txn -k private_key.key -o signed_transaction.stxn
```

Algokey reference

### Using AlgoKit Utils

AlgoKit Utils simplifies the management of standalone Algorand accounts and signing in both Python and TypeScript by abstracting the complexities of the Algorand SDKs, allowing developers to generate new accounts, retrieve existing ones, and manage private keys securely. It also streamlines transaction signing by providing flexible signer management options:

#### Default signer

A default signer is used when no specific signer is provided. This helps streamline transaction signing, making it easier for developers to handle transactions without manually specifying signers each time.

#### Multiple signers

In certain use cases, multiple signers may be required to approve a transaction. This is particularly relevant in scenarios involving multi-signature accounts, where different parties must authorize transactions before they can be executed. The code below registers multiple transaction signers at once. The `setSignerFromAccount` function tracks the given account for later signing. However, if you generate accounts via the various methods on AccountManager (like random, fromMnemonic, logicsig, etc.), they are tracked automatically.

#### Get signer

Get signer retrieves the transaction signer for a given sender address, ready to sign a transaction for that sender. If no signer has been registered for that address, the default signer is used if one is registered; otherwise an error is thrown.
#### Override signer

Create an unsigned payment transaction and manually sign it. The transaction signer can be specified in the second argument to `addTransaction`.

## Signing using Logic Signatures

Logic signatures provide a programmable way to authorize transactions on the Algorand blockchain. Instead of relying solely on private-key-based signatures, LogicSigs allow transaction approvals based on predefined conditions encoded in TEAL. They allow users to delegate signature authority without exposing their private key. LogicSigs allow fine-grained control over spending by defining transaction rules, such as only allowing transfers to specific recipient addresses. Only use smart signatures when absolutely required. In most cases, it is preferable to use smart contract escrow accounts over smart signatures, as smart signatures require the logic to be supplied for every transaction. More details about logic signatures

## Signing using wallets

### Using the UseWallet Library

The UseWallet library provides an easy way to integrate multiple Algorand wallets, including Pera Wallet, without handling low-level SDK interactions. It simplifies connecting wallets, signing transactions, and sending them with minimal setup. To integrate Pera Wallet and other Algorand wallets, follow these steps:

1. Install UseWallet using the command: `npm install @txnlab/use-wallet`
2. Configure the UseWallet provider by wrapping your application in the `UseWalletProvider` to enable wallet connections.
3. The useWallet hook provides two methods for signing Algorand transactions: `signTransactions` and `transactionSigner`.

Guide to signing transactions using UseWallet

### HD wallet (coming soon)
# Multisignature Accounts
Multisignature accounts are a powerful, natively-supported security and governance feature on Algorand that require multiple parties to approve transactions. Think of a multisignature account as a secure vault with multiple keyholes, where a predetermined number of keys must be used together to open it. For example, a multisignature account might be configured so that any 2 out of 3 designated signers must approve before funds can be transferred. This creates a balance between security and operational flexibility that’s valuable in many scenarios: * **Treasury management** for organizations where multiple board members must approve expenditures * **Shared accounts** between business partners who want mutual consent for transactions * **Enhanced security** for high-value accounts by distributing signing authority across different devices or locations * **Recovery options** where backup signers can help regain access if a primary key is lost ## What is a Multisignature Account? Technically, a multisignature account on Algorand is a logical representation of an ordered set of addresses with a *threshold* and *version*. The threshold determines how many signatures are required to authorize any transaction from this account (such as 2-of-3 or 3-of-5), while the version specifies the multisignature protocol being used. Multisignature accounts can perform the same operations as standard accounts, including sending transactions and participating in consensus. The address for a multisignature account is derived from the ordered list of participant accounts, the threshold, and version values. Some important characteristics to understand: * The order of addresses matters when creating the multisignature account. Using addresses `[A, B, C]` creates a different multisignature address than `[B, A, C]`. * However, the order of signatures does not matter when signing a transaction. * Multisignature accounts cannot nest other multisignature accounts. 
In other words, a multisignature account cannot include another multisignature account as one of its participant addresses.
* You must fund the multisignature address to initialize its state on the ledger, just like with any other account.

## Benefits & Implications of Using Multisig Accounts

| **Benefits** | **Implications** |
| --- | --- |
| **Enhanced Security:** Requires multiple signatures for transactions, adding an extra layer of protection against compromise of a single key | **Added Complexity:** Requires coordination among multiple signers for every transaction |
| **Customizable Authorization:** The number of required signatures can be adjusted at creation time to fit different security models (e.g., 2-of-3, 3-of-5, 5-of-5, etc.) | **Key Management:** All signers must securely manage their private keys to maintain the security of the multisig account |
| **Distributed Key Storage:** Signing keys can be stored separately and generated through different methods (kmd, standalone accounts, or a mix) | **Transaction Size:** Multisig transactions are larger than single-signature transactions, which could result in slightly higher transaction fees |
| **Governance Mechanisms:** Enables cryptographically secure governance structures where a subset of authorized users must approve actions | **Not Always Necessary:** For simple use cases where security and governance are not critical concerns, a single-signature account may be more practical |
| **Integration with Smart Contracts:** Can be paired with Algorand Smart Contracts for complex governance models requiring specific signature subsets | **Cryptographic Best Practices:** Keys used as signers in critical multisig accounts should be used exclusively for this purpose |

## How to Generate a Multisignature Account

There are different ways to generate a multisignature account. The examples below demonstrate how to create a multisignature account that requires 2 signatures from 3 possible signers to authorize transactions:
# Overview of Accounts
An Algorand account is a fundamental entity on the Algorand blockchain, representing an individual user or entity capable of holding assets, authorizing transactions, and participating in blockchain activities. Accounts on the Algorand blockchain serve several purposes, including managing balances of Algos, interacting with smart contracts, and holding Algorand Standard Assets. An Algorand account is the foundation of user interaction on the Algorand blockchain. It starts with the creation of a cryptographic key pair:

* A private key, which must be kept secret, as it is used to sign transactions and prove ownership of the account.
* A public key, which acts as the account’s unique identity on the blockchain and is shared publicly as its address.

The public key is transformed into a user-friendly Algorand address — a 58-character string you use for transactions and other blockchain interactions. For convenience, the private key can also be represented as a 25-word mnemonic, which serves as a human-readable backup for restoring account access. Refer to the Keys and Signing guide to understand how the public key is transformed into an Algorand address. An address is just an identifier, while an account represents the full state and capabilities on the blockchain. An address is always associated with one account, but an account can have multiple addresses through rekeying.

## Account Types

Algorand accounts fall into two broad categories: Standard Accounts and Smart Contract Accounts.

## Standard Accounts

Accounts are entities on the Algorand blockchain associated with specific on-chain data, like a balance. Standard accounts are controlled by a private key, allowing users to sign transactions and interact with the blockchain. After generating a private key and corresponding address, sending Algos to the address will initialize its state on the Algorand blockchain.
### Single Signature Accounts

Single signature accounts are the most basic and widely used account type in Algorand, controlled by a single private key. Transactions from these accounts are authorized through a signature generated by the private key, which is stored in the transaction’s `sig` field as a base64-encoded string. When a transaction is signed, it forms a `SignedTransaction` object containing the transaction details and the generated signature. These accounts can be created as standalone key pairs, typically represented by a 25-word mnemonic, or managed through the Key Management Daemon, where multiple accounts can be derived from a master key.

Figure: Initializing an Account

#### Attributes

##### Minimum Balance

Every account on Algorand must have a minimum balance of 100,000 microAlgos. If a transaction is sent that would result in a balance lower than the minimum, the transaction will fail. The minimum balance increases with each asset the account holds (whether the asset was created or owned by the account) and with each application the account created or opted in to. Destroying a created asset, opting out of or closing out an owned asset, destroying a created app, or opting out of an opted-in app decreases the minimum balance accordingly. More about assets, applications, and changes to the minimum balance requirement

##### Account Status

The Algorand blockchain uses a decentralized Byzantine Agreement protocol that leverages pure proof of stake (Pure PoS). By default, Algorand accounts are set to offline, meaning they do not contribute to the consensus process. An online account participates in Algorand consensus. For an account to go online, it must generate a participation key and send a special key registration transaction. With the addition of staking rewards to the protocol as of v4.0, Algorand consensus participants can make their accounts eligible for rewards by including a 2 Algo fee when registering participation keys online. Read more about .
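The minimum balance rule above is simple arithmetic. A rough stdlib sketch for ASA holdings only (100,000 microAlgos base plus 100,000 microAlgos per asset held); applications add further amounts depending on their state schema, which is omitted here:

```python
# Rough sketch of how the minimum balance requirement (MBR) grows with
# the number of ASAs an account holds. Application-related MBR costs
# are intentionally left out of this illustration.
BASE_MBR = 100_000       # microAlgos, required of every account
PER_ASSET_MBR = 100_000  # microAlgos added per ASA held


def min_balance(num_assets_held: int) -> int:
    """Return the MBR in microAlgos for an account holding N ASAs."""
    return BASE_MBR + PER_ASSET_MBR * num_assets_held


print(min_balance(0))  # 100000
print(min_balance(3))  # 400000
```

So an account that opts in to three ASAs must keep at least 0.4 ALGO on hand, and opting out of an asset lowers the requirement again.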
#### Other Attributes

Additional metadata and properties associated with accounts include:

##### **Asset & Application Management**
* `assets`: List of Algorand Standard Assets (ASAs) held by the account.
* `createdAssets`: Assets created by this account.
* `totalAssetsOptedIn`: Number of opted-in ASAs.
* `totalCreatedAssets`: Number of ASAs created.
* `createdApps`: Applications (smart contracts) created by this account.
* `totalAppsOptedIn`: Number of opted-in applications.
* `totalCreatedApps`: Number of applications created.

##### **Account Status & Participation**
* `status`: Current status (`Offline`, `Online`, etc.).
* `deleted`: Whether the account is closed.
* `closedAtRound`: Round at which the account was closed.
* `participation`: Staking participation data (for consensus nodes).
* `incentiveEligible`: Whether the account is eligible for incentives.

##### **Balances & Rewards**
* `minBalance`: Minimum required balance (microAlgos).
* `pendingRewards`: Pending staking rewards.
* `rewards`: Total rewards earned.
* `rewardBase`: Base value for reward calculation.

##### **Metadata**
* `round`: Last seen round.
* `createdAtRound`: Round at which the account was created.
* `lastHeartbeat`: Last heartbeat round (for validator nodes).
* `lastProposed`: Last round at which the account proposed a block.
* `sigType`: Signature type used (`sig`, `msig`, `lsig`).

##### **Box Storage**
* `totalBoxBytes`: Total bytes used in box storage.
* `totalBoxes`: Number of boxes created.

### Multisignature Accounts

Multisignature accounts in Algorand are structured as an ordered set of addresses with a defined threshold and version, allowing them to perform transactions and participate in consensus like standard accounts. Each multisig account requires a specified number of signatures to authorize a transaction, with the threshold determining how many signatures are needed. Multisignature accounts cannot be nested within other multisig accounts.
More details about multisignature accounts

## Smart Contract Accounts

Smart contract accounts do not have private keys; instead, they are controlled by on-chain logic. They can hold assets and execute transactions based on predefined conditions.

### Smart Signature Accounts (Contract Accounts)

Smart signature accounts are Algorand accounts controlled by TEAL logic instead of private keys. Each unique compiled smart signature program corresponds to a single Algorand address, enabling it to function as an independent account when funded. These accounts authorize transactions based on predefined TEAL logic rather than user signatures, allowing them to hold Algos and Algorand Standard Assets. Since they are stateless, they do not maintain on-chain data between transactions, making them ideal for lightweight, logic-based transaction approvals. However, it is recommended to use smart signatures only when absolutely required, as smart signatures require the logic to be supplied for every transaction.

### Application Accounts (Smart Contracts)

Application accounts are automatically created for every smart contract (application) deployed on the Algorand blockchain. Each application has a unique account, with its address derived from the application ID. These accounts can hold Algos and Algorand Standard Assets (ASAs) and can also send transactions (inner transactions) as part of smart contract logic.

## Special Accounts

Two accounts carry special meaning on the Algorand blockchain: the **FeeSink** and the **RewardsPool**. The FeeSink is where all transaction fees are sent, and it can only spend to the RewardsPool account. The RewardsPool was first used to distribute rewards to balance-holding accounts; currently, this account is not used. In addition, the ZeroAddress `AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ` is an address that represents a blank byte array. It is used when you leave an address field blank in a transaction.
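The application-ID-to-address derivation mentioned in the Application Accounts section above can be sketched with the standard library: the address encodes the SHA-512/256 hash of the bytes `"appID"` followed by the 8-byte big-endian app ID, with the usual 4-byte checksum and base32 encoding (this mirrors what `algosdk.logic.get_application_address` does).

```python
# Sketch: deriving an application account's address from its app ID,
# using only the stdlib.
import base64
import hashlib

APP_ID_PREFIX = b"appID"


def get_application_address(app_id: int) -> str:
    """Return the 58-char address of the application's escrow account."""
    # Hash "appID" || big-endian 8-byte app ID to get a 32-byte "public key".
    digest = hashlib.new(
        "sha512_256", APP_ID_PREFIX + app_id.to_bytes(8, "big")
    ).digest()
    # Encode like any Algorand address: append 4-byte checksum, base32.
    checksum = hashlib.new("sha512_256", digest).digest()[-4:]
    return base64.b32encode(digest + checksum).decode().rstrip("=")


print(len(get_application_address(1)))  # 58
```

Because the derivation is deterministic, anyone can compute an app's escrow address from its ID alone, without querying the chain.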
Check the FeeSink and RewardsPool addresses in the Networks reference section to learn more.

### Wallets

In the context of Algorand developer tools, wallets refer to wallets generated and managed by the Key Management Daemon (kmd) process. A wallet stores a collection of keys. kmd stores collections of wallets and allows users to perform operations using the keys stored within these wallets. Every wallet is associated with a master key, represented as a 25-word mnemonic, from which all accounts in that wallet are derived. This means the wallet owner only needs to remember a single passphrase for all of their accounts. Wallets are stored encrypted on disk.

### HD Wallets

Hierarchical Deterministic (HD) wallets, following the ARC-0052 standard, provide an advanced method for key management. HD wallets derive keys deterministically from a single master seed, ensuring consistent addresses across different implementations. Using the Ed25519 algorithm for key generation and signing, they support BIP-44 derivation paths. This allows private key and mnemonic-based account generation, enabling deterministic recovery, automated address creation, and compatibility with Algorand’s address formats.

## Wallets

In Algorand, a wallet is a system for generating, storing, and managing private keys that control accounts.

* **Key Management Daemon (KMD) Wallets:**
  * Managed by Algorand’s Key Management Daemon (kmd), these wallets store multiple accounts and allow signing transactions securely. Each wallet is protected by a 25-word mnemonic, from which all accounts are derived. Wallets are encrypted and stored on disk. Create accounts using kmd
* **Popular Mobile Wallets:**
  * **Pera:** Non-custodial, user-friendly wallet with a built-in dApp browser.
  * **Defly:** Designed for DeFi users, offering DEX support, insights, and multi-sig security.
  * **HesabPay:** Global mobile payment app for top-ups, cash-outs, bill payments, and transfers.
  * **Exodus:** iOS and Android mobile wallet solution.
* **Popular Web Wallets:**
  * **Lute Wallet:** Web-based Algorand wallet.
  * **Exodus:** Chrome/browser-based extension wallet.
* **Hardware Wallet:**
  * **Ledger:** Secure offline storage for Algo and other crypto assets.
# Rekeying accounts
Rekeying is a powerful protocol feature that enables an Algorand account holder to maintain a static public address while dynamically rotating the authoritative private spending key(s). This is accomplished by issuing a transaction with the `rekey-to` field set to the authorized address field within the account object. Future transaction authorization using the account’s public address must be provided by the spending key(s) associated with the authorized address, which may be a single key address, multisignature address, or logic signature program address.

Rekeying an account only affects the authorizing address for that account. An account is distinct from an address, so several essential points may not be obvious:

* If an account is closed (balance reduced to 0), the rekey setting is lost.
* Rekeys are not recursively resolved. If A is rekeyed to B and B is rekeyed to C, B will authorize A’s transactions, not C.
* Rekeying a member of a multisignature account does not affect the multisignature authorization, since the multisignature is composed of addresses, not accounts. If necessary, the multisignature account itself would need to be rekeyed.

The result of a confirmed `rekey-to` transaction is that the `auth-addr` field of the account object is defined, modified, or removed. Defining or modifying means only the corresponding authorized address’s private spending key(s) may authorize future transactions for this public address. Removing the `auth-addr` field is an explicit assignment of the authorized address back to the `addr` field of the account object (observed implicitly because the field is not displayed). To provide maximum flexibility in key management options, the `auth-addr` may be specified within a `rekey-to` transaction as a distinct foreign address representing a single key address, multisignature address, or logic signature program address.
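The bullet points above, especially the non-recursive resolution rule, can be sketched with a toy account model. The dict-based account records and helper functions here are hypothetical illustrations, not the real ledger representation:

```python
# Toy model of rekeying semantics (not the real protocol implementation):
# each account record keeps an optional auth_addr; validation checks the
# signer against auth_addr when set, else against the address itself.
accounts = {
    "A": {"auth_addr": None},
    "B": {"auth_addr": None},
    "C": {"auth_addr": None},
}

def rekey(accounts, addr, new_auth):
    # Rekeying back to the account's own address clears auth_addr,
    # mirroring how removing auth-addr restores the original key.
    accounts[addr]["auth_addr"] = None if new_auth == addr else new_auth

def required_signer(accounts, addr):
    # Rekeys are NOT resolved recursively: if A -> B and B -> C,
    # A's transactions are still authorized by B's key, not C's.
    auth = accounts[addr]["auth_addr"]
    return auth if auth is not None else addr

rekey(accounts, "A", "B")
rekey(accounts, "B", "C")
print(required_signer(accounts, "A"))  # B — not C, despite B itself being rekeyed
```

Each lookup resolves exactly one level, which is why chained rekeys never cascade.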
The protocol does not validate control of the required spending key(s) associated with the authorized address defined by the `--rekey-to` parameter when the `rekey-to` transaction is sent. This is by design and affords additional privacy features to the new authorized address. It is incumbent upon the user to ensure proper key management practices and `--rekey-to` assignments.

Caution: Using the `--close-to` parameter on any transaction from a rekeyed account will remove the `auth-addr` field, thus reverting signing authority to the original address. The `--close-to` parameter should be used with caution by keyholder(s) of `auth-addr`, as its effects remove their authority to access this account thereafter.

## Authorized Addresses

The balance record of every account includes the `auth-addr` field, which, when populated, defines the required authorized address to be evaluated during transaction validation. Initially, the `auth-addr` field is implicitly set to the account’s `addr` field, and the only valid private spending key is the one created during account generation. To conserve resources, the `auth-addr` field is only stored and displayed after the network confirms an authorized `rekey-to` transaction.

A `standard` account uses its private spending key to authorize from its public address. A `rekeyed` account defines an authorized address that references a distinct `foreign` address and thus requires the private spending key(s) thereof to authorize future transactions.

Let’s consider a scenario where a single-key account with address `A` rekeys to a different single-key account with address `B`. This requires two single-key accounts at time t0. The result from time t1 is that transactions for address `A` must be authorized by address `B`.

*Figure: Rekeying to a Single Address*

Generate two accounts and fund their addresses using the faucet before proceeding.
This example utilizes the following public addresses:

```shell
ADDR_A="UGAGADYHIUGFGRBEPHXRFI6Z73HUFZ25QP32P5FV4H6B3H3DS2JII5ZF3Q"
ADDR_B="LOWE5DE25WOXZB643JSNWPE6MGIJNBLRPU2RBAVUNI4ZU22E3N7PHYYHSY"
```

View the initial authorized address for Account `A` using `goal`:

```shell
goal account dump --address $ADDR_A
```

Response:

```shell
{
  "addr": "UGAGADYHIUGFGRBEPHXRFI6Z73HUFZ25QP32P5FV4H6B3H3DS2JII5ZF3Q",
  "algo": 100000,
  [...]
}
```

The response includes the `addr` field, which is the public address. Only the spending key associated with this address may authorize transactions for this account.

Now let’s consider another scenario wherein a single-key account with public address `A` rekeys to a multisignature address `BC_T1`. This scenario reuses both Accounts `A` and `B`, adds a third Account `C`, and creates a multisignature Account `BC_T1` comprised of addresses `B` and `C` with a threshold of 1. The result will be that the private spending key for `$ADDR_B` or `$ADDR_C` may authorize transactions from `$ADDR_A`.

Create the new multisignature account using both `$ADDR_B` and the new `$ADDR_C` with a threshold of 1 (so either `B` or `C` may authorize). Set the resulting account address to the `$ADDR_BC_T1` environment variable for use below.

## Rekey-to Transaction

A `rekey-to` transaction allows an account holder to change the spending authority of their account without changing the account’s public address. This means the original account can transfer its authorization to sign and approve transactions to a new key without creating a new account or changing the account’s address. The existing authorized address must provide authorization for this transaction. Account `A` intends to rekey its authorized address to `$ADDR_B`, which is the public address of Account `B`.
This can be accomplished in a single `goal` command:

```shell
goal clerk send --from $ADDR_A --to $ADDR_A --amount 0 --rekey-to $ADDR_B
```

Now view account `A` again:

```shell
goal account dump --address $ADDR_A
```

Response:

```shell
{
  "addr": "UGAGADYHIUGFGRBEPHXRFI6Z73HUFZ25QP32P5FV4H6B3H3DS2JII5ZF3Q",
  "algo": 199000,
  [...]
  "spend": "LOWE5DE25WOXZB643JSNWPE6MGIJNBLRPU2RBAVUNI4ZU22E3N7PHYYHSY"
}
```

The populated `spend` field instructs the validation protocol to only approve transactions for this account object when authorized by that address’s spending key(s). Validators will ignore all other attempted authorizations, including those from the public address defined in the `addr` field.

The following transaction will fail because, by default, `goal` attempts to add the authorization using the `--from` parameter. However, the protocol will reject this because it is expecting the authorization from `$ADDR_B` due to the confirmed rekeying transaction above.

```shell
goal clerk send --from $ADDR_A --to $ADDR_B --amount 100000
```

The rekey-to transaction workflow is as follows:

* Construct a transaction that specifies an address for the rekey-to parameter
* Add the required signature(s) from the current authorized address
* Send and confirm the transaction on the network

### Construct an Unsigned Transaction

We will construct an unsigned transaction using `goal` with the `--out` flag to write the unsigned transaction to a file:

```shell
goal clerk send --from $ADDR_A --to $ADDR_B --amount 100000 --out send-single.txn
```

For the multisignature scenario, construct a rekey transaction that requires authorization from `$ADDR_B`:

```shell
goal clerk send --from $ADDR_A --to $ADDR_A --amount 0 --rekey-to $ADDR_BC_T1 --out rekey-multisig.txn
```

### Add Authorized Signature(s)

Next, locate the wallet containing the private spending key for Account `B`.
The `goal clerk sign` command provides the `--signer` flag, which specifies the required authorized address `$ADDR_B`. The `--infile` flag reads in the unsigned transaction file from above and the `--outfile` flag writes the signed transaction to a separate file.

```shell
goal clerk sign --signer $ADDR_B --infile send-single.txn --outfile send-single.stxn
```

Use the following command to sign the rekey-to-multisignature transaction:

```shell
goal clerk sign --signer $ADDR_B --infile rekey-multisig.txn --outfile rekey-multisig.stxn
```

### Send and Confirm

Send the signed transaction file using the following command:

```shell
goal clerk rawsend --filename send-single.stxn
```

This will succeed, sending the 100000 microAlgos from `$ADDR_A` to `$ADDR_B` using the private spending key of Account `B`.

Next, send and confirm the rekey to the multisignature account:

```shell
goal clerk rawsend --filename rekey-multisig.stxn
goal account dump --address $ADDR_A
```

The rekey transaction will confirm, resulting in an update to the `spend` field within the account object:

```shell
{
  "addr": "UGAGADYHIUGFGRBEPHXRFI6Z73HUFZ25QP32P5FV4H6B3H3DS2JII5ZF3Q",
  "algo": 199000,
  [...]
  "spend": "NEWMULTISIGADDRESSBCT1..."
}
```

Now send with authorization from `BC_T1` using the following commands:

```shell
goal clerk send --from $ADDR_A --to $ADDR_B --amount 100000 --msig-params="1 $ADDR_B $ADDR_C" --out send-multisig-bct1.txn
goal clerk multisig sign --tx send-multisig-bct1.txn --address $ADDR_C
goal clerk rawsend --filename send-multisig-bct1.txn
```

This transaction will succeed because the private spending key for `$ADDR_C` provided the authorization, meeting the threshold requirement for the multisignature account.

## Utils Example

Rekeying can also be achieved using AlgoKit Utils. In the following example, `account_a` is rekeyed to `account_b`.
The code then illustrates that signing a transaction from `account_a` will fail if signed with `account_a`’s private key and succeed if signed with `account_b`’s private key.
# Asset Metadata
* [ ] Working with IPFS for asset data?
* [ ] Standards - cover main ARCs that people should know about for ASAs
# Asset Operations
Algorand Standard Assets (ASAs) enable you to tokenize any type of asset on the Algorand blockchain. This guide covers the essential operations for managing these assets: creation, modification, transfer, and deletion. You’ll also learn about opt-in mechanics, asset freezing, and clawback functionality. Each operation requires specific permissions and can be performed using AlgoKit Utils or the `goal` CLI.

## Creating Assets

Creating an ASA lets you mint digital tokens on the Algorand blockchain. You can set the total supply, decimals, unit name, asset name, and add metadata through an optional URL. The asset requires special control addresses: a manager to modify configuration, a reserve for custody, a freeze address to control transferability, and a clawback address to revoke tokens. Every new asset receives a unique identifier on the blockchain.

**Transaction Authorizer**: Any account with sufficient Algo balance

Create assets using either AlgoKit Utils or `goal`. When using AlgoKit Utils, supply all creation parameters. With `goal`, the various addresses associated with the asset must be managed after executing the asset creation. See Modifying an Asset in the next section for more details on changing addresses for the asset.

Learn about the Algorand Request for Comments (ARC) standards that help your assets work with existing community tools. Learn about the structure and components of an asset creation transaction.

## Updating Assets

After creation, an ASA’s configuration can be modified, but only certain parameters are mutable. The manager address can update the asset’s control addresses: manager, reserve, freeze, and clawback. All other parameters, like total supply and decimals, are immutable. Setting any control address to empty permanently removes that capability from the asset.

**Authorized by**: To update an asset’s configuration, the current manager account must sign the transaction.
Each control address can be modified independently, and changes take effect immediately. Use caution when clearing addresses by setting them to empty strings, as this permanently removes the associated capability from the asset with no way to restore it. Learn about the structure and components of an asset reconfiguration transaction.

## Deleting Assets

Destroying an ASA permanently removes it from the Algorand blockchain. This operation requires specific conditions: the asset manager must initiate the deletion, and all units of the asset must be held by the creator account. Once deleted, the asset ID becomes invalid and the creator’s minimum balance requirement for the asset is removed.

**Authorized by**: Created assets can be destroyed only by the asset manager account. All of the assets must be owned by the creator of the asset before the asset can be deleted.

Learn about the structure and components of an asset destroy transaction.

## Opting In and Out of Assets

Before an account can receive an ASA, it must explicitly opt in to hold that asset. This security feature ensures accounts only hold assets they choose to accept. Opting in requires a minimum balance increase of 0.1 Algo per asset, while opting out releases this requirement. Both operations must be authorized by the account performing the action. The asset management functions include opting in and out of assets, which are fundamental to asset interaction in a blockchain environment.

### optIn

**Authorized by**: The account opting in

An account must opt in to an asset before it can receive or hold it. Opting in increases the account’s Minimum Balance Requirement by 0.1 Algo (100,000 microAlgo) for the asset; the account can opt out later to recover it. When opting out you generally want to be careful to ensure you have a zero balance, otherwise you will forfeit the balance you do have.
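That zero-balance safety check can be sketched as a simple guard. This is a conceptual model only; the `opt_out` function, its parameters, and the holdings dict are hypothetical, not the actual AlgoKit Utils API:

```python
# Illustrative sketch of a zero-balance guard before an asset opt-out.
MIN_BALANCE_PER_ASSET = 100_000  # microAlgo locked per opted-in asset

def opt_out(holdings, asset_id, ensure_zero_balance=True):
    """Remove an asset holding, refusing by default if units remain,
    since any remaining balance would be forfeited on opt-out."""
    balance = holdings.get(asset_id)
    if balance is None:
        raise ValueError("account is not opted in to this asset")
    if ensure_zero_balance and balance != 0:
        raise ValueError("opt-out would forfeit a non-zero balance")
    del holdings[asset_id]
    return MIN_BALANCE_PER_ASSET  # minimum balance requirement released

holdings = {42: 0, 99: 5}
released = opt_out(holdings, 42)  # fine: zero balance, releases 100,000 microAlgo
print(released)                   # 100000
```

Disabling the guard (`ensure_zero_balance=False`) mirrors turning off the library's check when you are confident in what you are doing.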
By default, AlgoKit Utils protects you from making this mistake by checking that you have a zero balance before issuing the opt-out transaction. You can turn this check off if you want to avoid the extra calls to Algorand and are confident in what you are doing. AlgoKit Utils gives you functions that allow you to do opt-ins in bulk or as a single operation. The bulk operations give you less control over the sending semantics, as they automatically send the transactions to Algorand in the most efficient way using transaction groups.

An opt-in transaction is simply an asset transfer with an amount of 0, both to and from the account opting in. The following code illustrates this transaction.

### `assetBulkOptIn`

The `assetBulkOptIn` function facilitates the opt-in process for an account to multiple assets, allowing the account to receive and hold those assets.

### optOut

**Authorized by**: The account opting out

An account can opt out of an asset at any time. This means that the account will no longer hold the asset, and the account will no longer be able to receive the asset. The account also recovers the 0.1 Algo Minimum Balance Requirement for the asset.

### `assetBulkOptOut`

The `assetBulkOptOut` function manages the opt-out process for a number of assets, permitting the account to discontinue holding a group of assets. Learn about the structure and components of an asset opt-in transaction.

## Transferring Assets

Asset transfers are a fundamental operation in the Algorand ecosystem, enabling the movement of ASAs between accounts. These transactions form the backbone of token economics, allowing for trading, distribution, and general circulation of assets on the blockchain. Each transfer must respect the opt-in status of the receiving account and any freeze constraints that may be in place.

**Authorized by**: The account that holds the asset to be transferred.

Assets can be transferred between accounts that have opted in to receiving the asset.
These are analogous to standard payment transactions but for Algorand Standard Assets. Learn about the structure and components of an asset transfer transaction.

## Clawback Assets

The clawback feature provides a mechanism for asset issuers to maintain control over their tokens after distribution. This powerful capability enables compliance with regulatory requirements, enforcement of trading restrictions, or recovery of assets in case of compromised accounts. When configured, the designated clawback address has the authority to revoke assets from any holder’s account and redirect them to another address.

**Authorized by**: The clawback address. Revoking an asset from an account requires specifying an asset sender (the revoke target account) and an asset receiver (the account to transfer the funds back to). The code below illustrates the clawback transaction. Learn about the structure and components of an asset clawback transaction.

## Freezing Assets

The freeze capability allows asset issuers to temporarily suspend the transfer of their assets for specific accounts. This feature is particularly useful for assets that require periodic compliance checks, need to enforce trading restrictions, or must respond to security incidents. Once an account is frozen, it cannot transfer the asset until the freeze is lifted by the designated freeze address.

**Authorized by**: The freeze address. Freezing or unfreezing an asset for an account requires a transaction that is signed by the freeze account. The code below illustrates the freeze transaction. Learn about the structure and components of an asset freeze transaction.
# Known assets
Retrieve an asset’s configuration information from the network using AlgoKit Utils or `goal`. Additional details are also added to the accounts that own the specific asset and can be listed with standard account information calls.

## TODO Notes

* [ ] Official stablecoins
* [ ] RWA
* [ ] Check marketing materials
* [ ] Tooling for assets
  * Instructions for using Lora, links to community tools (wen.tools, ASAStats, etc.)
# Algorand Standard Assets (ASAs)
The Algorand protocol supports the creation of on-chain assets that benefit from the same security, compatibility, speed, and ease of use as the Algo. The official name for assets on Algorand is **Algorand Standard Assets (ASA)**. With Algorand Standard Assets you can represent stablecoins, loyalty points, system credits, and in-game points, among many other digital assets. You can also represent single, unique assets like a deed for a house, collectable items, and unique parts on a supply chain.

# Assets Overview

There are several things to be aware of before getting started with assets:

* For every asset an account creates or owns, its minimum balance is increased by 0.1 Algo (100,000 microAlgo).
* This minimum balance requirement will be placed on the original creator as long as the asset has not been destroyed. Transferring the asset does not alleviate the creator’s minimum balance requirement.
* Before a new asset can be transferred to a specific account, the receiver must opt in to receive the asset.
* If any transaction is issued that would violate the minimum balance requirements, the transaction will fail.

## Asset Parameters

The type of asset that is created will depend on the parameters that are passed during asset creation and sometimes during asset re-configuration. View the complete list of parameters used in asset creation and configuration.

### Immutable Asset Parameters

These eight parameters can *only* be specified when an asset is created. When creating an Algorand Standard Asset, the following parameters define its fundamental characteristics.
Once set, these values cannot be modified for the lifetime of the asset:

| **Parameter** | Required              | Description |
| ------------- | --------------------- | ----------- |
| Creator       | *YES*                 | The address of the account that created the asset. |
| AssetName     | *No, but recommended* | The name of the asset. |
| UnitName      | *No, but recommended* | The name of a single unit of the asset. |
| Total         | *YES*                 | The total number of base units of the asset. |
| Decimals      | *YES*                 | The number of digits after the decimal place when displaying the asset. |
| DefaultFrozen | *YES*                 | Whether holdings of the asset are frozen by default. |
| URL           | *No*                  | A URL where more information about the asset can be retrieved. |
| MetaDataHash  | *No*                  | A commitment to some unchanging asset metadata. |

### Mutable Asset Parameters

There are four parameters that correspond to addresses that can authorize specific functionality for an asset. These addresses must be specified during asset creation. If a manager address is specified, that manager can later modify these addresses. However, if any of these addresses, including the manager address, are set to an empty string, that setting becomes irrevocable and can never be modified. Here are the four address types:

The manager account is the only account that can authorize transactions to re-configure or destroy an asset.

Specifying a reserve account signifies that non-minted assets will reside in that account instead of the default creator account. Assets transferred from this account are “minted” units of the asset. If you specify a new reserve address, you must make sure the new account has opted into the asset and then issue a transaction to transfer the remaining assets to the new reserve.

The freeze account is allowed to freeze or unfreeze the asset holdings for a specific account. When an account is frozen it cannot send or receive the frozen asset. In traditional finance, freezing assets may be performed to restrict liquidation of company stock or to investigate suspected criminal activity. If the `DefaultFrozen` state is set to `true`, you can use the unfreeze action to authorize accounts to trade the asset, for example after completing KYC/AML checks.

The clawback address represents an account that is allowed to transfer assets from and to any asset holder, provided that they have opted in.
Use this if you need the option to revoke assets from an account when they breach certain contractual obligations tied to holding the asset. In traditional finance, this sort of transaction is referred to as a clawback. Setting any of these four addresses to an empty string `""` will permanently clear that address and disable its associated feature. For example, setting the freeze address to an empty string will disable the ability to freeze the asset.
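The reconfiguration rules above can be sketched as a toy model: only the current manager may change the four control addresses, and an address cleared to the empty string is gone for good. The `reconfigure` helper and the dict-based asset record are illustrative assumptions, not protocol or SDK code:

```python
# Toy model of ASA reconfiguration rules (conceptual sketch only).
def reconfigure(asset, sender, **changes):
    if sender != asset["manager"]:
        raise PermissionError("only the current manager can reconfigure the asset")
    for role, new_addr in changes.items():
        if asset[role] == "":
            # Once cleared, a control address can never be set again.
            raise ValueError(f"{role} was cleared and can never be set again")
        asset[role] = new_addr
    return asset

asset = {"manager": "MGR", "reserve": "RSV", "freeze": "FRZ", "clawback": "CLW"}
reconfigure(asset, "MGR", freeze="")  # permanently disables the freeze capability
```

After the call above, any further attempt to set the freeze address, even by the manager, raises, mirroring the irrevocability of clearing a control address on-chain.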
# Networks
> Information about Algorand's public networks
Algorand has three public networks: MainNet, TestNet, and BetaNet. This section provides details about each of these networks that will help you validate the integrity of your connection to them. Each network reference contains the following information:

| Version             | The latest protocol software version. Should match `goal -v` or the GET /versions build version. |
| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| Release Version     | A link to the official release notes where you can view all the latest changes. |
| Genesis ID          | A human-readable identifier for the network. This should not be used as a unique identifier. |
| Genesis Hash        | The unique identifier for the network, present in every transaction. Validate that your transactions match the network you plan to submit them to. |
| FeeSink Address     | Where all fees from transactions are sent. The FeeSink can only spend to the RewardsPool account. |
| RewardsPool Address | Originally used to distribute rewards to balance-holding accounts. Currently this account is not used. |
| Faucet | Link to a faucet (TestNet and BetaNet only) |

## Network Details

### MainNet

| Version             | 3.27.0-stable                                              |
| ------------------- | ---------------------------------------------------------- |
| Release Version     |                                                            |
| Genesis ID          | mainnet-v1.0                                               |
| Genesis Hash        | wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=               |
| FeeSink Address     | Y76M3MSY6DKBRHBL7C3NNDXGS5IIMQVQVUAB6MP4XEMMGVF2QWNPL226CA |
| RewardsPool Address | 737777777777777777777777777777777777777777777777777UFEJ2CI |

### TestNet

| Version             | 3.27.0-stable                                              |
| ------------------- | ---------------------------------------------------------- |
| Release Version     |                                                            |
| Genesis ID          | testnet-v1.0                                               |
| Genesis Hash        | SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=               |
| FeeSink Address     | A7NMWS3NT3IUDMLVO26ULGXGIIOUQ3ND2TXSER6EBGRZNOBOUIQXHIBGDE |
| RewardsPool Address | 7777777777777777777777777777777777777777777777777774MSJUVU |
| Faucet              |                                                            |

### BetaNet

| Version             | v4.0.1-beta                                                |
| ------------------- | ---------------------------------------------------------- |
| Release Version     |                                                            |
| Genesis ID          | betanet-v1.0                                               |
| Genesis Hash        | mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=               |
| FeeSink Address     | A7NMWS3NT3IUDMLVO26ULGXGIIOUQ3ND2TXSER6EBGRZNOBOUIQXHIBGDE |
| RewardsPool Address | 7777777777777777777777777777777777777777777777777774MSJUVU |
| Faucet              |                                                            |
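Since the genesis hash, not the genesis ID, is the authoritative network identifier, a client can sanity-check that a transaction targets the intended network before submitting it. The sketch below uses the published hashes from the tables above; the `check_network` helper is illustrative, not an SDK function:

```python
# Published genesis hashes for Algorand's public networks (from the tables above).
GENESIS_HASHES = {
    "mainnet-v1.0": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=",
    "testnet-v1.0": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=",
    "betanet-v1.0": "mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=",
}

def check_network(txn_genesis_hash, expected_network):
    # The genesis ID is human-readable but not unique; the genesis hash
    # is the unique identifier carried in every transaction.
    return GENESIS_HASHES[expected_network] == txn_genesis_hash

print(check_network("SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "testnet-v1.0"))  # True
```

A mismatch here means the transaction was built against a different network than the one you are about to submit it to.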
# Consensus Overview
The Algorand blockchain uses a decentralized Byzantine Agreement protocol that leverages pure proof of stake (Pure PoS). This means that it can tolerate malicious users and achieve consensus without a central authority as long as a supermajority of the stake is in non-malicious hands. This protocol is very fast and requires minimal computational power per node, allowing it to finalize transactions efficiently. Before discussing the protocol, we discuss two functional concepts that Algorand uses. The following is a simplified explanation of the protocol that covers the ideal conditions.

## Verifiable Random Function

Algorand has released an open-source implementation of a Verifiable Random Function (VRF). The VRF takes a secret key and a value and produces a pseudorandom output, with a proof that anyone can use to verify the result. The VRF functions similarly to a lottery and is used to choose leaders to propose a block and committee members to vote on a block. This VRF output, when executed for an account, is used to emulate a lottery call for every Algo in a user’s account. The more Algo in an account, the greater chance the account has of being selected — it’s as if every Algo in an account participates in its own lottery. This method ensures that a user does not gain any advantage by creating multiple accounts.

## Participation Keys

A user account must be online to participate in the consensus protocol. To reduce exposure, online users do not use their spending keys (i.e., the keys they use to sign transactions) for consensus. Instead, a user generates and registers a participation key for a certain number of rounds. It also generates a collection of ephemeral keys, one for each round, signs these keys with the participation key, and then deletes the participation key. Each ephemeral key is used to sign messages for the corresponding round, and is deleted after the round is over.
Using participation keys ensures that a user’s tokens are secure even if their participating node is compromised. Deleting the participation and ephemeral keys after they are used ensures that the blockchain is forward-secure and cannot be compromised by attacks on old blocks using old keys.

## State Proof Keys

As of go-algorand 3.4.2 (released March 2022), users also generate a state proof key, with associated ephemeral keys, alongside their participation keys. State proof keys are used to generate post-quantum secure state proofs that attest to the state of the blockchain at different points in time. These are useful for applications that want a portable, lightweight way to cryptographically verify Algorand state without running a full validator node.

## The Algorand Consensus Protocol

Consensus refers to the way blocks are selected and written to the blockchain. Algorand uses the VRF described above to select leaders to propose blocks for a given round. When a block is proposed to the blockchain, a committee of voters is selected to vote on the block proposal. If a supermajority of the votes are from honest participants, the block can be certified. What makes this algorithm Pure Proof of Stake is that users are chosen for committees based on the number of Algo in their accounts. Committees are made up of pseudorandomly selected accounts with voting power dependent on their online stake. It is as if every token gets an execution of the VRF; users with more tokens are likely to be selected more often. For committee membership this means higher-stake accounts will most likely have more votes than a selected account with fewer tokens. Using randomly selected committees allows the protocol to still have good performance while allowing anyone in the network to participate.

Consensus requires three steps to propose, confirm, and write the block to the blockchain: 1) propose, 2) soft vote, and 3) certify vote.
Each step is described below, assuming the ideal case where there are no malicious users and the network is not partitioned (i.e., none of the network is down due to technical issues or from DDoS attacks). Note that all messages are cryptographically signed with the user’s participation key and committee membership is verified using the VRF in these steps.

### Block Proposal

In the block proposal phase, accounts are selected to propose new blocks to the network. This phase starts with every node in the network looping through each online account for which it has valid participation keys, running Algorand’s VRF to determine if the account is selected to propose the block. The VRF acts similarly to a weighted lottery where the number of Algo that the account has participating online determines the account’s chance of being selected. Once an account is selected by the VRF, the node propagates the proposed block along with the VRF output, which proves that the account is a valid proposer. We then move from the propose step to the soft vote step.

*Figure: Block Proposal*

### Soft Vote

The purpose of this phase is to filter the number of proposals down to one, guaranteeing that only one block gets certified. Each node in the network will receive many proposal messages from other nodes. Nodes will verify the signature of the message and then validate the selection using the VRF proof. Next, the node will compare the hash from each validated winner’s VRF proof to determine which is the lowest and will only propagate the block proposal with the lowest VRF hash. This process continues for a fixed amount of time to allow votes to be propagated across the network.

*Figure: Soft Vote (Part 1)*

Each node will then run the VRF for every participating account it manages to see if it has been chosen to participate in the soft vote committee. If any account is chosen, it will have a weighted vote based on the number of Algo the account has, and these votes will be propagated to the network.
These votes will be for the lowest VRF block proposal calculated at the timeout and will be sent out to the other nodes along with the VRF Proof.  Soft Vote (Part 2) A new committee is selected for every step in the process and each step has a different committee size. This committee size is quantified in Algo. A quorum of votes is needed to move to the next step and must be a certain percentage of the expected committee size. These votes will be received from other nodes on the network and each node will validate the committee membership VRF proof before adding to the vote tally. Once a quorum is reached for the soft vote the process moves to the certify vote step. ### Certify Vote A new committee checks the block proposal that was voted on in the soft vote stage for overspending, double-spending, or any other problems. If valid, the new committee votes again to certify the block. This is done in a similar manner as the soft vote where each node iterates through its managed accounts to select a committee and to send votes. These votes are collected and validated by each node until a quorum is reached, triggering an end to the round and prompting the node to create a certificate for the block and write it to the ledger. At that point, a new round is initiated and the process starts over.  Certify Vote If a quorum is not reached in a certifying committee vote by a certain timeout then the network will enter recovery mode.
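As a rough illustration of the stake-weighted selection described above, the sketch below uses an ordinary hash as a stand-in for the real VRF (which additionally produces a verifiable proof), and a simplified weighting rule in place of the protocol's actual sortition. All names and the weighting scheme are illustrative only, not the protocol's implementation.

```python
import hashlib

def mock_vrf(seed: bytes, address: str) -> float:
    # Stand-in for the real VRF: a deterministic hash mapped into [0, 1).
    # The real VRF also yields a proof that other nodes can verify.
    digest = hashlib.sha256(seed + address.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def select_proposer(seed: bytes, online_stake: dict) -> str:
    # "Every Algo gets a lottery ticket": scaling each account's hash down
    # by its stake makes high-stake accounts far more likely to hold the
    # minimum, crudely mimicking weighted sortition. The winning proposal
    # is the one with the lowest value, as in the soft-vote filtering step.
    return min(online_stake, key=lambda a: mock_vrf(seed, a) / online_stake[a])
```

Running this over many block seeds shows the high-stake account winning the vast majority of rounds, while low-stake accounts still win occasionally, which is the property the text describes.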
# Participation Key Management
Algorand provides a set of keys for voting and proposing blocks separate from account spending keys. These are called **participation keys** (sometimes referred to as **partkeys**). At a high level, participation keys are a specialized set of keys located on a single node. Once this participation key set is associated with an account, the account has the ability to participate in consensus. Read about how participation keys function in the Algorand Consensus Protocol. ## Generating Participation Keys With NodeKit To generate your participation key with `NodeKit`, you can use our comprehensive guide that you can find here: Generating Participation Keys ## Generating Participation Keys With `goal` To generate a participation key, use the `goal account addpartkey` command on the node where the participation key will reside. This command takes the address of the participating account, a range of rounds, and an optional key dilution parameter. It then generates a VRF key pair and, using optimizations, a set of single-round voting keys for each round of the range specified. The VRF private key is what is passed into the VRF to determine if you are selected to propose or vote on a block in any given round.
```shell
$ goal account addpartkey -a <address-of-participating-account> --roundFirstValid=<first-round> --roundLastValid=<last-round>
Participation key generation successful
```
This creates a participation key on the node. You can use the `-o` flag to specify a different location in the case where you will eventually transfer your key to a different node to construct the keyreg transaction. ## Add Participation Key If you chose to save the participation key and now want to add it to the server, you can use the following command to add the partkey file to the node.
```shell
$ goal account installpartkey --partkey ALICE...VWXYZ.0.30000.partkey --delete-input
```
## Check that the Key Exists The `goal account listpartkeys` command will check for any participation keys that live on the node and display pertinent information about them.
```shell
$ goal account listpartkeys
Registered  Account      ParticipationID  Last Used  First round  Last round
yes         TUQ4...NLQQ  GOWHR456...      27         0            3000000
```
The output above is an example of `goal account listpartkeys` run from a particular node. For each participation key on the node, it displays whether or not the key is currently **registered**, the **account** address the key participates for, the **participation ID**, the round the key was **last used**, and the **first** and **last** rounds of validity for the partkey. The last-used round is useful in verifying that your node is participating (i.e., it should continue to advance while the account is online). It can also help ensure that you don’t store extra copies of registered participation keys. Caution: It is okay to have multiple participation keys on a single node. However, if you generate multiple participation keys for the same account with overlapping rounds, make sure you are aware of which one is the active one. It is recommended that you only keep one key per account - the active one - except during partkey renewal when you switch from the old key to the new key. Renewing participation keys is discussed in detail in the Renew Participation Keys section below. ## View Participation Key Info Use `goal account partkeyinfo` to dump all the information about each participation key that lives on the node. This information is used to generate the online key registration transaction described in the Account Registration section.
```shell
$ goal account partkeyinfo
Dumping participation key info from /opt/data...

Participation ID:          GOWHR456IK3LPU5KIJ66CRDLZM55MYV2OGNW7QTZYF5RNZEVS33A
Parent address:            TUQ4HOIR3G5Z3BZUN2W2XTWVJ3AUUME4OKLINJFAGKBO4Y76L4UT5WNLQQ
Last vote round:           11
Last block proposal round: 12
Effective first round:     1
Effective last round:      3000000
First round:               0
Last round:                3000000
Key dilution:              10000
Selection key:             l6MsaTt7AiCAdG+69LG/wjaprsI1vImZuGN6gQ1jS88=
Voting key:                Rleu99r3UqlwuuhaxCTrTQUuq1C9qk5uJd2WQQEG+6U=
```
Above is the example output from a particular node. Use these values to create the online key registration transaction that will place the account online. ## Renew Participation Keys The process of renewing a participation key is simply creating a new participation key and registering it online before the previous key expires. You can renew a participation key anytime before it expires, and we recommend doing it at least two weeks (about 268,800 rounds) in advance so as not to risk having an account marked as online that is not actually participating. The validity ranges of participation keys can overlap. For any account, at any time, at most one participation key is registered, namely the one included in the latest online key registration transaction for this account. ## Step-by-Step
* Generate a new participation key with a first voting round that is less than the last voting round of the current participation key. It should leave enough time to carry out this whole process (e.g. 40,000 rounds).
* Once the network reaches the first voting round for the new key, submit an online key registration transaction for the new key.
* Wait at least 320 rounds to make sure the new key has taken effect.
* Once participation is confirmed, it is safe to delete the old key.
 Example key rotation window ## Removing Old Keys When a participation key is no longer in use, you can remove it by running the following `goal` command with the participation ID of the key you want to remove.
```shell
$ goal account deletepartkey --partkeyid IWBP2447JQIT54XWOZ7XKWOBVITS2AEIBOEZXDACX5Q6DZ4Z7VHA
```
Make sure to identify the correct key (i.e. make sure it is not the currently registered key) before deleting.
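The renewal timing guidance above can be sketched as a small helper, using the figures from the text (a roughly two-week, 268,800-round lead time and a 320-round confirmation wait). The helper names are hypothetical, chosen for illustration.

```python
# Figures taken from the renewal guidance above; both are assumptions of
# this sketch, not protocol constants.
RENEWAL_LEAD_ROUNDS = 268_800   # ~2 weeks of rounds, per the recommendation
CONFIRMATION_ROUNDS = 320       # rounds to wait before deleting the old key

def renewal_deadline(old_key_last_valid: int) -> int:
    # Latest round by which renewal should be underway, per the guidance.
    return old_key_last_valid - RENEWAL_LEAD_ROUNDS

def safe_to_delete_old_key(new_key_registered_round: int,
                           current_round: int) -> bool:
    # The old key may be deleted once the new registration has been in
    # effect for at least 320 rounds.
    return current_round >= new_key_registered_round + CONFIRMATION_ROUNDS
```

For a key expiring at round 3,000,000 (as in the example output), this suggests starting renewal no later than round 2,731,200.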
# Protocol Parameters
Protocol parameters are constants that define the limits and requirements of the Algorand blockchain. These parameters control various aspects of the network including transaction fees, minimum balances, smart contract constraints, and asset creation limits. Understanding these parameters is essential for developers building on Algorand, as they directly impact the cost and feasibility of different operations on the network. Learn about the specific costs and constraints that affect smart contract development

## Minimum Balance Requirements

| Parameter | Value | Variable | Note |
| --- | --- | --- | --- |
| Default | 0.1 Algo | MinBalance | |
| Opt-in ASA | 0.1 Algo | MinBalance | |
| Created ASA | 0.1 Algo | MinBalance | ASA creator is automatically opted in |

## Minimum Balance Requirements for Smart Contracts

| Name | Value | Variable | Note |
| --- | --- | --- | --- |
| Per page application creation fee | 0.1 Algo | AppFlatParamsMinBalance | |
| Flat for application opt-in | 0.1 Algo | AppFlatOptInMinBalance | |
| Per state entry | 0.025 Algo | SchemaMinBalancePerEntry | |
| Addition per integer entry | 0.0035 Algo | SchemaUintMinBalance | |
| Addition per byte slice entry | 0.025 Algo | SchemaBytesMinBalance | |
| Per box created | 0.0025 Algo | BoxFlatMinBalance | |
| Per byte in box created | 0.0004 Algo | BoxByteMinBalance | Includes the length of the key |

## Transaction Parameters

| Name | Value | Variable | Note |
| --- | --- | --- | --- |
| Minimum transaction fee, in all cases | 0.001 Algo | MinTxnFee | |
| Additional minimum constraint if congested | Additional fee per byte | - | |
| Max number of transactions in a group | 16 | MaxTxGroupSize | |
| Max number of inner transactions | 256 | MaxInnerTransactions | Each transaction allows 16 inner transactions, multiplied by MaxTxGroupSize (16) through inner transaction pooling |
| Maximum size of a block | 5000000 bytes | MaxTxnBytesPerBlock | |
| Maximum size of note | 1024 bytes | MaxTxnNoteBytes | |
| Maximum transaction life | 1000 rounds | MaxTxnLife | |

## ASA Parameters

| Name | Value | Variable | Note |
| --- | --- | --- | --- |
| Max number of ASAs (create and opt-in) | Unlimited | MaxAssetsPerAccount | |
| Max asset name size | 32 bytes | MaxAssetNameBytes | |
| Max unit name size | 8 bytes | MaxAssetUnitNameBytes | |
| Max URL size | 96 bytes | MaxAssetURLBytes | |
| Metadata hash | 32 bytes | | Padded with zeros if shorter than 32 bytes |

## Smart Signature Parameters

| Name | Value | Variable | Note |
| --- | --- | --- | --- |
| Max size of compiled TEAL code combined with arguments | 1000 bytes | LogicSigMaxSize | |
| Max cost of TEAL code | 20000 | LogicSigMaxCost | |

## Smart Contract Parameters

| Name | Value | Variable | Note |
| --- | --- | --- | --- |
| Current Max AVM/TEAL Version | 12 | LogicSigVersion | Available in consensus v41/AVM v12 |
| Page size of compiled approval + clear TEAL code | 2048 bytes | MaxAppProgramLen | By default, each application has a single page |
| Max extra app pages | 3 | MaxExtraAppProgramPages | An application can “pay” for additional pages via minimum balance |
| Max cost of approval TEAL code | 700 | MaxAppProgramCost | |
| Max cost of clear TEAL code | 700 | MaxAppProgramCost | |
| Max number of scratch variables | 256 | | |
| Max depth of stack | 1000 | MaxStackDepth | |
| Max number of arguments | 16 | MaxAppArgs | |
| Max combined size of arguments | 2048 bytes | MaxAppTotalArgLen | |
| Max number of global state keys | 64 | MaxGlobalSchemaEntries | |
| Max number of local state keys | 16 | MaxLocalSchemaEntries | |
| Max number of log messages | 32 | MaxLogCalls | |
| Max size of log messages | 1024 | MaxLogSize | |
| Max key size | 64 bytes | MaxAppKeyLen | |
| Max \[]byte value size | 128 bytes | MaxAppBytesValueLen | |
| Max key + value size | 128 bytes | MaxAppSumKeyValueLens | |
| Max number of foreign accounts | 8 | MaxAppTxnAccounts | |
| Max number of access list entries | 16 | MaxAppAccess | |
| Max number of foreign ASAs | 8 | MaxAppTxnForeignAssets | |
| Max number of foreign applications | 8 | MaxAppTxnForeignApps | |
| Max number of foreign accounts + ASAs + applications + box storage | 8 | MaxAppTotalTxnReferences | |
| Max number of created applications | Unlimited | MaxAppsCreated | |
| Max number of opt-in applications | Unlimited | MaxAppsOptedIn | |
| App Version | Auto-incrementing | AppVersion | Incremented each time approval or clear program changes |
| Reject Version | uint64 | RejectVersion | Application version for which the txn must reject |

## Box Parameters

| Name | Value | Variable | Note |
| --- | --- | --- | --- |
| Max size of box | 32768 | MaxBoxSize | Does not include name/key length, which is capped at 64 bytes by MaxAppKeyLen |
| Max box references | 8 | MaxAppBoxReferences | |
| Bytes per box reference | 2048 | BytesPerBoxReference | |
| Max box key size | 64 bytes | MaxAppKeyLen | |
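As a worked example of the minimum balance tables above, the following sketch computes app-schema and box minimum balance requirements in microAlgos. The constant values come from the tables; the helper names themselves are illustrative, not SDK functions.

```python
# Values from the minimum balance tables, in microAlgos.
APP_FLAT_PARAMS_MIN_BALANCE = 100_000   # per program page (AppFlatParamsMinBalance)
SCHEMA_MIN_BALANCE_PER_ENTRY = 25_000   # SchemaMinBalancePerEntry
SCHEMA_UINT_MIN_BALANCE = 3_500         # SchemaUintMinBalance
SCHEMA_BYTES_MIN_BALANCE = 25_000       # SchemaBytesMinBalance
BOX_FLAT_MIN_BALANCE = 2_500            # BoxFlatMinBalance
BOX_BYTE_MIN_BALANCE = 400              # BoxByteMinBalance

def app_creation_mbr(extra_pages: int, uint_entries: int, byte_entries: int) -> int:
    # Each state entry costs the flat per-entry amount plus the
    # integer or byte-slice addition; each extra page costs another
    # flat per-page amount.
    pages = APP_FLAT_PARAMS_MIN_BALANCE * (1 + extra_pages)
    uints = (SCHEMA_MIN_BALANCE_PER_ENTRY + SCHEMA_UINT_MIN_BALANCE) * uint_entries
    slices = (SCHEMA_MIN_BALANCE_PER_ENTRY + SCHEMA_BYTES_MIN_BALANCE) * byte_entries
    return pages + uints + slices

def box_mbr(key_len: int, value_len: int) -> int:
    # The per-byte charge includes the key length, per the table note.
    return BOX_FLAT_MIN_BALANCE + BOX_BYTE_MIN_BALANCE * (key_len + value_len)
```

For instance, an app with one extra page, two global integers, and one global byte slice requires 0.307 Algo, and a box with a 10-byte key and 100-byte value adds 0.0465 Algo.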
# Randomness
## Algorand Randomness Capabilities Since the AVM v7 release, two new opcodes have been available: `block` and `vrf_verify`. These enable building randomness oracles and beacons on Algorand, which then provide secure sources of randomness to dApps on-chain. ### Algorand Randomness Beacon Algorand’s on-chain randomness beacon offers a secure, trustless, and verifiable way to generate cryptographically secure randomness. On-chain randomness serves as a vital component for provably fair gaming, random NFT generation, lottery and raffle systems, and other situations where a fair and random number is required. Algorand has achieved this cryptographically secure randomness through verifiable random functions (VRFs) and cryptographic tools with two vital characteristics: verifiability and unpredictability. The randomness beacon calls the same VRF used by the Algorand consensus protocol to generate verifiable pseudo-random values stored on-chain. Nobody can predict the number beforehand, not even the randomness beacon (forward secrecy), and anyone can confirm that the number was generated fairly and not tampered with. These random values can be used by any smart contract deployed on the Algorand blockchain. ## A Brief Primer: Randomness in Computers In our daily lives, we can flip a coin to get a bit of randomness. In computers, it’s not quite that simple. One can Google “flip a coin” and our friendly neighborhood search engine will provide a coin flip. But how did Google decide the output of heads or tails? Computers are deterministic machines: you give a computer program an input, it gives you an output. If you give it the same input again, it’ll give you the same output again, every time. That is predictable, and therefore nearly antithetical to randomness. What is often provided by computers is a pseudo-random number generator (PRNG). Given a certain input seed, the PRNG will give you an output number that appears random and unrelated to the seed.
It’s difficult to predict the output of a PRNG without actually running it. This works just fine for many situations, but presents an attack vector if used for sensitive use cases (use cases where there is a lot to be gained from breaking in). As an improvement over fully deterministic PRNGs, most operating systems nowadays have built-in entropy collection systems to provide better randomness. For example, they gather the user’s mouse movements and use that data as an input to their randomness generation. There is also Cloudflare’s wall of lava lamps for a particularly creative source of entropy. Unfortunately, none of this can be used as-is on the blockchain. ## Challenges to On-Chain Randomness A public blockchain presents two challenges to randomness. Firstly, blockchains are run by a consensus mechanism, which requires all consensus-participating nodes to agree on exactly what happened. Algorand smart contracts are fully deterministic to ensure that consensus can be reached. Thus, they cannot use sources of entropy in their calculations. If they could, different consensus participants would calculate different results and would therefore not reach consensus. Secondly, everything on a public blockchain is, in fact, public. Let’s say Alice implements a PRNG and publishes it in a smart contract. Everybody can see the exact code for her PRNG. When Bob calls the smart contract, he can decide what user seed to provide. Therefore, Bob can pre-run the PRNG with his seed and know ahead of time what the output will be. He can predict the output, so it’s not random. Off-chain PRNGs will often use information that’s not user-provided to seed their function. Unfortunately for Alice, any information her smart contract uses is visible to everybody, because everything on the blockchain is public. Therefore, its output remains predictable and not random. ## Building a Randomness Beacon or Oracle To offer randomness on-chain, Alice has to make use of more advanced cryptography tools.
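The determinism that makes a publicly seeded PRNG predictable is easy to demonstrate with any standard library generator:

```python
import random

def prng_stream(seed: int, n: int) -> list:
    # Two PRNGs seeded identically produce identical streams. On a public
    # blockchain, anyone who can see (or choose) the seed can therefore
    # pre-run the generator and predict every output.
    rng = random.Random(seed)
    return [rng.randrange(1_000_000) for _ in range(n)]

assert prng_stream(42, 5) == prng_stream(42, 5)  # fully predictable
```

This is exactly Bob's attack on Alice's on-chain PRNG: since he controls the seed and can see the code, he can compute the "random" output before submitting his transaction.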
Alice builds an off-chain service and an on-chain smart contract. At inception, her off-chain service generates a VRF private/public key pair. The public key is shared with her smart contract. Every 8th block (this could be a different number), the off-chain service computes the result of the VRF on the block seed (see below), under its private key. The output of this process is a VRF proof. The service now calls the smart contract and sends it the VRF proof (as an app call argument). Using the public key, the smart contract then validates that the VRF proof was actually properly generated using the private key: this is a similar process to verifying a digital signature. If it is valid, this VRF verification step also outputs a value, called the VRF output, which is the pseudo random value we’re looking for. This pseudo random value is stored in the smart contract’s state and can now be accessed by dApps. dApps may want different random values from each other. When calling Alice’s smart contract, they can provide a user seed, which is hashed with the VRF output to give them their own random value. ## The Block Seed The block seed is a pseudo-random number generated by each block. Its value is based on various pieces of information from the blockchain (read the if interested in the details). The block seed is used as part of the sortition algorithm to select the various committees running the Algorand blockchain. It is possible to influence the value of the block seed by running a validator node. For example, a consensus participant could choose not to propose a certain block if the resulting block seed would cause them to lose the lottery they’re playing in. Therefore, the block seed itself should NOT be used directly as a source of randomness. ## Using the Randomness Beacon or Oracle Now that the beacon is built, how do dApps make use of it? A somewhat unusual paradigm has to be taken with randomness. 
The output of the randomness beacon is only random when the block seed is not known. Once the block seed for a certain round is known, Alice will now be able to compute the randomness beacon value for that round and be able to cheat in any smart contract using this value. Therefore, the way that Alice’s randomness beacon should be used is by first committing to using the randomness several rounds in advance, and then getting the randomness in that later round. This “commitment” can take different forms; the most straightforward approach is to hardcode the round number in a deployed smart contract. For example, Bob would like to run a lottery on-chain, so he deploys a lottery smart contract. Everyone who participates in the lottery can buy their ticket by paying the lottery contract any time before round 3000 (this round number is hard-coded in the contract). Anytime after round 3162 (also hard-coded), anyone can call the lottery contract to actually run the lottery. Bob’s lottery contract calls Alice’s randomness beacon to get its value from round 3162, and uses that value to determine who wins the lottery. The winner can now call the lottery contract to claim their winnings. Applications with low security requirements (with little money at play) can commit to rounds in the very near future, like 2 rounds ahead. Randomness oracles will likely provide specific guidance and guarantees to their users around these numbers.
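The commit-then-reveal pattern above can be sketched from the dApp's side. The round numbers and the `b"lottery-seed"` user seed are the illustrative values from Bob's example, and the SHA-256 mixing stands in for however a real beacon combines its VRF output with a user seed; none of this is a specific beacon's API.

```python
import hashlib

COMMIT_ROUND = 3000    # ticket sales close (hard-coded in the contract)
REVEAL_ROUND = 3162    # beacon value for this round decides the winner

def can_buy_ticket(current_round: int) -> bool:
    return current_round < COMMIT_ROUND

def draw_winner(current_round: int, beacon_vrf_output: bytes,
                tickets: list) -> str:
    # Only valid once the committed round has passed; before that, the
    # beacon value does not exist and the draw cannot be gamed.
    if current_round < REVEAL_ROUND:
        raise ValueError("randomness for the committed round is not final yet")
    # Hashing the beacon output with an application-specific user seed
    # gives this dApp its own random value, distinct from other dApps'.
    mixed = hashlib.sha256(beacon_vrf_output + b"lottery-seed").digest()
    index = int.from_bytes(mixed, "big") % len(tickets)
    return tickets[index]
```

The key design point is that the round gap (3000 to 3162) is committed before anyone, including the beacon operator, can know the block seed that will feed the round-3162 VRF output.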
# Account Registration
An online account means that the account is available to participate in consensus. An account is marked online by registering a participation key with the network by sending an online key registration transaction to the network. An offline account means that the account is not available to participate in consensus. An account is marked offline by sending an offline key registration transaction to the network. It is important to mark your account offline if it is not participating, for whatever reason. Not doing so is bad network behavior and will decrease the honest/dishonest user ratio that underpins the liveness of the agreement protocol. Also, in the event of node migration, hardware swap, or other similar events, it is recommended to have your account offline for a few rounds rather than having it online on multiple nodes at the same time. With the addition of staking rewards into the protocol as of v4.0, Algorand consensus participants can set their account as eligible for rewards by including a 2 Algo fee when registering participation keys online. This eligibility status persists if the account is marked offline gracefully, such as for hardware maintenance, or when renewing participation keys. It is only necessary to pay the 2 Algo fee again if the account is kicked offline by the protocol for consensus absenteeism. ## Register Your Account Online This section assumes that you have already generated a participation key for the account you plan to mark online. For an account to participate in consensus, the account needs to be registered online by creating, signing, and sending a key registration transaction with details of the participation key that will vote on the account’s behalf. Once the blockchain processes the transaction, the Verifiable Random Function public key (referred to as the VRF public key) is written into the account’s data, and the account will start participating in consensus with that key.
This VRF public key is how the account is associated with the specific participation keys on the node. ### Create an Online Key Registration Transaction There are two main ways you can create an online key registration transaction. ### Authorize and Send the Key Registration Transaction ## Register Your Account Offline To mark an account offline, send a key registration transaction to the network authorized by the account to be marked offline. The signal to mark the sending account offline is the issuance of a `"type": "keyreg"` transaction that does not contain any participation key-related fields (i.e., they are all set to null values). ### Create an Offline Key Registration Transaction There are two main ways you can create an offline key registration transaction. ### Sign and Send the Key Registration Transaction
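The online/offline distinction above can be sketched by modeling the keyreg payload as a plain dictionary. The participation field names follow the protocol's keyreg transaction fields; this is a schematic, not an SDK call, and in practice you would build these transactions with an Algorand SDK or `goal`.

```python
def online_keyreg(sender: str, vote_key: str, selection_key: str,
                  first: int, last: int, dilution: int) -> dict:
    # An online registration carries the participation key details
    # (from `goal account partkeyinfo`) that will vote on the
    # account's behalf.
    return {
        "type": "keyreg",
        "snd": sender,
        "votekey": vote_key,
        "selkey": selection_key,
        "votefst": first,
        "votelst": last,
        "votekd": dilution,
    }

def offline_keyreg(sender: str) -> dict:
    # The offline signal: the same transaction type with every
    # participation key-related field absent (null).
    return {"type": "keyreg", "snd": sender}

def is_offline_registration(txn: dict) -> bool:
    part_fields = ("votekey", "selkey", "votefst", "votelst", "votekd")
    return txn["type"] == "keyreg" and not any(f in txn for f in part_fields)
```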
# Staking Rewards
> An overview of how Algorand staking rewards work
As of release version 4.0, the Algorand consensus protocol has been updated to add staking rewards. This section describes how the protocol implements staking rewards as block payouts, the account suspensions that manage poor behavior by accounts participating in consensus, and the heartbeat transactions that signal nodes are operating properly. ## Background Running a validator node on Algorand is a relatively lightweight operation. Therefore, participation in consensus was historically not compensated. There was an expectation that financially motivated holders of Algos would run nodes in order to help secure their holdings. Although simple consensus participation is not terribly resource intensive, running *any* service with high uptime becomes expensive when one considers that it should be monitored for uptime, be somewhat over-provisioned to handle unexpected load spikes, and plans need to be in place to restart in the face of hardware failure (or the account should leave consensus properly). With those burdens in mind, fewer Algo holders chose to run validator nodes than would be preferred to provide security against well-financed bad actors. To alleviate this problem, a mechanism to reward block proposers has been created. With these *block payouts* in place, Algo holders are incentivized to run validator nodes to earn more Algos, increasing security for the entire Algorand network. With the financial incentive to run validator nodes comes the risk that some nodes may be operated without sufficient care. Therefore, a mechanism to *suspend* nodes that appear to be performing poorly (or not at all) is required. Appearances can be deceiving, however. Because Algorand is a probabilistic consensus protocol, pure chance might lead to a node appearing to be delinquent. A new transaction type, the *heartbeat*, allows a node to explicitly indicate that it is online even if it does not propose blocks due to “bad luck”. 
## Block Payouts Payouts are made in every block if the proposer has opted into receiving them, has an Algo balance in an appropriate range, and has not been suspended for poor behavior since opting in. The payout size is indicated in the block header and comes from the `FeeSink`. The block payout consists of two components. First, a portion of the block fees (currently 50%) are paid to the proposer. This component incentivizes fuller blocks, which lead to larger payouts. Second, a *bonus* payout is made according to an exponentially decaying formula. This bonus is (intentionally) unsustainable from protocol fees. It is expected that the Algorand Foundation will seed the `FeeSink` with sufficient funds to allow the bonuses to be paid out according to the formula for several years. If the `FeeSink` has insufficient funds for the sum of these components, the payout will be as high as possible while maintaining the `FeeSink`’s minimum balance. These calculations are performed in `endOfBlock` in `eval/eval.go`. To opt-in to receive block payouts, an account includes an extra fee in the `keyreg` transaction. The amount is controlled by the consensus parameter `Payouts.GoOnlineFee`. When such a fee is included, a new account state bit, `IncentiveEligible` is set to true. Even when an account is `IncentiveEligible` there is a proposal-time check of the account’s online stake. If the account has too much or too little, no payout is performed (though `IncentiveEligible` remains true). As explained below, this check occurs in `agreement` code in `payoutEligible()`. The balance check is performed on the *online* stake, that is, the stake from 320 rounds earlier, so a clever proposer can not move Algos in the round it proposes to receive the payout. Finally, in an interesting corner case, a proposing account could be closed at proposal time, since voting is based on the earlier balance. 
Such an account receives no payout, even if its balance was in the proper range 320 rounds ago. A surprising complication in the implementation of these payouts is that when a block is prepared by a node, it does not know which account is the proposer. Until now, `algod` could prepare a single block which would be used by any of the accounts it was participating for. The block would be handed off to `agreement` which would manipulate the block only to add the appropriate block seed (which depended upon the proposer). That interaction between `eval` and `agreement` was widened (see `WithProposer()`) to allow `agreement` to modify the block to include the proper `Proposer`, and to zero the `ProposerPayout` if the account that proposed was not actually eligible to receive a payout. ## Account Suspensions Accounts can be *suspended* for poor behavior. There are two forms of poor behavior that can lead to suspension. First, an account is considered *absent* if it fails to propose as often as it should. Second, an account can be suspended for failing to respond to a *challenge* issued by the network at random. ### Absenteeism An account can be expected to propose once every `n = TotalOnlineStake/AccountOnlineStake` rounds. For example, a node with 2% of online stake ought to propose once every 50 rounds. Of course, the actual proposer is chosen by random sortition. To make false positive suspensions unlikely, a node is considered absent if it fails to produce a block over the course of `20n` rounds. The suspension mechanism is implemented in `generateKnockOfflineAccountsList` in `eval/eval.go`. It is closely modeled on the mechanism that knocks accounts offline if their voting keys have expired. An absent account is added to the `AbsentParticipationAccounts` list of the block header. 
When evaluating a block, accounts in `AbsentParticipationAccounts` are suspended by changing their `Status` to `Offline` and setting `IncentiveEligible` to false, but retaining their voting keys. #### Keyreg and LastHeartbeat As described so far, 320 rounds after a `keyreg` to go online, an account is suddenly expected to have proposed more recently than 20 times its new expected interval. That would be impossible, as it was not online until that round. Therefore, when a `keyreg` is used to go online and become `IncentiveEligible`, the account’s `LastHeartbeat` field is set 320 rounds into the future. In effect, the account is treated as though it proposed in the first round it is online. #### Large Algo increases and LastHeartbeat A similar problem can occur when an online account receives Algos. 320 rounds after receiving the new Algos, the account’s expected proposal interval will shrink. If, for example, such an account increases by a factor of 10, then it is reasonably likely that it will not have proposed recently enough and will be suspended immediately. To mitigate this risk, any time an online, `IncentiveEligible` account balance doubles from a single `Pay`, its `LastHeartbeat` is incremented to 320 rounds past the current round. ### Challenges The absenteeism checks quickly suspend a high-value account if it becomes inoperative. For example, an account with 2% of total online stake can be marked absent after 500 rounds (about 24 minutes). After suspension, the effect on consensus is mitigated after 320 more rounds (about 15 minutes). Therefore, the suspension mechanism makes Algorand significantly more robust in the face of operational errors. However, the absenteeism mechanism is very slow to notice small accounts. An account with 30,000 Algos might represent 1/100,000 or less of total online stake. It would only be considered absent after a million or more rounds without a proposal. At current network speeds, this is about a month.
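The absenteeism arithmetic can be sketched directly from the definitions above (the function names are illustrative; the real check lives in `generateKnockOfflineAccountsList` in `eval/eval.go`):

```python
def expected_interval(total_online_stake: int, account_stake: int) -> int:
    # n = TotalOnlineStake / AccountOnlineStake rounds between proposals.
    return total_online_stake // account_stake

def is_absent(rounds_since_last_proposal: int,
              total_online_stake: int, account_stake: int) -> bool:
    # An account is considered absent after 20n rounds with no proposal,
    # keeping false positives from random sortition unlikely.
    n = expected_interval(total_online_stake, account_stake)
    return rounds_since_last_proposal > 20 * n
```

For example, an account with 2% of online stake has an expected interval of 50 rounds, as stated in the text.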
With such slow detection, a financially motivated entity might make the decision to run a node even if they lack the wherewithal to run the node with excellent uptime. A worst case scenario might be a node that is turned off daily, overnight. Such a node would generate profit for the runner, would probably never be marked offline by the absenteeism mechanism, yet would impact consensus negatively. Algorand can’t make progress with 1/3 of nodes offline at any given time for a nightly rest. To combat this scenario, the network generates random *challenges* periodically. Every `Payouts.ChallengeInterval` rounds (currently 1000), a randomly selected portion (currently 1/32) of all online accounts are challenged. They must *heartbeat* within `Payouts.ChallengeGracePeriod` rounds (currently 200), or they will be subject to suspension. With the current consensus parameters, nodes can be expected to be challenged daily. When suspended, accounts must `keyreg` with the `GoOnlineFee` in order to receive block payouts again, so it becomes unprofitable for these low-stake nodes to operate with poor uptimes. ## Node Heartbeats The absenteeism mechanism is subject to rare false positives. The challenge mechanism explicitly requires an affirmative response from nodes to indicate they are operating properly on behalf of a challenged account. Both of these needs are addressed by a new transaction type: the *Heartbeat*. A Heartbeat transaction contains a signature (`HbProof`) of the block seed (`HbSeed`) of the transaction’s FirstValid block under the participation key of the account (`HbAddress`) in question. Note that the account being heartbeat for is *not* the `Sender` of the transaction, which can be any address. Signing a recent block seed makes it more difficult to pre-sign heartbeats that another machine might send on your behalf.
Signing the FirstValid block's seed (rather than FirstValid-1's) simply enforces a best practice: emit a transaction with FirstValid set to a committed round, not a future round, avoiding a race. The node you send transactions to might not have committed your latest round yet.

It is relatively easy for a bad actor to emit Heartbeats for its accounts without actually participating. However, there is no financial incentive to do so. Pretending to be operational when offline does not earn block payouts. Furthermore, running a server that monitors the blockchain to notice challenges and gather the recent block seed is not significantly cheaper than simply running a functional node. It is *already* possible for malicious, well-resourced accounts to cause consensus difficulties by putting significant stake online without actually participating. Heartbeats do not mitigate that risk. Rather, Heartbeats are designed to avoid *motivating* such behavior, so that they can accomplish their actual goal of noticing poor behavior stemming from *inadvertent* operational problems.

### Free Heartbeats

Challenges occur frequently, so it is important that `algod` can easily send Heartbeats as required. How should these transactions be paid for? Many accounts, especially high-value accounts, would not want to keep their spending keys available for automatic use by `algod`. Further, creating (and keeping funded) a low-value side account to pay for Heartbeats would be an annoying operational overhead. Therefore, when required by challenges, heartbeat transactions do not require a fee, so any account, even an unfunded LogicSig, can send heartbeats for an account under challenge. The conditions for a free Heartbeat are:

1. The Heartbeat is not part of a larger group and has a zero `GroupID`.
2. The `HbAddress` is Online and under challenge with its grace period at least half over.
3. The `HbAddress` is `IncentiveEligible`.
4. There is no `Note`, `Lease`, or `RekeyTo`.
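The four conditions can be condensed into a single predicate. Field names follow the description above; the structure is an illustrative sketch, not the actual `go-algorand` implementation.

```python
from dataclasses import dataclass


@dataclass
class Heartbeat:
    group_id: bytes = b""   # a zero GroupID is modeled as empty bytes
    note: bytes = b""
    lease: bytes = b""
    rekey_to: bytes = b""


def heartbeat_is_free(hb: Heartbeat, online: bool, incentive_eligible: bool,
                      challenge_round: int, grace_period: int, current_round: int) -> bool:
    # 1. not part of a larger group
    if hb.group_id != b"":
        return False
    # 2. online and under challenge, with at least half the grace period elapsed
    half_over = challenge_round + grace_period // 2
    if not online or not (half_over <= current_round <= challenge_round + grace_period):
        return False
    # 3. IncentiveEligible
    if not incentive_eligible:
        return False
    # 4. no Note, Lease, or RekeyTo
    return hb.note == b"" and hb.lease == b"" and hb.rekey_to == b""
```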
### Heartbeat Service

The Heartbeat Service (`heartbeat/service.go`) watches the state of all accounts for which `algod` has participation keys. If any of those accounts meets the requirements above, a heartbeat transaction is sent, starting with the round following half a grace period from the challenge. It uses the (presumably unfunded) LogicSig that does nothing except preclude rekey operations. The heartbeat service does *not* heartbeat if an unlucky account is at risk of being considered absent. We presume such false positives are so unlikely that, if one occurs, the node can be brought back online manually. It would be reasonable to consider in the future:

1. Making heartbeats free for accounts that are "nearly absent," or
2. Allowing paid heartbeats by the heartbeat service when configured with access to a funded account's spending key.
# State Proofs
A State Proof is a cryptographic proof of state changes that occur in a given set of blocks. While other interoperability solutions use intermediaries to "prove" blockchain activity, State Proofs are created and signed by the Algorand network itself. The same participants that reach consensus on new blocks sign a message attesting to a summary of recent Algorand transactions. These signatures are then compressed into a compact certificate, also known as a State Proof.

After a State Proof is created, a State Proof transaction, which includes the State Proof and the message it proves, is created and sent to the Algorand network for validation. The transaction goes through like any other pending Algorand transaction: it gets validated by validator nodes, included in a block proposal, and written to the blockchain.

Each State Proof can be used to power lightweight services that verify Algorand transactions without running consensus or storing a copy of the Algorand ledger. These external services, or "Light Clients", can efficiently verify proofs of Algorand state (either State Proofs or State Proof derived zk-SNARK proofs) in low-power environments like a smartphone, IoT device, or even inside another blockchain's smart contract. For each verified State Proof, the Light Client can store the message's transaction summary, giving it a light, verified history of Algorand state. Depending on its storage budget, a Light Client could store all State Proof history, giving it the ability to efficiently and cryptographically verify any Algorand transaction that has occurred since the first State Proof was written on-chain.
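A Light Client of the kind described above can be modeled roughly as follows. The message fields, the `verify` callback, and all names here are hypothetical; they stand in for the real State Proof verifier and message layout.

```python
class LightClient:
    """Stores one verified block-interval commitment per State Proof (a sketch)."""

    def __init__(self, genesis_participants: bytes):
        # trusted initial participation commitment for the genesis State Proof
        self.participants = genesis_participants
        self.commitments: dict[int, bytes] = {}  # last round of interval -> commitment

    def apply(self, last_round: int, message: dict, proof: bytes, verify) -> None:
        # verify(participants, message, proof) stands in for the State Proof verifier
        if not verify(self.participants, message, proof):
            raise ValueError("invalid State Proof")
        self.commitments[last_round] = message["block_commitment"]
        # each message commits to the signers of the *next* proof, chaining
        # every proof back to the genesis participation commitment
        self.participants = message["next_participants"]
```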
Since Algorand users already trust the Algorand network's ability to reach consensus on new blocks, we call these State Proof transactions, and the Light Clients they power, "trustless." By providing simple interfaces to verify Algorand transactions, these Light Clients make it safer and easier to develop and use cross-chain products and services that want to leverage the state of the Algorand blockchain.

## How State Proofs are Generated

Each State Proof represents a collection of weighted signatures that attest to a specific message. In Algorand's case, each State Proof message contains a commitment to all transactions that occurred over a period of 256 rounds, known as the State Proof Interval. Each proof convinces verifiers that participating accounts who jointly hold a sufficient portion of total online Algorand stake have attested to this message, without the verifier seeing or checking all of the signatures.

Every block processed on the Algorand chain has a header containing a commitment to all transactions in that block. This Transaction Commitment is the root of a tree with all transactions in that block as leaves. At the end of each State Proof Interval, nodes assemble the block interval commitment by using each of the 256 Transaction Commitments from the interval as leaves. This commitment is then included in the State Proof message, which is signed by network participants.

The process for generating a State Proof for a specific block interval actually starts at the generation of the previous State Proof. For example, if a State Proof is being generated for round 768, the following steps occur: On round 512 (= 768 - 256), every participating node creates a participation commitment for the top N online accounts, composed of their public state proof keys and relative online stake. When a node is elected to propose a block through consensus, it includes this commitment in the block header.
On round 769, every participating node executes the following steps for each online account it manages:

1. Build a Block Interval commitment tree based on all the blocks in the interval. This tree's leaves are created using the transaction commitment from each of the blocks' headers. This block interval includes rounds \[513,…,768].
2. Assemble a message containing this Block Interval Commitment and some other metadata, sign the message, and propagate it to the network using the standard protocol gossip framework.

Repeater nodes receive the signed messages and verify them. The signatures are accumulated, weighted by each signer's stake, into a signature array. Once a repeater node has accumulated sufficient signed weight, it constructs a State Proof containing a randomized sample of the accumulated signatures, which can convince a verifier that at least 30% of the top N accounts have signed the State Proof message. After creating the proof, the repeater node constructs a State Proof transaction, composed of the message and its corresponding proof, and submits it to the network. This transaction is processed with normal consensus (the first valid State Proof transaction to be accepted wins). Validator nodes run the State Proof verification algorithm to make sure that the State Proof is valid, using the expected signers from round 512's on-chain participation commitment as reference. Once through consensus, the transaction is written to the blockchain.

Note that the State Proofs are linked together by a series of participation commitments indicating which accounts should produce signatures for the next State Proof, and their weights. These commitments form a chain linking the most recent proof written on-chain to the genesis State Proof from launch day. Since the set of participants is committed ahead of time, and each participant's signature is produced using quantum-safe Falcon keys, we can have confidence that each verifiable State Proof was produced by actual network participants.
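The round arithmetic in the walkthrough above can be made concrete with a small helper (an illustration of the 256-round interval layout, not production code):

```python
STATE_PROOF_INTERVAL = 256


def interval_for(target_round: int) -> range:
    # A State Proof for `target_round` covers the 256 rounds ending at it.
    assert target_round % STATE_PROOF_INTERVAL == 0
    return range(target_round - STATE_PROOF_INTERVAL + 1, target_round + 1)


def participation_commitment_round(target_round: int) -> int:
    # The participation commitment for the proof's signers was placed
    # on-chain one interval earlier.
    return target_round - STATE_PROOF_INTERVAL
```

For round 768 this yields the interval [513, …, 768] and a participation commitment at round 512, as in the example.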
This means that any State Proof verifier can have full confidence that the transactions committed to in each State Proof message are in fact legitimate, even in an age where powerful quantum computers attempt attacks. By producing quantum-safe proofs of the history of the blockchain, Algorand reaches its first milestone towards post-quantum security.

## Using State Proofs

State Proofs allow others to verify Algorand transactions while taking on minimal trust assumptions. Specifically, someone verifying Algorand transactions via a State Proof Light Client needs to trust the following:

* The Algorand blockchain's ability to reach consensus on valid transactions.
* The first "participants" commitment that initialized the Light Client was obtained in a trustworthy way (this specifies the eligible voters for the genesis State Proof).
* The State Proof verifier code inside the Light Client was implemented correctly.
* Algorand's new cryptographic primitives are secure.
* (Depending on the use case) The environment where the Algorand Light Client code is running is secure (e.g. another blockchain's smart contract).

To verify an Algorand transaction outside of the Algorand blockchain, external processes need to understand how transactions are hashed into the Block Interval commitment. This is done using the two commitment trees explained below.

### Transaction Commitment

A transaction commitment is created for every block that occurs on the Algorand blockchain. The root of this tree is stored in the block header. The leaf nodes in this tree are sequenced in the same order as the transactions in the block.

### Block Interval Commitment Tree

Once all of the blocks in a 256-round State Proof Interval have been certified on-chain, participating nodes generate a Block Interval commitment tree to attest to all transactions for the blocks in the period.
The leaves of this block interval commitment are light block headers, one for each round contained within the interval. Each light block header contains the round number and the transaction commitment root for the given block. Participating accounts add the root of this commitment tree to a State Proof message, sign the message with their State Proof keys, and then propagate it to the network. The root of this commitment tree can be used in conjunction with a set of transaction and block interval proofs to verify any transaction in this period. The algod API provides endpoints for retrieving these commitment roots and proofs for verifying specific transactions.
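To make the two-tree structure concrete, here is a plain binary Merkle tree sketch. Algorand's actual vector commitments use specific hash functions and domain-separation prefixes, so this only illustrates the shape of the construction:

```python
import hashlib


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list[bytes]) -> bytes:
    # Leaf order matters: transaction commitments are sequenced like the
    # transactions in the block, and light block headers by round number.
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


# tree 1: per-block transaction commitment over that block's transactions
# tree 2: block interval commitment over the interval's 256 light block headers
```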
# ABI
The ABI (Application Binary Interface) is a specification that defines the encoding/decoding of data types and a standard for exposing and invoking methods in a smart contract. The specification is defined in ARC-4. At a high level, the ABI allows contracts to define an API with rich types and offer an interface description so clients know exactly what the contract is expecting to be passed.

## Data Types

In the Algorand ABI (ARC-4), each data type has a precise encoding scheme, ensuring that contracts and client applications can seamlessly exchange information without ambiguity. It is crucial to understand how these types, such as integers, strings, arrays, and addresses, are structured and represented. Keep in mind that the AVM itself only reads `uint64` and `bytes`; the conversion of other data types to these two is usually handled under the hood by the SDKs and high-level language tooling. This section describes how ABI types are represented as byte strings.

| Type | Description |
| ---- | ----------- |
| uintN | An N-bit unsigned integer, where `8 <= N <= 512 and N % 8 = 0` |
| byte | An alias for uint8 |
| bool | A boolean value that is restricted to either 0 or 1. When encoded, up to 8 consecutive bool values will be packed into a single byte |
| ufixedNxM | An N-bit unsigned fixed-point decimal number with precision M, where `8 <= N <= 512, N % 8 = 0, and 0 < M <= 160`, which denotes a value `v` as `v / (10^M)` |
| type\[N] | A fixed-length array of length `N`, where `N >= 0`. type can be any other type |
| address | Used to represent a 32-byte Algorand address. This is equivalent to byte\[32] |
| type\[] | A variable-length array. type can be any other type |
| string | A variable-length byte array (`byte[]`) assumed to contain UTF-8 encoded content |
| (T1,T2,…,TN) | A tuple of the types `T1, T2, …, TN`, where `N >= 0` |
| reference type | account, asset, or application; only valid for arguments, in which case they are an alias for uint8. See the "Reference Types" section below |

Encoding for the data types is specified in ARC-4.

### Reference Types

Reference types may be specified in the method signature, referring to transaction parameters that must be passed alongside the call. The value encoded is a uint8 index of the element in the relevant array (i.e. for account, the index in the foreign accounts array). These types are:

* `account` - represents an Algorand account, stored in the Accounts array
* `asset` - represents an Algorand Standard Asset (ASA), stored in the Foreign Assets array
* `application` - represents an Algorand Application, stored in the Foreign Apps array

The construction of these arrays and the handling of reference types is usually taken care of by Algorand's high-level language tooling and AlgoKit.

## Methods

Methods may be exposed by a smart contract and called by submitting an ApplicationCall transaction to the existing application ID. A *method signature* is defined as a name, argument types, and return type. The stringified version is then hashed and the first 4 bytes are taken as a *method selector*. For example, a *method signature* for an `add` method that takes two uint64s and returns a uint128:

```plaintext
Method signature: add(uint64,uint64)uint128
```

The string version of the *method signature* is hashed and the first 4 bytes are its *method selector*:

```plaintext
SHA-512/256 hash (in hex): 8aa3b61f0f1965c3a1cbfa91d46b24e54c67270184ff89dc114e877b1753254a
Method selector (in hex): 8aa3b61f
```

Once the method selector is known, it is used in the smart contract logic to route to the appropriate logic that implements the `add` method.
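The selector computation can be reproduced in a few lines of Python, assuming the local OpenSSL build exposes SHA-512/256 through `hashlib`:

```python
import hashlib


def method_selector(signature: str) -> bytes:
    # ARC-4: the method selector is the first 4 bytes of the
    # SHA-512/256 hash of the method signature string
    return hashlib.new("sha512_256", signature.encode()).digest()[:4]


print(method_selector("add(uint64,uint64)uint128").hex())  # 8aa3b61f
```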
The `method` pseudo-opcode can be used in a contract to do the above work and produce a *method selector* given the *method signature* string.

```plaintext
method "add(uint64,uint64)uint128"
```

### Implementing a Method

A method is implemented by handling an ApplicationCall transaction whose first application argument matches the method selector; the subsequent arguments are used by the logic in the method body. The initial handling logic of the contract should route to the correct method by matching the passed selector against the known method selectors of the application's methods. The return value of the method *must* be logged with the prefix `151f7c75`, which is the result of `sha256("return")[:4]`. Only the last logged element with this prefix is considered the return value of this method call.

## Interfaces

An Interface is a logically grouped set of methods. An Algorand Application implements an Interface if it supports all of the methods from that Interface. For example, an Interface Calculator providing addition and subtraction methods for integers and an Interface NumberFormatting providing methods for formatting numbers into strings are likely to be used together. Interface designers should ensure that all the methods in Calculator and NumberFormatting have distinct method selectors.
For example:

```json
{
  "name": "Calculator",
  "desc": "Interface for a basic calculator supporting additions and multiplications",
  "methods": [
    {
      "name": "add",
      "desc": "Calculate the sum of two 64-bit integers",
      "args": [
        { "type": "uint64", "name": "a", "desc": "The first term to add" },
        { "type": "uint64", "name": "b", "desc": "The second term to add" }
      ],
      "returns": { "type": "uint128", "desc": "The sum of a and b" }
    },
    {
      "name": "multiply",
      "desc": "Calculate the product of two 64-bit integers",
      "args": [
        { "type": "uint64", "name": "a", "desc": "The first factor to multiply" },
        { "type": "uint64", "name": "b", "desc": "The second factor to multiply" }
      ],
      "returns": { "type": "uint128", "desc": "The product of a and b" }
    }
  ]
}
```

## Contracts

A Contract is a declaration of what an Application implements. It includes the complete list of the methods implemented by the related Application. It is similar to an Interface, but it may include further details about the concrete implementation, as well as implementation-specific methods that do not belong to any Interface. In addition to the set of methods from the Contract's definition, a Contract may allow bare Application calls (zero-argument application calls). The primary purpose of bare Application calls is to allow the execution of an OnCompletion action that requires no inputs and has no return value, such as NoOp, OptIn, CloseOut, UpdateApplication, and DeleteApplication. Here's an example of a contract implementation:

```json
{
  "name": "Calculator",
  "desc": "Contract of a basic calculator supporting additions and multiplications. Implements the Calculator interface.",
  "networks": {
    "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=": { "appID": 1234 },
    "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=": { "appID": 5678 }
  },
  "methods": [
    {
      "name": "add",
      "desc": "Calculate the sum of two 64-bit integers",
      "args": [
        { "type": "uint64", "name": "a", "desc": "The first term to add" },
        { "type": "uint64", "name": "b", "desc": "The second term to add" }
      ],
      "returns": { "type": "uint128", "desc": "The sum of a and b" }
    },
    {
      "name": "multiply",
      "desc": "Calculate the product of two 64-bit integers",
      "args": [
        { "type": "uint64", "name": "a", "desc": "The first factor to multiply" },
        { "type": "uint64", "name": "b", "desc": "The second factor to multiply" }
      ],
      "returns": { "type": "uint128", "desc": "The product of a and b" }
    }
  ]
}
```

## API

The API of a smart contract can be published as a JSON contract description object. A user may read this object and instantiate a client that handles the encoding/decoding of the arguments and return values using one of the SDKs or AlgoKit Utils.
A full example of a contract JSON file might look like:

```json
{
  "name": "super-awesome-contract",
  "networks": { "MainNet": { "appID": 123456 } },
  "methods": [
    { "name": "add", "desc": "Add 2 integers", "args": [{ "type": "uint64" }, { "type": "uint64" }], "returns": { "type": "uint64" } },
    { "name": "sub", "desc": "Subtract 2 integers", "args": [{ "type": "uint64" }, { "type": "uint64" }], "returns": { "type": "uint64" } },
    { "name": "mul", "desc": "Multiply 2 integers", "args": [{ "type": "uint64" }, { "type": "uint64" }], "returns": { "type": "uint64" } },
    { "name": "div", "desc": "Divide 2 integers, throw away the remainder", "args": [{ "type": "uint64" }, { "type": "uint64" }], "returns": { "type": "uint64" } },
    { "name": "qrem", "desc": "Divide 2 integers, return both the quotient and remainder", "args": [{ "type": "uint64" }, { "type": "uint64" }], "returns": { "type": "(uint64,uint64)" } },
    { "name": "reverse", "desc": "Reverses a string", "args": [{ "type": "string" }], "returns": { "type": "string" } },
    { "name": "txntest", "desc": "just check it", "args": [{ "type": "uint64" }, { "type": "pay" }, { "type": "uint64" }], "returns": { "type": "uint64" } },
    { "name": "concat_strings", "desc": "concat some strings", "args": [{ "type": "string[]" }], "returns": { "type": "string" } },
    { "name": "manyargs", "desc": "Try to send 20 arguments", "args": [{ "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }, { "type": "uint64" }], "returns": { "type": "uint64" } },
    { "name": "min_bal", "desc": "Get the minimum balance for given account", "args": [{ "type": "account" }], "returns": { "type": "uint64" } },
    { "name": "tupler", "desc": "", "args": [{ "type": "(string,uint64,string)" }], "returns": { "type": "uint64" } }
  ]
}
```

## Validating ABI Values

There are three main categories of ABI types:

* Fixed-length types
* Dynamic arrays
* Dynamic tuples

The implications of validation vary between the types, as detailed below. In summary, invalid encodings of ARC-4 values can lead to critical security issues.

### Fixed-Length Types

If a fixed-length type is longer or shorter than it should be, this can lead to unintended memory access. For example, a StaticBytes<32> should always be 32 bytes, but if it's longer it can be used to overwrite other values in an array. For example:

```py
@abimethod(validate_encoding="unsafe_disabled")
def static_value(self, static_bytes: arc4.StaticArray[arc4.Byte, Literal[32]]) -> arc4.UInt64:
    # ⚠️ VULNERABLE: If static_bytes is more than 32 bytes,
    # it will overflow into SUPER_IMPORTANT_VALUE
    array = arc4.Tuple((static_bytes.copy(), arc4.UInt64(SUPER_IMPORTANT_VALUE)))
    return array[1]
```

```ts
@abimethod({ validateEncoding: "unsafe-disabled" })
staticValue(staticBytes: StaticBytes<32>): uint64 {
  const array: [StaticBytes<32>, uint64] = [
    // ⚠️ VULNERABLE: If staticBytes is more than 32 bytes,
    // it will overflow into SUPER_IMPORTANT_VALUE
    staticBytes,
    SUPER_IMPORTANT_VALUE,
  ];
  return array[1]; // Returns the last 8 bytes of staticBytes instead of SUPER_IMPORTANT_VALUE
}
```

### Dynamic Arrays

ABI arrays are always prefixed with their length. For example, the ABI encoding of `0xdeadbeef` as `byte[]` is `0x0004deadbeef` because `0xdeadbeef` is 4 bytes long. If the ABI length prefix is longer than the actual value, this can lead to an AVM panic when trying to access out-of-bounds memory. If the ABI length prefix is shorter than the actual length, this can lead to unintended behavior in contract logic.
For example:

```py
even_numbers: GlobalState[DynamicArray[UInt64]]

def __init__(self) -> None:
    self.even_numbers = GlobalState(DynamicArray[UInt64])

@abimethod(validate_encoding="unsafe_disabled")
def store_numbers(self, numbers: DynamicArray[UInt64]) -> None:
    # If the ABI prefix for numbers is more than the actual amount of numbers, this will panic
    # If the ABI prefix for numbers is less than the actual amount of numbers, not all numbers will be validated
    for num in numbers:
        assert num % 2 == 0, "Only even numbers are allowed"
    self.even_numbers.value = numbers.copy()

@abimethod()
def get_even_number(self, index: UInt64) -> UInt64:
    # If the index is larger than what was given as the ABI prefix, this may potentially return an odd number that
    # bypassed the validation in store_numbers
    return self.even_numbers.value[index]
```

```ts
evenNumbers = GlobalState<uint64[]>();

@abimethod({ validateEncoding: "unsafe-disabled" })
storeNumbers(numbers: uint64[]) {
  // If the ABI prefix for numbers is more than the actual amount of numbers, this will panic
  // If the ABI prefix for numbers is less than the actual amount of numbers, not all numbers will be validated
  for (const num of numbers) {
    assert(num % 2 === 0, "Only even numbers are allowed");
  }
  this.evenNumbers.value = numbers;
}

getEvenNumber(index: uint64): uint64 {
  // If the index is larger than what was given as the ABI prefix, this may potentially return an odd number that
  // bypassed the validation in storeNumbers
  return this.evenNumbers.value[index];
}
```

### Dynamic Tuples

Tuples with dynamically sized elements are encoded in two sections of a byte array: the head, which contains offsets into the byte array where the values live, and the tail, which contains the actual values. For example, `[0xdead, 0xbeef]` encoded as `(byte[], byte[])` is `0x000400080002dead0002beef` because `0x0002dead` starts at byte 4 and `0x0002beef` starts at byte 8. If the offsets are larger than the byte length, the AVM can panic.
Offsets that point to the incorrect byte offset can lead to unintended behavior in contract logic. Most high-level languages, such as the ones that use the Puya compiler, will always use the head offsets to extract values from the tuple. This means that incorrect head offsets will lead to panics when attempting to use those values, preventing unintended memory access. Caution should still be taken when not validating types, because this is not guaranteed to always be the case.
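The head/tail layout can be reproduced with a small encoder for the `(byte[], byte[])` case (an illustrative sketch; real SDKs implement the full ARC-4 encoding rules):

```python
def encode_dynamic_bytes_tuple(elems: list[bytes]) -> bytes:
    # head: one 2-byte big-endian offset per element, pointing into the tail
    # tail: each element encoded as a 2-byte length prefix plus its bytes
    head, tail = b"", b""
    offset = 2 * len(elems)  # the tail begins right after the head section
    for e in elems:
        head += offset.to_bytes(2, "big")
        encoded = len(e).to_bytes(2, "big") + e
        tail += encoded
        offset += len(encoded)
    return head + tail


print(encode_dynamic_bytes_tuple([bytes.fromhex("dead"), bytes.fromhex("beef")]).hex())
# 000400080002dead0002beef
```

This reproduces the example above: the first offset is 4 and the second is 8.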
# Applications
> Explanatory section about Applications and their components on the Algorand blockchain
Algorand Smart Contracts, also known as Applications, are the logic component of the Algorand blockchain. A client can invoke these pieces of structured code to execute a specific method or piece of logic inside the application. Smart contracts live on the blockchain. Once deployed, the on-chain instance of the contract is referred to as an application and is assigned an Application ID, which any client can use to look up the application or to execute its methods.

## Storage

Applications can store values on the Algorand blockchain using several storage types. The Storage Overview section provides a detailed discussion of the data storage primitives in the Algorand Virtual Machine (AVM).

## Components

* **Approval Program**: Responsible for processing all application calls to the contract, except for the clear call described in the next bullet. This program implements most of the logic of an application. Like Logic Signatures, this program succeeds only if one nonzero value is left on the stack upon program completion, or if the return opcode is called with a positive value on the top of the stack.
* **Clear State Program**: Handles accounts using the clear call to remove the smart contract from their balance record. This program passes or fails the same way the Approval Program does; however, whether the logic passes or fails, the contract will be removed from the account's balance record.

In either program, if a global, box, or local state variable is modified and the program fails, the state changes will not be applied. Having two programs allows an account to clear the contract from its state whether the logic passes or not.

## Interaction and Lifecycle

For interacting in a standard way with Applications, the ABI should be used.
This specification defines the encoding/decoding of data types and a standard for exposing and invoking methods in a smart contract. To call an Application, clients execute `ApplicationCall` transactions. Depending on the OnCompletion type, the application behaves differently:

* `NoOp`: Generic application call to execute the Approval Program.
* `OptIn`: Accounts use this transaction to begin participating in a smart contract. Participation enables local storage usage.
* `DeleteApplication`: Transaction to delete the application.
* `UpdateApplication`: Transaction to update the logic of an application.
* `CloseOut`: Accounts use this transaction to close out their participation in the contract. This call can fail based on the programmed logic, preventing the account from removing the contract from its balance record.
* `ClearState`: Similar to `CloseOut`, but the transaction will always clear a contract from the account's balance record, whether the program succeeds or fails.

The `ClearStateProgram` handles the `ClearState` transaction, and the `ApprovalProgram` handles all other ApplicationCall transactions. These transaction types can be created with either goal or the SDKs. In the following sections, details on the individual capabilities of a smart contract will be explained.

## Inner Transactions

Inner transactions are operations that an Application performs from within its execution context. When an application executes, it has its own associated account that can create and submit transactions, similar to how a regular account would. Through inner transactions, Applications can:

* Send payments
* Hold assets
* Create assets
* Call other Applications
* Perform any other transaction allowed by regular accounts
# Algorand Virtual Machine
The AVM is a bytecode-based stack interpreter that executes programs associated with Algorand transactions. TEAL is an assembly language syntax for specifying a program that is ultimately converted to AVM bytecode. These programs can be used to check the parameters of a transaction and approve the transaction as if by a signature. This use is called a *Logic Signature*. Starting with v2, these programs may also execute as *Smart Contracts*, which are often called *Applications*. Contract executions are invoked with explicit application call transactions.

Programs have read-only access to the transaction they are attached to, the other transactions in their atomic transaction group, and a few global values. In addition, *Smart Contracts* have access to limited state that is global to the application, per-account local state for each account that has opted in to the application, and additional per-application arbitrary state in named *boxes*. For both types of program, approval is signaled by finishing with the stack containing a single non-zero uint64 value, though `return` can be used to signal an early approval based only upon the top stack value being a non-zero uint64 value.

## The Stack

The stack starts empty and can contain values of either uint64 or byte-arrays (byte-arrays may not exceed 4096 bytes in length). Most operations act on the stack, popping arguments from it and pushing results to it. Some operations have *immediate* arguments that are encoded directly into the instruction rather than coming from the stack.

The maximum stack depth is 1000. If the stack depth is exceeded, or if a byte-array element exceeds 4096 bytes, the program fails. If an opcode tries to access a position in the stack that does not exist, the operation fails. Most often, this is an attempt to access an element below the stack; the simplest example is an operation like `concat`, which expects two arguments on the stack.
If the stack has fewer than two elements, the operation fails. Some operations, like `frame_dig` (which retrieves values from subroutine parameters) and `proto` (which sets up subroutine stack frames), could fail because of an attempt to access above the current stack.

## Stack Types

While the stack can only store two basic types of values, `uint64` and `bytes`, these values are often bounded, meaning they have specific ranges or limits on what they can contain. For example, a boolean value is just a `uint64` that must be either 0 or 1, and an address must be exactly 32 bytes long. These limited types are named to make the documentation easier to understand and to help catch errors during program creation.

#### Definitions

| Name | Bound | AVM Type |
| ---- | ----- | -------- |
| \[]byte | len(x) <= 4096 | \[]byte |
| address | len(x) == 32 | \[]byte |
| any | | any |
| bigint | len(x) <= 64 | \[]byte |
| bool | x <= 1 | uint64 |
| boxName | 1 <= len(x) <= 64 | \[]byte |
| method | len(x) == 4 | \[]byte |
| none | | none |
| stateKey | len(x) <= 64 | \[]byte |
| uint64 | x <= 18446744073709551615 | uint64 |

## Scratch Space

In addition to the stack there are 256 positions of scratch space. Like stack values, scratch locations may be `uint64` or `bytes`. Scratch locations are initialized as `uint64` zero. Scratch space is accessed by the `load(s)` and `store(s)` opcodes, which move data from or to scratch space, respectively. Application calls may inspect the final scratch space of earlier application calls in the same group using `gload(s)(s)`.

## Versions

In order to maintain existing semantics for previously written programs, AVM code is versioned. When new opcodes are introduced or behavior is changed, a new version is introduced. Programs carrying old versions are executed with their original semantics. In the AVM bytecode, the version is an incrementing integer, currently 12, denoted vX throughout this document.
## Execution Modes

Starting from v2, the AVM can run programs in two modes:

1. LogicSig or *stateless* mode, used to execute Logic Signatures
2. Application or *stateful* mode, used to execute Smart Contracts

Differences between modes include:

* Max program length (consensus parameters `LogicSigMaxSize`, `MaxAppTotalProgramLen` & `MaxExtraAppProgramPages`)
* Max program cost (consensus parameters `LogicSigMaxCost`, `MaxAppProgramCost`)
* Opcode availability. Refer to the opcode documentation for details.
* Some global values, such as `LatestTimestamp`, are only available in stateful mode.
* Only Applications can observe transaction effects, such as Logs or IDs allocated to ASAs or new Applications.

## Execution Environment for Logic Signatures

Logic Signatures execute as part of testing a proposed transaction to see if it is valid and authorized to be committed into a block. If an authorized program executes and finishes with a single non-zero `uint64` value on the stack, then that program has validated the transaction it is attached to.

The program has access to data from the transaction it is attached to (`txn` op), any transactions in a transaction group it is part of (`gtxn` op), and a few global values like consensus parameters (`global` op). Some “Args” may be attached to a transaction being validated by a program. Args are an array of byte strings. A common pattern would be to have the key to unlock some contract as an Arg. Be aware that Logic Signature Args are recorded on the blockchain and publicly visible when the transaction is submitted to the network, even before the transaction has been included in a block. These Args are *not* part of the transaction ID nor of the TxGroup hash. They also cannot be read from other programs in the group of transactions.

A program can either authorize some delegated action on a normal signature-based or multisignature-based account or be wholly in charge of a contract account.
* If the account has signed the program by providing a valid ed25519 signature or valid multisignature for the authorizer address on the string “Program” concatenated with the program bytecode, then the transaction is authorized as if the account had signed it, provided that the program returns true. This allows an account to hand out a signed program so that other users can carry out delegated actions which are approved by the program. Note that Logic Signature Args are *not* signed.
* If the SHA512\_256 hash of the program, prefixed by “Program”, is equal to the authorizer address of the transaction sender, then this is a contract account wholly controlled by the program. No other signature is necessary or possible. The only way to execute a transaction against the contract account is for the program to approve it.

The size of a Logic Signature is defined as the length of its bytecode plus the length of all its Args. The sum of the sizes of all Logic Signatures in a group must not exceed 1000 bytes times the number of transactions in the group (1000 bytes is defined in consensus parameter `LogicSigMaxSize`).

Each opcode has an associated cost, usually 1, but a few slow operations have higher costs. Prior to v4, the program’s cost was estimated as the static sum of all the opcode costs in the program, whether they were actually executed or not. Beginning with v4, the program’s cost is tracked dynamically while being evaluated. If the program exceeds its budget, it fails. The total program cost of all Logic Signatures in a group must not exceed 20,000 (consensus parameter `LogicSigMaxCost`) times the number of transactions in the group.

## Execution Environment for Smart Contracts

Smart Contracts are executed in *ApplicationCall* transactions. Like Logic Signatures, contracts indicate success by leaving a single non-zero integer on the stack. A failed Smart Contract call to an ApprovalProgram is not a valid transaction, and thus is not written to the blockchain.
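The group-level Logic Signature size and cost limits described above are plain arithmetic over the group. Here is a minimal Python sketch of those two checks; the constants mirror the consensus parameters named in the text, and the helper name is hypothetical:

```python
# Sketch of the pooled Logic Signature limits described above.
# Constants correspond to the consensus parameters LogicSigMaxSize
# and LogicSigMaxCost; the function is illustrative, not an SDK API.

LOGIC_SIG_MAX_SIZE = 1000    # bytes allowed per transaction in the group
LOGIC_SIG_MAX_COST = 20000   # cost units allowed per transaction in the group

def group_logicsig_limits_ok(group_size: int,
                             lsig_sizes: list[int],
                             lsig_costs: list[int]) -> bool:
    """Return True if the summed size (bytecode + Args) and summed cost
    of all Logic Signatures in the group stay within the pooled limits:
    group_size * per-transaction maximums."""
    size_ok = sum(lsig_sizes) <= group_size * LOGIC_SIG_MAX_SIZE
    cost_ok = sum(lsig_costs) <= group_size * LOGIC_SIG_MAX_COST
    return size_ok and cost_ok
```

Because the limits are pooled, a single Logic Signature may exceed 1000 bytes or 20,000 cost units, as long as the totals across the whole group stay under `n` times the per-transaction limits.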
An ApplicationCall with OnComplete set to ClearState invokes the ClearStateProgram, rather than the usual ApprovalProgram. If the ClearStateProgram fails, application state changes are rolled back, but the transaction still succeeds, and the Sender’s local state for the called application is removed.

Smart Contracts have access to everything a Logic Signature may access, as well as the ability to examine blockchain state such as balances and contract state (their own state and the state of other contracts). They also have access to some global values that are not visible to Logic Signatures because the values change over time. Since smart contracts access changing state, nodes must rerun their code each time a block is added to the blockchain, to determine whether the ApplicationCall transactions in their pool would still succeed.

Smart contracts have limits on their execution cost (700, consensus parameter `MaxAppProgramCost`). Before v4, this was a static limit on the cost of all the instructions in the program. Starting in v4, the cost is tracked dynamically during execution and must not exceed `MaxAppProgramCost`. Beginning with v5, program costs are pooled and tracked dynamically across app executions in a group. If `n` application invocations appear in a group, then the total execution cost of all such calls must not exceed `n * MaxAppProgramCost`. In v6, inner application calls become possible, and each such call increases the pooled budget by `MaxAppProgramCost` at the time the inner group is submitted with `itxn_submit`.

Executions of the ClearStateProgram are more stringent, in order to ensure that applications may be closed out, but that applications are also assured a chance to clean up their internal state. At the beginning of the execution of a ClearStateProgram, the pooled budget available must be `MaxAppProgramCost` or higher. If it is not, the containing transaction group fails without clearing the app’s state.
During the execution of the ClearStateProgram, no more than `MaxAppProgramCost` may be drawn. If further execution is attempted, the ClearStateProgram fails, and the app’s state *is cleared*.

### Resource Availability

Smart contracts have limits on the amount of blockchain state they may examine. These limits are enforced by failing any opcode that attempts to access a resource unless the resource is *available*. These resources are:

* Accounts, which must be available to access their balance, or other account parameters such as voting details.
* Assets, which must be available to access global asset parameters, such as the asset’s URL, Name, or privileged addresses.
* Holdings, which must be available to access a particular address’s balance or frozen status for a particular asset.
* Applications, which must be available to read an application’s programs, parameters, or global state.
* Locals, which must be available to read a particular address’s local state for a particular application.
* Boxes, which must be available to read or write a box, designated by an application and name for the box.

Resources are *available* based on the contents of the executing transaction and, in later versions, the contents of other transactions in the same group.

* A resource in the “foreign array” fields of the ApplicationCall transaction (`txn.Accounts`, `txn.ForeignAssets`, and `txn.ForeignApplications`) is *available*.
* The `txn.Sender`, `global CurrentApplicationID`, and `global CurrentApplicationAddress` are *available*.
* In pre-v4 applications, all holdings are *available* to the `asset_holding_get` opcode, and all locals are *available* to the `app_local_get_ex` opcode if the *account* of the resource is *available*.
* In v6 and later applications, any asset or application that was created earlier in the same transaction group (whether by a top-level or inner transaction) is *available*.
In addition, any account that is the associated account of a contract that was created earlier in the group is *available*.
* In v7 and later applications, the account associated with any contract present in the `txn.ForeignApplications` field is *available*.
* In v4 and above applications, Holdings and Locals are *available* if both components of the resource are available according to the above rules.
* In v9 and later applications, there is group-level resource sharing. Any resource that is available in *some* top-level transaction in a transaction group is available in *all* v9 or later application calls in the group, whether those application calls are top-level or inner.
* v9 and later applications may use the `txn.Access` list instead of the foreign arrays. When using `txn.Access`, Holdings and Locals are no longer made available automatically just because their components are, and application accounts are not made available because of the availability of their corresponding app. Each resource must be listed explicitly. However, `txn.Access` allows for the listing of more resources than the foreign arrays. Listed resources become available to other (post-v8) applications through group sharing.
* When considering whether an asset holding or application local state is available for group-level resource sharing, the holding or local state must be available in a top-level transaction based on pre-v9 rules. For example, if account A is made available in one transaction, and asset X is made available in another, group resource sharing does *not* make A’s X holding available.
* Top-level transactions that are not application calls also make resources available to group-level resource sharing. The following resources are made available by other transaction types:
* `pay` - `txn.Sender`, `txn.Receiver`, and `txn.CloseRemainderTo` (if set).
* `keyreg` - `txn.Sender`
* `acfg` - `txn.Sender`, `txn.ConfigAsset`, and the `txn.ConfigAsset` holding of `txn.Sender`.
* `axfer` - `txn.Sender`, `txn.AssetReceiver`, `txn.AssetSender` (if set), `txn.AssetCloseTo` (if set), `txn.XferAsset`, and the `txn.XferAsset` holding of each of those accounts.
* `afrz` - `txn.Sender`, `txn.FreezeAccount`, `txn.FreezeAsset`, and the `txn.FreezeAsset` holding of `txn.FreezeAccount`. The `txn.FreezeAsset` holding of `txn.Sender` is *not* made available.
* A Box is *available* to an Approval Program if *any* transaction in the same group contains a box reference (in `txn.Boxes` or `txn.Access`) that denotes the box. A box reference contains an index `i` and a name `n`. The index refers to the `ith` application in the transaction’s `ForeignApplications` or `Access` array (only one of which can be used), with the usual convention that 0 indicates the application ID of the app called by that transaction. No box is ever *available* to a ClearStateProgram.

Regardless of *availability*, any attempt to access an Asset or Application with an ID less than 256 from within a Contract will fail immediately. This avoids any ambiguity in opcodes that interpret their integer arguments as resource IDs *or* as indexes into the `txn.ForeignAssets` or `txn.ForeignApplications` arrays. It is recommended that contract authors avoid supplying array indexes to these opcodes and always use explicit resource IDs. By using explicit IDs, contracts will better take advantage of group resource sharing. The array indexing interpretation may be deprecated in a future version.

## Constants

Constants can be pushed onto the stack in two different ways:

1. Constants can be pushed directly with `pushint` or `pushbytes`. This method is more efficient for constants that are only used once.
2. Constants can be loaded into storage separate from the stack and scratch space, using the two opcodes `intcblock` and `bytecblock`. Then, constants from this storage can be pushed onto the stack by referring to the type and index using `intc`, `intc_[0123]`, `bytec`, and `bytec_[0123]`.
This method is more efficient for constants that are used multiple times.

The assembler will hide most of this, allowing simple use of `int 1234` and `byte 0xcafed00d`. Constants introduced via `int` and `byte` will be assembled into appropriate uses of `pushint|pushbytes` and `{int|byte}c, {int|byte}c_[0123]` to minimize program size.

The opcodes `intcblock` and `bytecblock` use a protobuf-style variable-length unsigned integer (varuint) encoding. The `intcblock` opcode is followed by a varuint specifying the number of integer constants and then that number of varuints. The `bytecblock` opcode is followed by a varuint specifying the number of byte constants, and then that number of length-prefixed (varuint, bytes) byte strings.

### Named Integer Constants

#### OnComplete

An application transaction must indicate the action to be taken following the execution of its approvalProgram or clearStateProgram. The constants below describe the available actions.

| Value | Name | Description |
| ----- | ----------------- | ----------- |
| 0 | NoOp | Only execute the `ApprovalProgram` associated with this application ID, with no additional effects. |
| 1 | OptIn | Before executing the `ApprovalProgram`, allocate local state for this application into the sender’s account data. |
| 2 | CloseOut | After executing the `ApprovalProgram`, clear any local state for this application out of the sender’s account data. |
| 3 | ClearState | Don’t execute the `ApprovalProgram`, and instead execute the `ClearStateProgram` (which may not reject this transaction). Additionally, clear any local state for this application out of the sender’s account data as in `CloseOutOC`. |
| 4 | UpdateApplication | After executing the `ApprovalProgram`, replace the `ApprovalProgram` and `ClearStateProgram` associated with this application ID with the programs specified in this transaction. |
| 5 | DeleteApplication | After executing the `ApprovalProgram`, delete the application parameters from the account data of the application’s creator. |

#### TypeEnum constants

| Value | Name | Description |
| ----- | ------- | --------------------------------- |
| 0 | unknown | Unknown type. Invalid transaction |
| 1 | pay | Payment |
| 2 | keyreg | KeyRegistration |
| 3 | acfg | AssetConfig |
| 4 | axfer | AssetTransfer |
| 5 | afrz | AssetFreeze |
| 6 | appl | ApplicationCall |

## Operations

Most operations work with only one type of argument, `uint64` or `bytes`, and fail if the wrong type value is on the stack.

Many instructions accept values to designate Accounts, Assets, or Applications. Beginning with v4, these values may be given as an *offset* in the corresponding Txn fields (Txn.Accounts, Txn.ForeignAssets, Txn.ForeignApps) *or* as the value itself (a byte-array address for Accounts, or a uint64 ID). The values, however, must still be present in the Txn fields. Before v4, most opcodes required the use of an offset, except for reading account local values of assets or applications, which accepted the IDs directly and did not require the ID to be present in the corresponding *Foreign* array. (Note that beginning with v4, those IDs *are* required to be present in their corresponding *Foreign* array.) See individual opcodes for details. In the case of account offsets or application offsets, 0 is specially defined to mean Txn.Sender or the ID of the current application, respectively.

This summary is supplemented by more detail in the opcode documentation. Some operations immediately fail the program. A transaction checked by a program that fails is not valid.

Caution: If an account is controlled by a program with bugs, there may be no way to recover assets locked in that account.
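The named integer constants above can be mirrored as Python enums for quick reference. This is an illustrative aid only; the canonical values are defined by the protocol, and the official SDKs expose their own equivalents:

```python
# Illustrative mirror of the OnComplete and TypeEnum constant tables.
# Values come from the tables above; this is not an SDK API.
from enum import IntEnum

class OnComplete(IntEnum):
    NoOp = 0               # execute ApprovalProgram only
    OptIn = 1              # allocate local state before execution
    CloseOut = 2           # clear local state after execution
    ClearState = 3         # run ClearStateProgram instead
    UpdateApplication = 4  # replace the application's programs
    DeleteApplication = 5  # delete the application

class TypeEnum(IntEnum):
    unknown = 0  # invalid transaction
    pay = 1      # Payment
    keyreg = 2   # KeyRegistration
    acfg = 3     # AssetConfig
    axfer = 4    # AssetTransfer
    afrz = 5     # AssetFreeze
    appl = 6     # ApplicationCall
```

A program comparing `txn OnCompletion` or `txn TypeEnum` against these small integers is the common way to branch on the action or transaction type being processed.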
In the documentation for each opcode, the stack arguments that are popped are referred to alphabetically, beginning with the deepest argument as `A`. These arguments are shown in the opcode description, and if the opcode must be of a specific type, it is noted there. All opcodes fail if a specified type is incorrect.

If an opcode pushes more than one result, the values are named for ease of exposition and clarity concerning their stack positions. When an opcode manipulates the stack in such a way that a value changes position but is otherwise unchanged, the name of the output on the return stack matches the name of the input value.

### Arithmetic and Logic Operations

| Opcode | Description |
| --------- | ----------- |
| `+` | A plus B. Fail on overflow. |
| `-` | A minus B. Fail if B > A. |
| `/` | A divided by B (truncated division). Fail if B == 0. |
| `*` | A times B. Fail on overflow. |
| `<` | A less than B => {0 or 1} |
| `>` | A greater than B => {0 or 1} |
| `<=` | A less than or equal to B => {0 or 1} |
| `>=` | A greater than or equal to B => {0 or 1} |
| `&&` | A is not zero and B is not zero => {0 or 1} |
| `\|\|` | A is not zero or B is not zero => {0 or 1} |
| `shl` | A times 2^B, modulo 2^64 |
| `shr` | A divided by 2^B |
| `sqrt` | The largest integer I such that I^2 <= A |
| `bitlen` | The highest set bit in A. If A is a byte-array, it is interpreted as a big-endian unsigned integer. bitlen of 0 is 0, bitlen of 8 is 4 |
| `exp` | A raised to the Bth power. Fail if A == B == 0 and on overflow |
| `==` | A is equal to B => {0 or 1} |
| `!=` | A is not equal to B => {0 or 1} |
| `!` | A == 0 yields 1; else 0 |
| `itob` | converts uint64 A to big-endian byte array, always of length 8 |
| `btoi` | converts big-endian byte array A to uint64. Fails if len(A) > 8. Padded by leading 0s if len(A) < 8. |
| `%` | A modulo B. Fail if B == 0. |
| `\|` | A bitwise-or B |
| `&` | A bitwise-and B |
| `^` | A bitwise-xor B |
| `~` | bitwise invert value A |
| `mulw` | A times B as a 128-bit result in two uint64s. X is the high 64 bits, Y is the low |
| `addw` | A plus B as a 128-bit result. X is the carry-bit, Y is the low-order 64 bits. |
| `divw` | A,B / C. Fail if C == 0 or if result overflows. |
| `divmodw` | W,X = (A,B / C,D); Y,Z = (A,B modulo C,D) |
| `expw` | A raised to the Bth power as a 128-bit result in two uint64s. X is the high 64 bits, Y is the low. Fail if A == B == 0 or if the result exceeds 2^128-1 |

### Byte Array Manipulation

| Opcode | Description |
| ----------------- | ----------- |
| `getbit` | Bth bit of (byte-array or integer) A. If B is greater than or equal to the bit length of the value (8\*byte length), the program fails |
| `setbit` | Copy of (byte-array or integer) A, with the Bth bit set to (0 or 1) C. If B is greater than or equal to the bit length of the value (8\*byte length), the program fails |
| `getbyte` | Bth byte of A, as an integer. If B is greater than or equal to the array length, the program fails |
| `setbyte` | Copy of A with the Bth byte set to small integer (between 0..255) C. If B is greater than or equal to the array length, the program fails |
| `concat` | join A and B |
| `len` | yields length of byte value A |
| `substring s e` | A range of bytes from A starting at S up to but not including E. If E < S, or either is larger than the array length, the program fails |
| `substring3` | A range of bytes from A starting at B up to but not including C. If C < B, or either is larger than the array length, the program fails |
| `extract s l` | A range of bytes from A starting at S up to but not including S+L. If L is 0, then extract to the end of the string. If S or S+L is larger than the array length, the program fails |
| `extract3` | A range of bytes from A starting at B up to but not including B+C. If B+C is larger than the array length, the program fails. `extract3` can be called using `extract` with no immediates. |
| `extract_uint16` | A uint16 formed from a range of big-endian bytes from A starting at B up to but not including B+2. If B+2 is larger than the array length, the program fails |
| `extract_uint32` | A uint32 formed from a range of big-endian bytes from A starting at B up to but not including B+4. If B+4 is larger than the array length, the program fails |
| `extract_uint64` | A uint64 formed from a range of big-endian bytes from A starting at B up to but not including B+8. If B+8 is larger than the array length, the program fails |
| `replace2 s` | Copy of A with the bytes starting at S replaced by the bytes of B. Fails if S+len(B) exceeds len(A). `replace2` can be called using `replace` with 1 immediate. |
| `replace3` | Copy of A with the bytes starting at B replaced by the bytes of C. Fails if B+len(C) exceeds len(A). `replace3` can be called using `replace` with no immediates. |
| `base64_decode e` | decode A which was base64-encoded using *encoding* E. Fail if A is not base64 encoded with encoding E |
| `json_ref r` | key B’s value, of type R, from a utf-8 encoded json object A |

The following opcodes take byte-array values that are interpreted as big-endian unsigned integers. For mathematical operators, the returned values are the shortest byte-array that can represent the returned value. For example, the zero value is the empty byte-array. For comparison operators, the returned value is a uint64. Input lengths are limited to a maximum length of 64 bytes, representing a 512-bit unsigned integer. Output lengths are not explicitly restricted, though only `b*` and `b+` can produce a larger output than their inputs, so there is an implicit length limit of 128 bytes on outputs.

| Opcode | Description |
| ------- | ----------- |
| `b+` | A plus B. A and B are interpreted as big-endian unsigned integers |
| `b-` | A minus B. A and B are interpreted as big-endian unsigned integers. Fail on underflow. |
| `b/` | A divided by B (truncated division). A and B are interpreted as big-endian unsigned integers. Fail if B is zero. |
| `b*` | A times B. A and B are interpreted as big-endian unsigned integers. |
| `b<` | 1 if A is less than B, else 0. A and B are interpreted as big-endian unsigned integers |
| `b>` | 1 if A is greater than B, else 0. A and B are interpreted as big-endian unsigned integers |
| `b<=` | 1 if A is less than or equal to B, else 0. A and B are interpreted as big-endian unsigned integers |
| `b>=` | 1 if A is greater than or equal to B, else 0. A and B are interpreted as big-endian unsigned integers |
| `b==` | 1 if A is equal to B, else 0. A and B are interpreted as big-endian unsigned integers |
| `b!=` | 0 if A is equal to B, else 1. A and B are interpreted as big-endian unsigned integers |
| `b%` | A modulo B. A and B are interpreted as big-endian unsigned integers. Fail if B is zero. |
| `bsqrt` | The largest integer I such that I^2 <= A. A and I are interpreted as big-endian unsigned integers |

These opcodes operate on the bits of byte-array values. The shorter input array is interpreted as though left padded with zeros until it is the same length as the other input. The returned values are the same length as the longer input. Therefore, unlike array arithmetic, these results may contain leading zero bytes.
| Opcode | Description | | ------ | ------------------------------------------------------------------------------- | | `b\|` | A bitwise-or B. A and B are zero-left extended to the greater of their lengths | | `b&` | A bitwise-and B. A and B are zero-left extended to the greater of their lengths | | `b^` | A bitwise-xor B. A and B are zero-left extended to the greater of their lengths | | `b~` | A with all bits inverted | ### Cryptographic Operations | Opcode | Description | | ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | | `sha256` | SHA256 hash of value A, yields \[32]byte | | `keccak256` | Keccak256 hash of value A, yields \[32]byte | | `sha512_256` | SHA512\_256 hash of value A, yields \[32]byte | | `sha3_256` | SHA3\_256 hash of value A, yields \[32]byte | | `falcon_verify` | for (data A, compressed-format signature B, pubkey C) verify the signature of data against the pubkey => {0 or 1} | | `ed25519verify` | for (data A, signature B, pubkey C) verify the signature of (“ProgData” \|\| program\_hash \|\| data) against the pubkey => {0 or 1} | | `ed25519verify_bare` | for (data A, signature B, pubkey C) verify the signature of the data against the pubkey => {0 or 1} | | `ecdsa_verify v` | for (data A, signature B, C and pubkey D, E) verify the signature of the data against the pubkey => {0 or 1} | | `ecdsa_pk_recover v` | for (data A, recovery id B, signature C, D) recover a public key | | `ecdsa_pk_decompress v` | decompress pubkey A into components X, Y | | `vrf_verify s` | Verify the proof B of message A against pubkey C. Returns vrf output and verification flag. | | `ec_add g` | for curve points A and B, return the curve point A + B | | `ec_scalar_mul g` | for curve point A and scalar B, return the curve point BA, the point A multiplied by the scalar B. 
| | `ec_pairing_check g` | 1 if the product of the pairing of each point in A with its respective point in B is equal to the identity element of the target group Gt, else 0 | | `ec_multi_scalar_mul g` | for curve points A and scalars B, return curve point B0A0 + B1A1 + B2A2 + … + BnAn | | `ec_subgroup_check g` | 1 if A is in the main prime-order subgroup of G (including the point at infinity) else 0. Program fails if A is not in G at all. | | `ec_map_to g` | maps field element A to group G | | `mimc c` | MiMC hash of scalars A, using curve and parameters specified by configuration C | ### Loading Values Opcodes for getting data onto the stack. Some of these have immediate data in the byte or bytes after the opcode. | Opcode | Description | | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | | `intcblock uint ...` | prepare block of uint64 constants for use by intc | | `intc i` | Ith constant from intcblock | | `intc_0` | constant 0 from intcblock | | `intc_1` | constant 1 from intcblock | | `intc_2` | constant 2 from intcblock | | `intc_3` | constant 3 from intcblock | | `pushint uint` | immediate UINT | | `pushints uint ...` | push sequence of immediate uints to stack in the order they appear (first uint being deepest) | | `bytecblock bytes ...` | prepare block of byte-array constants for use by bytec | | `bytec i` | Ith constant from bytecblock | | `bytec_0` | constant 0 from bytecblock | | `bytec_1` | constant 1 from bytecblock | | `bytec_2` | constant 2 from bytecblock | | `bytec_3` | constant 3 from bytecblock | | `pushbytes bytes` | immediate BYTES | | `pushbytess bytes ...` | push sequences of immediate byte arrays to stack (first byte array being deepest) | | `bzero` | zero filled byte-array of length A | | `arg n` | Nth LogicSig argument | | `arg_0` | LogicSig argument 0 | | `arg_1` | LogicSig argument 1 | | `arg_2` | LogicSig argument 2 | | 
`arg_3` | LogicSig argument 3 | | `args` | Ath LogicSig argument | | `txn f` | field F of current transaction | | `gtxn t f` | field F of the Tth transaction in the current group | | `txna f i` | Ith value of the array field F of the current transaction `txna` can be called using `txn` with 2 immediates. | | `txnas f` | Ath value of the array field F of the current transaction | | `gtxna t f i` | Ith value of the array field F from the Tth transaction in the current group `gtxna` can be called using `gtxn` with 3 immediates. | | `gtxnas t f` | Ath value of the array field F from the Tth transaction in the current group | | `gtxns f` | field F of the Ath transaction in the current group | | `gtxnsa f i` | Ith value of the array field F from the Ath transaction in the current group `gtxnsa` can be called using `gtxns` with 2 immediates. | | `gtxnsas f` | Bth value of the array field F from the Ath transaction in the current group | | `global f` | global field F | | `load i` | Ith scratch space value. All scratch spaces are 0 at program start. | | `loads` | Ath scratch space value. All scratch spaces are 0 at program start. 
| | `store i` | store A to the Ith scratch space | | `stores` | store B to the Ath scratch space | | `gload t i` | Ith scratch space value of the Tth transaction in the current group | | `gloads i` | Ith scratch space value of the Ath transaction in the current group | | `gloadss` | Bth scratch space value of the Ath transaction in the current group | | `gaid t` | ID of the asset or application created in the Tth transaction of the current group | | `gaids` | ID of the asset or application created in the Ath transaction of the current group | #### Transaction Fields ##### Scalar Fields | Index | Name | Type | In | Notes | | ----- | ------------------------- | --------- | --- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | 0 | Sender | address | | 32 byte address | | 1 | Fee | uint64 | | microalgos | | 2 | FirstValid | uint64 | | round number | | 3 | FirstValidTime | uint64 | v7 | UNIX timestamp of block before txn.FirstValid. Fails if negative | | 4 | LastValid | uint64 | | round number | | 5 | Note | \[]byte | | Any data up to 1024 bytes | | 6 | Lease | \[32]byte | | 32 byte lease value | | 7 | Receiver | address | | 32 byte address | | 8 | Amount | uint64 | | microalgos | | 9 | CloseRemainderTo | address | | 32 byte address | | 10 | VotePK | \[32]byte | | 32 byte address | | 11 | SelectionPK | \[32]byte | | 32 byte address | | 12 | VoteFirst | uint64 | | The first round that the participation key is valid. | | 13 | VoteLast | uint64 | | The last round that the participation key is valid. | | 14 | VoteKeyDilution | uint64 | | Dilution for the 2-level participation key | | 15 | Type | \[]byte | | Transaction type as bytes | | 16 | TypeEnum | uint64 | | Transaction type as integer | | 17 | XferAsset | uint64 | | Asset ID | | 18 | AssetAmount | uint64 | | value in Asset’s units | | 19 | AssetSender | address | | 32 byte address. 
Source of assets if Sender is the Asset’s Clawback address. | | 20 | AssetReceiver | address | | 32 byte address | | 21 | AssetCloseTo | address | | 32 byte address | | 22 | GroupIndex | uint64 | | Position of this transaction within an atomic transaction group. A stand-alone transaction is implicitly element 0 in a group of 1 | | 23 | TxID | \[32]byte | | The computed ID for this transaction. 32 bytes. | | 24 | ApplicationID | uint64 | v2 | ApplicationID from ApplicationCall transaction | | 25 | OnCompletion | uint64 | v2 | ApplicationCall transaction on completion action | | 27 | NumAppArgs | uint64 | v2 | Number of ApplicationArgs | | 29 | NumAccounts | uint64 | v2 | Number of Accounts | | 30 | ApprovalProgram | \[]byte | v2 | Approval program | | 31 | ClearStateProgram | \[]byte | v2 | Clear state program | | 32 | RekeyTo | address | v2 | 32 byte Sender’s new AuthAddr | | 33 | ConfigAsset | uint64 | v2 | Asset ID in asset config transaction | | 34 | ConfigAssetTotal | uint64 | v2 | Total number of units of this asset created | | 35 | ConfigAssetDecimals | uint64 | v2 | Number of digits to display after the decimal place when displaying the asset | | 36 | ConfigAssetDefaultFrozen | bool | v2 | Whether the asset’s slots are frozen by default or not, 0 or 1 | | 37 | ConfigAssetUnitName | \[]byte | v2 | Unit name of the asset | | 38 | ConfigAssetName | \[]byte | v2 | The asset name | | 39 | ConfigAssetURL | \[]byte | v2 | URL | | 40 | ConfigAssetMetadataHash | \[32]byte | v2 | 32 byte commitment to unspecified asset metadata | | 41 | ConfigAssetManager | address | v2 | 32 byte address | | 42 | ConfigAssetReserve | address | v2 | 32 byte address | | 43 | ConfigAssetFreeze | address | v2 | 32 byte address | | 44 | ConfigAssetClawback | address | v2 | 32 byte address | | 45 | FreezeAsset | uint64 | v2 | Asset ID being frozen or un-frozen | | 46 | FreezeAssetAccount | address | v2 | 32 byte address of the account whose asset slot is being frozen or un-frozen | | 47 | 
FreezeAssetFrozen | bool | v2 | The new frozen value, 0 or 1 | | 49 | NumAssets | uint64 | v3 | Number of Assets | | 51 | NumApplications | uint64 | v3 | Number of Applications | | 52 | GlobalNumUint | uint64 | v3 | Number of global state integers in ApplicationCall | | 53 | GlobalNumByteSlice | uint64 | v3 | Number of global state byteslices in ApplicationCall | | 54 | LocalNumUint | uint64 | v3 | Number of local state integers in ApplicationCall | | 55 | LocalNumByteSlice | uint64 | v3 | Number of local state byteslices in ApplicationCall | | 56 | ExtraProgramPages | uint64 | v4 | Number of additional pages for each of the application’s approval and clear state programs. An ExtraProgramPages of 1 means 2048 more total bytes, or 1024 for each program. | | 57 | Nonparticipation | bool | v5 | Marks an account nonparticipating for rewards | | 59 | NumLogs | uint64 | v5 | Number of Logs (only with `itxn` in v5). Application mode only | | 60 | CreatedAssetID | uint64 | v5 | Asset ID allocated by the creation of an ASA (only with `itxn` in v5). Application mode only | | 61 | CreatedApplicationID | uint64 | v5 | ApplicationID allocated by the creation of an application (only with `itxn` in v5). Application mode only | | 62 | LastLog | \[]byte | v6 | The last message emitted. Empty bytes if none were emitted. 
Application mode only | | 63 | StateProofPK | \[64]byte | v6 | State proof public key | | 65 | NumApprovalProgramPages | uint64 | v7 | Number of Approval Program pages | | 67 | NumClearStateProgramPages | uint64 | v7 | Number of ClearState Program pages | | 68 | RejectVersion | uint64 | v12 | Application version for which the txn must reject | ##### Array Fields | Index | Name | Type | In | Notes | | ----- | ---------------------- | ------- | -- | ------------------------------------------------------------------------------------------- | | 26 | ApplicationArgs | \[]byte | v2 | Arguments passed to the application in the ApplicationCall transaction | | 28 | Accounts | address | v2 | Accounts listed in the ApplicationCall transaction | | 48 | Assets | uint64 | v3 | Foreign Assets listed in the ApplicationCall transaction | | 50 | Applications | uint64 | v3 | Foreign Apps listed in the ApplicationCall transaction | | 58 | Logs | \[]byte | v5 | Log messages emitted by an application call (only with `itxn` in v5). Application mode only | | 64 | ApprovalProgramPages | \[]byte | v7 | Approval Program as an array of pages | | 66 | ClearStateProgramPages | \[]byte | v7 | ClearState Program as an array of pages | Additional details are available in the opcode reference for the `txn` op. **Global Fields** Global fields are fields that are common to all the transactions in the group. In particular, they include consensus parameters. | Index | Name | Type | In | Notes | | ----- | ------------------------- | --------- | --- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | | 0 | MinTxnFee | uint64 | | microalgos | | 1 | MinBalance | uint64 | | microalgos | | 2 | MaxTxnLife | uint64 | | rounds | | 3 | ZeroAddress | address | | 32 byte address of all zero bytes | | 4 | GroupSize | uint64 | | Number of transactions in this atomic transaction group.
At least 1 | | 5 | LogicSigVersion | uint64 | v2 | Maximum supported version | | 6 | Round | uint64 | v2 | Current round number. Application mode only. | | 7 | LatestTimestamp | uint64 | v2 | Last confirmed block UNIX timestamp. Fails if negative. Application mode only. | | 8 | CurrentApplicationID | uint64 | v2 | ID of current application executing. Application mode only. | | 9 | CreatorAddress | address | v3 | Address of the creator of the current application. Application mode only. | | 10 | CurrentApplicationAddress | address | v5 | Address that the current application controls. Application mode only. | | 11 | GroupID | \[32]byte | v5 | ID of the transaction group. 32 zero bytes if the transaction is not part of a group. | | 12 | OpcodeBudget | uint64 | v6 | The remaining cost that can be spent by opcodes in this program. | | 13 | CallerApplicationID | uint64 | v6 | The application ID of the application that called this application. 0 if this application is at the top-level. Application mode only. | | 14 | CallerApplicationAddress | address | v6 | The application address of the application that called this application. ZeroAddress if this application is at the top-level. Application mode only. | | 15 | AssetCreateMinBalance | uint64 | v10 | The additional minimum balance required to create (and opt-in to) an asset. | | 16 | AssetOptInMinBalance | uint64 | v10 | The additional minimum balance required to opt-in to an asset. | | 17 | GenesisHash | \[32]byte | v10 | The Genesis Hash for the network. | | 18 | PayoutsEnabled | bool | v11 | Whether block proposal payouts are enabled. | | 19 | PayoutsGoOnlineFee | uint64 | v11 | The fee required in a keyreg transaction to make an account incentive eligible. | | 20 | PayoutsPercent | uint64 | v11 | The percentage of transaction fees in a block that can be paid to the block proposer. 
| | 21 | PayoutsMinBalance | uint64 | v11 | The minimum balance an account must have in the agreement round to receive block payouts in the proposal round. | | 22 | PayoutsMaxBalance | uint64 | v11 | The maximum balance an account can have in the agreement round to receive block payouts in the proposal round. | **Asset Fields** Asset fields include `AssetHolding` and `AssetParam` fields that are used in the `asset_holding_get` and `asset_params_get` opcodes. | Index | Name | Type | Notes | | ----- | ------------ | ------ | --------------------------------------------- | | 0 | AssetBalance | uint64 | Amount of the asset unit held by this account | | 1 | AssetFrozen | bool | Is the asset frozen or not | | Index | Name | Type | In | Notes | | ----- | ------------------ | --------- | -- | ---------------------------------------- | | 0 | AssetTotal | uint64 | | Total number of units of this asset | | 1 | AssetDecimals | uint64 | | See AssetParams.Decimals | | 2 | AssetDefaultFrozen | bool | | Frozen by default or not | | 3 | AssetUnitName | \[]byte | | Asset unit name | | 4 | AssetName | \[]byte | | Asset name | | 5 | AssetURL | \[]byte | | URL with additional info about the asset | | 6 | AssetMetadataHash | \[32]byte | | Arbitrary commitment | | 7 | AssetManager | address | | Manager address | | 8 | AssetReserve | address | | Reserve address | | 9 | AssetFreeze | address | | Freeze address | | 10 | AssetClawback | address | | Clawback address | | 11 | AssetCreator | address | v5 | Creator address | **App Fields** App fields used in the `app_params_get` opcode. 
| Index | Name | Type | In | Notes | | ----- | --------------------- | ------- | --- | ------------------------------------------------------------------------------- | | 0 | AppApprovalProgram | \[]byte | | Bytecode of Approval Program | | 1 | AppClearStateProgram | \[]byte | | Bytecode of Clear State Program | | 2 | AppGlobalNumUint | uint64 | | Number of uint64 values allowed in Global State | | 3 | AppGlobalNumByteSlice | uint64 | | Number of byte array values allowed in Global State | | 4 | AppLocalNumUint | uint64 | | Number of uint64 values allowed in Local State | | 5 | AppLocalNumByteSlice | uint64 | | Number of byte array values allowed in Local State | | 6 | AppExtraProgramPages | uint64 | | Number of Extra Program Pages of code space | | 7 | AppCreator | address | | Creator address | | 8 | AppAddress | address | | Address for which this application has authority | | 9 | AppVersion | uint64 | v12 | Version of the app, incremented each time the approval or clear program changes | **Account Fields** Account fields used in the `acct_params_get` opcode. | Index | Name | Type | In | Notes | | ----- | ---------------------- | ------- | --- | ------------------------------------------------------------------------------------------- | | 0 | AcctBalance | uint64 | | Account balance in microalgos | | 1 | AcctMinBalance | uint64 | | Minimum required balance for account, in microalgos | | 2 | AcctAuthAddr | address | | Address the account is rekeyed to. | | 3 | AcctTotalNumUint | uint64 | v8 | The total number of uint64 values allocated by this account in Global and Local States. | | 4 | AcctTotalNumByteSlice | uint64 | v8 | The total number of byte array values allocated by this account in Global and Local States. | | 5 | AcctTotalExtraAppPages | uint64 | v8 | The number of extra app code pages used by this account. | | 6 | AcctTotalAppsCreated | uint64 | v8 | The number of existing apps created by this account. 
| | 7 | AcctTotalAppsOptedIn | uint64 | v8 | The number of apps this account is opted into. | | 8 | AcctTotalAssetsCreated | uint64 | v8 | The number of existing ASAs created by this account. | | 9 | AcctTotalAssets | uint64 | v8 | The number of ASAs held by this account (including ASAs this account created). | | 10 | AcctTotalBoxes | uint64 | v8 | The number of existing boxes created by this account’s app. | | 11 | AcctTotalBoxBytes | uint64 | v8 | The total number of bytes used by this account’s app’s box keys and values. | | 12 | AcctIncentiveEligible | bool | v11 | Has this account opted into block payouts | | 13 | AcctLastProposed | uint64 | v11 | The round number of the last block this account proposed. | | 14 | AcctLastHeartbeat | uint64 | v11 | The round number of the last block this account sent a heartbeat. | ### Flow Control | Opcode | Description | | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | | `err` | Fail immediately. | | `bnz target` | branch to TARGET if value A is not zero | | `bz target` | branch to TARGET if value A is zero | | `b target` | branch unconditionally to TARGET | | `return` | use A as success value; end | | `pop` | discard A | | `popn n` | remove N values from the top of the stack | | `dup` | duplicate A | | `dup2` | duplicate A and B | | `dupn n` | duplicate A, N times | | `dig n` | Nth value from the top of the stack. dig 0 is equivalent to dup | | `bury n` | replace the Nth value from the top of the stack with A. bury 0 fails. | | `cover n` | remove top of stack, and place it deeper in the stack such that N elements are above it. Fails if stack depth <= N. | | `uncover n` | remove the value at depth N in the stack and shift above items down so the Nth deep value is on top of the stack. Fails if stack depth <= N. | | `frame_dig i` | Nth (signed) value from the frame pointer.
| | `frame_bury i` | replace the Nth (signed) value from the frame pointer in the stack with A | | `swap` | swaps A and B on stack | | `select` | selects one of two values based on top-of-stack: B if C != 0, else A | | `assert` | immediately fail unless A is a non-zero number | | `callsub target` | branch unconditionally to TARGET, saving the next instruction on the call stack | | `proto a r` | Prepare top call frame for a retsub that will assume A args and R return values. | | `retsub` | pop the top instruction from the call stack and branch to it | | `switch target ...` | branch to the Ath label. Continue at following instruction if index A exceeds the number of labels. | | `match target ...` | given match cases from A\[1] to A\[N], branch to the Ith label where A\[I] = B. Continue to the following instruction if no matches are found. | ### State Access | Opcode | Description | | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `balance` | balance for account A, in microalgos. The balance is observed after the effects of previous transactions in the group, and after the fee for the current transaction is deducted. Changes caused by inner transactions are observable immediately following `itxn_submit` | | `min_balance` | minimum required balance for account A, in microalgos. Required balance is affected by ASA, App, and Box usage. When creating or opting into an app, the minimum balance grows before the app code runs, therefore the increase is visible there. When deleting or closing out, the minimum balance decreases after the app executes. 
Changes caused by inner transactions or box usage are observable immediately following the opcode effecting the change. | | `app_opted_in` | 1 if account A is opted in to application B, else 0 | | `app_local_get` | local state of the key B in the current application in account A | | `app_local_get_ex` | X is the local state of application B, key C in account A. Y is 1 if key existed, else 0 | | `app_global_get` | global state of the key A in the current application | | `app_global_get_ex` | X is the global state of application A, key B. Y is 1 if key existed, else 0 | | `app_local_put` | write C to key B in account A’s local state of the current application | | `app_global_put` | write B to key A in the global state of the current application | | `app_local_del` | delete key B from account A’s local state of the current application | | `app_global_del` | delete key A from the global state of the current application | | `asset_holding_get f` | X is field F from account A’s holding of asset B. Y is 1 if A is opted into B, else 0 | | `asset_params_get f` | X is field F from asset A. Y is 1 if A exists, else 0 | | `app_params_get f` | X is field F from app A. Y is 1 if A exists, else 0 | | `acct_params_get f` | X is field F from account A. Y is 1 if A owns positive algos, else 0 | | `voter_params_get f` | X is field F from online account A as of the balance round: 320 rounds before the current round. Y is 1 if A had positive algos online in the agreement round, else Y is 0 and X is a type specific zero-value | | `online_stake` | the total online stake in the agreement round | | `log` | write A to log state of the current application | | `block f` | field F of block A. Fail unless A falls between txn.LastValid-1002 and txn.FirstValid (exclusive) | ### Box Access Box opcodes that create, delete, or resize boxes affect the minimum balance requirement of the calling application’s account. The change is immediate, and can be observed after execution by using `min_balance`.
If the account does not possess the new minimum balance, the opcode fails. All box related opcodes fail immediately if used in a ClearStateProgram. This behavior is meant to discourage Smart Contract authors from depending upon the availability of boxes in a ClearState transaction, as accounts using ClearState are under no requirement to furnish appropriate Box References. Authors would do well to keep the same issue in mind with respect to the availability of Accounts, Assets, and Apps though State Access opcodes *are* allowed in ClearState programs because the current application and sender account are sure to be *available*. | Opcode | Description | | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `box_create` | create a box named A, of length B. Fail if the name A is empty or B exceeds 32,768. Returns 0 if A already existed, else 1 | | `box_extract` | read C bytes from box A, starting at offset B. Fail if A does not exist, or the byte range is outside A’s size. | | `box_replace` | write byte-array C into box A, starting at offset B. Fail if A does not exist, or the byte range is outside A’s size. | | `box_splice` | set box A to contain its previous bytes up to index B, followed by D, followed by the original bytes of A that began at index B+C. | | `box_del` | delete box named A if it exists. Return 1 if A existed, 0 otherwise | | `box_len` | X is the length of box A if A exists, else 0. Y is 1 if A exists, else 0. | | `box_get` | X is the contents of box A if A exists, else the empty byte-array ''. Y is 1 if A exists, else 0. | | `box_put` | replaces the contents of box A with byte-array B. Fails if A exists and len(B) != len(box A). Creates A if it does not exist | | `box_resize` | change the size of box named A to be of length B, adding zero bytes to end or removing bytes from the end, as needed.
Fail if the name A is empty, A is not an existing box, or B exceeds 32,768. | ### Inner Transactions The following opcodes allow for “inner transactions”. Inner transactions allow stateful applications to have many of the effects of a true top-level transaction, programmatically. However, they are different in significant ways. The most important differences are that they are not signed, duplicates are not rejected, and they do not appear in the block in the usual way. Instead, their effects are noted in metadata associated with their top-level application call transaction. An inner transaction’s `Sender` must be the SHA512\_256 hash of the application ID (prefixed by “appID”), or an account that has been rekeyed to that hash. In v5, inner transactions may perform `pay`, `axfer`, `acfg`, and `afrz` effects. After executing an inner transaction with `itxn_submit`, the effects of the transaction are visible beginning with the next instruction with, for example, `balance` and `min_balance` checks. In v6, inner transactions may also perform `keyreg` and `appl` effects. Inner `appl` calls fail if they attempt to invoke a program with version less than v4, or if they attempt to opt-in to an app with a ClearState Program less than v4. In v5, only a subset of the transaction’s header fields may be set: `Type`/`TypeEnum`, `Sender`, and `Fee`. In v6, header fields `Note` and `RekeyTo` may also be set. For the specific (non-header) fields of each transaction type, any field may be set. This allows, for example, clawback transactions, asset opt-ins, and asset creates in addition to the more common uses of `axfer` and `acfg`. All fields default to the zero value, except those described under `itxn_begin`. Fields may be set multiple times, but may not be read. The most recent setting is used when `itxn_submit` executes. For this purpose `Type` and `TypeEnum` are considered to be the same field.
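The `Sender` derivation described above can be sketched in plain Python. This is an illustrative sketch, not SDK code: the function name is hypothetical, and it assumes the application ID is serialized as an 8-byte big-endian integer and that the local OpenSSL build exposes the SHA-512/256 algorithm through `hashlib`.

```python
import hashlib

def app_escrow_sender(app_id: int) -> bytes:
    """Raw 32-byte Sender of an inner transaction: the SHA-512/256
    hash of the b"appID" prefix followed by the application ID
    encoded as an 8-byte big-endian integer."""
    data = b"appID" + app_id.to_bytes(8, "big")
    return hashlib.new("sha512_256", data).digest()
```

Because the digest is deterministic, every call for the same application ID yields the same 32-byte escrow address.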
When using `itxn_field` to set an array field (`ApplicationArgs`, `Accounts`, `Assets`, or `Applications`), each use adds an element to the end of the array, rather than setting the entire array at once. `itxn_field` fails immediately for unsupported fields, unsupported transaction types, or improperly typed values for a particular field. `itxn_field` makes acceptance decisions entirely from the field and value provided, never considering previously set fields. Illegal interactions between fields, such as setting fields that belong to two different transaction types, are rejected by `itxn_submit`. | Opcode | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `itxn_begin` | begin preparation of a new inner transaction in a new transaction group | | `itxn_next` | begin preparation of a new inner transaction in the same transaction group | | `itxn_field f` | set field F of the current inner transaction to A | | `itxn_submit` | execute the current inner transaction group. Fail if executing this group would exceed the inner transaction limit, or if any transaction in the group fails. | | `itxn f` | field F of the last inner transaction | | `itxna f i` | Ith value of the array field F of the last inner transaction | | `itxnas f` | Ath value of the array field F of the last inner transaction | | `gitxn t f` | field F of the Tth transaction in the last inner group submitted | | `gitxna t f i` | Ith value of the array field F from the Tth transaction in the last inner group submitted | | `gitxnas t f` | Ath value of the array field F from the Tth transaction in the last inner group submitted | # Assembler Syntax The assembler parses line by line. Ops that only take stack arguments appear on a line by themselves. Immediate arguments follow the opcode on the same line, separated by whitespace.
The first line may contain a special version pragma `#pragma version X`, which directs the assembler to generate bytecode targeting a certain version. For instance, `#pragma version 2` produces bytecode targeting v2. By default, the assembler targets v1. Subsequent lines may contain other pragma declarations (i.e., `#pragma `), pertaining to checks that the assembler should perform before agreeing to emit the program bytes, specific optimizations, etc. Those declarations are optional and cannot alter the semantics as described in this document. “`//`” prefixes a line comment. ## Constants and Pseudo-Ops A few pseudo-ops simplify writing code. `int`, `byte`, `addr`, and `method` followed by a constant record the constant to an `intcblock` or `bytecblock` at the beginning of code and insert an `intc` or `bytec` reference where the instruction appears to load that value. `addr` parses an Algorand account address base32 and converts it to a regular bytes constant. `method` is passed a method signature and takes the first four bytes of the hash to convert it to the standard method selector defined in ARC-4. `byte` constants are: ```plaintext byte base64 AAAA... byte b64 AAAA... byte base64(AAAA...) byte b64(AAAA...) byte base32 AAAA... byte b32 AAAA... byte base32(AAAA...) byte b32(AAAA...) byte 0x0123456789abcdef... byte "\x01\x02" byte "string literal" ``` `int` constants may be `0x` prefixed for hex, `0o` or `0` prefixed for octal, `0b` for binary, or decimal numbers. `intcblock` may be explicitly assembled. It will conflict with the assembler gathering `int` pseudo-ops into an `intcblock` program prefix, but may be used if code only has explicit `intc` references. `intcblock` should be followed by space separated int constants all on one line. `bytecblock` may be explicitly assembled. It will conflict with the assembler if there are any `byte` pseudo-ops but may be used if only explicit `bytec` references are used.
`bytecblock` should be followed with byte constants all on one line, either ‘encoding value’ pairs (`b64 AAA...`) or 0x prefix or function-style values (`base64(...)`) or string literal values. ## Labels and Branches A label is defined by any string that is not an opcode or keyword and that ends in ‘:’. A label can be an argument (without the trailing ‘:’) to a branching instruction. Example: ```plaintext int 1 bnz safe err safe: pop ``` # Encoding and Versioning A compiled program starts with a varuint declaring the version of the compiled code. Any addition, removal, or change of opcode behavior increments the version. For the most part opcode behavior should not change, addition will be infrequent (not likely more often than every three months and less often as the language matures), and removal should be very rare. For version 1, subsequent bytes after the varuint are program opcode bytes. Future versions could put other metadata following the version identifier. It is important to prevent newly-introduced transaction types and fields from breaking assumptions made by programs written before they existed. If one of the transactions in a group will execute a program whose version predates a transaction type or field that can violate expectations, that transaction type or field must not be used anywhere in the transaction group. Concretely, the above requirement is translated as follows: A v1 program included in a transaction group that includes an ApplicationCall transaction or a non-zero RekeyTo field will fail regardless of the program itself. This requirement is enforced as follows: * For every transaction, compute the earliest version that supports all the fields and values in this transaction. * Compute the largest version number across all the transactions in a group (of size 1 or more), call it `maxVerNo`. If any transaction in this group has a program with a version smaller than `maxVerNo`, then that program will fail.
In addition, applications must be v4 or greater to be called in an inner transaction. ## Varuint A varuint is encoded with 7 data bits per byte and the high bit is 1 if there is a following byte and 0 for the last byte. The lowest order 7 bits are in the first byte, followed by successively higher groups of 7 bits. # What AVM Programs Cannot Do Design and implementation limitations to be aware of with various versions. * Stateless programs cannot look up balances of Algos or other assets. (Standard transaction accounting will apply after the Smart Signature has authorized a transaction. A transaction could still be invalid by other accounting rules just as a standard signed transaction could be invalid. e.g. I can’t give away money I don’t have.) * Programs cannot access information in previous blocks. Programs cannot access information in other transactions in the current block, unless they are a part of the same atomic transaction group. * Logic Signatures cannot know exactly what round the current transaction will commit in (but it is somewhere in FirstValid through LastValid). * Programs cannot know exactly what time their transactions are committed. * Programs cannot loop prior to v4. In v3 and prior, the branch instructions `bnz` “branch if not zero”, `bz` “branch if zero” and `b` “branch” can only branch forward. * Until v4, the AVM had no notion of subroutines (and therefore no recursion). As of v4, use `callsub` and `retsub`. * Programs cannot make indirect jumps. `b`, `bz`, `bnz`, and `callsub` jump to an immediately specified address, and `retsub` jumps to the address currently on the top of the call stack, which is manipulated only by previous calls to `callsub` and `retsub`.
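The varuint layout described in the Varuint section above (7 data bits per byte, least-significant group first, high bit as a continuation flag) can be sketched in plain Python. This is an illustrative sketch with hypothetical function names, not SDK code:

```python
def encode_varuint(n: int) -> bytes:
    """Encode a non-negative integer: 7 data bits per byte,
    lowest-order group first; the high bit is set on every
    byte except the last."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)   # more bytes follow
        else:
            out.append(b)          # final byte: high bit clear
            return bytes(out)

def decode_varuint(data: bytes) -> tuple[int, int]:
    """Decode a varuint from the start of data.
    Returns (value, number of bytes consumed)."""
    value = shift = i = 0
    for i, b in enumerate(data):
        value |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):         # high bit clear: last byte
            break
    return value, i + 1
```

For example, 300 encodes as the two bytes `0xAC 0x02`, and decoding those bytes yields 300 again.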
# Control Flow
> Overview of control flow in Algorand smart contracts
Control flow in Algorand smart contracts follows common programming paradigms, with support for if statements, while loops, for loops, and switch/match statements. Both Algorand Python and Algorand TypeScript provide familiar syntax for these constructs. ### If statements If statements work as you would expect in any programming language. The condition must be an expression that evaluates to a boolean. ### Ternary conditions Ternary conditions allow for compact conditional expressions. The condition must be an expression that evaluates to a boolean. ### While loops While loops iterate as long as the specified condition is true. The condition must be an expression that evaluates to a boolean. You can use `break` and `continue` statements to control loop execution. ### For Loops For loops are used to iterate over sequences, ranges, and ARC-4 arrays. In Algorand Python, utility functions like `uenumerate` and `urange` facilitate creating sequences and ranges of UInt64 numbers, and the built-in `reversed` method works with these. In Algorand TypeScript, standard iteration constructs are available. ### Switch or Match Statements `switch` for TypeScript and `match` for Python provide a clean way to handle multiple conditions. They follow the standard syntax of their respective languages. Note: Currently only basic case/switch functionality is supported; captures, pattern matching, and guard clauses are not. ## TEAL Flow Control Opcodes Algorand Python and Algorand TypeScript are high-level smart contract languages that let developers express control flow in accessible syntax. After compilation, however, the Algorand Virtual Machine (AVM) executes Transaction Execution Approval Language (TEAL) flow control opcodes. TEAL is a low-level assembly language that the AVM understands directly.
While developers will write smart contracts in higher-level languages, understanding the underlying TEAL opcodes can be beneficial to comprehend what’s happening line by line. The following chart contains all of the control flow opcodes available in TEAL. | Opcode | Description | | --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | | err | Fail immediately. | | bnz target | branch to TARGET if value A is not zero | | bz target | branch to TARGET if value A is zero | | b target | branch unconditionally to TARGET | | return | use A as success value; end | | pop | discard A | | popn n | remove N values from the top of the stack | | dup | duplicate A | | dup2 | duplicate A and B | | dupn n | duplicate A, N times | | dig n | Nth value from the top of the stack. dig 0 is equivalent to dup | | bury n | replace the Nth value from the top of the stack with A. bury 0 fails. | | cover n | remove top of stack, and place it deeper in the stack such that N elements are above it. Fails if stack depth <= N. | | uncover n | remove the value at depth N in the stack and shift above items down so the Nth deep value is on top of the stack. Fails if stack depth <= N. | | frame\_dig i | Nth (signed) value from the frame pointer. | | frame\_bury i | replace the Nth (signed) value from the frame pointer in the stack with A | | swap | swaps A and B on stack | | select | selects one of two values based on top-of-stack: B if C != 0, else A | | assert | immediately fail unless A is a non-zero number | | callsub target | branch unconditionally to TARGET, saving the next instruction on the call stack | | proto a r | Prepare top call frame for a retsub that will assume A args and R return values. | | retsub | pop the top instruction from the call stack and branch to it | | switch target … | branch to the Ath label. 
Continue at following instruction if index A exceeds the number of labels. | | match target … | given match cases from A\[1] to A\[N], branch to the Ith label where A\[I] = B. Continue to the following instruction if no matches are found. |
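Putting several of these opcodes together, here is a minimal TEAL sketch of a subroutine call that doubles its argument. It is an illustrative example, not compiler output, and assumes AVM v8 or later for `proto` and `frame_dig`:

```plaintext
#pragma version 8
int 21
callsub double   // push the argument, then call the subroutine
return           // use the doubled value as the success value

double:
proto 1 1        // this subroutine takes 1 argument and returns 1 value
frame_dig -1     // load the argument relative to the frame pointer
int 2
*                // multiply argument by 2
retsub           // return the product to the caller
```

Because `callsub` saves the next instruction on the call stack and `retsub` branches back to it, the subroutine can be invoked from multiple call sites without indirect jumps.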
# Smart Contract Costs & Constraints
This page covers the costs and constraints specific to smart contract development on Algorand. For a complete list of all protocol parameters including transaction fees, minimum balances, and other network-wide settings, see the main protocol parameters page.

## Program Constraints

### Program Size Limits

| Type | Constraint |
| ---------------- | ---------- |
| Logic Signatures | Max size: 1000 bytes (consensus parameter `LogicSigMaxSize`). Components: the bytecode plus the length of all arguments |
| Smart Contracts | Max size: 2048 \* (1 + ExtraProgramPages) bytes. Components: ApprovalProgram + ClearStateProgram |

### Application Call Arguments

| Parameter | Constraint |
| --------- | ---------- |
| Number of Arguments | Maximum 16 arguments can be passed to an application call. This limit is defined by the consensus parameter `MaxAppArgs` |
| Combined Size of Arguments | The maximum combined size of arguments is 2048 bytes. This limit is defined by the consensus parameter `MaxAppTotalArgLen` |
| Max Size of Compiled TEAL Code | The maximum size of compiled TEAL code combined with arguments is 1000 bytes. This limit is defined by the consensus parameter `LogicSigMaxSize` |
| Max Cost of TEAL Code | The maximum cost of TEAL code is 20000 for logic signatures and 700 for smart contracts. These limits are defined by the consensus parameters `LogicSigMaxCost` and `MaxAppProgramCost` respectively |
| Argument Types | The arguments to pass to the ABI call can be one of the following types: `boolean`, `number`, `bigint`, `string`, `Uint8Array`, an array of one of the above types, `algosdk.TransactionWithSigner`, `TransactionToSign`, `algosdk.Transaction`, `Promise`. These types are used when specifying the `ABIAppCallArgs` for an application call |

## Opcode Constraints

In Algorand, the opcode budget measures the computational cost of executing a smart contract or logic signature. Each opcode (operation code) in the Algorand Virtual Machine (AVM) has an associated cost deducted from the opcode budget during execution.

| Parameter | Constraint |
| --------- | ---------- |
| Cost of Opcodes | Most opcodes have a computational cost of 1. Some operations (e.g., `sha256`, `keccak256`, `sha512_256`, `ed25519verify`) have larger costs |
| Budget Constraints | Max opcode budget: smart signatures: 20,000 units. Smart contracts invoked by a single application transaction: 700 units; if invoked via a group: 700 \* number of application transactions |
| Clear State Programs | Initial pooled budget must be at least 700 units. Execution limit: 700 units |

> **Note:** Algorand Python provides a helper method for increasing the available opcode budget; see `algopy.ensure_budget`.

## Stack

In the Algorand Virtual Machine (AVM), the stack is a key component of the execution environment.

| Parameter | Constraint |
| --------- | ---------- |
| Maximum Stack Depth | 1000. If the stack depth is exceeded, the program fails |
| Type Limitation | Every element of the stack is restricted to the types uint64 and byte-array |
| Item Size Limit | Byte-arrays may not exceed 4096 bytes in length. The maximum uint64 value is 18446744073709551615 |
| Operation Failure | The program fails if an opcode accesses a position in the stack that does not exist |

## Resources

In Algorand, the access and usage of resources such as account balance/state, application state, etc., by applications are subject to certain constraints and costs:

### Resource Access Limit

| Aspect | Constraint |
| ------ | ---------- |
| Access Restrictions | Limited access to resources like account balance and application state to ensure efficient block evaluation |
| Specification Requirement | Resources must be specified within the transaction for nodes to pre-fetch data |

### Access Constraints

| Access Type | Constraint |
| ----------- | ---------- |
| Block Information | Programs cannot access information from previous blocks |
| Transaction Information | Cannot access other transactions in the current block unless part of the same atomic transaction group |

### Logic Signatures

| Parameter | Constraint |
| --------- | ---------- |
| Transaction Commitment | Cannot determine the exact round or time of transaction commitment |
| Stateless Programs | Cannot query account balances or asset holdings. Transactions must comply with standard accounting rules and may fail if rules are violated |

## AVM Environment

| Parameter | Constraint |
| --------- | ---------- |
| Indirect Jumps | Not supported; all jumps must reference specific addresses |

## Storage Constraints

| Storage Structure | Key Length | Value Length | Unique Key Requirement | Additional Details | Safety from Unexpected Deletion |
| ----------------- | ---------- | ------------ | ---------------------- | ------------------ | ------------------------------- |
| Local State Storage | Up to 64 bytes | Key + value ≤ 128 bytes | Yes | Larger datasets require partitioning | Not Safe — Can be cleared by users at any time using ClearState transactions |
| Box Storage | 1 to 64 bytes | Up to 32 KB (32,768 bytes) | Yes | Key does not contribute to box size; values > 1,024 bytes need additional references | Boxes persist after app deletion but lock the minimum balance if not deleted beforehand |
| Global State Storage | Up to 64 bytes | Key + value ≤ 128 bytes | Yes | Larger datasets require partitioning | Safe — Deleted only with the application; otherwise, data is safe from unexpected deletion |
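As a quick illustration of the opcode budget rules in the tables above, here is a minimal Python sketch; the constants mirror the consensus parameters named above, and the helper function is purely illustrative, not an SDK API:

```python
# Sketch: pooled opcode budget for app calls, per the table above (assumed
# constants; not an SDK API).
MAX_APP_PROGRAM_COST = 700   # budget per application transaction
LOGIC_SIG_MAX_COST = 20_000  # budget for a logic signature

def pooled_app_budget(num_app_calls: int) -> int:
    """Opcode budget shared by all app calls in one atomic group."""
    return MAX_APP_PROGRAM_COST * num_app_calls

# A lone app call gets 700 units; a full group of 16 app calls pools 11,200.
assert pooled_app_budget(1) == 700
assert pooled_app_budget(16) == 11_200
```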
# Cryptographic Tools
> Overview of the Cryptographic Tools section, for producing applications utilizing cryptography features in the AVM.
## Introduction

The Algorand Virtual Machine (AVM) contains opcodes allowing for use-cases utilizing cryptography. This section aims to explain to the more experienced smart contract developer how to make use of those opcodes to create powerful cryptographic protocols and applications.

# Opcodes

While the AVM is Turing-complete and can perform any kind of arbitrary computation, given enough Algo to pay for fees and blocks to spread the computation over, certain commonly used operations have been added directly into the node software and are exposed for direct usage.

Each transaction interacting with a stateful smart contract is allocated a budget of 700 units. Given that a group transaction can contain 16 transactions, that creates a limit of 11,200 units of opcode budget at the first level of nesting. Stateless applications, on the other hand, have a budget of 20,000, owing to not being able to access storage. While a stateless application cannot access state, it can be called in a group that also involves a call to a stateful smart contract. Certain computation can be outsourced to the stateless application, while the stateful application verifies that it has been provided with the correct input arguments. Due to the nature of Algorand’s atomic group transactions, if one of them were to fail, the entire group would fail. It is also possible to “smear” computation across blocks, storing intermediate steps in storage (Global or Box).

## Hash Functions

A hash function maps data of arbitrary size to fixed-size values. The following hash functions are available:

* `sha256`
* `keccak256`
* `sha512_256`
* `mimc`
* vFuture: `sumhash512`

Note that `sha512_256` is *not* the same as `sha512(x) % 2^256`.

MiMC is a ZK-friendly hash function enjoying popularity for ZK-SNARK applications. Note that it is not designed to be used in general applications, but rather in ZK-related applications, hence the increased cost compared to the other hash functions.
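The caveat above about `sha512_256` can be checked with Python's `hashlib` (assuming an OpenSSL build that exposes SHA-512/256 via `hashlib.new`):

```python
import hashlib

# Sketch: SHA-512/256 (the AVM's sha512_256) is a distinct function with its
# own initialization vector, not simply SHA-512 truncated to 256 bits.
msg = b"hello"

sha512_256 = hashlib.new("sha512_256", msg).digest()
truncated_sha512 = hashlib.sha512(msg).digest()[:32]

assert len(sha512_256) == 32
assert sha512_256 != truncated_sha512  # different IVs, so different digests
```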
MiMC will result in a minimal circuit size compared to SHA-2 or SHA-3 hash functions, making generating ZK-SNARKs cheaper. It also comes in two flavors: BN254 and BLS12\_381.

SumHash512 strives to strike a balance between ZK and non-ZK friendliness. It is currently seeing use in State Proofs, namely in constructing the VoterCommitment, a Merkle tree commitment committing to the top stakers.

## Signature Schemes

Signature schemes allow us to create and verify digital signatures, a cornerstone of cryptography. The following opcodes are available:

* Ed25519 (EdDSA)
  * `ed25519verify`
  * `ed25519verify_bare`
* Secp256k1/r1 (ECDSA)
  * `ecdsa_verify`
  * `ecdsa_pk_decompress`
  * `ecdsa_pk_recover`
* vFuture: `falcon_verify`

`ed25519verify` requires passing in a hash of the smart contract. Algorand’s account structure and consensus mechanism are based on Ed25519, and as such it is generally dangerous to have users sign off on arbitrary data with their Algorand addresses, given that a malicious entity could slip in an actual transaction (prefixed by e.g. `MX`). The `ed25519verify` opcode was devised to force the user of a smart contract to sign off on a payload prefixed by a concatenation of `ProgData` and the actual hash of the smart contract code. Later on, the `ed25519verify_bare` opcode was introduced, removing the restriction on the payload and making it possible to verify all signatures.

ECDSA comes in two flavors: Secp256k1 and Secp256r1. The former is used in some other blockchains like Bitcoin and Ethereum. The latter is also referred to as P256 or Prime256v1 and is commonly used in passkeys.

FALCON is based on lattice-based cryptography and is notably one of the NIST-approved post-quantum secure (to the best of our knowledge) signature schemes. Like SumHash512, it is also currently being used in State Proofs.
## Elliptic Curve Operations

Some of the underlying cryptographic primitives involved in ECC (elliptic curve cryptography) have been exposed for the BN254 and BLS12\_381 curves. These two curves are notably pairing-friendly.

* `ec_add`
* `ec_scalar_mul`
* `ec_multi_scalar_mul`
* `ec_subgroup_check`
* `ec_map_to`
* `ec_pairing_check`

Note that the BN254 curve is also known as `alt_bn128` or `bn256`. It is *NOT* to be confused with `Fp254BNb`. It is defined as:

```plaintext
Y^2 = X^3 + 3

over the prime field
p = 21888242871839275222246405745257275088696311157297823662689037894645226208583

with curve order/scalar field
r = 21888242871839275222246405745257275088548364400416034343698204186575808495617
```

BLS12\_381 is more expensive than BN254 and its points require more storage, but it comes with a higher number of bits of security.

## Verifiable Randomness

A VRF (Verifiable Random Function) allows someone with a private key to generate a random value against a message that can be verifiably proven using a public key. VRFs are at the core of Algorand and its consensus mechanism, Pure Proof-of-Stake.

* `vrf_verify`

This VRF function is based on the IETF Internet-Draft `draft-irtf-cfrg-vrf-03` and corresponds to what is currently in the node software. Note that it is not quite the same as the final version the IETF ended up adopting (RFC 9381), which was finalized after Algorand had entered production.
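The curve definition above can be sanity-checked in plain Python; the point (1, 2) used here is the conventional BN254 generator, and the helper is purely illustrative:

```python
# Sketch: checking that a point satisfies the BN254 curve equation
# Y^2 = X^3 + 3 over the prime field p given above.
p = 21888242871839275222246405745257275088696311157297823662689037894645226208583

def on_curve(x: int, y: int) -> bool:
    """True iff (x, y) lies on Y^2 = X^3 + 3 mod p."""
    return (y * y - (x * x * x + 3)) % p == 0

assert on_curve(1, 2)      # the conventional BN254 generator point
assert not on_curve(1, 3)  # an arbitrary off-curve point
```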
# Deployment
## Overview

* Definition: Deploying a smart contract on Algorand involves uploading compiled TEAL (Transaction Execution Approval Language) code to the blockchain, enabling decentralized applications to execute predefined logic.
* Purpose: Deployment makes the smart contract accessible on the Algorand network, allowing users and applications to interact with it.

## Key Concepts in Deployment

* TEAL Compilation: Smart contracts are written in high-level languages like PyTeal and then compiled into TEAL bytecode for execution on the Algorand Virtual Machine (AVM).

## Updatable vs. Non-Updatable Contracts

* Updatable Contracts:
  * Can be modified after deployment.
  * Provide flexibility to fix bugs or add features.
  * Configuration: Set the OnUpdate property to allow updates.
* Non-Updatable Contracts:
  * Immutable once deployed.
  * Enhance security by preventing unauthorized changes.
  * Configuration: Set the OnUpdate property to disallow updates.

## Deletable vs. Non-Deletable Contracts

* Deletable Contracts:
  * Can be removed from the blockchain.
  * Useful for temporary applications or testing.
  * Configuration: Set the OnDelete property to allow deletion.
* Non-Deletable Contracts:
  * Permanent on the blockchain.
  * Ensure continuous availability.
  * Configuration: Set the OnDelete property to disallow deletion.

## Understanding Idempotent Deployment

* Definition: Deploying a contract multiple times without changing the outcome.
* Benefits:
  * Prevents duplicate deployments.
  * Ensures consistency across environments.
* Implementation:
  * Use deployment tools that check for existing contracts before deploying new instances.
  * Maintain versioning to track contract changes.

## Automating Deployment with CI/CD

* Continuous Integration/Continuous Deployment (CI/CD):
  * Automates testing and deployment processes.
  * Ensures code quality and reduces manual errors.
* Best Practices:
  * Integrate deployment scripts into CI/CD pipelines.
  * Use tools like AlgoKit’s Deploy feature for seamless deployments.
  * Implement automated tests to validate contract behavior before deployment.

## Secret Management and Security Best Practices

* Handling Sensitive Data:
  * Store private keys and credentials securely using environment variables or secret management tools.
  * Avoid hardcoding sensitive information in your codebase.
* Access Control:
  * Restrict permissions to update or delete contracts to authorized accounts.
  * Regularly review and update access controls to maintain security.
* Security Audits:
  * Conduct thorough testing to identify and fix vulnerabilities.
  * Consider third-party audits for critical contracts.
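The existence-check described under idempotent deployment can be sketched in plain Python; the `registry` dict stands in for looking up an existing app on chain, and every name here is a hypothetical stand-in, not a real SDK API:

```python
# Sketch: idempotent deployment logic — create if absent, update only when
# the logic changed, do nothing otherwise. All names are illustrative.
def deploy_idempotently(name: str, new_program: bytes,
                        registry: dict[str, bytes]) -> str:
    """Deploy only when needed, so re-running is always safe."""
    existing = registry.get(name)
    if existing is None:
        registry[name] = new_program
        return "created"
    if existing != new_program:
        registry[name] = new_program
        return "updated"
    return "unchanged"  # deploying again changes nothing

apps: dict[str, bytes] = {}
assert deploy_idempotently("hello", b"v1", apps) == "created"
assert deploy_idempotently("hello", b"v1", apps) == "unchanged"
assert deploy_idempotently("hello", b"v2", apps) == "updated"
```

Running the deployment twice with the same program leaves the registry untouched, which is exactly the idempotence property described above.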
# Inner Transactions
> Overview of Inner Transactions in Algorand Smart Contracts.
## What are Inner Transactions?

When a smart contract is deployed to the Algorand blockchain, it is assigned a unique identifier called the App ID. Additionally, every smart contract has an associated unique Algorand account. We call these accounts *application accounts*, and their unique identifier is a 58-character long public key known as the *application address*. The account allows the smart contract to function as an escrow account, which can hold and manage Algorand Standard Assets (ASA) and send transactions just like any other Algorand account. The transactions sent by the smart contract (application) account are called *Inner Transactions*.

## Inner Transaction Details

Since application accounts are Algorand accounts, they need Algo to cover transaction fees when sending inner transactions. To fund the application account, any account in the Algorand network can send Algo to the specified account. For funds to leave the application account, the following conditions must be met:

* The logic within the smart contract must submit an inner transaction.
* The smart contract’s logic must return true.

A smart contract can issue up to 256 inner transactions with one application call. If any of these transactions fail, the smart contract call will also fail. Inner transactions support all the same transaction types that a regular account can make, including:

* Payment
* Key Registration
* Asset Configuration
* Asset Freeze
* Asset Transfer
* Application Call
* State Proof

You can also group multiple inner transactions and execute them atomically. Refer to the Grouped Inner Transactions section below for more details.

Inner transactions are evaluated during AVM execution, allowing changes to be visible within the contract. For example, if the `balance` opcode is used before and after submitting a `pay` transaction, the balance change would be visible to the executing contract. Inner transactions also have access to the `Sender` field.
It is not required to set this field, as all inner transactions default the sender to the contract address. If another account is rekeyed to the smart contract address, setting the sender to the rekeyed address allows the contract to spend from that account. The recipient of an inner transaction must be in the accounts array. Additionally, if the sender of an inner transaction is not the contract, the sender must also be in the accounts array.

Clear state programs do not support creating inner transactions. However, clear state programs can be called by an inner transaction.

## Paying Inner Transaction Fees

By default, fees for inner transactions are paid by the application account—NOT the smart contract method caller—and are set automatically to the minimum transaction fee. For many smart contracts, however, this presents an attack vector: the application account could be drained through repeated calls that trigger fee-incurring inner transactions. The recommended pattern is to hard-code inner transaction fees to zero, which forces the app call sender to cover those fees through an increased fee on the outer transaction via fee pooling. Fee pooling enables the application call to a smart contract method to cover the fees for inner transactions or any other transaction within an atomic transaction group.

## Payment

Smart contracts can send Algo payments to other accounts using payment inner transactions, with the app call sender covering the transaction fees through fee pooling.

## Asset Create

Assets can be created by a smart contract using an asset configuration inner transaction.

## Asset Opt In

If a smart contract wishes to transfer an asset it holds or needs to opt into an asset, this can be done with an asset transfer inner transaction.
If the smart contract created the asset via an inner transaction, it does not need to opt into the asset.

## Asset Transfer

If a smart contract is opted into an asset, it can transfer the asset with an asset transfer inner transaction.

## Asset Freeze

A smart contract can freeze any asset for which the smart contract is set as the freeze address.

## Asset Revoke

A smart contract can revoke or clawback any asset for which the smart contract address is specified as the asset clawback address.

## Asset Configuration

As with all assets, the mutable addresses can be changed by the contract. Note that these addresses cannot be changed once set to an empty value.

## Asset Delete

Assets managed by the contract can also be deleted. Note that the entire supply of the asset must be returned to the contract account before deleting the asset.

## Grouped Inner Transactions

A smart contract can make inner transactions consisting of multiple transactions grouped together atomically, for example a payment transaction grouped with a call to another smart contract.

## Contract to Contract Calls

A smart contract can also call another smart contract method with inner transactions. However, there are some limitations when making contract to contract calls:

* An application may not call itself, even indirectly. This is referred to as re-entrancy and is explicitly forbidden.
* An application may only call into other applications up to a stack depth of 8. In other words, if app calls (->) look like `1->2->3->4->5->6->7->8`, App 8 may not call another application, as this would violate the stack depth limit.
* An application may issue up to 256 inner transactions to increase its budget (max budget of 179.2k even for a group size of 1), but the max call budget is shared for all applications in the group. This means you can’t have two app calls in the same group that both try to issue 256 inner app calls.
* An application of AVM version 6 or above may not call contracts with an AVM version of 3 or below. This limitation protects an older application from unexpected behavior introduced in newer AVM versions.

A smart contract can call other smart contracts using any of the OnComplete types. This allows a smart contract to create, opt in, close out, clear state, delete, or just call (NoOp) other smart contracts.

### NoOp Application call

A NoOp application call allows a smart contract to invoke another smart contract’s logic. This is the most common type of application call used for general-purpose interactions between contracts.

### Deploy smart contract via inner transaction

Smart contracts can dynamically create and deploy other smart contracts using inner transactions. This powerful feature enables contracts to programmatically spawn new applications on the blockchain.
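As a sketch of the zero-fee pattern described in the fee section above, a payment inner transaction in Algorand Python might look like the following. The contract and method names are illustrative, and the code is a compile-time sketch intended for PuyaPy rather than standalone execution:

```py
from algopy import Account, ARC4Contract, UInt64, arc4, itxn


class Treasury(ARC4Contract):  # hypothetical example contract
    @arc4.abimethod()
    def pay_out(self, receiver: Account, amount: UInt64) -> None:
        # fee=0: the caller must cover this fee on the outer app call
        # via fee pooling, protecting the app account from fee drain.
        itxn.Payment(receiver=receiver, amount=amount, fee=0).submit()
```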
# Algorand Python
> Introduction to Algorand Python for writing smart contracts
Algorand Python is a partial implementation of the Python programming language that runs on the Algorand Virtual Machine (AVM). It includes a statically typed framework for the development of Algorand Smart Contracts and Logic Signatures, with Pythonic interfaces to underlying AVM functionality.

Algorand Python is compiled for execution on the AVM by PuyaPy, an optimizing compiler that ensures the resulting AVM bytecode execution semantics match the given Python code. PuyaPy produces output that is directly compatible with AlgoKit typed clients, simplifying the process of deployment and calling. This allows developers to use standard Python tooling in their workflow.

## Benefits of using Algorand Python

1. Rapid development: Python’s concise syntax allows for quick prototyping and iteration of smart contract ideas.
2. Lower barrier to entry: Python’s popularity means more developers can transition into blockchain development without learning a new language.
3. Ease of use: Algorand Python is designed to work with standard Python tooling, making it easy for developers familiar with Python to start building smart contracts on Algorand.
4. Efficiency: Algorand Python is compiled for execution on the AVM by PuyaPy, an optimizing compiler that ensures the resulting AVM bytecode execution semantics match the given Python code. This makes deployment and calling easy.
5. Modularity: Algorand Python supports modular and loosely coupled solution components, facilitating efficient parallel development by small, effective teams, reducing architectural complexity, and allowing developers to pick and choose the specific tools and capabilities they want to use based on their needs and what they are comfortable with.

Learn how to install and start writing Algorand Python smart contracts

## Python Implementation for AVM

Algorand Python maintains the syntax and semantics of Python, supporting a subset of the language that will grow over time.
However, due to the restricted nature of the AVM, it will never be a complete implementation. For example, the `async` and `await` keywords are not supported as they don’t make sense in the AVM context.

Learn more about the Algorand Virtual Machine (AVM) and its implementation constraints

This partial implementation allows existing developer tools like IDE syntax highlighting, static type checkers, linters, and auto-formatters to work out of the box. This approach differs from other partial language implementations that add or alter language elements, which require custom tooling support and force developers to learn non-obvious differences from regular Python.

## AVM Types and Their Algorand Python Equivalents

The basic types of the AVM are:

1. `uint64`: represented as `UInt64` in Algorand Python
2. `bytes[]`: represented as `Bytes` in Algorand Python

The AVM also supports “bounded” types, such as `bigint` (represented as `BigUInt` in Algorand Python), which is a variably sized (up to 512-bit) unsigned integer backed by `bytes[]`.

It’s important to note that these types don’t directly map to standard Python primitives. For example, Python’s `int` is signed and effectively unbounded, while a `bytes` object in Python is limited only by available memory. In contrast, an AVM `bytes[]` has a maximum length of 4096 bytes.
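The boundedness difference can be illustrated in plain Python; `UINT64_MAX` mirrors the AVM's uint64 limit, and `checked_add` is an illustrative helper, not an algopy API:

```python
# Sketch: Python ints are unbounded, but the AVM's uint64 is not — an
# operation that overflows 64 bits simply fails the program. We model
# that bound here with an illustrative helper.
UINT64_MAX = 2**64 - 1

def checked_add(a: int, b: int) -> int:
    """Add like the AVM's `+` opcode: error on uint64 overflow."""
    result = a + b
    if result > UINT64_MAX:
        raise OverflowError("uint64 overflow: the AVM would fail the program")
    return result

assert checked_add(1, 2) == 3
assert UINT64_MAX == 18446744073709551615  # matches the stack item limit
```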
## Differences from Standard Python

### Unsupported features

Several features of standard Python are not supported in Algorand Python due to AVM limitations:

| Feature | Rationale |
| ------- | --------- |
| Exception handling (raise, try/except/finally) | Implementing user-defined exceptions would be costly in terms of opcodes. Additionally, AVM errors and exceptions are not catchable and will immediately terminate the program. As a result, supporting exceptions and exception handling offers minimal to no benefit. |
| Context managers (with statements) | Redundant without exception handling support. |
| Asynchronous programming (async/await) | The AVM is not just single-threaded; all operations are effectively “blocking”, rendering asynchronous programming useless. |
| Closures and lambdas | Without support for function pointers, or other methods of invoking an arbitrary function, it’s impossible to return a function as a closure. Nested functions/lambdas may be supported in the future as a means of repeating common operations within a given function. |
| Global keyword | Module-level values are only allowed to be constants; no rebinding of module constants is allowed. It’s unclear what the meaning here would be, since there’s no arbitrary means of storing state without associating it with a particular contract. If you need such a thing, look at `gload_bytes` or `gload_uint64` to see if the contracts are within the same transaction; otherwise, see `AppGlobal.get_ex_bytes` and `AppGlobal.get_ex_uint64`. |
| Inheritance (outside of contract classes) | Contract inheritance is a special case, since each concrete contract is compiled separately; true polymorphism isn’t required as all references can be resolved at compile time. Polymorphism is also impossible to support without function pointers, so data classes (such as `arc4.Struct`) don’t currently allow for inheritance. |

## Python Primitives

Algorand Python has limitations on standard Python primitives due to the constraints of the Algorand Virtual Machine (AVM).

### Supported Primitives

* `bool`: Algorand Python has full support for `bool`.
* `tuple`: Python tuples are supported as arguments to subroutines, local variables, and return types.
* `typing.NamedTuple`: Python named tuples are also supported using `typing.NamedTuple`.
* `None`: `None` is not supported as a value, but is supported as a type annotation to indicate a function or subroutine returns no value.

The `int`, `str`, and `bytes` built-in types are currently only supported as module-level constants or literals. They can be passed as arguments to various Algorand Python methods that support them, or used when interacting with certain AVM types, e.g. adding a number to a `UInt64`.

### Unsupported Primitives

* `float` is not supported.
* Nested tuples are not supported.

Keep in mind, Python’s `int` is signed and unbounded, while the AVM’s `uint64` (represented as `UInt64` in Algorand Python) is a 64-bit unsigned integer. Similarly, Python’s `bytes` objects are limited only by available memory, whereas the AVM’s `bytes[]` (represented as `Bytes` in Algorand Python) has a maximum length of 4096 bytes.

## PuyaPy Compiler

The PuyaPy compiler is a multi-stage, optimizing compiler that takes Algorand Python and prepares it for execution on the Algorand Virtual Machine (AVM). It ensures that the resulting AVM bytecode execution semantics match the given Python code.
The output produced by PuyaPy is directly compatible with AlgoKit typed clients, making deployment and calling of smart contracts easy. The PuyaPy compiler is based on the Puya compiler architecture, which allows multiple frontend languages to leverage the majority of the compiler logic. This makes adding new frontend languages for execution on Algorand relatively easy.

Learn more about installing and setting up AlgoKit for Algorand development

## Testing and Debugging

The `algorand-python-testing` package allows for efficient unit testing of Algorand Python smart contracts in an offline environment. It emulates key AVM behaviors without requiring a network connection, offering fast and reliable testing capabilities with a familiar Pythonic interface.

Learn how to unit test your Algorand Python smart contracts in an offline environment

Discover tools and techniques for debugging Algorand Python smart contracts

## Algorand Python VS Code Extension

The Algorand Python extension enhances Visual Studio Code with language server capabilities specifically designed for Algorand smart contract development. Working alongside your installed Python language server (Pylance is recommended), it automatically detects your project’s PuyaPy version and provides Algorand-specific code analysis, diagnostics, and validation. The extension offers intelligent quick fixes for common issues, helping you learn Algorand Python as you build, while displaying relevant errors and warnings for smart contract validation. Note that there is currently some latency between code changes and diagnostic updates, which will be improved in future releases.

This extension brings language-server-powered capabilities to your smart contract authoring experience in VS Code

## Best Practices

* Write type-safe code: Always specify variable types, function parameters, and return values.
* Leverage existing Python knowledge: Use familiar Python constructs and patterns where possible.
* Be aware of AVM limitations: When writing your smart contracts, consider the constraints imposed by the AVM.
* Static typing is crucial in Algorand Python, differing significantly from standard Python’s dynamic typing. This ensures type safety and helps prevent errors in smart contract development.

## Resources for Further Learning

A comprehensive tutorial for beginners on writing, compiling, and debugging smart contracts with Algorand Python
# Algorand TEAL
TEAL, or Transaction Execution Approval Language, is the smart contract language used in the Algorand blockchain. It is an assembly-like language processed by the Algorand Virtual Machine (AVM) and is Turing-complete, supporting both looping and subroutines. TEAL is primarily used for writing smart contracts and smart signatures, which can be authored directly in TEAL or via Python or Typescript using . For a brief overview on how TEAL’s opcodes work, checkout the documentation. ## Use in Algorand Smart Contracts and Signatures TEAL scripts create conditions for transaction execution. Smart contracts written in TEAL can control Algorand’s native assets, interact with users, or enforce custom business logic. These contracts either approve or reject transactions based on predefined conditions. Smart signatures, on the other hand, enforce specific rules on transactions initiated by accounts, typically serving as a stateless contract. ## Relationship to the Algorand Virtual Machine The AVM is responsible for processing TEAL programs. It interprets and executes the TEAL code, managing state changes and ensuring the contract’s logic adheres to the set rules. The AVM also evaluates the computational cost of running TEAL code to enforce time limits on contract execution. ## TEAL Language Features 1. Assembly-like Structure: TEAL resembles assembly language, where operations are performed in a sequential manner. Each line in a TEAL program represents a single operation. 2. Stack-based Operations: TEAL is a stack-based language, meaning it relies heavily on a stack to manage data. Operations in TEAL typically involve pushing data onto the stack, manipulating it, and then popping the result off the stack. 3. Data Types: TEAL supports two primary data types: * Unsigned 64-bit Integers * Byte Strings These data types are used in various operations, including comparisons, arithmetic, and logical operations. 4. 
Operators and Flow Control: TEAL includes a set of operators for performing arithmetic (`+`, `-`), comparisons (`==`, `<`, `>`), and logical operations (`&&`, `||`). Flow control in TEAL is managed through branching (`bnz`, `bz`) and subroutine calls (`callsub`, `retsub`). 5. Access to Transaction Properties and Global Values: TEAL programs can access properties of transactions (e.g., sender, receiver, amount) and global values (e.g., current round, group size) using specific opcodes like `txn`, `gtxn`, and `global`. ## Program Versions and Compatibility Currently, Algorand supports versions 1 through 10 of TEAL. When writing contracts with program version 2 or higher, make sure to add `#pragma version N` (where N is the specific version number) as the first line of the program. If this line does not exist, the protocol will treat the contract as a version 1 contract. If upgrading a contract to version 2 or higher, it is important to verify that you are checking the `RekeyTo` property of all transactions that are attached to the contract. ## Transaction Properties and Pseudo Opcodes The primary purpose of a TEAL program is to return either true or false. When the program completes, if there is a single non-zero value on the stack, it returns true. If there is a zero value or the stack is empty, it returns false. If the stack has more than one value, the program also returns false unless the `return` opcode is used. The following diagram illustrates how the stack machine processes the program.  Figure: Program Processing ### Getting Transaction Properties The program uses the `txn` opcode to reference the current transaction’s list of properties. Grouped transaction properties are referenced using `gtxn` and `gtxns`. The number of transactions in a grouped transaction is available in the global variable `GroupSize`. To get the first transaction’s receiver, use `gtxn 0 Receiver`. ## Pseudo opcodes The TEAL specification provides several pseudo opcodes for convenience. 
For example, the second line in the program below uses the `addr` pseudo opcode.  Figure: Pseudo Opcodes The `addr` pseudo opcode converts an Algorand address to a byte constant and pushes the result to the stack. See the TEAL specification for additional pseudo opcodes. ## Operators and Stack Manipulation TEAL provides operators to work with data that is on the stack. For example, the `==` operator evaluates whether the last two values on the stack are equal and pushes either a 1 or 0 depending on the result. The number of values used by an operator depends on the operator. The opcode documentation explains arguments and return values.  Figure: Operators ## Argument Passing TEAL supports program arguments. Smart contracts and smart signatures handle these parameters with different opcodes. Passing parameters to a smart signature is explained in the smart signature documentation. The diagram below shows an example of logic that is loading a parameter onto the stack within a smart signature.  Figure: Arguments All argument parameters to a TEAL program are byte arrays. The order in which parameters are passed is significant. In the diagram above, the first parameter is pushed onto the stack. The SDKs provide standard language functions that allow you to convert parameters to a byte array. ## Scratch Space Usage TEAL provides a scratch space as a way of temporarily storing values for use later in your code. The diagram below illustrates a small TEAL program that loads 12 onto the stack and then duplicates it. These values are multiplied together and the result (144) is pushed to the top of the stack. The `store` command stores the value in scratch space slot 1.  Figure: Storing Values The `load` command is used to retrieve a value from the scratch space as illustrated in the diagram below. Note that this operation does not clear the scratch space slot, which allows a stored value to be loaded many times if necessary.  
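The scratch-space example above can be made concrete with a toy stack machine. The sketch below is plain TypeScript (not Algorand TypeScript, and not the real AVM): it models only the five operations used in the example — push an integer, `dup`, multiply, `store`, and `load` — to show how 12 is duplicated, squared to 144, stored in slot 1, and loaded back.

```typescript
// Toy stack machine illustrating TEAL's stack + scratch-space model.
// Supports only the handful of opcodes used in the example above.
type Op =
  | { op: "int"; value: number }   // push an integer constant
  | { op: "dup" }                  // duplicate the top of the stack
  | { op: "*" }                    // multiply the top two values
  | { op: "store"; slot: number }  // pop the top of the stack into a scratch slot
  | { op: "load"; slot: number };  // push a scratch slot (the slot is NOT cleared)

function run(program: Op[]): { stack: number[]; scratch: Map<number, number> } {
  const stack: number[] = [];
  const scratch = new Map<number, number>();
  for (const ins of program) {
    switch (ins.op) {
      case "int":
        stack.push(ins.value);
        break;
      case "dup":
        stack.push(stack[stack.length - 1]);
        break;
      case "*": {
        const b = stack.pop()!;
        const a = stack.pop()!;
        stack.push(a * b);
        break;
      }
      case "store":
        scratch.set(ins.slot, stack.pop()!);
        break;
      case "load":
        stack.push(scratch.get(ins.slot) ?? 0);
        break;
    }
  }
  return { stack, scratch };
}

// int 12; dup; *; store 1; load 1  ->  144 on top of the stack
const result = run([
  { op: "int", value: 12 },
  { op: "dup" },
  { op: "*" },
  { op: "store", slot: 1 },
  { op: "load", slot: 1 },
]);
```

Because `load` does not clear the slot, appending a second `load` to the program would leave two copies of 144 on the stack.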
Figure: Loading Values ## Looping and Subroutines TEAL contracts written in version 4 or higher can use loops and subroutines. Loops can be performed using any of the branching opcodes `b`, `bz`, and `bnz`. For example, the TEAL below loops ten times.

```teal
#pragma version 10
// loop 1 - 10
// init loop var
int 0
loop:
int 1
+
dup
// implement loop code
// ...
// check upper bound
int 10
<=
bnz loop
// once the loop exits, the last counter value will be left on stack
```

Subroutines can be implemented using labels and the `callsub` and `retsub` opcodes. The sample below illustrates a simple subroutine call.

```teal
#pragma version 10
// jump to main
b main
// subroutine
my_subroutine:
// implement subroutine code
// with the two args
retsub
main:
int 1
int 5
callsub my_subroutine
return
```

## Dynamic Operational Cost Smart signatures are limited to 1000 bytes in size. Size encompasses the compiled program plus arguments. Smart contracts are limited to 2KB total for the compiled approval and clear programs. This size can be increased in 2KB increments, up to an 8KB limit for both programs. For optimal performance, smart contracts and smart signatures are also limited in opcode cost. This cost is evaluated when a smart contract runs and is representative of its computational expense. Every opcode executed by the AVM has a numeric value that represents its computational cost. Most opcodes have a computational cost of 1. Some, such as `SHA256` (cost 35) or `ed25519verify` (cost 1900), have substantially larger computational costs. The opcode reference lists the cost for every opcode. Smart signatures are limited to a total computational cost of 20,000. Smart contracts invoked by a single application transaction are limited to 700 for either of the programs associated with the contract. However, if the smart contract is invoked via a group of application transactions, the computational budget for approval programs is considered pooled. 
The total opcode budget will be 700 multiplied by the number of application transactions within the group (including inner transactions). So if the maximum transaction group size is used (i.e., 16 transactions) and the maximum number of inner transactions is used (i.e., 256 inner transactions), and all are application transactions, the computational budget would be 700 × (16 + 256) = 190,400. ## Tools and Development For developers who prefer Python or TypeScript, you can also write smart contracts in those languages using AlgoKit. AlgoKit abstracts many low-level details of TEAL while providing the same functionality. For debugging a smart contract in Python, refer to the debugging documentation.
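The pooled-budget arithmetic above is simple enough to express directly. This plain-TypeScript sketch uses the per-call budget and group/inner-transaction maximums quoted in this section; the function name is illustrative, not part of any Algorand SDK.

```typescript
// Opcode budget figures quoted in the text above.
const BUDGET_PER_APP_CALL = 700; // per application transaction
const MAX_GROUP_SIZE = 16;       // max transactions in an atomic group
const MAX_INNER_TXNS = 256;      // max inner transactions

// Pooled budget: 700 multiplied by the number of application
// transactions in the group, including inner transactions.
function pooledOpcodeBudget(appCalls: number, innerTxns: number): number {
  return BUDGET_PER_APP_CALL * (appCalls + innerTxns);
}

// Maximum possible budget: 700 * (16 + 256) = 190,400
const maxBudget = pooledOpcodeBudget(MAX_GROUP_SIZE, MAX_INNER_TXNS);
```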
# Algorand TypeScript
Algorand TypeScript is a partial implementation of the TypeScript programming language that runs on the Algorand Virtual Machine (AVM). It includes a statically typed framework for developing Algorand smart contracts and logic signatures, with TypeScript interfaces to underlying AVM functionality that work with standard TypeScript tooling. It maintains the syntax and semantics of TypeScript, so a developer who knows TypeScript can make safe assumptions about the behavior of the compiled code when running on the AVM. Algorand TypeScript is also executable TypeScript that can be run and debugged on a Node.js virtual machine with transpilation to ECMAScript and run from automated tests. ## Benefits of using Algorand TypeScript 1. Rapid development: TypeScript’s concise syntax allows for quick prototyping and iteration of smart contract ideas. 2. Lower barrier to entry: TypeScript’s popularity means more developers can transition into blockchain development without learning a new language. 3. Ease of Use: Algorand TypeScript is designed to work with standard TypeScript tooling, making it easy for developers familiar with TypeScript to start building smart contracts on Algorand. 4. Efficiency: Algorand TypeScript is compiled for execution on the AVM by PuyaTs, an optimizing compiler that ensures the execution semantics of the resulting AVM bytecode match the given TypeScript code. This makes deployment and calling easy. 5. Modularity: Algorand TypeScript supports modular solution components, facilitating efficient parallel development by small, effective teams, reducing architectural complexity, and allowing developers to pick and choose the specific tools and capabilities they want to use based on their needs and what they are comfortable with. 
Learn how to install and start writing Algorand TypeScript smart contracts ## TypeScript Implementation for AVM Algorand TypeScript maintains the syntax and semantics of TypeScript, supporting a subset of the language that will grow over time. However, due to the restricted nature of the AVM, it will never be a complete implementation. Learn more about the Algorand Virtual Machine (AVM) and its implementation constraints Algorand TypeScript is compiled for execution on the AVM by PuyaTs, a TypeScript frontend for the Puya optimizing compiler that ensures the execution semantics of the resulting AVM bytecode match the given TypeScript code. PuyaTs produces output directly compatible with AlgoKit typed clients to simplify deployment and calling. ## Differences from Standard TypeScript 1. Types Affect Behavior: In standard TypeScript, types (whether as annotations or type arguments) don’t affect the compiled JS. In Algorand TypeScript, however, types fundamentally change the compiled TEAL. For example, the literal expression `1` results in `int 1` in TEAL, but `1 as uint8` results in `byte 0x01`. This also means that arithmetic is done differently on these numbers and they have different overflow protections. 2. Numbers Can Be Bigger: In standard TypeScript, numeric literals with absolute values equal to 2^53 or greater are too large to be represented accurately as integers. In Algorand TypeScript, however, numeric literals can be much larger (up to 2^512) if properly typed as uint512. 3. Types May Be Required: All JavaScript is valid TypeScript, but that is not the case with Algorand TypeScript. In certain cases, types are required and the compiler will throw an error if they are missing. For example, types are always required when defining a method or when defining an array. ## Supported Primitives Algorand TypeScript supports several primitive types and data structures that are optimized for blockchain operations. 
These primitives are designed to work efficiently with the AVM while maintaining familiar TypeScript syntax. Understanding these primitives and their constraints is crucial for writing performant smart contracts. ### Static Arrays Static arrays are the most efficient and capable type of array in Algorand TypeScript development. They have a fixed length and offer improved performance and type safety. For example, `StaticArray<uint64, 10>` declares an array of 10 unsigned 64-bit integers. Static arrays can be partially initialized. Uninitialized elements default to undefined or zero bytes, depending on the context.

```ts
const x: StaticArray<uint64, 3> = [1] // [1, undefined, undefined]
const y: StaticArray<uint64, 3> = [1, 0, 0] // [1, 0, 0]
```

To iterate over a static array, use `for...of`, which provides a clean syntax and supports continue/break statements:

```ts
staticArrayIteration(): uint64 {
  const a: StaticArray<uint64, 3> = [1, 2, 3];
  let sum = 0;
  for (const v of a) {
    sum += v;
  }
  return sum; // 6
}
```

**Supported Methods**: `length` ### Dynamic Arrays Dynamic arrays are supported in Algorand TypeScript. Algorand TypeScript strips the length prefix of dynamic arrays at runtime. Nested dynamic types are encoded as dynamic tuples, which requires many more opcodes to read/write the tuple head and tail values. **Supported Methods**: `pop`, `push`, `splice`, `length` ### Pass by Reference All arrays and objects are passed by reference, even if in contract state, much like in TypeScript. Algorand TypeScript, however, will not let a function mutate an array that was passed as an argument. If you wish to pass by value you can use `clone`.

```ts
const x: uint64[] = [1, 2, 3];
const y = x;
y[0] = 4;
log(y); // [4, 2, 3]
log(x); // [4, 2, 3]
const z = clone(x);
z[1] = 5;
log(x); // [4, 2, 3] (note x has NOT changed)
log(z); // [4, 5, 3]
```

When instantiating an array or object, a type MUST be defined. For example, `const x: uint64[] = [1, 2, 3]`. If you omit the type, the compiler will throw an error. 
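The reference semantics in the example above mirror ordinary JavaScript/TypeScript. The following plain-TypeScript sketch reproduces it with `number[]` standing in for `uint64[]` and an array copy standing in for Algorand TypeScript's `clone`:

```typescript
const x: number[] = [1, 2, 3];

const y = x; // y aliases x: both names refer to the same array
y[0] = 4;    // mutating through y is visible through x as well

const z = [...x]; // a copy (shallow, which suffices for this flat array)
z[1] = 5;         // mutating z does not affect x

// x and y are now [4, 2, 3]; z is [4, 5, 3]
```

In Algorand TypeScript, `clone` performs this kind of copy for arbitrarily nested arrays and objects.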
### Objects Objects can be defined much like in TypeScript. The same efficiencies of static vs dynamic types also apply to objects. Under the hood, Algorand TypeScript objects are just tuples. For example, `[uint64, uint8]` is the same byteslice as `{ foo: uint64, bar: uint8 }`. The order of elements in the tuple depends on the order they are defined in the type definition. For example, the following definitions result in the same byteslice.

```ts
type MyType = { foo: uint64, bar: uint8 }
// ...
const x: MyType = { foo: 1, bar: 2 }
const y: MyType = { bar: 2, foo: 1 }
```

### Numbers #### Integers The Algorand Virtual Machine (AVM) natively supports unsigned 64-bit integers (uint64). Using uint64 for numeric operations ensures optimal performance. You can, however, use any of the number types defined in ARC-0004. You can define specific-width unsigned integers with the `UInt` generic type. This type takes one type argument, which is the bit width. The bit width must be divisible by 8.

```ts
// Correct: Unsigned 64-bit integer
const n1: UInt<64> = 1;
// Correct: Unsigned 8-bit integer
const n2: UInt<8> = 1;
```

#### Unsigned Fixed-Point Decimals To represent decimal values, use the `UFixed` generic type. The first type argument is the bit width, which must be divisible by 8. The second argument is the number of decimal places, which must be less than 160.

```ts
// Correct: Unsigned 64-bit with two decimal places
const price: UFixed<64, 2> = 1.23;
// Incorrect: Missing type definition
const invalidPrice = 1.23; // ERROR: Missing type
// Incorrect: Precision exceeds defined decimal places
const invalidPrice2: UFixed<64, 2> = 1.234; // ERROR: Precision of 2 decimal places, but 3 provided
```

#### Math Operations Algorand TypeScript requires explicit handling of math operations to ensure type safety and prevent overflow errors. Here are the key points about math operations: 1. Basic arithmetic operations (`+`, `-`, `*`, `/`) are supported but require explicit type handling 2. 
Results of math operations must be explicitly typed using either: * A constructor: `const sum = Uint64(x + y)` * Type annotation: `const sum: uint64 = x + y` * Return type annotation: `function add(x: uint64, y: uint64): uint64 { return x + y }` 3. For non-uint64 types, overflow checks are performed at construction time:

```ts
const a = UintN8(255);
const b = UintN8(255);
const c = UintN8(a + b); // Error: Overflow
```

4. For better performance with smaller integer types, use uint64 for intermediate calculations:

```ts
const a: uint64 = 255;
const b: uint64 = 255;
const c: uint64 = a + b;
return UintN8(c - 255); // Only convert at the end
```

### Limitations While TypeScript offers a rich set of primitives, certain features and types are either unsupported or have significant limitations within the Algorand ecosystem. 1. Dynamic types and booleans are much more expensive to use and have some limitations. 2. Anything beyond dynamic arrays of static types is very inefficient and hence not recommended. For example, `uint64[]` is fairly efficient but `uint64[][]` is much less efficient. Nested dynamic types are encoded as dynamic tuples, which requires many more opcodes to read/write the tuple head and tail values. 3. Algorand TypeScript will not let a function mutate an array that was passed as an argument. 4. Instantiating a static array by putting the length in brackets (e.g., `uint64[10]`) is NOT valid TypeScript syntax and is thus not supported by Algorand TypeScript. 5. `forEach` is not supported in Algorand TypeScript. Use `for...of` loops instead, which also enable continue/break functionality. 6. Dynamic arrays support the `splice` method, but it is rather heavy in terms of opcode cost, so it should be used sparingly. 7. No `Object` methods are supported in Algorand TypeScript. 8. At the TypeScript level, all numbers are aliases to the standard number class. 
This is to ensure all arithmetic operators function on all numeric types as expected, since operators cannot be overloaded in TypeScript. As such, any number-related type errors might not show in the IDE and will only throw an error during compilation. ## PuyaTs Compiler Algorand TypeScript is compiled for execution on the AVM by PuyaTs, a TypeScript frontend for the Puya optimizing compiler that ensures the execution semantics of the resulting AVM bytecode match the given TypeScript code. PuyaTs produces output that is directly compatible with AlgoKit typed clients to make deployment and calling easy. ## Testing and Debugging The `algorand-typescript-testing` package allows for efficient unit testing of Algorand TypeScript smart contracts in an offline environment. It emulates key AVM behaviors without requiring a network connection, offering fast and reliable testing capabilities with a familiar TypeScript interface. Learn how to unit test your Algorand TypeScript smart contracts in an offline environment Discover tools and techniques for debugging Algorand TypeScript smart contracts ## Algorand TypeScript VS Code Extension The Algorand TypeScript extension (currently in beta) enhances Visual Studio Code with language server capabilities tailored for Algorand smart contract development in TypeScript. Working seamlessly alongside your installed TypeScript language server, it provides Algorand-specific code analysis, diagnostics, and validation for smart contracts. The extension offers intelligent quick fixes for common issues and displays relevant errors and warnings, helping developers learn and build with Algorand TypeScript more effectively through real-time feedback and suggested corrections. This extension brings language-server-powered capabilities to your smart contract authoring experience in VS Code ## Best Practices 1. Use Static Types: Always define explicit types for arrays, tuples, and objects to leverage TypeScript’s static typing benefits. 2. 
Prefer `UInt<64>`: Utilize `UInt<64>` for numeric operations to align with AVM’s native types, enhancing performance and compatibility. 3. Use the `StaticArray` generic type to define static arrays and avoid specifying array lengths using square brackets (e.g., `number[10]`), as that is not valid TypeScript syntax in this context. 4. Limit Dynamic Arrays: Avoid excessive use of dynamic arrays, especially nested ones, to prevent inefficiencies. Also, `splice` is rather heavy in terms of opcode cost, so it should be used sparingly. 5. Immutable Data Structures: Use immutable patterns for arrays and objects. Instead of mutating arrays directly, create new arrays with the desired changes (e.g., `myArray = [...myArray, newValue]`). 6. Efficient Iteration: Use `for...of` loops for iterating over arrays, which also enables continue/break functionality. 7. Type Casting: Use constructors (e.g., `UintN8`, `UintN<64>`) rather than the `as` keyword for type casting. ## Resources for Further Learning A comprehensive tutorial for beginners on writing, compiling, and debugging smart contracts with Algorand TypeScript
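To illustrate best practice 7 (and the construction-time overflow checks described earlier), here is a plain-TypeScript sketch. `checkedUint8` is a hypothetical stand-in for a `UintN8`-style constructor — it is not the Algorand TypeScript API, just a model of its range-checking behavior:

```typescript
// Hypothetical stand-in for a UintN8-style constructor: it range-checks
// its argument at construction time, mirroring the overflow behavior
// described in this document.
function checkedUint8(n: number): number {
  if (!Number.isInteger(n) || n < 0 || n > 255) {
    throw new RangeError(`value ${n} does not fit in a uint8`);
  }
  return n;
}

const a = checkedUint8(255);
const b = checkedUint8(255);

// checkedUint8(a + b) would throw here: 510 overflows a uint8.
// Doing the intermediate math in a wider type and converting only
// at the end succeeds:
const c = checkedUint8(a + b - 255); // 255, which fits
```

This mirrors the recommendation to do intermediate calculations in uint64 and convert to a narrower type only at the end.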
# Smart Contract Development Lifecycle
This page will walk you through the Algorand smart contract development lifecycle, a comprehensive process that guides developers from initial setup to final deployment on MainNet. By following these steps with AlgoKit, you’ll be able to build robust, secure, and efficient smart contracts in either Python or TypeScript. ## Project Initialization ### Environment setup Before you start coding, it’s crucial to have a reliable development environment. AlgoKit streamlines this process by automatically installing all required dependencies and configuring a local network. Rather than manually setting up nodes or downloading multiple tools, you simply run a few commands and let AlgoKit handle the rest. This approach not only saves time but also reduces setup errors, ensuring you have a consistent environment across different machines and team members. ### Base Project * Run `algokit init` and choose one of the templates to generate a base project structure. * This project will include all files needed to start coding immediately, and by using `algokit project bootstrap` you will have all your project dependencies up and running. ### Defining your goals and logic Before writing any code, take a moment to outline your contract’s objectives, logic, and flow. Think about how users will interact with your contract, what data it will store, and any conditions or validations needed. Given that AlgoKit abstracts away many low-level details, you can focus purely on your dapp functionality instead of dealing with TEAL code or low-level artifacts. This top-down approach helps you stay organized and makes the development experience smoother from the outset. ## Implementation ### Write your Smart Contract logic Your main task is to implement the business logic behind your application. You can choose between Algorand Python and Algorand TypeScript. With AlgoKit Utils, the compilation and deployment steps are a smooth process. You won’t need to worry about generating TEAL or managing separate files for approval and clear programs. 
AlgoKit does all of that for you under the hood. This approach encourages clean, readable code while still harnessing the full power of the Algorand blockchain. It also makes it easier for developers who are familiar with Python or TypeScript to contribute without learning a new, lower-level language. ### Enhanced IDE Support with VS Code Extensions To streamline your smart contract development experience, AlgoKit provides dedicated VS Code extensions that work alongside standard language servers to deliver Algorand-specific features. These extensions provide real-time validation, diagnostics, and intelligent code assistance as you write your contracts. #### Algorand Python Extension The Algorand Python language extension brings language server powered capabilities to your smart contract authoring experience in Visual Studio Code. It extends the results from your installed Python language server to provide Algorand Python specific diagnostics and code actions. **Key capabilities:** * Works alongside your installed Python language server * Automatically discovers the PuyaPy version installed in your project venv * Algorand Python smart contract-aware code analysis, diagnostics and validation * Quick fixes for common Algorand Python issues, helping you learn the Algorand Python language as you build **Requirements:** * 1.80.0 or higher * 3.12 or higher * 5.3.0 or higher Install the Algorand Python VS Code extension (currently in `beta`) #### Algorand TypeScript Extension The Algorand TypeScript language extension brings language server powered capabilities to your smart contract authoring experience in Visual Studio Code. It extends the results from your installed TypeScript language server to provide Algorand TypeScript specific diagnostics and code actions. 
**Key capabilities:** * Works alongside your installed TypeScript language server * Algorand TypeScript smart contract-aware code analysis, diagnostics and validation * Quick fixes for common TypeScript issues, helping you learn the Algorand TypeScript language as you build **Requirements:** * 1.80.0 or higher * 1.0.1 or higher Install the Algorand TypeScript VS Code extension (currently in `beta`) ### Build and generate artifacts Once your contract logic is ready, you can run:

```bash
algokit project run build
```

This command compiles your smart contract into the artifacts required for both testing and deployment. In practical terms, these artifacts are machine-readable files that the network will execute. They’re stored in a well-organized location within your project, keeping everything neat and accessible. With these artifacts in place, you have all you need for the next phases—testing, auditing, and eventually deploying your contract. The simplicity of this process means you can iterate on your logic quickly without getting bogged down in technical details. With AlgoKit, you can focus on writing clear, maintainable code. You won’t need to manually define separate contract programs or worry about complexities like approval/clear distinctions. ## Local Testing [Algorand Smart Contract Testing](https://www.youtube.com/embed/zlg6AizzQhw?rel=0) ### Unit Testing Quality assurance is essential for any application, and smart contracts are no exception. AlgoKit provides built-in support for local testing through tools like algopy-test. These tests run directly against your contract code, allowing you to verify each function’s correctness and spot logical errors early in the development cycle. Additionally, AlgoKit manages a local development network automatically, meaning you don’t have to spin up Docker containers or manually configure node settings. Each time you update your code, you can re-run your unit tests to catch regressions immediately. 
This practice leads to more stable code and fewer surprises later on. Learn more about unit testing for Algorand TypeScript Learn more about unit testing for Algorand Python ### Explore with Lora While unit tests cover your basic logic, sometimes you need a more visual approach to verify how your contracts behave in an actual blockchain environment. This is where Lora comes in. Lora acts as a localnet explorer that lets you visualize transactions, monitor contract states, and confirm that your application behaves as expected. It’s especially useful for understanding the real flow of funds or data through your contract. By combining structured tests with a hands-on explorer like Lora, you get a comprehensive understanding of your contract’s performance and reliability in a controlled setting. Learn more about using Lora to accelerate your builds ## TestNet Testing Once your contract passes local tests, you can deploy it to the public Algorand TestNet to validate performance in a live environment without risking real ALGO. * Deployment to TestNet - Use `algokit deploy` pointing to TestNet for a quick and automated setup. * Exploration and Verification - Check your contract interactions using Lora or another block explorer configured for TestNet. * Programmatic Interaction - From your own scripts or applications, you can interact with the deployed contract using AlgoKit Utils in Python or TypeScript. This helps you confirm that transaction flows and other on-chain behaviors work as intended. ## Audit Security and correctness are paramount for any on-chain application: * Internal Reviews Encourage your team to review the code, focusing on best practices, clarity, and maintainability. Peer reviews often catch minor issues that automated tests don’t. * Third-Party Audits Professional auditors or community experts bring a fresh perspective and can identify security loopholes or design flaws. 
Their evaluations might include stress tests, code analysis, and reviews of common pitfalls. * Common Issues Even with thorough testing and reviews, bugs happen. Common problems often involve unexpected edge cases, incorrect assumptions about network behavior, or mismanagement of user permissions. Address these vulnerabilities promptly to avoid costly problems on MainNet. You may get help from the community on Discord or the Algorand Forum. ## Deploy to MainNet When you’re confident in your contract’s stability, you can deploy to the Algorand MainNet: * AlgoKit Deploy By running `algokit deploy` configured for MainNet, you’ll publish your contract to the live Algorand network. AlgoKit automates the inclusion of final parameters, ABI handling, and artifact generation, ensuring that your deployment process is smooth and reliable. * Alternative Approaches While AlgoKit is the recommended solution for most scenarios, you might need a tailored script for advanced use cases. In such cases, you can still leverage AlgoKit Utils or integrate other methods to achieve your desired results. * Verification Once deployed, verify that the on-chain details—such as global or local states—match what you expect. This final check confirms everything is set up properly and no additional initialization calls are needed. Since Algorand’s blockchain is immutable, any mistake here can be costly or permanent, making it essential to double-check configurations. ## Optional Frontend Integration Not all smart contracts require a user-facing interface, but if you’re building a dApp that users interact with directly, frontend integration becomes an important step: you can build your UI using any popular web framework—React, Angular, or Vue—and connect to your smart contract with AlgoKit Utils, which includes client methods, making it simple to invoke contract functions, handle user signatures, and respond to on-chain events without manually crafting transaction objects.
# Logic Signatures
Logic Signatures (LogicSigs) are a feature in Algorand that allows transactions to be authorized using a TEAL program. These signatures are used to sign transactions from either a ***Contract Account*** or a ***Delegated Account***. Logic signatures contain logic used to sign transactions. When submitted with a transaction, the Algorand Virtual Machine (AVM) evaluates the logic to determine whether the transaction is authorized. If the logic fails, the transaction will not execute. Compiled logic signatures generate a corresponding Algorand account that can hold Algos or assets. Transactions from this account require successful logic execution. Alternatively, logic signatures can delegate signature authority, where another account signs the logic signature to authorize transactions from the original account. ## Niche Use Cases for Logic Signatures While smart contracts are the preferred solution in most cases, logic signatures can be useful for: 1. ***Costly Operations***: Logic signatures can be used for tasks that require expensive operations like ed25519verify but are not part of a composable smart contract. 2. ***Free Transactions***: Logic signatures can allow certain users to send specific transactions without paying fees, as long as the logic restricts the transaction rate. 3. ***Delegated Authority***: Used in cases where certain operations need to be delegated to another account or key, such as transferring assets in a custodial system. 4. ***Escrow/Contract Accounts***: In cases requiring conditional spending based on specific logic. However, smart contracts are generally preferred when dealing with escrow scenarios. ## Logic Signature Structure Logic Signatures are structures that contain four parts and are considered valid if one of the following scenarios is true:  Figure: Logic Signature Structure 1. ***Signature (Sig)***: A valid signature of the program from the account sending the transaction. 2. 
***Multi-Signature (Msig)***: A valid multi-signature of the program from the multi-sig account sending the transaction. 3. ***Program Hash***: The hash of the program matches the sender’s address. In the first two cases, delegation is possible, allowing account owners to sign the logic signature and authorize transactions on their behalf. The third case pertains to ***Contract Accounts***, where the program fully governs the account, and Algos or assets can only leave the account when the logic approves a transaction. ## Computational Cost Smart contracts and logic signatures are also limited in opcode cost for optimal performance. This cost is evaluated when a smart contract runs and represents its computational expense. Every opcode executed by the AVM has a numeric value that represents its computational cost. Most opcodes have a computational cost of 1. Some, such as `SHA256` (cost 35) or `ed25519verify` (cost 1900), have substantially larger computational costs. The opcode reference lists the cost for every opcode. Logic signatures are limited to a total computational cost of 20,000. In comparison, smart contracts are limited to ***700 opcode cost per application transaction***, so logic signatures offer significantly more computational headroom, allowing more expensive operations (such as `ed25519verify`) within a single transaction. ## Modes of Use Logic signatures have two basic usage scenarios: as a ***contract account*** or as a ***delegated signature***. These modes approve transactions in different ways, which are described below. Both modes use Logic Signatures. While using logic signatures for contract accounts is possible, it is now also possible to create a contract account using a smart contract. 1. ***Contract Account Mode***: When compiled, a logic signature generates an Algorand address. This address functions like a regular account but is governed by the logic in the logic signature. 
Funds in the account can only be spent when a transaction satisfies the logic of the signature.
2. ***Delegated Signature Mode***: An account can sign a TEAL program, delegating authority to use the signature for future transactions. For instance, a user can create a recurring payment logic signature and allow a vendor to use it to collect payments within predefined limits.

### Contract Account

For each unique compiled logic signature program there exists a single corresponding Algorand address. To use a TEAL program as a contract account, send Algos to its address to turn it into an account on Algorand with a balance. Outwardly, this account looks no different from any other Algorand account, and anyone can send it Algos or Algorand Standard Assets to increase its balance. The account differs in how spends from it are authenticated: the logic determines whether the transaction is approved. To spend from a contract account, create a transaction that will evaluate to true against the TEAL logic, then add the compiled TEAL code as its logic signature. It is worth noting that anyone can create and submit a transaction that spends from a contract account, as long as they have the compiled TEAL program to attach as a logic signature.

Figure: TEAL Contract Account

### Delegated Approval

Logic signatures can also be used to delegate signature authority: a private key signs a TEAL program, and the resulting output can be used as a signature in transactions on behalf of the account associated with that private key. The owner of the delegated account can share this logic signature, allowing anyone to spend funds from the account according to the logic within the TEAL program. For example, if Alice wants to set up a recurring payment with her utility company for up to 200 Algos every 50000 rounds, she creates a TEAL contract that encodes this logic, signs it with her private key, and gives it to the utility company. 
The utility company uses that logic signature in the transaction it submits every 50000 rounds to collect payment from Alice. The logic signature can be produced from either a single or multi-signature account.

Figure: TEAL Delegated Signature

A carelessly written delegated contract like this can suffer from several vulnerabilities:

1. ***CloseRemainderTo vulnerability***: If the code doesn't check the `CloseRemainderTo` field, a transaction can drain the account by closing it to another address.
2. ***RekeyTo vulnerability***: Without a check on the `RekeyTo` field, a transaction could rekey the account to another address, causing loss of authorization.
3. ***Fee draining***: If the code doesn't limit the transaction fee, the account can be drained via high fees.
4. ***Lack of group transaction checks***: If the LogicSig is used in a group transaction, it could be called multiple times, potentially leading to unexpected behavior.
5. ***No expiration mechanism***: A LogicSig without an expiration mechanism remains valid indefinitely, which is a long-term security risk.

## Transition from Logic Signatures to Smart Contracts

Historically, logic signatures were the only way to write smart contracts on Algorand before applications were introduced. They were used in specific scenarios, including asset transfers, multisignature transactions, and atomic swaps. Applications, which came later, offer enhanced functionality and flexibility, but logic signatures remain useful for simpler operations.

* ***Escrow accounts***: Before inner transactions, contract accounts were used as escrows. Since AVM 1.0/TEAL v5, application accounts with inner transactions are preferred. Rare cases (such as TEAL v8 limits) may still require contract accounts for specific methods, but this should be minimized.
* ***Multiple escrow accounts***: Rekeying accounts to the application account simplifies managing multiple escrows. 
With advancements in Algorand (such as inner transactions and storage boxes), many use cases previously handled by logic signatures can now be implemented more efficiently with smart contracts. It is generally recommended to migrate to smart contracts unless a specific use case requires logic signatures. While logic signatures offer certain niche benefits, their use should be limited to the specific scenarios discussed in the niche use cases section. Refer to the code example section, which demonstrates how to use logic signatures. For most applications, especially those involving complex dApp logic, inner transactions, or composability, smart contracts are the preferred solution. If using logic signatures, ensure strict validation of transaction fields and implement expiration mechanisms to mitigate risks.

## Code Example

The first example checks that the transaction is a self-payment with zero amount and no rekey or close actions. It ensures the transaction happens within a specific round range and prevents replay attacks by verifying the genesis hash and lease field. The second logic signature ensures the contract will cover the fee for a prior transaction in a group, which must be an application call to a known app. It confirms the fee for this app call is zero and ensures the conditions are met for the payment to proceed.

## Limitations and Considerations

* ***Security Considerations***: Logic signatures do not inherently define how frontends, particularly wallets, should treat them. It is recommended that only the signing of audited or otherwise trusted logic signatures be supported; the decision as to which logic signatures may be signed rests solely with the frontends.
* ***Auditability and Flexibility on Upgrading***: Logic signatures are harder to audit than smart contracts in most settings and less flexible than smart contracts. 
While some simple dApps could be based on logic signatures, adding any feature would become problematic, and any upgrade would most likely be impossible.
* ***Lack of Standardized ABI***: Unlike smart contracts, which have ARC-4, logic signatures do not have a standardized ABI (Application Binary Interface).
* ***Potential for Malicious Use***: Most wallets do not support signing delegated logic signatures, as this operation is potentially dangerous. A malicious delegated logic signature can remain dormant for years and then be used to siphon funds from an account much later.
* ***Non-expiration***: Logic signatures do not expire by default. It is always recommended to include an expiration block in the logic to prevent any LogicSig from being valid indefinitely. This helps mitigate long-term security risks.
* ***Size and Cost Constraints***: The maximum size of compiled TEAL code combined with its arguments is 1000 bytes, and the maximum cost of the TEAL code is 20000.
* ***Public Nature of Code and Arguments***: The logic signature code, the transaction fields, and the arguments of the logic signature are all public. An attacker can replay a transaction signed by a logic signature. Also, arguments of logic signatures are not signed by the sender account and are not part of the computation of the group ID.
* ***Network Considerations***: The same logic signature can be used on multiple networks. If a logic signature is signed with the intent of using it on TestNet, that same transaction can be sent to MainNet with that same logic signature. It is always recommended to check which network the logic signature is running on, for example by verifying the genesis hash.
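The recommendations above (validate transaction fields, cap fees, forbid rekeying and close-outs, pin the network, and include an expiration) can be sketched off-chain as plain Python over a dict of transaction fields. This is a minimal illustrative model, not TEAL or SDK code; the field names, fee cap, and expiry round are assumptions for the sketch.

```python
# Minimal sketch of the field checks a safe delegated LogicSig should
# enforce, modeled as plain Python over a dict of transaction fields.
# Field names, the fee cap, and the expiry round are illustrative.
ZERO = ""  # stand-in for the zero address (no close-out / no rekey)

def lsig_approves(txn, genesis_hash, max_fee=1_000, expiry_round=30_000_000):
    return (
        txn.get("close_remainder_to", ZERO) == ZERO   # block account close-out
        and txn.get("rekey_to", ZERO) == ZERO          # block rekeying
        and txn["fee"] <= max_fee                      # block fee draining
        and txn["last_valid"] <= expiry_round          # expiration mechanism
        and txn["genesis_hash"] == genesis_hash        # pin to one network
    )

safe_txn = {"fee": 1_000, "last_valid": 25_000_000, "genesis_hash": "testnet-gh"}
drain_txn = dict(safe_txn, close_remainder_to="ATTACKER")
```

With these checks, the draining transaction is rejected, and replaying the same signed logic on another network fails the genesis hash comparison.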
# Opcodes Overview
TEAL is an assembly language syntax used to write programs that are executed by the Algorand Virtual Machine (AVM). These programs can function as either Smart Signatures or Smart Contracts. The AVM is a bytecode-based stack interpreter that processes TEAL programs. Each opcode in TEAL performs a specific operation, manipulating data on the stack or interacting with the blockchain's state. Algorand periodically updates TEAL to introduce new features and opcodes. A comprehensive list of opcodes, organized by TEAL version, is available in the opcodes reference. TEAL opcodes are categorized based on their functionality:

1. Stack Manipulation: Opcodes such as `pushint` and `pop` manipulate values on the stack.
2. Arithmetic Operations: Opcodes such as `+`, `-`, and `*` perform mathematical computations.
3. Bitwise Operations: Opcodes such as `getbit` and `setbit` allow for bit-level data manipulation.
4. Control Flow: Opcodes such as `bz` (branch if zero) and `bnz` (branch if not zero) enable conditional logic.
5. Cryptographic Operations: Opcodes such as `ed25519verify` provide signature verification capabilities.

## High-Level Languages

While TEAL provides a low-level approach to writing smart contracts, developers often prefer high-level languages (HLLs) that compile down to TEAL bytecode. This abstraction simplifies the development process and reduces the potential for errors. Algorand provides high-level languages such as Algorand Python and Algorand TypeScript, which allow developers to write smart contract logic in a more familiar syntax that is then compiled into TEAL for execution on the Algorand Virtual Machine (AVM). Additionally, AlgoKit simplifies the development and deployment of these smart contracts.
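To give a feel for how a stack interpreter like the AVM evaluates opcodes, here is a toy Python evaluator for a few TEAL-like operations. The opcode set and tuple encoding are simplified stand-ins, not real TEAL semantics; consult the opcodes reference for the actual instruction set.

```python
# Toy stack machine sketch: each opcode pops its operands from the stack
# and pushes its result, loosely mirroring how the AVM executes TEAL.
def evaluate(program):
    stack = []
    for instr in program:
        op, *args = instr
        if op == "pushint":            # push an integer literal
            stack.append(args[0])
        elif op == "+":                # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "==":               # pop two values, push 1 if equal else 0
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a == b else 0)
        else:
            raise ValueError(f"unknown opcode: {op}")
    # Like the AVM, approve when execution ends with a single nonzero value.
    return len(stack) == 1 and stack[-1] != 0

# 2 + 3 == 5, so this toy program approves.
program = [("pushint", 2), ("pushint", 3), ("+",), ("pushint", 5), ("==",)]
```

The approve-on-single-nonzero-value convention at the end matches how AVM programs signal success.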
# Overview
Algorand Smart Contracts (ASC1) are self-executing programs deployed on the Algorand blockchain that enable developers to build secure, scalable decentralized applications. Smart contracts on Algorand can be written in Algorand Python, Algorand TypeScript, or directly in TEAL. Smart contract code written in TypeScript or Python is compiled to TEAL, an assembly-like language that is interpreted by the Algorand Virtual Machine (AVM) running within an Algorand node. [Algorand Smart Contract Concepts](https://www.youtube.com/embed/cE4NHb_19vg?rel=0) Smart contracts are separated into two main categories: Applications and Logic Signatures.

## Applications

When you deploy a smart contract to the Algorand blockchain, it becomes an Application with a unique Application ID. These Applications can be interacted with through special transactions called Application Calls. Applications form the foundation of decentralized applications (dApps) by handling their core on-chain logic.

* Applications can **modify state** associated with the application as global state, or as local state for specific application and account pairs.
* Applications can **access** on-chain values, such as account balances, asset configuration parameters, or the latest block time.
* Applications can **execute inner transactions** during their execution, allowing one application to call another. This enables composability between applications.
* Each Application has an **Application Account** which can hold Algo and Algorand Standard Assets (ASAs), making it useful as an on-chain escrow.

To provide a standard method for exposing an API and encoding/decoding data types from application call transactions, the ARC-4 ABI should be used. Learn how to build and deploy Algorand smart contracts.

## Logic Signatures

Logic Signatures are programs that validate transactions through custom rules and are primarily used for signature delegation. When submitting a transaction with a Logic Signature, the program code is included and evaluated by the Algorand Virtual Machine (AVM). 
The transaction only proceeds if the program successfully executes - if the program fails, the transaction is rejected. Logic Signatures can be used in two ways. First, they can create specialized Algorand accounts that hold Algo or assets. These accounts only release funds when a transaction meets the conditions specified in the program. Second, they enable account delegation, where an account owner can define specific transaction rules that allow another account to act on their behalf. Each transaction using a Logic Signature is independently verified by an Algorand node using the AVM. These programs have limited access to global variables, temporary scratch space, and the properties of the transaction(s) they are validating. Learn how to create and use Logic Signatures for transaction validation and account delegation ## Writing Smart Contracts Algorand smart contracts are written in standard Python and TypeScript - known as Algorand Python and Algorand TypeScript in the ecosystem. These are not special variants or supersets, but rather standard code that compiles to TEAL. This means developers can use their existing knowledge, tools, and practices while building smart contracts. The direct compilation to TEAL for the Algorand Virtual Machine (AVM) provides an ideal balance of familiar development experience and blockchain performance. ## Key Concepts Understanding these fundamental concepts is essential for developing effective smart contracts on Algorand. The runtime environment that executes TEAL code. Understanding AVM versions, opcodes, and constraints is crucial for advanced contract design. Enable an application to submit sub-transactions on behalf of its account—creating or transferring assets, calling other applications, etc. Each opcode and AVM operation has a cost, tracked during execution. Exceeding cost limits leads to failure. This ensures transactions complete quickly, preventing denial-of-service. 
Smart contract logic is limited by features like maximum TEAL program size, global/local state keys, box storage, etc. These constraints keep on-chain execution efficient and stable.
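The opcode cost accounting described in the key concepts above can be sketched as a small calculator. Only the `sha256` (35) and `ed25519verify` (1900) costs and the 700/20,000 budgets come from this documentation; treating every other opcode as cost 1 is a simplifying assumption for the sketch.

```python
# Sketch: tallying opcode cost against AVM execution budgets.
OPCODE_COSTS = {"sha256": 35, "ed25519verify": 1900}  # costs from the docs
APP_CALL_BUDGET = 700        # per application call transaction
LOGICSIG_BUDGET = 20_000     # total budget for a logic signature

def program_cost(opcodes):
    """Most opcodes cost 1; a few cryptographic ones cost much more."""
    return sum(OPCODE_COSTS.get(op, 1) for op in opcodes)

# Ten signature verifications fit comfortably within a logic signature's
# budget but blow far past a single app call's 700.
ops = ["ed25519verify"] * 10 + ["+"] * 5
cost = program_cost(ops)  # 10 * 1900 + 5 * 1 = 19005
```

This illustrates why computationally heavy verification logic is one of the few remaining niches for logic signatures.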
# Resource Usage
Algorand smart contracts do not have default access to the entire blockchain ledger. Therefore, when a smart contract method needs to access resources such as accounts, assets (ASAs), other applications (smart contracts), or box references, these must be provided through the reference arrays during invocation. This page explains what reference arrays are, why they are necessary, the different ways to provide them, and includes a series of code examples.

## Resource Availability

When smart contracts are executed, they may require data stored within the blockchain ledger for evaluation. For this data (a resource) to be accessible to the smart contract, it must be made available. Saying that a resource is "available" to the smart contract means that a reference to that resource was provided in the appropriate reference array when the smart contract method requiring it was invoked.

### What are Reference Arrays?

There are four reference arrays:

* Accounts: References to Algorand accounts
* Assets: References to Algorand Standard Assets
* Applications: References to external smart contracts
* Boxes: References to boxes created within the smart contract

Including the necessary resources in the appropriate arrays enables the smart contract to access the data it needs during execution, such as reading an account's Algo balance or examining the immutable properties of an ASA. This page explains how data access is managed by a smart contract in version 9 or later of the Algorand Virtual Machine (AVM). For details on earlier AVM versions, refer to the documentation for those versions. By default, the reference arrays are empty, with the exception of the accounts and applications arrays: the Accounts array contains the transaction sender's address, and the Applications array contains the called smart contract's ID. 
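The defaults just described can be modeled as follows. This is an illustrative sketch, not an SDK API: it simply shows which references every app call starts with before any extra resources are supplied.

```python
# Sketch: the reference arrays an app call starts with by default.
def default_reference_arrays(sender_address, called_app_id):
    return {
        "accounts": [sender_address],      # the sender is always available
        "assets": [],                      # empty by default
        "applications": [called_app_id],   # the called app is always available
        "boxes": [],                       # empty by default
    }

# A hypothetical call to app 1234 from some sender address:
arrays = default_reference_arrays("SENDER_ADDR", 1234)
```

Any further resource a method touches has to be added to these arrays explicitly (or via automatic resource population, covered below on this page).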
### Types of Resources to Make Available Using these four reference arrays, you can make the following six unique ledger items available during smart contract execution: account, asset, application, account+asset, account+application, and application+box. Accounts and Applications can contain sublists with potentially large datasets. For example, an account may opt into an extensive set of assets or applications which store the user’s local state. Additionally, smart contracts can store potentially unlimited boxes of data within the ledger. For instance, a smart contract might create a unique box of arbitrary data for each user. These combinations, account+asset, account+application, and application+box, represent cases where you need to access data that exists at the intersection of two resources. For example: * Account+Asset: To read what the balance of an asset is for a specific account, both the asset and the account reference must be included in the respective reference arrays. * Account+Application: To access an account’s local state of an application, both the account and the application reference must be included in the respective reference arrays. * Application+Box: To retrieve data from a specific box created by an application, the application and the box reference must be included in the respective reference arrays. ### Inner Transaction Resource Availability When a smart contract executes an inner transaction to call another smart contract, the inner contract inherits all resource availability from the top-level contract. Here’s an example: Let’s say contract A sends an inner transaction that calls a method in contract B. If contract B’s method requires access to asset XYZ, you only need to provide the asset reference when calling contract A, while still properly referencing contract B in the Applications array. This makes asset XYZ available to contract B through the resource availability inherited from contract A. 
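The inheritance rule in the example above (contract A calling contract B) can be sketched as a simple copy of the outer call's availability: the inner call sees everything the top-level call made available. Purely illustrative; real availability is tracked by the AVM, not by contract code.

```python
# Sketch: an inner app call inherits the outer call's resource availability.
def inner_availability(outer_arrays):
    """The inner contract sees every resource the top-level call provided."""
    return {kind: list(refs) for kind, refs in outer_arrays.items()}

# Calling contract A (app 1111) with asset XYZ (id 123) in the assets array
# and contract B (app 2222) in the applications array makes asset XYZ
# available to B through inheritance. IDs here are made up.
outer = {"accounts": [], "assets": [123], "applications": [1111, 2222], "boxes": []}
inherited = inner_availability(outer)
```

So the caller of contract A supplies asset XYZ once, and contract B can read it without its own reference.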
### Reference Array Constraints and Requirements

There are certain limitations and requirements to consider when providing references in the reference arrays:

* The four reference arrays are limited to a combined total of eight values per application transaction. This limit excludes the default references to the transaction sender's address and the called smart contract ID.
* The accounts array can contain no more than four accounts.
* The values passed into the reference arrays can change per application transaction.
* When accessing a sublist item, the application transaction must include both the top-level item and the nested list item within the same call. For example, to read an ASA balance for a specific account, the account and the asset must be present in the respective accounts and assets arrays of that transaction.

### Reason for Limited Access to Resources

To maintain a high level of performance, the AVM restricts how much of the ledger can be viewed within a single contract execution. This is implemented with the reference arrays passed with each application call transaction, which define the specific ledger items available during execution. These arrays are the Accounts, Assets, Applications, and Boxes arrays.

### Resource Sharing

Resources are shared across transactions within the same atomic group. This means that if two app calls in the same atomic group call different smart contracts, the two smart contracts share resource availability. For example, say you have two smart contract call transactions grouped together, transaction #1 and transaction #2. Transaction #1 has asset 123456 in its assets array, and transaction #2 has asset 555555 in its assets array. Both assets are available to both smart contract calls during evaluation. When accessing a sublist resource (account+asset, account+application local state, application+box), both resources must be in the same transaction's arrays. 
For example, you cannot have account A in transaction #1 and asset Z in transaction #2 and then try to get the balance of asset Z for account A. Asset Z and account A must be in the same application transaction. If asset Z and account A are in transaction #1's arrays, A's balance for Z is also available to transaction #2 during evaluation. Because Algorand supports grouping up to 16 transactions, this pushes the available resources up to 8x16, or 128 items, if all 16 transactions are application transactions. If an application transaction is grouped with other types of transactions, additional resources are made available to the smart contract called in the application transaction. For example, if an application transaction is grouped with a payment transaction, the payment transaction's sender and receiver accounts are available to the smart contract. If the CloseRemainderTo field is set, that account is also available to the smart contract. The table below summarizes what each transaction type adds to resource availability.

| Transaction | Transaction Type | Availability Notes |
| ------------------- | ---------------- | ------------------ |
| Payment | `pay` | `txn.Sender`, `txn.Receiver`, and `txn.CloseRemainderTo` (if set) |
| Key Registration | `keyreg` | `txn.Sender` |
| Asset Config/Create | `acfg` | `txn.Sender`, `txn.ConfigAsset`, and the `txn.ConfigAsset` holding of `txn.Sender` |
| Asset Transfer | `axfer` | `txn.Sender`, `txn.AssetReceiver`, `txn.AssetSender` (if set), `txn.AssetCloseTo` (if set), `txn.XferAsset`, and the `txn.XferAsset` holding of each of those accounts |
| Asset Freeze | `afrz` | `txn.Sender`, `txn.FreezeAccount`, `txn.FreezeAsset`, and the `txn.FreezeAsset` holding of `txn.FreezeAccount`. 
The `txn.FreezeAsset` holding of `txn.Sender` is not made available |

## Different Ways to Provide References

There are different ways you can provide resource references when calling smart contract methods:

1. **Automatic Resource Population**: Automatically populate resource references in the reference (foreign) arrays using the AlgoKit Utils library (TypeScript and Python)
2. **Reference Types**: Pass reference types as arguments to contract methods. (This is possible only for Accounts, Assets, and Applications, not Boxes.)
3. **Manual Input**: Manually input resource references in the reference (foreign) arrays

## Account Reference Example

Here is a simple smart contract with two methods that read the balance of an account. This smart contract requires the account reference to be provided during invocation. Here are three different ways you can provide the account reference when calling a contract method using the AlgoKit Utils library.

### Method #1: Automatic Resource Population

### Method #2: Using Reference Types

### Method #3: Manually Input

## Asset Reference Example

Here is a simple smart contract with two methods that read the total supply of an asset (ASA). This smart contract requires the asset reference to be provided during invocation. Here are three different ways you can provide the asset reference when calling a contract method using the AlgoKit Utils library.

### Method #1: Automatic Resource Population

### Method #2: Using Reference Types

### Method #3: Manually Input

## App Reference Example

Here is a simple smart contract named `ApplicationReference` with two methods that call the `increment` method in the `Counter` smart contract via an inner transaction. The `ApplicationReference` smart contract requires the `Counter` application reference to be provided during invocation. Here are three different ways you can provide the app reference when calling a contract method using the AlgoKit Utils library. 
### Method #1: Automatic Resource Population

### Method #2: Using Reference Types

### Method #3: Manually Input

## Account + Asset Example

Here is a simple smart contract with two methods that read the balance of an ASA in an account. This smart contract requires both the asset reference and the account reference to be provided during invocation. Here are three different ways you can provide both the account reference and the asset reference when calling a contract method using the AlgoKit Utils library.

### Method #1: Automatic Resource Population

### Method #2: Using Reference Types

### Method #3: Manually Input

## Account + Application Example

Here is a simple smart contract named `AccountAndAppReference` with two methods that read the local state `my_counter` of an account in the `Counter` smart contract. The `AccountAndAppReference` smart contract requires both the `Counter` application reference and the account reference to be provided during invocation. Here are three different ways you can provide both the account reference and the application reference when calling a contract method using the AlgoKit Utils library.

### Method #1: Automatic Resource Population

### Method #2: Using Reference Types

### Method #3: Manually Input

## Application + Box Reference Example

Here is a simple smart contract with a method that increments the counter value stored in a `BoxMap`. Each box uses `box_counter` + `account address` as its key and stores the counter as its value. This smart contract requires the box reference to be provided during invocation. Here are two different ways you can provide the box reference when calling a contract method using the AlgoKit Utils library.

### Method #1: Automatic Resource Population

### Method #2: Manually Input

## Access List: A Mutually Exclusive Alternative to Reference Arrays

Starting with AVM v12 in consensus v41, the Access List (`txn.Access`, or `al`) provides a more flexible way to specify resources. 
It serves as a mutually exclusive alternative to the traditional reference arrays: Accounts, Assets, Applications, Boxes. You cannot use both in the same transaction. Providing an Access List means no foreign arrays are used, and vice versa. ### Key Features * **Simple Resources**: * Address (`d`): References an account (e.g., for balance checks). * Asset (`s`): References an ASA (e.g., for params like total supply). * App (`p`): References another application (e.g., for global state reads). * **Complex Resources** (for sublist access): * Holding (`h`): Explicitly references an account’s holding of a specific asset (account + asset ID). * Locals (`l`): References an account’s local state for a specific app (account + app ID). * Box (`b`): References a box with an app index and box name (e.g., for reading/writing box data). * **Empty BoxRefs**: Use `{}` for boxes where the App ID is unknown or defaults to the current app. * **Limits**: Up to 16 entries via consensus parameter `MaxAppAccess`, higher than the 8 total for foreign references. * **Cross-Product vs. Explicit Access**: Traditional foreign arrays enable “cross-product” access (e.g., all combinations of listed accounts and assets for holdings). Access Lists require explicit listing of each needed resource (e.g., specify each holding individually), but allow more granular control and higher limits without combinatorial explosion. ## Access List Examples These examples show how to structure the `al` (Access List) field in an application transaction JSON. In practice, the SDKs set these fields for you. ### Simple Resources #### Address Reference This references a single account, e.g., for balance checks. ```json { "al": [ { "d": "BOBBYB3QD5QGQ27EBYHHUT7J76EWXKFOSF2NNYYYI6EOAQ5D3M2YW2UGEA" } ] } ``` #### Asset Reference This references a single ASA, e.g., for params like total supply. 
```json { "al": [ { "s": 1010 } ] } ``` #### Application Reference This references another application, e.g., for global state reads. ```json { "al": [ { "p": 1042 } ] } ``` ### Complex Resources #### Holding Reference This explicitly references an account’s holding of a specific asset. Requires including the address and asset. ```json { "al": [ { "d": "BOBBYB3QD5QGQ27EBYHHUT7J76EWXKFOSF2NNYYYI6EOAQ5D3M2YW2UGEA" }, { "s": 1010 }, { "h": { "d": 1, "s": 2 } } ] } ``` Explanation: The holding (`h`) points to the 1st address (`d:1`) and 2nd asset (`s:2`) in the list. #### Locals Reference This references an account’s local state for a specific app. Requires including the app and address. ```json { "al": [ { "p": 1042 }, { "d": "BOBBYB3QD5QGQ27EBYHHUT7J76EWXKFOSF2NNYYYI6EOAQ5D3M2YW2UGEA" }, { "l": { "d": 2, "p": 1 } } ] } ``` #### Box Reference This references a box with an app index and box name. ```json { "al": [ { "p": 1042 }, { "b": { "i": 1, "n": "Ym94TmFtZQ==" } } ] } ``` ### Empty BoxRefs This provides extra budget without specifying a particular box. ```json { "al": [{}] } ```
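The JSON shapes above can be produced with a small helper. This is an illustrative builder, not an SDK function; the single-letter keys (`d`, `s`, `h`) and the 1-based indices for holdings follow the examples on this page, while the helper name and placeholder address are made up.

```python
# Sketch: building an Access List ("al") containing a holding reference
# that points back at earlier entries by 1-based index, as shown above.
def holding_access_list(address, asset_id):
    entries = [{"d": address}, {"s": asset_id}]
    # The holding references the 1st entry (the address) and the 2nd (the asset).
    entries.append({"h": {"d": 1, "s": 2}})
    return {"al": entries}

al = holding_access_list("EXAMPLE_ADDRESS", 1010)
```

Note that, unlike the cross-product behavior of the traditional foreign arrays, each holding must be listed explicitly like this.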
# Box Storage
Box storage in Algorand is a feature that provides additional on-chain storage for smart contracts, allowing them to store and manage larger amounts of data beyond the limitations of global and local state. Unlike the fixed sizes of global and local state storage, box storage offers dynamic flexibility for creating, resizing, and deleting storage units. These storage units, called boxes, are key-value storage segments associated with individual applications, each capable of storing up to 32KB (32768 bytes) of data as byte arrays. Boxes are only visible and accessible to the application that created them, ensuring data integrity and security. The app account (the smart contract) is responsible for funding the box storage, and only the creating app can read, write, or delete its boxes on-chain. Both the box key and data are stored as byte arrays, so any uint64 variables must be converted before storage. While box storage expands the capabilities of Algorand smart contracts, it incurs additional costs in terms of minimum balance requirements (MBR) to cover the network storage space. The maximum number of box references is currently set to 8, allowing an app to create and reference up to 8 boxes simultaneously. Each box is a fixed-length structure but can be resized using the `App.box_resize` method or by deleting and recreating the box. Boxes over 1024 bytes require additional references, as each reference has a 1024-byte operational budget. The app account's MBR increases with each additional box and with each byte in the box's name and allocated size. If an application with outstanding boxes is deleted, the MBR is not recoverable, so it is recommended to delete all box storage and withdraw funds before app deletion.

## Usage of Boxes

Boxes are helpful in many scenarios:

* Applications that need more extensive or unbounded contract storage. 
* Applications that want to store data per user but do not wish to require users to opt in to the contract, or that need the data to persist even after the user closes or clears out of the application.
* Applications that have dynamic storage requirements.
* Applications requiring larger storage blocks that cannot fit in the existing global state key-value pairs.
* Applications that require storing arbitrary maps or hash tables.

## Box Array

When interacting with apps via application call transactions, developers need a way to specify which boxes an application will access during execution. The box array is part of the transaction's reference arrays, alongside the apps, accounts, and assets arrays. These arrays define the objects the app call will interact with (read, write, or send transactions to). The box array is an array of pairs: the first element of each pair is an integer specifying the index into the foreign application array, and the second element is the key name of the box to be accessed. Each entry in the box array allows access to only 1kb of data. For example, if a box is sized to 4kb, the transaction must use four entries in this array. To claim an allotted entry, a corresponding app ID and box name must be added to the box ref array. If you need more than the 1kb associated with that specific box name, you can either specify the box ref entry more than once or, preferably, add "empty" box refs `[0,""]` to the array. If you specify 0 as the app ID, the box ref is for the application being called. For example, suppose the contract needs to read "BoxA", which is 1.5kb, and "BoxB", which is 2.5kb. This would require four entries in the box ref array and would look something like:

```plaintext
boxes=[[0, "BoxA"],[0,"BoxB"], [0,""],[0,""]]
```

The required box I/O budget is based on the sizes of the boxes accessed, not the amount of data read or written. 
For example, if a contract accesses "Box A" with a size of 2kb and "Box B" with a size of 10 bytes, both boxes must be in the box reference array, plus one additional reference (ceil((2kb + 10b) / 1kb) = 3 references in total), which can be an "empty" box reference. Access budgets are summed across multiple application calls in the same transaction group. For example, in a group of two smart contract calls, there is room for 16 array entries (8 per app call), allowing access to 16kb of data. If an application needs to access a 16kb box named "Box A", it must be grouped with one additional application call, and the box reference arrays for the transactions in the group should look similar to this:

```plaintext
Transaction 0: [0,"Box A"],[0,""],[0,""],[0,""],[0,""],[0,""],[0,""],[0,""]
Transaction 1: [0,""],[0,""],[0,""],[0,""],[0,""],[0,""],[0,""],[0,""]
```

Box refs can be added to the boxes array using `goal` or any of the SDKs.

```shell
goal app method --app-id=53 --method="add_member2()void" --box="53,str:BoxA" --from=CONP4XZSXVZYA7PGYH7426OCAROGQPBTWBUD2334KPEAZIHY7ZRR653AFY
```

## Minimum Balance Requirement For Boxes

Boxes are created by a smart contract and raise the minimum balance requirement (MBR) of the contract's ledger balance. This means that a contract intending to use boxes must be funded beforehand. When a box with name `n` and size `s` is created, the MBR is raised by `2500 + 400 * (len(n)+s)` microAlgos. When the box is destroyed, the minimum balance requirement is decremented by the same amount. Notice that the key (name) is included in the MBR calculation. 
For example, if a box is created with the name “BoxA” (a 4-byte long key) and with a size of 1024 bytes, the MBR for the app account increases by 413,700 microAlgos: ```plaintext (2500 per box) + (400 * (box size + key size)) (2500) + (400 * (1024+4)) = 413,700 microAlgos ``` ## Manipulating Box Storage Box storage offers several abstractions for efficient data handling: `Box`: Box abstracts the reading and writing of a single value to a single box. The box size will be reconfigured dynamically to fit the size of the value being assigned to it. `BoxRef`: BoxRef abstracts the reading and writing of boxes containing raw binary data. The size is configured manually and can be set to values larger than the AVM can handle in a single value. `BoxMap`: BoxMap abstracts the reading and writing of a set of boxes using a common key and content type. Each composite key (prefix + key) still needs to be made available to the application via the `boxes` property of the Transaction. ### Allocation App A can allocate as many boxes as needed, whenever needed. App A allocates a box using the `box_create` opcode in its TEAL program, specifying the name and the size of the allocated box. Boxes can be any size from 0 to 32K bytes. Box names must be at least 1 byte, at most 64 bytes, and unique within app A. The app account (the smart contract) is responsible for funding the box storage (with an increase to its minimum balance requirement; see above for details). The app call’s boxes array must reference a box name and app ID to be allocated. Boxes may only be accessed (whether reading or writing) in a smart contract’s approval program, not in a clear state program. ### Creating a Box The AVM supports two opcodes, `box_create` and `box_put`, that can be used to create a box. The `box_create` opcode takes two parameters: the name and the size in bytes for the created box. The `box_put` opcode takes two parameters as well. The first parameter is the name and the second is a byte array to write. 
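The box MBR and reference-budget arithmetic described above can be sanity-checked off-chain with plain Python. This is a sketch using the constants from the formulas in this section; the function names are illustrative, not part of any SDK:

```python
import math

BOX_FLAT_MBR = 2_500    # microAlgos per box
BOX_BYTE_MBR = 400      # microAlgos per byte of box name + contents
BOX_REF_BUDGET = 1_024  # bytes of box I/O granted per box-array entry

def box_mbr(name: str, size: int) -> int:
    """MBR increase for creating a box with the given name and size."""
    return BOX_FLAT_MBR + BOX_BYTE_MBR * (len(name) + size)

def box_refs_needed(box_sizes: list[int]) -> int:
    """Minimum box-array entries for an app call touching boxes of these sizes."""
    # Every named box needs its own entry; extra budget comes from "empty" refs.
    return max(len(box_sizes), math.ceil(sum(box_sizes) / BOX_REF_BUDGET))

print(box_mbr("BoxA", 1024))          # 413700, matching the worked example
print(box_refs_needed([1536, 2560]))  # 4 entries for a 1.5kb and a 2.5kb box
```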
Because the AVM limits any element on the stack to 4kb, `box_put` can only be used for boxes with length `<= 4kb`. Boxes can be created and deleted, but once created, they cannot be resized. At creation time, boxes are filled with 0 bytes up to their requested size. The box’s contents can be changed, but the size is fixed at that point. If a box needs to be resized, it must first be deleted and then recreated with the new size. Box names must be unique within an application. If using `box_create`, and an existing box name is passed with a different size, the creation will fail. If an existing box name is used with the existing size, the call will return a 0 without modifying the box contents. When creating a new box, the call will return a 1. When using `box_put` with an existing key name, the put will fail if the size of the second argument (data array) is different from the original box size. ### Reading Boxes can only be manipulated by the smart contract that owns them. While the SDKs and the goal command-line tool allow these boxes to be read off-chain, only the smart contract that owns them can read or manipulate them on-chain. App A is the only app that can read the contents of its boxes on-chain. This on-chain privacy is unique to box storage. Recall that anybody can read everything from off-chain using the algod or indexer APIs. To read box B from app A, the app call must include B in its boxes array. Read budget: Each box reference in the boxes array allows an app call to access 1K bytes of box state - 1K of “box read budget”. To read a box larger than 1K, multiple box references must be put in the boxes array. The box read budget is shared across the transaction group. The total box read budget must be larger than the sum of the sizes of all the individual boxes referenced (it is not possible to use this read budget for a part of a box - the whole box is read in). Box data is unstructured. This is unique to box storage. 
A box is referenced by including its app ID and box name. The AVM provides two opcodes for reading the contents of a box, `box_get` and `box_extract`. The `box_get` opcode takes one parameter: the key name of the box. It reads the entire contents of a box. The `box_get` opcode returns two values. The top-of-stack is an integer that has the value of 1 or 0. A value of 1 means that the box was found and read. A value of 0 means that the box was not found. The next stack element contains the bytes read if the box exists; otherwise, it contains an empty byte array. `box_get` fails if the box length exceeds 4kb. ### Writing App A is the only app that can write the contents of its boxes. As with reading, each box ref in the boxes array allows an app call to write 1kb of box state - 1kb of “box write budget”. The AVM provides two opcodes, `box_put` and `box_replace`, to write data to a box. The `box_put` opcode is described in the previous section. The `box_replace` opcode takes three parameters: the key name, the starting location, and the replacement bytes. When using `box_replace`, the box size cannot increase. This means the call will fail if the replacement bytes, when added to the start byte location, exceed the box’s upper bounds. The following sections cover the details of manipulating boxes within a smart contract. ### Getting a Box Length The AVM offers the `box_len` opcode to retrieve the length of a box and verify its existence. The opcode takes the box key name and returns two unsigned integers (uint64). The top-of-stack is either a 0 or 1, where 1 indicates the box’s existence, and 0 indicates it does not exist. The next is the length of the box if it exists; otherwise, it is 0. ### Deleting a Box Only the app that created a box can delete it. If an app is deleted, its boxes are not deleted. The boxes will not be modifiable but can still be queried using the SDKs. The minimum balance will also be locked. 
(The correct cleanup design is to look up the boxes from off-chain and call the app to delete all its boxes before deleting the app itself.) The AVM offers the `box_del` opcode to delete a box. This opcode takes the box key name. The opcode returns one unsigned integer (uint64) with a value of 0 or 1. A value of 1 indicates the box existed and was deleted. A value of 0 indicates the box did not exist. ### Other methods for boxes Additional methods can be used with box references to splice, replace, and extract box contents. You must delete all boxes before deleting a contract. If this is not done, the minimum balance for those boxes is not recoverable. ## Summary of Box Operations For manipulating box storage data like reading, writing, deleting and checking if it exists: TEAL: Different opcodes can be used | Function | Description | | ------------ | ---------------------------------------------------------------------------------------------------------------------------------- | | box\_create | creates a box named A of length B. It fails if the name A is empty or B exceeds 32,768. It returns 0 if A already exists, else 1 | | box\_del | deletes a box named A if it exists. It returns 1 if A existed, 0 otherwise | | box\_extract | reads C bytes from box A, starting at offset B. It fails if A does not exist or the byte range is outside A’s size | | box\_get | retrieves the contents of box A if A exists, else an empty byte array. Y is 1 if A exists, else 0 | | box\_len | retrieves the length of box A if A exists, else 0. Y is 1 if A exists, else 0 | | box\_put | replaces the contents of box A with byte-array B. It fails if A exists and len(B) != len(box A). It creates A if it does not exist | | box\_replace | writes byte-array C into box A, starting at offset B. It fails if A does not exist or the byte range is outside A’s size | In Algorand Python, different functions of the box classes can be used; see the detailed API reference. ## Example: Storing struct in box map
# Encoding and Decoding
> Essential data encodings for Algorand smart contracts, ensuring consistent on-chain and off-chain data handling
Consistent data representation is crucial when interacting with Algorand smart contracts. This section explains the fundamental concepts of encoding and decoding information so that on-chain logic and off-chain applications remain compatible. By following these guidelines, developers can ensure reliable data handling and a seamless flow of information throughout the development lifecycle. ## Encoding Types ### JSON The encoding most often returned when querying the state of the chain is JSON. It is easy to visually inspect but may be relatively slow to parse. All byte arrays are base64 encoded strings. ### MessagePack The encoding used when transmitting transactions to a node is MessagePack. To inspect the contents of a given msgpack file, a convenience command-line tool is provided: ```shell msgpacktool -d < file.msgp ``` ### Base64 The encoding for byte arrays is Base64. This is to make it safe for the byte array to be transmitted as part of a JSON object. ### Base32 The encoding used for addresses and transaction IDs is Base32. ## Individual Field Encodings ### Address In Algorand, an address is a 32-byte array. Accounts or addresses are typically shown as a 58-character long string corresponding to a base32 encoding of the byte array of the public key + a checksum. Given an address `4H5UNRBJ2Q6JENAXQ6HNTGKLKINP4J4VTQBEPK5F3I6RDICMZBPGNH6KD4`, the SDKs can convert to and from the public key format. ### Byte arrays When transmitting an array of bytes over the network, byte arrays are base64 encoded. The SDK will handle encoding from a byte array to base64 but may not decode some fields and you’ll have to handle it yourself. For example, compiled program results or the keys and values in a state delta in an application call will be returned as base64 encoded strings. *Example:* A base64 encoded byte array such as `SGksIEknbSBkZWNvZGVkIGZyb20gYmFzZTY0` can be decoded back to its raw bytes. ### Integers Integers in Algorand are almost always uint64, but sometimes it’s required to encode them as bytes. 
For example, when passing them as application arguments in an ApplicationCallTransaction. When encoding an integer to pass as an application argument, the integer should be encoded as the big-endian 8-byte representation of the integer value. *Example:* An integer such as `1337` would be encoded as its big-endian 8-byte representation. ## Working with Encoded Structures ### Transactions Sometimes an application needs to transmit a transaction or transaction group between the front end and back end. This can be done by msgpack encoding the transaction object on one side and msgpack decoding it on the other side. Often the msgpack’d bytes will be base64 encoded so that they can be safely transmitted in some JSON payload, so we use that encoding here. Essentially the encoding is: `tx_byte_str = base64encode(msgpack_encode(tx_obj))` and decoding is: `tx_obj = msgpack_decode(base64decode(tx_byte_str))` *Example:* Create a payment transaction from one account to another using suggested parameters and an amount of 10000, then write the msgpack-encoded bytes.
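The byte-array and integer encodings above need nothing beyond the Python standard library; a minimal sketch of the two examples:

```python
import base64

# Decode the base64 byte array from the example above.
decoded = base64.b64decode("SGksIEknbSBkZWNvZGVkIGZyb20gYmFzZTY0")
print(decoded.decode("utf-8"))  # Hi, I'm decoded from base64

# Encode the integer 1337 as its big-endian 8-byte representation,
# the form expected for application arguments.
encoded = (1337).to_bytes(8, "big")
print(encoded.hex())  # 0000000000000539
```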
# Global Storage
Global state is associated with the app itself. Global storage is a feature in Algorand that allows smart contracts to persistently store key-value pairs in a globally accessible state. This guide provides comprehensive information on how to allocate, read, write, and manipulate global storage within smart contracts. ## Manipulating Global State Storage Smart contracts can create, update, and delete values in global state using TEAL (Transaction Execution Approval Language) opcodes. The number of values that can be written is limited by the initial configuration set during smart contract creation. State is represented as key-value pairs, where keys are stored as byte slices (byte-array values), and values can be stored as either byte slices or uint64 values. TEAL provides several opcodes for facilitating reading and writing to state, including `app_global_put`, `app_global_get`, and `app_global_get_ex`. ### Allocation Global storage can include between 0 and 64 key/value pairs and a total of 8K of memory to share among them. The amount of global storage is allocated in key/value units and determined at contract creation, and it cannot be edited later. The contract creator address is responsible for funding the global storage by an increase to their minimum balance requirement. ### Reading from Global State The global storage of a smart contract can be read by any application call that specifies the contract’s application ID in its foreign apps array. The key-value pairs in global storage can be read on-chain directly, or off-chain using SDKs, APIs, and the goal CLI. Only the smart contract itself can write to its own global storage. TEAL provides opcodes to read global state values for the current smart contract. The `app_global_get` opcode retrieves values from the current contract’s global storage. The `app_global_get_ex` opcode returns two values on the stack: a `boolean` indicating whether the value was found, and the actual `value` if it exists. 
The \_ex opcodes allow reading the global state of other smart contracts, as long as those contracts are included in the applications array. Branching logic is typically used after calling the \_ex opcodes to handle cases where the value is found or not found. Refer to get global storage value for different data types. In addition to using TEAL, the global state values of a smart contract can be read externally using SDKs and the goal CLI. These reads are non-transactional queries that retrieve the current state of the contract. Example command: ```shell goal app read --app-id 1 --guess-format --global --from ``` This command returns the global state of the smart contract with application ID 1, formatted for readability. Example Output Output.json ```json { "Creator": { "tb": "FRYCPGH25DHCYQGXEB54NJ6LHQG6I2TWMUV2P3UWUU7RWP7BQ2BMBBDPD4", "tt": 1 }, "MyBytesKey": { "tb": "hello", "tt": 1 }, "MyUintKey": { "tt": 2, "ui": 50 } } ``` Interpretation: * The keys are `Creator`, `MyBytesKey`, and `MyUintKey`. * The `tt` field indicates the type of the value: 1 for byte slices (byte-array values), 2 for uint64 values. * When `tt=1`, the value is stored in the `tb` field. The `--guess-format` option automatically converts the `Creator` value to an Algorand address with a checksum (instead of displaying the raw 32-byte public key). * When `tt=2`, the value is stored in the `ui` field. The `app_global_get_ex` opcode is used to read not only the global state of the current contract but any contract that is in the applications array. To access these foreign apps, they must be passed in with the application call. ```shell goal app call --foreign-app APP1ID --foreign-app APP2ID ``` In addition to modifying its own global storage, a smart contract can read the global storage of any contract specified in its applications array. However, this is a read-only operation. The global state of other smart contracts cannot be modified directly. 
The referenced foreign applications can be changed per smart contract call (transaction). ### Writing to Global State Global state can only be written by the smart contract itself. To write to global state, use the `app_global_put` opcode. Refer to set global storage value for different data types. ### Deleting Global State Global storage is deleted when the corresponding smart contract is deleted. However, the smart contract can clear the contents of its global storage without affecting the minimum balance requirement. Refer to delete global storage value for different data types. ## Summary of Global State Operations For manipulating global storage data like reading, writing, deleting and checking if it exists: TEAL: Different opcodes can be used | Function | Description | | -------------------- | ----------------------------------------------- | | app\_global\_get | Get global data for the current app | | app\_global\_get\_ex | Get global data for other app | | app\_global\_put | Set global data to the current app | | app\_global\_del | Delete global data from the current app | | app\_global\_get\_ex | Check if global data exists for the current app | | app\_global\_get\_ex | Check if global data exists for the other app | In Algorand Python, different functions of the GlobalState class can be used; see the detailed API reference. | Function | Description | | ------------------- | ------------------------------------------------------ | | GlobalState(type\_) | Initialize a global state with the specified data type | | get(default) | Get data or a default value if not found | | maybe() | Get data and a boolean indicating if it exists |
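The `tt`/`tb`/`ui` layout shown in the example output earlier can be flattened with a small helper. This is a plain-Python sketch (the `decode_state` name is illustrative; field meanings are as described in the interpretation list):

```python
def decode_state(state: dict) -> dict:
    """Flatten goal's key/value output: tt=1 -> byte slice in tb, tt=2 -> uint64 in ui."""
    out = {}
    for key, val in state.items():
        if val["tt"] == 1:
            out[key] = val["tb"]         # byte-slice value
        else:
            out[key] = val.get("ui", 0)  # uint64 value; an absent ui means 0
    return out

example = {
    "MyBytesKey": {"tb": "hello", "tt": 1},
    "MyUintKey": {"tt": 2, "ui": 50},
}
print(decode_state(example))  # {'MyBytesKey': 'hello', 'MyUintKey': 50}
```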
# Local Storage
Local state is associated with each account that opts into the application. Algorand smart contracts offer local storage, which enables accounts to maintain persistent key-value data. This data is accessible to authorized contracts and can be queried from external sources. ## Manipulating Local State Smart contracts can create, update, and delete values in local state. The number of values that can be written is limited by the initial configuration set during smart contract creation. TEAL (Transaction Execution Approval Language) provides several opcodes for facilitating reading and writing to state including `app_local_put`, `app_local_get` and `app_local_get_ex`. In addition to using TEAL, the local state values of a smart contract can be read externally using SDKs and the goal CLI. These reads are non-transactional queries that retrieve the current state of the contract. ### Allocation Local storage is allocated when an account opts into a smart contract by submitting an opt-in transaction. Each account can have between 0 and 16 key-value pairs in local storage, with a total of 2KB memory shared among them. The amount of local storage is determined during smart contract creation and cannot be edited later. The opted-in user account is responsible for funding the local storage by increasing their minimum balance requirement. ### Reading from Local State Local storage values are stored in the account’s balance record. Any account that sends a transaction to the smart contract can have its local storage modified by the smart contract, as long as the account has opted into the smart contract. Local storage can be read by any application call that has the smart contract’s app ID in its foreign apps array and the account in its foreign accounts array. 
In addition to the transaction sender, a smart contract call can reference up to four additional accounts whose local storage can be manipulated for the current smart contract, as long as those accounts have opted into the contract. These five accounts can have their storage values read for any smart contract on Algorand by specifying the application ID of the smart contract, if the additional contract is included in the transaction’s applications array. This is a read-only operation and does not allow one smart contract to modify the local state of another. The additionally referenced accounts can be changed per smart contract call (transaction). The key-value pairs in local storage can be read on-chain directly or off-chain using SDKs and the goal CLI. Local storage is editable only by the smart contract itself, but it can be deleted by either the smart contract or the user account (using a ClearState call). TEAL provides opcodes to read local state values for the current smart contract. The `app_local_get` opcode retrieves values from the current contract’s local storage. The `app_local_get_ex` opcode returns two values on the stack: a `boolean` indicating whether the value was found, and the actual `value` if it exists. The \_ex opcodes allow reading local states from other accounts and smart contracts, as long as the account and contract are included in the accounts and applications arrays. Branching logic is typically used after calling the \_ex opcodes to handle cases where the value is found or not found. Refer to get local storage value for different data types. The local state values of a smart contract can also be read externally using the goal CLI. These reads are non-transactional queries that retrieve the current state of the contract. Example command: ```shell goal app read --app-id 1 --guess-format --local --from ``` This command will return the local state for the account specified by `--from`. 
Example output with 3 key-value pairs Output.json ```json { "Creator": { "tb": "FRYCPGH25DHCYQGXEB54NJ6LHQG6I2TWMUV2P3UWUU7RWP7BQ2BMBBDPD4", "tt": 1 }, "MyBytesKey": { "tb": "hello", "tt": 1 }, "MyUintKey": { "tt": 2, "ui": 50 } } ``` Interpretation: * The keys are `Creator`, `MyBytesKey`, and `MyUintKey`. * The `tt` field indicates the type of the value: 1 for byte slices (byte-array values), 2 for uint64 values. * When `tt=1`, the value is stored in the `tb` field. The `--guess-format` option automatically converts the `Creator` value to an Algorand address with a checksum (instead of displaying the raw 32-byte public key). * When `tt=2`, the value is stored in the `ui` field. ### Writing to Local State To write to local state, use the `app_local_put` opcode. An additional account parameter is provided to specify which account’s local storage should be modified. Refer to set local storage value for different data types ### Deletion of Local State Deleting a smart contract does not affect its local storage. Accounts must clear out of the smart contract to recover their minimum balance. Every smart contract has an ApprovalProgram and a ClearStateProgram. An account holder can clear their local state for a contract at any time by executing a ClearState transaction, deleting their data and freeing up their locked minimum balance. An account can request to clear its local state using a closeout transaction or clear its local state for a specific contract using a clearstate transaction, which will always succeed, even after the contract is deleted. 
Refer to delete local storage value for different data types ## Summary of Local State Operations For manipulating local storage data like reading, writing, deleting and checking if exists: TEAL: Different opcodes can be used | Function | Description | | ------------------- | ---------------------------------------------- | | app\_local\_get | Get local data for the current app | | app\_local\_get\_ex | Get local data for other app | | app\_local\_put | Set local data to the current app | | app\_local\_del | Delete local data from the current app | | app\_local\_get\_ex | Check if local data exists for the current app | | app\_local\_get\_ex | Check if local data exists for the other app | Different functions of LocalState class can be used. The detailed api reference can be found | Function | Description | | ----------------------- | --------------------------------------------------------------------- | | LocalState(type\_) | Initialize a local state with the specified data type | | getitem(account) | Get data for the given account | | get(account, default) | Get data for the given account, or a default value if not found | | maybe(account) | Get data for the given account, and a boolean indicating if it exists | | setitem(account, value) | Set data for the given account | | delitem(account) | Delete data for the given account | | contains(account) | Check if data exists for the given account |
# On-Chain Storage
> Data storage primitives in the Algorand Virtual Machine (AVM)
In Algorand, when developing an application, data and values can be persistently saved on the decentralized ledger itself. This storage mechanism is known as on-chain storage. This storage can be global, local, or box storage. * Global storage is data stored explicitly on the blockchain for the contract globally. * Local storage refers to storing values related to a smart contract in the account balance record only if the account participates in the contract (this participation relationship is known as opt-in). * Box storage allows contracts to use larger segments of storage. The following section describes the main properties of each storage type. ## Global Storage Global Storage allows storing values on an application. This data will be linked to the application, and any user or client will be able to access the data only by interacting with the application. ### Usage example For example, in a voting application, the total votes for each candidate could be stored in Global Storage. Since these values represent aggregate data for the entire application, they should be accessible to all users and clients. ### Storage characteristics * Global Storage is composed of key/value pairs that are limited to 128 bytes in total per pair. * It can store up to 64 key/value pairs. * A total of 8Kb of memory can be used among the key/value pairs. ### Considerations When storing data in Global Storage, keep in mind that depending on the type and number of values you want to store, the Minimum Balance Requirement (MBR) of the application creator will be increased according to the following formula: ```shell 100,000*(1+ExtraProgramPages) + (25,000+3,500)*schema.NumUint + (25,000+25,000)*schema.NumByteSlice ``` * 100,000 microAlgo base fee for each page requested. 
* 25,000 + 3,500 = 28,500 microAlgo for each UInt in the Global State schema * 25,000 + 25,000 = 50,000 microAlgo for each byte slice in the Global State schema Detailed guide on implementing and managing Global Storage in Algorand smart contracts ## Local Storage Local Storage stores data associated directly with an account that has opted-in to the application. This opt-in mechanism is activated through an opt-in transaction and allows any account to create a relationship and a storage space with the application for storing data for this specific account. ### Usage example When programming a voting application, storing each account’s vote may be necessary, so every user can check the candidate they voted for. This can be achieved by storing a key/value pair in each account’s local storage. This data will only be linked to the specific account that interacted with the smart contract. ### Storage characteristics * Local Storage is composed of key/value pairs that are limited to 128 bytes in total per pair. * It can store up to 16 key/value pairs. * A total of 2KB of memory can be shared among the key/value pairs. * Remember this storage space is created per account. ### Considerations For this storage type, the account must perform an opt-in transaction. When storing data in Local Storage, keep in mind that depending on the type and number of values you want to store, the Minimum Balance Requirement (MBR) of the account that opts-in to the application will increase according to the following formula: ```shell 100,000 + (25,000+3,500)*schema.NumUint + (25,000+25,000)*schema.NumByteSlice ``` * 100,000 microAlgo base fee of opt-in * 25,000 + 3,500 = 28,500 for each UInt in the Local State schema * 25,000 + 25,000 = 50,000 for each byte-slice in the Local State schema Detailed guide on implementing and managing Local Storage in Algorand smart contracts ## Boxes Box Storage will allow you to store larger amounts of data, associated with the application you’re creating. 
Every box can store up to 32KB, and an application is capable of creating as many boxes as it needs. ### Usage example Let’s revisit our voting application example using Box Storage instead of Global and Local Storage. We can store: * The total vote counts in a single box as a structured data type * Each voter’s choice in a separate box, using the voter’s address as the box name This approach eliminates the need for opt-in transactions since we’re not using Local Storage. ### Storage characteristics * Each Box can store up to 32KB, split between the box name and its byte array content * Applications can create any number of boxes they need ### Considerations * Boxes can be created by an application and only accessed by the application that created them. Keep in mind that the Minimum Balance Requirement (MBR) of the application will be increased depending on its size. This means that a contract that intends to use boxes must be funded beforehand. * The MBR of the application will increase according to the following formula: ```shell (2500 per box) + (400 * (box size + key size)) ``` * For example, if a box is created with the name “BoxA”, a 4-byte long key, and with a size of 1024 bytes, the MBR for the app account increases by 413,700 microAlgo: ```shell (2500) + (400 * (1024+4)) = 413,700 microAlgos ``` Detailed guide on implementing and managing Box Storage in Algorand smart contracts
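The three MBR formulas above (global schema, local schema, and box) can be collected into one off-chain helper. This is a plain-Python sketch using the constants as given in this section; the function names are illustrative:

```python
def global_schema_mbr(num_uints: int, num_byte_slices: int, extra_pages: int = 0) -> int:
    """Creator MBR increase for a global state schema."""
    return 100_000 * (1 + extra_pages) + 28_500 * num_uints + 50_000 * num_byte_slices

def local_schema_mbr(num_uints: int, num_byte_slices: int) -> int:
    """Opted-in account MBR increase for a local state schema."""
    return 100_000 + 28_500 * num_uints + 50_000 * num_byte_slices

def box_mbr(name: str, size: int) -> int:
    """App account MBR increase for one box."""
    return 2_500 + 400 * (len(name) + size)

print(global_schema_mbr(1, 1))  # 178500
print(box_mbr("BoxA", 1024))    # 413700, as in the example above
```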
# Scratch Storage
A scratch space is a temporary storage area used to store values for later use in your program. It consists of 256 scratch slots. Scratch slots may hold uint64 or byte-array values and are initialized as uint64(0). The AVM (Algorand Virtual Machine) enables smart contracts to use scratch space for temporarily storing values during execution. Other contracts in the same atomic transaction group can read this scratch space. However, contracts can only access scratch space from earlier transactions within the same group, not from later ones. ## TEAL In TEAL, you can use the `store`/`stores` and `load`/`loads` opcodes to read and write scratch slots. Additionally, you can use `gload`/`gloads` to read scratch slots from earlier transactions in the same group. ## Algorand Python In Algorand Python, you can directly use these opcodes through the `op.Scratch` class. The accessed scratch slot indices or index ranges need to be declared using the `scratch_slots` variable during contract declaration.
# Atomic Transaction Groups
In traditional finance, trading assets generally requires a trusted intermediary, like a bank or an exchange, to make sure that both sides receive what they agreed to. On the Algorand blockchain, this type of trade is implemented within the protocol as an *Atomic Transfer*. This simply means that transactions that are part of the transfer either all succeed or all fail. Atomic transfers allow complete strangers to trade assets without the need for a trusted intermediary, all while guaranteeing that each party will receive what they agreed to. On Algorand, atomic transfers are implemented as irreducible batch operations, where a group of transactions is submitted as a unit and all transactions in the batch either pass or fail. This also eliminates the need for more complex solutions like hashed timelock contracts that are implemented on other blockchains. An atomic transfer on Algorand is confirmed in the same amount of time as any other transaction. An atomic group can contain any combination of transaction types. ## Use Cases Atomic transfers enable use cases such as: * **Circular Trades** - Alice pays Bob if and only if Bob pays Claire if and only if Claire pays Alice. * **Group Payments** - Everyone pays or no one pays. * **Decentralized Exchanges** - Trade one asset for another without going through a centralized exchange. * **Distributed Payments** - Payments to multiple recipients. * **Pooled Transaction Fees** - One transaction pays the fees of others. ## Process Overview To implement an atomic transfer, generate all of the transactions that will be involved in the transfer and then group those transactions together. The result of grouping is that each transaction is assigned the same group ID. Once all transactions contain this group ID, the transactions can be split up and sent to their respective senders to be authorized. A single party can then collect all the authorized transactions and submit them to the network together.  
Figure: Atomic Transfer Flow ## How-to Below you will find how to create and send atomic transaction groups using AlgoKit Utils in Python and TypeScript. ## Step by Step in Goal The next guide illustrates how atomic transaction groups can be created following a step-by-step approach using goal. ### Create Transactions Create two or more (up to 16 total) unsigned transactions of any type. Read about transaction types in the Transactions section. This could be done by a service or by each party involved in the transaction. For example, an asset exchange application can create the entire atomic transfer and allow individual parties to sign from their location. The example below illustrates creating, grouping, and signing transactions atomically and submitting them to the network. ```goal $ goal clerk send --from=my-account-a --to=my-account-c --fee=1000 --amount=100000 --out=unsignedtransaction1.tx $ goal clerk send --from=my-account-b --to=my-account-a --fee=1000 --amount=200000 --out=unsignedtransaction2.tx ``` At this point, these are just individual transactions. The next critical step is to combine them and then calculate the group ID. ### Group Transactions The result of this step is what ultimately guarantees that a particular transaction belongs to a group and is not valid if sent alone (even if properly signed). A group ID is calculated by hashing the concatenation of a set of related transactions. The resulting hash is assigned to the Group field within each transaction. This mechanism allows anyone to recreate all transactions and recalculate the group ID to verify that the contents are as agreed upon by all parties. Ordering of the transaction set must be maintained. 
```goal $ cat unsignedtransaction1.tx unsignedtransaction2.tx > combinedtransactions.tx $ goal clerk group -i combinedtransactions.tx -o groupedtransactions.tx -d data -w yourwallet ``` ### Split Transactions (goal only) At this point the transaction set must be split to allow distributing each component transaction to the appropriate wallet for authorization. ```goal # keys in distinct wallets $ goal clerk split -i groupedtransactions.tx -o splitfiles -d data -w yourwallet Wrote transaction 0 to splitfiles-0 Wrote transaction 1 to splitfiles-1 # distribute files for authorization ``` ### Sign Transactions With a group ID assigned, each transaction sender must authorize their respective transaction. ```goal # sign from single wallet containing all keys $ goal clerk sign -i groupedtransactions.tx -o signout.tx -d data -w yourwallet # -- OR -- # sign from distinct wallets $ goal clerk sign -i splitfiles-0 -o splitfiles-0.sig -d data -w my_wallet_1 $ goal clerk sign -i splitfiles-1 -o splitfiles-1.sig -d data -w my_wallet_2 ``` ### Assemble Transaction Group All authorized transactions are now assembled into an array, maintaining the original transaction ordering, which represents the transaction group. ```goal # combine signed transactions files cat splitfiles-0.sig splitfiles-1.sig > signout.tx ``` ### Send Transaction Group The transaction group is now broadcast to the network. ```goal goal clerk rawsend -f signout.tx -d data -w yourwallet ```
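The group ID mechanism used above can be sketched in plain Python. This is an illustrative sketch only: the real protocol computes the group ID as a domain-separated hash over the canonical msgpack encoding of the transactions, but the property it relies on, that the ID commits to every transaction and to their order, is the same.

```python
import hashlib

def illustrative_group_id(txns: list[bytes]) -> bytes:
    """Illustrative only: derive one ID from an ordered set of encoded
    transactions by hashing their concatenation. The real protocol uses a
    domain-separated hash over canonical msgpack encodings, but the idea
    is the same: the ID commits to every transaction and their order."""
    return hashlib.sha256(b"".join(txns)).digest()

# Hypothetical encoded transactions standing in for real msgpack bytes.
txn_a = b"payment: account-a -> account-c, 100000 microAlgo"
txn_b = b"payment: account-b -> account-a, 200000 microAlgo"

gid = illustrative_group_id([txn_a, txn_b])

# Reordering or altering any member changes the group ID, so a signed
# transaction from one group cannot be replayed alone or in another group.
assert gid != illustrative_group_id([txn_b, txn_a])
assert gid == illustrative_group_id([txn_a, txn_b])
```

Any party can recompute the ID from the agreed transaction set and compare it against the `grp` field before signing.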
# Blocks
Blocks are the fundamental data structures of the Algorand blockchain, representing a batch of transactions that transitions the ledger state from one round to the next. They include essential metadata, like round number and timestamps, and the actual transaction data. Blocks are added through Algorand’s Pure Proof of Stake consensus protocol. ## Algorand Block Structure An Algorand block consists of two main parts: ### Header The block header contains high-level metadata about the block: * **Round**: The block’s height or index in the chain. * **Timestamp**: The Unix epoch time (in seconds) of block creation. * **Proposer**: The account chosen (via VRF) to propose the block. * **Previous Block Hash**: Reference linking this block to its predecessor. * **Genesis ID / Hash**: Identifiers anchoring the chain back to its genesis block. ### Body The block body contains the transaction sequence that updates both the account state and box state. It includes: * **Transactions**: All transactions in this round, such as payments, asset transfers, and application calls. This includes any inner transactions that applications generate. * **Fees Collected**: Sum of fees for the transactions included. ## Algorand Block Fundamentals ### First/Last Valid Rounds Unlike Ethereum, which uses nonces to prevent transaction replay, Algorand uses a validity window specified by first and last valid rounds. This window determines between which blockchain rounds a transaction can be committed to the blockchain. Additionally, Algorand prevents transaction replay by rejecting identical transactions: two identical transactions cannot be committed to the blockchain. For further transaction control, optional leases can also be used. The validity window consists of: * **First Valid**: The earliest round in which the transaction can be included. * **Last Valid**: The final round after which the transaction is no longer valid. The validity window has a maximum span of 1,000 rounds.
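Expressed numerically, and assuming an average block time of roughly 2.82 seconds (an approximation, not a fixed protocol parameter), the window arithmetic looks like this:

```python
# Back-of-the-envelope: how long can a transaction remain valid?
AVG_BLOCK_SECONDS = 2.82        # approximate average block time
MAX_VALIDITY_ROUNDS = 1_000     # maximum span of the validity window

window_minutes = MAX_VALIDITY_ROUNDS * AVG_BLOCK_SECONDS / 60

# A transaction built at a hypothetical current round:
first_valid = 41_000_000
last_valid = first_valid + MAX_VALIDITY_ROUNDS

print(f"max validity window ≈ {window_minutes:.0f} minutes")  # ≈ 47 minutes
```

Choosing a narrower window (e.g. `first_valid + 10`) makes a transaction expire quickly if it is not promptly confirmed.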
Since Algorand produces blocks every 2.82 seconds, this gives transactions approximately 47 minutes to be included in the blockchain. By carefully selecting these rounds, developers can manage timing or ensure the transaction expires if not promptly processed. ### Average Block Time Algorand confirms blocks every 2.82 seconds on average. This means transactions are finalized within this timeframe, whether submitted by users or dApps. When designing applications, developers should consider how this block production timing impacts user experience and round-based logic. ### Throughput Algorand is designed for high throughput, supporting thousands of transactions per second (TPS). Each block can hold up to 25,000 transactions, which ensures the network can scale to meet growing demand without sacrificing security or decentralization. ### Finality and No Forking Unlike other blockchains that require multiple confirmations or risk chain reorganizations called “forks”, Algorand achieves instant finality at the block level. Once a block is certified via soft vote and certify vote, its transactions are final and cannot be reversed. ## Interaction with Blocks ### Algorand Node Endpoints To retrieve block data programmatically, Algorand provides several REST API endpoints through its node software. These endpoints allow developers to fetch complete blocks, just headers, or even cryptographic hashes for specific rounds. They are essential for inspecting block-level data or verifying state transitions in on-chain applications. ```bash GET /v2/blocks/{round}: Retrieve a complete block (header + transactions). GET /v2/blocks/{round}/header: Fetch just the block header. GET /v2/blocks/{round}/hash: Obtain the cryptographic hash of a given block. ``` These REST endpoints typically require an API token (X-Algo-API-Token header). ### Algorand Python and TypeScript Developers can also interact with blocks and transaction data using AlgoKit Utils in Python and TypeScript.
AlgoKit Utils offers abstractions to retrieve block details, inspect transaction content, and examine full block data. Below are code examples demonstrating how to do this: ## Block Fields | Field | Description | | --- | --- | | Round | The block’s round, which matches the round of the state it is transitioning into. The block with round 0 is special in that this block specifies not a transition but rather the entire initial state, which is called the genesis state. This block is correspondingly called the genesis block. The round is stored under msgpack key `rnd`. | | Genesis Identifier and Genesis Hash | The block’s genesis identifier and hash, which match the genesis identifier and hash of the states it transitions between. The genesis identifier is stored under msgpack key `gen`, and the genesis hash is stored under msgpack key `gh`. | | Upgrade Vote | The block’s upgrade vote, which results in the new upgrade state. The block also duplicates the upgrade state of the state it transitions into. The msgpack representation of the components of the upgrade vote is described in detail below. | | Timestamp | The block’s timestamp, which matches the timestamp of the state it transitions into. The timestamp is stored under msgpack key `ts`. | | Seed | The block’s seed, which matches the seed of the state it transitions into. The seed is stored under msgpack key `seed`. | | Reward Updates | The block’s reward updates, which result in the new reward state. The block also duplicates the reward state of the state it transitions into.
The msgpack representation of the components of the reward updates is described in detail below. | | Transaction Sequence | A cryptographic commitment to the block’s transaction sequence, described below, stored under msgpack key `txn`. | | Transaction Sequence Hash | A cryptographic commitment, using the SHA-256 hash function, to the block’s transaction sequence, described below, stored under msgpack key `txn256`. | | Previous Hash | The block’s previous hash, which is the cryptographic hash of the previous block in the sequence. (The previous hash of the genesis block is 0.) The previous hash is stored under msgpack key `prev`. | | Transaction Counter | The block’s transaction counter, which is the total number of transactions issued prior to this block. This count starts from the first block with a protocol version that supported the transaction counter. The counter is stored in msgpack field `tc`. | | Proposer | The block’s proposer, which is the address of the account that proposed the block. The proposer is stored in msgpack field `prp`. | | Fees Collected | The block’s fees collected is the sum of all fees paid by transactions in the block and is stored in msgpack field `fc`. | | Bonus Incentive | The potential bonus incentive is the amount, in MicroAlgos, that may be paid to the proposer of this block beyond the amount available from fees. It is stored in msgpack field `bi`. It may be set during a consensus upgrade; otherwise it must be equal to the value from the previous block in most rounds, or be 99% of the previous value (rounded down) if the round of this block is 0 modulo 1,000,000. | | Proposer Payout | The actual amount that is moved from the fee sink ($I_f$) to the proposer, stored in msgpack field `pp`. If the proposer is not eligible, as described below, the proposer payout must be 0. The proposer payout must not exceed: (1) the sum of the bonus incentive and half of the fees collected, and (2) the fee sink balance minus 100,000 microAlgos.
| | Expired Participation Accounts | The block’s expired participation accounts, which contains an optional list of account addresses. These accounts’ participation keys expire by the end of the current round, with exact rules below. The list is stored in msgpack key `partupdrmv`. | | Suspended Participation Accounts | The block’s suspended participation accounts, which contains an optional list of account addresses. These accounts have not recently demonstrated that they are available and participating, with exact rules below. The list is stored in msgpack key `partupdabs`. |
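Since raw block headers use the terse msgpack keys listed above, a small helper that renames them can make decoded blocks easier to read. A minimal sketch, mapping only a subset of keys; the sample values are hypothetical:

```python
# Codec keys from the table above mapped to readable field names.
HEADER_KEYS = {
    "rnd": "round",
    "gen": "genesis_id",
    "gh": "genesis_hash",
    "ts": "timestamp",
    "seed": "seed",
    "prev": "previous_hash",
    "tc": "transaction_counter",
    "prp": "proposer",
    "fc": "fees_collected",
    "bi": "bonus_incentive",
    "pp": "proposer_payout",
}

def readable_header(raw: dict) -> dict:
    """Rename terse msgpack keys; pass unknown keys through unchanged."""
    return {HEADER_KEYS.get(k, k): v for k, v in raw.items()}

# Hypothetical decoded header (values made up for illustration).
sample = {"rnd": 41_000_000, "ts": 1_700_000_000, "fc": 3000, "prp": "PROPOSER_ADDR"}
print(readable_header(sample)["round"])  # 41000000
```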
# Transaction Fees
Blockchains like Algorand are decentralized networks with finite computational resources. Transaction fees serve as an essential economic mechanism to protect the network by requiring users to pay for the computational resources their transactions use. This fee structure protects Algorand against potential spam attacks that could overwhelm the system and prevents the blockchain from being trapped in infinite computational loops that compromise performance and security. ## Minimum Fee The minimum fee for each transaction on Algorand is 1000 microAlgo or 0.001 Algo, and it is **fixed** to this amount when the network is not congested. ## Fee Calculation Formula The total fee of a transaction is calculated using the following formula where: * `txn_size_in_bytes` is the size of the transaction in bytes * `current_fee_per_byte` is the current network’s congestion-based fee per byte * `min_fee` is the minimum fee for a transaction ```shell fee = max(current_fee_per_byte * txn_size_in_bytes, min_fee) ``` If the network is not congested, the `current_fee_per_byte` will be zero, and the minimum transaction fee will be used. If the network is congested, the `current_fee_per_byte` will be non-zero. For a given transaction, if the product of the transaction’s size in bytes and the current fee per byte is greater than the minimum fee, the product is used as the fee. Otherwise, the minimum fee is used. Transaction fees in Algorand apply uniformly across all transaction types: payments, asset transfers, application calls, and others all use the same fee structure. Application call transaction fees also don’t vary based on the complexity of the smart contract code being executed. Only the size of the serialized transaction determines the fee. ## Suggested Fee The suggested transaction parameters provided by the network include the `fee` parameter, which is the suggested `current fee per byte`.
This suggested fee is used to determine transaction costs through this simple formula: ```shell fee = max(current_fee_per_byte * transaction_size_in_bytes, min_fee) ``` When the network isn’t congested, `current_fee_per_byte` equals zero, simplifying the formula to: ```shell fee = max(0, min_fee) = min_fee ``` This is why standard Algorand transactions cost **0.001 ALGO** or **1000 microAlgo** during normal network conditions. Here is an example of how to get suggested parameters using the Algorand Client. ### Algorand Client Suggested Params Configuration The Algorand Client automatically caches suggested parameters provided by the network to reduce network traffic. It has a set of default configurations that control this behavior, but the default configuration can be overridden and changed: * `algorand.setDefaultValidityWindow(validityWindow)` - Set the default validity window (the number of rounds from the current known round for which the transaction will be valid). Keeping this value small is usually ideal: it avoids transactions that remain valid far into the future and could be submitted long after you assumed they had failed. The validity window defaults to 10, except in automated testing, where it’s set to 1000 when targeting LocalNet. * `algorand.setSuggestedParams(suggestedParams, until?)` - Set the suggested network parameters to use (optionally until the given time) * `algorand.setSuggestedParamsTimeout(timeout)` - Set the timeout that is used to cache the suggested network parameters (by default 3 seconds) * `algorand.getSuggestedParams()` - Get the current suggested network parameters object, either the cached value or, if the cache has expired, a fresh value Caution: When using suggested fees, always set a maximum fee limit. During network congestion, fees become variable and could increase significantly based on network conditions.
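The fee formula above is easy to express directly. A minimal sketch in Python (the function name is illustrative, not an SDK API):

```python
MIN_FEE = 1_000  # microAlgo, the network minimum fee

def transaction_fee(txn_size_in_bytes: int, current_fee_per_byte: int) -> int:
    """Fee formula from above: the congestion-based per-byte fee,
    floored at the network minimum fee."""
    return max(current_fee_per_byte * txn_size_in_bytes, MIN_FEE)

# Uncongested network: per-byte fee is 0, so every transaction pays the minimum.
assert transaction_fee(250, 0) == 1_000
# Congested network: the per-byte product applies once it exceeds the minimum.
assert transaction_fee(250, 10) == 2_500
```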
## Flat Fee The suggested parameters include a `flat_fee` field that enables manual fee configuration. When set to `true`, you can specify exactly how much you want to pay for a transaction. If you choose this method, ensure your specified fee meets at least the minimum transaction fee (`min-fee`) available in the suggested parameters through the Algorand Client. ### Use Cases for Flat Fees * Covering extra fees in transaction groups or app calls with inner transactions * Displaying specific rounded fees to users in applications * Preparing future transactions when network conditions are unknown * Handling non-urgent transactions that can be retried if rejected ## Pooled Transaction Fees The Algorand protocol supports pooled fees, where one transaction can pay the fees of other transactions within the same atomic group. For atomic transactions, the fees set on all transactions in the group are summed. This sum is compared against the protocol-determined expected fee for the group, and the group may proceed as long as the sum of the fees is greater than or equal to the required fee for the group. Below is an example of setting a pooled fee on an atomic group of two transactions. Here transaction B’s fee is directly set to be 2x the minimum fee and transaction A’s fee is set to zero. This atomic group will successfully execute because the sum of the fees is greater than or equal to the required fee for the group. ## Inner Transaction Fees Inner transactions are transactions sent by an application account. When an account makes an application call transaction to a contract method with one inner transaction, there are two ways to cover the associated inner transaction fee. * **App caller pays fees (recommended)**: The account calling the contract method pays for both the application call and the inner transaction fees through fee pooling. This approach is recommended because the inner transaction execution doesn’t depend on the application account’s ALGO balance.
* **App account pays fees (not recommended)**: The application account itself pays the inner transaction fee using its own ALGO balance. This approach is discouraged because repeated calls to the method could deplete the application’s ALGO balance, eventually resulting in “insufficient balance” errors that prevent the application from functioning. Smart contract inner transactions may have their fees covered by the outer transactions, but they cannot cover outer transaction fees. This limitation, that only outer transactions may cover inner transaction fees, also holds for nested smart contract inner transactions. For example, if Smart Contract A is called, which then calls Smart Contract B, which then calls Smart Contract C, then C’s fee cannot cover the call to B, which cannot cover the call to A. Refer to the Inner Transactions page for code examples. ### Fee Structure Inner transaction fees are **fixed at 1,000 microAlgo** per transaction, regardless of network congestion. To properly cover fees: * For one inner transaction: Add 1,000 microALGO to the outer transaction fee * For two inner transactions: Add 2,000 microALGO to the outer transaction fee * And so on for additional inner transactions Here is an example of calling a smart contract with an inner transaction and covering the inner transaction fee with the outer transaction. This is the `payment` contract method that will be called and it has one inner payment transaction.
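The fee arithmetic described in this section can be sketched in a few lines of Python (the helper names are illustrative, not part of any SDK):

```python
MIN_FEE = 1_000  # microAlgo; inner transaction fees are fixed at this amount

def outer_fee_covering_inners(num_inner_txns: int) -> int:
    """Fee to set on an app call so that it also covers its inner
    transactions via fee pooling: one minimum fee for the outer call
    plus one per inner transaction."""
    return MIN_FEE * (1 + num_inner_txns)

def group_fees_sufficient(fees: list[int], num_txns_required: int) -> bool:
    """Pooled-fee check: the group proceeds if the sum of all fees meets
    the expected fee for the whole group."""
    return sum(fees) >= MIN_FEE * num_txns_required

# App call with one inner payment: set 2,000 microAlgo on the outer call.
assert outer_fee_covering_inners(1) == 2_000
# Two-txn atomic group: B pays 2x the minimum, A pays zero; the sum still covers both.
assert group_fees_sufficient([0, 2_000], num_txns_required=2)
```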
# Leases
A lease is a mechanism in Algorand that reserves exclusive rights to submit transactions with a specific identifier for a defined period, preventing duplicate or competing transactions from the same account during that time. Leases provide security for transactions in three ways: they enable exclusive transaction execution (useful for recurring payments), help mitigate fee variability, and secure long-running smart contracts. When a transaction includes a *Lease* value (\[32]byte), the network creates a `{ Sender : Lease }` pair that persists on the validation node until the transaction’s *LastValid* round expires. This creates a “lock” that prevents any future transaction from using the same `{ Sender : Lease }` pair until expiration. The typical one-time *payment* or *asset* “send” transaction is short-lived and may not necessarily benefit from including a *Lease* value, but failing to define one within certain smart contract designs may leave an account vulnerable to a denial-of-service attack. ## How Leases Work Every transaction in Algorand includes a *Header* with required and optional validation fields. The required fields *FirstValid* and *LastValid* define a time window of up to 1000 rounds during which the transaction can be validated by the network. On MainNet, this creates a validity window of up to about 47 minutes. Smart contracts often calculate a specific validity window and include a *Lease* value in their validation logic to enable secure transactions for payments, key management and other scenarios. Let’s take a look at why you may want to use the *Lease* field and when you definitely should. ## Step by Step Let’s examine a simple example where Alice sends Algo to Bob. This basic transaction is short-lived and typically wouldn’t need a lease under normal network conditions. ```bash $ goal clerk send --from $ALICE --to $BOB --amount $AMOUNT ``` Under normal network conditions, this transaction will be confirmed in the next round.
Bob gets his money from Alice and there are no further concerns. However, now let’s assume the network is congested, fees are higher than normal and Alice desires to minimize her fee spend while ensuring only a single payment transaction to Bob is confirmed by the network. Alice may construct a series of transactions to Bob, each defining identical *Lease*, *FirstValid* and *LastValid* values but increasing *Fee* amounts, then broadcast them to the network. ```bash # Define transaction fields $ LEASE_VALUE=$(echo "Lease value (at most 32-bytes)" | xxd -p | base64) $ FIRST_VALID=$(goal node status | grep "Last committed block:" | awk '{ print $4 }') $ VALID_ROUNDS=1000 $ LAST_VALID=$(($FIRST_VALID+$VALID_ROUNDS)) $ FEE=1000 # Create the initial signed transaction and write it out to a file $ goal clerk send --from $ALICE --to $BOB --amount $AMOUNT \ --lease $LEASE_VALUE --firstvalid $FIRST_VALID --lastvalid $LAST_VALID \ --fee $FEE --out $FEE.stxn --sign ``` Above, Alice defined values to use within her transactions. The `$LEASE_VALUE` must be base64 encoded and not exceed 32 bytes, typically using a hash value. The `$FIRST_VALID` value is obtained from the network and `$VALID_ROUNDS` is set to its maximum value of 1000 to calculate `$LAST_VALID`. Initially `$FEE` is set to the minimum and will be the only value modified in subsequent transactions. Alice now broadcasts the initial transaction with `goal clerk rawsend --filename 1000.stxn` but due to network congestion and high fees, `goal` will continue awaiting confirmation until `$LAST_VALID`. During the validation window Alice may construct additional nearly identical transactions with *only* higher fees and broadcast each one concurrently.
```bash # Redefine ONLY the FEE value $ FEE=$(($FEE+1000)) # Broadcast additional signed transaction $ goal clerk send --from $ALICE --to $BOB --amount $AMOUNT \ --lease $LEASE_VALUE --firstvalid $FIRST_VALID --lastvalid $LAST_VALID \ --fee $FEE ``` Alice will continue to increase the `$FEE` value with each subsequent transaction. At some point, one of the transactions will be approved, likely the one with the highest fee at that time, and the “lock” is now set for `{ $ALICE : $LEASE_VALUE }` until `$LAST_VALID`. Alice is assured that none of her previously submitted pending transactions can be validated. Bob is paid just one time. ## Potential Pitfalls That was a rather simple scenario and unlikely during normal network conditions. Next, let’s uncover some security concerns Alice needs to guard against. Once Alice broadcasts her initial transaction, she must ensure all subsequent transactions utilize the exact same values for *FirstValid*, *LastValid* and *Lease*. Notice in the second transaction only the *Fee* is incremented, ensuring the other values remain static. If Alice executes the initial code block twice, the `$FIRST_VALID` value will be updated by querying the network again, thus extending the validation window during which `$LEASE_VALUE` is evaluated. Similarly, if the `$LEASE_VALUE` is changed within a static validation window, multiple transactions may be confirmed. Remember, the “lock” is a mutual exclusion on `{ Sender : Lease }`; changing either creates a new lock. After the validation window expires, Alice is free to reuse the `$LEASE_VALUE` in any new transaction. This is a common practice for recurring payments. ## Code implementation Below you will find an example of implementing leases using AlgoKit Utils in Python and TypeScript.
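The `{ Sender : Lease }` mutual-exclusion rule described above can be modeled in a few lines of Python. This is a conceptual sketch of the lock semantics, not how validation nodes are actually implemented:

```python
# Conceptual model of the { Sender : Lease } lock (class name is illustrative).
class LeaseLedger:
    def __init__(self) -> None:
        # (sender, lease) -> round until which the lock is held (LastValid)
        self._locks: dict[tuple[str, bytes], int] = {}

    def try_confirm(self, sender: str, lease: bytes,
                    last_valid: int, current_round: int) -> bool:
        """Confirm a transaction unless its (sender, lease) pair is locked."""
        key = (sender, lease)
        held_until = self._locks.get(key)
        if held_until is not None and current_round <= held_until:
            return False  # lock still held: competing duplicate rejected
        self._locks[key] = last_valid  # acquire the lock until LastValid
        return True

ledger = LeaseLedger()
lease = b"rent-march".ljust(32, b"\0")  # lease values are 32 bytes

assert ledger.try_confirm("ALICE", lease, last_valid=1_500, current_round=600)
# Any competing transaction with the same pair fails until round 1,500 passes...
assert not ledger.try_confirm("ALICE", lease, last_valid=1_600, current_round=700)
# ...after expiry the lease value can be reused (e.g. for recurring payments).
assert ledger.try_confirm("ALICE", lease, last_valid=2_600, current_round=1_700)
```

Note that changing either the sender or the lease value creates a distinct lock, which is exactly the pitfall described above.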
# Networks
# Overview
Transactions are cryptographically signed instructions that modify the Algorand blockchain’s state. The transaction lifecycle follows these steps: creation, signing with private keys, submission to the Algorand network, selection by block proposers for inclusion in blocks, and execution when the block is validated and added to the blockchain. The most basic transaction type is a payment transaction, which transfers Algo between accounts. ## Transaction Types There are eight transaction types in the Algorand protocol: * Payment * Key Registration * Asset Configuration * Asset Freeze * Asset Transfer * Application Call * State Proof * Heartbeat These eight transaction types can be configured in specific ways to produce more distinct functional transaction types. For example, both asset creation and asset destruction use the same underlying `AssetConfigTx` type, which allows for the creation or deletion of Algorand Standard Assets (ASAs). Distinguishing between these two operations requires examining the specific combination of the `AssetConfigTx` fields and values that determine whether an asset is being created or destroyed. Fortunately, the libraries provide intuitive methods to create these more granular transaction types without having to necessarily worry about the underlying structure. However, if you are signing a pre-made transaction, correctly interpreting the underlying structure is critical. For detailed information about each transaction type, its structure, and how to create and send them using the AlgoKit Utils Library, refer to the Transaction Types page: Detailed information about transaction types ## Transaction Fees Every transaction on Algorand requires a fee to be included in a block and executed. When the network is not congested, transactions have a fixed minimum fee of **1000 microAlgo** (**0.001 Algo**).
For detailed information about transaction fees on Algorand, refer to the Transaction Fees page: Detailed information about transaction fees ## Signing Transactions Before a transaction can be included in the blockchain, it must be signed by either the sender or an authorized address. The signed transaction is wrapped in a `SignedTxn` object that contains both the original transaction and its signature. There are three types of signatures: * Single Signatures * Multisignatures * Logic Signatures For detailed information about signing transactions on Algorand, refer to the Signing Transactions page: Detailed information about signing transactions ## Atomic Transaction Groups Algorand’s protocol includes a feature called Atomic Transfers, which allows you to group up to 16 transactions for simultaneous execution. These transactions either all succeed or all fail, eliminating the need for complex solutions like hashed timelock contracts used on other blockchains. Any Algorand transaction type can be included in an atomic group for simultaneous execution. ### Use Cases Atomic transfers enable use cases such as: * **Circular Trades** - Alice pays Bob if and only if Bob pays Claire if and only if Claire pays Alice. * **Group Payments** - Everyone pays or no one pays. * **Decentralized Exchanges** - Trade one asset for another without going through a centralized exchange. * **Distributed Payments** - Payments to multiple recipients. * **Pooled Transaction Fees** - One transaction pays the fees of others. Learn more about fee pooling. * **Op Up Transactions** - Group multiple transactions to obtain a higher opcode budget For detailed information about atomic transactions, refer to the Atomic Transaction Groups page: Detailed information about atomic transaction groups ## Leases A lease prevents multiple transactions with the same `(Sender, Lease)` pair from executing during the same validity period.
When a transaction with a lease is confirmed, no other transaction using that same lease can be executed until after the `LastValid` round. A lease can be used to prevent replay attacks in smart contracts or to safeguard against unintended duplicate spending. For example, if a user sends a transaction to the network and later realizes their fee was too low, they could send another transaction with a higher fee, but the same lease value. This would ensure that only one of those transactions ends up getting confirmed during the validity period. For detailed information about the lease field on Algorand, refer to the Lease page: Detailed information about the lease field ## Blocks Blocks form the foundation of the Algorand blockchain, storing all confirmed transactions and state changes. Each block represents a batch of transactions that advances the ledger from one round to the next, updating the state of the blockchain. Each block contains: * Essential metadata like round number and timestamps. * Transaction data * Links to previous blocks in the chain Blocks are added to the chain through Pure Proof of Stake, Algorand’s unique proof-of-stake consensus protocol that ensures security and instant finality through randomness. For detailed information about blocks on Algorand, refer to the Blocks page: Detailed information about blocks ## URI Scheme Algorand’s URI specification provides a standardized format for deeplinks and QR codes, allowing applications and websites to exchange transaction information. The specification is based on Bitcoin’s BIP 21 URI scheme to maximize compatibility with existing applications. For detailed information about the URI scheme on Algorand, refer to the URI Scheme page: Detailed information about the URI scheme ## Transaction Reference Learn more on the Transaction Reference page: Detailed information about the transaction reference
# Transaction Reference
The tables below describe the fields used in Algorand transactions. Each table includes the field name, indicates if the field is required or optional, shows the type used in the protocol code, displays the codec name that appears in transactions, and provides a description of the field’s purpose. While the protocol types are shown in these tables, the input types may be different when using SDKs. ## Common Fields (Header and Type) These fields are common to all transactions. | Field | Required | Type | codec | Description | | --- | --- | --- | --- | --- | | [Fee]() | *required* | uint64 | `"fee"` | Paid by the sender to the FeeSink to prevent denial-of-service. The minimum fee on Algorand is currently 1000 microAlgos. | | [FirstValid]() | *required* | uint64 | `"fv"` | The first round for which the transaction is valid. If the transaction is sent prior to this round it will be rejected by the network. | | [GenesisHash]() | *required* | \[32]byte | `"gh"` | The hash of the genesis block of the network for which the transaction is valid. See the genesis hash for MainNet, TestNet, and BetaNet.
| | [LastValid]() | *required* | uint64 | `"lv"` | The ending round for which the transaction is valid. After this round, the transaction will be rejected by the network. | | [Sender]() | *required* | Address | `"snd"` | The address of the account that pays the fee and amount. | | [TxType]() | *required* | string | `"type"` | Specifies the type of transaction. This value is automatically generated using any of the developer tools. | | [GenesisID]() | *optional* | string | `"gen"` | The human-readable string that identifies the network for the transaction. The genesis ID is found in the genesis block. See the genesis ID for MainNet, TestNet, and BetaNet. | | [Group]() | *optional* | \[32]byte | `"grp"` | The group specifies that the transaction is part of a group and, if so, specifies the hash of the transaction group. Assign a group ID to a transaction through the workflow described in the Atomic Transaction Groups section. | | [Lease]() | *optional* | \[32]byte | `"lx"` | A lease enforces mutual exclusion of transactions. If this field is nonzero, then once the transaction is confirmed, it acquires the lease identified by the (Sender, Lease) pair of the transaction until the LastValid round passes. While this transaction possesses the lease, no other transaction specifying this lease can be confirmed. A lease is often used in the context of Algorand Smart Contracts to prevent replay attacks. Read more about leases. Leases can also be used to safeguard against unintended duplicate spends. For example, if I send a transaction to the network and later realize my fee was too low, I could send another transaction with a higher fee, but the same lease value. This would ensure that only one of those transactions ends up getting confirmed during the validity period. | | [Note]() | *optional* | \[]byte | `"note"` | Any data up to 1000 bytes. | | [RekeyTo]() | *optional* | Address | `"rekey"` | Specifies the authorized address. This address will be used to authorize all future transactions.
Learn more about accounts. | ## Payment Transaction Transaction Object Type: `PaymentTx` Includes all fields in and `"type"` is `"pay"`. | Field | Required | Type | codec | Description | | -------------------- | ---------- | ------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [Receiver]() | *required* | Address | `"rcv"` | The address of the account that receives the . | | [Amount]() | *required* | uint64 | `"amt"` | The total amount to be sent in microAlgos. | | [CloseRemainderTo]() | *optional* | Address | `"close"` | When set, it indicates that the transaction is requesting that the account should be closed, and all remaining funds, after the and are paid, be transferred to this address. | ## Key Registration Transaction Transaction Object Type: `KeyRegistrationTx` Includes all fields in and `"type"` is `"keyreg"`. | Field | Required | Type | codec | Description | | -------------------- | --------------------- | ----------------------------------- | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [VotePk]() | *required for online* | ed25519PublicKey | `"votekey"` | The root participation public key. | | [SelectionPK]() | *required for online* | VrfPubkey | `"selkey"` | The VRF public key. | | [StateProofPk]() | *required for online* | MerkleSignature Verifier (64 bytes) | `"sprfkey"` | The 64 byte state proof public key commitment. | | [VoteFirst]() | *required for online* | uint64 | `"votefst"` | The first round that the *participation key* is valid. Not to be confused with the round of the keyreg transaction. 
| | [VoteLast]() | *required for online* | uint64 | `"votelst"` | The last round that the *participation key* is valid. Not to be confused with the round of the keyreg transaction. | | [VoteKeyDilution]() | *required for online* | uint64 | `"votekd"` | This is the dilution for the 2-level participation key. It determines the interval (number of rounds) for generating new ephemeral keys. | | [Nonparticipation]() | *optional* | bool | `"nonpart"` | All new Algorand accounts are participating by default. This means that they earn rewards. Mark an account nonparticipating by setting this value to `true` and this account will no longer earn rewards. It is unlikely that you will ever need to do this and exists mainly for economic-related functions on the network. | ## Asset Configuration Transaction Transaction Object Type: `AssetConfigTx` Includes all fields in and `"type"` is `"acfg"`. This is used to create, configure and destroy an asset depending on which fields are set. | Field | Required | Type | codec | Description | | ----------------------------- | ---------------------------- | ----------------------------------------------- | -------- | ---------------------------------------------------------------------------------------------------------------- | | [ConfigAsset]() | *required, except on create* | uint64 | `"caid"` | For re-configure or destroy transactions, this is the unique asset ID. On asset creation, the ID is set to zero. | | *required, except on destroy* | `"apar"` | See AssetParams table for all available fields. 
| | | ## Asset Parameters Object Name: `AssetParams` | Field | Required | Type | codec | Description | | ----------------- | ---------------------- | ------- | ------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [Total]() | *required on creation* | uint64 | `"t"` | The total number of base units of the asset to create. This number cannot be changed. | | [Decimals]() | *required on creation* | uint32 | `"dc"` | The number of digits to use after the decimal point when displaying the asset. If 0, the asset is not divisible. If 1, the base unit of the asset is in tenths. If 2, the base unit of the asset is in hundredths, if 3, the base unit of the asset is in thousandths, and so on up to 19 decimal places | | [DefaultFrozen]() | *required on creation* | bool | `"df"` | True to freeze holdings for this asset by default. | | [UnitName]() | *optional* | string | `"un"` | The name of a unit of this asset. Supplied on creation. Max size is 8 bytes. Example: USDT | | [AssetName]() | *optional* | string | `"an"` | The name of the asset. Supplied on creation. Max size is 32 bytes. Example: Tether | | [URL]() | *optional* | string | `"au"` | Specifies a URL where more information about the asset can be retrieved. Max size is 96 bytes. | | [MetaDataHash]() | *optional* | \[]byte | `"am"` | This field is intended to be a 32-byte hash of some metadata that is relevant to your asset and/or asset holders. The format of this metadata is up to the application. This field can *only* be specified upon creation. An example might be the hash of some certificate that acknowledges the digitized asset as the official representation of a particular real-world asset. 
| | [ManagerAddr]() | *optional* | Address | `"m"` | The address of the account that can manage the configuration of the asset and destroy it. | | [ReserveAddr]() | *optional* | Address | `"r"` | The address of the account that holds the reserve (non-minted) units of the asset. This address has no specific authority in the protocol itself. It is used in the case where you want to signal to holders of your asset that the non-minted units of the asset reside in an account that is different from the default creator account (the sender). | | [FreezeAddr]() | *optional* | Address | `"f"` | The address of the account used to freeze holdings of this asset. If empty, freezing is not permitted. | | [ClawbackAddr]() | *optional* | Address | `"c"` | The address of the account that can clawback holdings of this asset. If empty, clawback is not permitted. | ## Asset Transfer Transaction Transaction Object Type: `AssetTransferTx` Includes all fields in and `"type"` is `"axfer"`. | Field | Required | Type | codec | Description | | ----------------- | ---------- | ------- | ---------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [XferAsset]() | *required* | uint64 | `"xaid"` | The unique ID of the asset to be transferred. | | [AssetAmount]() | *required* | uint64 | `"aamt"` | The amount of the asset to be transferred. A zero amount transferred to self allocates that asset in the account’s Asset map. | | [AssetSender]() | *required* | Address | `"asnd"` | The sender of the transfer. The regular field should be used and this one set to the zero value for regular transfers between accounts. 
If this value is nonzero, it indicates a clawback transaction where the is the asset’s clawback address and the asset sender is the address from which the funds will be withdrawn. | | [AssetReceiver]() | *required* | Address | `"arcv"` | The recipient of the asset transfer. | | [AssetCloseTo]() | *optional* | Address | `"aclose"` | Specify this field to remove the asset holding from the account and reduce the account’s minimum balance (i.e. opt-out of the asset). | ## Asset OptIn Transaction Transaction Object Type: `AssetTransferTx` Includes all fields in and `"type"` is `"axfer"`. This is a special form of an Asset Transfer Transaction. | Field | Required | Type | codec | Description | | ----------------- | ---------- | ------- | -------- | ----------------------------------------------------------------------- | | [XferAsset]() | *required* | uint64 | `"xaid"` | The unique ID of the asset to opt-in to. | | [Sender]() | *required* | Address | `"snd"` | The account which is allocating the asset to their account’s Asset map. | | [AssetReceiver]() | *required* | Address | `"arcv"` | The account which is allocating the asset to their account’s Asset map. | ## Asset Clawback Transaction Transaction Object Type: `AssetTransferTx` Includes all fields in and `"type"` is `"axfer"`. This is a special form of an Asset Transfer Transaction. | Field | Required | Type | codec | Description | | ----------------- | ---------- | ------- | -------- | ------------------------------------------------------------------------------------------------- | | [Sender]() | *required* | Address | `"snd"` | The sender of this transaction must be the clawback account specified in the asset configuration. | | [XferAsset]() | *required* | uint64 | `"xaid"` | The unique ID of the asset to be transferred. | | [AssetAmount]() | *required* | uint64 | `"aamt"` | The amount of the asset to be transferred. 
| | [AssetSender]() | *required* | Address | `"asnd"` | The address from which the funds will be withdrawn. | | [AssetReceiver]() | *required* | Address | `"arcv"` | The recipient of the asset transfer. | ## Asset Freeze Transaction Transaction Object Type: `AssetFreezeTx` Includes all fields in and `"type"` is `"afrz"`. | Field | Required | Type | codec | Description | | ----------------- | ---------- | ------- | -------- | ------------------------------------------------------------------- | | [FreezeAccount]() | *required* | Address | `"fadd"` | The address of the account whose asset is being frozen or unfrozen. | | [FreezeAsset]() | *required* | uint64 | `"faid"` | The asset ID being frozen or unfrozen. | | [AssetFrozen]() | *required* | bool | `"afrz"` | True to freeze the asset. | ## Application Call Transaction Transaction Object Type: `ApplicationCallTx` Includes all fields in and `"type"` is `"appl"`. | Field | Required | Type | codec | Description | | ----------------------- | ---------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [Application ID]() | *required* | uint64 | `"apid"` | ID of the application being configured or empty if creating. | | [OnComplete]() | *required* | uint64 | `"apan"` | Defines what additional actions occur with the transaction. | | [Accounts]() | *optional* | \[]Address | `"apat"` | List of accounts in addition to the sender that may be accessed from the application’s approval-program and clear-state-program. 
| | [Approval Program]() | *optional* | \[]byte | `"apap"` | Logic executed for every application transaction, except when on-completion is set to “clear”. It can read and write global state for the application, as well as account-specific local state. Approval programs may reject the transaction. | | [App Arguments]() | *optional* | \[]\[]byte | `"apaa"` | Transaction specific arguments accessed from the application’s approval-program and clear-state-program. | | [Clear State Program]() | *optional* | \[]byte | `"apsu"` | Logic executed for application transactions with on-completion set to “clear”. It can read and write global state for the application, as well as account-specific local state. Clear state programs cannot reject the transaction. | | [Foreign Apps]() | *optional* | \[]uint64 | `"apfa"` | Lists the applications in addition to the application-id whose global states may be accessed by this application’s approval-program and clear-state-program. The access is read-only. | | [Foreign Assets]() | *optional* | \[]uint64 | `"apas"` | Lists the assets whose AssetParams may be accessed by this application’s approval-program and clear-state-program. The access is read-only. | | [GlobalStateSchema]() | *optional* | `"apgs"` | Holds the maximum number of global state values defined within a object. | | | [LocalStateSchema]() | *optional* | `"apls"` | Holds the maximum number of local state values defined within a object. | | | [ExtraProgramPages]() | *optional* | uint64 | `"apep"` | Number of additional pages allocated to the application’s approval and clear state programs. Each `ExtraProgramPages` is 2048 bytes. The sum of `ApprovalProgram` and `ClearStateProgram` may not exceed 2048\*(1+`ExtraProgramPages`) bytes. | | [Boxes]() | *optional* | \[]BoxRef | `"apbx"` | The boxes that should be made available for the runtime of the program. 
| | [RejectVersion]() | *optional* | uint64 | `"aprv"` | If the application being called has a version equal to or greater than the provided reject version, the transaction will be rejected. | | [AccessList]() | *optional* | `"al"` | An array of resources that the application or group can use. Each resource will be one of two types; “simple” or “complex”. The Access List and existing Foreign Reference arrays (apat, apfa, apas, apbx) are mutually exclusive and cannot be used together on the same transaction. | | ## Storage State Schema Object Name: `StateSchema` The `StateSchema` object is only required for the create application call transaction. The `StateSchema` object must be fully populated for both the `GlobalStateSchema` and `LocalStateSchema` objects. | Field | Required | Type | codec | Description | | --------------------- | ---------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------------------- | | [Number Ints]() | *required* | uint64 | `"nui"` | Maximum number of integer values that may be stored in the \[global \|\| local] application key/value store. Immutable. | | [Number ByteSlices]() | *required* | uint64 | `"nbs"` | Maximum number of byte slices values that may be stored in the \[global \|\| local] application key/value store. Immutable. 
| ## Signed Transaction Transaction Object Type: `SignedTxn` | Field | Required | Type | codec | Description | | --------------- | ------------------------------------- | ------------------ | --------- | ----------- | | [Sig]() | *required, if no other sig specified* | crypto.Signature | `"sig"` | | | [LogicSig]() | *required, if no other sig specified* | LogicSig | `"lsig"` | | | [Msig]() | *required, if no other sig specified* | crypto.MultisigSig | `"msig"` | | | [Msig]() | *required, if no other sig specified* | crypto.MultisigSig | `"lmsig"` | | | [Transaction]() | *required* | Transaction | `"txn"` | , , , , or | ## Heartbeat Transaction Transaction Object Type: `HeartbeatTx` Includes all fields in and `"type"` is `"hbt"`. | Field | Required | Type | codec | Description | | ----------------- | ---------- | ------------- | --------- | --------------------------------------------------------------- | | [HbAddress]() | *required* | Address | `"hbad"` | The account this transaction is proving onlineness for. | | [HbKeyDilution]() | *required* | uint64 | `"hbkd"` | Must match HbAddress account’s current KeyDilution. | | [HbProof]() | *required* | HbProofFields | `"hbpf"` | The heartbeat proof fields. | | [HbSeed]() | *required* | \[]byte | `"hbsd"` | Must be the block seed for this transaction’s firstValid block. | | [HbVoteID]() | *required* | \[]byte | `"hbvid"` | Must match the HbAddress account’s current VoteID. |
# Signing Transactions
This section explains how to authorize transactions on the Algorand Network. Transaction signing is a fundamental security feature that proves ownership of an account and authorizes specific actions on the blockchain. Before a transaction is sent to the network, it must first be authorized by the sender. The following sections describe the different types of transaction signatures: ## Single Signatures A single signature corresponds to a signature from the private key of an Algorand public/private key pair. Learn more about Algorand public/private key pairs and how they are used for signing. This is an example of a transaction signed by an Algorand private key, displayed with the `goal clerk inspect` command: ```json { "sig": "ynA5Hmq+qtMhRVx63pTO2RpDrYiY1wzF/9Rnnlms6NvEQ1ezJI/Ir9nPAT6+u+K8BQ32pplVrj5NTEMZQqy9Dw==", "txn": { "amt": 10000000, "fee": 1000, "fv": 4694301, "gen": "testnet-v1.0", "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 4695301, "rcv": "QC7XT7QU7X6IHNRJZBR67RBMKCAPH67PCSX4LYH4QKVSQ7DQZ32PG5HSVQ", "snd": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "type": "pay" } } ``` This transaction sends 10 Algo from `"EW64GC..."` to `"QC7XT7..."` on TestNet. The transaction was signed with the private key that corresponds to the `"snd"` address of `"EW64GC..."`. The base64 encoded signature is shown as the value of the `"sig"` field. ### How to The following examples demonstrate how to sign a transaction, first with an account that originally doesn’t have a signer attached, and then with an account that already has one. ## Multisignatures When a transaction’s is a , authorization requires signatures from multiple private keys. The number of signatures must be equal to or greater than the account’s threshold value. To sign a multisignature transaction, you need the complete multisignature account details: the list and order of authorized addresses, the threshold value, and the version. 
This information must be available either in the transaction itself or to the signing agent. Learn how to configure and manage multisignature accounts on Algorand. Here is what the same transaction above would look like if sent from a 2/3 multisig account. ```json { "msig": { "subsig": [ { "pk": "SYGHTA2DR5DYFWJE6D4T34P4AWGCG7JTNMY4VI6EDUVRMX7NG4KTA2WMDA" }, { "pk": "VBDMPQACQCH5M6SBXKQXRWQIL7QSR4FH2UI6EYI4RCJSB2T2ZYF2JDHZ2Q" }, { "pk": "W3KONPXCGFNUGXGDCOCQYVD64KZOLUMHZ7BNM2ZBK5FSSARRDEXINLYHPI" } ], "thr": 2, "v": 1 }, "txn": { "amt": 10000000, "fee": 1000, "fv": 4694301, "gen": "testnet-v1.0", "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 4695301, "rcv": "QC7XT7QU7X6IHNRJZBR67RBMKCAPH67PCSX4LYH4QKVSQ7DQZ32PG5HSVQ", "snd": "GQ3QPLJL4VKVGQCHPXT5UZTNZIJAGVJPXUHCJLRWQMFRVL4REVW7LJ3FGY", "type": "pay" } } ``` The difference between this transaction and the one above is the form of its signature component. For multisignature accounts, an struct is added which contains the 3 public addresses (`"pk"`), the threshold value (`"thr"`) and the multisig version `"v"`. Although this transaction is still unsigned, the addition of the correct `"msig"` struct indicates that the transaction is “aware” of its multisig sender and will accept sub-signatures from single keys even if the signing agent does not contain information about its multisignature properties. It is highly recommended to include the `"msig"` template in the transaction. This is especially important when the transaction will be signed by multiple parties or offline. Without the template, the signing agent must already know the multisignature account details. For example, `goal` can only sign a multisig transaction without an `"msig"` template if the multisig address exists in its wallet. In this case, `goal` will add the `"msig"` template during signing. Sub-signatures can be added to the transaction one at a time, cumulatively, or merged together from multiple transactions. 
Here is the same transaction above, fully authorized: ```json { "msig": { "subsig": [ { "pk": "SYGHTA2DR5DYFWJE6D4T34P4AWGCG7JTNMY4VI6EDUVRMX7NG4KTA2WMDA", "s": "xoQkPyyqCPEhodngmOTP2930Y2GgdmhU/YRQaxQXOwh775gyVSlb1NWn70KFRZvZU96cMtq6TXW+r4sK/lXBCQ==" }, { "pk": "VBDMPQACQCH5M6SBXKQXRWQIL7QSR4FH2UI6EYI4RCJSB2T2ZYF2JDHZ2Q" }, { "pk": "W3KONPXCGFNUGXGDCOCQYVD64KZOLUMHZ7BNM2ZBK5FSSARRDEXINLYHPI", "s": "p1ynP9+LZSOZCBcrFwt5JZB2F+zqw3qpLMY5vJBN83A+55cXDYp5uz/0b+vC0VKEKw+j+bL2TzKSL6aTESlDDw==" } ], "thr": 2, "v": 1 }, "txn": { "amt": 10000000, "fee": 1000, "fv": 4694301, "gen": "testnet-v1.0", "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 4695301, "rcv": "QC7XT7QU7X6IHNRJZBR67RBMKCAPH67PCSX4LYH4QKVSQ7DQZ32PG5HSVQ", "snd": "GQ3QPLJL4VKVGQCHPXT5UZTNZIJAGVJPXUHCJLRWQMFRVL4REVW7LJ3FGY", "type": "pay" } } ``` The two signatures are added underneath their respective addresses. Since this meets the required threshold of 2, the transaction is now fully authorized and can be sent to the network. While adding more sub-signatures than the threshold requires is unnecessary, it is perfectly valid. ### How-To The following code example demonstrates how to execute a transaction signed by a multisig account.
# Transaction Types
The following sections describe the seven types of Algorand transactions through example transactions that represent common use cases. These transaction types form the fundamental building blocks of the Algorand blockchain, enabling everything from simple payments to complex decentralized applications. The transaction types include Payment transactions for transferring Algo, Key Registration transactions for participating in consensus, Asset Configuration/Transfer/Freeze transactions for managing Algorand Standard Assets (ASAs), Application Call transactions for interacting with smart contracts, and State Proof transactions for consensus operations. ## Payment Transaction A Payment Transaction sends Algo, the Algorand blockchain’s native currency, from one account to another. ### Send Algo Here is an example payment transaction: ```json { "txn": { "amt": 5000000, "fee": 1000, "fv": 6000000, "gen": "mainnet-v1.0", "gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=", "lv": 6001000, "note": "SGVsbG8gV29ybGQ=", "rcv": "GD64YIY3TWGDMCNPP553DZPPR6LDUSFQOIJVFDPPXWEG3FVOJCCDBBHU5A", "snd": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "type": "pay" } } ``` The `"type": "pay"` indicates that this is a payment transaction. In this transaction, 5 Algo (5,000,000 microAlgos) are sent from sender address `"EW64GC..."` to receiver address `"GD64YI..."`. The sender pays the minimum transaction fee of 1,000 microAlgos. The transaction includes an optional note field containing the base64-encoded text “Hello World”. This transaction is valid on MainNet between rounds 6000000 and 6001000. 
The genesis hash uniquely identifies MainNet, while the genesis ID (`mainnet-v1.0`) is included for readability but should not be used for validation since it can be replicated on other private networks. To implement payment transactions in your code, you can use the following examples: ### Close an Account Closing an account removes it from the Algorand ledger. Due to the minimum balance requirement for all accounts, the only way to completely remove an account is to use the `close` field, also known as Close Remainder To, as shown in the transaction below: ```json { "txn": { "close": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "fee": 1000, "fv": 4695599, "gen": "testnet-v1.0", "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 4696599, "rcv": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "snd": "SYGHTA2DR5DYFWJE6D4T34P4AWGCG7JTNMY4VI6EDUVRMX7NG4KTA2WMDA", "type": "pay" } } ``` In this transaction, the sender account `"SYGHTA..."` pays the transaction fee and transfers its remaining balance to the close-to account `"EW64GC..."`. When no amount is specified, `amt` defaults to 0 Algo. If an account has any assets, those holdings must be closed first by specifying an Asset Close Remainder To address in an Asset Transfer transaction before closing the Algorand account. For rekeyed accounts, using the `--close-to` parameter removes the **auth-addr** field and returns signing authority to the original address. Keyholders of the **auth-addr** should use this parameter with caution as it permanently removes their control of the account. To create a close account transaction in your code, refer to the following examples: ## Key Registration Transaction The purpose of a `KeyRegistrationTx` is to register an account as `online` or `offline` to participate and vote in Algorand Consensus. Marking an account as `online` is only the first step for consensus participation. 
Before submitting a KeyReg transaction, a participation key must be generated for the account. ### Register Account Online This is an example of an **online** key registration transaction. ```json { "txn": { "fee": 2000, "fv": 6002000, "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 6003000, "selkey": "X84ReKTmp+yfgmMCbbokVqeFFFrKQeFZKEXG89SXwm4=", "snd": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "type": "keyreg", "votefst": 6000000, "votekd": 1730, "votekey": "eXq34wzh2UIxCZaI1leALKyAvSz/+XOe0wqdHagM+bw=", "votelst": 9000000 } } ``` What distinguishes this as a key registration transaction is `"type": "keyreg"` and what distinguishes it as an *online* key registration is the existence of the participation key-related fields, namely `"votekey"`, `"selkey"`, `"votekd"`, `"votefst"`, and `"votelst"`. The values for these fields are retrieved from the participation key info stored on the node where the participation key lives. The sender (`"EW64GC..."`) will pay a fee of `2000` microAlgos and its account state will change to `online` after this transaction is confirmed by the network. The transaction is valid between rounds 6,002,000 and 6,003,000 on TestNet. To register an account online in your code, you can use the following examples: ### Register Account Offline Here is an example of an **offline** key registration transaction. ```json { "txn": { "fee": 1000, "fv": 7000000, "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 7001000, "snd": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "type": "keyreg" } } ``` What distinguishes this from an *online* transaction is that it does *not* contain any participation key-related fields, since the account will no longer need a participation key if the transaction is confirmed. The sender (`"EW64GC..."`) will pay a fee of `1000` microAlgos and its account state will change to `offline` after this transaction is confirmed by the network. 
This transaction is valid between rounds 7,000,000 (`"fv"`) and 7,001,000 (`"lv"`) on TestNet as per the Genesis Hash (`"gh"`) value. To register an account offline in your code, you can use the following examples: ## Asset Configuration Transaction An `AssetConfigTx` is used to create an asset, modify certain parameters of an asset, or destroy an asset. ### Create an Asset Here is an example asset creation transaction: ```json { "txn": { "apar": { "am": "gXHjtDdtVpY7IKwJYsJWdCSrnUyRsX4jr3ihzQ2U9CQ=", "an": "My New Coin", "au": "developer.algorand.co", "c": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "dc": 2, "f": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "m": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "r": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "t": 50000000, "un": "MNC" }, "fee": 1000, "fv": 6000000, "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 6001000, "snd": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "type": "acfg" } } ``` The `"type": "acfg"` distinguishes this as an Asset Configuration transaction. What makes this uniquely an **asset creation** transaction is that *no* asset ID (`"caid"`) is specified and there exists an asset parameters struct (`"apar"`) that includes all the initial configurations for the asset. The asset is named (`"an"`) “My New Coin” and its unit name (`"un"`) is “MNC”. There are 50,000,000 total base units of this asset. Combined with the decimals (`"dc"`) value of 2, this means there are 500,000.00 whole units of this asset. There is an asset URL (`"au"`) specified and a base64-encoded metadata hash (`"am"`). This specific value corresponds to the SHA512/256 hash of the string “My New Coin Certificate of Value”. The manager (`"m"`), freeze (`"f"`), clawback (`"c"`), and reserve (`"r"`) are the same as the sender. The sender is also the creator. 
This transaction is valid between rounds 6,000,000 (`"fv"`) and 6,001,000 (`"lv"`) on TestNet as per the Genesis Hash (`"gh"`) value. To create an asset creation transaction in your code, use the following examples: ### Reconfigure an Asset The asset manager can modify an existing asset’s configuration using a **Reconfiguration Transaction**. Here’s an example transaction that changes the manager address for asset ID `168103`: ```json { "txn": { "apar": { "c": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "f": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "m": "QC7XT7QU7X6IHNRJZBR67RBMKCAPH67PCSX4LYH4QKVSQ7DQZ32PG5HSVQ", "r": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4" }, "caid": 168103, "fee": 1000, "fv": 6002000, "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 6003000, "snd": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "type": "acfg" } } ``` Unlike asset creation, a reconfiguration transaction requires an **asset id**. Only the manager, freeze, clawback, and reserve addresses can be modified, but all must be specified in the transaction even if they remain unchanged. Caution If any address fields are omitted in an `AssetConfigTx`, the protocol will set them to `null`. This change is permanent and cannot be reversed. After confirmation, this transaction will change the manager of the asset from `"EW64GC..."` to `"QC7XT7..."`. This transaction is valid on TestNet between rounds 6,002,000 and 6,003,000. A fee of `1000` microAlgo will be paid by the sender if confirmed. To reconfigure an asset in your code, you can use the following examples: ### Destroy an Asset A **Destroy Transaction** is issued to remove an asset from the Algorand ledger. To destroy an existing asset on Algorand, the original `creator` must be in possession of all units of the asset and the `manager` must send and authorize the transaction. 
Here is what an example asset destroy transaction looks like: ```json { "txn": { "caid": 168103, "fee": 1000, "fv": 7000000, "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 7001000, "snd": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "type": "acfg" } } ``` This transaction differentiates itself from an **Asset Creation** transaction in that it contains an **asset ID** (`caid`) pointing to the asset to be destroyed. It differentiates itself from an **Asset Reconfiguration** transaction by the *lack* of any asset parameters. To destroy an asset in your code, use the following examples: ## Asset Transfer Transaction An Asset Transfer Transaction enables accounts to opt in to, transfer, or revoke assets. ### Opt-in to an Asset Here is an example of an opt-in transaction: ```json { "txn": { "arcv": "QC7XT7QU7X6IHNRJZBR67RBMKCAPH67PCSX4LYH4QKVSQ7DQZ32PG5HSVQ", "fee": 1000, "fv": 6631154, "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 6632154, "snd": "QC7XT7QU7X6IHNRJZBR67RBMKCAPH67PCSX4LYH4QKVSQ7DQZ32PG5HSVQ", "type": "axfer", "xaid": 168103 } } ``` The `"type": "axfer"` identifies this as an asset transfer transaction. This specific transaction is an opt-in because the same address (`"QC7XT7..."`) appears as both sender and receiver, and the sender has no prior holdings of asset ID `168103`. No asset amount is specified. The transaction is valid on TestNet between rounds 6,631,154 and 6,632,154. 
To opt in to an asset in your code, you can use the following examples: ### Opt-out of an Asset Here is an example of an opt-out transaction: ```json { "txn": { "aclose": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "arcv": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "fee": 1000, "fv": 6633154, "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 6634154, "snd": "QC7XT7QU7X6IHNRJZBR67RBMKCAPH67PCSX4LYH4QKVSQ7DQZ32PG5HSVQ", "type": "axfer", "xaid": 168103 } } ``` This is an asset transfer transaction (`"type": "axfer"`) that removes the asset from the sender’s account. The `"aclose"` field specifies where any remaining asset balance will be transferred before closing. After this transaction, the sender’s minimum balance requirement will be reduced and they will no longer be able to receive the asset without opting in again. To opt out of an asset in your code, you can use the following examples: ### Transfer an Asset Here is an example of an asset transfer transaction. This type of transaction moves ASAs between accounts, requiring both a valid asset ID and that the receiving account has already opted in to the asset: ```json { "txn": { "aamt": 1000000, "arcv": "QC7XT7QU7X6IHNRJZBR67RBMKCAPH67PCSX4LYH4QKVSQ7DQZ32PG5HSVQ", "fee": 3000, "fv": 7631196, "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 7632196, "snd": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "type": "axfer", "xaid": 168103 } } ``` In this example, sender `"EW64GC6..."` transfers 1 million base units (10,000.00 units) of asset `168103` to `"QC7XT7..."`, who must have already opted in to the asset. The transaction is valid on TestNet between rounds 7,631,196 and 7,632,196, with a fee of 3,000 microAlgos. To transfer an asset in your code, you can use the following examples: ### Revoke an Asset The clawback address has the unique authority to transfer assets from any holder of the asset to any other address. 
This feature can be used for asset recovery. Here is an example of a clawback transaction: ```json { "txn": { "aamt": 500000, "arcv": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "asnd": "QC7XT7QU7X6IHNRJZBR67RBMKCAPH67PCSX4LYH4QKVSQ7DQZ32PG5HSVQ", "fee": 1000, "fv": 7687457, "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 7688457, "snd": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "type": "axfer", "xaid": 168103 } } ``` The `"asnd"` field indicates this is a clawback transaction. The clawback address (`"EW64GC..."`) initiates the transaction, pays the 1,000 microAlgo fee, and specifies the account (`"QC7XT7..."`) from which to revoke the assets. This transaction will transfer 500,000 base units of asset `168103` from `"QC7XT7..."` to `"EW64GC..."`. To revoke an asset in your code, you can use the following examples: ## Asset Freeze Transaction An Asset Freeze Transaction allows the designated freeze address to control whether a specific account can transfer or receive a particular asset. When an asset is frozen, the affected account cannot send or receive that asset until it is unfrozen. ### Freeze an Asset ```json { "txn": { "afrz": true, "fadd": "QC7XT7QU7X6IHNRJZBR67RBMKCAPH67PCSX4LYH4QKVSQ7DQZ32PG5HSVQ", "faid": 168103, "fee": 1000, "fv": 7687793, "gh": "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=", "lv": 7688793, "snd": "EW64GC6F24M7NDSC5R3ES4YUVE3ZXXNMARJHDCCCLIHZU6TBEOC7XRSBG4", "type": "afrz" } } ``` This transaction is identified by `"type": "afrz"`. The freeze manager (`"EW64GC..."`) sets `"afrz": true` to freeze asset `168103` for account `"QC7XT7..."`. Setting `"afrz": false` would unfreeze the asset instead. To freeze an asset in your code, you can use the following examples: ## Application Call Transaction An Application Call Transaction interacts with a smart contract (application) on the Algorand blockchain. 
These transactions allow users to create new applications, execute application logic, manage application state, and control user participation in the application. Each call includes an AppId to identify the target application and an OnComplete method that determines the type of interaction. Application Call transactions may include other fields needed by the logic such as: **ApplicationArgs** - To pass arbitrary arguments to an application (or in the future to call an ABI method) **Accounts** - To pass accounts that may require some balance checking or opt-in status **ForeignApps** - To pass apps and allow state access to an external application (or in the future to call an ABI method) **ForeignAssets** - To pass ASAs for parameter checking **Boxes** - To pass references to Application Boxes so the AVM can access the contents ### Application Create Transaction To create a new application, the transaction must include the Approval and Clear programs along with the state schema, but no AppId. The OnComplete method defaults to NoOp. The approval program can verify it’s being called during creation by checking if AppId equals 0. ```json { "txn": { "apap": "BYEB", "apgs": { "nbs": 1, "nui": 1 }, "apls": { "nbs": 1, "nui": 1 }, "apsu": "BYEB", "fee": 1000, "fv": 12774, "gh": "ALXYc8IX90hlq7olIdloOUZjWfbnA3Ix1N5vLn81zI8=", "lv": 13774, "note": "poeVkF5j4MU=", "snd": "FOZF4CMXLU2KDWJ5QARE3J2U7XELSXL7MWGNWUHD7OPKGQAI4GPSMGNLCE", "type": "appl" } } ``` This transaction contains the following key components: * The `"apap"` (Approval) and `"apsu"` (Clear) programs contain the minimal program `#pragma version 5; int 1` * Both global and local state schemas (`"apgs"` and `"apls"`) specify one byte slice and one integer * The transaction uses the default NoOp for OnComplete, so the `"apan"` field is omitted When this transaction is confirmed, it creates a new application with a unique AppId that can be referenced in subsequent calls. 
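Putting the creation rules together: a create call is recognisable by what it lacks (an `apid`) and what it must supply (both programs). A sketch over the JSON shape above — `is_app_create` is a hypothetical helper for illustration, not an SDK API:

```python
# Sketch: recognising an application *create* call by its shape.

def is_app_create(txn: dict) -> bool:
    """A create call has no "apid" (the AppId does not exist yet, so the
    approval program sees AppId == 0) and supplies both programs."""
    return (
        txn.get("type") == "appl"
        and "apid" not in txn            # no target application yet
        and "apap" in txn                # approval program
        and "apsu" in txn                # clear-state program
    )

create_txn = {"type": "appl", "apap": "BYEB", "apsu": "BYEB",
              "apgs": {"nbs": 1, "nui": 1}, "apls": {"nbs": 1, "nui": 1}}
assert is_app_create(create_txn)
assert not is_app_create({"type": "appl", "apid": 51})
```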
To create an application in your code, you can use the following examples: ### Application Update Transaction An Application Update Transaction modifies an existing application’s logic by providing new Approval and Clear programs. Only the current application’s Approval Program can authorize this update. ```json { "txn": { "apan": 4, "apap": "BYEB", "apid": 51, "apsu": "BYEB", "fee": 1000, "fv": 12973, "gh": "ALXYc8IX90hlq7olIdloOUZjWfbnA3Ix1N5vLn81zI8=", "lv": 13973, "note": "ATFKEwKGqLk=", "snd": "FOZF4CMXLU2KDWJ5QARE3J2U7XELSXL7MWGNWUHD7OPKGQAI4GPSMGNLCE", "type": "appl" } } ``` This transaction contains the following key components: * The `"apid"` field specifies the application to update (51) * The `"apan"` field is set to UpdateApplication (4) * New Approval and Clear programs are provided in `"apap"` and `"apsu"` fields To update an application in your code, you can use the following examples: ### Application Delete Transaction An Application Delete Transaction removes an application from the Algorand blockchain. The transaction can only succeed if the application’s Approval Program permits deletion. ```json { "txn": { "apan": 5, "apid": 51, "fee": 1000, "fv": 13555, "gh": "ALXYc8IX90hlq7olIdloOUZjWfbnA3Ix1N5vLn81zI8=", "lv": 14555, "note": "V/RAbQ57DnI=", "snd": "FOZF4CMXLU2KDWJ5QARE3J2U7XELSXL7MWGNWUHD7OPKGQAI4GPSMGNLCE", "type": "appl" } } ``` This transaction contains the following key components: * The `"apid"` field specifies the application to delete (51) * The `"apan"` field is set to DeleteApplication (5) To delete an application in your code, you can use the following examples: ### Application Opt-In Transaction An Application Opt-In Transaction enables an account to participate in an application by allocating local state. This transaction is only required if the application uses local state for the account. 
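Across these application call sections, two encodings recur: the numeric `apan` OnComplete codes, and base64-encoded byte fields such as application arguments. A stdlib sketch that decodes both — the helper name is hypothetical, and the sample values come from the NoOp example later in this section:

```python
import base64

# The "apan" codes map to OnComplete actions; 0 (NoOp) is the omitted default.
ON_COMPLETE = {
    0: "NoOp",
    1: "OptIn",
    2: "CloseOut",
    3: "ClearState",
    4: "UpdateApplication",
    5: "DeleteApplication",
}

def describe_app_call(txn: dict) -> tuple:
    """Return (OnComplete name, decoded application argument bytes)."""
    name = ON_COMPLETE[txn.get("apan", 0)]           # absent "apan" means NoOp
    args = [base64.b64decode(a) for a in txn.get("apaa", [])]
    return name, args

name, args = describe_app_call({"apaa": ["ZG9jcw==", "AAAAAAAAAAE="]})
assert name == "NoOp"
assert args[0] == b"docs"                            # string argument
assert int.from_bytes(args[1], "big") == 1           # big-endian uint64
```

For instance, the opt-in transaction shown next carries `"apan": 1`, i.e. OptIn.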
```json { "txn": { "apan": 1, "apid": 51, "fee": 1000, "fv": 13010, "gh": "ALXYc8IX90hlq7olIdloOUZjWfbnA3Ix1N5vLn81zI8=", "lv": 14010, "note": "SEQpWAYkzoU=", "snd": "LNTMAFSF43V7RQ7FBBRAWPXYZPVEBGKPNUELHHRFMCAWSARPFUYD2A623I", "type": "appl" } } ``` This transaction contains the following key components: * The `"apid"` field specifies the application to opt into (51) * The `"apan"` field is set to OptIn (1) To opt into an application in your code, you can use the following examples: ### Application Close Out Transaction An Application Close Out transaction is used when an account wants to opt out of a contract gracefully and remove its local state from its balance record. This transaction *may* fail according to the logic in the Approval program. ```json { "txn": { "apan": 2, "apid": 51, "fee": 1000, "fv": 13166, "gh": "ALXYc8IX90hlq7olIdloOUZjWfbnA3Ix1N5vLn81zI8=", "lv": 14166, "note": "HFL7S60gOdM=", "snd": "LNTMAFSF43V7RQ7FBBRAWPXYZPVEBGKPNUELHHRFMCAWSARPFUYD2A623I", "type": "appl" } } ``` This transaction contains the following key components: * The AppId (`apid`) is set to the app being closed out of (51 here) * The OnComplete (`apan`) is set to CloseOut (2) ### Application Clear State Transaction An Application Clear State Transaction forcibly removes an account’s local state. Unlike Close Out, this transaction always succeeds if properly formatted. The application’s Clear Program handles any necessary cleanup when removing the account’s state. 
```json { "txn": { "apan": 3, "apid": 51, "fee": 1000, "fv": 13231, "gh": "ALXYc8IX90hlq7olIdloOUZjWfbnA3Ix1N5vLn81zI8=", "lv": 14231, "note": "U93ZQy24zJ0=", "snd": "LNTMAFSF43V7RQ7FBBRAWPXYZPVEBGKPNUELHHRFMCAWSARPFUYD2A623I", "type": "appl" } } ``` This transaction contains the following key components: * The AppId (`apid`) is set to the app being cleared (51 here) * The OnComplete (`apan`) is set to ClearState (3) To clear an application’s state in your code, you can use the following examples: ### Application NoOp Transaction Application NoOp Transactions are the most common type of application calls. They execute the application’s logic without changing its lifecycle state. The application’s behavior is determined by the arguments and references provided in the transaction. ```json { "txn": { "apaa": ["ZG9jcw==", "AAAAAAAAAAE="], "apas": [16], "apat": ["4RLXQGPZVVRSXQF4VKZ74I6BCUD7TUVROOUBCVRKY37LQSHXORZV4KCAP4"], "apfa": [10], "apbx": [{ "i": 51, "n": "Y29vbF9ib3g=" }], "apid": 51, "fee": 1000, "fv": 13376, "gh": "ALXYc8IX90hlq7olIdloOUZjWfbnA3Ix1N5vLn81zI8=", "lv": 14376, "note": "vQXvgqySYPY=", "snd": "LNTMAFSF43V7RQ7FBBRAWPXYZPVEBGKPNUELHHRFMCAWSARPFUYD2A623I", "type": "appl" } } ``` This transaction contains the following key components: * The `"apid"` field specifies the application to call (51) * The `"apaa"` field contains application arguments: the string “docs” and the integer 1 * The `"apat"` field references an external account * The `"apas"` field references ASA ID 16 * The `"apfa"` field references application ID 10 * The `"apbx"` field references a box named “cool\_box” owned by the application * The OnComplete method defaults to NoOp (0), so the `"apan"` field is omitted To make a NoOp call to an application in your code, you can use the following examples: # State Proof Transaction State Proof Transactions are special consensus-related transactions that are generated by the network itself. 
They cannot be created by individual users or smart contracts. ```json { "txn": { "txn": { "fv": 24192139, "gh": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=", "lv": 24193139, "snd": "XM6FEYVJ2XDU2IBH4OT6VZGW75YM63CM4TC6AV6BD3JZXFJUIICYTVB5EU", "sp": {}, "spmsg": { "P": 2230170, "b": "8LkpbqSqlWcsfUr9EgpxBmrTDqQBg2tcubN7cpcFRM8=", "f": 24191745, "l": 24192000, "v": "drLLvXcg+sOqAhYIjqatF68QP7TeR0B/NljKtOtDit7Hv5Hk7gB9BgI5Ijz+tkmDkRoblcchwYDJ1RKzbapMAw==" }, "type": "stpf" } } } ``` # Heartbeat Transaction A Heartbeat Transaction allows validator nodes to signal they are operational, even when they haven’t proposed blocks recently. These transactions are particularly important for accounts with smaller stakes that might not frequently propose blocks. ```json { "fee": 0, "first-valid": 46514101, "last-valid": 46514111, "sender": "XM6FEYVJ2XDU2IBH4OT6VZGW75YM63CM4TC6AV6BD3JZXFJUIICYTVB5EU", "genesis-hash": "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8=", "heartbeat-transaction": { "hb-address": "LNTMAFSF43V7RQ7FBBRAWPXYZPVEBGKPNUELHHRFMCAWSARPFUYD2A623I", "hb-key-dilution": 1733, "hb-proof": { "hb-pk": "fS6sjbqtRseLgoRuWf3mJMWMJA6hZ1TemZCAmFg62SU=", "hb-pk1sig": "NQC4OxD01CAog8VPee0lZHLkJhvCK8FHqgqrjlHgtyGVxJBfmFSGrvRyd7BXXBpXqtz2gmiRiwsOPi9kuOXvDA==", "hb-pk2": "Oar7xcoAnGtGEicTlx864JiCVQS+GQIDNlt37MiCWa8=", "hb-pk2sig": "YWXDN49q4s5Wywyn6ZDi5yu13wCHICW5YH9wc3tnOqmlz/tAlXvX5GO0ePz6FyTTIgqQp1SheLQopNpME43yAA==", "hb-sig": "aMp1kUFzBAGcnUXo7dqko3BtiWi9624hj4Vu8un1cjDU0s4CAk69gxuaagxITd5rZla1Zaf+iX63DknMaIIXAA==" }, "hb-seed": "H3u5wO+W/QvGxSr9h0Oz14rV0WFJ/le5hbi/2OvafzY=", "hb-vote-id": "puFs2yVgp6oGrOU5DFs1QWkCk/S/cB7GMs/f9bx0gW8=" }, "tx-type": "hb" } ``` We know this is a heartbeat transaction based on the `"tx-type"` field being set to `"hb"`. 
This transaction contains the following required fields: * The `"hb-address"` field specifies the account this transaction is proving onlineness for * The `"hb-key-dilution"` field specifies the key dilution value that must match the account’s current KeyDilution * The `"hb-seed"` field contains the block seed for this transaction’s firstValid block * The `"hb-vote-id"` field contains the vote ID that must match the account’s current VoteID * The `"hb-proof"` field contains the heartbeat proof structure The transaction fee is zero when responding to a network challenge.
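The required-field list above can be checked mechanically. A sketch using the field names exactly as they appear in the example (the helper is hypothetical, not an SDK API):

```python
# Sketch: verify a heartbeat transaction carries the required fields.
REQUIRED_HB_FIELDS = {
    "hb-address", "hb-key-dilution", "hb-proof", "hb-seed", "hb-vote-id",
}

def missing_heartbeat_fields(txn: dict) -> set:
    """Return the required heartbeat fields absent from the transaction."""
    if txn.get("tx-type") != "hb":
        raise ValueError("not a heartbeat transaction")
    return REQUIRED_HB_FIELDS - txn["heartbeat-transaction"].keys()
```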
# URI Scheme
The Algorand URI specification defines a standardized format for applications and websites to encode transaction requests and wallet information in URIs. These URIs can be used in deeplinks, QR codes, and other interfaces. The specification is based on Bitcoin’s BIP-21 to maintain familiarity and compatibility with existing systems. ## Specifications ### General format Algorand URIs follow the general format for URIs specified in RFC 3986. The path component consists of an Algorand address, and the query component provides additional payment options. Elements of the query component may contain characters outside the valid range. These must first be encoded according to UTF-8, and then each octet of the corresponding UTF-8 sequence must be percent-encoded as described in RFC 3986. ### ABNF Grammar ```shell algorandurn = "algorand://" algorandaddress [ "?" algorandparams ] algorandaddress = *base32 algorandparams = algorandparam [ "&" algorandparams ] algorandparam = [ amountparam / labelparam / noteparam / assetparam / otherparam ] amountparam = "amount=" *digit labelparam = "label=" *qchar assetparam = "asset=" *digit noteparam = (xnote | note) xnote = "xnote=" *qchar note = "note=" *qchar otherparam = qchar *qchar [ "=" *qchar ] ``` Here, `qchar` corresponds to valid characters of an RFC 3986 URI query component, excluding the `=` and `&` characters, which this specification takes as separators. The scheme component `algorand:` is case-insensitive, and implementations must accept any combination of uppercase and lowercase letters. The rest of the URI is case-sensitive, including the query parameter keys. ### Query Keys * **`label`**: Label for that address (e.g. name of receiver) * **`address`**: Algorand address (if missing, sender address will be used as receiver.) * **`xnote`**: A URL-encoded notes field value that must not be modifiable by the user when displayed to users. 
* **`note`**: A URL-encoded default notes field value that the user interface may optionally make editable by the user. * **`amount`**: microAlgo or smallest unit of asset * **`asset`**: The asset id this request refers to (if Algo, simply omit this parameter) * **`otherparam`**: optional, for future extensions ### Transfer Amount/Size If an amount is provided, it MUST be specified in the basic unit of the asset. For example, if it’s Algo (Algorand’s native token), the amount MUST be specified in microAlgo. All amounts MUST be non-negative integers and MUST NOT contain commas or decimal points. Examples: * 100 Algo = 100000000 microAlgo * 54.1354 Algo = 54135400 microAlgo Algorand clients should display amounts in whole Algo by default. When needed, microAlgo can be shown as well, but the units must always be clearly indicated to the user. ## Appendix This section contains several examples: * Address ```shell algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4 ``` * Address with label ```shell algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?label=Silvio ``` * Request 150.5 Algo from an address ```shell algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?amount=150500000 ``` * Request 150 units of Asset ID 45 from an address ```shell algorand://TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4?amount=150&asset=45 ``` * Opt-in request for Asset ID 37 ```shell algorand://?amount=0&asset=37 ```
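A small builder can exercise the grammar and query keys above: free-text values are percent-encoded per RFC 3986, and amounts are validated as non-negative integers of base units. This is an illustrative sketch — the function name and parameter handling are assumptions, not an official library:

```python
from urllib.parse import quote

def algorand_uri(address="", *, amount=None, asset=None, label=None,
                 note=None, xnote=None):
    """Build an algorand:// URI per the grammar above (illustrative sketch)."""
    params = []
    if amount is not None:
        # amounts are whole base units: microAlgo for Algo, base units for ASAs
        if not isinstance(amount, int) or amount < 0:
            raise ValueError("amount must be a non-negative integer of base units")
        params.append(f"amount={amount}")
    if asset is not None:
        params.append(f"asset={asset}")
    for key, value in (("label", label), ("xnote", xnote), ("note", note)):
        if value is not None:
            # percent-encode UTF-8 octets, keeping "=" and "&" as separators
            params.append(key + "=" + quote(value, safe=""))
    return "algorand://" + address + ("?" + "&".join(params) if params else "")

addr = "TMTAD6N22HCS2LKH7677L2KFLT3PAQWY6M4JFQFXQS32ECBFC23F57RYX4"
assert algorand_uri(addr, label="Silvio") == f"algorand://{addr}?label=Silvio"
assert algorand_uri(addr, amount=150_500_000).endswith("?amount=150500000")
assert algorand_uri(amount=0, asset=37) == "algorand://?amount=0&asset=37"
```

The final assertion reproduces the opt-in request example from the appendix.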
# Library Documentation
> Browse documentation for Algorand libraries, SDKs, and tools.
Browse guides and API documentation for AlgoKit development libraries, SDKs, CLI tools, and REST APIs. Each library has its own documentation section with version-specific content. ## SDKs & Utilities Utilities for building solutions on Algorand ## CLI Tools The AlgoKit command-line interface ## Smart Contract Languages Build smart contracts on Algorand using Python Build smart contracts on Algorand using TypeScript ## Libraries & Tools Subscribe to Algorand blockchain events Tools for running Algorand nodes
# AlgoKit CLI
> The one-stop shop tool for developers building on the Algorand network
## Installation ## Quick Start ```bash # Verify installation algokit --version # Create a new project from a template algokit init # Start a local Algorand network algokit localnet start # Explore the network with lora algokit explore ``` ## Features Init Quickly scaffold new Algorand projects from official templates or community templates. LocalNet Spin up a local isolated Algorand network for development and testing. Compile Compile Algorand Python and other smart contract languages to AVM bytecode. Deploy Deploy smart contracts to LocalNet, TestNet or MainNet with a single command. Tasks Perform blockchain operations: send, sign, mint, transfer assets and more. Generate Generate typed clients and other code artifacts for your Algorand project.
# AlgoKit sandbox approach
* **Status**: Approved * **Owner:** Rob Moore * **Deciders**: Anne Kenyon (Algorand Inc.), Alessandro Cappellato (Algorand Foundation), Will Winder (Algorand Inc.) * **Date created**: 2022-11-14 * **Date decided:** 2022-11-14 * **Date updated**: 2022-11-16 ## Context In order for AlgoKit to facilitate a productive development experience it needs to provide a managed Algorand sandbox experience. This allows developers to run an offline (local-only) private instance of Algorand that they can privately experiment with, run automated tests against and reset at will. ## Requirements * The sandbox works cross-platform (i.e. runs natively on Windows, Mac and Linux) * You can spin up algod and indexer since both have useful use cases when developing * The sandbox is kept up to date with the latest version of algod / indexer * There is access to KMD so that you can programmatically fund accounts to improve the developer experience and reduce manual effort * There is access to the tealdbg port outside of algod so you can attach a debugger to it * The sandbox is isolated and (once running) works offline so the workload is private, allows development when there is no internet (e.g. when on a plane) and allows for multiple instances to be run in parallel (e.g. when developing multiple independent projects simultaneously) * Works in continuous integration and local development environments so you can facilitate automated testing ## Principles * \- specifically Seamless onramp, Leverage existing ecosystem, Meet devs where they are * **Lightweight** - the solution should have as low an impact as possible on resources on the developers machine * **Fast** - the solution should start quickly, which makes for a nicer experience locally and also allows it to be used for continuous integration automation testing ## Options ### Option 1 - Pre-built DockerHub images Pre-built application developer-optimised DockerHub images that work cross-platform; aka an evolved AlgoKit version of . 
**Pros** * It’s quick to download the images and quick to start the container since you don’t need to compile Algod / indexer and the images are optimised for small size * The only dependency needed is Docker, which is a fairly common dependency for most developers to use these days * The images are reasonably lightweight * The images provide an optimised application developer experience with: (devmode) algod, KMD, tealdbg, indexer * It natively works cross-platform **Cons** * Some people have reported problems running WSL 2 on a small proportion of Windows environments (to get the latest Docker experience) * Docker within Docker can be a problem in some CI environments that run agents on Docker in the first place * Work needs to be done to create an automated CI/CD that automatically releases new versions to keep it up to date with latest algod/indexer versions ### Option 2 - Lightweight algod client implementation Work with the Algorand Inc. team to get a lightweight algod client that can run outside of a Docker container cross-platform. **Pros** * Likely to be the most lightweight and fastest option - opening up better/easier isolated/parallelised automated testing options * Wouldn’t need Docker as a dependency **Cons** * Indexer wouldn’t be supported (Postgres would require Docker anyway) * Algorand Inc. does not distribute Windows binaries. ### Option 3 - Sandbox Use the existing . 
**Pros** * Implicitly kept up to date with Algorand - no extra thing to maintain * Battle-tested by the core Algorand team day-in-day-out * Supports all environments including unreleased feature branches (because it can target a git repo / commit hash) **Cons** * Sandbox is designed for network testing, not application development - it’s much more complex than the needs of application developers * Slow to start because it has to download and build algod and indexer (this is particularly problematic for ephemeral CI/CD build agents) * It’s not cross-platform (it requires bash to run sandbox.sh, although a sandbox.ps1 version could be created) ## Preferred option Option 1 and Option 2. Option 1 provides a fully-featured experience that will work great in most scenarios; having Option 2 as a second option would open up more advanced parallel automated testing scenarios in addition to that. ## Selected option Option 1 We’re aiming to release the first version of AlgoKit within a short timeframe, which won’t give time for Option 2 to be developed. Sandbox itself has been ruled out since it’s not cross-platform and is too slow for both development and continuous integration. Option 1 also provides a similar result to running Sandbox, so existing Algorand documentation, libraries and approaches should work well with this option, making it a good slot-in replacement for Sandbox for application developers. AlgoKit is designed to be modular: we can add in other approaches over time such as Option 2 when/if it becomes available.
# Beaker testing strategy
* **Status**: Draft * **Owner:** Rob Moore * **Deciders**: Anne Kenyon (Algorand Inc.), Alessandro Cappellato (Algorand Foundation), Michael Diamant (Algorand Inc.), Benjamin Guidarelli (Algorand Foundation) * **Date created**: 2022-11-22 * **Date decided:** TBD * **Date updated**: 2022-11-28 ## Context AlgoKit will be providing a smart contract development experience built on top of Beaker and PyTEAL. Beaker is currently in a pre-production state and needs to be productionised to provide confidence for use in generating production-ready smart contracts by AlgoKit users. One of the key things to resolve for the productionisation of Beaker is to improve the automated test coverage. Beaker itself is currently split into the PyTEAL generation related code and the deployment and invocation related code (including interacting with Sandbox). This decision is solely focussed on the PyTEAL generation components of Beaker. The current automated test coverage of this part of the codebase is \~50% and is largely based on compiling and/or executing smart contracts against Algorand Sandbox. While it’s generally not best practice to try and chase a specific code coverage percentage, a coverage of \~80%+ is likely indicative of good coverage in a dynamic language such as Python. The Sandbox tests provide a great deal of confidence, but are also slow to execute, which can potentially impair the Beaker development and maintenance experience, especially as the coverage % is grown and/or features are added over time. Beaker, like PyTEAL, can be considered to be a transpiler on top of TEAL. When generating smart contracts, the individual TEAL opcodes are significant, since security audits will often consider the impact at that level, and it can have impacts on (limited!) resource usage of the smart contract. As such, “output stability” is potentially an important characteristic to test for. 
## Requirements * We have a high degree of confidence that writing smart contracts in Beaker leads to expected results for production smart contracts * We have reasonable regression coverage so features are unlikely to break as new features and refactorings are added over time * We have a level of confidence in the “output stability” of the TEAL code generated from a Beaker smart contract ## Principles * **Fast development feedback loops** - The feedback loop during normal development should be as fast as possible to improve the development experience of developing Beaker itself * **Low overhead** - The overhead of writing and maintaining tests is as low as possible; tests should be quick to read and write * **Implementation decoupled** - Tests aren’t testing the implementation details of Beaker, but rather the user-facing experience and output of it; this reduces the likelihood of needing to rewrite tests when performing refactoring of the codebase ## Options ### Option 1: TEAL Approval tests Writing approval tests of the TEAL output generated from a given Beaker smart contract. 
**Pros** * Ensures TEAL output stability and focussing on asserting the output of Beaker rather than testing whether Algorand Protocol is working * Runs in-memory/in-process so will execute in low 10s of milliseconds making it easy to provide high coverage with low developer feedback loop overhead * Tests are easy to write - the assertion is a single line of code (no tedious assertions) * The tests go from Beaker contract -> TEAL approval so don’t bake implementation detail and thus allow full Beaker refactoring with regression confidence without needing to modify the tests * Excellent regression coverage characteristics - fast test run and quick to write allows for high coverage and anchoring assertions to TEAL output is a very clear regression marker **Cons** * The tests rely on the approver to understand the TEAL opcodes that are emitted and verify they match the intent of the Beaker contract - anecdotally this can be difficult at times even for experienced (Py)TEAL developers * Doesn’t assert the correctness of the TEAL output, just that it matches the previously manually approved output ### Option 2: Sandbox compile tests Writing Beaker smart contracts and checking that the TEAL output successfully compiles against algod. **Pros** * Ensures that the TEAL output compiles, giving some surety about the intactness of it and focussing on asserting the output of Beaker rather than testing whether Algorand Protocol is working * Faster than executing the contract * Tests are easy to write - the assertion is a single line of code (no tedious assertions) **Cons** * Order of magnitude slower than asserting TEAL output (out of process communication) * Doesn’t assert the correctness of the TEAL output, just that it compiles ### Option 3: Sandbox execution tests Execute the smart contracts and assert the output is as expected. This can be done using dry run and/or actual transactions. 
**Pros** * Asserts that the TEAL output *executes* correctly giving the highest confidence * Doesn’t require the test writer to understand the TEAL output * Tests don’t bake implementation detail and do assert on output so give a reasonable degree of refactoring confidence without modifying tests **Cons** * Tests are more complex to write * Tests take an order of magnitude longer to run than just compilation (two orders of magnitude to run than checking TEAL output) * Harder to get high regression coverage since it’s slower to write and run the tests making it impractical to get full coverage * Doesn’t ensure output stability * Is testing that the Algorand Protocol itself works (TEAL `x` when executed does `y`) so the testing scope is broader than just Beaker itself ## Preferred option Option 1 (combined with Option 2 to ensure the approved TEAL actually compiles, potentially only run on CI by default to ensure fast local dev loop) for the bulk of testing to provide a rapid feedback loop for developers as well as ensuring output stability and great regression coverage. 
## Selected option Combination of option 1, 2 and 3: * While Option 1 + 2 provides high confidence with fast feedback loop, it relies on the approver being able to determine the TEAL output does what they think it does, which isn’t always the case * Option 3 will be used judiciously to provide that extra level of confidence that the fundamentals of the Beaker output are correct for each main feature; this will involve key scenarios being tested with execution based tests, the goal isn’t to get combinatorial coverage, which would be slow and time-consuming, but to give a higher degree of confidence * The decision of when to use Option 3 as well as Option 1+2 will be made on a per-feature basis and reviewed via pull request, over time a set of principles may be able to be revised that outline a clear delineation * Use of PyTest markers to separate execution so by default the dev feedback loop is still fast, but the full suite is always run against pull requests and merges to main
# Beaker productionisation review
* **Status**: Approved * **Owners:** Rob Moore, Adam Chidlow * **Deciders**: Anne Kenyon (Algorand Inc.), Alessandro Cappellato (Algorand Foundation), Jason Weathersby (Algorand Foundation), Benjamin Guidarelli (Algorand Foundation), Bob Broderick (Algorand Inc.) * **Date created**: 2023-01-11 * **Date decided:** 2023-02-04 * **Date updated**: 2023-02-04 ## Context Beaker is a smart contract development framework for Algorand that provides a wrapper over PyTEAL and focusses on providing a great developer experience through terse, expressive language constructs and making common tasks easier. Beaker is useful because it creates a higher level programming construct from PyTEAL that is easier to get started with when learning and results in code that is terser and easier to read and write. Beaker is an important part of AlgoKit. It helps create a more seamless onramp to Algorand development by providing an easier starting point for developers. As part of the lead up to releasing AlgoKit, it was desired to perform a v1.0 release of Beaker and explicitly mark it as being production ready. In order to provide confidence, a productionisation review was conducted; this document summarises the recommendations from that review. An architecture decision was made in the lead up to this review. ## Goal The goals of this productionisation review are to: * Get Beaker ready for production use * Gain confidence in Beaker’s software architecture and maintainability * Reduce the likelihood of need for breaking changes soon after release by getting key recommended breaking changes identified now ## Findings summary The Beaker codebase is well factored and had decent initial test coverage (albeit some of that coverage is via a series of examples that, while they provide high code coverage, don’t actually validate all of the functionality). 
A series of changes have already landed to improve some of the fundamentals of Beaker in preparation for the production launch:

* Improved test coverage; improved dev experience (setup + ongoing) via Poetry; improvements to the code quality setup (linting, automatic formatting, typing); allowed Windows development on Beaker itself; significantly improved CI/CD pipeline speed; removed the examples directory and tests from being distributed with the PyPI package

In addition, there is a remaining set of more major (breaking) changes that are recommended. The recommendations are split into two categories: recommendations for immediate improvement (i.e. included in v1.0) and future suggestions that can be addressed post v1.0 launch.

The recommended areas for immediate improvement are:

* **Replace the class-based structure with an instance-based one** - remove some areas of potential surprise for developers and simplify the Beaker codebase by moving to a composable instance-based structure rather than a static class-based structure
* **Defer PyTEAL compilation** - improve flexibility and future contract output stability by deferring PyTEAL compilation (i.e. Beaker -> TEAL transpilation) so that it doesn't happen when the Beaker contract is initialised
* **Renamings** - there are some parameters that clearly make sense to rename for various reasons
* **Key decorator improvements** - refactor some of the Beaker decorators to fix some bugs and improve user experience
* **Beaker state refactor** - refactor the Beaker state interfaces to improve user extensibility and significantly simplify the Beaker codebase to improve maintainability

The recommended areas for future improvement are:

* Typed client generation from the app spec to improve deploy-time and run-time dev experience
* `Tmpl` values in the app spec so you can have type-safe deployment clients that substitute any template values reliably at contract deploy time
* Refactor storage types (blob, reserved, etc.)
to allow use of in-built Python types and operators (terser, more intuitive)
* Box storage implementation improved to match local/global behaviour and also to automatically delete itself on contract deletion
* Composable and stackable authorization, and `@authorize` as a standalone decorator
* PyTEAL typings improved to support types beyond `Expr` where a more explicit type can be specified (improves typing and extensibility)
* Support referencing an app/lsig via ID/address (deployed separately, potentially automatically as part of reading a Directed Acyclic Graph (DAG) of application dependencies in application.json) or bytes (deployed inline, what was previously called precompile; note this would be deploy-time substitution, not smart contract run-time substitution like `TemplateVariable`). This may also allow precompile to be deprecated (it's a very complex implementation for what we believe to be an advanced edge case)

## Immediate recommendations

### (1) Replace the class-based structure with an instance-based one

#### What?

Beaker is currently structured around users sub-classing the `beaker.Application` class. Subclasses hold state variables (from `beaker.state.*`) and contain methods which are forwarded to the `pyteal.abi.Router` instance created during `Application.compile(...)`, based on decorators from `beaker.decorators.*`.

We propose replacing this with an "instance-based structure", drawing inspiration from highly popular Python web frameworks such as `flask`. This change will simplify Beaker's code (improving maintainability) and, more importantly, reduce the potential for end-user error and confusion.

#### Why?

**User-facing benefits**

1. The current structure is a potential source of confusion for users new to writing smart contracts or PyTEAL.
The distinction between what runs on `beaker.Application` instantiation, what is evaluated by PyTEAL during compile, and finally what runs on-chain can be difficult to grasp at first. One might assume (wrongly) that Beaker is somehow maintaining the state of `self.*` between methods, but this is not the case. Contrast this with Solidity, for example, where state can be directly manipulated because it's held within the class instance. 2. Currently, actually using `self.*` can easily lead to problems, since values that are not defined before calling `super().__init__(...)` won't be defined when compiling. This can be fixed for simple constants by not automatically compiling in `Application.__init__()` (which is also proposed in (2) below); however, another issue is that using `self.foo =
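To make the proposed shift concrete, here is a minimal plain-Python sketch of the flask-style registration pattern described above. All names here (`Application`, `external`, `abi_methods`) are hypothetical and chosen for illustration; they are not the actual Beaker API.

```python
# Sketch of an instance-based structure: the app is an object you create and
# register methods against, rather than a class you subclass. Names are
# illustrative only, not the real Beaker API.
from typing import Callable, Dict


class Application:
    def __init__(self, name: str) -> None:
        self.name = name
        self.abi_methods: Dict[str, Callable] = {}

    def external(self, fn: Callable) -> Callable:
        # Explicit registration replaces subclass scanning, so there is no
        # implicit behaviour at class-definition time and no self.* state for
        # the user to reason about between methods.
        self.abi_methods[fn.__name__] = fn
        return fn


app = Application("HelloBeaker")


@app.external
def hello(name: str) -> str:
    return f"Hello, {name}!"
```

The key property of this pattern is that everything that happens at registration time is visible in user code (`app.external(...)`), which removes the surprise described above about what runs on instantiation versus compile.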