# Oasis Documentation > Official Oasis developer documentation. This file contains all documentation content in a single document following the llmstxt.org standard. ## Cross-Chain Key Generation (EVM / Base) This chapter shows how to build a tiny TypeScript app that **generates a secp256k1 key inside ROFL** using the **`@oasisprotocol/rofl-client` TypeScript SDK** (which talks to the [appd REST API] under the hood), derives an **EVM address**, **signs** messages, **deploys a contract**, and **sends** EIP-1559 transactions on **Base Sepolia**. We use a simple **smoke test** that prints to logs. [appd REST API]: ../rofl/features/appd.md ## Prerequisites This guide requires: - **Node.js 20+** and **Docker** (or Podman). - **Oasis CLI** and at least **120 TEST** tokens in your wallet (use [Oasis Testnet faucet]). - Some Base Sepolia test ETH (use [Base Sepolia faucet]) to test sending ETH. Check [Quickstart Prerequisites] for setup details. [Quickstart Prerequisites]: ../rofl/quickstart#prerequisites [Oasis Testnet faucet]: https://faucet.testnet.oasis.io [Base Sepolia faucet]: https://docs.base.org/base-chain/tools/network-faucets ## Init App Initialize a new app using the [Oasis CLI]: ```shell oasis rofl init rofl-keygen cd rofl-keygen ``` ## Create App Create the app on Testnet (100 TEST deposit): ```shell oasis rofl create --network testnet ``` The CLI prints the **App ID** (e.g., `rofl1...`). ## Init a Hardhat (TypeScript) project ```shell npx hardhat init ``` When prompted, **choose TypeScript** and accept the defaults. Now add the small runtime deps we use outside of Hardhat: ```shell npm i @oasisprotocol/rofl-client ethers dotenv @types/node npm i -D tsx ``` Using Hardhat’s TypeScript template, it already created a `tsconfig.json`. Add the following so our app code compiles to `dist/`: ```json // tsconfig.json { "compilerOptions": { "rootDir": "./src", "outDir": "./dist" }, "include": ["src"] } ``` ## App structure We'll add a few small TS files and one Solidity contract: ``` src/ ├── appd.ts # thin wrapper over @oasisprotocol/rofl-client ├── evm.ts # ethers helpers (provider, wallet, tx, deploy) ├── keys.ts # tiny helpers (checksum) └── scripts/ ├── deploy-contract.ts # generic deploy script for compiled artifacts └── smoke-test.ts # end-to-end demo (logs) contracts/ └── Counter.sol # sample contract ``` ### `src/appd.ts` — thin wrapper over the SDK Use the official client to talk to `appd` (UNIX socket) and keep an explicit **local‑dev fallback** when running outside ROFL. src/appd.ts ```ts import {existsSync} from 'node:fs'; import { RoflClient, KeyKind, ROFL_SOCKET_PATH } from '@oasisprotocol/rofl-client'; const client = new RoflClient(); // UDS: /run/rofl-appd.sock export async function getAppId(): Promise { return client.getAppId(); } /** * Generates (or deterministically re-derives) a secp256k1 key inside ROFL and * returns it as a 0x-prefixed hex string (for ethers.js Wallet). * * Local development ONLY (outside ROFL): If the socket is missing and you set * ALLOW_LOCAL_DEV=true and LOCAL_DEV_SK=0x<64-hex>, that value is used. */ export async function getEvmSecretKey(keyId: string): Promise { if (existsSync(ROFL_SOCKET_PATH)) { const hex = await client.generateKey(keyId, KeyKind.SECP256K1); return hex.startsWith('0x') ? hex : `0x${hex}`; } const allow = process.env.ALLOW_LOCAL_DEV === 'true'; const pk = process.env.LOCAL_DEV_SK; if (allow && pk && /^0x[0-9a-fA-F]{64}$/.test(pk)) return pk; throw new Error( 'rofl-appd socket not found and no LOCAL_DEV_SK provided (dev only).' 
); } ``` ### `src/evm.ts` — ethers helpers src/evm.ts ```ts import { JsonRpcProvider, Wallet, parseEther, type TransactionReceipt, ContractFactory } from "ethers"; export function makeProvider(rpcUrl: string, chainId: number) { return new JsonRpcProvider(rpcUrl, chainId); } export function connectWallet( skHex: string, rpcUrl: string, chainId: number ): Wallet { const w = new Wallet(skHex); return w.connect(makeProvider(rpcUrl, chainId)); } export async function signPersonalMessage(wallet: Wallet, msg: string) { return wallet.signMessage(msg); } export async function sendEth( wallet: Wallet, to: string, amountEth: string ): Promise { const tx = await wallet.sendTransaction({ to, value: parseEther(amountEth) }); const receipt = await tx.wait(); if (receipt == null) { throw new Error("Transaction dropped or replaced before confirmation"); } return receipt; } export async function deployContract( wallet: Wallet, abi: any[], bytecode: string, args: unknown[] = [] ): Promise<{ address: string; receipt: TransactionReceipt }> { const factory = new ContractFactory(abi, bytecode, wallet); const contract = await factory.deploy(...args); const deployTx = contract.deploymentTransaction(); const receipt = await deployTx?.wait(); await contract.waitForDeployment(); if (!receipt) { throw new Error("Deployment TX not mined"); } return { address: contract.target as string, receipt }; } ``` ### `src/keys.ts` — tiny helpers src/keys.ts ```ts import { Wallet, getAddress } from "ethers"; export function secretKeyToWallet(skHex: string): Wallet { return new Wallet(skHex); } export function checksumAddress(addr: string): string { return getAddress(addr); } ``` ### `src/scripts/smoke-test.ts` — single end‑to‑end flow This script prints the App ID (inside ROFL), address, a signed message, waits for funding, and deploys the counter contract. src/scripts/smoke-test.ts ```ts import "dotenv/config"; import { readFileSync } from "node:fs"; import { join } from "node:path"; import { getAppId, getEvmSecretKey } from "../appd.js"; import { secretKeyToWallet, checksumAddress } from "../keys.js"; import { makeProvider, signPersonalMessage, sendEth, deployContract } from "../evm.js"; import { formatEther, JsonRpcProvider } from "ethers"; const RPC_URL = process.env.BASE_RPC_URL ?? "https://sepolia.base.org"; const CHAIN_ID = Number(process.env.BASE_CHAIN_ID ?? "84532"); const KEY_ID = process.env.KEY_ID ?? "evm:base:sepolia"; function sleep(ms: number): Promise { return new Promise((r) => setTimeout(r, ms)); } async function waitForFunding( provider: JsonRpcProvider, addr: string, minWei: bigint = 1n, timeoutMs = 15 * 60 * 1000, pollMs = 5_000 ): Promise { const start = Date.now(); while (Date.now() - start < timeoutMs) { const bal = await provider.getBalance(addr); if (bal >= minWei) return bal; console.log(`Waiting for funding... current balance=${formatEther(bal)} ETH`); await sleep(pollMs); } throw new Error("Timed out waiting for funding."); } async function main() { const appId = await getAppId().catch(() => null); console.log(`ROFL App ID: ${appId ?? "(unavailable outside ROFL)"}`); const sk = await getEvmSecretKey(KEY_ID); // NOTE: This demo trusts the configured RPC provider. For production, prefer a // light client (for example, Helios) so you can verify remote chain state. 
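  // Reminder (see the "Security & notes" section below): never log `sk` or any
  // other key material, since machine logs are stored unencrypted on the ROFL node.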
const wallet = secretKeyToWallet(sk).connect(makeProvider(RPC_URL, CHAIN_ID)); const addr = checksumAddress(await wallet.getAddress()); console.log(`EVM address (Base Sepolia): ${addr}`); const msg = "hello from rofl"; const sig = await signPersonalMessage(wallet, msg); console.log(`Signed message: "${msg}"`); console.log(`Signature: ${sig}`); const provider = wallet.provider as JsonRpcProvider; let bal = await provider.getBalance(addr); if (bal === 0n) { console.log("Please fund the above address with Base Sepolia ETH to continue."); bal = await waitForFunding(provider, addr); } console.log(`Balance detected: ${formatEther(bal)} ETH`); const artifactPath = join(process.cwd(), "artifacts", "contracts", "Counter.sol", "Counter.json"); const artifact = JSON.parse(readFileSync(artifactPath, "utf8")); if (!artifact?.abi || !artifact?.bytecode) { throw new Error("Counter artifact missing abi/bytecode"); } const { address: contractAddress, receipt: deployRcpt } = await deployContract(wallet, artifact.abi, artifact.bytecode, []); console.log(`Deployed Counter at ${contractAddress} (tx=${deployRcpt.hash})`); console.log("Smoke test completed successfully!"); } main().catch((e) => { console.error(e); process.exit(1); }); ``` ### `contracts/Counter.sol` — minimal sample contracts/Counter.sol ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.24; contract Counter { uint256 private _value; event Incremented(uint256 v); event Set(uint256 v); function current() external view returns (uint256) { return _value; } function inc() external { unchecked { _value += 1; } emit Incremented(_value); } function set(uint256 v) external { _value = v; emit Set(v); } } ``` ### `src/scripts/deploy-contract.ts` — generic deployer src/scripts/deploy-contract.ts ```ts import "dotenv/config"; import { readFileSync } from "node:fs"; import { getEvmSecretKey } from "../appd.js"; import { secretKeyToWallet } from "../keys.js"; import { makeProvider, deployContract } from "../evm.js"; const KEY_ID = process.env.KEY_ID ?? "evm:base:sepolia"; const RPC_URL = process.env.BASE_RPC_URL ?? "https://sepolia.base.org"; const CHAIN_ID = Number(process.env.BASE_CHAIN_ID ?? "84532"); /** * Usage: * npm run deploy-contract -- ./artifacts/MyContract.json '[arg0, arg1]' * The artifact must contain { abi, bytecode }. */ async function main() { const [artifactPath, ctorJson = "[]"] = process.argv.slice(2); if (!artifactPath) { console.error("Usage: npm run deploy-contract -- '[constructorArgsJson]'"); process.exit(2); } const artifactRaw = readFileSync(artifactPath, "utf8"); const artifact = JSON.parse(artifactRaw); const { abi, bytecode } = artifact ?? {}; if (!abi || !bytecode) { throw new Error("Artifact must contain { abi, bytecode }"); } let args: unknown[]; try { args = JSON.parse(ctorJson); if (!Array.isArray(args)) throw new Error("constructor args must be a JSON array"); } catch (e) { throw new Error(`Failed to parse constructor args JSON: ${String(e)}`); } const sk = await getEvmSecretKey(KEY_ID); // NOTE: This demo trusts the configured RPC provider. For production, prefer a // light client (for example, Helios) so you can verify remote chain state. 
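  // The account derived from KEY_ID must already hold Base Sepolia ETH for gas;
  // unlike the smoke test, this script does not wait for the address to be funded.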
const wallet = secretKeyToWallet(sk).connect(makeProvider(RPC_URL, CHAIN_ID)); const { address, receipt } = await deployContract(wallet, abi, bytecode, args); console.log(JSON.stringify({ contractAddress: address, txHash: receipt.hash, status: receipt.status }, null, 2)); } main().catch((e) => { console.error(e); process.exit(1); }); ``` ## Hardhat (contracts only) Minimal config to compile `Counter.sol`: hardhat.config.ts ```ts import type { HardhatUserConfig } from "hardhat/config"; const config: HardhatUserConfig = { solidity: { version: "0.8.24", settings: { optimizer: { enabled: true, runs: 200 } } }, paths: { sources: "./contracts", artifacts: "./artifacts", cache: "./cache" } }; export default config; ``` Compile locally (optional). Delete the existing `contracts/Lock.sol` file or update it to Solidity version `0.8.24`. ```shell npx hardhat compile ``` ## Containerize Add a Dockerfile that builds TS and compiles the contract, runs the **smoke test** once, then idles so you can inspect logs. Dockerfile ```dockerfile FROM node:20-alpine WORKDIR /app COPY package.json package-lock.json* ./ RUN npm ci COPY tsconfig.json ./ COPY src ./src COPY contracts ./contracts COPY hardhat.config.ts ./ RUN npm run build && npx hardhat compile && npm prune --omit=dev ENV NODE_ENV=production CMD ["sh", "-c", "node dist/scripts/smoke-test.js || true; tail -f /dev/null"] ``` Mount the **appd socket** provided by ROFL. No public ports are exposed. compose.yaml ```yaml services: demo: image: docker.io/YOURUSER/rofl-keygen:0.1.0 platform: linux/amd64 environment: - KEY_ID=${KEY_ID:-evm:base:sepolia} - BASE_RPC_URL=${BASE_RPC_URL:-https://sepolia.base.org} - BASE_CHAIN_ID=${BASE_CHAIN_ID:-84532} volumes: - /run/rofl-appd.sock:/run/rofl-appd.sock ``` ## Build the image ROFL only runs on Intel TDX-enabled hardware so don't forget to pass the `--platform linux/amd64` parameter if you're compiling images on a different host (e.g. macOS): ```shell docker buildx build --platform linux/amd64 \ -t docker.io/YOURUSER/rofl-keygen:0.1.0 --push . ``` For extra security and verifiability pin the digest and use `image: ...@sha256:...` in `compose.yaml`. ## Build ROFL bundle Before running the `oasis rofl build` command, make sure to update the `services.demo.image` in `compose.yaml` to the image you built. For TypeScript projects, image size may be larger, update the `rofl.yaml` `resources` section to at least: `memory: 1024` and `storage.size: 4096`. ```shell oasis rofl build ``` ```shell docker run --platform linux/amd64 --volume .:/src \ -it ghcr.io/oasisprotocol/rofl-dev:main oasis rofl build ``` Then publish the enclave identities and config: ```shell oasis rofl update ``` ## Deploy Deploy to a Testnet provider: ```shell oasis rofl deploy ``` ## End‑to‑end (Base Sepolia) 1. **View smoke‑test logs** ```shell oasis rofl machine logs ``` You should see: * App ID * EVM address and a signed message * A prompt to fund the address * After funding: a Counter.sol deployment 2. **Local dev (optional)** Run `npm run build:all` to compile the TypeScript code and the Solidity contract. ```shell export ALLOW_LOCAL_DEV=true export LOCAL_DEV_SK=0x<64-hex-dev-secret-key> # DO NOT USE IN PROD npm run smoke-test ``` ## Security & notes - **Never** log secret keys. Provider logs are not encrypted at rest. - The appd socket `/run/rofl-appd.sock` exists **only inside ROFL**. - Public RPCs may rate‑limit; prefer a dedicated Base RPC URL. That’s it! 
You generated a key in ROFL with **appd**, signed messages, deployed a contract, and moved ETH on Base Sepolia. Key Generation Demo You can fetch a complete example shown in this chapter from https://github.com/oasisprotocol/demo-rofl-keygen. --- ## Trustless Price Oracle This chapter will show you how to quickly create, build and test a minimal containerized ROFL-powered app that authenticates and communicates with a confidential smart contract on [Oasis Sapphire]. [Oasis Sapphire]: https://github.com/oasisprotocol/sapphire-paratime/blob/main/docs/README.mdx ## Prerequisites This guide requires: - a working Docker (or Podman), - **Oasis CLI** and at least **120 TEST** tokens in your wallet (use [Oasis Testnet faucet]). Check out the [Quickstart Prerequisites] section for details. [Quickstart Prerequisites]: ../rofl/quickstart.mdx#prerequisites [Oasis Testnet faucet]: https://faucet.testnet.oasis.io ## Init App First we init the basic directory structure for the app using the Oasis CLI: ```shell oasis rofl init rofl-price-oracle cd rofl-price-oracle ``` ## Create App Now create an app on Testnet (requires deposit of 100 TEST): ```shell oasis rofl create --network testnet ``` After successful creation, the CLI will also output the new identifier, for example: ``` Created ROFL application: rofl1qqn9xndja7e2pnxhttktmecvwzz0yqwxsquqyxdf ``` ## Oracle Contract While we are using [EVM-based smart contracts] in this example, the on-chain part can be anything from a [WASM-based smart contract] to a dedicated [runtime module]. [EVM-based smart contracts]: https://github.com/oasisprotocol/docs/blob/main/docs/build/sapphire/README.mdx [WASM-based smart contract]: https://github.com/oasisprotocol/docs/blob/main/docs/build/tools/other-paratimes/cipher/README.mdx [runtime module]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/runtime/modules.md We will use the following smart contract: Oracle.sol ```solidity pragma solidity >=0.8.9 <=0.8.24; import {Subcall} from "@oasisprotocol/sapphire-contracts/contracts/Subcall.sol"; contract Oracle { // Maximum age of observations. uint private constant MAX_OBSERVATION_AGE = 10; // Configuration. uint8 public threshold; bytes21 public roflAppID; // Observations. struct Observation { uint128 value; uint block; } uint128[] private observations; Observation private lastObservation; constructor(bytes21 _roflAppID, uint8 _threshold) { require(_threshold > 0, "Invalid threshold"); roflAppID = _roflAppID; threshold = _threshold; lastObservation.value = 0; lastObservation.block = 0; } function submitObservation(uint128 _value) external { // Ensure only the authorized ROFL app can submit. Subcall.roflEnsureAuthorizedOrigin(roflAppID); // NOTE: This is a naive oracle implementation for ROFL example purposes. // A real oracle must do additional checks and better aggregation before // accepting values. // Add observation and check if we have enough for this round. observations.push(_value); if (observations.length < threshold) { return; } // Simple averaging. uint256 _agg = 0; for (uint i = 0; i < observations.length; i++) { _agg += uint256(observations[i]); } _agg = _agg / uint128(observations.length); lastObservation.value = uint128(_agg); lastObservation.block = block.number; delete observations; } function getLastObservation() external view returns (uint128 _value, uint _block) { // Last observation must be fresh enough, otherwise we don't disclose it. 
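        // Note: MAX_OBSERVATION_AGE is expressed in blocks, not seconds.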
require( lastObservation.block + MAX_OBSERVATION_AGE > block.number, "No observation available" ); _value = lastObservation.value; _block = lastObservation.block; } } ``` This contract collects observations from **authenticated application on ROFL**, performs trivial aggregation and stores the final aggregated result. Read the [Sapphire quickstart] chapter to learn how to build and deploy smart contracts on Sapphire, but to get you up and running for this part, simply copy the `oracle` folder from the [example project], install dependencies and compile the smart contract by executing: ```shell cd oracle npm install npx hardhat compile ``` Then configure the `PRIVATE_KEY` of the **deployment account** and the **app ID** you received in the previous step. Then deploy the contract by running: ```shell PRIVATE_KEY="0xYOUR_PRIVATE_KEY" \ npx hardhat deploy YOUR_APP_ID --network sapphire-testnet ``` After successful deployment you will see a message like: ``` Oracle for ROFL app rofl1qqn9xndja7e2pnxhttktmecvwzz0yqwxsquqyxdf deployed to 0x1234845aaB7b6CD88c7fAd9E9E1cf07638805b20 ``` Remember the address where the oracle contract was deployed to as you will need it in the next step. [Oracle.sol]: https://github.com/oasisprotocol/oasis-sdk/blob/main/examples/runtime-sdk/rofl-oracle/oracle/contracts/Oracle.sol [example project]: https://github.com/oasisprotocol/demo-rofl [Sapphire quickstart]: https://github.com/oasisprotocol/sapphire-paratime/blob/main/docs/quickstart.mdx ## Oracle Worker in Container Inside `docker` folder add a [simple shell script] which downloads price quotes from a centralized exchange (Binance in our case) and sends it to our contract using the [appd REST API][appd]. app.sh ```shell #!/bin/sh while true; do # Fetch a recent price from Binance. price=$(curl -s "https://www.binance.com/api/v3/ticker/price?symbol=${TICKER}" | jq '(.price | tonumber) * 1000000 | trunc') if [ -z "$price" ]; then sleep 15 continue fi # Format calldata to call submitObservation(uint128) method with the price. price_u128=$(printf '%064x' ${price}) method="dae1ee1f" # Keccak4("submitObservation(uint128)") data="${method}${price_u128}" # Submit it to the Sapphire contract. curl -s \ --json '{"tx": {"kind": "eth", "data": {"gas_limit": 200000, "to": "'${CONTRACT_ADDRESS}'", "value": 0, "data": "'${data}'"}}}' \ --unix-socket /run/rofl-appd.sock \ http://localhost/rofl/v1/tx/sign-submit >/dev/null # Sleep for a while. sleep 60 done ``` Similarly, inside the `docker` folder add a `Dockerfile` that copies over the shell script to a container: Dockerfile ```dockerfile FROM docker.io/alpine:3.21.2 # Add some dependencies. RUN apk add --no-cache curl jq # The entire application is defined as a shell script. ADD app.sh /app.sh ENTRYPOINT ["/app.sh"] ``` [appd]: ../rofl/features/appd.md [simple shell script]: https://github.com/oasisprotocol/demo-rofl/blob/main/docker/app.sh ## Compose Add a `compose.yaml` to the root of your project: compose.yaml ```yaml services: oracle: build: ./docker image: docker.io/YOUR_USERNAME/rofl-price-oracle:latest platform: linux/amd64 environment: # Address of the oracle contract deployed on Sapphire Testnet. - CONTRACT_ADDRESS=YOUR_CONTRACT_ADDRESS # Ticker. - TICKER=ROSEUSDT volumes: - /run/rofl-appd.sock:/run/rofl-appd.sock ``` Now build and push the image to a registry ```shell docker compose build docker compose push ``` For extra security, you can **[pin the image digest]** inside `compose.yaml`. 
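For example, a digest-pinned reference replaces the mutable tag (the image name and digest below are placeholders; use the digest reported for the image you actually pushed):

```yaml
services:
  oracle:
    image: docker.io/YOUR_USERNAME/rofl-price-oracle@sha256:<digest-of-your-pushed-image>
```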
[pin the image digest]: ../rofl/workflow/containerize-app.mdx#pin-your-image-hash ## Build To build an app and update the enclave identity in the app manifest, run: ```shell oasis rofl build ``` ```shell docker run --platform linux/amd64 --volume .:/src -it ghcr.io/oasisprotocol/rofl-dev:main oasis rofl build ``` This will generate the ROFL bundle which can be used for later deployment and output something like: ``` ROFL app built and bundle written to 'rofl-price-oracle.default.orc'. ``` [other features]: features/ ## Update On-chain App Config The on-chain app config needs to be updated in order for the changes to take effect: ```shell oasis rofl update ``` ## Deploy to ROFL provider Deploy the price oracle to one of the ROFL providers: ```shell oasis rofl deploy ``` By default, the provider maintained by the Oasis foundation will be picked. ## Check That the Oracle Contract is Getting Updated To check whether the oracle is actually working, you can use the prepared `oracle-query` task in the Hardhat project. Simply run: ```shell cd oracle npx hardhat oracle-query 0x1234845aaB7b6CD88c7fAd9E9E1cf07638805b20 --network sapphire-testnet ``` And you should get an output like the following: ``` Using oracle contract deployed at 0x1234845aaB7b6CD88c7fAd9E9E1cf07638805b20 ROFL app: rofl1qqn9xndja7e2pnxhttktmecvwzz0yqwxsquqyxdf Threshold: 1 Last observation: 63990 Last update at: 656 ``` That's it! Your first ROFL oracle that authenticates with an Oasis Sapphire smart contract is running! 🎉 Price Oracle Demo You can fetch a complete example shown in this chapter from https://github.com/oasisprotocol/demo-rofl. --- ## Private Telegram Chat Bot This chapter shows you how to build a simple Telegram bot that will run inside ROFL. Along the way you will meet one of the most powerful ROFL features—how to safely store your bot's Telegram API token inside a built-in ROFL key-store protected by the Trusted Execution Environment and the Oasis blockchain! ## Prerequisites This guide requires: - a working python (>3.9) - a working Docker (or Podman), - **Oasis CLI** and at least **120 TEST** tokens in your wallet (use [Oasis Testnet faucet]). Check out the [Quickstart Prerequisites] section for details. [Quickstart Prerequisites]: ../rofl/quickstart#prerequisites [Oasis Testnet faucet]: https://faucet.testnet.oasis.io ## Init App First we init the basic directory structure for the app using the Oasis CLI: ```shell oasis rofl init rofl-tgbot cd rofl-tgbot ``` ## Create App Now create an app on Testnet (requires deposit of 100 TEST): ```shell oasis rofl create --network testnet ``` After successful creation, the CLI will also output the new identifier, for example: ``` Created ROFL application: rofl1qqn9xndja7e2pnxhttktmecvwzz0yqwxsquqyxdf ``` ## Python Telegram Bot Use a simple [python-telegram-bot] wrapper. As a good python citizen create a new folder for your project. 
Then, set up a python virtual environment and properly install the `python-telegram-bot` dependency: ```shell python -m venv my_env source my_env/bin/activate echo python-telegram-bot > requirements.txt pip install -r requirements.txt ``` Create a file called `bot.py` and paste the following bot logic that greets us back after greeting it with the `/hello` command: bot.py ```python import os from telegram import Update from telegram.ext import ApplicationBuilder, CommandHandler, ContextTypes async def hello(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: await update.message.reply_text(f'Hello {update.effective_user.first_name}') app = ApplicationBuilder().token(os.getenv("TOKEN")).build() app.add_handler(CommandHandler("hello", hello)) app.run_polling() ``` Next, generate a Telegram API token for our bot. Search for `@BotFather` in your Telegram app and start a chat with the `/newbot` command. Then, you'll need to input the name and a username of your bot. Finally, `@BotFather` will provide you a token that resembles something like `0123456789:AAGax-vgGmQsRiwf4WIQI4xq8MMf4WaQI5x`. As you may have noticed our bot above will read its Telegram API token from the `TOKEN` *environment variable*. Since we'll use this variable throughout the tutorial, let's export it for our session and then we can run our bot: ```shell export TOKEN="0123456789:AAGax-vgGmQsRiwf4WIQI4xq8MMf4WaQI5x" python bot.py ``` The bot should be up and running now, so you can search for its username in your Telegram app and send it a `/hello` message: [Image: Python Telegram Bot] [python-telegram-bot]: https://pypi.org/project/python-telegram-bot/ ## Containerize the Bot Create [`Dockerfile`] which copies over the python script to an alpine linux with installed python: Dockerfile ```dockerfile FROM python:alpine3.17 WORKDIR /bot COPY ./bot.py ./requirements.txt /bot RUN pip install -r requirements.txt CMD ["python", "bot.py"] ``` Then add [`compose.yaml`] which simply spins up the container image from above: compose.yaml ```yaml services: python-telegram-bot: build: . image: docker.io/YOUR_USERNAME/rofl-tgbot platform: linux/amd64 environment: - TOKEN=${TOKEN} ``` [`Dockerfile`]: https://github.com/oasisprotocol/demo-rofl-tgbot/blob/main/Dockerfile [`compose.yaml`]: https://github.com/oasisprotocol/demo-rofl-tgbot/blob/main/compose.yaml ## Build ROFL Bundle To build a ROFL bundle invoke the following: ```shell oasis rofl build ``` ```shell docker run --platform linux/amd64 --volume .:/src -it ghcr.io/oasisprotocol/rofl-dev:main oasis rofl build ``` ## Secrets Do you recall the `TOKEN` environment variable we exported above? Now, we will encrypt it and safely store it on-chain, so that it will be fed to our bot container once it's started on one of the TEE provider's nodes: ```shell echo -n "$TOKEN" | oasis rofl secret set TOKEN - ``` To submit this secret and the signatures (*enclave IDs*) of our .orc bundle components on-chain run: ```shell oasis rofl update ``` ## Deploy Finally, we deploy our ROFL bundle to a Testnet node instance offered by one of the ROFL providers: ```shell oasis rofl deploy ``` Congratulations, you have just deployed your first app in ROFL! 🎉 Go ahead and test it by sending the `/hello` message in the Telegram app. You can also check out your app on the [Oasis Explorer]: [Image: Oasis Explorer - ROFL] ROFL Telegram Bot You can fetch a finished project of this tutorial from GitHub [here][demo-rofl-tgbot]. 
[demo-rofl-tgbot]: https://github.com/oasisprotocol/demo-rofl-tgbot [Oasis Explorer]: https://explorer.oasis.io/testnet/sapphire/rofl/app/rofl1qpjsc3qplf2szw7w3rpzrpq5rqvzv4q5x5j23msu --- ## Trustless AI Agent Learn how to deploy a trustless Eliza agent on Oasis using ROFL enclaves. ## What You’ll Build By the end you will have a working Eliza agent running inside a ROFL Trusted Execution Environment (TEE), registered and validated as a trustless agent in the [ERC-8004] registry. The agent's code can be fully audited and proved that the deployed instance really originates from it and cannot be silently altered. [ERC-8004]: https://eips.ethereum.org/EIPS/eip-8004 ## Prerequisites You will need: - **Docker** (or Podman) with credentials on docker.io, ghcr.io or other public OCI registry - **Oasis CLI** and at least **120 TEST** tokens in your wallet (use [Oasis Testnet faucet]). - **Node.js 22+** (for Eliza and helper scripts) - **OpenAI** API key - **RPC URL** for accessing the ERC-8004 registry (e.g. Infura) - **Pinata JWT** for storing agent information to IPFS Check [Quickstart Prerequisites] for setup details. [Quickstart Prerequisites]: ../rofl/quickstart#prerequisites [Oasis Testnet faucet]: https://faucet.testnet.oasis.io ## Create an Eliza Agent Initialize a project using the ElizaOS CLI and prepare it for ROFL. ```shell # Install bun and ElizaOS CLI bun --version || curl -fsSL https://bun.sh/install | bash bun install -g @elizaos/cli # Create and configure the agent elizaos create -t project rofl-eliza # 1) Select Pqlite database # 2) Select the OpenAI model and enter your OpenAI key # Test the agent locally cd rofl-eliza elizaos start # Visiting http://localhost:3000 with your browser should open Eliza UI ``` ## Containerize the App and the ERC-8004 wrapper The Eliza agent startup wizard already generated `Dockerfile` that packs your agent into a container and `docker-compose.yaml` that orchestrates the `postgres` and `elizaos` containers. Edit `docker-compose.yaml` with the following changes: 1. In the PostgreSQL section replace relative `image: ankane/pgvector:latest` with `image: docker.io/ankane/pgvector:latest`. 2. Name our `elizaos` image with a corresponding absolute path, e.g. `image: docker.io/YOUR_USERNAME/elizaos:latest` 3. Make our Eliza agent registered as a trustless agent in the ERC-8004 registry. Paste the following [`rofl-8004`] snippet that will register it for us (keep the environment variables mapping!): ```yaml title="docker-compose.yaml" rofl-8004: image: ghcr.io/oasisprotocol/rofl-8004@sha256:f57373103814a0ca4c0a03608284451221b026e695b0b8ce9ca3d4153819a349 platform: linux/amd64 environment: - RPC_URL=${RPC_URL} - PINATA_JWT=${PINATA_JWT} volumes: - /run/rofl-appd.sock:/run/rofl-appd.sock ``` Build and push: ```shell docker compose build docker compose push ``` For full verifiability pin the digest by appending `image: ...@sha256:...` in `docker-compose.yaml` to all images. [`rofl-8004`]: https://github.com/oasisprotocol/erc-8004 ## Init ROFL and Create App The agent will run in a container inside a TEE. ROFL will handle the startup attestation of the container and the secrets in form of environment variables. This way TEE will be completely transparent to the Eliza agent app. ```shell oasis rofl init oasis rofl create --network testnet ``` Inspect on-chain activity and app details in the [Oasis Explorer]. ## Build ROFL bundle Eliza requires at least 2 GiB of memory and 10 GB of storage. 
First, update the `resources` section in `rofl.yaml` accordingly: ```yaml title="rofl.yaml" resources: memory: 2048 cpus: 1 storage: kind: disk-persistent size: 10000 ``` Then, build the ROFL bundle by invoking: ```shell oasis rofl build ``` ```shell docker run --platform linux/amd64 --volume .:/src \ -it ghcr.io/oasisprotocol/rofl-dev:main oasis rofl build ``` ## Secrets Let's end-to-end encrypt `OPENAI_API_KEY` and store it on-chain. Also, provide the `RPC_URL` and `PINATA_JWT` values for ERC-8004 registration. ```shell echo -n "" | oasis rofl secret set OPENAI_API_KEY - echo -n "https://sepolia.infura.io/v3/" | oasis rofl secret set RPC_URL - echo -n "" | oasis rofl secret set PINATA_JWT - ``` Then store the secrets and previously built enclave identities on-chain: ```shell oasis rofl update ``` ## Deploy Deploy your Eliza agent to a ROFL provider by invoking: ```shell oasis rofl deploy ``` By default, the Oasis-maintained provider is selected on Testnet that lends you a node for 1 hour. You can extend the rental, for example by 4 hours by invoking `oasis rofl machine top-up --term hour --term-count 4` [command][deploy]. [deploy]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#deploy ## Trying it out After deploying the agent, use the CLI to check, if the agent is running: ```shell # Show machine details (state, proxy URLs, expiration). oasis rofl machine show ``` If the agent successfully booted up, the `Proxy:` section contains the URL where your agent is accessible on, for example: ``` Proxy: Domain: m1058.opf-testnet-rofl-25.rofl.app Ports from compose file: 3000 (elizaos): https://p3000.m1058.opf-testnet-rofl-25.rofl.app ``` In the example above, our app is accessible at https://p3000.m1058.opf-testnet-rofl-25.rofl.app. ## ERC-8004 Registration and Validation When spinning up the agent for the first time, the `rofl-8004` service will derive the ethereum address for registering the agent. You will need to fund that account with a small amount of ether to pay for the fees. Fetch your app logs: ```shell oasis rofl machine logs ``` Then look for `Please top it up` line which contains the derived address. After funding it, your agent will automatically be registered and validated. Logs are accessible to the app admin and are stored **unencrypted on the ROFL node**. Avoid printing secrets! Trustless Agent Demo You can fetch a complete example shown in this chapter from https://github.com/oasisprotocol/demo-trustless-agent. [machine-logs]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#machine-logs [sdk-deploy-logs]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/rofl/workflow/deploy.md#check-that-the-app-is-running [Oasis Explorer]: https://explorer.oasis.io/testnet/sapphire --- ## Runtime Off-Chain Logic (ROFL) *ROFL-powered apps* are applications running on Oasis nodes inside a *Trusted Execution Environment (TEE)* that are managed through the Oasis Sapphire blockchain. 
ROFL supports: - **Docker-like containers** or **single-executable** apps depending on your TCB demand and threat model - **Privacy and integrity** through Intel SGX/TDX including fully auditable history of updates - **Uncensorable** registration, management and deployment of your app on a permissionless pool of ROFL nodes **including billing** - **Built-in Key Management Service** (KMS) for storing your app secrets and secure derivation of keys within TEE - **Integration with Oasis Sapphire** enables EVM-compatible smart contracts to verify the ROFL transaction origin [Image: ROFL diagram] ROFL powers private trading and chat bots, provable AI learning, price oracles, home automation, VPNs and fair gaming! ## Build Your Application for ROFL Developers can easily wrap their existing apps into a ROFL-powered app! [Prerequisites]: ./workflow/prerequisites.mdx ## See also [Oasis Runtime SDK]: https://github.com/oasisprotocol/oasis-sdk/tree/main/runtime-sdk --- ## `appd` REST API Each containerized app running in ROFL runs a special daemon (called `rofl-appd`) that exposes additional functions via a simple HTTP REST API. In order to make it easier to isolate access, the API is exposed via a UNIX socket located at `/run/rofl-appd.sock` which can be passed to containers via volumes. An example using the [short syntax for Compose volumes][compose-volumes]: ```yaml services: mycontainer: # ... other details omitted ... volumes: - /run/rofl-appd.sock:/run/rofl-appd.sock ``` The following sections describe the available endpoints. UNIX sockets and HTTP headers Although the communication with `rofl-appd` is through UNIX sockets, the REST service still uses the HTTP protocol. In place of a host name you can provide any name. In our examples, we stick to the `http://localhost/` format. [compose-volumes]: https://docs.docker.com/reference/compose-file/services/#short-syntax-5 ## App Identifier This endpoint can be used to retrieve the app ID. **Endpoint:** `/rofl/v1/app/id` (`GET`) **Example response:** ``` rofl1qqn9xndja7e2pnxhttktmecvwzz0yqwxsquqyxdf ``` ## Key Generation Each registered app automatically gets access to a decentralized on-chain key management system. All generated keys can only be generated inside properly attested app instances and will remain the same even in case the app is deployed somewhere else or its state is erased. **Endpoint:** `/rofl/v1/keys/generate` (`POST`) **Example request:** ```json { "key_id": "demo key", "kind": "secp256k1" } ``` **Request fields:** - `key_id` is used for domain separation of different keys (e.g. a different key id will generate a completely different key). - `kind` defines what kind of key should be generated. The following values are currently supported: - `raw-256` to generate 256 bits of entropy. - `raw-386` to generate 384 bits of entropy. - `ed25519` to generate an Ed25519 private key. - `secp256k1` to generate a Secp256k1 private key. **Example response:** ```json { "key": "a54027bff15a8726b6d9f65383bff20db51c6f3ac5497143a8412a7f16dfdda9" } ``` The generated `key` is returned as a hexadecimal string. ## Authenticated Transaction Submission An app running in ROFL can also submit _authenticated transactions_ to the chain where it is registered at. The special feature of these transactions is that they are signed by an **endorsed ephemeral key** and are therefore automatically authenticated as coming from the app itself. 
This makes it possible to easily authenticate the transaction origin in smart contracts by simply invoking an [appropriate subcall]: ```solidity Subcall.roflEnsureAuthorizedOrigin(roflAppID); ``` [appropriate subcall]: https://api.docs.oasis.io/sol/sapphire-contracts/contracts/Subcall.sol/library.Subcall.html#roflensureauthorizedorigin **Endpoint:** `/rofl/v1/tx/sign-submit` (`POST`) **Example request:** ```json { "tx": { "kind": "eth", "data": { "gas_limit": 200000, "to": "1234845aaB7b6CD88c7fAd9E9E1cf07638805b20", "value": 0, "data": "dae1ee1f00000000000000000000000000000000000000000000000000002695a9e649b2" } } } ``` **Request fields:** - `tx` describes the transaction content with different transaction kinds being supported (as defined by the `kind` field): - Ethereum-compatible calls (`eth`) use standard fields (`gas_limit`, `to`, `value` and `data`) to define the transaction content. - Oasis SDK calls (`std`) support CBOR-serialized hex-encoded `Transaction`s to be specified. - `encrypt` is a boolean flag specifying whether the transaction should be encrypted. By default this is `true`. Note that encryption is handled transparently for the caller using an ephemeral key and any response is first decrypted before being passed on. **Example response:** Inside `data` the JSON response contains a CBOR-serialized hex-encoded [call result]. To investigate it you will need to deserialize it first. For example: - Successful call result: ```json { "data": "a1626f6b40" } ``` deserialized as `{"ok": ''}`. - Unsusccessful call result: ```json { "data": "a1646661696ca364636f646508666d6f64756c656365766d676d6573736167657272657665727465643a20614a416f4c773d3d" } ``` deserialized as `{"fail": {"code": 8, "module": "evm", "message": "reverted: aJAoLw=="}}`. [call result]: https://api.docs.oasis.io/rust/oasis_runtime_sdk/types/transaction/enum.CallResult.html ## Replica Metadata Replica metadata allows apps to publish arbitrary key-value pairs that are included in the on-chain ROFL replica registration. This metadata is automatically namespaced with `net.oasis.app.` when published on-chain. ### Get Metadata Retrieve all user-set metadata key-value pairs. **Endpoint:** `/rofl/v1/metadata` (`GET`) **Example response:** ```json { "key_fingerprint": "a54027bff15a8726", "version": "1.0.0" } ``` ### Set Metadata Set metadata key-value pairs. This replaces all existing app-provided metadata and will trigger a registration refresh if the metadata has changed. **Endpoint:** `/rofl/v1/metadata` (`POST`) **Example request:** ```json { "key_fingerprint": "a54027bff15a8726", "version": "1.0.0" } ``` **Note:** Metadata is validated against runtime-configured limits for the number of pairs, key size, and value size. --- ## `rofl.yaml` Manifest File ## Metadata The following fields are valid in your yaml root: - `name`: A short, human-readabe name for your app. e.g. `my-app` - `version`: ROFL version. e.g. `0.1.1` - `repository`: A path to the git repository. e.g. `https://github.com/user/my-app` - `author`: The author name and their e-mail address. e.g. `John Doe ` - `license`: The ROFL license in [SPDX] format. e.g. `Apache-2.0` - `tee`: The Trusted Execution Environment type. Valid options are `tdx` (default) or `sgx` - `kind`: The ROFL "flavor". Valid options for TDX TEE are `containers` (default) or `raw`. The only valid option for SGX TEE is `raw` [SPDX]: https://spdx.org/licenses/ ## App Resources (`resources`) Each containerized app running in ROFL must define what kind of resources it needs for its execution. 
This includes the number of assigned vCPUs, amount of memory, storage requirements, GPUs, etc. Resources are specified in the app manifest file under the `resources` section as follows: ```yaml resources: memory: 512 cpus: 1 storage: kind: disk-persistent size: 512 ``` This chapter describes the set of supported resources. Changing the requested resources will result in a different enclave identity of the app and will require the policy to be updated! ### Memory (`memory`) The amount of memory is specified in megabytes. By default the this value is initialized to `512`. ### vCPU Count (`cpus`) The number of vCPUs allocated to the VM. By default this value is initialized to `1`. ### Storage (`storage`) Each app running in ROFL can request different storage options, depending on its use case. The storage kind is specified in the `kind` field with the following values currently supported: - `disk-persistent` provisions a persistent disk of the given size. The disk is encrypted and authenticated using a key derived by the decentralized on-chain key management system after successful attestation. - `disk-ephemeral` provisions an ephemeral disk of the given size. The disk is encrypted and authenticated using an ephemeral key randomly generated on each boot. - `ram` provisions an ephemeral filesystem entirely contained in encrypted memory. - `none` does not provision any kind of storage. Specifying this option will not work for containerized apps. The `size` argument defines the amount of storage to provision in megabytes. ## Deployments (`deployments`) This section contains ROFL deployments on specific networks. ### `` #### `policy` Contains the policy under which the app will be allowed to spin up: - `quotes`: defines a TEE-specific policy requirements such as the TCB validity period, and the minimum TCB-R number which indicates what security updates must be applied to the given platform. - `enclaves`: defines the allowed enclave IDs for running this app. - `endorsements`: a list of conditions that define who can run this app. - `- any: {}`: any node is allowed to run the app. - `- node: `: node with a specific node ID is allowed to run the app. - `- provider:
 <address>`: nodes belonging to the specified ROFL provider are allowed to run the app. - `- provider_instance_admin: <address>
`: machines having the specified admin are allowed to run the app. You can also nest conditions with `and` and `or` operators. For example: ```yaml title="policy.yaml" endorsements: - and: - provider: oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz - or: - provider_instance_admin: oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt - provider_instance_admin: oasis1qqcd0qyda6gtwdrfcqawv3s8cr2kupzw9v967au6 ``` In the example the app will only run on a specified provider and on machines owned by either of the two admin addresses. - `fees: `: who pays for the registration and other fees: - `endorsing_node`: the node running the app pays the fees. - `instance`: The app instance pays the fees. --- ## Marketplace # The ROFL Marketplace The ROFL marketplace is an on-chain protocol that allows app developers to easily and safely deploy their apps for a small fee on one side and on the other enables ROFL node providers to lend their ROFL nodes for computation. [Image: ROFL marketplace] The ROFL marketplace consists of three entities: - **App developers**: Build an app, register it on-chain and rent a machine from the ROFL provider where they can deploy their ROFL to. - **ROFL provider**: Is an on-chain entity that bills and assigns **machines** to ROFL developers. Each machine has a hosting plan called an **offer**. - **ROFL node**: Is a server that runs the Oasis node and instantiates one or more **machines** which then host an app. ## Instructions for App Developers ## Instructions for Node Providers --- ## Port Proxy Port proxy for your ROFL automatically generates public HTTPS URLs for services in your app. Simply publish a port in your `compose.yaml` and the proxy handles TLS certificates and routing. TLS is terminated inside the app, providing end-to-end encryption so that even the provider cannot see the traffic. ## Enabling the Proxy To expose a port from your container, publish it in your `compose.yaml` file: ```yaml title="compose.yaml" services: frontend: image: docker.io/hashicorp/http-echo:latest ports: - "5678:5678" # Expose container port 5678 on host port 5678 ``` After deploying your app, you can find the generated URL by running `oasis rofl machine show`: ```shell oasis rofl machine show ``` The output will contain a `Proxy` section with the URL for each published port: ``` Proxy: Domain: m602.test-proxy-b.rofl.app Ports from compose file: 5678 (frontend): https://p5678.m602.test-proxy-b.rofl.app ``` ## Configuration The proxy behavior can be configured using annotations in your `compose.yaml` file. The annotation key is `net.oasis.proxy.ports..mode`. Supported modes are: - `terminate-tls` (default): The proxy terminates the TLS connection and forwards the unencrypted traffic to your container. This is suitable for HTTPS services. - `passthrough`: The proxy forwards the raw TCP connection to your container. This is suitable for services that handle their own TLS or use other TCP-based protocols. - `ignore`: The proxy will ignore this port, and it will not be exposed publicly. Example of configuring a port for TCP passthrough: ```yaml title="compose.yaml" services: myservice: image: docker.io/my/service:latest ports: - "8080:8080" annotations: net.oasis.proxy.ports.8080.mode: passthrough ``` --- ## Secrets Sometimes containers need access to data that should not be disclosed publicly, for example API keys to access certain services. This data can be passed to containers running in ROFL via _secrets_. 
Secrets are arbitrary key-value pairs which are end-to-end encrypted so that they can only be decrypted inside a correctly attested app. Secrets can be easily managed via the Oasis CLI, for example to create a secret called `mysecret` you can use: ```sh echo -n "my very secret value" | oasis rofl secret set mysecret - ``` Detailed CLI Reference For comprehensive documentation on secret management commands including importing from `.env` files, removing secrets, and other advanced features, consult the [Oasis CLI] documentation. Note that this only encrypts the secret and updates the local app manifest file, but the secret is not propagated to the app just yet. This allows you to easily configure as many secrets as you want without the need to constantly update the on-chain app configuration. While the secrets are stored in the local app manifest, this does not mean that the manifest needs to remain private. The secret values inside the manifest are end-to-end encrypted and cannot be read even by the administrator who set them. When a secret is created, a new ephemeral key is generated that is used in the encryption process. The ephemeral key is then immediately discarded so only the app itself can decrypt the secret. Updating the on-chain configuration can be performed via the usual `update` command as follows: ```sh oasis rofl update ``` Inside containers secrets can be passed either via environment variables or via container secrets. ## Environment Variables Each secret is automatically exposed in the Compose environment and can be trivially used in the Compose file. Note that when exposed as an environment variable, the secret name is capitalized and spaces are replaced with underscores, so a secret called `my secret` will be available as `MY_SECRET`. ```yaml services: test: image: docker.io/library/alpine:3.21.2@sha256:f3240395711384fc3c07daa46cbc8d73aa5ba25ad1deb97424992760f8cb2b94 command: echo "Hello $MYSECRET!" environment: - MYSECRET=${MYSECRET} ``` ## Container Secrets Each secret is also defined as a [container secret] and can be passed to the container as such. Note that the secret needs to be defined as an _external_ secret as it is created by the app during boot. ```yaml services: test: image: docker.io/library/alpine:3.21.2@sha256:f3240395711384fc3c07daa46cbc8d73aa5ba25ad1deb97424992760f8cb2b94 command: echo "Hello $(cat /run/secrets/mysecret)!" secrets: - mysecret secrets: mysecret: external: true ``` [container secret]: https://docs.docker.com/compose/how-tos/use-secrets/ [Oasis CLI]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#secret --- ## Persistent Storage ROFL developers may use Sapphire smart contracts for secure and consistent storage across all ROFL replicas. This storage however, is not appropriate for read/write intensive applications. For this reason ROFL has built-in support for local persistent storage with the following settings: - Local per-machine storage, not synchronized across other ROFL replicas. - Fully encrypted on the host machine. - Preserved during ROFL upgrades and node restarts. This type of a storage is particularly useful for caching. Docker images defined in the `compose.yaml` file are automatically stored to persistent storage. This way they are fetched only the first time an app is deployed, otherwise a cached version is considered. All non-external container volumes will automatically reside in persistent storage. 
In the example below, we [define a new volume] called `my-volume` and make `.ollama` in the home folder persistent. This way we avoid downloading ollama models each time a machine hosting the app is restarted: ```yaml title="compose.yaml" services: ollama: image: "docker.io/ollama/ollama" ports: - "11434:11434" volumes: - my-volume:/root/.ollama entrypoint: ["/usr/bin/bash", "-c", "/bin/ollama serve & sleep 5; ollama pull deepseek-r1:1.5b; wait"] volumes: my-volume: ``` [define a new volume]: https://docs.docker.com/reference/compose-file/volumes/ --- ## Quickstart You will **ROFLize your app** in five steps: 1. [Initialize](#initialize) the ROFL manifest. 2. [Create](#create) a new app on blockchain. 3. [Build](#build) a ROFL bundle. 4. Encrypt [secrets](#secrets) and store them on-chain. 5. [Deploy](#deploy) your app to ROFL node. [`oasis rofl init`]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#init [`oasis rofl create`]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#create [`oasis rofl build`]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#build [`oasis rofl update`]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#update [`oasis rofl secret`]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#secret [`oasis rofl deploy`]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#deploy [`oasis rofl machine show`]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#machine-show [`oasis rofl machine logs`]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#machine-logs ## Prerequisites ### Containerized App Your app should already run inside a container and have a Docker-like image ready to download from [docker.io], [GitHub containers registry][ghcr] or some other public OCI repository. If you never containerized an app yet, head over to the [Containerize your app] chapter. [docker.io]: https://docker.io [ghcr]: https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry [Containerize your app]: ./workflow/containerize-app.mdx ### Oasis CLI Download the [latest Oasis CLI release][oasis-cli-dl] and install it on your computer. [oasis-cli-dl]: https://github.com/oasisprotocol/cli/blob/master/docs/setup.md ### Some Tokens You'll need about 150 tokens in your Oasis CLI account for ROFL registration, renting a machine and paying for gas: Invoke the following to create a new account: ```shell oasis wallet create my_account --file.algorithm secp256k1-bip44 ``` You can later also import this account to Metamask or other Ethereum-compatible tooling like Hardhat. Export a `secp256k1` private key or mnemonic from your existing wallet. Then run the following command and follow the wizard: ```shell oasis wallet import my_account ``` Next, head over to the [Oasis faucet] to get free Testnet tokens. When deploying your app on Mainnet, you will need to [buy ROSE][get-rose]. [`oasis wallet create`]: https://github.com/oasisprotocol/cli/blob/master/docs/wallet.md#create [Oasis faucet]: https://faucet.testnet.oasis.io [create or import a `secp256k1-bip44` account]: https://github.com/oasisprotocol/cli/blob/master/docs/wallet.md [get-rose]: https://github.com/oasisprotocol/docs/blob/main/docs/general/manage-tokens/README.mdx#get-rose ## Initialize Inside your app folder which contains `compose.yaml` run [`oasis rofl init`]. 
This will generate the initial `rofl.yaml` manifest file: ```shell oasis rofl init ``` Change the `memory`, the number of `cpus` and the root filesystem `storage` section under `resources` to fit your needs: ```yaml title="rofl.yaml" {5-10} name: my-app version: 0.1.0 tee: tdx kind: container resources: memory: 512 # in megabytes cpus: 1 storage: kind: disk-persistent size: 512 # in megabytes artifacts: firmware: https://github.com/oasisprotocol/oasis-boot/releases/download/v0.6.2/ovmf.tdx.fd#db47100a7d6a0c1f6983be224137c3f8d7cb09b63bb1c7a5ee7829d8e994a42f kernel: https://github.com/oasisprotocol/oasis-boot/releases/download/v0.6.2/stage1.bin#e5d4d654ca1fa2c388bf64b23fc6e67815893fc7cb8b7cfee253d87963f54973 stage2: https://github.com/oasisprotocol/oasis-boot/releases/download/v0.6.2/stage2-podman.tar.bz2#b2ea2a0ca769b6b2d64e3f0c577ee9c08f0bb81a6e33ed5b15b2a7e50ef9a09f container: runtime: https://github.com/oasisprotocol/oasis-sdk/releases/download/rofl-containers%2Fv0.8.0/rofl-containers#08eb5bbe5df26af276d9a72e9fd7353b3a90b7d27e1cf33e276a82dfd551eec6 compose: compose.yaml ``` ## Create Create a new app on-chain with [`oasis rofl create`]. By default, the app will be registered on Sapphire Mainnet. Pass `--network tesnet` parameter to use Testnet: ```shell oasis rofl create --network testnet ``` If the transaction succeeds, you should be able to find your app on the [Oasis Explorer]. [Oasis Explorer]: https://explorer.oasis.io/testnet/sapphire/rofl/app ## Build Next, build the ROFL bundle. ```shell oasis rofl build ``` ```shell docker run --platform linux/amd64 --volume .:/src -it ghcr.io/oasisprotocol/rofl-dev:main oasis rofl build ``` As a result, a new `.orc` file will appear inside your project folder. ## Secrets If your application uses environment variables you would like to privately store on-chain, use the [`oasis rofl secret`] command, for example: ```shell echo -n "my-secret-token" | oasis rofl secret set TOKEN - ``` This will populate the `TOKEN` secret and you can use it in your compose file as follows: ```yaml title="compose.yaml" {6-7} services: python-telegram-bot: build: . image: "ghcr.io/oasisprotocol/demo-rofl-tgbot:ollama" platform: linux/amd64 environment: - TOKEN=${TOKEN} ``` To submit the secrets and the ROFL bundle information from the previous step on-chain, run [`oasis rofl update`]: ```shell oasis rofl update ``` ## Deploy Deploy your app to a ROFL provider with the [`oasis rofl deploy`] command: ```shell oasis rofl deploy ``` By default, a new machine that fits required resources provided by the Oasis foundation will be bootstrapped. You can check the status of the machine with [`oasis rofl machine show`]: ```shell oasis rofl machine show ``` If everything works, you should be able to fetch your application logs with [`oasis rofl machine logs`]: ```shell oasis rofl machine logs ``` **Congratulations, you have just deployed your first app in ROFL! 🎉** --- ## Troubleshooting ## Compilation ### `The following target_feature flags must be set: +aes,+ssse3.` You will see the following error, if the `aes` and `ssse3` compiler flags are not enabled during compilation of your SGX and TDX-raw ROFL: ``` error: The following target_feature flags must be set: +aes,+ssse3. 
--> /home/user/.cargo/registry/src/index.crates.io-6f17d22bba15001f/deoxysii-0.2.4/src/lib.rs:26:1 | 26 | compile_error!("The following target_feature flags must be set: +aes,+ssse3."); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``` We suggest that you add the following default flags to your `.cargo/config.toml` file: ```toml [build] rustflags = ["-C", "target-feature=+aes,+ssse3"] rustdocflags = ["-C", "target-feature=+aes,+ssse3"] [test] rustflags = ["-C", "target-feature=+aes,+ssse3"] rustdocflags = ["-C", "target-feature=+aes,+ssse3"] ``` ## Compose file ### Environment variables defined are not considered Due to an upstream [`podman-compose` bug][podman-compose-env-var-bug] assigning environment variables inside the compose file and using it directly in the commands afterwards do not work: ```yaml services: oracle: platform: linux/amd64 environment: CONTRACT_ADDRESS: 0x5FbDB2315678afecb367f032d93F642f64180aa3 entrypoint: /bin/sh -c 'python main.py $${CONTRACT_ADDRESS}' ``` The `CONTRACT_ADDRESS` in this case will be empty in ROFL. Injecting the variable value directly inside `entrypoint` seems to be the only workaround: ```yaml services: oracle: platform: linux/amd64 entrypoint: /bin/sh -c 'python main.py 0x5FbDB2315678afecb367f032d93F642f64180aa3' ``` [podman-compose-env-var-bug]: https://github.com/containers/podman-compose/issues/264 ### `depends_on` is ignored Due to an upstream [`podman-compose` bug][podman-compose-depends-on-bug] waiting for containers to spin up in the correct order with `depends_on` directive doesn't work. For example, this `oracle` should spin up once the `contracts` service successfully deploys the contracts and finishes: ```yaml services: contracts: image: "ghcr.io/foundry-rs/foundry:latest" platform: linux/amd64 volumes: - ./contracts:/contracts entrypoint: /bin/sh -c 'cd contracts && forge create' oracle: platform: linux/amd64 entrypoint: /bin/sh -c 'python main.py' restart: on-failure depends_on: contracts: condition: service_completed_successfully ``` In ROFL the `oracle` service will be started in parallel with `contracts` and will ignore the `depends_on` directive. There is currently no workaround for this. You will need to implement a logic in your `oracle` service so that the service doesn't hang if the contracts are not deployed yet, but simpyl crashes. This way, the restart mechanism of the service will be triggered to restart `oracle` and try again. [podman-compose-depends-on-bug]: https://github.com/containers/podman-compose/issues/575 ## ROFL Proxy URL is not working If you have exposed a port in your `compose.yaml` but the proxy URL shown by `oasis rofl machine show` is not accessible, it is likely that your app is using outdated artifacts. To fix this, update to the latest Oasis CLI version, then run `oasis rofl upgrade` in your project directory to update the artifacts in your `rofl.yaml` file. After that, rebuild and redeploy your app: ```shell oasis rofl build oasis rofl update oasis rofl deploy ``` ## See also --- ## How to ROFLize an App? [Image: ROFL diagram] Each app running in ROFL runs in its own Trusted Execution Environment (TEE) which is provisioned by an Oasis Node from its _ORC bundle_ (a zip archive containing the program binaries and metadata required for execution). Apps in ROFL register to the Oasis Network in order to be able to easily authenticate to on-chain smart contracts and transparently gain access to the decentralized per-app key management system. 
Inside the TEE, the app performs important functions that ensure its security and enable secure communication with the outside world. This includes using a light client to establish a fresh view of the Oasis consensus layer which provides a source of rough time and integrity for verification of all on-chain state. The app also generates a set of ephemeral cryptographic keys which are used in the process of remote attestation and on-chain registration. These processes ensure that the app can authenticate to on-chain modules (e.g. smart contracts running on [Sapphire]) by signing and submitting special transactions. The app can then perform arbitrary work and interact with the outside world through (properly authenticated) network connections. Connections can be authenticated via HTTPS/TLS or use other methods (e.g. light clients for other chains). [Sapphire]: https://github.com/oasisprotocol/docs/blob/main/docs/build/sapphire/README.mdx --- ## Build This operation packs `compose.yaml`, specific operating system components and the hash of a trusted block on the Sapphire chain. All these pieces are needed to safely execute our app inside a TEE. [Image: ROFL-compose-app bundle wrapper] Whenever you make changes to your app and want to deploy it, you first need to build it. The build process takes the compose file together with other ROFL artifacts and deterministically generates a bundle that can later be deployed. The build process also computes the _enclave identity_ of the bundle which is used during the process of remote attestation to authenticate the app instances before granting them access to the key management system and [other features]. To build an app and update the enclave identity in the app manifest, simply run: ```shell oasis rofl build ``` ```shell docker run --platform linux/amd64 --volume .:/src -it ghcr.io/oasisprotocol/rofl-dev:main oasis rofl build ``` This will generate a ROFL bundle which can be used for later deployment and output something like: ``` ROFL app built and bundle written to 'myapp.default.orc'. ``` [other features]: ../features/ ## Update On-chain App Config After any changes to the [app's policy] defined in the manifest, the on-chain app config needs to be updated in order for the changes to take effect. The designated admin account is able to update this policy by issuing an update transaction which can be done via the CLI by running: ```shell oasis rofl update ``` [app's policy]: ../features/manifest.md#policy --- ## Containerize an App Services are best maintained if they are run in a **controlled environment** also known as *a container*. This includes the exact version of the operating system, both system and user libraries, and your carefully configured service. The image of the container is uploaded to an *OCI file server* (e.g. [docker.io], [ghcr.io]) from where the server hosting your bot downloads it. [docker.io]: https://docker.io [ghcr.io]: https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry Let's have the following project consisting of two files: ``` my-bot ├── bot.py # A python bot script └── requirements.txt # Python dependencies ``` For containerization we will use [Docker][docker], but you can also use alternatives such as [Podman][podman]. In fact, when your app is deployed to a ROFL node the containers there will be orchestrated by Podman, so feel free to use it instead for better compatibility. ## Dockerfile Inside the project folder create a file called `Dockerfile`. 
This will instruct Docker to build a **Python-based image** and add **our Python bot script on top of it**.

```dockerfile title="Dockerfile"
FROM python:alpine3.17
WORKDIR /bot
COPY ./bot.py ./requirements.txt /bot
RUN pip install -r requirements.txt
ENTRYPOINT ["python", "bot.py"]
```

## Compose

[Docker Compose][compose] orchestrates your containers. It makes sure they are spun up in the correct order, defines storage points, networking and other functionalities. Create `compose.yaml` with the following example content:

```yaml title="compose.yaml"
services:
  python-bot:
    build: .
    image: "docker.io/YOUR_USERNAME/YOUR_PROJECT"
    platform: linux/amd64
    environment:
      - TOKEN=${TOKEN}
```

[compose]: https://docs.docker.com/reference/compose-file/

### Adjust `image:` field to fit your needs

The `image:` field(s) in `compose.yaml` above must point to a **publicly accessible OCI registry** where your image will be downloaded from for execution. In your case, replace the `image:` field with the fully qualified domain of the OCI server you use, followed by your username, for example:

- `docker.io/your_username/my-bot`
- `ghcr.io/your_username/my-bot`

Always specify FQDN image URL

When specifying the container image URL, make sure to use a fully qualified domain name, e.g. `docker.io/ollama/ollama` and not just `ollama/ollama`.

## Build and Push

Build the container image and tag it using `docker compose`:

```shell
docker compose build
```

You can also test the compose setup locally with:

```shell
docker compose up
```

To stop it:

```shell
docker compose down
```

After building and tagging the images you need to push the container images to a publicly accessible OCI registry (e.g. [docker.io], [ghcr.io]). If this is the first time you're pushing images on your computer, you will first need to authenticate with:

```shell
docker login
```

Then run the following to upload the container images to the registry:

```shell
docker compose push
```

Make sure your image is public

If you're pushing the image to GitHub containers for the first time, make sure you [configure public package visibility][ghcr-package-visibility]!

[ghcr-package-visibility]: https://docs.github.com/en/packages/learn-github-packages/configuring-a-packages-access-control-and-visibility#configuring-visibility-of-packages-for-your-personal-account

## Pin Your Image Hash

To prevent a different container image from being pulled inside ROFL, pin the **image digest** inside `compose.yaml`. Fetch the `sha256:...` digest by invoking:

```shell
docker images --digests
```

Then append `@` and the digest next to the image tag in your `compose.yaml`, for example:

```yaml
image: "docker.io/MY_USERNAME/my-bot@sha256:9633593eb9e8395023cb0d926982602978466ec003efa189d94a34e7bea6ec0d"
```

[docker]: https://www.docker.com/
[podman]: https://www.podman.io/

---

## Create

Before the app can be built, it needs to be created on-chain and assigned a unique identifier or *app ID*, which can be used by on-chain smart contracts to ensure that they are talking to the right app and which also gives the app access to a decentralized key management system.

Anyone with enough funds can create an app. Currently, this threshold is [100 tokens][stake-requirements]. In order to obtain the TEST tokens needed for creating and running your apps, use [the faucet]. To make things easier you should [create or import a `secp256k1-bip44` account] that you can also use with Ethereum-compatible tooling like Hardhat.
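For example, a compatible account could be created with the Oasis CLI along these lines (a sketch; `myaccount` is just a placeholder name, and the exact algorithm flag may differ between CLI versions, so check `oasis wallet create --help`):

```shell
# Create a new file-based account using the secp256k1-bip44 algorithm
# (flag name is an assumption; verify with `oasis wallet create --help`).
oasis wallet create myaccount --file.algorithm secp256k1-bip44
```

The CLI also supports importing an existing key with `oasis wallet import`.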
[the faucet]: https://faucet.testnet.oasis.io/?paratime=sapphire [create or import a `secp256k1-bip44` account]: https://github.com/oasisprotocol/cli/blob/master/docs/wallet.md We also need to select the network (`testnet` or `mainnet`) and the account that will be the initial administrator of the app (in this case `myaccount`). The CLI will automatically update the manifest file with the assigned app identifier. ```shell oasis rofl create --network testnet --account myaccount ``` After successful creation, the CLI will also output the new identifier: ``` Created ROFL application: rofl1qqn9xndja7e2pnxhttktmecvwzz0yqwxsquqyxdf ``` The app deployer account automatically becomes the initial admin of the app so it can update the app's configuration. The admin address can always be changed by the current admin. While the CLI implements a simple governance mechanism where the admin of the app is a single account, even a smart contract can be the admin. This allows for implementation of advanced agent governance mechanisms, like using multi-sigs or DAOs with veto powers to control the upgrade process. App ID calculation App ID is derived using one of the two schemes: - **Creator address + creator account nonce (default)**: This approach is suitable for running tests (e.g. in [`sapphire-localnet`]) where you want deterministic app ID. - **Creator address + block round number + index of the `rofl.Create` transaction in the block**: This approach is non-deterministic and preferred in production environments so that the potential attacker cannot simply determine the app ID in advance, even if they knew what the creator address is. You can select the app ID derivation scheme by passing the [`--scheme` parameter][scheme-parameter]. [stake-requirements]: https://github.com/oasisprotocol/docs/blob/main/docs/node/run-your-node/prerequisites/stake-requirements.md [`sapphire-localnet`]: https://github.com/oasisprotocol/docs/blob/main/docs/build/tools/localnet.mdx [scheme-parameter]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#create --- ## Deploy ROFLs can be deployed to any ParaTime that has the ROFL module installed. Most common is [Sapphire][sapphire] which implements all ROFL functionalities. [sapphire]: https://github.com/oasisprotocol/docs/blob/main/docs/build/sapphire/network.mdx Your app will be deployed to a [ROFL node]. This is a light Oasis Node with support for TEE and configured Sapphire ParaTime. There are two ways to deploy your app: 1. The preferred option is to rent a ROFL node using the [ROFL marketplace](#deploy-on-rofl-marketplace) and deploy your app directly via the [Oasis CLI]. 2. Alternatively, you can copy over the ROFL bundle to your ROFL node manually and configure it. In this case, consult the [ROFL node → Hosting the ROFL bundle directly][rofl-node-hosting] section. [ROFL node]: https://github.com/oasisprotocol/docs/blob/main/docs/node/run-your-node/rofl-node.mdx [rofl-node-hosting]: https://github.com/oasisprotocol/docs/blob/main/docs/node/run-your-node/rofl-node.mdx#hosting-the-rofl-app-bundle-directly [Oasis CLI]: https://github.com/oasisprotocol/cli/blob/master/docs/README.md ## Deploy on ROFL Marketplace The Oasis CLI has built-in support for renting a machine on the [ROFL marketplace][rofl-marketplace] and deploying your app to it. 
To list offers of the default Oasis-managed ROFL provider, run: ```shell oasis rofl deploy --show-offers ``` ``` Using provider: oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz (oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz) Offers available from the selected provider: - playground_short [0000000000000001] TEE: tdx | Memory: 4096 MiB | vCPUs: 2 | Storage: 19.53 GiB Price: 5.0 TEST/hour ``` You can select a different provider and offer by using the [`--provider`][oasis-rofl-deploy] and [`--offer`][oasis-rofl-deploy] parameters respectively. For now, let's just go with defaults and execute: ```shell oasis rofl deploy ``` ``` Using provider: oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz (oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz) Pushing ROFL app to OCI repository 'rofl.sh/0ba0712d-114c-4e39-ac8e-b28edffcada8:1747909776'... No pre-existing machine configured, creating a new one... Taking offer: playground_short [0000000000000001] ``` The command above performed the following actions: 1. copied over ROFL bundle .orc to an Oasis-managed OCI repository `rofl.sh`, 2. paid an offer `playground_short` with ID `0000000000000001` to provider `oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz`, 3. obtained the machine ID and stored it to the manifest file. You can check the status of your active ROFL machine by invoking: ```shell oasis rofl machine show ``` ``` Name: default Provider: oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz ID: 00000000000000a2 Offer: 0000000000000001 Status: accepted Creator: oasis1qpupfu7e2n6pkezeaw0yhj8mcem8anj64ytrayne Admin: oasis1qpupfu7e2n6pkezeaw0yhj8mcem8anj64ytrayne Node ID: bOlqho9R3JHP64kJk+SfMxZt5fNkYWf6gdhErWlY60E= Created at: 2025-05-22 15:01:47 +0000 UTC Updated at: 2025-05-22 15:01:59 +0000 UTC Paid until: 2025-05-22 16:01:47 +0000 UTC Proxy: Domain: m162.test-proxy-a.rofl.app Ports from compose file: 5678 (frontend): https://p5678.m162.test-proxy-a.rofl.app Resources: TEE: Intel TDX Memory: 4096 MiB vCPUs: 2 Storage: 20000 MiB Deployment: App ID: rofl1qpjsc3qplf2szw7w3rpzrpq5rqvzv4q5x5j23msu Metadata: net.oasis.deployment.orc.ref: rofl.sh/0ba0712d-114c-4e39-ac8e-b28edffcada8:1747909776@sha256:77ff0dc76adf957a4a089cf7cb584aa7788fef027c7180ceb73a662ede87a217 Commands: ``` This shows you the details of the machine, including: - Machine status and expiration date - Provider information - Proxy URLs for any published ports - Resource allocation (TEE type, memory, CPUs, storage) - Deployment details You can also fetch the logs of your app by invoking the following command and signing the request with app's admin account: ```shell oasis rofl machine logs ``` Logs are not encrypted! While only app admin can access the logs they are stored **unencrypted on the ROFL node**. In production, make sure you never print any confidential data to the standard or error outputs! 
[rofl-marketplace]: ../features/marketplace.mdx
[oasis-rofl-deploy]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#deploy

## Check That the App is Running

To check out all active app replicas regardless of the deployment procedure, use the following command:

```shell
oasis rofl show
```

```
App ID: rofl1qqn9xndja7e2pnxhttktmecvwzz0yqwxsquqyxdf
Admin: oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx
Staked amount: 10000.0
Policy:
  {
    "quotes": {
      "pcs": {
        "tcb_validity_period": 30,
        "min_tcb_evaluation_data_number": 17,
        "tdx": {}
      }
    },
    "enclaves": [
      "z+StFagJfBOdGlUGDMH7RlcNUm1uqYDUZDG+g3z2ik8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==",
      "6KfY4DqD1Vi+H7aUn5FwwLobEzERHoOit7xsrPNz3eUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=="
    ],
    "endorsements": [
      {
        "any": {}
      }
    ],
    "fees": 2,
    "max_expiration": 3
  }

=== Instances ===
- RAK: AQhV3X660/+bR8REaWYkZNR6eAysFShylhe+7Ph00PM=
  Node ID: DbeoxcRwDO4Wh8bwq5rAR7wzhiB+LeYn+y7lFSGAZ7I=
  Expiration: 9
```

Here you can see that a single instance of the app is running on the given node, its public runtime attestation key (RAK) and the epoch at which its registration will expire if not refreshed. Apps in ROFL must periodically refresh their registrations to ensure they don't expire.

You can also check out the status of your app on the Oasis Explorer → Sapphire → ROFL ([Mainnet], [Testnet]):

[Mainnet]: https://explorer.oasis.io/mainnet/sapphire/rofl/app
[Testnet]: https://explorer.oasis.io/testnet/sapphire/rofl/app

---

## Init

## ROFL Flavors

Apps running in ROFL come in different flavors and the right choice is a tradeoff between the Trusted Computing Base (TCB) size and ease of use:

- **TDX containers ROFL (default)**: Docker Compose-based container services packed in a secure virtual machine.
- **Raw TDX ROFL:** A Rust app compiled as the init process of the operating system and packed in a secure virtual machine.
- **SGX ROFL**: A Rust app with fixed memory allocation compiled and packed into a single secure binary.

## Init App Directory and Manifest

Create the basic directory structure for the app using the [Oasis CLI]:

```shell
oasis rofl init my-app
```

This will create the `my-app` directory and initialize a *ROFL manifest file*. By default, the TDX container-based flavor of the app is used. You can select a different one with the [`--kind`] parameter.

[`--kind`]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#init

The command will output a summary of what is being created:

```
Creating a new ROFL app with default policy...
Name: my-app
Version: 0.1.0
TEE: tdx
Kind: container
Git repository initialized.
Created manifest in 'rofl.yaml'.
Run `oasis rofl create` to register your ROFL app and configure an app ID.
```

The directory structure (omitting git artifacts) will look as follows:

```
my-app
├── compose.yaml # Container compose file.
└── rofl.yaml    # ROFL app manifest.
```

The [manifest] contains things like ROFL's [metadata], [secrets], [requested resources] and can be modified either manually or by using the CLI commands.

[manifest]: ../features/manifest.md
[metadata]: ../features/manifest.md#metadata
[Oasis CLI]: https://github.com/oasisprotocol/cli/blob/master/docs/README.md
[secrets]: ../features/secrets.md
[requested resources]: ../features/manifest.md#resources

---

## Prerequisites

The following tools are used for ROFL development and deployment:

- **Oasis CLI**: The [`oasis`][oasis-cli] command will be used to manage your wallet and your app, including registering, building, deploying and managing your ROFL replicas.
- **Docker** (or **Podman**): Having the build environment for your ROFL inside a container is convenient because you don't need to install a handful of Intel-specific libraries and dependencies on your system. Also, Docker Compose is useful for testing your ROFL locally before deploying it on-chain.

Pick among the three setups below.

[oasis-cli]: ../../tools/cli/README.md

### Preferred: Native Oasis CLI + Container for building and testing

1. [Download and install][oasis-cli-setup] the Oasis CLI to your platform.
2. For building apps for ROFL, the Oasis team prepared the [`ghcr.io/oasisprotocol/rofl-dev`][rofl-dev] image. It contains all the tools needed to compile any ROFL flavor of your app. You can test it out by running:

   ```shell
   docker run --platform linux/amd64 --rm -v .:/src -it ghcr.io/oasisprotocol/rofl-dev:main oasis rofl build
   ```

--platform linux/amd64

Always provide the `--platform linux/amd64` parameter to the `rofl-dev` image, no matter which processor your computer has or the operating system you're running.

[rofl-dev]: https://github.com/oasisprotocol/oasis-sdk/pkgs/container/rofl-dev
[oasis-cli-setup]: ../../tools/cli/setup.mdx

### Conservative: Containers Everywhere

If you're having issues installing the Oasis CLI locally or you simply don't want to, you can run the `oasis` command from the [`rofl-dev`][rofl-dev] image.

Oasis CLI config

You will need to carefully bind-mount the Oasis CLI config folder which contains your wallet. Failing to do so will result in losing access to your (funded) accounts.

1. Invoke `oasis` from the `rofl-dev` image:

   ```shell
   docker run --platform linux/amd64 --rm -v .:/src -v ~/.config/oasis:/root/.config/oasis -it ghcr.io/oasisprotocol/rofl-dev:main oasis
   ```

   ```shell
   docker run --platform linux/amd64 --rm -v .:/src -v "~/Library/Application Support/oasis/":/root/.config/oasis -it ghcr.io/oasisprotocol/rofl-dev:main oasis
   ```

   ```shell
   docker run --platform linux/amd64 --rm -v .:/src -v %USERPROFILE%/AppData/Local/oasis/:/root/.config/oasis -it ghcr.io/oasisprotocol/rofl-dev:main oasis
   ```

2. (Optionally) Add an `oasis` alias to your shell startup script to get the same behavior as if the Oasis CLI was installed locally:

   ```bash title="~/.bashrc"
   alias oasis='docker run --platform linux/amd64 --rm -v .:/src -v ~/.config/oasis:/root/.config/oasis -it ghcr.io/oasisprotocol/rofl-dev:main oasis'
   ```

   ```bash title="~/.bash_profile"
   alias oasis='docker run --platform linux/amd64 --rm -v .:/src -v "~/Library/Application Support/oasis/":/root/.config/oasis -it ghcr.io/oasisprotocol/rofl-dev:main oasis'
   ```

### Advanced: Native Oasis CLI and ROFL build utils (`linux/amd64` only)

1. Install the [Oasis CLI][oasis-cli] locally.
2. Install tools for creating and encrypting partitions and QEMU. On a Debian-based Linux you can do so by running:

   ```
   sudo apt install squashfs-tools cryptsetup-bin qemu-utils
   ```

3. If you want to build SGX and TDX-raw ROFL bundles, you will need to install the Rust toolchain and Fortanix libraries as described in the [Oasis Core prerequisites] chapter. For building ROFL natively, you do not need a working SGX/TDX TEE, just an Intel-based CPU and the corresponding libraries.

[Oasis Core prerequisites]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/development-setup/prerequisites.md

---

## Test

## SGX ROFL

Apps running in SGX ROFL are fully supported by the [`sapphire-localnet`] Docker image.
Simply bind-mount your app folder and any ORC bundles will automatically be registered and executed on startup:

```shell
docker run -it -p8544-8548:8544-8548 -v .:/rofls ghcr.io/oasisprotocol/sapphire-localnet
```

[`sapphire-localnet`]: https://github.com/oasisprotocol/docs/blob/main/docs/build/tools/localnet.mdx

## TDX ROFL raw

Testing TDX raw ROFL instances locally is currently not supported. You will need to deploy them on Sapphire Testnet.

## TDX ROFL containers

The behavior of containers inside ROFL should be the same as running `podman-compose` locally and exporting secrets:

```shell
export SECRET=some_secret
podman-compose up --build
```

---

## Sapphire ParaTime

Sapphire is our official confidential ParaTime for smart contract development with [Ethereum Virtual Machine (EVM)] compatibility.

* Confidential state, end-to-end encryption, confidential randomness
* Easy integration with EVM-based dApps, such as DeFi, NFT, Metaverse and crypto gaming
* Scalability: increased throughput of transactions
* Low-cost: 99%+ lower fees than Ethereum
* 6-second finality (1 block)
* Cross-chain bridges to enable cross-chain interoperability

[Ethereum Virtual Machine (EVM)]: https://ethereum.org/en/developers/docs/evm/

### Getting Started

Develop and deploy a dApp on Sapphire:

- follow along with a video walkthrough via [Quickstart][quickstart]
- start with a working dApp [demo][demo]
- explore showcase dApps deployed on Sapphire on the [playground][playground]

[quickstart]: ./quickstart.mdx
[playground]: https://playground.oasis.io/
[demo]: https://github.com/oasisprotocol/demo-starter

### Understanding EVM compatibility

Get to know the differences between [Sapphire and Ethereum], and learn about the high-level [Concepts] of developing dApps.

[Sapphire and Ethereum]: ./ethereum.mdx
[Concepts]: ./develop/concept.mdx

### Develop on Sapphire

Take your existing dApp building knowledge and add Sapphire with our developer [cheatsheet](./images/cheatsheet.pdf) or visit the [Develop on Sapphire] chapter.

[Develop on Sapphire]: ./develop/README.mdx

### Network Information

Check out the RPC endpoints, block explorers and indexers at [Network Information][network].

[network]: ./network.mdx

### Faucet

Visit the [faucet][faucet] to obtain testnet tokens for development purposes.
[faucet]: https://faucet.testnet.oasis.io/ ## See also --- ## Contract Addresses and Deployments ## Standard Contract Addresses | Name | Mainnet Address | Testnet Address | Source | |----------------------------|------------------------------------------------------------|------------------------------------------------------------|------------------------------------| | [Multicall V3][multicall] | [`0xcA11bde05977b3631167028862bE2a173976CA11`][mc-mainnet] | [`0xcA11bde05977b3631167028862bE2a173976CA11`][mc-testnet] | [Multicall3.sol][multicall-source] | | [CreateX][createx] | [`0xba5Ed099633D3B313e4D5F7bdc1305d3c28ba5Ed`][cx-mainnet] | [`0xba5Ed099633D3B313e4D5F7bdc1305d3c28ba5Ed`][cx-testnet] | [Createx.sol][createx-source] | | [Wrapped ROSE][wrose-dapp] | [`0x8Bc2B030b299964eEfb5e1e0b36991352E56D2D3`][wr-mainnet] | [`0xB759a0fbc1dA517aF257D5Cf039aB4D86dFB3b94`][wr-testnet] | [WrappedROSE.sol][wrose-source] | [multicall-source]: https://github.com/mds1/multicall/blob/main/src/Multicall3.sol [multicall]: https://multicall3.com/ [mc-mainnet]: https://explorer.oasis.io/mainnet/sapphire/address/0xcA11bde05977b3631167028862bE2a173976CA11 [mc-testnet]: https://explorer.oasis.io/testnet/sapphire/address/0xcA11bde05977b3631167028862bE2a173976CA11 [createx]: https://github.com/pcaversaccio/createx/ [createx-source]: https://github.com/pcaversaccio/createx/blob/main/src/CreateX.sol [cx-mainnet]: https://explorer.oasis.io/mainnet/sapphire/address/0xba5Ed099633D3B313e4D5F7bdc1305d3c28ba5Ed [cx-testnet]: https://explorer.oasis.io/testnet/sapphire/address/0xba5Ed099633D3B313e4D5F7bdc1305d3c28ba5Ed [wrose-dapp]: https://rose.oasis.io/wrap [wrose-source]: https://github.com/oasisprotocol/sapphire-paratime/blob/main/contracts/contracts/WrappedROSE.sol [wr-mainnet]: https://explorer.oasis.io/mainnet/sapphire/address/0x8Bc2B030b299964eEfb5e1e0b36991352E56D2D3 [wr-testnet]: https://explorer.oasis.io/testnet/sapphire/address/0xB759a0fbc1dA517aF257D5Cf039aB4D86dFB3b94 ## Celer cBridge Tokens (Mainnet) | Source Chain | Token Name | Source Address | Dest. 
Chain | Dest Address | | ------------ | ---------- | -------------- | ----------- | ------------ | | Ethereum Mainnet (1) | OCEAN | [`0x967da4048cD07aB37855c090aAF366e4ce1b9F48`](https://etherscan.io/address/0x967da4048cD07aB37855c090aAF366e4ce1b9F48) | Oasis Sapphire (23294) | [`0x39d22B78A7651A76Ffbde2aaAB5FD92666Aca520`](https://explorer.oasis.io/mainnet/sapphire/address/0x39d22B78A7651A76Ffbde2aaAB5FD92666Aca520) | | Ethereum Mainnet (1) | USDC | [`0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48`](https://etherscan.io/address/0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48) | Oasis Sapphire (23294) | [`0x2c2E3812742Ab2DA53a728A09F5DE670Aba584b6`](https://explorer.oasis.io/mainnet/sapphire/address/0x2c2E3812742Ab2DA53a728A09F5DE670Aba584b6) | | Ethereum Mainnet (1) | USDT | [`0xdAC17F958D2ee523a2206206994597C13D831ec7`](https://etherscan.io/address/0xdAC17F958D2ee523a2206206994597C13D831ec7) | Oasis Sapphire (23294) | [`0xE48151964556381B33f93E05E36381Fd53Ec053E`](https://explorer.oasis.io/mainnet/sapphire/address/0xE48151964556381B33f93E05E36381Fd53Ec053E) | | Ethereum Mainnet (1) | WBTC | [`0x2260FAC5E5542a773Aa44fBCfeDf7C193bc2C599`](https://etherscan.io/address/0x2260FAC5E5542a773Aa44fBCfeDf7C193bc2C599) | Oasis Sapphire (23294) | [`0xE9533976C590200E32d95C53f06AE12d292cFc47`](https://explorer.oasis.io/mainnet/sapphire/address/0xE9533976C590200E32d95C53f06AE12d292cFc47) | | Ethereum Mainnet (1) | WETH | [`0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2`](https://etherscan.io/address/0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2) | Oasis Sapphire (23294) | [`0xfc6b18d694F2D137dB762B152736Ba098F9808d9`](https://explorer.oasis.io/mainnet/sapphire/address/0xfc6b18d694F2D137dB762B152736Ba098F9808d9) | | BNB Chain (56) | BNB | [`0xbb4CdB9CBd36B01bD1cBaEBF2De08d9173bc095c`](https://bscscan.com/address/0xbb4CdB9CBd36B01bD1cBaEBF2De08d9173bc095c) | Oasis Sapphire (23294) | [`0xe95E3a9f1a45B5EDa71781448F6047d7B7e31cbF`](https://explorer.oasis.io/mainnet/sapphire/address/0xe95E3a9f1a45B5EDa71781448F6047d7B7e31cbF) | | Polygon PoS (137) | MATIC | [`0x0d500B1d8E8eF31E21C99d1Db9A6444d3ADf1270`](https://polygonscan.com/address/0x0d500B1d8E8eF31E21C99d1Db9A6444d3ADf1270) | Oasis Sapphire (23294) | [`0xa349005a68FA33e8DACAAa850c45175bbcD49B19`](https://explorer.oasis.io/mainnet/sapphire/address/0xa349005a68FA33e8DACAAa850c45175bbcD49B19) | | Oasis Sapphire (23294) | wROSE | [`0x8Bc2B030b299964eEfb5e1e0b36991352E56D2D3`](https://explorer.oasis.io/mainnet/sapphire/address/0x8Bc2B030b299964eEfb5e1e0b36991352E56D2D3) | BNB Chain (56) | [`0xF00600eBC7633462BC4F9C61eA2cE99F5AAEBd4a`](https://bscscan.com/address/0xF00600eBC7633462BC4F9C61eA2cE99F5AAEBd4a) | ## Celer cBridge Tokens (Testnet) | Source Chain | Token Name | Source Address | Dest. 
Chain | Dest Address | | ------------ | ---------- | -------------- | ----------- | ------------ | | Oasis Sapphire Testnet (23295) | wROSE | [`0xB759a0fbc1dA517aF257D5Cf039aB4D86dFB3b94`](https://testnet.explorer.sapphire.oasis.dev/address/0xB759a0fbc1dA517aF257D5Cf039aB4D86dFB3b94) | BSC Testnet (97) | [`0x26a6f43BaEDD1767c283e2555A9E1236E5aE3A55`](https://testnet.bscscan.com/address/0x26a6f43BaEDD1767c283e2555A9E1236E5aE3A55) | ## Deployments | Name | Mainnet Address | Testnet Address | Source | | ---- | --------------- | --------------- | ------ | | [Celer IM Executor][message-executor] | Multiple executors available | [`0x9C850D230FFFaCEf1E2D1741a00080856630e455`][message-executor-testnet] | [Message Executor][message-executor-source] | | [Celer MessageBus][message-bus] | [`0x9Bb46D5100d2Db4608112026951c9C965b233f4D`][message-bus-mainnet] | [`0x9Bb46D5100d2Db4608112026951c9C965b233f4D`][message-bus-testnet] | [Message bus][message-bus-source] | | [Safe Singleton Factory][singleton-factory] | [`0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7`][singleton-factory-mainnet] | [`0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7`][singleton-factory-testnet] | [Singleton Factory][singleton-factory] | | [Band Oracle][band-oracle] | [`0xDA7a001b254CD22e46d3eAB04d937489c93174C3`][band-oracle-mainnet] | [`0x0c2362c9A0586Dd7295549C65a4A5e3aFE10a88A`][band-oracle-testnet] | [Oracle][band-oracle-source] | | [Router Gateway][router-gateway] | [`0x86dfc31d9cb3280ee1eb1096caa9fc66299af973`][router-gateway-mainnet] | [`0xfbE6D1e711CC2BC241dfa682CBbFF6D68bf62e67`][router-gateway-testnet] | [Gateway][router-gateway-source] | | [Router Asset Forwarder][router-forwarder] | [`0x21c1e74caadf990e237920d5515955a024031109`][router-forwarder-mainnet] | - | [Asset Forwarder][router-forwarder-source] | | [Router Asset Bridge][router-bridge] | [`0x01b4ce0d48ce91eb6bcaf5db33870c65d641b894`][router-bridge-mainnet] | - | [Asset Bridge][router-bridge-source] | [message-executor]: https://im-docs.celer.network/developer/development-guide/message-executor [message-executor-source]: https://github.com/celer-network/im-executor [message-executor-testnet]: https://explorer.oasis.io/testnet/sapphire/address/0x9C850D230FFFaCEf1E2D1741a00080856630e455 [message-bus]: https://im-docs.celer.network/developer/development-guide/message-executor [message-bus-source]: https://github.com/celer-network/sgn-v2-contracts/blob/6af81b55a13a7aacab9a4d92a38d374d46c0fdbf/contracts/message/messagebus/MessageBus.sol [message-bus-mainnet]: https://explorer.oasis.io/mainnet/sapphire/address/0x9Bb46D5100d2Db4608112026951c9C965b233f4D [message-bus-testnet]: https://explorer.oasis.io/testnet/sapphire/address/0x9Bb46D5100d2Db4608112026951c9C965b233f4D [singleton-factory]: https://github.com/safe-global/safe-singleton-factory/ [singleton-factory-mainnet]: https://explorer.oasis.io/mainnet/sapphire/address/0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7 [singleton-factory-testnet]: https://explorer.oasis.io/testnet/sapphire/address/0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7 [band-oracle]: https://docs.bandchain.org/ [band-oracle-source]: https://github.com/bandprotocol/band-std-reference-contracts-solidity [band-oracle-mainnet]: https://explorer.oasis.io/mainnet/sapphire/address/0xDA7a001b254CD22e46d3eAB04d937489c93174C3 [band-oracle-testnet]: https://explorer.oasis.io/testnet/sapphire/address/0x0c2362c9A0586Dd7295549C65a4A5e3aFE10a88A [router-gateway]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/key-concepts/high-level-architecture 
[router-forwarder]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/key-concepts/high-level-architecture
[router-bridge]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/key-concepts/high-level-architecture
[router-gateway-source]: https://github.com/router-protocol/router-contracts/tree/main/gateway/evm
[router-bridge-source]: https://github.com/router-protocol/router-contracts/tree/main/asset-bridge/evm
[router-forwarder-source]: https://github.com/router-protocol/router-contracts/tree/main/asset-forwarder/evm
[router-gateway-mainnet]: https://explorer.oasis.io/mainnet/sapphire/address/0x86DFc31d9cB3280eE1eB1096caa9fC66299Af973
[router-forwarder-mainnet]: https://explorer.oasis.io/mainnet/sapphire/address/0x21c1e74caadf990e237920d5515955a024031109
[router-bridge-mainnet]: https://explorer.oasis.io/mainnet/sapphire/address/0x01b4ce0d48ce91eb6bcaf5db33870c65d641b894
[router-gateway-testnet]: https://explorer.oasis.io/testnet/sapphire/address/0xfbE6D1e711CC2BC241dfa682CBbFF6D68bf62e67

---

## Develop on Sapphire

As Sapphire is EVM-compatible, you can use the same dev tooling as you would when building on Ethereum. Additionally, we build tools to support you in creating secure and confidential dApps. Feel free to check out the [Concepts] page to get a better understanding of the transaction flow and the contract state.

[Concepts]: ./concept.mdx

## Contract Development

Sapphire is programmable using any language that targets the EVM, such as Solidity, Fe or Vyper. If you prefer to use an Ethereum framework like Hardhat or Foundry, you can also use those with Sapphire; all you need to do is set your Web3 gateway URL (a minimal Hardhat sketch is included at the end of this page). You can find the details of the Oasis Sapphire Web3 endpoints on the [Network information] page.

[Network information]: ../network.mdx#rpc-endpoints

### Features

[Randomness, Subcalls and More Precompiles][sapphire-contracts] in the contracts API reference

[sapphire-contracts]: https://api.docs.oasis.io/sol/sapphire-contracts

## Frontend Development

To connect your frontend to your smart contracts, see the [Browser] chapter.

[Browser]: ./browser.md

## Backend Development

If you want to connect and execute transactions from your backend, Sapphire has three clients in different programming languages:

| Language       | Package                                            | API Reference | GitHub              |
| -------------- | -------------------------------------------------- | ------------- | ------------------- |
| **JavaScript** | [@oasisprotocol/sapphire-paratime][sapphire-npmjs] | [API][js-api] | [GitHub][js-github] |
| **Go**         | [@oasisprotocol/sapphire-paratime][go-pkg]         | [API][go-api] | [GitHub][go-github] |
| **Python**     |                                                    | [API][py-api] | [GitHub][py-github] |

[sapphire-npmjs]: https://www.npmjs.com/package/@oasisprotocol/sapphire-paratime
[go-pkg]: https://pkg.go.dev/github.com/oasisprotocol/sapphire-paratime/clients/go
[js-api]: https://api.docs.oasis.io/js/sapphire-paratime
[go-api]: https://pkg.go.dev/github.com/oasisprotocol/sapphire-paratime/clients/go
[py-api]: https://api.docs.oasis.io/py/sapphirepy/
[js-github]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/clients/js/README.md
[go-github]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/clients/go/README.md
[py-github]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/clients/py/README.md

## Testing

[Test][testing] confidential contracts with Hardhat or Ethers.

[testing]: ./testing.md

## Examples

See our [Examples] page for demo dApps that bring all the above together.
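As a concrete illustration of the Hardhat point above (only the Web3 gateway URL and chain ID are Sapphire-specific), a minimal `hardhat.config.ts` network entry for Sapphire Testnet might look like the sketch below; the `PRIVATE_KEY` environment variable and the `sapphire-testnet` network name are assumptions of this example:

```typescript
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

const config: HardhatUserConfig = {
  solidity: "0.8.24",
  networks: {
    // Sapphire Testnet: just point Hardhat at the Sapphire Web3 gateway.
    "sapphire-testnet": {
      url: "https://testnet.sapphire.oasis.io",
      chainId: 0x5aff, // 23295
      accounts: process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : [],
    },
  },
};

export default config;
```

You can then pass `--network sapphire-testnet` to your usual Hardhat tasks.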
[examples]: ../examples.mdx

## Tools and Services

Should you have any questions or ideas to share, feel free to reach out to us on [Discord and other social media channels][social-media].

[social-media]: https://github.com/oasisprotocol/docs/blob/main/docs/get-involved/README.md#social-media-channels

---

## View-Call Authentication

User impersonation on Ethereum and other "transparent EVMs" isn't a problem because **everybody** can see **all** data. However, the Sapphire confidential EVM prevents contracts from revealing confidential information to the wrong party (account or contract)—for this reason we cannot allow arbitrary impersonation of any `msg.sender`.

In Sapphire, you need to consider the following types of contract calls:

1. **Contract to contract calls** (also known as *internal calls*): `msg.sender` is set to the address of the calling contract. If a contract calls another contract in a way which could reveal sensitive information, the calling contract must implement access control or authentication.
2. **Unauthenticated view calls** (queries using `eth_call`): `eth_call` queries used to invoke contract functions will always have the `msg.sender` parameter set to `address(0x0)` on Sapphire. This is regardless of any `from` overrides passed on the client side for simulating the query. Calldata end-to-end encryption has nothing to do with authentication. Although the calls may be unauthenticated, they can still be encrypted, and the other way around!
3. **Authenticated view calls** (via SIWE token): The developer authenticates the view call explicitly by deriving a message sender from the SIWE token. This token is provided as a separate parameter to the contract function. The derived address can then be used for authentication in place of `msg.sender`. Otherwise, such a view call behaves the same way as the unauthenticated view calls above and the built-in `msg.sender` is `address(0x0)`. This approach is most appropriate for frontend dApps.
4. **Authenticated view calls** (via signed queries): [EIP-712] defines a format for signing view calls with the keypair of your Ethereum account. Sapphire will validate such signatures and automatically set the `msg.sender` parameter in your contract to the address of the signing account. This method is most appropriate for backend services because frontend applications would require user interaction each time.
5. **Transactions** (authenticated by signature): When a transaction is submitted, it is signed by a keypair (thus costs gas and can make state updates) and the `msg.sender` will be set to the address of the signing account.

[EIP-712]: https://eips.ethereum.org/EIPS/eip-712

## How Sapphire Executes Contract Calls

Let's see how Sapphire executes contract calls for each call variant presented above. Consider the following Solidity code:

```solidity
contract Example {
    address _owner;

    constructor() {
        _owner = msg.sender;
    }

    function isOwner() public view returns (bool) {
        return msg.sender == _owner;
    }
}
```

In the sample above, assuming we're calling from the same contract or account which created the contract, calling `isOwner` will return:

1. `true`, if called via the contract which created it
2. `false`, for an unauthenticated `eth_call`
3. `false`, since the contract has no SIWE implementation
4. `true`, for a signed view call using the wrapped client ([Go][wrapped-go], [Python][wrapped-py]) with a signer attached
5. `true`, if called via a transaction

Now that we've covered the basics, let's look more closely at the *authenticated view calls*.
These are crucial for building confidential smart contracts on Sapphire. ## Authenticated view calls Consider this slightly extended version of the contract above. Only the owner is allowed to store and retrieve secret message: ```solidity contract MessageBox { address private _owner; string private _message; modifier onlyOwner() { if (msg.sender != _owner) { revert("not allowed"); } _; } constructor() { _owner = msg.sender; } function getSecretMessage() external view onlyOwner returns (string memory) { return _message; } function setSecretMessage(string calldata message) external onlyOwner { _message = message; } } ``` ### via SIWE token SIWE stands for "Sign-In with Ethereum" and is formally defined in [EIP-4361]. The initial use case for SIWE involved using your Ethereum account as a form of authentication for off-chain services (providing an alternative to user names and passwords). The MetaMask wallet quickly adopted the standard and it became a de-facto login mechanism in the Web3 world. An informative pop-up for logging into a SIWE-enabled website looks like this: [Image: MetaMask Log-In confirmation] After a user agrees by signing the SIWE login message above, the signature is verified by the website backend or by a 3rd party [single sign-on] service. This is done only once per session—during login. A successful login generates a token that is used for the remainder of the session. In contrast to transparent EVM chains, **Sapphire simplifies dApp design, improves trust, and increases the usability of SIWE messages through extending message parsing and verification to on-chain computation**. This feature (unique to Sapphire) removes the need to develop and maintain separate dApp backend services just for SIWE authentication. Let's take a look at an example authentication flow: [Image: SIWE authentication flow on Sapphire] Consider the `MessageBox` contract from [above](#authenticated-view-calls), and let's extend it with [SiweAuth]: ```solidity import {SiweAuth} from "@oasisprotocol/sapphire-contracts/contracts/auth/SiweAuth.sol"; contract MessageBox is SiweAuth { address private _owner; string private _message; modifier onlyOwner(bytes memory token) { if (msg.sender != _owner && authMsgSender(token) != _owner) { revert("not allowed"); } _; } constructor(string memory domain) SiweAuth(domain) { _owner = msg.sender; } function getSecretMessage(bytes memory token) external view onlyOwner(token) returns (string memory) { return _message; } function setSecretMessage(string calldata message) external onlyOwner(bytes("")) { _message = message; } } ``` We made the following changes: 1. In the constructor, we need to define the domain name where the dApp frontend will be deployed. This domain is included inside the SIWE log-in message and is verified by the user-facing wallet to make sure they are accessing the contract from a legitimate domain. 2. The `onlyOwner` modifier is extended with an optional `bytes memory token` parameter and is considered in the case of invalid `msg.sender` value. The same modifier is used for authenticating both SIWE queries and the transactions. 3. `getSecretMessage` was extended with the `bytes memory token` session token. On the client side, the code running inside a browser needs to make sure that the session token for making authenticated calls is valid. If not, the browser requests a wallet to sign a log-in message and fetch a fresh session token. 
```typescript
import {SiweMessage} from 'siwe';
import { ethers } from 'hardhat';

let token = '';

async function getSecretMessage(): Promise<string> {
  const messageBox = await ethers.getContractAt('MessageBox', '0x5FbDB2315678afecb367f032d93F642f64180aa3');
  if (token == '') { // Stored in browser session.
    const domain = await messageBox.domain();
    // Ask the user's wallet to sign the SIWE log-in message.
    const signer = await new ethers.BrowserProvider(window.ethereum).getSigner();
    const addr = await signer.getAddress(); // User's selected account address.
    const siweMsg = new SiweMessage({
      domain,
      address: addr,
      uri: `http://${domain}`,
      version: "1",
      chainId: 0x5aff, // Sapphire Testnet
    }).toMessage();
    const sig = ethers.Signature.from(await signer.signMessage(siweMsg));
    token = await messageBox.login(siweMsg, sig);
  }
  return messageBox.getSecretMessage(token);
}
```

#### A few words about the SIWE domain parameter

During contract deployment you have to provide **the domain** where the web content of your dApp will be hosted. MetaMask will check whether the domain shown in the user's browser window matches the one provided in the SIWE message used for logging in and will warn the user if there is a discrepancy. On the other hand, if the SIWE message is forged to match the (exploited) website domain, the on-chain [SiweAuth] validation of the message will fail and prevent the user from obtaining a valid token.

When deploying your contract, the provided domain **should include the host chunk of the [URI], including any subdomains, and optionally the port**. No scheme (e.g. `http://`, `https://`) or path (e.g. `/my-app/login`) should be included. If you wish to enforce authentication only on a specific port, provide it alongside the domain, e.g. `mydomain.com:12345`. Otherwise, MetaMask will consider any port valid.

The visibility of `_domain` in [SiweAuth] is **`internal`**. By default, a public getter is implemented, so a web app can automatically obtain the domain name when generating the SIWE message. No setters are provided, to keep the domain immutable. If needed, feel free to implement a setter in your contract with appropriate authentication mechanisms (e.g. an `onlyOwner` modifier). For traceability, we also suggest **emitting an event** when the domain is changed, since transactions may be encrypted.

Example: Starter project

To see a running example of the TypeScript SIWE code including the Hardhat tests, Node.js and the browser, check out the official Oasis [demo-starter] project. The SIWE authentication is implemented on the backend as a [Hardhat task], in [unit tests], and on the frontend within the [Web3 Auth provider] code.

Sapphire TypeScript wrapper?

While the [Sapphire TypeScript wrapper][sp-npm] offers convenient end-to-end encryption for contract calls and transactions, using the TypeScript wrapper is not strictly required for SIWE security, as long as you trust your Web3 endpoint: the token generation occurs inside Sapphire's TEE and the communication with your Web3 endpoint is secured via HTTPS.
[demo-starter]: https://github.com/oasisprotocol/demo-starter [Hardhat task]: https://github.com/oasisprotocol/demo-starter/blob/master/backend/hardhat.config.ts [unit tests]: https://github.com/oasisprotocol/demo-starter/blob/master/backend/test/MessageBox.ts [Web3 Auth provider]: https://github.com/oasisprotocol/demo-starter/blob/master/frontend/src/providers/Web3AuthProvider.tsx [tests]: https://github.com/oasisprotocol/demo-starter/blob/master/backend/test/MessageBox.ts [SiweAuth]: https://api.docs.oasis.io/sol/sapphire-contracts/contracts/auth/SiweAuth.sol/contract.SiweAuth.html [EIP-4361]: https://eips.ethereum.org/EIPS/eip-4361 [single sign-on]: https://en.wikipedia.org/wiki/Single_sign-on [sp-npm]: https://www.npmjs.com/package/@oasisprotocol/sapphire-paratime [URI]: https://en.wikipedia.org/wiki/Uniform_Resource_Identifier#Syntax ### via signed queries [EIP-712] proposed a method to show data to the user in a structured fashion so they can verify it before signing. In the browser however, apps requiring signed view calls would trigger user interaction with their wallet each time—sometimes even multiple times per page—which is bad UX that frustrates users. Backend services on the other hand often have direct access to an Ethereum wallet (e.g. a secret key stored in the environment variable) without needing user interaction. This is possible because a backend service connects to a trusted site and executes trusted code, so it's fine to sign the necessary view calls non interactively. The Sapphire wrappers for [Go][sp-go] and [Python][sp-py] will **sign any view call** you make to a contract deployed on Sapphire using the aforementioned [EIP-712]. Suppose we want to store the private key of an account used to sign the view calls inside a `PRIVATE_KEY` environment variable. The following snippets demonstrate how to trigger signed queries **without any changes to the original `MessageBox` contract from [above](#authenticated-view-calls)**. [sp-go]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/clients/go [sp-py]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/clients/py [wrapped-go]: https://pkg.go.dev/github.com/oasisprotocol/sapphire-paratime/clients/go#WrapClient [wrapped-py]: https://api.docs.oasis.io/py/sapphirepy/sapphirepy.html#sapphirepy.sapphire.wrap Wrap the existing Ethereum client by calling the [`WrapClient()`][wrapped-go] helper and provide the signing logic. Then, all subsequent view calls will be signed. 
For example:

```go
import (
	"crypto/ecdsa"
	"fmt"
	"os"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"

	sapphire "github.com/oasisprotocol/sapphire-paratime/clients/go"

	messageBox "demo-starter/contracts/message-box"
)

func GetC10lMessage() (string, error) {
	client, err := ethclient.Dial("https://testnet.sapphire.oasis.io")
	if err != nil {
		return "", err
	}

	sk, err := crypto.HexToECDSA(os.Getenv("PRIVATE_KEY"))
	if err != nil {
		return "", err
	}
	addr := crypto.PubkeyToAddress(*sk.Public().(*ecdsa.PublicKey))

	// Wrap the dialed client and provide the signing logic; consult the package
	// docs in case your client version expects the client to be dereferenced.
	wrappedClient, err := sapphire.WrapClient(client, func(digest [32]byte) ([]byte, error) {
		return crypto.Sign(digest[:], sk)
	})
	if err != nil {
		return "", fmt.Errorf("unable to wrap backend: %v", err)
	}

	mb, err := messageBox.NewMessageBox(common.HexToAddress("0x5FbDB2315678afecb367f032d93F642f64180aa3"), wrappedClient)
	if err != nil {
		return "", fmt.Errorf("unable to get instance of contract: %v", err)
	}

	msg, err := mb.GetSecretMessage(&bind.CallOpts{From: addr}) // Don't forget to pass CallOpts!
	if err != nil {
		return "", fmt.Errorf("failed to retrieve message: %v", err)
	}
	return msg, nil
}
```

Example: Oasis starter in Go

To see a running example of the Go code including the end-to-end encryption and signed queries, check out the official [Oasis starter project for Go].

Wrap the existing Web3 client by calling the [`wrap()`][wrapped-py] helper and provide the signing logic. Then, all subsequent view calls will be signed. For example:

```python
import json
import os
from typing import Optional

from web3 import Web3
from web3.middleware import construct_sign_and_send_raw_middleware
from eth_account.signers.local import LocalAccount
from eth_account import Account
from sapphirepy import sapphire

def get_c10l_message(address: str, network_name: Optional[str] = "sapphire-localnet") -> str:
    w3 = Web3(Web3.HTTPProvider(sapphire.NETWORKS[network_name]))
    account: LocalAccount = Account.from_key(os.environ.get("PRIVATE_KEY"))
    w3.middleware_onion.add(construct_sign_and_send_raw_middleware(account))
    w3 = sapphire.wrap(w3, account)

    with open("MessageBox_compiled.json") as f:
        compiled_contract = json.load(f)
    contract_data = compiled_contract["contracts"]["MessageBox.sol"]["MessageBox"]
    message_box = w3.eth.contract(address=address, abi=contract_data["abi"])
    return message_box.functions.getSecretMessage().call()
```

Example: Oasis starter in Python

To see a running example of the Python code including the end-to-end encryption and signed queries, check out the official [Oasis starter project for Python].

If your smart contract needs to support view calls from both the frontend and the backend, then take the [SIWE approach](#via-siwe-token). The backend implementation then depends on your programming language:

- **Go and Python**: Pass an empty string as the `token` parameter to your smart contract and let the wrapper sign the view call using EIP-712. Since `msg.sender` will be defined, the `onlyOwner` modifier will pass just fine.
- **TypeScript**: Recycle the frontend client-side code [from above](#via-siwe-token) to generate the SIWE message, perform the authentication and pass it in the view call. You can check out the demo-starter's [Hardhat task] and [unit tests] for a working example.
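Continuing the Go example above with the SIWE-extended `MessageBox`, the empty-token call could look like the following sketch (it assumes an abigen-generated binding where the Solidity `bytes token` parameter maps to `[]byte`):

```go
// Signed view call with an empty SIWE token: the Sapphire Go wrapper signs the
// query via EIP-712, so msg.sender equals addr and onlyOwner(token) passes
// without ever evaluating the token.
msg, err := mb.GetSecretMessage(&bind.CallOpts{From: addr}, []byte(""))
if err != nil {
	return "", fmt.Errorf("failed to retrieve message: %v", err)
}
```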
[Oasis starter project for Go]: https://github.com/oasisprotocol/demo-starter-go [Oasis starter project for Python]: https://github.com/oasisprotocol/demo-starter-py --- ## Browser Support This page provides guidance for developers looking to build confidential dApps on Sapphire that work across different web browsers and integrate with wallets, including Metamask. It covers supported libraries, best practices for secure transactions, and quick steps for using the libraries. ## Supported Libraries | Library | Package | API Reference | Source | | --------------------------------------------- | ------------------------------------------------- | ----------------- | ----------------------- | | **[Sapphire TypeScript Wrapper][s-p-github]** | [@oasisprotocol/sapphire-paratime][s-p-npmjs] | [API][s-p-api] | [GitHub][s-p-github] | | **[Ethers v6][ethers]** | [@oasisprotocol/sapphire-ethers-v6][ethers-npmjs] | [API][ethers-api] | [GitHub][ethers-github] | | **[Viem][viem]** | [@oasisprotocol/sapphire-viem-v2][viem-npmjs] | [API][viem-api] | [GitHub][viem-github] | | **[Wagmi][wagmi]** | [@oasisprotocol/sapphire-wagmi-v2][wagmi-npmjs] | [API][wagmi-api] | [GitHub][wagmi-github] | [s-p-npmjs]: https://www.npmjs.com/package/@oasisprotocol/sapphire-paratime [s-p-api]: https://api.docs.oasis.io/js/sapphire-paratime/ [s-p-github]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/clients/js [ethers]: https://docs.ethers.org/v6/ [ethers-npmjs]: https://www.npmjs.com/package/@oasisprotocol/sapphire-ethers-v6 [ethers-api]: https://api.docs.oasis.io/js/sapphire-ethers-v6 [ethers-github]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/integrations/ethers-v6 [viem]: https://viem.sh/ [viem-npmjs]: https://www.npmjs.com/package/@oasisprotocol/sapphire-viem-v2 [viem-api]: https://api.docs.oasis.io/js/sapphire-viem-v2 [viem-github]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/integrations/viem-v2 [wagmi]: https://wagmi.sh/ [wagmi-npmjs]: https://www.npmjs.com/package/@oasisprotocol/sapphire-wagmi-v2 [wagmi-api]: https://api.docs.oasis.io/js/sapphire-wagmi-v2 [wagmi-github]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/integrations/wagmi-v2 ## Choosing the Right Library Many browser-based dApps can use the lightweight [Sapphire TypeScript wrapper][s-p] if they rely entirely on the injected EIP-1193 wallet provider (e.g. window.ethereum) to communicate with and sign transactions on Sapphire. If you already use an EVM-frontend library, use our library-specific packages for [Ethers][ethers-quick], [Viem][viem-quick] or [Wagmi][wagmi-quick]. [s-p]: ./browser.md#lightweight-sapphire-typescript-wrapper [ethers-quick]: ./browser.md#ethers-v6 [viem-quick]: ./browser.md#viem [wagmi-quick]: ./browser.md#wagmi Example: Starter project If your project includes both a smart contract backend and a web frontend, you can explore our **[demo-starter]** repository. It provides a working example using React as well as a [Vue branch]. [demo-starter]: https://github.com/oasisprotocol/demo-starter [Vue branch]: https://github.com/oasisprotocol/demo-starter/tree/vue ## Transaction encryption When using the supported libraries, ensure that all transactions containing sensitive information **are encrypted**. Encryption is essential to safeguard user data and ensure privacy. To verify that a transaction is encrypted, you can check the transaction details on the Oasis Block Explorer for the corresponding network ([Localnet], [Testnet], or [Mainnet]). 
Look for a green lock icon next to the transaction, which indicates that it is securely encrypted.

Check Calldata Encryption Programmatically

You can check programmatically if calldata is encrypted by using [`isCalldataEnveloped()`], which is part of `@oasisprotocol/sapphire-paratime`.

View-Call Authentication

For authenticated view calls, make sure to visit the [View-Call Authentication] chapter to learn about the proper authentication procedures.

[Localnet]: http://localhost:8548
[Testnet]: https://explorer.oasis.io/testnet/sapphire
[Mainnet]: https://explorer.oasis.io/mainnet/sapphire
[`isCalldataEnveloped()`]: https://api.docs.oasis.io/js/sapphire-paratime/functions/isCalldataEnveloped.html
[View-Call Authentication]: ./authentication.md

## Lightweight Sapphire TypeScript Wrapper

This shows a quick way to use the **Sapphire TypeScript Wrapper** to encrypt transactions; for more info see [`@oasisprotocol/sapphire-paratime`][s-p-github].

### Usage

Install the library with your favorite package manager:

```shell npm2yarn
npm install @oasisprotocol/sapphire-paratime
```

After installing the library, find your Ethereum provider and wrap it using `wrapEthereumProvider`.

```js
import { wrapEthereumProvider } from '@oasisprotocol/sapphire-paratime';

const provider = wrapEthereumProvider(window.ethereum);
```

Example: Hardhat boilerplate

Our maintained Hardhat boilerplate uses the Sapphire TypeScript Wrapper to enable confidential transactions in development. Find the code in the [Sapphire ParaTime examples] repository.

[Sapphire ParaTime examples]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/examples/hardhat-boilerplate

## Ethers v6

This shows a quick way to use **Ethers v6** to encrypt transactions; for more info see [@oasisprotocol/sapphire-ethers-v6][ethers-github].

### Usage

Install the library with your favorite package manager:

```shell npm2yarn
npm install 'ethers@6.x' '@oasisprotocol/sapphire-ethers-v6'
```

After installing the library, obtain a signer from your Ethereum provider and wrap it using `wrapEthersSigner`. Note that in Ethers v6 `getSigner()` is asynchronous:

```typescript
import { BrowserProvider } from 'ethers';
import { wrapEthersSigner } from '@oasisprotocol/sapphire-ethers-v6';

const signer = wrapEthersSigner(
  await new BrowserProvider(window.ethereum).getSigner()
);
```

## Viem

This shows a quick way to use **Viem** to encrypt transactions; for more info see [@oasisprotocol/sapphire-viem-v2][viem-github].

### Usage

Install the library with your favorite package manager:

```shell npm2yarn
npm install @oasisprotocol/sapphire-viem-v2 viem@2.x
```

After installing the library, wrap the WalletClient with `wrapWalletClient`.

```typescript
import { createWalletClient } from 'viem';
import { english, generateMnemonic, mnemonicToAccount } from 'viem/accounts';
import { sapphireLocalnet, sapphireHttpTransport, wrapWalletClient } from '@oasisprotocol/sapphire-viem-v2';

const account = mnemonicToAccount(generateMnemonic(english));
const walletClient = await wrapWalletClient(createWalletClient({
  account,
  chain: sapphireLocalnet,
  transport: sapphireHttpTransport()
}));
```

Viem Example

You can find more example code demonstrating how to use the library in our [Hardhat-Viem example][viem-example].

[viem-example]: https://github.com/oasisprotocol/sapphire-paratime/blob/main/examples/hardhat-viem

## Wagmi

This shows a quick way to use **Wagmi** to encrypt transactions; for more info see [@oasisprotocol/sapphire-wagmi-v2][wagmi-github].
### Usage Install the library with your favorite package manager ```shell npm2yarn npm install @oasisprotocol/sapphire-wagmi-v2 wagmi@2.x viem@2.x ``` After installing the library, use the Sapphire specific connector and transports. ```typescript import { createConfig } from "wagmi"; import { sapphire, sapphireTestnet } from "wagmi/chains"; import { injectedWithSapphire, sapphireHttpTransport, } from "@oasisprotocol/sapphire-wagmi-v2"; export const config = createConfig({ multiInjectedProviderDiscovery: false, chains: [sapphire, sapphireTestnet], connectors: [injectedWithSapphire()], transports: { [sapphire.id]: sapphireHttpTransport(), [sapphireTestnet.id]: sapphireHttpTransport() }, }); ``` For a complete example of how to use this library, please refer to our [Wagmi example][wagmi-example]. [wagmi-example]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/examples/wagmi-v2 --- ## Concepts ## Transactions & Calls {/*-- https://github.com/oasisprotocol/docs/blob/455980674563cad92ff1e1b62a7a5f2d4d6809f0/docs/general/images/architecture/client-km-compute.svg -->*/} [Image: Client, Key Manager, Compute Node diagram] The figure above illustrates the flow of a **confidential smart contract transaction** on Sapphire. Transactions and calls must be encrypted and signed for maximum security. The [@oasisprotocol/sapphire-paratime] npm package will make your life easy. It'll handle cryptography and signing for you. You should be aware that taking actions based on the value of private data may **leak the private data through side channels** like time spent, gas use and accessed memory locations. If you need to branch on private data, you should in most cases ensure that both branches exhibit the same time/gas and storage patterns. You can also make **confidential smart contract calls** on Sapphire. If you use `msg.sender` for access control in your contract, the call **must be signed**, otherwise `msg.sender` will be zeroed. On the other hand, set the `from` address to all zeros, if you want to avoid annoying signature popups in the user's wallet for calls that do not need to be signed. The JS library will do this for you. [@oasisprotocol/sapphire-paratime]: https://www.npmjs.com/package/@oasisprotocol/sapphire-paratime Inside the smart contract code, there is no way of knowing whether the client's call data were originally encrypted or not. Detailed confidential smart contract transaction flow on Sapphire [Image: Diagram of the detailed confidential smart contract transaction flow on Sapphire] Detailed confidential smart contract call flow on Sapphire [Image: Diagram of the detailed confidential smart contract call flow on Sapphire] ## Contract State The Sapphire state model is like Ethereum's except for all state being encrypted and not accessible to anyone except the contract. The contract, executing in an active (attested) Oasis compute node is the only entity that can request its state encryption key from the Oasis key manager. Both the keys and values of the items stored in state are encrypted, but the **size of either is not hidden**. Your app may need to pad state items to a constant length, or use other obfuscation. Observers may also be able to infer computation based on storage access patterns, so you may need to obfuscate that, too. See [Security chapter] for more recommendations. [Security chapter]: ./security.md#storage-access-patterns Contract state leaks a fine-grained access pattern Contract state is backed by an encrypted key-value store. 
However, the trace of encrypted records is leaked to the compute node. As a concrete example, an ERC-20 token transfer would leak which encrypted record is for the sender's account balance and which is for the receiver's account balance. Such a token would be traceable from sender address to receiver address. Obfuscating the storage access patterns may be done by using an ORAM implementation. Contract state may be made available to third parties through logs/events, or explicit getters. ## Contract Logs Contract logs/events (e.g., those emitted by the Solidity `emit` keyword) are exactly like Ethereum. Data contained in events is *not* encrypted. Precompiled contracts are available to help you encrypt data that you can then pack into an event, however. Unmodified contracts may leak state through logs Base contracts like those provided by OpenZeppelin often emit logs containing private information. If you don't know they're doing that, you might undermine the confidentiality of your state. As a concrete example, the ERC-20 spec requires implementers to emit an `event Transfer(from, to, amount)`, which is obviously problematic if you're writing a confidential token. What you can do instead is fork that contract and remove the offending emissions. ## See also --- ## End-to-End Testing Many modern web applications utilize [Playwright] tests during the development and release process to increase shipping speed and improve quality. While the Web3 dApps ecosystem is still evolving, tools exist to do the same. We recommend using [dAppwright] for dApps on the Sapphire Network. In this tutorial, we will examine the e2e testing involved in the [demo-starter] project. [Playwright]: https://playwright.dev/docs/intro [dAppwright]: https://github.com/TenKeyLabs/dappwright [demo-starter]: https://github.com/oasisprotocol/demo-starter ## dAppwright The [dAppwright package] builds on Playwright and includes tooling to support testing with a MetaMask or Coinbase wallet as an extension on a Chromium browser. [dAppwright package]: https://www.npmjs.com/package/@tenkeylabs/dappwright ## Installation We need to install both `dAppwright` and `Playwright`. Navigate to your frontend application directory: 1. Install dAppwright: ```shell npm2yarn npm install -D @tenkeylabs/dappwright ``` 2. Install Playwright (we recommend the TypeScript option): ```shell npm2yarn npm init playwright@latest ``` 3. A successful installation should allow the running of the example tests: ```shell npx playwright test ``` ## Setup We suggest starting a local dev server with each test run to consistently iterate over the same state. 
```typescript title="playwright.config.ts" import { defineConfig } from '@playwright/test'; export default defineConfig({ // highlight-start /* Run your local dev server before starting the tests */ webServer: { command: 'pnpm dev', url: process.env.FRONTEND_URL || 'http://localhost:8080/', reuseExistingServer: !process.env.CI, stdout: 'pipe', stderr: 'pipe', }, // highlight-end }); ``` ## Adding Test Context We begin with a test file extending the testing context to include dAppwright: ```typescript title="tests/e2e.spec.ts" import { BrowserContext, expect, test as baseTest } from '@playwright/test' import dappwright, { Dappwright, MetaMaskWallet } from '@tenkeylabs/dappwright' export const test = baseTest.extend<{ context: BrowserContext wallet: Dappwright }>({ context: async ({}, use) => { // Launch context with extension const [wallet, _, context] = await dappwright.bootstrap('', { wallet: 'metamask', version: MetaMaskWallet.recommendedVersion, seed: 'test test test test test test test test test test test junk', // Hardhat's default https://hardhat.org/hardhat-network/docs/reference#accounts headless: false, }) // Add Sapphire Localnet as a custom network await wallet.addNetwork({ networkName: 'Sapphire Localnet', rpc: 'http://localhost:8545', chainId: 23293, symbol: 'ROSE', }) await use(context) }, wallet: async ({ context }, use) => { const metamask = await dappwright.getWallet('metamask', context) await use(metamask) }, }) ... ``` The above snippet includes the Sapphire [Localnet] as a network with the correct RPC for testing, and sets up the default MetaMask wallet to use the same [seed] as you would in a Hardhat test. [seed]: https://hardhat.org/hardhat-network/docs/reference#accounts ## Writing a Test Writing a test with dAppwright is very similar to how you would write a Playwright one. The first step is to navigate to our application: ```typescript title="tests/e2e.spec.ts" test.beforeEach(async ({ page }) => { await page.goto('http://localhost:5173') }) ``` Next, we can load the application and confirm using the Sapphire network in Metamask. Note that **we will need to use `wallet.approve` to access the MetaMask extension which waits for the MetaMask dom to reload.** Depending on your use case, you may force your extension page to reload with `wallet.page.reload()`. ```typescript title="tests/e2e.spec.ts" test('set and view message', async ({ wallet, page }) => { // Load page await page.getByTestId('rk-connect-button').click() await page.getByTestId('rk-wallet-option-injected-sapphire').click() await wallet.approve() }) ``` Otherwise, we write selectors and assertions in the same way. ```typescript title="tests/e2e.spec.ts" // Set a message await page.locator(':text-matches("0x.{40}")').fill('hola amigos') const submitBtn = page.getByRole('button', { name: 'Set Message' }) await submitBtn.click() await wallet.confirmTransaction() // Reveal the message await expect(submitBtn).toBeEnabled() await page.locator('[data-label="Tap to reveal"]').click() await wallet.confirmTransaction() // Assert message has been set await expect(page.locator('[data-label="Tap to reveal"]').locator('input')).toHaveValue('hola amigos') ``` You can make assertions in the same way on the wallet page, but [wallet actions] will significantly simplify the amount of boilerplate testing code. 
```typescript await expect(wallet.page.getByText("My Account Name")).toBeVisible(); ``` [wallet actions]: https://github.com/TenKeyLabs/dappwright/blob/d791017c51edc4e61e786504c154f9ad3db43ab6/src/wallets/wallet.ts#L27-L45 ## Debugging Playwright's UI mode is very beneficial to debugging your tests as you develop. The pick locator button will help you refine element selectors while giving visual feedback. ```sh npx playwright test --ui ``` Alternatively, you can leverage the debug mode which allows you to set breakpoints, pause testing, and examine network requests. ```sh npx playwright test --debug ``` ## CI Running your dAppwright tests on CI environments like GitHub is possible with the right configurations. You will need to install `playwright` itself as a dependency before you can install Playwright's dependency packages, and run a [headed] execution in Linux agents with `Xvfb`. We recommend uploading test results on failure to more quickly move through CI cycles. You will need a Sapphire [Localnet] service to provide an RPC endpoint during testing. ```yaml playwright-test: runs-on: ubuntu-latest // highlight-start services: sapphire-localnet-ci: image: ghcr.io/oasisprotocol/sapphire-localnet:latest ports: - 8545:8545 - 8546:8546 env: OASIS_DOCKER_START_EXPLORER: no options: >- --rm --health-cmd="test -f /CONTAINER_READY" --health-start-period=90s // highlight-end ``` We recommend saving deployed smart contract addresses as environment variables and [passing] `$GITHUB_OUTPUT` to a subsequent testing step. ```yaml - name: Deploy backend working-directory: backend id: deploy run: | echo "message_box_address=$(pnpm hardhat deploy localhost --network sapphire-localnet | grep -o '0x.*')" >> $GITHUB_OUTPUT ``` Finally, run the test and upload results on failure: ```yaml - name: Build working-directory: frontend run: pnpm build - name: Install Playwright dependencies run: pnpm test:setup working-directory: frontend - name: Run playwright tests (with xvfb-run to support headed extension test) working-directory: frontend run: xvfb-run pnpm test env: VITE_MESSAGE_BOX_ADDR: ${{ steps.deploy.outputs.message_box_address }} - name: Upload playwright test-results if: ${{ failure() }} uses: actions/upload-artifact@v4 with: name: playwright-test-results path: frontend/test-results retention-days: 5 ``` [headed]: https://playwright.dev/docs/ci#running-headed [Localnet]: https://github.com/oasisprotocol/docs/blob/main/docs/build/tools/localnet.mdx [passing]: https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/passing-information-between-jobs Example: demo-starter If you are interested in seeing how dAppwright is integrated into an example application, check out the [demo-starter]. Example: wagmi If you are interested in seeing how dAppwright is integrated into a Sapphire dApp with Wagmi, check out the [Wagmi example]. [Wagmi example]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/examples/wagmi-v2 --- ## Deployment Patterns ## Implementing Proxy contracts on Oasis Sapphire As a confidential Ethereum Virtual Machine (EVM), Oasis prevents external access to contract storage or runtime states in order to keep your secrets private. This unique feature affects how developers interact with and manage smart contracts, particularly when using common Ethereum development tools. ### What are Upgradable Contracts? Upgradable contracts are smart contracts designed to allow developers to update functionality even after being deployed to a blockchain. 
This is particularly useful for fixing bugs or adding new features without losing the existing state or having to deploy a new contract. Upgradability is achieved through proxy patterns, where a proxy contract directs calls to an underlying logic contract which developers can swap out without affecting the state stored in the proxy. #### [EIP-1822]: Universal Upgradeable Proxy Standard (UUPS) EIP-1822 introduces a method for creating upgradable contracts using a proxy pattern and specifies a mechanism where the proxy contract itself contains the upgrade logic. This design reduces the complexity and potential for errors compared to other proxy patterns because it consolidates upgrade functionality within the proxy and eliminates the need for additional external management. #### [EIP-1967]: Standard Proxy Storage Slots EIP-1967 defines standard storage slots to be used by all proxy contracts for consistent and predictable storage access. This standard helps prevent storage collisions and enhances security by outlining specific locations in a proxy contract for storing the address of the logic contract and other administrative information. Using these predetermined slots makes managing and auditing proxy contracts easier. [EIP-1822]: https://eips.ethereum.org/EIPS/eip-1822 [EIP-1967]: https://eips.ethereum.org/EIPS/eip-1967 ### The Impact of Confidential EVM on Tooling Compatibility While the underlying proxy implementations in EIP-1822 work perfectly in facilitating smart contract upgrades, the tools typically used to manage these proxies may still face limitations on Oasis Sapphire. As of now, only the following well-known EIP-1967 slots are readable via `eth_getStorageAt`, enabling compatibility with most proxy tooling: - Proxy implementation address - Beacon proxy implementation - Admin slot Access to all other storage remains restricted in the confidential environment. Additionally, Sapphire natively protects against replay and currently does not allow an empty chain ID à la pre [EIP-155] transactions. [eth_getStorageAt]: https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_getstorageat [openzeppelin-upgrades]: https://github.com/OpenZeppelin/openzeppelin-upgrades [EIP-155]: https://eips.ethereum.org/EIPS/eip-155 ### Solutions for Using UUPS Proxies on Oasis Sapphire Developers looking to use UUPS proxies on Oasis Sapphire have two primary options: #### 1. Directly Implement EIP-1822 Avoid using [openzeppelin-upgrades] and manually handle the proxy setup and upgrades with your own scripts, such as by calling the `updateCodeAddress` method directly. #### 2. Modify Deployment Scripts Change deployment scripts to avoid `eth_getStorageAt`. Alternative methods like calling `owner()` which do not require direct storage access. [hardhat-deploy] as of `0.12.4` supports this approach with a default proxy that includes an `owner()` function when deploying with a configuration that specifies `proxy: true`. ```typescript module.exports = async ({getNamedAccounts, deployments, getChainId}) => { const {deploy} = deployments; const {deployer} = await getNamedAccounts(); await deploy('Greeter', { from: deployer, proxy: true, }); }; ``` ### Solution for Using Deterministic Proxies on Oasis Sapphire We suggest that developers interested in deterministic proxies on Oasis Sapphire use a contract that supports replay protection. `hardhat-deploy` supports using the [Safe Singleton factory][safe-singleton-factory] deployed on the Sapphire [Mainnet] and [Testnet] when `deterministicDeployment` is `true`. 
```typescript
module.exports = async ({getNamedAccounts, deployments, getChainId}) => {
  const {deploy} = deployments;
  const {deployer} = await getNamedAccounts();
  await deploy('Greeter', {
    from: deployer,
    deterministicDeployment: true,
  });
};
```

Next, in your `hardhat.config.ts` file, specify the address of the Safe Singleton factory:

```typescript
deterministicDeployment: {
  "23294": { // Sapphire Mainnet
    factory: '0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7',
    deployer: '0xE1CB04A0fA36DdD16a06ea828007E35e1a3cBC37',
    funding: '2000000',
    signedTx: '',
  },
  "23295": { // Sapphire Testnet
    factory: '0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7',
    deployer: '0xE1CB04A0fA36DdD16a06ea828007E35e1a3cBC37',
    funding: '2000000',
    signedTx: '',
  }
},
```

[hardhat-deploy]: https://github.com/wighawag/hardhat-deploy
[Mainnet]: https://explorer.oasis.io/mainnet/sapphire/address/0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7
[Testnet]: https://explorer.oasis.io/testnet/sapphire/address/0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7
[safe-singleton-factory]: https://github.com/safe-global/safe-singleton-factory

## Clones

Sapphire supports fixed address non-upgradable [clones][clones] to help developers replicate contract functionality and reduce contract deployment costs.

[clones]: https://docs.openzeppelin.com/contracts/5.x/api/proxy#Clones

#### [EIP-1167]: Minimal Proxy

EIP-1167 introduces a way to minimize bytecode and associated contract deployment costs while copying contract functionality. "Clone" contracts delegate calls to a target or fixed address which serves as a reference for the behavior of the "clone." Third-party tools and users can correctly predict the outcome of contract calls with minimal side effects.

[EIP-1167]: https://eips.ethereum.org/EIPS/eip-1167

## Caution Against Using `eth_getStorageAt`

Direct storage access, such as with `eth_getStorageAt`, is generally discouraged. It reduces contract flexibility and deviates from common practice, which advocates for a standardized Solidity-compatible API to both facilitate interactions between contracts and allow popular libraries such as [ABIType] and [TypeChain] to automatically generate client bindings. Direct storage access makes contracts less adaptable and complicates on-chain automation; it can even complicate the use of multisig wallets. For contracts aiming to maintain a standard interface and ensure future upgradeability, we advise sticking to ERC-defined Solidity-compatible APIs and avoiding directly interacting with contract storage.

[ABIType]: https://abitype.dev/
[TypeChain]: https://www.npmjs.com/package/typechain

### [EIP-7201]: Namespaced Storage for Delegatecall Contracts

ERC-7201 proposes a structured approach to storage in smart contracts that utilize `delegatecall`, which is often employed in proxy contracts for upgradability. This standard recommends namespacing storage to mitigate the risk of storage collisions — a common issue when multiple contracts share the same storage space in a `delegatecall` context.

[EIP-7201]: https://eips.ethereum.org/EIPS/eip-7201

### Benefits of Namespacing over Direct Storage Access

Contracts using `delegatecall`, such as upgradable proxies, can benefit from namespacing their storage through more efficient data organization, which enhances security. This approach isolates different variables and sections of a contract's storage under distinct namespaces, ensuring that each segment is distinct and does not interfere with others. Namespacing is generally more robust and preferable to using `eth_getStorageAt`.
See example ERC-7201 implementation and usage: https://gist.github.com/CedarMist/4cfb8f967714aa6862dd062742acbc7b ```solidity // SPDX-License-Identifier: Apache-2.0 pragma solidity ^0.8.0; contract Example7201 { /// @custom:storage-location erc7201:Example7201.state struct State { uint256 counter; } function _stateStorageSlot() private pure returns (bytes32) { return keccak256(abi.encode(uint256(keccak256("Example7201.state")) - 1)) & ~bytes32(uint256(0xff)); } function _getState() private pure returns (State storage state) { bytes32 slot = _stateStorageSlot(); assembly { state.slot := slot } } function increment() public { State storage state = _getState(); state.counter += 1; } function get() public view returns (uint256) { State storage state = _getState(); return state.counter; } } contract ExampleCaller { Example7201 private example; constructor () { example = new Example7201(); } function get() external returns (uint256 counter) { (bool success, bytes memory result ) = address(example).delegatecall(abi.encodeCall(example.get, ())); require(success); counter = abi.decode(result, (uint256)); } function increment() external { (bool success, ) = address(example).delegatecall(abi.encodeCall(example.increment, ())); require(success); } } ``` --- ## Gasless Transactions When you submit a transaction to a blockchain, you need to pay certain fee (called *gas* in Ethereum jargon). Since only the transactions with the highest fee will be included in the block, this mechanism effectively prevents denial of service attacks on the network. On the other hand, paying for gas requires from the user that they have certain amount of blockchain-native tokens available in their wallet which may not be feasible. In this chapter we will learn how the user signs and sends their transaction to a *relayer*. The relayer then wraps the original signed transaction into a new *meta-transaction* (see [ERC-2771] for details), signs it and pays for the necessary transaction fees. When the transaction is submitted the on-chain recipient contract decodes the meta-transaction, verifies both signatures and executes the original transaction. Oasis Sapphire supports two transaction relaying methods: The **on-chain signer** is trustless and utilizes the Oasis-specific contract state encryption while the **gas station network** service is known from other blockchains as well. The gas station network implementation on Sapphire is still in early beta. Some features such as the browser support are not fully implemented yet. [ERC-2771]: https://eips.ethereum.org/EIPS/eip-2771 ## On-Chain Signer The on-chain signer is a smart contract which: 1. receives the user's transaction, 2. checks whether the transaction is valid, 3. wraps it into a meta-transaction (which includes paying for the transaction fee) and 4. returns it back to the user in the [EIP-155] format. The steps above are executed as a confidential read-only call. Finally, the user then submits the obtained transaction to the network. 
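To make this flow concrete, below is a minimal client-side sketch. It assumes an ethers v6 environment and an already-deployed relayer contract exposing the `makeProxyTx()` view function shown in the sections below; the addresses are placeholders.

```typescript
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("https://testnet.sapphire.oasis.io");

// Placeholder address and minimal ABI fragment for the relayer contract.
const RELAYER_ADDRESS = "0x0000000000000000000000000000000000000000";
const gasless = new ethers.Contract(
  RELAYER_ADDRESS,
  ["function makeProxyTx(address innercallAddr, bytes innercall) view returns (bytes)"],
  provider,
);

async function relayCall(target: string, innerCall: string): Promise<string> {
  // Steps 1-4: a confidential read-only call asks the on-chain signer to
  // validate the inner call and wrap it into a signed EIP-155 meta-transaction.
  const rawSignedTx: string = await gasless.makeProxyTx(target, innerCall);

  // Final step: the user broadcasts the returned raw transaction themselves;
  // the key stored inside the relayer contract pays the fee.
  const response = await provider.broadcastTransaction(rawSignedTx);
  await response.wait();
  return response.hash;
}
```

A complete, runnable variant of this flow using Hardhat is shown in the Simple Gasless Commenting dApp section below.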
[Image: Diagram of the On-Chain Signing] ### EIP155Signer To sign a transaction, the Sapphire's `EIP155Signer` library bundled along the `@oasisprotocol/sapphire-contract` package comes with the following helper which returns a raw, RLP-encoded, signed transaction ready to be broadcast: ```solidity function sign(address publicAddress, bytes32 secretKey, EthTx memory transaction) internal view returns (bytes memory); ``` `publicAddress` and `secretKey` are the signer's address and their private key used to sign a meta-transaction (and pay for the fees). We will store these sensitive data inside the encrypted smart contract state together with the signer's `nonce` field in the following struct: ```solidity struct EthereumKeypair { address addr; bytes32 secret; uint64 nonce; } ``` The last `transaction` parameter in the `sign()` function is the transaction encoded in a format based on [EIP-155]. This can either be the original user's transaction or a meta-transaction. ### Gasless Proxy Contract The following snippet is a complete *Gasless* contract for wrapping the user's transactions (`makeProxyTx()`) and executing them (`proxy()`). The signer's private key containing enough balance to cover transaction fees should be provided in the constructor. ```solidity import { encryptCallData } from "@oasisprotocol/sapphire-contracts/contracts/CalldataEncryption.sol"; import { EIP155Signer } from "@oasisprotocol/sapphire-contracts/contracts/EIP155Signer.sol"; struct EthereumKeypair { address addr; bytes32 secret; uint64 nonce; } struct EthTx { uint64 nonce; uint256 gasPrice; uint64 gasLimit; address to; uint256 value; bytes data; uint256 chainId; } // Proxy for gasless transaction. contract Gasless { EthereumKeypair private kp; function setKeypair(EthereumKeypair memory keypair) external payable { kp = keypair; } function makeProxyTx(address innercallAddr, bytes memory innercall) external view returns (bytes memory output) { bytes memory data = abi.encode(innercallAddr, innercall); // Call will invoke proxy(). return EIP155Signer.sign( kp.addr, kp.secret, EIP155Signer.EthTx({ nonce: kp.nonce, gasPrice: 100_000_000_000, gasLimit: 250000, to: address(this), value: 0, data: encryptCallData(abi.encodeCall(this.proxy, data)), chainId: block.chainid }) ); } function proxy(bytes memory data) external payable { (address addr, bytes memory subcallData) = abi.decode( data, (address, bytes) ); (bool success, bytes memory outData) = addr.call{value: msg.value}( subcallData ); if (!success) { // Add inner-transaction meaningful data in case of error. assembly { revert(add(outData, 32), mload(outData)) } } kp.nonce += 1; } } ``` The snippet above only runs on Sapphire Mainnet, Testnet or Localnet. [`EIP155Signer.sign()`] is not supported on other EVM chains. 
[`EIP155Signer.sign()`]: https://api.docs.oasis.io/sol/sapphire-contracts/contracts/EIP155Signer.sol/library.EIP155Signer.html#sign ### Simple Gasless Commenting dApp Let's see how we can implement on-chain signer for a gasless commenting dApp like this: ```solidity contract CommentBox { string[] public comments; function comment(string memory commentText) external { comments.push(commentText); } } ``` Then, the TypeScript code on a client side for submitting a comment in a gasless fashion would look like this: ```typescript const CommentBox = await ethers.getContractFactory("CommentBox"); const commentBox = await CommentBox.deploy(); await commentBox.waitForDeployment(); const Gasless = await ethers.getContractFactory("Gasless"); const gasless = await Gasless.deploy(); await gasless.waitForDeployment(); // Set the keypair used to sign the meta-transaction. await gasless.setKeypair({ addr: "70997970C51812dc3A010C7d01b50e0d17dc79C8", secret: Uint8Array.from(Buffer.from("59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d", 'hex')), nonce: 0, }); const innercall = commentBox.interface.encodeFunctionData('comment', ['Hello, free world!']); const tx = await gasless.makeProxyTx(commentBox.address, innercall); const plainProvider = new ethers.JsonRpcProvider(ethers.provider.connection); const plainResp = await plainProvider.sendTransaction(tx); const receipt = await ethers.provider.getTransactionReceipt(plainResp.hash); if (!receipt || receipt.status != 1) throw new Error('tx failed'); ``` Example: On-Chain Signer You can download a complete on-chain signer example based on the above snippets from the [Sapphire ParaTime examples] repository. [Sapphire ParaTime examples]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/examples/onchain-signer ### Gasless Proxy in Production The snippets above have shown how the on-chain signer can generate and sign a meta-transaction for arbitrary transaction. In production environment however, you must consider the following: #### Confidentiality The [`encryptCallData()`] helper above will generate an ephemeral key and encrypt the transaction's calldata. Omit this call to generate a plain transaction. You can also explicitly encrypt specific function arguments of the inner-transaction by calling [`Sapphire.encrypt()`] using a private key stored somewhere in your smart contract and then [`Sapphire.decrypt()`] when executing the transaction. [`encryptCallData()`]: https://api.docs.oasis.io/sol/sapphire-contracts/contracts/CalldataEncryption.sol/function.encryptCallData.html#encryptcalldatabytes [`Sapphire.encrypt()`]: https://api.docs.oasis.io/sol/sapphire-contracts/contracts/Sapphire.sol/library.Sapphire.html#encrypt-1 [`Sapphire.decrypt()`]: https://api.docs.oasis.io/sol/sapphire-contracts/contracts/Sapphire.sol/library.Sapphire.html#decrypt-1 #### Gas Cost and Gas Limit The gas cost and the gas limit in our snippet were hardcoded inside the contract. Ideally the gas cost should be dynamically adjusted by an oracle and the gas limit determined based on the type of transactions. **Never let gas cost and limit to be freely defined by the user, since they can drain your relayer's account.** #### Allowed Transactions Your relayer will probably be used for transactions of a specific contract only. One approach is to store the allowed address of the target contract and **only allow calls to this contract address**. 
#### Access Control

You can either whitelist specific addresses of the users in the relayer contract or implement the access control in the target contract. In the latter case, the relayer's `makeProxyTx()` should simulate the execution of the inner transaction and generate the meta-transaction only if the inner transaction succeeds.

#### Multiple Signers

Only one transaction per block can be relayed by the same signer, since the order of the transactions is not deterministic and nonces could mismatch. To overcome this, the relayer can randomly pick a signer from a **pool of signers**. When the transaction is relayed, don't forget to reimburse the signer of the transaction!

Example: Voting dApp All the above points are considered in the [Demo Voting dApp][demo-voting]. You can explore the code and also try out a deployed gasless version of the voting dApp on the [Oasis Playground site][demo-voting-playground]. The access control list is configured so that anyone can vote on any poll and only poll creators can close the poll.

[demo-voting]: https://github.com/oasisprotocol/demo-voting
[demo-voting-playground]: https://playground.oasis.io/demo-voting
[EIP-155]: https://github.com/ethereum/EIPs/blob/master/EIPS/eip-155.md

## Gas Station Network

[Gas Station Network](https://docs.opengsn.org) (GSN) was adapted to work with Sapphire in a forked `@oasislabs/opengsn-cli` package. The diagram below illustrates a flow for signing a transaction by using a GSN[^1].

[Image: Diagram of the Gas Station Network Flow]

[^1]: The GSN flow diagram is courtesy of [OpenGSN documentation][opengsn-docs].

[opengsn-docs]: https://github.com/opengsn/docs

### Package Install

Starting with an empty folder, let us install the [Oasis fork of the GSN command line tool](https://github.com/oasislabs/gsn) by using the following commands:

```shell npm2yarn
npm init
npm install -D @oasislabs/opengsn-cli
```

Next, we will export our hex-encoded private key (**without** the leading `0x`) for deploying the gas station network as an environment variable:

```shell
export PRIVATE_KEY=...
```

### Deploy GSN

Deploy GSN relaying contracts along with the test paymaster using a test token. Use the address of your account as the `--burnAddress` and `--devAddress` parameters:

```shell
npx gsn deploy --network sapphire-testnet --burnAddress 0xfA3AC9f65C9D75EE3978ab76c6a1105f03156204 --devAddress 0xfA3AC9f65C9D75EE3978ab76c6a1105f03156204 --testToken true --testPaymaster true --yes --privateKeyHex $PRIVATE_KEY
```

After the command finishes successfully, you should find the addresses of the deployed contracts at the end:

```
Deployed TestRecipient at address 0x594cd6354b23A5200a57355072E2A5B15354ee21

RelayHub: 0xc4423AB6133B06e4e60D594Ac49abE53374124b3
RelayRegistrar: 0x196036FBeC1dA841C60145Ce12b0c66078e141E6
StakeManager: 0x6763c3fede9EBBCFbE4FEe6a4DE6C326ECCdacFc
Penalizer: 0xA58A0D302e470490c064EEd5f752Df4095d3A002
Forwarder: 0x59001d07a1Cd4836D22868fcc0dAf3732E93be81
TestToken (test only): 0x6Ed21672c0c26Daa32943F7b1cA1f1d0ABdbac66
Paymaster (Default): 0x8C06261f58a024C958d42df89be7195c8690008d
```

### Start GSN Relay Server

Now we are ready to start our own relay server by using the following command.
Use the newly deployed: - `RelayHub` address for `--relayHubAddress`, - `TestToken` address for `--managerStakeTokenAddress`, - address of your account for `--owner-address` ```shell npx gsn relayer-run --relayHubAddress 0xc4423AB6133B06e4e60D594Ac49abE53374124b3 --managerStakeTokenAddress 0x6Ed21672c0c26Daa32943F7b1cA1f1d0ABdbac66 --ownerAddress '0xfA3AC9f65C9D75EE3978ab76c6a1105f03156204' --ethereumNodeUrl 'https://testnet.sapphire.oasis.io' --workdir . ``` ### Fund and Register GSN Relay Server The first thing is to fund your relay server so that it has enough native tokens to pay for others' transactions. Let's fund the paymaster with **5 tokens**. Use the `RelayHub` and `Paymaster` addresses for `--hub` and `--paymaster` values: ```shell npx gsn paymaster-fund --network sapphire-testnet --hub 0xc4423AB6133B06e4e60D594Ac49abE53374124b3 --paymaster 0x8C06261f58a024C958d42df89be7195c8690008d --privateKeyHex $PRIVATE_KEY --amount 5000000000000000000 ``` You can check the balance of the paymaster by running: ```shell npx gsn paymaster-balance --network sapphire-testnet --hub 0xc4423AB6133B06e4e60D594Ac49abE53374124b3 --paymaster 0x8C06261f58a024C958d42df89be7195c8690008d ``` Next, we need to register the relay server with the your desired `relayUrl` by staking the `token` the relayHub requires. ```shell npx gsn relayer-register --network sapphire-testnet --relayUrl 'http://localhost:8090' --token 0x6Ed21672c0c26Daa32943F7b1cA1f1d0ABdbac66 --wrap true --privateKeyHex $PRIVATE_KEY ``` After this step, your relay server should be ready to take incoming relay requests and forward them to the relay hub on Sapphire Testnet. ### Send Testing Relayed Requests: We can test whether a relayed request can be forwarded and processed correctly. Scroll up to find the GSN deployment response and use the following parameters: - `Forwarder` as `--to`, - `Paymaster` as `--paymaster`, - your account address as `--from` Parameters matching our deployment would be: ```shell npx gsn send-request --network sapphire-testnet --abiFile 'node_modules/@oasislabs/opengsn-cli/dist/compiled/TestRecipient.json' --method emitMessage --methodParams 'hello world!' --to 0x594cd6354b23A5200a57355072E2A5B15354ee21 --paymaster 0x8C06261f58a024C958d42df89be7195c8690008d --privateKeyHex $PRIVATE_KEY --from 0xfA3AC9f65C9D75EE3978ab76c6a1105f03156204 --gasLimit 150000 --gasPrice 100 ``` More detailed explanations of these GSN commands and parameters can be found on the [upstream OpenGSN website](https://docs.opengsn.org/javascript-client/gsn-helpers.html). ### Writing a GSN-enabled Smart Contract First, install the OpenGSN contracts package: ```shell npm2yarn npm install -D @opengsn/contracts@3.0.0-beta.2 ``` Then follow the remainder of the steps from the [upstream OpenGSN docs](https://docs.opengsn.org/contracts/#receiving-a-relayed-call). --- ## Security This page is an ongoing work in progress to support confidential smart contract development. At the moment we address safeguarding storage variable access patterns and provide best practices for more secure orderings of error checking to prevent leaking contract state. ## Storage Access Patterns You can use a tool such as [hardhat-tracer] to examine the base EVM state transitions under the hood. ```shell npm2yarn npm install -D hardhat-tracer ``` and add `hardhat-tracer` to your `config.ts` file, ```typescript import "hardhat-tracer" ``` in order to test and show call traces. 
```shell npx hardhat test --vvv --opcodes SSTORE,SLOAD ``` You can also trace a particular transaction, once you know its hash. ```shell npx hardhat trace --hash 0xTransactionHash ``` For both [gas] usage and confidentiality purposes, we **recommend using non-unique data size**. E.g. 64-byte value will still be distinct from a 128-byte value. Inference based on access patterns `SSTORE` keys from one transaction may be linked to `SLOAD` keys of another transaction. ## Order of Operations When handling errors, gas usage patterns not only can reveal the code path taken, **but sometimes the balance of a user as well** (in the case of a diligent attacker using binary search). ```solidity function transferFrom(address who, address to, uint amount) external { require( balances[who] >= amount ); require( allowances[who][msg.sender] >= amount ); // ... } ``` Modifying the order of error checking can prevent the accidental disclosure of balance information in the example above. ```solidity function transferFrom(address who, address to, uint amount) external { require( allowances[who][msg.sender] >= amount ); require( balances[who] >= amount ); // ... } ``` ## Speed Bump If we would like to prevent off-chain calls from being chained together, we can ensure that the block has been finalized. ```solidity contract Secret { uint256 private _height; bytes private _secret; address private _buyer; constructor(bytes memory _text) { _secret = _text; } function recordPayment() external payable { require(msg.value == 1 ether); // set and lock buyer _height = block.number; _buyer = msg.sender; } /// @notice Reveals the secret. function revealSecret() view external returns (bytes memory) { require(block.number > _height, "not settled"); require(_buyer != address(0), "no recorded buyer"); // TODO: optionally authenticate call from buyer return _secret; } } ``` ## Gas Padding Gas padding lets you equalize **EVM execution** gas across private code paths to reduce side‑channel leakage. Sapphire provides a precompile ([`Sapphire.padGas`][precompile]) that burns execution gas so that your function’s execution cost is brought up to a target amount. The gas padding call is usually done somewhere at the end of the executed code to cover all possible execution paths. Scope & limits: - Pads **only the EVM engine (execution) gas** spent by your contract’s code path. It **does not** include the intrinsic/transaction‑size component (calldata bytes, signature, envelope, etc.). The transaction size and the fee attributable to it remains public. - Practically: if you pad to `10_000`, the total fee is `tx_size_gas + exec_gas_padded(≈10_000)`. - Padding is intentionally limited to the execution layer. If total gas were fully padded, an attacker could vary transaction size to leak information; therefore only the EVM execution portion is padded. - `padGas` protects the code path **within your contract**. Gas used by external calls can still differ unless those contracts also pad. ### Example attack (leaky code path) ```solidity contract Leaky { bytes32 private secret; bytes32 private tmp; // Returns true on correct guess; success path does extra work // (leaks via fee). function guess(bytes32 candidate) external returns (bool ok) { if (candidate == secret) { for (uint i = 0; i < 10_000; ++i) { tmp = keccak256(abi.encodePacked(tmp, i)); } return true; } return false; } } ``` An observer (or the caller) can compare total fees and infer whether `candidate == secret`. 
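For instance, a caller could probe this difference with nothing more than gas estimation. The following is a hedged sketch, assuming the `Leaky` contract above is deployed in a Hardhat project with ethers v6; the helper and the threshold value are illustrative only.

```typescript
import { ethers } from "hardhat";

// Illustrative probe: a wrong guess and the probed candidate take different
// code paths, so their gas estimates differ by roughly the cost of the
// 10,000-iteration loop in the success branch.
async function looksCorrect(leakyAddress: string, candidate: string): Promise<boolean> {
  const leaky = await ethers.getContractAt("Leaky", leakyAddress);

  const baseline = await leaky.guess.estimateGas(ethers.ZeroHash); // almost certainly wrong
  const probed = await leaky.guess.estimateGas(candidate);

  // A large gap in estimated gas suggests that `candidate` hit the success path.
  return probed > baseline + 100_000n;
}
```

The same inference works from on-chain data: the receipt's `gasUsed` for a successful guess is visibly larger than for a failed one, which is exactly what the gas padding below is meant to hide.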
### Fix with padding ```solidity contract Padded { bytes32 private secret; bytes32 private tmp; function guess(bytes32 candidate) external returns (bool ok) { if (candidate == secret) { for (uint i = 0; i < 10_000; ++i) { tmp = keccak256(abi.encodePacked(tmp, i)); } ok = true; } // Equalize execution cost across branches. Pads execution only // tx size cost stays visible. Sapphire.padGas(100_000); } } ``` Choose a target that is greater or equal to the **worst‑case execution** for the function, with a safety margin. You can measure worst‑case cost in tests (e.g., with tracers) and read the current execution gas via [`Sapphire.gasUsed()`][used]. ### When to use - Branches depend on confidential state/input and have materially different execution cost. - Success vs. revert paths would leak acceptance via fee differences. - Before returning from functions that conditionally perform heavy computation. ### When _not_ to rely on it (alone) - It does **not** hide **transaction size** (calldata length). Different‑length inputs will still lead to different total fees. - It does not pad external contracts you call unless **they** also pad. - It is not a replacement for constant‑time logic where feasible. ### Masking input size (guidance) - Prefer fixed‑size ABI types (`bytes32` instead of `bytes`) and pass **hashes** of variable‑length data rather than the data itself. - If variable‑length bytes/ciphertext must be sent, **pad client‑side to a fixed length or to bucketized sizes** (e.g., 256/512/1024 bytes) before encrypting/sending; strip padding inside the contract. - Bundle multiple fields into a fixed‑size envelope and parse lengths inside the confidential execution. #### Simple example ```solidity contract GasExample { bytes32 tmp; function constantMath(bool doMath, uint128 padTo) external { if (doMath) { bytes32 x; for (uint256 i = 0; i < 100; i++) { x = keccak256(abi.encodePacked(x, tmp)); } tmp = x; } // Pads EVM execution only; tx size cost remains public. Sapphire.padGas(padTo); } } ``` Both calls below will consume the **same execution gas**, while the **transaction‑size gas** may still differ if calldata sizes differ. You can also query the execution gas with [`Sapphire.gasUsed()`][used]. ```typescript await contract.constantMath(true, 100000); await contract.constantMath(false, 100000); ``` [gas]: https://docs.soliditylang.org/en/latest/internals/layout_in_storage.html [hardhat-tracer]: https://www.npmjs.com/package/hardhat-tracer [precompile]: https://api.docs.oasis.io/sol/sapphire-contracts/contracts/Sapphire.sol/library.Sapphire.html#padgas [used]: https://api.docs.oasis.io/sol/sapphire-contracts/contracts/Sapphire.sol/library.Sapphire.html#gasused --- ## Testing While Sapphire is EVM-compatible and you can use most EVM tools to build your dApp, but to test the confidential features you'll need to deploy and run the test on a network which supports it. Recommended networks for testing: 1. Sapphire [Localnet] 2. Sapphire [Testnet] ## Local Development and Testing When you want a quick, iterative cycle for testing, the recommended approach is to run Sapphire on your local machine. Oasis provides a Docker container that simulates a local Sapphire blockchain—similar in spirit to a Hardhat Node or Ganache. This makes it easy to: - Spin up and tear down a local environment on-demand. - Interact with a local instance of the Sapphire ParaTime. - Debug your contracts thoroughly before heading to a live network. 
For details on setting up and running this local environment, check out the [Localnet] documentation from Oasis. It covers installation and configuration, and provides example commands to help you get started.

### Localnet Hardhat Config

To use the Localnet with Hardhat, add the network as follows:

```js title="hardhat.config.ts"
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

// Example accounts script
const TEST_HDWALLET = {
  mnemonic: "test test test test test test test test test test test junk",
  path: "m/44'/60'/0'/0",
  initialIndex: 0,
  count: 20,
  passphrase: "",
};
const accounts = process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : TEST_HDWALLET;

const config: HardhatUserConfig = {
  solidity: "0.8.19",
  // highlight-start
  networks: {
    "sapphire-localnet": {
      url: "http://localhost:8545", // Localnet RPC URL
      chainId: 23293, // Sapphire Localnet chain ID (0x5afd)
      accounts
    },
  },
  // highlight-end
};

export default config;
```

Running your tests locally would then be as simple as:

```sh
npx hardhat test --network sapphire-localnet
```

## Testing Encrypted Transactions

One of Sapphire's unique capabilities is encrypted transactions. To take full advantage of this during testing, you can use one of the following providers:

- Hardhat provider from `@oasisprotocol/sapphire-hardhat`
- Ethers provider from `@oasisprotocol/sapphire-paratime`

These providers automatically encrypt transactions, allowing you to test your contract's confidential workflows in an environment that closely mirrors production on Oasis Sapphire.

### Hardhat Provider

The Hardhat provider is the recommended option when working in a Hardhat setup. To add the provider to your project, run:

```shell npm2yarn
npm install -D @oasisprotocol/sapphire-hardhat
```

Next, import it in your `hardhat.config.ts` above the rest of your plugins so that the provider gets wrapped before anything else starts to use it.

```js title="hardhat.config.ts"
// ESM
import '@oasisprotocol/sapphire-hardhat';

// CommonJS
require('@oasisprotocol/sapphire-hardhat');

/** All other plugins must go below this one! **/
```

After installation, simply write and run your tests and scripts as you normally would: your transactions will be automatically encrypted behind the scenes and you will see a green padlock for these transactions in the explorer.

### Ethers

To add the provider to your project, run:

```shell npm2yarn
npm install -D @oasisprotocol/sapphire-paratime
```

Next, import the `wrap` function and wrap your ethers signer:

```js
import { ethers, Wallet } from "ethers";
import { wrap } from "@oasisprotocol/sapphire-paratime";

const wallet = new Wallet(process.env.PRIVATE_KEY);
const provider = new ethers.JsonRpcProvider('http://127.0.0.1:8545'); // Localnet RPC URL
const wrappedSigner = wrap(wallet.connect(provider));
```

[Localnet]: https://github.com/oasisprotocol/docs/blob/main/docs/build/tools/localnet.mdx
[Testnet]: ../network.mdx

---

## Sapphire vs Ethereum

Sapphire is generally compatible with Ethereum, the EVM, and all the user and developer tooling that you are used to. In addition to confidentiality features, you get a few extra benefits including the ability to **generate private entropy** and **make signatures on-chain**. An example of a dApp that uses both is an HSM contract that generates an Ethereum wallet and signs transactions sent to it via transactions.
There are also a few breaking changes compared to Ethereum though, but we think that you'll quickly grasp them: - [Encrypted Contract State](#encrypted-contract-state) - [End-to-End Encrypted Transactions and Calls](#end-to-end-encrypted-transactions-and-calls) - [`from` Address is Zero for Unsigned Calls](#from-address-is-zero-for-unsigned-calls) - [Override `receive` and `fallback` when Funding the Contract](#override-receive-and-fallback-when-funding-the-contract) - [Instant Finality](#instant-finality) Read below to learn more about them. Otherwise, Sapphire is like Emerald, a fast, cheap Ethereum. ## Encrypted Contract State The contract state is only visible to the contract that wrote it. With respect to the contract API, it's as if all state variables are declared as `private`, but with the further restriction that not even full nodes can read the values. Public or access-controlled values are provided instead through explicit getters. Calling `eth_getStorageAt()` will return zero for all storage slots, **except** for the following well-known [EIP-1967] proxy-related slots, which remain readable to support compatibility with standard tooling: - `0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc` — Proxy implementation address - `0xa3f0ad74e5423aebfd80d3ef4346578335a9a72aeaee59ff6cb3582b35133d50` — Beacon proxy implementation - `0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103` — Admin slot [EIP-1967]: https://eips.ethereum.org/EIPS/eip-1967 Do not use `immutable` nor `constant` for variables you want to keep private, since they are stored in the runtime bytecode, which is unencrypted on Sapphire. ## End-to-End Encrypted Transactions and Calls Transactions and calls are end-to-end encrypted into the contract. Only the caller and the contract can see the data sent to/received from the ParaTime. This ends up defeating some utility of block explorers, however. The status of the transaction is public and so are the error code, the revert message and logs (emitted events). ## `from` Address is Zero for Unsigned Calls The `from` address using of calls is derived from a signature attached to the call. Unsigned calls have their sender set to the zero address. This allows contract authors to write getters that release secrets to authenticated callers (e.g. by checking the `msg.sender` value), but without requiring a transaction to be posted on-chain. ## Override `receive` and `fallback` when Funding the Contract In Ethereum, you can fund a contract by sending Ether along the transaction in two ways: 1. a transaction must call a *payable* function in the contract, or 2. not calling any specific function (i.e. empty *calldata*). In this case, the payable `receive()` and/or `fallback()` functions need to be defined in the contract. If no such functions exist, the transaction will revert. The behavior described above is the same in Sapphire when using EVM transactions to fund a contract. However, the Oasis Network also uses [Oasis-native transactions] such as a deposit to a ParaTime account or a transfer. In this case, **you will be able to fund the contract's account even though the contract may not implement payable `receive()` or `fallback()`!** Or, if these functions do exist, **they will not be triggered**. You can send such Oasis-native transactions by using the [Oasis CLI] for example. 
[Oasis-native transactions]: https://github.com/oasisprotocol/docs/blob/main/docs/general/manage-tokens/README.mdx
[Oasis CLI]: https://github.com/oasisprotocol/cli/blob/master/docs/README.md

## Instant Finality

The Oasis Network is a proof-of-stake network where 2/3+ of the validator nodes need to verify each block in order to consider it final. However, in Ethereum the signatures of those validator nodes can be submitted minutes after the block is proposed, which makes the block proposal mechanism independent of the validation, but adds uncertainty about if and when the proposed block will actually be finalized. In the Oasis Network, 2/3+ of the signatures need to be provided immediately after the block is proposed and **the network will halt until the required number of signatures is provided**. This means that you can rest assured that any validated block is final. As a consequence, the cross-chain bridges are more responsive yet safe on the Oasis Network.

---

## Examples

## Randomness

**[Oasis Swag Wheel][rng-example]** A dApp which uses on-chain RNG to determine which swag a participant wins.

## Confidential Voting

**[VoTEE][votee-example]** Vote for the favorite Oasis mascot, see also [voTEE.oasis.io].

**[Blockvote][voting-example]** General confidential and gasless voting, see also [vote.oasis.io].

## SIWE

**[SIWE authentication][siwe-example]** A dApp which uses Sign-In with Ethereum (SIWE) for authentication.

## Onchain signing

**[Onchain signing][onchain-signer]** An example of on-chain transaction generation and signing.

Find more examples, including the unofficial ones, on [playground.oasis.io].

[rng-example]: https://github.com/oasisprotocol/demo-oasisswag
[voting-example]: https://github.com/oasisprotocol/dapp-blockvote
[vote.oasis.io]: https://vote.oasis.io
[votee-example]: https://github.com/oasisprotocol/dapp-votee
[voTEE.oasis.io]: https://votee.oasis.io
[siwe-example]: https://github.com/oasisprotocol/demo-starter
[onchain-signer]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/examples/onchain-signer
[playground.oasis.io]: https://playground.oasis.io

---

## Network information

## Networks

| | Mainnet | Testnet | Localnet |
|---|---|---|---|
| Network name | `sapphire` | `sapphire-testnet` | `sapphire-localnet` |
| Long network name | `Oasis Sapphire` | `Oasis Sapphire Testnet` | `Oasis Sapphire Localnet` |
| Chain ID | Hex: `0x5afe` Decimal: `23294` | Hex: `0x5aff` Decimal: `23295` | Hex: `0x5afd` Decimal: `23293` |
| Tools | | [Testing token Faucet][faucet] | [Local development Docker image][localnet] |

Never deploy production services on Testnet Because Testnet state can be wiped in the future, you should **never** deploy a production service on Testnet! Just don't do it! Also note that while Testnet does use proper TEEs, due to experimental software and different security parameters, **confidentiality of Sapphire on Testnet is not guaranteed** -- all transactions and state published on the Sapphire Testnet should be considered public.

[faucet]: https://faucet.testnet.oasis.io/
[localnet]: https://github.com/oasisprotocol/docs/blob/main/docs/build/tools/localnet.mdx

## RPC Endpoints

The RPC endpoint is a *point of trust*. Besides traffic rate limiting, it can also perform censorship or even a man-in-the-middle attack.
If you have security considerations, we strongly recommend that you set up your own [ParaTime client node][paratime-client-node] and the [Web3-compatible gateway]. [Web3-compatible gateway]: ../../node/web3.mdx [paratime-client-node]: ../../node/run-your-node/paratime-client-node.mdx You can connect to one of the public Web3 gateways below (in alphabetic order): | Provider | Mainnet RPC URLs | Testnet RPC URLs | Supports Confidential Queries | |------------|-------------------------------------------------------------------------|------------------------------------------------------------------------------------------|-------------------------------| | [1RPC] | | *N/A* | Yes | | [Oasis] | | | Yes | | [thirdweb] | | | Yes | [Oasis]: https://oasis.net [thirdweb]: https://thirdweb.com Public RPCs may have rate limits or traffic restrictions. For professional, dedicated RPC endpoints, consider the following providers (in alphabetic order): | Provider | Instructions | Pricing | Supports Confidential Queries | |--------------|----------------------------------------|-------------------------------|-------------------------------| | [1RPC] | [docs.1rpc.io][1RPC-docs] | [Pricing][1RPC-pricing] | Yes | | [Chainstack] | [docs.chainstack.com][Chainstack-docs] | [Pricing][Chainstack-pricing] | Yes | | [thirdweb] | [portal.thirdweb.com][thirdweb-docs] | [Pricing][thirdweb-pricing] | Yes | [1RPC]: https://www.1rpc.io/ [1RPC-docs]: https://docs.1rpc.io/guide/how-to-use-1rpc [1RPC-pricing]: https://www.1rpc.io/#pricing [Chainstack]: https://chainstack.com/build-better-with-oasis-sapphire/ [Chainstack-docs]: https://docs.chainstack.com/docs/oasis-sapphire-tooling [Chainstack-pricing]: https://chainstack.com/pricing/ [thirdweb-docs]: https://portal.thirdweb.com/ [thirdweb-pricing]: https://thirdweb.com/pricing ## Block Explorers | Name (Provider) | Mainnet URL | Testnet URL | EIP-3091 compatible | |--------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------| | Oasis Explorer ([Oasis]) | `https://explorer.oasis.io/mainnet/sapphire` | `https://explorer.oasis.io/testnet/sapphire` | Yes | | Oasis Scan ([Bit Cat]) | [https://www.oasisscan.com/paratimes/000…279](https://www.oasisscan.com/paratimes/000000000000000000000000000000000000000000000000f80306c9858e7279) | [https://testnet.oasisscan.com/paratimes/000…f6c](https://testnet.oasisscan.com/paratimes/000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c) | No | [Bit Cat]: https://www.bitcat365.com/ ## Indexers | Name (Provider) | Mainnet URL | Testnet URL | Documentation | |------------------------------|--------------------------------------------------------|--------------------------------------------------------|-----------------------------------------------------------------------------------------------------------| | [Covalent] | `https://api.covalenthq.com/v1/oasis-sapphire-mainnet` | `https://api.covalenthq.com/v1/oasis-sapphire-testnet` | [Unified API docs][Covalent-docs] | | [Goldsky Subgraph][Goldsky] | *N/A* | *N/A* | [Documentation site][Goldsky-docs] | | Oasis Nexus ([Oasis]) | `https://nexus.oasis.io/v1/` | `https://testnet.nexus.oasis.io/v1/` | [API][Nexus-docs] | | Oasis Scan ([Bit Cat]) | `https://api.oasisscan.com/v2/mainnet` | 
`https://api.oasisscan.com/v2/testnet` | [Runtime API][OasisScan-docs] | | [SubQuery Network][SubQuery] | *N/A* | *N/A* | [SubQuery Academy][SubQuery-docs], [QuickStart][SubQuery-quickstart], [Starter project][SubQuery-starter] | [Covalent]: https://www.covalenthq.com/ [Covalent-docs]: https://www.covalenthq.com/docs/unified-api/ [Nexus-docs]: https://nexus.oasis.io/v1/spec/v1.html [Goldsky]: https://goldsky.com [Goldsky-docs]: https://docs.goldsky.com/subgraphs/deploying-subgraphs [OasisScan-docs]: https://api.oasisscan.com/v2/swagger/#/runtime [SubQuery]: https://subquery.network [SubQuery-docs]: https://academy.subquery.network/ [SubQuery-quickstart]: https://academy.subquery.network/quickstart/quickstart.html [SubQuery-starter]: https://github.com/subquery/ethereum-subql-starter/tree/main/Oasis/oasis-sapphire-starter If you are running your own Sapphire endpoint, a block explorer, or an indexer and wish to be added to these docs, open an issue at [github.com/oasisprotocol/docs]. [github.com/oasisprotocol/docs]: https://github.com/oasisprotocol/docs --- ## Quickstart(Sapphire) In this tutorial, you will build and deploy a unique dApp that requires confidentiality to work. By the end of the tutorial, you should feel comfortable setting up your EVM development environment to target Sapphire, and know how and when to use confidentiality. The expected completion time of this tutorial is 15 minutes. ## Create a Sapphire-Native dApp Porting an existing EVM app is cool, and will provide benefits such as protection against MEV. However, starting from scratch with confidentiality in mind can unlock some really novel dApps and provide a [higher level of security]. One simple-but-useful dApp that takes advantage of confidentiality is a [dead person's switch] that reveals a secret (let's say the encryption key to a data trove) if the operator fails to re-up before too long. Let's make it happen! [higher level of security]: ./develop/README.mdx [dead person's switch]: https://en.wikipedia.org/wiki/Dead_man%27s_switch ### Init a new Hardhat project We're going to use Hardhat with TypeScript which relies on NodeJS, but Sapphire should be compatible with your dev environment of choice. See examples in [Go][Oasis starter project for Go] and [Python][Oasis starter project for Python] at the end of this chapter. Let us know if things are not as expected! [Oasis starter project for Go]: https://github.com/oasisprotocol/demo-starter-go [Oasis starter project for Python]: https://github.com/oasisprotocol/demo-starter-py 1. Make & enter a new directory: ```sh mkdir quickstart && cd quickstart ``` 2. Create a TypeScript project and install the project dependencies: ```sh npx hardhat init ``` 3. Add [`@oasisprotocol/sapphire-hardhat`] as dependency: ```shell npm2yarn npm install -D @oasisprotocol/sapphire-hardhat ``` ### Add the Sapphire Testnet to Hardhat Open up your `hardhat.config.ts` and import `sapphire-hardhat`. ```typescript import { HardhatUserConfig } from "hardhat/config"; import "@oasisprotocol/sapphire-hardhat"; import "@nomicfoundation/hardhat-toolbox"; import "./tasks"; const accounts = process.env.PRIVATE_KEY ? 
[process.env.PRIVATE_KEY] : { mnemonic: "test test test test test test test test test test test junk", path: "m/44'/60'/0'/0", initialIndex: 0, count: 20, passphrase: "", }; ``` By importing `@oasisprotocol/sapphire-hardhat`, **any network config entry corresponding to the Sapphire's chain ID will automatically be wrapped with Sapphire specifics for encrypting and signing the transactions**. Next, let's add an account with a private key from an environment variable: ```typescript import { HardhatUserConfig } from "hardhat/config"; import "@oasisprotocol/sapphire-hardhat"; import "@nomicfoundation/hardhat-toolbox"; import "./tasks"; const accounts = process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : { mnemonic: "test test test test test test test test test test test junk", path: "m/44'/60'/0'/0", initialIndex: 0, count: 20, passphrase: "", }; ``` Finally, let's add the [Sapphire Testnet] network to the network property of `HardhatUserConfig`: ```typescript const config: HardhatUserConfig = { solidity: "0.8.28", networks: { sapphire: { url: "https://sapphire.oasis.io", chainId: 0x5afe, accounts, }, "sapphire-testnet": { url: "https://testnet.sapphire.oasis.io", accounts, chainId: 0x5aff, }, "sapphire-localnet": { // docker run -it -p8544-8548:8544-8548 ghcr.io/oasisprotocol/sapphire-localnet url: "http://localhost:8545", chainId: 0x5afd, accounts, }, }, }; ``` ### Get some Sapphire Testnet tokens Now for the fun part. As you have configured the Sapphire Test network, get some native TEST tokens. Hit up the one and only [Oasis Testnet faucet], select "Sapphire" and enter your wallet address. Submit the form and TEST be on your way. [Oasis Testnet faucet]: https://faucet.testnet.oasis.io [Sapphire Testnet]: ./network.mdx ### Get the Contract This is a Sapphire tutorial and you're already a Solidity expert, so let's not bore you with explaining the gritty details of the contract. Start by pasting `Vigil.sol` into `contracts/Vigil.sol`. 1. Create a new file called `Vigil.sol` under `contracts/`: 2. Paste the following contract into it: Vigil.sol contract ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.9; contract Vigil { struct SecretMetadata { address creator; string name; /// @notice How long (in seconds) the secret should remain so past the creator's last update. uint256 longevity; } event SecretCreated( address indexed creator, string indexed name, uint256 index ); SecretMetadata[] public _metas; bytes[] private _secrets; /// @dev The unix timestamp at which the address was last seen. mapping(address => uint256) public _lastSeen; function createSecret( string calldata name, uint256 longevity, bytes calldata secret ) external { _updateLastSeen(); _metas.push( SecretMetadata({ creator: msg.sender, name: name, longevity: longevity }) ); _secrets.push(secret); emit SecretCreated(msg.sender, name, _metas.length - 1); } /// Reveal the secret at the specified index. function revealSecret(uint256 index) external view returns (bytes memory) { require(index < _metas.length, "no such secret"); address creator = _metas[index].creator; uint256 expiry = _lastSeen[creator] + _metas[index].longevity; require(block.timestamp >= expiry, "not expired"); return _secrets[index]; } /// Return the time (in seconds since the epoch) at which the owner was last seen, or zero if never seen. 
function getLastSeen(address owner) external view returns (uint256) { return _lastSeen[owner]; } function getMetas(uint256 offset, uint256 count) external view returns (SecretMetadata[] memory) { if (offset >= _metas.length) return new SecretMetadata[](0); uint256 c = offset + count <= _metas.length ? count : _metas.length - offset; SecretMetadata[] memory metas = new SecretMetadata[](c); for (uint256 i = 0; i < c; ++i) { metas[i] = _metas[offset + i]; } return metas; } function refreshSecrets() external { _updateLastSeen(); } function _updateLastSeen() internal { _lastSeen[msg.sender] = block.timestamp; } } ``` #### Vigil.sol, the interesting parts The key state variables are: ```solidity SecretMetadata[] public _metas; bytes[] private _secrets; ``` * `_metas` is marked with `public` visibility, so despite the state itself being encrypted and not readable directly, Solidity will generate a getter that will do the decryption for you. * `_secrets` is `private` and therefore truly secret; only the contract can access the data contained in this mapping. And the methods we'll care most about are * `createSecret`, which adds an entry to both `_metas` and `_secrets`. * `revealSecret`, which acts as an access-controlled getter for the data contained with `_secrets`. Due to trusted execution and confidentiality, the only way that the secret will get revealed is if execution proceeds all the way to the end of the function and does not revert. The rest of the methods are useful if you actually intended to use the contract, but they demonstrate that developing for Sapphire is essentially the same as for Ethereum. You can even write tests against the Hardhat network and use Hardhat plugins. ### Add the Tasks We will use [Hardhat tasks] to automate the deployment and testing of the Vigil contract. 1. Create a new file called `index.ts` under `tasks/`: 2. Paste the following tasks to the `tasks/index.ts`: tasks/index.ts ```typescript import { task } from "hardhat/config"; task("deploy").setAction(async (_args, hre) => { const Vigil = await hre.ethers.getContractFactory("Vigil"); const vigil = await Vigil.deploy(); const vigilAddr = await vigil.waitForDeployment(); console.log(`Vigil address: ${vigilAddr.target}`); return vigilAddr.target; }); task("create-secret") .addParam("address", "contract address") .setAction(async (args, hre) => { const vigil = await hre.ethers.getContractAt("Vigil", args.address); const tx = await vigil.createSecret( "ingredient", 30 /* seconds */, Buffer.from("brussels sprouts"), ); console.log("Storing a secret in", tx.hash); }); task("check-secret") .addParam("address", "contract address") .setAction(async (args, hre) => { const vigil = await hre.ethers.getContractAt("Vigil", args.address); try { console.log("Checking the secret"); await vigil.revealSecret(0); console.log("Uh oh. The secret was available!"); process.exit(1); } catch (e: any) { console.log("failed to fetch secret:", e.message); } console.log("Waiting..."); await new Promise((resolve) => setTimeout(resolve, 30_000)); console.log("Checking the secret again"); const secret = await vigil.revealSecret.staticCallResult(0); // Get the value. console.log( "The secret ingredient is", Buffer.from(secret[0].slice(2), "hex").toString(), ); }); task("full-vigil").setAction(async (_args, hre) => { await hre.run("compile"); const address = await hre.run("deploy"); await hre.run("create-secret", { address }); await hre.run("check-secret", { address }); }); ``` 3. 
Import the tasks inside `hardhat.config.ts`: ```typescript import { HardhatUserConfig } from "hardhat/config"; import "@oasisprotocol/sapphire-hardhat"; import "@nomicfoundation/hardhat-toolbox"; import "./tasks"; const accounts = process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : { mnemonic: "test test test test test test test test test test test junk", path: "m/44'/60'/0'/0", initialIndex: 0, count: 20, passphrase: "", }; ``` [Hardhat tasks]: https://hardhat.org/hardhat-runner/docs/guides/tasks ### Run the Contract And to wrap things up, we'll put `Vigil` through its paces. First, let's see what's actually going on. After deploying the contract, we can create a secret, check that it's not readable, wait a bit, and then check that it has become readable. Pretty cool if you ask me! Anyway, make it happen by running ```shell PRIVATE_KEY="0x..." npx hardhat full-vigil --network sapphire-testnet ``` And if you see something like the following, you'll know you're well on the road to deploying confidential dApps on Sapphire. ``` Vigil deployed to: 0x74dC4879B152FDD1DDe834E9ba187b3e14f462f1 Storing a secret in 0x13125d868f5fb3cbc501466df26055ea063a90014b5ccc8dfd5164dc1dd67543 Checking the secret failed to fetch secret: reverted: not expired Waiting... Checking the secret again The secret ingredient is brussels sprouts ``` ## All done! Congratulations, you made it through the Sapphire tutorial! If you want to dive deeper, please check out the [develop] chapter and join the discussion on the [#dev-central Discord channel][social-media]. Best of luck on your future forays into confidentiality! Example: Hardhat Visit the Sapphire ParaTime repository to download the [Hardhat][hardhat-example] example of this quickstart. Example: Starter project If your project involves building a web frontend, we recommend that you check out the official [Oasis starter] files. [Oasis starter]: https://github.com/oasisprotocol/demo-starter Example: Go and Python Are you building your dApp in languages other than TypeScript? Check out the official [Oasis starter project for Go] and the [Oasis starter project for Python]. ## See also [social-media]: https://github.com/oasisprotocol/docs/blob/main/docs/get-involved/README.md#social-media-channels [develop]: ./develop/README.mdx [hardhat-example]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/examples/hardhat [`@oasisprotocol/sapphire-hardhat`]: https://www.npmjs.com/package/@oasisprotocol/sapphire-hardhat --- ## Oasis CLI # Oasis Command Line Interface Oasis command-line interface (CLI) is a powerful all-in-one tool for interacting with the Oasis Network. Head to the **[installation instructions]** to download and install it! 
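After installation, a quick sanity check is to print the built-in help, which lists the available command groups (a minimal example, assuming the `oasis` binary is on your `PATH`):

```shell
oasis --help
```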
[installation instructions]: setup.mdx It boasts a number of handy features: - Flexible setup: - supports Mainnet, Testnet, Localnet or any other deployment of the Oasis network - consensus layer configuration with arbitrary token - configuration of custom ParaTimes with arbitrary token - connecting to remote (via TCP/IP) or local (Unix socket) Oasis node instance - Powerful wallet features: - standard token operations (transfers, allowances, deposits, withdrawals and balance queries) - file-based wallet with password protection - full Ledger hardware wallet support - address book - generation, signing and submitting transactions in non-interactive (headless) mode - offline transaction generation for air-gapped machines - transaction encryption with X25519-Deoxys-II envelope - support for Ed25519, Ethereum-compatible Secp256k1 and Sr25519 signature schemes - raw, BIP-44, ADR-8 and Ledger's legacy derivation paths - Node operator features: - Oasis node inspection and health-checks - network governance transactions - staking reward schedule transactions - Developer features: - built-in testing accounts compatible with the Oasis test runner, the Oasis CI and the official Sapphire and Emerald Localnet Docker images - Oasis ROFL app compilation, deployment and management - Oasis Wasm smart contract code deployment, instantiation, management and calls - debugging tools for deployed Wasm contracts - inspection of blocks, transactions, results and events --- ## Account # Account-related Tasks The `account` command is the home for most consensus and ParaTime-layer on-chain transactions that are signed with one of your accounts such as: - getting the account balance including delegated assets, - sending tokens, - delegating or undelegating tokens to or from validators (*staking*), - depositing and withdrawing tokens to or from a ParaTime, - managing withdrawal beneficiaries of your accounts, - validator utils such as entity registration, setting the commission schedule, unfreezing your node and similar. ## Network, ParaTime and Account Selectors Before we dig into `account` subcommands, let's look at the three most common selectors. ### Network The `--network ` parameter specifies the [network] which the Oasis CLI should connect to. For example: ```shell oasis account show oasis1qzzd6khm3acqskpxlk9vd5044cmmcce78y5l6000 --network testnet ``` ``` Name: test:cory Native address: oasis1qzzd6khm3acqskpxlk9vd5044cmmcce78y5l6000 === CONSENSUS LAYER (testnet) === Nonce: 0 Total: 1.0 TEST Available: 1.0 TEST ``` ```shell oasis account show oasis1qzzd6khm3acqskpxlk9vd5044cmmcce78y5l6000 --network mainnet ``` ``` Name: test:cory Native address: oasis1qzzd6khm3acqskpxlk9vd5044cmmcce78y5l6000 === CONSENSUS LAYER (mainnet) === Nonce: 0 Total: 0.0 ROSE Available: 0.0 ROSE ``` ### ParaTime The `--paratime ` sets which [ParaTime] Oasis CLI should use. If you do not want to use any ParaTime, for example to perform a consensus layer operation, pass the `--no-paratime` flag explicitly. ```shell oasis account show eric --no-paratime ``` ``` Name: eric Native address: oasis1qzplmfaeywvtc2qnylyhk0uzcxr4y5s3euhaug7q === CONSENSUS LAYER (testnet) === Nonce: 0 Total: 0.0 TEST Available: 0.0 TEST ``` ### Account The `--account ` specifies which account in your wallet the Oasis CLI should use to sign the transaction with. 
```shell oasis account transfer 1.5 0xDce075E1C39b1ae0b75D554558b6451A226ffe00 --account orlando ``` ``` You are about to sign the following transaction: Format: plain Method: accounts.Transfer Body: To: test:dave (oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt) Amount: 1.5 TEST Authorized signer(s): 1. cb+NHKt7JT4fumy0wQdkiBwO3P+DUh8ylozMpsu1xH4= (ed25519) Nonce: 0 Fee: Amount: 0.0002772 TEST Gas limit: 2772 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: orlando (In case you are using a hardware-based signer you may need to confirm on device.) ``` ```shell oasis account transfer 1.5 0xDce075E1C39b1ae0b75D554558b6451A226ffe00 --account eric ``` ``` You are about to sign the following transaction: Format: plain Method: accounts.Transfer Body: To: test:dave (oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt) Amount: 1.5 TEST Authorized signer(s): 1. A1ik9X/7X/eGSoSYOKSIJqM7pZ5It/gHbF+wraxi33u3 (secp256k1eth) Nonce: 0 Fee: Amount: 0.0002779 TEST Gas limit: 2779 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: eric (In case you are using a hardware-based signer you may need to confirm on device.) ``` You can also set **the default [network][network-set-default], [ParaTime][paratime-set-default] or [account][wallet-set-default] to use**, if no network, ParaTime or account selectors are provided. [network]: ./network.md [paratime]: ./paratime.md [network-set-default]: ./network.md#set-default [paratime-set-default]: ./paratime.md#set-default [wallet-set-default]: ./wallet.md#set-default ## Show the Balance of an Account The `account show [address]` command prints the balance, delegated assets and other validator information corresponding to: - a given address, - the name of the [address book entry] or - the name of one of the accounts in your wallet. The address is looked up both on the consensus layer and the ParaTime, if selected. Running the command without arguments will show you the balance of your default account on the default network and ParaTime: ```shell oasis account show ``` ``` Name: oscar Native address: oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e === CONSENSUS LAYER (testnet) === Nonce: 2 Total: 0.0 TEST Available: 0.0 TEST ``` You can also pass the name of the account in your wallet or address book, or one of the [built-in named addresses](#reserved-addresses): ```shell oasis account show orlando ``` ``` Name: orlando Native address: oasis1qq84sc4q0shp5c5klwklqu59evz2mg59hveg7dqx === CONSENSUS LAYER (testnet) === Nonce: 0 Total: 10.0 TEST Available: 10.0 TEST ``` ```shell oasis acc show pool:consensus:fee-accumulator ``` ``` Native address: oasis1qqnv3peudzvekhulf8v3ht29z4cthkhy7gkxmph5 === CONSENSUS LAYER (testnet) === Nonce: 0 Total: 0.0 TEST Available: 0.0 TEST ``` Or, you can check the balance of an arbitrary account address by passing the native or Ethereum-compatible addresses. 
```shell oasis account show oasis1qzzd6khm3acqskpxlk9vd5044cmmcce78y5l6000 ``` ``` Name: test:cory Native address: oasis1qzzd6khm3acqskpxlk9vd5044cmmcce78y5l6000 === CONSENSUS LAYER (testnet) === Nonce: 0 Total: 1.0 TEST Available: 1.0 TEST ``` ```shell oasis account show 0xA3243B310CfA8D4b008780BC87E0bb9f6d4FDA06 ``` ``` Name: eric Ethereum address: 0xA3243B310CfA8D4b008780BC87E0bb9f6d4FDA06 Native address: oasis1qzplmfaeywvtc2qnylyhk0uzcxr4y5s3euhaug7q === CONSENSUS LAYER (testnet) === Nonce: 0 Total: 0.0 TEST Available: 0.0 TEST === sapphire PARATIME === Nonce: 0 Balances for all denominations: - Amount: 10.0 Symbol: TEST ``` To also include any staked assets in the balance, pass the `--show-delegations` flag. For example: ```shell oasis account show oasis1qrec770vrek0a9a5lcrv0zvt22504k68svq7kzve --show-delegations ``` ``` Address: oasis1qrec770vrek0a9a5lcrv0zvt22504k68svq7kzve Nonce: 33 === CONSENSUS LAYER (testnet) === Total: 972.898210067 TEST Available: 951.169098086 TEST Active Delegations from this Account: Total: 16.296833986 TEST Delegations: - To: oasis1qz2tg4hsatlxfaf8yut9gxgv8990ujaz4sldgmzx Amount: 16.296833986 TEST (15000000000 shares) Debonding Delegations from this Account: Total: 5.432277995 TEST Delegations: - To: oasis1qz2tg4hsatlxfaf8yut9gxgv8990ujaz4sldgmzx Amount: 5.432277995 TEST (5432277995 shares) End Time: epoch 26558 Allowances for this Account: Total: 269.5000002 TEST Allowances: - Beneficiary: oasis1qqczuf3x6glkgjuf0xgtcpjjw95r3crf7y2323xd Amount: 269.5 TEST - Beneficiary: oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx Amount: 0.0000002 TEST === sapphire PARATIME === Balances for all denominations: 6.9995378 TEST ``` Let's look more closely at the figures above. The account's **nonce** is the incremental number starting from 0 that must be unique for each account's transaction. In our case, the nonce is 33. This means there have been that many transactions made with this account as the source. The next transaction should have nonce equal to 33. We can see that the total account's **balance** on the consensus layer is \~973 tokens: - \~951 tokens can immediately be transferred. - \~16.3 tokens (15,000,000,000 shares) are staked (delegated). - \~5.4 tokens are debonding and will be available for spending in epoch 26558. - up to \~270 tokens are [allowed](#allow) to be transferred to accounts `oasis1qqczuf3x6glkgjuf0xgtcpjjw95r3crf7y2323xd` and `oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx` without the signature of the account above. Separately, you can notice there are \~7 tokens currently [deposited](#deposit) in Sapphire. The `--show-delegations` flag is not enabled by default, because account delegations are not indexed on-chain. This means that the endpoint needs to scan block by block to retrieve this information, which takes some time and often leads to timeouts on public endpoints due to denial-of-service protection. Next, let's look at what the account of a validator typically looks like.
For example: ```shell oasis account show oasis1qz8w4erh0kkwpmdtwd3dt9ueaz9hmzfpecjhd7t4 --show-delegations ``` ``` Address: oasis1qz8w4erh0kkwpmdtwd3dt9ueaz9hmzfpecjhd7t4 Nonce: 17 === CONSENSUS LAYER (testnet) === Total: 1300.598418401 TEST Available: 52.73923316 TEST Active Delegations from this Account: Total: 1247.859185241 TEST Delegations: - To: oasis1qz8w4erh0kkwpmdtwd3dt9ueaz9hmzfpecjhd7t4 (self) Amount: 1247.859185241 TEST (1167021437369 shares) Active Delegations to this Account: Total: 1833.451690691 TEST (1714678589317 shares) Delegations: - From: oasis1qz8w4erh0kkwpmdtwd3dt9ueaz9hmzfpecjhd7t4 (self) Amount: 1247.859185241 TEST (1167021437369 shares) - From: oasis1qztnau4t75cf8wh3truwtl7awvnkwe4st5l25yfn Amount: 148.289115949 TEST (138682777102 shares) - From: oasis1qrvguq055xh42yjl84yn2h5dhm59fkzg9st0mu90 Amount: 116.290596782 TEST (108757158672 shares) - From: oasis1qzhulmesqkcu23r0h5hfslwelud46mkm25zh7uqq Amount: 111.30081746 TEST (104090622972 shares) - From: oasis1qq05qnywdzz3m45dzqxuek0p4a5dxr86rgxlxc58 Amount: 104.855987628 TEST (98063296601 shares) - From: oasis1qzpvsgt56jxz324dxjv5272mz4j6kfadd5ur7f98 Amount: 104.855987628 TEST (98063296601 shares) Commission Schedule: Rates: (1) start: epoch 15883 rate: 7.0% (2) start: epoch 15994 rate: 11.0% (3) start: epoch 16000 rate: 14.0% (4) start: epoch 16134 rate: 18.0% Rate Bounds: (1) start: epoch 15883 minimum rate: 0.0% maximum rate: 10.0% (2) start: epoch 15993 minimum rate: 0.0% maximum rate: 20.0% Stake Accumulator: Claims: - Name: registry.RegisterEntity Staking Thresholds: - Global: entity - Name: registry.RegisterNode.LAdHWnCkjFR5NUkFHVpfGuKFfZW1Cqjzu6wTFY6v2JI= Staking Thresholds: - Global: node-validator - Name: registry.RegisterNode.xk58fx5ys6CSO33ngMQkgOL5UUHSgOSt0QbqWGGuEF8= Staking Thresholds: - Global: node-compute Staking Thresholds: - Global: node-compute Staking Thresholds: - Global: node-compute ``` We can see there is a total of \~1833 tokens delegated to this validator. One delegation was done by the account itself and then there are five more delegators. Sometimes, we also refer to accounts with delegated assets to it as *escrow accounts*. Next, we can see a *commission schedule*. A validator can charge commission for tokens that are delegated to it in form of the commission schedule **rate steps** (7%, 11%, 14% and 18% activated on epochs 15883, 15994, 16000 and 16134 respectively) and the commission schedule **rate bound steps** (0-10% on epoch 15883 and then 0-20% activated on epoch 15993). For more details, see the [account amend-commission-schedule](./account#amend-commission-schedule) command. An escrow account may also accumulate one or more **stake claims** as seen above. The network ensures that all claims are satisfied at any given point. Adding a new claim is only possible if **all of the existing claims plus the new claim can be satisfied**. We can observe that the stake accumulator currently has the following claims: - The `registry.RegisterEntity` claim is for registering an entity. It needs to satisfy the global threshold for [registering the `entity`][show-native-token]. - The `registry.RegisterNode.LAdHWnCkjFR5NUkFHVpfGuKFfZW1Cqjzu6wTFY6v2JI=` claim is for registering the validator node with the public key `LAdHWnCkjFR5NUkFHVpfGuKFfZW1Cqjzu6wTFY6v2JI=`. The claim needs to satisfy the [`node-validator`][show-native-token] global staking threshold parameter. 
- The `registry.RegisterNode.xk58fx5ys6CSO33ngMQkgOL5UUHSgOSt0QbqWGGuEF8=` claim is for registering the three compute nodes with the public key `xk58fx5ys6CSO33ngMQkgOL5UUHSgOSt0QbqWGGuEF8==`. The claim needs to satisfy three [`node-compute`][show-native-token] global staking threshold parameters. For more details on registering entities, nodes and ParaTimes, see the [Oasis Core Registry service][oasis-core-registry]. [address book entry]: ./addressbook.md [show-native-token]: ./network#show-native-token [Network and ParaTime](#npa) selectors are available for the `account show` command. ## Transfer Use `account transfer ` command to transfer funds between two accounts on the consensus layer or between two accounts inside the same ParaTime. The following command will perform a token transfer inside default ParaTime: ```shell oasis account transfer 2.5 oscar --account orlando ``` ``` You are about to sign the following transaction: Format: plain Method: accounts.Transfer Body: To: oscar (oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e) Amount: 2.5 TEST Authorized signer(s): 1. cb+NHKt7JT4fumy0wQdkiBwO3P+DUh8ylozMpsu1xH4= (ed25519) Nonce: 0 Fee: Amount: 0.0002772 TEST Gas limit: 2772 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: orlando (In case you are using a hardware-based signer you may need to confirm on device.) ``` Consensus layer token transfers: ```shell oasis account transfer 2.5 oscar --account orlando --no-paratime ``` ``` You are about to sign the following transaction: Method: staking.Transfer Body: To: oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e Amount: 2.5 TEST Nonce: 0 Fee: Amount: 0.0 TEST Gas limit: 1272 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: orlando (In case you are using a hardware-based signer you may need to confirm on device.) ``` [Network, ParaTime and account](#npa) selectors are available for the `account transfer` command. The [`--subtract-fee`](#subtract-fee) flag is available both for consensus and ParaTime transfers. ## Allowance `account allow ` command makes your funds withdrawable by a 3rd party beneficiary at consensus layer. For example, instead of paying your partner for a service directly, you can ask for their address and enable **them** to withdraw the amount which you agreed on from your account. This is a similar mechanism to how payment checks were used in the past. ```shell oasis account allow logan 10 ``` ``` You are about to sign the following transaction: Method: staking.Allow Body: Beneficiary: oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl Amount change: +10.0 TEST Nonce: 2 Fee: Amount: 0.0 TEST Gas limit: 1286 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` The allowance command uses relative amount. For example, if your run the above command 3 times, Logan will be allowed to withdraw 30 ROSE. To reduce the allowed amount or completely **disallow** the withdrawal, use the negative amount. To avoid flag ambiguity in the shell, you will first need to pass all desired flags and parameters except the negative amount, then append `--` to mark the end of options, and finally append the negative amount. 
```shell oasis account allow logan -- -10 ``` ``` You are about to sign the following transaction: Method: staking.Allow Body: Beneficiary: oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl Amount change: -10.0 TEST Nonce: 0 Fee: Amount: 0.0 TEST Gas limit: 1288 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: oscar ``` The allowance transaction is also required if you want to deposit funds from your consensus account to a ParaTime. The ParaTime will **withdraw** the amount from your consensus account and fund your ParaTime account with the same amount deducted by the deposit fee. Oasis CLI can derive the address of the ParaTime beneficiary, if you use `paratime:` as the beneficiary address. ```shell oasis account allow paratime:sapphire 10 ``` ``` You are about to sign the following transaction: Method: staking.Allow Body: Beneficiary: oasis1qqczuf3x6glkgjuf0xgtcpjjw95r3crf7y2323xd Amount change: +10.0 TEST Nonce: 2 Fee: Amount: 0.0 TEST Gas limit: 1286 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` [Network and account](#npa) selectors are available for the `account allow` command. ## Deposit Tokens to a ParaTime `account deposit [address]` will deposit funds from your consensus account to the target address inside the selected ParaTime. ```shell oasis accounts deposit 10 eugene --gas-price 0 ``` ``` You are about to sign the following transaction: Format: plain Method: consensus.Deposit Body: To: eugene (oasis1qrvzxld9rz83wv92lvnkpmr30c77kj2tvg0pednz) Amount: 10.0 TEST Authorized signer(s): 1. Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= (ed25519) Nonce: 0 Fee: Amount: 0.0 TEST Gas limit: 73572 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: sapphire Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` If no address is provided, the deposit will be made to the address corresponding to your consensus account inside the ParaTime. ```shell oasis accounts deposit 10 --gas-price 0 ``` ``` You are about to sign the following transaction: Format: plain Method: consensus.Deposit Body: To: Self Amount: 10.0 TEST Authorized signer(s): 1. Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= (ed25519) Nonce: 0 Fee: Amount: 0.0 TEST Gas limit: 73542 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: sapphire Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` Currently, deposit transactions are free of charge, hence the `--gas-price 0` parameter to avoid spending unnecessary gas fees. Also, keep in mind that **deposit and withdrawal fees are always paid by your ParaTime account.** If it doesn't contain any ROSE, you will not able to cover the fees. You can also make a deposit to an account with arbitrary address inside a ParaTime. For example, let's deposit to some native address inside the ParaTime: ```shell oasis account deposit 10 oasis1qpxhsf7xnm007csw2acaa7mta2krzpwex5c90qu6 --gas-price 0 ``` ``` You are about to sign the following transaction: Format: plain Method: consensus.Deposit Body: To: oasis1qpxhsf7xnm007csw2acaa7mta2krzpwex5c90qu6 Amount: 10.0 TEST Authorized signer(s): 1. 
Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= (ed25519) Nonce: 0 Fee: Amount: 0.0 TEST Gas limit: 73572 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: sapphire Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` Or to some address in the Ethereum format: ```shell oasis accounts deposit 10 0x90adE3B7065fa715c7a150313877dF1d33e777D5 --gas-price 0 ``` ``` You are about to sign the following transaction: Format: plain Method: consensus.Deposit Body: To: oasis1qpupfu7e2n6pkezeaw0yhj8mcem8anj64ytrayne Amount: 10.0 TEST Authorized signer(s): 1. Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= (ed25519) Nonce: 0 Fee: Amount: 0.0 TEST Gas limit: 73572 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: sapphire Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` [Network, ParaTime and account](#npa) selectors are available for the `account deposit` command. ## Withdraw Tokens from the ParaTime `account withdraw [to]` will withdraw funds from your ParaTime account to a consensus address: ```shell oasis account withdraw 10 orlando ``` ``` You are about to sign the following transaction: Format: plain Method: consensus.Withdraw Body: To: orlando (oasis1qq84sc4q0shp5c5klwklqu59evz2mg59hveg7dqx) Amount: 10.0 TEST Authorized signer(s): 1. Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= (ed25519) Nonce: 0 Fee: Amount: 0.0073573 TEST Gas limit: 73573 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` If the address is not provided, the address of the account inside ParaTime will be used as a consensus address: ```shell oasis account withdraw 10 ``` ``` You are about to sign the following transaction: Format: plain Method: consensus.Withdraw Body: To: Self Amount: 10.0 TEST Authorized signer(s): 1. Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= (ed25519) Nonce: 0 Fee: Amount: 0.0073543 TEST Gas limit: 73543 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` Withdrawal transactions are not free of charge and the fee will be deducted **from your ParaTime balance**. Similar to the [`account deposit`](#deposit) command, you can also specify an arbitrary Oasis address which you want to withdraw your tokens to. ```shell oasis accounts withdraw 10 oasis1qpxhsf7xnm007csw2acaa7mta2krzpwex5c90qu6 ``` ``` You are about to sign the following transaction: Format: plain Method: consensus.Withdraw Body: To: oasis1qpxhsf7xnm007csw2acaa7mta2krzpwex5c90qu6 Amount: 10.0 TEST Authorized signer(s): 1. Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= (ed25519) Nonce: 0 Fee: Amount: 0.0073573 TEST Gas limit: 73573 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` You cannot use the destination address of your `secp256k1` account or any other Ethereum-formatted address for the withdrawal, because this signature scheme is not supported on the consensus layer! [Network, ParaTime and account](#npa) selectors are available for the `account withdraw` command. The [`--subtract-fee`](#subtract-fee) flag is available for withdrawal transactions. ## Delegate Tokens to a Validator To stake your tokens on the consensus layer, run `account delegate `. 
This will delegate the specified amount of tokens to a validator. You can either delegate directly on the consensus layer: ```shell oasis account delegate 20 oasis1qpkl3vykn9mf4xcq9eevmey4ffrzf0ajtcpvd7sk --no-paratime ``` ``` You are about to sign the following transaction: Method: staking.AddEscrow Body: To: oasis1qpkl3vykn9mf4xcq9eevmey4ffrzf0ajtcpvd7sk Amount: 20.0 TEST Nonce: 2 Fee: Amount: 0.0 TEST Gas limit: 1279 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` Or you can delegate from inside a ParaTime that supports delegations: ```shell oasis account delegate 20 oasis1qpkl3vykn9mf4xcq9eevmey4ffrzf0ajtcpvd7sk ``` ``` You are about to sign the following transaction: Format: plain Method: consensus.Delegate Body: To: oasis1qpkl3vykn9mf4xcq9eevmey4ffrzf0ajtcpvd7sk Amount: 20.0 TEST Authorized signer(s): 1. Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= (ed25519) Nonce: 0 Fee: Amount: 0.0073574 TEST Gas limit: 73574 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` Once your tokens are staked, they are converted into *shares* since the number of tokens may change over time based on the [staking reward schedule][token-metrics] or if your validator is subject to [slashing]. The number of shares on the other hand will remain constant. Also, shares are always interpreted as a whole number, whereas the amount of tokens is usually a rational number and may lead to rounding errors when managing your delegations. To find out how many shares did you delegate, run [`account show`](#show) and look for the `shares` under the active delegations section. [Network, ParaTime and account](#npa) selectors are available for the `account delegate` command. [token-metrics]: https://github.com/oasisprotocol/docs/blob/main/docs/general/oasis-network/token-metrics-and-distribution.mdx#staking-incentives [slashing]: https://github.com/oasisprotocol/docs/blob/main/docs/general/manage-tokens/terminology.md#slashing ## Undelegate Tokens from the Validator To reclaim your delegated assets, use `account undelegate `. You will need to specify the **number of shares instead of tokens** and the validator address you want to reclaim your assets from. Depending on where the tokens have been delegated from, you can either reclaim delegated tokens directly on the consensus layer: ```shell oasis account undelegate 20000000000 oasis1qpkl3vykn9mf4xcq9eevmey4ffrzf0ajtcpvd7sk --no-paratime ``` ``` You are about to sign the following transaction: Method: staking.ReclaimEscrow Body: From: oasis1qpkl3vykn9mf4xcq9eevmey4ffrzf0ajtcpvd7sk Shares: 20000000000 Nonce: 2 Fee: Amount: 0.0 TEST Gas limit: 1283 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` Or you can reclaim from inside a ParaTime that supports delegations: ```shell oasis account undelegate 20000000000 oasis1qpkl3vykn9mf4xcq9eevmey4ffrzf0ajtcpvd7sk ``` ``` You are about to sign the following transaction: Format: plain Method: consensus.Undelegate Body: From: oasis1qpkl3vykn9mf4xcq9eevmey4ffrzf0ajtcpvd7sk Shares: 20000000000 Authorized signer(s): 1. 
Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= (ed25519) Nonce: 0 Fee: Amount: 0.0145572 TEST Gas limit: 145572 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` After submitting the transaction, a [debonding period] will commence. After the period has passed, the network will automatically move your assets back to your account. Note that during the debonding period, your assets may still be [slashed][slashing]. [Network, ParaTime and account](#npa) selectors are available for the `account undelegate` command. [debonding period]: ./network.md#show ## Advanced ### Public Key to Address `account from-public-key ` converts the Base64-encoded public key to the [Oasis native address]. ```shell oasis account from-public-key NcPzNW3YU2T+ugNUtUWtoQnRvbOL9dYSaBfbjHLP1pE= ``` ``` oasis1qrec770vrek0a9a5lcrv0zvt22504k68svq7kzve ``` This command is most often used by the network validators for converting the public key of their entity to a corresponding address. You can find your entity's ID in the `id` field of the `entity.json` file. Oasis consensus transactions hold the public key of the signer instead of their *from* address. This command can be used for debugging to determine the signer's staking address on the network. [Oasis native address]: https://github.com/oasisprotocol/docs/blob/main/docs/general/manage-tokens/terminology.md#address ### Non-Interactive Mode Add `-y` flag to any operation, if you want to use Oasis CLI in non-interactive mode. This will answer "yes to all" for yes/no questions and for all other prompts it will keep the proposed default values. ### Output Transaction to File Use `--output-file ` parameter to save the resulting transaction to a file instead of broadcasting it to the network. You can then use the [`transaction`] command to verify and submit it. Check out the [`--unsigned`] flag, if you wish to store the unsigned version of the transaction and the [`--format`] parameter for a different transaction encoding. [`transaction`]: ./transaction.md [`--unsigned`]: #unsigned [`--format`]: #format ### Do Not Sign the Transaction If you wish to *prepare* a transaction to be signed by a specific account in the future, use the `--unsigned` flag. This will cause Oasis CLI to skip the signing and broadcasting steps. The transaction will be printed to the standard output instead. You can also use [`--output-file`] to store the transaction to a file. This setup is ideal when you want to sign a transaction with the [offline/air-gapped machine] machine: 1. First, generate an unsigned transaction on a networked machine, 2. copy it over to an air-gapped machine, 3. [sign it][transaction-sign] on the air-gapped machine, 4. copy it over to the networked machine, 5. [broadcast the transaction][transaction-submit] on the networked machine. Use the CBOR format, if you are using a 3rd party tool in step 3 to sign the transaction content directly. Check out the [`--format`] parameter to learn more. [`--output-file`]: #output-file [transaction-sign]: ./transaction.md#sign [transaction-submit]: ./transaction.md#submit [offline/air-gapped machine]: https://en.wikipedia.org/wiki/Air_gap_\(networking\) ### Output format Use `--format json` or `--format cbor` to select the output file format. By default the JSON encoding is selected so that the file is human-readable and that 3rd party applications can easily manage it. 
If you want to output the transaction in the same format that will be stored on-chain or you are using a 3rd party tool for signing the content of the transaction file directly use the CBOR encoding. This parameter only works together with [`--unsigned`] and/or [`--output-file`] parameters. ### Offline Mode To generate a transaction without accessing the network and also without broadcasting it, add `--offline` flag. In this case Oasis CLI will require that you provide all necessary transaction details (e.g. [account nonce](#nonce), [gas limit](#gas-limit), [gas price](#gas-price)) which would otherwise be automatically obtained from the network. Oasis CLI will print the transaction to the standard output for you to examine. Use [`--output-file`](#output-file), if you wish to save the transaction to the file and submit it to the network afterwards by using the [`transaction submit`][transaction-submit] command. ### Subtract fee To include the transaction fee inside the given amount, pass the `--subtract-fee` flag. This comes handy, if you want to drain the account or keep it rounded to some specific number. ```shell oasis account transfer 1.0 0xDce075E1C39b1ae0b75D554558b6451A226ffe00 --account orlando --subtract-fee ``` ```shell You are about to sign the following transaction: Format: plain Method: accounts.Transfer Body: To: test:dave (oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt) Amount: 0.9997228 TEST Authorized signer(s): 1. cb+NHKt7JT4fumy0wQdkiBwO3P+DUh8ylozMpsu1xH4= (ed25519) Nonce: 0 Fee: Amount: 0.0002772 TEST Gas limit: 2772 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: orlando (In case you are using a hardware-based signer you may need to confirm on device.) ``` ### Account's Nonce `--nonce ` will override the detection of the account's nonce used to sign the transaction with the specified one. ### Gas Price `--gas-price ` sets the transaction's price per gas unit in base units. ### Gas Limit `--gas-limit ` sets the maximum amount of gas that can be spend by the transaction. ### Entity Management #### Initialize Entity When setting up a validator node for the first time, you will need to provide the path to the file containing your entity descriptor as well as register it in the network registry. Use `account entity init` to generate the entity descriptor file containing the public key of the selected account. ```shell oasis account entity init ``` ```json { "id": "Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8=", "nodes": [], "v": 2 } ``` By default, the file content will be printed to the standard output. You can use `-o` parameter to store it to a file, for example: ```shell oasis account entity init -o entity.json ``` [Account](#account) selector is available for the `account entity init` command. #### Register your Entity In order for validators to become part of the validator set and/or the compute committee, they first need to register as an entity inside the network's registry. Use the `account entity register ` command to register your entity and provide a JSON file with the Entity descriptor. You can use the [`network show`][network-show] command to see existing entities and then examine specific ones to see how entity descriptors of the currently registered entities look like. 
[network-show]: ./network.md#show [oasis-core-registry]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/consensus/services/registry.md#entities-and-nodes ```shell oasis account entity register entity.json ``` ``` Signing the entity descriptor... (In case you are using a hardware-based signer you may need to confirm on device.) You are about to sign the following transaction: Method: registry.RegisterEntity Body: { "untrusted_raw_value": { "v": 2, "id": "Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8=", "nodes": [ "nshzFvqLNNLN+HS0id5XmXrVMhIgFV456i4VQicWgjk=" ] }, "signature": { "public_key": "Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8=", "signature": "DAwn+N8hKmQMbZda/fFJSEgErDAAdebXLfIPOpqUkJowJLUAL+nfrUMz5SVkKc0TnqQOavoSAVFz1yoRJ3QuBA==" } } Nonce: 2 Fee: Amount: 0.0 TEST Gas limit: 2479 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` [Network and account](#npa) selectors are available for the `account entity register` command. #### Deregister Your Entity To remove an entity from the network's registry, invoke `account entity deregister`. No additional arguments are required since each account can only deregister their own entity, if one exists in the registry. ```shell oasis account entity deregister ``` ``` You are about to sign the following transaction: Method: registry.DeregisterEntity Body: {} Nonce: 2 Fee: Amount: 0.0 TEST Gas limit: 1239 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` [Network and account](#npa) selectors are available for the `account entity deregister` command. ### Change Your Commission Schedule Validators can use `account amend-commission-schedule` to add or remove their commission bounds and rates at consensus layer. Rate bounds can be defined by using the `--bounds //` parameter. Actual rates which can be subject to change every epoch can be defined with the `--rates /` parameter. Rates are specified in milipercents (100% = 100000m%). The new commission schedule will replace any previous schedules. ```shell oasis account amend-commission-schedule --bounds 329000/1000/2000,335000/900/1900 --rates 329000/1500 ``` ``` You are about to sign the following transaction: Method: staking.AmendCommissionSchedule Body: Amendment: Rates: (1) start: epoch 329000 rate: 1.5% Rate Bounds: (1) start: epoch 329000 minimum rate: 1.0% maximum rate: 2.0% (2) start: epoch 335000 minimum rate: 0.9% maximum rate: 1.9% Nonce: 2 Fee: Amount: 0.0 TEST Gas limit: 1369 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` To learn more on commission rates read the section inside the Oasis Core [Staking service][staking-service-commission-schedule] chapter. [Network and account](#npa) selectors are available for the `account amend-commission-schedule` command. [staking-service-commission-schedule]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/consensus/services/staking.md#amend-commission-schedule ### Unfreeze Your Node Once the validators, based on their stake, get elected into the validator set, it is important that their nodes are actively participating in proposing new blocks and submitting votes for other proposed blocks. 
For regular node upgrades and maintenance, the validators should follow the [Shutting Down a Node] instructions. Nevertheless, if the network froze your node, the only way to unfreeze it is to execute the `account node-unfreeze` ```shell oasis account node-unfreeze fasTG3pMOwLfFA7JX3R8Kxw1zFflqeY6NP/cpjcFu5I= ``` ``` You are about to sign the following transaction: Method: registry.UnfreezeNode Body: { "node_id": "fasTG3pMOwLfFA7JX3R8Kxw1zFflqeY6NP/cpjcFu5I=" } Nonce: 2 Fee: Amount: 0.0 TEST Gas limit: 1282 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` [Network and account](#npa) selectors are available for the `account node-unfreeze` command. [Shutting Down a Node]: https://github.com/oasisprotocol/docs/blob/main/docs/node/run-your-node/maintenance/shutting-down-a-node.md ### Burn Tokens `account burn ` command will permanently destroy the amount of tokens in your account and remove them from circulation. This command should not be used on public networks since not only no one will be able to access burnt assets anymore, but will also permanently remove the tokens from circulation. ```shell oasis account burn 2.5 ``` ``` You are about to sign the following transaction: Method: staking.Burn Body: Amount: 2.5 TEST Nonce: 2 Fee: Amount: 0.0 TEST Gas limit: 1243 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: oscar (In case you are using a hardware-based signer you may need to confirm on device.) ``` [Network and account](#npa) selectors are available for the `account burn` command. ### Pools and Reserved Addresses The following literals are used in the Oasis CLI to denote special reserved addresses which cannot be directly used in the ledger: #### Consensus layer - `pool:consensus:burn`: The token burn address. - `pool:consensus:common`: The common pool address. - `pool:consensus:fee-accumulator`: The per-block fee accumulator address. - `pool:consensus:governance-deposits`: The governance deposits address. #### ParaTime layer - `pool:paratime:common`: The common pool address. - `pool:paratime:fee-accumulator`: The per-block fee accumulator address. - `pool:paratime:pending-withdrawal`: The internal pending withdrawal address. - `pool:paratime:pending-delegation`: The internal pending delegation address. - `pool:paratime:rewards`: The reward pool address. --- ## Address book # Address Book If you repeatedly transfer tokens to the same recipients or if you just want to store an arbitrary address for future use, you can use the `addressbook` command to **name the address and store it in your address book**. Entries in your address book are behaving similarly to the [accounts stored in your wallet][wallet], for example when checking the balance of the account or sending tokens to. Of course, you cannot sign any transactions with the address stored in your address book since you do not possess the private key of that account. Both the Oasis native and the Ethereum-compatible addresses can be stored. The name of the address book entry may not clash with any of the account names in your wallet. The Oasis CLI will prevent you from doing so. [wallet]: wallet.md ## Add a New Entry Use `addressbook add
` to name the address and store it in your address book. ```shell oasis addressbook add mike oasis1qrtrpg56l6y2cfudwtgfuxmq5e5cyhffcsfpdqvw ``` ```shell oasis addressbook add meghan 0xBe8B38ED9b0794e7ab9EbEfC1e710b4F4EC6F6C1 ``` Then, you can for example use the entry name in you address book to send the tokens to. In this case, we're sending `2.5 TEST` to `meghan` on Sapphire Testnet: ```shell oasis account transfer 2.5 meghan ``` ``` You are about to sign the following transaction: Format: plain Method: accounts.Transfer Body: To: meghan (oasis1qrjzcve7y6qp3nqs3n7ghavw68vkdh3epcv64ega) Amount: 2.5 ROSE Authorized signer(s): 1. ArEjDxsPfDvfeLlity4mjGzy8E/nI4umiC8vYQh+eh/c (secp256k1eth) Nonce: 0 Fee: Amount: 0.0002779 ROSE Gas limit: 2779 (gas price: 0.0000001 ROSE per gas unit) Network: mainnet ParaTime: emerald Account: eugene (In case you are using a hardware-based signer you may need to confirm on device.) ``` ## List Entries You can list all entries in your address book by invoking `addressbook list`. ```shell oasis addressbook list ``` ``` NAME ADDRESS meghan 0xBe8B38ED9b0794e7ab9EbEfC1e710b4F4EC6F6C1 mike oasis1qrtrpg56l6y2cfudwtgfuxmq5e5cyhffcsfpdqvw ``` ## Show Entry Details You can check the details such as the native Oasis address of the Ethereum account or simply check, if an entry exists in the address book, by running `addressbook show `: ```shell oasis addressbook show meghan ``` ``` Name: meghan Ethereum address: 0xBe8B38ED9b0794e7ab9EbEfC1e710b4F4EC6F6C1 Native address: oasis1qrjzcve7y6qp3nqs3n7ghavw68vkdh3epcv64ega ``` ```shell oasis addressbook show mike ``` ``` Name: mike Native address: oasis1qrtrpg56l6y2cfudwtgfuxmq5e5cyhffcsfpdqvw ``` ## Rename an Entry You can always rename the entry in your address book by using `addressbook rename `: ```shell oasis addressbook list ``` ``` NAME ADDRESS meghan 0xBe8B38ED9b0794e7ab9EbEfC1e710b4F4EC6F6C1 mike oasis1qrtrpg56l6y2cfudwtgfuxmq5e5cyhffcsfpdqvw ``` ```shell oasis addressbook rename mike mark ``` ```shell oasis addressbook list ``` ``` NAME ADDRESS mark oasis1qrtrpg56l6y2cfudwtgfuxmq5e5cyhffcsfpdqvw meghan 0xBe8B38ED9b0794e7ab9EbEfC1e710b4F4EC6F6C1 ``` ## Remove an Entry To delete an entry from your address book invoke `addressbook remove `. ```shell oasis addressbook list ``` ``` NAME ADDRESS meghan 0xBe8B38ED9b0794e7ab9EbEfC1e710b4F4EC6F6C1 mike oasis1qrtrpg56l6y2cfudwtgfuxmq5e5cyhffcsfpdqvw ``` ```shell oasis addressbook remove mike ``` ```shell oasis addressbook list ``` ``` NAME ADDRESS meghan 0xBe8B38ED9b0794e7ab9EbEfC1e710b4F4EC6F6C1 ``` --- ## Network # Manage Your Oasis Networks The `network` command is used to manage the Mainnet, Testnet or Localnet endpoints Oasis CLI will be connecting to. The `network` command is commonly used: - on network upgrades, because the chain domain separation context is changed due to a new [genesis document], - when setting up a local `oasis-node` instance instead of relying on public gRPC endpoints, - when running a private Localnet with `oasis-net-runner`, - when examining network properties such as the native token, the network registry, the validator set and others. Oasis CLI supports both **remote endpoints via the secure gRPC protocol** and **local Unix socket endpoints**. When running the Oasis CLI for the first time, it will automatically configure the current Mainnet and Testnet endpoints. 
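To see which endpoints were preconfigured on your machine, you can list them right away (the `network list` command is covered in more detail below):

```shell
oasis network list
```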
[genesis document]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/consensus/genesis.md#genesis-documents-hash ## Add a Network Invoke `network add [chain-context]` to add a new endpoint with a specific chain domain separation context and a gRPC address. This command is useful, if you want to connect to your own instance of the Oasis node instead of relying on the public gRPC endpoints. For TCP/IP endpoints, run: ```shell oasis network add testnet_alt testnet2.grpc.oasis.io:443 0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76 ``` ``` ? Description: Testnet alternative ? Denomination symbol: TEST ? Denomination decimal places: (9) ``` For Unix sockets, use: ```shell oasis network add testnet_local unix:/node_testnet/data/internal.sock 0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76 ``` ``` ? Description: Testnet network, local node ? Denomination symbol: TEST ? Denomination decimal places: (9) ``` To automatically detect the chain context, simply omit the `[chain-context]` argument: ```shell oasis network add testnet_alt testnet2.grpc.oasis.io:443 ``` ``` ? Description: Testnet alternative ? Denomination symbol: TEST ? Denomination decimal places: (9) ``` ## Add a Local Network `network add-local ` command can be used if you are running `oasis-node` on your local machine. In this case, Oasis CLI will autodetect the chain domain separation context. For the Oasis Mainnet and Testnet chains, the native token symbol, the number of decimal places and registered ParaTimes will automatically be predefined. Otherwise, the Oasis CLI will ask you to enter them. ```shell oasis network add-local testnet_local unix:/node_testnet/data/internal.sock ``` To override the defaults, you can pass `--num-decimals`, `--symbol` and `--description` parameters. This is especially useful, if you are running the command in a [non-interactive mode](account.md#y): ```shell oasis network add-local testnet_local unix:/node_testnet/data/internal.sock --num-decimals 9 --symbol TEST --description "Work machine - Localnet" -y ``` ## List Networks Invoke `network list` to list all configured networks. ```shell oasis network list ``` ``` NAME CHAIN CONTEXT RPC mainnet (*) bb3d748def55bdfb797a2ac53ee6ee141e54cd2ab2dc2375f4a0703a178e6e55 grpc.oasis.io:443 mainnet_local b11b369e0da5bb230b220127f5e7b242d385ef8c6f54906243f30af63c815535 unix:/node/data/internal.sock testnet 0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76 testnet.grpc.oasis.io:443 testnet_alt 50304f98ddb656620ea817cc1446c401752a05a249b36c9b90dba4616829977a testnet2.grpc.oasis.io:443 ``` The [default network](#set-default) is marked with the `(*)` sign. ## Remove a Network Use `network remove ` to remove the given network configuration including all dependant ParaTimes. ```shell oasis network remove testnet_alt ``` You can also delete network in non-interactive mode format by passing the `-y` parameter: ```shell oasis network remove testnet -y ``` ## Set Network Chain Context To change the chain context of a network, use `network set-chain-context [chain-context]`. Chain contexts represent a root of trust in the network, so before changing them for production networks make sure you have verified them against a trusted source like the [Mainnet] and [Testnet] chapters in the official Oasis documentation. 
```shell oasis network list ``` ```shell NAME CHAIN CONTEXT RPC mainnet bb3d748def55bdfb797a2ac53ee6ee141e54cd2ab2dc2375f4a0703a178e6e55 grpc.oasis.io:443 mainnet_local (*) b11b369e0da5bb230b220127f5e7b242d385ef8c6f54906243f30af63c815535 unix:/node/data/internal.sock testnet 0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76 testnet.grpc.oasis.io:443 ``` ```shell oasis network set-chain-context mainnet_local 01234513331133a715c7a150313877dF1d33e77a715c7a150313877dF1d33e77 ``` ```shell oasis network list ``` ```shell NAME CHAIN CONTEXT RPC mainnet bb3d748def55bdfb797a2ac53ee6ee141e54cd2ab2dc2375f4a0703a178e6e55 grpc.oasis.io:443 mainnet_local (*) 01234513331133a715c7a150313877dF1d33e77a715c7a150313877dF1d33e77 unix:/node/data/internal.sock testnet 0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76 testnet.grpc.oasis.io:443 ``` To automatically detect the chain context, simply omit the `[chain-context]` argument. This is especially useful for Localnet, where the chain context changes each time you restart the `oasis-net-runner`: ```shell oasis network set-chain-context mainnet_local ``` [Mainnet]: https://github.com/oasisprotocol/docs/blob/main/docs/node/network/mainnet [Testnet]: https://github.com/oasisprotocol/docs/blob/main/docs/node/network/testnet ## Set Default Network To change the default network for future Oasis CLI operations, use `network set-default `. ```shell oasis network list ``` ``` NAME CHAIN CONTEXT RPC mainnet (*) bb3d748def55bdfb797a2ac53ee6ee141e54cd2ab2dc2375f4a0703a178e6e55 grpc.oasis.io:443 mainnet_local b11b369e0da5bb230b220127f5e7b242d385ef8c6f54906243f30af63c815535 unix:/node/data/internal.sock testnet 0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76 testnet.grpc.oasis.io:443 ``` ```shell oasis network set-default mainnet_local ``` ```shell oasis network list ``` ``` NAME CHAIN CONTEXT RPC mainnet bb3d748def55bdfb797a2ac53ee6ee141e54cd2ab2dc2375f4a0703a178e6e55 grpc.oasis.io:443 mainnet_local (*) b11b369e0da5bb230b220127f5e7b242d385ef8c6f54906243f30af63c815535 unix:/node/data/internal.sock testnet 0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76 testnet.grpc.oasis.io:443 ``` ## Change a Network's RPC Endpoint To change the RPC address of the already configured network, run `network set-rpc `: ```shell oasis network list ``` ``` NAME CHAIN CONTEXT RPC mainnet (*) bb3d748def55bdfb797a2ac53ee6ee141e54cd2ab2dc2375f4a0703a178e6e55 grpc.oasis.io:443 mainnet_local b11b369e0da5bb230b220127f5e7b242d385ef8c6f54906243f30af63c815535 unix:/node/data/internal.sock testnet 0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76 testnet.grpc.oasis.io:443 testnet_alt 50304f98ddb656620ea817cc1446c401752a05a249b36c9b90dba4616829977a testnet2.grpc.oasis.io:443 ``` ```shell oasis network set-rpc testnet_alt testnet3.grpc.oasis.io:443 ``` ```shell oasis network list ``` ``` NAME CHAIN CONTEXT RPC mainnet (*) bb3d748def55bdfb797a2ac53ee6ee141e54cd2ab2dc2375f4a0703a178e6e55 grpc.oasis.io:443 mainnet_local b11b369e0da5bb230b220127f5e7b242d385ef8c6f54906243f30af63c815535 unix:/node/data/internal.sock testnet 0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76 testnet.grpc.oasis.io:443 testnet_alt 50304f98ddb656620ea817cc1446c401752a05a249b36c9b90dba4616829977a testnet3.grpc.oasis.io:443 ``` ## Advanced ### Governance Operations `network governance` command is aimed towards validators for proposing or voting on-chain for network upgrades or changes to other crucial network parameters. 
#### `list` Use `network governance list` to view all past and still active governance proposals. Each proposal has a unique sequential ID, a submitter, the epoch when it was created, the epoch when it closes, and a state. ```shell oasis network governance list --network testnet ``` ``` ID KIND SUBMITTER CREATED AT CLOSES AT STATE 1 upgrade oasis1qrs2dl6nz6fcxxr3tq37laxlz6hxk6kuscnr6rxj 5633 5645 passed 2 upgrade oasis1qrs2dl6nz6fcxxr3tq37laxlz6hxk6kuscnr6rxj 7525 7537 passed 3 upgrade oasis1qrs2dl6nz6fcxxr3tq37laxlz6hxk6kuscnr6rxj 8817 8829 passed 4 upgrade oasis1qrs2dl6nz6fcxxr3tq37laxlz6hxk6kuscnr6rxj 14183 14195 passed 5 upgrade oasis1qrs2dl6nz6fcxxr3tq37laxlz6hxk6kuscnr6rxj 14869 14881 passed 6 cancel upgrade 5 oasis1qrs2dl6nz6fcxxr3tq37laxlz6hxk6kuscnr6rxj 14895 14907 passed 7 upgrade oasis1qrs2dl6nz6fcxxr3tq37laxlz6hxk6kuscnr6rxj 14982 14994 passed 8 upgrade oasis1qpwaggvmhwq5uk40clase3knt655nn2tdy39nz2f 29493 29505 passed 9 change parameters (governance) oasis1qrx85mv85k708ylww597rd42enlzhdmeu56wqj72 30693 30705 passed 10 change parameters (staking) oasis1qqxxut9x74dutu587f9nj8787qz4dm0ueu05l88c 33059 33071 passed 11 upgrade oasis1qqxxut9x74dutu587f9nj8787qz4dm0ueu05l88c 35915 35927 passed ``` [Network](./account.md#npa) selector is available for the `governance list` command. #### `show` `network governance show <proposal-id>` shows detailed information on past or open governance proposals on the consensus layer. ```shell oasis network governance show 9 --network testnet ``` ``` === PROPOSAL STATUS === Network: testnet Proposal ID: 9 Status: passed Submitted By: oasis1qrx85mv85k708ylww597rd42enlzhdmeu56wqj72 Created At: epoch 30693 Results: - yes: 43494459676132712 - no: 0 - abstain: 0 === PROPOSAL CONTENT === Change Parameters: Module: governance Changes: - Parameter: upgrade_cancel_min_epoch_diff Value: 15 === VOTED STAKE === Total voting stake: 43777341677851724 Voted stake: 43494459676132712 (99.35%) Voted yes stake: 43494459676132712 (100.00%) Threshold: 68% ``` You can also view individual validator votes by passing the `--show-votes` parameter: ```shell oasis network governance show 9 --show-votes --network testnet ``` ``` === PROPOSAL STATUS === Network: testnet Proposal ID: 9 Status: passed Submitted By: oasis1qrx85mv85k708ylww597rd42enlzhdmeu56wqj72 Created At: epoch 30693 Results: - yes: 43494459676132712 - no: 0 - abstain: 0 === PROPOSAL CONTENT === Change Parameters: Module: governance Changes: - Parameter: upgrade_cancel_min_epoch_diff Value: 15 === VOTED STAKE === Total voting stake: 43777341677851724 Voted stake: 43494459676132712 (99.35%) Voted yes stake: 43494459676132712 (100.00%) Threshold: 68% === VALIDATORS VOTED === 1. oasis1qqv25adrld8jjquzxzg769689lgf9jxvwgjs8tha,,11072533240458237 (25.29%): yes 2. oasis1qz2tg4hsatlxfaf8yut9gxgv8990ujaz4sldgmzx,,10922431911536365 (24.95%): yes 3. oasis1qz424yg28jqmgfq3xvly6ky64jqnmlylfc27d7cp,,10786148310722167 (24.64%): yes 4. oasis1qq2vzcvxn0js5unsch5me2xz4kr43vcasv0d5eq4,,10713346213415943 (24.47%): yes === VALIDATORS NOT VOTED === 1. oasis1qrwncs459lauc77zw23efdn9dmfcp23cxv095l5z,GateOmega,43681995855414 (0.10%) 2. oasis1qq60zmsfca0gvmm3v8906pn5zqtt4ee2ssexramg,Validatrium,37519643115923 (0.09%) 3. oasis1qrkwv688m3naejcy8rhycls8r78ga0th4qaun90k,Tuzem,13051121909522 (0.03%) 4. oasis1qrg430wr84xqh2pm6hv609v7jx9j3gt7xykmjl65,cherkes,12829194880949 (0.03%) 5. oasis1qzjm0zwfg4egs9kk4d9rkujudzk8pjp5rvxyr3ag,Munay Network,12777089617060 (0.03%) 6. oasis1qqsxhxedvzt0et3sahcqcjw02p4kcz92dqtjuzwh,BroMyb,12062754356510 (0.03%) 7.
oasis1qpq97fm6lf87jzms9agd6z902nh7axtxvus6m352,LDV,11442842011460 (0.03%) 8. oasis1qpz97gfrvj5xzx8jx7x9zweeq0rcf2q6jg4a09qz,Stardust,11304018972474 (0.03%) 9. oasis1qrkf98prkpf05kd6he7wcvpzr9sd6gs2jvrn5keh,glebanyy,10964792231490 (0.03%) 10. oasis1qzxtc82d7gmcr5yazlu786gkwcvukz3zvu9ph5la,ushakov,10954729838903 (0.03%) 11. oasis1qpgg65qg7r7yy2a0qp2yufvcsyl2lm46lg03g6cp,Breathe and Stake,10942254111385 (0.02%) 12. oasis1qrruwg0y4au55efu0pcgl0nanaq6p3sdwv0jhzv5,Dobrynya Hukutu4,10753083746804 (0.02%) 13. oasis1qq6k7q4uukpucz322m8dhy0pt0gvfdgrvcvrx2rm,Spectrum Staking,10724618200610 (0.02%) 14. oasis1qr2jxg57ch6p3787t2a8973v8zn8g82nxuex0773,Doorgod,9959349109598 (0.02%) 15. oasis1qrp0cgv0u5mxm7l3ruzqyk57g6ntz6f8muymfe4p,ELYSIUM,9536638984147 (0.02%) 16. oasis1qrfeessnrnyaggvyvne52aple2f8vaw93vt608sz,Julia-Ju,7765469996624 (0.02%) 17. oasis1qz9x0zpja6n25hc5242k2e60l6a7ke2zsq9cqrqz,SerGo,5553178612897 (0.01%) 18. oasis1qq4fj0fdydz83zvcgt4kn38ea7ncm3dj8qkcfnm4,Wanderer Staking,5471851136155 (0.01%) 19. oasis1qzcemlzf7zv2jxsufex4h9mjaqwy4upnzy7qrl7x,Making.Cash Validator,5461635837440 (0.01%) 20. oasis1qrq7hgvv26003hy89klcmy3mnedrmyd7lvf0k6qn,Perfect Stake,4040750411525 (0.01%) 21. oasis1qqxxut9x74dutu587f9nj8787qz4dm0ueu05l88c,Princess Stake,3406051188880 (0.01%) 22. oasis1qq45am6gzaur2rxhk26av9qf7ryhgca6ecg28clu,Jr,2201101606599 (0.01%) 23. oasis1qz7rce6dmnh9qtr9nltsyy69d69j3a95rqm3jmxw,Everstake,2171181028607 (0.00%) 24. oasis1qz8w4erh0kkwpmdtwd3dt9ueaz9hmzfpecjhd7t4,Chloris Network,2011713919098 (0.00%) 25. oasis1qzlzczsdme4scprjjh4h4vtljnmx3ag4vgpdnqln,Alexander (aka Bambarello) Validator,1757051650379 (0.00%) 26. oasis1qzwe6xywazp29tp20974wgxhdcpdf6yxfcn2jxvv,Simply Staking,1388519563110 (0.00%) 27. oasis1qq2vdcvkyzdghcrrdhvujk3tvva84wd9yvt68zyx,Lusia,1300150706950 (0.00%) 28. oasis1qphcvmsh6mw98vtg0cg4dvdsen5tm0g3e58eqpnp,Appload,1221281508316 (0.00%) 29. oasis1qpc66dgff4wrkj8kac4njrl2uvww3c9m5ycjwar2,Forbole-Testnet,1112551173826 (0.00%) 30. oasis1qzz9wdgt4hxfmcelfgyg8ne827a47pvh4g4jamtu,max999,1096825296654 (0.00%) 31. oasis1qz5zfcaqqud75naqln92ez7czjxf0dpyj5rmtfls,alexandr0,1096729833573 (0.00%) 32. oasis1qz4532s3lhkpju7fd3mxqfvaw98pjq5htss4g4w0,RedHead,1096422596648 (0.00%) 33. oasis1qphhz4u08xgt4wk85x4t8xv6g3nxy8fq5ue4htxr,Kumaji,1042663336329 (0.00%) 34. oasis1qrrggkf3jxa3mqjj0uzfpn8wg5hpsn5hdvusqspc,Bit Cat😻 ,959384168121 (0.00%) 35. oasis1qz6tqn2ktffz2jjlj2fwqlhw7f2rwwud5ghh54yv,WeHaveServers.com,933754283937 (0.00%) 36. oasis1qpswaz4djukz0zanquyh2vswk59up22emysq5am9,StakeService,879748845930 (0.00%) 37. oasis1qq87z733lxx87zyuutee5xpxcksqk3mj9uq3xvaq,w3coins,819152557031 (0.00%) 38. oasis1qrcf5mwjyu7hahwfjgwmywhy9cjyaqdd5vkj7ah3,ou812,418376899484 (0.00%) 39. oasis1qpxaq8thpx3y8wumn6hmfx70rvk0j9cxrgz9h27k,Colossus,410141268162 (0.00%) 40. oasis1qr4vsan850vmztuy9r2pex4fj4wxnmhvlgclg500,,327983310482 (0.00%) 41. oasis1qqgvqelw8kmcd8k4cqypcsyajkl3gq6ppc4t34n2,AnkaStake,220810245010 (0.00%) 42. oasis1qrpp8h9wl3wtqn04nvyx4dcrlz3jzqazugec7pxz,CryptoSJ.net,213393794996 (0.00%) ``` Governance proposals are not indexed and an endpoint may take some time to respond. If you encounter timeouts, consider setting up your own gRPC endpoint! [Network](./account.md#npa) selector is available for the `governance show` command. #### `cast-vote` `network governance cast-vote <proposal-id> { yes | no | abstain }` is used to submit your vote on a governance proposal. The vote can either be `yes`, `no` or `abstain`. ```shell oasis network governance cast-vote 5 yes ``` ``` Unlock your account. ?
Passphrase: You are about to sign the following transaction: Method: governance.CastVote Body: Proposal ID: 5 Vote: yes Nonce: 7 Fee: Amount: 0.0 TEST Gas limit: 1240 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: test ``` [Network and account](./account.md#npa) selectors are available for the `governance cast-vote` command. #### `create-proposal` To submit a new governance proposal, use `network governance create-proposal`. The following proposal types are currently supported: - `cancel-upgrade <proposal-id>`: Cancel a proposed network upgrade. Provide the ID of the network upgrade proposal you wish to cancel. - `parameter-change <module-name> <changes.json>`: Network parameter change proposal. Provide the consensus module name and the parameter changes JSON. Valid module names are: `staking`, `governance`, `keymanager`, `scheduler`, `registry`, and `roothash`. - `upgrade <descriptor.json>`: Network upgrade proposal. Provide a JSON file containing the upgrade descriptor. [Network and account](./account.md#npa) selectors are available for all `governance create-proposal` subcommands. ### Show Network Properties `network show` shows network properties stored in the registry, scheduler, genesis document or on chain. By passing `--height <height>` with a block number, you can obtain a historic value of the property. [Network](./account.md#npa) selector is available for the `network show` command. The command expects one of the following parameters: #### `entities` Shows all registered entities in the network registry. See the [`account entity`] command if you want to register or update your own entity. [`account entity`]: ./account.md#entity This call is not enabled on public Oasis gRPC endpoints. You will have to run your own client node to enable this functionality. #### `nodes` Shows all registered nodes in the network registry. See the [`account entity`] command to add a node to your entity. This call is not enabled on public Oasis gRPC endpoints. You will have to run your own client node to enable this functionality. #### `parameters` Shows all consensus parameters for the following modules: consensus, key manager, registry, roothash, staking, scheduler, beacon, and governance.
```shell oasis network show parameters ``` ``` === CONSENSUS PARAMETERS === backend: tendermint params: timeout_commit: 5000000000 skip_timeout_commit: false empty_block_interval: 0 max_tx_size: 32768 max_block_size: 1048576 max_block_gas: 0 max_evidence_size: 51200 state_checkpoint_interval: 100000 state_checkpoint_num_kept: 2 state_checkpoint_chunk_size: 8388608 gas_costs: tx_byte: 1 === KEYMANAGER PARAMETERS === params: gas_costs: publish_ephemeral_secret: 1000 publish_master_secret: 1000 update_policy: 1000 statuses: id: 4000000000000000000000000000000000000000000000008c5ea5e49b4bc9ac is_initialized: true is_secure: true checksum: Wd1+cYi5c2iXynGezp3ObZYY4/SHVT3MvGAbqEi2XZw= nodes: null policy: policy: serial: 8 id: 4000000000000000000000000000000000000000000000008c5ea5e49b4bc9ac enclaves: oAcyPVTJyxSpDBpV2R+AseNuqpe4oy0OaP9Gf2dpL6pAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ==: may_query: 000000000000000000000000000000000000000000000000e199119c992377cb: yJORh2eP/BKGIVTGWwyQowE65kx2EdME5DtKjbMcPxFAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ== 000000000000000000000000000000000000000000000000f80306c9858e7279: imO1np4RCgLOJauA/bz6x5aeGvcGPVJlDb44+xLt77xAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ== may_replicate: xfkp0XL+FcyMHjS2TAq8BYkOtzfvLnBN2nqNGus/58pAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ== xfkp0XL+FcyMHjS2TAq8BYkOtzfvLnBN2nqNGus/58pAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ==: may_query: 000000000000000000000000000000000000000000000000e199119c992377cb: yJORh2eP/BKGIVTGWwyQowE65kx2EdME5DtKjbMcPxFAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ== 000000000000000000000000000000000000000000000000f80306c9858e7279: imO1np4RCgLOJauA/bz6x5aeGvcGPVJlDb44+xLt77xAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ== may_replicate: signatures: public_key: 723UDX3qFpiFwAKVey/G0pvEdP8821k2Dxb5C/bdHHE= signature: Cpy8gT2cMZkKwWlCiYlVmSvxgPg+wDghPAswIqd9CNm4v8hVpcYbG2eM6PQ65722v5w6vPpy0/NM6UPLqC4qDw== public_key: JnaLeRjP7xDPJlnD2mv3+PduIWJXqwjpZsaYuV0B5A0= signature: grn2xoLMMouPJOfRMeDs0psfUN3SQmK01MMPcuRXuwWr9ZA3by7p0IgJzJb8E8jaU67ejaBxbxRoaoNGHrf4Bg== public_key: K51hXrPo8spG6QhXW/5rqw2fmq3UevBsQKnRlcTEGkU= signature: 6AOtus3hSZSkeOUGix1TZh2QfMZWaTy3UI35m5mfbSL+u7JSGquBfIHDvD2eFRFoqxzx7Jn9gS91zEf1hiBmAA== === REGISTRY PARAMETERS === enable_km_churp: true gas_costs: deregister_entity: 1000 prove_freshness: 1000 register_entity: 1000 register_node: 1000 register_runtime: 1000 runtime_epoch_maintenance: 1000 unfreeze_node: 1000 max_node_expiration: 2 enable_runtime_governance_models: entity: true runtime: true tee_features: sgx: pcs: true signed_attestations: true max_attestation_age: 1200 freshness_proofs: true max_runtime_deployments: 5 === ROOTHASH PARAMETERS === gas_costs: compute_commit: 10000 evidence: 5000 merge_commit: 10000 proposer_timeout: 5000 submit_msg: 1000 max_runtime_messages: 256 max_in_runtime_messages: 128 max_evidence_age: 100 max_past_roots_stored: 1200 === STAKING PARAMETERS === thresholds: entity: 100000000000 keymanager-churp: 10000000000000 node-compute: 100000000000 node-keymanager: 100000000000 node-observer: 100000000000 node-validator: 100000000000 runtime-compute: 50000000000000 runtime-keymanager: 50000000000000 debonding_interval: 336 reward_schedule: until: 90000 scale: 283 signing_reward_threshold_numerator: 3 signing_reward_threshold_denominator: 4 commission_schedule_rules: rate_change_interval: 1 rate_bound_lead: 336 max_rate_steps: 10 max_bound_steps: 10 min_commission_rate: 0 slashing: consensus-equivocation: amount: 100000000000 freeze_interval: 18446744073709551615 
consensus-light-client-attack: amount: 100000000000 freeze_interval: 18446744073709551615 gas_costs: add_escrow: 1000 allow: 1000 amend_commission_schedule: 1000 burn: 1000 reclaim_escrow: 1000 transfer: 1000 withdraw: 1000 min_delegation: 100000000000 min_transfer: 10000000 min_transact_balance: 0 allow_escrow_messages: true max_allowances: 16 fee_split_weight_propose: 2 fee_split_weight_vote: 1 fee_split_weight_next_propose: 1 reward_factor_epoch_signed: 1 reward_factor_block_proposed: 0 === SCHEDULER PARAMETERS === min_validators: 30 max_validators: 120 max_validators_per_entity: 1 reward_factor_epoch_election_any: 0 === BEACON PARAMETERS === backend: vrf vrf_parameters: alpha_hq_threshold: 20 interval: 600 proof_delay: 400 gas_costs: vrf_prove: 1000 === GOVERNANCE PARAMETERS === gas_costs: cast_vote: 1000 submit_proposal: 1000 min_proposal_deposit: 10000000000000 voting_period: 168 stake_threshold: 68 upgrade_min_epoch_diff: 336 upgrade_cancel_min_epoch_diff: 192 enable_change_parameters_proposal: true allow_vote_without_entity: true allow_proposal_metadata: true ``` By passing `--format json`, the output is formatted as JSON. #### `paratimes` Shows all registered ParaTimes in the network registry. #### `validators` Shows all IDs of the nodes in the validator set. #### `native-token` Shows information about the network's native token, such as its symbol, the number of decimal places, total supply, debonding period and staking thresholds. ```shell oasis network show native-token ``` ``` Network: mainnet Token's ticker symbol: ROSE Token's base-10 exponent: 9 Total supply: 10000000000.0 ROSE Common pool: 853509298.875305407 ROSE Last block fees: 0.0 ROSE Governance deposits: 0.0 ROSE Debonding interval: 336 epoch(s) === STAKING THRESHOLDS === entity: 100.0 ROSE node-validator: 100.0 ROSE node-compute: 100.0 ROSE node-keymanager: 100.0 ROSE runtime-compute: 50000.0 ROSE runtime-keymanager: 50000.0 ROSE ``` We can see that the token's ticker symbol is ROSE and that 1 token corresponds to 10^9 (i.e. one billion) base units. Next, we can observe that the **total supply** is 10 billion tokens and that about 850 million tokens are in the **common pool**. The **staking thresholds** fields are the following: - `entity`: The amount needed to be staked when registering an entity. - `node-validator`, `node-compute`, `node-keymanager`: The amount needed to be staked to the corresponding entity for a node to run as a validator, a compute node or a key manager. This is the amount that will be slashed in case of inappropriate node behavior. - `runtime-compute`, `runtime-keymanager`: The amount needed to be staked to an entity for [registering a new ParaTime or a key manager]. Keep in mind that a ParaTime cannot be unregistered and there is no way of getting the staked assets back. For example, if you wanted to register an entity running a validator and a compute node, you would need to stake (i.e. *escrow*) at least 300 tokens. Apart from the `node-compute` threshold above, a ParaTime may require additional **ParaTime-specific escrow** for running a compute node. Use the [`network show <id>`](#show-id) command to see it. [registering a new ParaTime or a key manager]: ./paratime.md#register #### `gas-costs` Shows minimum gas costs for each consensus transaction.
```shell oasis network show gas-costs ``` ``` Gas costs for network mainnet: - add_escrow: 1000 - allow: 1000 - amend_commission_schedule: 1000 - burn: 1000 - reclaim_escrow: 1000 - transfer: 1000 - withdraw: 1000 ``` Above, we can see that the [maximum amount of gas](./account.md#gas-limit) our transaction can spend must be set to at least 1000 **gas units**, otherwise it will be rejected by the network. #### `committees` Shows runtime committees. ```shell oasis network show committees ``` ``` === COMMITTEE === Paratime: sapphire(000000000000000000000000000000000000000000000000f80306c9858e7279) Height: 19241881 ENTITY ID NODE ID ROLE T5k7PtOR01oZrdnZveDpO9AFpMUhEREZk7WSSfm8Gtg= RT7JKF5T1hlKXTYZsp4SL07f4IHG6O0SQppf8wnfr+Y= worker oOVxTw2hEYgYvSrTjjKODCt/Soy3OLcQV9YBy/PF/xY= Io86AKuu7YDnya+fVnldHBybFggwCoXeQPu3Wj8kHW4= worker sDi9ZxHYB+rHTpVh4abNFXDMRSecfGe4QzbyGK8ZgQg= FEMUVK91HEULeQpMZj07jN2giNKjd6HPK3VdjsIQcjY= worker RMa2ER0wvraR+4u5QOGOrRTwmMVOYNcOot7sFppPRP0= DW4/7kVEumpZV1CmntaQBncSV36t6QoE0QwQd5pLIZU= worker 21+iPu/omYBN7X5cUY4QnD4b9VVuAiW/u8uABqt2VjM= x8DFPc8E9BZxLJKbh51xj41es3R53AkJERfMEyRCrbk= worker 7nCBfl1vRS4kn7G2yJZeZdwE8OFA4avUphWdCRrFhJM= drsZxbpqG5h+4tq/JKWqmoVGXmQUirVCjD8GLBuNj9M= worker iGs5cCGos/I5KQv82MwgGMNENaxy3bhuWdFXtINcu0U= HH/jnBO0AqHocNg4aS7MiMjiKmta1VP0ceRc0iILMAw= worker ko5wr5SMqhKb+P1kimM1EF/T4SqvW/WjSOwPHigQl+k= aJFHeID4Q7qUfMa42dRwaa9PQrZ/cVDiE3WNt4bQjo0= worker UDV5FoaIkssdSFWC4asZtxvsyagoyrIS5rPX8p/np2U= 86y1tHzH9GlxvS0Bneh5l2AUDXYO6VMrzx75JvJViNE= worker BdSzNycR8Y3MdHooxU0vtOEPr3ZG9KD5p8wxHtvueUU= +JOOp6OMmzldm9Dy7Cnbl/FE66bNkU0TJquOYnQIv7s= worker nw+8VTk+LbrZ4mSmeKYuQGu/swFgAOpPB5ls4STzh1g= XCiPWblWT3n1aN2NI0vslmlfV9GOkxE2Ih2SI66ZR38= backup-worker J2nwlXuYEPNZ0mMH2Phg5RofbZzj65xDvQMNdy9Ji0E= ITrwEekdZNqXrEzvw3GT6Q3AtHDd51f19nD2nVU/f0c= backup-worker sDi9ZxHYB+rHTpVh4abNFXDMRSecfGe4QzbyGK8ZgQg= FEMUVK91HEULeQpMZj07jN2giNKjd6HPK3VdjsIQcjY= backup-worker 6XvrCu3wqMKYc5a0d5UZzG7ZGeb3j//MzcqUMUHkMCk= C+AWG4iXz590kCdbO/DAb4sBZr+umjyp683ucmawdM4= backup-worker T5k7PtOR01oZrdnZveDpO9AFpMUhEREZk7WSSfm8Gtg= RT7JKF5T1hlKXTYZsp4SL07f4IHG6O0SQppf8wnfr+Y= backup-worker RFpWeibJDHnfgoq9mO1BJcxyDbIstDi22ZBhvgXvE1Y= YTHRajyCrIwOiys1ktOarSUyV1NVolvAw6DQqhaXg6w= backup-worker LQaKibf9tD8KXO210NhiDUHzXTsRIeK5l/3ITmfg118= 7/WyW54TO+31VkXZcj4xIAgv5kWxR6azSEjwrSAte3Y= backup-worker hNBVs2ay1IWvufQwX0TbYA6X4ocKaMpzpyaMTHggi6Q= bKvnByvx8qwF41EqOG6wdmatGzz/qT2nbHC8i8VM65k= backup-worker UDV5FoaIkssdSFWC4asZtxvsyagoyrIS5rPX8p/np2U= 86y1tHzH9GlxvS0Bneh5l2AUDXYO6VMrzx75JvJViNE= backup-worker 9yOiPY3NnNMpEzB+6XS/OqahzFwwX8mFhmT2fvbxlVI= mIKW8IEDWZZxCRtDTvWQLpslvfUBwWxVAxLakwq25C8= backup-worker nZoBfua/odt7fZThkfzGQo2oBp8UnEj+VpG52SB8onM= 70ibfZrA3+d9O4qNnecsXceTCvsLTywOjNQfN83MYQQ= backup-worker 1JttHp0rBBBHDOpPl8fAiLTcN9tUzxJGjk7llFcvhZQ= rK6mrmCRi2dYPNraNwqg2jgEVi4sd6hi53JmT2HVGxQ= backup-worker 4gbOOU09bcyvM53Up1lTnP+sLb0feniJu0OcUUPCBSs= +zVbgQqOdY90Z2NQKXFByNT0OwLxj/Ho4j4qT5u2yKM= backup-worker fhXoWYc8Ml153jBBvFrQ4CRY7vnbCk9j269rVLeg7cE= NM/XberrrMrvavGDCYc9CX8HPT1TPz1YHWuBaDArDHc= backup-worker WXs7ElBlm30la2fG4oZDpubeFu5sKkjDVeflWo+YuIA= ST2B7aeKSspiFNy325rIh8alQIRVCDyZ5t8f2NKN3PU= backup-worker === COMMITTEE === Paratime: emerald(000000000000000000000000000000000000000000000000e2eaa99fc008f87f) Height: 19241881 ENTITY ID NODE ID ROLE L4OY/0mNEduAS9z6jh2xLp72b8gZURgcrd76AOiRIXk= fGqOEIbBxaM2YmMcKq4PbpUOd9+s3TcS4AxaTChGuz0= worker g9UqzvW6JvfKrflRKbCPBpm41rH/O+4apCK+KkD2tG4= 4wqhqp5wDAfvQxNZUUSDmM2fVYrkxKq/tqjnnCe72Uw= worker PrCDIA3uyoLqNOZJ1PrRWdyviFn6K0PWFz91qQ9QyTw= 
o8p0FlVg1Wlv+ZLKojWS7c0P4xZHklFt9frLW4B4QlE= worker bTok0el8GbmUzTAAgcQ78uww/TsgeWwXpM3N2S49qBQ= cb/avZHoAQkZiHGzkjJxEkVsqiiiJzL/5fHp1TsDTdY= worker BTAJNDyd6/UQ+pfhTDdPGsothzJ+C5/C/g52a3DIlMw= RV/KmNN7oWH7qDjx/7kn+o9nsyd52CPUauF9MGvxl70= worker 4gbOOU09bcyvM53Up1lTnP+sLb0feniJu0OcUUPCBSs= +zVbgQqOdY90Z2NQKXFByNT0OwLxj/Ho4j4qT5u2yKM= worker RMa2ER0wvraR+4u5QOGOrRTwmMVOYNcOot7sFppPRP0= 4mcgJKfEa6RqWh9NqSJ+/yfs6X8dU0tG1dI1L0lFNzM= worker 6XvrCu3wqMKYc5a0d5UZzG7ZGeb3j//MzcqUMUHkMCk= VtP8ubAEY1p8iOshGDUqxrZGstnswozt7h1wlMAvba8= worker 1YeMK0NAZtE1ZK8u6KWddkKGZoD5VLfG9EAZI3b8HzE= bhVU8RdrUXE7XgI7hIIdMFOhsomBFmPEnNU9zFPTHzY= worker cVGc1fI6xu0WeI2GUrLIwDpH/JtBE3PwD+P66YkSKg8= giemTZIHjRmBA3FzYMK01eokfs8L/VmusK3M5+lUdGc= worker 4gbOOU09bcyvM53Up1lTnP+sLb0feniJu0OcUUPCBSs= +zVbgQqOdY90Z2NQKXFByNT0OwLxj/Ho4j4qT5u2yKM= backup-worker p1VsfSsedbKn/5GzkPsr15XD+/AOIfbPda1/2yT84N4= S2eoEMq6Qzms5Yd/fIOhSEacHp7Pym0BfgBEmsijEDw= backup-worker bTok0el8GbmUzTAAgcQ78uww/TsgeWwXpM3N2S49qBQ= cb/avZHoAQkZiHGzkjJxEkVsqiiiJzL/5fHp1TsDTdY= backup-worker sDi9ZxHYB+rHTpVh4abNFXDMRSecfGe4QzbyGK8ZgQg= Dpj1ibIMtTHMh/i5qh0eZcGGmOVODELSHvg/ZFBIPbY= backup-worker UFXCpcvXBOHbxtObG4psGcn+LgZOedvDDUAqVengpPk= rczLI7bYocBYyQ+bsnHPNPKc+SJpunQiuxip/tNlolw= backup-worker RMa2ER0wvraR+4u5QOGOrRTwmMVOYNcOot7sFppPRP0= 4mcgJKfEa6RqWh9NqSJ+/yfs6X8dU0tG1dI1L0lFNzM= backup-worker kupW3Pt0XMeERSkdDWyZMU4oZrk0wGysVXVyqX3rylc= BZvhmvc1YZpXteI2nPhBDyC2jxi04MHEbKXB1DpTM1w= backup-worker 6XvrCu3wqMKYc5a0d5UZzG7ZGeb3j//MzcqUMUHkMCk= VtP8ubAEY1p8iOshGDUqxrZGstnswozt7h1wlMAvba8= backup-worker TWLcdgEfahwyFPTC7nN3rZacPO2aXlLfZIDt7uXbzEI= 5uD3zbTZGhivYt1ZQw/Yr/Bcg2t6zEdyR9Ogg5ipkho= backup-worker oOVxTw2hEYgYvSrTjjKODCt/Soy3OLcQV9YBy/PF/xY= jVPUq8aUDKe9jawIs7wPB4NBml27ft5kICIY7SBh/yQ= backup-worker WXs7ElBlm30la2fG4oZDpubeFu5sKkjDVeflWo+YuIA= RzMsfs49HQDT5fIVKQ+flR/sCJjrkKDPsc5ZS6O7VdM= backup-worker 1YeMK0NAZtE1ZK8u6KWddkKGZoD5VLfG9EAZI3b8HzE= bhVU8RdrUXE7XgI7hIIdMFOhsomBFmPEnNU9zFPTHzY= backup-worker cVGc1fI6xu0WeI2GUrLIwDpH/JtBE3PwD+P66YkSKg8= giemTZIHjRmBA3FzYMK01eokfs8L/VmusK3M5+lUdGc= backup-worker nZoBfua/odt7fZThkfzGQo2oBp8UnEj+VpG52SB8onM= urRVg0K+6UhuxOnRE1/wIiPuuTu188orpsLDTz5NFTw= backup-worker UkwjS1YvEfHx9b6MMT5Q1WvCY3aWn2lxRDsB/Pw+zGk= CdkWAAnsdYg0g6yl90Eiqdwqer6NK9yIxWWvPR3fwD8= backup-worker PrCDIA3uyoLqNOZJ1PrRWdyviFn6K0PWFz91qQ9QyTw= o8p0FlVg1Wlv+ZLKojWS7c0P4xZHklFt9frLW4B4QlE= backup-worker nw+8VTk+LbrZ4mSmeKYuQGu/swFgAOpPB5ls4STzh1g= XCiPWblWT3n1aN2NI0vslmlfV9GOkxE2Ih2SI66ZR38= backup-worker BTAJNDyd6/UQ+pfhTDdPGsothzJ+C5/C/g52a3DIlMw= RV/KmNN7oWH7qDjx/7kn+o9nsyd52CPUauF9MGvxl70= backup-worker 1JttHp0rBBBHDOpPl8fAiLTcN9tUzxJGjk7llFcvhZQ= dN/aIe69HWFUHtOy/oqWdp1jw4fzOIljXLbMI79ilTo= backup-worker uxSkvFu6x4MIYV+M1VrDu3m/qbADs/1Ae3mWAcEmnaQ= 0qOmNfZvPDnjyzPU/97x1FWsl0d3UsImNiSNXd7lE/0= backup-worker 21+iPu/omYBN7X5cUY4QnD4b9VVuAiW/u8uABqt2VjM= x8DFPc8E9BZxLJKbh51xj41es3R53AkJERfMEyRCrbk= backup-worker BdSzNycR8Y3MdHooxU0vtOEPr3ZG9KD5p8wxHtvueUU= CZgE+WU9T8YpTnPRosJYFqos9S8W53jGQKeRrRxMeQc= backup-worker FDqRmM1FyhaGas+lquWmGAKgMsU2rj7UESAlnOHtxco= qnRAoObwndP/P9otTzQ/9Z2+vmSQ1Pch7G4tGBSTxCo= backup-worker ko5wr5SMqhKb+P1kimM1EF/T4SqvW/WjSOwPHigQl+k= aJFHeID4Q7qUfMa42dRwaa9PQrZ/cVDiE3WNt4bQjo0= backup-worker AX8zJsi0DnrrdwCi/8JJptXSy62kZgQcAYKlCYD4oN8= BqCqG8wuMVdnONN5bysITf0mYQD5FD+TEF5wrJttsSQ= backup-worker aiTgGyYB2l4uAMG93Ajq5S4EXPIRkYDg1ICLjWD45Ck= pGkBYly79y2gJUEOau8XN04ErcfwrObO+W5+CYXJW5k= backup-worker kfr2A6K6TlvhQm4nz88Hczzkd2Aq5PlkxSpnmUUBAFs= KUjJArjDn1TtZOi6AgYki1fUTC2PrU0LJFJ4ppHt3NQ= backup-worker 
cgXH87+sYoe2mXsdDKWCyRvWZ8JqnVnxJkCq09LlBoI= 6ioksdd5uKtlNnPmCpu1NYohfamlb/QHiD8EhMuTbfw= backup-worker N+3/m12DoAqzFS0yF3R/kXSkSj7pZnWhq8nRCo/MKwk= zibJtnvTpDotvOK3a3nNYmlYwg/K4TdZB781TQCEAT4= backup-worker /ylWdaid2DDlI4BMVkX6gAR6eaBYlLolHbjCmHitrzc= 9sk2Nq2DFGv932dnavOIr02RnfQUOngggsn2HUEEfRg= backup-worker === COMMITTEE === Paratime: cipher(000000000000000000000000000000000000000000000000e199119c992377cb) Height: 19241881 ENTITY ID NODE ID ROLE bTok0el8GbmUzTAAgcQ78uww/TsgeWwXpM3N2S49qBQ= LI48Ol5Is045ijOAjiCiKFHKOyzwuGL6mMTr3F5cMdM= worker 1YeMK0NAZtE1ZK8u6KWddkKGZoD5VLfG9EAZI3b8HzE= /dBEDGDBCu6TF5w9crktZ9aloTBpOGGSa6A8uVNunAo= worker sDi9ZxHYB+rHTpVh4abNFXDMRSecfGe4QzbyGK8ZgQg= FEMUVK91HEULeQpMZj07jN2giNKjd6HPK3VdjsIQcjY= worker ko5wr5SMqhKb+P1kimM1EF/T4SqvW/WjSOwPHigQl+k= aJFHeID4Q7qUfMa42dRwaa9PQrZ/cVDiE3WNt4bQjo0= worker 1JttHp0rBBBHDOpPl8fAiLTcN9tUzxJGjk7llFcvhZQ= kgTUu0eXQWfPaE8Li8NgXf0bsjXdupxIfM8moGrTMK4= worker UDV5FoaIkssdSFWC4asZtxvsyagoyrIS5rPX8p/np2U= VonN99SPIvJ6Aq8dS5JQG9g50svyuLwMHjXZYAAtLKo= backup-worker RMa2ER0wvraR+4u5QOGOrRTwmMVOYNcOot7sFppPRP0= k0g6YN7CFSgjaPU1EjVWXhzPVmEset+3sQ3c3NJ8Ys4= backup-worker cgXH87+sYoe2mXsdDKWCyRvWZ8JqnVnxJkCq09LlBoI= 6ioksdd5uKtlNnPmCpu1NYohfamlb/QHiD8EhMuTbfw= backup-worker nw+8VTk+LbrZ4mSmeKYuQGu/swFgAOpPB5ls4STzh1g= XCiPWblWT3n1aN2NI0vslmlfV9GOkxE2Ih2SI66ZR38= backup-worker WazI78lMcmjyCH5+5RKkkfOTUR+XheHIohlqMu+a9As= uvPTOOyC+Kb+Hl3Pw34S3/YC9IerAdZncyW08LIaTtw= backup-worker PrCDIA3uyoLqNOZJ1PrRWdyviFn6K0PWFz91qQ9QyTw= vI2QpEG/5LYwU+Fp52QsYxdRMRoy9j+pdJSb23tW3ng= backup-worker YDHYz/R+Y7pCodhmgkCqzoqqN54gzRfVE5fjZriX+RI= 7Rz1yAFZcAD06OOTZxx5LLDg2L5+1Me4304xZB8cgxU= backup-worker zAhtGrpk1L3bBLaP5enm3natUTCoj7MEFryq9+MG4tE= PsfFUQrXqGoFtowWZcoc8ilh8xHP94LvNYHvqQHpw1E= backup-worker wCGlLKUiTNr9Ba49YA6dDuqm9rdtPcKKsKzHqMBn+rc= vlG7mUtP7s2PsnARfyrI3mW/q4pcqRi3SHk2GxmQ2NM= backup-worker J2nwlXuYEPNZ0mMH2Phg5RofbZzj65xDvQMNdy9Ji0E= ITrwEekdZNqXrEzvw3GT6Q3AtHDd51f19nD2nVU/f0c= backup-worker oOVxTw2hEYgYvSrTjjKODCt/Soy3OLcQV9YBy/PF/xY= Io86AKuu7YDnya+fVnldHBybFggwCoXeQPu3Wj8kHW4= backup-worker ``` #### `` The provided ID can be one of the following: - If the [ParaTime ID] is provided, Oasis CLI shows ParaTime information stored in the network's registry. 
For example, at time of writing information on Sapphire stored in the Mainnet registry were as follows: ```shell oasis network show 000000000000000000000000000000000000000000000000f80306c9858e7279 ``` ```json { "v": 3, "id": "000000000000000000000000000000000000000000000000f80306c9858e7279", "entity_id": "TAv9qXjV4yBphnKLJcNkzois1TLoYUjaRPrMfY58Apo=", "genesis": { "state_root": "c672b8d1ef56ed28ab87c3622c5114069bdd3ad7b8f9737498d0c01ecef0967a", "round": 0 }, "kind": 1, "tee_hardware": 1, "key_manager": "4000000000000000000000000000000000000000000000008c5ea5e49b4bc9ac", "executor": { "group_size": 5, "group_backup_size": 7, "allowed_stragglers": 1, "round_timeout": 2, "max_messages": 256, "min_live_rounds_percent": 90, "min_live_rounds_eval": 20, "max_liveness_fails": 4 }, "txn_scheduler": { "batch_flush_timeout": 1000000000, "max_batch_size": 1000, "max_batch_size_bytes": 1048576, "propose_batch_timeout": 2 }, "storage": { "checkpoint_interval": 100000, "checkpoint_num_kept": 2, "checkpoint_chunk_size": 8388608 }, "admission_policy": { "any_node": {} }, "constraints": { "executor": { "backup-worker": { "validator_set": {}, "max_nodes": { "limit": 1 }, "min_pool_size": { "limit": 7 } }, "worker": { "validator_set": {}, "max_nodes": { "limit": 1 }, "min_pool_size": { "limit": 7 } } } }, "staking": { "thresholds": { "node-compute": "5000000000000000" }, "min_in_message_fee": "0" }, "governance_model": "entity", "deployments": [ { "version": { "minor": 4 }, "valid_from": 20944, "tee": "oWhlbmNsYXZlc4GiaW1yX3NpZ25lclggQCXat+vaH77MTjY3YG4CEhTQ9BxtBCL9N4sqi4iBhFlqbXJfZW5jbGF2ZVgg3mqV02+CDfyth1fNyaR8jo3rVp024JOBkBGnjtLPypM=" }, { "version": { "minor": 5, "patch": 2 }, "valid_from": 23476, "tee": "oWhlbmNsYXZlc4GiaW1yX3NpZ25lclggQCXat+vaH77MTjY3YG4CEhTQ9BxtBCL9N4sqi4iBhFlqbXJfZW5jbGF2ZVggMBEUvUKRVLByqR+3a/KVNkkMjorOJLTw2Znb36baBQY=" } ] } ``` Network validators may be interested in the **ParaTime staking threshold** stored inside the `thresholds` field: ```shell oasis network show 000000000000000000000000000000000000000000000000f80306c9858e7279 | jq '.staking.thresholds."node-compute"' ``` ``` "5000000000000000" ``` In the example above, the amount to run a Sapphire compute node on the Mainnet is 5,000,000 tokens and should be considered on top of the consensus-layer validator staking thresholds obtained by the [`network show native-token`](#show-native-token) command. - If the entity ID is provided, Oasis CLI shows information on the entity and its corresponding nodes in the network registry. For example: ```shell oasis network show xQN6ffLSdc51EfEQ2BzltK1iWYAw6Y1CkBAbFzlhhEQ= ``` ```json === ENTITY === Entity Address: oasis1qzp84num6xgspdst65yv7yqegln6ndcxmuuq8s9w Entity ID: xQN6ffLSdc51EfEQ2BzltK1iWYAw6Y1CkBAbFzlhhEQ= Stake: 11504987.432765658 ROSE Commission: 20.0% === NODES === Node Address: oasis1qqzjrsadvr2q5qq5ev6xyspjses8cjxxdcrth0g7 Node ID: Kb6opWKGbJHL0LK2Lto+m5ROIAXLhIr1lxQz0/kAOUM= Node Roles: validator Software Version: 24.1 Node Status: Expiration Processed: false Freeze End Time: 0 Election Eligible After: 38659 ``` By passing `--format json`, the output is formatted as JSON. - If the node ID is provided, Oasis CLI shows detailed information of the node such as the Oasis Core software version, the node's role, supported ParaTimes, trusted execution environment support and more. 
For example: ```shell oasis network show Kb6opWKGbJHL0LK2Lto+m5ROIAXLhIr1lxQz0/kAOUM= ``` ```json { "v": 2, "id": "Kb6opWKGbJHL0LK2Lto+m5ROIAXLhIr1lxQz0/kAOUM=", "entity_id": "xQN6ffLSdc51EfEQ2BzltK1iWYAw6Y1CkBAbFzlhhEQ=", "expiration": 23482, "tls": { "pub_key": "SslsTv8Cq/UvKHPk8w1S/Ag/wwsscqSa05bqDAVOR1I=", "next_pub_key": "js0fhS02f+G3kW7uu+N47lzcfxjbBEPkPibTfeQrJTA=", "addresses": null }, "p2p": { "id": "e9fyvK+2FwU805dag81qOsrKHaO5b+nQnHyzEySi258=", "addresses": null }, "consensus": { "id": "3K2Vx3gTop+/GoM9Zh+ZSGPwVb2BRTFtcAo6xPo4pb4=", "addresses": [ "e9fyvK+2FwU805dag81qOsrKHaO5b+nQnHyzEySi258=@125.122.166.210:26656" ] }, "vrf": { "id": "3z85R+Rdud27NUTMFf4gO4NBQbMEnWqnhHhI6AtNx74=" }, "runtimes": null, "roles": "validator", "software_version": "22.2.7" } ``` [ParaTime ID]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/runtime/identifiers.md ### Status of the Network's Endpoint `network status` will connect to the gRPC endpoint and request extensive status report from the Oasis Core node. Node operators will find important information in the report such as: - the last proposed consensus block, - whether the node's storage is synchronized with the network, - the Oasis Core software version, - connected peers, - similar information as above for each ParaTime, if the node is running it. At time of writing, the following status of the official gRPC endpoint for Mainnet was reported: ```shell oasis network status ``` ```json === NETWORK STATUS === Network: mainnet Node's ID: mVyn1iZkOAlP7AQRuhYHahAkUEGJmywY1G8raR5u/3I= Core version: 23.0.9 ==== Consensus ==== Status: ready Version: 7.0.0 Chain context: bb3d748def55bdfb797a2ac53ee6ee141e54cd2ab2dc2375f4a0703a178e6e55 Latest height: 18458209 (2024-03-21 10:47:52 +0100 CET) Latest block hash: eb3fbe258b3066935de32158ac1b0cf2d3f79f5558682eee8f04f3afc80374ae Latest epoch: 30750 Is validator: false Registration: false ==== ParaTimes ==== cipher (000000000000000000000000000000000000000000000000e199119c992377cb): Kind: compute Is confidential: true Status: ready Latest round: 1612018 (2024-03-21 10:47:52 +0100 CET) Last finalized round: 1612018 Storage status: syncing rounds Active version: 3.0.2 Available version(s): 2.6.2, 3.0.2 Number of peers: 30 emerald (000000000000000000000000000000000000000000000000e2eaa99fc008f87f): Kind: compute Is confidential: false Status: ready Latest round: 9509250 (2024-03-21 10:47:52 +0100 CET) Last finalized round: 9509250 Storage status: syncing rounds Active version: 11.0.0 Available version(s): 10.0.0, 11.0.0 Number of peers: 29 sapphire (000000000000000000000000000000000000000000000000f80306c9858e7279): Kind: compute Is confidential: true Status: ready Latest round: 2958958 (2024-03-21 10:47:52 +0100 CET) Last finalized round: 2958958 Storage status: syncing rounds Active version: 0.7.0 Available version(s): 0.7.0 Number of peers: 39 ``` By passing `--format json`, the output is formatted as JSON. [Network](./account.md#npa) selector is available for the `network status` command. ### State Sync Trust `network trust` will show suggested trust for the consensus [state sync]. For example: ```shell ./oasis network trust --network testnet ``` ```json Trust period: 240h0m0s Trust height: 29103886 Trust hash: ecff618ed2e8991e3e81eb37b2b61cb6990104c170f0fe34b4b2268b70f98fb5 WARNING: Cannot be trusted unless the CLI is connected to the RPC endpoint you control. 
``` [state sync]: https://github.com/oasisprotocol/docs/blob/main/docs/node/run-your-node/advanced/sync-node-using-state-sync.md --- ## ParaTime # Managing Your ParaTimes The `paratime` command lets you manage your ParaTime configurations bound to a specific [network]. If you are a ParaTime developer, the command allows you to register a new ParaTime into the public network's registry. The command also supports examining a specific block and a transaction inside the ParaTime and printing various validator-related statistics. When running the Oasis CLI for the first time, it will automatically configure official Oasis ParaTimes running on the [Mainnet] and [Testnet] networks. ## Add a ParaTime Invoke `paratime add <network> <name> <paratime-id>` to add a new ParaTime to your Oasis CLI configuration. Besides the name of the corresponding network and the unique ParaTime name inside that network, you will also need to provide the [ParaTime ID]. This is a unique identifier of the ParaTime on the network, and it remains the same even when the network and ParaTime upgrades occur. You can always check the IDs of the official Oasis ParaTimes on the respective [Mainnet] and [Testnet] pages. Each ParaTime also has a native token denomination symbol defined with a specific number of decimal places, which you will need to specify. ```shell oasis paratime add testnet sapphire2 000000000000000000000000000000000000000000000000a6d1e3ebf60dff6d ``` ``` ? Description: ? Denomination symbol: TEST ? Denomination decimal places: 18 ``` You can also enable [non-interactive mode](account.md#y) and pass the `--num-decimals`, `--symbol` and `--description` parameters directly: ```shell oasis paratime add testnet sapphire2 000000000000000000000000000000000000000000000000a6d1e3ebf60dff6d --num-decimals 18 --symbol TEST --description "Testnet Sapphire 2" -y ``` Decimal places of the native and ParaTime token may differ! Emerald and Sapphire use **18 decimals** for compatibility with Ethereum tooling. The Oasis Mainnet and Testnet consensus layer tokens and the token native to Cipher have **9 decimals**. Configuring the wrong number of decimal places will lead to an incorrect amount of tokens being deposited, withdrawn or transferred from or into the ParaTime! If you configured your network with the [`network add-local`] command, then all registered ParaTimes of that network will be detected and added to your Oasis CLI config automatically. [network]: ./network.md [`network add-local`]: ./network.md#add-local [ParaTime ID]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/runtime/identifiers.md [Mainnet]: https://github.com/oasisprotocol/docs/blob/main/docs/node/network/mainnet [Testnet]: https://github.com/oasisprotocol/docs/blob/main/docs/node/network/testnet ## List ParaTimes Invoke `paratime list` to list all configured ParaTimes across the networks.
For example, at time of writing this section the following ParaTimes were preconfigured by the Oasis CLI: ```shell oasis paratime list ``` ``` NETWORK PARATIME ID DENOMINATION(S) mainnet cipher 000000000000000000000000000000000000000000000000e199119c992377cb ROSE[9] (*) mainnet emerald 000000000000000000000000000000000000000000000000e2eaa99fc008f87f ROSE[18] (*) mainnet sapphire (*) 000000000000000000000000000000000000000000000000f80306c9858e7279 ROSE[18] (*) testnet cipher 0000000000000000000000000000000000000000000000000000000000000000 TEST[9] (*) testnet emerald 00000000000000000000000000000000000000000000000072c8215e60d5bca7 TEST[18] (*) testnet pontusx_dev 0000000000000000000000000000000000000000000000004febe52eb412b421 EUROe[18] (*) TEST[18] testnet pontusx_test 00000000000000000000000000000000000000000000000004a6f9071c007069 EUROe[18] (*) TEST[18] testnet sapphire (*) 000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c TEST[18] (*) ``` The [default ParaTime](#set-default) for each network is marked with the `(*)` sign. ParaTimes on this list are configured inside your Oasis CLI instance. They may not actually exist on the network. ## Remove a ParaTime To remove a configuration of a ParaTime for a specific network, use `paratime remove `. For example, let's remove the [previously added](#add) ParaTime: ```shell oasis paratime list ``` ``` NETWORK PARATIME ID DENOMINATION(S) mainnet cipher 000000000000000000000000000000000000000000000000e199119c992377cb ROSE[9] (*) mainnet emerald (*) 000000000000000000000000000000000000000000000000e2eaa99fc008f87f ROSE[18] (*) mainnet sapphire 000000000000000000000000000000000000000000000000f80306c9858e7279 ROSE[18] (*) testnet cipher 0000000000000000000000000000000000000000000000000000000000000000 TEST[9] (*) testnet emerald (*) 00000000000000000000000000000000000000000000000072c8215e60d5bca7 TEST[18] (*) testnet pontusx_dev 0000000000000000000000000000000000000000000000004febe52eb412b421 EUROe[18] (*) TEST[18] testnet pontusx_test 00000000000000000000000000000000000000000000000004a6f9071c007069 EUROe[18] (*) TEST[18] testnet sapphire 000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c TEST[18] (*) testnet sapphire2 000000000000000000000000000000000000000000000000a6d1e3ebf60dff6d TEST[18] (*) ``` ```shell oasis paratime remove testnet sapphire2 ``` ```shell oasis paratime list ``` ``` NETWORK PARATIME ID DENOMINATION(S) mainnet cipher 000000000000000000000000000000000000000000000000e199119c992377cb ROSE[9] (*) mainnet emerald (*) 000000000000000000000000000000000000000000000000e2eaa99fc008f87f ROSE[18] (*) mainnet sapphire 000000000000000000000000000000000000000000000000f80306c9858e7279 ROSE[18] (*) testnet cipher 0000000000000000000000000000000000000000000000000000000000000000 TEST[9] (*) testnet emerald (*) 00000000000000000000000000000000000000000000000072c8215e60d5bca7 TEST[18] (*) testnet pontusx_dev 0000000000000000000000000000000000000000000000004febe52eb412b421 EUROe[18] (*) TEST[18] testnet pontusx_test 00000000000000000000000000000000000000000000000004a6f9071c007069 EUROe[18] (*) TEST[18] testnet sapphire 000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c TEST[18] (*) ``` ## Set Default ParaTime To change the default ParaTime for Oasis CLI transactions on the specific network, use `paratime set-default `. 
For example, to set the Cipher ParaTime default on the Testnet, run: ```shell oasis paratime set-default testnet cipher ``` ```shell oasis paratime list ``` ``` NETWORK PARATIME ID DENOMINATION(S) mainnet cipher 000000000000000000000000000000000000000000000000e199119c992377cb ROSE[9] (*) mainnet emerald 000000000000000000000000000000000000000000000000e2eaa99fc008f87f ROSE[18] (*) mainnet sapphire (*) 000000000000000000000000000000000000000000000000f80306c9858e7279 ROSE[18] (*) testnet cipher (*) 0000000000000000000000000000000000000000000000000000000000000000 TEST[9] (*) testnet emerald 00000000000000000000000000000000000000000000000072c8215e60d5bca7 TEST[18] (*) testnet pontusx_dev 0000000000000000000000000000000000000000000000004febe52eb412b421 EUROe[18] (*) TEST[18] testnet pontusx_test 00000000000000000000000000000000000000000000000004a6f9071c007069 EUROe[18] (*) TEST[18] testnet sapphire 000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c TEST[18] (*) ``` ## Show Use `paratime show` to investigate a specific ParaTime block or other parameters. ### `` Providing the block round or `latest` literal will print its header and other information. ```shell oasis paratime show 5850612 ``` ``` Network: mainnet ParaTime: emerald Round: 5850612 Version: 0 Namespace: 000000000000000000000000000000000000000000000000e2eaa99fc008f87f Timestamp: 2023-05-29T11:21:20Z Type: 1 Previous: 3e91bd4fc60d8a2cc03dc50c87ff532bef5703fedc35bba8aed4d8980526bb51 I/O root: d16db82426c93e2671b8fbe74db56d17fbc88800e93490dd0a6feae11d35a9a8 State root: 2c1bc5c89c59bee77511e7a58e7494bb815ab73bb63333da1c15d171e48b79b8 Messages (out): c672b8d1ef56ed28ab87c3622c5114069bdd3ad7b8f9737498d0c01ecef0967a Messages (in): c672b8d1ef56ed28ab87c3622c5114069bdd3ad7b8f9737498d0c01ecef0967a Transactions: 1 ``` To show the details of the transaction stored inside the block including the transaction status and any emitted events, pass the transaction index in the block or its hash: ```shell oasis paratime show 5850612 0 ``` ``` Network: mainnet ParaTime: emerald Round: 5850612 Version: 0 Namespace: 000000000000000000000000000000000000000000000000e2eaa99fc008f87f Timestamp: 2023-05-29T11:21:20Z Type: 1 Previous: 3e91bd4fc60d8a2cc03dc50c87ff532bef5703fedc35bba8aed4d8980526bb51 I/O root: d16db82426c93e2671b8fbe74db56d17fbc88800e93490dd0a6feae11d35a9a8 State root: 2c1bc5c89c59bee77511e7a58e7494bb815ab73bb63333da1c15d171e48b79b8 Messages (out): c672b8d1ef56ed28ab87c3622c5114069bdd3ad7b8f9737498d0c01ecef0967a Messages (in): c672b8d1ef56ed28ab87c3622c5114069bdd3ad7b8f9737498d0c01ecef0967a Transactions: 1 === Transaction 0 === Kind: evm.ethereum.v0 Hash: 4fc2907da5f73599519ed120916b7a9073a433b23b7ae65747e24fe75ebba832 Eth hash: 0x9cc12c960004b724356000d1d9af0ca3a092951d759590748a98431eb49c8d10 Chain ID: 42262 Nonce: 1976 Type: 0 To: 0x47DAcE3BDcc877f77fB92925ea55e25c792Bf265 Value: 0 Gas limit: 900000 Gas price: 100000000000 Data: 
2ee6f87400000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000008d03941494a56164ea04d79f9867dddb0dd754a625cc21085e9307a7ec5206ca17db95be9eba7c71362e238396d4d01ba5621e66894a0228f6b3651f15660000008606060000000000066a7c4e95a979400021c718c22d52d0f3a789b752d4c2fd5908a8a733f02b3e437304892105992512539f769423a515cba1e73c01e0cf7930f5e91cb291031739fe5ad6c20000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 === Result of transaction 0 === Status: ok Data: "0000000000000000000000000000000000000000000000000000000000000000" === Events emitted by transaction 0 === Events: 1 --- Event 0 --- Module: core Code: 1 Data: [ { "amount": 48219 } ] ``` Encrypted transactions can also be examined, although the data chunk will be encrypted: ```shell oasis paratime show 1078544 0 --network testnet --paratime sapphire ``` ``` Network: testnet ParaTime: sapphire Round: 1078544 Version: 0 Namespace: 000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c Timestamp: 2023-05-03T15:06:19Z Type: 1 Previous: 1e239120b149d02e04778affbfc126cebfe5c758c953b015ab8cef876bd5f702 I/O root: 498269b1f1607ac35f8860437d2e9648994263f865905a4551174cf6e0fce52f State root: 9c2abe9051842cfa8d4b0981cfc9a08e55d13e516811ec20147e8a58c0b85c08 Messages (out): c672b8d1ef56ed28ab87c3622c5114069bdd3ad7b8f9737498d0c01ecef0967a Messages (in): c672b8d1ef56ed28ab87c3622c5114069bdd3ad7b8f9737498d0c01ecef0967a Transactions: 1 === Transaction 0 === Kind: evm.ethereum.v0 Hash: 41cc58c02147b7728c9dfcf528cfa71c6f4c04fc98fe33c5b2b9811e0379fa82 Eth hash: 0x970a642de01cffdcdd9e75d288d912b96c48bd29e14f2ad2b572770647ac97d4 Chain ID: 23295 Nonce: 1185 Type: 0 To: 0x8c064bCf7C0DA3B3b090BAbFE8f3323534D84d68 Value: 0 Gas limit: 1000000 Gas price: 100000000000 Data: a264626f6479a362706b5820051201801dd7c98d4b5b195344146ce425cc01f12912e37cbd2fd92d5654354c646461746159015d6c56384c75f48ba3e3ca53607969ae6d9d44e2b6921e7fd34ee5aee8da5a1b519709bf4778a725048552e6b3520281285969cae0169adfd6d5792847bc37439d89c4b9dbbf2cf3c22e305c9a3a3d5b61026831c8f672b49e565cdc6eda81a55492262a0ede45742020efdca28f9d53ec928c1ac5171345d956bdda31971eafc90892f8fdaf75587358db0c2cd20f182b34b9d11e98958fb2320f0b62a4061bca65ca529dcd51ced9b8f1d8ca45d4c3be642000324b176077fec82bbc7770ca670f5fa73a397871e4940fef662654c70aebeac53424f42a5a6b90792db90807912f8a491a2d5ea141dddf03cb8061c8cbedef1d847779792d6ccc679a64adf7961e793c0a9314c74e151e938d111186d0c47a265f390e482edc37ce53a49f7e319bcaa395c882cf5778c7c8245828db199000ae494c66f9f6dd7159116417d2671dd99c4e00683e42e53700014b3e71f9b752579fdb499eddbf83a71333656e6f6e63654f914e6a1dcbf430c9de867ed8534ec266666f726d617401 === Result of transaction 0 === Status: ok Data: "a1626f6ba264646174615561a6f942703204748459335a5d50058ebe8bf6cc5c656e6f6e63654f000000000010750f00000000000000" === Events emitted by transaction 0 === Events: 1 --- Event 0 --- Module: core Code: 1 Data: [ { "amount": 31451 } ] ``` ### `parameters` This will print various ParaTime-specific parameters such as the ROFL stake thresholds. 
```shell oasis paratime show parameters ``` ``` Network: testnet ParaTime: sapphire === ROFL PARAMETERS === Stake thresholds: App create: 100.0 TEST === ROFL MARKET PARAMETERS === Stake thresholds: Provider create: 100.0 TEST ``` By passing `--format json`, the output is formatted as JSON. ### `events` This will return all Paratime events emitted in the block. Use `--round ` to specify the round number. ```shell oasis paratime show events --round 9399871 --format json ``` ``` [ { "code": 1, "data": "gaNidG9VAGIz3RCYb9ltIk8706by6j2XkXGmZGZyb21VAJZQKbOBY+XnA5YUaDhZkNc3y+nsZmFtb3VudIJHCxBZMMJwAEA=", "module": "accounts", "parsed": [ { "Transfer": { "from": "oasis1qzt9q2dns937tecrjc2xswzejrtn0jlfas40j7sz", "to": "oasis1qp3r8hgsnphajmfzfuaa8fhjag7e0yt35cjxq0u4", "amount": { "Amount": "3114200000000000", "Denomination": "" } }, "Burn": null, "Mint": null } ], "tx_hash": "c586f05e2103adb953d2287ef22dad0532540bd02481184b5477ba8c38894e62" }, { "code": 1, "data": "gqNidG9VAIyCi8jiQIOmvod+yJYxN0GhktyEZGZyb21VACg9qHdJLY0x3unzFR/SHF3dLD+oZmFtb3VudIJIAWNFeMTiZV9Ao2J0b1UAYjPdEJhv2W0iTzvTpvLqPZeRcaZkZnJvbVUAKD2od0ktjTHe6fMVH9IcXd0sP6hmYW1vdW50gkcH3eTk7RgAQA==", "module": "accounts", "parsed": [ { "Transfer": { "from": "oasis1qq5rm2rhfykc6vw7a8e3287jr3wa6tpl4qv49gzh", "to": "oasis1qzxg9z7gufqg8f47salv3933xaq6rykusslsq4k7", "amount": { "Amount": "100000001733846367", "Denomination": "" } }, "Burn": null, "Mint": null }, { "Transfer": { "from": "oasis1qq5rm2rhfykc6vw7a8e3287jr3wa6tpl4qv49gzh", "to": "oasis1qp3r8hgsnphajmfzfuaa8fhjag7e0yt35cjxq0u4", "amount": { "Amount": "2214300000000000", "Denomination": "" } }, "Burn": null, "Mint": null } ], "tx_hash": "de7e52e94f4614ec0b0de47971abc12d5070278e9401c2466ec5664a71bdc57d" }, { "code": 1, "data": "gaFmYW1vdW50GXmm", "module": "core", "parsed": [ { "GasUsed": { "amount": 31142 } } ], "tx_hash": "c586f05e2103adb953d2287ef22dad0532540bd02481184b5477ba8c38894e62" }, { "code": 1, "data": "gaFmYW1vdW50GVZ/", "module": "core", "parsed": [ { "GasUsed": { "amount": 22143 } } ], "tx_hash": "de7e52e94f4614ec0b0de47971abc12d5070278e9401c2466ec5664a71bdc57d" } ] ``` By passing `--format json`, the output is formatted as JSON. ## Set information about a denomination To set information about a denomination on the specific network and paratime use `paratime denom set --symbol `. To use this command a denomination must already exist in the actual paratime. ```shell oasis paratime denom set mainnet sapphire TESTTEST 16 ``` ## Set information about the native denomination To set information about the native denomination on the specific network and paratime use `paratime denom set-native `. The native denomination is already mandatory in the [`paratime add`](#add) command. ```shell oasis paratime denom set-native testnet cipher TEST 9 ``` ## Remove denomination To remove an existing denomination on the specific network and paratime use `paratime denom remove `. The native denomination cannot be removed. ```shell oasis paratime denom remove mainnet sapphire TESTTEST ``` ## Advanced ### Register a New ParaTime ParaTime developers may add a new ParaTime to the network's registry by invoking the `paratime register ` command and providing a JSON file with the ParaTime descriptor. You can use the [`network show`][network-show-id] command passing the ParaTime ID to see how descriptors of the currently registered ParaTimes look like. To learn more about registering your own ParaTime, check the [Oasis Core Registry service]. 
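For illustration only, a registration invocation could look like the following; the descriptor file name `descriptor.json` and the account name `my_entity` are placeholders and the [Network and account](./account.md#npa) selectors are optional: ```shell oasis paratime register descriptor.json --network testnet --account my_entity ```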
[network-show-id]: ./network.md#show-id [Oasis Core Registry service]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/consensus/services/registry.md#register-runtime ### Statistics `paratime statistics [ []]` will examine the voting details for a range of blocks. First, it will print you aggregated statistics showing you the number of successful rounds in that range, epoch transitions and also anomalies such as the proposer timeouts, failed rounds and discrepancies. Then, it will print out detailed validator per-entity statistics for that range of blocks. The passed block number should be enumerated based on the round inside the ParaTime. The start round can be one of the following: - If no round given, the validation of the last block will be examined. - If a negative round number `N` is passed, the last `N` blocks will be examined. - If `0` is given, the oldest block available to the Oasis endpoint will be considered as a starting block. - A positive number will be considered as a start round. At time of writing, the following statistics was available: ```shell oasis paratime statistics ``` ``` === PARATIME STATISTICS === Network: mainnet ParaTime ID: 000000000000000000000000000000000000000000000000e2eaa99fc008f87f Start height: 14097886 End height: 14097887 ParaTime rounds: 1 Successful rounds: 1 Epoch transition rounds: 0 Proposer timed out rounds: 0 Failed rounds: 0 Discrepancies: 0 Discrepancies (timeout): 0 Suspended: 0 === ENTITY STATISTICS === | ENTITY ADDR | ENTITY NAME | ELECTED | PRIMARY | BACKUP | PROPOSER | PRIMARY INVOKED | PRIMARY GOOD COMMIT | PRIM BAD COMMMIT | BCKP INVOKED | BCKP GOOD COMMIT | BCKP BAD COMMIT | PRIMARY MISSED | BCKP MISSED | PROPOSER MISSED | PROPOSED TIMEOUT | |------------------------------------------------|--------------------------------|---------|---------|--------|----------|-----------------|---------------------|------------------|--------------|------------------|-----------------|----------------|-------------|-----------------|------------------| | oasis1qpxpnxxk4qcgl7n55tx0yuqmrcw5cy2u5vzjq5u4 | Perfect Stake | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qpavd66xsezz8s4wjw2fyycxw8jm2nlpnuejlg2g | Spherical One | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qz72lvk2jchk0fjrz7u2swpazj3t5p0edsdv7sf8 | Ocean Stake | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qz0ea28d8p4xk8xztems60wq22f9pm2yyyd82tmt | Simply Staking | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qzl99wft8jtt7ppprk7ce7s079z3r3t77s6pf3dd | DCC Capital | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qps9drw07z0gmh5z2pn7zwl3z53ate2yvqf3uzq5 | cherkes | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qpjuke27se2wnmvx6e8uc4l5h44yjp9h7g2clqfq | RockX | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qz8vfnkcc48grazt83gstfm6yjwyptalny8cywtp | Kumaji | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qzt4fvcc6cw9af69tek9p3mfjwn3a5e5vcyrw7ac | StakeService | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qz0pvg26eudajp60835wl3jxhdxqz03q5qt9us34 | AnkaStake | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qrs8zlh0mj37ug0jzlcykz808ylw93xwkvknm7yc | Bitoven | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qr0jwz65c29l044a204e3cllvumdg8cmsgt2k3ql | Staking Fund | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qpntrlgxp5tt36pkdezdjt5d27fzkvp22y46qura | Chloris Network | 1 
| 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qzf03q57jdgdwp2w7y6a8yww6mak9khuag9qt0kd | Spectrum Staking | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qq7vyz4ewrdh00yujw0mgkf459et306xmvh2h3zg | P2P.ORG - P2P Validator | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qzugextrcdueshq63w7l9x4xglnusznsgqa95w7e | Alexander (aka Bambarello) | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | | Validator | | | | | | | | | | | | | | | | oasis1qrugz89g5esmhs0ezer0plsfvmcgctge35n32vmr | Validatrium | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qrdx0n7lgheek24t24vejdks9uqmfldtmgdv7jzz | Bit Cat🐱 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qp9xlxurlcx3k5h3pkays56mp48zfv9nmcf982kn | ELYSIUM | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qp4f47plgld98n5g2ltalalnndnzz96euv9n89lz | Julia-Ju | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qqf6wmc0ax3mykd028ltgtqr49h3qffcm50gwag3 | ou812 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qq0xmq7r0z9sdv02t5j9zs7en3n6574gtg8v9fyt | Mars Staking | Long term fee | 1 | 1 | 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | | 1% | | | | | | | | | | | | | | | | oasis1qqewwznmvwfvee0dyq9g48acy0wcw890g549pukz | Wanderer Staking | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qqx820g2geqzeyeyfnm5hgz72eaj9emajgqmscy0 | max999 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qp60saapdcrhe5zp3c3zk52r4dcfkr2uyuc5qjxp | Tessellated Geometry | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qpaygvzwd5ffh2f5p4qdqylymgqcvl7sp5gxyrl3 | Appload | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qrgxl0ylc7lvkj0akv6s32rj4k98nr0f7smf6m4k | itokenpool | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qram2p9w3yxm4px5nth8n7ugggk5rr6ay5d284at | Realizable | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qz22xm9vyg0uqxncc667m4j4p5mrsj455c743lfn | S5 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qppctxzn8djkqfvrxugak9v7dp25vddq7sxqhkry | Tuzem | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qqrv4g5wu543wa7fcae76eucqfn2uc77zgqw8fxk | Lusia | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qrtq873ddwnnjqyv66ezdc9ql2a07l37d5vae9k0 | Forbole | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qp53ud2pcmm73mlf4qywnrr245222mvlz5a2e5ty | SerGo | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qrmexg6kh67xvnp7k42sx482nja5760stcrcdkhm | ushakov | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ``` To extend statistics to, say 5 last blocks, you can run: ```shell oasis paratime statistics -- -5 ``` ``` === PARATIME STATISTICS === Network: mainnet ParaTime ID: 000000000000000000000000000000000000000000000000e2eaa99fc008f87f Start height: 14097903 End height: 14097908 ParaTime rounds: 4 Successful rounds: 4 Epoch transition rounds: 0 Proposer timed out rounds: 0 Failed rounds: 0 Discrepancies: 0 Discrepancies (timeout): 0 Suspended: 0 === ENTITY STATISTICS === | ENTITY ADDR | ENTITY NAME | ELECTED | PRIMARY | BACKUP | PROPOSER | PRIMARY INVOKED | PRIMARY GOOD COMMIT | PRIM BAD COMMMIT | BCKP INVOKED | BCKP GOOD COMMIT | BCKP BAD COMMIT | PRIMARY MISSED | BCKP MISSED | PROPOSER MISSED | PROPOSED TIMEOUT | 
|------------------------------------------------|--------------------------------|---------|---------|--------|----------|-----------------|---------------------|------------------|--------------|------------------|-----------------|----------------|-------------|-----------------|------------------| | oasis1qrmexg6kh67xvnp7k42sx482nja5760stcrcdkhm | ushakov | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qzt4fvcc6cw9af69tek9p3mfjwn3a5e5vcyrw7ac | StakeService | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qq7vyz4ewrdh00yujw0mgkf459et306xmvh2h3zg | P2P.ORG - P2P Validator | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qqx820g2geqzeyeyfnm5hgz72eaj9emajgqmscy0 | max999 | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qz22xm9vyg0uqxncc667m4j4p5mrsj455c743lfn | S5 | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qqewwznmvwfvee0dyq9g48acy0wcw890g549pukz | Wanderer Staking | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qp60saapdcrhe5zp3c3zk52r4dcfkr2uyuc5qjxp | Tessellated Geometry | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qrs8zlh0mj37ug0jzlcykz808ylw93xwkvknm7yc | Bitoven | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qzf03q57jdgdwp2w7y6a8yww6mak9khuag9qt0kd | Spectrum Staking | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qpjuke27se2wnmvx6e8uc4l5h44yjp9h7g2clqfq | RockX | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qpxpnxxk4qcgl7n55tx0yuqmrcw5cy2u5vzjq5u4 | Perfect Stake | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qqrv4g5wu543wa7fcae76eucqfn2uc77zgqw8fxk | Lusia | 4 | 4 | 4 | 0 | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qqf6wmc0ax3mykd028ltgtqr49h3qffcm50gwag3 | ou812 | 4 | 4 | 4 | 0 | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qps9drw07z0gmh5z2pn7zwl3z53ate2yvqf3uzq5 | cherkes | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qr0jwz65c29l044a204e3cllvumdg8cmsgt2k3ql | Staking Fund | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qq0xmq7r0z9sdv02t5j9zs7en3n6574gtg8v9fyt | Mars Staking | Long term fee | 4 | 4 | 4 | 0 | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | | 1% | | | | | | | | | | | | | | | | oasis1qrdx0n7lgheek24t24vejdks9uqmfldtmgdv7jzz | Bit Cat🐱 | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qpntrlgxp5tt36pkdezdjt5d27fzkvp22y46qura | Chloris Network | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qrtq873ddwnnjqyv66ezdc9ql2a07l37d5vae9k0 | Forbole | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qp9xlxurlcx3k5h3pkays56mp48zfv9nmcf982kn | ELYSIUM | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qpavd66xsezz8s4wjw2fyycxw8jm2nlpnuejlg2g | Spherical One | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qz8vfnkcc48grazt83gstfm6yjwyptalny8cywtp | Kumaji | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qppctxzn8djkqfvrxugak9v7dp25vddq7sxqhkry | Tuzem | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qp4f47plgld98n5g2ltalalnndnzz96euv9n89lz | Julia-Ju | 4 | 4 | 4 | 0 | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qzl99wft8jtt7ppprk7ce7s079z3r3t77s6pf3dd | DCC Capital | 4 | 4 | 0 | 1 | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qz0pvg26eudajp60835wl3jxhdxqz03q5qt9us34 | AnkaStake | 4 | 4 | 0 | 1 | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qrgxl0ylc7lvkj0akv6s32rj4k98nr0f7smf6m4k | itokenpool | 4 | 0 | 4 | 0 | 0 
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qrugz89g5esmhs0ezer0plsfvmcgctge35n32vmr | Validatrium | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qram2p9w3yxm4px5nth8n7ugggk5rr6ay5d284at | Realizable | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qz72lvk2jchk0fjrz7u2swpazj3t5p0edsdv7sf8 | Ocean Stake | 4 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qp53ud2pcmm73mlf4qywnrr245222mvlz5a2e5ty | SerGo | 4 | 4 | 4 | 0 | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qz0ea28d8p4xk8xztems60wq22f9pm2yyyd82tmt | Simply Staking | 4 | 4 | 0 | 1 | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qpaygvzwd5ffh2f5p4qdqylymgqcvl7sp5gxyrl3 | Appload | 4 | 4 | 0 | 1 | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | oasis1qzugextrcdueshq63w7l9x4xglnusznsgqa95w7e | Alexander (aka Bambarello) | 4 | 4 | 4 | 0 | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | | Validator | | | | | | | | | | | | | | | ``` For further analysis, you can easily export entity statistics to a CSV file by passing the `--output-file` parameter and the file name: ```shell oasis paratime statistics -o stats.csv ``` Analyzing a large range of blocks may take some time or even occasionally fail due to denial-of-service protection. If you encounter such issues, consider setting up your own gRPC endpoint! --- ## ROFL # Manage ROFL Apps The `rofl` command combines a series of actions for managing the [Runtime OFfchain Logic (ROFL)][rofl] apps: - build ROFL locally, - verify the ROFL bundle, - register, deregister and update ROFL apps on the network, - show information about the registered ROFL apps, - other convenient tooling for ROFL app developers. [rofl]: https://github.com/oasisprotocol/docs/blob/main/docs/build/rofl/README.mdx ## Initialize a new ROFL app manifest The `rofl init` command will prepare a new ROFL manifest in the given directory (defaults to the current directory). The manifest is a YAML file named `rofl.yaml` which defines the versions of all components, upgrade policies, etc. needed to manage, build and deploy the ROFL app. ```shell oasis rofl init ``` ``` Creating a new ROFL app with default policy... Name: myapp Version: 0.1.0 TEE: tdx Kind: container Created manifest in 'rofl.yaml'. Run `oasis rofl create` to register your ROFL app and configure an app ID. ``` You can create a new ROFL manifest file based on the existing one by passing the `--reset` flag. This is useful if you want to make your own deployment of the existing ROFL project. It will remove information on previous user-specific deployments but keep information such as the minimum CPU, memory and storage requirements. ## Create a new ROFL app on the network Use `rofl create` to register a new ROFL app on the network using an existing manifest. You can also define specific [Network, ParaTime and Account][npa] parameters as those get recorded into the manifest so you don't need to specify them on each invocation: ```shell oasis rofl create --network testnet --account my_rofl_acc ``` ``` You are about to sign the following transaction: Format: plain Method: rofl.Create Body: { "policy": { "quotes": { "pcs": { "tcb_validity_period": 30, "min_tcb_evaluation_data_number": 17, "tdx": {} } }, "enclaves": [], "endorsements": [ { "any": {} } ], "fees": 2, "max_expiration": 3 }, "scheme": 1 } Authorized signer(s): 1.
sk5kvBHaZ/si0xXRdjllIOxOgr7o2d1K+ckVaU3ndG4= (ed25519) Nonce: 319 Fee: Amount: 0.0101405 TEST Gas limit: 101405 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire (Sapphire Testnet) Account: test:dave ``` The command returns the unique ROFL app ID starting with `rofl1`, which you will use to refer to your ROFL app in the future. The manifest is automatically updated with the newly assigned app identifier. In order to prevent spam attacks, registering a ROFL app requires a certain amount to be deposited from your account until you decide to [remove it](#remove). The deposit remains locked for the lifetime of the app. Check out the [Stake Requirements] chapter for more information. With the `--scheme` parameter, you can select one of the following ROFL app ID derivation schemes: - `cn` for the ROFL app creator address (the account you're using to sign the transaction) combined with the account's nonce (default). This behavior is similar to the Ethereum [smart contract address derivation] and is deterministic. - `cri` uses the ROFL app creator address combined with the block round the transaction will be validated in and its position inside that block. [Stake Requirements]: https://github.com/oasisprotocol/docs/blob/main/docs/node/run-your-node/prerequisites/stake-requirements.md [smart contract address derivation]: https://ethereum.org/en/developers/docs/accounts/#contract-accounts ## Build ROFL The `rofl build` command will execute a series of build commands depending on the target Trusted Execution Environment (TEE) and produce the Oasis Runtime Container (ORC) bundle. Additionally, the following flags are available: - `--output` sets the filename of the output ORC bundle. Defaults to the pattern `<name>.<deployment>.orc` where `<name>` is the app name from the manifest and `<deployment>` is the deployment name from the manifest. - `--verify` also verifies the locally built enclave identity against the identity that is currently defined in the manifest and also against the identity that is currently set in the on-chain policy. It does not update the manifest file with new enclave identities. - `--no-update-manifest` do not update the enclave identity stored in the app manifest. Building ROFL apps does not require a working TEE on your machine. However, you do need to install all corresponding tools. Check out the [ROFL Prerequisites] chapter for details. [ROFL Prerequisites]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/rofl/workflow/prerequisites.md [npa]: ./account.md#npa ## Secrets management ### Set secret Run the `rofl secret set <name> <file>|-` command to end-to-end encrypt a secret with a key derived from the selected deployment network and store it in the manifest file. If you have your secret in a file, run: ```shell oasis rofl secret set MY_SECRET mysecret.txt ``` You can also feed the secret from standard input like this: ```shell echo -n "this-is-a-very-secret-value-here" | oasis rofl secret set MY_SECRET - ``` Once the secret is encrypted and stored, **there is no way of obtaining the unencrypted value back again apart from within the TEE on the designated ROFL deployment**. Additionally, the following flags are available: - `--force` replaces an existing secret. - `--public-name <name>` defines the name of the secret that will be publicly exposed e.g. in the Oasis Explorer. By default, the public name is the same as the name of the secret. Shells store history Passing secrets as a command-line argument will store them in your shell history file as well! Use this approach only for testing.
In production, always use file-based secrets. ### Import secrets from `.env` files Run `rofl secret import <file>|-` to bulk-import secrets from a [dotenv](https://github.com/motdotla/dotenv)-compatible file (key=value with `#` comments). This is handy for files like `.env`, `.env.production`, `.env.testnet`, or symlinks such as `.env → .env.production`. You can also pass `-` to read from standard input. Each `KEY=VALUE` pair becomes a separate secret entry in your manifest. Quoted values may span multiple physical lines; newline characters are preserved. Double-quoted values also support common escapes (`\n`, `\r`, `\t`, `\"`, `\\`). Lines starting with `#` are ignored. Unquoted values stop at an unquoted `#` comment. ```shell oasis rofl secret import .env.production ``` ```shell oasis rofl secret import .env ``` By default, if a secret with the same name already exists, the command will fail. Use `--force` to replace existing secrets. After importing, **run**: ```shell oasis rofl update ``` to push the updated secrets on-chain. ### Get secret info Run `rofl secret get <name>` to check whether the secret exists in your manifest file. ```shell oasis rofl secret get MY_SECRET ``` ``` Name: MY_SECRET Size: 156 bytes ``` ### Remove secret Run `rofl secret rm <name>` to remove the secret from your manifest file. ```shell oasis rofl secret rm MY_SECRET ``` ## Update ROFL app config on-chain Use the `rofl update` command to push the ROFL app's configuration to the chain: ```shell oasis rofl update ``` ``` You are about to sign the following transaction: Format: plain Method: rofl.Update Body: { "id": "rofl1qzd82n99vtwesvcqjfyur4tcm45varz2due7s635", "policy": { "quotes": { "pcs": { "tcb_validity_period": 30, "min_tcb_evaluation_data_number": 17, "tdx": {} } }, "enclaves": [], "endorsements": [ { "any": {} } ], "fees": 2, "max_expiration": 3 }, "admin": "oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt" } Authorized signer(s): 1. sk5kvBHaZ/si0xXRdjllIOxOgr7o2d1K+ckVaU3ndG4= (ed25519) Nonce: 320 Fee: Amount: 0.010145 TEST Gas limit: 101450 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire (Sapphire Testnet) Account: test:dave ``` The current on-chain policy, metadata and secrets will be replaced with the ones in the manifest file. Keep in mind that ROFL replicas need to be restarted in order for changes to take effect. ## Show ROFL information Run `rofl show` to obtain information from the network about the ROFL admin account, staked amount, current ROFL policy and running instances: ```shell oasis rofl show ``` ``` App ID: rofl1qzd82n99vtwesvcqjfyur4tcm45varz2due7s635 Admin: oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt Staked amount: 10000.0 Policy: { "quotes": { "pcs": { "tcb_validity_period": 30, "min_tcb_evaluation_data_number": 17, "tdx": {} } }, "enclaves": [ "z+StFagJfBOdGlUGDMH7RlcNUm1uqYDUZDG+g3z2ik8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==", "6KfY4DqD1Vi+H7aUn5FwwLobEzERHoOit7xsrPNz3eUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==" ], "endorsements": [ { "any": {} } ], "fees": 2, "max_expiration": 3 } === Instances === - RAK: UwuhJrOYX6FDOc27NilQSrcVEtWD9voq+ST+ohsaRTI= Node ID: DbeoxcRwDO4Wh8bwq5rAR7wzhiB+LeYn+y7lFSGAZ7I= Expiration: 7 ``` ## Deploy ROFL app Run `rofl deploy` to automatically deploy your app to a machine obtained from the [ROFL marketplace]. If a machine is already configured in your manifest file, a new version of your ROFL app will be deployed there.
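In that case a minimal invocation along the lines of the sketch below (assuming the default deployment recorded in your manifest) is typically enough to push the new build to the existing machine:

```shell
# Redeploy the current build to the machine already recorded in the manifest.
oasis rofl deploy
```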
If no machines are rented yet, you can use the following arguments to select a specific provider and offer: - `--provider <address>` specifies the provider to rent the machine from. On Sapphire Testnet, the Oasis-managed provider will be selected by default. - `--offer <name>` specifies the offer of the machine to rent. By default it takes the most recent offer. Pass `--show-offers` to list available offers and their specifications. - `--term <term>` specifies the base rent period. It takes the first available provider term by default. - `--term-count <number>` specifies the multiplier, i.e. how many terms to rent at once. Default is `1`. ```shell oasis rofl deploy --deployment mainnet --provider oasis1qzc8pldvm8vm3duvdrj63wgvkw34y9ucfcxzetqr --offer small --term hour --term-count 24 ``` ``` Using provider: oasis1qzc8pldvm8vm3duvdrj63wgvkw34y9ucfcxzetqr Pushing ROFL app to OCI repository 'rofl.sh/7aaddbd5-d782-430f-9362-f0107aa109d2:1750242297'... No pre-existing machine configured, creating a new one... Taking offer: small [0000000000000000] WARNING: Machine rental is non-refundable. You will not get a refund for the already paid term if you cancel. Unlock your account. ? Passphrase: You are about to sign the following transaction: Format: plain Method: roflmarket.InstanceCreate Body: { "provider": "oasis1qzc8pldvm8vm3duvdrj63wgvkw34y9ucfcxzetqr", "offer": "0000000000000000", "deployment": { "app_id": "rofl1qpw7gxp7dqq72sdtpv4jrmdfys9nsp73wysglhue", "manifest_hash": "c2bc74e68cbb5b9a70a2c7a378f79158e2c5975eca6bd4cbbeff602a1a12b311", "metadata": { "net.oasis.deployment.orc.ref": "rofl.sh/7aaddbd5-d782-430f-9362-f0107aa109d2:1750242297@sha256:ee206f123b395c630e6b52ff779c0cd63eb5ea99ba97275559558e340647ccb2" } }, "term": 1, "term_count": 24 } Authorized signer(s): 1. Amc63/tU+uNrYi7OID2a5a/hHbsbGTtAolnlyA+MF5g5 (secp256k1eth) Nonce: 6 Fee: Amount: 0.0121926 ROSE Gas limit: 121926 (gas price: 0.0000001 ROSE per gas unit) Network: mainnet ParaTime: sapphire Account: test:dave ? Sign this transaction? Yes (In case you are using a hardware-based signer you may need to confirm on device.) Broadcasting transaction... Transaction included in block successfully. Round: 9356523 Transaction hash: bce96976f38485546b5950f8b2a7f9b7c43b9656935cc472a90680187469f4dd Execution successful. Created machine: 0000000000000000 Deployment into machine scheduled. This machine expires on 2025-08-07 12:35:47 +0200 CEST. Use `oasis rofl machine top-up` to extend it. Use `oasis rofl machine show` to check status. ``` [ROFL marketplace]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/rofl/features/marketplace.mdx ## Manage a deployed ROFL machine Once a ROFL app is deployed, you can manage the machine it's running on using the `oasis rofl machine` subcommands.
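The subsections below describe each subcommand in detail. As a quick reference, the typical invocations (a sketch assuming the default machine recorded in your manifest) are:

```shell
oasis rofl machine show     # status, resources, paid-until date and proxy URLs
oasis rofl machine logs     # fetch logs from the running app
oasis rofl machine restart  # restart the machine (add --wipe-storage to clear storage)
oasis rofl machine stop     # stop the machine
oasis rofl machine top-up   # extend the rental (see --term and --term-count)
oasis rofl machine remove   # cancel the rental and remove the machine
```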
### Show machine information To view details about a deployed machine, including its status, expiration, and any proxy URLs, run `oasis rofl machine show`: ```shell oasis rofl machine show ``` ``` Name: default Provider: oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz ID: 000000000000025a Offer: 0000000000000001 Status: accepted Creator: oasis1qzvxd8vjgp2y0tjx229420tva822mdk2wxpx0vws Admin: oasis1qzvxd8vjgp2y0tjx229420tva822mdk2wxpx0vws Node ID: DbeoxcRwDO4Wh8bwq5rAR7wzhiB+LeYn+y7lFSGAZ7I= Created at: 2025-08-25 10:00:00 +0000 UTC Updated at: 2025-08-25 10:00:10 +0000 UTC Paid until: 2025-08-26 10:00:00 +0000 UTC Proxy: Domain: m602.test-proxy-b.rofl.app Ports from compose file: 8080 (http-echo): https://p8080.m602.test-proxy-b.rofl.app Resources: TEE: Intel TDX Memory: 4096 MiB vCPUs: 2 Storage: 20000 MiB Deployment: App ID: rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg Metadata: net.oasis.deployment.orc.ref: rofl.sh/0ba0712d-114c-4e39-ac8e-b28edffcada8:1747909776@sha256:77ff0dc76adf957a4a089cf7cb584aa7788fef027c7180ceb73a662ede87a217 Commands: ``` If you have published ports in your `compose.yaml`, the output will include a `Proxy` section with public URLs to access your services. For more details on how to configure the proxy and for troubleshooting, see the [ROFL Proxy] feature page. [ROFL Proxy]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/rofl/features/proxy.mdx ### Top-up payment for the machine Run `rofl machine top-up` to extend the rental of the machine obtained from the [ROFL marketplace]. You can check the current expiration date of your machine in the `Paid until` field from the [`oasis rofl machine show` output](#machine-show). The rental is extended under the terms of the original offer. Specify the extension period with [`--term`][term-flags] and [`--term-count`][term-flags] parameters. ```shell oasis rofl machine top-up --term hour --term-count 12 ``` ``` Using provider: oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz (oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz) Top-up machine: default [000000000000022a] Top-up term: 12 x hour WARNING: Machine rental is non-refundable. You will not get a refund for the already paid term if you cancel. Unlock your account. ? Passphrase: You are about to sign the following transaction: Format: plain Method: roflmarket.InstanceTopUp Body: { "provider": "oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz", "id": "000000000000022a", "term": 1, "term_count": 12 } Authorized signer(s): 1. AyZKkxNFeyqLI5HGTYqEmCcYxKGo/kueOzSHzdnrSePO (secp256k1eth) Nonce: 996 Fee: Amount: 0.0013614 TEST Gas limit: 13614 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: dave ? Sign this transaction? Yes (In case you are using a hardware-based signer you may need to confirm on device.) Broadcasting transaction... Transaction included in block successfully. Round: 12917124 Transaction hash: 094ddc21c39acd96789153003016bda5d2a0077e7be11635bb755b6c49c287ac Execution successful. Machine topped up. ``` [term-flags]: #deploy ### Show machine logs You can fetch logs from your running ROFL app using `oasis rofl machine logs`. ```shell oasis rofl machine logs ``` Logs are not encrypted! While only the app admin can access the logs, they are stored **unencrypted on the ROFL node**. In production, make sure you never print any confidential data to the standard or error outputs! ### Restart a machine To restart a running machine, use `oasis rofl machine restart`. 
If you wish to clear the machine's persistent storage, pass the [`--wipe-storage`] flag. [`--wipe-storage`]: #deploy ### Stop a machine To stop a machine, use `oasis rofl machine stop`. To start it back again, use [`oasis rofl machine restart`]. [`oasis rofl machine restart`]: #machine-restart ### Remove a machine To cancel the rental and permanently remove a machine, including its persistent storage, use `oasis rofl machine remove`. Canceling a machine rental will not refund any payment for the already paid term. ## Advanced ### Upgrade ROFL app dependencies Run `rofl upgrade` to bump ROFL bundle TDX artifacts in your manifest file to their latest versions. This includes: - the firmware - the kernel - stage two boot - ROFL containers middleware (for TDX containers kind only) ```shell oasis rofl upgrade ``` ### Remove ROFL app from the network Run `rofl remove` to deregister your ROFL app: ```shell oasis rofl remove ``` ``` WARNING: Removing this ROFL app will DEREGISTER it, ERASE any on-chain secrets and local configuration! WARNING: THIS ACTION IS IRREVERSIBLE! ? Remove ROFL app 'rofl1qzd82n99vtwesvcqjfyur4tcm45varz2due7s635' deployed on network 'testnet' Yes Unlock your account. ? Passphrase: You are about to sign the following transaction: Format: plain Method: rofl.Remove Body: { "id": "rofl1qzd82n99vtwesvcqjfyur4tcm45varz2due7s635" } Authorized signer(s): 1. sk5kvBHaZ/si0xXRdjllIOxOgr7o2d1K+ckVaU3ndG4= (ed25519) Nonce: 321 Fee: Amount: 0.0011288 TEST Gas limit: 11288 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire (Sapphire Testnet) Account: test:dave ``` The deposit required to register the ROFL app will be returned to the current administrator account. Secrets will be permanently lost All secrets stored on-chain will be permanently lost when the ROFL app is deregistered! Even if you backed up your manifest file, those secrets will be unretrievable, since they were encrypted with a ROFL deployment-specific keypair. ### ROFL provider tooling The `rofl provider` commands offer tools for managing your on-chain provider information and your offers. An example provider configuration file looks like this: ```yaml title="rofl-provider.yaml" # Network name in your Oasis CLI network: testnet # ParaTime name in your Oasis CLI paratime: sapphire # Account name in your Oasis CLI provider: rofl_provider # List of Base64-encoded node IDs allowed to execute ROFL apps nodes: - <node ID> # Address of the scheduler app scheduler_app: rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg # Account name or address of who receives ROFL machine rental payments payment_address: rofl_provider offers: - id: small # Short human-readable name resources: tee: tdx # Possible values: sgx, tdx memory: 4096 # In MiB cpus: 2 storage: 20000 # In MiB payment: native: # Possible keys: native, evm terms: hourly: 10 # Possible keys: hourly, monthly, yearly capacity: 50 # Max number of actively rented machines ``` #### Initialize a ROFL provider The `rofl provider init` command initializes a new provider configuration file. [Network and ParaTime](./account.md#npa) selectors are available for the `rofl provider init` command. #### Create a ROFL provider on-chain Run `rofl provider create` to register your account as a provider on the configured network and ParaTime. ```shell oasis rofl provider create ``` ``` Unlock your account. ?
Passphrase: You are about to sign the following transaction: Format: plain Method: roflmarket.ProviderCreate Body: { "nodes": [], "scheduler_app": "rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg", "payment_address": { "native": "oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt" }, "offers": null, "metadata": {} } Authorized signer(s): 1. AyZKkxNFeyqLI5HGTYqEmCcYxKGo/kueOzSHzdnrSePO (secp256k1eth) Nonce: 858 Fee: Amount: 0.012167 TEST Gas limit: 121670 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: test:dave ``` In order to prevent spam attacks registering a ROFL provider requires a certain amount to be deposited from your account until you decide to [remove it](#provider-remove). The deposit remains locked for the lifetime of the provider entity. Check out the [Stake Requirements] chapter for more information. #### Update ROFL provider policies Use `rofl provider update` to update the list of endorsed nodes, the scheduler app address, the payment recipient address and other provider settings. ```shell oasis rofl provider update ``` ``` Unlock your account. ? Passphrase: You are about to sign the following transaction: Format: plain Method: roflmarket.ProviderUpdate Body: { "provider": "oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt", "nodes": [], "scheduler_app": "rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg", "payment_address": { "native": "oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt" }, "metadata": {} } Authorized signer(s): 1. AyZKkxNFeyqLI5HGTYqEmCcYxKGo/kueOzSHzdnrSePO (secp256k1eth) Nonce: 860 Fee: Amount: 0.0121698 TEST Gas limit: 121698 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: test:dave ``` To update your offers, run [`rofl provider update-offers`](#provider-update-offers) instead. #### Update ROFL provider offers Use `rofl provider update-offers` to replace the on-chain offers with the ones in your provider manifest file. ```shell oasis rofl provider update-offers ``` ``` $ oasis rofl provider update-offers Unlock your account. ? Passphrase: Going to perform the following updates: Add offers: - small Update offers: Remove offers: You are about to sign the following transaction: Format: plain Method: roflmarket.ProviderUpdateOffers Body: { "provider": "oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt", "add": [ { "id": "0000000000000000", "resources": { "tee": 2, "memory": 4096, "cpus": 2, "storage": 20000 }, "payment": { "native": { "denomination": "", "terms": { "1": "10000000000000000000" } } }, "capacity": 50, "metadata": { "net.oasis.scheduler.offer": "small" } } ] } Authorized signer(s): 1. AyZKkxNFeyqLI5HGTYqEmCcYxKGo/kueOzSHzdnrSePO (secp256k1eth) Nonce: 860 Fee: Amount: 0.0133782 TEST Gas limit: 133782 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: test:dave ``` To update your provider policies, run [`rofl provider update`](#provider-update) instead. 
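For example, to publish an additional offer you could append another entry under `offers:` in `rofl-provider.yaml` and run `oasis rofl provider update-offers` again. The snippet below is an illustrative sketch only; it reuses the fields from the example configuration above with values matching the `medium` offer shown later in this chapter:

```yaml
offers:
  - id: small
    # ... existing offer as defined above ...
  - id: medium
    resources:
      tee: tdx
      memory: 16384 # In MiB
      cpus: 4
      storage: 80000 # In MiB
    payment:
      native:
        terms:
          monthly: 300
    capacity: 5
```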
#### List ROFL providers Use `rofl provider list` to display all ROFL providers registered on the selected ParaTime: ```shell oasis rofl provider list ``` ``` PROVIDER ADDRESS SCHEDULER APP NODES OFFERS INSTANCES oasis1qp2ens0hsp7gh23wajxa4hpetkdek3swyyulyrmz rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg 2 1 12 oasis1qqw74ezqygseg32e7jq9tl637q7aa4h7qsssmwp3 rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg 1 3 0 oasis1qrcxr6lh03xyazkg7ad7q2dqs94kj0arusmyzq8g rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg 0 0 0 oasis1qrfeadn03ljm0kfx8wx0d5zf6kj79pxqvv0dukdm rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg 1 1 2 oasis1qrjprejadvxjwj3m3mj8xurt0mvafw4jhymmmtlj rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg 0 0 0 oasis1qrpptdcpsxvxn3re0cg3f6hfy0kyfujnz5ex7vgn rofl1qr95suussttd2g9ehu3zcpgx8ewtwgayyuzsl0x2 0 2 2 oasis1qrxhk2aqwq7g5fq85a89yv2khdgn2wzccqhg2sal rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg 1 4 0 ``` The command displays provider addresses, scheduler app IDs, node counts, and offer/instance counts for each provider. To see detailed information about all offers from each provider, use the `--show-offers` flag: ```shell oasis rofl provider list --show-offers ``` #### Show ROFL provider details Use `rofl provider show
<address>` to display detailed information about a specific ROFL provider, including all their offers: ```shell oasis rofl provider show oasis1qqw74ezqygseg32e7jq9tl637q7aa4h7qsssmwp3 ``` ``` Provider: oasis1qqw74ezqygseg32e7jq9tl637q7aa4h7qsssmwp3 === Basic Information === Scheduler App: rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg Payment Address: oasis1qqw74ezqygseg32e7jq9tl637q7aa4h7qsssmwp3 Nodes: 1 1. mXsy6XlJlEK5vJwEfyqRWZLVN5Ss4QpwI6h124IDjjw= Stake: 100.0 TEST Offers: 3 Instances: 0 Created At: 2025-11-15T21:40:22Z Updated At: 2025-11-15T21:40:22Z === Offers === - small [0000000000000000] TEE: tdx | Memory: 8192 MiB | vCPUs: 2 | Storage: 39.06 GiB Capacity: 10 Note: Small instance - ideal for lightweight ROFL applications Description: Small compute instance with 2 vCPUs, 8GB RAM, and 40GB storage. Perfect for testing and lightweight ROFL applications. Hosted on Akash decentralized cloud infrastructure. Payment: monthly: 150.0 TEST - medium [0000000000000001] TEE: tdx | Memory: 16384 MiB | vCPUs: 4 | Storage: 78.12 GiB Capacity: 5 Note: Medium instance - balanced compute and memory Description: Medium compute instance with 4 vCPUs, 16GB RAM, and 80GB storage. Great for standard ROFL applications with moderate resource needs. Hosted on Akash decentralized cloud infrastructure. Payment: monthly: 300.0 TEST - large [0000000000000002] TEE: tdx | Memory: 28672 MiB | vCPUs: 8 | Storage: 175.78 GiB Capacity: 1 Note: Large instance - high-performance computing Description: Large compute instance with 8 vCPUs, 28GB RAM, and 180GB storage. Designed for resource-intensive ROFL applications. Hosted on Akash decentralized cloud infrastructure. Payment: monthly: 600.0 TEST ``` This command provides comprehensive information including: - Basic provider information (address, scheduler app, payment address) - List of endorsed nodes - Stake amount - Detailed information about all offers (resources, pricing terms, capacity) Use `--format json` to get the full provider metadata in machine-readable format. #### Remove ROFL provider from the network Run `rofl provider remove` to deregister your ROFL provider account: ```shell oasis rofl provider remove ``` ``` Unlock your account. ? Passphrase: You are about to sign the following transaction: Format: plain Method: roflmarket.ProviderRemove Body: { "provider": "oasis1qrk58a6j2qn065m6p06jgjyt032f7qucy5wqeqpt" } Authorized signer(s): 1. AyZKkxNFeyqLI5HGTYqEmCcYxKGo/kueOzSHzdnrSePO (secp256k1eth) Nonce: 859 Fee: Amount: 0.0121578 TEST Gas limit: 121578 (gas price: 0.0000001 TEST per gas unit) Network: testnet ParaTime: sapphire Account: test:dave ``` The deposit required to register the ROFL provider will be returned to its address. ### Show ROFL identity Run `rofl identity` to compute the **cryptographic identity** of the ROFL app: ```shell oasis rofl identity rofl-oracle.orc ``` ``` wzwUd5Ym/e5OO87pGVk2yWL4v0x12U3Zx/48Vdoe1PyTBkRbZbh9kPyqgY1RsvenXEIHQA0c2nR/WlmvS1vbcg== ``` The output above is the Base64-encoded enclave identity, which depends on the ROFL source code and the build environment. Enclave identities should be reproducible on any computer and are used to prove and verify the integrity of ROFL binaries on the network. See the [Reproducibility] chapter to learn more. [Reproducibility]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/runtime/reproducibility.md ### Show the current trust-root In order for the ROFL app to trust the environment it is executed in, it needs to have a hardcoded *trust root*.
Typically, it consists of: - the [ParaTime ID], - the [chain domain separation context], - the specific consensus block hash and its height. To obtain the latest trust root in the Rust programming language, run `oasis rofl trust-root`: ```shell oasis rofl trust-root ``` ``` TrustRoot { height: 1022, hash: "bb3e63d729dd568ce07b37eea33eddf8082ed4cacbd64097aad32168a4a4fc9a".into(), runtime_id: "8000000000000000000000000000000000000000000000000000000000000000".into(), chain_context: "074f5ba071c4385a7ad24aea0a3a7b137901395e8f3b35479c1cce87d3170f4e".to_string(), } ``` You can also define specific [Network and ParaTime][npa] parameters: ```shell oasis rofl trust-root --network testnet --paratime sapphire ``` [ParaTime ID]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/runtime/identifiers.md [chain domain separation context]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/crypto.md#chain-domain-separation --- ## Setup ## Download and Run The Oasis team provides CLI binaries for Linux, macOS and Windows operating systems. If you want to run it on another platform, you can [build the CLI from source][cli-source]. Download the latest release from our [GitHub repository][cli-releases] and follow the instructions for **your platform** below: ### Homebrew If you use [Homebrew on Linux](https://docs.brew.sh/Homebrew-on-Linux), you can install the Oasis CLI with: #### Installation ```shell brew install oasis ``` #### Verify ```shell oasis --version ``` ### Manual #### Prerequisites - amd64 or arm64 Linux. - Ensure `~/.local/bin` is on your `PATH`: ```shell echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc source ~/.bashrc mkdir -p ~/.local/bin ``` #### Installation 1. Download the latest **Linux** archive (e.g. `oasis_cli_X.Y.Z_linux_amd64.tar.gz`). 2. Extract it: ```shell cd ~/Downloads tar -xzf oasis_cli_X.Y.Z_linux_amd64.tar.gz # adjust version and architecture ``` 3. Move the binary to your path: ```shell mv oasis ~/.local/bin/ ``` 4. Verify: ```shell oasis --version ``` ### Homebrew (Recommended) The recommended way to install the Oasis CLI on macOS is via [Homebrew](https://brew.sh/). #### Installation ```shell brew install oasis ``` #### Verify ```shell oasis --version ``` ### Manual #### Prerequisites - macOS (Apple Silicon & Intel). - Ensure `~/.local/bin` is on your `PATH` (add it in `~/.zshrc` or `~/.bashrc`): ```shell echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc source ~/.zshrc mkdir -p ~/.local/bin ``` #### Installation 1. Download the latest **macOS** archive (e.g. `oasis_cli_X.Y.Z_darwin_all.tar.gz`) from the releases page. 2. Extract it: ```shell cd ~/Downloads tar -xzf oasis_cli_X.Y.Z_darwin_all.tar.gz # adjust version ``` 3. Move the binary to your path: ```shell mv oasis ~/.local/bin/ ``` 4. Bypass Gatekeeper (first run only): ```shell xattr -d com.apple.quarantine ~/.local/bin/oasis ``` If a dialog appears, open **System Settings → Privacy & Security** and click **Open Anyway**. 5. Verify: ```shell oasis --version ``` #### Prerequisites - Windows 10/11 (x86-64). - Decide on a folder already in your `PATH` (e.g. `%USERPROFILE%\bin`) or add one. #### Installation 1. Download the latest **Windows** ZIP file (e.g. `oasis_cli_X.Y.Z_windows_amd64.zip`). 2. Extract it (double-click or `tar -xf` in PowerShell). 3.
Copy `oasis.exe` to a directory on your `PATH`, for example: ```powershell New-Item -ItemType Directory -Force "$env:USERPROFILE\bin" Copy-Item .\oasis.exe "$env:USERPROFILE\bin\" ``` If that folder isn’t on the `PATH`, add it via **System Properties → Environment Variables**. 4. Verify: ```powershell oasis --version ``` ## Update If you installed the Oasis CLI manually, the application includes a built-in `oasis update` command which upgrades it to the latest version. This command will check for a newer version on GitHub, show you the release notes, and ask for confirmation before downloading and replacing the current binary. ## Configuration When you run the Oasis CLI for the first time, it will generate a configuration file and populate it with the current Mainnet and Testnet networks. It will also configure all [ParaTimes supported by the Oasis Foundation][paratimes]. The configuration folder of Oasis CLI is located: - on Linux: - `$HOME/.config/oasis/` - on macOS: - `/Users/$USER/Library/Application Support/oasis/` - on Windows: - `%USERPROFILE%\AppData\Local\oasis\` There, you will find `cli.toml` which contains the configuration of the networks, ParaTimes and your wallet. Additionally, each file-based account in your wallet will have a separate, password-encrypted JSON file in the same folder named after the account name with the `.wallet` extension. ## Multiple Profiles You can use multiple Oasis CLI profiles by passing the `--config` parameter with the location of the desired `cli.toml`: ```shell oasis wallet list --config ~/.config/oasis_dev/cli.toml ``` ``` ACCOUNT KIND ADDRESS oscar file (ed25519-adr8:0) oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e ``` ## Back Up Your Wallet To back up your complete Oasis CLI configuration, including your wallet, archive the configuration folder containing `cli.toml` and `.wallet` files. [cli-releases]: https://github.com/oasisprotocol/cli/releases [cli-source]: https://github.com/oasisprotocol/cli [paratimes]: https://github.com/oasisprotocol/docs/blob/main/docs/build/tools/other-paratimes/README.mdx --- ## Transaction # Transaction Tools The `transaction` command offers convenient tools for processing raw consensus or ParaTime transactions stored in a JSON file: - decoding and displaying the transaction, - verifying the transaction's signature, - signing the transaction, - broadcasting the transaction. ## Decode, Verify and Show a Transaction To show a transaction, invoke `transaction show <filename>` and provide a file containing a transaction previously generated by `oasis-node` or by the Oasis CLI's [`--output-file`][account-output-file] parameter.
[account-output-file]: ./account.md#output-file For example, let's take the following transaction transferring `1.0 TEST` from `test:alice` to `test:bob` on Testnet consensus layer and store it to `testtx.json`: ```json title="testtx.json" { "untrusted_raw_value": "pGNmZWWiY2dhcwFmYW1vdW50QGRib2R5omJ0b1UAyND0Wds45cwxynfmbSxEVty+tQJmYW1vdW50RDuaygBlbm9uY2UBZm1ldGhvZHBzdGFraW5nLlRyYW5zZmVy", "signature": { "public_key": "NcPzNW3YU2T+ugNUtUWtoQnRvbOL9dYSaBfbjHLP1pE=", "signature": "ph5Sj29JFG8p0rCqAXjHm+yLwiXHybxah9C1cVTI01SDeJlyXT8dbp4BfI1hFxBomgi1hOrevTpShX0f9puTCQ==" } } ``` We can decode and verify the transaction as follows: ```shell oasis transaction show testtx.json --network testnet ``` ``` Hash: c996e9d17d652d5dc64589d10806c244a5ef0f650cc2ec8c810b28a85fef5705 Signer: NcPzNW3YU2T+ugNUtUWtoQnRvbOL9dYSaBfbjHLP1pE= (signature: ph5Sj29JFG8p0rCqAXjHm+yLwiXHybxah9C1cVTI01SDeJlyXT8dbp4BfI1hFxBomgi1hOrevTpShX0f9puTCQ==) Content: Method: staking.Transfer Body: To: oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx Amount: 1.0 TEST Nonce: 1 Fee: Amount: 0.0 TEST Gas limit: 1 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) ``` Since the signature depends on the [chain domain separation context], the transaction above will be invalid on other networks such as the Mainnet. In this case the Oasis CLI will print the `[INVALID SIGNATURE]` warning below the signature: ```shell oasis transaction show testtx.json --network mainnet ``` ```text Hash: c996e9d17d652d5dc64589d10806c244a5ef0f650cc2ec8c810b28a85fef5705 Signer: NcPzNW3YU2T+ugNUtUWtoQnRvbOL9dYSaBfbjHLP1pE= (signature: ph5Sj29JFG8p0rCqAXjHm+yLwiXHybxah9C1cVTI01SDeJlyXT8dbp4BfI1hFxBomgi1hOrevTpShX0f9puTCQ==) [INVALID SIGNATURE] Content: Method: staking.Transfer Body: To: oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx Amount: 1.0 ROSE Nonce: 1 Fee: Amount: 0.0 ROSE Gas limit: 1 (gas price: 0.0 ROSE per gas unit) Network: mainnet ParaTime: none (consensus layer) ``` The `show` command is also compatible with ParaTime transactions. Take the following transaction which transfers `1.0 TEST` from `test:alice` to `test:bob` inside Sapphire ParaTime on the Testnet: ```json title="testtx2.json" { "Body": "o2F2AWJhaaJic2mBomVub25jZQFsYWRkcmVzc19zcGVjoWlzaWduYXR1cmWhZ2VkMjU1MTlYIDXD8zVt2FNk/roDVLVFraEJ0b2zi/XWEmgX24xyz9aRY2ZlZaJjZ2FzAWZhbW91bnSCQEBkY2FsbKJkYm9keaJidG9VAMjQ9FnbOOXMMcp35m0sRFbcvrUCZmFtb3VudIJIDeC2s6dkAABAZm1ldGhvZHFhY2NvdW50cy5UcmFuc2Zlcg==", "AuthProofs": [ { "signature": "u71xOVJhRrUth5rNTAa2HuARYCsGLmvOCRE05fCbaQiSoQhXtKPVP9feoQSXmLVxISCHr/0aNnRLEoifJLMzBQ==" } ] } ``` The Oasis CLI will be able to verify a transaction only for the **exact network and ParaTime combination** since both are used to derive the chain domain separation context for signing the transaction. ```shell oasis transaction show testtx2.json --network testnet --paratime sapphire ``` ``` Hash: 1558a5d6254a1b216a0885fa16114899e35b27622fd5af7c8b2eee7284dcad2e Signer(s): 1. NcPzNW3YU2T+ugNUtUWtoQnRvbOL9dYSaBfbjHLP1pE= (signature: u71xOVJhRrUth5rNTAa2HuARYCsGLmvOCRE05fCbaQiSoQhXtKPVP9feoQSXmLVxISCHr/0aNnRLEoifJLMzBQ==) Content: Format: plain Method: accounts.Transfer Body: To: test:bob (oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx) Amount: 1.0 TEST Authorized signer(s): 1. 
NcPzNW3YU2T+ugNUtUWtoQnRvbOL9dYSaBfbjHLP1pE= (ed25519) Nonce: 1 Fee: Amount: 0.0 TEST Gas limit: 1 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: sapphire ``` ## Sign a Transaction To sign a [previously unsigned transaction][unsigned] or to append another signature to the transaction (*multisig*), run `transaction sign <filename>`. For example, let's transfer `1.0 TEST` from `test:alice` to `test:bob` on the Testnet consensus layer, but leave the transaction unsigned and store it in `testtx_unsigned.json`: ```json title="testtx_unsigned.json" { "nonce": 32, "fee": { "amount": "0", "gas": 1265 }, "method": "staking.Transfer", "body": "omJ0b1UAyND0Wds45cwxynfmbSxEVty+tQJmYW1vdW50RDuaygA=" } ``` Comparing this transaction to [`testtx.json`](#show), which was signed, we notice that the transaction is not wrapped inside the `untrusted_raw_value` envelope with the `signature` field. Decoding the unsigned transaction gives us similar output: ```shell oasis transaction show testtx_unsigned.json --network testnet ``` ``` Method: staking.Transfer Body: To: oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx Amount: 1.0 TEST Nonce: 32 Fee: Amount: 0.0 TEST Gas limit: 1265 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) ``` Finally, let's sign the transaction: ```shell oasis transaction sign testtx_unsigned.json --network testnet --account test:alice ``` ``` You are about to sign the following transaction: Method: staking.Transfer Body: To: oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx Amount: 1.0 TEST Nonce: 32 Fee: Amount: 0.0 TEST Gas limit: 1265 (gas price: 0.0 TEST per gas unit) Network: testnet ParaTime: none (consensus layer) Account: test:alice (In case you are using a hardware-based signer you may need to confirm on device.) ``` We can also use [`--output-file`][account-output-file] here and store the signed transaction to another file instead of showing it. [Network and Account][npa] selectors are available for the `transaction sign` command. [npa]: ./account.md#npa [unsigned]: ./account.md#unsigned ## Submit a Transaction Invoking `transaction submit <filename>` will broadcast the consensus or ParaTime transaction to the selected network or ParaTime. If the transaction hasn't been signed yet, the Oasis CLI will first sign it with the selected account in your wallet and then broadcast it. ```shell oasis tx submit testtx.json --network testnet --no-paratime ``` ``` Broadcasting transaction... Transaction executed successfully. Transaction hash: a81a1dcd203bba01761a55527f2c44251278110a247e63a12f064bf41e07f13a ``` ```shell oasis tx submit testtx2.json --network testnet --paratime sapphire ``` ``` Broadcasting transaction... Transaction included in block successfully. Round: 946461 Transaction hash: 25f0b2a92b6171969e9cd41d047bc20b4e2307c3a329ddef41af73df69d95b5d ``` [chain domain separation context]: ../../../core/crypto.md#chain-domain-separation --- ## Wallet # Managing Accounts in Your Wallet The `wallet` command is used to manage accounts in your wallet. The wallet can contain file-based accounts which are stored alongside your Oasis CLI configuration, or a reference to an account stored on your hardware wallet. The following encryption algorithms and derivation paths are supported by the Oasis CLI for your accounts: - `ed25519-adr8`: [Ed25519] keypair using the [ADR-8] derivation path in order to obtain a private key from the mnemonic. This is the default setting suitable for accounts on the Oasis consensus layer and Cipher.
- `secp256k1-bip44`: [Secp256k1] Ethereum-compatible keypair using [BIP-44] with ETH coin type to derive a private key. This setting is used for accounts living on EVM-compatible ParaTimes such as Sapphire or Emerald. The same account can be imported into Metamask and other Ethereum wallets. - `ed25519-raw`: [Ed25519] keypair imported directly from the Base64-encoded private key. No key derivation is involved. This setting is primarily used by the network validators to sign the governance and other consensus-layer transactions. - `ed25519-legacy`: [Ed25519] keypair using a legacy 5-component derivation path. This is the preferred setting for Oasis accounts stored on a hardware wallet like Ledger. It is called legacy because it was first implemented before [ADR-8] was standardized. - `sr25519-adr8`: [Sr25519] keypair using the [ADR-8] derivation path. This is an alternative signature scheme for signing ParaTime transactions. - `secp256k1-raw` and `sr25519-raw`: Respective Secp256k1 and Sr25519 keypairs imported directly from the Hex- or Base64-encoded private key. No key derivation is involved. For compatibility with Ethereum, each `secp256k1` account corresponds to two addresses: - 20-byte hex-encoded Ethereum-compatible address, e.g. `0xDCbF59bbcC0B297F1729adB23d7a5D721B481BA9` - Bech32-encoded Oasis native address, e.g. `oasis1qq3agel5x07pxz08ns3d2y7sjrr3xf9paquhhhzl`. There exists a [mapping][eth-oasis-address-mapping] from the Ethereum address to the native Oasis address as in the example above, but **there is no reverse mapping**. [ADR-8]: ../../../adrs/0008-standard-account-key-generation.md [BIP-44]: https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki [Ed25519]: https://en.wikipedia.org/wiki/EdDSA [Secp256k1]: https://en.bitcoin.it/wiki/Secp256k1 [Sr25519]: https://wiki.polkadot.network/docs/learn-cryptography [eth-oasis-address-mapping]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/client-sdk/go/types/address.go#L135-L142 ## Create an Account The `wallet create <name>` command is used to add a new account to your Oasis CLI wallet by: - generating a new mnemonic and storing it into a file-based wallet, or - creating a reference to an account stored on your hardware wallet. By default, a password-encrypted file-based wallet will be used for storing the private key. You will have to enter the password for this account each time you want to use it for signing transactions (e.g. to send tokens). The account address is public and can be accessed without entering the passphrase. ```shell oasis wallet create oscar ``` ``` ? Choose a new passphrase: ? Repeat passphrase: ``` The first account you create or import will become your **default account**. This means it will automatically be selected as a source for sending funds or calling smart contracts unless specified otherwise by using the `--account <name>` flag. You can always [change the default account](#set-default) later. To use your hardware wallet, add the `--kind ledger` parameter and the Oasis CLI will store a reference to an account on your hardware wallet: ```shell oasis wallet create logan --kind ledger ``` A specific account kind (`ed25519-adr8`, `secp256k1-bip44`) and the derivation path number can be passed with `--file.algorithm` and `--file.number` or `--ledger.algorithm` and `--ledger.number` respectively. For example: ```shell oasis wallet create lenny --kind ledger --ledger.algorithm secp256k1-bip44 --ledger.number 3 ``` When creating a hardware wallet account, Oasis CLI will: 1.
obtain the public key of the account from your hardware wallet, 2. compute the corresponding native address, and 3. store the Oasis native address into the Oasis CLI. If you try to open the same account with a different Ledger device or reset your Ledger with a new mnemonic, Oasis CLI will abort because the address of the account obtained from the new device will not match the one stored in your config. ```shell oasis wallet show logan ``` ``` Error: address mismatch after loading account (expected: oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl got: oasis1qzdyu09x7hs5nqa0sjgy5jtmz3j5f99ccq0aezjk) ``` ## Import an Existing Keypair or a Mnemonic If you already have a mnemonic or a raw private key, you can import it as a new account by invoking `wallet import`. You will be asked interactively to select an account kind (`mnemonic` or `private key`), encryption algorithm (`ed25519` or `secp256k1`) and then provide either the mnemonic with the derivation number, or the raw private key in the corresponding format. Importing an account with a mnemonic looks like this: ```shell oasis wallet import eugene ``` ``` ? Kind: mnemonic ? Algorithm: secp256k1-bip44 ? Key number: 0 ? Mnemonic: [Enter 2 empty lines to finish]man ankle mystery favorite tone number ice west spare marriage control lucky life together neither ? Mnemonic: man ankle mystery favorite tone number ice west spare marriage control lucky life together neither ? Choose a new passphrase: ? Repeat passphrase: ``` Let's make another Secp256k1 account and enter a hex-encoded raw private key: ```shell oasis wallet import emma ``` ``` oasis wallet import emma ? Kind: private key ? Algorithm: secp256k1-raw ? Private key (hex-encoded): [Enter 2 empty lines to finish]4811ebbe4f29f32a758f6f7bad39deb97ea67f07350637e31c75795dc679262a ? Private key (hex-encoded): 4811ebbe4f29f32a758f6f7bad39deb97ea67f07350637e31c75795dc679262a ? Choose a new passphrase: ? Repeat passphrase: ``` To override the defaults, you can pass the `--algorithm`, `--number` and `--secret` parameters. This is especially useful if you are running the command in non-interactive mode: ``` oasis wallet import eugene --algorithm secp256k1-bip44 --number 0 --secret "man ankle mystery favorite tone number ice west spare marriage control lucky life together neither" -y ``` Be cautious when importing accounts in non-interactive mode Since the account's secret is provided as a command line parameter in the non-interactive mode, make sure you **read the account's secret from a file or an environment variable**. Otherwise, the secret may be stored and exposed in your shell history. Also, protecting your account with a password is currently not supported in the non-interactive mode. ## List Accounts Stored in Your Wallet You can list all available accounts in your wallet with `wallet list`: ```shell oasis wallet list ``` ``` ACCOUNT KIND ADDRESS emma file (secp256k1-raw) oasis1qph93wnfw8shu04pqyarvtjy4lytz3hp0c7tqnqh eugene file (secp256k1-bip44:0) oasis1qrvzxld9rz83wv92lvnkpmr30c77kj2tvg0pednz lenny ledger (secp256k1-bip44:3) oasis1qrmw4rhvp8ksj3yx6p2ftnkz864muc3re5jlgall logan ledger (ed25519-legacy:0) oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl oscar (*) file (ed25519-adr8:0) oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e ``` Above, you can see the native Oasis addresses of all local accounts. The [default account](#set-default) has a special `(*)` sign next to its name. ## Show Account Configuration Details To verify whether an account exists in your wallet, use `wallet show <name>`.
This will print the account's native address and the public key, which requires entering your account's password. ```shell oasis wallet show oscar ``` ``` Unlock your account. ? Passphrase: Name: oscar Kind: file (ed25519-adr8:0) Public Key: Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= Native address: oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e ``` For `secp256k1` accounts, the Ethereum hex-encoded address will also be printed. ```shell oasis wallet show eugene ``` ``` Unlock your account. ? Passphrase: Name: eugene Kind: file (secp256k1-bip44:0) Public Key: ArEjDxsPfDvfeLlity4mjGzy8E/nI4umiC8vYQh+eh/c Ethereum address: 0xBd16C6bF701a01DF1B5C11B14860b6bDbE776669 Native address: oasis1qrvzxld9rz83wv92lvnkpmr30c77kj2tvg0pednz ``` Showing an account stored on your hardware wallet will require connecting it to your computer: ```shell oasis wallet show logan ``` ``` Name: logan Kind: ledger (ed25519-legacy:0) Public Key: l+cuboPsOeuY1+kYlROrpmKgiiELmXSw9xl0WEg8cWE= Native address: oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl ``` ## Export the Account's Secret You can obtain the secret material of a file-based account such as the mnemonic or the private key by running `wallet export <name>`. For example: ```shell oasis wallet export oscar ``` ``` WARNING: Exporting the account will expose secret key material! Unlock your account. ? Passphrase: Name: oscar Kind: file (ed25519-adr8:0) Public Key: Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8= Native address: oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e Secret mnemonic: promote easily runway junior saddle gold flip believe wet example amount believe habit mixed pistol lemon increase moon rail mail fiction miss clip asset Derived secret key for account number 0: LHOUUJgVquTdi/3DVsS4caW4jQcvuFgl1Oag6BwlNvwHHqA6LGfHLXm0KzT25rkNwrJf26jYWitvfY7ofKOhzw== ``` The same goes for your Secp256k1 accounts: ```shell oasis wallet export eugene ``` ``` WARNING: Exporting the account will expose secret key material! Unlock your account. ? Passphrase: Name: eugene Kind: file (secp256k1-bip44:0) Public Key: ArEjDxsPfDvfeLlity4mjGzy8E/nI4umiC8vYQh+eh/c Ethereum address: 0xBd16C6bF701a01DF1B5C11B14860b6bDbE776669 Native address: oasis1qrvzxld9rz83wv92lvnkpmr30c77kj2tvg0pednz Secret mnemonic: man ankle mystery favorite tone number ice west spare marriage control lucky life together neither Derived secret key for account number 0: c559cad1e71e0db1b3a657f47ca7a618bfb6a51a7294df72bcfca57aded5377e ``` ```shell oasis wallet export emma ``` ``` WARNING: Exporting the account will expose secret key material! Unlock your account. ? Passphrase: Name: emma Kind: file (secp256k1-raw) Public Key: Az8B2UpSUET0E3n9XMzr+HBvviQKcRvz6C6bJtRFWNYG Ethereum address: 0xeEbE22411f579682F6f9D68f4C19B3581bCb576b Native address: oasis1qph93wnfw8shu04pqyarvtjy4lytz3hp0c7tqnqh Secret key: 4811ebbe4f29f32a758f6f7bad39deb97ea67f07350637e31c75795dc679262a ``` Trying to export an account stored on your hardware wallet will only export its public key: ```shell oasis wallet export lenny ``` ``` WARNING: Exporting the account will expose secret key material! Name: lenny Kind: ledger (secp256k1-bip44:3) Public Key: AhhT2TUkEZ7rMasLBvHcsGj4SUO7Iw36ELEpL0evZDV1 Ethereum address: 0x95e5e3C1BDD92cd4A0c14c62480DB5867946281D Native address: oasis1qrmw4rhvp8ksj3yx6p2ftnkz864muc3re5jlgall ``` ## Renaming the Account To rename an account, run `wallet rename <old_name> <new_name>`.
For example: ```shell oasis wallet list ``` ``` ACCOUNT KIND ADDRESS emma file (secp256k1-raw) oasis1qph93wnfw8shu04pqyarvtjy4lytz3hp0c7tqnqh eugene file (secp256k1-bip44:0) oasis1qrvzxld9rz83wv92lvnkpmr30c77kj2tvg0pednz lenny ledger (secp256k1-bip44:3) oasis1qrmw4rhvp8ksj3yx6p2ftnkz864muc3re5jlgall logan ledger (ed25519-legacy:0) oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl oscar (*) file (ed25519-adr8:0) oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e ``` ```shell oasis wallet rename lenny lester ``` ```shell oasis wallet list ``` ``` ACCOUNT KIND ADDRESS emma file (secp256k1-raw) oasis1qph93wnfw8shu04pqyarvtjy4lytz3hp0c7tqnqh eugene file (secp256k1-bip44:0) oasis1qrvzxld9rz83wv92lvnkpmr30c77kj2tvg0pednz lester ledger (secp256k1-bip44:3) oasis1qrmw4rhvp8ksj3yx6p2ftnkz864muc3re5jlgall logan ledger (ed25519-legacy:0) oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl oscar (*) file (ed25519-adr8:0) oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e ``` ## Deleting an Account To irreversibly delete accounts from your wallet, use `wallet remove [names]`. For file-based accounts this will delete the file containing the private key from your disk. For hardware wallet accounts this will delete the Oasis CLI reference, but the private keys will remain intact on your hardware wallet. For example, let's delete the `lenny` account: ```shell oasis wallet list ``` ``` ACCOUNT KIND ADDRESS emma file (secp256k1-raw) oasis1qph93wnfw8shu04pqyarvtjy4lytz3hp0c7tqnqh eugene file (secp256k1-bip44:0) oasis1qrvzxld9rz83wv92lvnkpmr30c77kj2tvg0pednz lenny ledger (secp256k1-bip44:3) oasis1qrmw4rhvp8ksj3yx6p2ftnkz864muc3re5jlgall logan ledger (ed25519-legacy:0) oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl oscar (*) file (ed25519-adr8:0) oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e ``` ```shell oasis wallet remove lenny ``` ``` WARNING: Removing the account will ERASE secret key material! WARNING: THIS ACTION IS IRREVERSIBLE! ? Enter 'I really want to remove account lenny' (without quotes) to confirm removal: I really want to remove account lenny ``` ```shell oasis wallet list ``` ``` ACCOUNT KIND ADDRESS emma file (secp256k1-raw) oasis1qph93wnfw8shu04pqyarvtjy4lytz3hp0c7tqnqh eugene file (secp256k1-bip44:0) oasis1qrvzxld9rz83wv92lvnkpmr30c77kj2tvg0pednz logan ledger (ed25519-legacy:0) oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl oscar (*) file (ed25519-adr8:0) oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e ``` You can also delete an account in non-interactive mode by passing the `-y` parameter: ```shell oasis wallet remove lenny -y ``` ## Set Default Account To change your default account, use `wallet set-default <name>` with the name of the desired default account.
```shell oasis wallet list ``` ``` ACCOUNT KIND ADDRESS emma file (secp256k1-raw) oasis1qph93wnfw8shu04pqyarvtjy4lytz3hp0c7tqnqh eugene file (secp256k1-bip44:0) oasis1qrvzxld9rz83wv92lvnkpmr30c77kj2tvg0pednz lenny ledger (secp256k1-bip44:3) oasis1qrmw4rhvp8ksj3yx6p2ftnkz864muc3re5jlgall logan ledger (ed25519-legacy:0) oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl oscar (*) file (ed25519-adr8:0) oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e ``` ```shell oasis wallet set-default lenny ``` ```shell oasis wallet list ``` ``` ACCOUNT KIND ADDRESS emma file (secp256k1-raw) oasis1qph93wnfw8shu04pqyarvtjy4lytz3hp0c7tqnqh eugene file (secp256k1-bip44:0) oasis1qrvzxld9rz83wv92lvnkpmr30c77kj2tvg0pednz lenny (*) ledger (secp256k1-bip44:3) oasis1qrmw4rhvp8ksj3yx6p2ftnkz864muc3re5jlgall logan ledger (ed25519-legacy:0) oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl oscar file (ed25519-adr8:0) oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e ``` ## Advanced ### Import an Existing Keypair from PEM file Existing node operators may already have the Ed25519 private key they use for running their nodes stored in a PEM-encoded file, typically named `entity.pem`. In order to submit a governance transaction with the Oasis CLI, for example to vote on a network upgrade, they need to import the key into the Oasis CLI wallet: ```shell oasis wallet import-file my_entity entity.pem ``` ``` ? Choose a new passphrase: ? Repeat passphrase: ``` The key is now safely stored and encrypted inside the Oasis CLI. ```shell oasis wallet list ``` ``` ACCOUNT KIND ADDRESS my_entity file (ed25519-raw) oasis1qpe0vnm0ahczgc353vytvtz9r829le4pjux8lc5z ``` ### Remote Signer for `oasis-node` You can bind an account in your Oasis CLI wallet to a local instance of `oasis-node`. To do this, use `wallet remote-signer <account> <socket_path>`, pick the account you wish to expose and provide a path to the new unix socket: ```shell oasis wallet remote-signer oscar /datadir/oasis-oscar.socket ``` ``` Unlock your account. ? Passphrase: Address: oasis1qp87hflmelnpqhzcqcw8rhzakq4elj7jzv090p3e Node Args: --signer.backend=remote \ --signer.remote.address=unix:/datadir/oasis-oscar.socket *** REMOTE SIGNER READY *** ``` ### Test Accounts Oasis CLI comes with the following hardcoded test accounts: - `test:alice`: Ed25519 test account used by Oasis core tests - `test:bob`: Ed25519 test account used by Oasis core tests - `test:charlie`: Secp256k1 test account - `test:cory`: Ed25519 account used by `oasis-net-runner` - `test:dave`: Secp256k1 test account - `test:erin`: Sr25519 test account - `test:frank`: Sr25519 test account Do not use these accounts on public networks Private keys for these accounts are well-known. Do not fund them on public networks, because anyone can drain them! We suggest that you use these accounts for Localnet development or for reproducibility when you report bugs to the Oasis core team. You can access the private key of a test account the same way as you would for ordinary accounts by invoking the [`oasis wallet export`](#export) command. --- ## Build on Oasis The best way to start learning is by example! If you want to jump right into it, check out our use cases that combine TEE and blockchain to build trustless distributed apps. ## The Oasis SDK [Image: Oasis architectural design including ParaTime and consensus layers] ### ROFL-Powered Apps [Runtime off-chain logic (ROFL)][rofl] enables you to wrap applications in trusted execution environment (TEE) containers managed through [Sapphire].
This framework is ideal for deploying provably trusted oracles, running compute-expensive tasks such as AI workloads, or hosting a backend for interactive games.

### Smart Contracts

Smart contracts are deployed to [Sapphire], an EVM-compatible Layer 1 blockchain with confidential smart contract state. Extra on-chain features such as a random number generator, cryptography primitives, a cross-chain [privacy layer](opl/README.mdx) and ROFL verification are supported in your contracts.

[rofl]: ./rofl/README.mdx
[Sapphire]: ./sapphire/README.mdx

### Web Browser

Sapphire supports optional encrypted transactions and queries through client-side end-to-end encryption. Modern Web3 libraries running in a [web browser] are supported.

[web browser]: ./sapphire/develop/browser.md

### Server-Side Apps

End-to-end encrypted transactions and queries are often required by server-side applications running either inside ROFL or outside of the TEE. Check out our comprehensive [API reference guide] for your preferred programming language to learn how to integrate with the Oasis Network.

[API reference guide]: https://api.docs.oasis.io

---

## Oasis Privacy Layer (OPL)

The Oasis Privacy Layer (OPL) is a powerful solution that enables developers to integrate privacy features into their decentralized applications (dApps) across multiple EVM-compatible networks.

- **Privacy-First**: OPL leverages [Sapphire]'s privacy features to ensure that contract data and computation remain confidential.
- **Cross-Chain Compatibility**: OPL is compatible with multiple blockchains through message bridging protocols, making it easy to integrate privacy regardless of the chain your dApp is built on.

For more information about OPL and to catch the latest news, please visit the [official OPL page].

[official OPL page]: https://oasis.net/opl
[Sapphire]: https://github.com/oasisprotocol/docs/blob/main/docs/build/sapphire/README.mdx

## How OPL Works

OPL is made possible through message bridges, which enable secure communication between OPL-enabled contracts on Sapphire and smart contracts on other chains. This allows dApps to access privacy-preserving capabilities while keeping their main logic on their primary chain.

[Image: Oasis Privacy Layer diagram]

To learn how to use signed messages with the GSN to trigger cross-chain messages, please visit our [Gasless Transactions chapter].

[Gasless Transactions chapter]: https://github.com/oasisprotocol/docs/blob/main/docs/build/sapphire/develop/gasless.md

## Message Bridges

You can integrate messaging bridges into your dApps using one of these four methods:

- **[Hyperlane Protocol][hyperlane]**: A permissionless interoperability protocol that enables seamless cross-chain communication for developers.
- **[Router Protocol CrossTalk][router]**: An extensible cross-chain framework that enables seamless state transitions across multiple chains.
- **[OPL SDK]**: A wrapper provided by the Oasis Protocol that simplifies the integration of message bridging with Oasis's privacy features.
- **[Celer Inter-Chain Messaging (IM)][celer]**: A generalized message bridging solution by Celer, which lets you build more complex solutions.
### Comparison

| Protocol | Validator Network | Relayer | Fees |
| --- | --- | --- | --- |
| **[Hyperlane][hyperlane]** | Self-hosted or run by Hyperlane | Self-hosted or run by Hyperlane | Interchain Gas Payments on origin chain |
| **[Router Protocol][router]** | Orchestrators (Router Chain) | Relayer (run by 3rd party) | Paid by the approved fee payer on the Router Chain |
| **[OPL SDK]** | SGN (Celer) | Executor (self-hosted or hosted service by Celer) | SGN Fee: Paid via `msg.value`<br />Executor Fee: Charged externally (Free on testnet) |
| **[Celer IM][celer]** | SGN (Celer) | Executor (self-hosted or hosted service by Celer) | SGN Fee: Paid via `msg.value`<br />Executor Fee: Charged externally (Free on testnet) |

### Recommendation

#### Development & Testing

**[Hyperlane][hyperlane]**: Due to its permissionless nature, Hyperlane integrates well with other testnets, and you can easily run your own Relayer. Hyperlane's flexibility is great for hackathons, early-stage development and testing environments.

#### Production

**[Router Protocol][router]**: Battle-tested by ecosystem dApps like Neby and features the most active token pairs. Router provides a highly reliable solution for cross-chain communication, making it a top recommendation for production-ready environments.

## Examples

[OPL SDK]: ./opl-sdk/README.md
[celer]: ./celer/README.md
[router]: ./router-protocol/README.md
[hyperlane]: ./hyperlane/README.md

---

## Celer Inter-Chain Messaging (IM)

**Celer Inter-Chain Messaging (IM)** is a message passing protocol that facilitates the seamless transfer of any type of generic message, including function calls, across multiple blockchains via a single source-chain transaction. Celer IM currently supports message passing between Oasis Sapphire and all other IM-supported chains. The message-passing support enables developers to build entirely new privacy-centric dApps or add confidentiality to existing dApps on popular EVM networks using Sapphire as a privacy layer.

**Celer IM** offers two design patterns:

- Cross-chain logic execution without fund transfer
- Cross-chain logic execution with accompanying fund transfer

This documentation focuses on cross-chain logic execution **without** fund transfer. For information on using Celer IM with fund transfer, please refer to the [Celer IM documentation].

[Celer IM documentation]: https://im-docs.celer.network/

## Architecture

[Image: Celer IM Architecture]

*Architecture diagram for Celer IM[^1]*

[^1]: The Celer IM architecture diagram is courtesy of [Celer documentation][celer-architecture].

[celer-architecture]: https://im-docs.celer.network/developer/architecture-walkthrough/end-to-end-workflow

Celer IM's architecture is composed of several core components that work together to facilitate secure and reliable cross-chain messaging:

- **MessageBus**: The primary component managing message transmission between source and destination blockchains. It ensures proper formatting and routing of messages through the Celer network.
- **State Guardian Network (SGN)**: A decentralized network of validators that manage the state of cross-chain messages. SGN validators sign off on messages and coordinate their secure delivery, providing security and availability for cross-chain interactions.
- **[Executor](#executor)**: An off-chain component that listens to the SGN for validated messages ready for execution on the destination chain. Once a message is verified, the Executor sends transactions to the MessageBus on the destination chain, triggering the execution of the specified logic.

## Executor

The [Executor][Message Executor] is a crucial part of the Celer IM framework. It performs two main functions:

- Monitors the Celer State Guardian Network (SGN) for messages ready to be submitted (with sufficient validator signatures).
- Submits message execution transactions to the MessageBus contract on the destination chain.

It is necessary that a [Message Executor] runs for your dApp. To set up an executor, you have two options:

- Follow the [documentation] to set up your own executor.
- Fill out this [form][celer-form] for Celer to set up a hosted executor service for you.

For Hackathon or Grant participants, we recommend filling out the [relay request form][celer-form] to use the shared Message Executor. In most cases, Celer advises dApp developers to use the shared executor services provided by the Celer Network team to avoid server configuration and operation concerns.

Oasis runs an executor for the Sapphire Testnet, which is fine to rely on for testing; for faster execution it is recommended to run your own executor or use the hosted service.

[Message Executor]: https://im-docs.celer.network/developer/development-guide/message-executor
[documentation]: https://im-docs.celer.network/developer/development-guide/message-executor/integration-guide
[celer-form]: https://form.typeform.com/to/RsiUR9Xz

## Fees

The cross-chain messaging process involves fees paid to two parties:

- **SGN Fee**: Paid as `msg.value` to the *MessageBus* contract by the entity calling `sendMessage`.
- **Executor Fee**: Charged by the Executor for submitting execute message transactions.

## Monitoring

The Celer IM Scan API can be used to retrieve status and message details by providing the globally unique transaction ID from the chain which originated the message.

https://api.celerscan.com/scan/searchByTxHash?tx=0x...

For details of the response format, see the [Query IM Tx Status] page of the Celer-IM documentation. Using this API lets you check whether messages have been delivered.

[Query IM Tx Status]: https://im-docs.celer.network/developer/development-guide/query-im-tx-status

---

## Supported Networks

## Mainnets

| Name | Int ID | Hex ID | autoswitch name |
| ---- | ------ | ------ | --------------- |
| Ape | 16350 | 0x3fde | ape |
| Arbitrum Nova | 42170 | 0xa4ba | arbitrum-nova |
| Arbitrum One | 42161 | 0xa4b1 | arbitrum-one |
| Astar | 592 | 0x250 | astar |
| Aurora | 1313161554 | 0x4e454152 | aurora |
| Avalanche | 43114 | 0xa86a | avalanche |
| Binance Smart Chain | 56 | 0x38 | bsc |
| Ethereum | 1 | 0x1 | ethereum |
| Fantom | 250 | 0xfa | fantom |
| Filecoin | 314 | 0x13a | filecoin |
| Milkomeda C1 | 2001 | 0x7d1 | milkomeda |
| Moonriver | 1285 | 0x505 | moonriver |
| Polygon | 137 | 0x89 | polygon |
| Sapphire | 23294 | 0x5afe | sapphire |
| Syscoin | 57 | 0x39 | syscoin |
| Polygon zkEVM | 1101 | 0x44d | polygon-zkevm |
| Optimism | 10 | 0xa | optimism |
| zkSync Era | 324 | 0x144 | zksync-era |

## Testnets

Oasis operates an IM [executor] supporting Avalanche and BSC testnets. You may need to deploy your own while developing on another Celer-supported network.
[executor]: ../celer/README.md#executor

| Name | Int ID | Hex ID | autoswitch name |
| ---- | ------ | ------ | --------------- |
| Avalanche C-Chain Fuji Testnet | 43113 | 0xa869 | avalanche-fuji |
| BSC Testnet | 97 | 0x61 | bsc-testnet |
| Dexalot Testnet | 432201 | 0x69849 | dexalot-testnet |
| Fantom Testnet | 4002 | 0xfa2 | fantom-testnet |
| FNCY Testnet | 923018 | 0xe158a | fncy-testnet |
| Godwoken Testnet | 71401 | 0x116e9 | godwoken-testnet |
| Sapphire Testnet | 23295 | 0x5aff | sapphire-testnet |
| Scroll Alpha Testnet | 534353 | 0x82751 | scroll-testnet |
| Shibuya Testnet | 81 | 0x51 | shibuya-testnet |

---

## Ping Example

This tutorial demonstrates how to send a cross-chain message using [Celer's Inter-Chain Messaging (IM)].

[Celer's Inter-Chain Messaging (IM)]: https://im-docs.celer.network/

You'll learn how to:

- Deploy MessageBus-compatible contracts
- Send cross-chain messages

We recommend using [Remix] for an easy-to-follow experience. The only prerequisite is a set-up Metamask account. If you're new to Remix, follow our basic guide for using Remix [here][dapp-remix].

[dapp-remix]: ../../tools/remix.md

## Overview Ping

In this example, you'll deploy the same contract on two different chains. You'll then send a `ping` from *BSC Testnet* to *Sapphire Testnet*, facilitated by Celer-IM. The contract on *Sapphire Testnet* will receive the `ping` and emit an event with the received message.

## Contract Setup

1. Open [Remix] and create a new file called `Ping.sol`
2. Paste the following contract and interface into it:

Ping.sol Contract

```solidity title="Ping.sol" showLineNumbers
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IMessageBus {
    function sendMessage(
        address _receiver,
        uint256 _dstChainId,
        bytes calldata _message
    ) external payable;
}

contract Ping {
    address public messageBus;

    event MessageReceived(
        address srcContract,
        uint64 srcChainId,
        address sender,
        bytes message
    );

    enum ExecutionStatus {
        Fail, // execution failed, finalized
        Success, // execution succeeded, finalized
        Retry // execution rejected, can retry later
    }

    constructor(address _messageBus) {
        messageBus = _messageBus;
    }

    modifier onlyMessageBus() {
        require(msg.sender == messageBus, "caller is not message bus");
        _;
    }

    function sendPing(
        address _dstContract,
        uint64 _dstChainId,
        bytes calldata _message
    ) external payable {
        bytes memory message = abi.encode(msg.sender, _message);
        IMessageBus(messageBus).sendMessage{value: msg.value}(_dstContract, _dstChainId, message);
    }

    function executeMessage(
        address _srcContract,
        uint64 _srcChainId,
        bytes calldata _message,
        address // executor
    ) external payable onlyMessageBus returns (ExecutionStatus) {
        (address sender, bytes memory message) = abi.decode(
            (_message),
            (address, bytes)
        );
        emit MessageReceived(_srcContract, _srcChainId, sender, message);
        return ExecutionStatus.Success;
    }
}
```

### Key points

- `messageBus`: Celer's MessageBus contract on the respective chain.
- `sendPing`: Initiates the cross-chain message by calling Celer's MessageBus.
- `executeMessage`: Called by Celer's MessageBus on the destination chain.

## Compiling the Contract

For compatibility with Sapphire, compile the contract using compiler version **`0.8.24`** and evm version **`paris`** (under advanced configuration).
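The steps above assume Remix. If you compile the same contract with Hardhat instead, the equivalent compiler settings would look roughly like the sketch below; the config file and the toolbox plugin import are assumptions for illustration and are not part of the Remix flow.

```ts title="hardhat.config.ts"
// Minimal sketch mirroring the Remix settings above:
// Solidity 0.8.24 with the `paris` EVM target required by Sapphire.
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

const config: HardhatUserConfig = {
  solidity: {
    version: "0.8.24",
    settings: {
      evmVersion: "paris",
    },
  },
};

export default config;
```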
You can also use Celer's framework contracts and interfaces by importing them

```solidity
import "sgn-v2-contracts/contracts/message/framework/MessageBusAddress.sol";
import "sgn-v2-contracts/contracts/message/framework/MessageReceiverApp.sol";
import "sgn-v2-contracts/contracts/message/interfaces/IMessageBus.sol";
```

but this will limit you to Solidity version **`0.8.9`**.

## Deploying the Contract

Deploy the Ping contract on two different chains: `BSC Testnet` and `Sapphire Testnet`.

### Deploying on BSC Testnet

1. Obtain BNB test tokens for `BSC Testnet` from the [BNB faucet] or their discord.
2. In MetaMask, switch to the `BSC Testnet` network and select `Injected Provider - MetaMask` as the environment in Remix.
3. Fill in the messageBus address for BSC Testnet: `0xAd204986D6cB67A5Bc76a3CB8974823F43Cb9AAA`.
4. Deploy the contract on `BSC Testnet`.

[BNB faucet]: https://www.bnbchain.org/en/testnet-faucet

### Deploying on Sapphire Testnet

1. Obtain TEST tokens for `Sapphire Testnet` from the [Oasis faucet].
2. In MetaMask, switch to the `Sapphire Testnet` network and select `Injected Provider - MetaMask` as the environment in Remix.
3. Fill in the messageBus address for Sapphire Testnet: `0x9Bb46D5100d2Db4608112026951c9C965b233f4D`.
4. Deploy the contract on `Sapphire Testnet`.

[Oasis Faucet]: https://faucet.testnet.oasis.io/

## Executing Ping

Now that you've deployed the contracts, you can send the ping message cross-chain. You'll need the following three parameters:

- `_dstContract`: The contract address of the receiving contract on the destination chain which you just deployed.
- `_dstChainId`: The chain ID of the destination chain, which in our example is `Sapphire Testnet` - `23295`.
- `message`: The encoded message, e.g. "Hello from BSC" - `0x48656c6c6f2066726f6d20425343000000000000000000000000000000000000`.

Additionally, you'll have to pay a fee which you send as value. For sending the ping, 0.001 tBNB (1000000 gwei) will be enough.

For the `Sapphire Testnet` an executor is running to relay the messages every few minutes. If you deploy on mainnet, please refer to the [Executor chapter].

[Executor chapter]: ./README.md#executor

## Checking execution

To see if you successfully sent a ping message cross-chain, you can watch for new transactions at the Celer [MessageBus address] or at your deployed contract on `Sapphire Testnet`.

[MessageBus address]: https://explorer.oasis.io/testnet/sapphire/address/0x9Bb46D5100d2Db4608112026951c9C965b233f4D

[Remix]: https://remix.ethereum.org/

---

## Hyperlane Protocol

[Hyperlane] is a permissionless interoperability protocol that enables seamless cross-chain communication for developers. Its unique design allows deployment across various blockchain environments, including layer 1 chains, rollups, and app-chains, without the need for approvals or intermediaries. This [permissionless design] empowers developers to build cross-chain applications with full control over their operations in a multi-chain ecosystem.
[Hyperlane]: https://hyperlane.xyz/
[permissionless design]: https://docs.hyperlane.xyz/docs/intro

### Architecture

[Image: Hyperlane Messaging Flow]

*Basic Hyperlane cross-chain messaging flow[^1]*

[^1]: Architecture diagram is courtesy of [Hyperlane documentation][hyperlane-architecture]

[hyperlane-architecture]: https://docs.hyperlane.xyz/docs/protocol/protocol-overview

Hyperlane's architecture consists of four key components:

- **[Mailboxes]**: Core messaging contracts deployed on each chain that handle message sending/receiving
- **[Interchain Security Modules (ISMs)][ism]**: Custom security logic that determines how messages are verified
- **[Relayers]**: Off-chain agents that transport messages between chains
- **[Validators]**: Off-chain agents that provide the security layer of the Hyperlane protocol

[Mailboxes]: https://docs.hyperlane.xyz/docs/protocol/core/mailbox
[ism]: https://docs.hyperlane.xyz/docs/protocol/ISM/modular-security
[Relayers]: https://docs.hyperlane.xyz/docs/protocol/agents/relayer
[Validators]: https://docs.hyperlane.xyz/docs/protocol/agents/validators

## Fees

Hyperlane fees are called **Interchain Gas Payments** and are paid by the *message sender* to the *relayer*. For more info about Interchain Gas Payments, consult the [Hyperlane documentation][igp].

[igp]: https://docs.hyperlane.xyz/docs/protocol/core/interchain-gas-payment

## Hyperlane CLI

The [Hyperlane CLI][cli] is the official command-line tool for deploying and managing Hyperlane infrastructure. It provides a comprehensive set of utilities for:

- **Chain Configuration**: Set up and register new chains with the Hyperlane network
- **Core Contract Deployment**: Deploy Hyperlane's core contracts (Mailbox, ISM, etc.) to new chains
- **Warp Route Management**: Configure and deploy token bridges between chains
- **Message Testing**: Send test messages across chains to verify connectivity
- **Registry Management**: Interact with chain metadata and contract addresses

The CLI streamlines the process of connecting new chains to the Hyperlane network, making cross-chain communication accessible to developers and chain operators.

[cli]: https://docs.hyperlane.xyz/docs/reference/developer-tools/cli

## Hyperlane Core Deployment

For guidance on how to deploy the Hyperlane Core on Sapphire, refer to the [official deploy documentation][hyperlane-deploy].

[hyperlane-deploy]: https://docs.hyperlane.xyz/docs/get-started-building#step-2%3A-deploy-hyperlane-core-infrastructure

---

## Ping Pong Example

This tutorial demonstrates how to send a cross-chain message via the [Hyperlane Protocol].

[Hyperlane Protocol]: https://docs.hyperlane.xyz/docs/intro

You'll learn how to:

- Deploy Hyperlane Mailbox-compatible contracts
- Deploy a trusted relayer ISM (Interchain Security Module)
- Run a simple relayer
- Send cross-chain messages

We recommend using [Hardhat] for an easy-to-follow experience.

Example Code

You can find the contracts and hardhat tasks for deployment and execution in our [demo-opl repository][opl].

[opl]: https://github.com/oasisprotocol/demo-opl/tree/main/examples/hyperlane-pingpong

[Hardhat]: https://hardhat.org/

## Overview Ping Pong

In this example, you'll deploy a similar contract on two different chains. The contract on Chain A will send a `ping` message to Chain B using the *Hyperlane Protocol*. The contract on Chain B will process this message and respond with a `ping` back to Chain A.

[Image: Ping Pong Flow]

## Setup
1. Create and navigate to a new directory:

```shell
mkdir hyperlane-pingpong && cd hyperlane-pingpong
```

2. Initialize a Hardhat project and install dependencies:

```shell
npx hardhat init
```

3. Add [`@hyperlane-xyz/core`] as a dependency:

```shell npm2yarn
npm install -D @hyperlane-xyz/core
```

There can be some problems with dependencies; be sure to have `ethers@^6` and OpenZeppelin contracts `^4.9.3`.

[`@hyperlane-xyz/core`]: https://www.npmjs.com/package/@hyperlane-xyz/core

### Test Tokens

Make sure you have enough test tokens on `Arbitrum Sepolia` and `Sapphire Testnet`. Get more:

- TEST tokens for `Sapphire Testnet` from the [Oasis Faucet].
- ETH tokens for `Arbitrum Sepolia` from Alchemy's [Faucet].

[Oasis Faucet]: https://faucet.testnet.oasis.io/
[Faucet]: https://faucets.alchemy.com/faucets/arbitrum-sepolia

### Add Networks to Hardhat

Open up your `hardhat.config.ts` and add Arbitrum Sepolia and Sapphire Testnet.

```js title="hardhat.config.ts"
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

// highlight-next-line
const accounts = process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : [];

const config: HardhatUserConfig = {
  solidity: "0.8.24",
  // highlight-start
  networks: {
    'arbitrum-sepolia': {
      url: 'https://arbitrum-sepolia-rpc.publicnode.com',
      chainId: 421614,
      accounts,
    },
    'sapphire-testnet': {
      url: "https://testnet.sapphire.oasis.io",
      accounts,
      chainId: 23295, // 0x5aff
    },
  },
  // highlight-end
};

export default config;
```

Sapphire only supports evmVersion `paris`, which is the current default for Hardhat. Should Hardhat change this, you need to add `evmVersion: "paris"` to the solidity config.

### Ping Pong Contract

For this example we leverage the `Router` wrapper from *Hyperlane*. This results in the following advantages:

- Contracts are compatible with *Hyperlane*'s **MailboxClient** and **IMessageRecipient** interfaces.
- Supports *enrolling* Routers of other chains.
- Supports setting up a **custom ISM**.

1. Create a new file called `Ping.sol` for Arbitrum Sepolia
2. Paste the following contract into it:

Ping.sol Contract

```solidity title="Ping.sol" showLineNumbers
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.13;

// ============ External Imports ============
import {Router} from "@hyperlane-xyz/core/contracts/client/Router.sol";

/*
 * @title Ping
 * @dev You can use this simple app as a starting point for your own application.
 */
contract Ping is Router {
    // A generous upper bound on the amount of gas to use in the handle
    // function when a message is processed. Used for paying for gas.
    uint256 public constant HANDLE_GAS_AMOUNT = 50_000;

    // A counter of how many messages have been sent from this contract.
    uint256 public sent;
    // A counter of how many messages have been received by this contract.
    uint256 public received;

    // Keyed by domain, a counter of how many messages that have been sent
    // from this contract to the domain.
    mapping(uint32 => uint256) public sentTo;
    // Keyed by domain, a counter of how many messages that have been received
    // by this contract from the domain.
mapping(uint32 => uint256) public receivedFrom; // ============ Events ============ event SentPing( uint32 indexed origin, uint32 indexed destination, string message ); event ReceivedPing( uint32 indexed origin, uint32 indexed destination, bytes32 sender, string message ); event HandleGasAmountSet( uint32 indexed destination, uint256 handleGasAmount ); constructor(address _mailbox) Router(_mailbox) { // Transfer ownership of the contract to deployer _transferOwnership(msg.sender); setHook(address(0)); } // ============ External functions ============ /** * @notice Sends a message to the _destinationDomain. Any msg.value is * used as interchain gas payment. * @param _destinationDomain The destination domain to send the message to. * @param _message The message to send. */ function sendPing( uint32 _destinationDomain, string calldata _message ) public payable { sent += 1; sentTo[_destinationDomain] += 1; _dispatch(_destinationDomain, bytes(_message)); emit SentPing( mailbox.localDomain(), _destinationDomain, _message ); } /** * @notice Fetches the amount of gas that will be used when a message is * dispatched to the given domain. */ function quoteDispatch( uint32 _destinationDomain, bytes calldata _message ) external view returns (uint256) { return _quoteDispatch(_destinationDomain, _message); } // ============ Internal functions ============ /** * @notice Handles a message from a remote router. * @dev Only called for messages sent from a remote router, as enforced by Router.sol. * @param _origin The domain of the origin of the message. * @param _sender The sender of the message. * @param _message The message body. */ function _handle( uint32 _origin, bytes32 _sender, bytes calldata _message ) internal override { received += 1; receivedFrom[_origin] += 1; emit ReceivedPing( _origin, mailbox.localDomain(), _sender, string(_message) ); } } ``` 3. Create a new file called `Pong.sol` for Sapphire Testnet 4. Paste the following contract into it: Pong.sol Contract ```solidity title="Pong.sol" showLineNumbers // SPDX-License-Identifier: Apache-2.0 pragma solidity ^0.8.13; // ============ External Imports ============ import {Router} from "@hyperlane-xyz/core/contracts/client/Router.sol"; /* * @title Pong * @dev You can use this simple app as a starting point for your own application. */ contract Pong is Router { // A generous upper bound on the amount of gas to use in the handle // function when a message is processed. Used for paying for gas. uint256 public constant HANDLE_GAS_AMOUNT = 50_000; // A counter of how many messages have been sent from this contract. uint256 public sent; // A counter of how many messages have been received by this contract. uint256 public received; // Keyed by domain, a counter of how many messages that have been sent // from this contract to the domain. mapping(uint32 => uint256) public sentTo; // Keyed by domain, a counter of how many messages that have been received // by this contract from the domain. 
mapping(uint32 => uint256) public receivedFrom; // ============ Events ============ event SentPing( uint32 indexed origin, uint32 indexed destination, string message ); event ReceivedPing( uint32 indexed origin, uint32 indexed destination, bytes32 sender, string message ); event HandleGasAmountSet( uint32 indexed destination, uint256 handleGasAmount ); constructor(address _mailbox) Router(_mailbox) { // Transfer ownership of the contract to deployer _transferOwnership(msg.sender); setHook(address(0)); } // ============ External functions ============ /** * @notice Sends a message to the _destinationDomain. Any msg.value is * used as interchain gas payment. * @param _destinationDomain The destination domain to send the message to. * @param _message The message to send. */ function sendPing( uint32 _destinationDomain, string calldata _message ) public payable { sent += 1; sentTo[_destinationDomain] += 1; _dispatch(_destinationDomain, bytes(_message)); emit SentPing( mailbox.localDomain(), _destinationDomain, _message ); } /** * @notice Fetches the amount of gas that will be used when a message is * dispatched to the given domain. */ function quoteDispatch( uint32 _destinationDomain, bytes calldata _message ) external view returns (uint256) { return _quoteDispatch(_destinationDomain, _message); } // ============ Internal functions ============ /** * @notice Handles a message from a remote router. * @dev Only called for messages sent from a remote router, as enforced by Router.sol. * @param _origin The domain of the origin of the message. * @param _sender The sender of the message. * @param _message The message body. */ function _handle( uint32 _origin, bytes32 _sender, bytes calldata _message ) internal override { received += 1; receivedFrom[_origin] += 1; emit ReceivedPing( _origin, mailbox.localDomain(), _sender, string(_message) ); // send return message sendPing( _origin, string(_message) ); } } ``` ### ISM Contract In the current state the default ISM of the `Arbitrum Sepolia` Mailbox won't accept a message if you send a message from `Sapphire Testnet` to `Arbitrum Sepolia`. You can deploy and register a custom ISM on the `Arbitrum Sepolia` contract to make it work. A simple default ISM from *Hyperlane* is a **TrustedRelayerISM**, which checks the relayer address before delivering the message. 1. Create a new file called `TrustedRelayerIsm.sol` 2. 
Paste the following contract into it:

TrustedRelayerIsm.sol Contract

```solidity title="TrustedRelayerIsm.sol" showLineNumbers
// SPDX-License-Identifier: MIT OR Apache-2.0
pragma solidity >=0.8.0;

// ============ Internal Imports ============
import {IInterchainSecurityModule} from "@hyperlane-xyz/core/contracts/interfaces/IInterchainSecurityModule.sol";
import {Address} from "@openzeppelin/contracts/utils/Address.sol";
import {Message} from "@hyperlane-xyz/core/contracts/libs/Message.sol";
import {Mailbox} from "@hyperlane-xyz/core/contracts/Mailbox.sol";
import {PackageVersioned} from "@hyperlane-xyz/core/contracts/PackageVersioned.sol";

contract TrustedRelayerIsm is IInterchainSecurityModule, PackageVersioned {
    using Message for bytes;

    uint8 public immutable moduleType = uint8(Types.NULL);
    Mailbox public immutable mailbox;
    address public immutable trustedRelayer;

    constructor(address _mailbox, address _trustedRelayer) {
        require(
            _trustedRelayer != address(0),
            "TrustedRelayerIsm: invalid relayer"
        );
        require(
            Address.isContract(_mailbox),
            "TrustedRelayerIsm: invalid mailbox"
        );
        mailbox = Mailbox(_mailbox);
        trustedRelayer = _trustedRelayer;
    }

    function verify(
        bytes calldata,
        bytes calldata message
    ) external view returns (bool) {
        return mailbox.processor(message.id()) == trustedRelayer;
    }
}
```

If you want to read more about *Hyperlane's* **Interchain Security Modules**, visit the [Hyperlane docs].

[Hyperlane docs]: https://docs.hyperlane.xyz/docs/protocol/ISM/modular-security

### Key Contract Functions

- `sendPing`: Initiates the cross-chain message by calling *Hyperlane*'s `IMailbox.dispatch`.
- `enrollRemoteRouter`: In the inherited `Router` contract, to register the contract from the other chain.
- `setInterchainSecurityModule`: Set the ISM for the contract.
- `_handle`: To handle incoming messages from the Mailbox (internal function called by the inherited `Router` contract `handle` function).

## Deploying the Contracts

Deploy the Ping and Pong contracts on two different chains: `Sapphire Testnet` and `Arbitrum Sepolia`. To deploy, either use the provided `deploy-pingpong` and `deploy-ism` tasks or use the scripts below.

### Deploying Pong on Sapphire Testnet

1. Create a deployment script `deploypong.ts` under `scripts/`:

```js title="deploypong.ts"
import { ethers } from "hardhat";

async function main() {
  // deployed mailbox on Sapphire Testnet
  const mailbox = "0x79d3ECb26619B968A68CE9337DfE016aeA471435";

  const PongFactory = await ethers.getContractFactory("Pong");
  const pong = await PongFactory.deploy(mailbox);
  await pong.waitForDeployment();
  console.log(`Pong deployed at: ${pong.target}`);
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```

2. Run the deployment:

```shell
pnpm hardhat run scripts/deploypong.ts --network sapphire-testnet
```

### Deploying Ping on Arbitrum Sepolia

1. Create a deployment script `deployping.ts` under `scripts/`:

```js title="deployping.ts"
import { ethers } from "hardhat";

async function main() {
  // default mailbox on Arbitrum Sepolia
  const mailbox = "0x598facE78a4302f11E3de0bee1894Da0b2Cb71F8";

  const PingFactory = await ethers.getContractFactory("Ping");
  const ping = await PingFactory.deploy(mailbox);
  await ping.waitForDeployment();
  console.log(`Ping deployed at: ${ping.target}`);
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```
2. Run the deployment:

```shell
pnpm hardhat run scripts/deployping.ts --network arbitrum-sepolia
```

### Deploying ISM on Arbitrum Sepolia

1. Create a deployment script `deployISM.ts` under `scripts/`:

```js title="deployISM.ts"
import { ethers } from "hardhat";

async function main() {
  // default mailbox on Arbitrum Sepolia
  const mailbox = "0x598facE78a4302f11E3de0bee1894Da0b2Cb71F8";
  // address of the account your relayer runs with
  const trustedRelayer = "0x";

  const trustedRelayerISM = await ethers.deployContract(
    "TrustedRelayerIsm",
    [mailbox, trustedRelayer]
  );
  await trustedRelayerISM.waitForDeployment();
  console.log(`TrustedRelayerISM deployed to ${trustedRelayerISM.target}`);
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```

2. Run the deployment:

```shell
pnpm hardhat run scripts/deployISM.ts --network arbitrum-sepolia
```

## Contracts Setup

### Enroll Routers

As we use the Router wrapper for our Ping Pong contracts, we need to enroll the address of the opposite contract on each chain.

#### Enroll Router on Sapphire Testnet

1. Create a file named `enroll.ts` in the folder `/scripts`:

```js title="enroll.ts"
import { ethers } from "hardhat";

async function main() {
  let pingpongArbitrum = "0x";
  let pingpongSapphire = "0x";
  let arbId = "421614";

  const signer = await ethers.provider.getSigner();
  const contract = await ethers.getContractAt("Pong", pingpongSapphire, signer);
  await contract.enrollRemoteRouter(arbId, ethers.zeroPadValue(pingpongArbitrum, 32));
  const arbRouter = await contract.routers(arbId);
  console.log(`remote router address for ${arbId}: ${arbRouter}`)
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```

2. Run the script:

```shell
pnpm hardhat run scripts/enroll.ts --network sapphire-testnet
```

#### Enroll Router on Arbitrum Sepolia

1. Create a file named `enroll.ts` in the folder `/scripts`:

```js title="enroll.ts"
import { ethers } from "hardhat";

async function main() {
  let pingpongArbitrum = "0x";
  let pingpongSapphire = "0x";
  let sapphireId = "23295";

  const signer = await ethers.provider.getSigner();
  const contract = await ethers.getContractAt("Ping", pingpongArbitrum, signer);
  await contract.enrollRemoteRouter(sapphireId, ethers.zeroPadValue(pingpongSapphire, 32));
  const sapphireRouter = await contract.routers(sapphireId);
  console.log(`remote router address for ${sapphireId}: ${sapphireRouter}`)
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```

2. Run the script:

```shell
pnpm hardhat run scripts/enroll.ts --network arbitrum-sepolia
```

### Register ISM on Arbitrum Sepolia

1. Create a file named `registerIsm.ts` in the folder `/scripts`:

```js title="registerIsm.ts"
import { ethers } from "hardhat";

async function main() {
  let pingpongArbitrum = "0x";
  let ismAddr = "0x";

  const signer = await ethers.provider.getSigner();
  const contract = await ethers.getContractAt("Ping", pingpongArbitrum, signer);
  await contract.setInterchainSecurityModule(ismAddr);
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```

2. Run the script:

```shell
pnpm hardhat run scripts/registerIsm.ts --network arbitrum-sepolia
```

## Run Relayer

Before starting to test the Ping Pong, make sure to run a Relayer for `Arbitrum Sepolia` and `Sapphire Testnet`. For information about how to run a Relayer, visit our [Relayer page].

[Relayer page]: ./relayer.md

## Executing Ping Pong

To execute, you will call the `sendPing` function on the `Ping.sol` contract. In the `demo-opl` example you can use the provided `send-ping` task or you can use the following script:
1. Create a file named `sendping.ts` in the folder `/scripts`:

```js title="sendping.ts"
import { ethers } from "hardhat";

async function main() {
  const destChainId = "23295";
  const message = "Hello OPL";
  const pingpongArbitrum = "0x";

  const signer = await ethers.provider.getSigner();
  const contract = await ethers.getContractAt("Ping", pingpongArbitrum, signer);

  // Quote the interchain gas payment and send it along with the ping.
  const fee = await contract.quoteDispatch(destChainId, ethers.toUtf8Bytes(message));
  const tx = await contract.sendPing(destChainId, message, { value: fee });
  await tx.wait();
  console.log(`Ping sent: ${tx.hash}`);
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```

2. Run the script:

```shell
pnpm hardhat run scripts/sendping.ts --network arbitrum-sepolia
```

## Verify Ping Pong

The simplest way to verify the cross-chain messaging is to check the blockchain explorer for transactions on your deployed `Ping` contract. If you want to monitor the events directly, you can use this script:

1. Create a file named `verifyping.ts` in the folder `/scripts`:

```js title="verifyping.ts"
import { ethers } from "hardhat";

async function main() {
  // address of your deployed contract on Sapphire Testnet
  const contractAddr = "0x";

  const signer = await ethers.provider.getSigner();
  const contract = await ethers.getContractAt("Pong", contractAddr, signer);

  const spinner = ['-', '\\', '|', '/'];
  let spinnerIndex = 0;
  const interval = setInterval(() => {
    process.stdout.write(`\rListening for event... ${spinner[spinnerIndex]}`);
    spinnerIndex = (spinnerIndex + 1) % spinner.length;
  }, 150);

  let events;
  do {
    const block = await ethers.provider.getBlockNumber();
    events = await contract.queryFilter('ReceivedPing', block - 10, 'latest');
    if (events.length === 0) {
      await new Promise(resolve => setTimeout(resolve, 60 * 1000));
    }
  } while (events.length === 0);

  clearInterval(interval);
  process.stdout.write(`\r`);
  process.stdout.clearLine(0);

  const parsedEvent = contract.interface.parseLog(events[0]);
  const message = parsedEvent?.args?.message;
  console.log(`Message received with: ${message}`);
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```

2. Run the script:

```shell
pnpm hardhat run scripts/verifyping.ts --network sapphire-testnet
```

## Troubleshooting

### Relayer doesn't relay message

Check:

- You have enrolled the opposite Router contract with `enrollRemoteRouter` by calling `ping.routers(oppositeChainId)`
- You have set the custom ISM on the `Arbitrum Sepolia` Ping contract by calling `ping.interchainSecurityModule`
- The relayer address matches the trusted ISM configuration.

### Can't verify messages

Check:

- Your Hardhat RPC allows historical event queries.
- Use a dedicated RPC provider (e.g., Alchemy) if necessary.

---

## Relayer

[Relayers][relayer] are off-chain agents that transport messages between chains.

[relayer]: https://docs.hyperlane.xyz/docs/protocol/agents/relayer

## Run a Relayer

The easiest way to run a relayer is with the **[Hyperlane CLI]**.

[Hyperlane CLI]: https://docs.hyperlane.xyz/docs/reference/developer-tools/cli

1. Export your private key to be used with the CLI:

```shell
export HYP_KEY=''
```

2. Start a relayer which watches `Arbitrum Sepolia` & `Sapphire Testnet`:

```shell
hyperlane relayer --chains sapphiretestnet,arbitrumsepolia
```

Chain Configs

`Sapphire Testnet` is registered in the *Hyperlane Registry*. If you deploy the *Hyperlane Core* on `Sapphire Testnet` yourself, make sure you have *Hyperlane* config files similar to the ones below in `$HOME/.hyperlane/chains/sapphiretestnet`.
metadata.yaml

```yaml
# yaml-language-server: $schema=../schema.json
blockExplorers:
  - apiUrl: https://nexus.oasis.io/v1/
    family: other
    name: Oasis Explorer
    url: https://explorer.oasis.io/testnet/sapphire
chainId: 23295
displayName: Sapphire Testnet
domainId: 23295
isTestnet: true
name: sapphiretestnet
nativeToken:
  decimals: 18
  name: TEST
  symbol: TEST
protocol: ethereum
rpcUrls:
  - http: https://testnet.sapphire.oasis.io
technicalStack: other
```

addresses.yaml

```yaml
domainRoutingIsmFactory: "0x3497967f8E5041f486eC559E6B760d8f051A034C"
interchainAccountIsm: "0xD84DE931A0EDA06Af3944a4e9933c24f3B56DCaC"
interchainAccountRouter: "0xFdca43771912CE5F5B4D869B0c05df0b6eF8aEFc"
mailbox: "0x79d3ECb26619B968A68CE9337DfE016aeA471435"
proxyAdmin: "0x5Ed8004e3352df333901b0B2E98Bd98C3B4AA59A"
staticAggregationHookFactory: "0x212c232Ee07E187CF9b4497A30A3a4D034aAC4D6"
staticAggregationIsmFactory: "0xE25A539AdCa1Aac56549997f2bB88272c5D9498c"
staticMerkleRootMultisigIsmFactory: "0x9851EC4C62943E9974370E87E93CE552abE7705E"
staticMerkleRootWeightedMultisigIsmFactory: "0x688dE6d0aBcb60a711f149c274014c865446b49D"
staticMessageIdMultisigIsmFactory: "0xFE0937b1369Bbba59211c4119B91984FF450ccf1"
staticMessageIdWeightedMultisigIsmFactory: "0x1de05675c8cd512A30c17Ea0a3491d74eF290994"
testRecipient: "0x7bf548104F8f500C563Aa6DC7FbF3b1ad93E4E03"
validatorAnnounce: "0xB119f96a106919489b6495128f30e7088e55B05c"
```

Agents

For a more complex validator and relayer setup, check Hyperlane's **[Local Agents guide]** or the more production-ready **[Agent Operators guide]**.

[Local Agents guide]: https://docs.hyperlane.xyz/docs/guides/deploy-hyperlane-local-agents
[Agent Operators guide]: https://docs.hyperlane.xyz/docs/operate/overview-agents

---

## OPL SDK

The OPL SDK is available in our [Solidity library][sapphire-contracts]. The SDK wraps the Celer Inter-Chain Message (IM) framework and makes it easy and straightforward to integrate [Sapphire] and its privacy features into your existing or future Web3 applications.

[sapphire-contracts]: https://www.npmjs.com/package/@oasisprotocol/sapphire-contracts
[Sapphire]: https://oasis.net/sapphire

## Overview

[Image: Transaction Flow]

1. The **user** submits a transaction on the Home network to a contract which uses `postMessage` to emit an event about the cross-chain message.
2. The **Celer *State Guardian Network* (SGN)** monitors for transactions which trigger a cross-chain message event and creates an attestation.
3. The **Executor** waits until the SGN approves the message, then submits a transaction to the target contract on Sapphire.

## Fees

The Home Contract pays the SGN to watch and approve the message, but the Executor needs to be run by somebody willing to pay for the gas to submit transactions to the destination chain. More details about the Celer Executor can be found [here][celer-executor].

## Quickstart

A pair of contracts are linked bidirectionally 1-1 to each other across chains, with one end on Sapphire and the other on a supported EVM-compatible chain (the Home Network). They can post and receive messages to & from each other using the message-passing bridge, but must register endpoints to define which messages they handle from each other.
### Setup

Start by adding the [`@oasisprotocol/sapphire-contracts`] NPM package to your Hardhat project so you can import `OPL.sol`:

```shell npm2yarn
npm install @oasisprotocol/sapphire-contracts
```

[`@oasisprotocol/sapphire-contracts`]: http://npmjs.com/package/@oasisprotocol/sapphire-contracts

Now define the two contracts:

- A contract on **Sapphire** which runs inside the confidential `enclave`
- A contract on the **home chain** as a `host` which triggers the example

### Sapphire Contract

On Sapphire, use the constructor to provide the Sapphire contract with the location (address and chain) of the contract on the Home chain and register an endpoint called `secretExample`.

```solidity
import {Enclave, Result, autoswitch} from "@oasisprotocol/sapphire-contracts/contracts/OPL.sol";

contract SapphireContract is Enclave {
    constructor(address otherEnd, bytes32 chain) Enclave(otherEnd, autoswitch(chain)) {
        registerEndpoint("secretExample", on_example);
    }

    function on_example(bytes calldata _args) internal returns (Result) {
        (uint256 a, bool b) = abi.decode(_args, (uint256, bool));
        // TODO: do confidential things here
        return Result.Success;
    }
}
```

### Home Contract

On the other chain, define your contract which can be called via `triggerExample` to send a message to the contract on Sapphire using the `postMessage` interface.

```solidity
import {Host, Result} from "@oasisprotocol/sapphire-contracts/contracts/OPL.sol";

contract HomeContract is Host {
    constructor(address otherEnd) Host(otherEnd) {
    }

    function triggerExample(uint256 a, bool b) external payable {
        postMessage("secretExample", abi.encode(a, b));
    }
}
```

After a few minutes the bridge will detect the message and the executor will invoke the `SapphireContract.on_example` method.

As noted in the [fees](#fees) section, an executor needs to relay your messages. Please refer to the Celer [Executor][celer-executor] section on how to get on the shared Message Executor or how to set up your own executor.

[celer-executor]: ../celer/README.md#executor

---

## Ping Example (OPL SDK)

This tutorial demonstrates how to send a cross-chain message using [Oasis OPL].

[Oasis OPL]: ./README.md

You'll learn how to:

- Deploy a Host contract
- Deploy an Enclave contract
- Send a cross-chain message

We recommend using [Remix] for an easy-to-follow experience. The only prerequisite is a set-up Metamask account. If you're new to Remix, follow our basic guide for using Remix [here][dapp-remix].

[dapp-remix]: ../../tools/remix.md

## Overview Ping

In this example, you'll deploy a `host` contract on *BSC Testnet* and an `enclave` contract on *Sapphire Testnet*. You'll then send a `ping` from the host contract to the enclave contract, facilitated by the OPL SDK. The enclave contract will receive the `ping` and emit an event with the received message.

## Contract Setup

1. Open [Remix] and create a new file called `Ping.sol`
2.
Paste the following Ping host contract into it: Ping.sol Contract ```solidity title="Ping.sol" showLineNumbers // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; import {Host, Result} from "@oasisprotocol/sapphire-contracts/contracts/OPL.sol"; contract Ping is Host { event MessageReceived(bytes message); constructor(address pong) Host(pong) { registerEndpoint("pongMessage", _pongMessage); } function startPing (bytes calldata _message) external payable { postMessage("ping", abi.encode(_message)); } function _pongMessage(bytes calldata _args) internal returns (Result) { (bytes memory message) = abi.decode((_args), (bytes)); emit MessageReceived(message); return Result.Success; } } ``` 3. Create a new file called `Pong.sol` 4. Paste the following Pong enclave contract into it: Pong.sol Contract ```solidity title="Pong.sol" showLineNumbers // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; import {Enclave, Result, autoswitch} from "@oasisprotocol/sapphire-contracts/contracts/OPL.sol"; contract Pong is Enclave { event MessageReceived(bytes message); constructor(uint nonce, bytes32 chain) Enclave(computeAddress(msg.sender, nonce), autoswitch(chain)) { registerEndpoint("ping", _pingMessage); } function _pingMessage(bytes calldata _args) internal returns (Result) { (bytes memory message) = abi.decode((_args), (bytes)); emit MessageReceived(message); return Result.Success; } function computeAddress(address _origin, uint _nonce) public pure returns (address) { if (_nonce == 0x00) { return address(uint160(uint256(keccak256(abi.encodePacked( bytes1(0xd6), bytes1(0x94), _origin, bytes1(0x80) ))))); } if (_nonce <= 0x7f) { return address(uint160(uint256(keccak256(abi.encodePacked( bytes1(0xd6), bytes1(0x94), _origin, bytes1(uint8(_nonce)) ))))); } if (_nonce <= 0xff) { return address(uint160(uint256(keccak256(abi.encodePacked( bytes1(0xd7), bytes1(0x94), _origin, bytes1(0x81), uint8(_nonce) ))))); } if (_nonce <= 0xffff) { return address(uint160(uint256(keccak256(abi.encodePacked( bytes1(0xd8), bytes1(0x94), _origin, bytes1(0x82), uint16(_nonce) ))))); } if (_nonce <= 0xffffff) { return address(uint160(uint256(keccak256(abi.encodePacked( bytes1(0xd9), bytes1(0x94), _origin, bytes1(0x83), uint24(_nonce) ))))); } return address(uint160(uint256(keccak256(abi.encodePacked( bytes1(0xda), bytes1(0x94), _origin, bytes1(0x84), uint32(_nonce) ))))); } } ``` ### Key points - `Host`: OPL wrapper for outside contract. - `Enclave`: OPL wrapper for the contract on Sapphire side. - `registerEndpoint`: Registers endpoints in an OPL managed map. - `postMessage`: Call registered endpoints. - `autoswitch`: Finds correct MessageBus address via chain name. ## Compiling the Contract For compatibility with Sapphire, compile the contract using compiler version **`0.8.24`** and evm version **`paris`** (under advanced configuration). ## Deploying the Contract Deploy the Ping contract on `BSC Testnet` and the Pong.sol contract on `Sapphire Testnet`. ### Deploying Pong.sol on Sapphire Testnet You'll deploy the contract on `Sapphire Testnet` first to avoid switching chains back and forth. 1. Obtain TEST tokens for `Sapphire Testnet` from the [Oasis faucet]. 2. Get next nonce of your account from `BSC Testnet` 1. If you didn't do anything on *BSC Testnet* yet this will `0`. 2. Else you need to get your last nonce, e.g. by checking your account address on [bscscan](https://testnet.bscscan.com/) and inspect the details of your latest transaction, and then add 1. 3. 
In MetaMask, switch to the `Sapphire Testnet` network and select `Injected Provider - MetaMask` as the environment in Remix.
4. Select the `Pong.sol` contract.
5. Fill in the deployment parameters:
   - **`nonce`**: `0` (or the next nonce as written above)
   - **`chain`**: `0x6273630000000000000000000000000000000000000000000000000000000000` (bytes encoded `"bsc"`)
6. Deploy the contract on `Sapphire Testnet`.

Copy the address of the deployed contract; you'll need it for the next step, as Remix removes the contract from the UI when you change chains.

[Oasis Faucet]: https://faucet.testnet.oasis.io/

### Deploying Ping.sol on BSC Testnet

1. Obtain BNB test tokens for `BSC Testnet` from the [BNB faucet] or their discord.
2. In MetaMask, switch to the `BSC Testnet` network and select `Injected Provider - MetaMask` as the environment in Remix.
3. Select the `Ping.sol` contract.
4. Fill in the contract address you have just deployed on `Sapphire Testnet`.
5. Deploy the contract on `BSC Testnet`.

[BNB faucet]: https://www.bnbchain.org/en/testnet-faucet

## Executing Ping

Now that you've deployed the contracts, you can send the ping message cross-chain. You'll need the following parameter for `startPing`:

- `_message`: The encoded message, e.g. "Hello from BSC" - `0x48656c6c6f2066726f6d20425343000000000000000000000000000000000000`.

Additionally, you'll have to pay a fee which you send as `value`. For sending the ping, 0.001 tBNB (1000000 gwei) will be enough. Finally, execute the function `startPing`.

For the `Sapphire Testnet` an executor is running to relay the messages every few minutes. If you deploy on mainnet, please refer to the [Executor chapter].

[Executor chapter]: ../celer/README.md#executor

## Checking execution

To see if you successfully sent a ping message cross-chain, you can watch for new transactions at the Celer [MessageBus address] or at your deployed contract on `Sapphire Testnet`.

[MessageBus address]: https://explorer.oasis.io/testnet/sapphire/address/0x9Bb46D5100d2Db4608112026951c9C965b233f4D

[Remix]: https://remix.ethereum.org/

---

## Router Protocol

Router Protocol offers two frameworks for cross-chain interactions:

- **Router CrossTalk**: Enables stateless and stateful cross-chain messaging
- **Router Nitro**: Facilitates native cross-chain asset transfers

For guidance on choosing the appropriate framework, refer to Router's [guide]. This documentation focuses on **Router CrossTalk**. If you're primarily interested in asset transfers, please consult the [Router Nitro documentation].

[Router Nitro documentation]: https://docs.routerprotocol.com/develop/category/asset-transfer-via-nitro
[guide]: https://docs.routerprotocol.com/overview/choosing-the-right-framework

## Router CrossTalk

Router CrossTalk is designed to enable cross-chain interactions, allowing developers to create decentralized applications (dApps) that operate across multiple blockchain networks. This framework supports both stateless and stateful operations, providing flexible and efficient communication between contracts on different chains.

### Architecture

[Image: Router Architecture]

*High-level architecture diagram for Router CrossTalk[^1]*

[^1]: The CrossTalk high-level architecture diagram is courtesy of [Router documentation][router-architecture].
[router-architecture]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/key-concepts/high-level-architecture

The **CrossTalk** infrastructure consists of three main components:

- **Gateway** contracts on source and destination chains
- **Orchestrators** on the Router chain
- **Relayers** that forward messages to the Router Gateway contracts

The process flow is as follows:

1. The dApp contract calls the `iSend` function on the source chain's Gateway contract.
2. Orchestrators monitor events emitted by the Gateway contract.
3. A Relayer picks up the transaction signed by the orchestrator and forwards the message to the destination chain's Router Gateway contract.
4. The Gateway contract on the destination chain calls the dApp contract's `iReceive` function.
5. For acknowledgment, the process is reversed, and the Relayer calls the `iAck` function on the dApp contract on the source chain.

### Fees

Fees in the cross-chain messaging process are paid by two parties:

- The dApp **user** pays when initiating the transaction on the source chain.
- The dApp **fee payer** pre-pays the Relayers for calling the Router Gateway contract.

To ensure the correct **fee payer** is used, the dApp's contract must register the fee payer address as metadata with the Router Gateway. Additionally, the **fee payer** needs to approve the contract on the Router chain, which can be done through the [Router Explorer]. For more info about [fee management], consult the Router documentation.

[Router Explorer]: https://testnet.routerscan.io/feePayer
[fee management]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/key-concepts/fee-management

### Examples

Example: PingPong

Explore our [PingPong example] to see Router CrossTalk in action.

[PingPong example]: ./pingpong-example.md

For more examples, refer to the [Router Protocol documentation]:

- [Cross-Chain NFT]
- [Cross-Chain Read Request]

and in the Router Protocol [CrossTalk sample repository].

[Router Protocol documentation]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk
[Cross-Chain NFT]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/evm-guides/your-first-crosschain-nft-contract
[Cross-Chain Read Request]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/evm-guides/cross-chain-read-requests
[CrossTalk sample repository]: https://github.com/router-protocol/new-crosstalk-sample/

---

## Approving the Fee Payer

According to Router Protocol's [fee management] system, cross-chain requests initiated by a dApp are paid for by the dApp's corresponding fee payer account on the Router Chain. This fee payer is registered by calling the `setDappMetadata` function on the gateway contract.

## Obtaining Test Tokens

To interact with the Router Protocol testnet, you'll need `ROUTE` test tokens. Follow these steps to obtain them from the Router Faucet:

1. Visit the [Router Faucet] website.
2. Connect your MetaMask wallet.
3. Add the Router Test Network to your MetaMask if prompted.
4. Enter your account address in the provided field.
5. Click the `Get Test Tokens` button.

[Image: Router Test Faucet]

## Approving Contracts in Router Explorer

After deploying your contracts, you need to approve the fee payer for each of them. Here's how to do it using the Router Explorer:

1. Navigate to the [Router Explorer].
2. Connect your wallet by clicking the "Connect Wallet" button.
3. Once connected, you'll see a list of pending approvals for your deployed contracts.
[Image: Router Approvals] 4. For each contract listed, click the `Approve` button. 5. Follow the prompts in your wallet to sign the approval message. If you don't see your deployed contracts in the list, it's possible you used an incorrect gateway address for the chain during deployment. Verify the current gateway addresses in the [Router Protocol documentation]. ## Troubleshooting If you encounter any issues during the approval process, consider the following: - Ensure you have sufficient ROUTE test tokens in your wallet. - Verify that you're connected to the correct network in MetaMask. - Double-check that the contracts were deployed with the correct gateway addresses. [fee management]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/key-concepts/fee-management [Router Faucet]: https://faucet.routerprotocol.com/ [Router Explorer]: https://testnet.routerscan.io/feePayer [Router Protocol documentation]: https://docs.routerprotocol.com/networks/supported-chains#for-testnet --- ## Router Interfaces Router Protocol provides the `evm-gateway-contracts` library to facilitate the development of cross-chain dApps. ## Installation ### Using Remix If you're using [Remix], you can import the contracts directly as shown in the examples below. [Remix]: https://remix.ethereum.org/ ### Using Hardhat For Hardhat projects, install the package via npm, yarn or pnpm: ```shell npm2yarn npm install @routerprotocol/evm-gateway-contracts ``` ## Gateway The Router Gateway is deployed on all chains supported by Router Protocol and serves as the central communication point between chains. ### IGateway ```solidity import "@routerprotocol/evm-gateway-contracts/contracts/IGateway.sol"; ``` To develop cross-chain contracts, you should generally: 1. Import the `IGateway.sol` interface into all cross-chain contracts 2. Create a variable to store the Gateway contract address 3. Initialize it with the corresponding Gateway address of the given chain This setup will be used later to call the `iSend` function. ### iSend() ```solidity function iSend( uint256 version, uint256 routeAmount, string calldata routeRecipient, string calldata destChainId, bytes calldata requestMetadata, bytes calldata requestPacket ) external payable returns (uint256); ``` `iSend` is the function you'll call on the Gateway of the source chain to initiate a cross-chain message. Every contract that wants to make a cross-chain call needs to call it. For a detailed description of each parameter, refer to the Router Protocol [iSend documentation]. [iSend documentation]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/evm-guides/iDapp-functions/iSend ### getRequestMetaData() ```solidity function getRequestMetadata( uint64 destGasLimit, uint64 destGasPrice, uint64 ackGasLimit, uint64 ackGasPrice, uint128 relayerFees, uint8 ackType, bool isReadCall, string memory asmAddress ) public pure returns (bytes memory) { bytes memory requestMetadata = abi.encodePacked( destGasLimit, destGasPrice, ackGasLimit, ackGasPrice, relayerFees, ackType, isReadCall, asmAddress ); return requestMetadata; } ``` The `getRequestMetadata` function helps create the `requestMetadata` bytes object required for the `iSend` function call. 
Here's an overview of the arguments: | Argument | Example Value | Description | | ------------ | ------------- | ------------------------------------------ | | destGasLimit | 300000 | Gas limit on destination chain | | destGasPrice | 100000000000 | Gas price on destination chain | | ackGasLimit | 300000 | Gas limit on source chain for ack | | ackGasPrice | 100000000000 | Gas price on source chain for ack | | relayerFees | 10000000000 | Relayer fees on Router chain | | ackType | 3 | Acknowledge type | | isReadCall | false | If the call is read-only | | asmAddress | "0x" | Address for the additional security module | Alternatively, you can use `ethers.js` to encode the metadata: ```js const metadata = ethers.utils.solidityPack( ['uint64', 'uint64', 'uint64', 'uint64', 'uint128', 'uint8', 'bool', 'string'], [destGasLimit, destGasPrice, ackGasLimit, ackGasPrice, relayerFees, ackType, isReadCall, asmAddress] ); ``` For more information about encoding and the request metadata, see the Router [metadata documentation]. [metadata documentation]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/evm-guides/iDapp-functions/iSend#5-requestmetadata ## IDapp ```solidity import "@routerprotocol/evm-gateway-contracts/contracts/IDapp.sol"; ``` The IDapp interface consists of two main functions: 1. `iReceive`: The entry point for the cross-chain message on the destination chain 2. `iAck`: The entry point on the source chain to receive the acknowledgment ### iReceive() ```solidity function iReceive( string memory requestSender, bytes memory packet, string memory srcChainId ) external returns (bytes memory) ``` `iReceive` is called by the Gateway on the destination chain. For more information about `iReceive`, see the Router [iReceive documentation]. [iReceive documentation]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/evm-guides/iDapp-functions/iReceive ### iAck() ```solidity function iAck( uint256 requestIdentifier, bool execFlag, bytes memory execData ) external ``` `iAck` is called by the Gateway on the source chain. For more information about `iAck`, see the Router [iAck documentation]. [iAck documentation]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/evm-guides/iDapp-functions/iAck --- ## PingPong Example This tutorial demonstrates how to send a cross-chain message using Router Protocol's [CrossTalk]. You'll learn how to: - Deploy Router-compatible contracts - Approve the feePayer for your contracts - Prepare metadata for cross-chain calls - Send cross-chain messages We recommend using [Remix] for an easy-to-follow experience. The only prerequisite is a set-up Metamask account. If you're new to Remix, follow our basic guide for using Remix [here][dapp-remix]. [dapp-remix]: ../../tools/remix.md ## Overview PingPong In this example, you'll deploy the same contract on two different chains. You'll then send a `ping` from chain A to chain B, facilitated by Router Protocol's [CrossTalk]. The contract on chain B will receive the `ping` and respond back to Router Protocol. Finally, Router Protocol will send an acknowledgment message back to the contract on chain A. [Image: PingPong Flow] [CrossTalk]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk ## Contract Setup 1. Open [Remix] and create a new file called `PingPong.sol` 2. 
Paste the following contract into it: PingPong.sol Contract ```solidity title="PingPong.sol" showLineNumbers //SPDX-License-Identifier: UNLICENSED pragma solidity >=0.8.0 <0.9.0; import "@routerprotocol/evm-gateway-contracts/contracts/IGateway.sol"; /// @title PingPong /// @author Yashika Goyal /// @notice This is a cross-chain ping pong smart contract to demonstrate how one can /// utilise Router CrossTalk for cross-chain transactions. contract PingPong { address public owner; uint64 public currentRequestId; // srcChainId + requestId => pingFromSource mapping(string => mapping(uint64 => string)) public pingFromSource; // requestId => ackMessage mapping(uint64 => string) public ackFromDestination; // instance of the Router's gateway contract IGateway public gatewayContract; // custom error so that we can emit a custom error message error CustomError(string message); // event we will emit while sending a ping to destination chain event PingFromSource( string indexed srcChainId, uint64 indexed requestId, string message ); event NewPing(uint64 indexed requestId); // events we will emit while handling acknowledgement event ExecutionStatus(uint256 indexed eventIdentifier, bool isSuccess); event AckFromDestination(uint64 indexed requestId, string ackMessage); constructor(address payable gatewayAddress, string memory feePayerAddress) { owner = msg.sender; gatewayContract = IGateway(gatewayAddress); gatewayContract.setDappMetadata(feePayerAddress); } /// @notice function to set the fee payer address on Router Chain. /// @param feePayerAddress address of the fee payer on Router Chain. function setDappMetadata(string memory feePayerAddress) external { require(msg.sender == owner, "only owner"); gatewayContract.setDappMetadata(feePayerAddress); } /// @notice function to set the Router Gateway Contract. /// @param gateway address of the gateway contract. function setGateway(address gateway) external { require(msg.sender == owner, "only owner"); gatewayContract = IGateway(gateway); } /// @notice function to generate a cross-chain request to ping a destination chain contract. /// @param destChainId chain ID of the destination chain in string. /// @param destinationContractAddress contract address of the contract that will handle this /// @param str string to be pinged to destination /// @param requestMetadata abi-encoded metadata according to source and destination chains function iPing( string calldata destChainId, string calldata destinationContractAddress, string calldata str, bytes calldata requestMetadata ) public payable { currentRequestId++; bytes memory packet = abi.encode(currentRequestId, str); bytes memory requestPacket = abi.encode(destinationContractAddress, packet); gatewayContract.iSend{ value: msg.value }( 1, 0, string(""), destChainId, requestMetadata, requestPacket ); emit NewPing(currentRequestId); } /// @notice function to get the request metadata to be used while initiating cross-chain request /// @return requestMetadata abi-encoded metadata according to source and destination chains function getRequestMetadata( uint64 destGasLimit, uint64 destGasPrice, uint64 ackGasLimit, uint64 ackGasPrice, uint128 relayerFees, uint8 ackType, bool isReadCall, string memory asmAddress ) public pure returns (bytes memory) { bytes memory requestMetadata = abi.encodePacked( destGasLimit, destGasPrice, ackGasLimit, ackGasPrice, relayerFees, ackType, isReadCall, asmAddress ); return requestMetadata; } /// @notice function to handle the cross-chain request received from some other chain. 
/// @param packet the payload sent by the source chain contract when the request was created. /// @param srcChainId chain ID of the source chain in string. function iReceive( string memory, //requestSender, bytes memory packet, string memory srcChainId ) external returns (uint64, string memory) { require(msg.sender == address(gatewayContract), "only gateway"); (uint64 requestId, string memory sampleStr) = abi.decode( packet, (uint64, string) ); if ( keccak256(abi.encodePacked(sampleStr)) == keccak256(abi.encodePacked("")) ) { revert CustomError("String should not be empty"); } pingFromSource[srcChainId][requestId] = sampleStr; emit PingFromSource(srcChainId, requestId, sampleStr); return (requestId, sampleStr); } /// @notice function to handle the acknowledgement received from the destination chain /// back on the source chain. /// @param requestIdentifier event nonce which is received when we create a cross-chain request /// We can use it to keep a mapping of which nonces have been executed and which did not. /// @param execFlag a boolean value suggesting whether the call was successfully /// executed on the destination chain. /// @param execData returning the data returned from the handleRequestFromSource /// function of the destination chain. function iAck( uint256 requestIdentifier, bool execFlag, bytes memory execData ) external { (uint64 requestId, string memory ackMessage) = abi.decode( execData, (uint64, string) ); ackFromDestination[requestId] = ackMessage; emit ExecutionStatus(requestIdentifier, execFlag); emit AckFromDestination(requestId, ackMessage); } } ``` ### Key Contract Functions - `iPing`: Initiates the cross-chain message by calling Router's `IGateway.iSend`. - `iReceive`: Serves as the entry point on the destination contract. - `iAck`: Handles the acknowledgment in a bidirectional cross-chain message on the source contract. ## Compiling the Contract For compatibility with Sapphire, compile the contract using compiler version **`0.8.24`** and evm version **`paris`** (under advanced configuration). ## Deploying the Contract Deploy the PingPong contract on two different chains: `Sapphire Testnet` and `Polygon Amoy`. ### Deploying on Sapphire Testnet 1. Obtain TEST tokens for `Sapphire Testnet` from the [Oasis faucet]. 2. In Metamask, switch to the `Sapphire Testnet` network and select `Injected Provider - MetaMask` as the environment in Remix 3. Fill in the deployment parameters: - **`gatewayAddress`**: `0xfbe6d1e711cc2bc241dfa682cbbff6d68bf62e67` (current Sapphire Testnet Gateway) - **`feePayerAddress`**: Your current account address (copy from MetaMask or Remix) 4. Deploy the contract on Sapphire Testnet Remix Example [Image: Deploy Sapphire] [Oasis Faucet]: https://faucet.testnet.oasis.io/ ### Deploying on Polygon Amoy 1. Obtain POL tokens for `Polygon Amoy` Testnet from the [Polygon faucet]. 2. Switch to the `Polygon Amoy` network in Metamask. 3. Fill in the deployment parameters: - **`gatewayAddress`**: `0x778a1f43459a05accd8b57007119f103c249f929` (current Polygon Amoy Gateway) - **`feePayerAddress`**: Your current account address (copy from MetaMask or Remix) 4. Deploy the contract on Polygon Amoy Remix Example [Image: Deploy Polygon Amoy] [Polygon Faucet]: https://faucet.polygon.technology/ ## Approving the Fee Payer After deploying the contracts, approve the **fee payer** on the Router Chain: 1. Obtain Router test tokens from the [Router faucet]. 2. Approve the contracts on the [Router Explorer][feepayer]. 
For detailed instructions on fee payer approval, see our [approval guide]. [Router faucet]: https://faucet.routerprotocol.com/ [feepayer]: https://testnet.routerscan.io/feePayer [approval guide]: ./approve.md ## Executing PingPong Now that you've deployed the contracts and approved the fee payer, you can play **PingPong**. This process involves two steps: 1. Obtaining the Request Metadata 2. Executing the iPing function ### Step 1: Obtaining Request Metadata Call the `getRequestMetadata()` function with the following parameters: | Argument | Example Value | Description | | ------------ | ------------- | ------------------------------------------ | | destGasLimit | 300000 | Gas limit on destination chain | | destGasPrice | 100000000000 | Gas price on destination chain | | ackGasLimit | 300000 | Gas limit on source chain for ack | | ackGasPrice | 100000000000 | Gas price on source chain for ack | | relayerFees | 10000000000 | Relayer fees on Router chain | | ackType | 3 | Acknowledgment type | | isReadCall | false | Whether the call is read-only | | asmAddress | "0x" | Address for the additional security module | Remix Example [Image: Router getRequestMetadata] You will need the bytes answer in the next step, so copy it! For more information about request metadata, see the [Router documentation][metadata]. [metadata]: https://docs.routerprotocol.com/develop/message-transfer-via-crosstalk/evm-guides/iDapp-functions/iSend#5-requestmetadata ### Step 2: Executing iPing() To initiate the cross-chain message, call `iPing` with these parameters: | Argument | Value | Description | | -------------------------- | ----------------------- | ------------------------------------------------- | | destChainId | 23295 | Destination Chain ID (e.g. Sapphire) | | destinationContractAddress | 0x<your-contract> | Contract address on the destination chain | | str | "Hello" | Message to include in the ping | | requestMetadata | <bytes string> | Bytes response from the getRequestMetadata call | After sending the transaction, you can monitor its status on the [Router Explorer]. Remix Example [Image: Router iPing] This completes the PingPong example, demonstrating cross-chain messaging using Router Protocol's CrossTalk framework. [Router Explorer]: https://testnet.routerscan.io/crosschain [Remix]: https://remix.ethereum.org/ --- ## Tools & Services Oasis integrates with a number of services and provides tooling support for developers using [Remix] (*unencrypted transactions only*), [Sourcify], [Docker images][localnet], [Band], and more. Please reach out to us on [Discord][discord] if you are using a tool that has problems integrating with Oasis. [Remix]: ./remix.md [Sourcify]: ./verification.md [localnet]: ./localnet.mdx [Band]: ./band.md [discord]: https://oasis.io/discord ## See also --- ## ABI Playground The [ABI Playground][abi-playground] provides an interactive environment for working with verified smart contracts on Oasis networks. Similar to Etherscan's read/write contract functionality, you can execute functions on verified contracts deployed to the Sapphire and Emerald networks. If your contract isn't verified yet, please see our [verification] chapter. ## Access Verified Contracts You can access verified contracts in two ways: ### Method 1: via Explorer 1. Navigate to the [Explorer]. 2. Search for a verified contract using its address, e.g. Wrapped ROSE: `0x8Bc2B030b299964eEfb5e1e0b36991352E56D2D3`. 3. Click `Interact in ABI Playground`. [Image: Explorer] 4.
The ABI Playground will open with the Wrapped ROSE contract loaded. ### Method 2: Direct ABI Playground Access 1. Visit the [ABI Playground][abi-playground]. 2. Enter a verified contract address, e.g., Wrapped ROSE: `0x8Bc2B030b299964eEfb5e1e0b36991352E56D2D3`. 3. Click `Load Contract`. [Image: ABI Playground Load] 4. The ABI Playground will open with the Wrapped ROSE contract loaded. ## Working with Localnet Contracts The ABI Playground also supports interacting with contracts deployed on a [localnet] for testing purposes. 1. Visit the [ABI Playground][abi-playground]. 2. Select `Oasis Sapphire Localnet` from the network dropdown. [Image: ABI Playground localnet] 3. Enter the address of the contract you deployed on localnet. 4. Paste the ABI JSON into the provided text field. 5. Click `Import ABI` to load the interface. [Image: ABI Playground import] Finding Your Contract's ABI When using development frameworks: - Hardhat: Look in the `artifacts` directory - Foundry: Check the `out` directory If you encounter format errors, validate your ABI JSON using an online formatter before importing. ## Troubleshooting ### Contract address not found - **Cause**: The contract might not be verified on Sourcify or the address is incorrect. - **Solution**: Verify the contract on Sourcify or double-check the address. ### Invalid ABI format - **Cause**: The ABI JSON might not be following standard formatting. - **Solution**: Use an online JSON formatter to validate and reformat the ABI before importing it. Should you have any other problems or questions, do not hesitate to share them with us on the [#dev-central Discord channel][discord]. [abi-playground]: https://abi-playground.oasis.io/ [Explorer]: https://explorer.oasis.io/ [localnet]: ./localnet.mdx [verification]: ./verification.md [discord]: https://oasis.io/discord --- ## Band Oracle This guide will explain how to query the Band Protocol reference data smart contract from another Solidity smart contract on Oasis Sapphire, a confidential EVM-compatible [Paratime][paratime]. You can follow the same steps to integrate with Band on Oasis [Emerald][emerald], a non-confidential EVM. See Band [documentation][band-supported-blockchains] for the full list of deployed contract addresses. [paratime]: ../../general/oasis-network/faq#how-is-a-paratime-different-from-a-parachain [emerald]: ./other-paratimes/emerald/README.mdx [band-supported-blockchains]: https://docs.bandchain.org/develop/supported-blockchains ### What is the Band Protocol? [Band Protocol](https://bandprotocol.com) is a cross-chain data oracle platform that aggregates and connects real-world data and APIs to smart contracts. You can read more about the details of the protocol [here](https://docs.bandchain.org). ### Deploy Oracle 1. Follow [this link][demooracle-remix] to Remix. The link contains an encoded example `DemoOracle.sol` contract. 2. Compile the contract with compiler version `0.6.11`. 3. Switch to the Deploy tab of Remix. 1. Select "Injected Web3" in the Environment dropdown in the top left to connect Metamask. 2. Make sure that Metamask is connected to the Sapphire Paratime (Testnet/Mainnet) network. You can read about adding a network to Metamask [here](../../general/manage-tokens/#metamask). [Image: Setting up the environment in Remix] 4. Enter the Testnet Band reference data aggregator contract address (`0x61704EFB8b8120c03C210cAC5f5193BF8c80852a`) into the `DemoOracle` constructor and deploy the contract.
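For reference, the `DemoOracle.sol` contract you just deployed is, in essence, the following sketch. It is reconstructed from the behavior described in this guide (the Band `IStdReference` interface plus the `getPrice`, `getMultiPrices`, and `savePrice` functions); consult the [Remix link][demooracle-remix] for the exact source:

```solidity
pragma solidity 0.6.11;
pragma experimental ABIEncoderV2;

// Band's standard reference interface: every query returns the rate
// (scaled by 1e18) together with the last update times for base and quote.
interface IStdReference {
    struct ReferenceData {
        uint256 rate;             // base/quote exchange rate, multiplied by 1e18.
        uint256 lastUpdatedBase;  // UNIX epoch of the last base price update.
        uint256 lastUpdatedQuote; // UNIX epoch of the last quote price update.
    }

    function getReferenceData(string memory _base, string memory _quote)
        external
        view
        returns (ReferenceData memory);

    function getReferenceDataBulk(string[] memory _bases, string[] memory _quotes)
        external
        view
        returns (ReferenceData[] memory);
}

contract DemoOracle {
    IStdReference ref;    // Band reference data aggregator contract.
    uint256 public price; // Last rate stored by savePrice().

    constructor(IStdReference _ref) public {
        ref = _ref;
    }

    // Returns the latest WBTC/USD rate, scaled by 1e18.
    function getPrice() external view returns (uint256) {
        IStdReference.ReferenceData memory data = ref.getReferenceData("WBTC", "USD");
        return data.rate;
    }

    // Returns the latest WBTC/USD and ETH/USD rates in a single call.
    function getMultiPrices() external view returns (uint256[] memory) {
        string[] memory baseSymbols = new string[](2);
        baseSymbols[0] = "WBTC";
        baseSymbols[1] = "ETH";

        string[] memory quoteSymbols = new string[](2);
        quoteSymbols[0] = "USD";
        quoteSymbols[1] = "USD";

        IStdReference.ReferenceData[] memory data =
            ref.getReferenceDataBulk(baseSymbols, quoteSymbols);

        uint256[] memory prices = new uint256[](2);
        prices[0] = data[0].rate;
        prices[1] = data[1].rate;
        return prices;
    }

    // Stores the given base/quote rate; it goes stale unless called again.
    function savePrice(string memory base, string memory quote) external {
        IStdReference.ReferenceData memory data = ref.getReferenceData(base, quote);
        price = data.rate;
    }
}
```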
You can access the reference data aggregator contract on the Sapphire Mainnet at `0xDA7a001b254CD22e46d3eAB04d937489c93174C3`. [Image: Deploying DemoOracle] An interface to interact with the contract will appear in the bottom left corner of Remix. ### Get Rates Clicking the `getPrice` button will return the current price of WBTC in USD. This function calls `getReferenceData(string memory _base, string memory _quote)` on the Band reference data contract, passing "WBTC" and "USD", indicating WBTC as the base and USD as the quote. The rate returned is base/quote multiplied by 1e18. [Image: Get Rates] Note that the `DemoOracle` contract only returns the latest rate, but the reference contract also returns the times when the base and quote prices were last updated. The price is scaled by 1e18. The returned value at the time of testing is `39567000000000000000000`. Multiplying by 1e-18 gives the current USD price given by the reference contract, 39567.00 WBTC/USD. Clicking the `getMultiPrices` button returns multiple quotes in the same call, WBTC/USD and ETH/USD in this case. This function calls `getReferenceDataBulk(string[] memory _bases, string[] memory _quotes)` on the Band reference data contract, passing "WBTC" and "ETH" as the bases and "USD" as the quote for both. This will return the current WBTC and ETH prices in USD, as an array of integers. The call also returns just the exchange rates (multiplied by 1e18), but can be modified to return the last updated times for the bases and quotes. The `savePrice` function will save any base/quote rate that is passed to it in the storage variable named `price`. This storage data will only be updated when the `savePrice` function is called, so the saved `price` value will go stale unless this function is called repeatedly. [Image: Save Price] ### Mainnet Reference Data Contract You can access the reference data aggregator contract on the Sapphire Mainnet at [0xDA7a001b254CD22e46d3eAB04d937489c93174C3](https://explorer.oasis.io/mainnet/sapphire/address/0xDA7a001b254CD22e46d3eAB04d937489c93174C3). ### Available Reference Data You can view the available reference data on the [Band Standard Dataset site here](https://data.bandprotocol.com/). ### Example of DemoOracle.sol contract [DemoOracle.sol contract example in Remix][demooracle-remix] ### Bandchain.js Band also has a JavaScript library that makes it easy to interact with BandChain directly from JavaScript or TypeScript applications. The library provides classes and methods that make it convenient to send transactions, query data, perform OBI encoding, and manage wallets. You can read more about it [here](https://docs.bandchain.org/develop/developer-tools/bandchain.js/getting-started).
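As noted earlier, the reference contract also reports when each pair was last updated. If you need those timestamps on-chain, a variant of the demo contract could expose them as follows (an illustrative sketch only; the contract and function names below are not part of the original example):

```solidity
pragma solidity 0.6.11;
pragma experimental ABIEncoderV2;

// Minimal interface: only the single-pair query is needed for this sketch.
interface IStdReference {
    struct ReferenceData {
        uint256 rate;
        uint256 lastUpdatedBase;
        uint256 lastUpdatedQuote;
    }

    function getReferenceData(string memory _base, string memory _quote)
        external
        view
        returns (ReferenceData memory);
}

// Illustrative variant of DemoOracle that also exposes the update timestamps.
contract DemoOracleWithTimestamps {
    IStdReference ref;

    constructor(IStdReference _ref) public {
        ref = _ref;
    }

    // Returns the WBTC/USD rate (scaled by 1e18) together with the UNIX
    // timestamps of the last base and quote updates.
    function getPriceWithTimestamps()
        external
        view
        returns (uint256 rate, uint256 lastUpdatedBase, uint256 lastUpdatedQuote)
    {
        IStdReference.ReferenceData memory data = ref.getReferenceData("WBTC", "USD");
        return (data.rate, data.lastUpdatedBase, data.lastUpdatedQuote);
    }
}
```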
[demooracle-remix]: https://remix.ethereum.org/?#code=cHJhZ21hIHNvbGlkaXR5IDAuNi4xMTsKcHJhZ21hIGV4cGVyaW1lbnRhbCBBQklFbmNvZGVyVjI7CgppbnRlcmZhY2UgSVN0ZFJlZmVyZW5jZSB7CiAgICAvLy8gQSBzdHJ1Y3R1cmUgcmV0dXJuZWQgd2hlbmV2ZXIgc29tZW9uZSByZXF1ZXN0cyBmb3Igc3RhbmRhcmQgcmVmZXJlbmNlIGRhdGEuCiAgICBzdHJ1Y3QgUmVmZXJlbmNlRGF0YSB7CiAgICAgICAgdWludDI1NiByYXRlOyAvLyBiYXNlL3F1b3RlIGV4Y2hhbmdlIHJhdGUsIG11bHRpcGxpZWQgYnkgMWUxOC4KICAgICAgICB1aW50MjU2IGxhc3RVcGRhdGVkQmFzZTsgLy8gVU5JWCBlcG9jaCBvZiB0aGUgbGFzdCB0aW1lIHdoZW4gYmFzZSBwcmljZSBnZXRzIHVwZGF0ZWQuCiAgICAgICAgdWludDI1NiBsYXN0VXBkYXRlZFF1b3RlOyAvLyBVTklYIGVwb2NoIG9mIHRoZSBsYXN0IHRpbWUgd2hlbiBxdW90ZSBwcmljZSBnZXRzIHVwZGF0ZWQuCiAgICB9CgogICAgLy8vIFJldHVybnMgdGhlIHByaWNlIGRhdGEgZm9yIHRoZSBnaXZlbiBiYXNlL3F1b3RlIHBhaXIuIFJldmVydCBpZiBub3QgYXZhaWxhYmxlLgogICAgZnVuY3Rpb24gZ2V0UmVmZXJlbmNlRGF0YShzdHJpbmcgbWVtb3J5IF9iYXNlLCBzdHJpbmcgbWVtb3J5IF9xdW90ZSkKICAgICAgICBleHRlcm5hbAogICAgICAgIHZpZXcKICAgICAgICByZXR1cm5zIChSZWZlcmVuY2VEYXRhIG1lbW9yeSk7CgogICAgLy8vIFNpbWlsYXIgdG8gZ2V0UmVmZXJlbmNlRGF0YSwgYnV0IHdpdGggbXVsdGlwbGUgYmFzZS9xdW90ZSBwYWlycyBhdCBvbmNlLgogICAgZnVuY3Rpb24gZ2V0UmVmZXJlbmNlRGF0YUJ1bGsoc3RyaW5nW10gbWVtb3J5IF9iYXNlcywgc3RyaW5nW10gbWVtb3J5IF9xdW90ZXMpCiAgICAgICAgZXh0ZXJuYWwKICAgICAgICB2aWV3CiAgICAgICAgcmV0dXJucyAoUmVmZXJlbmNlRGF0YVtdIG1lbW9yeSk7Cn0KCmNvbnRyYWN0IERlbW9PcmFjbGUgewogICAgSVN0ZFJlZmVyZW5jZSByZWY7CgogICAgdWludDI1NiBwdWJsaWMgcHJpY2U7CgogICAgY29uc3RydWN0b3IoSVN0ZFJlZmVyZW5jZSBfcmVmKSBwdWJsaWMgewogICAgICAgIHJlZiA9IF9yZWY7CiAgICB9CgogICAgZnVuY3Rpb24gZ2V0UHJpY2UoKSBleHRlcm5hbCB2aWV3IHJldHVybnMgKHVpbnQyNTYpewogICAgICAgIElTdGRSZWZlcmVuY2UuUmVmZXJlbmNlRGF0YSBtZW1vcnkgZGF0YSA9IHJlZi5nZXRSZWZlcmVuY2VEYXRhKCJXQlRDIiwiVVNEIik7CiAgICAgICAgcmV0dXJuIGRhdGEucmF0ZTsKICAgIH0KCiAgICBmdW5jdGlvbiBnZXRNdWx0aVByaWNlcygpIGV4dGVybmFsIHZpZXcgcmV0dXJucyAodWludDI1NltdIG1lbW9yeSl7CiAgICAgICAgc3RyaW5nW10gbWVtb3J5IGJhc2VTeW1ib2xzID0gbmV3IHN0cmluZ1tdKDIpOwogICAgICAgIGJhc2VTeW1ib2xzWzBdID0gIldCVEMiOwogICAgICAgIGJhc2VTeW1ib2xzWzFdID0gIkVUSCI7CgogICAgICAgIHN0cmluZ1tdIG1lbW9yeSBxdW90ZVN5bWJvbHMgPSBuZXcgc3RyaW5nW10oMik7CiAgICAgICAgcXVvdGVTeW1ib2xzWzBdID0gIlVTRCI7CiAgICAgICAgcXVvdGVTeW1ib2xzWzFdID0gIlVTRCI7CiAgICAgICAgSVN0ZFJlZmVyZW5jZS5SZWZlcmVuY2VEYXRhW10gbWVtb3J5IGRhdGEgPSByZWYuZ2V0UmVmZXJlbmNlRGF0YUJ1bGsoYmFzZVN5bWJvbHMscXVvdGVTeW1ib2xzKTsKCiAgICAgICAgdWludDI1NltdIG1lbW9yeSBwcmljZXMgPSBuZXcgdWludDI1NltdKDIpOwogICAgICAgIHByaWNlc1swXSA9IGRhdGFbMF0ucmF0ZTsKICAgICAgICBwcmljZXNbMV0gPSBkYXRhWzFdLnJhdGU7CgogICAgICAgIHJldHVybiBwcmljZXM7CiAgICB9CgogICAgZnVuY3Rpb24gc2F2ZVByaWNlKHN0cmluZyBtZW1vcnkgYmFzZSwgc3RyaW5nIG1lbW9yeSBxdW90ZSkgZXh0ZXJuYWwgewogICAgICAgIElTdGRSZWZlcmVuY2UuUmVmZXJlbmNlRGF0YSBtZW1vcnkgZGF0YSA9IHJlZi5nZXRSZWZlcmVuY2VEYXRhKGJhc2UscXVvdGUpOwogICAgICAgIHByaWNlID0gZGF0YS5yYXRlOwogICAgfQp9Cg== --- ## Build ParaTime This chapter will teach you how to build your own ParaTime with [Oasis Runtime SDK]. [Oasis Runtime SDK]: https://github.com/oasisprotocol/oasis-sdk/tree/main/runtime-sdk --- ## Minimal Runtime This chapter will show you how to quickly create, build and test a minimal runtime that allows transfers between accounts by using the `accounts` module provided by the Runtime SDK. ## Repository Structure and Dependencies First we create the basic directory structure for the minimal runtime using Rust's [`cargo`]: ```bash cargo init minimal-runtime ``` This will create the `minimal-runtime` directory and populate it with some boilerplate needed to describe a Rust application. It will also set up the directory for version control using Git. 
The rest of the guide assumes that you are executing commands from within this directory. Since the Runtime SDK requires a nightly version of the Rust toolchain, you need to specify a version to use by creating a special file called `rust-toolchain.toml` containing the following information: ```toml title="rust-toolchain.toml" [toolchain] channel = "nightly-2025-05-09" components = ["rustfmt", "clippy"] targets = ["x86_64-fortanix-unknown-sgx", "wasm32-unknown-unknown"] profile = "minimal" ``` Additionally, due to the requirements of some upstream dependencies, you need to configure Cargo to always build with specific target CPU platform features (namely AES-NI and SSE3) by creating a `.cargo/config.toml` file with the following content: ```toml title=".cargo/config.toml" [build] rustflags = ["-C", "target-feature=+aes,+ssse3"] rustdocflags = ["-C", "target-feature=+aes,+ssse3"] [test] rustflags = ["-C", "target-feature=+aes,+ssse3"] rustdocflags = ["-C", "target-feature=+aes,+ssse3"] ``` After you complete this guide, the minimal runtime directory structure will look as follows: ``` minimal-runtime ├── .cargo │ └── config.toml # Cargo configuration. ├── Cargo.lock # Rust dependency tree checksums. ├── Cargo.toml # Rust crate definition. ├── rust-toolchain.toml # Rust toolchain version configuration. ├── src │ ├── lib.rs # The runtime definition. │ └── main.rs # Some boilerplate for building the runtime. └── test ├── go.mod # Go module definition. ├── go.sum # Go dependency tree checksums. └── test.go # Test client implementation. ``` [`cargo`]: https://doc.rust-lang.org/cargo ## Runtime Definition First, you need to declare `oasis-runtime-sdk` as a dependency in order to be able to use its features. To do this, edit the `[dependencies]` section in your `Cargo.toml` to look like the following: ```toml title="Cargo.toml" [package] name = "minimal-runtime" version = "0.1.0" edition = "2021" [dependencies] oasis-runtime-sdk = { path = "../../../runtime-sdk" } ``` We are using the SDK sources directly instead of a package released on crates.io. After you have declared the dependency on the Runtime SDK, the next thing is to define the minimal runtime. To do this, create `src/lib.rs` with the following content: ```rust title="src/lib.rs" //! Minimal runtime. use std::collections::BTreeMap; use oasis_runtime_sdk::{self as sdk, modules, types::token::Denomination, Version}; /// Configuration of the various modules. pub struct Config; // The base runtime type. // // Note that everything is statically defined, so the runtime has no state. pub struct Runtime; impl modules::core::Config for Config {} impl sdk::Runtime for Runtime { // Use the crate version from Cargo.toml as the runtime version. const VERSION: Version = sdk::version_from_cargo!(); // Define the module that provides the core API. type Core = modules::core::Module<Config>; // Define the module that provides the accounts API. type Accounts = modules::accounts::Module; // Define the modules that the runtime will be composed of. Here we just use // the core and accounts modules from the SDK. Later on we will go into // detail on how to create your own modules. type Modules = (modules::core::Module<Config>, modules::accounts::Module); // Define the genesis (initial) state for all of the specified modules. This // state is used when the runtime is first initialized. // // The return value is a tuple of states in the same order as the modules // are defined above. fn genesis_state() -> <Self::Modules as sdk::module::MigrationHandler>::Genesis { ( // Core module.
modules::core::Genesis { parameters: modules::core::Parameters { max_batch_gas: 10_000, max_tx_signers: 8, max_tx_size: 10_000, max_multisig_signers: 8, min_gas_price: BTreeMap::from([(Denomination::NATIVE, 0)]), ..Default::default() }, }, // Accounts module. modules::accounts::Genesis { parameters: modules::accounts::Parameters { gas_costs: modules::accounts::GasCosts { tx_transfer: 100 }, ..Default::default() }, balances: BTreeMap::from([ ( sdk::testing::keys::alice::address(), BTreeMap::from([(Denomination::NATIVE, 1_000_000_000)]), ), ( sdk::testing::keys::bob::address(), BTreeMap::from([(Denomination::NATIVE, 2_000_000_000)]), ), ]), total_supplies: BTreeMap::from([(Denomination::NATIVE, 3_000_000_000)]), ..Default::default() }, ) } } ``` This defines the behavior (state transition function) and the initial state of the runtime. We are populating the state with some initial accounts so that we will be able to test things later. The accounts use test keys provided by the SDK. While the test keys are nice for testing they __should never be used in production__ versions of the runtimes as the private keys are generated from publicly known seeds! In order to be able to build a runtime binary that can be loaded by an Oasis Node, we need to add some boilerplate into `src/main.rs` as follows: ```rust title="src/main.rs" use oasis_runtime_sdk::Runtime; fn main() { minimal_runtime::Runtime::start(); } ``` ## Building and Running In order to build the runtime you can use the regular Cargo build process by running: ```bash cargo build ``` This will generate a binary under `target/debug/minimal-runtime` which will contain the runtime. For simplicity, we are building a non-confidential runtime which results in a regular ELF binary. In order to build a runtime that requires the use of a TEE like Intel SGX you need to perform some additional steps which are described in later sections of the guide. You can also try to run your runtime using: ```bash cargo run ``` However, this will result in the startup process failing similar to the following: ``` Finished dev [unoptimized + debuginfo] target(s) in 0.08s Running `target/debug/minimal-runtime` {"msg":"Runtime is starting","level":"INFO","ts":"2021-06-09T10:35:10.913154095+02:00","module":"runtime"} {"msg":"Establishing connection with the worker host","level":"INFO","ts":"2021-06-09T10:35:10.913654559+02:00","module":"runtime"} {"msg":"Failed to connect with the worker host","level":"ERRO","ts":"2021-06-09T10:35:10.913723541+02:00","module":"runtime","err":"Invalid argument (os error 22)"} ``` The reason is that the built runtime binary is designed to be run by Oasis Node inside a specific sandbox environment. We will see how to deploy the runtime in a local test environment in the next section. ## Deploying Locally In order to deploy the newly developed runtime in a local development network, you can use the `oasis-net-runner` provided in Oasis Core. This will set up a small network of local nodes that will run the runtime. 
```bash rm -rf /tmp/minimal-runtime-test; mkdir -p /tmp/minimal-runtime-test ${OASIS_CORE_PATH}/oasis-net-runner \ --fixture.default.node.binary ${OASIS_CORE_PATH}/oasis-node \ --fixture.default.runtime.binary target/debug/minimal-runtime \ --fixture.default.runtime.loader ${OASIS_CORE_PATH}/oasis-core-runtime-loader \ --fixture.default.runtime.provisioner unconfined \ --fixture.default.keymanager.binary '' \ --basedir /tmp/minimal-runtime-test \ --basedir.no_temp_dir ``` After successful startup this should result in the following message being displayed: ``` level=info module=net-runner caller=root.go:152 ts=2021-06-14T08:42:47.219513806Z msg="client node socket available" path=/tmp/minimal-runtime-test/net-runner/network/client-0/internal.sock ``` The local network runner will take control of the current terminal until you terminate it via Ctrl+C. For the rest of the guide keep the local network running and use a separate terminal to run the client. ## Testing From Oasis CLI After you have the runtime running in your local network, the next step is to test that it actually works. First, let's add a new `localhost` network to the Oasis CLI and provide the path to the local socket file reported above: ```bash oasis network add-local localhost unix:/tmp/minimal-runtime-test/net-runner/network/client-0/internal.sock ? Description: localhost ? Denomination symbol: TEST ? Denomination decimal places: 9 ``` Now, let's see, if the local network was correctly initialized and the runtime is ready: ```bash oasis network status --network localhost ``` If everything is working correctly, you should see the `"status": "ready"` under the runtime's `"committee"` field after a while and an increasing `"latest_round"` value: ``` "committee": { "status": "ready", "active_version": { "minor": 1 }, "latest_round": 19, "latest_height": 302, "executor_roles": null, ``` When you restart `oasis-net-runner`, a new [chain context] will be generated and you will have to remove the `localhost` network and add it again to Oasis CLI. Now, let's add `minimal` runtime to the wallet. By default, `oasis-net-runner` assigns ID `8000000000000000000000000000000000000000000000000000000000000000` to the first provided runtime. ```shell oasis paratime add localhost minimal 8000000000000000000000000000000000000000000000000000000000000000 ``` ``` ? Description: minimal ? Denomination symbol: TEST ? Denomination decimal places: 9 ``` If the Oasis CLI was configured correctly, you should see the balance of Alice's account in the runtime. Oasis CLI comes with hidden accounts for Alice, Bob and other test users (check the [oasis-sdk testing source] for a complete list). You can access the accounts by prepending `test:` literal in front of the test user's name, for example `test:alice`. ```shell oasis account show test:alice --network localhost ``` ``` Address: oasis1qrec770vrek0a9a5lcrv0zvt22504k68svq7kzve Nonce: 0 === CONSENSUS LAYER (localhost) === Total: 0.0 TEST Available: 0.0 TEST === minimal PARATIME === Balances for all denominations: 1.0 TEST ``` Sending some TEST in your runtime should also work. Let's send 0.1 TEST to Bob's address. ```shell oasis account transfer 0.1 test:bob --network localhost --account test:alice ``` ``` Unlock your account. ? 
Passphrase: You are about to sign the following transaction: { "v": 1, "call": { "method": "accounts.Transfer", "body": "omJ0b1UAyND0Wds45cwxynfmbSxEVty+tQJmYW1vdW50gkQF9eEAQA==" }, "ai": { "si": [ { "address_spec": { "signature": { "ed25519": "NcPzNW3YU2T+ugNUtUWtoQnRvbOL9dYSaBfbjHLP1pE=" } }, "nonce": 0 } ], "fee": { "amount": { "Amount": "0", "Denomination": "" }, "gas": 100 } } } Account: test:alice Network: localhost (localhost) Paratime: minimal (minimal) ? Sign this transaction? Yes (In case you are using a hardware-based signer you may need to confirm on device.) Broadcasting transaction... Transaction included in block successfully. Round: 14 Transaction hash: 03a73bd08fb23472673ea45938b0871edd9ecd2cd02b3061d49c0906a772348a Execution successful. ``` [chain context]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/crypto.md#chain-domain-separation [oasis-sdk testing source]: https://github.com/oasisprotocol/oasis-sdk/blob/main/client-sdk/go/testing/testing.go ## Testing From a Client While the Oasis CLI is useful to quickly get your hands dirty, a more convenient way for writing end-to-end tests for your runtime once it grows is to create a Go client. Let's see how to use Go bindings for Oasis Runtime SDK in practice to submit some transactions and perform queries. First, create a `tests` directory and move into it, creating a Go module: ```shell go mod init example.com/oasisprotocol/minimal-runtime-client go mod tidy ``` Then create a `test.go` file with the following content: ```go title="test.go" // Package main provides a test for token transfers. package main import ( "context" "fmt" "os" "time" "google.golang.org/grpc" "google.golang.org/grpc/credentials/insecure" "github.com/oasisprotocol/oasis-core/go/common" cmnGrpc "github.com/oasisprotocol/oasis-core/go/common/grpc" "github.com/oasisprotocol/oasis-core/go/common/logging" "github.com/oasisprotocol/oasis-core/go/common/quantity" "github.com/oasisprotocol/oasis-sdk/client-sdk/go/client" "github.com/oasisprotocol/oasis-sdk/client-sdk/go/modules/accounts" "github.com/oasisprotocol/oasis-sdk/client-sdk/go/testing" "github.com/oasisprotocol/oasis-sdk/client-sdk/go/types" ) // In reality these would come from command-line arguments, the environment // or a configuration file. const ( // This is the default runtime ID as used in oasis-net-runner. It can // be changed by using its --fixture.default.runtime.id argument. runtimeIDHex = "8000000000000000000000000000000000000000000000000000000000000000" // This is the default client node address as set in oasis-net-runner. nodeAddress = "unix:/tmp/minimal-runtime-test/net-runner/network/client-0/internal.sock" ) // The global logger. var logger = logging.GetLogger("minimal-runtime-client") // Client contains the client helpers for communicating with the runtime. This is a simple wrapper // used for convenience. type Client struct { client.RuntimeClient // Accounts are the accounts module helpers. Accounts accounts.V1 } // showBalances is a simple helper for displaying account balances. func showBalances(ctx context.Context, rc *Client, address types.Address) error { // Query the runtime, specifically the accounts module, for the given address' balances. 
rsp, err := rc.Accounts.Balances(ctx, client.RoundLatest, address) if err != nil { return fmt.Errorf("failed to fetch account balances: %w", err) } fmt.Printf("=== Balances for %s ===\n", address) for denom, balance := range rsp.Balances { fmt.Printf("%s: %s\n", denom, balance) } fmt.Printf("\n") return nil } func tokenTransfer() error { // Initialize logging. if err := logging.Initialize(os.Stdout, logging.FmtLogfmt, logging.LevelDebug, nil); err != nil { return fmt.Errorf("unable to initialize logging: %w", err) } // Decode hex runtime ID into something we can use. var runtimeID common.Namespace if err := runtimeID.UnmarshalHex(runtimeIDHex); err != nil { return fmt.Errorf("malformed runtime ID: %w", err) } // Establish a gRPC connection with the client node. logger.Info("connecting to local node") conn, err := cmnGrpc.Dial(nodeAddress, grpc.WithTransportCredentials(insecure.NewCredentials())) if err != nil { return fmt.Errorf("failed to establish connection to %s: %w", nodeAddress, err) } defer func() { _ = conn.Close() }() // Create the runtime client with account module query helpers. c := client.New(conn, runtimeID) rc := &Client{ RuntimeClient: c, Accounts: accounts.NewV1(c), } ctx, cancelFn := context.WithTimeout(context.Background(), 30*time.Second) defer cancelFn() // Show initial balances for Alice's and Bob's accounts. logger.Info("dumping initial balances") if err = showBalances(ctx, rc, testing.Alice.Address); err != nil { return err } if err = showBalances(ctx, rc, testing.Bob.Address); err != nil { return err } // Get current nonce for Alice's account. nonce, err := rc.Accounts.Nonce(ctx, client.RoundLatest, testing.Alice.Address) if err != nil { return fmt.Errorf("failed to fetch account nonce: %w", err) } // Perform a transfer from Alice to Bob. logger.Info("performing transfer", "nonce", nonce) // Create a transfer transaction with Bob's address as the destination and 10 native base units // as the amount. tb := rc.Accounts.Transfer( testing.Bob.Address, types.NewBaseUnits(*quantity.NewFromUint64(10), types.NativeDenomination), ). // Configure gas as set in genesis parameters. We could also estimate it instead. SetFeeGas(100). // Append transaction authentication information using a single signature variant. AppendAuthSignature(testing.Alice.SigSpec, nonce) // Sign the transaction using the signer. Before a transaction can be submitted it must be // signed by all configured signers. This will automatically fetch the corresponding chain // domain separation context for the runtime. if err = tb.AppendSign(ctx, testing.Alice.Signer); err != nil { return fmt.Errorf("failed to sign transfer transaction: %w", err) } // Submit the transaction and wait for it to be included and a runtime block. if err = tb.SubmitTx(ctx, nil); err != nil { return fmt.Errorf("failed to submit transfer transaction: %w", err) } // Show final balances for Alice's and Bob's accounts. logger.Info("dumping final balances") if err = showBalances(ctx, rc, testing.Alice.Address); err != nil { return err } return showBalances(ctx, rc, testing.Bob.Address) } func main() { if err := tokenTransfer(); err != nil { panic(err) } } ``` Fetch the dependencies: ```shell go get ``` And build it: ```shell go build ``` The example client will connect to one of the nodes in the network (the _client_ node), query the runtime for initial balances of two accounts (Alice and Bob as specified above in the genesis state), then proceed to issue a transfer transaction that will transfer 10 native base units from Alice to Bob. 
At the end it will again query and display the final balances of both accounts. To run the built client, do: ```shell ./minimal-runtime-client ``` The output should be something like the following: ``` level=info ts=2022-06-28T14:08:02.834961397Z caller=test.go:81 module=minimal-runtime-client msg="connecting to local node" level=info ts=2022-06-28T14:08:02.836059713Z caller=test.go:103 module=minimal-runtime-client msg="dumping initial balances" === Balances for oasis1qrec770vrek0a9a5lcrv0zvt22504k68svq7kzve === : 1000000000 === Balances for oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx === : 2000000000 level=info ts=2022-06-28T14:08:02.864348758Z caller=test.go:117 module=minimal-runtime-client msg="performing transfer" nonce=0 level=info ts=2022-06-28T14:08:18.515842571Z caller=test.go:146 module=minimal-runtime-client msg="dumping final balances" === Balances for oasis1qrec770vrek0a9a5lcrv0zvt22504k68svq7kzve === : 999999990 === Balances for oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx === : 2000000010 ``` You can try running the client multiple times and it should transfer the given amount each time. As long as the local network is running, the state will be preserved. Congratulations, you have successfully built and deployed your first runtime! Example You can view and download the complete [runtime example] and [client code in Go] from the Oasis SDK repository. [runtime example]: https://github.com/oasisprotocol/oasis-sdk/tree/main/examples/runtime-sdk/minimal-runtime [client code in Go]: https://github.com/oasisprotocol/oasis-sdk/tree/main/examples/client-sdk/go/minimal-runtime-client --- ## Modules As we saw in the [minimal runtime example], creating an Oasis runtime is very easy to do thanks to the boilerplate provided by the Oasis SDK. The example hinted that almost all of the implementation of the state transition function is actually hidden inside the _modules_ that are composed together to form a runtime. This chapter explores how modules are built. [minimal runtime example]: minimal-runtime.md ## Runtime Trait Let's briefly revisit the `Runtime` trait, which is what brings everything together. As we saw when [defining the minimal runtime], the trait requires implementing some basic things: ```rust impl sdk::Runtime for Runtime { // Use the crate version from Cargo.toml as the runtime version. const VERSION: Version = sdk::version_from_cargo!(); // Module that provides the core API. type Core = modules::core::Module<Config>; // Module that provides the accounts API. type Accounts = modules::accounts::Module; // Define the modules that the runtime will be composed of. type Modules = (modules::core::Module<Config>, modules::accounts::Module); // Define the genesis (initial) state for all of the specified modules. This // state is used when the runtime is first initialized. // // The return value is a tuple of states in the same order as the modules // are defined above. fn genesis_state() -> <Self::Modules as sdk::module::MigrationHandler>::Genesis { ( // Core module. modules::core::Genesis { // ... snip ... }, // Accounts module. modules::accounts::Genesis { // ... snip ... }, ) } } ``` [defining the minimal runtime]: minimal-runtime.md#runtime-definition ### Version The `VERSION` constant is pretty self-explanatory as it makes it possible to version runtimes and check compatibility with other nodes. The versioning scheme follows [semantic versioning] with the following semantics: * The **major** version is used when determining state transition function compatibility.
If any introduced change could lead to a discrepancy when running alongside a previous version, the major version _must_ be bumped. The [Oasis Core scheduler service] will make sure to only schedule nodes which are running a compatible version in order to make upgrades easier. * The **minor** and **patch** versions are ignored when determining compatibility and can be used for non-breaking features or fixes. [semantic versioning]: https://semver.org/ [Oasis Core scheduler service]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/consensus/services/scheduler.md ### List of Modules The `Modules` associated type contains all of the module types that compose the runtime. Due to the way modules are defined, you can specify multiple modules by using a tuple. ### Genesis State The genesis state is the initial state of the runtime. It is used when the runtime is first deployed to populate the initial persistent state of all of the modules. Each module can define its own genesis state format together with the methods for transforming that genesis state into internal persistent state. ## Module Lifecycle Traits ## Context ## Putting It All Together --- ## Prerequisites(Build-paratime) This chapter will show you how to install the software required for developing a runtime and client using the Oasis SDK. After successfully completing all the described steps you will be able to start building your first runtime! If you already have everything set up, feel free to skip to the [next chapter]. [next chapter]: minimal-runtime.md ## Environment Setup The following is a list of prerequisites required to start developing using the Oasis SDK: ### [Rust] We follow [Rust upstream's recommendation][rust-upstream-rustup] on using [rustup] to install and manage Rust versions. rustup cannot be installed alongside a distribution packaged Rust version. You will need to remove it (if it's present) before you can start using rustup. Install it by running: ```bash curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` If you want to avoid directly executing a shell script fetched the internet, you can also [download `rustup-init` executable for your platform] and run it manually. This will run `rustup-init` which will download and install the latest stable version of Rust on your system. #### Rust Toolchain Version The version of the Rust toolchain we use in the Oasis SDK is specified in the [`rust-toolchain.toml`] file. The rustup-installed versions of `cargo`, `rustc` and other tools will [automatically detect this file and use the appropriate version of the Rust toolchain][rust-toolchain-precedence]. When you are building applications that use the SDK, it is recommended that you copy the same [`rust-toolchain.toml`] file to your project's top-level directory as well. To install the appropriate version of the Rust toolchain, make sure you are in the project directory and run: ``` rustup show ``` This will automatically install the appropriate Rust toolchain (if not present) and output something similar to: ``` ... 
active toolchain ---------------- nightly-2022-08-22-x86_64-unknown-linux-gnu (overridden by '/code/rust-toolchain') rustc 1.65.0-nightly (c0941dfb5 2022-08-21) ``` [rustup]: https://rustup.rs/ [rust-upstream-rustup]: https://www.rust-lang.org/tools/install [download `rustup-init` executable for your platform]: https://rust-lang.github.io/rustup/installation/other.html [Rust]: https://www.rust-lang.org/ [`rust-toolchain.toml`]: https://github.com/oasisprotocol/oasis-sdk/tree/main/rust-toolchain.toml [rust-toolchain-precedence]: https://github.com/rust-lang/rustup/blob/master/README.md#override-precedence ### (OPTIONAL) [Go] _Required if you want to use the Go Client SDK._ At least version **1.22.5** is required. If your distribution provides a new-enough version of Go, just use that. Otherwise: * install the Go version provided by your distribution, * [ensure `$GOPATH/bin` is in your `PATH`], * [install the desired version of Go], e.g. 1.22.5, with: ``` go install golang.org/dl/go1.22.5@latest go1.22.5 download ``` [Go]: https://golang.org [ensure `$GOPATH/bin` is in your `PATH`]: https://tip.golang.org/doc/code.html#GOPATH [install the desired version of Go]: https://golang.org/doc/install#extra_versions ## Oasis Core Installation The SDK requires utilities provided by [Oasis Core] in order to be able to run a local test network for development purposes. The recommended way is to download a pre-built release (at least version 24.2) from the [Oasis Core releases] page. After downloading the binary release (e.g. into `~/Downloads/oasis_core_24.2_linux_amd64.tar.gz`), unpack it as follows: ```bash cd ~/Downloads tar xf ~/Downloads/oasis_core_24.2_linux_amd64.tar.gz --strip-components=1 # This environment variable will be used throughout this guide. export OASIS_CORE_PATH=~/Downloads/oasis_core_24.2_linux_amd64 ``` [Oasis Core]: https://github.com/oasisprotocol/oasis-core [Oasis Core releases]: https://github.com/oasisprotocol/oasis-core/releases ## Oasis CLI Installation The rest of the guide uses the Oasis CLI as an easy way to interact with the ParaTime. You can use [one of the binary releases] or [compile it yourself]. [one of the binary releases]: https://github.com/oasisprotocol/cli/releases [compile it yourself]: https://github.com/oasisprotocol/cli/blob/master/README.md --- ## Reproducibility If you wish to build paratime binaries yourself, you can use the environment provided as part of the SDK. This way you can also verify that the binaries match the ones running on the network. The steps below show how to build the test runtimes provided in the `oasis-sdk` sources; steps for other paratimes should be similar. ## Environment Setup The build environment is provided as a Docker image containing all the necessary tools. Refer to your system's documentation for pointers on installing Docker. The runtime sources need to be mounted into the container, so prepare a directory first, for example: ```bash git clone https://github.com/oasisprotocol/oasis-sdk.git ``` ## Running the Image The images are available in the `ghcr.io/oasisprotocol/runtime-builder` repository on the GitHub Container Registry and are tagged with the same version numbers as releases of the SDK.
To pull the image and run a container with it, run the following: ```bash docker run -t -i -v /home/user/oasis-sdk:/src ghcr.io/oasisprotocol/runtime-builder:main /bin/bash ``` where: - `/home/user/oasis-sdk` is the absolute path to the directory containing the SDK sources (or other paratimes - you likely do not need to download the SDK separately if you're building other paratimes), and - `main` is a release of the SDK - the documentation of the paratime you're trying to build should mention the version required. This gives you a root shell in the container. Rust and Cargo are installed in `/cargo`, Go in `/go`, and the sources to your paratime are available in `/src`. ## Building ### ELF Simply build the paratime in release mode using: ```bash cargo build --release ``` The resulting binaries will be in `/src/target/release/`. ### Intel SGX Follow the normal build procedure for your paratime. For the testing runtimes in the SDK, e.g.: ```bash cd /src cargo build --release --target x86_64-fortanix-unknown-sgx ``` After this step is complete, the binaries will be in `/src/target/x86_64-fortanix-unknown-sgx/release/`. To produce the sgxs format needed on the Oasis network, change directory to where a particular runtime's `Cargo.toml` file is and run the following command: ```bash cargo elf2sgxs --release ``` It is necessary to change directories first because the tool does not currently support cargo workspaces. The resulting binaries will have the `.sgxs` extension. ## Generating Bundles Oasis Core since version 22.0 distributes bundles in the Oasis Runtime Container format which is basically a zip archive with some metadata attached. This makes it easier for node operators to configure paratimes. To ease creation of such bundles from built binaries and metadata, you can use the `orc` tool provided by the SDK. You can install the `orc` utility by running: ```bash go install github.com/oasisprotocol/oasis-sdk/tools/orc@latest ``` The same bundle can contain both ELF and Intel SGX artifacts. To create a bundle use the following command: ```bash orc init path/to/elf-binary ``` When including Intel SGX artifacts you may additionally specify: All bundles, even Intel SGX ones, are required to include an ELF binary of the paratime. This binary is used for client nodes that don't have SGX support. ```bash orc init path/to/elf-binary --sgx-executable path/to/binary.sgxs --sgx-signature path/to/binary.sig ``` You can omit the signature initially and add it later by using: ```bash orc sgx-set-sig bundle.orc path/to/binary.sig ``` ### Multi-step SGX Signing Example Multi-step signing allows enclave signing keys to be kept offline, preferrably in some HSM. The following example uses `openssl` and a locally generated key as an example, however, it is suggested that the key be stored in a more secure location than in plaintext on disk. #### Generate a key We will generate a valid key for enclave signing. This must be a 3072-bit RSA key with a public exponent of 3. Do this like so: ```bash openssl genrsa -3 3072 > private.pem ``` We will also need the public key in a later step so let's also generate this now. ```bash openssl rsa -in private.pem -pubout > public.pem ``` #### Generate signing data for your enclave Generating signing data is done with the `orc sgx-gen-sign-data` subcommand, like so: ```bash orc sgx-gen-sign-data [options] bundle.orc ``` See `orc sgx-gen-sign-data --help` for details on available options. For purposes of this example, let's assume your bundle is named `bundle.orc`. 
You would generate data to sign like so: ```bash orc sgx-gen-sign-data bundle.orc > sigstruct.sha256.bin ``` The output file `sigstruct.sha256.bin` contains the sha256 hash of the SIGSTRUCT fields to be signed. ##### Sign the SIGSTRUCT hash To sign the SIGSTRUCT you must create a signature using the `RSASSA-PKCS1-v1_5` scheme. The following command will do so with `openssl`. If you're using an HSM, your device may have a different process for generating a signature of this type. ```bash openssl pkeyutl -sign \ -in sigstruct.sha256.bin \ -inkey private.pem \ -out sigstruct.sha256.sig \ -pkeyopt digest:sha256 ``` ##### Attach the singed SIGSTRUCT to the bundle With the signature in `sigstruct.sha256.sig` we can now generate a valid SIGSTRUCT and attach it into the bundle. ```bash orc sgx-set-sig bundle.orc sigstruct.sha256.sig public.pem ``` If there are no errors, `bundle.orc` will now contain a valid SGX SIGSTRUCT that was signed by `private.pem`. To verify you can use `orc show` as follows. ```bash orc show bundle.orc ``` It should return something like the following, showing the bundle content including the signed SGX SIGSTRUCT (the signature is also verified): ``` Bundle: /path/to/bundle.orc Name: my-paratime Runtime ID: 000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c Version: 0.1.1 Executable: runtime.elf SGXS: runtime.sgx SGXS MRENCLAVE: a68535bda1574a5e15dfb155c26e39bd404e9991a4d98010581a35d053011340 SGXS signature: runtime.sgx.sig SGXS SIGSTRUCT: Build date: 2022-07-14 00:00:00 +0000 UTC MiscSelect: 00000000 MiscSelect mask: FFFFFFFF Attributes flags: 0000000000000004 - 64-bit mode Attributes XFRM: 0000000000000003 Attributes mask: FFFFFFFFFFFFFFFD FFFFFFFFFFFFFFFC MRENCLAVE: a68535bda1574a5e15dfb155c26e39bd404e9991a4d98010581a35d053011340 ISV product ID: 0 ISV SVN: 0 Digests: runtime.sgx.sig => 3c0daea89dfdb3d0381147dec3e041a596617f686afa9b28436ca17980dafee4 runtime.elf => a96397fc309bc2116802315c0341a2a9f6f21935d79a3f56d71b3e4d6f6d9302 runtime.sgx => b96ff3ae9c73646459b7e8dc1d096838720a7c62707affc1800967cbee99b28b ``` --- ## Foundry [Foundry] is a smart contract development environment for EVM-based chains. This guide will show you how to use Foundry to build, test, and deploy smart contracts on Oasis Sapphire. For comprehensive details about Foundry's features, consult the [Foundry documentation]. Foundry contains the following tools: - **Forge** is a CLI [tool] for building, testing, and deploying smart contracts. Due to integrated Revm it significantly increases runtime speed when testing. - **Anvil** is a local development EVM [node]. It is installed as part of Foundry, but currently cannot be extended with Sapphire features. - **Cast** is a [CLI tool] for interacting with EVM nodes. It uses RPC calls, so it can be used to interact with Sapphire nodes running the [Oasis Web3 gateway]. - **Chisel** is a Solidity [REPL] (short for "read-eval-print loop") that allows developers to write and test Solidity code snippets. It provides an interactive environment for writing and executing Solidity code, as well as a set of built-in commands for working with and debugging your code. ## Setup and Configuration Follow the steps below to setup your project: 1. Install and run Foundryup: ```shell curl -L https://foundry.paradigm.xyz | bash foundryup source ~/.bashrc ``` 2. Create a new Forge project and move inside: ```shell forge init sapphire_demo cd sapphire_demo ``` 3. 
The **Sapphire Contracts** package contains helper Solidity contracts and libraries that enable convenient access to on-chain data and precompiles (rng, signatures, on-chain encryption, etc.). After initializing the project, we can install the **Sapphire Contracts** package using the Foundry package manager [Soldeer]:

   ```shell
   forge soldeer install @oasisprotocol-sapphire-contracts~0.2.14
   ```

### Installing Sapphire Foundry (Optional)

If you intend to use Sapphire features in Forge tests/scripts, follow this section. Forge tests use [Revm], an EVM implementation which does not contain [Sapphire-specific features]. For that reason, we have to install special Sapphire precompiles which are available in the **Sapphire Foundry** package.

1. Install the **Sapphire Foundry** Soldeer package:

   ```shell
   forge soldeer install @oasisprotocol-sapphire-foundry~0.1.1
   ```

2. Your foundry.toml file should now look like this:

   ```toml title="foundry.toml"
   [profile.default]
   src = "src"
   out = "out"
   libs = ["lib", "dependencies"]
   ffi = true

   [dependencies]
   "@oasisprotocol-sapphire-contracts" = "0.2.14"
   "@oasisprotocol-sapphire-foundry" = "0.1.1"
   ```

   Note:
   - `ffi = true` enables `vm.ffi()` ([foreign function interface]) which is used to call Rust bindings containing the precompile and decryption logic
   - `"dependencies"` lists the required project dependencies installed via the Foundry package manager [Soldeer]
   - (Alternative) You can also skip the previous steps, copy the contents above into your foundry.toml file and run:

     ```shell
     forge soldeer install
     ```

3. Install the Rust toolchain (nightly):

   ```shell
   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
   rustup toolchain install nightly
   rustup default nightly
   ```

   Note: `oasisprotocol-sapphire-foundry` relies on the Oasis Sapphire `runtime-sdk` crate which requires the nightly toolchain.

4. Install Sapphire precompiles:

   ```shell
   cd dependencies/@oasisprotocol-sapphire-foundry-0.1.1/precompiles
   cargo +nightly build --release
   ```

## Sapphire-specific Tests

Users who aren't familiar with the confidentiality benefits of Sapphire should take a moment to review the Ethereum [comparison page]. The `@oasisprotocol-sapphire-foundry` package enables testing with precompiles and encrypted calls.

### Precompiles

Sapphire precompiles are special precompiled contracts that contain cryptographic primitives and enable confidential transactions. Since Foundry uses un-customized Revm, they are not available by default. To enable them in Forge tests, add the **import** statement to your test file:

```solidity
import {SapphireTest} from "@oasisprotocol-sapphire-foundry-0.1.1/BaseSapphireTest.sol";
```

and then inherit from `SapphireTest` (defined in `BaseSapphireTest.sol`) and override the `setUp()` function:

```solidity
contract PrecompileTest is SapphireTest {
    function setUp() public override {
        super.setUp();
    }
}
```

### Encrypted Transactions and Calls

Sapphire uses end-to-end encryption for confidential transactions and calls. This means that the calldata is encrypted using the shared [key]. For non-encrypted transactions, the process works the same as on Ethereum. However, when testing, for example, [gasless transactions] with Foundry, we need to add a few things. After deploying the precompiles in the previous step, we need to update our custom contract, encrypt the transaction and broadcast it.

1. `SapphireDecryptor` is a special contract that implements decryption through the fallback function. We need to inherit from it to enable decryption.
   ```solidity
   import {SapphireDecryptor} from "@oasisprotocol-sapphire-foundry-0.1.1/BinaryContracts.sol";

   contract CustomContract is SapphireDecryptor {
   ```

2. See [examples/foundry] for a complete example of how to test an encrypted gasless transaction.

3. Use `vm.broadcastRawTransaction(raw_tx)` to send the raw transaction.

### `forge test`

To run tests, add a new file in the `test/` directory. Run tests with:

```shell
forge test
```

Example: Visit the Sapphire ParaTime repository to download the [Foundry][foundry-example] example.

[foundry-example]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/examples/foundry

## Forge Script

To run a script, add a new file in the `script/` directory. Run scripts with:

```shell
forge script
```

### Broadcasting transactions

**When broadcasting to Mainnet/Testnet/Localnet, do not use any of the imports and contracts described in [Installing Sapphire Foundry (Optional)](#installing-sapphire-foundry-optional).**

When running scripts, we may also want to deploy and test on Localnet/Testnet/Mainnet using the `--broadcast` and `--rpc-url` flags. The catch with Foundry is that, when broadcasting Forge ***scripts***, it only broadcasts state-changing transactions. This means that none of the ***view calls*** are sent through the RPC node. They are still executed in the local in-memory Revm even when the `--broadcast` and `--skip-simulation` flags are provided.

Using Forge script we can deploy and broadcast transactions on Sapphire just like on Ethereum, but since most Sapphire features are enabled through precompile ***view calls***, we have to use the `vm.rpc()` cheatcode to query data directly. The following Forge script contract calls the **RANDOM_BYTES (0x0100000000000000000000000000000000000001)** precompile directly using `vm.rpc()`:

```solidity
import {Script} from "forge-std/Script.sol";

contract CounterScript is Script {
    function setUp() public {
        vm.createSelectFork("https://testnet.sapphire.oasis.io");
    }

    function run() public {
        vm.startBroadcast();
        string memory transactionArgs = string.concat(
            "[{\"to\":\"",
            "0x0100000000000000000000000000000000000001",
            "\",\"data\":\"",
            vm.toString(abi.encode(32, "")),
            "\"}, \"latest\"]"
        );
        bytes memory result = vm.rpc("eth_call", transactionArgs);
        vm.stopBroadcast();
    }
}
```

### A Note on Fork Testing

Due to encrypted state, it is not possible to fork Sapphire. Using Forge with `--fork-block-number` will not work.

## Verification with Foundry

After contracts are deployed, you can verify them with Sourcify. Check out the [Verification with Foundry] section for more details.

Should you have any questions, do not hesitate to share them with us on the [#dev-central Discord channel][discord].
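For reference, broadcasting a script such as the `CounterScript` above to Sapphire Testnet could look like the following invocation; the script path and the `PRIVATE_KEY` environment variable are placeholders and will differ in your project:

```shell
# Broadcast the script's state-changing transactions to Sapphire Testnet.
# View calls are still executed against the local in-memory Revm.
forge script script/CounterScript.s.sol:CounterScript \
  --rpc-url https://testnet.sapphire.oasis.io \
  --broadcast --skip-simulation \
  --private-key $PRIVATE_KEY
```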
[Foundry]: https://github.com/foundry-rs/foundry
[Foundry documentation]: https://book.getfoundry.sh/
[discord]: https://oasis.io/discord
[tool]: https://book.getfoundry.sh/forge/
[node]: https://book.getfoundry.sh/anvil/
[CLI tool]: https://book.getfoundry.sh/cast/
[REPL]: https://book.getfoundry.sh/chisel/
[Sapphire-specific features]: ../sapphire/ethereum
[comparison page]: ../sapphire/ethereum
[gasless transactions]: ../sapphire/develop/gasless
[key]: ../sapphire/develop/concept
[examples/foundry]: https://github.com/oasisprotocol/sapphire-paratime/tree/main/examples/foundry
[Verification with Foundry]: ./verification/#verification-with-foundry
[foreign function interface]: https://book.getfoundry.sh/cheatcodes/ffi
[Soldeer]: https://book.getfoundry.sh/projects/soldeer?highlight=soldeer#soldeer-as-a-package-manager
[Oasis Web3 gateway]: ../../node/web3.mdx
[Revm]: https://github.com/bluealloy/revm

---

## Localnet

For convenient development and testing of your dApps the Oasis team prepared the [ghcr.io/oasisprotocol/sapphire-localnet][sapphire-localnet] container image. It brings a complete Oasis network stack to your workspace. The Localnet Sapphire instance **mimics confidential transactions**, but it does not run in a trusted execution environment nor does it require Intel's SGX on your computer. The network is isolated from the Mainnet or Testnet and consists of a:

- single Oasis validator node with 1-second block time and 30-second epoch,
- single Oasis client node,
- single compute node running Oasis Sapphire,
- single key manager node,
- PostgreSQL instance,
- Oasis Web3 gateway with transaction indexer and enabled Oasis RPCs,
- Oasis Nexus indexer and Explorer frontend,
- helper script which populates the account(s) for you.

Hardware requirements: You will need at least 16GB of RAM to run the Docker image in addition to your machine's OS.

## Installation and Setup

To run the image, execute:

```sh
docker run -it --rm -p8544-8548:8544-8548 ghcr.io/oasisprotocol/sapphire-localnet
```

On Apple Silicon Macs, run the x86_64 image explicitly:

```sh
docker run -it --rm -p8544-8548:8544-8548 --platform linux/x86_64 ghcr.io/oasisprotocol/sapphire-localnet
```

macOS Startup Issue on Apple Silicon: On Apple Silicon Macs running macOS 26 (Tahoe) or later, the `sapphire-localnet` Docker image may hang on startup with peer authentication errors (e.g., `chacha20poly1305: message authentication failed`). This is due to a bug in Rosetta 2's x86_64 emulation. The workaround is to disable Rosetta in Docker Desktop settings, which makes Docker use QEMU instead. Go to `Settings > Virtual Machine Options`, disable "Use Rosetta for x86/amd64 emulation on Apple Silicon", and run the command above again.

After a while, the running `sapphire-localnet` container will show you something like:

```console
sapphire-localnet 2024-11-29-gite748a1a (oasis-core: 24.3, sapphire-paratime: 0.9.0-testnet, oasis-web3-gateway: 5.1.0)

 * No ROFLs detected.
 * Starting oasis-net-runner with sapphire...
 * Waiting for Postgres to start...
 * Waiting for Oasis node to start.....
 * Waiting for Envoy proxy to start.
 * Starting oasis-web3-gateway...
 * Bootstrapping network (this might take a minute)....
 * Waiting for key manager......
 * Creating database 'nexus'
 * Waiting for Nexus to start.
 * Waiting for Explorer to start.
 * Populating accounts...
Available Accounts
==================
(0) 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 (10000 TEST)
(1) 0x70997970C51812dc3A010C7d01b50e0d17dc79C8 (10000 TEST)
(2) 0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC (10000 TEST)
(3) 0x90F79bf6EB2c4f870365E785982E1f101E93b906 (10000 TEST)
(4) 0x15d34AAf54267DB7D7c367839AAf71A00a2C6A65 (10000 TEST)

Private Keys
==================
(0) 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80
(1) 0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d
(2) 0x5de4111afa1a4b94908f83103eb1f1706367c2e68ca870fc3fb9a804cdab365a
(3) 0x7c852118294e51e653712a81e05800f419141751be58f605c371e15141b007a6
(4) 0x47e179ec197488593b187f80a00eb0da91f1b9d0b13f8733639f19c30a34926a

HD Wallet
==================
Mnemonic:     test test test test test test test test test test test junk
Base HD Path: m/44'/60'/0'/0/%d

WARNING: The chain is running in ephemeral mode. State will be lost after restart!

 * GRPC listening on http://localhost:8544.
 * Web3 RPC listening on http://localhost:8545 and ws://localhost:8546. Chain ID: 0x5afd.
 * Nexus API listening on http://localhost:8547.
 * Localnet Explorer available at http://localhost:8548.
 * Container start-up took 69 seconds, node log level is set to warn.
```

Those familiar with local dApp environments will find the output above similar to the `geth --dev` or `ganache-cli` commands or the `geth-dev-assistant` npm package. The [sapphire-localnet] will spin up a private Oasis Network locally, generate and populate test accounts and make the following Web3 endpoints available for you to use:

- `http://localhost:8545`
- `ws://localhost:8546`

The [Oasis GRPC][oasis-rpc] endpoint is exposed on:

- `http://localhost:8544`

[oasis-rpc]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/oasis-node/rpc.md

In addition to these, the Nexus API is available on `http://localhost:8547` and an Explorer instance on `http://localhost:8548`. These can be disabled by passing `--no-explorer` or setting the environment variable `OASIS_DOCKER_START_EXPLORER` to `no`.

## Optional Parameters

By default, the Localnet docker image will populate the first five accounts derived from the standard test mnemonic, compatible with `hardhat node`. These accounts are typically used for Solidity unit tests. If you prefer populating different accounts, use the `-to` flag and pass mnemonic seed phrases or wallet addresses. Use the `-n` parameter to define the number of derived addresses to fund.

```sh
docker run -it -p8544-8548:8544-8548 ghcr.io/oasisprotocol/sapphire-localnet -to "bench remain brave curve frozen verify dream margin alarm world repair innocent" -n3
docker run -it -p8544-8548:8544-8548 ghcr.io/oasisprotocol/sapphire-localnet -to "0x75eCF0d4496C2f10e4e9aF3D4d174576Ee9010E2,0xbDA5747bFD65F08deb54cb465eB87D40e51B197E"
```

The [sapphire-localnet] runs in ephemeral mode. Any smart contract and wallet balance will be lost after you quit the Docker container!

Emerald Localnet: An Emerald flavor of [sapphire-localnet] also exists, called [emerald-localnet]. It behaves the same way as Sapphire, but without confidentiality.

[sapphire-localnet]: https://github.com/oasisprotocol/oasis-web3-gateway/pkgs/container/sapphire-localnet
[emerald-localnet]: https://github.com/oasisprotocol/oasis-web3-gateway/pkgs/container/emerald-localnet

## GitHub Actions

You can easily integrate localnet into your CI/CD workflow. Use the example GitHub Action configuration to start a Sapphire stack and expose the necessary ports for testing.
```yaml
jobs:
  example-test:
    services:
      sapphire-localnet-ci:
        image: ghcr.io/oasisprotocol/sapphire-localnet
        ports:
          - 8544:8544
          - 8545:8545
          - 8546:8546
        env:
          OASIS_DOCKER_START_EXPLORER: no
        options: >-
          --rm
          --health-cmd="test -f /CONTAINER_READY"
          --health-start-period=90s
```

---

## Consensus Layer and Other ParaTimes

In addition to our primary ParaTime, [Sapphire], several additional ParaTimes are running on top of the [consensus layer]. DApp developers can choose a ParaTime to build on according to their specific requirements, such as confidentiality and EVM compatibility. Learn more by exploring the ParaTimes below!

|                      | EVM-compatible | Oasis Wasm |
|---------------------:|----------------|------------|
| **Confidential**     | Sapphire       | Cipher     |
| **Non-Confidential** | Emerald        |            |

[consensus layer]: ./network.md
[Sapphire]: https://github.com/oasisprotocol/docs/blob/main/docs/build/sapphire/README.mdx

---

## Cipher ParaTime

Cipher is a confidential ParaTime for executing Wasm smart contracts. As a ParaTime officially supported by the Oasis Protocol Foundation, Cipher allows for:

* Flexibility: developers can define which data to store in public storage and which data in the (more expensive) confidential storage
* Security: the [Rust language] primarily used for writing Wasm smart contracts is known for its strict memory management and was designed specifically to avoid memory-safety bugs
* Scalability: increased throughput of transactions
* Low-cost: 99%+ lower fees than Ethereum
* 6 second finality (1 block)
* Cross-chain bridge to enable cross-chain interoperability (upcoming)

If you're looking for EVM-compatible ParaTimes, check out the [Emerald](../emerald/README.mdx) and the confidential [Sapphire](https://github.com/oasisprotocol/sapphire-paratime/blob/main/docs/README.mdx) ParaTimes.

[Rust language]: https://www.rust-lang.org/

## Network Information

See crucial network information [here][network].

[network]: ./network.mdx

## Smart Contract Development

Cipher implements the [Oasis Contract SDK] API. To learn how to write a confidential smart contract in Rust and deploy it on Cipher, read the related Oasis Contract SDK chapters.

[Oasis Contract SDK]: https://github.com/oasisprotocol/oasis-sdk/tree/main/contract-sdk

---

## Confidential Hello World

Confidential smart contract execution on Oasis is assured by three mechanisms:

- the contract is executed in a trusted execution environment,
- the contract's storage on the blockchain is encrypted,
- the client's transactions and queries are end-to-end encrypted.

The first mechanism is implemented as part of the ParaTime attestation process on the consensus layer and is opaque to the dApp developer. The other two mechanisms are available to dApp developers. The remainder of this chapter will show you how to use encrypted contract storage and perform contract operations with end-to-end encryption on Cipher.

## Confidential cell

In the [hello world](./hello-world.md) example we used [`PublicCell`][PublicCell] to access the key-value store of that contract instance. In this case the value was stored unencrypted on the blockchain, associated with the hash of the key we provided to the constructor (e.g., the `counter` in `PublicCell::new(b"counter")`). Cipher supports another primitive [`ConfidentialCell`][ConfidentialCell] which enables you to store and load data confidentially, assured by hardware-level encryption.
In addition, the value is encrypted along with a nonce so that it appears different each time to the blockchain observer, even if the decrypted value remains equal. Namely, the nonce is generated from: - the round number, - the number of the sub-call during current smart contract execution, - the number of confidential storage accesses from smart contracts in the current block. The location of the confidential cell inside the contract state is **still based on the initialization key passed to the constructor**. Consequently, if you declare a number of confidential cells and write to the same one on each call, the blockchain observers will notice that the same cell is being changed every time. To call the confidential cell getter and setter, you will need to provide the instance of the *confidential store*. The store is obtained by calling `confidential_store()` on the contract's *context* object. If, for example, the node operator will try to execute your code in a non-confidential environment, they would not obtain the keys required to perform decryption so the operation would fail. Now, let's look at how a confidential version of the hello world smart contract would look like: ```rust title="src/lib.rs" //! A confidential hello world smart contract. extern crate alloc; use oasis_contract_sdk as sdk; use oasis_contract_sdk_storage::cell::ConfidentialCell; /// All possible errors that can be returned by the contract. /// /// Each error is a triplet of (module, code, message) which allows it to be both easily /// human readable and also identifyable programmatically. #[derive(Debug, thiserror::Error, sdk::Error)] pub enum Error { #[error("bad request")] #[sdk_error(code = 1)] BadRequest, } /// All possible requests that the contract can handle. /// /// This includes both calls and queries. #[derive(Clone, Debug, cbor::Encode, cbor::Decode)] pub enum Request { #[cbor(rename = "instantiate")] Instantiate { initial_counter: u64 }, #[cbor(rename = "say_hello")] SayHello { who: String }, } /// All possible responses that the contract can return. /// /// This includes both calls and queries. #[derive(Clone, Debug, Eq, PartialEq, cbor::Encode, cbor::Decode)] pub enum Response { #[cbor(rename = "hello")] Hello { greeting: String }, #[cbor(rename = "empty")] Empty, } /// The contract type. pub struct HelloWorld; /// Storage cell for the counter. const COUNTER: ConfidentialCell = ConfidentialCell::new(b"counter"); impl HelloWorld { /// Increment the counter and return the previous value. fn increment_counter(ctx: &mut C) -> u64 { let counter = COUNTER.get(ctx.confidential_store()).unwrap_or_default(); COUNTER.set(ctx.confidential_store(), counter + 1); counter } } // Implementation of the sdk::Contract trait is required in order for the type to be a contract. impl sdk::Contract for HelloWorld { type Request = Request; type Response = Response; type Error = Error; fn instantiate(ctx: &mut C, request: Request) -> Result<(), Error> { // This method is called during the contracts.Instantiate call when the contract is first // instantiated. It can be used to initialize the contract state. match request { // We require the caller to always pass the Instantiate request. Request::Instantiate { initial_counter } => { // Initialize counter to specified value. COUNTER.set(ctx.confidential_store(), initial_counter); Ok(()) } _ => Err(Error::BadRequest), } } fn call(ctx: &mut C, request: Request) -> Result { // This method is called for each contracts.Call call. 
It is supposed to handle the request // and return a response. match request { Request::SayHello { who } => { // Increment the counter and retrieve the previous value. let counter = Self::increment_counter(ctx); // Return the greeting as a response. Ok(Response::Hello { greeting: format!("hello {who} ({counter})"), }) } _ => Err(Error::BadRequest), } } fn query(_ctx: &mut C, _request: Request) -> Result { // This method is called for each contracts.Query query. It is supposed to handle the // request and return a response. Err(Error::BadRequest) } } // Create the required Wasm exports required for the contract to be runnable. sdk::create_contract!(HelloWorld); // We define some simple contract tests below. #[cfg(test)] mod test { use oasis_contract_sdk::{testing::MockContext, types::ExecutionContext, Contract}; use super::*; #[test] fn test_hello() { // Create a mock execution context with default values. let mut ctx: MockContext = ExecutionContext::default().into(); // Instantiate the contract. HelloWorld::instantiate( &mut ctx, Request::Instantiate { initial_counter: 11, }, ) .expect("instantiation should work"); // Dispatch the SayHello message. let rsp = HelloWorld::call( &mut ctx, Request::SayHello { who: "unit test".to_string(), }, ) .expect("SayHello call should work"); // Make sure the greeting is correct. assert_eq!( rsp, Response::Hello { greeting: "hello unit test (11)".to_string() } ); // Dispatch another SayHello message. let rsp = HelloWorld::call( &mut ctx, Request::SayHello { who: "second call".to_string(), }, ) .expect("SayHello call should work"); // Make sure the greeting is correct. assert_eq!( rsp, Response::Hello { greeting: "hello second call (12)".to_string() } ); } } ``` The contract is built the same way as its non-confidential counterpart: ```shell cargo build --target wasm32-unknown-unknown --release ``` The blockchain store containing all compiled contracts is public. This means that anyone will be able to decompile your smart contract and see how it works. **Do not put any sensitive data inside the smart contract code!** Since the smart contracts store is public, uploading the Wasm code is the same as for the non-confidential ones: ```shell oasis contract upload hello_world.wasm ``` [PublicCell]: https://api.docs.oasis.io/oasis-sdk/oasis_contract_sdk_storage/cell/struct.PublicCell.html [ConfidentialCell]: https://api.docs.oasis.io/oasis-sdk/oasis_contract_sdk_storage/cell/struct.ConfidentialCell.html ## Confidential Instantiation and Calling To generate an encrypted transaction, the `oasis contract` subcommand expects a `--encrypted` flag. The client (`oasis` command in our case) will generate and use an ephemeral keypair for encryption. If the original transaction was encrypted, the returned transaction result will also be encrypted inside the trusted execution environment to prevent a man-in-the-middle attack by the compute node. Encrypted transactions have the following encrypted fields: contract address, function name, parameters and the amounts and types of tokens sent. **Encrypted transactions are not anonymous!** Namely, the transaction contains unencrypted public key of your account or a list of expected multisig keys, the gas limit and the amount of fee paid for the transaction execution. While the transaction execution is confidential, its effects may reveal some information. For example, the account balances are public. 
If the effect is, say, subtraction of 10 tokens from the signer's account, this most probably implies that they have been transferred as part of this transaction.

Before we instantiate the contract, we need to consider the gas usage of our confidential smart contract. Since the execution of the smart contract is dependent on the (confidential) smart contract state, the gas limit cannot be computed automatically. Currently, the gas limit for confidential transactions is tailored towards simple transaction execution (e.g. no gas is reserved for accessing the contract state). For more expensive transactions, we need to explicitly pass the `--gas-limit` parameter and *guess* a sufficient value for now, or we will get the `out of gas` error. To instantiate our smart contract above with a single write to the contract state, for example, we need to raise the gas limit to `400000`:

```shell
oasis contract instantiate CODEID '{instantiate: {initial_counter: 42}}' --encrypted --gas-limit 400000
```

The `out of gas` error can **potentially reveal the (confidential) state of the smart contract**! If your smart contract contains a branch which depends on the value stored in the contract state, an attack similar to the **timing attack** known from the design of cryptographic algorithms can succeed. To overcome this, your code should **never contain branches depending on secret smart contract state**.

A similar gas limit attack could reveal the **client's transaction parameters**. For example, if calling function `A` costs `50,000` gas units and function `B` `300,000` gas units, the attacker could infer which function call was performed based on the transaction's gas limit, which is public. To mitigate this attack, the client should always use the maximum gas cost among all contract function calls - in this case `300,000`.

Finally, we make a confidential call:

```shell
oasis contract call INSTANCEID '{say_hello: {who: "me"}}' --encrypted --gas-limit 400000
```

Call Format: The [Context] object has a special [`call_format`] attribute which holds information on whether the transaction was encrypted by the client's ephemeral key or not. Having access control based on this value is useful as an additional safety precaution to prevent confidential information from accidentally leaking out of the trusted execution environment unencrypted.

Regardless of the encrypted transaction and confidential storage used in the smart contract, any [emitted event][emit_event] will be public.

Example: You can view and download a [complete example] from the Oasis SDK repository.

[Context]: https://api.docs.oasis.io/oasis-sdk/oasis_contract_sdk/context/trait.Context.html
[`call_format`]: https://api.docs.oasis.io/oasis-sdk/oasis_contract_sdk/context/trait.Context.html#tymethod.call_format
[emit_event]: https://api.docs.oasis.io/oasis-sdk/oasis_contract_sdk/context/trait.Context.html#tymethod.emit_event
[complete example]: https://github.com/oasisprotocol/oasis-sdk/tree/main/examples/contract-sdk/c10l-hello-world

---

## Hello World

This chapter will show you how to quickly create, build and test a minimal Oasis WebAssembly smart contract.

## Repository Structure and Dependencies

First we create the basic directory structure for the hello world contract using Rust's [`cargo`]:

```bash
cargo init --lib hello-world
```

This will create the `hello-world` directory and populate it with some boilerplate needed to describe a Rust application. It will also set up the directory for version control using Git.
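If you are following along, move into the newly created project directory now:

```bash
cd hello-world
```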
The rest of the guide assumes that you are executing commands from within this directory. Since the Contract SDK requires a nightly version of the Rust toolchain, you need to specify a version to use by creating a special file called `rust-toolchain` containing the following information: ``` [toolchain] channel = "nightly-2025-05-09" components = ["rustfmt", "clippy"] targets = ["x86_64-fortanix-unknown-sgx", "wasm32-unknown-unknown"] profile = "minimal" ``` After you complete this guide, the minimal runtime directory structure will look as follows: ``` hello-world ├── Cargo.lock # Dependency tree checksums (generated on first compilation). ├── Cargo.toml # Rust crate definition. ├── rust-toolchain.toml # Rust toolchain version configuration. └── src └── lib.rs # Smart contract source code. ``` [`cargo`]: https://doc.rust-lang.org/cargo ## Smart Contract Definition First you need to declare some dependencies in order to be able to use the smart contract SDK. Additionally, you will want to specify some optimization flags in order to make the compiled smart contract as small as possible. To do this, edit your `Cargo.toml` to look like the following: ```toml title="Cargo.toml" [package] name = "hello-world" version = "0.0.0" edition = "2021" license = "Apache-2.0" [lib] crate-type = ["cdylib"] [dependencies] cbor = { version = "0.5.1", package = "oasis-cbor" } oasis-contract-sdk = { git = "https://github.com/oasisprotocol/oasis-sdk", tag = "contract-sdk/v0.4.1" } oasis-contract-sdk-storage = { git = "https://github.com/oasisprotocol/oasis-sdk", tag = "contract-sdk/v0.4.1" } # Third party. thiserror = "1.0.30" [profile.release] opt-level = 3 debug = false rpath = false lto = true debug-assertions = false codegen-units = 1 panic = "abort" incremental = false overflow-checks = true strip = true ``` We are using Git tags for releases instead of releasing Rust packages on crates.io. After you have updated your `Cargo.toml` the next thing is to define the hello world smart contract. To do this, edit `src/lib.rs` with the following content: ```rust title="src/lib.rs" //! A minimal hello world smart contract. extern crate alloc; use oasis_contract_sdk as sdk; use oasis_contract_sdk_storage::cell::PublicCell; /// All possible errors that can be returned by the contract. /// /// Each error is a triplet of (module, code, message) which allows it to be both easily /// human readable and also identifyable programmatically. #[derive(Debug, thiserror::Error, sdk::Error)] pub enum Error { #[error("bad request")] #[sdk_error(code = 1)] BadRequest, } /// All possible requests that the contract can handle. /// /// This includes both calls and queries. #[derive(Clone, Debug, cbor::Encode, cbor::Decode)] pub enum Request { #[cbor(rename = "instantiate")] Instantiate { initial_counter: u64 }, #[cbor(rename = "say_hello")] SayHello { who: String }, } /// All possible responses that the contract can return. /// /// This includes both calls and queries. #[derive(Clone, Debug, Eq, PartialEq, cbor::Encode, cbor::Decode)] pub enum Response { #[cbor(rename = "hello")] Hello { greeting: String }, #[cbor(rename = "empty")] Empty, } /// The contract type. pub struct HelloWorld; /// Storage cell for the counter. const COUNTER: PublicCell = PublicCell::new(b"counter"); impl HelloWorld { /// Increment the counter and return the previous value. 
fn increment_counter(ctx: &mut C) -> u64 { let counter = COUNTER.get(ctx.public_store()).unwrap_or_default(); COUNTER.set(ctx.public_store(), counter + 1); counter } } // Implementation of the sdk::Contract trait is required in order for the type to be a contract. impl sdk::Contract for HelloWorld { type Request = Request; type Response = Response; type Error = Error; fn instantiate(ctx: &mut C, request: Request) -> Result<(), Error> { // This method is called during the contracts.Instantiate call when the contract is first // instantiated. It can be used to initialize the contract state. match request { // We require the caller to always pass the Instantiate request. Request::Instantiate { initial_counter } => { // Initialize counter to specified value. COUNTER.set(ctx.public_store(), initial_counter); Ok(()) } _ => Err(Error::BadRequest), } } fn call(ctx: &mut C, request: Request) -> Result { // This method is called for each contracts.Call call. It is supposed to handle the request // and return a response. match request { Request::SayHello { who } => { // Increment the counter and retrieve the previous value. let counter = Self::increment_counter(ctx); // Return the greeting as a response. Ok(Response::Hello { greeting: format!("hello {who} ({counter})"), }) } _ => Err(Error::BadRequest), } } fn query(_ctx: &mut C, _request: Request) -> Result { // This method is called for each contracts.Query query. It is supposed to handle the // request and return a response. Err(Error::BadRequest) } } // Create the required Wasm exports required for the contract to be runnable. sdk::create_contract!(HelloWorld); // We define some simple contract tests below. #[cfg(test)] mod test { use oasis_contract_sdk::{testing::MockContext, types::ExecutionContext, Contract}; use super::*; #[test] fn test_hello() { // Create a mock execution context with default values. let mut ctx: MockContext = ExecutionContext::default().into(); // Instantiate the contract. HelloWorld::instantiate( &mut ctx, Request::Instantiate { initial_counter: 11, }, ) .expect("instantiation should work"); // Dispatch the SayHello message. let rsp = HelloWorld::call( &mut ctx, Request::SayHello { who: "unit test".to_string(), }, ) .expect("SayHello call should work"); // Make sure the greeting is correct. assert_eq!( rsp, Response::Hello { greeting: "hello unit test (11)".to_string() } ); // Dispatch another SayHello message. let rsp = HelloWorld::call( &mut ctx, Request::SayHello { who: "second call".to_string(), }, ) .expect("SayHello call should work"); // Make sure the greeting is correct. assert_eq!( rsp, Response::Hello { greeting: "hello second call (12)".to_string() } ); } } ``` This is it! You now have a simple hello world smart contract with included unit tests for its functionality. You can also look at other smart contract handles supported by the [Oasis Contract SDK]. PublicCell object `PublicCell` can use any type `T` which implements `oasis_cbor::Encode` and `oasis_cbor::Decode`. Context object The `ctx` argument contains the contract context analogous to `msg` and `this` in the EVM world. To learn more head to the [Context] trait in our Rust API. 
[Oasis Contract SDK]: https://github.com/oasisprotocol/oasis-sdk/blob/main/contract-sdk/src/contract.rs [Context]: https://api.docs.oasis.io/oasis-sdk/oasis_contract_sdk/context/trait.Context.html ## Testing To run unit tests type: ```sh RUSTFLAGS="-C target-feature=+aes,+ssse3" cargo test ``` Running unit tests locally requires a physical or virtualized Intel-compatible CPU with AES and SSSE3 instruction sets. ## Building for Deployment In order to build the smart contract before it can be uploaded to the target chain, run: ```bash cargo build --target wasm32-unknown-unknown --release ``` This will generate a binary file called `hello_world.wasm` under `target/wasm32-unknown-unknown/release` which contains the smart contract compiled into WebAssembly. This file can be directly deployed on chain. ## Deploying the Contract Deploying the contract we just built is simple using the Oasis CLI. This section assumes that you already have an instance of the CLI set up and that you will be deploying contracts on the existing Testnet where you already have some TEST tokens to cover transaction fees. First, switch the default network to Cipher Testnet to avoid the need to pass it to every following invocation. ``` oasis network set-default testnet oasis paratime set-default testnet cipher ``` The first deployment step that needs to be performed only once for the given binary is uploading the Wasm binary. ``` oasis contract upload hello_world.wasm ``` After successful execution it will show the code ID that you need to use for any subsequent instantiation of the same contract. Next, create an instance of the contract by loading the code and calling its constructor with some dummy arguments. Note that the arguments depend on the contract that is being deployed and in our hello world case we are simply taking the initial counter value. ``` oasis contract instantiate CODEID '{instantiate: {initial_counter: 42}}' ``` After successful execution it shows the instance ID that you need for calling the instantiated contract. Next, you can test calling the contract. ``` oasis contract call INSTANCEID '{say_hello: {who: "me"}}' ``` Example You can view and download a [complete example] from the Oasis SDK repository. [complete example]: https://github.com/oasisprotocol/oasis-sdk/tree/main/examples/contract-sdk/hello-world --- ## Network Information(Cipher) ## RPC Endpoints The RPC endpoint is a *point of trust*. Beside traffic rate limiting, it can also perform censorship or even a man-in-the-middle attack. If you have security considerations, we strongly recommend that you set up your own [ParaTime client node][paratime-client-node]. Cipher endpoints share the gRPC protocol with the Oasis Core. 
You can connect to one of the public endpoints below (in alphabetic order): [paratime-client-node]: ../../../../node/run-your-node/paratime-client-node.mdx | Provider | Mainnet RPC URLs | Testnet RPC URLs | |----------|---------------------|-----------------------------| | [Oasis] | `grpc.oasis.io:443` | `testnet.grpc.oasis.io:443` | [Oasis]: https://oasis.net ## Block Explorers | Name/Provider | Mainnet URL | Testnet URL | EIP-3091 compatible | |------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------| | Oasis Scan ([Bit Cat]) | [https://www.oasisscan.com/paratimes/000…7cb](https://www.oasisscan.com/paratimes/000000000000000000000000000000000000000000000000e199119c992377cb) | [https://testnet.oasisscan.com/paratimes/000…000](https://testnet.oasisscan.com/paratimes/0000000000000000000000000000000000000000000000000000000000000000) | No | [Bit Cat]: https://www.bitcat365.com/ Only rudimentary block explorer features exist for Cipher. Consider debugging Cipher transactions with the [`oasis paratime show`] command using the [Oasis CLI]. [`oasis paratime show`]: ../../../../build/tools/cli/paratime.md#show [Oasis CLI]: ../../../../build/tools/cli/README.md ## Indexers | Name (Provider) | Mainnet URL | Testnet URL | Documentation | |------------------------|----------------------------------------|----------------------------------------|-------------------------------| | Oasis Scan ([Bit Cat]) | `https://api.oasisscan.com/v2/mainnet` | `https://api.oasisscan.com/v2/testnet` | [Runtime API][OasisScan-docs] | [OasisScan-docs]: https://api.oasisscan.com/v2/swagger/#/runtime If you are running your own Cipher endpoint, a block explorer, or an indexer and wish to be added to these docs, open an issue at [github.com/oasisprotocol/docs]. [github.com/oasisprotocol/docs]: https://github.com/oasisprotocol/docs --- ## Prerequisites(Cipher) This chapter will guide you how to install the software required for developing smart contracts using the Oasis SDK. After successfully completing all the described steps you will be able to start building your first smart contract on Oasis! If you already have everything set up, feel free to skip to the [next chapter]. [next chapter]: hello-world.md ## Environment Setup The following is a list of prerequisites required to start developing using the Oasis SDK: ### [Rust] We follow [Rust upstream's recommendation][rust-upstream-rustup] on using [rustup] to install and manage Rust versions. rustup cannot be installed alongside a distribution packaged Rust version. You will need to remove it (if it's present) before you can start using rustup. Install it by running: ```bash curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` If you want to avoid directly executing a shell script fetched the internet, you can also [download `rustup-init` executable for your platform] and run it manually. This will run `rustup-init` which will download and install the latest stable version of Rust on your system. #### Rust Toolchain Version The version of the Rust toolchain we use in the Oasis SDK is specified in the [`rust-toolchain.toml`] file. 
The rustup-installed versions of `cargo`, `rustc` and other tools will [automatically detect this file and use the appropriate version of the Rust toolchain][rust-toolchain-precedence]. When you are building applications that use the SDK, it is recommended that you copy the same [`rust-toolchain.toml`] file to your project's top-level directory as well.

To install the appropriate version of the Rust toolchain, make sure you are in the project directory and run:

```
rustup show
```

This will automatically install the appropriate Rust toolchain (if not present) and output something similar to:

```
...

active toolchain
----------------

nightly-2022-08-22-x86_64-unknown-linux-gnu (overridden by '/code/rust-toolchain')
rustc 1.65.0-nightly (c0941dfb5 2022-08-21)
```

[rustup]: https://rustup.rs/
[rust-upstream-rustup]: https://www.rust-lang.org/tools/install
[download `rustup-init` executable for your platform]: https://rust-lang.github.io/rustup/installation/other.html
[Rust]: https://www.rust-lang.org/
[`rust-toolchain.toml`]: https://github.com/oasisprotocol/oasis-sdk/tree/main/rust-toolchain.toml
[rust-toolchain-precedence]: https://github.com/rust-lang/rustup/blob/master/README.md#override-precedence

### (OPTIONAL) [Go]

_Required if you want to use the Go Client SDK._

At least version **1.20.2** is required. If your distribution provides a new-enough version of Go, just use that. Otherwise:

* install the Go version provided by your distribution,
* [ensure `$GOPATH/bin` is in your `PATH`],
* [install the desired version of Go], e.g. 1.20.2, with:

  ```
  go get golang.org/dl/go1.20.2
  go1.20.2 download
  ```

[Go]: https://golang.org
[ensure `$GOPATH/bin` is in your `PATH`]: https://tip.golang.org/doc/code.html#GOPATH
[install the desired version of Go]: https://golang.org/doc/install#extra_versions

## Oasis CLI Installation

The rest of the guide uses the Oasis CLI as an easy way to interact with the smart contract. You can use [one of the binary releases] or [compile it yourself].

[one of the binary releases]: https://github.com/oasisprotocol/cli/releases
[compile it yourself]: https://github.com/oasisprotocol/cli/blob/master/README.md

---

## Emerald ParaTime

Emerald is our official ParaTime which executes smart contracts inside the [Ethereum Virtual Machine (EVM)]. Emerald allows for:

* Full EVM compatibility and easy integration with EVM-based dApps, such as DeFi, NFT, Metaverse and crypto gaming
* Scalability: increased throughput of transactions
* Low-cost: 99%+ lower fees than Ethereum
* 6 second finality (1 block)
* Cross-chain bridge to enable cross-chain interoperability (upcoming)

If you're looking for EVM, but with confidentiality, check out the [Sapphire ParaTime](https://github.com/oasisprotocol/sapphire-paratime/blob/main/docs/README.mdx).

[Ethereum Virtual Machine (EVM)]: https://ethereum.org/en/developers/docs/evm/

## Network Information

See crucial network information [here][network].
[network]: ./network.mdx

---

## Network Information(Emerald)

## Networks

|                   | Mainnet                        | Testnet                        | Localnet                                   |
|-------------------|--------------------------------|--------------------------------|--------------------------------------------|
| Network name      | `emerald`                      | `emerald-testnet`              | `emerald-localnet`                         |
| Long network name | `Oasis Emerald`                | `Oasis Emerald Testnet`        | `Oasis Emerald Localnet`                   |
| Chain ID          | Hex: `0xa516` Decimal: `42262` | Hex: `0xa515` Decimal: `42261` | Hex: `0xa514` Decimal: `42260`             |
| Tools             |                                | [Testing token Faucet][faucet] | [Local development Docker image][localnet] |

[faucet]: https://faucet.testnet.oasis.io/
[localnet]: ../../localnet.mdx

## RPC Endpoints

The RPC endpoint is a *point of trust*. Besides traffic rate limiting, it can also perform censorship or even a man-in-the-middle attack. If you have security considerations, we strongly recommend that you set up your own [ParaTime client node][paratime-client-node] and [Web3-compatible gateway].

[Web3-compatible gateway]: ../../../../node/web3.mdx
[paratime-client-node]: ../../../../node/run-your-node/paratime-client-node.mdx

You can connect to one of the public Web3 gateways below (in alphabetic order):

| Provider | Mainnet RPC URLs           | Testnet RPC URLs                   |
|----------|----------------------------|------------------------------------|
| [1RPC]   |                            | N/A                                |
| [Oasis]  | `https://emerald.oasis.io` | `https://testnet.emerald.oasis.io` |

[Oasis]: https://oasis.net

Public RPCs may have rate limits or traffic restrictions. For professional, dedicated RPC endpoints, consider the following providers (in alphabetic order):

| Provider | Instructions              | Pricing                 |
|----------|---------------------------|-------------------------|
| [1RPC]   | [docs.1rpc.io][1RPC-docs] | [Pricing][1RPC-pricing] |

[1RPC]: https://www.1rpc.io/
[1RPC-docs]: https://docs.1rpc.io/guide/how-to-use-1rpc
[1RPC-pricing]: https://www.1rpc.io/#pricing

## Block Explorers

| Name/Provider           | Mainnet URL | Testnet URL | EIP-3091 compatible |
|-------------------------|-------------|-------------|---------------------|
| [Oasis Explorer][Oasis] | `https://explorer.oasis.io/mainnet/emerald` | `https://explorer.oasis.io/testnet/emerald` | Yes |
| Oasis Scan ([Bit Cat])  | [https://www.oasisscan.com/paratimes/000…87f](https://www.oasisscan.com/paratimes/000000000000000000000000000000000000000000000000e2eaa99fc008f87f) | [https://testnet.oasisscan.com/paratimes/000…ca7](https://testnet.oasisscan.com/paratimes/00000000000000000000000000000000000000000000000072c8215e60d5bca7) | No |

[Bit Cat]: https://www.bitcat365.com/

## Indexers

| Name (Provider)              | Mainnet URL | Testnet URL | Documentation |
|------------------------------|-------------|-------------|---------------|
| [Covalent]                   | `https://api.covalenthq.com/v1/oasis-emerald-mainnet` | *N/A* | [Unified API docs][Covalent-docs] |
| Oasis Nexus ([Oasis])        | `https://nexus.oasis.io/v1/` | `https://testnet.nexus.oasis.io/v1/` | [API][Nexus-docs] |
| Oasis Scan ([Bit Cat])       | `https://api.oasisscan.com/v2/mainnet` | `https://api.oasisscan.com/v2/testnet` | [Runtime API][OasisScan-docs] |
| [SubQuery Network][SubQuery] | *N/A* | *N/A* | [SubQuery Academy][SubQuery-docs], [QuickStart][SubQuery-quickstart], [Starter project][SubQuery-starter] |

[Covalent]: https://www.covalenthq.com/
[Covalent-docs]: https://www.covalenthq.com/docs/unified-api/ [Nexus-docs]: https://nexus.oasis.io/v1/spec/v1.html [OasisScan-docs]: https://api.oasisscan.com/v2/swagger/#/runtime [SubQuery]: https://subquery.network [SubQuery-docs]: https://academy.subquery.network/ [SubQuery-quickstart]: https://academy.subquery.network/quickstart/quickstart.html [SubQuery-starter]: https://github.com/subquery/ethereum-subql-starter/tree/main/Oasis/oasis-emerald-starter If you are running your own Emerald endpoint, a block explorer, or an indexer and wish to be added to these docs, open an issue at [github.com/oasisprotocol/docs]. [github.com/oasisprotocol/docs]: https://github.com/oasisprotocol/docs --- ## Writing dApps on Emerald This tutorial will show you how to set up dApp development environment for Emerald to be able to write and deploy dApps on Oasis Emerald. Oasis Emerald exposes an **EVM-compatible** interface so writing dApps isn't much different compared to the original Ethereum Network! We will walk you through the Hardhat configuration. Those who prefer a simpler web-only interface can also use the Remix IDE. Check out our general [Remix guide]. Just remember to use the Emerald [networks] when selecting *Inject Web3* environment and connecting to MetaMask. [Remix guide]: ../../remix.md [networks]: ./network.mdx#rpc-endpoints ## Oasis Consensus Layer and Emerald ParaTime Oasis Network consists of the consensus layer and a number of Layer 2 chains called the ParaTimes (to learn more, check the [Oasis Network Overview][overview] chapter). Emerald is a ParaTime which implements the Ethereum Virtual Machine (EVM). The minimum and also expected block time in Emerald is **6 seconds**. Any Emerald transaction will require at least this amount of time to be executed. The native Oasis addresses are Bech32-encoded (e.g. `oasis1qpupfu7e2n6pkezeaw0yhj8mcem8anj64ytrayne`) while Emerald supports both the Bech32-encoded and the Ethereum-compatible hex-encoded addresses (e.g. `0x90adE3B7065fa715c7a150313877dF1d33e777D5`). The underlying algorithm for signing the transactions is [Ed25519] on the Consensus layer and both [Ed25519] and [ECDSA] in Emerald. The Ed25519 scheme is used mostly by the Emerald compute nodes for managing their computation rewards. For signing your dApp-related transactions on Emerald you will probably want to use ECDSA since this is the de facto scheme supported by Ethereum wallets and libraries. Finally, the ParaTimes are not allowed to directly access your tokens stored in Consensus layer addresses. You will need to **deposit** tokens from your consensus account to Emerald. Consult the [How to transfer ROSE into Emerald ParaTime][how-to-deposit-rose] chapter to learn more. [overview]: ../../../../general/oasis-network/README.mdx [Ed25519]: https://en.wikipedia.org/wiki/EdDSA#Ed25519 [ECDSA]: https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm [how-to-deposit-rose]: ../../../../general/manage-tokens/README.mdx [Testnet faucet]: https://faucet.testnet.oasis.io/ ## Testnet and Mainnet The Oasis Network currently has, similar to some other blockchains, two major public deployments: the [Mainnet] and the [Testnet]. The native tokens are called ROSE and TEST respectively. Each deployment has its own state, a different set of validators and ParaTimes. The state of the Mainnet is considered immutable for indefinite time, while the data on the Testnet can be subject to wipe in the future. 
The Emerald ParaTime is deployed similarly: the [Emerald Mainnet] is deployed on the Oasis Mainnet Network while the [Emerald Testnet] on the Oasis Testnet Network. The Emerald state on the Mainnet is stable. Testnet, apart from running the unstable version of the code and being prone to bugs, can have the state deliberately wiped either on the Emerald ParaTime layer or on the Oasis Testnet Network level. Never deploy production service on Testnet Because Testnet state can be wiped in the future, you should **never deploy a production service on the Testnet**! For testing purposes, visit our [Testnet faucet] to obtain some TEST which you can then use on the Emerald Testnet to pay for gas fees. The faucet supports sending TEST both to your Consensus layer address or to your address inside the ParaTime. [Mainnet]: ../../../../node/network/mainnet.md [Testnet]: ../../../../node/network/testnet.md [Emerald Mainnet]: ./network.mdx [Emerald Testnet]: ./network.mdx ## Localnet For development and testing, you can run a local [instance][localnet] of the entire Emerald stack. [localnet]: ../../localnet.mdx ## Create dApp on Emerald with Hardhat Let's begin writing our dApp with Hardhat. We will lay out a base for a modern dApp including TypeScript bindings for tests and later for the frontend application. First, make sure you installed [Node.js] and that you have `npm` and `npx` readily available. Then run: ``` npx hardhat init ``` Select the `Create an advanced sample project that uses TypeScript` option and enter the root directory for your project. You can leave other options as default. After a while Hardhat will finish downloading the dependencies and create a simple greeter dApp. To compile, deploy and test the smart contract of your sample project locally, move to your project directory and type: ``` $ npx hardhat compile Compiling 2 files with 0.8.4 Generating typings for: 2 artifacts in dir: typechain for target: ethers-v5 Successfully generated 5 typings! Compilation finished successfully $ npx hardhat test No need to generate any newer typings. Greeter Deploying an Emerald Greeter with greeting: Hello, world! Changing greeting from 'Hello, world!' to 'Hola, mundo!' ✓ Should return the new greeting once it's changed (613ms) 1 passing (614ms) ``` Hardhat already comes with a built-in EVM which is spun up from scratch each time we call `hardhat test` without parameters. It populates 20 accounts with ETH and registers them to the [ethers.js] instance used in the tests. Next, let's look at how to configure Hardhat for Emerald. For convenience, we assign the `PRIVATE_KEY` environment variable a hex-encoded private key of your Emerald wallet containing tokens to pay for gas fees. If you are running [localnet], use any of the five generated private keys. ``` export PRIVATE_KEY="YOUR_0x_EMERALD_PRIVATE_KEY" ``` Next, we configure three networks: `emerald_local`, `emerald_testnet`, and `emerald_mainnet`. Open `hardhat.config.ts` and replace the `networks` field to match the following: ``` networks: { emerald_local: { url: "http://localhost:8545", accounts: process.env.PRIVATE_KEY !== undefined ? [process.env.PRIVATE_KEY] : [], }, emerald_testnet: { url: "https://testnet.emerald.oasis.io", accounts: process.env.PRIVATE_KEY !== undefined ? [process.env.PRIVATE_KEY] : [], }, emerald_mainnet: { url: "https://emerald.oasis.io", accounts: process.env.PRIVATE_KEY !== undefined ? [process.env.PRIVATE_KEY] : [], }, }, ``` Next, we increase the default timeout for mocha tests from 20 seconds to 60 seconds. 
This step is not needed, if you will test your contracts solely on [localnet], but is required for Testnet to avoid timeouts. Append the following block to the `config` object: ``` mocha: { timeout: 60000 } ``` `geth --dev` and `ganache-cli` tools use a so-called "instant mining" mode. In this mode, a new block is mined immediately when a new transaction occurs in the mempool. Neither Oasis Mainnet and Testnet Networks nor [localnet] support such mode and the new block will always be mined at least after the 1 second block time elapsed. Now deploy the contract to the [localnet] Docker container by selecting the `emerald_local` network we configured above and run the tests: ``` $ npx hardhat run scripts/deploy.ts --network emerald_local No need to generate any newer typings. Greeter deployed to: 0x4e1de2f6cf4e57a8f55b4a5dd1fce770db734962 $ npx hardhat test --network emerald_local No need to generate any newer typings. Greeter ✓ Should return the new greeting once it's changed (6017ms) 1 passing (6s) ``` Next, you can try deploying the contract to the Testnet. Temporarily replace your `PRIVATE_KEY` environment variable with your Testnet one and deploy the contract by using the `emerald_testnet` network. Similarly, you can also run the tests. ``` $ PRIVATE_KEY="0xYOUR_TESTNET_PRIVATE_KEY" npx hardhat run scripts/deploy.ts --network emerald_testnet No need to generate any newer typings. Greeter deployed to: 0x735df9F166a2715bCA3D3A66B119CBef95a0D129 $ PRIVATE_KEY="0xYOUR_TESTNET_PRIVATE_KEY" npx hardhat test --network emerald_testnet No need to generate any newer typings. Greeter ✓ Should return the new greeting once it's changed (21016ms) 1 passing (6s) ``` Congratulations, you have just deployed your first smart contract to the public Emerald Testnet Network! If you are unsure, whether your contract was successfully deployed, you can monitor the transactions on the Emerald block explorer ([Mainnet][mainnet-explorer], [Testnet][testnet-explorer]). This tool indexes all Emerald accounts, blocks, transactions and even offers a neat user interface for browsing ETH-specifics like the ERC20 tokens and the ERC721 NFTs. [Image: Emerald Block Explorer showing the latest transactions] [Image: Emerald Block Explorer showing our account 0x90adE3B7065fa715c7a150313877dF1d33e777D5 used for deploying the smart contract] Finally, by selecting the `emerald_mainnet` network and the corresponding private key, we can deploy the contract on the Mainnet: ``` $ PRIVATE_KEY="0xYOUR_MAINNET_PRIVATE_KEY" npx hardhat run scripts/deploy.ts --network emerald_mainnet No need to generate any newer typings. Greeter deployed to: 0x6e8e9e0DBCa4EF4a65eBCBe4032e7C2a6fb7C623 ``` [Node.js]: https://nodejs.org [ethers.js]: https://docs.ethers.io/v5/ ## Troubleshooting ### Deployment of my contract timed out on Testnet or Mainnet Emerald validators, similar to Ethereum ones, order the execution of transactions by gas price. When deploying a contract and the deployment times out, first wait another few rounds to make sure that the contract will not be deployed eventually. Next, check that your `gasPrice` **is at least 10 nROSE** which is a minimum required gas price on Emerald. This value should already be propagated automatically by the web3 endpoint, but your deployment configuration might have ignored it. Finally, consider increasing the `gasPrice` parameter in the Hardhat config file by a fraction (e.g. 10% or 20%). 
This will require more ROSE from your wallet to deploy the contract, but you will also increase the chance of your transaction being included in the block. ### Execution of my contract failed. How do I debug what went wrong? If you are using Testnet or Mainnet, try to debug your transaction by finding it on the Emerald block explorer ([Mainnet][mainnet-explorer], [Testnet][testnet-explorer]): [Image: Emerald block explorer showing a failed transaction] In some cases, the transaction result on Emerald block explorer might be stuck at `Error: (Awaiting internal transactions for reason)`. In this case or in case of other Consensus layer ↔ ParaTime issues, try to find your Emerald transaction on the Oasis Scan ([Mainnet][mainnet-oasisscan], [Testnet][testnet-oasisscan]) which is primarily a Consensus layer explorer, but offers some introspection into ParaTime transactions as well. Once you find your failed Emerald transaction, the `Status` field should contain a more verbose error description, for example: [Image: Oasis Scan showing the Out of gas error for a transaction on Emerald] ## See also [mainnet-explorer]: https://explorer.oasis.io/mainnet/emerald [testnet-explorer]: https://explorer.oasis.io/testnet/emerald [mainnet-oasisscan]: https://oasisscan.com [testnet-oasisscan]: https://testnet.oasisscan.com --- ## Consensus network information ## RPC Endpoints The RPC endpoint is a **point of trust**. Beside rate limiting, it can also perform censorship or even man-in-the-middle attack. If you have security considerations, we strongly recommend that you [run your own client node][non-validator-node], [non-validator-node]: ../../../node/run-your-node/non-validator-node.mdx Most dApp developers will build dApps on the ParaTime layer (the *compute* layer). For Sapphire and Emerald which are EVM-compatible chains, those dApps connect directly to an [EVM-compatible Web3 endpoint][web3]. However, if you are building a dApp for Cipher or the one that needs to perform consensus operations such as the consensus-layer token transfers, governance transactions, cross-chain ParaTime deposits and withdrawals and similar, you will need to connect to the one of the endpoints speaking [Oasis gRPC][grpc]. 
Public gRPC endpoints (in alphabetic order): | Provider | Mainnet URL | Testnet URL | |----------|---------------------|-----------------------------| | [Oasis] | `grpc.oasis.io:443` | `testnet.grpc.oasis.io:443` | [Oasis]: https://oasis.net [web3]: ../../../node/web3.mdx [grpc]: ../../../node/grpc.mdx ## Block Explorers | Name (Provider) | Mainnet URL | Testnet URL | |--------------------------|-----------------------------------------------|---------------------------------------------| | Oasis Explorer ([Oasis]) | https://explorer.oasis.io/mainnet/consensus | https://explorer.oasis.io/testnet/consensus | | Oasis Scan ([Bit Cat]) | https://www.oasisscan.com | https://testnet.oasisscan.com | [Bit Cat]: https://www.bitcat365.com/ ## Indexers | Name (Provider) | Mainnet URL | Testnet URL | Documentation | |------------------------|----------------------------------------|----------------------------------------|--------------------------------------------| | Oasis Nexus ([Oasis]) | `https://nexus.oasis.io/v1` | `https://testnet.nexus.oasis.io/v1` | [API][Nexus-docs] | | Oasis Scan ([Bit Cat]) | `https://api.oasisscan.com/v2/mainnet` | `https://api.oasisscan.com/v2/testnet` | [API][OasisScan-docs] | [Nexus-docs]: https://nexus.oasis.io/v1/spec/v1.html [OasisScan-docs]: https://api.oasisscan.com/v2/swagger/ ## Rosetta Endpoints | Provider | Mainnet URL | Testnet URL | |----------|-------------------------------------------|-------------------------------------------| | [Oasis] | `https://rosetta.oasis.io/api/mainnet/v1` | `https://rosetta.oasis.io/api/testnet/v1` | If you are running your own Oasis client node endpoint, a block explorer, an indexer, or the Rosetta gateway and wish to be added to these docs, open an issue at [github.com/oasisprotocol/docs]. [github.com/oasisprotocol/docs]: https://github.com/oasisprotocol/docs/issues --- ## Remix [Remix] is a web-based Integrated Development Environment (IDE) designed for developing, testing, and deploying smart contracts on the Ethereum Network. This guide will show you how to use Remix in conjunction with MetaMask on the Sapphire Network. For comprehensive details about Remix's features, consult the [Remix documentation]. ## Prerequisites 1. Install the [MetaMask browser extension][metamask] 2. Configure your networks: - Add Sapphire Mainnet or Testnet to MetaMask using the `Add to MetaMask` button on our [network page] - (Optional) Configure local network settings if you're using the Sapphire [localnet] ## Getting Started When you first launch Remix, it creates a default project structure. Navigate to the `contracts` folder and open `1_Storage.sol` to begin. [Image: The initial example project in Remix - Ethereum IDE] ## Contract Compilation 1. Navigate to the **Solidity Compiler** tab 2. Configure the compiler settings: - Compiler version: **`0.8.24`** - EVM version: **`paris`** (found under Advanced Configuration) 3. Click `Compile 1_Storage.sol` Compiler Version The Sapphire uses the [Rust Ethereum EVM][rust-evm]. This implementation is compatible with Solidity versions up to **0.8.24**. However, it does not yet support some transaction types introduced in Solidity **0.8.25**, such as those mentioned in [rust-ethereum/evm#277][revm-277], pending release of the next version. EVM Version EVM versions after **paris** (shanghai and upwards) include the PUSH0 opcode which isn't supported on Sapphire. 
[rust-evm]: https://github.com/rust-ethereum/evm [revm-277]: https://github.com/rust-ethereum/evm/issues/277 [Image: Solidity compiler tab] ## Contract Deployment 1. Open the **Deploy and Run Transactions** tab. 2. Select `Injected Web3` as environment. 3. Approve the account connection to Remix in MetaMask. [Image: MetaMask connection confirmation] 4. Click `Deploy`. 5. Review and confirm the transaction in MetaMask. [Image: Metamask transaction confirmation] If everything goes well, your contract will be deployed using the selected account in MetaMask on the corresponding Sapphire network. ## Working with Confidential Features Note that Remix operates without a Sapphire client, meaning transactions and queries are unencrypted and unsigned by default. To make use of Sapphire's confidential features, refer to our [Quickstart Tutorial]. [Quickstart Tutorial]: https://github.com/oasisprotocol/docs/blob/main/docs/build/sapphire/quickstart.mdx Should you have any questions, do not hesitate to share them with us on the [#dev-central Discord channel][discord]. [localnet]: ./localnet.mdx [network page]: https://github.com/oasisprotocol/docs/blob/main/docs/build/sapphire/network.mdx#rpc-endpoints [Remix]: https://remix.ethereum.org [Remix documentation]: https://remix-ide.readthedocs.io/en/latest/ [metamask]: ../../general/manage-tokens/README.mdx#metamask [discord]: https://oasis.io/discord --- ## Contract Verification [Sourcify] is the preferred service for the [verification of smart contracts][ethereum-contract-verify] deployed on Sapphire. Make sure you have the **address of each deployed contract** available (your deployment scripts should report those) and the **contract's JSON metadata file** generated when compiling contracts (Hardhat stores it inside the `artifacts/build-info` folder and names it as a 32-digit hex number). If your project contains multiple contracts, you will need to verify each contract separately. Contract deployment encryption **Do not deploy your contract with an encrypted contract deployment transaction if you want to verify it.** For example, if your `hardhat.config.ts` or deployment script contains `import '@oasisprotocol/sapphire-hardhat'` or `import '@oasisprotocol/sapphire-paratime'` lines at the beginning, you should comment those out for the deployment. Verification services will try to match the contract deployment transaction code with the one in the provided contract's metadata. Because the transaction was encrypted with an ephemeral ParaTime key, the verification service will not be able to decrypt it. Some services may extract the contract's bytecode from the chain directly by calling the `eth_getCode` RPC, but this will not work correctly for contracts with immutable variables. ## Verification with Hardhat If you use Hardhat to deploy your contracts, consider using the [hardhat-verify] plugin. To configure it, add the following to your `hardhat.config.ts` file: ```js title="hardhat.config.ts" etherscan: { // Enabled by default (not supported on Sapphire) enabled: false }, sourcify: { // Disabled by default // Doesn't need an API key enabled: true } ``` Now you can use the `verify` task: ```shell pnpm hardhat verify --network sapphire-testnet DEPLOYED_CONTRACT_ADDRESS "Constructor argument 1" ``` [hardhat-verify]: https://hardhat.org/hardhat-runner/plugins/nomicfoundation-hardhat-verify ## Verification with Foundry [Foundry] natively supports Sourcify verification. To use **Sourcify** as a provider, specify it with the `--verifier` option.
Example: ```shell forge verify-contract
src/MyToken.sol:MyToken --verifier sourcify ``` To see all available options and more examples, visit the **[verify-contract page of Foundry][foundry-verify]** or the **[sourcify docs]**. [Foundry]: https://book.getfoundry.sh [foundry-verify]: https://book.getfoundry.sh/reference/forge/forge-verify-contract [sourcify docs]: https://docs.sourcify.dev/docs/how-to-verify/#foundry ## Verification with Sourcify UI To manually verify a contract deployed on Sapphire Mainnet or Testnet on Sourcify: 1. Visit the [Sourcify] website and hit the "VERIFY CONTRACT" button. [Image: Sourcify website] 2. Select the "Oasis Sapphire" or "Oasis Sapphire Testnet" chain for Mainnet or Testnet accordingly and enter the address of the specific contract. Then, select the "Solidity" language, either "Hardhat" or "Foundry", and toggle the "Upload build-info" file. [Image: Sourcify: Upload metadata JSON file] 3. Under the "File Upload" section go ahead and upload the contract's build-info JSON file that bundles your contract metadata. This file should be located under `artifacts/build-info` on Hardhat or `out/build-info` on Foundry once you compile the contract. Sourcify will then unpack the metadata and collect the bundled contracts. Pick the contract name you want to verify from the "Contract Identifier" dropdown below. [Image: Sourcify: File upload] Store your metadata files For production deployments, it is generally a good idea to **archive your contract metadata JSON file** since it is not only useful for verification, but also contains a copy of all the source files, the produced bytecode, the ABI, compiler settings and other relevant contract-related settings that may be useful in the future. Sourcify will store the metadata file for you and will even make it available via IPFS, but it is still a good idea to store it yourself. 4. Finally, click on the "Verify Contract" button to submit the verification data. In a few moments the job should succeed and your contract is verified! [Image: Sourcify: Verify contract] In case of a *Partial match*, the contract's metadata JSON differed from the one used for deployment although the compiled contract bytecode matched. Make sure the source code `.sol` file of the contract is the same as the one used during the deployment (including the comments, variable names and source code file names) and that you use the same versions of Hardhat and the solc compiler. You can also explore all verification methods on Sourcify by reading the [official Sourcify contract verification instructions][sourcify-contract-verify]. [Sourcify]: https://sourcify.dev/ [sourcify-contract-verify]: https://docs.sourcify.dev/docs/how-to-verify/ [ethereum-contract-verify]: https://ethereum.org/en/developers/docs/smart-contracts/verifying/ ## Troubleshooting ### Etherscan error with hardhat-verify - **Cause**: hardhat-verify tries to verify a contract on Etherscan for an unsupported network. - **Solution**: Disable Etherscan verification with ``` etherscan: { // Enabled by default (not supported on Sapphire) enabled: false }, ``` --- ## Getting Started Use Oasis to build verifiable, auditable applications powered by TEEs running on a permissionless network of nodes. No censorship, no hidden costs, no central authority! Get your hands dirty quickly by reviewing [top use cases](/build/use-cases) and considering one that fits your needs. Our tutorials will help you get your app up and running in no time. Our [Runtime Offchain Logic (ROFL)](/build/rofl/) enables you to build secure applications running in a trusted environment (TEE).
This is ideal for trusted oracles, AI agents, verifiable compute tasks such as AI training, or game servers. ROFL can seamlessly communicate with [Oasis Sapphire](/build/sapphire/)—an EVM-compatible L1 blockchain with built-in contract state and end-to-end transaction encryption. The Oasis team also prepared a set of libraries called the [Oasis Privacy Layer](/build/opl/) to bridge existing dApps running on other chains with Sapphire's unique confidentiality, as well as other [tools](/build/tools/). Learn the fundamentals of Oasis architecture, wallets, and unique privacy features. Connect with developers, run infrastructure, and contribute to the future of Oasis. If you want to run your own Oasis node, this part will provide you with guides on the current Mainnet and Testnet network parameters and how to set up your node, be it a validator node, a node running a ParaTime, or just a simple client node for your server to submit transactions and perform queries on the network. Whether you want to contribute your code to the core components of the Oasis Network or just learn more about the Oasis consensus layer and other core components, this is the part for you. Additions or changes to the interoperable Oasis network components are always made with consensus. Similar to Ethereum's ERC/EIP mechanism, Oasis follows formal Architectural Decision Records (ADRs), which are first proposed, voted on, and finally implemented if accepted. --- ## ADR 0001: Multiple Roots Under the Tendermint Application Hash ## Component Oasis Core ## Changelog - 2020-08-06: Added consequence for state checkpoints - 2020-07-28: Initial version ## Status Accepted ## Context Currently the Tendermint ABCI application hash is equal to the consensus state root for a specific height. In order to allow additional uses, like proving to light clients that specific events have been emitted in a block, we should make the application hash derivable from potentially different kinds of roots. ## Decision The proposed design is to derive the Tendermint ABCI application hash by hashing all the different roots as follows: ``` AppHash := H(Context || Root_0 || ... || Root_n) ``` Where: - `H` is the SHA-512/256 hash function. - `Context` is the string `oasis-core/tendermint: roots`. - `Root_i` is the fixed-size SHA-512/256 root hash of the specified root. Currently, the only root would be the existing consensus state root at index 0. To implement this change the following modifications would be required: - Update the ABCI multiplexer's `Commit` method to calculate and return the application hash using the scheme specified above. - Update the consensus API `SignedHeader` response to include the `UntrustedStateRoot` (the untrusted prefix denotes that the user must verify that the state root corresponds to `AppHash` provided in the signed header in `Meta`). When new roots are added in the future, both `Block` and `SignedHeader` will need to include them all. ## Alternatives The proposed design is simple and assumes that the number of additional roots is small and thus can always be included in signed headers. An alternative scheme would be to Merkelize the roots in a binary Merkle tree (like the one used for our MKVS), but this would add complexity and likely require more round trips for common use cases. ## Consequences ### Positive - This would open the path to including different kinds of provable data (e.g., in addition to state) as part of any consensus-layer block.
### Negative - As this changes the application hash, this would be a breaking change for the consensus layer. - Since we are simply hashing all the roots together, all of them need to be included in the signed headers returned to light clients. ### Neutral - Consensus state checkpoints will need to contain data for multiple roots. ## References - [tendermint#5134](https://github.com/tendermint/tendermint/pull/5134) --- ## ADR 0002: Go Modules Compatible Git Tags ## Component Oasis Core ## Changelog - 2020-09-04: Initial version ## Status Accepted ## Context Projects that depend on [Oasis Core's Go module], i.e. `github.com/oasisprotocol/oasis-core/go`, need a way to depend on its particular version. Go Modules only allow [Semantic Versioning 2.0.0] for [versioning of the modules][go-mod-ver] which makes it hard to work with [Oasis Core's CalVer (calendar versioning) scheme]. The currently used scheme for Go Modules compatible Git tags is: ``` go/v0.YY.MINOR[.MICRO] ``` where: - `YY` represents the short year (e.g. `19`, `20`, `21`, ...), - `MINOR` represents the minor version starting with zero (e.g. `0`, `1`, `2`, `3`, ...), - `MICRO` represents the final number in the version (sometimes referred to as the "patch" segment) (e.g. `0`, `1`, `2`, `3`, ...). If the `MICRO` version is `0`, it is omitted. It turns out this only works for Oasis Core versions with the `MICRO` version of `0` since the Go Modules compatible Git tag omits the `.MICRO` part and is thus compatible with [Go Modules versioning requirements][go-mod-ver]. [Oasis Core's Go module]: https://pkg.go.dev/mod/github.com/oasisprotocol/oasis-core/go [Semantic Versioning 2.0.0]: https://semver.org/spec/v2.0.0.html [go-mod-ver]: https://golang.org/ref/mod#versions [Oasis Core's CalVer (calendar versioning) scheme]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/versioning.md ## Decision The proposed design is to tag Oasis Core releases with the following Go Modules compatible Git tags (in addition to the ordinary Git tags): ``` go/v0.YY0MINOR.MICRO ``` where: - `YY` represents the short year (e.g. `19`, `20`, `21`, ...), - `0MINOR` represents the zero-padded minor version starting with zero (e.g. `00`, `01`, `02`, ..., `10`, `11`, ...), - `MICRO` represents the final number in the version (sometimes referred to as the "patch" segment) (e.g. `0`, `1`, `2`, `3`, ...). Here are some examples of how the ordinary and the corresponding Go Modules compatible Git tags would look like: | Version | Ordinary Git tag | Go Modules compatible Git tag | |:-------------:|:----------------:|:------------------------------:| | 20.9 | `v20.9` | `go/v0.2009.0` | | 20.9.1 | `v20.9.1` | `go/v0.2009.1` | | 20.9.2 | `v20.9.2` | `go/v0.2009.2` | | 20.10 | `v20.10` | `go/v0.2010.0` | | 20.10.1 | `v20.10.1` | `go/v0.2010.1` | | 20.10.2 | `v20.10.2` | `go/v0.2010.2` | | ... | ... | ... | | 21.0 | `v21.0` | `go/v0.2100.0` | | 21.0.1 | `v21.0.1` | `go/v0.2100.1` | | 21.0.2 | `v21.0.2` | `go/v0.2100.2` | | 21.1 | `v21.1` | `go/v0.2101.0` | | 21.1.1 | `v21.1.1` | `go/v0.2101.1` | | 21.1.2 | `v21.1.2` | `go/v0.2101.2` | | ... | ... | ... | Using such a scheme makes the version of the Oasis Core Go module fully compatible with the [Go Modules versioning requirements][go-mod-ver] and thus enables users to use the familiar Go tools to check for new module versions, i.e. `go list -m -u all`, or to obtain and require a module, i.e. `go get github.com/oasisprotocol/oasis-core/go@latest`. 
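To make the mapping concrete, here is a small illustrative Go helper (not part of Oasis Core) that derives the Go Modules compatible tag from an ordinary version string:

```golang
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// goModTag converts an ordinary Oasis Core version such as "21.0.1" or "20.10"
// into the Go Modules compatible Git tag described above, e.g. "go/v0.2100.1".
func goModTag(version string) (string, error) {
	parts := strings.Split(version, ".")
	if len(parts) < 2 || len(parts) > 3 {
		return "", fmt.Errorf("malformed version: %s", version)
	}
	year, err := strconv.Atoi(parts[0])
	if err != nil {
		return "", err
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return "", err
	}
	micro := 0
	if len(parts) == 3 {
		if micro, err = strconv.Atoi(parts[2]); err != nil {
			return "", err
		}
	}
	// The minor version is zero-padded to two digits and appended to the year.
	return fmt.Sprintf("go/v0.%d%02d.%d", year, minor, micro), nil
}

func main() {
	for _, v := range []string{"20.9", "20.10.1", "21.0.1"} {
		tag, _ := goModTag(v)
		fmt.Printf("%-8s -> %s\n", v, tag) // e.g. 20.9 -> go/v0.2009.0
	}
}
```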
## Alternatives An alternative scheme would be to use the following Go Modules compatible Git tags: ``` go/v0.YY.MINOR-MICRO ``` where: - `YY` represents the short year (e.g. `19`, `20`, `21`, ...), - `MINOR` represents the minor version starting with zero (e.g. `0`, `1`, `2`, `3`, ...), - `MICRO` represents the final number in the version (sometimes referred to as the "patch" segment) (e.g. `0`, `1`, `2`, `3`, ...). Using the `-MICRO` suffix would make Go treat all such versions as a [Go Modules pre-release version]. The consequence of that would be that all Go tools would treat such versions as pre-releases. For example, let's say the Oasis Core Go module had the following Go version tags: - `go/v0.20.9` - `go/v0.20.10-0` - `go/v0.20.10-1` and a module that depends on the Oasis Core Go module currently requires version `v0.20.9`. One downside would be that the `go list -m -u all` command would not notify the user that an update, i.e. version `v0.20.10-1`, is available. The second downside would be that the `go get github.com/oasisprotocol/oasis-core/go@latest` command would treat version `v0.20.9` as the latest version and download and require this version of the Oasis Core Go module instead of the real latest version, `v0.20.10-1` in this example. [Go Modules pre-release version]: https://golang.org/ref/mod#glos-pre-release-version ## Consequences ### Positive - This allows users to depend on a bugfix/patch release of the Oasis Core Go module in a [Go Modules versioning requirements][go-mod-ver] compatible way, i.e. without having to resort to pinning the requirement to a particular Oasis Core commit. ### Negative - The connection between an ordinary Git tag and a Go Modules compatible Git tag is not very obvious. For example, it might not be immediately obvious that `v21.0` and `go/v0.2100.0` refer to the same thing. - Using a zero-padded minor version fixed to two characters would limit the number of releases in a year to 100 releases. ## References - [BadgerDB] uses a [similar scheme for tagging Go Modules compatible Git tags] for their CalVer versioning scheme. [BadgerDB]: https://github.com/dgraph-io/badger [similar scheme for tagging Go Modules compatible Git tags]: https://github.com/dgraph-io/badger/releases --- ## ADR 0003: Consensus/Runtime Token Transfer ## Component Oasis Core ## Changelog - 2020-09-16: Beneficiary allowance, add message results - 2020-09-08: Initial draft ## Status Accepted ## Context Currently each runtime can define its own token (or none at all) and there is no mechanism that would support transferring consensus layer tokens into a runtime and back out. Introducing such a mechanism would allow consensus layer tokens to be used inside runtimes for various functions. This ADR proposes such a mechanism. ## Decision On a high level, this proposal adds support for consensus/runtime token transfers as follows: - **Each staking account can set an allowance for beneficiaries.** Each staking account can set an allowance, a maximum amount a beneficiary can withdraw from the given account. Beneficiaries are identified by their address. This is similar to the approve/transferFrom calls defined by the [ERC-20 Token Standard]. Such functionality was previously present but was removed in [oasis-core#2021]. - **Each runtime itself has an account in the consensus layer.** This account contains the balance of tokens which are managed exclusively by the runtime and do not belong to any specific regular account in the consensus layer.
It is not possible to transfer directly into a runtime account, and doing so may result in funds being locked without a way to reclaim them. The only way to perform any operations on runtime accounts is through the use of messages emitted by the runtime during each round. These messages are subject to discrepancy detection and instruct the consensus layer what to do. Combined, the two mechanisms enable account holders to set an allowance for the benefit of runtimes so that the runtimes can withdraw up to the allowed amount from the account holder's address. ### Addresses This proposal introduces the following new address context for the runtime accounts: ``` oasis-core/address: runtime ``` The initial version for the address context is `0`. To derive the address, the standard address derivation scheme is used, with the runtime's 32-byte identifier used as the `data` part. ### State This proposal introduces/updates the following consensus state in the staking module: #### General Accounts The general account data structure is modified to include an additional field storing the allowances as follows: ```golang type GeneralAccount struct { // ... existing fields omitted ... Allowances map[Address]quantity.Quantity `json:"allowances,omitempty"` } ``` ### Transaction Methods This proposal adds the following new transaction methods in the staking module: #### Allow Allow enables an account holder to set an allowance for a beneficiary. **Method name:** ``` staking.Allow ``` **Body:** ```golang type Allow struct { Beneficiary Address `json:"beneficiary"` Negative bool `json:"negative,omitempty"` AmountChange quantity.Quantity `json:"amount_change"` } ``` **Fields:** - `beneficiary` specifies the beneficiary account address. - `amount_change` specifies the absolute value of the amount of base units to change the allowance for. - `negative` specifies whether the `amount_change` should be subtracted instead of added. The transaction signer implicitly specifies the general account. Upon executing the allow transaction, the following actions are performed: - If either the `disable_transfers` staking consensus parameter is set to `true` or the `max_allowances` staking consensus parameter is set to zero, the method fails with `ErrForbidden`. - It is checked whether either the transaction signer address or the `beneficiary` address is reserved. If either is reserved, the method fails with `ErrForbidden`. - The address specified by `beneficiary` is compared with the transaction signer address. If the addresses are the same, the method fails with `ErrInvalidArgument`. - The account indicated by the signer is loaded. - If the allow would create a new allowance and the maximum number of allowances for an account has been reached, the method fails with `ErrTooManyAllowances`. - The set of allowances is updated so that the allowance is updated as specified by `amount_change`/`negative`. In case the change would cause the allowance to be equal to zero or negative, the allowance is removed. - The account is saved.
- The corresponding `AllowanceChangeEvent` is emitted with the following structure: ```golang type AllowanceChangeEvent struct { Owner Address `json:"owner"` Beneficiary Address `json:"beneficiary"` Allowance quantity.Quantity `json:"allowance"` Negative bool `json:"negative,omitempty"` AmountChange quantity.Quantity `json:"amount_change"` } ``` Where `allowance` contains the new total allowance, the `amount_change` contains the absolute amount the allowance has changed for and `negative` specifies whether the allowance has been reduced rather than increased. The event is emitted even if the new allowance is zero. #### Withdraw Withdraw enables a beneficiary to withdraw from the given account. **Method name:** ``` staking.Withdraw ``` **Body:** ```golang type Withdraw struct { From Address `json:"from"` Amount quantity.Quantity `json:"amount"` } ``` **Fields:** - `from` specifies the account address to withdraw from. - `amount` specifies the amount of base units to withdraw. The transaction signer implicitly specifies the destination general account. Upon executing the withdrawal the following actions are performed: - If either the `disable_transfers` staking consensus parameter is set to `true` or the `max_allowances` staking consensus parameter is set to zero, the method fails with `ErrForbidden`. - It is checked whether either the transaction signer address or the `from` address are reserved. If any are reserved, the method fails with `ErrForbidden`. - Address specified by `from` is compared with the transaction signer address. If the addresses are the same, the method fails with `ErrInvalidArgument`. - The source account indicated by `from` is loaded. - The destination account indicated by the transaction signer is loaded. - `amount` is deducted from the corresponding allowance in the source account. If this would cause the allowance to go negative, the method fails with `ErrForbidden`. - `amount` is deducted from the source general account balance. If this would cause the balance to go negative, the method fails with `ErrInsufficientBalance`. - `amount` is added to the destination general account balance. - Both source and destination accounts are saved. - The corresponding `TransferEvent` is emitted. - The corresponding `AllowanceChangeEvent` is emitted with the updated allowance. ### Queries This proposal adds the following new query methods in the staking module by updating the `staking.Backend` interface as follows: ```golang type Backend interface { // ... existing methods omitted ... // Allowance looks up the allowance for the given owner/beneficiary combination. Allowance(ctx context.Context, query *AllowanceQuery) (*quantity.Quantity, error) } // AllowanceQuery is an allowance query. type AllowanceQuery struct { Height int64 `json:"height"` Owner Address `json:"owner"` Beneficiary Address `json:"beneficiary"` } ``` ### Messages Since this is the first proposal that introduces a new runtime message type that can be emitted from a runtime during a round, it also defines some general properties of runtime messages and the dispatch mechanism: - Each message has an associated gas cost that needs to be paid by the submitter (e.g. as part of the `roothash.ExecutorCommit` method call). The gas cost is split among the committee members. - There is a maximum number of messages that can be emitted by a runtime during a given round. The limit is defined both globally (e.g. a roothash consensus parameter) and per-runtime (which needs to be equal to or lower than the global limit). 
- Messages are serialized using a sum type describing all possible messages, where each message type is assigned a _field name_: ```golang type Message struct { Message1 *Message1 `json:"message1,omitempty"` Message2 *Message2 `json:"message2,omitempty"` // ... } ``` - All messages are versioned by embedding the `cbor.Versioned` structure which provides a single `uint16` field `v`. - A change is made to how messages are included in commitments, to reduce the size of submitted transactions. The `ComputeResultsHeader` is changed so that the `Messages` field is replaced with a `MessagesHash` field containing a hash of the CBOR-encoded messages emitted by the runtime. At the same time `ComputeBody` is changed to include an additional field `Messages` as follows: ```golang type ComputeBody struct { // ... existing fields omitted ... Messages []*block.Message `json:"messages,omitempty"` } ``` The `Messages` field must only be populated in the commitment by the transaction scheduler and must match the `MessagesHash`. - If any of the included messages is deemed _malformed_, the round fails and the runtime state is not updated. - In order to support messages that fail to execute, a new roothash event is emitted for each executed message: ```golang type MessageEvent struct { Index uint32 `json:"index,omitempty"` Module string `json:"module,omitempty"` Code uint32 `json:"code,omitempty"` } ``` Where `index` specifies the index of the executed message and `module` and `code` specify the module and error code according to the Oasis Core error encoding convention (note that the usual human-readable message field is not included). This proposal introduces the following runtime messages: #### Staking Method Call The staking method call message enables a runtime to call one of the supported staking module methods. **Field name:** ``` staking ``` **Body:** ```golang type StakingMessage struct { cbor.Versioned Transfer *staking.Transfer `json:"transfer,omitempty"` Withdraw *staking.Withdraw `json:"withdraw,omitempty"` } ``` **Fields:** - `v` must be set to `0`. - `transfer` indicates that the `staking.Transfer` method should be executed. - `withdraw` indicates that the `staking.Withdraw` method should be executed. Exactly one of the supported method fields needs to be non-nil, otherwise the message is considered malformed. ### Consensus Parameters #### Staking This proposal introduces the following new consensus parameters in the staking module: - `max_allowances` (uint32) specifies the maximum number of allowances an account can store. Zero means that allowance functionality is disabled. #### Roothash This proposal introduces the following new consensus parameters in the roothash module: - `max_runtime_messages` (uint32) specifies the global limit on the number of messages that can be emitted in each round by the runtime. The default value of `0` disables the use of runtime messages. ### Runtime Host Protocol This proposal modifies the runtime host protocol as follows: #### Host to Runtime: Initialization The existing `RuntimeInfoRequest` message body is updated to contain a field denoting the consensus backend used by the host and its consensus protocol version as follows: ```golang type RuntimeInfoRequest struct { ConsensusBackend string `json:"consensus_backend"` ConsensusProtocolVersion uint64 `json:"consensus_protocol_version"` // ... existing fields omitted ... } ``` This information can be used by the runtime to ensure that it supports the consensus layer used by the host.
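For illustration only, here is a sketch of the kind of check a runtime could perform on these fields during initialization; the struct below is trimmed to the fields introduced above, and the supported values are made-up assumptions:

```golang
package main

import "fmt"

// Trimmed-down version of the RuntimeInfoRequest fields introduced above.
type RuntimeInfoRequest struct {
	ConsensusBackend         string
	ConsensusProtocolVersion uint64
}

// checkConsensusSupport refuses runtime initialization when the host reports
// an unsupported consensus backend or protocol version. The supported values
// below are illustrative assumptions only.
func checkConsensusSupport(rq *RuntimeInfoRequest) error {
	const (
		supportedBackend         = "tendermint"
		supportedProtocolVersion = uint64(1)
	)
	if rq.ConsensusBackend != supportedBackend {
		return fmt.Errorf("unsupported consensus backend: %q", rq.ConsensusBackend)
	}
	if rq.ConsensusProtocolVersion != supportedProtocolVersion {
		return fmt.Errorf("unsupported consensus protocol version: %d", rq.ConsensusProtocolVersion)
	}
	return nil
}

func main() {
	err := checkConsensusSupport(&RuntimeInfoRequest{ConsensusBackend: "tendermint", ConsensusProtocolVersion: 1})
	fmt.Println("supported:", err == nil)
}
```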
In case the backend and/or protocol version is not supported, the runtime should return an error and terminate. In case the runtime does not interact with the consensus layer it may ignore the consensus layer information. #### Host to Runtime: Transaction Batch Dispatch The existing `RuntimeExecuteTxBatchRequest` and `RuntimeCheckTxBatchRequest` message bodies are updated to include the consensus layer light block at the last finalized round height (specified in `.Block.Header.Round`) and the list of `MessageEvent`s emitted while processing the runtime messages emitted in the previous round as follows: ```golang type RuntimeExecuteTxBatchRequest struct { // ConsensusBlock is the consensus light block at the last finalized round // height (e.g., corresponding to .Block.Header.Round). ConsensusBlock consensus.LightBlock `json:"consensus_block"` // MessageResults are the results of executing messages emitted by the // runtime in the previous round (sorted by .Index). MessageResults []roothash.MessageEvent `json:"message_results,omitempty"` // ... existing fields omitted ... } type RuntimeCheckTxBatchRequest struct { // ConsensusBlock is the consensus light block at the last finalized round // height (e.g., corresponding to .Block.Header.Round). ConsensusBlock consensus.LightBlock `json:"consensus_block"` // ... existing fields omitted ... } ``` The information from the light block can be used to access consensus layer state. #### Runtime to Host: Read-only Storage Access The existing `HostStorageSyncRequest` message body is updated to include an endpoint identifier as follows: ```golang type HostStorageSyncRequest struct { // Endpoint is the storage endpoint to which this request should be routed. Endpoint string `json:"endpoint,omitempty"` // ... existing fields omitted ... } ``` The newly introduced `endpoint` field can take the following values: - `runtime` (or empty string) denotes the runtime state endpoint. The empty value is allowed for backwards compatibility as this was the only endpoint available before this proposal. - `consensus` denotes the consensus state endpoint, providing access to consensus state. ### Rust Runtime Support Library The Rust runtime support library (`oasis-core-runtime`) must be updated to support the updated message structures. Additionally, there needs to be basic support for interpreting the data from the Tendermint consensus layer backend: - Decoding light blocks. - Decoding staking-related state structures. The Tendermint-specific functionality should be part of a separate crate. ### Expected User/Consensus/Runtime Flow **Scenario:** Account holder has 100 tokens in her account in the consensus layer staking ledger and would like to spend 50 tokens to execute an action in runtime X. **Flow:** - Account holder sets an allowance of 50 tokens for runtime X by submitting an allow transaction to the consensus layer. - Account holder submits a runtime transaction that performs some action costing 50 tokens. - Account holder's runtime transaction is executed in runtime X round R: - Runtime X emits a message to transfer 50 tokens from the user's account to the runtime's own account. _As an optimization runtime X can verify current consensus layer state and reject the transaction early to prevent paying for needless consensus layer message processing._ - Runtime X updates its state to indicate a pending transfer of 50 tokens from the user. It uses the index of the emitted message to be able to match the message execution result once it arrives. 
- Runtime X submits commitments to the consensus layer. - When finalizing round R for runtime X, the consensus layer transfers 50 tokens from the account holder's account to the runtime X account. - Corresponding message result event is emitted, indicating success. - When runtime X processes round R+1, the runtime receives the set of emitted message result events. - Runtime X processes message result events, using the index field to match the corresponding pending action and executes whatever action it queued. - In case the message result event would indicate failure, the pending action can be pruned. ## Consequences ### Positive - Consensus layer tokens can be transferred into and out of runtimes, enabling more use cases. - Any tokens must be explicitly made available to the runtime which limits the damage from badly written or malicious runtimes. - Account holders can change the allowance at any time. ### Negative - A badly written or malicious runtime could steal the tokens explicitly deposited into the runtime. This includes any actions by the runtime owner which would modify the runtime's security parameters. - A badly written, malicious or forever suspended runtime can lock tokens in the runtime account forever. This could be mitigated via an unspecified consensus layer governance mechanism. - Account holders may mistakenly transfer tokens directly into a runtime account which may cause such tokens to be locked forever. - Account holders may change the allowance or reduce their account balance right before the runtime round is finalized, causing the emitted messages to fail while the runtime still needs to pay for gas to execute the messages. ### Neutral - The runtime must handle all message results in the next round as otherwise it cannot easily get past messages. ## References - [ERC-20 Token Standard] - [oasis-core#2021] [ERC-20 Token Standard]: https://eips.ethereum.org/EIPS/eip-20 [oasis-core#2021]: https://github.com/oasisprotocol/oasis-core/issues/2021 --- ## ADR 0004: Runtime Governance ## Component Oasis Core ## Changelog - 2020-10-07: Add per-role max node limits, minimum required election pool size - 2020-09-30: Add entity whitelist admission policy max nodes limit - 2020-09-17: Initial draft ## Status Accepted ## Context Currently all runtimes can only be governed by a single entity -- the runtime owner. In this regard governance means being able to update certain fields in the runtime descriptor stored by the consensus layer registry service. On one hand the runtime descriptor contains security-critical parameters and on the other there needs to be a mechanism through which the runtimes can be upgraded (especially so for TEE-based runtimes where a specific runtime binary is enforced via remote attestation mechanisms). This proposal extends runtime governance options and enables a path towards runtimes that can define their own governance mechanisms. This proposal assumes that [ADR 0003] has been adopted and runtimes can have their own accounts in the staking module. ## Decision This proposal takes a simplistic but powerful approach which allows each runtime to choose its governance model upon its first registration. It does so through a newly introduced field in the runtime descriptor which indicates how the runtime descriptor can be updated in the future. ### Runtime Descriptor The runtime descriptor version is bumped to `2`. 
Version `1` descriptors are accepted at genesis and are converted to the new format by assuming the entity governance model, as that is the only option in v1. All new runtime registrations must use the v2 descriptor. #### Governance Model This proposal updates the runtime descriptor by adding fields as follows: ```golang type Runtime struct { // GovernanceModel specifies the runtime governance model. GovernanceModel RuntimeGovernanceModel `json:"governance_model"` // ... existing fields omitted ... } // RuntimeGovernanceModel specifies the runtime governance model. type RuntimeGovernanceModel uint8 const ( GovernanceEntity RuntimeGovernanceModel = 1 GovernanceRuntime RuntimeGovernanceModel = 2 GovernanceConsensus RuntimeGovernanceModel = 3 ) // ... some text serialization methods omitted ... ``` The `governance_model` field can specify one of the following governance models: - **Entity governance (`GovernanceEntity`).** This causes the runtime to behave exactly as before: the runtime owner (indicated by `entity_id` in the runtime descriptor) is the only one who can update the runtime descriptor via `registry.RegisterRuntime` method calls. The runtime owner is also the one that needs to provide the required stake in escrow in order to prevent the runtime from being suspended. As before, note that anyone can delegate the required stake to the runtime owner in order to enable runtime operation (but the owner can always prevent the runtime from operating by performing actions which would cause the stake claims to no longer be satisfied). - **Runtime-defined governance (`GovernanceRuntime`).** In this case the runtime itself is the only one who can update the runtime descriptor by emitting a runtime message. The runtime owner (indicated by `entity_id`) is not able to perform any updates after the initial registration and such attempts must return `ErrForbidden`. The runtime itself is the one that needs to provide the required stake in escrow in order to prevent the runtime from being suspended. This assumes that runtimes can have accounts in the staking module as specified by [ADR 0003]. Note that anyone can delegate the required stake to a runtime in order to enable its operation. - **Consensus layer governance (`GovernanceConsensus`).** In this case only the consensus layer itself can update the runtime descriptor, either through a network upgrade or via a consensus layer governance mechanism not specified by this proposal. Runtimes using this governance model are never suspended and do not need to provide stake in escrow. Runtimes using this governance model cannot be registered/updated via regular registry method calls or runtime messages (doing so must return `ErrForbidden`). Instead, such a runtime can only be registered at genesis, through a network upgrade or via a consensus layer governance mechanism not specified by this proposal. #### Entity Whitelist Admission Policy The entity whitelist admission policy configuration structure is changed to allow specifying the maximum number of nodes that each entity can register under the given runtime for each role. ```golang type EntityWhitelistConfig struct { // MaxNodes is the maximum number of nodes that an entity can register under // the given runtime for a specific role. If the map is empty or absent, the // number of nodes is unlimited. If the map is present and non-empty, the // number of nodes is restricted to the specified maximum (where zero // means no nodes allowed); any missing roles imply zero nodes.
MaxNodes map[node.RolesMask]uint16 `json:"max_nodes,omitempty"` } type EntityWhitelistRuntimeAdmissionPolicy struct { Entities map[signature.PublicKey]EntityWhitelistConfig `json:"entities"` } ``` The new `max_nodes` field specifies the maximum number of nodes an entity can register for the given runtime for each role. If the map is empty or absent, the number of nodes is unlimited. If the map is present and non-empty, the number of nodes is restricted to the specified number (where zero means no nodes are allowed). Any missing roles imply zero nodes. Each key (roles mask) in the `max_nodes` map must specify a single role, otherwise the runtime descriptor is rejected with `ErrInvalidArgument`. When transforming runtime descriptors from version 1, an entry in the `entities` field maps to an `EntityWhitelistConfig` structure with `max_nodes` absent, denoting that an unlimited number of nodes is allowed (as before). #### Minimum Required Committee Election Pool Size The executor and storage runtime parameters are updated to add a new field defining the minimum required committee election pool size. The committee scheduler is updated to refuse election for a given runtime committee in case the number of candidate nodes is less than the configured minimum pool size. ```golang type ExecutorParameters struct { // MinPoolSize is the minimum required candidate compute node pool size. MinPoolSize uint64 `json:"min_pool_size"` // ... existing fields omitted ... } type StorageParameters struct { // MinPoolSize is the minimum required candidate storage node pool size. MinPoolSize uint64 `json:"min_pool_size"` // ... existing fields omitted ... } ``` The value of `min_pool_size` must be non-zero and must be equal to or greater than the corresponding sum of `group_size` and `group_backup_size`. Otherwise the runtime descriptor is rejected with `ErrInvalidArgument`. When transforming runtime descriptors from version 1, `min_pool_size` for the executor committee is computed as `group_size + group_backup_size` while the `min_pool_size` for the storage committee is equal to `group_size`. ### State This proposal introduces/updates the following consensus state in the registry module: #### Stored Runtime Descriptors Since the runtime descriptors can now be updated by actors other than the initial registering entity, it does not make sense to store signed runtime descriptors. The value of storage key prefixed with `0x13` which previously contained signed runtime descriptors is modified to store plain runtime descriptors. ### Genesis Document This proposal updates the registry part of the genesis document as follows: - The type of the `runtimes` field is changed to a list of runtime descriptors (was a list of _signed_ runtime descriptors before). - The type of the `suspended_runtimes` field is changed to a list of runtime descriptors (was a list of _signed_ runtime descriptors before). Runtime descriptors must be transformed to support the new fields. ### Transaction Methods This proposal updates the following transaction methods in the registry module: #### Register Runtime Runtime registration enables a new runtime to be created or an existing runtime to be updated (in case the governance model allows it). **Method name:** ``` registry.RegisterRuntime ``` The body of a register runtime transaction must be a `Runtime` descriptor. The signer of the transaction must be the owning entity key. 
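To illustrate the two validation rules introduced above (single-role keys in `max_nodes` and the lower bound on `min_pool_size`), here is a hedged Go sketch using simplified stand-in types rather than the actual oasis-core definitions:

```golang
package main

import (
	"errors"
	"fmt"
	"math/bits"
)

// Illustrative stand-ins for the registry/node types referenced above.
type RolesMask uint32

type ExecutorParameters struct {
	GroupSize       uint64
	GroupBackupSize uint64
	MinPoolSize     uint64
}

var ErrInvalidArgument = errors.New("registry: invalid argument")

// validateMaxNodes rejects a max_nodes map in which any key specifies anything
// other than exactly one role, as required above.
func validateMaxNodes(maxNodes map[RolesMask]uint16) error {
	for mask := range maxNodes {
		if bits.OnesCount32(uint32(mask)) != 1 {
			return ErrInvalidArgument
		}
	}
	return nil
}

// validateMinPoolSize rejects executor parameters whose min_pool_size is zero
// or smaller than group_size + group_backup_size.
func validateMinPoolSize(p *ExecutorParameters) error {
	if p.MinPoolSize == 0 || p.MinPoolSize < p.GroupSize+p.GroupBackupSize {
		return ErrInvalidArgument
	}
	return nil
}

func main() {
	fmt.Println(validateMaxNodes(map[RolesMask]uint16{1 << 0: 3}))      // <nil>
	fmt.Println(validateMaxNodes(map[RolesMask]uint16{1<<0 | 1<<1: 3})) // invalid argument
	fmt.Println(validateMinPoolSize(&ExecutorParameters{3, 2, 5}))      // <nil>
	fmt.Println(validateMinPoolSize(&ExecutorParameters{3, 2, 4}))      // invalid argument
}
```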
Registering a runtime may require sufficient stake in either the owning entity's (when entity governance is used) or the runtime's (when runtime governance is used) escrow account. Changing the governance model from `GovernanceEntity` to `GovernanceRuntime` is allowed. Any other governance model changes are not allowed and must fail with `ErrForbidden`. Support for other changes is deferred to a consensus layer governance mechanism not specified by this proposal. Using the `GovernanceRuntime` governance model for a runtime of any kind other than `KindCompute` must return `ErrInvalidArgument`. ### Messages This proposal introduces the following runtime messages: #### Update Runtime Descriptor The update runtime descriptor message enables a runtime to update its own descriptor when the current governance model allows it. **Field name:** ``` update_runtime ``` **Body:** ```golang type UpdateRuntimeMessage struct { registry.Runtime } ``` The body of the update runtime descriptor message is a new runtime descriptor that must be for the runtime emitting this message. Otherwise the message is considered malformed. The actions performed when processing the message are the same as those performed when processing the `registry.RegisterRuntime` method call, just made on the runtime's (instead of an entity's) behalf. ### Consensus Parameters #### Registry This proposal introduces the following new consensus parameters in the registry module: - `enable_runtime_governance_models` (set of `RuntimeGovernanceModel`) specifies the set of runtime governance models that are allowed to be used when creating/updating registrations (either via method calls or via runtime messages). In case a runtime is using a governance model not specified in this list, an update to such a runtime must fail with `ErrForbidden`. ### Rust Runtime Support Library The Rust runtime support library (`oasis-core-runtime`) must be updated to support the updated and newly needed message structures (the runtime descriptor and the update runtime message). ## Consequences ### Positive - Runtimes can define their governance model, enabling them to become more decentralized while still allowing upgrades. - Runtimes using the entity whitelist admission policy can limit the number of nodes that each entity can register. - Runtimes can specify the minimum size of the compute/storage node pool from which committees are elected. ### Negative ### Neutral ## References - [ADR 0003] - Consensus/Runtime Token Transfer [ADR 0003]: 0003-consensus-runtime-token-transfer.md --- ## ADR 0005: Runtime Compute Node Slashing ## Component Oasis Core ## Changelog - 2020-10-14: Evidence expiry, duplicate evidence detection - 2020-09-28: Initial draft ## Status Accepted ## Context The runtime compute nodes make updates to the runtime state by submitting commitment messages to the roothash service in the consensus layer where discrepancy detection and resolution are performed. Currently, the compute nodes are never slashed even if they commit incorrect results. While integrity is guarded by discrepancy detection and resolution, compute nodes should be disincentivized to behave incorrectly. ## Decision This proposal introduces a slashing mechanism for punishing misbehaving compute nodes as follows: - **Per-runtime configurable slashing parameters** are added to the runtime descriptor similar to the global slashing configuration that currently exists in the staking service. 
- **New runtime-specific slashing reasons** are introduced: (i) submitting incorrect compute results and (ii) signing two different executor commits or proposed batches for the same round. - **Failure-indicating executor commits** are introduced in order to give the compute nodes a possibility to vote for failure when they cannot execute the given batch (e.g., due to unavailability of storage or key manager) without getting slashed. Such commits will always trigger a discrepancy during discrepancy detection and will vote for failing the round in discrepancy resolution phase. ### Runtime Descriptor This proposal updates the runtime staking parameters (stored under the `staking` field of the runtime descriptor) as follows: ```golang type RuntimeStakingParameters struct { // ... existing fields omitted ... // Slashing are the per-runtime misbehavior slashing parameters. Slashing map[staking.SlashReason]staking.Slash `json:"slashing,omitempty"` // RewardSlashEquvocationRuntimePercent is the percentage of the reward obtained when slashing // for equivocation that is transferred to the runtime's account. RewardSlashEquvocationRuntimePercent uint8 `json:"reward_equivocation,omitempty"` // RewardSlashBadResultsRuntimePercent is the percentage of the reward obtained when slashing // for incorrect results that is transferred to the runtime's account. RewardSlashBadResultsRuntimePercent uint8 `json:"reward_bad_results,omitempty"` } ``` ### Slashing Parameters The slash reason type in the staking module is changed from `int` to `uint8`. The slash reason definitions are updated as follows: ```golang const ( // SlashConsensusEquivocation is slashing due to equivocation in the // consensus layer. SlashConsensusEquivocation SlashReason = 0x00 // SlashRuntimeIncorrectResults is slashing due to submission of incorrect // results in runtime executor commitments. SlashRuntimeIncorrectResults SlashReason = 0x80 // SlashRuntimeEquivocation is slashing due to signing two different // executor commits or proposed batches for the same round. SlashRuntimeEquivocation SlashReason = 0x81 ) ``` ### Executor Commitments The executor commitment body structures are updated to make certain fields optional and to introduce the `failure` field as follows: ```golang type ExecutorCommitmentFailure uint8 const ( // FailureNone indicates that no failure has occurred. FailureNone ExecutorCommitmentFailure = 0 // FailureUnknown indicates a generic failure. FailureUnknown ExecutorCommitmentFailure = 1 // FailureStorageUnavailable indicates that batch processing failed due to // storage being unavailable. FailureStorageUnavailable ExecutorCommitmentFailure = 2 // FailureKeyManagerUnavailable indicates that batch processing failed due // to key manager being unavailable. FailureKeyManagerUnavailable ExecutorCommitmentFailure = 3 ) type ExecutorCommitmentHeader struct { // Required fields. Round uint64 `json:"round"` PreviousHash hash.Hash `json:"previous_hash"` // Optional fields (may be absent for failure indication). IORoot *hash.Hash `json:"io_root,omitempty"` StateRoot *hash.Hash `json:"state_root,omitempty"` MessageHash *hash.Hash `json:"messages_hash,omitempty"` } type ExecutorCommitmentBody struct { Header ExecutorCommitmentHeader `json:"header"` Failure ExecutorCommitmentFailure `json:"failure,omitempty"` TxnSchedSig signature.Signature `json:"txn_sched_sig"` InputRoot hash.Hash `json:"input_root"` InputStorageSigs []signature.Signature `json:"input_storage_sigs"` // Optional fields (may be absent for failure indication). 
StorageSignatures []signature.Signature `json:"storage_signatures,omitempty"` RakSig *signature.RawSignature `json:"rak_sig,omitempty"` } ``` The notion of a _failure-indicating_ executor commitment is introduced: an executor commitment with the following field values: - The `failure` field must be present and non-zero. The code can indicate a reason for the failure but currently the reason is ignored during processing. - `header.round`, `header.previous_hash`, `txn_sched_sig`, `input_root` and `input_storage_sigs` are set as for usual commitments (e.g., they must be valid). - All other fields must be omitted or set to nil. ### Root Hash Commitment Processing The processing of executor commitments by the commitment pool is modified as follows: - **Adding new commitments (`AddExecutorCommitment`)** - If a commitment for a node already exists, the existing commitment is checked for evidence of equivocation. Any evidence of misbehavior is processed as described in the _Evidence_ subsection below. - **Discrepancy detection (`DetectDiscrepancy`)** - If any executor commitment indicates failure, the discrepancy detection process signals a discrepancy (which implies that discrepancy resolution is triggered). - **Discrepancy resolution (`ResolveDiscrepancy`)** - When tallying votes, any executor commitments indicating failure are tallied into their own bucket. If the failure bucket receives 1/2+ votes, the round fails. - If after discrepancy resolution a non-failure option receives 1/2+ votes, this is considered the correct result. Executor commitments for any other result (excluding failure indication) are considered incorrect and are subject to slashing (based on the configured slashing instructions for the `SlashRuntimeIncorrectResults` reason). A portion of the slashed funds is disbursed equally to the compute nodes which participated in discrepancy resolution for the round. The remainder of the slashed funds is transferred to the runtime account. Any slashing instructions related to freezing nodes are currently ignored. ### State This proposal introduces/updates the following consensus state in the roothash module: - **List of past valid evidence (`0x24`)** A hash uniquely identifying the evidence is stored for each successfully processed piece of evidence that has not yet expired, using the following key format: ``` 0x24 ``` The value is empty as we only need to detect duplicate evidence. ### Transaction Methods This proposal updates the following transaction methods in the roothash module: #### Evidence The evidence method allows anyone to submit evidence of runtime node misbehavior. **Method name:** ``` roothash.Evidence ``` **Body:** ```golang type EvidenceKind uint8 const ( // EvidenceKindEquivocation is the evidence kind for equivocation. EvidenceKindEquivocation = 1 ) type Evidence struct { ID common.Namespace `json:"id"` EquivocationExecutor *EquivocationExecutorEvidence `json:"equivocation_executor,omitempty"` EquivocationBatch *EquivocationBatchEvidence `json:"equivocation_batch,omitempty"` } type EquivocationExecutorEvidence struct { CommitA commitment.ExecutorCommitment `json:"commit_a"` CommitB commitment.ExecutorCommitment `json:"commit_b"` } type EquivocationBatchEvidence struct { BatchA commitment.SignedProposedBatch `json:"batch_a"` BatchB commitment.SignedProposedBatch `json:"batch_b"` } ``` **Fields:** - `id` specifies the runtime identifier of a runtime this evidence is for.
- `equivocation_executor` (optional) specifies evidence of an executor node equivocating when signing commitments. - `equivocation_batch` (optional) specifies evidence of an executor node equivocating when signing proposed batches. If no evidence is specified (e.g., all evidence fields are `nil`) the method call is invalid and must fail with `ErrInvalidArgument`. For all kinds of evidence, the following steps are performed to verify evidence validity: - Current state for the runtime identified by `id` is fetched. If the runtime does not exist, the evidence is invalid. - If no slashing instructions for `SlashRuntimeEquivocation` are configured for the given runtime, there is no point in collecting evidence so the method call must fail with `ErrRuntimeDoesNotSlash`. When processing **`EquivocationExecutor`** evidence, the following steps are performed to verify evidence validity: - `header.round` fields of both commitments are compared. If they are not the same, the evidence is invalid. - Both executor commitments are checked for basic validity. If either is invalid, the evidence is invalid. - The `header.previous_hash`, `header.io_root`, `header.state_root` and `header.messages_hash` fields of both commitments are compared. If they are the same, the evidence is invalid. - The failure indication fields of both commitments are compared. If they are the same, the evidence is invalid. - `header.round` field is compared with the runtime's current state. If it is more than `max_evidence_age` (consensus parameter) rounds behind, the evidence is invalid. - Public keys of signers of both commitments are compared. If they are not the same, the evidence is invalid. - Signatures of both commitments are verified. If either is invalid, the evidence is invalid. - Otherwise the evidence is valid. When processing **`EquivocationBatch`** evidence, the following steps are performed to verify evidence validity: - The `header.round` fields of both proposed batches are compared. If they are not the same, the evidence is invalid. - The `header` fields of both proposed batches are checked for basic validity. If any is invalid, the evidence is invalid. - The `io_root` fields of both proposed batches are compared. If they are the same, the evidence is invalid. - Public keys of signers of both commitments are compared. If they are not the same, the evidence is invalid. - Signatures of both proposed batches are validated. If either is invalid, the evidence is invalid. - Otherwise the evidence is valid. For all kinds of valid evidence, the following steps are performed after validation: - The evidence hash is derived by hashing the evidence kind and the public key of the signer and the evidence is looked up in the _list of past valid evidence_. If evidence already exists there, the method fails with `ErrDuplicateEvidence`. - The valid evidence hash is stored in the _list of past valid evidence_. If the evidence is deemed valid by the above procedure, the misbehaving compute node is slashed based on the runtime slashing parameters for the `SlashRuntimeEquivocation` reason. Any slashing instructions related to freezing nodes are currently ignored. The node submitting the evidence may be rewarded from part of the slashed amount to incentivize evidence submission. The remainder of slashed funds is transferred to the runtime account. 
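To illustrate the duplicate-evidence check described above, here is a rough Go sketch. The exact hashing scheme and state layout are not specified here (using SHA-512/256 and an in-memory set is an assumption for illustration), so treat it as a sketch rather than the actual implementation:

```golang
package main

import (
	"crypto/sha512"
	"fmt"
)

// EvidenceKind mirrors the kind defined in the Evidence body above.
type EvidenceKind uint8

const EvidenceKindEquivocation EvidenceKind = 1

// evidenceHash derives a hash from the evidence kind and the signer's public
// key, as described in the processing steps above.
func evidenceHash(kind EvidenceKind, signerPublicKey []byte) [32]byte {
	data := append([]byte{byte(kind)}, signerPublicKey...)
	return sha512.Sum512_256(data)
}

func main() {
	// Stand-in for the list of past valid evidence stored under the 0x24 prefix.
	seen := map[[32]byte]struct{}{}

	signer := make([]byte, 32) // placeholder public key of the misbehaving node
	h := evidenceHash(EvidenceKindEquivocation, signer)

	if _, ok := seen[h]; ok {
		fmt.Println("ErrDuplicateEvidence")
		return
	}
	seen[h] = struct{}{}
	fmt.Printf("stored evidence hash %x\n", h)
}
```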
### Evidence Expiry On each epoch transition, for each runtime, expired evidence (as defined by the `max_evidence_age` and the current runtime's round) must be pruned from the _list of past valid evidence_. ### Evidence Collection Nodes collect commitment messages distributed via the P2P gossip network and check for any signs of misbehavior. In case valid evidence can be constructed, it is submitted to the consensus layer. Any evidence parts that have expired should be discarded. ### Consensus Parameters #### Roothash This proposal introduces the following new consensus parameters in the roothash module: - `max_evidence_age` (uint64) specifies the maximum age of submitted evidence in the number of rounds. ## Consequences ### Positive - Compute nodes can be disincentivized to submit incorrect results by runtimes configuring slashing parameters. ### Negative - Checking for duplicate evidence requires additional state in the consensus layer to store the evidence hashes (73 bytes per evidence). - Expiring old evidence requires additional per-runtime state lookups and updates that happen on each epoch transition. - If a runtime exhibits non-determinism, this can result in a compute node being slashed. While we specify that runtimes should be deterministic, for non-SGX runtimes we have no way of determining whether a discrepancy is due to runtime non-determinism or a faulty compute node. ### Neutral - This proposal does not introduce any kind of slashing for liveness. - This proposal does not introduce freezing misbehaving nodes. ## References - [oasis-core#2078](https://github.com/oasisprotocol/oasis-core/issues/2078) --- ## ADR 0006: Consensus Governance ## Component Oasis Core ## Changelog - 2021-03-30: Update name of the CastVote method's body - 2021-01-06: Update API to include Proposals() method - 2020-12-08: Updates to match the actual implementation - 2020-10-27: Voting period in epochs, min upgrade cancellation difference, failed proposal state - 2020-10-16: Initial draft ## Status Accepted ## Context Currently the consensus layer does not contain any on-chain governance mechanism so any network upgrades need to be carefully coordinated off-chain. An on-chain governance mechanism would allow upgrades to be handled in a more controlled (and automatable) manner without introducing the risk of corrupting state. ## Decision This proposal introduces a minimal on-chain governance mechanism where anyone can submit governance proposals and the validators can vote where one base unit of delegated stake counts as one vote. The high-level overview is as follows: - **A new governance API** is added to the consensus layer and its Tendermint based implementation. It supports transactions for submitting proposals and voting on proposals. It supports queries for listing current proposals and votes for any given proposal. - **Two governance proposal kinds are supported**, a consensus layer upgrade proposal (where the content is basically the existing upgrade descriptor) and the cancellation of a pending upgrade. A proposal is created through a _submit proposal_ transaction and requires a minimum deposit (which is later refunded in case the proposal passes). Once a proposal is successfully submitted, the voting period starts. Entities that are part of the validator set may cast votes for the proposal. After the voting period completes, the votes are tallied and the proposal either passes or is rejected. In case the proposal passes, the actions specified in the content of the proposal are executed.
Currently the only actions are scheduling of an upgrade by publishing an upgrade descriptor or cancelling a previously passed upgrade. ### State #### Staking This proposal adds the following consensus layer state in the staking module: - **Governance deposits account balance (`0x59`)**, similar to the common pool. #### Governance This proposal adds the following consensus layer state in the governance module: - **Next proposal identifier (`0x80`)** The next proposal identifier is stored as a CBOR-serialized `uint64`. - **List of proposals (`0x81`)** Each proposal is stored under a separate storage key with the following key format: ``` 0x81 ``` And CBOR-serialized value: ```golang // ProposalState is the state of the proposal. type ProposalState uint8 const ( StateActive ProposalState = 1 StatePassed ProposalState = 2 StateRejected ProposalState = 3 StateFailed ProposalState = 4 ) // Proposal is a consensus upgrade proposal. type Proposal struct { // ID is the unique identifier of the proposal. ID uint64 `json:"id"` // Submitter is the address of the proposal submitter. Submitter staking.Address `json:"submitter"` // State is the state of the proposal. State ProposalState `json:"state"` // Deposit is the deposit attached to the proposal. Deposit quantity.Quantity `json:"deposit"` // Content is the content of the proposal. Content ProposalContent `json:"content"` // CreatedAt is the epoch at which the proposal was created. CreatedAt beacon.EpochTime `json:"created_at"` // ClosesAt is the epoch at which the proposal will close and votes will // be tallied. ClosesAt beacon.EpochTime `json:"closes_at"` // Results are the final tallied results after the voting period has // ended. Results map[Vote]quantity.Quantity `json:"results,omitempty"` // InvalidVotes is the number of invalid votes after tallying. InvalidVotes uint64 `json:"invalid_votes,omitempty"` } ``` - **List of active proposals (`0x82`)** Each active proposal (one that has not yet closed) is stored under a separate storage key with the following key format: ``` 0x82 ``` The value is empty as the proposal ID can be inferred from the key. - **List of votes (`0x83`)** Each vote is stored under a separate storage key with the following key format: ``` 0x83 ``` And CBOR-serialized value: ```golang // Vote is a governance vote. type Vote uint8 const ( VoteYes Vote = 1 VoteNo Vote = 2 VoteAbstain Vote = 3 ) ``` - **List of pending upgrades (`0x84`)** Each pending upgrade is stored under a separate storage key with the following key format: ``` 0x84 ``` The value is empty as the proposal upgrade descriptor can be obtained via the proposal that can be inferred from the key. - **Parameters (`0x85`)** Governance consensus parameters. With CBOR-serialized value: ```golang // ConsensusParameters are the governance consensus parameters. type ConsensusParameters struct { // GasCosts are the governance transaction gas costs. GasCosts transaction.Costs `json:"gas_costs,omitempty"` // MinProposalDeposit is the number of base units that are deposited when // creating a new proposal. MinProposalDeposit quantity.Quantity `json:"min_proposal_deposit,omitempty"` // VotingPeriod is the number of epochs after which the voting for a proposal // is closed and the votes are tallied. VotingPeriod beacon.EpochTime `json:"voting_period,omitempty"` // Quorum is the minimum percentage of voting power that needs to be cast on // a proposal for the result to be valid.
Quorum uint8 `json:"quorum,omitempty"` // Threshold is the minimum percentage of VoteYes votes in order for a // proposal to be accepted. Threshold uint8 `json:"threshold,omitempty"` // UpgradeMinEpochDiff is the minimum number of epochs between the current // epoch and the proposed upgrade epoch for the upgrade proposal to be valid. // This is also the minimum number of epochs between two pending upgrades. UpgradeMinEpochDiff beacon.EpochTime `json:"upgrade_min_epoch_diff,omitempty"` // UpgradeCancelMinEpochDiff is the minimum number of epochs between the current // epoch and the proposed upgrade epoch for the upgrade cancellation proposal to be valid. UpgradeCancelMinEpochDiff beacon.EpochTime `json:"upgrade_cancel_min_epoch_diff,omitempty"` } ``` ### Genesis Document The genesis document needs to be updated to include a `governance` field with any initial state (see [_State_]) and consensus parameters (see [_Consensus Parameters_]) for the governance service. [_State_]: #state [_Consensus Parameters_]: #consensus-parameters ### Transaction Methods This proposal adds the following transaction methods in the governance module: #### Submit Proposal Proposal submission enables a new consensus layer governance proposal to be created. **Method name:** ``` governance.SubmitProposal ``` **Body:** ```golang // ProposalContent is a consensus layer governance proposal content. type ProposalContent struct { Upgrade *UpgradeProposal `json:"upgrade,omitempty"` CancelUpgrade *CancelUpgradeProposal `json:"cancel_upgrade,omitempty"` } // UpgradeProposal is an upgrade proposal. type UpgradeProposal struct { upgrade.Descriptor } // CancelUpgradeProposal is an upgrade cancellation proposal. type CancelUpgradeProposal struct { // ProposalID is the identifier of the pending upgrade proposal. ProposalID uint64 `json:"proposal_id"` } ``` **Fields:** - `upgrade` (optional) specifies an upgrade proposal. - `cancel_upgrade` (optional) specifies an upgrade cancellation proposal. Exactly one of the proposal kind fields needs to be non-nil, otherwise the proposal is considered malformed. Upon processing any proposal the following steps are first performed: - The account indicated by the signer is loaded. - If the account balance is less than `min_proposal_deposit`, the method call fails with `ErrInsufficientBalance`. Upon processing an **`UpgradeProposal`** the following steps are then performed: - The upgrade descriptor is checked for basic internal validity. If the check fails, the method call fails with `ErrInvalidArgument`. - The upgrade descriptor's `epoch` field is compared with the current epoch. If the specified epoch is not at least `upgrade_min_epoch_diff` epochs ahead of the current epoch, the method call fails with `ErrUpgradeTooSoon`. - The set of pending upgrades is checked to make sure that no upgrades are currently pending within `upgrade_min_epoch_diff` epochs of the upgrade descriptor's `epoch` field. If there is such an existing upgrade pending, the method call fails with `ErrUpgradeAlreadyPending`. Upon processing a **`CancelUpgradeProposal`** the following steps are then performed: - The set of pending upgrades is checked to make sure that the given upgrade proposal is currently pending to be executed. If there is no such upgrade, the method call fails with `ErrNoSuchUpgrade`. - The upgrade descriptor's `epoch` field is compared with the current epoch. 
If the specified epoch is not at least `upgrade_cancel_min_epoch_diff` epochs ahead of the current epoch, the method call fails with `ErrUpgradeTooSoon`. Upon processing any proposal the following steps are then performed: - The `min_proposal_deposit` base units are transferred from the signer's account to the governance service's _proposal deposit account_. - The signer's account is saved. - A new proposal is created and assigned an identifier. - The corresponding `ProposalSubmittedEvent` is emitted with the following structure: ```golang type ProposalSubmittedEvent struct { // ID is the unique identifier of a proposal. ID uint64 `json:"id"` // Submitter is the staking account address of the submitter. Submitter staking.Address `json:"submitter"` } ``` - The corresponding `staking.TransferEvent` is emitted, indicating transfer from the submitter's account to the _proposal deposit account_. #### Vote Voting for submitted consensus layer governance proposals. **Method name:** ``` governance.CastVote ``` **Body:** ```golang type ProposalVote struct { // ID is the unique identifier of a proposal. ID uint64 `json:"id"` // Vote is the vote. Vote Vote `json:"vote"` } ``` Upon processing a vote the following steps are performed: - The entity descriptor corresponding to the transaction signer is fetched. In case no such entity exists, the method call fails with `ErrNotEligible`. - It is checked whether any entity's nodes are in the current validator set. In case they are not, the method call fails with `ErrNotEligible`. - The proposal identified by `id` is loaded. If the proposal does not exist, the method call fails with `ErrNoSuchProposal`. - If the proposal's state is not `StateActive`, the method call fails with `ErrVotingIsClosed`. - The vote is added to the list of votes. If the vote already exists, it is overwritten. - The corresponding `VoteEvent` is emitted with the following structure: ```golang type VoteEvent struct { // ID is the unique identifier of a proposal. ID uint64 `json:"id"` // Submitter is the staking account address of the submitter. Submitter staking.Address `json:"submitter"` // Vote is the cast vote. Vote Vote `json:"vote"` } ``` ### Queries This proposal introduces the following query methods in the governance module: ```golang type Backend interface { // ActiveProposals returns a list of all proposals that have not yet closed. ActiveProposals(ctx context.Context, height int64) ([]*Proposal, error) // Proposals returns a list of all proposals. Proposals(ctx context.Context, height int64) ([]*Proposal, error) // Proposal looks up a specific proposal. Proposal(ctx context.Context, query *ProposalQuery) (*Proposal, error) // Votes looks up votes for a specific proposal. Votes(ctx context.Context, query *ProposalQuery) ([]*VoteEntry, error) // PendingUpgrades returns a list of all pending upgrades. PendingUpgrades(ctx context.Context, height int64) ([]*upgrade.Descriptor, error) // StateToGenesis returns the genesis state at specified block height. StateToGenesis(ctx context.Context, height int64) (*Genesis, error) // ConsensusParameters returns the governance consensus parameters. ConsensusParameters(ctx context.Context, height int64) (*ConsensusParameters, error) // GetEvents returns the events at specified block height. GetEvents(ctx context.Context, height int64) ([]*Event, error) // WatchEvents returns a channel that produces a stream of Events. WatchEvents(ctx context.Context) (<-chan *Event, pubsub.ClosableSubscription, error) } // ProposalQuery is a proposal query. 
type ProposalQuery struct { Height int64 `json:"height"` ID uint64 `json:"id"` } // VoteEntry contains data about a cast vote. type VoteEntry struct { Voter staking.Address `json:"voter"` Vote Vote `json:"vote"` } // Event signifies a governance event, returned via GetEvents. type Event struct { Height int64 `json:"height,omitempty"` TxHash hash.Hash `json:"tx_hash,omitempty"` ProposalSubmitted *ProposalSubmittedEvent `json:"proposal_submitted,omitempty"` ProposalExecuted *ProposalExecutedEvent `json:"proposal_executed,omitempty"` ProposalFinalized *ProposalFinalizedEvent `json:"proposal_finalized,omitempty"` Vote *VoteEvent `json:"vote,omitempty"` } ``` ### Tallying In `EndBlock` the list of active proposals is checked to see if there was an epoch transition in this block. If there was, the following steps are performed for each proposal that should be closed at the current epoch: - A mapping of current validator entity addresses to their respective active escrow balances is prepared. - A results mapping from `Vote` to number of votes is initialized in the proposal's `results` field. - Votes from the list of votes for the given proposal are iterated and the address of each vote is looked up in the prepared entity address mapping. The corresponding number of votes (on the principle of 1 base unit equals one vote) are added to the results mapping based on the voted option. Any votes that are not from the current validator set are ignored and the `invalid_votes` field is incremented for each such vote. - In case the percentage of votes relative to the total voting power is less than `quorum`, the proposal is rejected. - In case the percentage of `VoteYes` votes relative to all valid votes is less than `threshold`, the proposal is rejected. - Otherwise the proposal is passed. - The proposal's status is changed to either `StatePassed` or `StateRejected` and the proposal is saved. - The proposal is removed from the list of active proposals. - In case the proposal has been passed, the proposal content is executed. If proposal execution fails, the proposal's state is changed to `StateFailed`. - The corresponding `ProposalFinalizedEvent` is emitted with the following structure: ```golang type ProposalFinalizedEvent struct { // ID is the unique identifier of a proposal. ID uint64 `json:"id"` // State is the new proposal state. State ProposalState `json:"state"` } ``` - In case the proposal has been passed, the deposit is transferred back to the proposal submitter and a corresponding `staking.TransferEvent` is emitted, indicating transfer from the _proposal deposit account_ to the submitter's account. - In case the proposal has been rejected, the deposit is transferred to the common pool and a corresponding `staking.TransferEvent` is emitted, indicating transfer from the _proposal deposit account_ to the common pool account. ### Proposal Content Execution After any proposal is successfully executed the corresponding `ProposalExecutedEvent` is emitted with the following structure: ```golang type ProposalExecutedEvent struct { // ID is the unique identifier of a proposal. ID uint64 `json:"id"` } ``` #### Upgrade Proposal The set of pending upgrades is checked to make sure that no upgrades are currently pending within `upgrade_min_epoch_diff` of the upgrade descriptor's `epoch` field. If there is such an existing pending upgrade the upgrade proposal execution fails. When an upgrade proposal is executed, a new entry is added to the list of pending upgrades using `epoch` as ``. 
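As an illustration of the pending-upgrade proximity check described above, here is a minimal sketch; the `canScheduleUpgrade` helper and its signature are hypothetical and are not part of oasis-core.

```golang
// Illustrative sketch only: before executing an upgrade proposal, make sure no
// other upgrade is already pending within upgrade_min_epoch_diff epochs of the
// proposed epoch.
func canScheduleUpgrade(pendingEpochs []uint64, proposedEpoch, upgradeMinEpochDiff uint64) bool {
	for _, e := range pendingEpochs {
		diff := proposedEpoch - e
		if e > proposedEpoch {
			diff = e - proposedEpoch
		}
		if diff < upgradeMinEpochDiff {
			// An existing pending upgrade is too close; execution must fail.
			return false
		}
	}
	return true
}
```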
On each epoch transition (as part of `BeginBlock`) it is checked whether a pending upgrade is scheduled for that epoch. In case it is and we are not running the new version, the consensus layer will panic. Otherwise, the pending upgrade proposal is removed. #### Cancel Upgrade Proposal When a cancel upgrade proposal is executed, the proposal identified by `proposal_id` is looked up and removed from the list of pending upgrades. In case the pending upgrade does not exist anymore, no action is performed. ### Consensus Parameters This proposal introduces the following new consensus parameters in the governance module: - `gas_costs` (transaction.Costs) are the governance transaction gas costs. - `min_proposal_deposit` (base units) specifies the number of base units that are deposited when creating a new proposal. - `voting_period` (epochs) specifies the number of epochs after which the voting for a proposal is closed and the votes are tallied. - `quorum` (uint8: \[0,100\]) specifies the minimum percentage of voting power that needs to be cast on a proposal for the result to be valid. - `threshold` (uint8: \[0,100\]) specifies the minimum percentage of `VoteYes` votes in order for a proposal to be accepted. - `upgrade_min_epoch_diff` (epochs) specifies the minimum number of epochs between the current epoch and the proposed upgrade epoch for the upgrade proposal to be valid. Additionally specifies the minimum number of epochs between two consecutive pending upgrades. - `upgrade_cancel_min_epoch_diff` (epochs) specifies the minimum number of epochs between the current epoch and the proposed upgrade epoch for the upgrade cancellation proposal to be valid. The following parameter sanity checks are introduced: - Product of `quorum` and `threshold` must be 2/3+. - `voting_period` must be less than `upgrade_min_epoch_diff` and `upgrade_cancel_min_epoch_diff`. ## Consequences ### Positive - The consensus layer can coordinate on upgrades. ### Negative ### Neutral ## References --- ## ADR 0007: Improved Random Beacon ## Component Oasis Core ## Changelog - 2020-10-22: Initial version ## Status Proposed ## Context > Any one who considers arithmetical methods of producing random digits > is, of course, in a state of sin. > > --Dr. John von Neumann The existing random beacon used by Oasis Core is largely a placeholder implementation that naively uses the previous block's commit hash as the entropy input. As such it is clearly insecure as it is subject to manipulation. A better random beacon which is harder for an adversary to manipulate is required to provide entropy for secure committee elections. ## Decision At a high level, this ADR proposes implementing an on-chain random beacon based on "SCRAPE: Scalable Randomness Attested by Public Entities" by Cascudo and David. The new random beacon will use a commit-reveal scheme backed by a PVSS scheme so that as long as the threshold of participants is met, and one participant is honest, secure entropy will be generated. Note: This document assumes the reader understands SCRAPE. Details regarding the underlying SCRAPE implementation are omitted for brevity. ### Node Descriptor The node descriptor of each node will be extended to include the following datastructure. ```golang type Node struct { // ... existing fields omitted ... // Beacon contains information for this node's participation // in the random beacon protocol. // // TODO: This is optional for now, make mandatory once enough // nodes provide this field.
Beacon *BeaconInfo `json:"beacon,omitempty"` } // BeaconInfo contains information for this node's participation in // the random beacon protocol. type BeaconInfo struct { // Point is the elliptic curve point used for the PVSS algorithm. Point scrape.Point `json:"point"` } ``` Each node will generate and maintain a long term elliptic curve point and scalar pair (public/private key pair), the point (public key) of which will be included in the node descriptor. For the purposes of the initial implementation, the curve will be P-256. ### Consensus Parameters The beacon module will have the following consensus parameters that control behavior. ```golang type SCRAPEParameters struct { Participants uint64 `json:"participants"` Threshold uint64 `json:"threshold"` PVSSThreshold uint64 `json:"pvss_threshold"` CommitInterval int64 `json:"commit_interval"` RevealInterval int64 `json:"reveal_interval"` TransitionDelay int64 `json:"transition_delay"` } ``` Fields: - `Participants` - The number of participants to be selected for each beacon generation protocol round. - `Threshold` - The minimum number of participants which must successfully contribute entropy for the final output to be considered valid. - `PVSSThreshold` - The minimum number of participants that are required to reconstruct a PVSS secret from the corresponding decrypted shares (Note: This usually should just be set to `Threshold`). - `CommitInterval` - The duration of the Commit phase, in blocks. - `RevealInterval` - The duration of the Reveal phase, in blocks. - `TransitionDelay` - The duration of the post Reveal phase delay, in blocks. ### Consensus State and Events The on-chain beacon will maintain and make available the following consensus state. ```golang // RoundState is a SCRAPE round state. type RoundState uint8 const ( StateInvalid RoundState = 0 StateCommit RoundState = 1 StateReveal RoundState = 2 StateComplete RoundState = 3 ) // SCRAPEState is the SCRAPE backend state. type SCRAPEState struct { Height int64 `json:"height,omitempty"` Epoch EpochTime `json:"epoch,omitempty"` Round uint64 `json:"round,omitempty"` State RoundState `json:"state,omitempty"` Instance *scrape.Instance `json:"instance,omitempty"` Participants []signature.PublicKey `json:"participants,omitempty"` Entropy []byte `json:"entropy,omitempty"` BadParticipants map[signature.PublicKey]bool `json:"bad_participants,omitempty"` CommitDeadline int64 `json:"commit_deadline,omitempty"` RevealDeadline int64 `json:"reveal_deadline,omitempty"` TransitionHeight int64 `json:"transition_height,omitempty"` RuntimeDisableHeight int64 `json:"runtime_disable_height,omitempty"` } ``` Fields: - `Height` - The block height at which the last event was emitted. - `Epoch` - The epoch in which this beacon is being generated. - `Round` - The epoch beacon generation round. - `State` - The beacon generation step (commit/reveal/complete). - `Instance` - The SCRAPE protocol state (encrypted/decrypted shares of all participants). - `Participants` - The node IDs of the nodes selected to participate in this beacon generation round. - `Entropy` - The final raw entropy, if any. - `BadParticipants` - A map of nodes that were selected, but have failed to execute the protocol correctly. - `CommitDeadline` - The height in blocks by which participants must submit their encrypted shares. - `RevealDeadline` - The height in blocks by which participants must submit their decrypted shares. - `TransitionHeight` - The height at which the epoch will transition assuming this round completes successfully. 
- `RuntimeDisableHeight` - The height at which, upon protocol failure, runtime transactions will be disabled. This height will be set to the transition height of the 0th round. Upon transition to a next step of the protocol, the on-chain beacon will emit the following event. ```golang // SCRAPEEvent is a SCRAPE backend event. type SCRAPEEvent struct { Height int64 `json:"height,omitempty"` Epoch EpochTime `json:"epoch,omitempty"` Round uint64 `json:"round,omitempty"` State RoundState `json:"state,omitempty"` Participants []signature.PublicKey `json:"participants,omitempty"` } ``` Field definitions are identical to those in the `SCRAPEState` datastructure. ### Transactions Participating nodes will submit the following transactions when required, signed by the node identity key. ```golang var ( // MethodSCRAPECommit is the method name for a SCRAPE commitment. MethodSCRAPECommit = transaction.NewMethodName(ModuleName, "SCRAPECommit", SCRAPECommit{}) // MethodSCRAPEReveal is the method name for a SCRAPE reveal. MethodSCRAPEReveal = transaction.NewMethodName(ModuleName, "SCRAPEReveal", SCRAPEReveal{}) ) // SCRAPECommit is a SCRAPE commitment transaction payload. type SCRAPECommit struct { Epoch EpochTime `json:"epoch"` Round uint64 `json:"round"` Commit *scrape.Commit `json:"commit,omitempty"` } // SCRAPEReveal is a SCRAPE reveal transaction payload. type SCRAPEReveal struct { Epoch EpochTime `json:"epoch"` Round uint64 `json:"round"` Reveal *scrape.Reveal `json:"reveal,omitempty"` } ``` Fields: - `Epoch` - The epoch in which the transaction is applicable. - `Round` - The epoch beacon generation round for the transaction. - `Commit` - The SCRAPE commit consisting of PVSS shares encrypted to every participant. - `Reveal` - The SCRAPE reveal consisting of the decrypted result of PVSS shares received from every participant. ### Beacon Generation The beacon generation process is split into three sequential stages, roughly corresponding to the steps in the SCRAPE protocol. Any failures in the Commit and Reveal phases result in a failed protocol round, and the generation process will restart after disqualifying participants who have induced the failure. #### Commit Phase Upon epoch transition or a prior failed round, the commit phase is initiated: the consensus application will select `Participants` nodes from the current validator set (in order of descending stake) to serve as entropy contributors. The `SCRAPEState` structure is (re)-initialized, and a `SCRAPEEvent` is broadcast to signal to the participants that they should generate and submit their encrypted shares via a `SCRAPECommit` transaction. Each commit phase lasts exactly `CommitInterval` blocks, at the end of which, the round will be closed to further commits. At the end of the commit phase, the SCRAPE protocol state is evaluated to ensure that `Threshold`/`PVSSThreshold` nodes have published encrypted shares, and if an insufficient number of nodes have published in either case, the round is considered to have failed. The following behaviors are currently candidates for a node being marked as malicious/non-participatory (`BadParticipant`) and subject to exclusion from future rounds and slashing. - Not submitting a commitment. - Malformed commitments (corrupted/fails to validate/etc). - Attempting to alter an existing commitment for a given Epoch/Round.
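To illustrate the end-of-commit-phase evaluation described above, here is a minimal sketch. It assumes the `SCRAPEState` and `SCRAPEParameters` definitions from earlier in this ADR; the `commitPhaseSucceeded` helper and the commit-counting logic are hypothetical and are not part of oasis-core.

```golang
// Illustrative sketch only: decide whether the commit phase succeeded once the
// commit deadline has passed and the round may proceed to the reveal phase.
func commitPhaseSucceeded(state *SCRAPEState, params *SCRAPEParameters, height int64, validCommits uint64) bool {
	if height < state.CommitDeadline {
		// The phase is still open to further commits.
		return false
	}
	// Enough participants must have published valid encrypted shares;
	// otherwise the round is considered to have failed.
	return validCommits >= params.Threshold && validCommits >= params.PVSSThreshold
}
```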
#### Reveal Phase When the `CommitInterval` has passed, assuming that a sufficient number of commits have been received, the consensus application transitions into the reveal phase by updating the `SCRAPEState` structure and broadcasting a `SCRAPEEvent` to signal to the participants that they should reveal the decrypted values of the encrypted shares received from other participants via a `SCRAPEReveal` transaction. Each reveal phase lasts exactly `RevealInterval` blocks, at the end of which, the round will be closed to further reveals. At the end of the reveal phase, the SCRAPE protocol state is evaluated to ensure that `Threshold`/`PVSSThreshold` nodes have published decrypted shares, and if an insufficient number of nodes have published in either case, the round is considered to have failed. The following behaviors are currently candidates for a node being marked as malicious/non-participatory (`BadParticipant`) and subject to exclusion from future rounds and slashing. - Not submitting a reveal. - Malformed reveals (corrupted/fails to validate/etc). - Attempting to alter an existing reveal for a given Epoch/Round. Note: It is possible for anybody who can observe consensus state to derive the entropy the moment a threshold number of `SCRAPEReveal` transactions have been processed. Therefore the reveal phase should be a small fraction of the desired epoch as it is possible to derive the results of the committee elections for the next epoch mid-reveal phase. #### Complete (Transition Wait) Phase When the `RevealInterval` has passed, assuming that a sufficient number of reveals have been received, the consensus application recovers the final entropy output (the hash of the secret shared by each participant) and transitions into the complete (transition wait) phase by updating the `SCRAPEState` structure and broadcasting a `SCRAPEEvent` to signal to participants the completion of the round. No meaningful protocol activity happens once a round has successfully completed, beyond the scheduling of the next epoch transition. ### Misc. Changes/Notes Nodes MUST not be slashed for non-participation if they have not had the opportunity to propose any blocks during the relevant interval. Processing commitments and reveals is currently rather CPU intensive and thus each block SHOULD only contain one of each to prevent the consensus from stalling. To thwart attempts to manipulate committee placement by virtue of the fact that it is possible to observe the entropy used for elections early, nodes that register between the completion of the final commit phase and the epoch transition in any given epoch MUST be excluded from committee eligibility. ## Consequences ### Positive - The random beacon output is unbiased, provided that at least one participant is honest. - The amount of consensus state required is relatively small. - All protocol messages and steps can be verified on-chain, and misbehavior can be attributed. - The final output can be generated on-chain. ### Negative - Epoch intervals are theoretically variable under this proposal, as the beacon generation needs to be re-run with new participants upon failure. - A new failure mode is introduced at the consensus layer, where the beacon generation protocol exhausts eligible participants. - Without using pairing based cryptography, the number of participants in the beacon generation is limited to a small subset of the anticipated active validator set.
- There is a time window where the next beacon can be derived by anyone with access to the consensus state before the epoch transition actually happens. This should be mitigated by having a relatively short reveal period. - The commit and reveal steps of the protocol are rather slow, especially as the number of participants increases. ### Neutral - Due to performance reasons, the curve used by the PVSS scheme will be P-256 instead of Ed25519. The point and scalar pairs that each node generates on this curve are exclusively for use in the random beacon protocol and are not used anywhere else. ## References - [SCRAPE: SCalable Randomness Attested by Public Entities](https://eprint.iacr.org/2017/216.pdf) - [oasis-core#3180](https://github.com/oasisprotocol/oasis-core/pull/3180) --- ## ADR 0008: Standard Account Key Generation ## Component Oasis Core ## Changelog - 2021-05-07: Add test vectors and reference implementation, extend Consequences section - 2021-04-19: Switch from BIP32-Ed25519 to SLIP-0010 for hierarchical key derivation scheme - 2021-01-27: Initial draft ## Status Accepted ## Context Currently, each application interacting with the Oasis Network defines its own method of generating an account's private/public key pair. [Account]'s public key is in turn used to derive the account's address of the form `oasis1 ... 40 characters ...` which is used for a variety of operations (i.e. token transfers, delegations/undelegations, ...) on the network. The blockchain ecosystem has developed many standards for generating keys which improve key storage and interoperability between different applications. Adopting these standards will allow the Oasis ecosystem to: - Make key derivation the same across different applications (i.e. wallets). - Allow users to hold keys in hardware wallets. - Allow users to hold keys in cold storage more reliably (i.e. using the familiar 24 word mnemonics). - Define how users can generate multiple keys from a single seed (i.e. the 24 or 12 word mnemonic). ## Decision ### Mnemonic Codes for Master Key Derivation We use Bitcoin's [BIP-0039]: _Mnemonic code for generating deterministic keys_ to derive a binary seed from a mnemonic code. The binary seed is in turn used to derive the _master key_, the root key from which a hierarchy of deterministic keys is derived, as described in [Hierarchical Key Derivation Scheme][hd-scheme]. We strongly recommend using 24 word mnemonics which correspond to 256 bits of entropy. ### Hierarchical Key Derivation Scheme We use Satoshi Labs' [SLIP-0010]: _Universal private key derivation from master private key_, which is a superset of Bitcoin's [BIP-0032]: _Hierarchical Deterministic Wallets_ derivation algorithm, extended to work on other curves. Account keys use the [edwards25519 curve] from the Ed25519 signature scheme specified in [RFC 8032]. ### Key Derivation Paths We adapt [BIP-0044]: _Multi-Account Hierarchy for Deterministic Wallets_ for generating deterministic keys where `coin_type` equals 474, as assigned to the Oasis Network by [SLIP-0044]. The following [BIP-0032] path should be used to generate keys: ``` m/44'/474'/x' ``` where `x` represents the key number. Note that all path levels are _hardened_, e.g. `44'` is `44 | 0x80000000` or `44 + 2^31`. The key corresponding to key number 0 (i.e. `m/44'/474'/0'`) is called the _primary key_. The account corresponding to the _primary key_ is called the _primary account_. Applications (i.e. wallets) should use this account as a user's default Oasis account.
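For illustration, the sketch below builds the three hardened indices of the `m/44'/474'/x'` path; the `accountPath` helper is hypothetical and is not the API of the reference implementation.

```golang
// Illustrative sketch only: hardened BIP-0032 path indices for m/44'/474'/x'.
package main

import "fmt"

const hardenedOffset uint32 = 1 << 31 // 0x80000000

// accountPath returns the three hardened indices for key number x.
func accountPath(x uint32) [3]uint32 {
	return [3]uint32{
		44 | hardenedOffset,  // purpose'   (BIP-0044)
		474 | hardenedOffset, // coin_type' (SLIP-0044: Oasis Network)
		x | hardenedOffset,   // key number'
	}
}

func main() {
	// The primary key corresponds to key number 0, i.e. m/44'/474'/0'.
	fmt.Println(accountPath(0)) // [2147483692 2147484122 2147483648]
}
```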
## Rationale BIPs and SLIPs are industry standards used by a majority of blockchain projects and software/hardware wallets. ### SLIP-0010 for Hierarchical Key Derivation Scheme [SLIP-0010] defines a hierarchical key derivation scheme which is a superset of the [BIP-0032] derivation algorithm extended to work on other curves. In particular, we use their adaptation for the [edwards25519 curve]. #### Adoption It is used by Stellar ([SEP-0005]). It is supported by [Ledger] and [Trezor] hardware wallets. It is commonly used by Ledger applications, including: - [Stellar's Ledger app][stellar-ledger-slip10], - [Solana's Ledger app][solana-ledger-slip10], - [NEAR Protocol's Ledger app][near-ledger-slip10], - [Siacoin's Ledger app][sia-ledger-slip10], - [Hedera Hashgraph's Ledger app][hedera-ledger-slip10]. #### Difficulties in Adapting BIP-0032 to edwards25519 Curve Creating a hierarchical key derivation scheme for the [edwards25519 curve] proved to be very challenging due to edwards25519's small cofactor and bit "clamping". [BIP-0032] was designed for the [secp256k1] elliptic curve with a prime-order group. For performance reasons, edwards25519 doesn't provide a prime-order group and provides a group of order _h_ * _l_ instead, where _h_ is a small co-factor (8) and _l_ is a 252-bit prime. While using a co-factor offers better performance, it has proven to be a source of issues and vulnerabilities in higher-layer protocol implementations as [described by Ristretto authors][risretto-cofactor-issues]. Additionally, the edwards25519 curve employs bit "clamping". As [described by Trevor Perrin][trevor-perrin-clamping], low bits are "clamped" to deal with small-subgroup attacks and high bits are "clamped" so that: - the scalar is smaller than the subgroup order, and - the highest bit set is constant in case the scalar is used with a non-constant-time scalar multiplication algorithm that leaks based on the highest set bit. These issues were discussed on [modern crypto]'s mailing list [[1]][moderncrypto-ed25519-hd1], [[2]][moderncrypto-ed25519-hd2]. [SLIP-0010] avoids these issues because it doesn't try to support non-hardened parent public key to child public key derivation and only supports hardened private parent key to private child key derivation when used with the edwards25519 curve. ### Shorter Key Derivation Paths Similar to Stellar's [SEP-0005], we decided not to use the full [BIP-0032] derivation path specified by [BIP-0044] because [SLIP-0010]'s scheme for [edwards25519 curve] only supports hardened private parent key to private child key derivation and additionally, the Oasis Network is account-based rather than [UTXO]-based. [Trezor] follows the same scheme for account-based blockchain networks as described in their [BIP-44 derivation paths][trezor-bip44-paths] document.
## Test Vectors ```json [ { "kind": "standard account key generation", "bip39_mnemonic": "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about", "bip39_passphrase": "", "bip39_seed": "5eb00bbddcf069084889a8ab9155568165f5c453ccb85e70811aaed6f6da5fc19a5ac40b389cd370d086206dec8aa6c43daea6690f20ad3d8d48b2d2ce9e38e4", "oasis_accounts": [ { "bip32_path": "m/44'/474'/0'", "private_key": "fb181e94e95cc6bedd2da03e6c4aca9951053f3e9865945dbc8975a6afd217c3ad55bbb7c192b8ecfeb6ad18bbd7681c0923f472d5b0c212fbde33008005ad61", "public_key": "ad55bbb7c192b8ecfeb6ad18bbd7681c0923f472d5b0c212fbde33008005ad61", "address": "oasis1qqx0wgxjwlw3jwatuwqj6582hdm9rjs4pcnvzz66" }, { "bip32_path": "m/44'/474'/1'", "private_key": "1792482bcb001f45bc8ab15436e62d60fe3eb8c86e8944bfc12da4dc67a5c89b73fd7c51a0f059ea34d8dca305e0fdb21134ca32216ca1681ae1d12b3d350e16", "public_key": "73fd7c51a0f059ea34d8dca305e0fdb21134ca32216ca1681ae1d12b3d350e16", "address": "oasis1qr4xfjmmfx7zuyvskjw9jl3nxcp6a48e8v5e27ty" }, { "bip32_path": "m/44'/474'/2'", "private_key": "765be01f40c1b78dd807e03a5099220c851cfe55870ab082be2345d63ffb9aa40f85ea84b81abded443be6ab3e16434cdddebca6e12ea27560a6ed65ff1998e0", "public_key": "0f85ea84b81abded443be6ab3e16434cdddebca6e12ea27560a6ed65ff1998e0", "address": "oasis1qqtdpw7jez243dnvmzfrhvgkm8zpndssvuwm346d" }, { "bip32_path": "m/44'/474'/3'", "private_key": "759b3c2af3d7129072666677b37e9e7b6d22c8bbf634816627e1704f596f60c411ebdac05bfa37b746692733f15a02be9842b29088272354012417a215666b0e", "public_key": "11ebdac05bfa37b746692733f15a02be9842b29088272354012417a215666b0e", "address": "oasis1qqs7wl20gfppe2krdy3tm4298yt9gftxpc9j27z2" }, { "bip32_path": "m/44'/474'/4'", "private_key": "db77a8a8508fd77083ba63f31b0a348441d4823e6ba73b65f354a93cf789358d8753c5da6085e6dbf5969773d27b08ee05eddcb3e11d570aaadf0f42036e69b1", "public_key": "8753c5da6085e6dbf5969773d27b08ee05eddcb3e11d570aaadf0f42036e69b1", "address": "oasis1qq8neyfkydj874tvs6ksljlmtxgw3plkkgf69j4w" }, { "bip32_path": "m/44'/474'/5'", "private_key": "318e1fba7d83ca3ea57b0f45377e77391479ec38bfb2236a2842fe1b7a624e8800e1a8016629f2882bca2174f29033ec2a57747cd9d3c27f49cc6e11e38ee7bc", "public_key": "00e1a8016629f2882bca2174f29033ec2a57747cd9d3c27f49cc6e11e38ee7bc", "address": "oasis1qrdjslqdum7wwehz3uaw6t6xkpth0a9n8clsu6xq" }, { "bip32_path": "m/44'/474'/6'", "private_key": "63a7f716e1994f7a8ab80f8acfae4c28c21af6b2f3084756b09651f4f4ee38606b85d0a8a9747faac85233ad5e4501b2a6862a4c02a46a0b7ea699cf2bd38f98", "public_key": "6b85d0a8a9747faac85233ad5e4501b2a6862a4c02a46a0b7ea699cf2bd38f98", "address": "oasis1qzlt62g85303qcrlm7s2wx2z8mxkr5v0yg5me0z3" }, { "bip32_path": "m/44'/474'/7'", "private_key": "34af69924c04d75c79bd120e03d667ff6287ab602f9285bb323667ddf9f25c974f49a7672eeadbf78f910928e3d592d17f1e14964693cfa2afd94b79f0d49f48", "public_key": "4f49a7672eeadbf78f910928e3d592d17f1e14964693cfa2afd94b79f0d49f48", "address": "oasis1qzw2gd3qq8nse6648df32zxvsryvljeyyyl3cxma" }, { "bip32_path": "m/44'/474'/8'", "private_key": "aa5242e7efe8dee05c21192766a11c46531f500ff7c0cc29ed59523c5e618792c0a24bf07953520f21c1c25882d9dbf00d24d0499be443fdcf07f2da9601d3e5", "public_key": "c0a24bf07953520f21c1c25882d9dbf00d24d0499be443fdcf07f2da9601d3e5", "address": "oasis1qqzjkx9u549r87ctv7x7t0un29vww6k6hckeuvtm" }, { "bip32_path": "m/44'/474'/9'", "private_key": "b661567dcb9b5290889e110b0e9814e72d347c3a3bad2bafe2969637541451e5da8c9830655103c726ff80a4ac2f05a7e0b948a1986734a4f63b3e658da76c66", "public_key": 
"da8c9830655103c726ff80a4ac2f05a7e0b948a1986734a4f63b3e658da76c66", "address": "oasis1qpawhwugutd48zu4rzjdcgarcucxydedgq0uljkj" }, { "bip32_path": "m/44'/474'/2147483647'", "private_key": "cc05cca118f3f26f05a0ff8e2bf5e232eede9978b7736ba10c3265870229efb19e7c2b2d03265ce4ea175e3664a678182548a7fc6db04801513cff7c98c8f151", "public_key": "9e7c2b2d03265ce4ea175e3664a678182548a7fc6db04801513cff7c98c8f151", "address": "oasis1qq7895v02vh40yc2dqfxhldww7wxsky0wgfdenrv" } ] }, { "kind": "standard account key generation", "bip39_mnemonic": "equip will roof matter pink blind book anxiety banner elbow sun young", "bip39_passphrase": "", "bip39_seed": "ed2f664e65b5ef0dd907ae15a2788cfc98e41970bc9fcb46f5900f6919862075e721f37212304a56505dab99b001cc8907ef093b7c5016a46b50c01cc3ec1cac", "oasis_accounts": [ { "bip32_path": "m/44'/474'/0'", "private_key": "4e9ca1a4c2ed90c90da93ea181557ef9f465f444c0b7de35daeb218f9390d98545601f761af17dba50243529e629732f1c58d08ffddaa8491238540475729d85", "public_key": "45601f761af17dba50243529e629732f1c58d08ffddaa8491238540475729d85", "address": "oasis1qqjkrr643qv7yzem6g4m8rrtceh42n46usfscpcf" }, { "bip32_path": "m/44'/474'/1'", "private_key": "2d0d2e75a13fd9dc423a2db8dfc1db6ebacd53f22c8a7eeb269086ec3b443eb627ed04a3c0dcec6591c001e4ea307d65cbd712cb90d85ab7703c35eee07a77dd", "public_key": "27ed04a3c0dcec6591c001e4ea307d65cbd712cb90d85ab7703c35eee07a77dd", "address": "oasis1qp42qp8d5k8pgekvzz0ld47k8ewvppjtmqg7t5kz" }, { "bip32_path": "m/44'/474'/2'", "private_key": "351749392b02c6b7a5053bc678e71009b4fb07c37a67b44558064dc63b2efd9219456a3f0cf3f4cc5e6ce52def57d92bb3c5a651fa9626b246cfec07abc28724", "public_key": "19456a3f0cf3f4cc5e6ce52def57d92bb3c5a651fa9626b246cfec07abc28724", "address": "oasis1qqnwwhj4qvtap422ck7qjxf7wm89tgjhwczpu0f3" }, { "bip32_path": "m/44'/474'/3'", "private_key": "ebc13ccb62142ed5b600f398270801f8f80131b225feb278d42982ce314f896292549046214fdb4729bf7a6ee4a3bbd0f463c476acc933b2c7cce084509abee4", "public_key": "92549046214fdb4729bf7a6ee4a3bbd0f463c476acc933b2c7cce084509abee4", "address": "oasis1qp36crawwyk0gnfyf0epcsngnpuwrz0mtu8qzu2f" }, { "bip32_path": "m/44'/474'/4'", "private_key": "664b95ad8582831fb787afefd0febdddcf03343cc1ca5aa86057477e0f22c93b331288192d442d3a32e239515b4c019071c57ee89f91942923dd4c1535db096c", "public_key": "331288192d442d3a32e239515b4c019071c57ee89f91942923dd4c1535db096c", "address": "oasis1qz8d2zptvf44y049g9dtyqya4g0jcqxmjsf9pqa3" }, { "bip32_path": "m/44'/474'/5'", "private_key": "257600bfccc21e0bc772f4d1dcfb2834805e07959ad7bd586e7deec4a320bfcecbbfef21f0833744b3504a9860b42cb0bb11e2eb042a8b83e3ceb91fe0fca096", "public_key": "cbbfef21f0833744b3504a9860b42cb0bb11e2eb042a8b83e3ceb91fe0fca096", "address": "oasis1qz0cxkl3mftumy9l4g663fmwg69vmtc675xh8exw" }, { "bip32_path": "m/44'/474'/6'", "private_key": "10d224fbbac9d6e3084dff75ed1d3ae2ce52bce3345a48bf68d1552ed7d89594defb924439e0c93f3b14f25b3cb4044f9bc9055fa4a14d89f711528e6760133b", "public_key": "defb924439e0c93f3b14f25b3cb4044f9bc9055fa4a14d89f711528e6760133b", "address": "oasis1qz3pjvqnkyj42d0mllgcjd66fkavzywu4y4uhak7" }, { "bip32_path": "m/44'/474'/7'", "private_key": "517bcc41be16928d32c462ee2a38981ed15b784028eb0914cfe84acf475be342102ad25ab9e1707c477e39da2184f915669791a3a7b87df8fd433f15c926ede2", "public_key": "102ad25ab9e1707c477e39da2184f915669791a3a7b87df8fd433f15c926ede2", "address": "oasis1qr8zs06qtew5gefgs4608a4dzychwkm0ayz36jqg" }, { "bip32_path": "m/44'/474'/8'", "private_key": 
"ee7577c5cef5714ba6738635c6d9851c43428ff3f1e8db2fe7f45fb8d8be7c55a6ec8903ca9062910cc780c9b209c7767c2e57d646bbe06901d090ad81dabe8b", "public_key": "a6ec8903ca9062910cc780c9b209c7767c2e57d646bbe06901d090ad81dabe8b", "address": "oasis1qp7w82tmm6srgxqqzragdt3269334pjtlu44qpeu" }, { "bip32_path": "m/44'/474'/9'", "private_key": "5257b10a5fcfd008824e2216be17be6e47b9db74018f63bb55de4d747cae6d7bba734348f3ec7af939269f62828416091c0d89e14c813ebf5e64e24d6d37e7ab", "public_key": "ba734348f3ec7af939269f62828416091c0d89e14c813ebf5e64e24d6d37e7ab", "address": "oasis1qp9t7zerat3lh2f7xzc58ahqzta5kj4u3gupgxfk" }, { "bip32_path": "m/44'/474'/2147483647'", "private_key": "e7152f1b69ad6edfc05dccf67dad5305edb224669025c809d89de7e56b2cabe58c348f412819da57361cdbd7dfbe695a05dba7f24b8e7328ff991ffadab6c4d2", "public_key": "8c348f412819da57361cdbd7dfbe695a05dba7f24b8e7328ff991ffadab6c4d2", "address": "oasis1qzajez400yvnzcv8x8gtcxt4z5mkfchuh5ca05hq" } ] } ] ``` To generate these test vectors yourself, run: ``` make -C go staking/gen_account_vectors ``` We also provide more extensive test vectors. To generate them, run: ``` make -C go staking/gen_account_vectors_extended ``` ## Implementation Reference implementation is in Oasis Core's [`go/common/crypto/sakg` package]. ## Alternatives ### BIP32-Ed25519 for Hierarchical Key Derivation The [BIP32-Ed25519] (also sometimes referred to as _Ed25519 and BIP32 based on [Khovratovich]_) is a key derivation scheme that also adapts [BIP-0032]'s hierarchical derivation scheme for the [edwards25519 curve] from the Ed25519 signature scheme specified in [RFC 8032]. #### Adoption It is used by Cardano ([CIP 3]) and Tezos (dubbed [bip25519 derivation scheme]). It is supported by [Ledger] and [Trezor] hardware wallets. It is commonly used by Ledger applications, including: - [Polkadot's Ledger app][polkadot-ledger-normal], - [Kusama's Ledger app][kusama-ledger-normal], - [Zcash's Ledger app][zcash-ledger-normal], - [Polymath's Ledger app][polymath-ledger-normal]. #### Security Concerns Its advantage is that it supports non-hardened parent public key to child public key derivation which enables certain use cases described in [BIP-0032][ BIP-0032-use-cases] (i.e. audits, insecure money receiver, ...). At the same time, allowing non-hardened parent public key to child public key derivation presents a serious security concern due to [edwards25519's co-factor issues][diff-bip32-ed25519]. [Jeff Burdges (Web3 Foundation)] warned about a potential [key recovery attack on the BIP32-Ed25519 scheme][BIP32-Ed25519-attack] which could occur under the following two assumptions: 1. The Ed25519 library used in BIP-Ed25519 derivation scheme does clamping immediately before signing. 2. Adversary has the power to make numerous small payments in deep hierarchies of key derivations, observe if the victim can cash out each payment, and adaptively continue this process. The first assumption is very reasonable since the [BIP32-Ed25519] paper makes supporting this part of their specification. The second assumption is a bit more controversial. The [BIP32-Ed25519] paper's specification limits the [BIP-0032] path length (i.e. the number of levels in the tree) to 220. But in practice, no implementation checks that and the issue is that path length is not an explicit part of the BIP32-Ed25519 algorithm. That means that one doesn't know how deep in the tree the current parent/child node is. Hence, it would be very hard to enforce the 220 path length limit. 
#### Implementation Issues One practical issue with [BIP32-Ed25519] is that its authors didn't provide a reference implementation and accompanying test vectors. This has led to a number of incompatible BIP32-Ed25519 implementations. For example, [Vincent Bernardoff's OCaml implementation][BIP32-Ed25519-OCaml] and [Shude Li's Go implementation][BIP32-Ed25519-Go] follow [BIP32-Ed25519]'s original master (i.e. root) key derivation specification and use SHA512 and SHA256 for deriving the private key _k_ and chain code _c_ (respectively) from the seed (i.e. master secret). On the other hand, Ledger's [Python implementation in orakolo repository][BIP32-Ed25519-orakolo] and [C implementation for their Speculos emulator][BIP32-Ed25519-speculos] (variant with `curve` equal to `CX_CURVE_Ed25519` and `mode` equal to `HDW_NORMAL`) use HMAC-SHA512 and HMAC-SHA256 for deriving the private key _k_ and chain code _c_ (respectively) from the seed. Furthermore, [Vincent Bernardoff's OCaml implementation][BIP32-Ed25519-OCaml-root-discard] follows the [BIP32-Ed25519] paper's instructions to discard the seed (i.e. master secret) if the master key's third highest bit of the last byte of _kL_ is not zero. On the other hand, [Shude Li's Go implementation][BIP32-Ed25519-Go-root-clear] just clears the master key's third highest bit and [Ledger's implementations][BIP32-Ed25519-orakolo-root-repeat] repeatedly set the seed to the master key and restart the derivation process until a master key with the desired property is found. Cardano uses its own variants of [BIP32-Ed25519] described in [CIP 3]. In particular, they define different variants of master key derivation from the seed described in [SLIP-0023]. Lastly, some implementations, notably [Oasis' Ledger app][oasis-ledger-app], don't use [BIP32-Ed25519]'s private and public key directly but use the obtained _kL_ (first 32 bytes) of the 64 byte BIP32-Ed25519 derived private key as Ed25519's seed (i.e. non-extended private key). For more details, see [Zondax/ledger-oasis#84]. ### Tor's Next Generation Hidden Service Keys for Hierarchical Key Derivation The [Next-Generation Hidden Services in Tor] specification defines a [hierarchical key derivation scheme for Tor's keys][tor-hd] which employs multiplicative blinding instead of the additive one used by [BIP-0032]. [Jeff Burdges (Web3 Foundation)]'s post on potential [key recovery attack on the BIP32-Ed25519 scheme][BIP32-Ed25519-attack] mentions there is nothing wrong with this proposed scheme. Likewise, [Justin Starry (Solana)]'s [summary of approaches to adopting BIP-0032 for Ed25519][jstarry-summary] recommends this scheme as one of the possible approaches to adapt BIP-0032 for [edwards25519 curve]. One practical issue with using this scheme would be the absence of support by the [Ledger] and [Trezor] hardware wallets. ## Consequences ### Positive - Different applications interacting with the Oasis Network will use a _standards-compliant_ ([BIP-0039], [SLIP-0010], [BIP-0044]) and _interoperable_ account key generation process. Hence, there will be no vendor lock-in and users will have the option to easily switch between standards-compliant applications (e.g. different wallets). - Using [SLIP-0010] avoids a spectrum of issues when trying to support non-hardened public parent key to public child key derivation with the [edwards25519 curve]. Non-hardened key derivation is practically impossible to implement securely due to [edwards25519 curve's co-factor issues][diff-bip32-ed25519].
This is achieved by [SLIP-0010] explicitly disallowing non-hardened public parent key to public child key derivation with the edwards25519 curve. - Using a [3-level BIP-0032 path][key-derivation-paths] (i.e. `m/44'/474'/x'`) allows [Oasis' Ledger app][oasis-ledger-app] to implement automatic switching between existing (legacy) account key generation and the standard account key generation proposed in this ADR. Since the existing (legacy) account key generation used in [Oasis' Ledger app][oasis-ledger-app] uses a 5-level [BIP-0032] path, the Oasis' Ledger app will be able to automatically switch between standard and existing (legacy) account key generation just based on the number of levels of the given BIP-0032 path. ### Negative - The account key generation proposed in this ADR is incompatible with two existing account key generation schemes deployed in the field: - [Oasis' Ledger app][oasis-ledger-app], - [Bitpie mobile wallet][bitpie]. That means that these two applications will need to support two account key generations schemes simultaneously to allow existing users to access their (old) accounts generated via the existing (legacy) account key generation scheme. - [SLIP-0010]'s scheme for [edwards25519 curve] only supports hardened private parent key to private child key derivation. That means it will not be possible to implement wallet features that require non-hardened key derivation, e.g. watch-only feature where one is able to monitor a hierarchical wallet's accounts just by knowing the root public key and deriving all accounts' public keys from that. ### Neutral ## References - [SLIP-0010] - Stellar's [SEP-0005] - [Justin Starry (Solana)]'s [summary of approaches to adopting BIP-0032 for Ed25519][jstarry-summary] - [Andrew Kozlik (SatoshiLabs)]'s [comments on BIP32-Ed25519, SLIP-0010 and SLIP-0023][kozlik-comments] [hd-scheme]: #hierarchical-key-derivation-scheme [diff-bip32-ed25519]: #difficulties-in-adapting-bip-0032-to-edwards25519-curve [key-derivation-paths]: #key-derivation-paths [Account]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/consensus/services/staking.md#accounts [BIP-0032]: https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki [BIP-0032-use-cases]: https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki#use-cases [BIP-0039]: https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki [BIP-0044]: https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki [SLIP-0010]: https://github.com/satoshilabs/slips/blob/master/slip-0010.md [SLIP-0023]: https://github.com/satoshilabs/slips/blob/master/slip-0023.md [SLIP-0044]: https://github.com/satoshilabs/slips/blob/master/slip-0044.md [edwards25519 curve]: https://tools.ietf.org/html/rfc8032#section-5 [RFC 8032]: https://tools.ietf.org/html/rfc8032 [SEP-0005]: https://github.com/stellar/stellar-protocol/blob/master/ecosystem/sep-0005.md [Trezor]: https://trezor.io/ [Ledger]: https://www.ledger.com/ [stellar-ledger-slip10]: https://github.com/LedgerHQ/app-stellar/blob/fc4ec38d9abcae9bd47c95ef93feb5e1ff25961f/src/stellar.c#L42-L49 [near-ledger-slip10]: https://github.com/LedgerHQ/app-near/blob/40ea52a0de81d65b993a49ac705e7edad8efff0e/workdir/app-near/src/crypto/ledger_crypto.c#L24 [solana-ledger-slip10]: https://github.com/LedgerHQ/app-solana/blob/1c72216edf4e5358f719b164a8d1b6100988b34d/src/utils.c#L42-L51 [hedera-ledger-slip10]: https://github.com/LedgerHQ/app-hedera/blob/47066dcfa02379a48a65c33efb1484bb744a30a5/src/hedera.c#L21-L30 [sia-ledger-slip10]: 
https://github.com/LedgerHQ/app-sia/blob/d4dbb5a9cae2e2389d6b6a44701069e234f0f392/src/sia.c#L14 [secp256k1]: https://en.bitcoin.it/wiki/Secp256k1 [risretto-cofactor-issues]: https://ristretto.group/why_ristretto.html#pitfalls-of-a-cofactor [trevor-perrin-clamping]: https://moderncrypto.org/mail-archive/curves/2017/000874.html [modern crypto]: https://moderncrypto.org/ [moderncrypto-ed25519-hd1]: https://moderncrypto.org/mail-archive/curves/2017/000858.html [moderncrypto-ed25519-hd2]: https://moderncrypto.org/mail-archive/curves/2017/000866.html [UTXO]: https://en.wikipedia.org/wiki/Unspent_transaction_output [Jeff Burdges (Web3 Foundation)]: https://github.com/burdges [trezor-bip44-paths]: https://github.com/trezor/trezor-firmware/blob/master/docs/misc/coins-bip44-paths.md [BIP32-Ed25519]: https://github.com/WebOfTrustInfo/rwot3-sf/blob/master/topics-and-advance-readings/HDKeys-Ed25519.pdf [Khovratovich]: https://en.wikipedia.org/wiki/Dmitry_Khovratovich [BIP32-Ed25519-attack]: https://web.archive.org/web/20210513183118/https://forum.w3f.community/t/key-recovery-attack-on-bip32-ed25519/44 [BIP32-Ed25519-orakolo]: https://github.com/LedgerHQ/orakolo/blob/0b2d5e669ec61df9a824df9fa1a363060116b490/src/python/orakolo/HDEd25519.py [BIP32-Ed25519-orakolo-root-repeat]: https://github.com/LedgerHQ/orakolo/blob/0b2d5e669ec61df9a824df9fa1a363060116b490/src/python/orakolo/HDEd25519.py#L130-L133 [BIP32-Ed25519-speculos]: https://github.com/LedgerHQ/speculos/blob/dce04843ad7d4edbcd399391b3c39d30b37de3cd/src/bolos/os_bip32.c [BIP32-Ed25519-OCaml]: https://github.com/vbmithr/ocaml-bip32-ed25519 [BIP32-Ed25519-OCaml-root-discard]: https://github.com/vbmithr/ocaml-bip32-ed25519/blob/461e6a301996d41755acd35d82cd7ab6e30a8437/src/bip32_ed25519.ml#L120-L128 [BIP32-Ed25519-Go]: https://github.com/islishude/bip32 [BIP32-Ed25519-Go-root-clear]: https://github.com/islishude/bip32/blob/72b7efc571fdb69a3f0ce4caf7078e5466b9273d/xprv.go#L51-L53 [Zondax/ledger-oasis#84]: https://github.com/Zondax/ledger-oasis/issues/84#issuecomment-827017112 [CIP 3]: https://cips.cardano.org/cips/cip3/ [bip25519 derivation scheme]: https://medium.com/@obsidian.systems/v2-2-0-of-tezos-ledger-apps-babylon-support-and-more-e8df0e4ea161 [polkadot-ledger-normal]: https://github.com/Zondax/ledger-polkadot/blob/7c3841a96caa5af6b78d49aac52b1373f10e3773/app/src/crypto.c#L44-L52 [kusama-ledger-normal]: https://github.com/Zondax/ledger-kusama/blob/90593207558ed82ad97123b730b07bcc33aeabf2/app/src/crypto.c#L44-L52 [zcash-ledger-normal]: https://github.com/Zondax/ledger-zcash/blob/61fe324e567af59d39c609b84b591e28997c1a61/app/src/crypto.c#L173-L178 [polymath-ledger-normal]: https://github.com/Zondax/ledger-polymesh/blob/6228950a76c945fb0b5d7fc19fa475eccdf4160d/app/src/crypto.c#L44-L52 [Next-Generation Hidden Services in Tor]: https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.txt [tor-hd]: https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.txt#n2135 [oasis-ledger-app]: https://github.com/LedgerHQ/app-oasis [bitpie]: https://github.com/oasisprotocol/docs/blob/main/docs/general/manage-tokens/faq.mdx [Andrew Kozlik (SatoshiLabs)]: https://github.com/andrewkozlik [kozlik-comments]: https://github.com/satoshilabs/slips/issues/703#issuecomment-515213584 [Justin Starry (Solana)]: https://github.com/jstarry [jstarry-summary]: https://github.com/solana-labs/solana/issues/6301#issuecomment-551184457 [`go/common/crypto/sakg` package]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/crypto/sakg --- ## 
ADR 0009: Ed25519 Signature Verification Semantics ## Component Oasis Core ## Changelog - 2021-05-10: Initial version ## Status Informative ## Context > In programming, it's often the buts in the specification that kill you. > > -- Boris Beizer For a large host of reasons, mostly historical, there are numerous definitions of "Ed25519 signature validation" in the wild, which have the potential to be mutually incompatible. This ADR serves to provide a rough high-level overview of the issue, and to document the current definition of "Ed25519 signature verification" as used by Oasis Core. ## Decision The Oasis Core consensus layer (and all of the Go components) currently uses the following Ed25519 verification semantics. - Non-canonical s is rejected (MUST enforce `s < L`) - Small order A/R are rejected - Non-canonical A/R are accepted - The cofactored verification equation MUST be used (`[8][S]B = [8]R + [8][k]A`) - A/R may have a non-zero torsion component. ### Reject Non-canonical s Ed25519 signatures are trivially malleable unless the scalar component is constrained to `0 <= s < L`, as is possible to create valid signatures from an existing public key/message/signature tuple by adding L to s. This check is mandated in all recent formulations of Ed25519 including but not limited to RFC 8032 and FIPS 186-5, and most modern implementations will include this check. Note: Only asserting that `s[31] & 224 == 0` as done in older implementations is insufficient. ### Reject Small Order A/R Rejecting small order A is required to make the signature scheme strongly binding (resilience to key/message substitution attacks). Rejecting (or accepting) small order R is not believed to have a security impact. ### Accept Non-canonical A/R The discrete logarithm of the Ed25519 points that have a valid non-canonical encoding and are not small order is unknown, and accepting them is not believed to have a security impact. Note: RFC 8032 and FIPS 186-5 require rejecting non-canonically encoded points. ### Cofactored Verification Equation There are two forms of the Ed25519 verification equation commonly in use, `[S]B = R + [k]A` (cofactor-less), and `[8][S]B = [8]R + [8][k]A` (cofactored), which are mutually incompatible in that it is possible to produce signatures that pass with one and fail with the other. The cofactored verification equation is explicitly required by FIPS 186-5, and is the only equation that is compatible with batch signature verification. Additionally, the more modern lattice-reduction based technique for fast signature verification is incompatible with existing implementations unless cofactored. ### Accept A/R With Non-zero Torsion No other library enforces this, the check is extremely expensive, and with how Oasis Core currently uses Ed25519 signatures, this has no security impact. In the event that Oasis Core does exotic things that, for example, require that the public key is in the prime-order subgroup, this must be changed. ## Consequences ### Positive The verification semantics in use by Oasis Core provides the following properties: - SUF-CMA security - Non-repudiation (strong binding) - Compatibility with batch and lattice reduction based verification. ### Negative The combination of "reject small order A/R" and "accept non-canonical A/R" is difficult to test as it is not easily possible to generate valid signatures that meet both conditions. ### Neutral ### Future Improvements WARNING: Any changes to verification semantics are consensus breaking. 
- Consider switching to the "Algorithm 2" definition, for ease of testing and because it is the default behavior provided by curve25519-voi. - Consider switching to ZIP-215 semantics, to be in line with other projects and gain wider library support (this gives up strong binding). - Switching to ristretto255 (sr25519) eliminates these problems entirely. ## Recommendations For Future Projects The definition used in Oasis Core is partly historical. New code should strongly consider using one of FIPS 186-5, Algorithm 2, or ZIP-215 semantics. ## References - [Taming the many EdDSAs](https://eprint.iacr.org/2020/1244.pdf) - [Explicitly Defining and Modifying Ed25519 Validation Rules](https://zips.z.cash/zip-0215) --- ## ADR 0010: VRF-based Committee Elections ## Component Oasis Core ## Changelog - 2021-05-10: Initial version ## Status Accepted ## Context While functional, the current PVSS-based random beacon is neither all that performant, nor all that scalable. To address both concerns, this ADR proposes transitioning the election procedure to one that is based on cryptographic sortition of Verifiable Random Function (VRF) outputs. ## Decision ### Cryptographic Primitives Let the VRF to be used across the system be ECVRF-EDWARDS25519-SHA512-ELL2 from the [Verifiable Random Functions (VRFs) draft (v10)][1], with the following additions and extra clarifications: - All public keys MUST be validated via the "ECVRF Validate Key" procedure as specified in section 5.6.1 (Small order public keys MUST be rejected). - The string_to_point routine MUST reject non-canonically encoded points as specified in RFC 8032. Many ed25519 implementations are lax about enforcing this when decoding. - When decoding s in the ECVRF_verify routine, the s scalar MUST fall within the range 0 ≤ s < L. This change will make proofs non-malleable. Note that this check is unneeded for the c scalar as it is 128 bits, and thus will always lie within the valid range. This check was not present in the IETF draft prior to version 10. - Implementations MAY choose to incorporate additional randomness into the ECVRF_nonce_generation_RFC8032 function. Note that proofs (pi_string) are not guaranteed to be unique or deterministic even without this extension (the signer can use any arbitrary value for the nonce and produce a valid proof, without altering beta_string). Let the tuple-oriented cryptographic hash function be TupleHash256 from [NIST SP 800-185][2]. ### Node Descriptor Changes The node descriptor of each node will be extended to include the following data structure. ```golang type Node struct { // ... existing fields omitted ... // VRF is the public key used by the node to generate VRF proofs. VRF *VRFInfo `json:"vrf,omitempty"` } type VRFInfo struct { // ID is the unique identifier of the node used to generate VRF proofs. ID signature.PublicKey `json:"id"` } ``` The VRF public key shall be a long-term Ed25519 public key that is distinct from every other key used by the node. The key MUST NOT be small order. The existing `Beacon` member of the node descriptor is considered deprecated and will first be ignored by the consensus layer, and then removed in a subsequent version following a transition period. ### Consensus Parameters The scheduler module will have the following additional consensus parameters that control behavior. ```golang type ConsensusParameters struct { // ... existing fields omitted ... // VRFParameters are the parameters for the VRF-based cryptographic // sortition-based election system.
VRFParameters *VRFParameters `json:"vrf_params"` } // VRFParameters are the VRF scheduler parameters. type VRFParameters struct { // AlphaHighQualityThreshold is the minimum number of proofs (Pi) // that must be received for the next input (Alpha) to be considered // high quality. If the VRF input is not high quality, runtimes will // be disabled for the next epoch. AlphaHighQualityThreshold uint64 `json:"alpha_hq_threshold,omitempty"` // Interval is the epoch interval (in blocks). Interval int64 `json:"interval,omitempty"` // ProofSubmissionDelay is the wait period in blocks after an epoch // transition that nodes MUST wait before attempting to submit a // VRF proof for the next epoch's elections. ProofSubmissionDelay int64 `json:"proof_delay,omitempty"` // PrevState is the VRF state from the previous epoch, for the // current epoch's elections. PrevState *PrevVRFState `json:"prev_state,omitempty"` } // PrevVRFState is the previous epoch's VRF state that is to be used for // elections. type PrevVRFState struct { // Pi is the accumulated pi_string (VRF proof) outputs for the // previous epoch. Pi map[signature.PublicKey]*signature.Proof `json:"pi,omitempty"` // CanElectCommittees is true iff the previous alpha was generated // from high quality input such that committee elections are possible. CanElectCommittees bool `json:"can_elect,omitempty"` } ``` ### Consensus State, Events, and Transactions The scheduler component will maintain and make available the following additional consensus state. ```golang // VRFState is the VRF scheduler state. type VRFState struct { // Epoch is the epoch for which this alpha is valid. Epoch EpochTime `json:"epoch"` // Alpha is the active VRF alpha_string input. Alpha []byte `json:"alpha"` // Pi is the accumulated pi_string (VRF proof) outputs. Pi map[signature.PublicKey]*signature.Proof `json:"pi,omitempty"` // AlphaIsHighQuality is true iff the alpha was generated from // high quality input such that elections will be possible. AlphaIsHighQuality bool `json:"alpha_hq"` // SubmitAfter is the block height after which nodes may submit // VRF proofs for the current epoch. SubmitAfter int64 `json:"submit_after"` } ``` Implementations MAY cache the beta_string values that are generated from valid pi_strings for performance reasons; however, as this is trivial to recalculate, it does not need to be explicitly exposed. Upon epoch transition, the scheduler will emit the following event. ```golang // VRFEvent is the VRF scheduler event. type VRFEvent struct { // Epoch is the epoch that Alpha is valid for. Epoch EpochTime `json:"epoch,omitempty"` // Alpha is the active VRF alpha_string input. Alpha []byte `json:"alpha,omitempty"` // SubmitAfter is the block height after which nodes may submit // VRF proofs for the current epoch. SubmitAfter int64 `json:"submit_after"` } ``` ```golang type VRFProve struct { // Epoch is the epoch that this VRF proof is for. Epoch epochtime.EpochTime `json:"epoch"` // Pi is the VRF proof for the current epoch. Pi []byte `json:"pi"` } ``` ### VRF Operation For the genesis epoch, let the VRF alpha_string input be derived as: `TupleHash256((chain_context, I2OSP(epoch, 8)), 256, "oasis-core:vrf/alpha")` For every subsequent epoch, let alpha_string be derived as: `TupleHash256((chain_context, I2OSP(epoch, 8), beta_0, ...
beta_n), 256, "oasis-core:vrf/alpha")` where beta_0 through beta_n are the beta_string outputs gathered from all valid pi_strings submitted during the previous epoch (after the on-transition culling is complete), in ascending lexicographic order by VRF key. If the number of beta values incorporated into the TupleHash computation is greater than or equal to the AlphaHighQualityThreshold parameter, the alpha is considered "strong", and committee elections are allowed based on the proofs generated with this alpha. If the alpha value is weak (insufficient nodes submitted proofs), only validator elections are allowed. Upon receiving a VRFEvent, all eligible nodes MUST wait a minimum of ProofSubmissionDelay blocks, and then submit a VRFProve transaction, with the Proof field set to the output of `ECVRF_prove(VRFKey_private, alpha_string)`. Upon receiving a VRFProve transaction, the scheduler does the following: 1. Rejects the transaction if less than ProofSubmissionDelay blocks have elapsed since the transition into the current epoch. 2. Checks to see if the node is tentatively eligible to be included in the next election according to the following criteria: - Not frozen. - Has registered the VRF.ID used to generate the proof prior to the transition into the current epoch (may slash). - Has not already submitted a proof for the current epoch (may slash if the proof is different). 3. Validates the proof, and if valid, stores the VRF.ID + pi_string in the consensus state. ### VRF Committee Elections The following changes are made to the committee election process. On epoch transition, as long as the alpha used to generate the proofs is considered strong, re-validate node eligibility for all nodes that submitted a VRF proof (not frozen, VRF.ID has not changed), and cull proofs from nodes that are now ineligible. If the alpha value is considered weak, no committee elections are allowed. For each committee: 1. Filter the node list based on the current stake/eligibility criteria, and additionally filter out nodes that have not submitted a valid VRF proof. 2. For each eligible (node, committee kind, committee role) tuple, derive a sortition string as: `s_n = TupleHash256((chain_context, I2OSP(epoch, 8), runtime_id, I2OSP(kind, 1), I2OSP(role, 1), beta_n), 256, "oasis-core:vrf/committee")` 3. Sort s_0 ... s_n in ascending lexicographic order. 4. Select the requisite nodes that produced the sortition strings starting from the head of the sorted list as the committee. Committee elections MUST be skipped for the genesis and subsequent epoch, as the genesis epoch has no VRF proofs, and proofs submitted during the genesis epoch are based on the bootstrap alpha_string. ### VRF Validator Elections The only place where the beacon is currently used in the validator selection process is to pick a single node out of multiple eligible nodes controlled by the same entity to become a validator. When this situation occurs, the validator is selected as follows: 1. For all validator-eligible nodes controlled by the given entity, derive a sortition string as: `s_n = TupleHash256((chain_context, I2OSP(epoch, 8), beta_n), 256, "oasis-core:vrf/validator")` 2. Sort s_0 ... s_n in ascending lexicographic order. 3. Select the node that produced the 0th sortition string in the sorted list as the validator. This is safe to do with beta values generated via the bootstrap alpha string, as it is up to the entity running the nodes in question which of them acts as a validator anyway.
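To make the sortition step above concrete, the following is a minimal Go sketch of the committee election; it is not the oasis-core implementation. The `tupleHash256` helper is a self-contained stand-in so the example compiles (a real implementation MUST use TupleHash256 from NIST SP 800-185 with the customization strings shown in the formulas above), and `NodeID` stands in for `signature.PublicKey`.

```golang
package sortition

import (
	"bytes"
	"encoding/binary"
	"sort"

	"golang.org/x/crypto/sha3"
)

// NodeID stands in for signature.PublicKey in this sketch.
type NodeID [32]byte

// tupleHash256 is a placeholder: it length-prefixes each tuple element and
// hashes with SHA3-256 so the sketch is self-contained. Production code MUST
// use TupleHash256 (NIST SP 800-185) with the given customization string.
func tupleHash256(customization string, tuple ...[]byte) []byte {
	h := sha3.New256()
	h.Write([]byte(customization))
	for _, item := range tuple {
		var l [8]byte
		binary.BigEndian.PutUint64(l[:], uint64(len(item)))
		h.Write(l[:])
		h.Write(item)
	}
	return h.Sum(nil)
}

// i2osp8 is I2OSP(x, 8): the 8-byte big-endian encoding of x.
func i2osp8(x uint64) []byte {
	var b [8]byte
	binary.BigEndian.PutUint64(b[:], x)
	return b[:]
}

// electCommittee picks `size` members from the eligible nodes. betas maps
// each eligible node to the beta_string its valid VRF proof evaluated to.
func electCommittee(
	chainContext, runtimeID []byte,
	epoch uint64,
	kind, role byte,
	betas map[NodeID][]byte,
	size int,
) []NodeID {
	type candidate struct {
		node NodeID
		s    []byte // sortition string s_n
	}
	candidates := make([]candidate, 0, len(betas))
	for node, beta := range betas {
		s := tupleHash256("oasis-core:vrf/committee",
			chainContext, i2osp8(epoch), runtimeID,
			[]byte{kind}, []byte{role}, beta,
		)
		candidates = append(candidates, candidate{node: node, s: s})
	}
	// Sort s_0 ... s_n in ascending lexicographic order.
	sort.Slice(candidates, func(i, j int) bool {
		return bytes.Compare(candidates[i].s, candidates[j].s) < 0
	})
	if size > len(candidates) {
		size = len(candidates)
	}
	// The nodes that produced the smallest sortition strings form the committee.
	elected := make([]NodeID, 0, size)
	for _, c := range candidates[:size] {
		elected = append(elected, c.node)
	}
	return elected
}
```

The validator tie-break works the same way, using the `"oasis-core:vrf/validator"` customization string without the runtime, kind, and role inputs, and selecting the single node with the lexicographically smallest sortition string.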
As a concession for the transition process, if the number of validators that submit proofs is less than the minimum number of validators configured in the scheduler, validator tie-breaks (and only validator tie-breaks) will be done by permuting the node list (as in the current PVSS beacon), using entropy from the block hash. As nodes are required to submit a VRF public key as part of non-genesis registrations, and each node will attempt to submit a VRF proof, this backward compatibility hack should only be triggered on the genesis epoch, and can be removed on the next major upgrade. ### Timekeeping Changes Timekeeping will go back to a fixed-interval epoch transition mechanism, with all of the beacon related facilities removed. As this is primarily a module rename and code removal, the exact details are left unspecified. ## Consequences ### Positive - This is significantly simpler from a design standpoint. - This is significantly faster and scales significantly better. - It is possible to go back to fixed-length epochs again. ### Negative - The system loses a way to generate entropy at the consensus layer. - The simple design involves an additional 1-epoch period after network initialization where elections are not available. ### Neutral - I need to implement TupleHash256. ## References [1]: https://datatracker.ietf.org/doc/draft-irtf-cfrg-vrf/ [2]: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-185.pdf --- ## ADR 0011: Incoming Runtime Messages ## Component Oasis Core ## Changelog - 2022-01-07: Update based on insights from implementation - 2021-12-09: Introduce an explicit fee field, clarify token transfers - 2021-10-26: Initial draft ## Status Accepted ## Context There is currently a single mechanism through which the consensus layer and a runtime may interact in a consistent and secure manner. This is the mechanism of runtime messages that can be emitted by runtimes (see [ADR 3]) and allows the consensus layer to act on a runtime's behalf. This mechanism is currently used for _pulling_ tokens from consensus layer accounts that have previously set proper allowances and for updating the runtime descriptor when the runtime governance model (see [ADR 4]) is in effect. This ADR proposes to implement the reverse mechanism where anyone issuing a transaction at the consensus layer can queue arbitrary messages for processing by the runtime in its next round. [ADR 3]: 0003-consensus-runtime-token-transfer.md [ADR 4]: 0004-runtime-governance.md ## Decision On a high level this proposal affects the following components: - A new transaction method `roothash.SubmitMsg` is added to the roothash consensus service to queue a new message for the specific runtime. - Additional per-runtime state is added to the roothash service containing the currently queued messages, sorted by arrival time. - During processing of a round the proposer may propose to pop any number of messages and process them by pushing them to the runtime, similar as it does for transaction batches. This is of course subject to discrepancy detection. - The runtime host protocol is updated to allow the host to push arbitrary incoming messages in addition to the transaction batch. - The runtime descriptor is updated to include a field that specifies the maximum size of the incoming message queue. ### Incoming Message Each incoming message is represented as follows: ```golang type IncomingMessage struct { // ID is the unique identifier of the message. 
ID uint64 `json:"id"` // Caller is the address of the caller authenticated by the consensus layer. Caller staking.Address `json:"caller"` // Tag is an optional tag provided by the caller which is ignored and can be used to match // processed incoming message events later. Tag uint64 `json:"tag,omitempty"` // Fee is the fee sent into the runtime as part of the message being sent. // The fee is transferred before the message is processed by the runtime. Fee quantity.Quantity `json:"fee,omitempty"` // Tokens are any tokens sent into the runtime as part of the message being // sent. The tokens are transferred before the message is processed by the // runtime. Tokens quantity.Quantity `json:"tokens,omitempty"` // Data is arbitrary runtime-dependent data. Data []byte `json:"data,omitempty"` } ``` ### Executor Commitments The compute results header structure is updated to include two fields that specify the number and hash of incoming messages included in a batch as follows: ```golang type ComputeResultsHeader struct { // ... existing fields omitted ... // InMessagesHash is the hash of processed incoming messages. InMessagesHash *hash.Hash `json:"in_msgs_hash,omitempty"` // InMessagesCount is the number of processed incoming messages. InMessagesCount uint32 `json:"in_msgs_count,omitempty"` } ``` Where the hash of included incoming messages is computed as follows: ```golang // InMessagesHash returns a hash of provided incoming runtime messages. func InMessagesHash(msgs []IncomingMessage) (h hash.Hash) { if len(msgs) == 0 { // Special case if there are no messages. h.Empty() return } return hash.NewFrom(msgs) } ``` Note that this also requires the enclave RAK signature (for runtimes requiring the use of TEEs) to be computed over this updated new header. ### Runtime Block Header The runtime block header is updated to include the `InMessagesHash` field as follows: ```golang type Header struct { // ... existing fields omitted ... // InMessagesHash is the hash of processed incoming messages. InMessagesHash hash.Hash `json:"in_msgs_hash"` } ``` ### Runtime Descriptor This proposal updates the runtime transaction scheduler parameters (stored under the `txn_scheduler` field of the runtime descriptor) as follows: ```golang type TxnSchedulerParameters struct { // ... existing fields omitted ... // MaxInMessages specifies the maximum size of the incoming message queue // for this runtime. MaxInMessages uint32 `json:"max_in_messages,omitempty"` } ``` It also updates the runtime staking parameters (stored under the `staking` field of the runtime descriptor) as follows: ```golang type RuntimeStakingParameters struct { // ... existing fields omitted ... // MinInMessageFee specifies the minimum fee that the incoming message must // include for the message to be queued. MinInMessageFee quantity.Quantity `json:"min_in_msg_fee,omitempty"` } ``` ### State This proposal introduces/updates the following consensus state in the roothash module: - **Incoming message queue metadata (`0x28`)** Metadata for the incoming message queue. ``` 0x28 ``` The value is the following CBOR-serialized structure: ```golang type IncomingMessageQueue struct { // Size contains the current size of the queue. Size uint32 `json:"size,omitempty"` // NextSequenceNumber contains the sequence number that should be used for // the next queued message. 
NextSequenceNumber uint64 `json:"next_sequence_number,omitempty"` } ``` - **Incoming message queue item (`0x29`)** A queue of incoming messages pending to be delivered to the runtime in the next round. ``` 0x29 ``` The value is a CBOR-serialized `IncomingMessage` structure. ### Transaction Methods This proposal updates the following transaction methods in the roothash module: #### Submit Message The submit message method allows anyone to submit incoming runtime messages to be queued for delivery to the given runtime. **Method name:** ``` roothash.SubmitMsg ``` **Body:** ```golang type SubmitMsg struct { ID common.Namespace `json:"id"` Fee quantity.Quantity `json:"fee,omitempty"` Tokens quantity.Quantity `json:"tokens,omitempty"` Data []byte `json:"data,omitempty"` } ``` **Fields:** - `id` specifies the destination runtime's identifier. - `fee` specifies the fee that should be sent into the runtime as part of the message being sent. The fee is transferred before the message is processed by the runtime. - `tokens` specifies any tokens to be sent into the runtime as part of the message being sent. The tokens are transferred before the message is processed by the runtime. - `data` arbitrary data to be sent to the runtime for processing. The transaction signer implicitly specifies the caller. Upon executing the submit message method the following actions are performed: - Gas is accounted for (new `submitmsg` gas operation). - The runtime descriptor for runtime `id` is retrieved. If the runtime does not exist or is currently suspended the method fails with `ErrInvalidRuntime`. - The `txn_scheduler.max_in_messages` field in the runtime descriptor is checked. If it is equal to zero the method fails with `ErrIncomingMessageQueueFull`. - If the value of the `fee` field is smaller than the value of the `staking.min_in_msg_fee` field in the runtime descriptor the method fails with `ErrIncomingMessageInsufficientFee`. - The number of tokens corresponding to `fee + tokens` are moved from the caller's account into the runtime account. If there is insufficient balance to do so the method fails with `ErrInsufficientBalance`. - The incoming queue metadata structure is fetched. If it doesn't yet exist it is populated with zero values. - If the value of the `size` field in the metadata structure is equal to or larger than the value of the `txn_scheduler.max_in_messages` field in the runtime descriptor the method fails with `ErrIncomingMessageQueueFull`. - An `IncomingMessage` structure is generated based on the caller and method body and the value of the `next_sequence_number` metadata field is used to generate a proper key for storing it in the queue. The structure is inserted into the queue. - The `size` and `next_sequence_number` fields are incremented and the updated metadata is saved. ### Queries This proposal adds the following new query methods in the roothash module by updating the `roothash.Backend` interface as follows: ```golang type Backend interface { // ... existing methods omitted ... // GetIncomingMessageQueueMeta returns the given runtime's incoming message queue metadata. GetIncomingMessageQueueMeta(ctx context.Context, request *RuntimeRequest) (*message.IncomingMessageQueueMeta, error) // GetIncomingMessageQueue returns the given runtime's queued incoming messages. GetIncomingMessageQueue(ctx context.Context, request *InMessageQueueRequest) ([]*message.IncomingMessage, error) } // IncomingMessageQueueMeta is the incoming message queue metadata. 
type IncomingMessageQueueMeta struct { // Size contains the current size of the queue. Size uint32 `json:"size,omitempty"` // NextSequenceNumber contains the sequence number that should be used for the next queued // message. NextSequenceNumber uint64 `json:"next_sequence_number,omitempty"` } // InMessageQueueRequest is a request for queued incoming messages. type InMessageQueueRequest struct { RuntimeID common.Namespace `json:"runtime_id"` Height int64 `json:"height"` Offset uint64 `json:"offset,omitempty"` Limit uint32 `json:"limit,omitempty"` } ``` ### Runtime Host Protocol This proposal updates the existing host-to-runtime requests in the runtime host protocol as follows: ```golang type RuntimeExecuteTxBatchRequest struct { // ... existing fields omitted ... // IncomingMessages are the incoming messages from the consensus layer that // should be processed by the runtime in this round. IncomingMessages []*IncomingMessage `json:"in_messages,omitempty"` } ``` ### Rust Runtime Support Library This proposal updates the `transaction::Dispatcher` trait as follows: ```rust pub trait Dispatcher: Send + Sync { // ... existing unchanged methods omitted ... /// Execute the transactions in the given batch. fn execute_batch( &self, ctx: Context, batch: &TxnBatch, in_msgs: Vec, // Added argument. ) -> Result; } ``` ### Executor Processing The executor processing pipeline is changed such that pending incoming messages are queried before the next round starts and are then passed to the runtime via the runtime host protocol. The executor may perform checks to estimate resource use early, similarly to how checks are performed for transactions as they arrive. ### Runtime Processing The proposal requires that messages are processed by the runtime in queue order (e.g. on each round `InMessagesCount` messages are popped from the queue). This simplifies the design, but the runtimes need to carefully consider how many resources to allocate for executing messages (vs. regular transactions) in a round. The runtime has full autonomy in choosing how many messages to execute as it is given the complete message batch. It should first compute how many messages to process by running them in "check" mode, computing how much gas (or other resources) they take, and then choosing as many as fit. Specifying these details is left to the runtime implementation, although the SDK is expected to adopt an approach with separate `max_inmsg_gas` and `max_inmsg_slots` parameters, which limit how resources are allocated for incoming message processing in each round. If a single message exceeds either of these limits, it will result in an execution failure of that message. ### Root Hash Commitment Processing The processing of executor commitments is modified as follows: - No changes are made to the discrepancy detection and resolution protocols besides the newly added fields being taken into account in discrepancy determination. - After a successful round, the `InMessagesCount` field of the compute body is checked and the corresponding number of messages are popped from the queue in increasing order of their sequence numbers. The queue metadata is updated accordingly by decrementing the value of the `size` field, and the `InMessagesHash` is added to the newly emitted block header. ## Consequences ### Positive - Consensus layer transactions can trigger actions in the runtime without additional runtime transactions.
This would also allow pushing tokens into the runtime via a consensus layer transaction or even invoking smart contracts that cause consensus layer actions to happen (via emitted messages). - Each runtime can define the format of incoming messages. The SDK would likely use something that contains a transaction (either signed to support non-Ed25519 callers or unsigned for smaller Ed25519-based transactions) so arbitrary invocations would be possible. ### Negative - Storing the queue will increase the size of consensus layer state. - This could lead to incoming messages being used exclusively to interact with a runtime, leading to the consensus layer getting clogged with incoming message submission transactions. Posting such messages would be more expensive, though, as it would require paying per-transaction consensus layer fees in addition to the runtime fees. If clogging does eventually happen, the fees can be adjusted to encourage transaction submission to runtimes directly. ### Neutral - Allows rollup-like constructions where all transactions are posted to the consensus layer first and the runtime is just executing those. - Retrieving the result of processing an incoming message is more involved. --- ## ADR 0012: Runtime Message Results ## Component Oasis Core ## Changelog - 2021-12-04: Initial version - 2021-12-10: Extend the implementation section - 2022-01-27: Update the concrete result types ## Status Accepted ## Context Currently, the results of emitted runtime messages are `MessageEvent`s, which only provide information on whether the message execution was successful or not. For various use cases additional information about message results would be useful. One such use case is supporting staking by runtimes. Currently, a runtime can emit an `AddEscrow` message, but is unaware of the actual amount of shares it obtained as a result of the added escrow. For some use cases (e.g. a runtime staking user-deposited funds) this information is crucial for accounting. Similarly, for `ReclaimEscrow`, the runtime doesn't have direct information about the epoch at which the stake gets debonded. The only way to currently obtain this data is to subscribe to consensus events, something which the runtime doesn't have access to. Adding results to `MessageEvent` solves both of the mentioned use cases: - for `AddEscrow` the result should contain the amount of shares obtained with the escrow - for `ReclaimEscrow` the result should contain the amount of shares and the epoch at which the stake gets debonded ## Decision Implement support for arbitrary result data in `MessageEvent` runtime message results. ## Implementation - A `Result` field is added to the `roothash.MessageEvent` struct: ```golang // MessageEvent is a runtime message processed event. type MessageEvent struct { Module string `json:"module,omitempty"` Code uint32 `json:"code,omitempty"` Index uint32 `json:"index,omitempty"` // Result contains message execution results for successfully executed messages. Result cbor.RawMessage `json:"result,omitempty"` } ``` The `Result` field is runtime message specific and is present only when the message execution was successful (`Code` is `errors.CodeNoError`). - The `ExecuteMessage` method in the `MessageSubscriber` interface is updated to include a response: ```golang // MessageSubscriber is a message subscriber interface. type MessageSubscriber interface { // ExecuteMessage executes a given message.
ExecuteMessage(ctx *Context, kind, msg interface{}) (interface{}, error) } ``` - `Publish` method of the `MessageDispatcher` interface is updated to include the response: ```golang // MessageDispatcher is a message dispatcher interface. type MessageDispatcher interface { // Publish publishes a message of a given kind by dispatching to all subscribers. // Subscribers can return a result, but at most one subscriber should return a // non-nil result to any published message. Panics in case more than one subscriber // returns a non-nil result. // // In case there are no subscribers ErrNoSubscribers is returned. Publish(ctx *Context, kind, msg interface{}) (interface{}, error) } ``` In case the `Publish` `error` is `nil` the Roothash backend propagates the result to the emitted `MessageEvent`. With these changes the runtime is able to obtain message execution results via `MessageEvents` in `RoundResults`. ### Message Execution Results - `AddEscrow` message execution result is the `AddEscrowResult`: ```golang type AddEscrowResult struct { Owner Address `json:"owner"` Escrow Address `json:"escrow"` Amount quantity.Quantity `json:"amount"` NewShares quantity.Quantity `json:"new_shares"` } ``` - `ReclaimEscrow` message execution result is the `ReclaimEscrowResult`: ```golang type ReclaimEscrowResult struct { Owner Address `json:"owner"` Escrow Address `json:"escrow"` Amount quantity.Quantity `json:"amount"` DebondingShares quantity.Quantity `json:"debonding_shares"` RemainingShares quantity.Quantity `json:"remaining_shares"` DebondEndTime beacon.EpochTime `json:"debond_end_time"` } ``` - `Transfer` message execution result is the `TransferResult`: ```golang type TransferResult struct { From Address `json:"from"` To Address `json:"to"` Amount quantity.Quantity `json:"amount"` } ``` - `Withdraw` message execution result is the `WithdrawResult`: ```golang type WithdrawResult struct { Owner Address `json:"owner"` Beneficiary Address `json:"beneficiary"` Allowance quantity.Quantity `json:"allowance"` AmountChange quantity.Quantity `json:"amount_change"` } ``` - `UpdateRuntime` message execution result is the registry `Runtime` descriptor. ## Consequences ### Positive All the functionality for runtimes to support staking is implemented. ### Negative Requires breaking changes. ### Neutral ### Alternatives considered Add support to runtimes for subscribing to consensus events. A more heavyweight solution, that can still be implemented in future if desired. Decided against it due to simplicity of the message events solution for the present use cases. --- ## ADR 0013: Runtime Upgrade Improvements ## Component Oasis Core ## Changelog - 2022-01-25: Initial version ## Status Accepted ## Context Currently major runtime updates incur at least one epoch worth of downtime for the transition period. This is suboptimal, and can be improved to allow seamless runtime updates, with some changes to the runtime descriptor and scheduler behavior. ## Decision Implement support for seamless breaking runtime upgrades. ## Implementation Runtime descriptor related changes: ```golang // Runtime represents a runtime. type Runtime struct { // nolint: maligned // Deployments specifies the runtime deployments (versions). Deployments []*VersionInfo `json:"deployments"` // Version field is relocated to inside the VersionInfo structure. // Other unchanged fields omitted for brevity. } // VersionInfo is the per-runtime version information. type VersionInfo struct { // Version of the runtime. 
Version version.Version `json:"version"` // ValidFrom stores the epoch at which this version is valid. ValidFrom beacon.EpochTime `json:"valid_from"` // TEE is the enclave version information, in an enclave-provider-specific // format, if any. TEE []byte `json:"tee,omitempty"` } ``` The intended workflow here is to: - Deploy runtimes with the initial Deployment populated. - Update the runtime version via the deployment of a new version of the descriptor with an additional version info entry. Sufficient nodes must upgrade their runtime binary and configuration by the `ValidFrom` epoch, or the runtime will fail to be scheduled (no special handling is done; this is the existing "insufficient nodes" condition). - Aborting or altering pending updates via the deployment of a new version of the descriptor with the removed/amended not-yet-valid `Deployments` is possible in this design, but perhaps should be forbidden. - Altering existing `Deployments` entries is strictly forbidden, except for the removal of superseded descriptors. - Deploying descriptors with `Deployments` that will never be valid (as in one that is superseded by a newer version) is strictly forbidden. The existing node descriptor is a flat vector of `Runtime` entries containing the runtime ID, version, and TEE information, so no changes are required. On transition to an epoch where a new version takes effect, the consensus layer MAY prune the descriptor's `Deployments` field of superseded versions. The only scheduler- and worker-side changes are to incorporate the runtime version into scheduling, and to pick the correct deployed version of the runtime to use, both on a once-per-epoch-per-runtime basis. ## Consequences ### Positive - Seamless runtime upgrades will be possible. - The code changes required are relatively minimal, and this is likely the simplest possible solution that will work. ### Negative - It may be overly simplistic. --- ## ADR 0014: Signing Runtime Transactions with Hardware Wallet ## Component Oasis SDK ## Changelog - 2023-02-24: - APDU: Define Oasis native and Ethereum-compatible address length. - 2023-02-09: - Encode `Meta.runtime_id` and `Meta.orig_to` as Base16. - Change `SIG` in `SIGN_RT_SECP256K1` to 65-byte encoded R,S,V format. - 2023-01-23: - Fix Deoxys-II field description in [Signing encrypted runtime transactions](#signing-encrypted-runtime-transactions) section. - Rename `SIGN_PT_` instructions in APDUSPEC to `SIGN_RT_` for consistency with oasis-core and oasis-sdk codebase. - 2023-01-03: - Add Sapphire runtime ID and consensus address on Mainnet. - 2022-12-13: - Fix Secp256k1 public key size. - 2022-10-12: - Add Sapphire runtime ID and consensus address on Testnet, - Remove redundant `sig_context` from `Meta`, - Require `tx.call.format` to be either `0` or `1`. - 2022-07-15: Initial public version ## Status Proposed ## Context This document proposes additions to the APDU specification, guidelines for parsing runtime transactions, and general UI/UX for signing them on a hardware wallet: 1. [APDUSPEC additions](#apduspec-additions) 2. [Changes to Allowance transaction](#changes-to-allowance-transaction) 3. [Signing general runtime transactions](#signing-general-runtime-transactions), 4. [Signing smart contract runtime transactions](#signing-smart-contract-runtime-transactions), 5. [Signing EVM runtime transactions](#signing-evm-runtime-transactions). 6.
[Signing encrypted runtime transactions](#signing-encrypted-runtime-transactions), ### Test vectors Test vectors for all runtime transactions in this ADR can be generated by using [gen_runtime_vectors][gen_runtime_vectors] tool as part of the Oasis SDK. ### Runtime transaction format The format of the [runtime transaction][runtime-sdk tx] to be signed by the hardware wallet is the following: ```rust /// Transaction. #[derive(Clone, Debug, cbor::Encode, cbor::Decode)] pub struct Transaction { #[cbor(rename = "v")] pub version: u16, pub call: Call, #[cbor(rename = "ai")] pub auth_info: AuthInfo, } ``` The transaction **can be signed with `Secp256k1` ("Ethereum"), `Ed25519` key, or `Sr25519` key!** Information on this along with the gas fee is stored inside [`ai` field][runtime-sdk ai]. `call` is defined as [follows][runtime-sdk call]: ```rust /// Method call. #[derive(Clone, Debug, cbor::Encode, cbor::Decode)] pub struct Call { /// Call format. #[cbor(optional, default)] pub format: CallFormat, /// Method name. #[cbor(optional, default, skip_serializing_if = "String::is_empty")] pub method: String, /// Method body. pub body: cbor::Value, /// Read-only flag. /// /// A read-only call cannot make any changes to runtime state. Any attempt at modifying state /// will result in the call failing. #[cbor(optional, default, rename = "ro")] pub read_only: bool, } ``` If [`format`][runtime-sdk format] is: - `0`, the transaction is unencrypted, - `1`, the transaction is encrypted, - any other, the transaction should be rejected with `unsupported call format` error unless implemented outside the scope of this ADR. `method` contains the name of the runtime module followed by `.` and the method name. If `format` is `1`, `method` is empty. `body` contains a CBOR-encoded transaction. If `format` equals `1`, `body` contains CBOR-encoded [`CallEnvelopeX25519DeoxysII`][runtime-sdk envelope] which contains the encrypted transaction body inside its `data` field. ## Decision ### APDUSPEC additions #### GET_ADDR_SECP256K1 ##### Command | Field | Type | Content | Expected | | ---------- | -------------- | ---------------------- | -------------- | | CLA | byte (1) | Application Identifier | 0x05 | | INS | byte (1) | Instruction ID | 0x04 | | P1 | byte (1) | Request User confirmation | No = 0 | | P2 | byte (1) | Parameter 2 | ignored | | L | byte (1) | Bytes in payload | (depends) | | Path[0] | byte (4) | Derivation Path Data | 44 | | Path[1] | byte (4) | Derivation Path Data | 60 | | Path[2] | byte (4) | Derivation Path Data | ? | | Path[3] | byte (4) | Derivation Path Data | ? | | Path[4] | byte (4) | Derivation Path Data | ? | The first three items in the derivation path are hardened. ##### Response | Field | Type | Content | Note | | ------- | --------- | --------------------- | ------------------------ | | PK | byte (33) | Public Key | | | ADDR | byte (40) | Lower-case hex addr | | | SW1-SW2 | byte (2) | Return code | see list of return codes | #### GET_ADDR_SR25519 ##### Command | Field | Type | Content | Expected | | ---------- | -------------- | ---------------------- | -------------- | | CLA | byte (1) | Application Identifier | 0x05 | | INS | byte (1) | Instruction ID | 0x03 | | P1 | byte (1) | Request User confirmation | No = 0 | | P2 | byte (1) | Parameter 2 | ignored | | L | byte (1) | Bytes in payload | (depends) | | Path[0] | byte (4) | Derivation Path Data | 44 | | Path[1] | byte (4) | Derivation Path Data | 474 | | Path[2] | byte (4) | Derivation Path Data | ? 
| | Path[3] | byte (4) | Derivation Path Data | ? | | Path[4] | byte (4) | Derivation Path Data | ? | The first three items in the derivation path are hardened. ##### Response | Field | Type | Content | Note | | ------- | --------- | --------------------- | ------------------------ | | PK | byte (32) | Public Key | | | ADDR | byte (46) | Bech32 addr | | | SW1-SW2 | byte (2) | Return code | see list of return codes | #### SIGN_RT_ED25519 ##### Command | Field | Type | Content | Expected | | ----- | -------- | ---------------------- | --------- | | CLA | byte (1) | Application Identifier | 0x05 | | INS | byte (1) | Instruction ID | 0x05 | | P1 | byte (1) | Payload desc | 0 = init | | | | | 1 = add | | | | | 2 = last | | P2 | byte (1) | ---- | not used | | L | byte (1) | Bytes in payload | (depends) | The first packet/chunk includes only the derivation path. All other packets/chunks should contain message to sign. *First Packet* | Field | Type | Content | Expected | | ---------- | -------- | ---------------------- | --------- | | Path[0] | byte (4) | Derivation Path Data | 44 | | Path[1] | byte (4) | Derivation Path Data | 474 | | Path[2] | byte (4) | Derivation Path Data | ? | | Path[3] | byte (4) | Derivation Path Data | ? | | Path[4] | byte (4) | Derivation Path Data | ? | *Other Chunks/Packets* | Field | Type | Content | Expected | | ------- | -------- | -------------------- | -------- | | Data | bytes... | Meta+Message | | Data is defined as: | Field | Type | Content | Expected | | ------- | -------- |-------------------| ------------ | | Meta | bytes.. | CBOR metadata | | | Message | bytes.. | CBOR data to sign | | ##### Response | Field | Type | Content | Note | | ------- | --------- | ----------- | ------------------------ | | SIG | byte (64) | Signature | | | SW1-SW2 | byte (2) | Return code | see list of return codes | #### SIGN_RT_SECP256K1 ##### Command | Field | Type | Content | Expected | | ----- | -------- | ---------------------- | --------- | | CLA | byte (1) | Application Identifier | 0x05 | | INS | byte (1) | Instruction ID | 0x07 | | P1 | byte (1) | Payload desc | 0 = init | | | | | 1 = add | | | | | 2 = last | | P2 | byte (1) | ---- | not used | | L | byte (1) | Bytes in payload | (depends) | The first packet/chunk includes only the derivation path. All other packets/chunks should contain message to sign. *First Packet* | Field | Type | Content | Expected | | ---------- | -------- | ---------------------- | --------- | | Path[0] | byte (4) | Derivation Path Data | 44 | | Path[1] | byte (4) | Derivation Path Data | 60 | | Path[2] | byte (4) | Derivation Path Data | ? | | Path[3] | byte (4) | Derivation Path Data | ? | | Path[4] | byte (4) | Derivation Path Data | ? | *Other Chunks/Packets* | Field | Type | Content | Expected | | ------- | -------- | -------------------- | -------- | | Data | bytes... | Meta+Message | | Data is defined as: | Field | Type | Content | Expected | | ------- | -------- |-------------------| ------------ | | Meta | bytes.. | CBOR metadata | | | Message | bytes.. 
| CBOR data to sign | | ##### Response | Field | Type | Content | Note | | ------- | --------- | ----------- | ------------------------ | | SIG | byte (65) | Signature | R,S,V bigendian integers | | SW1-SW2 | byte (2) | Return code | see list of return codes | #### SIGN_RT_SR25519 ##### Command | Field | Type | Content | Expected | | ----- | -------- | ---------------------- | --------- | | CLA | byte (1) | Application Identifier | 0x05 | | INS | byte (1) | Instruction ID | 0x06 | | P1 | byte (1) | Payload desc | 0 = init | | | | | 1 = add | | | | | 2 = last | | P2 | byte (1) | ---- | not used | | L | byte (1) | Bytes in payload | (depends) | The first packet/chunk includes only the derivation path. All other packets/chunks should contain message to sign. *First Packet* | Field | Type | Content | Expected | | ---------- | -------- | ---------------------- | --------- | | Path[0] | byte (4) | Derivation Path Data | 44 | | Path[1] | byte (4) | Derivation Path Data | 474 | | Path[2] | byte (4) | Derivation Path Data | ? | | Path[3] | byte (4) | Derivation Path Data | ? | | Path[4] | byte (4) | Derivation Path Data | ? | *Other Chunks/Packets* | Field | Type | Content | Expected | | ------- | -------- | -------------------- | -------- | | Data | bytes... | Meta+Message | | Data is defined as: | Field | Type | Content | Expected | | ------- | -------- |-------------------| ------------ | | Meta | bytes.. | CBOR metadata | | | Message | bytes.. | CBOR data to sign | | ##### Response | Field | Type | Content | Note | | ------- | --------- | ----------- | ------------------------ | | SIG | byte (64) | Signature | | | SW1-SW2 | byte (2) | Return code | see list of return codes | #### Meta parameter `Meta` is a CBOR-encoded string → string map with the following fields: - `runtime_id`: Base16-encoded [runtime ID][runtime id] (64-byte string) - `chain_context`: [chain ID][chain context] (64-byte string) - `orig_to` (optional): Base16-encoded ethereum destination address (40-byte string) ### Changes to Allowance transaction [`staking.Allow`] transaction already exists on the consensus layer. We propose the following improvement to the UI: ```ledger | Type > | < To > | < Amount > | < Fee > | < Gas limit > | < Network > | < > | < | | Allowance | | ROSE +- | ROSE | | | APPROVE | REJECT | | | | | | | | | | ``` **IMPROVEMENT:** The hardware wallet renders the following literals in place of `TO` for specific `NETWORK` and addresses: - Network: Mainnet, To: `oasis1qrnu9yhwzap7rqh6tdcdcpz0zf86hwhycchkhvt8` → `Cipher` - Network: Testnet, To: `oasis1qqdn25n5a2jtet2s5amc7gmchsqqgs4j0qcg5k0t` → `Cipher` - Network: Mainnet, To: `oasis1qzvlg0grjxwgjj58tx2xvmv26era6t2csqn22pte` → `Emerald` - Network: Testnet, To: `oasis1qr629x0tg9gm5fyhedgs9lw5eh3d8ycdnsxf0run` → `Emerald` - Network: Mainnet, To: `oasis1qrd3mnzhhgst26hsp96uf45yhq6zlax0cuzdgcfc` → `Sapphire` - Network: Testnet, To: `oasis1qqczuf3x6glkgjuf0xgtcpjjw95r3crf7y2323xd` → `Sapphire` For more information on how the addresses above are derived from the runtime ID check the [runtime accounts] section. ### Signing general runtime transactions #### Deposit We propose the following UI for [`consensus.Deposit`] runtime transaction: ```ledger | Type > | < To (1/1) > | < Amount > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Deposit | | | | | | | APPROVE | REJECT | | (ParaTime) | | | | | | | | | ``` `MIXED_TO` can either be `oasis1` or the Ethereum's `0x` address. 
If `Meta` does not contain `orig_to` field, render the `tx.call.body.to` value in `oasis1` format in place of `MIXED_TO`. If `Meta.orig_to` is set, then: 1. Check that the ethereum address stored in `orig_to` field maps to the native address in `tx.call.body.to` according to [the reference implementation of the mapping][ethereum to native address]. 2. Render `orig_to` value in `0x` format in place of `MIXED_TO`. In addition, if `tx.call.body.to` is empty, then the deposit is made to the signer's account inside the runtime. In this case `Self` literal is rendered in place of `MIXED_TO`. `AMOUNT` and `FEE` show the amount of tokens transferred in the transaction and the transaction fee. The number must be formatted according to the number of decimal places and showing a corresponding symbol `SYM` beside. These are determined by the following mapping hardcoded in the hardware wallet: `(Network, Runtime ID, Denomination) → (Number of decimals, SYM)` Denomination information is stored in `tx.part.body.amount[1]` or `tx.ai.fee.amount[1]` for the tokens transferred in the transaction or the fee respectively. Empty Denomination is valid and signifies the native token for the known networks and runtime IDs (see below). The hardware wallet should have at least the following mappings hardcoded: - Network: Mainnet, runtime ID: Cipher, Denomination: "" → 9, `ROSE` - Network: Testnet, runtime ID: Cipher, Denomination: "" → 9, `TEST` - Network: Mainnet, runtime ID: Emerald, Denomination: "" → 18, `ROSE` - Network: Testnet, runtime ID: Emerald, Denomination: "" → 18, `TEST` - Network: Mainnet, runtime ID: Sapphire, Denomination: "" → 18, `ROSE` - Network: Testnet, runtime ID: Sapphire, Denomination: "" → 18, `TEST` If the lookup fails, the following policy should be respected: 1. `SYM` is rendered as empty string. 2. The number of decimals is 18, if runtime ID matches any Emerald or Sapphire runtime on any network. 3. Otherwise, the number of decimals is 9. `RUNTIME` shows the 32-byte hex encoded runtime ID stored in `Meta.runtime_id`. If `NETWORK` matches Mainnet or Testnet, then human-readable version of `RUNTIME` is shown: - Network: Mainnet, runtime ID: `000000000000000000000000000000000000000000000000e199119c992377cb` → `Cipher` - Network: Testnet, runtime ID: `0000000000000000000000000000000000000000000000000000000000000000` → `Cipher` - Network: Mainnet, runtime ID: `000000000000000000000000000000000000000000000000e2eaa99fc008f87f` → `Emerald` - Network: Testnet, runtime ID: `00000000000000000000000000000000000000000000000072c8215e60d5bca7` → `Emerald` - Network: Mainnet, runtime ID: `000000000000000000000000000000000000000000000000f80306c9858e7279` → `Sapphire` - Network: Testnet, runtime ID: `000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c` → `Sapphire` **SIGNATURE CONTEXT COMPUTATION:** [Chain domain separation] context for **runtime** transactions beginning with `oasis-runtime-sdk/tx: v0 for chain ` and followed by the hash derived from `Meta.runtime_id` and `Meta.chain_context`. See [golang implementation][chain domain separation implementation] for the reference implementation. #### Withdraw We propose the following UI for [`consensus.Withdraw`] method: ```ledger | Type > | < To (1/1) > | < Amount > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Withdraw | | | | | | | APPROVE | REJECT | | (ParaTime) | | | | | | | | | ``` If `tx.call.body.to` is empty, then the withdrawal is made to the signer's consensus account. 
In this case `Self` literal is rendered in place of `TO`. #### Transfer We propose the following UI for the [`accounts.Transfer`] method: ```ledger | Type > | < To (1/1) > | < Amount > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Transfer | | | | | | | APPROVE | REJECT | | (ParaTime) | | | | | | | | | ``` #### Example The user wants to deposit 100 ROSE to `0xDce075E1C39b1ae0b75D554558b6451A226ffe00` account on Emerald on the Mainnet. First they sign the deposit allowance transaction for Emerald. ```ledger | Type > | < To > | < Amount > | < Gas limit > | < Fee > | < Network > | < > | < | | Allowance | Emerald | ROSE +100.0 | 1277 | ROSE 0.0 | Mainnet | APPROVE | REJECT | | | Mainnet | | | | | | | ``` Next, they sign the runtime deposit transaction. ```ledger | Type > | < To (1/2) > | < To (2/2) > | < Amount > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Deposit | 0xDce075E1C39b1 | 451A226ffe00 | ROSE 100.0 | ROSE 0.0 | 11310 | Mainnet | Emerald | APPROVE | REJECT | | (ParaTime) | ae0b75D554558b6 | | | | | | | | | ``` Then, they transfer some tokens to another account inside the runtime: ```ledger | Type > | < To (1/2) > | < To (2/2) > | < Amount > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Transfer | oasis1qpupfu7e2n | m8anj64ytrayne | ROSE 10.0 | ROSE 0.00015 | 11311 | Mainnet | Emerald | APPROVE | REJECT | | (ParaTime) | 6pkezeaw0yhj8mce | | | | | | | | | ``` Finally, the user withdraws the remainder of tokens back to the Mainnet. ```ledger | Type > | < To (1/2) > | < To (2/2) > | < Amount > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Withdraw | oasis1qrec770vre | 504k68svq7kzve | ROSE 99.9997 | ROSE 0.00015 | 11311 | Mainnet | Emerald | APPROVE | REJECT | | (ParaTime) | k0a9a5lcrv0zvt22 | | | | | | | | | ``` ### Signing smart contract runtime transactions #### Uploading smart contract [`contracts.Upload`] method will not be signed by the hardware wallet because the size of the Wasm byte code to sign may easily exceed the maximum size of the available encrypted memory. #### Instantiating smart contract We propose the following UI for the [`contracts.Instantiate`] method: ```ledger | Review Contract > | < Code ID > | < Amount (1/1) > | < Data (1/1) > | ... | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Instantiation | | | | ... | | | | | APPROVE | REJECT | | (ParaTime) | | | | ... | | | | | | | ``` `DATA` is a JSON-like representation of `tx.call.body.data`, if the latter is a CBOR-encoded map. If `tx.call.body.data` is empty or not present, then Data screen is hidden. If `tx.call.body.data` is in some other format, require **blind signing** mode and hide Data screen. Blind signing means that the user does not see all contract information. In some cases - as is this - not even the amount or the contract address! **When signing blindly, it is crucial that the user trusts the client application that it generated a non-malicious transaction!** `AMOUNT...` is the amount of tokens sent. Contract SDK supports sending multiple tokens at once, each with its own denomination symbol. The hardware wallet should render all of them, one per page. For rendering rules of each `AMOUNT` consult the [runtime deposit](#deposit) behavior. There can be multiple Data screens Data 1, Data 2, ..., Data N for each key in `tx.call.body.data` map. 
`DATA` can be one of the following types: - string - number (integer, float) - array - map - boolean - null Strings are rendered as UTF-8 strings and the following characters need to be escaped: `:`, `,`, `}`, `]`, `…`. Numbers are rendered in standard base-10 encoding. Floats use a decimal point and should be rendered with at least one decimal. For strings and numbers that cannot fit on a single page, pagination is activated. Boolean and null values are rendered as `true`, `false` and `null` respectively on a single page. Arrays and maps are rendered in the form `VAL1,VAL2,...` and `KEY1:VAL1,KEY2:VAL2,...` respectively. For security, **the items of the map must be sorted lexicographically by KEY**. `KEY` and `VAL` can be of any supported type. If it is a map or array, it is rendered as `{DATA}` or `[DATA]` respectively to avoid ambiguity. Otherwise, it is just `DATA`. If the content of an array or a map cannot fit on a single page, no pagination is introduced. Instead, the content is trimmed, an ellipsis `…` is appended at the end, and the screen **becomes confirmable**. If the user double-clicks it, a subscreen for item `n` of the array or map is shown. There is one subscreen for each item of an array or a map of size `N`, titled Data n.1, Data n.2, ..., Data n.N, which renders item `n` as `DATA` for an array item or `DATA:DATA` for a map item: ```ledger | Data 1.1 (1/1) > | < Data 1.2 (1/1) | < Data 1.3 (1/1) | ... | < | | | | | | BACK | | | | | | | ``` The recursive approach described above allows the user to browse through the complete data structure tree (typically a request name along with the arguments) using the ⬅️ and ➡️ buttons, to visit a child by double-clicking, and to return to the parent node by confirming the *BACK* screen. The maximum string length, array length, and map depth must have reasonable limits on the hardware wallet. If that limit is exceeded, the hardware wallet displays an error on the initial screen. Then, if the user still wants to sign such a transaction, they need to enable **blind signing**. The following UI is shown when blind-signing a non-encrypted transaction because the function parameters are too complex. ```ledger | Review Contract > | < BLIND > | < Instance ID (1/1) > | < Amount > | < Fee > | < Network > | < ParaTime > | < > | < | | Instantiation | SIGNING! | | | | | | APPROVE | REJECT | | (ParaTime) | | | | | | | | | ``` #### Calling smart contract The hardware wallet should show details of the runtime transaction to the user, when this is possible. We propose the following UI for the [`contracts.Call`] method: ```ledger | Review Contract > | < Instance ID > | < Amount (1/1) > | < Data (1/1) > | ... | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Call | | | | ... | | | | | APPROVE | REJECT | | (ParaTime) | | | | ... | | | | | | | ``` The Data screen behavior is the same as for the [`contracts.Instantiate`](#instantiating-smart-contract) transaction. #### Upgrading smart contracts We propose the following UI for the [`contracts.Upgrade`] method: ```ledger | Review Contract > | < Instance ID (1/1) > | < Amount (1/1) > | < New Code ID (1/1) > | < Data (1/1) > | ... | < ParaTime > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Upgrade | | | | | | | | | | | APPROVE | REJECT | | (ParaTime) | | | | | | | | | | | | | ``` The Data screen behavior is the same as for the [contract instantiate](#instantiating-smart-contract) transaction.
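To make the Data-screen rendering rules above concrete, here is a loose Go sketch of how an already decoded `tx.call.body.data` value could be turned into the `DATA` string. The hardware wallet firmware is not written in Go; this only illustrates the rules. The backslash escape sequence and the outer `{...}` bracketing of the top-level map (which matches the examples in the next section) are assumptions, as the ADR does not pin them down, and pagination, depth limits, and subscreens are omitted.

```golang
package datascreen

import (
	"sort"
	"strconv"
	"strings"
)

// renderData renders a decoded CBOR value (string, integer, float, bool, nil,
// []any or map[string]any) according to the rules above.
func renderData(v any) string {
	switch t := v.(type) {
	case nil:
		return "null"
	case bool:
		return strconv.FormatBool(t)
	case string:
		return escape(t)
	case int64:
		return strconv.FormatInt(t, 10)
	case uint64:
		return strconv.FormatUint(t, 10)
	case float64:
		s := strconv.FormatFloat(t, 'f', -1, 64)
		if !strings.Contains(s, ".") {
			s += ".0" // floats are rendered with at least one decimal
		}
		return s
	case []any:
		parts := make([]string, len(t))
		for i, item := range t {
			parts[i] = renderData(item)
		}
		return "[" + strings.Join(parts, ",") + "]"
	case map[string]any:
		// For security, map items MUST be sorted lexicographically by key.
		keys := make([]string, 0, len(t))
		for k := range t {
			keys = append(keys, k)
		}
		sort.Strings(keys)
		parts := make([]string, 0, len(keys))
		for _, k := range keys {
			parts = append(parts, escape(k)+":"+renderData(t[k]))
		}
		return "{" + strings.Join(parts, ",") + "}"
	default:
		// Unsupported value: the wallet falls back to blind signing instead.
		return ""
	}
}

// escape protects the characters the renderer itself uses as delimiters.
// The ADR lists the characters to escape but not the escape sequence; a
// backslash prefix is assumed here purely for illustration.
func escape(s string) string {
	for _, c := range []string{":", ",", "}", "]", "…"} {
		s = strings.ReplaceAll(s, c, "\\"+c)
	}
	return s
}
```

For instance, a decoded body of `{"instantiate": {"initial_counter": 42}}` renders as `{instantiate:{initial_counter:42}}`, matching the Data screen shown in the example that follows.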
#### Example To upload, instantiate and call the [hello world example] running on Testnet Cipher the user first signs the contract upload transaction with a file-based ed25519 keypair. The user obtains the `Code ID` 3 for the uploaded contract. Next, the user instantiates the contract and obtains the `Instance ID` 2. ```ledger | Review Contract > | < Code ID > | < Amount > | < Data > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Instantiation | 3 | ROSE 0.0 | {instantiate:{init | ROSE 0.0 | 1348 | Mainnet | Cipher | APPROVE | REJECT | | (ParaTime) | | | ial_counter:42}} | | | | | | | | ``` Finally, they perform a call to `say_hello` function on a smart contract passing the `{"who":"me"}` object as a function argument. ```ledger | Review Contract > | < Instance ID > | < Amount > | < Data > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Call | 2 | ROSE 0.0 | {say_hello:{who:me | ROSE 0.0 | 1283 | Mainnet | Cipher | APPROVE | REJECT | | (ParaTime) | | | }} | | | | | | | ``` For a complete example, the user can provide a more complex object: ```json { "who": { "username": "alice", "client_secret": "e5868ebb4445fc2ad9f949956c1cb9ddefa0d421", "last_logins": [1646835046, 1615299046, 1583763046, 1552140646], "redirect": null } } ``` In this case the hardware wallet renders the following UI. ```ledger | Review Contract > | < Instance ID > | < Amount > | < Data > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Call | 2 | ROSE 0.0 | {say_hello:{who:{u | ROSE 0.15 | 1283 | Mainnet | Cipher | APPROVE | REJECT | | (ParaTime) | | | sername:alice,cli… | | | | | | | V V | Data 1 > | < | | say_hello:{who:{us | BACK | | ername:alice,clie… | | V V | Data 1.1 > | < | | who:{username:alic | BACK | | e,client_secret:[… | | V V | Data 1.1.1 > | < Data 1.1.2 (1/2) > | < Data 1.1.2 (2/2) > | < Data 1.1.3 > | < Data 1.1.4 > | < | | username:alice | client_secret:e5868e | 1cb9ddefa0d421 | last_logins:[1646835 | redirect:null | BACK | | | bb4445fc2ad9f949956c | | 046,1615299046,1583… | | | V V | Data 1.1.3.1 > | < Data 1.1.3.2 > | < Data 1.1.3.3 > | < Data 1.1.3.4 | < | | 1646835046 | 1615299046 | 1583763046 | 1552140646 | BACK | | | | | | | ``` ### Signing EVM runtime transactions #### Creating smart contract [`evm.Create`] method will not be managed by the hardware wallet because the size of the EVM byte code may easily exceed the wallet's encrypted memory size. #### Calling smart contract In contrast to `contracts.Call`, [`evm.Call`] method requires contract ABI and support for RLP decoding in order to extract argument names from `tx.call.body.data`. This is outside of the scope of this ADR and the **blind signing**, explicitly allowed by the user, is performed. We propose the following UI: ```ledger | Review Contract > | < BLIND > | < Tx hash (1/1) > | < Address (1/1) > | < Amount > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Call | SIGNING! | |
| | | | | | APPROVE | REJECT | | (ParaTime) | | | | | | | | | | | ``` `TX_HASH` is a hex representation of the sha256 checksum of the `tx.call.body.data` field. `ADDRESS` is the hex-encoded address of the smart contract. ### Signing encrypted runtime transactions Encrypted transactions (`tx.call.format == 1`) contain call data inside the [envelope's `data` field][runtime-sdk envelope] encrypted with a Deoxys-II ephemeral key and X25519 key derivation. The hardware wallet is not expected to implement any of these decryption schemes, nor is it safe to share the ephemeral key with anyone. Instead, the user should enable **blind signing** and the hardware wallet should show the hash of the encrypted call data, the public key and the nonce: ```ledger | Review Encrypted > | < BLIND > | < Tx hash (1/1) > | < Public key (1/1) > | < Nonce (1/1) > | < Fee > | < Gas limit > | < Network > | < ParaTime > | < > | < | | Transaction | SIGNING! | | | | | | | | APPROVE | REJECT | | (ParaTime) | | | | | | | | | | | ``` `TX_HASH` is a hex representation of the sha256 checksum of the `tx.call.body.data` field. `PUBLIC_KEY` is a hex representation of the 32-byte `tx.call.body.pk` field. `NONCE` is a hex representation of the 15-byte `tx.call.body.nonce` field. Since the transaction stored inside the `tx.call.body.data` field is encrypted, there is also no way to distinguish between the transaction types, for example `contracts.Call`, `contracts.Upgrade` or `evm.Call`. ## Consequences ### Positive Users will have a similar experience for signing runtime transactions on any wallet implementing this ADR. ### Negative For some transactions, the user will need to trust the client application and use blind signing. ### Neutral #### Consideration of `roothash.SubmitMsg` transactions This ADR does not propose a UI for *generic* runtime calls (`roothash.SubmitMsg`, see [ADR 11]). The proposed design in this ADR assumes a new release of the hardware wallet app each time a new runtime transaction type is introduced. #### Signing contract uploads on hardware wallets In the future, if only the Merkle root hash of the Wasm contract were contained in the transaction, signing such a contract upload could become feasible. See how Ethereum 2.x contract deployment is done using this approach. #### Consideration of adding `From` screen Neither the proposed UIs nor the existing implementation of consensus transaction signing on Ledger shows *who* the signer of the transaction is. The signer's *from* address can be extracted from `tx.ai.si[0].address_spec.signature.` for the Oasis native address; if the signer wants to show the Ethereum address, `Meta.orig_from` should be populated and the hardware wallet should verify it before showing the transaction.
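To make the `Meta.orig_from` check above concrete, here is a minimal sketch (an illustration under common Ethereum address-derivation conventions, not firmware code and not part of this ADR) of deriving the Ethereum address from the signer's secp256k1 public key, which is what a wallet would compare against the populated `orig_from` value:

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"strings"

	"golang.org/x/crypto/sha3"
)

// ethAddressFromPubkey derives the 20-byte Ethereum address from a 64-byte
// uncompressed secp256k1 public key (X || Y, without the 0x04 prefix):
// the address is the last 20 bytes of the Keccak-256 hash of the key.
func ethAddressFromPubkey(pubkey []byte) ([]byte, error) {
	if len(pubkey) != 64 {
		return nil, fmt.Errorf("expected 64-byte X||Y public key, got %d bytes", len(pubkey))
	}
	h := sha3.NewLegacyKeccak256()
	h.Write(pubkey)
	return h.Sum(nil)[12:], nil
}

// verifyOrigFrom checks that the claimed orig_from address matches the
// address derived from the actual transaction signer's public key.
func verifyOrigFrom(origFrom string, signerPubkey []byte) (bool, error) {
	derived, err := ethAddressFromPubkey(signerPubkey)
	if err != nil {
		return false, err
	}
	claimed, err := hex.DecodeString(strings.TrimPrefix(origFrom, "0x"))
	if err != nil {
		return false, err
	}
	return bytes.Equal(derived, claimed), nil
}

func main() {
	pubkey := make([]byte, 64) // placeholder key, for illustration only
	addr, _ := ethAddressFromPubkey(pubkey)
	fmt.Println("derived address:", "0x"+hex.EncodeToString(addr))
}
```

Whether such a check happens on the device or in the companion application is an implementation decision left to the wallet.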
## References - [Existing APDU specification](https://github.com/Zondax/ledger-oasis/blob/master/docs/APDUSPEC.md) [ADR 11]: ./0011-incoming-runtime-messages.md [runtime accounts]: https://github.com/oasisprotocol/oasis-core/blob/ba9802c0c2ccce366bec65f8426a0f3413670aff/docs/consensus/services/staking.md#runtime-accounts [runtime id]: https://github.com/oasisprotocol/oasis-core/blob/ba9802c0c2ccce366bec65f8426a0f3413670aff/docs/runtime/identifiers.md [chain context]: https://github.com/oasisprotocol/oasis-core/blob/ba9802c0c2ccce366bec65f8426a0f3413670aff/docs/consensus/genesis.md#genesis-documents-hash [ethereum to native address]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/client-sdk/go/types/address.go#L135-L142 [Chain domain separation]: https://github.com/oasisprotocol/oasis-core/blob/ba9802c0c2ccce366bec65f8426a0f3413670aff/docs/crypto.md#chain-domain-separation [chain domain separation implementation]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/client-sdk/go/crypto/signature/context.go [gen_runtime_vectors]: https://github.com/oasisprotocol/oasis-sdk/tree/main/tools/gen_runtime_vectors [runtime-sdk tx]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/src/types/transaction.rs#L86-L96 [runtime-sdk ai]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/src/types/transaction.rs#L159-L173 [runtime-sdk call]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/src/types/transaction.rs#L129-L146 [runtime-sdk format]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/src/types/transaction.rs#L113-L121 [runtime-sdk envelope]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/src/types/callformat.rs#L7-L16 [`staking.Allow`]: https://github.com/oasisprotocol/oasis-core/blob/ba9802c0c2ccce366bec65f8426a0f3413670aff/docs/consensus/services/staking.md#allow [hello world example]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/contract/hello-world.md#deploying-the-contract [`consensus.Deposit`]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/src/modules/consensus_accounts/types.rs#L4-L13 [`consensus.Withdraw`]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/src/modules/consensus_accounts/types.rs#L15-L23 [`accounts.Transfer`]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/src/modules/accounts/types.rs#L8-L13 [`contracts.Call`]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/modules/contracts/src/types.rs#L142-L153 [`contracts.Upload`]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/modules/contracts/src/types.rs#L99-L110 [`contracts.Instantiate`]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/modules/contracts/src/types.rs#L119-L133 [`contracts.Upgrade`]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/modules/contracts/src/types.rs#L160-L174 [`evm.Call`]: 
https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/modules/evm/src/types.rs#L10-L16 [`evm.Create`]: https://github.com/oasisprotocol/oasis-sdk/blob/c36a7ee194abf4ca28fdac0edbefe3843b39bf69/runtime-sdk/modules/evm/src/types.rs#L3-L8 --- ## ADR 0015: Randomized Paratime Proposer Selection ## Component Oasis Core ## Changelog - 2022-09-14: Initial import ## Status Proposed ## Context The paratime block proposer currently is selected via a round-robin algorithm, and it is trivial to determine the block proposer well in advance. This ADR proposes having a mechanism for generating per-consensus block entropy via ECVRF[1], and randomizing the Paratime block proposer. ## Decision ### Prerequisites Let each node have a distinct long-term VRF keypair, that is published as part of the node's descriptor (as per ADR 0010). Let Tendermint actually implement `ExtendVote`/`VerifyVoteExtension` as per certain versions of the ABCI++ spec[2]. Note that it appears that this will NOT be in Tendermint 0.37.x, but instead is scheduled for a later release. ### Vote extension ABCI++ introduces a notion of an application defined `vote_extension` blob that is set by the tendermint block proposer, and verified by all of the voters. Oasis will use the following datastructure, serialized to canonical CBOR, and signed with the node's consensus signing key. ```golang type OasisVoteExtension struct { // Pi is the proposer's VRF proof for the current block height. Pi []byte `json:"pi"` } ``` For the genesis block +1 (No previous beta), let the VRF alpha_string input be derived as: `TupleHash256((chain_context, I2OSP(height,8)), 256, "oasis-core:tm-vrf/alpha")` For subsequent blocks, let the VRF alpha_string input be derived as: `TupleHash256((chain_context, I2OSP(height,8), prev_beta), 256, "oasis-core:tm-vrf/alpha")` where prev_beta is the beta_string output from the previous height's ECVRF proof. Blocks must have a valid `OasisVoteExtension` blob to be considered valid, and nodes MUST use the same ECVRF key for the entire epoch (key changes mid-epoch are ignored till the epoch transition) to prevent the proposer from regenerating the ECVRF key repeatedly to fish for entropy output. ### Proposer selection Instead of round-robin through the per-epoch list of primary (non-backup) workers, the index for the node can be selected as thus: ```golang seed = TupleHash256((chain_context, I2OSP(height,8), runtime_id, pi), 256, "oasis-core:tm-vrf/paratime") drbg = drbg.New(crypto.SHA512, seed, nil, "BlockProposer") rng = rand.New(mathrand.New(drbg)) l := len(primary_nodes) primary_index = int(rng.Int63n(l)) ``` ## Consequences ### Positive The paratime block proposer(s) will be randomized. This can be done without having to patch tendermint. In theory, the system will have a way to generate entropy at the consensus layer again. ### Negative The tendermint block proposer still will be selected via a round robin algorithm. Note that Oasis does not have smart contracts at that level so the impact of being able to predict the block proposer there is less significant than other systems. People may be tempted to abuse this entropy for other things (eg: inside paratimes), which would be incorrect (block proposer can cheat). This relies on interfaces exposed by ABCI++, which appear to no longer be part of 0.37.x, so it is unknown when this will be possible to implement. 
### Neutral ## References [1]: https://datatracker.ietf.org/doc/draft-irtf-cfrg-vrf/ [2]: https://github.com/cometbft/cometbft/blob/main/docs/references/rfc/tendermint-core/rfc-013-abci%2B%2B.md --- ## ADR 0016: Consensus Parameters Change Proposal ## Component Oasis Core ## Changelog - 2022-09-15: Initial version ## Status Proposed ## Context Currently consensus parameters can only be changed with an upgrade governance proposal which is effective but not very efficient. Upgrades require downtime during which binaries need to be updated, nodes restarted and synced, consensus network version has to be increased etc. We would like to avoid this cumbersome procedure and change the parameters of a consensus module as fast and as simple as possible without affecting the performance of the consensus layer. ## Decision Implement governance proposal which changes consensus parameters only. ## Implementation ### New proposal A new type of governance proposal named `ChangeParametersProposal` should be added to the consensus layer. The proposal should contain two non-empty fields: - the name of the consensus `Module` the changes should be applied to, and, - a CBOR-encoded document `Changes` describing parameter changes. ```golang // ChangeParametersProposal is a consensus parameters change proposal. type ChangeParametersProposal struct { // Module identifies the consensus backend module to which changes should be // applied. Module string `json:"module"` // Changes are consensus parameter changes that should be applied to // the module. Changes cbor.RawMessage `json:"changes"` } ``` Both fields should be validated before proposal submission to avoid having invalid proposals with empty fields. A more in-depth validation should be done by consensus modules during submission to ensure that the encoded `Changes` are complete and well-formed and that there is exactly one module to which changes will be applied. ```golang // ValidateBasic performs a basic validation on the change parameters proposal. func (p *ChangeParametersProposal) ValidateBasic() error { // Validate that both fields are set. } ``` The new proposal should be added to the `ProposalContent`. The extension should still allow only one proposal at a time, so we must not forget to update the code responsible for validation. ```golang type ProposalContent struct { ... ChangeParameters *ChangeParametersProposal `json:"change_parameters,omitempty"` } ``` ### Parameter changes Each consensus module should carefully scope which parameters are allowed to be changed. For example, a governance module could allow changing only the gas costs and the voting period, while the staking module would allow changing all parameters. ```golang // ConsensusParameterChanges define allowed governance consensus parameter // changes. type ConsensusParameterChanges struct { // GasCosts are the new gas costs. GasCosts *transaction.Costs `json:"gas_costs,omitempty"` // VotingPeriod is the new voting period. VotingPeriod *beacon.EpochTime `json:"voting_period,omitempty"` } ``` To prevent invalid proposals being submitted, `ConsensusParameterChanges` should expose validation method which can be used to check if changes are valid (e.g. changes are not empty, parameters have the right ranges). ```golang // SanityCheck performs a sanity check on the consensus parameters changes. func (c *ConsensusParameterChanges) SanityCheck() error { // Validate changes. } ``` How changes are executed is up to the module implementation. 
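To make this concrete, here is a hedged sketch (with simplified stand-in types, not the actual oasis-core code) of one way a module could apply the example `ConsensusParameterChanges` above; the generic `Apply` hook that each module actually implements follows right after.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the types referenced in this ADR; the real
// definitions live in oasis-core and carry more fields.
type EpochTime uint64

type Costs map[string]uint64

type ConsensusParameters struct {
	GasCosts     Costs
	VotingPeriod EpochTime
}

type ConsensusParameterChanges struct {
	GasCosts     *Costs     // nil means "leave unchanged"
	VotingPeriod *EpochTime // nil means "leave unchanged"
}

// Apply overwrites only the parameters that the proposal explicitly changes.
// The resulting parameters must still pass the module's sanity check before
// they replace the current ones (see Execution below).
func (c *ConsensusParameterChanges) Apply(params *ConsensusParameters) error {
	if c.GasCosts == nil && c.VotingPeriod == nil {
		return errors.New("consensus parameter changes should not be empty")
	}
	if c.GasCosts != nil {
		params.GasCosts = *c.GasCosts
	}
	if c.VotingPeriod != nil {
		params.VotingPeriod = *c.VotingPeriod
	}
	return nil
}

func main() {
	params := ConsensusParameters{VotingPeriod: 100}
	newPeriod := EpochTime(168)
	changes := ConsensusParameterChanges{VotingPeriod: &newPeriod}
	if err := changes.Apply(&params); err != nil {
		panic(err)
	}
	fmt.Println("new voting period:", params.VotingPeriod) // 168
}
```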
```golang // Apply applies changes to the given consensus parameters. func (c *ConsensusParameterChanges) Apply(params *ConsensusParameters) error { // Apply changes. } ``` ### Submission When a new `ChangeParametersProposal` is submitted, a basic validation is performed first, which checks whether the `Module` name and `Changes` are set correctly. Afterwards, a validation message is broadcast to all modules requesting them to validate the proposal. Only the module for which the `Changes` are intended should act and reply to the message; other modules should silently ignore it. In case no module replies, the proposal is immediately rejected as not being supported. The module should carefully examine the proposal: check whether the proposal is well-formed, whether `Changes` are not empty and deserialize correctly to the expected format, whether the deserialized parameter changes are valid, etc. If all checks succeed, the module should respond with a confirmation message. Otherwise, an error describing why the proposal is invalid should be returned as a response. Note: Validation at this stage cannot always be complete, as valid parameter values are not necessarily independent of each other. If multiple proposals are being executed at the same time, the resulting parameters can be invalid even though validation of each proposal passed. Therefore, another validation is required when the proposal is about to be executed. ### Execution If a `ChangeParametersProposal` closes as accepted (vote passed), the governance module will execute the proposal by broadcasting a message containing the proposal to all modules. Notification can be done using the same message dispatch mechanism as in the submission phase. Once messages are delivered, only one module will act and try to apply the `Changes`. That module should first fetch the current consensus parameters, then apply the proposed `Changes` and finally validate the result. Validation of the parameters is necessary, as mentioned in the submission phase. If validation succeeds, the consensus parameters are updated and the proposal is marked as passed. Otherwise, the proposal is marked as failed and the proposed parameter changes are discarded. ```golang // SanityCheck performs a sanity check on the consensus parameters. func (p *ConsensusParameters) SanityCheck() error { // Validate parameters. } ``` ### How to enable the new proposal Adding a new proposal type is a consensus-breaking change. To make it non-breaking, we introduce a new governance consensus parameter which disables the new type by default and can be enabled via governance. When disabled, the governance module will treat the new proposal type as invalid, thus not violating the consensus. ```golang type ConsensusParameters struct { ... // EnableChangeParametersProposal is true iff change parameters proposals are // allowed. EnableChangeParametersProposal bool `json:"enable_change_parameters_proposal,omitempty"` } ``` ## Consequences ### Positive - Agile and zero-downtime consensus parameter changes. - Separation of consensus parameter changes and consensus upgrades. ### Negative - Introduction of a new governance consensus parameter which enables new proposals in the upgrade handler. New parameters can always be considered a minor disadvantage, as they usually increase the complexity of the code. ### Neutral ## References No references.
--- ## ADR 0017: ParaTime Application Standard Proposal Process ## Component ADRs ## Changelog - 2022-10-05: Initial version - 2022-10-12: Accepted ## Status Accepted ## Context Applications running within a ParaTime having a novel runtime environment (e.g., Sapphire, Cipher) benefit from interoperability standards. For example, [ERCs] in Ethereum. ADRs are already present in the Oasis ecosystem and so are a starting point, but these are intended for lightweight recording of decisions, not gathering consensus around community contributions. This ADR proposes a template and process amendment for ADRs introducing ParaTime-specific application standards. [ERCs]: https://github.com/ethereum/EIPs ## Decision ADRs will be used for application standards because they are already well supported within the Oasis ecosystem, and have most of the structure needed for application standards. Although adapting another project's process would be easy, having multiple proposal repositories could lead to confusion. For use with application standards, ADRs shall have more structure to make contributions fair and straightforward. Specifically, additional required sections and concrete requirements for acceptance. Although community standards are only proposals, the _Decision_ section will keep its name for compatibility with the existing template. The decision in this context will be to accept the standard for distribution to a wider audience. ### Naming Conventions App standard ADRs shall be referred to as ADR-<number> regardless of the targeted ParaTime. ### Changes to the ADR template - add a new _Apps_ component, which has the ParaTime as its sub-component ### New Section Requirements #### Decision: Specification & Reference Implementation The _Decision_ section gets two new sub-sections: **Specification**: A complete description of the interface of the standard, including the threat/trust model, rationale for design decisions, alternative approaches, and references to related work. This section will generally be mostly prose with sprinkles of code for illustration. **Reference Implementation**: A basic implementation of the proposed standard written in a common language that targets the ParaTime runtime environment. The reference implementation in the ADR should be executable. #### Security Considerations This new section details any weak points of the proposal or common security flaws that a re-implementation of the specification may run into, as well as suggestions for avoiding security issues. ### Acceptance Requirements Like all ADRs, an Apps component ADR will start as _Proposed_ and end up merged once _Accepted_. An application standard ADR following the above format will be accepted once: - there is consensus within the ParaTime's own community that the standard meets its design goals - there are no outstanding compatibility or security issues - an ADR repo committer has signed off on the structure and format of the ADR ## Alternatives One alternative is to fit the ParaTime-specific application standard proposals into the existing ADR template, but this would cause the _Decision_ section to become overloaded with the necessary information in an ad-hoc way. Another alternative is to encourage ParaTimes to do whatever they think most effective. That's always allowed, of course, and it may sometimes be useful to wholesale copy the best practices of another community. However, if we make the ADR process convenient enough, the community can focus its collective effort on the single ADR repo. 
Within the chosen decision, there were many choices of structure from the now several EIP-like repos. The ones chosen were the minimum we need to get going, in the spirit of the lightweight ADR process. If more structure is needed in the future, we can amend this process or switch to a new system entirely, at which point this ADR shall be marked as _Superseded_. ## Consequences ### Positive - The community has a rallying point for standard development. - We can reuse existing process. ### Negative - The app standard process might still not be ideal even after this proposal. - ADR-NNN naming convention is not forwards compatible. ### Neutral - We will need to maintain additional ADR process going forward. ## References - [Ethereum Improvement Proposals](https://github.com/ethereum/EIPs) - [RFCs](https://www.rfc-editor.org/pubprocess/) - [Inter-Chain Standards](https://github.com/cosmos/ibc/blob/main/spec/ics-template.md) --- ## ADR 0020: Governance Support for Delegator Votes ## Component Oasis Core ## Changelog - 2022-11-07: Minor updates. Added Cosmos-SDK implementation note. - 2022-11-03: Added benchmarks, minor updates. - 2022-11-02: Initial draft. ## Status Accepted ## Context With the current governance voting mechanism ([ADR 04]), only the active validator set is participating in voting. This means that the validators are voting on behalf of all their delegators. This ADR proposes a change so that each delegator is able to vote with its own stake. The delegators vote acts as an override of the validator vote. ## Decision ### Casting Votes In the current implementation the submitter of a vote needs to be a part of the active validator committee at the time the vote is cast. This requirement is relaxed so that additionally anyone with a delegation to an active validator committee entity can vote. This change requires an efficient `staking.DelegationsFor` query to obtain a list of accounts the submitter is delegating to. Staking state is updated with: ```go // delegationKeyReverseFmt is the key format used for reverse mapping of // delegations (delegator address, escrow address). // // Value is CBOR-serialized delegation. delegationKeyReverseFmt = keyformat.New(0x5A, &staking.Address{}, &staking.Address{}) ``` `state.SetDelegation` function is updated to store both `delegationKeyFmt` and the reverse `delegationKeyReverseFmt`. `DelegationsFor` query function is updated to use the added reverse mapping. For completeness the same can be done for debonding delegations, although not necessary for the governance changes proposed in this ADR. #### Alternative solutions Possible alternatives that would avoid adding the reverse mapping are: - Querying `DelegationsTo` for each validator. This results in `num_validators` queries per cast vote transaction which is still too much. - Allowing anyone to cast votes. Potentially a viable solution, but this could result in the number of voters growing uncontrollably large. This might be ok, if the vote tallying procedure would ignore those votes. However the votes state could still grow problematically big. ### Vote tallying When a proposal closes, the vote tallying procedure changes to: ``` # Two-pass over votes approach. 1 Tally up the validator votes (as it is already implemented) # First pass. 2 For each of the voters do: # Second pass. 
3 For each of the entities the voter delegates to: 4 Skip non-validator entities 5 Skip if the voter's vote matches the delegation entity's vote 6 Compute stake from the delegation shares 7 If the delegation entity voted, subtract the stake from the delegation entity's vote tally 8 Add the computed stake to the voter's vote tally ``` - Possible variant: instead of using the `DelegationsFor` query in step 3), a map of all validator delegators could be prebuilt by using `DelegationsTo` for each of the validators. Even with the efficient `DelegationsFor` query, this can be beneficial IF the number of voters is large. This procedure iterates over all voters and can be beneficial if the number of voters is relatively low compared to the number of all validator delegators. #### Alternative Vote Tallying procedures A possible alternative would be to iterate over the delegators-validator sets: ``` # Delegators-validator pass approach. 1 Precompute stakes for all delegators to validators from shares. 2 For each validator 3 For each delegator to the validator 4 IF validator and delegator votes match (or delegator didn't vote) 5 Add delegator stake to the validator's vote (or nothing if validator didn't vote) 6 IF validator and delegator votes don't match 7 Add delegator stake to the delegator's vote (or nothing if delegator didn't vote) ``` The voting procedure now iterates over all delegators of the active validator set. The amount of work is somewhat predictable as it doesn't depend on the number of voters but on the delegators-to-validator sets. However, the number of votes is bounded by the size of the delegators-to-validator set and is, in a realistic scenario, likely much smaller. #### Implementations in other chains Cosmos-SDK uses a similar approach to the solution proposed in this ADR. The tallying iterates over voters, their delegations and validators. For the detailed implementation, see [Cosmos-SDK Vote Tallying Code]. The voting itself is limited to delegators (similar to what is proposed in this document). #### Benchmarks The Vote Tallying procedure variants were benchmarked on mainnet data. Some basic stats from mainnet: - 120 validators - ~49500 eligible voters (unique delegators to validators) - average number of delegations-to per account is 1 The variants were benchmarked in scenarios with different numbers of voters. In all scenarios the mainnet consensus state was used; only the number of (simulated) voters varied. All votes were eligible (had at least one delegation to an active validator) and all of the delegator votes did override the validator votes. The three tested variants were: - "Two pass over voters (optimized DelegationsFor)" - as described in the proposed Vote tallying solution. The reverse mapping key is used for the `DelegationsFor` queries (described in the Casting Votes section). - "Two pass over voters (pre-build validator escrow)" - as described in the proposed Vote tallying solution, with the modification of prebuilding a map of all validator delegators (mentioned in the "Possible variant" section). - "Validator-delegators" - as described in the alternatives section. [Image: Two pass over voters (optimized DelegationsFor)] [Image: Two pass over voters (pre-build validators escrow)] [Image: Validator-delegators] The above results show that: - The two-pass approach (querying `DelegationsFor` for every voter) is fastest up to about 25000 voters for a proposal. In the worst case (every eligible voter voted) it is about twice as slow as the alternatives. In that case the tallying took about 3 seconds.
- The two-pass approach using pre-built map of all validator delegators is comparable to the "Validator-delegators" procedure. This makes sense as in both cases the main work is done in querying the delegators of validators. In reality, the number of voters will likely be small compared to the eligible set of all delegators, so the two-pass approach (with querying `DelegationsFor` for every voter) seems to make the most sense. If number of voters ever becomes problematic, the method could also implement a heuristic to use the prebuilt validator-delegators map when the number of voters is large (e.g. number of voters > 1/2 eligible delegators), but at the moment there is no efficient way to query the number of all delegators. ### Pruning With the possibility of increased number of votes per proposal a pruning of votes can be implemented. Votes for a proposal can be pruned as soon as the first block after the proposal is closed. Because proposal is closed in the `EndBlock` state (which includes votes received in this last block), the pruning should not be done before the block after, so that the exact state at the time of the closing can be queried. ### Voting via messages Delegator can also be a runtime. For enabling runtimes to vote, casting votes should also be supported via runtime messages. Roothash message type is updated to include governance message field: ```go type Message struct { Staking *StakingMessage `json:"staking,omitempty"` Registry *RegistryMessage `json:"registry,omitempty"` Governance *GovernanceMessage `json:"governance,omitempty"` } // GovernanceMessage is a governance message that allows a runtime to perform governance operations. type GovernanceMessage struct { cbor.Versioned CastVote *governance.ProposalVote `json:"cast_vote,omitempty"` } ``` Governance backend is updated to handle the cast vote message. For completeness, support for submitting proposals via runtime messages can also be implemented. ## Consequences ### Positive - Delegators are able to override validators vote. In the case of unresponsive validators this increases the voting participation. - Delegators are able to vote with their own stake. - (if implemented) Staking `DelegationsFor` queries are now efficient and don't require scanning the full delegations state. ### Negative - This increases the complexity of the vote tallying procedure. - This increases the size of the governance votes state. - This increases the complexity and size of the consensus staking state if the `DelegationsFor` reverse mapping is implemented. ### Neutral [ADR 04]: ./0004-runtime-governance.md [Cosmos-SDK Vote Tallying Code]: https://github.com/cosmos/cosmos-sdk/blob/dc004c85f2e8b8fb4f66caac2703228c5bf544cf/x/gov/keeper/tally.go#L37-L90 --- ## ADR 0021: Forward-Secret Ephemeral Secrets ## Component Oasis Core ## Changelog - 2023-02-17: - Rename ephemeral entropy to ephemeral secret - Update types and methods, add method for loading a secret - Define publish ephemeral secret transaction - Split instructions for generation and replication in two sections - 2022-12-01: Initial proposal ## Status Accepted ## Context The network needs forward-secret ephemeral secrets that are distributed amongst enclave executors. Because of the forward-secrecy requirements, using the current key manager master secret is not workable. 
## Decision ### Runtime encryption key (REK) Let the per-enclave `node.CapabilityTEE` structure and related helpers be amended as follows, to facilitate the addition of an X25519 public key held by the enclave, so that encrypted data can be published on-chain to an enclave instance. ``` // Note: This could also be done via the keymanager InitResponse, but // it is the author's opinion that having a general mechanism for this // may be useful in other contexts. // CapabilityTEE represents the node's TEE capability. type CapabilityTEE struct { ... // Runtime encryption key. REK *x25519.PublicKey `json:"rek,omitempty"` } ``` ### Ephemeral secrets The key manager enclave will gain the following additional RPC methods: ``` const ( // Local RPC methods (plaintext). GenerateEphemeralSecret = "generate_ephemeral_secret" LoadEphemeralSecret = "load_ephemeral_secret" // Remote RPC method (Noise session). ReplicateEphemeralSecret = "replicate_ephemeral_secret" ) type GenerateEphemeralSecretRequest struct { Epoch beacon.EpochTime `json:"epoch"` } type ReplicateEphemeralSecretRequest struct { Epoch beacon.EpochTime `json:"epoch"` } type LoadEphemeralSecretRequest struct { SignedSecret SignedEncryptedEphemeralSecret `json:"signed_secret"` } type GenerateEphemeralSecretResponse struct { SignedSecret SignedEncryptedEphemeralSecret `json:"signed_secret"` } type ReplicateEphemeralSecretResponse struct { // The request and this response are considered confidential, // so the channel handles authentication and confidentiality. EphemeralSecret [32]byte `json:"ephemeral_secret"` } ``` Ephemeral secret generation will return a signed and encrypted ephemeral secret for the requested epoch. ``` type EncryptedSecret struct { // Checksum is the secret verification checksum. Checksum []byte `json:"checksum"` // PubKey is the public key used to derive the symmetric key for decryption. PubKey x25519.PublicKey `json:"pub_key"` // Nonce is the nonce used to decrypt the secret. Nonce []byte `json:"nonce"` // Ciphertexts is the map of REK encrypted ephemeral secrets for all known key manager enclaves. Ciphertexts map[x25519.PublicKey][]byte `json:"ciphertexts"` } type EncryptedEphemeralSecret struct { // ID is the runtime ID of the key manager. ID common.Namespace `json:"runtime_id"` // Epoch is the epoch to which the secret belongs. Epoch beacon.EpochTime `json:"epoch"` // Secret is the encrypted secret. Secret EncryptedSecret `json:"secret"` } type SignedEncryptedEphemeralSecret struct { // Secret is the encrypted ephemeral secret. Secret EncryptedEphemeralSecret `json:"secret"` // Signature is a signature of the ephemeral secret. Signature signature.RawSignature `json:"signature"` } ``` ### Ephemeral secret transaction The key manager application will be augmented with a `PublishEphemeralSecret` transaction that will accept the first published secret for an epoch and discard the others. ``` MethodPublishEphemeralSecret = transaction.NewMethodName( ModuleName, "PublishEphemeralSecret", SignedEncryptedEphemeralSecret{} ) ``` ### Generation Each key manager will, at a random time in a given epoch: 1. Check to see if another instance has published the next epoch's ephemeral secret. If yes, go to step 4. 2. Execute a local `generate_ephemeral_secret` RPC call. The enclave will, in order, use the light client to query the members of the committee, generate a secret, and return a `GenerateEphemeralSecretResponse`. On failure, go to step 1. 3.
Publish `SignedEncryptedEphemeralSecret` to consensus via a `PublishEphemeralSecret` transaction. 4. This key manager instance is DONE. ### Replication Each key manager will: 1. Listen to the publications of new ephemeral secrets and forward them to the enclave. 2. The enclave will validate the secret and verify that it was published in the consensus layer. Iff verification succeeds and there is a corresponding REK entry in the `Ciphertexts` map, decrypt the secret and go to step 4. 3. Until a successful response is obtained, iterate through the enclaves in the ephemeral secret `Ciphertexts` map, issuing `replicate_ephemeral_secret` RPC calls. On failure, repeat step 3. 4. This key manager instance is DONE. ## Consequences ### Positive - It will be possible to publish ephemeral encrypted data to enclave instances on-chain. - There will be an ephemeral secret per key manager committee. - A compromised enclave cannot go back to previous epochs to compromise the ephemeral secrets. - Ephemeral secrets are never encrypted with the SGX sealing key nor stored in cold storage. ### Negative - If enough key manager workers restart at the wrong time, the epoch's ephemeral secret will be lost, and it will take until the next epoch to recover. - Forward-secrecy is imperfect due to the epoch-granular nature of the ephemeral secret. --- ## ADR 0022: Forward-Secret Master Secrets ## Component Oasis Core ## Changelog - 2023-04-17: Initial proposal ## Status Proposed ## Context The network needs forward-secret master secrets that are generated periodically and distributed amongst enclave executors. ## Decision ### Key manager status The key manager status will be extended with the following fields: ``` type Status struct { ... // Generation is the generation of the latest master secret. Generation uint64 `json:"generation,omitempty"` // RotationEpoch is the epoch of the last master secret rotation. RotationEpoch beacon.EpochTime `json:"rotation_epoch,omitempty"` } ``` ### Enclave init response The key manager enclave init response will be extended with the following fields: ``` type InitResponse struct { ... NextChecksum []byte `json:"next_checksum,omitempty"` NextRSK *signature.PublicKey `json:"next_rsk,omitempty"` } ``` ### Master secrets The key manager enclave will gain the following additional local RPC methods: ``` const ( GenerateMasterSecret = "generate_master_secret" LoadMasterSecret = "load_master_secret" ) type GenerateMasterSecretRequest struct { Generation uint64 `json:"generation"` Epoch beacon.EpochTime `json:"epoch"` } type GenerateMasterSecretResponse struct { SignedSecret SignedEncryptedMasterSecret `json:"signed_secret"` } type LoadMasterSecretRequest struct { SignedSecret SignedEncryptedMasterSecret `json:"signed_secret"` } ``` The remote RPC method for replicating the master secret will be extended to support replication of generations and to return a Merkle proof for secret verification. ``` pub struct ReplicateMasterSecretRequest { ... /// Generation. #[cbor(optional)] pub generation: u64, } pub struct ReplicateMasterSecretResponse { ... /// Checksum of the preceding master secret. #[cbor(optional)] pub checksum: Vec<u8>, } ``` Master secret generation will return a signed and encrypted master secret for the requested generation and epoch. ``` type EncryptedMasterSecret struct { // ID is the runtime ID of the key manager. ID common.Namespace `json:"runtime_id"` // Generation is the generation of the secret. Generation uint64 `json:"generation"` // Epoch is the epoch in which the secret was created.
Epoch beacon.EpochTime `json:"epoch"` // Secret is the encrypted secret. Secret EncryptedSecret `json:"secret"` } type SignedEncryptedMasterSecret struct { // Secret is the encrypted master secret. Secret EncryptedMasterSecret `json:"secret"` // Signature is a signature of the master secret. Signature signature.RawSignature `json:"signature"` } ``` ### Checksums Checksum computation will be extended with hash chains: - `checksum_0 = KMAC(generation_0, runtime_id)` - `checksum_N = KMAC(generation_N, checksum_(N-1)) for N > 0` Hash chains allow us to use the previous checksum as a Merkle proof. Given a verified checksum and a proof, a master secret can be verified using the following formula: - `next_checksum = KMAC(secret, prev_checksum)` ### Master secret transaction Key manager application will be augmented with a `PublishMasterSecret` transaction which will accept the proposal for the next generation of the master secret if the following conditions are met: - The proposal's master secret generation number is one greater than the last accepted generation, or 0 if no secrets have been accepted so far. - The proposal is intended to be accepted in the upcoming epoch. - Master secret hasn't been proposed in the current epoch. - The rotation period will either expire in the upcoming epoch or has already expired. - The first master secret (generation 0) can be proposed immediately and even if the rotation interval is set to 0. - If the rotation interval is set to 0, rotations are disabled and secrets cannot be proposed anymore. To enable them again, update the rotation interval in the policy. - The master secret is encrypted to the majority of the enclaves that form the committee. - The node proposing the secret is a member of the key manager committee. If accepted, the next secret can be proposed after the rotation interval expires. Otherwise, the next secret can be proposed in the next epoch. ``` MethodPublishMasterSecret = transaction.NewMethodName( ModuleName, "PublishMasterSecret", SignedEncryptedMasterSecret{} ) ``` ### Setup The key manager is initialized with an empty checksum and no nodes. Every node needs to register with an empty checksum to be included in the key manager committee. Only members of the committee are allowed to generate master secrets and will be able to decrypt the proposals. ### Generation Each keymanager will, at a random time in a given epoch: 1. Check to see if rotation period has expired. If not, go to step 5. 2. Check to see if another instance has published a proposal for the upcoming epoch. If yes, go to step 5. 3. Execute a local `generate_master_secret` RPC call. The enclave will, in-order: - Verify the master secret generation number. - Randomly select a secret. - Use the light client to query the members of the committee. - Encrypt and checksum the selected secret. - Return `GenerateMasterSecretResponse`. On failure, go to step 1. 4. Read `SignedEncryptedMasterSecret` from the response and publish it in the consensus layer using `PublishMasterSecret` transaction. 5. This key manager instance is DONE. ### Replication Each key manager will listen for the publication of new master secret proposals and will, when a new secret is proposed: 1. Cancel master secret generation scheduled for the current epoch. 2. Forward the proposal to the enclave. 3. The enclave will verify that: - The proposal was published in the consensus layer. - The secret can be decrypted with the enclave's REK key. 
- The master secret generation number is one greater than the last known generation. - The checksum computed from the decrypted secret and the last known checksum matches the one in the proposal. If all verifications pass, the enclave will: - Decrypt the secret, encrypt it with SGX sealing key and store the ciphertext locally. - Derive the RSK key for the proposed secret and store it in the memory together with the computed checksum. Otherwise, go to step 5. 4. Request enclave to initialize again and use the response to register with the forthcoming checksum and RSK key derived from the proposal. 5. This key manager instance is DONE. ### Rotation Key manager application will try to rotate the master secret every epoch as part of the key manager status generation as follows: 1. Fetch the latest master secret proposal. On failure, go to step 6. 2. Verify the master secret generation number and epoch of the proposal. On failure, go to step 6. - The rotation period is not verified here as it is already checked when the secret is proposed. Optionally, we can add this check to cover the case when the policy changes after the secret is proposed. 3. Count how many nodes have stored the proposal locally. - Compare the checksum of the proposal to the `next_checksum` field in the init response. 4. Accept the proposal if the majority of the nodes have replicated the proposed secret and announced `next_checksum` in their init status. - Increment the master secret generation number by 1. - Update the last rotation epoch. - Update the checksum. 5. Broadcast the new status. - If the master secret generation number has advanced, the enclaves will try to apply the proposal they stored locally. 6. Key manager application is DONE. ### Confirmation Each key manager will listen for the key manager status updates and will, when the master secret generation number advances: 1. Send the key manager status to the enclave. 2. The enclave will: - Check that the master secret generation number is one greater than the last known generation. - Load locally stored proposal for the next master secret or replicate it from another enclave. - Use the proposal to compute the next checksum. - Verify the computed checksum against the latest key manager status. If checksum matches, the enclave will: - Encrypt the secret with SGX sealing key using master secret generation number as additional data and store the ciphertext locally. - Update the last known generation number. - Update the latest checksum and RSK key. Otherwise, go to step 1. 3. Request enclave to initialize again and use the response to register with the latest checksum and RSK key while leaving the forthcoming checksum and RSK key empty. 4. This key manager instance is DONE. ## Consequences ### Positive - Runtimes can periodically or on demand re-encrypt their state using the latest generation of the master secret. - Compromise of an enclave cannot reveal master secrets generated after its upgrade or obsolescence. - If enclave initialization is interrupted or aborted, the subsequent initialization will resume from where the previous one left off. This means that any secrets that have already been replicated and verified will not be fetched again. - When compared to Merkle trees, hash chains provide a straightforward way to transition from the current checksum implementation and also enable the use of simpler proofs that can be validated in constant time. ### Negative - Initialization takes time as all master secrets need to be replicated. 
| Number of secrets | Replication time |
|:-----------------:|:----------------:|
| 10                | 45 sec           |
| 100               | 52 sec           |
| 1000              | 2 min 45 sec     |
| 10000             | 21 min 17 sec    |

Table 1: Local machine benchmarks (without any network overhead)

- Master secret replication response must contain a Merkle proof for secret verification. - Newly accepted master secrets cannot be used immediately to derive runtime keys because key manager enclaves need to confirm them first. When using Tendermint as a backend, this delay is even greater as the verifier is one block behind. ### Neutral - Master secrets need to be replicated in reverse order to ensure all secrets are verified against the checksum published in the consensus layer. --- ## ADR 0023: Secret Sharing Schemes (CHURP) ## Component Oasis Core ## Changelog - 2025-07-15: Initial proposal ## Status Proposed ## Context Currently, key managers derive keys from either master secrets or ephemeral secrets, which are unique to the key manager runtime and shared among key manager nodes. We acknowledge that this approach is not comprehensive, as a compromise of a single key manager enclave would reveal past secrets, potentially leading to the decryption of the internal state of runtimes using them. However, this key derivation method is straightforward and, as a result, exceptionally fast. While master secret rotations and ephemeral secrets aim to rotate the secrets to mitigate the impact of a secret compromise, we also want to support different types of key derivation, each offering varying levels of security. Examples include verifiable secret sharing schemes and dynamic-committee proactive secret sharing, where key managers would only hold a share of the secret. This proposal aims to introduce support for the CHUrn-Robust Proactive secret sharing scheme (CHURP) and a Key Derivation Center (KDC). ## CHURP CHURP is a proactive secret sharing scheme that allows the committee of nodes, each holding a share of the secret, to change dynamically over time. The CHURP protocol can be broadly divided into the following stages: ### Setup In this stage, the key manager owner configures CHURP. All steps in this stage can be performed off-chain. - The owner assigns unique, non-zero ID numbers to all nodes and associates them with their public keys. A simple approach is to encode each public key and use its binary representation as the node’s ID. - The owner selects a cipher suite based on security requirements, which determines the algebraic group used for cryptographic operations. - The owner also prepares an access control policy to specify which enclaves are trusted, and configures global parameters, such as how frequently key shares should be proactively refreshed, the minimum number of distinct shares required to reconstruct the secret, and the number of shares that can be lost before the secret becomes unrecoverable. ### Initialization Once the CHURP configuration is prepared, a new instance can be initialized. - The owner publishes the configuration in the consensus layer. - Key manager nodes that wish to participate update their configuration with the CHURP instance ID generated by consensus and then restart. To avoid requiring a restart, a CLI command could be added to support hot-loading of the updated configuration. - After restarting, each node requests its enclave to generate a non-zero-hole verification matrix for the upcoming dealing phase.
The node then uses the checksum of this matrix to prepare an application to join the new committee and submits it to the consensus layer. ### Dealing The dealing starts the first epoch with a sufficient number of nodes that applied for the committee. - The first threshold plus two nodes serve as dealers; the rest are ignored. The consensus will discard the verification matrices from these nodes, and their entropy will not be included in the secret. However, they will still receive a share. The addition of two extra nodes prevents dealer corruption. - The construction of the shared secret and dealing occurs off-chain through a peer-to-peer network, following the specified enclave policy. - Each applied node (dealing): - requests its bivariate shares (polynomials and non-zero-hole verification matrices) from the dealers, - validates received shares, - verifies non-zero-hole verification matrices against the consensus layer, - combines shares (adds polynomials and merges non-zero-hole verification matrices), - seals the result (full share) and stores it locally in the enclave's confidential storage, - sends a transaction containing the checksum of the merged matrix to the consensus layer, confirming receipt of all shares. - If the timeout or epoch expires, or if checksums do not match, the dealing data in the consensus layer is cleared, and nodes must reapply for committee membership. - Upon receiving confirmations from all applied nodes, the consensus layer announces the new committee and begins collecting applications for the first handoff. Nodes may apply for the handoff no earlier than one epoch in advance. - The dealers delete the dealing data. - The committee starts serving requests. ### Serving Once committee nodes receive their shares, they begin serving requests for deriving key shares according to the KDC protocol, which clients can use to reconstruct the derived key. - To derive a key, a key manager client must contact at least the threshold number of committee nodes to obtain the required key shares. - The committee responds exclusively to nodes specified in the access policy. - Blame detection for corrupted key shares may be added later, though it is computational-intensive. ### Handoff Handoff transfers secret shares from the old committee to a new one. It occurs periodically, as defined in the CHURP configuration. - Starts if sufficient time has elapsed since the last handoff/dealing and an adequate number of nodes have prepared a zero-hole verification matrix and applied for the new committee. - Each applied node (share reduction): - requests switch data points for constructing the dimension switched polynomial and the merged verification matrix from the current committee, - validates received points, - verifies the merged verification matrix against the consensus layer, - combines the points into a polynomial (reduced share). - Each applied node (proactive randomization): - requests its bivariate shares (polynomials and zero-hole verification matrices) from the new committee members, - validates received shares, - verifies zero-hole verification matrices against the consensus layer, - applies shares to the secret polynomial and to the merged verification matrix. 
- Each applied node (full share distribution): - requests switch data points for constructing the dimension switched polynomial and the proactive verification matrix from the new committee members, - verifies received points, - combines the points into a polynomial (full share), - seals the result (full share) and stores it locally in the enclave's confidential storage, - sends a transaction containing the checksum of the proactive verification matrix to the consensus layer confirming that the full share was received. - If the committee hasn't changed, skip the share reduction and full node distribution steps, and only execute proactive randomization. - If the timeout or epoch expires, or if checksums do not match, the handoff data in the consensus layer is cleared, and nodes must reapply for committee membership. - Upon receiving confirmations from all applied nodes, the consensus layer announces the new committee and begins collecting applications for the next handoff. - The old committee deletes obsolete full shares. - The committee starts serving requests. ## Key derivation center Key derivation center (KDC) is a secret sharing scheme based on the verifiable secret sharing scheme (VSS) where every node possesses only a share of the master secret. To derive a key from the master secret, one needs to obtain at least threshold plus one number of key shares from distinct nodes and reassemble them locally. ## Key manager applications To facilitate the straightforward addition of new features to the key manager, we must first generalize it to support the concurrent execution of multiple (independent) applications. Not all key manager runtimes are required to support all apps, and likewise, not all nodes need to run all the apps. In fact, the committee nodes for each application should be dynamic, allowing for the addition or removal of nodes based on specific requirements. ### Current situation Currently, the key manager supports two applications: one for generating, distributing, and storing master secrets, and the other for ephemeral secrets. However, these two applications are not independent, as they both share the same key manager policy for secret replication and key derivation. Issues: - In the runtime, the logic for key manager status, policy, master secrets, and ephemeral secrets should be decoupled. - On the host, each application should have its own worker, e.g., a master secret app should have a dedicated worker responsible for participating in the master secret protocol. ### Workers Each key manager application should have a dedicated worker on the host node responsible for communicating with the app and ensuring its consensus view is up-to-date. For example, a master secret worker should be responsible for participating in the master secret protocol. ### Handler trait Each application should implement the following trait, which defines the enclave RPC methods exposed to the local host or remote clients. These methods are registered with the dispatcher during initialization. ```rust /// RPC handler. pub trait Handler { /// Returns the list of RPC methods supported by this handler. fn methods(&'static self) -> Vec; } ``` Each application should adhere to the naming convention `app.Method`. #### Example 1 ```rust /// Master secrets key manager application. 
pub trait MasterSecrets { fn generate(&self); fn load(&self); fn replicate(&self); fn key_pair(&self); fn private_key(&self); fn public_key(&self); fn symmetric_key(&self); fn update_status(&self); } ``` Methods: - `MasterSecrets.Generate` - `MasterSecrets.Load` - `MasterSecrets.Replicate` - `MasterSecrets.KeyPair` - `MasterSecrets.PrivateKey` - `MasterSecrets.PublicKey` - `MasterSecrets.SymmetricKey` - `MasterSecrets.UpdateStatus` #### Example 2 ```rust /// CPU change detection key manager application. pub trait CPUChangeDetection { fn encrypt(&self); fn decrypt(&self); } ``` Methods: - `CpuChange.Encrypt` - `CpuChange.Decrypt` #### Example applications Current and future applications: - Master secrets (for generation and replication master secrets) - Ephemeral secrets (for generation and replication of ephemeral secrets) - CPU change (for detecting whether the CPU has changed) - CHURP (secret sharing scheme) - Key derivation center (secret sharing scheme) ## Implementation This section outlines the core components used to implement the CHURP protocol. The implementation must support multiple secrets and remain resilient to key manager restarts. To meet these goals, the implementation must satisfy the following requirements: - Each group of nodes can share a unique secret. - Secrets may vary in terms of committee size, handoff intervals, security levels, etc. - State re-encryption should be possible with keys derived from a new shared secret. - Key manager restarts should not lose shares or committed data. ### Identification A node can be assigned a non-zero unique ID number based on its public key. ```go // NodeToID assigns a unique ID to a node for use in the CHURP protocol. func NodeToID(nodeID signature.PublicKey) []byte { id := nodeID return id[:] } ``` In addition to node identifiers, each CHURP instance can also be uniquely identified using the following structure: ```go // Identity uniquely identifies a CHURP instance. type Identity struct { // ID is a unique CHURP identifier within the key manager runtime. ID uint8 `json:"id"` // RuntimeID is the identifier of the key manager runtime. RuntimeID common.Namespace `json:"runtime_id"` } ``` ### Configuration A CHURP instance can use one of several cipher suites designed for verifiable secret sharing and key derivation. Each cipher suite is assigned a unique identifier and must specify the following components: - An algebraic group used for cryptographic operations. - A hash function that maps data to elements within the group. - A hash function that maps data to scalars in the underlying field. Initially, we should support only one suite that uses the SHA3 hash function and the NIST P-384 elliptic curve. Additional suites may be added in the future. ```go // NistP384Sha3_384 represents the NIST P-384 elliptic curve group with // the SHA3-384 hash function used to encode arbitrary-length byte strings // to elements of the underlying prime field or elliptic curve points. const NistP384Sha3_384 uint8 = 0 ``` To protect sensitive data, each enclave running the CHURP protocol enforces a strict access control policy. This policy defines which enclaves it is authorized to communicate with during various phases of the protocol. ```go // PolicySGX represents an SGX access control policy used to authenticate // key manager enclaves during handoffs and remote client enclaves when // querying key shares. type PolicySGX struct { Identity // Serial is the monotonically increasing policy serial number. 
Serial uint32 `json:"serial"` // MayShare is the vector of enclave identities from which a share can be // obtained during handoffs. MayShare []sgx.EnclaveIdentity `json:"may_share"` // MayJoin is the vector of enclave identities that may form the new // committee in the next handoffs. MayJoin []sgx.EnclaveIdentity `json:"may_join"` // MayQuery is the map of runtime identities to the vector of enclave // identities that may query key shares. MayQuery map[common.Namespace][]sgx.EnclaveIdentity `json:"may_query,omitempty"` } ``` ### Consensus transactions The following method names define the CHURP-related transactions to be added to the consensus layer: ```go // MethodCreate is the method name for creating a new CHURP instance. var MethodCreate = transaction.NewMethodName(ModuleName, "Create", CreateRequest{}) // CreateRequest contains the initial configuration. type CreateRequest struct { Identity // SuiteID is the identifier of a cipher suite used for verifiable secret // sharing and key derivation. SuiteID uint8 `json:"suite_id,omitempty"` // Threshold is the minimum number of distinct shares required // to reconstruct a key. Threshold uint8 `json:"threshold,omitempty"` // ExtraShares represents the minimum number of shares that can be lost // to render the secret unrecoverable. ExtraShares uint8 `json:"extra_shares,omitempty"` // HandoffInterval is the time interval in epochs between handoffs. // // A zero value disables handoffs. HandoffInterval beacon.EpochTime `json:"handoff_interval,omitempty"` // Policy is a signed SGX access control policy. Policy SignedPolicySGX `json:"policy,omitempty"` } ``` ```go // MethodUpdate is the method name for CHURP updates. var MethodUpdate = transaction.NewMethodName(ModuleName, "Update", UpdateRequest{}) // UpdateRequest contains the updated configuration. type UpdateRequest struct { Identity // ExtraShares represents the minimum number of shares that can be lost // to render the secret unrecoverable. ExtraShares *uint8 `json:"extra_shares,omitempty"` // HandoffInterval is the time interval in epochs between handoffs. // // Zero value disables handoffs. HandoffInterval *beacon.EpochTime `json:"handoff_interval,omitempty"` // Policy is a signed SGX access control policy. Policy *SignedPolicySGX `json:"policy,omitempty"` } ``` ```go // MethodApply is the method name for a node submitting an application // to form a new committee. var MethodApply = transaction.NewMethodName(ModuleName, "Apply", ApplicationRequest{}) // ApplicationRequest contains a node's application to form a new committee. type ApplicationRequest struct { // Identity of the CHURP scheme. Identity // Epoch is the epoch of the handoff for which the node would like // to apply. Epoch beacon.EpochTime `json:"epoch"` // Checksum is the hash of the verification matrix. Checksum hash.Hash `json:"checksum"` } ``` ```go // MethodConfirm is the method name for a node confirming completion // of a handoff. var MethodConfirm = transaction.NewMethodName(ModuleName, "Confirm", ConfirmationRequest{}) // ConfirmationRequest confirms that the node successfully completed // the handoff. type ConfirmationRequest struct { Identity // Epoch is the epoch of the handoff for which the node reconstructed // the share. Epoch beacon.EpochTime `json:"epoch"` // Checksum is the hash of the verification matrix. Checksum hash.Hash `json:"checksum"` } ``` ### CHURP status The CHURP status provides a comprehensive overview of the selected instance.
It includes information such as how frequently handoffs occur, when the next handoff is scheduled, which nodes have submitted applications for the upcoming handoff, and which nodes form the current committee. ```go // Status represents the current state of a CHURP instance. type Status struct { Identity // SuiteID is the identifier of a cipher suite used for verifiable secret // sharing and key derivation. SuiteID uint8 `json:"suite_id"` // Threshold represents the degree of the secret-sharing polynomial. // // In a (t,n) secret-sharing scheme, where t represents the threshold, // any combination of t+1 or more shares can reconstruct the secret, // while losing n-t or fewer shares still allows the secret to be // recovered. Threshold uint8 `json:"threshold"` // ExtraShares represents the minimum number of shares that can be lost // to render the secret unrecoverable. // // If t and e represent the threshold and extra shares, respectively, // then the minimum size of the committee is t+e+1. ExtraShares uint8 `json:"extra_shares"` // HandoffInterval is the time interval in epochs between handoffs. // // A zero value disables handoffs. HandoffInterval beacon.EpochTime `json:"handoff_interval"` // Policy is a signed SGX access control policy. Policy SignedPolicySGX `json:"policy"` // Handoff is the epoch of the last successfully completed handoff. // // The zero value indicates that no handoffs have been completed so far. // Note that the first handoff is special and is called the dealer phase, // in which nodes do not reshare or randomize shares but instead construct // the secret and shares. Handoff beacon.EpochTime `json:"handoff"` // The hash of the verification matrix from the last successfully completed // handoff. Checksum *hash.Hash `json:"checksum,omitempty"` // Committee is a vector of nodes holding a share of the secret // in the active handoff. // // A client needs to obtain more than a threshold number of key shares // from the nodes in this vector to construct the key. Committee []signature.PublicKey `json:"committee,omitempty"` // NextHandoff defines the epoch in which the next handoff will occur. // // If an insufficient number of applications is received, the next handoff // will be delayed by one epoch. NextHandoff beacon.EpochTime `json:"next_handoff"` // NextChecksum is the hash of the verification matrix from the current // handoff. // // The first candidate to confirm share reconstruction is the source // of truth for the checksum. All other candidates need to confirm // with the same checksum; otherwise, the applications will be annulled, // and the nodes will need to apply for the new committee again. NextChecksum *hash.Hash `json:"next_checksum,omitempty"` // Applications is a map of nodes that wish to form the new committee. // // Candidates are expected to generate a random bivariate polynomial, // construct a verification matrix, compute its checksum, and submit // an application one epoch in advance of the next scheduled handoff. // Subsequently, upon the arrival of the handoff epoch, nodes must execute // the handoff protocol and confirm the reconstruction of its share. Applications map[signature.PublicKey]Application `json:"applications,omitempty"` } ``` ### Key manager worker Enabling CHURP on a key manager node requires explicitly specifying the CHURP instances it should join. ```go // Config is the keymanager worker configuration structure. type Config struct { // ... existing fields omitted ... // Churp holds configuration details for the CHURP extension. 
Churp ChurpConfig `yaml:"churp,omitempty"` } // ChurpConfig holds configuration details for the CHURP extension. type ChurpConfig struct { // Schemes is a list of CHURP scheme configurations. Schemes []ChurpSchemeConfig `yaml:"schemes,omitempty"` } // ChurpSchemeConfig holds configuration details for a CHURP scheme. type ChurpSchemeConfig struct { // ID is the unique identifier of the CHURP scheme. ID uint8 `yaml:"id,omitempty"` } ``` ### CHURP application The application should be capable of running multiple CHURP instances, with each instance implementing the following trait: ```rust /// Interface for handling a CHURP instance. pub(crate) trait Handler: Send + Sync { /// Returns the verification matrix of the shared secret bivariate /// polynomial from the last successfully completed handoff. /// /// The verification matrix is a matrix of dimensions t_n x t_m, where /// t_n = threshold and t_m = 2 * threshold + 1. It contains encrypted /// coefficients of the secret bivariate polynomial whose zero coefficient /// represents the shared secret. /// /// Verification matrix: /// ```text /// M = [b_{i,j} * G] /// ``` /// Bivariate polynomial: /// ```text /// B(x,y) = \sum_{i=0}^{t_n} \sum_{j=0}^{t_m} b_{i,j} x^i y^j /// ``` /// Shared secret: /// ```text /// Secret = B(0, 0) /// ``` /// /// This matrix is used to verify switch points derived from the bivariate /// polynomial share in handoffs. /// /// NOTE: This method can be called over an insecure channel, as the matrix /// does not contain any sensitive information. However, the checksum /// of the matrix should always be verified against the consensus layer. fn verification_matrix(&self, req: &QueryRequest) -> Result>; /// Returns switch point for share reduction for the calling node. /// /// The point is evaluation of the shared secret bivariate polynomial /// at the given x (me) and y value (node ID). /// /// Switch point: /// ```text /// Point = B(me, node_id) /// ``` /// Bivariate polynomial: /// ```text /// B(x,y) = \sum_{i=0}^{t_n} \sum_{j=0}^{t_m} b_{i,j} x^i y^j /// ``` /// /// WARNING: This method must be called over a secure channel as the point /// needs to be kept secret and generated only for authorized nodes. fn share_reduction_switch_point(&self, ctx: &RpcContext, req: &QueryRequest) -> Result>; /// Returns switch point for full share distribution for the calling node. /// /// The point is evaluation of the proactivized shared secret bivariate /// polynomial at the given x (node ID) and y value (me). /// /// Switch point: /// ```text /// Point = B(node_id, me) + \sum Q_i(node_id, me) /// ``` /// Bivariate polynomial: /// ```text /// B(x,y) = \sum_{i=0}^{t_n} \sum_{j=0}^{t_m} b_{i,j} x^i y^j /// ``` /// Proactive bivariate polynomial: /// ```text /// Q_i(x,y) = \sum_{i=0}^{t_n} \sum_{j=0}^{t_m} b_{i,j} x^i y^j /// ``` /// /// WARNING: This method must be called over a secure channel as the point /// needs to be kept secret and generated only for authorized nodes. fn share_distribution_switch_point( &self, ctx: &RpcContext, req: &QueryRequest, ) -> Result>; /// Returns proactive bivariate polynomial share for the calling node. /// /// A bivariate share is a partial evaluation of a randomly selected /// bivariate polynomial at a specified x or y value (node ID). Each node /// interested in joining the new committee selects a bivariate polynomial /// before the next handoff and commits to it by submitting the checksum /// of the corresponding verification matrix to the consensus layer. 
/// The latter can be used to verify the received bivariate shares. /// /// Bivariate polynomial share: /// ```text /// S_i(y) = Q_i(node_id, y) (dealing phase or unchanged committee) /// S_i(x) = Q_i(x, node_id) (committee changes) /// ``` /// Proactive bivariate polynomial: /// ```text /// Q_i(x,y) = \sum_{i=0}^{t_n} \sum_{j=0}^{t_m} b_{i,j} x^i y^j /// ``` /// /// WARNING: This method must be called over a secure channel as /// the polynomial needs to be kept secret and generated only /// for authorized nodes. fn bivariate_share( &self, ctx: &RpcContext, req: &QueryRequest, ) -> Result; /// Returns the key share for the given key ID generated by the key /// derivation center. /// /// Key share: /// ```text /// KS_i = s_i * H(key_id) /// ``` /// /// WARNING: This method must be called over a secure channel as the key /// share needs to be kept secret and generated only for authorized nodes. fn sgx_policy_key_share( &self, ctx: &RpcContext, req: &KeyShareRequest, ) -> Result; /// Prepare CHURP for participation in the given handoff of the protocol. /// /// Initialization randomly selects a bivariate polynomial for the given /// handoff, computes the corresponding verification matrix and its /// checksum, and signs the latter. /// /// Bivariate polynomial: /// B(x,y) = \sum_{i=0}^{t_n} \sum_{j=0}^{t_m} b_{i,j} x^i y^j /// /// Verification matrix: /// M = [b_{i,j} * G] /// /// Checksum: /// H = KMAC256(M, runtime ID, handoff) /// /// The bivariate polynomial is zero-hole in all handoffs except in the /// first one (dealing phase). /// /// This method must be called locally. fn apply(&self, req: &HandoffRequest) -> Result; /// Tries to fetch switch points for share reduction from the given nodes. /// /// Switch points should be obtained from (at least) t distinct nodes /// belonging to the old committee, verified against verification matrix /// whose checksum was published in the consensus layer, merged into /// a reduced share using Lagrange interpolation and proactivized with /// bivariate shares. /// /// Switch point: /// ```text /// P_i = B(node_i, me) /// ``` /// Reduced share: /// ```text /// RS(x) = B(x, me) /// ``` /// Proactive reduced share: /// ```text /// QR(x) = RS(x) + \sum Q_i(x, me) /// ``` fn share_reduction(&self, req: &FetchRequest) -> Result; /// Tries to fetch switch data points for full share distribution from /// the given nodes. /// /// Switch points should be obtained from (at least) 2t distinct nodes /// belonging to the new committee, verified against the sum of the /// verification matrix and the verification matrices of proactive /// bivariate shares, whose checksums were published in the consensus /// layer, and merged into a full share using Lagrange interpolation. /// /// Switch point: /// ```text /// P_i = B(me, node_i) + \sum Q_i(me, node_i) /// ``` /// Full share: /// ```text /// FS(x) = B(me, y) + \sum Q_i(me, y) = B'(me, y) /// ``` fn share_distribution(&self, req: &FetchRequest) -> Result; /// Tries to fetch proactive bivariate shares from the given nodes. /// /// Bivariate shares should be fetched from all candidates for the new /// committee, including our own, verified against verification matrices /// whose checksums were published in the consensus layer, and summed /// into a bivariate polynomial.
/// /// Bivariate polynomial share: /// ```text /// S_i(y) = Q_i(me, y) (dealing phase or unchanged committee) /// S_i(x) = Q_i(x, me) (committee changes) /// ``` fn proactivization(&self, req: &FetchRequest) -> Result; /// Returns a signed confirmation request containing the checksum /// of the merged verification matrix. fn confirmation(&self, req: &HandoffRequest) -> Result; /// Finalizes the specified scheme by cleaning up obsolete dealers, /// handoffs, and shareholders. If the handoff was just completed, /// the shareholder is made available, and its share is persisted /// to the local storage. fn finalize(&self, req: &HandoffRequest) -> Result<()>; } ``` Methods: - `Churp.Apply` - `Churp.ShareReduction` - `Churp.ShareDistribution` - `Churp.Proactivization` - `Churp.Confirm` - `Churp.Finalize` - `Churp.VerificationMatrix` - `Churp.ShareReductionPoint` - `Churp.ShareDistributionPoint` - `Churp.BivariateShare` - `Churp.SGXPolicyKeyShare` ### Key manager client The key manager client should be extended to support CHURP functionality. ```rust /// Key manager client interface. #[async_trait] pub trait KeyManagerClient: Send + Sync { /// ... existing fields omitted ... /// Returns the verification matrix for the given handoff. async fn churp_verification_matrix( &self, churp_id: u8, epoch: EpochTime, nodes: Vec, ) -> Result, KeyManagerError>; /// Returns a switch point for the share reduction phase /// of the given handoff. async fn churp_share_reduction_point( &self, churp_id: u8, epoch: EpochTime, node_id: PublicKey, nodes: Vec, ) -> Result, KeyManagerError>; /// Returns a switch point for the share distribution phase /// of the given handoff. async fn churp_share_distribution_point( &self, churp_id: u8, epoch: EpochTime, node_id: PublicKey, nodes: Vec, ) -> Result, KeyManagerError>; /// Returns a bivariate share for the given handoff. async fn churp_bivariate_share( &self, churp_id: u8, epoch: EpochTime, node_id: PublicKey, nodes: Vec, ) -> Result; /// Returns state key. async fn churp_state_key( &self, churp_id: u8, key_id: KeyPairId, ) -> Result; } ``` ### Key derivation center application The key derivation center must implement the following traits to support share generation and key recovery. ```rust /// A trait for shareholders capable of deriving key shares. pub trait KeySharer { /// Derives a key share based on the given key ID and domain separation tag. fn make_key_share>( &self, key_id: &[u8], dst: &[u8], ) -> Result>; } /// A trait for recovering a secret key from key shares. pub trait KeyRecoverer { /// Returns the minimum number of key shares required to recover /// the secret key. fn min_shares(&self) -> usize; /// Recovers the secret key from the provided key shares. fn recover_key(&self, shares: &[EncryptedPoint]) -> Result where G: Group + Zeroize; } ``` ## Consequences ### Positive CHURP: - High security, as the master secret is shared among key manager nodes. - Supports proactive randomization (share refresh). - Dynamic committees. KDC: - High security, as the master secret is shared among key manager nodes. - Supports proactive randomization (share refresh). ### Negative CHURP: - Handoffs are computationally intensive. KDC: - The number of key manager nodes that share a master secret is fixed and cannot be changed once shares are generated. Consequently, if too many nodes are destroyed, the secret cannot be recovered. - Support for replicating a share to a specific node is needed. 
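To make the committee sizing behind these trade-offs concrete (illustrative numbers, not taken from the ADR): with threshold t = 2 and extra shares e = 2, the minimum committee size is t + e + 1 = 5 and any t + 1 = 3 key shares suffice to derive a key. Under KDC, permanently losing more than e = 2 of those shareholders leaves fewer than t + 1 shares, so the secret can no longer be recovered; under CHURP, as long as enough of the current committee remains to serve a handoff, the remaining shares can be handed off to a fresh committee at the next handoff epoch.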
### Neutral - Issuing derived key shares with CHURP should be slightly slower compared to KDC. --- ## ADR 0024: Runtime Off-chain Logic (ROFL) ## Component Oasis Core, Oasis SDK ## Changelog - 2024-02-26: Notifications - 2023-11-27: Initial draft ## Status Proposed ## Context Sometimes we may want the runtime compute nodes to run additional off-chain logic that communicates with the on-chain state securely (e.g. ensuring that the off-chain logic is being run by the same node operator, is properly attested when running in a TEE, etc.). The off-chain logic may then perform non-deterministic and potentially expensive things (like remote HTTPS requests or complex local computation) and securely interact with the on-chain logic via transactions. The main use case driving this proposal is support for running attested light client committees that read and verify information from other chains, then make this information available to Oasis runtimes with no additional trust assumptions. ## Decision While similar functionality can be implemented entirely independently on the application layer (and such solutions already exist), this proposal attempts to reuse the same security and attestation infrastructure that is already available for on-chain parts of the runtimes, specifically: - Compute nodes and runtime binary distribution and execution can stay the same as it has been for existing node operators that run the runtimes. Handling of the off-chain logic part should be done transparently if the runtime provides it. - Existing attestation, consensus and freshness proof flows can be leveraged for ensuring that the off-chain logic is running in a secure environment. One important consideration is also whether to have the off-chain logic part of the same runtime binary or have it as a completely separate binary running in its own process. This proposal decides on the latter to ensure that the off-chain TCB is completely separate from the on-chain TCB. Given that the logic running off-chain can be much more complex and can interact with untrusted external services, ensuring this separation is important as a defense-in-depth measure. The proposed architecture extends the composition of the runtime so that it now contains the following components: - **Runtime On-chain Logic (RONL)** is what has existed as the sole runtime component before this proposal. It contains the logic (and TCB) that is responsible for executing the deterministic on-chain logic of the runtime. - **Runtime Off-chain Logic (ROFL)** is an optional runtime component that may run in parallel with RONL and is part of its own TCB. It also uses the same general runtime framework and RHP, but instead of implementing the on-chain batch scheduling, execution and query methods, it only implements specific notification hooks that can trigger arbitrary actions. Both RONL and ROFL are managed as independent runtimes by the Oasis Node as host, using the existing runtime host architecture. Failure of ROFL does not affect RONL which can proceed to run as usual. ### Attestation An assumption made in this proposal is that both RONL and ROFL components are developed and built together, by the same entity, and are part of the same release. This means that we can simplify attestation by making RONL being able to attest ROFL by being aware of its exact identity. The idea is that during the release build process, ROFL is built first, its signer-independent identity (e.g. MRENCLAVE) is measured and included during compilation of RONL. 
The signer-dependent part of identity (e.g. MRSIGNER) is assumed to be the same for both and can be read from trusted CPU state (since it may not be available during the build process due to offline signing). Alternatively, one can imagine a proposal where the ROFL identity is backed by some sort of on-chain governance process defined in the RONL component. Defining such a mechanism is outside the scope of this proposal. The process for ROFL attestation proceeds as follows: 1. **Remote Attestation.** The normal runtime attestation flow is initiated by the host. As a result of this flow, the `node.CapabilityTEE` structure is generated which includes the remote attestation quote and additional data. 2. **Node Endorsement.** The host verifies the `node.CapabilityTEE` structure and if deemed correct, it signs it using the node's identity key and the following domain separation context: ``` oasis-core/node: endorse TEE capability ``` The signature is stored in a new structure `EndorsedCapabilityTEE` which is defined as follows: ```go type EndorsedCapabilityTEE struct { // CapabilityTEE is the TEE capability structure to be endorsed. CapabilityTEE CapabilityTEE `json:"capability_tee"` // NodeEndorsement is the node endorsement signature. NodeEndorsement signature.Signature `json:"node_endorsement"` } ``` 3. **Updating Node-Endorsed CapabilityTEE in ROFL.** The `EndorsedCapabilityTEE` is sent to ROFL to be stored and available for establishing secure EnclaveRPC sessions. 4. **RONL Verification.** When establishing a new session with RONL, the endorsed TEE capability is presented during session establishment. RONL verifies the quote, ensures the enclave identity is one of the known identities set at compile-time and verifies the node endorsement against the locally known node identity (both RONL and ROFL must be from the same node). If all the checks pass, a secure EnclaveRPC session is established. This flow needs to be repeated whenever RAK changes for any reason and also periodically to ensure freshness (consistent with the quote policy configured for the runtime in the consensus layer). ### Updates to the ORC Manifest The ORC manifest is extended with a field that can specify extra components which currently include ROFL binaries in a similar way as we already support regular runtime binaries (e.g. specifying the executable and SGX metadata). The manifest is updated as follows: ```go // Manifest is a deserialized runtime bundle manifest. type Manifest struct { // ... existing fields omitted ... // Components are the additional runtime components. Components []*Component `json:"components,omitempty"` } // ComponentKind is the kind of a component. type ComponentKind string const ( // ComponentInvalid is an invalid component. ComponentInvalid ComponentKind = "" // ComponentRONL is the on-chain logic component. ComponentRONL ComponentKind = "ronl" // ComponentROFL is the off-chain logic component. ComponentROFL ComponentKind = "rofl" ) // Component is a runtime component. type Component struct { // Kind is the component kind. Kind ComponentKind `json:"kind"` // Name is the name of the component that can be used to filter components // when multiple are provided by a runtime. Name string `json:"name,omitempty"` // Executable is the name of the runtime ELF executable file. Executable string `json:"executable"` // SGX is the SGX specific manifest metadata if any. 
SGX *SGXMetadata `json:"sgx,omitempty"` } ``` The top-level `executable` and `sgx` fields are supported for backwards compatibility and implicitly define a new `Component` of kind `ComponentRONL`. ### Updates to the Runtime Host Protocol This proposal includes some non-breaking updates to the Runtime Host Protocol in order to support the ROFL component, as follows: - **Consensus Block Notification.** No updates are required to facilitate notifications about consensus layer blocks as this is already handled as part of the existing RHP flow. The only change is that for ROFL, these notifications invoke a hook that can be implemented by the runtime. - **Runtime Transaction Submission.** A new method `HostSubmitTx` is introduced which allows ROFL to submit transactions to the runtime. It works by queueing the transaction in the transaction pool (local queue) for later scheduling. ```go type HostSubmitTxRequest struct { // RuntimeID is the identifier of the target runtime. RuntimeID common.Namespace `json:"runtime_id"` // Data is the raw transaction data. Data []byte `json:"data"` // Wait specifies whether the call should wait until the transaction is // included in a block. Wait bool `json:"wait,omitempty"` // Prove specifies whether the response should include a proof of // transaction being included in a block. Prove bool `json:"prove,omitempty"` } ``` - **Notify Registration.** A new method `HostRegisterNotify` is introduced which allows ROFL to register to be notified by the host when specific events occur. Note that delivery of these notifications is best effort as a dishonest host may withhold notification delivery or generate spurious notifications. Registering for notifications overwrites any previous configuration. ```go type HostRegisterNotifyRequest struct { // RuntimeBlock subscribes to runtime block notifications. RuntimeBlock bool `json:"runtime_block,omitempty"` // RuntimeEvent subscribes to runtime event emission notifications. RuntimeEvent *struct { // Tags specifies which event tags to subscribe to. Tags [][]byte `json:"tags,omitempty"` } `json:"runtime_event,omitempty"` } ``` - **Notification Delivery.** A new method `RuntimeNotify` is introduced which allows the host to deliver event notifications based on previously registered notifiers. ```go type RuntimeNotifyRequest struct { // RuntimeBlock notifies about a new runtime block. RuntimeBlock *roothash.AnnotatedBlock `json:"runtime_block,omitempty"` // RuntimeEvent notifies about a specific runtime event being emitted. RuntimeEvent *struct { // Block is the block header of the block that emitted the event. Block *roothash.AnnotatedBlock `json:"block"` // Tags are the matching tags that were emitted. Tags [][]byte `json:"tags"` } `json:"runtime_event,omitempty"` } ``` - **RONL-ROFL Communication.** The existing EnclaveRPC is reused to facilitate the communication between the two components if/when needed. For this purpose the endpoint identifier `ronl` is made available in the ROFL host method handler to address the RONL component. - **Updating Node-Endorsed CapabilityTEE in ROFL.** A new method `RuntimeCapabilityTEEUpdateEndorsement` is introduced which allows the node to refresh the `EndorsedCapabilityTEE` for ROFL. ```go type RuntimeCapabilityTEEUpdateEndorsementRequest struct { // EndorsedCapabilityTEE is an endorsed TEE capability.
EndorsedCapabilityTEE node.EndorsedCapabilityTEE `json:"ect"` } ``` ### Updates to EnclaveRPC RAK Binding Version 2 of the `RAKBinding` structure is introduced for establishment of EnclaveRPC sessions, as follows: ```rust pub enum RAKBinding { // ... previous versions omitted ... /// V2 format which supports endorsed CapabilityTEE structures. #[cbor(rename = 2)] V2 { ect: EndorsedCapabilityTEE, binding: Signature, }, } ``` Additionally, the relevant EnclaveRPC session implementation is updated to facilitate the new authentication mechanism via endorsed TEE capabilities and the session demultiplexer is updated to support authentication policies on incoming connections. ### Updates to the Runtime Host Sandbox This proposal updates the runtime host sandbox to support optionally allowing external network requests. These are then allowed only for the ROFL component (if any is available for a runtime). The following modifications are required: - When setting up the Bubblewrap sandbox, `--share-net` is passed to share the network namespace with the sandboxed runtime. All other namespaces are still unshared. - The runtime loader is modified to accept an additional argument `--allow-network` which then changes the usercall extension to pass through any address passed in the `connect_stream` handler. ### Configuration ROFL may require additional configuration, which it may obtain in one of several ways: - **On-chain Configuration.** Configuration for the ROFL component may be stored in on-chain state. ROFL would then query the current configuration and apply it locally. - **Local Per-Node Configuration.** In case some per-node configuration is required (e.g. to allow the node operator to override a default), the existing runtime local configuration mechanism can be used where configuration is provided as part of the RHP handshake. All configuration for ROFL should be contained under the `rofl` configuration key. ### Untrusted Local Storage ROFL may utilize the existing untrusted node-local storage to store things like sealed data local to the node. This store is shared between RONL and ROFL, but all ROFL keys are transparently prefixed by `rofl.` on the host such that only RONL can see (but not necessarily read) ROFL's keys but not vice versa. ### Updates to the Oasis SDK A convenient way to develop ROFL modules alongside the on-chain support functionality should be implemented in the Oasis SDK, including a way for ROFL to submit runtime transactions that can be verified on-chain as coming from a specific node/runtime instance. ## Consequences ### Positive - Oasis runtimes can easily be extended with arbitrary off-chain logic that can securely interact with on-chain functionality. - Node operators do not need to perform much additional configuration in order to support the new off-chain logic. ### Negative - Additional complexity is introduced to the Runtime Host Protocol and to the node binary. ### Neutral ## References --- ## ADR 0025: Hot-loading of Runtime Bundles ## Component Oasis Core ## Changelog - 2025-07-13: Initial version ## Status Accepted ## Context The secure procedure for upgrading runtimes is not simple and requires significant effort from node operators. First, they must download the latest bundle, which includes the updated version of the runtime they intend to upgrade. Then, they need to verify the bundle, update the node configuration to point to the new bundle's location, and restart the node.
Finally, once the new version is active, they must remove the outdated bundles. ## Decision This proposal aims to automate bundle discovery and distribution so that the upgrade process is more secure, reliable, and user-friendly for node operators. The process would involve the following steps: - The runtime owner publishes the bundle checksum, i.e. the SHA-256 hash of the runtime bundle manifest, on-chain when registering a new runtime deployment. - Upon registration of the new deployment, the node automatically retrieves the bundle URL corresponding to the provided checksum from the configured bundle registries and downloads the bundle. The downloaded bundle is then verified and extracted to the appropriate location. - Once the new deployment becomes active, files and bundles associated with the previous versions are removed from the file system. If needed, we can gradually transition to a more decentralized version of this process. ## Metadata File A metadata file is a plain text document that references a specific bundle ORC file and must follow these rules: - The name must match the checksum of the bundle. - The content must be a single line containing the URL of the corresponding bundle ORC file. For example, for the Sapphire ParaTime version 0.8.2, the metadata file would be named `e523903e480a8bef7caf18b846aefaa17913878b67eee13ac618849dd0bb8741` and would look like this: ```txt https://github.com/oasisprotocol/sapphire-paratime/releases/download/v0.8.2/sapphire-paratime.orc ``` ## Bundle Registry The bundle registry is responsible for storing metadata files used for bundle discovery and distribution. It may host metadata for one or more runtimes. The registry must ensure that all metadata files are accessible through a bundle registry URL, as metadata URLs are formed by appending the metadata file name, i.e. the bundle checksum, to this URL. Therefore, the bundle registry URL doesn't need to be a valid endpoint, only the constructed metadata URLs need to be valid. Note that the registry itself does not need to store any bundles; these can be hosted externally. Similarly, the runtime owners do not need to change their existing release process. Instead, they simply need to extend it by publishing metadata files with each release, if they wish to support hot-loading. ### Oasis Bundle Registry To avoid requiring every runtime owner to host their own bundle registry, the Oasis team has prepared a shared [bundle registry](https://github.com/oasisprotocol/bundle-registry), which is included by default in the node configuration. Runtime owners can contribute by creating a pull request to add their metadata files to this shared registry. Alternatively, they can override the default configuration and use a custom one if they prefer to maintain their own. To explicitly use the Oasis bundle registry, you need to add the following URL to the configuration. Note that this URL is not a valid endpoint by itself: ```txt https://raw.githubusercontent.com/oasisprotocol/bundle-registry/main/metadata/ ``` When the node requests a metadata file, for example for Sapphire ParaTime version 0.8.2, the full URL is constructed by appending the bundle checksum, as sketched below.
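For illustration, the following minimal Go sketch shows this construction (the `fetchBundleURL` helper is hypothetical and not part of oasis-node): it appends the manifest checksum to the configured registry base URL, fetches the metadata file, and returns the bundle ORC URL from its single line.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchBundleURL is a hypothetical helper (not part of oasis-node): it appends
// the bundle manifest checksum to a registry base URL, fetches the metadata
// file and returns the bundle ORC URL contained on its single line.
func fetchBundleURL(registryBaseURL, manifestChecksumHex string) (string, error) {
	metadataURL := registryBaseURL + manifestChecksumHex

	resp, err := http.Get(metadataURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	// The metadata file is a plain text document with a single line.
	return strings.TrimSpace(string(body)), nil
}

func main() {
	url, err := fetchBundleURL(
		"https://raw.githubusercontent.com/oasisprotocol/bundle-registry/main/metadata/",
		"e523903e480a8bef7caf18b846aefaa17913878b67eee13ac618849dd0bb8741",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(url) // expected: the sapphire-paratime.orc release URL shown below
}
```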
This will point to the correct file: ```txt https://raw.githubusercontent.com/oasisprotocol/bundle-registry/main/metadata/e523903e480a8bef7caf18b846aefaa17913878b67eee13ac618849dd0bb8741 ``` ## Updates to the Runtime Configuration Runtime configuration needs to be extended to accept a list of bundle registry URLs from which metadata files can be fetched. Registries hosting metadata for all runtimes can be defined at the top level, while runtime-specific registries can be configured individually for each runtime. The global configuration and runtime-specific configuration should be updated as follows: ```go // Config is the runtime registry configuration structure. type Config struct { // ... existing fields omitted ... // Registries is the list of base URLs used to fetch runtime bundle metadata. // // The actual metadata URLs are constructed by appending the manifest hash // to the base URL. Therefore, the provided URLs don't need to be valid // endpoints themselves, only the constructed URLs need to be valid. Registries []string `yaml:"registries,omitempty"` } ``` ```go // RuntimeConfig is the runtime configuration. type RuntimeConfig struct { // ... existing fields omitted ... // Registries is the list of base URLs used to fetch runtime bundle metadata. // // The actual metadata URLs are constructed by appending the manifest hash // to the base URL. Therefore, the provided URLs don't need to be valid // endpoints themselves, only the constructed URLs need to be valid. Registries []string `yaml:"registries,omitempty"` } ``` ## Consequences ### Positive - Seamless runtime upgrades. - Reduced manual effort for node operators. - Improved security, as bundles are automatically verified. ### Negative - Requires runtime owners to maintain metadata files and publish them consistently. ### Neutral - Does not change the release process. - Bundle hosting remains flexible and decentralized. --- ## Oasis Core Developer Documentation [Image: Architecture] ## Development Setup Here are instructions on how to set up the local build environment, run the tests and some examples on how to prepare test networks for local development of Oasis Core components. * Build Environment Setup and Building * [Prerequisites](development-setup/prerequisites.md) * [Building](development-setup/building.md) * Running Tests and Development Networks * [Running Tests](development-setup/running-tests.md) * [Local Network Runner With a Simple Runtime](development-setup/oasis-net-runner.md) * [Single Validator Node Network](development-setup/single-validator-node-network.md) * [Deploying a Runtime](development-setup/deploying-a-runtime.md) ## High-Level Components At the highest level, Oasis Core is divided into two major layers: the _consensus layer_ and the _runtime layer_ as shown on the figure above. The idea behind the consensus layer is to provide a minimal set of features required to securely operate independent runtimes running in the runtime layer. It provides the following services: * Epoch-based time keeping and a random beacon. * Basic staking operations required to operate a PoS blockchain. * An entity, node and runtime registry that distributes public keys and metadata. * Runtime committee scheduling, commitment processing and minimal state keeping. On the other side, each runtime defines its own state and state transitions independent from the consensus layer, submitting only short proofs that computations were performed and results were stored. 
This means that runtime state and logic are completely decoupled from the consensus layer, and the consensus layer only provides information on what state (summarized by a cryptographic hash of a Merklized data structure) is considered canonical at any given point in time. See the following chapters for more details on specific components and their implementations. * [Consensus Layer](consensus/README.md) * [Transactions](consensus/transactions.md) * Services * [Epoch Time](consensus/services/epochtime.md) * [Random Beacon](consensus/services/beacon.md) * [Staking](consensus/services/staking.md) * [Registry](consensus/services/registry.md) * [Committee Scheduler](consensus/services/scheduler.md) * [Governance](consensus/services/governance.md) * [Root Hash](consensus/services/roothash.md) * [Key Manager](consensus/services/keymanager.md) * [Genesis Document](consensus/genesis.md) * [Transaction Test Vectors](consensus/test-vectors.md) * [Runtime Layer](runtime/README.md) * [Operation Model](runtime/README.md#operation-model) * [Runtime Host Protocol](runtime/runtime-host-protocol.md) * [Identifiers](runtime/identifiers.md) * [Messages](runtime/messages.md) * Oasis Node (`oasis-node`) * [RPC](oasis-node/rpc.md) * [Metrics](oasis-node/metrics.md) * [CLI](oasis-node/cli.md) ## Common Functionality * [Serialization](encoding.md) * [Cryptography](crypto.md) * Protocols * [Authenticated gRPC](authenticated-grpc.md) * [Merklized Key-Value Store (MKVS)](mkvs.md) ## Processes * [Architectural Decision Records](https://github.com/oasisprotocol/adrs) * [Release Process](release-process.md) * [Versioning](versioning.md) * [Security](SECURITY.md) --- ## Security(Core) At [Oasis Foundation], we take security very seriously and we deeply appreciate any effort to discover and fix vulnerabilities in [Oasis Core] and other projects powering the [Oasis Network]. We prefer that security reports be sent through our [private bug bounty program linked on our website](https://oasis.net/security-and-tees). We sketch out the general classification of the kinds of errors below. This is not intended to be an exhaustive list. [Oasis Foundation]: https://oasis.net/ [Oasis Core]: https://github.com/oasisprotocol/oasis-core [Oasis Network]: https://github.com/oasisprotocol/docs/blob/main/docs/general/oasis-network/README.md ## Specifications Our [papers] specify what we are building. Additional designs and specifications may be made available later. NB: Our designs/specifications describe what we are building toward, and do not necessarily reflect the state of the current iteration of the system. Implementation/specification mismatches in such cases are expected. - Conceptual errors. - Ambiguities, inconsistencies, or incorrect statements. - Mismatch between specifications and implementation of any subsystems / modules, when the implementation is considered complete. [papers]: https://github.com/oasisprotocol/docs/blob/main/docs/general/oasis-network/papers.mdx ## Contract Computational/Data Integrity - Race conditions / non-determinism. These may introduce a denial of service opportunity, or a way to force the system into slow path / recovery mode. - Conditions under which compute nodes may cause a bogus transaction result to be accepted (committed to the blockchain) by the system. ### CometBFT - CometBFT has its own vulnerability disclosure policy and bug bounty, so in general issues in the core CometBFT code should be reported [there](https://github.com/cometbft/cometbft/blob/master/SECURITY.md). 
- Oasis Core code that misuses CometBFT code, i.e., in violation of API/contract, would definitely be in scope. ### Discrepancy Detection - Situations where a discrepant computation is not detected. - Situations where a discrepancy occurs but no receipts are generated/retained for blame assignment / slashing after slow-path recovery. ### Storage - We use immutable authenticated data structures. - Undetected mutations. E.g., situations where a conceptually immutable data structure can be changed without updating hashes (and thus getting a new ID). - Missing/incomplete ADS proof generation or verification. - Availability failures. Potential DoS, e.g., malformed requests that cause node panics, etc. ## Contract Confidentiality - Cryptography: information leak or integrity failure, e.g., due to a poor choice of signature algorithm, AEAD schemes, etc, or to improper usage of the cryptographic schemes. NB: side channels are out of scope. - TEE misuse, model failures. ## Availability Bugs that create a potential for DOS or DDOS attack, e.g.: - Amplification attacks. - Failstop crashes / panics. - Deadlocks / livelocks. --- ## Authenticated gRPC Oasis Core nodes communicate between themselves over various protocols. One of those protocols is [gRPC] which is currently used for the following: * Compute nodes talking to storage nodes. * Compute nodes talking to key manager nodes. * Key manager nodes talking to other key manager nodes. * Clients talking to compute nodes. * Clients talking to key manager nodes. All these communications can have access control policies attached specifying who is allowed to perform certain actions at which point in time. This first requires an authentication mechanism. [gRPC]: https://grpc.io ## TLS In order to authenticate both ends of a connection, gRPC is always used together with TLS. However, since this is a decentralized network, there are some specifics on how peer verification is performed when establishing a TLS session between two nodes. Instead of relying on Certificate Authorities, we use the [registry service] provided by the [consensus layer]. Each node publishes its own trusted public keys in the registry as part of its [signed node descriptor]. TLS sessions use its own ephemeral [Ed25519 key pair] that is used to (self-)sign a node's X509 certificate. When verifying peer identities the public key on the certificate is compared with the public key(s) published in the registry. All TLS keys are ephemeral and nodes are encouraged to frequently rotate them (the Oasis Core implementation in this repository supports this automatically). For details on how certificate verification is performed see [the `VerifyCertificate` implementation] in [`go/common/crypto/tls`]. [registry service]: consensus/services/registry.md [consensus layer]: consensus/README.md [signed node descriptor]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/node?tab=doc#Node [Ed25519 key pair]: crypto.md [the `VerifyCertificate` implementation]: https://github.com/oasisprotocol/oasis-core/tree/master/go/common/crypto/tls/verify.go [`go/common/crypto/tls`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/common/crypto/tls ## gRPC Oasis Core uses some specific conventions that depart from the most common gRPC setups and are described in the following sections. ### CBOR Codec While gRPC is most commonly used with the Protocol Buffers codec the gRPC protocol is agnostic to the actual underlying serialization format. 
Oasis Core uses [CBOR] for encoding of all messages used in our gRPC services. This requires that the codec is explicitly configured while setting up connections. Our [gRPC helpers] automatically configure the correct codec so using it should be transparent. The only quirk of this setup is that service codegen is not available with arbitrary codecs, so glue code for both the server and the client needs to be generated manually (for examples see the `grpc.go` files in various `api` packages). [CBOR]: encoding.md [gRPC helpers]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/grpc?tab=doc ### Errors As gRPC provides very limited error reporting capability in the form of a few defined error codes, we extend this mechanism to support proper error remapping. Detailed errors are returned as part of the [gRPC error details structure]. The `Value` field of the first detail element contains the following CBOR-serialized structure that specifies the (namespaced) error: ```golang type grpcError struct { Module string `json:"module,omitempty"` Code uint32 `json:"code,omitempty"` } ``` If you use the provided [gRPC helpers] any errors will be mapped to registered error types automatically. [gRPC error details structure]: https://pkg.go.dev/google.golang.org/genproto/googleapis/rpc/status?tab=doc#Status ### Service Naming Convention We use the same service method namespacing convention as gRPC over Protocol Buffers. All Oasis Core services have unique identifiers starting with `oasis-core.` followed by the service identifier. A single slash (`/`) is used as the separator in method names, e.g., `/oasis-core.Storage/SyncGet`. --- ## Big Integer Quantities Arbitrary-precision positive integer quantities are represented by the `quantity.Quantity` type. ## Encoding When encoded it uses the big-endian byte order. --- ## Consensus Layer Oasis Core is designed around the principle of modularity. The _consensus layer_ is an interface that provides a number of important services to other parts of Oasis Core. This allows, in theory, for the consensus backend to be changed. The different backends live in [`go/consensus`], with the general interfaces in [`go/consensus/api`]. The general rule is that anything outside of a specific consensus backend package should be consensus backend agnostic. For more details about the actual API that the consensus backends must provide see the [consensus backend API documentation]. Currently the only supported consensus backend is [CometBFT], a BFT consensus protocol. For this reason some API surfaces may not be fully consensus backend agnostic. Each consensus backend needs to provide the following services: - [Epoch Time], an epoch-based time keeping service. - [Random Beacon], a source of randomness for other services. - [Staking], operations required to operate a PoS blockchain. - [Registry], an entity/node/runtime public key and metadata registry service. - [Committee Scheduler] service. - [Governance] service. - [Root Hash], runtime commitment processing and minimal runtime state keeping service. - [Key Manager] policy state keeping service. Each of the above services provides methods to query its current state. In order to mutate the current state, each operation needs to be wrapped into a [consensus transaction] and submitted to the consensus layer for processing. Oasis Core defines an interface for each kind of service (in `go//api`), with all concrete service implementations living together with the consensus backend implementation. 
The service API defines the transaction format for mutating state together with any query methods (both are consensus backend agnostic). [`go/consensus`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/consensus [`go/consensus/api`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/consensus/api [consensus backend API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/consensus/api?tab=doc [CometBFT]: https://cometbft.com/ [Epoch Time]: services/epochtime.md [Random Beacon]: services/beacon.md [Staking]: services/staking.md [Registry]: services/registry.md [Committee Scheduler]: services/scheduler.md [Governance]: services/governance.md [Root Hash]: services/roothash.md [Key Manager]: services/keymanager.md [consensus transaction]: transactions.md ## CometBFT [Image: CometBFT] The CometBFT consensus backend lives in [`go/consensus/cometbft`]. For more information about CometBFT itself see [the CometBFT Core developer documentation]. This section assumes familiarity with the CometBFT Core concepts and APIs. When used as an Oasis Core consensus backend, CometBFT Core is used as a library and thus lives in the same process. The CometBFT consensus backend is split into two major parts: 1. The first part is the **ABCI application** that represents the core logic that is replicated by CometBFT Core among the network nodes using the CometBFT BFT protocol for consensus. 1. The second part is the **query and transaction submission glue** that makes it easy to interact with the ABCI application, presenting everything via the Oasis Core Consensus interface. [`go/consensus/cometbft`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/consensus/cometbft [the CometBFT Core developer documentation]: https://docs.cometbft.com/ ### ABCI Application Multiplexer CometBFT Core consumes consensus layer logic via the [ABCI protocol], which assumes a single application. Since we have multiple services that need to be provided by the consensus layer we use an _ABCI application multiplexer_ which performs some common functions and dispatches transactions to the appropriate service-specific handler. The multiplexer lives in [`go/consensus/cometbft/abci/mux.go`] with the multiplexed applications, generally corresponding to services required by the _consensus layer_ interface living in [`go/consensus/cometbft/apps/`]. [ABCI protocol]: https://github.com/cometbft/cometbft/blob/master/spec/abci/abci.md [`go/consensus/cometbft/abci/mux.go`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/consensus/cometbft/abci/mux.go [`go/consensus/cometbft/apps/`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/consensus/cometbft/apps ### State Storage All application state for the CometBFT consensus backend is stored using our [Merklized Key-Value Store]. [Merklized Key-Value Store]: ../mkvs.md ### Service Implementations Service implementations for the CometBFT consensus backend live in [`go/consensus/cometbft/`]. They provide the glue between the services running as part of the ABCI application multiplexer and the Oasis Core service APIs. The interfaces generally provide a read-only view of the consensus layer state at a given height. Internally, these perform queries against the ABCI application state. #### Queries Queries do not use the [ABCI query functionality] as that would incur needless overhead for our use case (with CometBFT Core running in the same process). 
Instead, each multiplexed service provides its own `QueryFactory` which can be used to query state at a specific block height. An example of a `QueryFactory` and the corresponding `Query` interfaces for the staking service are as follows: ```golang // QueryFactory is the staking query factory interface. type QueryFactory interface { QueryAt(ctx context.Context, height int64) (Query, error) } // Query is the staking query interface. type Query interface { TotalSupply(ctx context.Context) (*quantity.Quantity, error) CommonPool(ctx context.Context) (*quantity.Quantity, error) LastBlockFees(ctx context.Context) (*quantity.Quantity, error) // ... further query methods omitted ... } ``` Implementations of this interface generally directly access the underlying ABCI state storage to answer queries. CometBFT implementations of Oasis Core consensus services generally follow the following pattern (example from the staking service API for querying `TotalSupply`): ```golang func (s *staking) TotalSupply(ctx context.Context, height int64) (*quantity.Quantity, error) { q, err := s.querier.QueryAt(ctx, height) if err != nil { return nil, err } return q.TotalSupply(ctx) } ``` [`go/consensus/cometbft/`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/consensus/cometbft [ABCI query functionality]: https://github.com/cometbft/cometbft/blob/master/spec/abci/abci.md#query-1 #### Transactions Each [serialized signed Oasis Core transaction] directly corresponds to a [CometBFT transaction]. Submission is performed by pushing the serialized transaction bytes into the [mempool] where it first undergoes basic checks and is then gossiped to the CometBFT P2P network. Handling of basic checks and transaction execution is performed by the ABCI application multiplexer mentioned above. [serialized signed Oasis Core transaction]: transactions.md [CometBFT transaction]: https://docs.cometbft.com/v0.38/core/using-cometbft#transactions [mempool]: https://github.com/cometbft/cometbft/blob/master/spec/abci/abci.md#mempool-connection --- ## Genesis Document The genesis document contains a set of parameters that outline the initial state of the [consensus layer] and its services. For more details about the actual genesis document's API, see [genesis API documentation]. [consensus layer]: README.md [genesis API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/genesis/api ## Genesis Document's Hash The genesis document's hash is computed as: ``` Base16(SHA512-256(CBOR())) ``` where: - `Base16()` represents the hex encoding function, - `SHA512-256()` represents the SHA-512/256 hash function as described in [Cryptography][crypto-hash] documentation, - `CBOR()` represents the *canonical* CBOR encoding function as described in [Serialization] documentation, and - `` represents a given genesis document. This should not be confused with a SHA-1 or a SHA-256 checksum of a [genesis file] that is used to check if the downloaded genesis file is correct. This hash is also used for [chain domain separation][crypto-chain] as the last part of the [domain separation] context. [crypto-chain]: ../crypto.md#chain-domain-separation [domain separation]: ../crypto.md#domain-separation [crypto-hash]: ../crypto.md#hash-functions [Serialization]: ../encoding.md [genesis file]: #genesis-file ## Genesis File A genesis file is a JSON file corresponding to a serialized genesis document. 
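The hash described in the previous section can be computed from the parsed document roughly as follows. This is a minimal, illustrative sketch only, assuming oasis-core's `cbor` package and the `genesis/api.Document` type; oasis-core also ships its own helpers for this.

```go
package main

import (
	"crypto/sha512"
	"encoding/hex"
	"fmt"

	"github.com/oasisprotocol/oasis-core/go/common/cbor"
	genesis "github.com/oasisprotocol/oasis-core/go/genesis/api"
)

// documentHashHex computes Base16(SHA512-256(CBOR(<genesis document>))) as
// described above.
func documentHashHex(doc *genesis.Document) string {
	raw := cbor.Marshal(doc)          // canonical CBOR encoding of the document
	sum := sha512.Sum512_256(raw)     // SHA-512/256 hash
	return hex.EncodeToString(sum[:]) // Base16 (hex) encoding
}

func main() {
	fmt.Println(documentHashHex(&genesis.Document{}))
}
```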
For a high-level overview of the genesis file, its sections, parameters and the parameter values that are used for the Oasis Network, see: [Genesis File Overview].

[Genesis File Overview]: https://github.com/oasisprotocol/docs/blob/main/docs/node/reference/genesis-doc.md

### Canonical Form

The *canonical* form of a genesis file is the pretty-printed JSON file with 2-space indents ending with a newline, where:

- Struct fields are encoded in the order in which they are defined in the corresponding struct definitions. The genesis document is defined by the [`genesis/api.Document`] struct which contains pointers to other structs defining the genesis state of all [consensus layer] services.
- Maps have their keys converted to strings which are then encoded in lexicographical order. This is Go's default behavior. For more details, see [`encoding/json.Marshal()`]'s documentation.

This should not be confused with the *canonical* CBOR encoding of the genesis document that is used to derive the domain separation context as described in the [Genesis Document's Hash] section.

This form is used to enable simple diffing/patching with the standard Unix tools (i.e. `diff`/`patch`).

[`genesis/api.Document`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/genesis/api#Document
[`encoding/json.Marshal()`]: https://golang.org/pkg/encoding/json/#Marshal
[Genesis Document's Hash]: #genesis-documents-hash

---

## Random Beacon

The random beacon service is responsible for providing a source of unbiased randomness on each epoch. It uses a commit-reveal scheme backed by a PVSS scheme such that as long as the threshold of participants is met, and at least one participant is honest, secure entropy will be generated.

The service interface definition lives in [`go/beacon/api`]. It defines the supported queries and transactions. For more information you can also check out the [consensus service API documentation] and the [beacon ADR specification].

[`go/beacon/api`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/beacon/api
[consensus service API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/beacon/api?tab=doc
[beacon ADR specification]: https://github.com/oasisprotocol/adrs/blob/main/0007-improved-random-beacon.md

## Operation

Each node generates and maintains a long term elliptic curve point and scalar pair (public/private key pair), the point (public key) of which is included in the node descriptor stored by the [registry service]. In the initial implementation, the curve is P-256.

The beacon generation process is split into three sequential stages. Any failures in the _Commit_ and _Reveal_ phases result in a failed protocol round, and the generation process will restart after disqualifying participants who have induced the failure.

[registry service]: registry.md

### Commit Phase

Upon epoch transition or a prior failed round, the commit phase is initiated and the consensus service will select `participants` nodes from the current validator set (in order of descending stake) to serve as entropy contributors.

The beacon state is (re)-initialized, and an event is broadcast to signal to the participants that they should generate and submit their encrypted shares via a `beacon.PVSSCommit` transaction.

Each commit phase lasts exactly `commit_interval` blocks, at the end of which, the round will be closed to further commits.
At the end of the commit phase, the protocol state is evaluated to ensure that `threshold` nodes have published encrypted shares, and if an insufficient number of nodes have published them, the round is considered to have failed.

The following behaviors are currently candidates for a node being marked as malicious/non-participatory and subject to exclusion from future rounds and slashing:

- Not submitting a commitment.
- Malformed commitments (corrupted/fails to validate/etc).
- Attempting to alter an existing commitment for a given epoch/round.

### Reveal Phase

When the `commit_interval` has passed, assuming that a sufficient number of commits have been received, the consensus service transitions into the reveal phase and broadcasts an event to signal to the participants that they should reveal the decrypted values of the encrypted shares received from other participants via a `beacon.PVSSReveal` transaction.

Each reveal phase lasts exactly `reveal_interval` blocks, at the end of which, the round will be closed to further reveals.

At the end of the reveal phase, the protocol state is evaluated to ensure that `threshold` nodes have published decrypted shares, and if an insufficient number of nodes have done so, the round is considered to have failed.

The following behaviors are currently candidates for a node being marked as malicious/non-participatory and subject to exclusion from future rounds and slashing:

- Not submitting a reveal.
- Malformed reveals (corrupted/fails to validate/etc).
- Attempting to alter an existing reveal for a given epoch/round.

### Complete (Transition Wait) Phase

When the `reveal_interval` has passed, assuming that a sufficient number of reveals have been received, the beacon service recovers the final entropy output (the hash of the secret shared by each participant), transitions into the complete (transition wait) phase and broadcasts an event to signal to participants the completion of the round.

No meaningful protocol activity happens once a round has successfully completed, beyond the scheduling of the next epoch transition.

## Methods

The following sections describe the methods supported by the consensus beacon service. Note that the methods can only be called by validators and only when they are the block proposer.

### PVSS Commit

Submits a PVSS commit.

**Method name:**

```
beacon.PVSSCommit
```

**Body:**

```golang
type PVSSCommit struct {
    Epoch  EpochTime    `json:"epoch"`
    Round  uint64       `json:"round"`
    Commit *pvss.Commit `json:"commit,omitempty"`
}
```

### PVSS Reveal

Submits a PVSS reveal.

**Method name:**

```
beacon.PVSSReveal
```

**Body:**

```golang
type PVSSReveal struct {
    Epoch  EpochTime    `json:"epoch"`
    Round  uint64       `json:"round"`
    Reveal *pvss.Reveal `json:"reveal,omitempty"`
}
```

## Consensus Parameters

- `participants` is the number of participants to be selected for each beacon generation protocol round.
- `threshold` is the minimum number of participants which must successfully contribute entropy for the final output to be considered valid. This is also the minimum number of participants that are required to reconstruct a PVSS secret from the corresponding decrypted shares.
- `commit_interval` is the duration of the _Commit_ phase, in blocks.
- `reveal_interval` is the duration of the _Reveal_ phase, in blocks.
- `transition_delay` is the duration of the post _Reveal_ phase delay, in blocks.

---

## Epoch Time

---

## Governance

The governance service is responsible for providing an on-chain governance mechanism.
The service interface definition lives in [`go/governance/api`]. It defines the supported queries and transactions. For more information you can also check out the [consensus service API documentation] and the [governance ADR specification].

[`go/governance/api`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/governance/api
[consensus service API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/governance/api?tab=doc
[governance ADR specification]: https://github.com/oasisprotocol/adrs/blob/main/0006-consensus-governance.md

## Methods

The following sections describe the methods supported by the consensus governance service.

### Submit Proposal

Proposal submission enables a new consensus layer governance proposal to be created.

**Method name:**

```
governance.SubmitProposal
```

**Body:**

```golang
// ProposalContent is a consensus layer governance proposal content.
type ProposalContent struct {
    Upgrade       *UpgradeProposal       `json:"upgrade,omitempty"`
    CancelUpgrade *CancelUpgradeProposal `json:"cancel_upgrade,omitempty"`
}

// UpgradeProposal is an upgrade proposal.
type UpgradeProposal struct {
    upgrade.Descriptor
}

// CancelUpgradeProposal is an upgrade cancellation proposal.
type CancelUpgradeProposal struct {
    // ProposalID is the identifier of the pending upgrade proposal.
    ProposalID uint64 `json:"proposal_id"`
}
```

**Fields:**

- `upgrade` (optional) specifies an upgrade proposal.
- `cancel_upgrade` (optional) specifies an upgrade cancellation proposal.

Exactly one of the proposal kind fields needs to be non-nil, otherwise the proposal is considered malformed.

### Vote

Vote enables casting votes for submitted consensus layer governance proposals.

**Method name:**

```
governance.CastVote
```

**Body:**

```golang
type ProposalVote struct {
    // ID is the unique identifier of a proposal.
    ID uint64 `json:"id"`
    // Vote is the vote.
    Vote Vote `json:"vote"`
}
```

## Events

### Proposal Submitted Event

**Body:**

```golang
type ProposalSubmittedEvent struct {
    // ID is the unique identifier of a proposal.
    ID uint64 `json:"id"`
    // Submitter is the staking account address of the submitter.
    Submitter staking.Address `json:"submitter"`
}
```

Emitted for every submitted proposal.

### Proposal Finalized Event

**Body:**

```golang
type ProposalFinalizedEvent struct {
    // ID is the unique identifier of a proposal.
    ID uint64 `json:"id"`
    // State is the new proposal state.
    State ProposalState `json:"state"`
}
```

Emitted when a proposal is finalized.

### Proposal Executed Event

**Body:**

```golang
type ProposalExecutedEvent struct {
    // ID is the unique identifier of a proposal.
    ID uint64 `json:"id"`
}
```

Emitted when a passed proposal is executed.

### Vote Event

**Body:**

```golang
type VoteEvent struct {
    // ID is the unique identifier of a proposal.
    ID uint64 `json:"id"`
    // Submitter is the staking account address of the vote submitter.
    Submitter staking.Address `json:"submitter"`
    // Vote is the cast vote.
    Vote Vote `json:"vote"`
}
```

Emitted when a vote is cast.

## Consensus Parameters

- `gas_costs` (transaction.Costs) are the governance transaction gas costs.
- `min_proposal_deposit` (base units) specifies the number of base units that are deposited when creating a new proposal.
- `voting_period` (epochs) specifies the number of epochs after which the voting for a proposal is closed and the votes are tallied.
- `quorum` (uint8: \[0,100\]) specifies the minimum percentage of voting power that needs to be cast on a proposal for the result to be valid.
- `threshold` (uint8: \[0,100\]) specifies the minimum percentage of `VoteYes` votes in order for a proposal to be accepted. - `upgrade_min_epoch_diff` (epochs) specifies the minimum number of epochs between the current epoch and the proposed upgrade epoch for the upgrade proposal to be valid. Additionally specifies the minimum number of epochs between two consecutive pending upgrades. - `upgrade_cancel_min_epoch_diff` (epochs) specifies the minimum number of epochs between the current epoch and the proposed upgrade epoch for the upgrade cancellation proposal to be valid. ## Test Vectors To generate test vectors for various governance [transactions], run: ```bash make -C go governance/gen_vectors ``` For more information about the structure of the test vectors see the section on [Transaction Test Vectors]. [transactions]: ../transactions.md [Transaction Test Vectors]: ../test-vectors.md --- ## Key Manager The key manager service is responsible for coordinating the SGX-based key manager runtimes. It stores and publishes policy documents and status updates required for key manager replication. The service interface definition lives in [`go/keymanager/api`]. It defines the supported queries and transactions. For more information you can also check out the [consensus service API documentation]. [`go/keymanager/api`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/keymanager/api [consensus service API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/keymanager/api?tab=doc ## Policies A key manager policy document defines the policy that key manager implementations use to enforce access control to key material. At this point the policy document is specifically designed to work with our Intel SGX-based key manager runtime. The [policy document] specifies the following access control policies that are enforced by the key manager runtime based on the calling enclave identity: * **Enclaves that may query private keys.** These are usually enclave identities of confidential runtimes that need access to per-runtime private keys to decrypt state. * **Enclaves that may replicate the master secret.** These are usually enclave identities of new key manager enclave versions, to support upgrades. Own enclave identity is implied (to allow key manager replication) and does not need to be explicitly specified. In order for the policy to be valid and accepted by a key manager enclave it must be signed by a configured threshold of keys. Both the threshold and the authorized public keys that can sign the policy are hardcoded in the key manager enclave. [policy document]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/keymanager/api?tab=doc#PolicySGX ## Methods ### Update Policy Policy update enables the key manager runtime owning entity to update the current key manager policy. A new update policy transaction can be generated using [`NewUpdatePolicyTx`]. **Method name:** ``` keymanager.UpdatePolicy ``` The body of an update policy transaction must be a [`SignedPolicySGX`] which is a signed key manager access control policy. The signer of the transaction must be the key manager runtime's owning entity. 
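As a rough illustration, such a transaction could be constructed along the following lines (a hedged sketch: it assumes `NewUpdatePolicyTx` follows the usual `(nonce, fee, body)` constructor shape used by other consensus services; consult the linked API docs for the exact signature):

```golang
package kmpolicy

import (
    "github.com/oasisprotocol/oasis-core/go/consensus/api/transaction"
    keymanager "github.com/oasisprotocol/oasis-core/go/keymanager/api"
)

// buildUpdatePolicyTx wraps an already signed policy into an unsigned consensus
// transaction; it still needs to be signed by the runtime's owning entity and
// submitted to the network.
func buildUpdatePolicyTx(nonce uint64, sigPol *keymanager.SignedPolicySGX) *transaction.Transaction {
    // A nil fee leaves fee/gas selection to the caller before signing.
    return keymanager.NewUpdatePolicyTx(nonce, nil, sigPol)
}
```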
[`NewUpdatePolicyTx`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/keymanager/api?tab=doc#NewUpdatePolicyTx [`SignedPolicySGX`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/keymanager/api?tab=doc#SignedPolicySGX ## Events --- ## Registry The registry service is responsible for managing a registry of runtime, entity and node public keys and metadata. The service interface definition lives in [`go/registry/api`]. It defines the supported queries and transactions. For more information you can also check out the [consensus service API documentation]. [`go/registry/api`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/registry/api [consensus service API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc ## Resources The registry service manages different kinds of resources which are described from a high level perspective in this chapter. ### Entities and Nodes An entity managed by the registry service is a key pair that owns resources in the registry. It can represent an organization or an individual with [stake] on the network. Currently, an entity can own the following types of resources: * nodes and * runtimes. A node is a device (process running in a VM, on bare metal, in a container, etc.) that is participating in a committee on the Oasis Core network. It is identified by its own key pair. The reason for separating entities from nodes is to enable separation of concerns. Both nodes and entities require stake to operate (e.g., to be registered in the registry and be eligible for specific roles). While entities have their own (or [delegated]) stake, nodes use stake provided by entities that operate them. Nodes need to periodically refresh their resource descriptor in the registry in order for it to remain fresh and to do this they need to have online access to their corresponding private key(s). On the other hand entities' private keys are more sensitive as they can be used to manage stake and other resources. For this reason they should usually be kept offline and having entities as separate resources enables that. [stake]: staking.md [delegated]: staking.md#delegation ### Runtimes A [runtime] is effectively a replicated application with shared state. The registry resource describes a runtime's operational parameters, including its identifier, kind, admission policy, committee scheduling, storage, governance model, etc. For a full description of the runtime descriptor see [the `Runtime` structure]. The chosen governance model indicates how the runtime descriptor can be updated in the future. There are currently three supported governance models: * **Entity governance** where the runtime owner is the only one who can update the runtime descriptor via `registry.RegisterRuntime` method calls. * **Runtime-defined governance** where the runtime itself is the only one who can update the runtime descriptor by emitting a runtime message. * **Consensus layer governance** where only the consensus layer itself can update the runtime descriptor through network governance. [runtime]: ../../runtime/README.md [the `Runtime` structure]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#Runtime ## Methods The following sections describe the methods supported by the consensus registry service. ### Register Entity Entity registration enables a new entity to be created. A new register entity transaction can be generated using [`NewRegisterEntityTx`]. 
**Method name:**

```
registry.RegisterEntity
```

The body of a register entity transaction must be a [`SignedEntity`] structure, which is a [signed envelope][envelopes] containing an [`Entity`] descriptor. The signer of the entity MUST be the same as the signer of the transaction.

Registering an entity may require sufficient stake in the entity's [escrow account].

[`NewRegisterEntityTx`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#NewRegisterEntityTx
[`SignedEntity`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/entity?tab=doc#SignedEntity
[`Entity`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/entity?tab=doc#Entity
[envelopes]: ../../crypto.md#envelopes
[escrow account]: staking.md#escrow

### Deregister Entity

Entity deregistration enables an existing entity to be removed. A new deregister entity transaction can be generated using [`NewDeregisterEntityTx`].

**Method name:**

```
registry.DeregisterEntity
```

The body of a deregister entity transaction must be `nil`. The entity is implied to be the signer of the transaction.

_If an entity still has either nodes or runtimes registered, it is not possible to deregister the entity and such a transaction will fail._

[`NewDeregisterEntityTx`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#NewDeregisterEntityTx

### Register Node

Node registration enables a new node to be created. A new register node transaction can be generated using [`NewRegisterNodeTx`].

**Method name:**

```
registry.RegisterNode
```

The body of a register node transaction must be a [`MultiSignedNode`] structure, which is a [multi-signed envelope][envelopes] containing a [`Node`] descriptor. The signer of the transaction MUST be the node identity key.

The owning entity MUST have the given node identity public key whitelisted in the `Nodes` field in its [`Entity`] descriptor.

The node descriptor structure MUST be signed by all the following keys:

* Node identity key.
* Consensus key.
* TLS key.
* P2P key.

Registering a node may require sufficient stake in the owning entity's [escrow account]. There are two kinds of thresholds that the node may need to satisfy:

* Global thresholds are the same for all runtimes and are defined by the consensus parameters (see [`Thresholds` in staking consensus parameters]).
* In _addition_ to the global thresholds, each runtime the node is registering for may define its own thresholds. The runtime-specific thresholds are defined in the [`Staking` field] in the runtime descriptor.

In case the node is registering for multiple runtimes, it needs to satisfy the sum of thresholds of all the runtimes it is registering for.

[`NewRegisterNodeTx`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#NewRegisterNodeTx
[`MultiSignedNode`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/node?tab=doc#MultiSignedNode
[`Node`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/node?tab=doc#Node
[`Thresholds` in staking consensus parameters]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#ConsensusParameters.Thresholds
[`Staking` field]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#Runtime.Staking

### Unfreeze Node

Node unfreezing enables a previously frozen (e.g., due to slashing) node to be thawed so it can again be eligible for committee elections. A new unfreeze node transaction can be generated using [`NewUnfreezeNodeTx`].
**Method name:** ``` registry.UnfreezeNode ``` **Body:** ```golang type UnfreezeNode struct { NodeID signature.PublicKey `json:"node_id"` } ``` **Fields:** * `node_id` specifies the node identifier of the node to thaw. The transaction signer MUST be the entity key that owns the node. Thawing a node requires that the node's freeze period has already passed. The freeze period for any given attributable fault (e.g., double signing) is a consensus parameter (see [`Slashing` in staking consensus parameters]). [`NewUnfreezeNodeTx`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#NewUnfreezeNodeTx [`Slashing` in staking consensus parameters]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#ConsensusParameters.Slashing ### Register Runtime Runtime registration enables a new runtime to be created. A new register runtime transaction can be generated using [`NewRegisterRuntimeTx`]. **Method name:** ``` registry.RegisterRuntime ``` The body of a register runtime transaction must be a [`Runtime`] descriptor. The signer of the transaction MUST be the owning entity key. Registering a runtime may require sufficient stake in either the owning entity's (when entity governance is used) or the runtime's (when runtime governance is used) [escrow account]. Changing the governance model from entity governance to runtime governance is allowed. Any other governance model changes are not allowed. [`NewRegisterRuntimeTx`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#NewRegisterRuntimeTx [`Runtime`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#Runtime ## Events ## Test Vectors To generate test vectors for various registry [transactions], run: ```bash make -C go registry/gen_vectors ``` For more information about the structure of the test vectors see the section on [Transaction Test Vectors]. [transactions]: ../transactions.md [Transaction Test Vectors]: ../test-vectors.md --- ## Root Hash The roothash service is responsible for runtime commitment processing and minimal runtime state keeping. The service interface definition lives in [`go/roothash/api`]. It defines the supported queries and transactions. For more information you can also check out the [consensus service API documentation]. [`go/roothash/api`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/roothash/api/api.go [consensus service API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/roothash/api?tab=doc ## Methods ### Executor Commit The executor commit method allows an executor node to submit commitments of an executed computation. A new executor commit transaction can be generated using [`NewExecutorCommitTx`]. **Method name:** ``` roothash.ExecutorCommit ``` **Body:** ```golang type ExecutorCommit struct { ID common.Namespace `json:"id"` Commits []commitment.ExecutorCommitment `json:"commits"` } ``` **Fields:** * `id` specifies the [runtime identifier] of a runtime this commit is for. * `commits` are the [executor commitments]. 
[`NewExecutorCommitTx`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/roothash/api?tab=doc#NewExecutorCommitTx [runtime identifier]: ../../runtime/identifiers.md [executor commitments]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/roothash/api/commitment?tab=doc#ExecutorCommitment ## Events ## Consensus Parameters * `max_runtime_messages` (uint32) specifies the global limit on the number of [messages] that can be emitted in each round by the runtime. The default value of `0` disables the use of runtime messages. [messages]: ../../runtime/messages.md --- ## Committee Scheduler The committee scheduler service is responsible for periodically scheduling all committees (validator, compute, key manager) based on [epoch-based time] and entropy provided by the [random beacon]. The service interface definition lives in [`go/scheduler/api`]. It defines the supported queries and transactions. For more information you can also check out the [consensus service API documentation]. [epoch-based time]: epochtime.md [random beacon]: beacon.md [`go/scheduler/api`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/scheduler/api [consensus service API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/scheduler/api?tab=doc ## Events ## Validator Committee To schedule the validator committee, the committee scheduler selects among nodes [registered] with the [`RoleValidator`] role. Each node's entity must have an [escrow account balance] meeting the total thresholds for the nodes and runtimes that it has registered. If an entity's escrow account balance is too low to meet the total threshold, the committee scheduler does not consider that entity's nodes. From these qualifying nodes, the committee scheduler selects at most one node from each entity, up to a maximum validator committee size. The maximum validator committee size is configured in the genesis document, under the path `.scheduler.params.max_validators` (consult the [genesis document] for details). Unlike how the committee scheduler schedules other committees, it schedules the validator committee by choosing nodes from the entities that have the highest escrow account balances. When the committee scheduler schedules the validator committee, it additionally assigns each member a _voting power_, which controls (i) the weight of its votes in the consensus protocol and (ii) how often it serves as the proposer in the consensus protocol. The committee scheduler assigns a validator's voting power proportional to its entity's [escrow account balance]. [registered]: registry.md#register-node [`RoleValidator`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/node?tab=doc#RoleValidator [escrow account balance]: staking.md#escrow [genesis document]: https://github.com/oasisprotocol/docs/blob/main/docs/node/reference/genesis-doc.md#committee-scheduler --- ## Staking The staking service is responsible for managing the staking ledger in the consensus layer. It enables operations like transferring stake between accounts and escrowing stake for specific needs (e.g., operating nodes). The service interface definition lives in [`go/staking/api`]. It defines the supported queries and transactions. For more information you can also check out the [consensus service API documentation]. 
[`go/staking/api`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/staking/api/api.go
[consensus service API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc

## Tokens and Base Units

Stake amounts can be denominated in tokens and base units.

Tokens are used in user-facing scenarios (e.g. CLI commands) where the token amount is prefixed with the token's ticker symbol as defined by the [`Genesis`' `TokenSymbol` field][pkggodev-genesis].

Another [`Genesis`' field, `TokenValueExponent`][pkggodev-genesis], defines the token's value base-10 exponent. For example, if `TokenValueExponent` is 6, then 1 token equals 10^6 (i.e. one million) base units.

Internally, base units are used for all stake calculation and processing.

[pkggodev-genesis]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#Genesis

## Accounts

A staking account is an entry in the staking ledger. It can hold both general and escrow accounts.

Each staking account has an address which is derived from the corresponding public key as follows:

```
[ 1 byte <ctx version> ][ first 20 bytes of SHA512-256(<ctx identifier> || <ctx version> || <data>) ]
```

Where `<ctx version>` and `<ctx identifier>` represent the staking account address' context version and identifier and `<data>` represents the data specific to the address kind.

There are two kinds of accounts:

* User accounts linked to a specific public key.
* Runtime accounts linked to a specific [runtime identifier].

Addresses use [Bech32 encoding] for text serialization with `oasis` as its human readable part (HRP) prefix (for both kinds of accounts).

### User Accounts

In case of user accounts, the `<ctx version>` and `<ctx identifier>` are as defined by the [`AddressV0Context` variable], and `<data>` represents the account signer's public key (e.g. entity id).

For more details, see the [`NewAddress` function].

When generating an account's private/public key pair, follow [ADR 0008: Standard Account Key Generation][ADR 0008].

### Runtime Accounts

In case of runtime accounts, the `<ctx version>` and `<ctx identifier>` are as defined by the [`AddressRuntimeV0Context` variable], and `<data>` represents the [runtime identifier].

For more details, see the [`NewRuntimeAddress` function].

The runtime accounts belong to runtimes and can only be manipulated by the runtime by [emitting messages] to the consensus layer.

### Reserved Addresses

Some staking account addresses are reserved to prevent them from being accidentally used in the actual ledger. Currently, they are:

* `oasis1qrmufhkkyyf79s5za2r8yga9gnk4t446dcy3a5zm`: common pool address (defined by [`CommonPoolAddress` variable]).
* `oasis1qqnv3peudzvekhulf8v3ht29z4cthkhy7gkxmph5`: per-block fee accumulator address (defined by [`FeeAccumulatorAddress` variable]).
* `oasis1qp65laz8zsa9a305wxeslpnkh9x4dv2h2qhjz0ec`: governance deposits address (defined by the [`GovernanceDeposits` variable]).
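To make the derivation above concrete, here is a small Go sketch of the raw address bytes (the context string and version are the author's assumptions based on the `AddressV0Context` definition; the Bech32 text encoding step is omitted):

```golang
package address

import "crypto/sha512"

// deriveAddress follows the layout described above:
//   [ 1 byte <ctx version> ][ first 20 bytes of SHA512-256(<ctx identifier> || <ctx version> || <data>) ]
func deriveAddress(ctxIdentifier string, ctxVersion byte, data []byte) []byte {
    preimage := append(append([]byte(ctxIdentifier), ctxVersion), data...)
    h := sha512.Sum512_256(preimage)
    return append([]byte{ctxVersion}, h[:20]...)
}

// exampleUserAddress derives a user account address from a 32-byte public key,
// assuming the "oasis-core/address: staking" context with version 0
// (illustrative values only). The resulting 21 bytes would then be
// Bech32-encoded with the "oasis" HRP for text serialization.
func exampleUserAddress(pubKey []byte) []byte {
    return deriveAddress("oasis-core/address: staking", 0, pubKey)
}
```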
[runtime identifier]: ../../runtime/identifiers.md
[`AddressV0Context` variable]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#pkg-variables
[`NewAddress` function]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#NewAddress
[`AddressRuntimeV0Context` variable]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#pkg-variables
[`NewRuntimeAddress` function]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#NewRuntimeAddress
[emitting messages]: ../../runtime/messages.md
[Bech32 encoding]: https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki#bech32
[`CommonPoolAddress` variable]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#pkg-variables
[`FeeAccumulatorAddress` variable]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#pkg-variables
[`GovernanceDeposits` variable]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#pkg-variables
[ADR 0008]: https://github.com/oasisprotocol/adrs/blob/main/0008-standard-account-key-generation.md

### General

General accounts store the account's general balance and nonce. The nonce is an incrementing number that must be unique for each of the account's transactions.

### Escrow

Escrow accounts are used to hold stake delegated for specific consensus-layer operations (e.g., registering and running nodes). Their balance is subject to special delegation provisions and a debonding period.

Delegation provisions, also called commissions, are specified by the [`CommissionSchedule` field].

An escrow account also has a corresponding stake accumulator. It stores stake claims for an escrow account and ensures all claims are satisfied at any given point. Adding a new claim is only possible if all of the existing claims plus the new claim can be satisfied.

[`CommissionSchedule` field]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#CommissionSchedule

#### Delegation

When a delegator wants to delegate some amount of stake to a staking account, he needs to escrow stake using [Add Escrow method]. Similarly, when a delegator wants to reclaim some amount of escrowed stake back to his general account, he needs to reclaim stake using [Reclaim Escrow method].

To simplify accounting, each escrow results in the delegator account being issued shares which can be converted back to stake during the reclaim escrow operation.

When a delegator delegates some amount of stake to an escrow account, the delegator receives the number of shares proportional to the current _share price_ (in base units) calculated from the total amount of stake delegated to an escrow account so far and the number of shares issued so far:

```
shares_per_base_unit = account_issued_shares / account_delegated_base_units
```

For example, if an escrow account has the following state:

```json
"escrow": {
    "active": {
        "balance": "250",
        "total_shares": "1000"
    },
    ...
}
```

then the current share price (i.e. `shares_per_base_unit`) is 1000 / 250 = 4.

Delegating 500 base units to this escrow account would result in 500 * 4 = 2000 newly issued shares.

Thus, the escrow account would have the following state afterwards:

```json
"escrow": {
    "active": {
        "balance": "750",
        "total_shares": "3000"
    },
    ...
}
```

When a delegator wants to reclaim a certain amount of escrowed stake, the _base unit price_ (in shares) must be calculated based on the escrow account's current active balance and the number of issued shares:

```text
base_units_per_share = account_delegated_base_units / account_issued_shares
```

Returning to our example escrow account, the current base unit price (i.e. `base_units_per_share`) is 750 / 3000 = 0.25.

Reclaiming 1200 shares would result in 1200 * 0.25 = 300 base units being reclaimed. The escrow account would have the following state afterwards:

```json
"escrow": {
    "active": {
        "balance": "450",
        "total_shares": "1800"
    },
    ...
}
```

Reclaiming escrow does not complete immediately, but may be subject to a debonding period during which the stake still remains escrowed.

[Add Escrow method]: #add-escrow
[Reclaim Escrow method]: #reclaim-escrow

#### Commission Schedule

A staking account can be configured to take a commission on staking rewards given to its node(s). The commission rates are defined by the [`CommissionRateStep` type].

The commission rate must be within bounds, which the staking account can also specify using the [`CommissionRateBoundStep` type].

The commission rates and rate bounds can change over time, which is defined by the [`CommissionSchedule` type][`CommissionSchedule` field].

To prevent unexpected changes in commission rates and rate bounds, they must be specified a number of epochs in the future, controlled by the [`CommissionScheduleRules` consensus parameter].

[`CommissionRateStep` type]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#CommissionRateStep
[`CommissionRateBoundStep` type]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#CommissionRateBoundStep
[`CommissionScheduleRules` consensus parameter]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#CommissionScheduleRules

## Methods

The following sections describe the methods supported by the consensus staking service.

### Transfer

Transfer enables stake transfer between different accounts in the staking ledger. A new transfer transaction can be generated using [`NewTransferTx` function].

**Method name:**

```
staking.Transfer
```

**Body:**

```golang
type Transfer struct {
    To     Address           `json:"to"`
    Amount quantity.Quantity `json:"amount"`
}
```

**Fields:**

* `to` specifies the destination account's address.
* `amount` specifies the amount of base units to transfer.

The transaction signer implicitly specifies the source account.

[`NewTransferTx` function]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#NewTransferTx

### Burn

Burn destroys some stake in the caller's account. A new burn transaction can be generated using [`NewBurnTx` function].

**Method name:**

```
staking.Burn
```

**Body:**

```golang
type Burn struct {
    Amount quantity.Quantity `json:"amount"`
}
```

**Fields:**

* `amount` specifies the amount of base units to burn.

The transaction signer implicitly specifies the caller's account.

[`NewBurnTx` function]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#NewBurnTx

### Add Escrow

Escrow transfers stake into an escrow account. For more details, see the [Delegation section] of this document. A new add escrow transaction can be generated using [`NewAddEscrowTx` function].
**Method name:** ``` staking.AddEscrow ``` **Body:** ```golang type Escrow struct { Account Address `json:"account"` Amount quantity.Quantity `json:"amount"` } ``` **Fields:** * `account` specifies the destination escrow account's address. * `amount` specifies the amount of base units to transfer. The transaction signer implicitly specifies the source account. [Delegation section]: #delegation [`NewAddEscrowTx` function]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#NewAddEscrowTx ### Reclaim Escrow Reclaim escrow starts the escrow reclamation process. For more details, see the [Delegation section] of this document. A new reclaim escrow transaction can be generated using [`NewReclaimEscrowTx` function]. **Method name:** ``` staking.ReclaimEscrow ``` **Body:** ```golang type ReclaimEscrow struct { Account Address `json:"account"` Shares quantity.Quantity `json:"shares"` } ``` **Fields:** * `account` specifies the source escrow account's address. * `shares` specifies the number of shares to reclaim. The transaction signer implicitly specifies the destination account. [`NewReclaimEscrowTx` function]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#NewReclaimEscrowTx ### Amend Commission Schedule Amend commission schedule updates the commission schedule specified for the given escrow account. For more details, see the [Commission Schedule section] of this document. A new amend commission schedule transaction can be generated using [`NewAmendCommissionScheduleTx` function]. **Method name:** ``` staking.AmendCommissionSchedule ``` **Body:** ```golang type AmendCommissionSchedule struct { Amendment CommissionSchedule `json:"amendment"` } ``` **Fields:** * `amendment` defines the amended commission schedule. The transaction signer implicitly specifies the escrow account. [Commission Schedule section]: #commission-schedule [`NewAmendCommissionScheduleTx` function]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#NewAmendCommissionScheduleTx ### Allow Allow enables an account holder to set an allowance for a beneficiary. A new allow transaction can be generated using [`NewAllowTx` function]. **Method name:** ``` staking.Allow ``` **Body:** ```golang type Allow struct { Beneficiary Address `json:"beneficiary"` Negative bool `json:"negative,omitempty"` AmountChange quantity.Quantity `json:"amount_change"` } ``` **Fields:** * `beneficiary` specifies the beneficiary account address. * `amount_change` specifies the absolute value of the amount of base units to change the allowance for. * `negative` specifies whether the `amount_change` should be subtracted instead of added. The transaction signer implicitly specifies the general account. Upon executing the allow the following actions are performed: * If either the `disable_transfers` staking consensus parameter is set to `true` or the `max_allowances` staking consensus parameter is set to zero, the method fails with `ErrForbidden`. * It is checked whether either the transaction signer address or the `beneficiary` address are reserved. If any are reserved, the method fails with `ErrForbidden`. * Address specified by `beneficiary` is compared with the transaction signer address. If the addresses are the same, the method fails with `ErrInvalidArgument`. * The account indicated by the signer is loaded. * If the allow would create a new allowance and the maximum number of allowances for an account has been reached, the method fails with `ErrTooManyAllowances`. 
* The set of allowances is updated so that the allowance is updated as specified by `amount_change`/`negative`. In case the change would cause the allowance to be equal to zero or negative, the allowance is removed. * The account is saved. * The corresponding [`AllowanceChangeEvent`] is emitted. [`NewAllowTx` function]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#NewAllowTx [`AllowanceChangeEvent`]: #allowance-change-event ### Withdraw Withdraw enables a beneficiary to withdraw from the given account. A new withdraw transaction can be generated using [`NewWithdrawTx` function]. **Method name:** ``` staking.Withdraw ``` **Body:** ```golang type Withdraw struct { From Address `json:"from"` Amount quantity.Quantity `json:"amount"` } ``` **Fields:** * `from` specifies the account address to withdraw from. * `amount` specifies the amount of base units to withdraw. The transaction signer implicitly specifies the destination general account. Upon executing the withdrawal the following actions are performed: * If either the `disable_transfers` staking consensus parameter is set to `true` or the `max_allowances` staking consensus parameter is set to zero, the method fails with `ErrForbidden`. * It is checked whether either the transaction signer address or the `from` address are reserved. If any are reserved, the method fails with `ErrForbidden`. * Address specified by `from` is compared with the transaction signer address. If the addresses are the same, the method fails with `ErrInvalidArgument`. * The source account indicated by `from` is loaded. * The destination account indicated by the transaction signer is loaded. * `amount` is deducted from the corresponding allowance in the source account. If this would cause the allowance to go negative, the method fails with `ErrForbidden`. * `amount` is deducted from the source general account balance. If this would cause the balance to go negative, the method fails with `ErrInsufficientBalance`. * `amount` is added to the destination general account balance. * Both source and destination accounts are saved. * The corresponding [`TransferEvent`] is emitted. * The corresponding [`AllowanceChangeEvent`] is emitted with the updated allowance. [`NewWithdrawTx` function]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#NewWithdrawTx [`TransferEvent`]: #transfer-event ## Events ### Transfer Event The transfer event is emitted when tokens are transferred from a source account to a destination account. **Body:** ```golang type TransferEvent struct { From Address `json:"from"` To Address `json:"to"` Amount quantity.Quantity `json:"amount"` } ``` **Fields:** * `from` contains the address of the source account. * `to` contains the address of the destination account. * `amount` contains the amount (in base units) transferred. ### Burn Event The burn event is emitted when tokens are burned. **Body:** ```golang type BurnEvent struct { Owner Address `json:"owner"` Amount quantity.Quantity `json:"amount"` } ``` **Fields:** * `owner` contains the address of the account that burned tokens. * `amount` contains the amount (in base units) burned. ### Escrow Event Escrow events are emitted when tokens are escrowed, taken from escrow by the protocol or reclaimed from escrow by the account owner. 
**Body:** ```golang type EscrowEvent struct { Add *AddEscrowEvent `json:"add,omitempty"` Take *TakeEscrowEvent `json:"take,omitempty"` Reclaim *ReclaimEscrowEvent `json:"reclaim,omitempty"` } ``` **Fields:** * `add` is set if the emitted event is an _Add Escrow_ event. * `take` is set if the emitted event is a _Take Escrow_ event. * `reclaim` is set if the emitted event is a _Reclaim Escrow_ event. #### Add Escrow Event The add escrow event is emitted when funds are escrowed. **Body:** ```golang type AddEscrowEvent struct { Owner Address `json:"owner"` Escrow Address `json:"escrow"` Amount quantity.Quantity `json:"amount"` NewShares quantity.Quantity `json:"new_shares"` } ``` **Fields:** * `owner` contains the address of the source account. * `escrow` contains the address of the destination account the tokens are being escrowed to. * `amount` contains the amount (in base units) escrowed. * `new_shares` contains the amount of shares created as a result of the added escrow event. Can be zero in case of (non-commissioned) rewards, where stake is added without new shares to increase share price. #### Take Escrow Event The take escrow event is emitted by the protocol when escrowed funds are slashed for whatever reason. **Body:** ```golang type TakeEscrowEvent struct { Owner Address `json:"owner"` Amount quantity.Quantity `json:"amount"` DebondingAmount quantity.Quantity `json:"debonding_amount"` } ``` **Fields:** * `owner` contains the address of the account escrow has been taken from. * `amount` contains the total amount (in base units) taken. The debonding and active escrow balances are slashed in equal proportions. * `debonding_amount` contains the amount (in base units) taken from just the debonding escrow balance. #### Reclaim Escrow Event The reclaim escrow event is emitted when a reclaim escrow operation completes successfully (after the debonding period has passed). **Body:** ```golang type ReclaimEscrowEvent struct { Owner Address `json:"owner"` Escrow Address `json:"escrow"` Amount quantity.Quantity `json:"amount"` Shares quantity.Quantity `json:"shares"` } ``` **Fields:** * `owner` contains the address of the account that reclaimed tokens from escrow. * `escrow` contains the address of the account escrow has been reclaimed from. * `amount` contains the amount (in base units) reclaimed. * `shares` contains the amount of shares reclaimed. ### Allowance Change Event **Body:** ```golang type AllowanceChangeEvent struct { Owner Address `json:"owner"` Beneficiary Address `json:"beneficiary"` Allowance quantity.Quantity `json:"allowance"` Negative bool `json:"negative,omitempty"` AmountChange quantity.Quantity `json:"amount_change"` } ``` **Fields:** * `owner` contains the address of the account owner where allowance has been changed. * `beneficiary` contains the address of the beneficiary. * `allowance` contains the new total allowance. * `amount_change` contains the absolute amount the allowance has changed for. * `negative` specifies whether the allowance has been reduced rather than increased. The event is emitted even if the new allowance is zero. ## Consensus Parameters * `max_allowances` (uint32) specifies the maximum number of [allowances] an account can store. Zero means that allowance functionality is disabled. [allowances]: #allow ## Test Vectors To generate test vectors for various staking [transactions], run: ```bash make -C go staking/gen_vectors ``` For more information about the structure of the test vectors see the section on [Transaction Test Vectors]. 
[transactions]: ../transactions.md
[Transaction Test Vectors]: ../test-vectors.md

---

## Transaction Test Vectors

In order to test transaction generation, parsing and signing, we provide a set of test vectors. They can be generated for the following consensus services:

* [Staking]
* [Registry]
* [Governance]

[Staking]: services/staking.md#test-vectors
[Registry]: services/registry.md#test-vectors
[Governance]: services/governance.md#test-vectors

## Structure

The generated test vectors file is a JSON document which provides an array of objects (test vectors). Each test vector has the following fields:

* `kind` is a human-readable string describing what kind of transaction the given test vector describes (e.g., `"Transfer"`).
* `signature_context` is the [domain separation context] used for signing the transaction.
* `tx` is the human-readable _interpreted_ unsigned transaction. Its purpose is to make it easier for the implementer to understand what the content of the transaction is. **It does not contain the structure that can be serialized directly (e.g., [addresses] may be represented as Bech32-encoded strings while in the [encoded] transaction, these would be binary blobs).**
* `signed_tx` is the human-readable signed transaction to make it easier for the implementer to understand what the [signature envelope] looks like.
* `encoded_tx` is the CBOR-encoded (since test vectors are in JSON and CBOR encoding is a binary encoding it also needs to be Base64-encoded) unsigned transaction.
* `encoded_signed_tx` is the CBOR-encoded (since test vectors are in JSON and CBOR encoding is a binary encoding it also needs to be Base64-encoded) signed transaction. **This is what is actually broadcast to the network.**
* `valid` is a boolean flag indicating whether the given test vector represents a valid transaction, including:
  * transaction having a valid signature,
  * transaction being correctly serialized,
  * transaction passing basic static validation.

  _NOTE: Even if a transaction passes basic static validation, it may still **not** be a valid transaction on the given network due to invalid nonce, or due to some specific parameters set on the network._

* `signer_private_key` is the Ed25519 private key that was used to sign the transaction in the test vector.
* `signer_public_key` is the Ed25519 public key corresponding to `signer_private_key`.

[domain separation context]: ../crypto.md#domain-separation
[addresses]: services/staking.md#address
[encoded]: ../encoding.md
[signature envelope]: ../crypto.md#envelopes

---

## Transactions

The consensus layer uses a common transaction format for all transactions. As with other Oasis Core components, it tries to be independent of any concrete [consensus backend].

The transaction API definitions and helper methods for creating and verifying transactions live in [`go/consensus/api/transaction`]. For more information you can also check out the [consensus backend API documentation].
[consensus backend]: README.md [`go/consensus/api/transaction`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/consensus/api/transaction/transaction.go [consensus backend API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/consensus/api/transaction?tab=doc ## Format Each (unsigned) transaction is represented by the following [encoded] structure: ```golang type Transaction struct { Nonce uint64 `json:"nonce"` Fee *Fee `json:"fee,omitempty"` Method string `json:"method"` Body any `json:"body,omitempty"` } ``` Fields: * `nonce` is the current caller's nonce to prevent replays. * `fee` is an optional fee that the caller commits to paying to execute the transaction. * `method` is the called method name. Method names are composed of two parts, the component name and the method name, joined by a separator (`.`). For example, `staking.Transfer` is the method name of the staking service's `Transfer` method. * `body` is the method-specific body. The actual transaction that is submitted to the consensus layer must be signed which means that it is wrapped into a [signed envelope]. [Domain separation] context (+ [chain domain separation]): ``` oasis-core/consensus: tx ``` [encoded]: ../encoding.md [signed envelope]: ../crypto.md#envelopes [Domain separation]: ../crypto.md#domain-separation [chain domain separation]: ../crypto.md#chain-domain-separation ## Fees As the consensus operations require resources to process, the consensus layer charges fees to perform operations. ### Gas Gas is an unsigned 64-bit integer denominated in _gas units_. Different operations cost different amounts of gas as defined by the consensus parameters of the consensus component that implements the operation. Transactions that require fees to process will include a `fee` field to declare how much the caller is willing to pay for fees. Specifying an `amount` (in base units) and `gas` (in gas units) implicitly defines a _gas price_ (price of one gas unit) as `amount / gas`. Consensus validators may refuse to process operations with a gas price that is too low. The `gas` field defines the maximum amount of gas that can be used by an operation for which the fee has been included. In case an operation uses more gas, processing will be aborted and no state changes will take place. Signing a transaction which includes a fee structure implicitly grants permission to withdraw the given amount of base units from the signer's account. In case there is not enough balance in the account, the operation will fail. ```golang type Fee struct { Amount quantity.Quantity `json:"amount"` Gas Gas `json:"gas"` } ``` Fees are not refunded. Fields: * `amount` is the total fee amount (in base units) to be paid. * `gas` is the maximum gas that an operation can use. ## Gas Estimation As transactions need to provide the maximum amount of gas that can be consumed during their execution, the caller may need to be able to estimate the amount of gas needed. In order to do that the consensus backend API includes a method called [`EstimateGas`] for estimating gas. The implementation of gas estimation is [backend-specific] but usually involves some kind of simulation of transaction execution to derive the maximum amount consumed by execution. [`EstimateGas`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/consensus/api?tab=doc#ClientBackend.EstimateGas [backend-specific]: README.md ## Submission Transactions can be submitted to the consensus layer by calling [`SubmitTx`] and providing a signed transaction. 
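As a hedged end-to-end sketch (package paths and helper signatures taken from the API documentation linked in this chapter; the fee values are illustrative), building, signing and submitting a staking transfer might look roughly like this:

```golang
package submit

import (
    "context"

    "github.com/oasisprotocol/oasis-core/go/common/crypto/signature"
    "github.com/oasisprotocol/oasis-core/go/common/quantity"
    consensus "github.com/oasisprotocol/oasis-core/go/consensus/api"
    "github.com/oasisprotocol/oasis-core/go/consensus/api/transaction"
    staking "github.com/oasisprotocol/oasis-core/go/staking/api"
)

// submitTransfer builds a staking.Transfer transaction, signs it with the
// given signer and submits it to the consensus layer.
func submitTransfer(ctx context.Context, cc consensus.ClientBackend, signer signature.Signer, nonce uint64, to staking.Address, amount quantity.Quantity) error {
    // Declaring amount and gas in the fee implicitly defines the gas price.
    // In practice the gas limit would come from EstimateGas (see above).
    fee := transaction.Fee{Gas: 1300}
    tx := staking.NewTransferTx(nonce, &fee, &staking.Transfer{To: to, Amount: amount})

    // Wrap the transaction into a signed envelope.
    sigTx, err := transaction.Sign(signer, tx)
    if err != nil {
        return err
    }
    return cc.SubmitTx(ctx, sigTx)
}
```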
The consensus backend API provides a submission manager for cases where the [signer] is available and automatic gas estimation and nonce lookup is desired. It is available via the [`SignAndSubmitTx`] function. [`SubmitTx`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/consensus/api?tab=doc#ClientBackend.SubmitTx [signer]: ../crypto.md [`SignAndSubmitTx`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/consensus/api?tab=doc#SignAndSubmitTx --- ## Cryptography ## Hash Functions In most places where cryptographic hashes are required, we use the SHA-512/256 hash function as specified in [FIPS 180-4]. [FIPS 180-4]: https://csrc.nist.gov/publications/detail/fips/180/4/final ## Signatures All cryptographic signatures are made using the Ed25519 (pure) scheme specified in [RFC 8032]. [RFC 8032]: https://tools.ietf.org/html/rfc8032 ### Domain Separation When signing messages and verifying signatures we require the use of a domain separation context in order to make sure the messages cannot be repurposed in a different protocol. The domain separation scheme adds a preprocessing step to any signing and verification operation. The step computes the value that is then signed/verified using Ed25519 as usual. The message to be signed is computed as follows: ``` M := H(Context || Message) ``` Where: * `H` is the SHA-512/256 cryptographic hash function. * `Context` is the domain separation context string. * `Message` is the original message. The Ed25519 signature is then computed over `M`. *NOTE: While using something like Ed25519ph/ctx as specified by [RFC 8032] would be ideal, unfortunately these schemes are not supported in many hardware security modules which is why we are using an ad-hoc scheme.* #### Contexts All of the domain separation contexts used in Oasis Core use the following convention: * They start with the string `oasis-core/`, * followed by the general module name, * followed by the string `: `, * followed by a use case description. The maximum length of a domain separation context is 255 bytes to be compatible with the length defined in [RFC 8032]. The Go implementation maintains a registry of all used contexts to make sure they are not reused incorrectly. #### Chain Domain Separation For some signatures, we must ensure that the domain separation context is tied to the given network instance as defined by the genesis document. This ensures that such messages cannot be replayed on a different network. For all domain separation contexts where chain domain separation is required, we use the following additional convention: * The context is as specified by the convention in the section above, * followed by the string ` for chain `, * followed by the [genesis document's hash]. [genesis document's hash]: consensus/genesis.md#genesis-documents-hash ### Envelopes There are currently two kinds of envelopes that are used when signing CBOR messages: * [Single signature envelope (`Signed`)] contains the CBOR-serialized blob in the `untrusted_raw_value` field and a single `signature`. * [Multiple signature envelope (`MultiSigned`)] contains the CBOR-serialized blob in the `untrusted_raw_value` field and multiple signatures in the `signatures` field. The envelopes are themselves CBOR-encoded. While no separate test vectors are provided, [those used for transactions] can be used as a reference. ## Standard Account Key Generation When generating an [account]'s private/public key pair, follow [ADR 0008: Standard Account Key Generation][ADR 0008]. 
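Recapping the domain separation preprocessing described earlier in this chapter, a minimal sketch using only the Go standard library (the real implementation additionally enforces the context registry and maximum context length):

```golang
package sigctx

import (
    "crypto/ed25519"
    "crypto/sha512"
)

// prepare computes M := H(Context || Message) with H = SHA-512/256.
func prepare(context, message []byte) []byte {
    m := sha512.Sum512_256(append(append([]byte{}, context...), message...))
    return m[:]
}

// signWithContext produces an Ed25519 signature over the preprocessed message.
func signWithContext(priv ed25519.PrivateKey, context, message []byte) []byte {
    return ed25519.Sign(priv, prepare(context, message))
}

// verifyWithContext verifies a signature produced by signWithContext.
func verifyWithContext(pub ed25519.PublicKey, context, message, sig []byte) bool {
    return ed25519.Verify(pub, prepare(context, message), sig)
}
```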
[Single signature envelope (`Signed`)]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/crypto/signature?tab=doc#Signed
[Multiple signature envelope (`MultiSigned`)]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/crypto/signature?tab=doc#MultiSigned
[those used for transactions]: consensus/test-vectors.md
[account]: consensus/services/staking.md#accounts
[ADR 0008]: https://github.com/oasisprotocol/adrs/blob/master/0008-standard-account-key-generation.md

---

## Building

This chapter contains a description of the steps required to build Oasis Core.

Before proceeding, make sure to look at the [prerequisites] required for running an Oasis Core environment.

[prerequisites]: prerequisites.md

## Unsafe Non-SGX Environment

To build everything required for running an Oasis node locally, simply execute the following in the top-level directory:

```
export OASIS_UNSAFE_SKIP_AVR_VERIFY=1
export OASIS_UNSAFE_SKIP_KM_POLICY=1
export OASIS_UNSAFE_ALLOW_DEBUG_ENCLAVES=1
make
```

To build BadgerDB without `jemalloc` support (and avoid installing `jemalloc` on your system), set

```
export OASIS_BADGER_NO_JEMALLOC=1
```

Not using `jemalloc` is fine for development purposes.

This will build all the required parts (build tools, Oasis node, runtime libraries, runtime loader, key manager and test runtimes). AVR verification and key manager policy checks are only supported on production SGX systems, so these features must be disabled in our environment via the flags above.

## SGX Environment

The compilation procedure in an SGX environment is similar to the non-SGX one, with a slightly different set of environment variables:

```
export OASIS_UNSAFE_SKIP_AVR_VERIFY=1
export OASIS_UNSAFE_ALLOW_DEBUG_ENCLAVES=1
make
```

The AVR flag is there because we are running the node in a local development environment and we will not do any attestation with Intel's remote servers. The debug enclaves flag allows enclaves in debug mode to be used.

To run an Oasis node under SGX make sure:

* Your hardware has SGX support.
* You either explicitly enabled SGX in BIOS or made an `sgx_cap_enable_device()` system call, if SGX is in the software-controlled state.
* You installed [Intel's SGX driver] (check that `/dev/isgx` exists).
* You have the AESM daemon running. The easiest way is to just run it in a Docker container by doing (this will keep the container running and it will be automatically started on boot):

  ```
  docker run \
    --detach \
    --restart always \
    --device /dev/isgx \
    --volume /var/run/aesmd:/var/run/aesmd \
    --name aesmd \
    ghcr.io/oasisprotocol/aesmd:master
  ```

Run `sgx-detect` (part of the Fortanix Rust tools) to verify that everything is configured correctly.

[Intel's SGX driver]: https://github.com/intel/linux-sgx-driver

---

## Deploying a Runtime

Before proceeding, make sure to look at the [prerequisites] required for running an Oasis Core environment, followed by the [build instructions] for the respective environment (non-SGX or SGX). Also familiarize yourself with the [`oasis-net-runner`] and see the [runtime documentation] for general documentation on runtimes.

These instructions will show how to register and deploy a runtime node on a local development network.

[prerequisites]: prerequisites.md
[build instructions]: building.md
[`oasis-net-runner`]: oasis-net-runner.md
[runtime documentation]: ../runtime/README.md

## Provision a Single Validator Node Network

Use the [`oasis-net-runner`] to provision a validator node network without any registered runtimes.
```
mkdir /tmp/runtime-example

oasis-net-runner \
  --basedir.no_temp_dir \
  --basedir /tmp/runtime-example \
  --fixture.default.node.binary go/oasis-node/oasis-node \
  --fixture.default.setup_runtimes=false \
  --fixture.default.deterministic_entities \
  --fixture.default.fund_entities \
  --fixture.default.num_entities 2
```

The following steps should be run in a separate terminal window. To simplify the instructions, set up an `ADDR` environment variable pointing to the UNIX socket exposed by the started node:

```
export ADDR=unix:/tmp/runtime-example/net-runner/network/validator-0/internal.sock
```

Confirm the network is running by listing all registered entities:

```
oasis-node registry entity list -a $ADDR -v
```

Should give output similar to:

```
{"v":2,"id":"JTUtHd4XYQjh//e6eYU7Pa/XMFG88WE+jixvceIfWrk=","nodes":["LQu4ZtFg8OJ0MC4M4QMeUR7Is6Xt4A/CW+PK/7TPiH0="]}
{"v":2,"id":"+MJpnSTzc11dNI5emMa+asCJH5cxBiBCcpbYE4XBdso="}
{"v":2,"id":"TqUyj5Q+9vZtqu10yw6Zw7HEX3Ywe0JQA9vHyzY47TU="}
```

In the following steps we will register and run the [simple-keyvalue] runtime on the network.

[simple-keyvalue]: https://github.com/oasisprotocol/oasis-core/tree/master/tests/runtimes/simple-keyvalue

## Initializing a Runtime

To generate and sign a runtime registration transaction that will initialize and register the runtime, we will use the `registry runtime gen_register` command. When initializing a runtime we need to provide the runtime descriptor.

For additional information about runtimes and parameters see the [runtime documentation] and [code reference].

Before generating the registration transaction, gather the following data and set up environment variables to simplify the instructions.

- `ENTITY_DIR` - Path to the entity directory created when starting the development network. This entity will be the runtime owner. The genesis document used when provisioning the initial network funds all of the created entities. In the following instructions we will be using the `entity-2` entity (located in the `/tmp/runtime-example/net-runner/network/entity-2/` directory).
- `ENTITY_ID` - ID of the entity that will be the owner of the runtime. You can get the entity ID from the `$ENTITY_DIR/entity.json` file.
- `GENESIS_JSON` - Path to the genesis.json file used in the development network (defaults to `/tmp/runtime-example/net-runner/network/genesis.json`).
- `RUNTIME_ID` - See [runtime identifiers] on how to choose a runtime identifier. In this example we use `8000000000000000000000000000000000000000000000000000000001234567` which is a test identifier that will not work outside local tests.
- `RUNTIME_GENESIS_JSON` - Path to the runtime genesis state file. The runtime used in this example does not use a genesis file.
- `NONCE` - Entity account nonce. If you followed the guide, nonce `0` would be the initial nonce to use for the entity. Note: make sure to keep updating the nonce when generating new transactions. To query the current account nonce, use the [stake account info] CLI command.
```
export ENTITY_DIR=/tmp/runtime-example/net-runner/network/entity-2/
export ENTITY_ID=+MJpnSTzc11dNI5emMa+asCJH5cxBiBCcpbYE4XBdso=
export GENESIS_JSON=/tmp/runtime-example/net-runner/network/genesis.json
export RUNTIME_ID=8000000000000000000000000000000000000000000000000000000001234567
export RUNTIME_DESCRIPTOR=/tmp/runtime-example/runtime_descriptor.json
export NONCE=0
```

Prepare a runtime descriptor:

```
cat << EOF > "${RUNTIME_DESCRIPTOR}"
{
  "v": 2,
  "id": "${RUNTIME_ID}",
  "entity_id": "${ENTITY_ID}",
  "genesis": {
    "state_root": "c672b8d1ef56ed28ab87c3622c5114069bdd3ad7b8f9737498d0c01ecef0967a",
    "state": null,
    "storage_receipts": null,
    "round": 0
  },
  "kind": 1,
  "tee_hardware": 0,
  "versions": {
    "version": {}
  },
  "executor": {
    "group_size": 1,
    "group_backup_size": 0,
    "allowed_stragglers": 0,
    "round_timeout": 5,
    "max_messages": 32
  },
  "txn_scheduler": {
    "algorithm": "simple",
    "batch_flush_timeout": 1000000000,
    "max_batch_size": 1000,
    "max_batch_size_bytes": 16777216,
    "propose_batch_timeout": 5
  },
  "storage": {
    "group_size": 1,
    "min_write_replication": 1,
    "max_apply_write_log_entries": 100000,
    "max_apply_ops": 2,
    "checkpoint_interval": 10000,
    "checkpoint_num_kept": 2,
    "checkpoint_chunk_size": 8388608
  },
  "admission_policy": {
    "entity_whitelist": {
      "entities": {
        "${ENTITY_ID}": {}
      }
    }
  },
  "staking": {},
  "governance_model": "entity"
}
EOF
```

[runtime identifiers]: ../runtime/identifiers.md
[stake account info]: ../oasis-node/cli.md#info

```
oasis-node registry runtime gen_register \
  --transaction.fee.gas 1000 \
  --transaction.fee.amount 0 \
  --transaction.file /tmp/runtime-example/register_runtime.tx \
  --transaction.nonce $NONCE \
  --genesis.file $GENESIS_JSON \
  --signer.backend file \
  --signer.dir $ENTITY_DIR \
  --runtime.descriptor "${RUNTIME_DESCRIPTOR}" \
  --debug.dont_blame_oasis \
  --debug.allow_test_keys
```

After confirmation, this command outputs a signed transaction in the `/tmp/runtime-example/register_runtime.tx` file. In the next step we will submit the transaction to complete the runtime registration.

When registering a runtime on a _non-development_ network you will likely want to modify the default parameters. Additionally, since we are running this on a debug network, we had to enable the `debug.dont_blame_oasis` and `debug.allow_test_keys` flags.

[code reference]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#Runtime

## Submitting the Runtime Register Transaction

To register the runtime, submit the generated transaction.

```
oasis-node consensus submit_tx \
  --transaction.file /tmp/runtime-example/register_runtime.tx \
  --address $ADDR
```

## Confirm Runtime is Registered

To confirm the runtime is registered, use the `registry runtime list` command.
```
oasis-node registry runtime list \
  --verbose \
  --include_suspended \
  --address $ADDR
```

Should give output similar to

```
{
  "v": 2,
  "id": "8000000000000000000000000000000000000000000000000000000001234567",
  "entity_id": "+MJpnSTzc11dNI5emMa+asCJH5cxBiBCcpbYE4XBdso=",
  "genesis": {
    "state_root": "c672b8d1ef56ed28ab87c3622c5114069bdd3ad7b8f9737498d0c01ecef0967a",
    "state": null,
    "storage_receipts": null,
    "round": 0
  },
  "kind": 1,
  "tee_hardware": 0,
  "versions": {
    "version": {}
  },
  "executor": {
    "group_size": 1,
    "group_backup_size": 0,
    "allowed_stragglers": 0,
    "round_timeout": 5,
    "max_messages": 32
  },
  "txn_scheduler": {
    "algorithm": "simple",
    "batch_flush_timeout": 1000000000,
    "max_batch_size": 1000,
    "max_batch_size_bytes": 16777216,
    "propose_batch_timeout": 5
  },
  "storage": {
    "group_size": 1,
    "min_write_replication": 1,
    "max_apply_write_log_entries": 100000,
    "max_apply_ops": 2,
    "checkpoint_interval": 10000,
    "checkpoint_num_kept": 2,
    "checkpoint_chunk_size": 8388608
  },
  "admission_policy": {
    "entity_whitelist": {
      "entities": {
        "+MJpnSTzc11dNI5emMa+asCJH5cxBiBCcpbYE4XBdso=": {}
      }
    }
  },
  "staking": {},
  "governance_model": "entity"
}
```

Since we did not set up any runtime nodes, the runtime will get [suspended] until nodes for the runtime register. In the next step we will set up and run a runtime node.

[suspended]: ../runtime/README.md#suspending-runtimes

## Running a Runtime Node

We will now run a node that will act as a compute, storage and client node for the runtime. In a real world scenario there would be multiple nodes running the runtime, each likely serving only a single node type.

Before running the node, gather the following parameters and set up environment variables to simplify the instructions.

- `RUNTIME_BINARY` - Path to the runtime binary that will be run on the node. We will use the [simple-keyvalue] runtime. If you followed the [build instructions], the built binary is available at `./target/default/release/simple-keyvalue`.
- `SEED_NODE_ADDRESS` - Address of the seed node in the development network. The seed node address can be seen in the `oasis-net-runner` logs when the network is initially provisioned.

```
export RUNTIME_BINARY=/workdir/target/default/release/simple-keyvalue
export SEED_NODE_ADDRESS=@127.0.0.1:20000

# Runtime node data dir.
mkdir -m 0700 /tmp/runtime-example/runtime-node

# Start runtime node.
oasis-node \
  --datadir /tmp/runtime-example/runtime-node \
  --log.level debug \
  --log.format json \
  --log.file /tmp/runtime-example/runtime-node/node.log \
  --grpc.log.debug \
  --worker.registration.entity $ENTITY_DIR/entity.json \
  --genesis.file $GENESIS_JSON \
  --worker.storage.enabled \
  --worker.compute.enabled \
  --runtime.provisioner unconfined \
  --runtime.supported $RUNTIME_ID \
  --runtime.paths $RUNTIME_ID=$RUNTIME_BINARY \
  --consensus.cometbft.debug.addr_book_lenient \
  --consensus.cometbft.debug.allow_duplicate_ip \
  --consensus.cometbft.p2p.seed $SEED_NODE_ADDRESS \
  --debug.dont_blame_oasis \
  --debug.allow_test_keys
```

This also enables unsafe debug-only flags which must never be used in a production setting as they may result in node compromise. When running a runtime node in a production setting, the `p2p.addresses` flag needs to be configured as well.

The following steps should be run in a new terminal window.

## Updating Entity Nodes

Before the newly started runtime node can register itself as a runtime node, we need to update the entity information in the registry to include the started node.

Before proceeding, gather the runtime node ID and store it in a variable.
If you followed the above instructions, the node ID can be seen in `/tmp/runtime-example/runtime-node/identity_pub.pem` (or using the [node control status command]).

Update the entity and generate a transaction that will update the registry state.

```
# NOTE: This ID is not generated deterministically, so make sure to replace it
# with your own node ID.
export NODE_ID=NOPhD7UlMZBO8fNyo2xLFanlmvl+EmZ5s4mM2z9nEBg=

oasis-node registry entity update \
  --signer.dir $ENTITY_DIR \
  --entity.node.id $NODE_ID

# NOTE: Remember to increment NONCE if you have already used it for a previous
# transaction.
oasis-node registry entity gen_register \
  --genesis.file $GENESIS_JSON \
  --signer.backend file \
  --signer.dir $ENTITY_DIR \
  --transaction.file /tmp/runtime-example/update_entity.tx \
  --transaction.fee.gas 2000 \
  --transaction.fee.amount 0 \
  --transaction.nonce $NONCE \
  --debug.dont_blame_oasis \
  --debug.allow_test_keys
```

Submit the generated transaction:

```
oasis-node consensus submit_tx \
  --transaction.file /tmp/runtime-example/update_entity.tx \
  --address $ADDR
```

Confirm the entity in the registry has been updated by querying the registry state:

```
oasis-node registry entity list -a $ADDR -v

{"v":1,"id":"JTUtHd4XYQjh//e6eYU7Pa/XMFG88WE+jixvceIfWrk=","nodes":["LQu4ZtFg8OJ0MC4M4QMeUR7Is6Xt4A/CW+PK/7TPiH0="]}
{"v":1,"id":"+MJpnSTzc11dNI5emMa+asCJH5cxBiBCcpbYE4XBdso=","nodes":["vWUfSmjrHSlN5tSSO3/Qynzx+R/UlwPV9u+lnodQ00c="]}
{"v":1,"id":"TqUyj5Q+9vZtqu10yw6Zw7HEX3Ywe0JQA9vHyzY47TU=","allow_entity_signed_nodes":true}
```

The node is now able to register and the runtime should get resumed. Make sure this happens by querying the registry for nodes and runtimes:

```
# Ensure the node is registered.
oasis-node registry node list -a $ADDR -v | grep "$NODE_ID"

# Ensure the runtime is resumed.
oasis-node registry runtime list -a $ADDR -v
```

You might need to wait a few seconds for an epoch transition so that the node is registered and the runtime gets resumed.

[node control status command]: ../oasis-node/cli.md#status

---

## Local Network Runner

In order to make development easier (and also to facilitate automated E2E tests), the Oasis Core repository provides a utility called `oasis-net-runner` that enables developers to quickly set up local networks.

Before proceeding, make sure to look at the [prerequisites] required for running an Oasis Core environment followed by [build instructions] for the respective environment (non-SGX or SGX). The following sections assume that you have successfully completed the required build steps.
[prerequisites]: prerequisites.md [build instructions]: building.md ## Unsafe Non-SGX Environment To start a simple Oasis network as defined by [the default network fixture] running the `simple-keyvalue` test runtime, do: ``` ./go/oasis-net-runner/oasis-net-runner \ --fixture.default.node.binary go/oasis-node/oasis-node \ --fixture.default.runtime.binary target/default/release/simple-keyvalue \ --fixture.default.runtime.loader target/default/release/oasis-core-runtime-loader \ --fixture.default.keymanager.binary target/default/release/simple-keymanager ``` Wait for the network to start, there should be messages about nodes being started and at the end the following message should appear: ``` level=info module=oasis/net-runner caller=oasis.go:319 ts=2019-10-03T10:47:30.776566482Z msg="network started" level=info module=net-runner caller=root.go:145 ts=2019-10-03T10:47:30.77662061Z msg="client node socket available" path=/tmp/oasis-net-runner530668299/net-runner/network/client-0/internal.sock ``` The `simple-keyvalue` runtime implements a key-value hash map in the enclave and supports reading, writing, and fetching string values associated with the given key. To learn how to create your own runtime, see the sources of the [simple-keyvalue example] and [Building a runtime] chapter in the Oasis SDK. [the default network fixture]: https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-net-runner/fixtures/default.go [simple-keyvalue example]: https://github.com/oasisprotocol/oasis-core/tree/master/tests/runtimes/simple-keyvalue [Building a runtime]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/runtime/README.md ## SGX Environment To run an Oasis node under SGX follow the same steps as for non-SGX, except the `oasis-net-runner` invocation: ``` ./go/oasis-net-runner/oasis-net-runner \ --fixture.default.tee_hardware intel-sgx \ --fixture.default.node.binary go/oasis-node/oasis-node \ --fixture.default.runtime.binary target/sgx/x86_64-fortanix-unknown-sgx/release/simple-keyvalue.sgxs \ --fixture.default.runtime.loader target/default/release/oasis-core-runtime-loader \ --fixture.default.keymanager.binary target/sgx/x86_64-fortanix-unknown-sgx/release/simple-keymanager.sgxs ``` ## Common Issues If the above does not appear to work (e.g., when you run the client, it appears to hang and not make any progress) usually the best place to start debugging is looking at the various node logs which are stored under a directory starting with `/tmp/oasis-net-runner` (unless overridden via `--basedir` options). Specifically look at `node.log` and `console.log` files located in directories for each of the nodes comprising the local network. ### User Namespace Permission Issues The Oasis Core compute nodes use [sandboxing] to execute runtime binaries and the sandbox implementation requires that the process is able to create non-privileged user namespaces. In case this is not available, the following error message may appear in `console.log` of any compute or key manager nodes: ``` bwrap: No permissions to creating new namespace, likely because the kernel does not allow non-privileged user namespaces. On e.g. debian this can be enabled with 'sysctl kernel.unprivileged_userns_clone=1' ``` In this case do as indicated in the message and run: ``` sysctl kernel.unprivileged_userns_clone=1 ``` This could also happen if you are running in a Docker container without specifying additional options at startup. See the [Using the Development Docker Image] section of the Prerequisites for details. 
[sandboxing]: ../runtime/README.md#runtimes
[Using the Development Docker Image]: prerequisites.md#using-the-development-docker-image

---

## Prerequisites (Development Setup)

The following is a list of prerequisites required to start developing on Oasis Core:

* Linux (if you are not on Linux, you will need to either set up a VM with the proper environment or, if Docker is available for your platform, use the provided Docker image which does this for you, [see below](#using-the-development-docker-image)).

* System packages:
  * [Bubblewrap] (at least version 0.3.3).
  * [GCC] (including C++ subpackage).
  * [Clang] (including development package).
  * [Protobuf] compiler.
  * [GNU Make].
  * [CMake].
  * [pkg-config].
  * [OpenSSL] development package.
  * [libseccomp] development package.

  _NOTE: On Ubuntu/Debian systems, compiling the [mbedtls] crate when building the `oasis-core-runtime` binary requires having the `gcc-multilib` package installed._

  On Fedora 29+, you can install all the above with:

  ```
  sudo dnf install bubblewrap gcc gcc-c++ clang-devel clang protobuf-compiler make cmake openssl-devel libseccomp-devel pkg-config
  ```

  On Ubuntu 18.10+ (18.04 LTS provides an overly old `bubblewrap`), you can install all the above with:

  ```
  sudo apt install bubblewrap gcc g++ gcc-multilib libclang-dev clang protobuf-compiler make cmake libssl-dev libseccomp-dev pkg-config
  ```

* [Go] (at least version 1.25.3).

  If your distribution provides a new-enough version of Go, just use that. Please note that if you want to compile Oasis Core v22.1.9 or earlier, then Go >=1.19 is not supported yet; you need to use 1.18.x.

  Otherwise:
  * install the Go version provided by your distribution,
  * [ensure `$GOPATH/bin` is in your `PATH`](https://tip.golang.org/doc/code.html#GOPATH),
  * [install the desired version of Go](https://golang.org/doc/install#extra_versions), e.g. 1.25.3, with:

    ```
    go install golang.org/dl/go1.25.3@latest
    go1.25.3 download
    ```

  * instruct the build system to use this particular version of Go by setting the `OASIS_GO` environment variable in your `~/.bashrc`:

    ```
    export OASIS_GO=go1.25.3
    ```

* [Rust].

  We follow [Rust upstream's recommendation][rust-upstream-rustup] on using [rustup] to install and manage Rust versions.

  _NOTE: rustup cannot be installed alongside a distribution-packaged Rust version. You will need to remove it (if it's present) before you can start using rustup._

  Install it by running:

  ```
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  ```

  _NOTE: If you want to avoid directly executing a shell script fetched from the internet, you can also [download `rustup-init` executable for your platform] and run it manually._

  This will run `rustup-init` which will download and install the latest stable version of Rust on your system.

* [Fortanix Rust EDP] utilities.

  Install the Fortanix Rust EDP utilities by running:

  ```
  cargo install fortanix-sgx-tools
  cargo install sgxs-tools
  ```

* Oasis Core's Rust toolchain version with the Fortanix SGX target.

  The version of the Rust toolchain we use in Oasis Core is specified in the [`rust-toolchain.toml`] file.

  The rustup-installed versions of `cargo`, `rustc` and other tools will [automatically detect this file and use the appropriate version of the Rust toolchain][rust-toolchain-precedence] when invoked from the Oasis Core git checkout directory.
To install the appropriate version of the Rust toolchain, make sure you are in an Oasis Core git checkout directory and run: ``` rustup show ``` This will automatically install the appropriate Rust toolchain (if not present) and output something similar to: ``` ... active toolchain ---------------- nightly-2024-01-08-x86_64-unknown-linux-gnu (overridden by '/code/rust-toolchain.toml') rustc 1.77.0-nightly (75c68cfd2 2024-01-07) ``` * (**OPTIONAL**) [gofumpt] and [goimports]. Required if you plan to change any of the Go code in order for automated code formatting (`make fmt`) to work. Download and install it with: ``` ${OASIS_GO:-go} install mvdan.cc/gofumpt@v0.8.0 ${OASIS_GO:-go} install golang.org/x/tools/cmd/goimports@v0.36.0 ``` * (**OPTIONAL**) [golangci-lint]. Required if you plan to change any of the Go code in order for automated code linting (`make lint`) to work. Download and install it with: ``` curl -sSfL \ https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh \ | sh -s -- -b $(${OASIS_GO:-go} env GOPATH)/bin v2.6.0 ``` * (**OPTIONAL**) [protoc-gen-go]. Download and install it with: ``` ${OASIS_GO:-go} install google.golang.org/protobuf/cmd/protoc-gen-go@v1.21.0 ``` _NOTE: If you didn't/can't add `$GOPATH/bin` to your `PATH`, you can install `protoc-gen-go` to `/usr/local/bin` (which is in `$PATH`) with:_ ``` sudo GOBIN=/usr/local/bin ${OASIS_GO:-go} install google.golang.org/protobuf/cmd/protoc-gen-go@v1.21.0 ``` _NOTE: The repository has the most up-to-date files generated by protoc-gen-go committed for convenience. Installing protoc-gen-go is only required if you are a developer making changes to protobuf definitions used by Go._ * (**OPTIONAL**) [jemalloc] (version 5.2.1, built with `'je_'` jemalloc-prefix) Alternatively set `OASIS_BADGER_NO_JEMALLOC=1` environment variable when building `oasis-node` code, to build [BadgerDB] without `jemalloc` support. Download and install `jemalloc` with: ``` JEMALLOC_VERSION=5.2.1 JEMALLOC_CHECKSUM=34330e5ce276099e2e8950d9335db5a875689a4c6a56751ef3b1d8c537f887f6 JEMALLOC_GITHUB=https://github.com/jemalloc/jemalloc/releases/download/ pushd $(mktemp -d) wget \ -O jemalloc.tar.bz2 \ "${JEMALLOC_GITHUB}/${JEMALLOC_VERSION}/jemalloc-${JEMALLOC_VERSION}.tar.bz2" # Ensure checksum matches. echo "${JEMALLOC_CHECKSUM} jemalloc.tar.bz2" | sha256sum -c tar -xf jemalloc.tar.bz2 cd jemalloc-${JEMALLOC_VERSION} # Ensure reproducible jemalloc build. # https://reproducible-builds.org/docs/build-path/ EXTRA_CXXFLAGS=-ffile-prefix-map=$(pwd -L)=. \ EXTRA_CFLAGS=-ffile-prefix-map=$(pwd -L)=. \ ./configure \ --with-jemalloc-prefix='je_' \ --with-malloc-conf='background_thread:true,metadata_thp:auto' make sudo make install popd ``` _NOTE: jemalloc needs to be installed to the (default) `/usr/local` prefix (i.e. you can't use `./configure --prefix=$HOME/.local ...`) because upstream authors [hardcode its path][jemalloc-hardcode-path]._ In the following instructions, the top-level directory is the directory where the code has been checked out. 
[Bubblewrap]: https://github.com/projectatomic/bubblewrap [GCC]: http://gcc.gnu.org/ [Clang]: https://clang.llvm.org/ [Protobuf]: https://github.com/protocolbuffers/protobuf [GNU Make]: https://www.gnu.org/software/make/ [CMake]: https://cmake.org/ [pkg-config]: https://www.freedesktop.org/wiki/Software/pkg-config [OpenSSL]: https://www.openssl.org/ [libseccomp]: https://github.com/seccomp/libseccomp [mbedtls]: https://github.com/fortanix/rust-mbedtls [Go]: https://golang.org [rustup]: https://rustup.rs/ [rust-upstream-rustup]: https://www.rust-lang.org/tools/install [download `rustup-init` executable for your platform]: https://github.com/rust-lang/rustup#other-installation-methods [Rust]: https://www.rust-lang.org/ [`rust-toolchain.toml`]: https://github.com/oasisprotocol/oasis-core/tree/master/rust-toolchain.toml [rust-toolchain-precedence]: https://github.com/rust-lang/rustup/blob/master/README.md#override-precedence [Fortanix Rust EDP]: https://edp.fortanix.com [gofumpt]: https://github.com/mvdan/gofumpt [goimports]: https://pkg.go.dev/golang.org/x/tools/cmd/goimports [golangci-lint]: https://golangci-lint.run/ [protoc-gen-go]: https://github.com/golang/protobuf [jemalloc]: https://github.com/jemalloc/jemalloc [BadgerDB]: https://github.com/dgraph-io/badger/ [jemalloc-hardcode-path]: https://github.com/dgraph-io/ristretto/blob/221ca9b2091d12e5d24aa5d7d56e49745fc175d8/z/calloc_jemalloc.go#L9-L13 ## Using the Development Docker Image If for some reason you don't want or can't install the specified prerequisites on the host system, you can use our development Docker image. This requires that you have a [recent version of Docker installed]( https://docs.docker.com/install/). Oasis development environment with all the dependencies preinstalled is available in the `ghcr.io/oasisprotocol/oasis-core-dev:master` image. To run a container, do the following in the top-level directory: ```bash make docker-shell ``` If you are curious, this target will internally run the following command: ``` docker run -t -i \ --name oasis-core \ --security-opt apparmor:unconfined \ --security-opt seccomp=unconfined \ -v $(pwd):/code \ -w /code \ ghcr.io/oasisprotocol/oasis-core-dev:master \ bash ``` All the following commands can then be used from inside the container. See the Docker documentation for detailed instructions on working with Docker containers. --- ## Running Tests Before proceeding, make sure to look at the [prerequisites] required for running an Oasis Core environment followed by [build instructions] for the respective environment (non-SGX or SGX). The following sections assume that you have successfully completed the required build steps. [prerequisites]: prerequisites.md [build instructions]: building.md ## Tests After you've built everything, you can use the following commands to run tests. To run all unit tests: ``` make test-unit ``` To run end-to-end tests locally: ``` make test-e2e ``` To run all tests: ``` make test ``` To execute tests using SGX set the following environmental variable before running the tests: ``` export OASIS_TEE_HARDWARE=intel-sgx ``` ## Troubleshooting Check the console output for mentions of a path of the form `/tmp/oasis-test-runnerXXXXXXXXX` (where each `X` is a digit). That's the log directory. Start with coarsest-level debug output in `console.log` files: ``` cat $(find /tmp/oasis-test-runnerXXXXXXXXX -name console.log) | less ``` For even more output, check the other `*.log` files. 
---

## Single Validator Node Network

It is possible to provision a local "network" consisting of a single validator node. This may be useful for specific development use cases.

Before proceeding, make sure to look at the [prerequisites] required for running an Oasis Core environment followed by [build instructions] for the respective environment (non-SGX or SGX). The following sections assume that you have successfully completed the required build steps.

These instructions are for a development-only instance. Do not use them for setting up any kind of production instance, as they are unsafe and will result in insecure configurations leading to node compromise.

[prerequisites]: prerequisites.md
[build instructions]: building.md

## Provisioning an Entity

To provision an [entity] we first prepare an empty directory under `/path/to/entity` and then initialize the entity:

```
mkdir -p /path/to/entity
cd /path/to/entity
oasis-node registry entity init --signer.backend file --signer.dir .
```

[entity]: ../consensus/services/registry.md#entities-and-nodes

## Provisioning a Node

To provision a [node] we first prepare an empty directory under `/path/to/node` and then initialize the node. The node is provisioned as a validator.

```
mkdir -p /path/to/node
cd /path/to/node
oasis-node registry node init \
  --signer.backend file \
  --signer.dir /path/to/entity \
  --node.consensus_address 127.0.0.1:26656 \
  --node.is_self_signed \
  --node.role validator
```

After the node is provisioned, we proceed with updating the [entity whitelist] so that the node will be able to register itself:

```
oasis-node registry entity update \
  --signer.backend file \
  --signer.dir /path/to/entity \
  --entity.node.descriptor /path/to/node/node_genesis.json
```

[node]: ../consensus/services/registry.md#entities-and-nodes
[entity whitelist]: ../consensus/services/registry.md#register-node

## Creating a Test Genesis Document

To create a test genesis document for your development "network", use the following commands:

```
mkdir -p /path/to/genesis
cd /path/to/genesis
oasis-node genesis init \
  --chain.id test \
  --entity /path/to/entity/entity_genesis.json \
  --node /path/to/node/node_genesis.json \
  --debug.dont_blame_oasis \
  --debug.test_entity \
  --debug.allow_test_keys \
  --registry.debug.allow_unroutable_addresses \
  --staking.token_symbol TEST
```

This enables unsafe debug-only flags which must never be used in a production setting as they may result in node compromise.

## Running the Node

To run the single validator node, use the following command:

```
oasis-node \
  --datadir /path/to/node \
  --genesis.file /path/to/genesis/genesis.json \
  --worker.registration.entity /path/to/entity/entity.json \
  --consensus.validator \
  --debug.dont_blame_oasis \
  --debug.allow_test_keys \
  --log.level debug
```

This enables unsafe debug-only flags which must never be used in a production setting as they may result in node compromise.

## Using the Node CLI

The `oasis-node` exposes [an RPC interface] via a UNIX socket located in its data directory (e.g., under `/path/to/node/internal.sock`). To simplify the following instructions, set up an `ADDR` environment variable pointing to it:

```
export ADDR=unix:/path/to/node/internal.sock
```

This can then be used to execute CLI commands against the running node (in a separate terminal).
For example to show all registered entities: ``` oasis-node registry entity list -a $ADDR -v ``` Giving output similar to: ``` {"v":1,"id":"UcxpyD0kSo/5keRqv8pLypM/Mg5S5iULRbt7Uf73vKQ=","nodes":["jo+quvaFYAP4Chyf1PRqCZZObqpDeJCxfBzTyghiXxs="]} {"v":1,"id":"TqUyj5Q+9vZtqu10yw6Zw7HEX3Ywe0JQA9vHyzY47TU=","allow_entity_signed_nodes":true} ``` Or getting a list of all staking accounts: ``` oasis-node stake list -a $ADDR ``` Giving output similar to: ``` oasis1qzzd6khm3acqskpxlk9vd5044cmmcce78y5l6000 oasis1qz3xllj0kktskjzlk0qacadgwpfe8v7sy5kztvly oasis1qrh4wqfknrlvv7whjm7mjsjlvka2h35ply289pp2 ``` [an RPC interface]: ../oasis-node/rpc.md --- ## Encoding All messages exchanged by different components in Oasis Core are encoded using [canonical CBOR as defined by RFC 7049](https://tools.ietf.org/html/rfc7049). When describing different messages in the documentation, we use Go structs with field annotations that specify how different fields translate to their encoded form. --- ## Merklized Key-Value Store (MKVS) For all places that require an [authenticated data structure (ADS)] we provide an implementation of a Merklized Key-Value Store, internally implemented as a Merklized [Patricia trie]. [authenticated data structure (ADS)]: https://www.cs.umd.edu/~mwh/papers/gpads.pdf [Patricia trie]: https://en.wikipedia.org/wiki/Radix_tree#PATRICIA ## Interfaces ### Updates ### Read Syncer --- ## `oasis-node` CLI ## `control` ### `status` Run ```sh oasis-node control status ``` to get information like the following (example taken from a runtime compute node): ```json { "software_version": "21.3", "identity": { "node": "iWq6Nft6dU2GWAr9U7ICbhXWwmAINIniKzMMblSo5Xs=", "p2p": "dGd+pGgIlkJb0dnkBQ7vI2EWWG81pF5M1G+jL2/6pyA=", "consensus": "QaMdKVwX1da0Uf82cp0DDukQQwrSjr8BwlIxc//ANE8=", "tls": [ "Kj8ANHwfMzcWoA1vx0OMhn4oGv8Y0vc46xMOdQUIh5c=", ] }, "consensus": { "version": { "major": 4 }, "backend": "tendermint", "features": 3, "node_peers": [ "5ab8074ce3053ef9b72d664c73e39972241442e3@57.71.39.73:26658", "abb66e8780f3815d87bad488a2892b4d4b2221e3@108.15.34.59:50716" ], "latest_height": 5960191, "latest_hash": "091c29c3d588c52421a4f215268c6b4ab1a7762c429a98fec5de9251f8907add", "latest_time": "2021-09-24T21:42:29+02:00", "latest_epoch": 10489, "latest_state_root": { "ns": "0000000000000000000000000000000000000000000000000000000000000000", "version": 5960190, "root_type": 1, "hash": "c34581dcec59d80656d6082260d63f3206aef0a1b950c1f2c06d1eaa36a22ec3" }, "genesis_height": 5891596, "genesis_hash": "e9d9fb99baefc3192a866581c35bf43d7f0499c64e1c150171e87b2d5dc35087", "last_retained_height": 5891596, "last_retained_hash": "e9d9fb99baefc3192a866581c35bf43d7f0499c64e1c150171e87b2d5dc35087", "chain_context": "9ee492b63e99eab58fd979a23dfc9b246e5fc151bfdecd48d3ba26a9d0712c2b", "is_validator": true }, "runtimes": { "0000000000000000000000000000000000000000000000000000000000000001": { "descriptor": { "v": 2, "id": "0000000000000000000000000000000000000000000000000000000000000001", "entity_id": "Ldzg8aeLiUBrMYxidd5DqEzpamyV2cprmRH0pG8d/Jg=", "genesis": { "state_root": "c672b8d1ef56ed28ab87c3622c5114069bdd3ad7b8f9737498d0c01ecef0967a", "state": null, "storage_receipts": null, "round": 0 }, "kind": 1, "tee_hardware": 0, "versions": { "version": { "minor": 2 } }, "executor": { "group_size": 3, "group_backup_size": 3, "allowed_stragglers": 1, "round_timeout": 5, "max_messages": 256 }, "txn_scheduler": { "algorithm": "simple", "batch_flush_timeout": 1000000000, "max_batch_size": 100, "max_batch_size_bytes": 1048576, "propose_batch_timeout": 
2000000000 }, "storage": { "group_size": 3, "min_write_replication": 2, "max_apply_write_log_entries": 10000, "max_apply_ops": 2, "checkpoint_interval": 100, "checkpoint_num_kept": 2, "checkpoint_chunk_size": 8388608 }, "admission_policy": { "any_node": {} }, "constraints": { "executor": { "backup-worker": { "max_nodes": { "limit": 1 }, "min_pool_size": { "limit": 3 } }, "worker": { "max_nodes": { "limit": 1 }, "min_pool_size": { "limit": 3 } } }, "storage": { "worker": { "max_nodes": { "limit": 1 }, "min_pool_size": { "limit": 3 } } } }, "staking": {}, "governance_model": "entity" }, "latest_round": 1355, "latest_hash": "2a11820c0524a8a753f7f4a268ee2d0a4f4588a89121f92a43f4be9cc6acca7e", "latest_time": "2021-09-24T21:41:29+02:00", "latest_state_root": { "ns": "0000000000000000000000000000000000000000000000000000000000000000", "version": 1355, "root_type": 1, "hash": "45168e11548ac5322a9a206abff4368983b5cf676b1bcb2269f5dfbdf9df7be3" }, "genesis_round": 0, "genesis_hash": "aed94c03ebd2d16dfb5f6434021abf69c8c15fc69b6b19554d23da8a5a053776", "committee": { "latest_round": 1355, "latest_height": 5960180, "last_committee_update_height": 5960174, "executor_roles": [ "worker", "backup-worker" ], "storage_roles": [ "worker" ], "is_txn_scheduler": false, "peers": [ "/ip4/57.71.39.73/tcp/41002/p2p/12D3KooWJvL8mYzHbcLtj91bf5sHhtrB7C8CWND5sV6Kk24eUdpQ", "/ip4/108.67.32.45/tcp/26648/p2p/12D3KooWBKgcH7TGMSLuxzLxK41nTwk6DsxHRpb7HpWQXJzLurcv" ] }, "storage": { "last_finalized_round": 1355 } } }, "registration": { "last_registration": "2021-09-24T21:41:08+02:00", "descriptor": { "v": 1, "id": "iWq6Nft6dU2GWAr9U7ICbhXWwmAINIniKzMMblSo5Xs=", "entity_id": "4G4ISI8hANvMRYTbxdXU+0r9m/6ZySHERR+2RDbNOU8=", "expiration": 10491, "tls": { "pub_key": "Kj8ANHwfMzcWoA1vx0OMhn4oGv8Y0vc46xMOdQUIh5c=", "addresses": [ "Kj8ANHwfMzcWoA1vx0OMhn4oGv8Y0vc46xMOdQUIh5c=@128.89.215.24:30001", ] }, "p2p": { "id": "dGd+pGgIlkJb0dnkBQ7vI2EWWG81pF5M1G+jL2/6pyA=", "addresses": [ "159.89.215.24:30002" ] }, "consensus": { "id": "QaMdKVwX1da0Uf82cp0DDukQQwrSjr8BwlIxc//ANE8=", "addresses": [ "dGd+pGgIlkJb0dnkBQ7vI2EWWG81pF5M1G+jL2/6pyA=@128.89.215.24:26656" ] }, "beacon": { "point": "BHg8TOqKD4wV8UCu9nICvJt7rhXFd8CxXuYiHa6X/NnzlIndzGNEJyyTr00s5rgKwX25yPmv+r2xRFbcQK6hGLE=" }, "runtimes": [ { "id": "0000000000000000000000000000000000000000000000000000000000000001", "version": { "minor": 2 }, "capabilities": {}, "extra_info": null } ], "roles": "compute,storage,validator" }, "node_status": { "expiration_processed": false, "freeze_end_time": 0, "election_eligible_after": 9810 } }, "pending_upgrades": [] } ``` ## `genesis` ### `check` To check if a given [genesis file] is valid, run: ```sh oasis-node genesis check --genesis.file /path/to/genesis.json ``` This also checks if the genesis file is in the [canonical form]. ### `dump` To dump the state of the network at a specific block height, e.g. 717600, to a [genesis file], run: ```sh oasis-node genesis dump \ --address unix:/path/to/node/internal.sock \ --genesis.file /path/to/genesis_dump.json \ --height 717600 ``` You must only run the following command after the given block height has been reached on the network. ### `init` To initialize a new [genesis file] with the given chain id and [staking token symbol], run: ```sh oasis-node genesis init --genesis.file /path/to/genesis.json \ --chain.id "name-of-my-network" \ --staking.token_symbol TEST ``` You can set a lot of parameters for the various [consensus layer services]. 
To see the full list, run: ```sh oasis-node genesis init --help ``` [genesis file]: ../consensus/genesis.md#genesis-file [canonical form]: ../consensus/genesis.md#canonical-form [consensus layer services]: ../consensus/README.md [staking token symbol]: ../consensus/services/staking.md#tokens-and-base-units ## `stake` ### `account` #### `info` Run ```sh oasis-node stake account info \ --stake.account.address \ --address unix:/path/to/node/internal.sock ``` to get staking information for a specific account: ``` General Account: Balance: TEST 0.0 Nonce: 0 Escrow Account: Active: Balance: TEST 0.0 Total Shares: 0 Debonding: Balance: TEST 0.0 Total Shares: 0 Commission Schedule: Rates: (none) Rate Bounds: (none) Stake Accumulator: Claims: - Name: registry.RegisterEntity Staking Thresholds: - Global: entity - Name: registry.RegisterNode.LQu4ZtFg8OJ0MC4M4QMeUR7Is6Xt4A/CW+PK/7TPiH0= Staking Thresholds: - Global: node-validator ``` ### `pubkey2address` Run ```sh oasis-node stake pubkey2address --public_key ``` to get staking account address from an entity or node public key. Example response: ``` oasis1qqncl383h8458mr9cytatygctzwsx02n4c5f8ed7 ``` ## storage ### compact-experimental Run ```sh oasis-node storage compact-experimental --config /path/to/config/file ``` to trigger manual compaction of consensus database instances: ```sh {"caller":"storage.go:310","level":"info","module":"cmd/storage", \ "msg":"Starting database compactions. This may take a while...", \ "ts":"2025-10-08T09:18:22.185451554Z"} ``` If pruning was not enabled from the start or was recently increased, then even after successful pruning, the disk usage may stay the same. This is due to the LSM-tree storage design that BadgerDB uses. Concretely, deleting a key only marks it as ready to be deleted (a tombstone entry). The actual removal of the stale data happens later during the compaction. During normal operation, compaction happens in the background. However, BadgerDB is intentionally lazy, trading write throughput for disk space among other things. Therefore it is expected that in case of late pruning, the disk space may stay constant or not be reclaimed for a very long time. This command gives operators manual control to release disk space during maintenance periods. --- ## Metrics `oasis-node` can report a number of metrics to Prometheus server. By default, no metrics are collected and reported. There is one way to enable metrics reporting: * *Pull mode* listens on given address and waits for Prometheus to scrape the metrics. ## Configuring `oasis-node` in Pull Mode To run `oasis-node` in *pull mode* with Prometheus metrics enabled, add the following to your `config.yml`. ``` metrics: mode: pull address: 0.0.0.0:3000 ``` After restarting the node, Prometheus metrics will be exposed on port 3000. Then, add the following segment to your `prometheus.yml` and restart Prometheus: ```yaml - job_name : 'oasis-node' scrape_interval: 5s static_configs: - targets: ['localhost:3000'] ``` ## Metrics Reported by `oasis-node` `oasis-node` reports metrics starting with `oasis_`. The following metrics are currently reported: Name | Type | Description | Labels | Package -----|------|-------------|--------|-------- oasis_abci_db_size | Gauge | Total size of the ABCI database (MiB). | | [consensus/cometbft/abci](https://github.com/oasisprotocol/oasis-core/tree/master/go/consensus/cometbft/abci/mux.go) oasis_codec_size | Summary | CBOR codec message size (bytes). 
| call, module | [common/cbor](https://github.com/oasisprotocol/oasis-core/tree/master/go/common/cbor/codec.go) oasis_consensus_proposed_blocks | Counter | Number of blocks proposed by the node. | backend | [consensus/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/consensus/metrics/metrics.go) oasis_consensus_signed_blocks | Counter | Number of blocks signed by the node. | backend | [consensus/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/consensus/metrics/metrics.go) oasis_finalized_rounds | Counter | Number of finalized rounds. | | [roothash](https://github.com/oasisprotocol/oasis-core/tree/master/go/roothash/metrics.go) oasis_grpc_client_calls | Counter | Number of gRPC calls. | call | [common/grpc](https://github.com/oasisprotocol/oasis-core/tree/master/go/common/grpc/grpc.go) oasis_grpc_client_latency | Summary | gRPC call latency (seconds). | call | [common/grpc](https://github.com/oasisprotocol/oasis-core/tree/master/go/common/grpc/grpc.go) oasis_grpc_client_stream_writes | Counter | Number of gRPC stream writes. | call | [common/grpc](https://github.com/oasisprotocol/oasis-core/tree/master/go/common/grpc/grpc.go) oasis_grpc_server_calls | Counter | Number of gRPC calls. | call | [common/grpc](https://github.com/oasisprotocol/oasis-core/tree/master/go/common/grpc/grpc.go) oasis_grpc_server_latency | Summary | gRPC call latency (seconds). | call | [common/grpc](https://github.com/oasisprotocol/oasis-core/tree/master/go/common/grpc/grpc.go) oasis_grpc_server_stream_writes | Counter | Number of gRPC stream writes. | call | [common/grpc](https://github.com/oasisprotocol/oasis-core/tree/master/go/common/grpc/grpc.go) oasis_node_cpu_stime_seconds | Gauge | CPU system time spent by worker as reported by /proc/<PID>/stat (seconds). | | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/cpu.go) oasis_node_cpu_utime_seconds | Gauge | CPU user time spent by worker as reported by /proc/<PID>/stat (seconds). | | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/cpu.go) oasis_node_disk_read_bytes | Gauge | Read data from block storage by the worker as reported by /proc/<PID>/io (bytes). | | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/disk.go) oasis_node_disk_usage_bytes | Gauge | Size of datadir of the worker (bytes). | | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/disk.go) oasis_node_disk_written_bytes | Gauge | Written data from block storage by the worker as reported by /proc/<PID>/io (bytes) | | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/disk.go) oasis_node_mem_rss_anon_bytes | Gauge | Size of resident anonymous memory of worker as reported by /proc/<PID>/status (bytes). | | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/mem.go) oasis_node_mem_rss_file_bytes | Gauge | Size of resident file mappings of worker as reported by /proc/<PID>/status (bytes) | | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/mem.go) oasis_node_mem_rss_shmem_bytes | Gauge | Size of resident shared memory of worker. 
| | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/mem.go) oasis_node_mem_vm_size_bytes | Gauge | Virtual memory size of worker (bytes). | | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/mem.go) oasis_node_net_receive_bytes_total | Gauge | Received data for each network device as reported by /proc/net/dev (bytes). | device | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/net.go) oasis_node_net_receive_packets_total | Gauge | Received data for each network device as reported by /proc/net/dev (packets). | device | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/net.go) oasis_node_net_transmit_bytes_total | Gauge | Transmitted data for each network device as reported by /proc/net/dev (bytes). | device | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/net.go) oasis_node_net_transmit_packets_total | Gauge | Transmitted data for each network device as reported by /proc/net/dev (packets). | device | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/net.go) oasis_p2p_blocked_peers | Gauge | Number of blocked P2P peers. | | [p2p](https://github.com/oasisprotocol/oasis-core/tree/master/go/p2p/metrics.go) oasis_p2p_connections | Gauge | Number of P2P connections. | | [p2p](https://github.com/oasisprotocol/oasis-core/tree/master/go/p2p/metrics.go) oasis_p2p_peers | Gauge | Number of connected P2P peers. | | [p2p](https://github.com/oasisprotocol/oasis-core/tree/master/go/p2p/metrics.go) oasis_p2p_protocols | Gauge | Number of supported P2P protocols. | | [p2p](https://github.com/oasisprotocol/oasis-core/tree/master/go/p2p/metrics.go) oasis_p2p_topics | Gauge | Number of supported P2P topics. | | [p2p](https://github.com/oasisprotocol/oasis-core/tree/master/go/p2p/metrics.go) oasis_registry_entities | Gauge | Number of registry entities. | | [registry](https://github.com/oasisprotocol/oasis-core/tree/master/go/registry/metrics.go) oasis_registry_nodes | Gauge | Number of registry nodes. | | [registry](https://github.com/oasisprotocol/oasis-core/tree/master/go/registry/metrics.go) oasis_registry_runtimes | Gauge | Number of registry runtimes. | | [registry](https://github.com/oasisprotocol/oasis-core/tree/master/go/registry/metrics.go) oasis_rhp_failures | Counter | Number of failed Runtime Host calls. | call | [runtime/host/protocol](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/host/protocol/metrics.go) oasis_rhp_latency | Summary | Runtime Host call latency (seconds). | call | [runtime/host/protocol](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/host/protocol/metrics.go) oasis_rhp_successes | Counter | Number of successful Runtime Host calls. | call | [runtime/host/protocol](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/host/protocol/metrics.go) oasis_rhp_timeouts | Counter | Number of timed out Runtime Host calls. | | [runtime/host/protocol](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/host/protocol/metrics.go) oasis_roothash_block_interval | Summary | Time between roothash blocks (seconds). 
| runtime | [roothash](https://github.com/oasisprotocol/oasis-core/tree/master/go/roothash/metrics.go) oasis_storage_failures | Counter | Number of storage failures. | call | [storage/api](https://github.com/oasisprotocol/oasis-core/tree/master/go/storage/api/metrics.go) oasis_storage_latency | Summary | Storage call latency (seconds). | call | [storage/api](https://github.com/oasisprotocol/oasis-core/tree/master/go/storage/api/metrics.go) oasis_storage_successes | Counter | Number of storage successes. | call | [storage/api](https://github.com/oasisprotocol/oasis-core/tree/master/go/storage/api/metrics.go) oasis_storage_value_size | Summary | Storage call value size (bytes). | call | [storage/api](https://github.com/oasisprotocol/oasis-core/tree/master/go/storage/api/metrics.go) oasis_tee_attestations_failed | Counter | Number of failed TEE attestations. | runtime, kind | [runtime/host/sgx/common](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/host/sgx/common/metrics.go) oasis_tee_attestations_performed | Counter | Number of TEE attestations performed. | runtime, kind | [runtime/host/sgx/common](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/host/sgx/common/metrics.go) oasis_tee_attestations_successful | Counter | Number of successful TEE attestations. | runtime, kind | [runtime/host/sgx/common](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/host/sgx/common/metrics.go) oasis_txpool_accepted_transactions | Counter | Number of accepted transactions (passing check tx). | runtime | [runtime/txpool](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/txpool/metrics.go) oasis_txpool_pending_check_size | Gauge | Size of the pending to be checked queue (number of entries). | runtime | [runtime/txpool](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/txpool/metrics.go) oasis_txpool_pending_schedule_size | Gauge | Size of the main schedulable queue (number of entries). | runtime | [runtime/txpool](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/txpool/metrics.go) oasis_txpool_rejected_transactions | Counter | Number of rejected transactions (failing check tx). | runtime | [runtime/txpool](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/txpool/metrics.go) oasis_txpool_rim_queue_size | Gauge | Size of the roothash incoming message transactions schedulable queue (number of entries). | runtime | [runtime/txpool](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/txpool/metrics.go) oasis_up | Gauge | Is oasis-test-runner active for specific scenario. | | [oasis-node/cmd/common/metrics](https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-node/cmd/common/metrics/metrics.go) oasis_worker_aborted_batch_count | Counter | Number of aborted batches. | runtime | [worker/compute/executor/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/compute/executor/committee/metrics.go) oasis_worker_batch_processing_time | Summary | Time it takes for a batch to finalize (seconds). | runtime | [worker/compute/executor/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/compute/executor/committee/metrics.go) oasis_worker_batch_runtime_processing_time | Summary | Time it takes for a batch to be processed by the runtime (seconds). 
| runtime | [worker/compute/executor/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/compute/executor/committee/metrics.go) oasis_worker_batch_size | Summary | Number of transactions in a batch. | runtime | [worker/compute/executor/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/compute/executor/committee/metrics.go) oasis_worker_client_lb_healthy_instance_count | Gauge | Number of healthy instances in the load balancer. | runtime | [runtime/host/loadbalance](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/host/loadbalance/metrics.go) oasis_worker_client_lb_requests | Counter | Number of requests processed by the given load balancer instance. | runtime, lb_instance | [runtime/host/loadbalance](https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/host/loadbalance/metrics.go) oasis_worker_epoch_number | Gauge | Current epoch number as seen by the worker. | runtime | [worker/common/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/common/committee/node.go) oasis_worker_epoch_transition_count | Counter | Number of epoch transitions. | runtime | [worker/common/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/common/committee/node.go) oasis_worker_execution_discrepancy_detected_count | Counter | Number of detected execute discrepancies. | runtime | [worker/compute/executor/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/compute/executor/committee/metrics.go) oasis_worker_executor_committee_p2p_peers | Gauge | Number of executor committee P2P peers. | runtime | [worker/common/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/common/committee/node.go) oasis_worker_executor_is_backup_worker | Gauge | 1 if worker is currently an executor backup worker, 0 otherwise. | runtime | [worker/common/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/common/committee/node.go) oasis_worker_executor_is_worker | Gauge | 1 if worker is currently an executor worker, 0 otherwise. | runtime | [worker/common/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/common/committee/node.go) oasis_worker_executor_liveness_live_ratio | Gauge | Ratio between live and total rounds. Reports 1 if node is not in committee. | runtime | [worker/common/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/common/committee/node.go) oasis_worker_executor_liveness_live_rounds | Gauge | Number of live rounds in last epoch. | runtime | [worker/common/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/common/committee/node.go) oasis_worker_executor_liveness_total_rounds | Gauge | Number of total rounds in last epoch. | runtime | [worker/common/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/common/committee/node.go) oasis_worker_failed_round_count | Counter | Number of failed roothash rounds. 
| runtime | [worker/common/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/common/committee/node.go) oasis_worker_keymanager_churp_committee_size | Gauge | Number of nodes in the committee | runtime, churp | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_churp_confirmed_applications_total | Gauge | Number of confirmed applications | runtime, churp | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_churp_enclave_rpc_failures_total | Counter | Number of failed enclave rpc calls. | runtime, churp, method | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_churp_enclave_rpc_latency_seconds | Summary | Latency of enclave rpc calls in seconds. | runtime, churp, method | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_churp_extra_shares_number | Gauge | Minimum number of extra shares. | runtime, churp | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_churp_handoff_interval | Gauge | Handoff interval in epochs | runtime, churp | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_churp_handoff_number | Counter | Epoch number of the last handoff | runtime, churp | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_churp_next_handoff_number | Counter | Epoch number of the next handoff | runtime, churp | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_churp_submitted_applications_total | Gauge | Number of submitted applications | runtime, churp | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_churp_threshold_number | Counter | Degree of the secret-sharing polynomial | runtime, churp | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_compute_runtime_count | Counter | Number of compute runtimes using the key manager. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_consensus_ephemeral_secret_epoch_number | Gauge | Epoch number of the latest ephemeral secret. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_consensus_master_secret_generation_number | Gauge | Generation number of the latest master secret. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_consensus_master_secret_proposal_epoch_number | Gauge | Epoch number of the latest master secret proposal. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_consensus_master_secret_proposal_generation_number | Gauge | Generation number of the latest master secret proposal. 
| runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_consensus_master_secret_rotation_epoch_number | Gauge | Epoch number of the latest master secret rotation. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_enclave_ephemeral_secret_epoch_number | Gauge | Epoch number of the latest ephemeral secret loaded into the enclave. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_enclave_generated_ephemeral_secret_epoch_number | Gauge | Epoch number of the latest ephemeral secret generated by the enclave. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_enclave_generated_master_secret_epoch_number | Gauge | Epoch number of the latest master secret generated by the enclave. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_enclave_generated_master_secret_generation_number | Gauge | Generation number of the latest master secret generated by the enclave. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_enclave_master_secret_generation_number | Gauge | Generation number of the latest master secret as seen by the enclave. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_enclave_master_secret_proposal_epoch_number | Gauge | Epoch number of the latest master secret proposal loaded into the enclave. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_enclave_master_secret_proposal_generation_number | Gauge | Generation number of the latest master secret proposal loaded into the enclave. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_keymanager_enclave_rpc_count | Counter | Number of remote Enclave RPC requests via P2P. | method | [worker/keymanager/p2p](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/p2p/metrics.go) oasis_worker_keymanager_policy_update_count | Counter | Number of key manager policy updates. | runtime | [worker/keymanager](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/keymanager/metrics.go) oasis_worker_node_registered | Gauge | Is oasis node registered (binary). | | [worker/registration](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/registration/worker.go) oasis_worker_node_registration_eligible | Gauge | Is oasis node eligible for registration (binary). | | [worker/registration](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/registration/worker.go) oasis_worker_node_status_frozen | Gauge | Is oasis node frozen (binary). | | [worker/registration](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/registration/worker.go) oasis_worker_node_status_runtime_faults | Gauge | Number of runtime faults. 
| runtime | [worker/registration](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/registration/worker.go) oasis_worker_node_status_runtime_suspended | Gauge | Runtime node suspension status (binary). | runtime | [worker/registration](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/registration/worker.go) oasis_worker_processed_block_count | Counter | Number of processed roothash blocks. | runtime | [worker/common/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/common/committee/node.go) oasis_worker_processed_event_count | Counter | Number of processed roothash events. | runtime | [worker/compute/executor/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/compute/executor/committee/metrics.go) oasis_worker_storage_commit_latency | Summary | Latency of storage commit calls (state + outputs) (seconds). | runtime | [worker/compute/executor/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/compute/executor/committee/metrics.go) oasis_worker_storage_full_round | Gauge | The last round that was fully synced and finalized. | runtime | [worker/storage/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/storage/committee/metrics.go) oasis_worker_storage_pending_round | Gauge | The last round that is in-flight for syncing. | runtime | [worker/storage/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/storage/committee/metrics.go) oasis_worker_storage_round_sync_latency | Summary | Storage round sync latency (seconds). | runtime | [worker/storage/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/storage/committee/metrics.go) oasis_worker_storage_synced_round | Gauge | The last round that was synced but not yet finalized. | runtime | [worker/storage/committee](https://github.com/oasisprotocol/oasis-core/tree/master/go/worker/storage/committee/metrics.go) ## Consensus backends ### Metrics Reported by *CometBFT* When `oasis-node` is configured to use [CometBFT][1] for BFT consensus, all CometBFT metrics are also reported. Consult the [CometBFT-core documentation][2] for a list of metrics reported by CometBFT. [1]: ../consensus/README.md#cometbft [2]: https://docs.cometbft.com/v0.38/core/metrics --- ## RPC Oasis Node exposes an RPC interface to enable external applications to query current [consensus] and [runtime] states, [submit transactions], etc. The RPC interface is ONLY exposed via an AF_LOCAL socket called `internal.sock` located in the node's data directory. **This interface should NEVER be directly exposed over the network as it has no authentication and allows full control, including shutdown, of a node.** In order to support remote clients and different protocols (e.g. REST), a gateway that handles things like authentication and rate limiting should be used. An example of such a gateway is the [Oasis Core Rosetta Gateway] which exposes a subset of the consensus layer via the [Rosetta API]. [consensus]: ../consensus/README.md [runtime]: ../runtime/README.md [submit transactions]: ../consensus/transactions.md#submission [Oasis Core Rosetta Gateway]: https://github.com/oasisprotocol/oasis-core-rosetta-gateway [Rosetta API]: https://www.rosetta-api.org ## Protocol Like other parts of Oasis Core, the RPC interface exposed by Oasis Node uses the [gRPC protocol] with the [CBOR codec (instead of Protocol Buffers)].
If your application is written in Go, you can use the convenience gRPC wrappers provided by Oasis Core to create clients. Check the [Oasis SDK] for more information. For example to create a gRPC client connected to the Oasis Node endpoint exposed by your local node at `/path/to/datadir/internal.sock` you can do: ```golang import ( // ... oasisGrpc "github.com/oasisprotocol/oasis-core/go/common/grpc" ) // ... conn, err := oasisGrpc.Dial("unix:/path/to/datadir/internal.sock") ``` This will automatically handle setting up the required gRPC dial options for setting up the CBOR codec and error mapping interceptors. For more detail about the gRPC helpers see the [API documentation]. [gRPC protocol]: https://grpc.io [CBOR codec (instead of Protocol Buffers)]: ../authenticated-grpc.md#cbor-codec [API documentation]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/grpc?tab=doc [Oasis SDK]: https://github.com/oasisprotocol/oasis-sdk ## Errors We use a specific convention to provide more information about the exact error that occurred when processing a gRPC request. See the [gRPC specifics] section for details. [gRPC specifics]: ../authenticated-grpc.md#errors ## Services We use the same service method namespacing convention as gRPC over Protocol Buffers. All Oasis Core services have unique identifiers starting with `oasis-core.` followed by the service identifier. A single slash (`/`) is used as the separator in method names, e.g., `/oasis-core.NodeControl/IsSynced`. The following gRPC services are exposed (with links to API documentation): * **General** * [Node Control] (`oasis-core.NodeController`) * **Consensus Layer** * [Consensus (client subset)] (`oasis-core.Consensus`) * [Consensus (light client subset)] (`oasis-core.ConsensusLight`) * [Staking] (`oasis-core.Staking`) * [Registry] (`oasis-core.Registry`) * [Scheduler] (`oasis-core.Scheduler`) * [RootHash] (`oasis-core.RootHash`) * [Governance] (`oasis-core.Governance`) * [Beacon] (`oasis-core.Beacon`) * **Runtime Layer** * [Storage] (`oasis-core.Storage`) * [Runtime Client] (`oasis-core.RuntimeClient`) For more details about what the exposed services do see the respective documentation sections. The Go API also provides gRPC client implementations for all of the services which can be used after establishing a gRPC connection via the internal socket (multiple clients can share the same gRPC connection). For example in case of the consensus service using the connection we established in the previous example: ```golang import ( // ... consensus "github.com/oasisprotocol/oasis-core/go/consensus/api" ) // ... 
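// The client below reuses the gRPC connection established in the previous
// example; multiple service clients can share a single connection. SubmitTx
// submits a signed consensus transaction (see the consensus transactions
// documentation for how to construct and sign one).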
cc := consensus.NewConsensusClient(conn) err := cc.SubmitTx(ctx, &tx) ``` [Node Control]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/control/api?tab=doc#NodeController [Consensus (client subset)]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/consensus/api?tab=doc#ClientBackend [Consensus (light client subset)]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/consensus/api?tab=doc#LightClientBackend [Staking]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/staking/api?tab=doc#Backend [Registry]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#Backend [Scheduler]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/scheduler/api?tab=doc#Backend [RootHash]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/roothash/api?tab=doc#Backend [Governance]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/governance/api?tab=doc#Backend [Beacon]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/beacon/api?tab=doc#Backend [Storage]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/storage/api?tab=doc#Backend [Runtime Client]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/client/api?tab=doc#RuntimeClient --- ## Release Process The following steps should be followed when preparing a release. ## Prerequisites Our release process relies on some tooling that needs to be available on a maintainer's system: - [Python] 3.6+. - [Oasis' towncrier fork]. - [Punch] 2.0.x. Most systems should already have [Python] pre-installed. To install [Oasis' towncrier fork] and [Punch], use [pip]: ```bash pip3 install --upgrade \ https://github.com/oasisprotocol/towncrier/archive/oasis-master.tar.gz \ punch.py~=2.0.0 ``` You might want to install the packages to a [Python virtual environment] or via so-called [User install] (i.e. isolated to the current user). [Python]: https://www.python.org/ [Oasis' towncrier fork]: https://github.com/oasisprotocol/towncrier [Punch]: https://github.com/lgiordani/punch [pip]: https://pip.pypa.io/en/stable/ [Python virtual environment]: https://packaging.python.org/tutorials/installing-packages/#creating-virtual-environments [User install]: https://pip.pypa.io/en/stable/user_guide/#user-installs ## Tooling Our [Make] tooling has some targets that automate parts of the release process and try to make it less error-prone: - `changelog`: Bumps project's version with the [Punch] utility and assembles the [Change Log] from the [Change Log Fragments] using the [towncrier][Oasis' towncrier fork] utility. - `release-tag`: After performing a bunch of sanity checks, it tags the git origin remote's release branch's `HEAD` with the `v` tag and pushes it to the remote. - `release-stable-branch`: Creates and pushes a stable branch for the current release. Note that all above targets depend on the `fetch-git` target which fetches the latest changes (including tags) from the git origin remote to ensure the computed next version and other things are always up-to-date. The version of the Oasis Core's next release is computed automatically using the [Punch] utility according to the project's [Versioning] scheme. The `changelog` Make target checks the name of the branch on which the release is being made to know which part of the project's version to bump. To customize the release process, one can set the following environment variables: - `GIT_ORIGIN_REMOTE` (default: `origin`): Name of the git remote pointing to the canonical upstream git repository. 
- `RELEASE_BRANCH` (default: `master`): Name of the branch where to tag the next release. [Make]: https://en.wikipedia.org/wiki/Make_\(software\) [Change Log]: https://github.com/oasisprotocol/oasis-core/tree/master/CHANGELOG.md [Change Log Fragments]: https://github.com/oasisprotocol/oasis-core/tree/master/.changelog/README.md [Versioning]: versioning.md ## Preparing a Regular Release ### Bump Protocol Versions Before a release, make sure that the proper protocol versions were bumped correctly (see [`go/common/version/version.go`]). If not, make a pull request that bumps the respective version(s) before proceeding with the release process. [`go/common/version/version.go`]: https://github.com/oasisprotocol/oasis-core/tree/master/go/common/version/version.go ### Prepare the Change Log Before a release, all [Change Log fragments] should be assembled into a new section of the [Change Log] using the `changelog` [Make] target. Create a new branch, e.g. `changelog`, and then run [Make]: ```bash git checkout -b changelog make changelog ``` Review the staged changes and make appropriate adjustment to the Change Log (e.g. re-order entries, make formatting/spelling fixes, ...). Replace the `` strings in the protocol versions table just below the next version's heading with appropriate protocol versions as defined in [go/common/version/version.go][version-file] file. For example: | Protocol | Version | |:------------------|:---------:| | Consensus | 4.0.0 | | Runtime Host | 2.0.0 | | Runtime Committee | 2.0.0 | After you are content with the changes, commit them, push them to the origin and make a pull request. Once the pull request had been reviewed and merged, proceed to the next step. [version-file]: https://github.com/oasisprotocol/oasis-core/tree/master/go/common/version/version.go ### Tag Next Release To create a signed git tag from the latest commit in origin remote's `master` branch, use: ```bash make release-tag ``` This command will perform a bunch of sanity checks to prevent common errors while tagging the next release. After those checks have passed, it will ask for confirmation before proceeding. ### Ensure GitHub Release Was Published After the tag with the next release is pushed to the [canonical git repository], the GitHub Actions [Release manager workflow] is triggered which uses the [GoReleaser] tool to automatically build the binaries, prepare archives and checksums, and publish a GitHub Release that accompanies the versioned git tag. Browse to [Oasis Core's releases page] and make sure the new release is properly published. **Initially the release will be published as a pre-release to allow for early testing on Testnet. Once the release is ready for a wider audience on Mainnet it should be explicitly changed to be a normal production release.** ### Create `stable/YY.MINOR.x` Branch To prepare a new stable branch from the new release tag and push it to the origin remote, use: ```bash make release-stable-branch ``` This command will perform sanity checks to prevent common errors. After those checks have passed, it will ask for confirmation before proceeding. 
[canonical git repository]: https://github.com/oasisprotocol/oasis-core [Release manager workflow]: https://github.com/oasisprotocol/oasis-core/tree/master/.github/workflows/release.yml [GoReleaser]: https://goreleaser.com/ [Oasis Core's releases page]: https://github.com/oasisprotocol/oasis-core/releases ## Preparing a Bugfix/Stable Release As mentioned in the [Versioning] documentation, sometimes we will want to back-port some fixes (e.g. a security fix) and (backwards compatible) changes from an upcoming release and release them without also releasing all the other (potentially breaking) changes. Set the `RELEASE_BRANCH` environment variable to the name of the stable branch of the `YY.MINOR` release you want to back-port the changes to, e.g. `stable/21.2.x`, and export it: ```bash export RELEASE_BRANCH="stable/21.2.x" ``` ### Back-port Changes Create a new branch, e.g. `backport-foo-${RELEASE_BRANCH#stable/}`, from the `${RELEASE_BRANCH}` branch: ```bash git checkout -b backport-foo-${RELEASE_BRANCH#stable/} ${RELEASE_BRANCH} ``` After back-porting all the desired changes, push it to the origin and make a pull request against the `${RELEASE_BRANCH}` branch. ### Prepare Change Log for Bugfix/Stable Release As with a regular release, the back-ported changes should include the corresponding [Change Log Fragments] that need to be assembled into a new section of the [Change Log] using the `changelog` [Make] target. Create a new branch, e.g. `changelog-${RELEASE_BRANCH#stable/}`, from the `${RELEASE_BRANCH}` branch: ```bash git checkout -b changelog-${RELEASE_BRANCH#stable/} ${RELEASE_BRANCH} ``` Then run [Make]'s `changelog` target: ```bash make changelog ``` *NOTE: The `changelog` Make target will bump the `MICRO` part of the version automatically.* Replace the `` strings in the protocol versions table just below the next version's heading with appropriate protocol versions as defined in [go/common/version/version.go][version-file] file. After reviewing the staged changes, commit them, push the changes to the origin and make a pull request against the `${RELEASE_BRANCH}` branch. Once the pull request had been reviewed and merged, proceed to the next step. ### Tag Bugfix/Stable Release As with a regular release, create a signed git tag from the latest commit in origin remote's release branch by running the `release-tag` Make target: ```bash make release-tag ``` After the sanity checks have passed, it will ask for confirmation before proceeding. ### Ensure GitHub Release for Bugfix/Stable Release Was Published Similar to a regular release, after the tag with the next release is pushed to the [canonical git repository], the GitHub Actions [Release manager workflow] is triggered which uses the [GoReleaser] tool to automatically build a new release. Browse to [Oasis Core's releases page] and make sure the new bugfix/stable release is properly published. --- ## Runtime Layer [Image: Runtime Layer] The Oasis Core runtime layer enables independent _runtimes_ to schedule and execute stateful computations and commit result summaries to the [consensus layer]. In addition to [verifying and storing] the canonical runtime state summaries the [consensus layer] also serves as the [registry] for node and runtime metadata, a [scheduler] that elects runtime compute committees and a coordinator for [key manager replication]. 
[consensus layer]: ../consensus/README.md [verifying and storing]: ../consensus/services/roothash.md [registry]: ../consensus/services/registry.md [scheduler]: ../consensus/services/scheduler.md [key manager replication]: ../consensus/services/keymanager.md ## Runtimes A _runtime_ is effectively a replicated application with shared state. The application can receive transactions from clients and based on those it can perform arbitrary state mutations. This replicated state and application logic exists completely separate from the consensus layer state and logic, but it leverages the same consensus layer for finality with the consensus layer providing the source of canonical state. Multiple runtimes can share the same consensus layer. In Oasis Core a runtime can be any executable that speaks the [Runtime Host Protocol] which is used to communicate between a runtime and an Oasis Core Node. The executable usually runs in a sandboxed environment with the only external interface being the Runtime Host Protocol. The execution environment currently includes a sandbox based on Linux namespaces and SECCOMP optionally combined with Intel SGX enclaves for confidential computation. [Image: Runtime Execution] In the future this may be expanded with supporting running each runtime in its own virtual machine and with other confidential computing technologies. [Runtime Host Protocol]: runtime-host-protocol.md ## Operation Model The relationship between [consensus layer services] and runtime services is best described by a simple example of a "Runtime A" that is created and receives transactions from clients (also see the figure above for an overview). 1. The runtime first needs to be created. In addition to developing code that will run in the runtime itself, we also need to specify some metadata related to runtime operation, including a unique [runtime identifier], and then [register the runtime]. 1. We also need some nodes that will actually run the runtime executable and process any transactions from clients (compute nodes). These nodes currently need to have the executable available locally and must be configured as compute nodes. 1. In addition to compute nodes a runtime also needs storage nodes to store its state. 1. Both kinds of [nodes will register] on the consensus layer announcing their willingness to participate in the operation of Runtime A. 1. After an [epoch transition] the [committee scheduler] service will elect registered compute and storage nodes into different committees based on role. Elections are randomized based on entropy provided by the [random beacon]. 1. A client may submit transactions by querying the consensus layer to get the current executor committee for a given runtime, connect to it, publish transactions and wait for finalization by the consensus layer. In order to make it easier to write clients, the Oasis Node exposes a runtime [client RPC API] that encapsulates all this functionality in a [`SubmitTx`] call. 1. The transactions are batched and proceed through the transaction processing pipeline. At the end, results are persisted to storage and the [roothash service] in the consensus layer finalizes state after verifying that computation was performed correctly and state was correctly persisted. 1. The compute nodes are ready to accept the next batch and the process can repeat from step 6. Note that the above example describes the _happy path_, a scenario where there are no failures. 
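To make the client side of step 6 concrete, below is a minimal sketch of submitting a runtime transaction over the node's internal socket using the Go gRPC wrappers. The `NewRuntimeClient` constructor and the `SubmitTxRequest` fields shown here are assumptions based on the runtime client `SubmitTx` API linked above; consult the API documentation for the exact signatures.

```golang
import (
	"context"

	"github.com/oasisprotocol/oasis-core/go/common"
	oasisGrpc "github.com/oasisprotocol/oasis-core/go/common/grpc"
	runtimeClient "github.com/oasisprotocol/oasis-core/go/runtime/client/api"
)

// submitRuntimeTx submits a raw runtime transaction and waits for the result
// to be finalized by the consensus layer.
func submitRuntimeTx(ctx context.Context, runtimeID common.Namespace, rawTx []byte) ([]byte, error) {
	// Connect to the node's internal socket (see the RPC chapter).
	conn, err := oasisGrpc.Dial("unix:/path/to/datadir/internal.sock")
	if err != nil {
		return nil, err
	}
	defer conn.Close()

	rc := runtimeClient.NewRuntimeClient(conn)
	return rc.SubmitTx(ctx, &runtimeClient.SubmitTxRequest{
		RuntimeID: runtimeID,
		Data:      rawTx,
	})
}
```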
Described steps mention things like verifying that computation was performed _correctly_ and that state was _correctly stored_. How does the consensus layer actually know that? [consensus layer services]: ../consensus/README.md [runtime identifier]: identifiers.md [register the runtime]: ../consensus/services/registry.md#register-runtime [nodes will register]: ../consensus/services/registry.md#register-node [epoch transition]: ../consensus/services/epochtime.md [committee scheduler]: ../consensus/services/scheduler.md [random beacon]: ../consensus/services/beacon.md [client RPC API]: ../oasis-node/rpc.md [`SubmitTx`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/client/api?tab=doc#RuntimeClient.SubmitTx [roothash service]: ../consensus/services/roothash.md ### Discrepancy Detection and Resolution The key idea behind ensuring integrity of runtime computations is replicated computation with discrepancy detection. This basically means that any computation (e.g., execution of a transaction) is replicated among multiple compute nodes. They all execute the exact same functions and produce results, which must all match. If they don't (e.g., if even a single node produces different results), this is treated as a discrepancy. In case of a discrepancy, the computation must be repeated using a separate larger compute committee which decides what the correct results were. Since all commitments are attributable to compute nodes, any node(s) that produced incorrect results may be subject to having their stake slashed and may be removed from future committees. Given the above, an additional constraint with replicated runtimes is that they must be fully deterministic, meaning that a computation operating on the same initial state executing the same inputs (transactions) must always produce the same outputs and new state. In case a runtime's execution exhibits non-determinism this will manifest itself as discrepancies since nodes will derive different results when replicating computation. ### Compute Committee Roles and Commitments A compute node can be elected into an executor committee and may have one of the following roles: * Primary executor node. At any given round a single node is selected among all the primary executor nodes to be a _transaction scheduler node_ (roughly equal to the role of a _block proposer_). * Backup executor node. Backup nodes can be activated by the consensus layer in case it determines that there is a discrepancy. The size of the primary and backup executor committees, together with other related parameters, can be configured on a per-runtime basis. The _primary_ nodes are the ones that will batch incoming transactions into blocks and execute the state transitions to derive the new state root. They perform this in a replicated fashion where all the primary executor nodes execute the same inputs (transactions) on the same initial state. After execution they will sign [cryptographic commitments] specifying the inputs, the initial state, the outputs and the resulting state. In case computation happens inside a trusted execution environment (TEE) like Intel SGX, the commitment will also include a platform attestation proving that the computation took place in a given TEE. The [roothash service] in the consensus layer will collect commitments and verify that all nodes have indeed computed the same result. As mentioned in case of discrepancies it will instruct nodes elected as _backups_ to repeat the computation. 
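The determinism requirement described earlier in this section is easy to violate by accident. The following sketch is generic Go (illustrative only, not an Oasis API): iterating over a map directly yields a different order on each replica, while imposing an explicit order keeps the computation deterministic.

```golang
import "sort"

// nonDeterministicOrder iterates a map directly: Go randomizes map iteration
// order, so replicas may process accounts in different orders and, if the
// processing is order-dependent, derive divergent results (a discrepancy).
func nonDeterministicOrder(balances map[string]uint64) []string {
	var order []string
	for addr := range balances {
		order = append(order, addr)
	}
	return order
}

// deterministicOrder imposes a total order before processing, so every
// replica observes the same sequence.
func deterministicOrder(balances map[string]uint64) []string {
	order := make([]string, 0, len(balances))
	for addr := range balances {
		order = append(order, addr)
	}
	sort.Strings(order)
	return order
}
```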
[cryptographic commitments]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/roothash/api/commitment?tab=doc ### Storage Receipts All runtime persistent state is stored by storage nodes. These provide a [Merklized Key-Value Store (MKVS)] to compute nodes. The MKVS stores immutable state cryptographically summarized by a single root hash. When a storage node stores a given state update, it signs a receipt stating that it is storing a specific root. These receipts are verified by the [roothash service] before accepting a commitment from a compute node. [Merklized Key-Value Store (MKVS)]: ../mkvs.md ### Suspending Runtimes Since periodic maintenance work must be performed on each epoch transition (e.g., electing runtime committees), fees for that maintenance are paid by any nodes that register to perform work for a specific runtime. Fees are pre-paid for the number of epochs a node registers for. If there are no committees for a runtime on epoch transition, the runtime is suspended for the epoch. The runtime is also suspended in case the registering entity no longer has enough stake to cover the entity and runtime deposits. The runtime will be resumed on the epoch transition if runtime nodes will register and the registering entity will have enough stake. ### Emitting Messages Runtimes may [emit messages] to instruct the consensus layer what to do on their behalf. This makes it possible for runtimes to [own staking accounts]. [emit messages]: messages.md [own staking accounts]: ../consensus/services/staking.md#runtime-accounts --- ## Runtime IDs Identifiers for runtimes are represented by the [`common.Namespace`] type. The first 64 bits are reserved for specifying flags expressing various properties of the runtime, and the last 192 bits are used as the runtime identifier. Currently the following flags are defined (bit positions assume the flags vector is interpreted as an unsigned 64 bit big endian integer): * Bit 63: The runtime is a test runtime and not for production networks. * Bit 62: The runtime is a key manager runtime. * Bits 61-0: Reserved for future expansion and MUST be set to 0. Note: Unless the registry consensus parameter `DebugAllowTestRuntimes` is set, attempts to register a test runtime will be rejected. [`common.Namespace`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common?tab=doc#Namespace --- ## Runtime Messages In order to enable runtimes to perform actions in the consensus layer on their behalf, they can emit _messages_ in each round. ## Supported Messages The following sections describe the methods supported by the consensus roothash service. ### Staking Method Call The staking method call message enables a runtime to call one of the supported [staking service methods]. **Field name:** ``` staking ``` **Body:** ```golang type StakingMessage struct { cbor.Versioned Transfer *staking.Transfer `json:"transfer,omitempty"` Withdraw *staking.Withdraw `json:"withdraw,omitempty"` } ``` **Fields:** - `v` must be set to `0`. - `transfer` indicates that the [`staking.Transfer` method] should be executed. - `withdraw` indicates that the [`staking.Withdraw` method] should be executed. Exactly one of the supported method fields needs to be non-nil, otherwise the message is considered malformed. 
[staking service methods]: ../consensus/services/staking.md#methods [`staking.Transfer` method]: ../consensus/services/staking.md#transfer [`staking.Withdraw` method]: ../consensus/services/staking.md#withdraw ## Limits The maximum number of runtime messages that can be emitted in a single round is limited by the `executor.max_messages` option in the runtime descriptor. Its upper bound is the [`max_messages` consensus parameter] of the roothash service. [`max_messages` consensus parameter]: ../consensus/services/roothash.md#consensus-parameters --- ## Runtime Host Protocol The Runtime Host Protocol (RHP) is a simple RPC protocol which is used to communicate between a runtime and an Oasis Core Compute Node. ## Transport The RHP assumes a reliable byte stream oriented transport underneath. The only current implementation uses AF_LOCAL sockets and [Fortanix ABI streams] backed by shared memory to communicate with runtimes inside Intel SGX enclaves. [Image: Runtime Execution] [Fortanix ABI streams]: https://edp.fortanix.com/docs/api/fortanix_sgx_abi/struct.Usercalls.html#streams ## Framing All RHP messages use simple length-value framing with the value being encoded using [canonical CBOR]. The frames are serialized on the wire as follows: ``` [4-byte message length (big endian)] [CBOR-serialized message] ``` Maximum allowed message size is 16 MiB. [canonical CBOR]: ../encoding.md ## Messages Each [message] can be either a request or a response as specified by the type field. Each request is assigned a unique 64-bit sequence number by the caller to make it possible to correlate responses. See the API reference ([Go], [Rust]) for a list of all supported message bodies. In case the request resulted in an error, the special [`Error`] response body must be used. [message]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#Message [Go]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#Body [Rust]: https://github.com/oasisprotocol/oasis-core/tree/master/runtime/src/types.rs [`Error`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#Error ## Operation RHP allows two forms of communication: * **Host-to-runtime** where the host (compute node) submits requests to the runtime to handle and the runtime provides responses. All such request [messages] are prefixed with `Runtime`. * **Runtime-to-host** where the runtime submits requests to the host and the host provides responses. All such request [messages] are prefixed with `Host`. In its lifetime, from connection establishment to its termination, the RHP connection goes through the following states: * *Uninitialized* is the default state of a newly created connection. In this state the connection could be used either on the runtime side or the host side. To proceed to the next state, the connection must be initialized either as a runtime or as a host. The [Rust implementation] only supports runtime mode while the [Go implementation] can be initialized in either mode by using either [`InitHost` or `InitGuest`]. * *Initializing* is the state when the connection is being initialized (see below for details). After a connection has been successfully initialized it will transition into *ready* state. If the initialization failed, it will instead transition into *closed* state. * *Ready* is the state when the connection can be used to exchange messages in either direction. * *Closed* is the state of the connection after it is considered closed. 
No messages may be exchanged at this point. If either the runtime or the host generates an invalid message, either end may terminate the connection (and/or the runtime process). [messages]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#Body [Rust implementation]: https://github.com/oasisprotocol/oasis-core/tree/master/runtime [Go implementation]: https://github.com/oasisprotocol/oasis-core/tree/master/go/runtime/host/protocol [`InitHost` or `InitGuest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#Connection ### Initialization Before a connection can be used, it must be initialized as either representing the runtime end or the host (compute node) end. The [Rust implementation] only supports being initialized as the runtime and the [Go implementation] is currently only used as the host. If one uses the [`oasis-core-runtime` crate] to build a runtime, initialization is handled automatically. The initialization procedure is driven by the host and it proceeds as follows: * The host sends [`RuntimeInfoRequest`] providing the runtime with its [designated identifier]. The identifier comes from the [registry service] in the consensus layer. * The runtime must reply with a [`RuntimeInfoResponse`] specifying its own version and the version of the runtime host protocol that it supports. If the protocol version is incompatible, initialization fails. After the initialization procedure, the connection can be used for other messages. In case the runtime is running in a trusted execution environment (TEE) like Intel SGX, the next required step is to perform remote attestation. [`oasis-core-runtime` crate]: https://github.com/oasisprotocol/oasis-core/tree/master/runtime [`RuntimeInfoRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeInfoRequest [designated identifier]: identifiers.md [registry service]: ../consensus/services/registry.md#runtimes [`RuntimeInfoResponse`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeInfoResponse ### Remote Attestation When a runtime is executed in a TEE, it must perform remote attestation immediately after initialization. The [Rust implementation] also requires that remote attestation is periodically renewed and will start rejecting requests otherwise. In case a runtime is not executed in a TEE, this step is skipped. *NOTE: As currently Intel SGX is the only supported TEE, the elements of the remote attestation protocol are in some parts very specific to Intel SGX. This may change in the future when support for additional TEEs is added.* Upon initialization the host performs the following steps: * *[Intel SGX]* The host obtains information for the runtime to be able to generate an attestation report. This includes talking to the AESM service and the IAS configuration. The information includes the identity of the Quoting Enclave. * The host sends [`RuntimeCapabilityTEERakInitRequest`] passing the information required for the runtime to initialize its own ephemeral Runtime Attestation Key (RAK). The RAK is valid for as long as the runtime is running. The initialization then proceeds as follows, with the following steps also being performed as part of periodic re-attestation: * The host sends [`RuntimeCapabilityTEERakReportRequest`] requesting the runtime to generate an attestation report. 
* The runtime prepares an attestation report based on the information provided during the first initialization step. It responds with [`RuntimeCapabilityTEERakReportResponse`] containing the public part of the RAK, the attestation report (binding RAK to the TEE identity) and a replay protection nonce. * *[Intel SGX]* The host proceeds to submit the attestation report to the Quoting Enclave to receive a quote. It submits the received quote to the Intel Attestation Service (IAS) to receive a signed Attestation Verification Report (AVR). It submits the AVR to the runtime by sending a [`RuntimeCapabilityTEERakAvrRequest`]. * *[Intel SGX]* The runtime verifies the validity of the AVR, making sure that it is not a replay and that it in fact contains the correct enclave identity and the RAK binding. * Upon successful verification the runtime is now ready to accept requests. As mentioned the attestation procedure must be performed periodically by the host as otherwise the runtime may start rejecting requests. The compute node will submit remote attestation information to the consensus [registry service] as part of its [node registration descriptor]. The registry service will verify that the submitted AVR is in fact valid and corresponds to the registered runtime enclave identity. It will reject node registrations otherwise. [`RuntimeCapabilityTEERakInitRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeCapabilityTEERakInitRequest [`RuntimeCapabilityTEERakReportRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeCapabilityTEERakReportRequest [`RuntimeCapabilityTEERakReportResponse`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeCapabilityTEERakReportResponse [`RuntimeCapabilityTEERakAvrRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeCapabilityTEERakAvrRequest [node registration descriptor]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/common/node?tab=doc#Node ### Host-to-runtime The following section describes the calls that a host can make to request processing from the runtime after successfully performing initialization (and initial remote attestation if running in a TEE). #### Transaction Batch Dispatch When a compute node needs to verify whether individual transactions are valid it can optionally request the runtime to perform a simplified transaction check. It can do this by sending a [`RuntimeCheckTxBatchRequest`] message. The runtime should perform the required non-expensive checks, but should not fully execute the transactions. When a compute node receives a batch of transactions to process from the transaction scheduler executor, it passes the batch to the runtime via the [`RuntimeExecuteTxBatchRequest`] message. The runtime must execute the transactions in the given batch and produce a set of state changes (storage updates for the output and state roots). In case the runtime is running in a TEE the execution results must be signed by the Runtime Attestation Key (see above). 
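All of the request and response bodies above are exchanged as the length-value frames described in the Framing section. A minimal sketch of writing one such frame, assuming oasis-core's `cbor` helper package, could look like this:

```golang
import (
	"encoding/binary"
	"fmt"
	"io"

	"github.com/oasisprotocol/oasis-core/go/common/cbor"
)

// maxFrameSize mirrors the 16 MiB limit from the Framing section.
const maxFrameSize = 16 * 1024 * 1024

// writeFrame CBOR-serializes msg and writes it prefixed with its 4-byte
// big-endian length.
func writeFrame(w io.Writer, msg interface{}) error {
	payload := cbor.Marshal(msg)
	if len(payload) > maxFrameSize {
		return fmt.Errorf("frame too large: %d bytes", len(payload))
	}
	var length [4]byte
	binary.BigEndian.PutUint32(length[:], uint32(len(payload)))
	if _, err := w.Write(length[:]); err != nil {
		return err
	}
	_, err := w.Write(payload)
	return err
}
```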
[`RuntimeCheckTxBatchRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeCheckTxBatchRequest [`RuntimeExecuteTxBatchRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeExecuteTxBatchRequest #### EnclaveRPC #### Key Manager Policy Update #### Abort The host can request the runtime to abort processing the current batch by sending the [`RuntimeAbortRequest`] message. The request does not take any arguments. In case the response does not indicate an error the abort is deemed successful by the host. In case the runtime does not reply quickly enough the host may terminate the runtime and start a new instance. [`RuntimeAbortRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeAbortRequest #### Extensions RHP provides a way for runtimes to support custom protocol extensions by utilizing the [`RuntimeLocalRPCCallRequest`] and [`RuntimeLocalRPCCallResponse`] messages. [`RuntimeLocalRPCCallRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeLocalRPCCallRequest [`RuntimeLocalRPCCallResponse`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#RuntimeLocalRPCCallResponse ### Runtime-to-host The following section describes the calls that a runtime can make to request processing from the host (or the wider distributed network on host's behalf). #### EnclaveRPC to Remote Endpoints #### Read-only Runtime Storage Access The host exposes the [MKVS read syncer] interface (via the [`HostStorageSyncRequest`] message) to enable runtimes read-only access to global runtime storage. [MKVS read syncer]: ../mkvs.md#read-syncer [`HostStorageSyncRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#HostStorageSyncRequest #### Untrusted Local Storage Access The host exposes a simple key-value local store that can be used by the runtime to store arbitrary instance-specific data. **Note that if the runtime is running in a TEE this store must be treated as UNTRUSTED as the host may perform arbitrary attacks. The runtime should use TEE-specific sealing to ensure integrity and confidentiality of any stored data.** There are two local storage operations, namely get and set, exposed via [`HostLocalStorageGetRequest`] and [`HostLocalStorageSetRequest`] messages, respectively. [`HostLocalStorageGetRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#HostLocalStorageGetRequest [`HostLocalStorageSetRequest`]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/runtime/host/protocol?tab=doc#HostLocalStorageSetRequest --- ## Versioning ## Oasis Core Oasis Core (as a whole) uses a [CalVer] (calendar versioning) scheme with the following format: ```text YY.MINOR[.MICRO][-MODIFIER] ``` where: - `YY` represents short year (e.g. `19`, `20`, `21`, ...), - `MINOR` represents the minor version starting with zero (e.g. `0`, `1`, `2`, `3`, ...), - `MICRO` represents (optional) final number in the version (sometimes referred to as the "patch" segment) (e.g. `0`, `1`, `2`, `3`, ...). If the `MICRO` version is `0`, it will be omitted. - `MODIFIER` represents (optional) build metadata, e.g. `git8c01382`. The `YY` version must be bumped after each new calendar year. When a regularly scheduled release is made, the `MINOR` version should be bumped. 
If there is a major fix that we want to back-port from an upcoming next release and release it, then the `MICRO` version should be bumped. The `MODIFIER` should be used to denote a build from an untagged (and potentially unclean) git source. It should be of the form: ```text gitCOMMIT_SHA[+dirty] ``` where: - `COMMIT_SHA` represents the current commit’s abbreviated SHA. The `+dirty` part is optional and is only present if there are uncommitted changes in the working directory. ## Protocols (Consensus, Runtime Host, Runtime Committee) Oasis Core’s protocol versions use [SemVer] (semantic versioning) 2.0.0 with the following format: ```text MAJOR.MINOR.PATCH ``` where: - `MAJOR` represents the major version, - `MINOR` represents the minor version, - `PATCH` represents the patch version. Whenever a backward-incompatible change is made to a protocol, the `MAJOR` version must be bumped. If a new release adds a protocol functionality in a backwards compatible manner, the `MINOR` version must be bumped. When only backwards compatible bug fixes are made to a protocol, the `PATCH` version should be bumped. ### Version 1.0.0 With the release of [Oasis Core 20.10], we bumped the protocol versions to version 1.0.0 which [signified that they are ready for production use]( https://semver.org/#how-do-i-know-when-to-release-100). [CalVer]: http://calver.org [SemVer]: https://semver.org/ [Oasis Core 20.10]: https://github.com/oasisprotocol/oasis-core/blob/v20.10/CHANGELOG.md --- ## Learn about Oasis This chapter provides general overview of the Oasis Network and introduces basic tools for you to get started. --- ## Manage your Tokens The **native token** on Oasis Mainnet is called **ROSE**. Native tokens are used for: - proof-of-stake **block proposal and validation**, - **governance proposal voting**, - paying out **staking rewards**, - paying **network gas fees**, - dApp-specific use cases. ROSE App - The quickest way into the Oasis Ecosystem The Oasis team built a **[ROSE App][rose-move]** for you to easily **move ROSE** from a crypto exchange to Sapphire and the other way around. This way you can quickly and safely start using your tokens with Sapphire dApps without diving into mechanics of the Oasis ParaTime deposits and withdrawals! You simply need a working [Metamask/Ethereum compatible wallet](#metamask). ## ROSE and the ParaTimes The [Oasis Network architecture] separates between the **consensus** and the **compute** (a.k.a. ParaTime) layer. The consensus layer and each ParaTime running on the compute layer have their own **ledger** containing, among other data, the **balances of the accounts**. [Image: Deposits, withdrawals, transfers] Moving tokens from the consensus layer to a ParaTime is called a **deposit** and moving them from a ParaTime back to the consensus layer is a **withdrawal** (see [ADR-3] for technical specifications). You can **transfer** tokens from your account to another account only, if both accounts are either on the consensus layer or inside the same ParaTime. Besides moving the tokens across layers and accounts, you can also **[delegate tokens]** to a validator and **earn passive income** as a reward. [delegate tokens]: staking-and-delegating.md [ADR-3]: ../../adrs/0003-consensus-runtime-token-transfer.md [Oasis Network architecture]: ../oasis-network/README.mdx ## Get ROSE ### From a Centralized Exchange via ROSE App The most common way to obtain ROSE is by buying it on a centralized [crypto exchange] (Binance, Coinbase, etc.). 
**Most exchanges only operate on the Oasis consensus layer**. This means that you can deposit and withdraw ROSE from an exchange only to **your consensus account**. To address this, the Oasis team built a simple **[ROSE App][rose-move]** tool that quickly and safely **moves funds from the consensus account derived from your [Ethereum-compatible wallet](#metamask) to Sapphire** and the other way around. [crypto exchange]: https://en.wikipedia.org/wiki/Cryptocurrency_exchange 1. **Open the ROSE App Move interface:** Visit [**rose.oasis.io/move**][rose-move] in a web browser. 2. **Connect Your Wallet:** Click **"Connect Wallet"** and sign in with your EVM wallet (e.g. MetaMask with Oasis Sapphire network). The app will prompt you to select or unlock your wallet. 3. **Choose the destination:** Click **Select and sign-in** on the left card to move ROSE to Oasis Sapphire. The app will prompt you to sign in. 4. **Copy Deposit Address:** The app will display an **Oasis Consensus Layer address** for your withdrawal. **Copy this address.** (It will be an oasis1… style address). 5. **Withdraw from the Exchange:** Now go to your exchange account and initiate a **withdrawal of ROSE**. When asked for a destination address, paste the **Oasis Consensus Layer address** you copied in the previous step. **Note:** MEXC supports direct Sapphire withdrawal (0x…). Binance/Coinbase only support consensus layer (oasis1…). 6. **Confirm the Transfer:** Complete withdrawal on the exchange. 7. **ROSE Arrives on Sapphire:** ROSE will appear in your Sapphire wallet. If your exchange only supports withdrawal to the **Oasis mainnet (consensus)**, an alternative is to withdraw to your Oasis **consensus-layer ROSE wallet** (a bech32 address managed in the [ROSE Wallet][rose-wallet]). Once you have the ROSE in your consensus account, follow the [From Oasis Consensus (Mainnet ROSE)](#from-oasis-consensus-mainnet-rose) guide below. ### From BNB Chain (Bridging wROSE) **[Decentralized exchanges] (DEX) running on Sapphire** are also gaining traction. In this case, the payout is made from the DEX directly to **your account on Sapphire**, and you can use a standard [Ethereum-compatible wallet](#metamask). For **wROSE on BNB Chain**, use cBridge: 1. In the **cBridge** interface, set **BNB Chain** as the source and **Oasis Sapphire** as the destination. 2. Select **wROSE** from the token list (this is the wrapped ROSE token on BNB Chain). 3. Enter the amount of wROSE to bridge and initiate the transfer. Approve and confirm the transaction in your BSC or MetaMask wallet. 4. After a few minutes, the ROSE will appear in your Sapphire address. [Decentralized exchanges]: https://en.wikipedia.org/wiki/Decentralized_finance#Decentralized_exchanges ### From Oasis Consensus (Mainnet ROSE) Alternatively, you can use a fully-featured [ROSE Wallet](#official-non-custodial-wallets) to **create a consensus account** and then **deposit ROSE** from that account to your Sapphire one. * In the **Oasis Wallet extension** or web wallet, go to the **ParaTimes** section. * Select **"Deposit to ParaTime"**, then choose **Sapphire**. * Enter your **Sapphire EVM address** (0x… from MetaMask) as the recipient, and the amount of ROSE to deposit. * Confirm the transaction to receive ROSE on Sapphire. For detailed instructions with screenshots, see our [onboarding guide][onboarding].
[onboarding]: https://oasis.net/blog/onboarding-guide-rose-sapphire ## The Wallets To sign the token-related transactions such as transfers, deposits, withdrawals and delegations described above, you need a **private key** tied to the corresponding account. Your keys are stored in *[crypto wallets]*. [crypto wallets]: https://en.wikipedia.org/wiki/Cryptocurrency_wallet For your own security and peace of mind, please only use the wallets that are listed here. **Using unofficial wallets can result in the permanent loss of your ROSE!** ### Official Non-Custodial Wallets The Oasis team developed the following **non-custodial wallets** for you. This means that the keys for managing the tokens are **stored on your device** such as a laptop or a mobile phone, and **you are responsible to keep it safe**: - **[ROSE Wallet - Web]**: Runs as a web application in your web browser, the private keys are encrypted with a password and stored inside your Browser's local store. - **[ROSE Wallet - Browser extension]**: Runs as an extension to your Chrome-based browser, the private keys are encrypted with a password and stored inside your Browser's encrypted store. - **[Oasis CLI]**: Command line tool, suitable for builders on Oasis, automation, the private keys are encrypted by a password and stored inside your home folder. [ROSE Wallet - Web]: oasis-wallets/web.mdx [ROSE Wallet - Browser extension]: oasis-wallets/browser-extension.mdx [Oasis CLI]: ../../build/tools/cli/README.md ### MetaMask [MetaMask] is probably the best-known crypto wallet. However, it is an **EVM-compatible** wallet. This means **you can only use it to check the account balances and sign transactions on Sapphire and Emerald chains**. You cannot use it, for example, to sign **consensus layer transactions** or perform **deposits** and **withdrawals** to and from ParaTimes. You can add the Sapphire RPC endpoint by clicking on the "Add to MetaMask" button next to your preferred Mainnet endpoint provider in the [Sapphire] chapter. [Image: Metamask - Adding Sapphire Mainnet Network Configuration] [MetaMask]: https://metamask.io/download/ [Sapphire]: ../../build/sapphire/network.mdx#rpc-endpoints ### Ledger The wallets above are just carefully programmed computer programs that store your keys (in an encrypted form) somewhere on your disk and then use them to sign the transactions. However, if your device gets infected with a piece of malicious software (malware, keyloggers, screen captures), **the password to decrypt your private keys may be obtained and your private keys stolen**. To mitigate such attacks, a **hardware wallet** should be used. This is a physical device which stores your private key and which is only accessed when you send the hardware wallet a transaction to be signed. The transaction content is then shown on the hardware wallet screen for a user to verify and if the user agrees, the transaction is signed and sent back to your computer or your mobile phone for submission to the network. The Oasis team **integrated support for Ledger hardware wallets into all ROSE wallets and the Oasis CLI**. Check out a special [Ledger chapter][Ledger] to learn how to install the Oasis nano app on your Ledger device. [Ledger]: holding-rose-tokens/ledger-wallet.md ### Custodial Services It is up to you to pick the right strategy for keeping the private key of your account holding your tokens safe. Some users may decide to trust their tokens to a **custody provider**. 
You can read more about those in the [Custody providers][custody-providers] chapter. [custody-providers]: holding-rose-tokens/custody-providers.md ## Account Formats and Signature Schemes Transactions on the consensus layer must be signed using the **ed25519 signature scheme**. The addresses on the consensus layer use the **[Bech-32 encoding]**, and you can recognize them by a typical `oasis1` prefix. ParaTimes can implement arbitrary signature schemes and address encodings. For example, since the Sapphire and Emerald ParaTimes are EVM-compatible, they implement the **secp256k1** scheme and prefer the **hex-encoded** addresses and private keys starting with `0x`. The table below summarizes the current state of the address formats, signature schemes and compatible wallets.

| Consensus or ParaTime | Address Format | Digital Signature Scheme | Supported Wallets |
|-----------------------|----------------|-----------------------------|-------------------|
| Consensus | `oasis1` | ed25519 | ROSE Wallet - Web, ROSE Wallet - Browser Extension, Oasis CLI |
| Sapphire | `0x`, `oasis1` | secp256k1, ed25519, sr25519 | MetaMask and other EVM-compatible wallets (transfers only), ROSE Wallet - Browser Extension, ROSE Wallet - Web (deposits and withdrawals only), Oasis CLI |
| Cipher | `oasis1` | secp256k1, ed25519, sr25519 | Oasis CLI |
| Emerald | `0x`, `oasis1` | secp256k1, ed25519, sr25519 | MetaMask and other EVM-compatible wallets (transfers only), ROSE Wallet - Browser Extension, ROSE Wallet - Web (deposits and withdrawals only), Oasis CLI |

[Bech-32 encoding]: https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki#bech32 ## Check your account To check the balance of your consensus account, you can use the [Oasis Scan](https://www.oasisscan.com) block explorer. Enter your `oasis1` address at the top and hit enter. For example: [Image: Account details of entered oasis1 address in Oasis Scan] The "Amount" is a sum of three values: - the "Available" tokens that can immediately be transferred, - the "Escrow" tokens that are delegated, - the "Reclaim" tokens that are waiting for the debonding period to pass. To check the account's deposits and withdrawals navigate to the "Transactions" pane and press "ParaTime" on the right side, next to the "Consensus" button. You will see all ParaTime-related transactions including deposits, withdrawals, transfers and even smart contract transactions. [Image: Search result of oasis1 address - Account details] Furthermore, you can view the transaction details if you click on a transaction's "Tx Hash". Among others, you will see the transaction type, the "from", "to" and "amount" fields. [Image: Tx Hash - Transaction details] Be aware that the [Oasis Scan Blockchain Explorer](https://www.oasisscan.com) is built for the consensus layer. If you want to explore Sapphire (0x addresses, Token Transfers, Contract Calls, etc.), you have to use the [Sapphire Blockchain Explorer](https://explorer.oasis.io/mainnet/sapphire). [rose-move]: https://rose.oasis.io/move --- ## Frequently Asked Questions This document answers frequently asked questions about the official Oasis and 3rd party wallets & custody providers supporting ROSE. ## Wallets ### How can I transfer ROSE tokens from my BitPie wallet to my ROSE Wallet? BitPie wallet doesn't use the [standardized Oasis mnemonic derivation (ADR-8)][ADR-8].
Consequently, your **BitPie wallet's mnemonic phrase will not open the same account in the ROSE Wallet**. In May 2023, the BitPie team [removed support for the Oasis network][bitpie-announcement]. Thus, the only way to access the tokens stored in your BitPie wallet is to obtain its private key in one of two ways: - [Export private key from a working BitPie wallet](#how-can-i-export-oasis-private-key-from-a-working-bitpie-wallet) - [Convert BitPie mnemonic to Oasis private key offline](#how-can-i-export-oasis-private-key-if-bitpie-doesnt-show-my-rose-account-anymore) Once you have obtained the private key, you can [import it into the ROSE Wallet][rose-wallet-import-private-key] and access your assets. [bitpie-announcement]: https://medium.com/bitpie/announcement-on-suspension-of-support-for-algo-and-rose-fc35cb322617 [ADR-8]: ../../adrs/0008-standard-account-key-generation.md [rose-wallet-import-private-key]: ../../general/manage-tokens/oasis-wallets/web.mdx#import-an-existing-account ### How can I export Oasis private key from a working BitPie wallet? Access your BitPie wallet's "Receive" screen, tap the kebab menu, select "Display Private Key", enter your PIN, and copy the Base64-encoded private key to import into the ROSE Wallet. Detailed Info: Chinese users can follow the [official BitPie support article] on how to export the Oasis private key from your BitPie wallet. [official BitPie support article]: https://bitpie.zendesk.com/hc/zh-cn/articles/6796209839503 If you have an existing ROSE account in your BitPie wallet, you can obtain the wallet's private key by following the steps below. On the main BitPie wallet screen, click on the "Receive" button. [Image: BitPie main screen] The QR code with your ROSE address will appear. Then, in the top right corner, tap on the kebab menu "⋮" and select "Display Private Key". [Image: BitPie show private key] The BitPie wallet will now ask you to enter your PIN to access the private key. Finally, your account's private key will be shown to you encoded in Base64 format (e.g. `YgwGOfrHG1TVWSZBs8WM4w0BUjLmsbk7Gqgd7IGeHfSqdbeQokEhFEJxtc3kVQ4KqkdZTuD0bY7LOlhdEKevaQ==`) which you can [import into the ROSE Wallet][rose-wallet-import-private-key]. ### How can I export Oasis private key, if BitPie doesn't show my ROSE account anymore? Use the Oasis unmnemonic tool to convert your BitPie mnemonic to an Oasis private key by selecting the Bitpie algorithm, entering your 12-word mnemonic, and generating the private key file to import into the ROSE Wallet. Detailed Info: If you reinstalled BitPie or restored it with a mnemonic on a new device, you may not have your ROSE account present anymore. In this case, you will have to convert your BitPie mnemonic to the Oasis private key using the [Oasis unmnemonic tool] (64-bit binaries available for [Linux][unmnemonic-linux], [MacOS][unmnemonic-macos] and [Windows][unmnemonic-windows]). Open a terminal (on Windows run `cmd`), move to the corresponding folder and invoke the unmnemonic executable:

```shell
./unmnemonic_linux_amd64
```

```shell
./unmnemonic_darwin_all
```

```shell
unmnemonic_windows_amd64
```

An interactive prompt will be shown to you: 1. select the **Bitpie** algorithm, 2. enter the number of words in the mnemonic (typically **12**) and carefully type them in, one per line, 3. enter the private key index (start with **0** and gradually increase it by 1, if the resulting account does not contain any tokens), 4. answer **Yes** for writing the keys to disk, 5.
enter the name of the **output directory** to store the Oasis private key into (by default the folder name starts with `wallet-export-`). For example:

```
unmnemonic - Recover Oasis Network signing keys from mnemonics
? Which algorithm does your wallet use Bitpie
? How many words is your mnemonic 12
? Enter word 1 *****
? Enter word 2 ******
? Enter word 3 ******
? Enter word 4 *******
? Enter word 5 *****
? Enter word 6 *******
? Enter word 7 *******
? Enter word 8 ****
? Enter word 9 *****
? Enter word 10 ******
? Enter word 11 *****
? Enter word 12 ******
? Wallet index(es) (comma separated) 0
Index[0]: oasis1qp8d9kuduq0zutuatjsgltpugxvl38cuaq3gzkmn
? Write the keys to disk Yes
? Output directory /home/user/unmnemonic/wallet-export-2023-05-01
Index[0]: oasis1qp8d9kuduq0zutuatjsgltpugxvl38cuaq3gzkmn.private.pem - done
Done writing wallet keys to disk, goodbye.
```

Finally, look into the output directory. There you should find a `.private.pem` file containing the private key and named after the address it belongs to. You can either: - Open it with a text editor, copy the Base64-encoded key between the `-----BEGIN ED25519 PRIVATE KEY-----` and `-----END ED25519 PRIVATE KEY-----` markers, and [paste it into the ROSE Wallet][rose-wallet-import-private-key], or - if you use the [Oasis CLI], simply execute the [`oasis wallet import-file`] command to add the exported account to your CLI wallet, for example:

```shell
oasis wallet import-file my_unmnemonic_account oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl.private.pem
```

[Oasis unmnemonic tool]: https://github.com/oasisprotocol/tools/tree/main/unmnemonic [unmnemonic-linux]: https://github.com/oasisprotocol/tools/releases/download/unmnemonic-tool-0.1.0/unmnemonic_linux_amd64 [unmnemonic-windows]: https://github.com/oasisprotocol/tools/releases/download/unmnemonic-tool-0.1.0/unmnemonic_windows_amd64.exe [unmnemonic-macos]: https://github.com/oasisprotocol/tools/releases/download/unmnemonic-tool-0.1.0/unmnemonic_darwin_all [`oasis wallet import-file`]: ../../build/tools/cli/wallet.md#import-file ### Chromium under Ubuntu does not recognize my Ledger device. What is the problem? The snap-packaged Chromium browser blocks USB device access due to security restrictions, so install native Chromium or Google Chrome to resolve Ledger connectivity issues. Detailed Info: First check that you added the Ledger udev device descriptors as mentioned in the [Linux installation guide]. Next, check that your Ledger wallet is recognized by the [Oasis CLI]. You should be able to add your Ledger account to the Oasis CLI wallet by running:

```shell
oasis wallet create oscar --kind ledger
```

If all of the above works, then the issue is most likely that Chromium does not have permission to access your Ledger device. Starting with Ubuntu 20.04, the Chromium browser is installed as a snap package by default. Snap is more convenient for upstream developers to deploy their software and it also adds an additional layer of security by using AppArmor. In our case, however, it prevents Chromium from accessing arbitrary USB devices with the WebUSB API, including your Ledger device. A workaround for this issue is to install Chromium natively using the official [Chromium beta PPA] or the official [Google Chrome .deb package].
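As a sketch of that workaround on a Debian/Ubuntu system, you might install Google Chrome natively from the official `.deb` package linked below (the download URL is the one referenced in this answer):

```shell
# Download the official Google Chrome package and install it with apt,
# which also resolves any missing dependencies.
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt install ./google-chrome-stable_current_amd64.deb
```

After installing, reconnect your Ledger device and retry the WebUSB connection from the natively installed browser.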
[Linux installation guide]: https://support.ledger.com/article/4404389606417-zd [Oasis CLI]: ../../build/tools/cli/README.md [Chromium beta PPA]: https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-beta [Google Chrome .deb package]: https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb ### Are Ethereum and ROSE Wallet that different? I can use the same mnemonics with both, right? While both wallets use BIP39 mnemonics, ROSE and Ethereum wallets use different signature schemes and derivation paths, making their addresses and private keys incompatible despite using the same mnemonic words. Detailed Info: Yes, both ROSE and Ethereum wallets make use of the mnemonics as defined in [BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki) and they even use the same wordlist to derive the keypairs for your wallet. However, they use a different **signature scheme and a derivation path**, so the addresses and the private keys are incompatible. Here's a task for you: 1. Visit [https://iancoleman.io/bip39/](https://iancoleman.io/bip39/) to generate a BIP39 mnemonic. 2. Select the ETH token and copy the hex-encoded private key of the first derived account, for example `0xab2c4f3bc70d40f12f6030750fe452448b5464114cbfc46704edeef2cd06da74`. 3. Import the Ethereum-compatible account with the private key obtained above into your ROSE Wallet. 4. Notice the Ethereum address of the account, for example `0x58c72Eb040Dd0DF10882aA87a39851c21Ae5F331`. 5. Now in the Account management screen, select this account and click on the "Export private key" button. Confirm the risk warning. 6. You will notice the private key of the Ethereum-compatible account, the hex-encoded address and the very same address encoded in the Oasis Bech32 format, in our case `oasis1qpaj6hznytpvyvalmsdg8vw5fzlpftpw7g7ku0h0`. 7. Now let's use the private key from step 2 to import the account. First, convert the hex-encoded key to base64 format, for example by using [this service](https://base64.guru/converter/encode/hex). In our example, that would be `qyxPO8cNQPEvYDB1D+RSRItUZBFMv8RnBO3u8s0G2nQ=`. 8. Next, import this base64-encoded private key to the ROSE Wallet Browser Extension. 9. You should see your newly imported account and the Oasis address. In our case `oasis1qzaf9zd8rlmchywmkkqmy00wrczstugfxu9q09ng`. 10. Observe that this account address **is different** from the Bech32-encoded version of the Ethereum-compatible address despite using the same private key to import the wallet with, because of a different _signature scheme_. As an additional exercise, you can also create an Oasis account using the BIP39 mnemonic from step 1 above. You will notice that the imported account's base64-encoded private key in the account details screen is different from the one in step 7 above. That's because Oasis uses a different _derivation path_ than Ethereum. ### Which derivation path should I use on Ledger? ADR-8 or Legacy? Use ADR-8 for faster key derivation (twice as fast) and compatibility with ROSE Wallet recovery, though legacy derivation remains supported for existing users. Detailed Info: To convert your mnemonic phrase into a private key for signing transactions, each wallet (hardware or software) performs a _key derivation_. The Oasis Protocol Foundation standardized the key derivation for the official ROSE Wallet in a document called [ADR-8] back in January 2021.
However, the Ledger hardware wallet already supported signing transactions at that time using a custom (we now call it _legacy_) derivation path which is incompatible with the one defined in ADR-8. Later, in Oasis app for Ledger v2.3.1 support for ADR-8 was added so the wallet can request either derivation from the Ledger device. The key derivation path defined in ADR-8 has the following advantages compared to the legacy one: - Derivation path is shorter which results in approximately twice as fast key derivation (and transaction signing) without compromising security. - In case your Ledger device is broken or lost and you are unable to retrieve a new one, you will be able to import your Ledger mnemonic and restore your private key in a ROSE Wallet which implements ADR-8. For reasons above, we recommend the usage of ADR-8. However, since there are no security considerations at stake, the ROSE Wallet will support legacy derivation on Ledger for the foreseeable future. ### I lost my Ledger or my Ledger is broken. I urgently need to access my assets. Can I import Ledger mnemonic into ROSE Wallet? For ADR-8 derivation, import directly into Oasis CLI; for legacy derivation, use the unmnemonic tool to convert your 24-word Ledger mnemonic, but note this compromises the mnemonic's security permanently. Detailed Info: When you import your Ledger mnemonic to a software wallet, consider that mnemonic _potentially exposed/compromised_, i.e. not appropriate for a hardware wallet mnemonic anymore. If you use a new hardware wallet in the future, **never restore it from the mnemonic that was previously used by any software wallet!** Ledger supports [two derivation paths](#ledger-derivation-paths) on the Oasis network. If you used your Ledger with the [Oasis CLI] and the [ADR-8] derivation path, you can import the mnemonic directly into the CLI wallet by invoking [`oasis wallet import`] and selecting the `ed25519-adr8` algorithm. If you used your Ledger with the ROSE Wallet Web, the browser extension, the Oasis CLI with the `ed25519-legacy` derivation path or the Oasis Core, then you will need to use the [Oasis unmnemonic tool] (64-bit binaries available for [Linux][unmnemonic-linux], [MacOS][unmnemonic-macos] and [Windows][unmnemonic-windows]). Open a terminal (on Windows run `cmd`), move to the corresponding folder and invoke the unmnemonic executable: ```shell ./unmnemonic_linux_amd64 ``` ```shell ./unmnemonic_darwin_all ``` ```shell unmnemonic_windows_amd64 ``` An interactive prompt will be shown to you: 1. select the **Ledger** algorithm, 2. enter the number of words in the mnemonic (typically **24**) and carefully type them in one by line, 3. enter the private key index (start with **0** and gradually increase it by 1, if the resulting account does not contain any tokens), 4. answer **Yes** for writing the keys to disk, 5. enter the name of the **output directory** to store the Oasis private key into (by default the folder name starts with `wallet-export-`) For example: ``` unmnemonic - Recover Oasis Network signing keys from mnemonics ? Which algorithm does your wallet use Ledger WARNING: Entering your Ledger device mnemonic into any non-Leger device can COMPROMISE THE SECURITY OF ALL ACCOUNTS TIED TO THE MNEMONIC. Use of this tool is STRONGLY DISCOURAGED. ? Have you read and understand the warning Yes ? How many words is your mnemonic 24 ? Enter word 1 ***** ? Enter word 2 **** ? Enter word 3 **** ? Enter word 4 ****** ? Enter word 5 **** ? Enter word 6 ***** ? Enter word 7 **** ? 
Enter word 8 ******* ? Enter word 9 ****** ? Enter word 10 ***** ? Enter word 11 *** ? Enter word 12 ***** ? Enter word 13 ***** ? Enter word 14 **** ? Enter word 15 **** ? Enter word 16 ****** ? Enter word 17 **** ? Enter word 18 ***** ? Enter word 19 **** ? Enter word 20 ******* ? Enter word 21 ****** ? Enter word 22 ***** ? Enter word 23 *** ? Enter word 24 ***** ? Wallet index(es) (comma separated) 4 Index[4]: oasis1qqwkm23pyg638xvl2nu00frxhapusjjv8qhh3p77 ? Write the keys to disk Yes ? Output directory /home/oa/tmp/wallet-export-2023-11-28 Index[4]: oasis1qqwkm23pyg638xvl2nu00frxhapusjjv8qhh3p77.private.pem - done Done writing wallet keys to disk, goodbye. ``` Finally, look into the output directory. There you should find a `.private.pem` file containing the private key and named after the address it belongs to. You can either: - Open it with a text editor, copy the Base64-encoded key between the `-----BEGIN ED25519 PRIVATE KEY-----` and `-----END ED25519 PRIVATE KEY-----`, and [paste it into the ROSE Wallet][rose-wallet-import-private-key], or - if you use the [Oasis CLI] simply execute the [`oasis wallet import-file`] command to add the exported account to your CLI wallet, for example: ```shell oasis wallet import-file my_unmnemonic_account oasis1qpl4axynedmdrrgrg7dpw3yxc4a8crevr5dkuksl.private.pem ``` [`oasis wallet import`]: ../../build/tools/cli/wallet.md#import ### The wallet gives me _Invalid keyphrase_ error when importing my wallet from mnemonics. How do I solve it? Please check that: * All mnemonics were spelled correctly. The ROSE Wallet uses English mnemonic phrase words as defined in BIP39. You can find a complete list of valid phrase words [here](https://github.com/bitcoin/bips/blob/master/bip-0039/english.txt). * The mnemonics were input in correct order. * All mnemonics were provided. The keyphrase should be either 12, 15, 18, 21, or 24 words long. If you checked all of the above and the keyphrase still cannot be imported, please contact Oasis support. ### I imported my wallet with mnemonics. The wallet should contain funds, but the balance is empty. What can I do? First, check your wallet address. If the address equals the one that you expected your funds on, then the key derivation from mnemonics worked correctly. Make sure you have a working internet connection so that the wallet can fetch the latest balance. Then check that the correct network (Mainnet or Testnet) is selected. These are completely separated networks and although the wallet address can be the same, the transactions and consequently the balances may differ. Finally, there might be a temporary problem with the [Oasis Explorer backend](https://explorer.oasis.io) itself which observes the network and indexes transactions. The ROSE Wallet relies on that service and once it is back up and running, you should be able to see the correct balance. If your wallet address is different than the one you used to transfer your funds to, then you used one of the wallets that don't implement the [standardized key derivation path][ADR-8]. If you were using the BitPie wallet see [this question](#how-can-i-export-oasis-private-key-from-a-working-bitpie-wallet). Ledger hardware wallet users should refer to [this question](#ledger-derivation-paths). If you still cannot access your funds, please contact Oasis support on [#wallets Discord channel][discord]. [discord]: https://oasis.io/discord ### I sent my ROSE to BinanceStaking address. Are they staked? Are they lost? What can I do? 
If you just made a **Send** transaction to the BinanceStaking address `oasis1qqekv2ymgzmd8j2s2u7g0hhc7e77e654kvwqtjwm`, then your ROSE coins are not staked. They are now owned by BinanceStaking, which means they are not lost, but they are owned and managed by them. In this case, you should contact Binance via their [Support Center](https://www.binance.com/en/support) or [Submit a request](https://www.binance.com/en/chat). Sending ROSE is different from staking it! With the staking transaction you **lend** your ROSE to the chosen validator and you are rewarded for that. **Sending** your ROSE to the receiving address you enter means that only the person who owns the private key (e.g. the mnemonic) of that receiving address can manage these tokens and no one else. To learn more, read the [Staking and Delegating chapter](staking-and-delegating.md). ### I withdrew ROSE from Emerald to an exchange (Binance, KuCoin), but my deposit is not there. What should I do? Withdrawals from Emerald are slightly different from regular `staking.Transfer` transactions used to send ROSE on the consensus layer. If you withdrew your ROSE directly to an exchange and the funds did not show up there, contact the exchange's support and provide them with a link to your account on [Oasis Scan](https://www.oasisscan.com) where they can verify all transactions. To learn more about this issue, read the [Manage tokens](README.mdx) chapter. ### Error when accessing Ledger on Linux The following error is common on fresh Linux installations when accessing your Ledger device from the Oasis CLI or Ledger Live:

```
Error: ledger: failed to connect to device: hidapi: failed to open device
```

If you are running the application as a normal user, you need to add some **udev rules** to fix the USB device permissions. Install the `ledger-wallets-udev` package to enable access to Ledger devices for all users inside the `udev` group:

```shell
sudo apt install ledger-wallets-udev
```

Alternatively, the Ledger team [prepared a script][ledger-udev-script] that enables access to Ledger devices for all users inside the `udev` group.
You can download and run the original script with `sudo`, or copy and paste the following snippet into your terminal:

```bash
#!/bin/bash
# Write the Ledger udev rules (here to /etc/udev/rules.d/20-hw1.rules, as in Ledger's script) and reload udev.
cat <<EOF | sudo tee /etc/udev/rules.d/20-hw1.rules > /dev/null
# HW.1 / Nano
SUBSYSTEMS=="usb", ATTRS{idVendor}=="2581", ATTRS{idProduct}=="1b7c|2b7c|3b7c|4b7c", TAG+="uaccess", TAG+="udev-acl"
# Blue
SUBSYSTEMS=="usb", ATTRS{idVendor}=="2c97", ATTRS{idProduct}=="0000|0000|0001|0002|0003|0004|0005|0006|0007|0008|0009|000a|000b|000c|000d|000e|000f|0010|0011|0012|0013|0014|0015|0016|0017|0018|0019|001a|001b|001c|001d|001e|001f", TAG+="uaccess", TAG+="udev-acl"
# Nano S
SUBSYSTEMS=="usb", ATTRS{idVendor}=="2c97", ATTRS{idProduct}=="0001|1000|1001|1002|1003|1004|1005|1006|1007|1008|1009|100a|100b|100c|100d|100e|100f|1010|1011|1012|1013|1014|1015|1016|1017|1018|1019|101a|101b|101c|101d|101e|101f", TAG+="uaccess", TAG+="udev-acl"
# Aramis
SUBSYSTEMS=="usb", ATTRS{idVendor}=="2c97", ATTRS{idProduct}=="0002|2000|2001|2002|2003|2004|2005|2006|2007|2008|2009|200a|200b|200c|200d|200e|200f|2010|2011|2012|2013|2014|2015|2016|2017|2018|2019|201a|201b|201c|201d|201e|201f", TAG+="uaccess", TAG+="udev-acl"
# HW2
SUBSYSTEMS=="usb", ATTRS{idVendor}=="2c97", ATTRS{idProduct}=="0003|3000|3001|3002|3003|3004|3005|3006|3007|3008|3009|300a|300b|300c|300d|300e|300f|3010|3011|3012|3013|3014|3015|3016|3017|3018|3019|301a|301b|301c|301d|301e|301f", TAG+="uaccess", TAG+="udev-acl"
# Nano X
SUBSYSTEMS=="usb", ATTRS{idVendor}=="2c97", ATTRS{idProduct}=="0004|4000|4001|4002|4003|4004|4005|4006|4007|4008|4009|400a|400b|400c|400d|400e|400f|4010|4011|4012|4013|4014|4015|4016|4017|4018|4019|401a|401b|401c|401d|401e|401f", TAG+="uaccess", TAG+="udev-acl"
# Ledger Test
SUBSYSTEMS=="usb", ATTRS{idVendor}=="2c97", ATTRS{idProduct}=="0005|5000|5001|5002|5003|5004|5005|5006|5007|5008|5009|500a|500b|500c|500d|500e|500f|5010|5011|5012|5013|5014|5015|5016|5017|5018|5019|501a|501b|501c|501d|501e|501f", TAG+="uaccess", TAG+="udev-acl"
EOF
sudo udevadm trigger
sudo udevadm control --reload-rules
```

[ledger-udev-script]: https://raw.githubusercontent.com/LedgerHQ/udev-rules/master/add_udev_rules.sh ## Bridging and Transferring Assets ### Do I need ROSE in my wallet to bridge assets to Sapphire? No ROSE is needed for bridging (you pay in the source chain's currency). But you need ROSE on Sapphire to use your bridged tokens. Get ROSE first via the methods above. 0.5 ROSE covers many transactions. ### What assets can I bridge to Oasis Sapphire? ETH, USDC, USDT, WBTC, MATIC, BNB, ROSE, and other major tokens. Check the bridge interface for the full list. Only bridge tokens with utility on Sapphire. ### Can I bridge directly from Layer-2 networks like Arbitrum or Base? No. Currently, cBridge only supports bridging to Sapphire from Ethereum, Polygon, and BNB Chain. Layer-2 networks like Arbitrum and Base are not supported yet. ### How long does a bridge transfer take? Usually under 5 minutes. Polygon and BNB Chain are typically faster than Ethereum mainnet. Check the status after 10-15 minutes if delayed. Keep the transaction hash for support. ### Are there any fees to bridge apart from network gas fees? Yes, small bridge fees plus network gas. Ethereum gas is usually the bigger cost compared to Polygon or BNB Chain. Check the fee info in the interface. Some popular routes may have 0% fees. ### What is the difference between using the ROSE App vs. using a bridge? **ROSE App**: Only for ROSE transfers between the consensus layer/exchanges and Sapphire. Not for other tokens. **cBridge**: For all tokens including ETH, USDC, etc. Also works for ROSE from BNB Chain.
### The token I bridged has ".e" at the end (like USDC.e). What does that mean? ".e" means "bridged from Ethereum" - it helps distinguish token versions. Same underlying asset, different contracts. Check which version your dApp supports. ### Can I move my bridged tokens back to the original chain? Yes, use cBridge in reverse: select Sapphire as the source and the original chain as the destination. You'll need ROSE for gas on Sapphire. ### The Wormhole bridge was used in older docs – is that still available? Wormhole was used for the Emerald ParaTime and is considered deprecated (archived instructions are available [here](https://github.com/oasisprotocol/docs/blob/01cee79bf75fbda1a2f74543f2cf24ccd25eeed1/docs/general/manage-tokens/how-to-transfer-eth-erc20-to-emerald-paratime.md)). Sapphire uses **Celer cBridge** as the recommended bridge. Use cBridge for all Sapphire bridging. --- ## Custody Providers & Protocols Another way to hold your ROSE is by involving custodial partners—either by giving them complete custody over your tokens ([Custody Providers](#custody-providers)), or by requiring a multi-signature transaction to move them and splitting some of those keys among trusted parties ([Decentralized Custody Protocols](#decentralized-custody-protocols)). We've partnered with industry leaders who support a number of top crypto assets. You can pick among the custodial providers or decentralized custody protocols below. ## Custody Providers Below are some simple ways to get in touch, but please do reach out to them directly for more information on insurance, fees and cross-chain support. ### [Copper.co](https://copper.co) Copper.co is a leading provider of digital asset custody and trading solutions. It provides a gateway into the cryptoasset space for institutional investors by offering custody, prime brokerage, and settlements across 250 digital assets and more than 40 exchanges. It offers a comprehensive and secure suite of tools and services required to safely acquire, trade, and store cryptocurrencies, including access to margin lending trading facilities and the DeFi space. * **Delegation options:** Copper.co allows delegation to any validator running a node on the Oasis Network. * **Min holding:** No threshold for assets under custody. They do not onboard individuals as clients. Suitable for larger token holders. * **Sign up:** Email [betty.sharples@copper.co](mailto:betty.sharples@copper.co) to set up an account. ### [Anchorage](https://anchorage.com) Anchorage is an advanced digital asset platform, with a solution designed to meet the evolving needs of institutional investors. It offers world-class custody, trading, and financing services, as well as on-chain participation like staking and governance. * **Delegation options:** Anchorage allows delegation to any validator running a node on the Oasis Network. * **Min holding:** Minimum custody requirements apply. Suitable for larger token holders. * **Sign up:** Please sign up [here](https://www.anchorage.com/get-started/). ### [Finoa](https://finoa.io) Finoa is a regulated custodian for digital assets, servicing professional investors with custody and staking. The platform enables its users to securely store and manage their crypto-assets, while providing a directly accessible, highly intuitive and unique user experience, enabling seamless access to the ecosystem of Decentralized Finance (DeFi). * **Delegation options:** Finoa offers delegation to a number of select validators including Bison Trails, Blockdaemon, Chorus One, Figment Networks, and more.
* **Min holding:** No threshold for assets under custody. Please check out the details at [finoa.io](https://www.finoa.io). * **Sign up:** Email [oasis@finoa.io](mailto:oasis@finoa.io) to set up an account. ## Decentralized Custody Protocols ### [Oasis Safe][safe.oasis.io] Unlock a new way of ownership! Oasis Safe is the most trusted **decentralized custody protocol** and collective asset management (*multisignature* support) platform running on **Oasis Sapphire**. Visit [safe.oasis.io] and log in with MetaMask or another Ethereum-compatible wallet. [safe.oasis.io]: https://safe.oasis.io --- ## Ledger Hardware Wallet This is general documentation that will help users set up the [Ledger] hardware wallet on the Oasis Network. Ledger Live doesn't support Oasis (ROSE) tokens natively yet. In this guide we will install the Oasis app via Ledger Live and then open and access a wallet with one or multiple accounts via our official [ROSE Wallet - Web][wallet.oasis.io]. ## Setup your Ledger device and Install Oasis App 1. To use your [Ledger] wallet to hold your ROSE tokens, you will have to install the [Oasis app] on your Ledger wallet via [Ledger Live]'s Manager. Click on the "My Ledger" button. Then, you need to connect your Ledger to your computer and unlock it with your PIN code. [Image: Unlock ledger] The Oasis app requires up-to-date firmware on your Ledger wallet: * At least [version 2.0.0] released on Oct 21, 2021 on a Nano X device. * At least [version 2.1.0] released on Nov 30, 2021 on a Nano S device. * At least [version 1.0.4] released on Sep 27, 2022 on a Nano S Plus device. * At least [version 1.4.0] released on Apr 29, 2024 on a Stax device. * At least [version 1.1.1] released on Jul 26, 2024 on a Flex device. Follow Ledger's instructions for updating the firmware on your Ledger wallet: * [Nano X] * [Nano S] * [Nano S Plus] * [Stax] * [Flex] 2. Next, allow the Ledger Manager on your Ledger device. Then you will be able to open the App catalog and search for `oasis`: [Image: Allow Ledger Manager] [Image: Search app in catalog..] 3. Install the **Oasis** Nano app. [Image: Install the Oasis Nano app] 4. After the installation is complete, take your Ledger device, navigate to the Oasis app and use both buttons to open it. Your Ledger device is ready when you see the "Oasis Ready" message. [Image: Oasis Ready] The Oasis app will use the [BIP 39] mnemonic seed stored secretly on your Ledger hardware wallet to generate the private & public key pairs for your Oasis accounts. **Make sure you backed up the mnemonic when you first initialized your Ledger device!** For security, the ROSE Wallet by default uses a **different mnemonic-to-private-key derivation path** for your Ledger accounts (known as *Ledger* or *ed25519-legacy*) than for the accounts stored on a disk or inside a browser (also known as [ADR-8]). If you find yourself in a situation where your Ledger device does not function anymore, you have the backup mnemonic available, and you urgently need to access your funds, use the *[Oasis unmnemonic tool][unmnemonic-tool]* to **derive the private key from your Ledger mnemonic**. Check out this [FAQ section][unmnemonic-tool-faq] to download it and learn more.
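Should you ever need it, the unmnemonic tool ships as a prebuilt 64-bit binary per platform; a minimal sketch for Linux, using the release URL referenced in the FAQ (pick the binary matching your OS):

```shell
# Fetch the unmnemonic release binary, make it executable and start the interactive prompt.
wget https://github.com/oasisprotocol/tools/releases/download/unmnemonic-tool-0.1.0/unmnemonic_linux_amd64
chmod +x unmnemonic_linux_amd64
./unmnemonic_linux_amd64
```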
[ADR-8]: ../../../adrs/0008-standard-account-key-generation.md [unmnemonic-tool]: https://github.com/oasisprotocol/tools/tree/main/unmnemonic [unmnemonic-tool-faq]: ../faq.mdx#i-lost-my-ledger-or-my-ledger-is-broken-i-urgently-need-to-access-my-assets-can-i-import-ledger-mnemonic-into-rose-wallet ## Connect to your wallet ### ROSE Wallet This is a simpler option since it offers a nice UI for connecting your Ledger to a web application or a browser extension. Check out the following sections corresponding to your wallet for instructions: - [ROSE Wallet - Web: Import Ledger account](../oasis-wallets/web.mdx#import-an-existing-account) - [ROSE Wallet - Browser extension: Ledger](../oasis-wallets/browser-extension.mdx#import-an-existing-account) At time of writing, signing the ParaTime transactions is not yet supported by the ROSE Wallet - Web or the Browser extension. ### Oasis CLI This is a more powerful option that allows performing not just token-related tasks (transferring, staking, ParaTime deposits, withdrawals and transfers), but also generating and/or signing raw transactions, multi-signatures, network governance operations etc. Ledger is supported by the [Oasis CLI] out of the box. You can add a new Ledger account to the Oasis CLI by invoking the [`oasis wallet create`] command and adding the `--kind ledger` parameter. For example: ```shell oasis wallet create logan --kind ledger ``` ## Signing the transaction Once your Ledger account is registered to the wallet on your computer, you can use it to sign the transactions. After confirming the transaction on your computer, the transaction details **will appear on your Ledger screen** where you will need to **carefully review transaction details and make sure they match the ones on your computer**. Then, navigate to the screen where you will see the "APPROVE" button. Use the two buttons to approve your transaction. [Image: ROSE Wallet - Web -> Ledger -> Approve TX] The signed transaction will be sent back to your computer and submitted to the network. [Ledger]: https://www.ledger.com [Oasis app]: https://github.com/Zondax/ledger-oasis [Ledger Live]: https://www.ledger.com/ledger-live/ [wallet.oasis.io]: https://wallet.oasis.io [version 2.0.0]: https://support.ledger.com/article/360014980580-zd [version 2.1.0]: https://support.ledger.com/article/360010446000-zd [version 1.0.4]: https://support.ledger.com/article/4494540771997-zd [version 1.4.0]: https://support.ledger.com/article/Ledger-Stax-OS-release-notes [version 1.1.1]: https://support.ledger.com/article/Ledger-Flex-OS-release [Nano X]: https://support.ledger.com/article/360013349800-zd [Nano S]: https://support.ledger.com/article/360013349800-zd [Nano S Plus]: https://support.ledger.com/article/360013349800-zd [Stax]: https://support.ledger.com/article/360013349800-zd [Flex]: https://support.ledger.com/article/360013349800-zd [BIP 39]: https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki [Oasis CLI]: ../../../build/tools/cli/README.md [`oasis wallet create`]: ../../../build/tools/cli/wallet.md#create --- ## How to Bridge Assets to Oasis Sapphire This guide shows how to **bridge assets (ETH, USDC, USDT, etc.)** from networks like Ethereum, BNB Chain, or Polygon to **Oasis Sapphire** - an EVM-compatible ParaTime with confidential smart contracts. **Main Bridge:** [**Celer cBridge**][cbridge] - a decentralized, non-custodial bridge supporting fast transfers from multiple networks. 
This guide covers: - Bridging assets via cBridge - Transferring ROSE to Sapphire - Important warnings and FAQs **Prerequisites:** - Web3 wallet (e.g. MetaMask) with the Oasis Sapphire network added - Gas fees on the source chain (ETH, BNB, etc.) Need some **ROSE** on Sapphire for gas fees to use your bridged tokens? See the [Get ROSE][get-rose] chapter. Are you a developer? Celer cBridge uses the Celer Inter-Chain Messaging (IM) protocol to communicate between the chains. You can learn how to build cross-chain dApps on Oasis using OPL and Celer IM [here](../../build/opl/celer/README.md). ## Using Celer cBridge to Bridge Assets to Sapphire **Supported assets:** ETH, USDC, USDT, WBTC, BNB, MATIC, OCEAN, wROSE **Supported networks:** Ethereum, Polygon, BNB Chain 1. **Open cBridge and Connect Wallet:** Go to the cBridge web app at [**cbridge.celer.network**][cbridge]. Connect your wallet (MetaMask or another Web3 wallet) to cBridge. Make sure your wallet is set to the **source network** from which you want to bridge. For example, if bridging from Ethereum Mainnet, switch to Ethereum in MetaMask; if from BNB Smart Chain, switch to BNB Chain, etc. [Image: Celer Bridge] 2. **Select Source and Destination Chains:** In the cBridge interface, use the drop-down menus to select your **"From" chain and "To" chain**. For example, choose **Ethereum** as the source and **Oasis Sapphire** as the destination to bridge from Ethereum to Sapphire. 3. **Choose the Asset to Bridge:** Next, select the token you want to transfer. The available token list will update based on the selected chains. For instance: * If bridging from Ethereum, you might choose **ETH** or a stablecoin like **USDC**. Bridging ETH will result in **WETH on Sapphire** (wrapped Ether on Oasis), and bridging USDC will result in USDC on Sapphire (address given by cBridge). * If bridging from BNB Chain to Sapphire, you can select **wROSE** (the wrapped ROSE token on BNB Chain). 4. **Enter Amount and Transfer:** Enter the amount you wish to bridge. The interface may display the estimated receive amount after fees. Click **Transfer**. For ERC-20 assets, your wallet will first ask you to **approve** the cBridge contract to spend that token (e.g. approve USDC). Approve the token, then confirm the **bridging transaction** in your wallet. This will send the tokens to cBridge on the source chain. 5. **Confirm and Wait:** After confirming, cBridge will handle the cross-chain transfer. The bridging typically completes **within a few minutes**, but times vary by chain. You can monitor the transfer status on the cBridge interface. 6. **Receive Tokens on Sapphire:** Tokens arrive at your Sapphire address. You may need to add the token contract address in MetaMask to see them. See [Contract Addresses][token-addresses] for official token addresses. cBridge is **non-custodial** - tokens are sent automatically after confirmation. Always use the **official URL** and verify **Oasis Sapphire** as the destination. ## Get ROSE to Sapphire (From Exchanges or Oasis Wallet) You need ROSE on Sapphire for gas fees. For information on how to get ROSE, see the [Get ROSE section][get-rose]. ## After Bridging – Using Your Assets on Sapphire Your bridged tokens are now on Sapphire. To see them in MetaMask, add the token contract address from our [Contract Addresses][token-addresses] page. ## Warnings and Best Practices * **Use Official Links:** Only use the official [cBridge][cbridge] URL - beware of phishing sites. * **Gas Fees:** You need native tokens on the source chain + ROSE on Sapphire.
Without ROSE on Sapphire, you cannot use your bridged tokens. * **Transaction Time:** Transfers take a few minutes. Check the cBridge status if delayed beyond 10-15 minutes. * **Hardware Wallets:** Enable "Blind Signing" or "Contract Data" in Ledger's Ethereum app for bridge transactions. * **Test First:** Do a small test transfer before large amounts. * **Fees:** cBridge charges small fees. Check the minimum/maximum limits in the interface. * **Security:** Always bridge to your own wallet first, not directly to dApp contracts. * **Token Versions:** Verify that dApps support your bridged token version. * **Mistakes:** Contact bridge support immediately if you sent tokens to the wrong network. Always verify **Oasis Sapphire (chain ID 23294)**. ## FAQs Check the Bridging section in our [Frequently Asked Questions][faq]. --- For support, join our [community channels][social-media]. [token-addresses]: https://github.com/oasisprotocol/sapphire-paratime/blob/main/docs/addresses.md [social-media]: ../../get-involved/README.md#social-media-channels [cbridge]: https://cbridge.celer.network/ [get-rose]: ./README.mdx#get-rose [faq]: faq.mdx#bridging-and-transferring-assets --- ## ROSE Wallet - Browser Extension ## Installation Currently, the [ROSE Wallet - Browser Extension](https://github.com/oasisprotocol/wallet) **only supports** [**Chrome**](https://www.google.com/chrome/) and other [Chromium](https://www.chromium.org/Home)-based browsers. You can install the ROSE Wallet - Browser Extension by heading to the [Chrome Web Store](https://chrome.google.com/webstore/detail/oasis-wallet/ppdadbejkmjnefldpcdjhnkpbjkikoip). [Image: ROSE Wallet Extension - chrome web store] Next, either [create a new wallet](#create-a-new-account) or [restore your existing one](#import-an-existing-account). ## Create a New Account The next screen is devoted to your mnemonic—**a unique list of words representing your account(s)**. Review the information on this page very carefully. Save your mnemonic in the right order in a secure location. Your mnemonic (i.e. keyphrase) is required to access your wallet. Be sure to store it in a secure location. If you lose or forget your mnemonic, you will lose access to your wallet and any token funds contained in it. Never share your mnemonic (i.e. keyphrase)! Anyone with your mnemonic can access your wallet and your tokens. After you’ve saved your mnemonic, click the “I saved my keyphrase” checkbox and then click on the “Import my wallet” button. [Image: Create a New Wallet] Next, you will need to confirm your mnemonic by writing it into the text area. The ROSE Wallet will check for any typos and missing words. When done, click the "Import my wallet" button. [Image: Confirm your mnemonic] ### Account Derivation If you correctly entered the mnemonic, the **account derivation popup** will appear containing a list of `oasis1` addresses with their balances on the right. These are the accounts derived from your mnemonic based on the [ADR-8 derivation scheme][adr8]. Select one or more accounts and click the "Open" button to import them into your wallet. [Image: Account derivation] ### User Profile If you want to permanently store the keys of the selected accounts, turn on the "Create a profile" toggle button below, which will store your private keys locally and protect them with a password. After entering a password below, this will **instantiate a profile inside the local store of your browser to safely store your keys**.
To access them, you will need to enter the correct password each time you open the ROSE Wallet - Chrome extension. [Image: Create a profile] After clicking the 'Open' button, you will be taken to the _Wallet screen_, containing information about your account balance, recent transactions and more. [Image: The Wallet screen] [adr8]: ../../../adrs/0008-standard-account-key-generation.md ## Import an Existing Account On the "Open wallet" page select whether you want to open your wallet via a mnemonic, a private key, or a Ledger hardware wallet. [Image: Access an Existing Wallet] In the "Enter your mnemonic here" field, enter each word of your mnemonic separated by a space. Then, hit the "Import my wallet" button. [Image: Open Wallet via Mnemonic] The [account derivation popup](#account-derivation) will be shown where you can pick one or more derived accounts to import. The ROSE Wallet uses English mnemonic phrase words as defined in [BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki). You can find a complete list of all valid phrase words [here](https://github.com/bitcoin/bips/blob/master/bip-0039/english.txt). If you misspelled a word, the wallet will warn you. Paste your Base64-encoded Ed25519 private key in the "Enter your private key here" field and then click on the "Import my wallet" button. [Image: Open Wallet via Private Key] Toggling on "Create a profile" will store your private keys locally, protected by a password, and instantiate the [user's profile](#user-profile). The 64-byte keypair is the preferred format for importing your account. It consists of **two 32-byte parts**: 1. The first part is the **private key** used for signing the transactions (e.g. for sending tokens from your account). 2. The second part is a **public key** used to verify the signatures of your transactions and also to derive your account's address of the form `oasis1...`. If you entered the 64-byte keypair, then the wallet checks whether the public key corresponds to its private counterpart. **If you mistyped any character, the wallet will not allow you to proceed.** However, if you only typed in the first 32-byte part, there is **no error detection possible**. It is imperative that you **correctly input all characters** and not mix up similar characters like the capital O and 0 or the capital I and 1! If unsure, we suggest that you perform a test transaction the first time you import your wallet from the private key so you can rest assured the key is valid. To use the Ledger hardware wallet, make sure you have your Ledger device readily available and have familiarized yourself with the [Oasis-specific Ledger usage instructions](../holding-rose-tokens/ledger-wallet.md). Click on 'Grant access to your Ledger' and you'll see a pop-up in your Chrome browser asking you to select which device to connect. Click on your Ledger device. Then click "Connect Ledger device". Next, click on "Select accounts to open". If this is the first time you're using Ledger with your browser, a system popup will appear showing the list of Ledger devices connected to your computer and requesting permission to use it. Select one and then click the "Connect" button. [Image: Open Wallet via Ledger] [Image: Open Wallet via Ledger] The [account derivation popup](#account-derivation) will be shown next, where you can pick one or more derived accounts to import. To date, only Chromium-based browsers support the WebUSB component which is required to access your Ledger device.
Finally, you will be taken to your _Wallet screen_, containing information about your account balance, recent transactions and more. ## Transfer To transfer tokens, open the _Wallet screen_. Fill in the "Recipient" and "Amount" fields and click "Send". A confirmation popup will appear showing transaction details. Carefully review the transaction and click the "Confirm" button. [Image: Transfer confirmation dialog] To receive tokens, open the _Wallet screen_ and copy the `oasis1` account address at the top. [Image: The Wallet screen] ## Stake To [stake your tokens](../staking-and-delegating.md) go to _Stake tab_ at the bottom navigation. The list of validators will appear, their status, current escrow and the commission fee. Follow the sections below to delegate or undelegate your tokens. [Image: Stake screen] 1. To delegate tokens, select the preferred validator you wish to delegate your tokens to by clicking on it. Fill in the amount and click the "Delegate" button. [Image: Stake screen: Selected validator] 2. A confirmation popup will appear showing transaction details. Carefully review the transaction and click the "Confirm" button. [Image: Delegate confirmation dialog] 3. In a while, your delegated tokens will appear under the "Staked" tab. [Image: Active delegations] 1. To undelegate, click on a validator in the "Staked" tab, enter the amount of tokens you wish to undelegate and click "Reclaim". You can also click the "Reclaim all" button to undelegate all delegated tokens from this validator. [Image: Active delegations] 2. A confirmation popup will appear showing transaction details. Carefully review the transaction and click the "Confirm" button. [Image: Undelegate confirmation dialog] 3. In a while, your undelegated tokens will enter the **debonding period**. You can check out all the delegations that are in the debonding period in the "Debonding delegations" tab. [Image: Debonding delegations] ## ParaTimes To move tokens from the consensus layer to a ParaTime (**deposit**) or the other way around (**withdrawal**), open the _ParaTime screen_. Click on the "Deposit to ParaTime" or "Withdraw from ParaTime" button and follow the sections below. [Image: The ParaTime screen] 1. Select the ParaTime you wish to deposit your tokens to and click "Next". [Image: Deposit tokens: Select ParaTime] 2. Enter the recipient address in the ParaTime. For EVM-compatible ParaTimes you will need to enter a hex-encoded address starting with `0x` and for other ParaTimes the Oasis native address starting with `oasis1`. Click "Next". [Image: Deposit tokens: Recipient address] 3. Enter the amount to deposit. The gas fee and price will automatically be computed. You can toggle the "Advanced" button to set it manually. Finally, click "Next". [Image: Deposit tokens: Amount] 4. Review deposit details, check the "I confirm the amount and the address are correct" and click the "Deposit" button. [Image: Deposit tokens: Review deposit] Once the deposit transaction is confirmed the tokens will appear on your ParaTime account. [Image: Deposit tokens: Deposit complete] 1. Select the ParaTime you wish to withdraw your tokens from and click "Next". [Image: Withdraw tokens: Select ParaTime] 2. Enter the recipient address on the consensus layer below. If the ParaTime is EVM-compatible you will also need to enter the **hex-encoded private key** of the account on the ParaTime which you are withdrawing from. If you are using a [profile](#user-profile), the **private key will be stored for any future withdrawals**. 
For other ParaTimes, the withdrawal transaction will be signed with the **private key of your currently selected account in your wallet**. Click "Next" to continue. [Image: Withdraw tokens: Recipient address] 3. Enter the amount to withdraw. The gas fee and price will automatically be computed. You can toggle the "Advanced" button to set it manually. Finally, click "Next". [Image: Withdraw tokens: Amount] 4. Review withdrawal details, check the "I confirm the amount and the address are correct" and click the "Withdraw" button. [Image: Withdraw tokens: Review withdrawal] Once the withdrawal transaction is confirmed the tokens will appear on your consensus account. [Image: Withdraw tokens: Withdrawal complete] ## Account options When you have at least one account opened, click on the account jazz icon in the top-right corner. A popup will appear. [Image: Settings popup] ### My Accounts Select a different account and click "Select" to switch the current account. ### Contacts Contains a list of named addresses similar to the address book. ### Profile Used to change the password or delete your [profile](#user-profile). ## Share your feedback with us If you have any questions or issues using the [ROSE Wallet - Browser Extension](https://github.com/oasisprotocol/wallet/), you can [submit a GitHub issue](https://github.com/oasisprotocol/wallet/issues), and the dev team will take a look. You can also connect with us to share your feedback via [Discord](https://oasis.io/discord) or [Telegram](https://t.me/oasisprotocolcommunity). --- ## ROSE Wallet - Web This is the Oasis Foundation-managed non-custodial web wallet for the Oasis Network. You can access it by visiting **[wallet.oasis.io](https://wallet.oasis.io)**. [Image: Home screen] The wallet was designed to work with any modern browser. In order to use the [Ledger hardware wallet](../holding-rose-tokens/ledger-wallet.md) though, **you will need the WebUSB support**. At time of writing, this was only available in [Chrome](https://www.google.com/chrome/) and other [Chromium](https://www.chromium.org/Home)-based browsers. Opening the wallet for the first time will show the *Home screen* where you can choose to [create a new account](#create-a-new-account) or [open an existing one](#import-an-existing-account). ## Create a New Account The next screen is devoted to your mnemonic—**a unique list of words representing your account(s)**. Review the information on this page very carefully. Save your mnemonic in the right order in a secure location. Your mnemonic (i.e. keyphrase) is required to access your wallet. Be sure to store it in a secure location. If you lose or forget your mnemonic, you will lose access to your wallet and any token funds contained in it. Never share your mnemonic (i.e. keyphrase)! Anyone with your mnemonic can access your wallet and your tokens. After you’ve saved your mnemonic, click the “I saved my keyphrase” checkbox and then click on the “Import my wallet” button. [Image: Create a New Wallet] Next, you will need to confirm your mnemonic by writing the mnemonic into the text area. The ROSE Wallet will check for any typos and missing words. When done click the "Import my wallet" button. [Image: Confirm your mnemonic] ### Account Derivation If you correctly entered the mnemonic the **account derivation popup** will appear containing a list of `oasis1` addresses with their balances on the right. These are the accounts derived from your mnemonic based on the [ADR-8 derivation scheme][adr8]. 
Select one or more accounts and click the "Open" button to import them into your wallet. [Image: Account derivation popup] ### User Profile If you want to permanently store the keys of the selected accounts, turn on the "Create a profile" toggle button below, which will store your private keys locally and protect them with a password. After entering a password below, this will **instantiate a profile inside the local store of your browser to safely store your keys**. To access them, you will need to enter the correct password each time you open the ROSE Wallet - Web. Finally, you will be taken to the *Wallet screen*, containing information about your account balance, recent transactions and more. [Image: The Wallet screen] [adr8]: ../../../adrs/0008-standard-account-key-generation.md ## Import an Existing Account On the "Open wallet" page select whether you want to open your wallet via a mnemonic, a private key, or a Ledger hardware wallet. [Image: Access an Existing Wallet] In the "Enter your mnemonic here" field, enter each word of your mnemonic separated by a space. Then, hit the "Import my wallet" button. [Image: Open Wallet via Mnemonic] The [account derivation popup](#account-derivation) will be shown where you can pick one or more derived accounts to import. The ROSE Wallet uses English mnemonic phrase words as defined in [BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki). You can find a complete list of all valid phrase words [here](https://github.com/bitcoin/bips/blob/master/bip-0039/english.txt). If you misspelled a word, the wallet will warn you. Paste your Base64-encoded Ed25519 private key in the "Enter your private key here" field and then click on the "Import my wallet" button. [Image: Open Wallet via Private Key] Toggling on "Create a profile" will store your private keys locally, protected by a password, and instantiate the [user's profile](#user-profile). The 64-byte keypair is the preferred format for importing your account. It consists of **two 32-byte parts**: 1. The first part is the **private key** used for signing the transactions (e.g. for sending tokens from your account). 2. The second part is a **public key** used to verify the signatures of your transactions and also to derive your account's address of the form `oasis1...`. If you entered the 64-byte keypair, then the wallet checks whether the public key corresponds to its private counterpart. **If you mistyped any character, the wallet will not allow you to proceed.** However, if you only typed in the first 32-byte part, there is **no error detection possible**. It is imperative that you **correctly input all characters** and not mix up similar characters like the capital O and 0 or the capital I and 1! If unsure, we suggest that you perform a test transaction the first time you import your wallet from the private key so you can rest assured the key is valid. To use the Ledger hardware wallet, make sure you have your Ledger device readily available and have familiarized yourself with the [Oasis-specific Ledger usage instructions](../holding-rose-tokens/ledger-wallet.md). Next, click on "Select accounts to open". If this is the first time you're using Ledger with your browser, a system popup will appear showing the list of Ledger devices connected to your computer and requesting permission to use it. Select one and then click the "Connect" button.
[Image: Open Wallet via Ledger] The [account derivation popup](#account-derivation) will be shown next, where you can pick one or more derived accounts to import. To date, only Chromium-based browsers support WebUSB component which is required to access your Ledger device. Finally, you will be taken to your *Wallet screen*, containing information about your account balance, recent transactions and more. ## Transfer To transfer tokens, open the *Wallet screen*. Fill in the "Recipient" and "Amount" fields and click "Send". A confirmation popup will appear showing transaction details. Carefully review the transaction and click the "Confirm" button. [Image: Transfer confirmation dialog] To receive tokens, open the *Wallet screen* and copy the `oasis1` account address at the top. You can also scan or store a QR code corresponding to your account on the right side of the screen. [Image: The Wallet screen] ## Stake To [stake your tokens](../staking-and-delegating.md) open the *Stake screen*. The list of validators will appear, their status, current escrow and the commission fee. Follow the sections below to delegate or undelegate your tokens. [Image: Stake screen] 1. To delegate tokens, select the preferred validator you wish to delegate your tokens to by clicking on it. Fill in the amount and click the "Delegate" button. [Image: Stake screen: Selected validator] 2. A confirmation popup will appear showing transaction details. Carefully review the transaction and click the "Confirm" button. [Image: Delegate confirmation dialog] 3. In a while, your delegated tokens will appear under the "Staked" tab. [Image: Active delegations] 1. To undelegate, click on a validator in the "Staked" tab, enter the amount of tokens you wish to undelegate and click "Reclaim". You can also click the "Reclaim all" button to undelegate all delegated tokens from this validator. [Image: Active delegations] 2. A confirmation popup will appear showing transaction details. Carefully review the transaction and click the "Confirm" button. [Image: Undelegate confirmation dialog] 3. In a while, your undelegated tokens will enter the **debonding period**. You can check out all the delegations that are in the debonding period in the "Debonding delegations" tab. [Image: Debonding delegations] ## ParaTimes To move tokens from the consensus layer to a ParaTime (**deposit**) or the other way around (**withdrawal**), open the *ParaTime screen*. Click on the "Deposit to ParaTime" or "Withdraw from ParaTime" button and follow the sections below. [Image: The ParaTime screen] 1. Select the ParaTime you wish to deposit your tokens to and click "Next". [Image: Deposit tokens: Select ParaTime] 2. Enter the recipient address in the ParaTime. For EVM-compatible ParaTimes you will need to enter a hex-encoded address starting with `0x` and for other ParaTimes the Oasis native address starting with `oasis1`. Click "Next". [Image: Deposit tokens: Recipient address] 3. Enter the amount to deposit. The gas fee and price will automatically be computed. You can toggle the "Advanced" button to set it manually. Finally, click "Next". [Image: Deposit tokens: Amount] 4. Review deposit details, check the "I confirm the amount and the address are correct" and click the "Deposit" button. [Image: Deposit tokens: Review deposit] Once the deposit transaction is confirmed the tokens will appear on your ParaTime account. [Image: Deposit tokens: Deposit complete] 1. Select the ParaTime you wish to withdraw your tokens from and click "Next". 
   [Image: Withdraw tokens: Select ParaTime]

2. Enter the recipient address on the consensus layer below. If the ParaTime is EVM-compatible, you will also need to enter the **hex-encoded private key** of the ParaTime account which you are withdrawing from. If you are using a [profile](#user-profile), the **private key will be stored for any future withdrawals**. For other ParaTimes, the withdrawal transaction will be signed with the **private key of the currently selected account in your wallet**. Click "Next" to continue. [Image: Withdraw tokens: Recipient address]
3. Enter the amount to withdraw. The gas fee and price will automatically be computed. You can toggle the "Advanced" button to set them manually. Finally, click "Next". [Image: Withdraw tokens: Amount]
4. Review the withdrawal details, check the "I confirm the amount and the address are correct" checkbox and click the "Withdraw" button. [Image: Withdraw tokens: Review withdrawal]

Once the withdrawal transaction is confirmed, the tokens will appear on your consensus account.

[Image: Withdraw tokens: Withdrawal complete]

## Account options

When you have at least one account opened, click on the account jazz icon in the top-right corner. A popup will appear.

[Image: Settings popup]

### My Accounts

Select a different account and click "Select" to switch the current account.

### Contacts

Contains a list of named addresses, similar to an address book.

### Profile

Used to change the password or delete your [profile](#user-profile).

### Settings

You can change the wallet language and toggle between the light and the dark theme.

[Image: Account popup: Change theme]

If you do not have a profile, a sun/moon icon will be shown in the lower-left corner.

[Image: Toggle between light mode and dark mode, no profile]

## Share your feedback with us

If you have any questions or issues using the [ROSE Wallet - Web](https://github.com/oasisprotocol/wallet/), you can [submit a GitHub issue](https://github.com/oasisprotocol/wallet/issues), and the dev team will take a look. You can also connect with us to share your feedback via [Discord](https://oasis.io/discord) or [Telegram](https://t.me/oasisprotocolcommunity).

---

## Staking and Delegating

The Oasis Network is a proof-of-stake network. This means that the **voting power of an entity in the network is determined by the amount of tokens staked to that entity**. For example, this amount determines how frequently the validator will be elected to propose a new block. Each epoch, the staking reward is distributed among the validators based on the amount of *staked* tokens. You can check out the **current staking rewards** in the [Token metrics chapter][current staking rewards].

But it's not just the validators that can stake. You can *delegate* your tokens to a validator and earn **passive income** when the validator receives the staking reward. Of course, the validator may take their cut (the *commission fee*) for running the validator node hardware, but in essence staking **improves the security of the network**, because paying the commission fee rewards good validators and expels the malicious ones. Keep in mind that a validator's misbehavior **will result in _slashing_**, meaning you may **lose a portion of the staked tokens**!

[current staking rewards]: ../oasis-network/token-metrics-and-distribution.mdx#staking-incentives

When you undelegate your tokens, you will need to wait for the **debonding period** to pass, during which you will not earn any rewards. Currently, this period is **336 epochs (~14 days)**.
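Because the debonding period is expressed in epochs, it can be handy to convert it to wall-clock time. The following minimal TypeScript sketch does that, assuming the roughly one-hour epoch duration implied by the 336 epochs ≈ 14 days figure above (the actual epoch interval is a consensus parameter and can change):

```ts
// Illustrative only: estimate when undelegated tokens become available again.
// Assumes ~1-hour epochs, as implied by "336 epochs (~14 days)" above; the
// real epoch interval is a network parameter and may change.
const DEBONDING_EPOCHS = 336;
const ASSUMED_EPOCH_HOURS = 1;

function estimateDebondingEnd(start: Date = new Date()): Date {
  const ms = DEBONDING_EPOCHS * ASSUMED_EPOCH_HOURS * 60 * 60 * 1000;
  return new Date(start.getTime() + ms);
}

console.log(`Tokens available around: ${estimateDebondingEnd().toISOString()}`);
```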
## How to Delegate?

Staking can only be performed on the **consensus layer**. Currently, the Oasis Wallet - Web and the Browser extension require that you delegate your tokens explicitly from your consensus account. The Oasis CLI and some dApps running in ParaTimes also allow you to implicitly delegate tokens from your ParaTime account.

Check out the current validator set, their escrow of staked tokens, the commission rate, and their availability in the [Oasis Scan explorer][explorer-validators].

[Image: The validator set in the morning of March 29, 2024]

Some validators prefer anonymity and do not list their name or any contact information. In this case only their entity's Oasis address is shown.

Regardless of which validator you pick, **you will earn the same reward as long as the validator is online, proposes and signs valid blocks**. We recommend that you avoid delegating your tokens to the validators that already hold the largest delegations, since this **concentrates the voting power and potentially reduces the network security**.

Once you have decided which validator you want to delegate to, consult the following sections based on your wallet for a step-by-step walkthrough:

* [ROSE Wallet - Web](oasis-wallets/web.mdx#stake)
* [ROSE Wallet - Browser Extension](oasis-wallets/browser-extension.mdx#stake)
* [Oasis CLI](../../build/tools/cli/account.md#delegate)

Staking your ROSE is a different transaction than sending it! When you stake your tokens (the `staking.Escrow` transaction), you can reclaim them at any time. Sending your tokens (the `staking.Transfer` transaction), on the other hand, means that the **receiver will own the tokens and there is no way of retrieving those tokens back by yourself**. If you happen to send your tokens to the validator instead of staking them, try contacting the validator via email or other channels listed on the block explorers and kindly ask them to send the tokens back to you. Know that it is completely up to them to send the tokens back and there is no other mechanism of doing it.

After you have delegated your tokens, [check your account balance][check-account]. If the escrow amount is correct, then congratulations, your tokens are successfully staked!

Some custody providers may also allow delegation of your tokens. Check out the [custody providers][custody-providers] chapter to learn more.

[check-account]: ./README.mdx#check-your-account
[explorer-validators]: https://www.oasisscan.com/validators

## Become a validator yourself?

If you find the validator commission rates too high, you may be interested in **running your own node and becoming a validator**. You can get started [here](../../node/README.mdx). Be sure to [join the **#node-operators** channel on Discord and sign up for the node operator mailing list](../../get-involved/README.md)!

---

## Terminology

## Account

A staking **account** is an entry in the staking ledger. It has two (sub)accounts:

- **General account**

  It is used to keep the funds that are freely available to the account owner to transfer, delegate/stake, pay gas fees, etc.

- **Escrow account**

  It is used to keep the funds needed for specific consensus-layer operations (e.g. registering and running nodes, staking and delegation of tokens, ...). To simplify accounting, each escrow results in the source account being issued shares which can be converted back into staking tokens during the reclaim escrow operation. Reclaiming escrow does not complete immediately, but may be subject to a debonding period during which the tokens still remain escrowed.
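To make the share-based escrow accounting above more concrete, here is a minimal, illustrative TypeScript sketch of a share-issuing pool. It is not the actual consensus implementation, just a model of the idea: depositing tokens buys shares at the pool's current price, rewards paid into the pool raise that price, and reclaiming converts shares back into tokens (subject to the debonding period described above).

```ts
// Illustrative model of a share-based escrow pool (not the consensus code).
interface EscrowPool {
  balance: bigint; // total tokens held by the pool (base units)
  shares: bigint;  // total shares issued against the pool
}

// Depositing tokens issues shares at the pool's current tokens-per-share price.
function addEscrow(pool: EscrowPool, amount: bigint): bigint {
  const issued =
    pool.shares === 0n ? amount : (amount * pool.shares) / pool.balance;
  pool.balance += amount;
  pool.shares += issued;
  return issued;
}

// Reclaiming converts shares back into tokens; if rewards were added to the
// pool in the meantime, each share is now worth more tokens than at deposit.
function reclaimEscrow(pool: EscrowPool, shares: bigint): bigint {
  const tokens = (shares * pool.balance) / pool.shares;
  pool.balance -= tokens;
  pool.shares -= shares;
  return tokens;
}

// Example: deposit 100 tokens, the pool then earns a 10-token reward, and
// reclaiming the same shares returns 110 tokens (before debonding completes).
const pool: EscrowPool = { balance: 0n, shares: 0n };
const myShares = addEscrow(pool, 100n);
pool.balance += 10n; // staking reward paid into the pool
console.log(reclaimEscrow(pool, myShares)); // 110n
```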
## Address

A staking account **address** is represented by a truncated hash of the corresponding entity's public key, prefixed by a 1-byte address version. It uses [Bech32 encoding] for text serialization with `oasis` as its human-readable part (HRP) prefix.

EVM-compatible ParaTimes running on the Oasis compute layer **may use** EVM-compatible 20-byte addresses in hex format (starting with `0x`).

## Delegation

You can **delegate** your tokens by submitting an **escrow** transaction that deposits a specific number of tokens into someone else’s escrow account (as opposed to **staking** tokens, which usually refers to depositing tokens into your own escrow account). In other words, delegating your tokens is equivalent to staking your tokens in someone else's validator node. Delegating your tokens can give you the opportunity to participate in the Oasis Network's proof-of-stake consensus system and earn rewards via someone else's validator node.

## Staking

You can stake your tokens by submitting an **escrow** transaction that deposits a specific number of tokens into your own escrow account.

## Rewards

By delegating your tokens to someone else's node, you can earn a portion of the rewards earned by that node through its participation in the Oasis Network.

## Commission

Node operators collect **commissions** when their node earns a **staking reward** for delegators. A validator node earns a staking reward for participating in the consensus protocol each epoch. The **commission rate** is the fraction of the staking reward taken as commission. For example, at a 50% commission rate, if our validator node earns a reward of 0.007 tokens, 0.0035 tokens are added to the escrow pool (increasing the value of our escrow pool shares uniformly), and 0.0035 tokens are given to us (issuing us new shares as if we manually deposited them).

## Slashing

A portion of your delegated tokens can be **slashed** (seized) by the network if the node that you delegated your tokens to gets slashed, e.g. as a penalty for equivocating in the protocol by signing diverging blocks for the same height.

[Bech32 encoding]: https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki#bech32

---

## Oasis Network

The Oasis Network is a Layer 1 decentralized blockchain network designed to be uniquely scalable, privacy-first and versatile. The network has two main architectural components, the consensus layer and the ParaTime layer.

1. The **consensus layer** is a scalable, high-throughput, secure, proof-of-stake consensus run by a decentralized set of validator nodes.
2. The **ParaTime layer** hosts many parallel runtimes (ParaTimes), each representing a replicated compute environment with shared state.

[Image: Oasis architectural design including ParaTime and consensus layers]

## Technology Highlights

* **Separates consensus and execution into two layers** — the consensus layer (sometimes called Layer 0) and the ParaTime layer — for better scalability and increased versatility.
* Separation of consensus and execution allows **multiple ParaTimes to process transactions in parallel**, meaning complex workloads processed on one ParaTime won’t slow down faster, simpler transactions on another.
* The ParaTime layer is entirely decentralized, allowing **anyone to develop and build their own ParaTime**. Each ParaTime can be developed in isolation to meet the needs of a specific application, such as confidential compute, open or closed committees, and more.
* The network’s sophisticated discrepancy detection makes Oasis **more efficient than sharding and parachains** — requiring a smaller replication factor for the same level of security.
* **The network has broad support for confidential computing technology**. The Oasis Eth/WASI Runtime is an open source example of a confidential ParaTime that uses secure enclaves to keep data private while being processed.

## Benefits of the Oasis Network Technology Stack

### Scalability

The Oasis Network’s impressive scalability is achieved through a cutting-edge set of features that provide faster transaction speeds and higher throughput than other networks. The top-tier performance of the network is largely due to its separation of compute and consensus operations into the consensus layer and the ParaTime layer. This separation allows multiple ParaTimes to process transactions in parallel, meaning complex workloads processed on one ParaTime won’t slow down faster, simpler transactions on another. Plus, the network’s sophisticated discrepancy detection makes Oasis more efficient than sharding and parachains — requiring a smaller replication factor for the same level of security.

### Privacy-First

The Oasis Network designed the first ever confidential ParaTime with support for confidential smart contracts. In a confidential ParaTime, nodes are required to use a type of secure computing technology called a TEE (trusted execution environment). TEEs act as a black box for smart contract execution in a confidential ParaTime. Encrypted data goes into the black box along with the smart contract, the data is decrypted, processed by the smart contract, and then encrypted again before it is sent out of the TEE. This process ensures that data remains confidential and is never leaked to the node operator or application developer.

[Image: Client, Key Manager, Compute Node diagram]

The Oasis Eth/WASI Runtime is an open source example of a confidential ParaTime that uses Intel SGX. Other secure computing technologies, such as zero-knowledge proofs (ZKP), homomorphic encryption (HE), or other secure enclaves, can also be used. In the future we hope to support additional computation techniques such as secure multi-party computation, federated learning and more.

Confidentiality unlocks a range of new use cases on blockchain by allowing personal or sensitive data, such as a social security number, bank statements, or health information, to be used by apps on the Oasis Network — something incredibly risky on other Layer 1 networks.

### Versatility

Designed to support the next generation of blockchain applications, the Oasis Network is incredibly versatile, agile, and customizable. Namely, each ParaTime can be developed in isolation to meet the needs of a specific application. ParaTime committees can be made large or small, open or closed, allowing for faster or more secure execution depending on the requirements of a particular use case. Nodes can be required to have specific hardware, such as secure enclaves in a confidential ParaTime. Each ParaTime can similarly run a different runtime VM (ParaTime engine), such as an EVM-backwards-compatible engine, a Rust-based smart contract language, or a data tokenization engine. Finally, to support enterprise and developer use cases, ParaTimes can be made permissioned or permissionless — allowing consortiums to have their own closed ParaTime, or communities to have fully decentralized open ParaTimes.
The versatility of the ParaTime layer allows the Oasis Network to expand and grow to address a broad set of new and exciting use cases, while still maintaining the same core ledger and consensus layer.

---

## Frequently Asked Questions (Oasis Network)

This page answers some of the most frequently asked questions about the Oasis Network. It will be updated regularly with new questions and responses.

## **Overview**

### **Why Oasis?**

Designed for the next generation of blockchain, the Oasis Network is the first privacy-enabled blockchain platform for open finance and a responsible data economy. Combined with its high throughput and secure architecture, the Oasis Network is able to power private, scalable DeFi, revolutionizing Open Finance and expanding it beyond traders and early adopters to a mass market. Its unique privacy features can not only redefine DeFi, but also create a new type of digital asset called Tokenized Data that can enable users to take control of the data they generate and earn rewards for staking it with applications — creating the first ever responsible data economy.

**First Privacy-Enabled Blockchain:** The Oasis Network is the world’s first scalable, privacy-enabled blockchain. ParaTimes on the Oasis Network can leverage confidential computing technology such as secure enclaves to keep data confidential — unlocking new use cases and applications for blockchain.

**Scalable, Private DeFi:** The Oasis Network’s privacy-first design can expand DeFi beyond traders and early adopters — unlocking a new mainstream market. Plus, its innovative scalability design brings fast speeds and high throughput to DeFi transactions.

**First to Enable Data Tokenization:** The Oasis Network can **Tokenize Data**, unlocking game-changing use cases for blockchain, and an entirely new ecosystem of apps and projects on the network — powering the next generation of privacy-first applications.

**Rapidly Growing Community:** The Oasis Network has a thriving community of close to a thousand node operators, developers, enterprise partners, ambassadors, and nearly ten thousand community members engaged in global social channels.

**Top-Tier Team:** The Oasis team is made up of top talent from around the world with backgrounds from Apple, Google, Amazon, Goldman Sachs, UC Berkeley, Carnegie Mellon, Stanford, Harvard and more — all committed to growing and expanding the impact of the Oasis Network.

### **Is the Oasis Protocol Foundation still taking grant applications for projects that are building new dApps?**

Yes! We are still taking grant applications. You can apply any time [here](https://medium.com/oasis-protocol-project/oasis-foundation-grant-wishlist-3ad73b723d7).

## **Architecture**

### **What kind of blockchain is the Oasis Network? Does it use sidechains?**

The Oasis Network is a Layer 1 blockchain protocol using a BFT, proof-of-stake consensus system. The network’s innovative ParaTime architecture enables us to scale without using sidechains. For more information please refer to our [platform whitepaper](https://docsend.com/view/aq86q2pckrut2yvq).

### **What does the Oasis Network’s architecture look like?**

The Oasis Network is a Layer 1, proof-of-stake, decentralized network. It has two main components, the consensus layer and the ParaTime layer.

1. The **consensus layer** is a scalable, high-throughput, secure, proof-of-stake consensus run by a decentralized set of validator nodes.
2. The **ParaTime layer** hosts many parallel runtimes (ParaTimes), each representing a replicated compute environment with shared state.

[Image: Paratime Communication]

### **How is a ParaTime different from a Parachain?**

Unlike a Parachain, a ParaTime does not need to do consensus itself, which makes it simpler to develop and more integrated into the network as a whole. ParaTimes take care of compute, and discrepancy detection is used to ensure correctness and integrity of execution, making ParaTimes more efficient than Parachains and other chain designs that rely on sharding.

### **Who will be running all of these ParaTimes? Can anyone run a ParaTime?**

The network is agnostic in this regard. Anyone can run a ParaTime. It is completely left up to the devs and users to see which ones provide the functionality that they need. Examples of ParaTimes in development include the Oasis Labs Data Sovereignty ParaTime and the [Second State Virtual Machine](https://medium.com/oasis-protocol-project/ethereum-support-on-the-oasis-blockchain-3add9e13556?source=collection_home---4------0-----------------------), an EVM-compatible runtime.

### **What consensus mechanism are you running? Is it BFT?**

The Oasis Network uses CometBFT as its BFT consensus protocol. Given that the consensus layer uses a BFT protocol, the Oasis Network offers instant finality, meaning that once a block is finalized, it cannot be reverted (at least not for full nodes). A ParaTime commitment goes into a block, and as such the ParaTime state is also finalized and cannot be reverted once a block is finalized.

### **Why doesn’t the Oasis Network do sharding? Does that mean it’s slow?**

The Oasis Network does not use sharding. Instead, Oasis leverages a discrepancy detection model leading up to roothash updates, giving the network the same scalability benefits that sharding offers, but with added benefits that come from a design that is much simpler to implement in practice.

Sharding is a nice idea in theory but comes with a lot of complexity and costs that make it harder to implement in practice. From a security perspective, the complexity of sharding also makes it harder to audit and inherently more vulnerable to security breaches. The Oasis Network’s discrepancy detection-based approach provides the same benefits as sharding through a cleaner, simpler, more efficient implementation.

Ultimately, the Oasis Network’s unique scalability mechanism ensures that the network is not only fast (like sharding networks purport to be) but also versatile and secure enough to support a wide range of real-world workloads.

### **How does storage work on the Oasis Network? Do you use IPFS?**

Storage on the Oasis Network is determined by each ParaTime. There is a clear separation of concerns between the consensus layer and the runtime layer. The ParaTimes that make up the runtime layer have a lot of flexibility in how they choose to manage storage. For instance, the ParaTime being developed by Oasis Labs can support IPFS as its storage solution. Other ParaTime developers could opt to implement different storage mechanisms based on their own unique storage needs.

## **Open Finance & DeFi**

### **Does the Oasis Network have a vision for DeFi? Is it different from the mainstream view of DeFi?**

The first generation of DeFi dApps has provided the market with a huge number of protocols and primitives that are meant to serve as the foundation for the specific components of a new financial system.
Despite the current focus on short-term returns, we at Oasis believe the goal of DeFi applications should be to give rise to a new financial system that removes subjectivity, bias, and inefficiencies by leveraging programmable parameters instead of status, wealth, and geography. Oasis aims to support the next wave of DeFi applications by offering better privacy and scalability features than other Layer 1 networks.

### **I’ve seen Oasis use both the terms "Open Finance" and "DeFi". What’s the difference?**

The terms "Open Finance" and "DeFi" are interchangeable. However, we believe that "Open Finance" better represents the idea that the new financial system should be accessible to everyone who operates within the bounds of specific programmable parameters, regardless of their status, wealth, or geography.

### **Will Oasis provide oracle solutions for use in DeFi applications?**

Oasis recently announced a partnership with [Chainlink](https://medium.com/oasis-protocol-project/oasis-network-chainlink-integrating-secure-and-reliable-oracles-for-access-to-off-chain-data-5d31e6e4591c?source=collection_home---4------1-----------------------) as the preferred oracle provider of the Oasis Network. This integration is ongoing.

### **What aspects of DeFi require privacy? How can the Oasis Network’s focus on privacy help with DeFi applications?**

In the current generation of DeFi, some miners and traders are leveraging the inefficiencies of Ethereum to stack mining fees and interest rates, while preventing many more people from participating in the industry. Privacy can play a strong role in making the network function properly by reducing these inefficiencies.

At the application level, privacy is an enabler. For instance, strong privacy guarantees can encourage established institutions to participate in the system because these institutions would be able to protect their interests and relationships. Additionally, privacy features can serve as the foundation for a reputation system, thereby unlocking the full potential of undercollateralized lending. We keep hearing that privacy is the next big thing in DeFi, and we look forward to empowering developers to build the next generation of DeFi applications.

### **How does privacy help create a new system of Open Finance?**

Existing financial systems and data systems are not open at all. They are only accessible to a select few. Privacy has a much broader meaning than just keeping something private. Thanks to privacy-preserving computation, users can retain ownership of their information and grant others access to compute on their data without actually revealing (or transferring) their data. This will enable users to accrue data yields by essentially staking their data on the blockchain, unlocking a wide range of new financial opportunities.

Open Finance refers to the idea that status, wealth, and geography won't block you from accessing a certain financial product. Adherence to a programmable set of parameters will determine whether someone can participate or not, making new financial opportunities open to more people around the world. For example, services such as lending protocols could offer different interest rates depending on the history of that user. What's game-changing for the world of finance is that companies would not have to rely on a centralized score such as FICO; they would be able to build their own models.
### **Why would anyone choose to build a DeFi project on Oasis over Ethereum?**

The network’s cutting-edge scalability features can help unblock DeFi as it works today, fixing the high transaction fees and slow throughput currently plaguing other Layer 1 networks. Combined, Oasis’ unique ability to provide scalable, private DeFi is expected to make it the leading platform for unlocking the next generation of DeFi markets and use cases.

## **Token**

### **How will the Oasis Network’s token be used in the network when it launches?**

The ROSE token will be used for transaction fees, staking, and delegation at the consensus layer.

## **Privacy**

### **How does the Oasis Network achieve privacy and confidentiality? Is it through homomorphic encryption?**

There are many ways to achieve confidentiality. Using trusted execution environments (TEEs) is one way. This is what we do. In effect, we provide end-to-end confidentiality for transactions where state and payload are encrypted at rest, in motion, and, more importantly, in compute. Homomorphic encryption is another technique for confidentiality. At this time, anyone can build a ParaTime on the Oasis Network that uses homomorphic encryption to provide confidentiality. We are not prescriptive about what approach developers should take.

Something worth noting is that privacy and confidentiality are not equivalent. Privacy implies confidentiality but not the other way around. For privacy, there are techniques such as differential privacy that can be implemented.

## **Interoperability**

### **Can you run Ethereum smart contracts on the Oasis Network? Or if not directly run smart contracts, could you access a bridge between Ethereum ERC20 assets and Oasis?**

In short, yes! The Oasis Network supports EVM-compatible ParaTimes which will support a wide range of applications.

---

## Papers

1. **[Liquefaction: Privately Liquefying Blockchain Assets](https://doi.ieeecomputersociety.org/10.1109/SP61157.2025.00156)** \[[arXiv PDF](./papers/2025-liquefaction-privately-liquefying-blockchain-assets.pdf)\]
   James Austgen, Andrés Fábrega, Mahimna Kelkar, Dani Vilardell, Sarah Allen, Kushal Babel, Jay Yu, Ari Juels
   2025 IEEE Symposium on Security and Privacy (SP)
1. **[Keep Your Transactions On Short Leashes or Anchoring For Stability In A Multiverse of Block Tree Madness](https://arxiv.org/abs/2206.11974)** \[[PDF](./papers/2022-short_leashes.pdf)\]
   Bennet Yee
   2022 Technical Report
1. **[Shades of Finality and Layer 2 Scaling](https://arxiv.org/abs/2201.07920)** \[[PDF](./papers/2022-shades_of_finality.pdf)\]
   Bennet Yee, Dawn Song, Patrick McCorry, Chris Buckland
   2022 Technical Report
1. **An Implementation of Ekiden on the Oasis Network** \[[PDF](./papers/2021-an_implementation_of_ekiden.pdf)\]
   Oasis Protocol Project
   2021 Technical Report
1. **The Oasis Blockchain Platform** \[[PDF](./papers/2020-the_oasis_blockchain_platform.pdf)\]
   Oasis Protocol Project
   2020 Technical Report
1. **[Digital Stewardship: An Introductory White Paper](https://ssrn.com/abstract=3669911)** \[[PDF](./papers/2020-digital_stewardship.pdf)\]
   Richard Whitt
   2020 SSRN, Elsevier
1. **[Ekiden: A Platform for Confidentiality-Preserving, Trustworthy, and Performant Smart Contracts](https://doi.org/10.1109/EuroSP.2019.00023)** \[[PDF](./papers/2019-ekiden.pdf)\]
   Raymond Cheng, Fan Zhang, Jernej Kos, Warren He, Nicholas Hynes, Noah Johnson, Ari Juels, Andrew Miller, Dawn Song
   2019 IEEE European Symposium on Security and Privacy (EuroS&P)

---

## Token Metrics and Distribution

[Image: Background illustration]

## Quick Token Facts

**Supply**: The ROSE native token is a capped supply token. The circulating supply at launch will be approximately 1.5 billion tokens, and the total cap is fixed at 10 billion tokens.

**Token utility**: The ROSE token will be used for transaction fees, staking, and delegation at the consensus layer.

**Staking rewards**: \~2.3 billion tokens will be automatically paid out as staking rewards to stakers and delegators for securing the network over time.

## Token Distribution

The quantity of ROSE tokens reserved for various network functions, as a percentage of the total existing token supply, approximately follows the distribution below. Please note that these percentages and allocations are subject to change as we finalize the logistics for the network and its related programs.

### Token Distribution Glossary

**Backers**: Tokens sold directly to backers prior to mainnet launch. The vast majority of these sales took place in 2018.

**Core Contributors**: Compensation to core contributors for contributing to the development of the Oasis Network.

**Foundation Endowment**: Endowment to the Oasis Foundation to foster the development and maintenance of the Oasis Network.

**Community and Ecosystem**: Funding programs and services that engage the Oasis Network community, including developer grants and other community incentives by the Oasis Foundation.

**Strategic Partners and Reserve**: Funding programs and services provided by key strategic partners in the Oasis Network.

**Staking Rewards**: Rewards to be paid out on-chain to stakers and delegators for contributing to the security of the Oasis Network.

### Circulating Supply

Not all tokens have been released publicly or will be released publicly by Mainnet launch. Due to release schedules and locks, only a fraction of the total existing token supply will be in circulation at the time of Mainnet. Approximately 1.5 billion tokens out of a fixed supply of 10 billion tokens in total will be in circulation immediately upon Mainnet.

In addition, a portion of Foundation tokens that are not in the circulating supply at launch are staked on the network. Any staking rewards earned will go back into the network via future validator delegations, network feature development, and ecosystem grants.

Tokens set aside for Staking Rewards will be disbursed in accordance with on-chain reward mechanisms which calculate rewards based on how many blocks are proposed by a validator, how many blocks are signed by a validator, how many nodes are participating in staking, how many tokens are staked, etc.
The remaining allocations will be disbursed according to the following release schedule:

Alternative formats: [CSV](../../../src/token_distribution/data.csv), [JSON](../../../src/token_distribution/data.json)

## Fundraising History

Between 2018 and 2020, Oasis raised over $45 million from backers including:

[Image: Fundraising History]

## Staking Incentives

Given the Oasis Network’s founding vision to become a world-class, public, permissionless blockchain platform, the contributing team at Oasis has been focused on ensuring that setting up a node is as seamless as possible for all community members. To that end, we’ve put a lot of thought into making sure our staking conditions minimize barriers to entry and encourage meaningful engagement on the network. Some key parameters include:

* **Number of validators to participate in the consensus committee (and receive staking rewards):** 120. Validators are selected based on their stake weight on the network.
* **Minimum stake**: 100 tokens per entity.
* **Selection to the consensus committee**: Each entity can have at most one node elected to the consensus committee at a time.
* **Staking rewards**: The network is targeted to reward stakers with rewards of between 2.0% and 20.0%, depending on the length of time staked to provide staking services on the network. In order to be eligible for staking rewards in an epoch, a node needs to sign at least 75% of blocks in that epoch.
* **Slashing**: At the time of Mainnet launch, the network will only slash for forms of double-signing. The network would slash the minimum stake amount (100 tokens) and freeze the node. Freezing the node is a precaution in order to prevent the node from being over-penalized. The network will not slash for liveness or uptime at launch.
* **Unbonding period**: The network will have a \~14 day unbonding period. During this time, staked tokens are at risk of getting slashed for double-signing and do not accrue rewards.
* **Consensus voting power**: The current voting power mechanism is stake-weighted. This means that the consensus voting power of a validator is proportional to its stake. In this model, the network requires signatures by validators representing more than 2/3 of the total stake of the committee to sign a block. Note that in Tendermint, a validator's opportunities to propose a block in the round-robin block proposer order are also proportional to its voting power.

Alternative formats: [CSV](../../../src/staking_rewards/data.csv), [JSON](../../../src/staking_rewards/data.json)

## Delegation Policy

The Oasis Protocol Foundation is committed to giving delegations to entities participating in various incentivized networks. For more details, see its [Delegation Policy](../../get-involved/delegation-policy.md).

## Change Log

* **Mar 27, 2024:**
  * Updated Staking Rewards Schedule after the [governance proposal #4](https://www.oasisscan.com/proposals/4) passed (Mar 27, 2024), which changed the staking rewards schedule.
* **Nov 2, 2023:**
  * Improved epoch duration estimates inside the Staking Rewards Schedule chart after the Damask Upgrade.
* **Jul 28, 2022:**
  * Created interactive Staking Rewards Schedule chart.
  * Created interactive Token Distribution pie chart.
* **Jul 15, 2022:**
  * Created interactive Token Circulation Schedule chart.
* **Apr 28, 2022:**
  * Updated validator set to 120 as reflected in the Oasis Network 2022-04-11 Upgrade.
* **Nov 10, 2021:**
  * Updated validator set to 110 as reflected in the Oasis Network 2021-08-31 Upgrade.
* Added Circulating Supply title to the part talking about Oasis Network's circulating supply. * **April 30, 2021:** * Updated validator set to 100 as reflected in the Oasis Network Cobalt Upgrade. * **Jan 15, 2021:** * Added section on Foundation's Delegation Policy. * **Nov 15, 2020:** * Corrected the initial validator consensus committee to 80 validators. This reflects what is currently in the community approved genesis file and community proposed upgrade to Mainnet. * **Nov 2, 2020:** * Updated Backers image to include more publicly-known backers. * Included a community-proposed (and foundation supported) increase in staking rewards range from 15% - 2% to 20% - 2% over the first four years of the network. Impacted charts (distribution, token delivery schedule, and expected staking rewards) also updated to reflect the increase in staking rewards. --- ## Join our Community ## Welcome to the Oasis Community Whether you're a blockchain enthusiast, a software developer, or someone who is just starting to learn about crypto, we're excited to welcome you to our community. We look forward to working together on our mission to build a responsible data economy and empower users around the world to take back ownership of their data and their online privacy. ## Social Media Channels To stay up-to-date on the latest Oasis Network news, events, and programs, be sure to join our social media channels: * [Discord](https://oasis.io/discord) * [Twitter](https://twitter.com/OasisProtocol) * [Public Telegram channel](https://t.me/oasisprotocolcommunity) (for community discussions open to everyone) * [Telegram Announcement channel](https://t.me/oasisprotocolfoundation) (for one-way updates from the Oasis Foundation) ## Regional Communities To connect with Oasis community members who live in your home region or speak your native language, check out our region-based community channels: * Arabic Speaking Countries: [https://t.me/OasisNetworkCommunity_Arabic](https://t.me/OasisNetworkCommunity_Arabic) * Austria: [https://t.me/OasisNetworkCommunity_Austria](https://t.me/OasisNetworkCommunity_Austria) * Bangladesh: [https://t.me/OasisNetworkCommunity_Bangladesh](https://t.me/OasisNetworkCommunity_Bangladesh) * Brazil: [https://t.me/OasisNetworkCommunity_Brazil](https://t.me/OasisNetworkCommunity_Brazil) * China WeChat: [https://t.me/oasisprotocolcommunity/27378](https://t.me/oasisprotocolcommunity/27378) * France: [https://t.me/OasisNetworkCommunity_France](https://t.me/OasisNetworkCommunity_France) * Germany: [https://t.me/OasisNetworkCommunity_Germany](https://t.me/OasisNetworkCommunity_Germany) * India: [https://t.me/OasisNetworkCommunity_India](https://t.me/OasisNetworkCommunity_India) * Indonesia: [https://t.me/OasisNetworkCommunity_Indonesia](https://t.me/OasisNetworkCommunity_Indonesia) * Japan: [https://t.me/OasisNetworkCommunity_japan8](https://t.me/OasisNetworkCommunity_japan8) * Korea: [https://t.me/OasisNetworkCommunity_Korea](https://t.me/OasisNetworkCommunity_Korea) * Nigeria: [https://t.me/OasisNetworkCommunity_Nigeria](https://t.me/OasisNetworkCommunity_Nigeria) * Philippines: [https://t.me/OasisNetworkCommunity_Philippine](https://t.me/OasisNetworkCommunity_Philippine) * Russia: [https://t.me/OasisNetworkCommunity_Russia](https://t.me/OasisNetworkCommunity_Russia) * Singapore: [https://t.me/OasisNetworkCommunity_Singapore8](https://t.me/OasisNetworkCommunity_Singapore8) * Spanish Speaking Countries: [https://t.me/OasisNetworkCommunity_Spanish](https://t.me/OasisNetworkCommunity_Spanish) * Sri 
Lanka: [https://t.me/OasisNetworkCommunity_SriLanka](https://t.me/OasisNetworkCommunity_SriLanka) * Sweden: [https://t.me/OasisNetworkCommunity_Sweden8](https://t.me/OasisNetworkCommunity_Sweden8) * Switzerland: [https://t.me/OasisNetworkCommunitySwitzerland](https://t.me/OasisNetworkCommunitySwitzerland) * Turkey: [https://t.me/OasisNetworkCommunity_Turkey](https://t.me/OasisNetworkCommunity_Turkey) * Ukraine: [https://t.me/OasisNetworkCommunity_Ukraine](https://t.me/OasisNetworkCommunity_Ukraine) * Uzbekistan: [https://t.me/OasisNetworkCommunity_Uzbekistan](https://t.me/OasisNetworkCommunity_Uzbekistan) * Vietnam: [https://t.me/OasisNetworkCommunity_Vietnam](https://t.me/OasisNetworkCommunity_Vietnam) --- ## Delegation Policy The Oasis Protocol Foundation (OPF) delegates ROSE tokens to node operators who run Oasis Consensus and ParaTime nodes on Mainnet and/or Testnet. Delegations are based on node reliability, performance, community engagement, and overall contributions to the network’s growth and stability. ## Requirements for Receiving Delegations To be eligible for a delegation, you have to [join the network], follow the [Code of Conduct], and meet [performance requirements]. For guidance on how to increase your delegation over time, please refer to the instructions at the bottom of this page. ### Join the Network 1. **Join the Community Channels** - Join the `#node-operators` channel on the [Oasis Discord server] - Or join the Telegram group: [@oasisnodeoperators] 2. **Create Your Entity Keys** - Generate your **Testnet** and **Mainnet** [entity keys] 3. **Set Up Your Nodes** - [Configure and start] your Testnet Consensus and ParaTime nodes - [Configure and start] your Mainnet Consensus and ParaTime nodes 4. **Submit Your Metadata** - [Provide metadata] for both **Mainnet** and **Testnet** 5. **Reach Out to the Node Operator Relationship Manager** - Message `@am3lody` on Discord or Telegram - Be prepared to answer any follow-up questions 6. **Stay Updated** - Follow network announcements: - 📢 [Oasis Discord server]: [`#mainnet-announcements`], [`#testnet-announcements`], [`#node-operators`] - 📢 Telegram: [Oasis Mainnet Announcements], [Oasis Testnet Announcements] and [Oasis Node Operators Group] 7. 
**Engage with the Community** - Actively participate on the Oasis Network Community on Discord or Telegram - Follow the Code of Conduct and meet Performance Requirements - Help others and answer questions from fellow node operators and community members [join the network]: #join-the-network [Code of Conduct]: #code-of-conduct [performance requirements]: #performance-requirements [Oasis Discord server]: https://oasis.io/discord [`#mainnet-announcements`]: https://discord.com/channels/748635004384313474/960599344745185330 [`#testnet-announcements`]: https://discord.com/channels/748635004384313474/967039075527827496 [`#node-operators`]: https://discord.com/channels/748635004384313474/748644177394532463 [@oasisnodeoperators]: https://t.me/oasisnodeoperators [entity keys]: ../node/run-your-node/validator-node#initialize-entity [Configure and start]: ../node/run-your-node/validator-node#configuration [Provide metadata]: ../node/run-your-node/validator-node#oasis-metadata-registry [Oasis Mainnet Announcements]: https://t.me/oasisMAINNETannouncements [Oasis Testnet Announcements]: https://t.me/oasisTESTNETannouncements [Oasis Node Operators Group]: https://t.me/oasisnodeoperators ### Code of Conduct - **Set Appropriate Commission Rates** - Ensure your commission rate and commission rate bounds do not exceed *20%* - Keep your commission rate within ±10% of the weighted median commission rate on the network - **Keep Metadata Updated** - Ensure your entity’s website, social media, and/or contact information in the [Oasis Metadata Registry] is accurate and current - **Respect Branding Guidelines** - Do not include the word "Oasis" in your entity name - Do not use Oasis Foundation logos or branding in your entity’s logo - **Maintain Integrity** - Avoid any dishonest, fraudulent, or malicious behavior - **Participate Actively in Governance and Network Operations** - Participate in on-chain governance and network upgrade votes - Be available and ready to collaborate with other node operators during unplanned network upgrades or events [Oasis Metadata Registry]: https://github.com/oasisprotocol/metadata-registry ### Performance Requirements - **Maintain Active Validator Status** Remain in good standing as an active validator on both Testnet and Mainnet. - **Achieve High Uptime Across All Networks** Ensure consistently high uptime for your Consensus, Sapphire, and Cipher nodes on both Testnet and Mainnet. - **Timely Upgrades for Planned Network Events** Apply all planned network upgrades within 1 hour of their scheduled release. - **Rapid Response to Unplanned Upgrades** Complete any unplanned or emergency upgrades within 24 hours of announcement. If you run into a complex technical issue that prevents your node from meeting the above Performance Requirements, please reach out to the Oasis team as soon as possible. ## How to Increase Delegations? - **Operate Core Infrastructure Across Networks** Run and maintain high-uptime nodes for Consensus, Sapphire, and Cipher on both Testnet and Mainnet. - **Be Ready for Network Upgrades** Stay prepared for both planned and unplanned upgrades. Monitor relevant communication channels and respond quickly when action is required. - **Participate in On-Chain Governance** Actively vote on governance proposals to help shape the network’s direction. - **Communicate Downtime Transparently** If you experience downtime or technical issues, report them promptly to the Oasis Protocol Foundation (OPF) — contact `@am3lody` on Discord or Telegram. 
- **Maintain Credible Stakeholding**
  Maintain a meaningful level of self-delegation and/or community delegations.
- **Engage in Strategic Discussions**
  Join and contribute to community discussions about the network’s long-term roadmap and strategic initiatives.
- **Build and Support the Ecosystem**
  Develop or contribute to tools, services, or dApps that provide value to developers, node operators, and the broader Oasis community.
- **Contribute Code to Oasis Projects**
  Actively contribute to open-source codebases or support the development of Oasis-related repositories and infrastructure.
- **Identify and Report Issues**
  Proactively discover and report bugs or issues in Oasis network protocols and/or their implementations.
- **Demonstrate Reliable Node Operations History**
  Demonstrate a strong track record of reliably operating nodes on other major blockchain networks.

## Disclaimer

Delegations from the Oasis Foundation are discretionary and may be adjusted or revoked at any time, with or without prior notice.

---

## Network Governance

If you have a general question on how to use and deploy our software, please read our [Run a Node](../node/README.mdx) section or join our [community Discord](README.md).

All community members are welcome and encouraged to commit code, documentation and enhancement proposals to the platform. Contribution guidelines can be found [here](oasis-core.md).

### Governance Model Overview

The Oasis Protocol Foundation proposes a representative democracy governance model based on a combination of off-chain and on-chain processes for the continued development of the Oasis Network. The Oasis Protocol Foundation will be tasked with guiding the long-term development of the platform and coordinating community development and network operations, with input collected from community members and changes to the network voted on by node operators, with voting power proportional to staked and delegated tokens.

We propose this model because we think it will provide a balanced voice to all engaged community members -- from developers of all sizes, to node owners, to token holders -- while at the same time still facilitating the swift deployment of network updates, new features, and critical bug fixes.

In order for the community to balance distributed ownership and participation with speed and quality of platform development, we propose a hybrid model of off- and on-chain mechanisms, organized around the following key components:

1. Minor Feature Requests
2. Major Feature Requests
3. Bug Fixes

### Decision Making Process

Moving forward, our proposed process for reviewing and approving major protocol updates is:

* **Proposals.** Proposals for features and roadmap updates can come from anyone in the community in the form of issues ([for minor features](network-governance.md#minor-feature-requests)) or [Architectural Decision Records](../../adrs) (ADRs, [for major features](network-governance.md#major-feature-requests)).
* **Review and discussion of the proposals.** Decisions about the future of the project are made through discussion with all members of the community, from the newest user to the most experienced. All non-sensitive project management discussion takes place in the Oasis Protocol GitHub via issues ([for minor features](network-governance.md#minor-feature-requests)) and ADRs ([for major features](network-governance.md#major-feature-requests)).
* **Decision making process.** In order to ensure that the project is not bogged down by endless discussion and continual voting, the project operates a policy of lazy consensus. This allows the majority of decisions to be made without resorting to a formal vote. In general, as long as nobody explicitly opposes a proposal or patch, it is recognised as having the support of the community. For lazy consensus to be effective, it is necessary to allow at least 72 hours before assuming that there are no objections to the proposal. This requirement ensures that everyone is given enough time to read, digest and respond to the proposal. In case consensus is not reached through discussion, the [project committers](https://github.com/oasisprotocol/oasis-core/blob/master/GOVERNANCE.md#committers) may vote to either accept the proposal or reject it. Votes are cast using comments in the proposal pull request. The proposal is accepted by a simple majority vote. * **Final vote for approval.** Once built, the community votes to approve each upgrade and the corresponding features that are included in the proposal. This voting process may initially be done off-chain but will eventually become an on-chain process. Entities holding stake will vote to approve changes, with each entity's voting power being proportional to their share of tokens staked relative to the total tokens staked. * **Upgrade.** Node operators autonomously upgrade their system to run the new version of the software. ### Minor Feature Requests To request new functionality, there are two primary approaches that will be most effective at receiving input and making progress. If the feature is small - a change to a single piece of functionality, or an addition that can be expressed clearly and succinctly in a few sentences, then the most appropriate place to [propose it is as a new feature request](https://github.com/oasisprotocol/oasis-core/issues/new?template=feature_request.md) in the Oasis Core repository. ### Major Feature Requests If the feature is more complicated, involves protocol changes, or has potential safety or performance implications, then consider [proposing an Architectural Decision Record (ADR)](../../adrs) and submit it as a pull request to the Oasis Core repository. This will allow a structured review and commenting of the proposed changes. You should aim to get the ADR accepted and merged before starting on implementation. Please keep in mind that the project's committers still have the final word on what is accepted into the project. We recommend that major protocol updates including a need to hard fork, roadmap and feature planning be conducted with recommendations from the Oasis Protocol Foundation and its technical advisory committee. ### Urgent Bug Fixes Urgent bug fixes will primarily be coordinated off-chain to optimize for speed in addressing any issues that are critical to the immediate health of the network. The Oasis Network community as a whole is collectively responsible for identifying and addressing bugs. As bugs are identified, the Oasis Protocol Foundation can serve as a line of first defense to triage these bugs and coordinate security patches for quick release. Bugs are a reality for any software project. We can't fix what we don't know about! 
If you believe a bug report presents a security risk, please follow [responsible disclosure](https://en.wikipedia.org/wiki/Responsible_disclosure) and report it by following the [security disclosure information](https://oasis.net/security) instead of filing a public issue or posting it to a public forum. We will get back to you promptly. Otherwise, please, first search between [existing issues in our repository](https://github.com/oasisprotocol/oasis-core/issues) and if the issue is not reported yet, [file a new one](https://github.com/oasisprotocol/oasis-core/issues/new?template=bug_report.md). ### Contributing to the Network If you are interested in contributing to the Oasis Network's codebase or documentation, please [review our contribution guidelines here.](oasis-core.md). --- ## Develop Oasis Core This document outlines our guidelines for contributing to the Oasis Network's codebase and documentation. If you are interested in learning more about the Oasis Network's governance model, including the processes for submitting feature requests and bug fixes, [please see our governance overview here](network-governance.md). If you wish to contribute either code, documentation or larger enhancement proposals, feel free to read our [Oasis Core Contributing Guidelines](https://github.com/oasisprotocol/oasis-core/blob/master/CONTRIBUTING.md). --- ## ParaTime Node This guide provides an overview of the requirements to become a compute node for a ParaTime connected to the Oasis Network. ## About Oasis Network The Oasis Network has two main components, the consensus layer and the ParaTime Layer. 1. The **consensus layer** is a scalable, high-throughput, secure, proof-of-stake consensus run by a decentralized set of validator nodes. 2. The **ParaTime layer** hosts many parallel runtimes (ParaTimes), each representing a replicated compute environment with shared state. [Image: Oasis architectural design including ParaTime and consensus layers] ## Operating ParaTimes Operating a ParaTime requires the participation of node operators who contribute nodes to the committee in exchange for rewards. ParaTimes can be operated by anyone, and have their own reward system, participation requirements, and structure. As a node operator you can participate in any number of ParaTimes. While there are a number of ParaTimes under development, below are a few key ParaTimes that you can get involved in today. For operational documentation on running a ParaTime, please see the section on [running a ParaTime node for node operators]. [running a ParaTime node for node operators]: ../../node/run-your-node/paratime-node.mdx ### Sapphire ParaTime A confidential EVM-compatible Oasis Foundation developed ParaTime that enables the use of EVM smart contracts on the Oasis network. ### Overview * **Leading Developer:** [Oasis], with contributions from community developers * **Status:** Deployed on Mainnet and Testnet * **Testnet Launch Date:** July 2022 * **Mainnet Launch Date:** Dec 2022 * **Discord Channel:** [#node-operators][connect-with-us] * **Requires SGX:** Yes * **Parameters:** * [Mainnet](../../node/network/mainnet.md#sapphire) * [Testnet](../../node/network/testnet.md#sapphire) ### Features * Fully decentralized with node operators distributed across the world. * Oasis **ROSE** tokens are the native token used in the ParaTime for gas fees. * Support for EVM smart contracts. * Support for confidential compute. 
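As a quick illustration of the EVM compatibility listed in the Sapphire features above, here is a minimal TypeScript sketch that connects to Sapphire with ethers.js and reads an account's ROSE balance. The RPC URL and chain ID used here are assumptions about the public Mainnet endpoint; confirm them against the Mainnet/Testnet parameter pages linked above before relying on them.

```ts
import { JsonRpcProvider, formatEther } from "ethers";

// Assumed public Sapphire Mainnet endpoint and chain ID (23294 = 0x5afe);
// verify against the network parameters pages before relying on them.
const SAPPHIRE_RPC = "https://sapphire.oasis.io";
const SAPPHIRE_CHAIN_ID = 23294;

async function main() {
  const provider = new JsonRpcProvider(SAPPHIRE_RPC, SAPPHIRE_CHAIN_ID);
  // Any 0x address works here; replace it with one you care about.
  const balance = await provider.getBalance(
    "0x0000000000000000000000000000000000000000"
  );
  console.log(`Balance: ${formatEther(balance)} ROSE`);
}

main().catch(console.error);
```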
### Mainnet Requirements For your Sapphire ParaTime node to be eligible to be elected into the Sapphire committee on Mainnet, your entity needs to: - Have a validator in the **validator set**. - Have at least **5,000,000.00 ROSE staked/delegated** to it. ### Cipher ParaTime An Oasis Foundation developed ParaTime that enables WebAssembly-based confidential smart contracts. ### Overview * **Leading Developer:** [Oasis], with contributions from community developers * **Status:** Deployed on Mainnet and Testnet * **Testnet Launch Date:** June 2021 * **Mainnet Launch Date:** October 2021 * **Discord Channel:** [#node-operators][connect-with-us] * **Requires SGX:** Yes * **Parameters:** * [Mainnet](../../node/network/mainnet.md#cipher) * [Testnet](../../node/network/testnet.md#cipher) ### Features * Fully decentralized with node operators distributed across the world. * Oasis **ROSE** tokens are the native token used in the ParaTime for gas fees. * Support for WebAssembly smart contracts. * Support for confidential compute. ### Mainnet Requirements For your Cipher ParaTime node to be eligible to be elected into the Cipher committee on Mainnet, your entity needs to: - Have a validator in the **validator set**. ### Emerald ParaTime An EVM-compatible Oasis Foundation developed ParaTime that enables the use of EVM smart contracts on the Oasis network. ### Overview * **Leading Developer:** [Oasis], with contributions from community developers * **Status:** Deployed on Mainnet and Testnet * **Testnet Launch Date:** October 2021 * **Mainnet Launch Date:** November 2021 * **Discord Channel:** [#node-operators][connect-with-us] * **Requires SGX:** No * **Parameters:** * [Mainnet](../../node/network/mainnet.md#emerald) * [Testnet](../../node/network/testnet.md#emerald) ### Features * Fully decentralized with node operators distributed across the world. * Oasis **ROSE** tokens are the native token used in the ParaTime for gas fees. ### Mainnet Requirements For your Emerald ParaTime node to be eligible to be elected into the Emerald committee on Mainnet, your entity needs to: - Have a validator in the **validator set**. - Have at least **5,000,000.00 ROSE staked/delegated** to it. [Oasis]: http://oasis.net [connect-with-us]: ../README.md --- ## Consensus Validator Node This guide provides an overview of the technical setup and stake requirements to become a validator on the consensus layer of the Oasis Network. ## About Oasis Network [Oasis Network](../../general/oasis-network/README.mdx)'s consensus Layer is a decentralised set of validator nodes that maintain a proof-of-stake blockchain. Hence, it needs a set of distributed node operators that run different nodes (including validator nodes). ## Technical setup Make sure your system meets the [Hardware](../../node/run-your-node/prerequisites/hardware-recommendations.md) prerequisites and has [Oasis Node](../../node/run-your-node/prerequisites/oasis-node.md) installed. Then proceed by following the [Run a Validator Node](../../node/run-your-node/validator-node.mdx) guide to: * Create your entity. * Initialize and configure your node. * Put enough stake in your escrow account. * Register your entity on the network. ## Stake requirements To become a validator on the Oasis Network, you need to have enough tokens staked in your escrow account. For more information about obtaining information on your entity's account, see the [Account Get Info](../../build/tools/cli/account.md#show) doc. 
Currently, you should have: * 100 ROSE staked for your entity's registration since that is the [current entity's staking threshold](../../node/reference/genesis-doc.md#staking-thresholds). * 100 ROSE staked for your validator node's registration since that is the [current validator node's staking threshold](../../node/reference/genesis-doc.md#staking-thresholds). * Enough ROSE staked to be in the top 120 entities (by stake) so your validator will be elected into the consensus committee. The size of the consensus committee (i.e. the validator set) is configured by the [**max_validators** consensus parameter](../../node/reference/genesis-doc.md#consensus). To determine if you are eligible to receive a delegation from the Oasis Protocol Foundation, see the [Delegation Policy](../delegation-policy.md) document. --- ## Token Delivery & KYC Guide If you're visiting this page, you may have recently earned ROSE tokens via the Community Cup program, the Ambassador Rewards program, or a recent hackathon. Congratulations! Let's review the process for receiving your ROSE tokens from the Oasis Foundation: * **Step 1: Wait to be contacted regarding KYC by an Oasis Foundation core team member.** We will reach out to you via email to the email address we have on file for you for Oasis-related activities. This email will contain information about how to complete the KYC process. Please allow our team approximately 1 to 3 weeks after the day you have won or earned your ROSE tokens before reaching out to us with any questions. If 3 weeks have passed since the day you won or earned your ROSE tokens, you are welcome to reach out to us via our [token delivery question form](https://airtable.com/shrGmpohTNnytBpQU). Please be sure to use the same email address for all Oasis-related activities. For example, if you registered for the Oasis Ambassador Rewards program with a specific email address, use that same email address when you complete the KYC process and when you contact the Oasis team if you have any questions. For the Ambassador Rewards program, reward submissions will be tallied up after each quarter (every 3 months) and will require an additional 3 weeks to update the leaderboard and apply any bonuses. Rewards will be sent out on the third Friday following the end of a given quarter. If you have rewards outstanding past any given quarterly distribution date, we will be doing weekly distributions on Fridays, but only if you have completed all of the steps described in this document. * **Step 2: Complete the KYC process.** The KYC email you receive will contain everything you need to complete the KYC process, which lets us know that you are eligible to receive ROSE tokens. In particular, you first need to request access to Synaps, which is the online KYC tool the Oasis Foundation uses. Once you get access to Synaps, you need to get verified in all 3 of the required categories. How long this step takes depends on how long you take to submit all of the required documentation, following all of the required guidelines. Assuming you submit all of the necessary documents correctly right away, you can expect this step to take around 1 week. When your documents and information have been verified, we'll reach out again via email with next steps. The Proof of Residency document required for Synaps must be dated within 3 months of the date you are submitting it, and the document must include your name and your address, matching the information you have submitted to Oasis.
* **Step 3: Submit the address of your Oasis account.** After you complete the KYC process, you will receive a follow-up email from the Oasis Foundation with information on how to submit your Oasis account address via a unique code that connects your wallet to your KYC data. Please allow at least 1 full week to pass after you complete KYC to receive the follow-up email with instructions on how to submit your account address. * **Step 4: Receive the tokens deposited to your Oasis account.** After you have completed KYC and submitted your Oasis account address, allow 1 to 2 weeks for your ROSE to be deposited into your wallet. We will be doing weekly token distributions on Fridays, but only if you have completed all of the steps described in this document. Before reaching out to our team with questions, please make sure you have closely reviewed each of the steps described in this document and that you have correctly completed each step. Please consider only reaching out for support via our [token delivery question form](https://airtable.com/shrGmpohTNnytBpQU) if you have correctly followed all of the instructions but for some reason have not received any communications from our team. Thank you for your cooperation in helping us manage our resources so that we can more effectively support the community. --- ## Run Node Welcome! This documentation is designed to provide you with a comprehensive understanding and step-by-step guidance for becoming a node operator within the Oasis Network. Embark on your journey by setting up a node on the Testnet. The Testnet serves as an experimental sandbox, offering a safe environment for learning and experimentation, free from the risks associated with real token loss. For any queries or support related to node operation, our team and community members are readily available on [Discord] to assist you. [Discord]: https://oasis.io/discord ## Validator and ParaTime nodes [Oasis Network] consists of the consensus layer and ParaTimes. Consensus and ParaTime nodes can be operated by anyone. To run a **validator node**, make sure your system meets the [hardware] and the [system] prerequisites and has [Oasis Node] installed. Then proceed by following the [Run a Validator Node] guide to: * Create your entity. * Initialize and configure your node. * Put enough stake in your escrow account. * Register your entity on the network. To run a **ParaTime node**, make sure to first set up a validator node. Then, set up a [trusted execution environment (TEE)] if you want to run confidential ParaTimes. Afterwards, proceed to the [Run a ParaTime node] chapter. The consensus layer is a decentralised set of 120 validator nodes that form the backbone of the Oasis Network. The current validator set size is determined by governance - the network started with 80 nodes in the validator set in 2020 and has expanded to 120 nodes over the past few network upgrades. Current node operators can be seen on a block explorer such as [Oasis Scan]. Operating a ParaTime node on the Mainnet requires the participation of node operators who have a validator node in the active validator set. ParaTimes have their own reward system, participation requirements and structure. As a node operator you can participate in any number of ParaTimes.
[Oasis Network]: ../general/oasis-network/README.mdx [Oasis Scan]: https://www.oasisscan.com/validators ## DApp Developers If you are **building a service** on top of the Oasis Network, you will simply want to set up your own **[Non-validator node]** and (optionally) a **[ParaTime client node]**. This way, your service will not depend on third-party endpoints, which can be behind a traffic limiter, can go down unexpectedly, or can have some of the more CPU-intensive queries that you would like to use disabled. [hardware]: ./run-your-node/prerequisites/hardware-recommendations.md [system]: ./run-your-node/prerequisites/system-configuration.mdx [Oasis Node]: ./run-your-node/prerequisites/oasis-node.md [Run a Validator Node]: ./run-your-node/validator-node.mdx [trusted execution environment (TEE)]: ./run-your-node/prerequisites/set-up-tee.mdx [Run a ParaTime node]: ./run-your-node/paratime-node.mdx [ParaTime client node]: ./run-your-node/paratime-node.mdx [Non-validator node]: ./run-your-node/non-validator-node.mdx ## Quick Navigation --- ## gRPC proxy for your Oasis node The Oasis node API is exposed via the [gRPC protocol]. It enables communication with external applications such as the Oasis CLI, dApps running in your browser that need to perform actions on the consensus layer or a ParaTime, services for monitoring and controlling your node, and similar. Web3 gateway The Oasis gRPC API **is not related to the standardized Web3 JSON-RPC API**. For EVM-compatible dApps, configure a [Web3 gateway] instead, which is also compatible with other Ethereum tooling. The gRPC proxy may perform the following: - make gRPC available to in-browser applications via gRPC-Web, - filter out control methods such as shutting down your node, - authenticate clients, - load balance the traffic to multiple Oasis nodes. This chapter will show you how to set up a **public** gRPC endpoint using Envoy so that it exposes only a **safe subset of the [Oasis RPC services]**. The final section presents an alternative approach to communicating with your node by **securely tunnelling the Unix socket over the network**, so it can safely be used by the client, but **does not filter out any services**. Never expose the UNIX socket directly! The `oasis-node` deliberately exposes the RPC interface only via an AF_LOCAL socket called `internal.sock` located in the node's data directory. This socket should **never be directly exposed over the network** as it has no authentication and allows full control, including shutting down your node. [gRPC protocol]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/oasis-node/rpc.md [Web3 gateway]: web3.mdx [Oasis RPC services]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/oasis-node/rpc.md#services ## gRPC Proxy with Envoy Let's set up a typical public Oasis endpoint using the [Envoy HTTP proxy] with the following behavior: - whitelisted methods for checking the network status, performing queries and submitting transactions, - no control methods allowed and no queries that are CPU or I/O intensive, - lives on `grpc.example.com` with a TLS-enabled connection and certificates that you already have from Let's Encrypt, - `internal.sock` of the Oasis node is accessible at `/node/data/internal.sock`. Example hostnames This chapter uses various hostnames under the `example.com` domain. These only serve as an example and in a real deployment you should replace them with your own domain.
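Before wiring up Envoy, it can help to confirm that the node's internal socket actually exists at the path assumed in this chapter and is accessible to the user that will run the proxy (a quick sanity check; the `envoy` user name is only an example and may differ on your system):

```shell
# Check that the Oasis node's internal socket exists at the expected path.
test -S /node/data/internal.sock && echo "internal.sock found" || echo "internal.sock missing"

# Check that the (example) envoy user can read and write the socket.
sudo -u envoy test -r /node/data/internal.sock -a -w /node/data/internal.sock \
  && echo "socket accessible to envoy" || echo "adjust ownership/permissions"
```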
Envoy already has built-in support for gRPC so after installing Envoy, simply put the configuration below inside your `envoy.yaml` and then execute the proxy with `envoy -c envoy.yaml`. ```yaml title="envoy.yaml" # Envoy gRPC-web proxy configuration --- admin: address: socket_address: address: "127.0.0.1" port_value: 10000 static_resources: listeners: - name: listener_0 address: socket_address: address: "0.0.0.0" port_value: 443 filter_chains: - filters: - name: envoy.filters.network.http_connection_manager typed_config: "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager codec_type: AUTO stat_prefix: ingress_http access_log: - name: envoy.file_access_log typed_config: "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog path: /dev/stdout route_config: virtual_hosts: - name: vh_0 domains: - "*" routes: - match: safe_regex: regex: "\ /oasis-core\\.(\ Beacon/(\ ConsensusParameters|\ GetBaseEpoch|\ GetBeacon|\ GetEpoch|\ GetEpochBlock|\ GetFutureEpoch)|\ Consensus/(\ EstimateGas|\ GetBlock|\ GetBlockResults|\ GetChainContext|\ GetGenesisDocument|\ GetLastRetainedHeight|\ GetLatestHeight|\ GetLightBlock|\ GetNextBlockState|\ GetParameters|\ GetSignerNonce|\ GetStatus|\ GetTransactions|\ GetTransactionsWithResults|\ GetTransactionsWithProofs|\ GetUnconfirmedTransactions|\ MinGasPrice|\ SubmitEvidence|\ SubmitTx|\ SubmitTxWithProof|\ StateSyncGet|\ StateSyncGetPrefixes|\ StateSyncIterate|\ WatchBlocks)|\ Governance/(\ ActiveProposals|\ ConsensusParameters|\ GetEvents|\ PendingUpgrades|\ Proposal|\ Proposals|\ Votes)|\ NodeController/(\ GetStatus)|\ Registry/(\ ConsensusParameters|\ GetEntity|\ GetEvents|\ GetNode|\ GetNodeByConsensusAddress|\ GetNodeStatus|\ GetRuntime|\ GetRuntimes)|\ RootHash/(\ ConsensusParameters|\ GetEvents|\ GetGenesisBlock|\ GetLastRoundResults|\ GetLatestBlock|\ GetRuntimeState)|\ RuntimeClient/(\ CheckTx|\ GetBlock|\ GetBlockResults|\ GetEvents|\ GetGenesisBlock|\ GetLastRetainedBlock|\ GetTransactions|\ GetTransactionsWithResults|\ Query|\ SubmitTx|\ SubmitTxMeta|\ SubmitTxNoWait|\ WatchBlocks)|\ Scheduler/(\ ConsensusParameters|\ GetCommittees|\ GetValidators)|\ Staking/(\ Account|\ Allowance|\ CommonPool|\ ConsensusParameters|\ DebondingDelegationInfosFor|\ DebondingDelegationsFor|\ DebondingDelegationsTo|\ DelegationInfosFor|\ DelegationsFor|\ DelegationsTo|\ GetEvents|\ GovernanceDeposits|\ LastBlockFees|\ Threshold|\ TokenSymbol|\ TokenValueExponent|\ TotalSupply))\ " route: cluster: oasis_node_grpc timeout: "0s" max_stream_duration: grpc_timeout_header_max: "0s" - match: prefix: /oasis-core direct_response: status: 404 body: inline_string: Only some methods are allowed on this proxy. 
typed_per_filter_config: envoy.filters.http.cors: "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.CorsPolicy expose_headers: grpc-status,grpc-message,grpc-status-details-bin allow_origin_string_match: - exact: '*' allow_headers: content-type,x-grpc-web,x-user-agent max_age: '1728000' http_filters: - name: envoy.health_check typed_config: "@type": type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck pass_through_mode: false headers: - name: :path string_match: exact: /health - name: envoy.filters.http.grpc_web typed_config: "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb - name: envoy.filters.http.cors typed_config: "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors - name: envoy.filters.http.router typed_config: "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router transport_socket: name: envoy.transport_sockets.tls typed_config: "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext common_tls_context: alpn_protocols: - h2,http/1.1 tls_certificates: - certificate_chain: filename: /etc/letsencrypt/live/grpc.example.com/fullchain.pem private_key: filename: /etc/letsencrypt/live/grpc.example.com/privkey.pem clusters: - name: oasis_node_grpc connect_timeout: 0.25s load_assignment: cluster_name: cluster_0 endpoints: - lb_endpoints: - endpoint: address: pipe: path: /node/data/internal.sock typed_extension_protocol_options: envoy.extensions.upstreams.http.v3.HttpProtocolOptions: "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions explicit_http_config: http2_protocol_options: {} layered_runtime: layers: - name: static static_layer: re2: max_program_size: error_level: 1000000 ``` [Envoy HTTP proxy]: https://www.envoyproxy.io/ ## Tunnel Unix socket via SSH SSH supports forwarding a Unix socket over a secure layer. The command below will establish a secure shell to the `example.com` server and then tunnel the `internal.sock` file inside the data directory to a Unix socket inside your home folder: ```shell ssh oasis@example.com -L /home/user/oasis-node-internal.sock:/node/data/internal.sock ``` The `/home/user/oasis-node-internal.sock` can now be used to safely connect to the Oasis node **as if it was running locally without filtering any services**. For example, using the [Oasis CLI]: ```shell oasis network add-local my-oasis-node unix:/home/user/oasis-node-internal.sock ``` ```shell oasis network status --network my-oasis-node ``` Permanent SSH channel To make a tunneled Unix socket over SSH permanent, consider using [autossh]. [Oasis CLI]: https://github.com/oasisprotocol/cli/blob/master/docs/network.md#add-local [autossh]: https://www.harding.motd.ca/autossh/ ## See also --- ## Network Information(Network) --- ## Mainnet ## Network Parameters These are the current parameters for the Mainnet: * [Genesis file](https://github.com/oasisprotocol/mainnet-artifacts/releases/download/2023-11-29/genesis.json): * SHA256: `b14e45e97da0216a16c25096fd216f591e4d526aa6abac110ac23cb327b64ba1` Genesis file is signed by [network's current maintainers]. To verify its authenticity, follow the [PGP verification instructions]. 
* Genesis document's hash ([explanation](../reference/genesis-doc.md#genesis-file-vs-genesis-document)): * `bb3d748def55bdfb797a2ac53ee6ee141e54cd2ab2dc2375f4a0703a178e6e55` * Oasis seed node addresses: * `H6u9MtuoWRKn5DKSgarj/dzr2Z9BsjuRHgRAoXITOcU=@34.187.216.34:26656` * `H6u9MtuoWRKn5DKSgarj/dzr2Z9BsjuRHgRAoXITOcU=@34.187.216.34:9200` * `90em3ItdQkFy15GtWqCKHi5j7uEUmZPZIzBt7I5d6w4=@146.148.13.130:26656` * `90em3ItdQkFy15GtWqCKHi5j7uEUmZPZIzBt7I5d6w4=@146.148.13.130:9200` Feel free to use other seed nodes besides the one provided here. * [Oasis Core](https://github.com/oasisprotocol/oasis-core) version: * [25.6](https://github.com/oasisprotocol/oasis-core/releases/tag/v25.6) * [Oasis Rosetta Gateway](https://github.com/oasisprotocol/oasis-rosetta-gateway) version: * [2.7.0](https://github.com/oasisprotocol/oasis-rosetta-gateway/releases/tag/v2.7.0) The Oasis Node is part of the Oasis Core release. Do not use a newer version of Oasis Core since it likely contains changes that are incompatible with the version of Oasis Core used by other nodes. If you want to join our Testnet, see the [Testnet](../network/testnet.md) docs for the current Testnet parameters. [network's current maintainers]: https://github.com/oasisprotocol/mainnet-artifacts/blob/master/README.md#pgp-keys-of-current-maintainers [PGP verification instructions]: https://github.com/oasisprotocol/mainnet-artifacts/blob/master/README.md#verifying-genesis-file-signatures ## ParaTimes This section contains parameters for various ParaTimes known to be deployed on the Mainnet. ### Sapphire * Oasis Core version: * [25.6](https://github.com/oasisprotocol/oasis-core/releases/tag/v25.6) * Runtime identifier: * `000000000000000000000000000000000000000000000000f80306c9858e7279` * Runtime bundle version: * [1.0.0](https://github.com/oasisprotocol/sapphire-paratime/releases/tag/v1.0.0) * [1.1.2](https://github.com/oasisprotocol/sapphire-paratime/releases/tag/v1.1.2) * Oasis Web3 Gateway version: * [5.3.4](https://github.com/oasisprotocol/oasis-web3-gateway/releases/tag/v5.3.4) ### Cipher * Oasis Core version: * [25.6](https://github.com/oasisprotocol/oasis-core/releases/tag/v25.6) * Runtime identifier: * `000000000000000000000000000000000000000000000000e199119c992377cb` * Runtime bundle version: * [3.4.0](https://github.com/oasisprotocol/cipher-paratime/releases/tag/v3.4.0) * [3.5.2](https://github.com/oasisprotocol/cipher-paratime/releases/tag/v3.5.2) ### Emerald * Oasis Core version: * [25.6](https://github.com/oasisprotocol/oasis-core/releases/tag/v25.6) * Runtime identifier: * `000000000000000000000000000000000000000000000000e2eaa99fc008f87f` * Runtime bundle version (or [build your own](https://github.com/oasisprotocol/emerald-paratime/tree/v11.0.0#building)): * [11.0.0](https://github.com/oasisprotocol/emerald-paratime/releases/tag/v11.0.0) * Web3 Gateway version: * [5.3.4](https://github.com/oasisprotocol/oasis-web3-gateway/releases/tag/v5.3.4) Check the [Emerald ParaTime page](../../build/tools/other-paratimes/emerald/network#rpc-endpoints) on how to access the public Web3 endpoint. 
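When setting up a node against the Mainnet parameters above, you may also want to confirm that the downloaded genesis file matches the published SHA256 digest before using it (a minimal sketch using standard tools; the URL and digest are the ones listed at the top of this section):

```shell
# Download the Mainnet genesis file and compare its SHA256 digest with the published value.
wget https://github.com/oasisprotocol/mainnet-artifacts/releases/download/2023-11-29/genesis.json
echo "b14e45e97da0216a16c25096fd216f591e4d526aa6abac110ac23cb327b64ba1  genesis.json" | sha256sum -c -
```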
### Key Manager * Oasis Core version: * [25.6](https://github.com/oasisprotocol/oasis-core/releases/tag/v25.6) * Runtime identifier: * `4000000000000000000000000000000000000000000000008c5ea5e49b4bc9ac` * Runtime bundle version: * [0.6.0](https://github.com/oasisprotocol/keymanager-paratime/releases/tag/v0.6.0) --- ## Testnet ## Network Parameters These are the current parameters for the Testnet, a test-only network for testing out upcoming features and changes to the protocol. **The Testnet may be subject to frequent version upgrades and state resets.** Also note that while the Testnet does use actual TEEs, due to experimental software and different security parameters, **confidentiality of confidential ParaTimes on the Testnet is not guaranteed** -- all transactions and state published on the Testnet should be considered public even when stored inside confidential ParaTimes. On the Testnet, TEST tokens are in use -- if you need some to test your clients, nodes or paratimes, feel free to use our [Testnet Faucet](https://faucet.testnet.oasis.io). Note that these are test-only tokens and account balances, as any other state, may be frequently reset. This page is meant to be kept up to date with the information from the currently released Testnet. Use the information here to deploy or upgrade your node on the Testnet. * Latest Testnet version: **2023-10-12** * [Genesis file](https://github.com/oasisprotocol/testnet-artifacts/releases/download/2023-10-12/genesis.json): * SHA256: `02ce385c050b2a5c7cf0e5e34f5e4930f7804bb21efba2d1d3aa8215123aab68` * Genesis document's hash ([explanation](../reference/genesis-doc.md#genesis-file-vs-genesis-document)): * `0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76` * Oasis seed node addresses: * `HcDFrTp/MqRHtju5bCx6TIhIMd6X/0ZQ3lUG73q5898=@35.247.24.212:26656` * `HcDFrTp/MqRHtju5bCx6TIhIMd6X/0ZQ3lUG73q5898=@35.247.24.212:9200` * `kqsc8ETIgG9LCmW5HhSEUW80WIpwKhS7hRQd8FrnkJ0=@34.140.116.202:26656` * `kqsc8ETIgG9LCmW5HhSEUW80WIpwKhS7hRQd8FrnkJ0=@34.140.116.202:9200` Feel free to use other seed nodes besides the one provided here. * [Oasis Core](https://github.com/oasisprotocol/oasis-core) version: * [25.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v25.8) * [Oasis Rosetta Gateway](https://github.com/oasisprotocol/oasis-rosetta-gateway) version: * [2.7.0](https://github.com/oasisprotocol/oasis-rosetta-gateway/releases/tag/v2.7.0) The Oasis Node is part of the Oasis Core release. [handling network upgrades]: ../run-your-node/maintenance/handling-network-upgrades.md ## ParaTimes This chapter contains parameters for various ParaTimes known to be deployed on the Testnet. Similar to the Testnet, these may be subject to frequent version upgrades and/or state resets. 
### Sapphire * Oasis Core version: * [25.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v25.8) * Runtime identifier: * `000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c` * Runtime bundle version: * [1.1.2-testnet](https://github.com/oasisprotocol/sapphire-paratime/releases/tag/v1.1.2-testnet) * Web3 Gateway version: * [5.3.4](https://github.com/oasisprotocol/oasis-web3-gateway/releases/tag/v5.3.4) ### Cipher * Oasis Core version: * [25.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v25.8) * Runtime identifier: * `0000000000000000000000000000000000000000000000000000000000000000` * Runtime bundle version: * [3.5.2-testnet](https://github.com/oasisprotocol/cipher-paratime/releases/tag/v3.5.2-testnet) ### Emerald * Oasis Core version: * [25.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v25.8) * Runtime identifier: * `00000000000000000000000000000000000000000000000072c8215e60d5bca7` * Runtime bundle version (or [build your own](https://github.com/oasisprotocol/emerald-paratime/tree/v11.0.0-testnet#building)): * [11.0.0-testnet](https://github.com/oasisprotocol/emerald-paratime/releases/tag/v11.0.0-testnet) * Web3 Gateway version: * [5.3.4](https://github.com/oasisprotocol/oasis-web3-gateway/releases/tag/v5.3.4) ### Key Manager * Oasis Core version: * [25.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v25.8) * Runtime identifier: * `4000000000000000000000000000000000000000000000004a1a53dff2ae482d` * Runtime bundle version: * [0.5.0-testnet](https://github.com/oasisprotocol/keymanager-paratime/releases/tag/v0.5.0-testnet) * [0.6.0-testnet](https://github.com/oasisprotocol/keymanager-paratime/releases/tag/v0.6.0-testnet) --- ## Genesis Document(Reference) A genesis document contains the initial state of an Oasis Network, and all the necessary information for launching that particular network (e.g. [Mainnet], [Testnet]). For a more in-depth explanation of the genesis document, see the [Genesis Document](../../core/consensus/genesis.md) part of Oasis Core's developer documentation. The important thing to note is that the genesis document is used to compute the [genesis document's hash](../../core/consensus/genesis.md#genesis-documents-hash). This hash is used to verify which network a given transaction is intended for. ## Genesis File vs. Genesis Document A genesis file is a JSON file corresponding to a serialized genesis document. As such, it is more convenient for distribution and sharing. When Oasis Node loads a genesis file, it converts it to a genesis document. Up-to-date information about the current genesis file and the current genesis document's hash can be found on the Network Parameters page ([Mainnet], [Testnet]). [Mainnet]: ../network/mainnet.md [Testnet]: ../network/testnet.md ## Parameters This section explains some of the key parameters of the genesis document. The concrete parameter values in the following sections pertain to the [Mainnet]. Other Oasis networks (e.g. [Testnet]) might use different values. The token balances in a genesis document (or a genesis file) are enumerated in base units. The **`staking.token_value_exponent`** parameter defines the token value's base-10 exponent. For the Mainnet it is set to 9, which means 1 ROSE equals 10^9 (i.e. a billion) base units. ### Height, Genesis Time and Chain ID The **`height`** parameter specifies the network's initial block height. When a network is upgraded, its height is retained.
For example, for the [Cobalt upgrade](./upgrades/cobalt-upgrade.md) the height of the Mainnet state dump was bumped by 1 from 3,027,600 to 3,027,601. The **`genesis_time`** parameter is an ISO8601 UTC timestamp that specifies when the network is officially going to launch. At the time of genesis, validators are expected to come online and start participating in the consensus process for operating the network. The network starts once validators representing more than 2/3 of stake in the initial consensus committee are online. The **`chain_id`** is a human-readable version identifier for a network. It is important to note that this value alone doesn't dictate the version of an Oasis network. Rather, the hash of the whole genesis document, i.e. the [genesis document's hash](../../core/consensus/genesis.md#genesis-documents-hash), is the network's unique identifier. ### Registry Within the **`registry`** object, there are a broad range of parameters that specify the initial set of node operators and their corresponding initial node statuses. * **`registry.params.max_node_expiration`** The maximum duration (in epochs) that node registrations last. The starting value is set to 2 in order to ensure that a node is continuously online, since the node’s registration would expire each time 2 epochs pass, requiring the node to re-register. * **`registry.params.enable_runtime_governance_models`** The set of [runtime governance models] that are allowed to be used when creating/updating registrations. It is set to `{"entity": true, "runtime": true}` which means a runtime can choose between **entity governance** and **runtime-defined governance**. * **`registry.entities`** The entity registrations for initial node operators, including public key and signature information. * **`registry.runtimes`** The initial runtime registrations. Each item describes a runtime's operational parameters, including its identifier, kind, admission policy, committee scheduling, storage, governance model, etc. For a full description of the runtime descriptor see the [`Runtime` structure] documentation. * **`registry.suspended_runtimes`** The initial suspended runtime registrations. Each item describes a suspended runtime's operational parameters, including its identifier, kind, admission policy, committee scheduling, storage, governance model, etc. For a full description of the runtime descriptor see the [`Runtime` structure] documentation. * **`registry.nodes`** The node registrations for initial node operators, including public key and signature information. For a new network, the entity and node registrations are obtained via the entity package collection process (e.g. [Mainnet Network Entities]). For an upgrade to an existing network, the network's state dump tool captures the network's current entity and node registrations. [runtime governance models]: ../../core/consensus/services/registry.md#runtimes [`Runtime` structure]: https://pkg.go.dev/github.com/oasisprotocol/oasis-core/go/registry/api?tab=doc#Runtime [Mainnet Network Entities]: https://github.com/oasisprotocol/mainnet-entities ### Gas Costs The following parameters define the gas costs for various types of transactions on the network: * **`staking.params.gas_costs.add_escrow`** The cost for an add escrow (i.e. stake tokens) transaction. The value is set to 1000. * **`staking.params.gas_costs.burn`** The cost for a burn (i.e. destroy tokens) transaction. The value is set to 1000. 
* **`staking.params.gas_costs.reclaim_escrow`** The cost for a reclaim escrow transaction (i.e. unstake tokens). The value is set to 1000. * **`staking.params.gas_costs.transfer`** The cost for a transfer transaction. The value is set to 1000. * **`staking.params.gas_costs.amend_commission_schedule`** The cost for amending, or changing, a commission schedule. The value is set to 1000. * **`registry.params.gas_costs.deregister_entity`** The cost for a deregister entity transaction. The value is set to 1000. * **`registry.params.gas_costs.register_entity`** The cost for a register entity transaction. The value is set to 1000. * **`registry.params.gas_costs.register_node`** The cost for a register node transaction. The value is set to 1000. * **`registry.params.gas_costs.register_runtime`** The cost for a register ParaTime transaction. The value is set to 1000. * **`registry.params.gas_costs.runtime_epoch_maintenance`** The cost of a maintenance fee that a node that is registered for a ParaTime pays each epoch. The value is set to 1000. * **`registry.params.gas_costs.unfreeze_node`** The cost for unfreeze node (i.e. after the node is slashed and frozen) transaction. The current value is 1000. * **`keymanager.params.gas_costs.publish_ephemeral_secret`** The cost for keymanager ephemeral secret publication transaction. The value is set to 1000. * **`keymanager.params.gas_costs.publish_master_secret`** The cost for keymanager master secret publication transaction. The value is set to 1000. * **`keymanager.params.gas_costs.update_policy`** The cost for keymanager update policy transaction. The value is set to 1000. * **`roothash.params.gas_costs.compute_commit`** The cost for a ParaTime compute commitment transaction. The value is set to 10000. In addition to the gas costs specified above, each transaction also incurs a cost proportional to its size. The **`consensus.params.gas_costs.tx_byte`** parameter specifies the additional gas cost for each byte of a transaction. The value is set to 1. For example, a staking transfer transaction of size 230 bytes would have a total gas cost of 1000 + 230. ### Root Hash The **`roothash`** object contains parameters related to the [Root Hash service] and minimal state related to runtimes. * **`roothash.params.max_runtime_messages`** The global limit on the number of [messages] that can be emitted in each round by the runtime. The value is set to 256. * **`roothash.params.max_evidence_age`** The maximum age (in the number of rounds) of submitted evidence for [compute node slashing]. The value is set to 100. [Root Hash service]: ../../core/consensus/services/roothash.md [messages]: ../../core/runtime/messages.md [compute node slashing]: ../../adrs/0005-runtime-compute-slashing.md ### Staking The **`staking`** object contains parameters controlling the Staking service and all state related to accounts, delegations, special pools of tokens... #### Token Supply & Ledger The following parameters specify the total token supply, total token pool reserved for staking rewards, and account balances across the network at the time of genesis: * **`staking.total_supply`** The total token supply (in base units) for the network. This is fixed at 10 billion ROSE tokens (the value is set to 10,000,000,000,000,000,000 base units). * **`staking.common_pool`** The tokens (in base units) reserved for staking rewards to be paid out over time. * **`staking.governance_deposits`** The tokens (in base units) collected from governance proposal deposits. 
* **`staking.ledger`** The staking ledger, encoding all accounts and corresponding account balances on the network at the time of genesis, including accounts for initial operators, backers, custodial wallets, etc. * **`staking.delegations`** The encoding of the initial delegations at the time of genesis. **Interpreting your account balance in the `staking.ledger`** Your account's **`general.balance`** includes all of your tokens that have not been staked or delegated. Within your account's `escrow` field, the `active.balance` holds the total amount of tokens that are (actively) delegated _to_ you. #### Delegations The following parameters control how delegations behave on the network: * **`staking.params.debonding_interval`** The period of time (in epochs) that must pass before staked or delegated tokens that are requested to be withdrawn are returned to the account's general balance. The value is set to 336 epochs, which is expected to be approximately 14 days. * **`staking.params.min_delegation`** The minimum amount of tokens one can delegate. The value is set to 100,000,000,000 base units, or 100 ROSE tokens. * **`staking.params.allow_escrow_messages`** Indicates whether to enable support for `AddEscrow` and `ReclaimEscrow` [runtime messages](../../core/runtime/messages.md). The value is set to `true`. #### Node & ParaTime Token Thresholds There are several **`staking.params.thresholds`** parameters that specify the minimum number of tokens that need to be staked in order for a particular entity or a particular type of node to participate in the network. The **`entity`**, **`node-compute`**, **`node-keymanager`**, and **`node-validator`** parameters are each set to 100,000,000,000 base units, indicating that you need to stake at least 100 ROSE tokens in order to have your entity or any of the specified nodes go live on the network. The **`staking.params.thresholds`** parameters also specify the minimum thresholds for registering new ParaTimes. The **`runtime-compute`** and **`runtime-keymanager`** parameters are set to 50,000,000,000,000 base units, indicating that you need to stake at least 50,000 ROSE tokens in order to register a compute/key manager ParaTime. #### Rewards The following parameters control the staking rewards on the network: * **`staking.params.reward_schedule`** The staking reward schedule, indicating how the staking reward rate changes over time, defined on an epoch-by-epoch basis. The reward schedule uses a tapering formula with higher rewards being paid out at earlier epochs and then gradually decreasing over time. For more details, see the [Staking Incentives] doc. * **`staking.params.signing_reward_threshold_numerator`** and **`staking.params.signing_reward_threshold_denominator`** These parameters define the proportion of blocks that a validator must sign during each epoch to receive staking rewards. The fraction is set to 3/4, meaning that a validator must sign at least 75% of the blocks in an epoch in order to receive staking rewards for that period. * **`staking.params.fee_split_weight_propose`** The block proposer's share of transaction fees. The value is set to 2. * **`staking.params.fee_split_weight_next_propose`** The next block proposer's share of transaction fees. The value is set to 1. * **`staking.params.fee_split_weight_vote`** The block signer’s/voter’s share of transaction fees. The value is set to 1.
* **`staking.params.reward_factor_epoch_signed`** The factor for rewards distributed to validators who signed at least the threshold number of blocks in a given epoch. The value is set to 1. * **`staking.params.reward_factor_block_proposed`** The factor for rewards earned for block proposal. The value is set to 0, indicating validators get no extra staking rewards for proposing a block. [Staking Incentives]: ../../general/oasis-network/token-metrics-and-distribution.mdx#staking-incentives #### Commission Schedule The following parameters control how commission rates and bounds can be defined and changed: * **`staking.params.commission_schedule_rules.rate_change_interval`** The time interval (in epochs) at which rate changes can be specified in a commission schedule. The value is set to 1, indicating that the commission rate can change on every epoch. * **`staking.params.commission_schedule_rules.rate_bound_lead`** The minimum lead time (in epochs) needed for changes to commission rate bounds. This is set to protect the delegators from unexpected changes in an operator's commission rates. The value is set to 336, which is expected to be approximately 14 days. * **`staking.params.commission_schedule_rules.max_rate_steps`** The maximum number of rate step changes in a commission schedule. The value is set to 10, indicating that the commission schedule can have a maximum of 10 rate steps. * **`staking.params.commission_schedule_rules.max_bound_steps`** The maximum number of commission rate bound step changes in the commission schedule. The value is set to 10, indicating that the commission schedule can have a maximum of 10 rate bound steps. #### Slashing These parameters specify key values for the network's slashing mechanism: * **`staking.params.slashing.consensus-equivocation.amount`** The amount of tokens to slash for equivocation (i.e. double signing). The value is set to 100,000,000,000 base units, or 100 ROSE tokens. * **`staking.params.slashing.consensus-equivocation.freeze_interval`** The duration (in epochs) for which a node that has been slashed for equivocation is “frozen,” or barred from participating in the network's consensus committee. The value of 18446744073709551615 (the maximum value for a 64-bit unsigned integer) means that any node slashed for equivocation is, in effect, permanently banned from the network. * **`staking.params.slashing.consensus-light-client-attack.amount`** The amount of tokens to slash for a light client attack. The value is set to 100,000,000,000 base units, or 100 ROSE tokens. * **`staking.params.slashing.consensus-light-client-attack.freeze_interval`** The duration (in epochs) for which a node that has been slashed for a light client attack is “frozen,” or barred from participating in the network's consensus committee. The value of 18446744073709551615 (the maximum value for a 64-bit unsigned integer) means that any node slashed for a light client attack is, in effect, permanently banned from the network. ### Committee Scheduler The **`scheduler`** object contains parameters controlling how various committees (validator, compute, key manager) are periodically [scheduled]. * **`scheduler.params.min_validators`** The minimum size for the consensus committee. The value is set to 15 validators. * **`scheduler.params.max_validators`** The maximum size for the consensus committee. The value is set to 100 validators. * **`scheduler.params.max_validators_per_entity`** The maximum number of nodes from a given entity that can be in the consensus committee at any time.
The value is set to 1. [scheduled]: ../../core/consensus/services/roothash.md ### Random Beacon The **`beacon`** object contains parameters controlling the network's random beacon. * **`beacon.base`** Network's starting epoch. When a network is upgraded, its epoch is retained. For example, for the [Cobalt upgrade](./upgrades/cobalt-upgrade.md) the epoch of the Mainnet state dump was bumped by 1 from 5,046 to 5,047. * **`beacon.params.backend`** The random beacon backend to use. The value is set to "vrf" indicating that the beacon implementing a VRF (verifiable random function) should be used. #### VRF Beacon These parameters control the behavior of the VRF random beacon: * **`beacon.params.vrf_parameters.alpha_hq_threshold`** The minimum number of proofs that must be received for the next input to be considered high quality. If the VRF input is not high quality, runtimes will be disabled for the next epoch. The value is set to 20. * **`beacon.params.vrf_parameters.interval`** The epoch interval in blocks. The value is set to 600 blocks, which is expected to be approximately 1 hour. * **`beacon.params.vrf_parameters.proof_delay`** The wait period in blocks after an epoch transition that nodes must wait before attempting to submit a VRF proof for the next epoch's elections. The value is set to 400 blocks. ### **Governance** The **`governance`** object contains parameters controlling the network's [on-chain governance](../../core/consensus/services/governance.md) introduced in the [Cobalt upgrade](./upgrades/cobalt-upgrade.md): * **`governance.params.min_proposal_deposit`** The amount of tokens (in base units) that are deposited when creating a new proposal. The value is set to 10,000,000,000,000 base units, or 10,000 ROSE tokens. * **`governance.params.voting_period`** The number of epochs after which the voting for a proposal is closed and the votes are tallied. The value is set to 168, which is expected to be approximately 7 days. * **`governance.params.stake_threshold`** The percentage of `VoteYes` votes in terms of total voting power for a governance proposal to pass. The value is set to 68 (i.e. 68%). * **`governance.params.upgrade_min_epoch_diff`** The minimum number of epochs between the current epoch and the proposed upgrade epoch for the upgrade proposal to be valid. Additionally, it specifies the minimum number of epochs between two consecutive pending upgrades. The value is set to 336, which is expected to be approximately 14 days. * **`governance.params.upgrade_cancel_min_epoch_diff`** The minimum number of epochs between the current epoch and the proposed upgrade epoch for the upgrade cancellation proposal to be valid. The value is set to 192, which is expected to be approximately 8 days. ### Consensus The following parameters are used to define key values for the network's consensus protocol: * **`consensus.backend`** Defines the backend consensus protocol. The value is set to "tendermint" indicating that the Tendermint BFT protocol is used. * **`consensus.params.timeout_commit`** Specifies how long to wait (in nanoseconds) after committing a block before starting a new block height (this affects the block interval). The value is set to 5,000,000,000 nanoseconds, or 5 seconds. * **`consensus.params.max_tx_size`** Maximum size (in bytes) for consensus-layer transactions. The value is set to 32,768 bytes, or 32 kB. * **`consensus.params.max_block_size`** Maximum block size (in bytes). The value is set to 22,020,096 bytes, or 22 MB. 
* **`consensus.params.max_block_gas`** Maximum block gas. The value is set to 0, which specifies an unlimited amount of gas. * **`consensus.params.max_evidence_size`** Maximum evidence size (in bytes). The value is set to 51,200 bytes, or 50 kB. * **`consensus.params.public_key_blacklist`** A list of the public keys that cannot be used on the network. Currently, there are no blacklisted public keys. * **`consensus.params.state_checkpoint_interval`** The interval (in blocks) on which state checkpoints should be taken. The value is set to 10000. * **`consensus.params.state_checkpoint_num_kept`** The number of past state checkpoints to keep. The value is set to 2. * **`consensus.params.state_checkpoint_chunk_size`** The chunk size (in bytes) that should be used when creating state checkpoints. The value is set to 8,388,608 bytes, or 8 MB. --- ## Mainnet Upgrade Log For each upgrade of the Oasis Network, we are tracking important changes for node operators' deployments. They are enumerated and explained in this document. ## 2023-11-29 (10:02 UTC) - Eden Upgrade For a detailed view of the Eden Upgrade process, please refer to the [Eden Upgrade] page. The Oasis Core 23.0.x binary in our published releases is built only for Ubuntu 22.04 (GLIBC>=2.32). You'll have to build it yourself if you're using prior Ubuntu versions (or other distributions using older system libraries). [Eden Upgrade]: ../upgrades/eden-upgrade.md ### Configuration Changes The node configuration has been refactored so that everything is now configured via a YAML configuration file and **configuring via command-line options is no longer supported**. Some configuration options have changed and so the configuration file needs to be updated. To make this step easier, a command-line tool has been provided that will perform most of the changes automatically. You can run it with: ``` oasis-node config migrate --in config.yml --out new-config.yml ``` The migration subcommand logs the various changes it makes and warns you if a config option is no longer supported, etc. At the end, any unknown sections of the input config file are printed to the terminal to give you a chance to review them and make manual changes if required. Note that the migration subcommand does not preserve comments and order of sections from the input YAML config file. You should always carefully read the output of this command, as well as compare the generated config file with the original before using it. After you are satisfied with the new configuration file, replace the old file with the new one as follows: ``` mv new-config.yml config.yml ``` The configuration format for seed nodes has changed and it now requires the node's P2P public key to be used. In case your old configuration file contains known Mainnet seed nodes, this transformation is performed automatically. However, if it contains unknown seed nodes then the conversion did not happen automatically and you may need to obtain the seed node's P2P public key. For Mainnet you can use the following addresses: * `TBD` * `TBD` Please be aware that every seed node should be configured to listen on two distinct ports. One is dedicated to peer discovery within the CometBFT P2P network, while the other is used to bootstrap the Oasis P2P network. 
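After running the migration command shown above, one simple way to review the resulting differences before replacing the old configuration file is to diff the two files (a minimal sketch using standard tools; file names match the example above):

```shell
# Compare the migrated configuration with the original before swapping it in.
diff -u config.yml new-config.yml | less
```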
[Change Log]: https://github.com/oasisprotocol/oasis-core/blob/stable/23.0.x/CHANGELOG.md ### Data Directory Changes The subdirectory (located inside the node's data directory) used to store consensus-related data, previously called `tendermint` (after the consensus layer protocol backend), has been renamed to `consensus` in Oasis Core 23.0.x. If any of your scripts rely on specific directory names, please make sure to update them to reflect the changed directory name. ### State Changes The following parts of the genesis document will be updated: For a more detailed explanation of the parameters below, see the [Genesis Document] docs. All state changes will be done automatically with the migration command provided by the new version of `oasis-node`. It can be used as follows to derive the same genesis file from an existing state dump at the correct height (assuming there is a `genesis.json` present in the current working directory): ``` oasis-node genesis migrate --genesis.new_chain_id oasis-4 ``` #### General * **`chain_id`** will be set to `oasis-4`. * **`halt_epoch`** will be removed as it is no longer used. #### Registry * **`registry.runtimes[].txn_scheduler.propose_batch_timeout`** specifies how long to wait before accepting a proposal from the next backup scheduler. It will be set to `5000000000` (5 seconds). Previously the value was represented in the number of consensus layer blocks. * **`registry.params.gas_costs.prove_freshness`** specifies the cost of the freshness proof transaction. It will be set to `1000`. * **`registry.params.gas_costs.update_keymanager`** specifies the cost of the keymanager policy update transaction. It will be removed as the parameter has been moved under `keymanager.params.gas_costs.update_policy`. * **`registry.params.tee_features`** specifies various TEE features supported by the consensus layer registry service. These will be set to the following values to activate the new features: ```json "tee_features": { "sgx": { "pcs": true, "signed_attestations": true, "max_attestation_age": 1200 }, "freshness_proofs": true } ``` * **`registry.params.max_runtime_deployments`** specifies the maximum number of runtime deployments that can be specified in the runtime descriptor. It will be set to `5`. #### Root Hash * **`roothash.params.max_past_roots_stored`** specifies the maximum number of past runtime state roots that are stored in consensus state for each runtime. It will be set to `1200`. #### Staking * **`staking.params.commission_schedule_rules.min_commission_rate`** specifies the minimum commission rate. It will be set to `0` to maintain the existing behavior. * **`staking.params.thresholds.node-observer`** specifies the stake threshold for registering an observer node. It will be set to `100000000000` base units (or `100` tokens), same as for existing compute nodes. #### Key Manager * **`keymanager.params.gas_costs`** specifies the costs of key manager transactions. These will be set to the following values: ```json "gas_costs": { "publish_ephemeral_secret": 1000, "publish_master_secret": 1000, "update_policy": 1000 } ``` #### Random Beacon * **`beacon.base`** is the network's starting epoch. It will be set to the epoch of Mainnet's state dump + 1, `28017`. #### Governance * **`governance.params.enable_change_parameters_proposal`** specifies whether parameter change governance proposals are allowed. It will be set to `true`. #### Consensus * **`consensus.params.max_block_size`** specifies the maximum block size in the consensus layer.
It will be set to `1048576` (1 MiB). #### Other * **`extra_data`** will be set back to the value in the [Mainnet genesis file] to include the Oasis Network's genesis quote: _”_[_Quis custodiet ipsos custodes?_][mainnet-quote]_” \[submitted by Oasis Community Member Daniyar Borangaziyev]:_ ``` "extra_data": { "quote": "UXVpcyBjdXN0b2RpZXQgaXBzb3MgY3VzdG9kZXM/IFtzdWJtaXR0ZWQgYnkgT2FzaXMgQ29tbXVuaXR5IE1lbWJlciBEYW5peWFyIEJvcmFuZ2F6aXlldl0=" } ``` [Genesis Document]: ../../reference/genesis-doc.md#parameters [Mainnet genesis file]: https://github.com/oasisprotocol/mainnet-artifacts/releases/tag/2020-11-18 [mainnet-quote]: https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%3F ## 2022-04-11 (8:30 UTC) - Damask Upgrade * **Upgrade height:** upgrade is scheduled to happen at epoch **13402**. We expect the Mainnet network to reach this epoch at around 2022-04-11 8:30 UTC. ### Instructions - Voting **Voting for the upgrade proposal will end at epoch 13152. We expect the Mainnet network to reach this epoch at around 2022-03-31 21:00 UTC**. At this time only entities which have active validator nodes scheduled in the validator set are eligible to vote for governance proposals. At least **75%** of the total **voting power** of the validator set needs to cast a vote on the upgrade proposal for the result to be valid. At least **90%** of the vote needs to be **yes** votes for a proposal to be accepted. The Oasis Protocol Foundation has submitted an [upgrade governance proposal] with the following contents: ```yaml { "v": 1, "handler": "mainnet-upgrade-2022-04-11", "target": { "consensus_protocol": { "major": 5 }, "runtime_host_protocol": { "major": 5 }, "runtime_committee_protocol": { "major": 4 } }, "epoch": 13402 } ``` To view the proposal yourself, you can run the following command on your online Oasis Node: ```bash oasis-node governance list_proposals -a $ADDR ``` where `$ADDR` represents the path to the internal Oasis Node UNIX socket prefixed with `unix:` (e.g.`unix:/serverdir/node/internal.sock`). The output should look like: ```yaml [ { "id": 2, "submitter": "oasis1qpydpeyjrneq20kh2jz2809lew6d9p64yymutlee", "state": "active", "deposit": "10000000000000", "content": { "upgrade": { "v": 1, "handler": "mainnet-upgrade-2022-04-11", "target": { "consensus_protocol": { "major": 5 }, "runtime_host_protocol": { "major": 5 }, "runtime_committee_protocol": { "major": 4 } }, "epoch": 13402 } }, "created_at": 12984, "closes_at": 13152 } ] ``` Obtain [your entity's nonce] and store it in the `NONCE` variable. You can do that by running: ```yaml ENTITY_DIR= ADDRESS=$(oasis-node stake pubkey2address --public_key \ $(cat $ENTITY_DIR/entity.json | jq .id -r)) NONCE=$(oasis-node stake account nonce --stake.account.address $ADDRESS -a $ADDR) ``` where `` is the path to your entity's descriptor, e.g. `/serverdir/node/entity/`. To vote for the proposal, use the following command to generate a suitable transaction: ```bash oasis-node governance gen_cast_vote \ "${TX_FLAGS[@]}" \ --vote.proposal.id 2 \ --vote yes \ --transaction.file tx_cast_vote.json \ --transaction.nonce $NONCE \ --transaction.fee.gas 2000 \ --transaction.fee.amount 2000 ``` where `TX_FLAGS` refer to previously set base and signer flags as described in the [Oasis node CLI Tools Setup] doc. If you use a Ledger-signer backed entity, you will need to install version 2.3.2 of the Oasis App as described in [Installing Oasis App on Your Ledger Wallet]. 
Note that the previous version of the Oasis App available through Ledger Live, version 1.8.2, doesn't support signing the `governance.CastVote` transaction type. To submit the generated transaction, copy `tx_cast_vote.json` to the online Oasis node and submit it from there: ```bash oasis-node consensus submit_tx \ -a $ADDR \ --transaction.file tx_cast_vote.json ``` [upgrade governance proposal]: ../../../core/consensus/services/governance.md#submit-proposal [your entity's nonce]: ../../../build/tools/cli/account.md#show [Oasis node CLI Tools Setup]: ../../../core/oasis-node/cli.md [Installing Oasis App on Your Ledger Wallet]: ../../../general/manage-tokens/holding-rose-tokens/ledger-wallet.md ### Instructions - Before upgrade This upgrade will upgrade **Oasis Core** to the **22.1.x release series** which **no longer allows running Oasis Node** (i.e. the `oasis-node` binary) **as root** (effective user ID of 0). Running network-accessible services as the root user is, as a general rule, extremely bad for system security. While it would be "ok" if we could drop privileges, `syscall.AllThreadsSyscall` does not work if the binary uses `cgo` at all. Nothing in Oasis Node will ever require elevated privileges. Attempting to run the `oasis-node` process as the root user will now terminate immediately on startup. While there may be specific circumstances where it is safe to run network services with the effective user ID set to 0, the overwhelming majority of cases where this is done is a misconfiguration. Please follow our [Changing Your Setup to Run Oasis Services with Non-root System User][change-to-non-root] guide for step-by-step instructions on how to update your system. If the previous behavior is required, the binary must be run in unsafe/debug mode (via the intentionally undocumented flag), and `debug.allow_root` must also be set. [change-to-non-root]: ../../run-your-node/prerequisites/system-configuration.mdx#create-a-user ### Instructions - Upgrade day The following steps should be performed on **2022-04-11** only after the network has reached the upgrade epoch and has halted: 1. Download the genesis file published in the [Damask upgrade release]. Mainnet state at epoch **13402** will be exported and migrated to a 22.1.x compatible genesis file. The upgrade genesis file will be published at the above link soon after the upgrade epoch is reached. 2. Verify the provided Damask upgrade genesis file by comparing it to a network state dump. The state changes are described in the [Damask Upgrade] document. See instructions in the [Handling Network Upgrades] guide. 3. Replace the old genesis file with the new Damask upgrade genesis file. 4. Ensure your node will remain stopped by disabling auto-starting via your process manager (e.g., [systemd] or [Supervisor]). 5. Replace the old version of Oasis Node with version [22.1.3]. 6. [Wipe state]. State of ParaTimes/runtimes is not affected by this upgrade and should NOT be wiped. We recommend **backing up the pre-Damask consensus state** so all the transactions and history are not permanently lost. Also, if you ever need to access that state in the future, you will be able to spin up an Oasis Node in archive mode and query the pre-Damask state. 7. Perform any needed [configuration changes][damask-conf-changes] described below. 8. (only for ParaTime operators) Replace old versions of your ParaTime binaries with the new [Oasis Runtime Containers (`.orc`)][orcs] introduced in the Damask upgrade.
For the official Oasis ParaTimes, use the following versions: - Emerald ParaTime: [8.2.0][emerald-8.2.0] - Cipher ParaTime: [1.1.0][cipher-1.1.0] 9. (only Emerald Web3 Gateway operators) Replace old version of Emerald Web3 Gateway with version [2.1.0][emerald-gw-2.1.0]. 10. (only Rosetta Gateway operators) Replace old version of Oasis Rosetta Gateway with version [2.2.0][rosetta-gw-2.2.0]. 11. Start your node and re-enable auto-starting via your process manager. [Damask Upgrade]: ../upgrades/damask-upgrade.md#proposed-state-changes [Damask upgrade release]: https://github.com/oasisprotocol/mainnet-artifacts/releases/tag/2022-04-11 [Handling Network Upgrades]: ../../run-your-node/maintenance/handling-network-upgrades#verify-genesis [orcs]: https://github.com/oasisprotocol/oasis-core/issues/4469 [22.1.3]: https://github.com/oasisprotocol/oasis-core/releases/tag/v22.1.3 [emerald-8.2.0]: https://github.com/oasisprotocol/emerald-paratime/releases/tag/v8.2.0 [cipher-1.1.0]: https://github.com/oasisprotocol/cipher-paratime/releases/tag/v1.1.0 [emerald-gw-2.1.0]: https://github.com/oasisprotocol/emerald-web3-gateway/releases/tag/v2.1.0 [rosetta-gw-2.2.0]: https://github.com/oasisprotocol/oasis-rosetta-gateway/releases/tag/v2.2.0 [Wipe state]: ../../run-your-node/maintenance/wiping-node-state#state-wipe-and-keep-node-identity [damask-conf-changes]: #damask-conf-changes [systemd]: https://systemd.io/ [Supervisor]: http://supervisord.org/ ### Configuration Changes To see the full extent of the changes examine the [Change Log][changelog-22.1.3]: of the **Oasis Core 22.1.3**, in particular the [22.1][changelog-22.1] and [22.0][changelog-22.0] sections. If your node is currently configured to run a ParaTime, you need to perform some additional steps. The way ParaTime binaries are distributed has changed so that all required artifacts are contained in a single archive called the Oasis Runtime Container and have the `.orc` extension. Links to updated ParaTime binaries will be published on the [Network Parameters][network-paratimes] page for their respective ParaTimes. The configuration is simplified as the `runtime.paths` now only needs to list all of the supported `.orc` files (see below for an example). Instead of separately configuring various roles for a node, there is now a single configuration flag called `runtime.mode` which enables the correct roles as needed. It should be set to one of the following values: - `none` (runtime support is disabled, only consensus layer is enabled) - `compute` (node is participating as a runtime compute node for all the configured runtimes) - `keymanager` (node is participating as a keymanager node) - `client` (node is a stateful runtime client) - `client-stateless` (node is a stateless runtime client and connects to remote nodes for any state queries) Nodes that have so far been participating as compute nodes should set the mode to `compute` and nodes that have been participating as clients for querying and transaction submission should set it to `client`. The following configuration flags have been removed: - `runtime.supported` (existing `runtime.paths` is used instead) - `worker.p2p.enabled` (now automatically set based on runtime mode) - `worker.compute.enabled` (now set based on runtime mode) - `worker.keymanager.enabled` (now set based on runtime mode) - `worker.storage.enabled` (no longer needed) Also the `worker.client` option is no longer needed unless you are providing consensus layer RPC services. 
For example, if your _previous_ configuration looked like: ```yaml runtime: supported: - "000000000000000000000000000000000000000000000000000000000000beef" paths: "000000000000000000000000000000000000000000000000000000000000beef": /path/to/runtime worker: # ... other settings omitted ... storage: enabled: true compute: enabled: true client: port: 12345 addresses: - "xx.yy.zz.vv:12345" p2p: enabled: true port: 12346 addresses: - "xx.yy.zz.vv:12346" ``` The _new_ configuration should look like: ```yaml runtime: mode: compute paths: - /path/to/runtime.orc worker: # ... other settings omitted ... p2p: port: 12346 addresses: - "xx.yy.zz.vv:12346" ``` [network-paratimes]: ../../network/mainnet#paratimes ### Additional notes Examine the [Change Log][changelog-22.1.3] of the 22.1.3 release, in particular the [22.1][changelog-22.1] and [22.0][changelog-22.0] sections. [changelog-22.1.3]: https://github.com/oasisprotocol/oasis-core/blob/v22.1.3/CHANGELOG.md [changelog-22.1]: https://github.com/oasisprotocol/oasis-core/blob/v22.1.3/CHANGELOG.md#221-2022-04-01 [changelog-22.0]: https://github.com/oasisprotocol/oasis-core/blob/v22.1.3/CHANGELOG.md#220-2022-03-01 ## 2021-08-31 (16:00 UTC) - Parameter Update * **Upgrade height:** upgrade is scheduled to happen at epoch **8049.** We expect the Mainnet network to reach this epoch at around 2021-08-31 16:00 UTC. ### Proposed Parameter Changes The [Oasis Core 21.2.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.2.8) release contains the [`consensus-params-update-2021-08` upgrade handler](https://github.com/oasisprotocol/oasis-core/blob/v21.2.8/go/upgrade/migrations/consensus_parameters.go) which will update the following parameters in the consensus layer: * **`staking.params.max_allowances` ** specifies the maximum number of allowances on account can store. It will be set to `16` (default value is `0`) to enable support for beneficiary allowances which are required to transfer tokens into a ParaTime. * **`staking.params.gas_costs` ** , **`governance.params.gas_costs`** and **`roothash.params.gas_costs`** specify gas costs for various types of staking, governance and roothash transactions. Gas costs for transactions that were missing gas costs will be added. * **`scheduler.params.max_validators`** is the maximum size of the consensus committee (i.e. the validator set). It will be increased to`110` (it was set to `100` previously). ### Instructions - Voting **Voting for the upgrade proposal will end at epoch 7876. We expect the Mainnet network to reach this epoch at around 2021-08-24 12:00 UTC**_**.**_ At this time only entities which have active validator nodes scheduled in the validator set are eligible to vote for governance proposals. At least **75%** of the **voting power** needs to cast vote on the upgrade proposal for the result to be valid. At least **90%** of the votes need to be **yes** votes for a proposal to be accepted. This upgrade will be the first upgrade that will use the new on-chain governance service introduced in the [Cobalt Upgrade](../upgrades/cobalt-upgrade.md). 
The Oasis Protocol Foundation has submitted an [upgrade governance proposal](../../../core/consensus/services/governance.md#submit-proposal) with the following contents: ```yaml { "v": 1, "handler": "consensus-params-update-2021-08", "target": { "consensus_protocol": { "major": 4 }, "runtime_host_protocol": { "major": 3 }, "runtime_committee_protocol": { "major": 2 } }, "epoch": 8049 } ``` To view the proposal yourself, you can run the following command on your online Oasis Node: ```bash oasis-node governance list_proposals -a $ADDR | jq ``` where `$ADDR` represents the path to the internal Oasis Node UNIX socket prefixed with `unix:` (e.g.`unix:/serverdir/node/internal.sock`). The output should look like: ```yaml [ { "id": 1, "submitter": "oasis1qpydpeyjrneq20kh2jz2809lew6d9p64yymutlee", "state": "active", "deposit": "10000000000000", "content": { "upgrade": { "v": 1, "handler": "consensus-params-update-2021-08", "target": { "consensus_protocol": { "major": 4 }, "runtime_host_protocol": { "major": 3 }, "runtime_committee_protocol": { "major": 2 } }, "epoch": 8049 } }, "created_at": 7708, "closes_at": 7876 } ] ``` Obtain [your entity's nonce](../../../build/tools/cli/account.md#show) and store it in the `NONCE` variable. You can do that by running: ```yaml ENTITY_DIR= ADDRESS=$(oasis-node stake pubkey2address --public_key \ $(cat $ENTITY_DIR/entity.json | jq .id -r)) NONCE=$(oasis-node stake account nonce --stake.account.address $ADDRESS -a $ADDR) ``` where `` is the path to your entity's descriptor, e.g. `/serverdir/node/entity/`. To vote for the proposal, use the following command to generate a suitable transaction: ```bash oasis-node governance gen_cast_vote \ "${TX_FLAGS[@]}" \ --vote.proposal.id 1 \ --vote yes \ --transaction.file tx_cast_vote.json \ --transaction.nonce $NONCE \ --transaction.fee.gas 2000 \ --transaction.fee.amount 2000 ``` where `TX_FLAGS` refer to previously set base and signer flags as described in the [Oasis node CLI Tools Setup] doc. If you use a Ledger-signer backed entity, you will need to install version 2.3.1 of the Oasis App as described in [Installing Oasis App 2.3.1 to Your Ledger](#installing-oasis-app-231-to-your-ledger). This is needed because the current version of the Oasis App available through Ledger Live, version 1.8.2, doesn't support signing the `governance.CastVote` transaction type. To submit the generated transaction, copy `tx_cast_vote.json` to the online Oasis node and submit it from there: ```bash oasis-node consensus submit_tx \ -a $ADDR \ --transaction.file tx_cast_vote.json ``` ### Instructions - Before Upgrade System Preparation * This upgrade will upgrade **Oasis Core** to version **21.2.8** which: * Upgrades the BadgerDB database backend from v2 to v3. See [**BadgerDB v2 to v3 Migration**](mainnet.md#badgerdb-v2-to-v3-migration) section for required steps to be done before upgrade. * Has a check that makes sure the **file descriptor limit** is set to an appropriately high value (at least 50000). While previous versions only warned in case the limit was set too low, this version will refuse to start. Follow the [File Descriptor Limit](../../run-your-node/prerequisites/system-configuration.mdx#increase-file-descriptor-limit) documentation page for details on how to increase the limit on your system. * Stop your node, replace the old version of Oasis Node with version [21.2.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.2.8) and restart your node. 
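To quickly check whether the file descriptor limit mentioned above is already high enough before you restart the node, the following sketch can help (the `oasis-node.service` unit name is only an example; adjust it to how you actually run your node):

```
# Limit that applies to your current shell (relevant if you start oasis-node by hand).
ulimit -n

# Limit that systemd applies to the service, if the node runs under systemd.
systemctl show oasis-node.service --property=LimitNOFILE
```

If either value is below 50000, raise it as described in the File Descriptor Limit documentation page before upgrading.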
Since Oasis Core 21.2.8 is otherwise compatible with the current consensus layer protocol, you may upgrade your Mainnet node to this version at any time. This is not dump & restore upgrade For this upgrade, **do NOT wipe state**. * Once reaching the designated upgrade epoch, your node will stop and needs to be upgraded to Oasis Core [21.2.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.2.8). * If you upgraded your node to Oasis Core 21.2.8 before the upgrade epoch was reached, you only need to restart your node for the upgrade to proceed. * Otherwise, you need to upgrade your node to Oasis Core 21.2.8 first and then restart it. If you use a process manager like [systemd](https://github.com/systemd/systemd) or [Supervisor](http://supervisord.org), you can configure it to restart the Oasis Node automatically. The Mainnet's genesis file and the genesis document's hash will remain the same. ### BadgerDB v2 to v3 Migration This upgrade will upgrade Oasis Core to version **21.2.x** which includes the new [**BadgerDB**](https://github.com/dgraph-io/badger) **v3**. Since BadgerDB's on-disk format changed in v3, it requires on-disk state migration. The migration process is done automatically and makes the following steps: * Upon startup, Oasis Node will start migrating all `/**/*.badger.db` files (Badger v2 files) and start writing Badger v3 DB to files with the `.migrate` suffix. * If the migration fails in the middle, Oasis Node will delete all `/**/*.badger.db.migrate` files the next time it starts and start the migration (of the remaining `/**/*.badger.db` files) again. * If the migration succeeds, Oasis Node will append the `.backup` suffix to all `/**/*.badger.db` files (Badger v2 files) and remove the `.migrate` suffix from all `/**/*.badger.db.migrate` files (Badger v3 files). The BadgerDB v2 to v3 migration is **very I/O intensive** (both IOPS and throughput) and **may take a couple of hours** to complete. To follow its progress, run: ``` shopt -s globstar du -h /**/*.badger.db* | sort -h -r ``` and observe the sizes of various `*.badger.db*` directories. 
For example, if it outputted the following: ``` 55G data/tendermint/data/blockstore.badger.db 37G data/tendermint/abci-state/mkvs_storage.badger.db.backup 32G data/tendermint/abci-state/mkvs_storage.badger.db 16G data/tendermint/data/blockstore.badger.db.migration 2.9G data/tendermint/data/state.badger.db 62M data/persistent-store.badger.db.backup 2.1M data/tendermint/abci-state/mkvs_storage.badger.db.backup/checkpoints 1.1M data/tendermint/abci-state/mkvs_storage.badger.db.backup/checkpoints/4767601/ca51b06a054b69f2c18b9781ea42f0b00900de199c1937398514331b0d136ec3/chunks 1.1M data/tendermint/abci-state/mkvs_storage.badger.db.backup/checkpoints/4767601/ca51b06a054b69f2c18b9781ea42f0b00900de199c1937398514331b0d136ec3 1.1M data/tendermint/abci-state/mkvs_storage.badger.db.backup/checkpoints/4767601 1.1M data/tendermint/abci-state/mkvs_storage.badger.db.backup/checkpoints/4757601/2ec3a28b1f4a2fcce503f2e80eb5d77b6c0a4d1075e8a14d880ac390338a855e/chunks 1.1M data/tendermint/abci-state/mkvs_storage.badger.db.backup/checkpoints/4757601/2ec3a28b1f4a2fcce503f2e80eb5d77b6c0a4d1075e8a14d880ac390338a855e 1.1M data/tendermint/abci-state/mkvs_storage.badger.db.backup/checkpoints/4757601 36K data/persistent-store.badger.db 20K data/tendermint/data/evidence.badger.db ``` then the `mkvs_storage.badger.db` was already migrated: * old BadgerDB v2 directory: `37G data/tendermint/abci-state/mkvs_storage.badger.db.backup` * new BadgerDB v3 directory: `32G data/tendermint/abci-state/mkvs_storage.badger.db` and now the `blockstore.badger.db` is being migrated: * current BadgerDB v2 directory: `55G data/tendermint/data/blockstore.badger.db` * new BadgerDB v3 directory: `16G data/tendermint/data/blockstore.badger.db.migration` Note that usually, the new BadgerDB v3 directory is smaller due to less fragmentation. #### Extra storage requirements Your node will thus need to have extra storage space to store both the old and the new BadgerDB files. To see estimate how much extra space the migration will need, use the `du` tool: ``` shopt -s globstar du -h /**/*.badger.db | sort -h -r ``` This is an example output from a Mainnet node that uses `/srv/oasis/node` as the ``: ``` 43G /srv/oasis/node/tendermint/data/blockstore.badger.db 28G /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db 311M /srv/oasis/node/tendermint/data/state.badger.db 2.0M /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints 996K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4517601 996K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4507601 992K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4517601/ba6218d7be2df31ba6e7201a8585c6435154728e55bbb7df1ffebe683bf60217 992K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4507601/1e0bf592bb0d99832b13ad91bc32aed018dfc2639e07b93a254a05f6791a19ac 984K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4517601/ba6218d7be2df31ba6e7201a8585c6435154728e55bbb7df1ffebe683bf60217/chunks 984K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4507601/1e0bf592bb0d99832b13ad91bc32aed018dfc2639e07b93a254a05f6791a19ac/chunks 148K /srv/oasis/node/persistent-store.badger.db 36K /srv/oasis/node/tendermint/data/evidence.badger.db ``` After you've confirmed your node is up and running, you can safely delete all the `/**/*.badger.db.backup` files. 
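For instance, reusing the globstar pattern from the `du` commands above and assuming `/srv/oasis/node` is the node's data directory (as in the example output), the leftover Badger v2 backups can be inspected and then removed like this:

```
shopt -s globstar
# List the Badger v2 backup directories left behind by the migration.
du -sh /srv/oasis/node/**/*.badger.db.backup
# Once the node has been confirmed healthy, remove them to reclaim the space.
rm -rf /srv/oasis/node/**/*.badger.db.backup
```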
#### Extra memory requirements BadgerDB v2 to v3 migration can use a number of Go routines to migrate different database files in parallel. However, this comes with a memory cost. For larger database files, it might need up to 4 GB of RAM per database, so we recommend lowering the number of Go routines BadgerDB uses during migration (`badger.migrate.num_go_routines`) if your node has less than 8 GB of RAM. If your node has less than 8 GB of RAM, set the number of Go routines BadgerDB uses during migration to 2 (default is 8) by adding the following to your node's `config.yml`: ``` # BadgerDB configuration. badger: migrate: # Set the number of Go routines BadgerDB uses during migration to 2 to lower # the memory pressure during migration (at the expense of a longer migration # time). num_go_routines: 2 ``` ### Installing Oasis App 2.3.1 to Your Ledger This manual installation procedure is needed until the latest version of the Oasis App, version 2.3.1, becomes available through [Ledger Live](https://www.ledger.com/ledger-live/)'s Manager. Unlike Nano S devices, **Nano X** devices are locked, meaning one cannot manually install the latest version of the Oasis App on them. If you use a Nano X device, you will need to temporarily switch to a Nano S device or wait for the new version of the Oasis App to be available through Ledger Live's Manager. #### Update Firmware to Version 2.0.0 First, make sure the firmware on your Nano S is up-to-date. At least [version 2.0.0](https://support.ledger.com/hc/en-us/articles/360010446000-Ledger-Nano-S-firmware-release-notes) released on May 4, 2021, is required. Follow [Ledger's instructions for updating the firmware on your Nano S](https://support.ledger.com/hc/en-us/articles/360002731113-Update-Ledger-Nano-S-firmware). #### Install Prerequisites for Manual Installation The manual installation process relies on some tooling that needs to be available on the system: * [Python](https://www.python.org) 3. * [Python tools for Ledger Blue, Nano S and Nano X](https://github.com/LedgerHQ/blue-loader-python). Most systems should already have [Python](https://www.python.org) pre-installed. To install [Python tools for Ledger Blue, Nano S and Nano X](https://github.com/LedgerHQ/blue-loader-python), use [pip](https://pip.pypa.io/en/stable/): ``` pip3 install --upgrade ledgerblue ``` You might want to install the packages to a [Python virtual environment](https://packaging.python.org/tutorials/installing-packages/#creating-virtual-environments) or via so-called [User install](https://pip.pypa.io/en/stable/user_guide/#user-installs) (i.e. isolated to the current user). #### Download Oasis App 2.3.1 Download the [Oasis App 2.3.1 installer for Nano S](https://github.com/Zondax/ledger-oasis/releases/download/v2.3.1/installer_s.sh) from [Zondax's Oasis App GitHub repo](https://github.com/Zondax/ledger-oasis). #### Install Oasis App 2.3.1 Make the downloaded installer executable by running: ``` chmod +x installer_s.sh ``` Connect your Nano S and unlock it. Then execute the installer: ``` ./installer_s.sh load ``` Your Nano S will give you the option to either: * _Deny unsafe manager_, or * review the _Public Key_ and _Allow unsafe manager_. First review the public key and ensure it matches the `Generated random root public key` displayed in the terminal. Then double press the _Allow unsafe manager_ option.
If there is an existing version of the _Oasis App_ installed on your Nano S, you will be prompted with the _Uninstall Oasis_ screen, followed by reviewing the _Identifier_ (it will depend on the version of the Oasis App you have currently installed) and finally confirming deletion on the _Confirm action_ screen. After the new version of the Oasis App has finished loading, you will be prompted with the _Install app Oasis_ screen, followed by reviewing the _Version_, _Identifier_ and _Code Identifier_ screens_._ Ensure the values are as follows: * Version: 2.3.1 * Identifier (_Application full hash_ on the terminal): `E0CB424D3B1C2A0F694BCB6E99C3B37C7685399D59DD12D7CF80AF4A487882B1` * Code Identifier: `C17EBE7CD356D01411A02A81C64CDA3E81F193BDA09BEBBD0AEAF75AD7EC35E3` Finally, confirm installation of the new app by double pressing on the _Perform installation_ screen. Your Ledger device will ask for your PIN again. Installing Oasis App 2.3.1 on a Nano S with the firmware version < 2.0.0 (e.g. 1.6.1) will NOT fail. It will show a different _Identifier_ when installing the app which will NOT match the _Application full hash_ shown on the terminal. However, opening the app will not work and it will "freeze" your Nano S device. #### Verify Installation Open the Oasis App on your Nano S and ensure the _Version_ screen shows version 2.3.1. Starting the manually installed version of the Oasis App will always show the _This app is not genuine_ screen, followed by _Identifier_ (which should match the Identifier value above) screen. Finally, open the application by double pressing on the _Open application_ screen. After you've signed your `governance.CastVote` transaction, you can safely downgrade Oasis App to the latest official version available via [Ledger Live](https://www.ledger.com/ledger-live/), version 1.8.2. To do that, just open Ledger Live's Manager and it will prompt you to install version 1.8.2. ## 2021-04-28 (16:00 UTC) - Cobalt Upgrade * **Upgrade height:** upgrade is scheduled to happen at epoch **5046.** We expect the Mainnet network to reach this epoch at around 2021-04-28 12:00 UTC. ### Instructions - Before upgrade * Make sure you are running the latest Mainnet-compatible Oasis Node version: [20.12.7](https://github.com/oasisprotocol/oasis-core/releases/tag/v20.12.7). * If you are running a different **20.12.x** Oasis Node version, update to version **20.12.7** before the upgrade. Version **20.12.7** is backwards compatible with other **20.12.x** releases, so upgrade can be performed at any time by stopping the node and replacing the binary. * To ensure your node will stop at epoch **5046** [submit the following upgrade descriptor](https://github.com/oasisprotocol/cli/blob/master/docs/network.md#governance-create-proposal) at any time before the upgrade: ``` { "name": "mainnet-upgrade-2021-04-28", "method": "internal", "identifier": "mainnet-upgrade-2021-04-28", "epoch": 5046 } ``` The upgrade descriptor contains a non-existing upgrade handler and will be used to coordinate the network shutdown, the rest of the upgrade is manual. #### **Runtime operators** Following section is relevant only for **runtime operators** that are running **storage** nodes for active runtimes on the Mainnet. This upgrade requires a runtime storage node migration to be performed **before the upgrade genesis is published**. This can be done before the upgrade epoch is reached by stopping all runtime nodes and running the migration. 
**Backup your node's data directory** To prevent irrecoverable runtime storage data corruption/loss in case of a failed storage migration, backup your node's data directory. For example, to backup the `/serverdir/node` directory using the [rsync](https://rsync.samba.org) tool, run: ``` rsync -a /serverdir/node/ /serverdir/node-BACKUP/ ``` The storage database on all storage nodes needs to be migrated with the following command (using the [21.1.1](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.1.1) binary): ``` oasis-node storage migrate \ --datadir \ --runtime.supported ``` After the migration to v5 completes, you will see an output similar to: ``` ... - migrating from v4 to v5... - migrating version 24468... - migrated root state-root:195cf7a9a103e7300b2bb4e537cb9935cbebd83e448e67aa55433861a6ad7426 -> state-root:cea105a5d701deab935b94af9e8e0c5af5dcdb61c242bf434da9f11aa8d110ba - migrated root io-root:0850c5a33ee7f45aa92724b7d5f28c9ac9ae8799b88cc5be9773e8aba9526ca7 -> io-root:19713a2b44e1bf868ebee43c36872baa3058870bb890a5e25d1c4cea2622be77 - migrated root io-root:477391131f60ac2c22bce9167c7e3783a13d4fb81fddd2d388b4ead6a586fe52 -> io-root:f29f86d491303c5fd7b3572e97cbd65b7487b6b4ac519623afd161cc2e4678b7 ``` Take note of the displayed `state-root` and report it to the Foundation, as it needs to be included in the upgrade's new genesis file. Keep the runtime nodes stopped until the upgrade epoch is reached. At upgrade epoch, upgrade the nodes by following the remaining steps above. ### Instructions - Upgrade day Following steps should be performed on **2021-04-28** only after the network has reached the upgrade epoch and has halted: * Download the genesis file published in the [Cobalt Upgrade release](https://github.com/oasisprotocol/mainnet-artifacts/releases/tag/2021-04-28). Mainnet state at epoch **5046** will be exported and migrated to a 21.1.x compatible genesis file. Upgrade genesis file will be published on the above link soon after reaching the upgrade epoch. * Verify the provided Cobalt upgrade genesis file by comparing it to network state dump. See instructions in the [Handling Network Upgrades](../../run-your-node/maintenance/handling-network-upgrades.md#verify-genesis) guide. * Replace the old genesis file with the new Cobalt upgrade genesis file. * Stop your node (if you haven't stopped it already by submitting the upgrade descriptor). * Replace the old version of Oasis Node with version [21.1.1](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.1.1). * [Wipe state](../../run-your-node/maintenance/wiping-node-state.md#state-wipe-and-keep-node-identity). * Start your node. For more detailed instructions, see the [Handling Network Upgrades](../../run-your-node/maintenance/handling-network-upgrades.md) guide. ### Additional notes Examine the [Change Log](https://github.com/oasisprotocol/oasis-core/blob/v21.1.1/CHANGELOG.md) of the 21.1.1 (and 21.0) releases. **Runtime operators** Note the following configuration change in the [21.0](https://github.com/oasisprotocol/oasis-core/blob/v21.1.1/CHANGELOG.md#configuration-changes) release. **Storage access policy changes** Due to the changes in the default access policy on storage nodes, at least one of the storage nodes should be configured with the `worker.storage.public_rpc.enabled` flag set to `true`. Otherwise, external runtime clients wont be able to connect to any storage nodes. 
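As an illustration, the relevant `config.yml` fragment on the storage node you choose to expose could look like the following sketch (only the nesting implied by the flag name above is assumed; merge it into your existing `worker` section):

```yaml
worker:
  storage:
    # Allow external runtime clients to connect to this storage node.
    public_rpc:
      enabled: true
```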
## 2020-11-18 (16:00 UTC) - Mainnet * **Block height** when Mainnet Beta network stops: **702000.** We expect the Mainnet Beta network to reach this block height at around 2020-11-18 13:30 UTC. * **Upgrade window:** * Start: **2020-11-18T16:00:00Z.** * End: After nodes representing **2/3+ stake** do the upgrade. ### Instructions * Download [Oasis Node](../../run-your-node/prerequisites/oasis-node.md) version [20.12.2](https://github.com/oasisprotocol/oasis-core/releases/tag/v20.12.2), while continuing to run version 20.10.x. * (optional) Use Oasis Node version 20.12.2 to dump network state at the specified block height. It will connect to the running version 20.10.x node. * Download the Mainnet genesis file published in the [2020-11-18 release](https://github.com/oasisprotocol/mainnet-artifacts/releases/tag/2020-11-18). * (optional) Verify the provided Mainnet genesis file by comparing it to network state dump. See instructions in the [Handling Network Upgrades](../../run-your-node/maintenance/handling-network-upgrades.md#verify-genesis) guide. * Replace the old Mainnet Beta genesis file with the Mainnet genesis file. * Stop your node. * Remove the old 20.10.x version of Oasis Node. * [Wipe state](../../run-your-node/maintenance/wiping-node-state.md#state-wipe-and-keep-node-identity). * Update your node's configuration per instructions in [Configuration changes](mainnet.md#configuration-changes) below. * Start your node. This time, we recommend dumping the network state with the upgraded Oasis Node binary so that the genesis file will be in the [canonical form](../../../core/consensus/genesis.md#canonical-form). The canonical form will make it easier to compare the obtained genesis file with the one provided by us. For more detailed instructions, see the [Handling Network Upgrades](../../run-your-node/maintenance/handling-network-upgrades.md) guide. ### Configuration changes Since we are upgrading to the Mainnet, we recommend you change your node's configuration and disable pruning of the consensus' state by removing the `consensus.tendermint.abci.prune` key. For example, this configuration: ```yaml ... # Consensus backend. consensus: # Setting this to true will mean that the node you're deploying will attempt # to register as a validator. validator: true # Tendermint backend configuration. tendermint: abci: prune: strategy: keep_n # Keep ~7 days of data since block production is ~1 block every 6 seconds. # (7*24*3600/6 = 100800) num_kept: 100800 core: listen_address: tcp://0.0.0.0:26656 ... ``` Becomes: ```yaml ... # Consensus backend. consensus: # Setting this to true will mean that the node you're deploying will attempt # to register as a validator. validator: true # Tendermint backend configuration. tendermint: core: listen_address: tcp://0.0.0.0:26656 ... ``` ## 2020-10-01 - Mainnet Beta ### Instructions * Stop your node. * [Wipe state](../../run-your-node/maintenance/wiping-node-state.md#state-wipe-and-keep-node-identity). * Replace the old genesis file with the Mainnet Beta genesis file published in the [2020-10-01 release](https://github.com/oasisprotocol/mainnet-artifacts/releases/tag/2020-10-01). * Start your node. You should keep using Oasis Core version 20.10. For more detailed instructions, see the [Handling Network Upgrades](../../run-your-node/maintenance/handling-network-upgrades.md) guide. ## 2020-09-22 - Mainnet Dry Run ### Instructions * This is the initial deployment. 
--- ## Testnet Upgrade Log For each upgrade of the Testnet network, we are tracking important changes for node operators' deployments. They are enumerated and explained in this document. ## 2023-10-12 Upgrade Note that some of the software releases mentioned below have not yet been published. They are expected to become available as we get closer to the upgrade window. * **Upgrade height:** upgrade is scheduled to happen at epoch **29570**. We expect the Testnet network to reach this epoch at around 2023-10-12 07:56 UTC. ### Instructions 1. (optional) Vote for the upgrade. On 2023-10-09, an upgrade proposal will be proposed which (if accepted) will schedule the upgrade on epoch **29570**. See the [Governance documentation] for details on voting for proposals. The following steps should be performed only after the network has reached the upgrade epoch and has halted: 2. Download the Testnet genesis file published in the [Testnet 2023-10-12 release]. Testnet state at epoch **29570** will be exported and migrated to a 23.0 compatible genesis file. The new genesis file will be published on the above link soon after reaching the upgrade epoch. 3. Verify the provided Testnet upgrade genesis file by comparing it to the local network state dump. The state changes are described in the [State Changes](#state-changes) section below. 4. Replace the old genesis file with the new Testnet genesis file. 5. Ensure your node will remain stopped by disabling auto-starting via your process manager (e.g., [systemd] or [Supervisor]) 6. Back up the entire data directory of your node. Verify that the backup includes the following folders: - for consensus: `tendermint/abci-state` and `tendermint/data` - for runtimes: `runtimes/*/mkvs_storage.badger.db` and `runtimes/*/worker-local-storage.badger.db` 7. [Wipe state]. This must be performed _before_ replacing the Oasis Node binary. State of ParaTimes/runtimes is not affected by this upgrade and MUST NOT be wiped. Wiping state for confidential ParaTimes will prevent your compute or key manager node from transitioning to the new network. Transitioning confidential ParaTimes to the new network requires local state that is sealed to the CPU. This also means that bootstrapping a new node on a separate CPU immediately after the network upgrade will not be possible until an updated ParaTime containing new trust roots is released and adopted. 8. Replace the old version of Oasis Node with version [23.0]. The Oasis Core 23.0 binary in our published releases is built only for Ubuntu 22.04 (GLIBC>=2.32). You'll have to build it yourself if you're using prior Ubuntu versions (or other distributions using older system libraries). 9. Perform any needed [configuration changes](#configuration-changes) described below. 10. (only Rosetta Gateway operators) Replace old version of Oasis Rosetta Gateway with version [2.6.0][rosetta-gw-2.6.0]. 11. Start your node and re-enable auto-starting via your process manager. 
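As a concrete illustration of the backup in step 6 above, a minimal sketch using `rsync` (the `/serverdir/node` and `/serverdir/node-BACKUP` paths are only examples; adjust them to your deployment):

```
# Back up the whole data directory before wiping any state.
rsync -a /serverdir/node/ /serverdir/node-BACKUP/

# Sanity-check that the folders listed in step 6 made it into the backup.
ls -d /serverdir/node-BACKUP/tendermint/abci-state /serverdir/node-BACKUP/tendermint/data
ls -d /serverdir/node-BACKUP/runtimes/*/mkvs_storage.badger.db \
      /serverdir/node-BACKUP/runtimes/*/worker-local-storage.badger.db
```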
[Governance documentation]: ../../../build/tools/cli/network.md#governance-cast-vote [Testnet 2023-10-12 release]: https://github.com/oasisprotocol/testnet-artifacts/releases/tag/2023-10-12 [systemd]: https://systemd.io/ [Supervisor]: http://supervisord.org/ [Wipe state]: ../../run-your-node/maintenance/wiping-node-state.md#state-wipe-and-keep-node-identity [23.0]: https://github.com/oasisprotocol/oasis-core/releases/tag/v23.0 [rosetta-gw-2.6.0]: https://github.com/oasisprotocol/oasis-rosetta-gateway/releases/tag/v2.6.0 ### Configuration Changes To see the full extent of the changes examine the [Change Log] of the 23.0 release. The node configuration has been refactored so that everything is now configured via a YAML configuration file and **configuring via command-line options is no longer supported**. Some configuration options have changed and so the configuration file needs to be updated. To make this step easier, a command-line tool has been provided that will perform most of the changes automatically. You can run it with: ``` oasis-node config migrate --in config.yml --out new-config.yml ``` The migration subcommand logs the various changes it makes and warns you if a config option is no longer supported, etc. At the end, any unknown sections of the input config file are printed to the terminal to give you a chance to review them and make manual changes if required. Note that the migration subcommand does not preserve comments and order of sections from the input YAML config file. You should always carefully read the output of this command, as well as compare the generated config file with the original before using it. After you are satisfied with the new configuration file, replace the old file with the new one as follows: ``` mv new-config.yml config.yml ``` The configuration format for seed nodes has changed and it now requires the node's P2P public key to be used. In case your old configuration file contains known Testnet seed nodes, this transformation is performed automatically. However, if it contains unknown seed nodes then the conversion did not happen automatically and you may need to obtain the seed node's P2P public key. For Testnet you can use the following addresses: * `HcDFrTp/MqRHtju5bCx6TIhIMd6X/0ZQ3lUG73q5898=@34.86.165.6:26656` * `HcDFrTp/MqRHtju5bCx6TIhIMd6X/0ZQ3lUG73q5898=@34.86.165.6:9200` Please be aware that every seed node should be configured to listen on two distinct ports. One is dedicated to peer discovery within the CometBFT P2P network, while the other is used to bootstrap the Oasis P2P network. [Change Log]: https://github.com/oasisprotocol/oasis-core/blob/v23.0/CHANGELOG.md ### Data Directory Changes The subdirectory (located inside the node's data directory) used to store consensus-related data, previously called `tendermint` (after the consensus layer protocol backend) has been renamed to `consensus` in Oasis Core 23.0. If any of your scripts rely on specific directory names, please make sure to update them to reflect the changed directory name. ### State Changes The following parts of the genesis document will be updated: For a more detailed explanation of the parameters below, see the [Genesis Document] docs. All state changes will be done automatically with the migration command provided by the new version of `oasis-node`. 
It can be used as follows to derive the same genesis file from an existing state dump at the correct height (assuming there is a `genesis.json` present in the current working directory): ``` oasis-node genesis migrate --genesis.new_chain_id testnet-2023-10-12 ``` #### General * **`chain_id`** will be set to `testnet-2023-10-12`. * **`halt_epoch`** will be removed as it is no longer used. #### Registry * **`registry.runtimes[].txn_scheduler.propose_batch_timeout`** specifies how long to wait before accepting proposal from the next backup scheduler. It will be set to `5000000000` (5 seconds). Previously the value was represented in the number of consensus layer blocks. * **`registry.params.gas_costs.prove_freshness`** specifies the cost of the freshness proof transaction. It will be set to `1000`. * **`registry.params.gas_costs.update_keymanager`** specifies the cost of the keymanager policy update transaction. It will be removed as the parameter has been moved under `keymanager.params.gas_costs.update_policy`. * **`registry.params.tee_features`** specify various TEE features supported by the consensus layer registry service. These will be set to the following values to activate the new features: ```json "tee_features": { "sgx": { "pcs": true, "signed_attestations": true, "max_attestation_age": 1200 }, "freshness_proofs": true } ``` * **`registry.params.max_runtime_deployments`** specifies the maximum number of runtime deployments that can be specified in the runtime descriptor. It will be set to `5`. #### Root Hash * **`roothash.params.max_past_roots_stored`** specifies the maximum number of past runtime state roots that are stored in consensus state for each runtime. It will be set to `1200`. #### Staking * **`staking.params.commission_schedule_rules.min_commission_rate`** specifies the minimum commission rate. It will be set to `0` to maintain the existing behavior. * **`staking.params.thresholds.node-observer`** specifies the stake threshold for registering an observer node. It will be set to `100000000000` base units (or `100` tokens), same as for existing compute nodes. #### Key Manager * **`keymanager.params.gas_costs`** specify the cost of key manager transactions. These will be set to the following values: ```json "gas_costs": { "publish_ephemeral_secret": 1000, "publish_master_secret": 1000, "update_policy": 1000 } ``` #### Random Beacon * **`beacon.base`** is the network's starting epoch. It will be set to the epoch of Testnet's state dump + 1, `29570`. #### Governance * **`governance.params.enable_change_parameters_proposal`** specifies whether parameter change governance proposals are allowed. It will be set to `true`. #### Consensus * **`consensus.params.max_block_size`** specifies the maximum block size in the consensus layer. It will be set to `1048576` (1 MiB). 
#### Other * **`extra_data`** will be set back to the value in the [Mainnet genesis file] to include the Oasis Network's genesis quote: _“[Quis custodiet ipsos custodes?][mainnet-quote]”_ \[submitted by Oasis Community Member Daniyar Borangaziyev]: ``` "extra_data": { "quote": "UXVpcyBjdXN0b2RpZXQgaXBzb3MgY3VzdG9kZXM/IFtzdWJtaXR0ZWQgYnkgT2FzaXMgQ29tbXVuaXR5IE1lbWJlciBEYW5peWFyIEJvcmFuZ2F6aXlldl0=" } ``` [Genesis Document]: ../../reference/genesis-doc.md#parameters [Mainnet genesis file]: https://github.com/oasisprotocol/mainnet-artifacts/releases/tag/2020-11-18 [mainnet-quote]: https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%3F ## 2022-04-04 Upgrade * **Upgrade height:** upgrade is scheduled to happen at epoch **15056**. We expect the Testnet network to reach this epoch at around 2022-04-04 7:45 UTC. ### Instructions * See the [Before upgrade](testnet.md#before-upgrade) section for required steps to be done before upgrade. * (optional) Vote for the upgrade. On 2022-04-01, an upgrade proposal will be proposed which (if accepted) will schedule the upgrade on epoch **15056**. See the [Governance documentation](../../../build/tools/cli/network.md#governance-cast-vote) for details on voting for proposals. The upgrade proposal contains the `"empty"` upgrade handler whose only purpose is to allow specifying a no-op handler when submitting an upgrade proposal. This is not a dump & restore upgrade. For this upgrade, **do NOT wipe state**. The new Oasis Core version expects state synced using Oasis Core 22.0.x all the way up to the upgrade epoch. Read [Handling Network Upgrades] for more info. * Once reaching the designated upgrade epoch, your node will stop and needs to be upgraded to Oasis Core [22.1]. After your node is upgraded to Oasis Core 22.1, restart it and wait for 2/3+ of nodes by stake to do the same, after which the network will start again. The Testnet's genesis file and the genesis document's hash will remain the same. * If the nodes are running any ParaTimes, make sure you upgrade to the versions published on the [Testnet network parameters page](../../network/testnet.md). ParaTime binaries for the Cipher ParaTime will be published at a later time due to the additional offline signing step. If you are running _multiple_ ParaTimes on the same node, you should _disable_ the Cipher ParaTime until the new version is published. * If you are running a Rosetta gateway, upgrade it to version [2.2.0]. [Handling Network Upgrades]: ../../run-your-node/maintenance/handling-network-upgrades.md ### Before Upgrade This upgrade will upgrade **Oasis Core** to **version [22.1]** which **no longer allows running Oasis Node** (i.e. the `oasis-node` binary) **as root** (effective user ID of 0). Running network accessible services as the root user is extremely bad for system security as a general rule. While it would be "ok" if we could drop privileges, `syscall.AllThreadsSyscall` does not work if the binary uses `cgo` at all. Nothing in Oasis Node will ever require elevated privileges. Attempting to run the `oasis-node` process as the root user will now terminate immediately on startup. While there may be specific circumstances where it is safe to run network services with the effective user ID set to 0, the overwhelming majority of cases where this is done is a misconfiguration. If the previous behavior is required, the binary must be run in unsafe/debug mode (via the intentionally undocumented flag), and `debug.allow_root` must also be set.
[22.1]: https://github.com/oasisprotocol/oasis-core/releases/tag/v22.1 [2.2.0]: https://github.com/oasisprotocol/oasis-rosetta-gateway/releases/tag/v2.2.0 ## 2022-03-03 Upgrade * **Upgrade height:** upgrade is scheduled to happen at epoch **14209.** We expect the Testnet network to reach this epoch at around 2022-03-03 12:45 UTC. ### Instructions * (optional) Vote for the upgrade. On 2022-03-02, an upgrade proposal will be proposed which (if accepted) will schedule the upgrade on epoch **14209.** See the [Governance documentation](../../../build/tools/cli/network.md#governance-cast-vote) for details on voting for proposals. The upgrade proposal contains a non-existing upgrade handler and will be used to coordinate the network shutdown; the rest of the upgrade is manual. The following steps should be performed only after the network has reached the upgrade epoch and has halted: * Download the Testnet genesis file published in the [Testnet 2022-03-03 release](https://github.com/oasisprotocol/testnet-artifacts/releases/tag/2022-03-03). Testnet state at epoch **14209** will be exported and migrated to a 22.0 compatible genesis file. The new genesis file will be published on the above link soon after reaching the upgrade epoch. * Replace the old genesis file with the new Testnet genesis file. The [state changes](#state-changes) are described and explained below. * Replace the old version of Oasis Node with version [22.0](https://github.com/oasisprotocol/oasis-core/releases/tag/v22.0). * [Wipe state](../../run-your-node/maintenance/wiping-node-state.md#state-wipe-and-keep-node-identity). * Perform any needed [configuration changes](#configuration-changes) described below. * Start your node. ### Configuration Changes To see the full extent of the changes, examine the [Change Log](https://github.com/oasisprotocol/oasis-core/blob/v22.0/CHANGELOG.md#220-2022-03-01) of the 22.0 release. If your node is currently configured to run a ParaTime, you need to perform some additional steps. The way ParaTime binaries are distributed has changed so that all required artifacts are contained in a single archive called the Oasis Runtime Container with the `.orc` extension. Links to updated ParaTime binaries will be published on the [Testnet network parameters page](../../network/testnet.md) for their respective ParaTimes. The configuration is simplified, as `runtime.paths` now only needs to list all of the supported `.orc` files (see below for an example). Instead of separately configuring various roles for a node, there is now a single configuration flag called `runtime.mode` which enables the correct roles as needed. It should be set to one of the following values: - `none` (runtime support is disabled, only consensus layer is enabled) - `compute` (node is participating as a runtime compute node for all the configured runtimes) - `keymanager` (node is participating as a keymanager node) - `client` (node is a stateful runtime client) - `client-stateless` (node is a stateless runtime client and connects to remote nodes for any state queries) Nodes that have so far been participating as compute nodes should set the mode to `compute` and nodes that have been participating as clients for querying and transaction submission should set it to `client`.
The following configuration flags have been removed: - `runtime.supported` (existing `runtime.paths` is used instead) - `worker.p2p.enabled` (now automatically set based on runtime mode) - `worker.compute.enabled` (now set based on runtime mode) - `worker.keymanager.enabled` (now set based on runtime mode) - `worker.storage.enabled` (no longer needed) Also the `worker.client` option is no longer needed unless you are providing consensus layer RPC services. For example if your _previous_ configuration looked like: ```yaml runtime: supported: - "000000000000000000000000000000000000000000000000000000000000beef" paths: "000000000000000000000000000000000000000000000000000000000000beef": /path/to/runtime worker: # ... other settings omitted ... storage: enabled: true compute: enabled: true client: port: 12345 addresses: - "xx.yy.zz.vv:12345" p2p: enabled: true port: 12346 addresses: - "xx.yy.zz.vv:12346" ``` The _new_ configuration should look like: ```yaml runtime: mode: compute paths: - /path/to/runtime.orc worker: # ... other settings omitted ... p2p: port: 12346 addresses: - "xx.yy.zz.vv:12346" ``` ### State Changes The following parts of the genesis document will be updated: For a more detailed explanation of the parameters below, see the [Genesis Document] docs. ### **General** * **`height`** will be set to the height of the Testnet state dump + 1, i.e. `8535081`. * **`genesis_time`** will be set to `2022-03-03T13:00:00Z`. * **`chain_id`** will be set to `testnet-2022-03-03`. * **`halt_epoch`** will be set to `24210` (more than 1 year from the upgrade). ### **Registry** * **`registry.runtimes`** list contains the registered runtimes' descriptors. In this upgrade, all runtime descriptors will be migrated from version `2` to version `3`. The migration will be done automatically with the `oasis-node debug fix-genesis` command. * **`registry.suspended_runtimes`** list contains the suspended registered runtimes' descriptors. In this upgrade, all runtime descriptors will be migrated from version `2` to version `3`. The migration will be done automatically with the `oasis-node debug fix-genesis` command. * Inactive registered entities in **`registry.entities`** (and their corresponding nodes in **`registry.nodes`**) that don't pass the [minimum staking thresholds] will be removed. The removal will be done automatically with the `oasis-node debug fix-genesis` command. ### **Root Hash** * **`roothash.params.gas_costs.submit_msg`** is a new parameter that specifies the cost for an submit message transaction. It will be set to `1000`. This will be done automatically with the `oasis-node debug fix-genesis` command. * **`roothash.params.max_in_runtime_messages`** is a new parameter that specifies the maximum number of incoming messages that can be queued for processing by a runtime. It will be set to `128`. This will be done automatically with the `oasis-node debug fix-genesis` command. * **`roothash.runtime_state`** contains the state roots of the runtimes. Empty fields will be omitted. This will be done automatically with the `oasis-node debug fix-genesis` command. ### **Staking** * **`staking.params.thresholds`** specifies the minimum number of tokens that need to be staked in order for a particular entity or a particular type of node to participate in the network. The `node-storage` key is removed since Oasis Core 22.0+ removes separate storage nodes (for more details, see: [#4308](https://github.com/oasisprotocol/oasis-core/pull/4308)). 
This will be done automatically with the `oasis-node debug fix-genesis` command. * **`staking.params.min_transfer`** specifies the minimum number of tokens one can transfer. The value is set to 10,000,000 base units, or 0.01 TEST tokens. This will be done automatically with the `oasis-node debug fix-genesis` command. * **`staking.params.min_transact_balance`** specifies the minimum general balance an account must have to be able to perform transactions on the network. The value is set to 0 base units meaning this requirement is currently not enforced. This will be done automatically with the `oasis-node debug fix-genesis` command. * **`staking.params.reward_schedule`** specifies the staking reward schedule as an array with elements of the form: ``` { "until": 14226, "scale": "1229" } ``` For example, this element specifies that the staking reward is 0.001229% per epoch until epoch `14226`. It will be set to the same schedule that is currently on used on the Mainnet. ### **Random Beacon** The **`beacon`** object contains parameters controlling the new [improved VRF-based random beacon][ADR 0010] introduced in the Damask upgrade. * **`beacon.base`** is the network's starting epoch. It will be set to the `14209`. * **`beacon.params.backend`** configures the random beacon backend to use. It will be set to `"vrf"` indicating that the beacon implementing a [VRF-based random beacon][ADR 0010] should be used. * **`beacon.params.vrf_parameters.alpha_hq_threshold`** is minimal number of nodes that need to contribute a VRF proof for the beacon's output to be valid. It will be set to `3`. This will be done automatically with the `oasis-node debug fix-genesis` command. * **`beacon.params.vrf_parameters.interval`** is the duration of an epoch. It will be set to `600`. This will be done automatically with the `oasis-node debug fix-genesis` command. * **`beacon.params.vrf_parameters.proof_delay`** is number of blocks since the beginning of an epoch after a node can still submit its VRF proof. It will be set to `300`. This will be done automatically with the `oasis-node debug fix-genesis` command. * **`beacon.params.vrf_parameters.gas_costs.vrf_prove`** specifies the cost for a VRF prove transaction. It will be set to `1000`. This will be done automatically with the `oasis-node debug fix-genesis` command. The **`beacon.params.pvss_parameters`** control the behavior of the [previous random beacon implementing a PVSS scheme][pvss-beacon]. Since PVSS is no longer supported, all its configuration options are removed as well. [ADR 0010]: ../../../adrs/0010-vrf-elections [pvss-beacon]: ../../../adrs/0007-improved-random-beacon.md ### **Governance** * **`governance.params.stake_threshold`** is a new parameter specifying the single unified stake threshold representing the percentage of `VoteYes` votes in terms of total voting power for a governance proposal to pass. It will be set to `68` (i.e. 68%). This will be done automatically with the `oasis-node debug fix-genesis` command. * **`governance.params.quorum`** is the minimum percentage of voting power that needs to be cast on a proposal for the result to be valid. It will be removed since it is being replaced by the single **`governance.params.staking_threshold`** parameter. This will be done automatically with the `oasis-node debug fix-genesis` command. * **`governance.params.threshold`** is the minimum percentage of `VoteYes` votes in order for a proposal to be accepted. 
It will be removed since it is being replaced by the single **`governance.params.staking_threshold`** parameter. This will be done automatically with the `oasis-node debug fix-genesis` command. ### **Consensus** * **`consensus.params.state_checkpoint_interval`** parameter controls the interval (in blocks) on which state checkpoints should be taken. It will be increased from `10000` to `100000` to improve nodes' performance since computing checkpoints is I/O intensive. [Genesis Document]: ../../reference/genesis-doc.md#parameters [minimum staking thresholds]: ../../reference/genesis-doc.md#staking-thresholds ## 2021-08-11 Upgrade * **Upgrade height:** upgrade is scheduled to happen at epoch **8844.** We expect the Testnet network to reach this epoch at around 2021-08-11 08:50 UTC. ### Proposed Parameter Changes The [Oasis Core 21.2.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.2.8) release contains the [`consensus-params-update-2021-08` upgrade handler](https://github.com/oasisprotocol/oasis-core/blob/v21.2.8/go/upgrade/migrations/consensus_parameters.go) which will update the following parameters in the consensus layer: * **`staking.params.max_allowances` ** specifies the maximum number of allowances on account can store. It will be set to `16` (default value is `0`) to enable support for beneficiary allowances which are required to transfer tokens into a ParaTime. _Note that this has already been the case on Testnet since the_ [_2021-06-23 upgrade_](testnet.md#2021-06-23-upgrade)_._ * **`staking.params.gas_costs` ** , **`governance.params.gas_costs`** and **`roothash.params.gas_costs`** specify gas costs for various types of staking, governance and roothash transactions. Gas costs for transactions that were missing gas costs will be added. * **`scheduler.params.max_validators`** is the maximum size of the consensus committee (i.e. the validator set). It will be increased to`110` (it was set to `100` previously). ### Instructions - Before Upgrade System Preparation * This upgrade will upgrade **Oasis Core** to version **21.2.8** which: * Has a check that makes sure the **file descriptor limit** is set to an appropriately high value (at least 50000). While previous versions only warned in case the limit was set too low, this version will refuse to start. Follow the [File Descriptor Limit](../../run-your-node/prerequisites/system-configuration.mdx#increase-file-descriptor-limit) documentation page for details on how to increase the limit on your system. * Stop your node, replace the old version of Oasis Node with version [21.2.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.2.8) and restart your node. Since Oasis Core 21.2.8 is otherwise compatible with the current consensus layer protocol, you may upgrade your Testnet node to this version at any time. This is not dump & restore upgrade For this upgrade, **do NOT wipe state**. * Once reaching the designated upgrade epoch, your node will stop and needs to be upgraded to Oasis Core [21.2.8](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.2.8). * If you upgraded your node to Oasis Core 21.2.8 before the upgrade epoch was reached, you only need to restart your node for the upgrade to proceed. * Otherwise, you need to upgrade your node to Oasis Core 21.2.8 first and then restart it. If you use a process manager like [systemd](https://github.com/systemd/systemd) or [Supervisor](http://supervisord.org), you can configure it to restart the Oasis Node automatically. 
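With systemd, for example, this can be a small drop-in override; a minimal sketch, assuming your unit is named `oasis-node.service`:

```
sudo mkdir -p /etc/systemd/system/oasis-node.service.d
sudo tee /etc/systemd/system/oasis-node.service.d/restart.conf > /dev/null <<'EOF'
[Service]
# Restart the node automatically, e.g. after it stops at the upgrade epoch
# (useful if you have already replaced the binary with 21.2.8 beforehand).
Restart=always
RestartSec=10
EOF
sudo systemctl daemon-reload
```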
The Testnet's genesis file and the genesis document's hash will remain the same. ## 2021-06-23 Upgrade * **Upgrade height:** upgrade is scheduled to happen at epoch **7553.** We expect the Testnet network to reach this epoch at around 2021-06-23 14:30 UTC. ### Instructions * See the [Before upgrade](testnet.md#before-upgrade) section for required steps to be done before upgrade. * (optional) Vote for the upgrade. On 2021-06-21, an upgrade proposal will be proposed which (if accepted) will schedule the upgrade on epoch **7553.** See the [Governance documentation](../../../build/tools/cli/network.md#governance-cast-vote) for details on voting for proposals. The upgrade proposal contains the `"consensus-max-allowances-16"` upgrade handler whose only purpose is to set the **`staking.params.max_allowances`** consensus parameter to 16 (default value is 0) to enable support for beneficiary allowances which are required to transfer tokens into a ParaTime. * Stop your node, replace the old version of Oasis Node with version [21.2.4](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.2.4) and restart your node. Since Oasis Core 21.2.4 is otherwise compatible with the current consensus layer protocol, you may upgrade your Testnet node to this version at any time. This is not a dump & restore upgrade. For this upgrade, **do NOT wipe state**. * Once reaching the designated upgrade epoch, your node will stop and needs to be upgraded to Oasis Core [21.2.4](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.2.4). * If you upgraded your node to Oasis Core 21.2.4 before the upgrade epoch was reached, you only need to restart your node for the upgrade to proceed. * Otherwise, you need to upgrade your node to Oasis Core 21.2.4 first and then restart it. If you use a process manager like [systemd](https://github.com/systemd/systemd) or [Supervisor](http://supervisord.org), you can configure it to restart the Oasis Node automatically. The Testnet's genesis file and the genesis document's hash will remain the same. ### Before upgrade This upgrade will upgrade Oasis Core to version **21.2.x** which includes the new [**BadgerDB**](https://github.com/dgraph-io/badger) **v3**. Since BadgerDB's on-disk format changed in v3, it requires on-disk state migration. The migration process is done automatically and performs the following steps: * Upon startup, Oasis Node will start migrating all `/**/*.badger.db` files (Badger v2 files) and start writing Badger v3 DB to files with the `.migrate` suffix. * If the migration fails in the middle, Oasis Node will delete all `/**/*.badger.db.migrate` files the next time it starts and start the migration (of the remaining `/**/*.badger.db` files) again. * If the migration succeeds, Oasis Node will append the `.backup` suffix to all `/**/*.badger.db` files (Badger v2 files) and remove the `.migrate` suffix from all `/**/*.badger.db.migrate` files (Badger v3 files). #### Extra storage requirements Your node will thus need to have extra storage space to store both the old and the new BadgerDB files.
To see estimate how much extra space the migration will need, use the `du` tool: ``` shopt -s globstar du -h /**/*.badger.db | sort -h -r ``` This is an example output from a Testnet node that uses `/srv/oasis/node` as the ``: ``` 6.3G /srv/oasis/node/tendermint/data/blockstore.badger.db 2.7G /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db 1.4G /srv/oasis/node/tendermint/data/state.badger.db 158M /srv/oasis/node/persistent-store.badger.db 164K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints 80K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4424334 80K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4423334 76K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4424334/fc815694d8219acb97fc0207a2159601df76df4d96802c147252ad0f2fd8a3f3 76K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4423334/613e734e4ee4999bf71c3a190df13ea9d9b7d65af6a7fd8b2c9a477f2d052313 68K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4424334/fc815694d8219acb97fc0207a2159601df76df4d96802c147252ad0f2fd8a3f3/chunks 68K /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/checkpoints/4423334/613e734e4ee4999bf71c3a190df13ea9d9b7d65af6a7fd8b2c9a477f2d052313/chunks 20K /srv/oasis/node/tendermint/data/evidence.badger.db ``` After you've confirmed your node is up and running, you can safely delete all the `/**/*.badger.db.backup` files. #### Extra memory requirements BadgerDB v2 to v3 migration can use a number of Go routines to migrate different database files in parallel. However, this comes with a memory cost. For larger database files, it might need up to 4 GB of RAM per database, so we recommend lowering the number of Go routines BadgerDB uses during migration (`badger.migrate.num_go_routines`) if your node has less than 8 GB of RAM. If your node has less than 8 GB of RAM, set the number of Go routines BadgerDB uses during migration to 2 (default is 8) by adding the following to your node's `config.yml`: ``` # BadgerDB configuration. badger: migrate: # Set the number of Go routines BadgerDB uses during migration to 2 to lower # the memory pressure during migration (at the expense of a longer migration # time). num_go_routines: 2 ``` ## 2021-04-13 Upgrade * **Upgrade height:** upgrade is scheduled to happen at epoch **5662.** We expect the Testnet network to reach this epoch at around 2021-04-13 12:00 UTC. ### Instructions * Runtime operators see [Before upgrade](testnet.md#before-upgrade) section for required steps to be done before upgrade. * (optional) Vote for the upgrade. On 2021-04-12 an upgrade proposal will be proposed which (if accepted) will schedule a network shutdown on epoch **5662.** See the [Governance documentation](../../../build/tools/cli/network.md#governance-cast-vote) for details on voting for proposals. The upgrade proposal contains a non-existing upgrade handler and will be used to coordinate the network shutdown, the rest of the upgrade is manual. Following steps should be performed only after the network has reached the upgrade network and has halted: * Download the Testnet genesis file published in the [Testnet 2021-04-13 release](https://github.com/oasisprotocol/testnet-artifacts/releases/tag/2021-04-13). Testnet state at epoch **5662** will be exported and migrated to a 21.1.x compatible genesis file. Upgrade genesis file will be published on the above link soon after reaching the upgrade epoch. 
* Replace the old genesis file with the new Testnet genesis file. * Replace the old version of Oasis Node with version [21.1](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.1). * [Wipe state](../../run-your-node/maintenance/wiping-node-state.md#state-wipe-and-keep-node-identity). * Start your node. ### Before upgrade **Runtime operators** This upgrade requires a runtime storage node migration to be performed **before the upgrade genesis is published**. This can be done before the upgrade epoch is reached by stopping all runtime nodes and running the migration. **Backup your node's data directory** To prevent irrecoverable runtime storage data corruption/loss in case of a failed storage migration, backup your node's data directory. For example, to backup the `/serverdir/node` directory using the rsync tool, run: ``` rsync -a /serverdir/node/ /serverdir/node-BACKUP/ ``` The storage database on all storage nodes needs to be migrated with the following command (using the [21.1](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.1) binary): ``` oasis-node storage migrate \ --datadir \ --runtime.supported ``` After the migration to v5 completes, you will see an output similar to: ``` ... - migrating from v4 to v5... - migrating version 24468... - migrated root state-root:195cf7a9a103e7300b2bb4e537cb9935cbebd83e448e67aa55433861a6ad7426 -> state-root:cea105a5d701deab935b94af9e8e0c5af5dcdb61c242bf434da9f11aa8d110ba - migrated root io-root:0850c5a33ee7f45aa92724b7d5f28c9ac9ae8799b88cc5be9773e8aba9526ca7 -> io-root:19713a2b44e1bf868ebee43c36872baa3058870bb890a5e25d1c4cea2622be77 - migrated root io-root:477391131f60ac2c22bce9167c7e3783a13d4fb81fddd2d388b4ead6a586fe52 -> io-root:f29f86d491303c5fd7b3572e97cbd65b7487b6b4ac519623afd161cc2e4678b7 ``` Take note of the displayed `state-root` and report it to the Foundation, as it needs to be included in the upgrade's new genesis file. Keep the runtime nodes stopped until the upgrade epoch is reached. At upgrade epoch, upgrade the nodes by following the remaining steps above. ## 2021-03-24 Upgrade * **Upgrade height:** upgrade is scheduled to happen at epoch **5128.** We expect the Testnet network to reach this epoch at around 2021-03-24 11:30 UTC. ### Instructions * (optional) To ensure your node will stop at epoch **5128** [submit the following upgrade descriptor](https://github.com/oasisprotocol/cli/blob/master/docs/network.md#governance-create-proposal) at any time before the upgrade: ```json { "name": "testnet-upgrade-2021-03-24", "method": "internal", "identifier": "testnet-upgrade-2021-03-24", "epoch": 5128 } ``` * Download the Testnet genesis file published in the [Testnet 2021-03-24 release](https://github.com/oasisprotocol/testnet-artifacts/releases/tag/2021-03-24). Testnet state at epoch **5128** will be exported and migrated to a 21.0.x compatible genesis file. Upgrade genesis file will be published on the above link soon after reaching the upgrade epoch. * (optional) Verify the provided Testnet genesis file by comparing it to network state dump. See instructions in the [Handling Network Upgrades](../../run-your-node/maintenance/handling-network-upgrades.md#verify-genesis) guide. * Replace the old genesis file with the new Testnet genesis file. * Stop your node (if you haven't stopped it already by submitting the upgrade descriptor). * Replace the old version of Oasis Node with version [21.0.1](https://github.com/oasisprotocol/oasis-core/releases/tag/v21.0.1). 
* Update your node's configuration or perform any additional needed steps as per [Additional Steps](#additional-steps) below.
* [Wipe state](../../run-your-node/maintenance/wiping-node-state.md#state-wipe-and-keep-node-identity).
* Start your node.

For more detailed instructions, see the [Handling Network Upgrades](../../run-your-node/maintenance/handling-network-upgrades.md) guide.

### Additional steps

Examine the [Changelog](https://github.com/oasisprotocol/oasis-core/blob/v21.0.1/CHANGELOG.md#210-2021-03-18) of the 21.0 release.

**Runtime operators**

In addition to some [configuration changes](https://github.com/oasisprotocol/oasis-core/blob/v21.0.1/CHANGELOG.md#configuration-changes), this upgrade contains breaking runtime API changes. Make sure any runtime code is updated and compatible with the 21.0.x runtime API version.

**Backup your node's data directory**

To prevent irrecoverable runtime storage data corruption/loss in case of a failed storage migration, back up your node's data directory. For example, to back up the `/serverdir/node` directory using the rsync tool, run:

```
rsync -a /serverdir/node/ /serverdir/node-BACKUP/
```

For this upgrade, the runtime node operators need to perform an additional migration of the storage nodes. **Before starting the upgraded node and before wiping state**, the storage database on all storage nodes needs to be migrated with the following command (using the 21.0.1 binary):

```
oasis-node storage migrate \
  --datadir \
  --runtime.supported
```

**Storage access policy changes**

Due to the changes in the default access policy on storage nodes, at least one of the storage nodes should be configured with the `worker.storage.public_rpc.enabled` flag set to `true`. Otherwise, external runtime clients won't be able to connect to any storage nodes.

---

## Cobalt Upgrade

This document provides an overview of the proposed criteria and changes for the Cobalt Mainnet upgrade. This has been [reviewed and approved by community members and validators of the Oasis Network](https://github.com/oasisprotocol/community/discussions/18) and is being reproduced and summarized here for easy access.

As proposed by the community, the Cobalt upgrade on Mainnet will kick off around April 28, 2021 at 16:00 UTC.

## Major Features

All features for the Cobalt upgrade are implemented as part of **Oasis Core 21.1.1** which is a protocol-breaking release. Summary of the major features is as follows:

* **Light Clients and Checkpoint Sync**: In order to make bootstrapping of new network nodes much faster, the upgrade will introduce support for light clients and restoring state from checkpoints provided by other nodes in the network.
* **Random Beacon**: The random beacon is used by the consensus layer for ParaTime committee elections and is a critical component in providing security for ParaTimes with an open admission policy. The improved random beacon implementation is based on SCRAPE and provides unbiased output as long as at least one participant is honest.
* **On-Chain Governance**: The new on-chain governance service provides a simple framework for submitting governance proposals, validators voting on proposals and once an upgrade proposal passes, having a way to perform the upgrade in a controlled manner which minimizes downtime.
* **Support for Beneficiary Allowances**: Each account is able to configure beneficiaries which are allowed to withdraw tokens from it in addition to the account owner.
* **ROSE Transfers Between the consensus layer and ParaTimes**: The proposed upgrade introduces a mechanism where ParaTimes can emit messages as part of processing any ParaTime block. These messages can trigger operations in the consensus layer on the ParaTime's behalf. ParaTimes get their own accounts in the consensus layer which can hold ROSE and ParaTimes are able to request them be transferred to other accounts or to withdraw from other accounts when allowed via the allowances mechanism. * **A Path Towards Self-Governing ParaTimes**: By building on the ParaTime messages mechanism, the proposed upgrade extends ParaTime governance options and enables a path towards ParaTimes that can define their own governance mechanisms. In addition to the specified additional features we also propose the validator set size to be increased from the current 80 to 100 validators as [suggested earlier by the community](https://github.com/oasisprotocol/community/discussions/5#discussioncomment-282746). ## Mechanics of the Upgrade This section will be updated with more details as we get closer to the upgrade. Upgrading the Mainnet will require a coordinated upgrade of the Network. All nodes will need to configure a new genesis file that they can generate or verify independently and reset/archive any state from Mainnet. Once enough (representing 2/3+ of stake) nodes have taken this step, the upgraded network will start. For the actual steps that node operators need to make on their nodes, see the [Upgrade Log](../upgrade-logs/mainnet.md#cobalt-upgrade). ## Proposed State Changes The following parts of the genesis document will be updated: For a more detailed explanation of the parameters below, see the [Genesis Document](../../reference/genesis-doc.md#parameters) docs. ### **General** * **`height`** will be set to the height of the Mainnet state dump + 1, i.e.`3027601`. * **`genesis_time`** will be set to`2021-04-28T16:00:00Z`. * **`chain_id`** will be set to `oasis-2`. * **`halt_epoch`** will be set to`13807`(approximately 1 year from the Cobalt upgrade). ### **Epoch Time** The **`epochtime`**object will be removed since it became obsolete with the new [improved random beacon](../../../adrs/0007-improved-random-beacon.md). It will be replaced with the new **`beacon`** object described [below](cobalt-upgrade.md#random-beacon). ### **Registry** * **`registry.params.enable_runtime_governance_models` ** is a new parameter that specifies the set of [runtime governance models](../../../core/consensus/services/registry.md#runtimes) that are allowed to be used when creating/updating registrations. It will be set to: ``` { "entity": true, "runtime": true } ``` * **`registry.runtimes`** list contains the registered runtimes' descriptors. In the Cobalt upgrade, it will be migrated from a list of _signed_ runtime descriptors to a list of runtime descriptors. The migration will be done automatically with the `oasis-node debug fix-genesis` command. * **`registry.suspended_runtimes`** list contains the suspended registered runtimes' descriptors. In the Cobalt upgrade, it will be migrated from a list of _signed_ suspended runtime descriptors to a list of suspended runtime descriptors. The migration will be done automatically with the `oasis-node debug fix-genesis` command. * Inactive registered entities in **`registry.entities`** (and their corresponding nodes in **`registry.nodes`**) that don't pass the [minimum staking thresholds](../../reference/genesis-doc.md#staking-thresholds) will be removed. 
The removal will be done automatically with the `oasis-node debug fix-genesis` command. Removing entities from **`registry.entities`** will effectively deregister them but the entities' accounts in **`staking.ledger`** will remain intact. Deregistered entities can always re-register by submitting the [entity registration transaction](../../run-your-node/validator-node.mdx#entity-registration) after the upgrade. * **`registry.node_statuses`** object contains the registered nodes' statuses. In the Cobalt upgrade, each node's status will get a new parameter: **`election_eligible_after`**. This parameter specifies at which epoch a node is eligible to be [scheduled into various committees](../../../core/consensus/services/scheduler.md). All nodes will have the parameter set to `0` which means they are immediately eligible. The migration will be done automatically with the `oasis-node debug fix-genesis` command. ### **Root Hash** * **`roothash.params.max_runtime_messages` ** is a new parameter that specifies the global limit on the number of [messages](../../../core/runtime/messages.md) that can be emitted in each round by the runtime. It will be set to `256`. * **`roothash.params.max_evidence_age`** is a new parameter that specifies the maximum age (in the number of rounds) of submitted evidence for [compute node slashing](../../../adrs/0005-runtime-compute-slashing.md). It will be set to `100`. ### **Staking** * **`staking.governance_deposits` ** are the tokens collected from governance proposal deposits. The initial balance will be set to `"0"`. * **`staking.params.allow_escrow_messages`** is a new parameter indicating whether to enable support for the newly added `AddEscrow` and `ReclaimEscrow` [runtime messages](../../../core/runtime/messages.md) . It will be set to`true`. * **`staking.params.slashing.0`** will be renamed to **`staking.params.slashing.consensus-equivocation`**. * **`staking.params.slashing.consensus-light-client-attack.amount`** is a new parameter controlling how much to slash for light client attack. It will be set to `"100000000000"` (i.e. 100,000,000,000 base units, or 100 ROSE tokens). * **`staking.params.slashing.consensus-light-client-attack.freeze_interval` ** is a new parameter controlling the duration (in epochs) for which a node that has been slashed for light client attack is “frozen,” or barred from participating in the network's consensus committee. It will be set to `18446744073709551615` (i.e. the maximum value for a 64-bit unsigned integer) which means that any node slashed for light client attack will be, in effect, permanently banned from the network. ### **Committee Scheduler** * **`scheduler.params.max_validators`** is the maximum size of the consensus committee (i.e. the validator set). It will be increased from `80` to`100`. ### **Random Beacon** The **`beacon`** object contains parameters controlling the new [improved random beacon](../../../adrs/0007-improved-random-beacon.md) introduced in the Cobalt upgrade. * **`beacon.base`** is the network's starting epoch. It will be set to the epoch of Mainnet's state dump + 1, i.e. `5047`. * **`beacon.params.backend`** configures the random beacon backend to use. It will be set to `"pvss"` indicating that the beacon implementing a [PVSS (publicly verifiable secret sharing) scheme](../../../adrs/0007-improved-random-beacon.md) should be used. * **`beacon.params.pvss_parameters.participants`** is the number of participants to be selected for each beacon generation protocol round. It will be set to `20`. 
* **`beacon.params.pvss_parameters.threshold`** is the minimum number of participants which must successfully contribute entropy for the final output to be considered valid. It will be set to `10`.
* **`beacon.params.pvss_parameters.commit_interval`** is the duration of the Commit phase (in blocks). It will be set to `400`.
* **`beacon.params.pvss_parameters.reveal_interval`** is the duration of the Reveal phase (in blocks). It will be set to `196`.
* **`beacon.params.pvss_parameters.transition_delay`** is the duration of the post-Reveal phase (in blocks). It will be set to `4`.

### **Governance**

The **`governance`** object contains parameters controlling the network's [on-chain governance](../../../core/consensus/services/governance.md) introduced in the Cobalt upgrade.

* **`governance.params.min_proposal_deposit`** is the amount of tokens (in base units) that are deposited when creating a new proposal. It will be set to `"10000000000000"` (i.e. 10,000,000,000,000 base units, or 10,000 ROSE tokens).
* **`governance.params.voting_period`** is the number of epochs after which the voting for a proposal is closed and the votes are tallied. It will be set to `168`, which is expected to be approximately 7 days.
* **`governance.params.quorum`** is the minimum percentage of voting power that needs to be cast on a proposal for the result to be valid. It will be set to `75` (i.e. 75%).
* **`governance.params.threshold`** is the minimum percentage of `VoteYes` votes in order for a proposal to be accepted. It will be set to `90` (i.e. 90%).
* **`governance.params.upgrade_min_epoch_diff`** is the minimum number of epochs between the current epoch and the proposed upgrade epoch for the upgrade proposal to be valid. Additionally, it specifies the minimum number of epochs between two consecutive pending upgrades. It will be set to `336`, which is expected to be approximately 14 days.
* **`governance.params.upgrade_cancel_min_epoch_diff`** is the minimum number of epochs between the current epoch and the proposed upgrade epoch for the upgrade cancellation proposal to be valid. It will be set to `192`, which is expected to be approximately 8 days.

### **Consensus**

* **`consensus.params.max_evidence_num`** parameter will be removed and replaced by the **`consensus.params.max_evidence_size`** parameter.
* **`consensus.params.max_evidence_size`** is a new parameter specifying the maximum evidence size in bytes. It replaces the **`consensus.params.max_evidence_num`** parameter and will be set to `51200` (51,200 bytes, or 50 kB).
* **`consensus.params.state_checkpoint_interval`** parameter controls the interval (in blocks) on which state checkpoints should be taken. It will be set to `10000`.
* **`consensus.params.state_checkpoint_num_kept`** parameter specifies the number of past state checkpoints to keep. It will be set to `2`.
* **`consensus.params.state_checkpoint_chunk_size`** parameter controls the chunk size (in bytes) that should be used when creating state checkpoints. It will be set to `8388608` (8,388,608 bytes, or 8 MB).
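Once the new `oasis-2` genesis file is published, you can spot-check a few of the values described above. A minimal sketch using `jq` (the `genesis.json` filename is illustrative; the paths simply mirror the parameter names listed in this section):

```
jq '.scheduler.params.max_validators,
    .governance.params.quorum,
    .governance.params.threshold,
    .consensus.params.max_evidence_size' genesis.json
```

The printed values should match the ones listed above.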
### Other * **`extra_data`** will be set back to the value in the [Mainnet genesis file](https://github.com/oasisprotocol/mainnet-artifacts/releases/tag/2020-11-18) to include the Oasis network's genesis quote: _”_[_Quis custodiet ipsos custodes?_](https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%3F)_” \[submitted by Oasis Community Member Daniyar Borangaziyev]:_ ``` "extra_data": { "quote": "UXVpcyBjdXN0b2RpZXQgaXBzb3MgY3VzdG9kZXM/IFtzdWJtaXR0ZWQgYnkgT2FzaXMgQ29tbXVuaXR5IE1lbWJlciBEYW5peWFyIEJvcmFuZ2F6aXlldl0=" } ``` ### Runtime State Root Migration Additionally, each runtime's state root will need to be updated for the [runtime storage migration](../upgrade-logs/mainnet.md#runtime-operators) that is performed during this upgrade. At this time, there is only one active runtime on the Mainnet, namely Second State's Oasis Ethereum ParaTime with ID (Base64-encoded) `AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA/wM=`. After completing the runtime storage migration, Second State will communicate the new state root of their runtime and the genesis document needs to be updated as follows: * **`roothash.runtime_states..state_root`** will be set to the (Base64-encoded) migrated state root. * **`registry.runtimes.[id=].genesis.state_root`** will be set to the (Base64-encoded) migrated state root. * **`registry.runtimes.[id=].genesis.state`** will be set to `null`. * **`registry.runtimes.[id=].genesis.round`** will be set to the same value as **`roothash.runtime_states..round`**. ## Launch Support The Oasis team will be offering live video support during the Cobalt upgrade. Video call link and calendar details will be shared with node operators via email and Slack. For any additional support, please reach out via the [**#nodeoperators** Oasis Community Slack channel](../../../get-involved/README.md) with your questions, comments, and feedback related to Cobalt upgrade. To follow the network, please use one of the many [community block explorers][archived - block explorers]. [archived - block explorers]: https://github.com/oasisprotocol/docs/blob/0aeeb93a6e7c9001925923661e4eb3340ec4fb4b/docs/general/community-resources/community-made-resources.md#block-explorers--validator-leaderboards-block-explorers-validator-leaderboards --- ## Damask Upgrade This document provides an overview of the changes for the Damask Mainnet upgrade. The Damask upgrade on Mainnet is scheduled at epoch **13402** which will happen around **Apr 11, 2022 at 8:30 UTC**. ## Major Features All features for the Damask upgrade are implemented as part of **Oasis Core 22.1.x** release series which is a consensus protocol-breaking release. Summary of the major features is as follows: - **Random Beacon**: The random beacon is used by the consensus layer for ParaTime committee elections and is a critical component in providing security for ParaTimes with an open admission policy. To make the random beacon more performant and scalable, the upgrade transitions the election procedure to one that is based on cryptographic sortition of Verifiable Random Function (VRF) outputs. For more details, see [ADR 0010]. - **On-Chain Governance**: The upgrade simplifies the governance by replacing separate quorum and threshold parameters with a single unified stake threshold parameter that represents the percentage of _yes_ votes in terms of total voting power for a governance proposal to pass. 
- **ParaTime Performance**: By simplifying the protocol (executor and storage committees are merged into a single committee) the upgrade improves ParaTime committee performance and opens the way for even more improvements on the ParaTime side. It also leads to simplified configuration of ParaTime nodes. - **ParaTime Upgrades**: After the Damask upgrade, runtime descriptors will include information regarding supported versions, and the epoch from which they are valid, which will allow ParaTime upgrades to happen without incurring downtime by having upgrades and the descriptor changes pre-staged well in advance of the upgrade epoch. For more details, see [ADR 0013]. - **ParaTime Packaging**: This upgrade changes runtime bundles to be unified across all supported TEE types and self describing so that configuring ParaTimes is only a matter of passing in the runtime bundle file. - **Consensus and ParaTime Communication**: The upgrade adds support for incoming runtime messages where consensus layer transactions can trigger actions inside ParaTimes. For more details, see [ADR 0011]. The upgrade also adds support for runtime message results which extends the results of the emitted runtime messages with relevant information beyond indicating whether the message execution was successful or not. For more details, see [ADR 0012]. In addition to the specified additional features, we also propose the **validator set size** to be **increased from** the current **110 to 120** as discussed in the [Oasis Community Slack #nodeoperators channel][slack-validator-increase]. This upgrade marks an important milestone for the Oasis Network, as it sets the foundation for unlocking the network's full capabilities. [ADR 0010]: /adrs/0010-vrf-elections [ADR 0013]: /adrs/0013-runtime-upgrades [ADR 0011]: /adrs/0011-incoming-runtime-messages [ADR 0012]: /adrs/0012-runtime-message-results [slack-validator-increase]: https://oasiscommunity.slack.com/archives/CMUSJCRFA/p1647881564057319?thread_ts=1647448573.197229&cid=CMUSJCRFA ## Mechanics of the Upgrade On Mar 24, 2022, the Oasis Protocol Foundation submitted the upgrade governance proposal with id of `2` which proposed upgrading the network at epoch 13402. In addition to submitting the actual governance proposal to the network, Oasis Protocol Foundation also published the [Damask Upgrade Proposal discussion] to the [Oasis Community Forum on GitHub]. Node operators which had an active validator node in the validator set had 1 week to cast their vote. Validators representing more than 88% of the total stake in the consensus committee participated in the vote, and 100% of them voted _yes_ for the proposal. The upgrade will be performed by exporting the network's state at the upgrade epoch, updating the [genesis document][genesis-doc], upgrading the Oasis Node and the ParaTime binaries and starting a new network from the new genesis file. This will require coordination between node operators and the Oasis Protocol Foundation. All nodes will need to configure the new genesis file that they can generate or verify independently and reset/archive any existing state from Mainnet. Once enough nodes (representing 2/3+ of stake) have taken this step, the upgraded network will start. For the actual steps that node operators need to make on their nodes, see the [Upgrade Log][upgrade-log-damask]. 
[Damask Upgrade Proposal discussion]: https://github.com/oasisprotocol/community/discussions/30 [Oasis Community Forum on GitHub]: https://github.com/oasisprotocol/community [upgrade-log-damask]: ../upgrade-logs/mainnet.md#damask-upgrade ## Proposed State Changes The following parts of the genesis document will be updated: This section will be updated with the exact details as we get closer to the upgrade. For a more detailed explanation of the parameters below, see the [Genesis Document][genesis-doc] docs. ### **General** * **`height`** will be set to the height of the Mainnet state dump + 1, `8048956`. * **`genesis_time`** will be set to`2022-04-11T09:30:00Z`. * **`chain_id`** will be set to `oasis-3`. * **`halt_epoch`** will be bumped by `10000` (a little more than a year) to `23807`. ### **Registry** * **`registry.runtimes`** list contains the registered runtimes' descriptors. In this upgrade, all runtime descriptors will be migrated from version `2` to version `3`. The migration will be done automatically with the `oasis-node debug fix-genesis` command. * **`registry.runtimes.[id=000000000000000000000000000000000000000000000000e2eaa99fc008f87f].deployments.version`** specifies Emerald ParaTime's version on Mainnet. It will be upgraded from version 7.1.0 to 8.2.0 and hence the configuration needs to be manually updated to: ``` "version": { "major": 8, "minor": 2 }, ``` * **`registry.runtimes.[id=000000000000000000000000000000000000000000000000e199119c992377cb].deployments`** specifies Cipher ParaTime's version and TEE identity on Mainnet. It will be upgraded from version 1.0.0 to 1.1.0 and hence the configuration needs to be manually updated to: ``` "version": { "major": 1, "minor": 1 }, "valid_from": 0, "tee": "oWhlbmNsYXZlc4GiaW1yX3NpZ25lclggQCXat+vaH77MTjY3YG4CEhTQ9BxtBCL9N4sqi4iBhFlqbXJfZW5jbGF2ZVggoiJgre0cDF5arUk9wh0X9eGWr5cHb8LY0A3/msmznHc=" ``` * **`registry.suspended_runtimes`** list contains the suspended registered runtimes' descriptors. In this upgrade, all runtime descriptors for suspended runtimes will be migrated from version `2` to version `3`. The migration will be done automatically with the `oasis-node debug fix-genesis` command. * Inactive registered entities in **`registry.entities`** (and their corresponding nodes in **`registry.nodes`**) that don't pass the [minimum staking thresholds] will be removed. The removal will be done automatically with the `oasis-node debug fix-genesis` command. ### **Root Hash** * **`roothash.params.gas_costs.submit_msg`** is a new parameter that specifies the cost for a submit message transaction. It will be set to `1000`. This will be done automatically with the `oasis-node debug fix-genesis` command. * **`roothash.params.max_in_runtime_messages`** is a new parameter that specifies the maximum number of incoming messages that can be queued for processing by a runtime. It will be set to `128`. This will be done automatically with the `oasis-node debug fix-genesis` command. * **`roothash.runtime_state`** contains the state roots of the runtimes. Empty fields will be omitted. This will be done automatically with the `oasis-node debug fix-genesis` command. ### **Staking** * **`staking.params.thresholds`** specifies the minimum number of tokens that need to be staked in order for a particular entity or a particular type of node to participate in the network. The `node-storage` key is removed since Oasis Core 22.0+ removes separate storage nodes. For more details, see: [Oasis Core #4308][oasis-core-4308]. 
This will be done automatically with the `oasis-node debug fix-genesis` command.

* **`staking.params.min_transfer`** is a new parameter that specifies the minimum number of tokens one can transfer. It will be set to 10,000,000 base units, or 0.01 ROSE tokens. This will be done automatically with the `oasis-node debug fix-genesis` command.
* **`staking.params.min_transact_balance`** is a new parameter that specifies the minimum general balance an account must have to be able to perform transactions on the network. It will be set to 0 base units, meaning this requirement is currently not enforced. This will be done automatically with the `oasis-node debug fix-genesis` command.

### **Committee Scheduler**

* **`scheduler.params.min_validators`** is the minimum size of the consensus committee (i.e. the validator set). It will be increased from `15` to `30`.
* **`scheduler.params.max_validators`** is the maximum size of the consensus committee (i.e. the validator set). It will be increased from `110` to `120`.

### **Random Beacon**

The **`beacon`** object contains parameters controlling the new [improved VRF-based random beacon][ADR 0010] introduced in the Damask upgrade.

* **`beacon.base`** is the network's starting epoch. It will be set to the epoch of Mainnet's state dump + 1, `13402`.
* **`beacon.params.backend`** configures the random beacon backend to use. It will be set to `"vrf"` indicating that the beacon implementing the [VRF-based random beacon][ADR 0010] should be used. This will be done automatically with the `oasis-node debug fix-genesis` command.

The **`beacon.params.vrf_parameters`** control the behavior of the new [VRF-based random beacon][ADR 0010]:

* **`beacon.params.vrf_parameters.alpha_hq_threshold`** is the minimal number of nodes that need to contribute a VRF proof for the beacon's output to be valid. It will be set to `20`. This will be done automatically with the `oasis-node debug fix-genesis` command.
* **`beacon.params.vrf_parameters.interval`** is the duration of an epoch. It will be set to `600`. This will be done automatically with the `oasis-node debug fix-genesis` command.
* **`beacon.params.vrf_parameters.proof_delay`** is the number of blocks from the beginning of an epoch during which a node can still submit its VRF proof. It will be set to `400`. This will be done automatically with the `oasis-node debug fix-genesis` command.
* **`beacon.params.vrf_parameters.gas_costs.vrf_prove`** specifies the cost for a VRF prove transaction. It will be set to `1000`. This will be done automatically with the `oasis-node debug fix-genesis` command.

The **`beacon.params.pvss_parameters`** control the behavior of the [previous random beacon implementing a PVSS scheme][pvss-beacon]. Since PVSS is no longer supported, all its configuration options are removed as well.

### **Governance**

* **`governance.params.stake_threshold`** is a new parameter specifying the single unified stake threshold representing the percentage of `VoteYes` votes in terms of total voting power for a governance proposal to pass. It will be set to `68` (i.e. 68%). This will be done automatically with the `oasis-node debug fix-genesis` command.
* **`governance.params.quorum`** is the minimum percentage of voting power that needs to be cast on a proposal for the result to be valid. It will be removed since it is being replaced by the single **`governance.params.stake_threshold`** parameter. This will be done automatically with the `oasis-node debug fix-genesis` command.
* **`governance.params.threshold`** is the minimum percentage of `VoteYes` votes in order for a proposal to be accepted. It will be removed since it is being replaced by the single **`governance.params.stake_threshold`** parameter. This will be done automatically with the `oasis-node debug fix-genesis` command.

### **Consensus**

* **`consensus.params.state_checkpoint_interval`** parameter controls the interval (in blocks) on which state checkpoints should be taken. It will be increased from `10000` to `100000` to improve nodes' performance since computing checkpoints is I/O intensive.

### Other

* **`extra_data`** will be set back to the value in the [Mainnet genesis file] to include the Oasis Network's genesis quote: _”_[_Quis custodiet ipsos custodes?_][mainnet-quote]_” \[submitted by Oasis Community Member Daniyar Borangaziyev]:_

```
"extra_data": {
  "quote": "UXVpcyBjdXN0b2RpZXQgaXBzb3MgY3VzdG9kZXM/IFtzdWJtaXR0ZWQgYnkgT2FzaXMgQ29tbXVuaXR5IE1lbWJlciBEYW5peWFyIEJvcmFuZ2F6aXlldl0="
}
```

[genesis-doc]: ../../reference/genesis-doc.md#parameters
[minimum staking thresholds]: ../../reference/genesis-doc.md#staking-thresholds
[oasis-core-4308]: https://github.com/oasisprotocol/oasis-core/pull/4308
[pvss-beacon]: ../../../adrs/0007-improved-random-beacon.md
[Mainnet genesis file]: https://github.com/oasisprotocol/mainnet-artifacts/releases/tag/2020-11-18
[mainnet-quote]: https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%3F

## Launch Support

The Oasis team will be offering live video support during the Damask upgrade. Video call link and calendar details will be shared with node operators via email and Slack.

For any additional support, please reach out via the [**#nodeoperators** Oasis Community Slack channel][node-operators-slack] with your questions, comments, and feedback related to the Damask upgrade.

To follow the network, please use one of the many [community block explorers][archived - block explorers].

[node-operators-slack]: ../../../get-involved/README.md
[archived - block explorers]: https://github.com/oasisprotocol/docs/blob/0aeeb93a6e7c9001925923661e4eb3340ec4fb4b/docs/general/community-resources/community-made-resources.md#block-explorers--validator-leaderboards-block-explorers-validator-leaderboards

---

## Eden Upgrade

This document provides an overview of the changes for the Eden Mainnet upgrade. The Eden upgrade on Mainnet is scheduled at epoch 28017 which will happen on 2023-11-29 at 10:02 AM UTC.

## Major Features

All features for the Eden upgrade are implemented as part of the **Oasis Core 23.0.x** release series which is a consensus protocol-breaking release. Summary of the major features is as follows:

- **On-Chain Governance**:
- The upgrade adds support for delegators to participate in on-chain governance. So far, only validators have been able to vote on governance proposals on upgrades. From now on, anyone who is staking will be able to override the vote of their validator.
- Prior to this upgrade, validators could vote solely on one specific type of proposal, namely upgrades. This upgrade adds support for voting on parameter changes. An example of such a change includes voting on staking rewards schedule modifications.
- **Node Operators UX**:
- With the enhancement of the P2P Stack in the latest upgrade, we've integrated support for state sync. This improvement simplifies the process of initiating a new node, enabling immediate synchronization without the need for manual RPC node configuration.
- The upgrade lays the foundation for the system to distribute bundles automatically. While this update doesn't enable this feature, in the future nodes will have the capability to upgrade automatically to the appropriate version immediately after a governance vote passes. - Enhancements have been made to bolster security and optimize the efficiency of specific queries. - **Key Rotations**: - The upgrade introduces major updates to facilitate key rotations, both for ephemeral and state keys. - Key derivation for ephemeral keys will be modified such that the master ephemeral key will be rotated on every epoch and old entropy will be discarded after a few epochs. As a result, past transaction keys will be irretrievable unless the user keeps additional data to enable disclosure of past transactions. - The upgrade introduces support for state key rotations, incorporating key generations. ParaTimes can now rotate state keys daily, and use new keys for newer state. This facilitates re-encryption and, in the event of a TCB recovery, helps to partially mitigate the effects of compromised nodes. - **ParaTime Confidential Query Latency**: - The current confidential ParaTimes have a block delay when querying confidential state following transaction execution. This is due to the fact that the ParaTime needs to independently verify finalization in the consensus layer to guard against attacks. The upcoming version introduces same-block execution. As soon as a block gets finalized on the consensus layer, ParaTimes can promptly obtain the latest state root hash and verify the state without delay. This reduces latency for those looking to query outcomes of their transactions, e.g. dApp users and developers. - **ParaTime Upgrades**: - The upgrade reduces downtime associated with upgrading confidential ParaTimes. Previously, an upgrade to Sapphire mandated an epoch of downtime. Now, as compute nodes transition to the new version, the upgrade will be instantaneous, ensuring no delays or downtime. - **ParaTime Performance**: - The upgrade implements a series of modifications to enhance the robustness of runtimes. These improvements encompass better response mechanisms for SGX TCB recovery events, expanded support for new SGX platforms, and improved proof validation within runtimes. - The upgrade improves runtime performance, especially in scenarios involving node failures. Should nodes malfunction, the impact on performance will now be significantly reduced. ## Mechanics of the Upgrade ### Voting On 2023-11-14, an upgrade proposal was proposed, which was accepted with more than 86% of the total voting power of the validator set. The upgrade is now scheduled for epoch **28017** (2023-11-29, start at 10:02 AM UTC). The Eden upgrade proposal has the ID number 3. For optimal voting experience, we recommend using the [Oasis CLI]. Follow these steps to cast your vote: 1. [Import your keys into the wallet] ``` oasis wallet import-file my_entity entity.pem ``` 2. [Cast your vote]: ``` oasis network governance cast-vote 3 yes ``` ### Upgrade Instructions The following steps should be performed only after the network has reached the upgrade epoch and has halted: 1. Wait for the network to reach the upgrade epoch and halt. The node will automatically stop without any action required on your part. After the network has halted, proceed to the next steps. 2. Download the Mainnet genesis file published in the [Mainnet 2023-11-29 release]. 
Mainnet state at epoch **28017** will be exported and migrated to a 23.0.x compatible genesis file. The new genesis file will be published on the above link soon after reaching the upgrade epoch. 3. Verify the provided Mainnet upgrade genesis file by comparing it to the local network state dump. Find the `genesis-oasis-3-at-HEIGHT.json` file in the `exports` subdirectory in your data dir (e.g. `/node/`, `/srv/oasis/node/`) and run `sha256sum` on it. Afterwards, compare it with the hash that we will share on the `#node-operators` Discord channel. The state changes are described in the [State Changes](#state-changes) section below. 4. Replace the old genesis file with the new Mainnet genesis file. 5. Ensure your node will remain stopped by disabling auto-starting via your process manager (e.g., [systemd] or [Supervisor]). 6. Back up the entire data directory of your node. Verify that the backup includes the following folders: - for consensus: `tendermint/abci-state` and `tendermint/data` - for runtimes: `runtimes/*/mkvs_storage.badger.db` and `runtimes/*/worker-local-storage.badger.db` 7. [Wipe state]. This must be performed _before_ replacing the Oasis Node binary. In case you are upgrading ParaTimes/runtimes ensure you read the following section: State of ParaTimes/runtimes is not affected by this upgrade and MUST NOT be wiped. Wiping state for confidential ParaTimes will prevent your compute or key manager node from transitioning to the new network. To safely wipe the blockchain state on a runtime while preserving the runtime state, follow these steps: 1. **Dry run:** initiate a dry run to preview which files will be deleted by running the following command: ``` bash # specify 'datadir' as your node's data directory oasis-node unsafe-reset \ --datadir=/node/data \ --dry_run ``` 2. **Wipe blockchain state:** after reviewing the dry run results, proceed with the reset by running: ``` bash # specify 'datadir' as your node's data directory oasis-node unsafe-reset \ --datadir=/node/data ``` Transitioning confidential ParaTimes to the new network requires local state that is sealed to the CPU. This also means that bootstrapping a new node on a separate CPU immediately after the network upgrade will not be possible until an updated ParaTime containing new trust roots is released and adopted. 8. Replace the old version of Oasis Node with version [23.0.7]. The Oasis Core 23.0.7 binary in our published releases is built only for Ubuntu 22.04 (GLIBC>=2.32). You'll have to build it yourself if you're using prior Ubuntu versions (or other distributions using older system libraries). 9. Perform any needed [configuration changes](#configuration-changes) described below. 10. (only Rosetta Gateway operators) Replace old version of Oasis Rosetta Gateway with version [2.6.0][rosetta-gw-2.6.0]. 11. (only Emerald paratime operators) Upgrade Emerald to version 11.0.0. 12. Start your node and re-enable auto-starting via your process manager. 
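As a concrete illustration of the genesis verification in step 3 above, here is a minimal sketch assuming `/node/data` as the node's data directory (adjust the path to your setup):

```
# Hash the locally exported state dump and compare the output with the hash
# shared in the #node-operators Discord channel.
sha256sum /node/data/exports/genesis-oasis-3-at-*.json
```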
[Oasis CLI]: ../../../build/tools/cli/README.md
[Import your keys into the wallet]: ../../../build/tools/cli/wallet.md#import-file
[Cast your vote]: ../../../build/tools/cli/network.md#governance-cast-vote
[Mainnet 2023-11-29 release]: https://github.com/oasisprotocol/mainnet-artifacts/releases/tag/2023-11-29
[systemd]: https://systemd.io/
[Supervisor]: http://supervisord.org/
[Wipe state]: ../../run-your-node/maintenance/wiping-node-state.md#state-wipe-and-keep-node-identity
[23.0.7]: https://github.com/oasisprotocol/oasis-core/releases/tag/v23.0.7
[rosetta-gw-2.6.0]: https://github.com/oasisprotocol/oasis-rosetta-gateway/releases/tag/v2.6.0

### Configuration Changes

To see the full extent of the changes, examine the [Change Log] of the 23.0.x release.

The node configuration has been refactored so that everything is now configured via a YAML configuration file and **configuring via command-line options is no longer supported**. Some configuration options have changed and so the configuration file needs to be updated. To make this step easier, a command-line tool has been provided that will perform most of the changes automatically. You can run it with:

```
oasis-node config migrate --in config.yml --out new-config.yml
```

The migration subcommand logs the various changes it makes and warns you if a config option is no longer supported, etc. At the end, any unknown sections of the input config file are printed to the terminal to give you a chance to review them and make manual changes if required.

Note that the migration subcommand does not preserve comments and order of sections from the input YAML config file. You should always carefully read the output of this command, as well as compare the generated config file with the original before using it.

After you are satisfied with the new configuration file, replace the old file with the new one as follows:

```
mv new-config.yml config.yml
```

The configuration format for seed nodes has changed and it now requires the node's P2P public key to be used. In case your old configuration file contains known Mainnet seed nodes, this transformation is performed automatically. However, if it contains unknown seed nodes, the conversion will not happen automatically and you may need to obtain the seed node's P2P public key. For Mainnet you can use the following addresses:

* `H6u9MtuoWRKn5DKSgarj/dzr2Z9BsjuRHgRAoXITOcU=@35.199.49.168:26656`
* `H6u9MtuoWRKn5DKSgarj/dzr2Z9BsjuRHgRAoXITOcU=@35.199.49.168:9200`

Please be aware that every seed node should be configured to listen on two distinct ports. One is dedicated to peer discovery within the CometBFT P2P network, while the other is used to bootstrap the Oasis P2P network.

For those running the Emerald ParaTime, please make sure to upgrade to Emerald version 11.0.0 before or soon after the network upgrade is completed.

[Change Log]: https://github.com/oasisprotocol/oasis-core/blob/stable/23.0.x/CHANGELOG.md

### Data Directory Changes

The subdirectory (located inside the node's data directory) used to store consensus-related data, previously called `tendermint` (after the consensus layer protocol backend), has been renamed to `consensus` in Oasis Core 23.0.x. If any of your scripts rely on specific directory names, please make sure to update them to reflect the changed directory name.

### State Changes

The following parts of the genesis document will be updated:

For a more detailed explanation of the parameters below, see the [Genesis Document] docs.
All state changes will be done automatically with the migration command provided by the new version of `oasis-node`. It can be used as follows to derive the same genesis file from an existing state dump at the correct height (assuming there is a `genesis.json` present in the current working directory): ``` oasis-node genesis migrate --genesis.new_chain_id oasis-4 ``` #### General * **`chain_id`** will be set to `oasis-4`. * **`halt_epoch`** will be removed as it is no longer used. #### Registry * **`registry.runtimes[].txn_scheduler.propose_batch_timeout`** specifies how long to wait before accepting proposal from the next backup scheduler. It will be set to `5000000000` (5 seconds). Previously the value was represented in the number of consensus layer blocks. * **`registry.params.gas_costs.prove_freshness`** specifies the cost of the freshness proof transaction. It will be set to `1000`. * **`registry.params.gas_costs.update_keymanager`** specifies the cost of the keymanager policy update transaction. It will be removed as the parameter has been moved under `keymanager.params.gas_costs.update_policy`. * **`registry.params.tee_features`** specify various TEE features supported by the consensus layer registry service. These will be set to the following values to activate the new features: ```json "tee_features": { "sgx": { "pcs": true, "signed_attestations": true, "max_attestation_age": 1200 }, "freshness_proofs": true } ``` * **`registry.params.max_runtime_deployments`** specifies the maximum number of runtime deployments that can be specified in the runtime descriptor. It will be set to `5`. #### Root Hash * **`roothash.params.max_past_roots_stored`** specifies the maximum number of past runtime state roots that are stored in consensus state for each runtime. It will be set to `1200`. #### Staking * **`staking.params.commission_schedule_rules.min_commission_rate`** specifies the minimum commission rate. It will be set to `0` to maintain the existing behavior. * **`staking.params.thresholds.node-observer`** specifies the stake threshold for registering an observer node. It will be set to `100000000000` base units (or `100` tokens), same as for existing compute nodes. #### Key Manager * **`keymanager.params.gas_costs`** specify the cost of key manager transactions. These will be set to the following values: ```json "gas_costs": { "publish_ephemeral_secret": 1000, "publish_master_secret": 1000, "update_policy": 1000 } ``` #### Random Beacon * **`beacon.base`** is the network's starting epoch. It will be set to the epoch of Mainnet's state dump + 1, `28017`. #### Governance * **`governance.params.enable_change_parameters_proposal`** specifies whether parameter change governance proposals are allowed. It will be set to `true`. #### Consensus * **`consensus.params.max_block_size`** specifies the maximum block size in the consensus layer. It will be set to `1048576` (1 MiB). 
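After running the migration command above, you can spot-check a few of the migrated values before using the new genesis file. A minimal sketch with `jq`, assuming the migrated output is saved as `genesis-oasis-4.json` (a hypothetical filename; adjust it to wherever your migrated genesis ends up, and note that the paths simply mirror the parameter names in this section):

```
jq '.chain_id,
    .registry.params.max_runtime_deployments,
    .governance.params.enable_change_parameters_proposal,
    .consensus.params.max_block_size' genesis-oasis-4.json
```

The printed values should match the ones described above.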
#### Other * **`extra_data`** will be set back to the value in the [Mainnet genesis file] to include the Oasis Network's genesis quote: _”_[_Quis custodiet ipsos custodes?_][mainnet-quote]_” \[submitted by Oasis Community Member Daniyar Borangaziyev]:_ ``` "extra_data": { "quote": "UXVpcyBjdXN0b2RpZXQgaXBzb3MgY3VzdG9kZXM/IFtzdWJtaXR0ZWQgYnkgT2FzaXMgQ29tbXVuaXR5IE1lbWJlciBEYW5peWFyIEJvcmFuZ2F6aXlldl0=" } ``` [Genesis Document]: ../../reference/genesis-doc.md#parameters [Mainnet genesis file]: https://github.com/oasisprotocol/mainnet-artifacts/releases/tag/2020-11-18 [mainnet-quote]: https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%3F --- ## Upgrade to Mainnet This document provides an overview of the proposed criteria and changes to upgrade from Mainnet Beta to Mainnet. This has been [reviewed and approved by community members and validators of the Oasis Network](https://github.com/oasisprotocol/community/discussions/1) and is being reproduced and summarized here for easy access. As proposed by the community, the upgrade from Mainnet Beta to Mainnet will kick off on November 18, 2020 at 16:00 UTC. ## Criteria for Mainnet In order to transition from Mainnet Beta to Mainnet, community members have collectively suggested the following criteria be met. This is a collection of community feedback. * [x] Validators representing more than 2/3 of stake in the initial consensus committee successfully get online to launch Mainnet Beta. * [x] Beta network runs successfully for at least 10 days. * [x] No major security risks on the Beta Network have been discovered or otherwise unremediated and untested in the past 10 days. * [x] At least 50 validators run on the Network. * _Throughout Mainnet Beta there have been between 75 and 77 active validators on the network._ * [x] There are NO Oasis Protocol Foundation or Oasis Labs nodes serving as validators. * [x] At least one block explorer exists to track network stability, transactions, and validator activity. * _There is much more. See_ [_Block Explorers & Validator Leaderboards_][archived - block explorers] _part of our docs._ * [x] At least one qualified custodian supports the native ROSE token. * _Currently, Anchorage and Finoa support the ROSE token. See_ [_Custody Providers_](../../../general/manage-tokens/holding-rose-tokens/custody-providers.md) _part of our docs._ * [x] At least one web wallet or hardware wallet supports native ROSE token. * _Currently, Bitpie mobile wallet and RockX Ledger-backed web wallet are available and support ROSE token transfers. Support for staking and delegation is in development. See_ [_3rd Party Wallets_](../../../general/manage-tokens/holding-rose-tokens/) _and_ [_Oasis Wallets_](../../../general/manage-tokens/oasis-wallets) _parts of our docs._ [archived - block explorers]: https://github.com/oasisprotocol/docs/blob/0aeeb93a6e7c9001925923661e4eb3340ec4fb4b/docs/general/community-resources/community-made-resources.md#block-explorers--validator-leaderboards-block-explorers-validator-leaderboards ## Mechanics of Upgrading to Mainnet Upgrading from Mainnet Beta to Mainnet will require a coordinated upgrade of the Network. All nodes will need to configure a new genesis file that they can generate or verify independently and reset/archive any state from Mainnet Beta. Once enough (representing 2/3+ of stake) nodes have taken this step, the network will start. 
## Proposed Changes From Mainnet Beta to Mainnet

The Mainnet genesis file is intended to be as close as possible to the state of the Mainnet Beta network at the time of the upgrade. That includes retaining validator token balances, retaining genesis file wallet allocations, and the block height at the time of the snapshot.

In addition, after receiving additional feedback from the community, the Oasis Protocol Foundation has proposed to increase the staking rewards. In the new proposed model, staking rewards will start at 20% (annualized) and range from 20% to 2% over the first 4 years of the network (see more in the updated [Token Metrics and Distribution](../../../general/oasis-network/token-metrics-and-distribution.mdx) doc).

The following parts of the genesis file will be updated:

* **`height`** will remain the same as at the time of the snapshot of Mainnet Beta, i.e. `702000`.
* **`genesis_time`** will be set to `2020-11-18T16:00:00Z`.
* **`chain_id`** will be set to `oasis-1`.
* **`halt_epoch`** will be set to `9940` (approximately 1 year from Mainnet launch).
* **`staking.params.disable_transfers`** will be omitted (or set to `false`) to enable transfers.
* **`staking.params.reward_schedule`** will be updated to reflect the updated reward schedule as mentioned above.
* **`staking.common_pool`** will be increased by 450M ROSE to fund increased staking rewards.
* **`staking.ledger.oasis1qrad7s7nqm4gvyzr8yt2rdk0ref489rn3vn400d6`**, which corresponds to the Community and Ecosystem Wallet, will have its `general.balance` reduced by 450M ROSE to `1183038701000000000`, with that amount transferred to the Common Pool to fund increased staking rewards.
* **`extra_data`** will be set back to the value in the [Mainnet Beta genesis file](https://github.com/oasisprotocol/mainnet-artifacts/releases/download/2020-10-01/genesis.json) to include the Oasis network's genesis quote: _”_[_Quis custodiet ipsos custodes?_](https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%3F)_” \[submitted by Oasis Community Member Daniyar Borangaziyev\]:_

```diff
"extra_data": {
  "quote": "UXVpcyBjdXN0b2RpZXQgaXBzb3MgY3VzdG9kZXM/IFtzdWJtaXR0ZWQgYnkgT2FzaXMgQ29tbXVuaXR5IE1lbWJlciBEYW5peWFyIEJvcmFuZ2F6aXlldl0="
}
```

See the updated [Network Parameters](../../network/mainnet.md) for the published Mainnet genesis file.

For more detailed instructions on how to verify the provided Mainnet genesis file by comparing it to a network state dump, see the [Handling Network Upgrades](../../run-your-node/maintenance/handling-network-upgrades.md#example-diff-for-mainnet-beta-to-mainnet-network-upgrade) guide.

Mainnet will use [**Oasis Core 20.12.2**](https://github.com/oasisprotocol/oasis-core/releases/tag/v20.12.2).

## Mainnet Launch Support

The Oasis team will be offering live video support during the launch of Mainnet. Video call link and calendar details will be shared with node operators via email and Slack.

For any additional support, please reach out via the [**#node-operators** channel at the Oasis Network Community server on Discord](../../../get-involved/README.md) with your questions, comments, and feedback related to Mainnet.

To follow the network, please use one of the many community block explorers including [oasisscan.com](https://www.oasisscan.com/).

---

## Run your node

The Oasis Network consists of several types of nodes, each serving distinct roles to maintain the functionality, security, and decentralization of the network. In this section you can find descriptions of the main types of nodes within the Oasis Network.
## Validator Node A [Validator Node] is an essential component, as Oasis Network uses proof-of-stake (PoS) consensus mechanisms. It is responsible for verifying transactions and proposing new blocks to be added to the blockchain. Validator nodes operate on the consensus layer by staking the network's tokens, which grants them the right to participate in the consensus process. This process involves validating transactions, signing blocks, and ensuring the integrity of the blockchain. [Validator Node]: ./validator-node.mdx ## Compute Nodes [Compute Nodes] are responsible for executing smart contracts and processing transactions within a specific ParaTime (Parallel Runtime). These nodes handle the actual computation tasks, such as running decentralized applications (dApps), performing data processing, and executing privacy-preserving smart contracts. - **Sapphire Compute Node** is responsible for executing EVM-compatible privacy-preserving smart contracts and processing transactions within the Sapphire ParaTime. These nodes validate and execute transactions while maintaining the confidentiality of sensitive data, which is a crucial aspect of applications that handle private information or require enhanced security. This is achieved through trusted execution environments (TEEs) that ensure data remains encrypted and confidential, even while being processed. - **Cipher Compute Node** is responsible for executing privacy-preserving smart contracts written in Oasis Wasm and processing transactions within the Cipher ParaTime. These nodes validate and execute transactions while maintaining the confidentiality of sensitive data, which is a crucial aspect of applications that handle private information or require enhanced security. This is achieved through trusted execution environments (TEEs) that ensure data remains encrypted and confidential, even while being processed. - **Emerald Compute Node** is responsible for executing EVM-compatible smart contracts and processing transactions within the Emerald ParaTime. It performs tasks such as validating transactions, running EVM-based smart contracts, and ensuring that the operations within the Emerald ParaTime are carried out efficiently. [Compute Nodes]: ./paratime-node.mdx ## Client Nodes A [Client Node] is a type of node within the Oasis Network that serves as an interface for users or other applications to interact with the blockchain. Unlike compute nodes, which handle transaction processing and smart contract execution, client nodes are primarily responsible for tasks such as querying the blockchain, submitting transactions, and retrieving other data from the network. - **[Non-Validator Node]** is a type of node in the Oasis Network that does not participate in the consensus process of validating and proposing new blocks. Instead, it has client node functions that support the network's operations and decentralization. - **[Sapphire Client Node]** is a specific type of client node within the Oasis Network that interacts with the Sapphire ParaTime. The Sapphire ParaTime is designed to support EVM-compatible confidential smart contracts and privacy-preserving decentralized applications (dApps) with strong privacy features and high performance. - **[Cipher Client Node]** is a type of node within the Oasis Network designed to interact specifically with the Cipher ParaTime. 
The Cipher ParaTime is known for its strong privacy features, allowing for the execution of confidential smart contracts and the development of privacy-preserving decentralized applications (dApps).
- **[ROFL Node]** is a **Sapphire Client Node** that supports the TEE and hosts one or more [ROFL apps].
- **[Observer Node]** is a special type of client node for confidential ParaTimes such as Sapphire and Cipher that supports confidential smart contract queries.
- **[Stateless Node]** is a special type of client node that operates without maintaining state, allowing it to bootstrap instantly, which makes it particularly well-suited for ROFL development.

[Non-Validator Node]: ./non-validator-node.mdx
[Client Node]: ./paratime-client-node.mdx
[Observer Node]: ./paratime-observer-node.mdx
[Stateless Node]: ./paratime-client-node.mdx#stateless-client-node-optional
[Sapphire Client Node]: ./paratime-client-node.mdx
[Cipher Client Node]: ./paratime-client-node.mdx
[ROFL Node]: ./rofl-node.mdx
[ROFL apps]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/rofl/README.mdx

## Archive Node

An [Archive Node] is a specialized node within the Oasis Network that stores the entire blockchain history, making it a crucial tool for in-depth analysis, development, and ensuring that the network's past states remain accessible.

[Archive Node]: ./archive-node.md

## Seed Node

A [Seed Node] is a type of node in the Oasis Network that serves a critical role in helping other nodes discover peers and join the network (*address book*). Unlike validator nodes, which participate in the consensus process, Seed Nodes do not play a direct role in consensus.

[Seed Node]: ./seed-node.md

## Key Manager Node

A [Key Manager Node] is a specialized node in the Oasis Network responsible for securely managing cryptographic keys used in confidential computing. These nodes are crucial for the network's privacy-preserving features, enabling secure encryption and decryption of data processed within Trusted Execution Environments (TEEs). They play a vital role in enabling the Oasis Network's secure, decentralized, and privacy-focused operations.

[Key Manager Node]: ./keymanager-node/README.md

## Services

### Rosetta Gateway

A [Rosetta Gateway] is a specialized service within the Oasis Network that implements the [Rosetta API] to provide a standardized and simplified interface for interacting with the blockchain. This service is crucial for enabling seamless integration between the Oasis Network and various external platforms, such as exchanges, wallets, custodians, and blockchain-based applications. The Rosetta Gateway connects to a [non-validator](#client-nodes) node.

[Rosetta Gateway]: https://github.com/oasisprotocol/oasis-rosetta-gateway
[Rosetta API]: https://docs.cdp.coinbase.com/mesh/docs/api-reference

### Public gRPC

The [Oasis gRPC protocol] is an efficient, real-time, and cross-platform communication protocol that allows developers to both manage the Oasis node and communicate with the Oasis network. While each Oasis node opens a gRPC socket, the public endpoints run a proxy and expose only a small, safe subset of the API calls publicly.

[Oasis gRPC protocol]: ../grpc.mdx

### Web3 Gateway

A [Web3 Gateway] enables interaction with the Oasis Network using the standard Web3 protocol, which is widely used in the Ethereum ecosystem.
It acts as a bridge between Web3-based applications and the Oasis Network, allowing developers to leverage the tools, libraries, and practices familiar in Ethereum development while benefiting from the unique features of the Oasis Network, such as privacy and confidentiality. The Web3 gateway connects to a [compute](#compute-nodes) or a [client](#client-nodes) Sapphire or Emerald node.

[Web3 Gateway]: ../web3.mdx

---

## Copy State from One Node to the Other

A network that's been running for some time can accrue a significant amount of state. This means bootstrapping a new Oasis node would take quite some time and resources (bandwidth, CPU) since it would need to fetch (download) and validate (replay) all the blocks from the genesis height onwards.

If you have access to an Oasis node that is synced with the latest height, you can simply copy the Oasis node's state from the synced node to the unsynced node. Another way to speed up bootstrapping an Oasis node is to sync the node using the [State Sync](sync-node-using-state-sync.md).

To bootstrap a new Oasis node by copying state from a synced Oasis node, first set up your new Oasis node as a [Validator Node](../validator-node.mdx), a [Non-validator Node](../non-validator-node.mdx) or a [ParaTime Node](../paratime-node.mdx).

Make sure you use the **exact same version of Oasis Core** on both the synced Oasis node and your new Oasis node to prevent **issues or state corruption** if an Oasis node's state is opened **with an incompatible version** of Oasis Core.

Before starting your new Oasis node, do the following:

1. Stop the synced Oasis node. If an Oasis node is **not stopped** before its state is copied, its on-disk state might not be consistent and up-to-date. This can lead to **state corruption** and inability to use the state with your new Oasis node.

2. Copy the following directories from your synced Oasis node's **data directory** (e.g. `/node/data`) to your new Oasis node's data directory:

   * `consensus/state`
   * `consensus/data`

   You could also copy the whole `consensus` directory from your synced Oasis node's working directory. In that case, you must **omit** the **`oasis_priv_validator.json` file**, otherwise starting your new Oasis node will fail with something like:

   ```json
   {"caller":"node.go:541","err":"cometbft/crypto: public key mismatch, state corruption?: %!w()","level":"error","module":"oasis-node","msg":"failed to initialize cometbft service","ts":"2023-11-25T14:13:17.919296668Z"}
   ```

   If you are copying data from a node that is running [TEE-enabled ParaTimes], you must make sure to **remove** the `runtimes/*/worker-local-storage.badger.db` directory, as otherwise the ParaTime binary may fail to start on a different node since it contains data sealed to the source CPU.

3. Start the synced Oasis node again.

Finally, you can start your new Oasis node for the first time.

[TEE-enabled ParaTimes]: ../prerequisites/set-up-tee.mdx

---

## Remote Signer for Oasis Node Keys

The [Oasis remote signer][oasis-core-remote-signer] is an application that contains logic for various Oasis Core signers. Currently, only the file-based signer is implemented, but support for hardware signers is in the works. Access to the remote signer is provided via a gRPC service through which the Oasis node can connect to it and request signatures.

This chapter will describe how to install the Oasis remote signer and then configure your Oasis node to use it.
We will use two separate physical machines for deployment:

- a `server` which will function as a system running the Oasis node,
- a `signer-server` which will run the Oasis remote signer and store the node keys.

**These are advanced instructions intended for node operators who want to increase the security of their validator nodes.**

This chapter describes a tool to remotely access the [node keys] (i.e. `beacon.pem`, `consensus.pem`, `identity.pem`, `p2p.pem`...). There is another [`oasis wallet remote-signer`] Oasis CLI command which enables remote access to your [entity key] and should not be confused with this tool.

[oasis-core-remote-signer]: https://github.com/oasisprotocol/oasis-core/tree/master/go/oasis-remote-signer
[node keys]: ../validator-node.mdx#node-keys
[entity key]: ../validator-node.mdx#initialize-entity
[validator-node]: ../validator-node.mdx
[discord]: ../../../get-involved/README.md
[`oasis wallet remote-signer`]: ../../../build/tools/cli/wallet.md#remote-signer

## Prerequisites

Before we continue, make sure you've followed the [Install Oasis Node Binary] chapter and have the Oasis node binary installed on your system.

[Install Oasis Node Binary]: ../prerequisites/oasis-node.md

## Install Oasis Remote Signer Binary

The Oasis remote signer is part of the [Oasis Core][oasis-core-remote-signer]. You can either download the binary or compile it from source and then copy it over to your `signer-server` system.

The Oasis remote signer is currently only supported on x86_64 Linux systems.

### Downloading a Binary Release

The Oasis remote signer binary is part of the **Oasis Core release bundle**. Links to the latest releases are on the Network Parameters page ([Mainnet], [Testnet]). The Oasis remote signer binary inside the release bundle is called `oasis-remote-signer`. You should always use the version of the remote signer matching the version of your Oasis node.

[Mainnet]: ../../network/mainnet.md
[Testnet]: ../../network/testnet.md

### Building From Source

Follow the [Oasis Core's Build Environment Setup and Building][oasis-core-build] chapter. After the Oasis Core is compiled, the `oasis-remote-signer` binary should be located in the `go/oasis-remote-signer` subdirectory.

The code in the current [`master`] branch may be incompatible with the code used by other nodes on the network. Make sure to use the version specified on the Network Parameters page ([Mainnet], [Testnet]).

[oasis-core-build]: ../../../core/development-setup/build-environment-setup-and-building
[`master`]: https://github.com/oasisprotocol/oasis-core/tree/master/

### Adding `oasis-remote-signer` Binary to `PATH`

To install the `oasis-remote-signer` binary for the current user, copy/symlink it to `~/.local/bin`.

To install the `oasis-remote-signer` binary for all users of the system, copy it to `/usr/local/bin`.

## Set Up Remote Signer System

### Initialize Remote Signer

On `signer-server`, create a directory for the remote signer, e.g.
`remote-signer`, by running:

```
mkdir --mode=700 remote-signer
```

Then, generate the [node keys] and the server certificate by running:

```
oasis-remote-signer init --datadir remote-signer/
```

Also, generate the remote signer's client certificate which will be used by the Oasis node to connect to the remote signer:

```
oasis-remote-signer init_client --datadir remote-signer/
```

### Run Remote Signer

Choose the gRPC port on which the remote signer will listen for client requests and run:

```
oasis-remote-signer \
  --datadir remote-signer \
  --client.certificate remote-signer/remote_signer_client_cert.pem \
  --grpc.port {{ grpc_port }} \
  --log.level DEBUG
```

The Oasis remote signer runs in the foreground by default. We recommend you configure and use it with a process manager like [systemd] or [Supervisor]. Check out the [System Configuration] page for examples.

[systemd]: https://github.com/systemd/systemd
[Supervisor]: http://supervisord.org
[System Configuration]: ../prerequisites/system-configuration.mdx#create-a-user

### Copy Remote Signer Certificate, Client Key and Certificate

In order for the Oasis node to securely connect to the Oasis remote signer and be able to demonstrate its authenticity, you need to copy the following files from `signer-server` to the `/node/data/remote-signer` directory on `server`:

* `remote-signer/remote_signer_server_cert.pem`: The remote signer's certificate. This certificate ensures the Oasis node system is connecting to the trusted remote signer system.
* `remote-signer/remote_signer_client_key.pem`: The remote signer's client key. This key enables the Oasis node system to demonstrate its authenticity when it is requesting signatures from the remote signer system.
* `remote-signer/remote_signer_client_cert.pem`: The remote signer's client certificate. This certificate is the counterpart of the remote signer's client key.

## Configuration

When [configuring your Oasis Node](../validator-node.mdx#configuration) on `server`, you need to add the appropriate `signer` section to configure the **composite** and **remote** signers. For example:

```yaml
# Signer.
signer:
  backend: composite
  # Use file-based signer for entity, node and P2P keys and remote signer for the
  # consensus key.
  composite:
    backends: entity:file,node:file,p2p:file,consensus:remote
  # Configure how to connect to the Oasis Remote Signer.
  remote:
    # Address and gRPC port of the remote signer on signer-server.
    address: {{ signer_server_address }}:{{ grpc_port }}
    server:
      certificate: /node/data/remote-signer/remote_signer_server_cert.pem
    client:
      key: /node/data/remote-signer/remote_signer_client_key.pem
      certificate: /node/data/remote-signer/remote_signer_client_cert.pem
```

This assumes you've copied the remote signer's certificate and remote signer's client key and certificate to the `/node/data/remote-signer/` directory.

## Starting the Oasis node

[Start the Oasis node] using the modified config above. To ensure that your Oasis node will be able to sign consensus transactions, check that the Oasis remote signer is running and accepting remote client connections via the designated port.

The `/node/data` directory on `server` will only have `consensus_pub.pem` and no `consensus.pem` since the consensus key is backed by the Oasis remote signer.
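A quick way to double-check the wiring is to verify that the remote signer's port is reachable from `server` and that the node keeps making consensus progress. A minimal sketch follows; the `{{ signer_server_address }}`/`{{ grpc_port }}` placeholders and the use of `nc` and `jq` are assumptions, so adjust them to your environment:

```bash
# Run on server: confirm the remote signer's gRPC port accepts connections.
nc -zv {{ signer_server_address }} {{ grpc_port }}

# Confirm the node keeps signing and advancing; the reported height should
# keep increasing over time.
oasis-node control status -a unix:/node/data/internal.sock | jq .consensus.latest_height
```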
[Start the Oasis node]: ../validator-node.mdx#starting-the-oasis-node

---

## Using State Sync for Quick Bootstrapping

State sync is a way to **quickly bootstrap** a **full Oasis node** (either a [validator node](../validator-node.mdx) or a [non-validator node](../non-validator-node.mdx)) by initializing it from the trusted block's header, identified by the trusted height and hash. The node's trusted state is then securely updated by requesting and verifying a minimal set of data (checkpoint metadata and chunks) from the P2P network. Internally, it uses [CometBFT's Light Client protocol] and Merkle proofs to filter out invalid data.

If you have access to an Oasis node that is synced with the latest height, another option to speed up bootstrapping a new Oasis node is to [copy state from one node to the other].

CometBFT's Light Client protocol requires at least one correct full node to be able to [detect and submit evidence for a light client attack]. After a successful state sync it is always recommended to check whether you see the same chain as other nodes. This can be done by comparing the block hash at a recent height with sources that you trust, e.g. your own nodes, trusted nodes from an external entity, block explorers, etc.

To configure your node to use state sync, amend your node's configuration (i.e. `config.yml`) with (non-relevant fields omitted):

```yaml
... trimmed ...

# Consensus.
consensus:
  ... trimmed ...

  # Enable consensus state sync (i.e. CometBFT light client sync).
  state_sync:
    enabled: true

  # Configure trusted period, height and hash for the light client.
  light_client:
    trust:
      period: {{ trusted_period }}
      height: {{ trusted_height }}
      hash: "{{ trusted_height_hash }}"

... trimmed ...
```

and replace the following variables in the configuration snippet:

* `{{ trusted_period }}`: Trusted period is the duration for which trust remains valid.
* `{{ trusted_height }}`: Trusted height defines the height at which your node should trust the chain.
* `{{ trusted_height_hash }}`: Trusted height hash defines the hash of the block header corresponding to the trusted height.

You need to **delete any existing node state** (if it exists), otherwise the state sync will be skipped. To do that, follow the [Wiping Node State] instructions. If existing node state is found and state sync is skipped, you will see something like the following in your node's logs:

```
{"caller":"full.go:709","level":"info","module":"cometbft","msg":"state sync enabled","ts":"2023-11-16T20:06:58.56502593Z"}
{"caller":"node.go:770","level":"info","module":"cometbft:base","msg":"Found local state with non-zero height, skipping state sync","ts":"2023-11-16T20:06:59.22387592Z"}
```

[CometBFT's Light Client protocol]: https://docs.cometbft.com/main/explanation/core/light-client
[copy state from one node to the other]: copy-state-from-one-node-to-the-other.md
[detect and submit evidence for a light client attack]: https://docs.cometbft.com/main/explanation/core/light-client#where-to-obtain-trusted-height--hash
[Wiping Node State]: ../maintenance/wiping-node-state.md#state-wipe-and-keep-node-identity

### Obtaining Trusted Period

To prevent long-range attacks it is recommended that the light client trust period is shorter than the debonding period (currently 336 epochs or ~14 days). If you trust a header older than the debonding period, you risk accepting invalid headers from nodes that have already withdrawn their stake.
Such nodes can no longer be penalized for their misbehaviour and you may be tricked into following the wrong chain. We recommend using `trust_period=288h` (12 days). This way the time required to verify headers, submit possible misbehavior evidence and penalize nodes is still less than the debonding period, giving nodes a strong incentive not to lie.

### Obtaining Trusted Height and Hash

Currently, checkpoints happen approximately once per week. It is important to set a sufficiently old trusted height and hash, so that the network has at least one checkpoint that is more recent than the configured trust. We recommend configuring a trusted header that is around 10 days old. This way there will be checkpoints available and the trust will still be shorter than the debonding period.

To obtain the trusted height and the corresponding block header's hash, use one of the following options. If using centralized or untrusted sources it is always recommended to fetch and compare data from multiple sources.

#### Block Explorers

Browse to one of our block explorers (e.g. [Oasis Explorer], [Oasis Scan]) and obtain the trusted height and hash there:

1. Obtain the block height (10 days old) from the main page, e.g. 4819139.
2. Click on the block height's number to view the block's details and obtain its hash, e.g. `377520acaf7b8011b95686b548504a973aa414abba2db070b6a85725dec7bd21`.

[Oasis Explorer]: https://explorer.oasis.io/
[Oasis Scan]: https://www.oasisscan.com

#### A Trusted Node

If you have an existing node that you trust, you can use its status output to retrieve the current block height and hash by running:

```bash
oasis-node control status -a unix:/node/data/internal.sock
```

This will give you output like the following (non-relevant fields omitted):

```json
{
  "software_version": "23.0.5",
  "identity": {
    ...
  },
  "consensus": {
    ...
    "latest_height": 18466200,
    "latest_hash": "9611c81c7e231a281f1de491047a833364f97c38142a80abd65ce41bce123378",
    "latest_time": "2023-11-27T08:31:15Z",
    "latest_epoch": 30760,
    ...
  },
  ...
}
```

The values you need are `latest_height` and `latest_hash`.

#### Public Rosetta Gateway

First obtain the network's Genesis document's hash (e.g. from the Network Parameters page):

- [Mainnet](https://docs.oasis.io/node/mainnet/): `bb3d748def55bdfb797a2ac53ee6ee141e54cd2ab2dc2375f4a0703a178e6e55`
- [Testnet](https://docs.oasis.io/node/testnet/): `0b91b8e4e44b2003a7c5e23ddadb5e14ef5345c0ebcb3ddcae07fa2f244cab76`

Query our public Rosetta Gateway instance and obtain the latest height by replacing `{{ genesis_document_hash }}` with the value obtained in the previous step:

```bash
curl -X POST https://rosetta.oasis.io/api/block \
  -H "Content-Type: application/json" \
  -d '{
    "network_identifier": {
      "blockchain": "Oasis",
      "network": "{{ genesis_document_hash }}"
    },
    "block_identifier": {
      "index": 0
    }
  }'
```

This will give you output like the following (non-relevant fields omitted):

```json
{
  "block": {
    "block_identifier": {
      "index": 25688638,
      "hash": "3076ae195cfeda09ad49a6c74f6f655bc623e526184f814a842b224bf1846223"
    },
    ...
  }
}
```

Assuming blocks happen every 6 seconds, subtract around `140_000` blocks to get the height that is around 10 days old and query again:

```bash
curl -X POST https://rosetta.oasis.io/api/block \
  -H "Content-Type: application/json" \
  -d '{
    "network_identifier": {
      "blockchain": "Oasis",
      "network": "{{ genesis_document_hash }}"
    },
    "block_identifier": {
      "index": 25548638
    }
  }'
```

The values you need are `index` and `hash`:

```json
{
  "block": {
    "block_identifier": {
      "index": 25548638,
      "hash": "76ac9d6b59e662d024097a07eb65777292ce6a7ebe9aca8bd0caf73e72b06834"
    },
    ...
  }
}
```

#### Oasis CLI

Query our public Oasis node's endpoint using the Oasis CLI and obtain the trusted height and hash there:

```bash
oasis network status
```

This will give you output like the following (non-relevant fields omitted):

```json
{
  "software_version": "23.0.5",
  "identity": {
    ...
  },
  "consensus": {
    ...
    "latest_height": 18466200,
    "latest_hash": "9611c81c7e231a281f1de491047a833364f97c38142a80abd65ce41bce123378",
    "latest_time": "2023-11-27T08:31:15Z",
    "latest_epoch": 30760,
    ...
  },
  ...
}
```

The values you need are `latest_height` and `latest_hash`.

---

## Archive Node

This guide will cover setting up an archive node for the Oasis Network. A node started in archive mode only serves existing consensus and runtime states. The node has all unneeded consensus and P2P functionality disabled, so it will not participate in the network. Archive nodes can be used to access historic state which is pruned in dump-restore network upgrades.

## Prerequisites

Running an archive node requires a pre-existing `oasis-node` state. If you don't have one, you can download a snapshot of a specific network state [here][snapshots].

[snapshots]: https://snapshots.oasis.io

## Configuration (Oasis Core 23 and later)

Starting from Oasis Core version 23, the configuration for enabling archive mode has changed. Use the `mode` setting, which configures the node to act as an archive node:

```yaml
mode: archive
common:
  data_dir: /node/data
  log:
    format: JSON
    level:
      cometbft: info
      cometbft/context: error
      default: info
genesis:
  file: /node/etc/genesis.json
runtime:
  # Paths to ParaTime bundles for all of the supported ParaTimes.
  paths:
    - {{ runtime_orc_path }}
```

Keep all other settings the same as those for a full client node. For example, to serve archived runtime state, the node needs to have the runtime configured and the state present.

## Configuration (Oasis Core 22 and earlier)

For all pre-Eden networks, such as Damask, the configuration remains the same but requires the appropriate version of `oasis-node` and the node state.

#### Damask

To run an archive node for Damask, use [Oasis Core v22.2.12] and the following configuration:

```yaml
datadir: /node/data
log:
  level:
    default: info
    tendermint: info
    tendermint/context: error
  format: JSON
genesis:
  file: /node/etc/genesis.json
consensus:
  tendermint:
    mode: archive
runtime:
  mode: client
  paths:
    # Paths to ParaTime bundles for all of the supported ParaTimes.
- "{{ runtime_orc_path }}" ``` #### Cobalt To run an archive node for Cobalt, use [Oasis Core v21.3.14] and the following configuration: ```yaml datadir: /node/data log: level: default: info tendermint: info tendermint/context: error format: JSON genesis: file: /node/etc/genesis.json consensus: tendermint: mode: archive runtime: supported: - "{{ runtime_id }}" paths: "{{ runtime_id }}": {{ paratime_binary_path }} worker: storage: enabled: true ``` Ensure you are using the correct version of oasis-node and the pre-existing state for your specific pre-Eden network. ## Starting the Oasis Node You can start the node by running the following command: ```bash oasis-node --config /node/etc/config.yml ``` ### Archive node status The mode field is currently unavailable in the control status output. It will be included in an upcoming release. To ensure the node is running in archive mode, run the following command: ```bash oasis-node control status -a unix:/node/data/internal.sock ``` Output should report `archive` consensus mode status: ```json { // other fields omitted ... "mode": "archive", // ... } ``` ## See also [Archive Web3 Gateway](../web3.mdx#archive-web3-gateway) [Oasis Core v22.2.12]: https://github.com/oasisprotocol/oasis-core/releases/tag/v22.2.12 [Oasis Core v21.3.14]: https://github.com/oasisprotocol/oasis-core/releases/tag/v21.3.14 --- ## Key Manager Node These instructions are for setting up a _key manager_ node. Key manager nodes run a special runtime that provides confidentiality to other ParaTimes. If you want to run a _validator_ node instead, see the [instructions for running a validator node](../validator-node.mdx). Similarly, if you want to run a _ParaTime_ node instead, see the [instructions for running a ParaTime node](../paratime-node.mdx). At time of writing the key manager ParaTime is deployed only on the Testnet to support Cipher and Sapphire ParaTimes, and limited to be run by trustworthy partners. This guide will cover setting up your key manager node for the Oasis Network. This guide assumes some basic knowledge on the use of command line tools. ## Prerequisites Before following this guide, make sure you've followed the [Prerequisites](../prerequisites) and [Run a Non-validator Node](../non-validator-node.mdx) sections and have: * Oasis Node binary installed on your system and a dedicated non-root user that will run your Oasis Node. * The chosen top-level `/node/` working directory prepared (feel free to name it as you wish, e.g. `/srv/oasis/`) with: * `etc`: This will store the node configuration and genesis file. * `data`: This will store the data directory needed by Oasis Node, including your node identity and the blockchain state. The directory permissions should be `rwx------`. * `bin`: This will store binaries needed by Oasis Node for running the ParaTimes. * `runtimes`: This will store the ParaTime bundles. * Downloaded or compiled the correct versions of everything according to Network Parameters page ([Mainnet], [Testnet]). * The genesis file copied to `/node/etc/genesis.json`. * The binaries needed by Oasis Node for running the ParaTimes copied to `/node/bin/`. * The key manager ParaTime bundle (`.orc` extension) copied to `/node/runtimes/`. * Initialized a new node and updated your entity registration by following the [Register a New Entity or Update Your Entity Registration](../paratime-node.mdx#register-a-new-entity-or-update-your-entity-registration) instructions. * The entity descriptor file copied to `/node/etc/entity.json`. 
[Mainnet]: ../../network/mainnet.md
[Testnet]: ../../network/testnet.md

Reading the rest of the [validator node setup instructions](../validator-node.mdx) and [ParaTime node setup instructions](../paratime-node.mdx) may also be useful.

### Setting up Trusted Execution Environment (TEE)

The key manager ParaTime requires the use of a TEE. See the [Set up trusted execution environment](../prerequisites/set-up-tee.mdx) doc for instructions on how to set it up before proceeding.

## Configuration

In order to configure the node, create the `/node/etc/config.yml` file with the following content:

```yaml
mode: keymanager
common:
  data_dir: /node/data
  log:
    format: JSON
    level:
      cometbft: info
      cometbft/context: error
      default: info
genesis:
  file: /node/etc/genesis.json
registration:
  # In order for the node to register itself, the entity ID must be set.
  entity_id: {{ entity_id }}
p2p:
  # External P2P configuration.
  port: 9200
  registration:
    addresses:
      # The external IP that is used when registering this node to the network.
      - "{{ external_address }}:9200"
  seeds:
    # List of seed nodes to connect to.
    # NOTE: You can add additional seed nodes to this list if you want.
    - {{ seed_node_address }}
consensus:
  listen_address: tcp://0.0.0.0:26656
  # The external IP that is used when registering this node to the network.
  external_address: tcp://{{ external_address }}:26656
runtime:
  paths:
    # Path to the key manager ParaTime bundle.
    - "{{ keymanager_runtime_orc_path }}"
  # The following section is required for ParaTimes which are running inside the
  # Intel SGX Trusted Execution Environment.
  sgx:
    loader: /node/bin/oasis-core-runtime-loader
keymanager:
  runtime_id: "{{ keymanager_runtime_id }}"
```

Before using this configuration you should collect the following information to replace the `{{ ... }}` variables present in the configuration file:

* `{{ external_address }}`: The external IP you used when registering this node.
* `{{ seed_node_address }}`: The seed node address in the form `ID@IP:port`.
  * You can find the current Oasis Seed Node address in the Network Parameters page ([Mainnet], [Testnet]).
* `{{ keymanager_runtime_orc_path }}`: Path to the key manager [ParaTime bundle](../paratime-node.mdx#manual-bundle-installation) of the form `/node/runtimes/foo-paratime.orc`.
  * You can find the current Oasis-supported key manager ParaTime in the Network Parameters page ([Mainnet], [Testnet]).
* `{{ entity_id }}`: The node's entity ID from the `entity.json` file.
* `{{ keymanager_runtime_id }}`: Runtime identifier of the key manager ParaTime.
  * You can find the current Oasis-supported key manager ParaTime identifiers in the Network Parameters page ([Mainnet], [Testnet]).

Make sure the `consensus` port (default: `26656`) and `p2p.port` (default: `9200`) are exposed and publicly accessible on the internet (for `TCP` and `UDP` traffic).

## Starting the Oasis Node

You can start the node by running the following command:

```bash
oasis-node --config /node/etc/config.yml
```

## Checking Node Status

To ensure that your node is properly connected with the network, you can run the following command after the node has started:

```bash
oasis-node control status -a unix:/node/data/internal.sock
```

---

## Upgrading Key Managers

This guide will describe how to upgrade a key manager node.

## About the Upgrade

Every key manager node contains all the keys used by confidential ParaTimes inside its TEE-encrypted state. The key material is sealed and can only be decrypted by exactly the same TEE enclave running on exactly the same CPU.
This means that newer key manager ParaTimes cannot read the key material and that the key material cannot be restored on another machine. During a key manager node upgrade it is therefore essential that the key material is not lost, not even due to an operational error or a catastrophically failed upgrade.

## Safe Upgrade Procedure

A key manager node's upgrade procedure differs from other Oasis node upgrades because the upgraded node cannot unseal/decrypt the old key manager's state. To upgrade a key manager node, we need to delete the local state and let the key manager's state replicate itself from other nodes. Only one key manager runtime can be present in the configuration file at a time.

**In case you are running multiple key manager nodes always follow the safe upgrade procedure:**

1. Keep approximately one half of the nodes running the old version.
2. Upgrade the other half.
3. Wait for the ParaTime upgrade epoch.
4. Verify that secrets have been replicated [as shown below].
5. **Verify again.**
6. Upgrade the rest of the nodes.

[as shown below]: #verifying-successful-replication

### Upgrade Nodes

To upgrade a key manager node, follow the next steps:

1. Stop the node.
2. Wipe its local state `worker-local-storage.badger.db`, e.g.:

   ```
   rm -rf runtimes/4000000000000000000000000000000000000000000000004a1a53dff2ae482d/worker-local-storage.badger.db/
   ```

3. Upgrade the key manager runtime:
   - get the new ORC file ([mainnet], [testnet]);
   - update the configuration to replace the ORC file; and
   - restart the node.
4. Wait for the key material to get replicated from active nodes before continuing.
5. Verify that secrets have been replicated [as shown below].

[mainnet]: ../../network/mainnet#key-manager
[testnet]: ../../network/testnet#key-manager

### After the Upgrade

#### Verifying Successful Replication

After the upgrade epoch and when the key material is successfully replicated, the `control status` output should show `keymanager.status="ready"` and `registration.descriptor.runtimes.0.extra_info` should contain a hash of the key material state:

```
$ oasis-node -a unix:/node/data/internal.sock control status
...
  "registration": {
    "last_registration": "2023-02-06T08:40:30Z",
    "descriptor": {
      ...
      "runtimes": [
        {
          "id": "4000000000000000000000000000000000000000000000004a1a53dff2ae482d",
          "version": {
            "minor": 3,
            "patch": 3
          },
          "capabilities": {
            "tee": {
              "hardware": 1,
              ...
            }
          },
          "extra_info": "omlzaWduYXR1cmVYQG7nDuKTOUKAlJAfukdY6Xljox376lCLI0cIP0zPw2B8abJxa31j+NoQAWA0KZuHD41XPyICmjXDTpjDXukEEgVtaW5pdF9yZXNwb25zZaNoY2hlY2tzdW1YIEWZF5YaFQChstrZ9u1UdgyqZCagmNfghvyQna9WkmvyaWlzX3NlY3VyZfVvcG9saWN5X2NoZWNrc3VtWCCsrqRzYjx05t+KoCYz7wFSdKJ720g2LQBAsRKXmClMvw=="
        }
      ],
      "roles": "key-manager",
    }
  }
...
  "keymanager": {
    "status": "ready",
    "may_generate": false,
    "runtime_id": "4000000000000000000000000000000000000000000000004a1a53dff2ae482d",
    "client_runtimes": [
      "000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c",
      "0000000000000000000000000000000000000000000000000000000000000000"
    ],
    "access_list": [
      ...
    ],
    "private_peers": [
      ...
    ]
  }
```

## Troubleshooting

If you forgot to wipe the key manager's state when upgrading, the upgraded key manager will be unable to unseal the old state and will abort:

```
{"level":"warn","module":"runtime","msg":"thread 'main' panicked at 'runtime execution failed: Enclave panicked.', runtime-loader/bin/main.rs:57:10","runtime_id":"4000000000000000000000000000000000000000000000004a1a53dff2ae482d","runtime_name":"keymanager","ts":"2022-11-11T13:38:18.805919693Z"}
```

---

## Signing Key Manager Policy

This guide will describe how to print and sign an Oasis [key manager policy]. These instructions are only applicable if you are part of a key manager policy signer set.

## Prerequisites

### Oasis Node Binary

Make sure you have followed the [Oasis Node binary installation guide] and have the Oasis Node binary installed on your system.

### Entity

As elsewhere on the network, an entity's private key is used to sign a key manager policy. The trusted key manager policy signer set (i.e. the authorized public keys) and the threshold of keys that need to sign the policy are hard-coded in the key manager's source code. The trusted signer set for the Oasis Key Manager is defined in [its source code][oasis-km-signer-set].

We strongly recommend using a dedicated (single-purpose) entity for signing key manager policies for production key managers, i.e. the ones deployed on Mainnet and connected to a production ParaTime. To provision a new entity, follow the [instructions in our Validator Node guide].

Currently, Ledger-based signers do not support signing key manager policies. If a file-based signer needs to be used, we strongly recommend using an [offline/air-gapped machine] for this purpose and never exposing the entity's private key to an online machine. Gaining access to the entity's private key can compromise the trusted key manager policy signer set and hence the key manager itself.
[key manager policy]: https://github.com/oasisprotocol/oasis-core/blob/master/docs/consensus/services/keymanager.md#policies [Oasis Node binary installation guide]: ../prerequisites/oasis-node.md [oasis-km-signer-set]: https://github.com/oasisprotocol/keymanager-paratime/blob/main/src/lib.rs [instructions in our Validator Node guide]: ../validator-node.mdx#initialize-entity [offline/air-gapped machine]: https://en.wikipedia.org/wiki/Air_gap_\(networking\) ## Define Variables For easier handling of key manager policy files, define the following variables: ```shell POLICY=path/to/policy.cbor KEY=path/to/entity/key.pem NAME=your_name ``` ## Printing a Policy To print and inspect a key manager policy, use the following command: ```shell oasis-node keymanager verify_policy \ --keymanager.policy.file $POLICY \ --keymanager.policy.ignore.signature \ --verbose ``` This should output something like the following: ```json title="Example of an actual Oasis Testnet Key Manager policy" { "serial": 8, "id": "4000000000000000000000000000000000000000000000004a1a53dff2ae482d", "enclaves": { "ZhD5ufyc/MReZD1qMSKNCRxnkNiZ3BtxqcYdx4+M0N9AJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ==": { "may_query": { "0000000000000000000000000000000000000000000000000000000000000000": [ "c0SidcKhBx3iuonmtXURnFB+qIVkg+nAiaAozAh16ltAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ==" ], "000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c": [ "LwbLEQ6dv+R5wv5q5CGRZWiEBWGxgCi/gpphcJFQ5zVAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ==" ] }, "may_replicate": [ "jTX8etUcGSQBq3C4WbLlexga7dhQFnwzSJOEmRCPvfRAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ==" ] }, "jTX8etUcGSQBq3C4WbLlexga7dhQFnwzSJOEmRCPvfRAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ==": { "may_query": { "0000000000000000000000000000000000000000000000000000000000000000": [ "c0SidcKhBx3iuonmtXURnFB+qIVkg+nAiaAozAh16ltAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ==" ], "000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c": [ "LwbLEQ6dv+R5wv5q5CGRZWiEBWGxgCi/gpphcJFQ5zVAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ==" ] }, "may_replicate": [] } } } ``` The `"serial"` key, e.g. `8`, represents the key manager policy's serial number that must increase with every update of the key manager policy. The `"id"` key, e.g. `"4000000000000000000000000000000000000000000000004a1a53dff2ae482d"`, represents the key manager ParaTime's runtime ID. The keys below `"enclaves"`, e.g. `"ZhD5ufyc/MReZD1qMSKNCRxnkNiZ3BtxqcYdx4+M0N9AJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ=="` and `"jTX8etUcGSQBq3C4WbLlexga7dhQFnwzSJOEmRCPvfRAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ=="`, represent the identities of the key manager enclaves. Each key manager enclave ID is comprised of two parts: its `MRENCLAVE` and its `MRSIGNER`. Each key manager enclave identity has two lists: `"may_query"` and `"may_replicate"`. Items in `"may_query"` list, e.g. `"0000000000000000000000000000000000000000000000000000000000000000"` and `"000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c"`, represent the runtime IDs of the ParaTimes that are allowed to query the key manager (in this example, the Cipher and the Sapphire ParaTimes running on the Testnet). The items under runtime IDs of the ParaTimes, e.g. `"c0SidcKhBx3iuonmtXURnFB+qIVkg+nAiaAozAh16ltAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ=="` and `"LwbLEQ6dv+R5wv5q5CGRZWiEBWGxgCi/gpphcJFQ5zVAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ=="`, represent the identities of the runtime enclaves. 
Similarly to the key manager enclave ID, each runtime enclave ID is comprised of two parts: its `MRENCLAVE` and its `MRSIGNER`.

Items in the `"may_replicate"` list, e.g. `"jTX8etUcGSQBq3C4WbLlexga7dhQFnwzSJOEmRCPvfRAJdq369ofvsxONjdgbgISFND0HG0EIv03iyqLiIGEWQ=="`, represent the key manager enclave IDs to which an existing key manager enclave is allowed to replicate itself. This is used for key manager upgrades when an old key manager enclave (i.e. its master secret) is allowed to replicate itself to a new key manager enclave.

To see what has changed between two key manager policies, diff the outputs of the `oasis-node keymanager verify_policy` commands for the corresponding key manager policy files.

## Signing a Policy

Once a key manager policy has been inspected, use the following command to sign it:

```bash
oasis-node keymanager sign_policy \
  --keymanager.policy.file $POLICY \
  --keymanager.policy.key.file $KEY \
  --keymanager.policy.signature.file $POLICY.$NAME.signed
```

---

## Adding or Removing Nodes

At some point you may wish to add or remove nodes from your entity. In order to do so, you will need to have at least the following:

* Access to a synced node
* Access to your entity's private key

If you just need to temporarily disable your node (e.g. to perform system updates), use [graceful shutdown] instead. This ensures that your entity will not get penalized during the node's downtime.

## Overview

The process for adding/removing nodes is similar and has the following steps:

1. Obtain the ID of your running Oasis node
2. Download your entity descriptor (`entity.json`) from the network registry
3. Update the entity descriptor by adding/removing a node
4. Submit the updated entity descriptor to the network

[graceful shutdown]: shutting-down-a-node.md

## Obtain the ID of your Node

Connect to your `server` and obtain the ID of your node by running:

```shell
oasis-node control status -a unix:/node/data/internal.sock | jq .identity.node
```

## Download Your Latest Entity Descriptor

To ensure that we do not update your entity descriptor (`entity.json`) incorrectly, we should get the latest entity descriptor state from the network. For this operation, you will need to know the base64 encoding of your entity's public key.

Use the [`oasis network show`] command on your `localhost` to get the latest entity descriptor stored in the network registry. This command is part of the [Oasis CLI]. For example:

```shell
oasis network show xQN6ffLSdc51EfEQ2BzltK1iWYAw6Y1CkBAbFzlhhEQ=
```

Now store the obtained JSON as `entity.json`.

[`oasis network show`]: ../../../build/tools/cli/network.md#show-id
[Oasis CLI]: ../../../build/tools/cli/README.md

## Updating Your Entity Descriptor

### To Add a Node

Due to how the node election process works, only a single node from your entity can be selected as a validator for any given epoch. Additional nodes will _not_ give you more voting power, nor will running multiple nodes inherently provide high availability.

To attach a new node to your entity, add the ID of your node obtained in the [section above](#obtain-the-id-of-your-node) to the `nodes` field in your `entity.json`. For example:

```json
{
  "v": 2,
  "id": "xQN6ffLSdc51EfEQ2BzltK1iWYAw6Y1CkBAbFzlhhEQ=",
  "nodes": [
    "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=",
    "BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB="
  ]
}
```

In the above entity descriptor, two nodes are attached to the entity:

1. A node with an identity `AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=`
2. A node with an identity `BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=`

### To Remove a Node

To remove the node with ID `BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=` from your entity descriptor, simply remove the record from the array in the `nodes` field. For example:

```json
{
  "v": 2,
  "id": "xQN6ffLSdc51EfEQ2BzltK1iWYAw6Y1CkBAbFzlhhEQ=",
  "nodes": [
    "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
  ]
}
```

## Submitting Your Entity Registration to the Network

Finally, to commit the changes on the network, invoke the [`oasis account entity register`] command on your `localhost`:

```shell
oasis account entity register entity.json --account my_entity
```

The account used to sign the transaction (`my_entity` in the snippet above) must correspond to the entity ID in `entity.json`.

If there are no errors, your entity registration should be updated. You can run the [`oasis network show`] command again to see the changes.

---

## Handling Network Upgrades

Changes between the major consensus network versions break both backward and forward compatibility. You always have to run **a specific version of Oasis Core to fetch and validate the blocks matching the specific consensus network version**.

There are two kinds of consensus network upgrades that can occur:

- *Seamless upgrade*: on-chain upgrade without resetting the consensus state or changing the genesis document (e.g. [Testnet upgrade 2022-04-04], [Mainnet upgrade 2021-08-31]).
- *Dump & restore upgrade*: requires wiping state and starting the upgraded network from a fresh genesis document (e.g. [Mainnet upgrade 2022-04-11 (Damask)], [Testnet upgrade 2022-03-03]).

The specific Oasis Core version requirements also impact how you **initially sync your node with the network**:

- If the last network upgrade was a dump & restore one, then your node will complete the synchronization automatically by fetching and validating all blocks following the state in the genesis document.
- If the last network upgrade was a seamless one, you will first need to download the older version of Oasis Core to sync the initial blocks and then sequentially perform seamless upgrade(s).

For example, at the time of writing this guide, in order to sync your node from scratch on the [Testnet][Testnet] you needed to do the following:

- Download the genesis document and run Oasis Core 22.0.x which synced blocks from epoch 14209 through (excluding) upgrade epoch 15056.
- Wait until the node automatically stopped, then upgrade to Oasis Core 22.2.x which synced the blocks from epoch 15056 onwards.

The expected versions of Oasis Core to sync your node from the latest genesis document on the Mainnet and Testnet are always published on the Network Parameters page ([Mainnet], [Testnet]).

## Reaching Upgrade Epoch

Once a [governance proposal] is accepted the node will automatically stop when reaching the **upgrade epoch** specified in the proposal. The node will write something like this in the log:

```json
{"caller":"mux.go:426","level":"debug","module":"abci-mux","msg":"dispatching halt hooks for upgrade","ts":"2022-05-06T13:11:41.721994647Z"}
```

and on the error output:

```text
panic: upgrade: reached upgrade epoch
```

The state of the network at the upgrade epoch height will be automatically exported into a genesis file located in `/exports/genesis-<CHAIN_ID>-at-<LATEST_HEIGHT>.json`, where `CHAIN_ID` is the chain ID of the network and `LATEST_HEIGHT` is the height of the last consensus block before the upgrade epoch.
This export, depending on the size of the state, may take some time to finish.

While waiting for the network upgrade epoch, you can check the current height and epoch by running:

```bash
oasis-node control status -a unix:/node/data/internal.sock
```

and observe the value of the `consensus.latest_height` and `consensus.latest_epoch` fields respectively.

[governance proposal]: ../../../build/tools/cli/network.md#governance-cast-vote

Once the upgrade epoch is reached, follow the instructions in the corresponding [upgrade log].

[upgrade log]: ../../reference/upgrade-logs/mainnet.md
[Mainnet]: ../../network/mainnet.md
[Testnet]: ../../network/testnet.md
[Testnet upgrade 2022-04-04]: ../../reference/upgrade-logs/testnet.md#2022-04-04-upgrade
[Testnet upgrade 2022-03-03]: ../../reference/upgrade-logs/testnet.md#2022-03-03-upgrade
[Testnet upgrade 2021-08-11]: ../../reference/upgrade-logs/testnet.md#2021-08-11-upgrade
[Mainnet upgrade 2022-04-11 (Damask)]: ../../reference/upgrade-logs/mainnet.md#damask-upgrade
[Mainnet upgrade 2021-08-31]: ../../reference/upgrade-logs/mainnet.md#2021-08-31-upgrade
[Mainnet upgrade 2021-04-28 (Cobalt)]: ../../reference/upgrade-logs/mainnet.md#cobalt-upgrade

## Preparing New Genesis File and Wiping State

For dump & restore upgrades, the exported genesis file needs to be patched and verified accordingly. Then, we wipe the existing consensus state including the history of all transactions and let the node reload the state from the genesis file.

### Patching Dumped State

First, let's run a built-in helper which migrates and updates parts of the genesis file which changed in the new version of Oasis Core. We will provide the dumped genesis file as the input and write the new version of the genesis file into `genesis_dump.json`.

```bash
oasis-node debug fix-genesis --genesis.file genesis-<CHAIN_ID>-at-<LATEST_HEIGHT>.json --genesis.new_file genesis_dump.json
```

Other parts of the genesis need to be updated manually, as described in each upgrade's *Proposed State Changes* section (e.g. [Damask upgrade's Proposed State Changes], [Cobalt upgrade's Proposed State Changes]).

[Cobalt upgrade's Proposed State Changes]: ../../reference/upgrades/cobalt-upgrade.md#proposed-state-changes
[Damask upgrade's Proposed State Changes]: ../../reference/upgrades/damask-upgrade.md#proposed-state-changes

### Download and Verify the Provided Genesis File

In addition, download the new genesis file linked in the Network Parameters page ([Mainnet], [Testnet]) and save it as `/node/etc/genesis.json`. Compare the dumped state with the downloaded genesis file:

```bash
diff --unified=3 genesis_dump.json genesis.json
```

If the two files match (i.e. `diff` reports no differences after the manual changes have been applied), then you have successfully verified the provided genesis file!

#### Example diff for Mainnet Beta to Mainnet network upgrade

Let's look at what `diff` returned before performing manual changes to the genesis file for the Mainnet network upgrade:

```diff
--- genesis_dump.json	2020-11-16 17:49:46.864554271 +0100
+++ genesis.json	2020-11-16 17:49:40.353496022 +0100
@@ -1,7 +1,7 @@
 {
   "height": 702000,
-  "genesis_time": "2020-11-18T13:38:00Z",
-  "chain_id": "mainnet-beta-2020-10-01-1601568000",
+  "genesis_time": "2020-11-18T16:00:00Z",
+  "chain_id": "oasis-1",
   "epochtime": {
     "params": {
       "interval": 600
@@ -2506,1563 +2506,1779 @@
       "debonding_interval": 336,
       "reward_schedule": [
         {
-          "until": 3696,
-          "scale": "1595"
+          "until": 4842,
+          "scale": "2081"
         },
         {
-          "until": 3720,
-          "scale": "1594"
+          "until": 4866,
+          "scale": "2080"
         },

... trimmed ...
{ - "until": 35712, + "until": 36882, "scale": "2" }, { - "until": 35760, + "until": 36930, "scale": "1" } ], @@ -4087,7 +4303,6 @@ "transfer": 1000 }, "min_delegation": "100000000000", - "disable_transfers": true, "fee_split_weight_propose": "2", "fee_split_weight_vote": "1", "fee_split_weight_next_propose": "1", @@ -4097,7 +4312,7 @@ "token_symbol": "ROSE", "token_value_exponent": 9, "total_supply": "10000000000000000000", - "common_pool": "1835039672187348312", + "common_pool": "2285039672187348312", "last_block_fees": "0", "ledger": { "oasis1qp0l8r2s3076n4xrq8av0uuqegj7z9kq55gu5exy": { @@ -6419,7 +6634,7 @@ }, "oasis1qrad7s7nqm4gvyzr8yt2rdk0ref489rn3vn400d6": { "general": { - "balance": "1633038701000000000" + "balance": "1183038701000000000" }, "escrow": { "active": { @@ -9862,6 +10077,8 @@ } } }, - "halt_epoch": 1440, - "extra_data": null + "halt_epoch": 9940, + "extra_data": { + "quote": "UXVpcyBjdXN0b2RpZXQgaXBzb3MgY3VzdG9kZXM/IFtzdWJtaXR0ZWQgYnkgT2FzaXMgQ29tbXVuaXR5IE1lbWJlciBEYW5peWFyIEJvcmFuZ2F6aXlldl0=" + } } ``` We can observe that the provided genesis file mostly updates some particular network parameters. In addition, some ROSE tokens were transferred from an account to the Common Pool. All other things remained unchanged. Let's break down the diff and explain what has changed. The following genesis file fields will always change on a network upgrade: * `chain_id`: A unique ID of the network. Mainnet upgrades follow a pattern `oasis-1`, `oasis-2`, ... * `genesis_time`: Time from which the genesis file is valid. * `halt_epoch`: The epoch when the node will stop functioning. We set this to intentionally force an upgrade. The following fields were a particular change in this upgrade: * `staking.params.reward_schedule`: This field describes the staking reward model. It was changed to start at 20% (annualized) and range from 20% to 2% over the first 4 years of the network. For more details, see the [Token Metrics and Distribution] doc. * `staking.params.disable_transfers`: This field was removed to enable token transfers. * `staking.common_pool`: This field represents the Common Pool. Its balance was increased by 450M ROSE to fund increased staking rewards. * `staking.ledger.oasis1qrad7s7nqm4gvyzr8yt2rdk0ref489rn3vn400d6`: This field corresponds to the Community and Ecosystem Wallet. Its `general.balance` was reduced by 450M ROSE and transferred to the Common Pool to fund increased staking rewards. * `extra_data`: This field can hold network's extra data, but is currently ignored everywhere. For this upgrade, we changed it back to the value in the Mainnet Beta genesis file to include the Oasis network's genesis quote: _”_[_Quis custodiet ipsos custodes?_](https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%3F)_” \[submitted by Oasis Community Member Daniyar Borangaziyev\]._ The balances in the genesis file are enumerated in base units with 1 ROSE token equaling 10^9 (i.e. billion) base units. For more details, see the [Genesis Document](../../reference/genesis-doc.md#parameters). [Token Metrics and Distribution]: ../../../general/oasis-network/token-metrics-and-distribution.mdx ### Wiping State We do not suggest that you wipe _all_ state. You might lose node identities and keys if you do it this way. The process is described in the [Wiping Node State](wiping-node-state.md#state-wipe-and-keep-node-identity) document. ## Updating ParaTimes If you are running a compute or a client ParaTime node, you will often need to upgrade the ParaTime. 
The required ParaTime versions are stored in the network registry. The [`oasis network show paratimes`] command below queries the registry and extracts the version information for the ParaTime `00000000000000000000000000000000000000000000000072c8215e60d5bca7`:

```bash
oasis network show paratimes | jq 'select(.id=="00000000000000000000000000000000000000000000000072c8215e60d5bca7") | .deployments'
```

At time of writing the Emerald ParaTime on Testnet has the following record:

```
[
  {
    "version": {
      "major": 7,
      "minor": 1
    },
    "valid_from": 14320
  },
  {
    "version": {
      "major": 8
    },
    "valid_from": 15056
  }
]
```

The record above specifies that after epoch 14320, Emerald version 7.1.0 is required and from epoch 15056, Emerald 8.0.0.

If you are running a compute node, **the installed ParaTime version must match exactly the ParaTime version in the registry**! If you are running a client node, ParaTime state syncing will be performed regardless of the version installed.

Oasis node supports configuring multiple versions of ParaTime bundles, for example:

```yaml
runtime:
  paths:
    - /path/to/emerald-paratime-7.1.0-testnet.orc
    - /path/to/emerald-paratime-8.0.0-testnet.orc
```

The node will then automatically run the correct version of the ParaTime as specified in the registry.

[`oasis network show paratimes`]: ../../../build/tools/cli/network.md#show-paratimes

## Start Your Node

This will depend on your process manager. If you don't have a process manager, you should use one. However, to start the node without a process manager you can start the [Oasis Node](../prerequisites/oasis-node.md) like this:

```bash
oasis-node --config /node/etc/config.yml
```

## Clean Up

After you're comfortable with your node deployment, you can remove the old Oasis Core version and the intermediate `genesis-<CHAIN_ID>-at-<LATEST_HEIGHT>.json` and `genesis_dump.json` files.

---

## Refreshing Node Certificates

## Refreshing Sentry Client TLS Certificate on the Validator Node

### Steps on the Validator Node

Go to your validator node's data directory, e.g. `/node/data`:

```
cd /node/data
```

We recommend backing up your validator's private and public keys (i.e. all `*.pem` files) in your node's data directory before continuing.

Remove the validator's current sentry client TLS private key and certificate by running:

```
rm sentry_client_tls_identity.pem sentry_client_tls_identity_cert.pem
```

Re-generate the node's keys by running:

```
oasis-node identity init --datadir ./
```

This should keep all your other node's keys (i.e. `beacon.pem`, `consensus.pem`, `consensus_pub.pem`, `identity.pem`, `identity_pub.pem`, ...) intact.

Then run:

```
oasis-node identity show-sentry-client-pubkey --datadir ./
```

to obtain the value of the validator's new sentry client TLS public key in Base64 encoding, which can be put in the sentry node's configuration under the `control.authorized_pubkey` list.

Restart your validator node.

### Steps on the Sentry Node

After generating a new sentry client TLS private key and certificate on the validator node, set the new client TLS public key in your sentry node's configuration.

Before using the below sentry node configuration snippet, replace the following variables:

* `{{ validator_sentry_client_grpc_public_key }}`: The validator node's new sentry client TLS public key in Base64 encoding (e.g. `KjVEdeGbtdxffQaSxIkLE+kW0sINI5/5YR/lgUkuEcw=`).

```
... trimmed ...

# Worker configuration.
worker:
  sentry:
    # Enable sentry node.
    enabled: true
    # Port used by validator nodes to query sentry node for registry
    # information.
    # IMPORTANT: Only validator nodes protected by the sentry node should have
    # access to this port. This port should not be exposed on the public
    # network.
    control:
      port: 9009
      authorized_pubkey:
        - {{ validator_sentry_client_grpc_public_key }}

... trimmed ...
```

Restart your sentry node.

The validator node will re-register itself automatically once it's connected to the network through the sentry again.

## Refreshing TLS Certificate on the Sentry Node

### Steps on the Sentry Node

Go to your sentry node's data directory, e.g. `/node/data`:

```
cd /node/data
```

We recommend backing up your sentry's private and public keys (i.e. all `*.pem` files) in your node's data directory before continuing.

Remove the sentry's current TLS private key and certificate by running:

```
rm tls_identity.pem tls_identity_cert.pem
```

Re-generate the node's keys by running:

```
oasis-node identity init --datadir ./
```

This should keep all your other node's keys (i.e. `beacon.pem`, `consensus.pem`, `consensus_pub.pem`, `identity.pem`, `identity_pub.pem`, ...) intact.

Then run:

```
oasis-node identity show-tls-pubkey --datadir ./
```

to obtain the value of the sentry's new TLS public key in Base64 encoding, which can be put in the validator node's configuration under the `worker.sentry.address` list.

Restart your sentry node.

### Steps on the Validator Node

After generating a new TLS private key and certificate on the sentry node, set the new TLS public key in your validator node's configuration.

Before using the below validator node configuration snippet, replace the following variables:

* `{{ entity_id }}`: The node's entity ID from the `entity.json` file.
* `{{ sentry_node_grpc_public_key }}`: The sentry node's new TLS public key in Base64 encoding (e.g. `1dA4/NuYPSWXYaKpLhaofrZscIb2FDKtJclCMnVC0Xc=`).
* `{{ sentry_node_private_ip }}`: The private IP address of the sentry node over which the sentry node should be accessible to the validator.

```
... trimmed ...

worker:
  registration:
    # In order for the node to register itself, the entity ID must be set.
    entity_id: {{ entity_id }}
  sentry:
    address:
      - "{{ sentry_node_grpc_public_key }}@{{ sentry_node_private_ip }}:9009"

... trimmed ...
```

Restart your validator node.

---

## Shutting Down a Node

Depending on the role (e.g. validator), a node may periodically register itself to the consensus registry, committing itself to serve requests until the expiration epoch. Due to this availability commitment, nodes must be shut down gracefully to avoid network disruption.

To have the node gracefully shut down:

1. Ensure your service manager (e.g. systemd) will not restart the node after exit. Otherwise the node may re-register on startup and you will need to wait again.
2. Run one of the commands below:

```bash
# Issue a graceful shutdown request.
oasis-node control shutdown

# Issue a graceful shutdown request, and block until the node terminates.
# Note: This can take up to 3 full epochs to complete, because the node
# registers each epoch for the next 2 epochs (inclusive).
oasis-node control shutdown \
  --wait
```

Internally, the command will halt the automatic re-registration, wait for the node's existing registration to expire and terminate the node binary. If the node is not registered (e.g. a non-validator or ParaTime client node) this command will immediately terminate the node binary.

Failure to gracefully shut down the node may result in the node being frozen (and potentially stake being slashed) due to the node being unavailable to service requests in an epoch that it is registered for.
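With systemd, one way to satisfy step 1 above is to temporarily turn off automatic restarts before issuing the shutdown. A minimal sketch, assuming a unit named `oasis-node.service` and the internal socket path used elsewhere in this guide (adjust both to your deployment):

```bash
# Add a runtime (non-persistent) drop-in so systemd does not restart the node
# once the process exits; in the editor that opens, add:
#   [Service]
#   Restart=no
sudo systemctl edit --runtime oasis-node.service

# Issue the graceful shutdown and block until the node's registration expires
# and the process terminates.
oasis-node control shutdown --wait -a unix:/node/data/internal.sock
```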
--- ## Wiping Node State In certain situations, you may need to do a complete node redeployment with a clean state. Two common scenarios for this are during a breaking network upgrade or in cases of severe data corruption. If you need to wipe your node due to severe corruption, it's important to note that your node will need some time to catch up with the rest of the network. The following instructions are based on the assumption that you have defined your `datadir` as `/node/data` in your node's configuration. ## State Wipe and Keep Node Identity Note that by default, the `--preserve.mkvs_database` flag is set to true, preserving the runtime/paratime state. To wipe consensus state while preserving the runtime/paratime state follow these instructions: 1. Stop the `oasis-node` server process (this will depend on your own deployment setup). 2. Remove consensus state using the `oasis-node unsafe-reset` command: ```bash # Do a dry run first to see which files will get deleted. oasis-node unsafe-reset \ --datadir=/node/data \ --dry_run # Delete. oasis-node unsafe-reset \ --datadir /node/data ``` 3. Start the `oasis-node` server process. `oasis-node` is very strict regarding the ownership of the files. If you encounter the following error: ``` common/Mkdir: path '/node/data' has invalid owner: 1000. Expected owner: 0 ``` you need to run the `oasis-node` command as the exact user that owns the files, e.g.: ``` sudo --user=#1000 -- oasis-node unsafe-reset --datadir=/node/data --dry_run --log.level info ``` ## Full State Wipe This is likely not what you want to do. This is destructive and will result in losing private state required to operate the given node. **USE AT YOUR OWN RISK.** A full state wipe will also mean that you'll need to generate a new node identity (or copy the old one). To perform a full state wipe follow these steps: 1. Stop the `oasis-node` server process (this will depend on your own deployment setup) 2. Remove the `/node/data` directory. 3. Redeploy your node. You'll need to copy your Node artifacts or create brand new ones. --- ## Non-validator Node These instructions are for setting up a _non-validator_ node. If you want to run a _validator_ node instead, see the [instructions for running a validator node](validator-node.mdx). Similarly, if you want to run a _ParaTime_ node instead, see the [instructions for running a ParaTime node](paratime-node.mdx). This guide will cover setting up your non-validator node for the Oasis Network. This guide assumes some basic knowledge on the use of command line tools. ## Prerequisites Before following this guide, make sure you've followed the [Prerequisites](prerequisites) chapter and have the Oasis Node binary installed on your systems. ### Creating a Working Directory We will be creating the following directory structure inside a chosen top-level `/node` (feel free to name your directories however you wish) directory: * `etc`: This will store the node configuration and genesis file. * `data`: This will store the data directory needed by the running `oasis-node` binary, including the complete blockchain state. The directory permissions should be `rwx------`. To create the directory structure, use the following command: ```bash mkdir -m700 -p /node/{etc,data} ``` ### Copying the Genesis File The latest genesis file can be found in the Network Parameters page ([Mainnet], [Testnet]). You should download the latest `genesis.json` file and copy it to the `/node/etc` directory we just created. 
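For example, assuming you have already downloaded the latest `genesis.json` into your current directory, copying it into place is a single command:

```bash
cp genesis.json /node/etc/genesis.json
```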
[Mainnet]: ../network/mainnet.md [Testnet]: ../network/testnet.md ## Configuration This will configure the given node to only follow the consensus layer. In order to configure the node create the `/node/etc/config.yml` file with the following content: ```yaml mode: client common: data_dir: /node/data log: format: JSON level: cometbft: info cometbft/context: error default: info genesis: file: /node/etc/genesis.json p2p: # List of seed nodes to connect to. # NOTE: You can add additional seed nodes to this list if you want. seeds: - {{ seed_node_address }} ``` Before using this configuration you should collect the following information to replace the variables present in the configuration file: * `{{ seed_node_address }}`: The seed node address in the form `ID@IP:port`. You can find the current Oasis Seed Node address in the Network Parameters page ([Mainnet], [Testnet]). Make sure the `consensus` port (default: `26656`) and `p2p.port` (default: `9200`) are exposed and publicly accessible on the internet (for `TCP` and `UDP` traffic). ## Starting the Oasis Node You can start the node by running the following command: ```bash oasis-node --config /node/etc/config.yml ``` ## Checking Node Status To ensure that your node is properly connected with the network, you can run the following command after the node has started: ```bash oasis-node control status -a unix:/node/data/internal.sock ``` ## See also --- ## ParaTime Client Node This guide will cover setting up your ParaTime client node for the Oasis Network. This guide assumes some basic knowledge on the use of command line tools. ## Prerequisites Make sure you have fulfilled all the requirements outlined in the [Prerequisites] chapter before proceeding with node configuration. [Prerequisites]: ./paratime-node.mdx#prerequisites ## Configuration For the Emerald ParaTime, configuring paths to ParaTime bundles is required. However, for the Sapphire and Cipher ParaTimes, this setup is optional, as bundles are automatically downloaded from the [Oasis Bundle Registry]. In order to configure the ParaTime client node, create the `/node/etc/config.yml` file with the following content: ```yaml mode: client common: data_dir: /node/data log: format: JSON level: cometbft: info cometbft/context: error default: info genesis: file: /node/etc/genesis.json p2p: seeds: # List of seed nodes to connect to. # NOTE: You can add additional seed nodes to this list if you want. - {{ seed_node_address }} runtime: # Paths to bundles for ParaTimes without hot-loading support (e.g., Emerald). paths: - {{ runtime_orc_path }} # Runtime configuration for every ParaTime. Mandatory for ParaTimes with # hot-loading support (e.g. Sapphire, Cipher). runtimes: - id: {{ runtime_identifier }} # List additional ParaTimes here. # The following section is required if at least one ParaTime is running # inside the Intel SGX Trusted Execution Environment. sgx_loader: /node/bin/oasis-core-runtime-loader ``` Before using the configuration, replace the following variables: * `{{ seed_node_address }}`: The seed node address in the form `ID@IP:port`. * You can find the current Oasis Seed Node address in the Network Parameters page ([Mainnet], [Testnet]). * `{{ runtime_orc_path }}`: Path to the [ParaTime bundle](./paratime-node.mdx#manual-bundle-installation) of the form `/node/runtimes/foo-paratime.orc`. See the [Advanced](#advanced) section for more information. * You can find the current Oasis-supported ParaTimes in the Network Parameters page ([Mainnet], [Testnet]). 
* `{{ runtime_identifier }}`: You can find the runtime identifier in the Network Parameters page ([Mainnet], [Testnet]). Make sure the `consensus` port (default: `26656`) and `p2p.port` (default: `9200`) are exposed and publicly accessible on the internet (for `TCP` and `UDP` traffic). [Oasis Bundle Registry]: https://github.com/oasisprotocol/bundle-registry [Mainnet]: ../network/mainnet.md [Testnet]: ../network/testnet.md ### Stateless Client Node (optional) The stateless client is still in an early stage and thus not suitable for use cases where you need high availability. A regular client node requires sufficient disk space and must wait for state synchronization. A stateless client node avoids these problems by fetching the state and light blocks via configured gRPC endpoints (provider node addresses) and using a light client to verify the data against state roots from block headers. It can be started using the following configuration: ```yaml mode: client-stateless # ... sections not relevant are omitted ... consensus: # Light client configuration light_client: trust: # ... sections less relevant are omitted ... providers: - {{ provider_address }} # Add more node addresses as needed ``` We recommend configuring a recent trust root (e.g. 1000 blocks old) that is younger than your providers' last retained height, taking their pruning into account. See the [State Sync](./advanced/sync-node-using-state-sync.md) documentation for more info on configuring the trusted period, height and hash for the light client. The provider address can be a domain name, the IP of a node on the network, or a path to the socket of a local node. To ensure compatibility, all provider nodes specified must be running the latest version of Oasis Core. Make sure you allow all the methods listed in [this example](../grpc.mdx#envoy) in your gRPC proxy for the Oasis node. You can use `grpc.oasis.io:443` or `grpc.testnet.oasis.io:443` as provider addresses for Mainnet and Testnet. ### TEE ParaTime Client Node (optional) If your node requires the ability to issue queries that can access confidential data, start by following the [Configuration](#configuration) section to create the `/node/etc/config.yml` file. Once the file is set up, add the following content to the `runtime` part within the `/node/etc/config.yml` file: ```yaml # ... sections not relevant are omitted ... runtime: # Paths to bundles for ParaTimes without hot-loading support (e.g., Emerald). paths: - {{ runtime_orc_path }} # Configuration for all ParaTimes with hot-loading support # (e.g., Sapphire, Cipher) runtimes: - id: {{ runtime_identifier }} # List additional ParaTimes here. # The following section is required for ParaTimes which are running inside # the Intel SGX Trusted Execution Environment. sgx_loader: /node/bin/oasis-core-runtime-loader ``` Before using the configuration, replace the following variables: * `{{ runtime_orc_path }}`: Path to the [ParaTime bundle](paratime-node.mdx#manual-bundle-installation) of the form `/node/runtimes/foo-paratime.orc`. * You can find the current Oasis-supported ParaTimes in the Network Parameters page ([Mainnet], [Testnet]). * `{{ runtime_identifier }}`: You can find the runtime identifier in the Network Parameters page ([Mainnet], [Testnet]). ### Enabling Expensive Queries (optional) In case you need to issue runtime queries that may require more resources to compute (e.g. when running a Web3 Gateway), you need to configure the following in your node's `/node/etc/config.yml` file: ```yaml # ... sections not relevant are omitted ...
runtime: # Paths to ParaTime bundles for all of the supported ParaTimes. paths: - {{ runtime_orc_path }} # Configuration for all ParaTimes with hot-loading support # (e.g., Sapphire, Cipher) runtimes: - id: {{ runtime_identifier }} components: {} # ... sections not relevant are omitted ... config: {{ runtime_id }}: estimate_gas_by_simulating_contracts: true allowed_queries: - all_expensive: true ``` Before using the configuration, replace the following variables: * `{{ runtime_orc_path }}`: Path to the [ParaTime bundle](paratime-node.mdx#manual-bundle-installation) of the form `/node/runtimes/foo-paratime.orc`. * You can find the current Oasis-supported ParaTimes in the Network Parameters page ([Mainnet], [Testnet]). * `{{ runtime_id }}`: You can find the `runtime_id` in the Network Parameters chapter ([Mainnet], [Testnet]). * `{{ runtime_identifier }}`: You can find the runtime identifier in the Network Parameters page ([Mainnet], [Testnet]). ## Starting the Oasis Node You can start the node by running the following command: ```bash oasis-node --config /node/etc/config.yml ``` ## Checking Node Status To ensure that your node is properly connected with the network, you can run the following command after the node has started: ```bash oasis-node control status -a unix:/node/data/internal.sock ``` ## Advanced ### Set Custom Registry For instructions on setting up a custom bundle registry, please see the [Custom Bundle Registry] chapter. [Custom Bundle Registry]: ./paratime-node.mdx#custom-bundle-registry ### Manual Bundle Installation For instructions on manual bundle installation, please see the [Manual Bundle Installation] chapter. [Manual Bundle Installation]: ./paratime-node.mdx#manual-bundle-installation ## See also --- ## ParaTime Node (Run-your-node) For a production setup, we recommend running the ParaTime compute/storage node separately from the validator node (if you run one). Running the ParaTime and validator nodes as separate Oasis nodes prevents configuration mistakes and/or (security) issues affecting one node type from affecting the other one. If you are looking for some concrete ParaTimes that you can run, see [the list of ParaTimes and their parameters](../../get-involved/run-node/paratime-node.mdx). Oasis Core refers to ParaTimes as runtimes internally, so all configuration options will have `runtime` in their name. This guide will cover setting up your ParaTime compute node for the Oasis Network. This guide assumes some basic knowledge on the use of command line tools. ## Prerequisites Before following this guide, make sure you've followed the [Prerequisites](prerequisites) and [Run a Non-validator Node](non-validator-node.mdx) sections and have: * Oasis Node binary installed and configured on your system. * The chosen top-level `/node/` working directory prepared. In addition to the `etc` and `data` directories, also prepare the following directories: * `bin`: This will store binaries needed by Oasis Node for running the ParaTimes. * `runtimes`: This will store the ParaTime bundles. Feel free to name your working directory as you wish, e.g. `/srv/oasis/`. Just make sure to use the correct working directory path in the instructions below. The additional subdirectories can be created as shown in the sketch after this list. * Genesis file copied to `/node/etc/genesis.json`. Reading the rest of the [validator node setup instructions](validator-node.mdx) may also be useful.
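Extending the directory layout from the non-validator guide, the working directory with the additional subdirectories can be created like this (a sketch; adjust the top-level path if you named your working directory differently):

```bash
mkdir -m700 -p /node/{etc,data,bin,runtimes}
```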
To speed up bootstrapping your new node, we recommend [copying node's state from your existing node](advanced/copy-state-from-one-node-to-the-other.md) or [syncing it using state sync](advanced/sync-node-using-state-sync.md). ### Stake Requirements To be able to register as a ParaTime node on the Oasis Network, you need to have enough tokens staked in your entity's escrow account. Current minimum staking requirements for a specific ParaTime are listed on the [Stake Requirements] page. Should you want to check the staking requirements for other node roles and registered ParaTimes manually, use the Oasis CLI tools as described in [Common Staking Info]. Finally, to stake the tokens, use our [Oasis CLI tools]. If everything was set up correctly, you should see something like below when running [`oasis account show`] command for your entity's account (this is an example for Testnet): ```shell oasis account show oasis1qrec770vrek0a9a5lcrv0zvt22504k68svq7kzve --show-delegations ``` ``` Address: oasis1qrec770vrek0a9a5lcrv0zvt22504k68svq7kzve Nonce: 33 === CONSENSUS LAYER (testnet) === Total: 972.898210067 TEST Available: 951.169098086 TEST Active Delegations from this Account: Total: 16.296833986 TEST Delegations: - To: oasis1qz2tg4hsatlxfaf8yut9gxgv8990ujaz4sldgmzx Amount: 16.296833986 TEST (15000000000 shares) Debonding Delegations from this Account: Total: 5.432277995 TEST Delegations: - To: oasis1qz2tg4hsatlxfaf8yut9gxgv8990ujaz4sldgmzx Amount: 5.432277995 TEST (5432277995 shares) End Time: epoch 26558 Allowances for this Account: Total: 269.5000002 TEST Allowances: - Beneficiary: oasis1qqczuf3x6glkgjuf0xgtcpjjw95r3crf7y2323xd Amount: 269.5 TEST - Beneficiary: oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx Amount: 0.0000002 TEST === sapphire PARATIME === Balances for all denominations: 6.9995378 TEST ``` The stake requirements may differ from ParaTime to ParaTime and are subject to change in the future. [Stake Requirements]: ./prerequisites/stake-requirements.md [Common Staking Info]: ../../build/tools/cli/network.md#show-native-token [Oasis CLI tools]: ../../build/tools/cli/account.md#delegate [`oasis account show`]: ../../build/tools/cli/account.md#show ### Register a New Entity or Update Your Entity Registration Everything in this section should be done on the `localhost` as there are sensitive items that will be created. 1. If you don't have an entity yet, create a new one by following the [Initialize Entity] instructions for validators. 2. If you will be running the ParaTime on a new Oasis node, also initialize a new node by following the [Starting the Oasis Node] instructions for validators. 3. Now, [list your node ID] in the entity descriptor file `nodes` field. 4. [Register] the updated entity descriptor. You will [configure the node](#configuration) to automatically register for the roles it has enabled (i.e. storage and compute roles) via the `worker.registration.entity` configuration flag. No manual node registration is necessary. ParaTime rewards for running the compute node will be sent to your entity address **inside the ParaTime**. To access the rewards on the consensus layer, you will need to withdraw them first. 
Use the [`oasis account withdraw`] command, for example: ```shell oasis account withdraw 10 ``` [Initialize Entity]: validator-node.mdx#initialize-entity [Starting the Oasis Node]: validator-node.mdx#starting-the-oasis-node [list your node ID]: validator-node.mdx#add-your-node-id-to-the-entity-descriptor [Register]: validator-node.mdx#entity-registration [`oasis account withdraw`]: ../../build/tools/cli/account.md#withdraw ### Install Oasis Core Runtime Loader For ParaTimes running inside the [Intel SGX trusted execution environment](paratime-node.mdx#setting-up-trusted-execution-environment-tee), you will need to install the Oasis Core Runtime Loader. The Oasis Core Runtime Loader binary (`oasis-core-runtime-loader`) is part of Oasis Core binary releases, so make sure you download the appropriate version specified on the Network Parameters page ([Mainnet], [Testnet]). Install it to the `bin` subdirectory of your node's working directory, e.g. `/node/bin/oasis-core-runtime-loader`. [Mainnet]: ../network/mainnet.md [Testnet]: ../network/testnet.md ### Install Bubblewrap Sandbox (at least version 0.3.3) ParaTime compute nodes execute ParaTime binaries inside a sandboxed environment provided by [Bubblewrap](https://github.com/containers/bubblewrap). In order to install it, please follow these instructions, depending on your distribution. Also note that in case your platform is using AppArmor, you may need to update the policy (see [AppArmor profiles](prerequisites/system-configuration.mdx#apparmor-profiles)). On Ubuntu 20.04+: ```bash sudo apt install bubblewrap ``` On Fedora: ```bash sudo dnf install bubblewrap ``` On other systems, you can download the latest [source release provided by the Bubblewrap project](https://github.com/containers/bubblewrap/releases) and build it yourself. Make sure you have the necessary development tools installed on your system and the `libcap` development headers. On Ubuntu, you can install them with: ```bash sudo apt install build-essential libcap-dev ``` After obtaining the Bubblewrap source tarball, e.g. [bubblewrap-0.4.1.tar.xz](https://github.com/containers/bubblewrap/releases/download/v0.4.1/bubblewrap-0.4.1.tar.xz), run: ```bash tar -xf bubblewrap-0.4.1.tar.xz cd bubblewrap-0.4.1 ./configure --prefix=/usr make sudo make install ``` Note that Oasis Node expects Bubblewrap to be installed under `/usr/bin/bwrap` by default. Ensure you have a new enough version by running: ``` bwrap --version ``` Ubuntu 18.04 LTS (and earlier) provides an overly old `bubblewrap`. Build it from source as described above on those systems. ### Setting up Trusted Execution Environment (TEE) If a ParaTime requires the use of a TEE, then make sure you set up the TEE as instructed in the [Set up trusted execution environment (TEE)](prerequisites/set-up-tee.mdx) doc. ## Configuration You can configure a ParaTime in two ways. If the ParaTime supports hot-loading, use [Hot-loading ParaTime Bundle Installation](#hot-loading). Otherwise, use [Manual Bundle Installation](#manual-bundle-installation). Sapphire and Cipher ParaTimes support hot-loading installation, allowing bundles to be dynamically downloaded using the metadata from the [Oasis Bundle Registry]. For Emerald and other ParaTimes that don't support hot-loading, you have to configure the node manually.
### Hot-loading In order to configure the node with ParaTimes that support hot-loading, create the `/node/etc/config.yml` file with the following content: ```yaml mode: compute common: data_dir: /node/data log: format: JSON level: cometbft: info cometbft/context: error default: info consensus: # The external IP that is used when registering this node to the network. # NOTE: If you are using the Sentry node setup, this option should be # omitted. external_address: tcp://{{ external_address }}:26656 listen_address: tcp://0.0.0.0:26656 genesis: file: /node/etc/genesis.json p2p: # External P2P configuration. port: 9200 registration: addresses: # The external IP that is used when registering this node to the # network. - {{ external_address }}:9200 seeds: # List of seed nodes to connect to. # NOTE: You can add additional seed nodes to this list if you want. - {{ seed_node_address }} registration: # In order for the node to register itself, the entity ID must be set. entity_id: {{ entity_id }} runtime: # Configuration for all ParaTimes with hot-loading support # (e.g., Sapphire, Cipher) runtimes: - id: {{ runtime_identifier }} # List additional ParaTimes here. # The following section is required if at least one ParaTime is running # inside the Intel SGX Trusted Execution Environment. sgx_loader: /node/bin/oasis-core-runtime-loader ``` Before using the configuration, replace the following variables: * `{{ external_address }}`: The external IP you used when registering this node. * `{{ seed_node_address }}`: The seed node address in the form `ID@IP:port`. * You can find the current Oasis Seed Node address in the Network Parameters page ([Mainnet], [Testnet]). * `{{ entity_id }}`: The node's entity ID from the `entity.json` file. * `{{ runtime_identifier }}`: You can find the runtime identifier in the Network Parameters page ([Mainnet], [Testnet]). Make sure the `consensus` port (default: `26656`) and `p2p.port` (default: `9200`) are exposed and publicly accessible on the internet (for `TCP` and `UDP` traffic). #### Custom Bundle Registry If you want to download bundles using a registry other than the [Oasis Bundle Registry], add the URL of your desired registry to the runtime configuration. ``` # ... sections not relevant are omitted ... runtime: runtimes: - id: {{ runtime_identifier }} # Custom registries for all ParaTimes registries: - {{ url_to_registry }} ``` Before using the configuration, replace the following variables: * `{{ runtime_identifier }}`: You can find the runtime identifier in the Network Parameters page ([Mainnet], [Testnet]). * `{{ url_to_registry }}`: Url to your custom registry. The registry must ensure that all metadata files are accessible through a bundle registry URL, as metadata URLs are formed by appending the metadata file name, i.e. the bundle checksum, to this URL. Therefore, the bundle registry URL doesn't need to be valid endpoint, only the constructed metadata URLs need to be valid. For more information, see [ADR-25]. 
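To illustrate how the node forms metadata URLs, assume a hypothetical registry URL and bundle checksum (both made up for this example); the metadata URL is simply their concatenation:

```bash
# Hypothetical values for illustration only.
REGISTRY_URL="https://registry.example.com/bundles/"
BUNDLE_CHECKSUM="3b5e9c...0d41"   # metadata file name = bundle checksum
curl -fsSL "${REGISTRY_URL}${BUNDLE_CHECKSUM}"
```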
[ADR-25]: ../../adrs/0025-bundle_hot_loading/ [Oasis Bundle Registry]: https://github.com/oasisprotocol/bundle-registry ## Starting the Oasis Node You can start the node by running the following command: ```bash oasis-node --config /node/etc/config.yml ``` ## Checking Node Status To ensure that your node is properly connected with the network, you can run the following command after the node has started: ```bash oasis-node control status -a unix:/node/data/internal.sock ``` ## Troubleshooting See the general [Node troubleshooting](troubleshooting.md) and [Set up TEE troubleshooting](prerequisites/set-up-tee.mdx#troubleshooting) sections before proceeding with ParaTime node-specific troubleshooting. ### Too Old Bubblewrap Version Double-check your installed `bubblewrap` version and ensure it is at least version **0.3.3**. For details see the [Install Bubblewrap Sandbox](#install-bubblewrap-sandbox-at-least-version-033) section. ### Bubblewrap Sandbox Fails to Start If the environment in which you are running the ParaTime node applies overly restrictive Seccomp or AppArmor profiles, the Bubblewrap sandbox that isolates each runtime may fail to start. In the logs you will see how the runtime attempts to restart, but fails with a `bwrap` error, like: ```json {"level":"warn","module":"runtime","msg":"bwrap: Failed to mount tmpfs: Permission denied","runtime_id":"000000000000000000000000000000000000000000000000f80306c9858e7279","runtime_name":"sapphire-paratime","ts":"2023-03-06T10:08:51.983330021Z"} ``` In case of `bwrap` issues you need to adjust your Seccomp or AppArmor profiles to support Bubblewrap sandboxes. In Docker you can set or disable Seccomp and AppArmor profiles with the parameters: ``` --security-opt apparmor=unconfined \ --security-opt seccomp=unconfined \ ``` You can also configure an [AppArmor profile for Bubblewrap](prerequisites/system-configuration.mdx#apparmor-profiles). ### Bubblewrap Fails to Create Temporary Directory If the `/tmp` directory is not writable by the user running the node, the Bubblewrap sandbox may fail to start the ParaTimes. In the logs you will see errors about creating a temporary directory, like: ```json {"caller":"sandbox.go:546","err":"failed to create temporary directory: mkdir /tmp/oasis-runtime1152692396: read-only file system","level":"error","module":"runtime/host/sandbox","msg":"failed to start runtime","runtime_id":"000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c","ts":"2023-11-09T14:08:50.554629545Z"} ``` The node might report in the status field that a runtime has not been provisioned yet, like: ``` oasis-node control status -a unix:/node/data/internal.sock | grep status "status": "waiting for hosted runtime provision", ``` This can happen, for example, in Kubernetes, when the `readOnlyRootFilesystem` setting in a Pod or container security context is set to `true`. To resolve the issue, please make sure that the `/tmp` directory is writable by the user running the node. If you are running the node in Kubernetes, you can set the `readOnlyRootFilesystem` setting to `false`, or better yet, mount a dedicated volume into `/tmp`. It can be very small in size, e.g., `1 MiB` is enough. ### Stake Requirement Double-check that your node entity satisfies the staking requirements for a ParaTime node. For details see the [Stake Requirements](paratime-node.mdx#stake-requirements) section.
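If you prefer to query the current thresholds directly from the network, the Oasis CLI's `network show native-token` command prints them; a sketch (assuming the `--network` flag selects the network your entity is registered on):

```bash
# Shows the native token parameters, including per-role staking thresholds.
oasis network show native-token --network testnet
```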
### Enclave panicked If there is a misconfiguration in the prerequisite [BIOS settings], you can see an error in the logs reporting a problem when running SGX enclaves. ```json {"component":"ronl","level":"warn","module":"runtime","msg":"runtime execution failed: Enclave panicked: Enclave triggered exception: SgxEnclaveRun { function: EResume, exception_vector: 6, exception_error_code: 0, exception_addr: 0x0 }","runtime_id":"000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c","runtime_name":"sapphire-paratime","ts":"2024-06-03T11:00:43.417403299Z"} ``` For example, this can happen if you forget to configure AES instruction set (i.e. the `CPU AES: ENABLE` BIOS setting). To see if your system supports AES instruction set in the CPU run: ```bash cpuid -1 | grep "AES" ``` and look for the following line: ``` AES instruction = true ``` If the AES instruction is set to `false`, you need to reconfigure you BIOS and set it to `true`. You can do similar inspection for other [BIOS settings]. You can use the [attestation tool] (at least version 0.3.4) that also checks if the AES instruction set is available. [BIOS settings]: prerequisites/set-up-tee.mdx#sgx-bios-configuration [attestation tool]: https://github.com/oasisprotocol/tools/tree/main/attestation-tool#readme ## Advanced ### Manual Bundle Installation The ParaTime bundle needs to be obtained from a trusted source. The bundle (usually with an `.orc` extension that stands for Oasis Runtime Container) contains all the needed ParaTime binaries together with the identifier and version metadata to ease deployment. When the ParaTime is running in a trusted execution environment (TEE) the bundle will also contain all the required artifacts (e.g. SGXS version of the binary and any enclave signatures). #### **Compiling the ParaTime Binary from Source Code** In case you decide to build the ParaTime binary from source yourself, make sure that you follow our [guidelines for deterministic compilation](../../build/tools/build-paratime/reproducibility.md) to ensure that you receive the exact same binary. When the ParaTime is running in a TEE, a different binary to what is registered in the consensus layer will not work and will be rejected by the network. #### Install ParaTime Bundle For Emerald ParaTime, you need to obtain its bundle and install it to the `runtimes` subdirectory of your node's working directory. For example, for the [Sapphire ParaTime](../network/mainnet.md#sapphire), you would have to obtain the `sapphire-paratime.orc` bundle and install it to `/node/runtimes/sapphire-paratime.orc`. ```yaml mode: compute common: data_dir: /node/data log: format: JSON level: cometbft: info cometbft/context: error default: info consensus: # The external IP that is used when registering this node to the network. # NOTE: If you are using the Sentry node setup, this option should be # omitted. external_address: tcp://{{ external_address }}:26656 listen_address: tcp://0.0.0.0:26656 genesis: file: /node/etc/genesis.json p2p: # External P2P configuration. port: 9200 registration: addresses: # The external IP that is used when registering this node to the # network. - {{ external_address }}:9200 seeds: # List of seed nodes to connect to. # NOTE: You can add additional seed nodes to this list if you want. - {{ seed_node_address }} registration: # In order for the node to register itself, the entity ID must be set. 
entity_id: {{ entity_id }} runtime: paths: # Paths to bundles for ParaTimes without hot-loading support (e.g., Emerald) - {{ runtime_orc_path }} ``` Before using the configuration, replace the following variables: * `{{ external_address }}`: The external IP you used when registering this node. * `{{ seed_node_address }}`: The seed node address in the form `ID@IP:port`. * You can find the current Oasis Seed Node address in the Network Parameters page ([Mainnet], [Testnet]). * `{{ entity_id }}`: The node's entity ID from the `entity.json` file. * `{{ runtime_orc_path }}`: Path to the [ParaTime bundle](paratime-node.mdx#manual-bundle-installation) of the form `/node/runtimes/foo-paratime.orc`. * You can find the current Oasis-supported ParaTimes in the Network Parameters page ([Mainnet], [Testnet]). ## See also --- ## ParaTime Observer Node These instructions are for setting up a _ParaTime observer_ node, which is a special client node that supports confidential smart contract queries. If you just want to run a _ParaTime client_ node, see the [instructions for running a ParaTime client node](paratime-client-node.mdx). If you want to run a _ParaTime_ node instead, see the [instructions for running a ParaTime node](paratime-node.mdx). Similarly, if you want to run a _validator_ or a _non-validator_ node instead, see the [instructions for running a validator node](validator-node.mdx) or [instructions for running a non-validator node](non-validator-node.mdx). [TEE support] and a ParaTime client node with a confidential ParaTime are required to run a ParaTime observer node. There may be per-ParaTime on-chain policy requirements (such as whitelisting) for running observer nodes. This guide will cover setting up your ParaTime observer node for the Oasis Network. Observer nodes are ParaTime client nodes that support confidential queries without being elected into the compute committee. They are registered on chain so that their eligibility can be enforced by an on-chain policy (e.g. key manager committees can grant them permissions). This way users can, for example, run confidential transactions and view calls on the [Sapphire] ParaTime. This guide assumes some basic knowledge on the use of command line tools. [Sapphire]: ../../build/sapphire/README.mdx ## Prerequisites Before following this guide, make sure you've followed the [Prerequisites](prerequisites), [Run a Non-validator Node](non-validator-node.mdx), and [Run a ParaTime Client Node](paratime-client-node.mdx) sections and have a working ParaTime client node with [TEE support]. ### Stake Requirements To be able to register as a ParaTime observer node on the Oasis Network, you need to have enough tokens staked in your entity's escrow account. Current minimum staking requirements for a specific ParaTime are listed on the [Stake Requirements] page. You can also use the [Oasis CLI tools] to check this, as described in [Common Staking Info]. Finally, to stake the tokens and to check if you staked correctly, you can use any wallet and any explorer.
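If you use the Oasis CLI for staking, a minimal delegation could look like the sketch below; the amount and the `{{ entity_address }}` placeholder are illustrative:

```bash
# Delegate 100 TEST to your entity's address on Testnet (illustrative values).
oasis account delegate 100 {{ entity_address }} --network testnet
```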
If using our [Oasis CLI tools] and if everything was set up correctly, you should see something like below when running [`oasis account show`] command for your entity's account (this is an example for Testnet): ```shell oasis account show oasis1qrec770vrek0a9a5lcrv0zvt22504k68svq7kzve --show-delegations ``` ``` Address: oasis1qrec770vrek0a9a5lcrv0zvt22504k68svq7kzve Nonce: 33 === CONSENSUS LAYER (testnet) === Total: 972.898210067 TEST Available: 951.169098086 TEST Active Delegations from this Account: Total: 16.296833986 TEST Delegations: - To: oasis1qz2tg4hsatlxfaf8yut9gxgv8990ujaz4sldgmzx Amount: 16.296833986 TEST (15000000000 shares) Debonding Delegations from this Account: Total: 5.432277995 TEST Delegations: - To: oasis1qz2tg4hsatlxfaf8yut9gxgv8990ujaz4sldgmzx Amount: 5.432277995 TEST (5432277995 shares) End Time: epoch 26558 Allowances for this Account: Total: 269.5000002 TEST Allowances: - Beneficiary: oasis1qqczuf3x6glkgjuf0xgtcpjjw95r3crf7y2323xd Amount: 269.5 TEST - Beneficiary: oasis1qrydpazemvuwtnp3efm7vmfvg3tde044qg6cxwzx Amount: 0.0000002 TEST === sapphire PARATIME === Balances for all denominations: 6.9995378 TEST ``` The stake requirements may differ from ParaTime to ParaTime and are subject to change in the future. Currently, for example, if you want to register an observer node for Testnet/Mainnet, you currently need to have at least **200 TEST/ROSE** tokens delegated: - **100 TEST/ROSE** for registering a new node entity and, - **100 TEST/ROSE** for each observer node. See the [Stake Requirements] page for more details. [Stake Requirements]: prerequisites/stake-requirements.md [Run a ParaTime Node]: ../../get-involved/run-node/paratime-node.mdx [Common Staking Info]: ../../build/tools/cli/network.md#show-native-token [TEE support]: prerequisites/set-up-tee.mdx [Oasis CLI tools]: ../../build/tools/cli/account.md#delegate [`oasis account show`]: ../../build/tools/cli/account.md#show ### Register a New Entity or Update Your Entity Registration Everything in this section should be done on the `localhost` as there are sensitive items that will be created. If you plan to run an observer node for Mainnet and Testnet make sure you create and use two separate entities to prevent replay attacks. 1. If you don't have an entity yet, create a new one by following the [Initialize Entity] instructions for validators. 2. If you will be running the ParaTime on a new Oasis node, also initialize a new node by following the [Starting the Oasis Node] instructions for validators. 3. Now, [list your node ID] in the entity descriptor file `nodes` field. 4. [Register] the updated entity descriptor. 5. By adding the created entity ID in the node config file, you will [configure the node] to automatically register for the roles it has enabled (i.e. observer role) via the `registration.entity_id` configuration flag. No manual node registration is necessary. ```yaml mode: client # ... sections not relevant are omitted ... registration: entity_id: {{ entity_id }} ``` 6. Once the registration is complete, please share the Entity IDs with us so that we can whitelist them accordingly. 
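The entity ID to share is the `id` field of your entity descriptor; assuming the descriptor was initialized at `~/entity/entity.json` (adjust the path to wherever you created it), it can be printed with:

```bash
jq -r .id ~/entity/entity.json
```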
[Initialize Entity]: validator-node.mdx#initialize-entity [Starting the Oasis Node]: validator-node.mdx#starting-the-oasis-node [list your node ID]: validator-node.mdx#add-your-node-id-to-the-entity-descriptor [Register]: validator-node.mdx#entity-registration [`oasis account withdraw`]: ../../build/tools/cli/account.md#withdraw [configure the node]: paratime-node.mdx#configuration ## (Re)starting the Oasis Node You can (re)start the node by running the following command: ```bash oasis-node --config /node/etc/config.yml ``` After one epoch the node should register as an observer (assuming it satisfies per-ParaTime policy requirements). ## Checking Node Status To ensure that your node has the observer role, you can run the following command after the node has started: ```bash oasis-node control status -a unix:/node/data/internal.sock ``` You should see `"observer"` in the `.registration.descriptor.roles` output entry. ## See also --- ## Cloud Providers Before committing to a service, be sure to verify processor compatibility and enquire with the provider about the status of Intel SGX support. Intel maintains a comprehensive list of processors that support Intel SGX: * https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions-processors.html ## Possible Limitations While many bare-metal or dedicated server providers use Intel processors that support SGX, there are potential limitations: * **BIOS Configuration:** Some providers may not allow customers to access or modify BIOS settings, which may be necessary to correctly configure Intel SGX. * **Outdated Firmware:** SGX requires up-to-date firmware. Some providers may not maintain their systems with the latest firmware updates, preventing SGX from functioning correctly. * **Lack of SGX-specific Offerings:** Many providers may not advertise or specifically offer SGX-enabled servers, making it difficult for customers to know if the feature is available. * **Limited Support:** Even if SGX is available, the provider's support team may not be familiar with SGX-specific issues or configurations. * **Hardware Provisioning:** If you use keys (such as SGX sealing keys) that are bound to hardware to encrypt the data of an instance within an Intel SGX enclave, the encrypted data cannot be decrypted after the host of the instance is changed. ## Known Providers | Provider | Product | Documentation | Last Updated | | - | - | - | - | | [Alibaba Cloud](https://www.alibabacloud.com) | [(ECS) Bare Metal Instances](https://www.alibabacloud.com/en/product/ebm) | [Build an SGX confidential computing environment](https://www.alibabacloud.com/help/en/ecs/user-guide/build-an-sgx-encrypted-computing-environment) | 2024-09-25 | | [Azure](https://azure.microsoft.com/) | [Some Dedicated Host SKUs](https://learn.microsoft.com/en-us/azure/virtual-machines/dedicated-host-general-purpose-skus) | [Solutions on Azure for Intel SGX](https://learn.microsoft.com/en-us/azure/confidential-computing/virtual-machine-solutions-sgx) | 2024-09-25 | | [Gcore](https://gcore.com) | [Bare Metal](https://gcore.com/cloud/bare-metal-servers) | [Computing with Intel SGX](https://gcore.com/cloud/intel-sgx) | 2024-09-25 | | [IBM Cloud](https://cloud.ibm.com/) | [Virtual Private Cloud (VPC)](https://www.ibm.com/cloud/vpc) | [Confidential computing with SGX for VPC](https://cloud.ibm.com/docs/vpc?topic=vpc-about-sgx-vpc)
| 2024-09-25 | | [OVH](https://www.ovhcloud.com/) | [Bare Metal servers](https://www.ovhcloud.com/en/bare-metal/prices/?use_cases=confidential-computing) | [SGX for Confidential Computing](https://www.ovhcloud.com/en/bare-metal/intel-software-guard-extensions/) | 2024-09-25 | | [PhoenixNAP](https://phoenixnap.com/) | [Bare Metal Cloud](https://phoenixnap.com/bare-metal-cloud) | [What is Intel SGX and What are the Benefits?](https://phoenixnap.com/kb/intel-sgx) | 2024-09-25 | | [Vultr](https://www.vultr.com/) | [Bare Metal](https://www.vultr.com/products/bare-metal/) | [Intel SGX development on Vultr](https://zenlot.medium.com/intel-sgx-development-on-vultr-30cdfd5c9754) | 2024-09-25 | If you are aware of more cloud or dedicated server providers that actively support Intel SGX or Intel TDX, or have updated information about the providers listed on this page, please [create an issue on Github] with the additional details. [create an issue on Github]: https://github.com/oasisprotocol/docs/issues/new --- ## Hardware Requirements The Oasis Network is composed of multiple classes of nodes and services such as: * Consensus validator or non-validator node * Sapphire ParaTime compute or client node * Emerald ParaTime compute or client node * Cipher ParaTime compute or client node Hardware requirements for running the Oasis Web3 gateway can be found [here](../../web3.mdx#hardware). This page describes the **minimum** and **recommended** system hardware requirements for running different types of nodes on the Oasis Network. If you are running more than one ParaTime on a single node, you will require more resources. If you configure a system with slower resources than the recommended values, you run the risk of being underprovisioned, causing proposer node timeouts and synchronization delays. This could result in losing stake and not participating in committees. If you run out of memory or storage, the Oasis node process will be forcefully killed. This could lead to state corruption, losing stake and not participating in committees. ### CPU * Consensus validator or non-validator node: * Minimum: 2.0 GHz x86-64 CPU with [AES instruction set] support * Recommended: 2.0 GHz x86-64 CPU with 2 cores/vCPUs with [AES instruction set] and [AVX2] support * Emerald ParaTime compute node and all ParaTime client nodes: * Minimum: 2.0 GHz x86-64 CPU with [AES instruction set] support * Recommended: 2.0 GHz x86-64 CPU with 4 cores/vCPUs with [AES instruction set] and [AVX2] support * Sapphire and Cipher ParaTime compute node: * Minimum: 2.0 GHz x86-64 CPU with [AES instruction set] and [Intel SGX] support * Recommended: 2.0 GHz x86-64 CPU with 2 cores/vCPUs with [AES instruction set], [Intel SGX] and [AVX2] support During regular workload your node will operate with the minimal CPU resources. However, if put under heavy load it might require more cores/vCPUs (e.g. an Emerald ParaTime client node behind a public Emerald Web3 gateway). The [AES instruction set] support is required by [Deoxys-II-256-128], a Misuse-Resistant Authenticated Encryption (MRAE) algorithm, which is used for encrypting ParaTime's state. The [Advanced Vector Extensions 2 (AVX2)][AVX2] support enables faster Ed25519 signature verification which in turn makes a node sync faster. The [Intel SGX] support is required if you want to run ParaTime compute nodes for SGX-enabled ParaTimes (such as Sapphire and Cipher). Some ParaTimes like Emerald do not require SGX. 
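On a Linux host, a quick way to sanity-check these CPU features is to look for the corresponding flags in `/proc/cpuinfo` (the `sgx` flag is only reported on kernels with SGX support and when SGX is enabled in the BIOS):

```bash
grep -Eow 'aes|avx2|sgx' /proc/cpuinfo | sort -u
```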
Check the specific ParaTime documentation to determine if SGX is required for your use case. Intel maintains a comprehensive list of [processors that support Intel SGX]. [AES instruction set]: https://en.wikipedia.org/wiki/AES_instruction_set [Deoxys-II-256-128]: https://sites.google.com/view/deoxyscipher [AVX2]: https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#Advanced_Vector_Extensions_2 [Intel SGX]: https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html [processors that support Intel SGX]: https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions-processors.html ### Memory * Consensus validator or non-validator node: * Minimum: 6 GB of ECC RAM * Recommended: 8 GB of ECC RAM * Each Emerald, Sapphire, Cipher compute or client node: * Minimum: 12 GB of ECC RAM * Recommended: 20 GB of ECC RAM During regular workload your node will operate with less than the minimum amount of memory. However, at certain time points, it will absolutely require more memory. Examples of such more resource intensive time points are the initial state sync, BadgerDB migration when upgrading a node to a new major version of the Oasis Core, generating storage checkpoints with BadgerDB, periodic BadgerDB compactions... ### Storage * Consensus validator or non-validator node: * Minimum: 400 GB of SSD or NVMe fast storage * Recommended: 700 GB of SSD or NVMe fast storage * Emerald ParaTime compute or client node (in addition to the consensus storage requirements): * Minimum: 400 GB of SSD or NVMe fast storage * Recommended: 700 GB of SSD or NVMe fast storage * Sapphire and Cipher ParaTime compute or client node (in addition to the consensus storage requirements): * Minimum: 200 GB of SSD or NVMe fast storage * Recommended: 300 GB of SSD or NVMe fast storage Consensus and ParaTime state is stored in an embedded [BadgerDB] database which was [designed to run on SSDs][badgerdb-ssds]. Hence, we **strongly discourage** trying to run a node that stores data **on classical HDDs**. The consensus layer and ParaTimes accumulate state over time. The speed at which the state grows depends on the number of transactions on the network and ParaTimes. For example, a consensus non-validator node on the Mainnet accumulated: * 280 GB of consensus state in ~1 year between Apr 28, 2021 and Apr 11, 2022 (since the [Cobalt upgrade]) * 32 GB of consensus state in ~1 month since the [Damask upgrade] For example, an Emerald client node on the Mainnet additionally accumulated: * 260 GB of Emerald ParaTime state in ~5 months between Nov 18, 2021 and Apr 11, 2022 (since the [Emerald Mainnet launch]) * 25 GB of Emerald ParaTime state in ~1 month since the [Damask upgrade] Dump & restore upgrades (e.g. [Damask upgrade], [Cobalt upgrade]) include state wipes which will free the node storage. Historical state can be accessed by running a separate archive node. You can configure your node _not to_ keep a complete state from the genesis onward. This will reduce the amount of storage required for the consensus and ParaTime state. To enable pruning of the consensus state set the `consensus.prune.strategy` and `consensus.prune.num_kept` parameters appropriately in your [node's configuration]. To enable pruning of the ParaTime state set the `runtime.prune.strategy` and `runtime.prune.num_kept` parameters appropriately in your [node's configuration]. 
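A simple way to keep an eye on how much state your node has accumulated, and whether enabling the pruning options above is warranted, is to check the size of the data directory:

```bash
du -sh /node/data
```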
[BadgerDB]: https://dgraph.io/docs/badger/ [badgerdb-ssds]: https://dgraph.io/docs/badger/design/ [Cobalt upgrade]: ../../reference/upgrades/cobalt-upgrade.md [Damask upgrade]: ../../reference/upgrades/damask-upgrade.md [Emerald Mainnet launch]: https://medium.com/oasis-protocol-project/oasis-emerald-evm-paratime-is-live-on-mainnet-13afe953a4c9 [node's configuration]: ../validator-node.mdx#configuration ### Network * Consensus validator node and all ParaTime compute nodes: * Minimum: 200 Mbps internet connection with low latency * Recommended: 1 Gbps internet connection with low latency During regular workload your node will receive much less network traffic. However, at certain time points when huge bursts of transactions arrive, you need to assure that it doesn't timeout. --- ## Install the Oasis Node The Oasis node is a binary that is created from the [Oasis Core] repository's [`go/`] directory. It is a single executable that contains the logic for running your node in various [roles]. The Oasis Node is currently only supported on x86_64 Linux systems. [Oasis Core]: https://github.com/oasisprotocol/oasis-core [`go/`]: https://github.com/oasisprotocol/oasis-core/tree/master/go [roles]: ../../README.mdx#validator-and-paratime-nodes ## Set up the Oasis Node's Working Directory Before we install the Oasis node we need to ensure that we have a place to store necessary files. We will reference the working directory on the server as `/node` throughout the documentation. ### Setting Up the `/node` Directory In the `/node` directory, create the following subdirectories: * `etc/`: this is to store the configuration and `entity.json` * `data/`: this is to store the node's data * `bin/`: this is to store the `oasis-node` binary * `runtimes/`: this is to store the ParaTime `.orc` bundles You can make this directory structure with the **corresponding permissions** by executing the following command: ```shell mkdir -m700 -p /node/{etc,bin,runtimes,data} ``` ### Copying the Genesis File to the server The latest Genesis file can be found in the Network Parameters page ([Mainnet], [Testnet]). You should download the latest `genesis.json` file and copy it to `/node/etc/genesis.json` on the `server`. [Mainnet]: ../../network/mainnet.md [Testnet]: ../../network/testnet.md ## Obtain the `oasis-node` Binary ### Downloading a Binary Release For convenience, we provide binaries that have been built by the Oasis Protocol Foundation. Links to the binaries are provided in the Network Parameters page ([Mainnet], [Testnet]). [Mainnet]: ../../network/mainnet.md [Testnet]: ../../network/testnet.md ### Building From Source Although highly suggested, building from source is currently beyond the scope of this documentation. See [Oasis Core's Build Environment Setup and Building][oasis-core-build] documentation for more details. The code in the current [`master`] branch may be incompatible with the code used by other nodes on the network. Make sure to use the version specified on the Network Parameters page ([Mainnet], [Testnet]). [oasis-core-build]: ../../../core/development-setup/build-environment-setup-and-building [`master`]: https://github.com/oasisprotocol/oasis-core/tree/master/ ### Adding `oasis-node` Binary to `PATH` To install the `oasis-node` binary next to your Oasis node data directory, copy/symlink it to e.g. `/node/bin`. To install the `oasis-node` binary for the current user, copy/symlink it to `~/.local/bin`. To install the `oasis-node` binary for all users of the system, copy it to `/usr/local/bin`. 
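For example, assuming the unpacked `oasis-node` binary is in your current directory, installing it next to the node's data directory and exposing it for the current user could look like:

```bash
install -m 0755 oasis-node /node/bin/oasis-node
ln -sf /node/bin/oasis-node ~/.local/bin/oasis-node
```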
## Running ParaTimes If you intend to [run a ParaTime node](../paratime-node.mdx) you will need to additionally install the following software packages: * [Bubblewrap](https://github.com/projectatomic/bubblewrap) 0.4.1+, needed for creating process sandboxes. On Ubuntu 20.04+, you can install it with: ```shell sudo apt install bubblewrap ``` On Fedora, you can install it with: ```shell sudo dnf install bubblewrap ``` --- ## Set up Trusted Execution Environment (TEE) Most Oasis ParaTimes and ROFLs are configured to run in a TEE. There are two kinds of Intel TEEs currently in use: - [Intel SGX] is required by the ParaTimes and [ROFL SGX][rofl-types] apps. - [Intel TDX] is required by the [ROFL TDX raw and container-based][rofl-types] apps. To run SGX/TDX enclaves: 1. your hardware must have SGX/TDX support, 2. you must have the latest BIOS updates installed, 3. you must have SGX/TDX enabled in your BIOS, 4. you must have the Linux kernel, drivers and software components properly installed and running. [Intel SGX]: https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html [Intel TDX]: https://www.intel.com/content/www/us/en/products/docs/accelerator-engines/trust-domain-extensions.html [rofl-types]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/rofl/workflow/init.md ## Software Guard Extensions (SGX) ### BIOS Configuration To enable Intel SGX on your hardware, you also need to configure the BIOS. First, **update the BIOS to the latest version with the latest microcode** and then proceed with BIOS configuration as shown below. Note that some settings may not apply to your BIOS. In that case, configure only the relevant ones. Please set the BIOS settings as follows: - **SGX**: ENABLE - **Hyper-Threading**: DISABLE - **Intel SpeedStep**: DISABLE - **Turbo Mode**: DISABLE - **CPU AES**: ENABLE - **SGX Auto MP Registration**: ENABLE ### Ensure Clock Synchronization Due to additional sanity checks within runtime enclaves, you should ensure that the node's local clock is synchronized (e.g. using NTP). If it is off by more than half a second you may experience unexpected runtime aborts. ### Ensure Proper SGX Device Permissions Make sure that the user that is running the Oasis Node binary has access to the SGX device (e.g. `/dev/sgx_enclave`). This can usually be achieved by adding the user into the right group, for example in case the permissions of the SGX device are as follows: ``` crw-rw---- 1 root sgx 10, 125 Oct 28 09:28 /dev/sgx_enclave ``` and the user running Oasis Node is `oasis`, you can do: ```bash sudo adduser oasis sgx ``` Failure to do so may result in the "permission denied OS error 13" during runtime startup. If you are planning to run your node from an interactive session, make sure to log out for permissions to take effect. ### AESM Service To allow execution of SGX enclaves, several **Architectural Enclaves (AE)** are involved (i.e. Launch Enclave, Provisioning Enclave, Provisioning Certificate Enclave, Quoting Enclave, Platform Services Enclaves). Communication between application-spawned SGX enclaves and Intel-provided Architectural Enclaves is through **Application Enclave Service Manager (AESM)**. AESM runs as a daemon and provides a socket through which applications can facilitate various SGX services such as launch approval, remote attestation quote signing, etc. Oasis node requires the use of DCAP attestation. 
To see if your system supports it, run the following: ```bash cpuid -1 | grep "SGX" ``` and look for the following line: ``` SGX_LC: SGX launch config supported = true ``` ### DCAP Attestation #### Ubuntu 22.04+ A convenient way to install the AESM service on Ubuntu 22.04 systems is to use the Intel's [official Intel SGX APT repository](https://download.01.org/intel-sgx/sgx_repo/). First add Intel SGX APT repository to your system: ```bash curl -fsSL https://download.01.org/intel-sgx/sgx_repo/ubuntu/intel-sgx-deb.key | sudo gpg --dearmor -o /usr/share/keyrings/intel-sgx-deb.gpg echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-sgx-deb.gpg] https://download.01.org/intel-sgx/sgx_repo/ubuntu $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/intel-sgx.list > /dev/null ``` And then install the `sgx-aesm-service`, `libsgx-aesm-ecdsa-plugin`, `libsgx-aesm-quote-ex-plugin` and `libsgx-dcap-default-qpl` packages: ```bash sudo apt update sudo apt install sgx-aesm-service libsgx-aesm-ecdsa-plugin libsgx-aesm-quote-ex-plugin libsgx-dcap-default-qpl ``` The AESM service should be up and running. To confirm that, use: ```bash sudo systemctl status aesmd.service ``` #### Configuring the Quote Provider The Intel Quote Provider (`libsgx-dcap-default-qpl`) needs to be configured in order to use either the Intel PCS, the PCCS of your cloud service provider, or your own PCCS. The configuration file is located at `/etc/sgx_default_qcnl.conf`. Make sure to always restart the `aesmd.service` after updating the configuration, via: ```bash sudo systemctl restart aesmd.service ``` ##### Intel PCS Using the Intel PCS is the simplest and most generic way, but it may be less reliable than using your own PCCS. Some cloud providers (see the [following section](#cloud-service-providers-pccs)) also require you to use their PCCS. To use Intel PCS update the `pccs_url` value in `/etc/sgx_default_qcnl.conf` to the Intel PCS API URL: ```json //PCCS server address "pccs_url": "https://api.trustedservices.intel.com/sgx/certification/v4/" ``` In case there is an error in the QPL configuration file, attestation will refuse to work and the AESM service may produce unhelpful errors like the following: ``` Couldn't find the platform library. (null) ``` The only thing that needs to be changed is the `pccs_url` value above. **Do not add any comments and/or modify punctuation as these could make the configuration file invalid.** ##### Cloud Service Provider's PCCS Some cloud providers require you to use their PCCS. - Azure: See the [Azure documentation] for details on configuring the quote provider. The documentation contains an example of an Intel QPL configuration file that can be used. - Alibaba Cloud: See the [Alibaba Cloud documentation] for details on configuring the quote provider. The documentation shows the required `sgx_default_qcnl.conf` changes. - IBM Cloud: See the [IBM Cloud documentation] for details on configuring the quote provider. The documentation shows the required `sgx_default_qcnl.conf` changes. - Other cloud providers: If you are using a different cloud service provider, consult their specific documentation for the appropriate PCCS configuration and guidance on configuring the quote provider, or use one of the other PCCS options. 
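Whichever PCCS you point the quote provider at, it helps to confirm that the AESM service picked up the new configuration; a minimal check, assuming the `aesmd.service` unit installed by the packages above:

```bash
sudo systemctl restart aesmd.service
sudo journalctl -u aesmd.service -n 50 --no-pager   # look for quote provider / PCCS errors
```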
[Azure documentation]: https://learn.microsoft.com/en-us/azure/security/fundamentals/trusted-hardware-identity-management#how-do-i-use-intel-qpl-with-trusted-hardware-identity-management [Alibaba Cloud documentation]: https://www.alibabacloud.com/help/en/ecs/user-guide/build-an-sgx-encrypted-computing-environment [IBM Cloud documentation]: https://cloud.ibm.com/docs/vpc?topic=vpc-about-attestation-sgx-dcap-vpc ##### Own PCCS It is also possible to run PCCS yourself. Follow [official Intel instructions] on how to setup your own PCCS. [official Intel Instructions]: https://www.intel.com/content/www/us/en/developer/articles/guide/intel-software-guard-extensions-data-center-attestation-primitives-quick-install-guide.html #### DCAP Attestation Docker Alternatively, an easy way to install and run the AESM service on a [Docker](https://docs.docker.com/engine/)-enabled system is to use [our AESM container image](https://github.com/oasisprotocol/oasis-core/pkgs/container/aesmd). Executing the following command should (always) pull the latest version of our AESMD Docker container, map the SGX devices and `/var/run/aesmd` directory and ensure AESM is running in the background (also automatically started on boot): ```bash docker run \ --pull always \ --detach \ --restart always \ --device /dev/sgx_enclave \ --device /dev/sgx_provision \ --volume /var/run/aesmd:/var/run/aesmd \ --name aesmd \ ghcr.io/oasisprotocol/aesmd-dcap:master ``` By default, the Intel Quote Provider in the docker container is configured to use the Intel PCS endpoint. To override the Intel Quote Provider configuration within the container mount your own custom configuration using the `volume` flag. ```bash docker run \ --pull always \ --detach \ --restart always \ --device /dev/sgx_enclave \ --device /dev/sgx_provision \ --volume /var/run/aesmd:/var/run/aesmd \ --volume /etc/sgx_default_qcnl.conf:/etc/sgx_default_qcnl.conf \ --name aesmd \ ghcr.io/oasisprotocol/aesmd-dcap:master ``` The default Intel Quote Provider config is available in [Intel SGX Github repository](https://github.com/intel/SGXDataCenterAttestationPrimitives/blob/master/QuoteGeneration/qcnl/linux/sgx_default_qcnl.conf). #### Multi-socket Systems Note that platform provisioning for multi-socket systems (e.g. systems with multiple CPUs) is more complex, especially if one is using a hypervisor and running SGX workloads inside guest VMs. In this case additional provisioning may be required to be performed on the host. Note that the system must be booted in UEFI mode for provisioning to work as the provisioning process uses UEFI variables to communicate with the BIOS. In addition the **SGX Auto MP Registration** BIOS configuration setting should be set to _enabled_. ##### Ubuntu 22.04+ To provision and register your multi-socket system you need to install the Intel SGX Multi-Package Registration Agent Service as follows (assuming Intel's SGX apt repository has been added as discussed above): ```shell sudo apt install sgx-ra-service ``` After boot, the log in `/var/log/mpa_registration.log` should indicate successful registration. If an error is reported, make sure that you have enabled SGX Auto MP Registration in the BIOS as mentioned above. You can also perform re-provisioning by rebooting and setting the **SGX Factory Reset** option to _enabled_. ##### VMware vSphere 8.0+ In order to enable SGX remote attestation on VMware vSphere-based systems, please follow [the vSphere guide]. 
[the vSphere guide]: https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vcenter-esxi-management/GUID-F16476FD-3B66-462F-B7FB-A456BEDC3549.html

### Migrate from EPID Attestation to DCAP Attestation

EPID attestation will be discontinued in 2025 and will no longer be available on any processors. All nodes using EPID attestation must migrate to DCAP attestation.

To transition to DCAP attestation, follow these steps:

1. See if your system [supports DCAP attestation]. If your hardware does not support DCAP attestation, you'll need to migrate your node to newer hardware.
2. Transition to DCAP attestation:
   - If you are running the AESM service on Docker, follow [these instructions].
   - Otherwise, manually configure the AESM service to use DCAP attestation:
     1. Remove any leftover EPID attestation packages. If running on Ubuntu 22.04, run the following command:

        ```bash
        sudo apt remove libsgx-aesm-launch-plugin libsgx-aesm-epid-plugin
        ```

     2. Configure the AESM service to use [DCAP attestation].
     3. Restart the AESM service. If running on Ubuntu 22.04, run the following command:

        ```bash
        sudo systemctl restart aesmd.service
        ```

3. [Configure the Quote Provider].
4. Use the [attestation tool] to test if your settings are correct.
5. Restart your compute node and verify that the node is `ready`.

[these instructions]: #dcap-attestation-docker
[supports DCAP attestation]: #aesm-service
[Gracefully shutdown]: ../maintenance/shutting-down-a-node.md
[DCAP attestation]: #dcap-attestation
[Configure the Quote Provider]: #configuring-the-quote-provider
[attestation tool]: #check-sgx-setup

### Check SGX Setup

To test if your settings are correct, you may use the [Oasis attestation tool][attestation-tool] ([binary]) for testing remote attestation against Intel SGX's development server.

[attestation-tool]: https://github.com/oasisprotocol/tools/tree/main/attestation-tool#readme
[binary]: https://github.com/oasisprotocol/tools/releases

## Trust Domain Extensions (TDX)

Before proceeding with the TDX installation, **make sure you followed the SGX installation steps above and you have a working SGX environment**!

### BIOS configuration

- **SGX**: ENABLE
- **SGX memory**: at least 256MB
- **SGX Auto MP Registration**: ENABLE
- **Hyper-Threading**: DISABLE
- **Intel SpeedStep**: DISABLE
- **SecureBoot**: DISABLE
- **All Internal Graphics**: DISABLE
- **Turbo Mode**: DISABLE
- **CPU AES**: ENABLE
- **TDX**: ENABLE
- **Memory Encryption (TME)**: ENABLE
- **Total Memory Encryption Bypass**: ENABLE
- **Total Memory Encryption Multi-Tenant (TME-MT)**: ENABLE
- **TME-MT memory integrity**: DISABLE
- **TDX Secure Arbitration Mode Loader (SEAM Loader)**: ENABLE
- **TME-MT/TDX key split**: non-zero value
- **ACPI S3 and deeper power states**: DISABLE

### Host OS setup

The following section contains summarized instructions for setting up an environment for running a ROFL node and other TDX services on Ubuntu 24.04 or later. Check out the official [Canonical TDX repository] for details.

[Canonical TDX repository]: https://github.com/canonical/tdx

1. Add the following TDX PPAs to your APT sources and the keyring:

   ```shell
   sudo add-apt-repository ppa:kobuk-team/tdx-release
   sudo add-apt-repository ppa:kobuk-team/tdx-attestation-release
   sudo apt update
   ```

2. Install the TDX quote generation service and QEMU for running guest virtual machines:

   ```shell
   sudo apt install tdx-qgs qemu-utils qemu-system-x86
   ```

3. Install a special TDX-enabled Linux kernel:

   ```shell
   sudo apt install linux-image-intel
   ```
4. Disable ACPI S3 (add kernel parameter: `nohibernate`):

   ```
   sed -i -E "s/GRUB_CMDLINE_LINUX=\"(.*)\"/GRUB_CMDLINE_LINUX=\"\1 nohibernate\"/g" /etc/default/grub
   update-grub
   ```

5. Make sure the non-root user running `oasis-node` is a member of the `sgx`, `sgx_prv` and `kvm` groups on the host (access to the `/dev/sgx*`, `/dev/kvm*` and `/dev/*vsock*` devices).

6. Reboot your system and select the new `-intel` kernel. If you don't have access to the grub selector during machine startup, you can also detect and set the correct default kernel by executing the script below with elevated privileges:

   ```bash
   export KERNEL_RELEASE=$(apt show "linux-image-intel" 2>&1 | gawk 'match($0, /Depends:.* linux-image-([^, ]+)/, a) {print a[1]}')
   if [ -z "${KERNEL_RELEASE}" ]; then
     echo "ERROR : unable to determine kernel release"
     exit 1
   fi
   MID=$(awk '/Advanced options for Ubuntu/{print $(NF-1)}' /boot/grub/grub.cfg | cut -d\' -f2)
   KID=$(awk "/with Linux $KERNEL_RELEASE/"'{print $(NF-1)}' /boot/grub/grub.cfg | cut -d\' -f2 | head -n1)
   cat > /etc/default/grub.d/99-tdx-kernel.cfg <<EOF
   GRUB_DEFAULT="${MID}>${KID}"
   EOF
   update-grub
   ```

On Ubuntu 24.04, AppArmor restricts the unprivileged user namespaces that Bubblewrap needs for creating process sandboxes. A `bwrap` profile similar to the following grants them:

```
abi <abi/4.0>,
include <tunables/global>

profile bwrap /usr/bin/bwrap flags=(unconfined) {
  userns,

  # Site-specific additions and overrides. See local/README for details.
  include if exists <local/bwrap>
}
```

Enable the Bubblewrap user namespace restriction policy:

```shell
sudo ln -s /usr/share/apparmor/extra-profiles/bwrap-userns-restrict /etc/apparmor.d/
```

Then reload AppArmor policies by running:

```
sudo systemctl reload apparmor.service
```

## Example snippets for different setups

You may find the following snippets helpful in case you are running the `oasis-node` process with systemd, Docker or runit.

Add a [`User` directive] to the Oasis service's systemd unit file:

```
...
User=oasis
...
```

Below is a simple systemd unit file for `oasis-node` that can be used as a starting point:

```ini
[Unit]
Description=Oasis Node
After=network.target

[Service]
Type=simple
User=oasis
WorkingDirectory=/node/data
ExecStart=/node/bin/oasis-node --config /node/etc/config.yml
Restart=on-failure
RestartSec=3
LimitNOFILE=1024000

[Install]
WantedBy=multi-user.target
```

Add a [`USER` instruction] to your Oasis service's Dockerfile:

```
...
USER oasis
...
```

Wrap the invocation in a [`chpst` command]:

```shell
chpst -u oasis oasis-node ...
```

[`User` directive]: https://www.freedesktop.org/software/systemd/man/systemd.exec.html#User=
[`USER` instruction]: https://docs.docker.com/engine/reference/builder/#user
[`chpst` command]: http://smarden.org/runit/chpst.8.html
[Invalid Permissions]: ../troubleshooting.md#invalid-permissions

---

## ROFL Node

These instructions are for setting up a _ROFL node_, which executes ROFLs inside a TEE but otherwise only observes the ParaTime activity and can also submit transactions. This guide will cover setting up your ROFL node for running ROFL apps on the Oasis Network. This guide assumes some basic knowledge on the use of command line tools.

## Prerequisites

### ParaTime Client Node

A ROFL node is a special kind of **ParaTime Client Node** that requires TEE-capable hardware in order to securely run ROFLs. First, complete the ParaTime Client Node instructions:

1. Set up your [ParaTime Client Node] including support for [TEE ParaTimes].
2. Add the Sapphire ParaTime to your node ([Mainnet], [Testnet]).
3. To support [ROFL TDX apps][rofl-types], enable [Intel TDX] support on your node.
[ParaTime Client Node]: ./paratime-client-node.mdx#configuration
[TEE ParaTimes]: ./paratime-client-node.mdx#tee-paratime-client-node-optional
[Mainnet]: ../network/mainnet.md#sapphire
[Testnet]: ../network/testnet.md#sapphire
[Intel TDX]: ./prerequisites/set-up-tee.mdx#tdx
[rofl-types]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/rofl/workflow/init.md
[client-stateless]: ./paratime-client-node.mdx#stateless-client-node-optional

Consider running the ROFL node in [client-stateless] mode for faster bootstrapping and lower resource usage.

### Configure Firewall

Since you will be hosting 3rd party applications on your server, we strongly recommend that you **configure your firewall to prevent any local area network connections from ROFL apps**.

Using `iptables`, rules like the following will prevent the Oasis node and other processes owned by the `oasis` user from accessing the local `192.168.0.0/16` network, except for the gateway:

```
iptables -A OUTPUT -d 192.168.0.1/32 -m owner --uid-owner $(id -u oasis) -j ACCEPT
iptables -A OUTPUT -d 192.168.0.0/16 -m owner --uid-owner $(id -u oasis) -j DROP
```

You can also permanently store the rules above. On Debian-based OSes you can do so by running:

```shell
sudo apt install iptables-persistent
sudo /etc/init.d/iptables-persistent save
```

### Fund Your Node

The node will also need to cover any transaction fees required to maintain registration of the ROFL node and the apps. First, determine the address of the account corresponding to your node:

```shell
oasis-node identity show-address -a unix:/node/data/internal.sock
```

```
oasis1qp6tl30ljsrrqnw2awxxu2mtxk0qxyy2nymtsy90
```

Then fund this account **on Sapphire** by transferring or depositing tokens with the [`oasis account`] command, for example:

```shell
oasis account transfer 10 oasis1qp6tl30ljsrrqnw2awxxu2mtxk0qxyy2nymtsy90
```

If you are just testing out your node on Sapphire Testnet, you can also request some TEST from the [Testnet Faucet].

[`oasis account`]: https://github.com/oasisprotocol/cli/blob/master/docs/account.md
[Testnet Faucet]: https://faucet.testnet.oasis.io/?paratime=sapphire

## Configuration

There are two ways you can host ROFL apps on your ROFL node. The preferred way is to join a network of ROFL providers called the **ROFL marketplace**, which is also integrated into the [`oasis rofl deploy`] command. Alternatively, you can copy ROFL bundle(s) directly to your node and configure each one of them in your node configuration file.

[`oasis rofl deploy`]: https://github.com/oasisprotocol/cli/blob/master/docs/rofl.md#deploy

### Hosting via ROFL marketplace

To make your ROFL node accessible through the [ROFL marketplace], you will:

1. Create a new [ROFL provider entity](#register-rofl-provider).
2. [Configure one or more ROFL nodes](#configure-rofl-node-marketplace) to execute ROFL transactions corresponding to that provider and/or machine.

Both steps take place solely on-chain. There is no centralized mechanism or KYC process involved at any time.

[ROFL marketplace]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/rofl/features/marketplace.mdx

#### Register your ROFL provider

To register a new ROFL provider, use the [Oasis CLI] to run the following command:

```shell
oasis rofl provider init
```

Then edit your `rofl-provider.yaml` and add one or more hosting offers. Now obtain your [Node ID] and decide how many resources you are willing to lend out. In the example below, we'll offer *small* instances with 4 GiB of memory, 2 CPUs and 20 GB of storage.
Hourly rent will cost 10 tokens and there can be at most 50 active instances at a time. [Node ID]: ./validator-node.mdx#obtain-the-node-id ```yaml title="rofl-provider.yaml" network: testnet paratime: sapphire provider: test:erin nodes: - 5MsgQwijUlpH9+0Hbyors5jwmx7tTmKMA4c9leV3prI= scheduler_app: rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg payment_address: test:erin offers: - id: small resources: tee: tdx memory: 4096 cpus: 2 storage: 20000 payment: native: terms: hourly: 10 capacity: 50 ``` To register a new provider using the configuration above, run: ```shell oasis rofl provider create ``` The account signing the transaction is now registered as a ROFL provider on-chain. In our case, the built-in `test:erin` account which we used for signing has address `oasis1qqcd0qyda6gtwdrfcqawv3s8cr2kupzw9v967au6`. Registering a new ROFL provider requires depositing [100 tokens][stake-requirements] which are returned to you, when you deregister it. [stake-requirements]: ./prerequisites/stake-requirements.md [Oasis CLI]: https://github.com/oasisprotocol/cli/blob/master/docs/README.md #### Configure your ROFL node for the marketplace 1. Download the [latest release of the Scheduler ROFL app][rofl-scheduler] and save it to your ROFL node, for example `/node/rofls/`. This app will listen to ROFL hosting requests, configure any incoming ROFLs and spin them up. It will also listen to ROFL admin request for stopping or restarting the ROFL machines. 2. Add the Scheduler ROFL app to your Oasis node's `config.yml` inside the `runtime.paths` and configure the scheduler specifics such as the provider address, acceptable offers and capacities on this node: ```yaml title="config.yml" runtime: sgx: loader: /node/bin/oasis-core-runtime-loader paths: - /node/rofls/rofl-scheduler.testnet.orc runtimes: # Sapphire Testnet RONL with SGX - id: "000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c" config: allowed_queries: - all_expensive: true components: - id: rofl.rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg # ROFL scheduler app ID, should not change permissions: - bundle_add - bundle_remove - volume_add - volume_remove config: rofl_scheduler: provider_address: oasis1qqcd0qyda6gtwdrfcqawv3s8cr2kupzw9v967au6 # Your provider address offers: - small # Your offer name(s) capacity: instances: 24 memory: 65536 cpus: 24 storage: 549755813888 ``` 3. Restart your Oasis node. After a while, your ROFL node will be ready to accept ROFLs. ROFL app developers can now simply deploy their ROFL to your node by providing the `--provider ` to the [`oasis rofl deploy`] command. ```shell oasis rofl deploy --provider oasis1qqcd0qyda6gtwdrfcqawv3s8cr2kupzw9v967au6 ``` Multiple ROFL nodes If you configured multiple ROFL nodes for a single provider, the machine instantiated to execute the ROFL app will be arbitrarily picked depending on which ROFL node register transaction appears first on chain. #### Multiple ROFL replicas on a single node The ROFL scheduler supports running multiple replicas of the same ROFL app on the same ROFL node by **remapping** the ROFL app ID to a unique value on each deployment. Look for the `starting processor` message in [your logs](#checking-status) to figure out the remapped value, for example: ```json { "app_id":"rofl1qrjtky678pd3uchsdlhqtjugnsvtck3wyg7w5324", "component":"rofl.4bd2d31255ae7e5cec31084cde02fb40640d4d678db111d1c6ba53478f5f2fc2", "msg":"starting processor", ... 
} ```

Above, the original ROFL app ID `rofl1qrjtky678pd3uchsdlhqtjugnsvtck3wyg7w5324` was remapped to `4bd2d31255ae7e5cec31084cde02fb40640d4d678db111d1c6ba53478f5f2fc2`.

[rofl-scheduler]: https://github.com/oasisprotocol/oasis-sdk/releases

### Hosting the ROFL App Bundle Directly

To execute a ROFL app on your node, simply copy it over to your node, for example inside the `/node/rofls` folder. Then, add the location of the ORC bundle to the `runtime.paths` section of your configuration, similarly to how other ParaTimes can be enabled on your node:

```yaml title="config.yml"
runtime:
  # ... other options omitted ...
  paths:
    - /node/rofls/myapp.default.orc
```

Check that the path to your ROFL app bundle is correct. After starting your node, please make sure that the node is fully synchronized with Sapphire.

### Exposing the ports

To expose a specific TCP port of a ROFL app externally, add the following configuration to your Oasis node config:

```yaml title="config.yml"
runtime:
  runtimes:
    - id: "000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c"
      components:
        - id: rofl.rofl1qpkplp3uq5yage4kunt0ylmulett0arzwcdjvc8u # Your ROFL app ID
          networking:
            incoming:
              - ip: 192.168.0.10
                protocol: tcp
                src_port: 443
                dst_port: 443
```

In the example above, we exposed a TCP port `443` externally on the IP address `192.168.0.10` of our host.

[client node documentation]: https://github.com/oasisprotocol/docs/blob/main/docs/node/run-your-node/paratime-client-node.mdx#configuring-tee-paratime-client-node

## Persistent storage

The encrypted persistent storage of each ROFL app replica lives in the `/node/data/runtimes/volumes/{random hex value}` folder. It is generated when a ROFL app is executed for the first time and will remain intact even during ROFL upgrades and removal. Next to it is a `descriptor.json` file that records which ROFL app the volume belongs to, along with other metadata.

## Checking status

You can check the logs of any ROFL app by grepping for its app ID in your Oasis node log file. Since the Scheduler app that manages your ROFLs is itself just a ROFL app, you can check whether it reports any issues by grepping for its app ID:

```shell
grep rofl.rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg /node/data/node.log
```

To extract only the relevant `msg` field, you may run:

```shell
grep rofl.rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg /node/data/node.log | jq -r '.msg'
```

When exploring the logs, keep in mind that the ROFL app IDs of the Scheduler-managed apps [will be remapped](#rofl-app-id-remap).

## See also

---

## Seed Node

This guide will cover setting up a seed node for the Oasis Network. This guide assumes some basic knowledge on the use of command line tools.

## Prerequisites

Before following this guide, make sure you've followed the [Prerequisites Guide](prerequisites/oasis-node.md) and understand how to use the `oasis-node` binary.

### Creating a Working Directory

We will be creating the following directory structure inside a chosen top-level `/node` directory (feel free to name your directories however you wish):

* `etc`: This will store the node configuration and genesis file.
* `data`: This will store the data directory needed by the running `oasis-node` binary, including the complete blockchain state. The directory permissions should be `rwx------`.
To create the directory structure, use the following command:

```text
mkdir -m700 -p /node/{etc,data}
```

### Copying the Genesis File

The latest genesis file can be found on the Network Parameters page ([Mainnet], [Testnet]). You should download the latest `genesis.json` file and copy it to the `/node/etc` directory we just created.

[Mainnet]: ../network/mainnet.md
[Testnet]: ../network/testnet.md

## Configuration

This will configure the given node to only act as a seed node. In order to configure the node, create the `/node/etc/config.yml` file with the following content:

```yaml
mode: seed
common:
  data_dir: /node/data
  log:
    format: JSON
    level:
      cometbft: info
      cometbft/context: error
      default: info
genesis:
  file: /node/etc/genesis.json
```

Make sure the `consensus` port (default: `26656`) and `p2p.port` (default: `9200`) are exposed and publicly accessible on the internet (for `TCP` and `UDP` traffic).

## Starting the Oasis Node

You can start the node by running the following command:

```bash
oasis-node --config /node/etc/config.yml
```

### Seed node address

To get the seed node Tendermint identity, run the following command:

```bash
oasis-node identity tendermint show-node-address --datadir /node/data/
```

### Share seed node address

Nodes can now use your seed node by specifying it via a configuration flag (in the form `ID@IP:port`):

```bash
--consensus.tendermint.p2p.seed ID@IP:26656
```

---

## Sentry Node

This guide provides instructions for a deployment using the Sentry node architecture to protect validator nodes from being directly exposed on the public network. This guide assumes a setup where an Oasis validator node is only accessible over a private network, with sentry nodes having access to it. The guide does not cover setting this infrastructure up. Knowledge of [Tendermint's Sentry Node architecture](https://forum.cosmos.network/t/sentry-node-architecture-overview/454) is assumed as well.

This is only an example of a Sentry node deployment, and we take no responsibility for mistakes contained therein. Make sure you understand what you are doing.

## Prerequisites

Before following this guide, make sure you've read the [Prerequisites](prerequisites/oasis-node.md) and [Running a Node on the Network](validator-node.mdx) guides and created your Entity.

## Configuring the Oasis Sentry Node

### Initializing Sentry Node

Sentry node identity keys can be initialized with:

```bash
oasis-node identity init --datadir /node/data
```

### Configuring Sentry Node

An Oasis node can be configured to run as a sentry node by setting the `worker.sentry.enabled` flag. The `tendermint.sentry.upstream_address` flag can be used to configure a list of nodes that will be protected by the sentry node.

A full example `YAML` configuration of a sentry node is below. Before using this configuration you should collect the following information to replace the variables present in the configuration file:

* `{{ external_address }}`: This is the external IP on which the sentry node will be reachable.
* `{{ seed_node_address }}`: This is the seed node address of the form `ID@IP:port`. You can find the current Oasis Seed Node address on the Network Parameters page ([Mainnet], [Testnet]).
* `{{ validator_tendermint_id }}`: This is the Tendermint ID (address) of the Oasis validator node that will be protected by the sentry node. This address can be obtained by running:

  ```bash
  oasis-node identity tendermint show-node-address --datadir /node/data
  ```

  on the validator node.
* `{{ validator_private_address }}`: This is the (presumably) private address on which validator should be reachable from the sentry node. * `{{ validator_sentry_client_grpc_public_key }}`: This is the public TLS key of the Oasis validator node that will be protected by the sentry node. This public key can be obtained by running: ```bash oasis-node identity show-sentry-client-pubkey --datadir /node/data ``` on the validator node. Note that the above command is only available in `oasis-node` from version 20.8.1 onward. ```yaml mode: client common: data_dir: /node/data log: format: JSON level: cometbft: warn cometbft/context: error # Per-module log levels. Longest prefix match will be taken. Fallback to # "default", if no match. default: debug # By default logs are output to stdout. If you're running this in docker # keep the default #file: /var/log/oasis-node.log consensus: external_address: tcp://{{ external_address }}:26656 listen_address: tcp://0.0.0.0:26656 sentry_upstream_addresses: - {{ validator_tendermint_id }}@{{ validator_private_address }}:26656 genesis: # Path to the genesis file for the current version of the network. file: /node/etc/genesis.json p2p: seeds: # List of seed nodes to connect to. # NOTE: You can add additional seed nodes to this list if you want. - {{ seed_node_address }} sentry: # Port used by validator nodes to query sentry node for registry # information. # IMPORTANT: Only validator nodes protected by the sentry node should have # access to this port. This port should not be exposed on the public # network. control: authorized_pubkeys: - {{ validator_sentry_client_grpc_public_key }} port: 9009 enabled: true ``` Multiple sentry nodes can be provisioned following the above steps. ## Configuring the Oasis Validator Node In this setup the Oasis validator node should not be exposed directly on the public network. The Oasis validator only needs to be able to connect to its sentry nodes, preferably via a private network. ### Initializing Validator Node If your validator node is already registered and running in a non-sentry setup, this step can be skipped as the Oasis validator will update its address in the Registry automatically once we redeploy it with new configuration. When you are [initializing a validator node](validator-node.mdx#configuration), you should use the sentry node's external address and Consensus ID in the `node.consensus_address` flag. If you are running multiple sentry nodes, you can specify the `node.consensus_address` flag multiple times. To initialize a validator node with 2 sentry nodes, run the following commands from the `/localhostdir/node` directory: ```bash export SENTRY1_CONSENSUS_ID= export SENTRY1_STATIC_IP= export SENTRY2_CONSENSUS_ID= export SENTRY2_STATIC_IP= oasis-node registry node init \ --signer.backend file \ --signer.dir /localhostdir/entity \ --node.consensus_address $SENTRY1_CONSENSUS_ID@$SENTRY1_STATIC_IP:26656 \ --node.consensus_address $SENTRY2_CONSENSUS_ID@$SENTRY2_STATIC_IP:26656 \ --node.is_self_signed \ --node.role validator ``` `SENTRY_CONSENSUS_ID`: This is the Consensus ID of the sentry node in base64 format. This ID can be obtained from `consensus_pub.pem`: ```bash sed -n 2p /node/data/consensus_pub.pem ``` on the sentry node. ### Configuring the Validator Node There are some configuration changes needed for the Oasis validator node to enable proxying through its sentry node. Most of these flags should be familiar from the Tendermint sentry node architecture. 
Since the validator node will not have an external address, the `consensus.tendermint.core.external_address` flag should be skipped. Similarly, the `consensus.tendermint.p2p.seed` flag can be skipped, as the `oasis-node` won't be directly connecting to any of the seed nodes. Tendermint Peer Exchange should be disabled on the validator with the `consensus.tendermint.p2p.disable_peer_exchange` flag. Sentry nodes can also be configured as Tendermint Persistent-Peers with the `consensus.tendermint.p2p.persistent_peer` flag.

In addition to the familiar Tendermint setup above, the node needs to be configured to query sentry nodes for external addresses every time the validator performs a re-registration. This is configured with the `worker.sentry.address` flag.

The `worker.sentry.address` flag is of the format `<pubkey>@ip:port`, where:

* `<pubkey>`: Is the sentry node's TLS public key.
* `ip:port`: Is the (private) address of the sentry node's control endpoint.

Putting it all together, an example configuration of a validator node in the sentry node architecture is given below. Before using this configuration you should collect the following information to replace the `{{ var_name }}` variables present in the configuration file:

* `{{ sentry_node_private_ip }}`: This is the private IP address of the sentry node over which the sentry node should be accessible to the validator.
* `{{ sentry_node_grpc_public_key }}`: This is the sentry node's control endpoint TLS public key. This ID can be obtained by running:

  ```bash
  oasis-node identity show-tls-pubkey --datadir /node/data
  ```

  on the sentry node. Note that the above command is only available in `oasis-node` from version 20.8.1 onward.
* `{{ sentry_node_tendermint_id }}`: This is the Tendermint ID (address) of the sentry node that will be configured as a Persistent Peer. This ID can be obtained by running:

  ```bash
  oasis-node identity tendermint show-node-address --datadir /node/data
  ```

  on the sentry node.
* `{{ entity_id }}`: This is the node's entity ID from `entity.json`.

```yaml
mode: validator
common:
  data_dir: /node/data
  log:
    format: JSON
    level:
      cometbft: warn
      cometbft/context: error
      # Per-module log levels. Longest prefix match will be taken.
      # Fallback to "default", if no match.
      default: debug
    # By default logs are output to stdout. If you're running this in docker keep
    # the default
    #file: /var/log/oasis-node.log
consensus:
  listen_address: tcp://0.0.0.0:26656
  p2p:
    disable_peer_exchange: true
    persistent_peers:
      - {{ sentry_node_tendermint_id }}@{{ sentry_node_private_ip }}:26656
genesis:
  # Path to the genesis file for the current version of the network.
  file: /node/etc/genesis.json
registration:
  # In order for the node to register itself, the entity ID must be set.
  entity_id: {{ entity_id }}
sentry:
  address:
    - {{ sentry_node_grpc_public_key }}@{{ sentry_node_private_ip }}:9009
```

Make sure the `consensus` port (default: `26656`) and `p2p.port` (default: `9200`) are exposed and publicly accessible on the internet (for `TCP` and `UDP` traffic).
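As a quick illustration only (this guide does not otherwise assume `ufw`; use whatever firewall tooling you already manage), opening these ports could look like:

```shell
sudo ufw allow 26656/tcp
sudo ufw allow 26656/udp
sudo ufw allow 9200/tcp
sudo ufw allow 9200/udp
```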
--- ## Troubleshooting(Run-your-node) **BEFORE YOU BEGIN TROUBLESHOOTING** Before you begin troubleshooting your Oasis Node we suggest you check all of the following: * Check that your current binary version matches the version listed on the Network Parameters page ([Mainnet](../network/mainnet.md), [Testnet](../network/testnet.md)) * Check the version on your localhost using `oasis-node --version` * Check the version on your server using `oasis-node --version` * If upgrading, make sure that you've wiped state (unless that is explicitly not required) * If you're doing anything with the entity: * Do you have the latest genesis? * Do you have the correct private key (or Ledger device). * If you're signing a transaction: * Check your account balance and nonce using the [`oasis account show`](https://github.com/oasisprotocol/cli/blob/master/docs/account.md#show) command. * If you're generating a transaction: * Do you have the latest genesis? * If you're submitting a transaction: * Do you have the latest genesis? * Is your node synced? If not, the transaction will fail to run properly ## Starting a Node ### Invalid Permissions #### Permissions for node and entity Error Message: ```text common/Mkdir: path '/node/data' has invalid permissions: -rwxr-xr-x ``` The `entity` and `node` directories both need to have permissions `rwx------`. Make sure you initialize the directory with correct permissions or change them using `chmod`: ```bash mkdir --mode 700 --parents {entity,node} ``` ```bash chmod 700 /node/data chmod 700 /node/etc ``` #### Permissions for .pem files Error Message example: ```text signature/signer/file: invalid PEM file permissions 700 on /node/data/identity.pem ``` All `.pem` files should have the permissions `600`. You can set the permissions for all `.pem` files in a directory using the following command: ```bash chmod -R 600 /path/*.pem ``` #### Node directory Ownership Another possible cause of permission issues is not giving ownership of your `node/` to the user running the node (e.g. `docker-host` or replace with your user): ```bash chown -R docker-host:docker-host /node ``` In general, to avoid problems when running docker, specify the user when running `docker` commands by adding the flag `--user $(id -u):$(id -g)`. ### Cannot Find File Error Message examples: ```text no such file or directory ``` ```text file does not exist ``` ```json { "ts":"2019-11-17T03:42:09.778647033Z", "level":"error", "module":"cmd/registry/node", "caller":"node.go:127", "msg":"failed to load entity", "err":"file does not exist" } ``` More often than you'd expect, this error is the result of setting the path incorrectly. You may have left something like `--genesis.file $GENESIS_FILE_PATH` in the command without setting `$GENESIS_FILE_PATH` first, or set the path incorrectly. Check that `echo $GENESIS_FILE_PATH` matches your expectation or provide a path in the command. Another possible cause: the files in your localhost directory cannot be read from the container. Make sure to run commands in the same session within the container. ## Staking and Registering ### Transaction Out of Gas Error message: ```text module=cmd/stake caller=stake.go:70 msg="failed to submit transaction" err="rpc error: code = Unknown desc = staking: add escrow transaction failed: out of gas" attempt=1 ``` The docs are now updated to show that you need to add `--stake.transaction.fee.gas` and `--stake.transaction.fee.amount` flags when generating your transaction. 
Note that if you're re-generating a transaction, you will need to increment the `--nonce` flag. ## Trusted Execution Environment (TEE) ### AESM could not be contacted If running `sgx-detect --verbose` reports: ``` 🕮 SGX system software > AESM service AESM could not be contacted. AESM is needed for launching enclaves and generating attestations. Please check your AESM installation. debug: error communicating with aesm debug: cause: Connection refused (os error 111) More information: https://edp.fortanix.com/docs/installation/help/#aesm-service ``` Ensure you have completed all the necessary installation steps outlined in [DCAP Attestation][tee-dcap-attestation] section. [tee-dcap-attestation]: prerequisites/set-up-tee.mdx#dcap-attestation ### AESM: error 30 If you are encountering the following error message in your node's logs: ``` failed to initialize TEE: error while getting quote info from AESMD: aesm: error 30 ``` Ensure you have all required SGX driver libraries installed as listed in [DCAP Attestation][tee-dcap-attestation] section. ### Permission Denied When Accessing SGX Kernel Device If running `sgx-detect --verbose` reports: ``` 🕮 SGX system software > SGX kernel device Permission denied while opening the SGX device (/dev/sgx/enclave, /dev/sgx or /dev/isgx). Make sure you have the necessary permissions to create SGX enclaves. If you are running in a container, make sure the device permissions are correctly set on the container. debug: Error opening device: Permission denied (os error 13) debug: cause: Permission denied (os error 13) ``` Ensure you are running the `sgx-detect` tool as `root` via: ``` sudo $(which sgx-detect) --verbose ``` ### Error Opening SGX Kernel Device If running `sgx-detect --verbose` reports: ``` 🕮 SGX system software > SGX kernel device The SGX device (/dev/sgx/enclave, /dev/sgx or /dev/isgx) could not be opened: "/dev" mounted with `noexec` option. debug: Error opening device: "/dev" mounted with `noexec` option debug: cause: "/dev" mounted with `noexec` option ``` #### Ensure `/dev` is NOT Mounted with the `noexec` Option Some Linux distributions mount `/dev` with the `noexec` mount option. If that is the case, it will prevent the enclave loader from mapping executable pages. Ensure your `/dev` (i.e. `devtmpfs`) is not mounted with the `noexec` option. To check that, use: ``` cat /proc/mounts | grep devtmpfs ``` To temporarily remove the `noexec` mount option for `/dev`, run: ``` sudo mount -o remount,exec /dev ``` To permanently remove the `noexec` mount option for `/dev`, add the following to the system's `/etc/fstab` file: ``` devtmpfs /dev devtmpfs defaults,exec 0 0 ``` This is the recommended way to modify mount options for virtual (i.e. API) file system as described in [systemd's API File Systems](https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems/) documentation. ### Unable to Launch Enclaves: Operation not permitted If running `sgx-detect --verbose` reports: ``` 🕮 SGX system software > Able to launch enclaves > Debug mode The enclave could not be launched. debug: failed to load report enclave debug: cause: failed to load report enclave debug: cause: Failed to map enclave into memory. debug: cause: Operation not permitted (os error 1) ``` Ensure your system's [`/dev` is NOT mounted with the `noexec` mount option][tee-dev-noexec]. 
[tee-dev-noexec]: #ensure-dev-is-not-mounted-with-the-noexec-option ### Unable to Launch Enclaves: Invalid argument If running `sgx-detect --verbose` reports: ``` 🕮 SGX system software > Able to launch enclaves > Debug mode The enclave could not be launched. debug: failed to load report enclave debug: cause: Failed to call EINIT. debug: cause: I/O ctl failed. debug: cause: Invalid argument (os error 22) ``` This may be related to a bug in the Linux kernel when attempting to run enclaves on certain hardware configurations. Upgrading the Linux kernel to a version equal to or greater than 6.5.0 may solve the issue. ### Unable to Launch Enclaves: Input/output error If running `sgx-detect --verbose` reports: ``` 🕮 SGX system software > Able to launch enclaves > Debug mode The enclave could not be launched. debug: failed to load report enclave debug: cause: Failed to call ECREATE. debug: cause: I/O ctl failed. debug: cause: Input/output error (os error 5) ``` This may be related to a bug in the [`rust-sgx`](https://github.com/fortanix/rust-sgx/issues/565) library causing `sgx-detect` (and `attestation-tool`) to fail and report that debug enclaves cannot be launched. This is a known issue and is being worked on. If the `sgx-detect` is reporting that production enclaves can be launched, you can ignore this error when setting up the Oasis node. ### Couldn't find the platform library If AESMD service log reports: ``` [read_persistent_data ../qe_logic.cpp:1084] Couldn't find the platform library. (null) [get_platform_quote_cert_data ../qe_logic.cpp:438] Couldn't load the platform library. (null) ``` It may be that the [DCAP quote provider][tee-dcap-quote-provider] is misconfigured or the configuration file is not a valid JSON file but is malformed. Double-check that its configuration file (e.g. `/etc/sgx_default_qcnl.conf`) is correct. [tee-dcap-quote-provider]: prerequisites/set-up-tee.mdx#configuring-the-quote-provider ### [QPL] Failed to get quote config. Error code is 0xb011 The following error appears in the QGS daemon logs leaving ROFL runtime inoperable: ``` qgsd[1412990]: [QPL] Failed to get quote config. Error code is 0xb011 qgsd[1412990]: [get_platform_quote_cert_data ../td_ql_logic.cpp:302] Error returned from the p_sgx_get_quote_config API. 0xe044 qgsd[1412990]: tee_att_get_quote_size return 0x11001 ``` This is a known bug, which hasn't been fixed yet at time of writing this section https://github.com/intel/SGXDataCenterAttestationPrimitives/issues/450. The current workaround is to restart the QGS daemon, for example `sudo service qgsd restart`. If you are managing your QGS daemon with Docker compose, you can configure it as follows: ```yaml title="docker-compose.yaml" command: ["/opt/intel/tdx-qgs/qgs", "--no-daemon"] entrypoint: ["/bin/bash", "-c", "exec \"$0\" \"$@\" &> >(tee -a /tmp/qgsd.log)"] init: true healthcheck: test: ["CMD", "/bin/bash", "-c", "grep 'Error code is 0xb011' /tmp/qgsd.log && (: > /tmp/qgsd.log && kill -SIGTERM 1 && exit -1) || (: > /tmp/qgsd.log && exit 0)"] interval: 60s timeout: 2s retries: 0 ``` ### [QPL] No certificate data for this platform. The following error is reported on a multi-CPU systems if the user forgot to install and configure MPA: ``` May 09 13:24:16 oasis-node-1 qgsd[6732]: call tee_att_init_quote May 09 13:24:16 oasis-node-1 qgsd[6732]: [QPL] No certificate data for this platform. May 09 13:24:16 oasis-node-1 qgsd[6732]: [get_platform_quote_cert_data ../td_ql_logic.cpp:302] Error returned from the p_sgx_get_quote_config API. 
0xe011 May 09 13:24:16 oasis-node-1 qgsd[6732]: tee_att_init_quote return 0x11001 May 09 13:24:16 oasis-node-1 qgsd[6732]: tee_att_get_quote_size return 0x1100f ``` Correctly configure your TEE by following the [Set up TEE - Multi-socket system][tee-multi-socket-systems] section. [tee-multi-socket-systems]: ./prerequisites/set-up-tee.mdx#multi-socket-systems ## ROFL The following errors appear in the ROFL node logs. ### Unknown enclave This error is reported when the enclave ID of the ROFL provided in the .orc file mismatches the currently registered enclave ID of the on-chain ROFL app. ```json { "component":"rofl.rofl1qrtetspnld9efpeasxmryl6nw9mgllr0euls3dwn", "err":"call failed: module=rofl code=5: unknown enclave", "level":"error", "module":"runtime/modules/rofl/app/registration", "msg":"failed to refresh registration", "provisioner":"tdx-qemu", "runtime_id":"000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c", "runtime_name":"", "ts":"2025-02-21T08:10:10.012956311Z" } ``` Update the on-chain enclave ID by running `oasis rofl update` on the machine where ROFL is being compiled and deployed. ### Root not found This error is reported, when the node hasn't been fully synced yet. This includes both the consensus and the ParaTime blocks. ```json { "component":"rofl.rofl1qrtetspnld9efpeasxmryl6nw9mgllr0euls3dwn", "err":"call failed: root not found", "level":"error", "module":"runtime/modules/rofl/app/registration", "msg":"failed to refresh registration", "provisioner":"tdx-qemu", "runtime_id":"000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c", "runtime_name":"", "ts":"2025-04-17T05:40:24.305875715Z" } ``` Wait for the node to sync. ### Failed to resize persistent overlay image The following error is reported on the ROFL node, if there was an error during the persistent storage resize operation. Most commonly this happens during ROFL upgrade if [persistent storage size][rofl-yaml-storage] was decreased below the actually occupied storage. ```json { "caller":"host.go:486", "err":"failed to configure process: failed to resize persistent overlay image: qemu-img: Use the --shrink option to perform a shrink operation.\nqemu-img: warning: Shrinking an image will delete all data beyond the shrunken image's end. Before performing such an operation, make sure there is no important data there.\n\nexit status 1", "level":"error", "module":"runtime/host/tdx/qemu", "msg":"failed to start runtime", "runtime_id":"000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c", "ts":"2025-04-17T09:56:36.321911319Z" } ``` Similarly, if the persistent storage is corrupted in any way, a message like this may appear in the logs: ```json { "component":"rofl.rofl1qrtetspnld9efpeasxmryl6nw9mgllr0euls3dwn", "level":"info", "module":"runtime/global", "msg":"Error: writing blob: adding layer with blob \"sha256:9f202d637e1bbe0e48c7855d7872fa4ab33af88b61ef10d4cb6dd7caba0e2c8a\"/\"\"/\"sha256:b240b4f256e7bd304b5a1335b4bc73b47ce21aaf31bb1107452a89a101f50054\": readlink /storage/containers/graph/overlay/l: invalid argument", "provisioner":"tdx-qemu", "runtime_id":"000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c", "runtime_name":"", "ts":"2025-02-25T13:44:47.05176383Z" } ``` ROFL admin user should run `oasis rofl machine restart --wipe-storage` to clear persistent storage and recreate the volume of the ROFL app. Alternatively, you can remove the persistent storage folder manually located at `/node/data/runtimes/volumes/` and restart the ROFL app. 
Both options will permanently delete persistent storage of this ROFL app on the ROFL node. [rofl-yaml-storage]: https://github.com/oasisprotocol/oasis-sdk/blob/main/docs/rofl/features/manifest.md#resources-storage ### Offer not acceptable for this instance The following error occurs, if your ROFL node Scheduler configuration is not configured to accept the offer names of the selected provider. ```json { "component":"rofl.rofl1qrqw99h0f7az3hwt2cl7yeew3wtz0fxunu7luyfg", "id":"0000000000000005", "level":"info", "module":"runtime/scheduler/manager", "msg":"offer not acceptable for this instance", "offer":"0000000000000002", "provisioner":"tdx-qemu", "runtime_id":"000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c", "runtime_name":"", "ts":"2025-04-25T09:25:57.726444176Z" } ``` Update your node's `runtime.runtimes.sapphire_id.components.scheduler_id.config.rofl_scheduler.offers` in your `config.yml` and include the valid offer name. ### Image platform (linux/arm64/v8) does not match the expected platform (linux/amd64) This error occurs, if the Docker container to be executed inside the ROFL TDX was not compiled for the `linux/amd64` platform. ```json { "component":"rofl.rofl1qpdzzm4h73gtes04xjn4whan84s3k33l5gx787l2", "level":"info", "module":"runtime/global", "msg":"WARNING: image platform (linux/arm64/v8) does not match the expected platform (linux/amd64)", "provisioner":"tdx-qemu", "runtime_id":"000000000000000000000000000000000000000000000000f80306c9858e7279", "runtime_name":"", "ts":"2025-04-28T06:16:24.20330395Z" } ``` Always compile your Docker container for ROFL with the `--platform linux/amd64` parameter or put the `platform: linux/amd64` line inside your `compose.yaml`. Then recompile and push the container to the OCI repository. --- ## Validator Node This guide will walk you through the process of setting up your **validator node** for the Oasis Network either on Mainnet or Testnet. It is designed for individuals who have basic understanding of the command line environment. We will be using two separate physical machines for deployment: - your local system, referred to as `localhost`, - a remote `server` which will function as an Oasis node. The guide consists of the following steps: 1. On the `localhost`, we will use [Oasis CLI] to [Initialize your Entity](#initialize-entity) which is essential for deploying nodes on the network. To ensure the security of these private keys, we strongly recommend to either isolate the `localhost` from any network or internet connectivity, or use a [hardware wallet] as a secure storage, such as [Ledger]. [Oasis CLI]: ../../build/tools/cli/README.md [hardware wallet]: https://en.wikipedia.org/wiki/Hardware_security_module [Ledger]: ../../../general/manage-tokens/holding-rose-tokens/ledger-wallet 2. After the entity has been created, we will move over to the `server` and [Start the Oasis Node](#starting-the-oasis-node). The server needs to meet the hardware requirements and have access to the internet. 3. Finally, we will [stake assets to your entity, register it on the network, and attach the unique ID](#staking-and-registering) of the Oasis Node instance running on your server. 
## Prerequisites Before proceeding with this guide, ensure that you have completed the steps outlined in the [Prerequisites] chapter so that: * your system meets the [hardware requirements], * you have the [Oasis CLI] installed on your `localhost`, * you have the [Oasis Node binary] installed on your `server`, * you understand what are [Stake requirements] to become a validator on the Oasis Network. [Prerequisites]: prerequisites/ [hardware requirements]: prerequisites/hardware-recommendations.md [Oasis Node binary]: prerequisites/oasis-node.md [Stake requirements]: prerequisites/stake-requirements.md ## Initialize Entity Everything in this section should be done on the `localhost` as there are sensitive items that will be created. During the entity initialization process, you will generate essential components such as keys and other crucial artifacts that are necessary for the deployment of nodes on the network. This guide has been designed with a particular file structure in mind. Nonetheless, feel free to reorganize and rename directories as needed to accommodate your preferences. ### Add Entity Account to Oasis CLI An entity is critical to operating nodes on the network as it controls the stake attached to a given individual or organization on the network. The entity is represented as a consensus-layer account using the Ed25519 signature scheme. To protect your entity private key, we strongly recommend using a [hardware wallet] such as [Ledger]. We will be using [Oasis CLI] to initialize the entity and later stake our assets and register the entity on the network. If you haven't already, go ahead and install it. Oasis CLI stores either your entity private key encrypted inside a file or a reference to an account whose keypair is stored on your hardware wallet. If you really need to use the file-based wallet using another [offline/air-gapped machine] for this purpose is highly recommended. Gaining access to the entity private key can compromise your tokens and the network security through proposing and signing malicious governance transactions. On the `localhost` add a new entity account to Oasis CLI. This can be done in one of the following ways: - Create an account entry in Oasis CLI, but use your Ledger device to store the actual keypair to sign the transactions by executing [`oasis wallet create`] and passing the `--kind ledger` flag. For example: ```shell oasis wallet create my_entity --kind ledger ``` - Import your existing `entity.pem` into Oasis CLI by executing [`oasis wallet import-file`] command, for example: ```shell oasis wallet import-file my_entity entity.pem ``` - Generate a new keypair and store the private key in the encrypted file by executing [`oasis wallet create`]: ```shell oasis wallet create my_entity ``` Similar to the examples above we will assume that you named your entity account as **`my_entity`** in the remainder of this chapter. ### Write the Entity Descriptor File On the `localhost` we begin by creating a directory named `/localhostdir` with the `entity` subdirectory that will contain the entity file descriptor. ```shell mkdir -p /localhostdir/entity ``` Create a JSON file containing the **public key** of your entity by executing [`oasis account entity init`] and store it as `entity.json`, for example: ```shell oasis account entity init -o /localhostdir/entity/entity.json --account my_entity ``` Now, we can move on to configuring our Oasis node with the information from freshly generated `entity.json`. 
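For illustration, a freshly generated `entity.json` contains just the entity's public key and the descriptor version; the exact output may differ slightly, and the `id` value below is a placeholder reused from the example later in this guide (yours will differ):

```json
{
  "id": "Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8=",
  "v": 2
}
```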
You can obtain your entity ID by running the `cat entity.json` command and reading out the `id` field. Alternatively, if your entity account is imported into the Oasis CLI you can use the [`oasis wallet show`] command. Your entity ID will be displayed in the `Public Key` field. [Ledger]: ../../general/manage-tokens/holding-rose-tokens/ledger-wallet.md [offline/air-gapped machine]: https://en.wikipedia.org/wiki/Air_gap_\(networking\) [`oasis wallet create`]: ../../build/tools/cli/wallet.md#create [`oasis wallet import`]: ../../build/tools/cli/wallet.md#import [`oasis wallet import-file`]: ../../build/tools/cli/wallet.md#import-file [`oasis account entity init`]: ../../build/tools/cli/account.md#entity-init [`oasis wallet show`]: ../../build/tools/cli/wallet.md#show ## Configuration There are a variety of options available when running an Oasis node. The following YAML file is a basic configuration for a validator node on the network. Before using this configuration you should collect the following information to replace the variables present in the configuration file: * `{{ external_ip }}`: The external/public IP address you used when registering this node. If you are using a [Sentry Node](sentry-node.md), you should use the public IP of that machine. * `{{ seed_node_address }}`: The seed node address in the form `ID@IP:port`. You can find the current Oasis Seed Node address in the Network Parameters page ([Mainnet], [Testnet]). * `{{ entity_id }}`: The node's entity ID from `entity.json`. To use this configuration, save it in the `/node/etc/config.yml` file: [Mainnet]: ../network/mainnet.md [Testnet]: ../network/testnet.md ```yaml title="/node/etc/config.yml" mode: validator common: # Set this to where you wish to store node data. The node's artifacts # should also be located in this directory. data_dir: /node/data # Logging. # # Per-module log levels are defined below. If you prefer just one unified # log level, you can use: # # log: # level: debug log: level: cometbft: warn cometbft/context: error # Per-module log levels. Longest prefix match will be taken. # Fallback to "default", if no match. default: debug format: JSON # By default logs are output to stdout. If you would like to output # logs to a file, you can use: # # file: /var/log/oasis-node.log consensus: # The external IP that is used when registering this node to the network. # NOTE: If you are using the Sentry node setup, this option should be # omitted. external_address: tcp://{{ external_ip }}:26656 listen_address: tcp://0.0.0.0:26656 genesis: # Path to the genesis file for the current version of the network. file: /node/etc/genesis.json p2p: port: 9200 registration: addresses: - {{ external_ip }}:9200 seeds: # List of seed nodes to connect to. # NOTE: You can add additional seed nodes to this list if you want. - {{ seed_node_address }} registration: # In order for the node to register itself, the entity ID must be set. entity_id: {{ entity_id }} ``` Make sure the `consensus` port (default: `26656`) and `p2p.port` (default: `9200`) are exposed and publicly accessible on the internet (for `TCP` and `UDP` traffic). ## Starting the Oasis Node You can start the node by simply running the command: ```shell oasis-node --config /node/etc/config.yml ``` The Oasis node is configured to run in the foreground by default. We recommend that you configure and use it with a process manager like [systemd](https://github.com/systemd/systemd) or [Supervisor](http://supervisord.org). Check out the [System Configuration] page for examples. 
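For example, assuming you installed a systemd unit like the one shown in the System Configuration examples (the unit name `oasis-node.service` here is illustrative), you could start the node on boot and follow its logs with:

```shell
sudo systemctl enable --now oasis-node.service
journalctl -u oasis-node.service -f
```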
### Node Keys The Oasis node requires **node keys** in order to register itself and to securely communicate with other nodes in the peer-to-peer network. The following keys will automatically be generated and stored in your `/node/data` folder as `.pem` files: * `consensus.pem`: The node's consensus private key. **DO NOT SHARE** * `consensus_pub.pem`: The node's consensus public key. * `identity.pem`: The node's identity private key. **DO NOT SHARE** * `identity_pub.pem`: The node's identity public key. * `p2p.pem`: The node's private key for libp2p. **DO NOT SHARE** * `p2p_pub.pem`: The node's public key for libp2p. * `sentry_client_tls_identity.pem`: The node's TLS private key for communicating with sentry nodes. **DO NOT SHARE** * `sentry_client_tls_identity_cert.pem`: The node's TLS certificate for communicating with sentry nodes. If the node keys do not exist, they will be automatically generated when you launch the Oasis node. Otherwise, the existing ones will be used. You may have noticed that some files above are listed as **DO NOT SHARE**. Ideally, the node keys should be stored on a separate device such as a [hardware wallet] or a [remote signer]. However, until the support is fully implemented, keep the keys on the `server` as secure as possible. [System Configuration]: prerequisites/system-configuration.mdx#create-a-user ### Ensuring Proper Permissions Only the owner of the process that runs the Oasis node should have access to the files in the `/node/data` directory. The `oasis-node` binary ensures that the files used by the node are as least privileged as possible so that you don't accidentally shoot yourself in the foot while operating a node. If you followed the steps described in the [Install the Oasis Node][Oasis node binary] chapter, then the proper permissions are already set: * `700` for the `/node/data` directory * `700` for the `/node/etc` directory * `700` for the `/node/runtimes` directory * `600` for all `/node/data/*.pem` files Otherwise, run the following to remove all non-owner read/write/execute permissions: ```shell chmod -R go-r,go-w,go-x /node ``` [remote signer]: advanced/remote-signer.mdx ### Obtain the Node ID Now that the Oasis node is running, you can obtain your unique node ID which is needed in order to associate your node with your entity in the network registry. ```shell oasis-node control status -a unix:/node/data/internal.sock | jq .identity.node ``` ``` "5MsgQwijUlpH9+0Hbyors5jwmx7tTmKMA4c9leV3prI=" ``` ### Check that your Node is Synced Before you can become a validator, you will have to make sure that your node is synced. To do so call this command on the server: ```shell oasis-node control is-synced -a unix:/node/data/internal.sock ``` If your node is synced, the above command should output: ``` "ready" ``` If your node is not yet synced, you will need to wait before you can move forward. ## Staking and Registering Once you have been funded, you can complete the process of connecting your node to the network by registering both your entity and your node, as described below. ### Staking (Escrow) Transaction The current minimum stake required to register an entity and register a node as a validator is 200 tokens. We will submit the escrow transaction that delegates 200 tokens from your entity account on the consensus layer to itself by invoking the [`oasis account delegate`] command. ```shell oasis account delegate 200 my_entity --no-paratime --account my_entity ``` You can also fund your entity account from a different one. 
If the funding account is not yet imported into the Oasis CLI, invoke the [`oasis wallet import`] command to import its private key and follow the instructions.

```shell
oasis wallet import my_funding_account
```

Then, invoke [`oasis account delegate`], passing the new account name with the `--account` parameter. For example:

```shell
oasis account delegate 200 my_entity --no-paratime --account my_funding_account
```

[`oasis account delegate`]: ../../build/tools/cli/account.md#delegate

### Add your Node ID to the Entity Descriptor

Now we can register our entity on the network and associate it with the node ID obtained in the [section above](#obtain-the-node-id). Open the `entity.json` file we initially generated and add the ID inside the `nodes` block. Your entity descriptor file should now look like this:

```json
{
  "id": "Bx6gOixnxy15tCs09ua5DcKyX9uo2Forb32O6Hyjoc8=",
  "nodes": [
    "5MsgQwijUlpH9+0Hbyors5jwmx7tTmKMA4c9leV3prI="
  ],
  "v": 2
}
```

### Entity Registration

We can submit the updated entity descriptor by invoking the [`oasis account entity register`] command:

```shell
oasis account entity register entity.json --account my_entity
```

[`oasis account entity register`]: ../../build/tools/cli/account.md#entity-register

### Check that Your Node is Properly Registered

To ensure that your node is properly connected as a validator on the network, invoke the following command on your `server`:

```shell
oasis-node control status -a unix:/node/data/internal.sock | jq .consensus.is_validator
```

If your node is registered and became a validator, the above command should output:

```
true
```

Nodes are only elected into the validator set at epoch transitions, so you may need to wait for up to an epoch before being considered. To be elected into the validator set you **need to have enough stake to be in the top K entities**, where K is a network-specific parameter specified by the [`scheduler.max_validators`] field in the genesis document.

Congratulations! If you made it this far, you have successfully connected your node to the network and become a validator on the Oasis Network.

[`scheduler.max_validators`]: ../reference/genesis-doc.md#consensus

## Oasis Metadata Registry

For the final touch, you can add some metadata about your entity to the [Metadata Registry]. The Metadata Registry is the same for Mainnet and Testnet. The metadata consists of your entity name, email, Keybase handle, Twitter handle, etc. This information is also used by various applications. For example, the [ROSE Wallet - Web] and [Oasis Scan] will fetch and show the node operator's name and avatar.

[Metadata Registry]: https://github.com/oasisprotocol/metadata-registry
[ROSE Wallet - Web]: https://wallet.oasis.io
[Oasis Scan]: https://www.oasisscan.com/validators

# See also

---

## Oasis Web3 Gateway for your EVM ParaTime

This guide will walk you through the steps needed to set up the Oasis Web3 gateway for EVM-compatible ParaTimes, such as [Emerald] and [Sapphire]. Each ParaTime requires its own instance of the Web3 gateway!
## Prerequisites

### Hardware

In addition to the minimum hardware requirements for running the Oasis node, the following should be added for running the Web3 gateway:

* CPU:
  * Minimum: 2.0 GHz x86-64 CPU
  * Recommended: 2.0 GHz x86-64 CPU with 2 cores/vCPUs
* Memory:
  * Minimum: 4 GB of ECC RAM
  * Recommended: 8 GB of ECC RAM
* Storage:
  * Minimum: 300 GB of SSD or NVMe fast storage
  * Recommended: 500 GB of SSD or NVMe fast storage

To put the figures above into perspective, the Web3 gateway for Emerald with PostgreSQL encountered **210 GB** of database growth in ~5 months between Nov 18, 2021 and Apr 11, 2022 (since the [Emerald Mainnet launch]).

### Oasis ParaTime Client Node

The Web3 gateway requires a locally deployed ParaTime-enabled Oasis Node. First, follow the [Oasis ParaTime client node](run-your-node/paratime-client-node.mdx) guide on how to configure the Oasis client node with one or more ParaTimes. Always use the exact combination of the Oasis node/ParaTime versions as published on the Network Parameters page ([Mainnet], [Testnet]).

Apart from the transactions that happen on-chain and produce some effects, there are also a number of read-only queries implemented in the Oasis protocol and the EVM. Some of them, such as simulating EVM calls, can be quite resource-hungry and are disabled by default to prevent denial-of-service attacks. If your Oasis node instance will only be used by you and your Web3 gateway(s), you can safely enable these *expensive* queries by adding the following to your Oasis node config:

```yaml title='config.yml'
# ... sections not relevant are omitted ...
runtime:
  mode: client
  paths:
    - {{ emerald_bundle_path }}
    - {{ sapphire_bundle_path }}
  config:
    "{{ emerald_paratime_id }}":
      estimate_gas_by_simulating_contracts: true
      allowed_queries:
        - all_expensive: true
    "{{ sapphire_paratime_id }}":
      estimate_gas_by_simulating_contracts: true
      allowed_queries:
        - all_expensive: true
```

In the config above replace the `{{ ... }}` placeholders with actual ParaTime IDs:

* `{{ emerald_paratime_id }}`:
  * Emerald on [Mainnet][mainnet-emerald]: `000000000000000000000000000000000000000000000000e2eaa99fc008f87f`
  * Emerald on [Testnet][testnet-emerald]: `00000000000000000000000000000000000000000000000072c8215e60d5bca7`
* `{{ sapphire_paratime_id }}`:
  * Sapphire on [Mainnet][mainnet-sapphire]: `000000000000000000000000000000000000000000000000f80306c9858e7279`
  * Sapphire on [Testnet][testnet-sapphire]: `000000000000000000000000000000000000000000000000a6d1e3ebf60dff6c`

### PostgreSQL

The Web3 gateway stores blockchain data in a [PostgreSQL](https://www.postgresql.org/) database, version **13.3** or higher. Install it by following the instructions specific to your operating system and environment.

Because each ParaTime requires its own instance of the Web3 gateway, you will have to create a separate database and a separate user for each Web3 instance.

## Download Oasis Web3 Gateway

Check the required version of the Web3 gateway for the network you will be deploying it to: [Mainnet], [Testnet]. Next, download the Oasis-provided binaries from the [official GitHub repository][github-releases]. Alternatively, you can download the source release and compile it yourself. Consult the [README.md] file for more information.

## Running the Web3 Gateway

Copy the content below to the config file of your Web3 gateway.

```yaml title='gateway.yml'
# Set the runtime_id below to the ID of the ParaTime you are running the gateway for.
runtime_id: {{ paratime_id }}
# Path to your internal.sock file in the root Oasis node datadir.
node_address: "unix:{{ oasis_node_unix_socket }}"

# By default, we index the entire blockchain history.
# If you are low on disk space or you use the gateway just for submitting transactions, enable
# pruning below.
enable_pruning: false
pruning_step: 100000
indexing_start: 0

log:
  level: debug
  format: json

database:
  # Change host and port, if PostgreSQL is running somewhere else.
  host: "127.0.0.1"
  port: 5432
  # Enter your database name, username and password.
  db: {{ postgresql_db }}
  user: {{ postgresql_user }}
  password: {{ postgresql_password }}
  dial_timeout: 5
  read_timeout: 10
  write_timeout: 5
  max_open_conns: 0

gateway:
  chain_id: {{ chain_id }}
  http:
    # Change host to your external IP address if you have users accessing Web3 from the outside.
    host: "localhost"
    # Use different port for each Web3 gateway instance, if all run locally.
    port: 8545
    cors: ["*"]
  ws:
    # Change host to your external IP address if you have users accessing Web3 from the outside.
    host: "localhost"
    # Use different port for each Web3 gateway instance, if all run locally.
    port: 8546
    origins: ["*"]
  method_limits:
    get_logs_max_rounds: 100
```

Use the following placeholder values:

- `{{ paratime_id }}`: The ID of the Emerald or Sapphire ParaTime which you are configuring the Web3 gateway for (see [above](#oasis-paratime-client-node)).
- `{{ oasis_node_unix_socket }}`: Path to the `internal.sock` file created by the Oasis node.
- `{{ postgresql_db }}`, `{{ postgresql_user }}`, `{{ postgresql_password }}`: Database name and credentials for your PostgreSQL database.
- `{{ chain_id }}`: The chain ID of your EVM network:
  - Emerald on [Mainnet][emerald-mainnet]: `42262`
  - Emerald on [Testnet][emerald-testnet]: `42261`
  - Sapphire on [Mainnet][sapphire-mainnet]: `23294`
  - Sapphire on [Testnet][sapphire-testnet]: `23295`

All configuration settings can also be set via environment variables. For example, instead of setting the database password in the config file above you can export:

```shell
export DATABASE__PASSWORD=your_password_here
```

To start the Web3 gateway, invoke:

```shell
./oasis-web3-gateway --config gateway.yml
```

The Web3 gateway will connect to your Oasis node and start indexing the available blocks (i.e. from the last network upgrade). Depending on your hardware and the size of the blockchain, this may take hours.

If your database contains any tables populated by a previous version of the Web3 gateway, migration scripts will automatically be applied upon startup. If you want to migrate the database separately, run:

```shell
./oasis-web3-gateway migrate-db --config gateway.yml
```

Above, we are invoking the `oasis-web3-gateway` process directly from the shell, so you can quickly start using it. If you are setting up a production environment, you should [configure the Web3 gateway as a system service][system service] and register it in the service manager for your platform.

### Metrics

The Web3 gateway can report a number of metrics to a Prometheus server. Metrics collection is not enabled by default. Enable it by configuring the `monitoring` section in the config file of the Web3 gateway:

```yaml title='gateway.yml'
# ... existing fields omitted ...
gateway:
  # ... existing fields omitted ...
  monitoring:
    host: "localhost"
    # Use different port for each Web3 gateway instance, if all run locally.
    port: 9999
```

The Oasis Web3 Gateway reports metrics starting with `oasis_web3_gateway_`.
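Once monitoring is enabled and the gateway has been restarted, you can confirm that metrics are actually being exported by querying the monitoring endpoint. This is a quick sanity check that assumes the standard Prometheus `/metrics` path and the `localhost:9999` settings from the snippet above:

```shell
# List all gateway metrics currently exposed on the monitoring port.
curl -s http://localhost:9999/metrics | grep '^oasis_web3_gateway_'
```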
The following metrics are currently reported:

Name | Type | Description | Labels | Package
-----|------|-------------|--------|--------
oasis_web3_gateway_gas_oracle_node_min_price | Gauge | Min gas price queried from the node. | | [gas](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/gas/backend.go)
oasis_web3_gateway_gas_oracle_computed_price | Gauge | Computed recommended gas price based on recent full blocks. -1 if none (no recent full blocks). | | [gas](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/gas/backend.go)
oasis_web3_gateway_cache_hits | Gauge | Number of cache hits. | cache | [indexer](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/indexer/backend_cache.go)
oasis_web3_gateway_cache_misses | Gauge | Number of cache misses. | cache | [indexer](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/indexer/backend_cache.go)
oasis_web3_gateway_cache_hit_ratio | Gauge | Percent of hits over all accesses (hits + misses). | cache | [indexer](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/indexer/backend_cache.go)
oasis_web3_gateway_block_indexed | Gauge | Indexed block heights. | | [indexer](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/indexer/indexer.go)
oasis_web3_gateway_block_pruned | Gauge | Pruned block heights. | | [indexer](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/indexer/indexer.go)
oasis_web3_gateway_indexer_health | Gauge | 1 if the gateway indexer healthcheck is reporting as healthy, 0 otherwise. | | [indexer](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/indexer/indexer.go)
oasis_web3_gateway_subscription_seconds | Histogram | Histogram of eth subscription API subscription durations. | method_name | [rpc/eth/filters](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/rpc/eth/filters/metrics.go)
oasis_web3_gateway_subscription_inflight | Gauge | Number of concurrent inflight eth subscriptions. | method_name | [rpc/eth/filters](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/rpc/eth/filters/metrics.go)
oasis_web3_gateway_api_request_heights | Histogram | Histogram of eth API request heights (difference from the latest height). | method_name | [rpc/eth/metrics](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/rpc/eth/metrics/api.go)
oasis_web3_gateway_signed_queries | Counter | Number of eth_call signed queries. | | [rpc/eth/metrics](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/rpc/eth/metrics/api.go)
oasis_web3_gateway_api_seconds | Histogram | Histogram of eth API request durations. | method_name | [rpc/metrics](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/rpc/metrics/metrics.go)
oasis_web3_gateway_api_request | Counter | Counter for API requests. | method_name | [rpc/metrics](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/rpc/metrics/metrics.go)
oasis_web3_gateway_api_failure | Counter | Counter for API request failures. | method_name | [rpc/metrics](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/rpc/metrics/metrics.go)
oasis_web3_gateway_api_success | Counter | Counter for successful API requests. | method_name | [rpc/metrics](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/rpc/metrics/metrics.go)
oasis_web3_gateway_api_inflight | Gauge | Number of inflight API requests. | method_name | [rpc/metrics](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/rpc/metrics/metrics.go)
oasis_web3_gateway_health | Gauge | 1 if the gateway healthcheck is reporting as healthy, 0 otherwise. | | [server](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/server/json_rpc.go)
oasis_web3_gateway_psql_query_seconds | Histogram | Histogram of PostgreSQL query durations. | query | [storage/psql](https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/storage/psql/metrics.go)

## Archive Web3 Gateway

Each Oasis Web3 gateway can only connect to and synchronize blocks from a single Oasis node instance. To enable access to older EVM blocks, you can configure the Web3 gateway to behave as a proxy to a second, archive instance of the Web3 gateway.

First, set up an instance of the [Oasis archive node]. Then, set up another Web3 gateway the same way as described above, but configure it to use the newly set up Oasis archive node.

Suppose the archive instances of the Web3 gateway and the Oasis node are up and running, and the archive Web3 gateway is listening on the local port `8543`. Enable the proxy for historical blocks by adding the following to your (live) Web3 gateway config and restart it:

```yaml title='gateway.yml'
# URI of an archive web3 gateway instance for servicing historical queries.
archive_uri: 'http://localhost:8543'
```

If a query requires information on a block which isn't stored in the live version of the Web3 gateway, the gateway will pass the query on to the configured archive instance and return the obtained result. Historical gas estimation calls are not supported.

## Troubleshooting

### Wipe state to force a complete reindex

If you encounter database or hardware issues, you may need to wipe the database and reindex all blocks. First, run the `truncate-db` subcommand:

```bash
oasis-web3-gateway truncate-db --config gateway.yml --unsafe
```

Then, execute `oasis-web3-gateway` normally to start reindexing the blocks. Note that this wipes all existing state in the PostgreSQL database and can lead to extended downtime while the Web3 gateway is reindexing the blocks.

[Emerald]: ../build/tools/other-paratimes/emerald/README.mdx
[Emerald Mainnet launch]: https://medium.com/oasis-protocol-project/oasis-emerald-evm-paratime-is-live-on-mainnet-13afe953a4c9
[emerald-mainnet]: ../build/tools/other-paratimes/emerald/network.mdx
[emerald-testnet]: ../build/tools/other-paratimes/emerald/network.mdx
[github-releases]: https://github.com/oasisprotocol/oasis-web3-gateway/releases
[Mainnet]: network/mainnet.md
[mainnet-emerald]: network/mainnet.md#emerald
[mainnet-sapphire]: network/mainnet.md#sapphire
[Oasis archive node]: run-your-node/archive-node.md
[README.md]: https://github.com/oasisprotocol/oasis-web3-gateway/blob/main/README.md#building-and-testing
[Sapphire]: ../build/sapphire/README.mdx
[sapphire-testnet]: ../build/sapphire/network.mdx
[sapphire-mainnet]: ../build/sapphire/network.mdx
[system service]: run-your-node/prerequisites/system-configuration.mdx#create-a-user
[Testnet]: network/testnet.md
[testnet-emerald]: network/testnet.md#emerald
[testnet-sapphire]: network/testnet.md#sapphire

## See also