diff --git a/.env b/.env
new file mode 100644
index 00000000..3097e6bd
--- /dev/null
+++ b/.env
@@ -0,0 +1,35 @@
+# Build configuration
+VERBOSE_HARNESS=0
+
+# Used to determine sandbox build type:
+TYPE="channel" # OR "source"
+
+# Used when TYPE==channel:
+ALGOD_CHANNEL="nightly"
+
+# Used when TYPE==source:
+ALGOD_URL="https://github.com/algorand/go-algorand"
+ALGOD_BRANCH="master"
+ALGOD_SHA=""
+
+# Used regardless of TYPE:
+NETWORK_TEMPLATE="images/algod/DevModeNetwork.json" # refers to the ./images directory in the sandbox repo
+NETWORK_NUM_ROUNDS=30000
+INDEXER_URL="https://github.com/algorand/indexer"
+NODE_ARCHIVAL="False"
+INDEXER_BRANCH="develop"
+INDEXER_SHA=""
+
+# Sandbox configuration:
+SANDBOX_URL="https://github.com/algorand/sandbox"
+SANDBOX_BRANCH="master"
+LOCAL_SANDBOX_DIR=".sandbox"
+
+# replacement values for Sandbox's docker-compose:
+ALGOD_CONTAINER=sdk-harness-algod
+KMD_PORT=60001
+ALGOD_PORT=60000
+INDEXER_CONTAINER=sdk-harness-indexer
+INDEXER_PORT=59999
+POSTGRES_CONTAINER=sdk-harness-postgres
+POSTGRES_PORT=65432
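The `.env` above is a plain `KEY=value` file, so harness scripts can export its contents with `set -a` before sourcing. A minimal sketch of that pattern (the `load_env` helper and the demo file path are illustrative, not part of this repo):

```shell
#!/usr/bin/env bash
# load_env FILE: export every KEY=value pair in FILE into the environment.
load_env() {
  set -a              # auto-export every variable assigned while sourcing
  # shellcheck disable=SC1090
  . "$1"
  set +a
}

# Demo: write a minimal .env fragment and read the harness ports back out.
cat > /tmp/harness-demo.env <<'EOF'
ALGOD_PORT=60000
KMD_PORT=60001
INDEXER_PORT=59999
EOF

load_env /tmp/harness-demo.env
echo "algod listens on localhost:${ALGOD_PORT}"
```

This is the same mechanism `docker compose` itself uses for `.env` interpolation, which is why the keys double as replacement values for sandbox's docker-compose.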
diff --git a/.gitignore b/.gitignore
index 47acc826..b085088d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -18,3 +18,5 @@ java_cucumber/src/test/resources/java_cucumber/*.feature
js_cucumber/features/*.feature
py_behave/*.feature
+# asdf version manager
+.tool-versions
\ No newline at end of file
diff --git a/.up-env b/.up-env
deleted file mode 100644
index 1580be21..00000000
--- a/.up-env
+++ /dev/null
@@ -1,8 +0,0 @@
-export TYPE="channel"
-
-# Used when TYPE=channel
-export CHANNEL="nightly"
-
-# Used when TYPE=source
-export ALGOD_URL="https://github.com/algorand/go-algorand"
-export ALGOD_BRANCH="master"
diff --git a/README.md b/README.md
index 1bbeb2cb..a555d06e 100644
--- a/README.md
+++ b/README.md
@@ -24,9 +24,13 @@ These reside in the [unit features directory](features/unit)
| @unit.algod.ledger_refactoring | |
| @unit.applications | Application endpoints added to Algod and Indexer. |
| @unit.atomic_transaction_composer | ABI / atomic transaction construction unit tests. |
+| @unit.atc_method_args | Test that the Atomic Transaction Composer asserts that the number of arguments provided matches the number expected. |
+| @unit.c2c | Tests for contract-to-contract calling. |
| @unit.dryrun | Dryrun endpoint added to Algod. |
+| @unit.dryrun.trace.application | DryrunResult formatting tests. |
| @unit.feetest | Fee transaction encoding tests. |
| @unit.indexer | Indexer REST API unit tests. |
+| @unit.indexer.ledger_refactoring | Assertions for indexer after ledger refactoring. |
| @unit.indexer.logs | Application logs endpoints added to Indexer. |
| @unit.indexer.rekey | Rekey endpoints added to Algod and Indexer |
| @unit.offline | The first unit tests we wrote for cucumber. |
@@ -36,6 +40,7 @@ These reside in the [unit features directory](features/unit)
| @unit.responses.genesis | REST Client Unit Tests for GetGenesis endpoint |
| @unit.responses.messagepack | REST Client MessagePack Unit Tests |
| @unit.responses.messagepack.231 | REST Client MessagePack Unit Tests for Indexer 2.3.1+ |
+| @unit.sourcemap | Test the sourcemap endpoint. |
| @unit.stateproof.responses | REST Client Response Tests for State Proof. |
| @unit.stateproof.responses.msgp | REST Client MessagePack Tests for State Proofs. |
| @unit.stateproof.paths | REST Client Unit Tests for State Proof feature. |
@@ -43,7 +48,6 @@ These reside in the [unit features directory](features/unit)
| @unit.transactions | Transaction encoding tests. |
| @unit.transactions.keyreg | Keyreg encoding tests. |
| @unit.transactions.payment | Payment encoding tests. |
-| @unit.dryrun.trace.application | DryrunResult formatting tests. |
### Integration Tests
@@ -53,17 +57,14 @@ These reside in the [integration features directory](features/integration)
| ---------------------- | -------------------------------------------------------------------------------------- |
| @abi | Test the Application Binary Interface (ABI) with atomic txn composition and execution. |
| @algod | General tests against algod REST endpoints. |
-| @application.evaldelta | Test that eval delta fields are included in algod and indexer. |
| @applications.verified | Submit all types of application transactions and verify account state. |
| @assets | Submit all types of asset transactions. |
| @auction | Encode and decode bids for an auction. |
| @c2c                   | Test Contract to Contract invocations and ingestion.                                   |
| @compile | Test the algod compile endpoint. |
+| @compile.sourcemap     | Test that the algod compile endpoint returns a valid Source Map.                       |
| @dryrun | Test the algod dryrun endpoint. |
| @dryrun.testing | Test the testing harness that relies on dryrun endpoint. Python only. |
-| @indexer | Test all types of indexer queries and parameters against a static dataset. |
-| @indexer.231 | REST Client Integration Tests for Indexer 2.3.1+ |
-| @indexer.applications | Endpoints and parameters added to support applications. |
| @kmd | Test the kmd REST endpoints. |
| @rekey_v1 | Test the rekeying transactions. |
| @send | Test the ability to submit transactions to algod. |
@@ -76,9 +77,9 @@ However, a few are not fully supported:
| tag | SDK's which implement |
| ------------------------------- | ---------------------------- |
-| @application.evaldelta | Java only |
| @dryrun.testing | Python only |
-| @indexer.rekey | missing from Python and JS |
+| @unit.c2c | missing from Python |
+| @unit.indexer.rekey | missing from Python and JS |
| @unit.responses.genesis | missing from Python and Java |
| @unit.responses.messagepack | missing from Python |
| @unit.responses.messagepack.231 | missing from Python and JS |
@@ -167,14 +168,15 @@ The SDKs come with a Makefile to coordinate running the cucumber test suites. Th
- **unit**: runs all of the short unit tests.
- **integration**: runs all integration tests.
+- **harness**: downloads this repo and calls `up.sh` to stand up a sandbox ready for running tests.
- **docker-test**: installs feature file dependencies, starts the test environment, and runs the SDK tests in a docker container.
At a high level, the **docker-test** target is required to:
-1. clone `algorand-sdk-testing`.
-2. copy supported feature files from the `features` directory into the SDK.
-3. build and start the test environment by calling `./scripts/up.sh`
-4. launch an SDK container using `--network host` which runs the cucumber test suite.
+1. clone `algorand-sdk-testing`.
+2. copy supported feature files from the `features` directory into the SDK.
+3. build and start the test environment by calling `./scripts/up.sh`, which clones `sandbox` and stands it up.
+4. run all cucumber tests against the `sandbox` containers.
### Running tests during development
@@ -186,12 +188,15 @@ Once the test environment is running you can use `make unit` and `make integrati
## Integration test environment
-Docker compose is used to manage several containers which work together to provide the test environment. Currently that includes algod, kmd, indexer and a postgres database. The services run on specific ports with specific API tokens. Refer to [docker-compose.yml](docker-compose.yml) and the [docker](docker/) directory for how this is configured.
+Algorand's [sandbox](https://github.com/algorand/sandbox) is used to manage several containers which work together to provide the test environment. This includes `algod`, `kmd`, `indexer` and a `postgres` database. The services run on specific ports with specific API tokens. Refer to [.env](.env) and to [sandbox's docker-compose.yml](https://github.com/algorand/sandbox/blob/master/docker-compose.yml) for how these are configured.
![Integration Test Environment](docs/SDK%20Test%20Environment.png)
-### Start the test environment
+### Managing the test environment
+
+[up.sh](scripts/up.sh) is used to bring up the test environment. Not surprisingly, [down.sh](scripts/down.sh) brings it all down.
-There are a number of [scripts](scripts/) to help with managing the test environment. The names should help you understand what they do, but to get started simply run **up.sh** to bring up a new environment, and **down.sh** to shut it down.
+When starting the environment, we default to using `go-algorand`'s nightly build. To run tests against a specific branch or commit of `go-algorand`, set `TYPE="source"` in `.env`,
+and set `ALGOD_URL` along with either `ALGOD_BRANCH` or `ALGOD_SHA` as appropriate.
-When starting the environment we avoid using the cache intentionally. It uses the go-algorand nightly build, and we want to ensure that the containers are always running against the most recent nightly build. In the future these scripts should be improved, but for now we completely avoid using cached docker containers to ensure that we don't accidentally run against a stale environment.
+`indexer` and even the `sandbox` itself can be configured similarly through `.env`.
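Switching the harness to a source build amounts to flipping two lines of `.env`. A sketch of the edit with `sed`, run against a throwaway copy so the flow can be tried anywhere (the branch name is made up for illustration):

```shell
#!/usr/bin/env bash
# Flip a sandbox-style .env from a channel build to a source build.
set -eu

# Throwaway copy mirroring the relevant keys from this repo's .env.
cat > /tmp/sdk-harness.env <<'EOF'
TYPE="channel" # OR "source"
ALGOD_CHANNEL="nightly"
ALGOD_URL="https://github.com/algorand/go-algorand"
ALGOD_BRANCH="master"
ALGOD_SHA=""
EOF

# Switch the build type and point at a (hypothetical) feature branch.
sed -i.bak \
  -e 's/^TYPE="channel"/TYPE="source"/' \
  -e 's#^ALGOD_BRANCH=.*#ALGOD_BRANCH="my-feature-branch"#' \
  /tmp/sdk-harness.env

grep -E '^(TYPE|ALGOD_BRANCH)=' /tmp/sdk-harness.env
```

In practice you would edit `.env` in place by hand; the point is only that `up.sh` reads whatever values are there at startup.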
diff --git a/config.harness b/config.harness
new file mode 100644
index 00000000..87fe74e1
--- /dev/null
+++ b/config.harness
@@ -0,0 +1,15 @@
+export ALGOD_CHANNEL="$ALGOD_CHANNEL"
+export ALGOD_URL="$ALGOD_URL"
+export ALGOD_BRANCH="$ALGOD_BRANCH"
+export ALGOD_SHA="$ALGOD_SHA"
+export NETWORK=""
+export NETWORK_TEMPLATE="$NETWORK_TEMPLATE"
+export NETWORK_BOOTSTRAP_URL=""
+export NETWORK_GENESIS_FILE=""
+export NETWORK_NUM_ROUNDS=$NETWORK_NUM_ROUNDS
+export NODE_ARCHIVAL="$NODE_ARCHIVAL"
+export INDEXER_URL="$INDEXER_URL"
+export INDEXER_BRANCH="$INDEXER_BRANCH"
+export INDEXER_SHA="$INDEXER_SHA"
+export INDEXER_DISABLED=""
+export INDEXER_ENABLE_ALL_PARAMETERS="false"
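`config.harness` simply re-exports the `.env` values under the variable names sandbox reads. A sketch of how a script like `up.sh` might chain the two (the file contents here are a reduced stand-in, written to temp files so the flow is runnable anywhere):

```shell
#!/usr/bin/env bash
# Demonstrate forwarding .env values to sandbox via config.harness.
set -eu
mkdir -p /tmp/harness-demo

# Reduced stand-in for this repo's .env.
cat > /tmp/harness-demo/.env <<'EOF'
ALGOD_CHANNEL="nightly"
NETWORK_NUM_ROUNDS=30000
EOF

# Reduced stand-in for config.harness: re-export under sandbox's names.
cat > /tmp/harness-demo/config.harness <<'EOF'
export ALGOD_CHANNEL="$ALGOD_CHANNEL"
export NETWORK_NUM_ROUNDS=$NETWORK_NUM_ROUNDS
EOF

set -a
. /tmp/harness-demo/.env            # load the repo's settings
set +a
. /tmp/harness-demo/config.harness  # forward them to sandbox

echo "channel: ${ALGOD_CHANNEL}, rounds: ${NETWORK_NUM_ROUNDS}"
```

Because sandbox sources its config as shell, any key left empty here (e.g. `NETWORK_BOOTSTRAP_URL`) falls back to sandbox's own default.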
diff --git a/docker-compose.yml b/docker-compose.yml
deleted file mode 100644
index d502adba..00000000
--- a/docker-compose.yml
+++ /dev/null
@@ -1,154 +0,0 @@
-version: '3'
-
-services:
- algod:
- image: "sdk-harness-algod"
- container_name: sdk-harness-algod
- build:
- context: .
- dockerfile: "./docker/algod/${TYPE:-channel}/Dockerfile"
- args:
- # This is used with TYPE="channel" to override the channel.
- CHANNEL: "${CHANNEL:-nightly}"
-
- # Set network template file to dev mode network if it is not manually set.
- NETWORK_TEMPLATE_PATH: "/tmp/network_config/${NETWORK_TEMPLATE:-DevModeNetwork.json}"
-
- # This is used with TYPE="source" to override the git information.
- URL: "${ALGOD_URL:-https://github.com/algorand/go-algorand}"
- BRANCH: "${ALGOD_BRANCH:-master}"
- ports:
- - 60001:60001
- - 60000:60000
- networks:
- - sdk-harness
- volumes:
- - genesis-file:/genesis-file
-
- # Live indexer instance connected to algod
- indexer-live:
- image: "sdk-harness-indexer-live"
- container_name: sdk-harness-indexer-live
- build:
- context: .
- dockerfile: ./docker/indexer/Dockerfile
- args:
- # allow override of the git information.
- URL: "${INDEXER_URL:-https://github.com/algorand/indexer}"
- BRANCH: "${INDEXER_BRANCH:-develop}"
- ports:
- - 60002:8980
- restart: unless-stopped
- networks:
- - sdk-harness
- environment:
- TYPE: "live"
- CONNECTION_STRING: "host=indexer-db port=5432 user=algorand password=harness dbname=live sslmode=disable"
- volumes:
- - genesis-file:/genesis-file
-
- # Applications Branch using dataset1
- indexer-221-1:
- image: "sdk-harness-indexer-release"
- container_name: sdk-harness-indexer-release
- build:
- context: .
- dockerfile: ./docker/indexer/Dockerfile
- args:
- URL: "https://github.com/algorand/indexer"
- BRANCH: "master"
- SHA: "a30878e3669310c30a2b916fb41511516b906c9a"
- ports:
- - 59999:8980
- restart: unless-stopped
- networks:
- - sdk-harness
- environment:
- TYPE: "snapshot"
- SNAPSHOT_FILE: /tmp/dataset1.tar.bz2
- CONNECTION_STRING: "host=indexer-db port=5432 user=algorand password=harness dbname=dataset1 sslmode=disable"
-
- # Applications Branch using dataset2
- indexer-221-2:
- image: "sdk-harness-indexer-applications"
- container_name: sdk-harness-indexer-applications
- build:
- context: .
- dockerfile: ./docker/indexer/Dockerfile
- args:
- URL: "https://github.com/algorand/indexer"
- BRANCH: "master"
- SHA: "a30878e3669310c30a2b916fb41511516b906c9a"
- ports:
- - 59998:8980
- restart: unless-stopped
- networks:
- - sdk-harness
- environment:
- TYPE: "snapshot"
- SNAPSHOT_FILE: /tmp/dataset2.tar.bz2
- CONNECTION_STRING: "host=indexer-db port=5432 user=algorand password=harness dbname=dataset2 sslmode=disable"
-
- # Create/Delete/Rewards Branch using dataset1
- indexer-23x-1:
- image: "sdk-harness-indexer-23x-1"
- container_name: sdk-harness-indexer-23x-1
- build:
- context: .
- dockerfile: ./docker/indexer/Dockerfile
- args:
- URL: "https://github.com/algorand/indexer"
- # TODO: Set back to master when include-all makes it to master.
- BRANCH: "develop"
- SHA: "cf93e3acacdf6fde9afd0b6b24fa0fde723ff43b"
- ports:
- - 59997:8980
- restart: unless-stopped
- networks:
- - sdk-harness
- environment:
- TYPE: "snapshot"
- SNAPSHOT_FILE: /tmp/dataset1.tar.bz2
- CONNECTION_STRING: "host=indexer-db port=5432 user=algorand password=harness dbname=dataset1_2 sslmode=disable"
-
- # Create/Delete/Rewards Branch using dataset2
- indexer-23x-2:
- image: "sdk-harness-indexer-23x-2"
- container_name: sdk-harness-indexer-23x-2
- build:
- context: .
- dockerfile: ./docker/indexer/Dockerfile
- args:
- URL: "https://github.com/algorand/indexer"
- # TODO: Set back to master + SHA when include-all makes it to master.
- BRANCH: "develop"
- SHA: "cf93e3acacdf6fde9afd0b6b24fa0fde723ff43b"
- ports:
- - 59996:8980
- restart: unless-stopped
- networks:
- - sdk-harness
- environment:
- TYPE: "snapshot"
- SNAPSHOT_FILE: /tmp/dataset2.tar.bz2
- CONNECTION_STRING: "host=indexer-db port=5432 user=algorand password=harness dbname=dataset2_2 sslmode=disable"
-
- indexer-db:
- image: "postgres"
- container_name: sdk-harness-postgres
- volumes:
- - ./docker/indexer/init-scripts:/docker-entrypoint-initdb.d
- ports:
- - 65432:5432
- networks:
- - sdk-harness
- environment:
- POSTGRES_USER: algorand
- POSTGRES_PASSWORD: harness
- POSTGRES_MULTIPLE_DATABASES: live, dataset1, dataset2, dataset1_2, dataset2_2
-
-networks:
- sdk-harness:
-
-volumes:
- genesis-file:
diff --git a/docker/algod/channel/Dockerfile b/docker/algod/channel/Dockerfile
deleted file mode 100644
index 2d97d437..00000000
--- a/docker/algod/channel/Dockerfile
+++ /dev/null
@@ -1,33 +0,0 @@
-FROM ubuntu:20.04
-
-ARG CHANNEL=nightly
-ENV BIN_DIR="$HOME/node"
-ARG NETWORK_TEMPLATE_PATH
-
-ENV DEBIAN_FRONTEND noninteractive
-
-RUN echo "Installing from channel: ${CHANNEL}"
-
-# Basic dependencies.
-ENV HOME /opt
-RUN apt-get update && apt-get install -y apt-utils curl git git-core bsdmainutils python3
-
-# Copy everything into the container.
-COPY . /tmp
-
-# Install algod binaries from a channel.
-RUN python3 /tmp/docker/algod/setup.py install \
- --bin-dir "${BIN_DIR}" \
- --channel "${CHANNEL}"
-
-RUN python3 /tmp/docker/algod/setup.py configure \
- --bin-dir "${BIN_DIR}" \
- --network-dir /opt/testnetwork \
- --network-template "$NETWORK_TEMPLATE_PATH" \
- --network-token aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa \
- --algod-port 60000 \
- --kmd-port 60001
-
-
-# Start algod
-CMD ["/usr/bin/env", "bash", "-c", "python3 /tmp/docker/algod/setup.py start --bin-dir \"$BIN_DIR\" --network-dir \"/opt/testnetwork\" --copy-genesis-to \"/genesis-file\""]
diff --git a/docker/algod/setup.py b/docker/algod/setup.py
deleted file mode 100755
index ae858e86..00000000
--- a/docker/algod/setup.py
+++ /dev/null
@@ -1,137 +0,0 @@
-#!/usr/bin/env python3
-
-import argparse
-import os
-import pprint
-import shutil
-import subprocess
-import tarfile
-import time
-import json
-import urllib.request
-from os.path import expanduser, join
-
-parser = argparse.ArgumentParser(description='Install, configure, and start algod.')
-
-# Shared parameters
-base_parser = argparse.ArgumentParser(add_help=False)
-base_parser.add_argument('--bin-dir', required=True, help='Location to install algod binaries.')
-
-subparsers = parser.add_subparsers()
-
-install = subparsers.add_parser('install', parents=[base_parser], help='Install the binaries.')
-install.add_argument('--channel', required=True, help='Channel to install, nightly and beta are good options.')
-
-configure = subparsers.add_parser('configure', parents=[base_parser], help='Configure private network for SDK.')
-configure.add_argument('--network-template', required=True, help='Path to private network template file.')
-configure.add_argument('--network-token', required=True, help='Valid token to use for algod/kmd.')
-configure.add_argument('--algod-port', required=True, help='Port to use for algod.')
-configure.add_argument('--kmd-port', required=True, help='Port to use for kmd.')
-configure.add_argument('--network-dir', required=True, help='Path to create network.')
-
-start = subparsers.add_parser('start', parents=[base_parser], help='Start the network.')
-start.add_argument('--network-dir', required=True, help='Path to create network.')
-start.add_argument('--copy-genesis-to', required=False, help='Copy the genesis to a given folder.')
-
-pp = pprint.PrettyPrinter(indent=4)
-
-def install_algod_binaries(bin_dir, channel):
- """
- Download and install algod.
- """
- home = expanduser('~')
- os.makedirs("%s/inst" % home, exist_ok=True)
- print('downloading updater...')
- url='https://algorand-releases.s3.amazonaws.com/channel/stable/install_stable_linux-amd64_3.8.0.tar.gz'
- updater_tar='%s/inst/installer.tar.gz' % home
- filedata = urllib.request.urlretrieve(url, updater_tar)
- tar = tarfile.open(updater_tar)
- tar.extractall(path='%s/inst' % home)
- subprocess.check_call(['%s/inst/update.sh -i -c %s -p %s -d %s/data -n' % (home, channel, bin_dir, bin_dir)], shell=True)
-
-
-def algod_directories(network_dir):
- """
- Compute data/kmd directories.
- """
- data_dir=join(network_dir, 'Node')
-
- kmd_dir = [filename for filename in os.listdir(data_dir) if filename.startswith('kmd')][0]
- kmd_dir=join(data_dir, kmd_dir)
-
- return data_dir, kmd_dir
-
-
-def create_network(bin_dir, network_dir, template, token, algod_port, kmd_port):
- """
- Create a private network.
- """
- # Reset network dir before creating a new one.
- if os.path.exists(args.network_dir):
- shutil.rmtree(args.network_dir)
-
- # $BIN_DIR/goal network create -n testnetwork -r $NETWORK_DIR -t network_config/$TEMPLATE
- subprocess.check_call(['%s/goal network create -n testnetwork -r %s -t %s' % (bin_dir, network_dir, template)], shell=True)
- node_dir, kmd_dir = algod_directories(network_dir)
-
- # Set tokens
- with open(join(node_dir, 'algod.token'), 'w') as f:
- f.write(token)
- with open(join(kmd_dir, 'kmd.token'), 'w') as f:
- f.write(token)
-
- # Setup config, inject port
- with open(join(node_dir, 'config.json'), 'w') as f:
- f.write('{ "GossipFanout": 1, "EndpointAddress": "0.0.0.0:%s", "DNSBootstrapID": "", "IncomingConnectionsLimit": 0, "Archival":true, "isIndexerActive":true, "EnableDeveloperAPI":true}' % algod_port)
- with open(join(kmd_dir, 'kmd_config.json'), 'w') as f:
- f.write('{ "address":"0.0.0.0:%s", "allowed_origins":["*"]}' % kmd_port)
-
-
-def start_network(bin_dir, network_dir):
- """
- # $BIN_DIR/goal network start -r $NETWORK_DIR
-
- kmd start runs forever, so this command never returns.
- """
- data_dir, kmd_dir = algod_directories(network_dir)
- subprocess.check_call(['%s/goal network start -r %s' % (bin_dir, network_dir)], shell=True)
- subprocess.check_call(['%s/kmd start -t 0 -d %s' % (bin_dir, kmd_dir)], shell=True)
-
-
-def install_handler(args):
- """
- install subcommand - create and configure the network.
- """
- install_algod_binaries(args.bin_dir, args.channel)
-
-def configure_handler(args):
- """
- configure subcommand - configure a private network using the installed binaries.
- """
- create_network(args.bin_dir, args.network_dir, args.network_template, args.network_token, args.algod_port, args.kmd_port)
-
-
-def start_handler(args):
- """
- start subcommand - start algod + kmd
- """
- # Optionally copy the genesis file to shared location before starting.
- if args.copy_genesis_to is not None:
- print('copying genesis file...')
- node_dir, _ = algod_directories(args.network_dir)
- genesis_file = join(node_dir, 'genesis.json')
- dest = join(args.copy_genesis_to, 'genesis.json')
- shutil.copyfile(genesis_file, dest)
- else:
- print('Not copying genesis file...')
-
- start_network(args.bin_dir, args.network_dir)
-
-
-if __name__ == '__main__':
- install.set_defaults(func=install_handler)
- configure.set_defaults(func=configure_handler)
- start.set_defaults(func=start_handler)
-
- args = parser.parse_args()
- args.func(args)
diff --git a/docker/algod/source/Dockerfile b/docker/algod/source/Dockerfile
deleted file mode 100644
index 5775058e..00000000
--- a/docker/algod/source/Dockerfile
+++ /dev/null
@@ -1,35 +0,0 @@
-FROM golang:1.17
-
-ARG URL=https://github.com/algorand/go-algorand
-ARG BRANCH=master
-ARG NETWORK_TEMPLATE_PATH
-
-RUN echo "Installing from source. ${URL} -- ${BRANCH}"
-ENV BIN_DIR="$GOPATH/bin"
-
-ENV DEBIAN_FRONTEND noninteractive
-
-# Basic dependencies.
-ENV HOME /opt
-RUN apt-get update && apt-get install -y apt-utils curl git git-core bsdmainutils python3
-
-# Copy everything into the container.
-COPY . /tmp
-
-# Install algod binaries.
-RUN git clone --single-branch --branch "${BRANCH}" "${URL}" && \
- cd go-algorand && \
- ./scripts/configure_dev.sh && \
- make install
-
-# Configure private network
-RUN python3 /tmp/docker/algod/setup.py configure \
- --bin-dir "$BIN_DIR" \
- --network-dir /opt/testnetwork \
- --network-template "$NETWORK_TEMPLATE_PATH" \
- --network-token aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa \
- --algod-port 60000 \
- --kmd-port 60001
-
-# Start algod
-CMD ["/usr/bin/env", "bash", "-c", "python3 /tmp/docker/algod/setup.py start --bin-dir \"$BIN_DIR\" --network-dir \"/opt/testnetwork\" --copy-genesis-to \"/genesis-file\""]
diff --git a/docker/indexer/Dockerfile b/docker/indexer/Dockerfile
deleted file mode 100644
index 0ebe03ae..00000000
--- a/docker/indexer/Dockerfile
+++ /dev/null
@@ -1,27 +0,0 @@
-FROM golang:1.17-alpine
-
-ARG URL=https://github.com/algorand/indexer
-ARG BRANCH=master
-ARG SHA=""
-
-RUN echo "Installing from source. ${URL} -- ${BRANCH} -- ${SHA}"
-
-RUN apk update && apk add --no-cache --update \
- alpine-sdk git bzip2 make bash libtool autoconf automake boost-dev musl-dev
-
-# Install indexer binaries.
-RUN git clone --single-branch --branch "${BRANCH}" "${URL}" /opt/indexer
-ENV HOME /opt/indexer
-WORKDIR /opt/indexer
-RUN if [ "${SHA}" != "" ]; then echo "Checking out ${SHA}" && git checkout "${SHA}"; fi
-RUN make
-RUN cp cmd/algorand-indexer/algorand-indexer /tmp
-
-ENV DEBIAN_FRONTEND noninteractive
-
-COPY docker/indexer/setup.sh /tmp/setup.sh
-COPY docker/indexer/data/* /tmp/
-
-RUN ls -lh /tmp
-
-CMD ["/tmp/setup.sh"]
diff --git a/docker/indexer/data/dataset1.tar.bz2 b/docker/indexer/data/dataset1.tar.bz2
deleted file mode 100644
index 03baa8c8..00000000
Binary files a/docker/indexer/data/dataset1.tar.bz2 and /dev/null differ
diff --git a/docker/indexer/data/dataset2.tar.bz2 b/docker/indexer/data/dataset2.tar.bz2
deleted file mode 100644
index 61ffae3a..00000000
Binary files a/docker/indexer/data/dataset2.tar.bz2 and /dev/null differ
diff --git a/docker/indexer/e2edata.tar.bz2 b/docker/indexer/e2edata.tar.bz2
deleted file mode 100644
index 03baa8c8..00000000
Binary files a/docker/indexer/e2edata.tar.bz2 and /dev/null differ
diff --git a/docker/indexer/init-scripts/create-multiple-postgresql-databases.sh b/docker/indexer/init-scripts/create-multiple-postgresql-databases.sh
deleted file mode 100755
index 68977f0f..00000000
--- a/docker/indexer/init-scripts/create-multiple-postgresql-databases.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/bin/bash
-
-# source: https://github.com/mrts/docker-postgresql-multiple-databases
-
-set -e
-set -u
-
-function create_user_and_database() {
- local database=$1
- echo " Creating user and database '$database'"
- psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
- CREATE USER $database;
- CREATE DATABASE $database;
- GRANT ALL PRIVILEGES ON DATABASE $database TO $database;
-EOSQL
-}
-
-if [ -n "$POSTGRES_MULTIPLE_DATABASES" ]; then
- echo "Multiple database creation requested: $POSTGRES_MULTIPLE_DATABASES"
- for db in $(echo $POSTGRES_MULTIPLE_DATABASES | tr ',' ' '); do
- create_user_and_database $db
- done
- echo "Multiple databases created"
-fi
diff --git a/docker/indexer/setup.sh b/docker/indexer/setup.sh
deleted file mode 100755
index e0c513b0..00000000
--- a/docker/indexer/setup.sh
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/bin/bash
-
-# Start indexer daemon. There are various configurations controlled by
-# environment variables.
-#
-# Configuration:
-# TYPE [snapshot] - load a snapshot, start indexer in readonly mode.
-# [live ] - connect to an existing algod instance.
-# CONNECTION_STRING - the postgres connection string to use.
-# SNAPSHOT_FILE - snapshot to import.
-#
-# Volume:
-# /genesis-file/genesis.json - Must be mounted when connecting to algod.
-set -e
-
-
-TYPE="${TYPE:-snapshot}"
-
-start_with_algod() {
- echo "Starting indexer against algod."
-
- /tmp/algorand-indexer daemon \
- --algod-net "algod:60000" \
- --algod-token aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa \
- --genesis "/genesis-file/genesis.json" \
- -P "$CONNECTION_STRING"
-}
-
-import_and_start_readonly() {
- echo "Starting indexer with DB."
-
- # Extract the correct dataset
- ls -lh /tmp
- echo "Extracting ${SNAPSHOT_FILE}"
- tar -xf "${SNAPSHOT_FILE}" -C /tmp
-
- /tmp/algorand-indexer import \
- -P "$CONNECTION_STRING" \
- --genesis "/tmp/algod/genesis.json" \
- /tmp/blocktars/*
-
- /tmp/algorand-indexer daemon -P "$CONNECTION_STRING"
-}
-
-case $TYPE in
- live) start_with_algod ;;
- snapshot) import_and_start_readonly ;;
- *) echo "Unknown setup type: $TYPE" && help && exit 1 ;;
-esac
-
-sleep infinity
diff --git a/docs/SDK Test Environment.drawio b/docs/SDK Test Environment.drawio
index 0e3fefe0..0293522a 100644
--- a/docs/SDK Test Environment.drawio
+++ b/docs/SDK Test Environment.drawio
@@ -1 +1,105 @@
-7Vrbcts2EP0azyQP7vBu6TGW3dSZpE3rTOs8dWASphiThAJCtpSvL0AAvACQKEuUqMbyeGxgCRLk2bMXAHvmTrLFewxm008ogumZY0WLM/fqzHHG9oj+ZYIlF4zcgAtinERcZNeC2+QHFEJLSOdJBIvWQIJQSpJZWxiiPIchackAxui5PewBpe1ZZyCGmuA2BKku/SeJyFR8hW/V8t9gEk/lzLYlrmRADhaCYgoi9NwQuddn7gQjRHgrW0xgyrCTuPD7fl1xtXoxDHOyyQ1fJ5Pvz+Ns9On9uROd3938+fftp3OHP+UJpHPxwVcofISY6iGi8i+wIPTfdf6UYJRnbCb+LWQpASJwQae/nJIspQKbNguC0SOcoBRhKslRTkdePiRpqohAmsQ57Yb0sZDKL58gJgmF/p24kCVRxKa5fJ4mBN7OQMjmfKY8ozKM5nkE2bdZtKeDIfBhz4SLhkiA8x6iDBK8pEPEVXcsFCWYWinuuda7EwjZtKFzOQ4IqsXVo2tt0IZQyAuU42rKAWmMqF6ClEF+j2krJuXXByBjoPC/+oDHLNIUZwK1oUVQzLg9PSQLBnMfEHu+AvFIh3hkQHi0L4Q9DWENJphH75gfqXkbgWJaEs828V56CWcdZDBqeR0dsAYgvgEQKcMwBSR5avsqE0pihs8oYSZcUT5o6+PCV4Au0ByHUNzV9C0dD3I85UEE4BgS7UGl0qrP3l6PvqZH4bn+muckyeBr8WG20+3DbJOF7c2HBZpmPqOCxJjGdFULNEDOWDNcpgmFBa8AraGle47fx/tKAMLHuET1jzmhT4FC3gO0vkLxwNehNUWHvfku+QlNX0Xdyq3oIkymKEY5SK9r6WUt/YjQTKDzDRKyFKkXmBO0zq3ZLdquxZY7jzVfMObjuG/o5lCn03yZN6ReHSwbA2bMORWrnWVgtQngt/Mt2uBP7NWxjTXzuaHQLyDWY/ybDBTUB72VV4oZyFv0CL7PWa55GXIX9o4hFt+DN/Qz6C99P8vYesuaTM00ec7J+QPIknTJb89QjorSOltDipJKbIA1W9Tz1tmKr64XfAoJk5bZcNWTEPklSFRyxdrsxXyGik+h7xprV2MlQ7d6jFM/hquiuiIyMr8EnUpLhfjll7Je2WZZGevI/6uHl3manIlrrpqp4QuZrPSG7J7SVJnE5pPwvI0JeOYmZxSvzx3eJgBVlypsalvzmVlUI71RDaa1bMDWlHM/WV1rXhL+0niNmmcljxvzq5oru5X6msI2qcQ4jX3caKiRcruRVD2+pNkbOmk2Jcn7CDwUMry8Y8nRL67sfi27ll0JrhYie+K9pejpQau8/pmubCkILLcoh/UVyeRuxeFD2U6alK+9UXiJ4BNM0ewUX07x5RRffuL44g8eXywNlu740l5K97LMqeNPO/pc+B3BZ4c44m0YR9zjiiP6PhpdLVn6ArWtpQ6y98Bt121zu+p3LNr3th0ij0MOy224SMhdo8157ItezWLWWTYpfaemW/xOdzTqMAFjtqWb2A6WEvw/LUXfD2OWonu8wS1FWu9QluJ4W1hKOMdPVTq/05KjTfguj98nsx13Q2aPj4rZjn5a5Y/pz9Ex27cHjgHOxWtltr8hs6WGjoXa+vESo7YeyoemdjA4tXVzfyXUHm1I7ePaAJKv3WS257rHR2zDmnRfxP6W/e7N7z98//Yn/dof44fzD1/+lWo73Jbny6hs2sdUKLYdr11Zc9XpsoMdebzVmZyj8MQbKzVQHeNd21aosdsZnpE724T718UdbxDueC/kjjLe9fo9/zVyZ5vtgp+VO0aABikFcNQjuS7qqON7Lh0wIrNNKvazUmfNivXg3Ll4IXeU8X2XnZitapst+NdFHn8Q8gQvJI9atHaImiWJoV6NeYUpMthYv6yeNt9cXfPz4cnHG94IUU5AkpcVTgo3Bzg1s9XM01AOeNhSZh113Ya1WuaVB2DuTsthzzHapI7mYeqctbJYtTx50zrnwOt
40J7rnCWuDSWnKATpFDH7Uo1oRj2yxoGhl/KmkmTTUl5Ftj9D0Teq92YonWHkZCg7G4oxgdnm8KidVBwwmzFSoNPxrl4ObbBscsws633Xcd1LKmeguhVu4ao0v2SAcWNX5R1wO90IlL4Ru+9Cy3F9si+O+rvIu3FdZS/8djb0q4Ms7fz1y/zO8a41Uiizh6WdnkGsLvlMKVSnes9Tveep3tM9qnrPnaKcWu/p7q/e80yA1XBgNUzu9X8=
\ No newline at end of file
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/docs/SDK Test Environment.png b/docs/SDK Test Environment.png
index b3ac5bcf..a86021b0 100644
Binary files a/docs/SDK Test Environment.png and b/docs/SDK Test Environment.png differ
diff --git a/features/integration/algod.feature b/features/integration/algod.feature
index 9a3f559c..c3216c0b 100644
--- a/features/integration/algod.feature
+++ b/features/integration/algod.feature
@@ -20,8 +20,6 @@ Feature: Algod
Given a kmd client
And wallet information
Then I get transactions by address and round
- And I get transactions by address only
- And I get transactions by address and date
Scenario: Getting transaction by ID
Given a kmd client
@@ -30,19 +28,10 @@ Feature: Algod
When I get the private key
And I sign the transaction with the private key
And I send the transaction
- Then I can get the transaction by ID
Scenario: Get pending transactions
Then I get pending transactions
- Scenario Outline: Get Transactions By Address and Limit Count
- Given a kmd client
- And wallet information
- When I get recent transactions, limited by transactions
- Examples:
- | cnt |
- | 0 |
- | 1 |
Scenario: Suggested params
When I get the suggested params
diff --git a/features/integration/compile.feature b/features/integration/compile.feature
index 6a5547b2..52356038 100644
--- a/features/integration/compile.feature
+++ b/features/integration/compile.feature
@@ -4,28 +4,28 @@ Feature: Compile
@compile
Scenario Outline: Compile programs
- When I compile a teal program
- Then it is compiled with and and
+ When I compile a teal program ""
+ Then it is compiled with and "" and ""
Examples:
- | program | status | result | hash |
- | "programs/one.teal" | 200 | "AiABASI=" | "YOE6C22GHCTKAN3HU4SE5PGIPN5UKXAJTXCQUPJ3KKF5HOAH646MKKCPDA" |
- | "programs/invalid.teal" | 400 | "" | "" |
+ | program | status | result | hash |
+      | programs/one.teal     | 200    | AiABASI= | YOE6C22GHCTKAN3HU4SE5PGIPN5UKXAJTXCQUPJ3KKF5HOAH646MKKCPDA |
+ | programs/invalid.teal | 400 | | |
@compile
Scenario Outline: Teal compiles to its associated binary
- When I compile a teal program
- Then base64 decoding the response is the same as the binary
+ When I compile a teal program ""
+ Then base64 decoding the response is the same as the binary ""
Examples:
- | teal | program |
- | "programs/one.teal" | "programs/one.teal.tok" |
- | "programs/zero.teal" | "programs/zero.teal.tok" |
- | "programs/abi_method_call.teal" | "programs/abi_method_call.teal.tok" |
+ | teal | program |
+ | programs/one.teal | programs/one.teal.tok |
+ | programs/zero.teal | programs/zero.teal.tok |
+ | programs/abi_method_call.teal | programs/abi_method_call.teal.tok |
@compile.sourcemap
Scenario Outline: Algod compiling Teal returns a valid Source Map
- When I compile a teal program with mapping enabled
- Then the resulting source map is the same as the json
+ When I compile a teal program "" with mapping enabled
+ Then the resulting source map is the same as the json ""
Examples:
- | teal | sourcemap |
- | "programs/quine.teal" | "v2algodclient_responsejsons/sourcemap.json" |
\ No newline at end of file
+ | teal | sourcemap |
+ | programs/quine.teal | v2algodclient_responsejsons/sourcemap2.json |
\ No newline at end of file
diff --git a/features/integration/dryrun.feature b/features/integration/dryrun.feature
index 6e4296d0..9b071645 100644
--- a/features/integration/dryrun.feature
+++ b/features/integration/dryrun.feature
@@ -4,11 +4,11 @@ Feature: Dryrun
Given an algod v2 client
Scenario Outline: Dryrun execution with binary and source programs
- When I dryrun a <kind> program <program>
- Then I get execution result <result>
+ When I dryrun a "<kind>" program "<program>"
+ Then I get execution result "<result>"
Examples:
- | program | kind | result |
- | "programs/one.teal.tok" | "compiled" | "PASS" |
- | "programs/zero.teal.tok" | "compiled" | "REJECT" |
- | "programs/one.teal" | "source" | "PASS" |
- | "programs/zero.teal" | "source" | "REJECT" |
+ | program | kind | result |
+ | programs/one.teal.tok | compiled | PASS |
+ | programs/zero.teal.tok | compiled | REJECT |
+ | programs/one.teal | source | PASS |
+ | programs/zero.teal | source | REJECT |
diff --git a/features/integration/dryrun_testing.feature b/features/integration/dryrun_testing.feature
index 3e06b00b..f02ad8d3 100644
--- a/features/integration/dryrun_testing.feature
+++ b/features/integration/dryrun_testing.feature
@@ -4,47 +4,47 @@ Feature: Dryrun Testing
Given an algod v2 client
Scenario Outline: Dryrun test case with simple assert
- Given dryrun test case with <program> of type <kind>
- Then status assert of <status> is succeed
+ Given dryrun test case with "<program>" of type "<kind>"
+ Then status assert of "<status>" is succeed
Examples:
- | program | kind | status |
- | "programs/one.teal.tok" | "lsig" | "PASS" |
- | "programs/one.teal.tok" | "approv" | "PASS" |
- | "programs/one.teal.tok" | "clearp" | "PASS" |
- | "programs/zero.teal.tok" | "lsig" | "REJECT" |
- | "programs/zero.teal.tok" | "approv" | "REJECT" |
- | "programs/zero.teal.tok" | "clearp" | "REJECT" |
- | "programs/one.teal" | "lsig" | "PASS" |
- | "programs/one.teal" | "approv" | "PASS" |
- | "programs/one.teal" | "clearp" | "PASS" |
- | "programs/zero.teal" | "lsig" | "REJECT" |
- | "programs/zero.teal" | "approv" | "REJECT" |
- | "programs/zero.teal" | "clearp" | "REJECT" |
+ | program | kind | status |
+ | programs/one.teal.tok | lsig | PASS |
+ | programs/one.teal.tok | approv | PASS |
+ | programs/one.teal.tok | clearp | PASS |
+ | programs/zero.teal.tok | lsig | REJECT |
+ | programs/zero.teal.tok | approv | REJECT |
+ | programs/zero.teal.tok | clearp | REJECT |
+ | programs/one.teal | lsig | PASS |
+ | programs/one.teal | approv | PASS |
+ | programs/one.teal | clearp | PASS |
+ | programs/zero.teal | lsig | REJECT |
+ | programs/zero.teal | approv | REJECT |
+ | programs/zero.teal | clearp | REJECT |
Scenario Outline: Dryrun test case with global state delta assert succeed
- Given dryrun test case with <program> of type <kind>
- Then global delta assert with <key>, <value> and <action> is succeed
+ Given dryrun test case with "<program>" of type "<kind>"
+ Then global delta assert with "<key>", "<value>" and <action> is succeed
Examples:
- | program | kind | key | action | value |
- | "programs/globalwrite.teal" | "approv" | "Ynl0ZXNrZXk=" | 1 | "dGVzdA==" |
- | "programs/globalwrite.teal" | "approv" | "aW50a2V5" | 2 | "11" |
- | "programs/globalwrite.teal" | "clearp" | "Ynl0ZXNrZXk=" | 1 | "dGVzdA==" |
- | "programs/globalwrite.teal" | "clearp" | "aW50a2V5" | 2 | "11" |
+ | program | kind | key | action | value |
+ | programs/globalwrite.teal | approv | Ynl0ZXNrZXk= | 1 | dGVzdA== |
+ | programs/globalwrite.teal | approv | aW50a2V5 | 2 | 11 |
+ | programs/globalwrite.teal | clearp | Ynl0ZXNrZXk= | 1 | dGVzdA== |
+ | programs/globalwrite.teal | clearp | aW50a2V5 | 2 | 11 |
Scenario Outline: Dryrun test case with global state delta assert failed
- Given dryrun test case with <program> of type <kind>
- Then global delta assert with <key>, <value> and <action> is failed
+ Given dryrun test case with "<program>" of type "<kind>"
+ Then global delta assert with "<key>", "<value>" and <action> is failed
Examples:
- | program | kind | key | action | value |
- | "programs/globalwrite.teal" | "clearp" | "Ynl0ZXNrZXk=" | 1 | "test" |
- | "programs/globalwrite.teal" | "approv" | "aW50a2V5" | 2 | "12" |
+ | program | kind | key | action | value |
+ | programs/globalwrite.teal | clearp | Ynl0ZXNrZXk= | 1 | test |
+ | programs/globalwrite.teal | approv | aW50a2V5 | 2 | 12 |
Scenario Outline: Dryrun test case with local state delta assert succeed
- Given dryrun test case with <program> of type <kind>
- Then local delta assert for <addr> of <index> accounts with <key>, <value> and <action> is succeed
+ Given dryrun test case with "<program>" of type "<kind>"
+ Then local delta assert for "<addr>" of <index> accounts with "<key>", "<value>" and <action> is succeed
Examples:
- | program | kind | addr | index | key | action | value |
- | "programs/localwrite.teal" | "approv" | "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ" | 0 | "Ynl0ZXNrZXk=" | 1 | "dGVzdA==" |
- | "programs/localwrite.teal" | "approv" | "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ" | 0 | "aW50a2V5" | 2 | "11" |
- | "programs/localwrite.teal" | "clearp" | "6Z3C3LDVWGMX23BMSYMANACQOSINPFIRF77H7N3AWJZYV6OH6GWTJKVMXY" | 1 | "Ynl0ZXNrZXk=" | 1 | "dGVzdA==" |
- | "programs/localwrite.teal" | "clearp" | "6Z3C3LDVWGMX23BMSYMANACQOSINPFIRF77H7N3AWJZYV6OH6GWTJKVMXY" | 1 | "aW50a2V5" | 2 | "11" |
+ | program | kind | addr | index | key | action | value |
+ | programs/localwrite.teal | approv | AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ | 0 | Ynl0ZXNrZXk= | 1 | dGVzdA== |
+ | programs/localwrite.teal | approv | AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY5HFKQ | 0 | aW50a2V5 | 2 | 11 |
+ | programs/localwrite.teal | clearp | 6Z3C3LDVWGMX23BMSYMANACQOSINPFIRF77H7N3AWJZYV6OH6GWTJKVMXY | 1 | Ynl0ZXNrZXk= | 1 | dGVzdA== |
+ | programs/localwrite.teal | clearp | 6Z3C3LDVWGMX23BMSYMANACQOSINPFIRF77H7N3AWJZYV6OH6GWTJKVMXY | 1 | aW50a2V5 | 2 | 11 |
diff --git a/features/integration/evaldelta.feature b/features/integration/evaldelta.feature
deleted file mode 100644
index c0fbe1ab..00000000
--- a/features/integration/evaldelta.feature
+++ /dev/null
@@ -1,26 +0,0 @@
-Feature: EvalDelta
- # These are optional for the SDKs to implement because
- # their primary purpose is testing algod / indexer
-
- Background:
- Given a kmd client
- And wallet information
- And an algod v2 client connected to "localhost" port 60000 with token "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
- And I create a new transient account and fund it with 100000000 microalgos.
- And indexer client 3 at "localhost" port 60002 with token "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
-
- @applications.evaldelta
- Scenario Outline: Set '<arg>' in <state-location> state
- # Create app
- Given I build an application transaction with the transient account, the current application, suggested params, operation "create_optin", approval-program "<program>", clear-program "programs/one.teal.tok", global-bytes <global-bytes>, global-ints <global-ints>, local-bytes <local-bytes>, local-ints <local-ints>, app-args "<arg>", foreign-apps "", foreign-assets "", app-accounts "", extra-pages 0
- And I sign and submit the transaction, saving the txid. If there is an error it is "".
- Then the unconfirmed pending transaction by ID should have no apply data fields.
- And I wait for the transaction to be confirmed.
- Then the confirmed pending transaction by ID should have a "<state-location>" state change for "Zm9v" to "<value>", indexer 3 should also confirm this.
-
- Examples:
- | program | state-location | global-bytes | global-ints | local-bytes | local-ints | arg | value |
- | programs/globwrite.teal.tok | global | 1 | 0 | 0 | 0 | str:hello | aGVsbG8= |
- | programs/globwrite_int.teal.tok | global | 0 | 1 | 0 | 0 | int:90000 | 90000 |
- | programs/locwrite.teal.tok | local | 0 | 0 | 1 | 0 | str:hello | aGVsbG8= |
- | programs/locwrite_int.teal.tok | local | 0 | 0 | 0 | 1 | int:90000 | 90000 |
diff --git a/features/integration/indexer.feature b/features/integration/indexer.feature
deleted file mode 100644
index 859ac1ee..00000000
--- a/features/integration/indexer.feature
+++ /dev/null
@@ -1,531 +0,0 @@
-Feature: Indexer Integration Tests
-
- For all queries, parameters will not be set for default values as defined by:
- * Numeric inputs: 0
- * String inputs: ""
-
- Background:
- Given indexer client 1 at "localhost" port 59999 with token "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
- Given indexer client 2 at "localhost" port 59998 with token "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
- # Indexer 2.3.x Dataset 1
- Given indexer client 3 at "localhost" port 59997 with token "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
- # Indexer 2.3.x Dataset 2
- Given indexer client 4 at "localhost" port 59996 with token "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
-
- @indexer
- Scenario Outline: /health
- When I use <indexer> to check the services health
- Then I receive status code <status>
-
- Examples:
- | indexer | status |
- | 1 | 200 |
-
- #
- # /blocks/{round-number}
- #
- @indexer
- Scenario Outline: /blocks/<number>
- When I use <indexer> to lookup block <number>
- Then The block was confirmed at <timestamp>, contains <num> transactions, has the previous block hash "<hash>"
-
- Examples:
- | indexer | number | timestamp | num | hash |
- | 1 | 7 | 1585684086 | 8 | PpPusF+bU/uNLS5ODG/hG0pP0vehdSSlBcnnyZDr770= |
- | 1 | 20 | 1585684138 | 2 | 9jzxFIKLoTGkFl60aqGwyzO0AVyMBnbs/Wb5R9hPrsA= |
- | 1 | 100 | 1585684463 | 0 | rEWRbwgzDagT5wYTf9TuiuC+VR3XLLy4S73vInxkmrA= |
-
- #
- # /accounts/{account-id}
- #
- @indexer
- Scenario Outline: has asset - /account/<account>
- When I use <indexer> to lookup account "<account>" at round 0
- Then The account has <num> assets, the first is asset <index> has a frozen status of "<frozen>" and amount <units>.
-
- Examples:
- | indexer | account | num | index | frozen | units |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 1 | 9 | false | 999931337000 |
- | 1 | ZBBRQD73JH5KZ7XRED6GALJYJUXOMBBP3X2Z2XFA4LATV3MUJKKMKG7SHA | 1 | 9 | false | 68663000 |
-
- @indexer
- Scenario Outline: creator - /account/<account>
- When I use <indexer> to lookup account "<account>" at round 0
- Then The account created <num> assets, the first is asset <index> is named "<name>" with a total amount of <total> "<unit>"
-
- Examples:
- | indexer | account | num | index | name | total | unit |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 1 | 9 | bogocoin | 1000000000000 | bogo |
-
- @indexer
- Scenario Outline: lookup - /account/<account>
- When I use <indexer> to lookup account "<account>" at round 0
- Then The account has <μalgos> μalgos and <num> assets, 0 has 0
-
- Examples:
- | indexer | account | μalgos | num |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 999899126000 | 1 |
- | 1 | FROJFIFQRARWEHOL6GR3MBFCDJY76CPF3UY55HM3PCK42AD5HA5SKKXLLA | 4992999999993000 | 0 |
-
- #
- # /accounts/{account-id} - at round (rollback test)
- #
- @indexer
- Scenario Outline: rewind - /accounts/{account-id}?round=<round>
- When I use <indexer> to lookup account "<account>" at round <round>
- Then The account has <μalgos> μalgos and <num> assets, <asset-id> has <asset-amount>
-
- Examples:
- | indexer | account | round | μalgos | num | asset-id | asset-amount |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 351 | 999899126000 | 1 | 9 | 999931337000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 30 | 999899126000 | 1 | 9 | 999931337000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 29 | 999898989000 | 1 | 9 | 999900000000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 27 | 999998990000 | 1 | 9 | 999900000000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 25 | 999998991000 | 1 | 9 | 1000000000000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 21 | 999999992000 | 1 | 9 | 1000000000000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 19 | 999998995000 | 1 | 9 | 1000000000000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 17 | 999998995000 | 1 | 9 | 999999000000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 15 | 999998996000 | 1 | 9 | 1000000000000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 11 | 999999997000 | 1 | 9 | 1000000000000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 9 | 999899998000 | 1 | 9 | 1000000000000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 6 | 999999999000 | 1 | 9 | 1000000000000 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 4 | 1000000000000 | 1 | 9 | 0 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 2 | 0 | 1 | 9 | 0 |
- | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 1 | 0 | 1 | 9 | 0 |
-
- #
- # /assets/{asset-id}
- #
- @indexer
- Scenario Outline: lookup - /assets/<asset-id>
- When I use <indexer> to lookup asset <asset-id>
- Then The asset found has: "<name>", "<units>", "<creator>", <decimals>, "<default-frozen>", <total>, "<clawback>"
-
- Examples:
- | indexer | asset-id | name | units | creator | decimals | default-frozen | total | clawback |
- | 1 | 9 | bogocoin | bogo | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 0 | false | 1000000000000 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 |
-
- #
- # /assets/{asset-id}/balances
- #
- @indexer
- Scenario Outline: balances - /assets/<asset-id>/balances?gt=<currency-gt>&lt=<currency-lt>&limit=<limit>
- When I use <indexer> to lookup asset balances for <asset-id> with <currency-gt>, <currency-lt>, <limit> and token ""
- Then There are <num-accounts> with the asset, the first is "<account>" has "<is-frozen>" and <amount>
-
- Examples:
- | indexer | asset-id | currency-gt | currency-lt | limit | num-accounts | account | is-frozen | amount |
- | 1 | 9 | 0 | 0 | 0 | 2 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | false | 999931337000 |
- | 1 | 9 | 0 | 0 | 1 | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | false | 999931337000 |
- | 1 | 9 | 0 | 0 | 1 | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | false | 999931337000 |
- | 1 | 9 | 68663000 | 0 | 0 | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | false | 999931337000 |
- | 1 | 9 | 0 | 68663001 | 0 | 1 | ZBBRQD73JH5KZ7XRED6GALJYJUXOMBBP3X2Z2XFA4LATV3MUJKKMKG7SHA | false | 68663000 |
- # these pick up deleted asset balances in dataset 1 + 2.3.x.
- #| 3 | 9 | 0 | 0 | 0 | 2 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | false | 999931337000 |
- #| 3 | 9 | 0 | 0 | 1 | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | false | 999931337000 |
- #| 3 | 9 | 0 | 0 | 1 | 1 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | false | 999931337000 |
- #| 3 | 9 | 0 | 68663001 | 0 | 1 | ZBBRQD73JH5KZ7XRED6GALJYJUXOMBBP3X2Z2XFA4LATV3MUJKKMKG7SHA | false | 68663000 |
-
-
- @indexer
- Scenario Outline: /assets/{asset-id}/balances?next=token
- When I use <indexer> to lookup asset balances for <asset-id> with <currency-gt>, <currency-lt>, <limit> and token ""
- And I get the next page using <indexer> to lookup asset balances for <asset-id> with <currency-gt>, <currency-lt>, <limit>
- Then There are <num-accounts> with the asset, the first is "<account>" has "<is-frozen>" and <amount>
-
- Examples:
- | indexer | asset-id | currency-gt | currency-lt | limit | num-accounts | account | is-frozen | amount |
- | 1 | 9 | 0 | 0 | 1 | 1 | ZBBRQD73JH5KZ7XRED6GALJYJUXOMBBP3X2Z2XFA4LATV3MUJKKMKG7SHA | false | 68663000 |
- # this picks up a deleted asset balance
- #| 3 | 9 | 1 | 0 | 1 | 1 | ZBBRQD73JH5KZ7XRED6GALJYJUXOMBBP3X2Z2XFA4LATV3MUJKKMKG7SHA | false | 68663000 |
-
- #####################
-
- #
- # /accounts
- #
- @indexer
- Scenario Outline: general - /accounts?asset-id=<asset-id>&limit=<limit>&gt=<currency-gt>&lt=<currency-lt>
- When I use <indexer> to search for an account with <asset-id>, <limit>, <currency-gt>, <currency-lt> and token ""
- Then There are <num>, the first has <pending-rewards>, <rewards-base>, <rewards>, <without-rewards>, "<address>", <amount>, "<status>", "<type>"
-
- Examples:
- | indexer | asset-id | limit | currency-gt | currency-lt | num | pending-rewards | rewards-base | rewards | without-rewards | address | amount | status | type |
- # These changed when adding 'ORDER BY' in the backend
- #| 1 | 0 | 0 | 0 | 0 | 32 | 0 | 0 | 0 | 1000000 | XKWNJ6MDJWB5WEIARTAJI6GMCX3ETHBSM4OZ2NYACFEXHQJ2RHTC4SHH5A | 1000000 | Offline | |
- #| 1 | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 1000000 | XKWNJ6MDJWB5WEIARTAJI6GMCX3ETHBSM4OZ2NYACFEXHQJ2RHTC4SHH5A | 1000000 | Offline | |
- | 1 | 0 | 0 | 0 | 0 | 32 | 0 | 0 | 0 | 0 | A5QNF7MATDBZHXVYROXVZ6WTWMMDGX5RPEUCYAQEINOS3LQUW7NQGUJ4OI | 0 | Offline | lsig |
- | 1 | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | A5QNF7MATDBZHXVYROXVZ6WTWMMDGX5RPEUCYAQEINOS3LQUW7NQGUJ4OI | 0 | Offline | lsig |
- | 1 | 9 | 0 | 68663000 | 0 | 1 | 0 | 0 | 0 | 999899126000 | OSY2LBBSYJXOBAO6T5XGMGAJM77JVPQ7OLRR5J3HEPC3QWBTQZNWSEZA44 | 999899126000 | Offline | sig |
- | 1 | 9 | 0 | 0 | 68663001 | 1 | 0 | 0 | 0 | 998000 | ZBBRQD73JH5KZ7XRED6GALJYJUXOMBBP3X2Z2XFA4LATV3MUJKKMKG7SHA | 998000 | Offline | lsig |
- | 1 | 0 | 0 | 798999 | 799001 | 1 | 0 | 0 | 0 | 799000 | RRHDAJKO5HQBLHPCVK6K7U54LENDIP2JKM3RNRYX2G254VUXBRQD35CK4E | 799000 | Offline | msig |
- # this picks up a deleted account
- | 3 | 9 | 0 | 1 | 68663001 | 1 | 0 | 0 | 0 | 998000 | ZBBRQD73JH5KZ7XRED6GALJYJUXOMBBP3X2Z2XFA4LATV3MUJKKMKG7SHA | 998000 | Offline | lsig |
-
- #
- # /accounts - online
- #
- @indexer
- Scenario Outline: online - /accounts?asset-id=<asset-id>&limit=<limit>&gt=<currency-gt>&lt=<currency-lt>
- When I use <indexer> to search for an account with <asset-id>, <limit>, <currency-gt>, <currency-lt> and token ""
- Then The first account is online and has "<address>", <key-dilution>, <first-valid>, <last-valid>, "<vote-key>", "<selection-key>"
-
- Examples:
- | indexer | asset-id | limit | currency-gt | currency-lt | address | key-dilution | first-valid | last-valid | vote-key | selection-key |
- | 1 | 0 | 0 | 998999 | 999001 | NNFTUMXU5EMDOSFRGQ55TOGOJIS7P7POIDHJTQNQUBVVYJ6GCIPHOMAMQE | 10000 | 0 | 100 | h0wDwh1yhWiWu0S79wEiQaWXnNLCMjcce5MPeWPRQ/Q= | Ob0jBcHd0uh6nMjls6bOHlissWvPlINGiREJ+gaEOSg= |
- # These changed when adding 'ORDER BY' in the backend
- #| 1 | 0 | 0 | 4992999999992999 | 0 | FROJFIFQRARWEHOL6GR3MBFCDJY76CPF3UY55HM3PCK42AD5HA5SKKXLLA | 10000 | 0 | 3000000 | mQzj8cwerZh1QzdCR9WBteLQ6MQszzLP4MAjSi5wuD4= | NRnpzxRIGUnTICoPloP9eWU1W6OPksR0ReEDRTwzoYg= |
- | 1 | 0 | 0 | 4992999999992999 | 0 | BYP7VVRIBDOOFKEYICNYIM43S6DW7RIZC73XNMKF3KT5YUITDDMH3W5D5Q | 10000 | 0 | 3000000 | 9OO2S7ikfESeDZg8Z9mrzdN2Lh52UBSVH9uD7XqQHhs= | BkTjDJB2Su5Fi9uwJTODkxpEjrhCJSYtF10m0ee6THU= |
-
-
- #
- # /accounts - paging
- #
- @indexer
- Scenario Outline: paging - /accounts?asset-id=<asset-id>&limit=<limit>&gt=<currency-gt>&lt=<currency-lt>
- When I use <indexer> to search for an account with <asset-id>, <limit>, <currency-gt>, <currency-lt> and token ""
- Then I get the next page using <indexer> to search for an account with <asset-id>, <limit>, <currency-gt> and <currency-lt>
- Then There are <num>, the first has <pending-rewards>, <rewards-base>, <rewards>, <without-rewards>, "<address>", <amount>, "<status>", "<type>"
-
- Examples:
- | indexer | asset-id | limit | currency-gt | currency-lt | num | pending-rewards | rewards-base | rewards | without-rewards | address | amount | status | type |
- | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 149234 | A7NMWS3NT3IUDMLVO26ULGXGIIOUQ3ND2TXSER6EBGRZNOBOUIQXHIBGDE | 149234 | NotParticipating | |
- # These changed when adding 'ORDER BY' in the backend
- #| 1 | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 99862000 | M7E3Z6MJ7LZT725IK3ZQ6YE64TUTVU6VPPHFMT3DSD5KRDYRE44BE6GY5A | 99862000 | Offline | lsig |
- | 1 | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 999899996766 | LQU5S7HMDXLQUQD5BKIMPPZYK7LYXPC5AVGIWNVNTBVQHL3GCXFVXZFJ3A | 999899996766 | Offline | sig |
-
- #
- # /accounts - paging multiple times
- #
- @indexer
- Scenario Outline: paging 6 times - /accounts?asset-id=<asset-id>&limit=<limit>&gt=<currency-gt>&lt=<currency-lt>
- When I use <indexer> to search for an account with <asset-id>, <limit>, <currency-gt>, <currency-lt> and token ""
- Then I get the next page using <indexer> to search for an account with <asset-id>, <limit>, <currency-gt> and <currency-lt>
- Then I get the next page using <indexer> to search for an account with <asset-id>, <limit>, <currency-gt> and <currency-lt>
- Then I get the next page using <indexer> to search for an account with <asset-id>, <limit>, <currency-gt> and <currency-lt>
- Then I get the next page using <indexer> to search for an account with <asset-id>, <limit>, <currency-gt> and <currency-lt>
- Then I get the next page using <indexer> to search for an account with <asset-id>, <limit>, <currency-gt> and <currency-lt>
- Then I get the next page using <indexer> to search for an account with <asset-id>, <limit>, <currency-gt> and <currency-lt>
- Then There are <num>, the first has <pending-rewards>, <rewards-base>, <rewards>, <without-rewards>, "<address>", <amount>, "<status>", "<type>"
-
- Examples:
- | indexer | asset-id | limit | currency-gt | currency-lt | num | pending-rewards | rewards-base | rewards | without-rewards | address | amount | status | type |
- | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | GP44P6YCVSRK4IYIEZYDYO5POY3QO5VTATZIMRI6DFLMO2EPK7GBBNQRCM | 0 | Offline | lsig |
- | 1 | 0 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | 999000 | NNFTUMXU5EMDOSFRGQ55TOGOJIS7P7POIDHJTQNQUBVVYJ6GCIPHOMAMQE | 999000 | Online | sig |
-
- #
- # /transactions
- # When I use to search for transactions with , "", "", "", "",