Accumulator Support and Integration Testing (#62)
* adding stuff

* add accumulator thing

* tests: New oracle.so, add accumulator *.so, gen accumulator key

* exporter: add transaction statuses logging

* uh oh

* stuff

* update oracle

* Update oracle.so, re-enable pub tx failure, more updates in tests

* aggregate now updates

* CPI is invoked

* tests: fix wrong *.so for accumulator, bypassed checks in binaries

* integration_tests: WIP accumulator initialize() call

* update stuff

* agent: oracle auth, tests: regen client, setup auth, new oracle.so

* add anchorpy

* it works

* stuff

* exporter: re-enable preflight, tests: hardcoding my thing this time!

* Clean integration tests, new accumulator address, agent logging

* exporter.rs: restore rpc calls to their former infallible glory

* exporter: fix missing UPDATE_PRICE_NO_FAIL_ON_ERROR

* test_integration.py: bring back solana logs

* message_buffer -> message_buffer_client_codegen

* move prebuilt artifacts to `program-binaries`, add md5 verification

* README.md: replace other README with testing section, config docs

* exporter: Remove code comment, oracle PDA log statement

* integration-tests/pyproject.toml: Point at the root readme

---------

Co-authored-by: Jayant Krishnamurthy <[email protected]>
drozdziak1 and jayantk authored Apr 30, 2023
1 parent c0a3cd1 commit 77a858c
Showing 34 changed files with 2,873 additions and 271 deletions.
2 changes: 2 additions & 0 deletions .dockerignore
@@ -0,0 +1,2 @@
target
Dockerfile
6 changes: 6 additions & 0 deletions .pre-commit-config.yaml
@@ -12,3 +12,9 @@ repos:
language: "rust"
entry: cargo +nightly fmt
pass_filenames: false
- id: integration-test-checksums
name: Integration Test Artifact Checksums
language: "system"
files: integration-tests/program-binaries/.*\.(json|so|md5sum)$
entry: md5sum --check canary.md5sum
pass_filenames: false
108 changes: 95 additions & 13 deletions README.md
@@ -10,20 +10,27 @@ Note that only permissioned publishers can publish data to the network.

Prerequisites: Rust 1.68 or higher. A Unix system is recommended.

```shell
# Install OpenSSL (Debian-based systems)
$ apt install libssl-dev

# Install Rust
$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
$ rustup default 1.68 # Optional

# Build the project. This will produce a binary at target/release/agent
$ cargo build --release
```

## Configure
The agent takes a single `--config` CLI option, pointing at
`config/config.toml` by default. An example configuration is provided
there, containing a minimal set of mandatory options and documentation
comments for optional settings. **The config file must exist.**

The logging level can be configured at runtime through the `RUST_LOG`
environment variable using the standard `error|warn|info|debug|trace`
levels. For example, to log at `debug` instead of the default `info`
level, set `RUST_LOG=debug`.

### Key Store
If you already have a key store set up, you can skip this step. If you haven't, you will need to create one before publishing data. A key store contains the cryptographic keys needed to publish data. Once you have a key store set up, please ensure that the configuration file mentioned above contains the correct path to your key store.
@@ -44,11 +51,86 @@ PYTH_KEY_ENV=devnet # Can be devnet, testnet or mainnet
./scripts/init_key_store.sh $PYTH_KEY_ENV $PYTH_KEY_STORE
```

## Run
`cargo run --release -- --config <your_config.toml>` will build and run the agent in a single step.

## Publishing API
A running agent will expose a WebSocket serving the JRPC publishing API documented [here](https://docs.pyth.network/publish-data/pyth-client-websocket-api). See `config/config.toml` for related settings.
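
As a minimal sketch of a single publishing call (hedged: the websocket
port below assumes the default API server address from
`config/config.toml`, and the price account is a placeholder):

```python
import asyncio
import json

import websockets  # third-party client: pip install websockets


async def publish_once() -> None:
    # Placeholder endpoint and price account - substitute the address from
    # your agent config and a price account you have permissions for.
    async with websockets.connect("ws://127.0.0.1:8910") as ws:
        await ws.send(json.dumps({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "update_price",
            "params": {
                "account": "<price account pubkey>",
                "price": 4200000000,  # integer, scaled by the feed exponent
                "conf": 1000000,      # confidence interval, same scaling
                "status": "trading",
            },
        }))
        print(await ws.recv())  # the agent's JRPC response


asyncio.run(publish_once())
```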

# Development
## Unit Testing
A collection of Rust unit tests is provided, run with `cargo test`.

## Integration Testing
In `integration-tests`, we provide end-to-end tests for the Pyth
`agent` binary against a running `solana-test-validator` with the Pyth
oracle deployed to it. Optionally, the accumulator message buffer
program can be deployed and used to validate accumulator CPI
correctness end-to-end (see the configuration options below). Prebuilt
binaries are provided manually in `integration-tests/program-binaries`
- see below for more context.

### Running Integration Tests
The tests are implemented as a Python package containing a `pytest`
test suite, managed with [Poetry](https://python-poetry.org/) under
Python 3.10 or newer. Use the following commands to install and run
them:

```bash
cd integration-tests/
poetry install
poetry run pytest -s --log-cli-level=debug
```

### Optional Integration Test Configuration
* `USE_ACCUMULATOR`, off by default - when this environment variable
  is set, the test framework also deploys the accumulator program
  (`message_buffer.so`), initializes it, and configures the agent to
  make accumulator-enabled calls into the oracle
* `SOLANA_TEST_VALIDATOR`, the system-wide `solana-test-validator` by
  default - when this environment variable is set, the specified
  binary is used as the test validator. This is especially useful with
  `USE_ACCUMULATOR`, enabling life-like accumulator output from the
  `pythnet` validator.
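
Both variables can be combined. As a sketch, the snippet below is the
Python equivalent of `USE_ACCUMULATOR=1 SOLANA_TEST_VALIDATOR=<path>
poetry run pytest -s --log-cli-level=debug` (the validator path is a
placeholder):

```python
import os
import subprocess

env = dict(
    os.environ,
    USE_ACCUMULATOR="1",
    # Placeholder path to a pythnet-flavored solana-test-validator build.
    SOLANA_TEST_VALIDATOR=os.path.expanduser("~/pythnet/solana-test-validator"),
)
subprocess.run(
    ["poetry", "run", "pytest", "-s", "--log-cli-level=debug"],
    cwd="integration-tests",
    env=env,
    check=True,  # raise if the suite fails
)
```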

### Testing Setup Overview
For each test's setup in `integration-tests/tests/test_integration.py`, we:
* Start `solana-test-validator` with prebuilt Solana programs deployed
* Generate and fund test Solana keypairs
* Initialize the oracle program - allocate test price feeds, assign
publishing permissions. This is done using the dedicated [`program-admin`](https://github.com/pyth-network/program-admin) Python package.
* (Optionally) Initialize the accumulator message buffer program -
  initialize the test authority, preallocate message buffers, assign
  allowed program permissions to the oracle. This is done using a
  generated client package in
  `integration-tests/message_buffer_client_codegen`, created using
  [AnchorPy](https://github.com/kevinheavey/anchorpy).
* Build and run the agent

This is followed by a specific test scenario,
e.g. `test_update_price_simple` - a couple of publishing attempts with
assertions on the expected on-chain state.
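
As a schematic of the on-chain assertion step (not the verbatim test
code; the RPC endpoint is the `solana-test-validator` default, and the
account arguments are placeholders to fill in):

```python
import asyncio

from solana.publickey import PublicKey
from solana.rpc.async_api import AsyncClient


async def check_price_account(price_account: PublicKey,
                              oracle_program: PublicKey) -> None:
    conn = AsyncClient("http://127.0.0.1:8899")  # local test validator RPC
    info = (await conn.get_account_info(price_account)).value
    assert info is not None, "price account should exist after test setup"
    # The account must be owned by the deployed oracle program.
    assert info.owner == oracle_program.to_solders()
    await conn.close()


# Fill in real addresses generated during test setup before running:
# asyncio.run(check_price_account(PublicKey(...), PublicKey(...)))
```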

### Prebuilt Artifact Safety
In `integration-tests/program-binaries` we store the oracle and
accumulator `*.so` binaries as well as the accumulator program's
Anchor IDL JSON file. These artifacts are guarded against unexpected
updates by a pre-commit hook verifying `md5sum --check canary.md5sum`.
Changes to the `integration-tests/message_buffer_client_codegen`
package are much harder to miss in review and are tracked manually.

### Updating Artifacts
While you are free to experiment with the contents of
`program-binaries`, commits for new or changed artifacts must include
updated checksums in `canary.md5sum`. This can be done by running
`md5sum` in the repository root:
```shell
$ md5sum integration-tests/program-binaries/*.json > canary.md5sum
$ md5sum integration-tests/program-binaries/*.so >> canary.md5sum # NOTE: Mind the ">>" for appending
```

### Updating `message_buffer_client_codegen`
After obtaining an updated `message_buffer.so` and `message_buffer_idl.json`, run:
```shell
$ cd integration-tests/
$ poetry install # If you haven't run this already
$ poetry run anchorpy client-gen --pdas program-binaries/message_buffer_idl.json message_buffer_client_codegen
```
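
A regenerated package can be sanity-checked by exercising the
generated account client directly; a small sketch assuming a local
test validator and a known buffer address:

```python
import asyncio

from solana.publickey import PublicKey
from solana.rpc.async_api import AsyncClient

from message_buffer_client_codegen.accounts import MessageBuffer


async def show_buffer(buffer_address: PublicKey) -> None:
    conn = AsyncClient("http://127.0.0.1:8899")  # local test validator RPC
    buf = await MessageBuffer.fetch(conn, buffer_address)
    if buf is not None:
        # end_offsets records where each accumulated message ends.
        print(buf.version, buf.header_len, buf.end_offsets[:4])
    await conn.close()


# Substitute a real message buffer PDA before running:
# asyncio.run(show_buffer(PublicKey("...")))
```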
3 changes: 3 additions & 0 deletions canary.md5sum
@@ -0,0 +1,3 @@
b213ae5b2a4137238c47bdc5951fc95d integration-tests/program-binaries/message_buffer_idl.json
1d5b5e43be31e10f6e747b20ef77f4e9 integration-tests/program-binaries/message_buffer.so
7c2782f6f58e9c91a95ce7c310a47927 integration-tests/program-binaries/oracle.so
23 changes: 14 additions & 9 deletions config/config.toml
@@ -73,27 +73,32 @@ key_store.root_path = "/path/to/keystore"
# channel_capacities.logger_buffer = 10000


# Relative path to publisher identity keypair
# w.r.t. `key_store.root_path`. When the specified file is not found
# on startup, the relevant primary/secondary network will expect a
# remote-loaded keypair. See remote_keypair_loader options for
# details.
# key_store.publish_keypair_path = "publish_key_pair.json" # I exist, remote loading disabled
# key_store.publish_keypair_path = "none" # I do not exist, remote loading activated for the network

# Relative path to accumulator message buffer program ID. Setting this
# value enables accumulator support on publishing transactions.
# key_store.accumulator_program_key = <not set by default>

# The interval with which to poll account information.
# oracle.poll_interval_duration = "2m"

# Whether subscribing to account updates over websocket is enabled
# oracle.subscriber_enabled = true

# Ask the Solana RPC for up to this many product/price accounts in a
# single request. Tune this setting if you're experiencing timeouts on
# data fetching. In order to keep concurrent open socket count at bay,
# the batches are looked up sequentially, trading off overall time it
# takes to fetch all symbols.
# oracle.max_lookup_batch_size = 100

# How often to refresh the cached network state (current slot and blockhash).
# It is recommended to set this to slightly less than the network's block time,
# as the slot fetched will be used as the time of the price update.
# exporter.refresh_network_state_interval_duration = "200ms"
20 changes: 0 additions & 20 deletions integration-tests/README.md

This file was deleted.

6 changes: 3 additions & 3 deletions integration-tests/agent_conf.toml
@@ -1,7 +1,7 @@
[primary_network]
key_store.root_path = "keystore"
oracle.poll_interval_duration = "1s"
exporter.transaction_monitor.poll_interval_duration = "1s"

[metrics_server]
bind_address="0.0.0.0:8888"
Empty file.
@@ -0,0 +1,2 @@
from .message_buffer import MessageBuffer, MessageBufferJSON
from .whitelist import Whitelist, WhitelistJSON
@@ -0,0 +1,99 @@
import typing
from dataclasses import dataclass
from solana.publickey import PublicKey
from solana.rpc.async_api import AsyncClient
from solana.rpc.commitment import Commitment
import borsh_construct as borsh
from anchorpy.coder.accounts import ACCOUNT_DISCRIMINATOR_SIZE
from anchorpy.error import AccountInvalidDiscriminator
from anchorpy.utils.rpc import get_multiple_accounts
from ..program_id import PROGRAM_ID


class MessageBufferJSON(typing.TypedDict):
bump: int
version: int
header_len: int
end_offsets: list[int]


@dataclass
class MessageBuffer:
discriminator: typing.ClassVar = b"\x19\xf4\x03\x05\xe1\xa5\x1d\xfa"
layout: typing.ClassVar = borsh.CStruct(
"bump" / borsh.U8,
"version" / borsh.U8,
"header_len" / borsh.U16,
"end_offsets" / borsh.U16[255],
)
bump: int
version: int
header_len: int
end_offsets: list[int]

@classmethod
async def fetch(
cls,
conn: AsyncClient,
address: PublicKey,
commitment: typing.Optional[Commitment] = None,
program_id: PublicKey = PROGRAM_ID,
) -> typing.Optional["MessageBuffer"]:
resp = await conn.get_account_info(address, commitment=commitment)
info = resp.value
if info is None:
return None
if info.owner != program_id.to_solders():
raise ValueError("Account does not belong to this program")
bytes_data = info.data
return cls.decode(bytes_data)

@classmethod
async def fetch_multiple(
cls,
conn: AsyncClient,
addresses: list[PublicKey],
commitment: typing.Optional[Commitment] = None,
program_id: PublicKey = PROGRAM_ID,
) -> typing.List[typing.Optional["MessageBuffer"]]:
infos = await get_multiple_accounts(conn, addresses, commitment=commitment)
res: typing.List[typing.Optional["MessageBuffer"]] = []
for info in infos:
if info is None:
res.append(None)
continue
if info.account.owner != program_id:
raise ValueError("Account does not belong to this program")
res.append(cls.decode(info.account.data))
return res

@classmethod
def decode(cls, data: bytes) -> "MessageBuffer":
if data[:ACCOUNT_DISCRIMINATOR_SIZE] != cls.discriminator:
raise AccountInvalidDiscriminator(
"The discriminator for this account is invalid"
)
dec = MessageBuffer.layout.parse(data[ACCOUNT_DISCRIMINATOR_SIZE:])
return cls(
bump=dec.bump,
version=dec.version,
header_len=dec.header_len,
end_offsets=dec.end_offsets,
)

def to_json(self) -> MessageBufferJSON:
return {
"bump": self.bump,
"version": self.version,
"header_len": self.header_len,
"end_offsets": self.end_offsets,
}

@classmethod
def from_json(cls, obj: MessageBufferJSON) -> "MessageBuffer":
return cls(
bump=obj["bump"],
version=obj["version"],
header_len=obj["header_len"],
end_offsets=obj["end_offsets"],
)