Kush/upgrade changes #60

**Merged** — 7 commits, merged on Sep 5, 2024

Changes from all commits
### pages/community/resources.mdx (2 additions, 1 deletion)
```diff
@@ -19,4 +19,5 @@
 - Workshop – "How to bring AI to your web3 apps with the Allora Edgenet" by Allora Labs
 - https://www.youtube.com/watch?v=aPCvTVFUynA
 
-## Community Guides
+## Community Repositories
```
### pages/devs/topic-creators/how-to-create-topic.mdx (14 additions, 10 deletions)
```diff
@@ -22,14 +22,14 @@ type MsgCreateNewTopic struct {
   Creator string `json:"creator,omitempty"`
   // Information about the topic
   Metadata string `json:"metadata,omitempty"`
-  // The method used for loss calculations (.wasm)
+  // The method used for loss calculations
   LossMethod string `json:"loss_method,omitempty"`
   // The frequency (in blocks) of inference calculations (Must be greater than 0)
   EpochLength int64 `json:"epoch_length,omitempty"`
   // The time it takes for the ground truth to become available (Cannot be negative)
   GroundTruthLag int64 `json:"ground_truth_lag,omitempty"`
-  // Default argument the worker will receive when py script is called
-  DefaultArg string `json:"default_arg,omitempty"`
+  // the time window within a given epoch that worker nodes can submit an inference
+  WorkerSubmissionWindow int64 `json:"worker_submission_window"`
   // Raising this parameter raises how much high-quality inferences are favored over lower-quality inferences (Must be between 2.5 and 4.5)
   PNorm github_com_allora_network_allora_chain_math.Dec `json:"p_norm"`
   // Raising this parameter lowers how much workers historical performances influence their current reward distribution (Must be between 0 and 1)
@@ -38,6 +38,10 @@ type MsgCreateNewTopic struct {
   AllowNegative bool `json:"allow_negative,omitempty"`
   // the numerical precision at which the network should distinguish differences in the logarithm of the loss
   Epsilon github_com_allora_network_allora_chain_math.Dec `json:"epsilon"`
+  MeritSortitionAlpha github_com_allora_network_allora_chain_math.Dec `json:"merit_sortition_alpha"`
+  ActiveInfererQuantile github_com_allora_network_allora_chain_math.Dec `json:"active_inferer_quantile"`
+  ActiveForecasterQuantile github_com_allora_network_allora_chain_math.Dec `json:"active_forecaster_quantile"`
+  ActiveReputerQuantile github_com_allora_network_allora_chain_math.Dec `json:"active_reputer_quantile"`
 }
```

> **Review comment (on the new fields):** There are comments on the other fields; it seems proper to add them here too. Suggested comments:
> - `MeritSortitionAlpha`: alpha parameter to weight previous score EMA importance in filtering the active set of inferers, forecasters, and reputers
> - `ActiveInfererQuantile` / `ActiveForecasterQuantile` / `ActiveReputerQuantile`: marks the quantile of lower-scored participants on the active set that may be replaced by new ones each epoch
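The documented bounds on these fields can be illustrated with a small pre-flight check. This is a sketch only — the helper is hypothetical and the chain enforces its own validation; the rule that `GroundTruthLag` should be at least `EpochLength` comes from the review guidance in this PR.

```python
def validate_topic_params(epoch_length: int, ground_truth_lag: int,
                          p_norm: float, alpha_regret: float) -> None:
    """Check the documented bounds on MsgCreateNewTopic fields
    (hypothetical helper; the chain performs its own validation)."""
    assert epoch_length > 0, "EpochLength must be greater than 0"
    # GroundTruthLag cannot be negative; per the review guidance in this
    # PR it should also be at least EpochLength.
    assert ground_truth_lag >= 0, "GroundTruthLag cannot be negative"
    assert 2.5 <= p_norm <= 4.5, "PNorm must be between 2.5 and 4.5"
    assert 0 <= alpha_regret <= 1, "AlphaRegret must be between 0 and 1"

# Values consistent with the CLI example (GroundTruthLag raised to
# EpochLength, per the review guidance) pass the check.
validate_topic_params(epoch_length=3600, ground_truth_lag=3600,
                      p_norm=3.0, alpha_regret=1.0)
```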

```diff
@@ -47,17 +51,18 @@ Using the [`allorad` CLI](/devs/get-started/cli#installing-allorad) to create a
 allorad tx emissions create-topic \
 allo13tr5nx74zjdh7ya8kgyuu0hweppnnx8d4ux7pj \ # Creator address
 "ETH prediction in 24h" \ # Metadata
-"bafybeid7mmrv5qr4w5un6c64a6kt2y4vce2vylsmfvnjt7z2wodngknway" \ # LossLogic
-"loss-calculation-eth.wasm" \ # LossMethod
-"bafybeigpiwl3o73zvvl6dxdqu7zqcub5mhg65jiky2xqb4rdhfmikswzqm" \ # InferenceLogic
-"allora-inference-function.wasm" \ # InferenceMethod
+"mse" \ # LossMethod
 3600 \ # EpochLength
 0 \ # GroundTruthLag
-"ETH" # DefaultArg
+3 \ # WorkerSubmissionWindow
 3 \ # PNorm
 1 \ # AlphaRegret
 true \ # AllowNegative
 0.001 \ # Epsilon
+0.1 \ # MeritSortitionAlpha
+0.25 \ # ActiveInfererQuantile
+0.25 \ # ActiveForecasterQuantile
+0.25 \ # ActiveReputerQuantile
 --node <RPC_URL> \
 --chain-id <CHAIN_ID>
```

> **Review comment (on `GroundTruthLag`):** Ground truth lag should be at least `EpochLength`; using `EpochLength` as the example value looks good. Most if not all of our topics are defined like that.

> **Review comment (on `WorkerSubmissionWindow`):** The worker submission window is suggested to be at least 12 — that is 12 blocks of time we leave workers to make submissions. As this is an example, it can safely be set to 36 (1% of the `EpochLength`).
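The reviewer's sizing rule for the submission window — never below 12 blocks, and roughly 1% of the epoch length when that is larger — can be sketched as follows (a hypothetical helper, not a CLI feature):

```python
def suggested_submission_window(epoch_length: int, min_blocks: int = 12) -> int:
    """Sketch of the reviewer's sizing rule: at least 12 blocks,
    and about 1% of the epoch length when that is larger."""
    return max(min_blocks, epoch_length // 100)

# For the 3600-block epoch in the example above, this yields 36 blocks.
```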
```diff
@@ -69,7 +74,6 @@
 Be sure to swap out [`RPC_URL`](/devs/get-started/setup-wallet#rpc-url-and-chain-id) and `CHAIN_ID` with the appropriate values.
 
 Here is a more detailed explanation of some of these fields.
 
 - `Metadata` is a descriptive field to let users know what this topic is about and/or any specific indication about how it is expected to work.
-- `DefaultArg` value will be passed as an argument to the python script, i.e. when the worker receives the request, it will attempt to run `python3 <location-of-main.py> <TopicId> <DefaultArg>`. This will be used in the scoring stage to request inferences from the chain.
 - `allowNegative` determines whether the loss function output can be negative.
   - If **true**, the reputer submits raw losses.
   - If **false**, the reputer submits logs of losses.
```
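The `allowNegative` rule above can be made concrete with a small sketch. The helper is hypothetical and the log base is an assumption — check the reputer implementation for the exact transform:

```python
import math

def loss_for_submission(raw_loss: float, allow_negative: bool) -> float:
    """If negative losses are allowed, the reputer submits the raw loss;
    otherwise it submits the log of the loss (base-10 assumed here)."""
    if allow_negative:
        return raw_loss
    if raw_loss <= 0:
        raise ValueError("loss must be positive when allow_negative is false")
    return math.log10(raw_loss)
```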
### pages/devs/validators/software-upgrades.mdx (1 addition, 7 deletions)
```diff
@@ -4,7 +4,7 @@
 
 The Allora network relies on multiple different pieces of software to do different tasks.
 For example the `allora-chain` repository handles the blockchain software that runs the chain, while
-the `allora-inference-base` repository performs off-chain tasks. Each piece of software may need to
+the `offchain-node` repository performs off-chain tasks. Each piece of software may need to
 be upgraded separately.
 
 ## Allora-Chain Upgrades
@@ -55,12 +55,6 @@ For those running the chain software, you will have to perform an upgrade
 3. When the developers put up the upgrade proposal to governance, be helpful and vote to make it pass. You can do this via the CLI with `allorad tx gov vote $proposal_id yes --from $validator` or an example of doing this programmatically can be found in the integration test [voteOnProposal](https://github.com/allora-network/allora-chain/blob/main/test/integration/upgrade_test.go) function.
 4. At the block height of the upgrade, the old software will panic - cosmovisor will catch the panic and restart the process using the new binary for the upgrade instead. Monitor your logs appropriately to see the restart.
 
-## Allora-Inference-Base Upgrades
-
-New software releases are published on the Allora Inference Base
-[Github](https://github.com/allora-network/allora-inference-base/releases) page.
-Download and install the new version of the software to upgrade.
-
 ## Further References
 
 This is probably the most helpful document to understand the full workflow of a cosmos-sdk chain
```
```diff
@@ -26,26 +26,26 @@ This diagram illustrates the architecture of the integration between the Allora
 - **VPC Internet Gateway**: Allows communication between the instances in the VPC and the internet.
 
 3. **EC2 Instance (Allora Worker Node)**
-   - **Inference Base**: This component handles network communication, receiving requests from the Allora Network's Public Head Node and sending responses back.
+   - **Offchain Node**: This component handles network communication, receiving requests from the Allora Network and sending responses back.
    - **Node Function**: Processes requests by interfacing with the private model server. It acts as an intermediary, ensuring the requests are correctly formatted and the responses are appropriately handled.
    - **Model Server**: Hosts the proprietary model. It executes the main inference script (`Main.py`) to generate inferences based on the received requests.
 
 #### Process Flow
 
 1. **Request Flow**:
    - The Allora Network's Public Head Node sends a request for inferences to the EC2 instance within the AWS environment.
-   - The request passes through the VPC Internet Gateway and reaches the Inference Base in the public subnet.
-   - The Inference Base forwards the request to the Node Function.
+   - The request passes through the VPC Internet Gateway and reaches the Offchain node in the public subnet.
+   - The Offchain node forwards the request to the Node Function.
    - The Node Function calls `Main.py` on the Model Server to generate the required inferences.
 
 2. **Response Flow**:
    - The Model Server processes the request and returns the inferences to the Node Function.
-   - The Node Function sends the inferences back to the Inference Base.
-   - The Inference Base communicates the inferences back to the Allora Network's Public Head Node via the VPC Internet Gateway.
+   - The Node Function sends the inferences back to the Offchain node.
+   - The Offchain node communicates the inferences back to the Allora Network via the VPC Internet Gateway.
```
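The flow described in this hunk can be sketched as the Node Function shelling out to the model server's inference script. This is a minimal sketch only — the script path and argument convention are assumptions, not the actual Offchain Node implementation:

```python
import subprocess
import sys

def get_inference(topic_id: str, script: str = "Main.py") -> str:
    """Invoke the model server's inference script for a topic and return
    its stdout (sketch; a real node uses the Offchain Node's own
    request handling rather than a bare subprocess call)."""
    result = subprocess.run(
        [sys.executable, script, topic_id],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```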

## AWS Activate

Before proceeding, please note that eligibility for AWS Activate credits and terms are governed by AWS. This documentation may become outdated, so ensure you refer to the [AWS Activate program page](https://aws.amazon.com/startups/credits#hero) for the latest eligibility requirements and instructions.

## AWS Activate Stepwise Process

```diff
@@ -16,10 +16,64 @@
 cd basic-coin-prediction-node
 
 ## Configure Your Environment
 
+### `.env` File Configuration
+
+When setting up your environment, please follow the guidelines below for configuring your `.env` file:
+
+- **`TOKEN`**: Specifies the cryptocurrency token to use. Must be one of the following:
+  - `'ETH'` (Ethereum)
+  - `'SOL'` (Solana)
+  - `'BTC'` (Bitcoin)
+  - `'BNB'` (Binance Coin)
+  - `'ARB'` (Arbitrum)
+
+> **Note**: If you are using Binance as the data provider, any token can be used. However, if you are using Coingecko, you should add its `coin_id` in the [token map](https://github.com/allora-network/basic-coin-prediction-node/blob/70cf49d0a2317769d883ae882c146efbb915f5c0/updater.py#L107). Find more information [here](https://docs.coingecko.com/reference/simple-price) and the list [here](https://docs.google.com/spreadsheets/d/1wTTuxXt8n9q7C4NDXqQpI3wpKu1_5bGVmP9Xz0XGSyU/edit?gid=0#gid=0).
+
+- **`TRAINING_DAYS`**: Represents the number of days of historical data to use for training. Must be an integer greater than or equal to 1.
+
+- **`TIMEFRAME`**: Defines the timeframe of the data used, in a format like `10min` (minutes), `1h` (hours), `1d` (days), etc.
+
+  - For Coingecko, the data granularity (candle's body) is automatic. To avoid downsampling when using Coingecko:
+    - Use a **`TIMEFRAME`** of `>= 30min` if **`TRAINING_DAYS`** is `<= 2`.
+    - Use a **`TIMEFRAME`** of `>= 4h` if **`TRAINING_DAYS`** is `<= 30`.
+    - Use a **`TIMEFRAME`** of `>= 4d` if **`TRAINING_DAYS`** is `>= 31`.
+
+- **`MODEL`**: Specifies the machine learning model to use. Must be one of the following:
+  - `'LinearRegression'`
+  - `'SVR'` (Support Vector Regression)
+  - `'KernelRidge'`
+  - `'BayesianRidge'`
+
+> You can easily add support for other models by adding them to the configuration [here](https://github.com/allora-network/basic-coin-prediction-node/blob/main/model.py#L133).
+
+- **`REGION`**: Defines the region for the Binance API. Must be `'EU'` or `'US'`.
+
+- **`DATA_PROVIDER`**: Specifies the data provider to use. Must be either `'binance'` or `'coingecko'`.
+
+  - Feel free to add support for other data providers to personalize your model!
+
+- **`CG_API_KEY`**: Your Coingecko API key, required if you've set **`DATA_PROVIDER`** to `'coingecko'`.
+
+#### Sample Configuration (.env file)
+
+Below is an example configuration for your `.env` file:
+
+```bash
+TOKEN=ETH
+TRAINING_DAYS=30
+TIMEFRAME=4h
+MODEL=SVR
+REGION=US
+DATA_PROVIDER=binance
+CG_API_KEY=
+```
```
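The `.env` guidelines added above can be summarized in a small validation sketch (hypothetical helpers — `basic-coin-prediction-node` does its own parsing):

```python
ALLOWED_TOKENS = {"ETH", "SOL", "BTC", "BNB", "ARB"}
ALLOWED_MODELS = {"LinearRegression", "SVR", "KernelRidge", "BayesianRidge"}

def validate_env(token: str, training_days: int, model: str,
                 region: str, data_provider: str) -> list:
    """Collect violations of the .env guidelines described above."""
    errors = []
    provider = data_provider.lower()
    # Binance accepts any token; Coingecko needs a mapped coin_id, so the
    # preset token list is only enforced for Coingecko here.
    if provider == "coingecko" and token not in ALLOWED_TOKENS:
        errors.append("TOKEN needs a coin_id mapping for Coingecko")
    if training_days < 1:
        errors.append("TRAINING_DAYS must be an integer >= 1")
    if model not in ALLOWED_MODELS:
        errors.append("MODEL must be one of the supported models")
    if region not in {"EU", "US"}:
        errors.append("REGION must be 'EU' or 'US'")
    if provider not in {"binance", "coingecko"}:
        errors.append("DATA_PROVIDER must be 'binance' or 'coingecko'")
    return errors

def coingecko_min_timeframe(training_days: int) -> str:
    """Minimum TIMEFRAME that avoids downsampling with Coingecko,
    per the thresholds listed above."""
    if training_days <= 2:
        return "30min"
    if training_days <= 30:
        return "4h"
    return "4d"
```

For example, the sample `.env` above (`ETH`, 30 training days, `SVR`, `US`, `binance`) passes with no errors.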

```diff
+### `config.json` Configuration
+
 1. Copy `config.example.json` and name the copy `config.json`.
 2. Open `config.json` and **update** the necessary fields inside the `wallet` sub-object and `worker` config with your specific values:
 
-### `wallet` Sub-object
+#### `wallet` Sub-object
 
 1. `nodeRpc`: The [RPC URL](/devs/get-started/setup-wallet#rpc-url-and-chain-id) for the corresponding network the node will be deployed on
 2. `addressKeyName`: The name you gave your wallet key when [setting up your wallet](/devs/get-started/setup-wallet)
```
```diff
@@ -31,7 +85,7 @@
 
 {/* <Callout>
 If you have existing keys that you wish to use, you will need to provide these variables.
 </Callout> */}
 
-### `worker` Config
+#### `worker` Config
 
 1. `topicId`: The specific topic ID you created the worker for.
 2. `InferenceEndpoint`: The endpoint exposed by your worker node to provide inferences to the network.
```
```diff
@@ -69,7 +123,7 @@ To deploy a worker that provides inferences for multiple topics, you can duplica
 
 ## Building a Custom Model
 
-`basic-coin-prediction-node` comes preconfigured with a model that uses linear regression to predict the price of Ethereum, and contribute an inference to topic 1 on Allora. Learn more about how this model is built from the ground up and how you can customize your model to give a unique inference to the network in the [next section](/devs/workers/walkthroughs/walkthrough-price-prediction-worker/modelpy).
+`basic-coin-prediction-node` comes preconfigured with a model that uses regression to predict the price of Ethereum and contributes an inference to topic 1 on Allora. Learn more about how this model is built from the ground up and how you can customize your model to give a unique inference to the network in the [next section](/devs/workers/walkthroughs/walkthrough-price-prediction-worker/modelpy).
 
 ## Deployment
```
### pages/home/overview.mdx (1 addition, 1 deletion)
```diff
@@ -14,4 +14,4 @@ The AI/ML agents within the Allora Network use their data and algorithms to broa
 
 ## More Info
 
-Allora aims to incentivize data scientists to provide high-quality inferences. This is achieved through a technical architecture detailed in the [Allora whitepaper](https://whitepaper.assets.allora.network/whitepaper.pdf) and implemented in the Allora Network GitHub repositories, particularly [`allora-chain`](https://github.com/allora-network/allora-chain) and [`allora-inference-base`](https://github.com/allora-network/allora-inference-base).
+Allora aims to incentivize data scientists to provide high-quality inferences. This is achieved through a technical architecture detailed in the [Allora whitepaper](https://whitepaper.assets.allora.network/whitepaper.pdf) and implemented in the Allora Network GitHub repositories, particularly [`allora-chain`](https://github.com/allora-network/allora-chain) and [`offchain-node`](https://github.com/allora-network/allora-offchain-node).
```