docs link fixes and additional section #46

Merged · 2 commits · Jul 30, 2024
2 changes: 1 addition & 1 deletion pages/devs/consumers/existing-consumers.mdx
@@ -14,4 +14,4 @@ If you would like to deploy to an additional chain not listed above, you can lea

## Existing Allora Appchain Topics

-Existing Allora Appchain Topics can be found [here](./existing-topics).
+Existing Allora Appchain Topics can be found [here](/devs/get-started/existing-topics).
2 changes: 1 addition & 1 deletion pages/devs/consumers/walkthrough-use-topic-inference.mdx
@@ -33,7 +33,7 @@ Follow these instructions to bring the most recent inference data on-chain for a
## Step by Step Guide:

1. Create an Upshot API key by [creating an account](https://developer.upshot.xyz/signup).
-2. Call the Consumer Inference API using the `topicId` found in the [deployed topics list](./existing-topics) and the correct chainId. For example, if you use sepolia, you would provide `ethereum-11155111`.
+2. Call the Consumer Inference API using the `topicId` found in the [deployed topics list](/devs/getting-started/existing-topics) and the correct chainId. For example, if you use sepolia, you would provide `ethereum-11155111`.

```shell
curl -X 'GET' --url 'https://api.upshot.xyz/v2/allora/consumer/<chainId>?allora_topic_id=<topicId>' -H 'accept: application/json' -H 'x-api-key: <apiKey>'
10 changes: 9 additions & 1 deletion pages/devs/get-started/setup-wallet.mdx
@@ -4,7 +4,7 @@ import { Callout } from 'nextra/components'

## Create Wallet

-Follow the instructions [here](/devs/installation/cli) to install our CLI tool `allorad`, which is needed to create a wallet.
+Follow the instructions [here](/devs/get-started/installation/cli) to install our CLI tool `allorad`, which is needed to create a wallet.

Prior to executing transactions, a wallet must be created by running:

@@ -20,6 +20,14 @@ Make sure you save your mnemonic and account information safely.
Creating a wallet using `allorad` will generate a wallet address for all currently deployed versions of the Allora Chain (e.g. testnet, local, mainnet).
</Callout>

## Wallet Recovery

To recover a given wallet's keys, run the following command:

```bash
allorad keys add <wallet name> --recover
```

## Add Faucet Funds
Each network has a different URL to access and request funds from. Please see the faucet URLs for the different networks below:

2 changes: 1 addition & 1 deletion pages/devs/reference/allorad.mdx
@@ -213,7 +213,7 @@ allorad tx emissions [Command]
- `allow_negative`
- `tolerance`

-Detailed instructions on [how to create a topic](/devs/how-to-create-topic) are linked.
+Detailed instructions on [how to create a topic](/devs/topic-creators/how-to-create-topic) are linked.

### Add an Admin Address to the Whitelist
- **RPC Method:** `AddToWhitelistAdmin`
2 changes: 1 addition & 1 deletion pages/devs/reference/params/chain.mdx
@@ -124,7 +124,7 @@ Sets the minimum allowed time interval, in seconds, between consecutive calls fo

Default Value: 3600 seconds (1 hour)

-Imposing a minimum cadence ensures a reasonable pacing of weight-adjustment, preventing potential abuse or unnecessary strain on the network. That being said, it need not occur too frequently, because weights accrue over many inferences anyway, and these calls are relatively expensive involving off-chain communication.
+Imposing a minimum cadence ensures a reasonable pacing of loss-calculation, preventing potential abuse or unnecessary strain on the network. That being said, it need not occur too frequently, because weights accrue over many inferences anyway, and these calls are relatively expensive involving off-chain communication.

#### max_inference_request_validity

4 changes: 2 additions & 2 deletions pages/devs/validators/deploy-chain.mdx
@@ -12,8 +12,8 @@ The Allora Appchain is a Cosmos SDK appchain that serves as the settlement layer

The appchain also coordinates actions between protocol actors.

-- The appchain triggers requests to workers and reputers to collect inferences and run weight-adjustment logic, respectively, as per each topic's respective inference and weight-adjustment cadence.
-- The appchain collects a recent history of inferences in batches to later be scored by weight-adjustment.
+- The appchain triggers requests to workers and reputers to collect inferences and run loss-calculation logic, respectively, as per each topic's respective inference and loss-calculation cadence.
+- The appchain collects a recent history of inferences in batches to later be scored by loss-calculation.

## Why and How might one interact with the Allora Appchain?

@@ -6,7 +6,7 @@ import { Callout } from 'nextra/components'

## Prerequisites

-1. Make sure you have checked the documentation on how to [build and deploy a worker node from scratch](./build-and-deploy-worker-from-scratch).
+1. Make sure you have checked the documentation on how to [build and deploy a worker node from scratch](/devs/workers/deploy-worker/build-and-deploy-worker-from-scratch).
2. Clone the [basic-coin-prediction-node](https://github.com/allora-network/basic-coin-prediction-node) repository. It will serve as the base sample for your quick setup.

We will work from the repository you just cloned. We will explain each part of the source code and make changes to your custom setup as required. Additionally, we encourage you to check the repository's [README](https://github.com/allora-network/basic-coin-prediction-node/blob/main/README.md) for further guidance.
@@ -0,0 +1,3 @@
{
"modelpy": "Model.py"
}
@@ -0,0 +1,168 @@
# Model.py

## Introduction

The [`model.py` file](https://github.com/allora-network/basic-coin-prediction-node/blob/main/model.py) in `basic-coin-prediction-node` consists of several key components:

- **Imports and Configuration:** Sets up necessary libraries and configuration variables.
- **Paths Configuration:** Generates paths for storing data dynamically based on coin symbols.
- **Downloading Data:** Downloads historical price data for the specified symbols, intervals, years, and months.
- **Formatting Data:** Reads, formats, and saves the downloaded data as CSV files.
- **Training the Model:** Trains a linear regression model on the formatted price data and saves the trained model.

While the import and path configuration processes are straightforward, downloading and formatting the data, as well as training the model, require specific steps.

This documentation will guide you through creating models for different coins, making it easy to extend the script for general-purpose use.

## Downloading the Data

The [`download_data`](https://github.com/allora-network/basic-coin-prediction-node/blob/5d70e9feee7d1e7725c7602427b6856e7ffbe479/model.py#L16) function is designed to automate the process of downloading historical market data from Binance, a popular cryptocurrency exchange.
This function focuses on fetching data for a specified set of symbols (in this case, the trading pair `"ETHUSDT"`) across various time intervals and storing them in a defined directory.

### How to Use for Downloading Data of Any Coin

#### Update the Symbols List

Replace `["ETHUSDT"]` with the desired trading pair(s), e.g., `["BTCUSDT", "LTCUSDT"]`.

#### Adjust Time Intervals

Modify the intervals list if you need different time intervals. Binance supports various intervals like `["1m", "5m", "1h", "1d", "1w", "1M"]`.

#### Extend Date Ranges

Update the years and months lists to match the historical range you need.

#### Define the Download Path

Ensure `binance_data_path` is set to the directory where you want the data to be saved.

Here’s a quick **example** of how to adjust the script for downloading data for multiple trading pairs:

```python
def download_data():
    cm_or_um = "um"
    symbols = ["BTCUSDT", "LTCUSDT"]  # Updated symbols
    intervals = ["1d"]
    years = ["2020", "2021", "2022", "2023", "2024"]
    months = ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12"]
    download_path = binance_data_path
    download_binance_monthly_data(
        cm_or_um, symbols, intervals, years, months, download_path
    )
    print(f"Downloaded monthly data to {download_path}.")
    current_datetime = datetime.now()
    current_year = current_datetime.year
    current_month = current_datetime.month
    download_binance_daily_data(
        cm_or_um, symbols, intervals, current_year, current_month, download_path
    )
    print(f"Downloaded daily data to {download_path}.")
```

## Formatting the Data

The [`format_data`](https://github.com/allora-network/basic-coin-prediction-node/blob/5d70e9feee7d1e7725c7602427b6856e7ffbe479/model.py#L36) function processes raw data files downloaded from Binance, transforming them into a consistent format for analysis. Here are the key steps:

1. **File Handling:**
- Lists and sorts all files in the `binance_data_path` directory.
- Exits if no files are found.

2. **Initialize DataFrame:**
- An empty DataFrame `price_df` is created to store the combined data.

3. **Process Each File:**
- Filters for `.zip` files and reads the contained CSV file.
- Retains the first 11 columns and renames them to: `["start_time", "open", "high", "low", "close", "volume", "end_time", "volume_usd", "n_trades", "taker_volume", "taker_volume_usd"]`.
- Sets the DataFrame index to the `end_time` column, converted to a timestamp.

4. **Concatenate Data:**
- Combines data from each file into the `price_df` DataFrame.

5. **Sort and Save:**
- Sorts the final DataFrame by date and saves it to `training_price_data_path`.

### Column Descriptions

- **start_time**: The start of the trading period.
- **open**: Opening price.
- **high**: Highest price during the period.
- **low**: Lowest price during the period.
- **close**: Closing price.
- **volume**: Trading volume.
- **end_time**: End of the trading period.
- **volume_usd**: Trading volume in USD.
- **n_trades**: Number of trades.
- **taker_volume**: Taker buy volume.
- **taker_volume_usd**: Taker buy volume in USD.

This function consolidates and formats the historical price data, making it ready for analysis or machine learning tasks.
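The column-selection, renaming, and indexing steps above can be sketched in isolation on a synthetic kline row. This is a minimal illustration, not the repository's `format_data` itself: the column list comes from the steps above, the candle values are made up, and Binance kline timestamps are assumed to be epoch milliseconds.

```python
import pandas as pd

# Column names from the steps above; any raw columns past the 11th are dropped
KLINE_COLUMNS = [
    "start_time", "open", "high", "low", "close", "volume",
    "end_time", "volume_usd", "n_trades", "taker_volume", "taker_volume_usd",
]

def format_rows(raw: pd.DataFrame) -> pd.DataFrame:
    """Keep the first 11 columns, rename them, and index by end_time."""
    df = raw.iloc[:, :11].copy()
    df.columns = KLINE_COLUMNS
    # Binance kline timestamps are epoch milliseconds (assumption)
    df.index = pd.to_datetime(df["end_time"], unit="ms")
    return df.sort_index()

# One synthetic daily candle (12 raw columns, hypothetical values)
raw = pd.DataFrame([[1704067200000, 2300.0, 2350.0, 2290.0, 2340.0, 1000.0,
                     1704153599999, 2.3e6, 5000, 400.0, 9.2e5, 0]])
formatted = format_rows(raw)
print(list(formatted.columns))
```

Concatenating many such per-file frames and sorting by the timestamp index then yields the single DataFrame that is saved to `training_price_data_path`.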

## Training the Model

The [`train_model`](https://github.com/allora-network/basic-coin-prediction-node/blob/5d70e9feee7d1e7725c7602427b6856e7ffbe479/model.py#L75) function trains a **linear regression model** using historical price data and saves the trained model to a file. Here's a breakdown of the process:

1. **Load the Data:**
- Reads the price data from a CSV file specified by `training_price_data_path`.

2. **Prepare the DataFrame:**
- Converts the `date` column to a timestamp and stores it as a numerical value.
- Computes the average price using the `open`, `close`, `high`, and `low` columns.

3. **Reshape Data for Regression:**
- Extracts the `date` column as the feature (`x`) and the computed average price as the target (`y`).
- Reshapes these arrays to the format expected by `scikit-learn`.

4. **Split the Data:**
- Splits the data into a training set and a test set using an 80/20 split. However, the test set is not used further in this function.

5. **Train the Model:**
- Initializes and trains a `LinearRegression` model using the training data.

6. **Save the Model:**
- Creates the directory for the model file if it doesn't exist.
- Saves the trained model to a file specified by `model_file_path` using `pickle`.

7. **Print Confirmation:**
- Prints a message indicating that the trained model has been saved.
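The seven steps above can be sketched end-to-end with synthetic data. This is an illustrative, self-contained version, not the repository's function verbatim: the DataFrame stands in for the CSV at `training_price_data_path`, its values are made up, and `model_file_path` is set to a local stand-in path.

```python
import os
import pickle

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the CSV at training_price_data_path (hypothetical data)
price_data = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=50, freq="D").astype(str),
    "open": np.linspace(2300, 2400, 50),
    "close": np.linspace(2310, 2410, 50),
    "high": np.linspace(2320, 2420, 50),
    "low": np.linspace(2290, 2390, 50),
})

# Steps 1-2: convert 'date' to a numeric timestamp and average OHLC into a price
df = pd.DataFrame()
df["date"] = pd.to_datetime(price_data["date"]).map(pd.Timestamp.timestamp)
df["price"] = price_data[["open", "close", "high", "low"]].mean(axis=1)

# Step 3: reshape to the 2-D arrays scikit-learn expects
x = df["date"].values.reshape(-1, 1)
y = df["price"].values.reshape(-1, 1)

# Step 4: 80/20 split (the test set is unused, as in the original)
x_train, _, y_train, _ = train_test_split(x, y, test_size=0.2, random_state=0)

# Step 5: fit the linear regression
model = LinearRegression()
model.fit(x_train, y_train)

# Steps 6-7: pickle the model and confirm
model_file_path = "model.pkl"  # stand-in for the script's configured path
os.makedirs(os.path.dirname(model_file_path) or ".", exist_ok=True)
with open(model_file_path, "wb") as f:
    pickle.dump(model, f)
print(f"Trained model saved to {model_file_path}")
```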

### Modifying the Function for Different Models

To change the model used for training, replace the `LinearRegression` model with another machine learning algorithm. Here are a few examples:

#### Using Decision Tree Regression

```python
from sklearn.tree import DecisionTreeRegressor

def train_model():
    # Load the eth price data
    price_data = pd.read_csv(training_price_data_path)
    df = pd.DataFrame()

    # Convert 'date' to a numerical value (timestamp) we can use for regression
    df["date"] = pd.to_datetime(price_data["date"])
    df["date"] = df["date"].map(pd.Timestamp.timestamp)

    df["price"] = price_data[["open", "close", "high", "low"]].mean(axis=1)

    # Reshape the data to the shape expected by sklearn
    x = df["date"].values.reshape(-1, 1)
    y = df["price"].values.reshape(-1, 1)

    # Split the data into training set and test set
    x_train, _, y_train, _ = train_test_split(x, y, test_size=0.2, random_state=0)

    # Train the model using Decision Tree Regression
    model = DecisionTreeRegressor()
    model.fit(x_train, y_train)

    # Create the model's parent directory if it doesn't exist
    os.makedirs(os.path.dirname(model_file_path), exist_ok=True)

    # Save the trained model to a file
    with open(model_file_path, "wb") as f:
        pickle.dump(model, f)

    print(f"Trained model saved to {model_file_path}")
```
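Whichever regressor you choose, the pickled file is what gets reloaded later to serve predictions, so the save/load round trip is worth sanity-checking. A minimal sketch, using a hypothetical two-point model rather than real price data:

```python
import pickle

import numpy as np
from sklearn.linear_model import LinearRegression

# Fit a tiny stand-in model: two points (1, 2) and (2, 4) define the line y = 2x
model = LinearRegression().fit(np.array([[1.0], [2.0]]), np.array([2.0, 4.0]))

# Round-trip through pickle, as a worker would when loading model_file_path
blob = pickle.dumps(model)
restored = pickle.loads(blob)

# Features at prediction time must match training: a 2-D array of timestamps/x values
pred = restored.predict(np.array([[3.0]]))
print(round(float(pred[0]), 6))  # expect approximately 6.0
```

Note that the restored object requires the same `scikit-learn` (and `numpy`) versions to be importable in the serving environment, which is a general caveat of `pickle`-based model persistence.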
Binary file modified public/exchange-inferences.png (mode 100644 → 100755)
Binary file modified public/forecast-inference-workers.png (mode 100644 → 100755)
Binary file modified public/forecast-initial.png (mode 100644 → 100755)
Binary file modified public/layers-of-allora.png (mode 100644 → 100755)
Binary file modified public/reputers.png (mode 100644 → 100755)
Binary file modified public/synthesis-final.png (mode 100644 → 100755)
Binary file modified public/topic-coordinator.png (mode 100644 → 100755)