diff --git a/pages/devs/consumers/existing-consumers.mdx b/pages/devs/consumers/existing-consumers.mdx
index 42cae6e..ba6189c 100644
--- a/pages/devs/consumers/existing-consumers.mdx
+++ b/pages/devs/consumers/existing-consumers.mdx
@@ -14,4 +14,4 @@ If you would like to deploy to an additional chain not listed above, you can lea
 
 ## Existing Allora Appchain Topics
 
-Existing Allora Appchain Topics can be found [here](./existing-topics).
+Existing Allora Appchain Topics can be found [here](/devs/get-started/existing-topics).
diff --git a/pages/devs/consumers/walkthrough-use-topic-inference.mdx b/pages/devs/consumers/walkthrough-use-topic-inference.mdx
index 4a0c213..a1d5493 100644
--- a/pages/devs/consumers/walkthrough-use-topic-inference.mdx
+++ b/pages/devs/consumers/walkthrough-use-topic-inference.mdx
@@ -33,7 +33,7 @@ Follow these instructions to bring the most recent inference data on-chain for a
 ## Step by Step Guide:
 
 1. Create an Upshot API key by [creating an account](https://developer.upshot.xyz/signup).
-2. Call the Consumer Inference API using the `topicId` found in the [deployed topics list](./existing-topics) and the correct chainId. For example, if you use sepolia, you would provide `ethereum-11155111`.
+2. Call the Consumer Inference API using the `topicId` found in the [deployed topics list](/devs/get-started/existing-topics) and the correct chainId. For example, if you use Sepolia, you would provide `ethereum-11155111`.
 
 ```shell
 curl -X 'GET' --url 'https://api.upshot.xyz/v2/allora/consumer/<chainId>?allora_topic_id=<topicId>' -H 'accept: application/json' -H 'x-api-key: <apiKey>'
diff --git a/pages/devs/get-started/setup-wallet.mdx b/pages/devs/get-started/setup-wallet.mdx
index febb80a..a9f5030 100644
--- a/pages/devs/get-started/setup-wallet.mdx
+++ b/pages/devs/get-started/setup-wallet.mdx
@@ -4,7 +4,7 @@ import { Callout } from 'nextra/components'
 
 ## Create Wallet
 
-Follow the instructions [here](/devs/installation/cli) to install our CLI tool `allorad`, which is needed to create a wallet.
+Follow the instructions [here](/devs/get-started/installation/cli) to install our CLI tool `allorad`, which is needed to create a wallet.
 
 Prior to executing transactions, a wallet must be created by running:
 
@@ -20,6 +20,14 @@ Make sure you save your mnemonic and account information safely.
 
 Creating a wallet using `allorad` will generate a wallet address for all currently deployed versions of the Allora Chain (e.g. testnet, local, mainnet).
 
+## Wallet Recovery
+
+To recover a given wallet's keys, run the following command:
+
+```bash
+allorad keys add <key-name> --recover
+```
+
 ## Add Faucet Funds
 
 Each network has a different URL to access and request funds from. Please see the faucet URLs for the different networks below:
diff --git a/pages/devs/reference/allorad.mdx b/pages/devs/reference/allorad.mdx
index c2d8d62..19fbf0b 100644
--- a/pages/devs/reference/allorad.mdx
+++ b/pages/devs/reference/allorad.mdx
@@ -213,7 +213,7 @@ allorad tx emissions [Command]
 - `allow_negative`
 - `tolerance`
 
-Detailed instructions on [how to create a topic](/devs/how-to-create-topic) are linked.
+Detailed instructions on [how to create a topic](/devs/topic-creators/how-to-create-topic) are linked.
 
 ### Add an Admin Address to the Whitelist
 
 - **RPC Method:** `AddToWhitelistAdmin`
diff --git a/pages/devs/reference/params/chain.mdx b/pages/devs/reference/params/chain.mdx
index 1b5c31d..307bee4 100644
--- a/pages/devs/reference/params/chain.mdx
+++ b/pages/devs/reference/params/chain.mdx
@@ -124,7 +124,7 @@ Sets the minimum allowed time interval, in seconds, between consecutive calls fo
 
 Default Value: 3600 seconds (1 hour)
 
-Imposing a minimum cadence ensures a reasonable pacing of weight-adjustment, preventing potential abuse or unnecessary strain on the network. That being said, it need not occur too frequently, because weights accrue over many inferences anyway, and these calls are relatively expensive involving off-chain communication.
+Imposing a minimum cadence ensures a reasonable pacing of loss-calculation, preventing potential abuse or unnecessary strain on the network. That being said, it need not occur too frequently, because weights accrue over many inferences anyway, and these calls are relatively expensive, involving off-chain communication.
 
 #### max_inference_request_validity
 
diff --git a/pages/devs/validators/deploy-chain.mdx b/pages/devs/validators/deploy-chain.mdx
index 45b5982..120d851 100644
--- a/pages/devs/validators/deploy-chain.mdx
+++ b/pages/devs/validators/deploy-chain.mdx
@@ -12,8 +12,8 @@ The Allora Appchain is a Cosmos SDK appchain that serves as the settlement layer
 
 The appchain also coordinates actions between protocol actors.
 
-- The appchain triggers requests to workers and reputers to collect inferences and run weight-adjustment logic, respectively, as per each topic's respective inference and weight-adjustment cadence.
-- The appchain collects a recent history of inferences in batches to later be scored by weight-adjustment.
+- The appchain triggers requests to workers and reputers to collect inferences and run loss-calculation logic, respectively, as per each topic's respective inference and loss-calculation cadence.
+- The appchain collects a recent history of inferences in batches to later be scored by loss-calculation.
 
 ## Why and How might one interact with the Allora Appchain?
diff --git a/pages/devs/workers/walkthroughs/walkthrough-price-prediction-worker.mdx b/pages/devs/workers/walkthroughs/walkthrough-price-prediction-worker.mdx
index b5a3e9f..85973af 100644
--- a/pages/devs/workers/walkthroughs/walkthrough-price-prediction-worker.mdx
+++ b/pages/devs/workers/walkthroughs/walkthrough-price-prediction-worker.mdx
@@ -6,7 +6,7 @@ import { Callout } from 'nextra/components'
 
 ## Prerequisites
 
-1. Make sure you have checked the documentation on how to [build and deploy a worker node from scratch](./build-and-deploy-worker-from-scratch).
+1. Make sure you have checked the documentation on how to [build and deploy a worker node from scratch](/devs/workers/deploy-worker/build-and-deploy-worker-from-scratch).
 2. Clone the [basic-coin-prediction-node](https://github.com/allora-network/basic-coin-prediction-node) repository. It will serve as the base sample for your quick setup.
 
 We will work from the repository you just cloned. We will explain each part of the source code and make changes to your custom setup as required. Additionally, we encourage you to check the repository's [README](https://github.com/allora-network/basic-coin-prediction-node/blob/main/README.md) for further guidance.
diff --git a/pages/devs/workers/walkthroughs/walkthrough-price-prediction-worker/_meta.json b/pages/devs/workers/walkthroughs/walkthrough-price-prediction-worker/_meta.json
new file mode 100644
index 0000000..4bc7434
--- /dev/null
+++ b/pages/devs/workers/walkthroughs/walkthrough-price-prediction-worker/_meta.json
@@ -0,0 +1,3 @@
+{
+  "modelpy": "Model.py"
+}
\ No newline at end of file
diff --git a/pages/devs/workers/walkthroughs/walkthrough-price-prediction-worker/modelpy.mdx b/pages/devs/workers/walkthroughs/walkthrough-price-prediction-worker/modelpy.mdx
new file mode 100644
index 0000000..668c7f8
--- /dev/null
+++ b/pages/devs/workers/walkthroughs/walkthrough-price-prediction-worker/modelpy.mdx
@@ -0,0 +1,168 @@
+# Model.py
+
+## Introduction
+
+The [`model.py` file](https://github.com/allora-network/basic-coin-prediction-node/blob/main/model.py) in `basic-coin-prediction-node` consists of several key components:
+
+- **Imports and Configuration:** Sets up necessary libraries and configuration variables.
+- **Paths Configuration:** Generates paths for storing data dynamically based on coin symbols.
+- **Downloading Data:** Downloads historical price data for the specified symbols, intervals, years, and months.
+- **Formatting Data:** Reads, formats, and saves the downloaded data as CSV files.
+- **Training the Model:** Trains a linear regression model on the formatted price data and saves the trained model.
+
+While the import and path configuration processes are straightforward, downloading and formatting the data, as well as training the model, require specific steps.
+
+This documentation will guide you through creating models for different coins, making it easy to extend the script for general-purpose use.
+
+## Downloading the Data
+
+The [`download_data`](https://github.com/allora-network/basic-coin-prediction-node/blob/5d70e9feee7d1e7725c7602427b6856e7ffbe479/model.py#L16) function is designed to automate the process of downloading historical market data from Binance, a popular cryptocurrency exchange.
+This function focuses on fetching data for a specified set of symbols (in this case, the trading pair `"ETHUSDT"`) across various time intervals and storing them in a defined directory.
+
+### How to Use for Downloading Data of Any Coin
+
+#### Update the Symbols List
+
+Replace `["ETHUSDT"]` with the desired trading pair(s), e.g., `["BTCUSDT", "LTCUSDT"]`.
+
+#### Adjust Time Intervals
+
+Modify the intervals list if you need different time intervals. Binance supports various intervals like `["1m", "5m", "1h", "1d", "1w", "1M"]`.
+
+#### Extend Date Ranges
+
+Update the years and months lists to match the historical range you need.
+
+#### Define the Download Path
+
+Ensure `binance_data_path` is set to the directory where you want the data to be saved.
+
+Here’s a quick **example** of how to adjust the script for downloading data for multiple trading pairs:
+
+```python
+def download_data():
+    cm_or_um = "um"
+    symbols = ["BTCUSDT", "LTCUSDT"]  # Updated symbols
+    intervals = ["1d"]
+    years = ["2020", "2021", "2022", "2023", "2024"]
+    months = ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12"]
+    download_path = binance_data_path
+    download_binance_monthly_data(
+        cm_or_um, symbols, intervals, years, months, download_path
+    )
+    print(f"Downloaded monthly data to {download_path}.")
+    current_datetime = datetime.now()
+    current_year = current_datetime.year
+    current_month = current_datetime.month
+    download_binance_daily_data(
+        cm_or_um, symbols, intervals, current_year, current_month, download_path
+    )
+    print(f"Downloaded daily data to {download_path}.")
+```
+
+### Formatting the Data
+
+The [`format_data`](https://github.com/allora-network/basic-coin-prediction-node/blob/5d70e9feee7d1e7725c7602427b6856e7ffbe479/model.py#L36) function processes raw data files downloaded from Binance, transforming them into a consistent format for analysis. Here are the key steps:
+
+1. **File Handling:**
+   - Lists and sorts all files in the `binance_data_path` directory.
+   - Exits if no files are found.
+
+2. **Initialize DataFrame:**
+   - An empty DataFrame `price_df` is created to store the combined data.
+
+3. **Process Each File:**
+   - Filters for `.zip` files and reads the contained CSV file.
+   - Retains the first 11 columns and renames them to: `["start_time", "open", "high", "low", "close", "volume", "end_time", "volume_usd", "n_trades", "taker_volume", "taker_volume_usd"]`.
+   - Sets the DataFrame index to the `end_time` column, converted to a timestamp.
+
+4. **Concatenate Data:**
+   - Combines data from each file into the `price_df` DataFrame.
+
+5. **Sort and Save:**
+   - Sorts the final DataFrame by date and saves it to `training_price_data_path`.
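The per-file processing described above (step 3 in particular) can be sketched as follows. This is an illustrative sketch, not the repo's exact code: the helper name `format_kline_frame` and the `KLINE_COLUMNS` constant are invented here, and only the column-trimming, renaming, and indexing logic is shown.

```python
import pandas as pd

# Illustrative constant (not from the repo): the names given to the first 11
# columns of each Binance kline CSV, matching the list in step 3 above.
KLINE_COLUMNS = [
    "start_time", "open", "high", "low", "close", "volume",
    "end_time", "volume_usd", "n_trades", "taker_volume", "taker_volume_usd",
]

def format_kline_frame(raw: pd.DataFrame) -> pd.DataFrame:
    """Keep the first 11 columns, rename them, and index rows by end_time."""
    df = raw.iloc[:, :11].copy()
    df.columns = KLINE_COLUMNS
    # Binance reports end_time as a millisecond epoch; converting it to a
    # datetime index lets frames from many files be concatenated and sorted.
    df.index = pd.to_datetime(df["end_time"], unit="ms")
    df.index.name = "date"
    return df
```

In `format_data`, each `.zip` member is read with `pd.read_csv`, passed through logic like this, and concatenated into `price_df` before the final sort and save.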
+
+### Column Descriptions
+
+- **start_time**: The start of the trading period.
+- **open**: Opening price.
+- **high**: Highest price during the period.
+- **low**: Lowest price during the period.
+- **close**: Closing price.
+- **volume**: Trading volume.
+- **end_time**: End of the trading period.
+- **volume_usd**: Trading volume in USD.
+- **n_trades**: Number of trades.
+- **taker_volume**: Taker buy volume.
+- **taker_volume_usd**: Taker buy volume in USD.
+
+This function consolidates and formats the historical price data, making it ready for analysis or machine learning tasks.
+
+### Training the Model
+
+The [`train_model`](https://github.com/allora-network/basic-coin-prediction-node/blob/5d70e9feee7d1e7725c7602427b6856e7ffbe479/model.py#L75) function trains a **linear regression model** using historical price data and saves the trained model to a file. Here's a breakdown of the process:
+
+1. **Load the Data:**
+   - Reads the price data from a CSV file specified by `training_price_data_path`.
+
+2. **Prepare the DataFrame:**
+   - Converts the `date` column to a timestamp and stores it as a numerical value.
+   - Computes the average price using the `open`, `close`, `high`, and `low` columns.
+
+3. **Reshape Data for Regression:**
+   - Extracts the `date` column as the feature (`x`) and the computed average price as the target (`y`).
+   - Reshapes these arrays to the format expected by `scikit-learn`.
+
+4. **Split the Data:**
+   - Splits the data into a training set and a test set using an 80/20 split. However, the test set is not used further in this function.
+
+5. **Train the Model:**
+   - Initializes and trains a `LinearRegression` model using the training data.
+
+6. **Save the Model:**
+   - Creates the directory for the model file if it doesn't exist.
+   - Saves the trained model to a file specified by `model_file_path` using `pickle`.
+
+7. **Print Confirmation:**
+   - Prints a message indicating that the trained model has been saved.
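The steps above can be sketched end-to-end. This is a hedged illustration of the described flow, not a verbatim copy of the repo's `train_model`: the split into `fit_price_model` and `save_model` helpers is invented here for clarity, and `training_price_data_path` / `model_file_path` are assumed to be configured elsewhere, as in `model.py`.

```python
import os
import pickle

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def fit_price_model(price_data: pd.DataFrame) -> LinearRegression:
    """Steps 2-5: prepare, reshape, split, and fit a linear regression."""
    df = pd.DataFrame()
    # Step 2: convert 'date' to a numerical timestamp and average the
    # open/close/high/low columns to form the target price.
    df["date"] = pd.to_datetime(price_data["date"]).map(pd.Timestamp.timestamp)
    df["price"] = price_data[["open", "close", "high", "low"]].mean(axis=1)

    # Step 3: reshape into the 2-D arrays scikit-learn expects.
    x = df["date"].values.reshape(-1, 1)
    y = df["price"].values.reshape(-1, 1)

    # Step 4: 80/20 split; the held-out 20% is not used further.
    x_train, _, y_train, _ = train_test_split(x, y, test_size=0.2, random_state=0)

    # Step 5: fit the model on the training portion.
    model = LinearRegression()
    model.fit(x_train, y_train)
    return model

def save_model(model: LinearRegression, model_file_path: str) -> None:
    """Steps 6-7: persist the model with pickle and confirm."""
    os.makedirs(os.path.dirname(model_file_path) or ".", exist_ok=True)
    with open(model_file_path, "wb") as f:
        pickle.dump(model, f)
    print(f"Trained model saved to {model_file_path}")
```

Step 1 corresponds to `price_data = pd.read_csv(training_price_data_path)` in `model.py` before logic like the above runs.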
+
+### Modifying the Function for Different Models
+
+To change the model used for training, replace the `LinearRegression` model with another machine learning algorithm. Here are a few examples:
+
+#### Using Decision Tree Regression
+
+```python
+from sklearn.tree import DecisionTreeRegressor
+
+def train_model():
+    # Load the eth price data
+    price_data = pd.read_csv(training_price_data_path)
+    df = pd.DataFrame()
+
+    # Convert 'date' to a numerical value (timestamp) we can use for regression
+    df["date"] = pd.to_datetime(price_data["date"])
+    df["date"] = df["date"].map(pd.Timestamp.timestamp)
+
+    df["price"] = price_data[["open", "close", "high", "low"]].mean(axis=1)
+
+    # Reshape the data to the shape expected by sklearn
+    x = df["date"].values.reshape(-1, 1)
+    y = df["price"].values.reshape(-1, 1)
+
+    # Split the data into training set and test set
+    x_train, _, y_train, _ = train_test_split(x, y, test_size=0.2, random_state=0)
+
+    # Train the model using Decision Tree Regression
+    model = DecisionTreeRegressor()
+    model.fit(x_train, y_train)
+
+    # create the model's parent directory if it doesn't exist
+    os.makedirs(os.path.dirname(model_file_path), exist_ok=True)
+
+    # Save the trained model to a file
+    with open(model_file_path, "wb") as f:
+        pickle.dump(model, f)
+
+    print(f"Trained model saved to {model_file_path}")
+```
\ No newline at end of file
diff --git a/public/exchange-inferences.png b/public/exchange-inferences.png
old mode 100644
new mode 100755
index e4d1a59..d35c22d
Binary files a/public/exchange-inferences.png and b/public/exchange-inferences.png differ
diff --git a/public/forecast-inference-workers.png b/public/forecast-inference-workers.png
old mode 100644
new mode 100755
index cfd4398..6dcc994
Binary files a/public/forecast-inference-workers.png and b/public/forecast-inference-workers.png differ
diff --git a/public/forecast-initial.png b/public/forecast-initial.png
old mode 100644
new mode 100755
index 2bb8167..3551ceb
Binary files a/public/forecast-initial.png and b/public/forecast-initial.png differ
diff --git a/public/layers-of-allora.png b/public/layers-of-allora.png
old mode 100644
new mode 100755
index 7bcebcd..3714812
Binary files a/public/layers-of-allora.png and b/public/layers-of-allora.png differ
diff --git a/public/reputers.png b/public/reputers.png
old mode 100644
new mode 100755
index ebd669d..bcec75b
Binary files a/public/reputers.png and b/public/reputers.png differ
diff --git a/public/synthesis-final.png b/public/synthesis-final.png
old mode 100644
new mode 100755
index cff0f91..76e6910
Binary files a/public/synthesis-final.png and b/public/synthesis-final.png differ
diff --git a/public/topic-coordinator.png b/public/topic-coordinator.png
old mode 100644
new mode 100755
index b91a1a8..b1249e1
Binary files a/public/topic-coordinator.png and b/public/topic-coordinator.png differ