diff --git a/pages/community/resources.mdx b/pages/community/resources.mdx
index ae26c98..85f9138 100644
--- a/pages/community/resources.mdx
+++ b/pages/community/resources.mdx
@@ -19,4 +19,5 @@
 - Workshop – "How to bring AI to your web3 apps with the Allora Edgenet" by Allora Labs
 - https://www.youtube.com/watch?v=aPCvTVFUynA
 
-## Community Guides
+## Community Repositories
+
diff --git a/pages/devs/validators/software-upgrades.mdx b/pages/devs/validators/software-upgrades.mdx
index 4a4f247..a3981ff 100644
--- a/pages/devs/validators/software-upgrades.mdx
+++ b/pages/devs/validators/software-upgrades.mdx
@@ -4,7 +4,7 @@
 
 The Allora network relies on multiple different pieces of software to do different tasks.
 For example the `allora-chain` repository handles the blockchain software that runs the chain, while
-the `allora-inference-base` repository performs off-chain tasks. Each piece of software may need to
+the `offchain-node` repository performs off-chain tasks. Each piece of software may need to
 be upgraded separately.
 
 ## Allora-Chain Upgrades
@@ -55,12 +55,6 @@ For those running the chain software, you will have to have to perform an upgrad
 3. When the developers put up the upgrade proposal to governance, be helpful and vote to make it pass. You can do this via the CLI with `allorad tx gov vote $proposal_id yes --from $validator` or an example of doing this programmatically can be found in the integration test [voteOnProposal](https://github.com/allora-network/allora-chain/blob/main/test/integration/upgrade_test.go) function.
 4. At the block height of the upgrade, the old software will panic - cosmovisor will catch the panic and restart the process using the new binary for the upgrade instead. Monitor your logs appropriately to see the restart.
 
-## Allora-Inference-Base Upgrades
-
-New software releases are published on the Allora Inference Base
-[Github](https://github.com/allora-network/allora-inference-base/releases) page.
-Download and install the new version of the software to upgrade.
-
 ## Further References
 
 This is probably the most helpful document to understand the full workflow of a cosmos-sdk chain
diff --git a/pages/devs/workers/deploy-worker/build-and-deploy-worker-with-node-runners.mdx b/pages/devs/workers/deploy-worker/build-and-deploy-worker-with-node-runners.mdx
index 2e097da..1108330 100644
--- a/pages/devs/workers/deploy-worker/build-and-deploy-worker-with-node-runners.mdx
+++ b/pages/devs/workers/deploy-worker/build-and-deploy-worker-with-node-runners.mdx
@@ -26,7 +26,7 @@ This diagram illustrates the architecture of the integration between the Allora
 - **VPC Internet Gateway**: Allows communication between the instances in the VPC and the internet.
 
 3. **EC2 Instance (Allora Worker Node)**
-   - **Inference Base**: This component handles network communication, receiving requests from the Allora Network's Public Head Node and sending responses back.
+   - **Offchain Node**: This component handles network communication, receiving requests from the Allora Network and sending responses back.
    - **Node Function**: Processes requests by interfacing with the private model server. It acts as an intermediary, ensuring the requests are correctly formatted and the responses are appropriately handled.
    - **Model Server**: Hosts the proprietary model. It executes the main inference script (`Main.py`) to generate inferences based on the received requests.
 
@@ -34,18 +34,18 @@ This diagram illustrates the architecture of the integration between the Allora
 
 1. **Request Flow**:
    - The Allora Network's Public Head Node sends a request for inferences to the EC2 instance within the AWS environment.
-   - The request passes through the VPC Internet Gateway and reaches the Inference Base in the public subnet.
-   - The Inference Base forwards the request to the Node Function.
+   - The request passes through the VPC Internet Gateway and reaches the Offchain Node in the public subnet.
+   - The Offchain Node forwards the request to the Node Function.
    - The Node Function calls `Main.py` on the Model Server to generate the required inferences.
 
 2. **Response Flow**:
    - The Model Server processes the request and returns the inferences to the Node Function.
-   - The Node Function sends the inferences back to the Inference Base.
-   - The Inference Base communicates the inferences back to the Allora Network's Public Head Node via the VPC Internet Gateway.
+   - The Node Function sends the inferences back to the Offchain Node.
+   - The Offchain Node communicates the inferences back to the Allora Network via the VPC Internet Gateway.
 
 ## AWS Activate
 
-Before proceeding, please note that eligibility for AWS Activate credits and terms are governed by AWS. This documentation may become outdated, so ensure you refer to the [AWS Activate program page](https://aws.amazon.com/startups/credits#hero) for the latest eligibility requirements and instructions.
+Before proceeding, please note that eligibility for AWS Activate credits and terms are governed by AWS. This documentation may become outdated, so ensure you refer to the [AWS Activate program page](https://aws.amazon.com/startups/credits#hero) for the latest eligibility requirements and instructions.
 
 ## AWS Activate Stepwise Process
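To make the request and response flow described in the worker documentation above more concrete, the sketch below shows what the Model Server side of that exchange could look like. It is a minimal illustration only: the Flask app, the `/inference/<token>` route, and port 8000 are assumptions made for the example rather than the actual offchain-node interface, and a real `Main.py` would load the proprietary model instead of returning a placeholder value.

```python
# Minimal sketch of a Model Server, assuming a Flask-based HTTP interface.
# The route, port, and placeholder prediction are illustrative, not the
# actual offchain-node contract.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/inference/<token>")
def inference(token):
    # A real worker would run its proprietary model here (the logic that
    # lives in Main.py) and return a prediction for the requested asset.
    prediction = 1234.56  # placeholder standing in for real model output
    return jsonify({"token": token, "prediction": prediction})

if __name__ == "__main__":
    # The Offchain Node / Node Function would call this endpoint, for example
    # GET http://localhost:8000/inference/ETH, and relay the JSON response
    # back to the Allora Network as described in the response flow above.
    app.run(host="0.0.0.0", port=8000)
```

Keeping the model behind a plain HTTP endpoint like this is what lets the Node Function remain a thin intermediary between the Offchain Node and the proprietary model, as the architecture section above describes.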