
Update index.md to fix a few minor grammatical issues (#322)
Signed-off-by: afkehaya <[email protected]>
afkehaya authored Jan 11, 2024
1 parent 402c43f commit df670c8
Showing 1 changed file with 2 additions and 2 deletions.
content/en/blog/a-acc-akash-accelerationism/index.md (4 changes: 2 additions & 2 deletions)
@@ -45,14 +45,14 @@ In August of last year, Akash completed the [upgrade to Mainnet 6, which brought

When GPU support was launched, plans were laid out for the future inclusion of other chip manufacturers, such as AMD (and support for these chips was added in late 2023). This upgrade made NVIDIA H100s and A100s, along with a wide range of consumer-grade GPUs, accessible on the network, positioning Akash as one of the leading networks to access the broadest range of compute flexibly.

-Akash serves the market for permissionless compute by focusing primarily on two segments of the GPU supply. The first segment includes high-performance chips that are often difficult to access (given the GPU shortage) but are the most performant for AI training, fine-tuning, and inference. The second segment includes consumer-grade chips which represent a massive supply of underutilized compute. These could GPUs are often repurposed from former crypto mining operations, and also from individual builders and those who have built personal computers with these chips. Akash enables both of these segments to bring this underutilized compute to market on an open and permissionless network that gives the hardware owner significant flexibility and control.
+Akash serves the market for permissionless compute by focusing primarily on two segments of the GPU supply. The first segment includes high-performance chips that are often difficult to access (given the GPU shortage) but are the most performant for AI training, fine-tuning, and inference. The second segment includes consumer-grade chips which represent a massive supply of underutilized compute. These GPUs are often repurposed from former crypto mining operations, and also from individual builders and those who have built personal computers with these chips. Akash enables both of these segments to bring this underutilized compute to market on an open and permissionless network that gives the hardware owner significant flexibility and control.

The proliferation of Large Language Models (LLMs) and AI applications triggered an unprecedented surge in demand for high-performance GPUs. This demand spanned various sectors, from large corporations to startups, all facing the same compute shortage.

### Training a foundation AI model on Akash
Beginning in August last year, [Overclock Labs started training a foundation AI model alongside ThumperAI](https://github.com/orgs/akash-network/discussions/300). This training effort will culminate in the creation of an open-source AI model named “Akash-Thumper” (abbreviated as AT-1), which will be open-sourced and shared on Huggingface once training is complete. This milestone showcases the potential for training AI models on a distributed network.

-The project brought will bring three key benefits to the Akash community. First, working through the first model training will help define the process so that those who want to train models in the future will have a set of well-documented steps with a knowledge base for troubleshooting. Second, the open-source availability of AT-1 will serve as a tangible way to generate awareness of Akash and the feasibility of training on distributed compute more generally. Third, by increasing the attractiveness of Akash for training and fine-tuning AI models, we ultimately attract demand to the network, which helps to increase utilization and keep network providers consistently earning for their listed resources.
+The project will bring three key benefits to the Akash community. First, working through the first model training will help define the process so that those who want to train models in the future will have a set of well-documented steps with a knowledge base for troubleshooting. Second, the open-source availability of AT-1 will serve as a tangible way to generate awareness of Akash and the feasibility of training on distributed compute more generally. Third, by increasing the attractiveness of Akash for training and fine-tuning AI models, we ultimately attract demand to the network, which helps to increase utilization and keep network providers consistently earning for their listed resources.

How the training was structured and engaged with community funding is a testament to what is possible with prudent strategic planning. The original proposal outlined a budget of $48,000, equivalent to 53,631 AKT, based on the 30-day moving average price (based on the market price at the time of the proposal). We effectively managed the inherent volatility of the crypto market. This cautious approach ensured that any unspent funds at the end of the project were either returned to the community pool or allocated towards future tenant/provider incentives. Overclock Labs' commitment to cover any shortfall further underscored our dedication to the community's sustainability. The transparency and governance in handling the project’s finances, including the public sharing of the wallet address holding the funds, set a high standard for future endeavors.

