diff --git a/pages/home/key-terms.mdx b/pages/home/key-terms.mdx
index d64fb8c..20403d7 100644
--- a/pages/home/key-terms.mdx
+++ b/pages/home/key-terms.mdx
@@ -20,10 +20,11 @@ Predictions or conclusions made by workers about specific outcomes within a give
## Forecasts
-Predictions made by workers about the performance of their peers in the current epoch. Forecasts are used to gauge the reliability and accuracy of the participants' inferences.
+Predictions made by workers about the performance of their peers in the current epoch, expressed as a set of forecasted losses in accordance with the topic's loss function.
+Forecasts are used to gauge the reliability and accuracy of the participants' inferences under the current circumstances.
-Forecasts and predictions are used interchangeably throughout the docs.
+Forecasts and predictions are used interchangeably throughout the docs when referring to the output of forecasters.
## Context Awareness
@@ -32,7 +33,7 @@ An additional dimension of evaluation that enables the network to achieve the be
By incorporating feedback from the test set (live, revealed ground truth plus the live, revealed performances of one's peers), both inference and forecast models of individual workers can be improved over time. This improves overall network performance.
-By incentivizing forecasts, workers are incentivized to understand the contexts in which they and their peers perform well or poorly. For example, Allora may understand that a subset of workers perform better on Wednesdays, whereas another performs well on Thursdays, or some do well in bear markets and others in bull markets. Integrating such context-aware insights empowers Allora to admit better performance than any individual actor because it can selectively leverage insights from the appropriate actors in the appropriate contexts.
+By incentivizing forecasts, workers are incentivized to understand the contexts in which they and their peers perform well or poorly. For example, forecasters may understand that a subset of workers perform better on Wednesdays, whereas another subset performs well on Thursdays, or some do well in bear markets and others in bull markets. Integrating such context-aware insights empowers Allora to achieve better performance than any individual actor because it can selectively leverage insights from the appropriate actors in the appropriate contexts.
## Network Participants
@@ -53,11 +54,7 @@ Incentives given to workers and reputers based on their accuracy, performance an
## Stake
A financial commitment made by reputers to show confidence in their ability to assess reputation by sourcing the truth and comparing it to workers' inferences.
-This stake increases the importance and rewards of a topic.
-
-- Reputers and workers must place a minimum stake to participate, ensuring network security and preventing attacks
-- This registration stake allows participation but **does not** influence topic rewards
-- Participants use the Allora chain CLI to stake
+This stake increases the importance and rewards of a topic. Participants use the Allora chain CLI to stake.
### Delegated Stake
@@ -79,4 +76,4 @@ This two-step process helps protect against flash attacks while maintaining a sm
A measure of how the performance of a worker’s inference compares to the network’s previously reported accuracy.
-A positive regret implies that the inference of worker `x` is expected to outperform, whereas a negative regret implies lower expected performance for worker `x`.
\ No newline at end of file
+A positive regret implies that the inference of worker `x` outperforms the network, whereas a negative regret implies the network outperforms worker `x`.
\ No newline at end of file
diff --git a/pages/home/layers/consensus/reputers.mdx b/pages/home/layers/consensus/reputers.mdx
index e1bc60d..000e214 100644
--- a/pages/home/layers/consensus/reputers.mdx
+++ b/pages/home/layers/consensus/reputers.mdx
@@ -1,6 +1,6 @@
# Consensus: Reputers
-Reputers are rewarded based on their accuracy comparative to consensus (the other reputers providing data for a topic) and stake, with added functionality to prevent centralization of rewards caused by reputers with larger stakes.
+Reputers are rewarded based on their accuracy relative to consensus (formed by all reputers providing data for a topic) and stake, with added functionality to prevent centralization of rewards caused by reputers with larger stakes.
## Problem: Runaway Centralization
1. **Stake-Weighted Average:**
@@ -8,7 +8,7 @@ Reputers are rewarded based on their accuracy comparative to consensus (the othe
- Normally, we might average the accuracy (or "losses") they report, but give more weight to reputers with bigger stakes (more reputation).
- This means reputers with more stake have more influence on the consensus (agreed-upon truth).
2. **Runaway Effect:**
-- The problem is that reputers with higher stakes get more rewards, which further increases their stakes.
+- The problem is that reputers with higher stakes have more influence on the consensus, so their reported losses end up closer to it. Being closer to consensus earns them more rewards, which further increases their stakes.
- This creates a cycle where the rich get richer, leading to centralization. A few reputers end up controlling most of the influence and rewards, which is unfair and unhealthy for the system.
## Solution: Adjusted Stake
@@ -16,12 +16,12 @@ Reputers are rewarded based on their accuracy comparative to consensus (the othe
- To prevent runaway centralization, we adjust how much weight each reputer's stake has when setting the consensus.
- Instead of using the full stake for weighting, we use an adjusted version that doesn’t let any one reputer dominate.
2. **How It Works:**
-- The formula for adjusted stake ensures that if a reputer's stake goes above a certain level, it doesn’t keep increasing their weight.
+- The formula for adjusted stake ensures that if a reputer's stake goes above a certain level, it doesn’t keep increasing their weight in the consensus calculation.
- It levels the playing field, so reputers with smaller stakes still have some influence.
$$\hat{S}_{im} = \min \left( \frac{N_r a_{im} S_{im}}{\sum_m a_{im} S_{im}}, 1 \right)$$
Where:
-- $$Nr$$ is the number of reputers.
-- $$a_{im}$$ is a listening coefficient, which is a measure of how much the network considers each reputer's input, based on their past performance.
+- $$N_{r}$$ is the number of reputers.
+- $$a_{im}$$ is a listening coefficient, which is a measure of how much the network considers each reputer's input, and is optimized to maximize consensus among the reputers.
- $$S_{im}$$ is the original stake.
\ No newline at end of file
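+
+To make the capping concrete, here is a minimal Python sketch of the adjusted-stake formula above (variable and function names are illustrative, not taken from the chain code):
+
+```python
+import numpy as np
+
+def adjusted_stake(stake, listening_coeff):
+    # Cap each reputer's weight in consensus: scale stakes so that no single
+    # reputer's adjusted stake exceeds 1, per the formula above.
+    n_r = len(stake)
+    weighted = listening_coeff * stake
+    return np.minimum(n_r * weighted / weighted.sum(), 1.0)
+
+# One very large reputer and three small ones, with equal listening coefficients.
+stake = np.array([1000.0, 10.0, 10.0, 10.0])
+a = np.ones_like(stake)
+print(adjusted_stake(stake, a))  # the large stake's weight is capped at 1.0
+```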
diff --git a/pages/home/layers/consensus/topic-rewards.mdx b/pages/home/layers/consensus/topic-rewards.mdx
index 4672731..58e4823 100644
--- a/pages/home/layers/consensus/topic-rewards.mdx
+++ b/pages/home/layers/consensus/topic-rewards.mdx
@@ -2,7 +2,7 @@ import { Callout } from 'nextra/components'
# Calculating Topic Reward Distribution Across all Network Participants
-Now that we've explained the mechanisms behind distributing rewards for the actors of each network participant, let's dive in to how topic rewards are distributed across:
+Now that we've explained the mechanisms behind distributing rewards to the actors within each participant class, let's dive into how topic rewards are distributed across the groups of:
- inference workers
- forecast workers
- reputers
@@ -29,7 +29,7 @@ $$
Where:
- $$ F_i $$ is the entropy for inference workers.
- $$ \sum_j $$ means we add up the values for all inference workers $$ j $$.
-- $$ f_{ij} $$ is the fraction of rewards for the $$ j $$-th inference worker.
+- $$ f_{ij} $$ is the (smoothed) fraction of rewards for the $$ j $$-th inference worker.
- $$ N_i^{\text{eff}} $$ is the effective number of inference workers (a fair count to prevent cheating).
- $$ N_i $$ is the total number of inference workers.
- $$ \beta $$ is a constant that helps adjust the calculation.
@@ -47,7 +47,7 @@ $$
Where:
- $$ G_i $$ and $$ H_i $$ are the entropies for forecast workers and reputers, respectively.
- $$ \sum_k $$ and $$ \sum_m $$ mean we add up the values for all forecast workers $$ k $$ and all reputers $$ m $$.
-- $$ f_{ik} $$ and $$ f_{im} $$ are the fractions of rewards for the $$ k $$-th forecast worker and the $$ m $$-th reputer.
+- $$ f_{ik} $$ and $$ f_{im} $$ are the (smoothed) fractions of rewards for the $$ k $$-th forecast worker and the $$ m $$-th reputer.
- $$ N_f^{\text{eff}} $$ and $$ N_r^{\text{eff}} $$ are the effective numbers of forecast workers and reputers.
- $$ N_f $$ and $$ N_r $$ are the total numbers of forecast workers and reputers.
@@ -58,6 +58,10 @@ $$
f_{ij} = \frac{\tilde{u}_{ij}}{\sum_j \tilde{u}_{ij}}, \quad f_{ik} = \frac{\tilde{v}_{ik}}{\sum_k \tilde{v}_{ik}}, \quad f_{im} = \frac{\tilde{w}_{im}}{\sum_m \tilde{w}_{im}}
$$
+
+ Here, the tilde over the rewards indicates they are smoothed using an exponential moving average to remove noise and volatility from the decentralization measure.
+
+
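+As a rough illustration (the smoothing parameter `alpha` and the variable names are assumptions, not the network's actual values), the smoothed reward fractions could be computed like this:
+
+```python
+import numpy as np
+
+def smoothed_fractions(rewards, prev_smoothed, alpha=0.1):
+    # Exponential moving average of raw rewards, then normalization to fractions.
+    u_tilde = alpha * rewards + (1 - alpha) * prev_smoothed
+    return u_tilde / u_tilde.sum(), u_tilde
+
+rewards = np.array([5.0, 3.0, 2.0])   # this epoch's raw rewards
+prev = np.array([4.0, 4.0, 2.0])      # last epoch's smoothed rewards
+fractions, u_tilde = smoothed_fractions(rewards, prev)
+print(fractions)  # sums to 1; these fractions feed into the entropy calculations above
+```
+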
### Effective Number of Participants
To prevent manipulation of the reward system against sybil attacks, we calculate the effective number of participants (actors). It ensures that the reward distribution remains fair even if someone tries to game the system.
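+
+For intuition on why an effective count resists sybil attacks, here is a sketch using the inverse Herfindahl index, a standard effective-number measure (shown for illustration only; the exact formula the network uses is the one defined in these docs):
+
+```python
+import numpy as np
+
+def effective_number(fractions):
+    # Inverse Herfindahl index: equals N for N equal reward shares, while
+    # identities with near-zero shares barely increase it even as the raw
+    # participant count N grows.
+    fractions = np.asarray(fractions)
+    return 1.0 / np.sum(fractions ** 2)
+
+print(effective_number([0.25, 0.25, 0.25, 0.25]))   # 4 equal workers -> 4.0
+print(effective_number([0.97, 0.01, 0.01, 0.01]))   # 1 dominant worker + 3 sybils -> ~1.06
+```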
@@ -85,35 +89,35 @@ In simpler terms:
- $$ U_i $$ is the reward for inference workers.
- $$ V_i $$ is the reward for forecast workers.
- $$ W_i $$ is the reward for reputers.
-- $$ E_{i,t} $$ is the total reward for the task at time $$ t $$.
+- $$ E_{i,t} $$ is the total reward allocated to topic $$ t $$ in epoch $$ i $$.
- $$ \chi $$ is a factor that adjusts how much reward goes to forecast workers.
-- $$ \gamma $$ is a balance factor to keep everything fair.
+- $$ \gamma $$ is a normalization factor that ensures the rewards add up to $$ E_{i,t} $$.
- $$ F_i $$, $$ G_i $$, and $$ H_i $$ are the entropies for inference workers, forecast workers, and reputers.
-### How Good is the Forecast? Checking the Predictions
+### What Value is Added by Forecasters? Checking the Predictions
-We check how good the forecast is using a score called $$ T_i $$:
+We quantify the value added by the entire forecasting task using a score called $$ T_i $$:
$$
T_i = \log L_i^- - \log L_i
$$
Where:
-- $$ T_i $$ is the performance score for the forecast.
-- $$ L_i^- $$ is the network loss without the forecast task.
+- $$ T_i $$ is the performance score for the entire forecasting task.
+- $$ L_i^- $$ is the network loss when excluding all forecast-implied inferences.
- $$ L_i $$ is the network loss with the forecast task.
-We then use this score to decide how much the forecast workers should get. The more accurate their predictions, the higher their reward:
+We then use this score to decide how much the forecast workers should get. The higher the forecasting task's score relative to the inference workers' scores, the higher the total reward allocated to forecasters:
$$
-\tau_i \equiv \frac{T_i - \min \left( 0, \sum_j T_{ij} \right)}{\sum_j T_{ij}}
+\tau_i \equiv \alpha \frac{T_i - \min(0, \max_j T_{ij})}{|\max_j T_{ij}|} + (1 - \alpha) \tau_{i-1}
$$
Where:
-- $$ \tau_i $$ is the adjusted score for the forecast task.
-- $$ T_{ij} $$ is the performance score for each forecast worker.
+- $$ \tau_i $$ is a ratio expressing the added value of the forecasting task relative to the inference workers, smoothed over epochs with an exponential moving average.
+- $$ T_{ij} $$ is the performance score for each inference worker.
+- $$ \alpha $$ is the smoothing parameter of the exponential moving average, and $$ \tau_{i-1} $$ is the value of the ratio from the previous epoch.
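+
+As a sketch (the loss values, `alpha`, and names below are illustrative assumptions), these two quantities can be computed as:
+
+```python
+import numpy as np
+
+def forecast_task_score(network_loss, network_loss_without_forecasts):
+    # T_i: log-loss improvement contributed by the entire forecasting task.
+    return np.log(network_loss_without_forecasts) - np.log(network_loss)
+
+def forecast_ratio(T_i, T_j, tau_prev, alpha=0.1):
+    # tau_i: added value of forecasting relative to the best inference worker,
+    # smoothed with an exponential moving average over epochs.
+    best = np.max(T_j)
+    return alpha * (T_i - min(0.0, best)) / abs(best) + (1 - alpha) * tau_prev
+
+T_j = np.array([0.02, 0.05, 0.01])    # inference-worker scores T_ij
+T_i = forecast_task_score(network_loss=0.95, network_loss_without_forecasts=1.0)
+print(forecast_ratio(T_i, T_j, tau_prev=0.5))
+```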
-This score is then adjusted to fit within a specific range:
+This ratio is then mapped onto a fraction of the worker rewards that is allocated to forecasters:
$$
\chi = \begin{cases}
@@ -123,15 +127,14 @@ $$
\end{cases}
$$
-### The Balance Factor
+### The Normalization Factor
-We use a balance factor $$ \gamma $$ to make sure everything stays fair:
+We use a normalization factor $$ \gamma $$ to ensure the rewards add up to $$ E_{i,t} $$:
$$
\gamma = \frac{F_i + G_i}{(1 - \chi)F_i + \chi G_i}
$$
Where:
-- $$ \gamma $$ ensures that the rewards are balanced correctly.
-
+- $$ \gamma $$ ensures that the total reward allocated to workers ($$ U_{i} + V_{i} $$) remains constant after accounting for the added value of the forecasting task.
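+
+As a quick check of this property (the values are arbitrary), note that rescaling the $$ \chi $$-weighted entropies by $$ \gamma $$ recovers $$ F_i + G_i $$:
+
+```python
+def gamma(F_i, G_i, chi):
+    # Normalization factor from the formula above.
+    return (F_i + G_i) / ((1 - chi) * F_i + chi * G_i)
+
+F_i, G_i, chi = 2.0, 1.0, 0.25
+g = gamma(F_i, G_i, chi)
+# The chi-split entropies, rescaled by gamma, sum back to F_i + G_i,
+# so the total worker reward is unchanged by the inference/forecast split.
+print((1 - chi) * g * F_i + chi * g * G_i, F_i + G_i)  # 3.0 3.0
+```
+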
By using these methods, we can ensure that rewards are spread out fairly and encourage everyone to contribute their best work.
diff --git a/pages/home/layers/consensus/workers.mdx b/pages/home/layers/consensus/workers.mdx
index df20d81..2a31447 100644
--- a/pages/home/layers/consensus/workers.mdx
+++ b/pages/home/layers/consensus/workers.mdx
@@ -9,7 +9,7 @@ Scores are used to calculate rewards.
## Inference Workers
-**One-out Inference**: evaluating the performance of an inference by removing one worker’s inference and seeing how much the loss increases.
+**One-out Inference**: evaluating the impact of an inference by removing one worker’s inference and seeing how much the loss increases.
The performance score $$T_{ij}$$ for inference worker $$j$$ is calculated as the difference in the logarithm of losses with and without the worker’s inference. If removing a worker's contribution increases the loss, their performance score is positive, indicating a valuable contribution.
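+
+As a minimal sketch (the loss values and names are illustrative), the one-out score amounts to:
+
+```python
+import numpy as np
+
+def one_out_score(loss_full, loss_without_worker):
+    # T_ij: how much the network loss worsens when worker j's inference is removed.
+    return np.log(loss_without_worker) - np.log(loss_full)
+
+# Removing the worker raises the loss from 0.90 to 0.97 -> positive score, valuable contribution.
+print(one_out_score(loss_full=0.90, loss_without_worker=0.97))
+```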
@@ -27,7 +27,7 @@ When you remove one worker to see their impact (**one-out inference**), the over
### Why One-In Inferences Are Needed
To better understand each forecast worker's unique contribution, you need to also see what happens when you specifically include their work (**one-in inference**).
-By adding a worker’s specific input to the group and measuring the change, you can see how much they really help.
+By adding a single worker’s forecast-implied inference to a set of inferences that contains no other forecast-implied inferences and measuring the change, you can see how much that worker helps on their own.
$$
T_{ik}^+ = \log \mathcal{L}_i^- - \log \mathcal{L}_{ki}^+
diff --git a/pages/home/layers/forecast-synthesis/forecast.mdx b/pages/home/layers/forecast-synthesis/forecast.mdx
index bd14cc6..870aca7 100644
--- a/pages/home/layers/forecast-synthesis/forecast.mdx
+++ b/pages/home/layers/forecast-synthesis/forecast.mdx
@@ -4,7 +4,7 @@
Some workers function to forecast the expected performance of other workers inferences' and make the network **Context Aware**. [Context awareness](/home/key-terms#context-awareness) enables the aggregated inference produced by the network to be better than any individual model's output.
-Inference Synthesis becomes possible due to the context awareness of the workers.
+Inference Synthesis is greatly enhanced due to the context awareness of the workers.

@@ -19,7 +19,7 @@ Forecasted losses allow the network to become context aware.
## Regrets
Forecasted losses are used to calculate "regret," which indicates how much better or worse each inference is expected to be compared to previous inferences.
-Positive regret means an inference is expected to be more accurate, while negative regret means it is expected to be less accurate.
+Positive regret means an inference is expected to be more accurate than the network inference, while negative regret means it is expected to be less accurate.
Regrets are used to generate 'weights', where more accurate inferences get higher weights.
diff --git a/pages/home/layers/forecast-synthesis/synthesis.mdx b/pages/home/layers/forecast-synthesis/synthesis.mdx
index 32846ef..20b41ff 100644
--- a/pages/home/layers/forecast-synthesis/synthesis.mdx
+++ b/pages/home/layers/forecast-synthesis/synthesis.mdx
@@ -1,11 +1,12 @@
# Inference Synthesis
-Inference synthesis in Allora is a process that combines individual inferences from various workers to produce an aggregate inference. This process takes place over several epochs and involves both inference and forecasting tasks.
+Inference synthesis in Allora is a process that combines individual inferences from various workers to produce an aggregate inference. This process takes place at each epoch and involves both inference and forecasting tasks.
## Normalization of Regrets
Regrets are normalized to ensure weights are comparable and within a reasonable range.
-This prevents any single inference from disproportionately affecting the final result.
+
+This allows a single mapping function to convert regrets into weights, independent of the absolute scale of the losses and regrets.
The regret is normalized using its standard deviation across all workers, adjusted by a small constant 𝜖:

@@ -19,7 +20,8 @@ The normalized regrets 𝑅𝑖𝑗𝑘 are then used to calculate the weights f

-These weights determine how much each initial inference 𝐼𝑖𝑗 will contribute to the final network inference.
+These weights determine how much each raw inference 𝐼𝑖𝑗 will contribute to the final network inference.
+
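+For intuition, here is a sketch of the normalize-then-weight step (the exact mapping from regrets to weights is the one given in the formula above; the simple exponential used here is only a stand-in, and all names and values are illustrative):
+
+```python
+import numpy as np
+
+def normalized_regrets(regrets, epsilon=1e-4):
+    # Normalize regrets by their standard deviation across workers,
+    # adjusted by a small constant epsilon (value assumed here).
+    return regrets / (np.std(regrets) + epsilon)
+
+def combine_inferences(inferences, regrets):
+    # Map normalized regrets through a monotone function (a stand-in for the
+    # network's mapping function) and take the weighted average.
+    weights = np.exp(normalized_regrets(regrets))
+    weights /= weights.sum()
+    return float(np.dot(weights, inferences))
+
+inferences = np.array([101.0, 99.5, 100.2])  # raw inferences I_ij
+regrets = np.array([0.3, -0.1, 0.05])        # higher regret = expected to beat the network
+print(combine_inferences(inferences, regrets))
+```
+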
## Forecast-Implied Inferences
@@ -32,7 +34,7 @@ Here, 𝑤𝑖𝑗𝑘 are weights assigned to each inference based on the forec
## Final Network Inference
-The final inference for the network is a weighted combination of all individual inferences.
+The final inference for the network is a weighted combination of all individual inferences. It follows a procedure similar to the generation of forecast-implied inferences discussed above, but uses actual regrets, based on the losses provided by reputers, instead of regrets derived from forecasted losses.
This combined result is expected to be more accurate and reliable due to the weighting process.

diff --git a/pages/home/layers/inference-consumption.mdx b/pages/home/layers/inference-consumption.mdx
index 15ceb59..3fc3a22 100644
--- a/pages/home/layers/inference-consumption.mdx
+++ b/pages/home/layers/inference-consumption.mdx
@@ -19,7 +19,7 @@ Inferences have a [topic life cycle](/devs/topic-creators/topic-life-cycle) that
## Reputers
As the number of workers in the network increases, some will naturally perform better than others due to the system's permissionless nature.
-To maintain quality, Reputers evaluate each worker's performance against the ground truth when it becomes available.
+To maintain quality and help the network set the reward distribution, Reputers evaluate each worker's performance against the ground truth when it becomes available.

diff --git a/pages/home/participants.mdx b/pages/home/participants.mdx
index d0f8082..72f31ae 100644
--- a/pages/home/participants.mdx
+++ b/pages/home/participants.mdx
@@ -1,10 +1,12 @@
# Allora Network Participants
-Allora Network participants can fulfill a variety of different roles after any of these participants have created a topic. A topic is registered on the Allora chain with a short rule set governing network interaction, including the loss function that needs to be optimized by the topic network.
+Allora Network participants can fulfill a variety of different roles after any of these participants have created a topic.
+A topic is registered on the Allora chain with a short rule set governing network interaction, including the loss function that needs to be optimized by the topic network.
-Allora Labs will contribute to the development of the network alongside other external code contributors. Allora Labs will also participate in the network as a worker by running models. Allora Labs will contribute as a sales/marketing service provider for Allora.
+Allora Labs will contribute to the development of the network alongside other external code contributors. Allora Labs will also participate in the network as a worker by running models.
+Allora Labs will contribute as a sales/marketing service provider for Allora.
- **Workers** provide AI/ML-powered inferences to the network. These inferences can directly refer to the object that the network topic is generating or to the predicted quality of the inferences produced by other workers to help the network combine these inferences. A worker receives rewards proportional to the quality of its inferences.
- **Reputers** evaluate the quality of the inferences provided by the workers. This is done by comparing the inferences to the ground truth when available. Reputers also quantify how much these inferences contribute to the network-wide inference. A reputer receives rewards proportional to its stake and the quality of its evaluations. Reputers are often authoritative domain experts to assess the quality of inferences accurately.
-- **Validators** are responsible for operating most of the infrastructure associated with instantiating the Allora Network. They do this in several ways: staking in worker nodes (data scientists) based on their confidence in said workers' abilities to produce accurate inferences, operating the appchain as Cosmos validators, operating Blockless (b7s) nodes that execute topic-specific weight-adjustment logic off-chain.
+- **Validators** are responsible for operating most of the infrastructure associated with instantiating the Allora Network by operating the appchain as Cosmos validators.
- **Consumers** request inferences from the network. They pay for the inferences using the native network token.
diff --git a/pages/home/tokenomics.mdx b/pages/home/tokenomics.mdx
index 38dccf1..d96cec9 100644
--- a/pages/home/tokenomics.mdx
+++ b/pages/home/tokenomics.mdx
@@ -6,7 +6,7 @@ The Allora token (ALLO) is minted by the Allora Network to facilitate the exchan
## Pay-What-You-Want (PWYW)
-The ALLO token incorporates a Pay-What-You-Want (PWYW) model to allow token holders the flexibility to choose the fee they pay for inferences generated by the network. This model fosters inclusivity and accessibility by enabling participants to determine the value they assign to the service.
+The Allora Network incorporates a Pay-What-You-Want (PWYW) model to allow token holders the flexibility to choose the fee they pay for inferences generated by the network. This model fosters inclusivity and accessibility by enabling participants to determine the value they assign to the service.
Token holders have the autonomy to decide the amount of ALLO they wish to pay for a given inference, which encourages token holders to contribute to the network's ecosystem according to their perceived value of the service.
@@ -14,7 +14,7 @@ Token holders have the autonomy to decide the amount of ALLO they wish to pay fo
**Important Note**: If participants choose to pay **zero** fees for a particular topic, the weight of that topic tends to zero. As a result, participants within that topic **receive no rewards**, and the token emission will be redistributed over other topics. This mechanism ensures that topics with no fee payments do not sustain themselves, driving healthy competition and price discovery across the network.
-Flexible price discovery across topics is a less opinionated method that allows the market to reach an agreed-upon price through natural negotiation and market dynamics. Running a Large Language Model (LLM) involves a different, heterogeneous resource provision compared to something like Ethereum, which is homogeneous since each opcode in Ethereum has a fixed price.
+Flexible price discovery across topics is a less opinionated method that allows the market to reach an agreed-upon price through natural negotiation and market dynamics.
## Token Emissions
@@ -40,7 +40,7 @@ Flexible price discovery across topics is a less opinionated method that allows
### Creating or Participating in Topics
- ALLO tokens are paid for creating topics or participating in the network as a worker.
-- The access fee price is variable.
+- ALLO tokens are used to pay the registration fee when workers and reputers register for a topic.
### Staking and Delegating Stake