[book]: fix some typos #81

Open · wants to merge 1 commit into main
4 changes: 2 additions & 2 deletions book/src/dkginit.md
@@ -2,12 +2,12 @@

Each DKG session begins by choosing a unique integer session id \\(\tau\\). This can begin at 0 and then be incremented from the previous \\(\tau\\). When strongly integrated into Tendermint, the epoch number can be used as \\(\tau\\), with the note that additional DKG sessions within an epoch (for example, to do key refresh) must use a unique \\(\tau\\).

# Share partitioning

In general, the validators' staking weights will total much more than \\(W\\), the number of shares issued in the DKG; therefore, the staking weights must be scaled and rounded.

The algorithm that assigns relative weights achieves exactly the desired total weight. First, every participant's weight is scaled and rounded down to the nearest integer. The weight assigned this way falls short of the desired total by less than the number of participants, so adding at most 1 share per participant, in decreasing order of staked weight, suffices to reach the desired total. Once all weight is assigned, each participant's relative weight is at most 1 away from their fractional scaled weight.
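As an illustration, the rounding scheme above can be sketched as follows (a minimal sketch; the function name and signature are my own, not Ferveo's API):

```python
# Sketch of the rounding scheme described above; `assign_weights` and
# its argument names are invented for illustration, not Ferveo's API.
def assign_weights(stakes, W):
    """Scale integer stakes to share counts that sum exactly to W."""
    total = sum(stakes)
    # Scale each stake to the W-share budget and round down.
    weights = [stake * W // total for stake in stakes]
    # The floors fall short of W by fewer than len(stakes) shares, so
    # handing out at most one extra share each, largest stake first,
    # suffices to reach the desired total.
    remainder = W - sum(weights)
    by_stake = sorted(range(len(stakes)), key=lambda i: stakes[i], reverse=True)
    for i in by_stake[:remainder]:
        weights[i] += 1
    return weights

# e.g. stakes 100/200/300 with W = 10 shares -> [1, 3, 6]
```

Because the arithmetic is exact, every participant ends up within one share of their ideal fractional weight.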

Using the consensus layer, all validators should agree on a canonical ordering of \\((pk_i, w_i)\\) where \\(pk_i\\) is the public key of the \\(i\\)th validator and \\(w_i\\) is the number of shares belonging to node \\(i\\). The value \\(i\\) is the integer id of the node with public key \\(pk_i\\).

-Let \\(\Psi_{i} = \{ a, a+1, \ldots, a+w_i \}$\\) be the disjoint partition described above such that \\(\cup_i \Psi_{i} = \{0,1, \ldots, W-1\}\\), and \\(\Omega_{i} = \{ \omega^k \ mid k \in \Psi_{i} \}\\). \\(\Psi_i\\) are the **share indexes** assigned to the \\(i\\)th validator and \\(\Omega_i\\) is the **share domain** of the \\(i\\)th validator.
+Let \\(\Psi_{i} = \{ a, a+1, \ldots, a+w_i-1 \}\\) be the disjoint partition described above such that \\(\cup_i \Psi_{i} = \{0,1, \ldots, W-1\}\\), and \\(\Omega_{i} = \{ \omega^k \mid k \in \Psi_{i} \}\\). \\(\Psi_i\\) are the **share indexes** assigned to the \\(i\\)th validator and \\(\Omega_i\\) is the **share domain** of the \\(i\\)th validator.
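A toy example of this partition (assuming small made-up parameters, not Ferveo's: \\(W = 8\\) shares over the integers mod 17, where 2 has multiplicative order 8):

```python
# Toy example of the share partition: each validator i gets w_i
# consecutive indexes Psi_i, and its share domain Omega_i is the
# corresponding powers of a W-th root of unity. W, q, omega and the
# weights are made up for illustration.
W, q, omega = 8, 17, 2
assert pow(omega, W, q) == 1 and pow(omega, W // 2, q) != 1  # order 8

weights = [4, 3, 1]            # w_i for three validators, summing to W
partitions, domains, a = [], [], 0
for w in weights:
    psi = list(range(a, a + w))                   # share indexes Psi_i
    partitions.append(psi)
    domains.append([pow(omega, k, q) for k in psi])  # share domain Omega_i
    a += w

# The Psi_i are disjoint and their union is {0, 1, ..., W-1}.
assert sorted(k for p in partitions for k in p) == list(range(W))
```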
10 changes: 5 additions & 5 deletions book/src/pvss.md
@@ -4,18 +4,18 @@
The PVSS scheme used is a modified Scrape PVSS.

## Dealer's role

-1. The dealer chooses a uniformly random polynomial \\(f(x) = \sum^p_i a_i x^i \\) of degree \\(t\\).
-2. Let \\(F_0, \ldots, F_p \leftarrow [a_0] G_1, \ldots, [a_t] G_1 \\)
+1. The dealer chooses a uniformly random polynomial \\(f(x) = \sum_{i=0}^t a_i x^i \\) of degree \\(t\\).
+2. Let \\(F_0, \ldots, F_t \leftarrow [a_0] G_1, \ldots, [a_t] G_1 \\)
3. Let \\(\hat{u}_2 \leftarrow [a_0] \hat{u}_1 \\)
4. For each validator \\(i\\), for each \\(\omega_j \in \Omega_i\\), encrypt the evaluation \\( \hat{Y}_{i, \omega_j} \leftarrow [f(\omega_j)] ek_i \\)
5. Post the signed message \\(\tau, (F_0, \ldots, F_t), \hat{u}_2, (\hat{Y}_{i,\omega_j})\\) to the blockchain
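The dealing steps above can be traced in a deliberately insecure toy model where group elements are represented by their discrete logs (bare scalars mod a prime), so \\([a]X\\) is just modular multiplication. Every concrete value below (\\(q\\), \\(t\\), the keys, the domain points) is made up for illustration:

```python
# Toy walk-through of the dealer's steps (NOT real cryptography: group
# elements are bare scalars mod q, so discrete logs are trivial; this
# only illustrates the algebra of the protocol).
import random

random.seed(1)
q = 7919          # toy prime standing in for the group order
G1 = 1            # toy generator of G_1
u1_hat = 5        # toy generator \hat{u}_1

t = 2                                              # polynomial degree
a = [random.randrange(q) for _ in range(t + 1)]    # coefficients a_0..a_t

def f(x):
    """Evaluate f(x) = sum_i a_i x^i mod q."""
    return sum(c * pow(x, i, q) for i, c in enumerate(a)) % q

# Step 2: commitments F_k = [a_k] G_1
F = [(c * G1) % q for c in a]
# Step 3: u2_hat = [a_0] u1_hat
u2_hat = (a[0] * u1_hat) % q

# Step 4: encrypt validator i's evaluations under its key ek_i = [d] G_1
d = 417                        # made-up decryption key
ek = (d * G1) % q
Omega = [3, 9, 27]             # made-up share-domain points omega_j
Y = [(f(w) * ek) % q for w in Omega]   # encrypted shares Y_{i,omega_j}
# Step 5: the dealer would sign and post (tau, F, u2_hat, Y) on chain.
```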

## Public verification

1. Check \\(e(F_0, \hat{u}_1) = e(G_1, \hat{u}_2)\\)
-2. Compute by FFT \\(A_1, \ldots, A_W \leftarrow [f(\omega_0)]G_1, \ldots, [f(\omega_W)]G_1 \\)
+2. Compute by FFT \\(A_1, \ldots, A_W \leftarrow [f(\omega_0)]G_1, \ldots, [f(\omega_{W-1})]G_1 \\)
3. Partition \\(A_1, \ldots, A_W\\) into \\(A_{i,\omega_j} \\) for validator \\(i\\)'s shares \\(\omega_j\\)
-4. For each encrypted share \\(\hat{Y}_{i,\omega_i} \\), check \\(e(G_1, \hat{Y}_{i,\omega_j}) = e(A_{i,\omega_j}, ek_i) \\)
+4. For each encrypted share \\(\hat{Y}_{i,\omega_j} \\), check \\(e(G_1, \hat{Y}_{i,\omega_j}) = e(A_{i,\omega_j}, ek_i) \\)
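These checks can be traced in the same insecure toy scalar model, with the pairing modeled as plain multiplication mod \\(q\\) (bilinear by construction, zero security). A real verifier computes all \\(W\\) values \\(A_{i,\omega_j}\\) with one FFT; the sketch just evaluates \\(f\\) directly. All concrete values are made up:

```python
# Public verification in a toy scalar model: e(X, Y) := X*Y mod q is
# bilinear by construction but has no security. a, d, and Omega are
# made-up values mirroring the toy dealing example.
q, G1, u1_hat = 7919, 1, 5

def e(X, Y):
    """Toy 'pairing': plain multiplication mod q."""
    return (X * Y) % q

a = [11, 22, 33]                # dealer's coefficients a_0..a_2 (t = 2)
def f(x):
    return sum(c * pow(x, i, q) for i, c in enumerate(a)) % q

F = [(c * G1) % q for c in a]   # commitments posted by the dealer
u2_hat = (a[0] * u1_hat) % q
d = 417
ek = (d * G1) % q               # validator's encryption key ek = [d]G_1
Omega = [3, 9, 27]
Y = [(f(w) * ek) % q for w in Omega]

# Check 1: e(F_0, u1_hat) == e(G_1, u2_hat)
assert e(F[0], u1_hat) == e(G1, u2_hat)
# Checks 2-4: recompute A_{i,omega_j} = [f(omega_j)]G_1 (a real verifier
# batches all W evaluations with an FFT) and compare pairings per share.
for w, Yj in zip(Omega, Y):
    A = (f(w) * G1) % q
    assert e(G1, Yj) == e(A, ek)
```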

## Aggregation

@@ -35,4 +35,4 @@ Multiple PVSS instances can be aggregated into one by a single validator, speedi

It is critical that all validators agree on which PVSS instances are used to create the final key; in particular, this is exactly what makes Ferveo depend on a synchronous consensus protocol like Tendermint. Therefore, the validators must all verify the PVSS instances and agree on the set of valid PVSS instances; or in the case where a validator has aggregated all PVSS instances, the validator set must agree on a valid aggregation of PVSS instances.

Although full nodes can certainly verify PVSS instances or an aggregation, they are not required to verify either.