From add9be4bfa3ba50f1389a917a94286292a8007e1 Mon Sep 17 00:00:00 2001
From: Song Xuyang
Date: Tue, 12 Apr 2022 12:22:41 +0800
Subject: [PATCH] [book]: fix some typos

---
 book/src/dkginit.md |  4 ++--
 book/src/pvss.md    | 10 +++++-----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/book/src/dkginit.md b/book/src/dkginit.md
index 4477f73e..c7e939c7 100644
--- a/book/src/dkginit.md
+++ b/book/src/dkginit.md
@@ -2,7 +2,7 @@
 
 Each DKG session begins by choosing a unique integer session id \\(\tau\\). This can begin at 0 and then be incremented from the previous \\(\tau\\). When strongly integrated into Tendermint, the epoch number can be used as \\(tau\\), with the note that additional DKG sessions within an epoch (for example, to do key refresh) must use a unique \\(\tau\\).
 
-# Share partitioning 
+# Share partitioning
 
 In general the validator's staking weights will total much greater than \\(W\\), the number of shares issued in the DKG; therefore, the staking weights will have to be scaled and rounded.
 
@@ -10,4 +10,4 @@ The algorithm to assign relative weights achieves exactly the desired total weig
 
 Using the consensus layer, all validators should agree on a canonical ordering of \\((pk_i, w_i)\\)$ where \\(pk_i\\) is the public key of the \\(i\\)th validator and \\(w_i\\) is number of shares belonging to node \\(i\\). The value \\(i\\) is the integer id of the node with public key \\(pk_i\\).
 
-Let \\(\Psi_{i} = \{ a, a+1, \ldots, a+w_i \}$\\) be the disjoint partition described above such that \\(\cup_i \Psi_{i} = \{0,1, \ldots, W-1\}\\), and \\(\Omega_{i} = \{ \omega^k \ mid k \in \Psi_{i} \}\\). \\(\Psi_i\\) are the **share indexes** assigned to the \\(i\\)th validator and \\(\Omega_i\\) is the **share domain** of the \\(i\\)th validator.
\ No newline at end of file
+Let \\(\Psi_{i} = \{ a, a+1, \ldots, a+w_i - 1 \}\\) be the disjoint partition described above such that \\(\cup_i \Psi_{i} = \{0,1, \ldots, W-1\}\\), and \\(\Omega_{i} = \{ \omega^k \mid k \in \Psi_{i} \}\\). \\(\Psi_i\\) are the **share indexes** assigned to the \\(i\\)th validator and \\(\Omega_i\\) is the **share domain** of the \\(i\\)th validator.
\ No newline at end of file
diff --git a/book/src/pvss.md b/book/src/pvss.md
index 0c4791a1..100dcae8 100644
--- a/book/src/pvss.md
+++ b/book/src/pvss.md
@@ -4,8 +4,8 @@ The PVSS scheme used is a modified Scrape PVSS.
 
 ## Dealer's role
 
-1. The dealer chooses a uniformly random polynomial \\(f(x) = \sum^p_i a_i x^i \\) of degree \\(t\\).
-2. Let \\(F_0, \ldots, F_p \leftarrow [a_0] G_1, \ldots, [a_t] G_1 \\)
+1. The dealer chooses a uniformly random polynomial \\(f(x) = \sum^t_{i=0} a_i x^i \\) of degree \\(t\\).
+2. Let \\(F_0, \ldots, F_t \leftarrow [a_0] G_1, \ldots, [a_t] G_1 \\)
 3. Let \\(\hat{u}_2 \rightarrow [a_0] \hat{u_1} \\)
 4. For each validator \\(i\\), for each \\(\omega_j \in \Omega_i\\), encrypt the evaluation \\( \hat{Y}_{i, \omega_j} \leftarrow [f(\omega_j)] ek_i \\)
 4. Post the signed message \\(\tau, (F_0, \ldots, F_t), \hat{u}_2, (\hat{Y}_{i,\omega_j})\\) to the blockchain
@@ -13,9 +13,9 @@
 ## Public verification
 
 1. Check \\(e(F_0, \hat{u}_1)= e(G_1, \hat{u_2})\\)
-2. Compute by FFT \\(A_1, \ldots, A_W \leftarrow [f(\omega_0)]G_1, \ldots, [f(\omega_W)]G_1 \\)
+2. Compute by FFT \\(A_1, \ldots, A_W \leftarrow [f(\omega_0)]G_1, \ldots, [f(\omega_{W-1})]G_1 \\)
 3. Partition \\(A_1, \ldots, A_W\\) into \\(A_{i,\omega_j} \\) for validator \\(i\\)'s shares \\(\omega_j\\)
-4. For each encrypted share \\(\hat{Y}_{i,\omega_i} \\), check \\(e(G_1, \hat{Y}_{i,\omega_j}) = e(A_{i,\omega_j}, ek_i) \\)
+4. For each encrypted share \\(\hat{Y}_{i,\omega_j} \\), check \\(e(G_1, \hat{Y}_{i,\omega_j}) = e(A_{i,\omega_j}, ek_i) \\)
 
 ## Aggregation
 
@@ -35,4 +35,4 @@ Multiple PVSS instances can be aggregated into one by a single validator, speedi
 
 It is critical that all validators agree on which PVSS instances are used to create the final key; in particular, this is exactly what makes Ferveo depend on a synchronous consensus protocol like Tendermint. Therefore, the validators must all verify the PVSS instances and agree on the set of valid PVSS instances; or in the case where a validator has aggregated all PVSS instances, the validator set must agree on a valid aggregation of PVSS instances.
 
-However, although full nodes can certainly perform the verification of a PVSS instance or aggregation, full nodes do not need to verify either the PVSS instances or the aggregation. 
\ No newline at end of file
+However, although full nodes can certainly perform the verification of a PVSS instance or aggregation, full nodes do not need to verify either the PVSS instances or the aggregation.
\ No newline at end of file
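
The share-partitioning text touched above says the scaled weights must total exactly W and that validator i receives the contiguous indexes Psi_i = { a, a+1, ..., a + w_i - 1 }. As a minimal, dependency-free Rust sketch of one way to meet both constraints (largest-remainder rounding; the function names are hypothetical and this is not Ferveo's actual algorithm or code):

```rust
// Hypothetical sketch, not Ferveo's implementation: scale raw staking
// weights to exactly W shares, then assign each validator i the
// contiguous share indexes Psi_i = { a, a+1, ..., a + w_i - 1 }.

/// Scale `stakes` to integer weights summing to exactly `w_total`.
fn scale_weights(stakes: &[u64], w_total: u64) -> Vec<u64> {
    let total: u128 = stakes.iter().map(|&s| s as u128).sum();
    assert!(total > 0, "total stake must be positive");
    // Floor of stake_i * W / total, keeping the remainder for tie-breaking.
    let mut scaled: Vec<(usize, u64, u128)> = stakes
        .iter()
        .enumerate()
        .map(|(i, &s)| {
            let num = s as u128 * w_total as u128;
            (i, (num / total) as u64, num % total)
        })
        .collect();
    let assigned: u64 = scaled.iter().map(|&(_, w, _)| w).sum();
    // Hand the leftover shares, one each, to the largest remainders,
    // so the rounded weights sum to exactly w_total.
    scaled.sort_by(|a, b| b.2.cmp(&a.2));
    for entry in scaled.iter_mut().take((w_total - assigned) as usize) {
        entry.1 += 1;
    }
    scaled.sort_by_key(|&(i, _, _)| i); // restore the canonical ordering
    scaled.into_iter().map(|(_, w, _)| w).collect()
}

/// Assign the contiguous index ranges Psi_i; their union is 0..W.
fn partition(weights: &[u64]) -> Vec<std::ops::Range<u64>> {
    let mut next = 0u64;
    weights
        .iter()
        .map(|&w| {
            let psi = next..next + w;
            next += w;
            psi
        })
        .collect()
}

fn main() {
    let stakes = [700u64, 250, 50]; // raw staking weights
    let weights = scale_weights(&stakes, 16); // W = 16 shares in the DKG
    assert_eq!(weights.iter().sum::<u64>(), 16);
    println!("{:?}", partition(&weights)); // [0..11, 11..15, 15..16]
}
```

Largest-remainder rounding is one standard way to hit the exact total; disjointness of the Psi_i and the union 0..W-1 then follow directly from handing out consecutive ranges in the agreed validator order.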
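The per-share check fixed in step 4 of public verification is a plain pairing equation. Below is a self-contained sketch of why e(G_1, Y_hat) = e(A, ek_i) holds exactly when Y_hat = [f(omega_j)] ek_i and A = [f(omega_j)] G_1. It assumes the arkworks 0.4 crates with BLS12-381 standing in for whichever curve Ferveo instantiates; the variable names are illustrative, not Ferveo's real types.

```rust
// Hypothetical sketch of the step-4 pairing check, assuming arkworks 0.4
// (ark-bls12-381, ark-ec, ark-ff, ark-std); Ferveo's real code differs.
use ark_bls12_381::{Bls12_381, Fr, G1Projective as G1, G2Projective as G2};
use ark_ec::{pairing::Pairing, Group};
use ark_ff::UniformRand;

fn main() {
    let mut rng = ark_std::test_rng();

    // A share evaluation f(omega_j) and a validator encryption key ek in G2.
    let f_eval = Fr::rand(&mut rng);
    let ek = G2::generator() * Fr::rand(&mut rng);

    // The dealer posts the encrypted share Y_hat = [f(omega_j)] ek; verifiers
    // recompute A = [f(omega_j)] G_1 publicly (by FFT from F_0, ..., F_t).
    let y_hat = ek * f_eval;
    let a = G1::generator() * f_eval;

    // e(G_1, [f] ek) = e(G_1, ek)^f = e([f] G_1, ek), so the check passes
    // exactly when Y_hat encrypts the correct evaluation.
    assert_eq!(
        Bls12_381::pairing(G1::generator(), y_hat),
        Bls12_381::pairing(a, ek)
    );
}
```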