Bivariate polynomial-based secret sharing

Summary

Bivariate polynomial secret sharing encodes a secret as the constant term of a bivariate polynomial \(B(x,y)\) over a finite field, with \(B(0,0) = s\). Each party receives a share polynomial \(r_i(x) = B(x, i)\) and a recovery polynomial \(c_i(y) = B(i, y)\). The bivariate structure enables pairwise consistency checks: two parties can independently compute the same point \(B(j, i)\) from their respective slices.

Used by BGW [1] for synchronous VSS/MPC and by Cachin et al. [2] for asynchronous verifiable secret sharing (AVSS).

Bivariate polynomial as interpolation of univariate polynomials

A bivariate polynomial \(B(x,y)\) of degree \(d_x\) in \(x\) and \(d_y\) in \(y\) can be written as:

\[B(x,y) = \sum_{j=0}^{d_y} c_j(x) \cdot y^j\]

where each \(c_j(x)\) is a univariate polynomial of degree \(d_x\). So \(B\) is a polynomial in \(y\) whose "coefficients" are themselves polynomials in \(x\).

Pick \(d_y + 1\) distinct points \(y_0, \ldots, y_{d_y}\). At each \(y_k\), we get a univariate polynomial in \(x\):

\[g_k(x) = B(x, y_k)\]

Each \(g_k\) has degree at most \(d_x\). Since \(B(x,y)\) has degree \(d_y\) in \(y\), we can recover \(B\) from these \(d_y + 1\) polynomials via Lagrange interpolation in \(y\):

\[B(x,y) = \sum_{k=0}^{d_y} g_k(x) \cdot L_k(y) \qquad \text{where } L_k(y) = \prod_{j \neq k} \frac{y - y_j}{y_k - y_j}\]

By symmetry of the argument, fixing \(d_x + 1\) points in \(x\) and interpolating gives \(d_x + 1\) polynomials of degree \(d_y\) in \(y\) that also determine \(B\).
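This slice-and-interpolate view can be checked numerically. Below is a minimal Python sketch over \(\mathbb{F}_{11}\); the polynomial \(B\) (borrowed from the example later in this document) and the sample points are illustrative choices.

```python
# Recover B(x, y) from d_y + 1 univariate slices g_k(x) = B(x, y_k)
# by Lagrange interpolation in y, over F_11.
p = 11

def b(x, y):                 # B(x, y) = 7 + 3x + 2y + 5xy, so d_x = d_y = 1
    return (7 + 3*x + 2*y + 5*x*y) % p

ys = [1, 2]                  # d_y + 1 = 2 distinct y-points

def lagrange_basis(k, y):    # L_k(y) = prod_{j != k} (y - y_j) / (y_k - y_j)
    num, den = 1, 1
    for j, yj in enumerate(ys):
        if j != k:
            num = num * (y - yj) % p
            den = den * (ys[k] - yj) % p
    return num * pow(den, p - 2, p) % p   # divide = multiply by inverse mod p

def b_interp(x, y):          # B(x, y) = sum_k g_k(x) * L_k(y)
    return sum(b(x, ys[k]) * lagrange_basis(k, y) for k in range(len(ys))) % p

# The interpolated polynomial agrees with B at every point of F_11 x F_11.
assert all(b_interp(x, y) == b(x, y) for x in range(p) for y in range(p))
```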

Matrix view

From polynomial to matrix

A univariate polynomial \(f(x) = a_0 + a_1 x + a_2 x^2\) can be written as a row-column product:

\[f(x) = \begin{pmatrix} a_0 & a_1 & a_2 \end{pmatrix} \begin{pmatrix} 1 \\ x \\ x^2 \end{pmatrix}\]

A bivariate polynomial works the same way, but with a matrix of coefficients in the middle. Start with the general form:

\[B(x,y) = \sum_{i=0}^{d_x} \sum_{j=0}^{d_y} a_{ij} \, x^i y^j\]

Collect the $x$-powers into a column vector \(\mathbf{x} = (1, x, \ldots, x^{d_x})^T\) and the $y$-powers into \(\mathbf{y} = (1, y, \ldots, y^{d_y})^T\). Then:

\[B(x,y) = \mathbf{y}^T A \, \mathbf{x}\]

where \(A\) is the \((d_y + 1) \times (d_x + 1)\) coefficient matrix defined by: \(A_{ij}\) is the coefficient of \(x^j y^i\). That is, row \(i\) of \(A\) collects the coefficients of the \(y^i\) term, and column \(j\) collects the coefficients of the \(x^j\) term. Note: \(A_{ij}\) is a coefficient of the polynomial, not an evaluation – \(A_{ij} \neq B(i,j)\) in general. The one exception is \(A_{00} = B(0,0) = s\).

Properties of the matrix representation

Right-multiplying by \(\mathbf{x}\) fixes \(x\) and gives the coefficients of a polynomial in \(y\) (a column slice):

\[A \, \mathbf{x} = \text{coefficient vector of } B(x, y) \text{ as a polynomial in } y\]

Left-multiplying by \(\mathbf{y}^T\) fixes \(y\) and gives the coefficients of a polynomial in \(x\) (a row slice):

\[\mathbf{y}^T A = \text{coefficient vector of } B(x, y) \text{ as a polynomial in } x\]

Bivariate polynomial secret sharing protocol

Setup

The dealer picks a random bivariate polynomial \(B(x,y)\) of degree \(t\) in both \(x\) and \(y\), with \(B(0,0) = s\). The coefficient matrix \(A\) is \((t+1) \times (t+1)\) with \((t+1)^2\) entries, of which \(A_{00} = s\) is fixed and the rest are random.

What each party receives

Each party \(P_i\) receives two univariate polynomials:

  • Share polynomial: \(r_i(x) = B(x, i)\) – fix \(y = i\), vary \(x\). This is a degree-\(t\) polynomial in \(x\).
  • Recovery polynomial: \(c_i(y) = B(i, y)\) – fix \(x = i\), vary \(y\). This is a degree-\(t\) polynomial in \(y\).

In the matrix view:

\[r_i(x) = \mathbf{i}^T A \, \mathbf{x} \qquad c_i(y) = \mathbf{y}^T A \, \mathbf{i}\]

where \(\mathbf{i} = (1, i, i^2, \ldots, i^t)^T\) is the monomial vector for the scalar \(i\).

The share polynomial \(r_i\) comes from left-multiplying \(A\) by \(\mathbf{i}^T\) (fixing \(y = i\)). The recovery polynomial \(c_i\) comes from right-multiplying \(A\) by \(\mathbf{i}\) (fixing \(x = i\)).

Example: \(B(x,y) = 7 + 3x + 2y + 5xy\) over \(\mathbb{F}_{11}\)

Here \(d_x = d_y = t = 1\), \(n = 4\) parties, secret \(s = 7\).

\[A = \begin{pmatrix} 7 & 3 \\ 2 & 5 \end{pmatrix}\]

Note: \(A\) is not symmetric (\(A_{01} = 3 \neq 2 = A_{10}\)). This is the general case.
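The full sharing for this example can be spelled out in a few lines of Python; this sketch evaluates \(B\) directly rather than manipulating coefficient vectors:

```python
# Shares and cross-checks for B(x, y) = 7 + 3x + 2y + 5xy over F_11,
# with t = 1 and n = 4 parties.
p = 11
A = [[7, 3],
     [2, 5]]                 # A[i][j] = coefficient of x^j y^i

def B(x, y):                 # B(x, y) = y^T A x
    return sum(A[i][j] * pow(x, j, p) * pow(y, i, p)
               for i in range(2) for j in range(2)) % p

def r(i, x):                 # P_i's share polynomial: r_i(x) = B(x, i)
    return B(x, i)

def c(i, y):                 # P_i's recovery polynomial: c_i(y) = B(i, y)
    return B(i, y)

# Pairwise cross-check: P_i's r_i(j) must equal P_j's c_j(i) = B(j, i).
for i in range(1, 5):
    for j in range(1, 5):
        assert r(i, j) == c(j, i) == B(j, i)

assert B(0, 0) == 7          # the secret sits at (0, 0)
```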

Cross-checking: why it works

The key idea: \(P_i\)'s share polynomial and \(P_j\)'s recovery polynomial both pass through the point \(B(j, i)\).

\(P_i\) can compute \(r_i(j) = B(j, i)\) from its share polynomial.

\(P_j\) can compute \(c_j(i) = B(j, i)\) from its recovery polynomial.

These are the same value – both equal \(B(j, i)\) – because both are evaluations of the same bivariate polynomial at the same point, accessed through different slices.

In the matrix view:

\[r_i(j) = \mathbf{i}^T A \, \mathbf{j} = B(j, i)\]

\[c_j(i) = \mathbf{i}^T A \, \mathbf{j} = B(j, i)\]

They are literally the same expression. No symmetry of \(A\) is needed.

Correctness properties

Define the monomial vector for a scalar \(k\) as:

\[\mathbf{k} = \begin{pmatrix} 1 \\ k \\ k^2 \\ \vdots \\ k^t \end{pmatrix}\]

Lemma 1 (Cross-check correctness): if the dealer distributes share and recovery polynomials of the same polynomial \(B\), then \(r_i(j) = c_j(i)\) for all \(i, j\).

Proof. Both are evaluations of \(B\) at the point \((j, i)\):

\[r_i(j) = \mathbf{i}^T A \, \mathbf{j} = B(j, i)\]

\[c_j(i) = \mathbf{i}^T A \, \mathbf{j} = B(j, i)\]

They are the same expression. \(\square\)

Contrapositive: if \(r_i(j) \neq c_j(i)\) for some honest pair \((P_i, P_j)\), then the dealer did not distribute consistent share and recovery polynomials from a single polynomial.

Lemma 2 (Uniqueness from recovery polynomials): let \(I = \{i_1, \ldots, i_{t+1}\} \subset \mathbb{F}\) be distinct indices. If \(t + 1\) parties hold degree-\(t\) recovery polynomials \(c_{i_1}, \ldots, c_{i_{t+1}}\), then there is at most one \((t+1) \times (t+1)\) matrix \(A\) consistent with all of them.

Proof.

Step 1: write out what party \(P_k\) knows. \(P_k\) holds the recovery polynomial:

\[c_k(y) = \mathbf{y}^T A \, \mathbf{k}\]

The vector \(A \, \mathbf{k}\) is a \((t+1) \times 1\) column vector – it contains the coefficients of \(c_k(y)\) as a polynomial in \(y\). That is, if \(A \, \mathbf{k} = \begin{pmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_t \end{pmatrix}\), then \(c_k(y) = \alpha_0 + \alpha_1 y + \cdots + \alpha_t y^t\).

So knowing \(c_k\) is equivalent to knowing the column vector \(A \, \mathbf{k}\).

Step 2: write down all \(t + 1\) parties' equations simultaneously. For each \(k \in \{i_1, \ldots, i_{t+1}\}\), we know \(A \, \mathbf{k}\). Place these as columns of a matrix:

\[A \begin{pmatrix} \mathbf{i_1} & \mathbf{i_2} & \cdots & \mathbf{i_{t+1}} \end{pmatrix} = \begin{pmatrix} A\,\mathbf{i_1} & A\,\mathbf{i_2} & \cdots & A\,\mathbf{i_{t+1}} \end{pmatrix}\]

The left side is \(A \cdot V\), where:

\[V = \begin{pmatrix} \mathbf{i_1} & \mathbf{i_2} & \cdots & \mathbf{i_{t+1}} \end{pmatrix} = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ i_1 & i_2 & \cdots & i_{t+1} \\ i_1^2 & i_2^2 & \cdots & i_{t+1}^2 \\ \vdots & \vdots & \ddots & \vdots \\ i_1^t & i_2^t & \cdots & i_{t+1}^t \end{pmatrix}\]

is the \((t+1) \times (t+1)\) Vandermonde matrix built from the party indices.

The right side is the matrix \(F\) whose columns are the known coefficient vectors of the recovery polynomials.

So we have:

\[A \cdot V = F\]

Step 3: \(V\) is invertible because \(i_1, \ldots, i_{t+1}\) are distinct (the determinant of a Vandermonde matrix is \(\prod_{j > k} (i_j - i_k) \neq 0\) over a field when all indices are distinct).

Step 4: multiply both sides on the right by \(V^{-1}\):

\[A = F \cdot V^{-1}\]

\(F\) is known (from the recovery polynomials) and \(V^{-1}\) is known (from the public party indices). So \(A\) is uniquely determined. \(\square\)
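Lemma 2's recipe \(A = F \cdot V^{-1}\) can be verified numerically. The sketch below uses the running example (\(t = 1\), secret 7, parties 1 and 2) and inverts the \(2 \times 2\) Vandermonde matrix directly mod 11:

```python
# Recover A from t + 1 = 2 recovery polynomials via A = F * V^{-1} over F_11.
p = 11
A = [[7, 3], [2, 5]]         # ground truth; A[i][j] = coefficient of x^j y^i

def mat_vec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) % p for r in range(len(M))]

I = [1, 2]                   # party indices i_1, i_2
V = [[1, 1],
     [I[0], I[1]]]           # Vandermonde: column k is the monomial vector (1, i_k)^T
cols = [mat_vec(A, [1, i]) for i in I]   # A * i_k = coefficient vector of c_{i_k}
F = [[cols[k][row] for k in range(2)] for row in range(2)]   # those columns, as a matrix

det = (V[0][0] * V[1][1] - V[0][1] * V[1][0]) % p
inv_det = pow(det, p - 2, p)             # field inverse via Fermat's little theorem
V_inv = [[V[1][1] * inv_det % p, -V[0][1] * inv_det % p],
         [-V[1][0] * inv_det % p, V[0][0] * inv_det % p]]

A_rec = [[sum(F[r][k] * V_inv[k][c] for k in range(2)) % p for c in range(2)]
         for r in range(2)]
assert A_rec == A            # A (and hence the secret A[0][0]) is pinned down
```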

Lemma 3 (Uniqueness from share polynomials): same statement, replacing recovery polynomials with share polynomials.

Proof. Party \(P_k\)'s share polynomial \(r_k(x)\) has coefficient vector \(\mathbf{k}^T A\), a row. Stacking \(t + 1\) of these as rows:

\[V^T \cdot A = G\]

where \(V\) is the same Vandermonde matrix as in Lemma 2 (transposed, so that row \(k\) of \(V^T\) is \(\mathbf{i_k}^T\)) and \(G\) has the share polynomials' coefficient row vectors as its rows. Since \(V^T\) is invertible:

\[A = (V^T)^{-1} \cdot G\]

\(A\) is uniquely determined. \(\square\)

Theorem (Correctness): if \(t + 1\) honest parties pass pairwise cross-checks and hold degree-\(t\) share and recovery polynomials, then \(A\) is uniquely determined and \(s = A_{00}\) is fixed.

Proof.

Step 1: the \(t + 1\) honest parties hold degree-\(t\) recovery polynomials. By Lemma 2, there is at most one matrix \(A_C\) consistent with these recovery polynomials.

Step 2: the same \(t + 1\) honest parties hold degree-\(t\) share polynomials. By Lemma 3, there is at most one matrix \(A_R\) consistent with these share polynomials.

Step 3: the cross-checks passed (Lemma 1), meaning \(r_i(j) = c_j(i) = B(j, i)\) for all honest pairs. So the share and recovery polynomials are consistent with each other.

Step 4: we show \(A_R = A_C\). From the recovery polynomials, \(A_C \, \mathbf{k} =\) coefficients of \(c_k\) for each honest \(k\). From the share polynomials, \(\mathbf{k}^T A_R =\) coefficients of \(r_k\). The cross-check \(r_i(j) = c_j(i)\) means \(\mathbf{i}^T A_R \, \mathbf{j} = \mathbf{i}^T A_C \, \mathbf{j}\) for all honest \(i, j \in I\). So \(\mathbf{i}^T (A_R - A_C) \mathbf{j} = 0\) for all \(i, j \in I\). By the same Vandermonde argument as before (both sets of vectors are linearly independent), \(A_R - A_C = 0\).

Step 5: therefore \(A = A_R = A_C\) is uniquely determined, and \(s = A_{00}\). \(\square\)

Theorem (Secrecy): an adversary holding at most \(k - 1\) share polynomials and \(k - 1\) recovery polynomials learns nothing about \(s\), where \(k\) is the reconstruction threshold (for the symmetric scheme above, \(k = t + 1\)).

Statement. Let \(\mathcal{B} \subset \{1, \ldots, n\}\) with \(|\mathcal{B}| \leq k - 1\) be the set of corrupt parties. The adversary knows \(r_i(x) = B(x, i)\) and \(c_i(y) = B(i, y)\) for all \(i \in \mathcal{B}\). We show: for every value \(\tilde{s} \in \mathbb{F}\), there exists a bivariate polynomial \(\tilde{B}(x, y)\) of degree \(k - 1\) in each variable such that:

  1. \(\tilde{B}(0, 0) = \tilde{s}\)
  2. \(\tilde{B}(x, i) = B(x, i)\) for all \(i \in \mathcal{B}\) (same share polynomials)
  3. \(\tilde{B}(i, y) = B(i, y)\) for all \(i \in \mathcal{B}\) (same recovery polynomials)

That is, the adversary's view is consistent with every possible secret.

Proof.

Step 1: count constraints vs. degrees of freedom.

The coefficient matrix \(\tilde{A}\) of \(\tilde{B}\) is \(k \times k\), so it has \(k^2\) entries.

The adversary knows \(k - 1\) recovery polynomials. Each \(c_i\) gives \(k\) coefficients (namely \(\tilde{A} \, \mathbf{i}\)), so \(k(k-1)\) values in total. These decouple row by row: for each row \(\mathbf{a}_j^T\) of \(\tilde{A}\) (a \(1 \times k\) vector), the adversary knows:

\[\mathbf{a}_j^T \mathbf{i} \quad \text{for each } i \in \mathcal{B}\]

This is \(k - 1\) linear equations in \(k\) unknowns (\(\mathbf{a}_j\)), leaving a 1-dimensional solution space for each row.

Step 2: so each of the \(k\) rows of \(\tilde{A}\) has exactly 1 free parameter. That gives \(k\) free parameters total.

Step 3: the share polynomials add constraints of their own: knowing \(r_i(x) = B(x, i)\) for \(i \in \mathcal{B}\) means knowing the row projections \(\mathbf{i}^T \tilde{A}\), which constrain the columns of \(\tilde{A}\) the same way the recovery polynomials constrain its rows. (The two sets of constraints overlap only in the intersection values \(B(j, i)\) for \(i, j \in \mathcal{B}\).) Rather than counting the joint solution space, we exhibit a direction that remains free under all of these constraints and moves \(\tilde{A}_{00}\).

Step 4: the constraint \(\tilde{B}(0, 0) = \tilde{s}\) means \(\tilde{A}_{00} = \tilde{s}\). We need to show \(\tilde{A}_{00}\) is not pinned down by the adversary's view.

Step 5: let \(v(x) = \prod_{i \in \mathcal{B}} (x - i)\), a polynomial of degree \(|\mathcal{B}| \leq k - 1\), and let \(\mathbf{v} \in \mathbb{F}^k\) be its coefficient vector (padded with zeros in the high-degree positions). By construction, \(\mathbf{i}^T \mathbf{v} = v(i) = 0\) for every \(i \in \mathcal{B}\).

Step 6: for any \(\beta \in \mathbb{F}\), set \(\tilde{A} = A + \beta \, \mathbf{v} \mathbf{v}^T\) (in polynomial terms, \(\tilde{B}(x, y) = B(x, y) + \beta \, v(x) v(y)\)). For every \(i \in \mathcal{B}\):

\[\tilde{A} \, \mathbf{i} = A \, \mathbf{i} + \beta \, \mathbf{v} (\mathbf{v}^T \mathbf{i}) = A \, \mathbf{i} \qquad \mathbf{i}^T \tilde{A} = \mathbf{i}^T A + \beta (\mathbf{i}^T \mathbf{v}) \mathbf{v}^T = \mathbf{i}^T A\]

so the corrupt parties' recovery and share polynomials are unchanged.

Step 7: the constant term of \(v\) is \(v_0 = \prod_{i \in \mathcal{B}} (-i) = (-1)^{|\mathcal{B}|} \prod_{i \in \mathcal{B}} i\). Since all party indices are nonzero (parties are indexed by \(1, \ldots, n\)), \(v_0 \neq 0\). Therefore \(\tilde{A}_{00} = A_{00} + \beta v_0^2\), and for any \(\tilde{s}\) we can choose \(\beta = (\tilde{s} - s) / v_0^2\) to get \(\tilde{A}_{00} = \tilde{s}\). The degrees of \(\tilde{B}\) remain at most \(k - 1\) in each variable, so \(\tilde{A}\) is a valid coefficient matrix consistent with the adversary's entire view and having \(\tilde{s}\) as the secret. \(\square\)
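The free direction \(v(x) = \prod_{i \in \mathcal{B}}(x - i)\) from the proof can be demonstrated concretely. The sketch below perturbs the running example by \(\beta \, v(x) v(y)\) (a rank-one update to the coefficient matrix); the corrupt set and target secret are illustrative choices.

```python
# The adversary's view is consistent with any secret: B~ = B + beta * v(x) v(y)
# agrees with B on every corrupt row and column slice, yet B~(0, 0) != B(0, 0).
p = 11

def B(x, y):                 # running example, real secret s = 7, k = 2
    return (7 + 3*x + 2*y + 5*x*y) % p

corrupt = [3]                # |corrupt| = k - 1 = 1 corrupt party

def v(x):                    # v vanishes exactly on the corrupt indices
    out = 1
    for i in corrupt:
        out = out * (x - i) % p
    return out

s_target = 9                 # any fake secret the simulator wants
beta = (s_target - B(0, 0)) * pow(v(0) * v(0) % p, p - 2, p) % p

def B_tilde(x, y):
    return (B(x, y) + beta * v(x) * v(y)) % p

assert B_tilde(0, 0) == s_target                  # secret moved from 7 to 9
for i in corrupt:
    for z in range(p):
        assert B_tilde(z, i) == B(z, i)           # same share polynomial r_i
        assert B_tilde(i, z) == B(i, z)           # same recovery polynomial c_i
```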

Reconstruction

Claim: given \(t + 1\) recovery polynomials from honest parties, the secret \(s\) can be recovered.

Proof. Each party \(P_i\) reveals \(c_i(0) = B(i, 0)\). This is a point on the univariate polynomial \(B(x, 0)\), which has degree \(t\) in \(x\).

Given \(t + 1\) points \((i_1, B(i_1, 0)), \ldots, (i_{t+1}, B(i_{t+1}, 0))\), Lagrange interpolation recovers \(B(x, 0)\) uniquely (since a degree-\(t\) polynomial is determined by \(t + 1\) points).

Evaluating at \(x = 0\): \(B(0, 0) = s\). \(\square\)
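In code, reconstruction is one Lagrange interpolation at \(x = 0\); the sketch below uses the running example over \(\mathbb{F}_{11}\) with parties 1 and 2 revealing \(c_i(0)\):

```python
# Reconstruct s = B(0, 0) from t + 1 revealed values c_i(0) = B(i, 0).
p = 11

def B(x, y):                 # running example, secret s = 7, t = 1
    return (7 + 3*x + 2*y + 5*x*y) % p

points = [(i, B(i, 0)) for i in (1, 2)]   # (index, revealed value) pairs

def interpolate_at_zero(points):
    # s = sum_i y_i * prod_{j != i} (0 - x_j) / (x_i - x_j), all mod p
    s = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        s = (s + yi * num * pow(den, p - 2, p)) % p
    return s

assert interpolate_at_zero(points) == 7   # s recovered
```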

Asymmetric bivariate polynomials: decoupling thresholds

In all the constructions above, the bivariate polynomial has the same degree \(t\) in both variables (\(d_x = d_y = t\)), so the coefficient matrix \(A\) is square. Kokoris-Kogias et al. [3] observe that making the degrees different (\(d_x \neq d_y\)) allows decoupling the reconstruction threshold from the share recovery threshold.

Let \(t < n/3\) be the corruption threshold, \(k\) the reconstruction threshold, and \(n = 3t + 1\).

The problem with equal degrees

In Cachin et al.'s AVSS with \(d_x = d_y = t\):

  • The reconstruction threshold is \(k = t + 1\) (from degree \(t\) in \(x\)).
  • During the sharing phase, the dealer can only wait for \(n - t\) parties to confirm (the rest might be slow or corrupt).
  • Of those \(n - t\) parties, up to \(t\) might be corrupt, so at most \(n - 2t\) honest parties complete the sharing.
  • For reconstruction, we need \(k\) honest shares, so \(k \leq n - 2t = t + 1\).

This limits the reconstruction threshold to \(k = t + 1\), a low-threshold scheme. For applications like threshold signatures and DKG, we often need \(k\) closer to \(n - t\) (high-threshold).

The asymmetric solution

Use a bivariate polynomial \(B(x, y)\) with degree \(k - 1\) in \(x\) and degree \(t\) in \(y\), with \(B(0,0) = s\). The key idea: set the $x$-degree (\(k - 1\)) independently of the corruption threshold \(t\), allowing \(k\) to range from \(t + 1\) up to \(n - t\).

In the matrix view, the coefficient matrix \(A\) is now \((t + 1) \times k\) – rectangular, not square. It has \(t + 1\) rows (from the $y$-degree) and \(k\) columns (from the $x$-degree). When \(k > t + 1\), \(A\) is "wide".

Each party \(P_i\) receives two polynomials that now have different degrees:

  • Share polynomial: \(r_i(x) = B(x, i)\) – degree \(k - 1\) in \(x\). Used for secret reconstruction.
  • Recovery polynomial: \(c_i(y) = B(i, y)\) – degree \(t\) in \(y\). Used for share recovery.

The share of the secret is \(S_i = c_i(0) = B(i, 0)\).

Why this decouples the thresholds

The two degrees serve different purposes:

Reconstruction (degree \(k - 1\) in \(x\)): the univariate polynomial \(B(x, 0)\) has degree \(k - 1\). Given \(k\) shares \(S_i = B(i, 0)\), Lagrange interpolation recovers \(B(x, 0)\) and hence \(s = B(0, 0)\). The reconstruction threshold \(k\) can now be set independently of \(t\).

Share recovery (degree \(t\) in \(y\)): suppose an honest party \(P_m\) is slow and doesn't complete the sharing directly. The parties that did complete hold share polynomials \(r_j(x) = B(x, j)\). Each can compute \(r_j(m) = B(m, j) = c_m(j)\) and send it to \(P_m\). Since \(c_m(y) = B(m, y)\) has degree \(t\), party \(P_m\) needs \(t + 1\) such values to interpolate \(c_m\) and recover its share \(S_m = c_m(0)\).

At least \(n - 2t = t + 1\) of the parties that complete the sharing directly are honest (the dealer waits for \(n - t\) confirmations, of which at most \(t\) come from corrupt parties). These \(t + 1\) honest parties suffice to help every slow honest \(P_m\) recover, so eventually all \(n - t\) honest parties hold their shares.
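A sketch of the recovery round, under illustrative parameters (\(t = 1\), \(k = 3\), and a hypothetical \(B\) of degree 2 in \(x\) and 1 in \(y\)): each helper \(P_j\) sends the slow party \(P_m\) a point \(c_m(j) = B(m, j)\) of its degree-\(t\) recovery polynomial.

```python
# Share recovery: slow party P_m interpolates its degree-t recovery polynomial
# c_m(y) = B(m, y) from t + 1 helper values, then reads off S_m = c_m(0).
p = 11

def B(x, y):                 # degree k - 1 = 2 in x, degree t = 1 in y
    return (7 + 3*x + 4*x*x + 2*y + 5*x*y + 6*x*x*y) % p

m = 4                        # the slow party's index
helpers = [1, 2]             # t + 1 honest parties that completed the sharing

# Helper P_j can compute B(m, j) from its own slice of B; this is the
# point of c_m(y) at y = j.
received = [(j, B(m, j)) for j in helpers]

def interpolate(points, at): # Lagrange interpolation over F_p
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j2, (xj, _) in enumerate(points):
            if j2 != i:
                num = num * (at - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

S_m = interpolate(received, 0)      # S_m = c_m(0) = B(m, 0)
assert S_m == B(m, 0)
```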

In the matrix view

The coefficient matrix \(A\) is \((t+1) \times k\):

\[B(x,y) = \mathbf{y}^T A \, \mathbf{x}\]

where \(\mathbf{x} = (1, x, \ldots, x^{k-1})^T\) is \(k \times 1\) and \(\mathbf{y} = (1, y, \ldots, y^t)^T\) is \((t+1) \times 1\).

  • Share polynomial coefficients: \(\mathbf{i}_t^T A\) is a \(1 \times k\) row vector (coefficients of \(r_i(x)\)), where \(\mathbf{i}_t = (1, i, \ldots, i^t)^T\).
  • Recovery polynomial coefficients: \(A \, \mathbf{i}_{k-1}\) is a \((t+1) \times 1\) column vector (coefficients of \(c_i(y)\)), where \(\mathbf{i}_{k-1} = (1, i, \ldots, i^{k-1})^T\).

To recover \(A\) from recovery polynomials: \(k\) recovery polynomials give \(A \cdot V_k = F\) where \(V_k\) is \(k \times k\) Vandermonde, invertible.

To recover \(A\) from share polynomials: \(t + 1\) share polynomials give \(V_{t+1} \cdot A = G\) where \(V_{t+1}\) is \((t+1) \times (t+1)\) Vandermonde, invertible.

Secrecy

The secret is protected by the $x$-dimension: \(B(x, 0)\) has degree \(k - 1\), so \(k - 1\) shares reveal nothing about \(s\) (same Shamir argument). The $y$-dimension is for recovery, not secrecy. Privacy holds as long as fewer than \(k\) honest parties have started reconstruction [3].

Summary of threshold relationships

| Parameter | Symmetric (\(d_x = d_y\)) | Asymmetric (\(d_x \neq d_y\)) |
| --- | --- | --- |
| Degree in \(x\) | \(t\) | \(k - 1\) |
| Degree in \(y\) | \(t\) | \(t\) |
| Matrix \(A\) | \((t+1) \times (t+1)\) | \((t+1) \times k\) |
| Reconstruction | \(t + 1\) | \(k\) (independent of \(t\)) |
| Recovery | \(t + 1\) | \(t + 1\) |
| Max threshold | \(t + 1\) (i.e., \(n - 2t\)) | \(n - t\) |

Notes

  • Both BGW [1] and Cachin et al. [2] use the row-and-column approach with general (non-symmetric) bivariate polynomials. The cross-check works because both parties evaluate the same bivariate polynomial at the same point, not because of symmetry.
  • BGW operates in the synchronous setting with a complaint mechanism for verification and achieves information-theoretic security.
  • Cachin et al. extend to the asynchronous setting using Bracha's reliable broadcast and Pedersen commitments, achieving computational correctness and unconditional privacy.
  • TODO: State of the art is

References

[1] Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson. 1988. “Completeness theorems for non-cryptographic fault-tolerant distributed computation.” https://dl.acm.org/doi/10.1145/62212.62213.
[2] Christian Cachin, Klaus Kursawe, Anna Lysyanskaya, and Reto Strobl. 2002. “Asynchronous verifiable secret sharing and proactive cryptosystems.” http://portal.acm.org/citation.cfm?doid=586110.586124.
[3] Eleftherios Kokoris-Kogias, Dahlia Malkhi, and Alexander Spiegelman. 2020. “Asynchronous Distributed Key Generation for Computationally-Secure Randomness, Consensus, and Threshold Signatures.” https://dl.acm.org/doi/10.1145/3372297.3423364.