
DEEP polynomial with Lagrange kernel #265

Closed
Al-Kindi-0 opened this issue Mar 26, 2024 · 1 comment

Currently, the DEEP polynomial is computed as:

$$ Y(x) = \sum_{i=0}^k{ \alpha_i \cdot \left(\frac{T_i(x) - T_i(z)}{x - z} + \frac{T_i(x) - T_i(z \cdot g)}{x - z \cdot g} \right)} + \sum_{i=0}^m{\gamma_i \cdot \frac{H_i(x) - H_i(z)}{x - z}} $$

where:

  1. $T_i$ are trace polynomials.
  2. $H_i$ are constraint composition polynomial segment polynomials.

With the provisions being made for the Lagrange kernel in #247, we need to accommodate those changes in the above. More explicitly, we should have the following updated sum instead:

$$ Y(x) = \sum_{i=0}^k{ \alpha_i \cdot \left(\frac{T_i(x) - T_i(z)}{x - z} + \frac{T_i(x) - T_i(z \cdot g)}{x - z \cdot g} \right)} + \beta \cdot \sum_{i=0}^{\log(n)}\frac{T_l(x) - T_l(z_i)}{x - z_i} + \sum_{i=0}^m{\gamma_i \cdot \frac{H_i(x) - H_i(z)}{x - z}} $$

where $T_l(x)$ is the Lagrange kernel column and the $z_i$'s form the following sequence:

$$ z,\quad z \cdot g,\quad z \cdot g^2,\quad z \cdot g^4,\quad \ldots,\quad z \cdot g^{2^{\log(n) - 1}} $$

The above should be straightforward to adapt in the current code-base, but it would mean that the cost increases as if the trace were wider by $\log(n)$ extra columns in the extension field.
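As a toy illustration of the new term (not the actual code-base API: the field $\mathbb{F}_{97}$, the generator, and all names below are hypothetical stand-ins), the Lagrange kernel contribution $\beta \cdot \sum_i \frac{T_l(x) - T_l(z_i)}{x - z_i}$ could be evaluated at a single out-of-domain point like this:

```python
# Toy sketch over the prime field F_97; names and parameters are hypothetical.
P = 97   # small prime modulus, stand-in for the real (extension) field
G = 22   # element of multiplicative order 4 in F_97 (22^2 = -1 mod 97), so n = 4

def f_eval(coeffs, x):
    """Horner evaluation of a polynomial given by low-to-high coefficients."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange_kernel_term(t_l, z, x, beta, log_n):
    """Evaluate beta * sum_i (T_l(x) - T_l(z_i)) / (x - z_i) at one point x."""
    # z_i sequence: z, z*g, z*g^2, z*g^4, ..., z*g^(2^(log_n - 1))
    zs = [z] + [z * pow(G, 2**i, P) % P for i in range(log_n)]
    tx = f_eval(t_l, x)
    total = 0
    for zi in zs:
        num = (tx - f_eval(t_l, zi)) % P
        den_inv = pow((x - zi) % P, P - 2, P)  # field inverse via Fermat
        total = (total + num * den_inv) % P
    return beta * total % P
```

Evaluating this for every $x$ in the LDE domain is exactly what adds the cost of roughly $\log(n)$ extra extension-field columns.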

Another way to update the DEEP polynomial would be to work with the following expression instead:

$$ Y(x) = \sum_{i=0}^k{ \alpha_i \cdot \left(\frac{T_i(x) - T_i(z)}{x - z} + \frac{T_i(x) - T_i(z \cdot g)}{x - z \cdot g} \right)} + \beta \cdot \frac{T_l(x) - V_l(x)}{P_l(x)} + \sum_{i=0}^m{\gamma_i \cdot \frac{H_i(x) - H_i(z)}{x - z}} $$

where:

  1. $V_l$ is a polynomial of degree $\log(n)$ interpolating the set $\lbrace \left( z_i, T_l(z_i) \right) \rbrace$.
  2. $P_l$ is the polynomial that vanishes at the $z_i$'s, i.e., $P_l(x) = \prod_{i=0}^{\log(n)} \left(x - z_i\right) = (x - z) \cdot \prod_{i=0}^{\log(n) - 1} \left(x - z \cdot g^{2^i}\right)$.
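To make the two new objects concrete, here is a toy sketch over $\mathbb{F}_{97}$ (all helper names hypothetical, not the code-base API) that evaluates $V_l$ by naive Lagrange interpolation over the arbitrary set of $z_i$'s, and $P_l$ as a product of linear factors:

```python
# Toy sketch over F_97; a real implementation works in the extension field.
P = 97

def inv(a):
    """Field inverse via Fermat's little theorem."""
    return pow(a % P, P - 2, P)

def v_l_eval(points, x):
    """Naive Lagrange evaluation of V_l at x, given points (z_i, T_l(z_i))."""
    total = 0
    for j, (zj, yj) in enumerate(points):
        term = yj
        for k, (zk, _) in enumerate(points):
            if k != j:
                term = term * (x - zk) % P * inv(zj - zk) % P
        total = (total + term) % P
    return total

def p_l_eval(zs, x):
    """Vanishing polynomial P_l(x) = prod_i (x - z_i), evaluated at x."""
    acc = 1
    for zi in zs:
        acc = acc * (x - zi) % P
    return acc
```

By construction $T_l - V_l$ vanishes at every $z_i$, so the quotient $\frac{T_l(x) - V_l(x)}{P_l(x)}$ is a polynomial, which is what the soundness of the DEEP technique relies on.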

Let's look first at what we currently have. Currently, the quotients $\frac{T_i(x) - T_i(z)}{x - z}$ are computed from the coefficient form of the trace polynomials $T_i$ using synthetic division. The resulting polynomial, again in coefficient form, is then evaluated over the LDE domain. This would not work (efficiently) with the above proposal for the Lagrange kernel, since fast synthetic polynomial division requires the divisor polynomial to have a special form.
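For contrast, the linear-divisor case that the current approach relies on can be sketched as synthetic (Ruffini) division; this is exactly the step that has no fast analogue when the divisor is the higher-degree $P_l$ (toy field $\mathbb{F}_{97}$, names hypothetical):

```python
# Toy sketch over F_97: divide T(x) - T(z) by the linear factor (x - z)
# in coefficient form. Runs in O(deg T) field operations.
P = 97

def synthetic_divide(coeffs, z):
    """Return (q, r) with T(x) = (x - z) * q(x) + r, where r = T(z)."""
    q = []
    acc = 0
    for c in reversed(coeffs):   # process highest-degree coefficient first
        acc = (acc * z + c) % P  # running Horner value at z
        q.append(acc)
    remainder = q.pop()          # final Horner value equals T(z)
    q.reverse()                  # back to low-to-high coefficient order
    return q, remainder
```

For example, $T(x) = x^2 + 1$ divided by $(x - 3)$ yields quotient $x + 3$ and remainder $T(3) = 10$.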
What we can do instead is compute $\frac{T_i(x) - T_i(z)}{x - z}$ for $x$ ranging over the LDE domain using batch inversion (there are other optimizations for the numerator as well, described here). This has the advantage that we don't need the coefficient form of the trace polynomials, and thus we can release those much earlier. Moreover, we avoid having to do an extra LDE per column, as we are already in the LDE domain.
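The batch-inversion trick referred to above is Montgomery's: invert a whole vector of denominators at the cost of a single field inversion plus about three multiplications per element. A minimal sketch over the toy field $\mathbb{F}_{97}$ (not the code-base API):

```python
# Montgomery batch inversion over F_97: one pass of prefix products,
# one field inversion, then one backward pass to peel off each inverse.
P = 97

def batch_invert(values):
    """Return element-wise inverses of nonzero field elements in `values`."""
    prefix = []                       # prefix[i] = v_0 * v_1 * ... * v_i
    acc = 1
    for v in values:
        acc = acc * v % P
        prefix.append(acc)
    inv_acc = pow(acc, P - 2, P)      # the single field inversion
    out = [0] * len(values)
    for i in range(len(values) - 1, -1, -1):
        if i == 0:
            out[i] = inv_acc
        else:
            out[i] = inv_acc * prefix[i - 1] % P   # 1/v_i
            inv_acc = inv_acc * values[i] % P      # now 1/(v_0*...*v_{i-1})
    return out
```

Applied per quotient, the denominators $(x - z_i)$ for all $x$ in the LDE domain are inverted together, so no per-point inversion is needed.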
Returning to the Lagrange kernel case, we can now apply the same idea to $\frac{T_l(x) - V_l(x)}{P_l(x)}$. We now face a couple of complications:

  1. Computing $V_l$ needs to be done only once, but it should be evaluated for every $x$. Computing the coefficients of $V_l$ amounts to an interpolation problem over an arbitrary, i.e., non-FFT-friendly, set. Thus, we can use a Vandermonde-based interpolation routine to find the coefficients of $V_l$. Alternatively, we can just use the barycentric evaluation formula to compute the evaluations at $x$. Evaluating using the coefficient form should be faster, but computing the coefficients is not trivial, although it needs to be done only once.
  2. $P_l$ is more involved to compute, as its evaluation cost scales with $\log(n)$. It is not immediately clear whether anything beyond batch inversion can be done to improve performance here.
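The barycentric alternative in item 1 can be sketched as follows: precompute the weights $w_j = 1 / \prod_{k \neq j}(z_j - z_k)$ once, after which each evaluation of $V_l$ at a new $x$ costs $O(\log n)$ field operations and needs no coefficient form (toy field $\mathbb{F}_{97}$, names hypothetical):

```python
# Second-form barycentric evaluation over F_97: weights are computed once,
# each subsequent evaluation is linear in the number of interpolation nodes.
P = 97

def inv(a):
    return pow(a % P, P - 2, P)

def bary_weights(zs):
    """Precompute w_j = 1 / prod_{k != j} (z_j - z_k); done once per z."""
    ws = []
    for j, zj in enumerate(zs):
        d = 1
        for k, zk in enumerate(zs):
            if k != j:
                d = d * (zj - zk) % P
        ws.append(inv(d))
    return ws

def bary_eval(zs, ys, ws, x):
    """Evaluate the interpolant of points (z_j, y_j) at x."""
    num, den = 0, 0
    for zj, yj, wj in zip(zs, ys, ws):
        if (x - zj) % P == 0:
            return yj                 # x coincides with an interpolation node
        t = wj * inv(x - zj) % P
        num = (num + t * yj) % P
        den = (den + t) % P
    return num * inv(den) % P
```

A nice side effect is that the terms $w_j / (x - z_j)$ reuse the same inverses $1 / (x - z_j)$ already produced by batch inversion for the quotient denominators.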

On the bright side, we save $\log(n)$ LDEs and synthetic divisions, which should be a non-negligible gain.


irakliyk commented May 9, 2024

Closed by #274.

@irakliyk irakliyk closed this as completed May 9, 2024