Adapted Hardy-Littlewood Method

This page contains explanations pertaining to the "Adapted Hardy-Littlewood Method" section of Roth's paper.

Inequalities with U and V

We show that $U \leq 2Na(m)$ and $V \leq A(N) \leq Na(m)$.

We use the fact that $a(xy) \leq a(y)$ to get

(1)
\begin{align} U = 2Na(2N) = 2Na(m^4) \leq 2Na(m). \end{align}

Now the $V$ numbers $2v_1, \ldots, 2v_V$ are selected from the $N$ possible even integers between $1$ and $2N$, and so by definition

(2)
\begin{align} V \leq A(N) = Na(N) \end{align}

but we know that $N = \frac{m^4}{2} = \frac{m^3}{2} \cdot m$, where $\frac{m^3}{2}$ is a positive integer (recall that $m$ is a power of $2$), so applying $a(xy) \leq a(y)$ once more gives

(3)
\begin{align} V \leq N a \left( \frac{m^4}{2} \right) \leq Na(m). \end{align}
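As an aside, here is a quick sketch of why $a(xy) \leq a(y)$, assuming (as used above) that $a(x) = \frac{A(x)}{x}$ and that the bound $A(y)$ applies to any block of $y$ consecutive integers: splitting $\{1, \ldots, xy\}$ into $x$ blocks of $y$ consecutive integers, any $\mathcal{A}$-set meets each block in at most $A(y)$ elements, so

\begin{align} A(xy) \leq x A(y), \qquad \text{and hence} \qquad a(xy) = \frac{A(xy)}{xy} \leq \frac{xA(y)}{xy} = a(y). \end{align}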

Inequality for V

We now show that $V \geq A(2N) - A(N) \geq 2Na(2N) - Na(m)$.

The number of odd integers among the $u_k$ certainly can't exceed $A(N)$ (the odd numbers in $\{1, \ldots, 2N\}$ form an arithmetic progression of $N$ terms, and $A$ is invariant under arithmetic progressions), and so

(4)
\begin{align} A(2N) = \#\{ \text{even } u_k \} + \#\{ \text{odd } u_k \} = V + \#\{ \text{odd } u_k \} \leq V + A(N) \leq V + Na(m). \end{align}

We can therefore conclude

(5)
\begin{align} V \geq A(2N) - A(N) \end{align}

and

(6)
\begin{align} U = A(2N) = 2Na(2N) \end{align}

and so

(7)
\begin{align} V \geq 2Na(2N) - A(N) \end{align}

and

(8)
\begin{align} V \geq A(2N) - Na(m) = 2Na(2N) - Na(m). \end{align}

Asymptotics on the Hardy-Littlewood Functions

Our first estimate is that all the functions are of a similar order: $O(Na(m))$.

We use the facts that $U \leq 2Na(m)$ and $V \leq Na(m)$, together with the triangle inequality and $|e(\gamma)| = 1$ for every real $\gamma$, to see that

(9)
\begin{eqnarray} |f_1(\alpha)| \leq U \leq 2Na(m), &\qquad& |f_2(\alpha)| \leq V \leq Na(m),\\ |F_1(\alpha)| \leq 2Na(m), &\qquad& |F_2(\alpha)| \leq Na(m), \end{eqnarray}

so that

(10)
\begin{align} f_r(\alpha), F_r(\alpha) = O(Na(m)). \end{align}

Asymptotics on the Differences

We now want to estimate the difference between these functions, showing that $f_1(\alpha) - F_1(\alpha)$ and $f_2(\alpha) - F_2(\alpha)$ are both $O\left( N \{a(m) - a(2N)\} + N^{\frac{3}{4}} \right)$.

We split into two cases, depending on whether $q$ is equal to one or not. This splitting is motivated by $q$'s effect on the exponential sum $S'$.

The Case q equal to 1

We let $M=2N$ and consider $f_1(\alpha)$ and $F_1(\alpha)$. We know that $U=A(2N)=A(M)$ and so $f_1(\alpha)=S$. We also have the following for $F_1(\alpha)$:

(11)
\begin{eqnarray} F_1(\alpha) &=& a(m) e(h) \sum_{n=1}^{2N} e(\beta n)\\ &=& a(m) \sum_{n=1}^M e(h + \beta n)\\ &=& a(m) \sum_{n=1}^M e(nh + \beta n)\\ &=& a(m) \sum_{n=1}^M e(\alpha n) = S' \end{eqnarray}

since $n$ and $h$ are integers (so $e(nh) = e(h) = 1$) and $q=1$ (so $\alpha = h + \beta$).

We now consider our earlier bound on $|S-S'|$ to find

(12)
\begin{eqnarray} |S-S'| &<& 2Na(m) - U + O\left( m(2N)^{\frac{1}{2}} \right)\\ &=& 2Na(m) - U + O \left( mN^{\frac{1}{2}}\right)\\ &=& 2Na(m) - 2Na(2N) + O\left( mN^{\frac{1}{2}} \right). \end{eqnarray}

But then

(13)
\begin{eqnarray} f_1(\alpha) - F_1(\alpha) &=& \pm |S - S'|\\ &=& O\left( 2Na(m) - 2Na(2N) + mN^{\frac{1}{2}} \right)\\ &=& O\left( N\{a(m) - a(2N)\} + N^{\frac{3}{4}} \right), \end{eqnarray}

as required. Note that we use the fact that $m^4 = 2N$ and so $m=O\left( N^{\frac{1}{4} } \right)$.
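Spelling this step out: since $m^4 = 2N$,

\begin{align} m = (2N)^{\frac{1}{4}} = 2^{\frac{1}{4}} N^{\frac{1}{4}}, \qquad \text{so} \qquad mN^{\frac{1}{2}} = 2^{\frac{1}{4}} N^{\frac{3}{4}} = O\left( N^{\frac{3}{4}} \right). \end{align}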

We now need to consider $f_2(\alpha) - F_2(\alpha)$ and to do this we repeat much of the same argument as for $f_1(\alpha) - F_1(\alpha)$, but now set $M=N$.

Then, as $q=1$ we have

(14)
\begin{align} S' = a(m) \sum_{n=1}^M e(\alpha n) = a(m) \sum_{n=1}^N e(\alpha n) = F_2(\alpha). \end{align}

Now in all previous calculations we assumed that $S$ was defined on a maximal $\mathcal{A}$-set. But in fact $S$ need not be maximal for our bound on $|S-S'|$ to hold (although it most definitely needs to be an $\mathcal{A}$-set); by the same argument as earlier, if we set

(15)
\begin{align} S = \sum_{k=1}^V e(\alpha v_k); \end{align}

then we have the bound

(16)
\begin{align} |S-S'| < Ma(m) - V + O\left( m \sqrt{M} \right). \end{align}

But then we have

(17)
\begin{eqnarray} f_2(\alpha) - F_2(\alpha) &=& \pm|S - S'|\\ &=& O\left( Ma(m) - V + m \sqrt{M} \right)\\ &=& O\left( Na(m) - V + m \sqrt{N} \right). \end{eqnarray}

Now once again we know that $m^4=2N$ so that $m=O\left( N^{\frac{1}{4}}\right)$, and we already have

(18)
\begin{align} V \geq 2Na(2N) - Na(m). \end{align}

So with simple manipulation we have

(19)
\begin{eqnarray} f_2(\alpha) - F_2(\alpha) &=& O\left( Na(m) - \{2Na(2N) - Na(m) \} + N^{\frac{3}{4}} \right)\\ &=& O\left( 2Na(m) - 2Na(2N) + N^{\frac{3}{4}} \right)\\ &=& O\left( N\{a(m) - a(2N)\} + N^{\frac{3}{4}} \right); \end{eqnarray}

which was exactly what was needed.

Therefore the result holds if in the Dirichlet expansion of $\alpha$ we can choose $q=1$.

The Case q not equal to 1

Before starting the proof for this case we need the following two lemmata and Jordan's Inequality.

An Exponential Sum Bound

For any $\alpha$ and $M$ we have $\sum_{n=1}^M e(\alpha n) = O\left( \frac{1}{\| \alpha\|} \right)$, where $\|\alpha\|$ denotes the distance of $\alpha$ from the nearest integer.

We first note that

(20)
\begin{align} e(\alpha n) e(\alpha n') = e(\alpha (n+n')) \end{align}

so the left hand side is the sum of a geometric series. We can therefore rewrite it in a closed form:

(21)
\begin{align} \sum_{n=1}^M e(\alpha n) = \frac{e(\alpha M) - 1}{e(\alpha) - 1}e(\alpha). \end{align}
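For completeness, here is the geometric series computation behind this closed form (valid whenever $e(\alpha) \neq 1$, i.e. $\alpha \notin \mathbb{Z}$; the case of integer $\alpha$ is covered by the trivial bound below):

\begin{align} (e(\alpha) - 1) \sum_{n=1}^M e(\alpha n) = \sum_{n=2}^{M+1} e(\alpha n) - \sum_{n=1}^M e(\alpha n) = e(\alpha(M+1)) - e(\alpha) = e(\alpha) \left( e(\alpha M) - 1 \right). \end{align}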

We wish to show that in fact for all $\alpha$ and $M$ we have:

(22)
\begin{align} \left| \sum_{n=1}^M e(\alpha n) \right| \leq \min \left\{ M, \frac{1}{2\|\alpha\|} \right\}. \end{align}

But both sides of this inequality are even and periodic with respect to $\alpha$, with period 1. Therefore it is enough to prove the result true for $0 \leq \alpha \leq \frac{1}{2}$.

We first note that for $\alpha \in \left[ 0, \frac{1}{2} \right]$ we have

(23)
\begin{eqnarray} |e(\alpha) - 1| &=& 2 \sin(\pi \alpha)\\ &\geq& 4\alpha\\ &=& 4 \| \alpha \| \end{eqnarray}

where the inequality comes from Jordan's Inequality (applied with $x = \pi\alpha \in \left[ 0, \frac{\pi}{2} \right]$) and the final equality from the fact that $0 \leq \alpha \leq \frac{1}{2}$ and so $\| \alpha \| = \alpha$.
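The identity in the first line of the display can be checked directly from $e(\alpha) = \cos(2\pi\alpha) + i\sin(2\pi\alpha)$ and the half-angle formula:

\begin{align} |e(\alpha) - 1|^2 = (\cos(2\pi\alpha) - 1)^2 + \sin^2(2\pi\alpha) = 2 - 2\cos(2\pi\alpha) = 4\sin^2(\pi\alpha), \end{align}

and $\sin(\pi\alpha) \geq 0$ for $\alpha \in \left[ 0, \frac{1}{2} \right]$, so taking square roots gives $|e(\alpha) - 1| = 2\sin(\pi\alpha)$.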

First suppose that $\frac{1}{2M} \leq \alpha \leq \frac{1}{2}$. Then we have

(24)
\begin{eqnarray} \left| \sum_{n=1}^M e(\alpha n) \right| &=& \frac{|e(\alpha M) - 1|}{|e(\alpha) - 1|}|e(\alpha)|\\ &\leq& \frac{2}{4 \| \alpha \|} \cdot 1 = \frac{1}{2\|\alpha\|}. \end{eqnarray}

If instead $0 \leq \alpha \leq \frac{1}{2M}$, we apply the trivial bound (the triangle inequality) to the sum:

(25)
\begin{align} \left| \sum_{n=1}^M e(\alpha n) \right| \leq \sum_{n=1}^M |e(\alpha n)| = M. \end{align}

Finally we know that

(26)
\begin{align} 0 < \| \alpha \| \leq \frac{1}{2} \quad \text{(provided } \alpha \notin \mathbb{Z}\text{)}, \qquad \text{ so } \qquad \frac{1}{\| \alpha \|} \geq 2 > 1 \end{align}

and so

(27)
\begin{eqnarray} \left| \sum_{n=1}^M e(\alpha n) \right| &\leq& \min \left( M, \frac{1}{2 \| \alpha \|} \right)\\ &\leq& \min \left( \frac{M}{\| \alpha \|}, \frac{1}{2\|\alpha\|} \right), \end{eqnarray}

and so we indeed have

(28)
\begin{align} \sum_{n=1}^M e(\alpha n) = O \left( \frac{1}{\| \alpha \|} \right). \end{align}

Jordan's Inequality

For any $x \in \left[ 0, \frac{\pi}{2} \right]$ the following inequality holds: $\frac{2}{\pi} x \leq \sin(x) \leq x$.
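As a quick sanity check, at $x = \frac{\pi}{4}$ the inequality reads

\begin{align} \frac{2}{\pi} \cdot \frac{\pi}{4} = \frac{1}{2} \leq \sin\left( \frac{\pi}{4} \right) = \frac{\sqrt{2}}{2} \approx 0.707 \leq \frac{\pi}{4} \approx 0.785. \end{align}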

It is sufficient to show that $\frac{\sin(x)}{x}$ decreases as $x$ increases from $0$ to $\frac{\pi}{2}$, since we know that

(29)
\begin{align} \lim_{x \rightarrow 0} \frac{\sin(x)}{x} = 1; \qquad \qquad \frac{\sin\left( \frac{\pi}{2} \right) }{\frac{\pi}{2}} = \frac{2}{\pi}. \end{align}

We do this by showing the derivative of $\frac{\sin(x)}{x}$ is negative on the interval $\left( 0, \frac{\pi}{2} \right]$. But

(30)
\begin{align} \frac{\text{d}}{\text{d}x} \left( \frac{\sin(x)}{x} \right) = \frac{x \cos(x) - \sin(x)}{x^2} \end{align}

and so it remains to show $x\cos(x) - \sin(x)$ is non-positive.

But we first note that at $0$ we have

(31)
\begin{align} 0 \cdot \cos(0) - \sin(0) = 0 \end{align}

and

(32)
\begin{align} \frac{\text{d}}{\text{d}x} (x\cos(x) - \sin(x)) = \cos(x) - x\sin(x) - \cos(x) = -x\sin(x) \end{align}

which is non-positive on our interval. Therefore $x\cos(x) - \sin(x) \leq 0$ and so the inequality is proven.

Bound on the norm of alpha

If it is impossible to choose $q=1$ in the Dirichlet Box Principle, then $\| \alpha \| \geq \frac{1}{2\sqrt{M}}$ (for $M \geq 4$), and in particular $\frac{1}{\| \alpha \|} = O\left( \sqrt{M} \right)$.

We start by noting that

(33)
\begin{align} \alpha = \frac{h}{q} + \beta \end{align}

and so if $q \neq 1$ then, since $h$ and $q$ may be taken coprime and $q > 1$, the fraction $\frac{h}{q}$ is not an integer and so lies at distance at least $\frac{1}{q}$ from the nearest integer. Hence

(34)
\begin{align} \|\alpha \| \geq \left\| \frac{h}{q} \right\| - |\beta| \geq \frac{1}{q} - |\beta|. \end{align}

We also know from the Dirichlet Approximation Theorem that

(35)
\begin{align} q \leq \sqrt{M} \qquad \text{and} \qquad |\beta| \leq \frac{1}{q\sqrt{M}} \end{align>

and so

(36)
\begin{align} \| \alpha \| \geq \frac{1}{q} - \frac{1}{q\sqrt{M}} = \frac{1}{q} \left( 1 - \frac{1}{\sqrt{M}} \right). \end{align}

But $q \leq \sqrt{M}$, and for $M \geq 4$ we have $1 - \frac{1}{\sqrt{M}} \geq \frac{1}{2}$, so

(37)
\begin{align} \| \alpha \| \geq \frac{1}{\sqrt{M}} \left( 1 - \frac{1}{\sqrt{M}} \right) \geq \frac{1}{2\sqrt{M}}. \end{align}

Finishing our proof

We can now prove our result for $q \neq 1$. Combining the exponential sum bound with the bound on the norm (and recalling that $a(m) \leq 1$), we have that if $q \neq 1$ and $M = 2N$:

(38)
\begin{align} F_1(\alpha) = a(m) \sum_{n=1}^{2N} e(\alpha n) = O\left( \sqrt{M} \right). \end{align}

We also know that if $q>1$ we must have $S'=0$ and so with $M=2N$ we get

(39)
\begin{align} f_1(\alpha) = \pm \left| \sum_{k=1}^U e(\alpha u_k) \right| = \pm |S| < Ma(m) - U + O\left(m \sqrt{M} \right) \end{align}

and so

(40)
\begin{align} f_1(\alpha) = O\left( 2Na(m) - 2Na(2N) + N^{\frac{3}{4}} \right). \end{align}

Now we use a crude estimate by the triangle inequality

(41)
\begin{eqnarray} |f_1(\alpha) - F_1(\alpha)| &\leq& |f_1(\alpha)| + |F_1(\alpha)|\\ &\leq& O\left( N\{a(m) - a(2N)\} + N^{\frac{3}{4}} \right) + O\left( \sqrt{N} \right). \end{eqnarray}

But the $O\left( \sqrt{N} \right)$ term can be absorbed into the $N^{\frac{3}{4}}$ term in the first asymptotic, and so we obtain

(42)
\begin{align} f_1(\alpha) - F_1(\alpha) = O\left( N\{a(m) - a(2N)\} + N^{\frac{3}{4}} \right) \end{align}

and our first case for $q \neq 1$ is proven.

For $f_2(\alpha) - F_2(\alpha)$ we combine the exponential sum bound and bound on the norm again to get

(43)
\begin{align} F_2(\alpha) = O\left( \sqrt{N} \right). \end{align}

We then set $M=N$ and get

(44)
\begin{align} f_2(\alpha) = \pm\left| \sum_{k=1}^V e(\alpha v_k) \right| = \pm |S| < Na(m) - V + O\left( m \sqrt{N} \right). \end{align}

We once again use the fact that $m = O\left( N^{\frac{1}{4}}\right)$, together with the bound $V \geq 2Na(2N) - Na(m)$, to simplify this to

(45)
\begin{align} f_2(\alpha) = O\left( 2Na(m) - 2Na(2N) + N^{\frac{3}{4}} \right). \end{align}

Now we use the triangle inequality as before to get

(46)
\begin{eqnarray} |f_2(\alpha) - F_2(\alpha)| &\leq& |f_2(\alpha)| + |F_2(\alpha)|\\ &\leq& O\left( N\{a(m) - a(2N)\} + N^{\frac{3}{4}} \right) + O\left( \sqrt{N} \right). \end{eqnarray}

Absorbing the $O\left( \sqrt{N} \right)$ term into the $N^{\frac{3}{4}}$ term as before, we see that the result is also true when $q \neq 1$.

We have hence shown that for any $\alpha$ we have:

(47)
\begin{eqnarray} f_1(\alpha) - F_1(\alpha) &=& O\left( N \{ a(m) - a(2N) \} + N^{\frac{3}{4}} \right);\\ f_2(\alpha) - F_2(\alpha) &=& O\left( N \{ a(m) - a(2N) \} + N^{\frac{3}{4}} \right). \end{eqnarray}

An asymptotic on a difference of products

For any $\alpha$, the difference $f_1(\alpha)f_2^2(-\alpha) - F_1(\alpha) F_2^2(-\alpha)$ is of order $O\left( \{Na(m)\}^2 \left( N\{a(m) - a(2N)\} + N^{\frac{3}{4}} \right)\right)$.

We first note the following simple application of the triangle inequality:

(48)
\begin{eqnarray} |f_1 \cdot f_2^2 - F_1 \cdot F_2^2| &=& | f_1 \cdot (f_2^2 - F_2^2) + F_2^2 \cdot (f_1-F_1) |\\ &\leq& |f_1 \cdot (f_2 + F_2) \cdot (f_2-F_2)| + |F_2^2 \cdot (f_1 - F_1) |. \end{eqnarray}
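The identity in the first line can be verified by expanding the brackets:

\begin{align} f_1 (f_2^2 - F_2^2) + F_2^2 (f_1 - F_1) = f_1 f_2^2 - f_1 F_2^2 + f_1 F_2^2 - F_1 F_2^2 = f_1 f_2^2 - F_1 F_2^2. \end{align}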

We now apply this inequality with $\alpha$ and $-\alpha$:

(49)
\begin{eqnarray} |f_1(\alpha)f_2^2(-\alpha) - F_1(\alpha)F_2^2(-\alpha)| &\leq& |f_1(\alpha)| |f_2(-\alpha) + F_2(-\alpha)| |f_2(-\alpha) - F_2(-\alpha)|\\ && \qquad + \; |F_2(-\alpha)|^2|f_1(\alpha) - F_1(\alpha)|. \end{eqnarray}

We can apply our bounds to this quantity

(50)
\begin{eqnarray} f_1(\alpha) f_2^2(-\alpha) - F_1(\alpha)F_2^2(-\alpha) &=& O(Na(m)) \cdot 2O(Na(m)) \cdot O\left( N(a(m) - a(2N)) + N^{\frac{3}{4}} \right)\\ && \qquad + \; O(Na(m))^2 \cdot O\left( N(a(m) - a(2N)) + N^{\frac{3}{4}} \right). \end{eqnarray}

Simplifying by using linearity and product rules of asymptotics we obtain:

(51)
\begin{eqnarray} f_1(\alpha) f_2^2(-\alpha) - F_1(\alpha) F_2^2(-\alpha) &=& O\left( 2(Na(m))^2 \left(N(a(m)-a(2N)) + N^{\frac{3}{4}} \right) \right)\\ &=& O\left( (Na(m))^2 \left\{ N(a(m) - a(2N)) + N^{\frac{3}{4}} \right\} \right). \end{eqnarray}

as required.

An asymptotic on f1(alpha)

If $0 < \eta < \alpha < 1 - \eta$ then $f_1(\alpha)$ is of order $O \left( \frac{a(m)}{\eta} + N\{ a(m) - a(2N) \} + N^{\frac{3}{4}} \right)$.

We start by writing, for any $\alpha$:

(52)
\begin{align} f_1(\alpha) = F_1(\alpha) + O\left( N \{a(m) - a(2N)\} + N^{\frac{3}{4}} \right). \end{align}

Therefore, if we can show that for $0 < \eta < \alpha < 1 - \eta$ we have

(53)
\begin{align} F_1(\alpha) = O \left( \frac{a(m)}{\eta} \right) \end{align}

we will have our required asymptotic.

But recalling the definition of $F_1(\alpha)$ we have

(54)
\begin{align} F_1(\alpha) = a(m) \sum_{n=1}^{2N} e(\alpha n) \end{align}

and we can apply the following bound:

(55)
\begin{align} \left| \sum_{n=1}^{2N} e(\alpha n) \right| \leq \min \left\{ 2N, \frac{1}{2 \| \alpha \|} \right\} \end{align}

(where $\| \alpha \|$ denotes the distance from $\alpha$ to the nearest integer).

But $\alpha \in (0,1)$ and so $\| \alpha\|$ is either the distance from $\alpha$ to 0 or the distance to 1. But as $0<\eta<\alpha$, we know the distance from 0 to $\alpha$ is greater than $\eta$. Similarly, as $\alpha < 1 - \eta < 1$ we know the distance from 1 to $\alpha$ is also greater than $\eta$.

Hence $\| \alpha \| > \eta$ and we obtain

(56)
\begin{align} |F_1(\alpha)| = a(m) \left| \sum_{n=1}^{2N} e(\alpha n)\right| \leq \frac{a(m)}{2 \| \alpha \|} < \frac{a(m)}{2\eta} \end{align}

which gives us for $0<\eta < \alpha < 1 - \eta$

(57)
\begin{align} F_1(\alpha) = O\left( \frac{a(m)}{\eta} \right) \end{align}

as required.

The Hardy-Littlewood Method

The condition that $u_h = v_k + v_l$ if and only if $k=l$ and $u_h = 2v_k$ can be expressed by $\int_{-\eta}^{1-\eta} f_1(\alpha) f_2^2(-\alpha) \text{d}\alpha = V \leq N a(m).$

We now show the idea that shapes the Hardy-Littlewood Method: changing a condition on the solutions of an equation into a condition on a circle integral. Then we can work with the integral to prove the result regarding the equations.

Recall the definitions of $f_1(\alpha)$ and $f_2(\alpha)$:

(58)
\begin{align} f_1(\alpha) = \sum_{k=1}^U e(\alpha u_k) \qquad \text{and} \qquad f_2(\alpha) = \sum_{k=1}^V e(\alpha v_k) \end{align}

and so our integral in question is

(59)
\begin{align} \int_{-\eta}^{1 - \eta} \left( \sum_{j=1}^U e(\alpha u_j) \right) \left( \sum_{k=1}^V e(-\alpha v_k) \right) \left( \sum_{l=1}^V e(-\alpha v_l) \right) \text{ d}\alpha \end{align}

which, using the linearity of integration and the multiplicative property of $e$, becomes

(60)
\begin{align} \sum_{j=1}^U \sum_{k=1}^V \sum_{l=1}^V \int_{-\eta}^{1-\eta} e(\alpha(u_j - v_k - v_l)) \text{ d}\alpha. \end{align}

Now we note that our function $e$ is periodic with period 1, so we can make things a little simpler for ourselves and integrate from 0 to 1 instead, ending up with

(61)
\begin{align} \sum_{j=1}^U \sum_{k=1}^V \sum_{l=1}^V \int_0^1 e(\alpha(u_j - v_k - v_l)) \text{ d}\alpha. \end{align}

Now let us consider these integrals

(62)
\begin{align} \int_0^1 e(\alpha(u_j - v_k - v_l)) \text{ d}\alpha \end{align}

and to do this let $j,k,l$ be fixed integers. By the orthogonality of $e$ (and the fact that $u_j - v_k - v_l$ is an integer) we have

(63)
\begin{align} \int_0^1 e(\alpha(u_j - v_k - v_l)) \text{ d}\alpha = \left\{ \begin{array}{lc} 1 & u_j - v_k - v_l = 0 \\ 0 & \text{o/wise} \end{array}\right. \end{align}

and so this integral is non-zero if and only if $u_j = v_k + v_l$.
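For completeness, the orthogonality relation can be checked directly: writing $t = u_j - v_k - v_l$, which is an integer, we have for $t \neq 0$

\begin{align} \int_0^1 e(\alpha t) \text{ d}\alpha = \left[ \frac{e(\alpha t)}{2\pi i t} \right]_0^1 = \frac{e(t) - 1}{2\pi i t} = 0, \end{align}

since $e(t) = 1$ for every integer $t$, while for $t = 0$ the integrand is identically 1.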

We can therefore go back to our sum of integrals to write it as follows:

(64)
\begin{align} \sum_{j=1}^U \sum_{k=1}^V \sum_{l=1}^V \int_0^1 e(\alpha(u_j - v_k - v_l)) \text{ d}\alpha = \left| \{ (j,k,l) \mid u_j = v_k + v_l \} \right|. \end{align}

But for each $k$ from 1 to $V$ there exists a $j_k$ between 1 and $U$ such that

(65)
\begin{equation} u_{j_k} = 2v_k \end{equation}

and so we have at least $V$ integrals equal to 1 in the sum, and so

(66)
\begin{align} \sum_{j=1}^U \sum_{k=1}^V \sum_{l=1}^V \int_0^1 e(\alpha(u_j-v_k-v_l)) \text{ d}\alpha \geq V. \end{align}

But if this sum is exactly $V$ then these are the only solutions to the equation $u_j = v_k + v_l$, which is exactly the condition we wanted.

Conversely, if the only solution to $u_j = v_k + v_l$ is when $k=l$ then there are only $V$ non-zero integrals and so

(67)
\begin{align} \sum_{j=1}^U \sum_{k=1}^V \sum_{l=1}^V \int_0^1 e(\alpha(u_j - v_k - v_l)) \text{ d}\alpha = V. \end{align}

Hence these two conditions are equivalent, and the noted inequality has already been proven.

Our First Integral Asymptotic

Assuming $0 < \eta < \frac{1}{2}$, the integral $\int_{\eta}^{1-\eta} f_1(\alpha) f_2^2(-\alpha) \text{ d}\alpha$ is of order $O\left( \left\{ \frac{a(m)}{\eta} + N(a(m) - a(2N)) + N^{\frac{3}{4}} \right\} Na(m) \right)$.

We first bound the absolute value of the integral like so:

(68)
\begin{align} \left| \int_{\eta}^{1-\eta} f_1(\alpha) f_2^2(-\alpha) \text{ d}\alpha \right| \leq \left( \max_{\alpha \in [\eta, 1 - \eta]} |f_1(\alpha)| \right) \int_{\eta}^{1-\eta} |f_2(-\alpha)|^2 \text{ d}\alpha \end{align}

and we use the result that for $0 < \eta < \alpha < 1 - \eta$

(69)
\begin{align} f_1(\alpha) = O \left( \frac{a(m)}{\eta} + N(a(m) - a(2N)) + N^{\frac{3}{4}} \right). \end{align}

We are now going to look at the remaining integral and show that it is of order $O(Na(m))$. We first bound as follows:

(70)
\begin{align} \int_{\eta}^{1-\eta} |f_2(-\alpha)|^2 \text{ d}\alpha = \int_{\eta}^{1-\eta} |f_2(\alpha)|^2 \text{ d}\alpha \leq \int_0^1 |f_2(\alpha)|^2 \text{ d}\alpha \end{align}

because $f_2(-\alpha) = \overline{f_2(\alpha)}$ (so $|f_2(-\alpha)| = |f_2(\alpha)|$), $[\eta, 1-\eta] \subset [0,1]$ and $|f_2(\alpha)|^2 \geq 0$.

Now we only need to show

(71)
\begin{align} \int_0^1 |f_2(\alpha)|^2 \text{ d}\alpha = V \end{align}

after which we use the fact that $V \leq Na(m)$. Now recall the definition of

(72)
\begin{align} f_2(\alpha) = \sum_{k=1}^V e(\alpha v_k). \end{align}

We now use the definition of the norm on complex numbers

(73)
\begin{align} |f_2(\alpha)|^2 = f_2(\alpha) \overline{f_2(\alpha)} \end{align}

and the linearity of conjugation to expand this as

(74)
\begin{align} |f_2(\alpha)|^2 = \left( \sum_{k=1}^V e(\alpha v_k) \right) \left( \sum_{l=1}^V \overline{e(\alpha v_l)} \right). \end{align}

But for any $\gamma \in \mathbb{R}$ we have $\overline{e(\gamma)} = e(-\gamma)$ and so we can simplify further

(75)
\begin{align} |f_2(\alpha)|^2 = \left( \sum_{k=1}^V e(\alpha v_k) \right) \left( \sum_{l=1}^V e(-\alpha v_l) \right) = \sum_{k,l=1}^V e(\alpha(v_k - v_l)). \end{align}

But by the orthogonality of $e$ (and as $v_k - v_l$ is an integer) we have

(76)
\begin{align} \int_0^1 e(\alpha(v_k - v_l)) \text{ d}\alpha = \left\{ \begin{array}{cc} 1 & v_k - v_l = 0 \\ 0 & \text{o/wise} \end{array} \right. \end{align}

and as the $v_i$ are distinct, $v_k - v_l = 0$ if and only if $k=l$. Hence we can write

(77)
\begin{align} \int_0^1 e(\alpha(v_k - v_l)) \text{ d}\alpha = \delta_{k,l}, \end{align}

(here $\delta_{k,l}$ represents the Kronecker delta function).

Therefore

(78)
\begin{eqnarray} \int_0^1 |f_2(\alpha)|^2 \text{ d}\alpha &=& \sum_{k,l=1}^V \int_0^1 e(\alpha (v_k - v_l)) \text{ d}\alpha\\ &=& \sum_{k,l=1}^V \delta_{k,l} = V \end{eqnarray}

as we required.

Rewriting an integral of f's as an integral of F's

We now use the result on the difference of products to rewrite $\int_{-\eta}^{\eta} f_1(\alpha) f_2^2(-\alpha) \text{ d}\alpha$ as $\int_{-\eta}^{\eta} F_1(\alpha)F_2^2(-\alpha) \text{ d}\alpha + O\left( \eta \{Na(m)\}^2 \left(N \{a(m) - a(2N) \} + N^{\frac{3}{4}} \right) \right)$.

We first recall the difference of products result, which states that for any $\alpha$ we have

(79)
\begin{align} f_1(\alpha)f_2^2(-\alpha) - F_1(\alpha)F_2^2(-\alpha) = O\left( \{Na(m)\}^2 \left( N\{a(m) - a(2N)\} + N^{\frac{3}{4}} \right) \right). \end{align}

We now simply integrate each term above from $-\eta$ to $\eta$. This produces the two integrals as required, and we note that integrating the asymptotic term over an interval of length $2\eta$ simply multiplies the bound, which does not depend on $\alpha$, by $2\eta$. That is, if

(80)
\begin{align} f(\alpha) = O(g) \qquad \text{uniformly for } \alpha \in [-\eta, \eta], \text{ with } g \text{ independent of } \alpha, \end{align}

then this implies that

(81)
\begin{align} \int_{-\eta}^{\eta} f(\alpha) \text{ d}\alpha = O(2\eta g) = O(\eta g). \end{align}
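Indeed, if $|f(\alpha)| \leq Cg$ for all $\alpha \in [-\eta, \eta]$ and some constant $C$, then

\begin{align} \left| \int_{-\eta}^{\eta} f(\alpha) \text{ d}\alpha \right| \leq \int_{-\eta}^{\eta} |f(\alpha)| \text{ d}\alpha \leq 2\eta C g. \end{align}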

We can therefore integrate the asymptotic notation of the difference of products result to get the required asymptotic, and the Lemma is proven.

Removing eta from the integral

We use the fact $0 < \eta < \frac{1}{2}$ to write $\int_{-\eta}^{\eta} F_1(\alpha)F_2^2(-\alpha) \text{ d}\alpha$ as $\int_{-\frac{1}{2}}^{\frac{1}{2}} F_1(\alpha)F_2^2(-\alpha) \text{ d}\alpha + O\left( \frac{a^3(m)}{\eta^2} \right)$.

To start, we split the integral from $-\eta$ to $\eta$ as follows:

(82)
\begin{eqnarray} \int_{-\eta}^{\eta} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha &=& \int_{-\frac{1}{2}}^{\frac{1}{2}} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha\\ && \quad - \left\{ \int_{\eta}^{\frac{1}{2}} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha + \int_{-\frac{1}{2}}^{-\eta} F_1(\alpha)F_2^2(-\alpha) \text{ d}\alpha \right\} \end{eqnarray}

and we concern ourselves with the final pair of integrals.

First we note that these can be simplified to a single integral:

(83)
\begin{align} \int_{\eta}^{\frac{1}{2}} F_1(\alpha)F_2^2(-\alpha) \text{ d}\alpha + \int_{-\frac{1}{2}}^{-\eta} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha = \int_{\eta}^{\frac{1}{2}} F_1(\alpha)F_2^2(-\alpha) + F_1(-\alpha) F_2^2(\alpha) \text{ d}\alpha. \end{align}

But from the definition of $F_i$, and more specifically $e(\alpha n)$, we have

(84)
\begin{align} F_i(-\alpha) = \overline{F_i(\alpha)} \end{align}
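(indeed, since $a(m)$ is real and $\overline{e(\gamma)} = e(-\gamma)$ for real $\gamma$, we have

\begin{align} \overline{F_i(\alpha)} = a(m) \sum_n \overline{e(\alpha n)} = a(m) \sum_n e(-\alpha n) = F_i(-\alpha), \end{align}

the sum running to $2N$ when $i=1$ and to $N$ when $i=2$)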

and so

(85)
\begin{align} \int_{\eta}^{\frac{1}{2}} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha + \int_{-\frac{1}{2}}^{-\eta} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha = \int_{\eta}^{\frac{1}{2}} F_1(\alpha) F_2^2(-\alpha) + \overline{F_1(\alpha) F_2^2(-\alpha) } \text{ d}\alpha. \end{align}

We now bound the absolute value of this integral:

(86)
\begin{eqnarray} \left| \int_{\eta}^{\frac{1}{2}} F_1(\alpha) F_2^2(-\alpha) + \overline{F_1(\alpha)F_2^2(-\alpha)} \text{ d}\alpha \right| &\leq& \int_{\eta}^{\frac{1}{2}} \left| F_1(\alpha)F_2^2(-\alpha) + \overline{F_1(\alpha) F_2^2(-\alpha)} \right| \text{ d}\alpha \\ &\leq& \int_{\eta}^{\frac{1}{2}} \left| F_1(\alpha)F_2^2(-\alpha) \right| + \left| \overline{F_1(\alpha)F_2^2(-\alpha)} \right| \text{ d}\alpha. \end{eqnarray}

We now use the standard property that for any number $z \in \mathbb{C}$ we know that $|z| = | \overline{z}|$. We therefore see that our pair of integrals are in fact bounded by:

(87)
\begin{align} 2 \int_{\eta}^{\frac{1}{2}} \left| F_1(\alpha) F_2^2(-\alpha) \right| \text{ d}\alpha. \end{align}

We now turn our attention to bounding $\left| F_1(\alpha)F_2^2(-\alpha) \right|$ for $\alpha$ in the interval $\left[\eta, \frac{1}{2}\right]$.

The first thing to note is that, as $0 < \eta < \frac{1}{2}$ we have

(88)
\begin{align} \left[ \eta, \frac{1}{2} \right] \subset [ \eta, 1 - \eta]. \end{align}

On this interval we also have $\| \alpha \| = \alpha$, and so the exponential sum bound (used exactly as in the proof of the asymptotic on $f_1(\alpha)$) gives the pointwise bound

(89)
\begin{align} |F_1(\alpha)| = a(m) \left| \sum_{n=1}^{2N} e(\alpha n) \right| \leq \frac{a(m)}{2\alpha}. \end{align}

But exactly the same argument holds for $F_2(-\alpha)$ (the only difference between the two being the range of summation). So we get

(90)
\begin{align} |F_2(-\alpha)| = a(m) \left| \sum_{n=1}^{N} e(-\alpha n) \right| \leq \frac{a(m)}{2\alpha}. \end{align}

Hence we can combine these bounds and integrate to get

(91)
\begin{align} 2 \int_{\eta}^{\frac{1}{2}} |F_1(\alpha) F_2^2(-\alpha) | \text{ d}\alpha \leq \frac{a^3(m)}{4} \int_{\eta}^{\frac{1}{2}} \frac{\text{d}\alpha}{\alpha^3} = \frac{a^3(m)}{4} \left( \frac{1}{2\eta^2} - 2 \right) = O \left( \frac{a^3(m)}{\eta^2} \right); \end{align}

which immediately proves the Lemma.

Number of different sums of two integers

The number of solutions of the equation $n = n' + n''$ in positive integers $n$, $n'$ and $n''$ satisfying $n \leq 2N$, $n' \leq N$ and $n'' \leq N$ is $N^2$.

This is relatively straightforward. We want to count

(92)
\begin{align} n = n' + n''; \qquad \qquad n \leq 2N, n' \leq N, n'' \leq N; \end{align}

and we do this by looking at the right hand side of the equation. For every choice of $(n', n'')$ there is a unique $n \leq 2N$ satisfying the equation. We have $N$ choices for $n'$ and $N$ choices for $n''$, and so a total of $N^2$ choices for $(n',n'')$ and hence $N^2$ solutions to the equations.
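For instance, with $N = 2$ the $N^2 = 4$ solutions are

\begin{align} 2 = 1 + 1, \qquad 3 = 1 + 2, \qquad 3 = 2 + 1, \qquad 4 = 2 + 2. \end{align}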

Rewriting a(m) squared as an integral

We can write $a^2(m)$ as $\frac{1}{N^2a(m)} \int_{-\frac{1}{2}}^{\frac{1}{2}} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha$, which is of order $O\left( \frac{a^2(m)}{N^2\eta^2} + \{\eta N a(m) + 1\} \left\{ a(m) - a(2N) + N^{-\frac{1}{4}} \right\} + \frac{a(m)}{N\eta} \right)$.

We will use our results on the Hardy-Littlewood Method, the first integral asymptotic, our second integral asymptotic, our third integral asymptotic and summing two integers to prove the lemma.

To start, let us consider

(93)
\begin{align} \int^{\frac{1}{2}}_{-\frac{1}{2}} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha. \end{align}

We shall first see that this is in fact $a^3(m)$ times the number of solutions to $n = n' + n''$ with $n \leq 2N$ and $n',n'' \leq N$.

From the definitions of $F_1$ and $F_2$ we can rewrite the integral as:

(94)
\begin{align} a^3(m) \int_{-\frac{1}{2}}^{\frac{1}{2}} \left( \sum_{n=1}^{2N} e(\alpha n) \right) \left( \sum_{n'=1}^N e(-\alpha n') \right) \left( \sum_{n''=1}^N e(-\alpha n'') \right) \text{ d}\alpha \end{align}

Upon expanding out the sums we get the integral:

(95)
\begin{align} a^3(m) \int_{-\frac{1}{2}}^{\frac{1}{2}} \left( \sum_{n=1}^{2N} \sum_{n'=1}^N \sum_{n''=1}^N e((n-n'-n'')\alpha) \right) \text{ d}\alpha. \end{align}

We now use orthogonality and periodicity of the complex exponential function. If $n-n'-n''=0$ then the following occurs:

(96)
\begin{align} \int_{-\frac{1}{2}}^{\frac{1}{2}} e((n - n' - n'')\alpha) \text{ d}\alpha = \int_{-\frac{1}{2}}^{\frac{1}{2}} 1 \text{ d}\alpha = 1. \end{align}

If, on the other hand we have $n - n' - n'' \in \mathbb{Z} \setminus \{0\}$, we get

(97)
\begin{align} \int_{-\frac{1}{2}}^{\frac{1}{2}} e((n - n' - n'')\alpha) \text{ d}\alpha = 0. \end{align}

This shows that our integral counts the solutions to the given equation and so our initial integral gives $a^3(m)$ times this number.

Now from summing two integers, this number is $N^2$ and so

(98)
\begin{align} N^2 = \frac{1}{a^3(m)} \int_{-\frac{1}{2}}^{\frac{1}{2}} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha, \end{align}

and so

(99)
\begin{align} a^2(m) = \frac{1}{N^2 a(m)} \int_{-\frac{1}{2}}^{\frac{1}{2}} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha, \end{align}

as required.

We now make some estimates using asymptotics and the previous Lemmata and Theorems. To start we apply our third integral asymptotic to Equation 99 to get

(100)
\begin{align} = \frac{1}{N^2a(m)} \left[ \int_{-\eta}^{\eta} F_1(\alpha) F_2^2(-\alpha) \text{ d}\alpha + O\left( \frac{a^3(m)}{\eta^2} \right) \right]. \end{align}

We now apply our second integral asymptotic to simplify Equation 100 to get

(101)
\begin{align} = \frac{1}{N^2 a(m)} \left[ \int_{-\eta}^{\eta} f_1(\alpha) f_2^2(-\alpha) \text{ d}\alpha + O\left(\frac{a^3(m)}{\eta^2} \right) + O\left( \eta \{N a(m)\}^2 \left( N\{a(m) - a(2N)\} + N^{\frac{3}{4}} \right) \right) \right]. \end{align}

We use linearity of integrals to rewrite Equation 101 in the following way

(102)
\begin{align} = \frac{1}{N^2a(m)} \left[ \int_{-\eta}^{1-\eta} f_1(\alpha) f_2^2(-\alpha) \text{ d}\alpha - \int_{\eta}^{1-\eta} f_1(\alpha)f_2^2(-\alpha) \text{ d}\alpha + O\left(\frac{a^3(m)}{\eta^2} \right) + O\left(\eta \{N a(m)\}^2\left( N \{ a(m) - a(2N) \} + N^{\frac{3}{4}} \right) \right) \right] \end{align}

which allows us to apply the Hardy-Littlewood Method to the first integral and the first integral asymptotic to the second integral in Equation 102. In doing this we obtain

(103)
\begin{align} \leq \frac{1}{N^2a(m)} \left[ V + O\left( \left\{ \frac{a(m)}{\eta} + N(a(m) - a(2N)) + N^{\frac{3}{4}} \right\} N a(m) \right) + O\left(\frac{a^3(m)}{\eta^2}\right) + O\left( \eta \{N a(m) \}^2 \left( N(a(m) - a(2N)) + N^{\frac{3}{4}} \right) \right) \right]. \end{align}

We multiply through by the fraction and use the second part of the Hardy-Littlewood Method (that $V \leq Na(m)$) in Equation 103 to obtain

(104)
\begin{align} = O\left( \frac{Na(m)}{N^2a(m)} + \frac{Na(m)}{N^2a(m)} \left\{ \frac{a(m)}{\eta} + N(a(m) - a(2N)) + N^{\frac{3}{4}} \right\} + \frac{a^3(m)}{N^2a(m)\eta^2} + \frac{\eta \{Na(m)\}^2}{N^2 a(m)} \left\{ N(a(m) - a(2N)) + N^{\frac{3}{4}} \right\} \right). \end{align}

Expanding out Equation 104 we obtain

(105)
\begin{align} = O\left( \frac{1}{N} + \frac{a(m)}{\eta N} + a(m) - a(2N) + N^{-\frac{1}{4}} + \frac{a^2(m)}{N^2 \eta^2} + \eta Na^2(m) - \eta N a(m)a(2N) + \eta N a(m) N^{-\frac{1}{4}} \right). \end{align}

and with some final simplifications to Equation 105 (noting that the $\frac{1}{N}$ term is negligible compared to the $N^{-\frac{1}{4}}$ term) we get

(106)
\begin{align} = O\left( \frac{a^2(m)}{N^2 \eta^2} + \{\eta N a(m) + 1\} \left\{ a(m) - a(2N) + N^{-\frac{1}{4}} \right\} + \frac{a(m)}{N \eta} \right) \end{align}

as required.

Inequality on a(m) squared

Recalling that $2N = m^4$ and using the definition of $\delta$ we have $a^2(m) < c_1 \left\{ a(m) \delta + a^2(m) \delta^2 + \left( \frac{a(m)}{\delta} + 1 \right) \left( a(m) - a(m^4) + \frac{1}{m} \right) \right\}$, where $\delta$, which depends only on $m$, is subject only to the restriction that $0 < \eta < \frac{1}{2}$.

We start from the rewriting of a squared and substitute in $\delta$ to get

(107)
\begin{eqnarray} a^2(m) &=& O\left( \frac{a^2(m)}{N^2 \eta^2} + \{ \eta N a(m) + 1\} \left\{ a(m) - a(2N) + \frac{1}{N^{\frac{1}{4}}} \right\} + \frac{a(m)}{N \eta} \right) \\ &=& O \left( a^2(m) \delta^2 + \left\{ \frac{a(m)}{\delta} + 1 \right\} \left\{a(m) - a(m^4) + \frac{2^{\frac{1}{4}}}{m} \right\} + a(m) \delta \right)\\ &<& c_1 \left\{ a(m) \delta + a^2(m) \delta^2 + \left( \frac{a(m)}{\delta} + 1 \right) \left( a(m) - a(m^4) + \frac{1}{m} \right) \right\} \end{eqnarray}

as required.
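For reference, the substitution in the second line above uses $2N = m^4$ (so that $N^{-\frac{1}{4}} = \frac{2^{\frac{1}{4}}}{m}$ and $a(2N) = a(m^4)$) together with the relation between $\delta$ and $\eta$; assuming, as the substitutions suggest, that $\delta = \frac{1}{N\eta}$, the individual terms transform as

\begin{align} \frac{a^2(m)}{N^2\eta^2} = a^2(m)\delta^2, \qquad \eta N a(m) = \frac{a(m)}{\delta}, \qquad \frac{a(m)}{N\eta} = a(m)\delta. \end{align}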

Inequality on b(x) squared

With the notation of $b(x)$ we can rewrite the inequality on $a^2(m)$ to get $b^2(x) < c_1 \left\{ b(x) \delta + b^2(x) \delta^2 + \left( \frac{b(x)}{\delta} + 1 \right) \left( b(x) - b(x+1) + \frac{1}{2^{4^x}} \right) \right\}$.

We note from the definition that we have

(108)
\begin{eqnarray} b^2(x) &=& a^2(m)\\ b(x+1) &=& a\left( 2^{4^{x+1}} \right) = a\left( \left( 2^{4^x} \right)^4 \right) = a(m^4). \end{eqnarray}

Then by simple substitution we obtain

(109)
\begin{align} b^2(x) < c_1 \left\{ b(x) \delta + b^2(x) \delta^2 + \left( \frac{b(x)}{\delta} + 1 \right) \left( b(x) - b(x+1) + \frac{1}{2^{4^x}} \right) \right\} \end{align}

as required.