Multiplicity one for \(L\)-functions

In this section we describe the L-functions for which we will prove a multiplicity one result. As in other approaches to L-functions viewed from a classical perspective, such as that initiated by Selberg, we consider Dirichlet series with a functional equation and an Euler product. However, in contrast to Selberg, we strive to make all our axioms as specific as possible. Presumably (as conjectured by Selberg) these different axiomatic approaches all describe the same objects: L(s,\pi) where \pi is a cuspidal automorphic representation of \GL(n). An interesting alternative is the recent approach of Booker, which describes L-functions in abstract terms, modeled on the explicit formula.

\(L\)-function background

Before getting to L-functions, we recall two bits of terminology that will be used in the following discussion. An entire function f:\C\to\C is said to have order at most \alpha if for all \epsilon > 0:

f(s)=\mathcal{O}(\exp(|s|^{\alpha + \epsilon})).

Moreover, we say f has order equal to \alpha if f has order at most \alpha, and f does not have order at most \gamma for any \gamma\lt \alpha. The notion of order is relevant because functions of finite order admit a factorization as described by the Hadamard Factorization Theorem, and the \Gamma-function and L-functions are all of order 1.

In order to ease notation, we use the normalized \Gamma-functions defined by:

\Gamma_\R(s):=\pi^{-s/2}\,\Gamma(s/2)\ \ \ \ \text{ and } \ \ \ \ \Gamma_\C(s):=2(2\pi)^{-s}\,\Gamma(s).
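The two normalizations are related by the Legendre duplication formula, \Gamma_\C(s)=\Gamma_\R(s)\,\Gamma_\R(s+1), a fact that reappears below when we discuss uniqueness of the functional equation data. As a quick numerical sanity check (a minimal sketch using Python's mpmath; the test point is arbitrary):

```python
from mpmath import mp, mpc, gamma, pi

mp.dps = 30  # working precision in decimal digits

def gamma_R(s):
    # Gamma_R(s) = pi^(-s/2) Gamma(s/2)
    return pi**(-s/2) * gamma(s/2)

def gamma_C(s):
    # Gamma_C(s) = 2 (2 pi)^(-s) Gamma(s)
    return 2 * (2*pi)**(-s) * gamma(s)

s = mpc(0.7, 3.2)  # arbitrary test point
# Legendre duplication: Gamma_C(s) = Gamma_R(s) * Gamma_R(s + 1)
print(abs(gamma_C(s) - gamma_R(s) * gamma_R(s + 1)))  # ~ 1e-29
```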

An L-function is a Dirichlet series

L(s)=\sum_{n=1}^\infty \frac{a(n)}{n^s},

where s=\sigma+i t is a complex variable. We assume that L(s) converges absolutely in the half-plane \sigma>1 and has a meromorphic continuation to all of \C. The resulting function is of order 1, admitting at most finitely many poles, all of which are located on the line \sigma = 1. Finally, L(s) must have an Euler product and satisfy a functional equation as described below.

The functional equation involves the following parameters: a positive integer N, complex numbers \mu_1, \ldots, \mu_J and \nu_1, \ldots, \nu_K, and a complex number \varepsilon. The completed L-function

\Lambda(s) := N^{s/2} \prod_{j=1}^{J} \Gamma_\R(s+ \mu_j) \prod_{k=1}^{K} \Gamma_\C(s+ \nu_k) \cdot L(s)

is a meromorphic function of finite order, having the same poles as L(s) in \sigma>0, and satisfying the functional equation

\Lambda(s) = \varepsilon \overline{\Lambda}(1-s).

The number d=J+2K is called the degree of the L-function.
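A standard example to keep in mind: the Riemann zeta function \zeta(s)=\sum n^{-s} is an L-function of degree 1, with data N=1, J=1, \mu_1=0, K=0, and \varepsilon=1. Its completed L-function

\Lambda(s) = \Gamma_\R(s)\,\zeta(s) = \pi^{-s/2}\Gamma(s/2)\,\zeta(s)

satisfies \Lambda(s)=\overline{\Lambda}(1-s)=\Lambda(1-s), and its only pole in \sigma>0 is the simple pole of \zeta(s) at s=1.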

We require some conditions on the parameters \mu_j and \nu_j. The temperedness condition is the assertion that \Re(\mu_j)\in\{0,1\} and that \Re(\nu_j) is a positive integer or half-integer. With those restrictions, there is only one way to write the parameters in the functional equation, as proved in Proposition. This restriction is not known to be a theorem for most automorphic L-functions. In order to state theorems which apply in those cases, we will make use of a ``partial Selberg bound,'' which is the assertion that \Re(\mu_j),\ \Re(\nu_j) > -\frac12.

The Euler product is a factorization of the L-function into a product over the primes:

L(s)= \prod_p F_p(p^{-s})^{-1},

where F_p is a polynomial of degree at most d:

F_p(z) = (1-\alpha_{1,p} z)\cdots (1-\alpha_{d,p} z).

If p|N then p is a bad prime and the degree of F_p is strictly less than d; in other words, \alpha_{j,p}=0 for at least one j. Otherwise, p is a good prime, in which case the \alpha_{j,p} are called the Satake parameters at p. The Ramanujan bound is the assertion that at a good prime |\alpha_{j,p}|=1, and at a bad prime |\alpha_{j,p}| \le 1.
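To make the good-prime data concrete, consider a degree-2 example. The sketch below (an illustration, not part of the formal setup) takes the analytically normalized coefficients a(p)=\tau(p)\,p^{-11/2} of the modular discriminant \Delta, for which every prime is good, recovers the Satake parameters as the roots of z^2 - a(p)z + 1 (so that F_p(z)=1-a(p)z+z^2), and checks that they lie on the unit circle, as guaranteed by Deligne's theorem:

```python
import cmath

# Ramanujan tau values at the first few primes (standard table values)
tau = {2: -24, 3: 252, 5: 4830, 7: -16744}

for p, tau_p in tau.items():
    a_p = tau_p / p**(11/2)          # analytic normalization: a(p) = tau(p) p^{-11/2}
    # Satake parameters: roots of z^2 - a(p) z + 1, i.e. F_p(z) = (1 - alpha z)(1 - conj(alpha) z)
    disc = cmath.sqrt(a_p**2 - 4)
    alpha = (a_p + disc) / 2
    beta = (a_p - disc) / 2
    print(p, abs(alpha), abs(beta))  # both 1.0: the Ramanujan bound holds for Delta
```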

The Ramanujan bound has been proven in very few cases, the most prominent of which are holomorphic forms on \GL(2) and \GSp(4). See for a survey of known progress toward the Ramanujan bound. Also see.

We write |\alpha_{j,p}|\le p^\theta, for some \theta\lt \frac12, to indicate progress toward the Ramanujan bound, referring to this as a ``partial Ramanujan bound.''

We will need to use symmetric and exterior power L-functions associated to an L-function L(s). Let S be the finite set of bad primes p of L(s). The partial symmetric and exterior power L-functions are defined as follows.

L^S(s,\sym^n) = \prod_{p \not\in S}\: \prod_{i_1+\ldots+i_d=n} (1-\alpha_{1,p}^{i_1} \ldots \alpha_{d,p}^{i_d} p^{-s})^{-1}

L^S(s,\ext^n) = \prod_{p \not\in S}\; \prod_{1\leq i_1\lt \ldots\lt i_n\leq d} (1-\alpha_{i_1,p} \ldots \alpha_{i_n,p} p^{-s})^{-1},

where in the symmetric power the indices i_j run over non-negative integers.

We do not define the local Euler factors at the bad primes since there is no universal recipe for these. It is conjectured that the symmetric and exterior power L-functions are in fact L-functions in the sense described above. In that case, Proposition tells us that the bad Euler factors are uniquely determined. For applications that we present in this paper, the partial L-functions suffice.
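At a good prime, the local factors above are determined by the Satake parameters alone: the parameters of \sym^2 are the d(d+1)/2 products \alpha_{i,p}\alpha_{j,p} with i\le j, and those of \ext^2 are the d(d-1)/2 products with i\lt j. The following sketch (illustrative helper code, with made-up unimodular parameters) builds the corresponding local factors as polynomials in z=p^{-s}:

```python
import cmath
from itertools import combinations, combinations_with_replacement
from math import prod

def local_factor(params):
    """Coefficients, by increasing power of z, of F(z) = prod_i (1 - beta_i z)."""
    coeffs = [1.0 + 0j]
    for beta in params:
        coeffs = coeffs + [0j]                 # multiply the current polynomial by (1 - beta z)
        for i in range(len(coeffs) - 1, 0, -1):
            coeffs[i] -= beta * coeffs[i - 1]
    return coeffs

def sym2_params(alphas):
    return [prod(c) for c in combinations_with_replacement(alphas, 2)]  # alpha_i alpha_j, i <= j

def ext2_params(alphas):
    return [prod(c) for c in combinations(alphas, 2)]                   # alpha_i alpha_j, i < j

# a hypothetical degree-3 good Euler factor with unimodular Satake parameters
alphas = [cmath.exp(1j * t) for t in (0.4, -1.1, 0.7)]
print(len(local_factor(sym2_params(alphas))) - 1)   # degree 6 symmetric-square local factor
print(len(local_factor(ext2_params(alphas))) - 1)   # degree 3 exterior-square local factor
```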

In most cases it is not necessary to specify the local factors at the bad primes because, by almost any version of the strong multiplicity one theorem, an L-function is determined by its Euler factors at the good primes. For completeness we state a simple version of the result.

In the following proposition we use the term ``L-function'' in a precise sense, referring to a Dirichlet series which satisfies a functional equation of the form - with the restrictions \Re(\mu_j)\in\{0,1\} and \Re(\nu_j) a positive integer or half-integer, and having an Euler product satisfying -. We refer to the quadruple (d,N,(\mu_1,\ldots,\mu_J:\nu_1,\ldots,\nu_K),\varepsilon) as the functional equation data of the L-function.

Suppose that L_j(s)=\prod_p F_{p,j}(p^{-s})^{-1}, for j=1,2, are L-functions which satisfy a partial Ramanujan bound for some \theta\lt \frac12. If F_{p,1}=F_{p,2} for all but finitely many p, then F_{p,1}=F_{p,2} for all p, and L_1 and L_2 have the same functional equation data.

In particular, the proposition shows that the functional equation data of an L-function is well defined. There are no ambiguities arising, say, from the duplication formula of the \Gamma-function. Also, we remark that the partial Ramanujan bound is essential. One can easily construct counterexamples to the above proposition using Saito-Kurokawa lifts, which do not satisfy the partial Ramanujan bound.

Let \Lambda_j(s) be the completed L-function of L_j(s) and consider

\lambda(s) = \frac{\Lambda_1(s)}{\Lambda_2(s)} = \Bigl(\frac{N_1}{N_2}\Bigr)^{s/2} \frac{\prod_{j} \Gamma_\R(s+ \mu_{j,1}) \prod_{k} \Gamma_\C(s+ \nu_{k,1})} {\prod_{j} \Gamma_\R(s+ \mu_{j,2}) \prod_{k} \Gamma_\C(s+ \nu_{k,2})} \prod_p \frac{F_{p,1}(p^{-s})^{-1}}{F_{p,2}(p^{-s})^{-1}}.

By the assumption on F_{p,j}, the product over p is really a finite product. Thus, is a valid expression for \lambda(s) for all s.

By the partial Ramanujan bound and the assumptions on \mu_j and \nu_j, we see that \lambda(s) has no zeros or poles in the half-plane \Re(s)>\theta. But by the functional equations for L_1 and L_2 we have \lambda(s) = (\varepsilon_1/\varepsilon_2)\overline{\lambda}(1-s). Thus, \lambda(s) also has no zeros or poles in the half-plane \Re(s) \lt 1-\theta. Since \theta\lt \frac12, we conclude that \lambda(s) has no zeros or poles in the entire complex plane.

If the product over p in were not empty, then the fact that \{\log(p)\} is linearly independent over the rationals implies that \lambda(s) has infinitely many zeros or poles on some vertical line. Thus, F_{p,1}=F_{p,2} for all p.

The \Gamma-factors must also cancel identically, because the right-most pole of \Gamma_\R(s+\mu) is at s=-\mu, and the right-most pole of \Gamma_\C(s+\nu) is at s=-\nu. This leaves possible remaining factors of the form \Gamma_\C(s+1)/\Gamma_\R(s+1), but by the duplication formula that ratio equals \Gamma_\R(s+2), which still has poles: the \Gamma_\R factor cancels the first pole of the \Gamma_\C factor, but not the second. Note that the restriction \Re(\mu)\in\{0,1\} is a critical ingredient in this argument.

This leaves the possibility that \lambda(s)=(N_1/N_2)^{s/2}, but such a function cannot satisfy the functional equation \lambda(s) = (\varepsilon_1/\varepsilon_2)\overline{\lambda}(1-s) unless N_1=N_2 and \varepsilon_1=\varepsilon_2.

The strong multiplicity one theorem for \(L\)-functions

In this section we state a version of strong multiplicity one for L-functions which is stronger than Proposition because it only requires the Dirichlet coefficients a(p) and a(p^2) to be reasonably close. This is a significantly weaker condition than equality of the local factors.

Although the main ideas behind the proof appear in Kaczorowski-Perelli and Soundararajan, we give a slightly stronger version which assumes a partial Ramanujan bound \theta\lt \frac16, plus an additional condition, instead of the full Ramanujan conjecture. We provide a self-contained account because we also wish to bring these techniques to the attention of readers with a more representation-theoretic approach to L-functions.

Suppose L_1(s), L_2(s) are Dirichlet series with Dirichlet coefficients a_1(n), a_2(n), respectively, which continue to meromorphic functions of order 1 satisfying functional equations of the form - with a partial Selberg bound \Re(\mu_j),\ \Re(\nu_j)>-\frac12 for both functions, and having Euler products satisfying -. Assume a partial Ramanujan bound for some \theta\lt \frac16 holds for both functions, and that the Dirichlet coefficients at the primes are close to each other in the sense that

\sum_{p\le X} p\,\log(p) |a_1(p)-a_2(p)|^2\ll X .

We have L_1(s)=L_2(s) if either of the following two conditions is satisfied:

  1. \displaystyle \sum_{p\le X} |a_1(p^2)-a_2(p^2)|^2 \log p \ll X.
  2. For each of L_1(s) and L_2(s), separately, any one of the following holds:
    1. The Ramanujan bound \theta=0.
    2. The partial symmetric square of the function has a meromorphic continuation past the \sigma=1 line, and only finitely many zeros or poles in \sigma\ge 1.
    3. The partial exterior square of the function has a meromorphic continuation past the \sigma=1 line, and only finitely many zeros or poles in \sigma\ge 1.

Note that condition is satisfied if |a_1(p)-a_2(p)|\ll 1/\sqrt{\mathstrut p}, in particular, if a_1(p)=a_2(p) for all but finitely many p, or more generally if a_1(p)=a_2(p) for all but a sufficiently thin set of primes. In particular, a_1(p) and a_2(p) can differ at infinitely many primes. Also, by the prime number theorem in the form

\sum\limits_{p \lt X} \log(p) \sim X,

condition) for both L-functions implies condition).
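This form of the prime number theorem is easy to observe numerically; a minimal sketch (sympy is used only to enumerate primes):

```python
from math import log
from sympy import primerange

for X in (10**4, 10**5, 10**6):
    theta_X = sum(log(p) for p in primerange(2, X))  # Chebyshev's theta(X) = sum_{p < X} log p
    print(X, theta_X / X)                            # tends to 1 as X grows
```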

The condition \theta\lt \frac16 arises from the p^{-3s} terms in the proof of Lemma. Those terms do not seem to give rise to a naturally occurring L-function at 3s, so it may be difficult to replace the \theta\lt \frac16 condition by a statement about the average of certain Dirichlet coefficients.

Some technical lemmas

In this section we provide the lemmas required for the proof of Theorem. There are two types of lemmas we require. The first deals with manipulating Euler products and establishing zero-free half-planes via the convergence of those products. The second deals with possible zeros at the edge of the half-plane of convergence.

Before we start proving our lemmas, we state two basic results which are used in this section:

If \sum_{n \le X} |a(n)| \ll X^{1+\epsilon} for every \epsilon>0, then \displaystyle \sum_{n=1}^\infty \frac{a(n)}{n^s} converges absolutely for all \sigma>1.

If \sum_{n \le X} |a(n)| \le C X as X\to\infty, then

\sum_{n=1}^\infty \frac{|a(n)|}{n^\sigma} \le \frac{C}{\sigma-1} + O(1)

as \sigma \to 1^+.

Both of these results follow by partial summation.
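In the simplest case a(n)\equiv 1, so that C=1, the second estimate says \zeta(\sigma)\le 1/(\sigma-1)+O(1), and the O(1) term in fact tends to Euler's constant. A minimal numerical sketch using mpmath:

```python
from mpmath import mp, zeta

mp.dps = 25
for sigma in (1.1, 1.01, 1.001):
    # zeta(sigma) = 1/(sigma - 1) + O(1) as sigma -> 1+, matching the lemma with a(n) = 1, C = 1
    print(sigma, zeta(sigma) - 1/(sigma - 1))  # approaches Euler's constant 0.5772...
```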

Coefficients of related L-functions

If L(s)=\sum a(n)n^{-s} then for \rho=\sym^n or \ext^n we write

L(s,\rho) = \sum_j a(j,\rho) \,j^{-s}.

If p is a good prime then

  • a(p,\sym^n) = a(p^n),
  • a(p,\ext^2) = a(p)^2 - a(p^2),
  • a(p,\ext^3) = a(p^3) + a(p)^3 - 2 a(p)a(p^2), and
  • a(p^2,\sym^2) =a(p^4)-a(p)a(p^3)+a(p^2)^2.
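These are polynomial identities in the Satake parameters, so they can be spot-checked numerically before reading the proof; a minimal sketch with randomly chosen parameters for a hypothetical degree d=4 good Euler factor (the first identity is immediate from the definitions, so we check the other three):

```python
import random
from itertools import combinations, combinations_with_replacement
from math import prod

random.seed(1)
d = 4
alphas = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(d)]

def h(k, params):
    """a(p^k) for the local factor with the given parameters (complete homogeneous polynomial)."""
    return sum(prod(c) for c in combinations_with_replacement(params, k))

def e(k, params):
    """a(p, ext^k): elementary symmetric polynomial of degree k in the parameters."""
    return sum(prod(c) for c in combinations(params, k))

def a(k):
    return h(k, alphas)  # a(p^k)

sym2 = [prod(c) for c in combinations_with_replacement(alphas, 2)]  # parameters of sym^2

print(abs(e(2, alphas) - (a(1)**2 - a(2))))                # a(p, ext^2) = a(p)^2 - a(p^2)
print(abs(e(3, alphas) - (a(3) + a(1)**3 - 2*a(1)*a(2))))  # a(p, ext^3) identity
print(abs(h(2, sym2) - (a(4) - a(1)*a(3) + a(2)**2)))      # a(p^2, sym^2) identity
```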

Let p be a good prime. Expanding the Euler factor L_p(s) for L(s) we have

L_p(s)= \prod_{j=1}^d \frac 1{(1-\alpha_{j,p} \, p^{-s})} = \prod_{j=1}^d \sum_{\ell=0}^\infty \alpha_{j,p}^\ell \, p^{-\ell s} = \sum_{\ell=0}^\infty p^{-\ell s} \!\!\! \sum_{n_1+\cdots+n_d=\ell} \alpha_{1,p}^{n_1}\cdots \alpha_{d,p}^{n_d}\, ,

where the n_j are restricted to non-negative integers. Expanding the Euler factor for L^S(s,\sym^n) we have

L_p(s,\sym^n) = \prod_{i_1+ \cdots +i_d=n} (1-\alpha_{1,p}^{i_1} \cdots \alpha_{d,p}^{i_d} p^{-s})^{-1} = \prod_{i_1+ \cdots +i_d=n} \sum_{\ell=0}^\infty \left(\alpha_{1,p}^{i_1} \cdots \alpha_{d,p}^{i_d}\right)^\ell p^{-\ell s}.

The coefficient of p^{-s} in is

\sum_{i_1+ \cdots +i_d=n} \alpha_{1,p}^{i_1} \cdots \alpha_{d,p}^{i_d},

which equals the coefficient of p^{-ns} in .

The other identities in the lemma just involve expanding the definitions and checking particular coefficients.

Manipulating \(L\)-functions

The next lemma tells us that if there are zeros in the critical strip for \sigma\geq \tfrac12, those zeros come from Euler factors involving the coefficients a(p) or a(p^2) of the Dirichlet series or the Euler factors of the symmetric or exterior square L-functions.

Suppose

L(s) = \sum_{n} a(n)\, n^{-s} = \prod_{p\ \mathrm{bad}} \:\prod_{j=1}^{d_p} (1-\alpha_{j,p} p^{-s})^{-1} \prod_{p\ \mathrm{good}}\:\prod_{j=1}^d (1-\alpha_{j,p} p^{-s})^{-1},

where |\alpha_{j,p}|\le p^\theta for some \theta\in \R. Then for \sigma > 1+\theta,

L(s) = \prod_p (1+a(p) p^{-s}) \cdot \prod_p (1+a(p^2) p^{-2s}) \cdot h_0(s)
     = \prod_p (1+a(p) p^{-s}) \cdot L^S(2s,\sym^2) \cdot h_1(s)
     = \prod_p (1+a(p) p^{-s} + a(p)^2 p^{-2s} ) \cdot L^S(2s,\ext^2)^{-1} \cdot h_2(s),

where h_j(s) is regular and nonvanishing for \sigma > \frac13 + \theta.

We can write the Euler product in the form

L(s) = \prod_p \sum_{j=0}^\infty a(p^j) p^{-j s} ,

where

a(p^j) \ll_j p^{j\theta} \ll p^{j(\theta+\varepsilon)},

with the implied constant depending only on \varepsilon. We manipulate the Euler product, introducing coefficients A_j and B_j, where A_j(p), B_j(p) \ll_j p^{j\theta} \ll p^{j(\theta+\varepsilon)}. We have

L(s) = \prod_p \sum_{j=0}^\infty a(p^j) p^{-j s}
     = \prod_p \Bigl(1+a(p)p^{-s}+ A_2(p) p^{-2s} + \sum_{j=3}^\infty A_j(p)p^{-js} \Bigr)
       \times \Bigl(1+ (a(p^2) - A_2(p)) p^{-2s} + \sum_{j=2}^\infty B_{2j}(p)p^{-2js} \Bigr)
       \times \bigl(1+ O(p^{3\theta}p^{-3s})\bigr)
     = F_1(s)\, F_2(s)\, F_3(s),

say. Note that by the assumptions on A_j and B_j we have

\sum_{j=3}^\infty A_j(p)p^{-js} = O(p^{3\theta} p^{-3 s}) \ \ \ \ \ \text{ and } \ \ \ \ \ \sum_{j=2}^\infty B_{2j}(p)p^{-2js} = O(p^{4\theta} p^{-4 s}).

Combining this with justifies.

For the first assertion, set A_j(p)=0 and B_j(p)=0 and note that F_1(s) converges absolutely for \sigma>1+\theta and F_2(s) converges absolutely for \sigma>\frac12+\theta, and F_3(s) converges absolutely for \sigma>\frac13+\theta.

For the second assertion, set A_j(p)=0. For good primes p, choose B_j(p) so that F_2(s)= L^S(2s,\sym^2). For bad primes p, choose B_j(p) = 0. Note that (by the construction of the symmetric square) this choice of B_j satisfies the required bounds. The finitely many factors at the bad primes together with F_3(s) give h_1(s).

For the third assertion, the only modification is to set A_2(p)=a(p)^2, A_j(p)=0 for j\ge 3, and use the second identity in Lemma and appropriate choices for B_j(p) so that F_2(s)=L^S(2s,\ext^2)^{-1}.
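For a single degree-2 Euler factor these factorizations can be checked directly to the stated order; a minimal symbolic sketch (sympy), with z standing for p^{-s} and the products over p omitted:

```python
import sympy as sp

a, b, z = sp.symbols('alpha beta z')
Lp = 1/((1 - a*z)*(1 - b*z))         # degree-2 good Euler factor, z = p^{-s}
ap, ap2 = a + b, a**2 + a*b + b**2    # a(p) and a(p^2)

sym2 = 1/((1 - a**2*z**2)*(1 - a*b*z**2)*(1 - b**2*z**2))  # sym^2 local factor evaluated at p^{-2s}
ext2_inv = 1 - a*b*z**2                                     # inverse of the ext^2 local factor at p^{-2s}

def agree_mod_z3(f, g):
    """True if f and g have the same expansion through the z^2 term."""
    return sp.simplify(sp.series(f - g, z, 0, 3).removeO()) == 0

print(agree_mod_z3(Lp, (1 + ap*z) * (1 + ap2*z**2)))         # first factorization
print(agree_mod_z3(Lp, (1 + ap*z) * sym2))                   # second factorization
print(agree_mod_z3(Lp, (1 + ap*z + ap**2*z**2) * ext2_inv))  # third factorization
```

The omitted h_j(s) factors account for the discrepancy at p^{-3s} and beyond.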

Zeros at the edge of the half-plane of convergence

The absolute convergence of an Euler product in a half-plane \sigma>\sigma_0 implies that the function has no zeros or poles in that region. If the Euler product has a meromorphic continuation to a larger region, it could possibly have zeros or poles on the \sigma_0-line. The lemma in this section, which is standard and basically follows the proof of Lemma 1 of, says that if the Dirichlet coefficients a(p) are small on average then there are finitely many zeros or poles on the \sigma_0-line. Our modification is that we only require the L-function to satisfy a partial Ramanujan bound.

Note that the lemma is stated with \sigma_0=1 as the boundary of convergence. Applying the lemma in contexts with a different line of convergence, as in the proof of Theorem, just involves a simple change of variables s \to s+A.

Let

L(s)=\prod_p \sum_{j=0}^\infty a({p^j}) p^{-j s}

and suppose there exist M_1, M_2 \ge 0 and \theta\lt \frac23 so that |a({p^j})|\ll p^{\theta j} and

\sum_{p\le X} |a(p)|^2 \log p \le M_1^2 X + o(X),
\sum_{p\le X} p^{-2} |a(p^2)|^2 \log p \le M_2^2 X + o(X).

Then L(s) is a nonvanishing analytic function in the half-plane \sigma>1. Furthermore, if L(s) has a meromorphic continuation to a neighborhood of \sigma\ge 1, then L(s) has at most (M_1 + 2 M_2)^2 zeros or poles on the \sigma=1 line.

We have

L(s) = \prod_p \sum_{j=0}^\infty a({p^j}) p^{-j s}
     = \prod_p \biggl(1+a(p) p^{-s} + a(p^2)p^{-2s} + \sum_{j\ge 3} a({p^j}) p^{-j s}\biggr)
     = \prod_p \left(1+a(p) p^{-s} \right) \left(1+a(p^2) p^{-2s} \right)
       \times \bigl(1+(a(p^3)-a(p) a(p^2))p^{-3s} + (a(p^4)-a(p)a(p^3)+a(p)^2a(p^2))p^{-4s}+\cdots\bigr)
     = \prod_p \left(1+a(p) p^{-s} \right)\left(1+a(p^2) p^{-2s} \right) \prod_p \biggl(1+\sum_{j=3}^\infty c({p^j}) p^{-j s}\biggr)
     = f(s)\,g(s),

say.

We have c({p^j}) \ll j M^j p^{j\theta} \ll p^{j(\theta+\epsilon)} for some fixed constant M>0 and any \epsilon>0. We use this to show that g(s) is a nonvanishing analytic function in \sigma>\frac13+\theta. Writing g(s) = \prod_p(1+Y) we have

\log g(s) = \sum_p \log(1+Y) = \sum_p \left( Y + O(Y^2) \right),

where \displaystyle Y=\sum_{j=3}^\infty c({p^j}) p^{-j s}. Now,

|Y| \le \sum_{j=3}^\infty |c({p^j})| p^{-j \sigma} \ll \sum_{j=3}^\infty p^{j(\theta-\sigma+\epsilon)} = \frac{p^{3(\theta-\sigma+\epsilon)}}{1-p^{\theta-\sigma+\epsilon}}.

If \sigma > \frac13 + \theta we have |Y|\ll 1/p^{1+\delta} for some \delta>0. Therefore by Lemma the series for \log(g(s)) converges absolutely for \sigma > \frac13 + \theta, so g(s) is a nonvanishing analytic function in that region. By the same argument, using and , f(s) is a nonvanishing analytic function for \sigma>1, so the same is true for L(s). This establishes the first assertion in the lemma.

Now we consider the zeros of L(s) on \sigma=1. Since \theta\lt \frac23, the zeros or poles of L(s) on the \sigma = 1 line are the zeros or poles of f(s). Taking the logarithmic derivative of and using the same argument as above for the lower order terms, we have

\frac{L'}{L}(s) = \sum_p \left( \frac{-a(p)\log(p)}{p^s} + \frac{a(p)^2\log(p)}{p^{2s}} - 2\, \frac{a(p^2)\log(p)}{p^{2s}} \right) + h_1(s)
     = \sum_p \left( \frac{-a(p)\log(p)}{p^s} - 2\,\frac{a(p^2)\log(p)}{p^{2s}} \right) + h_2(s),

where h_j(s) is bounded in \sigma > \frac13+\theta+\epsilon for any \epsilon>0. By and Lemma, the middle term in the sum over primes in converges absolutely for \sigma>\frac12, so it has been incorporated into h_2(s).

Suppose s_1,\ldots,s_J are zeros or poles of L(s), with s_j = 1+i t_j having multiplicity m_j. We have

\frac{L'}{L}(\sigma+i t_j) \sim \frac{m_j}{\sigma-1}, \ \ \ \ \ \ \ \ \mathrm{as} \ \ \sigma \to 1^+,

therefore

\sum_p \left( \frac{-a(p)\log(p)}{p^{\sigma + it_j}} - 2\, \frac{a(p^2)\log(p)}{p^{2(\sigma + it_j)}} \right) \sim \frac{m_j}{\sigma-1}, \ \ \ \ \ \ \ \ \mathrm{as} \ \ \sigma \to 1^+.

Now write

k(s) = \sum_{j=1}^J m_j \sum_p \left( \frac{-a(p)\log(p)}{p^{s+i t_j}} -2\, \frac{a(p^2)\log(p)}{p^{2(s+i t_j)}} \right).

By we have

k(\sigma) \sim \frac{\sum_{j=1}^J m_j^2}{\sigma-1}, \ \ \ \ \ \ \ \ \mathrm{as} \ \ \sigma \to 1^+.

We will manipulate so that we can use and to give a bound on \sum m_j^2 in terms of M_1 and M_2.

By Cauchy's inequality and Lemma we have

|k(\sigma)| \le \left| \sum_p \frac{a(p)\log(p)}{p^{\sigma}} \sum_{j=1}^J \frac{m_j}{p^{it_j}}\right| + 2 \left|\sum_p \frac{p^{-\sigma} a(p^2)\log(p)}{p^{\sigma}} \sum_{j=1}^J \frac{m_j}{p^{2it_j}} \right|
\le \left(\sum_p \frac{|a(p)|^2 \log(p)}{p^{\sigma}}\right)^{\frac12} \left( \sum_p \frac{\log p}{p^\sigma} \biggl|\sum_{j=1}^J m_j p^{-it_j} \biggr|^2\right)^{\frac12} + 2 \left(\sum_p \frac{p^{-2\sigma}|a(p^2)|^2 \log(p)}{p^{\sigma}} \right)^{\frac12} \left( \sum_p \frac{\log p}{p^\sigma}\biggl| \sum_{j=1}^J m_j p^{-2it_j} \biggr|^2\right)^{\frac12}
\le (1+o(1))\Biggl( \left(\frac{M_1^2}{\sigma-1} \right)^{\frac12} \biggl( \sum_{j=1}^J \sum_{\ell=1}^J m_j m_\ell \sum_p \frac{\log p}{p^{\sigma + i(t_j - t_\ell)} } \biggr)^{\frac12} + 2 \left(\frac{M_2^2}{\sigma-1} \right)^{\frac12} \biggl( \sum_{j=1}^J \sum_{\ell=1}^J m_j m_\ell \sum_p \frac{\log p}{p^{\sigma + 2i(t_j - t_\ell)} } \biggr)^{\frac12} \Biggr)
\sim \frac{M_1 + 2 M_2}{(\sigma-1)^\frac12 } \left( \sum_{j=1}^J \frac{m_j^2}{\sigma-1} \right)^{\frac12} \ \ \ \ \ \ \ \ \mathrm{as} \ \ \sigma \to 1^+.

In the last step we used the fact that the Riemann zeta function has a simple pole at 1 and no other zeros or poles on the 1-line.

Combining and we have \displaystyle \sum_{j=1}^J m_j^2 \le (M_1 +2 M_2)^2. Since m_j^2\ge 1, the proof is complete.
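The fact used in the last step can also be seen numerically: for \sigma>1 one has \sum_p \log(p)\,p^{-s} = -\zeta'(s)/\zeta(s) + O(1), which blows up like 1/(\sigma-1) as \sigma\to 1^+ when t=0 but stays bounded at a point 1+it with t\ne 0. A small mpmath sketch (the ordinate t=5 is arbitrary):

```python
from mpmath import mp, mpc, zeta

mp.dps = 20

def neg_log_deriv_zeta(s):
    # -zeta'(s)/zeta(s) = sum of Lambda(n) n^{-s}; the prime-power (n = p^k, k >= 2) part is O(1)
    return -zeta(s, derivative=1) / zeta(s)

for sigma in (1.1, 1.01, 1.001):
    on_line = neg_log_deriv_zeta(mpc(sigma, 0))   # grows like 1/(sigma - 1): the pole at s = 1
    off_line = neg_log_deriv_zeta(mpc(sigma, 5))  # stays bounded: no zero or pole at 1 + 5i
    print(sigma, on_line, abs(off_line))
```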

Note that, by the prime number theorem, the condition on a(p) is satisfied if |a(p)|\le M_1. Also, if \theta\lt \frac12 then the condition on a(p^2) holds with M_2=1.

Proof of Theorem

Assume \theta\lt \frac16, bound, and condition) in Theorem. Define

A_1(s)=\prod_p \frac{1+a_1(p)p^{-s}}{1+a_2(p)p^{-s}} \cdot \prod_p \frac{1+a_1(p^2)p^{-2s}}{1+a_2(p^2)p^{-2s}}

(see ). Then, we have

A_1(s) = \prod_p (1+(a_1(p)-a_2(p))p^{-s}) \cdot \prod_p (1+(a_1(p^2)-a_2(p^2))p^{-2s}) \cdot H_2(s),

where H_2(s) is regular and nonvanishing for \sigma > \frac{5}{12}.

Using the identities

\frac{1+a x}{1+b x} = 1+(a-b)x - \frac{b(a-b)x^2}{1+b x}

and

1+ax+bx^2 = (1+ax)\left(1+ \frac{b x^2}{1+ax} \right)

we have

\frac{1+a x}{1+b x} = (1+(a-b)x)\left(1-\frac{b(a-b)x^2}{(1+(a-b)x)(1+bx)}\right).
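A quick symbolic check of this last identity (a minimal sympy sketch):

```python
import sympy as sp

a, b, x = sp.symbols('a b x')
lhs = (1 + a*x) / (1 + b*x)
rhs = (1 + (a - b)*x) * (1 - b*(a - b)*x**2 / ((1 + (a - b)*x)*(1 + b*x)))
print(sp.simplify(lhs - rhs))  # 0
```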

Thus

\prod_p \frac{1+a_1(p)p^{-s}}{1+a_2(p)p^{-s}} = \prod_p \bigl(1+(a_1(p)-a_2(p))p^{-s} \bigr) \times \prod_p \biggl( 1- \frac{a_2(p)(a_1(p)-a_2(p)) p^{-2s}}{ (1+(a_1(p)-a_2(p))p^{-s})(1+a_2(p)p^{-s})}\biggr)
= \prod_p \bigl(1+(a_1(p)-a_2(p))p^{-s} \bigr) \cdot h(s)

say. We wish to apply Lemma to show that h(s) is regular and nonvanishing for \sigma>\sigma_0 for some \sigma_0\lt \frac12. Since \theta\lt \frac16, if \sigma\ge \frac16 and p > P_0 where P_0 depends only on \theta, then |1+a_2(p)p^{-\sigma}| \geq \frac12 and |1+(a_1(p)-a_2(p))p^{-\sigma}| \geq \frac12. Using those inequalities and |a_2(p)|\ll p^\theta we have

\sum_{P_0\le p\le X} \left|\frac{a_2(p)(a_1(p)-a_2(p)) }{ (1+(a_1(p)-a_2(p))p^{-\sigma})(1+a_2(p)p^{-\sigma})}\right|^2 \log p \le 16 \sum_{P_0\le p\le X} \left|{a_2(p)(a_1(p)-a_2(p)) }\right|^2 \log p
\ll X^{2\theta} \sum_{P_0\le p\le X} \left|(a_1(p)-a_2(p)) \right|^2 \log p \ll X^{\frac12+2\theta}.

Changing variables s\to \frac{s}{2}-\frac{1}{12} and applying Lemma, we see that h(s) is regular and nonvanishing for \sigma>\frac{5}{12}.

Applying the same reasoning to the second factor in completes the proof.

\lambda(s) has only finitely many zeros or poles in the half-plane \sigma\ge\frac12.

By and the partial Selberg bound assumed on \mu and \nu, only the product

P(s)=\prod_p\frac{ F_{p,1}(p^{-s})^{-1} }{ F_{p,2}(p^{-s})^{-1} } = \prod_p\frac{ 1+a_1(p)p^{-s}+a_1(p^{2})p^{-2s}+\cdots} {1+a_2(p)p^{-s}+a_2(p^{2})p^{-2s}+\cdots}

could contribute any zeros or poles to \lambda(s) in the half-plane \sigma\ge\frac12.

By the first line in equation of Lemma we have

P(s) = \prod_p \frac{1+a_1(p)p^{-s}}{1+a_2(p)p^{-s}} \cdot \prod_p \frac{1+a_1(p^2)p^{-2s}}{1+a_2(p^2)p^{-2s}} \cdot H_1(s) = A_1(s) H_1(s),

say, where H_1(s) is regular and nonvanishing for \sigma>\frac13+\theta.

Using the notation of Lemma, write as A_1(s)=A_2(s)H_2(s). Since A_1(s) and H_2(s) are meromorphic in a neighborhood of \sigma\ge\frac12, so is A_2(s). Changing variables s\mapsto s+\frac12, which divides the nth Dirichlet coefficient by \sqrt{\mathstrut n}, we can apply Lemma, using the estimate and condition) to conclude that A_2(s) has only finitely many zeros or poles in \sigma\ge\frac12. Since the same is true of H_1(s) and H_2(s), we have shown that P(s) has only finitely many zeros or poles in \sigma\ge\frac12. This completes the proof for conditions) and ).

In the other cases, the proof is almost the same, using Lemma to rewrite equation in terms of L_j^S(s,\sym^2) or L_j^S(s,\ext^2), and using Lemma for the factors that remain. This concludes the proof of Lemma.

Now we have the ingredients to prove Theorem. The proof begins the same way as that of Proposition, by considering the ratio of completed L-functions:

\lambda(s) := \frac{\Lambda_1(s)}{\Lambda_2(s)},

which is a meromorphic function of order 1 and satisfies the functional equation \lambda(s)=\varepsilon \overline{\lambda}(1-s), where \varepsilon = \varepsilon_1/\varepsilon_2.

By the functional equation, \lambda(s) has only finitely many zeros or poles, so by the Hadamard factorization theorem

\lambda(s) = e^{A s} r(s)

where r(s) is a rational function.

By, as \sigma\to\infty,

\lambda(\sigma) = C_0 \sigma^{m_0} e^{A \sigma} \bigl(1 + C_1 \sigma^{-1} + O(\sigma^{-2})\bigr),

for some C_0\not=0 and m_0\in \Z. On the other hand, if b(n_0) is the first non-zero Dirichlet coefficient (with n_0>1) of L_1(s)/L_2(s), then by and Stirling's formula, as \sigma\to\infty,

\lambda(\sigma) = \bigl(B_0 \sigma^{B_1} e^{B_2 \sigma\log \sigma + B_3 \sigma }(1 +o(1))\bigr)\bigl(1 + b(n_0) n_0^{-\sigma} + O((n_0+1)^{-\sigma})\bigr).
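For orientation, the exponent B_2 can be made explicit. By Stirling's formula, \log\Gamma_\R(\sigma+\mu) = \tfrac{\sigma}{2}\log\sigma + O(\sigma) and \log\Gamma_\C(\sigma+\nu) = \sigma\log\sigma + O(\sigma) as \sigma\to\infty, so

B_2 = \tfrac12\bigl((J_1+2K_1)-(J_2+2K_2)\bigr) = \tfrac{d_1-d_2}{2},

and the conclusion B_2=0 obtained below says precisely that the two L-functions have the same degree.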

Comparing those two asymptotic formulas, the leading terms must be equal, so B_0=C_0, B_1=m_0, B_2=0, and B_3=A. Comparing the second terms, we are equating a polynomially decaying term with an exponentially decaying one, which is impossible unless b(n_0)=0 and C_1=0. But b(n_0) was assumed to be the first nonzero Dirichlet coefficient of L_1(s)/L_2(s) with n_0>1, so no such coefficient exists and L_1(s)=L_2(s), as claimed.