Organizers: William Duke, Philippe Michel, André Reznikov, and Akshay Venkatesh

October 16, 2006 - October 20, 2006

In 1859, in the celebrated paper On the number of primes less than a given magnitude, Bernhard Riemann (1826-1866) set forth a program for studying the distribution of primes. He was particularly interested in the asymptotics, as N → ∞, of the number π(N) of prime integers less than the integer N. A complete answer to this question was ultimately given by the Prime Number Theorem of de la Vallée Poussin and Hadamard, which states that

lim_{N → ∞} π(N)/(N/log N) = 1 .
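
The Prime Number Theorem can be illustrated numerically at small scales. The following minimal Python sketch (the function names are ours, chosen for illustration) sieves the primes and compares π(N) with N/log N; the ratio tends to 1, but very slowly.

```python
# A numerical illustration of the Prime Number Theorem: compare pi(N)
# with N/log N using a simple sieve of Eratosthenes.
import math

def primes_up_to(n):
    """All primes <= n, by the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

def prime_pi(n):
    """pi(n): the number of primes <= n."""
    return len(primes_up_to(n))

for N in (10**3, 10**4, 10**5):
    print(N, prime_pi(N), prime_pi(N) / (N / math.log(N)))
```

Even at N = 100000 the ratio is still noticeably above 1, which is one reason sharper error terms (and ultimately RH) matter.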

Riemann, plagued by poverty and ill health, died at the age of thirty-nine, before he was able to achieve his goals in this program.

But one seminal contribution that he made to this study was to introduce the so-called Riemann zeta function. This is the analytic function

ζ(s) = ∑_{j=1}^{∞} 1/j^s .

Elementary estimates from calculus show that this series converges absolutely and uniformly on compact subsets of W = {s ∈ C: Re s > 1}. In fact the function analytically continues to the entire complex plane less the point 1. The ζ function has a simple pole at 1 with residue 1.
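
The convergence on Re s > 1 is easy to see numerically for real s: the tail of the series beyond N terms is O(N^{1-s}). A quick sketch (illustrative only, with the classical closed form ζ(2) = π²/6 due to Euler used as a reference value):

```python
# Partial sums of the zeta series at s = 2 settle near the classical
# value zeta(2) = pi^2/6; the tail beyond N terms is about 1/N.
import math

def zeta_partial(s, terms):
    """Partial sum of the Dirichlet series: sum_{j=1}^{terms} 1/j^s."""
    return sum(1.0 / j**s for j in range(1, terms + 1))

for n in (10, 1000, 100000):
    print(n, zeta_partial(2.0, n), abs(zeta_partial(2.0, n) - math.pi**2 / 6))
```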

A number of remarkable formulas, including the functional equation of Riemann and the Euler product formula, show that the Riemann zeta function contains decisive information about the distribution of the prime numbers. In fact the latter formula says that

ζ(s) = ∏_{j=1}^{∞} 1/(1 - p_j^{-s}) = 1/[(1 - 2^{-s})(1 - 3^{-s})(1 - 5^{-s}) ···] ,

where {p_j} is an enumeration of the positive prime integers. This is a quite explicit relationship between the zeta function and the prime numbers.
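
This relationship can be seen numerically: for real s > 1, a truncated Euler product over the primes up to a bound B approaches the value of the Dirichlet series. A minimal sketch (the function names are ours):

```python
# Numerical illustration of the Euler product formula: the product of
# 1/(1 - p^-s) over primes p <= B approaches the Dirichlet-series value
# of zeta(s) for real s > 1.
import math

def primes_up_to(n):
    """All primes <= n, by the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

def zeta_series(s, terms):
    """Truncated Dirichlet series sum_{j<=terms} 1/j^s."""
    return sum(1.0 / j**s for j in range(1, terms + 1))

def zeta_euler(s, bound):
    """Truncated Euler product over primes p <= bound."""
    prod = 1.0
    for p in primes_up_to(bound):
        prod *= 1.0 / (1.0 - p**(-s))
    return prod

print(zeta_series(2.0, 10**5), zeta_euler(2.0, 10**4), math.pi**2 / 6)
```

Both truncations agree with π²/6 to several decimal places; the product converges markedly faster per factor than the series does per term.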

The celebrated Riemann hypothesis concerns the location of the zeros of the Riemann zeta function. Apart from the trivial zeros at the negative even integers -2, -4, -6, ..., it is known that all the other zeros lie in the critical strip

U = {s: 0 < Re s < 1} .

The conjecture is that all the zeros in the critical strip U actually lie on the critical line

{ s: Re s = 1/2 } .

AIM Director Brian Conrey, building on work of Norman Levinson, has in fact shown that at least 40% of the zeros of the Riemann zeta function that lie in the critical strip actually lie on the critical line. The conjecture is that 100% of the zeros do so. This is arguably the most famous and most important unsolved problem in mathematics. It is one of the Clay Mathematics Institute Millennium Prize Problems, and whoever proves the conjecture will be awarded $1 million.

The Riemann hypothesis (affectionately known as RH) is certainly a matter of great interest for a number of the world's best mathematicians. There are many different attacks on the problem and many partial results. This particular workshop concentrated on ideas that would derive from the Riemann hypothesis, but which have independent interest.

One such derivative result is the Lindelöf conjecture, which asserts that the zeta function cannot be too large on the critical line: more precisely, that ζ(1/2 + it) = O(|t|^ε) for every ε > 0. It would follow from RH. Even this question is too difficult for meaningful attack at this point in history. But it was certainly a topic for discussion during this very active week.

One of the first and most fundamental ideas in these studies is that of the L-function. First introduced by Johann Peter Gustav Lejeune Dirichlet (1805-1859), an L-function is a generalization of the Riemann zeta function. An example of an L-function is given by

L(s, χ) = 1 - 3^{-s} + 5^{-s} - 7^{-s} + 9^{-s} - ··· .

The same L-function can also be written as a product over the odd primes,

L(s, χ) = 1/[(1 + 3^{-s})(1 - 5^{-s})(1 + 7^{-s})(1 + 11^{-s}) ···] ,

where the sign in the factor attached to the prime p is determined by

+ if p ≡ 3 mod 4
- if p ≡ 1 mod 4 .

This expansion is clearly an instance, for the L-function, of the Euler product formula explicated above.
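
At s = 1 this L-function can be evaluated directly from its series: the partial sums 1 - 1/3 + 1/5 - 1/7 + ··· tend to the classical value π/4 (Leibniz's series). A minimal sketch, with our own helper names:

```python
# The Dirichlet series for the nontrivial character mod 4, evaluated
# at s = 1; its partial sums tend to pi/4 (Leibniz's series).
import math

def chi(n):
    """The nontrivial Dirichlet character mod 4: +1, -1, or 0."""
    return {0: 0, 1: 1, 2: 0, 3: -1}[n % 4]

def L_partial(s, terms):
    """Partial sum of the Dirichlet series sum_n chi(n)/n^s."""
    return sum(chi(n) / n**s for n in range(1, terms + 1))

print(L_partial(1.0, 200001), math.pi / 4)
```

Since the nonzero terms alternate in sign and shrink, the error after N odd terms is at most the next term, so convergence here is slow but fully controlled.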

One important question that can be attacked using L-functions is that of estimating least quadratic non-residues. That is, given a large prime p, what is the least positive integer that is not a square modulo p? This problem has not only number-theoretic interest, but also cryptographic applications. The creation of (in principle) unbreakable codes often hinges on estimates such as these. It turns out that the problem of least quadratic non-residues is controlled by estimates on L(1/2 + it, χ), where this L is the Dirichlet L-function described in the last paragraph.
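
For any particular prime the least non-residue is easy to compute with Euler's criterion: n is a non-residue mod an odd prime p exactly when n^((p-1)/2) ≡ -1 mod p. A brute-force sketch (the function name is ours); the subtle question, which subconvexity addresses, is how slowly this quantity can be forced to grow as p → ∞.

```python
# Least quadratic non-residue mod an odd prime p, by brute force
# using Euler's criterion via fast modular exponentiation.
def least_nonresidue(p):
    """Smallest n >= 2 that is not a square mod the odd prime p."""
    for n in range(2, p):
        if pow(n, (p - 1) // 2, p) == p - 1:  # Euler's criterion
            return n
    return None

for p in (23, 71, 311):
    print(p, least_nonresidue(p))
```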

The Riemann hypothesis would give a very good bound on least quadratic non-residues. The Lindelöf conjecture also gives a good, but not as sharp, bound. The best result obtainable by current techniques uses the idea of a subconvexity bound. Of course subconvexity is the principal topic of this AIM workshop. The idea of subconvexity is based on the Phragmén-Lindelöf principle, which in turn is a generalization of the classical maximum principle (adapted to certain unbounded domains) from classical complex function theory. The AIM workshop on subconvexity sought to establish similar bounds for more general L-functions.

Another application of subconvexity, worked out by workshop organizer William Duke (and based on work of Iwaniec), concerns points of the integer lattice in R^3 that lie on the sphere of radius n^{1/2}. More precisely, we seek integer solutions x, y, z to the Diophantine equation

x^2 + y^2 + z^2 = n ,

where n is a positive integer. If P = (x,y,z) is such a solution, then we examine the projection

(x/n^{1/2}, y/n^{1/2}, z/n^{1/2})

of P onto the unit sphere. Duke showed that, as n → +∞, these projected points form an equidistributed, dense set in the unit sphere (assuming that there are enough of them). The technique of the proof is subconvexity. It also turns out that this theorem has ergodic-theoretic interpretations, and that gives a tie-in between ergodic theory and number theory. In addition, results of this kind are of great interest to Fourier analysts, since spherical summation of multiple Fourier series is a matter of intense study for modern analysts.
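
For small n the lattice points and their projections can simply be enumerated, which makes the setup of Duke's theorem concrete. A brute-force sketch (the function names are ours; this illustrates the statement, not the proof):

```python
# Enumerate integer points (x, y, z) with x^2 + y^2 + z^2 = n and
# project them onto the unit sphere.
import math

def sphere_points(n):
    """All integer solutions of x^2 + y^2 + z^2 = n."""
    r = math.isqrt(n)
    sols = []
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            z2 = n - x * x - y * y
            if z2 < 0:
                continue
            z = math.isqrt(z2)
            if z * z == z2:
                sols.append((x, y, z))
                if z != 0:
                    sols.append((x, y, -z))
    return sols

def projected(n):
    """The solutions scaled by n^(-1/2), i.e. points on the unit sphere."""
    s = math.sqrt(n)
    return [(x / s, y / s, z / s) for (x, y, z) in sphere_points(n)]

print(len(sphere_points(9)))  # the number of representations of 9
```

For example, n = 9 has the 30 solutions given by the sign and coordinate permutations of (3, 0, 0) and (1, 2, 2); Duke's theorem says that for suitable n → ∞ such point clouds spread out evenly over the sphere.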

The AIM workshop on subconvexity bounds explored multiple approaches to the Riemann hypothesis (RH) and to the distribution of primes. Many applications were discussed, and connections with diverse parts of mathematics were developed. Part of the goal of the workshop was to bring non-experts, such as those who study representation theory, into the subject. The result was a host of new collaborations, and the generation of many new ideas.