Most computer scientists are familiar with the P vs. NP problem, which asks, essentially, whether every problem whose solution can be verified in polynomial time can also be solved in polynomial time. So fundamentally does complexity theory hinge on this question that the Clay Mathematics Institute has labelled it one of its seven Millennium Prize Problems, and will award $1,000,000 to the researchers who solve it.

But the other six Millennium Prize Problems are often less accessible to computer scientists, requiring background in areas such as topology and complex analysis. One of the most important of these is the Riemann Hypothesis, which is normally stated in terms of the complex zeros of the Riemann zeta function. Here, we will describe a characterization of the Riemann Hypothesis better suited to a computer scientist's background: one based on the growth rate of an integer function.

Consider the sum-of-divisors function σ(n) that sums up the divisors of the positive integer n. For example, σ(15) = 1 + 3 + 5 + 15 = 24. If you've studied asymptotic notation, a natural question to ask is: how fast does σ(n) grow?
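For concreteness, here is a minimal σ(n) in Python (the name `sigma` and the trial-division approach are my own choices, not anything prescribed by the mathematics; trial division up to √n keeps each call at O(√n)):

```python
def sigma(n: int) -> int:
    """Sum of the positive divisors of n, by trial division up to sqrt(n)."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:  # pair each small divisor d with its cofactor n // d
                total += n // d
        d += 1
    return total

print(sigma(15))  # 1 + 3 + 5 + 15 = 24
```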

It's clear that since 1 and n always divide n, σ(n) ≥ n + 1, so σ(n) is Ω(n). But σ(n) cannot be O(n), because σ(n)/n takes on arbitrarily large values. To see this, let n be the k-th primorial, the product of the first k primes. Because σ(n)/n is multiplicative and n is squarefree, σ(n)/n equals the product of σ(p)/p = (1 + p)/p = 1 + 1/p over the prime factors p of n. If you expand out this product, you get a sum that includes 1/p for each prime factor, and the sum of the reciprocals of the primes, also known as the harmonic series of primes, diverges. It diverges quite slowly, however: the sum of 1/p over the primes p ≤ x grows like ln ln x.
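This argument is easy to check numerically. A sketch, using a simple sieve of Eratosthenes as a helper (my own scaffolding, not anything from the text): since each primorial is squarefree, σ(n)/n is just the running product of (1 + 1/p).

```python
def primes_up_to(limit: int) -> list[int]:
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

ratio = 1.0  # sigma(n)/n for the current primorial n
for p in primes_up_to(100):
    ratio *= 1 + 1 / p  # sigma(n)/n is multiplicative, and n is squarefree
    print(f"after multiplying in prime {p}: sigma(n)/n = {ratio:.3f}")
```

The ratio keeps climbing without bound (it reaches about 5.06 after the primes below 100), but only very slowly, consistent with ln ln n growth.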

From this we might conjecture that σ(n) grows at a rate of n ln ln n. Gronwall's theorem, proven in 1913, shows that in fact:

lim sup_{n→∞} σ(n) / (n ln ln n) = e^γ.

Here, e and γ are both constants (the latter is the Euler–Mascheroni constant, γ ≈ 0.5772), so e^γ is a constant, equal to about 1.781. This is somewhat stronger than the statement that σ(n) is O(n ln ln n): the lim sup actually says that for any ε > 0, some tail of the sequence σ(n) / (n ln ln n) must be bounded above by e^γ + ε.

But instead of a lim sup, what we'd really like is to have a hard bound; we'd like to say:

σ(n) / (n ln ln n) < e^γ for all n.

This is false, and a quick search identifies 27 small counterexamples: 2, 3, 4, 5, 6, 8, 9, 10, 12, 16, 18, 20, 24, 30, 36, 48, 60, 72, 84, 120, 180, 240, 360, 720, 840, 2520, and 5040. However, no search to date has located any larger counterexamples. From this we might conjecture:

σ(n) / (n ln ln n) < e^γ for all n > 5040.
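Both the list of 27 small counterexamples and the absence of larger ones (up to a modest bound, at least) can be verified directly. A sketch, using a divisor-sum sieve so that σ(n) is tabulated for every n ≤ N at once, rather than factoring each n separately (the bound N and all variable names are my own choices):

```python
import math

E_GAMMA = math.exp(0.5772156649015329)  # e^gamma, about 1.781

N = 100_000
sigma = [0] * (N + 1)  # sigma[n] will hold the sum of divisors of n
for d in range(1, N + 1):
    for multiple in range(d, N + 1, d):  # d divides every multiple of d
        sigma[multiple] += d

# Start at n = 2: ln ln n is undefined at n = 1 (and negative for n = 2).
violations = [n for n in range(2, N + 1)
              if sigma[n] >= E_GAMMA * n * math.log(math.log(n))]
print(violations)  # the 27 values listed above, ending at 5040
```

Running this reproduces exactly the 27 counterexamples above and finds nothing between 5040 and N, in line with the conjecture (far larger bounds have been verified in the literature).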

In 1984, Guy Robin showed that, in fact, this statement is true if and only if the Riemann Hypothesis is true. This is Robin's theorem. Thus, the Riemann Hypothesis could, in principle, be proven false by exhibiting a single integer n > 5040 such that σ(n) ≥ e^γ n ln ln n. Because most mathematicians believe the Riemann Hypothesis is true (just as most computer scientists believe P ≠ NP), it's more likely that the statement above is true, and a proof of it would win you $1,000,000.