### Upper tails for triangles in sparse random graphs

Eyal Lubetzky and I just finished and uploaded to the arXiv our new paper *On the variational problem for upper tails of triangle counts in sparse random graphs*. This paper concerns the following question:

**The upper tail problem for triangles.** What is the probability that the number of triangles in an Erdős–Rényi random graph $G(n,p)$ is at least twice its mean, or more generally, larger than the mean by a factor of $1+\delta$?
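As a concrete (if tiny-scale) illustration of the question, here is a minimal Monte Carlo sketch, not from the paper, that estimates this upper tail probability by direct simulation; the parameter choices below are arbitrary:

```python
import itertools
import math
import random

def count_triangles(n, edges):
    """Count triangles in a graph on vertices 0..n-1, edges given as sorted pairs."""
    return sum(
        1
        for u, v, w in itertools.combinations(range(n), 3)
        if (u, v) in edges and (u, w) in edges and (v, w) in edges
    )

def upper_tail_estimate(n, p, factor=2.0, trials=2000, seed=0):
    """Estimate P(#triangles >= factor * mean) in G(n, p) by simulation."""
    rng = random.Random(seed)
    mean = math.comb(n, 3) * p**3  # expected number of triangles
    hits = 0
    for _ in range(trials):
        edges = {
            (u, v)
            for u, v in itertools.combinations(range(n), 2)
            if rng.random() < p
        }
        if count_triangles(n, edges) >= factor * mean:
            hits += 1
    return hits / trials
```

Of course, the whole point of the problem is that this probability decays superpolynomially fast, so direct simulation is hopeless beyond very small $n$; the question is the precise rate of decay.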

This problem has a fairly rich history, and it is considered a difficult problem in probabilistic combinatorics. In a 2002 paper titled *The infamous upper tail*, Janson and Ruciński surveyed the progress on the problem and described its challenges (they also considered the upper tail problem more generally for any fixed subgraph $H$, but we focus on triangles here). Research towards the solution of this problem has led to new techniques for bounding probabilities of rare events. Note that in contrast, the lower tail problem (finding the probability of getting too *few* triangles) is comparatively easier.

Here we consider the case when $p$ tends to zero as $n$ grows (the dense case, where $p$ is fixed, leads to an interesting variational problem involving graph limits, as shown by a seminal work of Chatterjee and Varadhan; see my previous blog post). It is not too hard to describe what the answer *should* be in the sparse setting. Suppose we arbitrarily select a set of $k = \delta^{1/3} n p$ vertices and force it to be a clique, that is, we force all $\binom{k}{2}$ edges between these vertices to be present in the random graph. The probability "cost" of this constraint is $p^{\binom{k}{2}} = p^{(1+o(1)) \delta^{2/3} n^2 p^2 / 2}$. The clique gives us roughly an additional $\binom{k}{3} \approx \delta n^3 p^3 / 6$ triangles, thereby boosting the total number of triangles in the graph by a factor of $1+\delta$ from the mean, which is $\binom{n}{3} p^3 \approx n^3 p^3 / 6$.
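To make the arithmetic above concrete, here is a small numeric sketch (my own illustration, with arbitrary sample parameters) checking that a planted clique of the stated size yields the right triangle boost and the right cost exponent:

```python
# Illustrative parameters, not from the paper: a sparse regime with p -> 0.
n, p, delta = 10**6, 10**-2, 1.0

k = delta ** (1 / 3) * n * p      # size of the planted clique
extra = k**3 / 6                  # triangles gained, ~ binom(k, 3)
mean = n**3 * p**3 / 6            # expected triangle count, ~ binom(n, 3) p^3

# The planted clique boosts the triangle count by a factor ~ delta over the mean.
print(extra / mean)               # ~ delta

# Cost exponent: binom(k, 2) ~ delta^{2/3} n^2 p^2 / 2 edges are forced to appear,
# so the probability cost is p raised to roughly this power.
cost_edges = k**2 / 2
print(cost_edges / (n**2 * p**2))  # ~ delta^{2/3} / 2
```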

Let $t(G)$ denote the triangle density of a graph $G$, i.e., the number of triangles in $G$ as a fraction of $\binom{n}{3}$, the maximum possible number of triangles. The above argument shows that the upper tail probability satisfies

$$\mathbb{P}\left(t(G_{n,p}) \geq (1+\delta) p^3\right) \geq p^{(1+o(1)) \delta^{2/3} n^2 p^2 / 2}.$$
Is this lower bound tight? That is, can we find a matching upper bound for the upper tail probability? This is precisely the *infamous upper tail* problem mentioned earlier. For a long time there was no satisfactory upper bound anywhere close to the lower bound. A breakthrough by Kim and Vu in 2004 gave an upper bound of the form $e^{-c(\delta) n^2 p^2}$. However, this still leaves a missing factor of $\log(1/p)$ in the exponent. This problem was finally settled in 2012 by Chatterjee and independently by DeMarco and Kahn. They showed that the exponent in the probability has order $n^2 p^2 \log(1/p)$, thereby filling in the "missing log."

Now that we know the correct order of the exponent, what remains unknown is the constant in front of the $n^2 p^2 \log(1/p)$. In a very recent breakthrough by Chatterjee and Dembo (the preprint was released just this January), they provided a framework that reduces this large deviation problem to a certain variational problem, at least in the range $p \geq n^{-\alpha}$ for some explicit (though small) $\alpha > 0$. The variational problem is the natural one associated to this large deviation question, and the reduction is expected to hold for much sparser random graphs as well.

Eyal and I have been working on this variational problem for quite some time, starting with our previous paper, where we studied the problem in the dense setting (constant $p$) and determined the phase diagram for replica symmetry. In our new paper, we solve this variational problem in the sparse regime. Combining the solution of this variational problem with the new results of Chatterjee and Dembo determines the correct asymptotics in the exponent of the upper tail probabilities, at least when $n^{-\alpha} \leq p = o(1)$. For fixed $\delta > 0$, we can now prove that

$$\mathbb{P}\left(t(G_{n,p}) \geq (1+\delta) p^3\right) = e^{-(c(\delta) + o(1))\, n^2 p^2 \log(1/p)},$$
where we determine the constant $c(\delta)$ in the exponent to be

$$c(\delta) = \min\left\{\tfrac{1}{2} \delta^{2/3},\ \tfrac{1}{3} \delta\right\}.$$
We described earlier the story behind the $\frac{1}{2}\delta^{2/3}$ term (by forcing a clique on $\delta^{1/3} n p$ vertices). Where does the other term, $\frac{1}{3}\delta$, come from? It turns out there is another way to get a lot of triangles relatively cheaply, at least when $\delta \leq 27/8$: forcing a set of $\frac{\delta}{3} n p^2$ vertices to be connected to all other vertices. This has probability $p^{(1+o(1)) \frac{\delta}{3} n^2 p^2}$. The first construction (forcing a clique) competes with the second construction (forcing a complete bipartite graph): the former is preferable when $\delta > 27/8$ and the latter is preferable when $\delta < 27/8$. It turns out, as we show in our paper, that these are essentially the only constructions.
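The competition between the two constructions can be sketched numerically; the snippet below (my own illustration) compares the two exponents and exhibits the crossover at $\delta = 27/8$:

```python
def clique_exponent(delta):
    """Exponent from planting a clique on delta^{1/3} n p vertices."""
    return 0.5 * delta ** (2 / 3)

def hub_exponent(delta):
    """Exponent from wiring (delta/3) n p^2 vertices to all other vertices."""
    return delta / 3

def c(delta):
    """The constant c(delta) = min(delta^{2/3}/2, delta/3): the cheaper construction wins."""
    return min(clique_exponent(delta), hub_exponent(delta))

# The exponents cross exactly at delta = 27/8: below it the complete bipartite
# ("hub") construction is cheaper, above it the clique is cheaper.
for delta in (1.0, 27 / 8, 8.0):
    print(delta, clique_exponent(delta), hub_exponent(delta), c(delta))
```

Here "cheaper" means a smaller exponent, hence a larger probability, so the minimum of the two exponents governs the tail.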