### On replica symmetry of large deviations in random graphs

Eyal Lubetzky and I just uploaded to the arXiv our new paper “On replica symmetry of large deviations in random graphs.” In this paper we answer the following question of Chatterjee and Varadhan:

Question. Fix ${0 < p < r < 1}$. Let ${n}$ be a large integer and let ${G}$ be an instance of the Erdős-Rényi random graph ${\mathcal G(n,p)}$ conditioned on the rare event that ${G}$ has at least as many triangles as the typical ${\mathcal G(n,r)}$. Does ${G}$ look like a typical ${\mathcal G(n,r)}$?

Here the Erdős-Rényi random graph ${\mathcal G(n,p)}$ is formed by taking ${n}$ vertices and adding every possible edge independently with probability ${p}$. Here “look like” means close in cut-distance (we won’t give the precise definition in this blog post). In this case, saying that ${G}$ is close to ${\mathcal G(n,r)}$ is roughly equivalent to saying that every not-too-small subset of vertices of ${G}$ (at least a constant fraction in size) induces a subgraph with edge density close to ${r}$.

Another way to phrase the question is: what is the reason for ${G}$ having too many triangles? Is it because it has an overwhelming number of edges distributed uniformly, or fewer edges arranged in some special structure, e.g., a clique?

Via a beautiful new framework by Chatterjee and Varadhan for large deviation principles in ${\mathcal G(n,p)}$, we give a complete answer to the above question.

The answer, as it turns out, depends on ${(p,r)}$. See the plot below. For ${(p,r)}$ in the blue region, the answer is yes, and for ${(p,r)}$ in the red region, the answer is no.

Does ${G}$ look like a ${\mathcal G(n,r)}$?

The phase transition behavior was already observed by Chatterjee and Varadhan, but the determination of the exact phase boundary is new. Borrowing language from statistical physics, the blue region where the conditional random graph is close to ${\mathcal G(n,r)}$ is called the replica symmetric phase, and the red region is called the symmetry breaking phase. Note that in the left part of the plot, if we fix a small value of ${p}$, the model experiences a double phase transition as ${r}$ increases from ${p}$ to ${1}$ — starting in the blue phase, then switching to the red phase, and then switching back to the blue phase.

More generally, our result works for any ${d}$-regular graph in place of triangles. The boundary curve depends only on ${d}$; the curves for the first few values of ${d}$ are plotted below. In particular, this means that large deviations for triangles and 4-cycles share the same phase boundary. A pretty surprising fact! We also consider the model where we condition on the largest eigenvalue of the graph being too large, and the phase boundary again turns out to be the same as that of triangles.

We also derive similar results for large deviations in the number of linear hypergraphs in a random hypergraph.

The phase boundary for ${d}$-regular graphs

### Exponential random graphs

We also studied the exponential random graph model. This is a widely studied graph model, motivated in part by applications in social networks. The idea is to bias the distribution of the random graph to favor those with, say, more triangles. This model has a similar flavor to the model considered above where we condition on the random graph having lots of triangles.

We consider a random graph ${G}$ on ${n}$ vertices drawn from the distribution

$\displaystyle p_\beta(G) \propto \exp\left(\tbinom{n}{2}(\beta_1 t(K_2, G) + \beta_2 t(K_3, G))\right)\,,$

where ${t(K_2, G)}$ and ${t(K_3, G)}$ are the edge density and the triangle density of ${G}$, respectively. When ${\beta_2 = 0}$, this model coincides with the Erdős-Rényi model ${\mathcal G(n,p)}$ with some ${p}$ depending on ${\beta_1}$. We only consider the case ${\beta_2 > 0}$, which represents a positive bias in the triangle count.
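To get a feel for the model, here is a toy Metropolis sampler for ${p_\beta}$ (my own illustrative sketch, not from the paper; at these tiny sizes it is nowhere near the asymptotic regime of the theorems):

```python
import numpy as np

def densities(A):
    """Edge density t(K_2, G) and triangle density t(K_3, G) of the graph
    with adjacency matrix A (homomorphism densities)."""
    n = A.shape[0]
    return A.sum() / n**2, np.trace(A @ A @ A) / n**3

def sample_exponential_graph(n, beta1, beta2, steps=20000, seed=0):
    """Metropolis chain targeting p_beta(G) ∝ exp(C(n,2)(b1 t(K2,G) + b2 t(K3,G)))."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    scale = n * (n - 1) / 2                  # the binom(n,2) prefactor

    def logw(A):
        e, t = densities(A)
        return scale * (beta1 * e + beta2 * t)

    cur = logw(A)
    for _ in range(steps):
        i, j = rng.integers(n), rng.integers(n)
        if i == j:
            continue
        A[i, j] = A[j, i] = 1 - A[i, j]      # propose flipping edge {i, j}
        new = logw(A)
        if rng.random() < np.exp(min(0.0, new - cur)):
            cur = new                        # accept the flip
        else:
            A[i, j] = A[j, i] = 1 - A[i, j]  # reject: flip back
    return A
```

Setting ${\beta_2 = 0}$ reduces this to independent edge flips, i.e. an Erdős-Rényi graph with ${p = e^{\beta_1}/(1+e^{\beta_1})}$; a positive ${\beta_2}$ biases the chain toward triangle-rich graphs.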

As shown by Bhamidi, Bresler and Sly, and by Chatterjee and Diaconis, when ${n}$ is large, a typical random graph drawn from this distribution has a trivial structure — essentially the same as that of an Erdős-Rényi random graph with a suitable edge density. This somewhat disappointing conclusion accounts for some of the practical difficulties with statistical parameter estimation for such models. In our work, we propose a natural generalization that enables the exponential model to exhibit nontrivial structure instead of the previously observed Erdős-Rényi behavior.

Here is our generalization. Consider the exponential random graph model which includes an additional exponent ${\alpha>0}$ in the triangle density term:

$\displaystyle p_{\alpha,\beta}(G) \propto \exp\left(\tbinom{n}{2}(\beta_1 t(K_2, G) + \beta_2 t(K_3, G)^\alpha)\right) \, .$

When ${\alpha \geq 2/3}$, the generalized model features the Erdős-Rényi behavior, similar to the previously observed case of ${\alpha = 1}$. However, for ${0< \alpha < 2/3}$, there exist regions of values of ${(\beta_1, \beta_2)}$ for which a typical random graph drawn from this distribution has symmetry breaking, which was rather unexpected given earlier results. For example, we know that there is symmetry breaking in the shaded regions in the plots below. (The blue curve indicates a discontinuity in the model which we won’t discuss in this blog post.)

Symmetry breaking in the new exponential graph model

### Graph limits and large deviations

Our proof of the phase boundary result for large deviations in triangle counts is quite short, so I’ll present it here.

The framework developed by Chatterjee and Varadhan reduces the problem of large deviations in random graphs to solving a variational problem in graph limits. Graph limits, also called graphons, are symmetric measurable functions ${f\colon [0,1]^2 \rightarrow [0,1]}$ which are, in some sense, scaling limits of adjacency matrices of graphs. The values of ${f}$ represent local edge densities (there are several caveats in this description but we will not go into the details here). For example, the constant function ${f \equiv p}$ is the limit of ${\mathcal G(n,p)}$ as ${n \rightarrow \infty}$. These objects were developed by Lovász and collaborators to understand the structure of very large graphs.

The relative entropy function ${h_p}$ defined by

$\displaystyle h_p(x) := x \log \frac{x}{p} + (1-x) \log \frac{1-x}{1-p}$

for $p \in (0,1)$ and $x \in [0,1]$ is the rate function associated to the binomial distribution with probability ${p}$. We can extend ${h_p}$ to graphons ${f \colon [0,1]^2 \rightarrow [0,1]}$ by defining

$\displaystyle h_p(f) := \int_{[0,1]^2} h_p(f(x,y)) \ dxdy \, .$

We’ll omit the limits of integrals from now on.
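For readers who want to experiment numerically, here is a minimal sketch (in Python with NumPy; the function names are mine, not from the paper) of ${h_p}$ for scalars and for step-function approximations of graphons, using the convention ${0 \log 0 = 0}$ at the endpoints:

```python
import numpy as np

def h_p(x, p):
    """Bernoulli relative entropy h_p(x), with the convention 0*log(0) = 0."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        a = np.where(x > 0, x * np.log(x / p), 0.0)
        b = np.where(x < 1, (1 - x) * np.log((1 - x) / (1 - p)), 0.0)
    return a + b

def h_p_graphon(f, p, m=400):
    """Midpoint-rule approximation of h_p(f) = ∫∫ h_p(f(x,y)) dx dy."""
    xs = (np.arange(m) + 0.5) / m
    X, Y = np.meshgrid(xs, xs)
    return float(h_p(f(X, Y), p).mean())
```

For instance, `h_p_graphon(lambda X, Y: np.full_like(X, r), p)` recovers the scalar value ${h_p(r)}$, consistent with the constant graphon ${f \equiv r}$.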

The triangle density in a graphon ${f}$ is given by the expression

$\displaystyle t(K_3, f) := \int f(x,y)f(x,z)f(y,z) \ dxdydz\, .$
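For a graphon given as a symmetric matrix of values on a uniform grid (a step function), this triple integral is a tensor contraction; a quick numerical sketch (my notation, not the paper's):

```python
import numpy as np

def triangle_density(F):
    """t(K_3, f) for a step-function graphon: F is a symmetric m×m matrix
    of the values of f on a uniform m×m grid of [0,1]^2."""
    m = F.shape[0]
    # ∫∫∫ f(x,y) f(y,z) f(z,x) dx dy dz = (1/m^3) Σ_{i,j,k} F_ij F_jk F_ki
    return float(np.einsum("ij,jk,ki->", F, F, F)) / m**3
```

As a sanity check, the constant graphon ${f \equiv p}$ gives triangle density exactly ${p^3}$.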

The Chatterjee-Varadhan framework reduces the problem of large deviations of triangle counts (the question stated in the beginning) to the following variational problem.

Variational problem: Minimize ${h_p(f)}$ subject to ${t(K_3, f) \geq r^3}$.

It is known that the random graph drawn from ${\mathcal G(n,p)}$, conditioned on the event of having triangle density at least ${r^3}$, converges in distribution, as ${n \rightarrow \infty}$, to the set of graph limits minimizing the variational problem.

In particular, we would like to know whether the constant function ${f \equiv r}$ is a minimizer. If it is (and is the unique minimizer), then we are in the replica symmetric phase, and the conditioned random graph converges to ${\mathcal G(n,r)}$ in cut-distance. On the other hand, if there exists some ${f}$ satisfying the constraint with a strictly smaller value of ${h_p(f)}$, then we are in the symmetry breaking phase.

Thus it remains to analyze the variational problem. The replica symmetric phase is determined by the following two lemmas.

Lemma 1 For any symmetric measurable ${f \colon [0,1]^2 \rightarrow [0,1]}$, we have

$\displaystyle t(K_3,f)^2 \leq \left(\int f(x,y)^2 \ dxdy\right)^3.$

Proof: This is an exercise in the Cauchy-Schwarz inequality.

$\displaystyle \begin{aligned} t(K_3, f)^2 &= \left(\int f(x,y)f(y,z)f(z,x) dxdydz\right)^2 \\ &= \left(\int f(x,y) \left(\int f(y,z)f(z,x) dz\right) dxdy\right)^2 \\ &\leq \left(\int f(x,y)^2 dxdy\right) \int \left(\int f(y,z)f(z,x) dz\right)^2 dxdy \\ &\leq \left(\int f(x,y)^2 dxdy\right) \int \left(\int f(y,z)^2 dz\right) \left(\int f(z,x)^2 dz\right) dxdy \\ &= \left(\int f(x,y)^2 dxdy\right)^3. \end{aligned}$ $\Box$
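The inequality is also easy to sanity-check numerically on random symmetric step graphons (a quick NumPy experiment of my own, not from the paper):

```python
import numpy as np

# Check t(K_3, f)^2 <= (∫ f^2)^3 on random symmetric step graphons.
rng = np.random.default_rng(0)
m = 40
for _ in range(200):
    A = rng.random((m, m))
    F = (A + A.T) / 2                               # random symmetric step graphon
    t = np.einsum("ij,jk,ki->", F, F, F) / m**3     # triangle density t(K_3, f)
    sq = float((F**2).mean())                       # ∫ f(x,y)^2 dx dy
    assert t**2 <= sq**3 + 1e-12                    # Lemma 1
```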

Lemma 2 Let ${0 < p \leq r \leq 1}$ be such that ${(r^2, h_p(r))}$ lies on the convex minorant of ${x \mapsto h_p(\sqrt{x})}$. Then any symmetric measurable ${f \colon [0,1]^2 \rightarrow [0,1]}$ with ${t(K_3,f) \geq r^3}$ satisfies ${h_p(f) \geq h_p(r)}$.

Proof: Lemma 1 implies

$\displaystyle \int f(x,y)^2 \ dxdy \geq r^2.$

Define ${\phi(x) = h_p(\sqrt{x})}$ and let ${\hat\phi}$ be its convex minorant. Since ${h_p(y) = \phi(y^2) \geq \hat\phi(y^2)}$, and ${\hat\phi}$ is convex and nondecreasing on ${[p^2, 1]}$, Jensen’s inequality gives

$\displaystyle \begin{aligned} h_p(f) = \int h_p(f(x,y)) dxdy &\geq \int \hat\phi(f(x,y)^2) dxdy\\ &\geq \hat\phi\left(\int f(x,y)^2 dxdy\right) \geq \hat\phi(r^2) = \phi(r^2) = h_p(r). \end{aligned}$
$\Box$

It can be shown that the hypothesis on ${(p,r)}$ in Lemma 2 is equivalent to

$\displaystyle \left[1 + (r^{-1} - 1)^{1/(1-2r)}\right]^{-1} \leq p \leq r \leq 1.$

It turns out that this region is exactly the replica symmetric phase. It is the blue region in the first plot.
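The boundary condition can be tested numerically in two independent ways; the sketch below (function names mine) checks the closed-form inequality and, separately, computes the convex minorant of ${x \mapsto h_p(\sqrt{x})}$ on a grid by a lower-convex-hull sweep:

```python
import numpy as np

def h(x, p):
    """h_p(x), with the convention 0*log(0) = 0."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        a = np.where(x > 0, x * np.log(x / p), 0.0)
        b = np.where(x < 1, (1 - x) * np.log((1 - x) / (1 - p)), 0.0)
    return a + b

def replica_symmetric_closed_form(p, r):
    """Closed-form phase condition (for r != 1/2):
    p >= [1 + (1/r - 1)^(1/(1-2r))]^(-1)."""
    return p >= 1.0 / (1.0 + (1.0 / r - 1.0) ** (1.0 / (1.0 - 2.0 * r)))

def replica_symmetric_minorant(p, r, m=4001, tol=1e-4):
    """Does (r^2, h_p(r)) lie on the convex minorant of phi(x) = h_p(sqrt(x))?
    Computed via the lower convex hull (monotone chain) of grid samples."""
    xs = np.linspace(0.0, 1.0, m)
    ys = h(np.sqrt(xs), p)
    hull = []                                  # vertices of the lower hull
    for q in zip(xs, ys):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last vertex on a clockwise (or collinear) turn
            if (x2 - x1) * (q[1] - y1) - (q[0] - x1) * (y2 - y1) <= 0:
                hull.pop()
            else:
                break
        hull.append(q)
    hx = np.array([v[0] for v in hull])
    hy = np.array([v[1] for v in hull])
    return float(h(np.asarray(r), p)) - np.interp(r * r, hx, hy) <= tol
```

For example, at ${r = 0.7}$ the threshold in ${p}$ is about ${0.107}$, and the two tests agree on either side of it.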

To show that everything else is in the symmetry breaking phase, we need to show that the constant function ${f \equiv r}$ is not a minimizer of the variational problem. The following lemma captures this fact.

Lemma 3 Let ${0 < p \leq r \leq 1}$ be such that ${(r^2, h_p(r))}$ does not lie on the convex minorant of ${x \mapsto h_p(\sqrt{x})}$. Then there exists a symmetric measurable ${f \colon [0,1]^2 \rightarrow [0,1]}$ with ${t(K_3,f) \geq r^3}$ and satisfying ${h_p(f) < h_p(r)}$.

Proof: (Sketch) The idea is to construct ${f}$ by slightly perturbing the constant graphon ${f \equiv r}$ in a way that roughly preserves the triangle count while decreasing the rate function, with the help of the nonconvexity of ${h_p(\sqrt{x})}$.

See the figure below for an illustration of this construction. The plot of ${h_p(\sqrt{x})}$ is drawn not to scale in order to highlight its convexity features.

Since ${(r^2, h_p(r))}$ does not lie on the convex minorant of ${x\mapsto h_p(\sqrt{x})}$, there necessarily exist ${0 \leq r_1 < r < r_2 \leq 1}$ such that the point ${(r^2, h_p(r))}$ lies strictly above the line segment joining ${(r_1^2, h_p(r_1))}$ and ${(r_2^2, h_p(r_2))}$. Letting ${s}$ be such that

$\displaystyle r^2 = s r_1^2 + (1-s) r_2^2\,,$

we therefore have

$\displaystyle s h_p(r_1) + (1-s) h_p(r_2) < h_p(r)\, .$

Let ${\epsilon > 0}$ and define

$\displaystyle \begin{aligned} a &= s\epsilon^2\,, \qquad b = (1-s)\epsilon^2 + \epsilon^3\,,\\ I_0 &= [a,1-b]\,,\qquad I_1 = [0,a]\,, \qquad I_2 = [1-b,1]\,, \end{aligned}$

noting that ${a < 1-b}$ for any sufficiently small ${\epsilon}$. Define ${f_\epsilon \colon [0,1]^2 \rightarrow [0,1]}$ by

$\displaystyle f_\epsilon(x,y) = \begin{cases} r_1 & \text{if }(x,y) \in (I_0 \times I_1) \cup (I_1 \times I_0)\,, \\ r_2 & \text{if }(x,y)\in (I_0 \times I_2) \cup (I_2 \times I_0)\,, \\ r & \text{otherwise}\,. \end{cases}$

It remains to check that ${t(K_3,f_\epsilon) > r^3}$ and ${h_p(f_\epsilon) < h_p(r)}$ for sufficiently small ${\epsilon}$. This is a straightforward calculation whose details we omit here. $\Box$
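The omitted calculation can be carried out exactly for a concrete point in the symmetry breaking phase, since ${f_\epsilon}$ is a three-block step function. In the sketch below (my own illustration, not from the paper) I take ${p = 0.05}$, ${r = 0.7}$, ${r_1 = 0}$, ${r_2 = 1}$ (so ${s = 1 - r^2}$) and ${\epsilon = 0.002}$, which is small enough for these parameters:

```python
import numpy as np

def h(x, p):
    # h_p(x) with the convention 0*log(0) = 0, applied entrywise
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        return (np.where(x > 0, x * np.log(x / p), 0.0)
                + np.where(x < 1, (1 - x) * np.log((1 - x) / (1 - p)), 0.0))

p, r, r1, r2 = 0.05, 0.7, 0.0, 1.0
s = 1 - r * r                                # solves r^2 = s*r1^2 + (1-s)*r2^2
eps = 0.002
a, b = s * eps**2, (1 - s) * eps**2 + eps**3
w = np.array([a, 1 - a - b, b])              # measures of the blocks I_1, I_0, I_2
F = np.array([[r, r1, r],                    # block values of f_eps
              [r1, r, r2],
              [r, r2, r]])

t_eps = np.einsum("i,j,k,ij,jk,ki->", w, w, w, F, F, F)   # t(K_3, f_eps)
h_eps = np.einsum("i,j,ij->", w, w, h(F, p))              # h_p(f_eps)

assert t_eps > r**3                          # strictly more triangles than f ≡ r
assert h_eps < float(h(np.asarray(r), p))    # strictly smaller rate function
```

The margins are tiny (of order ${\epsilon^3}$ and ${\epsilon^2}$ respectively), which matches the "sufficiently small ${\epsilon}$" in the lemma.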

This completes the proof of the phase boundary result for triangles. The proof for ${d}$-regular graphs is similar, except that Lemma 1 needs to be replaced by a generalized Hölder’s inequality. We refer the reader to our paper for the details.

Unfortunately, in the symmetry breaking phase where the constant function does not solve the variational problem, we do not know extremal solutions. In fact, we do not know how to solve the variational problem for any single ${(p,r)}$ in the symmetry breaking phase. This remains a tantalizing open problem.