Yufei Zhao

Category: Expository

Extremal regular graphs

This post is adapted from my new expository survey Extremal regular graphs: independent sets and graph homomorphisms.

The earliest result in extremal graph theory is usually credited to Mantel, who proved in 1907 that a graph on {n} vertices with no triangles contains at most {n^2/4} edges, with the maximum achieved by the complete bipartite graph with half of the vertices on each side. Much more is now known about the subject. While I initially encountered Mantel’s theorem as a high school student preparing for math contests, my first in-depth exposure to extremal graph theory came from taking a course by David Conlon during my year reading Part III at Cambridge (one can find excellent course notes on his website).
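(As a quick aside, here is a brute-force sanity check of Mantel’s bound; this is my own sketch, not anything from the survey. It enumerates all graphs on up to 5 labelled vertices, which is feasible since there are only {2^{10}} of them.)

```python
from itertools import combinations

def max_triangle_free_edges(n):
    """Brute-force the maximum edge count of a triangle-free graph
    on n labelled vertices (only feasible for very small n)."""
    possible_edges = list(combinations(range(n), 2))
    best = 0
    # Enumerate all graphs on n vertices as subsets of the edge set.
    for mask in range(1 << len(possible_edges)):
        edges = [e for i, e in enumerate(possible_edges) if mask >> i & 1]
        adj = {v: set() for v in range(n)}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        # A triangle exists iff some edge's endpoints share a neighbour.
        if any(adj[u] & adj[v] for u, v in edges):
            continue
        best = max(best, len(edges))
    return best

for n in range(2, 6):
    assert max_triangle_free_edges(n) == n * n // 4  # Mantel: at most n^2/4
```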

We seem to know much less about sparse graphs (a general mantra in combinatorics). Let us focus on {d}-regular graphs, which are graphs where every vertex has degree {d}.

[Figure: the cycle of length 4, which has a total of 7 independent sets.]

An independent set in a graph is a subset of vertices with no two adjacent. Many combinatorial problems can be reformulated in terms of independent sets by setting up a graph where edges represent forbidden relations.
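To make this concrete, here is a tiny brute-force counter (my own illustration, exponential in the number of vertices) which confirms the count of 7 for the 4-cycle pictured above:

```python
from itertools import combinations

def count_independent_sets(n, edges):
    """Count subsets of {0, ..., n-1} containing no edge of the graph."""
    total = 0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            s = set(subset)
            if not any(u in s and v in s for u, v in edges):
                total += 1
    return total

# C_4: the cycle 0 - 1 - 2 - 3 - 0.
print(count_independent_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 7
```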

Question. Among all {d}-regular graphs of the same size, which graph has the largest number of independent sets?

This question was raised by Andrew Granville at the 1988 Number Theory Conference in Banff, in an effort to resolve a problem in combinatorial number theory, namely the Cameron–Erdős conjecture on the number of sum-free sets. The question first appeared in print in a paper by Noga Alon, who proved an asymptotic upper bound and speculated that, at least when {n} is divisible by {2d}, the maximum should be attained by a disjoint union of complete bipartite graphs {K_{d,d}}.

Some ten years later, Jeff Kahn arrived at the same conjecture while studying a problem arising from statistical physics. Using a beautiful entropy argument, Kahn proved the conjecture under the additional assumption that the graph is already bipartite.

Fast forward another nearly ten years. In the summer of 2009, during my last summer as an undergraduate, I spent a fun and productive summer attending Joe Gallian’s REU in Duluth, MN (a fantastic program, by the way!), and there I showed that Kahn’s theorem can be extended to all regular graphs, not just bipartite ones.

Here is the theorem statement. We write {I(G)} to denote the set of independent sets in {G}, and {i(G) := |I(G)|} the number of independent sets in {G}.

Theorem (Kahn, Z.) If {G} is an {n}-vertex {d}-regular graph, then

\displaystyle i(G) \le i(K_{d,d})^{n/(2d)} = (2^{d+1} - 1)^{n/(2d)}.
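As a quick numerical illustration (again my own sketch, not from the survey), one can check the bound by brute force for small 2-regular graphs, i.e., cycles; here {i(K_{2,2}) = 2^3 - 1 = 7}:

```python
from itertools import combinations

def count_independent_sets(n, edges):
    """Count subsets of {0, ..., n-1} containing no edge of the graph."""
    return sum(
        1
        for k in range(n + 1)
        for s in map(set, combinations(range(n), k))
        if not any(u in s and v in s for u, v in edges)
    )

def cycle_edges(n):
    """The n-cycle, a 2-regular graph."""
    return [(i, (i + 1) % n) for i in range(n)]

d = 2
for n in (4, 6, 8):
    bound = (2 ** (d + 1) - 1) ** (n / (2 * d))  # i(K_{d,d})^{n/(2d)}
    print(n, count_independent_sets(n, cycle_edges(n)), round(bound, 2))
# i(C_4) = 7 <= 7.0, i(C_6) = 18 <= 18.52, i(C_8) = 47 <= 49.0
```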

In the survey, I provide an exposition of the proofs of these two theorems, as well as a discussion of subsequent developments. Notably, Davies, Jenssen, Perkins, and Roberts recently gave a brand-new proof of the above theorems by introducing a powerful new technique, called the occupancy method, inspired by ideas from statistical physics; it already has a number of surprising new consequences.

On a personal level, I am pleased to see this topic gaining renewed interest. (Also, somewhat to my chagrin, my undergraduate paper, according to Google Scholar, still remains my most cited paper to date.)

I shall close this blog post with one of my favorite open problems in this area.

Let {c_q(G)} denote the number of proper {q}-colorings of {G}, i.e., colorings of the vertices of {G} with {q} colors so that no two adjacent vertices receive the same color.

Conjecture. If {G} is an {n}-vertex {d}-regular graph, and {q \ge 3}, then

\displaystyle c_q(G) \le c_q(K_{d,d})^{n/(2d)}.
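One can again spot-check small cases by brute force (my own sanity check, carrying no evidentiary weight). For instance, take {q = 3} and compare the triangular prism, a 3-regular graph on 6 vertices, against {K_{3,3}}; since {n = 2d = 6} here, the conjectured bound is simply {c_3(K_{3,3})}:

```python
from itertools import product

def count_proper_colorings(n, edges, q):
    """Count proper q-colorings of a graph on vertices 0..n-1 by brute force."""
    return sum(
        1
        for col in product(range(q), repeat=n)
        if all(col[u] != col[v] for u, v in edges)
    )

# K_{3,3}: complete bipartite graph between {0,1,2} and {3,4,5}.
k33 = [(a, b + 3) for a in range(3) for b in range(3)]
# Triangular prism: two triangles joined by a perfect matching (3-regular, n = 6).
prism = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (0, 3), (1, 4), (2, 5)]

q = 3
print(count_proper_colorings(6, prism, q))  # 12
print(count_proper_colorings(6, k33, q))    # 42, so the bound holds here
```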

I was pleased to learn from Will Perkins, who gave a talk at Oxford last week, that he, along with Davies, Jenssen, and Roberts, recently proved the conjecture for 3-regular graphs. The proof uses the occupancy method that they developed earlier. The method is reminiscent of the flag algebra machinery developed by Razborov some ten years ago for solving extremal problems in dense graphs. The new method can be seen as some kind of “flag algebra for sparse graphs”. I find this development quite exciting, and I expect that more news will come.

[Figure: graph homomorphisms from {G} to {K_q} correspond to proper colorings of the vertices of {G} with {q} colors.]


Graph regularity

In this blog post I will give a brief introduction to Szemerédi’s Regularity Lemma, a powerful tool in graph theory. The post is based on a talk I gave earlier today at a graduate student lunch seminar.

Consider the following problem. Suppose you’re given a very large graph. The graph has so many vertices that you won’t be able to access all of them. Nevertheless, you want to find out certain things about the graph. Such situations come up in real-world applications: perhaps we would like to know something about a social network, e.g., Facebook, but we don’t have the resources to go through every single node, as there are simply too many of them. For the purpose of this blog post, though, we won’t talk about applications and will instead stick to the mathematics.

Suppose we are interested in answering the following question about the very large graph:

Is the graph triangle-free?

Think of the given graph as a black box. We have the following access to the graph: we are allowed to randomly sample some number of vertices and be told of all the edges between these vertices.

Can we achieve the desired goal? Well, if the graph contains, say, only a single triangle, then it’s pretty much a hopeless task, since we are almost certainly never going to find the single needle in this giant haystack through random sampling. So we have to be content with a more modest objective.

Can we distinguish a graph that’s triangle-free from a graph that is {\epsilon}-far from triangle-free?

Being {\epsilon}-far from a property means that we would have to add/delete at least {\epsilon n^2} edges from the graph to make it satisfy that property. Here {n} is the number of vertices in the very large graph. Note that this model puts us in the setting of dense graphs, i.e., graphs with {\Omega(n^2)} edges.

This problem we know how to solve. The algorithm is very straightforward: sample some constant number of vertices, and check whether any triangles appear among them.

Algorithm: Sample {C_\epsilon} (some constant depending on {\epsilon}) vertices (a code sketch follows the outline below).

  • If a triangle is detected, then output that the graph is not triangle-free.
  • If no triangle is detected, then output that the graph is triangle-free.
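Here is that sketch in Python, assuming a black-box edge oracle has_edge (a hypothetical interface standing in for our sampling access to the graph); the demo at the end runs it on a complete graph, which is far from triangle-free:

```python
import random
from itertools import combinations

def test_triangle_free(n, has_edge, num_samples):
    """One-sided tester: sample vertices, inspect all edges among them,
    and declare the graph triangle-free iff no triangle is seen."""
    sampled = random.sample(range(n), num_samples)  # num_samples plays the role of C_eps
    for u, v, w in combinations(sampled, 3):
        if has_edge(u, v) and has_edge(v, w) and has_edge(u, w):
            return False  # triangle detected: certainly not triangle-free
    return True  # no triangle seen: declare triangle-free (may err, but rarely)

# Demo: the complete graph on 1000 vertices, which is epsilon-far from
# triangle-free; any 3 sampled vertices form a triangle here.
print(test_triangle_free(1000, lambda u, v: u != v, num_samples=30))  # False
```

Note that the tester is one-sided: on a triangle-free graph it never errs, which is exactly the case analyzed next.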

If the given graph is triangle-free, then clearly we won’t ever detect any triangles, so the algorithm always outputs the correct answer. But what if the given graph is not triangle-free? We said earlier that in this case we’ll assume the graph is {\epsilon}-far from triangle-free. We want the algorithm to detect at least one triangle so that it can give the correct answer. However, the randomized nature of the algorithm means that there will be some probability that the output is erroneous. We claim that this error probability is small.

This claim seems very innocent. Essentially, we need to show that if a graph cannot be made triangle-free by deleting a small number of edges, then it must contain many triangles. If you haven’t seen this claim before, you might think that it’s something that would follow from some easy deductions, and you might be tempted to work it out yourself. However, be warned that you will almost certainly not succeed. The claim is indeed correct, but it is far from trivial.

