I’ve published the first in a series of technical notes on how to design your infrastructure to make the best use of Project Calico and the benefits it provides. The first installment of that series discusses using an Ethernet interconnect fabric to connect Calico nodes.
“But wait,” you might be saying, “all the cool kids are doing IP fabrics for their cloud — it’s the only thing that scales.” That statement is, of course, correct. The path everyone is taking for interconnect design is IP-based, driven by concerns about scale. However, this model is really all about IP at the edge of the network. In most cloud networking infrastructures, the edge of the network is the top of rack (ToR) switch. The compute servers below the ToR simply encapsulate traffic and send it up to the ToR; no aggregation visible to the interconnect fabric happens at the compute servers. Calico also pushes IP to the edge, but in a Calico network, that edge is the compute servers themselves. We’ve just taken the original concept and driven it to its logical conclusion: IP at the first point of aggregation, which, in reality, is the compute server.
In this technical note, Calico over an Ethernet interconnect fabric, I describe an Ethernet-based, truly scalable interconnect fabric for Calico that substantially simplifies OA&M of the physical network and removes IP scaling concerns from the switching layer, without resorting to overlay networks and the issues that accompany them.
As always, I welcome your constructive feedback, praise and adoration, and/or flames and invective musings.
Note: This technical note originally appeared in this blog post, but with the advent of our spiffy new documentation site, docs.projectcalico.org, I’ve moved the technical note there and replaced it with this pointer post.