Networking Guys Just Don't Understand Software

Before I begin my rant, let me just say my first router was a Wellfleet CN with a VME bus and my first Cisco router was an AGS+. I have been around long enough to see DECnet, IPX, IP, SNA, Vines and a few others running across my enterprise network while troubleshooting 8228 MAUs beaconing in the wiring closets and watching NetBEUI "Name in Conflict" packets take down a 10,000-node RSRB network.

It's 2012, and gone are the days when network engineers needed to juggle multiple protocol behaviors such as IPX Get Nearest Server, IP PMTU bugs in Windows NT 3.5, or trying to find enough room in your UMBs to fit all your ODI network drivers without crashing Windows.

It's a new age, and as we look back at more than 40 years since the inception of the Internet and nearly 40 years since TCP/IP was created, we are at an inflection point of unprecedented change, fueled by the need to share data anytime, anywhere, on any device.

The motivation for writing this entry comes from some very interesting points of view from my distinguished ex-colleague Brad Hedlund in his post entitled "Dodging Open Protocols with open software". In it he tries to dissect both the intentions and the impact of a new breed of networking players, such as Nicira, on the world of standardized protocols.

The point here isn’t to blow a standards dodger whistle, but rather to observe that, perhaps, a significant shift is underway when it comes to the relevance and role of “protocols” in building next generation virtual data center networks.  Yes, we will always need protocols to define the underlying link level and data path properties of the physical network — and those haven’t changed much and are pretty well understood today.

The "shift in relevance and role of protocols" refers not so much to what we know as the IETF/IEEE-based networking stack and all the wonderful protocols that make up our communications framework, but to a new breed of protocols necessary to support SDN.

Sidebar: Let's step back for a second and clarify the definition of SDN. Some define Software Defined Networking in terms of control plane/data plane separation, a view which has clearly been influenced by the work on OpenFlow.

So the shift we see in networking is toward more programmability, and the need for new ways to invoke actions and carry state is at the crux of it.

However, with the possibility of open source software facilitating the data path not only in hypervisor virtual switches, but many other network devices, what then will be the role of the “protocol”? And what role will a standards body have in such case when the pace of software development far exceeds that of protocol standardization.”

OK, so this is the heart of it: "what then will be the role of the "protocol"? And what role will a standards body have in such case when the pace of software development far exceeds that of protocol standardization?"

I think the problem here is not necessarily the semantics of the word "protocol" (for this is just a contract which two parties agree upon), but the fact that how this "contract" will be standardized to promote an open networking ecosystem is only loosely defined.

Generally, standardization only comes when there is sufficiently understood and tested software that provides a specific implementation of that standard. It's very hard to get a protocol specification completely right without testing it in some way.

Sidebar: If you actually go back in history you will find that TCP/IP was not a standard. The INWG was the governing standards body of the day, and the international standard was supposed to be INWG 96, but because the team at Berkeley got TCP into BSD Unix, well, now it's history. I wrote a bit about it here: http://garyberger.net/?p=295.

With that in mind, take a closer look at the Open vSwitch documentation, dig deep, and what you’ll find is that there are other means of controlling the configuration of the Open vSwitch, other than the OpenFlow protocol.

When it comes to OVS it's very important not to confuse interface and implementation. Since OVS in its classical form is just a switch, you operate it through helper routines that manipulate the state-management layer in its internal datastore, OVSDB, and that interact with the OS. This is no different than, say, the CLI on a Cisco router. Most of the manipulation in the management plane will probably be exposed through JSON-RPC (guessing here) behind a high-level REST interface.
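
To make the management-plane point concrete, here is a rough sketch of what talking to OVSDB over JSON-RPC looks like. The socket path and the single-read reply handling are assumptions for illustration, not a production client.

import json
import socket

# ovsdb-server typically listens on a local Unix socket; the exact path is
# an assumption here and varies by packaging.
OVSDB_SOCK = "/var/run/openvswitch/db.sock"

def ovsdb_call(method, params):
    """Send one JSON-RPC request to ovsdb-server and return the result."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(OVSDB_SOCK)
    sock.sendall(json.dumps({"method": method, "params": params, "id": 0}).encode())
    reply = json.loads(sock.recv(65536).decode())  # naive single read, fine for a sketch
    sock.close()
    return reply["result"]

# Ask the datastore which databases it serves -- typically ["Open_vSwitch"].
print(ovsdb_call("list_dbs", []))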

What you must understand about OVS with respect to control plane/data plane separation, or "flow-based network control", is that you are essentially changing the behavior from a standard switch acting on local state to a distributed forwarding engine coordinated through global state.
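
As a toy illustration of that difference (purely conceptual, not any real OVS or OpenFlow API): a classical switch learns forwarding state locally, while a flow-based controller computes entries from a global view and pushes them down.

# Classical switch: MAC table learned locally on each device.
local_mac_table = {}

def learn_and_forward(src_mac, dst_mac, in_port):
    local_mac_table[src_mac] = in_port            # learn from traffic we see
    return local_mac_table.get(dst_mac, "FLOOD")  # unknown destination -> flood

# Flow-based control: the controller holds the global view and precomputes
# match/action entries for every forwarding element it manages.
global_view = {("switch1", "aa:bb:cc:dd:ee:01"): 1,
               ("switch2", "aa:bb:cc:dd:ee:01"): 4}

def flows_for(switch):
    return [{"match": {"dl_dst": mac}, "actions": ["output:%d" % port]}
            for (sw, mac), port in global_view.items() if sw == switch]

print(flows_for("switch1"))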

From OVS:

The Open vSwitch kernel module allows flexible userspace control over flow-level packet processing on selected network devices. It can be used to implement a plain Ethernet switch, network device bonding, VLAN processing, network access control, flow-based network control, and so on.

Clearly, since we are in the realm of control plane/data plane separation, we need a protocol (i.e., a contract) that both sides agree upon when communicating intent. This is where OpenFlow comes in.
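
As a small example of what that contract looks like on the wire: every OpenFlow 1.0 message starts with the same fixed 8-byte header that both controller and switch agree on. The sketch below builds the OFPT_HELLO a controller sends when a switch connects; the field layout follows the 1.0 spec, the rest is illustrative.

import struct

OFP_VERSION_1_0 = 0x01
OFPT_HELLO = 0  # message type 0 in OpenFlow 1.0

def ofp_hello(xid=1):
    """Build the 8-byte OpenFlow header: version, type, length, transaction id."""
    length = 8  # header only, no body
    return struct.pack("!BBHI", OFP_VERSION_1_0, OFPT_HELLO, length, xid)

print(ofp_hello().hex())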

Now, unfortunately, OpenFlow is still a very nascent technology and is continuing to evolve, but Nicira wants to solve a problem: they want to abstract the physical network address structure in the same way that we abstract the memory address space with VMMs (see "Networking doesn't need VMWARE but it does need better abstractions"). In order to do this they needed to jump ahead of the standards bodies (in this case the ONF) and adopt some workable solutions.

For instance, OVS is not 100% compliant with OpenFlow 1.0, but it has contributed better models which will appear soon in the 1.2 specification. OVS uses an augmented PACKET_IN format and matching rules:


/* NXT_PACKET_IN (analogous to OFPT_PACKET_IN).
*
* The NXT_PACKET_IN format is intended to model the OpenFlow-1.2 PACKET_IN
* with some minor tweaks. Most notably NXT_PACKET_IN includes the cookie of
* the rule which triggered the NXT_PACKET_IN message, and the match fields are
* in NXM format.
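
For a sense of how a controller would tell the two apart in practice, here is a rough sketch that peeks at the common OpenFlow 1.0 header and checks for the vendor-extension message type; the Nicira vendor ID constant is taken from the OVS headers and should be treated as an assumption.

import struct

OFPT_VENDOR = 4            # OpenFlow 1.0 vendor-extension message type
OFPT_PACKET_IN = 10        # standard packet-in in OpenFlow 1.0
NX_VENDOR_ID = 0x00002320  # Nicira vendor ID, as defined in the OVS source (assumed)

def classify(msg):
    """Return a human-readable label for a raw OpenFlow 1.0 message."""
    version, msg_type, length, xid = struct.unpack("!BBHI", msg[:8])
    if msg_type == OFPT_PACKET_IN:
        return "standard OpenFlow 1.0 PACKET_IN"
    if msg_type == OFPT_VENDOR:
        (vendor,) = struct.unpack("!I", msg[8:12])
        if vendor == NX_VENDOR_ID:
            return "Nicira extension (e.g. NXT_PACKET_IN with NXM match fields)"
        return "some other vendor extension"
    return "other message type %d" % msg_type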

Summary:

Open source networking is nothing new: you have XORP, Zebra, Quagga, OpenSourceRouting.org, Vyatta, and the standard bridging services built into Linux.

Just like with TCP/IP, if there is value in OpenFlow or whatever its derivatives become, we will see some form of standardization. OVS is licensed under Apache 2, so if you want to fork it, go ahead; that's the beauty of software. In the meantime I wouldn't worry so much about these control protocols; they will change over time, no doubt, and good software developers will encapsulate the implementations and publish easy-to-use interfaces.

What I think people should be asking is not so much about the protocols (they all suck in their own way, because distributed computing is really, really hard) but what we can do, once we have exposed the data plane in all its bits, to solve some very nasty and complex challenges with the Internet.