“ To summarize: with a properly architected TCP/IP stack (I doubt I’ll live long enough to see it) we wouldn’t need large layer-2 domains (and FabricPath, TRILL, SPB, EVB, VEPA …), load balancers, application-level gateways, 500K entries in the global BGP table (and in the TCAM of every core router), or LISP. TCP is really the most expensive part of your data center. „
“ Is anything lost if applications don’t interact directly with the network forwarding elements? In theory, perhaps, an application might be able to get a path that is better suited to its precise bandwidth needs if it could talk to the network. In practice, a well-provisioned IP network with rich multipath capabilities is robust, effective, and simple. Indeed, it’s been proven that multipath load-balancing can get very close to optimal utilization, even when the traffic matrix is unknown (which is the normal case). So it’s hard to argue that the additional complexity of providing explicit communication mechanisms for applications to signal their needs to the physical network is worth the cost. In fact, we’ll argue in a future post that trying to carefully engineer traffic is counter-productive in data centers because the traffic patterns are so unpredictable. Combine this with the benefits of decoupling the network services from the physical fabric, and it’s clear that a virtualization overlay on top of a well-provisioned IP network is a great fit for the modern data center. „
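The “rich multipath capabilities” the quote leans on usually mean ECMP-style load balancing: each flow’s 5-tuple is hashed onto one of several equal-cost paths, so a flow’s packets stay in order while the set of flows spreads across the fabric. Here is a minimal sketch of that idea; the hash choice and path names are illustrative, not any vendor’s actual algorithm.

```python
# Illustrative ECMP-style path selection: hash the flow 5-tuple, pick one of
# several equal-cost next hops. Real switches use hardware hash functions;
# md5 here is just a convenient, well-distributed stand-in.
import hashlib

PATHS = ["spine-1", "spine-2", "spine-3", "spine-4"]  # equal-cost next hops

def pick_path(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Deterministically map a flow 5-tuple onto one equal-cost path."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(PATHS)
    return PATHS[index]

# The same flow always hashes to the same path, so packets are not reordered,
# while many distinct flows spread across all available paths.
assert pick_path("10.0.0.1", "10.0.1.1", 49152, 80) == \
       pick_path("10.0.0.1", "10.0.1.1", 49152, 80)
```

This per-flow determinism is exactly why an unknown traffic matrix is tolerable: no per-flow signalling is needed, yet utilization stays close to even as long as there are many flows.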
“ Taking a page from their Nexus 1000V playbook, they create an SDN controller and then open-source it. A free controller will effectively kill that market as well (provided it is “good enough”). And anyone who has underlying equipment that VMware would like to make irrelevant will end up backing the effort.
There are a couple of additional points to note here. First, these battles over reach and control might be between Cisco and VMware, but there will be collateral damage. Big Switch, for example, has a business model predicated on owning the point of control and being the preferred platform for orchestration applications. Daylight had to be an absolute kick in the gut for the Big Switch brain trust and investors.
Second, there is a very large player lurking relatively unnoticed. With all the focus on the equipment vendors, the supplier side is being ignored. Intel has been looking at how they can add capabilities to OVS. They have to believe they can extend their reach into the network space as well. But if the market is killed by the free offerings, why would they do this? Because they don’t mind using a loss leader if it pulls through their core business. Who must Intel have in their sights? Broadcom. This clash of titans is really just heating up. „
Where the Plexxi offering really differs is its LightRail technology, which puts passive optical multiplexing in the device itself, allowing terabits of data to flow across one single cable. Using up to 24 2-degree WDM fibres split across two groups, Plexxi Switch 1s can form a number of physical topologies with very high bandwidth throughput. By building this technology into the switch you essentially eliminate the need for old-style aggregation switching: a reduction in complexity, design effort, and TCO.
An alarm bell might ring for some of you at the thought of your cabling passing through a switch. Fear not: because LightRail is a passive pass-through, it still functions when a device powers off or fails, so you can still pass traffic through a device that is offline. With 24 fibre cores per link per switch, you have many and varied paths to move data affinities* around, letting you isolate a switch so it can be replaced. The 24 x 10Gb fibres mentioned in the Network Field Day 5 presentation are broken into a 12-east, 12-west layout. That is around 240Gb per switch, and when you look at 10 to 11 Plexxi switches in a ring you will be looking at 2.64Tb of forwarding capacity!
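The ring-capacity figures above are simple multiplication, and it is worth making the arithmetic explicit. The numbers (24 fibres at 10Gb/s each, 11 switches in a ring) come from the Network Field Day 5 presentation; the function itself is just back-of-the-envelope arithmetic, not Plexxi tooling.

```python
# Back-of-the-envelope Plexxi ring capacity: fibres per switch times the
# per-fibre rate gives per-switch ring bandwidth; multiply by the number of
# switches in the ring for aggregate forwarding capacity.
def ring_capacity_gbps(fibres_per_switch=24, gbps_per_fibre=10, switches=11):
    per_switch = fibres_per_switch * gbps_per_fibre  # 24 x 10Gb = 240Gb/switch
    return per_switch * switches                     # aggregate ring capacity

print(ring_capacity_gbps())  # 2640 Gb/s, i.e. the 2.64Tb figure for 11 switches
```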
“ Furthermore, through the Arista eAPI, remote systems can connect to Arista EOS via a JSON-based web services API for machine-to-machine communication. Working natively with the OpenStack Quantum plugin, this brings another industry first, where the physical network topology is unified with virtual switch configuration and virtual machine placement in an OpenStack cloud. „
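To make the “JSON-based web services API” concrete: eAPI is a JSON-RPC 2.0 endpoint on the switch (at /command-api) whose runCmds method executes CLI commands and returns structured JSON. A minimal sketch of building such a request is below; the switch hostname and credentials in the commented-out POST are made up, and the live call is left commented since it needs a reachable switch.

```python
# Sketch of an Arista eAPI (JSON-RPC 2.0) request. Only the request body is
# built and printed here; sending it requires network access to a switch.
import json

def build_eapi_request(commands, request_id=1):
    """Build the JSON-RPC body eAPI expects for a list of CLI commands."""
    return {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": commands, "format": "json"},
        "id": request_id,
    }

payload = build_eapi_request(["show version"])
print(json.dumps(payload, indent=2))

# To actually send it (hypothetical host/credentials, requires `requests`):
# import requests
# r = requests.post("https://switch.example.com/command-api",
#                   json=payload, auth=("admin", "password"))
# print(r.json()["result"])
```

This machine-friendly shape, one POST in, structured JSON out, is what makes integrations like the Quantum plugin straightforward compared with screen-scraping CLI output.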
“ On the other hand, it would be hard to find a SolarWinds of VLAN provisioning. Why? We’re not building data centers with standard architectures and components that would require minimal customization, and it’s close to impossible to write a low-priced application that will work well with millions of totally unique networks. „
Researchers say they have created fiber cables that can move data at 99.7 percent of the speed of light, all but eliminating the latency plaguing standard fiber technology. There are still data loss problems to be overcome before the cables could be used over long distances, but the research may be an important step toward incredibly low-latency data transmissions.
Although optic fibers transmit information using beams of light, that information doesn’t actually go at “light speed.” The speed of light, about 300,000 km/s, is the speed light travels in a vacuum. In a medium such as glass, it goes about 30 percent slower, a mere 200,000 km/s.
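The 200,000 km/s figure follows directly from the refractive index of glass: light in a medium travels at c/n, and silica fibre has n of roughly 1.5. The short calculation below also shows where fibre’s familiar ~5 ms of one-way latency per 1000 km comes from; the refractive index is a typical textbook value, not a measurement of any particular cable.

```python
# Light slows to c/n inside a medium. For silica fibre (n ~ 1.5) that is
# roughly 200,000 km/s, which sets the floor on fibre latency.
C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
N_GLASS = 1.5                 # typical refractive index of silica fibre

speed_in_glass = C_VACUUM_KM_S / N_GLASS              # ~200,000 km/s
latency_ms_per_1000km = 1000 / speed_in_glass * 1000  # one-way, ~5 ms

print(f"{speed_in_glass:,.0f} km/s, {latency_ms_per_1000km:.2f} ms per 1000 km")
```

A fibre carrying light at 99.7 percent of c would cut that per-1000 km figure from about 5 ms to about 3.35 ms, which is why the research matters for latency-sensitive traffic.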
“ While much of the industry seems to be focused on an architecture involving one monolithic SDN controller with OpenFlow as the omnipotent one-size-fits-all southbound control API, top-down control of all devices may not be the best approach to foster an open ecosystem. In fact, it’s likely that more than one SDN controller will be part of an overall solution. For example, the fabric may have its own SDN controller for autonomous fabric-level control. Another controller, such as NVP, might be present to provide network virtualization and a northbound API for the logical network. As such, it’s important to address the need for configuration and network state synchronization between autonomous devices and multiple SDN controllers – something OpenFlow alone is not equipped to handle. In other situations, not every device or vendor will be able to support OpenFlow, or top-down control of the device’s forwarding plane may not be completely necessary or the best approach. „