The Picture You Paint Of The Browser Is That It Has Become Very Complicated With Many Ways To Send And Consume Data (Ajax, DataChannel, WebRTC, WebSocket, SSE). Has The Browser Jumped The Shark Complexity-Wise?
Actually, I would argue that it’s pretty simple once you know the underlying mechanics and use cases for each of the APIs. Take AJAX, SSE, and WebSocket: AJAX is optimized for HTTP request-response, SSE for server-to-client streaming over HTTP, and WebSocket for bidirectional streaming. WebRTC and DataChannel, on the other hand, are for peer-to-peer communication. Once you know your use case, you can quickly narrow the list down to one, or at most two, APIs. From there, you need to look under the hood of each protocol to understand how it works and how to optimize your application code to get the best performance out of it. That’s what HPBN is all about.
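The narrowing-down step described above can be sketched as a small decision helper. This is a hypothetical function (not part of any browser standard) that just encodes the use-case-to-API mapping from the paragraph:

```javascript
// Hypothetical helper: maps a communication pattern to the browser API
// that is optimized for it, following the breakdown in the text above.
function pickTransport({ peerToPeer = false, bidirectional = false, serverPush = false } = {}) {
  if (peerToPeer) return "WebRTC DataChannel";  // browser-to-browser, P2P
  if (bidirectional) return "WebSocket";        // full-duplex client/server streaming
  if (serverPush) return "Server-Sent Events";  // server-to-client streaming over HTTP
  return "AJAX (XHR / fetch)";                  // plain HTTP request-response
}

// Example: a live dashboard that only receives updates from the server.
pickTransport({ serverPush: true }); // "Server-Sent Events"
```

From there, the real work is protocol-level: each of these transports has different connection setup costs, framing overhead, and proxy/firewall behavior, which is what actually drives the optimization advice in the book.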
“ One of the points I made was that we have been building networks around the concepts of reachability and interoperability for 30-40 years, and that networking is hard mainly because we make it hard. We turn everything on and then proceed to turn things off using QoS, firewalls, VLANs, DPI, load balancers, etc. Then the thought occurred to me: what would happen if we reversed this process? What if the network was off by default and we turned it on based around services, workloads and applications? „
A well-designed architecture concentrates complexity (and state) at the network edge. The core devices keep minimal state (for example, IP subnets), while the edge devices keep session state. In the network virtualization case, the hypervisors should know the VM endpoints (MAC addresses, IP addresses, virtual segments) and the physical devices just the hypervisor IP addresses, not the other way round.
Furthermore, as much state as possible should be stored in low-speed devices that use software-based forwarding. It’s pretty simple to store a million flows in the software-based Open vSwitch (updating them is a different story), and mission impossible to store 10,000 5-tuple flows in the Trident 2 chipset used by most ToR switches.
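The asymmetry is easy to see once you remember what a software flow table actually is: a hash map keyed by the 5-tuple, living in ordinary DRAM, whereas a ToR chipset has a fixed number of hardware table slots. A minimal illustrative sketch (not OVS code; the `FlowTable` class and its fields are invented for illustration):

```javascript
// Illustrative sketch: a software flow table is just a hash map keyed by
// the 5-tuple (src IP, dst IP, src port, dst port, protocol), so its size
// is bounded by DRAM, not by a few thousand hardware TCAM entries.
class FlowTable {
  constructor() { this.flows = new Map(); }
  key(f) { return [f.srcIP, f.dstIP, f.srcPort, f.dstPort, f.proto].join("|"); }
  add(f, action) { this.flows.set(this.key(f), action); }
  lookup(f) { return this.flows.get(this.key(f)); }
  get size() { return this.flows.size; }
}
```

Inserting hundreds of thousands of entries into such a map is trivial; the hard part in practice, as noted above, is updating them consistently and fast enough when flows churn.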
“ The end result is that compute-centric teams can create logical groupings that are not dependent on the configuration of the underlying network. Need a new logical grouping for some new line-of-business application? No problem, create it and start turning up workloads. Meanwhile, the networking-centric teams are free to design the network for optimal performance, resiliency, and cost-effectiveness without having to take the compute-centric team’s logical groups into consideration. Need to use a routed L3 architecture between pods/racks/ToR switches using VLANs? No problem—build it the way it needs to be built, and the network virtualization solution will handle creating the compute-centric logical grouping. „
We present MegaPipe, a new API for efficient, scalable network I/O for message-oriented workloads. The design of MegaPipe centers around the abstraction of a channel – a per-core, bidirectional pipe between the kernel and user space, used to exchange both I/O requests and event notifications. On top of the channel abstraction, we introduce three key concepts of MegaPipe: partitioning, lightweight socket (lwsocket), and batching.
We implement MegaPipe in Linux and adapt memcached and nginx. Our results show that, by embracing a clean-slate design approach, MegaPipe is able to exploit new opportunities for improved performance and ease of programmability. In microbenchmarks on an 8-core server with 64 B messages, MegaPipe outperforms baseline Linux between 29% (for long connections) and 582% (for short connections). MegaPipe improves the performance of a modified version of memcached between 15% and 320%. For a workload based on real-world HTTP traces, MegaPipe boosts the throughput of nginx by 75%.