So, as I gather, Silk takes all the heavy front-end processing normally pushed onto the browser and creates a middle tier between the web server and the browser to do that (originally front-end) work. They’re using AWS as an Akamai-style CDN for everything HTTP-based. Which, in fact, Akamai also does.
As @wpauley points out, it also creates a huge data collection opportunity.
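If you squint, the split-browser idea is just a preprocessing proxy. Here’s a toy sketch (entirely hypothetical, nothing to do with Silk’s actual internals) of what a middle tier like that does, and why it sees everything:

```python
import gzip

def fetch_origin(url):
    # Stand-in for a real HTTP fetch to the origin web server.
    return b"<html><body>" + b"x" * 1000 + b"</body></html>"

def preprocess(html):
    # Work the device's browser would otherwise do (or bytes it would
    # pull over a slow link) happens in the middle tier; compression
    # here is just a placeholder for that optimization step.
    return gzip.compress(html)

def middle_tier(url):
    raw = fetch_origin(url)
    optimized = preprocess(raw)
    # The middle tier sees every request and response -- hence the
    # data collection opportunity.
    log = {"url": url, "orig_bytes": len(raw), "sent_bytes": len(optimized)}
    return optimized, log

payload, log = middle_tier("http://example.com/")
```

The thin client only ever renders the already-optimized result, while the tier accumulates a log of everything it touched.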
I’m in the middle of reading Griftopia, highlighting away the lessons like mad on the kindle.
Later, I receive an email about a blog post with some things of note highlighted.
There’ve been plenty of services that overlay notes of some kind on the web, or share clippings, or build a stream of them, etc., but they all require some tool beyond the web itself to both produce and consume such highlights.
What if highlighting were built into the web? My highlight stream could be a data asset, e.g.:
- to google as an input to ad-targeting
- to amazon as input for book suggestions (exceedingly useful there)
- to the same people who might subscribe to one of my tumblogs
Just a thought.
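For concreteness, a web-native highlight might be nothing more than a small structured record in a public stream. Every field name below is invented for illustration; no real spec is implied:

```python
import json

# One entry in a hypothetical public highlight stream.
highlight = {
    "user": "some-reader",
    "source": "urn:example:griftopia",          # the book being read
    "selector": {"start": 1042, "end": 1187},   # character offsets
    "text": "the highlighted passage",
    "created": "2011-10-01T21:14:00Z",
}

# Serialized, a stream of these becomes a data asset that Google,
# Amazon, or tumblog subscribers could consume as a signal.
feed_entry = json.dumps(highlight)
```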
Tonight I stumbled on Jyri Engeström’s 2005 essay on “object-centered sociality,” in which he argues that sustainable networks must be built around objects (e.g. photos - Flickr; links - delicious; events - EVDB, Upcoming.org, evnt) rather than people.
He then points out that there’s no successful network around places, even though people like to talk about places, because we don’t have “a digital camera for location” - basically, an easy way to capture location.
Remember this was written in 2005. Since then:
- Digital camera for location = mobile phone
- Successful service to annotate places = Foursquare
And with that, I finally get foursquare—it makes places into social objects.
That’s a nice soundbite, but here’s the rest of it: places have always been social abstractions, codified as objects in books, maps, vacation photos, etc. Those old objects have for the most part just been grafted onto the web without anyone developing a new, web-native object. The location-based social apps made the check-in the social place-object. Interesting.
I think that control and data plane separation will end up a common feature of data center networks at scale, for cloud, etc. It’s a lesson from telcos.
The work of control plane software optimization is really a set of algorithmic route calculation (etc.) and distributed database problems that are better attacked outside the confines of network hardware, with general purpose machines.
The work of the data plane is more about capability optimization in asics, fpgas, etc. for latency, jitter, scaling, subscription rates, etc.
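To make the split concrete, here’s a minimal sketch over an invented topology: the control plane runs a shortest-path calculation as ordinary software on a general-purpose machine, and the only thing the data plane ever receives is a flat next-hop table it can burn into fast lookup hardware:

```python
import heapq

def shortest_paths(graph, src):
    # Dijkstra over a weighted adjacency dict -- the kind of
    # "algorithmic route calculation" that wants CPUs, not ASICs.
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def forwarding_table(graph, src):
    # Collapse full paths into next-hop entries -- the only part the
    # data-plane hardware needs to see.
    _, prev = shortest_paths(graph, src)
    table = {}
    for dst in graph:
        if dst == src or dst not in prev:
            continue
        hop = dst
        while prev[hop] != src:
            hop = prev[hop]
        table[dst] = hop
    return table

topo = {  # hypothetical switches A-D with link costs
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}
table = forwarding_table(topo, "A")
```

From A, everything funnels through next-hop B, and the hardware never has to know why.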
1. Failure domain
- 6k servers, 1 logical switch.. is that one very big failure domain? How will that be managed?
- Will the QF/I be a fundamentally passive “transport device” that does nothing else?
- Where will failures be detected?
- How will failure signals propagate?
- Which components will react to failures?
- How much involvement will the QF/D have?
- Is that also a big attack surface?
- What happens if I compromise a QF/D? QF/I? QF/N?
- How do I quarantine?
3. Distributed control plane dynamics
- Where’s path computation actually done?
- Where are lookups done?
- If I upgrade a QF/N, does that upgrade all QF/Ns?
- Is the config on one QF/N the config on all QF/Ns?
- If one QF/N has a fault, is compromised, or otherwise behaves not as intended—does that mean the same is thenceforth true for all?
- Kinda odd that the QF/D is described as a “window into the fabric” (a passive observer) but also as providing “control and management services for the fabric”
- A separate and parallel 1G cable plant and network for “carrying control traffic”? Is the benefit of doing it this way really going to outweigh the capital and operational overhead?
- Is QF/D really the brains? If so, does that mean QF/N = execution & QF/I = backplane only?
- If so, then we’re talking about a decoupled but centralized control plane (and an attractive target)
4. 3-2-1 is a [great story].. but ambiguous
- QF is 1 logical layer, but for switching only
- QF is not 1 physical layer (of course)
- Security services are another layer (SRX), both physical and logical
- Routing, other network services, and interconnectivity of QFs will incur another layer (MX), both physical and logical
- Juniper defines fabric differently from other vendors.. which is neither good nor bad, but should be kept in mind
- No path from existing infrastructure to QF, including Juniper’s own [!!!], that does not involve building a parallel network
- I assume the vision is that QF can act as a core network with all kinds of things (including whole non-QF network segments) expected to hang off of a QF/N—how will latency etc change in that situation?
- Will QF support QoS that does not originate from itself, but from a non-QF network segment hanging off a QF/N?
6. Standards irony
- QF is a black box once you get past the QF/N
- Since the message is that it’s really just one big distributed switch, that does make sense.. but what if I have a need to know the exact path of traffic, or am trying to isolate a fault, etc?
- Cisco stands in a position relative to QF that may make it look like the greater supporter of standards [irony!]
- Will Juniper submit the QF protocol to IETF?
- This issue is really operational
So SAN directors are just gonna hang off a QF/N?
- What about zoning, etc?
- What are the requirements for the SAN box connecting to the QF/N?
- Will QF take on SAN director type capabilities in the future?
- What about storage certifications or qualifications?
8. On scale, virt, and other topics:
- Do the latency, etc, characteristics hold in an oversubscribed QF?
- And no 1G-connected servers?
- @stu says you can connect servers at 1G, but you’d still be paying for a 10G port
- How’s QoS going to work if you’ve got bare-metal servers attached without virtual networking with which to apply marking? Is it entirely VLAN-based?
- Will all VLANs be presented everywhere?
- Will the same kind of APIs and automation-enablement be offered on the QF as on the other product lines?
- How will QFs be interconnected across data centers? A-VPLS? Virtual chassis on the MXs if the distance is short enough?
- How will QFs be partitioned into zones (of some kind)? Or is this an essentially non-multitenant network?
* manipulate your cognitive context, widen
* stuff your mind with interesting raw material
* connect an idea from one context to another
Probably an idea you’ve already had, but it occurred to me that the rise of “simplification” as a marketing (sometimes architectural, sometimes marketectural) mantra for network vendors is a direct reaction to Cisco not having created effective ways to manage the real complexity of data center networks at scale. We didn’t scale operational capabilities at the same time as we scaled technology and products.
The same goes, on the larger infrastructure stage, for networking in general. The reaction then aims to radically deconstruct the network into a simple message-passing function with a limited but guaranteed set of features available everywhere, and to everything, to the same degree. If network players refuse to manage, then all intelligence (as it were) might as well be handled elsewhere.
I got myself a kindle.
I’ve been resistant to the notion for quite some time, but decided to give it a shot. It’s a surprisingly decent experience.. easy to read on for hours.
I finally ran out of dead-tree reading material today and took the kindle for a spin to a coffee shop. Using a device like that at home is one thing; in public, a whole other. The first thing that struck me is that it’s an inherently antisocial device.
Some argue that reading is itself an antisocial activity. Not entirely. More than one conversation with a stranger (or not) in my life has been spurred by the title tucked into my hands. Books are social objects of the first order. The kindle [irony!] kills this.
[I have in my head that this has something to do with marketing and product design principles. Just don’t know what yet. o.0]
It’s been a little over half a year since I joined Cisco as a “marketing manager” on the “architecture” team in the “data center & virtualization” group targeting “enterprise & mid-market” companies for the “central marketing organization (CMO)”.
Translated into English, this roughly means that I do outbound marketing for data center stuff. This includes marketing messaging and content that goes into white papers, presentations (internal & external), ads, etc. Because @omarsultan is my boss, I also get to cause a bit of havoc attempting to pull things into a “solving business problems” orientation vs the traditional “moving boxes” tack. It’s a quasi-technical, quasi-marketing, quasi-strategery, quasi-business job.
I like it. :-)
Omar’s put together an odd, far-flung team to shake things up:
- I’m from a big co-opetitive integrator and based in Brooklyn
- @sjschuchart’s a former analyst based in Green Bay
- @cloudclint’s a former customer based somewhere in northern Kentucky near Cincinnati
Cisco is not a small company. But, coming from IBM, the place feels like a damn startup. I was able to make an impact immediately: working on useful things that’ve been integrated into all kinds of output from day one. I’ve been able to suggest ideas way outside my domain and have them taken up.
Some things I have worked on or am working on:
- flesh out Cisco’s cloud strategy for the enterprise
- a VCE white paper
- an enterprise private cloud white paper
- Data Center Business Advantage (what comes after Data Center 3.0) message, content, ads, etc
- presos given by execs to analysts, media, etc
- connecting various other marketing campaigns and stuff to DC Business Advantage
- vertical-specific marketing for healthcare, financial services, etc
- telling a better and better integrated services story
A little slow, but they’re finally on the bandwagon. The obvious part is that Brocade seems to agree with Juniper, Cisco, et al., re a single L2 core data center fabric for transport that can scale big. I can’t believe they’ll just settle for vanilla MPLS for interconnecting TRILL blocks at distance (or maybe not at distance). Something else is bound to come out on that front.
Given that, here’s the more interesting:
Brocade VCS also enhances server virtualization with technologies that enable enhanced VM visibility within the network and the seamless migration of policies with a VM. VCS achieves this through its distributed services architecture that makes the fabric aware of all of its connected devices and its ability to share information across those devices. Automatic Migration of Port Profiles (AMPP), a VCS feature, enables a VM’s network profiles – such as security or Quality of Service (QoS) levels – to follow that VM during migrations without manual intervention. This unprecedented level of VM visibility and automated profile management helps intelligently remove the physical barriers to VM mobility that exists in current technologies and network architectures.

That reminded me of this:
It is to tackle these three basic issues of (a) Per-VM policy enforcement and (b) monitoring, tracking and policy migration for VM live migration, and (c) VM-to-VM traffic switching, that a small set of companies started the Edge Virtual Bridging (EVB) ad-hoc group early last year. Today, that group has grown to include 200+ members from a diverse set of companies from the silicon, server, storage, networking, virtualization and data center software industries.

Here’s the 802.1Qbg - Edge Virtual Bridging group. Note the authors list in the most recent presentation.
The diligent work that this group has produced has led to two distinct approved proposals in the IEEE this past November: 802.1Qbg and 802.1Qbh.
In broad terms, these proposals discuss:
- a mechanism to discover VMs on the network and enforce policies as VMs appear or migrate across the network
- EVB Discovery
- VSI (Virtual Station Interface) Discovery
- AMPP: Automated Migration of Port Profiles…
And that one reminded me of a presentation from DC CAVES: Automated Ethernet Virtual Bridging [pdf].
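The port-profile-follows-the-VM idea (AMPP / VSI discovery) reduces to something like this toy sketch. The profile names, fields, and switch model are all invented for illustration; the real mechanisms live in 802.1Qbg/Qbh:

```python
# Hypothetical network profiles keyed by name; a real profile would
# carry security, QoS, etc. settings.
PROFILES = {
    "web-vm-profile": {"vlan": 10, "qos": "gold", "acl": "permit-http"},
}

class FabricSwitch:
    def __init__(self, name):
        self.name = name
        self.ports = {}  # port -> applied profile

    def vm_appeared(self, port, vm_mac, profile_name):
        # On (VSI-style) discovery of a VM, the switch pulls the named
        # profile and applies it -- no manual config at the destination.
        self.ports[port] = PROFILES[profile_name]

def live_migrate(vm_mac, profile_name, src, src_port, dst, dst_port):
    # Policy moves with the VM: torn down at the source port,
    # re-applied automatically at the destination port.
    src.ports.pop(src_port, None)
    dst.vm_appeared(dst_port, vm_mac, profile_name)

a, b = FabricSwitch("leaf-a"), FabricSwitch("leaf-b")
a.vm_appeared("eth1", "00:11:22:33:44:55", "web-vm-profile")
live_migrate("00:11:22:33:44:55", "web-vm-profile", a, "eth1", b, "eth7")
```

After the migration, leaf-b is enforcing the same VLAN/QoS/ACL settings the VM had on leaf-a, which is the whole point of AMPP.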