Cisco is using the DVS framework to deliver a portfolio of networking solutions that can operate directly within the distributed hypervisor layer and offer a feature set and operational model that are familiar and consistent with other Cisco networking products. This approach provides an end-to-end network solution to meet the new requirements created by server virtualization. Specifically, it introduces a new set of features and capabilities that enable virtual machine interfaces to be individually identified, configured, monitored, migrated, and diagnosed in a way that is consistent with the current network operation models.
These features are collectively referred to as Cisco Virtual Network Link (VN-Link). The term literally indicates the creation of a logical link between a vNIC on a virtual machine and a Cisco switch enabled for VN-Link. This mapping is the logical equivalent of using a cable to connect a NIC with a network port of an access-layer switch.” —
Cisco VN-Link: Virtualization-Aware Networking [Cisco Nexus 1000V Series Switches] - Cisco Systems
Required reading for network people who want to understand the implications of cloudbursting.
WHAT IT REALLY MEANS FOR AN APPLICATION TO BE AVAILABLE
Availability of an application should never be construed to mean “the server is up and running.” Never. Just prepare to unlearn if you think that’s true. Do not pass go. Do not collect $200. Clear your mind and let go of that definition. Ready? Good. Let’s continue then.
A running server is merely the minimum requirement for an application to be considered “available”; in reality there’s a lot more that goes into the definition. Availability should be considered to mean:
1. The servers (physical and virtual) and the application are running and accessible.
2. The application is responding in an expected fashion to all requests.
3. The application is responding in a timely manner to all requests.
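The three conditions above can be sketched as a simple health check. This is a minimal illustration, not a production probe; the `fetch` callable, the expected fragment, and the timeout are hypothetical placeholders:

```python
import time

def check_availability(fetch, expected_fragment, timeout_s=2.0):
    """fetch() returns the response body as a string, or raises on a
    connection failure. Returns (available, reason) per the three
    conditions above."""
    start = time.monotonic()
    try:
        body = fetch()
    except Exception as exc:
        # Condition 1 fails: server not running or not accessible
        return False, f"not reachable: {exc}"
    elapsed = time.monotonic() - start
    if expected_fragment not in body:
        # Condition 2 fails: not responding in an expected fashion
        return False, "unexpected response"
    if elapsed > timeout_s:
        # Condition 3 fails: not responding in a timely manner
        return False, f"too slow ({elapsed:.2f}s)"
    return True, "available"
```

The point of the three-way split is that monitoring only the first condition (can I connect?) misses the two failure modes users actually complain about: wrong answers and slow answers.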
Beware the Availability Rat Hole in the Cloud
Application-driven hardware, writ large.
Traffic access ports, or Taps, are important components in any network deployment. They eliminate points of failure, increase the ROI on your monitoring tool investment, and ensure that your monitoring devices can see all of the network traffic.
Taps come in several varieties with different purposes and features. If you are confused about what type of Tap device you need in your network, here is a quick overview of the capabilities each type of Tap provides:” —Net Optics - Network Monitoring Access: What kind of Tap do I need for my network?
EtherChannel. I would love to change the EtherChannel hashing function and do something far more intelligent, automated, and better performing. Most switches today use a simple hash based on L2, L3, or L3 plus L4 port info to determine which link to send a given traffic flow down. This link is chosen based on a hash algorithm and then stays constant unless there is a link failure, in which case the traffic is remapped.
Why is this not good enough? It’s actually okay for some traffic. But when host interconnect speeds and uplink speeds are identical, we start running into problems where a host can generate a flow that can consume an entire uplink, and then you deal with contention and buffering and all sorts of fun stuff. Today, we are seeing a convergence of host speeds and uplink speeds at 10Gb, so this problem will rear its ugly head again.” —Things I Would Like to Change Part 1/N « loopback0 – Douglas Gourlay’s Blog
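As a toy model of the behavior the quote describes, the link choice can be sketched as a hash of L3/L4 header fields taken modulo the number of member links. Real switches differ in which fields they hash and how; this sketch just shows why every packet of a flow lands on the same link:

```python
def pick_member_link(src_ip, dst_ip, src_port, dst_port, n_links):
    """Toy model of a typical EtherChannel hash: XOR the L3 addresses
    and L4 ports, then take the result modulo the number of member
    links. Every packet of a flow maps to the same link, which is why
    one large flow can saturate a single uplink."""
    def ip_bits(ip):
        a, b, c, d = (int(octet) for octet in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    h = ip_bits(src_ip) ^ ip_bits(dst_ip) ^ src_port ^ dst_port
    return h % n_links
```

Because the hash is a pure function of the flow's headers, a single 10Gb-capable host flow is pinned to one member link no matter how idle the other links are; the hash balances flows, not bytes.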
Another interesting new feature is support for VMware. The VMware Infrastructure API provides a complete set of language-neutral interfaces to the VMware virtual infrastructure management framework. By targeting the VMware Infrastructure API, the OpenNebula VMware adaptors are able to manage various flavors of VMware hypervisors: ESXi, ESX, and VMware Server.
The combination of both innovations allows the creation of a Cloud infrastructure based on VMware that can be interfaced using the Amazon EC2 Query API. I will cover more unique features and capabilities in upcoming posts.” —
OpenNebula keeps chugging along!
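For illustration, an EC2 Query API request is just an HTTP call carrying an `Action` parameter plus credentials, so any endpoint that speaks the protocol can stand in for Amazon's. A minimal (unsigned) URL builder, with a purely hypothetical OpenNebula front-end endpoint, might look like:

```python
from urllib.parse import urlencode

def ec2_query_url(endpoint, action, access_key, **params):
    """Build an EC2 Query API request URL. The endpoint and access key
    here are placeholders; a real request would also carry a computed
    Signature parameter."""
    query = {"Action": action, "AWSAccessKeyId": access_key, **params}
    # EC2 Query canonicalization sorts parameters by name
    return endpoint + "?" + urlencode(sorted(query.items()))
```

The same client-side URL could then be pointed at either Amazon or an EC2-compatible OpenNebula front end, which is the interoperability the quote is describing.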
Most powerful people are on the manager’s schedule. It’s the schedule of command. But there’s another way of using time that’s common among people who make things, like programmers and writers. They generally prefer to use time in units of half a day at least. You can’t write or program well in units of an hour. That’s barely enough time to get started.
When you’re operating on the maker’s schedule, meetings are a disaster. A single meeting can blow a whole afternoon, by breaking it into two pieces each too small to do anything hard in. Plus you have to remember to go to the meeting. That’s no problem for someone on the manager’s schedule. There’s always something coming on the next hour; the only question is what. But when someone on the maker’s schedule has a meeting, they have to think about it.
For someone on the maker’s schedule, having a meeting is like throwing an exception. It doesn’t merely cause you to switch from one task to another; it changes the mode in which you work.” —Maker’s Schedule, Manager’s Schedule
The basic tenet of our design is each site in the network - no matter the size - is its own BGP AS using private BGP AS numbers. That provides 1,024 AS numbers which is more than enough for our network (that may not be enough for very large networks, but MPLS carriers are happy to AS override for you). With each site in its own AS, the WAN links at each site - be they MPLS, private-line, or GRE tunnel - would run eBGP. Now, BGP became our core WAN routing protocol. This met the MPLS carriers’ requirements and made our WAN routing much simpler. We now had a protocol with the scalability to handle thousands of routes and with enough protocol features (filter-lists, route attributes, communities, etc.) to implement routing policy (something OSPF lacks).
Next we developed what I feel is the best part of our BGP design. At each site in our network all traffic flows through the core. So, we used this rule to design BGP. The core routers (high-end Cisco 7600s) are the center of BGP at each site. These 7600s are iBGP route-reflector clusters that peer iBGP to the WAN routers. By using a route-reflector cluster we avoid the iBGP full-mesh problem. The cores create all BGP routes (via the BGP “network” command) and advertise those routes to the WAN routers. The WAN routers then advertise those routes to eBGP peers over the WAN (MPLS, other sites, etc). Filtering policy is done at the edge on the WAN routers. The cores learn routes for external sites via iBGP from the WAN routers (which have already learned the routes via eBGP). Thus, the core routers know all routes in the entire network. BGP easily scales to handle these global routes, unlike OSPF which does not handle thousands of routes well. This sets up a very elegant and fast BGP design. Failover is within 5 seconds when a WAN link goes down and the design can scale quickly.” —
Great article [first in a series] on bringing BGP into the enterprise.
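The per-site design in the quote might be sketched in Cisco IOS-style configuration along these lines. The AS numbers, addresses, prefix, and filter-list number are hypothetical placeholders, not the author's actual config:

```
! Core 7600 at a site in (hypothetical) private AS 64601,
! acting as iBGP route-reflector for the site's WAN routers
router bgp 64601
 network 10.60.0.0 mask 255.255.0.0       ! originate the site's routes
 neighbor 10.60.255.1 remote-as 64601     ! iBGP to a WAN router
 neighbor 10.60.255.1 route-reflector-client

! WAN router at the same site: iBGP to the core, eBGP over the WAN
router bgp 64601
 neighbor 10.60.255.254 remote-as 64601   ! iBGP to the core (RR)
 neighbor 172.16.1.1 remote-as 64999      ! eBGP to the MPLS carrier PE
 neighbor 172.16.1.1 filter-list 10 out   ! filtering policy at the edge
```

The route-reflector-client statements on the core are what let the WAN routers skip a full iBGP mesh, while the eBGP sessions and outbound filter-list on the WAN routers keep policy at the edge, as the quote describes.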
[Copied and modified a little from a 1yr-old post on my now-defunct blog.]
How services should be treated like products:
- Development lifecycles. Processes that: identify the triggers in the marketplace, in technological innovation and research, in services delivery innovation and research, sales statistics, etc. that should initiate changes to a service offering; define how changes are made; define how new offerings are developed.
- Version control. Track changes, freeze offerings for deployment, maintain stable vs developing offerings, etc.
- Testing. Process for alpha/beta testing offerings with sales/biz dev, delivery, partners and clients.
- Ecosystem. A community of clients and partners willing to test out new/changed service offerings.
- Components. Decomposition of services into component service elements, assets, patterns, etc., a la SOA/SCA.
- Descriptions. Standard format for service offerings identifying content/service elements; assets; and a standard set of properties such as “line of business”, “sectors”, “technologies”, etc.
- API. Standard interface into and out of service offerings allowing components to plug into each other, allowing partners to integrate your offerings into their portfolios and vice versa—in effect, a services product portfolio API.
- Customers pay for something. Stop charging by the hour. If you’re really going to be an asset-based business, that is.
- Sales. No one gets commissions or bonuses unless their deals generate profit, and then only to the extent that they do. If that’s too radical, replace “profit” with “revenue” and go from there.
- Cost model. Costs for developing, testing, delivering, and establishing maturity in a service offering are radically different from those for products—because the factors are radically different. And then, some of those very factors vary from offering to offering.
- Scaling. Services do not scale easily. Because people don’t. Because brains do not scale. The less knowledge/skill required, of course, the easier it is to mitigate this issue.
- …what else?