In case it isn’t obvious by now, these charts are meant to be unfair - that’s the point. Unfair but relevant comparisons are the most interesting and important kinds. An unfair comparison generally means an unfair advantage, and this isn’t the Olympics - unfair is good. Customers don’t care if a company’s advantage is unfair. Investors don’t care. Unfair advantages are often the best kind. They are something that flows structurally from the reason why your business is going to change everything - they flow from a technology change you are building on or a change in market dynamics or consumer behaviour that you’re riding, and that your competitors cannot address. Disruption is unfair. Mobile’s disruption of PCs and the PC internet is entirely unfair - it’s the unfairness of differences like the replacement cycle and subsidy model (amongst many others) that makes it possible. — In praise of unfairness — Benedict Evans
Five Reasons Not To Raise Venture Capital, by Rachel Chalmers | Model View Culture
(via xkcd: Heartbleed)
Drones on demand -
Gofor imagines a future world where drones are cheap and ubiquitous. What sorts of things would we have personal drones do for us? Follow us home in unsafe neighborhoods? Personal traffic copters? Travel location scouting?
How long before someone uses a personal drone for the same…
“What we have seen is that a logically centralized, hierarchical control plane with a peer-to-peer data plane beats full decentralization,” explained Vahdat in his keynote. “All of these flew in the face of conventional wisdom,” he continued, referring to all of those projects above, and added that everyone was shocked back in 2002 that Google would, for instance, build a large-scale storage system like GFS with centralized control. “We are actually pretty confident in the design pattern at this point. We can build a fundamentally more efficient system by prudently leveraging centralization rather than trying to manage things in a peer-to-peer, decentralized manner.” — Google Lifts Veil On “Andromeda” Virtual Networking
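The pattern Vahdat describes can be sketched in a few lines: a single controller computes forwarding state from its global view of the topology and pushes it to nodes, while the data itself moves peer to peer without touching the controller. All names below (Controller, Node, install) are illustrative, not Andromeda's actual API.

```python
class Controller:
    """Logically centralized control plane: sees the whole topology."""
    def __init__(self, links):
        self.links = links  # set of undirected links, e.g. {("a", "b"), ...}

    def routes_for(self, node):
        # Trivial policy: direct neighbors only. A real controller would
        # compute shortest paths over the full topology it alone can see.
        return ({b for a, b in self.links if a == node}
                | {a for a, b in self.links if b == node})


class Node:
    """Data plane: forwards directly to peers using controller-pushed state."""
    def __init__(self, name):
        self.name = name
        self.neighbors = set()
        self.inbox = []

    def install(self, neighbors):
        # Control-plane push: the node never computes routes itself.
        self.neighbors = neighbors

    def send(self, peer, payload):
        # Data-plane transfer: peer-to-peer, no controller on the path.
        assert peer.name in self.neighbors
        peer.inbox.append((self.name, payload))


nodes = {n: Node(n) for n in "abc"}
ctl = Controller(links={("a", "b"), ("b", "c")})
for name, node in nodes.items():
    node.install(ctl.routes_for(name))  # centralized decisions...
nodes["a"].send(nodes["b"], "hello")    # ...decentralized data flow
```

The split is the point: routing decisions are made once, with global knowledge, while bulk data never pays the round trip to the controller.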
A collaboration between a Stanford ant biologist and a computer scientist has revealed that the behavior of harvester ants as they forage for food mirrors the protocols that control traffic on the Internet. — Stanford biologist, computer scientist team up to discover the ‘anternet’
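The specific parallel the study drew is to TCP's additive-increase/multiplicative-decrease (AIMD) feedback rule: returning foragers act like acknowledgments that encourage sending a few more ants out, while a lull in returns throttles the outgoing rate sharply, the way packet loss throttles a TCP sender. A minimal AIMD sketch, with illustrative parameter values:

```python
def aimd_step(rate, got_feedback, increase=1.0, decrease=0.5, floor=1.0):
    """One control step: additive increase on positive feedback,
    multiplicative decrease on silence. Parameters are illustrative."""
    if got_feedback:
        return rate + increase        # acks/returning foragers: ramp up linearly
    return max(floor, rate * decrease)  # loss/no returns: back off sharply

rate = 10.0
history = []
for feedback in [True, True, True, False, True, False, False]:
    rate = aimd_step(rate, feedback)
    history.append(rate)
# The rate climbs linearly on good feedback and halves on each miss,
# which is what gives both TCP and the ants their stability under load.
```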
A reliable storage system is one that can be trusted to perform well under all states of operation, and that kind of predictable performance is difficult to achieve. In a predictable system, worst-case performance is crucial; average performance not so much. In a well implemented, correctly provisioned system, average performance is very rarely a cause of concern. But throughout the company we look at metrics like p999 and p9999 latencies, so we care how slow the 0.01% slowest requests to the system are. We have to design and provision for worst-case throughput. For example, it is irrelevant that steady-state performance is acceptable, if there is a periodic bulk job that degrades performance for an hour every day.
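The bulk-job example is exactly why tail percentiles get watched instead of means: a handful of slow requests barely moves the average but completely determines p9999. A toy illustration with made-up latency numbers, using a simple nearest-rank percentile:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile for p in (0, 100]; rounding guards
    against float error in len * p / 100."""
    ordered = sorted(samples)
    rank = math.ceil(round(len(ordered) * p / 100, 9))
    return ordered[max(0, rank - 1)]

# Hypothetical workload: 9,990 fast requests at 5 ms, 10 slow ones at 500 ms.
latencies_ms = [5.0] * 9990 + [500.0] * 10

mean = sum(latencies_ms) / len(latencies_ms)  # ~5.5 ms: looks healthy
p999 = percentile(latencies_ms, 99.9)         # 5 ms: still hides the tail
p9999 = percentile(latencies_ms, 99.99)       # 500 ms: the slow path shows up
```

At a few hundred thousand requests per second, "0.01% of requests" is still dozens of users hitting the slow path every second, which is why the quote treats worst-case behavior as the design target.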
Because we prioritize predictability, we had to plan for good performance during any potential issue or failure mode. The customer is not interested in our implementation details or excuses; either our service works for them and for Twitter or it does not. Even if we have to make an unfavorable trade-off to protect against a very unlikely issue, we must remember that rare events are no longer rare at scale.
With scale comes not only large numbers of machines, requests and large amounts of data, but also factors of human scale in the increasing number of people who both use and support the system. We manage this by focusing on a number of concerns:
- if a customer causes a problem, the problem should be limited to that customer and not spread to others
- it should be simple, both for us and for the customer, to tell if an issue originates in the storage system or their client
- for potential issues, we must minimize the time to recovery once the problem has been detected and diagnosed
- we must be aware of how various failure modes will manifest for the customer
- an operator should not need deep, comprehensive knowledge of the storage system to complete regular tasks or diagnose and mitigate most issues
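The first concern above, containing a misbehaving customer's blast radius, is commonly implemented as per-tenant admission control. A hypothetical sketch (not Manhattan's actual mechanism) using a token bucket keyed by tenant, so one tenant exhausting its budget is throttled without affecting anyone else:

```python
import time

class PerTenantTokenBucket:
    """Each tenant gets its own bucket: `rate` tokens/sec refill,
    up to `burst` stored. Names and numbers are illustrative."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.state = {}  # tenant -> (tokens_remaining, last_timestamp)

    def allow(self, tenant, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.state.get(tenant, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.state[tenant] = (tokens - 1.0, now)
            return True   # request admitted
        self.state[tenant] = (tokens, now)
        return False      # only this tenant is throttled

limiter = PerTenantTokenBucket(rate=10.0, burst=2)
t = 0.0
# Tenant "noisy" burns through its burst; tenant "quiet" is unaffected.
noisy = [limiter.allow("noisy", now=t) for _ in range(3)]
quiet = limiter.allow("quiet", now=t)
```

Keying the limiter by tenant is what turns an overload from a shared-fate outage into a single customer's problem, which also serves the second bullet: a throttled tenant can be told unambiguously that the limit, not the storage system, is the cause.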
And finally, we built Manhattan with the experience that when operating at scale, complexity is one of your biggest enemies. Ultimately, simple and working trumps fancy and broken. We prefer something that is simple but works reliably, consistently and provides good visibility, over something that is fancy and ultra-optimal in theory but in practice doesn’t work well, provides poor visibility or operability, or violates other core requirements. — Manhattan, our real-time, multi-tenant distributed database for Twitter scale | Twitter Blogs
I think enterprises should care more about full stack independence to manage vertical risk. Why? When a technology is vertically prescriptive (i.e. Pivotal CF One requiring VMware), it hurts an organization’s ability to manage supply chain risk by ensuring they can source vendors to fulfill needs at other parts in the stack. This means that a customer loses cost control leverage, feature leverage, and can be held hostage in the face of overall stack quality degradation. By optimizing for vertical independence, an enterprise ensures that vendor selection at any tier in the IT stack does not dictate what vendors they use above and below that part of the stack. — The LIES That Come With Some Flavors of PaaS