[irq]: techie interrupted


“ Adrian: Going back to the speed of development: like I said, there isn’t any executive that wants his company to be slower at product development. So if you look at how you really speed things up you have to take the hand-offs out of the process. So every time you get a team giving something to another team – Development to QA to Ops to whatever – every one of those is a synchronization point that slows everything down. If you can avoid that then you’ve saved yourself a lot. So speed matters.

Taking those steps out for big companies means you re-org really, because that’s what DevOps is about. If adopting DevOps doesn’t involve a re-org, then you’re not doing it right. So that’s one of the reasons it’s hard to adopt. But, once you get your head around it, you realize what you’re doing is streamlining things and you have to smash the groups together so that developers do their own operations, and Operations people and Development and QA… You get rid of the artificial barriers, and in operations you get rid of the stove-piped fiefdoms of the storage guys and network guys and the database guys and sysadmins. So you have to kind of mash this stuff back together again to make it efficient, and that’s to make the speed of delivery efficient.

They got siloed for optimizing for cost rather than for speed. So this is kind of a cost-versus-speed thing. And the pendulum is swinging back away from cost to speed. Because the cost of infrastructure is so low that now the time it takes to develop something is the biggest problem, so you’ve got to speed things up. So that is causing people to think about things in different ways, and different products are appearing, and the scale that people are dealing with things, and the “software eating the world” kind of ideas where every company now has to be a software company. You can’t not be a software company because every product somewhere has software in it. And everything you do, if it’s marketing or sales, you’re doing real-time bidding for ads. „

The New Stack Makers: Adrian Cockcroft on Sun, Netflix, Clojure, Go, Docker and More | The New Stack


“ Lowering the price expands the addressable market, as well. The cheaper it is to do it in the cloud, the more difficult it is to make a business case to do an on-premises solution, especially a private cloud. Many of Gartner’s clients tell us that even if they have a financially viable case to build a private cloud right now, their costs will be essentially static over the amortization period of 3 to 5 years — versus their expectation that the major IaaS providers will drop prices 30% every year. Time-to-value for private cloud is generally 18 to 24 months, and it typically delivers a much more limited set of features, especially where developer enablement is concerned. It’s tough for internal IT to compete, especially when the major IT vendors aren’t delivering software that allows IT to create equivalent capabilities at the speed of an AWS, Microsoft, or Google. „

AWS 2Q14 and why the sky is not falling | CloudPundit: Massive-Scale Computing
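The arithmetic behind that argument is worth making concrete. A minimal sketch, using assumed illustrative numbers (a normalized starting cost of 100 per year and the 30% annual IaaS price decline the quote mentions, over a 5-year amortization period — none of these figures beyond the 30% come from the article):

```python
# Compare a private cloud with a static annual cost against an
# IaaS provider whose prices drop 30% each year, over 5 years.
# Starting costs are normalized to 100/year for illustration.

private_annual = 100.0  # static cost per year over the amortization period
iaas_start = 100.0      # same starting cost, for a like-for-like comparison
drop = 0.30             # assumed 30% yearly price decline (from the quote)
years = 5

private_total = private_annual * years
iaas_total = sum(iaas_start * (1 - drop) ** y for y in range(years))

print(f"private cloud over {years} years: {private_total:.0f}")
print(f"IaaS at -30%/yr over {years} years: {iaas_total:.0f}")
# The IaaS total comes out a bit over half the private-cloud total,
# before even counting the 18-24 month time-to-value gap.
```

Even this toy model shows why a business case that is "financially viable right now" erodes quickly when one side's prices are static and the other's compound downward.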






What is your philosophy on build vs. buy?

In case it isn’t already obvious: for most of my career I was one of those guys who built one of everything, and at some level believed that what I built was better than what other people built. As I’ve grown more experienced I’m much more protective of my time, my team’s time, and our ability to focus on what differentiates us from other products. My default is buy unless there’s a compelling counter-argument.


CTO to CTO: Werner Vogels and Don Neufeld — Medium



“ What we have seen is that a logically centralized, hierarchical control plane with a peer-to-peer data plane beats full decentralization,” explained Vahdat in his keynote. “All of these flew in the face of conventional wisdom,” he continued, referring to all of those projects above, and added that everyone was shocked back in 2002 that Google would, for instance, build a large-scale storage system like GFS with centralized control. “We are actually pretty confident in the design pattern at this point. We can build a fundamentally more efficient system by prudently leveraging centralization rather than trying to manage things in a peer-to-peer, decentralized manner. „

Google Lifts Veil On “Andromeda” Virtual Networking


“ I think enterprises should care more about full stack independence to manage vertical risk. Why? When a technology is vertically prescriptive (i.e. Pivotal CF One requiring VMware), it hurts an organization’s ability to manage supply chain risk by ensuring they can source vendors to fulfill needs at other parts in the stack. This means that a customer loses cost control leverage, feature leverage, and can be held hostage in the face of overall stack quality degradation. By optimizing for vertical independence, an enterprise ensures that vendor selection at any tier in the IT stack does not dictate what vendors they use above and below that part of the stack. „

The LIES That Come With Some Flavors of PaaS


“ It used to be that the physical hardware was orders of magnitude more expensive than engineers, but this hasn’t been true for decades now - it’s perfectly reasonable to look for ways to reduce your costs, especially if it can be done quickly, but obsessing over hardware costs, especially while you’re still growing, is a red herring. Building large systems is tough and the fewer things you have to worry about the better - using AWS reduces the chance that you will run into a scenario where you’re just not able to do something without changing your host and rewriting your architecture. „

AWS is about infrastructure optionality
