Scale matters in virtually every business. We learn this in the first week of Microeconomics class. It's a concept most people implicitly understand, even if they don't grasp the underlying mechanics of fixed and variable costs: we expect a better price when we buy 1,000 of something than when we buy 3.
In the cloud, scale isn’t everything – it’s the only thing. We can wax poetic about the virtues of any given platform, new offerings, strengths, weaknesses, and on and on; but at the end of the day, staying power for public cloud providers boils down to a simple matter of scale.
Scale is the fundamental differentiator among public cloud providers, and it's far more than a mechanism for spreading fixed costs over a larger number of units (or customers). Here's why:
1) Pricing Power
The most obvious advantage of scale comes in the form of pricing power. The more equipment a provider purchases, the better the prices it can command from vendors. Walmart built a retail empire on this very simple concept. For cloud providers, however, the pricing sword cuts both ways. Scale can be used to lower underlying costs, and thereby prices; but because scale is itself a key differentiator, it can also be used to erect large barriers to entry and thus protect pricing power while lowering costs, a beautiful virtuous cycle.
For example, if AWS can purchase storage for $1/TB and Microsoft pays $1.03/TB, so long as AWS has greater scale and feature parity, AWS can continue to command a profitability premium even at the same price point (for a more detailed look at AWS pricing history, check out this blog post). This also makes it nearly impossible for new market entrants to differentiate on cost without being willing to lose massive amounts of money over a lengthy period of time – something few competitors have had the appetite for.
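To make the arithmetic concrete, here's a minimal sketch. The $1.00/TB and $1.03/TB unit costs come from the example above; the $1.25/TB selling price is a hypothetical number chosen for illustration.

```python
# Toy illustration: at an identical selling price, the provider with the
# lower unit cost keeps a larger share of each dollar of revenue.
def gross_margin(price_per_tb: float, cost_per_tb: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (price_per_tb - cost_per_tb) / price_per_tb

SELL_PRICE = 1.25  # hypothetical market price per TB

aws_margin = gross_margin(SELL_PRICE, 1.00)   # AWS buys storage at $1.00/TB
msft_margin = gross_margin(SELL_PRICE, 1.03)  # Microsoft pays $1.03/TB

print(f"AWS margin:       {aws_margin:.1%}")   # 20.0%
print(f"Microsoft margin: {msft_margin:.1%}")  # 17.6%
```

A 3% cost disadvantage looks small, but at cloud scale it compounds across billions of dollars of hardware spend, which is why the lower-cost provider can match any price cut and still come out ahead.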
2) Training & Adoption
A lot of organizations that are still considering cloud adoption remain convinced they are going to adopt a multi-cloud strategy. They are bound and determined not to fall into the same onerous enterprise licenses and sticky arrangements they've had with software vendors for the past several decades. The reality, however, is that very few (if they're smart) will end up implementing a true multi-cloud strategy. Why? How many organizations do you know that run an equal number of Windows and Linux workloads? How's that working out for them?
The fact of the matter is that, for the overwhelming majority of individuals and organizations, learning one platform in-depth is a far more practical strategy. Once the investments are made in migration, training, and broader adoption, there will be very little appetite for change. In addition, training and certifications are only as valuable as the available opportunities to leverage them, which means the cloud providers with the widest adoption will also have the broadest pool of trained implementers. Nobody wants to invest time and energy in a certification that isn't widely adopted.
3) Peripheral Ecosystem
Look no further than Apple’s App Store for an object lesson in the importance of scale to a peripheral ecosystem. Apple and Google (via the Play Store) run the only worthwhile third-party marketplaces for mobile applications. Why develop an app for a market of 2 million when, for the same effort, you can develop for 200 million? Without scale, it isn’t worth the time and energy to develop against multiple platforms because the market potential simply isn’t there. The same goes for partner development: service and product partners are going to steer toward developing expertise on the cloud platform with the largest market. We’re watching this play out now in the PaaS and SaaS space.
4) Pace of Innovation
Like any technology platform, the cloud is all about features. Features are what bring in new users and cause existing users to stick around. The problem with features is they are very expensive to build.
The graphic below depicts the number of “major features” AWS has released each year since 2008. While Amazon didn’t disclose AWS revenues until 2014, it’s safe to say they likely match the trajectory of this graphic pretty closely. So why does that matter? It means AWS has an army of engineers working to build upon its existing platform, and given the company’s profitability, the cost of those engineers is already covered. Anyone wishing to compete head-to-head is going to need a comparably sized and equipped army simply to keep pace. The problem is that army isn’t going to be funded by the platform’s customers, at least not initially. So any would-be competitor is going to have to fund this investment out of existing earnings or take on debt to finance a major investment, not to mention building out the underlying infrastructure, development practices, partnerships, organizational structure, and so on. Just the physical infrastructure required to support enough engineers to crank out 1,000 major features per year is staggering.
5) Machine Learning
The whole concept of machine learning is predicated on scale. The idea is that if we build a machine capable of learning, then given sufficient time, iterations, and feedback (practice, if you will), the machine becomes smarter. The key paradox of machine learning is that nobody wants to use it until it’s really good, and it doesn’t become good if nobody uses it. This is where scale really matters.
During re:Invent, AWS announced it is making a number of its AI and deep learning capabilities (Rekognition, Polly, and Lex) available to its customer base. As if that weren’t enough, it gave away a free Echo Dot to every single one of the 32,000 attendees. These also sold like hot cakes during the recent Christmas season. AWS already has a several-million-strong user community of known tinkerers and early adopters, exactly the audience that makes an ideal testing ground for early-stage machine learning and AI. By making these tools broadly available, AWS has unleashed the power of the masses on its AI platform. Through the simple act of providing feedback (in the form of an answer to the laughably simple question “is this what you expected?”), these masses are providing exactly the input AWS needs to continue improving its capabilities in this space. At any smaller scale, this process takes far longer and costs far more, as the number of iterations required to learn becomes a difficult obstacle to overcome.
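The feedback loop described above can be sketched in a toy example (all names and numbers here are hypothetical, not any actual AWS mechanism): a service tries to learn an unknown decision threshold purely from users' yes/no answers, and its estimate sharpens as the number of feedback events grows.

```python
import random

random.seed(0)

# The true (unknown) decision rule the service is trying to learn:
# an output "looks right" to users when its score exceeds 0.6.
TRUE_THRESHOLD = 0.6

def user_feedback(score: float) -> bool:
    """A user's yes/no answer to 'is this what you expected?'."""
    return score > TRUE_THRESHOLD

def learn_threshold(n_feedback_events: int) -> float:
    """Estimate the threshold as the midpoint between the highest
    score labeled 'no' and the lowest score labeled 'yes'."""
    highest_no, lowest_yes = 0.0, 1.0
    for _ in range(n_feedback_events):
        score = random.random()  # one interaction with one user
        if user_feedback(score):
            lowest_yes = min(lowest_yes, score)
        else:
            highest_no = max(highest_no, score)
    return (highest_no + lowest_yes) / 2

# More users -> more feedback events -> a sharper estimate.
for n in (10, 1_000, 100_000):
    print(f"{n:>7} feedback events -> estimate {learn_threshold(n):.3f}")
```

The point of the sketch is the scaling behavior, not the algorithm: a provider with 100x the users collects 100x the feedback for free, so its estimate converges while a smaller rival is still guessing.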
The New Oligopoly
As cloud adoption accelerates, the success or failure of market participants is going to hinge on their ability to scale – rapidly. While cloud adoption is still nascent by most measures (such as spending), the primary competitors in this game have already been decided: Amazon, Microsoft, and Google. Oracle, despite Larry Ellison’s bombastic declarations to the contrary, is simply too late (this clip from 2008 might help explain why), and IBM, unwilling to cannibalize its existing business, is simply too slow. So at this point in 2017, we have “the big 3” of cloud computing, because they are the only ones with sufficient scale and resources to compete.* It will be exciting to see how this all plays out among these fierce competitors, but make no mistake, the field has already been set.
*Apple is probably the only other company with sufficient size, scale, and resources to make a run at the cloud computing market, though it has shown no interest in doing so and likely never will, as the market carries lower margins than its current businesses.