6fusion

Blog

Datamation: Controlling Cloud Computing Costs – Video Roundtable

http://www.datamation.com/events/controlling-cloud-computing-costs-video-roundtable.html


Rob Bissett, 6fusion Chief Product Officer, joins moderator James Maguire, Senior Managing Editor of Datamation, and panelists Owen Rogers, Senior Analyst of Digital Economics at 451 Research, and Deirdre Mahon of Cloud Cruiser to discuss the new Cloud Economics.

Cloud computing was supposed to offer significant cost savings to businesses. Among other benefits, costs could be shifted to OPEX instead of CAPEX. Yet in reality many businesses have reported that costs have actually increased as they’ve moved to the cloud. Why? And why are the various costs and invoice statements so confusing and so unpredictable? And why is it so hard — or is it? — to compare costs between the various cloud providers? In this video roundtable, four leading cloud experts will discuss these and other key issues relating to cloud computing costs.


Why TCO Cost Calculators Don’t Work for a Cloud World

White Paper by Rob Bissett, Chief Product Officer

The following white paper was released in response to numerous requests for additional information from an earlier post entitled: 3 Reasons Why TCO Cost Calculators Don’t Work for Cloud. The unabridged version follows:

Amazon Web Services – long considered the gold standard for web-scale cloud services – has begun a very focused effort on driving growth in their customer base from enterprise audiences.  I believe this will become a common theme as all the larger public cloud providers shift their focus from the “cloud first” workloads to the massive underbelly of Enterprise IT spend.

One of the major challenges that these vendors will face is increasing pressure to demonstrate TCO in the face of incumbent internal IT opposition.  There are arguments being made inside the enterprise that (security and regulatory concerns notwithstanding) enterprise IT can deliver cloud-like shared services as, or more, cheaply than AWS and its peers.  This argument may or may not be true generally, but it does force these providers to work harder to justify the TCO/ROI case for moving enterprise workloads to the cloud.

So let’s take a look at how TCO is being calculated by public cloud vendors today, working through a real-world example, and identify some of the critical limitations of applying an on-prem costing model to the on-demand world.

TCO Today

Amazon has developed and published a TCO tool (http://www.awstcocalculator.com/) designed to help organizations better understand the financials behind their IT infrastructure, and how that would compare to an equivalent AWS configuration.  The challenge behind this, and in fact the challenge with all traditional enterprise ROI calculations, is the complexity of enterprise cost models.  The transition to more modern shared services infrastructures exacerbates these issues by intermingling the various resources and costs.

AWS has invested considerable effort in building out a default, assumption-driven cost structure that can be used for the TCO calculations, along with a set of assumptions about how enterprise hardware is architected and deployed.  The result is a simple-to-use tool that lets a user enter some basic information about the configuration of an app and get a baseline cost comparison against the equivalent running in Amazon.

While the tool is quite simple to use, the logic underpinning it leaves a lot to be desired.

Specifically:

  1. The tool assumes a discrete, complete, comprehensive stack for each application that you are conducting an ROI on, with no re-use from existing resources.

  2. It assumes that you can fully eliminate the cost of the underlying hardware by switching the app to Amazon.

  3. The tool assumes a 3 year ROI, starting from today – it doesn’t allow for partial amortization.

These assumptions mean the tool is only useful if you migrate 100% of your apps to Amazon, can retire the underlying infrastructure completely, and that hardware is brand new.

The impact can be seen in the following example:

I have an app powered by 5 VMs that runs in a larger shared-services infrastructure.  Entering that configuration into the calculator, I get the following result:

Now, this states that it will cost me $215,000 to host this application in house.  It further extrapolates that if I run it in Amazon instead, it will only cost $12,021, a savings of $203,864.  Now, in a modern enterprise, does anyone believe that you would incur $200K in REAL costs to run 5 VMs for 3 years in a scenario in which you don’t have to buy any new hardware?
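To put that claim in perspective, here is a quick back-of-the-envelope sketch in Python using the rounded figures quoted above (the exact calculator output may differ slightly):

```python
# Back-of-the-envelope check on the calculator's claim, using the figures
# quoted above (values are rounded as quoted, not exact calculator output).
on_prem_3yr_cost = 215_000   # calculator's claimed in-house cost for 5 VMs over 3 years
aws_3yr_cost = 12_021        # calculator's claimed AWS cost for the same workload
vm_count = 5
months = 36

implied_cost_per_vm_month = on_prem_3yr_cost / (vm_count * months)
print(f"Implied on-prem cost per VM per month: ${implied_cost_per_vm_month:,.2f}")
# ~ $1,194 per VM per month -- far above what a VM on an already-owned,
# shared-services stack actually costs, which is the point of the critique.
```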

The Limitations of TCO for on-demand environments

The purpose of this document isn’t to dispute the math behind this analysis, but to illustrate that the traditional approach to TCO doesn’t work for cloud.  The challenges lie in assuming that you need to acquire an entire enterprise stack to run this app, and only this app, and that you can remove the capitalized cost on migration.  In most modern environments, enterprise IT has already acquired data center real estate, power, servers, storage, virtualization and the rest of the necessary underlying resources.  When new workloads result in increased underlying hardware requirements, those likely won’t require new acquisitions to support them.  Even if incremental investments were required, it would be exceedingly difficult to allocate the direct costs of the incremental hardware to a single app since the hardware would be dedicated to a resource pool and used across multiple apps benefiting them all. Additionally, those resources that were acquired would likely only be fractional incremental server investments rather than a full enterprise stack, and would join the stack that was already partially amortized.

More generally, this points to a potential issue with cloud TCO in general.  The “standard” method of conducting TCO analysis in the enterprise is simple – determine the hardware (and other) resources that the application uses, sum the costs of those resources, add labor, and present your results. This simply doesn’t work for cloud – you consume, you don’t acquire.

The challenge with a consumption based model is the shared services stack. It is virtually impossible to determine exactly which hardware resources are now being used by a given app – particularly in storage and hypervisor based systems where they may move around dynamically.  Further complicating the issue is that the hardware resources are being shared – so even if you could point to a server and say “it’s running on that one”, it would be difficult or impossible to figure out how much of “that one”.

While a lot of firms have worked on finding a better way to address this TCO issue, most of them are still trying at the most basic level to figure out how to apportion partial hardware resources.  “How much do I have, and how much of that is this app using?”  Historically, internal processes to address this are seen as arbitrary, proprietary, and open to debate, internally and externally.  This calls into question all TCO analyses, moving acquisition back to a highly politicized process.

What’s next for TCO modeling?

The What’s Next is TCC – Total Cost of Consumption.  TCC takes the cost of the shared services stacks as an input, and then quantitatively measures the real-time consumption of each application, enabling a precise measurement of the cost of a given app. If you know what the stack costs, and how much of it each app uses, the math is simple, transparent, and defensible.  TCC is a process that abstracts the underlying hardware from the app and provides an apples-to-apples comparison, de-politicizing the procurement decision (to cloud, or not to cloud).

How do we do this?  We need a costing model that is transparent and defensible, and then a method by which we can meter consumption.  Let’s look more deeply at this.

The Workload Allocation Cube

The WAC is the first commercial standard for application consumption measurement.  It works by comparing real-time utilization (workload) against a fixed baseline (allocation) spanning six vectors (cube) – CPU, Memory, Storage, Disk I/O, LAN I/O, and WAN I/O.

The fundamental difference between the WAC and other proprietary measurement models is that it takes account of consumption not allocation.  In effect, the WAC is the basis of the most important metric in IT financial management: Total Cost of Consumption (TCC).

Going a layer deeper, the WAC enables a true utility approach to IT infrastructure by abstracting the platform, technology, location, provider and other details from the actual analysis.  It also eliminates the necessity of thinking in terms of instances, or instance types, as WAC consumption can be applied against any type of instance configuration.  Furthermore, because the WAC is agnostic, it can be used to compare consumption and cost across both private and public providers, providing an apples-to-apples cost comparison that greatly simplifies the decision-making process and completely depoliticizes the IT outsourcing process.

The Utility Methodology

With the development of the standard unit of measurement outlined above, we can take a fundamentally new approach to the TCO process.  This is based on what we refer to as utility economics, or simply a true “cloud” approach.  The process is fairly simple, and works backwards from the app-centric method that traditional models take.  Specifically, it starts with understanding the scope of the hardware environment in which the target applications run.  This is fairly broad – which storage systems (SAN), which VMware clusters, etc.

Since we are interested in the macro view, this is a much, much simpler task than trying to identify which percentage of which physical elements are in use.  Once we are able to identify the macro environment (we call this an infrastructure), we can then start to dig into its consumption.  To take a fairly simple example, an enterprise with a single shared services ‘stack’ that is running a group of apps on it (including the one we are interested in) would just consider the whole environment.  Because we aren’t subdividing, the determination of what resources to include is pretty simple.  You can then itemize the cost centers and determine their original (or book) value and any amortization you want to consider.  Anything that is not specific to the app should be included – things like hardware, M&S, hypervisor costs, data center, etc.

This gives us the ability to undertake a pretty simple analysis that we call Infrastructure Capacity & Cost.  This step uses our methodology to determine the overall productive capacity of the infrastructure, expressed in WACs/hr.  Based on the total cost (capital and operating) of the hardware across its lifetime, we can determine an average hourly operating cost.  Dividing the two gives us a $/WAC.  This represents a loaded cost per unit of consumption, and this (fairly simple) analysis gives us the basis of the TCC for the given app.
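As a rough illustration of the Infrastructure Capacity & Cost step, the sketch below walks through the arithmetic with entirely hypothetical cost and capacity figures; the actual WAC capacity calculation is 6fusion’s own methodology and is not reproduced here.

```python
# Minimal sketch of the Infrastructure Capacity & Cost step described above.
# All figures are hypothetical; only the shape of the arithmetic is illustrated.

annual_infrastructure_cost = 1_200_000       # capital amortization + operating costs ($/yr)
hours_per_year = 24 * 365
avg_hourly_cost = annual_infrastructure_cost / hours_per_year

infrastructure_capacity_wac_per_hr = 50_000  # total productive capacity, in WACs/hr (assumed)

cost_per_wac = avg_hourly_cost / infrastructure_capacity_wac_per_hr
print(f"Average hourly operating cost: ${avg_hourly_cost:,.2f}")
print(f"Loaded cost per WAC: ${cost_per_wac:.6f}")
```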

The second major aspect that must be considered is the determination of the percentage of the entire resource pool that the app uses.  This is difficult to do in traditional form because we are trying to use legacy physical measures of consumption.  By approaching this a different way, we can greatly simplify the process.  The 6fusion platform provides the capability to meter application consumption in real time, on a VM-by-VM basis.  Think of this as wiring up the infrastructure with a “meter” on each virtual machine.  The meter reads the actual consumption and returns the readings to the UC6 Platform, which then calculates the WAC consumption from that data.  With that data, we can determine EXACTLY what percentage of the total is being used.

By summing the consumption of the various VMs that comprise the app, we can calculate the loaded cost of running the app by multiplying the number of WACs consumed by the calculated $/WAC.  This can be done hourly, monthly, or over whatever time range the user wishes to see.  You can then add in any app-specific costs (software licensing, for example) that you want to consider as part of the cost of operations.  For a comparison between two hosting providers this typically isn’t necessary, as those costs would apply in both cases; it is only important if you expect those costs to differ.  This calculation can then be repeated (a) for each app running in the system, and (b) on a regular repeating basis to provide true insight into the cost of operations for the infrastructure.  Even better, it provides a baseline for comparison.  By determining the $/WAC for the various available execution venues, the enterprise can simply and easily compare, say, the cost of running the app in the internal venue vs running the app in the AWS public cloud.
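Once the metered WAC data and the $/WAC figures are in hand, the per-app TCC calculation and venue comparison described above reduce to a few lines of arithmetic. The sketch below uses invented VM names, readings, and rates purely for illustration:

```python
# Sketch of the per-app TCC calculation and venue comparison described above.
# WAC readings and $/WAC figures are invented; in practice they would come
# from the metering platform and the capacity/cost analysis.

app_vm_wac_consumption = {          # WACs consumed by each VM of the app this month
    "web-01": 1_800,
    "web-02": 1_750,
    "db-01": 3_200,
    "cache-01": 900,
    "batch-01": 1_350,
}

internal_cost_per_wac = 0.0045      # hypothetical internal venue $/WAC
public_cloud_cost_per_wac = 0.0052  # hypothetical public-cloud venue $/WAC

app_wacs = sum(app_vm_wac_consumption.values())
internal_tcc = app_wacs * internal_cost_per_wac
public_tcc = app_wacs * public_cloud_cost_per_wac

print(f"App consumed {app_wacs:,} WACs this month")
print(f"TCC in the internal venue: ${internal_tcc:,.2f}")
print(f"TCC in the public cloud:   ${public_tcc:,.2f}")
```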

In larger, more complex environments this can be somewhat more difficult, but by making reasonable assumptions one can determine what hardware to include in the model, and build a cost model either by (a) getting hard data from accounting or (b) using publicly available pricing and reasonable discount assumptions.

Finally, the inherent advantage of this model – above and beyond the simpler, more useful ROI/TCO calculation – is that the metering can be left in place post-migration, and the actual $/WAC can be calculated as a KPI on a recurring basis to validate the original assumptions on which the decision was made, providing real-time information that can be used in future deployment decisions.

Conclusion

Traditional enterprise IT ROI/TCO calculators simply don’t provide enough granularity and flexibility in the new shared services / cloud world.  “Ownership” is being replaced by “Consumption”. Enterprises, and enterprise service providers, need to adopt newer, more utility-like models in order to create credible savings and TCO calculations.  These models must include a standard unit of measure for IT and enable the abstraction needed for more precise comparisons. Additionally, with the cost of migration falling as a result of the cloud model, it becomes increasingly important to provide ongoing justification for (a) the original migration decision and (b) the ongoing decision to continue with a given provider.  Adopting a true utility approach, including a standard unit of measure provided by an impartial 3rd party, is the best way to accomplish this.

3 Reasons why TCO Cost Calculators don’t work for Cloud

by Rob Bissett, Chief Product Officer, 6fusion 

Amazon Web Services – long considered the gold standard for web-scale cloud services – has begun a very focused effort on driving growth in their customer base from enterprise audiences.  One of the major challenges that Amazon, and all public cloud vendors, will face is increasing pressure to demonstrate TCO in the face of incumbent internal IT opposition.

So let’s take a look at how TCO is being calculated by public cloud vendors today, and identify some of the critical limitations of applying an on-prem costing model to the on-demand world.

How Amazon calculates TCO today:

Amazon has developed and published a TCO tool (http://www.awstcocalculator.com/) designed to help organizations better understand the financials behind their IT infrastructure, and how that would compare to an equivalent AWS configuration.  The challenge behind this, and in fact the challenge with all traditional enterprise ROI calculations, is the complexity of enterprise cost models.  The transition to more modern shared services infrastructures exacerbates these issues by intermingling the various resources and costs.

AWS has invested considerable effort in building out a default cost structure that can be used for the TCO calculations, along with a set of assumptions about how enterprise hardware is architected and deployed.  The result is a fairly simple-to-use tool that lets a user enter some basic information about the configuration of an app and get a baseline cost comparison against the equivalent running in Amazon.

However, while the tool is quite simple to use, the logic underpinning it leaves a lot to be desired.

Specifically:

  1. The tool assumes a discrete, complete, comprehensive stack for each application that you are conducting an ROI on, with no re-use from existing resources.

  2. It assumes that you can fully eliminate the cost of the underlying hardware by switching the app to Amazon.

  3. The tool assumes a 3 year ROI, starting from today – it doesn’t allow for partial amortization.

These assumptions mean the tool is only useful if you migrate 100% of your apps to Amazon and can retire the underlying infrastructure completely. The net result is that this TCO calculator is really only applicable to applications hosted on hardware that is end-of-life and subject to retirement, and only if you plan to migrate all apps on that infrastructure.

The Limitations of TCO for on-demand environments

The purpose of this document isn’t to dispute the math behind this analysis, but to illustrate that the traditional approach to TCO doesn’t work for cloud.  The challenges lie in assuming that you need to acquire an entire enterprise stack to run this app, and only this app, and that you can remove the capitalized cost on migration.  In most modern environments, enterprise IT has already acquired data center real estate, power, servers, storage, virtualization and the rest of the necessary underlying resources.  When new workloads result in increased underlying hardware requirements, those likely won’t require new acquisitions to support them.  Even if incremental investments were required, it would be exceedingly difficult to allocate the direct costs of the incremental hardware to a single app since the hardware would be dedicated to a resource pool and used across multiple apps benefiting them all. Additionally, those resources that were acquired would likely only be fractional incremental server investments rather than a full enterprise stack.

More generally, this points to a potential issue with cloud TCO in general.  The “standard” method of conducting TCO analysis in the enterprise is simple – determine the hardware (and other) resources that the application uses, sum the costs of those resources, add labor, and present your results. This simply doesn’t work for cloud – you consume, you don’t acquire.

What’s next for TCO modeling?

The What’s Next is TCC – Total Cost of Consumption.  TCC takes the cost of the shared services stacks as an input, and then quantitatively measures the real-time consumption of each application, enabling a precise measurement of the cost of a given app. If you know what the stack costs, and how much of it each app uses, the math is simple, transparent, and defensible.  TCC is a process that abstracts the underlying hardware from the app and provides an apples-to-apples comparison, de-politicizing the procurement decision (to cloud, or not to cloud).

Applying Utility Economics to the Equation 

There is a better way.  The solution is simple – think cloudy thoughts.  What do I mean by that?  Simple – when you price out a cloud instance, you aren’t reverse-engineering which hardware resources you are using and trying to assign a cost to them.  What the supplier does is determine how many of which types of instances they can run across an entire location or environment, assign a dollar cost to them, add a profit margin, and then price them per unit.  Fundamentally, this is a utility thought, with the instance as the unit of measurement.  This model doesn’t care what the instance runs, or on what actual physical server it runs.  We only care that it runs there, and that while running it uses a set amount of resources to which we can generically assign a cost.  This is the basis of a utility economics approach to infrastructure planning.
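A rough sketch of that provider-side “utility thought” follows, with entirely hypothetical figures and the instance as the unit of measurement:

```python
# Rough sketch of the provider-side pricing logic described above:
# figure out how many instances a location can run, load in total cost,
# add margin, and price per instance-hour. All numbers are hypothetical.

location_monthly_cost = 500_000        # facility, hardware amortization, power, labor ($)
instances_per_location = 20_000        # how many standard instances the location can host
target_margin = 0.30                   # desired profit margin
hours_per_month = 730

cost_per_instance_month = location_monthly_cost / instances_per_location
price_per_instance_hour = cost_per_instance_month * (1 + target_margin) / hours_per_month
print(f"Price per instance-hour: ${price_per_instance_hour:.4f}")
```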

There are two fundamental elements needed to update the traditional TCO approach – which, as the acronym indicates, focuses on ‘ownership’ – for a virtual world.

Specifically:

  1. A standardized unit of consumption and capacity measurement that enables us to quantify infrastructure from a utility perspective, greatly simplifying the analysis

  2. A ground-up methodology that abstracts the hardware complexity from the application TCO, enabling more precise and standard comparisons

Conclusion 

Traditional enterprise IT ROI/TCO calculators simply don’t provide enough granularity and flexibility in the new shared services / cloud world.  “Ownership” is being replaced by “Consumption”. Enterprises, and enterprise service providers, need to adopt newer, more utility-like models in order to create credible savings and TCO calculations.  These models must include a standard unit of measure for IT and enable the abstraction needed for more precise comparisons. Additionally, with the cost of migration falling as a result of the cloud model, it becomes increasingly important to provide ongoing justification for (a) the original migration decision and (b) the ongoing decision to continue with a given provider.  Adopting a true utility approach, including a standard unit of measure provided by an impartial 3rd party, is the best way to accomplish this.

UCX, The Universal Compute Xchange, Launches Beta for Trading Cloud

Chicago, IL, November 4 (GlobeNewswire) – UCX announces the beta launch of the Universal Compute Xchange.  The launch of UCX builds on the news earlier this year that the CME Group (Chicago Mercantile Exchange) signed a definitive agreement to develop an IaaS exchange.

UCX, an innovative new financial market, defines a new asset class of exchange-traded products that address the risks of the digital generation, with the support, assistance, and backing of CME Group. UCX’s initial benchmark contract is based upon 6fusion’s patented Workload Allocation Cube (WAC™). The WAC represents the first commercial standard to quantify IT infrastructure supply and demand, and creates the basis by which compute resources may be measured and transacted.  Organizations can now engage in price discovery and trade IaaS resources using standardized WAC financial products from a centralized, transparent market.

“Today’s global capital market structure enables corporations to reduce their risk and exposure on almost every aspect of their balance sheet, from energy to interest rates, with the exception of being able to hedge their internal and external IT infrastructure expenditures, which is one of the largest and fastest growing portions of their balance sheet. By providing the ability to trade IaaS resources, using WAC financial products, corporations can reduce their financial IT infrastructure exposure, create capital efficiencies and unlock a portion of the balance sheet that has been previously locked.” says Adam Zeck, Founder and CEO of UCX.

“451 Research’s Market Monitor service estimates the combined IaaS and PaaS market to be worth over $30 Billion by 2018, a compound annual growth rate of 29%,” says Owen Rogers, Cloud Economist and Senior Analyst with 451 Research. “With so much demand for cloud services forecast, new channels to market such as cloud exchanges can help service providers grow their markets as well as assisting enterprises in procuring and managing resources more efficiently.”

UCX will be accepting applications for Beta participants starting November 15.  Selected participants will represent a select group of cloud service providers, enterprise buyers, and experienced brokers/traders.  All participants will go through a rigorous application and evaluation process before being invited to trade on UCX. Beta trading is expected to commence in January 2015, with the full launch of the open exchange expected in the first half of 2015. Interested parties should visit our website to learn more about being considered for the Beta.

“UCX’s vision is to bring innovative products to the market that enable both supply and demand to manage risk and become more efficient,” says Zeck. “UCX is where the world will trade the cloud.”

For more information please visit UCX.

Press inquiries please contact Jocelyn DeGance Graham at jocelyn@ucxchange.com

About UCX

UCX, Universal Compute Xchange, is defining a new asset class of exchange-traded products, with the support, assistance and backing of CME Group (Chicago Mercantile Exchange), that address the needs and risks of the digital generation. UCX, the leading global exchange for trading infrastructure as a service, IaaS, “The Cloud”, has licensed the patented Workload Allocation Cube (WAC™) metric for measuring IT infrastructure usage to create the benchmark WAC financial contract. UCX enables buyers and sellers to engage in price discovery and trade standardized WAC financial contracts to reduce their financial IT infrastructure risk exposure while increasing operational agility and market efficiency, from a transparent, centralized marketplace. UCX is where the universe comes to trade.

For more information visit http://ucxchange.com/ and follow us on twitter @ucxchange

About CME Group

As the world’s leading and most diverse derivatives marketplace, CME Group (www.cmegroup.com) is where the world comes to manage risk.  CME Group exchanges offer the widest range of global benchmark products across all major asset classes, including futures and options based on interest rates, equity indexes, foreign exchange, energy, agricultural commodities, metals, weather and real estate.  CME Group brings buyers and sellers together through its CME Globex electronic trading platform and its trading facilities in New York and Chicago.  CME Group also operates CME Clearing, one of the world’s leading central counterparty clearing providers, which offers clearing and settlement services across asset classes for exchange-traded contracts and over-the-counter derivatives transactions. These products and services ensure that businesses everywhere can substantially mitigate counterparty credit risk.

The Globe Logo, CME Group, CME, Globex, CME Clearing Europe and Chicago Mercantile Exchange are trademarks of Chicago Mercantile Exchange Inc.  All other trademarks are the property of their respective owners. Further information about CME Group (NASDAQ: CME) and its products can be found at www.cmegroup.com.

 

Paving the Road to an Open Market for IT Infrastructure Services

by John Cowan, Founder and CEO of 6fusion

6fusion’s goal is to accelerate the development and advancement of an open and organized market for IT infrastructure for the betterment of buyers, suppliers and brokers in the market.  On April 14th, 6fusion and the CME Group announced an exclusive multi-year strategic and tactical collaboration to build this market and materialize the ultimate vision of 6fusion.  6fusion’s co-founder John Cowan explains the details of his company’s deal with the CME Group, what comes next for the cloud infrastructure market, and how technology markets will benefit.

Delano Seymour and I incorporated 6fusion in 2008.  While building the UC6 prototype software platform under the radar in 2009, I published this statement:

“If computing is to follow the commodity path of electricity, achieving a similar level of ubiquity and pervasiveness, it must then have a single unit of measurement that transcends politics, production, language and proprietary invention.”

6fusion was founded and funded on the thesis that economic innovation was going to be just as important, if not more so, than technical production in the development of IT as a true utility.   We didn’t foresee a business problem to solve.  We foresaw a market problem to solve.

Allow me to illustrate.

Picture a bucket.  Then imagine pouring some water into that bucket.  Now, if I asked you whether you would like to pay me for the bucket or how much water you have in the bucket, what would you say?  Ten times on ten tries you are going to tell me the obvious answer:  You would rather pay me for how much water you have in the bucket.  And it’s the right answer.  Why?  Because we as consumers understand implicitly the value of consumption economics and risk.

Paying for what we consume is the fundamental economic principle in the procurement of EVERY utility service.  We don’t pay for gas by the size of our gas tank; we pay for gas according to how much we pump.  We don’t pay for electricity by the appliance; we pay for electricity according to how much electricity the appliance consumes.  I think you get the picture.

IT has never worked this way in the modern era.

Since the advent of the Client/Server model, we’ve been trained to accept the “box” as the logical billing unit of IT infrastructure.  We buy a box and it has inside some processing power, some memory and disk storage.  Virtualization emerged in the industry and solved for things like more efficient use of the box.  But the logical billing unit was still a box.  Only we called it a “VM”.  Then, cloud computing came along.  The cloud solved for important technical milestones like resource elasticity and self-service access to giant clusters of infrastructure.  But, lo and behold, the ‘cloud’ never solved the logical billing unit issue.  We are still buying a box – only now the accepted nomenclature is an “instance”, as coined by AWS.

But why is all of this problematic?  Why can’t we just continue to do what we know?  Why can’t we stick with “boxes”?

The answer is simple.

Boxes are proprietary definitions of resource allocation that are not standardized and so are not portable amongst vendors.  They lack the fundamental quality that defines a utility.  They are constrained to a single vendor and thus represent one of the biggest barriers to the massive, tectonic shift to the true cloud model everyone is waiting for in the industry.

There are two ways you can solve this problem.  You can try to force everyone to pick one type of box – a strategy rife with political obstacles, vendor pissing contests and endless arguing among standards bodies.  Or, you can eliminate the box altogether.

6fusion is eliminating the box.

Enter the Workload Allocation Cube.

The Workload Allocation Cube is a mathematical algorithm that Delano Seymour and I created to fix a major flaw in the concept of a one-to-many equation for compute, network and storage services:  There was no universal meter.  And without one, the model fell apart at scale.

The WAC equation compares real-time utilization (workload) against a fixed baseline (allocation) covering six vectors (cube) – CPU, Memory, Storage, Disk I/O, LAN I/O, and WAN I/O.  By establishing a statistical relationship between each of the vectors, weighted by the cost of production as constrained by physical capacity, we can produce a remarkably precise representation of workload consumption, reflected as a singular unit value (a WAC Unit).
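The WAC algorithm itself is patented and not reproduced here, but the following sketch illustrates the general shape of a weighted, baseline-normalized, six-vector consumption score. The weights, baselines, and combination rule are invented for the example and are not 6fusion’s formula:

```python
# Illustrative only: a weighted six-vector consumption score in the spirit of
# the WAC description above. Weights, baselines, and the combination rule are
# invented for the sketch -- the real WAC algorithm is 6fusion's patented method.

measured = {   # real-time utilization of one VM over an hour (hypothetical units)
    "cpu_ghz": 2.4, "mem_gb": 6.0, "storage_gb": 120.0,
    "disk_io_gb": 8.0, "lan_io_gb": 3.5, "wan_io_gb": 0.8,
}
baseline = {   # fixed allocation baseline per vector (hypothetical)
    "cpu_ghz": 1.0, "mem_gb": 1.0, "storage_gb": 100.0,
    "disk_io_gb": 10.0, "lan_io_gb": 10.0, "wan_io_gb": 1.0,
}
weight = {     # relative cost-of-production weights (hypothetical)
    "cpu_ghz": 0.30, "mem_gb": 0.25, "storage_gb": 0.15,
    "disk_io_gb": 0.10, "lan_io_gb": 0.10, "wan_io_gb": 0.10,
}

# Normalize each vector against its baseline, weight it, and sum into one unit value.
consumption_units = sum(weight[v] * (measured[v] / baseline[v]) for v in measured)
print(f"Consumption this hour: {consumption_units:.3f} units")
```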

With the announcement by 6fusion and the CME Group, we are on the path to that vision becoming reality.  Nothing short of launching an open market for cloud computing infrastructure to disrupt the world of IT like never before will suffice.  We are here to change the game. Forever.

To accomplish this goal we will create a new entity (check here for updates) whose job is to foster and develop the IaaS Spot Exchange as the first step to a financially settled market for IT infrastructure services.  This new entity will use 6fusion software and the CME Group’s trading platform to facilitate the trading of contracts between buyers, suppliers and brokers.

6fusion’s contribution to this effort is contract settlement.  Our software platform, UC6, is being wired into the trading platform so that contracts created can be metered and settled between the contracting parties.  The development of the Spot Exchange could not succeed without physical settlement and the ability to accomplish that transparently, and across any underlying hardware or software stack, is what our software does.

With the emergence of a legitimate open market backed by the most sophisticated exchange on earth, the logical next question is how this market will get organized, who stands to win and who stands to lose.

To inform the discussion 6fusion has created a visual constellation of the market.  Think of this as your astrological guide to the open market.  

There will be two types of contracts traded on the initial spot exchange.  One contract will be intended for highly regulated, compliance-burdened IT organizations.  The other contract will be for users that are able to trade off the benefits of managed services in return for lower prices.

The world through the lens of the open market is really only made up of three technology stacks: VMware, AWS, and OpenStack.  These technology stacks also conveniently define markets.  Trading velocity will be organized initially within these markets and the industry’s combatants will compete for transaction volume and velocity.

As time passes there will be two important developments that reshape this constellation.  The first is the maturation of cloud services in the eyes of the Enterprise buyer and the second is technological interoperability between the competing technology stacks.  Both are inevitable.  And the effect of this will be a gravitational pull to equilibrium.  The market will see AWS compete and win more business in the core of traditional enterprise IT and it will see private cloud stacks emerge to pursue the market pioneered by AWS.

All of this is good news for buyers hoping to gain transparency and leverage in the new era of procurement and outsourcing, as well as for the suppliers that are scratching and clawing to sell more of what they do, faster.  We’re just here to help the process along…

For Sale: Antique Cloud Instances

by Rob Bissett

Late in the post-lunch doldrums of Friday, while I was surfing “The Twitter” looking for something to keep me awake, I tripped across this tweet from @thecloudnetwork: “For Sale: Antique Cloud Instances http://ift.tt/1vAvtHm”.

I thought this was 1) hilarious for a whole lot of super nerdy reasons; and 2) telling, because I believe the whole notion of cloud pricing today is skewed (well, skewed is better than what I was going to say), and this is a wicked example of it. The comment about non-liquid markets really hits home.*

[*For those reading the Cliff Notes -- the author refers to trying to sell his Reserved Instances in the Reserved Instance Marketplace, and since there are a ton of sellers and no buyers, it is a non-liquid marketplace (non-liquid = no money flowing...)]

Some history: Companies adopted Amazon EC2 in droves, and discovered that after a while, it gets expensive. Amazon began to offer a method for those customers to lower costs in return for committing to a term. They called these Reserved Instances (RIs). They are for 1 or 3 years, and are paid up front. They represent approximately 50% price savings over the standard prices. (This is a great example of forward pricing — i.e., pricing for what your costs will be, not what they are today. Something enterprises don’t do today, but should. Topic for another blog….)
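For a sense of the trade-off, here is a small illustration with made-up prices (real AWS pricing varies widely by instance type and region): the upfront commitment only pays for itself if the instance actually runs for a large share of the term, which is exactly why unused RIs end up as losses.

```python
# Hypothetical illustration of the reserved-vs-on-demand trade-off described
# above. Prices are invented; real pricing varies by instance type and region.

on_demand_hourly = 0.10          # pay-as-you-go hourly price (assumed)
reserved_upfront = 1_314.0       # 3-year upfront payment, ~50% of the on-demand total (assumed)
hours_3yr = 24 * 365 * 3

on_demand_total = on_demand_hourly * hours_3yr           # ~ $2,628 over 3 years
breakeven_hours = reserved_upfront / on_demand_hourly    # hours of use where the RI pays off
print(f"On-demand 3-year total: ${on_demand_total:,.0f}")
print(f"Reserved instance breaks even after ~{breakeven_hours:,.0f} hours of use "
      f"({breakeven_hours / hours_3yr:.0%} of the term)")
```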

As this blog correctly points out – there are a lot of people that bought these RIs with the best of intentions, but then either changed their minds and didn’t need them, or moved on to other instance types to get more performance or whatnot. I mean come on — the cloud isn’t Enterprise IT. We don’t set stuff up and just leave it forever. Cloud is all about flexibility and change….

So what’s the problem? There are two: First, why should you have to commit to a specific instance configuration to benefit from future pricing in the cloud? If your vendor wants you to commit, why would they lock you into a non-liquid market if your business changes? If you are going to cloud, you are probably doing so for financial reasons, but also to take advantage of economies of scale, flexibility, and the ability to adjust on the fly. So why are you forced to make a commitment to a configuration to take advantage of future pricing? Why can’t you just commit to a term and spend amount and retain the configuration flexibility?

The other problem with this model is lock-in. What if your business changes? What if you don’t want to use that vendor anymore? Granted that if you commit to a spend with a vendor they aren’t going to give you a refund if you quit… I mean hey — I’m a vendor, I get that. But why can’t you just unload your commitment to someone else? At least to recover some percentage of your investment. I mean really, could you imagine the loss on the IT investments at MySpace when Facebook hit it big? I bet they could really have benefited from a 20% return on the IT investment they had sitting on the floor.

Amazon’s answer to this was the reserved instance marketplace — a mostly well intentioned way to allay those lock-in fears by telling people, “look, you can resell!”. But the reality is they haven’t marketed it, and don’t invest in driving adoption of it for a lot of reasons, not the least of which is that they have already been paid for those instances, and if they don’t get re-used they can resell the space in spot instances.

The net result is a tremendous lack of liquidity. There are no buyers, driving prices close to $0. Why? Well, you have to be an Amazon customer already to use it. And you have to want RIs. And, well, you probably already have some if you want them.

Did you read the Amazon terms of service? Check out 8.5(d). They actually went ahead and made it against the “law” to resell or sublicense the service. So in fact, the account is subject to termination for trying to sell the instance. This pretty much eliminates any possible secondary market from forming. I mean, Craigslist is probably a much better vehicle for reselling reserved instances than the Reserved Instance Marketplace.

So what’s the solution? We need a real marketplace for IaaS. A place where users can go and purchase some “cloud.” A market that has a spot price for pay as you go services. A market that provides forward pricing based on a term/volume commitment, but still offers configuration flexibility. And finally, a market that will fully and openly support hedging risk by enabling a secondary resale market.

This type of market will enable organizations to make appropriate, risk managed moves into the cloud using financially sound management — and make IT behave even more like the utility it is supposed to be.

Rob Bissett is Chief Product Officer of 6fusion.

 

Is IT a Utility or Commodity?

Whitepaper by Rob Bissett

Given 6fusion’s mission to disrupt the traditional allocation/configuration-based financial model in IT, it is inevitable that we get drawn into many wide-ranging discussions regarding utilities, commodities, and the markets that evolve around them.  One discussion that keeps coming up is whether IT is a utility or a commodity.  Invariably, when we begin down this path, the conversation becomes very complex (and heated) and ends in a very unsatisfactory way, largely as a result of a lack of clear understanding of what is meant by these terms and how they will likely apply to IT.

In this paper, we’ll dig into the meaning of utilities, the IT-as-a-Utility approach, and explore the relevance to you as an IT budget holder and decision maker.  We will then bring commodities into the thread, discussing the similarities as well as the critical distinctions between the two, and explore the evolution of the IaaS, or “Cloud” Marketplace.

What’s a ‘Utility’?

The starting point of this needs to be a deeper understanding of “utility” as it applies to IT and computing in general.  In his 2012 whitepaper “Metered IT: the path to utility computing” (commissioned by 6fusion), Dr. Paul Miller builds upon Michael Rappa’s original research from the IBM Systems Journal (2004), providing us a useful definition and starting point for utility as it applies to computing, identifying “…six characteristics common to utility services, from water and power to radio, television, and internet access:

  • Necessity. The extent to which customers depend upon the service on a daily basis
  • Reliability. The presumption that, according to Rappa, “temporary or intermittent loss of service may cause more than a trivial inconvenience”
  • Usability. Simplicity at the point of use; for example, users do not need to know how electricity powers lights at the flick of a switch
  • Utilization rates. Coping with peaks and troughs in customer demand, using for example, innovative pricing models that incentivize an even spread of demand
  • Scalability. Benefits of economies of scale, with larger providers typically realizing lower unit costs that can be passed on to customers
  • Service exclusivity. Government intervention that encourages the emergence of a monopolistic provider may be a benefit when utilities have significant setup costs or a particular requirement for scale”

Rappa also concludes that a business model for the provision of utilities is “based on metering usage and constitutes a ‘pay as you go’ approach. Unlike subscription services, metered services are based on actual usage rates.”

This definition gives us a good look at a service that is highly reliable, for which unavailability is problematic, which scales as needed, and which is paid for in a “pay as you go” model.  This is wholly consistent with what we have come to expect from computing in most modern businesses – particularly as it applies to cloud.  To close this thread out, utility is often cited as one of the key criteria used in defining “cloud”.

So utility, as it applies to cloud, refers to the model in which IT services are delivered, consumed, and billed.  The definition doesn’t apply to the types of services delivered.  The argument about IT-as-a-Utility and applying utility financial concepts to that technology has nothing to do with commoditization, differentiation, quality of service, or any of the other arguments about the types of services delivered by the cloud vendors.  The argument about commoditization is a completely different discussion.

Therefore, a cloud (or any IT infrastructure) can be a utility, without being a commodity.

Is IT a Utility?

As a result of this train of logic, is it reasonable to look at cloud computing, or any IT service offered in this way, and just assume that it is a utility?  The reasonable answer SHOULD be yes… but it isn’t.  One of the key characteristics Rappa cited is holding cloud back from being a proper utility.  Today, with IT, we are still using a subscription-based services model, not a metered model based on actual usage rates.

The traditional model in which on-demand services (cloud) are currently offered is nominally “pay as you go”, which meets the utility definition only on the surface, and here the language becomes tricky.  The challenge is in defining what you mean by “pay as you go”.  For virtually every provider of on-demand services today, this means “pay as you contract”, or “pay as you configure”.  This is in actuality a subscription-based model.  That is, when I provision a service, say a virtual machine instance at a cloud provider, I am given the virtual machine and billed for 100% of the resources that I provisioned, regardless of how many of those resources are actually consumed.

This is somewhat analogous to connecting a server with a maximum power rating of 1000W to a power circuit, and being billed for 1000W of power, regardless of the fact that the server may only be consuming 100W at any given time.  Heck – you would pay for 1000W even if the server was powered off but still connected.   This is the point made in the quote above as it applies to metering usage, and billing based on that usage. The fact that they are considered “pay as you go” simply refers to the fact that billing commences when you provision, and stops when you de-provision, which you can do at any time without penalty.  This DOES NOT make it a utility.
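The power analogy reduces to a couple of lines of arithmetic. The sketch below uses a hypothetical utility rate to show how far apart the two billing bases can be:

```python
# Tiny illustration of the 1000W server analogy above: billing on what is
# provisioned versus billing on what is actually drawn. The rate is hypothetical.

provisioned_watts = 1000      # nameplate / provisioned capacity
actual_watts = 100            # average actual draw
hours = 730                   # roughly one month
rate_per_kwh = 0.12           # hypothetical utility rate ($/kWh)

pay_as_you_configure = provisioned_watts / 1000 * hours * rate_per_kwh
pay_as_you_consume = actual_watts / 1000 * hours * rate_per_kwh
print(f"Billed on provisioned capacity: ${pay_as_you_configure:.2f}/month")
print(f"Billed on metered consumption:  ${pay_as_you_consume:.2f}/month")
```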

How do we end configuration-based economics and create IT-as-a-Utility? 

Create a standard unit of measure and meter it.

The challenge for IT organizations in approaching IT-as-a-Utility is to shift from traditional configuration-based billing models to true consumption billing models – shifting from “pay as you configure” to “pay as you consume”.  The issue holding back a larger industry shift in this direction is the definition of a standard unit of consumption.  If you consider virtually any utility that you consume today, they all have a standard unit of measurement that defines the utility, regardless of the vendor that you purchase it from.  This is a challenge that 6fusion has chosen to attack directly with the development of the Workload Allocation Cube (WAC™).  The goal of the WAC is to define a unit of consumption that can be used across providers, technologies, services, and locations to measure consumption of IT resources, and provide a basis on which utility billing, utility financial models, forecasting, and economics principles can be applied.

What’s a commodity?

Before we define it, the thing that stands out to me in discussions with an IT audience is the perception that utility is synonymous with commodity, and that all commodities = bad.

A commodity is often defined as a good or service for which there is no qualitative differentiation.  You can think of a commodity as a class of goods for which there is no perceptible difference between providers, offering competition solely on price.  We hear this term a lot, applied to services and physical products both. Some excellent examples of commodities lie in natural resources and food – we treat oil, gas, coal, and other products as a commodity.  In reality things like tomatoes, rice, wheat, and most meat are also commodities.

There are many subtleties and textures to consider when digging into commodities – they tend to be much more complex than most consider at first glance.  The general rule of thumb that I apply to this is that if the differences don’t matter – buy the commodity product. If they do – don’t.  What does that mean?  Take tomatoes:

Tomatoes are generally all the same.  For the most part, they look the same, taste the same, and cost the same.  In most cases they are sourced locally when available, and not when not.  For most purposes and for most people who plan to cut them up and cook them, it makes very little difference where they come from, what type they are, and who grew them.  Thus the commodity version of the product will generally yield the lowest cost and most steady supply, and as a result will be the most used.  (they are most applicable for most people, most of the time).

Now, depending on your business, you might have some specific interest in the kind of tomato, but not care who grew them or where they came from.  Think of tomato sauce – you probably want to use plum tomatoes.  Since you are a scale provider, you need lots, at low cost, on a continuous basis.  Since there are a number of businesses like you, the industry subdivided the commodity “tomatoes” into a subgroup, “plum tomatoes”.  These cost slightly more than just “tomatoes” since there are fewer of them and they are more specialized, but still less than negotiating with each supplier directly.

However, if you are a farm-to-table chef making a nice premium-priced caprese salad special, you probably aren’t interested in using hothouse-raised beefsteak tomatoes from Chile.  You would like an organically grown, locally sourced Roma tomato.  Why the difference?  The application – you are looking for something specific, and the differences between that specific product and the mainstream commodity are important, measurable, and something for which you are willing to pay extra.  This last group is not a commodity product – it is something specific to the task, for which you contract outside an organized market with a specific supplier, and pay the price associated with that service.

Why commodities aren’t ‘bad’

The critical lesson here is that the establishment of a commodity market doesn’t mean that all services or products are the same, or that there is no market for differentiated products and services.  What it means is that there is a minimum definition of a “standard” product which will meet the needs of the majority of the market, and which can be sold without differentiation.  This creates massive economies of scale, enables organized markets to drive further scale, and drives down the cost of goods sold, making those services as inexpensive for consumers as possible while providing consistent profits for the suppliers.  In many cases, a particular commodity is broken down into sub-classes.  We used tomatoes above, but this is true for things like gasoline (premium, regular, etc.), oil (West Texas Intermediate, North Sea, etc.), and others.

Producers of non-commodity products, or producers of products in markets that haven’t yet commoditized, will often paint the concept as bad – and this is understandable if you are fairly non-differentiated and facing imminent destruction of your margins.  For the market as a whole, though, this shift is typically a good thing.  Commoditization drives up volumes as buyers buy more of the “standard” product.  Because there is less friction in the market, more buyers buy more, and the vendors face significantly lower sales and marketing costs, which is good for suppliers.  There are three other benefits of commoditization that I want to highlight – supply chain economies of scale, parallel innovation, and supply chain innovation – which I will focus on in the next few sections.

Computing and Commoditization

Now – to move to the touchy subject – is computing a commodity?  Will cloud commoditize?  These debates are raging on throughout the industry today, and, without giving my opinion away, I think it is really instructive to begin to take some lessons from other markets and apply them here to really understand what we are talking about before we draw any conclusions.

When we talk about commoditization, what are we talking about?  This term gets bandied about quite regularly, but without clear definitions it isn’t that useful.  When we talk about the commoditization of IT, or commoditization of cloud, I prefer to focus on something more specific – the commoditization of IaaS services: the instances that you get – computing, memory, I/O, and base storage.  I think this is important as it is much more specific and narrower in scope than just IT or Cloud.  The question then becomes whether we can define a generic enough standard that people will buy it without regard to who is providing it or what kind of servers it is running on.

I think the answer to this question is yes – at least with respect to the hardware it is running on.  We are seeing this already as virtually no cloud provider publicly discloses data center owners, hardware manufacturers, etc.  Now, this is where it gets much more heated – things like latency, SLA, and other underlying definitions.  I think we can all agree that a webscale provider with no SLA provides a service that, for most people, looks quite a bit different than a high-SLA provider running on premium-performing hardware.  That being said, if webscale quality and performance is sufficient for your needs, then getting the premium enterprise product is probably a bonus, and I wouldn’t think most people would complain assuming the price was the same – again, defining the minimum standard, not the maximum standard, is the critical component.  Further, though, it may be possible that these types of services can be bucket-ized to define sub-group commodities within the larger family of “cloud” commodities, providing high-performance and webscale buckets.  The question comes down to (a) can we define the minimum standard and (b) will the market treat the providers in those buckets as if they were the same.  These, I think, are the relevant questions.

Finally – there will be many users who need something specific, be it latency, location, regulatory approval or otherwise.  Those are the specialist chefs who will be buying specific products from specific providers.  I think given this framework it is reasonable to think that a cloud commodity will likely arise, and that it will be good for the market.  I think it is also clear that there will be plenty of non-commoditized services that will be demanded by the market to address specific requirements.

Where this gets interesting will be those three topics I mentioned earlier – supply chain economies of scale, parallel innovation, and supply chain innovation.  These will have a major impact on how the cloud / IT market evolves as we move towards commoditization.

Supply Chain Economies of Scale – as the commodity providers ramp up acquisition of the building blocks of cloud computing, the suppliers of those building blocks will achieve increasing economies of scale.  This lower cost model for things like processors, boards, switches, etc will tend to benefit other constituents in the markets, be they enterprises, bespoke service providers, or others. This can’t help but benefit everyone.

Parallel Innovation – the current market leaders in cloud services got there through innovation.  While the services that they started with may trend towards commoditization, it probably isn’t reasonable to think that they will change their corporate culture from innovation to commoditization. I also wouldn’t expect them to walk away from the scale business (certainly not in Amazon’s case, as this is what they were built for).  What I would expect is an increase in investment in parallel innovation – that is, the development of premium add-on services that are used in parallel to, or in conjunction with, the main commodity service.  We are seeing this already in the space – things like firewall, load balancer, database, desktop, and other services that run on top of the commodity (IaaS) service.  These premium services drive higher margin and return entrepreneurial profit over the commodity service, while still driving increasing demand for the main business.

Supply Chain Innovation – The final market effect that I would anticipate seeing as a result of this shift is what I refer to as supply chain innovation.  That is, as the industry moves to supplying a commodity service, the supply chain will shift to creating non-commodity tools and products that enable the commodity suppliers to produce commodity products more efficiently.  We are seeing this market effect as well.  Traditionally the server vendors built products designed for enterprise with high manageability and flexibility features etc.  The cloud specific vendors, early on, decided not to use those servers as they don’t require flexibility or manageability – they require high volume and low prices.  They turned to the ODM vendors to produce high-volume, low-cost systems with only the features required by the cloud vendors.  We are now seeing the major hardware providers go back to the drawing board and re-design their product lines to deliver extra value to cloud providers enabling them to deliver more, and better IaaS services at better cost models.  Effectively reverse commoditization in the supply chain – using non-commodity hardware to improve the delivery of commodity cloud services.

In any case – whether or not we see true commoditization in the cloud space remains to be seen – and I am sure the arguments will continue.  What is critical to understand in this debate however is what we are talking about commoditizing, and what the impacts of those moves will be.

Hopefully I have been able to make clear that we are seeing the utility model take hold in the IT space now as people shift to on-demand, pay-as-you-go models.  This shift will reach its zenith when “pay as you go” shifts from “pay as you configure” (pay as you subscribe) to “pay as you consume”.  At that point, users will have true visibility into the cost models of their applications, and IT organizations will have the tools necessary to baseline, budget, and forecast more effectively.

Will this mean we have commoditized?  No – it doesn’t. The industry may move in that direction, but the adoption of utility models, and the commoditization of cloud are most certainly not linked.

Conclusion

This paper has laid out (hopefully) a fairly compelling argument about the differences between utilization and commoditization.  I believe both are inevitable in computing, but that they will happen at different rates, and as a result of different market actions and drivers.

The utilization movement is simply good for enterprises.  We are seeing early market evidence of this among large web-scale providers and enterprises.  The current challenges of managing costing, consumption, and procurement across many vendors and technologies are becoming overwhelming for large enterprises, and some leading-edge thinkers are already moving their suppliers over to utilization-based billing and contracting methodologies – because this is simpler and leads to less wastage, but also because it maps expenses directly to revenue and better enables a company to quantify its market power and negotiate discounts with suppliers.  An example of this is our recent partnership announcement with Switch SUPERNAP.  Switch sees the opportunity to develop its C.U.B.E ecosystem by helping its enterprise customers quantify their compute needs via the WAC, and to use that data to interact with the suppliers in the SUPERNAP ecosystem.

This trend towards utilization will only grow over time as the visionaries see the benefits and pass those along to the thought leaders in the industry.  At the same time, the commoditization effect will begin – and through commoditization, utilization will accelerate, as all commodities are sold as utilities.

The commoditization ship has sailed – despite the best efforts of naysayers in the industry.  We have seen two commodities exchanges (the two biggest commodity players in the world, at that) announce efforts to begin trading IT infrastructure services as a commodity.  This will be a tremendously valuable development for the industry.  These developing markets will help centralize the currently highly fragmented spend in the space, and through that centralization will help define standard “classes” of infrastructure.  These class definitions will clarify the picture for buyers, making it simpler for the average buyer to compare suppliers.  The markets will also establish price baselines.  These baselines will set the floor for pricing and help buyers understand their market power, and what they should expect to pay as a result of it.  Finally, commoditization will drive the major players to innovate – both to create margin on the commodity service and to develop differentiated add-on services – which will benefit all players in the space while creating a healthy market for specialty services.

These are all good things for the market.  There is benefit to both suppliers and buyers in helping organizations turn the corner from the “old” way of doing IT – large, multi-million dollar monolithic projects with lots of waste and long lead times – to much more efficient, pay-as-you-consume hybrid projects that deliver better economic performance for buyers.  IT-as-a-Utility and IT-as-a-Commodity, while independent of each other, are the industry’s inevitable future.  Organizations, and especially infrastructure suppliers, should embrace these models and develop business strategies that keep pace with the agility of our now on-demand world.

Our goal?  Help enterprises (and providers) evolve IT by moving the procurement and management of IT services to that final step: a true utility that best serves the consumer.

GigaOm – Yes, IT can be sold like a barrel of oil

SUMMARY: Commodity traders help set prices for oil and wheat, allowing buyers to hedge their costs. The same thing is poised to happen in the world of cloud computing.

Big companies use commodity contracts to ensure predictable prices for oil, wheat, electricity, metal and other crucial supplies that keep their businesses going. These days, a crucial supply for many companies is cloud computing power — raising the question of whether that too can be bought and traded in the same way as oil or oranges.

A recent partnership suggests the answer is yes, and that we’re heading to a world where companies won’t just turn to Amazon Web Services or Microsoft Azure for cloud services, but to a commodities market that offers the best price, on the spot or in the future, for a range of interchangeable IT infrastructure.

The financial platforms and the raw resource already exist to support cloud as a commodity. So do the people. But the question is whether someone can bring this all together, and overcome some big obstacles that stand in the way.

Cloud computing by the bushel

Earlier this year, a Raleigh, N.C.-based cloud company called 6Fusion signed a deal with the Chicago Mercantile Exchange, the world’s biggest market for commodities and derivatives contracts. If all works out, the deal will mean that buyers and sellers of cloud computing services can do business on a spot exchange and, in a few years, trade derivatives too.

The exchange will be a place to buy hours of “WAC,” a term invented by 6Fusion that stands for Workload Allocation Cube. The idea behind the WAC is to create a standard unit of cloud computing infrastructure that can be bought and sold by the thousands.

Under 6Fusion’s current definition, a WAC hour is composed of six metrics, including ones related to compute, networking and storage, that can be sold at a single price. Here is how 6Fusion portrays a WAC:

[Image: 6Fusion WAC hour]

According to 6Fusion spokesman Ryan Kraudel, the WAC is akin to a watt of power because it provides a standard measure of output, which in turn removes barriers to trading cloud computing as a commodity.

“The fundamental problem no one’s been able to solve till now is ‘what is the barrel or bushel’ [of cloud]? Now, there’s a basis for contracts in the future of infrastructure services,” said Kraudel.
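As a purely illustrative sketch of what such a “bushel” might look like mechanically, the Python snippet below rolls six metered resource dimensions up into a single composite unit-hour. The metric names and weights are invented stand-ins, not 6fusion’s published WAC coefficients.

```python
# Hypothetical illustration of collapsing six metered resource dimensions into
# a single composite unit-hour. Metric names and weights are illustrative
# stand-ins, not 6fusion's published WAC definition.

ILLUSTRATIVE_WEIGHTS = {
    "cpu_ghz": 1.0,        # processor capacity consumed
    "ram_gb": 1.0,         # memory consumed
    "storage_gb": 0.25,    # disk capacity consumed
    "disk_io_gb": 0.5,     # storage I/O moved
    "lan_io_gb": 0.5,      # internal network traffic
    "wan_io_gb": 1.0,      # external network traffic
}

def composite_unit_hours(metered_hour: dict) -> float:
    """Collapse one hour of metered usage across the six dimensions
    into a single, tradable unit count."""
    return sum(ILLUSTRATIVE_WEIGHTS[k] * v for k, v in metered_hour.items())

if __name__ == "__main__":
    sample_hour = {
        "cpu_ghz": 2.4, "ram_gb": 8.0, "storage_gb": 100.0,
        "disk_io_gb": 1.5, "lan_io_gb": 0.8, "wan_io_gb": 0.3,
    }
    print(f"{composite_unit_hours(sample_hour):.2f} unit-hours consumed this hour")
```

The point of the exercise is the single number at the end: once every provider’s output can be expressed in the same unit, contracts can be written against that unit rather than against any one vendor’s instance catalog.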

6Fusion is not the only one proposing such an arrangement. In Europe, a company called Zimory is working with the German exchange Deutsche Boerse to sell cloud computing units.

In theory, the creation of these common metrics means companies can now use forward or futures contracts, based on WACs, to exercise more control over IT costs, which represent a growing percentage of many corporate budgets. Kraudel predicts that IT-intensive enterprises like banks or universities will be among the first adopters.

What this could mean on the ground is that the IT infrastructure of a company like JP Morgan could soon consist of private cloud servers for sensitive data, supplemented by public cloud supplies purchased from an ever-changing roster of third party cloud computing providers. At the same time, such purchases of cloud computing “by the bushel” would also mean lower prices as traders, rather than vendors, start to set the price of key ingredients of IT infrastructure.

Skeptics might note that this idea of cloud computing brokers has been around for a while, but now its arrival finally appears close at hand. Kraudel says a spot exchange for bilateral contracts should be running by the end of the year, and that a derivatives market will be up and running by late 2015 or 2016. But that doesn’t mean, of course, those markets will succeed.

You can build it, but will anyone come?

The idea of WACs, and of a derivatives market for IT infrastructure, is all well and good in theory, but that doesn’t mean it’s actually going to happen.

6Fusion can define WACs and the Chicago Merc can provide a place to sell them, but the plan will only work if a critical mass of buyers and sellers agree they are worth trading. And that could be a challenge.

Unlike a barrel of oil or a bushel of wheat, there is no consensus on what a commodity unit of cloud computing should look like. While 6Fusion has offered a definition, not everyone will accept it, and some will challenge the choice of metrics that make up a “WAC hour.” The task of defining the “cloud bushel” is harder still because the industry is evolving rapidly, and even accepted reference points, like an M3 instance from Amazon, may soon be outdated.

If no one can agree on what to trade, in other words, there will be no trading.

The problem is daunting but not insurmountable and, as it turns out, it’s hardly a new issue in the world of commodities. According to James Mitchell, a former commodities trader at Morgan Stanley, any traded good, no matter how standard it may seem, will be subject to changing definitions.

Mitchell, whose company Cloud Options has advised 6Fusion, points out that oil comes in a variety of standards — Brent Blend, West Texas, etc — and that orange juice contracts include a variety of conditions that let traders adjust the final price based on size, seeds and so on.

The same is likely to hold true when it comes to cloud computing commodities. Contracts for “WAC hour” futures, if the market adopts them, may include adjustment mechanisms for traders to tweak at the end of the deal.

“Everyone hedges against, then trues up against how off-spec it is,” said Mitchell, speculating on what would happen if a bundle of WAC hours didn’t correspond to the exact cloud resources that a buyer had sought to obtain.

“In the truing up process, you might have a disproportionate amount of CPU. If 6Fusion does a good job, they’ll choose a middle ground that doesn’t require a correction.”
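Speculating along the lines Mitchell describes, a true-up might work roughly like the sketch below, where a contracted price is adjusted by how far the delivered resource mix drifts from spec. The spec shares and adjustment factor are invented for illustration and do not reflect any actual contract terms.

```python
# Speculative sketch of a settlement "true-up": the buyer contracted for a
# standard resource mix, the delivered bundle was off-spec, and the final
# price is adjusted. Spec shares, tolerances, and factors are invented.

CONTRACT_SPEC = {"cpu_share": 0.40, "memory_share": 0.35, "io_share": 0.25}
ADJUSTMENT_PER_POINT = 0.002  # price tweak per percentage point off-spec (illustrative)

def true_up(base_price: float, delivered_mix: dict) -> float:
    """Adjust the contracted price by how far the delivered resource mix
    deviates from the contracted spec."""
    deviation_points = sum(
        abs(delivered_mix[k] - CONTRACT_SPEC[k]) * 100 for k in CONTRACT_SPEC
    )
    return base_price * (1 - deviation_points * ADJUSTMENT_PER_POINT)

if __name__ == "__main__":
    delivered = {"cpu_share": 0.48, "memory_share": 0.30, "io_share": 0.22}
    print(f"Settled price: ${true_up(10_000.0, delivered):,.2f}")
```

The better the unit definition sits at the “middle ground” Mitchell mentions, the smaller these corrections become and the less friction there is in every trade.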

Mitchell added that, for now, the biggest impediment to a functioning futures market is that traders and techies are still learning to speak to each other. IT people have a good idea of what a unit of cloud computing resources looks like, but this knowledge is still being translated into standard contract language of a sort that brokers can instantly recognize and trade upon, he said.

800-pound gorillas don’t like to trade

Let’s say the IT buyers and the traders do agree on a common cloud commodity (a WAC or otherwise) and the exchange is up-and-running as 6Fusion promises it will be. We’re still only halfway there since an exchange also needs sellers.

And right now, the cloud infrastructure industry is dominated by a giant called Amazon Web Services that will likely be reluctant to offer up its wares to a commodity exchange. The reason is that commodities, by definition, are interchangeable and sold at a price that no one seller can dictate.

So for Amazon, which is already selling cloud infrastructure at fire-sale prices, a commodities exchange would not only depress prices further, but invite a host of other competitors to replace its branded AWS products with a generic bushel. One way to prevent that from happening is for Amazon, and other big cloud service providers like Rackspace or Microsoft, to simply sit this out and try to ensure the commodities market is never liquid enough to be viable.

6Fusion’s Kraudel acknowledged that Amazon, which declined to comment for this story, would be reluctant to participate, and noted that the company already offers its own on-the-spot cloud pricing as well as a form of futures called “reserved instances.” Still, he thinks the market will be liquid enough anyway.

“Amazon Web Services is an 800-pound gorilla, but there is a very long-tail to this market,” he said, explaining that there are many other providers capable of offering analogous cloud infrastructure, and that more will enter the market to meet what is still ever-growing demand. (It’s also possible that recent price pressure from two well-financed competitors, Google Cloud and Microsoft Azure, could nudge Amazon towards selling on an exchange).

Finally, the history of commodities markets may once again be instructive in trying to guess the future role of the current cloud gorillas. That history, according to Mitchell, shows that incumbents may dislike the loss of pricing power that comes with commoditization, but sooner or later the traders get the upper hand.

“Exxon tries not to use wholesale price of oil, but that doesn’t dictate the price of oil. It’s traders who are long and short who set the prices, not those like Amazon who are fundamentally long.”

How the 6fusion Switch SuperNAP partnership will transform IT economics

by John Cowan, 6fusion Co-Founder & CEO

Last week, 6fusion and Switch SuperNAP announced a first-of-its-kind industry partnership, bringing unparalleled economic insights to Switch’s enterprise and cloud service provider customers.  The partnership, which integrates 6fusion’s utility infrastructure metering platform into the Switch environment, will provide customers with an unprecedented level of cost transparency, bringing IT infrastructure users and providers one step closer to 6fusion’s transformational vision of IT-as-a-Utility and the realization of the first fully viable IaaS marketplace.

The response from the industry has been overwhelming. While nobody doubts the obvious synergies between the world’s foremost innovator in data center technology and the company that is disrupting the economics of IT, I think it’s necessary to shed some light on how this collaboration got started, what it means for customers and a little bit about what comes next.

The seeds of this relationship were indirectly planted in May of 2013 at an event called Cloud 2020, organized by Ben Kepes of Forbes and Krish Subramanian of Red Hat, and conveniently held at the site of the SuperNAP.  At this exclusive thought-leader event a lot of fascinating topics were covered.  I had the privilege of joining a panel to discuss “The Economics and Use Case of Federated Clouds.”  I don’t think the crowd on hand quite expected the fireworks this topic would ignite.  Maybe it was some post-lunch energy, or maybe it was me standing up there and proclaiming that compute, network, and storage resources could in fact be traded like coal, oil, or other commodities – and that, in my opinion, once market economics could truly be employed by buyers and sellers, the industry would see cloud adoption velocity worth getting excited about.

The debate we ignited spilled over to the blogosphere, as leading thinkers continued to make their case in the weeks following Cloud 2020 (read more background on the topic: here, here, here, and here).  Whether you thought the idea I shared that day was crazy or brilliant, it was hard to ignore the groundswell of interest we created.  To foster continued discussion, 6fusion sponsored an invitation-only round table on the front end of GigaOM Structure 2013; participants included Joe Weinman, Randy Bias, Paul Miller, Mark Thiele, Bernard Golden, Reuven Cohen, James Mitchell, and other industry luminaries.  For posterity, we recorded that session.

Shortly following the public debate about the concept of a futures market for cloud computing, 6fusion announced its first big step in that direction: 6fusion Launches Open Marketplace for IaaS.

If the Workload Allocation Cube (learn more about the fundamentals of the WAC here) is the base measure of infrastructure, like a true consumption utility, then the marketplace is the basis of contract standardization. Contract standardization is a critical building block because it defines the parameters for consumption in a uniform way; it is the foundation we are laying for the eventuality of trading IaaS compute.
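As a hypothetical illustration of what contract standardization buys you – and emphatically not 6fusion’s actual contract schema – reducing every deal to the same handful of parameters makes offers from different suppliers directly comparable:

```python
# Hypothetical illustration of contract standardization: every deal is
# expressed with the same parameters, so offers from different suppliers
# become directly comparable. Field names and figures are illustrative only.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class StandardIaaSContract:
    supplier: str
    quantity_unit_hours: float    # consumption quantity in the standard unit
    price_per_unit_hour: float    # single price for the composite unit
    delivery_start: date
    delivery_end: date

    @property
    def notional_value(self) -> float:
        return self.quantity_unit_hours * self.price_per_unit_hour

offers = [
    StandardIaaSContract("Provider A", 50_000, 0.11, date(2015, 1, 1), date(2015, 3, 31)),
    StandardIaaSContract("Provider B", 50_000, 0.09, date(2015, 1, 1), date(2015, 3, 31)),
]
best = min(offers, key=lambda o: o.price_per_unit_hour)
print(f"Best offer: {best.supplier} at ${best.price_per_unit_hour}/unit-hour "
      f"(notional ${best.notional_value:,.2f})")
```

Once every offer is written against the same fields, comparing suppliers stops being an exercise in spreadsheet archaeology and becomes a one-line sort.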

The Marketplace was built on the Open Market Framework (OMF), which launched a few months earlier.  The OMF is important because it is how 6fusion achieves open participation for buyers and sellers.  The basic premise behind the OMF is that the underlying software code necessary to meter heterogeneous technology stacks was made open.  Anyone can build and support an integration to the 6fusion platform – whether you come from a cloud, virtualization, or physical operating system perspective.  You can read more about the OMF here.

One of the questions I have received since making the announcement with the SuperNAP is “why?”.   If 6fusion built all of the underlying software plumbing and the user interface to settle infrastructure contracts, why is the SuperNAP really even needed?  The answer is quite simple: the first step to building an open, financially settled market is to organize physical marketplaces. The SuperNAP is a physical marketplace.  At an elementary level, it is the world’s largest ‘farmers market’ of IT infrastructure; a massive collection of IT infrastructure operated by buyers and sellers.

By overlaying the 6fusion Open Market Framework and platform, denominated by the WAC, onto the SuperNAP, we have the potential to deliver unprecedented value to buyers and sellers.  For buyers, 6fusion turning the SuperNAP into an organized marketplace means the creation of buyer leverage and price transparency. Analysts don’t always see eye-to-eye on the world of cloud computing, but one point that escapes nobody is that buyers won’t just pick one execution venue for apps and workloads – they will pick multiple venues spanning internal IT, single-tenant managed infrastructure, and public cloud computing.  By giving buyers a normalized demand metric in the WAC, we are equipping them to have impactful negotiations, empowering them to understand their Total Cost of Consumption (TCC) KPIs, and establishing price transparency in the marketplace so that they understand the true power of their demand.
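To illustrate the TCC idea with made-up numbers, normalizing each execution venue’s monthly spend by its metered consumption yields a directly comparable cost-per-unit-hour figure – the kind of KPI a normalized demand metric makes possible. The venue names and figures below are hypothetical.

```python
# Hypothetical illustration of a Total Cost of Consumption (TCC) style
# comparison: monthly spend per execution venue divided by metered
# consumption in a normalized unit. Venues and numbers are made up.

venues = {
    # venue: (monthly_spend_usd, metered_unit_hours)
    "internal_it":     (120_000.0, 900_000),
    "managed_hosting": ( 45_000.0, 300_000),
    "public_cloud":    ( 60_000.0, 500_000),
}

def cost_per_unit_hour(spend: float, unit_hours: float) -> float:
    return spend / unit_hours

for venue, (spend, unit_hours) in sorted(
    venues.items(), key=lambda kv: cost_per_unit_hour(*kv[1])
):
    print(f"{venue:16s} ${cost_per_unit_hour(spend, unit_hours):.4f} per unit-hour")
```

With every venue expressed in the same unit, the buyer can see at a glance where the next increment of demand is cheapest to place, and can bring that data to the negotiating table.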

The power of the physical marketplace is immediacy.  I come to the market.  I buy something from the market.  I consume what I buy.  Unlike the farmers market, however, the IT market is limited by bandwidth.  While a lot of smart people are working to solve challenges like technical interoperability, the challenge of moving large volumes of data between two disparate points is still very much one of physical distance.  The SuperNAP marketplace solves this issue by operating the densest aggregation of network services of any data center on the planet.  What’s more, Switch views network services as “value add” rather than a source of profit extraction.  They understand that by making networks and purchasing leverage freely accessible, they drive concentration on their core competency, which is to build and run world-class data centers really, really well.

So thanks to Switch and 6fusion, customers can quantify their IT consumption like they would any other utility – using their actual usage based on real-time metering (versus the fixed-allocation or subscription-based economic model that the rest of the industry is forcing on consumers) – and make rational, meaningful decisions about the distribution of their IT load across multiple execution venues almost instantaneously.

And therein lies the value for the infrastructure seller.

I get a kick out of the so-called experts in our industry who tell me infrastructure suppliers would *never* support the idea of a commodity exchange or the normalization of consumption metrics, for fear of diluting value propositions and thereby creating a race to the bottom on price.  My friends and I at 6fusion have written plenty about that myth, so I won’t rehash it here.  Let me just say that in 10 years of working on the supply side of the market equation, the motivation is dead simple.  Suppliers of infrastructure (physical, virtual, or cloud) want two things:

  1. They want access to markets that lower their cost of business acquisition and
  2. They want to sell more of what they do, faster.

Simply put, the Switch 6fusion partnership will change the game for infrastructure suppliers by identifying new opportunities to serve their existing clients plus a raft of clients they never would have otherwise entertained.

There are some gaps I’ve intentionally created here so that I may come back to this post as the 6fusion-Switch story unfolds.  Consider this my ‘coming soon’ teaser:  The next step beyond the organized marketplace is a transaction marketplace.  Today, transactions are consummated in the cloud industry on proprietary paper.  Wouldn’t it be cool if that paper was exchangeable?

Like all big developments in the history of our industry, catalytic events mark the elevation of our industry to new planes, new heights, and new opportunities.   Switch and 6fusion catalyzing the organized market within the SuperNAP is the first commercial step toward the open market vision that Delano Seymour and I documented many years ago, and that is now shared by many industry leaders as well as our partners at the CME (Chicago Mercantile Exchange) Group. For that, I figured it was worth slowing down to share a bit about how we got here and what it means.
