6fusion

Blog

Top 5 Reasons to Baseline Your Infrastructure Costs

The old saying “you can’t improve what you can’t measure” is more relevant now than ever, but measuring alone doesn’t mean you are improving. You need to measure the right things, understand how those measurements compare to key benchmarks, and integrate those insights into your decision-making processes. Transparency is a critical success factor, and it begins with understanding where you stand today by baselining your infrastructure costs.

Historically, baselines were established using Total Cost of Ownership (TCO), but that is only part of the story in an on-demand world. Why? Because TCO only tells an organization what it is spending on infrastructure, not what it is actually using. And a baseline established by spend provides no visibility into capacity, utilization, and cost efficiency – all of which significantly impact organizational performance.

As mentioned in a previous blog, 6fusion developed a standard unit of measurement for IT, called the Workload Allocation Cube (WAC). The WAC defines a single unit of consumption that can be used across providers, technologies, services, and locations to measure consumption of IT resources. Through the WAC, companies can calculate Total Cost of Consumption (TCC), and this provides an apples-to-apples comparison of costs across heterogeneous infrastructure models. TCC finally enables companies to baseline their current cost per unit of consumption or usage.

Once a Total Cost of Consumption baseline is established, companies are able to:

  1. Establish a foundation for IT Economic Transparency. IT Economic Transparency is the practice of establishing core economic and financial indicators to measure IT spend effectiveness. The first step in achieving IT Economic Transparency, and bridging the gap between IT and Finance, is to establish a standardized baseline for IT infrastructure cost efficiency that both groups support. This requires using industry standards on the infrastructure cost and infrastructure consumption side of the equation.

  2. Get a comprehensive view of infrastructure capacity, utilization, and costs. Taking a holistic view of your entire infrastructure footprint is critical to an infrastructure baseline, including public, private, and hybrid infrastructure deployments. Once you have a complete map of an infrastructure footprint, you can begin to see how much capacity you have, how much you are using, and how much each set of infrastructure (or infrastructure services – IaaS) is costing you at a unit cost level.

  3. Get powerful insights into cost and consumption patterns and the relationships between the two. The combination of consumption and cost gives you a unit cost of infrastructure based on actual usage, not just TCO. To use a car analogy, think of this like your miles per gallon rating, not just how much it costs you to fill your tank. You also gain the ability to see consumption patterns across different sets of infrastructure and, depending on the unit costs of each infrastructure, see how that impacts your unit costs in aggregate.

  4. Utilize industry standard benchmarking internally and externally, identifying opportunities for improvements in cost efficiency. Using an industry standard unit of measure allows you to get an apples-to-apples comparison of both internal and external costs. For example, an internal comparison would benchmark multiple on-premise infrastructures against each other and provide an overall internal average. An external benchmark would compare that average against an industry benchmark, for example identifying how you compare against other organizations in your industry, others using the same hypervisor platform, or other companies of your size.

  5. Inform vendor negotiations. A baseline of your infrastructure usage costs and patterns arms buyers with an apples-to-apples comparison of bids across vendors by using the same unit of measure to compare capacity, consumption and cost efficiency.

By establishing a baseline for IT infrastructure consumption, companies have the knowledge to make effective IT infrastructure decisions and achieve IT Economic Transparency. In a future post, we’ll dive into the detail of how an infrastructure baseline is implemented.

Follow us on twitter @6fusion to learn more in upcoming blogs.

 

Why the Cost Matching Engine is Critical to IT Economic Transparency

By Rob Bissett, 6fusion Chief Product Officer

Today we announced an exciting new capability in the UC6 platform called the Cost Matching Engine, with AWS as the first cloud service provider supported by the Engine. These new tools, the Cost Matching Engine and the AWS Price Comparison Report, help address a critical business problem that many see as a major inhibitor to enterprise cloud adoption today: making effective, fact-based business decisions about investment in IT infrastructure, specifically on-prem vs. cloud (IaaS).

The Cost Matching Engine and related reports are available at no cost to the IT buyer and support the first truly quantitative, data-driven decision-making process for evaluating infrastructure investment, enabling the business (economic) decision maker to answer two critical questions:

  1. What’s the per unit cost of my infrastructure?

  2. What’s the per unit cost of the other, technically viable options?

The breakthrough that 6fusion’s Cost Matching Engine makes possible in answering these questions is calculating a predicted price per unit for services you haven’t yet contracted for, similar to shopping for airfares on Kayak or Expedia.

The first release of the cloud cost calculator supports Amazon Web Services’ EC2 and EBS services. It allows an organization to meter its existing infrastructure, regardless of the underlying virtualization, operating system, or cloud platform in use, with our Workload Allocation Cube (WAC) technology. Then, based on the configuration and consumption of that infrastructure, you can answer: “What would this cost if I ran it in AWS?” The Engine matches your existing VMs to AWS instances (based on configuration or peak observed consumption, whichever you choose). Once that matching is done, it calculates the predicted $/WAC from the published Amazon pricing and the actual observed consumption (in WACs) of the existing systems.
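As a rough sketch of the match-then-price flow described above, consider the following. The instance catalog, names, and prices here are invented placeholders, and the real Engine’s matching rules are certainly more involved; this only illustrates the two steps of matching a VM to an instance and dividing the instance price by observed consumption:

```python
# Illustrative sketch only: the catalog below is NOT real AWS pricing.
CATALOG = [  # (name, vCPUs, RAM GiB, $/hour) -- invented figures
    ("small",  1,  2, 0.023),
    ("medium", 2,  4, 0.046),
    ("large",  2,  8, 0.092),
    ("xlarge", 4, 16, 0.184),
]

def match_instance(vcpus, ram_gib):
    """Pick the cheapest catalog entry that covers the VM's configured
    size (the Engine can also match on peak observed consumption)."""
    fits = [row for row in CATALOG if row[1] >= vcpus and row[2] >= ram_gib]
    if not fits:
        raise ValueError("no instance large enough")
    return min(fits, key=lambda row: row[3])

def predicted_cost_per_wac(vcpus, ram_gib, wacs_per_hour):
    """Predicted $/WAC: hourly instance price divided by the WACs/hour
    actually observed on the existing system."""
    name, _, _, price = match_instance(vcpus, ram_gib)
    return name, price / wacs_per_hour

# A 2-vCPU / 6 GiB VM observed consuming 4 WACs per hour.
name, cost = predicted_cost_per_wac(2, 6, 4.0)
print(name, cost)
```

Note the matching picks “large” here even though “medium” has enough vCPUs, because the VM’s RAM exceeds the smaller instance; that asymmetry is exactly why configuration-based and consumption-based matching can give different answers.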

The Cost Matching Engine represents a dramatic leap forward in enabling quantitative business decision making. First, it simplifies the process by providing an “apples-to-apples” comparison of cost per unit, using an unbiased, vendor-neutral, third-party standardized unit of measurement, which provides assurance that the math isn’t skewed by internal or external vendors. Second, it dramatically shortens the time to “first quote”: rather than engaging in a lengthy (and potentially costly) TCO process, enterprise IT can quickly arrive at a meaningful cost analysis. This informed decision making helps the buyer decide:

  1. Which apps make the most sense to potentially migrate to cloud

  2. Whether it is worth the effort to undertake a detailed TCO analysis

This Engine has been developed to enable organizations to get started with their cloud, hybrid cloud, and enterprise investment strategies, but also to empower professional services organizations with faster time to value for client engagements.

The Cost Matching Engine represents a transformational technology: one that can radically alter the process by which enterprise IT organizations build the business cases that guide investments. With the increasing rate of change in enterprise IT, and the massive expansion in options (hybrid solutions, hosted hardware, managed solutions, cloud, and more), the process of quantifying investment opportunities must become simpler, faster, and more standardized. The Cost Matching Engine provides these capabilities, and 6fusion looks forward to adding support for more vendors to provide even more economic transparency for potential buyers.

Ever wonder what your current cost per unit of consumption is? How it compares to others’? Whether you should move to Amazon, or move more of what you are already doing there? 6fusion’s Cost Matching Engine can answer those questions.

Visit www.6fusion.com to get started, and follow us on Twitter @6fusion to learn more about IT Economic Transparency.

6fusion Launches Cost Matching Engine to Translate Enterprise Infrastructure Costs to Cloud Instance Costs

Cost Matching Engine advances infrastructure decision support by matching on-premise workload consumption and costs with IaaS instance types and costs. Amazon Web Services (AWS) is the first cloud service provider supported by Cost Matching Engine.

[Raleigh, NC – February 17] – 6fusion, the company standardizing the economic measurement of IT infrastructure and cloud services, today announced the availability of its Cost Matching Engine technology, offering the first “apples-to-apples” comparison between on-premise infrastructure costs and predicted cloud infrastructure pricing, based on infrastructure configuration or actual resource consumption.

The Cost Matching Engine is based on 6fusion’s patented single unit of measure, the Workload Allocation Cube (WAC). With the WAC as the basis for comparisons, the Cost Matching Engine de-politicizes the IT investment process by using an industry-standard unit of measure to provide an unbiased “apples-to-apples” comparison, regardless of the underlying technology or vendors. It is the only tool that gives users a data-driven, economic view of workloads to support a business case for migration to the cloud.

The Cost Matching Engine is designed for enterprise IT organizations that are planning investments in internal infrastructure projects or considering a move to cloud infrastructure, as well as the system integrators and consultants who support them.

AWS is the first cloud service provider supported by the Cost Matching Engine.  The engine compares the cost of running applications in an on-premises environment to AWS, and produces a detailed cost analysis, translating current virtual machine configurations and consumption into AWS instances and costs. The system advances traditional TCO methodologies by incorporating unit costs based on real-time infrastructure resource consumption, not simply ownership, allocation, or subscriptions.

“The Cost Matching Engine is a breakthrough for the open IaaS market, addressing the critical business challenge of quickly determining which workloads and applications make financial sense to consider for outsourced infrastructure before racing to have commercial discussions with potential suppliers,” said 6fusion CEO and Co-Founder, John Cowan.

Early support for the tool comes from beta customers such as SAS, the leader in business analytics software and services. J Nick Otto, Senior IT Manager at SAS, said “The Cost Matching Engine is a powerful tool that helps us analyze our workload consumption and informs our purchase and pricing decisions.”

6fusion is planning support for other cloud and infrastructure hosting providers.  “Enterprise IT is asking for greater financial visibility,” said Rob Bissett, Chief Product Officer at 6fusion.  “Delivering a complete picture of options for the enterprise buyer is our primary objective.”

To learn more about the Cost Matching Engine or how you can leverage the Workload Allocation Cube to create economic transparency in your IT organization, please visit www.6fusion.com or email Ryan Kraudel at rkraudel@6fusion.com.

 

Cloud Luminary Fireside Chat

This edition of the Cloud Luminary Fireside Chat series brings together a panel of experts to discuss cloud utilization and cost analytics within IT organizations. Guest panelists include John Cowan, Co-Founder and CEO, 6fusion, Sharon Wagner, Founder and CEO, Cloudyn, and Owen Rogers, cloud economist at 451 Research. Hosted by Bernard Golden, VP of Strategy, ActiveState.

6 Reasons Companies Should Care About the Future of IT Finance & Economic Transparency

The modern IT environment is highly fluid. The development of new infrastructure delivery models, typically a mix of public, private, and hybrid cloud, has created enormous new opportunities (and expectations) for innovation and value-add to the business.

Meanwhile, existing infrastructure can’t be ignored. Virtualization projects continue to drive efficiency, but also create ongoing infrastructure management challenges. Legacy applications and infrastructure continue to require support and maintenance. Line of business owners continue to demand more services with shorter project timelines, all with higher pressure to deliver lower costs.

A key component to solving these challenges is defining a common language for IT economics: a way for organizations to quantify, measure, communicate, and extrapolate their IT spend. We refer to this as creating “economic transparency.” It revolves around a standard unit of measure and a methodology that lets the leaders of IT infrastructure operations bridge the gap with CFOs and CIOs and communicate with their peers in the C-suite. In doing so, the IT organization is better able to communicate challenges, costs, opportunities, and plans for the future.

IT Economic Transparency, defined: leveraging standardization to establish core economic and financial Key Performance Indicators (KPIs) that measure IT spend effectiveness, enable economic benchmarking, and improve market pricing visibility, creating transparency in investment decision making and bridging the gap between IT and Finance.

Why is economic transparency important? Every IT organization is evaluated on its decision-making, particularly investment decisions, and each dollar spent by IT must translate into value to the business: investments in maintaining legacy equipment or services, supporting a virtual environment, building a private or hybrid cloud, or moving to a public cloud. In today’s IT environment, however, the challenge is increasingly to accurately quantify, communicate, and plan that spend. Historical models such as TCO (Total Cost of Ownership) need revamping if they are to accurately guide IT investments in an as-a-service world.

In a cloud world, TCO is far less relevant because, with these external services, you will never own anything. You use the services you need and pay for what you use. In this new on-demand world, agility, time to market, the ability to fail fast, and short-lived apps are the norm. At the current pace of change, it just doesn’t make sense to calculate the 5-year TCO of an application.

As organizations move toward an even faster pace of innovation, they need to look past TCO and focus on what they can control: what an application costs to run. By quantifying the dollars per unit to run or use an application, the IT owner can then do three things:

  1. Compare the existing cost to options on the table;
  2. Compare the existing cost with external benchmarks to determine how cost-effective the current solution is, and;
  3. Use the information to plan for future growth.

By adopting a consumption-based utility model for managing IT economics, organizations create a new way to quantify, compare, and plan IT spend. It creates a common language for planning, which de-politicizes the process and empowers fact-based decision making.

How does a utility model and paying by consumption work? Consider virtually any utility consumed today: each has a standard unit of measurement that defines it (think “gallons” for water consumption or “kilowatt-hours (kWh)” for electricity), regardless of the vendor it was purchased from.

To help address the emerging economic challenges that cloud created for IT, 6fusion developed a standard unit of measurement for IT, called the Workload Allocation Cube (WAC). The WAC defines a single unit of consumption that can be used across providers, technologies, services, and locations to measure consumption of IT resources. The WAC provides a basis on which utility billing, utility financial models, forecasting, and economics principles can be applied.

What’s a WAC? It compares real-time utilization (workload) against a fixed baseline (allocation) across six vectors (cube): CPU, memory, storage, disk I/O, LAN I/O, and WAN I/O.
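The formula behind the WAC is 6fusion’s own and is not published here. Purely as an illustration of the idea of scoring observed utilization against a fixed baseline allocation across six vectors, a sketch might look like this (the baseline values and the averaging step are invented for the example, not 6fusion’s actual algorithm):

```python
# Hypothetical sketch of aggregating six metered vectors into a single
# consumption unit, in the spirit of the WAC. Baselines are invented.
BASELINE = {  # fixed baseline allocation per vector
    "cpu_ghz": 2.0,
    "memory_gb": 4.0,
    "storage_gb": 100.0,
    "disk_io_mbps": 10.0,
    "lan_io_mbps": 5.0,
    "wan_io_mbps": 1.0,
}

def consumption_units(observed: dict) -> float:
    """Score one metering interval: average each vector's observed
    utilization against its fixed baseline allocation."""
    ratios = [observed[k] / BASELINE[k] for k in BASELINE]
    return sum(ratios) / len(ratios)

# One interval for a VM running at half the baseline on every vector.
sample = {
    "cpu_ghz": 1.0, "memory_gb": 2.0, "storage_gb": 50.0,
    "disk_io_mbps": 5.0, "lan_io_mbps": 2.5, "wan_io_mbps": 0.5,
}
print(consumption_units(sample))  # 0.5
```

The point of the structure, whatever the real coefficients are, is that the unit is defined by consumption against a fixed allocation, so the same score is comparable across hypervisors, providers, and locations.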

The WAC’s total cost of consumption provides an apples-to-apples comparison of costs, de-politicizing procurement decisions – even the debate to cloud, or not to cloud. With the move away from “pay-as-you-configure” to “pay-as-you-consume,” companies are only charged what their IT departments actually use. And true IT Economic Transparency can finally be achieved.

IT Economic Transparency enables six main business outcomes. These include the ability to:

  1. Establish a baseline, allowing companies to normalize infrastructure consumption using an industry standard unit of measure, compare internal and external cost efficiency, and gain visibility into granular consumption patterns.
  2. Benchmark internally and externally, identifying opportunities for improvements in cost efficiency.
  3. Enable accurate cost allocation/charge backs, allowing users to pay for what is used versus allocated, and enabling IT to demonstrate value to the business.
  4. Track and measure migrations to the cloud, providing the ability to predict future costs, and measure historical actuals moving forward.
  5. Achieve effective capacity planning, through visibility into resource consumption and bottlenecks.
  6. Inform vendor negotiations, arming the buyer with an apples-to-apples comparison of bids across vendors and real-time price discovery.

Stay tuned. We will be addressing each one of these areas in our blog series over the next few months.

 

 

What 6fusion’s API means for the IaaS Open Market

by Rob Bissett, Chief Product Officer

Today, 6fusion announced the general availability of our new public API. This is a pivotal “feature” release for 6fusion for a number of reasons, and I wanted to take some time to share more details about why it is transformative for both 6fusion and the market as a whole.

First, the release marks an important milestone as our platform continues to evolve from a tool that organizations can use to solve discrete problems into a platform that users, ISVs, and others can integrate into broader solution sets. This release opens up new opportunities for the larger developer community to solve more, and bigger, challenges for users, suppliers, and partners.

6fusion views the UC6 platform from a “data in / data out” perspective. The API enables this view technically by providing self-service access to WAC meter data, financial baseline data, and application metadata. These types of data have already become critical for suppliers gaining access to the market, for buyers and brokers who buy and sell in it, and for third parties tracking and benchmarking it. The API provides simpler, more powerful, more scalable, and more open access to the platform, which is a critical aspect of democratizing a truly open market for infrastructure services.

Second, the API forms the backbone of UCX, the Universal Compute Xchange. As the gold standard for infrastructure utility metering, and the basis on which the UCX exchange and, eventually, futures markets for computing will be built, UC6 underpins the movement to an open market for IT infrastructure. You simply cannot have an open market without an underlying technology platform that is open, accessible, and available to the market as a whole.

The release of our open API makes it possible for sellers on the exchange to settle contracts with buyers with no involvement or intervention from UCX or 6fusion. Trade data is injected into UC6, and consumption data for auditless inspection and overage billing is available from the system, providing suppliers and users a seamless customer experience.

If 6fusion’s vision is to organize the global market for IT infrastructure services and bridge the gap between finance and technology through the power of information, access to the underlying data must be open and extensible.

Well, now it is.

 

6fusion Releases API for IT Infrastructure Consumption Standard

New Public API Streamlines Integration of 6fusion WAC Data for IT Infrastructure Exchanges, Buyers, Sellers and Brokers

[Raleigh, NC – February 3] – 6fusion, the company standardizing the economic measurement of IT infrastructure and cloud services, today announced the release of its first public Application Programming Interface (API). The new API enables seamless integration with the 6fusion UC6 Platform, creating new product and service opportunities for ecosystem partners based on the Workload Allocation Cube (WAC), the patented unit of measure for IT infrastructure consumption.

6fusion now provides independent software vendors (ISVs), cloud infrastructure suppliers, channel partners, and enterprise customers, with an open API to create and deliver new software applications and solutions that integrate with the UC6 Platform. The API accelerates the development and advancement of an organized market for IT infrastructure by broadening access to critical data that informs market price discovery, helps quantify supply and demand, and enables the transparency necessary to improve economic decision making around IT infrastructure investment planning. The API also enhances the value of IT financial management and decision support tools by enabling programmatic integration of WAC data into existing reporting processes and data models.

“Organizing the global IT infrastructure services market means empowering the buyer to make supportable financial decisions with well informed data.  Our public API release marks an important milestone,” said John Cowan, 6fusion CEO and Co-Founder.

Early support for the API comes from beta customers such as UCX, The Universal Compute Xchange, which leverages the new API to enable physical delivery of traded computing contracts. “The 6fusion API is critical to our information supply chain and in particular, our delivery process,” said Adam Zeck, UCX CEO and Founder.  “We see this as just the beginning of creating capital efficiencies for the market to trade WAC Financial Products via our centralized exchange.”

6fusion has designed the API to simplify integration with the UC6 Platform. Based on the Hypertext Application Language (HAL) specification, the API is flexible, simple to use, and standards-driven. It is comprehensive, supporting all features and capabilities found in the 6fusion user interface, and includes interactive documentation to assist developers with integration.

To learn more about the 6fusion API, or how you can leverage the Workload Allocation Cube to create economic transparency in your IT organization, please visit www.6fusion.com or email info@6fusion.com.

 

About 6fusion

6fusion is standardizing the economic measurement of IT infrastructure and cloud services, providing IT economic transparency to the global market. With 6fusion’s UC6 Platform, organizations can view and manage the Total Cost of Consumption (TCC) of their business services in real time and achieve a higher level of cost optimization, forecasting accuracy, and business agility.

6fusion uses a patented single unit of measure of IT infrastructure called the Workload Allocation Cube that provides a common view of IT consumption, agnostic of underlying technology or vendors. 6fusion enables baselining, benchmarking, and budgeting of business service consumption across execution venues, and supports dynamic cost optimization strategies that keep pace with the realities of today’s heterogeneous, on-demand world. For more information visit www.6fusion.com.

 

Datamation Controlling Cloud Computing Costs- Video Roundtable

http://www.datamation.com/events/controlling-cloud-computing-costs-video-roundtable.html


 

Rob Bissett, 6fusion Chief Product Officer, joins moderator James Maguire, Senior Managing Editor of Datamation, and panelists Owen Rogers, Senior Analyst of Digital Economics at 451 Research, and Deirdre Mahon of Cloud Cruiser, to discuss the new cloud economics.

Cloud computing was supposed to offer significant cost savings to businesses. Among other benefits, costs could be shifted to OPEX instead of CAPEX. Yet in reality many businesses have reported that costs have actually increased as they’ve moved to the cloud. Why? And why are the various costs and invoice statements so confusing and so unpredictable? And why is it so hard — or is it? — to compare costs between the various cloud providers? In this video roundtable, four leading cloud experts will discuss these and other key issues relating to cloud computing costs.


Why TCO Cost Calculators don’t work for a Cloud-World

White Paper by Rob Bissett, Chief Product Officer

The following white paper was released in response to numerous requests for additional detail on an earlier post entitled “3 Reasons Why TCO Cost Calculators Don’t Work for Cloud.” The unabridged version follows:

Amazon Web Services, long considered the gold standard for web-scale cloud services, has begun a very focused effort to drive growth in its customer base among enterprise audiences. I believe this will become a common theme as all of the larger public cloud providers shift their focus from “cloud-first” workloads to the massive underbelly of enterprise IT spend.

One of the major challenges these vendors will face is increasing pressure to demonstrate TCO in the face of incumbent internal IT opposition. There are arguments being made inside enterprises that (security and regulatory concerns notwithstanding) enterprise IT can deliver cloud-like shared services as cheaply as, or more cheaply than, AWS and its peers. This argument may or may not be true in general, but it does force these providers to work harder to justify the TCO/ROI case for moving enterprise workloads to the cloud.

So let’s take a look at how TCO is being calculated by public cloud vendors today, work through a real-world example, and identify some of the critical limitations of applying an on-prem costing model to the on-demand world.

TCO Today

Amazon has developed and published a TCO tool (http://www.awstcocalculator.com/) designed to help organizations better understand the financials behind their IT infrastructure and how those compare to an equivalent AWS configuration. The challenge behind this, and in fact behind all traditional enterprise ROI calculations, is the complexity of enterprise cost models. The transition to more modern shared-services infrastructures exacerbates these issues by intermingling the various resources and costs.

AWS has invested considerable effort in building out a default cost structure that can be used for the TCO calculations, along with a set of assumptions about how enterprise hardware is architected and deployed. The result is a simple-to-use tool: a user enters some basic information about an app’s configuration and gets a baseline cost comparison against the equivalent running in Amazon.

While the tool is quite simple to use, the logic underpinning it leaves a lot to be desired.

Specifically:

  1. The tool assumes a discrete, complete, comprehensive stack for each application you are analyzing, with no re-use of existing resources.

  2. It assumes that you can fully eliminate the cost of the underlying hardware by switching the app to Amazon.

  3. The tool assumes a 3-year ROI starting from today; it doesn’t allow for partial amortization.

These assumptions mean the tool is only useful if you migrate 100% of your apps to Amazon, you can retire the underlying infrastructure completely, and that infrastructure is new.

The impact can be seen in the following example:

Take an app powered by five VMs that runs in a larger shared-services infrastructure. Entering its configuration into the calculator yields the following result:

The calculator states that it will cost roughly $215,000 to host this application in-house. It further extrapolates that running it in Amazon instead will cost only $12,021, a savings of $203,864. Now, in a modern enterprise, does anyone believe that you would incur $200K in real costs to run 5 VMs for 3 years in a scenario where you don’t have to buy any new hardware?
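To see how the dedicated-stack assumption alone produces numbers like these, here is a toy comparison with invented figures. It is not the calculator’s actual math, only the attribution logic at work: attributing an entire stack to one app versus charging the app for the fraction it actually consumes.

```python
# Toy numbers (invented) contrasting the two attribution models for one
# app running on a shared stack.

stack_cost_3yr = 600_000.0       # 3-year cost of the whole shared stack
app_share_of_consumption = 0.02  # this app uses 2% of metered capacity

# Calculator-style TCO: attribute a discrete, dedicated, brand-new
# stack entirely to the one app being analyzed.
dedicated_tco = stack_cost_3yr

# Consumption-based attribution: charge the app only for what it uses.
consumed_cost = stack_cost_3yr * app_share_of_consumption

print(dedicated_tco, consumed_cost)  # 600000.0 12000.0
```

The two models differ by the inverse of the app’s consumption share, which is why a dedicated-stack TCO for a small app on shared hardware can be wrong by orders of magnitude.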

The Limitations of TCO for on-demand environments

The purpose of this document isn’t to dispute the math behind this analysis, but to illustrate that the traditional approach to TCO doesn’t work for cloud. The flawed assumptions are that you need to acquire an entire enterprise stack to run this app, and only this app, and that you can remove the capitalized cost on migration. In most modern environments, enterprise IT has already acquired the data center real estate, power, servers, storage, virtualization, and the rest of the necessary underlying resources. When new workloads increase underlying hardware requirements, they likely won’t require new acquisitions to support them. Even if incremental investments were required, it would be exceedingly difficult to allocate the direct costs of the incremental hardware to a single app, since the hardware would be dedicated to a resource pool and used across multiple apps, benefiting them all. Additionally, any resources acquired would likely be fractional, incremental server investments rather than a full enterprise stack, and would join a stack that is already partially amortized.

More generally, this points to a problem with cloud TCO in general. The “standard” method of conducting TCO analysis in the enterprise is simple: determine the hardware (and other) resources that the application uses, sum the costs of those resources, add labor, and present your results. This simply doesn’t work for cloud, where you consume rather than acquire.

The challenge with a consumption-based model is the shared-services stack. It is virtually impossible to determine exactly which hardware resources a given app is using, particularly in storage and hypervisor-based systems where workloads may move around dynamically. Further complicating the issue, the hardware resources are shared: even if you could point to a server and say “it’s running on that one,” it would be difficult or impossible to figure out how much of “that one.”

While many firms have worked on finding a better way to address this TCO issue, most are still trying, at the most basic level, to figure out how to apportion partial hardware resources: “How much do I have, and how much of that is this app using?” Historically, the internal processes that answer this question are seen as arbitrary, proprietary, and open to debate, both internally and externally. This calls into question all TCO analyses, pushing acquisition back toward a highly politicized process.

What’s next for TCO modeling?

What’s next is TCC: Total Cost of Consumption. TCC takes the cost of the shared-services stacks as an input, then quantitatively measures the real-time consumption of each application, enabling a precise measurement of each app’s cost. If you know what the stack costs, and how much of it each app uses, the math is simple, transparent, and defensible. TCC abstracts the underlying hardware from the app and provides an apples-to-apples comparison, de-politicizing the procurement decision (to cloud, or not to cloud).

How do we do this? We need a costing model that is transparent and defensible, and then a method by which we can meter consumption. Let’s look at each in turn.

The Workload Allocation Cube

The WAC is the first commercial standard for application consumption measurement. It works by comparing real-time utilization (workload) against a fixed baseline (allocation) spanning six vectors (cube): CPU, memory, storage, disk I/O, LAN I/O, and WAN I/O.
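To make the six-vector idea concrete, here is a toy sketch of a WAC-style metric. This is illustrative only – 6fusion’s actual WAC coefficients and baseline allocations are not reproduced here; the baseline values and the simple averaging below are assumptions invented for the example.

```python
# Hypothetical sketch of a WAC-style consumption metric. The six vectors
# mirror those named in the text (CPU, memory, storage, disk I/O, LAN I/O,
# WAN I/O); the baseline allocations and the averaging scheme are assumed,
# not 6fusion's published definition.

# Baseline allocation defining one unit per vector (assumed values)
BASELINE = {
    "cpu_ghz": 2.0,        # GHz of CPU
    "mem_gb": 4.0,         # GB of RAM
    "storage_gb": 100.0,   # GB of provisioned storage
    "disk_io_mbps": 10.0,  # disk throughput
    "lan_io_mbps": 10.0,   # LAN throughput
    "wan_io_mbps": 1.0,    # WAN throughput
}

def wac_units(measured: dict) -> float:
    """Convert measured hourly utilization into abstract consumption units.

    Each vector is normalized against its baseline allocation and the
    normalized values are averaged, so balanced consumption at exactly
    the baseline on every vector yields 1.0 units.
    """
    ratios = [measured[k] / BASELINE[k] for k in BASELINE]
    return sum(ratios) / len(ratios)

# A VM running at exactly the baseline on every vector consumes 1.0 units/hr
vm_hour = {"cpu_ghz": 2.0, "mem_gb": 4.0, "storage_gb": 100.0,
           "disk_io_mbps": 10.0, "lan_io_mbps": 10.0, "wan_io_mbps": 1.0}
print(wac_units(vm_hour))  # 1.0
```

Because the metric is a pure function of measured utilization, it carries no information about the platform, provider, or instance type – which is what makes cross-venue comparison possible.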

The fundamental difference between the WAC and other proprietary measurement models is that it accounts for consumption, not allocation. In effect, the WAC is the basis of the most important metric in IT financial management: Total Cost of Consumption (TCC).

Going a layer deeper, the WAC enables a true utility approach to IT infrastructure by abstracting the platform, technology, location, provider and other details from the actual analysis. It also eliminates the need to think in terms of instances or instance types, since WAC consumption can be applied against any instance configuration. Furthermore, because the WAC is agnostic, it can be used to compare consumption and cost across both private and public providers, providing an apples-to-apples cost comparison that greatly simplifies decision making and depoliticizes the IT outsourcing process.

The Utility Methodology

With the standard unit of measurement outlined above, we can take a fundamentally new approach to the TCO process, based on what we refer to as utility economics – or simply a true “cloud” approach. The process is fairly simple and works backwards from the app-centric method traditional models take: it starts with understanding the scope of the hardware environment in which the target applications run. This scope is fairly broad – which storage systems (SANs), which VMware clusters, and so on.

Since we are interested in the macro view, this is a much simpler task than trying to identify what percentage of which physical elements are in use. Once we have identified the macro environment (we call this an infrastructure), we can start to dig into its consumption. Take a simple example: an enterprise with a single shared services ‘stack’ running a group of apps (including the one we are interested in) would just consider the whole environment. Because we aren’t subdividing, deciding which resources to include is straightforward. You can then itemize the cost centers, determine their original (or book) value, and apply any amortization you want to consider. Anything that is not specific to the app should be included – hardware, M&S, hypervisor costs, data center, and so on.

This gives us the ability to undertake a fairly simple analysis that we call Infrastructure Capacity & Cost. This step uses our methodology to determine the overall productive capacity of the infrastructure, expressed in WACs/hr. Based on the total cost (capital and operating) of the hardware across its lifetime, we can determine an average hourly operating cost. Dividing the two gives us a $/WAC – a loaded cost per unit of consumption. This analysis gives us the basis of the TCC for the given app.
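The Infrastructure Capacity & Cost step amounts to straightforward arithmetic. The sketch below walks through it with entirely hypothetical cost figures and an assumed capacity; only the shape of the calculation – lifetime cost divided by productive capacity in WACs/hr – follows the methodology described above.

```python
# Illustrative Infrastructure Capacity & Cost calculation. Every figure
# here is an assumption made up for the sketch; the methodology (total
# lifetime cost divided by productive capacity) follows the text.

LIFETIME_HOURS = 3 * 365 * 24          # assume a 3-year service life

# Itemized cost centers for the shared stack (hypothetical book values)
capital_costs = {
    "servers": 400_000,
    "storage": 250_000,
    "hypervisor_licenses": 150_000,
}
operating_costs_per_year = {
    "data_center_space_power": 90_000,
    "maintenance_support": 60_000,
}

total_cost = (sum(capital_costs.values())
              + 3 * sum(operating_costs_per_year.values()))

hourly_cost = total_cost / LIFETIME_HOURS   # average $/hr to run the stack
capacity_wacs_per_hour = 2_000              # assumed productive capacity

dollars_per_wac = hourly_cost / capacity_wacs_per_hour
print(f"${dollars_per_wac:.4f} per WAC")
```

Note that the $/WAC is a loaded rate: because everything not specific to a single app went into `total_cost`, every unit of consumption carries its share of data center, maintenance, and licensing overhead.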

The second major aspect is determining what percentage of the entire resource pool the app uses. This is difficult to do in traditional form because we are trying to use legacy physical measures of consumption. Approaching it a different way greatly simplifies the process. The 6fusion platform can meter application consumption in real time, on a VM-by-VM basis. Think of this as wiring up the infrastructure with a “meter” on each virtual machine. The meter records actual consumption and returns the readings to the UC6 Platform, which calculates WAC consumption from that data. With that data, we can determine exactly what percentage of the total is being used.

By summing the consumption of the VMs that comprise the app, we can calculate the loaded cost of running it: multiply the number of WACs consumed by the calculated $/WAC. This can be done hourly, monthly, or over whatever time range the user wishes to see. You can then add in any app-specific costs (say, software licensing) that you want to consider as part of the cost of operations. For a comparison between two hosting providers this typically isn’t necessary, since those costs would apply in both cases; it only matters when you expect them to differ. The calculation can then be repeated (a) for each app running in the system, and (b) on a regular, recurring basis to provide true insight into the cost of operating the infrastructure. Even better, it provides a baseline for comparison: by determining the $/WAC for each available execution venue, the enterprise can simply and easily compare, say, the cost of running an app internally versus in the AWS public cloud.
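The per-app roll-up described above can be sketched as follows. The VM names, meter readings, and $/WAC figure are all hypothetical; the structure – sum metered WACs across the app’s VMs, multiply by the loaded $/WAC, add any app-specific costs – follows the text.

```python
# Sketch of the per-app TCC roll-up: sum metered consumption across the
# VMs that make up an app, multiply by the loaded $/WAC, and add any
# app-specific costs. All names and readings are hypothetical.

DOLLARS_PER_WAC = 0.0238   # loaded rate from the capacity & cost step (assumed)

# Hourly meter readings (WACs consumed) for the VMs comprising one app
metered_wacs = {
    "app-web-01": 3.2,
    "app-web-02": 3.1,
    "app-db-01": 5.6,
}

# App-specific costs, e.g. software licensing; only needed when such
# costs would differ between the venues being compared
app_specific_monthly = 1_200

hours_in_month = 730
monthly_wacs = sum(metered_wacs.values()) * hours_in_month
monthly_tcc = monthly_wacs * DOLLARS_PER_WAC + app_specific_monthly
print(f"App TCC: ${monthly_tcc:,.2f}/month")
```

Comparing venues then reduces to re-running the last two lines with each venue’s own $/WAC against the same metered consumption.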

In larger, more complex environments this can be somewhat more difficult, but with reasonable assumptions one can determine what hardware to include in the model and build a cost model either by (a) getting hard data from accounting or (b) using publicly available pricing and reasonable discount assumptions.

Finally, the inherent advantage of this model – above and beyond the simpler, more useful ROI/TCO calculation – is that the metering can be left in place post-migration. The actual $/WAC can then be calculated as a recurring KPI to validate the original assumptions on which the decision was made, providing real-time information for future deployment decisions.

Conclusion

Traditional enterprise IT ROI/TCO calculators simply don’t provide enough granularity and flexibility in the new shared services / cloud world. “Ownership” is being replaced by “Consumption”. Enterprises, and enterprise service providers, need to adopt newer, more utility-like models to create credible savings and TCO calculations. These models must include a standard unit of measure for IT and enable more precise, abstracted comparisons. Additionally, with the cost of migration falling as a result of the cloud model, it becomes increasingly important to provide ongoing justification for (a) the original migration decision and (b) the ongoing decision to continue with a given provider. Adopting a true utility approach, including a standard unit of measure provided by an impartial third party, is the best way to accomplish this.

3 Reasons why TCO Cost Calculators don’t work for Cloud

by Rob Bissett, Chief Product Officer, 6fusion 

Amazon Web Services – long considered the gold standard for web-scale cloud services – has begun a very focused effort to drive growth in its enterprise customer base. One of the major challenges that Amazon, and all public cloud vendors, will face is increasing pressure to demonstrate TCO in the face of incumbent internal IT opposition.

So let’s take a look at how TCO is being calculated by public cloud vendors today, and identify some of the critical limitations of applying an on-prem costing model for the on-demand world.

How Amazon calculates TCO today:

Amazon has developed and published a TCO tool (http://www.awstcocalculator.com/) designed to help organizations better understand the financials behind their IT infrastructure and how they would compare to an equivalent AWS configuration. The challenge behind this – indeed, the challenge with all traditional enterprise ROI calculations – is the complexity of enterprise cost models. The transition to more modern shared services infrastructures exacerbates these issues by intermingling the various resources and costs.

AWS has invested considerable effort in building out a default cost structure for the TCO calculations, along with a set of assumptions about how enterprise hardware is architected and deployed. The result is a simple-to-use tool that lets a user enter some basic information about an app’s configuration and get a baseline cost comparison against the equivalent running in Amazon.

However, while the tool is quite simple to use, the logic underpinning it leaves a lot to be desired.

Specifically:

  1. The tool assumes a discrete, complete, comprehensive stack for each application you are evaluating, with no reuse of existing resources.

  2. It assumes that you can fully eliminate the cost of the underlying hardware by switching the app to Amazon.

  3. The tool assumes a 3-year ROI horizon starting from today – it doesn’t allow for partial amortization.

The net result of these assumptions is that the tool is really only applicable when the underlying hardware is end of life and subject to retirement, and only if you plan to migrate every app on that infrastructure to Amazon – in other words, only when you can retire the underlying stack completely.
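A toy calculation makes the effect of these assumptions concrete. All figures below are invented for illustration; the point is simply that charging an app for a complete, freshly acquired stack can overstate its attributable on-prem cost dramatically once sharing and prior amortization are considered.

```python
# Toy arithmetic showing how the three calculator assumptions skew the
# comparison. Every dollar figure and percentage here is an invented
# assumption for illustration.

full_stack_3yr = 900_000        # what the calculator charges the app: a
                                # complete, dedicated stack over 3 years

# What the app actually costs on shared, partly amortized infrastructure:
shared_fraction = 0.08          # app's share of the pooled capacity (assumed)
remaining_amortization = 0.4    # stack already 60% depreciated (assumed)

actual_attributable = full_stack_3yr * shared_fraction * remaining_amortization
print(f"Calculator's on-prem cost: ${full_stack_3yr:,}")
print(f"Cost actually attributable to the app: ${actual_attributable:,.0f}")
```

Under these (made-up) numbers the calculator overstates the app’s on-prem cost by more than an order of magnitude, which is exactly the kind of distortion the rest of this post is concerned with.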

The Limitations of TCO for on-demand environments

The purpose of this document isn’t to dispute the math behind this analysis, but to illustrate that the traditional approach to TCO doesn’t work for cloud. The flawed assumptions are that you must acquire an entire enterprise stack to run this app, and only this app, and that you can eliminate the capitalized cost on migration. In most modern environments, enterprise IT has already acquired data center real estate, power, servers, storage, virtualization and the rest of the necessary underlying resources. When new workloads increase underlying hardware requirements, they rarely trigger net-new acquisitions. Even where incremental investments are required, it would be exceedingly difficult to allocate their direct costs to a single app, since the hardware joins a shared resource pool used across multiple apps, benefiting them all. And any resources that were acquired would likely be fractional, incremental server investments rather than a full enterprise stack.

More generally, this points to a problem with cloud TCO overall. The “standard” method of conducting TCO analysis in the enterprise is simple: determine the hardware (and other) resources the application uses, sum the costs of those resources, add labor, and present your results. That simply doesn’t work for cloud – you consume, you don’t acquire.

What’s next for TCO modeling?

That next step is TCC – Total Cost of Consumption. TCC takes the cost of the shared services stack as an input, then quantitatively measures the real-time consumption of each application, enabling a precise measurement of the cost of a given app. If you know what the stack costs, and how much of it each app uses, the math is simple, transparent, and defensible. TCC abstracts the underlying hardware from the app and provides apples-to-apples comparisons, de-politicizing the procurement decision (to cloud, or not to cloud).

Applying Utility Economics to the Equation 

There is a better way. The solution is simple – think cloudy thoughts. What do I mean by that? When you price out a cloud instance, you aren’t reverse-engineering which hardware resources you are using and trying to assign a cost to them. The supplier determines how many instances of which types can run across an entire location or environment, assigns a dollar cost to them, adds a profit margin, and prices them per unit. Fundamentally, this is a utility model, with the instance as the unit of measurement. The model doesn’t care what an instance runs, or on which physical server it runs. We only care that it runs there and, while running, uses a set amount of resources to which we can generically assign a cost. This is the basis of a utility economics approach to infrastructure planning.
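The supplier-side pricing logic just described can be sketched in a few lines. The facility cost, instance count, and margin below are assumptions for illustration only.

```python
# Sketch of supplier-side "utility" pricing: estimate how many instances
# a facility can run, load the total cost, add margin, and price per
# instance-hour. All figures are invented assumptions.

facility_cost_per_year = 5_000_000   # everything: hardware, power, staff
instances_supported = 8_000          # instances the facility can run at once
margin = 0.30                        # target profit margin

hours_per_year = 8_760
cost_per_instance_hour = facility_cost_per_year / (instances_supported * hours_per_year)
price_per_instance_hour = cost_per_instance_hour * (1 + margin)
print(f"${price_per_instance_hour:.4f} per instance-hour")
```

Notice that no individual server or workload appears anywhere in the calculation – the unit of measurement (here, the instance) fully abstracts the physical layer, which is the property the WAC generalizes across instance types and venues.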

Two fundamental elements are needed to update the traditional TCO approach – which, as the acronym indicates, focuses on ‘ownership’ – for a virtual world.

Specifically:

  1. A standardized unit of consumption and capacity measurement that lets us quantify infrastructure from a utility perspective, greatly simplifying the analysis

  2. A ground-up methodology that abstracts hardware complexity away from application TCO, enabling more precise and standard comparisons

Conclusion 

Traditional enterprise IT ROI/TCO calculators simply don’t provide enough granularity and flexibility in the new shared services / cloud world. “Ownership” is being replaced by “Consumption”. Enterprises, and enterprise service providers, need to adopt newer, more utility-like models to create credible savings and TCO calculations. These models must include a standard unit of measure for IT and enable more precise, abstracted comparisons. Additionally, with the cost of migration falling as a result of the cloud model, it becomes increasingly important to provide ongoing justification for (a) the original migration decision and (b) the ongoing decision to continue with a given provider. Adopting a true utility approach, including a standard unit of measure provided by an impartial third party, is the best way to accomplish this.
