5 Tips for landing a job at a top tech startup

By Jocelyn DeGance Graham, SVP Market Development

I’ve spent the last several years immersed in startups and startup culture, helping companies grow their thought leadership, increase their brand awareness, get accepted into prestigious startup incubators, and raise capital. What I can say with absolute certainty is that the tech startup market is exploding, and there’s no better time to jump in, especially if you are based in one of the major tech centers like Silicon Valley, Seattle, Portland, or Research Triangle Park (RTP).

Searching for a job at a top startup is NOT the same as looking for a position in the enterprise. First, if you are applying for the job by sending in a resume, your time is probably better spent catching up on Game of Thrones. Second, you need to be conducting a very targeted campaign (notice I said ‘campaign’, not ‘search’). Securing a position with a top startup is far more like running for office than looking for your car keys.

In the same way that the startups you are targeting are laser-focused on creating a brand, you need to be doing the same for yourself, creating the brand of You. Like it or not, this is the reality. In my current role, for example, I was hired because my company kept running into my clients, who, when asked “who does your marketing?”, pointed to me. That’s the kind of recognition you are looking to achieve.

So whether you are seeking a technical or business role, here are a few tips that will radically improve your chances of landing that killer role and set you apart from the other candidates:

  1. Develop your brand.  How are you going to get a startup’s attention if they’ve never heard of you? Take advantage of all the online tools: be an avid tweeter (of industry and professional topics), start a blog, create a website, reach out to thought leaders and write articles about them, contribute your code, and so on.

  2. Network with the VC firms.  Oftentimes VC firms have an HR resource dedicated to hiring at their portfolio companies.  Get to know these people, take them for coffee, and develop these contacts. The portfolio hiring managers are always highly networked individuals, and even if they don’t have something now, you’ll want to stay on their radar.

  3. Have passion. Your belief that your startup will be the next Google must be 99.9%. I met one woman who got the logo of her startup tattooed on her inner wrist. If you can’t find a startup you feel that passionately about, go back to the enterprise.

  4. Go to industry events.  Get out of your sweats and interact with people…in person.  For those of us in the SF Bay Area, there is no end to the events, forums, conferences, and meetups that are available.  If you live in a city without this kind of activity, consider creating your own meetup.

  5. Stay classy.  Ultimately, if you don’t get the gig, be gracious. The people in the startup game are addicted to the lifestyle, so you will be interacting with the same people for the next decade or more of your career.  Sometimes you’ll be the one looking for a job, and other times you’ll be the one offering it.  This is one case in which I agree with Microsoft CEO Satya Nadella’s advice to let ‘karma’ play itself out.

Now get out there and find the next Google!



A view from the Ingram Micro Cloud Summit: Women in Tech–what’s holding us back?

Recently, I spoke at the Women of the Cloud Forum in Phoenix, hosted for the past three years as part of the Ingram Micro Cloud Summit. The session focused on the status of women in technology and cloud.  Joining me in the discussion were Gina Mastantuono, Executive VP, Finance, Ingram Micro; Lynn Jolliffe, Executive VP, Human Resources, Ingram Micro; and Khali Henderson, Senior Partner, Buzz Theory.

Our session kicked off with a ‘scorecard’ for how the industry is doing in advancing the agenda of women in tech, and to ruin the punch line, it’s bleak–surprise!

Taking a meta view across studies and research, the number of women in executive roles is approximately 20%.  Much has been written about the external barriers and factors behind this, but in my opinion, after almost four years at the helm of a women-in-cloud non-profit and ten as a startup executive, the internal barriers that women place on themselves are far more detrimental than the external barriers that generate the flashy news headlines.

Don’t mistake me, I am not being a Pollyanna and discounting discrimination, harassment, and misogyny, all of which are alive and well, as evidenced by high-profile cases like Pao vs. Kleiner Perkins; however, external factors are just one piece of our collective story.

As I shared at the Cloud Forum, women are the only majority that acts like a minority: we identify ourselves as a minority, and have allowed ourselves to be treated as one. As Eleanor Roosevelt most eloquently stated: ‘No one can make you feel inferior without your consent.’

There has been much written about encouraging women to pursue studies in STEM (Science, Technology, Engineering, and Math); in fact, my nonprofit is a staunch supporter of STEM for girls and manages a scholarship fund. However, while I agree that education is a key ingredient of the remedy, I also recognize that there is a bigger issue: what we have is an ‘ambition gap’.

A 2012 McKinsey survey of more than 4,000 employees at top organizations found that 36% of the men had designs on reaching the CxO level, while only 18% of their female colleagues expressed the same ambition; in other words, twice as many men wanted these roles from the start.  This ambition gap is replicated at all levels of the organization and across generations, with similar findings for Millennials.  In Lean In: Women, Work, and the Will to Lead, Sheryl Sandberg references author Samantha Ettus, who has shared anecdotally that in her child’s kindergarten yearbook, when asked ‘what do you want to be when you grow up?’, a good percentage of the boys answered ‘President’, while none of the girls did.  Findings indicate that these feelings persist for a lifetime, until they ultimately play out in the workplace.

Sure, discrimination suits make the news, but for most women it’s the silent career killers of our own internal monologues that plague us and block us from pursuing our dreams. The Women of the Cloud Forum energized attendees to look within. While at the conference, I slowed down, took pictures, spoke with dozens of women, and reflected on the experiences that being (a woman) in tech has afforded me. I feel fortunate, and would do it all again.

My key takeaway from the Forum? Yes, there are barriers to succeeding in tech, both external and internal, but no one is responsible for managing your career except you. Make it count!

OnFinance, Nasdaq, April 16 in NYC: 6fusion selected to speak

6fusion joins the world’s foremost thought leaders in FinTech at Nasdaq, April 16 at OnFinance.  Details on 6fusion’s session forthcoming.


More about the show: FinTech is a booming entrepreneurial opportunity in the global Silicon Valley. Join us for OnFinance 2015 in the financial capital of the world at NASDAQ’s headquarters in New York City. At OnFinance you will hear the hottest FinTech company CEOs make their pitch to top venture investors and business development executives from big tech and large financial institutions. AlwaysOn events are intimate and social networking friendly, where attendees easily find each other, share ideas and make it happen. If you have a stake in FinTech, you will not want to miss this private insider affair.

Consumption, Not Configuration: The New Standard for Infrastructure Cost Allocation

There are many reasons IT organizations allocate costs by department, but the primary one is to run IT as a business, aligning expenses with revenue to convert IT from a cost center to a value center. Traditionally, departments are allocated or charged for IT services based on things like number of servers, number of VMs, gigabytes of storage, or, even worse, percentage of spend by headcount: metrics that are not transparent and at times appear almost arbitrary. Sometimes politics even comes into play, with those in the strongest position receiving favorable chargebacks or allocations, or at least more favorable methods.

With the emergence of new IT Financial Management standards and groups, a small number of organizations have begun tracking cost allocation by other methods, including server configuration. But even then, this basis is determined by the type and number of servers each department is using, and the total cost of those servers, not actual consumption of the critical services the business operates. Increasingly, the addition of external hosting and cloud has complicated these models with their highly differentiated billing. Rarely are chargebacks based on what departments are actually using or spending, the most accurate gauge for chargebacks. And that’s simply because most companies don’t have the data to support consumption-based cost allocations.

The difference between departments paying for the resources allocated versus the resources they actually consume creates the following inefficiencies:

  • Fixed-allocation models result in significant over-provisioning, which drives under-utilization and investment waste from unused resources.

  • Unused resources are waste that artificially drives up application hosting costs and must be amortized across active users.

  • Economic incentive for users is not aligned with the organization. When actual cost is not a direct factor in infrastructure choice, users will pick the best possible technical solution (without regard to cost) even though that cost/performance ratio may not be in alignment with the business goal for that particular organization.

  • Without actual consumption and cost data, or a common language, strategic planning within and across departments is virtually impossible.

  • There is no visibility for the head of IT or Senior Management on the reasons behind the costs of IT, or any ability to predict how that cost will change over time.

  • Forecasting is unreliable: you can’t use mismatched historical data to predict the future, and you can’t compare against industry averages.

This gets back to a previous 6fusion blog comparing Total Cost of Ownership (TCO) vs. Total Cost of Consumption (TCC). TCO only tells an organization what it is spending on infrastructure, not what it is actually using. Even when TCO can provide cost estimates by department as noted above (not always possible with cloud deployment models), it lacks the detail to improve cost efficiencies or make accurate comparisons. But with the WAC, the standard unit of economic measure for IT, and TCC, organizations have an apples-to-apples comparison of costs across every available infrastructure. TCC enables companies to charge departments back for actual usage, running IT as a business and converting IT from a cost center to a value center.

Consumption-based cost allocation has become even more critical with the advent of cloud-based applications, because business units now have the ability to acquire application services without IT. The pressure is on for IT to defend its costs and services, and to be able to quickly and clearly compare and contrast internal and external services.

IT is moving towards a utility model: you’ll pay for what you use, just like your electric bill. Leading-edge organizations have begun to track cost allocation by consumption. This lets business units and users match costs to the value they are receiving from IT in a transparent way. Those business units will also be able to effectively gauge project profitability, resulting in improved business decision making. IT can respond to new demands with a fair price tag, instead of denying applications or resources due to budget constraints. And consumption-based chargebacks provide the detailed cost information needed to improve IT efficiency. It’s a win-win for IT and the recipients of its services.
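The gap between fixed allocation and consumption-based chargeback is easy to see with numbers. Below is a minimal, illustrative Python sketch; the department names, dollar figures, and cost functions are invented assumptions for the example, not 6fusion’s actual methodology:

```python
# Illustrative sketch: fixed-allocation chargeback vs. consumption-based
# chargeback. All names and numbers are hypothetical.

def fixed_allocation(total_cost, provisioned):
    """Charge each department by its share of *provisioned* capacity."""
    total = sum(provisioned.values())
    return {d: total_cost * p / total for d, p in provisioned.items()}

def consumption_chargeback(unit_cost, consumed_units):
    """Charge each department for the units it actually consumed."""
    return {d: unit_cost * u for d, u in consumed_units.items()}

# Finance provisioned heavily but used little; Engineering the reverse.
provisioned = {"finance": 40, "engineering": 20}     # VMs allocated
consumed    = {"finance": 500, "engineering": 2200}  # units actually used

by_config = fixed_allocation(27_000, provisioned)
by_usage  = consumption_chargeback(10.0, consumed)   # $10 per unit

print(by_config)  # Finance carries two-thirds of the bill despite low usage
print(by_usage)   # charges now track actual consumption
```

Under the fixed model, Finance subsidizes Engineering’s heavy usage; under the consumption model, each department’s bill follows what it actually consumed.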

Eventually, consumption will be the standard and best practice for infrastructure cost allocation, and we will all view IT just like electricity, gas, or water.

Follow us @6fusion and learn more about IT Economic Transparency and measuring your IT Consumption.


6fusion Announces VCE Vblock Certification

By Rob Bissett, 6fusion Chief Product Officer

Today we are announcing that 6fusion has been awarded the VCE Vblock Ready certification, and that 6fusion is joining VCE’s Technology Alliance Program (TAP).

There are a number of reasons why this announcement is important:

First, VCE obviously takes this program seriously, and has set its standards and process accordingly. VCE did a very thorough job with the certification, which was rigorously and thoroughly conducted by a third party. This seemingly small item is critical, and I think it reflects well on VCE: not all technology vendors put a lot of time and energy into creating high standards for third-party technology certification. It should give their users and channel confidence in these certified tools.

Second, and more to the point, the certification reflects an understanding by VCE and others that there is a new and evolving way of looking at, quantifying, and applying economic analysis to infrastructure spend. That they are adding solutions such as ours to their alliance program portfolio speaks to their desire to give their users the best possible tools to inform their investment decisions.

Finally, this signals a shift in the market.  Converged infrastructure has long been touted as the best way to build “internal clouds” and, in doing so, deliver (a) superior quality and performance and (b) cloud-like cost economies at scale.  6fusion is being used every day by enterprises to quantify the economic and cost performance of internal infrastructure and public clouds using a single unit of consumption measurement, and tools like our new Cost Matching Engine are making this even simpler.  By bringing 6fusion into the VCE TAP family, VCE is demonstrating a commitment to cost and performance transparency and throwing down the gauntlet for its competitors to match.  The converged infrastructure market is about to get very interesting!

Stay tuned because this is just another step in the process of transforming how enterprises buy Infrastructure services.  I can assure you it won’t be the last step in the relentless march forward for transparency of infrastructure services.


6fusion Joins VCE Technology Alliance Partner Program and Achieves Vblock Ready Certification

6fusion brings the power of consumption standardization to the VCE ecosystem enabling improved IT infrastructure investment outcomes

RALEIGH, N.C.–6fusion, the company standardizing the economic measurement of IT infrastructure and cloud services, today announced that its UC6 Platform has achieved Vblock Ready certification through the VCE Technology Alliance Partner (TAP) program. Vblock Ready certification means that enterprise IT users can now confidently use 6fusion’s UC6 Platform on VCE converged infrastructure to enable a utility approach to IT, driving improved cost transparency, better forecasting and benchmarking capabilities, and ultimately creating economic transparency in IT.

6fusion delivers on the promise of converged infrastructure by delivering true economic transparency based upon a patented, single unit of measure known as the Workload Allocation Cube (WAC). Analogous to the kilowatt, the WAC provides an economic baseline of IT infrastructure regardless of the underlying cloud or virtualization technology. With the WAC as the basis for comparisons, the IT investment decision making process is de-politicized with unbiased “apples-to-apples” benchmarking, transforming internal business planning, creating optimization, and ultimately helping businesses improve how they run their IT.

“VCE is pleased to welcome 6fusion as a Technology Alliance Partner for VCE Vblock Systems,” said DJ Long, senior director, Technology Alliances, VCE. “As a VCE TAP partner, 6fusion can now integrate its products with Vblock Systems, delivering transformative data center solutions for mutual customers that enable the agility, simplicity and improved economics of converged infrastructure with 6fusion’s expertise in standardized utility metering.”

“Economic transparency means giving Enterprise IT a decision support framework that transcends infrastructure deployment models,” said John Cowan, CEO and Co-Founder of 6fusion. “The industry standard for consumption measurement together with a true market leader in converged infrastructure is a powerful combination that means 6fusion and VCE can now deliver unprecedented value to customers.”

To learn more about 6fusion, or how you can leverage the WAC to create economic transparency in your IT organization, please visit www.6fusion.com or email info@6fusion.com

About 6fusion

6fusion is standardizing the economic measurement of IT infrastructure and cloud services, and providing IT economic transparency to the global market. With 6fusion’s UC6 Platform, organizations can view and manage the Total Cost of Consumption (TCC) of their business services in real time and achieve a higher level of cost optimization, forecasting accuracy and business agility.

6fusion uses a patented single unit of measure of IT infrastructure called the Workload Allocation Cube (WAC) that provides a common view of IT consumption, agnostic of underlying technology or vendors. 6fusion enables baselining, benchmarking and budgeting of business service consumption across execution venues, and supports dynamic cost optimization strategies that keep pace with the realities of today’s heterogeneous, on-demand world. For more information visit www.6fusion.com.

5 Tips for Attracting Top Talent to your Tech Start-up

By Jocelyn DeGance Graham, 6fusion SVP Market Development

The nature of emerging technologies is disruption. We tend to think about this disruption in terms of the technology itself, or sometimes, when the technology is truly ground-breaking like cloud, we think about the reengineering of business and operational models as well. However, we rarely think about the impact on a business’s most critical asset: human capital.

As complex as the technology is, recruiting and finding the right talent proves just as challenging as developing the tech. Demand far outpaces supply, especially in places like Silicon Valley, Seattle, and Research Triangle Park (RTP), where candidates even at middle-tier universities all have jobs waiting for them upon graduation. Dice reported a near-record low technology unemployment rate of 2.7 percent for the last quarter of 2014 (compared with 6.7 percent overall for Q1, according to the Bureau of Labor Statistics).

The race for talent will only intensify as the pace of technology increases, and attracting and retaining top talent is nothing short of a crisis. How, then, can companies, especially fledgling startups, compete for, attract, and retain the talent necessary for their vision to grow?

  1. Substance over gimmicks. In the cut-throat world of tech recruiting, refrain from surface perks, or adding incentives because ‘that’s just what tech companies do’. Don’t invite dogs to work if you don’t like dogs, don’t offer dry cleaning, don’t hire an on-staff masseur. Do offer benefits that have tangible value to your employees.
  2. Stay weird. As Graham Moore shared in his Oscar speech, creating a corporate culture of acceptance, versus an exclusionary ‘bro’ culture, is critical to a healthy environment where everyone contributes and shares. Take a page from the white-hat hacker community: no one cares about your ‘demographics’, only who can write the sharpest code.
  3. Develop tailored (personal) reward systems. Studies show that employees, especially Millennials, don’t list money among their top priorities for joining a company. Instead, take the time to find out what the people on your team value; for example, having lunch once a month with the CEO or another member of the executive team.
  4. Stop fishing in depleted ponds. Rather than chasing all the same candidates as your competitors, think about hiring seasoned talent that other employers are passing over. Candidates in their 50s and 60s can round out a team, and provide valuable coaching and mentorship to younger employees.
  5. Always be recruiting. Like dating, you can meet that special superstar anywhere: traveling on a plane, chatting with a vendor, hanging out at the dog park. In any situation where you are meeting new people, find out what they do and whether they are a match for your organization.

Whether you are recruiting for a senior role, a first hire, or building an entire team, you can solve your ‘talent crisis’ by creating a value-driven culture that focuses on human capital as your company’s most valued asset.

Infrastructure Benchmarks: How do you stack up?

As IT spend continues to grow, infrastructure costs are becoming more material to organizations. Along with cost efficiency efforts, modern IT departments are also expected to act as internal IT consultants, helping their constituents make better IT decisions regardless of whether the internal IT team or a service provider is delivering the capabilities. With technology and cost efficiency trends constantly changing, continuous benchmarking to assess the economics of where your organization is in relation to industry peers is critical.

Infrastructure Benchmark Definition: To evaluate or check infrastructure costs by comparison with an industry average or standard.

Once your organization has established a baseline of infrastructure costs based on Total Cost of Consumption (TCC), broken out by cost per WAC, the standard unit of measure for IT Economic Transparency, you’ll have the information you need to implement both internal and external benchmarking. Why is this important? Because an apples-to-apples comparison of your current costs to those in the market will help you make intelligent decisions about cost efficiencies.

In breaking your consumption out into per unit costs based on WAC units, you can now address questions such as: Should you continue running your applications internally, or outsource? Are all of your outsourced applications cost effective and offering a good value, or are some priced way above average? In the latter case, which platform makes the most sense to switch to?
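As a rough illustration of that per-unit comparison, here is a hedged Python sketch. The dollar amounts, WAC totals, and provider names are hypothetical, and in practice the WAC metering would come from the UC6 platform rather than hand-entered figures:

```python
# Illustrative benchmark: compare an internal cost per WAC against
# hypothetical provider prices. All figures are invented for the sketch.

internal_monthly_cost = 120_000.0   # fully loaded infrastructure cost ($)
internal_wacs_consumed = 800_000.0  # WACs metered over the same month

internal_rate = internal_monthly_cost / internal_wacs_consumed  # $/WAC

# Hypothetical per-WAC prices quoted by two outside providers.
provider_rates = {"provider_a": 0.12, "provider_b": 0.18}

for name, rate in sorted(provider_rates.items(), key=lambda kv: kv[1]):
    verdict = "cheaper than internal" if rate < internal_rate else "more expensive"
    print(f"{name}: ${rate:.2f}/WAC ({verdict})")
print(f"internal: ${internal_rate:.2f}/WAC")
```

With every venue expressed in the same $/WAC terms, the outsource-or-keep-internal question becomes simple arithmetic instead of guesswork.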

When combined with 6fusion’s UC6 SaaS platform, the WAC allows for greater transparency in infrastructure costs by providing:

  • A real time dashboard showing Total Consumption Costs on an application, business service, and/or organization-wide basis.

  • Cost and consumption analytics, enabling benchmarking and forecasting.

  • Detailed insights into actual application consumption across execution venues, enabling multi-faceted cost optimization strategies.

With a WAC baseline, you can then directly compare your costs against averages by industry, region, organization size, or platform. 6fusion aggregates this infrastructure and application consumption data into a centralized repository that enables analysis for enterprises, ecosystems, and the IT value chain as a whole. This makes possible reporting and analysis that compare user data against macro-level data sets such as workload patterns, market pricing, industry trends, and more.

Infrastructure Benchmarking offers four key benefits:

Benefit #1 – Make Intelligent Comparisons: You’ll gain an ability to clearly compare your IT infrastructure usage and value against the market. With these intelligent comparisons, companies will see an increase in cost efficiency and overall profitability.

Benefit #2 – Recognize the Best Options: Your organization will also gain the ability to recognize other infrastructure options, deployment models and technologies that should be considered. You’ll be able to track cost efficiency against all of the available market options – public cloud, hybrid services, etc.

Benefit #3 – Quantify Purchase Needs: You’ll be able to quantify purchase needs based on actual usage, not vendor-defined configurations, enabling apples-to-apples vendor comparisons that allow your organization to optimize terms and unit pricing.

Benefit #4 – Competitive Advantage Through Continual Improvement: Improvement is never a one-time exercise, particularly when it comes to technology. If you are not advancing, you are falling behind. But that’s not all. You need to continually assess your position relative to benchmarks, because even though you may be improving, you may not be improving as fast as the market and your competitors. Infrastructure Benchmarking shows you where you are, but more importantly, where you want to be. The lessons learned, and ongoing infrastructure changes, will provide an organization with a competitive advantage in their market.

With the WAC and TCC, and 6fusion’s centralized data repository, true infrastructure benchmarking is possible for the first time. The apples-to-apples comparisons can help you make smart short-term and long-term IT decisions that benefit the entire organization.

Want to know how your organization stacks up? Contact us at info@6fusion.com or follow us on twitter @6fusion to learn more.

Cloud Luminaries Fireside Chat with 6fusion, Replay

Fireside Chat Recap

Missed the Fireside Chat featuring Bernard Golden’s discussion with 6fusion CEO John Cowan? You’re in luck! The replay is below.

Cloud computing is changing the way we look at IT costs, according to industry experts on a recent Cloud Luminary Fireside Chat panel discussion.  Enterprise IT, traditionally viewed as a cost center, now plays a central role in the delivery of software-driven goods and services. Therefore, companies need to understand their cloud utilization and resulting costs in order to ensure profitability on their business offerings.


Bernard Golden’s Fireside Chat Cloud Luminaries Session Featuring John Cowan, Synopsis

If you missed 6fusion CEO John Cowan’s talk with Bernard Golden, here’s the summary of the discussion:


Led by Bernard Golden, this fireside chat offers valuable insights on how organizations can get a better handle on their use of cloud computing.

Enjoy the full recording below, and highlights further down.

Participating in this panel discussion were:

  • John Cowan (@cownet), Co-Founder and CEO, 6fusion
  • Sharon Wagner (@Sharon_Wagner), Founder and CEO, Cloudyn
  • Owen Rogers (@owenrog), Senior Analyst, Digital Economics, 451 Research

What “Cloud Utilization and Cost Analytics” Means and Why It’s Important

Sharon Wagner: Utilization and cost analytics is the ability to understand how your cloud deployment behaves from a usage and cost perspective. In public cloud specifically, whenever you spin up new servers or databases, or you use more and more storage, you pay as you go, and therefore usage and cost are tied together. When you over-provision your resources, you pay more. Therefore, it’s very important to define a set of cost and utilization as well as performance metrics that will help us understand who used what, when, and how we can reduce cost and improve performance to avoid budget violations or even performance issues in our cloud deployments.

Bernard Golden: One of the key issues around a business offering is, all of a sudden the cost of provisioning a service is really critical. So, for example, if you’re driving a Marketing campaign to generate leads, it makes a big difference if you know that the value of every lead is say $10, whether the cost of getting that lead is $5 or $15. One is probably pretty opportunistic and very good, the other is you’re going to lose money for every one you do. And that gets affected by, in this cloud computing world, how much computing resources you use. This moves from an IT-centric cost management, “how do I reduce my total cost of ownership?”, to “how do I understand what the cost of goods sold is?”.

John Cowan: I would go one step further on that and just say that if the philosophy of cloud computing is to treat compute, network and storage as a legitimate and true utility, then utility economics needs to be able to be applied to this industry in order for buyers and sellers to make sense of it. And in that vein, I would go so far as to say that the concept of TCO (total cost of ownership) really doesn’t make any sense to consumers or users of cloud computing. What we found is that a more appropriate term is total cost of consumption, or TCC. What your business unit owners really care about is their bill of IT for running or hosting an application. They could care less that you got a good deal on your hard drives or your servers. What they care about is what the running cost is of an application that they are either bringing to market or using for an internal productivity suite for their business unit.

Owen Rogers: And for me, it comes back really to the concept of value rather than cost. I think businesses and organizations…don’t mind consuming resources, they don’t mind paying for things, but they naturally want to make sure that they are getting value from what they’re purchasing. So for me, these cost and utilization analytical tools are a way of making sure that if an application is scaling, or if resources are being consumed, that they’re actually delivering some kind of business benefits.

Are Other Companies Tracking Their Cloud Costs?

Owen: We have a commentator network called TheInfoPro, comprising thousands of end users who are consuming IT and cloud services as part of their job. And we survey them all the time to understand what their spending habits are…And it turns out that in one of these surveys, we discovered that 25 percent of enterprise end users weren’t doing any cost analysis at all on their use of cloud services, and for me that’s a really terrifying statistic, because the whole point of the cloud is that it’s variable and scalable, and the on-demand purchase and procurement of cloud means that you should be able to scale it up and down whenever needs change. And I think because a quarter of users aren’t actually keeping track of what they are doing, this explains why we found that 33 percent of users weren’t confident that they had good control over their costs.

The Need for a Standard Unit of Measurement

Sharon: One analogy that we typically use when we talk to clients who are asking us about the value of monitoring utilization and costs…is the analogy of miles per gallon. When you ask them how they measure the value they get from cloud computing, they will give you 10 different parameters or operational metrics, like CPU, utilization, throughput…and when you ask them, “well, can you compare it to the way you buy a car?” it’s very simple. You buy an efficient car by comparing the miles per gallon you’re going to get. So we expect our clients to do the same.


John: It’s interesting to talk about miles per gallon, but what happens when internal IT and nine different vendors out there in the market offering a service define the gallon differently? How do you actually create economic transparency and make interoperable comparisons in a meaningful way between legacy IT and on-demand services?…If I’m the business unit owner or I’m the CIO, how do I make decisions about where apps should go if everybody is measuring this thing differently?…We created a unit of value, the Workload Allocation Cube, representative of consumption across six vectors: CPU, memory, storage, disk I/O, LAN I/O, and WAN I/O…”My internal cost of operation is $X per WAC unit and my supplier’s price is $Y per WAC unit.” Now we’re informing a much more real-time conversation about buying and selling, which is exactly what’s going on in the CIO’s office.
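The idea John describes, collapsing several consumption vectors into one comparable unit, can be sketched roughly in Python. The WAC’s actual formula is patented and not reproduced here; the weights and sample values below are invented purely to illustrate the concept:

```python
# Hypothetical per-vector weights; the real WAC weighting is not public.
WEIGHTS = {
    "cpu_ghz_hours": 1.0,
    "memory_gb_hours": 0.5,
    "storage_gb": 0.1,
    "disk_io_gb": 0.2,
    "lan_io_gb": 0.2,
    "wan_io_gb": 0.4,
}

def workload_units(sample: dict) -> float:
    """Collapse one metering sample (six vectors) into a single scalar unit."""
    return sum(WEIGHTS[k] * v for k, v in sample.items())

# One invented metering sample for a workload over some interval.
sample = {"cpu_ghz_hours": 10, "memory_gb_hours": 16, "storage_gb": 100,
          "disk_io_gb": 5, "lan_io_gb": 5, "wan_io_gb": 2}

units = workload_units(sample)
# Once every venue is metered in the same units, an internal cost per unit
# and a supplier's price per unit become directly comparable.
print(round(units, 2))
```

The point of the single unit is that the per-vector details drop out: whatever mix of CPU, memory, storage, and I/O a workload uses, both buyer and seller can quote one number.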

Should We Use Excel?

Owen: We found that 50 percent of end users were still using Excel to really understand what their cost was. And again, Excel is great when we know what’s going to happen, when we have everything up front, when costs are fixed and we understand the capital and the fixed operating costs. But when cloud costs vary month in, month out and we have different business objectives we want to meet, the whole “spreadsheet in advance” approach falls apart, and that’s why we need tools to really understand what’s going on.

John: To Owen’s point, doing (a standard unit of measurement) on spreadsheets is interesting once. Try to do that in real time, which is the pace of the utility.

Sharon: I think real time is a good point. If you take a look at the nature of applications in the cloud, they're all very, very dynamic. I'll give you some statistics around it. We monitor around 100,000 virtual instances daily, which represents around 10 percent of the Amazon capacity worldwide. 86 percent of them are started and stopped two times a month, which means it's very dynamic. Now try to do a capacity management exercise in an Excel spreadsheet for servers that start and stop two times or more during the month. It's very, very difficult. And that's one of the reasons why you see so many instances and resources floating out there in the air (actually, the cloud), significantly underutilized.

Accountability is Key

Owen: It’s one of those things, you don’t appreciate how important it is until you have that bill at the end of the month when you realize you have this legal liability…Now, if I’m an enterprise and anyone can start up a virtual machine or can consume a resource and can make mistakes, which means they’re left on and bringing no business value, then this is going to mount up to be a lot of costs in the long run. And it only takes one experience of looking on your AWS or your Google or your Microsoft bill at the end of the month, seeing you’ve got all these charges on the bill and suddenly realizing that this is a financial liability. You’ve already consumed those resources. It is now your job, it is your role and your liability to pay them back…Most CIOs, most CFOs realize the importance of understanding and getting involved in their cloud costs.

Sharon: I want to tell you an interesting story from one of our customers…Amazon has a specific pricing model called Reserved Capacity, so you can buy capacity up-front. So the organization reserved capacity for…each one of the business units, and it happened that the reserved capacity had been allocated separately, and some of the business units actually went ahead and spun up additional on-demand resources. So what happened is that…by the end of the year, the customer paid twice: for the capacity they bought and didn't allocate properly, and for all the on-demand resources that people in the different business units spun up without knowing that there was reserved capacity out there…So organizational accountability is an important behavior, or practice, that customers have to implement.
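The double-payment trap Sharon describes is easy to put in numbers. The rates and hours below are assumed for illustration, not the customer's actual figures.

```python
# Illustrative arithmetic for paying twice: reserved capacity bought up-front
# but never allocated, plus on-demand instances spun up by business units
# unaware of the reservation. All rates and hours are assumed.

reserved_hours = 8760            # one instance reserved for a full year
reserved_rate = 0.03             # $/hour effective reserved rate (assumed)
ondemand_hours = 8760            # the same usage bought on demand instead
ondemand_rate = 0.10             # $/hour on-demand rate (assumed)

paid_reserved = reserved_hours * reserved_rate   # paid whether used or not
paid_ondemand = ondemand_hours * ondemand_rate   # paid again for actual use

total = paid_reserved + paid_ondemand
avoidable = paid_ondemand        # would be $0 if the reservation were used

print(f"Paid ${total:,.2f} instead of ${paid_reserved:,.2f}; "
      f"${avoidable:,.2f} was avoidable on-demand spend")
```

With these assumed rates, the unallocated reservation turns a ~$263 commitment into a ~$1,139 bill, which is exactly why allocation and accountability have to travel together.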

On Public Cloud Costs

John: It’s unacceptable for a cloud provider to not have customer APIs into things like consumption and billing. There will come a point in the very near future…when it becomes a minimum standard for cloud operators to provide that kind of data. Until recently in our world, it was really about AWS, who had a very mature API…vs. the well-entrenched install base of the virtualization platforms like VMware or Xen. Just providing the capability for a customer to compare their utilization across a virtualization environment…and give a view as to what that might look like if it were actually hosted on an AWS-type platform…that’s an extremely important data point from the CIO’s decision aspect.

Sharon: Specifically when speaking about scaling in and scaling out in public cloud computing, sometimes I would say that performance issues are mitigated with additional capacity…So just make sure that when you provision a resource in the public cloud, you take a safe-provisioning approach and not an over-provisioning approach, so that as you scale out more and more resources, you still keep the environment provisioned to match your performance needs.

Bernard: So, if you're somebody who drinks one glass of milk a day, you should buy by the quart, whereas if you have a family with six kids, you should be buying it by the gallon.

On Private Cloud Costs

Bernard: At least in my experience, most internal IT organizations…don't really know what all their costs are. And it's particularly exacerbated by the fact that facilities may pay for the buildings and the electricity, somebody else buys the servers, and a third person handles the network connectivity coming through a telecoms group.

Sharon: I don’t think that many people are actually putting effort into it, but they are more into new workloads that they want to introduce and make a decision whether these workloads are going to run internally, OpenStack, VMware…or externally to public cloud, or sometimes using hybrid module of resource bursting. At that point in time, they really need to put their hands around consumption and clearly understand what is the true cost of their application to run in the private cloud. And they are using three different variables…based on the ABC model, activity based costing…So they are doing this exercise, but they are specifically focusing on new workloads that they have to introduce.


John: I can give you a granular cost per unit of consumption for an application every five minutes of every hour of every day. However, if you lie to yourself about the inputs when you calculate your internal cost of production, the result is only as good as those inputs…This is why we monitor and watch the organizations that are emerging now to standardize cost methodology. There are at least three organizations very actively building best practices around what large organizations should be thinking about when they're doing their cost allocation methodologies…So I think you'll see best practices emerge that produce standards around the things that you really want to incorporate when you're calculating your internal cost of production.

Owen: It used to be so easy before, when it was just a Marketing team or a Sales team and they would have their budget allocated. At the end of the month it would be so easy to say, "Oh well, the Marketing team has spent X and the net revenue they brought in was Y." Simple equation. But now organizations are correctly moving away from this functional level and looking at applications and where they are deriving value…In fact, when we were surveying all these end users, one of the revealing statistics for me is that 71 percent of respondents rated the non-IT roadblocks as their biggest barriers to cloud adoption…resistance to change, relative to the cost models, people, time, the organization budget, and regulation and compliance.

On Legacy Systems

Bernard: I was going through a data center with a very large telco, and they were kind of going “this is our new cloud stuff”…so we got onto one floor, and there was a bunch of machines, and I said, “what is all that stuff?” And they said, “Well, nobody really knows, but we don’t want to do anything to it because it might break something.”

John: It’s the expiring nuclear plant approach to IT reduction. Just leave it until it’s absolutely necessary to pour cement on it.

On Hybrid Cloud Costs

Question from the audience: I'm running a hybrid deployment. How can I measure my current costs and identify my projected costs in total?

Sharon: I would go with the consumption first. So let's assume that we know how to meter and measure costs in the public cloud. Let's go into the private cloud for a second, assuming you're on a virtualized environment like OpenStack or VMware…Our install base indicates 75 percent of cloud cost comes from compute, then from storage and networks, so I would focus on compute first. I would try to define a cost per flavour in OpenStack, or per virtual instance in VMware, and multiply it by the number of hours it has been running. That will give us a very good indication of the cost in a hybrid cloud. That's where I would start.
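Sharon's starting point, a cost per flavour multiplied by hours run, can be sketched as follows. The flavour names, hourly rates, and the running inventory are all assumed for illustration.

```python
# Sketch of cost-per-flavour accounting for a private cloud:
# assign an hourly cost to each flavour/instance type, multiply by hours run.
# Rates and inventory below are invented for illustration.

FLAVOUR_COST_PER_HOUR = {   # $/hour internal rates (assumed)
    "m1.small": 0.02,
    "m1.large": 0.08,
    "m1.xlarge": 0.16,
}

instances = [               # (flavour, hours run this month)
    ("m1.small", 720),
    ("m1.large", 300),
    ("m1.xlarge", 100),
]

compute_cost = sum(FLAVOUR_COST_PER_HOUR[f] * h for f, h in instances)

# The transcript cites ~75% of cloud cost as compute, so a rough total:
estimated_total = compute_cost / 0.75

print(f"compute ${compute_cost:.2f}, estimated total ${estimated_total:.2f}")
```

The 75-percent scaling is a crude extrapolation from the figure Sharon quotes; a real exercise would meter storage and network directly rather than inferring them.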

John: We view the starting point to this as having an apples-to-apples equation. Meter your internal environment using our technology and you will have a cost-per-unit established for your internal cloud…and then do the same thing on the public side. So you've got cost uniformity across the hybrid environment, and then you can start to see, as you collect data, where the patterns emerge, along with a forward-looking analysis for projections.

Owen: I think the real challenge in all of this is the variability and the scalability. If you know how many virtual machines and how much storage you're going to use, and you have that at a fixed level for three years, then actually working out the cloud costs isn't so much of a challenge…I think what's more crucial is working out what your likely scalability demands over the next few years are going to be. And as well as having a likely case, you also need a worst case and a best case.

Predictive Analytics/Predictive Modelling and Risk

Bernard: You’ve almost sort of implied that you need to use Monte Carlo simulations for this. How much (are companies doing) in terms of forward projection?

Owen: I think this is all really a question of risk. So, if an organization is fully committed to the on-demand way of doing things, then it can scale up and down and it can take risks knowing it has bet very little on taking that risk. But if an organization is more traditional, needs budgets approved in advance, and has to make a huge expenditure before finding out whether it paid off, then that risk is a lot greater…I think that kind of granular risk management is important, but only if you're the type of organization that isn't fluid enough to cope with taking small risks because you're fully utilizing the cloud and the organization revolves around that consumption.
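A minimal version of the Monte Carlo projection Bernard alludes to, combined with Owen's best-case/likely/worst-case framing, might look like this. All prices and demand figures are invented for illustration.

```python
# Minimal Monte Carlo sketch of forward cloud-spend projection:
# sample uncertain monthly VM demand between best- and worst-case scenarios
# and look at the resulting spend distribution. Numbers are assumed.
import random

random.seed(0)                  # reproducible illustration

price_per_vm_hour = 0.10        # $/hour (assumed)
hours_per_month = 730

def simulate_month() -> float:
    # Triangular demand: best case 50 VMs, most likely 100, worst case 300
    vms = random.triangular(50, 300, 100)
    return vms * hours_per_month * price_per_vm_hour

samples = sorted(simulate_month() for _ in range(10_000))

print(f"median monthly spend ${samples[5000]:,.0f}, "
      f"95th percentile ${samples[9500]:,.0f}")
```

Reporting a median alongside a 95th percentile is the point of the exercise: an on-demand organization can budget near the median, while a budget-in-advance organization has to plan for the tail.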

Sharon: The way we answer questions like "What will my consumption look like?" is to use a baseline of your existing cloud deployment and extrapolate or estimate the projected cost…We get to pretty accurate numbers…I would like to add one more thing to what Owen mentioned on risk. Many companies that are focused on growth will reduce their risk by going to the public cloud…Once they have reached a certain size, they will reconsider that decision and may go back into private cloud and invest in hardware and software. In order to do that, they need to keep their applications portable, so that they can move them back and forth between public and private clouds.

Bernard: Sounds to me like you're making a sales pitch for using a Platform-as-a-Service (PaaS) product…So John,…you might say this is moving toward "you might want to hedge that risk."

John: At the end of the day this is absolutely about risk mitigation and opportunities to de-risk this for both the buyer and seller, which is why we did the deal with the Chicago Mercantile Exchange…to create, effectively, a legitimate spot exchange between heterogeneous providers and a plethora of buyers on the outside. The important point about this is really price discovery…the supply side can sell volume on future capacity and…large-scale buyers or intermediaries can buy on volume and distribute those contracts, and ultimately resell them if they have to, to manage risk, to hedge risk, but most importantly to give the buyer that…data point of price discovery: what is the market price for the type of application pattern that I'm hosting internally or externally to deliver service?

Closing Thoughts

Bernard: If you’re interested in that world that Sharon talked about, being able to migrate applications…I would encourage you to download the Stackato micro cloud or 20GB cluster.

Sharon: Invest in cloud but make sure that you always measure the ROI.

Owen: Watch out for 451 Research’s Cloud Price Index for a handle on how much cloud costing is changing.

John: You can’t have a utility if you don’t have a single unit of measurement.
