A new Singapore partnership marks a change in strategy for Raleigh-based 6fusion.
The cloud software company has reached a partnership deal to deliver its IT infrastructure services to the Asia-Pacific region, teaming up with Singapore-based CloudFX. Financial terms were not disclosed.
Rob Bissett, vice president of product management, took a break from OpenStack Summit in Portland, Ore., on Wednesday to chat about the news.
“So, obviously, we’re based in the RTP area,” he says. “The company has been around for a number of years, and we’ve been primarily focused, historically, on the North American marketplace.”
But that’s historically.
“What we’re finding though, obviously, it’s a cloud-connected world,” he says.
And the demand for cloud services extends beyond North America. To keep up, 6fusion has already deployed a European partner. Asia Pacific is the next logical step – Singapore, in particular.
“It’s a really innovative country,” Bissett says. “The government there is highly invested in being leading-edge, so we were sort of targeting Singapore as the base of an Asia-Pacific strategy for us.”
IDC puts the Infrastructure-as-a-Service (IaaS) market at $6.2 billion in 2013, growing to a $13.5 billion market in 2016. It’s growth 6fusion plans to capitalize on, and it plans on wasting no time.
“Ultimately, what we’re trying to do is build a significant business and demonstrate our scalability,” he says.
6fusion, incorporated in 2008, relocated to the Triangle in August 2010 following a round led by Durham-based Intersouth Partners. The company currently has more than 30 employees, the majority of whom are based in Raleigh. 6fusion plans to hire about 10 people this calendar year, particularly in software development.
Rob Bissett is helping expand 6fusion’s international footprint.
Lauren Ohnesorge covers technology, biotechnology and Durham County.
We are excited to officially announce the expansion of 6fusion operations into Asia Pacific with our partner CloudFX. We have worked together in the region for some time now, and 6fusion technology together with CloudFX consulting and delivery has proven to be a powerful combination.
Here’s more detail on the announcement from the WSJ if you are interested.
I had a great conversation recently with a 6fusion customer, Leo Soto of SotoNets, a cloud service provider, about his decision-making process in selecting an IaaS solution for the company’s cloud backup service. Here are a few notes from that discussion:
What were the primary business problems or opportunities that caused you to look for an IaaS solution?
When we were ready to enter the cloud backup service space, we needed an infrastructure-as-a-service (IaaS) offering that would offer security, easy deployment of workloads, and a true utility model for resource usage. Cloud backup is a resource-intensive activity that requires a safe and secure data center environment.
Who did you evaluate for this solution?
What business problems are you solving?
What has been the impact to your business solving those business problems?
By Douglas Steele, COO, 6fusion
This is the second in a series of posts on cloud brokers, their role in IT infrastructure markets, and their impact on IT infrastructure as a tradable commodity. Here’s a link to Part I of the series.
In Part I of this series, Dr. James Mitchell cuts to the heart of an issue holding back the future of exchange traded IT - there is just no easy way to compare apples-to-apples when it comes to measuring IT infrastructure consumption on different platforms, whether those are public cloud, private cloud, or physical boxes in a private data center. Certainly a big problem, but let’s put that aside for a moment and assume you had such a measurement, one that let you compare apples to apples. Then what?
Let’s outline the players involved in the market and make sure we understand what motivates each of them:
Cloud Buyers – As many studies have shown, including the latest data from 6fusion, cloud buyers are primarily interested in reducing the cost of running whatever applications or workloads they need to run in the cloud. They simply want to know how to lower their costs and maximize their savings without taking on unnecessary technical or financial risk. And they want to know if they are running on the best platform for the technical requirements of their workloads at the best cost. However, as Dr. Mitchell noted, comparing between cloud providers can be quite challenging these days, but we’ll come back to that.
Cloud Providers – Cloud providers offer the public and hybrid cloud platforms that allow the buyers to run their workloads on shared infrastructure. Think AWS, Google Compute Engine, Rackspace, Microsoft Azure, etc. These guys are primarily concerned with maximizing their return on investment (ROI) on the infrastructure they have available. This means providing just enough resources for what is currently being consumed and growing their capacity as required by consumption patterns they witness, while maximizing their ROI all along the way. Their business is all about maximizing capacity utilization and they generally offer bigger discounts for pre-purchased long-term committed usage to help mitigate demand variability in an on-demand world.
Cloud Brokers – These guys see both cloud providers and cloud buyers as their customers. They identify differences between the cloud services that each cloud buyer wants to buy, and how the cloud provider is set up to sell, and then enable cloud buyers to buy how they want to buy and cloud providers to sell how they want to sell, in order to broker a deal. The key word here is BROKER. This is the true definition of a broker applied to the IT infrastructure market. This generally involves taking on pricing and capacity planning risk, and sometimes even currency risk. Where the deal is an almost risk-free arbitrage, they make very slim margins, but they can make good money when they take on significant risk or provide financing to make a deal happen to the benefit of both cloud buyer and provider. You’ll notice this has nothing to do with technology. This is purely a financial broker role. It just so happens they are brokering contracts on technology infrastructure.
So if this is an accurate description for each of their true motivations, then how are each of the parties impacted if cloud brokers are inserted into the value chain? Is there a truly synergistic formula for all of these folks to play together and yet still derive, or perhaps enhance, their individual benefits?
Right, so let’s examine the impact on each party when cloud brokers enter the mix:
Cloud Buyers – They can now find the best infrastructure that is optimized for their specific workload types and save money over the standard “on-demand” retail rates. Where do the savings come from? Cloud buyers currently do the equivalent of buying their airline tickets when they arrive at the airport. Everyone knows this is expensive, because how is the provider supposed to know how much capacity to provide without any advance warning? You get bigger discounts if you pre-book. Cloud brokers aggregate these small pre-bookings into large chunks of committed usage that match the cloud providers’ cheapest deal structure. Cloud buyers can also compare operational costs between providers simply by having the broker in the middle. In short, cloud buyers get increased cost predictability over time and clarity on savings opportunities.
Cloud Providers – Most importantly for cloud providers, they get paid up front, and usually for much larger volumes and longer terms than customers typically want to commit to. They are also removed from painful demand generation and the billing chain/collection hassles, as that shifts to the brokers. In addition to having better control over their capacity requirements and ROI, they would also be in a better place to predict and manage their future capacity needs resulting from selling much of their consumption in advance of providing it.
Cloud Brokers – Cloud brokers inject higher levels of liquidity in the markets by attracting as many cloud buyers and cloud providers as possible, benefiting all parties involved with lower costs, higher predictability, and more opportunities to meet a diverse range of cloud requirements. You can think of a cloud broker as a professional Tetris player, using pricing signals to arrange blocks of future usage commitments into continuous blocks to purchase at a discount. As cloud brokers gain traction in the cloud market, they are keenly positioned to help both providers and buyers manage the risk of sudden economic events that may influence public cloud capacities and pricing over future terms.
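The aggregation role described above can be sketched in a few lines of code. This is a hypothetical illustration, not 6fusion’s or any real broker’s pricing model; the rates, discount tier, and block size are invented for the example.

```python
# Hypothetical sketch: a broker aggregates small buyer pre-bookings into
# provider-sized committed blocks to capture a volume discount, then sells
# to buyers at a rate between wholesale and retail. All rates and sizes
# here are illustrative assumptions.

ON_DEMAND_RATE = 0.10   # $ per unit-hour at retail (what buyers avoid paying)
COMMITTED_RATE = 0.06   # $ per unit-hour for large pre-purchased blocks
BLOCK_SIZE = 1000       # units per committed block the provider will sell

def broker_margin(buyer_bookings, buyer_rate=0.08):
    """Aggregate small bookings, buy committed blocks, return broker margin."""
    total_demand = sum(buyer_bookings)                  # units of future usage
    blocks_needed = -(-total_demand // BLOCK_SIZE)      # ceiling division
    cost = blocks_needed * BLOCK_SIZE * COMMITTED_RATE  # broker's wholesale cost
    revenue = total_demand * buyer_rate                 # buyers still beat retail
    return revenue - cost                               # broker keeps the spread

# Six buyers' forecast usage; the broker buys two full blocks (2000 units)
# but has only sold 1850 of them -- that unsold capacity is the risk it carries.
bookings = [120, 450, 300, 280, 90, 610]
print(broker_margin(bookings))
```

Note how the margin depends on how tightly the aggregated demand fills the committed blocks, which is exactly the “Tetris” arrangement problem described above.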
This sounds unmistakably like the foundations of a public cloud commodity marketplace, or wholesale trading of IT infrastructure, which provides many benefits for all those who participate in it. Most industry insiders will tell you this apparently non-fungible, but already tradable, commodity will find its way to exchange floors far sooner than most people think.
6fusion CEO and Co-Founder John Cowan wrote last year:
“When the modern enterprise or resource supplier can apply the principles of financial trading to the IT industry we are going to see a force capable of completely redefining everything we currently think we know about the business of technology delivery.”
The answer to how that will happen relates to the issue we put aside earlier in the post:
There is just no easy way to compare apples-to-apples when it comes to measuring IT infrastructure consumption on different platforms, whether those are public cloud, private cloud, or physical boxes.
You no longer have to put that question aside and I’ll tell you why in my next post, but here’s a hint.
By Douglas Steele, COO, 6fusion
This is the first of a series of posts on cloud brokers, their role in IT infrastructure markets, and their impact on IT infrastructure as a tradable commodity.
First, let’s try the following exercise:
1) Open your favorite search engine
2) In the search box type “cloud broker”
3) Open up a new tab in your browser
4) Go back to the same search engine
5) In the search box type “commodity broker”
Did you see the difference? Let me explain.
It seems ‘cloud broker’ has become one of the many concepts du jour in cloud lately. Moreover, companies that do anything related to cloud services are calling themselves a ‘broker’ in an apparent attempt to capitalize on the opportunity to help customers connect to many cloud providers via a single web portal.
Over a year ago, 6fusion’s visionary founder and CEO, John Cowan, had this to say about the emerging concept of cloud brokerage:
“Cloud brokers will focus on the business of compute rather than the technical organization of compute. And the business of compute has nothing to do with cloud computing or the technology driving this revolution. The business of compute is about the commoditization of compute, network and storage infrastructure.”
Now, go back to your search results in Step 2) above. Are any of the companies in your search truly cloud brokers?
Software companies that claim to intermediate cloud providers and customers are not cloud brokers. They are cloud resellers or integrators. There is a subtle but very significant difference here. Resellers fulfill a service request from a customer and take a commission, and integrators translate data between cloud services. Brokers, on the other hand, assume a risk position by acquiring supply and make their margin on their ability to carve sell-side transactions into smaller, premium buy-side contracts that match unique market demands.
Dr. James Mitchell, CEO of Strategic Blue, which is launching one of the first real cloud broker businesses in the market, called Cloud Options, argues that true cloud brokers are financial brokers, not resellers or integrators:
“The guys that are trying to sit in between the cloud suppliers and the cloud buyers in the billing chain, allowing each to buy or sell on a financial deal that suits their own circumstances, are true cloud brokers” says Mitchell. “They are the guys that hedge against the risk of changing expectations of future cloud pricing and buying on larger volumes for longer durations, only to further break those volumes into smaller blocks that match a buyer’s forecast cloud needs.”
Now, go back to your search results in Step 5) above. Is the picture getting clearer?
Cloud brokering is about enabling the financial intermediary; it is not about the underlying technology to resell or interconnect different cloud providers.
Dr. Mitchell admits one of his biggest obstacles has been “the very painful exercise to compare pricing between providers to help customers select a cloud provider which meets their needs financially as well as technically, so they don’t just give up on public cloud and go back to a DIY approach. There are just so many factors to correct for; you need to be a rocket scientist to figure this stuff out,” Mitchell muses.
Ironically, what Dr. Mitchell needs is rather simple: “We really just want to know how much cloud a customer uses now, and expects to use in the future, and then offer them a pricing deal that suits their business requirements,” he says.
More on that in a future post…
6fusion surveyed over 200 IT decision makers and influencers in January 2013 on their cloud adoption for the year and IT metrics they are using to measure success. Survey respondents included technology leaders from large enterprises, small and mid-size companies, government, and service providers from 14 different countries around the world. Check out a quick summary of the survey results in the infographic below.
By Doug Steele, COO, 6fusion
I just read an interesting article highlighting the results of a study showing that cloud ROI is becoming more challenging to calculate as an IT metric. The article concludes, “We must develop standard Cloud metrics and ROI models, so that they can have instruments to measure success.”
I could not agree more – this is exactly what is needed for demonstrating an effective return on IT investment. However, it’s not just about the cloud. We need an IT metric – a universal standard for IT resource consumption – that allows organizations to compare and contrast infrastructure inside and outside their organizations.
There’s no question this is a complex problem, only made more complex by the fact that the primary units of compute are all measured in different units: CPU in MHz, memory in MB, disk I/O in KBps, network I/O in kbps. Although this is a complex problem indeed, it is far from insurmountable.
In fact, such a standard unit of measure does indeed exist today, which can take into account all of the key IT metrics I mentioned above in a consolidated single unit of measure. This IT metric, known as the Workload Allocation Cube (WAC), normalizes IT consumption data across heterogeneous technologies, platforms, and geographies, enabling true apples-to-apples analyses across environments.
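The WAC’s actual formula isn’t spelled out here, so the following is only a sketch of the general idea: express each resource as a fraction of a reference amount, then combine the fractions into a single number. The reference values and equal weighting are my own illustrative assumptions, not 6fusion’s published WAC definition.

```python
# Illustrative sketch of normalizing heterogeneous resource metrics into a
# single consumption unit, in the spirit of the Workload Allocation Cube.
# The reference capacities and equal weighting are assumptions for the demo.

REFERENCE = {            # one "unit" of each resource, in its native metric
    "cpu_mhz": 1000.0,   # CPU in MHz
    "mem_mb": 1024.0,    # memory in MB
    "disk_kbps": 500.0,  # disk I/O in KBps
    "net_kbps": 500.0,   # network I/O in kbps
}

def consumption_units(sample):
    """Convert one raw measurement sample into a single normalized value."""
    # Express each metric as a fraction of its reference, then average,
    # so no single resource dominates the combined unit.
    ratios = [sample[key] / ref for key, ref in REFERENCE.items()]
    return sum(ratios) / len(ratios)

# The same calculation applies whether the sample came from a public cloud
# VM, a private cloud, or a physical box -- that is the whole point.
workload = {"cpu_mhz": 2000.0, "mem_mb": 2048.0, "disk_kbps": 250.0, "net_kbps": 1000.0}
print(consumption_units(workload))
```

Once every platform reports in this one unit, the apples-to-apples comparisons discussed throughout this series become straightforward arithmetic.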
Let’s come back to another point from the article:
“…cloud has immense potential for organizations, but efforts to deliver cloud-based solutions need to be ‘supported by proper instrumentation of the financial parameters of cloud services, so that the architecture, development and operations professionals can keep the enterprise on course.’”
With the IT metric I described, it’s possible to quantify the total capacity of resources available in any given infrastructure stack and apply financial modeling to that infrastructure to determine a per unit cost of compute. So instead of disconnected resource capacities we could now understand the true computing capacity of the whole environment, including internal and external infrastructure, as a single unified resource. But this is only truly useful if we can apply this same standard of measurement to actual workload consumption.
One last quote from the article:
“We need to track the cost of cloud and the returns realized on a continuous basis in order to be effective cloud consumers realizing business value for our shareholders”
Completely agree. So by applying this IT metric against actual workload consumption in real time, we’ll measure exactly what each workload consumes in CPU, memory, storage, disk and network I/Os, then convert that to a single unit of measure. By doing this on a “continuous basis” we can begin to understand the workload consumption patterns and workload classifications in the same unit as that of the capacity. By using the same standard unit of measure for both supply and demand we are able to determine exactly how much supply we have in total and how much of it we are actually consuming at any given time.
Wow, now that really sets the stage for making it easy to calculate the ROI, does it not? I’m sure you’re asking, “But what about the costs?” Of course we cannot complete a proper ROI calculation unless we understand the costs involved in generating the compute supply we have available. The real question is: which costs should I include?
Great question. This can be either a very simple exercise or a much more complex one, depending on why you are interested in performing the ROI analysis in the first place. It can be as simple as adding up all your hardware costs for the entire infrastructure stack and then applying that total against your total available capacity, giving you a cost per standard unit of compute. However, if you’re more interested in seeing the complete operational picture, a more involved cost analysis would include total acquisition and operating costs, such as the people necessary to run the infrastructure, the power costs to keep it running, etc.
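The simple end of that costing spectrum can be shown in a couple of lines. The figures below are made up for illustration; only the arithmetic, total cost divided by total normalized capacity, reflects the approach just described.

```python
# Hypothetical sketch of cost per standard unit of compute. All dollar
# figures and capacity numbers are invented for the example.

def cost_per_unit(total_cost, capacity_unit_hours):
    """Blended cost of one standard unit-hour of compute."""
    return total_cost / capacity_unit_hours

# Simple version: hardware costs only.
hw_cost = 120_000.0    # annual hardware spend for the whole stack
capacity = 400_000.0   # normalized unit-hours available per year
print(cost_per_unit(hw_cost, capacity))            # $0.30 per unit-hour

# Fuller version: fold in operating costs (staff, power, etc.).
ops_cost = 60_000.0
print(cost_per_unit(hw_cost + ops_cost, capacity)) # $0.45 per unit-hour
```

Either number, multiplied against metered workload consumption in the same unit, yields the cost side of the ROI calculation.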
Most organizations’ ROI analyses will fall somewhere between those two ends of the spectrum. Regardless of your costing approach, using a single unit of measure gets you the important details you need for an ROI analysis and goes even deeper by showing you exactly how to optimize your infrastructure based on the exact usage patterns of your consumption.
By John Cowan, Co-Founder and CEO
When considering the tectonic shift we are seeing toward IT in the cloud, it’s important to remember a few points:
- Fact 1: Salesforce.com created a disruptive technology movement by delivering enterprise software-as-a-service (SaaS), changing forever the dynamics of new entrants competing against entrenched incumbents. Recall Salesforce.com’s premise was NOT about better salesforce automation software. Instead, their approach was all about “No Software”.
- Fact 2: SaaS-oriented companies generate exit valuations that are, on average, more than 75% higher than those of traditional on-premise software companies. Valuations are indicative of where an industry is going, not where it has been.
It was 2006 when Delano Seymour and I began crafting the prototype behind UC6, 6fusion’s centralized and universally metered platform to access the cloud, built for infrastructure-as-a-service (IaaS) buyers to fully leverage suppliers’ IT infrastructure resources in the cloud. Around the same time an entire raft of other software companies were springing up to say “hey, buy my software and you too can have a cloud!”
We couldn’t for the life of us figure out why anyone in their right mind would architect cloud enablement software in the classic enterprise stack framework — known as ‘on-premise software’.
Fast forward five years and I am thankful for the important decisions we made to carefully architect the IaaS version of Salesforce.com.
Selling greenfield technology is a hard thing to do. There simply are not many enterprise IT buyers without the risk aversion associated with doing a big project with little-to-no historical baseline for success. And cloud has been as greenfield as it gets. As a buyer of on-premise cloud software, why would I want to take the risk of buying, racking and stacking more hardware, and provisioning and maintaining more software, when everyone from my CIO to every IT analyst in the world is telling me that IT in the cloud is the future of IT service? Buying cloud enablement as SaaS is a risk mitigation strategy for even the most conservative of enterprise IT buyers.
FUD sellers in the on-premise world often point to the IT security concerns arising from the idea of multi-tenant software. Customers need to do their due diligence and not sacrifice things like security on the altar of general IT cost savings. However, I think the generation of enterprise SaaS companies that followed in the wake of the Salesforce.com movement have proven beyond a shadow of a doubt that any potential IT security concerns are but a fleeting obstacle to selling.
The rash of acquisitions in the cloud enablement software field is very telling.
All great examples of how building on-premise software in the cloud era failed to generate large enterprise value. All three companies were acquired for less than $150M. DynamicOps, for instance, sold out to VMware for what most believe to be something in the range of $125M to $150M. DynamicOps raised $16.3M from investors over the course of its four-plus years in existence. Even at the top end of that exit price, it represents less than a 10x return to investors.
The valuations and prices paid for SaaS companies focused on IT in the cloud are considerably higher than those of their on-premise counterparts, as evidenced by Salesforce.com, Box, and many more. Acquirers highly value subscription revenue models. Moreover, the cost to support and scale SaaS delivery is exponentially more efficient than building software the way our parents did.
So, if you are thinking about building the next great cloud app, take the time to figure out how your business model will work in a SaaS architecture. Your investors will thank you later.
By Delano Seymour, Co-Founder and CTO, 6fusion
So, you’re interested in taking advantage of the cost of the cloud, turning your capital expense into an operating expense, thus reaping the benefits of the pay-as-you-go billing and IT consumption model. You have identified a portion of your IT applications that should see the most benefit from the cloud and you have selected a cloud platform to put them on. You think you’re ready to deliver huge value to the business, but wait…
There are a few unanswered questions:
- Cost of the Cloud – How much should I allocate in my budget for the cost of the cloud?
- IT Cost Allocation – How do I assign the right portion of the new IT costs to the right cost center?
- IT Cost Analysis – How does it compare to the capital expense we have been paying to date?
Let me highlight some of the issues you will run into when trying to answer these questions:
- You will most likely not know ahead of time what your cloud cost will be and you will have to wait until the end of the month with fingers crossed to understand how much it will cost you for that month.
- You will have to translate the new billing model of the cloud-based platform into your existing systems and procedures to effectively allocate the costs to the right groups in your organization.
- You will not be moving all of your internal applications and services to a cloud-based platform, leaving you with an even more heterogeneous environment and multiple methods for calculating your costs across different IT venues and platforms.
Sound complex? Three things can be great assets in helping to make this simple: IT profiling, utility metering, and a single unit of measure.
Profiling takes your existing information system – the network, hardware, and software resources required to deliver a business function – and converts the usage into a single, easy-to-use, easy-to-understand value. With profiling you have a simple and easy way to understand the consumption patterns of your applications, giving you the power to effectively predict the cost of running those applications on a cloud-based platform. My colleague, Kyle Bush, wrote a great blog on IT profiling if you want more detail.
IT Utility Metering
Utility Metering takes profiling to the next logical step, tracking the consumption of IT resources as they are running, providing a complete real-time picture of the consumption trends, changes, and patterns, with a simple and effective way to track usage across physical, virtual and cloud-based IT resources. Allocating cost becomes a simple matching exercise.
What makes this picture complete is the addition of a third, but very important, ingredient: the single unit used to measure the consumption of IT resources during both the profiling and metering processes. A universal measure of consumption is the glue that holds metering and profiling together. It’s the foundation that lets you understand the true resource consumption of your applications, and its effect on IT cost truly simplifies the cost of the cloud. When combined with profiling and utility metering, it gives you a complete picture of your IT resource consumption and, more importantly, the cost of the cloud for your organization.
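As a rough illustration of the “matching exercise” that metering in a single unit enables, the sketch below splits a monthly cloud bill across cost centers in proportion to metered units. The department names and amounts are invented for the example.

```python
# Illustrative sketch: with every workload metered in the same unit,
# allocating a monthly bill to cost centers is proportional division.
# Department names and figures are made up.

def allocate(bill, usage_by_dept):
    """Split a bill across departments in proportion to metered units."""
    total_units = sum(usage_by_dept.values())
    return {dept: bill * units / total_units
            for dept, units in usage_by_dept.items()}

# Metered consumption per cost center this month, in the standard unit.
usage = {"finance": 1200.0, "marketing": 300.0, "engineering": 2500.0}

shares = allocate(8000.0, usage)  # $8,000 monthly cloud bill
for dept, cost in sorted(shares.items()):
    print(f"{dept}: ${cost:.2f}")
```

Because the same unit also meters internal physical and virtual resources, the identical allocation logic works across the whole heterogeneous environment, not just the cloud portion.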
More on this topic to follow in future posts. In the meantime, sign up for the 6fusion platform now and get started simplifying your cost of the cloud today!
By John Cowan, Co-Founder and CEO
Ilyas Iyoob recently posted his thoughts on the idea of cloud consumer demand driving the market toward an auction- or bid-style marketplace. You can read it here: http://blog.gravitant.com/2013/01/28/who-controls-the-cloud-market-providers-or-consumers/
The point Ilyas strives to make is that since supply outstrips demand for cloud resources it makes sense that demand should control the balance of power in the buying relationship by bidding its business similar to the way suppliers bid for Wal-Mart’s shipping business.
While I don’t disagree that the concept of IT profiling is key to helping answer the question “how much cloud”, I think the challenge is far more profound.
I made my position well known in this industry last year over a four part blog series (you can also download a PDF of the blog series here) in which I constructed the business case for a future in cloud computing that had more to do with market economics than any one API or technology feature. I painted a picture in which the industry’s future would be dominated by service brokers more than retail cloud services.
Building a marketplace where heterogeneous suppliers can present unused cloud resources is easy. 6fusion has been doing that for five years already, but as every IaaS operator will tell you, the holy grail in this business is not “how to create a cloud”. It is, and always has been, about demand. Without demand, the exercise of creating a marketplace, even with curious buyers sniffing around, is pointless.
It’s pointless because in order to build a market we must think of compute as a true utility. And today, it is not a true utility. Where Ilyas’s position gets sketchy is the idea that consumers can somehow magically ‘bid’ out their demand to suppliers. This is impossible today because consumers and suppliers do not speak the same language. That is, there is zero commonality between how the majority of buyers and the majority of suppliers measure IT requirements and IT resource availability. How far would Walmart get, for instance, if they bid out their demand measured in truckloads and every supplier gave them a reply in some other metric, from miles traveled to fuel consumed to tonnage or, worse, some combination of those?
Let me tell you where Walmart would be. They would be in the exact same position as nearly every Global 1000 buyer when it comes to cloud: building it themselves hoping to drive IT cost savings. If you surveyed the Global 1000 you would find a surprising number of those companies telling you that if they could consume true utility computing they would. But they can’t. I think they would tell you that if they could hedge cloud usage positions on the IT metrics that matter like they hedge their positions on other raw material commodities, they would. But they can’t.
They can’t because the cloud is not “a good or service with no qualitative differentiation” – the standard definition of a commodity. Commodity IT is not here. Not yet, anyway.