Big Data is Coming After Your Data Center And That’s a Good Thing
There’s a nasty little irony in the world of IT. While business units seek IT’s help in building data-driven metrics programs that deliver deep analytics into how to improve the business, IT itself has, in many cases, yet to turn those same tools inward. Sure, we have all kinds of monitoring tools that can flag issues as they arise, but we have yet to take a real deep dive into analytics that can help us genuinely improve what we do and, more importantly, predict where we need to go.
But when you think about it, few businesses are in the business of data center analytics, so it’s unlikely that enterprises will hire data scientists just to improve their own operations; the net effect on the bottom line would be negligible.
CloudPhysics Operations Management Platform
To close this gap, CloudPhysics has jumped into the market with a SaaS-based platform that helps IT departments leverage big data to simulate and model data center activity. After all, a leaner, meaner IT department translates to an improved bottom line.
However, this ability requires significant domain expertise and a great deal of data that can be mined for analytical gold. To that end, CloudPhysics gathers data points across its entire customer base and applies that information to new systems as they come online. With traditional tools, customers must wait hours, days, or even weeks while the monitoring tool establishes an operational baseline. With CloudPhysics, that massive existing data set enables actionable information to be returned within minutes.
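The baseline problem can be illustrated with a toy sketch. This assumes a simple mean/standard-deviation baseline (one common approach to anomaly thresholds, not CloudPhysics’ actual algorithm), and all numbers are hypothetical:

```python
# Toy illustration: a freshly installed monitoring tool must accumulate
# samples before it can flag anomalies, while a fleet-informed prior
# yields usable thresholds immediately. All numbers are hypothetical.
import statistics

def local_baseline(samples, min_samples=100):
    """Baseline built only from this site's own history."""
    if len(samples) < min_samples:
        return None  # not enough local data yet -- no alerting possible
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    return (mean - 3 * stdev, mean + 3 * stdev)

def fleet_baseline(fleet_mean, fleet_stdev):
    """Baseline seeded from aggregate statistics across many customers."""
    return (fleet_mean - 3 * fleet_stdev, fleet_mean + 3 * fleet_stdev)

# A brand-new deployment with only 5 local samples of some metric:
print(local_baseline([42.0, 40.5, 43.1, 41.7, 39.9]))  # None -- keep waiting

# The same deployment, seeded with fleet-wide statistics:
print(fleet_baseline(fleet_mean=41.5, fleet_stdev=1.2))  # usable immediately
```

The point of the sketch is the asymmetry: the first function cannot answer at all until enough local history accumulates, while the second answers on day one.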
The kind of data that CloudPhysics collects doesn’t fall under compliance-level data, so security is not a worry. The company gathers only data that the customer allows, and only from vCenter. While the NSA has made many people wary of the term “metadata,” metadata itself is not harmful; how it is used can be, and CloudPhysics is very mindful of this fact. In addition, data that comes into the CloudPhysics environment is anonymized to further protect customers.
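The article doesn’t describe CloudPhysics’ actual anonymization scheme, but one common technique is to replace identifying fields with salted one-way hashes before data leaves the site. A minimal sketch of that idea (the field names and record shape here are hypothetical):

```python
# Sketch of one common anonymization technique (salted one-way hashing).
# This is an illustration, NOT CloudPhysics' documented implementation.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # kept on the customer side, never uploaded

def pseudonymize(identifier: str) -> str:
    """Map an identifier (hostname, VM name, ...) to a stable pseudonym."""
    digest = hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()
    return digest[:16]  # shortened token; same input -> same token

record = {"vm_name": "prod-db-01", "cpu_ready_ms": 125, "iops": 4300}
# Strip identity, keep the metrics that make the data analytically useful.
anonymized = {**record, "vm_name": pseudonymize(record["vm_name"])}
print(anonymized["vm_name"] != record["vm_name"])  # True
```

Because the salt never leaves the customer’s environment, the pseudonyms are stable enough for trend analysis but cannot be reversed by anyone holding only the uploaded data.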
David Davis interviews CloudPhysics CEO John Blumenthal
With a virtualized infrastructure, you’ve enabled mobility, encapsulation and density that didn’t exist in the physical world. This structure enables new kinds of execution opportunities, but traditional systems management approaches have remained rooted in a static and siloed world that is largely reactive in nature.
CloudPhysics views the world through their big data vision, defined as a series of tiers, each listed below:
- QoS – hard to get
- SLAs – hard to get
Getting to the two most critical automation levels requires a comprehensive understanding of the environment. At its core, IT simply must provide greater value to the organization, but the data center has grown so complex that it’s difficult to see what’s going on. To reach these higher-level opportunities, machines that can analyze and adapt to changes in the environment must be inserted into the management equation. Those changes are driven by data gathered from the environment – and from thousands of other environments – at which point administrators are given guidance on which direction to go.
David Davis interviews Irfan Ahmad, CTO of CloudPhysics – Irfan reveals how the CloudPhysics technical architecture enables entirely new ways of monitoring and managing virtualized infrastructure
CloudPhysics states that they help to “take the guesswork out of virtualization management.” The company’s management tool pinpoints and pre-empts performance, capacity, and availability problems, with some pretty positive outcomes:
- Shorten mean time to resolution. Why waste time focusing on virtual machines that are working just fine? CloudPhysics can help administrators pinpoint issues so that efforts are focused on the right places.
- Lower IT capex and opex costs. With the right information, IT has the ability to ensure that resources – both human and otherwise – are being used to maximum benefit and being procured in ways that make sense.
- Predict potential outages. CloudPhysics shares a story in which the company helped a VMware vSphere customer identify a specific problem that was causing host crashes. CloudPhysics had enough information to help the customer take proactive steps to avoid the issue, eliminating what could have been a downtime event.
Big Data is Growing!
Just how much data is “big data”? Here’s an example:
A data center with 50 physical servers running a total of 500 virtual machines generates roughly 1 billion data points per day – about 25 GB per day, which is then compressed. CloudPhysics is built to scale to quadrillions of data points per day, supporting hundreds of thousands of customers.
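Those figures imply roughly 25–27 bytes per raw data point before compression. A quick back-of-envelope check (the article doesn’t state the compression ratio, so this works only with the raw numbers quoted above):

```python
# Back-of-envelope check of the figures quoted above.
data_points_per_day = 1_000_000_000   # 50 hosts / 500 VMs, per the article
raw_bytes_per_day = 25 * 1024**3      # 25 GB/day, before compression

bytes_per_point = raw_bytes_per_day / data_points_per_day
print(f"{bytes_per_point:.1f} bytes per data point")  # 26.8 bytes

# Scaling to the stated ceiling of quadrillions of points per day:
ceiling_points = 1_000_000_000_000_000  # one quadrillion
ceiling_tb = ceiling_points * bytes_per_point / 1024**4
print(f"~{ceiling_tb:,.0f} TB/day at one quadrillion points")
```

At that ceiling the raw ingest would be on the order of tens of petabytes per day, which makes clear why compression and a purpose-built big data pipeline are not optional at this scale.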
It’s safe to say that the more data that CloudPhysics has, the more reliable the results. This is a case where, as CloudPhysics gets more customers, their product automatically improves.
David Davis walks through custom reporting cards (or queries) created by top virtualization experts and other CloudPhysics community members – cards that virtualization admins can learn from and implement
This is not the first time I’ve written about the ways big data can be leveraged in the data center. Previously, I discussed how Nimble Storage’s big data-based InfoSight tool can be used to improve the overall customer experience. I continue to believe that these data-driven, fact-based tools are one of the most important ways by which IT can improve its own operations, and CloudPhysics and companies like Nimble Storage are leading the charge.
You can try CloudPhysics for yourself at www.CloudPhysics.com.
And for more information, view these related videos: