CIOs: Market Trends Make Simple Solutions Viable In The Modern Enterprise
Take a look around your data center. How hard is it to manage the various pieces of hardware that are strewn about? For data center administrators, managing the environment may not be all that difficult if they’ve been doing it for a long time, but what happens if those people leave? Is everything reasonably well documented? Is it easy to learn how to use the various administrative consoles that manage different aspects of the environment?
Complexity Runs Rampant
In many cases, unfortunately, the honest answer to these questions is a resounding no. All too often, time-starved IT administrators and CIOs have taken the path of least resistance in implementing new technologies and, over time, we’ve created ‘franken’-data-centers: environments full of little pieces that were originally intended to operate as a cohesive whole but instead work only semi-independently.
What are some of the challenges facing IT organizations?
- Legacy hardware that carries with it challenging management tools and a need for deep levels of knowledge about the managed device or service.
- Poorly documented environments, leading to an inability to look at the environment in a holistic way.
- Monolithic upgrade practices that are often very expensive to carry out. Expanding the environment to meet new needs is often a major challenge.
All of these issues force IT to take its eye off the most important ball in the air: the overall business. Every hour that an IT staff member spends designing a new solution because of an upgrade need is time truly lost. While the expanded environment may be necessary to continue to grow the business, wouldn’t it be nice if that staff member could spend less time on such pursuits and more time on efforts that more directly improve the bottom line?
Think about the plight of the SMB or small midmarket organization. These kinds of companies are often staffed with just the bare essentials. The IT department is often made up of “specialized generalists” – people with aptitude in a particular area who frequently need to stray outside that comfort zone in order to meet the needs of the business.
The Market Is Starting to Understand the Pain
For a long time, it seemed like various vendors didn’t give much thought to the overall IT experience, essentially making it a requirement that IT departments maintain staff with very specialized knowledge. In recent years, though, there are signs that vendors are beginning to embrace simplicity as a way to differentiate themselves from others in the market and from legacy devices. This move toward simpler infrastructure can’t come soon enough and it’s manifesting itself in a number of different ways.
First, vendors are starting to realize that not every organization has hardcore experts in each of the major resource areas – storage, servers, and networking. In fact, the market seems to be coalescing around the idea that the virtualization administrator role is morphing into an overall infrastructure role, as evidenced by the introduction of so many converged appliances that combine storage and compute into a single appliance.
Second, vendors are beginning to understand that, beyond simplifying routine operational tasks, more needs to be done on the support front in order to streamline repairs and improve overall availability of services. Tier 1 companies have provided proactive support services for quite some time, but it’s apparent that this proactivity is making its way into the market much more broadly with newer products.
The Origins of Simplicity… and New Versions of Complexity
Of course, everyone wants things to be as easy to use as possible, but it seems as if virtualization itself kicked off both new waves of simplicity and, at the same time, new waves of complexity, with the latter now being addressed by whole new product categories. On the positive front, VMware drastically simplified the compute resource silo when the company first released ESX Server more than a dozen years ago. With virtualization, rather than having to manage dozens, hundreds, or thousands of physical servers individually, these systems became simple software objects that could be manipulated through a centralized management console and shuttled back and forth over the network between different hosts and even different data centers.
Virtualization also heralded the arrival of an administrator who needed an understanding of all of the various resource areas – compute, storage, and networking. With this came an acute need to understand the real nuances of system I/O patterns in mixed-workload environments, not to mention whole new ways to build data centers.
This is evidenced in the rise of virtualization-specific certification programs that aim to ensure that organizations hire people with the skills necessary to operate this new data center paradigm.
So, it’s been a tradeoff, but stepping back and thinking about it, the net result has been in favor of simplicity. Even while some changes made things seemingly more complex, environments were also getting easier to manage. Sure, needing people with skills from across all resource silos might seem like an increase in complexity, but it also has the benefit of helping to tear down the resource-based silos that permeated so many IT departments. The outcomes are positive, since there doesn’t need to be as much cross-resource arguing going on. At the same time, it’s now much simpler for the business to deploy new workloads, since everything is based on software and resources are simply added in chunks as consumption reaches critical levels.
Today, the market is seeing the simplification trend take off in myriad new ways, from adopting the building style of popular toy makers to ensuring that customer support functions continue to add value long after the sale is made. These opportunities are enabling CIOs to think differently about their data center environments and provide the opportunity to realign staff toward more bottom-line driven business opportunities.
Improved Support Models
Companies like NetApp and Compellent have historically provided extremely customer-centric support models that ensured that customers would have working services, which is critical when it comes to storage. NetApp’s hardware automatically ordered spare parts when a problem was detected, which often resulted in customers receiving hard drives from UPS without even realizing that there was a problem. Compellent’s Copilot service provided customers with an expert partner to help them solve critical storage challenges.
Today’s support models are much more data driven than those customers have seen in the past, but they still have the net result of improving the customer support experience. In fact, companies like HP, Nimble, and Tegile – among many, many others – are building hardware – servers and storage alike – outfitted with hundreds or thousands of hardware- and software-based sensors that send data points back to vendor HQ, where the data is constantly monitored and mined to detect anomalies. Now, when a customer calls for support, the hardware vendor is often already aware of the issue and may have even already begun the resolution process. Further, this kind of data-driven analytics massively improves the overall customer support paradigm, with clear benefits, which I discussed previously in this article.
Even the very fabric of the data center is being afforded new opportunities for simplification, which have the potential to profoundly and positively impact that IT support structure. Modern converged solutions, from those that are built using existing products – Vblock, Dell Active Systems, etc. – to those that are natively converged – Nutanix, Pivot3, Scale, SimpliVity – are providing opportunities to completely rethink how to support what used to be highly complex structures.
In the case of converged systems that bring together existing products, in most cases, that product conglomeration results in a system that uses just one vendor for all support functions. For example, in the case of VCE, even though the product consists of products from VMware, Cisco, and EMC, when customers want to buy a solution or they need support, they call VCE.
As the saying goes, there is just “one throat to choke,” and the product eliminates the potential for vendor finger-pointing. It also means that customers receive a rack of hardware that is pre-tested, pre-validated, and pre-wired, so the customer simply needs to turn it all on. This reduces proof-of-concept and sales lead times and simplifies ongoing support.
In the case of natively converged systems, the resulting appliances are intended to be “units of infrastructure” to which new blocks are simply added as resource needs dictate. These combined compute/storage appliances – dubbed “server SAN” appliances by Wikibon, but also called “hyperconverged” solutions by others – are built in ways that provide customers with great performance while also being particularly economical to procure and support. Again, the support paradigm is dramatically simplified since, ultimately, the goal is to see the full data center migrated to the same vendor, providing a single point of support contact and fewer discrete pieces of hardware to manage. These solutions can generally coexist with existing data center structures, allowing customers to phase them in on a preexisting replacement schedule. In general, these particular solutions carry relatively low costs of entry and are scalable in small chunks that don’t carry major expense.
These new architectures also take aim at some of the complexity that was introduced with virtualization. As mentioned previously, while virtualization has simplified things, it also introduced new opportunities for complexity, particularly due to virtualization’s reliance on shared resources. This sharing has led to unforeseen resource constraints that create major challenges when it comes to virtualizing tier 1 and I/O-heavy applications. These new solutions include the hardware and software necessary to enable better overall management of I/O-intensive workloads, bringing such workloads well within the realm of possibility for virtualization.
It’s an exciting time to be a CIO, but also a challenging one. With so much change hitting the vendor landscape, it can be difficult to understand the full breadth and depth of opportunities that exist. As such, it’s imperative that CIOs focus not on individual products and trends, but instead define intended outcomes before scanning the market for solutions that fit them. To that end, CIOs should concentrate as much as possible on driving the complexity out of their existing systems and moving toward simpler, more business-friendly services. Such an approach will ultimately allow CIOs to focus less on “bits and bytes” and more on the bottom line.