Enterprise Storage Guide, Guides

Discover The True Meaning Of “Software Defined”

If you’ve not yet heard about the latest trend hitting the IT industry, it’s time to break out your buzzword BINGO board and make some new additions.  Today, “software defined” is trending across the entire IT spectrum, and it doesn’t look like it’s going anywhere soon.  And that is with good reason.  As one begins to peel back the hype and hysteria surrounding the software defined movement, it becomes clear that there are some laudable goals and outcomes behind this trend.  If capitalized upon, these goals and outcomes can have a tremendous positive impact on the business and on IT.  Make no mistake; there are clear business benefits to be had in the software defined world.

What Does “Software Defined” Mean?

The unfortunate thing about new buzzwords is just how quickly they are adopted by anyone and everyone with a product they want to shove onto the emerging bandwagon.  Such actions dilute and cloud the message, leading to confusion about what something really means, particularly when vendors with no real business in the space do their best to forge a tenuous tie to the concept.

We’re at that point with “software defined”.  There are a lot of people that simply check out when confronted with the phrase because it’s become nebulous.  With that in mind, I want to make clear how I see “software defined” in the context of the data center.

In a software defined data center (SDDC), all infrastructure is abstracted in some way from the underlying hardware – generally through virtualization – pooled, and the services that operate in the environment are completely managed in software.

You may wonder how this really differs from how legacy data centers have been constructed and managed.  After all, when it comes down to it, those environments were managed with software, too, right?  The sections below outline some of the key differences between traditional data centers and what is intended in the software defined data center.

Software Defined Servers

Back in the early 2000s, VMware introduced the world to the software defined server, more commonly called a virtual machine.  In this scenario, workloads no longer run directly on hardware, but instead run on software-based constructs that sit atop the virtualization/abstraction layer.  Administrators now interact with physical hardware only as a means to install this abstraction layer; pretty much all other administration is handled through tools provided by the hypervisor vendor and by third parties that integrate with the hypervisor.  The hardware here has become almost an afterthought.  As long as it’s x86-based and can hold as much RAM as we need, it’s pretty much good to go.  Prior to the days of virtualization, administrators went through painstaking server configuration steps every time they bought new systems.

The benefits of having moved many environments to a virtualized state have been incredible.  Once something has been moved into a full software box, it becomes incredibly easy to manipulate.  This is evidenced by tools such as vMotion, which brought to IT a level of flexibility and availability that was difficult to achieve with physical systems.  Further, tools such as vMotion and Hyper-V’s Live Migration power automated workload placement systems that can be configured to help improve the overall availability of a computing environment and that provide heretofore-unseen levels of flexibility.  Moreover, this software-driven flexibility has enabled numerous new ways to handle critical disaster recovery and business continuity needs and an entire market of vendors has sprung up to help companies leverage their virtualization environments to this end.
More importantly, by abstracting servers from the underlying hardware, organizations now have the opportunity to automate more routine tasks, including the creation and configuration of new virtual machines, among many other opportunities.  Such ability begins to provide a strong foundation on which to automate more IT operations and manage resources more centrally and to help IT focus more on business needs.  These are all critical elements and are underlying characteristics of software defined data centers.
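To make the automation point concrete, here is a minimal, purely illustrative sketch of what provisioning against a pooled abstraction layer looks like in code.  The `Hypervisor` and `VM` classes are invented stand-ins for a real hypervisor API (such as the vSphere SDK); the point is that creating a server becomes a repeatable, scriptable call rather than a hardware project.

```python
# Illustrative sketch: automating VM creation against a pooled
# virtualization layer. Hypervisor/VM are hypothetical stand-ins
# for a real hypervisor management API.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpus: int
    ram_gb: int

@dataclass
class Hypervisor:
    """A toy abstraction layer pooling CPU and RAM across hosts."""
    total_cpus: int
    total_ram_gb: int
    vms: list = field(default_factory=list)

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> VM:
        used_cpus = sum(v.cpus for v in self.vms)
        used_ram = sum(v.ram_gb for v in self.vms)
        if used_cpus + cpus > self.total_cpus or used_ram + ram_gb > self.total_ram_gb:
            raise RuntimeError("insufficient pooled capacity")
        vm = VM(name, cpus, ram_gb)
        self.vms.append(vm)
        return vm

# Routine provisioning becomes a loop, not a trip to the data center:
pool = Hypervisor(total_cpus=64, total_ram_gb=512)
for i in range(3):
    pool.create_vm(f"web-{i}", cpus=4, ram_gb=16)
print(len(pool.vms))  # 3 VMs created without touching physical hardware
```

Note that the capacity check, not an administrator, decides whether the request can be satisfied; that is the kind of routine decision that moves into software in an SDDC.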

Software Defined Storage

Software defined storage (SDS) is a bit harder to nail down than software defined servers, but the fact is that SDS is a key component of the software defined data center and it’s actually the real deal, with a lot of vendors working hard to build SDS-based storage products that can live up to the hype and expectations.

There are some who feel that storage’s hardware-centric nature automatically means that it’s not really possible to consider the resource as software-based anything.  Still others point to the fact that many purported SDS vendors are selling appliance-based hardware.  The thinking goes like this: if the vendor has to sell hardware, it’s automatically not SDS.

The flaw in these arguments is their focus on the underlying hardware.  In most of the hardware/software bundles on the market today, the hardware is based on off-the-shelf commodity components and can be relatively easily swapped out with components from other vendors.  I challenge the reader to attempt to swap out hardware in a proprietary storage system with something not provided by the original vendor.  In most cases, proprietary storage systems have embraced their uniqueness, and there are a lot of custom parts in those solutions.  While that approach worked well for a number of years, as commodity hardware has become more robust, it’s become possible to move into software many of the functions that used to require proprietary custom chips.  This has made some modern storage options much less expensive than systems sold even just a few years ago, and has also made these systems much more inclusive when it comes to features.  Now, implementing high-end features, such as deduplication, is a software chore rather than a hardware one, and many vendors automatically include these features in their default configurations.  It’s a major win for the customer.
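Deduplication is a good example of a feature that once demanded custom silicon but is now a modest amount of software.  The sketch below shows the core idea in a few lines: data is split into fixed-size chunks, each chunk is keyed by a content hash, and identical chunks are stored only once.  This is a toy illustration (a tiny chunk size, no persistence), not any vendor’s actual implementation.

```python
# Minimal sketch of block-level deduplication done purely in software:
# identical chunks are stored once, keyed by their content hash.
import hashlib

class DedupStore:
    def __init__(self, chunk_size: int = 4):
        self.chunk_size = chunk_size
        self.chunks = {}      # hash -> chunk bytes (stored once)
        self.files = {}       # name -> ordered list of chunk hashes

    def write(self, name: str, data: bytes) -> None:
        hashes = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicate chunks are skipped
            hashes.append(digest)
        self.files[name] = hashes

    def read(self, name: str) -> bytes:
        return b"".join(self.chunks[h] for h in self.files[name])

store = DedupStore()
store.write("a", b"AAAABBBBAAAA")   # the "AAAA" chunk repeats
store.write("b", b"AAAACCCC")       # "AAAA" is shared with file "a"
print(len(store.chunks))  # 3 unique chunks stored, though 5 were written
```

Five chunks were written but only three are stored, and reads still reconstruct the original data exactly; production systems add variable-size chunking, persistence, and garbage collection, but the principle is the same.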

VMware’s VSAN has become one of the better-known SDS offerings and, recalling the definition for the software defined data center, this product meets the requirement.  With VSAN, the hypervisor – which hosts the kernel-based VSAN management software – aggregates the storage from across all of the vSphere hosts.  From there, the software pools all of that storage together and then presents it back to the hosts as a single pool of storage on which virtual machines can run.  The storage layer continuously manages this resource pool and provides to the administrator a complete policy framework so that storage resources can be leveraged in ways that make sense to the business.
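The pooling-plus-policy idea can be sketched in a few lines of code.  The example below is a toy model of the concept, not VSAN itself: per-host capacity is aggregated into one pool, and a simple placement policy (here, a replication factor requiring copies on distinct hosts) governs where a virtual disk lands.  All class and parameter names are invented for illustration.

```python
# Toy illustration of the SDS pooling idea: per-host disks are
# aggregated into one pool, and a policy (a replication factor)
# decides placement. Not any vendor's actual algorithm.

class StoragePool:
    def __init__(self, host_capacity_gb: dict):
        self.free = dict(host_capacity_gb)   # per-host free space in GB

    def total_free_gb(self) -> int:
        # What the hosts "see": one aggregate pool, not individual disks.
        return sum(self.free.values())

    def place(self, size_gb: int, replicas: int) -> list:
        """Place `replicas` copies of a virtual disk on distinct hosts."""
        candidates = [h for h, free in self.free.items() if free >= size_gb]
        if len(candidates) < replicas:
            raise RuntimeError("placement policy cannot be satisfied")
        # Prefer the hosts with the most free space.
        chosen = sorted(candidates, key=lambda h: -self.free[h])[:replicas]
        for h in chosen:
            self.free[h] -= size_gb
        return chosen

pool = StoragePool({"host1": 500, "host2": 500, "host3": 200})
print(pool.total_free_gb())            # 1200 GB presented as one pool
hosts = pool.place(size_gb=100, replicas=2)
print(sorted(hosts))                   # two distinct hosts hold the replicas
```

The administrator expresses intent (“two copies of this disk”) and the software layer continuously enforces it; that policy-driven stance is what separates SDS from simply scripting against an array.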

Software Defined Networking

Consider for a moment the traditional network environment.  Distributed throughout the entire organization are devices with varying levels of intelligence, with said intelligence closely corresponding to the OSI model layer at which the device operates.  The higher up the stack the device operates, the more brains it needs to accomplish its duties.  Hubs, which had only enough intelligence to repeat incoming traffic out all of their ports, operate at layer 1 and are rarely used anymore.  At layer 2, switches gain some intelligent switching capabilities, at least for traffic that originates on and is destined for the local network.  Even at layer 2, though, there are a lot of administrative opportunities to improve overall traffic management.  Layer 3 – where routers reside – adds lots of intelligence regarding traffic, including routing between networks, prioritization, and a lot more depending on the router model.  And the process continues up the stack.

As you look at the network, the distribution layer consists primarily of layer 2 and layer 3 devices that make sure that traffic gets to where it needs to go.  Sure, higher layers are important, but those are generally in place for security or application performance reasons, not just to move traffic from point A to point B.  The individual ports through which traffic moves are collectively known as the data plane of the network.

Moreover, these layer 2 and 3 devices each have their own brain.  When changes need to be made to the network, each device is reconfigured to accommodate that change.  Such changes might be the addition of a new VLAN or the implementation of some kind of quality of service.  Just as was necessary with physical servers in the old days, whenever a global change needs to be made, it often requires touching each individual affected device.  The policies and configuration in place on each individual device are collectively referred to as the control plane.  Often, the intelligence in switches and routers is implemented in custom-engineered chips.

Software defined networking operates by decoupling the data plane and the control plane.  In this context, the devices to which endpoints connect – which are generally switches today – get lobotomized.  In essence, their brains are removed from their bodies, but the body continues to function with the device continuing to pass traffic just like it always did.  The device’s intelligence – the aforementioned control plane – is transferred to a centralized controller which manages all of the devices on the network.  Rather than requiring custom hardware to handle networking decisions, the control plane is implemented in software.  As has been the case for other such hardware to software abstraction technologies, this change brings a number of new capabilities and opportunities.

Software is far more flexible than hardware and can be changed at will.  Sure, firmware can be updated in legacy devices, but it’s still a less flexible environment than today’s x86-based server environments provide.  In SDN, the centralized controller handles the management and manipulation of all of the various edge devices.  These edge devices, since they no longer require specialized management hardware to operate, can run on commodity hardware.  In short, edge devices become servers, managed from the central controller.  The edge devices – the data plane – simply carry out the orders that originate from the controller – the control plane.  As was the case with server virtualization, software defined networking implements a software layer atop the entire network.  In SDN, the control plane makes the decisions – basically determining how traffic will flow – and sends orders to the edge devices to manage traffic as prescribed.
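The control/data plane split described above can be sketched in a few lines.  This is an illustrative toy, loosely modeled on the match/action idea popularized by OpenFlow; the class and method names are invented.  The switch holds no policy of its own, only a table the controller pushed down.

```python
# Sketch of the SDN split: a centralized controller computes flow
# rules; "lobotomized" edge switches only apply the tables pushed
# to them. Names are illustrative, loosely inspired by OpenFlow's
# match/action model.

class Switch:
    """Data plane: forwards packets using whatever table it was given."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}         # destination address -> output port

    def forward(self, dst):
        # No local intelligence: unknown destinations are simply dropped
        # (a real switch would punt the packet up to the controller).
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: holds network-wide policy and programs switches."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def install_route(self, switch_name, dst, port):
        # One centralized decision, pushed down to the edge device.
        self.switches[switch_name].flow_table[dst] = port

ctrl = Controller()
sw = Switch("edge-1")
ctrl.register(sw)
ctrl.install_route("edge-1", dst="10.0.0.5", port="eth2")
print(sw.forward("10.0.0.5"))   # eth2  (rule came from the controller)
print(sw.forward("10.0.0.9"))   # drop  (no rule was installed)
```

A global change – a new VLAN, a new QoS policy – becomes a loop over `install_route` calls in one place, rather than a box-by-box reconfiguration exercise.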

The Big Software Defined Picture

Today’s IT departments are being forced to be more flexible and adaptable than ever before while, at the same time, operational costs are scrutinized to ensure reasonable total cost of ownership for all technology systems and assets.  Moreover, businesses want IT to bring operational efficiency to IT’s own operations, just as IT has been trying to do with business units for decades.  During that time, though, many IT departments have built what can often be described as a house of cards that requires endless attention.  Many administrators are required to perform what are repeatable tasks, such as provisioning LUNs, building physical servers, and modifying network hardware to accommodate new needs.

Virtualization and the rise of software defined resources are bringing – or have brought, in the case of server virtualization – to IT tools that help IT become more efficient and better able to meet critical business requirements in a timely manner.  It no longer requires weeks of waiting to stand up a new server, for example.  This process can be handled in mere minutes and, with the right management tools in place, can be accomplished by end users through self-service portals.  As time goes on, the same ease of management will come to the storage and network resource layers.  It’s software – that abstraction layer – that is enabling these opportunities.

Think about this: Before server virtualization, organizations may have had servers all from the same vendor, but from different generations, with each model requiring special handling and different drivers.  In a 100% virtualized data center, every single virtual machine runs atop an identical abstraction layer.  Even if the underlying hardware isn’t identical, this fact is mostly hidden from the virtual machines.  As such, there is little ongoing special handling that needs to take place to manage individual virtualized workloads.

User self-service is also a hot topic these days.  This is partially due to the ease with which business units can get their needs met with a credit card and a cloud service.  That expectation is forcing IT departments to implement systems that help business units achieve lower time-to-value for new services, hence the need for self-service portals.  These kinds of portals, however, depend entirely on being able to interact with the data center’s software layer.  The portal sends direction to the software layer about what users have asked for, and the software layer carries out those instructions within the confines of policies established by the IT organization.  The more resources that exist in software, the more flexible and user-centric/business-centric the IT department can be.
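The portal-to-software-layer handoff can be illustrated with a short sketch.  Everything here is hypothetical – the policy limits, the `handle_request` function, and the `provision` callback standing in for a real call into the virtualization layer’s API – but it shows the key point: IT sets the policy once, and the software enforces it on every self-service request.

```python
# Hedged sketch of a self-service portal enforcing IT-defined policy
# before handing a request to the software layer. The limits and the
# provision() callback are invented for illustration.

POLICY = {"max_cpus": 8, "max_ram_gb": 32}   # set once by the IT organization

def handle_request(user, cpus, ram_gb, provision):
    """Validate a user's request against policy, then provision it."""
    if cpus > POLICY["max_cpus"] or ram_gb > POLICY["max_ram_gb"]:
        return f"denied: request exceeds policy for {user}"
    return provision(user, cpus, ram_gb)

def fake_provision(user, cpus, ram_gb):
    # Stand-in for a call into the virtualization layer's real API.
    return f"vm for {user}: {cpus} vCPU / {ram_gb} GB"

print(handle_request("alice", 4, 16, fake_provision))  # provisioned in minutes
print(handle_request("bob", 16, 64, fake_provision))   # denied by policy, no ticket needed
```

No administrator sits between the request and the outcome; IT’s role shifts to maintaining the policy, which is exactly the broker-of-services posture described below.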


As software defined everything continues to gain momentum, and as these self-service orchestration systems mature and become even more tightly coupled to the software layer, IT can more easily transition itself from a department that constantly iterates the same rote tasks to one that acts as a broker of services for the organization, far more focused on the business than on the underlying bits and bytes of the data center.