Since when did software-defined mean that hardware doesn’t matter?
It’s a question worth considering, because hardware really is - or should be - at the heart of a strategically built, efficient, and performant data center. And yet it seems to be much more of an afterthought in today’s complex, software-defined storage environments, especially as data repatriation rises in an effort to redress the balance between on-premises infrastructure and the public cloud.
“Software-defined everything” promises flexibility, scalability, centralization, and reduced capital expenditure - so why scrutinize the hardware? What results can we gain by doing so, and how do they translate into real, tangible value for end users?
Hardware has been commoditized in the data center to the point of oblivion, and in doing so, we’ve gone backward, because it’s been to the detriment of the very results we’re seeking. We’ve been led to believe that you can’t have it all: whatever your infrastructure model, you have to choose between performance, efficiency, and cost - or at best, pick two out of the three. But what if we turned this thinking around and put the hardware back in the spotlight, re-engineering it from the ground up to deliver all three?
The pendulum swings
Imagine a pendulum: on one side, you have a proprietary, on-premises solution. This delivers a high degree of control and removes a lot of the operational overhead, but you’re locked in and pay for it again and again… On the other side, there’s a D.I.Y. setup with open source software (OSS) and commodity hardware. After the initial outlay on cheap hardware, the ongoing software is free! Except those who have struggled with the complexity of such a solution will tell you, with tears in their eyes: it’s not.
Stop expecting spectacular results from generic hardware
In either scenario, it’s common these days to see half-empty racks in data centers. This is because generic appliances are assembled from off-the-shelf components that are not designed, or optimized, for the task at hand. The irony is that the result is large, inefficient boxes that drain power and cooling resources. We’ve been going about it the wrong way: shoe-horning in software in an attempt to force better results. As data continues to grow, and the trend toward hybrid cloud sees ever more data repatriated from the public cloud, this is a situation that - left unaddressed - is unsustainable.
Specialization fosters performance breakthroughs
There is a better way. It’s where the pendulum comes to rest, and it’s what we call “task-specific” hardware: purpose-built for the data center and engineered from the ground up to exploit and optimize the capabilities of leading OSS across storage, networking, and compute.
We’ve demonstrated that when you use task-specific hardware, optimized to run the very best software-defined, open-source solutions (like Ceph for storage), you can achieve significant and meaningful gains. Especially when it comes to storage, density improvements enable you to fully utilize available space without blowing power and cooling budgets.
For example, SoftIron’s HyperDrive Density Storage appliance enables you to deliver 120TB of storage in a single 1U of rack space with a power budget of less than 125 watts. That’s over 5PB of storage in a 42U rack, delivered for around 5kW of power consumption - well within what most data center racks supply - and a potentially significant cost saving in co-location environments where consumption impacts rack rates.
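The figures above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the hypothetical case of a 42U rack fully populated with the 1U appliances described (120TB and 125W each):

```python
# Back-of-the-envelope rack math for the density figures quoted above.
# Assumptions (illustrative): every U in a 42U rack holds one 1U
# appliance delivering 120 TB within a 125 W power budget.

TB_PER_APPLIANCE = 120      # storage per 1U appliance, in terabytes
WATTS_PER_APPLIANCE = 125   # power budget per appliance, in watts
RACK_UNITS = 42             # a standard full-height rack

total_tb = TB_PER_APPLIANCE * RACK_UNITS            # 5040 TB, i.e. >5 PB
total_kw = WATTS_PER_APPLIANCE * RACK_UNITS / 1000  # 5.25 kW

print(f"{total_tb} TB (~{total_tb / 1000:.1f} PB) at {total_kw:.2f} kW")
# → 5040 TB (~5.0 PB) at 5.25 kW
```

In practice you would reserve some rack units for switches and leave headroom in the power budget, so the real-world totals land a little below these ideal numbers - consistent with the "over 5PB ... around 5kW" claim.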
Of course, it’s also about performance. When you truly understand the way the software is architected to deal with a task, you can exploit its potential with hardware that is custom-built to support it. We dive deeper into that here, but in essence, that’s why we’re able to deliver such blistering performance using open source code. Furthermore, when you design and build hardware for a specific task, and you know exactly what code you’re going to run, you can vastly improve the ability to efficiently and intuitively configure, install, and operate your environment.
The end result? All the benefits of a proprietary bundled solution, but without the huge drawback of the vendor lock-in because it’s still open-source, so you can move on at any time.
If you're selling generic hardware into open-source data center deployments, you're missing out on a unique opportunity. You’re missing the chance to differentiate by delivering superior performance and efficiency, as well as a solution that's easier (and therefore more profitable) to deploy and support, yet still delivers all the independence of open source.
You can absolutely continue to run - or sell - Ceph, or any other software-defined platform, on generic hardware. Our bet, though, is that once you’ve experienced the difference that purpose-built, task-specific hardware makes, you won’t want to.
SoftIron just announced that its HyperDrive Storage appliance has been selected by Enterprise Management Associates (EMA), a leading IT and data management research and consulting firm, to receive a Top 3 Award in their “EMA Top 3 Enterprise Decision Guide 2020” report. HyperDrive was selected as a leading storage solution in the “Hybrid Cloud Management – Enterprise Data Services” category for the awards. The report is available here.