The cloud services market is increasingly driven by customers with highly focused IT requirements. For instance, organizations want solutions built to address a variety of edge and distributed computing use cases and, as a result, are no longer prepared to accept homogeneous, one-size-fits-all technologies that fall short of their requirements.
This is an understandable perspective. While cloud computing has become a compelling option for centralized business functions, it has proved less beneficial for organizations reliant on remote infrastructure. Processing and protecting data at the edge is a case in point: some organizations implement cloud-first strategies with no onsite support, with the result that mission-critical remote applications suffer from performance and reliability issues and, as a knock-on effect, cloud contracts become costly and inefficient.
Difficult challenges
In these circumstances, organizations often choose to bring onsite IT infrastructure implementation and support back in house. On the plus side, this can deliver the high levels of reliability and performance they need, but it also resurrects some of the significant challenges the outsourced cloud model was intended to address. These include the costs of implementing hardware, power, and cooling systems at each remote location and, in some cases, the question of whether there is even room to house the technology required at the edge.
On top of these issues, administration and maintenance costs can become prohibitive, irrespective of whether it’s a small organization running one remote site or an enterprise operating dozens. The availability of localized expertise can also be a major challenge, particularly for organizations operating specialized systems where fully trained staff are essential. Even with all of this in place, most remote sites will still need some level of cloud or corporate data center connectivity, and IT teams must decide which data should be stored at the edge, in the cloud, or in their data center.
Organizations that find themselves in this situation can easily end up reliant on a complex and incoherent strategy when what they actually need is a cost-effective approach with the agility to meet their bespoke edge needs. In the past 12 months in particular, this has become an even bigger issue with the acquisition of VMware by Broadcom. A raft of updates to product bundles and new subscription costs, not to mention the axing of several existing VMware partner agreements, has left many customers feeling lost at sea.
The path forward
For organizations operating in sectors such as retail, manufacturing, healthcare, and utilities, these difficulties will be all too familiar. Such businesses rely on access to real-time data to inform their decision-making, adhere to performance standards, and maintain the efficiency of their supply chains.
At the same time, the range and complexity of edge applications is increasing enormously. From patient health monitoring devices and smart shelves and self-checkout in retail, to digital twins used in manufacturing, these innovations are creating huge datasets and putting even more pressure on existing data centers and cloud computing services.
To address these issues, businesses are looking to digital transformation and AI analytics technologies to drive the performance improvements they need. The data being generated at these edge sites is so time-sensitive that AI systems need to be deployed locally if decision-making is to keep pace with operational requirements.
The problem is that there simply isn’t time to send all the data to the cloud for AI processing, so the answer is to implement more of this functionality efficiently at the edge. This is contributing to a major surge in edge investment: research from IDC indicates that worldwide spending on edge computing will reach $232 billion in 2024, an increase of 15% compared to 2023, rising to nearly $350 billion by 2027.
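To make the latency argument concrete, here is a minimal sketch in Python. Every figure and name in it is an illustrative assumption rather than a measurement, but it shows why a tight decision deadline can rule out a cloud round trip and favor local processing:

```python
# Hypothetical latency-budget comparison; the numbers below are assumptions
# chosen for illustration, not benchmarks of any real deployment.

EDGE_INFERENCE_MS = 20      # assumed time to run an AI model on local hardware
CLOUD_ROUND_TRIP_MS = 120   # assumed WAN round trip to a cloud region
CLOUD_INFERENCE_MS = 10     # assumed inference time on cloud hardware
DECISION_DEADLINE_MS = 50   # assumed operational deadline, e.g. a control loop

def meets_deadline(total_latency_ms: float, deadline_ms: float) -> bool:
    """Return True if a processing path fits inside the decision deadline."""
    return total_latency_ms <= deadline_ms

edge_path = EDGE_INFERENCE_MS
cloud_path = CLOUD_ROUND_TRIP_MS + CLOUD_INFERENCE_MS

print(f"Edge path:  {edge_path} ms  -> meets deadline: {meets_deadline(edge_path, DECISION_DEADLINE_MS)}")
print(f"Cloud path: {cloud_path} ms -> meets deadline: {meets_deadline(cloud_path, DECISION_DEADLINE_MS)}")
```

Under these assumed figures, only the edge path fits the deadline, which is the essence of the case for processing time-sensitive data locally.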
A streamlined approach
In practical terms, organizations are addressing these objectives by implementing full-stack hyperconverged infrastructure (HCI) at the edge as part of their cloud strategy. HCI consolidates computing, networking, and storage resources into a single, streamlined data center architecture.
In contrast to legacy approaches that rely on specialist hardware and software for each designated function, HCI uses virtualization to reduce server requirements without impacting performance, delivering a solution equivalent to enterprise-class infrastructure. Because applications run and data is stored at each remote site, connectivity to the cloud or corporate data center can be used according to need, and all of this can be achieved without the hardware architecture and implementation challenges associated with traditional edge technologies.
Of particular benefit, today’s HCI solutions are designed with the limitations of smaller remote sites in mind, including features that simplify the process of connecting edge technologies to cloud services and corporate data centers. The most effective HCI solutions can deliver these capabilities with just two servers and do so without compromising availability or performance, with failover taking place in as little as thirty seconds, a capability that preserves data integrity and keeps operations running. This can be achieved while also delivering the cost benefits of reduced hardware spend.
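As an illustration only, the short Python sketch below shows the kind of heartbeat-and-timeout logic that could drive failover between two nodes within roughly thirty seconds. The health endpoint, intervals, and promotion step are all assumptions for the example, not how any particular HCI product implements failover:

```python
# Minimal two-node failover sketch: the standby polls its peer's (assumed)
# health endpoint and promotes itself after a sustained silence.
import time
import urllib.request

PEER_HEALTH_URL = "http://peer-node.local:8080/health"  # hypothetical endpoint
CHECK_INTERVAL_S = 5     # how often the standby checks its peer
FAILOVER_AFTER_S = 30    # promote after this long without a healthy response

def peer_is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the peer answers its health check in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor() -> None:
    last_healthy = time.monotonic()
    while True:
        if peer_is_healthy(PEER_HEALTH_URL):
            last_healthy = time.monotonic()
        elif time.monotonic() - last_healthy >= FAILOVER_AFTER_S:
            print("Peer unresponsive, promoting this node to active")  # placeholder action
            break
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    monitor()
```

In a real product the promotion step would restart workloads and take over storage replicas; the point of the sketch is simply that a thirty-second failover window is a function of how aggressively the nodes monitor one another.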
For organizations with limited space at their remote sites, HCI reduces the physical footprint required to install hardware. HCI systems also consume less power and require less cooling, fewer spare parts, and less on-site maintenance than traditional technologies.
This is all possible because of the simplicity built into modern HCI systems, which also enables easy remote set-up and management. In fact, HCI installations can be managed by IT generalists rather than dedicated experts, with systems usually deployed in under an hour, avoiding disruption to day-to-day operations and getting new sites or applications up and running quickly and effectively.
Given the growing reliance on edge computing, many organizations are also likely to see their needs increase over time. HCI systems can accommodate these scaling requirements, enabling users to meet changes in demand without delay or the need for complex reconfiguration exercises.
On a day-to-day basis, centralized management tools allow administrators to remotely manage and secure all edge sites from a single console, while the system automatically balances and allocates computing and storage resources in real time, optimizing hardware for maximum efficiency and avoiding unnecessary and costly overprovisioning.
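As a rough sketch of what automatic balancing and allocation can mean in practice, the toy Python example below places workloads greedily across two edge nodes so that neither is overprovisioned. The node names, workload sizes, and the greedy rule are illustrative assumptions, not the scheduler of any real HCI platform:

```python
# Toy placement/balancing example: assign each workload to the node with the
# most free CPU that can still accommodate it. All values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_free: int                       # vCPUs still available
    ram_free: int                       # GB of RAM still available
    workloads: list[str] = field(default_factory=list)

def place(nodes: list[Node], workload: str, cpu: int, ram: int) -> Node | None:
    """Greedy placement: pick the least-loaded node that still fits the workload."""
    candidates = [n for n in nodes if n.cpu_free >= cpu and n.ram_free >= ram]
    if not candidates:
        return None  # no capacity left; a real system might rebalance or alert
    best = max(candidates, key=lambda n: n.cpu_free)
    best.cpu_free -= cpu
    best.ram_free -= ram
    best.workloads.append(workload)
    return best

nodes = [Node("edge-a", cpu_free=8, ram_free=32), Node("edge-b", cpu_free=8, ram_free=32)]
for name, cpu, ram in [("pos-app", 2, 4), ("camera-analytics", 4, 16), ("inventory-db", 2, 8)]:
    chosen = place(nodes, name, cpu, ram)
    print(f"{name} -> {chosen.name if chosen else 'unplaced'}")
```

The value of doing this automatically and continuously is exactly the point made above: capacity is used where it is needed, rather than being reserved "just in case" at every site.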
Put all of this together and organizations that rely on effective edge infrastructure now have a proven alternative to inefficient legacy solutions. As a result, they can create a win-win edge strategy that delivers the benefits of powerful remote computing alongside the value and flexibility of cloud computing.