Are data centers, service providers, and chip vendors ready for a cloud native world? There are common myths that may be impeding progress...
The world is moving its data into the cloud at unprecedented speed. This shift is placing significant pressure on data centers, service providers, chip vendors, and other technology innovators to ensure they are ready to support a cloud-native world. That world is vastly different from the one we have lived in for decades, in which enterprises ran their own IT infrastructure to execute their business logic.
Today, we can look for available taxis anywhere in the world with the click of a button, or purchase IT infrastructure that bursts on demand to meet peak requirements. And just as you can’t fit a square peg into a round hole, you shouldn’t try to force legacy technology to adapt to a true cloud-native world; you should look to a modern cloud architecture instead. Not only is older technology burdened with features the cloud doesn’t need, but it also lacks what the cloud really requires, such as easily scalable, predictable performance for every tenant.
While the industry grapples with how to adopt cloud infrastructure seamlessly, we’ve put together a list of some of the most common myths we’ve encountered in the process.
Myth 1: Innovation in server CPU design has closely followed Moore’s Law
While Moore’s Law delivered on its promise for many years, effectively doubling transistor count every two years, x86 architecture innovation has been slowing over the last five years. The graph below, comparing 2008–2013 with 2013–2019, shows that the CAGR of x86 performance improvement actually declined, while over the same period the performance-per-watt CAGR declined even more dramatically.
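For readers unfamiliar with the metric, a compound annual growth rate (CAGR) can be computed as shown below. The performance figures in this sketch are hypothetical placeholders, not the actual data behind the chart:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

# Hypothetical performance scores (not the chart's actual data):
# performance doubles over 5 years vs. gaining only 40% over 6 years.
early = cagr(100, 200, 5)  # ~14.9% per year
late = cagr(100, 140, 6)   # ~5.8% per year
print(f"2008-2013 CAGR: {early:.1%}")
print(f"2013-2019 CAGR: {late:.1%}")
```

The same formula applies to performance per watt; a shrinking CAGR means each successive generation delivers a smaller relative gain.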
Clearly, data centers will not be able to meet their increasing performance demands by simply scaling up existing processors without increasing their power consumption even faster. As the industry moves toward more modern services based on containers and microservices, which are meant to help applications rapidly scale up and down based on demand, performance and power efficiency will be absolutely critical and will require new cloud-native server processors built specifically to handle these innovative cloud workloads.
Myth 2: Threads are the same as CPU cores
Most x86 processors and some Arm processors utilize simultaneous multithreading (SMT) to increase the number of logical cores available. These threads are often perceived or marketed as if they are independent physical cores in the processor. They are not the same.
Today’s x86-based processors have up to 64 cores, each of which can run two threads. They were originally designed this way to improve per-core performance, which worked well in enterprise environments where the entire processor was captive and serviced the needs of a single enterprise. However, SMT makes it impossible for cloud service providers (CSPs) to deliver guaranteed, consistent (that is, predictable) performance to customers. In a cloud environment, where multiple tenants run on the same processor, problems arise from the varying requirements of different applications.
Any time multiple virtual machines (VMs) are deployed on the same shared server, there is a risk that some VMs will dominate resource usage, compromising the performance of other customers or VMs sharing the same platform. This is called the noisy neighbor problem. When those VMs share the same physical cores, as is possible with SMT, the noisy neighbor problem gets even worse. In fact, it becomes a noisy roommate problem.
In addition, SMT expands the potential attack surface for malicious users, potentially exposing a user’s data to side-channel attacks introduced by other users. To mitigate these security concerns and reduce the attack surface, many CSPs rent threads in pairs, ensuring that each core is occupied by only one tenant. From a CSP’s perspective, this defeats the purpose of having multiple threads per core on which to host more users and services. The bottom line is that SMT is not the same as having more cores. CSPs and other hyperscalers require processors whose features make a positive difference for their workloads and business models.
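As a minimal sketch of the thread-versus-core distinction: on Linux, each logical CPU reports which logical CPUs share its physical core in /sys/devices/system/cpu/cpu&lt;N&gt;/topology/thread_siblings_list. The snippet below counts physical cores from such lists; the four-CPU topology is hypothetical, invented for illustration:

```python
def physical_cores(sibling_lists):
    """Count distinct physical cores, given each logical CPU's
    thread-sibling list (the logical CPUs sharing its physical core)."""
    return len({frozenset(s.split(",")) for s in sibling_lists})

# Hypothetical SMT-2 machine: the OS advertises 4 logical CPUs,
# but CPUs 0/2 and 1/3 each share a physical core -> only 2 real cores.
siblings = ["0,2", "1,3", "0,2", "1,3"]
print(physical_cores(siblings))  # prints 2
```

This is the bookkeeping behind renting threads in pairs: a scheduler that hands out sibling threads together guarantees no two tenants ever share a physical core.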
Myth 3: More CPU cores don’t matter as much because most applications are single-threaded
x86-based server processors have existed for over two decades and have naturally evolved from being used in the enterprise to being used in the cloud. As this shift occurred, the processors carried all the prior design points that were useful in yesterday’s enterprise settings to today’s cloud.
For instance, fewer, bigger cores were useful in the enterprise, where workloads were typically monolithic applications running on private infrastructure, but today’s cloud workloads need to scale dynamically with consumer demand by using more cores.
This requirement has resulted in a fundamental shift from monolithic applications to containerized microservice-based architectures where scalability and resilience are of paramount importance. Cores are the fundamental unit of compute in these models, and today’s compute environments can take advantage of high core count processors.
Highlighting the growth of these new models, analyst firm IDC has predicted that by 2023, 80% of workloads will shift to or be created with containers/microservices.[ii] CSPs and hyperscalers require a new class of cloud-native processors to address the needs of the modern data center — most importantly, processors with the maximum number of cores as opposed to legacy processors with few cores that were optimized for yesterday’s software.
Myth 4: One CPU architecture can meet the demands of all workloads
Processor vendors that serve multiple markets typically design a single architecture and use it across all of them. For instance, a single x86 architecture is used across markets as varied as client computing (laptops and desktops), high-performance computing, enterprise, edge computing, and cloud computing. However, features optimized for some markets, such as client laptops, do not directly apply to others, such as cloud computing.
For example, a multi-core x86 processor can run faster when few cores are active and slows down as more cores are utilized. While this works well in a client computing environment, the same feature undermines predictable performance in cloud computing environments. With frequencies dropping as core utilization increases, some customers are bound to see their performance fall as other tenants start using the same server. Cloud markets value predictable, consistent performance from every core in the processor, necessitating a cloud-native processor.
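The effect can be illustrated with a toy model. The frequency bins below are invented for illustration and do not describe any specific processor:

```python
def turbo_freq_ghz(active_cores):
    """Hypothetical turbo-style CPU: clock speed by number of active cores."""
    bins = [(2, 3.4), (8, 3.0), (32, 2.6)]  # (max active cores, GHz)
    for limit, ghz in bins:
        if active_cores <= limit:
            return ghz
    return 2.2  # all-core base frequency

# A tenant's per-core speed depends on how busy its neighbors are:
for n in (1, 8, 64):
    print(f"{n:>2} active cores -> {turbo_freq_ghz(n):.1f} GHz")
```

In this model a tenant benchmarking on a mostly idle server sees 3.4 GHz, then drops toward 2.2 GHz as other tenants fill the remaining cores, which is exactly the unpredictability cloud providers want to avoid.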
Myth 5: High-performance CPUs can’t be power efficient
Today’s legacy technology comes at a cost: it’s power hungry. This has led many people to conclude that a high-performance CPU simply cannot also be power efficient. In reality, all that’s needed is an architecture designed to do both while addressing the needs of a single market. This is exactly what happened in the mobile and tablet markets with the Arm architecture: Arm’s ability to deliver excellent performance at a fraction of the power of incumbent architectures drove a new era in mobile computing.
Now, the same innovation is coming to server-class processors, with a focus on the hyperscale and cloud service provider markets. Arm’s RISC architecture provides high performance, better performance per watt, and server-class RAS (reliability, availability, and serviceability) for data centers. The Arm server ecosystem is well developed, with broad support across operating systems, containers, virtualization, languages, and development tools. This gives hyperscalers the flexibility to finally free themselves from the burden of legacy CPUs and instead adopt cloud-native processors built from the ground up with a sole focus on the needs of modern data center infrastructure.
Starting over with the cloud in mind
The only way to optimize what is needed by cloud environments is to start with new cloud-native hardware technologies designed specifically with the cloud in mind. This cloud-native processor hardware innovation has just begun. This new era of computing is going to create new market leaders: the ones brave enough to tread on new ground.
— Subra Chandramouli is director of marketing at Ampere Computing