
Intel® IPUs and SmartNICs boost data center utilization and lower TCO


Enterprise organizations and cloud service providers (CSPs) are constantly searching for new ways to make their data center and cloud operations more efficient and cost-effective. As data center workloads expand in number and complexity, system components must evolve to keep up, or new and more effective component categories must be introduced to shoulder those ever-increasing demands.

For example, CPUs have traditionally performed a wide assortment of tasks beyond application processing, such as network traffic management. But when processor cycles are heavily taxed by network throughput requirements, security demands and other infrastructure operations, the result can be lagging application performance. Meanwhile, complex and cycle-hungry applications continue to gain in popularity, further straining the processor cores that remain available for application work.

Presidio technology partner Intel has responded to that challenge with a new product category that could revolutionize data center resource utilization for cloud and networking: Intel® infrastructure processing units (Intel® IPUs) and SmartNICs. These programmable network devices intelligently allocate system-level resources, managing the data center’s infrastructure functions and freeing up server processor cores for improved application performance.

Meanwhile, Intel is collaborating with VMware on Project Monterey, a redefinition of hybrid cloud architecture that incorporates these latest acceleration technologies. Presidio believes these innovations will prove critical for maximizing performance and lowering total cost of ownership (TCO) in the modern data center. For that reason, we advise customers who are considering a server refresh to specify these IPU-ready capabilities.

Introducing Intel IPUs – providing new data center value

Introduced in June 2021, the Intel IPU is an advanced, programmable networking device equipped with hardened accelerators and Ethernet connectivity. Infrastructure functions that traditionally would be performed by the CPU are instead fully offloaded to the IPU, which intelligently accelerates and manages them. This offloading strategy frees CPU cycles for application management, revenue-generating tasks and other critical workloads, and the IPU itself serves as the control point for running infrastructure applications. As a result, IPUs allow for fast, customized infrastructure deployments and improved data center utilization through flexible workload placement.

This advanced infrastructure processing device offers new data center value by providing:

  • Highly intelligent infrastructure acceleration.
  • System-level security, control and isolation.
  • Common software frameworks.
  • Reconfigurability and programmability, enabling customizations at the speed of software.

What’s more, deploying an IPU may enable the use of less expensive processors. A particular workload might require a high-performing, more expensive processor without an IPU, but that same workload could be handled by a less expensive, lower-performance processor once infrastructure tasks are offloaded to an IPU. The result: lower hardware costs, reduced power and cooling expenses, and better overall financials.
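The cost argument above can be illustrated with a back-of-envelope calculation. Every number below (server prices, wattages, electricity rate, PUE and amortization period) is a hypothetical placeholder, not an Intel or Presidio figure; the sketch only shows the shape of the comparison between a high-end CPU carrying infrastructure load and a cheaper, lower-power CPU paired with an IPU:

```python
# Illustrative TCO sketch: all figures below are hypothetical assumptions.
def annual_cost(server_price, watts, years=4, kwh_price=0.12, pue=1.5):
    """Amortized yearly cost: hardware spread over its lifetime,
    plus electricity for power and cooling (approximated via PUE)."""
    hours_per_year = 24 * 365
    energy_cost = watts / 1000 * hours_per_year * kwh_price * pue
    return server_price / years + energy_cost

# Baseline: a high-end CPU also shoulders the infrastructure workload.
baseline = annual_cost(server_price=12_000, watts=500)

# With IPU: a cheaper, lower-power CPU handles only application work,
# with the IPU (assumed included in the server price) absorbing the rest.
with_ipu = annual_cost(server_price=9_500, watts=420)

print(f"baseline: ${baseline:,.0f}/yr")
print(f"with IPU: ${with_ipu:,.0f}/yr")
print(f"savings:  ${baseline - with_ipu:,.0f}/yr per server")
```

Even modest per-server savings of this kind compound across a fleet, which is the basis of the TCO claim.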

Owing to its collaboration with the industry’s major hyperscalers, Intel is the leading provider of specialized infrastructure processing devices for select cloud and networking workloads. Intel IPUs support a wide range of use cases for minimizing costs and optimizing resource utilization to improve overall performance.

Intel IPU and SmartNIC lead a growing portfolio of offloading devices

IPUs and SmartNICs both have the ability to offload the entire infrastructure stack from the host, including network functions. But here’s where they differ: IPUs can control how the host attaches to this infrastructure, whereas the SmartNIC remains under the host’s control as a peripheral.

For those unfamiliar, SmartNICs are programmable network adapter cards, with programmable accelerators and Ethernet connectivity, that can accelerate infrastructure applications running on the host. The concept of offloading workloads from the CPU is nothing new: think of graphics processing units (GPUs), or IPsec offloading on network adapter cards. SmartNICs simply extend this approach to a broader set of networking functions, with added programmability.

Intel SmartNICs have proven to deliver higher performance for edge nodes, network infrastructure and data center compute, with a level of flexibility that contributes to a reduced TCO. SmartNICs and IPUs together are the first offerings in an entirely new category, a broad infrastructure acceleration portfolio for higher throughput, lower latency and rapid customizations.

VMware’s Project Monterey embraces Intel IPU and SmartNIC infrastructure offloading

Intel is collaborating with VMware to design a specialized SmartNIC optimized for Project Monterey, a top-to-bottom re-architecture of VMware Cloud Foundation (VCF). Project Monterey will leverage IPU capabilities to deliver peak performance by offloading infrastructure to the SmartNIC; zero-trust security without compromising performance; and consistent, simplified operation across all applications.

Project Monterey will advance the ultimate promise of the IPU: an expanding portfolio of offloading solutions that assume all background and back-end duties, freeing more and more processor cores for productivity-enhancing and revenue-generating operations.

Intel drives the future of enterprise computing with infrastructure offloading

When planning the next server refresh or data center upgrade, Presidio recommends that enterprise IT decision-makers take a closer look at this fresh and innovative approach to infrastructure processing. From our perspective, these infrastructure acceleration products are just the start of an important new portfolio that we believe will expand significantly in the near future.

The possibilities for offloading technologies extend beyond networking with IPU solutions for storage, memory, cache, specialized compute-intensive apps, and more. Imagine deploying resource-sharing capabilities, such as making GPUs available across an entire cluster via IPU.

The need is clear for a more sensible approach to managing data center infrastructure, one that derives the greatest value from all assets. Contact us to have a Presidio data solutions architect analyze your environment and show you how Intel IPUs and SmartNICs can maximize resource utilization, accelerate overall performance and lower TCO.
