
Artificial intelligence and Nvidia took center stage at HPE Discover 2024

With many organizations developing or evaluating generative AI initiatives, HPE increased its commitment to the space through a broad partnership with Nvidia.

At this year's Discover conference, HPE committed to the success of AI in the enterprise, and that commitment centers on its partnership with Nvidia.

Expanding on the supercomputing-centric vision for AI that HPE presented at last year's event, this year's product updates reflected a concentrated effort to strengthen HPE's partnership with Nvidia and the two companies' combined ability to deliver enterprise-grade AI technologies.

HPE's enterprise AI and Nvidia-centric products included the following:

  • Support for the latest Nvidia processors and accelerators. The HPE ProLiant DL384 Gen12 server supports the Nvidia GH200 NVL2, and the HPE ProLiant DL380a Gen12 server supports up to eight Nvidia H200 NVL Tensor Core GPUs.
  • The introduction of Nvidia AI Computing by HPE, a portfolio spanning hardware and software, co-developed by the two companies. This portfolio includes HPE Private Cloud AI, which comes in small, medium, large and extra-large configurations. Each one is tailored, according to HPE, to a specific AI use case, such as inference, retrieval-augmented generation or training, and is targeted for general availability this fall.
  • Updates to OpsRamp, the AIOps and observability platform, adding integrations for Nvidia GPUs, software and networking options.

The announcements with Nvidia augment HPE's AI strategy and expand on HPE's success in AI with its Cray supercomputing.

HPE-Nvidia deal highlights AI infrastructure trends

A majority of organizations are developing or evaluating generative AI initiatives. According to research from TechTarget's Enterprise Strategy Group, 54% of organizations expect to have a generative AI project in production in the next 12 months.

The desire to partner with Nvidia, especially among compute infrastructure providers, is nothing new. Dell Technologies, Lenovo, Supermicro and Cisco all tout close partnerships with Nvidia and offer systems that utilize its technology.

This newest announcement of four "turnkey" HPE and Nvidia deployments highlights three key trends in AI infrastructure deployment.

First, time to value is an essential characteristic of AI initiatives, and the need to simplify and accelerate deployment is an increasingly valuable capability.

Second, organizations need better options to optimize their infrastructure for specific use cases as a means to reduce cost.

In the excitement surrounding AI projects, overinvesting early can lead to less-than-ideal returns on those investments. The ability to tune the infrastructure to the specific needs of the workload is essential. The option to procure these systems through HPE GreenLake is particularly compelling, and it drew a strong round of applause during the keynote.

HPE GreenLake lets organizations distribute the cost of infrastructure over its lifespan, reducing the upfront budget impact at deployment. According to Enterprise Strategy Group research, 46% of users of on-premises consumption-based infrastructure services, such as HPE GreenLake, said they are able to accelerate IT initiatives by moving costs into future quarters.

The third trend is the need to minimize the expertise burden of infrastructure design and deployment. Given how scarce in-house generative AI talent typically is, preconfigured infrastructure can play a critical role in reducing the burden on internal resources.

HPE claims that its turnkey offerings provide a time-savings advantage, saying users can deploy in just three clicks, compared with competitors that offer a combination of reference architectures and professional services.

Given how quickly the enterprise AI space is changing and the wide variety of AI initiatives, use cases and models, I expect turnkey offerings will provide a time-to-value benefit, but integration services might still be necessary to optimize the environment for specific workload demands. HPE, along with its partner community, offers services to help organizations with their AI environments.

In terms of observability, the added innovation to OpsRamp is welcome news. The acquisition, which closed last year, provides HPE with heterogeneous infrastructure management, monitoring and observability technology.

By extending this technology to include Nvidia compute and networking, HPE confirms its intent to continue to invest in the technology and offer increased value to AI application environments. The move positions HPE to help answer not just the "Why is my app running slowly?" question of observability, but also to answer more valuable questions such as, "Why is my AI app running slowly in particular?" or "Why is my Nvidia GPU underutilized?"

Ultimately, this year's HPE Discover event served to confirm both the importance of AI to the future of the enterprise and the importance of Nvidia to the future of AI. The open question is whether HPE, with its turnkey systems combined with the heterogeneous management of OpsRamp, can differentiate itself in an incredibly competitive landscape of on- and off-premises infrastructure providers, including AWS, Cisco, Dell Technologies, Google Cloud, Lenovo and Microsoft Azure.

Scott Sinclair is Practice Director with TechTarget's Enterprise Strategy Group, covering the storage industry.

Enterprise Strategy Group is a division of TechTarget. Its analysts have business relationships with technology vendors.
