Nvidia is extending its solution footprint far beyond artificial intelligence (AI) and gaming, venturing broadly across the entire computing ecosystem into mobility and the next-generation cloud data center.
Nvidia's ambitions in this regard are clear from its pending acquisition of Arm and from CEO Jensen Huang's positioning of the company as a “full-stack computing” provider. Demonstrating that he's putting real R&D dollars behind this vision, Huang announced the rollout of the company's new BlueField “data processing unit” (DPU) chip architecture at the virtual Nvidia GPU Technology Conference this month.
As it evolves its hardware platform into a DPU-centric architecture in support of new enterprise applications, Nvidia is also making sure that it fully integrates its BlueField/DOCA accelerators into the Arm partner ecosystem.
Signaling that strategy at GTC, the vendor announced that it will help Arm partners go to market with full-stack solution platforms that comprise GPU-enabled as well as DPU-enabled networking, storage, and security technologies. It has engaged Arm partners to create full-stack solutions for high-performance computing, cloud, edge, and PC opportunities. Also, it is porting its AI and RTX engines to Arm so that they address a much larger market than the x86 platforms on which Nvidia has traditionally run.
Partners are central to Nvidia's plans to support a wider range of enterprise application workloads than just AI on its new DPU product family. Integral to Nvidia's land-and-expand strategy is DOCA, a new data-center-infrastructure-on-a-chip architecture and software development kit.
Currently available to early access partners only, the DOCA SDK enables developers to program applications on BlueField-accelerated data center infrastructure services. Developers can offload CPU workloads to BlueField DPUs. Consequently, this new offering builds out Nvidia's enterprise developer tools, complementing the CUDA programming model that enables development of GPU-accelerated applications. In addition, the SDK is fully integrated into the Nvidia NGC catalog of containerized software, thereby encouraging third-party application providers to develop, certify, and distribute DPU-accelerated applications.
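Because the DOCA SDK was limited to early access partners at announcement time, no DOCA-specific code is reproduced here. As a rough, illustrative point of reference for the offload model it extends, the CUDA sketch below moves a simple data-parallel loop off the host CPU and onto the GPU; the kernel name, buffer, and workload are hypothetical examples for this article, not Nvidia sample code.

// Illustrative CUDA sketch of the CPU-offload pattern described above.
// Kernel, buffer, and workload are hypothetical, not Nvidia sample code.
#include <cstdio>
#include <cuda_runtime.h>

// A simple data-parallel task offloaded from the host CPU to the GPU.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // each GPU thread handles one element
}

int main() {
    const int n = 1 << 20;                             // about one million elements
    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));            // allocate device memory
    cudaMemset(d_data, 0, n * sizeof(float));          // initialize it on the device
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);  // launch the offloaded work
    cudaDeviceSynchronize();                           // wait for the GPU to finish
    cudaFree(d_data);
    printf("Offloaded %d elements to the GPU\n", n);
    return 0;
}

In the DOCA model the division of labor is analogous, except that the offload target is the BlueField DPU, which takes on networking, storage, and security services rather than a general-purpose compute kernel.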
Several leading software vendors (VMware, Red Hat, Canonical, and Check Point Software Technologies) announced plans at GTC to integrate their wares with the new DPU/DOCA acceleration architecture in the coming year. In addition, Nvidia announced that several leading server manufacturers, including Asus, Atos, Dell Technologies, Fujitsu, Gigabyte, H3C, Inspur, Lenovo, Quanta/QCT, and Supermicro, plan to integrate the DPU into their respective products in the same timeframe.
Although there was no specific Arm tie-in to Huang's announcement that Microsoft is adopting Nvidia AI on Azure to bring GPU-accelerated smart experiences to its cloud-based Microsoft Office experience, it would not be surprising if, in coming years, more of the mobile experience on this and other Office apps were accelerated locally by leveraging DPU-offload technology.
Nvidia's product teams are wasting no time incorporating the DPU's CPU-offload acceleration into their solutions. Most notably, Huang announced that the Nvidia EGX AI edge-server platform is evolving to combine the Nvidia Ampere architecture GPU and the BlueField-2 DPU on a single PCIe card.
Although there was no specific BlueField DPU tie-in to Nvidia Jetson, the company's Arm-based SoC for AI robotics, one should anticipate that the DOCA SDK will evolve to support development of these applications, which are a hot growth area for Nvidia's core platforms. It's also a safe bet that the company will use its new hardware and SDK to accelerate its Omniverse platform for collaborative 3D content creation, its Jarvis platform for conversational AI, and its new Maxine platform for cloud-native, AI-accelerated video streaming.
Nvidia's new BlueField DPU architecture and DOCA SDK provide a strategic platform for broadening its reach into enterprise, service provider, and consumer opportunities of all types.
By enabling hardware-accelerated CPU offload of diverse workloads, the DPU architecture gives Nvidia a clear path for converging the new DOCA programming model with its CUDA AI development framework and NGC catalog of containerized cloud solutions. This will enable the company to provide both its own product teams and solution partners with the hardware and software platforms needed to accelerate a full range of application and infrastructure workloads from cloud to edge.
As it awaits the eventual approval of its proposed acquisition of Arm, Nvidia will need to prove this new architecture to its existing partner ecosystem. If DPU technology falls short of Nvidia's aggressive performance promises, that shortfall could sour relations with Arm's vast array of licensees, all of whom rely heavily on its CPU-based processor architecture and would benefit from more seamless integration with Nvidia's market-leading AI technology.
Clearly, Nvidia cannot afford to lose momentum in the cloud-to-edge microprocessor wars just when it has begun to pull away from archrival and CPU powerhouse Intel.