
KDDI, HPE partner to launch advanced AI data center in Japan

KDDI said the new Osaka Sakai data center will be powered by a rack-scale system featuring the Nvidia GB200 NVL72 platform

In sum, what to know:

New AI hub in Osaka by 2026 – KDDI and HPE are building an advanced AI data center powered by Nvidia Blackwell-based infrastructure and liquid cooling to serve Japan and global AI markets.

Focus on performance and sustainability – HPE’s rack-scale system brings energy-efficient high-performance computing, combining NVIDIA hardware and advanced cooling to reduce environmental impact.

AI services for startups and enterprises – KDDI plans to deliver cloud-based AI compute through its WAKONX platform, enabling customers to build LLMs and scale AI apps with low latency.

Japanese operator KDDI Corporation and Hewlett Packard Enterprise (HPE) announced a strategic collaboration aimed at launching a next-generation AI data center in Sakai City, Osaka Prefecture, with operations scheduled to begin in early 2026.

In a release, the Japanese company noted that the new AI data center will support startups, enterprises and research institutions in developing AI-powered applications and training large language models (LLMs), leveraging Nvidia’s Blackwell architecture and HPE’s infrastructure and cooling expertise.

The Japanese company noted that the new Osaka Sakai data center will be powered by a rack-scale system featuring the Nvidia GB200 NVL72 platform, developed and integrated by HPE. The system is optimized for high-performance computing and incorporates advanced direct liquid cooling to significantly reduce the environmental footprint, KDDI said.

As AI workloads grow in scale and complexity, the demand for low-latency inferencing and energy-efficient infrastructure is increasing. KDDI’s new AI data center in Osaka aims to meet this challenge by offering cloud-based AI compute services via its WAKONX platform, which is designed for Japan’s AI-driven digital economy.

The Nvidia GB200 NVL72 by HPE is a rack-scale system designed to enable large, complex AI clusters, optimized for energy efficiency and performance through advanced direct liquid cooling.

Equipped with Nvidia-accelerated networking, including Nvidia Quantum-2 InfiniBand, Nvidia Spectrum-X Ethernet and Nvidia BlueField-3 DPUs, the system delivers high-performance network connectivity for diverse AI workloads. Customers can also run the Nvidia AI Enterprise platform on the KDDI infrastructure to accelerate development and deployment, the company said.

Antonio Neri, president and CEO of HPE, said: “Our collaboration with KDDI marks a pivotal milestone in supporting Japan’s AI innovation, delivering powerful computing capabilities that will enable smarter solutions.”

Looking forward, the two companies will continue to strengthen their collaboration to advance AI infrastructure and deliver innovative services while enhancing energy efficiency.

HPE and Nvidia recently unveiled a suite of new AI factory offerings aimed at accelerating enterprise adoption of artificial intelligence across industries.

The expanded portfolio, announced at HPE Discover 2025 in Las Vegas, introduces a range of modular infrastructure and turnkey platforms, including HPE’s new AI-ready RTX PRO Servers and the next generation of the company’s AI platform, HPE Private Cloud AI. These offerings are designed to provide enterprises with the building blocks to develop, deploy and scale generative, agentic and industrial AI workloads.

Branded as Nvidia AI Computing by HPE, the integrated suite combines the chipmaker’s latest technologies—including Blackwell accelerated computing, Spectrum-X Ethernet and BlueField-3 networking—with HPE’s server, storage, software and services ecosystem.

The key component of the launch is the revamped HPE Private Cloud AI, co-developed with the chip firm and fully validated under the Nvidia Enterprise AI Factory framework. This platform delivers a full-stack solution for enterprises seeking to harness the power of generative and agentic AI.

ABOUT AUTHOR

Juan Pedro Tomás
Juan Pedro covers Global Carriers and Global Enterprise IoT. Prior to RCR, Juan Pedro worked for Business News Americas, covering telecoms and IT news in the Latin American markets. He also worked for Telecompaper as their Regional Editor for Latin America and Asia/Pacific. Juan Pedro has also contributed to Latin Trade magazine as the publication's correspondent in Argentina and to political risk consultancy firm Exclusive Analysis, writing reports and providing political and economic information from certain Latin American markets. He has a degree in International Relations and a master's in Journalism, and is married with two kids.