INTELLIGENT CLOUD
Oracle and NVIDIA partner to deliver accelerated computing and generative AI services
Oracle and NVIDIA announced an expanded collaboration to deliver sovereign AI solutions to customers around the world. Oracle’s distributed cloud, AI infrastructure, and generative AI services, combined with NVIDIA’s accelerated computing and generative AI software, are enabling governments and enterprises to deploy AI factories.
“As AI reshapes business, industry, and policy around the world, countries and organisations need to strengthen their digital sovereignty in order to protect their most valuable data,” said Safra Catz, CEO of Oracle. “Our continued collaboration with NVIDIA and our unique ability to deploy cloud regions quickly and locally will ensure societies can take advantage of AI without compromising their security.”
The combination of NVIDIA’s full-stack AI platform with Oracle’s Enterprise AI, deployable across OCI Dedicated Region, Oracle Alloy, Oracle EU Sovereign Cloud, and Oracle Government Cloud, offers customers a state-of-the-art AI solution that provides greater control over operations, location, and security to help support digital sovereignty.
Countries across the globe are increasingly investing in AI infrastructure that can support their cultural and economic ambitions. Across 66 cloud regions in 26 countries, customers can access more than 100 cloud and AI services spanning infrastructure and applications to support IT migration, modernisation, and innovation.
The companies’ combined offerings can be deployed via the public cloud or in a customer’s data centre in specific locations, with flexible operational controls. Oracle is the only hyperscaler capable of delivering AI and full cloud services locally, anywhere. OCI services and pricing are consistent across deployment types to simplify planning, portability, and management.
Oracle’s cloud services leverage a range of NVIDIA technologies, including NVIDIA accelerated computing infrastructure and the NVIDIA AI Enterprise software platform with the newly announced NVIDIA NIM inference microservices, which are built on the foundation of NVIDIA inference software such as NVIDIA TensorRT, NVIDIA TensorRT-LLM, and NVIDIA Triton Inference Server. To help customers address the ever-increasing needs of AI models, Oracle plans to take advantage of the latest NVIDIA Grace Blackwell computing platform, announced at GTC, across OCI Supercluster and OCI Compute. OCI Supercluster will become significantly faster with new OCI Compute bare metal instances, ultra-low-latency RDMA networking, and high-performance storage. OCI Compute will adopt both the NVIDIA GB200 Grace Blackwell Superchip and the NVIDIA Blackwell B200 Tensor Core GPU.
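NIM microservices are packaged inference endpoints, so applications consume them over HTTP rather than loading models directly. The Python sketch below is an illustration only of how a client might call such a service once it has been deployed in a customer’s environment; the endpoint URL, model name, and OpenAI-style request shape are assumptions for the example, not details from the announcement.

```python
# Minimal sketch: querying a deployed NIM-style inference microservice.
# Assumptions: the service exposes an OpenAI-compatible chat completions
# endpoint, and both the URL and model identifier below are placeholders.
import requests

NIM_ENDPOINT = "http://nim.internal.example:8000/v1/chat/completions"  # placeholder URL
MODEL_NAME = "example/llm-model"  # placeholder model identifier


def ask(prompt: str) -> str:
    """Send a single chat completion request to the inference endpoint."""
    payload = {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }
    response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
    response.raise_for_status()
    # Return the generated text from the first choice in the response.
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Summarise the data residency requirements for our EU tenancy."))
```

Because the inference service runs inside the customer’s chosen deployment, whether a public OCI region, a Dedicated Region, or a sovereign cloud, prompts and responses in a pattern like this can stay within the boundary the customer controls.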
Safra Catz, CEO, Oracle
The NVIDIA GB200 Grace Blackwell Superchip will power a new era of computing. GB200 delivers up to 30X faster real-time large language model (LLM) inference and 25X lower TCO, and requires 25X less energy compared to the previous generation of GPUs, supercharging AI training, data processing, and engineering design and simulation. NVIDIA Blackwell B200 Tensor Core GPUs are designed for the most demanding AI, data analytics, and high-performance computing (HPC) workloads.
NVIDIA NIM and CUDA-X microservices, including NVIDIA NeMo Retriever for retrieval-augmented generation (RAG) inference deployments, will also help OCI customers bring more insight and accuracy to their generative AI copilots and other productivity tools using their own data. •
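To make the RAG pattern concrete, the sketch below outlines its general flow: relevant snippets are retrieved from a customer’s own documents and prepended to the prompt before generation. This is a generic illustration rather than the NeMo Retriever or NIM API; the keyword-overlap scoring is a stand-in for a real embedding-based retriever, and the documents and question are invented.

```python
# Illustrative RAG sketch: retrieve relevant snippets, then build an
# augmented prompt for a generation endpoint (such as the one sketched earlier).
# The retrieval scoring here is a toy keyword-overlap stand-in, not a real
# embedding model or the NeMo Retriever service.

DOCUMENTS = [
    "OCI Dedicated Region runs Oracle cloud services inside the customer's own data centre.",
    "Oracle EU Sovereign Cloud keeps data and operations within the European Union.",
    "OCI Supercluster pairs bare metal GPU instances with ultra-low-latency RDMA networking.",
]


def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (embedding stand-in)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user question with retrieved context before generation."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )


if __name__ == "__main__":
    # The resulting prompt would be sent to the customer's chosen LLM endpoint.
    print(build_prompt("Where does the EU Sovereign Cloud keep its data?", DOCUMENTS))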