Oracle Compute Cloud@Customer

Oracle Compute Cloud@Customer is fully managed, rack-scale infrastructure that lets you use Oracle Cloud Infrastructure (OCI) Compute anywhere. Gain the benefits of cloud automation and economics in your data center by running OCI Compute and GPU shapes with storage and networking services on Compute Cloud@Customer. It’s the simplest way to run applications and harness the power of GenAI on cloud infrastructure in your data center while meeting data residency and security requirements and maintaining low-latency connections to local resources and real-time operations.

Bring OCI Compute to Your Data Center with Oracle Compute Cloud@Customer (2:12)

On-demand webinar: Bring AI to the Edge with Oracle Cloud Infrastructure

Explore how Oracle’s AI-powered edge solutions can help you harness the full potential of edge computing.

Fully managed, rack-scale infrastructure that lets you run OCI Compute anywhere

Bring scalable cloud compute power and GPU acceleration to your data center for AI and machine learning, HPC, and business applications.

  • Run scalable OCI Compute, storage, and networking services in a distributed cloud.

  • Meet data residency, security, and latency requirements.

  • Lower costs with dynamic consumption pricing.

  • Modernize your entire application stack.

  • Fully managed and supported by Oracle.

  • Secure data with always-on encryption and Oracle Operator Access Control.

Why choose Compute Cloud@Customer?

Compute Cloud@Customer lets you run applications and middleware using OCI services on high-performance cloud infrastructure in your data center. You can quickly deploy applications using a rack-scale platform with optional GPU acceleration and infrastructure-as-code automation. Modernize your entire application stack using cloud resources by combining Compute Cloud@Customer with Oracle Exadata Cloud@Customer for Oracle Database.

  • Simple and affordable

    Use fully managed cloud infrastructure and OCI Compute services with a 100% operating expenses model and dynamic consumption pricing to help reduce your costs.

  • Flexible and powerful

    Run applications, including cloud native ones, using flexible VM shapes with up to 96 compute cores each, optional NVIDIA GPUs for AI and HPC workloads, and scalable object, block, and file storage.

  • Fully compatible

    Employ the same OCI services, APIs, and automation as the rest of Oracle’s distributed cloud to easily develop workloads that can be deployed anywhere (see the SDK sketch after this list).

  • Deployable anywhere

    Address your needs for data residency and low latency by deploying rack-scale infrastructure in enterprise data centers and remote locations.
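
To make the API compatibility and flexible shapes concrete, here is a minimal sketch using the OCI Python SDK (pip install oci). It launches a flexible VM shape with the same launch_instance call used against OCI regions; the OCIDs, availability domain, and shape name are placeholders rather than values from this page, and the shapes actually available depend on your Compute Cloud@Customer configuration.

# Minimal sketch: launch a flexible VM shape with the OCI Python SDK.
# All OCIDs, the availability domain, and the shape name are placeholders;
# substitute values from your own tenancy and Compute Cloud@Customer setup.
import oci

config = oci.config.from_file()            # reads ~/.oci/config by default
compute = oci.core.ComputeClient(config)

launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    availability_domain="AD-1",
    display_name="c3-app-server",
    shape="VM.Standard.E4.Flex",           # flexible shape: choose OCPUs and memory
    shape_config=oci.core.models.LaunchInstanceShapeConfigDetails(
        ocpus=8,
        memory_in_gbs=128,
    ),
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..exampleuniqueID"
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..exampleuniqueID"
    ),
)

instance = compute.launch_instance(launch_details).data
print(instance.id, instance.lifecycle_state)

Because the request and response models are the same ones used for OCI regions, automation written this way can target either environment by changing only the configuration values.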

How Compute Cloud@Customer works

Compute Cloud@Customer is Oracle-owned and remotely managed cloud infrastructure that’s installed at a data center or remote location of your choosing. It lets you run OCI Compute and GPU instances along with related OCI storage and networking services. Locating cloud resources inside your data center helps you meet data residency requirements and the need for low-latency connections to data center assets and real-time operations.

Select the number of compute cores, GPU instances, and total storage you’ll need, and Oracle will deliver, install, maintain, and manage the infrastructure. Subscribe to Compute Cloud@Customer infrastructure for a term of at least 48 months and pay consumption costs only for the compute and storage resources you use. Compute Cloud@Customer provides a 100% OpEx model that uses the same Oracle Universal Credits you use for OCI and helps lower your costs because you aren’t charged consumption for unused resources.

When using Compute Cloud@Customer or Exadata Cloud@Customer, your data is protected by always-on encryption and other OCI security capabilities. The OCI Console and common APIs manage services and applications running on Compute Cloud@Customer, Exadata Cloud@Customer, and in OCI regions, making it easy to move workloads across Oracle’s distributed cloud environments.
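
As a small illustration of that single management plane, the sketch below uses the OCI Python SDK to list the compute instances in one compartment; the compartment OCID is a placeholder, and pointing the same call at a compartment whose resources run in an OCI region or on Compute Cloud@Customer is the only difference.

# Minimal sketch: list compute instances in a compartment with the OCI Python SDK.
# The compartment OCID is a placeholder; the same call works whether the
# compartment's resources run in an OCI region or on Compute Cloud@Customer.
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

instances = oci.pagination.list_call_get_all_results(
    compute.list_instances,
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
).data

for inst in instances:
    print(f"{inst.display_name:30} {inst.shape:25} {inst.lifecycle_state}")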


Oracle Compute Cloud@Customer key capabilities

Compute Cloud@Customer use cases

  • Modernize with the cloud in your data center

    Move applications, including Oracle E-Business Suite, from on-premises platforms to Compute Cloud@Customer and connect directly to Exadata Cloud@Customer to help improve performance, reduce administration time, and lower costs. Consolidate traditional and cloud native workloads on a single Compute Cloud@Customer platform to further reduce costs.

  • Centrally manage a distributed cloud

    Deploy applications across the country and centrally manage them from a single pane of glass. Pay for infrastructure and services using OpEx with a globally consistent pricing model that lets you pay consumption costs only for the resources you use.

  • Protect application resources against disasters

    Protect applications and middleware stacks running in customer data centers or OCI regions using Compute Cloud@Customer at a standby site with orchestration of disaster recovery through OCI.

  • Leverage tight integration between application workloads and database tiers

    Compute Cloud@Customer with Exadata Cloud@Customer provides the ideal platform for GenAI workloads, including using GPUs for LLM inferencing, data analysis, and HPC simulations.

  • Run GPU-enabled cloud applications in your data center

    Run GenAI workloads, including LLMs for inferencing, as well as HPC data analysis and simulation applications in your data center using OCI GPU instances.

What the analysts are saying about Oracle Compute Cloud@Customer


“Most data is still on-premises and much of that resides on Oracle Database. Many of these customers want to get the benefits of AI, but some don’t want to ship their data to an AI cloud service. Compute Cloud@Customer with NVIDIA GPUs provides a compelling solution. It enables customers to leverage a high-performance AI platform for running GenAI inference or LLM/SLM fine tuning on-premises, where their data is, never leaving their data centers, for maximum security control. Further, when combined with Exadata Cloud@Customer, the platform co-engineered with Oracle Database for extreme vector processing, customers can run their GenAI models on Compute Cloud@Customer and use RAG techniques on their internal data to get more relevant answers to their prompts. This combination delivers a powerful value prop for on-premises AI workflows.”

Carl Olofson Principal Analyst, DBMSGuru

“Oracle’s extension of its distributed cloud computing line adding a smaller, lower cost option for NVIDIA racks is a huge step in clearing the way for generative AI workloads to come on-premises. Until now, other than Oracle’s Dedicated Region and Alloy deployment options, practically the only alternative for running workloads with language models required going to a public cloud hyperscaler. But for many reasons, that could range from internal policies to data residency requirements, a significant proportion of the enterprise is likely to stay on-premises over the foreseeable future. Oracle Compute Cloud@Customer and the new Private Cloud Appliance with NVIDIA GPUs will bring gen AI workloads into the data center. While the NVIDIA L40S processors are smaller in size compared to the instances that run in OCI NVIDIA Superclusters, they may be right-sized for the types of domain-specific enterprise workloads that will run on the more compact models, which we expect to become commonplace in the future.”

Tony Baer Principal, dbInsight

“Coming directly after the launch of the breakthrough Exadata X11M platform for Oracle Database 23ai workloads, Oracle now unveils Compute Cloud@Customer with NVIDIA GPUs for the application tier—to help organizations thrive with demanding workloads across AI, HPC, 3D graphics and 4K streaming. Another great use case for organizations is fraud detection, where the combination of Oracle’s platforms and NVIDIA’s GPUs work together to find anomalies and detect fraud in real-time—while helping organizations fulfill data residency requirements.”

Ron Westfall Research Director, The Futurum Group

“As part of Oracle’s distributed cloud strategy, Compute Cloud@Customer with NVIDIA GPUs delivers OCI Compute on-premises and takes the guesswork out of building an AI platform. Unlike competitive offerings disguised as mere boxes with financing contracts and no cloud supporting them whatsoever, Compute Cloud@Customer uses the same APIs, the same software and same control plane as OCI for an authentic cloud experience. This is a timely solution from Oracle that enables customers to start small and get into AI without having to commit to a massive upfront investment—plus, it’s a perfect complement to Exadata Cloud@Customer deployments for Oracle Database 23ai.”

Marc Staimer Senior Analyst and Contributor, theCube Research

“For manufacturing organizations looking to deploy digital replicas of their production lines, Oracle Compute Cloud@Customer with NVIDIA GPUs is the perfect on-premises solution. Organizations can reduce costs and time-to-market, improve quality control and improve energy efficiency and sustainability—all without impacting current manufacturing operations. And with a pay-as-you-go model that brings the benefits of OCI distributed cloud to on-premises environments, Oracle Compute Cloud@Customer addresses the needs of organizations who are required to store and manage their data in specific locations—and sets them up for a fast start in AI-driven workloads.”

Steve McDowell Principal Analyst and Founder, NAND Research

“Oracle's Compute Cloud@Customer with NVIDIA GPU expansion offers a compelling solution for organizations needing powerful and scalable AI capabilities on-premises. The ability to run demanding workloads like Generative AI and LLMs while addressing data sovereignty concerns makes this a game-changer for industries like financial services, healthcare, and telecom. Plus, it’s the ideal combination with Exadata Cloud@Customer, by addressing the AI application layer while Exadata manages the AI data layer with Oracle Database 23ai AI Vector Search.”

Steven Dickens CEO and Principal Analyst at HyperFRAME Research

“Oracle continues to deliver on its ‘bring AI to your data’ mantra starting with Oracle Database 23ai AI Vector Search, followed by Oracle Exadata X11M, the database platform in all of its variants, with impressive performance across AI functions. Having completed the data side of the equation, Oracle now announces an economically priced compute engine to run AI models on-premises—Compute Cloud@Customer with NVIDIA L40S GPUs—a high-performance, modular solution with a cloud consumption model. This trifecta provides a unified architecture that supports end-to-end workflows for many of the most popular AI use cases, including generative AI inferencing, fine-tuning AI models, RAG and others. CIOs, including those running Oracle environments, those with data residency requirements, or those taking their first steps into AI, should take a serious look at adding Compute Cloud@Customer to their portfolio.”

Holger Mueller Vice President and Principal Analyst, Constellation Research

“In the race to adopt artificial intelligence for competitive advantage, businesses must balance innovation with security and compliance. However, many face challenges related to data sovereignty and residency requirements, raising concerns about where and how their data is processed. Oracle’s Compute Cloud@Customer addresses these issues by offering an on-premises cloud solution that ensures data remains securely within a customer’s data center. It can even function in a disconnected mode for greater sovereignty control. Equipped with NVIDIA L40S GPUs, it delivers the performance needed for AI, graphics, and video processing across industries. With its security, modularity, and consumption-based pricing, the platform provides a compelling foundation for enterprise AI adoption.”

Alexei Balaganski Lead Analyst & CTO, KuppingerCole Analysts

“AI Vector Search and retrieval-augmented generation (RAG) are two techniques enterprise organizations employ to more efficiently extract value from the enterprise data that resides in databases around an organization. However, operationalizing AI Vector Search and RAG—and continuous tuning—for absolute best performance and efficiency is a time-consuming and inexact science for many IT organizations. The combination of Oracle Database 23ai on Exadata X11M with Compute Cloud@Customer with NVIDIA L40S GPUs strikes that optimal performance-efficiency balance, making deriving value from AI more frictionless and more accurate. Further, consuming this through Oracle Compute Cloud@Customer delivers this capability in a more cost-efficient manner.”

Matt Kimball Vice President and Principal Datacenter Analyst, Moor Insights & Strategy

“Getting started with AI can be a major challenge for organizations. Oracle Compute Cloud@Customer with NVIDIA L40S GPUs eliminates three key problems for the customer. First, it removes the barrier to entry with a modular architecture that starts small and can grow as needed—and still delivers the performance needed to run substantial AI models. Second, it follows a cloud subscription model, so customers get charged only for what they use. Third, security and compliance risks are greatly reduced, because Compute Cloud@Customer is deployed and runs on-premises, so that data never needs to leave the customer’s data center.”

Richard Winter CEO, WinterCorp

“Oracle Compute Cloud@Customer brings the power of scalable AI and consumption-based cloud economics to organizations that can’t move their data to the cloud. Its integrated architecture with scalable compute, storage, and up to 48 NVIDIA L40S GPUs makes it easy for users to run LLMs for inferencing and other AI use cases anywhere.”

Mike Leone Practice Director for Data Management, Analytics & AI, Enterprise Strategy Group

February 6, 2025

Cloud AI Resources Where You Need Them: Announcing NVIDIA L40S GPUs on Oracle Compute Cloud@Customer and Oracle Private Cloud Appliance

Sanjay Pillai, Director of Product Management
Jeevan Sreenivas, Principal Technical Product Manager

In today’s fast-paced world, where real-time decisions are crucial, enterprises demand immediate, actionable intelligence wherever their data resides. The future of AI extends beyond centralized cloud data centers to locations throughout your organization, where high performance, ultra-low latency, and flexible deployment options drive competitive advantages. As part of Oracle’s distributed cloud strategy, we’re excited to announce the launch of Oracle Compute Cloud@Customer with NVIDIA GPU configurations (orderable today) and Oracle Private Cloud Appliance with GPU configurations (orderable in March), featuring NVIDIA L40S GPUs that bring enterprise-grade AI, graphics, and high-performance computing capabilities directly to your location of choice, including your own data center.

Read the complete post