Kubernetes Engine (OKE)

OKE streamlines operations for cloud native, enterprise-grade Kubernetes at any scale. Deploy, manage, and scale your most demanding workloads—including AI and microservices—with automated upgrades, intelligent scaling, and built-in security.



AI at Scale: Bring Innovation to Market Fast with OCI Kubernetes Engine (OKE)

Learn how to accelerate development and simplify managing AI workloads in production.

Why Choose OKE?

  • Price-Performance

    See how 8x8 improved performance and TCO on OCI.

  • Autoscaling

    Learn how DMCC meets peak demand with elastic scaling.

  • Efficiency

    Explore how Cohere improved serving efficiency on OCI.

  • Portability

    See how CNCF moved Kubernetes workloads to OCI with minimal changes.

  • Simplicity

    Find out how EZ Cloud streamlined deployment and Day 2 operations.

  • Reliability

    Read how B3 achieves stringent availability objectives on OCI.

  • Resiliency

    See how Zimperium designs for regional failover and rapid recovery.

OKE use cases

Accelerate AI model building

The AI model-building process starts with data preparation and experimentation, phases that benefit from secure, shared access to GPUs and centralized administration. OKE enables teams to:

– Maximize GPU utilization through secure, multitenant clusters

– Collaborate efficiently in centrally managed environments

– Integrate with Kubeflow for streamlined model development and deployment

Learn more about running applications on GPU-based nodes with OKE.
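One common way to share GPUs securely across teams is to give each team its own namespace with a GPU quota. The sketch below is illustrative only; the namespace name and quota value are placeholder assumptions, not OKE defaults.

```yaml
# Hypothetical per-team GPU quota in a shared cluster.
# The namespace name and the "4 GPU" cap are placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: ml-team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: ml-team-a
spec:
  hard:
    requests.nvidia.com/gpu: "4"   # team may request at most 4 GPUs at a time
```

Pods in `ml-team-a` that push the namespace past four requested GPUs are rejected at admission, which keeps one team from starving the others.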

OKE: Purpose-built for AI and ML

Built on OCI's high-performance infrastructure, OKE gives you:

– Access to the latest NVIDIA GPUs (H100, A100, A10, and more)

– Ultrafast RDMA networking for maximum throughput and low latency

– Full control with managed or self-managed Kubernetes worker nodes

Explore how to create a Kubernetes cluster and install Kubeflow within it.

Orchestrate training workloads efficiently

Data scientists rely on optimized scheduling to maximize resource use for training jobs. OKE supports advanced schedulers such as Volcano and Kueue to efficiently run parallel and distributed workloads.
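With Volcano installed on the cluster, a distributed training job can use gang scheduling so that no pod starts until the whole group fits. This is a minimal sketch; the job name, image, and sizes are hypothetical.

```yaml
# Minimal gang-scheduled training job for the Volcano scheduler (illustrative).
# Assumes Volcano is installed; image and replica counts are placeholders.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: distributed-train
spec:
  minAvailable: 4            # gang scheduling: start only when all 4 pods can run
  schedulerName: volcano
  tasks:
    - replicas: 4
      name: worker
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trainer
              image: example.com/train:latest   # hypothetical training image
              resources:
                limits:
                  nvidia.com/gpu: 1
```

Because `minAvailable` equals the replica count, the job never runs partially scheduled, avoiding idle GPUs held by workers waiting on peers.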

Large-scale AI training requires fast, low-latency cluster networking. OCI’s RDMA-enabled infrastructure empowers OKE to move data directly to and from GPU memory, minimizing latency and maximizing throughput.


Ready to run GPU workloads on OKE with NVIDIA A100 bare metal nodes? This tutorial can show you how.
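Scheduling a workload onto GPU nodes comes down to requesting the `nvidia.com/gpu` resource, optionally pinned to a specific node pool. In this sketch the node-pool label is a hypothetical one you would apply yourself; the CUDA image is a public NVIDIA base image.

```yaml
# Illustrative smoke-test pod for a GPU node pool.
# The nodeSelector label is hypothetical; apply your own label to the pool.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  nodeSelector:
    app.example/pool: a100-bm        # hypothetical label on the A100 node pool
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]        # prints the GPUs visible to the container
      resources:
        limits:
          nvidia.com/gpu: 1          # request one GPU from the node
```

If the pod completes and its logs list an A100, the device plugin and drivers on the node are working end to end.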

Efficient, scalable AI inference

OKE takes full advantage of Kubernetes to efficiently manage inference pods, automatically adjusting resources to meet demand. With the Kubernetes Cluster Autoscaler, OKE can automatically resize managed node pools based on real-time workload demands, enabling high availability and optimal cost management when scaling inference services.

OKE’s advanced scheduling and resource management enable you to set precise CPU and memory allocations for inference pods, supporting consistent and reliable performance as workloads fluctuate. Learn more about deploying and managing applications on OKE.
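The two ideas above, explicit resource envelopes plus demand-driven scaling, can be sketched as a Deployment paired with a HorizontalPodAutoscaler. The image, replica counts, and thresholds here are placeholders, not recommendations.

```yaml
# Illustrative inference Deployment with explicit CPU/memory envelopes,
# plus an HPA scaling on CPU utilization. All names and numbers are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inference
  template:
    metadata:
      labels:
        app: inference
    spec:
      containers:
        - name: server
          image: example.com/model-server:latest   # hypothetical serving image
          resources:
            requests:            # what the scheduler reserves
              cpu: "2"
              memory: 4Gi
            limits:              # hard ceiling for the container
              cpu: "4"
              memory: 8Gi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inference
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inference
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

When the HPA adds pods that no longer fit, the Cluster Autoscaler can grow the managed node pool, and shrink it again as traffic subsides.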

OKE offers robust options for scalable, cost-effective AI inference—including virtual nodes for rapid pod-level scaling and the flexibility to run on both GPU and Arm-based processors.

See how to deploy NVIDIA NIM inference microservices at scale with OCI Kubernetes Engine.

For more on running AI inference on GPU nodes, review the documentation for running applications on GPU-based nodes.

Make application migration easy with OKE

When you bring your applications to OKE, you can:

  • Migrate existing apps as-is—no rearchitecting needed, just lift, shift, and go.
  • Simplify your day-to-day with built-in automation for scaling, patching, and upgrades.
  • Streamline infrastructure management, so your team spends less time on maintenance and more on innovation.
  • Boost resource efficiency and optimize costs with advanced orchestration tools.
  • Increase agility, uptime, and resilience with Oracle’s high availability global cloud regions.
  • Strengthen security and facilitate compliance using Oracle’s enterprise-grade controls and certifications.

Modernizing with OKE means you move faster and more securely—while Oracle handles the complex parts behind the scenes. That’s migration made easy, so you can focus on what matters most: your business.

Follow the step-by-step deployment guide on using OKE, OCI Bastion, and GitHub Actions for secure, automated migration.

For more on OKE features and management, see the official OKE documentation.

Supercharge microservices development with OKE

Building microservices on OKE lets your teams:

  • Develop and deploy services independently, so good ideas ship faster.
  • Automate builds and rollouts with OCI CI/CD integrations for smoother updates (and quieter weekends).
  • Scale each microservice on demand to match your business needs—no more all-or-nothing resource allocation.
  • Modernize your architecture for agility and resilience to set your business up for whatever’s next.

With OKE, you get the robust tooling and enterprise security Oracle is known for, plus the flexibility microservices require. Change the way you build, update, and scale apps—with fewer headaches and a lot more control.

For more information on developing and managing microservices, see the official OKE documentation.

“Many OCI AI services run on OCI Kubernetes Engine (OKE), Oracle’s managed Kubernetes service. In fact, our engineering team experienced a 10X performance improvement with OCI Vision just by switching from an earlier platform to OKE. It’s that good.”

Jun Qian

VP of OCI AI Services, Oracle Cloud Infrastructure

Customers innovating with cloud native services on OCI

Oracle Cloud Infrastructure: A price-performance leader for Kubernetes

CIO magazine recognizes OCI for its expertise in delivering cutting-edge Kubernetes solutions, supporting scalable and efficient application development.

Get started with Kubernetes Engine

  • Deploy a simple containerized app using OKE managed nodes

    Deploy simple microservices packaged as Docker containers that communicate via a common API.


  • Deploy a Kubernetes cluster with virtual nodes

    Discover best practices for deploying a serverless virtual node pool using the provided Terraform automation and reference architecture.


  • Discover patterns to optimize your Kubernetes resources

    Find out how Tryg Insurance reduced their costs by 50% via dynamic rightsizing.


March 26, 2025

Announcing Fully Automated Disaster Recovery for OCI Kubernetes Engine using OCI Full Stack DR

Gregory King, Senior Principal Product Manager

Oracle Cloud Infrastructure (OCI) Full Stack Disaster Recovery (Full Stack DR) now offers native support for OCI Kubernetes Engine (OKE). OKE clusters are a selectable OCI resource in Full Stack DR, just like virtual machines, storage, load balancers, and Oracle databases. Full Stack DR can validate, fail over, switch over, and test your ability to recover OKE, infrastructure, and databases without your IT staff writing a single line of code or keeping step-by-step instructions in a spreadsheet or text file.

Read the complete post

Related Kubernetes products

Registry

Secure, standards-based service for working with container images

Full Stack DR

Fully automated disaster recovery for OCI Kubernetes Engine

DevOps CI/CD

Automate application delivery across build, test, and deployment

Resource Manager

Terraform-based cloud infrastructure automation

Get started with OKE


Oracle Cloud Free Tier

Get 30 days of access to CI/CD tools, managed Terraform, telemetry, and more.


Architecture Center

Explore deployable reference architectures and solutions playbooks.


Oracle Cloud Native services

Empower app development with Kubernetes, Docker, serverless, APIs, and more.


Contact us

Reach our associates for sales, support, and other questions.