Kubernetes Engine (OKE)

OKE streamlines operations for cloud native, enterprise-grade Kubernetes at any scale. Deploy, manage, and scale your most demanding workloads—including AI and microservices—with automated upgrades, intelligent scaling, and built-in security.



AI at Scale: Bring Innovation to Market Fast with OCI Kubernetes Engine (OKE)

Learn how to accelerate development and simplify managing AI workloads in production.

Why Choose OKE?

  • Price-Performance

    See how 8x8 improved performance and TCO on OCI.

  • Autoscaling

    Learn how DMCC meets peak demand with elastic scaling.

  • Efficiency

    Explore how Cohere improved serving efficiency on OCI.

  • Portability

    See how CNCF moved Kubernetes workloads to OCI with minimal changes.

  • Simplicity

    Find out how EZ Cloud streamlined deployment and Day 2 operations.

  • Reliability

    Read how B3 achieves stringent availability objectives on OCI.

  • Resiliency

    See how Zimperium designs for regional failover and rapid recovery.

Customers choose OKE because it delivers the results—and reliability—they need to run and grow their business.

OCI Kubernetes Engine (OKE) is certified by the Cloud Native Computing Foundation (CNCF) for both Kubernetes Platform and Kubernetes AI Platform conformance.

These certifications validate OKE’s commitment to open standards—helping ensure that your cloud native and AI/ML workloads run on a platform that’s fully aligned with the industry’s best practices and interoperable across the global Kubernetes ecosystem.

Read more about OCI’s new AI Conformance certification.

OKE use cases

Accelerate AI model building

AI model building starts with data preparation and experimentation, stages that benefit from secure, shared access to GPUs and centralized administration. OKE enables teams to:

– Maximize GPU utilization through secure, multitenant clusters

– Collaborate efficiently in centrally managed environments

– Integrate with Kubeflow for streamlined model development and deployment

Learn more about running applications on GPU-based nodes with OKE.
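
To make the shared-GPU pattern concrete, here is a minimal pod sketch that requests a single GPU from a GPU node pool. The image name is an illustrative placeholder; nvidia.com/gpu is the extended resource advertised by the NVIDIA device plugin on OKE GPU worker nodes.

    # Minimal pod sketch requesting one GPU on an OKE GPU node pool.
    # The image is a placeholder; nvidia.com/gpu is the resource exposed
    # by the NVIDIA device plugin on GPU worker nodes.
    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-experiment
    spec:
      containers:
        - name: notebook
          image: example.com/ml-notebook:latest  # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1  # schedule onto a node with a free GPU

Because GPUs are requested through standard Kubernetes resource limits, several teams can share one cluster while the scheduler enforces per-pod GPU allocation.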

OKE: Purpose-built for AI and ML

Built on OCI's high-performance infrastructure, OKE gives you:

– Access to the latest NVIDIA GPUs (H100, A100, A10, and more)

– Ultrafast RDMA networking for maximum throughput and low latency

– Full control with managed or self-managed Kubernetes worker nodes

Explore how to create a Kubernetes cluster and install Kubeflow within it.

Orchestrate training workloads efficiently

Data scientists rely on optimized scheduling to maximize resource use for training jobs. OKE supports advanced schedulers such as Volcano and Kueue to efficiently run parallel and distributed workloads.
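
As a sketch of what that looks like in practice, here is a minimal Volcano job that gang-schedules four workers, so a distributed training run starts only when every replica can be placed. The job name, image, and GPU counts are illustrative placeholders.

    # Minimal Volcano job sketch: gang-schedules 4 workers so the run
    # starts only when all replicas fit. Image and sizes are placeholders.
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: distributed-training
    spec:
      schedulerName: volcano   # hand the job to the Volcano scheduler
      minAvailable: 4          # gang scheduling: all 4 pods or none
      tasks:
        - name: worker
          replicas: 4
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: trainer
                  image: example.com/train:latest  # placeholder image
                  resources:
                    limits:
                      nvidia.com/gpu: 1  # one GPU per worker

Gang scheduling avoids the deadlock where a partially placed job holds GPUs while waiting for replicas that can never be scheduled.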

Large-scale AI training requires fast, low-latency cluster networking. OCI’s RDMA-enabled infrastructure empowers OKE to move data directly to and from GPU memory, minimizing latency and maximizing throughput.

OKE: Ready for high-performance AI training

OKE, built on reliable OCI infrastructure, brings you:

– Access to NVIDIA GPUs (H100, A100, A10, and more)

– Ultrafast, RDMA-backed network connections

– The flexibility to run jobs on self-managed Kubernetes nodes

Learn more about running applications on GPU-based nodes with OKE.

Ready to run GPU workloads on OKE with NVIDIA A100 bare metal nodes? This tutorial can show you how.

Efficient, scalable AI inference

OKE takes full advantage of Kubernetes to efficiently manage inference pods, automatically adjusting resources to meet demand. With the Kubernetes Cluster Autoscaler, OKE can automatically resize managed node pools based on real-time workload demands, enabling high availability and optimal cost management when scaling inference services.
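
For context on how node pool bounds are supplied, here is an illustrative skeleton of the Cluster Autoscaler Deployment as typically installed on OKE. The image, node pool OCID, and scaling bounds are placeholders, and the exact flags (including the cloud provider name) should be taken from the OKE documentation for your autoscaler version.

    # Skeleton of the Cluster Autoscaler Deployment on OKE. Placeholders
    # throughout; see the OKE docs for the exact image and flags.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cluster-autoscaler
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: cluster-autoscaler
      template:
        metadata:
          labels:
            app: cluster-autoscaler
        spec:
          serviceAccountName: cluster-autoscaler  # assumes RBAC per OKE docs
          containers:
            - name: cluster-autoscaler
              image: cluster-autoscaler:placeholder-tag  # match your Kubernetes version
              command:
                - ./cluster-autoscaler
                - --cloud-provider=oci-oke            # provider name per OKE docs
                - --nodes=1:10:ocid1.nodepool.oc1...  # min:max:node-pool OCID (placeholder)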

OKE’s advanced scheduling and resource management enable you to set precise CPU and memory allocations for inference pods, supporting consistent and reliable performance as workloads fluctuate. Learn more about deploying and managing applications on OKE.
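
To make the resource-allocation point concrete, here is a minimal inference Deployment sketch with explicit CPU and memory requests and limits; all names and sizes are illustrative placeholders.

    # Minimal inference Deployment sketch with explicit CPU/memory
    # requests and limits so scheduling stays predictable under load.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: inference-api
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: inference-api
      template:
        metadata:
          labels:
            app: inference-api
        spec:
          containers:
            - name: server
              image: example.com/inference:latest  # placeholder image
              resources:
                requests:
                  cpu: "2"      # guaranteed scheduling capacity
                  memory: 4Gi
                limits:
                  cpu: "4"      # ceiling under bursty traffic
                  memory: 8Gi

Setting requests below limits gives each pod guaranteed capacity while still allowing bursts when a node has headroom.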

OKE offers robust options for scalable, cost-effective AI inference—including virtual nodes for rapid pod-level scaling and the flexibility to run on both GPU and Arm-based processors.

See how to deploy NVIDIA NIM inference microservices at scale with OCI Kubernetes Engine.

For more on running AI inference on GPU nodes, review the documentation for running applications on GPU-based nodes.

Make application migration easy with OKE

When you bring your applications to OKE, you can:

  • Migrate existing apps as-is—no rearchitecting needed, just lift, shift, and go.
  • Simplify your day-to-day with built-in automation for scaling, patching, and upgrades.
  • Streamline infrastructure management, so your team spends less time on maintenance and more on innovation.
  • Boost resource efficiency and optimize costs with advanced orchestration tools.
  • Increase agility, uptime, and resilience with Oracle’s high availability global cloud regions.
  • Strengthen security and facilitate compliance using Oracle’s enterprise-grade controls and certifications.

Modernizing with OKE means you move faster and more securely—while Oracle handles the complex parts behind the scenes. That’s migration made easy, so you can focus on what matters most: your business.

Follow the step-by-step deployment guide on using OKE, OCI Bastion, and GitHub Actions for secure, automated migration.
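
As a compressed sketch of the pipeline shape (not the guide's exact steps), the following GitHub Actions job applies manifests to an OKE cluster. It assumes the OCI CLI is installed and authenticated on the runner and that the runner can reach the cluster endpoint; the guide covers the Bastion session and credential setup, and the secret name and manifest path here are hypothetical.

    # Sketch of a GitHub Actions deploy job targeting OKE. Assumes the
    # OCI CLI is installed/authenticated and the cluster is reachable;
    # OKE_CLUSTER_OCID is a hypothetical repository secret.
    name: deploy-to-oke
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Generate a kubeconfig for the OKE cluster
            run: |
              oci ce cluster create-kubeconfig \
                --cluster-id "${{ secrets.OKE_CLUSTER_OCID }}" \
                --file "$HOME/.kube/config"
          - name: Apply Kubernetes manifests
            run: kubectl apply -f k8s/  # placeholder manifest directory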

For more on OKE features and management, see the official OKE documentation.

Supercharge microservices development with OKE

Building microservices on OKE lets your teams:

  • Develop and deploy services independently, so good ideas ship faster.
  • Automate builds and rollouts with OCI's CI/CD integrations for smoother updates (and quieter weekends).
  • Scale each microservice on demand to match your business needs—no more all-or-nothing resource allocation.
  • Modernize your architecture for agility and resilience to set your business up for whatever’s next.

With OKE, you get the robust tooling and enterprise security Oracle is known for, plus the flexibility microservices require. Change the way you build, update, and scale apps—with fewer headaches and a lot more control.

For more information on developing and managing microservices: