OKE enables you to quickly create, manage, and consume Kubernetes clusters that leverage underlying compute, network, and storage services without the need to install and maintain complex supporting Kubernetes infrastructure.
If you create a new OKE cluster, you can choose a basic cluster that doesn't have a base fee for the control plane. However, this option doesn't include access to certain OKE features, such as virtual nodes and add-ons, or the SLA for the control plane.
Basic clusters are suitable for customers who are willing to take on more management responsibilities for their OKE clusters and don't require the advanced capabilities provided by enhanced OKE clusters. If you need more advanced management capabilities in the future, you can easily switch to enhanced OKE clusters.
Use OKE when you want to deploy and manage your container applications with Kubernetes. It combines the production-grade container orchestration of standard upstream Kubernetes with the control, security, and high, predictable performance of OCI.
OKE charges are based on the compute, storage, networking, and other infrastructure resources consumed by your OKE clusters. The OCPU and memory resources for OKE worker nodes have the same pricing as OCI Compute instances of the chosen shape. There's also a $0.10 hourly base fee per cluster for the control plane, capped at $74.40 per month, and the control plane is financially backed by an SLA.
You can choose the basic cluster option when creating an OKE cluster, which eliminates the control plane fee. However, doing so means you won't have access to certain features, such as virtual nodes and add-ons, or to the SLA for the control plane.
When virtual nodes are chosen for worker nodes, an additional hourly fee of $0.015 per node applies, based on runtime usage.
OKE is supported in all regions as documented in Regions and Availability Domains.
OKE supports compliance with a number of industry standards and regulations, including FedRAMP High, ISO/IEC 27001, PCI DSS, and SOC 1/2/3. For more information, refer to the infrastructure compliance page.
You don't need to manage the Kubernetes control plane yourself; OKE takes care of it. Whenever you create a Kubernetes cluster with OKE, the managed service automatically sets up and runs multiple control plane instances across different fault domains or availability domains (logical data centers) to ensure high availability. The service also handles ongoing management tasks related to the control plane, such as Kubernetes version upgrades, seamlessly and without interruption.
Yes. Kubernetes clusters are created with standard upstream Kubernetes versions. These versions are also certified against the Cloud Native Computing Foundation (CNCF) conformance program.
OCI automatically creates and manages multiple Kubernetes control planes across various fault domains and availability domains (logical data centers) when you create an OKE cluster. This is done to ensure that the managed Kubernetes control plane is highly available. Control plane operations, such as upgrading to newer versions of Kubernetes, can be performed without service interruptions. Additionally, the provisioned worker nodes are labeled with their availability domain and region, allowing you to use Kubernetes scheduling mechanisms when developing and deploying robust container-based applications.
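For example, because worker nodes carry topology labels, you can use standard Kubernetes scheduling features such as topology spread constraints to spread replicas across availability domains. The following is a minimal sketch, not an OKE-specific recipe: the deployment name, labels, and image are illustrative, and you should confirm the exact topology label keys present on your nodes.

```yaml
# Minimal sketch: spread pods of a hypothetical "web" Deployment evenly
# across zones (availability domains) using the standard
# topology.kubernetes.io/zone node label. Names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: nginx:1.25
```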
Yes. Managed Kubernetes clusters are enabled with Kubernetes RBAC. Managed Kubernetes is also integrated with Oracle Identity and Access Management (IAM), providing users with powerful controls over access to their clusters.
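As a minimal sketch of the Kubernetes RBAC side, a namespaced read-only Role and RoleBinding might look like the following. The namespace, names, and subject are illustrative; in OKE, subjects are typically mapped to OCI IAM users or groups.

```yaml
# Minimal sketch: a namespaced read-only Role and a RoleBinding.
# The namespace, role name, and subject are illustrative; the subject
# shown is a hypothetical OCI IAM user OCID.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: User
    name: ocid1.user.oc1..exampleuniqueid   # illustrative IAM user OCID
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```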
Yes. You can deploy a managed Kubernetes cluster into an existing VCN, giving you a greater degree of control over security lists and the use of underlying subnets.
Yes. With OKE, your Kubernetes clusters are integrated in your VCN. Your cluster worker nodes, load balancers, and the Kubernetes API endpoint are part of a private or public subnet of your VCN. Regular VCN routing and firewall rules control access to the Kubernetes API endpoint, so you can make it accessible only from a corporate network, through a bastion host, or to specific platform services.
Yes. OKE allows users to expose Kubernetes services of type "LoadBalancer" and create Oracle load balancers. Users can also create Kubernetes Persistent Volumes and Persistent Volume Claims backed by Oracle Block Volumes.
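As an illustration, a Service of type LoadBalancer and a PersistentVolumeClaim might look like the following sketch. The names, port numbers, volume size, and the oci-bv storage class name are assumptions; verify them against the storage classes actually available in your cluster.

```yaml
# Minimal sketch: expose an app through an OCI load balancer and request
# block storage. Labels, sizes, and the storage class name ("oci-bv")
# are assumptions; check your cluster's storage classes before use.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer        # provisions an Oracle Cloud load balancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: oci-bv  # assumed block volume storage class name
  resources:
    requests:
      storage: 50Gi
```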
OKE uses CRI-O as its container runtime.
Yes. You can deploy a managed Kubernetes cluster on bare metal nodes. You can also leverage node pools (sets of nodes sharing a common shape and image) to create a cluster containing both bare metal instances and virtual machines, then target your Kubernetes workloads appropriately.
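As a sketch, you can steer a workload onto a particular node pool with a nodeSelector once that pool's nodes carry a distinguishing label. The label key and value used here are a hypothetical custom label you would apply to the bare metal node pool yourself.

```yaml
# Minimal sketch: pin a pod to bare metal nodes by matching a label.
# The label (nodepool: bare-metal) is a hypothetical custom label
# applied to the bare metal node pool; image and names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: heavy-batch-job
spec:
  nodeSelector:
    nodepool: bare-metal
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sh", "-c", "echo running on a bare metal node && sleep 3600"]
```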
When setting up an OKE cluster, you can assign a public/private SSH key pair to managed and self-managed nodes, and then use that key pair to access your worker nodes. However, note that OKE virtual nodes cannot be accessed over SSH, as they are fully managed by OKE.
It is possible to combine managed and self-managed nodes within a single OKE cluster. However, virtual nodes cannot be mixed with other node types in an OKE cluster.
OKE virtual nodes do not yet have persistent storage capabilities. However, the service plans to introduce support for attaching persistent volumes backed by OCI Block Storage and OCI File Storage. If your Kubernetes application requires persistent storage, it is advisable to use OKE managed nodes.
Virtual nodes are compatible with E3, E4, and A1 compute shapes, and new shapes are added regularly. If you need a shape that virtual nodes don't offer for your workloads, you can use managed nodes instead.
The following software packages are available with add-ons for lifecycle management. New software packages are added regularly.