Container Engine for Kubernetes enables you to quickly create, manage and consume Kubernetes clusters that leverage underlying compute, network and storage services without the need to install and maintain complex supporting Kubernetes infrastructure.
When you create a new OKE cluster, you can choose a Basic cluster, which has no base fee for the control plane. However, this option does not include OKE features such as virtual nodes and add-ons, nor the SLA for the control plane.
Basic clusters are suitable for customers who are willing to take on more management responsibility for their OKE clusters and who do not require the advanced capabilities of a Standard OKE cluster. If you need those capabilities in the future, you can easily switch to a Standard OKE cluster.
Use Container Engine for Kubernetes when you want to deploy and manage Kubernetes-based container applications. It combines the production-grade container orchestration of standard upstream Kubernetes with the control, security, and highly predictable performance of Oracle Cloud Infrastructure.
OKE charges are determined by the compute, storage, networking, and other infrastructure resources consumed by your OKE clusters. The OCPU and memory resources allocated to OKE worker nodes are priced the same as OCI Compute instances of the chosen shape. In addition, there is a base fee of $0.10 per cluster per hour, capped at $74.40 per month, for the control plane, which is backed by a financially guaranteed service level agreement (SLA).
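As a rough illustration of how the hourly control plane fee and the monthly cap interact, here is a minimal sketch. It assumes a 31-day month (744 hours); the rate and cap come from the pricing above, and actual billing behavior may differ.

```python
# Sketch of the control plane fee described above (assumed billing model:
# simple hourly accrual with a monthly cap; real invoicing may differ).
HOURLY_FEE = 0.10    # USD per cluster per hour
MONTHLY_CAP = 74.40  # USD per cluster per month

def control_plane_fee(hours: float) -> float:
    """Control plane fee for a cluster running the given hours in one month."""
    return min(hours * HOURLY_FEE, MONTHLY_CAP)

# A cluster running all 744 hours of a 31-day month exactly reaches the cap.
print(control_plane_fee(100))
print(control_plane_fee(744))
```

Note that 744 hours x $0.10 is where the $74.40 monthly maximum comes from.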
Customers creating an OKE cluster can choose the Basic cluster option, which eliminates the nominal control plane fee. However, Basic clusters do not include features such as virtual nodes and add-ons, nor the SLA for the control plane.
If you choose virtual nodes for worker nodes, there is an additional fee of $0.015 per node per hour, calculated from the runtime usage of each virtual node.
Container Engine for Kubernetes is supported in all regions, as documented in Regions and Availability Domains.
OKE is compliant with a number of industry standards and regulations, including, but not limited to, FedRAMP High, ISO/IEC 27001, PCI DSS, and SOC 1/2/3. For more information, refer to the infrastructure compliance page.
You don't need to manage it yourself. OKE takes care of it. Whenever you create a Kubernetes cluster with OKE, the managed service automatically sets up and runs multiple control planes in different fault domains or availability domains (logical data centers) to ensure high availability. Ongoing management tasks related to the control plane, such as Kubernetes version upgrades, are also seamlessly handled by the service without interruption.
Yes. Kubernetes clusters are created with standard upstream Kubernetes versions. These versions are also certified against the Cloud Native Computing Foundation (CNCF) conformance program.
Oracle Cloud Infrastructure (OCI) automatically creates and manages multiple Kubernetes control planes across various fault domains and availability domains (logical data centers) when you create an OKE cluster. This is done to ensure that the managed Kubernetes control plane is highly available. Control plane operations, such as upgrading to newer versions of Kubernetes, can be performed without service interruptions. Additionally, the provisioned worker nodes are labeled with their availability domain and region, allowing you to use Kubernetes scheduling mechanisms when developing and deploying robust container-based applications.
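Those node labels work with standard Kubernetes scheduling features. As a hedged illustration, the sketch below spreads replicas across availability domains using a topology spread constraint; it assumes the zone label follows the well-known Kubernetes convention `topology.kubernetes.io/zone`, and the Deployment name and image are placeholders.

```yaml
# Illustrative manifest: spread pods across availability domains (zones).
# Assumes nodes carry the standard topology.kubernetes.io/zone label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: nginx:1.25
```

With `maxSkew: 1`, the scheduler keeps the replica count between any two availability domains within one of each other, so losing a single domain leaves most replicas running.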
Yes. Managed Kubernetes clusters are enabled with Kubernetes RBAC. Managed Kubernetes is also integrated with Oracle Identity and Access Management (IAM), giving users fine-grained control over access to their clusters.
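On the Kubernetes RBAC side, access is expressed with standard Role and RoleBinding objects. The sketch below grants read-only access to pods in one namespace; the namespace and subject name are placeholders, and on OKE the subject would typically map to an IAM identity.

```yaml
# Standard Kubernetes RBAC: read-only access to pods in the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev       # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: example-user  # hypothetical subject name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```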
Yes. You can deploy a managed Kubernetes cluster into an existing VCN, giving you a great degree of control over the use of underlying subnets and security lists.
Yes. With OKE, your Kubernetes clusters are integrated into your virtual cloud network (VCN). Your cluster worker nodes, load balancers, and the Kubernetes API endpoint are part of a private or public subnet of your VCN. Regular VCN routing and firewall rules control access to the Kubernetes API endpoint, so you can make it reachable only from a corporate network, through a bastion host, or by specific platform services.
Yes. You can deploy a managed Kubernetes cluster on bare metal nodes. You can also use "node pools" (sets of nodes that share a common size and image) to create a cluster containing both bare metal and virtual machine nodes, and target your Kubernetes workloads appropriately.
Yes. Container Engine for Kubernetes allows users to expose Kubernetes services of type "LoadBalancer" and create Oracle load balancers. Users can also create Kubernetes Persistent Volumes and Persistent Volume Claims backed by Oracle Block Volumes.
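As a hedged sketch of both integrations, the manifests below declare a Service of type "LoadBalancer" (for which OKE provisions an OCI load balancer) and a PersistentVolumeClaim backed by an OCI Block Volume. The resource names are placeholders, and the storage class name `oci-bv` is an assumption; check your cluster's available storage classes with `kubectl get storageclass`.

```yaml
# Service of type LoadBalancer: the cluster provisions an OCI load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web-lb          # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# PersistentVolumeClaim: with a block-volume storage class, the claim is
# satisfied by an OCI Block Volume. Storage class name is an assumption.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data            # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: oci-bv
  resources:
    requests:
      storage: 50Gi
```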
Yes. When you create a cluster, you can provide a public/private SSH key pair in order to SSH into your worker nodes if desired.
Yes. Worker nodes run the standard Docker runtime, so that users can leverage familiar Docker commands.
At launch, OKE virtual nodes do not support persistent storage. After the initial launch, the service will add support for attaching persistent volumes backed by OCI Block Storage and OCI File Storage. If your Kubernetes application requires persistent storage in the meantime, use OKE managed nodes.
At launch, virtual nodes are compatible with the E3 and E4 compute shapes, with additional shapes to follow. If virtual nodes don't offer a shape your workloads need, you can use managed nodes instead.
The following software packages are available with add-ons for lifecycle management. New software packages are added regularly.