The Container Instances service is great for running isolated containers that do not require a container orchestration platform such as Kubernetes. It is suitable for use cases including APIs, web apps, CI/CD jobs, automation tasks, data/media processing, development/test environments, and more. However, it does not replace container orchestration platforms. For use cases that require container orchestration, use Container Engine for Kubernetes (OKE).
When running containers on Container Instances, you do not have to provision or manage any VMs or servers yourself. You simply specify the container images and launch configuration, and OCI manages the underlying compute required to run your containers. By contrast, if you run containers on a virtual machine, you are responsible for managing the server and for installing and maintaining the container runtime on it.
With OCI Container Instances, you only pay for the infrastructure resources used by your container instances. The price for CPU and memory resources allocated to a container instance is the same as the price of OCI Compute instances for the chosen shape. There are no additional charges for using container instances. Partial OCPU and gigabyte (memory) hours consumed are billed as partial hours with a one-minute minimum, and usage is aggregated by the second. Each container instance gets 15 GB of ephemeral storage by default at no additional charge. For more details, see the Container Instances pricing page.
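As a worked example of this billing model, the sketch below applies the per-second aggregation and one-minute minimum described above. The dollar rates are hypothetical placeholders, not real OCI prices; actual rates follow OCI Compute pricing for the chosen shape.

```python
# Sketch of the Container Instances billing model described above.
# The rates below are HYPOTHETICAL placeholders, not real OCI prices.
OCPU_RATE_PER_HOUR = 0.025   # hypothetical $/OCPU-hour
MEM_RATE_PER_HOUR = 0.0015   # hypothetical $/GB-hour

def billed_seconds(runtime_seconds: int) -> int:
    """Usage is aggregated by the second, with a one-minute minimum."""
    return max(runtime_seconds, 60)

def estimate_cost(ocpus: float, memory_gb: float, runtime_seconds: int) -> float:
    """Estimate the cost of one container instance run."""
    hours = billed_seconds(runtime_seconds) / 3600
    return ocpus * OCPU_RATE_PER_HOUR * hours + memory_gb * MEM_RATE_PER_HOUR * hours
```

Under this model, a 90-second CI job is billed for exactly 90 seconds of its allocated OCPUs and memory, while a 10-second task is still billed for the 60-second minimum.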
When creating a container instance, you select the underlying compute shape and can allocate up to the maximum CPU and memory resources provided by that shape. For example, if you select an E4 or E3 Flex shape, you can allocate up to 64 OCPUs (128 vCPUs) and 1024 GB of memory to your container instance.
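A minimal sketch of checking a requested allocation against these shape maximums (the shape names and the limits table are assumptions based only on the E3/E4 Flex figures given above):

```python
# Maximum allocations per shape, from the figures above.
# Shape names are assumptions for illustration.
SHAPE_LIMITS = {
    "CI.Standard.E4.Flex": {"max_ocpus": 64, "max_memory_gb": 1024},
    "CI.Standard.E3.Flex": {"max_ocpus": 64, "max_memory_gb": 1024},
}

def validate_allocation(shape: str, ocpus: int, memory_gb: int) -> bool:
    """Check that a requested allocation fits within the shape's maximums."""
    limits = SHAPE_LIMITS[shape]
    return 0 < ocpus <= limits["max_ocpus"] and 0 < memory_gb <= limits["max_memory_gb"]
```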
Yes. When creating a container instance, you can specify one or more containers. For each container, you can specify the container image and, optionally, environment variables, resource limits, startup options, and so on.
A container instance should typically run a single application. However, your application container may require supporting containers, such as a logging sidecar or, for development purposes, a database container. You can run such related containers of the same application on one container instance. Containers running on the same container instance share its CPU/memory resources, local network, and ephemeral storage. You can apply CPU/memory resource limits at the container level to restrict the amount of resources consumed by each container.
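To illustrate, a request for a container instance running an application container plus a logging sidecar with per-container resource limits might be shaped like the sketch below. This uses plain dictionaries; the field names, image URLs, and shape name are assumptions for illustration, not the exact OCI API schema.

```python
# Illustrative request body for a two-container instance with per-container
# resource limits. Field names and images are assumptions, not the exact
# OCI SDK/API schema.
instance_request = {
    "shape": "CI.Standard.E4.Flex",
    "shape_config": {"ocpus": 4, "memory_in_gbs": 16},
    "containers": [
        {
            "image_url": "ocir.example.com/myapp:latest",     # hypothetical image
            "resource_config": {"ocpus": 3, "memory_in_gbs": 12},
            "environment_variables": {"APP_ENV": "production"},
        },
        {
            "image_url": "ocir.example.com/log-sidecar:1.0",  # hypothetical image
            "resource_config": {"ocpus": 1, "memory_in_gbs": 4},
        },
    ],
}

# Since all containers share the instance's resources, the per-container
# limits here are chosen to fit within the instance-level allocation.
total_ocpus = sum(c["resource_config"]["ocpus"] for c in instance_request["containers"])
assert total_ocpus <= instance_request["shape_config"]["ocpus"]
```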
Any Open Container Initiative-compliant container registry is supported, including OCI Container Registry.
Each container instance gets 15 GB of ephemeral storage by default. Options to attach persistent volumes with OCI Block Storage and OCI File Storage (FSS) will be available soon. Container Instances can also use external databases to store data that outlives the container instance.
A container instance becomes inactive as soon as all containers within it stop, provided the restart policy is not enabled. This means that container instances used for ephemeral workloads, such as CI/CD pipelines, automation tasks for cloud operations, and data/media processing, stop once the workload completes. Customers are billed only for the duration of the job.
For container instances that need to stay up, such as those serving web applications, customers can configure restart policies so that containers within a container instance are restarted on failure, keeping the application available.
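The lifecycle rule from the two paragraphs above can be sketched as a small state function. This is illustrative logic only; the state names are assumptions, not the actual OCI lifecycle states.

```python
def instance_state(container_states: list[str], restart_enabled: bool) -> str:
    """Illustrate when a container instance goes inactive.

    The instance becomes inactive (and billing stops) only when every
    container has stopped AND no restart policy is enabled; otherwise it
    stays active. State names are assumptions for illustration.
    """
    all_stopped = all(state == "EXITED" for state in container_states)
    return "INACTIVE" if all_stopped and not restart_enabled else "ACTIVE"
```

For example, an ephemeral CI/CD job with the restart policy disabled goes inactive when its last container exits, while a web application with restarts enabled stays active even after a container failure.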