Docker in the Cloud: Oracle Container Cloud Service

by Dr. Frank Munz

February 2017

This article deals with Docker and its use in the cloud. The first part gives a brief, easy-to-understand introduction to Docker and motivates running it as a PaaS service. The second part introduces the Oracle Container Cloud Service (OCCS) and explains its key components.

Docker

Docker has been a tremendous success over the last three years. From an almost unknown and rather technical open source technology in 2014, it has evolved into a standardized runtime environment now officially supported for many Oracle enterprise products.

Basics

The core concepts of Docker are images and containers. A Docker image contains everything that is needed to run your software: the code, a runtime (e.g. the JVM), drivers, tools, scripts, libraries, deployments, etc.

A Docker container is a running instance of a Docker image. However, unlike in traditional virtualization with a type 1 or type 2 hypervisor, a Docker container runs on the kernel of the host operating system. Within a Docker image there is no separate operating system, as illustrated in Figure 1.

munz-docker-occs-fig01
Figure 1

Isolation vs. Virtualization

Every Docker container has its own file system, its own network stack (and therefore its own IP address), its own process space, and defined resource limitations for CPU and memory. Since a Docker container does not have to boot an operating system, it starts up instantly. Docker is about isolation, i.e. separating the resources of a host operating system, as opposed to virtualization, i.e. providing a guest operating system on top of the host operating system.
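
The resource limits mentioned above are simply flags on docker run. A minimal sketch follows (image name and limit values are chosen arbitrarily; the --cpus flag requires Docker 1.13 or later):

# Run an NGINX container limited to 512 MB of RAM and half a CPU core.
# The container starts almost instantly because no operating system has to boot.
docker run -d --name web --memory=512m --cpus=0.5 -p 8080:80 nginx

# List running containers and inspect the configured memory limit (in bytes)
docker ps
docker inspect --format '{{.HostConfig.Memory}}' web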

Incremental File System

The file system of a Docker image is layered, with copy-on-write semantics. This enables inheritance and reuse, saves resources on disk, and enables incremental image download.

munz-docker-occs-fig02
Figure 2

As illustrated in Figure 2, a Docker image with a WebLogic deployment could be based on an image with a WebLogic domain, which could be based on a WebLogic image, which is based on a JDK image, which again is based on an Oracle Linux base image.
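
As a rough sketch of how such layering is expressed (the image tag, package name, and application file are illustrative, and app.jar is assumed to exist in the build context): each instruction in a Dockerfile adds a layer on top of the base image, and docker history lists the resulting layers.

# Create a minimal Dockerfile that builds on the Oracle Linux base image
cat > Dockerfile <<'EOF'
FROM oraclelinux:7-slim
RUN yum install -y java-1.8.0-openjdk-headless && yum clean all
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
EOF

# Build the image and show the individual layers it consists of
docker build -t my-jdk-app .
docker history my-jdk-app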

Docker Registry

Another key difference between Docker and, for example, VirtualBox, is the massive number of available images. The public Docker Hub provides over 100,000 repositories with Docker images. Docker Hub contains software and applications from official repositories such as NGINX, Logstash, Apache HTTP, Grafana, MySQL, Ubuntu, and Oracle Linux.

When starting a container, Docker will automatically pull the corresponding image from the public Docker Hub if it is not available locally. Moreover, you can also create your own images and push them to Docker Hub into either a public or private repository.

munz-docker-occs-fig03
Figure 3

So far more than 5 billion images have been pulled from Docker Hub [4].
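
A sketch of the typical pull, tag, and push cycle (the repository name myuser/nginx is a placeholder; pushing requires a prior docker login):

# Pull an official image from Docker Hub (done automatically by docker run if missing)
docker pull nginx

# Tag the image for your own public or private repository and push it
docker tag nginx myuser/nginx:1.0
docker push myuser/nginx:1.0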

Docker as a Microservices Runtime

The idea of cutting monolithic applications into smaller chunks of microservices attracts a lot of attention these days among software developers.

Microservices are independently deployed as a process, use light-weight protocols to communicate with each other, and every service owns its data [7]. Since microservices follow a decentralized governance approach, they require a rather high amount of infrastructure automation, automated testing, fully automated CD pipelines and skilled, agile DevOps teams.

There is still a lot of discussion about this architectural style, yet it would be naive to assume that an application decomposed into microservices can simply be operated as a set of processes. To name only a few requirements, a microservice needs to be host-independent and isolated at the operating-system level. It must run within its resource limits, be scaled up and down, be restarted if it fails, and be discovered and connected to other microservices via a software-defined network layer.

Therefore, running a microservice in a Docker container gives you an excellent starting point for achieving most of these goals.

Two Dimensions

Docker changes the way we build, ship, and run software in two different dimensions:

  • It improves the process of getting applications reliably from development to production.
  • It provides a standard image format to get from on-premises to cloud.
Both dimensions are explained in more detail in the following paragraphs.

Development to Production

Creating a Docker image with all of its dependencies solves the "but it worked for me on my development machine" problem. The key idea is that a Docker image is created automatically by a build pipeline from a source code repository like Git and initially tested in a development environment. This immutable image will then be stored in a Docker registry.

As shown in Figure 4, the same image is used for further load tests, integration tests, acceptance tests, and so on; in every environment, the image stays the same. Small but necessary environment-specific differences, such as the JDBC URL of a production database, can be fed into the container as environment variables or files.

munz-docker-occs-fig04
Figure 4
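
As a sketch (the image name, tag, and variable name are made up), the environment-specific values are passed at start time while the image itself stays unchanged:

# Test environment
docker run -d -e JDBC_URL=jdbc:oracle:thin:@test-db:1521/pdb1 myapp:1.4.2

# Production: the identical, immutable image; only the environment variable differs
docker run -d -e JDBC_URL=jdbc:oracle:thin:@prod-db:1521/pdb1 myapp:1.4.2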

Statistics show that 65% of all current Docker use cases are in development, and 48% use Docker for Continuous Integration [5].

On Premises to Cloud

Docker changed the adoption of public clouds: On one hand, with a Docker image, for the first time in history, a common package format exists that can be run on premises as well as on every major cloud provider. Docker containers run on my laptop the same way they run on Oracle Cloud.

On the other hand, since Docker containers run on every major public cloud, they are a major contribution to overcoming a long-standing objection to public clouds: vendor lock-in. Every major cloud provider now offers Docker as a PaaS.

Maturity of Underlying Technology

The pace of Docker releases is much faster than the release cycle of traditional enterprise software. This pace, together with the relative newness of the Docker project, sometimes raises concerns about the security and stability of Docker.

Although Docker and its command-line interface, the Docker daemon, its API, and tools such as Docker Swarm, Docker Machine, and Docker Compose have only evolved over the last three years, the underlying kernel features have been available in every Linux kernel for nearly a decade.

A prominent example of an early adopter of container technology is Google, which was using Linux containers even before Docker existed. Furthermore, Google runs everything in containers. It is estimated that Google starts 2 billion containers per week [3].

 

Cgroups and Namespaces History

The underlying Linux kernel features that Docker uses are cgroups and namespaces. In 2008 cgroups were introduced to the Linux kernel based on work previously done by Google developers [1]. Cgroups limit and account for the resource usage of a set of operating system processes.

The Linux kernel uses namespaces to isolate the system resources of processes from each other. The first namespace, i.e. the mount namespace, was introduced as early as 2002.
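
A quick way to observe this isolation (exact output varies by host): inside a container, the PID namespace hides all host processes and the network namespace provides a separate network stack.

# Only the container's own processes are visible; the started command runs as PID 1
docker run --rm alpine ps

# The container has its own network interfaces and IP address
docker run --rm alpine ip addr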

Docker Support for Oracle Products

Oracle caught the Docker trend early. The first officially supported product was WebLogic. As of this writing there is official support for the following Oracle products:

  • Oracle Database
  • Oracle WebLogic
  • Oracle Coherence
  • Oracle Tuxedo
  • Oracle HTTP Server

In addition, Dockerfiles are available to build images for open source projects, including:

  • GlassFish
  • MySQL
  • NoSQL
  • OpenJDK

Note that at the time of this writing, Oracle enterprise software is not available from Docker Hub. To run it in a Docker container, you must download the predefined build scripts from https://github.com/oracle/docker-images, add the product installer, and run the script to create your own local Docker image.
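
The general flow looks like the following sketch, shown here for WebLogic (directory names, version numbers, and script options differ per product; check the README files in the repository):

# Clone the repository with the official Oracle Dockerfiles and build scripts
git clone https://github.com/oracle/docker-images.git
cd docker-images/OracleWebLogic/dockerfiles

# Download the product installer from OTN into the matching version subdirectory,
# then run the provided build script (illustrative invocation)
./buildDockerImage.sh -v 12.2.1.2 -d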

Oracle Container Cloud Service (OCCS)

The first part of this article explained some important Docker concepts. However, in a production environment it is not enough to simply run an application in a Docker container.

To set up and operate a production environment there is much more involved: Hardware is required to run the containers; software such as Docker itself along with repositories and cluster managers must be installed, upgraded and patched. If several Docker containers communicate across hosts, a network must be created. Clustered containers should be restarted if they fail. In addition, you expect that a set of containers linked to each other should be deployable as easily as a single container instance.

An example of this could be a load balancer, a few web servers, some WebLogic Server instances with an admin server and managed server, and a database.

Why OCCS?

OCCS is the newest addition to the Oracle Cloud landscape. It is a PaaS service that addresses these additional requirements for running Docker in production. The following paragraphs explain the key concepts of OCCS in more detail.

Manager and Worker Services

To get started with the Oracle Container Cloud Service you first define an OCCS service that represents a set of hosts used for OCCS. A service always consists of a manager node and one or more worker nodes.

The manager node orchestrates the deployment of containers to the worker nodes. The worker nodes host the containers or stacks of containers. The set of worker nodes for a service can later be further subdivided into resource pools.

Every configured OCCS service has its own admin user and password. To set up an OCCS service, you define its service name and either create a new SSH key or specify an existing one. Using this SSH key you can connect to the service from the command-line.

Figure 5 illustrates two different OCCS services.

munz-docker-occs-fig05
Figure 5

When setting up the worker nodes, you can specify the underlying compute shape, the total number of worker nodes used, and the data volume size of the worker nodes, as shown in Figure 6.

munz-docker-occs-fig06
Figure 6

Once the OCCS service is defined you can see its manager and the worker nodes in the console. Figure 7 shows a service with three worker nodes with their hostnames and IP addresses, memory and storage settings.

munz-docker-occs-fig7
Figure 7

Resource Pools

Hosts within an OCCS service are further organized into resource pools. A resource pool is a collection of hosts (worker nodes) to place the containers for a service. You can create any number of resource pools, tag the pools, and assign resources as worker nodes to them. When you create a service you also define its default resource pool.

You can also move hosts from one resource pool to another. For example, starting with three worker nodes in the default pool, you can move two of them to the development pool, as shown in Figure 8.

munz-docker-occs-fig08
Figure 8

Container Services

OCCS comes with several predefined OCCS container services. An OCCS container service defines a Docker service, i.e. the configuration settings necessary for running a Docker image, together with its deployment directives.

You can select an OCCS container service from a list of preconfigured existing services or simply define your own service (Figure 9).

munz-docker-occs-fig09
Figure 9

For a selected service you can choose its resource pool as a target, see (1) in Figure 10. Furthermore, there is a choice of several orchestration settings:

munz-docker-occs-fig10
Figure 10

Availability

When starting n containers you can select their availability (2):

  • Per pool: n containers are started per pool; hosts are selected based on the setting of the scheduler field.
  • Per host: n containers are started on every host.
  • Per tag: n containers are started on hosts matching the selected tag.

Scheduling Policy

The scheduling policy (3) defines how the order of the hosts is determined.

  • Random: OCCS starts containers evenly across the available hosts, in random order.
  • Memory: OCCS starts containers in order of memory, beginning with the host with the highest available memory.
  • CPU: OCCS starts containers in order of CPU availability, beginning with the host that has used the least CPU.

Constraints

Constraints for certain hosts or tags can be specified to further restrict the host placement.

Service Run Command

OCCS integrates well with the classic way of running a Docker container. There are three different ways of specifying how to run a service.

  1. You can construct the command to run a service with the graphical Builder wizard, for example if you have limited experience with the Docker command line. Using the Builder wizard you can easily specify the parameters of a docker run command: the image name, the default command to run, environment variables passed to the container, and various switches.

    Figure 11 shows the builder wizard for a Grafana service running in a Docker container as described in [8].

    munz-docker-occs-fig11
    Figure 11

  2. A second way is the Docker run tab: the information from the Builder wizard is displayed there as plain text and can be edited directly, or you can copy and paste an existing docker run command, for example from Docker Hub (see the sketch after this list).

    Figure 12 shows the same information from the Builder wizard above as the Docker run command.

    munz-docker-occs-fig12
    Figure 12

  3. Internally OCCS uses a documented YAML representation that you can view and edit in the third tab, as shown in Figure 13.

    munz-docker-occs-fig13
    Figure 13
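
For reference, a docker run command equivalent to the wizard settings for such a Grafana service might look like the following sketch (assuming the standard grafana/grafana image, which listens on port 3000):

# Start Grafana in detached mode and expose its web UI on port 3000 of the host
docker run -d --name grafana -p 3000:3000 grafana/grafana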

Stacks

OCCS does not only define and deploy single services. You can also link services together and start them as a stack. The OCCS console already comes with several predefined stack examples, such as WordPress with a database or a Redis cluster with master and slave.

Stacks are defined by a YAML file that lists the contained services. You can define environment variables for every service in a stack.

These stacks can be composed with a graphical editor from existing OCCS services as shown in Figure 14.

munz-docker-occs-fig14
Figure 14

Alternatively, you can download the source to build the example stacks yourself from the following GitHub repository:

https://github.com/oracle/docker-images/tree/master/ContainerCloud

Every stack in the GitHub repository comes with a YAML file and a Makefile to build it. Moreover, every image that a stack consists of comes with a Dockerfile to build it.
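
A sketch of building one of the example images locally (the subdirectory name is illustrative and the Makefile targets may differ; check the repository README):

git clone https://github.com/oracle/docker-images.git
cd docker-images/ContainerCloud/images

# Each image directory contains a Dockerfile and a Makefile; build with make
cd nginx-lb    # illustrative directory name
make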

Service Discovery

An important part of every Docker PaaS is service discovery. OCCS relies on service discovery for communication between Docker containers and maintains a service discovery database with information (host/port) about the running containers. The DNS record used to discover a new service is inserted automatically when you start a Docker container in OCCS.

Running services can be discovered by assigning them a tag.

Repositories

OCCS can tag Docker images and push them to a repository for you. Images can be pushed to any repository in any registry, not only the registry from which they were pulled.

Docker Hub is preconfigured, but you can add further registries, for example those that you run locally within your enterprise.

munz-docker-occs-fig15
Figure 15

OCCS Operations


SSH Access

You can log into the OCCS service manager using the SSH key that you defined when creating the service. It runs a very restricted VM environment that provides only essential tools such as vi, cat, rm, and cp.

To connect to the manager node of the service, get its public IP address from the service console. Then run the following command from the directory where you stored the private key file:

ssh -i privateKey opc@140.86.3.9

Note that you can connect only with the private key; you do not have to specify a password. At the time of this writing you can connect only to the manager node, not to the worker nodes.

Backups

You can back up your service configuration in the service console with a one-click operation. A backup includes the settings for deployments, registries, services, and stacks. Backups are downloaded to your local machine. Oracle recommends that you back up your service regularly for disaster recovery. Also, a backup is a mandatory step when upgrading to a newer version of OCCS.

Upgrade

Users will be notified when a new version of OCCS with additional features and bug fixes becomes available. You can then back up your existing service, create a new service with the new OCCS version, and import your configuration from the backup file.

References

  1. Cgroups (Wikipedia)
  2. Linux Namespaces (Wikipedia)
  3. EVERYTHING at Google runs in a container, by Jack Clark
  4. Docker Hub Hits 5 Billion Pulls
  5. Evolution of the Modern Software Supply Chain, Docker Survey 2016
  6. MOS Doc ID 2216342.1: Docker Support for Oracle DB
  7. Microservices, by Martin Fowler
  8. Oracle Container Cloud Service (OCCS), by Frank Munz
  9. Oracle Container Cloud Service: Get Started (Oracle Help Center)

About the Author

Oracle ACE Director Frank Munz is a software architect, cloud evangelist, and independent consultant specializing in middleware and distributed computing.


This article has been reviewed by the relevant Oracle product team and found to be in compliance with standards and practices for the use of Oracle products.