Getting Started with OpenStack in Oracle Solaris 11

by Glynn Foster and David Comay

From zero to a full private cloud in minutes.


Published April 2014 (updated June 2014, July 2014, and June 2015)


Introduction

Oracle Solaris 11 includes a complete OpenStack distribution called Oracle OpenStack for Oracle Solaris. OpenStack, the popular open source cloud computing platform, provides comprehensive self-service environments for sharing and managing compute, network, and storage resources through a centralized web-based portal. OpenStack has been integrated into all the core technology foundations of Oracle Solaris, allowing you to set up an enterprise private cloud infrastructure in minutes.

Why OpenStack on Oracle Solaris?

Using OpenStack with Oracle Solaris provides the following advantages:

  • Industry-proven hypervisor. Oracle Solaris Zones offer significantly lower virtualization overhead, making them a perfect fit for OpenStack compute resources. Oracle Solaris Kernel Zones additionally provide independent kernel versions, allowing each environment to be patched independently of the host.
  • Secure and compliant application provisioning. The Unified Archive feature of Oracle Solaris enables rapid application deployment in the cloud via a new archive format that is portable between bare-metal systems and virtualized systems. Instant cloning in the cloud enables you to scale out and to reliably deal with disaster recovery emergencies. Unified Archives in Oracle Solaris 11, combined with capabilities such as Immutable Zones for read-only virtualization and the new Oracle Solaris compliance framework, enable administrators to ensure end-to-end integrity and can significantly reduce the ongoing cost of compliance.
  • Fast, fail-proof cloud updates. Oracle Solaris makes updating OpenStack an easy and fail-proof process, updating a full cloud environment in less than twenty minutes. Through integration with the Oracle Solaris Image Packaging System (IPS), ZFS boot environments ensure quick rollback in case anything goes wrong, allowing administrators to quickly get back up and running (see the sketch after this list).
  • Application-driven software-defined networking. Taking advantage of Oracle Solaris network virtualization capabilities, applications can now drive their own behavior for prioritizing network traffic across the cloud. The Elastic Virtual Switch (EVS) feature of Oracle Solaris provides a single point of control and enables the management of tenant networks through VLANs and VXLANs. The networks are flexibly connected to virtualized environments that are created on the compute nodes.
  • Single-vendor solution. Oracle is the #1 enterprise vendor offering a full-stack solution that provides the ability to get end-to-end support from a single vendor for database as a service (DaaS), platform as a service (PaaS), and infrastructure as a service (IaaS), saving significant heartache and cost.
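
As a minimal sketch of the update-and-rollback flow described above: pkg(1) clones the active ZFS boot environment before applying updates, so the prior environment remains intact and bootable. The boot environment name is whatever pkg(1) generates on your system.

# pkg update                     # updates packages into a new boot environment
# beadm list                     # lists boot environments; the previous BE is intact
# beadm activate <previous-BE>   # if needed, roll back by reactivating the prior BE
# reboot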

Oracle Solaris 11 includes the OpenStack Juno release (Oracle Solaris 11.2 SRU 10.5 or Oracle Solaris 11.3). Administrators who have already deployed an OpenStack cloud environment using the OpenStack Havana release can update it to Juno by following the instructions found in "Havana to Juno: OpenStack Upgrade Procedures."

Available OpenStack Services

The following OpenStack services are available in Oracle Solaris 11:

  • Nova. Nova provides the compute capability in a cloud environment, allowing self-service users to be able to create virtual environments from an allocated pool of resources. A driver for Nova has been written to take advantage of Oracle Solaris non-global zones and kernel zones.
  • Neutron. Neutron manages networking within an OpenStack cloud. Neutron creates and manages virtual networks across multiple physical nodes so that self-service users can create their own subnets that virtual machines (VMs) can connect to and communicate with. Neutron uses a highly extensible plug-in architecture, allowing complex network topologies to be created to support a cloud environment. A driver for Neutron has been written to take advantage of the network virtualization features of Oracle Solaris 11 including the Elastic Virtual Switch that automatically creates the tenant networks across multiple physical nodes.
  • Cinder. Cinder is responsible for block storage in the cloud. Storage is presented to the guest VMs as virtualized block devices known as Cinder volumes. There are two classes of storage: ephemeral volumes and persistent volumes. Ephemeral volumes exist only for the lifetime of the VM instance, but will persist across reboots of the VM. Once the instance has been deleted, the storage is also deleted. Persistent volumes are typically created separately and attached to an instance. Cinder drivers have been written to take advantage of the ZFS file system, allowing volumes to be created locally on compute nodes or served remotely via iSCSI or Fibre Channel. Additionally, a Cinder driver exists for Oracle ZFS Storage Appliance.
  • Glance. Glance provides image management services within OpenStack with support for the registration, discovery, and delivery of images that are used to install VMs created by Nova. Glance can use different storage back ends to store these images. The primary image format that Oracle Solaris 11 uses is Unified Archives. Unified Archives can be provisioned across both bare-metal and virtual systems, allowing for complete portability in an OpenStack environment.
  • Keystone. Keystone is the identity service for OpenStack. It provides a central directory of users—mapped to the OpenStack projects they can access—and an authentication system between the OpenStack services.
  • Horizon. Horizon is the web-based dashboard that allows administrators to manage compute, network, and storage resources in the data center and allocate those resources to multitenant users. Users can then create and destroy VMs in a self-service capacity, determine the networks on which those VMs communicate, and attach storage volumes to those VMs.
  • Swift. Swift provides object- and file-based storage in OpenStack. Swift provides redundant and scalable storage, with data replicated across distributed storage clusters. If a storage node fails, Swift quickly replicates its content to other active nodes, and additional storage nodes can be added to scale the cluster out horizontally. Oracle Solaris 11 supports hosting Swift in a ZFS environment.
  • Ironic. Ironic provides bare-metal provisioning in an OpenStack cloud, as opposed to VMs that are handled by Nova. An Ironic driver has been written to take advantage of the Oracle Solaris Automated Installer, which handles multinode provisioning of Oracle Solaris 11 systems.
  • Heat. Heat provides application orchestration in the cloud, allowing administrators to describe multitier applications by defining a set of resources through a template. As a result, a self-service user can execute this orchestration and have the appropriate compute, network, and storage deployed in the appropriate order.
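
To make the Heat workflow concrete, the following is a minimal sketch of a HOT template that deploys a single Nova instance. The image name, flavor, and key name are placeholders for whatever your Glance repository, flavor list, and uploaded keypairs actually contain.

# cat > single-server.yaml <<'EOF'
heat_template_version: 2013-05-23
description: Minimal sketch that deploys one Nova instance
resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: Solaris Non-global Zone   # placeholder image name
      flavor: "1"                      # placeholder flavor
      key_name: mykey                  # assumes an uploaded SSH keypair
EOF
# heat stack-create -f single-server.yaml mystack
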
Figure 1. The points of integration between Oracle Solaris and OpenStack

Installing OpenStack on Oracle Solaris 11

OpenStack on Oracle Solaris 11 does not have any special system requirements other than those spelled out for Oracle Solaris itself. Additional CPUs, memory, and disk space might be required, however, to support more than a trivial number of Nova instances. For information about general system requirements, see "Oracle Solaris 11 System Requirements."

The easiest way to start using OpenStack on Oracle Solaris is to download and install the Oracle Solaris 11 OpenStack Unified Archive, which provides a convenient way of getting started with OpenStack in about ten minutes. All of the essential OpenStack services are preinstalled and preconfigured to make setting up OpenStack on a single system easy. The Unified Archive can be downloaded from the Oracle Technology Network.

After installation and a small amount of customization, VMs, otherwise known as Nova instances, can be created, assigned block storage, attached to virtual networks, and then managed through an easy-to-use web browser interface.

The Unified Archive is preloaded with a pair of Glance images, one suitable for use with non-global zones and the other for kernel zones (solaris-kz branded zones). In addition, through the use of the new archiveadm(1M) command, new archives can be created from global, non-global, and kernel zones running Oracle Solaris 11 and then uploaded to the Glance repository for use with OpenStack.
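
As a sketch of that workflow, the following commands create an archive from an existing non-global zone and register it with Glance. The zone name and archive path are illustrative, and the image properties shown (architecture, hypervisor_type, vm_mode) are assumptions based on what the Solaris Nova driver expects; verify them against the documentation for your release.

# archiveadm create -z myzone /tmp/myzone.uar
# glance image-create --name "myzone-image" \
      --container-format bare --disk-format raw \
      --property architecture=sparc64 \
      --property hypervisor_type=solariszones \
      --property vm_mode=solariszones < /tmp/myzone.uar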

In order to use the Unified Archive method of installation, a suitable target is necessary. This is typically a bare-metal system on which the Unified Archive can be installed via the Automated Installer, or it can be a kernel zone. Although the Unified Archive can, in theory, be installed inside a non-global zone, the Nova compute virtualization in Oracle Solaris does not support nested non-global zones.

Detailed instructions for these installation methods are included in the README file associated with the archive. Refer to the README for full details; briefly, the Unified Archive can be deployed using any of the following methods:

  • Bare-metal installation using an Automated Installer network service
  • Bare-metal installation using a USB image generated from the Unified Archive using archiveadm(1M) (see the sketch after this list)
  • Indirect installation using the Oracle Solaris Automated Installer Boot Image combined with the Unified Archive
  • Direct installation into a kernel zone using the standard zonecfg(1M) and zoneadm(1M) commands
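
For the USB method listed above, archiveadm(1M) can generate bootable media directly from the archive. A minimal sketch, with an illustrative output path:

# archiveadm create-media -f usb -o /tmp/openstack.usb \
      sol-11_3-openstack-sparc.uar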

Using the Automated Installer

Using an Automated Installer (AI) server is the recommended way to install the OpenStack Unified Archive onto bare metal. Once you have an AI server in place, create or modify a manifest for an appropriate installation service to include the following fragment instead of the typical IPS software declaration:

<software type="ARCHIVE">
      <source>
        <file uri="/net/aiserver/archives/sol-11_3-openstack-sparc.uar"/>
      </source>
      <software_data action="install">
        <name>*</name>
      </software_data>
</software>

After saving your manifest, you need to associate it with an existing AI service. The following example associates it with the existing default-sparc AI alias service:

# installadm list -m
Service Name         Manifest Name Type    Status  Criteria
------------         ------------- ----    ------  --------
default-sparc        orig_default  derived default none    
solaris11_3-sparc    orig_default  derived default none    
# installadm create-manifest -n default-sparc \
-m os_manifest -f openstack_manifest.xml -d
Created Manifest: 'os_manifest'
# installadm list -m
Service Name         Manifest Name Type    Status   Criteria
------------         ------------- ----    ------   --------
default-sparc        os_manifest   xml     default  none    
                     orig_default  derived inactive none    
solaris11_3-sparc    orig_default  derived default  none

With an updated AI manifest, you can boot your system over the network as follows (in this case, using a SPARC system):

{0} ok boot net - install
Boot device: /pci@300/pci@1/pci@0/pci@1/network@0  File and args: - install
100 Mbps full duplex Link up
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: configuring /pci@300/pci@1/pci@0/pci@1/network@0

1000 Mbps full duplex Link up
<time unavailable> wanboot progress: wanbootfs: Read 368 of 368 kB (100%)
<time unavailable> wanboot info: wanbootfs: Download complete
Tue Jun 16 08:47:08 wanboot progress: miniroot: Read 265784 of 265784 kB (100%)
Tue Jun 16 08:47:08 wanboot info: miniroot: Download complete
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.
Remounting root read/write
Probing for device nodes ...
Preparing network image for use
Downloading solaris.zlib
curl arguments --insecure for http://solaris:5555//export/auto_install/solaris11_3-sparc/solaris.zlib
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  229M  100  229M    0     0   110M      0  0:00:02  0:00:02 --:--:--  110M
Downloading solarismisc.zlib
curl arguments --insecure for http://solaris:5555//export/auto_install/solaris11_3-sparc/solarismisc.zlib
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 22.5M  100 22.5M    0     0   102M      0 --:--:-- --:--:-- --:--:--  103M
Downloading .image_info
curl arguments --insecure for http://solaris:5555//export/auto_install/solaris11_3-sparc/.image_info
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    85  100    85    0     0   6525      0 --:--:-- --:--:-- --:--:--  8500
Done mounting image
Configuring devices.
Hostname: solaris
Service discovery phase initiated
Service name to look up: default-sparc
Service discovery over multicast DNS failed
Service default-sparc located at solaris:5555 will be used
Service discovery finished successfully
Process of obtaining install manifest initiated
Using the install manifest obtained via service discovery

solaris console login: 
Automated Installation started
The progress of the Automated Installation will be output to the console
Detailed logging is in the logfile at /system/volatile/install_log
Press RETURN to get a login prompt at any time.

08:57:13    Install Log: /system/volatile/install_log
08:57:13    Using XML Manifest: /system/volatile/ai.xml
08:57:14    Using profile specification: /system/volatile/profile
08:57:14    Using service list file: /var/run/service_list
08:57:14    Starting installation.
08:57:14    0% Preparing for Installation
08:57:14    100% manifest-parser completed.
08:57:14    100% None
08:57:14    0% Preparing for Installation
08:57:14    1% Preparing for Installation
08:57:15    2% Preparing for Installation
08:57:15    3% Preparing for Installation
08:57:15    4% Preparing for Installation
08:57:15    5% archive-1 completed.
08:57:15    6% install-env-configuration completed.
08:57:21    9% target-discovery completed.
08:57:23    Pre-validating manifest targets before actual target selection
08:57:23    Selected Disk(s) : c0t5000CCA05692B8CCd0
08:57:23    Pre-validation of manifest targets completed
08:57:23    Validating combined manifest and archive origin targets
08:57:23    Selected Disk(s) : c0t5000CCA05692B8CCd0
08:57:23    9% target-selection completed.
08:57:23    10% ai-configuration completed.
08:57:23    10% var-share-dataset completed.
08:57:34    10% target-instantiation completed.
08:57:34    10% Beginning archive transfer
08:57:34    Commencing transfer of stream: 8334d198-2812-4e37-85a5-8ae335103f81-0.zfs to rpool
08:57:53    15% Transferring contents
08:57:55    16% Transferring contents
08:57:57    19% Transferring contents
08:57:59    22% Transferring contents
08:58:01    24% Transferring contents
08:58:03    28% Transferring contents
08:58:07    33% Transferring contents
08:58:09    35% Transferring contents
08:58:11    37% Transferring contents
08:58:14    40% Transferring contents
08:58:16    41% Transferring contents
08:58:18    47% Transferring contents
08:58:22    51% Transferring contents
08:58:24    54% Transferring contents
08:58:26    55% Transferring contents
08:58:28    60% Transferring contents
08:58:30    62% Transferring contents
08:58:36    65% Transferring contents
08:58:38    68% Transferring contents
08:58:41    69% Transferring contents
08:58:43    74% Transferring contents
08:58:47    78% Transferring contents
08:58:49    80% Transferring contents
08:58:51    82% Transferring contents
08:58:53    86% Transferring contents
08:59:07    Completed transfer of stream: '8334d198-2812-4e37-85a5-8ae335103f81-0.zfs' from 
http://10.10.10.111/sol-11_3-24-openstack-sparc.uar
08:59:09    Archive transfer completed
08:59:11    90% generated-transfer-778-1 completed.
08:59:11    90% apply-pkg-variant completed.
08:59:11    90% update-dump-adm completed.
08:59:11    90% setup-swap completed.
08:59:12    91% device-config completed.
08:59:14    91% apply-sysconfig completed.
08:59:14    91% transfer-zpool-cache completed.
08:59:15    97% boot-archive completed.
08:59:16    Setting boot devices in firmware
08:59:16    Setting openprom boot-device
08:59:17    98% boot-configuration completed.
08:59:17    98% transfer-ai-files completed.
08:59:17    98% cleanup-archive-install completed.
08:59:18    100% create-snapshot completed.
08:59:18    100% None
08:59:18    Automated Installation succeeded.
08:59:18    You may wish to reboot the system at this time.
Automated Installation finished successfully
The system can be rebooted now
Please refer to the /system/volatile/install_log file for details
After reboot it will be located at /var/log/install/install_log

Once the installation is complete, you can proceed to reboot the system and answer some basic system configuration questions using the System Configuration Tool (assuming you didn't associate a system configuration profile as part of your AI server configuration). For more information about the Automated Installer, see "Installing Oracle Solaris 11 Systems."
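
If you would rather skip the interactive System Configuration Tool, a system configuration profile can be generated ahead of time and associated with the AI service. A minimal sketch, with illustrative file paths:

# sysconfig create-profile -o /tmp/sc_profile.xml
# installadm create-profile -n default-sparc -f /tmp/sc_profile.xml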

Using an Oracle Solaris Kernel Zone

To install the OpenStack Unified Archive into an Oracle Solaris Kernel Zone, use the existing zonecfg(1M) and zoneadm(1M) commands. The following example creates a kernel zone using the SYSsolaris-kz template, allocating eight virtual CPUs and capping memory at 12 GB.

# zonecfg -z openstack_zone
Use 'create' to begin configuring a new zone.
zonecfg:openstack_zone> create -t SYSsolaris-kz
zonecfg:openstack_zone> select virtual-cpu
zonecfg:openstack_zone:virtual-cpu> set ncpus=8
zonecfg:openstack_zone:virtual-cpu> end
zonecfg:openstack_zone> select capped-memory
zonecfg:openstack_zone:capped-memory> set physical=12g
zonecfg:openstack_zone:capped-memory> end
zonecfg:openstack_zone> verify
zonecfg:openstack_zone> exit

Once you have successfully created the zone configuration, you can install the kernel zone from the archive. The following example uses a disk size of 50 GB for this kernel zone to make sure there's enough space to create volumes for the VM instances. In this case, however, you will be able to create only non-global zones as the VM compute environments.

# zoneadm -z openstack_zone install -a ./sol-11_3-openstack-sparc.uar \
-x install-size=50g

Once the installation is complete, you can proceed to boot the kernel zone and log in to the console:

# zoneadm -z openstack_zone boot
# zlogin -C openstack_zone

As with the Automated Installer installation, you will then be prompted to use the System Configuration Tool for further system configuration.

Logging into the Horizon Dashboard

After the OpenStack installation is complete, you can log in to the OpenStack Horizon web-based dashboard to start evaluating the cloud environment and provision your first VM.

To log in to Horizon, point the browser to http://mysystem/horizon, where mysystem is the name of the system that is running the Horizon service under the Apache web server. A cloud administrator account has been preconfigured; you can log in as admin with a password of secrete.
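
The same credentials work with the OpenStack command-line clients. A minimal sketch, assuming the default Keystone endpoint on port 5000 of the same system:

# export OS_USERNAME=admin
# export OS_PASSWORD=secrete
# export OS_TENANT_NAME=demo
# export OS_AUTH_URL=http://mysystem:5000/v2.0
# keystone user-list        # verifies that authentication works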

Figure 2. The OpenStack Horizon dashboard login screen

When you log in as the cloud administrator, you will notice a main navigation menu on the left side, page content determined by the current selection in that menu, and account credentials in the top-right corner. Within the left navigation menu, you can see that there are three main views: Project, Admin, and Identity.

The Admin view allows you to see an overall view of the Nova instances and Cinder volumes in use within the cloud. It also allows you to view and edit the Flavor definitions that define VM characteristics, such as the number of virtual CPUs, the amount of memory, and the disk space assigned to a VM. On Oracle Solaris, this is also where the brand of the underlying Oracle Solaris Zone is defined, such as solaris for non-global zones and solaris-kz for kernel zones. Finally, from a system provisioning perspective, this view also allows you to create virtual networks and routers for use by cloud users.

The other primary elements that the cloud administrator can view and edit concern projects (also known as tenants) and users. Projects provide a mechanism to group and isolate ownership of virtual computing resources and users, while users are the persons or services that use those resources in the cloud.

Figure 3. The OpenStack Horizon dashboard showing the administration panel

The Project view details the current project a user is working in. By default, the admin user is part of the demo project (as indicated in the account credentials in the top right of the screen). Clicking through this view reveals the set of operations a cloud user can perform within that project. When using the Oracle Solaris 11 OpenStack Unified Archive, the Images submenu of the Compute menu will reveal that the Glance service has been prepopulated with two images: one for instances based on non-global zones and the other for instances based on kernel zones.

Under the Access & Security submenu, users can upload their own personal SSH public key to the Nova service. This public key is automatically placed in the root user's authorized_keys file in the new instance, which allows a user to log in to the instance remotely.
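
The same key upload can be done from the command line with the Nova client; the key name and public-key path here are examples:

# nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey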

Figure 4. The Oracle Solaris 11.2 non-global zone available in Images & Snapshots through Glance

To create a new instance, a cloud user (the admin user or any tenant user) simply needs to click the Instances submenu. Clicking Launch Instance on the right side produces a dialog box where the cloud user can specify the type of image (by default, non-global zone or kernel zone are the choices), the name of the new instance, and, finally, the flavor of the instance. The flavor should match the zone brand of the chosen image, and its size should reflect the requirements of the intended workload.

Under the Access & Security tab in the dialog box, you can choose which uploaded SSH keypair to install in the new instance to be created; and under the Network tab, you can choose which network(s) the instance should be attached to. Finally, clicking Launch causes the instance to be created, installed, and then booted. The time required for a new instance to be made available depends on a number of factors including the size of the images, the resources provided in the flavor definition that is chosen, and where OpenStack has placed the root file system of the new instance.
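
The equivalent launch from the command line looks roughly like the following; the image name, flavor, and network UUID are placeholders you would take from nova image-list and neutron net-list:

# nova boot --image "Solaris Non-global Zone" --flavor 1 \
      --key-name mykey --nic net-id=<network-uuid> myinstance
# nova list        # shows the build status of the new instance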

In the Instances submenu screen, you can click the name of the instance to see general information as well as to view the instance's console log. By reloading this page, you can see updates as they take place.

Note that by clicking the Volumes submenu, you can see the Cinder volumes that have been created. Generally, each instance will have at least one volume assigned to it and displayed here. In a multinode configuration, this volume might be remote from the instance, accessed using a protocol such as iSCSI or Fibre Channel. Instances based on non-global zones have a volume assigned only if the volume resides on a different node in the cloud.
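
Volumes can also be created and attached from the command line. A sketch, with the name, size, and identifiers chosen for illustration:

# cinder create --display-name data-vol 10       # creates a 10 GB persistent volume
# nova volume-attach myinstance <volume-uuid>    # attaches it to a running instance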

By clicking the Network Topology submenu of the Network menu, you can see a visual representation of the cloud network including all subnet segments, virtual routers, and active instances.

Scaling Your OpenStack Environment

Now that you've installed OpenStack into a single-node environment, the common next step is to start creating a multinode environment. While single-node configurations are great for evaluating OpenStack, they are not suitable for a production environment where scalability and reliability are key goals.

The OpenStack services can be split across multiple physical nodes that define a logical grouping. Each cloud usually contains a single Horizon dashboard, a single Glance image repository, and a single Keystone identity service. A cloud can, however, have any number of compute and storage nodes, which is particularly useful when you want to take advantage of different hypervisor capabilities from different vendors or different back-end storage devices available in your data center.

A typical starting architecture would include a single controller node, a compute node, and a storage node. The controller node hosts most of the shared OpenStack services that supply the API endpoints of OpenStack, scheduling services, and other shared functionality. The compute node hosts the VM instances that are created by the self-service users. The storage node hosts permanent and ephemeral storage for the cloud.
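
On Oracle Solaris, the OpenStack services are delivered as SMF services, so splitting roles across nodes largely comes down to enabling the right services on each node. A sketch for a compute node; the short service name below is an assumption, so list the exact FMRIs with svcs first:

# svcs -a | grep openstack       # lists the OpenStack SMF services on this node
# svcadm enable nova-compute     # enables the Nova compute service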

Figure 5. Typical starting OpenStack architecture across three physical nodes

Building a multinode environment requires a deeper understanding of OpenStack and of how the individual services work together. For more information on OpenStack, see "Installing and Configuring OpenStack in Oracle Solaris 11."

About the Authors

Glynn Foster is a principal product manager for Oracle Solaris. He is responsible for a number of technology areas including OpenStack, the Oracle Solaris Image Packaging System, installation, and configuration management.

David Comay is a senior principal software engineer who has been at Sun and Oracle since 1996 when he began working in the networking area specializing in routing protocols and IPv6. He was the OS/Networking technical lead for the first two Oracle Solaris 8 update releases as well as for Oracle Solaris 9. He subsequently moved into the resource management area where he was a member of the original Oracle Solaris Zones project team. He led that team after its initial project integration through the end of Oracle Solaris 10 and for several of the subsequent Oracle Solaris 10 update releases. After driving the Oracle Solaris Modernization program and being the technical lead for the OpenSolaris binary releases as well as for Oracle Solaris 11, David is now the architect for the Oracle Solaris cloud strategy focusing initially on the integration of OpenStack with Oracle Solaris.

Revision 1.0, 04/28/2014
Revision 1.1, 06/20/2014
Revision 2.0, 07/30/2014
Revision 3.0, 06/30/2015
