How to Build OpenStack Block Storage on ZFS

by Cindy Swearingen

This article describes how to build efficient and reliable OpenStack block storage on ZFS in Oracle Solaris 11.2.


Published July 2014


About OpenStack in Oracle Solaris 11


OpenStack, a popular open source project that provides cloud management infrastructure, is integrated into Oracle Solaris 11.2. OpenStack storage features include Cinder for block storage access (see Figure 1) and Swift for object storage that also provides redundancy and replication.

ZFS, a file system that integrates volume management features, provides a simple interface for managing large amounts of data. It has a robust set of data services and supports a variety of storage protocols.

Cinder provisions a ZFS block device or a volume for your project (or tenant) instances. An Oracle Solaris Kernel Zone or a non-global zone is created and deployed for each project instance. After you create a deployable image of the zone and launch an instance of the zone image, Cinder allocates a ZFS volume to contain the instance's image as the guest's root device.

Oracle Solaris 11.2 provides additional Cinder drivers to provision the following devices:

  • iSCSI targets from a pool on a Cinder volume node
  • FC LUNs as block devices
  • iSCSI targets from an Oracle ZFS Storage Appliance

However, using these features is beyond the scope of this article.

A good way to get started with OpenStack is to run a small, all-in-one configuration: enable all OpenStack services, including the Cinder volume service, on the same system node, and use ZFS as the back-end storage.

ZFS provides robust redundancy and doesn't need any special software or hardware arrays to provide data redundancy. ZFS is simple to configure and manage.

This article describes cloud storage practices for deploying a cloud infrastructure environment on Oracle Solaris and using Cinder to provide block storage devices as ZFS volumes on the same system.

This article does not describe how to set up OpenStack. For information on setting up OpenStack, see "Getting Started with OpenStack on Oracle Solaris 11.2."

Figure 1. Diagram of OpenStack Cinder Block Storage Service


OpenStack Block Storage Prerequisites and Deployment Process

Before you begin, Oracle Solaris 11.2 OpenStack must already be running on a single SPARC or x86 system. That system acts as the compute node, runs the primary OpenStack services, and has multiple local or SAN-based devices available.

The components of the configuration include the following:

  • Compute node (Nova): The system node where zones for tenant or project instances are managed, but zones are not installed as part of the process described in this article. The Cinder volume service runs on this node as well.
  • Volume service (Cinder): The location where the Cinder volume service allocates ZFS volumes for tenant or project instances, which is customized in this article.
  • User authorization (Keystone): Both admin and tenant user names and passwords must already have been created and can be provided to Keystone, the authentication service.

The following general steps describe how to customize a single system that runs the OpenStack services, including the Cinder volume service, to deploy ZFS volumes. Data redundancy is configured as part of this process, and ZFS compression and encryption can also be added.

The remaining sections of this article describe these steps in detail.

Create the ZFS Components

Oracle Solaris runs on a ZFS storage pool that is typically called rpool. This pool is usually small and is not an ideal environment for hosting a cloud infrastructure, because it contains the Oracle Solaris components that run the system.

A general recommendation is to keep your root pool (rpool) small and host your application, user, and cloud data in a separate pool. Mirrored pool configurations perform best for most workloads.

The following steps describe how to configure the components shown in Figure 2: a mirrored ZFS storage pool (tank), the primary file system (cinder), and the ZFS volumes that will contain the tenant (or project) cloud instances.

Figure 2. A Mirrored ZFS Storage Pool with File System Components


  1. Create a separate, mirrored ZFS storage pool that provides data redundancy, and configure two spares.

    The following example creates a mirrored storage pool called tank that contains two pairs of mirrored disks and two spare disks:

    # zpool create tank mirror c0t5000C500335F4C7Fd0 \
    c0t5000C500335F7DABd0 mirror c0t5000C500335FC6F3d0 \
    c0t5000C500336084C3d0 spare c0t5000C500335E2F03d0 \
    c0t50015179594B6F52d0
    

    Size the mirrored storage pool according to your estimated cloud data needs. You can always add another mirrored pair of devices to your mirrored ZFS storage pool if you need more space.

    For more information about ZFS administration syntax, see Managing ZFS File Systems in Oracle Solaris 11.2.

    Review your pool's current status:

    # zpool status tank
    

    Identify the pool's raw available space:

    # zpool list tank
    
  2. Create a ZFS file system:

    Note: If you want to use encryption to secure your cloud data, encryption must be enabled (as described below) when the ZFS file system is created. For more information about ZFS encryption and about key methods other than a passphrase prompt, see Managing ZFS File Systems in Oracle Solaris 11.2.

    # zfs create tank/cinder
    

    Review the actual space that is available to your file system:

    # zfs list -r tank/cinder
    

    Frequently review the space available for your ZFS volumes by monitoring the USED space and the AVAIL space.

    If you want to conserve disk space, enable compression on the tank/cinder file system. ZFS volumes that are allocated for project instances inherit this property and are compressed automatically.

    # zfs set compression=on tank/cinder
    

    If you want to secure your cloud data, create the file system with encryption enabled instead of using the plain zfs create command shown above; encryption cannot be enabled on an existing file system.

    # zfs create -o encryption=on tank/cinder
    Enter passphrase for 'tank/cinder': xxxxxxxx
    Enter again: xxxxxxxx
    
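The sizing and monitoring guidance above can be sketched in code. The following is a minimal illustration; the disk sizes (in GB) and the 80% threshold are hypothetical assumptions, not values from the example pool. Each mirrored pair contributes only the capacity of its smaller disk, and spares add nothing until they take over for a failed disk.

```python
# Sketch: estimate usable capacity of a mirrored pool layout and flag
# low free space. Sizes and threshold are illustrative assumptions.

def mirrored_usable_gb(mirror_pairs):
    """Each mirrored pair contributes the capacity of its smaller disk.
    Spares contribute nothing until they replace a failed disk."""
    return sum(min(a, b) for a, b in mirror_pairs)

def space_warning(used_gb, avail_gb, threshold=0.80):
    """Return True when the pool is more than `threshold` full."""
    return used_gb / (used_gb + avail_gb) > threshold

# Two mirrored pairs, matching the layout of the example pool `tank`
# (two data pairs plus two spares); disk sizes are hypothetical.
pairs = [(300, 300), (300, 300)]
print(mirrored_usable_gb(pairs))                  # 600
print(space_warning(used_gb=1.03, avail_gb=547))  # False
```

Adding another mirrored pair to the pool simply adds another tuple to the list, which is why growing a mirrored pool is the usual answer to running low on space.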

Customize the Cinder Storage Location

  1. Modify the zfs_volume_base parameter in /etc/cinder/cinder.conf to point to an alternate pool and file system.

    For example, change this line:

    # zfs_volume_base = rpool/cinder 
    

    To this:

    # zfs_volume_base = tank/cinder
    
  2. Refresh the Cinder volume services:

    # svcadm restart svc:/application/openstack/cinder/cinder-volume:setup
    # svcadm restart svc:/application/openstack/cinder/cinder-volume:default
    
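The edit in step 1 can also be scripted. Below is a minimal sketch using Python's configparser, assuming zfs_volume_base lives in the [DEFAULT] section of cinder.conf; the inline string stands in for the real /etc/cinder/cinder.conf.

```python
# Sketch: set zfs_volume_base programmatically. The inline config text
# is a stand-in for /etc/cinder/cinder.conf (assumed INI format with a
# [DEFAULT] section).
import configparser
import io

conf_text = """\
[DEFAULT]
zfs_volume_base = rpool/cinder
"""

config = configparser.ConfigParser()
config.read_string(conf_text)

# Point Cinder at the new pool/file system created earlier.
config['DEFAULT']['zfs_volume_base'] = 'tank/cinder'

out = io.StringIO()
config.write(out)
print(out.getvalue())
```

After rewriting the file, the Cinder volume services still need to be restarted with svcadm, as shown in step 2, for the change to take effect.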

Test Your Cinder Configuration

  1. Set the authentication environment variables:

    Cinder expects the following Keystone authorization parameters to be passed as command-line options or set as environment variables.

    # export OS_AUTH_URL=http://localhost:5000/v2.0 
    # export OS_USERNAME=admin-user-name 
    # export OS_PASSWORD=password 
    # export OS_TENANT_NAME=tenant-user-name 
    
  2. Create a 1-GB test volume:

    # cinder create --display_name test 1
    +---------------------+--------------------------------------+
    |       Property      |                Value                 |
    +---------------------+--------------------------------------+
    |     attachments     |                  []                  |
    |  availability_zone  |                 nova                 |
    |       bootable      |                false                 |
    |      created_at     |      2014-07-17T20:19:33.423744      |
    | display_description |                 None                 |
    |     display_name    |                 test                 |
    |          id         | 258d80e9-2ef3-eab8-fbea-96a4d176360d |
    |       metadata      |                  {}                  |
    |         size        |                  1                   |
    |     snapshot_id     |                 None                 |
    |     source_volid    |                 None                 |
    |        status       |               creating               |
    |     volume_type     |                 None                 |
    +---------------------+--------------------------------------+
    
  3. After you create a Cinder volume, confirm that it is created and the space is consumed:

    # zfs list -r tank/cinder
    NAME                             USED   AVAIL  REFER  MOUNTPOINT
    tank/cinder                      1.03G   547G    31K  /tank/cinder
    tank/cinder/volume-258d80e9-...  1.03G   546G    16K  -
    

    You can also confirm that the volume is visible from OpenStack's Horizon interface. When you launch a project instance through Horizon, a new Cinder volume is created automatically, so you can remove the test volume from Horizon's Volumes menu by using the Delete Volume feature.
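The same test can be driven from a script instead of the cinder CLI. The sketch below reads the OS_* variables set in step 1 and assembles the arguments a client library such as python-cinderclient (if installed) would take; the actual client calls are commented out because they need a running Keystone and Cinder service.

```python
# Sketch: build Keystone credentials from the same OS_* environment
# variables the cinder CLI uses. The sample values below are
# placeholders, not real credentials.
import os

def keystone_credentials(env=os.environ):
    """Map the OS_* variables onto client constructor arguments."""
    return {
        'username': env.get('OS_USERNAME'),
        'api_key': env.get('OS_PASSWORD'),
        'project_id': env.get('OS_TENANT_NAME'),
        'auth_url': env.get('OS_AUTH_URL', 'http://localhost:5000/v2.0'),
    }

creds = keystone_credentials({'OS_USERNAME': 'admin',
                              'OS_PASSWORD': 'secret',
                              'OS_TENANT_NAME': 'demo'})
print(creds['auth_url'])   # http://localhost:5000/v2.0

# With python-cinderclient installed and the services running,
# the equivalent of `cinder create --display_name test 1` would be:
# from cinderclient import client
# cinder = client.Client('1', **creds)
# cinder.volumes.create(1, display_name='test')
```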

(Optional) Perform Additional Cinder Configuration Customization

The following are additional customizations you can do:

  • Monitor your ZFS pool for disk failures by setting up smtp-notify alert notifications.
  • Use ZFS snapshots to replicate ZFS volumes.
  • Create a separate archive pool with ZFS compression enabled to reduce the storage footprint when archiving tenant data.
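The snapshot-based replication mentioned above can be sketched as a small command builder. The volume, snapshot, and destination pool names below are hypothetical, and the resulting pipeline is only printed, not executed, since zfs send and zfs receive require live pools.

```python
# Sketch: assemble the zfs snapshot/send/receive pipeline used to
# replicate a Cinder-backed ZFS volume. Names are illustrative only.

def replicate_volume_cmd(volume, snapname, dest_pool):
    """Build a shell pipeline that snapshots `volume` and replicates
    the snapshot into `dest_pool` under the volume's own name."""
    snapshot = f"{volume}@{snapname}"
    target = f"{dest_pool}/{volume.split('/')[-1]}"
    return (f"zfs snapshot {snapshot} && "
            f"zfs send {snapshot} | zfs receive {target}")

cmd = replicate_volume_cmd('tank/cinder/volume-258d80e9',
                           'backup1', 'archive')
print(cmd)
```

If the archive pool has compression enabled, the received volume data is compressed on write, which is the storage-footprint benefit the last bullet describes.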

For more information, see Managing ZFS File Systems in Oracle Solaris 11.2.

Summary

A redundant ZFS storage pool that is serving Cinder volumes for OpenStack can be hosted on any local or SAN storage to provide cloud data protection. You can also apply robust data services, such as encryption for data security or compression for data reduction, to your cloud storage.

See Also

The ZFS blog.


About the Author

Cindy Swearingen is an Oracle Solaris Product Manager who specializes in ZFS and storage features.

Revision 1.0, 07/23/2014
