How to Use an Existing Oracle Solaris 10 JumpStart Server to Provision Oracle Solaris 11 11/11

by Kristina Tripp and Isaac Rozenfeld

This article illustrates how to take an existing JumpStart server, install the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 on it, create and configure an installation service, and then provision a client system using that installation service.


Published March 2013

Overview of the AI Installation Process
Creating an AI Server Using an Existing JumpStart Server
Creating a Local IPS Repository
Using Automated Installer to Create an Install Service
Customizing the Default AI Manifest
Booting the Client with the Customized Manifest
Augmenting the Installation with Configuration
Updating to Oracle Solaris 11.1
Conclusion
See Also
About the Authors

With the release of Oracle Solaris 10 1/13, a follow-on tool—the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10—has been released and is available on My Oracle Support for customers who have support contracts. This tool will help ease your transition to Oracle Solaris 11 by providing the capability to run the Oracle Solaris 11 Automated Installer (AI) on Oracle Solaris 10. This makes it possible to use an existing JumpStart server to begin your migration to Oracle Solaris 11.

Using the AI included in the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10, you can create and manage services to install the Oracle Solaris 11 11/11 operating system over the network in a hands-off manner. The Image Packaging System (IPS) component included in the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 can be used to host a local repository from which installations can occur.

The focus of this article is on using a system that is acting as an Oracle Solaris 10 JumpStart server; however, such a system is not a requirement. The tasks outlined in this article can be accomplished on a system that is not acting as an Oracle Solaris 10 JumpStart server.

Note: Once you have a system that is running Oracle Solaris 11, this system should subsequently be used as a server to provision future Oracle Solaris 11 updates, because there are no plans to extend the functionality of the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 to support provisioning Oracle Solaris 11.1.

Overview of the AI Installation Process

There are three significant steps involved in the AI installation process:

  1. Assignment of a network identity for the system being installed
  2. Contacting the AI service to download, over the network, a small boot image and a description of how to provision the system
  3. Provisioning the system, including software and system configuration, over the network

Each of the steps above can be accomplished by services that reside on the same physical or virtual system or by services that reside on separate systems. In this article, the following services will reside on the same system, as shown in Figure 1:

  • DHCP service
  • AI service
  • IPS repository service

Note: In environments where DHCP is not permitted, there are alternative methods for getting the network identity to the system manually, such as the use of network configuration arguments through the boot PROM on SPARC systems.
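
On SPARC, for example, the OpenBoot PROM can carry the network identity and the location of the wanboot CGI directly. The following is a sketch using this article's example addresses (192.0.2.2 for the client, 192.0.2.1 for the AI server); consult your platform documentation for the exact syntax supported by your firmware:

```
ok setenv network-boot-arguments host-ip=192.0.2.2,router-ip=192.0.2.1,subnet-mask=255.255.255.0,file=http://192.0.2.1:5555/cgi-bin/wanboot-cgi
ok boot net - install
```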

In this article, we will take an existing Oracle Solaris 10 system that is being used as a JumpStart server and use it to host an AI environment for the purpose of installing Oracle Solaris 11 on a remote install client.

Figure 1

Figure 1. Client system installed by the automated install server.

Our first system, depicted on the left in Figure 1, hosts the installation environment: the existing JumpStart server plus the AI, IPS, and DHCP services.

The second system, shown on the right in Figure 1, will be created and installed for the purpose of demonstrating automated installation. For our examples, the AI server will have an IP address of 192.0.2.1, and our install client will have an IP address of 192.0.2.2. The DHCP server resides on the AI server itself, so it shares the address 192.0.2.1.

Creating an AI Server Using an Existing JumpStart Server

We will begin by installing the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 software, which can be downloaded from My Oracle Support as a gzipped tarball that contains the necessary packages for installing the product.

The Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 can be installed on an Oracle Solaris 10 8/11 system or an Oracle Solaris 10 1/13 system.

Notes:

  • If you don't already have rsync installed as part of your corporate build image, you can obtain it from Oracle Solaris 10 1/13 media by installing the SUNWrsync and SUNWrsyncS packages.
  • For simplicity, this article will assume that the system has been updated to Oracle Solaris 10 1/13. For details about installing on an Oracle Solaris 10 8/11 system, see the installation guide included in the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10.

First, download the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 from My Oracle Support and place it in a temporary location where it can be extracted. Then extract the files.

root@aiserver:~# gunzip OPA-1.0.generic.tar.gz
root@aiserver:~# tar -xf OPA-1.0.generic.tar

Next, install the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 package on your system:

root@aiserver:~# cd OPA-1.0.generic
root@aiserver:~# export ARCH=`uname -p`
root@aiserver:~# pkgadd -d $ARCH/SUNWai-ips-dep

root@aiserver:~# pkgadd -d $ARCH/SUNWips

root@aiserver:~# pkgadd -d $ARCH/SUNWai

The AI component within the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 uses /etc/netboot instead of /tftpboot for the installation process. As part of the installation process, you need to transition the JumpStart server files that currently exist in /tftpboot to /etc/netboot. Therefore, use the following commands to create the new directory hierarchy and copy the files from /tftpboot to the new directory:

root@aiserver:~# mkdir -m 755 /etc/netboot
root@aiserver:~# cd /tftpboot
root@aiserver:~# tar cpf - . | (cd /etc/netboot && tar xvpf -)
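
The tar pipe above copies the whole tree while preserving permissions. The following is a small, self-contained demonstration of the same technique using hypothetical temporary directories, so it can be tried anywhere rather than only on the AI server:

```shell
# Demonstration of the tar-pipe copy technique, using hypothetical
# temporary directories (not the real /tftpboot or /etc/netboot).
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/boot"
echo "pxegrub" > "$src/boot/pxegrub"
# Copy the tree; the p flag preserves permissions on extraction.
( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )
ls "$dst/boot"
```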

Next, temporarily disable the tftp service, move the old tftpboot setup to /tftpboot.old, link /tftpboot to /etc/netboot, modify the tftp service to use /etc/netboot, and then restart the tftp service.

root@aiserver:~# svcadm disable svc:/network/tftp/udp6:default
root@aiserver:~# cd /
root@aiserver:~# mv /tftpboot /tftpboot.old
root@aiserver:~# ln -s /etc/netboot /tftpboot
root@aiserver:~# svccfg -s svc:/network/tftp/udp6 setprop \
inetd_start/exec = astring: '("/usr/sbin/in.tftpd\ -s\ /etc/netboot")'
root@aiserver:~# svcadm refresh svc:/network/tftp/udp6
root@aiserver:~# svcadm enable svc:/network/tftp/udp6:default

At this point, the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 is installed and is ready for use.

Creating a Local IPS Repository

By default, all Oracle Solaris 11 installations are configured to use a software package repository hosted at pkg.oracle.com. Administrators often choose to copy this repository locally to work around network restrictions in their data centers, achieve faster client installations, or gain more control over what software client systems can access. In this article, we will construct a local Oracle Solaris 11 11/11 IPS repository.

First, download the following patch sets from My Oracle Support:

  • 14669012 Oracle Solaris 11 11/11 REPO ISO IMAGE (SPARC/X86, 64-bit)
  • 15879286 Oracle Solaris 11 SRU 13.4 REPO ISO IMAGE (SPARC/X86, 64-bit)

Next, we'll take the IPS repository contents from the ISO files and place them in a location on the server's file system to set up an IPS repository service, which is handled by pkg.depotd(1M). To do this, we'll mount the ISO images and copy their contents to a ZFS file system named IPS, which we'll use to host the repository. The Oracle Solaris 11 11/11 IPS repository image is delivered as two smaller files, so we'll have to handle each one of them individually.

Note: The rsync commands we'll run will take some time, because there are over 6 GB of data that comprise the IPS repository image. Also, as noted earlier, an Oracle Solaris 10 8/11 system does not include rsync; you can obtain it from Oracle Solaris 10 1/13 media by installing the SUNWrsync and SUNWrsyncS packages.

To accomplish all this, first unzip the individual downloaded files. Then run the commands shown in Listing 1:

root@aiserver:~# zfs create rpool/IPS
root@aiserver:~# zfs set mountpoint=/IPS rpool/IPS
root@aiserver:~# mkdir /IPS/Solaris11
root@aiserver:~# lofiadm -a full_path_to/sol-11-1111-repo-p01.iso /dev/lofi/1
root@aiserver:~# mount -F hsfs -o ro /dev/lofi/1 /mnt
root@aiserver:~# rsync -a /mnt/repo /IPS/Solaris11/

root@aiserver:~# umount /mnt
root@aiserver:~# lofiadm -d /dev/lofi/1
root@aiserver:~# lofiadm -a full_path_to/sol-11-1111-repo-p02.iso /dev/lofi/1
root@aiserver:~# mount -F hsfs -o ro /dev/lofi/1 /mnt
root@aiserver:~# rsync -a /mnt/repo /IPS/Solaris11/
root@aiserver:~# umount /mnt
root@aiserver:~# lofiadm -d /dev/lofi/1

Listing 1

At this point, the base Oracle Solaris 11 11/11 IPS repository has been created. We now need to update it with the Support Repository Update (SRU) patches that were generated for the Oracle Solaris 11 11/11 release. To do this, apply patch set 15879286 to the repository, as follows:

root@aiserver:~# lofiadm -a full_path_to/sol-11-1111-sru13-04-incr-repo.iso /dev/lofi/1
root@aiserver:~# mount -F hsfs -o ro /dev/lofi/1 /mnt
root@aiserver:~# rsync -a /mnt/repo /IPS/Solaris11/
root@aiserver:~# umount /mnt
root@aiserver:~# lofiadm -d /dev/lofi/1
root@aiserver:~# pkgrepo rebuild -s file:///IPS/Solaris11/repo

Now that the IPS repository has been updated, it can be served to clients using pkg.depotd(1M). Before doing this, though, configure the pkg/server SMF service so it points to the repository's location (/IPS/Solaris11/repo) and serves it read-only:

root@aiserver:~# svccfg -s application/pkg/server setprop \
pkg/inst_root=/IPS/Solaris11/repo
root@aiserver:~# svccfg -s application/pkg/server setprop pkg/readonly=true

By default, pkg.depotd(1M) listens for connections on port 80.
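
If port 80 is already in use on the JumpStart server (by an existing web server, for instance), the depot's listening port can be changed through the same SMF service. The following is a sketch using pkg/port, the standard pkg.depotd(1M) property; the port number 8080 is just an example:

```
root@aiserver:~# svccfg -s application/pkg/server setprop pkg/port=8080
```

If you change the port, remember that every publisher URL referencing this repository must then include it (for example, http://192.0.2.1:8080).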

Next, start pkg.depotd(1M) and use it for serving packages to our install client by refreshing the corresponding Service Management Facility service configuration and enabling the service:

root@aiserver:~# svcadm refresh application/pkg/server
root@aiserver:~# svcadm enable application/pkg/server

The next step is to configure the system to use the locally configured IPS repository. Do this by pointing the pkg IPS client's solaris publisher to the same host (that is, itself):

root@aiserver:~# pkg set-publisher -G '*' -M '*' -g http://192.0.2.1 solaris
root@aiserver:~# pkg publisher
PUBLISHER                             TYPE     STATUS   URI
solaris                               origin   online   http://192.0.2.1/
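
Before moving on, it can be worth confirming that the depot is answering over HTTP. The pkgrepo command can query a remote repository; the exact output will vary, so it is omitted here:

```
root@aiserver:~# pkgrepo info -s http://192.0.2.1/
```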

Using Automated Installer to Create an Install Service

Now that you have a system that has the AI and IPS set up, let's create an instance of the automated installation service.

Since we already created the Oracle Solaris 11 repository image locally and pointed the AI server's solaris publisher at itself, we can create this service from the pkg:/install-image/solaris-auto-install package, instead of having to download the .iso file separately.

To create a service for x86 hosts, simply run the command shown in Listing 2, which will create a service named s11i386:

root@aiserver:~# installadm create-service -a i386 -n s11i386

Creating service from: pkg:/install-image/solaris-auto-install
OK to use default image path: /export/auto_install/s11i386? [y/N]: y
Download: install-image/solaris-auto-install ...  Done
Install Phase ...  Done
Package State Update Phase ...  Done
Image State Update Phase ...  Done
Reading Existing Index ...  Done
Indexing Packages ...  Done

Creating service: s11i386

Image path: /export/auto_install/s11i386

Refreshing install services

Creating default-i386 alias.

This service is the default alias for all PXE clients. If not already in
place, the following should be added to the DHCP configuration:
Boot server IP: 192.0.2.1
Boot file(s):
default-i386/boot/grub/pxegrub

Refreshing install services
root@aiserver:~#

Listing 2

Please note that because we created the very first instance of a service, an architecture-specific default alias was also created. Keep this in mind, because there will always be an architecture-specific default installation service, and when you execute commands on that first-created service, the commands have to reference the name default-i386 instead of s11i386.

If you wanted to create a SPARC service, you would use -a sparc instead of -a i386 in the command shown in Listing 2. When dealing directly with ISO images, the use of -a is not required, because the AI can automatically determine the architecture of the service being created.

You can see the install service's status by running the following command:

root@aiserver:~# installadm list

Service Name  Alias Of  Status  Arch  Image Path
------------  --------  ------  ----  ----------
default-i386  s11i386   on      x86   /export/auto_install/s11i386
s11i386       -         on      x86   /export/auto_install/s11i386

Although the service is up and running, there's still a bit more work to do before it can be used by an install client. Unlike the Oracle Solaris 11 version of the AI, the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 version will not configure a DHCP server, so you must configure one manually to add the necessary entries for clients. The output of the installadm create-service command we ran in Listing 2 showed the boot server's IP address and boot file information for the service that was created:

This service is the default alias for all PXE clients. If not already in
place, the following should be added to the DHCP configuration:
Boot server IP: 192.0.2.1
Boot file(s):
default-i386/boot/grub/pxegrub
...
root@aiserver:~#

You'll need to configure your DHCP server so hosts that will be installed using this service will be given the proper boot server IP address and boot server file. Typically, you'll also want to provide clients with the DNS server and DNS domain.

If you are running the Oracle Solaris 10 DHCP server, the command sequence would look something like the following:

root@aiserver:~# /usr/sbin/dhtadm -g -A -m default-i386 \
-d :BootSrvA=192.0.2.1:BootFile="default-i386/boot/grub/pxegrub":\
DNSserv=192.0.2.1:DNSdmain=foobar.example.com:

Note: If you're setting up a SPARC client and you're using an Oracle Solaris 10 system with the built-in DHCP server, the command would look like this instead:

root@aiserver:~# /usr/sbin/dhtadm -g -A -m default-sparc \ 
-d :BootFile=\"http://192.0.2.1:5555/cgi-bin/wanboot-cgi\":


We also recommend setting BootSrvA and the DNS settings, if they are known. That renders the following modification of the dhtadm command:

root@aiserver:~# /usr/sbin/dhtadm -g -A -m default-sparc \
-d :BootSrvA=192.0.2.1:BootFile="http://192.0.2.1:5555/cgi-bin/wanboot-cgi":\
DNSserv=192.0.2.1:DNSdmain=foobar.example.com:

Next, you need to modify your DHCP configuration to apply it to all i386 hosts that you want to use this AI configuration. For example, if you're using the Oracle Solaris 10 DHCP server, an example command for a client MAC address of 08:00:27:e2:03:06 with a desired permanent IP reservation of 192.0.2.243 would be as follows:

root@aiserver:~# /usr/sbin/pntadm -A 192.0.2.243 -i 01080027E20306 -m default-i386 \
-f "PERMANENT+MANUAL" 192.0.2.0
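
The client ID passed to -i above is derived from the client's MAC address: it is "01" (the Ethernet hardware type) followed by the MAC address with the colons removed and the hex digits uppercased. A quick illustration of the mapping:

```shell
# Derive the DHCP client ID used with pntadm -i from a MAC address:
# "01" (Ethernet hardware type) + MAC with colons stripped, uppercased.
mac="08:00:27:e2:03:06"
cid="01$(echo "$mac" | tr -d ':' | tr '[:lower:]' '[:upper:]')"
echo "$cid"    # -> 01080027E20306
```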

The installadm create-client subcommand can be used to apply a specific configuration to a specific host, but this is left as an exercise for the reader to explore.

At this point, our local IPS repository is set up, the default install service (default-i386) has been created, and you've manually configured your DHCP server for the target client. If we attempted to install a client using the default-i386 service, it would not use our local IPS repository but would instead still use the default IPS repository of pkg.oracle.com. This is because the default manifest within the pkg://install-image/solaris-auto-install package that was used in the creation of our service specifies that clients use the IPS repository hosted at pkg.oracle.com. Since we want our installs to use our local repository, we'll change this in the next section by modifying the default manifest associated with default-i386 to tell it to use our local IPS repository instead.

Customizing the Default AI Manifest

The default AI manifest is an XML file that is used during the client installation process to tell the client installer what file systems to create and which locales and packages to install. When you create a new install service, install-service-image-path/auto_install/manifest/default.xml is the initial default AI manifest for that install service.

In order to modify the default manifest used by our install service, we will use the installadm command to modify our configuration. The required tasks are reflected in the following three steps:

  1. Examine the manifest that the install service uses by exporting it.
  2. Modify the manifest.
  3. Re-import the newly modified manifest to the install service.

This simple set of steps can be accomplished using the following commands.

First, get the listing of our installation services and the manifests associated with them:

root@aiserver:~# installadm list -m

Service Name  Manifest      Status
------------  --------      ------
default-i386  orig_default  Default
s11i386       orig_default  Default

Generally, when you want to change a service, you'd specify the service you want to change directly. However, because our example works with the very first instance of a service, we use the default service name of default-i386 when referencing modifications to the service. The -n switch specifies the name of the service, and the -m switch specifies the name of the manifest. Since we want to capture that output to a file, redirect the output of the command like this:

root@aiserver:~# installadm export -n default-i386 \
-m orig_default > /var/tmp/orig_default.xml

Examining the orig_default.xml file that was just created, you'll find an XML entry similar to the following, which specifies the publisher to use for installs.

      <source>
        <publisher name="solaris">
          <origin name="http://pkg.oracle.com/solaris/release"/>
        </publisher>
      </source>

We want to replace the http://pkg.oracle.com/solaris/release entry with an entry that points to our local repository. For our example, this can be done by replacing the following line:
<origin name="http://pkg.oracle.com/solaris/release"/>

with this line:

<origin name="http://192.0.2.1/solaris"/>

To accomplish this, run the following sed command:

root@aiserver:~# sed \
's#http://pkg.oracle.com/solaris/release#http://192.0.2.1/solaris#' \
/var/tmp/orig_default.xml > /var/tmp/orig_default2.xml

One other important item to note here is that the manifest specifies what packages to install for Oracle Solaris 11 via the following XML entry:

      <software_data action="install">
        <name>pkg:/entire@latest</name>
        <name>pkg:/group/system/solaris-large-server</name>
      </software_data>

The entire@latest entry indicates that the latest available version of Oracle Solaris should be installed. Since the AI included with the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 can install only Oracle Solaris 11 11/11, we need to change this to the version string associated with that release: pkg:/entire@0.5.11-0.175.0.

To accomplish this, use sed again:

root@aiserver:~# sed \
's#entire@latest#entire@0.5.11-0.175.0#' \
/var/tmp/orig_default2.xml > /var/tmp/orig_default3.xml

Then update the manifest in our service to use this new manifest:

root@aiserver:~# installadm update-manifest -n default-i386 -m orig_default \
-f  /var/tmp/orig_default3.xml

It is important to note that changing the orig_default manifest in the default-i386 service does not change the orig_default manifest for the s11i386 service. Therefore, if we also want the s11i386 service to use the local repository, we can apply the updated manifest we created for the default-i386 service to our s11i386 service using the following:

root@aiserver:~# installadm update-manifest -n s11i386 -m orig_default \
-f  /var/tmp/orig_default3.xml
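
As a quick, optional sanity check, you can export the manifest again and confirm that the origin element now points at the local repository:

```
root@aiserver:~# installadm export -n s11i386 -m orig_default | grep origin
          <origin name="http://192.0.2.1/solaris"/>
```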

Booting the Client with the Customized Manifest

At this point, our local IPS repository is up and running, we've created a default service, and we've created a customized manifest for the service. If you haven't already configured your DHCP server, configure it now so that the client system will receive the necessary data.

If we now boot our x86 client, the following sequence will occur:

  1. The client boots and gets an IP address, and the boot file, pxegrub, is downloaded from the location provided by the DHCP server.
  2. The pxegrub boot file is loaded and reads a menu.lst file.
  3. The pxegrub boot file downloads the boot_archive file using TFTP, and Oracle Solaris is booted.
  4. The net image archives, solaris.zlib and solarismisc.zlib, are downloaded using HTTP, as provided by the GRUB menu.
  5. The AI manifest and system configuration profiles are downloaded from the AI install service.
  6. The AI install program is invoked by the AI manifest to perform the installation.

For more information on the boot service, see the "Installing Clients" section of the Oracle Solaris 11 11/11 documentation.

Boot the client now. Make sure you indicate that it should boot from the network during the boot process.

When the GRUB menu appears, two menu entries will be displayed:

Oracle Solaris 11 11/11 Text Installer and command line
Oracle Solaris 11 11/11 Automated Install

Select the Oracle Solaris 11 11/11 Automated Install entry.

During the install process, the console will show the transfer of the images from the boot server and the installation as it is performed on the client, as shown in Listing 3.

HTTP request sent, awaiting response... 200 OK
Length: 121075712 (115M) [text/plain]
Saving to: `/tmp/solaris.zlib'

100%[======================================>] 121,075,712 11.1M/s   in 14s

2012-11-06 21:08:20 (8.24 MB/s) - `/tmp/solaris.zlib' saved [121075712/121075712]

Downloading solarismisc.zlib
--2012-11-06 21:08:20--  http://192.0.2.1:555//export/auto_install/s11i386/solarismisc.zlib
Connecting to 192.0.2.1:555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 20610048 (20M) [text/plain]
Saving to: `/tmp/solarismisc.zlib'

100%[======================================>] 20,610,048  9.73M/s   in 2.0s

2012-11-06 21:08:22 (9.73 MB/s) - `/tmp/solarismisc.zlib' saved [20610048/20610048]

Downloading .image_info
--2012-11-06 21:08:22--  http://192.0.2.1:555//export/auto_install/s11i386/.image_info
Connecting to 192.0.2.1:555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 220 [text/plain]
Saving to: `/tmp/.image_info'

100%[======================================>] 220         --.-K/s   in 0s

2012-11-06 21:08:22 (13.4 MB/s) - `/tmp/.image_info' saved [220/220]

Done mounting image
Configuring devices.
Hostname: foobar
Service discovery phase initiated
Service name to look up: default-i386
Service discovery over multicast DNS failed
Service default-i386 located at 192.0.2.1:555 will be used
Service discovery finished successfully
Process of obtaining install manifest initiated
Using the install manifest obtained via service discovery

Automated Installation started
The progress of the Automated Installation will be output to the console
Detailed logging is in the logfile at /system/volatile/install_log
Press RETURN to get a login prompt at any time.


foobar console login: 21:08:49    Install Log: /system/volatile/install_log
21:08:49    Using XML Manifest: /system/volatile/ai.xml
21:08:49    Using profile specification: /system/volatile/profile
21:08:49    Using service list file: /var/run/service_list
21:08:49    Starting installation.
21:08:49    0% Preparing for Installation
21:08:49    100% manifest-parser completed.
21:08:50    0% Preparing for Installation
21:08:50    0% Preparing for Installation
21:08:50    0% Preparing for Installation
21:08:50    0% Preparing for Installation
21:08:50    1% Preparing for Installation
21:08:50    1% Preparing for Installation
21:08:50    1% Preparing for Installation
21:08:50    2% Preparing for Installation
21:08:50    2% Preparing for Installation
21:08:50    2% Preparing for Installation
21:08:50    3% Preparing for Installation
21:08:50    3% Preparing for Installation
21:08:50    3% Preparing for Installation
21:08:50    4% Preparing for Installation
21:08:50    4% Preparing for Installation
21:08:50    4% Preparing for Installation
21:09:01    7% target-discovery completed.
21:09:01    === Executing Target Selection Checkpoint ==
21:09:01    Selected Disk(s) : c4t0d0
21:09:02    13% target-selection completed.
21:09:02    17% ai-configuration completed.
21:09:02    19% var-shared-dataset completed.
21:09:16    21% target-instantiation completed.
21:09:16    21% Beginning IPS transfer
21:09:16    Creating IPS image
21:09:22    Installing packages from:
21:09:22        solaris
21:09:22            origin:  http://192.0.2.1/solaris/
21:27:05    Version mismatch:
21:27:05    Installer build version: pkg://solaris/entire@0.5.11,5.11-0.175.0.0.0.2.0:20111020T143822Z
21:27:05    Target build version: pkg://solaris/entire@0.5.11,5.11-0.175.0.11.0.13.0.4.0:20121106T194623Z
21:27:05    23% generated-transfer-1294-1 completed.
21:27:05    25% initialize-smf completed.
21:27:07    Setting console boot device property to ttya
21:27:07    Disabling boot loader graphical splash
21:27:07    Creating Legacy GRUB config directory:
        /rpool/boot/grub
21:27:07    Installing boot loader to devices: ['/dev/rdsk/c4t0d0s0']
21:27:08    35% boot-configuration completed.
21:27:08    37% update-dump-adm completed.
21:27:08    40% setup-swap completed.
21:27:09    42% device-config completed.
21:27:10    44% apply-sysconfig completed.
21:27:10    46% transfer-zpool-cache completed.
21:27:28    87% boot-archive completed.
21:27:28    89% transfer-ai-files completed.
21:27:28    99% create-snapshot completed.
21:27:28    Automated Installation succeeded.
21:27:28    You may wish to reboot the system at this time.
Automated Installation finished successfully
The system can be rebooted now
Please refer to the /system/volatile/install_log file for details
After reboot it will be located at /var/sadm/system/logs/install_log

Listing 3

In the output shown in Listing 3, five lines are of particular note:

  • Service name to look up: default-i386 is the output that we can use during the installation to confirm that our client installed with the service for which we configured it.
  • Service discovery over multicast DNS failed can be ignored, because mDNS is not available for Oracle Solaris 10. This is why the DHCP server should specify the DNS information that the client might need during the install process.
  • origin: http://192.0.2.1/solaris/ shows us the IPS repository that is being used to retrieve the necessary packages to complete the install.
  • Version mismatch is an error that can be ignored. It's simply stating that the AI server was built to install the initial release of Solaris 11 11/11 and that you are installing a later version. Because this is an update of the initial release, you can safely ignore this message.
  • Automated Installation finished successfully is the statement that tells us that the OS has been successfully installed.

At this point, Oracle Solaris 11 11/11 has been installed on the disk of the client, but the OS has not been configured. Since we have not configured the AI with a profile for the client system being installed, the AI will invoke an interactive system configuration tool when the system is rebooted so you can perform the remaining configuration of the system.

Prior to rebooting the system, you might want to log in and explore the system, for example, by looking at the installation log file (/system/volatile/install_log) or the AI manifest that was used to install the system (/system/volatile/ai.xml).

To log in to the console, use the default AI image username, which is root, and the password solaris.

Feel free to reboot the system and acquaint yourself with the interactive system configuration tool. This article, however, will not discuss its use.

Augmenting the Installation with Configuration

As mentioned in the last section, when the system is rebooted, it looks for a preprovisioned configuration file known as a system configuration profile for the client, and if it finds none, it invokes the system configuration tool.

In order to fully automate the installation process, we can create a system configuration profile that would be made available as part of the installation service. To do that, we'd first need to create a system configuration profile that contains all the minimum configuration data, and then we'd add this profile to the installation service that would be used by clients of that service.

If we were installing from an Oracle Solaris 11 system, we would use the sysconfig tool. However, this tool is not available on Oracle Solaris 10, so we have to take an alternate approach until we have at least one Oracle Solaris 11 system. There are two possible options that can be performed on Oracle Solaris 10:

  • Use the sample configuration file.
  • Use the js2ai(1M) tool to help create the configuration file.

We'll explore both options in the following subsections.

Using the System Configuration Profile

Three example system configuration profiles are installed on the system when you install the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10. You'll find them located under /usr/share/auto_install/sc_profiles.

  • enable_sci.xml shows how to cause the client system to always invoke the interactive system configuration tool.
  • sc_sample.xml shows how to configure the client to use automatic network configuration.
  • static_network.xml is a much more complex example that shows how to configure a host to a static IP address, configure DNS, and so on.

These profiles are discussed in more detail in the "Example System Configuration Profile" section of the Oracle Solaris 11 11/11 documentation.

To keep things simple, let's use the sc_sample.xml file.

root@aiserver:~# cp /usr/share/auto_install/sc_profiles/sc_sample.xml \
/var/tmp/auto_network.xml

The sc_sample.xml file that we just copied has the following configuration characteristics:

  • The user account name is jack, the password is jack, and it specifies GID 10, UID 101, and the bash shell.
  • It configures the root role with password solaris.
  • The keyboard mapping is set to US-English.
  • The time zone is set to UTC.
  • Network configuration is automated.
  • DNS name service client is enabled.

Edit the file if you want to change any of these default settings. It is advisable that at a minimum you modify the user account name and password and the root password.
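
For orientation, the user-account portion of such a system configuration profile is an SMF profile fragment along the lines of the following. This excerpt is paraphrased and illustrative only; property names and values may differ, so rely on the actual sc_sample.xml as your editing template:

```
<service version="1" type="service" name="system/config-user">
  <instance enabled="true" name="default">
    <property_group type="application" name="user_account">
      <propval type="astring" name="login" value="jack"/>
      <propval type="astring" name="shell" value="/usr/bin/bash"/>
      <propval type="count" name="uid" value="101"/>
      <propval type="count" name="gid" value="10"/>
    </property_group>
  </instance>
</service>
```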

Once you have made any desired changes, create a profile and assign it to our service by entering the following command:

root@aiserver:~# installadm create-profile -n default-i386 -p auto_network \
-f /var/tmp/auto_network.xml
Profile auto_network added to database.

Now verify that the install service contains a custom system configuration profile associated with it:

root@aiserver:~# installadm list -p

Service Name  Profile
------------  -------
default-i386  auto_network

When you reboot the client from the network, you can witness a complete hands-off process of installing and configuring the system. You can then log in using the credentials configured in the system configuration profile. If you didn't change the defaults, this would be username jack and password jack. After the system is installed and you log in, you can elevate privileges, as necessary, by assuming the root role with password solaris.

Using js2ai(1M) to Create the Configuration File

js2ai is a tool included in the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 release. It provides some basic capabilities for converting JumpStart profiles and sysidcfg files to versions that can be used by AI. For this exercise, we are going to take the JumpStart sysidcfg file shown in Listing 4, which describes a host, and use it to set up the configuration for our client.

# cat /tmp/sysidcfg
keyboard=US-English
system_locale=en_US.UTF-8
terminal=vt100
network_interface=e1000g0 { dhcp protocol_ipv6=no }
name_service=DNS {domain_name=foobar.example.com
name_server=192.0.2.254
search=foobar.example.com}
security_policy=none
timezone=US/Eastern
timeserver=localhost
nfs4_domain=dynamic
root_password=9Nd/cwBcNWFZg

Listing 4

Now, use js2ai to convert the JumpStart sysidcfg file:

# cd /tmp
# js2ai -s
                                Process  Unsupported  Conversion  Validation
Name                  Warnings  Errors   Items        Errors      Errors
-------------------   --------  -------  -----------  ----------  ----------
sysidcfg                     3        0            1           0           0

Conversion completed. One or more failures and/or warnings occurred.
For details see /tmp/js2ai.log

js2ai created the profile sc_profile.xml from the sysidcfg file. During the process, js2ai generated three warnings and one "unsupported item" error. If we examine the log, we see the text shown in Listing 5:

# cat /tmp/js2ai.log
sysidcfg:line 4:WARNING: In order to support the direct translation of the sysidcfg 
interface 'e1000g0', Oracle Solaris 11 neutral link name support will be disabled.  
If you wish to use neutral link names change the interface name specified in the 
sysidcfg file to a 'netx' style interface name or edit the resulting sc_profile.xml 
file.
sysidcfg:line 11:UNSUPPORTED: unsupported keyword: nfs4_domain
sysidcfg:line 13:WARNING: Oracle Solaris 11 uses roles instead of root user.  An 
admin user with root role privileges will need to be defined in order to access the 
system in multi-user mode.  The necessary xml structures have been added to 
sc_profile.xml as a comment.  Edit sc_profile.xml to perform the necessary 
modifications to define the admin user.
sysidcfg:line 13:WARNING: no hostname specified, setting hostname to 'solaris'.

Listing 5

The first message tells us that Oracle Solaris 11 handles network interface names differently by default than Oracle Solaris 10 does. In order to use the old style of naming, js2ai must disable neutral link naming. For more information on this, see the "Network Devices and Datalink Names" section of the Oracle Solaris documentation.

If you want to use the new Oracle Solaris 11 style of naming, you can edit your sysidcfg file and replace the network interface name in it with an Oracle Solaris 11 style name. The primary interface is typically net0. During the conversion, when js2ai sees an Oracle Solaris 11 net style interface name, it will not disable neutral link naming in the resulting sc_profile.xml file.
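If you go that route, the substitution is a one-liner. The following sketch works on a scratch copy so it can run standalone; on a real system you would run the sed command against your actual sysidcfg file.

```shell
# Rewrite the Oracle Solaris 10 driver-based interface name to the
# Oracle Solaris 11 'netx' style before re-running js2ai.
# The scratch file reproduces the relevant line from Listing 4.
printf 'network_interface=e1000g0 { dhcp protocol_ipv6=no }\n' > /tmp/sysidcfg.demo
sed 's/^network_interface=e1000g0/network_interface=net0/' \
    /tmp/sysidcfg.demo > /tmp/sysidcfg.net0
```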

The second message is an error telling us that conversion of the keyword nfs4_domain is not currently supported. If you don't want to see this message, you can simply remove the keyword from your sysidcfg file.
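Removing the keyword is equally simple; for instance (again shown against a scratch copy so the example stands alone):

```shell
# Drop the unsupported nfs4_domain line before re-running js2ai.
# The scratch file reproduces a few lines from Listing 4.
printf 'timezone=US/Eastern\nnfs4_domain=dynamic\nsecurity_policy=none\n' > /tmp/sysidcfg.demo2
grep -v '^nfs4_domain' /tmp/sysidcfg.demo2 > /tmp/sysidcfg.clean
```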

The third message is a warning about how Oracle Solaris 11 uses roles instead of a dedicated root user. For more information, see the "User Account Management and User Environment Changes" section in the documentation. In Oracle Solaris 11, the root user can log in to the machine only when it's in single-user mode. Since no user data other than root_password is defined in the profile, you won't be able to log in to the machine when it's booted in multi-user mode. To correct this, we'll want to add the missing details. During the conversion process, js2ai helps in this endeavor by adding the required user XML structure as a comment in the sc_profile.xml file. If you view the sc_profile.xml file, you'll see the user entry defined as shown in Listing 6:

  <service name="system/config-user" type="service" version="1">
    <instance enabled="true" name="default">
      <!--
Configures user account as follows:
 * User account name 'jack'
 * password 'jack'
 * GID 10
 * UID 101
 * root role
 * bash shell
-->
      <!--
<property_group name="user_account" type="application">
  <propval name="login" type="astring" value="jack"/>
  <propval name="password" type="astring" value="9Nd/cwBcNWFZg"/>
  <propval name="description" type="astring" value="default_user"/>
  <propval name="shell" type="astring" value="/usr/bin/bash"/>
  <propval name="gid" type="astring" value="10"/>
  <propval name="uid" type="astring" value="101"/>
  <propval name="type" type="astring" value="normal"/>
  <propval name="roles" type="astring" value="root"/>
  <propval name="profiles" type="astring" value="System Administrator"/>
</property_group>
-->
      <property_group name="root_account" type="application">
        <propval name="password" type="astring" value="9Nd/cwBcNWFZg"/>
        <propval name="type" type="astring" value="role"/>
      </property_group>
    </instance>
  </service>

Listing 6

In XML, a comment begins with <!-- and ends with -->. Remove the XML comment markers around the user_account structure. Then modify the structure as desired to create the user that will have root role privileges.
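If you prefer to script this step, the comment markers can be stripped with a short awk filter. The following is a sketch against a trimmed-down scratch file (the path and sample content are illustrative); on a real system you would run the filter against sc_profile.xml and then edit the login, password, and other values by hand.

```shell
# Scratch profile mimicking the commented user_account block in Listing 6.
cat > /tmp/sc_demo.xml <<'EOF'
<instance enabled="true" name="default">
  <!--
<property_group name="user_account" type="application">
  <propval name="login" type="astring" value="jack"/>
</property_group>
-->
  <property_group name="root_account" type="application">
    <propval name="type" type="astring" value="role"/>
  </property_group>
</instance>
EOF

# Drop the '<!--' line that immediately precedes the user_account
# property group and the matching '-->' line; leave any other
# comments in the file untouched.
awk '
  /^[[:space:]]*<!--[[:space:]]*$/ { opener = $0; held = 1; next }
  held && /<property_group name="user_account"/ { held = 0; inblk = 1; print; next }
  held { print opener; held = 0 }
  inblk && /^[[:space:]]*-->[[:space:]]*$/ { inblk = 0; next }
  { print }
' /tmp/sc_demo.xml > /tmp/sc_demo.uncommented.xml
```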

The last message we received during our sysidcfg conversion indicated that no host name was specified and, as a result, js2ai has automatically defined the host name as solaris and added the necessary XML structure (system/identity) to the sc_profile.xml file. If the system/identity structure is not present in a profile when it's used during an automated install, the AI will define the system as "unknown."
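For reference, the structure js2ai adds looks along these lines; the nodename value is what you would change to set a different host name, and you should verify the exact form against your generated sc_profile.xml:

```xml
<service name="system/identity" type="service" version="1">
  <instance enabled="true" name="node">
    <property_group name="config" type="application">
      <propval name="nodename" type="astring" value="solaris"/>
    </property_group>
  </instance>
</service>
```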

Once you complete any edits you want to make to sc_profile.xml, you can add it to the configuration. Currently, we have only one profile associated with our service.

# installadm list -n default-i386 -p    

Profile       Criteria
-------       --------
auto_network  None

When we added the auto_network profile to our configuration, we added it without specifying any criteria. Specifying criteria provides a way to apply a profile only to clients that match a particular characteristic, such as their network or MAC address. If we add two profiles to our service without specifying criteria, the service won't know which profile it should use when configuring the remote client.

Criteria are specified by one or more criteria keywords. For more details on the available options, see "Selection Criteria" in the documentation.

In our current configuration, our DHCP server is set up to give our client an IP address of 192.0.2.2, and we are running on a /24 network, so our IPv4 network is 192.0.2.0. By using the network criteria keyword, we can indicate to the default-i386 service that we want to apply the profile we are about to create only to clients on this network. We use the -c argument to specify criteria:

# installadm create-profile -n default-i386 -p jump_auto -f /tmp/sc_profile.xml \
-c network="192.0.2.0-192.0.2.255"
Profile jump_auto added to database.

Now, when we look at the profiles via installadm, we'll see that the profile has criteria associated with it.

# installadm list -n default-i386 -p

Profile       Criteria
-------       --------
auto_network  None
jump_auto     network = 192000002000- 192000002255

If we perform an install again, the install client will be configured via the jump_auto profile we just created instead of the auto_network profile that was used previously.

Updating to Oracle Solaris 11.1

Although the AI in the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 is not capable of installing Oracle Solaris 11.1, you can update your clients to Oracle Solaris 11.1 once you have installed Oracle Solaris 11 11/11 on them. For the complete procedure for updating your clients, see Upgrading to Oracle Solaris 11.1.

Conclusion

The Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 provides the initial stepping stones to help you migrate to Oracle Solaris 11.

In this article, you learned how to add the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 to an Oracle Solaris 10 JumpStart server and use the Automated Installer to install Oracle Solaris 11 on a remote client. You learned how automated install manifests and profiles are specified and configured. You also learned how to create a local IPS repository and host it for all internally built systems in your environment, without requiring externally routable connectivity to systems that need to be protected.

As you get comfortable installing Oracle Solaris 11 on a number of systems, keep in mind that the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10 is only a temporary solution to help you deploy Oracle Solaris 11 broadly. Once you have some systems installed, it is recommended that you migrate the AI server onto a dedicated Oracle Solaris 11 system. The benefit in doing so is that you would no longer be limited to running a deployment server that is based on the Oracle Solaris 11 Provisioning Assistant for Oracle Solaris 10; such a server is functional, but it is not designed to provide the full range of Oracle Solaris 11 features on an ongoing basis. Additionally, as you look to retire Oracle Solaris 10 systems, you might want to take advantage of the fact that Oracle Solaris 11 systems can be configured to provide JumpStart services as well.

See Also

Here are some additional resources:

About the Authors

Kristina Tripp is a Senior Software Engineer working in Oracle Revenue Product Engineering where she focuses on Oracle Solaris 11 install technologies. Kristina joined Oracle in 2010 as part of the Sun Microsystems acquisition.

Isaac Rozenfeld is a Principal Product Manager for Oracle Solaris and focuses on adoption, installation, and lifecycle management technologies. Isaac joined Oracle in 2010 as part of the Sun Microsystems acquisition.

Revision 1.1, 04/01/2013
