Guide to Advanced Linux Command Mastery, Part 5: Managing the Linux Environment, Continued
by Arup Nanda
In this fifth and final installment of the series, we will focus on more commands and techniques for managing a Linux environment – including a virtualized one.
Published July 2009
Shell Keyword Variables
When in the command line, you are using a ''shell'' – most likely the bash shell. In a shell you can define a variable and set a value to it to be retrieved later. Here is an example of a variable named ORACLE_HOME:
# export ORACLE_HOME=/opt/oracle/product/11gR2/db1
Later, you can refer to the variable by prefixing a ''$'' sign to the variable name, e.g.:
# cd $ORACLE_HOME
This is called a user defined variable. Likewise, there are several variables defined in the shell itself. These variables -- whose names have been pre-defined in the shell -- control how you interact with the shell. You should learn about these variables (at least a handful of important ones) to improve the quality and efficiency of your work.
PS1
This variable sets the Linux command prompt. Here is an example where we change the prompt from the default ''# '' to ''$ '':
# export PS1="$ "
$
Note how the prompt changed to ''$ ''. You can place any character here to change the default prompt. The double quotes are not strictly necessary, but since we want a space after the ''$'' sign, we place quotes around the value.
Is that it – to show the prompt in a fancy predefined character or character strings? Not at all. You can also place special symbols in the variable to show special values. For instance the symbol \u shows the username who logged in and \h shows the hostname. If we use these two symbols, the prompt can be customized to show who logged in and where:
$ export PS1="\u@\h# "
oracle@oradba1#
This shows the prompt as oracle logged in on the server called oradba1 – enough to remind yourself who and where you are. You can further customize the prompt using another symbol, \W, which shows the basename of the current directory. Here is how the prompt looks now:
# export PS1="\u@\h \W# "
oracle@oradba1 ~#
The current directory is HOME; so it shows ''~''. As you change to a different directory it changes.
Adding the current directory is a great way to remind yourself where you are and the implications of your actions. Executing rm * has a different impact in /tmp than in /home/oracle, doesn't it?

There is another symbol – \w. There is a very important difference between \w and \W: the latter produces the basename of the current directory, while the former shows the full directory path:
oracle@oradba1 11:59 AM db1# export PS1="\u@\h \@ \w# "
oracle@oradba1 12:01 PM /opt/oracle/product/11gR2/db1#
Note the difference? In the previous prompt, where \W was used, it showed only the directory db1, which is the basename. In the next prompt where \w was used, the full directory /opt/oracle/product/11gR2/db1 was displayed.
In many cases a full directory name in the prompt may be immensely helpful. Suppose you have three Oracle Homes. Each one will have a subdirectory called db1. How will you know where exactly you are if only ''db1'' is displayed? A full directory will leave no doubts. However, a full directory also makes the prompt very long, making it a little inconvenient.
The symbol ''\@'' shows the current time in hour:minute AM/PM format:
# export PS1="\u@\h \@ \W# "
oracle@oradba1 11:59 AM db1#
Here are some other symbols you can use in the PS1 shell variable (a partial list of bash's prompt escapes):

\d – the date in ''Weekday Month Date'' format
\t – the current time in 24-hour HH:MM:SS format
\T – the current time in 12-hour HH:MM:SS format
\! – the history number of the command
\$ – ''#'' if the effective user is root, ''$'' otherwise
\\ – a literal backslash
IFS
This variable tells the shell whether to treat a string as a whole or to split it into fields. When splitting, the value of the IFS variable is used as the separator; hence the name Input Field Separator (IFS). To demonstrate, let's define a variable holding two file names separated by a colon:

# export pfiles=initODBA112.ora:init.ora

These are actually two files: initODBA112.ora and init.ora. Now, to display the first line of each of these files, you would use the head -1 command.
# head -1 $pfiles
head: cannot open `initODBA112.ora:init.ora' for reading: No such file or directory
The output says it all: the shell interpreted the variable as a whole – `initODBA112.ora:init.ora' – which is not the name of any file. That's why the head command fails. If the shell treated '':'' as a separator, it would have done the job properly. That's what we can achieve by setting the IFS variable:
# export IFS=":"
# head -1 $pfiles
==> initODBA112.ora <==
... first line of file initODBA112.ora ...
==> init.ora <==
... first line of file init.ora ...
There you go – the shell expanded the command head -1 $pfiles to head -1 initODBA112.ora and head -1 init.ora and therefore the command executed properly.
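The splitting behavior can be sketched in a few lines; nothing here is specific to the example above except the variable name pfiles, reused for continuity:

```shell
# Demonstrate IFS-driven word splitting on a colon-separated list.
pfiles="initODBA112.ora:init.ora"

old_ifs=$IFS    # save the default separator (space, tab, newline)
IFS=":"         # split on colons from now on

for f in $pfiles; do   # $pfiles is unquoted, so the shell splits it on IFS
  echo "$f"
done

IFS=$old_ifs    # restore the default so later commands behave normally
```

Restoring IFS afterward matters: a changed IFS affects every subsequent unquoted expansion in the shell session.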
PATH
When you use a command in Linux, it's either a shell built-in, as you saw with the kill command in Part 4, or an executable file. If it's an executable, how do you know where it is located?

Take for instance the rm command, which removes files. The command can be given from any directory. Of course, the executable file rm does not exist in every directory, so how does Linux know where to look?
The variable PATH holds the locations where the shell must look for that executable. Here is an example of a PATH setting:
# echo $PATH
/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:.
When you issue a command such as rm, the shell looks for a file rm in these locations in this order:
/usr/kerberos/bin
/usr/local/bin
/bin
/usr/bin
/usr/X11R6/bin
. (the current directory)
If the file is not found in any of these locations, the shell returns an error message -bash: rm: command not found. If you want to add more locations to the PATH variable, do so with '':'' as a separator.
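Appending a location reuses the existing value of the variable; the Oracle bin directory below is only an illustrative path:

```shell
# Append a directory to the search path. The directory shown is a
# hypothetical Oracle home bin; substitute your own location.
export PATH=$PATH:/opt/oracle/product/11gR2/db1/bin

# The new directory is now searched last:
echo $PATH
```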
Did you note a very interesting fact above? The ''current directory'' location is set at the very end, not at the beginning. Common sense may suggest putting it first, so that the shell looks for an executable in the current directory before looking elsewhere. Putting it at the end instead makes the shell search the current directory last. Why would you do that?
Experts recommend that you put the current directory (.) at the end of the PATH variable, not at the beginning. Why? This practice is for safety. Suppose you are experimenting with ideas to enhance common shell commands and inadvertently leave such a file in your home directory. When you log in, you are in the home directory, and when you execute that command, you are not executing the common command but rather the executable file in your home directory.
This could be disastrous in some cases. Suppose you are toying with a new version of the ''cp'' command and there is a file called cp in your home directory. This file may potentially do some damage. If you type ''cp somefile anotherfile'', your version of cp will be executed, causing damage. Putting the current directory at the end ensures the normal ''cp'' command is found first and avoids such a risk.
It also prevents the risk of a hacker planting a malicious command file named after a common command. Some experts even suggest removing the ''.'' from the PATH altogether, to prevent any inadvertent execution. If you have to execute something in the current directory, just use the ./ notation, as in:

# ./mycommand

This executes a file called mycommand in the present directory.
CDPATH
Very similar to PATH, this variable expands the scope of the cd command beyond the present directory. For instance, when you type the cd command as shown below:
# cd dbs
-bash: cd: dbs: No such file or directory

It makes sense, since the dbs directory does not exist in the present directory; it's under /opt/oracle/product/11gR2/db1. That's why the cd command fails. You can of course go to /opt/oracle/product/11gR2/db1 and then execute the cd command, which will succeed. If you want to expand the scope to include /opt/oracle/product/11gR2/db1, you can issue:
# export CDPATH=/opt/oracle/product/11gR2/db1
Now if you issue the cd command from any directory:
# cd dbs
/opt/oracle/product/11gR2/db1/dbs
# pwd
/opt/oracle/product/11gR2/db1/dbs
The cd command now searches the locations listed in CDPATH for that subdirectory.
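Like PATH, CDPATH accepts a colon-separated list of directories, searched in order. A sketch (the Oracle home entry is illustrative):

```shell
# Put "." first so subdirectories of the current directory still win;
# the Oracle home below is just an example entry.
export CDPATH=.:/opt/oracle/product/11gR2/db1

cd dbs   # if ./dbs does not exist, the shell tries
         # /opt/oracle/product/11gR2/db1/dbs and prints the path it chose
```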
There are several other variables, but these are the most widely used, and you should master them.

set
This command controls the behavior of the shell. It has many options and arguments; I will explain a few important ones.
A very common mistake people make with overwriting commands such as cp and mv is to overwrite correct files inadvertently. You can reduce that risk by using ''alias'' (shown in Part 1 of this series), e.g. using mv -i instead of mv. However, how can you prevent someone or some script from overwriting files via the redirection operator (''>'')?
Let's see an example. Suppose you have a file called very_important.txt. Someone (or some script) inadvertently used something like:
# ls -l > very_important.txt
The file immediately gets overwritten. You lose the original contents of the file. To prevent this risk, you can use the set command with the option -o noclobber as shown below:
# set -o noclobber
After this command if someone tries to overwrite the file:
# ls -l > very_important.txt
-bash: very_important.txt: cannot overwrite existing file
The shell now prevents an existing file from being overwritten. What if you do want to overwrite? You can use the >| operator:
# ls -l >| very_important.txt
To turn it off:
# set +o noclobber
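The whole noclobber workflow, end to end, can be sketched with a throwaway file:

```shell
# Sketch of noclobber protecting a file from redirection.
echo "original" > very_important.txt

set -o noclobber
echo "oops" > very_important.txt      # fails: cannot overwrite existing file
cat very_important.txt                # still prints: original

echo "intended" >| very_important.txt # >| deliberately overrides noclobber
cat very_important.txt                # now prints: intended

set +o noclobber                      # back to normal behavior
```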
Another very useful set option makes the vi editor available for editing commands. Later in this installment you will learn how to check the commands you have given and how to re-execute them. One quick way to re-execute a command is to recall it using vi-style keystrokes. To enable this, execute the following command first:
# set -o vi
Now suppose you are looking for a command that contains the letter ''v'' (such as vi, vim, etc.). To search for the command, use these keystrokes. I have shown the keys to be pressed within square brackets:
# [Escape Key][/ key][v key][ENTER Key]
This will bring up the latest executed command containing ''v''. The last command in this case was set -o vi, so that comes up at the command prompt.
# set -o vi
If that's not the command you were looking for, press the ''n'' key for the next most recent match. This way you can cycle through all the executed commands with the letter ''v'' in them. When you see the command you want, press [ENTER key] to execute it. The search can be as explicit as you like. Suppose you are looking for an mpstat command executed earlier. All you have to do is enter the search string ''mpstat'':
# [Escape Key][/ key]mpstat[ENTER Key]
Suppose the above command shows mpstat 5 5 and you really want to execute mpstat 10 10. Instead of retyping, you can edit the command in vi. To do so, press [Escape Key] and the [v] key, which will bring up the command in vi editor. Now you can edit the command as you want. When you save it in vi by pressing :wq, the modified command will be executed.
type
In Part 4 you learned about the kill command, which is a special one – it's both a utility (an executable in some directory) and a shell built-in. In addition, you also learned about aliases in a prior installment. And some words used in shell scripts – ''do'', ''done'', ''while'', for instance – are not really commands by themselves; they are called shell keywords.

How do you know what type a command is? The type command shows that. Here is how we use it to show the types of the commands mv, do, fc and oh.
# type mv do fc oh
mv is /bin/mv
do is a shell keyword
fc is a shell builtin
oh is aliased to `cd $ORACLE_HOME'
It shows very clearly that mv is a utility (along with its location), do is a keyword used inside scripts, fc is a built-in and oh is an alias (and what it aliased to).
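For commands that exist in more than one form – like kill – bash's type -a lists every form it knows about. A sketch (the exact paths vary by distribution):

```shell
# List all definitions of a command: aliases, builtins, and files on PATH.
type -a kill
# Typical output (paths vary by system):
#   kill is a shell builtin
#   kill is /bin/kill
```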
history
When you log in to a Linux system you typically execute a lot of commands at the command prompt. How do you know what commands you have executed? You may want to know for many reasons – to re-execute a command without retyping it, to make sure you executed the right command (e.g. removed the right file), to verify what commands were issued, and so on. The history command gives you a history of the commands executed.
# history
1064  cd dbs
1065  export CDPATH=/opt/oracle/product/11gR2/db1
1066  cd dbs
1067  pwd
1068  env
1069  env | grep HIST
... and so on ...
Note the numbers before each command. This is the event or command number. You will learn how to use this feature later in this section. If you want to display only a few lines of history instead of all available, say the most recent five commands:
# history 5
The greatest usefulness of the history command comes from the ability to re-execute a command without retyping it. To do so, enter the ! mark followed by the event or command number that precedes the command in the history output. To re-execute the command cd dbs shown at number 1066, you can issue:
# !1066
cd dbs
/opt/oracle/product/11gR2/db1/dbs
The command !! (two exclamation marks) executes the last command executed. You can also pass a string after the ! command, which re-executes the latest command with the pattern as the string in the starting position. The following command re-executes the most recent command starting with cd:
# !cd
cd dbs
/opt/oracle/product/11gR2/db1/dbs
What if you want to re-execute a command containing a string – not starting with it? The ? modifier does pattern matching within the commands. To search for a command that has network in it, issue:
# !?network?
cd network
/opt/oracle/product/11gR2/db1/network
You can modify the command being re-executed as well. For instance, suppose command 1091 was cd network and you want to re-execute it with /admin appended at the end; you would issue:

# !1091/admin
cd network/admin
/opt/oracle/product/11gR2/db1/network/admin
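The history mechanism itself is controlled by shell variables, hinted at in the env | grep HIST output earlier. A sketch (the values shown are common defaults, not guaranteed on every system):

```shell
# Variables governing bash command history; defaults vary by distribution.
echo "$HISTSIZE"     # how many commands are kept in memory, e.g. 1000
echo "$HISTFILE"     # where history is saved, typically ~/.bash_history

export HISTSIZE=2000 # keep a longer in-memory history for this session
```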
fc
Like history, this shell built-in shows the command history. The most common option is -l (the letter ''L'', not the number ''1''), which shows the 16 most recent commands:
# fc -l
1055  echo $pfiles
1056  export IFS=
... and so on ...
1064  cd dbs
1065  export CDPATH=/opt/oracle/product/11gR2/db1
1066  cd dbs
You can also ask fc to show only a few commands by giving a range of event numbers, e.g. 1060 and 1064:
# fc -l 1060 1064
1060  pwd
1061  echo CDPATH
1062  echo $CDPATH
1063  cd
1064  cd dbs
The -l option also accepts strings as its two parameters, performing pattern matching on command names. Here is an example displaying the history from the most recent command starting with echo through the most recent command starting with pwd.
# fc -l echo pwd
1062  echo $CDPATH
1063  cd
1064  cd dbs
1065  export CDPATH=/opt/oracle/product/11gR2/db1
1066  cd dbs
1067  pwd
If you want to re-execute the command cd dbs (command number 1066), you can simply enter that number after fc with the -s option:
# fc -s 1066
cd dbs
/opt/oracle/product/11gR2/db1/dbs
Another powerful use of fc is command substitution. Suppose you want to execute a command similar to 1066 (cd dbs) but want to issue cd network instead of cd dbs. You can use the substitution argument as shown below:
# fc -s dbs=network 1066
cd network
/opt/oracle/product/11gR2/db1/network
If you omit the -s option, as shown below:

# fc 1066

it opens the command cd dbs in a vi session, where you can edit it; saving and quitting (:wq) executes the modified command.
cpio
Consider this: you want to send a set of files to someone or somewhere and do not want to risk the files getting lost, breaking the set. What can you do to ensure that? Simple: if you put all the files into a single file and send that single file to its destination, you can rest assured the whole set arrived safely.

The cpio command has three main options:

-o (copy-out) – creates an archive
-i (copy-in) – extracts files from an archive
-p (pass-through) – copies files to another directory without creating an archive
Each option has its own set of sub-options. For instance, the -c option is applicable to -i and -o but not to -p. So, let's see the major option groups and how they are used.
The -v option is used to display a verbose output, which may be beneficial in cases where you want a definite feedback on what's going on.
First, let's see how to create an archive from a bunch of files. Here we take all files with the extension ''trc'' in a specific directory and put them into a file called myfiles.cpio:
$ ls *.trc | cpio -ocv > myfiles.cpio
+asm_ora_14651.trc
odba112_ora_13591.trc
odba112_ora_14111.trc
odba112_ora_14729.trc
odba112_ora_15422.trc
9 blocks
The -v option was for verbose output so cpio showed us each file as it was added to the archive. The -o option was used since we wanted to create an archive. The -c option was used to tell cpio to write the header information in ASCII, which makes it easier to move across platforms.
Another option is -O, which accepts the output archive file name as a parameter:
# ls *.trc | cpio -ocv -O mynewfiles.cpio
To extract the files:
$ cpio -icv < myfiles.cpio
+asm_ora_14651.trc
cpio: odba112_ora_13591.trc not created: newer or same age version exists
odba112_ora_13591.trc
Here the -v and -i options are used for verbose output and for extraction of files from the archive. The -c option instructs cpio to read the header information as ASCII. When cpio extracts a file that is already present (as was the case for odba112_ora_13591.trc), it does not overwrite the file but simply skips it with a message. To force overwriting, use the -u option:
# cpio -icvu < myfiles.cpio
To only display the contents without actually extracting, use the -t option along with -i (extraction):

# cpio -it < myfiles.cpio
+asm_ora_14651.trc
odba112_ora_13591.trc
odba112_ora_14111.trc
What if you are extracting a file which already exists? You still want to extract it but perhaps to a different name. One example is that you are trying to restore a file called alert.log (which is a log file for an Oracle instance) and you don't want to overwrite the current alert.log file.
One of the very useful options is -r, which allows you to rename the files being extracted, interactively:
# cpio -ir < myfiles.cpio
rename +asm_ora_14651.trc -> a.trc
rename odba112_ora_13591.trc -> b.trc
rename odba112_ora_14111.trc -> [ENTER]   (leaves the name alone)
If you created a cpio archive of a directory and want to extract to the same directory structure, use the -d option while extracting.
While creating, you can add files to an existing archive (append) using the -A option as shown below:
# ls *.trc | cpio -ocvA -O mynewfiles.cpio
The command has many other options, but you need to know only these to use it effectively.
tar
Another mechanism for creating an archive is tar. Originally created for archiving to tape drives (hence the name: Tape Archiver), tar is a very popular command thanks to its simplicity. It takes three primary options:

-c – creates an archive
-t – lists the contents of an archive
-x – extracts files from an archive
Here is how you create a tar archive. The -f option lets you name the output file tar will create as an archive. In this example we create an archive called myfiles.tar from all files with the extension ''trc''.
# tar -cf myfiles.tar *.trc
Once created, you can list the contents of an archive with the -t option:
# tar tf myfiles.tar
+asm_ora_14651.trc
odba112_ora_13591.trc
odba112_ora_14111.trc
odba112_ora_14729.trc
odba112_ora_15422.trc
To show the details of the files, use the -v (verbose) option:
# tar tvf myfiles.tar
-rw-r----- oracle/dba     1150 2008-12-30 22:06:39 +asm_ora_14651.trc
-rw-r----- oracle/dba      654 2008-12-26 15:17:22 odba112_ora_13591.trc
-rw-r----- oracle/dba      654 2008-12-26 15:19:29 odba112_ora_14111.trc
-rw-r----- oracle/dba      654 2008-12-26 15:21:36 odba112_ora_14729.trc
-rw-r----- oracle/dba      654 2008-12-26 15:24:32 odba112_ora_15422.trc
To extract files from the archive, use the -x option. Here is an example (the -v option has been added to show verbose output):
# tar xvf myfiles.tar
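GNU tar can also compress while it archives, via the -z flag, which pipes the archive through gzip internally; this merges the tar and gzip steps covered in this installment:

```shell
# Create, list, and extract a gzip-compressed tar archive in one step each.
tar czf myfiles.tar.gz *.trc   # c = create, z = gzip-compress, f = file name
tar tzf myfiles.tar.gz         # t = list the contents
tar xzf myfiles.tar.gz         # x = extract
```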
Compression
Compression is a very important part of Linux administration. You may be required to compress a lot of files to make room for new ones, to send files via email, and so on.

Linux offers many compression commands; here we'll examine the two most common ones: zip and gzip.

zip
The zip command produces a single file by consolidating other files and compressing them into one zip (or compressed) file. Here is a simple example usage of the command:
# zip myzip *.aud
It produces a file called myzip.zip with all the files in the directory named with a .aud extension.
zip accepts several options. The most common is -9, which instructs zip to compress as much as possible, at the cost of CPU cycles (and therefore time). The option -1 instructs the opposite: compress faster while not compressing as much.
You can also protect the zip file by encrypting it with a password; without the correct password the zip file cannot be unzipped. The password is provided at runtime with the -e (encrypt) option:
# zip -e ze *.aud
Enter password:
Verify password:
  adding: odba112_ora_10025_1.aud (deflated 32%)
  adding: odba112_ora_10093_1.aud (deflated 31%)
... and so on ...
The -P option allows the password to be given on the command line. Since this lets other users see the password in plaintext (by examining the process list or the command history), it's not recommended; prefer the -e option.
# zip -P oracle zp *.aud
updating: odba112_ora_10025_1.aud (deflated 32%)
updating: odba112_ora_10093_1.aud (deflated 31%)
updating: odba112_ora_10187_1.aud (deflated 31%)
... and so on ...
You can check the integrity of a zip file with the -T option. If the zip file is encrypted with a password, you have to provide the password:
# zip -T ze
[ze.zip] odba112_ora_10025_1.aud password:
test of ze.zip OK
Of course, what you zip you will need to unzip later, and the command is – you guessed it – unzip. Here is a simple usage of the unzip command:
# unzip myfiles.zip
If the zip file has been encrypted with a password, you will be asked for the password. When you enter it, it will not be repeated on the screen.
# unzip ze.zip
Archive:  ze.zip
[ze.zip] odba112_ora_10025_1.aud password:
password incorrect--reenter:
password incorrect--reenter:
replace odba112_ora_10025_1.aud? [y]es, [n]o, [A]ll, [N]one, [r]ename: N
In the above example the password was entered incorrectly at first, so the prompt was repeated. After the correct password was entered, unzip found that a file called odba112_ora_10025_1.aud already exists, so it prompted for an action. Note the choices – there is even a rename option, to rename the unzipped file.
Remember the zip file protected by a password passed on the command line with zip -P? You can unzip this file by passing the password on the command line as well, using the same -P option:
# unzip -P mypass zp.zip
The -P option differs from the -p option. The -p option instructs unzip to unzip files to the standard output, which can then be redirected to another file or another program.
The attractiveness of zip comes from the fact that it's the most portable format: you can zip on Linux and unzip on OS X or Windows. The unzip utility is available on many platforms.
Suppose you have zipped a lot of files under several subdirectories under a directory. When you unzip this file, it creates the subdirectories as needed. If you want all the files to be unzipped into the current directory instead, use the -j option.
# unzip -j myfiles.zip
One of the most useful combinations is the use of tar to consolidate the files and compressing the resultant archive file via the zip command. Instead of a two-step process of tar and zip, you can pass the output of tar to zip as shown below:
# tar cf - . | zip myfiles -
  adding: - (deflated 90%)
The special character ''-'' stands for standard input or output: tar cf - writes the archive to standard output, and the ''-'' at the end of the zip command makes zip read from standard input. The above command tars everything in the current directory and creates a zip file called myfiles.zip.
Similarly, while unzipping the zipped file and extracting the files from the zip archive, you can eliminate the two step process and perform both in one shot:
# unzip -p myfiles | tar xf -
gzip
The command gzip (short for GNU zip) is another command to compress files. It is intended to replace the old UNIX compress utility.
The main practical difference between zip and gzip is that the former creates a zip file from a bunch of files while the latter creates a compressed file for each input file. Here is an example usage:
# gzip odba112_ora_10025_1.aud
Note that it did not ask for a zip file name. The gzip command takes each file (e.g. odba112_ora_10025_1.aud) and simply creates a compressed file named odba112_ora_10025_1.aud.gz. Additionally – and note this point carefully – it removes the original file odba112_ora_10025_1.aud. If you pass a bunch of files as parameters to the command:
# gzip *
it creates a compressed file with the extension .gz for each file present in the directory. Initially the directory contained these files:
a.txt b.pdf c.trc
After the gzip * command, the contents of the directory will be:
a.txt.gz b.pdf.gz c.trc.gz
The same command is also used to unzip (uncompress, or decompress). The option is, quite intuitively, -d (decompress) for files compressed by gzip:
To check the contents of the gzipped file and how much has been compressed, you can use the -l option. It actually doesn't compress or uncompress anything; it just shows the contents.
# gzip -l *
         compressed        uncompressed  ratio uncompressed_name
                698                1150  42.5% +asm_ora_14651.trc
                464                 654  35.2% odba112_ora_13591.trc
                466                 654  34.9% odba112_ora_14111.trc
                466                 654  34.9% odba112_ora_14729.trc
                463                 654  35.3% odba112_ora_15422.trc
               2557                3766  33.2% (totals)
You can compress the files in a directory as well, using the recursive option (-r). To gzip all files under the log directory, use:
# gzip -r log
To check the integrity of a gzipped file, use the -t option:
# gzip -t myfile.gz
When you want a different name for the gzipped file, not the default .gz name, use the -c option. This instructs gzip to write to standard output, which can be redirected to a file. You can use the same technique to put more than one file into the same gzipped file. Here we compress two files – odba112_ora_14111.trc and odba112_ora_15422.trc – into the same compressed file named 1.gz:
# gzip -c odba112_ora_14111.trc odba112_ora_15422.trc > 1.gz
Note when you display the contents of the compressed file:
# gzip -l 1.gz
         compressed        uncompressed  ratio uncompressed_name
                654              -35.9%  1
The compression ratio shown is for the last file in the list only (which is why the original appears smaller than the compressed file). When you decompress this file, the contents of both original files are restored properly, one after the other.
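You can verify this concatenation behavior with two small throwaway files:

```shell
# Two gzip members written into one .gz file; decompressing to standard
# output yields both payloads, in order.
printf 'first\n'  > a.txt
printf 'second\n' > b.txt

gzip -c a.txt b.txt > both.gz   # both members land in both.gz
gzip -dc both.gz                # prints "first" then "second"
```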
The -f option forces the output to overwrite existing files. The -v option produces more verbose output. Here is an example:
# gzip -v *.trc
+asm_ora_14651.trc:      42.5% -- replaced with +asm_ora_14651.trc.gz
odba112_ora_13591.trc:   35.2% -- replaced with odba112_ora_13591.trc.gz
odba112_ora_14111.trc:   34.9% -- replaced with odba112_ora_14111.trc.gz
odba112_ora_14729.trc:   34.9% -- replaced with odba112_ora_14729.trc.gz
odba112_ora_15422.trc:   35.3% -- replaced with odba112_ora_15422.trc.gz
A related command is zcat. If you want to display the contents of the gzipped file without unzipping it first, use the zcat command:
# zcat 1.gz
The zcat command is equivalent to running gzip -d and piping the output to cat, but it does not actually remove or alter the compressed file.
Like the zip command, gzip also accepts the options for degree of compression:
# gzip -1 myfile.txt     (least compression, least CPU, fastest)
# gzip -9 myfile.txt     (most compression, most CPU, slowest)
The command gunzip is also available; it is equivalent to gzip -d (to decompress a gzipped file).
Managing Linux in a Virtual Environment
Linux has been running in data centers all over the world for quite a while now. Traditionally, a server meant a physical machine distinct from other physical machines. This was true until the arrival of virtualization, where a single server can be carved up into several virtual servers, each appearing as an independent server on the network. Conversely, a ''pool'' made up of several physical servers can be carved up as deemed necessary.
Since there is no longer a one-to-one relationship between a physical server and a logical or virtual server, some concepts might appear tricky. For instance, what is available memory? Is it the available memory of (1) the virtual server, (2) the individual physical server from which the virtual server was carved out, or (3) the total of the pool of servers the virtual server is a part of? So Linux commands may behave a little differently in a virtual environment.
In addition, the virtual environment also needs some administration so there are specialized commands for the management of the virtualized infrastructure. In this section you will learn about the specialized commands and activities related to the virtual environment. We will use Oracle VM as an example.
One of the key components of the virtualization in an Oracle VM environment is the Oracle VM Agent, which must be up for Oracle VM to be fully operational. To check if the agent is up, you have to get on to the Administration server (provm1, in this case) and use the service command:
[root@provm1 vnc-4.1.2]# service ovs-agent status
ok! process OVSMonitorServer exists.
ok! process OVSLogServer exists.
ok! process OVSAgentServer exists.
ok! process OVSPolicyServer exists.
ok! OVSAgentServer is alive.
The output shows clearly that all the key processes are up. If they are not, the agent may be misconfigured, and you may want to configure it (or configure it for the first time):
# service ovs-agent configure
The same service command is also used to start, restart and stop the agent processes:
service ovs-agent start
service ovs-agent restart
service ovs-agent stop
The best way, however, to manage the environment is via the GUI console, which is Web based. The Manager Webpage is available on the Admin server, at the port 8888, by default. You can bring it up by entering the following on any Web browser (assuming the admin server name is oradba2).
Login as admin and the password you created during installation. It brings up a screen shown below:
The bottom of the screen shows the physical servers of the server pool. Here the server pool is called provmpool1 and the physical server IP is 10.14.106.0. On this screen, you can reboot the server, power it off, take it off the pool and edit the details on the server. You can also add a new physical server to this pool by clicking on the Add Server button.
Clicking on the IP address of the server brings up the details of that physical server, as shown below:
Perhaps the most useful is the Monitor tab. If you click on it, it shows up the utilization of resources on the server – CPU, disk and memory, as shown below. From this page you can visually check if the resources are under or over utilized, if you need to add more physical servers and so on.
Going back to the main page, the Server Pools tab shows the various server pools defined. Here you can define another pool, stop, reinstate the pool and so on:
If you want to add a user or another administrator, you need to click on the Administration tab. There is a default administrator called ''admin''. You can check all the admins here, set their properties like email addresses, names, etc.:
Perhaps the most frequent activity you will perform is the management of individual virtual machines. Almost all the functions are located on the Virtual Machines tab on the main home page. It shows the VMs you have created so far. Here is a partial screenshot showing two machines called provmlin1 and provmlin2:
The VM named provmlin2 shows as ''powered off'', i.e. it appears as down to the end users. The other one – provmlin1 – has some kind of error. First, let's start the provmlin2 VM. Select the radio button next to it and click on the button Power On. After some time it will show as ''Running'', shown below:
If you click on the VM name, you will be able to see the details of the VM, as shown below:
From the above screen we know that the VM has been allocated 512MB of RAM; it runs Oracle Enterprise Linux 5; it has only one core; and so on. One of the key pieces of information available on the page is the VNC port: 5900. Using this, you can bring up the VNC terminal of this virtual machine. Here, I have used a VNC viewer, using the hostname provm1 and port 5900:
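As an illustration, with a typical command-line VNC client such as vncviewer (the article does not name the specific client used, so any VNC viewer should work), the connection is made by giving the host and port:

$ vncviewer provm1:5900

VNC clients also accept display numbers – provm1:0 is equivalent to port 5900, since display N maps to port 5900+N – so either form should connect to the same virtual machine.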
This brings up the VNC session on the server. Now you can start a terminal session:
Since the VNC port 5900 pointed to the virtual machine called provmlin4, the terminal on that VM came up. Now you can issue your regular Linux commands on this terminal.
On the server running the virtual machines, the performance measurement commands like uptime (described in Installment 3) and top (described in Installment 2) have different meanings compared to their physical server counterparts. On a physical server, uptime refers to the amount of time the server has been up, while in a virtual world it could be ambiguous – referring to the individual virtual servers on that server. To measure performance of the physical server pool, you use a different command: xm. Specific operations are issued as subcommands of this main command. For instance, to list the virtual servers, you can use the command xm list:
[root@provm1 ~]# xm list
Name                           ID  Mem VCPUs State   Time(s)
22_provmlin4                    1  512     1 -b----     27.8
Domain-0                        0  532     2 r-----   4631.9
To measure uptime, you would use xm uptime:
[root@provm1 ~]# xm uptime
Name                           ID Uptime
22_provmlin4                    1 0:02:05
Domain-0                        0 8:34:07
The other commands available in xm are shown below. Many of these commands can be executed via the GUI as well.
console      Attach to <Domain>'s console.
create       Create a domain based on <ConfigFile>.
new          Adds a domain to Xend domain management.
delete       Remove a domain from Xend domain management.
destroy      Terminate a domain immediately.
dump-core    Dump core for a specific domain.
help         Display this message.
list         List information about all/some domains.
mem-set      Set the current memory usage for a domain.
migrate      Migrate a domain to another machine.
pause        Pause execution of a domain.
reboot       Reboot a domain.
restore      Restore a domain from a saved state.
resume       Resume a Xend managed domain.
save         Save a domain state to restore later.
shell        Launch an interactive shell.
shutdown     Shutdown a domain.
start        Start a Xend managed domain.
suspend      Suspend a Xend managed domain.
top          Monitor a host and the domains in real time.
unpause      Unpause a paused domain.
uptime       Print uptime for a domain.
vcpu-set     Set the number of active VCPUs allowed for the domain.
Let's see some frequently used ones. Besides uptime, you may be interested in system performance via the top command. The xm top command acts pretty much like the top command in a regular server shell – it refreshes automatically and has keys that bring up different types of measurements, such as CPU, I/O, and network. Here is the output of the basic xm top command:
xentop - 02:16:58   Xen 3.1.4
2 domains: 1 running, 1 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 1562776k total, 1107616k used, 455160k free    CPUs: 2 @ 2992MHz
        NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR SSID
22_provmlin4  --b---        27    0.1     524288   33.5    1048576      67.1     1    1        9      154    1        0     6598     1207    0
    Domain-0  -----r      4647   23.4     544768   34.9   no limit       n/a     2    8    68656  2902548    0        0        0        0    0
It shows stats such as the percentage of CPU and memory used for each virtual machine. If you press N, you will see network activity, as shown below:
xentop - 02:17:18   Xen 3.1.4
2 domains: 1 running, 1 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 1562776k total, 1107616k used, 455160k free    CPUs: 2 @ 2992MHz
    Net0 RX:     180692bytes    2380pkts 0err 587drop  TX:     9414bytes      63pkts 0err 0drop
    Domain-0  -----r  4650  22.5  544768  34.9  no limit  n/a  2  8  68665  2902570  0  0  0  0  0
    Net0 RX: 2972232400bytes 2449735pkts 0err   0drop  TX: 70313906bytes 1017641pkts 0err 0drop
    Net1 RX:          0bytes       0pkts 0err   0drop  TX:        0bytes       0pkts 0err 0drop
    Net2 RX:          0bytes       0pkts 0err   0drop  TX:        0bytes       0pkts 0err 0drop
    Net3 RX:          0bytes       0pkts 0err   0drop  TX:        0bytes       0pkts 0err 0drop
    Net4 RX:          0bytes       0pkts 0err   0drop  TX:        0bytes       0pkts 0err 0drop
    Net5 RX:          0bytes       0pkts 0err   0drop  TX:        0bytes       0pkts 0err 0drop
    Net6 RX:          0bytes       0pkts 0err   0drop  TX:        0bytes       0pkts 0err 0drop
    Net7 RX:          0bytes       0pkts 0err   0drop  TX:        0bytes       0pkts 0err 0drop
Pressing V brings up VCPU (Virtual CPU) stats.
xentop - 02:19:02   Xen 3.1.4
2 domains: 1 running, 1 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 1562776k total, 1107616k used, 455160k free    CPUs: 2 @ 2992MHz
        NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR SSID
22_provmlin4  --b---        28    0.1     524288   33.5    1048576      67.1     1    1        9      282    1        0     6598     1220    0
    VCPUs(sec):  0: 28s
    Domain-0  -----r      4667    1.6     544768   34.9   no limit       n/a     2    8    68791  2902688    0        0        0        0    0
    VCPUs(sec):  0: 2753s  1: 1913s
Let's go through some fairly common activities, one of which is distributing the available memory among the VMs. Suppose you want to give a VM 256MB of RAM; you would use the xm mem-set command, and afterward you can use the xm list command to confirm the change.
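As a sketch, using the domain name 22_provmlin4 from the earlier xm list output (substitute your own VM's name), the sequence would look like this; the second argument to mem-set is the target memory in MB:

# xm mem-set 22_provmlin4 256
# xm list

The Mem column of the xm list output should then show 256 for that domain.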
This brings an end to the five-installment series on advanced Linux commands. As I mentioned at the beginning of the series, Linux has thousands of commands that are useful in many situations, and new commands are developed and added regularly. It's not as important to know all available commands as it is to know what works best for you.
In this series I presented and explained a few commands necessary to perform most of your daily jobs. If you practice these few commands, along with their options and arguments, you will be able to handle any Linux infrastructure with ease.
Thanks for reading and best of luck.
Arup Nanda ( firstname.lastname@example.org) has been exclusively an Oracle DBA for more than 12 years with experiences spanning all areas of Oracle Database technology, and was named "DBA of the Year" by Oracle Magazine in 2003. Arup is a frequent speaker and writer in Oracle-related events and journals and an Oracle ACE Director. He co-authored four books, including RMAN Recipes for Oracle Database 11g: A Problem Solution Approach .