by Ginny Henningsen
Published June 2012
This article surveys the interfaces and tools that sysadmins can use to set up and manage virtual compute, memory, operating system, network, and storage resources from Oracle. To limit scope somewhat, I don't address desktop virtualization solutions. I also don't discuss when to use one technology over another, but here are two recently posted articles that do:
In addition, Oracle Solaris 10 System Virtualization Essentials, a book by Jeff Victor, Jeff Savit, Gary Combs, Simon Hayler, and Bob Netherton (Prentice-Hall, 2011), discusses Oracle virtualization technologies and when to choose one over the other.
For each Oracle virtualization technology, I tried to answer these three questions:
Table 1 provides a high-level view of the technologies I've focused on.

Table 1. Virtualization Technologies

| Category | Technologies |
| Server virtualization: hard partitioning | Dynamic Domains (Oracle's SPARC Enterprise M-Series servers only) |
| Server virtualization: hypervisor-based | Oracle VM Server for SPARC (Oracle's SPARC T-Series servers only); Oracle VM (x86 systems only), which consists of Oracle VM Server for x86 and Oracle VM Manager; Oracle VM Templates |
| Operating system virtualization | Oracle Solaris Zones; Linux Containers (LXC) in Oracle Linux 6 |
| Network virtualization | Network virtualization in Oracle Solaris |
| Storage virtualization | Oracle Solaris ZFS; Oracle's Sun ZFS Storage Appliance |
You can manage each product individually with the tools it includes, or you can use Oracle Enterprise Manager Ops Center for a more comprehensive approach—managing a combination of bare-metal systems, virtualized environments, and cloud-based application services. This article describes both approaches.
Oracle servers incorporate these virtualization technologies:
Each is described below.
Supported on Oracle's SPARC Enterprise M-Series servers only, Dynamic Domains provide hardware-based partitioning of CPU, memory, and I/O hardware resources within a single chassis. Since each domain runs its own copy of the Oracle Solaris operating system, it's possible to run different releases and upgrades of Oracle Solaris within a single system, each in a completely isolated and separate operating domain. Because Dynamic Domains are hardware-based, they provide native performance while supplying the highest level of security isolation and fault tolerance.
The maximum number of Dynamic Domains that you can set up varies according to the SPARC Enterprise M-Series server model and its configuration, in particular the number of Physical System Boards (PSBs). For example, an entry-level SPARC Enterprise M3000 server houses only a single PSB and supports a single domain. In contrast, high-end SPARC Enterprise M-Series servers can support up to 16 boards configured into multiple domains, with a maximum of 24 domains on a fully populated SPARC Enterprise M9000 server. Dynamic Reconfiguration (DR) capabilities allow domain resources to be modified and reallocated while the server continues to run.
To manage domains on a SPARC Enterprise M-Series server, you must connect to the built-in service processor, the eXtended System Control Facility (XSCF), which is responsible for monitoring system health, power use, and domain status. You must log in to the XSCF shell and run a series of service processor commands to allocate resources and configure each domain. To set up a domain, you'll execute the commands in roughly this sequence:
1. The setupfru command specifies how the devices available in the server are partitioned before resources are allocated. Configure system boards in one of two modes: Uni-XSB mode, which groups all CPU, memory, and I/O devices on a PSB into a single logical unit, or Quad-XSB mode, which divides PSB resources logically into four sets (called eXtended System Boards, or XSBs) that are then allocated to domains.
2. The setdcl command identifies domain resources and specifies domain configuration information (the domain component list, or DCL).
3. The addboard command assigns specific hardware resources (XSBs) to a domain.
4. The poweron command powers on the domain.
5. The console command opens a console to the domain. From this console, you can install the Oracle Solaris OS (for a newly created domain) and set up domain services such as NTP.
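Taken together, the steps above might look like the following XSCF session. This is a hedged sketch: the domain ID, board numbers, and exact option syntax vary by server model and firmware release, so verify each command against the XSCF User's Guide before running it.

```
XSCF> setupfru -x 4 sb 0
XSCF> setdcl -d 0 -a 0=00-0
XSCF> addboard -c assign -d 0 00-0
XSCF> poweron -d 0
XSCF> console -d 0
```

Here PSB 0 is divided into four XSBs (Quad-XSB mode), XSB 00-0 is mapped to logical system board 0 of domain 0 and assigned to it, and the domain is then powered on and its console opened for OS installation.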
You can also use Oracle Enterprise Manager Ops Center to automate configuration tasks for Dynamic Domains. After Oracle Enterprise Manager Ops Center discovers the existence of SPARC Enterprise M-Series servers and checks firmware status, it can apply hardware resource profiles in a deployment plan to configure, install, or update systems, including a profile to define Dynamic Domains. You can use Oracle Enterprise Manager Ops Center to track allocated and unallocated system resources, domain configuration and status, power state, and performance data.
Configuration commands and procedures are described in the SPARC Enterprise M3000/M4000/M5000/M8000/M9000 Servers XSCF User's Guide and the SPARC Enterprise M3000/M4000/M5000/M8000/M9000 Servers Administration Guide. To learn more about Oracle Enterprise Manager Ops Center for managing hardware, see the Oracle Enterprise Manager Ops Center Feature Reference Guide.
Oracle VM Server for SPARC (previously called Sun Logical Domains, or LDoms) provides efficient, built-in virtualization capabilities on Oracle's SPARC T-Series servers. Oracle VM Server for SPARC makes it possible to deploy multiple operating systems (including different versions and upgrades of Oracle Solaris 10 or Oracle Solaris 11) on a single SPARC T-Series server. It takes advantage of processor threads on these servers, supporting up to 128 virtual machines on a single system.
Oracle VM Server for SPARC relies on a SPARC hypervisor, a small firmware layer that subdivides and partitions server resources (CPUs, memory, I/O, and storage) among defined virtual machines or logical domains. A domain's operating system is permitted to access only those resources allocated to it by the hypervisor. CPU threads are exclusively allocated and are not time-sliced, so compute operations achieve native performance—there is no context switching or privileged instruction emulation as there is in other virtual machine implementations.
Oracle VM Server for SPARC supports optional "whole-core" allocation, which allows domains to have their own per-core cache for optimal performance. This also facilitates hard partitioning for Oracle software licenses, permitting licenses to be based on the number of domain cores used rather than the total number of cores in the server.
You can use the Logical Domains Manager (ldm) to create virtual machines and allocate physical resources. The ldm interface is a command-line interface that features an extensive set of subcommands; see the ldm(1M) man page for a complete listing.
After installing the Oracle VM Server for SPARC software, you need to start three critical services:

vcc (the Virtual Console Concentrator service)
vds (the Virtual Disk Service)
vsw (the Virtual Switch Service)
Since all system resources are allocated initially to the control domain, you must release some resources before you can create guest domains.
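As a concrete illustration, preparing the control domain (named primary) before creating guests might look like the sketch below. The service names, console port range, and backing network device net0 are conventional examples rather than requirements; check the Oracle VM Server for SPARC Administration Guide for the syntax supported by your release.

```
# ldm add-vds primary-vds0 primary
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
# ldm add-vsw net-dev=net0 primary-vsw0 primary
# ldm start-reconf primary
# ldm set-vcpu 8 primary
# ldm set-memory 8g primary
# shutdown -y -g0 -i6
# svcadm enable vntsd
```

The start-reconf subcommand puts the control domain into delayed reconfiguration so its CPU and memory allocations can be reduced, the reboot makes those changes take effect, and enabling vntsd (the virtual network terminal server daemon) is what later lets you telnet to guest consoles.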
As an example of creating a guest domain, Listing 1 defines a guest domain named guest1 with eight virtual CPUs, 2 GB of memory, and a virtual network device. It also defines a virtual disk service device for a ZFS volume that has Oracle Solaris 11 installed on it, along with a virtual disk that exposes the volume to the guest. This OS instance could have been created by installing from a DVD image or network boot image or by cloning an existing OS instance from a different guest domain. (Cloning an existing domain is an extremely fast and efficient way of generating new guest domains.)

The ldm bind command commits the specified resources to the guest. Using the ldm list-domain command, you can display the console port number of the guest1 domain, which is port number 5000. This tells you to use telnet on console port 5000 (after the guest domain is started) to access the domain console.
# ldm create guest1
# ldm set-vcpu 8 guest1
# ldm set-mem 2g guest1
# ldm add-vnet vnet0 primary-vsw0 guest1
# ldm add-vdsdev /dev/zvol/rdsk/ldoms/vdisk/s11 s11@primary-vds0
# ldm add-vdisk vdisk0 s11@primary-vds0 guest1
# ldm bind guest1
# ldm list-domain guest1
NAME     STATE    FLAGS   CONS   VCPU   MEMORY   UTIL   UPTIME
guest1   bound    -----   5000   8      2G
# ldm start guest1
LDom guest1 started
# telnet localhost 5000
Listing 1. Setting Up a Guest Domain Using Oracle VM Server for SPARC
The Oracle VM Server for SPARC P2V migration tool (ldmp2v) takes an existing physical system and converts it to a virtual system running within a logical domain. Details on using ldmp2v are given in the Oracle VM Server for SPARC Administration Guide.
An important capability of Oracle VM Server for SPARC is secure live migration, which permits a domain to be migrated from one server to another while application services continue to run, with memory contents encrypted during transmission to ensure security. The article "How to Migrate a Running Oracle Database to Another System" discusses how the Live Migration feature can be used to relocate database workloads. Often when Oracle Solaris Zones are used to isolate workloads on SPARC T-Series servers, Oracle VM Server for SPARC is also used to enable the live migration of domains.
If you are faced with configuring domains on many SPARC T-Series servers, using the ldm command-line interface can become cumbersome. To increase administrative efficiency, consider using Oracle Enterprise Manager Ops Center, which can streamline repetitive domain configuration and management tasks for Oracle VM Server for SPARC, as well as monitor domain performance.
When you use Oracle Enterprise Manager Ops Center to provision Oracle VM Server for SPARC across selected SPARC T-Series target servers (as shown in Figure 1), the software automatically installs the specified Oracle Solaris OS, sets up the control domain, and installs a performance-monitoring agent on all targets.
Figure 1. Using Oracle Enterprise Manager Ops Center to Provision Oracle VM Server for SPARC
After Oracle Enterprise Manager Ops Center configures the servers, you can create logical domain profiles and deployment plans that define domain characteristics as well as provision operating systems within domains. You can even use Oracle Enterprise Manager Ops Center to perform live migration, moving a domain to another system via a simple point-and-click operation.
To learn more, see Oracle Enterprise Manager Ops Center: Configuring and Deploying Oracle VM Server for SPARC, Oracle Enterprise Manager Ops Center: Configuring and Installing Logical Domains, and the "Oracle VM Server for SPARC" chapter in the Oracle Enterprise Manager Ops Center Feature Reference Guide.
Designed to support virtual machines running high-performance enterprise applications, Oracle VM is a highly scalable server virtualization solution for x86 Intel or AMD processor-based platforms. It consists of two key components:
Oracle VM Server for x86 enables physical-to-virtual (P2V) mapping and implements advanced server management policies (such as high availability, distributed resource scheduling, and distributed power management) to support enterprise applications that require business continuity. Similar to Oracle VM Server for SPARC, it permits virtual machines to be migrated securely to another Oracle VM server, and it can pin CPUs to VMs to achieve hard partitioning and lower software licensing costs.
Oracle VM Manager 3 is an Oracle Fusion Middleware application that relies on underlying Oracle Database services and Oracle WebLogic Server application services running on Oracle Linux. On each server that hosts Oracle VM Server for x86, an Oracle VM agent runs and communicates with Oracle VM Manager. The agent forwards event notifications and configuration data and processes management requests.
Figure 2. Oracle VM Manager
Of course, you can also manage your Oracle VM environment using Oracle Enterprise Manager Ops Center, which encompasses the functionality of Oracle VM Manager. Whether used within Oracle Enterprise Manager Ops Center or on its own, Oracle VM Manager automates VM server discovery, configuration, and setup, streamlining the management of storage servers and file systems, networking, virtual server pools, and guest VMs.
The on-demand Webcast "Top 10 Tips to Accelerate Oracle VM Deployments" is an excellent resource for learning about best practices in setting up an Oracle VM environment. In the Webcast, Greg King (a Senior Best Practices Consultant for Oracle VM) talks about the importance of planning your deployment, using Oracle VM Templates, naming your objects clearly, and performing validation steps. Greg is also the author of the "Oracle VM 3: Quick Start Guide," which describes the Oracle VM Manager steps you need to perform in a typical implementation. As shown in the flowchart in Figure 3 (reproduced from this guide), deploying Oracle VM involves four general steps: preparation, building the Oracle VM Manager platform, creating a server pool, and creating guest VMs.
Figure 3. General Steps for Deploying Oracle VM
Oracle VM Templates are prebuilt, preconfigured, prepatched guest VMs that provide a fully configured software stack. They allow you to rapidly and easily download applications (along with a production-ready operating system) and begin using software right away. The current list of Oracle VM Templates includes the following:
Oracle VM Manager itself is even available as a template that can be installed as a guest on top of Oracle Linux.
For more information about how to administer an Oracle VM environment, see the Oracle VM documentation, especially the Oracle VM Getting Started Guide for installation tips and the Oracle VM User's Guide for details on Oracle VM Manager.
Oracle provides two primary operating system virtualization technologies:
Built into the Oracle Solaris 10 and Oracle Solaris 11 operating systems, Oracle Solaris Zones are an OS-level virtualization technology that provides independent, isolated, and secure runtime environments. Oracle Solaris Zones are extremely lightweight, enabling native performance and imposing virtually no overhead, making them ideal for supporting numerous virtual environments (for example, for mass consolidation efforts, multiple development sandboxes, and so forth).
Oracle Solaris automatically establishes a global zone for system-wide administrative control. Then you can create non-global zones (sometimes simply called zones) within the global zone. Although all zones share the same underlying kernel, applications running in one zone can't impact applications running in other zones. Oracle Solaris 11 even supports Oracle Solaris 10 Zones so you can run existing applications and more gracefully transition applications from Oracle Solaris 10 to Oracle Solaris 11.
The primary administrative interface for managing zones is the command-line interface zonecfg, which uses a tree-like structure of context-relevant subcommands. Listing 2 shows a simple example of how to create and install a non-global zone using zonecfg and zoneadm:
# zonecfg -z my-zone
my-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:my-zone> create
create: Using system default template 'SYSdefault'
zonecfg:my-zone> set zonepath=/zones/my-zone
zonecfg:my-zone> set autoboot=true
zonecfg:my-zone> verify
zonecfg:my-zone> commit
zonecfg:my-zone> exit
# zoneadm -z my-zone install
A ZFS file system has been created for this zone.
Progress being logged to /var/log/zones/zoneadm.20111016T114436Z.my-zone.install
. . .
Done: Installation completed in 151.635 seconds.
Next Steps: Boot the zone, then log into the zone console (zlogin -C)
to complete the configuration process.
Log saved in non-global zone as
/zones/my-zone/root/var/log/zones/zoneadm.20111016T114436Z.my-zone.install
Listing 2. Creating an Oracle Solaris Zone
You can allocate system resources to particular zones, which can improve resource utilization and help critical workloads get the resources they need. The zone.cpu-shares resource control governs a zone's CPU usage relative to other zones in the same resource pool. In Listing 3 (which is based on an example from the paper "Consolidating Applications with Oracle Solaris Containers"), one zone is assigned twice the CPU shares of the other via the relative values of cpu-shares:
global# zonecfg -z sales
zonecfg:sales> set scheduling-class=FSS
zonecfg:sales> set cpu-shares=20
zonecfg:sales> exit
global# zonecfg -z mkt
zonecfg:mkt> set scheduling-class=FSS
zonecfg:mkt> set cpu-shares=10
zonecfg:mkt> exit
Listing 3. Allocating Resources to Oracle Solaris Zones
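Under the Fair Share Scheduler, shares are relative entitlements, not caps: when both zones above compete for CPU, sales (20 shares) is entitled to roughly two-thirds of the pool and mkt (10 shares) to one-third, while an idle pool lets either zone use whatever is free. The small shell helper below makes the arithmetic concrete (fss_entitlement is my own illustrative name, not an Oracle tool, and it assumes all listed zones are busy in one resource pool):

```shell
#!/bin/sh
# Compute each zone's guaranteed CPU fraction under the Fair Share
# Scheduler, given zone:shares pairs. Shares only matter under
# contention; this prints the worst-case (all zones busy) entitlement.
fss_entitlement() {
  awk -v pairs="$*" 'BEGIN {
    n = split(pairs, a, " "); total = 0
    for (i = 1; i <= n; i++) {
      split(a[i], kv, ":"); name[i] = kv[1]; s[i] = kv[2]; total += kv[2]
    }
    for (i = 1; i <= n; i++)
      printf "%s %.0f%%\n", name[i], 100 * s[i] / total
  }'
}

fss_entitlement sales:20 mkt:10
```

With the Listing 3 values this prints an entitlement of about 67% for sales and 33% for mkt.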
You can also define virtual network interfaces (VNICs) and virtual switches, and impose a limit on network bandwidth, as in Listing 4, which comes from Duncan Hardie's article, "How to Restrict Your Application Traffic Using Oracle Solaris 11 Network Virtualization and Resource Management."
# zonecfg -z webzone-1
zonecfg:webzone-1> select anet linkname=net0
zonecfg:webzone-1:anet> set maxbw=400M
zonecfg:webzone-1:anet> end
zonecfg:webzone-1> verify
zonecfg:webzone-1> commit
zonecfg:webzone-1> exit
Listing 4. Limiting VNIC Bandwidth
Using the zonep2vchk tool, you can also perform P2V migrations (moving a system image into a non-global zone) or V2V (virtual-to-virtual) migrations to move an existing zone to a new system. These capabilities are helpful for rebalancing workloads, consolidating servers, and disaster recovery.
Of course, in addition to zonep2vchk, you can use Oracle Enterprise Manager Ops Center to configure and manage zones. (Are you sensing a pattern yet?) To configure zones using Oracle Enterprise Manager Ops Center, you first create a deployment plan with a zone profile that captures all zone configurations. Once zones are configured, you can use the Oracle Enterprise Manager Ops Center interface to perform zone management operations, including boot, reboot, shutdown, cloning, zone migration, and zone deletion.
To learn more about managing Oracle Solaris Zones, see "How to Get Started Creating Oracle Solaris Zones in Oracle Solaris 11," Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management, and the "Oracle Solaris Zones" chapter in the Oracle Enterprise Manager Ops Center Feature Reference Guide.
Available in Oracle Linux 6 with the Unbreakable Enterprise Kernel, Linux Containers (LXC) provide a way to isolate a group of processes from others on a running Linux system. Linux Containers is a lightweight operating system virtualization technology built on Linux resource management (cgroup) capabilities, with resource isolation implemented through namespaces. The LXC project page refers to Linux Containers as "a lightweight virtual system mechanism sometimes described as 'chroot on steroids'" (because of the way Linux Containers extend traditional process management).
The Containers on Linux blog article by Wim Coekaerts introduces LXC functionality and some of the lxc command-line interfaces that you can use to configure Linux Containers. In the article, Wim steps through the process of creating a container: downloading an Oracle VM template, creating the template source in the /container file system, and, finally, using the lxc-create script, as shown in Listing 5.
# lxc-create -n ol5test1 -t ol5
Cloning base template /container/ol5-template to /container/ol5test1 ...
Create a snapshot of '/container/ol5-template' in '/container/ol5test1'
Container created : /container/ol5test1
...
Container template source : /container/ol5-template
Container config : /etc/lxc/ol5test1
Network : eth0 (veth) on virbr0
'ol5' template installed
'ol5test1' created
# lxc-start -n ol5test1
# lxc-console -n ol5test1 -t 1
Listing 5. Creating a Linux Container
Once the container is created, you enter an lxc-start command to run the template scripts that install the container. At this point, you can either use lxc-console (as shown) or ssh to access the container environment. For more information, refer to the LXC project page or the documentation.
Oracle Solaris 11 introduced a new network architecture that facilitates network virtualization. Through the combination of VNICs, virtual switches, and bandwidth controls, you can create, control, and manipulate virtualized network environments. These environments are highly flexible, simulating or even replacing the physical network infrastructure.
Oracle Solaris 11 introduced the dladm command to configure datalinks and the ipadm command to configure network interfaces (these commands replace ifconfig, which served multiple purposes and didn't implement persistent configurations). Using these new Oracle Solaris 11 tools (often in conjunction with Oracle Solaris Zones), you can implement virtualized networks.
Oracle Solaris enables two types of virtualized network interfaces:
Listing 6 shows how you might use dladm and ipadm to create and configure two VNICs on a single physical interface:
# dladm create-vnic -l net0 vnic0
# dladm create-vnic -l net0 vnic1
# dladm show-vnic
LINK    OVER   SPEED       MACADDRESS         MACADDRTYPE
vnic0   net0   1000 Mbps   2:8:20:c2:39:38    random
vnic1   net0   1000 Mbps   2:8:20:5f:84:ff    random
# ipadm create-ip vnic0
# ipadm create-ip vnic1
# ipadm create-addr -T static -a 192.168.3.80/24 vnic0/v4address
# ipadm create-addr -T static -a 192.168.3.85/24 vnic1/v4address
# ipadm show-addr
ADDROBJ           TYPE     STATE   ADDR
lo0/?             static   ok      127.0.0.1/8
net0/v4addr       static   ok      192.168.3.70/24
vnic0/v4address   static   ok      192.168.3.80/24
vnic1/v4address   static   ok      192.168.3.85/24
Listing 6. Creating VNICs
By default, when you configure an Oracle Solaris Zone, it also creates an automatic VNIC (you can list it using zonecfg by specifying the info subcommand). One of the powerful features of Oracle Solaris is that you can set bandwidth controls for VNICs, thereby limiting networking resources for applications running in the zone. Duncan Hardie gives an example of doing this in the OTN article "How to Restrict Your Application Traffic Using Oracle Solaris 11 Network Virtualization and Resource Management." A recent podcast, "Why and How to Use Network Virtualization," also addresses the topic. For details on virtualized networking in Oracle Solaris 11, see Oracle Solaris Administration: Network Interfaces and Network Virtualization.
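Bandwidth limits aren't available only through zonecfg: you can also set or inspect the maxbw link property on an existing VNIC directly with dladm. The sketch below caps vnic0 (the interface created in Listing 6) at 400 Mbps; the show-linkprop output is abbreviated and illustrative.

```
# dladm set-linkprop -p maxbw=400M vnic0
# dladm show-linkprop -p maxbw vnic0
LINK    PROPERTY   PERM   VALUE   DEFAULT   POSSIBLE
vnic0   maxbw      rw     400     --        --
```

Because link properties set with dladm persist across reboots in Oracle Solaris 11, this is a quick way to throttle a VNIC without reconfiguring the zone that uses it.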
Oracle provides two primary storage virtualization technologies:
Initially deployed in Oracle Solaris 10, ZFS integrates volume management features and virtualizes underlying storage devices into a single shared storage pool. This approach enables thin provisioning, capacity growth, and improved storage resource utilization. ZFS offers a number of enhancements over previous file system technologies:
The primary commands for configuring storage pools and managing ZFS file systems are zpool and zfs. When you create a storage pool, you specify the virtual devices that comprise the pool. The virtual devices can be physical disks or files, and you can also specify the type of data replication, such as a mirror or raidz. (A RAID-Z device is similar to a RAID-5 device but with atomic operations. ZFS supports RAID-Z with single-, double-, or triple-parity fault tolerance, specified as raidz, raidz2, or raidz3, respectively.)
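As a quick capacity sanity check, a RAID-Z vdev of n disks with parity p leaves roughly (n - p) disks' worth of usable space, since one disk's worth of capacity goes to parity per parity level. The tiny helper below sketches the arithmetic (raidz_usable is my own illustrative name, not a ZFS command, and the figure is an upper bound that ignores metadata and allocation overhead):

```shell
#!/bin/sh
# Approximate usable capacity of a RAID-Z vdev:
# (number of disks - parity level) * per-disk size in GB.
raidz_usable() {
  disks=$1; size_gb=$2; parity=$3
  echo "$(( (disks - parity) * size_gb )) GB"
}

raidz_usable 5 1000 2   # a five-disk raidz2 of 1 TB disks
```

For example, a five-disk raidz2 of 1 TB disks yields roughly 3000 GB of usable space.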
Creating a ZFS storage pool automatically creates and mounts a new ZFS file system, as in the following examples, which create a mirrored ZFS file system named pool1 and a RAID-Z2 file system named pool2:

# zpool create pool1 mirror c1t1d0 c2t1d0
# zpool create pool2 raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
The zfs command configures ZFS data sets—typically a file system or a snapshot—within a ZFS storage pool. For example, the following command creates the file system pool1/home and sets several properties, including the mount point, sharing, and compression:
# zfs create -o mountpoint=/export/zfs -o sharenfs=on -o compression=on pool1/home
A file system constructed beneath pool1/home, as shown in the following command, will then inherit the properties of its parent:

# zfs create pool1/home/mark

The file system pool1/home/mark is automatically mounted at /export/zfs/mark and shared based on the parent's attributes.
ZFS snapshots are point-in-time file system copies that initially consume no additional disk space within the pool. As data within the active data set changes, the snapshot begins to use space, although it references old data to conserve space where it can. Snapshots can be used to clone VM images quickly, especially for logical domains and zones.
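Because snapshots and clones are nearly instantaneous, they pair well with guest provisioning. A hedged sketch, reusing the ZFS volume behind Listing 1's virtual disk (the dataset names are illustrative):

```
# zfs snapshot ldoms/vdisk/s11@golden
# zfs clone ldoms/vdisk/s11@golden ldoms/vdisk/guest2
```

The clone initially shares all of its blocks with the snapshot, so the new guest disk image is available immediately and consumes additional space only as its contents diverge from the original.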
ZFS file systems and snapshots are at the heart of managing Oracle Solaris 11 boot environments, and they are also used with the Live Upgrade feature of Oracle Solaris 10. Although snapshots are instrumental in creating alternate boot environments and don't initially consume disk space, you can sometimes get tripped up if you're not paying close attention when you generate them. Bob Netherton's blog article, Live Upgrade, /var/tmp and the Ever Growing Boot Environments, is a good lesson in thinking through space implications of snapshots (and especially where to put your patch cluster when using Live Upgrade on Oracle Solaris 10).
You can read more about administering ZFS in the Oracle Solaris Administration: ZFS File Systems.
Oracle offers a series of storage appliances that take advantage of the storage virtualization that ZFS provides. Supplying NAS application and data storage, Sun ZFS Storage Appliances are also configured into the Oracle Exadata, Oracle SPARC SuperCluster, and Oracle Exalogic Elastic Cloud products.
To manage a Sun ZFS Storage Appliance, you can use either a command-line interface (CLI) or a browser-based user interface (BUI).
Figure 4. Sun ZFS Storage Appliance BUI
To use the CLI, you typically access the appliance through the serial console or via ssh. Commands follow a context-relevant, tree-like structure, and tab completion lists the available options when you're unsure what to type in a particular context. The command structure in the CLI generally mirrors the functionality available in the BUI. The Sun ZFS Storage 7000 System Administration Guide gives examples of using both interfaces.
Of course, you can also manage Sun ZFS Storage Appliances as assets within Oracle Enterprise Manager Ops Center. To do this, you first configure the Sun ZFS Storage Appliance and the storage pool. For specific instructions, see the Oracle Enterprise Manager Ops Center Feature Reference Guide.
Oracle Enterprise Manager Ops Center is a centralized interface for managing and monitoring all Oracle systems across an IT infrastructure. It's part of the larger Oracle Enterprise Manager product family that enables a "single pane-of-glass" view into the end-to-end solution stack, including Oracle databases, middleware, and applications.
Oracle Enterprise Manager Ops Center provides a framework for managing a range of Oracle servers and related assets: Oracle engineered systems, Oracle's SPARC and x86 servers, Oracle Solaris, Oracle Linux, Oracle's Sun ZFS Storage Appliances, Oracle's Sun server networking, and (as I've tried to highlight in this article) Oracle's virtualization technologies—Dynamic Domains, Oracle VM Server for SPARC, Oracle VM Server for x86, and Oracle Solaris Zones.
To help you get started with Oracle Enterprise Manager Ops Center, Oracle provides a useful utility called OCDoctor that can qualify systems to see whether they meet the necessary prerequisites for Oracle Enterprise Manager Ops Center installation.
Oracle Enterprise Manager Ops Center uses a combination of profiles and plans to automate administrative tasks, such as configuration, provisioning, and updates. The user interface helps you define initial profiles and plans from a set of provided templates (see the Oracle Enterprise Manager Ops Center Quick Start Guide). Best of all, Oracle recently announced an Ops Center Everywhere Program in which qualified customers can download and implement the software without additional cost.
Here are some additional resources:
And here are URLs for the resources referenced earlier in this document.
Oracle VM Server for SPARC:
ldm(1M) man page: http://docs.oracle.com/cd/E23120_01/html/821-2855/ldm-1m.html
Oracle Solaris Zones:
Oracle Solaris ZFS and Sun ZFS Storage Appliances:
Oracle Enterprise Manager Ops Center:
Ginny Henningsen has worked for the last 15 years as a freelance writer developing technical collateral and documentation for high-tech companies. Prior to that, Ginny worked for Sun Microsystems, Inc. as a Systems Engineer in King of Prussia, PA and Milwaukee, WI. Ginny has a BA from Carnegie-Mellon University and an MSCS from Villanova University.
|Revision 1.0, 06/26/2012|