Oracle Solaris has built-in file services to support your applications with faster performance, greater safety, and simplified data management.
The last few decades of file system research have resulted in a great deal of progress in performance and recoverability. However, anyone who has ever lost important files, run out of space on a partition, or struggled with a volume manager understands the need for improvement in the areas of data integrity, manageability, and scalability. Oracle Solaris ZFS, available in Oracle Solaris 10, incorporates advanced data security and protection features, minimizing the need for recovery mechanisms that require downtime. By redefining file systems as virtualized storage, Oracle Solaris ZFS uses virtual storage pools to make it easy to expand file systems simply by adding more drives.
Oracle Solaris ZFS delivers both significant cost savings and advanced technology.
To address costs, ZFS is designed around the simple concept of the storage pool. If you need more storage, you add more disks to the pool; if you need a file system, it is allocated from the pool. Traditional file systems are built around a model of disks and disk partitions. The pool concept simplifies storage management so much that you no longer need a volume manager. For example, to add mirrored file systems for three users and then add more disks, Oracle Solaris ZFS reduces the number of tasks from 28 to five, and the time required from 40 minutes to 10 seconds. The simplicity of the ZFS model also reduces the chance of administrator error when performing file system-related tasks.
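The workflow above can be sketched with a few commands. This is an illustrative sequence only, assuming a system with ZFS available; the pool name and device names (c1t0d0 and so on) are placeholders.

```shell
# Hypothetical sketch: create a mirrored pool and per-user file systems.
# Device names are placeholders; run on a system with ZFS.

# One command creates the pool from a pair of mirrored disks.
zpool create home mirror c1t0d0 c1t1d0

# File systems are allocated from the pool -- no slicing, no newfs,
# no /etc/vfstab edits.
zfs create home/ann
zfs create home/bob
zfs create home/carol

# Need more space later? Add another mirrored pair; all file systems
# in the pool can use the new capacity.
zpool add home mirror c2t0d0 c2t1d0
```

Note that no volume manager is involved at any step: the pool itself provides the aggregation and redundancy a volume manager would otherwise supply.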
On the technology side, ZFS provides full end-to-end checksumming. Data is protected by 256-bit checksums, resulting in 99.99999999999999999% error detection and correction. ZFS constantly reads and checks data to help ensure it is correct, and if it detects an error in a storage pool with redundancy, it automatically repairs the corrupt data. Unlike traditional file systems, ZFS also keeps checksums separate from the data they protect, so it can detect when a good block is written in the wrong place, something traditional file systems cannot detect. A variety of mirroring options are also available with ZFS.
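The checking and self-healing described above can be invoked on demand as well. As a hedged sketch, assuming a redundant pool named home exists:

```shell
# Hypothetical sketch: ask ZFS to verify every checksum in the pool "home".
# On a redundant pool, blocks that fail verification are repaired from a
# good copy on another device.
zpool scrub home

# Check scrub progress and the per-device READ/WRITE/CKSUM error counters.
zpool status -v home
```

A scrub walks all data in the pool, so administrators typically schedule it periodically rather than relying only on checks performed during normal reads.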
ZFS is a copy-on-write file system, so taking a snapshot is an extremely fast and lightweight operation. With copy-on-write there is no need to copy the data blocks when making a snapshot; ZFS only copies the pointers to the data. Over time the snapshot may diverge from its source: when a data block can no longer be shared because its contents have changed, a new block is written and associated with the copy. To see the value of this approach, consider Oracle Solaris's venerable Live Upgrade feature. It has always been a popular way to mitigate risk and reduce the downtime that results from system software changes (for example, patching). But to use Live Upgrade on a UFS system, you need a spare partition as large as the existing root partition, because step one is to make a copy of the current boot environment before applying any changes.
With ZFS you don't need double the space for the copy; you just need enough space to account for the changes made between the time you take the copy and the time you finish whatever software changes are necessary. Making a copy of a UFS boot environment can take an hour or more on a large system; with ZFS, a copy takes on the order of 20 seconds, and you don't need a second disk to hold it. When it takes 20 seconds to build a safety net (a copy of the system before making any changes), you are much more motivated to use that safety net than when it takes an hour. The copy gives you the flexibility to reboot to the state of your system before any changes were made, a powerful tool for reducing risk, made all the more powerful by ZFS.
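The snapshot-as-safety-net pattern looks like this in practice. A hedged sketch, assuming a ZFS file system home/ann exists; the snapshot name is arbitrary:

```shell
# Hypothetical sketch: a snapshot copies only pointers, so it is
# near-instant and initially consumes almost no space.
zfs snapshot home/ann@before-patch

# ... apply patches or other risky changes ...

# If something goes wrong, roll the file system back to the snapshot.
zfs rollback home/ann@before-patch

# Space used by a snapshot grows only as the live data diverges from it.
zfs list -t snapshot
```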
Oracle Solaris ZFS is a 128-bit file system, providing 16 billion, billion times the capacity of 64-bit file systems. Hence, it supports more file systems, snapshots, and files in a file system than can possibly be created in the foreseeable future. It's not that you need 128-bit address fields today. But it won't be far into the future when the largest data applications need 65-bit addressing, putting them beyond the reach of traditional 64-bit file systems.
The UNIX File System (UFS) in Oracle Solaris 10 is very mature, very stable, and ideal for use with many traditional applications including those from Oracle. Oracle Solaris UFS has its roots in the Berkeley Fast File System (FFS) of the 1980s; today's implementation is the result of more than 20 years of development, improvement, and stabilization.
Oracle Solaris Volume Manager, used with UFS, is a robust disk and storage management solution suitable for enterprise-class deployments. It can pool storage elements into volumes and allocate them to applications; redundancy and failover capabilities help provide continuous data access even in the event of a device failure.
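For contrast with the ZFS workflow, here is a hedged sketch of building a mirrored Solaris Volume Manager volume for UFS. The slice names are placeholders, and state database replicas are assumed to exist already:

```shell
# Hypothetical sketch: build a mirrored SVM volume and put UFS on it.

# Create two simple concatenations, one per disk slice.
metainit d11 1 1 c1t0d0s0
metainit d12 1 1 c1t1d0s0

# Create a one-way mirror from the first, then attach the second submirror.
metainit d10 -m d11
metattach d10 d12

# The mirror can now carry a UFS file system.
newfs /dev/md/rdsk/d10
```

This is the multi-step partition-and-volume model that the ZFS pool concept is designed to replace.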
The Network File System (NFS) was introduced in the mid-1980s as one of the first and most successful examples of open network file sharing. Version 4 is the latest NFS release and is designed to be both vendor neutral and operating system neutral. NFSv4 integrates file access, file locking, and mount protocols into a single, unified protocol to ease traversal through a firewall and to improve security. The Oracle Solaris 10 implementation of NFSv4 is fully integrated with Kerberos V5, thus providing authentication, integrity, and privacy and enabling servers to offer different security flavors for different file systems. The Oracle Solaris 10 implementation also includes delegation, enabling the server to delegate the management of a file to a client and reducing the number of round-trip operations required. In addition, the protocol includes operation compounding, which allows multiple operations to be combined into a single "over-the-wire" request.
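Sharing and mounting a file system over NFSv4 on Solaris can be sketched as follows. The host name and export path are placeholders:

```shell
# Hypothetical sketch, with placeholder names.
# On the server: share /export/home over NFS.
share -F nfs -o rw /export/home

# On the client: mount it, requesting protocol version 4 explicitly.
mount -F nfs -o vers=4 server1:/export/home /mnt
```

Because NFSv4 folds the mount protocol into the main protocol, the client needs only a single well-known port open on the firewall, unlike earlier versions that used separate mount and lock daemons.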
Internet Storage Name Service (iSNS)
Since the Oracle Solaris 10 8/07 release, Oracle Solaris has provided support for the Internet Storage Name Service (iSNS) protocol in the Oracle Solaris iSCSI target and initiator software. The iSNS protocol allows for the automated discovery, management, and configuration of iSCSI devices on a TCP/IP network. Booting from iSCSI targets has been supported since Oracle Solaris 10 9/10, with the easiest installation experience offered by Oracle Solaris 10 1/13.
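Pointing the Solaris iSCSI initiator at an iSNS server can be sketched as follows; the server address is a placeholder:

```shell
# Hypothetical sketch: register an iSNS server with the iSCSI initiator
# so targets are discovered automatically. Address is a placeholder;
# 3205 is the standard iSNS port.
iscsiadm add isns-server 192.0.2.10:3205
iscsiadm modify discovery --iSNS enable

# List the targets the initiator has discovered.
iscsiadm list target
```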
The Oracle Solaris SAN Stack is an open, standards-based I/O framework and device driver stack supporting Fibre Channel, fully integrated into the Oracle Solaris 10 release to make it even easier for system administrators to use.
It includes Multiplexed I/O (MPxIO), an architecture that enables I/O devices to be accessed through multiple host controller interfaces from a single instance of the I/O device.
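Enabling and inspecting MPxIO can be sketched with two commands; this is an illustrative sequence only, and stmsboot changes device paths and requires a reboot:

```shell
# Hypothetical sketch: enable MPxIO multipathing system-wide.
# stmsboot updates device paths and prompts for a reboot.
stmsboot -e

# After reboot, list each logical unit and its operational paths.
mpathadm list lu
```

With MPxIO enabled, a multipathed device appears to applications as a single device node even though I/O may travel over several physical paths.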