ASM Enhancements in Oracle Database 12c


Databases are about storage, after all; and the storage layer is where ASM rules. In Oracle Database 12c, ASM has undergone significant changes and enhancements. It will take a lot more than just one article to go over all of them in detail, so here is an overview of the landmark enhancements, explained with enough detail so that you can hit the ground running.

Flex ASM Cluster


Flex ASM Cluster is perhaps the biggest change in ASM since its introduction 10 years ago. Consider an Oracle cluster that you have on a number of nodes. Each cluster node must run a certain set of processes, including the ASM processes, which are collectively called the ASM instance. The ASM instance manages the storage attached to the server. However, this poses a serious potential issue for the database: if that ASM instance fails, the database instance also shuts down, since it can no longer communicate with the storage. This dependence on the ASM instance caused some to balk at the prospect of ASM as a viable storage management solution. Well, worry no more.

To solve this problem, Oracle Database 12c introduces a new concept in the ASM architecture that lets a database instance connect to an ASM instance on a different node of the same cluster. The server where there is a database instance but no ASM instance is called a client. However, recall that the ASM processes fulfill a very important function - identifying the exact contents the database is asking for by looking up the extent map. Since there is no ASM instance, there is no extent map available on those client servers. To get the information on what is stored (the metadata), the client needs to connect to a real ASM instance that has the metadata, called a Flex ASM instance. By default, there are three Flex ASM instances in a cluster, but that number can be changed. Here is how Flex ASM works in a five-node cluster.


[Figure 1: Flex ASM in a five-node cluster (ASM-enhancements-odb12c-Fig1)]


There are five nodes in this cluster, named Server1 through Server5, and two databases - D1 and D2. Instances of D1 run on Server1, Server2 and Server3, while those of D2 run on Server3, Server4 and Server5. Prior to Oracle Database 12c, you would have had to run an ASM instance on each of these servers as well, most likely named +ASM1 through +ASM5. In the Flex ASM architecture, however, you could choose to run the ASM instance (called a Flex ASM instance) only on Servers 2, 3 and 4. Servers 1 and 5 do not need to run an ASM instance; they are simply called ASM clients. The clients communicate with the Flex ASM nodes to get the metadata and then read the data from the disks into the database instance. Since no ASM instance runs on these client nodes, some memory and computing capacity is saved.

However, not having to run ASM instances on every node is not the only advantage of Flex ASM; it also insulates you from the damaging effect of an ASM instance going down. Let's examine the impact of an ASM instance crashing. Since there is no ASM instance running on Servers 1 and 5, the case of ASM coming down does not apply to those two nodes. If the ASM instance on Server3 crashes, Oracle will automatically start an ASM instance on one of the two remaining nodes, Server1 or Server5. Recall that Flex ASM allows an ASM client to make the storage available to the database instance. So Server3 (where the ASM instance crashed) will simply become an ASM client for the database instance running there. The database instance will not crash.

You can enable Flex ASM when you install the cluster. Of course, it is relevant only in a cluster installation, not in a single-instance one. While installing the Grid Infrastructure, you have an option to choose the type of ASM instance, shown in the screen below:


[Figure: Grid Infrastructure installer screen for choosing Flex ASM (flexasm-install)]


You choose the "Use Oracle Flex ASM for storage" option for Flex ASM. It is also possible to convert an existing normal ASM installation to Flex ASM. The default number of ASM instances to run on the cluster is 3, so Oracle will choose any three nodes on the cluster to run the instances. The other nodes will become client nodes. The number of nodes where Oracle ASM runs is known as the ASM cardinality. You can check it with the srvctl tool:

 $ srvctl config asm

Look for "ASM instance count" in the output. If an ASM instance fails, Oracle Clusterware automatically starts up enough ASM instances on other nodes to maintain the cardinality. You can also change the cardinality with the same tool:

 $ srvctl modify asm -count 2

How does the database instance running on a client node decide which of the three Flex ASM instances (assuming you have three) to connect to? That is done automatically by the Flex ASM infrastructure, which chooses the least loaded ASM instance.
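If you are curious which database instances a particular Flex ASM instance is serving at any given moment, one quick check (a sketch, not the only method; run it while connected to that ASM instance) is to query V$ASM_CLIENT:

 SQL> select instance_name, db_name, status from v$asm_client;

The output lists the database instances currently being served by that ASM instance.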

ASM Password File


Flex ASM brings up a little complexity. Earlier (before Flex ASM), each node ran an ASM instance, so the database instance merely connected to the ASM instance running on that node directly. With Flex ASM, however, the database instance may run on a node without an ASM instance. Take the instance of database D1 running on Server1, where there is no ASM instance; it must instead communicate with the Flex ASM instances running on Servers 2, 3 and 4. This communication occurs across servers. How will a process running on Server1 authenticate with a process running on Server2?

To solve that problem, Oracle Database 12c now creates an ASM password file that the clients use to authenticate. Should you have to worry about creating and maintaining this additional file? Not at all. During the installation, an ASM password file is created automatically.
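If you want to confirm where that password file actually lives, ASMCMD in 12c offers password file management commands; a minimal sketch, assuming the default setup created by the installer:

 ASMCMD> pwget --asm

This returns the full path of the ASM password file, which, as described below, is typically inside a diskgroup.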

Speaking of password files, remember the password files used in shared environments such as Data Guard? They had to be created and replicated, without which authentication would fail. If you changed the password, you may have had to recreate the password file. Cumbersome and unreliable at best. Well, those days are gone. The ASM password file is now kept in an ASM diskgroup, which is available to all the nodes. Being in one place, it does not need replication between nodes. Here is the location of the password file, shown via ASMCMD.

 ASMCMD> pwd
+LABDATA1/ASM/PASSWORD
ASMCMD> ls -ls
Type      Redund  Striped  Time             Sys  Block_Size  Blocks  Bytes  Space  Name
PASSWORD  UNPROT  COARSE   OCT 14 13:00:00  Y           512      15   7680      0  pwdasm.256.828798957


By the way, the orapwd utility now accepts a diskgroup, not just a filesystem, as a location for password files. This is excellent for clusters, where an ASM diskgroup is visible to all the nodes. So a password file, once created, will be visible to all nodes and will not need copying over.

$ orapwd file='+DATA' password=oracle dbuniquename=LCRA


Checking for the presence of the file created in the ASM diskgroup:

 ASMCMD [+data/lcra/password] > ls -ls
Type      Redund  Striped  Time             Sys  Block_Size  Blocks  Bytes  Space  Name
PASSWORD  HIGH    COARSE   OCT 14 13:00:00  Y           512      15   7680      0  pwdlcra.289.8336729457


To create an ASM password file, you need to use the same orapwd utility but simply use a parameter called asm:

 $ orapwd file='+DATA' asm=y  

Enter password for SYS:


Checking for the presence of this password file (note the location: it's under the ASM directory of the diskgroup):

 ASMCMD [+data/asm/password] > ls -ls

Type      Redund  Striped  Time             Sys  Block_Size  Blocks  Bytes  Space  Name
PASSWORD  HIGH    COARSE   DEC 08 23:00:00  Y           512      15   7680      0  pwdasm.310.833673205


ASM Interconnect


The ASM instances running on the nodes of the cluster have to communicate among themselves, mostly for metadata. This network traffic is not significant, and in previous versions the Clusterware's interconnect was used for this ASM communication. In Oracle Database 12c, however, you can create a separate network dedicated to ASM communication. You specify the network address during the installation. Here is a visual representation of the ASM network. Before you scream in agony that Oracle Database 12c needs yet another network, remember that this dedicated ASM network is completely optional; the default is the pre-12c behavior of using the Clusterware interconnect for ASM communication.


[Figure 2: A dedicated ASM network (ASM-enhancements-odb12c-Fig2)]
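The ASM network is normally designated when you install the Grid Infrastructure, but you can inspect or adjust the interface classification afterwards with oifcfg. A minimal sketch, assuming an interface named eth2 on subnet 192.168.10.0 (both are illustrative names, not values from this article):

 $ oifcfg getif
 $ oifcfg setif -global eth2/192.168.10.0:asm

The first command lists the interfaces and their current roles; the second designates one of them for ASM traffic.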

Faster Resync



When a disk in an ASM mirrored diskgroup goes offline, perhaps because it has failed or because you temporarily offlined it to replace it, its contents need to be replicated back from the other surviving disks. This is called disk resync. The data is not 100% protected until the resync is complete, but the process also consumes system resources. There are some significant enhancements here. First of all, the resync operation is restartable: if the operation is somehow interrupted, it resumes from the point where it stopped, which makes the overall resync operation faster. ASM uses checkpoints to record the last point of resync and restarts from there.

But that's not all. If you have spare processing capacity in the server, you would want to direct all the resources to completing the resync operation faster, wouldn't you? Conversely, if you don't have much spare capacity, you would want the resync to consume as few resources as possible. Remember the power limit during disk rebalance? The power limit is a way for you to allocate as little or as much resource to the rebalance operation as you like - the higher the limit, the more resources allocated and the faster the process. The same concept is now available in resync operations too. While onlining a disk or a diskgroup, which kicks off a resync, you can specify a power limit. In ASMCMD, this is given by the --power parameter, as shown below, where we online the disk DATA_0000 of the diskgroup DATA:

 ASMCMD> online -G DATA  -D DATA_0000 --power 100
Diskgroup altered.

The power limit can be anything from 1 to 1024, a far cry from the 1-11 range available for rebalance operations in Oracle 10g. You can use SQL for the same operation as well:

 SQL> alter diskgroup data online disk data_0000 power 100;
Diskgroup altered.
By the way, the output "Diskgroup altered" does not actually mean that the disk has been onlined. The command merely kicks off an online operation in the background. You can find out what is going on in that operation and how much is left by querying the view v$asm_operation, as shown below:

 SQL> select * from v$asm_operation;
 
GROUP_NUMBER OPERA PASS      STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES ERROR_CODE CON_ID
------------ ----- --------- ---- ----- ------ ----- -------- -------- ----------- ---------- ------
           3 REBAL RESYNC    DONE   100    100     0      229        0           0          0      0
           3 REBAL REBALANCE DONE   100    100    17       17        0           0          0      0
           3 REBAL COMPACT   RUN    100    100     0        0        0           0          0      0

If you would rather have the command not return until the operation completes, add the WAIT clause, as shown below:

 SQL> alter diskgroup data online disk data_0000 power 100 wait;

This command, instead of returning the prompt back to you, will wait until the operation is complete.

Diskgroup Attributes


Now you can see and set many attributes of a diskgroup. The command lsattr (also available prior to 12c) in ASMCMD shows the various attributes.

ASMCMD> lsattr -l
Name Value
access_control.enabled FALSE
access_control.umask 066
au_size 1048576
cell.smart_scan_capable FALSE
compatible.advm 11.2.0.0.0
compatible.asm 12.1.0.0.0
compatible.rdbms 11.2.0.0.0
content.check FALSE
content.type recovery
disk_repair_time 3.6h
failgroup_repair_time 24.0h
idp.boundary auto
idp.type dynamic
phys_meta_replicated true
sector_size 512
thin_provisioned FALSE

access_control.enabled FALSE
access_control.umask 066
au_size 1048576
cell.smart_scan_capable FALSE
compatible.advm 11.2.0.0.0
compatible.asm 12.1.0.0.0
compatible.rdbms 11.2.0.0.0
content.check FALSE
content.type data
disk_repair_time 3.6h
failgroup_repair_time 24.0h
idp.boundary auto
idp.type dynamic
phys_meta_replicated true
sector_size 512
thin_provisioned FALSE

access_control.enabled FALSE
access_control.umask 066
au_size 1048576
cell.smart_scan_capable FALSE
compatible.asm 12.1.0.0.0
compatible.rdbms 10.1.0.0.0
content.check FALSE
content.type data
disk_repair_time 3.6h
failgroup_repair_time 24.0h
idp.boundary auto
idp.type dynamic
phys_meta_replicated true
sector_size 512
thin_provisioned FALSE

In the previous versions you could see only the following:

  • access_control.enabled
  • access_control.umask
  • au_size
  • cell.smart_scan_capable
  • compatible.asm
  • compatible.rdbms
  • disk_repair_time
  • sector_size
We will examine some of the attributes in the subsequent sections.

ASM Command Line Interpreter


ASMCMD is the command line extension to the ASM interface, available since Oracle 10g Release 2, freeing you from using SQL*Plus to manage the ASM infrastructure. While it was primarily aimed at system administrators who were less familiar with SQL and therefore needed a command line interface, it gained popularity among DBAs as well, particularly because it is quite useful for scripting and quick checks. ASMCMD has improved substantially over the last several releases, and in this version there are many enhancements worth writing about.

First, the parameters to the tool have seen new additions to accommodate the new functionality introduced in ASM. Note that these parameters require the double hyphen ("--") more commonly used in modern tools, not the single hyphen ("-"). For instance, this is valid:
 $ asmcmd --nocp

This is not valid:
 $ asmcmd -nocp

You may be using the Database Resident Connection Pooling feature. When you kick off an ASMCMD command, it actually establishes a connection to the ASM instance. To disable connection pooling, use the --nocp parameter of the ASMCMD tool:
 $ asmcmd --nocp

If you are using Flex ASM, you can use ASMCMD to connect to a particular instance. The --inst parameter allows you to connect to a specific named instance. To connect to +ASM2, use the following:
 $ asmcmd --inst +ASM2

The commands within ASMCMD have seen new additions as well. Let's see some of the new commands. To show the version of ASM you are running, use showversion:
 ASMCMD> showversion
 ASM version         : 12.1.0.1.0

In a cluster, the patch level may vary among the various nodes of the cluster due to rolling patch applications. In that case you will be interested in determining the patch levels in the cluster as well as on the local node. To know the patch level of the entire cluster, use the --softwarepatch parameter of the command:
 ASMCMD> showversion --softwarepatch
ASM version         : 12.1.0.1.0
Software patchlevel : 0

To determine the patch level of the ASM installation on the local node, use the --releasepatch parameter:
 ASMCMD> showversion --releasepatch
ASM version         : 12.1.0.1.0
Release patchlevel  : 0

This is different from the -V parameter of the asmcmd command itself. Here is how the output of that parameter looks:
 $ asmcmd -V
asmcmd version 12.1.0.1.0

The latter shows the version of the ASMCMD tool while the former shows the version of ASM. To show the patches installed in the ASM software home, use the showpatches command:
 ASMCMD> showpatches
---------------
List of Patches
===============

If there were patches installed, they would have appeared here. The showclustermode command shows if Flex ASM (discussed earlier in the article) is enabled or not.
 ASMCMD> showclustermode
ASM cluster : Flex mode disabled

To display the state of the cluster, use the showclusterstate command:
 ASMCMD> showclusterstate
Normal

Physical Replication


An ASM diskgroup has two different types of metadata that are kept in the disk header:
  • Allocation Table - a data structure that shows the different Allocation Units (AUs) stored on the disk. AUs, if you recall, are the minimum addressable units in a diskgroup. If an AU is allocated to a file stored in the diskgroup, it's the Allocation Table that shows which file and which extent the AU belongs to. Before allocating storage, Oracle must check this table to see which AUs can be allocated. Each entry in the AT represents a single AU on that disk.
  • Free Space Table - now that you know what Allocation Tables are, you can easily see a potential issue. If too many processes check the Allocation Table to see which AUs are available, there will be contention on the AT itself. Therefore Oracle divides the AT into blocks and marks those blocks that no longer have any free AUs, so that processes looking for free AUs will not scan them. This table of free blocks of AT entries is called the Free Space Table and is located at the beginning of the Allocation Table.
To know the location of an extent of a file in an AU, Oracle needs this information. If a diskgroup has more than one disk, the information must be present on all those disks as well. When a disk header is damaged, the contents of the disk are pretty much gone, unless there is a mirror copy. However, if the header is damaged but nothing else is, it's quite possible that the disk data is salvageable, provided another copy of these two metadata elements (Allocation Table and Free Space Table) is available on that disk. In Oracle Database 12c, a copy of this metadata is automatically created. This is known as physical replication. Note that while this is a 12c feature, it is available only if the diskgroup compatibility is set to 12.1 or higher. To check the physical replication status, you can check the diskgroup attribute as shown below:

 ASMCMD>  lsattr -G DATA -l phys_meta_replicated

Name  Value 
phys_meta_replicated  true


The status shows "true". If you create a diskgroup with compatibility 11.2, this feature will be turned off.
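If you have an existing diskgroup created with a lower compatibility, you can raise it with a simple ALTER DISKGROUP; a minimal sketch (remember that compatibility can be advanced but never lowered):

 SQL> alter diskgroup data set attribute 'compatible.asm' = '12.1';
Diskgroup altered.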

Diskgroup Type


When you create a diskgroup in ASM, do you really care what it will be used for? In other words, if you knew the intended purpose of the diskgroup, would you have created it differently? In some cases you would. For instance, for a diskgroup created for redo logs, you would have chosen fine striping as opposed to the coarse striping of a regular data diskgroup. Another consideration is Allocation Unit size. If the database is huge, you may want to create larger AUs in data diskgroups compared to others.
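For example, if you decide a data diskgroup for a very large database should use bigger allocation units, you can set the au_size attribute at creation time. A minimal sketch (the diskgroup name and disk paths are illustrative):

 SQL> create diskgroup bigdata external redundancy
   2  disk '/dev/sdk', '/dev/sdl'
   3  attribute 'au_size' = '4M', 'compatible.asm' = '12.1';

The AU size can only be chosen when the diskgroup is created; it cannot be changed afterwards.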

There is another situation where this is very important. Consider disk failures in ASM mirrored diskgroups. ASM maintains copies of the AUs on other disks. If a disk fails, no data is lost because the other surviving member(s) of the diskgroup contain the copies. If both copies are lost, you have to restore the database from the backup, typically stored in the Flash Recovery Area (FRA). Ideally you have created the FRA on its own diskgroup, so it's quite possible that at least one of the two - the data diskgroup or the FRA diskgroup - will have survived the failure. However, what happens when both diskgroups, DATA and FRA, are spread over the same physical disks, and the two disks that hold a primary AU and its copy both fail? Let's see that in the figure shown below.


[Figure 3: DATA and FRA diskgroups sharing the same five physical disks (ASM-enhancements-odb12c-Fig3)]


Here we have five physical disks. The DATA and FRA diskgroups have been created across all five disks. Both diskgroups are created with NORMAL redundancy, i.e. one copy of each AU is created on a different disk. I have shown the AUs with numbers attached to them. The colors represent the type of AU: red is the primary copy and blue is the copy made by ASM's normal redundancy. For instance, AU number 1 (shown as a red oval with "1") of the DATA diskgroup is located on disk 1 while the copy of that AU (shown as a blue oval with "1") is on disk 5. So if disk 1 fails, AUs 1 and 2 will be affected; but their copies will be found on disk 5 and can be repaired from there.

However, this distribution has a serious potential flaw. What happens if both disks 1 and 5 fail? Both the primary and the copy of AU 1 will be lost and the diskgroup can't be repaired automatically. You would have to restore AU 1 from the backup located in the FRA. But if you examine the above figure carefully, you will notice that the primary and copy of that AU in the FRA diskgroup also happen to be on the same two disks. So you do not have a backup to restore from. Disaster!

You could have avoided that situation had you somehow made sure the copies in the FRA diskgroup were placed on different physical disks from the copies in the DATA diskgroup. For instance, the copy of AU #1 in DATA is on disk #5, so the copy of AU #1 in FRA should be anywhere but disk #5. How can you accomplish that? Since you don't control the placement of AUs, you really can't guarantee it. Of course, if you had entirely different sets of physical disks for DATA and FRA, this would not have been an issue; but in many cases that kind of separation is not possible. You want to create a diskgroup across as many spindles as possible to improve performance, and isolating diskgroups on separate disks reduces the number of available spindles. Besides, this may not be an option for small databases with large LUN sizes.

Fortunately, Oracle Database 12c ASM has a solution for that. You can now specify the content type of the diskgroup. There are three content types, based on which Oracle decides where to place the copy of an AU among the available disks. If the content type is data, the AU copy is placed on the next disk. If the content type is recovery, Oracle places the copy not on the next disk but two disks away. Here is how you specify the content type:
 ASMCMD> setattr -G FRA content.type recovery
You can also use ALTER DISKGROUP in SQL to set the value, or simply give the attribute when creating the diskgroup; a minimal sketch of both SQL forms follows.
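Here is that sketch (the diskgroup names and disk paths are illustrative):

 SQL> alter diskgroup fra set attribute 'content.type' = 'recovery';
Diskgroup altered.

 SQL> create diskgroup reco normal redundancy
   2  disk '/dev/sdm', '/dev/sdn'
   3  attribute 'content.type' = 'recovery', 'compatible.asm' = '12.1';

When the FRA diskgroup has this attribute, here is how the AUs and their copies are created on the same five disks: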


[Figure 4: AU placement on the same five disks with content.type set to recovery for FRA (ASM-enhancements-odb12c-Fig4)]

With this arrangement, the FRA copies are kept apart from the DATA copies. Note how the copy of AU #2 in the FRA diskgroup is located two disks away from its primary, while in the DATA diskgroup it is on the next disk. If both disks #1 and #2 fail, AU 2 of DATA will be completely lost. However, the FRA diskgroup will still have a copy of AU #2 on disk #4, which is undamaged. Now you can restore the datafile. Phew! Disaster avoided.

You can have three different values for the attribute content.type:
  • data - for data, which places the AU copy on the neighboring disk
  • recovery - for FRA, which places the AU copy two disks away from the primary
  • system - for system-related files, which places the copy four disks away from the primary

To find out the content type, just use the lsattr command in ASMCMD:

ASMCMD> lsattr -G FRA -l -m content.type
Group_Name  Name          Value     RO  Sys
FRA         content.type  recovery  N   Y

The -m option shows whether the value is read-only (the "RO" column) and whether it is system-defined (the "Sys" column).

Renaming Disks

When you create a diskgroup, you need to give the paths of the physical disks (or LUNs, as appropriate), e.g. "/dev/sdb". Internally, however, Oracle assigns a name to each disk, typically DiskGroupName_aFourDigitNumber, e.g. "DATA2_0001". Normally you don't need to worry about the disk name, but in some cases you do. For instance, when you replace a disk, you have to give the name, not the path. Let's look at the disks of a diskgroup DATA2:


SQL> select name, path
  2  from v$asm_disk
  3  where group_number = (select group_number from v$asm_diskgroup where name = 'DATA2');  
NAME       PATH
---------- --------
DATA2_0001 /dev/sdg
DATA2_0000 /dev/sdf


The name DATA2_0001 is not very intuitive. It does show you which diskgroup it belongs to, but nothing about the disk itself. For instance, you may want to name all your disks based on the specific type of storage: all disks on EMC VMAX should have VMAX in the name, all on VNX storage should have VNX, and so on. In short, you want to rename the disks. No worries; Oracle Database 12c ASM allows you to do that using a RENAME clause. First, you need to dismount the diskgroup and mount it in restricted mode:

SQL> alter diskgroup data2 dismount;
Diskgroup altered.

SQL> alter diskgroup data2 mount restricted;
Diskgroup altered.

Then use the following SQL to rename the disks:

SQL> alter diskgroup data2 rename disk 'DATA2_0001' to 'DATA2_VMAX_0001', 'DATA2_0000' to 'DATA2_VMAX_0000';
Diskgroup altered.

Now if you check the disk names and paths:

 SQL> select name, path
  2  from v$asm_disk
  3  where group_number = (select group_number from v$asm_diskgroup where name = 'DATA2');
NAME            PATH
--------------- --------
DATA2_VMAX_0001 /dev/sdg
DATA2_VMAX_0000 /dev/sdf


Since the diskgroup is now mounted in restricted mode, you should dismount it and mount it normally before releasing it for normal use.
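For completeness, those final steps look like this:

 SQL> alter diskgroup data2 dismount;
Diskgroup altered.

 SQL> alter diskgroup data2 mount;
Diskgroup altered.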


Replacing Disks


Disks are mechanical devices; they do go bad occasionally and you have to replace them. Sometimes you replace cheaper and slower disks with faster ones (or even faster ones with slower, cheaper ones for Information Lifecycle Management). ASM allows you to do this online, without affecting the application in any way. In previous versions you had to add the new disk and drop the old disk from the diskgroup in one operation. Although it was one command, it rebalanced the Allocation Units twice: once during the drop and once during the addition. Not only did the operation cause more I/O, it was slow as well.

In Oracle Database 12c ASM you now have replace-disk functionality, so the add-and-drop approach is no longer required. Before replacing, you take the disk offline and then replace it with the new one, as shown below. This is done in SQL while connected as SYSASM:


 SQL> alter diskgroup data2 offline disk DATA2_0101;  
Diskgroup altered.
SQL>  alter diskgroup data2 replace disk DATA2_0101 with '/dev/sdj';
Diskgroup altered.

It is much faster than dropping and adding. Note how the disk name ("DATA2_0101"), not the path, identifies the disk being replaced, while the path ("/dev/sdj") identifies the new disk.
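A quick sanity check after the replacement (a sketch, following the disk name used in the example above) is to confirm that the logical disk name now maps to the new path:

 SQL> select name, path from v$asm_disk where name = 'DATA2_0101';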


Estimate Work


Are we there yet? How long will it take? Sound familiar? Whether it's kids in the back seat or managers breathing down your neck, certain questions are inevitable when you are partway through a long task. Imagine something like adding a disk to a diskgroup. The ASM diskgroup has to be rebalanced by moving AUs from the existing disks to the new disk. Everyone wants to know how long such a task will take. Perhaps you want to fill that information in a change ticket. Or perhaps you want to assign a power limit based on the work to be done.

Well, don't guess; Oracle Database 12c ASM allows you to estimate the potential work without actually performing it. Here is how you estimate the work for the disk add:

 SQL> explain work for alter diskgroup data add disk '/dev/sdi';
Explained.  

SQL> select est_work from v$asm_estimate;
EST_WORK
----------
     29298

The output shows how many AUs need to be moved. This gives you a fair idea of the time required (assuming you have some reference from the past) and helps you set the appropriate power limit.
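You can also tag the estimate with a statement ID so that several estimates can be told apart in v$asm_estimate; a sketch (the statement ID is arbitrary):

 SQL> explain work set statement_id = 'add_sdi' for alter diskgroup data add disk '/dev/sdi';
Explained.

 SQL> select est_work from v$asm_estimate where statement_id = 'add_sdi';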


ASM Disk Scrubbing


When you use ASM mirroring (normal or high redundancy), there is more than one copy of each allocation unit (AU). The AU and its copies should be in sync. Sometimes logical corruption creeps in, resulting in higher response times from the disk affected by the corruption. In Oracle Database 12c ASM you can use a new command called SCRUB to weed out the logical corruption. This command repairs the corruption by reading the data from the mirror copies. Here is how you would repair the diskgroup DATA:

 SQL> alter diskgroup data scrub repair;
Diskgroup altered.


Again, as with the previously described operations involving large movements of data between disks, you can control how many resources this operation consumes by using a clause called POWER. However, instead of a number, this clause expects one of the values LOW, HIGH, MAX or AUTO. A power of MAX consumes the most resources to complete the operation faster, but may affect other operations in the system. Here is how:

 SQL> alter diskgroup data scrub power max;


The power value of AUTO lets ASM choose the best power value depending on the system resources available; this is also the default. If the I/O load on the system is very high, Oracle does not perform the scrubbing operation, since it would just make the I/O response even worse. To force scrubbing even under those circumstances, use the FORCE clause:

 SQL> alter diskgroup data scrub repair power max force;


But scrubbing is not just for the entire diskgroup; you may choose to scrub a single disk as well. This is helpful if you want to break up the activity and work on one disk at a time. Here is how you do it for a disk:

 SQL> alter diskgroup data scrub disk data_0000 repair power max force;
Diskgroup altered.


The good news does not stop there. You can even repair a specific file. This is particularly useful when you want to make sure important files, such as the system or sysaux datafiles and vital application-related files, are scrubbed first.

 SQL> alter diskgroup data scrub file '+DATA/CONA/DATAFILE/USERS.271.824035767' repair;
Diskgroup altered.


What if you want to merely check for the presence of logical corruption and not actually fix it? Fair question. Just omit the keyword REPAIR from the command.

 SQL> alter diskgroup data scrub file '+DATA/CONA/DATAFILE/USERS.271.824035767';


This will report the logical corruption but not fix it.

More to Consider


The maximum number of diskgroups you can now create is 511, up to a total size of 32 PB. That should be enough for most if not all databases.

A little-known enhancement makes a huge difference in availability. When you add or drop a disk from a diskgroup, the allocation units need to be shuffled among the remaining disks. This is called rebalancing. The act itself is not new, but the order is: in Oracle Database 12c ASM, critical files such as control files and redo log files are rebalanced first, before the other files. This enhances the reliability of the database. While on the topic of rebalancing, in Exadata that action is offloaded to the storage cells, which makes the operation even faster.

In earlier versions, rebalancing was serial. If you issued two rebalance operations, only one could start in a single instance and the other was queued, which lengthened the time required for rebalancing. In a cluster, you could kick off another rebalance operation in a different instance, but not in the same instance. In this version, that limitation is removed: you can kick off multiple rebalance operations that run concurrently in the same instance. This shortens the time it takes to rebalance, especially if you have spare I/O capacity available.
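For instance, the following two commands, issued from the same instance, now kick off rebalances that proceed concurrently (the diskgroup names and power values are illustrative; without the WAIT clause each command returns immediately and the rebalance continues in the background):

 SQL> alter diskgroup data rebalance power 64;
Diskgroup altered.

 SQL> alter diskgroup fra rebalance power 32;
Diskgroup altered.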

Another big-impact item is the way ASM in Oracle Database 12c reads allocation units from mirrored ASM diskgroups. In diskgroups with normal redundancy, each AU has a copy stored on a different member disk. When an AU is written, Oracle writes to the primary disk and then creates the copy on the other disk; until both operations are complete, the write is not complete. While ASM's write behavior is similar to other mirroring products, in previous versions ASM differed in how it read. Other mirroring products distribute reads among the mirrored disks; ASM did not. It always read from the primary disk, except when the primary disk was offline. Since the primary disk was always read, the load on the disks was not uniform.

In Oracle Database 12c ASM, reads can come from the mirror copy in addition to the primary. ASM determines at runtime the least loaded member disk (the disk with fewer reads) in the normal redundancy diskgroup and reads from there, not just from the primary member disk. This allows a more even distribution of load across all the disks of the diskgroup.