
Wednesday, January 28, 2015

Switchover and Failover in Oracle 11g Standby Databases

Data Guard uses two terms when cutting over to the standby server: switchover, which is a planned event, and failover, which is an unplanned event.



1.) Switchover: Switchover is a planned event. It is ideal when we want to upgrade the primary database, change the storage or hardware configuration (add memory, CPU, networking), or even upgrade the configuration to Oracle RAC.

What happens during a switchover is the following:


1.) Notifies the primary database that a switchover is about to occur
2.) Disconnects all users from the primary database
3.) Generates a special redo record that signals the End of Redo (EOR)
4.) Converts the primary database into a standby database
5.) Once the standby database applies the final EOR record, guaranteeing that no data has been lost, converts the standby database into the primary database


The new standby database (the old primary) then starts to receive redo and continues to apply it until we switch back again. It is important to remember that both databases receive the EOR record, so both know the next redo that will be received. Although users can stay connected to the primary database while the switchover occurs (which generally takes about 60 seconds), I personally schedule a small outage just to be on the safe side, in case things don't go as smoothly as hoped.
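As a minimal sketch of the commands involved (assuming a physical standby driven by hand from SQL*Plus; the Data Guard broker can perform the same steps):

On the primary :
SQL> alter database commit to switchover to physical standby with session shutdown ;
SQL> shutdown immediate
SQL> startup mount

On the standby :
SQL> alter database commit to switchover to primary with session shutdown ;
SQL> alter database open ;

On the new standby (the old primary), restart redo apply :
SQL> alter database recover managed standby database using current logfile disconnect from session ;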


We can even switch over from a Linux database to a Windows database, or from a 64-bit to a 32-bit database, which is great if we want to migrate to a different O/S or 32/64-bit architecture. Our rollback option is also very easy: simply switch back if it did not work.

2.) Failover: Failover is an unplanned event. This is the case where the EOR record was never written by the primary database; the standby database processes whatever redo it has received and then waits. Data loss now depends on the protection mode in effect:

  • Maximum Performance - possible chance of data loss
  • Maximum Availability - possible chance of data loss
  • Maximum Protection - no data loss

We have the option to fail over manually or make the whole process automatic. Manual failover gives the DBA maximum control over the whole process; obviously the length of the outage then depends on getting the DBA out of bed to fail over. Otherwise, the Oracle Data Guard Fast-Start Failover feature can detect a problem and fail over automatically for us. The failover process should take between 15 and 25 seconds.
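If we fail over manually on a physical standby, a minimal sketch of the commands is as follows (this assumes terminal recovery of whatever redo the standby has received):

SQL> alter database recover managed standby database finish ;
SQL> alter database commit to switchover to primary with session shutdown ;
SQL> alter database open ;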

Which Role Transition Operation Should I Use?



When faced with the decision of which role transition is best for a given situation, we should always choose the one that most reduces downtime and has the least potential for data loss. We also need to consider how the change will affect any other standby databases in the configuration. We should consider the following when deciding which operation to use:

  • What is the current state of the primary database just before the transition? Is it available?
  • What is the state of the selected standby database to be used in the role transition at the time of transition?
  • Is the standby database configured as a physical or logical standby database?




One key point to consider: if it would be faster to repair the primary database (after a failure, or for a simple planned hardware/software upgrade), the most efficient method is to perform those tasks and bring the primary database back up as quickly as possible, without performing any role transition at all. This imposes less risk on the system and does not require any client software to be reconfigured.

Another consideration involves a Data Guard configuration that includes a logical standby database. A switchover operation can be performed using either a physical or a logical standby database. Take note, however, of the following issues you may run into with mixed physical and logical standby configurations. If the configuration includes a primary, a physical standby, and a logical standby, and a switchover is performed to the logical standby, the physical standby will no longer be part of the configuration and must be rebuilt. In the same scenario, if the switchover is performed to the physical standby, the logical standby remains in the Data Guard configuration and does not need to be rebuilt. Obviously, a physical standby is a better switchover candidate than a logical standby when multiple standby types exist in a given configuration.


Finally, we come to the conclusion that the order in which to set up Data Guard is the following:

  • The primary database is up and running
  • Create a standby database
  • Setup the redo transport rules
  • Create the standby redo log (SRL) files
  • Execute one of the following:

SQL> alter database set standby database to maximize performance ;   -- (default)
SQL> alter database set standby database to maximize availability ;
SQL> alter database set standby database to maximize protection ;
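
We can verify the mode actually in effect with a quick query against v$database:

SQL> select protection_mode, protection_level from v$database ;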



Tuesday, January 27, 2015

How to Reduce DB File Sequential Read Wait


The db file sequential read wait event occurs when we are trying to access data via an index and Oracle is waiting for the read of an index block from disk into the buffer cache to complete. A sequential read is a single-block read, and single-block I/Os are usually the result of using indexes. Rarely, full table scan calls can get truncated to a single-block call due to extent boundaries, or because buffers are already present in the buffer cache. db file sequential read waits may also appear when undo blocks are read from disk in order to provide a consistent get (rarely).

The actual object being waited on can be determined from the P1, P2, and P3 info in v$session_wait. A sequential read is usually a single-block read, although it is possible to see sequential reads for more than one block (see P3). This wait may also be seen for reads from datafile headers (P2 indicates a file header read). P1, P2, and P3 give the absolute file number, the block being read, and the number of blocks (i.e., P3 should normally be 1), respectively.
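As a sketch, the wait parameters can be pulled from v$session_wait and mapped to a segment; substitute the P1/P2 values from the first query into the second (dba_extents can be slow to scan on large databases):

SQL> select sid, event, p1 file#, p2 block#, p3 blocks
  2  from v$session_wait
  3  where event = 'db file sequential read' ;

SQL> select owner, segment_name, segment_type
  2  from dba_extents
  3  where file_id = &file_no
  4  and &block_no between block_id and block_id + blocks - 1 ;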

Block reads are fairly inevitable, so the aim should be to minimise unnecessary I/O. This is best achieved by good application design and efficient execution plans; changes to execution plans can yield orders-of-magnitude changes in performance. To reduce this wait event, consider the following points.

1.) Tune Oracle - tuning SQL statements to reduce unnecessary I/O requests is the only guaranteed way to reduce "db file sequential read" wait time (see the query sketch after this list).
2.) Tune physical devices - distribute (stripe) the data across different disks to spread the I/O. Logical distribution is useless; "physical" I/O performance is governed only by the independence of the devices.
3.) Faster disks - buy faster disks so that the unavoidable I/O requests complete sooner.
4.) Increase db_block_buffers - a larger buffer cache can (not will, "might") help.
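
For point 1, a simple sketch is to rank the SQL currently in the shared pool by physical reads (the top-10 cutoff is arbitrary):

SQL> select * from
  2  ( select sql_id, disk_reads, executions, sql_text
  3    from v$sql
  4    order by disk_reads desc )
  5  where rownum <= 10 ;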

Friday, January 23, 2015

How to Exclude a Tablespace from RMAN Backup?

Sometimes we may want to omit a specified tablespace from the regular backup schedule: perhaps the data in the tablespace never changes, the tablespace contains test data only, or we are cloning the database from an RMAN backup and do not need all the schemas. In such cases we can either change our backup strategy or skip certain tablespaces in the database.

To handle this, we can configure the EXCLUDE option to exclude a specified tablespace from the Backup Database command. (We can also skip a tablespace at backup time, but only when the tablespace is offline or read-only.) The exclusion condition applies to any datafiles that we add to this tablespace in the future. Below are the steps to configure the EXCLUDE option:

C:\> rman  target  /
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Dec 28 18:58:05 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: NOIDA (DBID=1523131116)


RMAN> configure exclude for tablespace "EXAMPLE" ;
Tablespace EXAMPLE will be excluded from future whole database backups
new RMAN configuration parameters are successfully stored

To check this parameter, use the command below:

RMAN> show exclude ;
RMAN configuration parameters for database with db_unique_name NOIDA are:
CONFIGURE EXCLUDE FOR TABLESPACE 'EXAMPLE';


If  "exclude" option is configured ,  even then we can backup the excluded tablespsace by explicitly specifying them in a Backup command or by specifying the NOEXCLUDE option on a "Backup Database" command as 

RMAN> backup database noexclude ;
or
RMAN> backup tablespace example ;
Starting backup at 29-DEC-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=C:\APP\NEERAJS\ORADATA\NOIDA\EXAMPLE01.DBF
channel ORA_DISK_1: starting piece 1 at 29-DEC-11
channel ORA_DISK_1: finished piece 1 at 29-DEC-11
piece handle=E:\RMAN_BACKUP\0UMVEC3G_1_1 tag=TAG20111229T150832 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 29-DEC-11

We can clear the exclusion for tablespace EXAMPLE as:

RMAN> configure exclude for tablespace example clear ;
Tablespace EXAMPLE will be included in future whole database backups
old RMAN configuration parameters are successfully deleted


RMAN> show exclude ;
RMAN configuration parameters for database with db_unique_name NOIDA are :
RMAN configuration has no stored or default parameters


In order to skip READONLY and OFFLINE tablespaces we can issue the backup database command as:
RMAN>backup database skip readonly, skip offline ;

Cross-Platform Transportable Tablespaces in Oracle

Oracle transportable tablespaces are the fastest way to move large volumes of data between two Oracle databases. Starting with Oracle Database 10g, we can transport tablespaces across platforms. Furthermore, like import and export, transportable tablespaces provide a mechanism for transporting metadata in addition to data. Below are the steps to perform a cross-platform tablespace transport.

Prerequisites :
There are a few points which have to be considered before performing a transportable tablespace operation. They are the following:

1.) The source and target database must use the same character set and national character set .

2.) We cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, we can rename either the tablespace to be transported or the destination tablespace before the transport operation.

3.) Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.

4.) If the owners of the tablespace objects do not exist in the target database, the usernames need to be created manually before starting the transportable tablespace import. If we use spatial indexes, then:
  • be aware that TTS across different endian platforms is not supported for spatial indexes in 10gR1 and 10gR2; this limitation was lifted in 11g.
  • specific Spatial packages must be run before exporting and after transporting; please see the Oracle Spatial documentation.
5.) We cannot transport the SYSTEM  tablespace or objects owned by the user SYS . 

6.) Opaque Types (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application  must address any endianness issues after these types are moved to the new  platform .

7.) Floating-Point Numbers BINARY_FLOAT and BINARY_DOUBLE types are transportable using Data Pump but not the original export utility, EXP.

If we are migrating a database, make sure there are no invalid objects in the source database before taking the export. Take a full no-rows export to recreate objects that won't be transported with TTS, or recompile the invalid objects by running the utlrp.sql script. Keep the source database viable until we have determined that all objects are in the target database and there are no issues (i.e., the target database has been thoroughly checked out).
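
As a quick sketch of those checks (utlrp.sql ships under $ORACLE_HOME/rdbms/admin, reachable via the ? shorthand in SQL*Plus):

SQL> select count(*) from dba_objects where status = 'INVALID' ;
SQL> @?/rdbms/admin/utlrp.sql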

Here we perform a demo of a cross-platform transportable tablespace. The details of the source and target are as below:

Source :
OS                = Red Hat Linux (32-bit)
Oracle Version    = Oracle 10.2.0
Database Name     = comcast
Tablespace        = TES_TBS (which is to be transported)

Target :
OS                = Microsoft Windows (32-bit)
Oracle Version    = Oracle 11.2.0
Database Name     = noida

Step 1 : Determine Platform Support and Endianness (on source) :
This step is only necessary if we are transporting the tablespace set to a platform different from the source platform. Determine whether cross-platform tablespace transport is supported for both the source and target platforms, and also determine the endianness of each platform . We can query the v$transportable_platform view to see the platforms that are supported, and to determine each platform's endian format (byte ordering). 
Now check the endian formats with the query below:
Source endians :

SQL> select  d.platform_name, endian_format  from   v$transportable_platform  tp ,   v$database d   where  tp.PLATFORM_NAME  =  d.PLATFORM_NAME ;
PLATFORM_NAME                   ENDIAN_FORMAT
-----------------------                 ----------------------
Linux IA (32-bit)                         Little

Target endians  :

SQL>  select d.platform_name,endian_format from v$transportable_platform tp , v$database d where 
tp.PLATFORM_NAME = d.PLATFORM_NAME;
PLATFORM_NAME                            ENDIAN_FORMAT
-----------------------------                   ---------------------
Microsoft Windows IA (32-bit)             Little

Here we notice that the source and target have the same endian format. If the endian formats differ, follow step 2; otherwise skip it and move on to step 3.

Step 2 : Different endian formats (skip step 2 if the endian formats are the same) :
If the endian formats are different, then a conversion is necessary when transporting the tablespace. For example, run the commands below to convert the tablespace from the source (Linux 64-bit) to the target (Solaris 64-bit) platform.

i.> Using CONVERT TABLESPACE ... TO PLATFORM on the source host

RMAN> convert tablespace test_user_tbs
2> to platform 'Solaris[tm] OE (64-bit)'
3> format '/oradata/rman_backups/%N_%f' ;

The datafile "test_user_tbs01.dbf" is not touched; a new file is created for the Solaris platform under "/oradata/rman_backups", and we copy that file to the target platform. Alternatively, we can use RMAN's CONVERT DATAFILE command on the destination host to convert the datafiles to the destination format; in the example below the converted datafiles are stored in "/solaris/oradata".

ii.>  Using CONVERT DATAFILE... FROM PLATFORM on Destination host

RMAN> convert datafile 'test_user_tbs01.dbf'
2> from platform 'Linux IA (64-bit)'
3> db_file_name_convert '/linux/oradata/' , '/solaris/oradata' ;

Let's have a demo of the transportable tablespace . 

Step 3 : Create a tablespace and user :
Here we create the tablespace and user on the source database; this is the tablespace that will be transported.

SQL> create tablespace tes_tbs datafile '/home/oracle/oradata/comcast/tes_tbs.dbf' size 100m ;
Tablespace created

SQL> create user tes identified by tes 
  2  default tablespace tes_tbs 
  3  quota unlimited on tes_tbs ; 
User created.

SQL> grant resource,connect to tes ;
Grant succeeded.

SQL> create table tes.t1 as select * from hr.employees ; 
Table created.

SQL> create index TES.IND_EMP_T1 on tes.t1(employee_id) tablespace sysaux ;
Index created.

SQL> create table tes.t2 as select * from dba_extents; 
Table created.

Step 4 : Check for a Self-Contained Set of Tablespaces :
There may be logical or physical dependencies between objects inside the transportable set and objects outside of it. We can only transport a set of tablespaces that is self-contained. Some examples of self-contained tablespace violations are:
  • An index inside the set of tablespaces is for a table outside of the set of tablespaces.
  • A partitioned table is partially contained in the set of tablespaces.
  • A referential integrity constraint points to a table across a set boundary.


The statement below can be used to determine whether the tablespace set is self-contained or not:
SQL> conn sys as sysdba
Enter password: 
Connected.

SQL> EXECUTE dbms_tts.transport_set_check('TES_TBS', TRUE, TRUE) ;
PL/SQL procedure successfully completed.

The DBMS_TTS package checks whether the transportable set is self-contained. All violations are inserted into a temporary table that can be selected from the transport_set_violations view.

SQL> select * from   transport_set_violations;
VIOLATIONS
----------------------------------------------------------------------------------------------------
Index TES.IND_EMP_T1 in tablespace SYSAUX points to table TES.T1 in tablespace TES_TBS

Since there is a violation, we manually move the object into the target tablespace. Here we can rebuild the index and move it into tablespace TES_TBS as:

SQL> alter  index  TES.IND_EMP_T1  rebuild  tablespace  tes_tbs ;
Index altered.

Again, run the dbms_tts package to check for violations.

SQL> EXECUTE dbms_tts.transport_set_check('TES_TBS', TRUE, TRUE);
PL/SQL procedure successfully completed.

SQL> select * from   transport_set_violations;
no rows selected

Now there are no violations. If any dependent object remained, we would get "ORA-29341: The transportable set is not self-contained" while exporting the tablespace.

Step 5 : Generate a Transportable Tablespace Set :
After ensuring that we have a self-contained set of tablespaces to transport, generate a transportable tablespace set by performing the following actions. First, make the tablespaces read-only.

SQL> alter tablespace tes_tbs read only ;
Tablespace altered.

Now export the metadata of tablespace "tes_tbs" as:

$ expdp system/xxxx  dumpfile=tes_tbs_exp.dmp  transport_tablespaces=tes_tbs  transport_full_check=y  logfile=tes_tbs_export.log

Export: Release 10.2.0.1.0 - Production on Friday, 20 January, 2012 14:03:53
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** dumpfile=tes_tbs_exp.dmp transport_tablespaces=tes_tbs transport_full_check=y logfile=tes_tbs_export.log 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /home/oracle/product/10.2.0/db_1/rdbms/log/tes_tbs_exp.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 14:07:44

Step 6 : Copy the datafile and dumpfile to the target database
Transport both the datafiles and the export dumpfile of the tablespace to a location that is accessible to the target database.

Step 7 : Import the dumpfile into the target database
Before importing the dumpfile, make sure that a tablespace of the same name does not already exist in the target database. Also ensure that the owning user (whose default tablespace is TES_TBS) exists there; alternatively, we can use the REMAP_SCHEMA parameter to import the objects into some other schema.

C:\>impdp system/xxxx  dumpfile=tes_tbs_exp.dmp TRANSPORT_DATAFILES='C:\app\Neerajs\oradata\noida\tes_tbs.dbf' logfile=tes_imp.log

Import: Release 11.2.0.1.0 - Production on Fri Jan 20 14:38:14 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** dumpfile=tes_tbs_exp.dmp  TRANSPORT_DATAFILES='C:\app\Neerajs\oradata\noida\tes_tbs.dbf' logfile=tes_imp.log

Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 14:38:48

Step 8 : Connect to target Database  

C:\>sqlplus sys/xxxx@noida as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 20 14:40:04 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter tablespace tes_tbs read write ;
Tablespace altered.

SQL> conn tes/tes
Connected.

SQL> select  *  from tab;

TNAME       TABTYPE     CLUSTERID
----------     -----------     --------------
T1                TABLE
T2                TABLE

Here we find that the tablespace has been successfully transported with all its tables. The transportable tablespace feature is useful in a number of scenarios, including:
  • Exporting and importing partitions in data warehousing tables 
  • Copying multiple read-only versions of a tablespace on multiple databases
  • Performing tablespace point-in-time-recovery (TSPITR) 
  • Migrating databases among RDBMS versions and OS platforms

Thursday, January 22, 2015

Difference Between Unique Indexes and Unique Constraints

There is a very common confusion that whenever we create a unique key constraint or a primary key, a corresponding index is created. A primary key or unique key usually creates a unique index, but this is not always true. Let's have a look.

SQL> create table T1 (id number ) ;
Table created.

SQL> alter table T1 add constraint  T_ID_IDX  unique(id) ;
Table altered.

SQL> select index_name,table_name,uniqueness from dba_indexes where table_name='T1';
INDEX_NAME TABLE_NAME UNIQUENES
---------- ---------- ---------
T_ID_IDX    T1        UNIQUE

SQL> select constraint_name,constraint_type ,table_name from dba_constraints where table_name='T1' and owner='HR' ;
CONSTRAINT_NAME  C   TABLE_NAME
--------------- ---- ----------
T_ID_IDX         U     T1

Here we see that when we created the unique constraint, a unique index was created along with it. Now have another look:

SQL> create table T2(id number);
Table created.

SQL> create unique index T2_id_idx on T2(id);
Index created.

SQL> select index_name,table_name,uniqueness from dba_indexes where table_name='T2'  ;
INDEX_NAME TABLE_NAME UNIQUENES
---------- ---------- ---------------
T2_ID_IDX    T2        UNIQUE

SQL>  select constraint_name,constraint_type ,table_name from dba_constraints where table_name='T2' and owner='HR' ;
no rows selected

SQL> alter table T2 add constraint T2_ID_IDX unique(id);
Table altered.

Now we might expect two indexes, i.e., one from the explicit unique index and another from the unique constraint. Let's look at the queries below:

SQL> select constraint_name,constraint_type ,table_name from dba_constraints where table_name='T2' and owner='HR' ;
CONSTRAINT_NAME  C   TABLE_NAME
--------------- ---- ----------
T2_ID_IDX        U     T2

SQL> drop index T2_ID_IDX;
drop index T2_ID_IDX
           *
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key

The queries above show only one index: the constraint simply reused the existing index, as the ORA-02429 error confirms. Hence, from this demo we can say that a unique constraint does not necessarily create an index, nor does it necessarily create a UNIQUE index.

If we want a unique index in place, it is suggested that we explicitly create it using CREATE UNIQUE INDEX. A primary key or unique constraint is not guaranteed to create a new index, nor is any index it creates guaranteed to be a unique index. Therefore, if we want a unique index for query performance reasons, we should explicitly create one.

A question may arise: why do we need a unique constraint when we already have a unique index?
The reasons are:

1.) The difference between a unique index and a unique constraint starts with the fact that the constraint is a rule, while the index is a database object used to improve the performance of retrieving rows from a table. The index is a physical object that takes space and is created with a DDL command.

2.) We can use either a unique OR a non-unique index to support a unique constraint. Constraints are metadata, and more metadata is good. We can define a foreign key against a unique constraint, but not against a unique index (see the sketch after this list).

3.) A constraint has a different meaning from an index. It gives the optimiser more information and allows us to have foreign keys on the column, whereas a unique index alone doesn't. But most importantly, it is the right way to do it.
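
As a sketch of points 2 and 3 (the table names here are hypothetical), a non-unique index can back a unique constraint, and a foreign key can then reference the constrained column:

SQL> create table T3 (id number) ;
SQL> create index T3_ID_IDX on T3(id) ;
SQL> alter table T3 add constraint T3_UK unique (id) using index T3_ID_IDX ;
SQL> create table T4 (t3_id number,
  2  constraint FK_T4_T3 foreign key (t3_id) references T3(id)) ;

T3_ID_IDX is an ordinary non-unique index, yet it enforces the unique constraint T3_UK, and FK_T4_T3 can reference T3(id) only because the constraint exists; a unique index alone would raise ORA-02270.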

User Managed Hot Backups in Oracle

A cold backup has the somewhat bad side effect of wiping out our shared pool and buffer cache, and it prevents our users from logging in to do work. Our database is like a car: it runs better when it is warmed up. If we cold-start it, we should be prepared for rough running on restart as the shared pool, buffer cache, and so on are rebuilt. I would never pick cold over hot given the chance; no benefit, only downsides (according to Tom Kyte). The only kind of backup we do on our production systems here is hot.


There are two ways to perform Oracle backup and recovery : 

1.) Recovery Manager (RMAN) : an Oracle utility that can back up, restore, and recover database files. It is a feature of the Oracle database server and does not require separate installation.
2.) User-managed backup and recovery : we use operating system commands for backups and SQL*Plus for recovery. This method is called user-managed backup and recovery and is fully supported by Oracle, although the use of RMAN is highly recommended because it is more robust and greatly simplifies administration.

There are basically two types of backup, as follows:


1.) Consistent backup : also known as a cold backup. A consistent backup is one in which the files being backed up contain all changes up to the same system change number (SCN), meaning all the data in the backup comes from the same point in time.
2.) Inconsistent backup : also known as a hot backup. An inconsistent backup is one in which the files being backed up do not contain all the changes made at all SCNs. This occurs because the datafiles are modified while the backup is being taken.

Some DBAs prefer Oracle user-managed backups: they put the database into backup mode before backing it up and take it out of backup mode afterwards. If we are going to perform user-managed backups, we must back up all of the following files:
  • Datafiles
  • Control files
  • Online redo logs (if performing a cold backup)
  • The parameter file (not mandatory )
  • Archived redo logs
  • Password file if used

A hot backup requires quite a bit more work than a cold backup. Below are the steps required for a hot backup.


Step 1 : Check the log mode of the database
For a hot backup, the database must be in ARCHIVELOG mode.
SQL> SELECT LOG_MODE FROM V$DATABASE ;
LOG_MODE
---------------
ARCHIVELOG

Step 2 : Put the database into backup mode
If we are using Oracle 10gR2 or later, we can put the entire database into backup mode; on releases prior to 10gR2 we have to put each tablespace into backup mode individually. In my case, I am on 11gR2.
SQL> alter database begin backup ; 
Database altered.
On releases prior to 10gR2, generate the per-tablespace commands as below:
SQL> set echo off
SQL> set heading off
SQL> set feedback off
SQL> set termout off
SQL> spool backmode.sql
SQL> select 'alter tablespace '||name||' begin backup ;' "Tablespace in backup mode" from v$tablespace;
SQL> spool off
SQL> @backmode.sql


Step 3 : Back up all the datafiles
Copy all the datafiles to the desired backup location using operating system commands (a copy sketch follows the v$backup output below). Meanwhile, we can verify the status of the datafiles using the v$backup view.
SQL> select  *  from  v$backup ; 
     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
         1 ACTIVE                3967181 03-APR-12
         2 ACTIVE                3967187 03-APR-12
         3 ACTIVE                3967193 03-APR-12
         4 ACTIVE                3967199 03-APR-12
         5 ACTIVE                3967205 03-APR-12
         6 ACTIVE                3967211 03-APR-12
         7 ACTIVE                3967217 03-APR-12
         8 ACTIVE                3967223 03-APR-12
         9 ACTIVE                3967229 03-APR-12
The column STATUS=ACTIVE shows that the datafiles are in backup mode.
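
The copy itself is just an OS-level file copy; for example, on Windows (the paths here are illustrative):

SQL> host copy C:\app\oradata\noida\*.DBF E:\hot_backup\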


Step 4 : Take the database out of backup mode
If we are using Oracle 10gR2 or later, we use the command below to take the database out of backup mode:
SQL> alter database end backup ; 
Database Altered 
On releases prior to 10gR2, we generate the commands as before:
SQL> set echo off
SQL> set heading off
SQL> set feedback off
SQL> set termout off
SQL> spool endmode.sql
SQL> select 'alter tablespace '||name||' end backup ;' "Tablespace out of backup mode" from v$tablespace ;
SQL> spool off
SQL> @endmode.sql


Step 5 : Switch the redo log file and back up the archived logs
After taking the database out of hot backup mode we must switch the logfile (preferably more than once) and back up the archived logs that were generated. We may back up archived logs while the database is in backup mode, but we must also back up the first archived log(s) produced after the END BACKUP. The best way to do both is to run the SQL command ALTER SYSTEM ARCHIVE LOG CURRENT: this switches the logfile but does not return the prompt until the previous redo log has been archived. We could run ALTER SYSTEM SWITCH LOGFILE instead, but then we cannot be sure that the latest redo log has been archived before we move on to the next step.
SQL> alter system archive log current ; 
System altered.
SQL> alter system archive log current ;
System altered.
Now backup the archivelogs to the backup location .


Step 6 : Back up the control file
Now we can back up the control file both as a binary file and in human-readable form. We should use both methods; either one may come in handy at different times. The commands are as follows.
(Human readable)
SQL> alter database backup controlfile to trace ;
Database altered.
or
SQL> alter database backup controlfile to trace as '<backup location>' ;
Database altered.
(Binary format)
SQL> alter database backup controlfile to '<backup location>' ;
Database altered.


Step 7 : Back up the password file and spfile
We can back up the password file and spfile, though it is not mandatory.
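
For the spfile, a human-readable copy can be taken from SQL*Plus (the target path is a placeholder, as above); the password file itself is just an OS-level copy of the file kept under the Oracle home:

SQL> create pfile='<backup location>\init<SID>.ora' from spfile ;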


Some Points Worth Remembering 
  • We need to back up all the archived log files; these files are very important for recovery.
  • It is advisable to back up all tablespaces (except read-only tablespaces), else complete recovery is not possible.
  • Backups of the online redo log files are not required; the online log file has the end-of-backup marker and would cause corruption if used in recovery.
  • It is preferable to start hot backups at a low-activity time.
  • When hot backups are in progress we "cannot" shut down the database in NORMAL or IMMEDIATE mode (and it is also not desirable to ABORT).

Tuesday, January 20, 2015

Understanding Oracle password hashing

This is an explanation of how password hashing works on Oracle databases in 9i and 10g. For the explanation, two users, user1 and user2, are created in a 10g and a 9i database and their hashed passwords are displayed. The test is first done on the 10g database.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Productio
NLSRTL Version 10.2.0.3.0 - Production

SQL> select username,password from dba_users where username like 'USER%';

USERNAME PASSWORD
------------------------------ ------------------------------
USER1 BBE7786A584F9103
USER2 1718E5DBB8F89784

Both user1 and user2 initially have their own usernames as their passwords, as shown below.

SQL> conn user1/user1
Connected.
SQL>
SQL> conn user2/user2
Connected.

The password of user2 is now changed to 'user1', just to see how the password hashing mechanism works.

SQL> conn /as sysdba
Connected.
SQL>
SQL> alter user user2 identified by user1;

User altered.

User2's old password does not work anymore, but the new password is in effect now as shown below.

SQL> conn user2/user2
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
SQL> conn user2/user1
Connected.
SQL>

Querying the passwords of user1 and user2 this time shows different values for the two users, in spite of the fact that both users now have the same password.

SQL> conn /as sysdba
Connected.
SQL>
SQL> select username,password from dba_users where username like 'USER%';
USERNAME PASSWORD
------------------------------ ------------------------------
USER1 BBE7786A584F9103
USER2 83566E16913442F3

This shows that Oracle does not hash just the password; it also factors something else into the hash, and from the behaviour above that other factor is evidently the username.
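
A classic way to confirm this (the usernames below are throwaway examples) is to pick two username/password pairs whose concatenations are identical; on 9i/10g they produce the same hash, because the algorithm hashes the username and password together:

SQL> create user abcde identified by fgh ;
SQL> create user abcd identified by efgh ;
SQL> select username, password from dba_users where username like 'ABCD%' ;

Both rows show an identical PASSWORD value, since both pairs hash the same string 'ABCDEFGH'.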

The above was tested on a 10g database. The same test performed on a 9i database shows no difference in the password hashing mechanism, as is evident from the output below.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for IBM/AIX RISC System/6000: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production

SQL> create user user1 identified by user1;

User created.

SQL> grant connect,resource to user1;

Grant succeeded.

SQL> create user user2 identified by user1;

User created.

SQL> grant connect,resource to user2;

Grant succeeded.

SQL> select username,password from dba_users where username like '%USER%';

USERNAME PASSWORD
------------------------------ ------------------------------
USER1 BBE7786A584F9103
USER2 83566E16913442F3

SQL> conn user1/user1
Connected.
SQL>
SQL> conn user2/user2
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
SQL>
SQL> conn user2/user1
Connected.
SQL>
SQL> conn /as sysdba
Connected.
SQL>
SQL> alter user user2 identified by user2;
User altered.

SQL> select username,password from dba_users where username like '%USER%';
USERNAME PASSWORD
------------------------------ ------------------------------
USER1 BBE7786A584F9103
USER2 1718E5DBB8F89784   <--- the password here is 'user2', which gives the same hash as in the first query output at the top of this page.