Friday, October 16, 2015

Create standby database using Active Data Guard

Database version: 11.2.0.x (11gR2) using ASM
OS: RedHat Enterprise Linux 6.x

Nowadays, one thing that is indispensable at any (reputable) company that works with IT is a copy of your primary database in a different datacenter/availability zone (cloud).

To do that, you have three options: create your own replication script (which can lead to an enormous amount of problems); use a non-certified replication tool from some other company (which can be dangerous too); or buy an Oracle Data Guard license (which can be expensive).

Any one of those three strategies can present considerable problems.
So, let's suppose that you have an Oracle Data Guard license. Now we can use the broker to control all replication between our primary and our new standby database.

Before starting our tutorial, let me define one simple rule: you have to install the same Oracle software version on your new machine (which is going to host your standby database), and preferably keep the same storage solution (like ASM, but that is just to keep it tidy - it is not mandatory).

Let's assume that we have a single instance primary database with a SID called "prod", that you want to create a single instance standby database with a SID called "dataguard", and that you have 3 diskgroups: DATA, RECO and REDO. DATA will keep our datafiles, spfile, and that kind of thing; RECO will keep our archivelogs; and REDO will keep our redo log files.



1st: The first thing you have to do is create some standby logfile groups (depending on your archive generation, you may have to increase these numbers):


ALTER DATABASE ADD STANDBY LOGFILE GROUP 1 ('+REDO', '+REDO');
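
A common rule of thumb is to create one more standby logfile group than you have online redo log groups, sized the same as your online logs. Also note that standby groups share group numbers with your online groups, so pick numbers above your highest online group. A sketch, assuming three online groups of 512 MB each:

ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('+REDO', '+REDO') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('+REDO', '+REDO') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('+REDO', '+REDO') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('+REDO', '+REDO') SIZE 512M;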


2nd: Back up your primary database using RMAN. There are many ways to do that; for example:

$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Oct 7 14:46:41 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: BIOPROD (DBID=1276674215)
RMAN> backup as compressed backupset database format '$BACKUP_HOME/BKP_%D%M%Y_%I_%s_%U.bkp' include current controlfile;

After that, transfer your backup (and archivelogs) to your standby database machine (to the same directory).
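
If your archivelogs are not already covered by the backup above, you might back them up too before transferring (the format string and destination are illustrative):

RMAN> backup as compressed backupset archivelog all format '$BACKUP_HOME/ARCH_%d_%I_%s_%U.bkp';

$ scp $BACKUP_HOME/*.bkp oracle@dataguard.com.br:/backup/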

3rd: Configure your listener.ora, adding the following settings.


$ vi $ORACLE_HOME/network/admin/listener.ora

SID_LIST_LISTENER =
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=dataguard)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1/)
      (SID_NAME=dataguard))
    (SID_DESC=
      (GLOBAL_DBNAME=dataguard_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1/)
      (SID_NAME=dataguard))
    (SID_DESC=
      (GLOBAL_DBNAME=dataguard_DGB)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1/)
      (SID_NAME=dataguard)))


Change the ORACLE_HOME value to your own. The important thing here is to configure three services: dataguard, dataguard_DGMGRL and dataguard_DGB (they are essential to the broker).
Restart (or reload) your listener.
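
A plain reload is usually enough for the listener to pick up the new static entries:

$ lsnrctl reload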

4th: Configure your tnsnames.ora file on both machines (production and standby environment) with your database settings.


$ vi $ORACLE_HOME/network/admin/tnsnames.ora

dataguard =
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dataguard.com.br)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dataguard)))

prod =
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = prod.com.br)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = prod)))
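
You can test name resolution and connectivity from each machine with tnsping:

$ tnsping dataguard
$ tnsping prod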


5th: At your primary database, set the following parameters:

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='dg_config=(prod,dataguard)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='service="dataguard", LGWR ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=5 reopen=15 db_unique_name="dataguard", valid_for=(all_logfiles,primary_role)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2='DEFER';
ALTER SYSTEM SET DB_FILE_NAME_CONVERT='+DATA/dataguard','+DATA/prod' scope=spfile;
ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='+RECO/dataguard','+RECO/prod' scope=spfile;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
ALTER SYSTEM SET LOG_ARCHIVE_MIN_SUCCEED_DEST=1;


A brief explanation about these parameters:
  - The LOG_ARCHIVE_CONFIG parameter needs to contain the DB_UNIQUE_NAME of both the production and standby databases. In this case, our SIDs and DB_UNIQUE_NAMEs are equal.
  - The LOG_ARCHIVE_DEST_x parameter configures a series of settings for our replication, such as whether synchronization will be asynchronous or synchronous (for more information, see the documentation). The important thing to know here is that the Data Guard broker will synchronize its configuration with this parameter: if we change any property later, it will be reflected in this parameter.
  - The LOG_ARCHIVE_DEST_STATE_x parameter just enables or disables your LOG_ARCHIVE_DEST_x configuration.
  - The DB_FILE_NAME_CONVERT parameter is very important to configure here. Be careful: a wrong conversion can lead to a bunch of problems. We need to configure what the datafile directory source is and what the target will be: in this example, we read datafile names under +DATA/dataguard and convert them to +DATA/prod. It's very important to configure this parameter if we plan to switch over the environments (changing the primary to standby, and the standby to primary role).
  - The LOG_FILE_NAME_CONVERT parameter is like DB_FILE_NAME_CONVERT, except that here we specify the redo log file destinations. To be extra safe, you could also add plain '+DATA' pairs to this parameter (in case you have some redo log files lost in the wrong directories, for example).
  - The STANDBY_FILE_MANAGEMENT parameter, when set to AUTO, makes the standby automatically create (or drop) datafiles as they are created (or dropped) on the primary. Combined with ASM/OMF naming, we don't need to specify the full file path when creating a datafile (like '+DATA/prod/datafile/datafile_01.dbf'); just say it will be created in '+DATA' and Oracle will create a file with a generated id at the right location (like '+DATA/datafile/tablespace_name.123.4567891011').
  - The LOG_ARCHIVE_MIN_SUCCEED_DEST parameter defines the minimum number of destinations that must archive a redo log file successfully before it can be made available for reuse. This prevents you from losing data if something goes wrong. If you configure more than one standby, and consequently configure more LOG_ARCHIVE_DEST_x parameters, consider increasing this number.

Like I said, to better understand these parameters, check the Oracle documentation.
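
Later, once you switch LOG_ARCHIVE_DEST_STATE_2 to ENABLE, a quick way to check the health of each destination on the primary is something like:

SQL> select dest_id, status, error from v$archive_dest where dest_id in (1, 2);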


6th: Create your standby database pfile from your primary database, or create a new pfile with the minimum parameters needed to start it. For example:


$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 7 17:52:26 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> create pfile = '/tmp/dataguard_pfile.ora' from spfile;

...

$ vi /tmp/dataguard_pfile.ora

*.db_name='prod'
*.memory_target=5G
*.processes = 500
*.audit_file_dest='/u01/app/oracle/admin/dataguard/adump'
*.audit_trail ='db'
*.db_block_size=8192
*.db_domain=''
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=100G
*.diagnostic_dest='/u01/app/oracle'
*.open_cursors=300
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
*.control_files = '+DATA/dataguard/control01.ctl', '+REDO/dataguard/control02.ctl'
*.compatible ='11.2.0.4'
*.log_archive_config='dg_config=(prod,dataguard)'
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST valid_for=(ALL_LOGFILES,ALL_ROLES)'
*.log_archive_dest_2='service="prod", LGWR ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=5 reopen=15 db_unique_name="prod", valid_for=(all_logfiles,primary_role)'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='defer'
*.db_file_name_convert='+DATA/prod','+DATA/dataguard'
*.log_file_name_convert='+DATA/prod','+DATA/dataguard','+RECO/prod','+RECO/dataguard','+REDO/prod','+REDO/dataguard'
*.standby_file_management=auto
*.log_archive_min_succeed_dest=1

*.db_unique_name='dataguard'

Remember two things: your db_name MUST be exactly the same as in your production environment; and you need to create the audit_file_dest directory (in this case, /u01/app/oracle/admin/dataguard/adump), otherwise you will receive a warning.
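
On the standby machine, for example:

$ mkdir -p /u01/app/oracle/admin/dataguard/adump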

7th: Move your standby pfile to your standby machine, and then start up the new instance in NOMOUNT mode.


$ export ORACLE_SID=dataguard
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Thu Oct 8 16:47:22 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/home/oracle/dataguard_pfile.ora';
...
SQL> create spfile='+DATA/dataguard/spfiledataguard.ora' from pfile='/home/oracle/dataguard_pfile.ora';
File created.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> startup nomount;



8th: Create the Oracle password file (orapw) for the SYS user in both environments (you can use the same password as prod, just to keep it simple):


$ orapwd file=$ORACLE_HOME/dbs/orapwdataguard password=dataguard_sys_password
$ orapwd file=$ORACLE_HOME/dbs/orapwprod password=prod_sys_password


Do this in both environments (your prod machine will probably already have an orapwprod file, and there is no problem whether you overwrite it or not).
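
Alternatively, just copying the primary password file is the easiest way to guarantee that both files match (hostname and paths are illustrative):

$ scp $ORACLE_HOME/dbs/orapwprod oracle@dataguard.com.br:/u01/app/oracle/product/11.2.0/db_1/dbs/orapwdataguard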

9th: Duplicate (restore) your dataguard database! Use exactly the command below:


$ rman target sys/prod_sys_password@prod auxiliary sys/dataguard_sys_password@dataguard

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Oct 8 17:14:34 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PROD (DBID=12345678910)
connected to auxiliary database: PROD (not mounted)

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK DORECOVER;

Starting Duplicate Db at 08-OCT-15

contents of Memory Script:
{
   set until scn  54045879909;
   restore clone standby controlfile;
}
executing Memory Script

executing command: SET until clause

Starting restore at 08-OCT-15

channel C1: restoring control file
channel C1: copied control file copy
input file name=/backup/controlfile_bkp.bkp
output file name=+DATA/dataguard/control01.ctl
output file name=+REDO/dataguard/control02.ctl
Finished restore at 08-OCT-15

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database
RMAN-05529: WARNING: DB_FILE_NAME_CONVERT resulted in invalid ASM names; names changed to disk group only.

contents of Memory Script:
{
   set until scn  54045879909;
   set newname for tempfile  1 to
 "+data";

...

   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

...

renamed tempfile 1 to +data in control file

...

executing command: SET NEWNAME

...

channel C1: starting datafile backup set restore
channel C1: specifying datafile(s) to restore from backup set
channel C1: restoring datafile 00001 to +data

...

channel C1: reading from backup piece /backup/BKP_06102015_1276674215_98210_t2qj01fp_1_1.bkp

...

Finished restore at 08-OCT-15

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

...

released channel: C1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/14/2015 01:40:41
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-20506: no backup of archived log found
RMAN-06053: unable to perform media recovery because of missing log

RMAN-06025: no backup of archived log for thread 1 with sequence 109545 and starting SCN of 54055457212 found to restore


Do not be afraid: those final RMAN errors are "normal" (unfortunately). They mean that you don't have the necessary archivelog with that SCN to restore, which would be impossible to get anyway (to understand why, read more about archivelogs and how they work).
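
A quick way to see how far the recovery actually got on the (mounted) standby is something like:

SQL> select max(sequence#) from v$archived_log where applied = 'YES';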

10th: When you run the DUPLICATE DATABASE command with the DORECOVER clause, all archivelogs are applied after the restore. But it can fail if something goes wrong. So, just to ensure that at least all the archivelogs generated alongside the backup were applied, let's do it manually.


$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Oct 8 22:51:52 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PROD (DBID=1276674215, not open)

RMAN> catalog start with '/backup/archive';

Starting implicit crosscheck backup at 14-OCT-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=3395 device type=DISK
Crosschecked 173 objects
Finished implicit crosscheck backup at 14-OCT-15

Starting implicit crosscheck copy at 14-OCT-15
using channel ORA_DISK_1
Crosschecked 12 objects
Finished implicit crosscheck copy at 14-OCT-15

searching for all files in the recovery area
cataloging files...
no files cataloged

searching for all files that match the pattern /backup/archive

List of Files Unknown to the Database
=====================================
File Name: /backup/archive/ARCH_PROD_1276674215_99307_1_vbqjbmqr.BKP

...

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...

cataloging done

List of Cataloged Files
=======================
File Name: /backup/archive/ARCH_BIOPROD_1276674215_99307_1_vbqjbmqr.BKP

...

RMAN> recover database;

Starting recover at 14-OCT-15
using channel ORA_DISK_1

starting media recovery
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
channel ORA_DISK_1: reading from backup piece /backup/archive/ARCH_BIOPROD_1276674215_99508_1_5kqjdlul.BKP
channel ORA_DISK_1: piece handle=/backup/archive/ARCH_BIOPROD_1276674215_99508_1_5kqjdlul.BKP tag=BACKUP_ARCHIVELOG_BIONEXO

channel ORA_DISK_1: restored backup piece 1

...

unable to find archived log
archived log thread=1 sequence=109579
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 10/15/2015 21:20:06

RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 109579 and starting SCN of 54055767606


11th: Now we need to create our broker configuration. In your primary environment, run the dgmgrl command to access the broker and create a new configuration with the following commands:

$ dgmgrl /
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> create configuration MY_CONFIG as primary database is PROD connect identifier is PROD;

Configuration "MY_CONFIG" created with primary database "PROD"

DGMGRL> add database DATAGUARD as connect identifier is DATAGUARD maintained as physical;
Database "DATAGUARD" added
DGMGRL> show database DATAGUARD;

Database - DATAGUARD

  Role:            PHYSICAL STANDBY
  Intended State:  OFFLINE
  Transport Lag:   (unknown)
  Apply Lag:       (unknown)
  Apply Rate:      (unknown)
  Real Time Query: OFF
  Instance(s):
    DATAGUARD

Database Status:
DISABLED

DGMGRL> enable database DATAGUARD;
Enabled.
DGMGRL> edit database PROD set state='transport-on';
Succeeded.
DGMGRL> edit database DATAGUARD set state='apply-on';
Succeeded.
DGMGRL> show database verbose DATAGUARD;

Database - DATAGUARD

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   5 days 16 hours 32 minutes 12 seconds (computed 1 second ago)
  Apply Lag:       5 days 20 hours 12 minutes 3 seconds (computed 1 second ago)
  Apply Rate:      (unknown)
  Real Time Query: ON
  Instance(s):
    DATAGUARD

  Database Warning(s):
    ORA-16855: transport lag has exceeded specified threshold

  Properties:
    DGConnectIdentifier             = 'dataguard'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '5'
    ReopenSecs                      = '15'
    NetTimeout                      = '900'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'auto'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '10'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '+DATA/prod/, +DATA/dataguard/'
    LogFileNameConvert              = '+DATA/prod/, +DATA/dataguard/, +REDO/prod/, +REDO/dataguard/, +RECO/prod/, +RECO/dataguard/'
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '300'
    TransportDisconnectedThreshold  = '30'
    SidName                         = 'dataguard'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dataguard.com.br)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=dataguard_DGMGRL)(INSTANCE_NAME=dataguard)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = 'arch_%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
WARNING


Warning!: If you get the error "ORA-01033: ORACLE initialization or shutdown in progress" right after you try to add your database to the configuration, it means your standby database is not open. Start it up (open read only) and then retry.

If you get the error "ORA-01017: invalid username/password; logon denied" right after you try to add your database to the configuration, it means that your orapw password file is wrong. You can recreate it (8th step), or just copy the orapw file from your primary environment to your standby environment (scp or rsync).

In my case, the WARNING database status is normal: I had just configured the broker, and the difference between my applied backup (plus applied archivelogs) and my production database was 5 days and 20 hours. Since my transport lag threshold is 300 seconds (5 minutes), it must raise an alarm.
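
You can also check the overall configuration health at any time:

DGMGRL> show configuration;

The warning clears by itself once the standby catches up.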


There are some properties I like to configure manually, like:


  edit database DATAGUARD set property LogXptMode='ASYNC';
  edit database DATAGUARD set property MaxConnections = 5;
  edit database DATAGUARD set property ReopenSecs = 15;
  edit database DATAGUARD set property TransportDisconnectedThreshold = 30;
  edit database DATAGUARD set property ArchiveLagTarget=120;
  edit database DATAGUARD set property TransportLagThreshold=300;
  edit database DATAGUARD set property NetTimeout=900;
  edit database DATAGUARD set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dataguard.com.br)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=dataguard_DGMGRL)(INSTANCE_NAME=dataguard)(SERVER=DEDICATED)))';


To understand more about these properties, check the Oracle documentation.

12th: If you don't have Oracle Grid Infrastructure configured, skip this step.

Now we are going to configure the Server Control utility (srvctl), running the following commands as the oracle user:

$ srvctl add database -d dataguard -o $ORACLE_HOME -r PHYSICAL_STANDBY -p "+DATA/dataguard/spfiledataguard.ora" -a "DATA","RECO","REDO"
$ srvctl modify database -d dataguard -s 'open','read only'
$ srvctl config database -d dataguard
Database unique name: dataguard
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/dataguard/spfiledataguard.ora
Domain:
Start options: open,read only
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Database instance: dataguard
Disk Groups: DATA,RECO,REDO
Services:


Now you can add some services if you want, but remember: services are related to your database role (PRIMARY, PHYSICAL STANDBY, LOGICAL STANDBY or SNAPSHOT STANDBY), e.g.:

$ srvctl add service -d dataguard -s srv_dataguard -l PHYSICAL_STANDBY
$ srvctl config service -d dataguard -s srv_dataguard
Service name: srv_dataguard
Service is enabled
Cardinality: SINGLETON
Disconnect: false
Service role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
$ srvctl start service -d dataguard -s srv_dataguard


In this case, this service will only be available while your database is in the physical standby role. If you convert it to a snapshot standby, this service will not be available anymore.
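
Clients can then point to this service in their tnsnames.ora, following the same pattern as before (hostname is illustrative):

srv_dataguard =
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dataguard.com.br)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = srv_dataguard)))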

And it's done! You can use your broker to check whether your archivelogs are being transported, like:


$ dgmgrl /
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show database dataguard;

Database - dataguard

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   2 minutes 15 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      545.00 KByte/s
  Real Time Query: ON
  Instance(s):
    dataguard

Database Status:
SUCCESS


Considerations:
* Always check your alert_$ORACLE_SID.log file; you can always get a good picture of what is happening with your replication there (on both nodes, i.e., standby and primary database).
* Whenever you change a parameter that is shared with the broker, e.g., db_file_name_convert, you need to change the equivalent broker property (in this case, DbFileNameConvert), or your broker database status will be set to WARNING, and some replication can fail after, for example, a shutdown and startup. See the example below.
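
For example, a matching pair of changes might look like this (values are illustrative):

SQL> alter system set db_file_name_convert='+DATA/prod','+DATA/dataguard' scope=spfile;

DGMGRL> edit database DATAGUARD set property DbFileNameConvert='+DATA/prod, +DATA/dataguard';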

Sunday, September 13, 2015

Change database SID

Database version: 11.2.0.x (11gR2) using ASM
OS: RedHat Enterprise Linux 6.x



Imagine that you have a single instance database with a SID called olddb (let's say our db_name is the same).

For some reason, you need to change your SID to newdb. But how do you do that after everything is installed and working fine?

Before starting this tutorial, make sure of a few things:

  * Your DB_RECOVERY_FILE_DEST will be used as the destination your datafiles are copied to
  * Your DB_RECOVERY_FILE_DEST_SIZE must be configured with a size greater than or equal to your full database (especially your datafiles)
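
You can check both values up front (show parameter matches by prefix, so this lists both):

SQL> show parameter db_recovery_file_dest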


1st: Mount your database. You need to use the nid command (as the oracle user; make sure that you have your ORACLE_SID set to olddb):


$ nid target=/ setname=true dbname=newdb

DBNEWID: Release 11.2.0.4.0 - Production on Sat Sep 12 09:35:20 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to database OLDDB (DBID=2045464205)

Connected to server version 11.2.0

Control Files in database:
    +DATA/olddb/control01.ctl
    +DATA/olddb/control02.ctl

Change database name of database OLDDB to NEWDB? (Y/[N]) => Y

Proceeding with operation
Changing database name from OLDDB to NEWDB
    Control File +DATA/olddb/control01.ctl - modified
    Control File +DATA/olddb/control02.ctl - modified
    Datafile +DATA/olddb/datafile/system.387.89016408 - wrote new name
...
    Instance shut down

Database name changed to NEWDB.
Modify parameter file and generate a new password file before restarting.
Succesfully changed database name.
DBNEWID - Completed succesfully.



Warning!: if you get the error "NID-00131: Control file is not current", it means that you have never opened your database up to now. So, open it to make sure your controlfile is up to date and ready to be used (maybe you'll need to open it with the resetlogs clause - alter database open resetlogs).


Now, if you try to mount your database, you'll get the following error:



alter database mount
*
ERROR at line 1:
ORA-01103: database name 'NEWDB' in control file is not 'OLDDB'




This error tells us that the database is not ready to be used yet. We only changed the control file definition of our database; the parameter file was untouched (you can check this by showing your db_name parameter... as you can see, it is still OLDDB).
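
For example:

SQL> show parameter db_name

The VALUE column will still show olddb until we fix the parameter file in the next step.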

2nd: You need to create a new parameter file and new control files for your new configuration. First, create a pfile.


$ export ORACLE_SID=olddb
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Sep 13 14:21:18 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.


SQL> startup mount;
ORACLE instance started.

Total System Global Area 1.0689E+10 bytes
Fixed Size                  2262656 bytes
Variable Size            1677724032 bytes
Database Buffers         8992587776 bytes
Redo Buffers               16900096 bytes

SQL> create pfile='/tmp/pfilenewdb.ora' from spfile='+DATA/olddb/spfileolddb.ora';

File created.



Now, edit your db_name parameter to set your new database name:



SQL> alter system set db_name='newdb' scope=spfile;

System altered.

SQL> shutdown immediate;
ORA-01109: database not open


See? Now you've changed your database name and everything went fine! But we are not done yet.
Now we need to create our controlfiles in the right place!
Save your old controlfile destination (we'll need it later), and then change your control_files parameter to the new destination.


SQL> show parameter control_files;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +DATA/olddb/control01.ctl,
                                                 +DATA/olddb/control02.ctl
SQL> alter system set control_files='+DATA/newdb/control01.ctl','+DATA/newdb/control02.ctl' scope=spfile;

System altered.



Shutdown your olddb instance.

Edit your pfilenewdb.ora, replacing the parameters for your new instance (like db_name, audit_file_dest, and all the others that have OLDDB in them). It should look something like this:


*.archive_lag_target=0
*.audit_file_dest='/u01/app/oracle/admin/newdb/adump'
*.audit_trail='NONE'
*.cell_offload_processing=FALSE
*.cluster_database=false
*.compatible='11.2.0.4.0'
*.control_files='+DATA/newdb/control01.ctl','+DATA/newdb/control02.ctl'#Restore Controlfile
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=32
*.db_files=1000
*.db_name='newdb'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=53687091200
*.db_unique_name='newdb'
*.dg_broker_start=FALSE
*.diagnostic_dest='/u01/app/oracle'
*.instance_number=1
*.job_queue_processes=10
*.log_archive_dest='+DATA'
*.log_archive_format='arch_%t_%s_%r.dbf'
newdb.log_archive_format='arch_%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.log_archive_min_succeed_dest=1
*.log_archive_trace=0
newdb.log_archive_trace=0
*.log_buffer=31703040
*.open_cursors=10000
*.parallel_execution_message_size=65536
*.pga_aggregate_target=4294967296
*.processes=4000
*.remote_login_passwordfile='exclusive'
*.resource_manager_plan=''
*.sec_case_sensitive_logon=TRUE
*.session_cached_cursors=5000
*.sessions=2100
*.sga_max_size=3221225472
*.sga_target=3221225472
*.standby_file_management='AUTO'
newdb.thread=1
*.undo_retention=900

*.undo_tablespace='UNDOTBS1'

IMPORTANT: Make sure your db_recovery_file_dest is pointing to the target directory where all the datafiles will live (and, of course, that db_recovery_file_dest_size is big enough to hold them).
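
To estimate the space you will need, you can sum your datafile sizes, for example:

SQL> select round(sum(bytes)/1024/1024/1024) as total_gb from v$datafile;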

Now you need to create your new spfile at your newdb location in ASM:


$ export ORACLE_SID=newdb
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Nov 3 16:44:30 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/tmp/pfilenewdb.ora';
ORACLE instance started.

Total System Global Area 3206836224 bytes
Fixed Size                  2257520 bytes
Variable Size            1157631376 bytes
Database Buffers         2013265920 bytes
Redo Buffers               33681408 bytes
SQL>
SQL> create spfile='+DATA/newdb/spfilenewdb.ora' from pfile='/tmp/pfilenewdb.ora';

File created.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.



Start up the instance in NOMOUNT state to restore your controlfiles, as below:



$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Sun Sep 13 14:34:01 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: NEWDB (not mounted)

RMAN> restore controlfile from '+DATA/olddb/control01.ctl';

Starting restore at 13-SEP-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=68 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+DATA/newdb/control01.ctl
output file name=+DATA/newdb/control02.ctl
Finished restore at 13-SEP-15

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1




With the database mounted (the "alter database mount" above took care of that), we can move on.

3rd: Using ASM, you can see that your database files are still in the OLDDB directory. Now, let's move them to your new SID directory (NEWDB).



$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Sun Sep 13 14:27:05 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: NEWDB (DBID=2045464205, not open)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name NEWDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    1700     SYSTEM               ***     +DATA/olddb/datafile/system.387.890164085
...
List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +DATA/olddb/tempfile/temp.506.890212713
...

RMAN> backup as copy database;

Starting backup at 13-SEP-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=68 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATA/olddb/datafile/undotbs1.481.890157651
output file name=+DATA/newdb/datafile/undotbs1.509.890318271 tag=TAG20150913T143748 RECID=433 STAMP=890318867
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:10:06

channel ORA_DISK_1: starting datafile copy
...
copying current control file
output file name=+DATA/newdb/controlfile/backup.725.890327841 tag=TAG20150913T143748 RECID=649 STAMP=890327841
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 13-SEP-15
channel ORA_DISK_1: finished piece 1 at 13-SEP-15
piece handle=+DATA/newdb/backupset/2015_09_13/nnsnf0_tag20150913t143748_0.726.890327843 tag=TAG20150913T143748 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-SEP-15

RMAN> list copy of database;

List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
522     1    A 13-SEP-15       53694861539 12-SEP-15
        Name: +DATA/newdb/datafile/system.598.890325775
        Tag: TAG20150913T143748
...

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DATA/newdb/datafile/system.598.890325775"
...

RMAN> alter database open;

database opened

RMAN> report schema;

Report of database schema for database with db_unique_name NEWDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    1700     SYSTEM               ***     +DATA/newdb/datafile/system.598.890325775
...
List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +DATA/olddb/tempfile/temp.506.890212713
...



Note that your tempfiles weren't moved to your new directory. Unfortunately, neither were your redo logs.


4th: Now we have to recreate your tempfiles and redo logs in the right directory. So, let's do that.



$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Sep 13 17:28:05 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> alter tablespace TEMP add tempfile '+DATA' size 100m autoextend on next 100m maxsize 4096m;

Tablespace altered.

SQL> set lines 200;
SQL> set pages 500;
SQL> select 'alter database tempfile ''' || file_name || ''' drop;' command_ from dba_temp_files where file_name not like '%newdb%';

COMMAND_
-------------------------------------------------------------------------------------------------
alter database tempfile '+DATA/olddb/tempfile/temp.506.890212713' drop;
...

SQL> alter database tempfile '+DATA/olddb/tempfile/temp.506.890212713' drop;

Database altered.
...

SQL> alter database add logfile group 111 ('+DATA','+DATA') size 512m;

Database altered.

SQL> alter database add logfile group 112 ('+DATA','+DATA') size 512m;

Database altered.

SQL> select 'alter database drop logfile group ' || group# || ';' command_ from v$logfile where member not like '%newdb%';

COMMAND_
---------------------------------------------------------------------------
alter database drop logfile group 1;

SQL> alter database drop logfile group 1;
alter database drop logfile group 1
*
ERROR at line 1:
ORA-01623: log 1 is current log for instance olddb (thread 1) - cannot drop
ORA-00312: online log 1 thread 1: '+DATA/olddb/onlinelog/group_1.276.890212603'
ORA-00312: online log 1 thread 1: '+DATA/olddb/onlinelog/group_1.260.890212611'



What happened? If you face this (and you probably will), it's because you can't drop a redo log group that is the CURRENT one, or that is still ACTIVE (meaning it still needs to be archived/checkpointed), so you can't drop it until that finishes. But, as I'm in a hurry (lol), I will just clear it and lose this archivelog (be aware that this is not good practice, since you won't be able to recover your database through this point after some catastrophe).



SQL> alter database clear unarchived logfile group 1;

Database clear UNARCHIVED altered.
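
If you are not in a hurry, a safer alternative is to force log switches until the group becomes INACTIVE and then drop it normally, keeping your archivelog chain intact:

SQL> alter system switch logfile;
SQL> alter system checkpoint;
SQL> alter database drop logfile group 1;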


And it's done!


Considerations:

* You can recreate your controlfiles with the control01.ctl and control02.ctl aliases;
* You can now delete your old OLDDB directory from ASM to erase all the old files (see the asmcmd sketch below);
* Remember to check whether you are using archivelog mode or not, and turn it on/off;
* Change your db_recovery_file_dest back to the old directory (FRA), where you keep your archivelogs and backups.
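
For the OLDDB cleanup, an asmcmd sketch (run it as the Grid/ASM owner, and double-check the path first - rm -rf is unforgiving):

$ asmcmd
ASMCMD> rm -rf +DATA/OLDDB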