Today let's walk through, in a systematic way, the complete steps for configuring OGG (short for Oracle GoldenGate, used throughout this post) on 11gR2 RAC, and briefly discuss how to solve the problem in the scenario described above.
Phase 1: Download OGG (see the earlier single-node replication example; the details are not repeated here)
OGG download URL
Phase 2: Install OGG
1) Log on to either node of the source RAC system and create a shared directory on ACFS for OGG to use, for example /cloudfs/goldengate
2) Unpack the OGG installation package into the /cloudfs/goldengate directory, for example:
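The commands below are only a sketch; the actual file names depend on the OGG build you downloaded, so treat them as placeholders:
$ cd /cloudfs/goldengate
$ unzip /tmp/fbo_ggs_Linux_x64_ora11g_64bit.zip    # placeholder media name
$ tar -xvf fbo_ggs_Linux_x64_ora11g_64bit.tar      # placeholder tar name inside the zip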
3) Set up the environment variables OGG needs, for example:
export LD_LIBRARY_PATH=/cloudfs/goldengate:$ORACLE_HOME/lib:$LD_LIBRARY_PATH
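OGG also needs the usual Oracle environment on every node where it runs. A minimal sketch, assuming the ORACLE_HOME shown later in this post and an instance name of ggdb1 on node 1 (adjust both to your own environment):
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ggdb1      # assumed local instance name on this node
export PATH=$ORACLE_HOME/bin:$PATH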
4) Start ggsci and create the OGG subdirectories, then complete the required configuration and start the Manager
$ ggsci
GGSCI > create subdirs
(Optional: needed for DDL and sequence replication support)
Create and edit the GLOBALS parameter file:
GGSCI > EDIT PARAMS ./GLOBALS
Add this line to the GLOBALS parameter file:
GGSCHEMA ggs
NOTE: 'ggs' is the example OGG user and will be used in the rest of this document.
GGSCI > EDIT PARAMS mgr
Add the following lines to the Manager parameter file:
PORT 7809
AUTOSTART ER *
AUTORESTART ER *
GGSCI > START mgr
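As a quick sanity check (optional), confirm the Manager is up before moving on; it should report the Manager as running on port 7809:
GGSCI > INFO MGR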
5) Repeat steps 1-4 above on the target side. Note the directory name: on the target we use /mycloudfs/goldengate to distinguish it from the source.
Phase 3: Preparation steps for the source and target RAC databases
1) Create the OGG user 'ggs' on both the source and target databases. Connect to the database with SQL*Plus as SYSDBA:
SQL> CREATE USER ggs IDENTIFIED BY ggs;
SQL> GRANT CONNECT,RESOURCE,DBA TO ggs;
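Optionally, verify the user was created as expected:
SQL> SELECT username, account_status FROM dba_users WHERE username = 'GGS';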
2) (Optional: add Oracle sequence replication support) On both the source and target databases, go to the OGG directory and run the following script, entering the OGG user 'ggs' when prompted:
SQL> @sequence.sql
3) Enable supplemental logging on the source database:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
SQL> ALTER SYSTEM SWITCH LOGFILE;
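You can confirm the supplemental logging settings with a quick query (optional); all four columns should return YES:
SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk,
            supplemental_log_data_ui, supplemental_log_data_fk
     FROM v$database;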
(Optional) Add Oracle DDL replication support
4) On the source system, go to the OGG directory and connect to the database with SQL*Plus as SYSDBA:
SQL> GRANT EXECUTE ON utl_file TO ggs;
5) On the source system, run the following scripts, providing the OGG user 'ggs' when prompted:
SQL> @marker_setup.sql
SQL> @ddl_setup.sql
NOTE: enter 'INITIALSETUP' when prompted for the mode of installation.
SQL> @role_setup.sql
SQL> @ddl_enable.sql
SQL> @ddl_pin ggs
NOTE: 'ggs' here is the OGG user.
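A quick way to confirm that DDL capture is in place (optional; the trigger name pattern below is the default used by the OGG DDL scripts, so adjust it if your release differs):
SQL> SELECT owner, trigger_name, status FROM dba_triggers
     WHERE trigger_name LIKE 'GGS_DDL_TRIGGER%';
The trigger should show as ENABLED.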
Phase 4: Configure the Extract group on the source
1) Issue the following command to log on to the database.
GGSCI > DBLOGIN USERID ggs, PASSWORD ggs
2) Create a primary Extract group 'myext':
GGSCI > ADD EXTRACT myext, TRANLOG, BEGIN NOW, THREADS 2
NOTE: the THREADS value is the number of RAC instances in the source database.
3) Create a local trail. The primary Extract writes to this trail, and the data-pump Extract reads it.
GGSCI > ADD EXTTRAIL /cloudfs/goldengate/dirdat/et, EXTRACT myext
NOTE: 'et' is the example trail identifier for Extract 'myext'.
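At this point you can optionally check that the group and the trail were registered; the Extract will show as STOPPED until it is started later:
GGSCI > INFO EXTRACT myext
GGSCI > INFO EXTTRAIL /cloudfs/goldengate/dirdat/et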
4) Create and edit the parameter file for Extract 'myext':
GGSCI > EDIT PARAMS myext
Add the following lines to this parameter file:
EXTRACT myext
SETENV (ORACLE_HOME = "/u01/app/oracle/product/11.2.0/dbhome_1")
USERID ggs@ggdb, PASSWORD ggs
TRANLOGOPTIONS DBLOGREADER
THREADOPTIONS MAXCOMMITPROPAGATIONDELAY 20000
EXTTRAIL /cloudfs/goldengate/dirdat/et
DYNAMICRESOLUTION
DDL INCLUDE ALL
TABLE hr.*;
NOTE 1: make sure the SQL*Net connection string 'ggdb' works.
NOTE 2: 'hr' is the example schema which will be synchronized to the target system.
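For NOTE 1, 'ggdb' must resolve in the tnsnames.ora of the ORACLE_HOME that OGG uses (for example $ORACLE_HOME/network/admin/tnsnames.ora). A minimal sketch; the SCAN host and service name below are placeholders for your own values:
ggdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ggdb)
    )
  )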
Phase 5: Configure the data pump Extract group on the source
1) Create a data pump group 'mypump':
GGSCI > ADD EXTRACT mypump, EXTTRAILSOURCE /cloudfs/goldengate/dirdat/et, BEGIN now
2) Specify a remote trail that will be created on the target system.
GGSCI > ADD RMTTRAIL /mycloudfs/goldengate/dirdat/rt, EXTRACT mypump
NOTE: 'rt' is the example trail identifier for Extract 'mypump'.
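As before, you can optionally verify that the data pump and the remote trail were registered:
GGSCI > INFO EXTRACT mypump
GGSCI > INFO RMTTRAIL *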