Oracle Change Data Capture Asynchronous Autolog Demo
Version 11.1
 
Setup As SYS - Prepare Source Database and Instance (1 & 8)
conn / as sysdba

-- *NIX only
define _editor=vi
-- or the editor of your choice

-- validate database parameters
archive log list                    -- Archive Mode
show parameter compatible           -- 11.0 or above
show parameter global_names         -- should be TRUE but not required
show parameter java_pool_size       -- 0 or at least 50000000
show parameter open_links           -- not less than the default 4 (if using dblinks)
show parameter parallel_max_servers -- set to <current_value>+(5*number of planned change sets)
show parameter processes            -- set to <current_value>+(7*number of planned change sets)
show parameter sessions             -- set to <current_value>+(2*number of planned change sets)
show parameter shared_pool_size     -- 0 or at least 200MB
show parameter streams_pool_size    -- 0 or <current_size>+(20*number of planned change sets)
show parameter undo_retention       -- minimum 3600 (1 hr.)


-- Examples of altering initialization parameters
alter system set compatible='11.1.0.0.0' scope=SPFILE;
alter system set global_names=TRUE scope=BOTH;
alter system set streams_pool_size=200M scope=BOTH; -- very slow if making smaller
alter system set undo_retention=3600 scope=BOTH;

Log Archive Parameters for the Source DB
Parameter Description

log_archive_dest_1
Specifies the directory specification of the source database where its own archived redo log files are written

LOCATION: Specifies a unique directory path name for the source database's archived redo logs

MANDATORY: Specifies that redo log file must be successfully archived before it can be overwritten

REOPEN: Specifies the minimum number of seconds the log writer process (LGWR) or archive process (ARCn) should wait before trying to reaccess the local log files if a previous attempt failed

log_archive_dest_1 ="location=/app/oracle/product/flash_recovery_area  mandatory reopen=2"

log_archive_dest_2
This parameter must include the SERVICE, ARCH or LGWR ASYNC, OPTIONAL, NOREGISTER, and REOPEN attributes so that redo transport services are configured to copy the redo log files from the source database to the destination database. This parameter must also include either the VALID_FOR or the TEMPLATE attribute depending on the AutoLog option. Settable attributes are:

SERVICE: Specifies the network name of the destination database

ARCH or LGWR ASYNC: To use the AutoLog online option, specify LGWR ASYNC. LGWR ASYNC specifies that the log writer process (LGWR) copy redo data asynchronously to the destination database as the redo is generated on the source database. The copied redo data becomes available to Change Data Capture after its source database transaction commits. To use the AutoLog archive option, specify either ARCH or LGWR ASYNC. ARCH specifies that the archiver process (ARCn) copy the redo log files to the destination database after a source database log switch occurs. With the AutoLog archive option, for both ARCH and LGWR ASYNC, the copied redo data becomes available to Change Data Capture only after a source database log switch occurs.

OPTIONAL: Specifies that the copying of a redo log file to the destination database need not succeed before the corresponding online redo log at the source database can be overwritten. This is required to avoid stalling operations on the source database due to a transmission failure. The original redo log file remains available to the source database in either archived or backed up form, if it is needed.

NOREGISTER: Specifies that the destination database location is not recorded in the destination database control file

REOPEN: Specifies the minimum number of seconds the log writer process (LGWR) or archive process (ARCn) should wait before trying to access the destination database if a previous attempt failed

VALID_FOR: Set to either (ONLINE_LOGFILE, PRIMARY_ROLE) or (ONLINE_LOGFILE, ALL_ROLES) to enable redo data to be copied from the online redo log on the source database to the standby redo log at the destination database

TEMPLATE: When using the AutoLog archive option, specify TEMPLATE to define a directory specification and a format template for the file name used for the archived redo log files that are copied to the destination db

log_archive_dest_2 ="service=prodb lgwr async optional noregister reopen=2 valid_for=(online_logfile,primary_role)"

or

log_archive_dest_2 = "service=destdb arch optional noregister reopen=2 template=/app/oracle/product/stdbylogs/arch_%s_%t_%r.dbf"
log_archive_dest_state_1
Indicates that redo transport services can transmit archived redo log files to this destination

log_archive_dest_state_1 = enable

log_archive_dest_state_2
Indicates that redo transport services can transmit archived redo log files to this destination

log_archive_dest_state_2 = enable

log_archive_format
Format template for the default file name when archiving redo log files

log_archive_format="arch_%s_%t_%r.dbf"

/* Make the above changes for the source database by creating an include file (ifile) and referencing it in the spfile: create a pfile from the spfile, edit it with vi to reference the ifile holding these parameters, then recreate the spfile from the edited pfile before proceeding. */
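/* a minimal sketch of the ifile mechanics; the file names and paths below are illustrative only: */

create pfile='/tmp/init_source.ora' from spfile;

-- with vi, place the log archive parameters above in their own ifile, then
-- append a reference to it in /tmp/init_source.ora, e.g.:
--   ifile=/app/oracle/admin/source/pfile/cdc_autolog.ora

create spfile from pfile='/tmp/init_source.ora';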

shutdown immediate;

startup mount;

-- begin archiving redo logs
alter database archivelog;

-- force logging of all transactions: override nologging statements
alter database force logging;

-- one option among several
alter database add supplemental log data;

alter database open;

-- validate archivelogging
archive log list;

alter system switch logfile;

archive log list;

-- retest initialization parameter changes to verify modification to planned values

-- validate force and supplemental logging
col log_min format a7
col log_pk format a6
col log_ui format a6
col log_fk format a6
col log_all format a7
col force_log format a9

SELECT supplemental_log_data_min LOG_MIN, supplemental_log_data_pk LOG_PK, supplemental_log_data_ui LOG_UI, supplemental_log_data_fk LOG_FK, 
supplemental_log_data_all LOG_ALL, force_logging FORCE_LOG
FROM v$database;

SELECT tablespace_name, force_logging
FROM dba_tablespaces;

-- examine CDC related data dictionary objects
SELECT table_name
FROM dba_tables
WHERE owner = 'SYS'
AND table_name LIKE 'CDC%$';

desc cdc_system$

SELECT * FROM cdc_system$;

/* The publisher will need the global name of the source database; it is used on the destination database to create the AutoLog change source. The source database DBA can query the GLOBAL_NAME column of the GLOBAL_NAME view on the source database to retrieve this information for the publisher, so on the source database save the result of the following query: */

SELECT * FROM global_name;

/* Also determine the log file size used on the source database and the number of log file groups. The number of standby log file groups, created later, must be one more than the number of redo log groups on the source. */

desc v$log

SELECT group#, bytes
FROM v$log;

 
Setup As SYS - Prepare Destination Database and Instance (2)
conn / as sysdba

-- *NIX only
define _editor=vi
-- or the editor of your choice

-- validate database parameters
archive log list                    -- Archive Mode
show parameter compatible           -- 11.0 or above
show parameter global_names         -- should be TRUE but not required
show parameter java_pool_size       -- 0 or at least 50000000
show parameter open_links           -- not less than the default 4 (if using dblinks)
show parameter parallel_max_servers -- set to <current_value>+(5*number of planned change sets)
show parameter processes            -- set to <current_value>+(7*number of planned change sets)
show parameter sessions             -- set to <current_value>+(2*number of planned change sets)
show parameter shared_pool_size     -- 0 or at least 200MB
show parameter streams_pool_size    -- 0 or <current_size>+(20*number of planned change sets)
show parameter undo_retention       -- minimum 3600 (1 hr.)


-- Examples of altering initialization parameters
alter system set compatible='11.1.0.0.0' scope=SPFILE;
alter system set global_names=TRUE scope=BOTH;
alter system set streams_pool_size=200M scope=BOTH; -- very slow if making smaller
alter system set undo_retention=3600 scope=BOTH;

Log Archive Parameters for the Destination DB
Parameter Description
remote_archive_enable
Indicates that this destination database can receive remotely archived redo log files

remote_archive_enable = TRUE

log_archive_dest_1
The directory specification on the destination database where its own archived redo log files are to be kept. If the destination database has an AutoLog online change source, the following attributes should be specified:

LOCATION: Specifies a unique directory path name for the destination database's own archived redo log files

VALID_FOR: Set VALID_FOR to either (ONLINE_LOGFILE, PRIMARY_ROLE) or (ONLINE_LOGFILE, ALL_ROLES) to enable the online redo log file to be archived locally

log_archive_dest_1="location=/app/oracle/product/flash_recovery_area mandatory reopen=2 valid_for=(online_logfile,primary_role)"

log_archive_dest_2
If the destination database has an AutoLog online change source, this specifies the standby redo log files on the destination database that receive change data from the source database. It is very important to specify a unique location for these standby redo log files so that they do not overwrite the destination database's own archived log files.

LOCATION: Specifies a unique directory path name for the destination database's standby redo log files

MANDATORY: Specifies that a standby redo log file must be successfully archived before it can be overwritten

VALID_FOR: Set VALID_FOR either to (STANDBY_LOGFILE, PRIMARY_ROLE) or (STANDBY_LOGFILE, ALL_ROLES) to enable the destination database to receive change data from the source database and write it to the destination database standby log files.

log_archive_dest_2="location=/u01/destdb mandatory valid_for=(standby_logfile,primary_role)"
log_archive_dest_state_1
Indicates that redo transport services can transmit archived redo log files to this destination

log_archive_dest_state_1 = enable

log_archive_dest_state_2
Indicates that redo transport services can transmit archived redo log files to this destination

log_archive_dest_state_2 = enable

log_archive_format
Format template for the default file name when archiving redo log files

log_archive_format="arch_%s_%t_%r.dbf"

/* Make the above changes for the destination database by creating an include file (ifile) and referencing it in the spfile, as was done for the source database: create a pfile from the spfile, edit it with vi to reference the ifile, then recreate the spfile before proceeding. */

shutdown immediate;

startup mount;

alter database archivelog;

alter database open;

-- validate archivelogging
archive log list;

alter system switch logfile;

archive log list;

-- retest initialization parameter changes to verify modification to planned values

-- examine CDC related data dictionary objects

SELECT table_name
FROM dba_tables
WHERE owner = 'SYS'
AND table_name LIKE 'CDC%$';

desc cdc_system$

SELECT * FROM cdc_system$;

 
Create Streams Administrators (5)
/* on the destination database the administrator is used to perform the underlying Oracle Streams operations needed to create Change Data Capture change sources, change sets, and change tables. */

-- on the destination database (omega2)

CREATE TABLESPACE cdc_tbsp
datafile 'c:\temp\cdctbsp01.dbf' SIZE 50M
AUTOEXTEND OFF
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;

CREATE USER cdcadmin
IDENTIFIED BY cdcadmin
DEFAULT TABLESPACE cdc_tbsp
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON cdc_tbsp;

GRANT CREATE SESSION TO cdcadmin;
GRANT CREATE SEQUENCE TO cdcadmin;
GRANT CREATE TABLE TO cdcadmin;

GRANT SELECT_CATALOG_ROLE TO cdcadmin;
GRANT EXECUTE_CATALOG_ROLE TO cdcadmin;

GRANT DBA TO cdcadmin; -- replace with the minimum privileges actually required once identified

GRANT execute ON dbms_cdc_publish TO cdcadmin;

exec dbms_streams_auth.grant_admin_privilege('CDCADMIN');
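/* optionally confirm the grant; DBA_STREAMS_ADMINISTRATOR lists the users granted Streams administration privileges: */

SELECT * FROM dba_streams_administrator;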
 
Prepare Schema Tables for CDC Replication (3)
-- on the source database
conn / as sysdba

alter user sh account unlock identified by sh;

connect sh/sh

SELECT table_name, reason
FROM all_streams_unsupported
WHERE owner = 'SH'
ORDER BY 1;

desc products

col prod_name format a30
col prod_desc format a30
col prod_category format a20

SELECT prod_id, prod_name, prod_desc, prod_category, prod_status
FROM products;

-- create CDC demo table
CREATE TABLE cdc_demo3 AS
SELECT * FROM products;

ALTER TABLE cdc_demo3
ADD CONSTRAINT pk_cdc_demo3
PRIMARY KEY (prod_id)
USING INDEX;

/* Create an unconditional log group on all columns to be captured in the source table. Source table columns that are unchanged and are not in an unconditional log group will be null in the change table, instead of reflecting their actual source table values. (This example captures rows in the sh.cdc_demo3 table only. The source database DBA would repeat this step for each source table for which change tables will be created.) */

ALTER TABLE sh.cdc_demo3
ADD SUPPLEMENTAL LOG GROUP log_group_products
(prod_id, prod_name, prod_list_price) ALWAYS;

or

ALTER TABLE sh.cdc_demo3 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

desc user_log_groups

SELECT * FROM user_log_groups;
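/* the columns in the named log group can be verified as well: */

SELECT log_group_name, table_name, column_name, position
FROM user_log_group_columns;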
 
Create Standby Redo Log Files (4)
-- on the destination database in a terminal window

$ mkdir -p /app/oracle/product/stdbylogs
$ mkdir -p /home/oracle/stdbylogs
$ exit

-- on the destination database in SQL*Plus
conn / as sysdba

-- group numbers must not collide with the destination's own online redo log
-- groups (assumed here to be groups 1-3); SIZE must match the source's online
-- redo log file size
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
('/app/oracle/product/stdbylogs/slog4a.rdo', '/home/oracle/stdbylogs/slog4b.rdo') SIZE 50M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
('/app/oracle/product/stdbylogs/slog5a.rdo', '/home/oracle/stdbylogs/slog5b.rdo') SIZE 50M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
('/app/oracle/product/stdbylogs/slog6a.rdo', '/home/oracle/stdbylogs/slog6b.rdo') SIZE 50M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 7
('/app/oracle/product/stdbylogs/slog7a.rdo', '/home/oracle/stdbylogs/slog7b.rdo') SIZE 50M;

SELECT group#, bytes, status
FROM v$standby_log;
 
Instantiate Source Data Dictionary (6)
The source database DBA builds a LogMiner data dictionary at the source database so that redo transport services can transport this data dictionary to the destination database. This LogMiner data dictionary build provides the table definitions as they were just prior to beginning to capture change data. CDC automatically updates the data dictionary with any source table DDL operations that are made during the course of CDC to ensure that the dictionary is always synchronized with the source database tables.

When building the LogMiner data dictionary, the source database DBA must get the SCN value of the data dictionary build. When creating change sources, in a later step, the publisher will need to provide this value as the first_scn parameter.


conn / as sysdba

set serveroutput on

DECLARE
  f_scn NUMBER := 0;
BEGIN
  dbms_capture_adm.build(f_scn);
  dbms_output.put_line('The first_scn value is ' || f_scn);
END;
/

The first_scn value is 207722
 
Prepare Source Tables (7)
-- The source database DBA must prepare the source tables on the source database for asynchronous CDC by instantiating each source table so that the underlying Oracle Streams environment records the information it needs to capture each source table's changes. The source table structure and the column datatypes must be supported by CDC.

conn / as sysdba

desc dba_capture_prepared_tables

SELECT table_name, scn, supplemental_log_data_pk PK, supplemental_log_data_ui UI,
supplemental_log_data_fk FK, supplemental_log_data_all "ALL"
FROM dba_capture_prepared_tables;

dbms_capture_adm.prepare_table_instantiation(
table_name           IN VARCHAR2,
supplemental_logging IN VARCHAR2 DEFAULT 'keys');

exec dbms_capture_adm.prepare_table_instantiation('sh.cdc_demo3');

SELECT table_name, scn, supplemental_log_data_pk PK, supplemental_log_data_ui UI,
supplemental_log_data_fk FK, supplemental_log_data_all "ALL"
FROM dba_capture_prepared_tables;

 
Identify each Change Source Database and Create the Change Sources (9)
/* The publisher uses the dbms_cdc_publish.create_autolog_change_source procedure on the destination database to create change sources. A change source describes the source database from which the data will be captured, and manages the relationship between the source database and the destination database. A change source always specifies the SCN of a data dictionary build from the source database as its first_scn parameter.

The publisher gets the SCN of the data dictionary build and the global database name from the source database DBA. If the publisher cannot get the value to use for the first_scn parameter from the source database DBA, then, with the appropriate privileges, it can be queried from v$archived_log on the source database.

On the destination database, the publisher creates the AutoLog change source and specifies the global name as the source_database parameter value and the SCN of the data dictionary build as the first_scn parameter value. */
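/* If the value cannot be obtained from the DBA, one way to recover it on the source database: DICTIONARY_BEGIN = 'YES' marks archived logs that contain the start of a LogMiner dictionary build. */

SELECT DISTINCT name, first_change#
FROM v$archived_log
WHERE dictionary_begin = 'YES';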


-- in the destination database

 
dbms_cdc_publish.create_autolog_change_source(
change_source_name IN VARCHAR2,
description        IN VARCHAR2 DEFAULT NULL,
source_database    IN VARCHAR2,
first_scn          IN NUMBER,
online_log         IN CHAR DEFAULT 'N');

exec dbms_cdc_publish.create_autolog_change_source('CS_DEMO3', 'AutoLog Demo', 'PROD', 207722, 'Y');
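/* a quick sanity check of the new change source; the ALL_CHANGE_SOURCES column names used here are assumed: */

SELECT source_name, source_database, created
FROM all_change_sources;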

-- to create an AutoLog archive change source omit the last parameter

 
Create AutoLog Change Set (10)
/* The publisher uses the dbms_cdc_publish.create_change_set procedure on the destination database to create change sets. The publisher can optionally provide beginning and ending dates to indicate where to begin and end data capture. When CDC creates a change set, its associated Streams capture and apply processes are also created (but not started).

The following example shows how to create a change set called OMEGA_CSET that captures changes starting today and continues capturing change data indefinitely. */


-- on the destination database

conn cdcadmin/cdcadmin

SELECT set_name, change_source_name, capture_name, queue_name, publisher, stop_on_ddl
FROM all_change_sets;
 
dbms_cdc_publish.create_change_set(
change_set_name    IN VARCHAR2,
description        IN VARCHAR2 DEFAULT NULL,
change_source_name IN VARCHAR2,
stop_on_ddl        IN CHAR DEFAULT 'N',
begin_date         IN DATE DEFAULT NULL,
end_date           IN DATE DEFAULT NULL);

exec dbms_cdc_publish.create_change_set('OMEGA_CSET', 'change set info', 'CS_DEMO3', 'Y');

SELECT set_name, change_source_name, capture_name, queue_name, publisher, stop_on_ddl
FROM all_change_sets;

 
Destination Database Publisher: Create the Change Tables (11)
/* The publisher uses dbms_cdc_publish.create_change_table on the destination database to create a change table for each published source table, specifying the columns to be included and the combination of before and after images of the change data to capture.

The publisher can set the options_string field of the dbms_cdc_publish.create_change_table procedure to have more control over the physical properties and tablespace properties of the change tables. The options_string field can contain any option available (except partitioning) on the CREATE TABLE statement. In this example, it specifies a tablespace for the change table. (This example assumes that the publisher previously created the CDC_TBSP tablespace.)

The following example creates a change table on the destination database that captures changes made to a source table in the source database. The example uses sh.cdc_demo3, the demo copy of the sample table sh.products. */


-- on the destination database

conn cdcadmin/cdcadmin

SELECT object_name, object_type
FROM user_objects
ORDER BY 2,1;
dbms_cdc_publish.create_change_table(
owner             IN VARCHAR2,
change_table_name IN VARCHAR2,
change_set_name   IN VARCHAR2,
source_schema     IN VARCHAR2,
source_table      IN VARCHAR2,
column_type_list  IN VARCHAR2,
capture_values    IN VARCHAR2, -- BOTH, NEW, OLD
rs_id             IN CHAR,
row_id            IN CHAR,
user_id           IN CHAR,
timestamp         IN CHAR,
object_id         IN CHAR,
source_colmap     IN CHAR,
target_colmap     IN CHAR,
options_string    IN VARCHAR2);

exec dbms_cdc_publish.create_change_table('CDCADMIN', 'PRODUCT_CTAB', 'OMEGA_CSET', 'SH', 'CDC_DEMO3', 'PROD_ID NUMBER(6), PROD_NAME VARCHAR2(50), PROD_LIST_PRICE NUMBER(8,2)', 'BOTH', 'Y', 'N', 'N', 'N', 'N', 'N', 'Y', 'TABLESPACE CDC_TBSP');

SELECT object_name, object_type
FROM user_objects
ORDER BY 2,1;

col high_value format a15

SELECT table_name, composite, partition_name, high_value
FROM user_tab_partitions;

GRANT select ON product_ctab TO <new_end_user>;

conn / as sysdba

desc cdc_change_tables$

SELECT change_set_name, source_schema_name, source_table_name
FROM cdc_change_tables$;


/* This example creates a change table named product_ctab within change set OMEGA_CSET. The column_type_list parameter identifies the columns captured by the change table. The source_schema and source_table parameters identify the schema and source table that reside in the source database, not the destination database.

The capture_values setting in the example indicates that for update operations, the change data will contain two separate rows for each row that changed: one row will contain the row values before the update occurred and the other row will contain the row values after the update occurred. */

 
Enable Change Set (12)
/* Because asynchronous change sets are always disabled when they are created, the publisher must alter the change set to enable it. The Oracle Streams capture and apply processes are started when the change set is enabled. */

-- on the destination database

conn cdcadmin/cdcadmin

SELECT set_name, change_source_name, capture_enabled
FROM cdc_change_sets$;

dbms_cdc_publish.alter_change_set(
change_set_name     IN VARCHAR2,
description         IN VARCHAR2 DEFAULT NULL,
remove_description  IN CHAR DEFAULT 'N',
enable_capture      IN CHAR DEFAULT NULL,
recover_after_error IN CHAR DEFAULT NULL,
remove_ddl          IN CHAR DEFAULT NULL,
stop_on_ddl         IN CHAR DEFAULT NULL);

exec dbms_cdc_publish.alter_change_set('OMEGA_CSET', enable_capture => 'Y');

SELECT set_name, change_source_name, capture_enabled
FROM cdc_change_sets$;

 
Switch the redo log files at the source database (13)
-- on the source database

conn / as sysdba

ALTER SYSTEM SWITCH LOGFILE;
 
Destination Database Publisher: Grant access to subscribers (14)

/* The publisher controls subscriber access to change data by granting and revoking the SQL SELECT privilege on change tables for users and roles on the destination database. The publisher grants access to specific change tables. Without this step, a subscriber cannot access any change data. This example assumes that user subscriber1 already exists. */

-- on the destination database

conn cdcadmin/cdcadmin

GRANT SELECT ON cdcadmin.product_ctab TO subscriber1;

-- The Change Data Capture asynchronous AutoLog system is now ready for subscriber1 to create subscriptions.
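/* a minimal subscription sketch using dbms_cdc_subscribe; the subscriber1 password, subscription name, and view name are illustrative: */

conn subscriber1/subscriber1

exec dbms_cdc_subscribe.create_subscription('OMEGA_CSET', 'demo subscription', 'DEMO_SUB');

-- select the columns of interest and name the subscriber view
exec dbms_cdc_subscribe.subscribe('DEMO_SUB', 'SH', 'CDC_DEMO3', 'PROD_ID, PROD_NAME, PROD_LIST_PRICE', 'DEMO_SUB_VIEW');

exec dbms_cdc_subscribe.activate_subscription('DEMO_SUB');

-- after changes arrive, extend the window and read them through the view
exec dbms_cdc_subscribe.extend_window('DEMO_SUB');

SELECT * FROM demo_sub_view;

exec dbms_cdc_subscribe.purge_window('DEMO_SUB');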

 
DML On Source Table
conn sh/sh

-- make some changes to be replicated:
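/* a hypothetical batch of changes; the prod_id values are illustrative: */

UPDATE cdc_demo3
SET prod_list_price = prod_list_price * 1.1
WHERE prod_id = 13;

DELETE FROM cdc_demo3
WHERE prod_id = 14;

COMMIT;

-- with the AutoLog archive option the changes become available to CDC only
-- after a source database log switch
conn / as sysdba

alter system switch logfile;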
 
Validate Capture 
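/* a minimal check, assuming the sample DML above: on the destination database, query the change table as the publisher. OPERATION$ is I, D, UO, or UN for inserts, deletes, and the update before/after images; CSCN$ orders the changes. */

conn cdcadmin/cdcadmin

SELECT operation$, cscn$, commit_timestamp$, prod_id, prod_name, prod_list_price
FROM product_ctab
ORDER BY cscn$;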
 
Capture Cleanup on Destination
conn / as sysdba

-- drop the change table
exec dbms_cdc_publish.drop_change_table('CDCADMIN', 'PRODUCT_CTAB', 'Y');

-- drop the change set
exec dbms_cdc_publish.drop_change_set('OMEGA_CSET');

drop user cdcadmin cascade;
 
Capture Cleanup on Source
conn / as sysdba

-- reverse prepare table instantiation
exec dbms_capture_adm.abort_table_instantiation('sh.cdc_demo3');

conn sh/sh

ALTER TABLE cdc_demo3 DROP SUPPLEMENTAL LOG GROUP log_group_products;

DROP TABLE cdc_demo3 PURGE;
 
Related Topics
CDC Demo - Asynchronous HotLog
CDC Demo - Synchronous Mode
Database Link
DBMS_CAPTURE_ADM
DBMS_CDC_PUBLISH
DBMS_CDC_SUBSCRIBE
DBMS_STREAMS_AUTH
DBMS_OUTPUT
Export
Import
Streams Demo 1
User
 