Using the CM Failover Feature

1. Agent Installation

Example prs_install_agent.cfg

AGENT_CNT=4

AGENT_ID_0=agent1
AGENT_HOST_0=172.19.0.11
AGENT_PORT_0=7600
AGENT_CM_GROUP_0=CM1
AGENT_CM_ID_0=0

AGENT_ID_1=agent2
AGENT_HOST_1=172.19.0.12
AGENT_PORT_1=7700
AGENT_CM_GROUP_1=CM1
AGENT_CM_ID_1=1

AGENT_ID_2=agent3
AGENT_HOST_2=172.19.0.13
AGENT_PORT_2=7800
AGENT_CM_GROUP_2=CM2
AGENT_CM_ID_2=0

AGENT_ID_3=agent4
AGENT_HOST_3=172.19.0.14
AGENT_PORT_3=7900
AGENT_CM_GROUP_3=CM2
AGENT_CM_ID_3=1

Example of agent installation and its output

$ cd $PRS_HOME/install
$ . prs_install_agent.sh
********************************************************************************
* Agent Install Step (1/4)
* Check System Parameters 
********************************************************************************
* Checking system type... 
Linux

* Checking PRS_HOME... 
PRS_HOME: /home/prosync/example

********************************************************************************
* Agent Install Step (2/4)
* Check install agent config file   [prs_install_agent.cfg]  
* Check Instance file               [prs_instance.map] 
********************************************************************************
* Checking configuration file... 
[/home/prosync/example/install/prs_install_agent.cfg] exists.

* Checking Agent configuration... 
parameter check for Agent[0] Started 
    AGENT_ID: agent1
    AGENT_HOST: 172.19.0.11
    AGENT_PORT: 7600
    AGENT_CM_GROUP: CM1
    AGENT_CM_ID: 0
parameter check for Agent[0] Done 
parameter check for Agent[1] Started 
    AGENT_ID: agent2
    AGENT_HOST: 172.19.0.12
    AGENT_PORT: 7700
    AGENT_CM_GROUP: CM1
    AGENT_CM_ID: 1
parameter check for Agent[1] Done 
parameter check for Agent[2] Started 
    AGENT_ID: agent3
    AGENT_HOST: 172.19.0.13
    AGENT_PORT: 7800
    AGENT_CM_GROUP: CM2
    AGENT_CM_ID: 0
parameter check for Agent[2] Done 
parameter check for Agent[3] Started 
    AGENT_ID: agent4
    AGENT_HOST: 172.19.0.14
    AGENT_PORT: 7900
    AGENT_CM_GROUP: CM2
    AGENT_CM_ID: 1
parameter check for Agent[3] Done 

* Checking prs_instance.map file... 
[/home/prosync/example/config/prs_instance.map] exists.

********************************************************************************
* Agent Install Step (3/4)
* Writing agent config file         [/home/prosync/example/install/prs_install_agent.cfg]  
* Writing Instance file             [/home/prosync/example/config/prs_instance.map] 
* Total Agent Cnt                   [4] 
********************************************************************************
* Writing all agents.. cnt: 4
* Writing Agent.. Index:           0
* Writing Agent.. AGENT_ID:        agent1
* Writing Agent.. AGENT_HOST:      172.19.0.11
* Writing Agent.. AGENT_PORT:      7600
* Writing Agent.. AGENT_CM_GROUP:  CM1
* Writing Agent.. AGENT_CM_ID:     0
* Writing Agent configuration... 
/home/prosync/example/config/prs_agent_agent1.cfg not found. Writing files...
Done.

* Writing Agent information... 


* Writing Agent.. Index:           1
* Writing Agent.. AGENT_ID:        agent2
* Writing Agent.. AGENT_HOST:      172.19.0.12
* Writing Agent.. AGENT_PORT:      7700
* Writing Agent.. AGENT_CM_GROUP:  CM1
* Writing Agent.. AGENT_CM_ID:     1
* Writing Agent configuration... 
/home/prosync/example/config/prs_agent_agent2.cfg not found. Writing files...
Done.

* Writing Agent information... 


* Writing Agent.. Index:           2
* Writing Agent.. AGENT_ID:        agent3
* Writing Agent.. AGENT_HOST:      172.19.0.13
* Writing Agent.. AGENT_PORT:      7800
* Writing Agent.. AGENT_CM_GROUP:  CM2
* Writing Agent.. AGENT_CM_ID:     0
* Writing Agent configuration... 
/home/prosync/example/config/prs_agent_agent3.cfg not found. Writing files...
Done.

* Writing Agent information... 


* Writing Agent.. Index:           3
* Writing Agent.. AGENT_ID:        agent4
* Writing Agent.. AGENT_HOST:      172.19.0.14
* Writing Agent.. AGENT_PORT:      7900
* Writing Agent.. AGENT_CM_GROUP:  CM2
* Writing Agent.. AGENT_CM_ID:     1
* Writing Agent configuration... 
/home/prosync/example/config/prs_agent_agent4.cfg not found. Writing files...
Done.

* Writing Agent information... 


* Creating var directory...
   * Already Created.
********************************************************************************
* Agent Install Step (4/4)
* Creating files for CM
********************************************************************************
AGENT_CM_ID detected
generating prs_0.sh...
PRS_HOME=/home/prosync/example
AGENT_ID=agent1
prs_0.sh has created in $PRS_HOME/bin/
AGENT_CM_ID detected
generating prs_1.sh...
PRS_HOME=/home/prosync/example
AGENT_ID=agent2
prs_1.sh has created in $PRS_HOME/bin/
AGENT_CM_ID detected
generating prs_0.sh...
PRS_HOME=/home/prosync/example
AGENT_ID=agent3
prs_0.sh has created in $PRS_HOME/bin/
AGENT_CM_ID detected
generating prs_1.sh...
PRS_HOME=/home/prosync/example
AGENT_ID=agent4
prs_1.sh has created in $PRS_HOME/bin/
Agent installation Done.


2. CM Failover Script Configuration

In this script file, set the environment variables needed to start the Agent; be sure to modify TB_HOME, AGENT_ID, and similar values to match your environment.

Example: $PRS_HOME/bin/prs_0.sh

#!/bin/sh

export TB_HOME=/home/tibero/tibero/tibero7   ## set TB_HOME
export PRS_HOME=/home/prosync/example
AGENT_ID=agent1                              ## set agent_id
TIMEOUT_CNT=7
source $PRS_HOME/bin/prs_cm.sh
export logdir=$PRS_HOME/var

echo "`date +%Y/%m/%d\ %H:%M:%S` cm agent Start (Agent Command $1)" >> $logdir/cmagent.log
case $1 in
START)
    echo "start $TIMEOUT_CNT $AGENT_ID" >> $logdir/cmagent.log
    start $TIMEOUT_CNT $AGENT_ID
    rc=$?
    ;;
PROBE)
    echo "probe $TIMEOUT_CNT $AGENT_ID" >> $logdir/cmagent.log
    probe $TIMEOUT_CNT $AGENT_ID
    rc=$?
    ;;
DOWN)
    echo "stop $TIMEOUT_CNT $AGENT_ID" >> $logdir/cmagent.log
    stop $TIMEOUT_CNT $AGENT_ID
    rc=$?
    ;;
KILL)
    echo "stop $TIMEOUT_CNT $AGENT_ID" >> $logdir/cmagent.log
    stop $TIMEOUT_CNT $AGENT_ID
    rc=$?
    ;;
NOTI)
    rc=0
    ;;
COMMIT)
    echo "send $TIMEOUT_CNT $AGENT_ID ${@:2}" >> $logdir/cmagent.log
    send $TIMEOUT_CNT $AGENT_ID ${@:2}
    rc=$?
    ;;

*)
    ;;
esac
echo "`date +%Y/%m/%d\ %H:%M:%S` cm agent End  (Agent Command $1)" >> $logdir/cmagent.log


exit $rc

Note

The source command used in the script does not exist in POSIX standard sh.

If running this script from CM fails with error 127 even though there is no permission problem, changing /bin/sh to /bin/bash in the script's shebang resolves the issue.
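
A minimal sketch of that change, assuming the rest of the generated script stays exactly as shown above:

#!/bin/bash                                  ## changed from #!/bin/sh so the source builtin is available

export TB_HOME=/home/tibero/tibero/tibero7   ## set TB_HOME
export PRS_HOME=/home/prosync/example
AGENT_ID=agent1                              ## set agent_id
TIMEOUT_CNT=7
source $PRS_HOME/bin/prs_cm.sh
## ... the remainder of prs_0.sh is unchanged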


3. Agent Parameter Configuration

To use the CM failover feature, set the following parameters in the prs_agent_<agent_id>.cfg file.

USE_CM
    Sets whether CM is used. When set to Y, the agent monitors the status of its child processes and restarts them when a failure occurs. When an internal failure occurs, it requests a stop through cmrctl. (Default: N)

PRS_AGENT_PROC_MAX_FAIL_CNT
    Maximum number of retries for recovering the status of a child process after a failure. (Default: 10)

PRS_AGENT_PROC_STARTUP_TIMEOUT
    Maximum time, in seconds, to wait for a process start request to be processed successfully. If the start request is not processed, the agent waits for the configured time and then retries.

PRS_AGENT_PROC_STATUS_REPLY_TIMEOUT
    Maximum time, in seconds, to wait for a reply from a child process. If it is exceeded, the process is considered hung and is restarted.

PRS_AGENT_PROC_STATUS_INTERVAL
    Interval at which the Agent process sends the cmrctl check command to CM.

PRS_AGENT_CM_PROBE_TIMEOUT
    Maximum time, in seconds, to wait when checking the CM status. When set to 0, the agent waits indefinitely for the cmrctl response. If the configured value is exceeded, CM is considered hung and the Agent and its child processes are cleaned up. (Default: 0) Note: because the response is received by bypassing standard output, set this value to 0 if response delays are a concern.

Note

The PRS_AGENT_PROC_MAX_FAIL_CNT, PRS_AGENT_PROC_STARTUP_TIMEOUT, PRS_AGENT_PROC_STATUS_REPLY_TIMEOUT, PRS_AGENT_PROC_STATUS_INTERVAL, and PRS_AGENT_CM_PROBE_TIMEOUT parameters take effect only when USE_CM is set to Y.

Agent config file example

Example: $PRS_HOME/config/prs_agent_agent1.cfg

cd $PRS_HOME/config
vi prs_agent_agent1.cfg
################################################################################
# 
# ProSync Agent Configurations (Template)
#
################################################################################

#-------------------------------------------------------------------------------
# NORMAL
#-------------------------------------------------------------------------------
LISTENER_PORT = 7603                                            

#-------------------------------------------------------------------------------
# LOG
#-------------------------------------------------------------------------------
LOG_LEVEL = 3
LOG_DIR = /home/prosync/example/var/agent/agent1/log                  
LOG_BACKUP_DIR = /home/prosync/example/var/agent/agent1/log/backup         
#-------------------------------------------------------------------------------
# CM
#-------------------------------------------------------------------------------
USE_CM = Y
PRS_AGENT_PROC_MAX_FAIL_CNT = 10		#Dependency Param : USE_CM=Y
PRS_AGENT_PROC_STARTUP_TIMEOUT = 10	        #Dependency Param : USE_CM=Y
PRS_AGENT_PROC_STATUS_REPLY_TIMEOUT = 10	#Dependency Param : USE_CM=Y
PRS_AGENT_PROC_STATUS_INTERVAL = 3		#Dependency Param : USE_CM=Y
PRS_AGENT_CM_PROBE_TIMEOUT = 0			#Dependency Param : USE_CM=Y


4. Instance Installation

Example prs_install.cfg configuration

############################################
#
# ProSync Installation Parameters
#
############################################

# (Mandatory)
INSTANCE_ID=TAC2TAC
PRS_USER=TAC2TAC
PRS_PWD=TAC2TAC

# (Optional)
#PRS_TS_NAME=
#PRS_TS_FILE=
#PRS_TS_SIZE=
#PRS_SKIP_USER_CREATE=N
#PRS_TARGET_MIN_PRIVILEGE=N
#PRS_LOG_DIR=
#LOG_BACKUP_DIR=
#CREATE_DSN_FILE=N
#DSN_DIR=
#DSN_FILE=#need DSN_DIR

############################################
#
# ProSync Processes Informations for Instance Map
#
############################################
### (Optional)
AGENT_LIST_DELIMITER=,

### (Mandatory)
# Ext process
## Ext cnt must be the same as SRC_DB_CNT
EXT_CNT=2
EXT_AGENT_ID_LIST_0=agent1,agent2
EXT_AGENT_ID_LIST_1=agent2,agent1

# Apply process
APPLY_PORT=7620
APPLY_AGENT_ID_LIST=agent3,agent4

# Llob process
LLOB_PORT=7630
LLOB_AGENT_ID_LIST=agent1,agent2

############################################
#
# Source database informations
#
############################################

# (Mandatory)
SRC_DB_TYPE=TIBERO
SRC_DB_NAME=tibero
SRC_INSTALL_USER=sys
SRC_INSTALL_PWD=tibero

# (Optional)
#AUTO_ADD_SUPP_LOG=Y
#SRC_SKIP_TS_CREATE=N

# (Number Of Database Instances)
SRC_DB_CNT=2

# (Dsn)
#SRC_DB_REAL_NAME=
#SRC_DB_IP_0=
#SRC_DB_PORT_0=
#SRC_DB_IP_1=
#SRC_DB_PORT_1=

# (for Cluster, only Extract Needed)
SRC_DB_ALIAS_0=tibero1
SRC_DB_ALIAS_1=tibero2

# (for Oracle Logminer only)
# Oracle 11g or less, use utl_file_dir instead of DICT_FILE_DIR
#USE_LOGMNR=Y
#DICT_FILE_DIR=

# (for MySQL only)
#PRS_EXT_IP=

############################################
#
# Target database informations
#
############################################

# (Mandatory)
TAR_DB_TYPE=TIBERO
TAR_DB_NAME=tibero_tar
TAR_INSTALL_USER=sys
TAR_INSTALL_PWD=tibero

# (Optional)
#TAR_SKIP_TS_CREATE=N

# (Number Of Database Instances)
TAR_DB_CNT=2

# (Dsn)
#TAR_DB_REAL_NAME=
#TAR_DB_IP_0=
#TAR_DB_PORT_0=
#TAR_DB_IP_1=
#TAR_DB_PORT_1=

# (for MySQL only)
#PRS_APPLY_IP=

# (for multi thread)
#GROUP_NUM=1

# (for TDE(transparent data encryption) synchronization)
#USE_TDE=N

Example of instance installation and its output

$ cd $PRS_HOME/install
$ . prs_install.sh
********************************************************************************
* Check System Parameters (1/5)
********************************************************************************
Checking system type... Linux
Checking PRS_HOME... /home/prosync/example
Checking TB_HOME... /home/tibero/tibero/tibero7
Checking ORACLE_HOME... not found
Checking psql ... not available
Checking psql service ...not found

********************************************************************************
* Check Installation Parameters (2/5)
********************************************************************************
Checking configuration file... ok
Checking prs_instance.map file... ok
Checking Instance Id from prs_instance.map...
* instance information for TAC2TAC was not found. It will be written in this script...
ok
Checking proecsses infos...
* Checking Ext processes Informations... EXT_CNT: 2
* Delimiter: [,]
   * Ext[0] process's Agent Id List 
       * Agent ID[agent1]
       * [agent1] found.
       * Agent ID[agent2]
       * [agent2] found.
   * Ext[1] process's Agent Id List 
       * Agent ID[agent2]
       * [agent2] found.
       * Agent ID[agent1]
       * [agent1] found.
* Checking Apply process Informations...
* Checking Apply Port..
   * APPLY_PORT=[7620]
* Checking Apply process's Agent Id List..
       * Agent ID[agent3]
       * [agent3] found.
       * Agent ID[agent4]
       * [agent4] found.
* Checking Llob process Informations...
* Checking Llob Port..
   * LLOB_PORT=[7630]
* Checking Llob process's Agent Id List..
       * Agent ID[agent1]
       * [agent1] found.
       * Agent ID[agent2]
       * [agent2] found.

Checking INSTANCE_ID... TAC2TAC
Checking PRS_USER... TAC2TAC
Checking PRS_TS_NAME... TAC2TAC_ts
Checking PRS_TS_FILE... TAC2TAC_ts.dtf
Checking PRS_TS_SIZE... 1G
Checking PRS_EXT_IP... %
Checking PRS_APPLY_IP... %
Checking SRC_DB_CNT... 2
Checking SRC_DB_TYPE... TIBERO
Checking SRC_INSTALL_USER... sys
Checking SRC_DB_NAME... tibero
Checking source connection(1/2)... ok
Checking source connection(2/2)... ok
_list_create: Variable TAR_DB_IP_0 is undefined or empty.
_list_create: Variable TAR_DB_PORT_0 is undefined or empty.
Checking TAR_DB_TYPE... TIBERO
Checking TAR_INSTALL_USER... sys
Checking TAR_DB_NAME... tibero_tar
Checking target connection [tibero_tar](1/2)... ok
Checking target connection [tibero_tar](2/2)... ok

********************************************************************************
* Install to Source Database (3/5)
********************************************************************************
Checking archived log mode... ok
Usage: _list_get <list> <index>
Checking log mining parameters [tibero1]... ok
Checking log mining parameters [tibero2]... ok
Usage: _list_get <list> <index>
Get Archive log directory [tibero1]... ok
Get Archive log format [tibero1]... ok
Get Archive log directory [tibero2]... ok
Get Archive log format [tibero2]... ok
Dropping ProSync objects... ok
Dropping ProSync tablespace... DROP TABLESPACE instance_ts INCLUDING CONTENTS AND DATAFILES;

done

ok
Creating ProSync tablespace... ok
Dropping ProSync user... ok
Creating ProSync user... ok
Granting privileges... ok
Granting privileges for trigger... ok
Creating ProSync internal tables... ok
Creating ProSync internal package... ok
Creating ProSync internal trigger... ok
Prosync user TAC2TAC to uppercase TAC2TACok
Checking source object file(group num : 1)... ok
Building source object for TEST%.%... ok
Adding table supplemental log... 4, and one for prosync
  "TAC2TAC"."PRS_DUMMY_TBL" : Supplemental logging set
  "TEST"."T1" : supplemental log already exists, continue.
  "TEST"."T2" : supplemental log already exists, continue.
  "TEST"."T3" : supplemental log already exists, continue.
  TAC2TAC.PRS_DUMMY_TBL : supplemental log already exists, continue.
Counting total source objects... 4
Generating initial DDL history... ok
Checking initial DDL history... ok
Insert default value in PRS_DUMMY_TBL...ok
Querying NLS_CHARACTERSET... UTF8
Querying NLS_NCHAR_CHARACTERSET... UTF16
Switching logfile [tibero1]... ok
Switching logfile [tibero2]... ok
Usage: _list_get <list> <index>
Querying current log sequence [tibero1]... 8
Querying current log sequence [tibero2]... 8
Querying current snapshot#... 93289


********************************************************************************
* Install to Target Database (4/5)
********************************************************************************
Dropping ProSync objects... ok
Dropping ProSync tablespace... DROP TABLESPACE TAC2TAC_ts INCLUDING CONTENTS AND DATAFILES;

done

ok
Creating ProSync tablespace... ok
Dropping ProSync user... ok
Creating ProSync user... ok
Granting privileges... ok
Creating ProSync internal tables... ok
Generating initial construct history [tibero1 (group 1)]... ok
Generating initial construct history [tibero2 (group 1)]... ok
Generating initial commit history... ok

********************************************************************************
* Generate Configuration Parameters (5/5)
********************************************************************************
Generating Wallet... ok
Usage: _list_get <list> <index>
Generating Extract [tibero1] configuration...  ok
Generating Extract [tibero2] configuration...  ok
Generating Apply [tibero_tar] configuration...  ok
Generating LONG/LOB configuration...  ok
Generating instance map to prs_instance.map ...


********************************************************************************
* ProSync is installed successfully on Fri, 18 Apr 2025 02:01:32 +0000.
*
*   PRS_HOME     = /home/prosync/example
*   Binary Path  = /home/prosync/example/bin
*   ProSync User = TAC2TAC
*
*   Archived Log Path     = /home/tibero/tibero/tibero7/database/tibero/archive/
*   Archived Log Format   = log-t%t-r%r-s%s.arc
*   Initial log sequence# = 9
*
*   Archived Log Path     = /home/tibero/tibero/tibero7/database/tibero/archive/
*   Archived Log Format   = log-t%t-r%r-s%s.arc
*   Initial log sequence# = 9
*
*   Initial change# (TSN) = 94003 
*
********************************************************************************


5. CM Configuration

5.1. Shut Down Tibero

$ tbdown

5.2. Shut Down CM

$ tbcm -d

5.3. Add CM_ID to the CM Tip Files

$ cd $TB_HOME/config

Example $TB_HOME/config listing

total 156
drwxr-xr-x  2 tibero tibero  4096 Apr 21 05:54 ./
drwxr-xr-x 13 tibero tibero  4096 Apr 17 10:19 ../
-rw-r--r--  1 tibero tibero    72 Apr  1  2024 .gitignore
-rw-r--r--  1 tibero tibero   474 Apr  1  2024 cm.template
-rw-r--r--  1 tibero tibero   691 Apr 17 10:19 cm_tibero1.tip
-rw-r--r--  1 tibero tibero   691 Apr 17 10:19 cm_tibero2.tip
-rw-r--r--  1 tibero tibero   810 Apr  1  2024 common_tip.template
-rwxr-xr-x  1 tibero tibero   982 Apr  1  2024 gen_QMS_store_dsn_for_sampler.sh*
-rwxr-xr-x  1 tibero tibero  4433 Apr  1  2024 gen_psm_cmd.sh*
-rwxr-xr-x  1 tibero tibero  4886 Apr  1  2024 gen_tip.sh*
-rwxr-xr-x  1 tibero tibero  5938 Apr  1  2024 gen_tip_for_ssvr.sh*
-rwxr-xr-x  1 tibero tibero 11797 Apr  1  2024 gen_tip_for_tac.sh*
-rwxr-xr-x  1 tibero tibero 15732 Apr  1  2024 gen_tip_for_zeta.sh*
-rw-r--r--  1 tibero tibero  1796 Apr  1  2024 ilog.map.example
-rwxr-xr-x  1 tibero tibero  1628 Apr 17 10:19 psm_commands*
-rw-r--r--  1 tibero tibero   567 Apr  1  2024 sampler.ssa.template
-rw-r--r--  1 tibero tibero   488 Apr  1  2024 sampler.template
-rw-r--r--  1 tibero tibero   434 Apr  1  2024 ssvr.template
-rw-r--r--  1 tibero tibero   730 Apr  1  2024 tac.template
-rw-r--r--  1 tibero tibero   572 Apr  1  2024 tas.template
-rw-r--r--  1 tibero tibero   489 Apr  1  2024 tibero.template
-rw-r--r--  1 tibero tibero   936 Apr 21 05:54 tibero1.tip
-rw-r--r--  1 tibero tibero   936 Apr 21 05:54 tibero2.tip
-rw-r--r--  1 tibero tibero   693 Apr  1  2024 tibero_dev.template
-rw-r--r--  1 tibero tibero   344 Apr  1  2024 tibero_max.template
-rw-r--r--  1 tibero tibero   582 Apr  1  2024 tip.ssa.template
-rw-r--r--  1 tibero tibero   525 Apr  1  2024 tip.template
-rw-r--r--  1 tibero tibero   689 Apr  1  2024 tip_dev.template
-rw-r--r--  1 tibero tibero   528 Apr  1  2024 tip_jenkins.template
-rw-r--r--  1 tibero tibero   340 Apr  1  2024 tip_max.template
-rw-r--r--  1 tibero tibero     5 Apr  1  2024 variant

Open each cm_<$TB_SID>.tip file and add CM_ID on the first line. Use the CM_ID values written in prs_install_agent.cfg during agent installation.

Example cm_tibero1.tip

# tip file generated from /home/tibero/tibero/tibero7/config/cm.template (Thu Apr 17 10:19:44 UTC 2025)
#-------------------------------------------------------------------------------
#
# Cluster Manager initialization parameter
#
#-------------------------------------------------------------------------------
CM_ID=0 # Add CM_ID here.
CM_NAME=cm_tibero1
CM_UI_PORT=8645
CM_RESOURCE_FILE=/home/tibero/tibero/tibero7/cmfile/tibero/cm_tibero1.res

CM_LOG_DEST=/home/tibero/tibero/tibero7/instance/tibero1/log/cm
CM_GUARD_LOG_DEST=/home/tibero/tibero/tibero7/instance/tibero1/log/cm/guard

#CM_HEARTBEAT_EXPIRE
#CM_WATCHDOG_EXPIRE
#LOG_LVL_CM

#CM_ENABLE_FAST_NET_ERROR_DETECTION=Y
#CM_FENCE=Y
#_CM_CHECK_RUNLEVEL=Y
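
Likewise, cm_tibero2.tip on the second node gets the CM_ID matching AGENT_CM_ID_1 in prs_install_agent.cfg; a sketch showing only the added line:

CM_ID=1 # added at the top of cm_tibero2.tip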

5.4. Start CM

tbcm -b

5.5. Start Tibero

tbboot

5.6. Check CM

cmrctl show all

Example CM info

Resource List of Node cm_tibero1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 172.19.0.11/8640
     COMMON  cluster     cls_tibero       UP inc: inc1, pub: N/A
 cls_tibero     file   cls_tibero:0       UP /home/tibero/tibero/tibero7/cmfile/tibero/cmdata
 cls_tibero  service         tibero       UP Database, Active Cluster (auto-restart: OFF)
 cls_tibero       db        tibero1 UP(NRML) tibero, /home/tibero/tibero/tibero7, failed retry cnt: 0
=====================================================================
Resource List of Node cm_tibero2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 172.19.0.12/8670
     COMMON  cluster     cls_tibero       UP inc: inc2, pub: N/A
 cls_tibero     file   cls_tibero:0       UP /home/tibero/tibero/tibero7/cmfile/tibero/cmdata
 cls_tibero  service         tibero       UP Database, Active Cluster (auto-restart: OFF)
 cls_tibero       db        tibero2 UP(NRML) tibero, /home/tibero/tibero/tibero7, failed retry cnt: 0
=====================================================================

5.7. Register the CM Group

Register the CM group based on what was written in prs_install_agent.cfg during agent installation.

cmrctl add group --name <group_name> --cname <cluster_name> \
--grptype <type> --failover <true|false>
name (string)
    Name of the group resource. (unique, required)

cname (string)
    Name of the cluster resource to which the group resource belongs. (required)

grptype (string)
    Indicates the type of the group. (required)

failover (string)
    Whether to use the failover feature when the agent terminates. (Default: true)

Example

cmrctl add group --name CM1 --cname cls_tibero --grptype prosync

If the registration succeeds, the following message is printed.

$ Resource add success! (group, CM1)
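
The CM2 group used by agent3 and agent4 is registered in the same way on the cluster where those agents run; a sketch, with the cluster name left as a placeholder for that environment:

cmrctl add group --name CM2 --cname <cluster_name> --grptype prosync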

5.8. Register the CM Agent

cmrctl add agent --name <agent_name> --grpname <group_name> --script <directory_path> \
--pubnet <public_network_resource_name> --retry_cnt <retry_cnt>
name (string)
    Name of the agent resource. (unique, required)

grpname (string)
    Name of the group resource to which the agent resource belongs. (required)

script (string, directory path)
    Absolute path where the script that runs the agent command is located. (required)

pubnet (string)
    Name of the network resource to use for public purposes. Must be specified to add a dependency.

retry_cnt (integer)
    Maximum number of retry attempts. (Default: 3)

Example

cmrctl add agent --name agent1 --grpname CM1 --script "$TB_HOME/prs_0.sh"

If the registration succeeds, the following message is printed.

$ Resource add success! (agent, agent1)
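
agent2 is registered the same way from the second node, following the pattern of the example above (a sketch; the script path is assumed to match the output shown below):

cmrctl add agent --name agent2 --grpname CM1 --script "$TB_HOME/prs_1.sh"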

Checking CM then shows that the group and agent have been registered, as follows.

cmrctl show all

Example CM info

Resource List of Node cm_tibero1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 172.19.0.11/8640
     COMMON  cluster     cls_tibero       UP inc: inc1, pub: N/A
 cls_tibero     file   cls_tibero:0       UP /home/tibero/tibero/tibero7/cmfile/tibero/cmdata
 cls_tibero  service         tibero       UP Database, Active Cluster (auto-restart: OFF)
 cls_tibero       db        tibero1 UP(NRML) tibero, /home/tibero/tibero/tibero7, failed retry cnt: 0
 cls_tibero    group            CM1     DOWN type: prosync (failover: ON)
 cls_tibero    agent         agent1     DOWN /home/tibero/tibero/tibero7/prs_0.sh, start retry cnt: 0
=====================================================================
Resource List of Node cm_tibero2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 172.19.0.12/8670
     COMMON  cluster     cls_tibero       UP inc: inc2, pub: N/A
 cls_tibero     file   cls_tibero:0       UP /home/tibero/tibero/tibero7/cmfile/tibero/cmdata
 cls_tibero  service         tibero       UP Database, Active Cluster (auto-restart: OFF)
 cls_tibero       db        tibero2 UP(NRML) tibero, /home/tibero/tibero/tibero7, failed retry cnt: 0
 cls_tibero    group            CM1     DOWN type: prosync (failover: ON)
 cls_tibero    agent         agent2     DOWN /home/tibero/tibero/tibero7/prs_1.sh, start retry cnt: 0
=====================================================================

5.9. Start the CM Group

Start the CM group so that ProSync is booted through CM and the failover feature can be used.

cmrctl start group --name CM1

Example output

=================================== SUCCESS! ===================================
 Succeeded to request at each node to boot resources under the group(CM1).
 Please use "cmrctl show group --name CM1" to verify the result.
================================================================================
Resource List of Node cm_tibero1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 172.19.0.11/8640
     COMMON  cluster     cls_tibero       UP inc: inc1, pub: N/A
 cls_tibero     file   cls_tibero:0       UP /home/tibero/tibero/tibero7/cmfile/tibero/cmdata
 cls_tibero  service         tibero       UP Database, Active Cluster (auto-restart: OFF)
 cls_tibero       db        tibero1 UP(NRML) tibero, /home/tibero/tibero/tibero7, failed retry cnt: 0
 cls_tibero    group            CM1       UP type: prosync (failover: ON)
 cls_tibero    agent         agent1       UP /home/tibero/tibero/tibero7/prs_0.sh, start retry cnt: 0
=====================================================================
Resource List of Node cm_tibero2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 172.19.0.12/8670
     COMMON  cluster     cls_tibero       UP inc: inc2, pub: N/A
 cls_tibero     file   cls_tibero:0       UP /home/tibero/tibero/tibero7/cmfile/tibero/cmdata
 cls_tibero  service         tibero       UP Database, Active Cluster (auto-restart: OFF)
 cls_tibero       db        tibero2 UP(NRML) tibero, /home/tibero/tibero/tibero7, failed retry cnt: 0
 cls_tibero    group            CM1       UP type: prosync (failover: ON)
 cls_tibero    agent         agent2       UP /home/tibero/tibero/tibero7/prs_1.sh, start retry cnt: 0
=====================================================================


6. Verify ProSync Startup

Verify that ProSync has started using the admin process.

$ prs_adm

ProSync 4 - Admin Utility

TmaxData Corporation Copyright (c) 2024-. All rights reserved.

Admin> status

prs_agent ID: agent1, HOST: 172.19.0.11, PORT: 7600, CM_GROUP: CM1, CM_ID: 0 is running
prs_agent ID: agent2, HOST: 172.19.0.12, PORT: 7700, CM_GROUP: CM1, CM_ID: 1 is running
prs_agent ID: agent3, HOST: 172.19.0.13, PORT: 7800, CM_GROUP: CM2, CM_ID: 0 is running
prs_agent ID: agent4, HOST: 172.19.0.14, PORT: 7900, CM_GROUP: CM2, CM_ID: 1 is running


Instance ID: [prosync]
prosync_ext1 (1) is running (prs_agent ID : agent1, HOST: 172.19.0.11, PORT: 7600)
prosync_ext2 (2) is running (prs_agent ID : agent2, HOST: 172.19.0.12, PORT: 7700)
prosync_apply1 is running (prs_agent ID : agent3, HOST: 172.19.0.13, PORT: 7800)
prosync_llob (1) is running (prs_agent ID : agent1, HOST: 172.19.0.11, PORT: 7600)

When the CM failover feature is used, log files from starting the agents through CM are created under $PRS_HOME/var.

  • cmagent.log

  • prs_cm_<$agent_id>.log

  • prs_cm_sh_err_<$agent_id>.log
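
For example, agent startup activity can be followed in cmagent.log (a usage sketch; the file is created once an agent has been started through CM):

$ tail -f $PRS_HOME/var/cmagent.log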
