Storage Server Installation and Configuration

This chapter describes how to install a storage server and register disks and flash devices. It also explains how to create disk space and tablespaces and install the DB by using SSVR instances from TAS and TAC instances.

Node Specifications

All installation and configuration examples in this chapter are based on the specifications of the DB node and Storage node shown in the table below. The specifications of each node are the main factors that affect ZetaData configuration and system performance, so it is important to verify them before starting configuration.

Item                   DB Node Specifications   Storage Node Specifications
---------------------  -----------------------  ----------------------------
Memory                 256GB                    96GB
Disk configuration     2TB NVMe x 4             4TB HDD x 12, 2TB NVMe x 4

There are several ways to check memory and disk capacity.

The following is an example of checking the physical memory capacity of nodes.

$ grep MemTotal /proc/meminfo 
MemTotal: 98707668 kB

The following is an example of checking the disk capacity of a node.

$ fdisk -l
Disk /dev/sdd: 4000.8 GB, 4000753475584 bytes, 7813971632 sectors 
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes 
I/O size (minimum/optimal): 262144 bytes / 262144 bytes 
Disk label type: dos
Disk identifier: 0x00000000
...
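Other common ways to check the same information are the free command for memory and the lsblk command for raw block device sizes; the following is a minimal sketch, and the available options may vary by distribution.

$ free -g
$ lsblk -b -d -o NAME,SIZE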

Note

DB nodes require a high performance CPU and plenty of memory, and disk is primarily used for OS storage. Storage nodes, on the other hand, focus on storing and serving data, so disk capacity and speed are more important factors. For the same cost, it is recommended to configure the DB node to focus on CPU and memory, and the Storage node to focus on disk and flash devices.


RAID Installation

Case 1. Installing the OS without using RAID

It is assumed that a storage node has twelve 4 TB disks. The OS and the storage server's binary and control file are installed in 400 GB of space allocated to one of the disks.

If RAID is not used, then 3.6 TB of space (excluding the 400 GB where the OS is installed) is configured as a single partition. Therefore, the storage node can be configured by using the 3.6 TB partition of the disk in which the OS is installed, and eleven 4 TB disks.

Case 2. Installing the OS using RAID

It is assumed that a storage node has twelve 4 TB disks, and two of them are configured as RAID 1 mirroring. The OS and the storage server's binary and control file are installed in 400 GB of space allocated to the RAID1.

After installing the OS, configure the remaining 3.6 TB on each of the two mirrored disks (7.2 TB in total) as two separate 3.6 TB RAID 0 volumes. Therefore, excluding the space where the OS is installed on the RAID 1 mirror, the storage node can be configured using the two 3.6 TB volumes and the remaining ten 4 TB disks.


Storage node disk configuration

The following is the process of preparing shared disks for installation. The process must be performed on all nodes.

Storage node disks can be made usable simply by modifying their permissions or ownership. However, device names can change after the OS reboots. To prevent this issue, it is recommended to configure storage node disks using udev.

The following briefly describes udev.

  • udev is system software that handles device events, manages device node permissions, and creates or renames symbolic links in the /dev directory or for network interfaces.

  • When a device is detected by the kernel, udev gathers properties such as the serial number or bus device number from the sysfs directory to determine a unique name for the device. udev keeps track of devices in the /sys file system based on their major and minor numbers, and uses system memory and sysfs to manage device information.

  • When the kernel raises an event because a module is loaded or a device is added or removed, udev follows its rules to set the device file name, create symbolic links, set file permissions, and so on.

The following is an example of storage node disks configured using udev.

$ ls -al /dev
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk0 -> sda
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk1 -> sdb
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk2 -> sdc
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk3 -> sdd
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk4 -> sde
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk5 -> sdf
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk6 -> sdg
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk7 -> sdh
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk8 -> sdi
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk9 -> sdj
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk10 -> sdk
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk11 -> sdl
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/flash0 -> nvme0n1
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/flash1 -> nvme1n1
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/flash2 -> nvme2n1
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/flash3 -> nvme3n1

The /dev/diskN and /dev/flashN entries in the example above are symbolic links created according to the udev rules.

The following is an example of the udev rules file for configuring disks.

$ cat /etc/udev/rules.d/zetadisk.rules
KERNEL=="sdb", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32750", \
SYMLINK+="disk0", OWNER="zeta", MODE="0600"

KERNEL=="sdc", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32751", \
SYMLINK+="disk1", OWNER="zeta", MODE="0600"

KERNEL=="sdd", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32752", \
SYMLINK+="disk2", OWNER="zeta", MODE="0600"

KERNEL=="sde", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32753", \
SYMLINK+="disk3", OWNER="zeta", MODE="0600"

KERNEL=="sdf", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32754", \
SYMLINK+="disk4", OWNER="zeta", MODE="0600"

KERNEL=="sdg", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32755", \
SYMLINK+="disk5", OWNER="zeta", MODE="0600"

KERNEL=="sdh", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32756", \
SYMLINK+="disk6", OWNER="zeta", MODE="0600"

KERNEL=="sdi", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32757", \
SYMLINK+="disk7", OWNER="zeta", MODE="0600"

KERNEL=="sdj", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32758", \
SYMLINK+="disk8", OWNER="zeta", MODE="0600"

KERNEL=="sdk", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32759", \
SYMLINK+="disk9", OWNER="zeta", MODE="0600"

KERNEL=="sdl", SUBSYSTEM=="block", \
ENV{ID_SERIAL}=="3600508b1001cdc32710", \
SYMLINK+="disk10", OWNER="zeta", MODE="0600"

KERNEL=="nvme0n1", SYMLINK+="flash0", OWNER="zeta", MODE="0600"
KERNEL=="nvme1n1", SYMLINK+="flash1", OWNER="zeta", MODE="0600"
KERNEL=="nvme2n1", SYMLINK+="flash2", OWNER="zeta", MODE="0600"
KERNEL=="nvme3n1", SYMLINK+="flash3", OWNER="zeta", MODE="0600"

The udev rules file must be saved with the .rules extension in the /etc/udev/rules.d folder.
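After creating or editing the rules file, the rules can usually be applied without rebooting and the resulting symbolic links verified as follows. This is a minimal sketch assuming a distribution such as RHEL 7; a reboot achieves the same result.

$ udevadm control --reload-rules
$ udevadm trigger --type=devices --action=add
$ ls -l /dev/disk0
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk0 -> sda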

The rules in the example mean the following: among block devices (SUBSYSTEM=="block") whose kernel name matches (for example, KERNEL=="sdb"), find the device whose serial number matches the specified value (ENV{ID_SERIAL}=="..."), set its owner and permissions, and create the given symbolic link.

The SCSI ID of a device can be checked by executing /usr/lib/udev/scsi_id (on RHEL 7); depending on the OS version, it may instead be located at /lib/udev/scsi_id or /sbin/scsi_id. This program must be executed with administrator privileges. The following is an example of checking the SCSI ID.

$ /usr/lib/udev/scsi_id --whitelisted --device=/dev/sda 
3600508b1001cdc32750


Kernel Parameter Configuration

For proper operation of ZetaData, several kernel parameters must be configured. The location of the kernel parameter configuration file is as follows.

/etc/sysctl.conf

The following is an example of configuring kernel parameters.

kernel.sem = 100000 100000 100000 100000
kernel.shmmax = 17179869184
kernel.shmall = 24718805
fs.aio-max-nr = 4194304
fs.file-max = 8388608
vm.max_map_count = 262144

kernel.shmmax parameter

  • It refers to the maximum value in bytes that can be allocated to a single shared memory segment.

  • The value should be larger than the total shared memory to be allocated to the SSVR instance.
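For example, the kernel.shmmax value in the sample configuration above is 16 GiB, which is larger than the TOTAL_SHM_SIZE of 15G used for the SSVR instance later in this chapter. A quick check with bash arithmetic:

$ echo $(( 17179869184 / 1024**3 ))GiB
16GiB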

kernel.shmall parameter

  • It refers to the total number of pages of shared memory available system-wide.

  • The value should be set to at least the total shared memory to be allocated to the SSVR instance divided by the page size (because the SSVR instance generally uses most of the system's shared memory resources).

The following is the command to check the page size of the system.

$ getconf PAGE_SIZE 
4096
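As a rough sanity check, the sample kernel.shmall value above corresponds to roughly 94 GiB of system-wide shared memory with a 4 KB page size, which comfortably covers the shared memory allocated to the SSVR instance in this chapter:

$ echo $(( 24718805 * 4096 / 1024**3 ))GiB
94GiB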

fs.aio-max-nr parameter

  • It refers to the maximum number of asynchronous I/O requests that the system can handle simultaneously.

  • The value should be set to an appropriate value based on monitoring of actual usage.

fs.file-max parameter

  • It refers to the maximum number of file descriptors that can be open simultaneously on the system.

  • The value should be adjusted according to the scale of the system.

vm.max_map_count parameter

  • It refers to the maximum number of memory mappings a process can have.

  • When using RDMA over InfiniBand, memory mappings are frequently created while registering internal library resources and memory regions with RDMA, so the value should be set generously.

To use a TCP socket for storage and DB node communication, configure the maximum size of the socket buffer as follows.

net.core.rmem_default = 4194304
net.core.wmem_default = 4194304
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
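After editing /etc/sysctl.conf, the settings can typically be applied and verified without a reboot as follows (a minimal sketch; run with administrator privileges).

$ sysctl -p /etc/sysctl.conf
$ sysctl kernel.shmmax kernel.shmall fs.aio-max-nr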


Environment Variables

The following are the environment variables for using an SSVR instance.

Variable
Description

$TB_HOME

Home directory where a storage server is installed.

$TB_SID

Service ID that identifies a storage server instance.

Note

The environment variables used in an SSVR instance are the same as in Tibero.

For more information, refer to "Tibero Installation Guide".
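The following is a minimal sketch of setting these variables before working with an SSVR instance; the installation path shown is an assumption based on the examples in this chapter, so replace it with the actual installation directory.

$ export TB_HOME=/home/tibero/zetadata
$ export TB_SID=ssvr0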


Initialization Parameters

The initialization parameters to install ZetaData are basically the same as Tibero, and the following parameters should be configured.

Initialization Parameters
Descriptions

SSVR_RECV_PORT_START

Sets the starting port number used by an SSVR instance. (Range: 1024 - 65535)

SSVR_USE_TCP

Set to "Y" to use the TCP protocol instead of the RDMA protocol. Set the same value in the initialization parameters of the TAS and TAC instances. (Default: N)

SSVR_USE_IB

Set to "Y" to use the RDMA protocol; in this case the USE_ZETA parameter must also be set to "Y". If not set, it defaults to the opposite of the SSVR_USE_TCP parameter value.

SSVR_USE_FC

Indicates whether to use flash cache. (default: Y)

SSVR_USE_SDM

Indicates whether to use storage data maps. (default: Y)

SSVR_USE_AGNT

Indicates whether to enable the agent process, which manages or assists with internal tasks and collects and sends Tibero Performance Monitor (TPM) related performance metrics. (Default: N)

SSVR_USE_TPM

Indicates whether to enable the TPM (default: N).

SSVR_TPM_SENDER_INTERVAL

Sets the interval for checking sender connection status when TPM is enabled. (Default: 50)

SSVR_WTHR_CNT

Indicates the number of worker threads that the SSVR instance uses to perform I/O. If not set, it is automatically set in proportion to the number of CPUs, so it normally does not need to be set.

(Range: SSVR_WTHR_CNT > 0)

INSTANCE_TYPE

Indicates the type of instance and is set for each instance.

  • TAS Instance: AS

  • SSVR Instance: SSVR

USE_ZETA

Configures the instance for ZetaData. If not set explicitly, it is set to "Y" only when the INSTANCE_TYPE parameter value is "SSVR".

The following is an example of setting the SSVR instance initialization parameters for Storage Node #0.

# ssvr0.tip 
INSTANCE_TYPE=SSVR 
LISTENER_PORT=9100
CONTROL_FILES="/home/tibero/zetadata/database/ssvr0/c1.ctl"

TOTAL_SHM_SIZE=15G 
MEMORY_TARGET=70G

SSVR_RECV_PORT_START=9110

The MEMORY_TARGET parameter is the total amount of memory to be used by the SSVR instance.

Because the InfiniBand libraries use a large amount of external memory for connections (4 MB per connection), it is recommended to consult technical support before setting this parameter.

The formula is as follows:

SSVR Instance
MEMORY_TARGET = (Total memory)
                - [(The number of DB nodes)
                * {(Max Session Count in the TAC instance)
                + (The total number of PEP threads in the TAC instance)}] * 4MB
                - 10GB (= connection margin for the OS and other threads)
** (The total number of PEP threads in the TAC instance) = (The number of PEP processes in the TAC instance)
                                        * (The number of threads per PEP process)
** The total number of PEP threads is calculated per TAC instance.
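The following is a rough worked example of this formula in bash arithmetic, using the node specifications from this chapter (96 GB of memory on the storage node, two DB nodes, MAX_SESSION_COUNT=300). The figure of 64 PEP threads per TAC instance is purely hypothetical, so substitute the actual values for your system.

$ # (total memory) - (DB nodes) * (max sessions + PEP threads) * 4MB - 10GB, in MB
$ echo $(( 96*1024 - 2*(300 + 64)*4 - 10*1024 ))MB
85152MB

The sample ssvr0.tip above uses a more conservative MEMORY_TARGET of 70G, which stays well within this bound.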

The TOTAL_SHM_SIZE parameter is the total amount of shared memory to be used by the SSVR instance.

The minimum value can be calculated by taking the 3 GB required by default to start the SSVR instance, adding 150 MB per 1 TB of disk, and then adding an extra allowance of 1 GB to 2 GB per 10 GB of disk to the result.
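For the sample storage node in this chapter (twelve 4 TB disks, about 48 TB of storage disks), the base requirement works out to roughly 10 GB, and adding the allowance described above is consistent with the TOTAL_SHM_SIZE=15G used in the sample ssvr0.tip. A quick bash calculation of the base requirement:

$ # 3 GB base + 150 MB per 1 TB of disk (12 disks x 4 TB = 48 TB), in MB
$ echo $(( 3*1024 + 48*150 ))MB
10272MB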


Connection Information for Using tbSQL

To connect to a storage server by using tbSQL, configure the connection information in the $TB_HOME/client/config/tbdsn.tbr file.

The configuration method is the same as in Tibero with the exception that DB_NAME is not required.

The following is an example of configuring the file, $TB_HOME/client/config/tbdsn.tbr.

ssvr0=((INSTANCE=(HOST=10.10.10.13)(PORT=9100))) 
ssvr1=((INSTANCE=(HOST=10.10.10.14)(PORT=9100))) 
ssvr2=((INSTANCE=(HOST=10.10.10.15)(PORT=9100))) 
tas0=((INSTANCE=(HOST=10.10.10.11)(PORT=9120))) 
tas1=((INSTANCE=(HOST=10.10.10.12)(PORT=9120)))
tac0=((INSTANCE=(HOST=10.10.10.11)(PORT=9150)(DB_NAME=TAC))) 
tac1=((INSTANCE=(HOST=10.10.10.12)(PORT=9150)(DB_NAME=TAC)))
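Once the DSN entries are in place, a connection can be tested by appending the DSN name to the credentials in tbSQL. The following is a minimal sketch, assuming the default sys password used elsewhere in this chapter.

$ tbsql sys/tibero@ssvr0

tbSQL 7

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.
Connected to Tibero.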


Configuration of SSVR instance

To configure an SSVR instance, perform the following steps.

In this example, three Storage nodes and two DB nodes are configured.

  1. Create and start an SSVR instance

  2. Create a storage disk

  3. Create a grid disk

  4. Create a flash cache

1. Create and start an SSVR instance

Below is the process of creating an SSVR instance based on the settings in $TB_HOME/config/$TB_SID.tip.

Check the contents of the initialization parameters; the control file is created according to them. Start the SSVR instance in nomount mode and execute the CREATE STORAGE SERVER statement to create the storage server. After creation, the SSVR instance shuts down automatically.

$ export TB_SID=ssvr0
$ tbboot -t nomount
Listener port = 9100

Tibero 7

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Tibero instance started up (NOMOUNT mode).
$ tbsql sys/tibero

tbSQL 7

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Connected to Tibero.
SQL> create storage server;
created.

To restart the SSVR instance, start it in mount mode.

$ tbboot -t mount
Listener port = 9100 

Tibero 7

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Tibero instance started up (MOUNT mode).

2. Create a storage disk

A storage disk is a physical disk that is used by the SSVR instance.

To use the disks on the Storage node, each disk must be registered as a storage disk for the SSVR instance.

The following is the command to register a storage disk.

create storage disk {storage-disk-name} path {path} size {storage-disk-size}
Parameter
Description

storage disk

{storage-disk-name}

The name of the storage disk to register with the SSVR instance. It only needs to have a unique name within each SSVR instance.

path {path}

The path to the disk on the Storage node.

size {storage-disk-size}

The size of the storage disk to register with the SSVR instance. The default unit is bytes, which can be further denominated as K (KiB), M (MiB), G (GiB), T (TiB), P (PiB), or E (EiB).

The unit of capacity for storage disks is binary: 1T (TiB) = 1024G (GiB). The capacity reported by the fdisk command is calculated as 1TB = 1000GB, so the storage disk size must be converted to units of 1T = 1024G before it is set.

This is an optional parameter and will contain the total capacity of the device if not specified.
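As a quick check, the GiB value to pass to size can be derived from the byte count reported by fdisk. Using the 4 TB disk from the earlier fdisk output (bash integer arithmetic truncates toward zero):

$ echo $(( 4000753475584 / 1024**3 ))G
3725G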

For the disk where the OS, the SSVR binaries, and the control files are installed, configure the storage disk capacity as 4 TB minus 400 GB. The examples below assume they are installed on the first disk.

$ tbsql sys/tibero

tbSQL 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Connected to Tibero.

SQL> create storage disk SD00 path '/dev/disk0' size 3350G;
created.
SQL> create storage disk SD01 path '/dev/disk1' size 3725G;
created.
SQL> create storage disk SD02 path '/dev/disk2' size 3725G;
created.
SQL> create storage disk SD03 path '/dev/disk3' size 3725G;
created.
SQL> create storage disk SD04 path '/dev/disk4' size 3725G;
created.
SQL> create storage disk SD05 path '/dev/disk5' size 3725G;
created.
SQL> create storage disk SD06 path '/dev/disk6' size 3725G;
created.
SQL> create storage disk SD07 path '/dev/disk7' size 3725G;
created.
SQL> create storage disk SD08 path '/dev/disk8' size 3725G;
created.
SQL> create storage disk SD09 path '/dev/disk9' size 3725G;
created.
SQL> create storage disk SD10 path '/dev/disk10' size 3725G;
created.
SQL> create storage disk SD11 path '/dev/disk11' size 3725G;
created.

Storage disk information can be viewed through the V$SSVR_STORAGE_DISK view.

SQL> select * from v$ssvr_storage_disk;

STORAGE_DISK_NUMBER NAME PATH OS_BYTES
------------------- ------ ----------- ---------
                  0 SD00 /dev/disk0 4.000E+12
                  1 SD01 /dev/disk1 4.000E+12
                  2 SD02 /dev/disk2 4.000E+12
                  3 SD03 /dev/disk3 4.000E+12
                  4 SD04 /dev/disk4 4.000E+12
                  5 SD05 /dev/disk5 4.000E+12
                  6 SD06 /dev/disk6 4.000E+12
                  7 SD07 /dev/disk7 4.000E+12
                  8 SD08 /dev/disk8 4.000E+12
                  9 SD09 /dev/disk9 4.000E+12
                 10 SD10 /dev/disk10 4.000E+12
                 11 SD11 /dev/disk11 4.000E+12

3. Create a grid disk

A grid disk is a disk that is visible from outside the SSVR instance and must be contained within a single storage disk.

The following is the command to register a grid disk.

create grid disk {grid-disk-name} storage disk {storage-disk-name} \ 
   offset {offset} size {grid-disk-size}

Parameter
Description

grid disk {grid-disk-name}

The name of the grid disk to register with the SSVR instance. It only needs to have a unique name within each SSVR instance.

storage disk

{storage-disk-name}

The name of the storage disk registered with the SSVR instance. The name can be viewed through the V$SSVR_STORAGE_DISK view.

offset {offset}

Specifies an offset within the storage disk. This is an optional parameter and its use is not recommended. If used, it must be a multiple of 32 KB.

size {grid-disk-size}

The size of the grid disk to register with the SSVR instance. The default unit is bytes, which can be further denominated as K (KiB), M (MiB), G (GiB), T (TiB), P (PiB), or E (EiB). It must not be larger than the available capacity of the storage disk. This is an optional parameter; if not specified, the largest multiple of 32 KB that does not exceed the available capacity of the specified storage disk is used.

The following is an example of creating a grid disk using the maximum available capacity of a registered storage disk by omitting the size option.

SQL> create grid disk GD00 storage disk SD00;
created.
SQL> create grid disk GD01 storage disk SD01;
created.
SQL> create grid disk GD02 storage disk SD02;
created.
SQL> create grid disk GD03 storage disk SD03;
created.
SQL> create grid disk GD04 storage disk SD04;
created.
SQL> create grid disk GD05 storage disk SD05;
created.
SQL> create grid disk GD06 storage disk SD06;
created.
SQL> create grid disk GD07 storage disk SD07;
created.
SQL> create grid disk GD08 storage disk SD08;
created.
SQL> create grid disk GD09 storage disk SD09;
created.
SQL> create grid disk GD10 storage disk SD10;
created.
SQL> create grid disk GD11 storage disk SD11;
created.

Grid disk information can be viewed through the V$SSVR_GRID_DISK view.

SQL> select * from v$ssvr_grid_disk;

GRID_DISK_NUMBER NAME STORAGE_DISK_NUMBER STORAGE_DISK_OFFSET TOTAL_BYTES
---------------- ------ ------------------- ------------------- -----------
               0 GD00                     0                   0 4.000E+12
               1 GD01                     1                   0 4.000E+12
               2 GD02                     2                   0 4.000E+12
               3 GD03                     3                   0 4.000E+12
               4 GD04                     4                   0 4.000E+12
               5 GD05                     5                   0 4.000E+12
               6 GD06                     6                   0 4.000E+12
               7 GD07                     7                   0 4.000E+12
               8 GD08                     8                   0 4.000E+12
               9 GD09                     9                   0 4.000E+12
              10 GD10                    10                   0 4.000E+12
              11 GD11                    11                   0 4.000E+12

4. Create a flash cache

To improve I/O performance, register a flash device as a cache by using the following command.

create flashcache {name} path {path} size {flashcache-size}
Parameter
Description

path {path}

The prefix of the flash device name.

For example, if there are four flash devices (/dev/flash0, /dev/flash1, /dev/flash2, and /dev/flash3), the value is '/dev/flash'.

size {flashcache-size}

The size of each flash device. The capacity unit for flash devices is binary: 1T (TiB) = 1024G (GiB). Because the capacity of a flash device reported by the fdisk command is calculated as 1TB = 1000GB, the size must be converted to units of 1T = 1024G before it is entered. This is an optional parameter; if not specified, the total capacity of the device is used.

Currently, only flash devices of the same size are supported. Note that the value must be set to the size of each individual flash device, not the total size of all flash devices.

The following is an example of creating a flash cache with the path prefix "/dev/flash", a start number of 0, and four flash devices of 1490 GiB each. A flash cache cannot be used until the storage server is restarted after the flash cache is created.

SQL> create flashcache flash0 path '/dev/flash0' size 1490G;
created.
SQL> create flashcache flash1 path '/dev/flash1' size 1490G;
created.
SQL> create flashcache flash2 path '/dev/flash2' size 1490G;
created.
SQL> create flashcache flash3 path '/dev/flash3' size 1490G;
created. 
SQL> quit 
Disconnected.

$ tbdown

Tibero instance terminated (NORMAL mode).

$ tbboot -t mount
Listener port = 9100 

Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Tibero instance started up (MOUNT mode).

Flash cache information can be checked through the V$SSVR_FLASHCACHE view.

SQL> select * from v$ssvr_flashcache;
FLASHCACHE_NUMBER NAME PATH OS_BYTES
----------------- ------ ----------- ---------
                0 FC0 /dev/flash0 1.600E+12
                1 FC1 /dev/flash1 1.600E+12
                2 FC2 /dev/flash2 1.600E+12
                3 FC3 /dev/flash3 1.600E+12

Note

Add two more SSVR instances with the same configuration as in the example above.

This example uses three SSVR instances to create disk space, so configure the storage disks, grid disks, and flash cache on the other two SSVR instances in the same way.


TAS/TAC Instance configuration

This section describes how to use SSVR instances on TAS and TAC to create disk space, tablespace, and install the DB.

1. SSVR Instance Access Information

To use SSVR instances from TAS and TAC instances, the SSVR connection information must be configured.

In the "$TB_HOME/client/config/ssdsn.tbr" file, configure the network IP address of the storage node to be used for communication and the port information of the SSVR instance. This SSVR instance connection information needs to be set on all nodes where TAS and TAC instances are installed.

The following is a sample of the $TB_HOME/client/config/ssdsn.tbr file.

# Example
# {storage node #0 IP}/{port} 
# {storage node #1 IP}/{port} 
# {storage node #2 IP}/{port} 
10.10.10.13/9110
10.10.10.14/9110
10.10.10.15/9110

The settings are as follows.

Item
Description

{storage node IP}

Indicates the IP address of the Storage node to use.

{port}

Indicates the port number of the SSVR instance to use, which corresponds to SSVR_RECV_PORT_START in the SSVR instance initialization parameters. Specify it after the Storage node IP, separated by "/".
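One way to cross-check that the port listed in ssdsn.tbr matches each SSVR instance is to read it back from the instance's tip file on the corresponding Storage node. The following is a minimal sketch, assuming the tip file is named after the SID as in the earlier example.

$ grep SSVR_RECV_PORT_START $TB_HOME/config/ssvr0.tip
SSVR_RECV_PORT_START=9110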

2. TAS/TAC configuration

The next step is to configure the TAS instances and build the disk space from the grid disks of the SSVR instances.

Configure CM instances and use them together with the TAS instances for the cluster configuration of the TAC instances.

  1. Configuring connection information for using tbSQL

  2. Configuring TAS instance on DB node #0 and creating disk space

  3. Configuring CM instance on DB node #0

  4. Starting CM and TAS instance on DB node #0

  5. Adding TAS on DB Node #1 from TAS instance on DB Node #0

  6. Configuring and starting TAS, CM instance on DB node #1

  7. Configuring and starting TAC instance on DB node #0

  8. Adding TAC on DB Node #1 from TAC instance on DB Node #0

  9. Starting TAC instance on DB node #1

Note

The order of starting the TAS and TAC instances on DB node #1 does not matter, as long as they are started after the TAS and TAC instances have been added on DB node #0.

1. Configuring connection information for using tbSQL

The following is an example of a $TB_HOME/client/config/tbdsn.tbr file for connecting to SSVR, TAS, and TAC instances using tbSQL.

ssvr0=((INSTANCE=(HOST=10.10.10.13)(PORT=9100)))
ssvr1=((INSTANCE=(HOST=10.10.10.14)(PORT=9100)))
ssvr2=((INSTANCE=(HOST=10.10.10.15)(PORT=9100)))
tas0=((INSTANCE=(HOST=10.10.10.11)(PORT=9120)))
tas1=((INSTANCE=(HOST=10.10.10.12)(PORT=9120)))
tac0=((INSTANCE=(HOST=10.10.10.11)(PORT=9150)(DB_NAME=TAC)))
tac1=((INSTANCE=(HOST=10.10.10.12)(PORT=9150)(DB_NAME=TAC)))

2. Configuring TAS instance on DB node #0 and creating disk space

The TAS instance connects to the SSVR instances through the connection information recorded in the $TB_HOME/client/config/ssdsn.tbr file, and identifies each disk by the grid disk name created in the SSVR instance. The TAS instance recognizes file paths that start with "-" as SSVR grid disks and can use them in all cases, including creating disk space and adding or deleting disks.

The following are example initialization parameters and configuration for a TAS instance.

Initialization parameter
Description

AS_SCAN_SSVR_DISK

This initialization parameter, written in the TAS tip, indicates whether the instance uses the disks served by SSVR instances. The default is "N", so it must be set to "Y" in the TAS tip to use ZetaData.

# tas0.tip
INSTANCE_TYPE=AS
LISTENER_PORT=9120

TOTAL_SHM_SIZE=4G
MEMORY_TARGET=5G

CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=10.10.10.11
LOCAL_CLUSTER_PORT=9130
CM_PORT=9140

THREAD=0
DB_BLOCK_SIZE=4K

AS_SCAN_SSVR_DISK=Y
USE_ZETA=Y

The following is an example of the process of creating disk space using the grid disks of an SSVR instance.

AU (Allocation Unit) indicates the unit of allocation; the AU size that can be set is 4 MB. The striping unit and the TAS striping unit must be multiples of each other so that the layouts align, which improves performance.

Note

For a detailed description of disk space creation, refer to "Starting TAS" in the "Tibero Active Storage Administrator's Guide".

$ export TB_SID=tas0
$ tbboot -t nomount
Listener port = 9120 
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Tibero instance started up (NOMOUNT mode).
$ tbsql sys/tibero

tbSQL 7

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 

Connected to Tibero.

SQL> create diskspace DS0 normal redundancy 
failgroup FG1 disk
'-10.10.10.13/GD00' name DISK00 size 3350G, 
'-10.10.10.13/GD01' name DISK01 size 3725G, 
'-10.10.10.13/GD02' name DISK02 size 3725G, 
'-10.10.10.13/GD03' name DISK03 size 3725G, 
'-10.10.10.13/GD04' name DISK04 size 3725G, 
'-10.10.10.13/GD05' name DISK05 size 3725G, 
'-10.10.10.13/GD06' name DISK06 size 3725G, 
'-10.10.10.13/GD07' name DISK07 size 3725G, 
'-10.10.10.13/GD08' name DISK08 size 3725G, 
'-10.10.10.13/GD09' name DISK09 size 3725G, 
'-10.10.10.13/GD10' name DISK10 size 3725G, 
'-10.10.10.13/GD11' name DISK11 size 3725G
failgroup FG2 disk
'-10.10.10.14/GD00' name DISK20 size 3350G, 
'-10.10.10.14/GD01' name DISK21 size 3725G, 
'-10.10.10.14/GD02' name DISK22 size 3725G, 
'-10.10.10.14/GD03' name DISK23 size 3725G, 
'-10.10.10.14/GD04' name DISK24 size 3725G, 
'-10.10.10.14/GD05' name DISK25 size 3725G,
'-10.10.10.14/GD06' name DISK26 size 3725G, 
'-10.10.10.14/GD07' name DISK27 size 3725G, 
'-10.10.10.14/GD08' name DISK28 size 3725G, 
'-10.10.10.14/GD09' name DISK29 size 3725G, 
'-10.10.10.14/GD10' name DISK30 size 3725G, 
'-10.10.10.14/GD11' name DISK31 size 3725G
failgroup FG3 disk
'-10.10.10.15/GD00' name DISK40 size 3350G, 
'-10.10.10.15/GD01' name DISK41 size 3725G, 
'-10.10.10.15/GD02' name DISK42 size 3725G, 
'-10.10.10.15/GD03' name DISK43 size 3725G, 
'-10.10.10.15/GD04' name DISK44 size 3725G, 
'-10.10.10.15/GD05' name DISK45 size 3725G, 
'-10.10.10.15/GD06' name DISK46 size 3725G, 
'-10.10.10.15/GD07' name DISK47 size 3725G, 
'-10.10.10.15/GD08' name DISK48 size 3725G, 
'-10.10.10.15/GD09' name DISK49 size 3725G, 
'-10.10.10.15/GD10' name DISK50 size 3725G, 
'-10.10.10.15/GD11' name DISK51 size 3725G
attribute 'AU_SIZE'= '4M';
created.

Note

Redundancy is managed on a per-FAILGROUP basis. Therefore, it is recommended to configure a FAILGROUP for each SSVR instance, since its disks are likely to fail at the same time.

Disk space information can be checked through V$AS_DISKSPACE view.

SQL> select * from v$as_diskspace;

DISKSPACE_NUMBER NAME SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE
----------------- ------ ------------ ----------- ---------------------
                0 DS0              512       4096               4194304
                
STATE   TYPE     TOTAL_MB FREE_MB   REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
------- ------- --------- --------- ----------------------- --------------
MOUNT   NORMAL  33666928 30482600                   106152       14710574

3. Configuring CM instance on DB node #0

The CM instance registers the network and cluster and registers TAS and TAC as services to help ensure reliable cluster operations.

The following is an example of setting initialization parameters for CM. For more information, refer to the "Tibero Administrator's Guide".

# cm0.tip 
CM_NAME=cm0 
CM_UI_PORT=9140
CM_RESOURCE_FILE="/home/tibero/zetadata/cm0_res.crf"

4. Starting CM and TAS instance on DB node #0

The following is an example of starting the CM instance and then registering a network, registering a cluster, starting the cluster, registering the TAS service, registering the TAS instance, and starting the TAS instance.

$ export CM_HOME=$TB_HOME
$ export CM_SID=cm0

$ tbcm -b
CM Guard demon started up. 

TBCM 7.1.1 (Build -)

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Tibero cluster manager started up. 
Local node name is (cm0:9140).

$ cmrctl add network --name net0 --ipaddr 10.10.10.11 --portno 1000
Resource add success! (network, net0)
$ cmrctl add cluster --name cls --incnet net0 --cfile "-"
Resource add success! (cluster, cls)
$ cmrctl start cluster --name cls
MSG SENDING SUCCESS!
$ cmrctl add service --name TAS --type as --cname cls
Resource add success! (service, TAS)
$ cmrctl add as --name tas0 --svcname TAS --dbhome "$TB_HOME"
Resource add success! (as, tas0)
$ cmrctl start as --name tas0
Listener port = 9120 

Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)

5. Adding TAS on DB Node #1 from TAS instance on DB Node #0

The following is an example of the process of adding DB node #1 TAS from DB node #0 TAS instance.

$ export TB_SID=tas0
$ tbsql sys/tibero

tbSQL 7

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 

Connected to Tibero.

SQL> alter diskspace DS0 add thread 1;
SQL> Disconnected.

6. Configuring and starting TAS, CM instance on DB node #1

The following is an example of setting TAS instance initialization parameters for DB Node #1.

# tas1.tip 
INSTANCE_TYPE=AS 
LISTENER_PORT=9120

TOTAL_SHM_SIZE=4G 
MEMORY_TARGET=5G

CLUSTER_DATABASE=Y 
LOCAL_CLUSTER_ADDR=10.10.10.12 
LOCAL_CLUSTER_PORT=9130 
CM_PORT=9140

THREAD=1 
DB_BLOCK_SIZE=4K
AS_SCAN_SSVR_DISK=Y 
USE_ZETA=Y

The following is an example of initialization parameter settings for CM instance on DB node #1.

# cm1.tip
CM_NAME=cm1
CM_UI_PORT=9140
CM_RESOURCE_FILE="/home/tibero/zetadata/cm1_res.crf"

The following is an example of starting the CM instance on DB node #1 and going through network registration, cluster registration, cluster startup, TAS instance registration, and TAS instance startup.

$ export TB_SID=tas1
$ export CM_HOME=$TB_HOME
$ export CM_SID=cm1
$ tbcm -b
CM Guard demon started up. 

TBCM 7.1.1 (Build -)

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Tibero cluster manager CM started up. 
Local node name is (cm1:9140).

$ cmrctl add network --name net1 --ipaddr 10.10.10.12 --portno 1000
Resource add success! (network, net1)
$ cmrctl add cluster --name cls --incnet net1 --cfile "-"
Resource add success! (cluster, cls)
$ cmrctl start cluster --name cls
MSG SENDING SUCCESS!
$ cmrctl add as --name tas1 --svcname TAS --dbhome "$TB_HOME"
Resource add success! (as, tas1)
$ cmrctl start as --name tas1
Listener port = 9120 

Tibero 7

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)

7. Configuring and starting TAC instance on DB node #0

The TAC instance connects to the SSVR instance through the SSVR instance connection information recorded in the $TB_HOME/client/config/ssdsn.tbr file. The TAC instance recognizes file paths that start with a "+" as virtual files managed by the TAS instance. This path can be used for the path of any file, including control files and CM files.

The following is an example of initialization parameters for the configuration of DB node #0 TAC instance as a cluster using TAS. Do not modify the 'DB_BLOCK_SIZE=32K' parameter.

# tac0.tip
DB_NAME=TAC
LISTENER_PORT=9150
CONTROL_FILES="+DS0/c1.ctl"
LOG_ARCHIVE_DEST="+DS0/ARCH"

MAX_SESSION_COUNT=300

TOTAL_SHM_SIZE=60G 
MEMORY_TARGET=270G

CLUSTER_DATABASE=Y 
LOCAL_CLUSTER_ADDR=10.10.10.11 
LOCAL_CLUSTER_PORT=9160 
CM_PORT=9140
THREAD=0 
UNDO_TABLESPACE=UNDO0 
DB_BLOCK_SIZE=32K 
AS_PORT=9120 
USE_ACTIVE_STORAGE=Y
_USE_O_DIRECT=Y 
USE_ZETA=Y

The MEMORY_TARGET parameter is the total amount of memory to be used by the TAC instance.

Because the InfiniBand libraries use a large amount of external memory for connections (4 MB per connection), it is recommended to consult technical support before setting this parameter.

The formula is as follows:

TAC instance
MEMORY_TARGET = (total amount of memory)
                - (Number of Storage nodes) * (Number of grid disks per SSVR instance)
                * {(Max Session Count of TAC instances)
                + (total number of PEP threads in the TAC instance)} * 4MB
                - (Memory Target of the TAS instance on the node)
                - 10GB(=OS and allowance for connections from other threads)
** (total number of PEP threads on the TAC instance) = (number of PEP processes on the TAC instance)
                                        * (number of threads per PEP process)
** The calculation of the total number of PEP threads in a TAC instance is based on the current TAC instance.

The following is an example of the process of registering the TAC service and registering the TAC instance on the CM instance of DB node #0 that was started in the previous step.

$ export CM_HOME=$TB_HOME
$ export CM_SID=cm0
$ cmrctl add service --name TAC --type db --cname cls
Resource add success! (service, TAC)
$ cmrctl add db --name tac0 --svcname TAC --dbhome "$TB_HOME"
Resource add success! (db, tac0)

The following is an example of starting the TAC instance on DB node #0 in nomount mode and creating the database using the disk space of the TAS instance. The instance automatically shuts down after creation, so restart it.

$ export TB_SID=tac0

$ tbboot -t nomount
Listener port = 9150 

Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Tibero instance started up (NOMOUNT mode).
$ tbsql sys/tibero

tbSQL 7

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 

Connected to Tibero.

SQL> create database "TAC"
user sys identified by tibero 
maxinstances 32
maxdatafiles 2048 
character set MSWIN949
logfile group 1 '+DS0/log0001.log' size 2G, 
group 2 '+DS0/log0002.log' size 2G, 
group 3 '+DS0/log0003.log' size 2G
maxloggroups 255
maxlogmembers 8
datafile '+DS0/system.dtf' size 4G 
autoextend on next 64M maxsize 128G
syssub datafile '+DS0/syssub.dtf' size 4G
autoextend on next 64M maxsize 128G 
default temporary tablespace TEMP
tempfile '+DS0/temp000.dtf' size 128G autoextend off, 
tempfile '+DS0/temp001.dtf' size 128G autoextend off, 
tempfile '+DS0/temp002.dtf' size 128G autoextend off, 
tempfile '+DS0/temp003.dtf' size 128G autoextend off,
.
.
.
tempfile '+DS0/temp098.dtf' size 128G autoextend off, 
tempfile '+DS0/temp099.dtf' size 128G autoextend off
undo tablespace UNDO0 datafile
'+DS0/undo00.dtf' size 128G autoextend off, 
'+DS0/undo01.dtf' size 128G autoextend off, 
'+DS0/undo02.dtf' size 128G autoextend off
default tablespace USR
datafile '+DS0/usr.dtf' size 32G 
autoextend on next 64M maxsize unlimited;

Database created.

SQL> Disconnected.
$ tbboot
Listener port = 9150 
Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Tibero instance started up (NORMAL mode).

8. Adding TAC on DB Node #1 from TAC instance on DB Node #0

The following is an example of adding the configuration required on the DB node #0 TAC instance so that the DB node #1 TAC instance can be started.

$ export TB_SID=tac0
$ tbsql sys/tibero

tbSQL 7

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Connected to Tibero.

SQL> create undo tablespace UNDO1 datafile
'+DS0/undo03.dtf' size 128G autoextend off,
'+DS0/undo04.dtf' size 128G autoextend off, 
'+DS0/undo05.dtf' size 128G autoextend off;

SQL> alter database add logfile thread 1 group 4 '+DS0/log004.log' size 2G; 
SQL> alter database add logfile thread 1 group 5 '+DS0/log005.log' size 2G; 
SQL> alter database add logfile thread 1 group 6 '+DS0/log006.log' size 2G; 
SQL> alter database enable public thread 1;
SQL> Disconnected.

9. Starting TAC instance on DB node #1

The following is an example of setting the initialization parameters for DB Node #1 TAC instance. Do not modify 'DB_BLOCK_SIZE=32K'.

# tac1.tip 
DB_NAME=TAC 
LISTENER_PORT=9150
CONTROL_FILES="+DS0/c1.ctl"
LOG_ARCHIVE_DEST="+DS0/ARCH" 

MAX_SESSION_COUNT=300

TOTAL_SHM_SIZE=60G 
MEMORY_TARGET=270G

CLUSTER_DATABASE=Y 
LOCAL_CLUSTER_ADDR=10.10.10.12 
LOCAL_CLUSTER_PORT=9160 
CM_PORT=9140
THREAD=1 
UNDO_TABLESPACE=UNDO1 
DB_BLOCK_SIZE=32K 
AS_PORT=9120 
USE_ACTIVE_STORAGE=Y
_USE_O_DIRECT=Y 
USE_ZETA=Y

The MEMORY_TARGET parameter is the total amount of memory to be used by the TAC instance.

Because the InfiniBand libraries use a large amount of external memory for connections (4 MB per connection), it is recommended to consult technical support before setting this parameter.

The formula is as follows:

TAC instance
MEMORY_TARGET = (total amount of memory)
                - (Number of Storage nodes) * (Number of grid disks per SSVR instance)
                * {(Max Session Count of TAC instances)
                + (total number of PEP threads in the TAC instance)} * 4MB
                - (Memory Target of the TAS instance on the node)
                - 10GB(=OS and allowance for connections from other threads)
** (total number of PEP threads in the TAC instance) = (number of PEP processes in the TAC instance)
                                      * (number of threads per PEP process)
** The calculation of the total number of PEP threads in a TAC instance is based on the current TAC instance.

The following is an example of registering and starting a TAC instance on the DB Node #1 CM instance that was started in the previous step.

$ export TB_SID=tac1
$ export CM_HOME=$TB_HOME
$ export CM_SID=cm1
$ cmrctl add db --name tac1 --svcname TAC --dbhome "$TB_HOME"
Resource add success! (db, tac1)
$ cmrctl start db --name tac1
Listener port = 9150 

Tibero 7
TmaxTibero Corporation Copyright (c) 2020-. All rights reserved. 
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)

Through the above process, SSVR instances, TAS instances, and TAC instances are configured on the three Storage nodes and two DB nodes, respectively.

Note

Using SSVR instances on TAS and TAC instances does not require any configuration other than the access information described earlier.

For more information about installing and setting preferences for TAS and TAC, refer to "Tibero Active Storage Administrator's Guide" and "Tibero Administrator's Guide".


Verifying SSVR instance information

The following views display SSVR instance information. They can be queried only in an SSVR instance.

View
Description

V$SSVR_CLIENT

View the clients that the SSVR instance has connections to.

V$SSVR_FLASHCACHE

View the Flash Cache information connected to the SSVR instance.

V$SSVR_GRID_DISK

View the Grid Disk information connected to the SSVR instance.

V$SSVR_STORAGE_DISK

View the Storage Disk information connected to the SSVR instance.

V$SSVR_SLAB_STAT

View the SLAB information being used by the SSVR instance.

V$SSVR_MEMSTAT

View memory information used by the SSVR instance.

V$SSVR_CLIENT

The V$SSVR_CLIENT view shows information about all clients that the SSVR instance has connections to.

Column
Data type
Description

ADDRESS

VARCHAR(20)

The address of the client.

PORT

NUMBER

The port number that the client is connected to.

NAME

VARCHAR(128)

The name of the client.

THREAD_NUMBER

NUMBER

The thread number responsible for connecting to the client.

The following is an example of V$SSVR_CLIENT.

SQL> select * from v$ssvr_client;

ADDRESS   PORT   NAME         THREAD_NUMBER
--------- ------ ---------- ----------------
127.0.0.1 36088   TAS                     0
127.0.0.1 36070   CLIENT_LIB              0

V$SSVR_FLASHCACHE

The V$SSVR_FLASHCACHE view shows information about all flash caches connected to an SSVR instance.

Column
Data type
Description

FLASHCACHE_NUMBER

NUMBER

The number of the flash cache.

NAME

VARCHAR(32)

The name of the flash cache.

PATH

VARCHAR(256)

The path to the flash cache.

OS_BYTES

NUMBER

The size of the flash cache as recognized by the OS.

Note

For an example of viewing V$SSVR_FLASHCACHE, refer to 'Create a flash cache'.

V$SSVR_GRID_DISK

The V$SSVR_GRID_DISK view shows information about all grid disks attached to an SSVR instance.

Column
Data type
Description

GRID_DISK_NUMBER

NUMBER

The number of the grid disk.

NAME

VARCHAR(128)

The name of the grid disk.

STORAGE_DISK_NUMBER

NUMBER

The number of the storage disk that is mapped to the grid disk.

STORAGE_DISK_OFFSET

NUMBER

The offset of the storage disk that is mapped to the grid disk.

TOTAL_BYTES

NUMBER

The size of the grid disk.

Note

For an example of viewing V$SSVR_GRID_DISK , refer to 'Create a grid disk'.

V$SSVR_STORAGE_DISK

The V$SSVR_STORAGE_DISK view shows information about all storage disks attached to an SSVR instance.

Column
Data type
Description

STORAGE_DISK_NUMBER

NUMBER

The number of the storage disk.

NAME

VARCHAR(128)

The name of the storage disk.

PATH

VARCHAR(256)

The path of the storage disk.

OS_BYTES

NUMBER

The size of the storage disk as recognized by the OS.

Note

For an example of viewing V$SSVR_STORAGE_DISK , refer to 'Create a storage disk'.

V$SSVR_SLAB_STAT

The V$SSVR_SLAB_STAT view shows the SLAB information that the SSVR instance is currently using.

Column
Data type
Description

SLAB_SIZE

NUMBER

The size of the SLAB.

SLAB_GET_CNT

NUMBER

The number of times the SLAB has been requested (get count).

TOTAL_CHUNK_CNT

NUMBER

The total number of chunks.

MAX_CHUNK_CNT

NUMBER

The maximum possible number of chunks.

The following is an example of V$SSVR_SLAB_STAT.

SQL> select * from v$ssvr_slab_stat;

SLAB_SIZE SLAB_GET_CNT TOTAL_CHUNK_CNT MAX_CHUNK_CNT
---------- ------------ --------------- -------------
     32768        99838              48            48
   1048576          354              48            48
   4194304           72              48            48
     32768       112967              64            64
   4194304           25              16            16
   
5 rows selected.

V$SSVR_MEMSTAT

The V$SSVR_MEMSTAT view shows memory usage information for an SSVR instance. The units are expressed in MB.

Column
Data type
Description

TOTAL_PGA_MEMORY_MB

NUMBER

The total size of the process memory.

FIXED_PGA_MEMORY_MB

NUMBER

The size of the fixed process memory.

USED_PGA_MEMORY_MB

VARCHAR(128)

The amount of process memory used.

TOTAL_SHARED_MEMORY_MB

NUMBER

The total size of shared memory.

FIXED_SHARED_MEMORY_MB

NUMBER

The size of fixed shared memory.

USED_SHARED_MEMORY_MB

NUMBER

The amount of shared memory used.

The following is an example of V$SSVR_MEMSTAT.

SQL> select * from v$ssvr_memstat;

TOTAL_PGA_MEMORY_MB FIXED_PGA_MEMORY_MB USED_PGA_MEMORY_MB
------------------- ------------------- ------------------
TOTAL_SHARED_MEMORY_MB FIXED_SHARED_MEMORY_MB USED_SHARED_MEMORY_MB
---------------------- ---------------------- ---------------------
               4092                  31               3466
                  2048                    714            1.7592E+13
                  
1 row selected.
