Suppose the sbj column of the test table holds rows like:
"감자고구마"
"감자 고구마"
"고구마 감자"
To match both "감자고구마" and "감자 고구마" in a single WHERE clause, regardless of whitespace:
MySQL, MariaDB
SELECT sbj FROM test WHERE sbj REGEXP '감자[[:space:]]?고구마';
SELECT sbj FROM test WHERE sbj RLIKE '감자 ?고구마';
SELECT sbj FROM test WHERE REPLACE(sbj, ' ', '') LIKE '%감자고구마%'; -- not recommended: REPLACE must run on every row
ORACLE, Tibero
SELECT sbj FROM test WHERE REGEXP_LIKE(sbj, '감자[[:space:]]?고구마');
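The same whitespace-optional pattern can be sanity-checked outside the database with any regex engine; a quick sketch in Python, where `\s?` plays the role of `[[:space:]]?` (sample strings taken from the rows above):

```python
import re

# \s? allows zero or one whitespace character between the two words,
# mirroring the '감자[[:space:]]?고구마' pattern used in the queries above.
pattern = re.compile(r"감자\s?고구마")

rows = ["감자고구마", "감자 고구마", "고구마 감자"]
matched = [s for s in rows if pattern.search(s)]
print(matched)  # ['감자고구마', '감자 고구마']
```

Note that "고구마 감자" does not match, because the pattern fixes the word order, just as the SQL versions do.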
Posts in the 'Database' category (25 total):
- 2021.04.21 Searching in a WHERE clause regardless of whitespace
- 2019.10.16 Resizing DBMS columns when converting from MS949 (EUC-KR) to UTF-8
- 2018.10.11 Installing Tibero 6 (Oracle DB-Link, Java modules)
- 2018.04.03 Tibero 6 to Oracle 10g (11g) DB-LINK
- 2013.08.14 TokuDB startup problem on CentOS 6.3
- 2013.01.10 Overlapping-interval (intersecting segment) check
- 2012.01.26 Altibase performance test (simple SELECT)
- 2011.04.22 Problems applying ERWin-generated scripts to MySQL 5.x
- 2010.04.29 PL/SQL BASE64 Encode/Decode
- 2009.09.06 Running plsql from crontab
- 2009.05.26 Managing Hierarchical Data in MySQL
- 2009.03.23 ORACLE backup Perl script
- 2009.03.12 Backing up procedures
- 2009.03.12 DBMS_scheduler
- 2009.03.12 Oracle job management
In MS949 (EUC-KR) a Hangul character takes 2 bytes;
after converting to UTF-8 it takes 3 bytes (ASCII characters such as letters and digits stay at 1 byte).
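The byte counts are easy to verify in any language that supports both encodings; a quick check in Python (cp949 is Python's name for the MS949 code page):

```python
# One Hangul syllable: 2 bytes in MS949/EUC-KR, 3 bytes in UTF-8.
hangul = "감"
ms949_len = len(hangul.encode("cp949"))
utf8_len = len(hangul.encode("utf-8"))
print(ms949_len, utf8_len)  # 2 3

# ASCII letters and digits stay at 1 byte in both encodings.
ascii_ms949 = len("a1".encode("cp949"))
ascii_utf8 = len("a1".encode("utf-8"))
print(ascii_ms949, ascii_utf8)  # 2 2
```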
** Reference script for adjusting column sizes when changing the DBMS character set (back up before starting!!)
Applies to ORACLE and Tibero only.
1. On the existing (MS949) DB, pre-adjust the column sizes for the UTF-8 character set.
1-1. Generate the script
SELECT 'ALTER TABLE '|| table_name ||' MODIFY('|| LOWER(column_name) ||' VARCHAR2('|| LEAST(CEIL(data_length*1.5/10) * 10, 4000) ||'));' AS sqltext
FROM USER_TAB_COLUMNS
WHERE data_type LIKE 'VARCHAR%'
AND data_length > 1
AND data_length < 4000;
1-2. Review the generated script
> Exclude columns that hold only digits or ASCII letters.
   e.g., date values stored in VARCHAR2 format
> Exclude system table columns.
   e.g., SYS_XXXXX
1-3. Apply the script generated in 1-1 to the existing (MS949) DB.
2. Back up the resized (MS949) DB.
3. Import the backup into the target (UTF-8) DB.
Order of work: generate the resize script on the existing DB > exclude unneeded columns > apply the script to the existing DB > back up the existing DB > import into the new (UTF-8) DB
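The MODIFY sizes the script emits follow one rule: multiply the current length by 1.5 (Korean text grows from 2-byte to 3-byte characters, a 3/2 ratio), round up to the next multiple of 10, and cap at VARCHAR2's 4000-byte limit. A small sketch of that rule, with illustrative lengths:

```python
import math

def utf8_resize(data_length: int) -> int:
    """Mirror of LEAST(CEIL(data_length*1.5/10) * 10, 4000) in the script above."""
    return min(math.ceil(data_length * 1.5 / 10) * 10, 4000)

for n in (20, 100, 333, 3000):
    print(n, "->", utf8_resize(n))  # 20 -> 30, 100 -> 150, 333 -> 500, 3000 -> 4000
```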
** The installation was tested, and is running in production, on CentOS 7.5.
/*------------------------------*/
/* Check and configure the Linux environment */
/*------------------------------*/
** Check whether the required packages are installed
[root@gampol ~]# rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}(%{ARCH})\n' \
gcc \
gcc-c++ \
libgcc \
libstdc++ \
libstdc++-devel \
compat-libstdc++ \
libaio \
libaio-devel
** Install the packages
[root@gampol ~]# yum install -y \
gcc \
gcc-c++ \
libgcc \
libstdc++ \
libstdc++-devel \
compat-libstdc++ \
libaio \
libaio-devel
** Check the current kernel parameters
[root@gampol ~]# sysctl -a
** Change or add kernel parameters
[root@gampol ~]# vi /etc/sysctl.conf
kernel.shmmni = 4096
kernel.shmmax = 4398046511104
kernel.shmall = 4294967296
kernel.sem = 10000 32000 10000 10000
kernel.msgmni = 512
kernel.sysrq = 0
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
fs.aio-max-nr = 1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 1024 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
** Apply the kernel parameters
[root@gampol ~]# sysctl -p
** Set per-account resource limits (append at the bottom)
[root@gampol ~]# vi /etc/security/limits.conf
* soft nproc 131072
* hard nproc 131072
* soft nofile 131072
* hard nofile 131072
* soft core unlimited
* hard core unlimited
* soft memlock 50000000
* hard memlock 50000000
** Account authentication setting (add as the first line of the session entries)
[root@gampol ~]# vi /etc/pam.d/login
session required pam_limits.so
** Disable SELinux (apply the change)
[root@gampol ~]# setenforce 0
[root@gampol ~]# vi /etc/selinux/config
SELINUX=disabled
** Check and change the hostname
[root@gampol ~]# hostname
localhost.localdomain
[root@gampol ~]# hostnamectl set-hostname gampol
[root@gampol ~]# echo gampol > /proc/sys/kernel/hostname
[root@gampol ~]# hostname
gampol
/*------------------------------*/
/* Create the Tibero account */
/*------------------------------*/
[root@gampol ~]# groupadd dba
[root@gampol ~]# useradd tibero -g dba -G dba
/*------------------------------*/
/* Add environment variables */
/*------------------------------*/
[root@gampol ~]# su - tibero
[tibero@gampol ~]$ vi .bash_profile
### Java Env
export JAVA_HOME=/usr/local/jdk
export JDK_HOME=$JAVA_HOME
export PATH=$JAVA_HOME/bin:$PATH
### Tibero Env
export TB_BASE=/home/tibero
export TB_HOME=$TB_BASE/tibero6
export TBGW_HOME=$TB_HOME/client/gateway
export TB_SID=TDBMS
export NLS_LANG=KOREAN_KOREA.KO16MSWIN949
#export NLS_LANG=AMERICAN_AMERICA.UTF8
#export NLS_LANG=MSWIN949
export TB_PROF_DIR=$TB_HOME/bin/prof
export ORACLE_HOME=$TB_HOME/instantclient_11_2
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$ORACLE_HOME:$LD_LIBRARY_PATH
export SHLIB_PATH=$LD_LIBRARY_PATH
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$TB_HOME/bin:$TB_HOME/client/bin:$ORACLE_HOME:$PATH
### TBinary Env
export TBINARY_PATH=$TB_BASE/tbinary
export PATH=$TBINARY_PATH/monitor:$PATH
export LANG=ko_KR.eucKR
### Tibero Alias
alias tbbase="cd $TB_BASE"
alias tbhome="cd $TB_HOME"
alias tbcfg="cd $TB_HOME/config"
alias tbcli="cd $TB_HOME/client"
alias tbscr="cd $TB_HOME/scripts"
alias tbcfgv="vi $TB_HOME/config/$TB_SID.tip"
alias tbcliv="vi $TB_HOME/client/config/tbdsn.tbr"
alias tblog="cd $TB_BASE/log/$TB_SID"
alias tbi='cd $TB_BASE/tbinary/'
alias tm='cd $TB_BASE/tbinary/monitor;monitor;cd -'
PS1='\e[0m[\e[1;31m\h\e[0m:\e[1;32m\u\e[0m@\e[1;33m$TB_SID\e[0m:\e[1;32m$NLS_LANG\e[0m \w]\n\$ '
/*------------------------------*/
/* Install Tibero */
/*------------------------------*/
** Check the current directory
[tibero@gampol ~]$ cd $TB_BASE
[tibero@gampol ~]$ pwd
/home/tibero
** Extract $TB_HOME
[tibero@gampol ~]$ tar xvzf tibero6-bin-FS06-linux64-152596-opt-20180306165737.tar.gz
** Verify
[tibero@gampol ~]$ ls -l $TB_HOME
total 12
drwxr-xr-x.  3 tibero dba 4096 Jul 26 12:54 bin/
drwxr-xr-x. 10 tibero dba  105 Jul 26 12:54 client/
drwxr-xr-x.  2 tibero dba 4096 Jul 26 12:54 config/
drwxr-xr-x.  3 tibero dba  249 Jul 26 12:55 lib/
drwxr-xr-x.  3 tibero dba   26 Jul 26 12:55 license/
drwxr-xr-x.  3 tibero dba   22 Jul 26 12:55 nls/
drwxr-xr-x.  3 tibero dba 4096 Jul 26 12:55 scripts/
** Create the base directories
[tibero@gampol ~]$ mkdir -p $TB_HOME/data/$TB_SID
[tibero@gampol ~]$ mkdir -p $TBGW_HOME
** Generate the initial configuration files
[tibero@gampol ~]$ cd $TB_HOME/config
[tibero@gampol ~]$ ./gen_tip.sh
Using TB_SID "TDBMS"
/home/tibero/tibero6/config/TDBMS.tip generated
/home/tibero/tibero6/config/psm_commands generated
/home/tibero/tibero6/client/config/tbdsn.tbr generated.
Running client/config/gen_esql_cfg.sh
Done.
** Edit the Tibero parameters
[tibero@gampol ~]$ vi $TB_HOME/config/$TB_SID.tip
DB_NAME=TDBMS
LISTENER_PORT=8629
CONTROL_FILES="/home/tibero/tibero6/tbdata/TDBMS/c1.ctl","/home/tibero/tibero6/tbdata/TDBMS/c2.ctl"
DB_CREATE_FILE_DEST="/data1/tibero6/data/TDBMS"
#CERTIFICATE_FILE="/home/tibero/tibero6/config/svr_wallet/TDBMS.crt"
#PRIVKEY_FILE="/home/tibero/tibero6/config/svr_wallet/TDBMS.key"
#WALLET_FILE="/home/tibero/tibero6/config/svr_wallet/WALLET"
#ILOG_MAP="/home/tibero/tibero6/config/ilog.map"
MAX_SESSION_COUNT=200
TOTAL_SHM_SIZE=4G
MEMORY_TARGET=8G
[tibero@gampol ~]$ tbboot nomount
*************************************************************************
* ERROR: Can't open the license file!!
* (1) Check the license file - /home/tibero/tibero6/license/license.xml
*************************************************************************
(If you have no license, request a demo license at https://technet.tmaxsoft.com/ and copy the license file you receive into $TB_HOME/license.)
[tibero@gampol ~]$ tbboot nomount
Change core dump dir to /home/tibero/tibero6/bin/prof.
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NOMOUNT mode).
[tibero@gampol ~]$ tbsql sys/tibero
# Create the base system database (check the server install PATH before running).
SQL> CREATE DATABASE "TDBMS"
USER SYS IDENTIFIED BY tibero
MAXINSTANCES 8
MAXDATAFILES 100
CHARACTER SET MSWIN949
LOGFILE GROUP 1 '/data1/tibero6/data/TDBMS/log001.log' SIZE 500M,
GROUP 2 '/data1/tibero6/data/TDBMS/log002.log' SIZE 500M,
GROUP 3 '/data1/tibero6/data/TDBMS/log003.log' SIZE 500M
MAXLOGGROUPS 255
MAXLOGMEMBERS 8
NOARCHIVELOG
DATAFILE '/data1/tibero6/data/TDBMS/system001.dtf' SIZE 100M AUTOEXTEND ON NEXT 1000M MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE TEMP
TEMPFILE '/data1/tibero6/data/TDBMS/temp001.dtf' SIZE 100M AUTOEXTEND ON NEXT 1000M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL AUTOALLOCATE
UNDO TABLESPACE UNDO
DATAFILE '/data1/tibero6/data/TDBMS/undo001.dtf' SIZE 100M AUTOEXTEND ON NEXT 1000M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
SQL> exit;
[tibero@gampol ~]$ tbdown
tbdown failed. proc info file is deleted.
Hint: Please check if the tbsvr instance was already stopped.
[tibero@gampol ~]$ tbboot
Change core dump dir to /home/tibero/tibero6/bin/prof.
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
[tibero@gampol ~]$ $TB_HOME/scripts/system.sh
Enter SYS password:
tibero
Enter SYSCAT password:
syscat
Dropping agent table...
Creating text packages table ...
Creating the role DBA...
Create default system users & roles?(Y/N):
y
Creating system users & roles...
Creating example users...
Creating virtual tables(1)...
Creating virtual tables(2)...
Granting public access to _VT_DUAL...
Creating the system generated sequences...
Creating internal dynamic performance views...
Creating outline table...
/*------------------------------*/
/* Create a regular Tibero user */
/*------------------------------*/
** Connect with the system account
[tibero@gampol ~]$ tbsql sys/tibero
/* Drop the existing user */
DROP USER gampol CASCADE;
/* Drop the existing working tablespace */
DROP TABLESPACE ts_gampol INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;
/* Create the tablespace */
CREATE TABLESPACE ts_gampol
DATAFILE '/data1/tibero6/data/TDBMS/ts_gampol.dtf' SIZE 500M
AUTOEXTEND ON NEXT 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K;
/* Create the user */
CREATE USER gampol IDENTIFIED BY gampol
DEFAULT TABLESPACE ts_gampol
TEMPORARY TABLESPACE TEMP
ACCOUNT UNLOCK;
GRANT CONNECT
,RESOURCE
,SELECT ANY DICTIONARY
,SELECT ANY TABLE
,ALTER ANY PROCEDURE
,ALTER SESSION
,CREATE ANY SYNONYM
,CREATE VIEW
,CREATE DATABASE LINK
,CREATE SEQUENCE
,CREATE SESSION
,CREATE SYNONYM
,CREATE TABLE
,CREATE ANY TYPE
,DELETE ANY TABLE
,DROP ANY INDEX
,DROP ANY PROCEDURE
,DROP ANY SYNONYM
,DROP ANY TABLE
,DROP ANY TRIGGER
,DROP ANY VIEW
,DROP ANY TYPE
TO gampol;
/*--------------------------------------*/
/* Install the Oracle Client (first-time setup) */
/*--------------------------------------*/
** Download the Oracle client
https://www.oracle.com/database/technologies/instant-client/downloads.html
http://www.oracle.com/technetwork/database/database-technologies/instant-client/overview/index.html
** Install the minimal Oracle client packages (if no Oracle server or separate client is installed)
[tibero@gampol ~]$ unzip instantclient-basic-linux.x64-11.2.0.4.0.zip
[tibero@gampol ~]$ mv instantclient_11_2 $TB_HOME
[tibero@gampol ~]$ ls -l $TB_HOME
[tibero@gampol ~]$ mkdir -p $ORACLE_HOME/network/admin
[tibero@gampol ~]$ vi $ORACLE_HOME/network/admin/tnsnames.ora
#---- test ----#
TDBMS_10g = (
DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST = <Oracle server IP>) (PORT = 1521))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl10))
)
** Add an entry to tbdsn.tbr
[tibero@gampol ~]$ vi $TB_HOME/client/config/tbdsn.tbr
#---- test ----#
TDBMS_10g=(
(GATEWAY=
(LISTENER= (HOST=127.0.0.1) (PORT=9999))
(TARGET=TDBMS_10g)
(TX_MODE=GLOBAL)
)
)
** Create tbgw.cfg
[tibero@gampol ~]$ vi $TB_HOME/client/config/tbgw.cfg
LOG_DIR=$TBGW_HOME/oracle/log
LOG_LVL=2
LSNR_PORT=9999
MAX_LOG_SIZE=1000
MAX_LOG_CNT=5
FETCH_SIZE=32000
/*--------------------------------*/
/* Register the gw4orcl_XXg service */
/*--------------------------------*/
** Copy the gateway binary
[tibero@gampol ~]$ cp $TB_HOME/client/bin/gw4orcl_11g $TBGW_HOME/gw4orcl
[tibero@gampol ~]$ ldd $TBGW_HOME/gw4orcl (check for "not found" entries)
[tibero@gampol ~]$ $TBGW_HOME/gw4orcl
[tibero@gampol ~]$ tbsql ssoview/ssolink@SSO
SQL> exit;
[tibero@gampol ~]$ tbsql piview_eval/piview_eval@INSA
SQL> exit;
[tibero@gampol ~]$ tbsql lftpmapp/ghktjd0906@EHOZO
SQL> exit;
** Create the db-link (connect with the gampol Tibero account)
[tibero@gampol ~]$ tbsql gampol/gampol
SQL> CREATE DATABASE LINK TDBMS_10g CONNECT TO '<Oracle username>' IDENTIFIED BY '<password>' USING 'TDBMS_10g';
SQL> SELECT COUNT(*) FROM <test table>@TDBMS_10g;
/*--------------------------------------------------*/
/* Create Tibero Java packages (PSM, PL/SQL) */
/*--------------------------------------------------*/
** The encryption module here was built separately on top of the libraries provided by KISA,
so if you want to plug in your own Java library, use these steps only as an installation guide.
** Create the Java library directory
[tibero@gampol ~]$ mkdir -p $TB_HOME/instance/$TB_SID/java
** Copy the Java libraries
[tibero@gampol ~]$ cd $TB_HOME/instance/$TB_SID/java
[tibero@gampol ~]$ ls -l
commons-codec-1.10.jar
kisaOraLib.jar
** Add configuration to enable the Java packages
[tibero@gampol ~]$ vi $TB_HOME/config/$TB_SID.tip
_PSM_BOOT_JEPA=Y
JAVA_CLASS_PATH=/home/tibero/tibero6/instance/TDBMS/java
** Add the JEPA connection info
[tibero@gampol ~]$ vi $TB_HOME/client/config/tbdsn.tbr
epa=(
(
EXTPROC=(LANG=JAVA)
(
LISTENER=(HOST=localhost)
(PORT=9390)
)
)
)
** Review the JEPA configuration (check the defaults)
[tibero@gampol ~]$ cat $TB_HOME/client/epa/java/config/epa.cfg
** Add the libraries to the psmjavac classpath
[tibero@gampol ~]$ vi $TB_HOME/bin/psmjavac
codec=$TB_HOME/instance/TDBMS/java/commons-codec-1.10.jar
kisa=$TB_HOME/instance/TDBMS/java/kisaOraLib.jar
.......:${codec}:${kisa}:${classpath} ${src}
** Add the libraries to the tbjavaepa classpath
[tibero@gampol ~]$ vi $TB_HOME/client/bin/tbjavaepa
:
codec=$TB_HOME/instance/TDBMS/java/commons-codec-1.10.jar
kisa=$TB_HOME/instance/TDBMS/java/kisaOraLib.jar
:
echo "....:$config:$codec:$kisa $mainclass ...."
:
echo "....:$config:$codec:$kisa $mainclass ...."
** Create and test the package
CREATE OR REPLACE PACKAGE crypto
AS
FUNCTION encrypt(str VARCHAR2) RETURN VARCHAR2;
FUNCTION decrypt(str VARCHAR2) RETURN VARCHAR2;
FUNCTION sha256(str VARCHAR2) RETURN VARCHAR2;
END;
/
CREATE OR REPLACE PACKAGE BODY crypto
AS
FUNCTION encrypt(str VARCHAR2) RETURN VARCHAR2 AS LANGUAGE JAVA NAME 'kr.re.nsr.cbc.SeedCBC.encrypt(java.lang.String) return java.lang.String';
FUNCTION decrypt(str VARCHAR2) RETURN VARCHAR2 AS LANGUAGE JAVA NAME 'kr.re.nsr.cbc.SeedCBC.decrypt(java.lang.String) return java.lang.String';
FUNCTION sha256(str VARCHAR2) RETURN VARCHAR2 AS LANGUAGE JAVA NAME 'kr.re.nsr.sha.SHA256.encrypt(java.lang.String) return java.lang.String';
END;
/
SELECT crypto.encrypt('암호화 테스트!!') "TestEnc"
,crypto.decrypt('8mVH5wJNP9FHE0AmWsgkVQ==') "TestDec"
,crypto.sha256('암호화 테스트!!') "TestSHA256"
FROM DUAL;
Prerequisite: the Oracle Client must be installed on the server running Tibero (not needed if Oracle is already installed on the same server).
1. Add the Oracle Client environment variables to the Tibero account's .bash_profile
export TB_HOME=/home/tibero/tibero6
export TB_SID=nextbsc
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib
export LANG=ko_KR.UTF-8
export TBGW_HOME=$TB_HOME/client/gateway
export ORACLE_BASE=/oracle11
export ORACLE_HOME=/oracle11/opt
export ORACLE_SID=orcl949
export ORACLE_OWNER=oracle11
export ORA_NLS10=$ORACLE_HOME/nls/data
export TMPDIR=$ORACLE_BASE/tmp
export NLS_DATE_FORMAT=YYYYMMDD
export TMP=$ORACLE_BASE/tmp
export TNS_ADMIN=$ORACLE_HOME/network/admin
export NLS_LANG=KOREAN_KOREA.KO16MSWIN949
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib:$ORACLE_HOME/lib32:/usr/lib
export SHLIB_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:/usr/lib
export PATH=$PATH:$TB_HOME/bin:$TB_HOME/client/bin:$ORACLE_HOME/bin:/usr/sbin:$PATH
alias tblog='cd ${TB_HOME}/instance/$TB_SID/log'
alias tbcfg='cd $TB_HOME/config'
alias tbcfgv='vi $TB_HOME/config/$TB_SID.tip'
alias tbi='cd ~/tbinary'
2. Change the Oracle Client permissions
chmod o+rx $ORACLE_HOME
chmod o+rx $ORACLE_HOME/lib
chmod o+r $ORACLE_HOME/lib/*
chmod o+x $ORACLE_HOME/network
chmod o+x $ORACLE_HOME/network/admin
chmod o+r $ORACLE_HOME/network/admin/tnsnames.ora
chmod -R o+rx $ORACLE_HOME/nls
chmod -R o+rx $ORACLE_HOME/oracore
chmod o+x $ORACLE_HOME/sqlplus
chmod o+x $ORACLE_HOME/sqlplus/mesg
chmod o+rx $ORACLE_HOME/sqlplus/mesg/*
3. Copy the gateway binary
cp $TB_HOME/client/bin/gw4orcl_11g $TBGW_HOME/gw4orcl
(use gw4orcl_10g if the Oracle Client is 10g)
ldd $TBGW_HOME/gw4orcl
(check for "not found" entries)
4. Configure the network alias
vi $TB_HOME/client/config/tbdsn.tbr
orcl10=(
(GATEWAY=
(LISTENER=
(HOST=192.168.0.242) # Tibero server IP (a hostname is not allowed; it must be an IP)
(PORT=9999) # gateway listener/log port
)
(TARGET=NEXTBSC_10g) # must match the target alias name in the Oracle Client's tnsnames.ora
(TX_MODE=GLOBAL)
)
)
5. Configure the gateway
vi ${TB_HOME}/client/config/tbgw.cfg
LOG_DIR=$TBGW_HOME/oracle/log
LOG_LVL=2
LSNR_PORT=9999 # same port as set in step 4
MAX_LOG_SIZE=1000
MAX_LOG_CNT=5
FETCH_SIZE=32000
6. Create and test the database link
CREATE DATABASE LINK <db link name> CONNECT TO <username> IDENTIFIED BY '<password>' USING '<network alias>';
SELECT COUNT(*) FROM NG02_MEMBER_INFO@<db link name>;
On CentOS 6.3, plugin initialization fails with the errors below.
130814 10:13:15 InnoDB: Waiting for the background threads to start
130814 10:13:16 Percona XtraDB (http://www.percona.com) 5.5.30-tokudb-7.0.4-MariaDB-30.1 started; log sequence number 0
130814 10:07:58 mysqld_safe Starting mysqld daemon with databases from /data/mariadb/data
130814 10:07:58 [Warning] The syntax '--log' is deprecated and will be removed in a future release. Please use '--general-log'/'--general-log-file' instead.
130814 10:07:59 [ERROR] InnoDB: syntax error in innodb_data_file_path
130814 10:07:59 [ERROR] Plugin 'InnoDB' init function returned error.
130814 10:07:59 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
130814 10:07:59 [Note] Plugin 'FEEDBACK' is disabled.
Transparent huge pages are enabled, according to /sys/kernel/mm/redhat_transparent_hugepage/enabled
130814 10:07:59 [ERROR] Plugin 'TokuDB' init function returned error.
130814 10:07:59 [ERROR] Plugin 'TokuDB' registration as a STORAGE ENGINE failed.
130814 10:07:59 [ERROR] mysqld: File '/data/mariadb/mysql_query.log' not found (Errcode: 13)
130814 10:07:59 [ERROR] Unknown/unsupported storage engine: TokuDB
130814 10:07:59 [ERROR] Aborting
The transparent-hugepage message above is the key; the fix is as simple as:
# echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
# vi /etc/init.d/mysql (or mariadb) - add to the start section of the script
# added lines:
if test -f /sys/kernel/mm/redhat_transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/redhat_transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
fi
Reference #1 (p.30): http://www.tokutek.com/wp-content/uploads/2013/05/Percona-Live-Tutorial-2013.pdf
Reference #2 (pp.34-38): http://www.slideshare.net/sprdd/linux-performan-tuning-part-1
A good post came up on the PHPSCHOOL board, so I'm sharing it here.
It's something many developers miss (piling up OR after OR).
-- overlapping-interval condition; query interval: start (A) = (1), end (B) = (10)
WITH tmp AS
(
SELECT 0 IDX, 11 A, 12 B FROM DUAL UNION ALL
SELECT 1, -2, 3 FROM DUAL UNION ALL
SELECT 2, 2, 5 FROM DUAL UNION ALL
SELECT 3, 7, 11 FROM DUAL UNION ALL
SELECT 4, -1, 11 FROM DUAL UNION ALL
SELECT 5, -3, -1 FROM DUAL
)
SELECT * FROM tmp WHERE (1) < B AND (10) > A;
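The single predicate works because two intervals [A, B] and [1, 10] overlap exactly when each one starts before the other ends, so no OR chain over the individual cases is needed. The same check in Python, over the rows from the WITH block above:

```python
def overlaps(a, b, lo, hi):
    # Intervals [a, b] and [lo, hi] intersect iff lo < b and hi > a,
    # the same predicate as WHERE (1) < B AND (10) > A above.
    # With strict inequalities, intervals that only touch at an endpoint do not count.
    return lo < b and hi > a

rows = [(0, 11, 12), (1, -2, 3), (2, 2, 5), (3, 7, 11), (4, -1, 11), (5, -3, -1)]
hits = [idx for idx, a, b in rows if overlaps(a, b, 1, 10)]
print(hits)  # [1, 2, 3, 4]
```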
I received the Altibase HDB complete guide through an event (2012/01/26),
and it turns out the event runs only until 2012/01/27... I got the book today,
and I still have my day job to do.
With so little time, I couldn't try many of the features.
To get a quick feel for performance in the time available,
I ran simple SELECT queries over roughly a million rows on several databases.
Surprisingly, the Korean databases held up well against the foreign ones.
If the vendor follows up with aggressive marketing and support,
I think it could genuinely compete in the market.
Only simple SELECT queries were tested; every database ran on the same OS and hardware,
all with default installs and no tuning
(except MySQL, whose my.cnf I adjusted a bit because it was too slow).
(Elapsed times were not recorded; this is a quick review based only on the results, so run it yourself if you need numbers.)
## H/W
CPU : i7-950/3.06Ghz/Quad Core
MEM : DDR3 SDRAM 2GB*6 (12GB)
HDD : SATA3(7200rpm/64M)
## O/S
CentOS 5.6 64bit(Linux XXXX 2.6.18-238.12.1.el5xen #1 SMP Tue May 31 14:02:29 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux)
What surprised me was that Altibase and CUBRID responded considerably faster than the other DBs.
Grouping: ORACLE (commercial), CUBRID (open source, Korean), Altibase (Korean), and PostgreSQL (open source) were faster than MySQL (within 1-2 seconds).
Paging: ORACLE (commercial), Altibase (Korean), and CUBRID (open source, Korean) were relatively fast;
PostgreSQL and MySQL (MariaDB?) were comparatively slow.
FirebirdSQL (open source, formerly InterBase): no time left to test it.
For personal and public sites, I'd pick CUBRID over MySQL (MariaDB?);
for commercial sites migrating off an Oracle DB, or government agencies required to use a Korean DB, Altibase.
Altibase's strength is that its SQL is close to Oracle's,
which makes migration noticeably easier than with other DBs.
I'd need more time with Altibase to be sure, but judging by the installation and by its
query/index/hint/tablespace syntax and usage, it looks like an Oracle clone.
It also resembles EnterpriseDB (http://www.enterprisedb.com/), the commercial PostgreSQL with identical or similar Oracle syntax.
Anyway, I really like that about it.
Altibase (fast paging) = CUBRID (fast grouping) >= ORACLE > MySQL (MyISAM) >= PostgreSQL >= MySQL (InnoDB)
A rough review, I know.
Altibase installation (I'll write up the other DBs if I can find the time)
(see http://atc.altibase.com/download/install_guide.jsp)
http://atc.altibase.com/
Download the product build for your OS.
Check the gcc version
#gcc -v
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --disable-plugin --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=x86_64-redhat-linux
Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)
Check the gcc version and download the package built for that version.
Installer: altibase-XEON_LINUX_redhat_Enterprise_release5-64bit-5.3.3.12-release-GCC4.1.2.tgz
Patch: altibase-server-5.5.1.2-LINUX-X86-64bit-patch_0_0_2_5.run
Create the account
#useradd -G dba -g dba altibase
Switch to the account
#su - altibase
Set the environment variables (added now; removed again after the patch is applied)
$vi .bash_profile
ALTIBASE_HOME=$HOME/altibase_home; export ALTIBASE_HOME
PATH=$ALTIBASE_HOME/bin:$PATH ;export PATH
LD_LIBRARY_PATH=$ALTIBASE_HOME/lib:$LD_LIBRARY_PATH ;export LD_LIBRARY_PATH
Extract the package
$tar xvzf altibase-XEON_LINUX_redhat_Enterprise_release5-64bit-5.3.3.12-release-GCC4.1.2.tgz
Install the package (accept the defaults)
$./altibase_install.sh
Apply the license (copy it in; request one at http://atc.altibase.com/download/license_guide.jsp)
$ALTIBASE_HOME/conf/license
Apply the patch file (accept the defaults)
$chmod 755 altibase-server-5.5.1.2-LINUX-X86-64bit-patch_0_0_2_5.run
$./altibase-server-5.5.1.2-LINUX-X86-64bit-patch_0_0_2_5.run
Set the environment variables
$vi .bash_profile
Removed: ALTIBASE_HOME=$HOME/altibase_home; export ALTIBASE_HOME
Removed: PATH=$ALTIBASE_HOME/bin:$PATH ;export PATH
Removed: LD_LIBRARY_PATH=$ALTIBASE_HOME/lib:$LD_LIBRARY_PATH ;export LD_LIBRARY_PATH
Added by the patch:
. /home/altibase/altibase_home/conf/altibase_user.env
Apply the environment file
$source /home/altibase/.bash_profile
Start the server (stop with: $server stop)
$server start
Log in with the system account
$isql -u sys -p manager -sysdba
Create a test tablespace
iSQL(sysdba)>
CREATE TABLESPACE "TS_TEST"
DATAFILE '/home/altibase/altibase_home/dbs/ts_test.dbf'
SIZE 100M
AUTOEXTEND ON NEXT 100M
MAXSIZE UNLIMITED;
iSQL(sysdba)>
CREATE MEMORY TABLESPACE "TS_TEST_MEM"
SIZE 131072K
AUTOEXTEND ON NEXT 131072K
MAXSIZE UNLIMITED
CHECKPOINT PATH '/home/altibase/altibase_home/dbs'
SPLIT EACH 1048576K;
Create a test account
iSQL(sysdba)>
CREATE USER test IDENTIFIED BY test
TEMPORARY TABLESPACE "SYS_TBS_DISK_TEMP"
DEFAULT TABLESPACE "TS_TEST"
ACCESS "TS_TEST_MEM" ON;
Row count: 1,118,003
#################################### Altibase 5.6.1 ####################################
-- Altibase
-- @/path/USE_DETAIL.SQL;
-- $isql -u username -p passwd -f /path/USE_DETAIL.SQL
CREATE TABLE USE_DETAIL
(
ud_seq NUMBER(20,0) NOT NULL
,rc_seq VARCHAR2(10)
,ma_seq VARCHAR2(20)
,fd_seq NUMBER(20,0)
,ui_id VARCHAR2(20)
,tm_seq VARCHAR2(10) NOT NULL
,ud_type CHAR(1) DEFAULT 'G' NOT NULL
,ud_weight NUMBER(10,3)
,ud_usedate VARCHAR2(20) DEFAULT '' NOT NULL
,ud_dayofweek NUMBER(2,0)
,ud_cdate VARCHAR2(20)
,ud_useunixtime NUMBER(15,0)
,CONSTRAINT PK_USE_DETAIL PRIMARY KEY(ud_seq)
);
CREATE UNIQUE INDEX PK_USE_DETAIL ON USE_DETAIL(ud_seq);
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
Connect : isql -u username -p passwd
Timing  : iSQL> set timing on;
Quit    : iSQL> exit;
Grouping test SQL:
SELECT X1.ud_seq, X1.ma_seq, X1.tm_seq
FROM USE_DETAIL X1,
(
SELECT MAX(ud_seq) AS ud_seq, ma_seq, tm_seq
FROM USE_DETAIL X
WHERE ud_usedate BETWEEN '2011-07-01' AND '2011-08-01'
AND tm_seq IN('00101','00102','00103','00104','00105','00106','00107','00108')
AND ud_type = 'G'
GROUP BY ma_seq, tm_seq
) X2
WHERE X1.ud_seq = X2.ud_seq;
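The grouping query is the classic "latest row per group" pattern: an inner MAX-per-group subquery picks one ud_seq per (ma_seq, tm_seq), and the outer join fetches those rows. A minimal illustration with SQLite and a hypothetical four-row stand-in for USE_DETAIL:

```python
import sqlite3

# Tiny in-memory stand-in for USE_DETAIL (hypothetical data).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE use_detail (ud_seq INTEGER, ma_seq TEXT, tm_seq TEXT)")
con.executemany("INSERT INTO use_detail VALUES (?,?,?)", [
    (1, "m1", "t1"), (2, "m1", "t1"), (3, "m2", "t1"), (4, "m2", "t2"),
])

# Same shape as the SQL above: MAX(ud_seq) per group, then join back.
rows = con.execute("""
    SELECT x1.ud_seq, x1.ma_seq, x1.tm_seq
    FROM use_detail x1
    JOIN (SELECT MAX(ud_seq) AS ud_seq FROM use_detail
          GROUP BY ma_seq, tm_seq) x2
      ON x1.ud_seq = x2.ud_seq
    ORDER BY x1.ud_seq
""").fetchall()
print(rows)  # [(2, 'm1', 't1'), (3, 'm2', 't1'), (4, 'm2', 't2')]
```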
Paging test SQL:
SELECT *
FROM
(
SELECT ROWNUM rnum, X1.*
FROM
(
SELECT /*+ INDEX_DESC(X pk_use_detail) */ ud_seq, ma_seq, tm_seq, ud_usedate
FROM USE_DETAIL X
WHERE ud_type = 'G'
) X1
WHERE ROWNUM <= 900051
)
WHERE rnum BETWEEN 900000 AND 900050;
or
SELECT * FROM use_detail LIMIT 900000, 50;
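The double-nested ROWNUM form is just how Oracle-style dialects spell LIMIT/OFFSET paging; both queries above return the same window of rows. A minimal illustration with SQLite, using a hypothetical 100-row table and a small offset in place of the 900,000 above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE use_detail (ud_seq INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO use_detail VALUES (?)", [(i,) for i in range(1, 101)])

# LIMIT/OFFSET equivalent of the nested ROWNUM window: skip 10 rows
# of the ud_seq-descending order, then take 5.
page = [r[0] for r in con.execute(
    "SELECT ud_seq FROM use_detail ORDER BY ud_seq DESC LIMIT 5 OFFSET 10"
)]
print(page)  # [90, 89, 88, 87, 86]
```

Note that with a plain OFFSET (or ROWNUM filter), the engine still has to walk past all the skipped rows, which is why deep pages like 900,000 are where the databases in this test start to differ.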
Index creation:
DROP INDEX IDX_01;
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
OR
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate DESC, tm_seq, ud_type);
Row count: 1,118,003
#################################### MySQL 5.1.4 ###################################
-- MySQL(MariaDB ??)
-- mysql>\. /path/USE_DETAIL.SQL;
-- #>mysql -uusername -p dbname < /path/USE_DETAIL.SQL
CREATE TABLE USE_DETAIL
(
ud_seq BIGINT(20) NOT NULL
,rc_seq VARCHAR(10)
,ma_seq VARCHAR(20)
,fd_seq BIGINT(20)
,ui_id VARCHAR(20)
,tm_seq VARCHAR(10) NOT NULL
,ud_type CHAR(1) NOT NULL DEFAULT 'G'
,ud_weight DOUBLE(10,3)
,ud_usedate VARCHAR(20) NOT NULL DEFAULT ''
,ud_dayofweek TINYINT(2)
,ud_cdate VARCHAR(20)
,ud_useunixtime INT(15)
,PRIMARY KEY(ud_seq)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
CREATE UNIQUE INDEX PK_USE_DETAIL ON USE_DETAIL(ud_seq);
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
Connect  : mysql -uusername -ppasswd
Database : mysql> use dbname;
Quit     : mysql> exit; or \q
Grouping test SQL:
SELECT X1.ud_seq, X1.ma_seq, X1.tm_seq
FROM USE_DETAIL X1,
(
SELECT MAX(ud_seq) AS ud_seq, ma_seq, tm_seq
FROM USE_DETAIL X USE INDEX FOR GROUP BY(IDX_01)
WHERE ud_usedate BETWEEN '2011-07-01' AND '2011-08-01'
AND tm_seq IN('00101','00102','00103','00104','00105','00106','00107','00108')
AND ud_type = 'G'
GROUP BY ma_seq, tm_seq
) X2
WHERE X1.ud_seq = X2.ud_seq;
Paging test SQL:
SELECT ud_seq, ma_seq, tm_seq, ud_usedate
FROM USE_DETAIL X FORCE INDEX(IDX_01)
WHERE ud_type = 'G'
ORDER BY ud_seq DESC
LIMIT 900000, 50;
Index creation (index hints used in the queries; buffer sizes in my.cnf were also increased):
ALTER TABLE USE_DETAIL DROP INDEX IDX_01;
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
Note: tested with both the MyISAM and InnoDB engines
Row count: 1,118,003
#################################### ORACLE 10g ####################################
-- ORACLE
-- SQL>@/path/USE_DETAIL.SQL;
-- $sqlplus username/passwd @/path/USE_DETAIL.SQL
CREATE TABLE USE_DETAIL
(
ud_seq NUMBER(20,0) NOT NULL
,rc_seq VARCHAR2(10)
,ma_seq VARCHAR2(20)
,fd_seq NUMBER(20,0)
,ui_id VARCHAR2(20)
,tm_seq VARCHAR2(10) NOT NULL
,ud_type CHAR(1) DEFAULT 'G' NOT NULL
,ud_weight NUMBER(10,3)
,ud_usedate VARCHAR2(20) DEFAULT '' NOT NULL
,ud_dayofweek NUMBER(2,0)
,ud_cdate VARCHAR2(20)
,ud_useunixtime NUMBER(15,0)
,CONSTRAINT PK_USE_DETAIL PRIMARY KEY(ud_seq)
);
CREATE UNIQUE INDEX PK_USE_DETAIL ON USE_DETAIL(ud_seq);
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
Connect : sqlplus username/passwd
Timing  : SQL> set timing on
Quit    : SQL> exit;
Grouping test SQL:
SELECT X1.ud_seq, X1.ma_seq, X1.tm_seq
FROM USE_DETAIL X1,
(
SELECT MAX(ud_seq) AS ud_seq, ma_seq, tm_seq
FROM USE_DETAIL X
WHERE ud_usedate BETWEEN '2011-07-01' AND '2011-08-01'
AND tm_seq IN('00101','00102','00103','00104','00105','00106','00107','00108')
AND ud_type = 'G'
GROUP BY ma_seq, tm_seq
) X2
WHERE X1.ud_seq = X2.ud_seq;
Paging test SQL:
SELECT *
FROM
(
SELECT ROWNUM rnum, X1.*
FROM
(
SELECT /*+ INDEX_DESC(X pk_use_detail) */ ud_seq, ma_seq, tm_seq, ud_usedate
FROM USE_DETAIL X
WHERE ud_type = 'G'
) X1
WHERE ROWNUM <= 900051
)
WHERE rnum BETWEEN 900000 AND 900050;
Index creation:
DROP INDEX IDX_01;
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
OR
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate DESC, tm_seq, ud_type);
Row count: 1,118,003
################################# CUBRID 2008 R4.0 ##################################
-- CUBRID
-- csql>;REA /path/USE_DETAIL.SQL
-- csql>;RU
-- #csql demodb -i /path/USE_DETAIL.SQL
CREATE TABLE USE_DETAIL
(
ud_seq INTEGER NOT NULL
,rc_seq VARCHAR(10)
,ma_seq VARCHAR(20)
,fd_seq INTEGER
,ui_id VARCHAR(20)
,tm_seq VARCHAR(10) NOT NULL
,ud_type CHAR(1) DEFAULT 'G' NOT NULL
,ud_weight NUMERIC(10,3)
,ud_usedate VARCHAR(20) DEFAULT '' NOT NULL
,ud_dayofweek SHORT
,ud_cdate VARCHAR(20)
,ud_useunixtime INTEGER
,CONSTRAINT PK_USE_DETAIL PRIMARY KEY(ud_seq)
);
CREATE UNIQUE INDEX PK_USE_DETAIL ON USE_DETAIL(ud_seq);
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
Connect : csql -C demodb
Timing  : csql> ;time on
Quit    : csql> ;ex
Grouping test SQL:
SELECT X1.ud_seq, X1.ma_seq, X1.tm_seq
FROM USE_DETAIL X1,
(
SELECT MAX(ud_seq) AS ud_seq, ma_seq, tm_seq
FROM USE_DETAIL X
WHERE ud_usedate BETWEEN '2011-07-01' AND '2011-08-01'
AND tm_seq IN('00101','00102','00103','00104','00105','00106','00107','00108')
AND ud_type = 'G'
GROUP BY ma_seq, tm_seq
) X2
WHERE X1.ud_seq = X2.ud_seq;
Paging test SQL:
SELECT ud_seq, ma_seq, tm_seq, ud_usedate
FROM USE_DETAIL
WHERE ud_type = 'G'
ORDER BY ud_seq DESC FOR ORDERBY_NUM() BETWEEN 900000 AND 900050;
Index creation:
DROP INDEX IDX_01;
OR
DROP REVERSE INDEX IDX_01;
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
OR
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate DESC, tm_seq, ud_type);
OR
CREATE REVERSE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
Row count: 1,118,003
################################# PostgreSQL 8.4.7 ##################################
-- PostgreSQL
-- demodb=# \i /path/USE_DETAIL.SQL;
-- $psql -U username dbname < /path/USE_DETAIL.SQL
CREATE TABLE USE_DETAIL
(
ud_seq INTEGER NOT NULL
,rc_seq VARCHAR(10)
,ma_seq VARCHAR(20)
,fd_seq INTEGER
,ui_id VARCHAR(20)
,tm_seq VARCHAR(10) NOT NULL
,ud_type CHAR(1) DEFAULT 'G' NOT NULL
,ud_weight NUMERIC(10,3)
,ud_usedate VARCHAR(20) DEFAULT '' NOT NULL
,ud_dayofweek SMALLINT
,ud_cdate VARCHAR(20)
,ud_useunixtime INTEGER
,CONSTRAINT PK_USE_DETAIL PRIMARY KEY(ud_seq)
);
CREATE UNIQUE INDEX PK_USE_DETAIL ON USE_DETAIL(ud_seq);
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
Connect : psql -U postgres demodb
Timing  : demodb=# \timing on
Quit    : demodb=# \q
Grouping test SQL:
SELECT X1.ud_seq, X1.ma_seq, X1.tm_seq
FROM USE_DETAIL X1,
(
SELECT MAX(ud_seq) AS ud_seq, ma_seq, tm_seq
FROM USE_DETAIL X
WHERE ud_usedate BETWEEN '2011-07-01' AND '2011-08-01'
AND tm_seq IN('00101','00102','00103','00104','00105','00106','00107','00108')
AND ud_type = 'G'
GROUP BY ma_seq, tm_seq
) X2
WHERE X1.ud_seq = X2.ud_seq;
Paging test SQL 1:
SELECT ud_seq, ma_seq, tm_seq, ud_usedate
FROM USE_DETAIL
WHERE ud_type = 'G'
ORDER BY ud_seq DESC
LIMIT 50 OFFSET 900000;
Paging test SQL 2:
SELECT *
FROM
(
SELECT ROW_NUMBER() OVER (ORDER BY ud_usedate DESC) rnum, ud_seq, ma_seq, tm_seq, ud_usedate
FROM USE_DETAIL
WHERE ud_type = 'G'
) X2
WHERE rnum BETWEEN 900000 AND 900050;
Index creation:
DROP INDEX IDX_01;
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate, tm_seq, ud_type);
OR
CREATE INDEX IDX_01 ON USE_DETAIL(ud_usedate DESC, tm_seq, ud_type);
#### Test data ####
USE_DETAIL.7z
First, I used ERWin 7.3.
It still doesn't support MySQL as cleanly as it does Oracle (I don't know whether the new 8.x does; I haven't been able to get hold of a copy to try).
The problem is that DEFAULT CHARSET utf8, the InnoDB engine, COMMENTs, and so on are not generated,
and hand-editing the SQL script every time is tedious and slow, so under deadline pressure I modified the ERWin generation script itself.
* Create the files in Physical/Logical order.
* In Physical mode, select the model (table), right-click,
  and open Table Properties > Comment to enter the table comment. (Important: always enter it.)
* Physical mode
* Logical mode
* Convert the Logical-mode attribute names into Physical-mode comments (skip this step if you entered comments separately!!!)
  In the Physical-mode menu, choose Model > Domain Dictionary.
  On the Comment tab, enter %AttName in "Comment Inherited by Col:*" and click OK.
  Back in Physical mode, select the model (table), right-click, choose Columns, and click the Reset button.
  Check "All columns of table [table name]" (apply to the selected table only) or "All columns in model" (apply to all),
  then under "Select Properties to Reset:" check only Comment (important) and click OK.
* Generating the actual script
  Menu > Tools > Forward Engineer > Schema Generation
  Configure everything as in the images below, then save the settings for reuse.
  Copy the attached file to a directory, load it, and click the Preview button to review the generated script.
* The script content I actually applied:
==========================================
==========================================
SELECT UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(UTL_RAW.CAST_TO_RAW('1234nextguide'))) FROM dual;
SELECT UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_DECODE(UTL_RAW.CAST_TO_RAW('MTIzNG5leHRndWlkZQ=='))) FROM dual;
SELECT UTL_ENCODE.TEXT_ENCODE('1234nextguide', 'WE8ISO8859P1', 1) FROM dual;
SELECT UTL_ENCODE.TEXT_DECODE('MTIzNG5leHRndWlkZQ==', 'WE8ISO8859P1', 1) FROM dual;
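The first two statements are a plain Base64 round-trip, so the same values can be reproduced with any Base64 implementation, e.g. in Python:

```python
import base64

# Matches the UTL_ENCODE.BASE64_ENCODE result above.
encoded = base64.b64encode(b"1234nextguide").decode()
print(encoded)  # MTIzNG5leHRndWlkZQ==

# Matches the UTL_ENCODE.BASE64_DECODE result above.
decoded = base64.b64decode("MTIzNG5leHRndWlkZQ==").decode()
print(decoded)  # 1234nextguide
```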
/*------------------------------------------------------------------------------
-- Object        : B2C
-- Created       : 2010-03-22 15:03:32
-- Last modified : 2010-03-22 15:03:39
-- Status        : VALID
------------------------------------------------------------------------------*/
CREATE OR REPLACE FUNCTION B2C(B IN BLOB) RETURN CLOB
-- TYPECASTS BLOB TO CLOB (BINARY CONVERSION)
IS
    pos     PLS_INTEGER := 1;
    buffer  VARCHAR2(32767);
    res     CLOB;
    lob_len PLS_INTEGER := DBMS_LOB.getLength(b);
BEGIN
    DBMS_LOB.createTemporary(res, TRUE);
    DBMS_LOB.OPEN(res, DBMS_LOB.LOB_ReadWrite);
    LOOP
        buffer := UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(b, 16000, pos));
        IF LENGTH(buffer) > 0 THEN
            DBMS_LOB.writeAppend(res, LENGTH(buffer), buffer);
        END IF;
        pos := pos + 16000;
        EXIT WHEN pos > lob_len;
    END LOOP;
    RETURN res; -- res is OPEN here
END b2c;
/*------------------------------------------------------------------------------
-- Object name   : C2B
-- Created       : 2010-03-22 3:03:32 PM
-- Last modified : 2010-03-22 3:03:39 PM
-- Status        : VALID
------------------------------------------------------------------------------*/
CREATE OR REPLACE FUNCTION C2B(C IN CLOB) RETURN BLOB
-- Typecasts CLOB to BLOB (binary conversion)
IS
    pos     PLS_INTEGER := 1;
    buffer  RAW(32767);
    res     BLOB;
    lob_len PLS_INTEGER := DBMS_LOB.getLength(c);
BEGIN
    DBMS_LOB.createTemporary(res, TRUE);
    DBMS_LOB.OPEN(res, DBMS_LOB.LOB_ReadWrite);
    LOOP
        buffer := UTL_RAW.CAST_TO_RAW(DBMS_LOB.SUBSTR(c, 16000, pos));
        IF UTL_RAW.LENGTH(buffer) > 0 THEN
            DBMS_LOB.writeAppend(res, UTL_RAW.LENGTH(buffer), buffer);
        END IF;
        pos := pos + 16000;
        EXIT WHEN pos > lob_len;
    END LOOP;
    RETURN res; -- res is OPEN here
END c2b;
/*------------------------------------------------------------------------------
-- Object name   : BASE64_B2C
-- Created       : 2010-03-22 3:03:32 PM
-- Last modified : 2010-03-22 3:03:39 PM
-- Status        : VALID
------------------------------------------------------------------------------*/
CREATE OR REPLACE FUNCTION BASE64_B2C(B IN BLOB) RETURN CLOB
IS
    v_buffer_size INTEGER := 4096;
    v_buffer_raw  RAW(4096);
    v_offset      INTEGER DEFAULT 1;
    v_clob        CLOB;
BEGIN
    DBMS_LOB.createTemporary(v_clob, FALSE, DBMS_LOB.CALL);
    LOOP
        BEGIN
            DBMS_LOB.READ(b, v_buffer_size, v_offset, v_buffer_raw);
        EXCEPTION
            WHEN NO_DATA_FOUND THEN EXIT;
        END;
        v_offset := v_offset + v_buffer_size;
        DBMS_LOB.APPEND(v_clob, TO_CLOB(UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(v_buffer_raw))));
    END LOOP;
    RETURN v_clob;
END BASE64_B2C;
/*------------------------------------------------------------------------------
-- Object name   : BASE64_C2B
-- Created       : 2010-03-22 3:03:32 PM
-- Last modified : 2010-03-22 3:03:39 PM
-- Status        : VALID
------------------------------------------------------------------------------*/
CREATE OR REPLACE FUNCTION BASE64_C2B(C IN CLOB) RETURN BLOB
IS
    v_blob           BLOB;
    v_offset         INTEGER;
    v_buffer_varchar VARCHAR2(4096);
    v_buffer_raw     RAW(4096);
    v_buffer_size    BINARY_INTEGER := 4096;
BEGIN
    IF c IS NULL THEN
        RETURN NULL;
    END IF;
    DBMS_LOB.createTemporary(v_blob, TRUE);
    v_offset := 1;
    LOOP
        BEGIN
            DBMS_LOB.READ(c, v_buffer_size, v_offset, v_buffer_varchar);
        EXCEPTION
            WHEN NO_DATA_FOUND THEN EXIT;
        END;
        v_buffer_raw := UTL_ENCODE.BASE64_DECODE(UTL_RAW.CAST_TO_RAW(v_buffer_varchar));
        DBMS_LOB.writeAppend(v_blob, UTL_RAW.LENGTH(v_buffer_raw), v_buffer_raw);
        v_offset := v_offset + v_buffer_size;
    END LOOP;
    RETURN v_blob;
END BASE64_C2B;
[Reference] smartsql.tistory.com
You could run sqlplus with @ to execute a separate .sql file, but to avoid
managing an extra file I chose to run the SQL inline from the shell script.

crontab entry:
4 19 * * * /oracle/shell/call_proc_profit_dist.sh >> /oracle/shell/log/proc_profit_dist.log

file: call_proc_profit_dist.sh
#!/bin/bash
export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/10.2.0
export ORACLE_OWNER=oracle
export ORACLE_SID=kumc
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS10=$ORACLE_HOME/nls/data
export NLS_LANG=KOREAN_KOREA.KO16MSWIN949
export NLS_NCHAR=KOREAN_KOREA.UTF8
export NLS_DATE_FORMAT=YYYYMMDD
# Note: the original had a typo here ($ORAClE_HOME with a lowercase l), fixed below.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/demo:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib
export LDPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export PATH=$ORACLE_HOME/JRE/bin:$ORACLE_HOME/bin:.:$PATH
export TEMPDIR=/tmp
export THREADS_FLAG=native
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/oracm/bin
export CLASSPATH=$CLASSPATH:$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export EDITOR=vi
export LANG=ko_KR.eucKR
export PATH ORACLE_BASE ORACLE_HOME ORACLE_OWNER ORACLE_SID TNS_ADMIN NLS_LANG LD_LIBRARY_PATH ORA_NLS33 LD_ASSUME_KERNEL TEMPDIR THREADS_FLAG CLASSPATH EDITOR LANG NLS_DATE_FORMAT DISABLE_HUGETLBFS LDPATH

sqlplus -s kumc_cost/kumc_cost << __END__
WHENEVER SQLERROR EXIT 1
WHENEVER OSERROR EXIT 1
SET TIMING ON
SET SERVEROUTPUT ON
SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF
SET HEADING OFF
SET ECHO OFF
DECLARE
    vMsg   VARCHAR2(4000) := NULL;
    vSQL   VARCHAR2(4000) := NULL;
    vCheck BOOLEAN := TRUE;
    vSeq   NUMBER;
BEGIN
    DBMS_OUTPUT.ENABLE(10000000000);
    SELECT cpi_seq INTO vSeq FROM COST_PROCESS_INFO WHERE cpi_status = 'P';
    PROC_PROFIT_DIST(vCheck, vMsg);
    --vSQL := 'CALL PROC_PROFIT_DIST(:vCheck, :vMsg)';
    --EXECUTE IMMEDIATE vSQL USING IN vCheck, OUT vMsg;
    --DBMS_OUTPUT.PUT_LINE(vMsg);
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('No process in progress!!');
END;
/
__END__

EXITCODE=$?
export EXITCODE
if [ "$EXITCODE" -ne 0 ]
then
    echo "Error: SQL*Plus exit code : $EXITCODE"
    echo ""
    echo "===== Procedure run aborted at `date` !!! ===="
    echo ""
    exit 1
fi
echo ""
echo "===== Procedure Finished at `date` !!! ===="
echo ""
Introduction
Most users at one time or another have dealt with hierarchical data in a SQL database and no doubt learned that the management of hierarchical data is not what a relational database is intended for. The tables of a relational database are not hierarchical (like XML), but are simply a flat list. Hierarchical data has a parent-child relationship that is not naturally represented in a relational database table.
For our purposes, hierarchical data is a collection of data where each item has a single parent and zero or more children (with the exception of the root item, which has no parent). Hierarchical data can be found in a variety of database applications, including forum and mailing list threads, business organization charts, content management categories, and product categories. For our purposes we will use the following product category hierarchy from a fictional electronics store:
These categories form a hierarchy in much the same way as the other examples cited above. In this article we will examine two models for dealing with hierarchical data in MySQL, starting with the traditional adjacency list model.
The Adjacency List Model
Typically the example categories shown above will be stored in a table like the following (I'm including full CREATE and INSERT statements so you can follow along):
CREATE TABLE category(
        category_id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(20) NOT NULL,
        parent INT DEFAULT NULL
);

INSERT INTO category
VALUES(1,'ELECTRONICS',NULL),(2,'TELEVISIONS',1),(3,'TUBE',2),
        (4,'LCD',2),(5,'PLASMA',2),(6,'PORTABLE ELECTRONICS',1),
        (7,'MP3 PLAYERS',6),(8,'FLASH',7),
        (9,'CD PLAYERS',6),(10,'2 WAY RADIOS',6);

SELECT * FROM category ORDER BY category_id;

+-------------+----------------------+--------+
| category_id | name                 | parent |
+-------------+----------------------+--------+
|           1 | ELECTRONICS          |   NULL |
|           2 | TELEVISIONS          |      1 |
|           3 | TUBE                 |      2 |
|           4 | LCD                  |      2 |
|           5 | PLASMA               |      2 |
|           6 | PORTABLE ELECTRONICS |      1 |
|           7 | MP3 PLAYERS          |      6 |
|           8 | FLASH                |      7 |
|           9 | CD PLAYERS           |      6 |
|          10 | 2 WAY RADIOS         |      6 |
+-------------+----------------------+--------+
10 rows in set (0.00 sec)
In the adjacency list model, each item in the table contains a pointer to its parent. The topmost element, in this case ELECTRONICS, has a NULL value for its parent. The adjacency list model has the advantage of being quite simple; it is easy to see that FLASH is a child of MP3 PLAYERS, which is a child of PORTABLE ELECTRONICS, which is a child of ELECTRONICS. While the adjacency list model can be dealt with fairly easily in client-side code, working with the model can be more problematic in pure SQL.
Retrieving a Full Tree
The first common task when dealing with hierarchical data is the display of the entire tree, usually with some form of indentation. The most common way of doing this in pure SQL is through the use of a self-join:
SELECT t1.name AS lev1, t2.name as lev2, t3.name as lev3, t4.name as lev4
FROM category AS t1
LEFT JOIN category AS t2 ON t2.parent = t1.category_id
LEFT JOIN category AS t3 ON t3.parent = t2.category_id
LEFT JOIN category AS t4 ON t4.parent = t3.category_id
WHERE t1.name = 'ELECTRONICS';

+-------------+----------------------+--------------+-------+
| lev1        | lev2                 | lev3         | lev4  |
+-------------+----------------------+--------------+-------+
| ELECTRONICS | TELEVISIONS          | TUBE         | NULL  |
| ELECTRONICS | TELEVISIONS          | LCD          | NULL  |
| ELECTRONICS | TELEVISIONS          | PLASMA       | NULL  |
| ELECTRONICS | PORTABLE ELECTRONICS | MP3 PLAYERS  | FLASH |
| ELECTRONICS | PORTABLE ELECTRONICS | CD PLAYERS   | NULL  |
| ELECTRONICS | PORTABLE ELECTRONICS | 2 WAY RADIOS | NULL  |
+-------------+----------------------+--------------+-------+
6 rows in set (0.00 sec)
Finding all the Leaf Nodes
We can find all the leaf nodes in our tree (those with no children) by using a LEFT JOIN query:
SELECT t1.name
FROM category AS t1
LEFT JOIN category as t2 ON t1.category_id = t2.parent
WHERE t2.category_id IS NULL;

+--------------+
| name         |
+--------------+
| TUBE         |
| LCD          |
| PLASMA       |
| FLASH        |
| CD PLAYERS   |
| 2 WAY RADIOS |
+--------------+
Retrieving a Single Path
The self-join also allows us to see the full path through our hierarchies:
SELECT t1.name AS lev1, t2.name as lev2, t3.name as lev3, t4.name as lev4
FROM category AS t1
LEFT JOIN category AS t2 ON t2.parent = t1.category_id
LEFT JOIN category AS t3 ON t3.parent = t2.category_id
LEFT JOIN category AS t4 ON t4.parent = t3.category_id
WHERE t1.name = 'ELECTRONICS' AND t4.name = 'FLASH';

+-------------+----------------------+-------------+-------+
| lev1        | lev2                 | lev3        | lev4  |
+-------------+----------------------+-------------+-------+
| ELECTRONICS | PORTABLE ELECTRONICS | MP3 PLAYERS | FLASH |
+-------------+----------------------+-------------+-------+
1 row in set (0.01 sec)
The main limitation of such an approach is that you need one self-join for every level in the hierarchy, and performance will naturally degrade with each level added as the joining grows in complexity.
Limitations of the Adjacency List Model
Working with the adjacency list model in pure SQL can be difficult at best. Before being able to see the full path of a category we have to know the level at which it resides. In addition, special care must be taken when deleting nodes because of the potential for orphaning an entire sub-tree in the process (delete the portable electronics category and all of its children are orphaned). Some of these limitations can be addressed through the use of client-side code or stored procedures. With a procedural language we can start at the bottom of the tree and iterate upwards to return the full tree or a single path. We can also use procedural programming to delete nodes without orphaning entire sub-trees by promoting one child element and re-ordering the remaining children to point to the new parent.
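That procedural bottom-up walk can be sketched in a few lines. This is a minimal illustration (not code from the article), using Python's standard sqlite3 module with an in-memory database as a stand-in for MySQL, and the category data from above:

```python
import sqlite3

# Build the article's adjacency-list category table in an in-memory SQLite DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE category(category_id INTEGER PRIMARY KEY, name TEXT, parent INTEGER)")
conn.executemany("INSERT INTO category VALUES(?,?,?)", [
    (1, 'ELECTRONICS', None), (2, 'TELEVISIONS', 1), (3, 'TUBE', 2),
    (4, 'LCD', 2), (5, 'PLASMA', 2), (6, 'PORTABLE ELECTRONICS', 1),
    (7, 'MP3 PLAYERS', 6), (8, 'FLASH', 7),
    (9, 'CD PLAYERS', 6), (10, '2 WAY RADIOS', 6),
])

def path_to_root(name):
    """Follow parent pointers upward until the root (parent IS NULL)."""
    path = []
    row = conn.execute("SELECT name, parent FROM category WHERE name = ?", (name,)).fetchone()
    while row is not None:
        path.append(row[0])
        if row[1] is None:
            break
        row = conn.execute("SELECT name, parent FROM category WHERE category_id = ?", (row[1],)).fetchone()
    return list(reversed(path))

print(path_to_root('FLASH'))
# ['ELECTRONICS', 'PORTABLE ELECTRONICS', 'MP3 PLAYERS', 'FLASH']
```

Note that this issues one query per level, which is exactly the cost the nested set model below avoids.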
The Nested Set Model
What I would like to focus on in this article is a different approach, commonly referred to as the Nested Set Model. In the Nested Set Model, we can look at our hierarchy in a new way, not as nodes and lines, but as nested containers. Try picturing our electronics categories this way:
Notice how our hierarchy is still maintained, as parent categories envelop their children. We represent this form of hierarchy in a table through the use of left and right values to represent the nesting of our nodes:
CREATE TABLE nested_category (
        category_id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(20) NOT NULL,
        lft INT NOT NULL,
        rgt INT NOT NULL
);

INSERT INTO nested_category
VALUES(1,'ELECTRONICS',1,20),(2,'TELEVISIONS',2,9),(3,'TUBE',3,4),
        (4,'LCD',5,6),(5,'PLASMA',7,8),(6,'PORTABLE ELECTRONICS',10,19),
        (7,'MP3 PLAYERS',11,14),(8,'FLASH',12,13),
        (9,'CD PLAYERS',15,16),(10,'2 WAY RADIOS',17,18);

SELECT * FROM nested_category ORDER BY category_id;

+-------------+----------------------+-----+-----+
| category_id | name                 | lft | rgt |
+-------------+----------------------+-----+-----+
|           1 | ELECTRONICS          |   1 |  20 |
|           2 | TELEVISIONS          |   2 |   9 |
|           3 | TUBE                 |   3 |   4 |
|           4 | LCD                  |   5 |   6 |
|           5 | PLASMA               |   7 |   8 |
|           6 | PORTABLE ELECTRONICS |  10 |  19 |
|           7 | MP3 PLAYERS          |  11 |  14 |
|           8 | FLASH                |  12 |  13 |
|           9 | CD PLAYERS           |  15 |  16 |
|          10 | 2 WAY RADIOS         |  17 |  18 |
+-------------+----------------------+-----+-----+
We use lft and rgt because left and right are reserved words in MySQL; see http://dev.mysql.com/doc/mysql/en/reserved-words.html for the full list of reserved words.
So how do we determine left and right values? We start numbering at the leftmost side of the outer node and continue to the right:
This design can be applied to a typical tree as well:
When working with a tree, we work from left to right, one layer at a time, descending to each node's children before assigning a right-hand number and moving on to the right. This approach is called the modified preorder tree traversal algorithm.
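The traversal described above can be sketched in plain Python. This is an illustrative sketch (not code from the article); the `children` map is a hypothetical adjacency list mirroring the electronics example, and the function reproduces the lft/rgt values in the nested_category table:

```python
# Modified preorder tree traversal: descend into each node's children before
# assigning the node its right-hand number.
children = {
    'ELECTRONICS': ['TELEVISIONS', 'PORTABLE ELECTRONICS'],
    'TELEVISIONS': ['TUBE', 'LCD', 'PLASMA'],
    'PORTABLE ELECTRONICS': ['MP3 PLAYERS', 'CD PLAYERS', '2 WAY RADIOS'],
    'MP3 PLAYERS': ['FLASH'],
}

def number_tree(root):
    """Return {name: (lft, rgt)} for the tree rooted at root."""
    out = {}
    def walk(node, counter):
        lft = counter  # assign lft on the way down
        for child in children.get(node, []):
            counter = walk(child, counter + 1)
        out[node] = (lft, counter + 1)  # assign rgt on the way back up
        return counter + 1
    walk(root, 1)
    return out

print(number_tree('ELECTRONICS')['FLASH'])
# (12, 13)
```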
Retrieving a Full Tree
We can retrieve the full tree through the use of a self-join that links parents with nodes on the basis that a node's lft value will always appear between its parent's lft and rgt values:
SELECT node.name
FROM nested_category AS node, nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
        AND parent.name = 'ELECTRONICS'
ORDER BY node.lft;

+----------------------+
| name                 |
+----------------------+
| ELECTRONICS          |
| TELEVISIONS          |
| TUBE                 |
| LCD                  |
| PLASMA               |
| PORTABLE ELECTRONICS |
| MP3 PLAYERS          |
| FLASH                |
| CD PLAYERS           |
| 2 WAY RADIOS         |
+----------------------+
Unlike our previous examples with the adjacency list model, this query will work regardless of the depth of the tree. We do not concern ourselves with the rgt value of the node in our BETWEEN clause because the rgt value will always fall within the same parent as the lft values.
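The depth-independence of this query is easy to verify. A minimal sketch (not from the article) running the same BETWEEN self-join against the nested_category data, using Python's sqlite3 module as a stand-in for MySQL:

```python
import sqlite3

# Load the nested_category data from the article into an in-memory SQLite DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nested_category(category_id INTEGER PRIMARY KEY, name TEXT, lft INTEGER, rgt INTEGER)")
conn.executemany("INSERT INTO nested_category VALUES(?,?,?,?)", [
    (1,'ELECTRONICS',1,20),(2,'TELEVISIONS',2,9),(3,'TUBE',3,4),
    (4,'LCD',5,6),(5,'PLASMA',7,8),(6,'PORTABLE ELECTRONICS',10,19),
    (7,'MP3 PLAYERS',11,14),(8,'FLASH',12,13),
    (9,'CD PLAYERS',15,16),(10,'2 WAY RADIOS',17,18),
])

# The full-tree query: a node's lft always lies between its ancestors' lft/rgt.
rows = conn.execute("""
    SELECT node.name
    FROM nested_category AS node, nested_category AS parent
    WHERE node.lft BETWEEN parent.lft AND parent.rgt
      AND parent.name = 'ELECTRONICS'
    ORDER BY node.lft
""").fetchall()
print([r[0] for r in rows])
```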
Finding all the Leaf Nodes
Finding all leaf nodes in the nested set model is even simpler than the LEFT JOIN method used in the adjacency list model. If you look at the nested_category table, you may notice that the lft and rgt values for leaf nodes are consecutive numbers. To find the leaf nodes, we look for nodes where rgt = lft + 1:
SELECT name
FROM nested_category
WHERE rgt = lft + 1;

+--------------+
| name         |
+--------------+
| TUBE         |
| LCD          |
| PLASMA       |
| FLASH        |
| CD PLAYERS   |
| 2 WAY RADIOS |
+--------------+
Retrieving a Single Path
With the nested set model, we can retrieve a single path without having multiple self-joins:
SELECT parent.name
FROM nested_category AS node, nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
        AND node.name = 'FLASH'
ORDER BY parent.lft;

+----------------------+
| name                 |
+----------------------+
| ELECTRONICS          |
| PORTABLE ELECTRONICS |
| MP3 PLAYERS          |
| FLASH                |
+----------------------+
Finding the Depth of the Nodes
We have already looked at how to show the entire tree, but what if we want to also show the depth of each node in the tree, to better identify how each node fits in the hierarchy? This can be done by adding a COUNT function and a GROUP BY clause to our existing query for showing the entire tree:
SELECT node.name, (COUNT(parent.name) - 1) AS depth
FROM nested_category AS node, nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY node.name
ORDER BY node.lft;

+----------------------+-------+
| name                 | depth |
+----------------------+-------+
| ELECTRONICS          |     0 |
| TELEVISIONS          |     1 |
| TUBE                 |     2 |
| LCD                  |     2 |
| PLASMA               |     2 |
| PORTABLE ELECTRONICS |     1 |
| MP3 PLAYERS          |     2 |
| FLASH                |     3 |
| CD PLAYERS           |     2 |
| 2 WAY RADIOS         |     2 |
+----------------------+-------+
We can use the depth value to indent our category names with the CONCAT and REPEAT string functions:
SELECT CONCAT( REPEAT(' ', COUNT(parent.name) - 1), node.name) AS name
FROM nested_category AS node, nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY node.name
ORDER BY node.lft;

+-----------------------+
| name                  |
+-----------------------+
| ELECTRONICS           |
|  TELEVISIONS          |
|   TUBE                |
|   LCD                 |
|   PLASMA              |
|  PORTABLE ELECTRONICS |
|   MP3 PLAYERS         |
|    FLASH              |
|   CD PLAYERS          |
|   2 WAY RADIOS        |
+-----------------------+
Of course, in a client-side application you will be more likely to use the depth value directly to display your hierarchy. Web developers could loop through the tree, adding <li></li> and <ul></ul> tags as the depth number increases and decreases.
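That loop can be sketched in a few lines of plain Python (an illustrative sketch, not code from the article), with the (name, depth) rows from the depth query above hard-coded. As in the article's description, tags are emitted as the depth number increases and decreases; production code would typically nest each child <ul> inside its parent <li> instead:

```python
# (name, depth) rows in tree order, as returned by the depth query above.
rows = [
    ('ELECTRONICS', 0), ('TELEVISIONS', 1), ('TUBE', 2), ('LCD', 2),
    ('PLASMA', 2), ('PORTABLE ELECTRONICS', 1), ('MP3 PLAYERS', 2),
    ('FLASH', 3), ('CD PLAYERS', 2), ('2 WAY RADIOS', 2),
]

def render(rows):
    """Emit <ul>/<li> tags as the depth value rises and falls."""
    html, prev = [], -1
    for name, depth in rows:
        if depth > prev:
            html.append('<ul>' * (depth - prev))   # going deeper: open lists
        elif depth < prev:
            html.append('</ul>' * (prev - depth))  # coming back up: close lists
        html.append('<li>%s</li>' % name)
        prev = depth
    html.append('</ul>' * (prev + 1))  # close everything still open
    return ''.join(html)

print(render(rows))
```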
Depth of a Sub-Tree
When we need depth information for a sub-tree, we cannot limit either the node or parent tables in our self-join because it will corrupt our results. Instead, we add a third self-join, along with a sub-query to determine the depth that will be the new starting point for our sub-tree:
SELECT node.name, (COUNT(parent.name) - (sub_tree.depth + 1)) AS depth
FROM nested_category AS node,
        nested_category AS parent,
        nested_category AS sub_parent,
        (
                SELECT node.name, (COUNT(parent.name) - 1) AS depth
                FROM nested_category AS node, nested_category AS parent
                WHERE node.lft BETWEEN parent.lft AND parent.rgt
                        AND node.name = 'PORTABLE ELECTRONICS'
                GROUP BY node.name
                ORDER BY node.lft
        ) AS sub_tree
WHERE node.lft BETWEEN parent.lft AND parent.rgt
        AND node.lft BETWEEN sub_parent.lft AND sub_parent.rgt
        AND sub_parent.name = sub_tree.name
GROUP BY node.name
ORDER BY node.lft;

+----------------------+-------+
| name                 | depth |
+----------------------+-------+
| PORTABLE ELECTRONICS |     0 |
| MP3 PLAYERS          |     1 |
| FLASH                |     2 |
| CD PLAYERS           |     1 |
| 2 WAY RADIOS         |     1 |
+----------------------+-------+
This query can be used with any node name, including the root node. The depth values are always relative to the named node.
Find the Immediate Subordinates of a Node
Imagine you are showing a category of electronics products on a retailer web site. When a user clicks on a category, you would want to show the products of that category, as well as list its immediate sub-categories, but not the entire tree of categories beneath it. For this, we need to show the node and its immediate sub-nodes, but no further down the tree. For example, when showing the PORTABLE ELECTRONICS category, we will want to show MP3 PLAYERS, CD PLAYERS, and 2 WAY RADIOS, but not FLASH.
This can be easily accomplished by adding a HAVING clause to our previous query:
SELECT node.name, (COUNT(parent.name) - (sub_tree.depth + 1)) AS depth
FROM nested_category AS node,
        nested_category AS parent,
        nested_category AS sub_parent,
        (
                SELECT node.name, (COUNT(parent.name) - 1) AS depth
                FROM nested_category AS node, nested_category AS parent
                WHERE node.lft BETWEEN parent.lft AND parent.rgt
                        AND node.name = 'PORTABLE ELECTRONICS'
                GROUP BY node.name
                ORDER BY node.lft
        ) AS sub_tree
WHERE node.lft BETWEEN parent.lft AND parent.rgt
        AND node.lft BETWEEN sub_parent.lft AND sub_parent.rgt
        AND sub_parent.name = sub_tree.name
GROUP BY node.name
HAVING depth <= 1
ORDER BY node.lft;

+----------------------+-------+
| name                 | depth |
+----------------------+-------+
| PORTABLE ELECTRONICS |     0 |
| MP3 PLAYERS          |     1 |
| CD PLAYERS           |     1 |
| 2 WAY RADIOS         |     1 |
+----------------------+-------+
If you do not wish to show the parent node, change the HAVING depth <= 1 line to HAVING depth = 1.
Aggregate Functions in a Nested Set
Let's add a table of products that we can use to demonstrate aggregate functions with:
CREATE TABLE product(
        product_id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(40),
        category_id INT NOT NULL
);

INSERT INTO product(name, category_id)
VALUES('20" TV',3),('36" TV',3),
        ('Super-LCD 42"',4),('Ultra-Plasma 62"',5),('Value Plasma 38"',5),
        ('Power-MP3 5gb',7),('Super-Player 1gb',8),('Porta CD',9),('CD To go!',9),
        ('Family Talk 360',10);

SELECT * FROM product;

+------------+------------------+-------------+
| product_id | name             | category_id |
+------------+------------------+-------------+
|          1 | 20" TV           |           3 |
|          2 | 36" TV           |           3 |
|          3 | Super-LCD 42"    |           4 |
|          4 | Ultra-Plasma 62" |           5 |
|          5 | Value Plasma 38" |           5 |
|          6 | Power-MP3 5gb    |           7 |
|          7 | Super-Player 1gb |           8 |
|          8 | Porta CD         |           9 |
|          9 | CD To go!        |           9 |
|         10 | Family Talk 360  |          10 |
+------------+------------------+-------------+
Now let's produce a query that can retrieve our category tree, along with a product count for each category:
SELECT parent.name, COUNT(product.name)
FROM nested_category AS node,
        nested_category AS parent,
        product
WHERE node.lft BETWEEN parent.lft AND parent.rgt
        AND node.category_id = product.category_id
GROUP BY parent.name
ORDER BY node.lft;

+----------------------+---------------------+
| name                 | COUNT(product.name) |
+----------------------+---------------------+
| ELECTRONICS          |                  10 |
| TELEVISIONS          |                   5 |
| TUBE                 |                   2 |
| LCD                  |                   1 |
| PLASMA               |                   2 |
| PORTABLE ELECTRONICS |                   5 |
| MP3 PLAYERS          |                   2 |
| FLASH                |                   1 |
| CD PLAYERS           |                   2 |
| 2 WAY RADIOS         |                   1 |
+----------------------+---------------------+
This is our typical whole tree query with a COUNT and GROUP BY added, along with a reference to the product table and a join between the node and product tables in the WHERE clause. As you can see, there is a count for each category, and each parent category's count includes the products of its subcategories.
Adding New Nodes
Now that we have learned how to query our tree, we should take a look at how to update our tree by adding a new node. Let's look at our nested set diagram again:
If we wanted to add a new node between the TELEVISIONS and PORTABLE ELECTRONICS nodes, the new node would have lft and rgt values of 10 and 11, and all nodes to its right would have their lft and rgt values increased by two. We would then add the new node with the appropriate lft and rgt values. While this can be done with a stored procedure in MySQL 5, I will assume for the moment that most readers are using 4.1, as it is the latest stable version, and I will isolate my queries with a LOCK TABLES statement instead:
LOCK TABLE nested_category WRITE;

SELECT @myRight := rgt FROM nested_category
WHERE name = 'TELEVISIONS';

UPDATE nested_category SET rgt = rgt + 2 WHERE rgt > @myRight;
UPDATE nested_category SET lft = lft + 2 WHERE lft > @myRight;

INSERT INTO nested_category(name, lft, rgt) VALUES('GAME CONSOLES', @myRight + 1, @myRight + 2);

UNLOCK TABLES;

We can then check our nesting with our indented tree query:

SELECT CONCAT( REPEAT( ' ', (COUNT(parent.name) - 1) ), node.name) AS name
FROM nested_category AS node, nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY node.name
ORDER BY node.lft;

+-----------------------+
| name                  |
+-----------------------+
| ELECTRONICS           |
|  TELEVISIONS          |
|   TUBE                |
|   LCD                 |
|   PLASMA              |
|  GAME CONSOLES        |
|  PORTABLE ELECTRONICS |
|   MP3 PLAYERS         |
|    FLASH              |
|   CD PLAYERS          |
|   2 WAY RADIOS        |
+-----------------------+
If we instead want to add a node as a child of a node that has no existing children, we need to modify our procedure slightly. Let's add a new FRS node below the 2 WAY RADIOS node:
LOCK TABLE nested_category WRITE;

SELECT @myLeft := lft FROM nested_category
WHERE name = '2 WAY RADIOS';

UPDATE nested_category SET rgt = rgt + 2 WHERE rgt > @myLeft;
UPDATE nested_category SET lft = lft + 2 WHERE lft > @myLeft;

INSERT INTO nested_category(name, lft, rgt) VALUES('FRS', @myLeft + 1, @myLeft + 2);

UNLOCK TABLES;
In this example we expand everything to the right of the left-hand number of our proud new parent node, then place the node to the right of the left-hand value. As you can see, our new node is now properly nested:
SELECT CONCAT( REPEAT( ' ', (COUNT(parent.name) - 1) ), node.name) AS name
FROM nested_category AS node, nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY node.name
ORDER BY node.lft;

+-----------------------+
| name                  |
+-----------------------+
| ELECTRONICS           |
|  TELEVISIONS          |
|   TUBE                |
|   LCD                 |
|   PLASMA              |
|  GAME CONSOLES        |
|  PORTABLE ELECTRONICS |
|   MP3 PLAYERS         |
|    FLASH              |
|   CD PLAYERS          |
|   2 WAY RADIOS        |
|    FRS                |
+-----------------------+
Deleting Nodes
The last basic task involved in working with nested sets is the removal of nodes. The course of action you take when deleting a node depends on the node's position in the hierarchy; deleting leaf nodes is easier than deleting nodes with children because we have to handle the orphaned nodes.
When deleting a leaf node, the process is just the opposite of adding a new node: we delete the node, then subtract its width from every node to its right:
LOCK TABLE nested_category WRITE;

SELECT @myLeft := lft, @myRight := rgt, @myWidth := rgt - lft + 1
FROM nested_category
WHERE name = 'GAME CONSOLES';

DELETE FROM nested_category WHERE lft BETWEEN @myLeft AND @myRight;

UPDATE nested_category SET rgt = rgt - @myWidth WHERE rgt > @myRight;
UPDATE nested_category SET lft = lft - @myWidth WHERE lft > @myRight;

UNLOCK TABLES;
And once again, we execute our indented tree query to confirm that our node has been deleted without corrupting the hierarchy:
SELECT CONCAT( REPEAT( ' ', (COUNT(parent.name) - 1) ), node.name) AS name
FROM nested_category AS node, nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY node.name
ORDER BY node.lft;

+-----------------------+
| name                  |
+-----------------------+
| ELECTRONICS           |
|  TELEVISIONS          |
|   TUBE                |
|   LCD                 |
|   PLASMA              |
|  PORTABLE ELECTRONICS |
|   MP3 PLAYERS         |
|    FLASH              |
|   CD PLAYERS          |
|   2 WAY RADIOS        |
|    FRS                |
+-----------------------+
This approach works equally well to delete a node and all its children:
LOCK TABLE nested_category WRITE;

SELECT @myLeft := lft, @myRight := rgt, @myWidth := rgt - lft + 1
FROM nested_category
WHERE name = 'MP3 PLAYERS';

DELETE FROM nested_category WHERE lft BETWEEN @myLeft AND @myRight;

UPDATE nested_category SET rgt = rgt - @myWidth WHERE rgt > @myRight;
UPDATE nested_category SET lft = lft - @myWidth WHERE lft > @myRight;

UNLOCK TABLES;
And once again, we query to see that we have successfully deleted an entire sub-tree:
SELECT CONCAT( REPEAT( ' ', (COUNT(parent.name) - 1) ), node.name) AS name
FROM nested_category AS node, nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY node.name
ORDER BY node.lft;

+-----------------------+
| name                  |
+-----------------------+
| ELECTRONICS           |
|  TELEVISIONS          |
|   TUBE                |
|   LCD                 |
|   PLASMA              |
|  PORTABLE ELECTRONICS |
|   CD PLAYERS          |
|   2 WAY RADIOS        |
|    FRS                |
+-----------------------+
The other scenario we have to deal with is the deletion of a parent node but not the children. In some cases you may wish to just change the name to a placeholder until a replacement is presented, such as when a supervisor is fired. In other cases, the child nodes should all be moved up to the level of the deleted parent:
LOCK TABLE nested_category WRITE;

SELECT @myLeft := lft, @myRight := rgt, @myWidth := rgt - lft + 1
FROM nested_category
WHERE name = 'PORTABLE ELECTRONICS';

DELETE FROM nested_category WHERE lft = @myLeft;

UPDATE nested_category SET rgt = rgt - 1, lft = lft - 1 WHERE lft BETWEEN @myLeft AND @myRight;
UPDATE nested_category SET rgt = rgt - 2 WHERE rgt > @myRight;
UPDATE nested_category SET lft = lft - 2 WHERE lft > @myRight;

UNLOCK TABLES;
In this case we subtract two from all elements to the right of the node (since without children it would have a width of two), and one from the nodes that are its children (to close the gap created by the loss of the parent's left value). Once again, we can confirm our elements have been promoted:
SELECT CONCAT( REPEAT( ' ', (COUNT(parent.name) - 1) ), node.name) AS name
FROM nested_category AS node, nested_category AS parent
WHERE node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY node.name
ORDER BY node.lft;

+---------------+
| name          |
+---------------+
| ELECTRONICS   |
|  TELEVISIONS  |
|   TUBE        |
|   LCD         |
|   PLASMA      |
|  CD PLAYERS   |
|  2 WAY RADIOS |
|   FRS         |
+---------------+
Other scenarios when deleting nodes would include promoting one of the children to the parent position and moving the child nodes under a sibling of the parent node, but for the sake of space these scenarios will not be covered in this article.
Final Thoughts
While I hope the information within this article will be of use to you, the concept of nested sets in SQL has been around for over a decade, and there is a lot of additional information available in books and on the Internet. In my opinion the most comprehensive source of information on managing hierarchical information is a book called Joe Celko's Trees and Hierarchies in SQL for Smarties, written by a very respected author in the field of advanced SQL, Joe Celko. Joe Celko is often credited with the nested sets model and is by far the most prolific author on the subject. I have found Celko's book to be an invaluable resource in my own studies and highly recommend it. The book covers advanced topics which I have not covered in this article, and provides additional methods for managing hierarchical data in addition to the Adjacency List and Nested Set models.
In the References / Resources section that follows I have listed some web resources that may be of use in your research of managing hierarchical data, including a pair of PHP related resources that include pre-built PHP libraries for handling nested sets in MySQL. Those of you who currently use the adjacency list model and would like to experiment with the nested set model will find sample code for converting between the two in the Storing Hierarchical Data in a Database resource listed below.
The client's server runs Solaris, where not all of the bash features I had used on Linux worked,
so I simply rewrote the job as a Perl script..
file: ora_backup.pl
#!/usr/bin/perl
use DBI;
use Time::Local;
use Error qw(:try);
use Archive::Zip qw(:ERROR_CODES :CONSTANTS);

BEGIN {
    $ENV{ORACLE_OWNER}    = "oracle9";
    $ENV{ORACLE_BASE}     = "/home/$ENV{ORACLE_OWNER}";
    $ENV{ORACLE_HOME}     = "$ENV{ORACLE_BASE}/product/9i";
    $ENV{TNS_ADMIN}       = "$ENV{ORACLE_HOME}/network/admin";
    $ENV{ORA_NLS33}       = "$ENV{ORACLE_HOME}/ocommon/nls/admin/data";
    $ENV{ORACLE_SID}      = "NEXTBSC";
    $ENV{ORACLE_TNSNAME}  = "NEXTBSC";
    $ENV{NLS_LANG}        = "KOREAN_KOREA.KO16MSWIN949";
    $ENV{NLS_NCHAR}       = "KOREAN_KOREA.UTF8";
    $ENV{LD_LIBRARY_PATH} = "$ENV{ORACLE_HOME}/lib:$ENV{ORACLE_HOME}/lib32:$ENV{ORACLE_HOME}/rdbms/demo:$ENV{ORACLE_HOME}/oracm/lib:/lib:/usr/lib:/usr/local/lib";
    $ENV{LD_RUN_PATH}     = "$ENV{ORACLE_HOME}/lib";
    $ENV{NLS_DATE_FORMAT} = "YYYYMMDD";
    $ENV{TEMPDIR}         = "/tmp";
    $ENV{THREADS_FLAG}    = "native";
}

=comment
foreach my $key (keys %ENV) {
    print $key ."=". $ENV{$key} ."\n";
}
=cut

##-------------------------------------------------------------------------------------------------------
## Date of the files to delete (3 days ago)
##-------------------------------------------------------------------------------------------------------
($year, $month, $day) = (localtime(time - (60*60*24*3)))[5,4,3];
my $rmdate = sprintf("%04d%02d%02d", $year+1900, $month+1, $day);

##-------------------------------------------------------------------------------------------------------
## Date of the files to back up (today)
##-------------------------------------------------------------------------------------------------------
($year, $month, $day) = (localtime(time))[5,4,3];
my $date = sprintf("%04d%02d%02d", $year+1900, $month+1, $day);

##-------------------------------------------------------------------------------------------------------
## Basic variables
##-------------------------------------------------------------------------------------------------------
my $user       = "qes_bsc";
my $bakdir     = "/data1/oracle9i_dmp_files/";
my $fullpath   = $bakdir.$date ."_". $user ."_full.zip";
my $onepath    = $bakdir.$date ."_". $user ."_one.zip";
my $rmfullpath = $bakdir.$rmdate ."_". $user ."_full.zip";
my $rmonepath  = $bakdir.$rmdate ."_". $user ."_one.zip";
my @nums = (0..30);

##-------------------------------------------------------------------------------------------------------
## FULL backup....
##-------------------------------------------------------------------------------------------------------
print "\nRunning FULL backup.....\n";
my $command = "$ENV{ORACLE_HOME}/bin/exp $user/$user\@$ENV{ORACLE_TNSNAME} grants=y ";
$command .= " file=(";
for (@nums) {
    $command .= $bakdir . $user ."_full_". sprintf("%02s", $_) .".dmp,";
}
$command = substr($command, 0, length($command)-1);
$command .= " )";
$command .= " filesize=2G";
$command .= " log=". $bakdir . $user .".log";
system $command;

##-------------------------------------------------------------------------------------------------------
## Per-table backup....
##-------------------------------------------------------------------------------------------------------
try {
    print "\nRunning per-table backup.....\n";
    my $conn = DBI->connect("DBI:Oracle:NEXTBSC", $user, $user,
        {AutoCommit => 0, RaiseError => 1, PrintError => 0})
        || die "Error :$DBI::errstr";
    my $SQL  = "SELECT tname FROM tab WHERE tname NOT LIKE 'BIN%' ";
    my $stmt = $conn->prepare($SQL);
    $stmt->execute();
    while (my @row = $stmt->fetchrow_array) {
        my $command = "";
        if ($row[0] eq "NG01_REPORT_INFO") {   # fixed: was @row["TNAME"] == (numeric compare on a misused slice)
            $command = "$ENV{ORACLE_HOME}/bin/exp qes_bsc/qes_bsc\@$ENV{ORACLE_TNSNAME} grants=y ";
            $command .= " tables=". $row[0];
            $command .= " file=(";
            for (@nums) {
                $command .= $bakdir . $row[0] ."_one_". sprintf("%02s", $_) .".dmp,";
            }
            $command = substr($command, 0, length($command)-1);
            $command .= " )";
            $command .= " filesize=2G";
        } else {
            $command = "$ENV{ORACLE_HOME}/bin/exp qes_bsc/qes_bsc\@$ENV{ORACLE_TNSNAME} grants=y ";
            $command .= " tables=". $row[0];
            $command .= " file=". $bakdir . $row[0] ."_one_.dmp";
        }
        system $command;
    }
    $stmt->finish;
    $conn->disconnect;
} catch Error with {
    my $ex = shift;
    print $ex;
};

##-------------------------------------------------------------------------------------------------------
## Compress the FULL backup....
##-------------------------------------------------------------------------------------------------------
try {
    opendir(DIR, $bakdir);
    @files = grep(/_full_\d{2}/, readdir(DIR));
    closedir(DIR);
    if (@files > 0) {
        my $zip  = Archive::Zip->new();
        my $comp = $zip->addDirectory($date);
        $comp->desiredCompressionMethod(COMPRESSION_DEFLATED);
        $comp->desiredCompressionLevel(COMPRESSION_LEVEL_BEST_COMPRESSION);
        print "Compressing the FULL backup.....!!!\n";
        foreach my $file (sort @files) {
            print $file ."\n";
            $comp = $zip->addFile($bakdir . $file, $date ."/". $file) || die 'write error';
        }
        $zip->writeToFileNamed($fullpath) == AZ_OK || die 'write error';
        system "split -b 680m ". $fullpath ." ". $fullpath .".";
    }
} catch Error with {
    my $ex = shift;
    print $ex;
};

##-------------------------------------------------------------------------------------------------------
## Compress the per-table backup....
##-------------------------------------------------------------------------------------------------------
try {
    opendir(DIR, $bakdir);
    @files = grep(/_one_(\d{2})?/, readdir(DIR));
    closedir(DIR);
    if (@files > 0) {
        my $zip  = Archive::Zip->new();
        my $comp = $zip->addDirectory($date);
        $comp->desiredCompressionMethod(COMPRESSION_DEFLATED);
        $comp->desiredCompressionLevel(COMPRESSION_LEVEL_BEST_COMPRESSION);
        print "\nCompressing the per-table backup.....!!!\n";
        foreach my $file (sort @files) {
            print $file ."\n";
            $comp = $zip->addFile($bakdir . $file, $date ."/". $file) || die 'write error';
        }
        $zip->writeToFileNamed($onepath) == AZ_OK || die 'write error';
        system "split -b 680m ". $onepath ." ". $onepath .".";
    }
} catch Error with {
    my $ex = shift;
    print $ex;
};

##-------------------------------------------------------------------------------------------------------
## Files to delete
##-------------------------------------------------------------------------------------------------------
system "rm -f ". $rmfullpath;
system "rm -f ". $rmonepath;
system "rm -f ". $bakdir ."*.dmp";

__END__
##-------------------------------------------------------------------------------------------------------
## Packages needed before running....
##-------------------------------------------------------------------------------------------------------
##perl -e "use Archive::Zip"
##perl -MCPAN -e "install DBD::Oracle"
##perl -MCPAN -e "install Archive::Zip"
##perl -MCPAN -e shell
##cpan>i /archive::zip/
##cpan>install Archive::Zip
##cpan>quit
##windows --> copy /b XXXX_full_zip.aa + XXXX_full_zip.ab XXXX_full.zip
##solaris --> cat XXXX_full_zip.?? > XXXX_full.zip
##linux   --> cat XXXX_full_zip.* > XXXX_full.zip
##__author__  = "park17@gmail.com"
##__version__ = "0.1"
##__date__    = "2007-09-29"
##__file__    = "ora_backup.pl"

Once an archive grows past 2GB, the unzip command no longer handles it..
Compiling unzip with the 64-bit option is supposed to fix that, but it didn't work for me.
So, out of laziness, I just wrote this part as a Perl script as well.
file: unzip.pl
#!/usr/bin/perl
use strict;
use Archive::Zip qw(:ERROR_CODES);

my $zip = Archive::Zip->new();
my $zipName = shift(@ARGV);
if ($zip->read($zipName) != AZ_OK) {
    die "Read of $zipName failed\n";
}
else {
    foreach my $member ($zip->members()) {
        print $member->fileName(), ", ",
              $member->uncompressedSize(), ":",
              $member->compressedSize(), "\n";
        $zip->extractMember($member->fileName());
    }
}
SET SERVEROUTPUT ON SIZE 1000000
SET ECHO OFF VERIFY OFF FEEDBACK OFF TRIMSPOOL ON PAGES 0 LINES 512
SET TERMOUT OFF
SET TERMOUT ON
!rm -rf './proc_backup';
!mkdir './proc_backup';
PROMPT
PROMPT PROCEDURES export Start...!!!!
PROMPT
SET TERMOUT OFF
SPOOL procedures_exp.sql
PROMPT SET ECHO OFF VERIFY OFF FEEDBACK OFF TRIMSPOOL ON TERMOUT OFF PAGES 0
--PROMPT SET LINESIZE 512;
--LINES 512
DECLARE
    TYPE objectElements IS VARRAY(5) OF VARCHAR2(20);
    vObjectType objectElements := objectElements('PROCEDURE', 'PACKAGE', 'PACKAGE BODY', 'FUNCTION', 'TYPE');
    vObjectExt  objectElements := objectElements('prc', 'spc', 'bdy', 'fnc', 'tps');
    vObjectName VARCHAR2(50) := '';
    vExportDir  VARCHAR2(20) := './proc_backup/';
    --vFileHandle UTL_FILE.FILE_TYPE;
    CURSOR cur_procedures(v_object_type VARCHAR2) IS
        SELECT name --, DBMS_METADATA.GET_DDL(v_object_type, name) text
          FROM USER_SOURCE
         WHERE TYPE = v_object_type
         GROUP BY name;
    --
    -- With DBA privileges, adapt this cursor instead (also adjust procedures_dump.sql):
    --
    /*
    CURSOR cur_procedures(v_owner VARCHAR2, v_object_type VARCHAR2) IS
        SELECT object_name name --, DBMS_METADATA.GET_DDL(v_object_type, object_name, owner)
          FROM dba_objects
         WHERE owner = v_owner
           AND object_type = v_object_type
         ORDER BY owner
    */
BEGIN
    --DBMS_OUTPUT.ENABLE('10000000000');
    FOR i IN vObjectType.FIRST .. vObjectType.LAST LOOP
        FOR list_procedures IN cur_procedures(vObjectType(i)) LOOP
            vObjectName := LOWER(list_procedures.name || '.' || vObjectExt(i));
            DBMS_OUTPUT.put_line('SET TERMOUT ON');
            DBMS_OUTPUT.put_line('PROMPT ' || vObjectType(i) || ' -->> ' || vExportDir || vObjectName);
            DBMS_OUTPUT.put_line('SET TERMOUT OFF');
            DBMS_OUTPUT.put_line('spool ' || vExportDir || vObjectName);
            DBMS_OUTPUT.put_line('@procedures_dump ' || vObjectType(i) || ' ' || list_procedures.name);
            DBMS_OUTPUT.put_line('spool off');
            --vFileHandle := UTL_FILE.fopen(vExportDir, list_procedures.name || '.' || vObjectExt(i), 'W');
            --UTL_FILE.put_line(vFileHandle, list_procedures.text);
            --UTL_FILE.fclose(vFileHandle);
        END LOOP;
    END LOOP;
/*
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.put_line('NO_DATA_FOUND');
        UTL_FILE.fclose(vFileHandle);
        NULL;
    WHEN UTL_FILE.INVALID_PATH THEN
        DBMS_OUTPUT.put_line('UTL_FILE.INVALID_PATH');
        UTL_FILE.fclose(vFileHandle);
    WHEN UTL_FILE.READ_ERROR THEN
        DBMS_OUTPUT.put_line('UTL_FILE.READ_ERROR');
        UTL_FILE.fclose(vFileHandle);
    WHEN UTL_FILE.WRITE_ERROR THEN
        DBMS_OUTPUT.put_line('UTL_FILE.WRITE_ERROR');
        UTL_FILE.fclose(vFileHandle);
    WHEN OTHERS THEN
        DBMS_OUTPUT.put_line('OTHER STUFF');
        UTL_FILE.fclose(vFileHandle);
*/
END;
/
SPOOL OFF;
@procedures_exp.sql;
SET TERMOUT ON
PROMPT
PROMPT PROCEDURES export End...!!!!
PROMPT
SET TERMOUT OFF
QUIT;
procedures_dump.sql
SET SERVEROUTPUT ON SIZE 1000000
SET HEAD OFF VERIFY OFF
SET LONG 1000000
COL TEXT FORMAT A132;
SELECT TRIM(DBMS_METADATA.GET_DDL('&1', name)) text
  FROM USER_SOURCE
 WHERE type = '&1'
   AND name = '&2'
 GROUP BY name;
/*
SELECT TRIM(DBMS_METADATA.GET_DDL('&2', object_name, '&1')) text
  FROM dba_objects
 WHERE owner = '&1'
   AND object_type = '&2'
 ORDER BY owner
*/
PROMPT /
program  | the database program to be run by the scheduler |
job      | the type of work registered with the scheduler |
schedule | the execution schedule for a registered job |
window   | information on the system resources allocated to the job |
begin
    dbms_scheduler.create_program (
        program_name   => 'test_scheduler',
        program_action => '/export/home/work/aaa.exe',
        program_type   => 'executable'
    );
end;
/

A newly created program must be in the enabled state before the scheduler can use it.
begin
    dbms_scheduler.drop_program (
        program_name => 'test_scheduler',
        force        => false
    );
end;
/

Next, create a schedule for the program to run on. You can specify the schedule name, the start and end times, and the repeat interval.
begin
    dbms_scheduler.create_schedule (
        schedule_name   => 'hourly_schedule',  -- the original passed program_name here; create_schedule takes schedule_name
        start_date      => systimestamp,
        end_date        => systimestamp + interval '30' day,
        repeat_interval => 'freq=hourly; interval=4',
        comments        => 'every 4 hours'
    );
end;
/

With the create_job procedure you can register a SQL statement or PL/SQL block to run together with a schedule.
begin
    dbms_scheduler.create_job (
        job_name        => 'my_emp_job1',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'insert into emp values(7987, ''SALLY'', ''ANALYST'', null, null, null, null, null);',
        start_date      => '09/07/23 17:16:51.927000+09:00',
        repeat_interval => 'freq=daily; interval=2',
        comments        => 'my new job'
    );
end;
/

The following shows how to drop a scheduled job.
begin
    dbms_scheduler.drop_job (
        job_name => 'my_emp_job1',
        force    => false
    );
end;
/

Now let's create a PL/SQL procedure called check_test, register it with the scheduler, and have it run automatically.
create or replace procedure check_test IS
    v_no   number(4);
    v_name varchar2(13) := null;
BEGIN
    select empno, ename
      into v_no, v_name
      from scott.emp
     where empno = 7934;
    dbms_output.put_line(v_no || ' ' || v_name);
END;
/
BEGIN
    dbms_scheduler.create_job (
        job_name      => 'my_emp_job2',
        job_type      => 'PLSQL_BLOCK',
        job_action    => 'BEGIN check_test(); END;',
        schedule_name => 'hourly_schedule'
    );
END;
/
execute dbms_scheduler.enable('my_emp_job2');

Now schedule my_emp_job2, which the scheduler has enabled:
execute dbms_scheduler.run_job('my_emp_job2');

BEGIN
    DBMS_SCHEDULER.DROP_JOB (
        job_name => 'my_emp_job2',
        force    => false
    );
END;
/

Next, an example of registering an executable that lives on the operating system and running it automatically:
begin
    dbms_scheduler.create_job (
        job_name        => 'my_emp_job3',
        job_type        => 'executable',
        job_action      => '/export/home/work/aaa',
        start_date      => systimestamp,
        repeat_interval => 'freq=daily; interval=2',
        end_date        => systimestamp + interval '30' day
    );
end;
/
execute dbms_scheduler.enable('my_emp_job3');
execute dbms_scheduler.run_job('my_emp_job3');

BEGIN
    dbms_scheduler.drop_job (
        job_name => 'my_emp_job3',
        force    => false
    );
END;
/

Next, how to apply resource management to the program being run:
begin
    dbms_scheduler.create_window (
        window_name   => 'my_window100',
        schedule_name => 'hourly_schedule',
        resource_plan => 'internal_plan',
        duration      => interval '160' minute,
        comments      => 'My window'
    );
END;
/
execute dbms_scheduler.enable('my_window100');
execute dbms_scheduler.run_job('my_emp_job2');

Scheduler-related entries in the Oracle data dictionary:
col comments format a35
col table_name format a35
select table_name, comments from dictionary
where table_name like 'DBA%SCHEDULER%'
order by table_name;
DBMS_JOB.SUBMIT (
    job       OUT BINARY_INTEGER,
    what      IN  VARCHAR2,
    next_date IN  DATE           DEFAULT sysdate,       -- time of the first run
    interval  IN  VARCHAR2       DEFAULT 'null',        -- expression that computes the next run time
    no_parse  IN  BOOLEAN        DEFAULT FALSE,
    instance  IN  BINARY_INTEGER DEFAULT any_instance,
    force     IN  BOOLEAN        DEFAULT FALSE);

The default for next_date is sysdate, so if you omit it the job runs as soon as it is registered. The next run time is computed just before the job executes, using the expression given in interval. (If interval is NULL, the job runs only once.) Although the parameter is named interval, what it really means is "an expression that computes the next run time". To run a job once an hour, set interval to 'sysdate+1/24': evaluating sysdate+1/24 just before the job starts yields a time one hour after the start time. To find out when a job will run next, query the NEXT_DATE column of ALL_JOBS.
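As a quick way to check the next run time mentioned above, a minimal sketch querying the standard ALL_JOBS view:

```sql
-- WHAT holds the PL/SQL to execute; INTERVAL holds the next-run expression.
SELECT job, what, next_date, interval
  FROM all_jobs
 ORDER BY next_date;
```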
interval         | run frequency |
'sysdate + 1/24' | once an hour |
'sysdate + 1'    | once a day |
'sysdate + 7'    | once every 7 days (weekly) |
The expressions above, however, only set the run frequency. To run a JOB at a specific time of day, do the following:
interval                    | run time |
'trunc(sysdate) + 1 + 1/24' | runs every day at 01:00 |
'trunc(sysdate, ''D'') + 7' | runs every Sunday at 00:00 |
select trunc(sysdate, 'D') + 7 from dual;

Now let's work out the interval for a variety of cases:
- Every Saturday at 1 AM
- At midnight on the 1st of every month
- At 11 PM on the last day of every month
- On weekdays (Mon-Fri) at 10 PM
- At irregular times: once each at 8:00, 14:00, and 20:00
...
next_date => to_date('2007102701', 'YYYYMMDDHH24'),
interval  => 'sysdate + 7'
...

For the beginning or end of a month, use add_months or last_day.
-- Run at 00:00 on the 1st of every month
...
next_date => add_months(trunc(sysdate, 'MM'), 1),
interval  => 'add_months(trunc(sysdate, ''MM''), 1)'

-- Run at 23:00 on the last day of every month
...
next_date => last_day(trunc(sysdate)) + 23/24,
interval  => 'last_day(trunc(sysdate) + 1) + 23/24'  -- last day + 1 = the 1st of the next month
...

Making the job run only on weekdays requires a somewhat more complex interval.
...
interval => 'trunc(sysdate) + decode(to_char(sysdate, ''D''), ''6'', 3, ''7'', 2, 1) + 22/24'
...

This gets the day of the week, then makes the job restart 3 days after a Saturday run (to_char(sysdate,'D') = '6'), 2 days after a Sunday run ('7'), and 1 day after a weekday run. The DECODE function is used for this.
Irregular time gaps could probably also be handled with DECODE keyed on the run time. But if the job only runs three or four times a day, it is worth considering simply registering three or four JOBs, one per run time.
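The one-job-per-time-slot idea above can be sketched as follows; the job name SP_SOME_JOB is a hypothetical placeholder, and each job advances itself by one day:

```sql
-- Three separate jobs, one per fixed time of day (8:00, 14:00, 20:00).
-- Each interval expression, evaluated just before the run, yields the
-- same time on the following day.
VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(:jobno, 'SP_SOME_JOB;', TRUNC(SYSDATE) + 1 + 8/24,  'TRUNC(SYSDATE) + 1 + 8/24');
  DBMS_JOB.SUBMIT(:jobno, 'SP_SOME_JOB;', TRUNC(SYSDATE) + 1 + 14/24, 'TRUNC(SYSDATE) + 1 + 14/24');
  DBMS_JOB.SUBMIT(:jobno, 'SP_SOME_JOB;', TRUNC(SYSDATE) + 1 + 20/24, 'TRUNC(SYSDATE) + 1 + 20/24');
  COMMIT;
END;
/
```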
The original problem has to satisfy both the irregular-times and the weekdays-only conditions, so solving it with a single interval expression would get very complicated. A complicated interval expression is hard to understand, and can cause trouble later when it needs to be changed.
For reference, from 10g onward Oracle recommends using DBMS_SCHEDULER instead of DBMS_JOB.
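For comparison, the SP_IN_EMP_SAL job registered below via DBMS_JOB could be expressed with DBMS_SCHEDULER roughly like this (a sketch; the job name is hypothetical, and the calendar syntax replaces the 'SYSDATE + 1' interval expression):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'SP_IN_EMP_SAL_JOB',       -- hypothetical name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'SP_IN_EMP_SAL',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; INTERVAL=1',  -- replaces interval 'SYSDATE + 1'
    enabled         => TRUE);
END;
/
```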
VARIABLE jobno NUMBER
BEGIN
DBMS_JOB.SUBMIT
(
:jobno,
'SP_IN_EMP_SAL;',
SYSDATE,
'SYSDATE + 1'
);
COMMIT;
END;
/
BEGIN DBMS_JOB.REMOVE(14443); END; /
BEGIN DBMS_JOB.CHANGE(14144, NULL, NULL, 'SYSDATE + 3'); END; /
BEGIN DBMS_JOB.WHAT(14144, 'SP_IN_EMP_SAL;'); END;
/

Changing a job's next run time:
BEGIN DBMS_JOB.NEXT_DATE(14144, TRUNC(SYSDATE,'MI') + 4); END;
/

Changing a job's run interval:
BEGIN DBMS_JOB.INTERVAL(14144, TRUNC(SYSDATE,'MI')+30/1440); END;
/

Stopping a job: a BROKEN job is no longer executed; to run it anyway, use DBMS_JOB.RUN().
BEGIN DBMS_JOB.BROKEN(14144, TRUE); END;
/

Viewing job-queue information: use the DBA_JOBS, USER_JOBS, and ALL_JOBS views.
SELECT JOB, NEXT_DATE, NEXT_SEC, FAILURES, BROKEN
  FROM DBA_JOBS;

    JOB NEXT_DATE NEXT_SEC FAILURES B
------- --------- -------- -------- -
   9125 01-JUN-01 00:00:00        4 N
  14144 24-OCT-01 16:35:35        0 N
   9127 01-JUN-01 00:00:00       16 Y
3 rows selected.

A simple example of how DBMS_JOB can be used to periodically load data from a source table into a target table:
-- Table holding the list of jobs to run
CREATE TABLE JOB_LIST (
    JOB_ID        VARCHAR2(2),
    JOB_TYPE      VARCHAR2(1),
    JOB_NAME      VARCHAR2(30),
    JOB_EXEC_HOUR VARCHAR(2),
    JOB_PARENTID  VARCHAR2(2),
    CONSTRAINTS JOB_LIST_PK PRIMARY KEY(JOB_ID)
        USING INDEX TABLESPACE CYS_INDEX PCTFREE 0
        STORAGE(INITIAL 32K NEXT 32K PCTINCREASE 0) NOLOGGING)
TABLESPACE CYS_DATA PCTFREE 0
STORAGE(INITIAL 128K NEXT 128K PCTINCREASE 0);
/
CREATE UNIQUE INDEX JOB_LIST_IDX01 ON JOB_LIST(JOB_NAME, JOB_EXEC_HOUR)
TABLESPACE CYS_INDEX PCTFREE 0
STORAGE(INITIAL 64K NEXT 64K PCTINCREASE 0);
/
-- Job-history table; once a day JOB_LIST is copied into this log table.
CREATE TABLE JOB_LOG (
    JOB_ID             VARCHAR2(2),
    JOB_EXEC_DATE      VARCHAR2(8),
    JOB_START_TIME     DATE,
    JOB_END_TIME       DATE,
    JOB_DATASTART_TIME DATE,
    JOB_DATAEND_TIME   DATE,
    JOB_STATUS         VARCHAR2(1),
    JOB_ERR_MSG        VARCHAR2(100),
    CONSTRAINTS JOB_LOG_PK PRIMARY KEY(JOB_ID, JOB_EXEC_DATE)
        USING INDEX TABLESPACE CYS_INDEX PCTFREE 0
        STORAGE(INITIAL 128K NEXT 128K PCTINCREASE 0) NOLOGGING,
    CONSTRAINTS JOB_LIST_FK FOREIGN KEY(JOB_ID) REFERENCES JOB_LIST(JOB_ID))
TABLESPACE CYS_DATA
STORAGE(INITIAL 1M NEXT 1M PCTINCREASE 0);

-- Sequence for assigning JOB_IDs
CREATE SEQUENCE JOB_NUM START WITH 1 INCREMENT BY 1;

-- Procedure run once a day by DBMS_JOB to INSERT the JOB_LIST rows into JOB_LOG.
-- It inserts the jobs to be run on the following day.
CREATE OR REPLACE PROCEDURE SP_IN_JOB_LOG(V_INDATE IN VARCHAR2 DEFAULT NULL)
AS
BEGIN
    INSERT INTO JOB_LOG
    SELECT JOB_ID,
           NVL(V_INDATE, TO_CHAR(SYSDATE+1, 'YYYYMMDD')),
           NULL, NULL, NULL, NULL, 'N', NULL
      FROM JOB_LIST;
    COMMIT;
END SP_IN_JOB_LOG;

-- Register the SP_IN_JOB_LOG procedure with DBMS_JOB.
DECLARE
    JOB_NUMBER NUMBER;
BEGIN
    DBMS_JOB.SUBMIT(JOB_NUMBER,                                    -- job number
                    'SP_IN_JOB_LOG;',                              -- procedure name
                    TO_DATE('20050208180000','YYYYMMDDHH24MISS'),  -- NEXT_DATE
                    'TRUNC(SYSDATE,''MI'')+1');                    -- run interval (daily at 6 PM sharp)
END;

-- Register the procedure (job) to be run in JOB_LIST.
-- DBA_TABLES is used to generate 24 rows via a Cartesian product.
INSERT INTO JOB_LIST
SELECT JOB_NUM.NEXTVAL JOB_ID, JOB_TYPE, JOB_NAME, B.CNT JOB_EXEC_HOUR, NULL
  FROM (SELECT NULL JOB_ID, 'F' JOB_TYPE, 'SP_IN_F_SALE_SUM' JOB_NAME,
               NULL JOB_EXEC_HOUR, NULL
          FROM DUAL) A,
       (SELECT LPAD(ROWNUM-1, 2, '0') CNT
          FROM DBA_TABLES
         WHERE ROWNUM < 25) B;
COMMIT;

-- Copy JOB_LIST into JOB_LOG (SP_IN_JOB_LOG currently inserts the following
-- day, so the desired date is passed explicitly).
EXEC SP_IN_JOB_LOG('20050208');

-- Verify the insert.
SELECT * FROM JOB_LOG;

-- Procedure that summarizes the SOURCE table by the hour and loads the result
-- into the TARGET table. When it runs every hour without errors it is invoked
-- as SP_IN_F_SALE_SUM with no parameters; to load a range of data manually,
-- pass the time range as parameters.
CREATE OR REPLACE PROCEDURE SP_IN_F_SALE_SUM(V_STARTTIME IN VARCHAR2 DEFAULT NULL,
                                             V_ENDTIME   IN VARCHAR2 DEFAULT NULL)
AS
    D_STARTTIME DATE;
    D_ENDTIME   DATE;
    V_ERR_MSG   VARCHAR2(100);
BEGIN
    -- When the procedure is called without parameters
    IF V_STARTTIME IS NULL AND V_ENDTIME IS NULL THEN
        SELECT NVL(JOB_DATAEND_TIME, TRUNC(SYSDATE-1/24, 'HH24'))
          INTO D_STARTTIME
          FROM JOB_LOG
         WHERE JOB_ID = (SELECT JOB_ID FROM JOB_LIST
                          WHERE JOB_NAME = 'SP_IN_F_SALE_SUM'
                            AND JOB_EXEC_HOUR = TO_CHAR(SYSDATE-1/24, 'HH24'))
           AND JOB_EXEC_DATE = TO_CHAR(SYSDATE, 'YYYYMMDD')
           AND JOB_STATUS = 'Y';
        D_ENDTIME := TRUNC(SYSDATE, 'HH24');
    ELSE
        SELECT NVL(JOB_DATAEND_TIME, TO_DATE(V_STARTTIME, 'YYYYMMDDHH24MISS'))
          INTO D_STARTTIME
          FROM JOB_LOG
         WHERE JOB_ID = (SELECT JOB_ID FROM JOB_LIST
                          WHERE JOB_NAME = 'SP_IN_F_SALE_SUM'
                            AND JOB_EXEC_HOUR = SUBSTR(V_STARTTIME, 9, 2))
           AND JOB_EXEC_DATE = SUBSTR(V_ENDTIME, 1, 8)
           AND JOB_STATUS = 'Y';
        D_ENDTIME := TO_DATE(V_ENDTIME, 'YYYYMMDDHH24MISS');
    END IF;

    -- Record the procedure's start time and mark it as running
    UPDATE JOB_LOG
       SET JOB_START_TIME = SYSDATE,
           JOB_STATUS = 'R'
     WHERE JOB_ID = (SELECT JOB_ID FROM JOB_LIST
                      WHERE JOB_NAME = 'SP_IN_F_SALE_SUM'
                        AND JOB_EXEC_HOUR = SUBSTR(V_STARTTIME, 9, 2))
       AND JOB_EXEC_DATE = SUBSTR(V_ENDTIME, 1, 8);

    -- DML ------
    -- The INSERT statement(s) to be executed go here
    -- DML ------
    COMMIT;

    -- When the procedure finishes without errors, record the end time and the
    -- range of data that was loaded
    UPDATE JOB_LOG
       SET JOB_END_TIME = SYSDATE,
           JOB_DATASTART_TIME = D_STARTTIME,
           JOB_DATAEND_TIME = D_ENDTIME,
           JOB_STATUS = 'Y'
     WHERE JOB_ID = (SELECT JOB_ID FROM JOB_LIST
                      WHERE JOB_NAME = 'SP_IN_F_SALE_SUM'
                        AND JOB_EXEC_HOUR = TO_CHAR(D_ENDTIME, 'HH24'))
       AND JOB_EXEC_DATE = SUBSTR(V_ENDTIME, 1, 8);
    COMMIT;
EXCEPTION
    WHEN OTHERS THEN
        V_ERR_MSG := SUBSTRB(SQLERRM, 1, 80);
        UPDATE JOB_LOG
           SET JOB_END_TIME = SYSDATE,
               JOB_STATUS = 'E',
               JOB_ERR_MSG = V_ERR_MSG
         WHERE JOB_ID = (SELECT JOB_ID FROM JOB_LIST
                          WHERE JOB_NAME = 'SP_IN_F_SALE_SUM'
                            AND JOB_EXEC_HOUR = TO_CHAR(D_ENDTIME, 'HH24'))
           AND JOB_EXEC_DATE = SUBSTR(V_ENDTIME, 1, 8);
        COMMIT;
END SP_IN_F_SALE_SUM;
/
-- Register the SP_IN_F_SALE_SUM procedure with DBMS_JOB.
DECLARE
    JOB_NUMBER NUMBER;
BEGIN
    DBMS_JOB.SUBMIT(JOB_NUMBER,                                    -- job number
                    'SP_IN_F_SALE_SUM;',                           -- procedure name
                    TO_DATE('20050209000000','YYYYMMDDHH24MISS'),  -- NEXT_DATE
                    'TRUNC(SYSDATE,''MI'')+1/24');                 -- run interval (every hour on the hour)
END;