Download

Click the button below to download a tarball that contains all the client scripts, then untar it in a location of your choice. Please follow the instructions on this page to set up and configure the scripts.

Download the DB2PRO Client Scripts (21 KB)

Setup Instructions - High Level

The steps below apply to all plans (Free, Personal, and Premium; Business in AWS Cloud; and Business in-house) except where noted:

  • Install Cygwin (only if you are running the scripts on a Windows platform)
  • Receive AWS keys (Business in AWS Cloud plan only)
  • Step 1: Un-tar the download package
  • Step 2: Verify the DB2 Client
  • Step 3: Setup DatabaseList.ini
  • Step 4: Setup the Data Collection Scripts
  • Step 5: Setup the Data Shipping Script (Business in AWS Cloud plan only)
  • Step 6: Do a Test Run of the Data Collection Scripts
  • Step 7: Launch Everything in Background
  • Step 8: Upload Files Manually (Free, Personal, and Premium plans)

Install Cygwin

All of our scripts are written in bash, so to run them in a Windows environment (for instance Windows 7) you will need to install Cygwin to provide a bash environment.

The installation is straightforward: the default package selection is good enough, but you also need to add the openssh package (the shipping script uses ssh and scp).
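
For reference, Cygwin's installer can also be run unattended from a Windows command prompt. A minimal sketch, assuming the 64-bit installer setup-x86_64.exe has been downloaded from cygwin.com (it may still prompt for a download mirror on the first run):

REM -q runs the installer unattended; -P adds extra packages (here, openssh)
setup-x86_64.exe -q -P openssh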

Business in AWS Cloud Plans

If you have a Business in AWS Cloud subscription, you can upload files manually via the upload page or automatically via a private key file.  For automatic uploads, please contact us so that we can provide you with:

  1. A private key file that is used for sending files to AWS server, usually it will be called [your-company]_datacopyuser.pem
  2. The address of your AWS server, usually it will be in the form of [your-company].db2pro.com

You will need both of these in Step 5.

If you are using our services on the Free, Personal, or Premium plan, you can only upload files manually via our upload page.

Step 1: Un-tar the download file

You will need to download the client package and un-tar it in your $HOME directory.

[you@your-server /home/you]$  tar -xvf db2pro.DataCollection.tar

[you@your-server /home/you]$  cd db2pro/DataCollection

Note: the package will create these directories:

  • db2pro
  • db2pro/DataCollection
  • db2pro/DataCollection/data
  • db2pro/DataCollection/log

Step 2: Verify DB2 Client

Since you will be collecting data (from the SYSIBMADM and SYSCAT schemas) via a database connection, you need to make sure that your DB2 Client is properly configured with catalog entries for all the databases you intend to connect to.

You can verify this with a quick test, as follows.

Windows Users: Please use the DB2 Command Window to run this test

[you@your-server /home/you/db2pro/DataCollection]$  db2 "connect to <database> user <userid> using <password>"

[you@your-server /home/you/db2pro/DataCollection]$  db2 "select * from sysibmadm.snapdb fetch first 1 rows only"

[you@your-server /home/you/db2pro/DataCollection]$  db2 "select * from syscat.tables fetch first 1 rows only"

This verifies not only that the catalog entries are properly configured, but also that the userid you are connecting with has sufficient privileges.
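
If the connect fails because a database is not cataloged, you can add the entries with the DB2 command line processor. A sketch with placeholder values (node name MYNODE, host db2host.example.com, port 50000, and database MYDB are all assumptions; substitute your own):

[you@your-server /home/you/db2pro/DataCollection]$  db2 "catalog tcpip node MYNODE remote db2host.example.com server 50000"

[you@your-server /home/you/db2pro/DataCollection]$  db2 "catalog database MYDB at node MYNODE"

[you@your-server /home/you/db2pro/DataCollection]$  db2 terminate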

Step 3: Setup DatabaseList.ini  (in db2pro/DataCollection Folder)

Edit a file called DatabaseList.ini and add your database information:

[you@your-server /home/you/db2pro/DataCollection]$  vi DatabaseList.ini

This file is used by the Data Collection Scripts to connect to the databases and retrieve statistics and metadata.

The <Hostname> and <DB2 Instance> values don't serve any technical function, but they are used in the names of the files that are produced, and they are the values you will see in your dashboard on the db2pro site.

Also note that <Hostname> cannot contain dots. You can simply use an alias, or replace the dots with dashes.
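
The collection scripts read this file line by line as comma-separated values: hostname, DB2 instance, database name, userid, password (see the cut -d, calls in the scripts below). A hypothetical two-database example, with placeholder values throughout:

myhost-prod,db2inst1,SALESDB,db2user,secret1
myhost-prod,db2inst1,HRDB,db2user,secret2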

Step 4: Setup Data Collection Scripts

You will see three Data Collection scripts, which you can optionally edit to adjust what you would like to collect.

[you@your-server /home/you/db2pro/DataCollection]$  vi Collect.DB2.Stats.hourly.sh

[you@your-server /home/you/db2pro/DataCollection]$  vi Collect.DB2.Stats.daily.sh

[you@your-server /home/you/db2pro/DataCollection]$  vi Collect.DB2.Stats.weekly.sh

For your reference, here is the content of these scripts:

Collect.DB2.Stats.hourly.sh:
#!/usr/bin/bash
DATA_PATH=./data
interval=`expr 1 \* 3600`

function check_error {
    rc=$?
    if [[ $rc -ne 0 ]]; then
        echo "Exiting with error . . ."
        exit $rc
    fi
}

while true
do
    echo "`date +%Y-%m-%d-%H-%M-%S` - top of the loop"
    start=`date +%s`
    end=`expr $start + $interval`

    while IFS='' read -r line || [[ -n "$line" ]]; do

        HOSTNAME=$(echo $line | cut -f1 -d,)
        DB2_INSTANCE=$(echo $line | cut -f2 -d,)
        DBNAME=$(echo $line | cut -f3 -d,)
        USERID=$(echo $line | cut -f4 -d,)
        PASSWORD=$(echo $line | cut -f5 -d,)
        FILE_DATE=`date +%s`
            
        db2ExecFileName=temp.`date +%Y-%m-%d-%H-%M-%S`.db2
        echo "connect to ${DBNAME} user ${USERID} using ${PASSWORD} ;" > $db2ExecFileName
        
        # Put a list of SYSIBMADM Views that you want to collect hourly here
        SYSIBMADM_VIEWS="\
        LOG_UTILIZATION \
        SNAPDB"

        for SYSIBMADM_VIEW in $SYSIBMADM_VIEWS; do

            echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.SYSIBMADM_${SYSIBMADM_VIEW}.ixf of ixf select * from sysibmadm.${SYSIBMADM_VIEW} with UR ;" >> $db2ExecFileName

        done
        
        echo "connect reset;" >> $db2ExecFileName

        db2 -tvf $db2ExecFileName
        
        tar -cf ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf 
        check_error

        gzip ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress 
        check_error

        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress.gz ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz
        check_error

        rm ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf
        check_error
        
        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz ${DATA_PATH}

        rm $db2ExecFileName
        
    done < "DatabaseList.ini"

    now=`date +%s`

    sleepTime=`expr $end - $now`
    if (( $sleepTime < 0 )); then
        sleepTime=0
    fi

    echo sleep $sleepTime seconds
    sleep $sleepTime 
done
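
For reference, each pass of these scripts writes a temporary DB2 command file (temp.<timestamp>.db2) and runs it with db2 -tvf. For the hourly script, the generated file would look roughly like this (the epoch timestamp, hostname, database, and credentials below are placeholder examples):

connect to SALESDB user db2user using secret1 ;
export to 1700000000.myhost-prod.db2inst1.SALESDB.SYSIBMADM_LOG_UTILIZATION.ixf of ixf select * from sysibmadm.LOG_UTILIZATION with UR ;
export to 1700000000.myhost-prod.db2inst1.SALESDB.SYSIBMADM_SNAPDB.ixf of ixf select * from sysibmadm.SNAPDB with UR ;
connect reset;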
Collect.DB2.Stats.daily.sh:
#!/usr/bin/bash
DATA_PATH=./data
interval=`expr 24 \* 3600`

function check_error {
    rc=$?
    if [[ $rc -ne 0 ]]; then
        echo "Exiting with error . . ."
        exit $rc
    fi
}

# $1 (optional) is the epoch time at which the first collection should run
# (Step 7 passes EPOCH_SLEEP_DAILY, computed with date +%s)
if [[ -n "$1" ]]; then
    startDelay=$(expr $1 - $(date +%s))
    if (( $startDelay > 0 )); then
        echo "Sleeping for $startDelay seconds . . ."
        sleep $startDelay
    fi
fi

while true
do
    echo "`date +%Y-%m-%d-%H-%M-%S` - top of the loop"
    start=`date +%s`
    end=`expr $start + $interval`

    while IFS='' read -r line || [[ -n "$line" ]]; do

        HOSTNAME=$(echo $line | cut -f1 -d,)
        DB2_INSTANCE=$(echo $line | cut -f2 -d,)
        DBNAME=$(echo $line | cut -f3 -d,)
        USERID=$(echo $line | cut -f4 -d,)
        PASSWORD=$(echo $line | cut -f5 -d,)
        FILE_DATE=`date +%s`
            
        db2ExecFileName=temp.`date +%Y-%m-%d-%H-%M-%S`.db2
        echo "connect to ${DBNAME} user ${USERID} using ${PASSWORD} ;" > $db2ExecFileName
        
        # Put a list of SYSIBMADM Views that you want to collect daily here
         SYSIBMADM_VIEWS="\
            ADMINTABINFO \
            BP_HITRATIO \
            BP_READ_IO \
            BP_WRITE_IO \
            DB_HISTORY \
            DBCFG \
            DBMCFG \
            DBPATHS \
            ENV_INST_INFO \
            ENV_SYS_INFO \
            ENV_SYS_RESOURCES \
            MON_PKG_CACHE_SUMMARY \
            MON_BP_UTILIZATION \
            MON_CONNECTION_SUMMARY \
            MON_DB_SUMMARY \
            MON_LOCKWAITS \
            MON_SERVICE_SUBCLASS_SUMMARY \
            MON_TBSP_UTILIZATION \
            MON_WORKLOAD_SUMMARY \
            SNAPTAB_REORG \
            SNAPTBSP_RANGE \
            SNAPTBSP_QUIESCER \
            TBSP_UTILIZATION \
            SNAPTBSP \
            SNAPTBSP_PART \
            SNAPCONTAINER \
            CONTAINER_UTILIZATION \
            REG_VARIABLES \
            SNAPSTORAGE_PATHS \
            SNAPDYN_SQL \
            SNAPTAB \
            SNAPBP \
            SNAPDB_MEMORY_POOL \
            SNAPDBM \
            SNAPDBM_MEMORY_POOL"

        for SYSIBMADM_VIEW in $SYSIBMADM_VIEWS; do

            echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.SYSIBMADM_${SYSIBMADM_VIEW}.ixf of ixf select * from sysibmadm.${SYSIBMADM_VIEW} with UR ;" >> $db2ExecFileName

        done
        
         SYSCAT_VIEWS="\
            DATAPARTITIONS \
            INDEXES \
            TABLES \
            VIEWDEP \
            BUFFERPOOLS \
            CONTEXTS \
            TABLESPACES"

        for SYSCAT_VIEW in $SYSCAT_VIEWS; do

            echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.SYSCAT_${SYSCAT_VIEW}.ixf of ixf select * from syscat.${SYSCAT_VIEW} with UR ;" >> $db2ExecFileName

        done
        
        echo "connect reset;" >> $db2ExecFileName

        db2 -tvf $db2ExecFileName
        
        tar -cf ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf 
        check_error

        gzip ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress 
        check_error

        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress.gz ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz
        check_error

        rm ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf
        check_error
        
        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz ${DATA_PATH}

        rm $db2ExecFileName
        
    done < "DatabaseList.ini"

    now=`date +%s`

    sleepTime=`expr $end - $now`
    if (( $sleepTime < 0 )); then
        sleepTime=0
    fi

    echo sleep $sleepTime seconds
    sleep $sleepTime 
done
Collect.DB2.Stats.weekly.sh:
#!/usr/bin/bash
DATA_PATH=./data
interval=`expr 7 \* 24 \* 3600`

function check_error {
    rc=$?
    if [[ $rc -ne 0 ]]; then
        echo "Exiting with error . . ."
        exit $rc
    fi
}

# $1 (optional) is the epoch time at which the first collection should run
# (Step 7 passes EPOCH_SLEEP_WEEKLY, computed with date +%s)
if [[ -n "$1" ]]; then
    startDelay=$(expr $1 - $(date +%s))
    if (( $startDelay > 0 )); then
        echo "Sleeping for $startDelay seconds . . ."
        sleep $startDelay
    fi
fi

while true
do
    echo "`date +%Y-%m-%d-%H-%M-%S` - top of the loop"
    start=`date +%s`
    end=`expr $start + $interval`

    while IFS='' read -r line || [[ -n "$line" ]]; do

        HOSTNAME=$(echo $line | cut -f1 -d,)
        DB2_INSTANCE=$(echo $line | cut -f2 -d,)
        DBNAME=$(echo $line | cut -f3 -d,)
        USERID=$(echo $line | cut -f4 -d,)
        PASSWORD=$(echo $line | cut -f5 -d,)
        FILE_DATE=`date +%s`
            
        db2ExecFileName=temp.`date +%Y-%m-%d-%H-%M-%S`.db2
        echo "connect to ${DBNAME} user ${USERID} using ${PASSWORD} ;" > $db2ExecFileName
        
        # Put a list of SYSCAT views that you want to collect weekly here
    
         SYSCAT_VIEWS="\
            DBPARTITIONGROUPDEF \
            DBPARTITIONGROUPS \
            DBAUTH \
            DATAPARTITIONS \
            TABAUTH \
            PACKAGEAUTH \
            PACKAGEDEP  \
            PACKAGES \
            TRIGGERS \
            NODEGROUPDEF \
            NODEGROUPS"
            
        for SYSCAT_VIEW in $SYSCAT_VIEWS; do

            echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.SYSCAT_${SYSCAT_VIEW}.ixf of ixf select * from syscat.${SYSCAT_VIEW} with UR ;" >> $db2ExecFileName

        done

        echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.SYSCAT_EVENTMONITORS.ixf of ixf select A.*, EVENT_MON_STATE(A.EVMONNAME) EVENT_MON_STATE from SYSCAT.EVENTMONITORS A ; ">> $db2ExecFileName

        echo "call REORGCHK_TB_STATS ('T', 'ALL') ;" >> $db2ExecFileName
         
        echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.REORGCHK_TB_STATS.ixf of ixf select * from session.tb_stats ;" >> $db2ExecFileName

        echo "call REORGCHK_IX_STATS ('T', 'ALL') ;" >> $db2ExecFileName
         
        echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.REORGCHK_IX_STATS.ixf of ixf select * from session.ix_stats ;" >> $db2ExecFileName

        echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.MON_GET_INDEX.ixf of ixf select * from table(MON_GET_INDEX ('', '', -1)) as t ; " >> $db2ExecFileName
        
        echo "connect reset;" >> $db2ExecFileName

        db2 -tvf $db2ExecFileName
        
        tar -cf ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf 
        check_error

        gzip ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress 
        check_error

        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress.gz ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz
        check_error

        rm ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf
        check_error
        
        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz ${DATA_PATH}

        rm $db2ExecFileName
        
    done < "DatabaseList.ini"

    now=`date +%s`

    sleepTime=`expr $end - $now`
    if (( $sleepTime < 0 )); then
        sleepTime=0
    fi

    echo sleep $sleepTime seconds
    sleep $sleepTime 
done

Step 5: Setup the Data Shipping script

You will need to place the private key file in your db2pro/DataCollection folder.

Then you will need to edit the data shipping script and adjust two important parameters.

[you@your-server /home/you/db2pro/DataCollection]$  vi Ship.DB2.Stats.to.db2pro.sh

You MUST adjust these two lines:

  • PRIVATE_KEY_FILE=<your-company>_datacopyuser.pem
  • TARGET_HOST=datacopyuser@<your-company>.db2pro.com
Ship.DB2.Stats.to.db2pro.sh:
#!/usr/bin/bash
# PRIVATE_KEY_FILE is the one you have received from db2pro.com
# TARGET_HOST      is the user account (usually datacopyuser) and the sub-domain name which points to your server in the cloud
PRIVATE_KEY_FILE=<your-company>_datacopyuser.pem
TARGET_HOST=datacopyuser@<your-company>.db2pro.com

SOURCE_DIR=./data
TARGET_DIR=/db2/data
FILE_LIST=files_to_send

interval=`expr 1 \* 300`

checkError() {
   if (( $? == 0 )); then
      return
   else
      echo "Program Terminated with error"
      exit 1
   fi
}

cd ${SOURCE_DIR}

while true
do
    echo "`date +%Y-%m-%d-%H-%M-%S` - top of the loop"
    start=`date +%s`
    end=`expr $start + $interval`

    ls *.tar.gz > ${FILE_LIST} 2>/dev/null
    sort ${FILE_LIST} > ${FILE_LIST}.sorted
    mv ${FILE_LIST}.sorted ${FILE_LIST}

    FileList=`cat ${FILE_LIST}`
    FileListLength=`cat ${FILE_LIST} | wc -l`
    echo "Number of files to send = "$FileListLength

    for sourceFile in $FileList; do
        echo "`date +%Y-%m-%d\ %H:%M:%S`: Moving ${sourceFile}"

        scp -i "../${PRIVATE_KEY_FILE}" -p ${sourceFile} ${TARGET_HOST}":"${TARGET_DIR}"/"${sourceFile}".inprogress"
        checkError

        ssh -i "../${PRIVATE_KEY_FILE}" ${TARGET_HOST} "chmod 666 ${TARGET_DIR}\"/\"${sourceFile}\".inprogress\" "
        checkError

        ssh -i "../${PRIVATE_KEY_FILE}" ${TARGET_HOST} "mv ${TARGET_DIR}\"/\"${sourceFile}\".inprogress\" ${TARGET_DIR}\"/\"${sourceFile}"
        checkError

        rm -f ${sourceFile}
        checkError

    done

    now=`date +%s`

    sleepTime=`expr $end - $now`
    if (( $sleepTime < 0 )); then
        sleepTime=0
    fi

    echo sleep $sleepTime seconds
    sleep $sleepTime 
done
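
Before relying on the shipping script, it can be worth testing the key and host once by hand. A hypothetical check (substitute your own key file and server; /db2/data is the TARGET_DIR used by the script above):

[you@your-server /home/you/db2pro/DataCollection]$  chmod 600 <your-company>_datacopyuser.pem

[you@your-server /home/you/db2pro/DataCollection]$  ssh -i <your-company>_datacopyuser.pem datacopyuser@<your-company>.db2pro.com "ls /db2/data"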

Step 6: Do a Test Run of the Data Collection Scripts

At this point, you need to make sure that all of the .sh files have the execute bit turned on (chmod 755), and then execute them.

[you@your-server /home/you/db2pro/DataCollection]$  chmod 755 *.sh

We suggest you start by running Collect.DB2.Stats.hourly.sh in the foreground, and verify that a file (or files) appears in the db2pro/DataCollection/data folder.  These scripts are self-scheduling: after each pass they calculate how long to sleep so that they keep running at a consistent interval.  After your first test run, simply press Ctrl-C to stop the script.

[you@your-server /home/you/db2pro/DataCollection]$  ./Collect.DB2.Stats.hourly.sh

[you@your-server /home/you/db2pro/DataCollection]$  ls -al data/

Then try Collect.DB2.Stats.daily.sh and Collect.DB2.Stats.weekly.sh, and verify that files have been created.

[you@your-server /home/you/db2pro/DataCollection]$  ./Collect.DB2.Stats.daily.sh

[you@your-server /home/you/db2pro/DataCollection]$  ./Collect.DB2.Stats.weekly.sh

[you@your-server /home/you/db2pro/DataCollection]$  ls -al data/
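
You can also inspect a produced package and confirm it contains the expected .ixf exports (the file name below is a hypothetical example; yours will start with an epoch timestamp):

[you@your-server /home/you/db2pro/DataCollection]$  tar -tzf data/1700000000.myhost-prod.db2inst1.SALESDB.tar.gz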

The last step is to try the Ship.DB2.Stats.to.db2pro.sh script, which should send the packaged files to your AWS server and remove them from your local data directory.

[you@your-server /home/you/db2pro/DataCollection]$  ./Ship.DB2.Stats.to.db2pro.sh

Step 7: Launch Everything in Background

Congratulations!  All that is left is to fire off these scripts in the background, as follows.

Note: you can set EPOCH_SLEEP_DAILY and EPOCH_SLEEP_WEEKLY to whatever start times you prefer; they are the epoch times at which the daily and weekly scripts will run their first collection.

[you@your-server /home/you/db2pro/DataCollection]$  nohup ./Collect.DB2.Stats.hourly.sh > log/Collect.DB2.Stats.hourly.sh.out &

[you@your-server /home/you/db2pro/DataCollection]$  export EPOCH_SLEEP_DAILY=`date --date="tomorrow 02:00:00" +%s`

[you@your-server /home/you/db2pro/DataCollection]$  nohup ./Collect.DB2.Stats.daily.sh $EPOCH_SLEEP_DAILY > log/Collect.DB2.Stats.daily.sh.out &

[you@your-server /home/you/db2pro/DataCollection]$  export EPOCH_SLEEP_WEEKLY=`date --date="next sun 03:00:00" +%s`

[you@your-server /home/you/db2pro/DataCollection]$  nohup ./Collect.DB2.Stats.weekly.sh $EPOCH_SLEEP_WEEKLY > log/Collect.DB2.Stats.weekly.sh.out &

[you@your-server /home/you/db2pro/DataCollection]$  nohup ./Ship.DB2.Stats.to.db2pro.sh > log/Ship.DB2.Stats.to.db2pro.sh.out &
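
To confirm everything is running and to watch progress, you can check the background jobs and tail one of the logs:

[you@your-server /home/you/db2pro/DataCollection]$  jobs

[you@your-server /home/you/db2pro/DataCollection]$  tail -f log/Collect.DB2.Stats.hourly.sh.out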

Step 8: Upload Files Manually

At this point you will start seeing DB2PRO packages (.tar.gz files) accumulating in the data folder on your machine.  If you are using the Free, Personal, or Premium plan, you can log in to our site and upload these files using the "Upload Files" page.