

Download Performance Data Collectors for Windows

If you are not allowed to install VirtualBox on your laptop, you can run the Performance Data Collectors on your Windows desktop and then ship the data to the db2pro Processing Engine on a different machine.

Click the button below to download a tarball that contains the Performance Data Collectors, then untar it in a location of your choice.  Please follow the instructions on this page for a successful configuration.

Download the DB2PRO Client Scripts (21 KB)

Install Performance Data Collectors for Windows

Install Cygwin

All of our scripts are written in bash, so to run them in a Windows environment (for instance, Windows 7) you will need to install Cygwin, which provides a Unix-like shell environment.

The installation is straightforward and the default package selection is sufficient, but you also need to add the openssh package.
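Once Cygwin is installed, you can sanity-check from a Cygwin terminal that the tools the collectors rely on are available. This is a minimal sketch that only verifies the commands resolve on the PATH:

```shell
# Verify that bash and the openssh client (ssh) are available in the
# Cygwin shell; if ssh is missing, re-run the Cygwin setup program and
# select the openssh package.
missing=0
for tool in bash ssh; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
        missing=1
    fi
done
```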

Step 1: Untar the downloaded file

You will need to download the client package and un-tar it in your $HOME directory.

[you@your-server /home/you]$  tar -xvf db2pro.DataCollection.tar

[you@your-server /home/you]$  cd db2pro/DataCollection

Note: the package will create these directories:

  • db2pro
  • db2pro/DataCollection
  • db2pro/DataCollection/data
  • db2pro/DataCollection/log

Step 2: Verify DB2 Client

Since you will be collecting data (from the SYSIBMADM and SYSCAT schemas) via a database connection, make sure that your DB2 Client is properly configured with catalog entries for all the databases you intend to connect to.

You can verify this with a quick test as follows.

Windows Users: Please use DB2 Command Window to do this test

[you@your-server /home/you/db2pro/DataCollection]$  db2 "connect to <database> user <userid> using <password>"

[you@your-server /home/you/db2pro/DataCollection]$  db2 "select * from sysibmadm.snapdb fetch first 1 rows only"

[you@your-server /home/you/db2pro/DataCollection]$  db2 "select * from syscat.tables fetch first 1 rows only"

This confirms not only that the catalog entry is properly configured, but also that the userid you are connecting with has sufficient privileges.

Step 3: Set up DatabaseList.ini  (in the db2pro/DataCollection folder)

Edit a file called DatabaseList.ini and add your database information:

[you@your-server /home/you/db2pro/DataCollection]$  vi DatabaseList.ini

This file is used by the Data Collection Scripts to connect to the database and retrieve performance data.

The <Hostname> and <DB2 Instance> fields serve no technical function, but they are used in the names of the files that are produced - and these are the values you will see in your dashboard on the db2pro site.

Also note that <Hostname> cannot contain dots.  You can simply use an alias, or replace the dots with dashes.
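Each line of DatabaseList.ini is a comma-separated record of five fields: hostname, DB2 instance, database name, userid, and password. This layout is inferred from the `cut -f1..5 -d,` parsing in the collector scripts; the sample values below are hypothetical placeholders:

```shell
# A sample DatabaseList.ini record (all values are placeholders):
#   <Hostname>,<DB2 Instance>,<Database>,<Userid>,<Password>
line="prod-host-01,db2inst1,SAMPLE,db2user,secret"

# This is exactly how the collector scripts split each record:
HOSTNAME=$(echo $line | cut -f1 -d,)
DB2_INSTANCE=$(echo $line | cut -f2 -d,)
DBNAME=$(echo $line | cut -f3 -d,)
echo "$HOSTNAME $DB2_INSTANCE $DBNAME"
```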

Step 4: Set up the Data Collectors

You will see three Data Collectors, which you can optionally edit to adjust what you want to collect.

[you@your-server /home/you/db2pro/DataCollection]$  vi Collect.DB2.Stats.hourly.sh

[you@your-server /home/you/db2pro/DataCollection]$  vi Collect.DB2.Stats.daily.sh

[you@your-server /home/you/db2pro/DataCollection]$  vi Collect.DB2.Stats.weekly.sh

For your reference, here is the content of these scripts:

Collect.DB2.Stats.hourly.sh
#!/usr/bin/bash
DATA_PATH=./data
interval=`expr 1 \* 3600`

function check_error {
    rc=$?
    if [[ $rc -ne 0 ]]; then
        echo "Exiting with error . . ."
        exit $rc
    fi
}

while true
do
    echo "`date +%Y-%m-%d-%H-%M-%S` - top of the loop"
    start=`date +%s`
    end=`expr $start + $interval`

    while IFS='' read -r line || [[ -n "$line" ]]; do

        HOSTNAME=$(echo $line | cut -f1 -d,)
        DB2_INSTANCE=$(echo $line | cut -f2 -d,)
        DBNAME=$(echo $line | cut -f3 -d,)
        USERID=$(echo $line | cut -f4 -d,)
        PASSWORD=$(echo $line | cut -f5 -d,)
        FILE_DATE=`date +%s`
            
        db2ExecFileName=temp.`date +%Y-%m-%d-%H-%M-%S`.db2
        echo "connect to ${DBNAME} user ${USERID} using ${PASSWORD} ;" > $db2ExecFileName
        
        # Put a list of SYSIBMADM Views that you want to collect hourly here
        SYSIBMADM_VIEWS="\
        LOG_UTILIZATION \
        SNAPDB"

        for SYSIBMADM_VIEW in $SYSIBMADM_VIEWS; do

            echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.SYSIBMADM_${SYSIBMADM_VIEW}.ixf of ixf select * from sysibmadm.${SYSIBMADM_VIEW} with UR ;" >> $db2ExecFileName

        done
        
        echo "connect reset;" >> $db2ExecFileName

        db2 -tvf $db2ExecFileName
        
        tar -cf ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf 
        check_error

        gzip ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress 
        check_error

        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress.gz ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz
        check_error

        rm ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf
        check_error
        
        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz ${DATA_PATH}

        rm $db2ExecFileName
        
    done < "DatabaseList.ini"

    now=`date +%s`

    sleepTime=`expr $end - $now`
    if (( $sleepTime < 0 )); then
        sleepTime=0
    fi

    echo sleep $sleepTime seconds
    sleep $sleepTime 
done
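One subtlety in the check_error helper used by these scripts: `$?` holds the exit status of only the most recent command, so it is reset by every subsequent command - even an echo. It therefore has to be captured immediately, or the original failure status is lost. A minimal standalone demonstration (fail_demo is a hypothetical stand-in for any failing command):

```shell
# $? is reset by every command, so capture it right away.
fail_demo() { return 1; }    # stand-in for a command that fails with status 1
rc=0
fail_demo || rc=$?           # capture the failure status immediately
echo "something went wrong"  # this echo succeeds, resetting $? to 0
after_echo=$?                # ...so $? no longer reflects the failure
```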
Collect.DB2.Stats.daily.sh
#!/usr/bin/bash
DATA_PATH=./data
interval=`expr 24 \* 3600`

function check_error {
    rc=$?
    if [[ $rc -ne 0 ]]; then
        echo "Exiting with error . . ."
        exit $rc
    fi
}
echo "Sleeping for $1 seconds . . ."
sleep $1 2>/dev/null

while true
do
    echo "`date +%Y-%m-%d-%H-%M-%S` - top of the loop"
    start=`date +%s`
    end=`expr $start + $interval`

    while IFS='' read -r line || [[ -n "$line" ]]; do

        HOSTNAME=$(echo $line | cut -f1 -d,)
        DB2_INSTANCE=$(echo $line | cut -f2 -d,)
        DBNAME=$(echo $line | cut -f3 -d,)
        USERID=$(echo $line | cut -f4 -d,)
        PASSWORD=$(echo $line | cut -f5 -d,)
        FILE_DATE=`date +%s`
            
        db2ExecFileName=temp.`date +%Y-%m-%d-%H-%M-%S`.db2
        echo "connect to ${DBNAME} user ${USERID} using ${PASSWORD} ;" > $db2ExecFileName
        
        # Put a list of SYSIBMADM Views that you want to collect daily here
         SYSIBMADM_VIEWS="\
            ADMINTABINFO \
            BP_HITRATIO \
            BP_READ_IO \
            BP_WRITE_IO \
            DB_HISTORY \
            DBCFG \
            DBMCFG \
            DBPATHS \
            ENV_INST_INFO \
            ENV_SYS_INFO \
            ENV_SYS_RESOURCES \
            MON_PKG_CACHE_SUMMARY \
            MON_BP_UTILIZATION \
            MON_CONNECTION_SUMMARY \
            MON_DB_SUMMARY \
            MON_LOCKWAITS \
            MON_SERVICE_SUBCLASS_SUMMARY \
            MON_TBSP_UTILIZATION \
            MON_WORKLOAD_SUMMARY \
            SNAPTAB_REORG \
            SNAPTBSP_RANGE \
            SNAPTBSP_QUIESCER \
            TBSP_UTILIZATION \
            SNAPTBSP \
            SNAPTBSP_PART \
            SNAPCONTAINER \
            CONTAINER_UTILIZATION \
            REG_VARIABLES \
            SNAPSTORAGE_PATHS \
            SNAPDYN_SQL \
            SNAPTAB \
            SNAPBP \
            SNAPDB_MEMORY_POOL \
            SNAPDBM \
            SNAPDBM_MEMORY_POOL"

        for SYSIBMADM_VIEW in $SYSIBMADM_VIEWS; do

            echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.SYSIBMADM_${SYSIBMADM_VIEW}.ixf of ixf select * from sysibmadm.${SYSIBMADM_VIEW} with UR ;" >> $db2ExecFileName

        done
        
         SYSCAT_VIEWS="\
            DATAPARTITIONS \
            INDEXES \
            TABLES \
            VIEWDEP \
            BUFFERPOOLS \
            CONTEXTS \
            TABLESPACES"

        for SYSCAT_VIEW in $SYSCAT_VIEWS; do

            echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.SYSCAT_${SYSCAT_VIEW}.ixf of ixf select * from syscat.${SYSCAT_VIEW} with UR ;" >> $db2ExecFileName

        done
        
        echo "connect reset;" >> $db2ExecFileName

        db2 -tvf $db2ExecFileName
        
        tar -cf ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf 
        check_error

        gzip ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress 
        check_error

        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress.gz ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz
        check_error

        rm ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf
        check_error
        
        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz ${DATA_PATH}

        rm $db2ExecFileName
        
    done < "DatabaseList.ini"

    now=`date +%s`

    sleepTime=`expr $end - $now`
    if (( $sleepTime < 0 )); then
        sleepTime=0
    fi

    echo sleep $sleepTime seconds
    sleep $sleepTime 
done
Collect.DB2.Stats.weekly.sh
#!/usr/bin/bash
DATA_PATH=./data
interval=`expr 7 \* 24 \* 3600`

function check_error {
    rc=$?
    if [[ $rc -ne 0 ]]; then
        echo "Exiting with error . . ."
        exit $rc
    fi
}
echo "Sleeping for $1 seconds . . ."
sleep $1 2>/dev/null

while true
do
    echo "`date +%Y-%m-%d-%H-%M-%S` - top of the loop"
    start=`date +%s`
    end=`expr $start + $interval`

    while IFS='' read -r line || [[ -n "$line" ]]; do

        HOSTNAME=$(echo $line | cut -f1 -d,)
        DB2_INSTANCE=$(echo $line | cut -f2 -d,)
        DBNAME=$(echo $line | cut -f3 -d,)
        USERID=$(echo $line | cut -f4 -d,)
        PASSWORD=$(echo $line | cut -f5 -d,)
        FILE_DATE=`date +%s`
            
        db2ExecFileName=temp.`date +%Y-%m-%d-%H-%M-%S`.db2
        echo "connect to ${DBNAME} user ${USERID} using ${PASSWORD} ;" > $db2ExecFileName
        
        # Put a list of SYSCAT Views that you want to collect weekly here
    
         SYSCAT_VIEWS="\
            DBPARTITIONGROUPDEF \
            DBPARTITIONGROUPS \
            DBAUTH \
            DATAPARTITIONS \
            TABAUTH \
            PACKAGEAUTH \
            PACKAGEDEP  \
            PACKAGES \
            TRIGGERS \
            NODEGROUPDEF \
            NODEGROUPS"
            
        for SYSCAT_VIEW in $SYSCAT_VIEWS; do

            echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.SYSCAT_${SYSCAT_VIEW}.ixf of ixf select * from syscat.${SYSCAT_VIEW} with UR ;" >> $db2ExecFileName

        done

        echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.SYSCAT_EVENTMONITORS.ixf of ixf select A.*, EVENT_MON_STATE(A.EVMONNAME) EVENT_MON_STATE from SYSCAT.EVENTMONITORS A ; ">> $db2ExecFileName

        echo "call REORGCHK_TB_STATS ('T', 'ALL') ;" >> $db2ExecFileName
         
        echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.REORGCHK_TB_STATS.ixf of ixf select * from session.tb_stats ;" >> $db2ExecFileName

        echo "call REORGCHK_IX_STATS ('T', 'ALL') ;" >> $db2ExecFileName
         
        echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.REORGCHK_IX_STATS.ixf of ixf select * from session.ix_stats ;" >> $db2ExecFileName

        echo "export to ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.MON_GET_INDEX.ixf of ixf select * from table(MON_GET_INDEX ('', '', -1)) as t ; " >> $db2ExecFileName
        
        echo "connect reset;" >> $db2ExecFileName

        db2 -tvf $db2ExecFileName
        
        tar -cf ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf 
        check_error

        gzip ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress 
        check_error

        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.progress.gz ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz
        check_error

        rm ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.*.ixf
        check_error
        
        mv ${FILE_DATE}.${HOSTNAME}.${DB2_INSTANCE}.${DBNAME}.tar.gz ${DATA_PATH}

        rm $db2ExecFileName
        
    done < "DatabaseList.ini"

    now=`date +%s`

    sleepTime=`expr $end - $now`
    if (( $sleepTime < 0 )); then
        sleepTime=0
    fi

    echo sleep $sleepTime seconds
    sleep $sleepTime 
done

Step 5: Test the Data Collectors

At this point, make sure that all of your .sh files have the execute bit turned on (chmod 755), then run them.

[you@your-server /home/you/db2pro/DataCollection]$  chmod 755 *.sh

We suggest you start with Collect.DB2.Stats.hourly.sh in the foreground and verify that one or more files appear in the db2pro/DataCollection/data folder.  These scripts are self-scheduling: after finishing their work, they calculate how long to sleep so that they run at a consistent interval.  After your first test run, simply press Ctrl-C to stop the script.

[you@your-server /home/you/db2pro/DataCollection]$  ./Collect.DB2.Stats.hourly.sh

[you@your-server /home/you/db2pro/DataCollection]$  ls -al data/

Then try Collect.DB2.Stats.daily.sh and Collect.DB2.Stats.weekly.sh and verify that files have been created.

[you@your-server /home/you/db2pro/DataCollection]$  ./Collect.DB2.Stats.daily.sh

[you@your-server /home/you/db2pro/DataCollection]$  ./Collect.DB2.Stats.weekly.sh

[you@your-server /home/you/db2pro/DataCollection]$  ls -al data/
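The self-scheduling logic in each collector boils down to the following computation: record the loop start time, add the interval, and after the work is done sleep only for whatever time remains. A standalone sketch of that logic:

```shell
# Self-scheduling: the next run is pinned to start + interval, so the
# sleep shrinks by however long the collection work took.
interval=3600                      # one hour, as in the hourly collector
start=$(date +%s)
end=$(expr $start + $interval)
# ... data collection work would happen here ...
now=$(date +%s)
sleepTime=$(expr $end - $now)
if [ "$sleepTime" -lt 0 ]; then    # work took longer than the interval
    sleepTime=0
fi
echo "would sleep for $sleepTime seconds"
```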

Step 6: Launch the Data Collectors in the background

Congratulations!  All that remains is to fire off these scripts in the background as follows:

Note: the daily and weekly scripts accept an optional initial sleep (in seconds) as their first argument, so you can align their first run to a convenient off-peak time.

[you@your-server /home/you/db2pro/DataCollection]$  nohup ./Collect.DB2.Stats.hourly.sh > log/Collect.DB2.Stats.hourly.sh.out 2>&1 &

[you@your-server /home/you/db2pro/DataCollection]$  export EPOCH_SLEEP_DAILY=$(( $(date --date="tomorrow 02:00:00" +%s) - $(date +%s) ))

[you@your-server /home/you/db2pro/DataCollection]$  nohup ./Collect.DB2.Stats.daily.sh $EPOCH_SLEEP_DAILY > log/Collect.DB2.Stats.daily.sh.out 2>&1 &

[you@your-server /home/you/db2pro/DataCollection]$  export EPOCH_SLEEP_WEEKLY=$(( $(date --date="next sun 03:00:00" +%s) - $(date +%s) ))

[you@your-server /home/you/db2pro/DataCollection]$  nohup ./Collect.DB2.Stats.weekly.sh $EPOCH_SLEEP_WEEKLY > log/Collect.DB2.Stats.weekly.sh.out 2>&1 &
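Since the scripts sleep for the number of seconds passed as their first argument, the initial delay is simply the difference between the target start time and now. With GNU date (available in Cygwin) you can compute it like this - 02:00 tomorrow is an arbitrary example target:

```shell
# Seconds from now until tomorrow at 02:00, suitable as the initial
# sleep argument for the daily collector.
target=$(date --date="tomorrow 02:00:00" +%s)
now=$(date +%s)
delay=$(expr $target - $now)
echo "initial delay: $delay seconds"
```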

Step 7: Send your files to your db2pro processing engine

There are several ways to set up a db2pro processing engine.

Option 1: VirtualBox running an Ubuntu image

We can provide you with a VirtualBox Ubuntu image pre-configured to process the db2pro data collector packages.  Please contact us for additional information.

Option 2: Virtual Machine running in your private cloud

Please contact us for additional information.