DB2 Logging and HADR Performance Analysis using
DB2 Log Scanner and HADR Calculator
DB2 Log Scanner is a tool that scans DB2 LUW transaction log files (such as S0000001.LOG) to analyze logging rate, flush (i.e., log write), transaction, and log record characteristics. It supports cross-version and cross-platform scanning. Any db2logscan executable can scan files from DB2 LUW version V9.1 and later, generated on either big or small endian platforms. It is also a high performance scanner: it can scan log files at 600 MB/sec on Linux workstations when the input files are cached, so you are more likely to be I/O bound during a scan. DB2 Log Scanner cannot be used on encrypted logs.
The scanning result can be used to tune logging performance on either standard (non-HADR) databases or HADR databases.
As of Db2 v11.5, the "HADR Toolkit" suite of tools, consisting of db2logscan (Db2 Log Scanner), simhadr (HADR Simulator), and hadrCalculator (HADR Calculator), is shipped with your Db2 installation under sqllib/bin (db2logscan, simhadr) and sqllib/bin/samples (hadrCalculator.pl), so the tools do not need to be downloaded separately.
db2logscan is a stand-alone tool: you only need to download a single binary executable, and it has no dependency on a DB2 installation. Downloads for the AIX, Linux, and Windows platforms are available below:
File name | OS | OS version | Bit size | Architecture |
---|---|---|---|---|
db2logscan_aix | AIX | 6.1 | 64 | powerpc |
db2logscan_linux | Linux | 3.0 | 64 | x86_64 |
db2logscan_win64.exe | Windows | (N/A) | 64 | x86 |
Note: Since db2logscan supports cross-platform scanning, a scanner on the same platform as the database server is not required. The OS version listed above is the version on which the binary was built. Generally, the binary can also run on newer OS versions, but not older versions.
Only the above platforms are available for download.
A companion tool, HADR calculator, can be downloaded here. HADR calculator is a Perl script. On Unix systems, you may run it directly if it has execute ("x") permission. On Windows, run it as "perl hadrCalculator.pl <arguments>".
Run db2logscan with no argument or with the "-help" argument to get the usage:
+----------------------------------------------------------------------------+
| IBM DB2 Log Scanner (db2logscan) |
| Version 1.0, 64 bit. |
| Licensed Material, Property of IBM. |
| Copyright IBM Corp. 2013. All Rights Reserved. |
+----------------------------------------------------------------------------+
Usage: db2logscan [options] <logFile1> <logFile2> ... <logFileN>
db2logscan scans DB2 transaction log files (such as S0000001.LOG) to analyze
logging rate, flush (log write), transaction, and log record characteristics.
Files from DB2 LUW version V9.1 and later are supported. Option names and
keywords are case-insensitive. Options are:
-interval <n> Sample logging rate every <n> seconds. 0 for sampling once per
log file. Default 60.
-checksum Check page checkSum of input files. Default no check.
-infoLevel <s> Logging information display level. Incremental levels are:
rate: Show logging rate. Default level.
file: Show meta data of each file.
flush: Show each flush.
page: Show each page.
record: Show each record.
-transLevel <s> Transaction information display level. Incremental levels are:
summary: Just a summary at end of run. Default level.
open: Show open transactions at end of run.
all: Show start and end of each transaction during the run.
-f <file> Read file names from <file>, one per line.
-retrieve <cmd> Run "<cmd> fileName" before opening a file to retrieve the file
from archive, etc.
This machine is small endian. db2logscan auto detects log file endian.
It can process log files generated with a different endian.
Typically, IBM, Sun, and HP CPUs are big endian and Intel CPUs are small endian.
The -interval option allows you to sample the logging rate at an arbitrary interval. See the sample output below for more information.
The -checksum option is used only when you suspect a file is corrupted.
The -infoLevel option gives more detailed information on the input files. You can print file-, flush-, page-, and record-level information.
The -transLevel option gives more detailed information on transactions. If you need to zoom in on individual transactions, use this option.
The -f option reads input file names from a file instead of the command line. This is useful when you need to scan a large number of files. Each line is taken as a file name (with the trailing newline removed). Spaces at the beginning and end of a line are NOT trimmed. Blank lines and comment lines (lines starting with "#") are ignored.
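For example, with a hypothetical list file named loglist.txt (the file names below are placeholders for illustration):

shell> cat loglist.txt
# February logs
data/S0003158.LOG
data/S0003159.LOG
shell> db2logscan_linux -f loglist.txt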
The -retrieve option allows you to specify a command to retrieve log files. It can be used when you need to retrieve log files from an archive (or any other repository) and don't want to retrieve all files before running the scanner. The retrieve command can include space management to remove old files in the target directory. Only the current file (the one passed as the argument to the retrieve command) is needed by the scanner, so with space management you only need a small amount of space in the target directory. The command is invoked via the system() call. The command line is passed to "/bin/sh -c" on Unix and "cmd.exe /c" on Windows. The command and the log file name are always double quoted, to allow embedded spaces. This means you cannot embed the double quote character (") itself in the command or file name (double quotes are not allowed in paths on Windows anyway).
Unix example for retrieve:
shell> db2logscan_linux data/S0000003.LOG -retrieve ./retrieve.sh
File "data/S0000003.LOG"
system("./retrieve.sh" "data/S0000003.LOG")
Found /nfshome/zhuge/proj/logscan/logscan.data/logs.V10.1/S0000003.LOG, copying to data/S0000003.LOG
The scanner prints the exact string passed to the system() call (note the double quotes). The argument for the retrieve command is the log file name given to the scanner, including its path. The retrieve command is expected to parse the file name to get the log file number, then retrieve the file into that location. In the example above, it found the file in the repository /nfshome/zhuge/proj/logscan/logscan.data/logs.V10.1, then copied the file to the target directory "data" (the "Found" line is the output of the retrieve command).
Windows example for retrieve:
C:\temp> db2logscan_win64.exe "my data\S0000002.LOG" -retrieve "C:\my scripts\retrieve.bat"
File "my data\S0000002.LOG"
system(""C:\my scripts\retrieve.bat" "my data\S0000002.LOG"")
Found C:\home\zhuge\logdata\logs.V10.1\S0000002.LOG, copying to my data\S0000002.LOG
Note the spaces in the paths in the Windows example above. The whole command line passed to system() is quoted so that cmd.exe can correctly parse the quoted command and log file name on the line (just a Windows quirk; see "cmd.exe /?" for details).
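For reference, a minimal retrieve script might look like the sketch below. The script and the archive location /archive/logs are assumptions for illustration; adapt them to however your log files are archived. Space management (removing previously retrieved files) could be added before the copy.

#!/bin/sh
# Hypothetical retrieve script for db2logscan -retrieve.
# db2logscan passes the target log file path (e.g. data/S0000003.LOG) as the only argument.
ARCHIVE_DIR=/archive/logs            # assumed archive location
TARGET="$1"
FILE=$(basename "$TARGET")           # e.g. S0000003.LOG
echo "Found $ARCHIVE_DIR/$FILE, copying to $TARGET"
cp "$ARCHIVE_DIR/$FILE" "$TARGET"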
When log file compression is enabled (via database config parameter logarchcompr1 or logarchcompr2), archived log files are compressed. db2logscan cannot handle compressed files. You may temporarily disable compression (the config parameters can be changed dynamically) to generate log files for scanner. If you need to scan already generated and compressed files (for example, to debug some system anomaly that already happened), you may contact IBM tech support to decompress these files for you.
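For example, compression of the first archive method could be turned off dynamically with a command like the one below (MYDB is a placeholder database name), then turned back on after the uncompressed log files you need have been generated:

db2 update db cfg for MYDB using LOGARCHCOMPR1 OFF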
With the default infoLevel, you get:
This machine is small endian.
Environment variable TZ is "US/Pacific".
Timezone is PST/PDT, 8.0 hours west of GMT, has_daylight_rule=1
Current local time is 2013-08-16 13:26:49
File "/nfshome/zhuge/proj/logscan/logscan.data/logs.V10.1.aix/S0002534.LOG"
FormatVersion 3072 out of range, switching to big endian.
2013-02-08 23:56:27 0.113 MB/s, 60 sec, 4.3 pg/f, 0.150000 sec/f, 1.0 pg/tr, 0.034558 sec/tr, 0.032931 sec/cmt, nOpenTrans 0.7
2013-02-08 23:57:27 0.097 MB/s, 60 sec, 4.9 pg/f, 0.197368 sec/f, 1.1 pg/tr, 0.045356 sec/tr, 0.043197 sec/cmt, nOpenTrans 0.7
2013-02-08 23:58:27 0.792 MB/s, 60 sec, 17.7 pg/f, 0.087083 sec/f, 6.6 pg/tr, 0.038671 sec/tr, 0.032680 sec/cmt, nOpenTrans 0.9
2013-02-08 23:59:27 0.415 MB/s, 60 sec, 11.2 pg/f, 0.105448 sec/f, 3.2 pg/tr, 0.035371 sec/tr, 0.030318 sec/cmt, nOpenTrans 0.7
...
2013-02-09 01:22:30 2.596 MB/s, 60 sec, 5.8 pg/f, 0.008786 sec/f, 29.4 pg/tr, 0.100666 sec/tr, 0.044412 sec/cmt, nOpenTrans 38.3
2013-02-09 01:23:30 1.709 MB/s, 60 sec, 10.0 pg/f, 0.022945 sec/f, 14.8 pg/tr, 2.731225 sec/tr, 0.033879 sec/cmt, nOpenTrans 28.6
2013-02-09 01:24:30 0.684 MB/s, 60 sec, 5.7 pg/f, 0.032662 sec/f, 11.9 pg/tr, 0.656532 sec/tr, 0.067568 sec/cmt, nOpenTrans 5.1
2013-02-09 01:25:30 0.499 MB/s, 60 sec, 6.4 pg/f, 0.050125 sec/f, 8.6 pg/tr, 0.150000 sec/tr, 0.068182 sec/cmt, nOpenTrans 2.3
File "/nfshome/zhuge/proj/logscan/logscan.data/logs.V10.1.aix/S0003158.LOG"
2013-02-09 01:25:38 1.469 MB/s, 8 sec, 8.3 pg/f, 0.022161 sec/f, 15.2 pg/tr, 0.061453 sec/tr, 0.044693 sec/cmt, nOpenTrans 2.9
WARNING: Non-continuous extent number. Context cleared.
2013-02-26 15:32:09 7.307 MB/s, 60 sec, 17.6 pg/f, 0.009435 sec/f, 15.9 pg/tr, 0.186589 sec/tr, 0.008580 sec/cmt, nOpenTrans 22.7
2013-02-26 15:33:09 5.703 MB/s, 60 sec, 7.7 pg/f, 0.005297 sec/f, 12.3 pg/tr, 0.179527 sec/tr, 0.008495 sec/cmt, nOpenTrans 23.0
2013-02-26 15:34:09 6.787 MB/s, 60 sec, 7.6 pg/f, 0.004354 sec/f, 12.8 pg/tr, 0.177737 sec/tr, 0.007421 sec/cmt, nOpenTrans 24.4
2013-02-26 15:35:09 3.943 MB/s, 60 sec, 5.8 pg/f, 0.005725 sec/f, 9.3 pg/tr, 0.254635 sec/tr, 0.009193 sec/cmt, nOpenTrans 25.0
2013-02-26 15:36:09 1.363 MB/s, 60 sec, 3.5 pg/f, 0.010018 sec/f, 4.3 pg/tr, 0.102471 sec/tr, 0.012782 sec/cmt, nOpenTrans 32.4
2013-02-26 15:37:09 1.988 MB/s, 60 sec, 4.8 pg/f, 0.009497 sec/f, 6.2 pg/tr, 0.104266 sec/tr, 0.012487 sec/cmt, nOpenTrans 40.6
2013-02-26 15:38:09 2.399 MB/s, 60 sec, 5.2 pg/f, 0.008453 sec/f, 8.8 pg/tr, 0.081205 sec/tr, 0.014458 sec/cmt, nOpenTrans 44.9
2013-02-26 15:39:05 3.280 MB/s, 56 sec, 6.7 pg/f, 0.007984 sec/f, 15.1 pg/tr, 0.102178 sec/tr, 0.017937 sec/cmt, nOpenTrans 45.0
transTable: nOpenTran=48 openTranSize=5910673 nBuckets=1024 nAdds=166489 nCollisions=1243 collisionRate=0%
Distribution of log write rate (unit is MB/s):
... <See "Statistical Distribution of Logging Metrics" section below>
3906.125 MB scanned in 5.300 seconds, scanning rate 736.941 MB/second
db2logscan retrieves timestamps from log records (mostly from commit records). Time is shown in the local time zone of the scanner. Time zone information is printed at the beginning. If the scanner machine has a default time zone different from the database server's, you may want to set the TZ environment variable before running the scanner to match the time zone of the database server.
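For example, on a Unix scanner machine you might set TZ on the command line (the time zone value here is just an illustration):

shell> TZ=US/Pacific ./db2logscan_linux S0002534.LOG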
This machine is small endian.
...
FormatVersion 3072 out of range, switching to big endian.
This shows that db2logscan detected that the files were generated with an endianness different from the scanner machine's. It automatically switches the scanning endianness to match the database server's.
The line
2013-02-08 23:56:27 0.113 MB/s, 60 sec, 4.3 pg/f, 0.150000 sec/f, 1.0 pg/tr, 0.034558 sec/tr, 0.032931 sec/cmt, nOpenTrans 0.7
shows that at that point in time, over the previous 60 seconds (the "60 sec" sample length): the logging rate was 0.113 MB/s; flushes averaged 4.3 pages per flush (pg/f) and 0.150000 seconds per flush (sec/f); transactions averaged 1.0 page per transaction (pg/tr) and 0.034558 seconds per transaction (sec/tr); the database-wide commit interval was 0.032931 seconds (sec/cmt); and on average 0.7 transactions were open (nOpenTrans).
If all things are even (same-length transactions evenly spread over time), transaction duration = commit interval * nOpenTrans. In other words, nOpenTrans shows how many transactions are overlapping (concurrent). In reality, this formula does not always hold exactly, due to uneven transaction durations and start/end times.
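As a rough check using the 2013-02-26 15:32:09 sample above: 0.008580 sec/cmt * 22.7 nOpenTrans ≈ 0.195 seconds, close to the reported transaction duration of 0.186589 sec/tr.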
Logging rate info is calculated at a regular interval. The default is every 60 seconds. You can use an arbitrary interval via the -interval option. Because log files are persistent, you can slice and dice your log data any way you want, without rerunning the workload. For graphical presentation, you can feed the output to plotting tools such as gnuplot, as shown below.
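For example, a simple way to extract the rate over time for plotting (a sketch that assumes the default output format shown above; file names are placeholders):

shell> db2logscan_linux S*.LOG > scan.out
shell> grep ' MB/s,' scan.out | awk '{print $1 "_" $2, $3}' > rate.dat
shell> gnuplot
gnuplot> set xdata time
gnuplot> set timefmt "%Y-%m-%d_%H:%M:%S"
gnuplot> plot "rate.dat" using 1:2 with lines title "log write MB/s"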
Notes:
Multiple files can be specified on the command line. For files in a continuous range, the scanner carries the context from one file to the next (the input is treated as one continuous stream). When the scanner detects a gap in the input files, the context is cleared (a new range is started). In the above example,
WARNING: Non-continuous extent number. Context cleared.
is printed to indicate that the context is being cleared. The scanner does more than just extent (log file) number checking. It looks at meta data stored in the files to determine context, so it won't be fooled by seemingly continuous file numbers (resulting from file renaming, or files copied from different log streams).
At the end, transaction table state is printed:
transTable: nOpenTran=48 openTranSize=5910673 nBuckets=1024 nAdds=166489 nCollisions=1243 collisionRate=0%
The number of open transactions at the end of the scan is reported (48 in the example). openTranSize is the sum of the sizes of these open transactions (transaction size is defined as the sum of log record sizes for a transaction, in bytes). The transaction table is implemented as a hash table. nBuckets, nAdds, nCollisions, and collisionRate report the hash table statistics. These are used for scanner diagnostics only.
The info level (-infoLevel) option controls logging rate, file, flush, page, and record verbosity.
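For example (using the file from the sample output above), the following would add file and flush details on top of the default rate lines:

shell> db2logscan_linux -infoLevel flush S0003158.LOG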
At infoLevel "rate" (default level), you get logging rate for every sample interval, like:
2013-02-26 15:32:09 7.307 MB/s, 60 sec, 17.6 pg/f, 0.009435 sec/f, 15.9 pg/tr, 0.186589 sec/tr, 0.008580 sec/cmt, nOpenTrans 22.7
InfoLevel "file" adds file meta data:
File "/nfshome/zhuge/proj/logscan/logscan.data/logs.V10.1.aix/S0003158.LOG"
Chk1=200, Chk2=201, using header 2
FormatVersion=12 (V10.1), CId=1361847345 (2013-02-25 18:55:45)
streamId=0 ExtNum=3158 state=0x38001 ExtSize=499984 PageCount=499984 firstLso=5529742923657 firstLFSInExtent=238909236 lsnBase=0x5E112E20A
InfoLevel "flush" adds flush meta data:
Flush 3: size=56 startTime=2013-02-26 15:31:11 endTime=2013-02-26 15:31:11 duration=0sec
Flush 4: size=7 startTime=2013-02-26 15:31:11 endTime=2013-02-26 15:31:11 duration=0sec
InfoLevel "page" adds page meta data:
Page 275: byteCount=4076 firstIndex=73 pageFlag=0x0 pageLso=5529744044556
Page 276: byteCount=4076 firstIndex=419 pageFlag=0x0 pageLso=5529744048632
Page 277: byteCount=4076 firstIndex=237 pageFlag=0x10 pageLso=5529744052708 endOfFlush
Page 278: byteCount=4076 firstIndex=53 pageFlag=0x0 pageLso=5529744056784
End of flush pages are marked.
InfoLevel "record" adds record meta data:
Record 5: pageOffset=1097 lrecsize=113 lrectype=0x4E lrecflags=0x0 recLfs=238909237 recLsn=0x5E112E20E tid=0x50C3A270
Record 6: pageOffset=1210 lrecsize=164 lrectype=0x4E lrecflags=0x0 recLfs=238909237 recLsn=0x5E112E20F tid=0x50C3A0A4 startOfTrans nOpenTrans=2
...
Record 1706: pageOffset=3848 lrecsize=128 lrectype=0x4E lrecflags=0x0 recLfs=238909238 recLsn=0x5E112E8B3 tid=0x50C3A2A8
Record 1707: pageOffset=3976 lrecsize=48 lrectype=0x84 lrecflags=0x0 recLfs=238909238 recLsn=0x5E112E8B4 tid=0x50C3A0A4 endOfTrans nOpenTrans=13 vts=1361921471.1(2013-02-26 15:31:11)
Records for start and end of a transaction are marked. "tid" is transaction id. Records with the same tid belong to the same transaction. The last record in a transaction is usually "commit" (lrectype=0x84), which contains a timestamp (vts field).
The transaction level (-transLevel) option controls transaction information verbosity. The infoLevel and transLevel options are independent; you can use any combination of the levels.
Level "summary" (default level) shows just a summary at end of run:
transTable: nOpenTran=48 openTranSize=5910673 nBuckets=1024 nAdds=45488 nCollisions=1070 collisionRate=2%
Level "open" lists open tranactions at end of run:
transTable: nOpenTran=48 openTranSize=5910673 nBuckets=1024 nAdds=45488 nCollisions=1070 collisionRate=2%
bkt:0x14 Transaction 29993: tid=0x50C61C14 startTime=2013-02-26 15:35:24 endTime=NULL size=114626 length=0sec
bkt:0x1E Transaction 34440: tid=0x50C6801E startTime=2013-02-26 15:36:23 endTime=NULL size=60855 length=0sec
...
Level "all" adds start and end of each transaction during the run:
Transaction 14 start: tid=0x50C3A296 startTime=2013-02-26 15:31:09 endTime=NULL size=375 duration=0sec nOpenTrans=13
Transaction 14 end: tid=0x50C3A296 startTime=2013-02-26 15:31:09 endTime=2013-02-26 15:31:09 size=915 duration=0sec nOpenTrans=12
The scanner gathers statistical distributions of certain logging metrics. The distribution section is printed after the logging rate section. A sample distribution section is shown below, with comments added (the fixed-width text is the original output from the scanner):
Distribution of log write rate (unit is MB/s):
Total 8 numbers, Sum 32.764, Min 1.362, Max 7.306, Avg 4.096
Exactly 1.362 1 numbers 12%
Exactly 1.987 1 numbers 12%
Exactly 2.399 1 numbers 12%
Exactly 3.280 1 numbers 12%
Exactly 3.942 1 numbers 12%
Exactly 5.702 1 numbers 12%
Exactly 6.786 1 numbers 12%
Exactly 7.306 1 numbers 12%
Each "log write rate" number comes from a sample interval printed in the logging rate section. The example shows 4.096 MB/s average.
Distribution of flush size (unit is page):
Total 68369 numbers, Sum 499984, Min 1, Max 370, Avg 7.3
From 1 to 1 18615 numbers 27%
From 2 to 3 17843 numbers 26%
From 4 to 7 15782 numbers 23%
From 8 to 15 9621 numbers 14%
From 16 to 31 3847 numbers 5%
From 32 to 63 1728 numbers 2%
From 64 to 127 753 numbers 1%
From 128 to 255 166 numbers 0%
From 256 to 511 14 numbers 0%
Every flush is represented. The example shows 7.3 page/flush average (typical on OLTP systems). Watch out for very large (more than a few hundred pages) flushes. You may use info level "flush" to see individual flushes.
Distribution of flush duration (unit is second):
Total 68368 numbers, Sum 476, Min 0, Max 2, Avg 0.006962
Exactly 0 67927 numbers 99%
Exactly 1 406 numbers 0%
Exactly 2 35 numbers 0%
Every flush is represented. The example shows an average of about 7 milliseconds per flush, which is common. Watch out for very long flushes. However, reported long flushes can be the result of sparse timestamps in the log stream (the actual flush time can be shorter than reported). You may use info level "flush" to see individual flushes and study the long ones. If the long flushes occurred when the system was not idle, then they are likely to be real long flushes, especially if applications were experiencing commit delays around the same time.
Because timestamps in the log stream have one-second resolution, reported flush duration is always in whole seconds. For a flush shorter than 1 second, if it started and ended within the same one-second tick, the reported duration will be 0 seconds. If it happens to span a tick (started in second x and ended in second x+1), the reported duration will be 1 second. With a large number of flushes, the average flush time is still accurate, because the overall elapsed time and the number of flushes are accurate.
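For example, in the distribution above, 476 seconds over 68368 flushes still gives an accurate average of 476 / 68368 ≈ 0.007 seconds per flush, even though each individual flush is reported as 0, 1, or 2 seconds.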
Distribution of record size (unit is byte):
Total 9077093 numbers, Sum 2037934956, Min 48, Max 32840, Avg 225
From 32 to 63 112895 numbers 1%
From 64 to 127 3360867 numbers 37%
From 128 to 255 3914015 numbers 43%
From 256 to 511 613774 numbers 6%
From 512 to 1023 1037053 numbers 11%
From 1024 to 2047 2548 numbers 0%
From 2048 to 4095 27943 numbers 0%
From 4096 to 8191 6718 numbers 0%
From 8192 to 16383 1271 numbers 0%
From 16384 to 32767 8 numbers 0%
From 32768 to 65535 1 numbers 0%
Every record scanned is represented.
Distribution of transaction size (unit is byte):
Total 45440 numbers, Sum 2032024283, Min 48, Max 2904178, Avg 44719
From 32 to 63 12 numbers 0%
From 128 to 255 1571 numbers 3%
From 256 to 511 11314 numbers 24%
From 512 to 1023 22664 numbers 49%
From 1024 to 2047 1803 numbers 3%
From 2048 to 4095 122 numbers 0%
From 4096 to 8191 299 numbers 0%
From 8192 to 16383 117 numbers 0%
From 16384 to 32767 268 numbers 0%
From 32768 to 65535 670 numbers 1%
From 65536 to 131071 543 numbers 1%
From 131072 to 262143 3090 numbers 6%
From 262144 to 524287 2258 numbers 4%
From 524288 to 1048575 707 numbers 1%
From 2097152 to 4194303 2 numbers 0%
Every transaction that ended in the scanned files is represented. Open transactions at the end of the scan are excluded.
Distribution of transaction duration (unit is second):
Total 45440 numbers, Sum 7310, Min 0, Max 58, Avg 0.160871
Zero 42588 numbers 93%
From 1 to 1 1903 numbers 4%
From 2 to 3 582 numbers 1%
From 4 to 7 140 numbers 0%
From 8 to 15 146 numbers 0%
From 16 to 31 69 numbers 0%
From 32 to 63 12 numbers 0%
Every transaction that ended in the scanned files is represented. Open transactions at the end of the scan are excluded. Similar to flush time, the timer resolution is whole seconds, so you will only see whole seconds in reported transaction durations. Reported durations of 0 and 1 second may not be accurate. The overall average is still accurate if there is a large number of transactions.
Distribution of commit interval (unit is second):
Total 8 numbers, Sum 0.091349, Min 0.007421, Max 0.017937, Avg 0.011419
Exactly 0.007421 1 numbers 12%
Exactly 0.008494 1 numbers 12%
Exactly 0.008580 1 numbers 12%
Exactly 0.009192 1 numbers 12%
Exactly 0.012486 1 numbers 12%
Exactly 0.012782 1 numbers 12%
Exactly 0.014457 1 numbers 12%
Exactly 0.017937 1 numbers 12%
Each "commit interval" number comes from a sample period printed in the logging rate section.
Distribution of number of open transactions:
Total 90928 numbers, Sum 2712630, Min 1, Max 55, Avg 29.8
From 1 to 1 1 numbers 0%
From 2 to 3 2 numbers 0%
From 4 to 7 4 numbers 0%
From 8 to 15 2072 numbers 2%
From 16 to 31 56303 numbers 61%
From 32 to 63 32546 numbers 35%
Each time the number of open transactions changes (a transaction starts or ends), a sample point is taken.
Each log stream (in pureScale and DPF, each member has its own stream) is written to disk via a single thread (the thread name is db2loggw; threads can be listed via the "db2pd -edus" command). The log stream serializes concurrent transactions in the database into a single stream. Log writing of this single stream can be a bottleneck in high volume systems, especially OLTP systems. High performance devices are recommended as the logging device.
DB2 logging is designed to be self tuning. SQL agents write log records into the log buffer (buffer size is controlled by the database config parameter logbufsz), while the log writing thread (db2loggw) writes the records to disk, in units of pages (4 KB per page). While agents are writing log records to one part of the log buffer, the log writer can write pages to disk from another part of the buffer in parallel. When the log writer is done, it gathers the log pages filled with records for the next flush (log write). If log flushing is slow, more pages accumulate while the log write thread is writing, resulting in larger flushes. For a hard drive based logging device, each write pays a fixed cost (access time), plus a data transfer time proportional to the amount of data written. With larger writes, the fixed-cost part is spread over more pages, resulting in higher throughput (effective speed).
However, self tuning has its limit. While the logger is doing a slow write, agents can only submit so much log data for the next write. Because committing a transaction requires flushing all records of that transaction to disk, an SQL agent issuing a commit request is blocked (cannot move on to the next transaction), waiting for the records of the transaction to be written to disk. The maximal flush size is limited to the aggregated size of the transactions in the system. In scanner output, average_transaction_size * nOpenTrans is the theoretical max flush size for the given workload at that moment.
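For example, in the 2013-02-26 15:32:09 sample line above, 15.9 pg/tr * 22.7 nOpenTrans ≈ 361 pages is the theoretical max flush size at that moment (the HADR calculator output later in this article reports a 360-page max flush for the same interval).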
A larger flush increases throughput, but slows down transaction commit (an application needs to wait longer on its commit request), because the flush containing the commit record will take longer. Note that the longer flush time needed to write a transaction's log records prior to the commit record does not slow down the application. The database considers a non-commit SQL statement complete as soon as the records are written to the in-memory buffer and the data changes are applied to the data pages in the buffer pool. As long as the log buffer is large enough to hold these records, slower writes to disk do not slow down the application (the disk write is effectively asynchronous). Only the commit statement waits for the log flush to disk.
For best performance, the log buffer size should be at least several times the max flush size (transaction_size * nOpenTrans). The buffer is circular. SQL agents write to pages ahead of the log writer's flush position. When agents reach the end of the buffer, they wrap around to the beginning. If they catch up to the log writer's position, the buffer is full and agents have to wait for the log writer to complete the flush and release the buffer. When the log writer reaches the end of the buffer, it issues a flush up to the end of the buffer, even if there is data ready to be written at the beginning of the buffer. A single flush does not wrap around the end of the buffer, because the OS interface only accepts a contiguous region of memory for each write call. After flushing to the end of the buffer, the log writer wraps around to the beginning. Thus the end-of-buffer flush tends to be smaller.
In a perfect world (a single write can wrap around the buffer end, and transaction size and count are constant), the buffer only needs to be two times the max flush size: one half for agents to write log records, and the other for the log writer to write records to disk in parallel. In the real world, due to smaller flushes at the end of the buffer and fluctuating transaction size and count, a larger buffer is needed. A larger buffer also makes transaction rollback faster: if rollback can find the records to undo in the buffer, it avoids retrieving the records from disk. Thus flush size and rollback are the two main concerns when determining buffer size.
The num_log_buffer_full monitor element reports the number of buffer-full conditions. The buffer size should be configured to be greater than 2 * max flush size so that agents never hit the buffer-full condition. If you see buffer-full conditions, increase logbufsz. Even in a large system, the log buffer wouldn't cost much. For example, with 1000 clients, each with a 10-page transaction, the aggregated transaction size is 10 K pages (40 MB). A 100 MB buffer should cover it.
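LOGBUFSZ is specified in 4 KB pages, so a 100 MB buffer corresponds to 25600 pages. For example (MYDB is a placeholder database name):

db2 update db cfg for MYDB using LOGBUFSZ 25600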
Longer commit time may or may not have a significant impact on the application. It all depends on whether the longer commit changes the commit interval (time between commits) of the application. If the commit interval does not change, application throughput does not change (it can submit the same number of transactions in a given amount of time). Some applications' commit intervals are not sensitive to commit time. For example, in an online shopping application the user spends most of the time browsing and only submits a purchase order from time to time. The user's ordering behavior is not sensitive to order commit time: if the order takes 200 ms to complete, instead of 100 ms, the user would barely notice. On the other hand, longer commit time will directly slow down a data injection/loading application, because the longer commit time is added to the commit interval. If the commit interval is relatively long (say, 10 seconds), the impact of commit time going from 100 ms to 200 ms is still very small (100 ms / 10 seconds = 1%). On a 1 second commit interval, the same commit time change has a much higher impact (100 ms / 1 second = 10%).
The scanner reports the commit interval in the "sec/cmt" field. This is the database-wide commit interval. The per-application interval is database_commit_interval * number_of_applications. Note that the number of applications can be higher than the number of database connections, if the applications are not connected to the database all the time. Thus a reading of the number of connections to the database may not be a good estimate of the number of applications. You need to understand the application behavior to estimate the number of applications and compute the per-application commit interval. Once you know the application commit interval and application behavior, you can estimate the impact of longer commit time on the application.
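For example, if the scanner reports 0.01 sec/cmt and you estimate 100 applications, the per-application commit interval is roughly 0.01 * 100 = 1 second; a 100 ms increase in commit time would then slow such an application by roughly 10%.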
In the next section, "HADR Calculator", we will see how the HADR calculator computes the HADR impact on commit time in the various HADR sync modes.
When HADR (High Availability Disaster Recovery) is enabled on a database, logs on the primary database are replicated to one or more standby databases in real time. Replication can slow down log writing on the primary. The main factors impacting primary database log writing are: the HADR sync mode, the log device speed on the primary and standby databases, and the network speed between the primary and the standby(s). See HADR sync mode for a description of the sync modes, and HADR Performance Tuning for measuring disk and network speed.
The HADR calculator (hadrCalculator.pl) estimates HADR performance in the various sync modes. For Db2 v11.5 and later, it is already included in the installation as sqllib/samples/perl/hadrCalculator.pl. For Db2 releases before v11.5, it can be downloaded here.
HADR calculator is a Perl script. On Unix systems, you may run it directly if it has execute ("x") permission. On Windows, run it as "perl hadrCalculator.pl <arguments>".
Note on floating point arguments: HADR calculator uses a Perl library function to parse the command line. Some Perl packages may require a leading 0 for decimals. For example, a package may reject ".1" but accept "0.1". Some packages accept both formats. If you get an error like
Value ".1" invalid for option network
try 0.1 instead.
Run hadrCalculator.pl with no argument or with "-help" argument to get usage:
+----------------------------------------------------------------------------+
| IBM DB2 HADR Calculator V1.0 |
| Licensed Material, Property of IBM. |
| Copyright IBM Corp. 2013. All Rights Reserved. |
+----------------------------------------------------------------------------+
Usage: hadrCalculator.pl [options] <inputFile1> <inputFile2> ... <inputFileN>
HADR calculator annotates db2logscan output with theoretical HADR data rate.
When no input file is specified, stdin is read. Output is written to stdout.
Options are: (<s> indicates string. <n> indicates integer. <f> indicates float
number)
-syncmode <s> Specify one or more HADR sync modes. Modes are
SYNC, NEARSYNC, ASYNC, or SUPERASYNC (case insensitive).
Multiple modes can be specified as comma delimited list.
Default "SYNC,NEARSYNC,ASYNC".
-network <f1> <f2> Specify primary-standby network speed as <f1> MBytes/sec with
round trip time of <f2> second.
-disk <f1> <f2> Specify disk write speed as <f1> MBytes/sec with overhead
of <f2> second per write.
You provide the sync mode, network speed, and log disk speed to the script, and it computes the logging rate in the specified sync modes. Primary and standby log disks are assumed to have the same speed. Use HADR simulator to measure network and disk speed, using this procedure.
For SYNC, NEARSYNC, and ASYNC modes, the calculator computes the logging rate in peer state, where log writing can be slowed down by log shipping. For SUPERASYNC mode, the calculator computes the logging rate in remote catchup state, because HADR never enters peer state in this mode. See the "Remote Catchup State" section below for more info.
In this example, a WAN is assumed (10 MB/sec, 100 ms round trip). Medium to heavy impact on applications is predicted for sync and nearSync modes. No impact is predicted for async mode. This matches the HADR rule of thumb: use async mode on a WAN.
Command: hadrCalculator.pl -network 10 .1 -disk 200 .001 <inputFile>
Output:
hadrCalculator.pl: Network speed 10 MB/s, .1 second round trip time
hadrCalculator.pl: Disk speed 200 MB/s, .001 second overhead per write
...
2013-02-09 01:25:38 1.469 MB/s, 8 sec, 8.3 pg/f, 0.022161 sec/f, 15.2 pg/tr, 0.061453 sec/tr, 0.044693 sec/cmt, nOpenTrans 2.9
actual 1.469 MB/s@ 8 pg/f 0.022161 s/f
SYNC ??? 1.424 MB/s@ 44 pg/f 0.120941 s/f, min 0.307 MB/s@ 8 pg/f 0.105566 s/f, max 1.424 MB/s@ 44 pg/f 0.120941 s/f
NEARSYNC ??? 1.469 MB/s@ 44 pg/f 0.117219 s/f, min 0.314 MB/s@ 8 pg/f 0.103242 s/f, max 1.469 MB/s@ 44 pg/f 0.117219 s/f
ASYNC 10.000 MB/s@ 8 pg/f 0.003242 s/f, min 10.000 MB/s@ 8 pg/f 0.003242 s/f, max 10.000 MB/s@ 44 pg/f 0.017219 s/f
...
2013-02-26 15:32:09 7.307 MB/s, 60 sec, 17.6 pg/f, 0.009435 sec/f, 15.9 pg/tr, 0.186589 sec/tr, 0.008580 sec/cmt, nOpenTrans 22.7
actual 7.307 MB/s@ 17 pg/f 0.009435 s/f
SYNC ??? 5.484 MB/s@ 360 pg/f 0.257087 s/f, min 0.627 MB/s@ 17 pg/f 0.109563 s/f, max 5.484 MB/s@ 360 pg/f 0.257087 s/f
NEARSYNC ??? 5.850 MB/s@ 360 pg/f 0.240988 s/f, min 0.643 MB/s@ 17 pg/f 0.106875 s/f, max 5.850 MB/s@ 360 pg/f 0.240988 s/f
ASYNC 10.000 MB/s@ 17 pg/f 0.006875 s/f, min 10.000 MB/s@ 17 pg/f 0.006875 s/f, max 10.000 MB/s@ 360 pg/f 0.140988 s/f
...
2013-02-26 15:36:09 1.363 MB/s, 60 sec, 3.5 pg/f, 0.010018 sec/f, 4.3 pg/tr, 0.102471 sec/tr, 0.012782 sec/cmt, nOpenTrans 32.4
actual 1.363 MB/s@ 3 pg/f 0.010018 s/f
SYNC ?? 1.373 MB/s@ 42 pg/f 0.120146 s/f, min 0.132 MB/s@ 3 pg/f 0.103504 s/f, max 3.362 MB/s@ 139 pg/f 0.161864 s/f
NEARSYNC ?? 1.370 MB/s@ 40 pg/f 0.115874 s/f, min 0.135 MB/s@ 3 pg/f 0.101367 s/f, max 3.524 MB/s@ 139 pg/f 0.154422 s/f
ASYNC 10.000 MB/s@ 3 pg/f 0.001367 s/f, min 10.000 MB/s@ 3 pg/f 0.001367 s/f, max 10.000 MB/s@ 139 pg/f 0.054422 s/f
...
2013-02-26 15:39:05 3.280 MB/s, 56 sec, 6.7 pg/f, 0.007984 sec/f, 15.1 pg/tr, 0.102178 sec/tr, 0.017937 sec/cmt, nOpenTrans 45.0
actual 3.280 MB/s@ 6 pg/f 0.007984 s/f
SYNC ?? 3.283 MB/s@ 134 pg/f 0.159649 s/f, min 0.250 MB/s@ 6 pg/f 0.104879 s/f, max 6.737 MB/s@ 679 pg/f 0.393973 s/f
NEARSYNC ?? 3.280 MB/s@ 124 pg/f 0.148815 s/f, min 0.255 MB/s@ 6 pg/f 0.102617 s/f, max 7.263 MB/s@ 679 pg/f 0.365430 s/f
ASYNC 10.000 MB/s@ 6 pg/f 0.002617 s/f, min 10.000 MB/s@ 6 pg/f 0.002617 s/f, max 10.000 MB/s@ 679 pg/f 0.265430 s/f
...
Distribution of log write rate (unit is MB/s):
Total 8 numbers, Sum 32.764, Min 1.362, Max 7.306, Avg 4.096
Average rate 4.096 MB/s
REMOTE CATCHUP 10.000 MB/s@ 16 pg/f 0.006250 s/f
Exactly 1.362 1 numbers 12%
Exactly 1.987 1 numbers 12%
...
SYNC Max flush size: predicted 360 pages, workload max 1126 pages
NEARSYNC Max flush size: predicted 360 pages, workload max 1126 pages
ASYNC Max flush size: predicted 17 pages, workload max 1126 pages
At the beginning, the calculator prints the network and disk speed as a verification of the command line arguments:
hadrCalculator.pl: Network speed 10 MB/s, .1 second round trip time
hadrCalculator.pl: Disk speed 200 MB/s, .001 second overhead per write
The HADR calculator annotates the logging rate lines from db2logscan with the computed rate for the various HADR sync modes (other input lines are printed as is). An example:
actual 1.363 MB/s@ 3 pg/f 0.010018 s/f
SYNC ?? 1.373 MB/s@ 42 pg/f 0.120146 s/f, min 0.132 MB/s@ 3 pg/f 0.103504 s/f, max 3.362 MB/s@ 139 pg/f 0.161864 s/f
The "actual" line just reprints selected fields from input, for easy comparison to logging rate of the sync modes. For each sync mode, there are 3 groups of numbers. Each group has its own data rate (MB/s), flush size (pg/f), and flush duration (s/f). Example of a group: 3.283 MB/s@ 134 pg/f 0.159649 s/f. It means that you can get this data rate at this flush size, and each flush takes this amount of time.
For each sync mode, the first group is the target rate group. This group attempts to produce the same logging rate as the actual rate, therefore minimizing the impact on applications. Even when the actual rate can be produced, the flush size and duration may differ from the actual ones and there may be some impact to application behavior. See the "commit time" discussion in the "DB2 Logging Performance" section above for more info. There are two special cases for this group: if the mode can already match the actual rate at the actual flush size, the target group is the same as the "min" group below; if the actual rate cannot be reached even at the maximal flush size, the target group is the same as the "max" group below (the rate is capped).
The 2nd group is the "min" group. It is the minimal performance this mode can produce, computed using the actual flush size. If the rate is higher than actual (flush duration is shorter than actual), then this sync mode will have no impact on applications. If the rate is lower than actual (flush duration is longer than actual), then there may still be some impact even if the target rate group can match the actual logging rate.
The 3rd group is the "max" group. It is the maximal logging rate this mode can produce, computed using the maximal flush size of "transaction_size * nOpenTrans". If the actual rate is higher than this rate, the system will hit a performance cap in this mode.
On an annotation line, the space after the sync mode name is a "?", "??", "???" marker column showing the degree of impact on applications in the given sync mode: no marker means no impact is expected, "?" indicates small impact, "??" indicates medium impact, and "???" indicates heavy impact.
When any "?" marks a sync mode, you need to evaluate flush duration, commit wait time, and application commit interval to gauge the impact on applications when HADR is enabled.
Average commit wait time will be 1.5 times the flush duration (on average, a commit request arrives while the log writer is in the middle of writing a flush; the commit record gets included in the next flush and completes at the end of that flush). You may compare the computed flush duration against the actual flush duration to gauge the change in commit wait time.
You may compute the application commit interval as database_commit_interval * number_of_applications. Then compare the application commit interval with the change in commit wait time to gauge the impact on applications, as shown in the example below. Note that applications have varying degrees of sensitivity to commit time. See the "DB2 Logging Performance" section above for details.
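For example, in the WAN example above, at the 2013-02-26 15:32:09 interval the computed SYNC flush duration is 0.257087 s versus an actual 0.009435 s, so commit wait time grows by roughly 1.5 * (0.257 - 0.009) ≈ 0.37 seconds. With 0.00858 sec/cmt and, say, 100 applications (a hypothetical count), the per-application commit interval is about 0.86 seconds, so an extra 0.37 seconds of commit wait is a heavy impact, consistent with the "???" marker on that line.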
At the end of the calculator output, the max flush size of the "target" group (shown as "predicted") and the "max" group (shown as "workload max") over all annotated logging rate lines is printed. See the "Computed Logging Rate" section above for more info on the groups. Sample output:
SYNC Max flush size: predicted 360 pages, workload max 1126 pages
For best performance, the primary database log write buffer (database config parameter LOGBUFSZ) and the HADR socket buffer size should be at least the "predicted" flush size. The "workload max" size is preferred. Other factors for setting the socket buffer size are: maximizing TCP throughput, buffering ASYNC mode sends, and buffering remote catchup sends (minimum 64 KB). See TCP Tuning and HADR Performance Tuning for more info.
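For example, in the WAN output above, the SYNC "predicted" flush size of 360 pages is about 360 * 4 KB ≈ 1.4 MB, and the "workload max" of 1126 pages is about 4.4 MB, so a log buffer and socket buffer of at least a few MB would be indicated for that workload.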
In remote catchup state, the primary database log write thread writes logs without waiting for replication to the standby. The primary database HADR thread reads log data from the log files and sends it to the standby. Log writing and shipping are independent; the shipping position is behind the writing position. Normally, log shipping eventually catches up to the current log writing position and the HADR pair enters peer state (assuming the sync mode is not superAsync). For more info on HADR state transitions, see HADR States.
The HADR calculator models the remote catchup state log shipping rate as async mode at a fixed 16 pages/flush, because the HADR thread reads log data from the log files in fixed 16-page units and sends the data without waiting for acknowledgement.
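For example, with the 10 MB/s WAN network above, a 16-page (64 KB) flush takes about 0.0625 MB / 10 MB/s = 0.00625 seconds, consistent with the "REMOTE CATCHUP 10.000 MB/s@ 16 pg/f 0.006250 s/f" line shown earlier.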
If the remote catchup log shipping rate is slower than the average log writing rate, the standby may never catch up to the primary and the primary-standby log gap will keep increasing. The calculator compares the average logging rate and the remote catchup rate and warns the user with "?" markers if needed:
Distribution of log write rate (unit is MB/s):
Total 8 numbers, Sum 32.764, Min 1.362, Max 7.306, Avg 4.096
Average rate 4.096 MB/s
REMOTE CATCHUP ?? 5.000 MB/s@ 16 pg/f 0.012500 s/f
Exactly 1.362 1 numbers 12%
Exactly 1.987 1 numbers 12%
...
The "Average rate" and "REMOTE CATCHUP" lines are added by the calculator. "?" markers are used for warning level:
Examples:
Average rate 4.096 MB/s
REMOTE CATCHUP 10.000 MB/s@ 16 pg/f 0.006250 s/f
Average rate 4.096 MB/s
REMOTE CATCHUP ? 7.000 MB/s@ 16 pg/f 0.008929 s/f
Average rate 4.096 MB/s
REMOTE CATCHUP ?? 5.000 MB/s@ 16 pg/f 0.012500 s/f
Average rate 4.096 MB/s
REMOTE CATCHUP ??? 3.000 MB/s@ 16 pg/f 0.020833 s/f
Using the same sample workload, let's now say that we run HADR across town, instead of across the continent. With the network bandwidth raised to 20 MB/second and the round trip time cut down to 5 ms, we get:
Command: hadrCalculator.pl -network 20 .005 -disk 200 .001 <inputFile>
Output:
hadrCalculator.pl: Network speed 20 MB/s, .005 second round trip time
hadrCalculator.pl: Disk speed 200 MB/s, .001 second overhead per write
...
2013-02-09 01:25:38 1.469 MB/s, 8 sec, 8.3 pg/f, 0.022161 sec/f, 15.2 pg/tr, 0.061453 sec/tr, 0.044693 sec/cmt, nOpenTrans 2.9
actual 1.469 MB/s@ 8 pg/f 0.022161 s/f
SYNC 3.624 MB/s@ 8 pg/f 0.008945 s/f, min 3.624 MB/s@ 8 pg/f 0.008945 s/f, max 9.935 MB/s@ 44 pg/f 0.017331 s/f
NEARSYNC 4.897 MB/s@ 8 pg/f 0.006621 s/f, min 4.897 MB/s@ 8 pg/f 0.006621 s/f, max 12.652 MB/s@ 44 pg/f 0.013609 s/f
ASYNC 20.000 MB/s@ 8 pg/f 0.001621 s/f, min 20.000 MB/s@ 8 pg/f 0.001621 s/f, max 20.000 MB/s@ 44 pg/f 0.008609 s/f
...
2013-02-26 15:32:09 7.307 MB/s, 60 sec, 17.6 pg/f, 0.009435 sec/f, 15.9 pg/tr, 0.186589 sec/tr, 0.008580 sec/cmt, nOpenTrans 22.7
actual 7.307 MB/s@ 17 pg/f 0.009435 s/f
SYNC ?? 7.363 MB/s@ 23 pg/f 0.012539 s/f, min 6.180 MB/s@ 17 pg/f 0.011125 s/f, max 15.393 MB/s@ 360 pg/f 0.091593 s/f
NEARSYNC 8.148 MB/s@ 17 pg/f 0.008438 s/f, min 8.148 MB/s@ 17 pg/f 0.008438 s/f, max 18.675 MB/s@ 360 pg/f 0.075494 s/f
ASYNC 20.000 MB/s@ 17 pg/f 0.003438 s/f, min 20.000 MB/s@ 17 pg/f 0.003438 s/f, max 20.000 MB/s@ 360 pg/f 0.070494 s/f
...
2013-02-26 15:36:09 1.363 MB/s, 60 sec, 3.5 pg/f, 0.010018 sec/f, 4.3 pg/tr, 0.102471 sec/tr, 0.012782 sec/cmt, nOpenTrans 32.4
actual 1.363 MB/s@ 3 pg/f 0.010018 s/f
SYNC 1.748 MB/s@ 3 pg/f 0.007820 s/f, min 1.748 MB/s@ 3 pg/f 0.007820 s/f, max 13.724 MB/s@ 139 pg/f 0.039653 s/f
NEARSYNC 2.405 MB/s@ 3 pg/f 0.005684 s/f, min 2.405 MB/s@ 3 pg/f 0.005684 s/f, max 16.895 MB/s@ 139 pg/f 0.032211 s/f
ASYNC 12.797 MB/s@ 3 pg/f 0.001068 s/f, min 12.797 MB/s@ 3 pg/f 0.001068 s/f, max 20.000 MB/s@ 139 pg/f 0.027211 s/f
...
2013-02-26 15:39:05 3.280 MB/s, 56 sec, 6.7 pg/f, 0.007984 sec/f, 15.1 pg/tr, 0.102178 sec/tr, 0.017937 sec/cmt, nOpenTrans 45.0
actual 3.280 MB/s@ 6 pg/f 0.007984 s/f
SYNC ? 3.294 MB/s@ 7 pg/f 0.008724 s/f, min 3.054 MB/s@ 6 pg/f 0.008570 s/f, max 15.965 MB/s@ 679 pg/f 0.166258 s/f
NEARSYNC 4.149 MB/s@ 6 pg/f 0.006309 s/f, min 4.149 MB/s@ 6 pg/f 0.006309 s/f, max 19.274 MB/s@ 679 pg/f 0.137715 s/f
ASYNC 20.000 MB/s@ 6 pg/f 0.001309 s/f, min 20.000 MB/s@ 6 pg/f 0.001309 s/f, max 20.000 MB/s@ 679 pg/f 0.132715 s/f
...
SYNC Max flush size: predicted 23 pages, workload max 1126 pages
NEARSYNC Max flush size: predicted 17 pages, workload max 1126 pages
ASYNC Max flush size: predicted 17 pages, workload max 1126 pages
We see that the application impact is now down to "small" to "medium" for sync and nearSync modes during the heavy load period, with no impact at all during the light load period. And the difference between the computed flush duration and the actual flush duration is small (just a few ms, compared to hundreds of ms in the WAN example). So this may be a practical setup.
Finally, let's look at a LAN case (120 MB/sec, 0.1 ms round trip time, typical gigabit Ethernet):
Command: hadrCalculator.pl -network 120 .0001 -disk 200 .001 <inputFile>
Output:
hadrCalculator.pl: Network speed 120 MB/s, .0001 second round trip time
hadrCalculator.pl: Disk speed 200 MB/s, .001 second overhead per write
...
2013-02-09 01:25:38 1.469 MB/s, 8 sec, 8.3 pg/f, 0.022161 sec/f, 15.2 pg/tr, 0.061453 sec/tr, 0.044693 sec/cmt, nOpenTrans 2.9
actual 1.469 MB/s@ 8 pg/f 0.022161 s/f
SYNC 12.033 MB/s@ 8 pg/f 0.002694 s/f, min 12.033 MB/s@ 8 pg/f 0.002694 s/f, max 32.755 MB/s@ 44 pg/f 0.005257 s/f
NEARSYNC 27.899 MB/s@ 8 pg/f 0.001162 s/f, min 27.899 MB/s@ 8 pg/f 0.001162 s/f, max 92.527 MB/s@ 44 pg/f 0.001861 s/f
ASYNC 27.899 MB/s@ 8 pg/f 0.001162 s/f, min 27.899 MB/s@ 8 pg/f 0.001162 s/f, max 92.527 MB/s@ 44 pg/f 0.001861 s/f
...
2013-02-26 15:32:09 7.307 MB/s, 60 sec, 17.6 pg/f, 0.009435 sec/f, 15.9 pg/tr, 0.186589 sec/tr, 0.008580 sec/cmt, nOpenTrans 22.7
actual 7.307 MB/s@ 17 pg/f 0.009435 s/f
SYNC 20.459 MB/s@ 17 pg/f 0.003360 s/f, min 20.459 MB/s@ 17 pg/f 0.003360 s/f, max 50.447 MB/s@ 360 pg/f 0.027948 s/f
NEARSYNC 51.163 MB/s@ 17 pg/f 0.001344 s/f, min 51.163 MB/s@ 17 pg/f 0.001344 s/f, max 118.987 MB/s@ 360 pg/f 0.011849 s/f
ASYNC 51.163 MB/s@ 17 pg/f 0.001344 s/f, min 51.163 MB/s@ 17 pg/f 0.001344 s/f, max 120.000 MB/s@ 360 pg/f 0.011749 s/f
...
2013-02-26 15:36:09 1.363 MB/s, 60 sec, 3.5 pg/f, 0.010018 sec/f, 4.3 pg/tr, 0.102471 sec/tr, 0.012782 sec/cmt, nOpenTrans 32.4
actual 1.363 MB/s@ 3 pg/f 0.010018 s/f
SYNC 5.816 MB/s@ 3 pg/f 0.002351 s/f, min 5.816 MB/s@ 3 pg/f 0.002351 s/f, max 45.061 MB/s@ 139 pg/f 0.012077 s/f
NEARSYNC 12.797 MB/s@ 3 pg/f 0.001068 s/f, min 12.797 MB/s@ 3 pg/f 0.001068 s/f, max 117.411 MB/s@ 139 pg/f 0.004635 s/f
ASYNC 12.797 MB/s@ 3 pg/f 0.001068 s/f, min 12.797 MB/s@ 3 pg/f 0.001068 s/f, max 120.000 MB/s@ 139 pg/f 0.004535 s/f
...
2013-02-26 15:39:05 3.280 MB/s, 56 sec, 6.7 pg/f, 0.007984 sec/f, 15.1 pg/tr, 0.102178 sec/tr, 0.017937 sec/cmt, nOpenTrans 45.0
actual 3.280 MB/s@ 6 pg/f 0.007984 s/f
SYNC 10.145 MB/s@ 6 pg/f 0.002580 s/f, min 10.145 MB/s@ 6 pg/f 0.002580 s/f, max 52.289 MB/s@ 679 pg/f 0.050762 s/f
NEARSYNC 23.143 MB/s@ 6 pg/f 0.001131 s/f, min 23.143 MB/s@ 6 pg/f 0.001131 s/f, max 119.460 MB/s@ 679 pg/f 0.022219 s/f
ASYNC 23.143 MB/s@ 6 pg/f 0.001131 s/f, min 23.143 MB/s@ 6 pg/f 0.001131 s/f, max 120.000 MB/s@ 679 pg/f 0.022119 s/f
...
SYNC Max flush size: predicted 17 pages, workload max 1126 pages
NEARSYNC Max flush size: predicted 17 pages, workload max 1126 pages
ASYNC Max flush size: predicted 17 pages, workload max 1126 pages
There is no impact at all in any sync mode. You may use any mode and expect no impact on applications.
With the DB2 Log Scanner and HADR Calculator in hand, you can now plan your HADR system with confidence and tune your system with quantitative guidance. See the step-by-step guide on choosing HADR sync mode and TCP buffer size.