A TSCO instance is running on a PostgreSQL database and performance is slow (the TSCO UI is slow, tasks take a long time to run). How can one check the performance of the PostgreSQL DB server?
The first step is to check CPU and memory usage. If the server is monitored with TSCO (e.g. it is a VM imported with the vCenter ETL), you can run an analysis of CPU Utilization and Memory Utilization to see whether they are high.
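If the server is not monitored with TSCO, a quick check can be done directly on the host. The sketch below is an assumption-laden illustration (it uses only /proc, which is standard on Linux; MemAvailable requires kernel 3.14+), not a substitute for the TSCO utilization analysis:

```shell
# Quick CPU/memory snapshot on a Linux DB server using only /proc
# (no extra packages required).

# Load averages (1/5/15 min); sustained values above the core count
# suggest CPU saturation.
cat /proc/loadavg
nproc

# Memory utilization: (MemTotal - MemAvailable) / MemTotal
awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2} \
     END {printf "memory used: %.1f%%\n", (t-a)/t*100}' /proc/meminfo
```

Compare the load averages against the core count reported by nproc, and look for memory utilization that stays close to 100%.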
Testing WAL sync performance using 'pg_test_fsync'
Postgres ships with a utility, pg_test_fsync, that measures how fast the different WAL sync methods run on the server's storage. Sample test output:

[root@bl-pun-bco-qa01 ~]# /usr/pgsql-9.3/bin/pg_test_fsync
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.
Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                     12564.224 ops/sec      80 usecs/op
        fdatasync                          8032.680 ops/sec     124 usecs/op
        fsync                              7344.526 ops/sec     136 usecs/op
        fsync_writethrough                            n/a
        open_sync                         12225.885 ops/sec      82 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      5565.933 ops/sec     180 usecs/op
        fdatasync                          7546.708 ops/sec     133 usecs/op
        fsync                              6119.231 ops/sec     163 usecs/op
        fsync_writethrough                            n/a
        open_sync                          5590.784 ops/sec     179 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write open_sync sizes.)
         1 * 16kB open_sync write         10826.603 ops/sec      92 usecs/op
         2 *  8kB open_sync writes         5699.060 ops/sec     175 usecs/op
         4 *  4kB open_sync writes         2940.245 ops/sec     340 usecs/op
         8 *  2kB open_sync writes         1465.665 ops/sec     682 usecs/op
        16 *  1kB open_sync writes          737.026 ops/sec    1357 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different descriptor.)
        write, fsync, close                6723.157 ops/sec     149 usecs/op
        write, close, fsync                6733.806 ops/sec     149 usecs/op

Non-Sync'ed 8kB writes:
        write                            254583.644 ops/sec       4 usecs/op
Testing I/O subsystem performance using 'fio' on Linux
There are 6 commands that can be run on Linux to test the performance of the underlying I/O subsystem.
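As an illustrative sketch only (the job names, block size, runtime, and ioengine below are assumptions, not the original six commands), typical fio invocations of this kind look like:

```shell
# Hypothetical examples only -- not the original six test commands.
# Sequential read test against the device holding the Postgres data files.
# --readonly guards against accidentally writing to the raw device.
fio --name=seqread --filename=/dev/sda --readonly --rw=read \
    --bs=8k --direct=1 --runtime=30 --time_based --ioengine=libaio

# Random read/write test against a scratch file on the same logical
# volume as the Postgres data directory (writes go to the file, never
# to the raw device).
fio --name=randrw --filename=/path/to/file --size=1G --rw=randrw \
    --bs=8k --direct=1 --runtime=30 --time_based --ioengine=libaio
```

Never run a write-mode fio job against the raw device itself; that would destroy the data on it.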
The /dev/sda reference should be replaced with the actual disk device where your Postgres database files reside on the server. Likewise, in the other commands /path/to/file should be a file on the same logical volume as your Postgres database files. Note: These tests should generally be executed while the TSCO Datahub is not running, as it will compete with the test commands for I/O (and thus decrease the reported throughput of the hardware).

Validating the postgresql.conf settings
There are a few Postgres configuration recommendations in the CO documentation. Please refer to the docs and check whether your postgresql.conf file reflects the recommended configuration.

There is also a tool for Postgres called pgtune (if it is not installed, you can ask the system administrator to install it as root with 'yum install pgtune') which can be used to tune your Postgres configuration parameters based on the machine's memory configuration. For example, on a machine with 4 GB of RAM where the Postgres data directory is /apps/postgres/data, the command is:

pgtune --input /apps/postgres/data/postgresql.conf --type mixed --connections 100 --memory 4000000000
Alternatively, the same can be done without installing packages on the customer's environment by collecting the relevant parameters and entering them in the pgtune web page: https://pgtune.leopard.in.ua/#/

Pay particular attention to the value of effective_io_concurrency, which can be the culprit.

If performance problems persist, one diagnostic option is to run the Postgres database for a short test period (a couple of days) with the parameter synchronous_commit = off and see whether there is a noticeable increase in performance. This generally isn't a desirable permanent configuration (if the server were to crash, the database could be corrupted and need to be restored from backup), but it is a good indicator of the throughput possible in an environment with a slow I/O subsystem.
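One way to apply and revert the test setting is sketched below; this assumes shell access as the postgres OS user and the data directory from the pgtune example above. On the 9.3 release shown earlier the parameter is edited in postgresql.conf directly (ALTER SYSTEM only exists from PostgreSQL 9.4 onward), and synchronous_commit can be reloaded without a restart:

```shell
# Sketch / assumption: run as the postgres OS user; data directory as above.
# Enable the test setting by appending it to postgresql.conf
# (on 9.4+ you could instead use: ALTER SYSTEM SET synchronous_commit = off).
echo "synchronous_commit = off" >> /apps/postgres/data/postgresql.conf

# synchronous_commit is reloadable -- no restart required.
psql -U postgres -c "SELECT pg_reload_conf();"
psql -U postgres -c "SHOW synchronous_commit;"

# After the test period, remove the line again and reload.
```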