Monday, June 25, 2007

Postgres and Sun Studio 12 build parameters and postgresql.conf values

A lot of people have been requesting the compiler options and postgresql.conf settings used in the Postgres scalability test on the Sun Fire T2000 that I had done previously.

I had not published them before since I am not sure whether these options are optimal for all cases. Anyway, I am providing them now, as it looks like they will help a lot of people jumpstart with Postgres on Solaris, and in the meanwhile I can continue doing more tests and tweak them for another entry if required.

 The PostgreSQL binaries were configured with Sun Studio 12 as follows:

$ ./configure --prefix=/usr/local/pg824 CC=/opt/SUNWspro/bin/cc CFLAGS="-xO3 -xarch=native \
-xspace -W0,-Lt -W2,-Rcond_elim -Xa  -xildoff -xc99=none -xCC" --without-readline
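For completeness, the full build from there looks roughly like the following. This is a sketch: the prefix and compiler path are the ones from above, and I am assuming gmake (GNU make), which the PostgreSQL build wants on Solaris.

```shell
# Sketch of the complete build sequence on Solaris with Sun Studio 12.
# Paths are from the configure line above; adjust to your own layout.
./configure --prefix=/usr/local/pg824 CC=/opt/SUNWspro/bin/cc \
    CFLAGS="-xO3 -xarch=native -xspace -W0,-Lt -W2,-Rcond_elim -Xa \
    -xildoff -xc99=none -xCC" --without-readline
gmake            # GNU make, typically /usr/sfw/bin/gmake on Solaris 10
gmake install
```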

Again, I am told that the above options may not be optimal for Sun Studio 12. I will update them after doing more tests with other options.

The main postgresql.conf options modified are as follows:

max_connections = 1000

unix_socket_directory = '/tmp'

temp_buffers = 1000

max_prepared_transactions = 450

maintenance_work_mem = 512MB

max_fsm_pages = 208000

max_fsm_relations = 10000

vacuum_cost_delay = 50

bgwriter_delay = 200ms

wal_sync_method = fdatasync

full_page_writes = off

wal_buffers = 2500

commit_delay = 10

checkpoint_segments = 256

checkpoint_timeout = 300

checkpoint_warning = 99

random_page_cost = 1.5

cpu_tuple_cost = 0.001

cpu_index_tuple_cost = 0.0005

cpu_operator_cost = 0.00025

effective_cache_size = 40GB
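To get a feel for the shared memory a couple of these values imply (in 8.2, wal_buffers is specified in 8 kB WAL pages, and the documentation says each max_fsm_pages slot costs about six bytes of shared memory), a quick back-of-the-envelope calculation:

```shell
# Rough shared-memory footprint of two of the settings above (8.2 semantics).
wal_buffer_pages=2500          # wal_buffers, counted in 8 kB pages
echo "wal_buffers: $(( wal_buffer_pages * 8 / 1024 )) MB"

fsm_pages=208000               # max_fsm_pages; ~6 bytes of shared memory each
echo "free space map: $(( fsm_pages * 6 / 1024 )) kB"
```

So the free space map here is comfortably small, and wal_buffers works out to roughly 19 MB.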

Of course, I shouldn't forget the network tunables that were also used:

ndd -set /dev/tcp tcp_conn_req_max_q
ndd -set /dev/tcp tcp_conn_req_max_q0

ndd -set /dev/tcp tcp_xmit_hiwat 131072
ndd -set /dev/tcp tcp_recv_hiwat 131072
ndd -set /dev/tcp tcp_naglim_def 1
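Note that ndd settings do not survive a reboot. To double-check what is currently in effect, ndd without -set reads a parameter back:

```shell
# Read back the current values of the TCP tunables set above.
ndd /dev/tcp tcp_xmit_hiwat
ndd /dev/tcp tcp_recv_hiwat
ndd /dev/tcp tcp_naglim_def
```

To make them persistent, the usual approach is to put the ndd -set lines in an init script run at boot; the exact script name and runlevel are a matter of local convention.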

The actual database was divided across three file systems: one for $PGDATA, one for pg_xlog, and one more for the index tablespaces, all mounted with the forcedirectio option.
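For illustration, mounting a UFS file system with forcedirectio looks like the following; the device and mount-point names here are placeholders, not the ones from the actual test rig:

```shell
# Example forcedirectio mounts (hypothetical devices and mount points).
mount -F ufs -o forcedirectio /dev/dsk/c1t0d0s6 /pgdata
mount -F ufs -o forcedirectio /dev/dsk/c1t1d0s6 /pgdata/pg_xlog
mount -F ufs -o forcedirectio /dev/dsk/c1t2d0s6 /pgindex
```

The same option can of course go in the /etc/vfstab mount options column so it applies at boot.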
Your mileage may vary but hopefully for the better.

1 comment:

Scott Marlowe said...

Some of these settings seem a little odd.
max_connections = 1000
work_mem is the amount of memory a single sort or other operation can grab to work in. Keep in mind a single query may have multiple sorts going on at once.
With 1000 connections, if 10% (100) were to execute 1 sort each, that could result in 100*100MB memory being used, or 10GB of memory being allocated.
It's quite easy to exhaust your machine's memory with a combination of high max_connections and work_mem.
Thanks for running this test, it's nice to see some official benchmark numbers for pgsql out there. In my informal testing I've found it to easily outrun many commercial dbs, and of course, with the zero licensing costs, I can always throw more hardware at the problem without worrying about having to buy more licenses.