In my last entry, we saw how to set up PostgreSQL 12 beta 3 in Amazon RDS. In that entry I purposely left out how to change database parameters, as I realized the topic deserves an entry (or more) of its own.
Using the AWS CLI you can create a new database parameter group as follows:
$ aws rds create-db-parameter-group --db-parameter-group-name jkpg12pg \
--db-parameter-group-family postgres12 --description "My PostgreSQL 12 Parameter Group" \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com
We have just created a group; we have not yet applied it to any database. Before we apply it, let's see what the default values in the new group are. You can run the following command to list the values set by default in the group.
$ aws rds describe-db-parameters --db-parameter-group-name jkpg12pg \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com \
--query 'Parameters[].[ParameterName,ParameterValue]' --output text
The output contains a list of parameters with values. Let's look at some of the values to see how to interpret them.
application_name None
..
autovacuum_max_workers GREATEST({DBInstanceClassMemory/64371566592},3)
autovacuum_vacuum_cost_limit GREATEST({log(DBInstanceClassMemory/21474836480)*600},200)
..
effective_cache_size {DBInstanceClassMemory/16384}
..
jit None
..
maintenance_work_mem GREATEST({DBInstanceClassMemory*1024/63963136},65536)
max_connections LEAST({DBInstanceClassMemory/9531392},5000)
..
shared_buffers {DBInstanceClassMemory/32768}
shared_preload_libraries pg_stat_statements
..
work_mem None
xmlbinary None
xmloption None
When you see None, it is equivalent to the parameter not being set in postgresql.conf, so PostgreSQL uses the engine default for that version. In the listing above, you will notice that jit is set to None, which means it takes the PostgreSQL 12 default of on, enabling jit in the instance.
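You can confirm this from the database itself once the instance is running. For example, with psql (the host below is a placeholder for your instance endpoint):
$ psql -h mypg12b3.xxxxxxxxxx.us-east-2.rds.amazonaws.com -U postgres -d postgres -c 'SHOW jit;'
 jit
-----
 on
(1 row)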
If you set a parameter to a specific value (of the supported type and unit for that parameter), then that value is used instead of the PostgreSQL default. For example, shared_preload_libraries has a default value of pg_stat_statements, which means that when you deploy a PostgreSQL 12 instance, the pg_stat_statements library is already preloaded and the extension is available without requiring another restart.
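Since the library is already loaded, enabling the extension is a single statement in any database where you want to use it (connection details are again placeholders):
$ psql -h mypg12b3.xxxxxxxxxx.us-east-2.rds.amazonaws.com -U postgres -d postgres \
-c 'CREATE EXTENSION pg_stat_statements;' \
-c 'SELECT query, calls, total_time FROM pg_stat_statements LIMIT 3;'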
Other interesting parameters are the ones whose values contain {} or the GREATEST and LEAST functions. These values use macro expressions, allowing a parameter to be derived from DBInstanceClassMemory (the memory of the instance class, in bytes) of whatever instance size the database runs on.
For example, shared_buffers is set to {DBInstanceClassMemory/32768}. In PostgreSQL, when no unit is given, shared_buffers is interpreted as a number of 8KB pages. So this macro sets it to 25% (1/4th) of total RAM expressed in 8192-byte pages: (RAM/8192)/4 = RAM/32768.
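To make the arithmetic concrete, here is a quick check in the shell for a hypothetical instance with 64GB of RAM:
$ RAM_BYTES=$(( 64 * 1024 * 1024 * 1024 ))   # hypothetical DBInstanceClassMemory: 64GB in bytes
$ echo $(( RAM_BYTES / 32768 ))              # shared_buffers as a count of 8KB pages
2097152
$ echo $(( RAM_BYTES / 32768 * 8192 / 1024 / 1024 / 1024 ))  # converted back to GB: 25% of 64GB
16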
Setting these values correctly is important for getting optimum usage out of a PostgreSQL database. Let's look at how I think about setting them for an instance.
Let's consider an RDS instance with 64GB of RAM (for example, db.m4.4xlarge). For simplicity, I am not counting basic Linux kernel memory, RDS monitoring, and other OS processes, but the filesystem cache will be considered, as it is a significant portion of the memory used by a PostgreSQL instance. The other major component is shared buffers, the common shared memory area used by PostgreSQL processes. The final component is the aggregate of the private memory of each connection to the PostgreSQL database.
TOTAL RAM = Filesystem Cache + Shared DB Buffers Cache + Sum of all PostgreSQL connections
By default on RDS, shared buffers are set to 25% of RAM. It is fair to assume that filesystem cache usage will be equal to or greater than that, since all pages pass through the filesystem cache; call it 25-30% of RAM. While the shared buffer size is controlled by the shared_buffers parameter, the filesystem cache cannot be controlled directly; the OS can, however, reclaim it in low-memory situations. Using our example of 64GB of total memory, we have already accounted for 16GB of shared buffers plus 16-20GB of filesystem cache, leaving about 28-32GB free for the private memory consumed by database connections. In the rest of the calculation we assume these two parts together are roughly 50% of RAM.
The private memory of database connections is hard to measure precisely: it is not the RSS of a process but RSS minus the touched shared memory pages, and it depends on the number of connections and the chunks of work_mem consumed by each connection.
For capacity planning we can use something as simple as
Average memory per PostgreSQL connection * Concurrent connections <= 50% of RAM
where average memory per PostgreSQL connection can be simplified to n * work_mem + process overhead, where n varies with the type of queries. For example, a query joining two tables with an ORDER BY can end up using 2 work_mem chunks on top of the process overhead. Putting that into numbers with the default work_mem of 4MB and an approximate process overhead of roughly 5MB (if Linux huge pages are disabled, this number may need to be bumped higher), each PostgreSQL connection is about 2 x 4 + 5 = 13MB. With 1,000 concurrent connections you may end up consuming about 13GB, and with 2,000 connections that number can jump to 26GB. Hence we should make sure that
work_mem <= ((50% of RAM in KB / concurrent_connections) - 5,000 KB) / 2
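As a sanity check, here is that budget worked out in the shell for the hypothetical 64GB instance with 2,000 concurrent connections:
$ BUDGET_KB=$(( 32 * 1024 * 1024 ))      # 50% of 64GB of RAM, in KB
$ PER_CONN_KB=$(( BUDGET_KB / 2000 ))    # per-connection budget for 2,000 connections
$ echo $(( (PER_CONN_KB - 5000) / 2 ))   # work_mem ceiling in KB (~5.7MB)
5888
So at 2,000 connections on this instance, the default work_mem of 4MB (4,096 KB) still fits with a little headroom.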
Hence query tuning and lowering work_mem or max_connections can help control this component of memory. If your queries genuinely need more work_mem, then the default 25% for shared_buffers has to come down to make more memory available for your work_mem and max_connections needs. It is perfectly reasonable to reduce the shared_buffers percentage to fit the number of concurrent connections: a lower buffer pool hit ratio is better than ending up in swap.
The above is a simplified way to look at it. I am ignoring things like in-memory temporary tables and other temporary allocations that also consume memory, on the assumption that the queries are simple join-and-order-by queries. If you use temporary tables or run large analytical queries, you have to account for that memory in your average memory per PostgreSQL connection, and then perhaps reduce shared buffers further so that total usage stays within total RAM and you neither hit swap nor force a large flush of the filesystem cache.
If you want to lower your shared buffers to, say, 20% instead of the default 25%, you would change the macro for the parameter to (RAM/8192)/5, i.e. {DBInstanceClassMemory/40960}.
To override a parameter in a particular group you can do as follows:
$ aws rds modify-db-parameter-group --db-parameter-group-name jkpg12pg \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com \
--parameters "ParameterName=shared_buffers,ParameterValue=\"{DBInstanceClassMemory/40960}\",ApplyMethod=pending-reboot"
When you list the parameters again, you will see the change in the parameter group. Notice that for this parameter the ApplyMethod is pending-reboot. Static parameters can only be applied on reboot, and shared_buffers is a static parameter. For dynamic parameters you can also use "immediate" as the ApplyMethod, which applies the change immediately to all database instances using the parameter group.
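To verify the override without reading the whole listing, you can filter the output of describe-db-parameters down to just shared_buffers (the JMESPath filter below is one way to do it):
$ aws rds describe-db-parameters --db-parameter-group-name jkpg12pg \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com \
--query 'Parameters[?ParameterName==`shared_buffers`].[ParameterName,ParameterValue,ApplyMethod]' \
--output text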
In our case we have not applied the group to the database yet, so it does not matter. Let's first apply it to our database.
$ aws rds modify-db-instance \
--db-instance-identifier mypg12b3 --db-parameter-group-name jkpg12pg \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com
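You can confirm that the instance has picked up the group, and see whether its parameters are in sync or still pending, by looking at DBParameterGroups in the describe output:
$ aws rds describe-db-instances --db-instance-identifier mypg12b3 \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com \
--query 'DBInstances[].DBParameterGroups' --output text
The ParameterApplyStatus field will typically read pending-reboot until the static parameters are applied.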
Note, however, that changing the group does not mean all the parameters are in effect. Since static changes only take effect after a reboot, we will reboot the instance as follows:
$ aws rds reboot-db-instance --db-instance-identifier mypg12b3 \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com
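Once the instance is available again, a quick way to confirm that the static change took effect is to ask the database itself (the host is a placeholder for your endpoint):
$ psql -h mypg12b3.xxxxxxxxxx.us-east-2.rds.amazonaws.com -U postgres -d postgres -c 'SHOW shared_buffers;'
For the 64GB example lowered to 20%, the reported value should be roughly one fifth of RAM.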
With the reboot, we now have a database running with a custom parameter group whose parameters are tuned for the expected workload. You may not get them right on the first try, but now you know how to set them and apply them to the database using the CLI commands.
This blog represents my own viewpoints and not those of my employer, Amazon Web Services.