Thursday, September 19, 2019

PostgreSQL Extensions - A Deeper Look

My slides from my session "PostgreSQL Extensions - A Deeper Look" at PostgresOpen 2019 and PostgresConf SV 2019









This blog represents my own view points and not of my employer, Amazon Web Services.

Sunday, September 08, 2019

Tuning DB Parameters for PostgreSQL 12 in Amazon RDS

In my last entry, we saw how to set up PostgreSQL 12 Beta 3 in Amazon RDS. In that entry I purposely left out how to change database parameters, as I realized it deserves an entry (or more) of its own.

Using the AWS CLI you can create a new database parameter group as follows:

$ aws rds create-db-parameter-group  --db-parameter-group-name jkpg12pg    \
 --db-parameter-group-family postgres12 --description "My PostgreSQL 12 Parameter Group" \
 --region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com  


We have just created a group; we have not applied it to any database yet. Before applying it, let's see what the default values in the newly created group are. You can run the following command to list them.

$ aws rds describe-db-parameters --db-parameter-group-name jkpg12pg  \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com \
--query 'Parameters[].[ParameterName,ParameterValue]' --output text

The output contains a list of parameters with values. Let's look at some of the values to see how to interpret them.


application_name None
..
autovacuum_max_workers GREATEST({DBInstanceClassMemory/64371566592},3)
autovacuum_vacuum_cost_limit GREATEST({log(DBInstanceClassMemory/21474836480)*600},200)
..
effective_cache_size {DBInstanceClassMemory/16384}
..
jit None
..
maintenance_work_mem GREATEST({DBInstanceClassMemory*1024/63963136},65536)
max_connections LEAST({DBInstanceClassMemory/9531392},5000)
..
shared_buffers {DBInstanceClassMemory/32768}
shared_preload_libraries pg_stat_statements
..
work_mem None
xmlbinary None
xmloption None
       

When you see None, it is equivalent to the parameter not being set in postgresql.conf, so PostgreSQL uses the engine default for that version. In the example above, jit is None, which means it takes the PostgreSQL 12 default of on, enabling JIT on the instance.

If you set a parameter to a specific value (using the supported type and unit for that parameter), that value is used instead of the PostgreSQL default. For example, shared_preload_libraries has a default value of pg_stat_statements, which means that when you deploy a PostgreSQL 12 instance, the pg_stat_statements library is already preloaded and the extension is available without requiring a restart.

Other interesting parameters are the ones whose values contain {} or the GREATEST and LEAST functions. These are macro expressions that compute the value from DBInstanceClassMemory (in bytes), which depends on the instance class used by the database instance.
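To make the macro evaluation concrete, here is a small sketch (the 64 GiB instance size is purely an illustration; the divisor comes from the max_connections entry in the listing above):

```shell
# Evaluate LEAST({DBInstanceClassMemory/9531392},5000) by hand for a
# hypothetical 64 GiB instance.
RAM=$((64 * 1024 * 1024 * 1024))            # DBInstanceClassMemory in bytes
CALC=$((RAM / 9531392))                     # the macro's memory-based term
MAXCONN=$((CALC < 5000 ? CALC : 5000))      # LEAST(...) caps it at 5000
echo "LEAST($CALC, 5000) -> max_connections = $MAXCONN"
```

On a machine this size the memory-based term exceeds 5000, so the LEAST cap wins.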

For example, shared_buffers is set to {DBInstanceClassMemory/32768}. In PostgreSQL, a unit-less shared_buffers value is interpreted as a number of 8 KB pages. This expression therefore sets shared_buffers to 25% (1/4) of total RAM: (RAM/8192)/4 = RAM/32768 pages.
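A quick sanity check of that arithmetic in shell (again assuming a 64 GiB instance just for illustration):

```shell
# Worked example: evaluate {DBInstanceClassMemory/32768} for 64 GiB of RAM.
RAM=$((64 * 1024 * 1024 * 1024))          # 68719476736 bytes
PAGES=$((RAM / 32768))                    # the macro's result, in 8 KB pages
BYTES=$((PAGES * 8192))                   # convert pages back to bytes
echo "shared_buffers pages: $PAGES"       # 2097152
echo "shared_buffers bytes: $BYTES"       # 17179869184 = 16 GiB = 25% of RAM
```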


Setting these values correctly is an important task for getting optimum usage out of a PostgreSQL database. Let's look at how I think about setting them for an instance.

Let's consider an RDS instance with 64 GB of RAM. For simplicity, I am not considering basic Linux kernel memory, RDS monitoring, and other OS processes, but the filesystem cache will be considered, as it is a significant portion of the memory used by a PostgreSQL instance. The other major component is the shared buffers, a common shared memory area used by PostgreSQL processes. The final component is the aggregate of the individual private memory of each PostgreSQL connection.


TOTAL RAM = Filesystem Cache + Shared DB Buffers Cache + Sum of all  PostgreSQL connections


By default on RDS, the shared buffers are set to 25% of RAM. It is fair to assume that filesystem cache usage could be equal to or greater than that, since all pages come through the filesystem cache; call it 25-30% of RAM. While the shared buffers can be controlled via the shared_buffers parameter, the filesystem cache cannot be controlled directly, though the OS can free it during low-memory situations. So with our example of 64 GB of total memory, we have already accounted for 16 GB of shared buffers plus 16-20 GB of filesystem cache, leaving about 28-32 GB free for the private memory consumed by database connections. In the rest of the calculation we assume these two parts together are roughly 50% of RAM.
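The budget can be sketched numerically. This is a rough sketch only; the 25% shared-buffer share and the 30% filesystem-cache share are the assumptions stated above, not measured values:

```shell
# Rough memory budget for a 64 GiB instance (illustrative assumptions:
# shared buffers at 25% of RAM, filesystem cache at ~30%).
RAM_GB=64
SHARED_GB=$((RAM_GB / 4))                    # 16 GB for shared buffers
FS_CACHE_GB=$((RAM_GB * 30 / 100))           # ~19 GB for filesystem cache
LEFT_GB=$((RAM_GB - SHARED_GB - FS_CACHE_GB))
echo "left for connection memory: ~${LEFT_GB} GB"
```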

The private memory of database connections is hard to measure, as it is not simply the 'RSS' of a process but RSS minus the touched shared memory pages, and it depends on the number of connections and the chunks of work_mem consumed by each connection.

For capacity calculation we could use something as simple as:

      Average Memory per PostgreSQL connections * Concurrent Connections <= 50% of RAM 

where the average memory per PostgreSQL connection can be simplified to n * work_mem + process overhead, where n varies by the type of queries. For example, a query with a JOIN of two tables and an ORDER BY can end up using 2 work_mem chunks along with the process overhead. Putting that into numbers with the default work_mem of 4MB and an approximate PostgreSQL process overhead of roughly 5MB (if Linux huge pages are disabled, this number may need to be bumped higher), each PostgreSQL connection is about 2x4 + 5 = 13MB. With 1,000 concurrent connections you may end up consuming about 13GB, and for 2,000 connections that number can jump to 26GB. Hence we should make sure that

work_mem <= ((50% of RAM in KB / concurrent_connections) - 5,000 KB) / 2
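Plugging in the numbers from above (the 4 MB work_mem, 5 MB overhead, n = 2, and 64 GiB RAM are all illustrative assumptions, not prescriptions):

```shell
# Per-connection estimate: n * work_mem + overhead, with n = 2,
# work_mem = 4 MB, overhead = 5 MB.
PER_CONN_MB=$((2 * 4 + 5))                        # 13 MB per connection
echo "1000 connections use roughly $((PER_CONN_MB * 1000)) MB"

# Solving the bound for work_mem with 2000 connections on 64 GiB RAM:
HALF_RAM_KB=$((32 * 1024 * 1024))                 # 50% of RAM, in KB
CONNS=2000
WORK_MEM_KB=$(((HALF_RAM_KB / CONNS - 5000) / 2))
echo "work_mem ceiling: ${WORK_MEM_KB} KB"        # just under 6 MB
```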

Hence query tuning and lowering work_mem or max_connections can help control this component of memory. If your queries actually end up requiring more work_mem, then the default 25% of shared_buffers needs to be reduced to make more memory available for your work_mem and max_connections needs. It is perfectly reasonable to lower the shared_buffers percentage to fit the number of concurrent connections, as a lower buffer pool hit ratio is better than ending up using swap space.

The above perspective is a simplified one. I am ignoring things like in-memory temporary tables and other temporary allocations, on the assumption that the queries are simple join-and-order-by queries. If you use temporary tables or run large analytical queries, you have to account for that memory in your average memory per PostgreSQL connection, and then perhaps reduce shared buffers so that total usage stays within total RAM without resorting to swap or causing a large flush of the filesystem cache.

If you want to lower your shared buffers to, say, 20% instead of the default 25%, you would change the macro for the parameter to (RAM/8192)/5, i.e. {DBInstanceClassMemory/40960}.
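A quick check that the /40960 divisor indeed yields about 20% (again using a 64 GiB instance purely as an example):

```shell
# Evaluate {DBInstanceClassMemory/40960} for 64 GiB of RAM.
RAM=$((64 * 1024 * 1024 * 1024))
PAGES=$((RAM / 40960))                     # 8 KB pages, integer division
PCT=$((PAGES * 8192 * 100 / RAM))          # share of RAM, truncated percent
echo "pages: $PAGES, share of RAM: ~${PCT}%"   # just under 20%
```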

To override a parameter in a particular group you can do as follows:

$ aws rds modify-db-parameter-group --db-parameter-group-name jkpg12pg \
 --region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com  \
 --parameters "ParameterName=shared_buffers,ParameterValue=\"{DBInstanceClassMemory/40960}\",ApplyMethod=pending-reboot" 



When you list the parameters again, you will see the change in the parameter group. Notice that for this parameter the ApplyMethod is pending-reboot. Static parameters can only be applied on reboot, and shared_buffers is a static parameter. For dynamic parameters you can also use "immediate" as the ApplyMethod, which applies the change immediately to all database instances using the parameter group.

In our case the group has not been applied to the database yet, so this does not matter. Let's first apply it to our database.

$ aws rds modify-db-instance \
 --db-instance-identifier mypg12b3 --db-parameter-group-name jkpg12pg \
 --region us-east-2 --endpoint  https://rds-preview.us-east-2.amazonaws.com 


Note, however, that changing the group does not mean all the parameters are in effect. Since static changes can only be applied after a reboot, we reboot the instance as follows:

$ aws rds reboot-db-instance  --db-instance-identifier mypg12b3  \
--region us-east-2 --endpoint  https://rds-preview.us-east-2.amazonaws.com  

With the reboot we now have a database running with a custom parameter group whose parameters are tuned for the expected workload. You may not get them right on the first try, but now you know how to set them and apply them to the database using the CLI commands.





Thursday, August 29, 2019

Setting up PostgreSQL 12 Beta 3 for Testing in Amazon RDS

One of the amazing things about the PostgreSQL community is launching releases like clockwork. On 8/8/2019 the PostgreSQL community not only released minor versions for PostgreSQL 11 and older major versions, but also Beta 3 of the upcoming PostgreSQL 12.

On AWS, you can check versions of PostgreSQL available in your region as follows:

       

$ aws rds describe-db-engine-versions --engine postgres --query 'DBEngineVersions[*].EngineVersion'
[
    "9.3.12", 
...  
    "11.2", 
    "11.4"
]
       
 

You will not see any beta versions there. Pre-release versions of PostgreSQL on AWS are available in the Database Preview Environment within US East 2 (Ohio). If you are using the CLI, you have to add the region us-east-2 and the endpoint URL rds-preview.us-east-2.amazonaws.com to your commands.


       

$ aws rds describe-db-engine-versions --engine postgres \
  --query 'DBEngineVersions[*].EngineVersion' \
  --region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com
[
    "12.20190617", 
    "12.20190806"
]
       
 

The versions displayed are a bit cryptic, but they denote the major version followed by the date when the build was synced for the preview release. The version description is friendlier to read than the version itself.


       

$ aws rds describe-db-engine-versions --engine postgres \
  --query 'DBEngineVersions[*].DBEngineVersionDescription' \
  --region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com
[
    "PostgreSQL 12.20190617 (BETA2)", 
    "PostgreSQL 12.20190806 (BETA3)"
]
       
 

Let's deploy an instance of PostgreSQL 12 Beta 3, aka version 12.20190806.

       

$ aws rds create-db-instance  \
--engine postgres  --engine-version 12.20190806 --db-instance-identifier mypg12b3 \
--allocated-storage 100 --db-instance-class db.t2.small     \
--db-name benchdb  --master-username pgadmin  --master-user-password SuperSecret \
--region us-east-2 --endpoint  https://rds-preview.us-east-2.amazonaws.com  

       
 

After a few minutes the endpoint will be available and can be queried as follows:

       

$ aws rds describe-db-instances  --db-instance-identifier mypg12b3 --query 'DBInstances[].Endpoint' \
--region us-east-2 --endpoint  https://rds-preview.us-east-2.amazonaws.com 
[
    {
        "HostedZoneId": "ZZOC4A7DETW6VV", 
        "Port": 5432, 
        "Address": "mypg12b3.c9zz9zzzzzzz.us-east-2.rds-preview.amazonaws.com"
    }
]

       
 

If you have a default VPC security group defined in US East 2 (Ohio), you should be able to use the latest psql client to connect based on your default rules. If you do not have a default VPC security group, a new security group is created for you, to which you have to add an inbound rule allowing your client to reach the database instance. The security group will be among your US East 2 (Ohio) EC2 security groups for the preview environment.

Once your client is added to the security group, it will be able to connect to the database as follows:

       

$ psql -h mypg12b3.c9zz9zzzzzzz.us-east-2.rds-preview.amazonaws.com -d benchdb -U pgadmin 
Password for user pgadmin: 
psql (10.4, server 12beta3)
WARNING: psql major version 10, server major version 12.
         Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

benchdb=> select version();
                                                  version                                                  
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 12beta3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit
(1 row)

benchdb=> 
       
 

For this given major version, the supported extensions can be queried as follows:

       

benchdb=> show rds.extensions;
                                          rds.extensions
----------------------------------------------------------------------------------------------------
 address_standardizer, address_standardizer_data_us, amcheck, aws_commons, bloom, btree_gin, btree_gist, citext, cube, dblink, dict_int, dict_xsyn, earthdistance, fuzzystrmatch, hstore, hstore_plperl, intagg, intarray, ip4r, isn, jsonb_plperl, log_fdw, ltree, pageinspect, pg_buffercache, pg_freespacemap, pg_prewarm, pg_similarity, pg_stat_statements, pg_trgm, pg_visibility, pgcrypto, pgrouting, pgrowlocks, pgstattuple, pgtap, plperl, plpgsql, pltcl, postgis, postgis_tiger_geocoder, postgis_topology, postgres_fdw, prefix, sslinfo, tablefunc, test_parser, tsm_system_rows, tsm_system_time, unaccent, uuid-ossp
(1 row)
       
 


Extensions are created using your master username login as follows:
       

benchdb=> CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION
benchdb=> CREATE EXTENSION postgis;
CREATE EXTENSION
benchdb=> CREATE EXTENSION postgis_topology;
CREATE EXTENSION
       
 

To verify the versions of the extensions that I have created:

       
benchdb=> select * from pg_extension;
  oid  |      extname       | extowner | extnamespace | extrelocatable | extversion  |   extconfig   |          extcondition           
-------+--------------------+----------+--------------+----------------+-------------+---------------+---------------------------------
 14299 | plpgsql            |       10 |           11 | f              | 1.0         |               | 
 16402 | pg_stat_statements |       10 |         2200 | t              | 1.7         |               | 
 16410 | postgis            |       10 |         2200 | f              | 3.0.0alpha3 | {16712}       | {"WHERE NOT (                  +
...
       |                    |          |              |                |             |               | )"}
 17418 | postgis_topology   |       10 |        17417 | f              | 3.0.0alpha3 | {17421,17434} | {"",""}
(4 rows)
       
 

One of the recent enhancements in Amazon RDS, available since the PostgreSQL 11 release, is that the pg_stat_statements library is loaded by default unless explicitly disabled. This means I can use pg_stat_statements immediately after creating the extension.

       
benchdb=>     
select * from pg_stat_statements order by total_time desc limit 4;
 userid | dbid  |       queryid        |               query               | calls | total_time  |  min_time   |  max_time   |  mean_time  | stddev_time | rows | shared_blks_hit | shared_blks_read | s
hared_blks_dirtied | shared_blks_written | local_blks_hit | local_blks_read | local_blks_dirtied | local_blks_written | temp_blks_read | temp_blks_written | blk_read_time | blk_write_time 
--------+-------+----------------------+-----------------------------------+-------+-------------+-------------+-------------+-------------+-------------+------+-----------------+------------------+--
-------------------+---------------------+----------------+-----------------+--------------------+--------------------+----------------+-------------------+---------------+----------------
     10 | 16384 | -6310040060866956228 | select pg_start_backup($1, $2)    |     1 | 4934.715563 | 4934.715563 | 4934.715563 | 4934.715563 |           0 |    1 |               0 |                0 |  
                 0 |                   0 |              0 |               0 |                  0 |                  0 |              0 |                 0 |             0 |              0
     10 | 16384 |  4124339773204179264 | select pg_stop_backup()           |     1 | 4656.211207 | 4656.211207 | 4656.211207 | 4656.211207 |           0 |    1 |               0 |                0 |  
                 0 |                   0 |              0 |               0 |                  0 |                  0 |              0 |                 0 |             0 |              0
  16394 | 16396 | -2030728853740878493 | CREATE EXTENSION postgis          |     1 |  691.652456 |  691.652456 |  691.652456 |  691.652456 |           0 |    0 |           71359 |              247 |  
               835 |                 707 |              0 |               0 |                  0 |                  0 |              0 |                 0 |             0 |              0
  16394 | 16396 | -2651058291501154175 | CREATE EXTENSION postgis_topology |     1 |   61.100119 |   61.100119 |   61.100119 |   61.100119 |           0 |    0 |            8539 |               26 |  
                57 |                  37 |              0 |               0 |                  0 |                  0 |              0 |                 0 |             0 |              0
(4 rows)

       
 


Note that the instances in the preview environment are meant for development and testing for 60 days, so you can try out new features with your applications and optimize them for PostgreSQL 12!




Wednesday, February 06, 2019

PGConf.RU 2019: Slides from my sessions

It was my first visit to Moscow for PGConf.RU 2019. Enjoyed meeting the strong community of PostgreSQL in Russia!


Slides from my sessions:

1. Deep Dive into the RDS PostgreSQL Universe




2. Tips and Tricks for Amazon RDS for PostgreSQL





Thursday, October 18, 2018

Hello PostgreSQL 11 - Get ready to say goodbye to PostgreSQL 9.3

Earlier today (Oct 18, 2018), the PostgreSQL community announced the release of PostgreSQL 11.  Having done multiple software releases earlier, I appreciate the hard work by all contributors to get yet another major release on schedule. It is hard to do a major release every year and the community has been doing it since PostgreSQL 8.4 making this the 10th  release in the last decade. 

Everybody has their favorite feature in PostgreSQL 11, and the one at the top of my list is transactional support in stored procedures. 2ndQuadrant had first announced that feature at the end of last year, and at that time it instantly became my favorite, as I see it as a giant leap for PostgreSQL: it allows people to write long data routines, such as ETL, broken down into multiple transactions. Of course many users will enjoy the improvements in the table partitioning system, query parallelism, and just-in-time (JIT) compilation for accelerating the execution of expressions in queries. However, developers will certainly get more freedom with the stored procedure improvements.

With the release of PostgreSQL 11, there are now 6 major releases supported: PostgreSQL 9.3, 9.4, 9.5, 9.6, 10, and 11. It is definitely a good time to start thinking about upgrading your PostgreSQL 9.3 databases. As per the versioning policy, the final minor release for PostgreSQL 9.3 will be on November 8th, 2018. PostgreSQL 9.3 is the last major version that does not support logical replication, which was first introduced in PostgreSQL 9.4. Hence, I expect this will be the last painful upgrade, because from PostgreSQL 9.4 onwards you can always leverage logical replication to minimize the downtime while switching to a new version. All is not lost for PostgreSQL 9.3: while the experience is not exactly the same, there are still tools available using the older trigger-based replication to help, or you can just bite the bullet and upgrade once with a small maintenance window, since later versions will give you more options for your next major version upgrade.

If you need tips and tricks for upgrading your PostgreSQL 9.3 instances,  let me know! :-)

Wednesday, July 25, 2018

Loading data in PostgreSQL 11 Beta Using Native Logical Replication from PostgreSQL 10 in Amazon RDS

In the last blog entry,  I talked about creating two instances of PostgreSQL 11 Beta in Amazon RDS Database Preview Environment and setting up native logical replication. Today, Amazon RDS announced support for PostgreSQL 10.4 with native logical replication.  Let's see how to use this new support to replicate data from PostgreSQL 10 in Amazon RDS into PostgreSQL 11 Beta instances in  preview environment.

I started with a new PostgreSQL 10.4 instance in Amazon RDS and populated it with data from an older IMDB dataset.

benchdb-> \d
                          List of relations
 Schema |                 Name                  |   Type   |  Owner
--------+---------------------------------------+----------+---------
 public | acted_in                              | table    | pgadmin
 public | acted_in_idacted_in_seq               | sequence | pgadmin
 public | actors                                | table    | pgadmin
 public | aka_names                             | table    | pgadmin
 public | aka_names_idaka_names_seq             | sequence | pgadmin
 public | aka_titles                            | table    | pgadmin
 public | aka_titles_idaka_titles_seq           | sequence | pgadmin
 public | genres                                | table    | pgadmin
 public | keywords                              | table    | pgadmin
 public | movies                                | table    | pgadmin
 public | movies_genres                         | table    | pgadmin
 public | movies_genres_idmovies_genres_seq     | sequence | pgadmin
 public | movies_keywords                       | table    | pgadmin
 public | movies_keywords_idmovies_keywords_seq | sequence | pgadmin
 public | series                                | table    | pgadmin
(15 rows)

This "production" PostgreSQL 10 database also has data in it.

benchdb=> select count(*) from acted_in;
 count
--------
 618706
(1 row)

benchdb=> select count(*) from movies;
 count
--------
 183510
(1 row)

benchdb=> select count(*) from series;
 count
--------
 162498
(1 row)


In order to prepare the PostgreSQL 10 instance in Amazon RDS for logical replication, we need to verify that the rds.logical_replication database parameter is enabled. If it is not, you can create a custom parameter group with rds.logical_replication enabled and assign the parameter group to the database instance. In my case I had already used a custom parameter group with logical replication enabled.

benchdb=> show rds.logical_replication;
 rds.logical_replication
-------------------------
 on
(1 row)

In order to use logical replication, a replication user needs to be created on the PostgreSQL 10 instance; the PostgreSQL 11 instance will connect as this user. In Amazon RDS, this is done by granting the rds_replication role to the user.

benchdb=> CREATE USER pg11repluser WITH password 'SECRET';
CREATE ROLE
benchdb=> GRANT rds_replication TO pg11repluser;
GRANT ROLE

For security purposes, it is better that the replication user has only SELECT permission on the tables to be replicated.

benchdb=> GRANT SELECT ON ALL TABLES IN SCHEMA public TO pg11repluser;
GRANT

The final step inside the database is to create a publication, pgprod10, to indicate which tables need to be replicated. The easiest way to include all tables is as follows:

benchdb=> CREATE PUBLICATION pgprod10 FOR ALL TABLES;
CREATE PUBLICATION


One thing to note here: edit the inbound rules of the security group of the production instance to allow the PostgreSQL 11 Beta instance to connect.

On the PostgreSQL 11 Beta instance, the first step is to recreate the schema. We use pg_dump here only to copy the schema over from the PostgreSQL 10 instance:

$ pg_dump -s  -h pg104.XXXXX.us-east-2.rds.amazonaws.com -U pgadmin benchdb > movies_schema.sql


Load the schema into PostgreSQL 11 using the psql client:

$ psql -h pg11from10.XXXXX.us-east-2.rds-preview.amazonaws.com -U pgadmin -d benchdb -f movies_schema.sql

Note you might see errors for GRANT statements if the same users are not defined in the new instance. It is okay to ignore these messages.

ERROR:  role "pg11repluser" does not exist

We are now ready to create the subscription on PostgreSQL 11 Beta. We verify that there are no rows in this case and then confirm that we get all expected rows after the subscription is created.

benchdb=> select count(*) from acted_in;
 count
-------
     0
(1 row)

benchdb=> select count(*) from movies;
 count
-------
     0
(1 row)

benchdb=> select count(*) from series;
 count
-------
     0
(1 row)

benchdb=> CREATE SUBSCRIPTION pg11beta1 CONNECTION 'host=pg104.XXXXX.us-east-2.rds.amazonaws.com dbname=benchdb user=pg11repluser password=SECRET' PUBLICATION pgprod10;
NOTICE:  created replication slot "pg11beta1" on publisher
CREATE SUBSCRIPTION
benchdb=> select count(*) from acted_in;
 count
--------
 618706
(1 row)

benchdb=> select count(*) from movies;
 count
--------
 183510
(1 row)

benchdb=> select count(*) from series;
 count
--------
 162498
(1 row)


benchdb=>

With the new native logical replication support in PostgreSQL 10 in Amazon RDS, it is now easy to replicate the data into PostgreSQL 11 Beta instance in Amazon RDS Database Preview Environment.  It can also be used to replicate data to/from database instances deployed outside of Amazon RDS.



Friday, June 08, 2018

Setting up PostgreSQL 11 Beta 1 in Amazon RDS Database Preview Environment


PostgreSQL 11 Beta 1 has been out for more than a couple of weeks. The best way to experience it is to try out the new version and test drive it yourself.

Rather than building it directly from source, I take the easy way out and deploy it in the cloud. Fortunately, it is already available in Amazon RDS Database Preview Environment.



For this post I am going to use the AWS CLI, since the commands are easy to understand, copy, paste, and script for repetitive testing. To use the Database Preview Environment, the endpoint has to be modified to https://rds-preview.us-east-2.amazonaws.com/ instead of the default for the region.

Because there can be multiple PostgreSQL 11 beta releases, it is important to understand which build version is being deployed. I can always leave it to the default, which is typically the latest preferred version, but often I want to be sure of the version I am deploying. The command to get all the versions of PostgreSQL 11 is describe-db-engine-versions.

$ aws rds describe-db-engine-versions --engine postgres --db-parameter-group-family postgres11 --endpoint-url  https://rds-preview.us-east-2.amazonaws.com/ 
{
    "DBEngineVersions": [
        {
            "Engine": "postgres", 
            "DBParameterGroupFamily": "postgres11", 
            "SupportsLogExportsToCloudwatchLogs": false, 
            "SupportsReadReplica": true, 
            "DBEngineDescription": "PostgreSQL", 
            "EngineVersion": "11.20180419", 
            "DBEngineVersionDescription": "PostgreSQL 11.20180419 (68c23cba)", 
            "ValidUpgradeTarget": [
                {
                    "Engine": "postgres", 
                    "IsMajorVersionUpgrade": false, 
                    "AutoUpgrade": false, 
                    "EngineVersion": "11.20180524"
                }
            ]
        }, 
        {
            "Engine": "postgres", 
            "DBParameterGroupFamily": "postgres11", 
            "SupportsLogExportsToCloudwatchLogs": false, 
            "SupportsReadReplica": true, 
            "DBEngineDescription": "PostgreSQL", 
            "EngineVersion": "11.20180524", 
            "DBEngineVersionDescription": "PostgreSQL 11.20180524 (BETA1)", 
            "ValidUpgradeTarget": []
        }
    ]

}

From the above, I see there are two versions: 11.20180419 and 11.20180524. The versions are based on a datestamp, with the description showing the tag information of the version. Since I am interested in the BETA1 version, I use 11.20180524.


$ aws rds create-db-instance  --endpoint  https://rds-preview.us-east-2.amazonaws.com  --allocated-storage 100 --db-instance-class db.t2.small  --db-name benchdb  --master-username SECRET  --master-user-password XXXXX  --engine postgres  --engine-version 11.20180524   --db-instance-identifier pg11beta1

Once deployed, I can always get the endpoint of the instance as follows:

$ aws rds describe-db-instances --endpoint=https://rds-preview.us-east-2.amazonaws.com --db-instance-identifier pg11beta1 |grep Address
                "Address": "pg11beta1.XXXXXX.us-east-2.rds-preview.amazonaws.com"


In my account I have already added my client to my default security group,


$ psql -h pg11beta1.XXXX.us-east-2.rds-preview.amazonaws.com -U pgadmin -d benchdb -c 'SELECT VERSION()'
                                                  version                                         
 PostgreSQL 11beta1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit


It is hard to test a database without any data. Normally I would just run pgbench directly against it to load data. But this time I want to try loading data the way people typically do: from an existing production setup. For that, I first need to set up a production database.

Before I create a production database instance, I first create a custom parameter group so that I can enable the settings I typically use in a production database. In the preview environment I created a parameter group for the PostgreSQL 11 database family and edited the group to change some of the parameters as follows:

rds.logical_replication = 1

and saved the group.
Next, I create my production instance using the newly created parameter group.


 $ aws rds create-db-instance --allocated-storage 100 --db-instance-class db.t2.small --engine postgres --db-name benchdb --master-username pgadmin --master-user-password SECRET --db-instance-identifier pg11prod --db-parameter-group-name <your-parameter-group> --endpoint https://rds-preview.us-east-2.amazonaws.com


It is still empty so I filled it up with my production data.

$ pgbench -i -s 100  -h pg11prod.XXX.us-east-2.rds-preview.amazonaws.com -U pgadmin benchdb


Now I have a typical setup: one production instance and one empty test instance. I now have to figure out how to get the data into my test instance. I could always dump all the data using pg_dump and restore it on the new instance, but this time I am going to try logical replication.

To set up logical replication between two instances, I first need to recreate the schema on the test instance. pg_dump provides the -s flag to dump just the schema with no data. I dump the schema from the production setup:


$ pg_dump -s -h pg11prod.XXXX.us-east-2.rds-preview.amazonaws.com -U pgadmin benchdb > schema.txt

and then load the schema into my test setup


$ psql -h pg11beta1.XXXX.us-east-2.rds-preview.amazonaws.com -U pgadmin -d benchdb -f schema.txt



Now I want to actually set up logical replication between the two instances. For this I need a replication user. I could use the master user, but that is too risky. So I create a new user with read-only privileges on the tables in the database and give it the replication rights that work in Amazon RDS.

$ psql -h pg11prod.XXXXX.us-east-2.rds-preview.amazonaws.com -U pgadmin benchdb



benchdb=> CREATE USER repluser WITH PASSWORD 'SECRET';
CREATE ROLE
benchdb=> GRANT rds_replication TO repluser;
GRANT ROLE
benchdb=> GRANT SELECT ON ALL TABLES IN SCHEMA public TO repluser;
GRANT


Next, I have to set up a publication for all tables in the production database:

benchdb=> CREATE PUBLICATION pgprod11 FOR ALL TABLES;
CREATE PUBLICATION

One more thing to do here: change the inbound rules of the production instance's security group to allow the test instance to connect.
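
If both instances use the same security group, this can be a self-referencing inbound rule on the PostgreSQL port. A sketch, with a hypothetical group ID:

$ aws ec2 authorize-security-group-ingress --group-id sg-XXXXXXXX \
 --protocol tcp --port 5432 --source-group sg-XXXXXXXX --region us-east-2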

On my test instance I need to create a subscription to subscribe to all changes happening on my production setup.

$ psql -h pg11beta1.XXXXX.us-east-2.rds-preview.amazonaws.com -U pgadmin benchdb

benchdb=> CREATE SUBSCRIPTION pg11beta1 CONNECTION 'host=pg11prod.XXXXXX.us-east-2.rds-preview.amazonaws.com dbname=benchdb user=repluser password=SECRET' PUBLICATION pgprod11;
NOTICE:  created replication slot "pg11beta1" on publisher
CREATE SUBSCRIPTION

Note: if the command takes a long time to execute, it typically means it cannot connect to the production instance; check the security group to make sure the rule allowing your test instance to connect is set properly. If the connection is allowed, the command returns almost instantaneously. However, the actual data may still be loading behind the scenes.
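
The state of the replication can also be checked from the catalogs: pg_stat_subscription on the subscriber shows the received LSNs, and pg_replication_slots on the publisher shows whether the slot is active. For example:

benchdb=> SELECT subname, received_lsn, latest_end_lsn FROM pg_stat_subscription;
benchdb=> SELECT slot_name, active, confirmed_flush_lsn FROM pg_replication_slots;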

After some time, I can see that my test instance has all the initial data from the production setup.

benchdb=> select count(*) from pgbench_branches;
 count
-------
   100
(1 row)

benchdb=> select count(*) from pgbench_history;
 count
-------
     0
(1 row)

(The table pgbench_history is typically empty after a fresh setup of pgbench)

Now let's run an application workload on our production database pg11prod:

$ pgbench -c 10 -T 300 -P 10 -h pg11prod.XXXXX.us-east-2.rds-preview.amazonaws.com -U pgadmin benchdb


As the load starts (after the initial vacuum), log into the test instance and check for changes. With the default pgbench test, it is easy to verify that changes arrive by counting the entries in pgbench_history.

$ psql -h pg11beta1.XXXXX.us-east-2.rds-preview.amazonaws.com -U pgadmin benchdb
psql (10.4 (Ubuntu 10.4-2.pgdg16.04+1), server 11beta1)
Type "help" for help.

benchdb=> select count(*) from pgbench_history;
 count
-------
  2211
(1 row)

benchdb=> select count(*) from pgbench_history;
 count
-------
 10484
(1 row)

This is a simple test to confirm that changes are being propagated from the production instance to the test instance.

I finally have logical replication, using a read-only user, between two PostgreSQL 11 instances in the Amazon RDS Database Preview Environment.




Pretty cool!


This blog represents my own view points and not of my employer, Amazon Web Services.

Sunday, September 10, 2017

Best Practices with Managed PostgreSQL in the Cloud - #pgopen2017

Best Practices with Managed PostgreSQL in the Cloud @ Postgres Open SV 2017 (#pgopen2017)



Best Practices with Managed PostgreSQL in the Cloud from Jignesh Shah


This blog represents my own view points and not of my employer, Amazon Web Services.

Thursday, June 30, 2016

Hello Docker on Windows

NOTE (9/8/2016): This is an older post which I wrote a few months ago but never posted.

After using Docker on Linux for more than a year, it was finally time to try it on a different platform. Docker on Windows Server 2016 TP4 was one option, but that experience was a bit more complicated. So when I heard about Docker on Windows 10, I was initially surprised. Why? Based on what I had seen, Docker really needed Hyper-V to run, and I had assumed Hyper-V was only available in the Windows Server line.

I guess I was wrong. Under Control Panel -> Programs and Features -> Turn Windows Features On or Off, there is a feature called Hyper-V which can be turned on.
But before you go searching for it and trying to turn it on, read the following to save yourself some hassle.

1. You need Windows 10 Pro (sorry, Windows 10 Home does not work).
2. You need a CPU which supports virtualization and SLAT (Second Level Address Translation, aka EPT on Intel).

Task Manager -> Performance -> CPU makes it easy to see whether virtualization is supported, but SLAT is another story: systeminfo or Coreinfo is required to figure that out. You may be able to turn on some Hyper-V components on a CPU that does not support SLAT, but that will not be enough.
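
For example, Sysinternals Coreinfo can dump just the virtualization-related features (a sketch; run it from an elevated command prompt, where an asterisk means the feature is present):

C:\> coreinfo -v

Look for the second-level address translation entry (EPT on Intel) marked with an asterisk.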



I had to cycle through a few laptops with Intel Core 2 Duo and Intel Pentium chips, which support virtualization but not SLAT, before finally coming across my dusty desktop with an AMD Phenom, which supports virtualization with SLAT and runs Windows 10.

Of course, I then applied for the Docker beta program on Windows. The invitation came yesterday, and I finally got a chance to download and install the Docker binaries.

Once the installation (as Administrator, of course) finished, it gave the option to launch Docker, and after the daemon finished launching in the background it showed a splash image as follows:


Good job, Docker, on the usability of showing me what to do next:

Next, I deploy an nginx server as follows:
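
Something along these lines works (a sketch; the container name and port mapping are my choices):

$ docker run -d --name webserver -p 80:80 nginx

Docker pulls the nginx image from Docker Hub and starts the container in the background, publishing port 80 to the host.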



Whoa! In case it did not strike you: I am running Linux images here, on Windows!
Now I can access it in a browser at http://docker/
(This, I would say, was a bit of a struggle: I had not read the docs properly and was trying http://127.0.0.1/, http://localhost/, and http://LOCAL/, but only http://docker worked.)




Overall, very interesting and game-changing for development on Windows!

Monday, May 16, 2016

Yet another reboot of Application Architecture

Last week I attended RedisConf 2016 at the Mission Bay Conference Center and was excited to see more than 650 attendees discussing Redis. It is interesting that Redis has grown from a pure caching solution to supporting more of its customers' data use cases.

If we put the above in perspective, we can see how applications have changed over the years.

CHUI Era

In the years leading up to Y2K, applications were all monolithic: everything was done on a single setup, with people either using dumb terminals or opening telnet sessions from Windows or Unix clients to a text-based interface, later often called "ChUI" (Character User Interface). Browsers were not yet popular, but Windows was picking up, and some "modern" applications of that time got their first Windows fat client, though these were still all-in-one Windows "GUI" applications.

GUI Era

Through the decade leading up to the year 2000, client-server technologies became more popular, with a centralized database and a front end in a Windows rich client, a Java rich client, or a browser-based "WebUI". Companies like Oracle and Sun made a killing at the time selling large centralized servers running databases, accessed by rich clients or WebUI clients. Three-tier systems appeared in the later years, but the majority of enterprise applications were still rich clients.

Java Era

The middleware era was ruled by Java web application servers, leading to the "classic" three-tier system: database layer, middleware layer, presentation layer. This is the generation that heavily pushed SOA, leading to APIs everywhere. It is also the generation that led to XML hell, where everybody had to understand XML to interconnect everything. Still, things remained monolithic, especially in the database layer and, to a lesser extent, the middleware layer. Scale was largely limited by Amdahl's law. To work around these scaling issues, more tiers were introduced, like a "caching layer", "load proxies", etc.

ScaleOut Era

As databases became hard to scale on a single node, designs started changing toward new kinds of database systems running on smaller boxes: sharded, shared-nothing, and shared-data systems. This was the first reboot in some sense: "eventual consistency" paradigms became more popular, and applications were now developed against these multi-node databases. Applications had to introduce new layers with knowledge of the "intelligence" of the scale-out database: how to handle sharding, node reconnections, etc. The CAP theorem was discussed more than Amdahl's law. The number of tiers in such a scale-out application was already approaching ten distinct operational tiers. Some people ran multiple data centers, but primarily for DR use cases.

Cloud Era

With the advent of Amazon Web Services, a new refactoring of applications started, built around multiple data centers, variable latencies between services, and real decoupling between tiers. Earlier, tiers were more like "components" than services, with the assumption that everything would be updated together. The notion of "change management" also shifted toward continuous deployment to production. Applications became more complex, since some services were "always" in production mode, served by third-party providers, and third-party API consumption became very popular. This pushed the number of tiers from around 10 to more like 25-30 in an app.

MicroServices Era

With the advent of Linux containers like Docker and the adoption of microservices, yet another reboot of applications is happening, and this time at a faster pace than before. This is an interesting, ongoing era. A tier is no longer a "component" of an application but a purpose-driven "service" in itself. Every service is versioned, API-accessible, and fully updatable on its own without impacting the rest of the application. This change is pushing the number of tiers in a typical enterprise application beyond 100; I have heard of enterprises with 300-400 microservice-based tiers in their applications, many of them third-party services. There are advantages: there is no single monolithic "waterfall" release of the application anymore, and things that previously took months or years to build can now be built in hours or days. On the downside, there are just too many moving parts: architectural changes to your data flows and use cases are now very expensive, pre-deployment testing becomes difficult, and canary deployments become necessary to avoid introducing bugs that take down the whole application. Nothing is bad about evolution; it is just that our thinking about how to manage applications will have to change with the changing landscape.


In conclusion, applications have changed over the years, and adapting to these changes is necessary for businesses to keep up with the competition and retain their technology edge in the market.