{"id":2162,"date":"2019-07-10T18:30:34","date_gmt":"2019-07-10T13:00:34","guid":{"rendered":"http:\/\/blog.tenthplanet.in\/?p=2162"},"modified":"2026-03-03T10:14:02","modified_gmt":"2026-03-03T10:14:02","slug":"postgresql-users-and-roles-creation-and-privileges-using-pgpool","status":"publish","type":"post","link":"https:\/\/tenthplanet.in\/blogs\/postgresql-users-and-roles-creation-and-privileges-using-pgpool\/","title":{"rendered":"PostgreSQL users and roles creation and privileges using PGPool"},"content":{"rendered":"<h2>PostgreSQL users and roles creation and privileges:<\/h2>\n<p>We assume that PostgreSQL Enterprise Manager was installed and configured for setting up new users and roles.<\/p>\n<h3>Select Postgres Enterprise Manager Node and right click Login Roles to create new roles:<\/h3>\n<p>1. To modify the properties of an existing login role, right click on the name of a login role in the tree control, and select Properties from the context menu. To delete a login role, right click on the name of the role, and select Delete\/Drop from the context menu.<\/p>\n<p>2. A role must be granted sufficient privileges before accessing, executing, or creating any database object. PEM allows you to assign ( GRANT ) and remove ( REVOKE ) object permissions to group roles or login accounts using the graphical interface of the PEM client.<\/p>\n<h3>Creating Role through PSQL:<\/h3>\n<p>1. Create a role that can log in, but don&#8217;t give it a password:<\/p>\n<pre>CREATE ROLE developer LOGIN;<\/pre>\n<p>2. Create a role with a password:<\/p>\n<pre>CREATE USER user1 WITH PASSWORD 'jw8s0F4';<\/pre>\n<p>(CREATE USER is the same as CREATE ROLE except that it implies LOGIN.)<\/p>\n<p>3. Create a role with a password that is valid until the end of 2018. After one second has ticked in 2019, the password is no longer valid.<\/p>\n<pre>CREATE ROLE developer WITH LOGIN PASSWORD 'jw8s0F4' VALID UNTIL '2019-01-01';<\/pre>\n<p>4. 
Create a role that can create databases and manage roles:<\/p>\n<pre>CREATE ROLE admin WITH CREATEDB CREATEROLE;<\/pre>\n<h3>Postgres using PGPool:<\/h3>\n<p>1. From now on, we assume that the database clusters are located at \/home\/postgres\/data and are owned by the postgres user:<\/p>\n<pre>$ initdb -D \/home\/postgres\/data<\/pre>\n<p>2. Next, add the following to \/home\/postgres\/data\/postgresql.conf. The settings from &#8220;logging_collector&#8221; onward are not strictly required for streaming replication, but they make troubleshooting easier. You might want to remove &#8220;log_statement = &#8216;all'&#8221; in a production environment, however.<\/p>\n<pre>listen_addresses = '*'\nhot_standby = on\nwal_level = hot_standby\nmax_wal_senders = 1\nlogging_collector = on\nlog_filename = '%A.log'\nlog_line_prefix = '%p %t '\nlog_truncate_on_rotation = on\nlog_statement = 'all'<\/pre>\n<p>3. Copy pg_hba.conf to \/home\/postgres\/data. Replace &#8220;\/etc\/postgresql\/&#8221; with the actual directory where you downloaded the scripts. Caution: the settings here allow access from any IP address; apply appropriate settings for your real-world systems.<\/p>\n<pre>$ cp \/etc\/postgresql\/pg_hba.conf \/home\/postgres\/data<\/pre>\n<p>4. Start the PostgreSQL server on server2 and server3. At this point, each of those PostgreSQL servers runs as a primary server, so streaming replication is not yet working.<\/p>\n<pre>$ pg_ctl -D \/home\/postgres\/data start<\/pre>\n<p>5. Next, allow the postgres user on server2 and server3 to access each other over SSH without a password. Execute the ssh-keygen command as postgres and append the contents of \/home\/postgres\/.ssh\/id_rsa.pub to \/home\/postgres\/.ssh\/authorized_keys on the other server. We recommend testing the setup afterwards by executing the ls command via ssh, for example:<\/p>\n<pre>$ ssh server2 ls<\/pre>\n<h3>Installing pgpool-II<\/h3>\n<p>1. 
Run the commands below on the server1 postgres server:<\/p>\n<pre>$ tar xfz \/etc\/postgresql\/pgpool-II-3.3.3.tar.gz\n$ cd pgpool-II-3.3.3\n$ .\/configure\n$ make\n$ sudo make install\n$ cp \/etc\/postgresql\/pgpool-II-3.3.3\/install-functions.sh .\n$ sh install-functions.sh<\/pre>\n<p>2. Next, install the pgpool-II configuration files on server1. The main configuration file is pgpool.conf; the other is pcp.conf. Execute the following as root:<\/p>\n<pre>$ cp \/etc\/postgresql\/pgpool-II-3.3.3\/pgpool.conf \/usr\/local\/etc\n$ chown apache \/usr\/local\/etc\/pgpool.conf\n$ cp \/etc\/postgresql\/pgpool-II-3.3.3\/pcp.conf \/usr\/local\/etc\n$ chown apache \/usr\/local\/etc\/pcp.conf<\/pre>\n<p>3. Install basebackup.sh and pgpool_remote_start, which are necessary for online recovery, onto server2 and server3. Note that pgpool_remote_start specifies the path to the pg_ctl command; you might want to change it to the appropriate path for your PostgreSQL installation.<\/p>\n<pre>$ cp \/etc\/postgresql\/pgpool-II-3.3.3\/basebackup.sh \/home\/postgres\/data\n$ chmod 755 \/home\/postgres\/data\/basebackup.sh\n$ cp \/etc\/postgresql\/pgpool-II-3.3.3\/pgpool_remote_start \/home\/postgres\/data\n$ chmod 755 \/home\/postgres\/data\/pgpool_remote_start<\/pre>\n<p>4. Install failover.sh for automatic failover on server1:<\/p>\n<pre>$ sudo cp \/etc\/postgresql\/pgpool-II-3.3.3\/failover.sh \/usr\/local\/etc\n$ sudo chmod 755 \/usr\/local\/etc\/failover.sh<\/pre>\n<p>5. Create the necessary directories on server2 and server3. Execute the following as root:<\/p>\n<pre>$ mkdir \/var\/run\/pgpool\n$ chown apache \/var\/run\/pgpool\n$ mkdir \/var\/log\/pgpool\n$ chown apache \/var\/log\/pgpool\n\n$ mkdir \/var\/log\/pgpool\/trigger\n$ chown postgres \/var\/log\/pgpool\/trigger<\/pre>\n<p>6. Add \/var\/www\/.ssh\/id_rsa.pub to \/home\/postgres\/.ssh\/authorized_keys on server2 and server3. 
We recommend testing the setup afterwards by executing the ls command via ssh; for example, run the following as the apache user:<\/p>\n<pre>$ ssh postgres@server2 ls\n$ ssh postgres@server3 ls<\/pre>\n<h3>Installing pgpoolAdmin<\/h3>\n<pre>$ cd \/var\/www\/html\/pgpoolAdmin-3.3.1\n$ chmod 777 templates_c\n$ chown apache conf\/pgmgt.conf.php\n$ chmod 644 conf\/pgmgt.conf.php<\/pre>\n<p>1. Log in to pgpoolAdmin and start pgpool-II from the &#8220;pgpool status&#8221; menu. You will see that PostgreSQL on server2 port 5432 is running as the primary server. You should be able to connect to server3 port 5432 using psql. Let&#8217;s try to create a table:<\/p>\n<pre>$ createdb -h server1 test\n$ psql -h server1 test\ntest=# create table t1(i int);\nCREATE TABLE\ntest=#<\/pre>\n<p>2. Starting the standby server. To start the standby server, just click the &#8220;Recovery&#8221; button. &#8220;basebackup.sh&#8221; will be executed and the standby server will be started automatically.<\/p>\n<pre>-- insert into t1 via pgpool-II; it will be executed on the primary server\n$ psql -h server1 test\ntest=# insert into t1 values(1);\ntest=# \\q\n\n-- now connect to the standby server\n$ psql -h server3 test\ntest=# select * from t1;\n i\n---\n 1\n(1 row)<\/pre>\n<p>3. If the standby goes down, it is detached from pgpool-II. Users can still issue SQL via pgpool-II as usual; only streaming replication stops. To recover the standby server, click the &#8220;Recovery&#8221; button.<\/p>\n<h2>Postgres Backup and Archive:<\/h2>\n<p>Backing up the database:<\/p>\n<p>1. Create a backup file using pg_dump:<\/p>\n<pre>$ cd \/opt\/edb\/as10\/bin\n$ .\/pg_dump -U enterprisedb -Fp -f \/tmp\/edb.dmp edb\nPassword:\n$<\/pre>\n<p>2. 
Connect to the database as a superuser and export the SQL\/Protect data using the export_sqlprotect(&#8216;sqlprotect_file&#8217;) function, where sqlprotect_file is the fully qualified path to a file where the SQL\/Protect data is to be saved:<\/p>\n<pre>edb=# SELECT sqlprotect.export_sqlprotect('\/tmp\/sqlprotect.dmp');\nexport_sqlprotect\n-------------------\n\n(1 row)<\/pre>\n<p>The files \/tmp\/edb.dmp and \/tmp\/sqlprotect.dmp comprise your total database backup.<\/p>\n<p>3. Restoring from the backup files:<\/p>\n<p>The following example uses the psql utility program to restore the plain-text backup file \/tmp\/edb.dmp to a newly created database named newdb:<\/p>\n<pre>$ \/opt\/edb\/as10\/bin\/psql -d newdb -U enterprisedb -f \/tmp\/edb.dmp\nPassword for user enterprisedb:\nSET\nSET\nSET\nSET\nSET\nCOMMENT\nCREATE SCHEMA\n.\n.\n.<\/pre>\n<p>4. Connect to the new database as a superuser and delete all rows from the edb_sql_protect_rel table. This step removes any existing rows in the edb_sql_protect_rel table that were backed up from the original database; these rows do not contain the correct OIDs relative to the database where the backup file has been restored.<\/p>\n<pre>$ \/opt\/edb\/as10\/bin\/psql -d newdb -U enterprisedb\nPassword for user enterprisedb:\npsql.bin (10.0.1)\nType \"help\" for help.\n\nnewdb=# DELETE FROM sqlprotect.edb_sql_protect_rel;\nDELETE 2<\/pre>\n<p>Delete all rows from the edb_sql_protect table.<\/p>\n<pre>newdb=# DELETE FROM sqlprotect.edb_sql_protect;\nDELETE 1<\/pre>\n<p>Delete any statistics that may exist for the database.<\/p>\n<pre>newdb=# SELECT * FROM sqlprotect.edb_sql_protect_stats;\n username | superusers | relations | commands | tautology | dml\n----------+------------+-----------+----------+-----------+-----\n(0 rows)<\/pre>\n<p>5. 
For each row that appears in the preceding query, use the drop_stats function, specifying the role name of the entry.<\/p>\n<p>For example, if a row appeared with appuser in the username column, issue the following command to remove it:<\/p>\n<pre>newdb=# SELECT sqlprotect.drop_stats('appuser');\ndrop_stats\n------------\n\n(1 row)<\/pre>\n<p>6. Delete any offending queries that may exist for the database.<\/p>\n<pre>newdb=# SELECT * FROM sqlprotect.edb_sql_protect_queries;\n username | ip_address | port | machine_name | date_time | query\n----------+------------+------+--------------+-----------+-------\n(0 rows)<\/pre>\n<p>7. For each row that appears in the preceding query, use the drop_queries function, specifying the role name of the entry.<\/p>\n<p>For example, if a row appeared with appuser in the username column, issue the following command to remove it:<\/p>\n<pre>newdb=# SELECT sqlprotect.drop_queries('appuser');\ndrop_queries\n--------------\n\n(1 row)<\/pre>\n<p>8. 
Make sure the role names that were protected by SQL\/Protect in the original database exist in the database server where the new database resides. Then connect to the new database as a superuser and import the SQL\/Protect data using the import_sqlprotect function. The queries that follow confirm that the imported rows now reference the OIDs of the new database, roles, and relations:<\/p>\n<pre>newdb=# SELECT sqlprotect.import_sqlprotect('\/tmp\/sqlprotect.dmp');\nimport_sqlprotect\n-------------------\n(1 row)\n\nnewdb=# SELECT datname, oid FROM pg_database;\ndatname | oid \n-----------+-------\ntemplate1 | 1\ntemplate0 | 13909\nedb | 13917\nnewdb | 16679\n(4 rows)\n\nnewdb=# SELECT rolname, oid FROM pg_roles;\nrolname | oid \n--------------+-------\nenterprisedb | 10\nappuser | 16671\nnewuser | 16678\n(3 rows)\n\nnewdb=# SELECT relname, oid FROM pg_class WHERE relname IN ('dept','emp','appuser_tab');\nrelname | oid \n-------------+-------\nappuser_tab | 16803\ndept | 16809\nemp | 16812\n(3 rows)\n\nnewdb=# SELECT * FROM sqlprotect.edb_sql_protect;\ndbid | roleid | protect_relations | allow_utility_cmds | allow_tautology | allow_empty_dml\n-------+--------+-------------------+--------------------+-----------------+-----------------\n16679 | 16671 | t | t | f | f\n(1 row)\n\nnewdb=# SELECT * FROM sqlprotect.edb_sql_protect_rel;\ndbid | roleid | relid\n-------+--------+-------\n16679 | 16671 | 16809\n16679 | 16671 | 16803\n(2 rows)\n\nnewdb=# SELECT * FROM sqlprotect.edb_sql_protect_stats;\nusername | superusers | relations | commands | tautology | dml\n----------+------------+-----------+----------+-----------+-----\nappuser | 0 | 5 | 2 | 1 | 0\n(1 row)\n\nnewdb=# \\x\nExpanded display is on.\nnewdb=# SELECT * FROM sqlprotect.edb_sql_protect_queries;\n-[ RECORD 1 ]+---------------------------------------------\nusername | appuser \nip_address | \nport | \nmachine_name | \ndate_time | 20-JUN-14 13:21:00 -04:00 \nquery | CREATE TABLE appuser_tab_2 (f1 INTEGER); \n-[ RECORD 2 ]+---------------------------------------------\nusername | appuser \nip_address | \nport | \nmachine_name | \ndate_time | 20-JUN-14 13:22:00 -04:00 \nquery | INSERT INTO appuser_tab_2 VALUES (2); \n-[ RECORD 3 
]+---------------------------------------------\nusername | appuser \nip_address | 192.168.2.6 \nport | 50098 \nmachine_name | \ndate_time | 20-JUN-14 13:39:00 -04:00 \nquery | CREATE TABLE appuser_tab_3 (f1 INTEGER); \n-[ RECORD 4 ]+---------------------------------------------\nusername | appuser \nip_address | 192.168.2.6 \nport | 50098 \nmachine_name | \ndate_time | 20-JUN-14 13:39:00 -04:00 \nquery | INSERT INTO appuser_tab_2 VALUES (1); \n-[ RECORD 5 ]+---------------------------------------------\nusername | appuser \nip_address | 192.168.2.6 \nport | 50098 \nmachine_name | \ndate_time | 20-JUN-14 13:39:00 -04:00 \nquery | SELECT * FROM appuser_tab_2 WHERE 'x' = 'x';<\/pre>\n<p>Verify that the SQL\/Protect configuration parameters are set as desired in the postgresql.conf file for the database server running the new database. Restart the database server or reload the configuration file as appropriate.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Pgpool is a middleware that works between PostgreSQL servers and a PostgreSQL database client. 
Pgpool can manage multiple PostgreSQL servers.<\/p>\n","protected":false},"author":23,"featured_media":2215,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[424],"tags":[556,557,27],"class_list":["post-2162","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-pentaho","tag-pentaho-and-postgresql","tag-pgpool","tag-postgresql"],"acf":[],"_links":{"self":[{"href":"https:\/\/tenthplanet.in\/blogs\/wp-json\/wp\/v2\/posts\/2162","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tenthplanet.in\/blogs\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tenthplanet.in\/blogs\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tenthplanet.in\/blogs\/wp-json\/wp\/v2\/users\/23"}],"replies":[{"embeddable":true,"href":"https:\/\/tenthplanet.in\/blogs\/wp-json\/wp\/v2\/comments?post=2162"}],"version-history":[{"count":0,"href":"https:\/\/tenthplanet.in\/blogs\/wp-json\/wp\/v2\/posts\/2162\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/tenthplanet.in\/blogs\/wp-json\/wp\/v2\/media\/2215"}],"wp:attachment":[{"href":"https:\/\/tenthplanet.in\/blogs\/wp-json\/wp\/v2\/media?parent=2162"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tenthplanet.in\/blogs\/wp-json\/wp\/v2\/categories?post=2162"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tenthplanet.in\/blogs\/wp-json\/wp\/v2\/tags?post=2162"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}