Idle in transaction – DELETE waiting

We have a Linux server (Red Hat Enterprise Linux 6) running a multi-site installation of MediaWiki. In total we have 120 unique wikis today, each with its own PostgreSQL 9.1 database.

Suddenly the server stopped responding. The number of concurrent Apache processes climbed to the MaxClients setting in httpd.conf, and people could no longer log in or view any wiki pages.

The Apache log showed nothing special. But when we checked which processes were running during a full stop on the server with the Unix command `ps`, we saw entries like:

# ps -ef | egrep 'DELETE|idle'
postgres 25106 2502 0 11:12 ? 00:00:00 postgres: ubwiki ubwiki idle in transaction
postgres 25108 2502 0 11:12 ? 00:00:00 postgres: ubwiki ubwiki DELETE waiting
postgres 25447 2502 0 11:13 ? 00:00:00 postgres: ubwiki ubwiki DELETE waiting
postgres 25619 2502 0 11:14 ? 00:00:00 postgres: ubwiki ubwiki DELETE waiting
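What `ps` shows from the outside can also be inspected from inside the database. A hedged sketch of such a query, using the PostgreSQL 9.1 `pg_stat_activity` column names (`procpid`, `current_query`, `waiting`; later versions renamed these):

```sql
-- List sessions oldest-first; in 9.1, idle-in-transaction sessions show
-- '<IDLE> in transaction' as their current_query.
SELECT procpid, waiting, current_query, now() - query_start AS age
FROM pg_stat_activity
ORDER BY query_start;
```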

A search on the Internet showed that others had run into this too, and we found the solution to our problem here:



We fixed the problem by adding a bash script to /etc/crontab, running every fifth minute.

The script first checks whether there are three (3) or more processes with the description “idle in transaction”.

If so, the pkill command stops (“kills”) the oldest one.

if [ `/usr/bin/pgrep -f 'idle in transaction' | wc -l` -gt 2 ]; then
    echo "Found 'idle in transaction' processes; stopping the oldest one." | mail to@someone
    /usr/bin/pkill -o -f 'idle in transaction'
fi

Here is our entry in the /etc/crontab file.

[code lang="bash"]
*/5 * * * * root /usr/share/scripts/fix-transactions/
[/code]
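As an alternative to `pkill`, the same cleanup can be done from inside PostgreSQL itself. A sketch, again using the 9.1 catalog column names, with an assumed five-minute threshold (tune to taste):

```sql
-- Terminate backends that have been idle in a transaction for over five minutes.
-- pg_terminate_backend sends the same SIGTERM that pkill would.
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
  AND now() - query_start > interval '5 minutes';
```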


Authenticate PostgreSQL clients against a backend LDAP

We have a PostgreSQL server open to many different people in our organization. To add LDAP authentication, you can add these 3 lines to your current pg_hba.conf file:

host  all username 127.0.0.1/32         ldap ldapserver="" ldapbasedn="ou=people,dc=uib,dc=no"
host  all username 129.177.XXX.YYY/32   ldap ldapserver="" ldapbasedn="ou=people,dc=uib,dc=no"
local all username                      ldap ldapserver="" ldapbasedn="ou=people,dc=uib,dc=no"

where the first line allows users that access the server from localhost (either from a shell or TCP connections). The second line allows connections from a single computer with the IP address 129.177.XXX.YYY.

Finally, the last line uses the LDAP backend for local (Unix-socket) connections. That means a Linux user logged in over SSH will still be validated against the LDAP server, regardless of any local password.

Remember, this is only authentication, so you still have to add the username to the PostgreSQL database (hint: use the createuser command).

If you want to allow a user to connect remotely over the web, remember to use SSL in the webserver (HTTPS), so that the password is not transmitted unencrypted.

Note also: this is plain LDAP, not LDAPS, so traffic from the PostgreSQL server to the LDAP server is not encrypted. In short: keep the distance between the PostgreSQL server and the LDAP server short, or try to implement LDAPS, which I attempted and have failed at, so far…
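One route around the unencrypted LDAP traffic: recent PostgreSQL releases support StartTLS towards the LDAP server via the `ldaptls` option in pg_hba.conf (check your version's documentation; this is a hedged sketch, reusing the lines above):

```
host all username 129.177.XXX.YYY/32 ldap ldapserver="" ldaptls=1 ldapbasedn="ou=people,dc=uib,dc=no"
```

This still requires the LDAP server to accept StartTLS on the standard LDAP port; it is not the same as LDAPS on port 636.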

Open Journal Systems missing email logs

I upgraded our Open Journal Systems from 2.2.2 to 2.4.2, and afterwards we saw that the email logs were gone. This was related to:

My solution was this:
I set up a new PostgreSQL server in a virtual environment (VMware test machine) and imported both the old and the new (upgraded) database.

# Log in to Prod server
ssh prodserver
# Dump the related and upgraded database to a sql file
pg_dump -U postgres -f /tmp/ojs_new.sql ojs_new
# Copy the file to a Test server where you can play around
scp /tmp/ojs_new.sql username@my_vmware_test_server:/tmp/
# Log in to your Test server
ssh my_vmware_test_server
# Become Root, the system super user
sudo su -
# Become Postgres user, so that you can do all Postgresql stuff
su - postgres
# Create a user named "ojs"
createuser ojs
# Create the databases
createdb -O ojs ojs_new
createdb -O ojs ojs_old
# Import the databases into PostgreSQL on the test VMware server
psql -f /tmp/ojs_old.sql ojs_old
psql -f /tmp/ojs_new.sql ojs_new

ojs_old: was the original 2.2.2 database
ojs_new: was the upgraded 2.4.2 one

To fix the assoc_id numbers, I used this bash script:

# First find all log_id's in the old database (ojs_old)
ORIGLIST=`psql -t -d ojs_old -c "SELECT log_id FROM article_email_log" postgres`
for i in $ORIGLIST; do
  OLD_ARTICLE_ID=`psql -t -d ojs_old -c "SELECT article_id FROM article_email_log WHERE log_id = '$i'" postgres`
  echo "psql -d boap_journals -c \"UPDATE email_log SET assoc_id = $OLD_ARTICLE_ID WHERE log_id = $i\" postgres"
done

The script writes its output to STDOUT. I redirected it into a new script file, copied that to our production server, and ran it there.
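The loop above just prints one psql command per log entry. Factored into a function (hypothetical name `make_update`; the real script reads the ids from the ojs_old database), the generated line looks like this:

```shell
# Build one "psql ... UPDATE ..." command line from a log_id and an article_id.
make_update() {
  printf 'psql -d boap_journals -c "UPDATE email_log SET assoc_id = %s WHERE log_id = %s" postgres\n' "$2" "$1"
}

make_update 7 42
# prints: psql -d boap_journals -c "UPDATE email_log SET assoc_id = 42 WHERE log_id = 7" postgres
```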

ssh my_vmware_test_server
sudo su -
# Send the output of the script to file
./ >
# copy the file to the Production server
scp username@prodserver:/tmp/
# log in to the Production server
ssh prodserver
# Become root, be careful!
sudo su -
cd /tmp/
# Become the PostgreSQL super user
su - postgres
# Take a database dump first, so you can restore if something goes terribly wrong!
pg_dumpall -f mybackup_just_in_case.sql
# Become root again and run the script!

Postgresql 9.1 BACKUP DUMP BASH script

We wrote an improved PostgreSQL dump bash script for PostgreSQL version 9.1. It saves each dump file under the name database_name_DAYNAME.sql.bz2.
This way we keep only 7 backups at any time, because each file is overwritten after seven days. Since our backup system (TSM) keeps 7 versions of each file, we have 49 versions at any time. That means we can go up to 49 days back in time to restore a given dump file. In addition, the script saves a monthly dump file named database_name_MONTHNAME.sql.bz2.
You can also set the variable TESTSYSTEM to 'yes'; then the script dumps to a single fixed filename, database_name_daily.sql.bz2. This can be useful on test servers, where you might not be interested in a historical backup back in time.

Here it is:


## This script dumps all the databases in a Postgres 9.1 server, localhost
## Two dump files are made for each database: one with INSERTs, another without.

# - implement a 'if system is Test' option to minimize number of dump files UNDER PROGRESS
# - use functions instead?
# - some kind of integration with Jenkins?
# - fix the 2 strange '|' that appears in the DATABASE list FIXED?
# - Add timer so we can optimize speed of the script execution time
# - enable use of the logfile LOGFILE. Could be nice to log what this script is/has been doing and when.
# - number of days to keep a dump file could be a parameter to this script
# - enable print of name of the script, where the script is run (hostname and directory). Makes it easy to find the script on a server
# - would be nice to add a incremental feature for this script. Then one can dump files several times a day, without worrying about space problems on the harddisk DIFFICULT?

# Timer
start_time=$(date +%s)

# Variables
BACKUP_DIR="/var/backup/postgresql_dumps"
BACKUP_DIR2="var/backup/postgresql_dumps" # same path without the leading slash, for tar -C / ... Gosh..
LOGFILE="/var/log/pgsql_dump.log"
MAILLIST="someone att somewhere" # should be edited
# Is this a test system? Set TESTSYSTEM to 'yes' in order to remove date and time
# information from dumpfile names (in order to minimize the number of dumpfiles).
TESTSYSTEM="no"
TODAY=`date +%a`       # weekday name, e.g. Mon
MONTHNAME=`date +%b`   # month name, e.g. Jan
DAYINMONTH=`date +%d`  # day of month
YEAR=`date +%Y`
ALLDATABASESFILENAME="all-databases_$TODAY" # name for the full pg_dumpall files

# Only postgres can run this script
if [ `whoami` != "postgres" ]; then
  echo "pgsql_dump tried to run, but user is not postgres!" >> $LOGFILE
  echo "You are not postgres, can not run."
  echo "Try: su -c ./ postgres"
  exit 1
fi

# Check if there are any recent backup files. If not, something is wrong!
if [ `find $BACKUP_DIR -type f -name '*.sql.bz2' -mtime -2 | wc -l` -eq 0 ]; then
  echo "There are no pgsql dumps for the last 2 days at $HOSTNAME. Something is wrong!" | mail -s "[PGSQLDUMP ERROR] $HOSTNAME" $MAILLIST
fi

# A logfile might be nice to have (or maybe Jenkins is the way to go?)
if [ ! -e $LOGFILE ]; then
  touch $LOGFILE
fi

# Find the databases in this server (skip template0 and psql's decoration lines)
DATABASES=`psql -q -c "\l" | sed -n 4,/\eof/p | grep -v rows | grep -v template0 | awk '{print $1}' | sed 's/^://g' | sed -e '/^$/d' | grep -v '|'`

for i in $DATABASES; do

  ## Create folders for each database if they don't exist
  if [ ! -d "$BACKUP_DIR/$i" ]; then
    mkdir $BACKUP_DIR/$i
  fi
  if [ ! -d "$BACKUP_DIR/$i/daily" ]; then
    mkdir $BACKUP_DIR/$i/daily
  fi
  if [ ! -d "$BACKUP_DIR/$i/monthly" ]; then
    mkdir $BACKUP_DIR/$i/monthly
  fi

  # On test servers we don't want dump files with date and time information
  if [ "$TESTSYSTEM" == "yes" ]; then
    DAILYFILENAME="${i}_daily"
    MONTHLYFILENAME="${i}_monthly"
  else
    DAILYFILENAME="${i}_$TODAY"
    MONTHLYFILENAME="${i}_$MONTHNAME"
  fi
# backup for each weekday (Mon, Tue, ...)
nice -n 10 /usr/pgsql-9.1/bin/pg_dump --column-inserts $i > $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql
nice -n 10 tar cjf $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql.bz2 -C / $BACKUP_DIR2/$i/daily/"$DAILYFILENAME".sql
rm -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql

# dump with copy statements
nice -n 10 /usr/pgsql-9.1/bin/pg_dump $i > $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql
nice -n 10 tar cjf $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql.bz2 -C / $BACKUP_DIR2/$i/daily/"$DAILYFILENAME"_copy.sql
rm -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql

# monthly backup (Jan, Feb...)
if [ "$DAYINMONTH" -eq 10 ]; then
  cp -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql.bz2 $BACKUP_DIR/$i/monthly/"$MONTHLYFILENAME".sql.bz2
  cp -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql.bz2 $BACKUP_DIR/$i/monthly/"$MONTHLYFILENAME"_copy.sql.bz2
fi

# Year backup
# coming after a while

done

## Full backup
nice -n 10 /usr/pgsql-9.1/bin/pg_dumpall --column-inserts > $BACKUP_DIR/"$ALLDATABASESFILENAME".sql
nice -n 10 /usr/pgsql-9.1/bin/pg_dumpall > $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql
nice -n 10 tar cjf $BACKUP_DIR/"$ALLDATABASESFILENAME".sql.bz2 -C / var/backup/postgresql_dumps/"$ALLDATABASESFILENAME".sql
nice -n 10 tar cjf $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql.bz2 -C / var/backup/postgresql_dumps/"$ALLDATABASESFILENAME"_copy.sql

## Vacuuming (is it really necessary for PG 9.1? Don't think so...)
#nice -n 10 vacuumdb -a -f -z -q

finish_time=$(date +%s)
echo "Time duration for pg_dump script at $HOSTNAME: $((finish_time - start_time)) secs." | mail -s "[PGSQLDUMP] $HOSTNAME" $MAILLIST
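The whole rotation scheme above hinges on deriving the dump filename from the weekday and month names. A minimal sketch of just that naming logic (hypothetical helper names):

```shell
# Weekday-based name: overwritten once a week, so at most 7 daily files per database.
daily_dump_name()   { echo "${1}_$(date +%a).sql.bz2"; }   # e.g. mydb_Mon.sql.bz2
# Month-based name: overwritten once a year, so at most 12 monthly files.
monthly_dump_name() { echo "${1}_$(date +%b).sql.bz2"; }   # e.g. mydb_Jan.sql.bz2
```

With TSM keeping 7 versions of each weekly-overwritten file, the oldest retrievable Monday dump is 7 weeks old, which is where the 49 days come from.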

Postgresql 8.1 database bash dump script

I’ve written a simple bash script to dump databases from a PostgreSQL 8.1 database server,
using pg_dump and pg_dumpall.

## This script dumps all the databases in a Postgres 8.1 server, localhost
## Two dump files are made for each database: one with INSERTs, another without.
## Some variables, change them to fit yours
 BACKUP_DIR="/backup/postgresql_dumps"
 LOGFILE="/var/log/pgsql_dump.log"
 MAILLIST="someone att somewhere" # should be edited

# Date and time variables
 DATE=`date +%Y-%m-%d`
 TIME=`date +%k:%M:%S`
# Only postgres can run this script!
 if [ `whoami` != "postgres" ]; then
   echo "pgsql_dump tried to run, but user is not postgres!" >> $LOGFILE
   echo "You are not postgres, can not run."
   echo "Try: su -c ./ postgres"
   exit 1
 fi
# clean up old dumps! (find all types which are files, with the name that ends with .sql,
# their date older than 7 days, and execute the command "rm" (remove) )
 find $BACKUP_DIR -type f -name '*.sql' -mtime +7 -exec rm {} \;

# Check if there any backup files.
# Action: find in folder BACKUP_DIR all files with file-extension .sql.
# Count the number of files with wc (word count, option -l, which counts the numbers of lines.
# If this number is 0, then there are no files, and clearly something is wrong,
# because you don't have any backups!
if [ `find $BACKUP_DIR -type f -name '*.sql' | wc -l` -eq 0 ]; then
 echo "There are no pgsql dumps at $HOSTNAME. Something is wrong!" | mail -s "[PGSQLDUMP ERROR] $HOSTNAME" $MAILLIST
fi

# Create the log-file if it doesn't exist
if [ ! -e $LOGFILE ]; then
 touch $LOGFILE
fi

# Find which databases you have in your Postgresql server
# Action: list out all the databases, remove unwanted lines and characters, extract wanted line with awk (line 1),
# strip away white empty lines with the command 'sed':
DATABASES=`psql -q -c "\l" | sed -n 4,/\eof/p | grep -v rows | grep -v template0 | awk '{print $1}' | sed -e '/^$/d'`
# Dump the databases into individual files (with INSERT statements, pg_dump -D)
 for i in $DATABASES; do
   FILENAME="$i-$DATE.sql"
   pg_dump -D $i > $BACKUP_DIR/"$FILENAME"
 done

# We also like a dump with COPY statements
 for i in $DATABASES; do
   FILENAME="$i-${DATE}_copy.sql"
   pg_dump $i > $BACKUP_DIR/"$FILENAME"
 done

# A full backup is also necessary
 pg_dumpall -D > $BACKUP_DIR/"all-databases-$DATE".sql
 pg_dumpall > $BACKUP_DIR/"all-databases-${DATE}_copy".sql

# Finally vacuum the database
 vacuumdb -a -f -z -q
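The age-based cleanup near the top of the script (`find ... -mtime +7 -exec rm {} \;`) can be tried out safely in a scratch directory first (GNU `touch -d` assumed, as on Red Hat):

```shell
# Demonstrate 'delete dumps older than 7 days' without touching real backups.
SCRATCH=$(mktemp -d)
touch "$SCRATCH/fresh.sql"                  # modified just now: must survive
touch -d '10 days ago' "$SCRATCH/stale.sql" # older than 7 days: must go
find "$SCRATCH" -type f -name '*.sql' -mtime +7 -exec rm {} \;
ls "$SCRATCH"   # -> fresh.sql
```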