Machform error: Cardinality violation: 1242 Subquery returns more than 1 row

We have a multi-user Machform PHP application running on a Red Hat 7 Linux server with the Apache web server and a MariaDB database.

In a specific form, accessing the URI confirm.php?id=NNNNNNNN&mf_page_from=3 after page 3 (where NNNNNNNN is the id number of the form) produces the following error message:

SQL: [671] select A.element_id, A.option_id, (select if(B.element_matrix_parent_id=0,A.option, (select C.`option` from ap_element_options C where C.element_id=B.element_matrix_parent_id and C.form_id=A.form_id and C.live=1 and C.option_id=A.option_id)) ) 'option_label' from ap_element_options A left join ap_form_elements B on (A.element_id=B.element_id and A.form_id=B.form_id) where A.form_id=? and A.live=1 and B.element_type='matrix' and B.element_status=1 order by A.element_id,A.option_id asc
Sent SQL: [680] select A.element_id, A.option_id, (select if(B.element_matrix_parent_id=0,A.option, (select C.`option` from ap_element_options C where C.element_id=B.element_matrix_parent_id and C.form_id=A.form_id and C.live=1 and C.option_id=A.option_id)) ) 'option_label' from ap_element_options A left join ap_form_elements B on (A.element_id=B.element_id and A.form_id=B.form_id) where A.form_id='NNNNNNNN' and A.live=1 and B.element_type='matrix' and B.element_status=1 order by A.element_id,A.option_id asc
Params: 1
Key: Position #0: paramno=0 name=[0] "" is_param=1 param_type=2
Query Failed: SQLSTATE[21000]: Cardinality violation: 1242 Subquery returns more than 1 row

The solution is to change these files:

/includes/entry-functions.php and export_entries.php

where the line:

C.option_id=A.option_id))

is changed to:

C.option_id=A.option_id LIMIT 1))
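
With the LIMIT 1 in place, the inner subquery of the query shown in the error above can return at most one row:

(select C.`option` from ap_element_options C where C.element_id=B.element_matrix_parent_id and C.form_id=A.form_id and C.live=1 and C.option_id=A.option_id LIMIT 1)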

MySQL / MariaDB performance test

I have two different Linux servers and wanted to test whether one was faster than the other. I used mysqlslap together with the "time" command in a bash shell.

# time mysqlslap --user=root --password=$(cat /root/mysql) --host=localhost --concurrency=50 --iterations=100 --number-int-cols=5 --number-char-cols=20 --auto-generate-sql --verbose

In this way I could get various values from each server that I could then compare.

Benchmark
Average number of seconds to run all queries: 0.007 seconds
Minimum number of seconds to run all queries: 0.006 seconds
Maximum number of seconds to run all queries: 0.024 seconds
Number of clients running queries: 1
Average number of queries per client: 0

real 0m9.390s
user 0m0.085s
sys 0m0.165s
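
To compare the two servers directly, the same command can be wrapped in a small loop. A sketch (the host names are placeholders, and it assumes the MySQL root user is allowed to connect from the machine running the test):

#!/bin/bash
# Run the same synthetic mysqlslap load against each server and time it (host names are placeholders)
for HOST in server1.example.com server2.example.com; do
  echo "== $HOST =="
  time mysqlslap --user=root --password=$(cat /root/mysql) --host=$HOST \
    --concurrency=50 --iterations=100 --number-int-cols=5 --number-char-cols=20 \
    --auto-generate-sql --verbose
done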

Reference: https://www.digitalocean.com/community/tutorials/how-to-measure-mysql-query-performance-with-mysqlslap


Idle in transaction – DELETE waiting

We have a Linux server (Red Hat Enterprise Linux 6) running a multi-site installation of MediaWiki. In total we have 120 unique wikis today, each with its own PostgreSQL 9.1 database.

Suddenly the server stopped responding. The number of concurrent Apache processes went up to Apache's MaxClients setting in httpd.conf, and people could not log in or see any wiki pages.

In the Apache log we did not see anything special. But when we used the Unix command `ps` to check which processes were running during a full stop on the server, we saw entries like:

[bash]
# ps -ef | egrep 'DELETE|idle'
postgres 25106 2502 0 11:12 ? 00:00:00 postgres: ubwiki ubwiki 127.0.0.1(37186) idle in transaction
postgres 25108 2502 0 11:12 ? 00:00:00 postgres: ubwiki ubwiki 127.0.0.1(37189) DELETE waiting
postgres 25447 2502 0 11:13 ? 00:00:00 postgres: ubwiki ubwiki 127.0.0.1(37420) DELETE waiting
postgres 25619 2502 0 11:14 ? 00:00:00 postgres: ubwiki ubwiki 127.0.0.1(37553) DELETE waiting
[/bash]
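
The same sessions can also be inspected from inside PostgreSQL. A small sketch for PostgreSQL 9.1, where the pg_stat_activity columns are still called procpid and current_query (newer versions use pid and query):

[bash]
# Show backends that are idle in transaction or waiting on a lock (PostgreSQL 9.1 column names)
psql -U postgres -c "SELECT procpid, datname, usename, waiting, xact_start, current_query FROM pg_stat_activity WHERE current_query = '<IDLE> in transaction' OR waiting"
[/bash]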

A search on the Internet showed that others had experienced this too, and we found the solution to our problem here:

* http://www.mail-archive.com/dspace-tech@lists.sourceforge.net/msg00109.html

* https://wiki.duraspace.org/display/DSPACE/Idle+In+Transaction+Problem#IdleInTransactionProblem-Workaround:Killing

We implemented a fix to our problem by adding a bash script to /etc/crontab, running every five minutes.

The script first checks whether there are three (3) or more processes with the description "idle in transaction".

If so, it sends an email notification and the pkill command stops ("kills") the oldest one.

fix-transactions.sh:

[bash]
#!/bin/bash
# Notify and kill the oldest backend when three or more "idle in transaction" processes are found
if [ `/usr/bin/pgrep -f 'idle in transaction' | wc -l` -gt 2 ]
then
echo "Email message about that process was found and stopped." | mail to@someone
fi
/usr/bin/test `/usr/bin/pgrep -f 'idle in transaction' | wc -l` -gt 2 && pkill -o -f 'idle in transaction'
[/bash]

The script runs every five minutes. Here is our entry in the /etc/crontab file.

[code lang="bash"]
*/5 * * * * root /usr/share/scripts/fix-transactions/fix-transactions.sh
[/code]


Authenticate PostgreSQL client against backend LDAP

We have a PostgreSQL server open to many different people at our organization. In order to add LDAP authentication, you can add these three lines to your current pg_hba.conf file:

host all username    127.0.0.1/32       ldap ldapserver="ourserver.uib.no" ldapbasedn="ou=people,dc=uib,dc=no"
host all username    129.177.XXX.YYY/32 ldap ldapserver="ourserver.uib.no" ldapbasedn="ou=people,dc=uib,dc=no"
local all username                      ldap ldapserver="ourserver.uib.no" ldapbasedn="ou=people,dc=uib,dc=no"

The first line allows users that access the server from localhost (connections to 127.0.0.1, for example from a shell on the server itself). The second line allows a computer with the IP address 129.177.XXX.YYY.

Finally, the last line uses the LDAP backend for local (Unix socket) connections. That means a Linux user who has logged in with SSH will still be validated against the LDAP server, regardless of any local password.

Remember that this is only authentication, so you still have to add the username to the PostgreSQL database (hint: use the createuser command).
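
A minimal sketch of creating the matching role, run as the postgres user ("username" is just a placeholder for the LDAP account name):

# Create a database role matching the LDAP username; authentication itself is handled by LDAP
createuser username
# or, equivalently, from within psql: CREATE ROLE username LOGIN;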

If you want to allow a user to connect remotely over the web, remember to use SSL (HTTPS) in the web server, so that the password is not transmitted unencrypted.

Note also: this is plain LDAP, not LDAPS, so the data going from the PostgreSQL server to the LDAP server is not encrypted. In short: keep the distance between the PostgreSQL server and the LDAP server short, or try to implement LDAPS (which I have attempted, but so far failed at…).

Open Journal Systems missing email logs

I upgraded our Open Journal Systems installation from 2.2.2 to 2.4.2, and afterwards we saw that the email logs were gone. This was related to:

http://pkp.sfu.ca/support/forum/viewtopic.php?f=8&t=9140

My solution was this:
I set up a new PostgreSQL server in a virtual environment (a VMware test machine) and imported both the old and the new (upgraded) database.

# Log in to Prod server
ssh prodserver
# Dump the related and upgraded database to a sql file
pg_dump -U postgres -f /tmp/ojs_new.sql ojs_new
# Copy the file to a Test server where you can play around
scp /tmp/ojs_new.sql username@my_vmware_test_server:
# Log in to your Test server
ssh my_vmware_test_server
# Become Root, the system super user
sudo su -
# Become Postgres user, so that you can do all Postgresql stuff
su - postgres
# Create a user named "ojs"
createuser ojs
# Create the databases
createdb -O ojs ojs_new
createdb -O ojs ojs_old
# Import the databases to Postgresql on the Test Vmware server
psql -f ojs_old.sql ojs_old
psql -f ojs_new.sql ojs_new

ojs_old: the original 2.2.2 database
ojs_new: the upgraded 2.4.2 one

In order to fix the assoc_id numbers, I used this bash script:

myscript.sh:

#!/bin/bash
# Find first all log_id's in the old database (ojs_old)
ORIGLIST=`psql -t -d ojs_old -c "SELECT log_id FROM article_email_log" postgres`;
for i in $ORIGLIST;do
  OLD_ARTICLE_ID=`psql -t -d ojs_old -c "SELECT article_id FROM article_email_log WHERE log_id = '$i'" postgres`
  echo -e "psql -d boap_journals -c \"UPDATE email_log SET assoc_id = $OLD_ARTICLE_ID WHERE log_id = $i\" postgres"
done

The script writes its output to STDOUT. I redirected it into a new script, which I then ran on our production server.
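
For illustration, each generated line looks roughly like this (the log_id and article_id values here are made up):

psql -d boap_journals -c "UPDATE email_log SET assoc_id = 1234 WHERE log_id = 42" postgres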

ssh my_vmware_test_server
sudo su -
# Send the output of the script to file
./myscript.sh > runme-on-prod-server.sh
# copy the file to the Production server
scp runme-on-prod-server.sh username@prodserver:/tmp/
# log in to the Production server
ssh prodserver
# Become root, be careful!
sudo su -
cd /tmp/
# Run the script! (But! Remember to do a PG database dump first, so you can restore if something goes terribly wrong!)
# become Postgresql super user
su - postgres
pg_dumpall -f mybackup_just_in_case.sql
# become root again
exit
./runme-on-prod-server.sh

Upgrade Moodle 2.2.1 to 2.6.2+

I tried to upgrade Moodle 2.2.1 to 2.6.2+ and got this error message:

Default exception handler: DDL sql execution error Debug: Data truncated for column 'institution' at row 999
ALTER TABLE mdl_user MODIFY COLUMN institution VARCHAR(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL DEFAULT '' after phone2
Error code: ddlexecuteerror

The solution was to run these SQL statements against the MySQL database:

update mdl_user set institution="" where institution is NULL;
update mdl_user set department="" where department is NULL;
update mdl_user set address="" where address is NULL;
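
Before running the updates, it can be useful to check how many rows will be affected. A quick check (the database name "moodle" and login details are placeholders for your own setup):

mysql -u root -p moodle -e "SELECT COUNT(*) FROM mdl_user WHERE institution IS NULL OR department IS NULL OR address IS NULL;"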

This issue seems to be related to:

PostgreSQL 9.1 BACKUP DUMP BASH script

We wrote an improved PostgreSQL dump bash script for PostgreSQL version 9.1. It saves each dump file with the name daily_database_name_DAYNAME.sql.bz2.
In this way we only keep 7 daily backups at any time, because each file is overwritten after seven days. Since our backup system (TSM) keeps 7 versions of each file, we then have 49 versions at any time, which means we can go about 49 days back in time to restore a certain dump file. In addition, the script saves a monthly dump file with the name monthly_database_name_MONTHNAME.sql.bz2.
You can also set the variable TESTSYSTEM to 'yes'; the script will then dump to one fixed filename, daily_database_name.sql.bz2. This can be useful on test servers, where you might not be interested in keeping a history of backups.

Here it is:

#!/bin/bash

## This script dumps all the databases in a Postgres 9.1 server (localhost).
## Two dump files are made for each database: one with INSERT statements and one without.

## TODO
# - implement a 'if system is Test' option to minimize number of dump files UNDER PROGRESS
# - use functions instead?
# - some kind of integration with Jenkins?
# - fix the 2 strange '|' that appears in the DATABASE list FIXED?
# - Add timer so we can optimize speed of the script execution time
# - enable use of the logfile LOGFILE. Could be nice to log what this script is/has been doing and when.
# - number of days to keep a dump file could be a parameter to this script
# - enable print of name of the script, where the script is run (hostname and directory). Makes it easy to find the script on a server
# - would be nice to add a incremental feature for this script. Then one can dump files several times a day, without worrying about space problems on the harddisk DIFFICULT?
## TODO END

# Timer
start_time=$(date +%s)

# Variables
LOGFILE="/var/lib/pgsql/9.1/data/pg_log/pgsql_dump.log"
BACKUP_DIR="/var/backup/postgresql_dumps"
BACKUP_DIR2="var/backup/postgresql_dumps" # same path without the leading slash, used with 'tar -C /' below
HOSTNAME=`hostname`
MAILLIST="someone att somewhere" # should be edited
# Is this a test system? Set TESTSYSTEM to 'yes' in order to remove date and time information from dumpfile names (in order to minimize number of dumpfiles).
TESTSYSTEM="no"
TODAY=$(date|awk '{ print $1 }')
MONTH=$(date|awk '{ print $2 }')
MONTHNAME=`date +%b --date '0 month'`
DAYINMONTH=$(date|awk '{ print $3 }')
YEAR=$(date | awk '{ print $6 }')

# Only postgres can run this script
if [ `whoami` != "postgres" ]; then
echo "pgsql_dump tried to run, but user is not postgres!" >> $LOGFILE
echo "You are not postgres, can not run."
echo "Try: su -c ./pgsql_dump.sh postgres"
exit;
fi

# Check if there any backup files. If not, something is wrong!
if [ `find $BACKUP_DIR -type f -name '*.sql.bz2' -mtime -2 | wc -l` -eq 0 ]; then
echo "There are no pgsql dumps for the last 2 days at $HOSTNAME. Something is wrong!" | mail -s "[PGSQLDUMP ERROR] $HOSTNAME" $MAILLIST
fi

# logfile might be nice to have (or maybe Jenkins is the way to go?)
if [ ! -e $LOGFILE ]; then
touch $LOGFILE
fi

if [ $TESTSYSTEM == "yes" ];then
#DATABASES=`psql -q -c "\l" | sed -n 4,/\eof/p | grep -v rows | grep -v template0 | awk {'print $1}' | sed 's/^://g' | sed -e '/^$/d' | grep -v '|'`
# For testing purposes
DATABASES="database-1
database-2"
else
DATABASES=`psql -q -c "\l" | sed -n 4,/\eof/p | grep -v rows | grep -v template0 | awk {'print $1}' | sed 's/^://g' | sed -e '/^$/d' | grep -v '|'`
fi

for i in $DATABASES; do

## Create folders for each database if they don't exist
if [ ! -d "$BACKUP_DIR/$i/" ];then
mkdir $BACKUP_DIR/$i
fi
if [ ! -d "$BACKUP_DIR/$i/daily" ];then
mkdir $BACKUP_DIR/$i/daily
fi
if [ ! -d "$BACKUP_DIR/$i/monthly" ];then
mkdir $BACKUP_DIR/$i/monthly
fi

# On Test servers we don't want dump files with date and time information
if [ $TESTSYSTEM == "yes" ];then
DAILYFILENAME="daily_$i"
MONTHLYFILENAME="monthly_$i"
ALLDATABASESFILENAME="all-databases"
else
DAILYFILENAME="daily_${i}_$TODAY"
MONTHLYFILENAME="monthly_${i}_$MONTHNAME"
ALLDATABASESFILENAME="all-databases_$TODAY"
fi

# backup for each weekday (Mon, Tue, ...)
nice -n 10 /usr/pgsql-9.1/bin/pg_dump --column-inserts $i > $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql
nice -n 10 tar cjf $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql.bz2 -C / $BACKUP_DIR2/$i/daily/"$DAILYFILENAME".sql
rm -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql

# dump with copy statements
nice -n 10 /usr/pgsql-9.1/bin/pg_dump $i > $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql
nice -n 10 tar cjf $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql.bz2 -C / $BACKUP_DIR2/$i/daily/"$DAILYFILENAME"_copy.sql
rm -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql

# monthly backup (Jan, Feb...)
if [ "$DAYINMONTH" -eq 10 ]; then
cp -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql.bz2 $BACKUP_DIR/$i/monthly/"$MONTHLYFILENAME".sql.bz2
cp -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql.bz2 $BACKUP_DIR/$i/monthly/"$MONTHLYFILENAME"_copy.sql.bz2
fi

# Year backup
# coming after a while

done

## Full backup
nice -n 10 /usr/pgsql-9.1/bin/pg_dumpall --column-inserts > $BACKUP_DIR/"$ALLDATABASESFILENAME".sql
nice -n 10 /usr/pgsql-9.1/bin/pg_dumpall > $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql
nice -n 10 tar cjf $BACKUP_DIR/"$ALLDATABASESFILENAME".sql.bz2 -C / var/backup/postgresql_dumps/"$ALLDATABASESFILENAME".sql
nice -n 10 tar cjf $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql.bz2 -C / var/backup/postgresql_dumps/"$ALLDATABASESFILENAME"_copy.sql
rm -f $BACKUP_DIR/"$ALLDATABASESFILENAME".sql
rm -f $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql

## Vacuuming (is it really necessary for PG 9.1? Don't think so...)
#nice -n 10 vacuumdb -a -f -z -q

finish_time=$(date +%s)
echo "Time duration for pg_dump script at $HOSTNAME: $((finish_time - start_time)) secs." | mail $MAILLIST