Open Journal Systems missing email logs

I upgraded our Open Journal Systems installation from 2.2.2 to 2.4.2, and afterwards we saw that the email logs were gone. This was related to:

http://pkp.sfu.ca/support/forum/viewtopic.php?f=8&t=9140

My solution was this:
I set up a new PostgreSQL server in a virtual environment (a VMware test machine) and imported both the old and the new (upgraded) databases.

# Log in to Prod server
ssh prodserver
# Dump the related and upgraded database to a sql file
pg_dump -U postgres -f /tmp/ojs_new.sql ojs_new
# Copy the file to a Test server where you can play around
scp /tmp/ojs_new.sql username@my_vmware_test_server:/tmp/
# You also need a dump of the original 2.2.2 database copied over as ojs_old.sql
# (for example from the backup taken before the upgrade)
# Log in to your Test server
ssh my_vmware_test_server
# Become Root, the system super user
sudo su -
# Become Postgres user, so that you can do all Postgresql stuff
su - postgres
# Create a user named "ojs"
createuser ojs
# Create the databases
createdb -O ojs ojs_new
createdb -O ojs ojs_old
# Import the databases into PostgreSQL on the Test VMware server
psql -f /tmp/ojs_old.sql ojs_old
psql -f /tmp/ojs_new.sql ojs_new
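
To check that both imports look sane, a quick row count on the two log tables can help (a small sketch; the table names are the ones used in the script below):

psql -d ojs_old -c "SELECT count(*) FROM article_email_log" postgres
psql -d ojs_new -c "SELECT count(*) FROM email_log" postgres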

ojs_old: the original 2.2.2 database
ojs_new: the upgraded 2.4.2 one

In order to fix the assoc_id numbers, I used this bash script:

myscript.sh:

#!/bin/bash
# First, find all log_id's in the old database (ojs_old)
ORIGLIST=`psql -t -d ojs_old -c "SELECT log_id FROM article_email_log" postgres`;
for i in $ORIGLIST;do
  # Look up the article_id this log entry belonged to in the old 2.2.2 database
  OLD_ARTICLE_ID=`psql -t -d ojs_old -c "SELECT article_id FROM article_email_log WHERE log_id = '$i'" postgres`
  # Print an UPDATE statement for the production database (here called boap_journals)
  echo -e "psql -d boap_journals -c \"UPDATE email_log SET assoc_id = $OLD_ARTICLE_ID WHERE log_id = $i\" postgres"
done

The script writes its commands to STDOUT. I redirected the output to a file, copied that file to our Production server, and ran it there.
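
Each generated line looks roughly like this (the numbers are made up for illustration):

psql -d boap_journals -c "UPDATE email_log SET assoc_id = 345 WHERE log_id = 12" postgres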

ssh my_vmware_test_server
sudo su -
# Send the output of the script to a file
./myscript.sh > runme-on-prod-server.sh
# Copy the file to the Production server
scp runme-on-prod-server.sh username@prodserver:/tmp/
# Log in to the Production server
ssh prodserver
# Become root, be careful!
sudo su -
cd /tmp/
# First take a PG database dump, so you can restore if something goes terribly wrong!
# Become the Postgresql super user
su - postgres
pg_dumpall -f mybackup_just_in_case.sql
# Become root again
exit
# Run the script!
./runme-on-prod-server.sh
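
A quick spot check afterwards (run as the postgres user on the Production server) to see that the assoc_id values are back in place:

psql -d boap_journals -c "SELECT log_id, assoc_id FROM email_log ORDER BY log_id LIMIT 10" postgres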

issuu pdf sync – it works … but

[pdf issuu_pdf_id="140327143047-86dbdac2750a4fa883b13e0df8cb290b" layout="1" bgcolor="FFFFFF" allow_full_screen_="1" flip_timelaps="6000" ]

The plugin is installed from: http://wordpress.org/plugins/issuu-pdf-sync/

But it does not work on the iPad.

I guess it is Flash based, so we hope for a future release that will support the iPad, iPhone and other mobile devices…

Upgrade Moodle 2.2.1 to 2.6.2+

I tried to upgrade Moodle 2.2.1 to 2.6.2+ and got this error message:

Default exception handler: DDL sql execution error Debug: Data truncated for column 'institution' at row 999
ALTER TABLE mdl_user MODIFY COLUMN institution VARCHAR(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL DEFAULT '' after phone2
Error code: ddlexecuteerror

The solution was to run these SQL statements against the MySQL database:

update mdl_user set institution="" where institution is NULL;
update mdl_user set department="" where department is NULL;
update mdl_user set address="" where address is NULL;

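Before retrying the upgrade, a quick count of rows that still contain NULLs can confirm the cleanup worked (a sketch; the database name moodle and the user root are assumptions for your setup):

mysql -u root -p moodle -e "SELECT COUNT(*) FROM mdl_user WHERE institution IS NULL OR department IS NULL OR address IS NULL;"
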
This issue seems to be related to:

Using date and time in Bash scripts

Date and time are useful. You might want your script to check whether it is Monday today, or whether the script last ran 2 days ago. Maybe you need to save a file with the current date and time in its name. Here are some variables that I like to use in my bash scripts:

[bash]
TODAY=$(date | awk '{ print $1 }')    # abbreviated day name, e.g. "Mon"
TODAY=`date '+%A'`                    # full day name, e.g. "Monday"
NUMBER_OF_DAY_IN_WEEK=`date +%u`      # 1..7, Monday is 1
MONTH=$(date | awk '{ print $2 }')    # abbreviated month name, e.g. "Jan"
MONTHNAME=`date +%b --date '0 month'`
DAYINMONTH=$(date | awk '{ print $3 }')
YEAR=$(date | awk '{ print $6 }')     # depends on the default date output format
WEEKNUMBER=`date +"%V"`
[/bash]
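
For example, the check for Monday mentioned above could look like this (a small sketch using the variables above):

[bash]
TODAY=`date '+%A'`
if [ "$TODAY" = "Monday" ]; then
    echo "It is Monday, running the weekly job..."
fi
[/bash]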


[bash]
# Date to unix timestamp
date2stamp () {
    date --utc --date "$1" +%s
}
# Unix timestamp to date
stamp2date (){
    date --utc --date "1970-01-01 $1 sec" "+%Y-%m-%d %T"
}
# Difference between two dates, in seconds/minutes/hours/days
dateDiff (){
    case $1 in
        -s) sec=1; shift;;
        -m) sec=60; shift;;
        -h) sec=3600; shift;;
        -d) sec=86400; shift;;
        *) sec=86400;;
    esac
    dte1=$(date2stamp $1)
    dte2=$(date2stamp $2)
    diffSec=$((dte2-dte1))
    if ((diffSec < 0)); then abs=-1; else abs=1; fi
    echo $((diffSec/sec*abs))
}
[/bash]
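
And a couple of calls to show how the functions behave (the dates are just examples):

[bash]
date2stamp "2014-03-27"                  # prints the unix timestamp for that date
dateDiff -d "2014-01-01" "2014-03-01"    # prints 59, the number of days between the two dates
[/bash]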

Here is useful reading:
http://www.cyberciti.biz/faq/linux-unix-formatting-dates-for-display/

Postgresql 9.1 BACKUP DUMP BASH script

We wrote an improved PostgreSQL dump bash script for PostgreSQL version 9.1. It saves each dump file under a name like daily_database_name_DAYNAME.sql.bz2.
This way we only keep 7 daily backups at any time, because each file is overwritten after seven days. Since our backup system (TSM) keeps 7 versions of each file, we have 49 versions at any time, which means we can go up to 49 days back in time to restore a given dump file. In addition, the script saves a monthly dump file named monthly_database_name_MONTHNAME.sql.bz2.
You can also set the variable TESTSYSTEM to 'yes'; the script will then write to a fixed filename per database (daily_database_name.sql.bz2) instead of rotating on date. This can be useful on Test servers, where you are probably not interested in historical backups.
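
With a database called mydb (a made-up name), the layout on a normal day would look roughly like this:

/var/backup/postgresql_dumps/mydb/daily/daily_mydb_Mon.sql.bz2
/var/backup/postgresql_dumps/mydb/daily/daily_mydb_Mon_copy.sql.bz2
/var/backup/postgresql_dumps/mydb/monthly/monthly_mydb_Mar.sql.bz2
/var/backup/postgresql_dumps/all-databases_Mon.sql.bz2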

Here it is:

#!/bin/bash

## This script dumps all the databases in a Postgres 9.1 server (localhost).
## Two dump files are made for each database: one with INSERT statements, one with COPY statements.

## TODO
# - implement a 'if system is Test' option to minimize number of dump files UNDER PROGRESS
# - use functions instead?
# - some kind of integration with Jenkins?
# - fix the 2 strange '|' that appears in the DATABASE list FIXED?
# - Add timer so we can optimize speed of the script execution time
# - enable use of the logfile LOGFILE. Could be nice to log what this script is/has been doing and when.
# - number of days to keep a dump file could be a parameter to this script
# - enable print of name of the script, where the script is run (hostname and directory). Makes it easy to find the script on a server
# - would be nice to add a incremental feature for this script. Then one can dump files several times a day, without worrying about space problems on the harddisk DIFFICULT?
## TODO END

# Timer
start_time=$(date +%s)

# Variables
LOGFILE="/var/lib/pgsql/9.1/data/pg_log/pgsql_dump.log"
BACKUP_DIR="/var/backup/postgresql_dumps"
BACKUP_DIR2="var/backup/postgresql_dumps" # same path without the leading slash, used with 'tar -C /'
HOSTNAME=`hostname`
MAILLIST="someone att somewhere" # should be edited
# Is this a test system? Set TESTSYSTEM to 'yes' in order to remove date and time information from dumpfile names (in order to minimize number of dumpfiles).
TESTSYSTEM="no"
TODAY=$(date|awk '{ print $1 }')     # day name, e.g. "Mon"
MONTH=$(date|awk '{ print $2 }')     # month name, e.g. "Jan"
MONTHNAME=`date +%b --date '0 month'`
DAYINMONTH=$(date|awk '{ print $3 }')
YEAR=$(date | awk '{ print $6 }')

# Only postgres can run this script
if [ `whoami` != "postgres" ]; then
    echo "pgsql_dump tried to run, but user is not postgres!" >> $LOGFILE
    echo "You are not postgres, can not run."
    echo "Try: su -c ./pgsql_dump.sh postgres"
    exit;
fi

# Check if there any backup files. If not, something is wrong!
if [ `find $BACKUP_DIR -type f -name '*.sql.bz2' -mtime -2 | wc -l` -eq 0 ]; then
    echo "There are no pgsql dumps for the last 2 days at $HOSTNAME. Something is wrong!" | mail -s "[PGSQLDUMP ERROR] $HOSTNAME" $MAILLIST
fi

# logfile might be nice to have (or maybe Jenkins is the way to go?)
if [ ! -e $LOGFILE ]; then
    touch $LOGFILE
fi

if [ "$TESTSYSTEM" = "yes" ]; then
    # For testing purposes, use a fixed list of databases
    DATABASES="database-1
database-2"
else
    DATABASES=`psql -q -c "\l" | sed -n 4,/\eof/p | grep -v rows | grep -v template0 | awk {'print $1}' | sed 's/^://g' | sed -e '/^$/d' | grep -v '|'`
fi

for i in $DATABASES; do

## Create folders for each database if they don't exist
if [ ! -d "$BACKUP_DIR/$i/" ]; then
    mkdir $BACKUP_DIR/$i
fi
if [ ! -d "$BACKUP_DIR/$i/daily" ]; then
    mkdir $BACKUP_DIR/$i/daily
fi
if [ ! -d "$BACKUP_DIR/$i/monthly" ]; then
    mkdir $BACKUP_DIR/$i/monthly
fi

# On Test servers we don't want dump files with date and time information
if [ "$TESTSYSTEM" = "yes" ]; then
    DAILYFILENAME="daily_$i"
    MONTHLYFILENAME="monthly_$i"
    ALLDATABASESFILENAME="all-databases"
else
    DAILYFILENAME="daily_${i}_$TODAY"
    MONTHLYFILENAME="monthly_${i}_$MONTHNAME"
    ALLDATABASESFILENAME="all-databases_$TODAY"
fi

# backup for each weekday (Mon, Tue, ...)
nice -n 10 /usr/pgsql-9.1/bin/pg_dump --column-inserts $i > $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql
nice -n 10 tar cjf $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql.bz2 -C / $BACKUP_DIR2/$i/daily/"$DAILYFILENAME".sql
rm -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql

# dump with copy statements
nice -n 10 /usr/pgsql-9.1/bin/pg_dump $i > $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql
nice -n 10 tar cjf $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql.bz2 -C / $BACKUP_DIR2/$i/daily/"$DAILYFILENAME"_copy.sql
rm -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql

# monthly backup (Jan, Feb...)
if [ "$DAYINMONTH" -eq 10 ]; then
    cp -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql.bz2 $BACKUP_DIR/$i/monthly/"$MONTHLYFILENAME".sql.bz2
    cp -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql.bz2 $BACKUP_DIR/$i/monthly/"$MONTHLYFILENAME"_copy.sql.bz2
fi

# Year backup
# coming after a while

done

## Full backup
nice -n 10 /usr/pgsql-9.1/bin/pg_dumpall --column-inserts > $BACKUP_DIR/"$ALLDATABASESFILENAME".sql
nice -n 10 /usr/pgsql-9.1/bin/pg_dumpall > $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql
nice -n 10 tar cjf $BACKUP_DIR/"$ALLDATABASESFILENAME".sql.bz2 -C / $BACKUP_DIR2/"$ALLDATABASESFILENAME".sql
nice -n 10 tar cjf $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql.bz2 -C / $BACKUP_DIR2/"$ALLDATABASESFILENAME"_copy.sql
rm -f $BACKUP_DIR/"$ALLDATABASESFILENAME".sql
rm -f $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql

## Vacuuming (is it really necessary for PG 9.1? Don't think so...)
#nice -n 10 vacuumdb -a -f -z -q

finish_time=$(date +%s)
echo "Time duration for pg_dump script at $HOSTNAME: $((finish_time - start_time)) secs." | mail $MAILLIST
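
To run this automatically, a cron entry for the postgres user could look roughly like this (the path to the script is an assumption):

# crontab -e as the postgres user: run the dump every night at 02:30
30 2 * * * /var/lib/pgsql/pgsql_dump.sh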