Using date and time in Bash scripts

Date and time are useful in scripts. You might want your script to check whether today is Monday, or whether it last ran 2 days ago. Maybe you need to save a file with the current date and time in its name. Here are some variables that I like to put in my bash scripts:

[bash]
# Abbreviated day name (e.g. Mon), parsed from the default date output
TODAY=$(date | awk '{ print $1 }')
# Full day name (e.g. Monday)
TODAY=$(date '+%A')
# Day of the week as a number (1 = Monday ... 7 = Sunday)
NUMBER_OF_DAY_IN_WEEK=$(date +%u)
# Abbreviated month name (e.g. Feb)
MONTH=$(date | awk '{ print $2 }')
MONTHNAME=$(date +%b --date '0 month')
# Day of the month (e.g. 7)
DAYINMONTH=$(date | awk '{ print $3 }')
# Year (e.g. 2014)
YEAR=$(date | awk '{ print $6 }')
# ISO week number
WEEKNUMBER=$(date +"%V")
[/bash]

[bash]
# Date to unix timestamp
date2stamp () {
    date --utc --date "$1" +%s
}
# Unix timestamp to date
stamp2date (){
    date --utc --date "1970-01-01 $1 sec" "+%Y-%m-%d %T"
}
# Difference between two dates, in seconds (-s), minutes (-m), hours (-h) or days (-d, the default)
dateDiff (){
    case $1 in
        -s) sec=1; shift;;
        -m) sec=60; shift;;
        -h) sec=3600; shift;;
        -d) sec=86400; shift;;
        *) sec=86400;;
    esac
    dte1=$(date2stamp "$1")
    dte2=$(date2stamp "$2")
    diffSec=$((dte2-dte1))
    if ((diffSec < 0)); then abs=-1; else abs=1; fi
    echo $((diffSec/sec*abs))
}
[/bash]
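
A quick sketch of how these can be used, for example to check whether today is Monday or how far apart two dates are:

[bash]
# Is it Monday today?
if [ "$(date +%u)" -eq 1 ]; then
    echo "It is Monday"
fi

date2stamp "2014-02-07"                            # 1391731200
stamp2date 1391731200                              # 2014-02-07 00:00:00
dateDiff -d "2014-02-01" "2014-02-07"              # 6
dateDiff -h "2014-02-07 00:00" "2014-02-07 12:00"  # 12
[/bash]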

Here is some useful reading:
http://www.cyberciti.biz/faq/linux-unix-formatting-dates-for-display/

PostgreSQL 9.1 backup dump bash script

We wrote an improved PostgreSQL dump bash script for PostgreSQL version 9.1. This one saves each dump file with the name: database_name_DAYNAME.sql.bz2
This way we only keep 7 daily backups at any time, because each file is overwritten after seven days. Since our backup system (TSM) keeps 7 versions of each file, we have 49 versions at any time. That means we can go 49 days back in time to restore a certain dump file. In addition, the script saves a monthly dump file with the name: database_name_MONTHNAME.sql.bz2.
You can also set the variable TESTSYSTEM to 'yes'; the script will then only dump to one fixed filename: database_name_daily.sql.bz2. This can be useful on test servers, where you might not be interested in historical backups.

Here it is:

#!/bin/bash

## This script dumps all the databases in a Postgres 9.1 server, localhost
## 2 dump files are made for each database: one with INSERTs, the other without (plain COPY).

## TODO
# - implement a 'if system is Test' option to minimize number of dump files UNDER PROGRESS
# - use functions instead?
# - some kind of integration with Jenkins?
# - fix the 2 strange '|' that appears in the DATABASE list FIXED?
# - Add timer so we can optimize speed of the script execution time
# - enable use of the logfile LOGFILE. Could be nice to log what this script is/has been doing and when.
# - number of days to keep a dump file could be a parameter to this script
# - enable print of name of the script, where the script is run (hostname and directory). Makes it easy to find the script on a server
# - would be nice to add a incremental feature for this script. Then one can dump files several times a day, without worrying about space problems on the harddisk DIFFICULT?
## TODO END

# Timer
start_time=$(date +%s)

# Variables
LOGFILE="/var/lib/pgsql/9.1/data/pg_log/pgsql_dump.log"
BACKUP_DIR="/var/backup/postgresql_dumps"
BACKUP_DIR2="var/backup/postgresql_dumps" # same path without the leading slash, used with 'tar -C /' below
HOSTNAME=`hostname`
MAILLIST="someone att somewhere" # should be edited
# Is this a test system? Set TESTSYSTEM to 'yes' in order to remove date and time information from dumpfile names (in order to minimize number of dumpfiles).
TESTSYSTEM="no"
TODAY=$(date|awk '{ print $1 }')   # abbreviated day name, e.g. Mon
MONTH=$(date|awk '{ print $2 }')   # abbreviated month name, e.g. Feb
MONTHNAME=`date +%b --date '0 month'`
DAYINMONTH=$(date|awk '{ print $3 }')
YEAR=$(date | awk '{ print $6 }')

# Only postgres can run this script
if [ `whoami` != "postgres" ]; then
echo "pgsql_dump tried to run, but user is not postgres!" >> $LOGFILE
echo "You are not postgres, can not run."
echo "Try: su -c ./pgsql_dump.sh postgres"
exit 1
fi

# Check if there are any recent backup files. If not, something is wrong!
if [ `find $BACKUP_DIR -type f -name '*.sql.bz2' -mtime -2 | wc -l` -eq 0 ]; then
echo "There are no pgsql dumps for the last 2 days at $HOSTNAME. Something is wrong!" | mail -s "[PGSQLDUMP ERROR] $HOSTNAME" $MAILLIST
fi

# logfile might be nice to have (or maybe Jenkins is the way to go?)
if [ ! -e $LOGFILE ]; then
touch $LOGFILE
fi

if [ "$TESTSYSTEM" = "yes" ]; then
#DATABASES=`psql -q -c "\l" | sed -n 4,/\eof/p | grep -v rows | grep -v template0 | awk {'print $1}' | sed 's/^://g' | sed -e '/^$/d' | grep -v '|'`
# For testing purposes
DATABASES="database-1
database-2"
else
DATABASES=`psql -q -c "\l" | sed -n 4,/\eof/p | grep -v rows | grep -v template0 | awk {'print $1}' | sed 's/^://g' | sed -e '/^$/d' | grep -v '|'`
fi

for i in $DATABASES; do

## Create folders for each database if they don't exist
if [ ! -d "$BACKUP_DIR/$i/" ];then
mkdir $BACKUP_DIR/$i
fi
if [ ! -d "$BACKUP_DIR/$i/daily" ];then
mkdir $BACKUP_DIR/$i/daily
fi
if [ ! -d "$BACKUP_DIR/$i/monthly" ];then
mkdir $BACKUP_DIR/$i/monthly
fi

# On Test servers we don't want dump files with date and time information
if [ "$TESTSYSTEM" = "yes" ]; then
DAILYFILENAME="daily_$i"
MONTHLYFILENAME="monthly_$i"
ALLDATABASESFILENAME="all-databases"
else
DAILYFILENAME="daily_$i_$TODAY"
MONTHLYFILENAME="monthly_$i_$MONTHNAME"
ALLDATABASESFILENAME="all-databases_$TODAY"
fi

# backup for each weekday (Mon, Tue, ...)
nice -n 10 /usr/pgsql-9.1/bin/pg_dump --column-inserts $i > $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql
nice -n 10 tar cjf $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql.bz2 -C / $BACKUP_DIR2/$i/daily/"$DAILYFILENAME".sql
rm -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql

# dump with copy statements
nice -n 10 /usr/pgsql-9.1/bin/pg_dump $i > $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql
nice -n 10 tar cjf $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql.bz2 -C / $BACKUP_DIR2/$i/daily/"$DAILYFILENAME"_copy.sql
rm -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql

# monthly backup (Jan, Feb...), taken on day 10 of each month
if [ "$DAYINMONTH" -eq 10 ]; then
cp -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME".sql.bz2 $BACKUP_DIR/$i/monthly/"$MONTHLYFILENAME".sql.bz2
cp -f $BACKUP_DIR/$i/daily/"$DAILYFILENAME"_copy.sql.bz2 $BACKUP_DIR/$i/monthly/"$MONTHLYFILENAME"_copy.sql.bz2
fi

# Year backup
# coming after a while

done

## Full backup
nice -n 10 /usr/pgsql-9.1/bin/pg_dumpall --column-inserts > $BACKUP_DIR/"$ALLDATABASESFILENAME".sql
nice -n 10 /usr/pgsql-9.1/bin/pg_dumpall > $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql
nice -n 10 tar cjf $BACKUP_DIR/"$ALLDATABASESFILENAME".sql.bz2 -C / var/backup/postgresql_dumps/"$ALLDATABASESFILENAME".sql
nice -n 10 tar cjf $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql.bz2 -C / var/backup/postgresql_dumps/"$ALLDATABASESFILENAME"_copy.sql
rm -f $BACKUP_DIR/"$ALLDATABASESFILENAME".sql
rm -f $BACKUP_DIR/"$ALLDATABASESFILENAME"_copy.sql

## Vacuuming (is it really necessary for PG 9.1? Don't think so...)
#nice -n 10 vacuumdb -a -f -z -q

finish_time=$(date +%s)
echo "Time duration for pg_dump script at $HOSTNAME: $((finish_time - start_time)) secs." | mail $MAILLIST

MySQL SQL in bash one-liner

If you just need a quick way to get some data out of a MySQL database from your shell (bash), you can do something like this in one line:

mysql -h your.server.edu -u db_username -p`cat /path/to/your/homedir/secretpasswordfile` -e "use databasename; SELECT tablename.columnname FROM tablename where id like '421111' and something like '1' and option like '23';" > /tmp/datayouwant.txt; while read i; do echo "$i"; done < /tmp/datayouwant.txt | sort | uniq

If you don't like to scroll:
-bash-3.2$ mysql -h your.server.edu -u db_username -p`cat /path/to/your/homedir/secretpasswordfile` -e "use databasename; SELECT tablename.columnname FROM tablename where id like '421111' and something like '1' and option like '23';" > /tmp/datayouwant.txt; while read i; do echo "$i"; done < /tmp/datayouwant.txt | sort | uniq
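
The same query is easier to read, and skips the temporary file, if you spread it over a few lines in a small script (host, database, table and column names are the same placeholders as above):

[bash]
#!/bin/bash
# run the query and print a sorted, de-duplicated list of the values
mysql -h your.server.edu -u db_username -p`cat /path/to/your/homedir/secretpasswordfile` \
    -e "use databasename;
        SELECT tablename.columnname
        FROM tablename
        WHERE id like '421111' and something like '1' and option like '23';" \
    | sort | uniq
[/bash]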

On my server this gives a list of words/numbers (or whatever you have in that column of the database), which you might want to use further in another script or command:

Dikult
Drupal
DSpace
Mediawiki
Open Journal Systems
Piwik
Postgresql og Mysql
Redhat Enterprise Linux 6 (RHEL6)
Redmine
Solr
Webmail (RoundCubemail)
Wordpress
Xibo

Check HTTP headers with wget

If you want to see the HTTP headers from your shell, you can do it with:

wget --no-check-certificate --server-response --spider https://yourwebsite.something

The result would be something like:

[bash]
Spider mode enabled. Check if remote file exists.
--2014-02-07 11:13:33-- https://yourwebsite.something/something
Resolving yourwebsite.something... 129.177.5.226
Connecting to yourwebsite.something|129.177.5.226|:443... connected.
HTTP request sent, awaiting response...
HTTP/1.1 301 Moved Permanently
Date: Fri, 07 Feb 2014 10:13:33 GMT
Server: Apache
Location: https://yourwebsite.something/something/
Vary: Accept-Encoding
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
Location: https://yourwebsite.something/something/ [following]
Spider mode enabled. Check if remote file exists.
--2014-02-07 11:13:33-- https://yourwebsite.something/something/
Connecting to yourwebsite.something|129.177.5.226|:443... connected.
HTTP request sent, awaiting response...
HTTP/1.1 301 Moved Permanently
Date: Fri, 07 Feb 2014 10:13:33 GMT
Server: Apache
X-Powered-By: PHP/5.3.3
X-Content-Type-Options: nosniff
Vary: Accept-Encoding,Cookie,User-Agent
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: private, must-revalidate, max-age=0
Last-Modified: Fri, 07 Feb 2014 10:13:33 GMT
Location: http://yourwebsite.something/something/index.php/Hovudside
Connection: keep-alive, Keep-Alive
Keep-Alive: timeout=15, max=100
Content-Type: text/html; charset=utf-8
Location: http://yourwebsite.something/something/index.php/Hovudside [following]
Spider mode enabled. Check if remote file exists.
--2014-02-07 11:13:33-- http://yourwebsite.something/something/index.php/Hovudside
Connecting to yourwebsite.something|129.177.5.226|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Fri, 07 Feb 2014 10:13:33 GMT
Server: Apache
X-Powered-By: PHP/5.3.3
X-Content-Type-Options: nosniff
Content-language: nn
Vary: Accept-Encoding,Cookie,User-Agent
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: private, must-revalidate, max-age=0
Last-Modified: Tue, 14 Jan 2014 11:52:09 GMT
Connection: keep-alive, Keep-Alive
Keep-Alive: timeout=15, max=100
Content-Type: text/html; charset=UTF-8
Length: unspecified [text/html]
Remote file exists and could contain further links,
but recursion is disabled -- not retrieving.
[/bash]
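
curl can do a similar check, if you prefer it (assuming curl is installed): -I asks for the headers only, -L follows the redirects, -k skips certificate checking and -s silences the progress output:

curl -skIL https://yourwebsite.something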

Finding spam users

We had a lot of spam users in our multisite WordPress system. This was because we had self-registration enabled for a period. Not a smart thing to do...

Anyway, I wrote a bash script to find which user IDs in the MySQL database could potentially belong to spam users. With this list of IDs I would run another SQL statement directly, setting the "spam" field to "1". This makes sure the user cannot log in, and after a month or so the user can be deleted.

As you can see from one of the SQL statements, "%uib.no" matches the trusted email addresses from our organization.
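
The follow-up update could then look something like this (a sketch; the IDs are placeholders for the ones the script reports, and wp_users in a multisite installation has a spam column):

mysql -u root -p`cat /root/mysql` -e "use blog; UPDATE wp_users SET spam = 1 WHERE ID IN (123, 456, 789);"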

[bash]
#!/bin/bash

# Which ones of all the users in the multisite blog system are spam users?
# If the user doesn't have any relations in users_blogg, most likely the user is a spam user.
# But the user can have a meta_value in wp_usermeta table like source_domain.
# These are valid users, and should not be marked as spam

# Find all user IDs
dbquery=$(mysql -u root -p`cat /root/mysql` -e "use blog; select ID from wp_users;")
array=($(for i in $dbquery; do echo $i; done))

# echo ${array[@]}

# for all the users, do:
for i in ${array[@]}
do
dbquery2=$(mysql -u root -p`cat /root/mysql` -e "use blog; SELECT wp_bp_user_blogs.id FROM wp_bp_user_blogs WHERE wp_bp_user_blogs.user_id = $i;")
array2=($(for j in $dbquery2; do echo $j; done))
if [ ${#array2[@]} -eq 0 ];then
dbquery3=$(mysql -u root -p`cat /root/mysql` -e "use blog; select user_email from wp_users WHERE ID = $i and user_email not like '%uib.no';")
dbquery4=$(mysql -u root -p`cat /root/mysql` -e "use blog; select meta_value from wp_usermeta WHERE user_id = $i and meta_key like 'source_domain';")
array4=($(for r in $dbquery4; do echo $r; done))
for n in ${array4[@]}
do
echo "User $i has a blog with name $n! Please don't delete this user."
done

array3=($(for k in $dbquery3; do echo $k; done))
for m in ${array3[@]}
do
echo "User $i with email $m is not connected to any blog and should be marked as a spam user!"
done
fi
done
[/bash]

WordPress Brute Force attack spring 2013

For those who are running a WordPress site now, in May 2013: you should know about this ongoing brute force attack against many WordPress sites around the world.
http://www.bbc.co.uk/news/technology-22152296

Here at the University of Bergen, we are also working on protecting our multisite WordPress installation.

How is this “attack” performed?

There is a so-called "botnet" involved, with around 90 000 computers in it.

The "attack" works like this: a compromised computer in the botnet sends a POST request to wp-login.php on a WordPress site, containing a guessed username and password. If the username and password are correct, the user is logged in and the attackers have access to your WordPress site. The posted username is most often "admin".

We installed something called 'fail2ban' on our server, which is a Python daemon that adds iptables (firewall) rules on the fly after parsing the access log.

This worked for us because we noticed that the botnet's brute force attack was using the same "user-agent" string in all wp-login.php attempts.

Now, I see the potential danger in revealing the "secret" of this attack, but let me give you some numbers from the entries in our Apache httpd log (access_log) since the 19th of May:

48514 "Mozilla/5.0 (Windows NT 6.1; rv:19.0) Gecko/20100101 Firefox/19.0"
5696 "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0"
2494 "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:18.0) Gecko/20100101 Firefox/18.0"

Here I have counted the number of requests per user-agent in our httpd log files.

As you can see, the attack is mainly based on one user-agent, the one with 48514 entries. Clearly it is reasonable to block clients that present this agent.
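
To produce such a count of user-agents for the wp-login.php POSTs yourself, something like this should work (assuming the standard Apache combined log format, where the user-agent is the sixth quote-delimited field):

grep 'POST /wp-login.php' /var/log/httpd/access_log | awk -F'"' '{ print $6 }' | sort | uniq -c | sort -rn | head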

Our blocking works like this: if a client presents the user-agent shown above, "Mozilla/5.0 (Windows NT 6.1; rv:19.0) Gecko/20100101 Firefox/19.0", and tries to make more than 3 wp-login.php calls within a certain time window, it is blocked, in our case for 12 hours. If the client makes similar attempts within an hour, the ban time is extended.

Our fail2ban jail.conf file has this section:


[apache-wp-login]
enabled = true
port = http,https
filter = apache-wp-login
action = iptables-multiport[name=APACHE-WP, port="http,https", protocol=tcp]
logpath = /var/log/httpd/access_log
maxretry = 3
findtime = 3600
bantime = 43200

So we decided to make a script to find which user-agents are involved in making a lot of POST requests to wp-login.php, and then compare them to "our" clients, the friendly ones.

From that we could make a simple failregex in fail2ban filter.d/apache-wp-login.conf:

[Definition]
failregex = ^<HOST> -.*] "POST /wp-login.php .* "Mozilla/5.0 [(]Windows NT 6.1; rv:19.0[)] Gecko/20100101 Firefox/19.0"$
            ^<HOST> -.*] "POST /wp-login.php .* "Mozilla/5.0 [(]Windows NT 6.1; WOW64; rv:18.0[)] Gecko/20100101 Firefox/18.0"$
ignoreregex =
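
The filter can be tested against the log before the jail is enabled, using fail2ban's bundled test tool, which reports how many log lines the failregex matches:

fail2ban-regex /var/log/httpd/access_log /etc/fail2ban/filter.d/apache-wp-login.conf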

These user-agent strings are taken from the Apache log file (access_log). Our idea is that if you have 5000 or more login attempts from the same user-agent, that user-agent could be banned.

In order to see that our ban was successful, we counted the number of DROP entries in iptables:

service iptables status | grep DROP | wc -l

We see the risk that the people behind this attack could easily start using a random user-agent for each login attempt against wp-login.php.
So far, it looks like they still use the same user-agent for all POSTs against wp-login.php.

Our next approach:
If the user-agents start changing rapidly and POSTs to wp-login.php still happen on a large scale, we will consider two-step user validation for our WordPress installation.

Patching two files

I had two files that both contained lines I needed. The final result I wanted was a file called LocalSettings.php that combined the "goodies" from the two other files. The "goodies" here are lines with variables containing specific details for a certain wiki installation.

So my first file was called: LocalSettings.php.original

The other: LocalSettings.php.upgraded

I wanted to get the new lines from LocalSettings.php.upgraded into LocalSettings.php.original.

First:

diff -Naur LocalSettings.php.original LocalSettings.php.upgraded > mypatch.file

Finally:

patch -p0 LocalSettings.php.original < mypatch.file
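
If you want to see what the patch would change before touching the file, GNU patch can do a trial run first:

patch --dry-run -p0 LocalSettings.php.original < mypatch.file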