Creating Snapshots of a backup using LVM snapshot

Normally I used to have a backup-retention script in place that created a tar-ball of the backup data (using Herakles). But that way I was not able to keep a retention of longer than 3 days :-(

So I had to look into another solution. I could add a new hard drive to the server… but there should be something else possible. So I ended up using LVM snapshots. I created a volume group of about 100GB, and in that volume group a logical volume of about 30GB, which is enough (and if not, we can ‘grow’ the filesystem thanks to LVM :-) )
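
For reference, the one-time setup could look something like the sketch below. It is not executed here (these commands need root and a real disk), and the device name /dev/sdb1 is purely an assumption; the volume group name, logical volume name, and mount point come from the script further down. The last (commented) line shows the ‘grow’ option mentioned above.

```shell
# Write the setup out as a script and only syntax-check it; running it
# for real requires root and an actual spare disk (/dev/sdb1 is assumed).
cat > /tmp/lvm-setup.sh <<'EOF'
pvcreate /dev/sdb1                    # initialize the disk for LVM
vgcreate vol_backup /dev/sdb1         # the ~100GB volume group
lvcreate -L 30G -n lvm0 vol_backup    # the 30GB logical volume
mkfs.ext3 /dev/vol_backup/lvm0
mount /dev/vol_backup/lvm0 /mnt/data
# and if 30GB turns out to be too small:
# lvextend -L +10G /dev/vol_backup/lvm0 && resize2fs /dev/vol_backup/lvm0
EOF
bash -n /tmp/lvm-setup.sh && echo 'syntax OK'
```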

With all that done, I created a script located in /root/scripts/lvm-snapshot. This script runs every midnight and creates a snapshot.

#!/bin/bash
#
# Create LVM Snapshots
#
#
#---------------------------------------------------------------------------
CURRENT_SNAPNAME="snap-"$(date "+%Y%m%d%H%M%S")
VOLUME2SNAPSHOT="/dev/vol_backup/lvm0"
LVMSNAPSHOTCMD="/usr/sbin/lvcreate -L 2G -s -n $CURRENT_SNAPNAME $VOLUME2SNAPSHOT"
LINE="---------------------------------------------------------------------------"

echo $LINE
df -h /mnt/data
echo $LINE
$LVMSNAPSHOTCMD 2> /dev/null
#---------------------------------------------------------------------------
SNAPSHOT_RETENTION=15
CURRENT_SNAPSHOT_COUNT=$(lvdisplay | grep "^  LV Name                /dev/vol_backup/snap" | sort | awk '{ print $3 }' | wc -l)

OVERFLOW=$(echo $CURRENT_SNAPSHOT_COUNT - $SNAPSHOT_RETENTION | bc)
if [ $OVERFLOW -gt 0 ];
then
        echo $LINE
        for files in $(lvdisplay | grep "^  LV Name                /dev/vol_backup/snap" | sort | awk '{ print $3 }' | head -n$OVERFLOW);
        do
                 /usr/sbin/lvremove -f $files 2> /dev/null
        done
fi
#---------------------------------------------------------------------------
echo $LINE
/usr/sbin/vgdisplay vol_backup
echo $LINE
/usr/sbin/lvdisplay $VOLUME2SNAPSHOT

And the crontab entry is:

# crontab -l
0 0 * * * /root/scripts/lvm-snapshot

Import private key and (signed) certificate into Java keystore (JKS)

Last Monday, I had to ‘secure’ the smcwebserver from Sun (or should I say Oracle?) that is used by ARCo. But I ran into a few issues:

  1. My lack of knowledge about Java;
  2. Keytool doesn’t allow you to import keys generated by tools like openssl :-(

But… I was able to handle them both, and now I have an smcwebserver (which uses Java keystores) running with a key that was generated by openssl and a certificate signed by our enterprise CA.

Therefore I had to do some Java ‘hacking’. After spending some hours on Google searches, I landed on a posting on the website of ‘Agent Bob‘. He has a Java program that allows you to ‘import’ keys and certificates that were generated outside keytool :-)

However, I had to perform a minor modification on the Java code, to set the password of the new JKS to ‘changeit‘, because that is the password smcwebserver uses when it opens the keystore. So you need to make sure that line 87 is:

String keypass = "changeit";

For your convenience you can download the modified version here.

Now, compile the Java class with the command (please note, I’m not a Java specialist, so something else may work as well… but this worked for me ;-) ):
$ javac ImportKey.java

Having this done, you must make sure your key file and (signed) certificate are in DER format. If they are not, you can convert them using the following commands:
$ openssl pkcs8 -topk8
               -nocrypt
               -in server.key
               -out server.key.der
               -outform der

$ openssl x509 -in server.crt
               -out server.crt.der
               -outform der
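
To make sure the two conversions behave as expected, you can do a quick end-to-end run with a throwaway self-signed key/cert pair (the names under /tmp and the CN=demo subject are just for illustration):

```shell
# Generate a throwaway key and self-signed certificate to test with
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
        -keyout /tmp/server.key -out /tmp/server.crt 2>/dev/null
# Convert the key to unencrypted PKCS#8 in DER format
openssl pkcs8 -topk8 -nocrypt -in /tmp/server.key \
        -out /tmp/server.key.der -outform der
# Convert the certificate to DER format
openssl x509 -in /tmp/server.crt -out /tmp/server.crt.der -outform der
# Read the DER certificate back; the subject should show CN=demo
openssl x509 -inform der -in /tmp/server.crt.der -noout -subject
```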

Now we can import the key and certificate with the Java program:

$ java ImportKey server.key.der server.crt.der webconsole

And last but not least, put the keystore in place (and of course we make sure we have a backup of the old one):

# cp /var/opt/webconsole/domains/console/conf/keystore.jks{,.backup}
# cp $HOME/keystore.ImportKey /var/opt/webconsole/domains/console/conf/keystore.jks

Now we have to restart the smcwebserver:

# smcwebserver stop
# smcwebserver start

That’s all :-)

Testing a kernel and initrd with qemu

In my previous post I wrote how to add a module to the initrd.img file. Some testing might be nice, though. This testing can be done using qemu. So for example:

# qemu-system-x86_64 -kernel /scratch/blah/isolinux/vmlinuz -initrd /tmp/initrd.img-mod…

Adding additional/new modules to the initial ramdisk on Linux.

Recently I needed to add support for a new network interface to an initial ramdisk (initrd) on Linux (RHEL 5u3).

After some h4ck1ng and a number of hours on Google, I was able to add the module to the initrd. :-)

The steps I did are:

Extract initrd

  1. Get an original initrd.img from a boot ISO and put it into /tmp/initrd.img-original
  2. Create a temp environment where we can extract the initrd:

    $ mkdir -p /scratch/initrd-mod/{initrd,modules}

  3. Extract the /tmp/initrd.img-original to the temp-environment:

    $ cd /scratch/initrd-mod/initrd
    $ zcat /tmp/initrd.img-original | cpio -i

Add the module to initrd.img

Extract the modules.cgz file

The modules (.ko files) are located in the (container) modules/modules.cgz in the initrd.

Now you need to extract the modules.cgz file:

$ cd /scratch/initrd-mod/modules
$ zcat /scratch/initrd-mod/initrd/modules/modules.cgz | cpio -idvm

Add the module

Now you have to make sure you have a module compiled for the right kernel version and architecture. Copy the new .ko file into the extracted modules.cgz tree:

$ cp /tmp/new-module.ko /scratch/initrd-mod/modules/{VERSION}/{ARCH}

So in my case, with RHEL5u3, the location is:

$ cp /tmp/igb.ko /scratch/initrd-mod/modules/2.6.18-128.el5/x86_64/

Repack the modules.cgz

Now we need to repack the modules.cgz file:

$ cd /scratch/initrd-mod/modules
$ find . -type f | cpio -o -H crc | gzip -n9 > /scratch/initrd-mod/initrd/modules/modules.cgz

Modify the modules.alias file

Now you need to modify the modules.alias file in order to get the module loaded properly.

The aliases can be found using modinfo:

$ /sbin/modinfo /tmp/igb.ko | grep ^alias
alias: pci:v00008086d000010D6sv*sd*bc*sc*i*
alias: pci:v00008086d000010A9sv*sd*bc*sc*i*
alias: pci:v00008086d000010A7sv*sd*bc*sc*i*
alias: pci:v00008086d000010E8sv*sd*bc*sc*i*
alias: pci:v00008086d0000150Dsv*sd*bc*sc*i*
alias: pci:v00008086d000010E7sv*sd*bc*sc*i*
alias: pci:v00008086d000010E6sv*sd*bc*sc*i*
alias: pci:v00008086d0000150Asv*sd*bc*sc*i*
alias: pci:v00008086d000010C9sv*sd*bc*sc*i*

The correct entries can be created using the following one-liner (in this case for the igb module):

$ /sbin/modinfo /tmp/igb.ko | grep ^alias | awk '{ print "alias " $2 " igb" }'
alias pci:v00008086d000010D6sv*sd*bc*sc*i* igb
alias pci:v00008086d000010A9sv*sd*bc*sc*i* igb
alias pci:v00008086d000010A7sv*sd*bc*sc*i* igb
alias pci:v00008086d000010E8sv*sd*bc*sc*i* igb
alias pci:v00008086d0000150Dsv*sd*bc*sc*i* igb
alias pci:v00008086d000010E7sv*sd*bc*sc*i* igb
alias pci:v00008086d000010E6sv*sd*bc*sc*i* igb
alias pci:v00008086d0000150Asv*sd*bc*sc*i* igb
alias pci:v00008086d000010C9sv*sd*bc*sc*i* igb

Make sure you remove duplicates in the modules.alias file.
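
Removing the duplicates can be done with a small awk filter that keeps the first occurrence of each line and preserves the original order (a plain sort -u would reorder the file). The demo below runs on a tiny sample file; on the real file you would do awk '!seen[$0]++' modules.alias > modules.alias.new and move it back.

```shell
# Build a small sample alias file with one duplicate line
printf 'alias a igb\nalias b igb\nalias a igb\n' > /tmp/modules.alias.demo
# Print each line only the first time it is seen, keeping the order
awk '!seen[$0]++' /tmp/modules.alias.demo
# prints:
# alias a igb
# alias b igb
```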

Repack the initrd.img

Now it’s time to repack the initrd.img file with the new module:

$ cd /scratch/initrd-mod/initrd
$ find ./ | cpio -H newc -o | gzip -n9 > /tmp/initrd.img-modded

And put the /tmp/initrd.img-modded onto your boot disk.

The growth statistics of my (private) mailbox

Yesterday, I decided to reorganize my ‘Archive’ folder of my private mailbox.

My policy is: don’t remove e-mail, except if it’s spam :-D

And I noticed the number of e-mails per year in the archive keeps growing:

Year   Number of mails
2003               133
2004               530
2005               706
2006               865
2007              1869
2008              2335
2009              2832

And a nice graph of it:

Mozilla Labs Weave

In one of the last Linux Magazine issues, there was an article with the title “Untangling the Web with Mozilla Weave“. I really recognized the issue of having several Firefox instances with their own bookmarks/tabs/et cetera.

So I thought… let’s give it a try, and so far… it works fine for me. I’ve now set up Weave on my Linux laptop and my Linux workstation at home. The upcoming week I will set up Weave on my Linux workstation at work and on the Portable Apps Firefox on my, ahum, Windows Vista workstation.

Load Grid Engine accounting file into MySQL

Recently I needed to create a report about the utilization of an HPC cluster that uses Grid Engine, but we didn’t have ARCo up and running yet for that cluster :-(

So I dug into my brain on how to load data from a “raw” format into a database… it’s something I did when I worked for PricewaterhouseCoopers Advisory, but then with financial data.


First you need to create a database within MySQL:

mysql> create database ge_accounting;

Then we create a table containing the accounting information, so we create a file named (for example) create-tables.sql:

create table ge_jobs
(
ge_qname char(30) not null,
ge_hostname char(30) not null,
ge_group char(10) not null,
ge_owner char(10) not null,
ge_job_name char(255) not null,
ge_job_number int unsigned not null primary key,
ge_account char(30) not null,
ge_account_prio int unsigned not null,

tmp_submission_time int unsigned not null,
tmp_start_time int unsigned not null,
tmp_end_time int unsigned not null,

ge_failed int unsigned not null,
ge_exit_status int unsigned not null,
ge_ru_wallclock int unsigned not null,
ge_ru_utime int unsigned not null,
ge_ru_stime int unsigned not null,
ge_ru_maxrss int unsigned not null,
ge_ru_ixrss int unsigned not null,
ge_ru_ismrss int unsigned not null,
ge_ru_idrss int unsigned not null,
ge_ru_isrss int unsigned not null,
ge_ru_minflt int unsigned not null,
ge_ru_majflt int unsigned not null,
ge_ru_nswap int unsigned not null,
ge_ru_inblock int unsigned not null,
ge_ru_oublock int unsigned not null,
ge_ru_msgsnd int unsigned not null,
ge_ru_msgrcv int unsigned not null,
ge_ru_nsignals int unsigned not null,
ge_ru_nvcsw int unsigned not null,
ge_ru_nivcsw int unsigned not null,
ge_project char(30) not null,
ge_department char(30) not null,
ge_granted_pe char(30),
ge_slots int unsigned not null,
ge_task_number int unsigned not null,
ge_cpu int unsigned not null,
ge_mem int unsigned not null,
ge_io int unsigned not null,
ge_category char(255),
ge_iow int unsigned not null,
ge_pe_taskid char(30),
ge_maxvmem int unsigned not null,
ge_arid int unsigned not null,

tmp_ar_submission_time int unsigned not null,

ge_submission_time timestamp not null,
ge_start_time timestamp not null,
ge_end_time timestamp not null,
ge_ar_submission_time timestamp not null

);

Now use mysql to create the table:

$ mysql -u root -p ge_accounting < create-tables.sql

Now we can load the data into the database. For this example $SGE_ROOT is set to /apps/ge and the $SGE_CELL is set to default.
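
Before loading, it is worth a quick sanity check that every record in the accounting file really has 45 colon-separated fields, which is my count of the file-backed columns in the table above (older Grid Engine releases have fewer fields, so adjust the number if needed). The demo below uses a synthetic record; point the awk line at the real accounting file instead:

```shell
# Build one dummy record with exactly 45 colon-separated fields
sample=/tmp/accounting.sample
printf 'f%d:' $(seq 1 44) > "$sample"
printf 'f45\n' >> "$sample"
# Report any non-comment line whose field count differs from 45
# (no output means the file is clean)
awk -F: '!/^#/ && NF != 45 { printf "line %d has %d fields\n", NR, NF }' "$sample"
```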

$ mysql -u root -p ge_accounting

mysql> LOAD DATA INFILE '/apps/ge/default/accounting/accounting'
REPLACE
INTO TABLE ge_jobs
FIELDS TERMINATED BY ':'
IGNORE 4 LINES;

And now we have to convert the epoch timestamps into proper timestamps using the following query:

mysql> UPDATE ge_jobs
SET ge_submission_time = (SELECT FROM_UNIXTIME(tmp_submission_time)),
ge_start_time = (SELECT FROM_UNIXTIME(tmp_start_time)),
ge_end_time = (SELECT FROM_UNIXTIME(tmp_end_time)),
ge_ar_submission_time = (SELECT FROM_UNIXTIME(tmp_ar_submission_time));
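
This FROM_UNIXTIME conversion is the same thing GNU date does with an @epoch argument, which is a handy way to eyeball a raw value straight from the accounting file:

```shell
# Convert an epoch timestamp to a readable UTC timestamp,
# just like FROM_UNIXTIME does in the query above
date -u -d @1234567890 '+%Y-%m-%d %H:%M:%S'
# prints: 2009-02-13 23:31:30
```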

And now you can run a query that shows the utilization per month, based on 16 available slots with 10% non-availability reserved for maintenance by the admins:

mysql> SELECT MONTH(ge_submission_time) AS show_month,
SUM(ge_ru_wallclock * ge_slots) AS total_wallclock,
(SUM(ge_ru_wallclock * ge_slots) / (DATE_FORMAT(LAST_DAY(ge_submission_time),'%d') * 86400 * 16 * 0.9) * 100) AS total_util
FROM ge_jobs
WHERE YEAR(ge_submission_time) = '2009'
GROUP BY show_month
ORDER BY show_month;

Please note, the query above is not perfect! It is based on the submission time… but it doesn’t handle jobs that span multiple months… I have to tweak the query a little bit more for that.