Use maildrop to forward mail to another mailbox

I recently had the need to forward e-mail to another mailbox based on the From: field. I know it's possible with a simple .forward in your $HOME, but that forwards all your mail. :-(

So after some further searching I ended up with the following rule for the maildrop filter… it simply checks whether the mail (in this example) is from [email protected] and forwards it to [email protected]:

if ( /^From: .*[email protected].*/ )
{
        dotlock "forward.lock" {
          log "Forward mail"
          to "|/usr/sbin/sendmail [email protected]"
        }
}
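
If you also want to keep a local copy instead of only forwarding, maildrop's cc statement delivers a copy and then continues with the rest of the rule. A minimal variation of the same rule (the Maildir path is just an example):

if ( /^From: .*[email protected].*/ )
{
        dotlock "forward.lock" {
          log "Forward mail and keep a copy"
          cc "./Maildir/"
          to "|/usr/sbin/sendmail [email protected]"
        }
}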

And that's all you need to add to your $HOME/.mailfilter.

Use the Picasa RSS feed to show an album on my own website

Recently I moved the web albums of my kids from my own webserver to Google Picasa. But… I wanted to keep my nice JavaScript-based carousel :-)

In the current code I already had some PHP code that builds the content of the carousel from an array. Now I added three new things to the 'website':

  1. Config files
  2. Download the RSS (XML) feed and cache it
  3. Extract the photo URLs from the XML feed

1. Config files

One ‘global’ config:

<?php
 $cacheLocation = "/tmp/picasa-cache/";
 $cacheTTL = 60;
?>

Per album I have a config.php in that album's directory, so for example we have the following content:

<?php
  $xmlURL = 'http://picasaweb.google.com/data/feed/base/user/k/id/123456970123ASBD1?alt=rss';
  $AlbumDescription = "Rick de Rijk";
  $PicasaURL = "http://picasaweb.google.com/paderijk/Rick";
  $ShortName = "rick";
  $xmlFile = "$cacheLocation/$ShortName.xml";
?>
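
The page that builds the carousel then includes the global config first and the album config after it, so that $cacheLocation is defined before $xmlFile is built. A minimal sketch (the file names and directory layout are assumptions):

<?php
  // Global settings first, then the album-specific settings
  require_once "config-global.php";
  require_once "rick/config.php";
?>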

2. Download the RSS (XML) feed and cache it:

<?php
# Code that takes care of the caching
#
if (!(file_exists($xmlFile) &&
    (time() - $cacheTTL < filemtime($xmlFile))
  )) {
    //unlink($xmlFile);
    $data = file_get_contents($xmlURL);
    $f = file_put_contents($xmlFile, $data);
  }
?>

3. Extract the photo URLs from the feed

<?php
$foto_array = array();
$xml = new SimpleXMLElement($xmlFile, null, true);

$urls = $xml->xpath("channel/item/enclosure/@url");

foreach ($urls as $image_url)
{
  array_push($foto_array, $image_url);
}
?>
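
The carousel itself just loops over $foto_array; how it is rendered depends on the JavaScript carousel you use, but a minimal sketch that simply emits <img> tags would look like this:

<?php
# Emit one image tag per photo URL found in the feed
foreach ($foto_array as $image_url)
{
  echo '<img src="' . htmlspecialchars((string) $image_url) . '" alt="' . htmlspecialchars($AlbumDescription) . '" />' . "\n";
}
?>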

That’s all :-)

Fixed LDAP after upgrading from CentOS 5.4 to 5.5

Some months ago I upgraded my CentOS servers from version 5.4 to 5.5. One of these servers was running an LDAP master and an LDAP slave as a playground. After the upgrade to CentOS 5.5 it was broken, but due to other priorities I didn't have a chance to fix it.

On my systems I have TLS enabled for communication with the LDAP servers and Kerberos enabled as well, which results in a modified /etc/sysconfig/ldap:

# Enable Kerberos
export KRB5_KTNAME="FILE:/etc/openldap/ldap.keytab"

But I noticed that the RPM had installed a new version of that file, with the extension .rpmnew. So after applying the changes from the .rpmnew file and setting SLAPD_LDAPS and SLAPD_LDAPI to "yes", I ended up with the following content:

# Parameters to ulimit called right before starting slapd
# - use this to change system limits for slapd
ULIMIT_SETTINGS=

# How long to wait between sending slapd TERM and KILL
# signals when stopping slapd by init script
# - format is the same as used when calling sleep
STOP_DELAY=3s

# By default only listening on ldap:/// is turned on.
# If you want to change listening options for slapd,
# set following three variables to yes or no
SLAPD_LDAP=yes
SLAPD_LDAPS=yes
SLAPD_LDAPI=yes
export KRB5_KTNAME="FILE:/etc/openldap/ldap.keytab"

And guess what… It works again :-)
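
To double-check that slapd really listens on the ldaps:// and ldapi:// sockets, a quick test from the server itself (the hostname is just an example):

# netstat -tlnp | grep slapd
# ldapsearch -x -H ldaps://ldap.example.com -b "" -s base namingContexts
# ldapsearch -x -H ldapi:/// -b "" -s base namingContexts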

Creating snapshots of a backup using LVM snapshots

Normally I had a backup-retention script in place that created a tarball of the backup data (using Herakles). But that way I was not able to keep a retention of longer than 3 days :-(

So I had to look into another solution. I could add a new hard drive to the server… but something else should be possible. I ended up using LVM snapshots: I created a volume group of about 100GB, and in that volume group a logical volume of about 30GB, which is enough (and if not, we can 'grow' the filesystem thanks to LVM :-) )

After all that was done, I created a script located at /root/scripts/lvm-snapshot. This script runs every midnight and creates a snapshot.

#!/bin/bash
#
# Create LVM Snapshots
#
#
#---------------------------------------------------------------------------
CURRENT_SNAPNAME="snap-"$(date "+%Y%m%d%H%M%S")
VOLUME2SNAPSHOT="/dev/vol_backup/lvm0"
LVMSNAPSHOTCMD="/usr/sbin/lvcreate -L 2G -s -n $CURRENT_SNAPNAME $VOLUME2SNAPSHOT"
LINE="---------------------------------------------------------------------------"

echo $LINE
df -h /mnt/data
echo $LINE
$LVMSNAPSHOTCMD 2> /dev/null
#---------------------------------------------------------------------------
SNAPSHOT_RETENTION=15
CURRENT_SNAPSHOT_COUNT=$(lvdisplay | grep "^  LV Name                /dev/vol_backup/snap" | sort | awk '{ print $3 }' | wc -l)

OVERFLOW=$(echo $CURRENT_SNAPSHOT_COUNT - $SNAPSHOT_RETENTION | bc)
if [ $OVERFLOW -gt 0 ];
then
        echo $LINE
        for files in $(lvdisplay | grep "^  LV Name                /dev/vol_backup/snap" | sort | awk '{ print $3 }' | head -n$OVERFLOW);
        do
                 /usr/sbin/lvremove -f $files 2> /dev/null
        done
fi
#---------------------------------------------------------------------------
echo $LINE
/usr/sbin/vgdisplay vol_backup
echo $LINE
/usr/sbin/lvdisplay $VOLUME2SNAPSHOT

And the crontab entry is:

# crontab -l
0 0 * * * /root/scripts/lvm-snapshot
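
Restoring something from one of these snapshots is then just a matter of mounting it read-only somewhere (the snapshot name and mount point below are examples):

# mkdir -p /mnt/snapshot
# mount -o ro /dev/vol_backup/snap-20100101000000 /mnt/snapshot
# ... copy back whatever you need ...
# umount /mnt/snapshot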

Mozilla Labs Weave

In one of the latest Linux Magazine issues there was an article titled "Untangling the Web with Mozilla Weave". I really recognized the problem of having several Firefox instances, each with their own bookmarks/tabs/et cetera.

So I thought… let's give it a try, and so far… it works fine for me. I've now set up Weave on my Linux laptop and Linux workstation at home. The upcoming week I will set up Weave on my Linux workstation at work and on my Portable Apps Firefox on my, ahem, Windows Vista workstation.

Load Grid Engine accounting file into MySQL

Recently I needed to create a report about the utilization of an HPC cluster that uses Grid Engine, but we didn't have ARCO up and running for that cluster yet :-(

So I dug into my brain for how to load data from a "raw" format into a database… it's something I did when I worked for PricewaterhouseCoopers Advisory, but then with financial data.

First you need to create a database within MySQL:

mysql> create database ge_accounting;

Then we create a table containing the accounting information, so we create a file named (for example) create-tables.sql:

create table ge_jobs
(
ge_qname char(30) not null,
ge_hostname char(30) not null,
ge_group char(10) not null,
ge_owner char(10) not null,
ge_job_name char(255) not null,
ge_job_number int unsigned not null primary key,
ge_account char(30) not null,
ge_account_prio int unsigned not null,

tmp_submission_time int unsigned not null,
tmp_start_time int unsigned not null,
tmp_end_time int unsigned not null,

ge_failed int unsigned not null,
ge_exit_status int unsigned not null,
ge_ru_wallclock int unsigned not null,
ge_ru_utime int unsigned not null,
ge_ru_stime int unsigned not null,
ge_ru_maxrss int unsigned not null,
ge_ru_ixrss int unsigned not null,
ge_ru_ismrss int unsigned not null,
ge_ru_idrss int unsigned not null,
ge_ru_isrss int unsigned not null,
ge_ru_minflt int unsigned not null,
ge_ru_majflt int unsigned not null,
ge_ru_nswap int unsigned not null,
ge_ru_inblock int unsigned not null,
ge_ru_oublock int unsigned not null,
ge_ru_msgsnd int unsigned not null,
ge_ru_msgrcv int unsigned not null,
ge_ru_nsignals int unsigned not null,
ge_ru_nvcsw int unsigned not null,
ge_ru_nivcsw int unsigned not null,
ge_project char(30) not null,
ge_department char(30) not null,
ge_granted_pe char(30),
ge_slots int unsigned not null,
ge_task_number int unsigned not null,
ge_cpu int unsigned not null,
ge_mem int unsigned not null,
ge_io int unsigned not null,
ge_category char(255),
ge_iow int unsigned not null,
ge_pe_taskid char(30),
ge_maxvmem int unsigned not null,
ge_arid int unsigned not null,

tmp_ar_submission_time int unsigned not null,

ge_submission_time timestamp not null,
ge_start_time timestamp not null,
ge_end_time timestamp not null,
ge_ar_submission_time timestamp not null

);

Then use mysql to create the table:

$ mysql -u root -p ge_accounting < create-tables.sql

Now we can load the data into the database. For this example $SGE_ROOT is set to /apps/ge and the $SGE_CELL is set to default.

$ mysql -u root -p ge_accounting

mysql> LOAD DATA INFILE '/apps/ge/default/accounting/accounting'
REPLACE
INTO TABLE ge_jobs
FIELDS TERMINATED BY ':'
IGNORE 4 LINES;

And now we have to convert the epoch timestamps into proper timestamps using the following query:

mysql> UPDATE ge_jobs
SET ge_submission_time = (SELECT FROM_UNIXTIME(tmp_submission_time)),
ge_start_time = (SELECT FROM_UNIXTIME(tmp_start_time)),
ge_end_time = (SELECT FROM_UNIXTIME(tmp_end_time)),
ge_ar_submission_time = (SELECT FROM_UNIXTIME(tmp_ar_submission_time));

And now you can write a query that shows the utilization per month, based on 16 available slots with 10% reserved for non-availability due to maintenance by the admins:

mysql> SELECT MONTH(ge_submission_time) AS show_month,
SUM(ge_ru_wallclock * ge_slots) AS total_wallclock,
(SUM(ge_ru_wallclock * ge_slots) / (DATE_FORMAT(LAST_DAY(ge_submission_time),'%d') * 86400 * 16 * 0.9) * 100) AS total_util
FROM ge_jobs
WHERE YEAR(ge_submission_time) = '2009'
GROUP BY show_month
ORDER BY show_month;

Please note, the query above is not perfect! It is based on the submission time… but it doesn't handle jobs that span multiple months… I have to tweak my query a little bit more for that.
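
With the data in MySQL you can of course also slice it differently, for example the wallclock usage per user (again based on the submission time):

mysql> SELECT ge_owner,
SUM(ge_ru_wallclock * ge_slots) AS total_wallclock
FROM ge_jobs
WHERE YEAR(ge_submission_time) = '2009'
GROUP BY ge_owner
ORDER BY total_wallclock DESC;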

Bleeding edge, is indeed bleeding edge

Yesterday I thought let's play with FC12 (aka Rawhide, aka FC11.90). So I enabled the Rawhide repositories on my FC11 laptop and ran "yum -y update". And after a while it was there… a bleeding edge kernel and other packages.

The first issue I ran into was that Firefox 3.5 would not run; it segfaulted. :-( There seems to be a bug in the xulrunner package. I was able to work around it by "downgrading" Firefox to 3.0.11, but that one crashed on pages using the Adobe Flash plugin. So I removed the Flash plugin, because I wanted bleeding edge Fedora. Having that sorted out, I wanted to suspend my laptop, and guess what… it didn't want to suspend… and after some hacking around… it still didn't work.

So my final decision was to go back to FC11. I was able to "downgrade" my system in about 60 minutes. At home I have a mirror repository with all the packages, so during installation I added these repositories and had all the updates applied in one go.

Lesson learned: “Bleeding edge… is indeed bleeding edge!”

I need my laptop for my daily work… if I didn't need it for my daily work, I would have kept FC12 (aka Rawhide, aka FC11.90) on it to participate in developing FC12.