YUM repo fails with koji/mock when base URL is used

Recently I started playing around with Koji for package building.

Everything was set-up pretty fast… but the first attempts to build a build-root failed…

After some troubleshooting I found the cause…

If you use the repo-data as available on an installation ISO (served via a webserver) and use mergerepos, the location of an RPM will look like:

<location xml:base="CentOS" href="pam-0.99.6.2-6.el5_5.2.x86_64.rpm"/>

The entry as available on the ISO is:

<location xml:base="media://1330913492.861127#1" href="CentOS/pam-0.99.6.2-6.el5_5.2.x86_64.rpm"/>

The way to work around this issue is to recreate the repodata using createrepo:

# createrepo -u http://172.16.3.240/repo/centos/5.8/base/x86_64/ -o ${WEBSERVERPATH}/new-repo/ /media/cd/

And then the entry will look like:

<location xml:base="http://172.16.3.240/repo/centos/5.8/base/x86_64/CentOS" href="pam-0.99.6.2-6.el5_5.2.x86_64.rpm"/>
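
A quick way to verify that the rewritten base URL really ended up in the new repodata is to grep the primary metadata (a small sketch, assuming the output directory used above):

$ zcat ${WEBSERVERPATH}/new-repo/repodata/*primary.xml.gz | grep -m1 'xml:base'

Every <location> element should now carry the http:// base URL instead of the media:// one.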

The bug can be found in bugzilla.

SSL Chain

I’ve ordered a simple SSL certificate, signed by Comodo, via http://www.cheapssls.com/ for use with Apache… although a lot of browsers (Firefox on Mac OS X, all browsers on Linux) didn’t accept it (the CA was not known…)
After some discussion with …

How to update the Python bindings for Subversion

Recently I ran into the problem that a team had a requirement for Subversion 1.6.6 (while CentOS 5u3 did not support this… and the vendor did not provide a newer release). This team also had a requirement for Trac… Trac depends on Python… but I was not allowed to update the Subversion bindings for Python by updating them system-wide… so… this is what I did:

  • Installed a number of devel packages:
       # yum install apr-devel neon{,-devel} apr-util-devel

  • Compiled sqlite version 3.6.13 and installed it on NFS:
      $ ./configure --prefix=/nfs/apps/webservices/trac-parent/sqlite/3.6.13
    ...
    $ make ; make install
    ...

  • Compiled subversion 1.6.6 and installed it on NFS:
    $ make clean; ./configure \
        --prefix=/nfs/apps/webservices/trac-parent/subversion/1.6.6 \
        --with-sqlite=/nfs/apps/webservices/trac-parent/sqlite/3.6.13 \
        --without-neon
    ...
    $ make -j8 ; make install ; make swig-py ; make install-swig-py

  • Added the following line to /etc/sysconfig/httpd:
    export LD_LIBRARY_PATH=/nfs/apps/webservices/trac-parent/sqlite/3.6.13/lib/

  • Modified /etc/httpd/conf.d/trac.conf by adding a ‘PythonPath’ to the location-directive:
    <Location /projects>
    ...
    PythonPath "['/nfs/apps/webservices/trac-parent/subversion/1.6.6/lib/svn-python'] + sys.path"
    </Location>

  • Restart the web server (httpd):
    # service httpd stop
    # service httpd start

  • Now you have to resync the Trac instance with Subversion (the
    repository_dir value in the trac.ini of the instance)… but make sure
    you use the correct bindings in Python (a quick check of the bindings
    is sketched right after this list):

    # export LD_LIBRARY_PATH=/nfs/apps/webservices/trac-parent/sqlite/3.6.13/lib/
    # export PYTHONPATH=/nfs/apps/webservices/trac-parent/subversion/1.6.6/lib/svn-python
    # trac-admin ${TRAC_INSTANCE_PATH} repository resync "*"
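
To double-check that Python really picks up the 1.6.6 bindings from NFS, you can print the version reported by the SWIG bindings (a minimal sketch; run it with the two exports from the previous step still in place):

# python -c 'import svn.core; print "%d.%d.%d" % (svn.core.SVN_VER_MAJOR, svn.core.SVN_VER_MINOR, svn.core.SVN_VER_PATCH)'

This should print 1.6.6… if it prints the system version, the PYTHONPATH is not being picked up.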

CentOS 5: enabling two-factor SSH authentication via Google

Today I noticed a very nice article about enabling Google’s two-factor authentication for Linux SSH.

After reading it… I found some time to play with it… so I enabled it within 10 minutes on my CentOS 5 64-bit playground server… but there are some small ‘caveats’.

The hg command

To check out the code, you must install the mercurial RPM… this one is available via the EPEL repositories.

So after having the EPEL repositories enabled, run as root:

yum -y install mercurial

Compiling the PAM module

First, check out the code:

hg clone https://google-authenticator.googlecode.com/hg/ google-authenticator/

You cannot compile the module directly… therefore you must first apply a small change to the Makefile.

Change /usr/lib/libdl.so to /usr/lib64/libdl.so wherever it appears (3 occurrences).
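
That edit can also be done with a quick sed one-liner (a sketch; run it in the directory that holds the Makefile you are about to build):

$ sed -i 's|/usr/lib/libdl.so|/usr/lib64/libdl.so|g' Makefile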

$ make
$ sudo make install

Now you have to update /etc/pam.d/sshd so it contains:

#%PAM-1.0
auth       required     pam_google_authenticator.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
session    optional     pam_keyinit.so force revoke
session    include      system-auth
session    required     pam_loginuid.so

Configure SSH

You also have to make sure that in /etc/ssh/sshd_config the following settings are set to yes:

ChallengeResponseAuthentication yes
UsePAM yes

And restart the SSH daemon.
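
On CentOS 5 that comes down to:

# service sshd restart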

Set up your smartphone/credentials on the system

$ google-authenticator
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/user@server%3Fsecret%3DSAEP64T5VZAVWAFB
Your new secret key is: SAEP64T5VZAVWAFB
Your verification code is 376046
Your emergency scratch codes are:
  67868696
  26247332
  54815527
  54336661
  71083816
Do you want me to update your "~/.google_authenticator" file (y/n) y
Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y
By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) n
If the computer that you are logging into isn’t hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

And you’re done :-D

Give it a try and SSH to that box…
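
With ChallengeResponseAuthentication enabled, the login should now ask for the verification code from the app on top of your password, roughly like this (user@playground is just a placeholder, and the exact prompt order depends on your PAM stack):

$ ssh user@playground
Verification code:
Password: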

TIP: Make sure you still have an SSH session open… or you might lock yourself out of the system…

Use maildrop to forward mail to another mailbox

I recently had the need to forward e-mail to another mailbox based on the From field. I know it’s possible with a simple .forward in your $HOME, but that will forward all the mail. :-(

So after some further searching I ended up with the following rule for your maildrop filter… it simply checks whether the mail is from one address (someone@example.com in this example) and forwards it to another (target@example.net); replace the addresses with your own:

if ( /^From: .*someone@example.com.*/ )
{
        # adjust the addresses above and below to the real sender and destination
        dotlock "forward.lock" {
          log "Forward mail"
          to "|/usr/sbin/sendmail target@example.net"
        }
}

And that’s all you need to add to your $HOME/.mailfilter.
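
If your MTA does not already use maildrop as the local delivery agent, one common way to get your mail through this filter is to pipe delivery into maildrop from your $HOME/.forward (a sketch; the path to the maildrop binary may differ on your system):

$ cat ~/.forward
"|/usr/bin/maildrop"

After that, every incoming mail passes through $HOME/.mailfilter and only the matching messages get forwarded.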

Creating snapshots of a backup using LVM snapshots

Normally I used to have a backup-retention script in place that would create a TAR-ball of the backup data (using Herakles). But this way I was not able to have a retention of longer than 3 days :-(

So I had to look into another solution. I could add a new hard drive to the server… but there had to be something else possible. So I ended up using LVM snapshots: I created a volume group of about 100GB, and in that volume group a logical volume of about 30GB, which is enough (and if not, we can ‘grow’ the filesystem thanks to LVM :-) )

After all that was done, I created a script located in /root/scripts/lvm-snapshot. This script runs every midnight and creates a snapshot.

#!/bin/bash
#
# Create LVM Snapshots
#
#
#---------------------------------------------------------------------------
CURRENT_SNAPNAME="snap-"$(date "+%Y%m%d%H%M%S")
VOLUME2SNAPSHOT="/dev/vol_backup/lvm0"
LVMSNAPSHOTCMD="/usr/sbin/lvcreate -L 2G -s -n $CURRENT_SNAPNAME $VOLUME2SNAPSHOT"
LINE="---------------------------------------------------------------------------"

# Show the current state of the backup filesystem and take a new snapshot
echo $LINE
df -h /mnt/data
echo $LINE
$LVMSNAPSHOTCMD 2> /dev/null
#---------------------------------------------------------------------------
# Remove the oldest snapshots once we have more than SNAPSHOT_RETENTION of them
SNAPSHOT_RETENTION=15
CURRENT_SNAPSHOT_COUNT=$(lvdisplay | grep "^  LV Name                /dev/vol_backup/snap" | sort | awk '{ print $3 }' | wc -l)

OVERFLOW=$(echo $CURRENT_SNAPSHOT_COUNT - $SNAPSHOT_RETENTION | bc)
if [ $OVERFLOW -gt 0 ];
then
        echo $LINE
        for files in $(lvdisplay | grep "^  LV Name                /dev/vol_backup/snap" | sort | awk '{ print $3 }' | head -n$OVERFLOW);
        do
                 /usr/sbin/lvremove -f $files 2> /dev/null
        done
fi
#---------------------------------------------------------------------------
# Report the volume group and the snapshotted volume
echo $LINE
/usr/sbin/vgdisplay vol_backup
echo $LINE
/usr/sbin/lvdisplay $VOLUME2SNAPSHOT

And the crontab entry is:

# crontab -l
0 0 * * * /root/scripts/lvm-snapshot
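
To actually get at the data in one of these snapshots (for example to restore a file from a couple of days back), the snapshot can be mounted read-only somewhere; a minimal sketch, with a made-up snapshot name:

# mkdir -p /mnt/restore
# mount -o ro /dev/vol_backup/snap-20120301000000 /mnt/restore
  ... copy back whatever you need ...
# umount /mnt/restore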

Load Grid Engine accounting file into MySQL

Recently I needed to create a report about the utilization of an HPC cluster that uses Grid Engine, but we didn’t have ARCO up and running yet for that cluster :-(

So I dug into my brain on how to load data from a “RAW” format into a database… it’s something I did when I worked for PricewaterhouseCoopers Advisory, but back then I used financial data.


First you need to create a database within MySQL:

mysql> create database ge_accounting;

Then we create a table containing the accounting information, so we create a file named (for example) create-tables.sql:

create table ge_jobs
(
ge_qname char(30) not null,
ge_hostname char(30) not null,
ge_group char(10) not null,
ge_owner char(10) not null,
ge_job_name char(255) not null,
ge_job_number int unsigned not null primary key,
ge_account char(30) not null,
ge_account_prio int unsigned not null,

tmp_submission_time int unsigned not null,
tmp_start_time int unsigned not null,
tmp_end_time int unsigned not null,

ge_failed int unsigned not null,
ge_exit_status int unsigned not null,
ge_ru_wallclock int unsigned not null,
ge_ru_utime int unsigned not null,
ge_ru_stime int unsigned not null,
ge_ru_maxrss int unsigned not null,
ge_ru_ixrss int unsigned not null,
ge_ru_ismrss int unsigned not null,
ge_ru_idrss int unsigned not null,
ge_ru_isrss int unsigned not null,
ge_ru_minflt int unsigned not null,
ge_ru_majflt int unsigned not null,
ge_ru_nswap int unsigned not null,
ge_ru_inblock int unsigned not null,
ge_ru_oublock int unsigned not null,
ge_ru_msgsnd int unsigned not null,
ge_ru_msgrcv int unsigned not null,
ge_ru_nsignals int unsigned not null,
ge_ru_nvcsw int unsigned not null,
ge_ru_nivcsw int unsigned not null,
ge_project char(30) not null,
ge_department char(30) not null,
ge_granted_pe char(30),
ge_slots int unsigned not null,
ge_task_number int unsigned not null,
ge_cpu int unsigned not null,
ge_mem int unsigned not null,
ge_io int unsigned not null,
ge_category char(255),
ge_iow int unsigned not null,
ge_pe_taskid char(30),
ge_maxvmem int unsigned not null,
ge_arid int unsigned not null,

tmp_ar_submission_time int unsigned not null,

ge_submission_time timestamp not null,
ge_start_time timestamp not null,
ge_end_time timestamp not null,
ge_ar_submission_time timestamp not null

);

Then use mysql to create the table:

$ mysql -u root -p ge_accounting < create-tables.sql

Now we can load the data into the database. For this example $SGE_ROOT is set to /apps/ge and the $SGE_CELL is set to default.

$ mysql -u root -p ge_accounting

mysql> LOAD DATA INFILE '/apps/ge/default/accounting/accounting'
REPLACE
INTO TABLE ge_jobs
FIELDS TERMINATED BY ':'
IGNORE 4 LINES;
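
A quick sanity check that the load worked is to compare the row count with the number of lines in the accounting file (a sketch from the shell; the row count will be a bit lower because of the 4 ignored header lines and because REPLACE collapses entries that share a job number):

$ wc -l /apps/ge/default/accounting/accounting
$ mysql -u root -p ge_accounting -e 'SELECT COUNT(*) FROM ge_jobs;'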

And now we have to convert the epoch timestamps into proper timestamps using the following query:

mysql> UPDATE ge_jobs
SET ge_submission_time = (SELECT FROM_UNIXTIME(tmp_submission_time)),
ge_start_time = (SELECT FROM_UNIXTIME(tmp_start_time)),
ge_end_time = (SELECT FROM_UNIXTIME(tmp_end_time)),
ge_ar_submission_time = (SELECT FROM_UNIXTIME(tmp_ar_submission_time));

And now you can write a query that shows the utilization per month, based on 16 available slots but with 10% reserved as non-availability due to maintenance by the admins:

mysql> SELECT MONTH(ge_submission_time) AS show_month,
SUM(ge_ru_wallclock * ge_slots) AS total_wallclock,
(SUM(ge_ru_wallclock * ge_slots) / (DATE_FORMAT(LAST_DAY(ge_submission_time),'%d') * 86400 * 16 * 0.9) * 100) AS total_util
FROM ge_jobs
WHERE YEAR(ge_submission_time) = '2009'
GROUP BY show_month
ORDER BY show_month;

Please note, the query above is not perfect! It is based on the submission time… but it doesn’t handle jobs that span multiple months… I have to tweak my query a little bit more for that.