How to update the Python bindings for Subversion

Recently I ran into the problem that a team required Subversion 1.6.6, while CentOS 5u3 did not ship it… and the vendor didn’t provide a newer release. The same team also required Trac… and Trac depends on the Python bindings for Subversion… but I was not allowed to update those bindings system-wide… so… this is what I did:

  • Installed a number of devel packages:
       # yum install apr-devel neon{,-devel} apr-util-devel

  • Compiled sqlite version 3.6.13 and installed it on NFS:
      $ ./configure --prefix=/nfs/apps/webservices/trac-parent/sqlite/3.6.13
    ...
    $ make ; make install
    ...

  • Compiled subversion 1.6.6 and installed it on NFS:
    $ make clean
    $ ./configure \
        --prefix=/nfs/apps/webservices/trac-parent/subversion/1.6.6 \
        --with-sqlite=/nfs/apps/webservices/trac-parent/sqlite/3.6.13 \
        --without-neon
    ...
    $ make -j8 ; make install ; make swig-py ; make install-swig-py

  • Added the following line to /etc/sysconfig/httpd:
    export LD_LIBRARY_PATH=/nfs/apps/webservices/trac-parent/sqlite/3.6.13/lib/

  • Modified /etc/httpd/conf.d/trac.conf by adding a 'PythonPath' to the Location directive:
    <Location /projects>
    ...
    PythonPath "['/nfs/apps/webservices/trac-parent/subversion/1.6.6/lib/svn-python'] + sys.path"
    </Location>

  • Restart Apache (which serves Trac):
    # service httpd stop
    # service httpd start

  • Now you have to resync the Trac instance with Subversion (the
    repository_dir value in the trac.ini of the instance)… but make sure
    Python uses the correct bindings:

    # export LD_LIBRARY_PATH=/nfs/apps/webservices/trac-parent/sqlite/3.6.13/lib/
    # export PYTHONPATH=/nfs/apps/webservices/trac-parent/subversion/1.6.6/lib/svn-python
    # trac-admin ${TRAC_INSTANCE_PATH} repository resync "*"
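A quick sanity check that the shell really resolves the NFS-installed bindings before running the resync (paths are the ones used in the steps above; the python import line is left commented because it only works once the swig bindings are built and installed):

```shell
# Point the environment at the NFS-installed libraries (paths from the steps above)
export LD_LIBRARY_PATH=/nfs/apps/webservices/trac-parent/sqlite/3.6.13/lib/
export PYTHONPATH=/nfs/apps/webservices/trac-parent/subversion/1.6.6/lib/svn-python
echo "PYTHONPATH=$PYTHONPATH"
# With the bindings in place, this should print 1.6.6:
#   python -c 'from svn import core; print core.SVN_VER_NUMBER'
```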

Enabling two-factor SSH authentication via Google on CentOS 5

Today I noticed a very nice article about enabling Google’s two-factor authentication for Linux SSH.

After reading it… I found some time to play with it… so I enabled it within 10 minutes on my CentOS 5 64-bit playground server… but there are some small ‘caveats’.

The hg command

To check out the code, you must install the mercurial RPM… this one is available via the EPEL repositories.

So after having the EPEL repositories enabled, run as root:

yum -y install mercurial

Compiling the PAM module

Check out the code:

hg clone https://google-authenticator.googlecode.com/hg/ google-authenticator/

You cannot compile the module directly on a 64-bit system… therefore you must apply a small change to the Makefile.

Change the 3 occurrences of /usr/lib/libdl.so to /usr/lib64/libdl.so.
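That edit can also be scripted; a one-liner sketch, assuming the Makefile still contains the literal path:

```shell
# Inside the google-authenticator checkout, rewrite all three occurrences:
#   sed -i 's|/usr/lib/libdl.so|/usr/lib64/libdl.so|g' Makefile
# The substitution itself, demonstrated on a sample line:
echo 'LDFLAGS += /usr/lib/libdl.so' | sed 's|/usr/lib/libdl.so|/usr/lib64/libdl.so|g'
```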

$ make
$ sudo make install

Now you have to update /etc/pam.d/sshd so it contains:

#%PAM-1.0
auth       required     pam_google_authenticator.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
session    optional     pam_keyinit.so force revoke
session    include      system-auth
session    required     pam_loginuid.so

Configure SSH

You also have to make sure that the following settings in /etc/ssh/sshd_config are set to yes:

ChallengeResponseAuthentication yes
UsePAM yes

And restart the SSH daemon.
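Before bouncing the daemon, it's worth verifying both options really are enabled, so a typo can't lock you out. A small sketch (the check_opt helper is my own, not part of any tool):

```shell
# check_opt OPTION FILE: succeed when OPTION is set to 'yes' in FILE
check_opt() {
  grep -Eiq "^[[:space:]]*$1[[:space:]]+yes" "$2"
}
# Before restarting sshd:
#   check_opt ChallengeResponseAuthentication /etc/ssh/sshd_config &&
#   check_opt UsePAM /etc/ssh/sshd_config &&
#   service sshd restart
```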

Set up your smartphone/credentials on the system

$ google-authenticator
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/user@server%3Fsecret%3DSAEP64T5VZAVWAFB
Your new secret key is: SAEP64T5VZAVWAFB
Your verification code is 376046
Your emergency scratch codes are:
  67868696
  26247332
  54815527
  54336661
  71083816
Do you want me to update your "~/.google_authenticator" file (y/n) y
Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y
By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) n
If the computer that you are logging into isn’t hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

And you’re done :-D

Give it a try to SSH to that box…

 TIP: Make sure you have an SSH session still open… or you might lock yourself out of the system…

WordPress template with jQuery flippage

Recently I’ve been working on creating a WordPress template for my brother-in-law’s company. My brother-in-law is a photographer, so I also had to implement albums/galleries using “jQuery jFlip“. So I decided to use the “NextGEN Gallery” plugin for WordPress.

The benefit of NextGEN Gallery is that it allows you to add custom gallery templates to your WordPress template/theme by having in your theme-folder a nggallery folder and files named gallery-{template_name}.php.

To enable jQuery jFlip with NextGEN Gallery I had to do the following modifications:

Add to $TEMPLATE_PATH/header.php the following lines in the head section:

<!--[if IE]><script src="<?php bloginfo('template_url'); ?>/js/excanvasX.js" type="text/javascript"></script><![endif]-->
<script src="<?php bloginfo('template_url'); ?>/js/jquery-1.6.1.min.js" type="text/javascript"></script>
<script src="<?php bloginfo('template_url'); ?>/js/jquery.jflip-0.4.min.js" type="text/javascript"></script>

Make sure you put jquery-1.6.1.min.js, jquery.jflip-0.4.min.js and excanvasX.js (for IE support) in your template, or deep-link to the developers’ sites.

And now create a NextGEN template $TEMPLATE_PATH/nggallery/gallery-flippage.php:

<?php if (!defined('ABSPATH')) die('No direct access allowed'); ?><?php if (!empty($gallery)) : ?>
<script type="text/javascript">
  jQuery.noConflict();
  jQuery(function($){
    $("#gallery1").jFlip(600,300,{background:"transparent",cornersTop:false,scale:"fit"});
  });
</script>
<p>&nbsp;</p>
<center>
<ul id="gallery1">
   <?php foreach ( $images as $image ) : ?>
     <li><img src="<?php echo $image->imageURL ?>" /></li>
    <?php endforeach; ?>
</ul>
</center>
<?php endif; ?>

Please note the ‘jQuery.noConflict()’… make sure it’s there, otherwise it will drive you crazy :-(

Now make sure the NextGEN Gallery plugin is active and create a page in WordPress with the following content:

[nggallery id=5 template=flippage]

That’s all :-)

And the results can be checked here.

Use maildrop to forward a mail to another mail box

I recently had the need to forward e-mail based on the From header to another mailbox. I know it’s possible with a simple .forward in your $HOME, but that forwards all the mail. :-(

So after some further searching I ended up with the following rule for the maildrop filter… it simply checks whether the mail (in this example) is from [email protected] and forwards it to [email protected]:

if ( /^From: .*[email protected].*/ )
{
        dotlock "forward.lock" {
          log "Forward mail"
          to "|/usr/sbin/sendmail [email protected]"
        }
}

And that’s all you need to add to your $HOME/.mailfilter.
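The header match in that rule can be sanity-checked outside maildrop with grep, since this particular pattern has the same meaning as an extended regex (sender@example.com is a placeholder for the real address):

```shell
# A From: header that should trigger the rule:
printf 'From: Some Sender <sender@example.com>\n' |
  grep -E '^From: .*sender@example\.com.*' && echo matched
```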

Use Picasa RSS Feed to show album on my own website

Recently I’ve moved my kids’ web albums from my own webserver to Google Picasa. But… I wanted to keep my nice JavaScript-based carousel :-)

In the current code I already had some PHP code that creates the content of the carousel using an array. Now I added three new features to the ‘website’:

  1. Config files
  2. Downloading the RSS (XML) feed and caching it
  3. Extracting the photo URLs from the XML feed.

1. Config files

One ‘global’ config:

<?php
 $cacheLocation="/tmp/picasa-cache/";
 $cacheTTL = 60;
?>

Per album I have a config.php in that directory, for example with the following content:

 <?php
  $xmlURL='http://picasaweb.google.com/data/feed/base/user/k/id/123456970123ASBD1?alt=rss';
  $AlbumDescription="Rick de Rijk";
  $PicasaURL="http://picasaweb.google.com/paderijk/Rick";
  $ShortName="rick";
  $xmlFile="$cacheLocation/$ShortName.xml";
?>

2. Download the RSS (XML) feed and cache it:

<?php
# Code that takes care of the caching
#
if (!(file_exists($xmlFile) &&
    (time() - $cacheTTL < filemtime($xmlFile))
  )) {
    //unlink($xmlFile);
    $data = file_get_contents($xmlURL);
    $f = file_put_contents($xmlFile, $data);
  }
?>
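For completeness, the same freshness test can be sketched as a small shell helper, e.g. for pre-warming the cache from cron (is_stale is my own sketch; stat -c is the GNU coreutils flavour):

```shell
# is_stale FILE TTL: succeed when FILE is missing or older than TTL seconds
is_stale() {
  file=$1; ttl=$2
  [ -f "$file" ] || return 0
  [ $(( $(date +%s) - $(stat -c %Y "$file") )) -ge "$ttl" ]
}
# e.g. from cron, using the paths from the album config above:
#   is_stale /tmp/picasa-cache/rick.xml 60 && curl -s -o /tmp/picasa-cache/rick.xml "$xmlURL"
```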

3. Extract the URLs with the photos from the feed

<?php
$foto_array = array();
// third argument: treat $xmlFile as a path instead of an XML string
$xml = new SimpleXMLElement($xmlFile, 0, true);

$urls = $xml->xpath("channel/item/enclosure/@url");

foreach ($urls as $image_url)
{
  array_push($foto_array, $image_url);
}
?>

That’s all :-)

Fixed LDAP after upgrading from CentOS 5.4 to 5.5

Some months ago I upgraded my CentOS servers from version 5.4 to 5.5. One of these servers was running an LDAP master and LDAP slave as a playground. The upgrade to CentOS 5.5 broke this setup, but due to other priorities I didn’t have a chance to fix it until now.

On my systems I enabled TLS for communication with the LDAP servers and also enabled Kerberos. This resulted in a modified /etc/sysconfig/ldap:

# Enable Kerberos
export KRB5_KTNAME="FILE:/etc/openldap/ldap.keytab"

But I noticed that the RPM installed a new version of that file, with the extension .rpmnew. After applying the changes from the .rpmnew file and setting SLAPD_LDAPS and SLAPD_LDAPI to “yes”, I ended up with the following content:

# Parameters to ulimit called right before starting slapd
# – use this to change system limits for slapd
ULIMIT_SETTINGS=

# How long to wait between sending slapd TERM and KILL
# signals when stopping slapd by init script
# – format is the same as used when calling sleep
STOP_DELAY=3s

# By default only listening on ldap:/// is turned on.
# If you want to change listening options for slapd,
# set following three variables to yes or no
SLAPD_LDAP=yes
SLAPD_LDAPS=yes
SLAPD_LDAPI=yes
export KRB5_KTNAME="FILE:/etc/openldap/ldap.keytab"

And guess what… It works again :-)

Use subversion to publish websites

Sometimes I’m really surprised about myself… especially how lazy I am. :-)

I’m currently playing around with one of my private websites, and to improve development I decided to use subversion. So far so good, but I wanted the committed subversion code to go online on the webserver automatically. So I did the following very simple trick.

First I checked out the code (subtree) from the subversion server (which uses https):

$ cd /sites
$ mv dev.adslweb.net{,-backup}
$ svn co https://svn.adslweb.net/svn/websites/dev.adslweb.net

Next step was to commit the current content of the website into subversion:

$ cd /sites/dev.adslweb.net
$ cp -Rv /sites/dev.adslweb.net-backup/*  ./
$ svn add *
$ svn commit -m "Initial commit of ADSLWEB.net dev env"

Now download the simple script I created; it makes sure that the subversion update doesn’t fire off twice for the same tree.

Download svn-update.sh via this link.

So something like this:

$ mkdir ~/scripts/
$ cd ~/scripts
$ wget http://www.xs4all.nl/~paderijk/pics/svn-update.sh
$ chmod 700 svn-update.sh
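For reference, a minimal sketch of what such a script can do — take a lock so two cron runs can’t update the same tree at once. This is my own sketch; the linked svn-update.sh may differ:

```shell
# svn_update_once WC: update working copy WC unless an update is already running.
# mkdir is atomic, so the lock directory doubles as the mutex.
svn_update_once() {
  wc=$1
  lock="$wc/.svn-update.lock"
  mkdir "$lock" 2>/dev/null || {
    echo "update already running for $wc" >&2
    return 1
  }
  svn update --non-interactive "$wc"
  rc=$?
  rmdir "$lock"
  return $rc
}
# e.g.: svn_update_once /sites/dev.adslweb.net
```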

Now… the last step… create a crontab entry with the following content:

*/1 * * * * /home/pieter/scripts/svn-update.sh /sites/dev.adslweb.net > /dev/null 2>&1

 And guess what… it works like a charm. On every new commit, by whoever, your online site is updated within 1 minute, without needing to log in to the website/webserver using ftp/ssh/whatever.

More flexible yum-repo sync script

In the past I started syncing the updates repositories from Fedora and CentOS to a local server every night, just to speed up updates, kickstarts, et cetera… The first version of the script was very quick and dirty; now I have a more decent script that allows you to add/remove new versions of CentOS and Fedora very quickly.

 You can find the script here.
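The linked script isn’t reproduced here, but the core of such a sync boils down to one rsync per release/arch. A hedged sketch — the mirror URL and destination below are assumptions, point them at your own mirror:

```shell
MIRROR=rsync://mirror.example.org/centos   # pick a mirror that offers rsync
DEST=/var/www/repos/centos

# sync_repo RELEASE ARCH: mirror one updates tree locally
sync_repo() {
  rel=$1; arch=$2
  rsync -av --delete "$MIRROR/$rel/updates/$arch/" "$DEST/$rel/updates/$arch/"
}
# e.g.: for rel in 5.4 5.5; do sync_repo "$rel" x86_64; done
```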

How full are your snapshot volumes in LVM?

As I mentioned in my previous post, which is already 2 months old :(, I’m using snapshots for data retention.

Now I ran into the situation that I wanted to know how full the snapshots are. A ‘normal’ df will not work… but I figured it out! The command lvs is willing to do the work:

# lvs --aligned --separator '|' vol_backup
  LV                |VG        |Attr  |LSize |Origin|Snap% |Move|Log|Copy% |Convert
  lvm0              |vol_backup|owi-ao|40.00G|      |      |    |   |      |      
  snap-20100412_2350|vol_backup|swi-a-| 4.00G|lvm0  | 23.71|    |   |      |      
  snap-20100413_2350|vol_backup|swi-a-| 4.00G|lvm0  | 21.70|    |   |      |      
  snap-20100414_2350|vol_backup|swi-a-| 4.00G|lvm0  | 19.52|    |   |      |      
  snap-20100415_2350|vol_backup|swi-a-| 4.00G|lvm0  | 17.53|    |   |      |      
  snap-20100416_2350|vol_backup|swi-a-| 4.00G|lvm0  | 15.54|    |   |      |      
  snap-20100417_2350|vol_backup|swi-a-| 4.00G|lvm0  | 13.56|    |   |      |      
  snap-20100418_2350|vol_backup|swi-a-| 4.00G|lvm0  | 11.56|    |   |      |      
  snap-20100419_2353|vol_backup|swi-a-| 4.00G|lvm0  |  9.02|    |   |      |      
  snap-20100420_2353|vol_backup|swi-a-| 4.00G|lvm0  |  6.76|    |   |      |      
  snap-20100421_2350|vol_backup|swi-a-| 4.00G|lvm0  |  2.79|    |   |      | 

In the ‘Snap%’ column you can see how full your snapshot volume is!
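Building on that, a small cron-able sketch that warns when any snapshot passes a threshold. check_snaps is my own helper; snap_percent is the lvs -o field name for the Snap% column (adjust if your LVM version differs):

```shell
# check_snaps VG LIMIT: print a warning for every snapshot in VG that is
# more than LIMIT percent full (empty Snap% fields, e.g. the origin, are skipped)
check_snaps() {
  vg=$1; limit=$2
  lvs --noheadings --separator '|' -o lv_name,snap_percent "$vg" |
  while IFS='|' read -r lv pct; do
    lv=$(echo $lv)                  # trim padding
    pct=$(echo $pct | cut -d. -f1)  # integer part only
    if [ -n "$pct" ] && [ "$pct" -ge "$limit" ]; then
      echo "WARNING: $lv at ${pct}%"
    fi
  done
}
# e.g. from cron: check_snaps vol_backup 80
```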