Weblog moved to VPS and uses CloudFlare’s CDN

Maybe you’ve noticed… maybe you didn’t… but the weblog got a new URL: instead of blog.adslweb.net it’s now blog.angits.net.

The reason for the new URL is that the weblog now uses CloudFlare’s Content Delivery Network. With CloudFlare’s free plan it’s not possible to use subdomains, and since I had angits.net lying around as a spare, I started using that one.

Besides using a CDN, the weblog also physically moved from my server at home to a VPS in the NedZone datacenter.

Network overview of CloudFlare’s CDN

CentOS/RHEL 6u5 with Two-Factor Authentication (Google Authenticator) and SELinux

As already announced on several social media platforms, I got the Google Authenticator PAM module enabled on a CentOS(/RHEL) 6u5 box.

This implementation includes:

  • SELinux can run in enforcing mode;
  • Certain IP ranges, or users in certain groups, can be excluded from Two-Factor Authentication.

Please note that this does not require any Google authentication service; the Google Authenticator is just a PAM module that implements the HMAC-Based One-Time Password (HOTP) algorithm specified in RFC 4226 and the Time-Based One-Time Password (TOTP) algorithm specified in RFC 6238, and it is licensed under the Apache License 2.0.
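
These are the same algorithms implemented by other tools as well; for example, assuming the oath-toolkit package is installed, you can compute a TOTP value for a (purely hypothetical) base32 secret straight from the command line and compare it against what the app on your phone shows:

$ oathtool --totp -b JBSWY3DPEHPK3PXP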

Installing Google Authenticator PAM Module

The first step is to install the required packages, just to be sure they’re there (gcc and make are needed to compile the module)…

# yum -y install git pam-devel gcc make

Download and compile the code:

# cd $HOME

# git clone https://code.google.com/p/google-authenticator/

# cd $HOME/google-authenticator/libpam

# make

# make install
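
The module should now be installed where PAM can pick it up; a quick sanity check (the path assumes a 64-bit box, on 32-bit it would be /lib/security):

# ls -l /lib64/security/pam_google_authenticator.so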

Modify /etc/ssh/sshd_config so the following settings are active:

ChallengeResponseAuthentication yes

UsePAM yes

But also disable public-key authentication, to avoid bypassing 2FA (one of the nice caveats I ran into):

PubkeyAuthentication no

Restart the SSH daemon:

# service sshd restart
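
If you want to verify that the settings are actually effective, the extended test mode of sshd can dump the running configuration:

# sshd -T | grep -Ei 'challengeresponseauthentication|usepam|pubkeyauthentication'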

Change /etc/pam.d/sshd so it has the following contents. Please note that the secret is stored in the $HOME/.ssh folder, which has the correct SELinux context. The [success=1 default=ignore] control on pam_access makes a matching exemption skip the pam_google_authenticator line; that is what implements the 2FA bypass:

#%PAM-1.0
auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/group-2fa.conf
auth required pam_google_authenticator.so secret=${HOME}/.ssh/google_authenticator
auth required pam_sepermit.so
auth include password-auth
account required pam_nologin.so
account include password-auth
password include password-auth
# pam_selinux.so close should be the first session rule
session required pam_selinux.so close
session required pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session required pam_selinux.so open env_params
session optional pam_keyinit.so force revoke
session include password-auth

Now configure the /etc/security/group-2fa.conf, this file controls who/what are exempted from 2FA:

# This file controls which exemptions are made for
# disabling two factor authentication
#
# Users that are a member of the group no2fa are
# exempted from the requirement of providing 2FA
+ : (no2fa) : ALL

# And we also trust the systems from the Subnet 192.168.100.0/24
# This subnet also contains hosts that are very secure
+ : ALL : 192.168.100.0/24
# Keep this line, so that non-matching entries
# are forced to use 2FA
- : ALL : ALL
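
Note that the no2fa group referenced above is not created automatically; a minimal sketch, using the group and test user from this post:

# groupadd no2fa
# usermod -aG no2fa testuser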

Setting up the users

I created two users: one who is a member of the no2fa group and one who is not.

# id pieter
uid=500(pieter) gid=500(pieter) groups=500(pieter)
# id testuser
uid=502(testuser) gid=502(testuser) groups=502(testuser),503(no2fa)

First, try to log in as pieter (who is not in the no2fa group):

$ ssh pieter@centos-testvm
Password: ********
Password: ********
Password: ********

As you can see, no luck… /var/log/secure shows the following errors:

Jun 18 10:11:41 centos-testvm sshd[2896]: error: PAM: Cannot make/remove an entry for the specified session for pieter from workstation.example.com
Jun 18 10:11:41 centos-testvm sshd[2906]: pam_access(sshd:auth): access denied for user `pieter' from `workstation.example.com'
Jun 18 10:11:41 centos-testvm sshd(pam_google_authenticator)[2906]: Failed to read "/home/pieter/.ssh/google_authenticator"
Jun 18 10:11:41 centos-testvm sshd[2898]: Postponed keyboard-interactive for pieter from 172.31.3.250 port 16055 ssh2
Jun 18 10:11:42 centos-testvm sshd[2898]: Connection closed by 172.31.3.250

This is because google-authenticator has not been configured for this user yet…

The user testuser, who is in the no2fa group, can log in:

$ ssh testuser@centos-testvm
Password: ******
Last login: Wed Jun 18 09:51:05 2014 from workstation.example.com
[testuser@centos-testvm ~]$

So now we have to configure google-authenticator for the user pieter.

# sudo su - pieter
$ google-authenticator --label=${USER}@example.com --time-based --disallow-reuse --force \
  --window-size=6 --rate-limit=3 --rate-time=30 --secret=${HOME}/.ssh/google_authenticator
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/pieter@example.com%3Fsecret%3DDABHNLFFWEOJPPL5QQ
Your new secret key is: DABHNLFFWEOJPPL5QQ
Your verification code is 408011
Your emergency scratch codes are:
60523746
71110301
00006670
78000814
75909110
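
Since the PAM line points at ${HOME}/.ssh/google_authenticator, it’s worth double-checking that the secret actually got the expected SELinux context (ssh_home_t); if it ever ends up mislabeled, restorecon should fix it:

$ ls -Z ${HOME}/.ssh/google_authenticator
$ restorecon -v ${HOME}/.ssh/google_authenticator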

Now the user pieter can log in using 2FA:

$ ssh pieter@centos-testvm
Verification code: [Token]
Password: ********
Last login: Fri Jun 20 11:30:37 2014 from workstation.example.com
[pieter@centos-testvm ~]$

Courage for safety

This video was shown during an internal course about safety within the company I work for… It got me thinking about safety; not that I work off-shore… but incidents can happen everywhere, including a normal office area!

The video is originally in English, but I found it with Dutch subtitles as well.

Courage For Safety [Woodside] from arboTV on Vimeo.

Customizing CoreOS images

For quite a while I’ve been impressed by the Docker and CoreOS projects, and looking into them has been on my todo list for just as long…

Since I have access to a playground with some old workstations, I decided to start playing around with CoreOS using PXE boot (this was already set up in that environment).

So I followed the instructions for PXE as described on the CoreOS PXE Boot page, although it kept complaining about an “invalid or corrupt kernel image”, while the checksums (MD5/SHA1) were OK. Since the TFTP server is running on RHEL5, I had an old version of pxelinux; after downloading the latest syslinux binary from kernel.org, the system booted.

After the system booted, it ‘alerted’ me to the fact that the test environment uses an MTU of 9000 (jumbo frames)…

As far as I can tell, this cannot be fixed via the cloud-config-over-HTTP method, because the cloud-config is loaded by the OS and therefore requires an up-and-running network interface (with the correct MTU already set).

So I had to modify the CoreOS initrd to update the MTU in /usr/lib/systemd/network/99-default.link:

MTUBytes=9000
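
For context: in the image I looked at, 99-default.link contains only a [Link] section, so the end result should look roughly like this (exact contents may differ per CoreOS release):

[Link]
NamePolicy=kernel database onboard slot path
MACAddressPolicy=persistent
MTUBytes=9000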

So we need to unpack the initial ramdisk.

Unpacking the CoreOS Ramdisk

Step 1) Create a temporary location in /tmp:

# mkdir -p /tmp/coreos/{squashfs,initrd,original,custom}

Step 2) Download or copy the ramdisk to /tmp/coreos/original:

# cd /tmp/coreos/original/
# wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe_image.cpio.gz

Step 3) Unzip and unpack the ramdisk:

# gunzip coreos_production_pxe_image.cpio.gz
# cd ../initrd
# cpio -id < ../original/coreos_production_pxe_image.cpio

Step 4) Unsquash the squashfs filesystem and move the original aside:

# cd ../squashfs/
# unsquashfs ../initrd/usr.squashfs
# mv ../initrd/usr.squashfs ../usr.squashfs-original

Please note that you need at least the SquashFS 4.0 tools… if your distribution ships something older, you can download the source and compile the binaries yourself (at least that works on RHEL5).

And now you can access the unpacked image via /tmp/coreos/squashfs/squashfs-root and perform modifications; just use the path minus the usr prefix, relative to /tmp/coreos/squashfs/squashfs-root. So, summarized:

/usr/lib/systemd/network/99-default.link can be found in:
/tmp/coreos/squashfs/squashfs-root/lib/systemd/network/99-default.link

So hack around and apply modifications where needed.
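
For the MTU change itself: since the stock file only has the single [Link] section (see above), simply appending the setting inside the unpacked tree should do the trick:

# echo 'MTUBytes=9000' >> /tmp/coreos/squashfs/squashfs-root/lib/systemd/network/99-default.link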

Packing the CoreOS Customized Ramdisk

Now we have to repack the ramdisk, so we can load it…

Step 1) Repack the squashfs 

# cd /tmp/coreos/squashfs
# mksquashfs squashfs-root/ ../initrd/usr.squashfs -noappend -always-use-fragments

Please ensure you use squashfs tools 4.0!
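
You can quickly check which version is installed:

# mksquashfs -version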

Step 2) Make it all a cpio archive and gzip it:

# cd /tmp/coreos/initrd
# find . | cpio -o -H newc | gzip > ../custom/coreos_CUSTOM_pxe_image.cpio.gz

Now boot it and use the custom image as initrd. 
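
For completeness, a minimal pxelinux.cfg entry using the custom image; the kernel filename matches the standard CoreOS PXE kernel, but the cloud-config URL is a placeholder for whatever your environment uses:

default coreos
label coreos
  kernel coreos_production_pxe.vmlinuz
  append initrd=coreos_CUSTOM_pxe_image.cpio.gz cloud-config-url=http://example.com/pxe-cloud-config.yml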

 

GlusterFS weight-based rebalance (WIP)

Please note that I still have to perform further testing to validate whether it works as expected… but I would like to at least share it…

On one of the GlusterFS instances I manage I have a wide variety of disk (brick) sizes.

5x 1TB
2x 2TB
1x 3TB

However, GlusterFS currently does not take these different disk sizes into account when rebalancing the data.

After some searching on the Internet I noticed that there is a proposal to build this into the GlusterFS code (check this proposal on the Gluster Community).

So far the steps are actually pretty ‘simple’…

Step 1)

Download the Python scripts (as root):

# mkdir -p $HOME/glusterfs-weighted-rebalance
# cd $HOME/glusterfs-weighted-rebalance
# wget https://raw.githubusercontent.com/gluster/glusterfs/master/extras/rebalance.py \
       https://raw.githubusercontent.com/gluster/glusterfs/master/extras/volfilter.py

Step 2)

Run the Python script:

# python rebalance.py -l glusterfs-cluster Backup_Volume
Here are the xattr values for your size-weighted layout:
Backup_Volume-client-0: 0x00000002000000000000000015557a94
Backup_Volume-client-1: 0x000000020000000015557a952aaaf523
Backup_Volume-client-2: 0x00000002000000002aaaf52440006fb2
Backup_Volume-client-3: 0x000000020000000040006fb35555ea41
Backup_Volume-client-4: 0x00000002000000005555ea426aab64d0
Backup_Volume-client-5: 0x00000002000000006aab64d19555c505
Backup_Volume-client-6: 0x00000002000000009555c506d5559fca
Backup_Volume-client-7: 0x0000000200000000d5559fcbffffffff
The following subvolumes are still mounted:
Backup_Volume-client-0 on /tmp/tmp2oBBLB/brick0
Backup_Volume-client-1 on /tmp/tmp2oBBLB/brick1
Backup_Volume-client-2 on /tmp/tmp2oBBLB/brick2
Backup_Volume-client-3 on /tmp/tmp2oBBLB/brick3
Backup_Volume-client-4 on /tmp/tmp2oBBLB/brick4
Backup_Volume-client-5 on /tmp/tmp2oBBLB/brick5
Backup_Volume-client-6 on /tmp/tmp2oBBLB/brick6
Backup_Volume-client-7 on /tmp/tmp2oBBLB/brick7
Don’t forget to clean up when you’re done.
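
As far as I understand the layout format, each value consists of four 32-bit words: a small header followed by the start and end of that brick’s slice of the 32-bit DHT hash space. A quick sanity check for brick0, which holds 1TB out of 12TB total and should therefore get roughly 1/12 of the hash space:

$ printf '%d\n' 0x15557a94
357923476
$ echo $(( (1 << 32) / 12 ))
357913941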

Step 3)

Set the xattr trusted.glusterfs.size-weighted per brick to the values mentioned above:

# setfattr -n trusted.glusterfs.size-weighted -v 0x00000002000000000000000015557a94 /tmp/tmp2oBBLB/brick0
# setfattr -n trusted.glusterfs.size-weighted -v 0x000000020000000015557a952aaaf523 /tmp/tmp2oBBLB/brick1
# setfattr -n trusted.glusterfs.size-weighted -v 0x00000002000000002aaaf52440006fb2 /tmp/tmp2oBBLB/brick2
# setfattr -n trusted.glusterfs.size-weighted -v 0x000000020000000040006fb35555ea41 /tmp/tmp2oBBLB/brick3
# setfattr -n trusted.glusterfs.size-weighted -v 0x00000002000000005555ea426aab64d0 /tmp/tmp2oBBLB/brick4
# setfattr -n trusted.glusterfs.size-weighted -v 0x00000002000000006aab64d19555c505 /tmp/tmp2oBBLB/brick5
# setfattr -n trusted.glusterfs.size-weighted -v 0x00000002000000009555c506d5559fca /tmp/tmp2oBBLB/brick6
# setfattr -n trusted.glusterfs.size-weighted -v 0x0000000200000000d5559fcbffffffff /tmp/tmp2oBBLB/brick7

Step 4)

Unmount the temporary volumes that were mounted by rebalance.py:

# umount /tmp/tmp2oBBLB/*

Step 5) 

Start the Gluster rebalance of the volume:

# gluster volume rebalance Backup_Volume start
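
Progress can then be followed with the status subcommand:

# gluster volume rebalance Backup_Volume status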