RouterBoard as a Home Router – 4 1/2 years on

A while back I mentioned a follow-up to an old blog post about the RouterBoard that I’d recently purchased and set up for home use. This is a very belated update on that board.

My requirements have since changed from the original post, but not dramatically so. The requirement for LACP has disappeared, IPsec is no longer used, and a requirement for dynamic routing has appeared.

All in all, I have to say that I still cannot recommend RouterOS enough. I’ve been using it for the past 4 1/2 years, and have recommended it to a large number of folks.

The main reason behind this is that it just works: there’s not really any faffing about that needs to be done, and if you’re running the stable release, everything really does just work.

Feature-wise, it is right up there with some of the big brands (Cisco, Juniper et al.), however it’s fair to say not with the same price tag.


iSCSI target rescanning on VMware

If you’re using iSCSI on VMware but need to rescan the LUNs after a machine has booted (for example, a VM with DirectPath to a storage card enabled which is hosting your iSCSI LUNs), you can simply do so with the following command:

#!/bin/sh
ssh --USER--@--ESXIHOST-- 'esxcli storage core adapter rescan --all && esxcfg-rescan -A' 1>/dev/null 2>/dev/null
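
If the LUNs are served by a VM on the same host, the rescan can also be kicked off from that VM’s own startup once the target is listening. A rough sketch, assuming a netcat that supports -z and the same --USER--/--ESXIHOST-- placeholders as above:

#!/bin/sh
# Wait for the local iSCSI target to start accepting connections, then trigger the rescan
until nc -z 127.0.0.1 3260; do
    sleep 5
done
ssh --USER--@--ESXIHOST-- 'esxcli storage core adapter rescan --all && esxcfg-rescan -A' 1>/dev/null 2>/dev/null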

F5 load balancing on a per-app mountpoint

With some customers’ solutions I’ve seen a common requirement to make load balancing decisions based on the actual application serving the content. This obviously introduces a few issues if you’re using a single base URL for this.

If we take the example below

www.example.com/ -> Web Servers
www.example.com/app1 -> App1 Servers
www.example.com/app2 -> App2 Servers

With this in mind, it’s not possible to use traditional Layer 3 / Layer 4 load balancers; this requires an L7 load balancer, such as an F5 LTM or Riverbed Stingray (ZTM/ZXTM). I’m going to concentrate on the F5 in this example.

On the F5 you have the ability to use an iRule to perform load balancing actions. On a virtual server that has the “http” profile enabled, you can add an iRule similar to the one below.

# Name        : Application Load Balancing Split
# Date        : 19/03/2013
# Purpose     : Split loadbalancing based on application
# Methodology : Change pool based on url

# Set the pool name suffix based on the virtual server's name.
# Pools are always named <app>_$serverpool
when CLIENT_ACCEPTED {
    if { [virtual] contains "testing" } {
        set serverpool "testing"
    } else {
        set serverpool "liveserver"
    }
}

# Perform a load balancing decision based on the endpoint
# Split HTTP::path on '/' and use only the first path element
# This doesn't decode hex/URL-encoded paths; anything unrecognised goes to the default pool
when HTTP_REQUEST {
    switch [ lindex [split [string tolower [HTTP::path] ] "/" ]  1 ] {
        "app1" {
            set NEWPOOL "APP1_$serverpool"
        }
        "app2" {
            set NEWPOOL "APP2_$serverpool"
        }
        "default" {
            set NEWPOOL "default_$serverpool"
        }
    }
    pool $NEWPOOL

}

# Add an HTTP header to the response identifying the pool the request was sent to
when HTTP_RESPONSE {
    HTTP::header insert "X-AP" $NEWPOOL
}

There are three events that this iRule triggers on.

  1. CLIENT_ACCEPTED – triggered whenever a new connection is made to the load balancer. In our case the code checks whether the virtual server’s name contains “testing”; if it does, the serverpool variable is set to “testing”, otherwise it is set to “liveserver”.

  2. HTTP_REQUEST – triggered on any new HTTP request. In our case this performs a ‘switch’ (a multiple if/else statement) on the URL, after two “transformations”: first the path is converted to lower case, and second only the part between the first two /’s is kept. So for the URL http://www.example.com/app1/test we would use app1 in the switch statement. Based on the path we then set the NEWPOOL variable, and select that pool.

  3. HTTP_RESPONSE – triggered when the server sends a response to an HTTP request. We add the “X-AP” header to the response, set to the NEWPOOL variable.
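
A quick way to check the split is behaving is to look for that header in a response; the hostname here is just the example one from above:

# The X-AP response header names the pool that served the request
curl -sI http://www.example.com/app1/test | grep -i 'X-AP'
# X-AP: APP1_liveserver   (or APP1_testing when hitting the testing virtual server)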

‘Instant’ Upload to Rackspace CloudFiles

Using inotify and the ‘swift’ client tools it is possible to automatically upload files to CloudFiles as they are written to disk.

This code is untested and might cause planes to drop from the sky, use it at your own risk!

#!/bin/bash
DIRECTORY='/home/welby/'
CONTAINER='testing'
USERNAME="welbycloud"
KEY="APIKEYFORCLOUD"
VERSION="2.0"
inotifywait -mr --format '%w%f' -e close_write "$DIRECTORY" | while read -r filename; do
    # Skip files that have already disappeared again (e.g. temporary files)
    if [ ! -e "$filename" ]; then
       sleep 2
       if [ ! -e "$filename" ]; then
           continue
       fi
    fi
    swift upload -s -A https://auth.api.rackspacecloud.com/v1.0 -U $USERNAME -K $KEY -V $VERSION $CONTAINER "$filename"
done
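
To confirm the uploads are landing, the same credentials can be used to list the container:

# List the contents of the 'testing' container with the same auth details as the script
swift -A https://auth.api.rackspacecloud.com/v1.0 -U welbycloud -K APIKEYFORCLOUD -V 2.0 list testing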

Google Authenticator F5 iRule

Two Factor authentication is rather hit and miss in terms of support from web apps.

A quick look around the web turns up an article on DevCentral with a solution implementing Google Authenticator with LDAP. As I don’t run an LDAP server at home I needed to hack the script up a bit. This iRule implements the two factor side of things from the above article, but skips the LDAP side, as it’s not needed!

when RULE_INIT {
  # auth parameters
  set static::auth_cookie "bigip_virtual_auth"
  set static::auth_cookie_aes_key "AES 128 abcdef0123456789abcdef0123456789"
  set static::auth_timeout 86400
  set static::auth_lifetime 86400

  # name of datagroup that holds AD user to Google Authenticator mappings
  set static::user_to_google_auth_class "user_to_google_auth"

  # lock the user out after x attempts for a period of x seconds
  set static::lockout_attempts 3
  set static::lockout_period 30

  # 0 - logging off
  # 1 - log only successes, failures, and lockouts
  # 2 - log every attempt to access virtual as well as authentication process details
  set static::debug 1

  # HTML for login page - minimal example markup; only the user/ga_code field
  # names matter to the iRule, and the form posts back to the same URI so the
  # orig_uri query parameter is preserved
  set static::login_page {
    <html>
      <head><title>Authorization Required</title></head>
      <body>
        <h1>Authorization Required</h1>
        <form method="POST">
          user: <input type="text" name="user"><br>
          Google Authenticator code: <input type="text" name="ga_code"><br>
          <input type="submit" value="Log in">
        </form>
      </body>
    </html>
  }
}

when CLIENT_ACCEPTED {
  # per virtual status tables for lockouts and users' auth_status
  set lockout_state_table "[virtual name]_lockout_status"
  set auth_status_table "[virtual name]_auth_status"
  set authid_to_user_table "[virtual name]_authid_to_user"

  # record client IP, [IP::client_addr] not available in AUTH_RESULT
  set user_ip [IP::client_addr]

  # set initial values for auth_id and auth_status
  set auth_id [md5 [expr rand()]]
  set auth_status 2
  set auth_req 1
}

when HTTP_REQUEST {
  if { $auth_req == 1 } {
    # track original URI user requested prior to login redirect
    set orig_uri [b64encode [HTTP::uri]]

    if { [HTTP::cookie exists $static::auth_cookie] && !([HTTP::path] starts_with "/google/auth/login") } {
      set auth_id_current [AES::decrypt $static::auth_cookie_aes_key [b64decode [HTTP::cookie value $static::auth_cookie]]]
      set auth_status [table lookup -notouch -subtable $auth_status_table $auth_id_current]
      set user [table lookup -notouch -subtable $authid_to_user_table $auth_id_current]

      if { $auth_status == 0 } {
        if { $static::debug >= 2 } { log local0. "$user ($user_ip): Found valid auth cookie (auth_id=$auth_id_current), passing request through" }
      } else {
        if { $static::debug >= 2 } { log local0. "Found invalid auth cookie (auth_id=$auth_id_current), redirecting to login" }
        HTTP::redirect "/google/auth/login?orig_uri=$orig_uri"
      }
    } elseif { ([HTTP::path] starts_with "/google/auth/login") && ([HTTP::method] eq "GET") } {
      HTTP::respond 200 content $static::login_page
    } elseif { ([HTTP::path] starts_with "/google/auth/login") && ([HTTP::method] eq "POST") } {
      set orig_uri [b64decode [URI::query [HTTP::uri] "orig_uri"]]
      HTTP::collect [HTTP::header Content-Length]
    } else {
      if { $static::debug >= 2 } { log local0. "Request for [HTTP::uri] from unauthenticated client ($user_ip), redirecting to login" }
      HTTP::redirect "/google/auth/login?orig_uri=$orig_uri"
    }
  }
}

when HTTP_REQUEST_DATA {
  if { $auth_req == 1 } {
    set user ""
    set ga_code ""

    # pull the user and ga_code fields out of the POSTed form data
    foreach param [split [HTTP::payload] &] {
      set [lindex [split $param =] 0] [lindex [split $param =] 1]
    }

    if { ($user ne "") && ([string length $ga_code] == 6) } {
      # look up the user's Base32-encoded Google Authenticator secret
      set ga_code_b32 [class lookup $user $static::user_to_google_auth_class]

      set prev_attempts [table incr -notouch -subtable $lockout_state_table $user]
      table timeout -subtable $lockout_state_table $user $static::lockout_period

      if { $prev_attempts <= $static::lockout_attempts } {
        if { $ga_code_b32 ne "" } {
          if { $static::debug >= 2 } { log local0. "$user ($user_ip): Starting authentication sequence, attempt #$prev_attempts" }

          # begin - Base32 decode to binary
          # Base32 alphabet (see RFC 4648)
          array set static::b32_alphabet {
            A 0  B 1  C 2  D 3  E 4  F 5  G 6  H 7
            I 8  J 9  K 10 L 11 M 12 N 13 O 14 P 15
            Q 16 R 17 S 18 T 19 U 20 V 21 W 22 X 23
            Y 24 Z 25 2 26 3 27 4 28 5 29 6 30 7 31
          }

          set l [string length $ga_code_b32]
          set n 0
          set j 0
          set ga_code_bin ""

          for { set i 0 } { $i < $l } { incr i } {
            set n [expr $n << 5]
            set n [expr $n + $static::b32_alphabet([string index $ga_code_b32 $i])]
            set j [incr j 5]

            if { $j >= 8 } {
              set j [incr j -8]
              append ga_code_bin [format %c [expr ($n & (0xFF << $j)) >> $j]]
            }
          }
          # end - Base32 decode to binary

          # begin - HMAC-SHA1 calculation of Google Auth token
          set time [binary format W* [expr [clock seconds] / 30]]

          set ipad ""
          set opad ""

          for { set j 0 } { $j < [string length $ga_code_bin] } { incr j } {
            binary scan $ga_code_bin @${j}H2 k
            set o [expr 0x$k ^ 0x5C]
            set i [expr 0x$k ^ 0x36]
            append ipad [format %c $i]
            append opad [format %c $o]
          }

          while { $j < 64 } {
            append ipad 6
            append opad \\
            incr j
          }

          binary scan [sha1 $opad[sha1 ${ipad}${time}]] H* token
          # end - HMAC-SHA1 calculation of Google Auth hex token

          # begin - extract code from Google Auth hex token
          set offset [expr ([scan [string index $token end] %x] & 0x0F) << 1]
          set code [expr (0x[string range $token $offset [expr $offset + 7]] & 0x7FFFFFFF) % 1000000]
          set code [format %06d $code]
          # end - extract code from Google Auth hex token

          if { $ga_code eq $code } {
            if { $static::debug >= 2 } { log local0. "$user ($user_ip): Google Authenticator TOTP token matched" }
            set auth_status 0
            set auth_id_aes [b64encode [AES::encrypt $static::auth_cookie_aes_key $auth_id]]

            table add -subtable $auth_status_table $auth_id $auth_status $static::auth_timeout $static::auth_lifetime
            table add -subtable $authid_to_user_table $auth_id $user $static::auth_timeout $static::auth_lifetime

            if { $static::debug >= 1 } { log local0. "$user ($user_ip): authentication successful (auth_id=$auth_id), redirecting to $orig_uri" }
            HTTP::respond 302 "Location" $orig_uri "Set-Cookie" "$static::auth_cookie=$auth_id_aes;"
            HTTP::collect
          } else {
            if { $static::debug >= 1 } { log local0. "$user ($user_ip): authentication failed - Google Authenticator TOTP token not matched" }
            HTTP::respond 200 content $static::login_page
          }
        } else {
          if { $static::debug >= 1 } { log local0. "$user ($user_ip): could not find valid Google Authenticator secret for $user" }
          HTTP::respond 200 content $static::login_page
        }
      } else {
        if { $static::debug >= 1 } { log local0. "$user ($user_ip): attempting authentication too frequently, locking out for ${static::lockout_period}s" }
        HTTP::respond 200 content "You've made too many attempts too quickly. Please wait $static::lockout_period seconds and try again."
      }
    } else {
      HTTP::respond 200 content $static::login_page
    }
  }
}

Yubikey and server authentication

After starting to use the Yubikey for LastPass and various other online services, I’ve also started using my Yubikey for SSH access to my server(s).

I’ve touched on google_authenticator and pam_yubico for authentication in a previous post; however, here I will be going into a bit more detail.

Taking a machine at home as an example, my requirements are simple:

  • No SSH key access to be allowed – there is no way to require a second factor with an SSH key (passphrases can be removed or a new key generated)
  • Access from local machines to be allowed without two factor being enforced
  • Yubikey to be the primary two-factor method
  • Fall back to Google Authenticator should the Yubico servers be down, there be an issue with my keys, or I just not have a USB port available (i.e. I’m on a phone or whatever)

In order to meet these requirements I’m going to need the following:

  • yubico-pam (the Yubikey PAM module)
  • Google Authenticator PAM
  • pam_access

The server is running Arch Linux, and luckily all of these are in the AUR, so I’m not going to cover installing the modules.

In order to restrict SSHd access as above I need the following auth lines in /etc/pam.d/sshd:

    # Check unix password
    auth            required        pam_unix.so try_first_pass
    # check to see if the User/IP combo is on the skip list - if so, skip the next two lines
    auth            [success=2 default=ignore] pam_access.so accessfile=/etc/security/access_yubico.conf
    # Check /etc/yubikey for the users yubikey and skip the next line if it all works
    auth            [success=1 default=ignore ]     pam_yubico.so id=1 url=https://api.yubico.com/wsapi/2.0/verify?id=%d&otp=%s authfile=/etc/yubikey
    # Check against google authenticator
    auth            required        pam_google_authenticator.so
    auth            required        pam_env.so
    

The next step is to ensure that the relevant users and IPs are listed in /etc/security/access_yubico.conf:

    # Allow welby from 1.2.3.4
    + : welby : 1.2.3.4
    # Deny all others
    - : ALL : ALL
    

After this is set up we will also need to set up the Yubikey authfile, /etc/yubikey:

    welby:ccccccdddddd:cccccccccccc
    

I’m not going to cover configuring Google Authenticator with the google-authenticator command.

The final changes are to /etc/ssh/sshd_config, ensuring that the following are set:

    PasswordAuthentication no
    PubkeyAuthentication no
    PermitRootLogin no
    ChallengeResponseAuthentication yes
    UsePAM yes
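
With sshd restarted it’s worth confirming that keys really are refused and that the PAM stack is what gets offered; the hostname below is obviously just a stand-in:

    # Should be rejected outright - public keys are disabled
    ssh -o PreferredAuthentications=publickey welby@myserver
    # Should prompt for the password, then the Yubikey OTP or Google Authenticator code
    ssh -o PreferredAuthentications=keyboard-interactive welby@myserver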
    

PAM and Two Factor authentication

As two factor authentication is a requirement for PCI-DSS (the Payment Card Industry standard), and an SSH key with a password is not always deemed an acceptable form of two factor authorisation, there is now a surge in different forms of two factor auth, all with their own pros and cons.

For a small business or ‘prosumer’ (professional consumer) the market incumbent (RSA) is not a viable option, due to the price of the tokens and the software/appliance that is required. There are cheaper (or free!) alternatives, two of which I’ve used: Google Authenticator and Yubikey.

Google Authenticator is an OATH-TOTP system which, much like RSA, generates a one-time password every 30 seconds. It’s available as an app for the big three mobile platforms (iOS, Android and BlackBerry).

Yubikey is a hardware token that emulates a USB keyboard and, when the button is pressed, types a one-time password. It is supported by services such as LastPass.

Both solutions come with their own PAM modules. Installing either is simple, but what happens if you want to offer both, yet only require one of them?

Luckily PAM makes it quite easy!

    auth            required        pam_unix.so try_first_pass
    auth            [success=1 default=ignore ]     pam_yubico.so id=1 url=https://api.yubico.com/wsapi/2.0/verify?id=%d&otp=%s
    auth            required        pam_google_authenticator.so
    

In the above example the user must enter a password and then provide either their Yubikey OTP or their Google Authenticator code.

Should the password be incorrect, the user will still be prompted for their Yubikey or Google Authenticator code, but the login will then fail. Should they provide a password and then their Yubikey, they will not be asked for their Google Authenticator code. Should they provide a password and no Yubikey, they will be prompted for their Google Authenticator code!

Auditd logging all commands

A common requirement for PCI-DSS is for all commands run by a user who has admin privileges to be logged. There are many ways to do this; most of the time people will opt for a change to the bash configuration or rely on sudo, but there are many ways around those (such as providing the command that you wish to run as a parameter to SSH). The Linux kernel, however, provides a full auditing system, and using a utility such as auditd we are able to log all commands that are run. The configuration for this is actually quite simple. In /etc/audit/audit.rules we need to ensure that the following exists:

    -a exit,always -F arch=b64 -S execve
    -a exit,always -F arch=b32 -S execve

This will capture any execve system call (on exit) and log it to the auditd log. A log entry will look similar to the one below.

    type=SYSCALL msg=audit(1318930500.123:3020171): arch=c000003e syscall=59 success=yes exit=0 a0=7fff65179def a1=7fff65179ec0 a2=7fff6517d060 a3=7ff54ee36c00 items=3 ppid=9200 pid=9202 auid=0 uid=1000 gid=100 euid=1000 suid=1000 fsuid=1000 egid=100 sgid=100 fsgid=100 tty=(none) ses=4 comm="xscreensaver-ge" exe="/usr/bin/perl" key=(null)
    type=EXECVE msg=audit(1318930500.123:3020171): argc=5 a0="/usr/bin/perl" a1="-w" a2="/usr/bin/xscreensaver-getimage-file" a3="--name" a4="/home/welby/Pictures"
    type=EXECVE msg=audit(1318930500.123:3020171): argc=3 a0="/usr/bin/perl" a1="-w" a2="/usr/bin/xscreensaver-getimage-file"
    type=CWD msg=audit(1318930500.123:3020171): cwd="/home/welby/Downloads"
    type=PATH msg=audit(1318930500.123:3020171): item=0 name="/usr/bin/xscreensaver-getimage-file" inode=208346 dev=fe:02 mode=0100755 ouid=0 ogid=0 rdev=00:00
    type=PATH msg=audit(1318930500.123:3020171): item=1 name=(null) inode=200983 dev=fe:02 mode=0100755 ouid=0 ogid=0 rdev=00:00
    type=PATH msg=audit(1318930500.123:3020171): item=2 name=(null) inode=46 dev=fe:02 mode=0100755 ouid=0 ogid=0 rdev=00:00
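
Pulling entries back out of the audit log is easiest with ausearch; for example, to show every command run by UID 1000 in a human-readable form:

    # All execve records for UID 1000, with the numeric fields decoded
    ausearch -sc execve -ui 1000 -i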

This should keep most auditors happy :)

Moving part of an LVM VG from one PV to another

Let’s say that you’ve got multiple physical volumes (PVs) in a volume group (VG) and you want to migrate the extents from one PV to another; this can be accomplished with a quick and easy pvmove command.

For example:

    pvdisplay -m
    --- Physical volume ---
    PV Name /dev/sdb1
    VG Name INTEL_RAID
    PV Size 2.73 TiB / not usable 4.00 MiB
    Allocatable yes (but full)
    PE Size 4.00 MiB
    Total PE 714539
    Free PE 0
    Allocated PE 714539
    PV UUID XWiRzE-Ol3d-38En-ND6b-qo93-4zeF-xv8zDv

    --- Physical Segments ---
    Physical extent 0 to 604876:
    Logical volume /dev/INTEL_RAID/MEDIA
    Logical extents 0 to 604876
    Physical extent 604877 to 617676:
    Logical volume /dev/INTEL_RAID/backups_mimage_0
    Logical extents 25600 to 38399
    Physical extent 617677 to 617701:
    Logical volume /dev/INTEL_RAID/EPG
    Logical extents 0 to 24
    Physical extent 617702 to 643301:
    Logical volume /dev/INTEL_RAID/backups_mimage_0
    Logical extents 0 to 25599
    Physical extent 643302 to 714538:
    Logical volume /dev/INTEL_RAID/MEDIA
    Logical extents 604877 to 676113

    --- Physical volume ---
    PV Name /dev/sdc1
    VG Name INTEL_RAID
    PV Size 2.04 TiB / not usable 2.00 MiB
    Allocatable yes
    PE Size 4.00 MiB
    Total PE 535726
    Free PE 430323
    Allocated PE 105403
    PV UUID laOuKy-5FZa-cJ3h-JffV-qUub-diKC-O0wVqK

    --- Physical Segments ---
    Physical extent 0 to 25599:
    Logical volume /dev/INTEL_RAID/backups_mimage_1
    Logical extents 0 to 25599
    Physical extent 25600 to 54202:
    Logical volume /dev/INTEL_RAID/MEDIA
    Logical extents 676114 to 704716
    Physical extent 54203 to 67002:
    Logical volume /dev/INTEL_RAID/NZB_DOWNLOAD
    Logical extents 0 to 12799
    Physical extent 67003 to 79802:
    Logical volume /dev/INTEL_RAID/backups_mimage_1
    Logical extents 25600 to 38399
    Physical extent 79803 to 105402:
    Logical volume /dev/INTEL_RAID/OLD_VM
    Logical extents 0 to 25599
    Physical extent 105403 to 535725:
    FREE

From here you can see that /dev/INTEL_RAID/MEDIA is a logical volume (LV) with extents on both PVs within the VG. If I wanted to grow my mirrored LV, which requires space on both PVs, I’d have to migrate some of the extents of another LV. To move some of the MEDIA LV, I should be able to do the following:

    pvmove /dev/sdb1:643302-714538 /dev/sdc1

This will move physical extents 643302-714538 onto the next contiguous free space on /dev/sdc1.
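
pvmove prints its progress as it runs; it can also be backgrounded and the PVs polled while the extents drain (same device names as above):

    pvmove -b /dev/sdb1:643302-714538 /dev/sdc1
    watch -n 30 pvs -o pv_name,pv_size,pv_free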

Dahdi In LXC

At home we use various VoIP providers, either to get free calls to certain places (GTalk/GVoice to America, for instance) or to reach various other destinations over SIP.

I’ve been using Asterisk for years (I remember the 0.7 release) and have implemented it for companies before, usually with no issues, barring the continual deadlocks in the 1.2 range. Recently I enabled my VoIP network segment for IPv6, only to find that GTalk stopped working on IPv6 Day. After a bit of digging about, it seems that Asterisk 1.8 does support IPv6! But GTalk and similar are not supported; SIP is in fact the only first-class citizen, it seems.

I’ve toyed with using FreeSWITCH before, but unfortunately have had varied success with FreeTDM to Dahdi with BT caller ID and the like. I did hack in support for it, but I’m not too sure I trust my code, as my C is quite rusty to say the least.

I did however come up with another solution!

As I’m running a moderately new Linux kernel I can use LXC – Linux Containers – which are effectively the same idea as a WPAR, chroot, OpenVZ, whatever. After setting up Asterisk in the LXC I needed to expose my Dahdi card to it. LXC allows you to restrict access on a per-device basis. I’ve set up Dahdi on the host machine as normal so the kernel modules can be loaded etc. Once this is done I’ve performed the following within the LXC:

    cd /
    mkdir dev/dahdi
    mknod dev/dahdi/1 c 196 1
    mknod dev/dahdi/2 c 196 2
    mknod dev/dahdi/3 c 196 3
    mknod dev/dahdi/4 c 196 4
    mknod dev/dahdi/channel c 196 254
    mknod dev/dahdi/ctl c 196 0
    mknod dev/dahdi/pseudo c 196 255
    mknod dev/dahdi/timer c 196 253
    mknod dev/dahdi/transcode c 196 250

This creates the device nodes within /dev/ for my 4 Dahdi channels (3 FXS, 1 FXO if anyone is interested). After this I’ve added the following to the LXC config file, to allow the container access to these devices:


    # If you want to be lazy just add this line
    #lxc.cgroup.devices.allow = c 196:* rwm

    #Otherwise use the following
    lxc.cgroup.devices.allow = c 196:0 rwm
    lxc.cgroup.devices.allow = c 196:1 rwm
    lxc.cgroup.devices.allow = c 196:2 rwm
    lxc.cgroup.devices.allow = c 196:3 rwm
    lxc.cgroup.devices.allow = c 196:4 rwm
    lxc.cgroup.devices.allow = c 196:250 rwm
    lxc.cgroup.devices.allow = c 196:253 rwm
    lxc.cgroup.devices.allow = c 196:254 rwm
    lxc.cgroup.devices.allow = c 196:255 rwm

This will obviously only work for the first 4 Dahdi channels, but if you need more, just continue adding the 196:x lines, replacing x with the channel number, and also ensuring that you create the device nodes in the same way.
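
Once the container is started with this config, a quick check from the Asterisk CLI inside it should show the channels just as they appear on the host:

    # From inside the container
    asterisk -rx "dahdi show channels"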