Linux Projects

PAM and Two Factor authentication

As two factor authentication is a requirement of PCI-DSS (the Payment Card Industry Data Security Standard), and an SSH key with a password is not always deemed an acceptable form of two factor authentication, there has been a surge in different forms of two factor auth, each with its own pros and cons.

For a small business or ‘prosumer’ (professional consumer), the market incumbent (RSA) is not a viable option due to the price of the tokens and the software / appliance that is required. There are cheaper (or free!) alternatives, two of which I’ve used: Google Authenticator and Yubikey.

Google Authenticator is an OATH-TOTP system that, much like RSA, generates a one time password every 30 seconds. It’s available as an app for the big three mobile platforms (iOS, Android and BlackBerry).

Yubikey is a hardware token that emulates a USB keyboard and, when its button is pressed, generates a one time password. It is supported by services such as LastPass.

Both solutions ship with their own PAM modules, and installation of either is simple. But what happens if you want to support both, while only requiring one of them at login?

Luckily PAM makes it quite easy!

auth            required        pam_unix.so try_first_pass
auth            [success=1 default=ignore]      pam_yubico.so id=1 url=
auth            required        pam_google_authenticator.so

In the above example (using the stock pam_unix, pam_yubico and pam_google_authenticator modules; the Yubico id= and url= values are site specific) the user must enter their password and then provide either their Yubikey or their Google Authenticator code.

Should the password be incorrect, the user will still be prompted for their Yubikey or Google Authenticator code, but will then fail. Should they provide their password and then their Yubikey, they will not be asked for their Google Authenticator code. Should they provide their password and not a Yubikey, they will be prompted for their Google Authenticator code!
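A stack like this can be exercised without repeatedly logging in by using the pamtester utility, assuming it is installed and the stack is saved under a service name (the service name and user below are illustrative):

```
# Run the auth phase of the sshd PAM stack for a given user
pamtester sshd someuser authenticate
```

This walks the whole stack interactively, so you can confirm the skip behaviour (Yubikey satisfying the stack, or falling through to Google Authenticator) before touching a production login path.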


Auditd logging all commands

A common requirement for PCI-DSS is for all commands run by a user with admin privileges to be logged. There are many ways to do this; most of the time people will opt for a change to the bash configuration or rely on sudo. There are also many ways around those (such as providing the command that you wish to run as a parameter to SSH). The Linux kernel, however, provides a full auditing system, and using a utility such as auditd we are able to log all commands that are run. The configuration for this is actually quite simple. In /etc/audit/audit.rules we need to ensure that the following exists.

-a exit,always -F arch=b64 -S execve
-a exit,always -F arch=b32 -S execve

This will capture any execve system call (on exit) and will log this to the auditd log. A log entry will look similar to below.

type=SYSCALL msg=audit(1318930500.123:3020171): arch=c000003e syscall=59 success=yes exit=0 a0=7fff65179def a1=7fff65179ec0 a2=7fff6517d060 a3=7ff54ee36c00 items=3 ppid=9200 pid=9202 auid=0 uid=1000 gid=100 euid=1000 suid=1000 fsuid=1000 egid=100 sgid=100 fsgid=100 tty=(none) ses=4 comm="xscreensaver-ge" exe="/usr/bin/perl" key=(null)
type=EXECVE msg=audit(1318930500.123:3020171): argc=5 a0="/usr/bin/perl" a1="-w" a2="/usr/bin/xscreensaver-getimage-file" a3="--name" a4="/home/welby/Pictures"
type=EXECVE msg=audit(1318930500.123:3020171): argc=3 a0="/usr/bin/perl" a1="-w" a2="/usr/bin/xscreensaver-getimage-file"
type=CWD msg=audit(1318930500.123:3020171): cwd="/home/welby/Downloads"
type=PATH msg=audit(1318930500.123:3020171): item=0 name="/usr/bin/xscreensaver-getimage-file" inode=208346 dev=fe:02 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1318930500.123:3020171): item=1 name=(null) inode=200983 dev=fe:02 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1318930500.123:3020171): item=2 name=(null) inode=46 dev=fe:02 mode=0100755 ouid=0 ogid=0 rdev=00:00
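Once the rules are loaded, the audit userspace tools can pull these events back out again; for example (flags as per the ausearch and aureport man pages):

```
# Show execve events recorded since this morning, with fields decoded
ausearch -sc execve --start today -i

# Summarise which executables were run
aureport -x --summary
```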

This should keep most auditors happy 🙂


Moving part of an LVM VG from one PV to another

Let’s say that you’ve got multiple Physical Volumes (PVs) in a Volume Group (VG) and you want to migrate the extents from one PV to another. This can be accomplished with a quick and easy pvmove command.

For example

pvdisplay -m
--- Physical volume ---
PV Name /dev/sdb1
PV Size 2.73 TiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 714539
Free PE 0
Allocated PE 714539
PV UUID XWiRzE-Ol3d-38En-ND6b-qo93-4zeF-xv8zDv

--- Physical Segments ---
Physical extent 0 to 604876:
Logical volume /dev/INTEL_RAID/MEDIA
Logical extents 0 to 604876
Physical extent 604877 to 617676:
Logical volume /dev/INTEL_RAID/backups_mimage_0
Logical extents 25600 to 38399
Physical extent 617677 to 617701:
Logical volume /dev/INTEL_RAID/EPG
Logical extents 0 to 24
Physical extent 617702 to 643301:
Logical volume /dev/INTEL_RAID/backups_mimage_0
Logical extents 0 to 25599
Physical extent 643302 to 714538:
Logical volume /dev/INTEL_RAID/MEDIA
Logical extents 604877 to 676113

--- Physical volume ---
PV Name /dev/sdc1
PV Size 2.04 TiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 535726
Free PE 430323
Allocated PE 105403
PV UUID laOuKy-5FZa-cJ3h-JffV-qUub-diKC-O0wVqK

--- Physical Segments ---
Physical extent 0 to 25599:
Logical volume /dev/INTEL_RAID/backups_mimage_1
Logical extents 0 to 25599
Physical extent 25600 to 54202:
Logical volume /dev/INTEL_RAID/MEDIA
Logical extents 676114 to 704716
Physical extent 54203 to 67002:
Logical volume /dev/INTEL_RAID/NZB_DOWNLOAD
Logical extents 0 to 12799
Physical extent 67003 to 79802:
Logical volume /dev/INTEL_RAID/backups_mimage_1
Logical extents 25600 to 38399
Physical extent 79803 to 105402:
Logical volume /dev/INTEL_RAID/OLD_VM
Logical extents 0 to 25599
Physical extent 105403 to 535725:

From here you can see that /dev/INTEL_RAID/MEDIA is a Logical Volume (LV) with extents on both PVs within the VG. If I wanted to grow my mirrored LV, which requires space on both PVs, I’d have to migrate some of the extents of another LV. If I wanted to move some of the MEDIA LV, I should be able to do the following

pvmove /dev/sdb1:643302-714538 /dev/sdc1

This will move extents 643302-714538 to the next contiguous block of free extents on /dev/sdc1.
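A move of this size can take hours, so it is worth knowing that pvmove can be backgrounded and polled (options as per the pvmove man page):

```
# Background the move and report progress every 10 seconds
pvmove --background -i 10 /dev/sdb1:643302-714538 /dev/sdc1

# Afterwards, confirm the new segment layout on the destination PV
pvdisplay -m /dev/sdc1
```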


Dahdi In LXC

At home we use various VoIP providers, either to get free calls to various places (GTalk/GVoice to America for instance) or to reach various other destinations over SIP providers.

I’ve been using Asterisk for years (I remember the 0.7 release) and have implemented it for companies before, usually with no issues, barring the continual deadlocks in the 1.2 range. Recently I enabled my VoIP network segment for IPv6, only to find that GTalk stopped working on IPv6 Day. After a bit of digging about, it seems that Asterisk 1.8 does support IPv6! But GTalk and similar are not supported; SIP is in fact the only first class citizen, it seems.

I’ve toyed with using FreeSWITCH before, but unfortunately have had varied success with FreeTDM to Dahdi with BT caller ID and the like. I did hack in support for it, but I’m not too sure if I trust my code, as my C is quite rusty to say the least.

I did however come up with another solution!

As I’m running a moderately new Linux kernel I can use LXC (Linux Containers), which are effectively the same idea as a WPAR, chroot, OpenVZ and the like. After setting up Asterisk in the LXC I needed to expose my Dahdi card to it. LXC allows you to restrict access on a per device basis. I’ve set up Dahdi on the host machine as normal so the kernel modules can be loaded etc. Once this is done I’ve performed the following within the LXC

cd /
mkdir dev/dahdi
mknod dev/dahdi/1 c 196 1
mknod dev/dahdi/2 c 196 2
mknod dev/dahdi/3 c 196 3
mknod dev/dahdi/4 c 196 4
mknod dev/dahdi/channel c 196 254
mknod dev/dahdi/ctl c 196 0
mknod dev/dahdi/pseudo c 196 255
mknod dev/dahdi/timer c 196 253
mknod dev/dahdi/transcode c 196 250

This creates the device nodes within /dev/ for my 4 Dahdi channels (3 FXS, 1 FXO if anyone is interested). After this I’ve added the following to the LXC config file, to allow the LXC to have access to these devices

# If you want to be lazy just add this line
#lxc.cgroup.devices.allow = c 196:* rwm

#Otherwise use the following
lxc.cgroup.devices.allow = c 196:0 rwm
lxc.cgroup.devices.allow = c 196:1 rwm
lxc.cgroup.devices.allow = c 196:2 rwm
lxc.cgroup.devices.allow = c 196:3 rwm
lxc.cgroup.devices.allow = c 196:4 rwm
lxc.cgroup.devices.allow = c 196:250 rwm
lxc.cgroup.devices.allow = c 196:253 rwm
lxc.cgroup.devices.allow = c 196:254 rwm
lxc.cgroup.devices.allow = c 196:255 rwm

This will obviously only work for the first 4 Dahdi channels, but if you need more, just continue adding the 196:x lines, replacing x with the channel number, and also ensure that you create the device nodes in the same way.
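For larger channel counts the node creation is easily scripted; a quick sketch (the channel count of 8 here is just an example, adjust to suit your card):

```
# Create device nodes for Dahdi channels 1-8 (major number 196)
mkdir -p /dev/dahdi
for ch in $(seq 1 8); do
    mknod /dev/dahdi/$ch c 196 $ch
done
```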


A quick (and quite unscientific!) break down of Rackspace CloudFiles UK vs Amazon S3 (Ireland)

(Disclaimer – I’m a Rackspace employee. The postings on this site are my own, may be biased, and don’t necessarily represent Rackspace’s positions, strategies or opinions. These tests have been performed independently from my employer, by myself.)

As Rackspace have recently launched a ‘beta’ Cloud Files service within the UK, I thought I would run a few tests to compare it to Amazon’s S3 service running from Ireland.

I took a set of files totalling 18.7GB, with file sizes ranging between 1KB and 25MB; the contents were mainly photos (both JPEG, and Canon and Nikon RAW), plain text files, gzipped tarballs and a few Microsoft Word documents just for good measure.

The following python scripts were used:

Cloud Files – Upload

import cloudfiles
import sys, os

# api_username, api_key, auth_url and dest_container are assumed to be
# defined elsewhere (they were elided from the original post)
local_file_list = sys.stdin.readlines()
cf = cloudfiles.get_connection(api_username, api_key, authurl=auth_url)
containers = cf.get_all_containers()
for container in containers:
    if container.name == dest_container:
        backup_container = container

def upload_cf(local_file):
    u = backup_container.create_object(local_file)
    u.load_from_filename(local_file)

for local_file in local_file_list:
    local_file = local_file.rstrip()
    local_file_size = os.stat(local_file).st_size / 1024
    print "uploading %s (%dK)" % (local_file, local_file_size)
    upload_cf(local_file)



Cloud Files – Download

import cloudfiles
import sys, os

# Setup the connection (credentials elided as in the upload script)
cf = cloudfiles.get_connection(api_username, api_key, authurl=auth_url)

# Get a list of containers
containers = cf.get_all_containers()

# Lets find our container
for container in containers:
    if container.name == dest_container:
        backup_container = container

# Create the container if it does not exist
try:
    backup_container
except NameError:
    backup_container = cf.create_container(dest_container)

# We've now got our container, lets get a file list and download each object
def build_remote_file_list(container):
    remote_file_list = container.list_objects_info()
    for remote_file in remote_file_list:
        f = open(remote_file['name'], 'w')
        rf = container.get_object(remote_file['name'])
        print remote_file['name']
        for chunk in rf.stream():
            f.write(chunk)
        f.close()
    return remote_file_list

remote_file_list = build_remote_file_list(backup_container)
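The download loop above leans on one pattern worth noting: writing an iterable of chunks straight to disk, so a large object never has to sit wholly in memory. A minimal, library-free sketch of that pattern (the chunk values and file name here are just for illustration):

```python
import os
import tempfile

def stream_to_file(chunks, path):
    # Write an iterable of byte chunks to disk one chunk at a time,
    # mirroring what the cloudfiles object.stream() loop does
    with open(path, 'wb') as f:
        for chunk in chunks:
            f.write(chunk)

# Example: three chunks end up concatenated on disk
path = os.path.join(tempfile.gettempdir(), 'demo_chunks.bin')
stream_to_file([b'abc', b'def', b'g'], path)
with open(path, 'rb') as f:
    print(f.read())
```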


S3 – Upload

from boto.s3.connection import S3Connection
from boto.s3.key import Key
import sys, os

dest_container = "CONTAINER"

# 'api' / 'api_secret' are placeholders for real AWS credentials
s3 = S3Connection('api', 'api_secret')

buckets = s3.get_all_buckets()

for container in buckets:
    if container.name == dest_container:
        backup_container = container

local_file_list = sys.stdin.readlines()

def upload_s3(local_file):
    k = Key(backup_container)
    k.key = local_file
    k.set_contents_from_filename(local_file)

for local_file in local_file_list:
    local_file = local_file.rstrip()
    local_file_size = os.stat(local_file).st_size / 1024
    print "uploading %s (%dK)" % (local_file, local_file_size)
    upload_s3(local_file)


S3 – Download

from boto.s3.connection import S3Connection
from boto.s3.key import Key
import sys, os

dest_container = "CONTAINER"

# 'api' / 'api_secret' are placeholders for real AWS credentials
s3 = S3Connection('api', 'api_secret')

buckets = s3.get_all_buckets()

for container in buckets:
    if container.name == dest_container:
        backup_container = container

def build_remote_file_list(container):
    # bucket.list() yields Key objects, which can write themselves to a file
    remote_file_list = container.list()
    for remote_file in remote_file_list:
        print remote_file.name
        f = open(remote_file.name, 'w')
        remote_file.get_contents_to_file(f)
        f.close()
    return remote_file_list

remote_file_list = build_remote_file_list(backup_container)

The test was performed from a Linux host which has a 100Mbit connection (uncapped / unthrottled) in London; the test was also performed, with almost identical results, from a machine in Paris (also 100Mbit). Tests were also run from other locations (Dallas Fort Worth, Texas, and my home ISP), however these locations were limited to 25Mbit and 24Mbit respectively, and both reached their maximum speeds. The tests were as follows:

  • Download files from Rackspace Cloud Files UK (these had been uploaded previously) – This is downloaded directly via the API, NOT via a CDN
  • Upload the same files to S3 Ireland
  • Upload the same files to a new “container” at Rackspace Cloud Files UK
  • Download the files from S3 Ireland – This is downloaded directly via the API, NOT via a CDN

The average speeds for the tests are as follows:

    Cloud Files UK
    Download: 90Mbit/s
    Upload: 85Mbit/s

    S3 Ireland
    Download: ~40Mbit/s
    Upload: 13Mbit/s


  • Cloud Files seems to be able to max out a 100Mbit connection for both file uploads and downloads
  • S3 seems to have a cap of 13Mbit for inbound file transfers?
  • S3 seems to either be extremely unpredictable on file transfer speeds for downloading files via the API, or there is some form of cap after a certain amount of data transferred, or there was congestion on the AWS network
  • Below is a graph showing the different connection speeds achieved using CF & S3

    As mentioned before, this is a very unscientific test (and these results have not been replicated from as many locations, or as many times, as I’d like, so take them with a pinch of salt), but it does appear that Rackspace Cloud Files UK is noticeably faster than S3 Ireland.


    iPhone to Android SMS Conversion Script

    As promised, here’s a copy of my iPhone to Android script. It’s just a quick and dirty Python script that reads in a backup from iTunes and converts it to a bit of XML able to be read by SMS Backup & Restore on the Android platform.

    # Reads the SMS database from an iTunes backup (the file is usually named
    # 31bb7ba8914766d4ba40d6dfb6113c8b614be442.mddata or .mdbackup) - copy it
    # to sms.db first. The exact <sms> attribute set below is a reconstruction,
    # as the original markup was mangled by the blog software.
    from sqlite3 import connect
    from xml.sax.saxutils import escape
    import codecs
    import re

    f = codecs.open('sms.xml', 'w', 'utf-8')
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n<smses>\n')
    c = connect('sms.db')
    curs = c.cursor()
    curs.execute('''SELECT address,date,text,flags FROM message WHERE flags < 5 ORDER BY date asc''')
    for row in curs:
        a = escape(unicode(row[0]))
        d = escape(unicode(row[1]))
        t = str(row[3] - 1)  # iPhone flags 2/3 map to Android type 1/2 (received/sent)
        b = re.sub('"', "'", escape(unicode(row[2])))
        f.write('<sms address="%s" date="%s" type="%s" body="%s" />' % (a, d, t, b) + "\n")
    f.write('</smses>\n')
    f.close()
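The body-escaping step is the fiddly part: XML entities must be escaped, and double quotes swapped for singles so the text can sit inside a double-quoted attribute. That step can be checked in isolation (the sms_body helper name is mine, for illustration):

```python
from xml.sax.saxutils import escape
import re

def sms_body(text):
    # Escape &, < and > for XML, then replace double quotes with singles
    # so the body is safe inside a double-quoted XML attribute
    return re.sub('"', "'", escape(text))

print(sms_body('Tom & Jerry said "hi" <today>'))
```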

    IRSSI Prowl Notifications

    A quick script to send notifications from IRSSI for private messages and also for highlights. I’ll put more commentary on later, but for now…

    use strict;
    use vars qw($VERSION %IRSSI);
    use Irssi;
    use LWP::UserAgent;
    use HTTP::Request;
    $VERSION = '0.1';
    %IRSSI = (
            authors => 'Welby McRoberts',
            contact => '[email protected]',
            name => 'irssi_prowler',
            description => 'Sends a notification to Prowl to alert an iPhone of a new highlighted message',
            url => '',
            changes => 'Friday, 10 Jun 2009'
    );
    ######## Config
    # The API key was elided from the original post - substitute your own
    my $APIKEY = 'YOUR_PROWL_API_KEY';
    my $PRIV_PRI = 2;
    my $PRIV_EVENT = 'Private Message';
    my $HI_PRI = 1;
    my $HI_EVENT = 'Highlight';
    my $APP = 'irssi';
    my $UA = 'irssi_prowler';
    ####### Highlights
    sub highlight {
            my ($dest, $text, $stripped) = @_;
            if ($dest->{level} & MSGLEVEL_HILIGHT) {
                    prowl($HI_PRI, $APP, $HI_EVENT, $stripped);
            }
    }
    ####### Private Messages
    sub priv {
            my ($server, $text, $nick, $host, $channel) = @_;
            prowl($PRIV_PRI, $APP, $PRIV_EVENT, $text);
    }
    ####### Prowl call
    sub prowl {
            my ($priority, $application, $event, $description) = @_;
            my ($request, $response, $url, $lwp);
            ######## Setting up the LWP
            $lwp = LWP::UserAgent->new;
            $lwp->agent($UA);
            # URL Encode the parameters
            $application =~ s/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg;
            $event =~ s/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg;
            $description =~ s/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg;
            # Setup the url (the endpoint was elided from the original post;
            # this is the Prowl public API 'add' call)
            $url = sprintf("https://prowl.weks.net/publicapi/add?apikey=%s&priority=%d&application=%s&event=%s&description=%s",
                    $APIKEY, $priority, $application, $event, $description);
            $request = HTTP::Request->new(GET => $url);
            $response = $lwp->request($request);
    }
    ####### Bind "message private" to priv()
    Irssi::signal_add_last("message private", "priv");
    ####### Bind "print text" to highlight()
    Irssi::signal_add_last("print text", "highlight");

    Lighttpd: mod_security via mod_magnet

    In most large enterprises there is a requirement to comply with various standards. The hot potato in the Ecommerce space at the moment (and has been for a few years!) is PCI-DSS.

    At $WORK we have to comply with PCI-DSS with the full audit and similar occurring due to the number of transactions we perform. Recently we’ve deployed lighttpd for one of our platforms, which has caused an issue for our Information Security Officers and Compliance staff.

    PCI-DSS 6.6 requires EITHER a code review to be performed, which, whilst it may seem an easy task, is not always an option when you’re talking about complex enterprise applications following a very… ‘agile’ development process. The other option is to use a WAF (Web Application Firewall). There are multiple products available that sit upstream and perform this task. There is however an issue if you use SSL for your traffic: most WAFs will not do the SSL decryption / re-encryption between the client and server (effectively becoming a man in the middle). There are a few products which do this, F5 Networks’ ASM being one that springs to mind. Unfortunately this isn’t always an option due to licensing fees and similar. An alternative is to run a WAF on the server itself. A common module for this is mod_security for Apache. Unfortunately, a similar module does not exist for Lighttpd.

    In response to $WORK’s requirement for this I’ve used mod_magnet to run a small Lua script to emulate the functionality of mod_security (to an extent at least!). Please note that mod_magnet is blocking, so every request will block until the script has completed; be very careful with the script, and ensure that it’s not causing any lag in a test environment prior to deploying into live!

    Below is a copy of an early version of the script (most of the mod_security rules that we have are specific to work, so are not being included for various reasons), however I’ll post updates to this soon.


    -- mod_security alike in LUA for mod_magnet
    LOG = true
    DROP = true

    function returnError(e)
            local remoteip
            if (lighty.env["request.remote-ip"]) then
                    remoteip = lighty.env["request.remote-ip"]
            else
                    remoteip = "UNKNOWN_IP"
            end
            if (LOG == true) then
                    print ( remoteip .. " blocked due to " .. e .. " --- " ..
                                    lighty.env["request.method"] .. " " .. lighty.request["Host"] .. " " .. lighty.env["request.uri"])
            end
            if (DROP == true) then
                    return 405
            end
    end

    function SQLInjection(content)
            if (string.find(content, "UNION")) then
                    return returnError('UNION in uri')
            end
    end

    function UserAgent(UA)
            UA = UA:lower()
            if (string.find(UA, "libwhisker")) then
                    return returnError('UserAgent - libwhisker')
            elseif (string.find(UA, "paros")) then
                    return returnError('UserAgent - paros')
            elseif (string.find(UA, "wget")) then
                    return returnError('UserAgent - wget')
            elseif (string.find(UA, "libwww")) then
                    return returnError('UserAgent - libwww')
            elseif (string.find(UA, "perl")) then
                    return returnError('UserAgent - perl')
            elseif (string.find(UA, "java")) then
                    return returnError('UserAgent - java')
            end
    end

    -- URI = lighty.env["request.uri"]
    -- POST = lighty.request
    local ret
    if ( SQLInjection(lighty.env["request.uri"]) == 405) then
           ret = 405
    end
    if ( UserAgent(lighty.request["User-Agent"]) == 405) then
           ret = 405
    end
    return ret

    The following needs to be added to lighttpd.conf to attach this Lua script via mod_magnet

    server.modules += ( "mod_magnet" )
    magnet.attract-physical-path-to = ( "/etc/lighttpd/mod_sec.lua")

    *Update – 23 Aug 09* Updated to return the code even if only one test matches

    Comments or suggestions are appreciated!


    RouterBoard as a Home Router – 7 Months on – Part 1

    At the new year I decided that I was fed up with having my main Unix server acting as a router (amongst other things) and decided to bite the bullet and get a full blown router. Herein lay a dilemma. Being a geek, I couldn’t settle for an unhackable “home” router. This instantly ruled out most of the commercially available routers, barring those that run OpenWRT. Now don’t get me wrong, OpenWRT is more than capable, but I just didn’t feel like having to worry about hardware support, fighting with iptables, and getting hardware that probably wouldn’t scale. Before anyone starts thinking “Scaling? But this is for a home connection!”: this is true. However, I do sync my DSL at the full 24244 kbps downstream and 2550 kbps upstream (I live under 200m from the exchange according to my line attenuation; also my ISP doesn’t bandwidth cap, and allows FastPath and similar to be enabled. Go BeThere!). Also, at the time I was seriously considering investing in a secondary connection for additional bandwidth. This meant that I was left with a few choices

    • Build my own, using something like an ALIX / Soekris board running FreeBSD (or something with a web GUI for when I feel rather lazy, such as m0n0wall or pfSense, both of which I’ve used previously with great success)
    • Cisco. Yes, the 800-pound gorilla. A ‘cheap’ 1800 or similar was going to set me back about £400, however it would have provided most of what I needed.
    • RouterBoard. These were, to me at least, relatively unknown. I originally looked at them for building my own system, and then discovered RouterOS came with the boards. This was an instant sale.

    After my first look at RouterOS I was basically sold. The main reasoning was that it is a commercial Linux distribution that actually works well as a router, and ships with both a CLI (Nortel-esque in this case) and a *shock* GUI application. It also met my main criteria.

    • Support for 802.1Q. I have multiple VLANs at home, so support for dot1q was a necessity
    • Support for 802.3ad. As I have a few machines connecting via the router I needed the throughput, and as I don’t have gigabit switching, LACP support was a necessity.
    • Support for wireless. Any good home router (even a geeky one) needs support for 802.11a/b/g.
    • Support for sub-SSIDs. Related to the above, I didn’t want to have 7 wireless cards for my various networks
    • Support for WPA2-PSK and WPA2-EAP. I use RADIUS to authenticate all my personal stations against a central authentication system, but I don’t want to have to add guests to this, so PSK should also be supported.
    • Support for OpenVPN. I don’t like having my traffic to / from home going in the clear at all, so I needed to be able to connect via a VPN of some sort. My preference is OpenVPN for c2s VPNs (s2s is still IPSec… which leads onto the next point)
    • Support for IPSec. I connect to various friends’ networks and, yet again, don’t want this sort of traffic in the clear; we standardised on IPSec (3DES/MD5) a while back
    • Support for “unlimited” firewall rules. This may sound silly, but anyone who has worked with the low-end SonicWALLs will know what I mean; only being able to have 20 rules is EXTREMELY restrictive, especially with multiple VLANs! (I’ve got roughly 300 rules)
    • Support for setting DHCP options. I use VMware ESX at home for my test lab, so I need the DHCP server to send the correct options for PXE (or gPXE) booting
    • Quick booting. As silly as this may sound, I don’t want boot times of upwards of 30 seconds for my router.
    • Support for bridging of interfaces with firewall rules. This one is rather self explanatory really!
    • Support for UPnP. Let’s face it, UPnP is required for any form of voice/video chat these days over the main IM networks (YIM/AIM/MSNIM)
    • Support for NetFlow or similar. This one is a nice to have, as I like to use flow-tools to generate a rough guess at what type of traffic is flowing through my network
    • Support for traffic shaping. Ah yes, the holy grail of routers. Unfortunately the likes of tc on Linux requires a degree in astrophysics to get working how you’d like!
    • Easy configuration.

    After discovering (via the x86 installable version and the demo units) that RouterOS would let me do all of the above, I decided to give it a whirl.


    Issues with OS X 10.5 iTunes 8.1.1 and mt-daapd (aka Firefly Media Server)

    I’ve recently upgraded my iTunes installation on my MacBookPro to 8.1.1 and to my horror found that I’m no longer able to connect to my DAAP library on my NAS.

    This is rather strange, as the issue only appeared in 8.1.1 and does not appear on my Windows machines, which reside on a different network and have Bonjour / Rendezvous mDNS traffic broadcast locally by RendezvousProxy. After much annoyance, I decided to do a quick check of what an older iTunes client was sending out, and compared that to Avahi. It turns out that my Avahi configuration was missing some vital TXT records. This wasn’t an issue with previous revisions of the iTunes client, but appears to be an issue in 8.1.1.

    I updated my daap.service file in /etc/avahi/services/ to the following

    <?xml version="1.0" standalone='no'?>
    <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
    <service-group>
      <name replace-wildcards="yes">%h</name>
      <service>
        <type>_daap._tcp</type>
        <port>3689</port>
        <!-- Reconstructed standard DAAP service definition; the surviving
             vital TXT record from the original post is iTSh Version -->
        <txt-record>txtvers=1</txt-record>
        <txt-record>iTSh Version=131073</txt-record>
      </service>
    </service-group>

    And restarted Avahi for good measure, and now I can connect to my mt-daapd library again!