Yubikey and server authentication

After starting to use the Yubikey for LastPass and various other online services, I’ve also started using my Yubikey for SSH access to my server(s).

I’ve touched on google_authenticator and pam_yubico for authentication in a previous post; however, I’ll be going into this in a bit more detail here.

Taking a machine at home as an example, my requirements are simple:

  • No SSH key access to be allowed – there is no way to require a second factor with an SSH key (passphrases can be removed, or a new key generated)
  • Access from local machines to be allowed without two factor being required
  • Yubikey to be the primary two factor method
  • Fall back to Google Authenticator should the Yubico servers be down, there be an issue with my keys, or I simply not have a USB port available (i.e. I’m on a phone or whatever)

In order to meet these requirements I’m going to need the following:

  • yubico-pam (the Yubikey PAM module)
  • Google Authenticator PAM module
  • pam_access

The server is running Arch Linux, and luckily all of these are available in the AUR, so I’m not going to cover installing the modules.

In order to restrict sshd access as above, I need the following auth lines in /etc/pam.d/sshd:
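
A sketch of the kind of stack involved – the Yubico API client id is a placeholder, and the exact control values are an assumption to check against the pam.d documentation for your distribution:

    # password first
    auth    required                        pam_unix.so
    # if the source host matches the access file, stop here (no second factor)
    auth    [success=done default=ignore]   pam_access.so accessfile=/etc/security/access_yubico.conf
    # Yubikey as the primary second factor
    auth    sufficient                      pam_yubico.so id=<API client id> authfile=/etc/yubikey
    # fall back to Google Authenticator if no Yubikey OTP was accepted
    auth    required                        pam_google_authenticator.so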

The next step is to ensure that the relevant users and IPs are listed in /etc/security/access_yubico.conf:
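
The format is the standard pam_access one (permission : users : origins); the user and subnet below are placeholders, and anything that doesn’t match simply falls through to the two factor modules:

    # permission : users : origins
    + : myuser : 192.168.0.0/24
    - : ALL : ALL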

After this is set up, we also need to create the Yubikey mapping file /etc/yubikey:
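
This maps local users to the public identity of their Yubikey(s) – the public id is the first 12 characters of any OTP the key emits. The username and id here are examples only:

    # <username>:<yubikey public id>[:<second yubikey public id>...]
    myuser:cccccchvjkcb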

I’m not going to cover the configuration of Google Authenticator with the google-authenticator command.

The final changes are to /etc/ssh/sshd_config, ensuring that the following are set:
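
To match the requirements above (no key based access, authentication handled through PAM), something along these lines – treat it as a sketch to check against your own sshd_config rather than a definitive list:

    PubkeyAuthentication no
    PasswordAuthentication no
    ChallengeResponseAuthentication yes
    UsePAM yes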

PAM and Two Factor authentication

As two factor authentication is a requirement for PCI-DSS (the Payment Card Industry standard), and an SSH key with a password is not always deemed an acceptable form of two factor authorisation, there is now a surge in different forms of two factor auth, all with their own pros and cons.

For a small business or ‘prosumer’ (professional consumer), the market incumbent (RSA) is not a viable option due to the price of the tokens and the software / appliance that is required. There are cheaper (or free!) alternatives, two of which I’ve used: Google Authenticator and Yubikey.

Google Authenticator is an OATH-TOTP system that, much like RSA, generates a one time password every 30 seconds. It’s available as an app for the big three mobile platforms (iOS, Android and BlackBerry).
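
For the curious, the OATH-TOTP calculation itself is small enough to sketch in Python – this is purely an illustration of how the 30 second codes are derived (with a made-up secret), not the code the PAM module actually uses:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        """Derive the current OATH-TOTP code from a base32 secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period               # number of 30s steps since the epoch
        msg = struct.pack('>Q', counter)                    # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0f                          # dynamic truncation (RFC 4226)
        code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7fffffff
        return str(code % (10 ** digits)).zfill(digits)

    # the secret below is a made-up example, not a real key
    print(totp('JBSWY3DPEHPK3PXP'))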

Yubikey is a hardware token that emulates a USB keyboard and, when its button is pressed, generates a one time password. This is supported by services such as LastPass.

Both solutions can be used with their own PAM modules. Installation of either is simple, but what happens if you want to support both, while only requiring one of them?

Luckily PAM makes it quite easy!
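
A minimal sketch of such a stack (the auth section of the service’s file in /etc/pam.d/; the Yubico API client id is a placeholder):

    # password is always required
    auth    required    pam_unix.so
    # a valid Yubikey OTP is enough to finish the stack...
    auth    sufficient  pam_yubico.so id=<API client id>
    # ...otherwise a Google Authenticator code is demanded
    auth    required    pam_google_authenticator.so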

In the above example the user must enter a password and then provide either their Yubikey or their Google Authenticator code.

Should the password be incorrect, the user will still be prompted for their Yubikey or Google Authenticator code, but will then fail. Should they provide a password and then their Yubikey, they will not be asked for their Google Authenticator code. Should they provide a password and no Yubikey, they will be prompted for their Google Authenticator code!

A quick (and quite unscientific!) break down of Rackspace CloudFiles UK vs Amazon S3 (Ireland)

(Disclaimer – I’m a Rackspace employee; the postings on this site are my own, may be biased, and don’t necessarily represent Rackspace’s positions, strategies or opinions. These tests have been performed independently of my employer, by myself.)

As Rackspace have recently launched a ‘beta’ CloudFiles service within the UK, I thought I would run a few tests to compare it to Amazon’s S3 service running from the Republic of Ireland.

I took a set of files totalling 18.7GB, with file sizes ranging between 1KB and 25MB; the contents were mainly photos (both JPEG and RAW, from Canon and Nikon cameras), plain text files, gzipped tarballs and a few Microsoft Word documents just for good measure.

The following Python scripts were used:

Cloud Files – upload:
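
A rough sketch using the python-cloudfiles library – the credentials, container name and file list are placeholders, and the London auth URL is from memory:

    import cloudfiles

    # UK (London) auth endpoint; username/API key are placeholders
    conn = cloudfiles.get_connection('username', 'api_key',
                                     authurl='https://lon.auth.api.rackspacecloud.com/v1.0')
    container = conn.create_container('speedtest')

    for path in ['photo001.jpg', 'notes.txt']:      # stand-in for the full test set
        obj = container.create_object(path)
        obj.load_from_filename(path)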

Cloud Files – download:
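
And the corresponding download sketch, iterating over everything in the container:

    import cloudfiles

    conn = cloudfiles.get_connection('username', 'api_key',
                                     authurl='https://lon.auth.api.rackspacecloud.com/v1.0')
    container = conn.get_container('speedtest')

    for obj in container.get_objects():             # fetch each object back to disk
        obj.save_to_filename(obj.name)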

S3 – upload:
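
An equivalent sketch using boto against the eu-west-1 (Ireland) endpoint – again, the keys, bucket name and file list are placeholders:

    from boto.s3.connection import S3Connection, Location
    from boto.s3.key import Key

    # Ireland endpoint; access/secret keys are placeholders
    conn = S3Connection('ACCESS_KEY', 'SECRET_KEY', host='s3-eu-west-1.amazonaws.com')
    bucket = conn.create_bucket('speedtest-eu', location=Location.EU)

    for path in ['photo001.jpg', 'notes.txt']:      # stand-in for the same test set
        k = Key(bucket, path)
        k.set_contents_from_filename(path)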

S3 – download:
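
And the matching download sketch:

    from boto.s3.connection import S3Connection

    conn = S3Connection('ACCESS_KEY', 'SECRET_KEY', host='s3-eu-west-1.amazonaws.com')
    bucket = conn.get_bucket('speedtest-eu')

    for key in bucket.list():                       # fetch each key back to disk
        key.get_contents_to_filename(key.name)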

The test was performed from a Linux host with a 100Mbit connection (uncapped/unthrottled) in London; it was also performed, with almost identical results, from a machine in Paris (also 100Mbit). Tests were also run from other locations (Dallas-Fort Worth, Texas, and my home ISP (bethere.co.uk)), however these locations were limited to 25Mbit and 24Mbit respectively, and both reached their maximum speeds. The tests were as follows:

  • Download files from Rackspace Cloudfiles UK (these had been uploaded previously) – This is downloaded directly via the API, NOT via a CDN
  • Upload the same files to S3 Ireland
  • Upload the same files to a new “container” at Rackspace Cloudfiles UK
  • Download the files from S3 Ireland – This is downloaded directly via the API, NOT via a CDN

The average speeds for the tests are as follows:

    CloudFiles UK – download: 90Mbit/s, upload: 85Mbit/s
    S3 Ireland    – download: ~40Mbit/s, upload: 13Mbit/s

Observations

  • CloudFiles seems to be able to max out a 100Mbit connection for both upload and download
  • S3 seems to have a cap of around 13Mbit for inbound file transfers
  • S3 seems to either be extremely unpredictable in transfer speed when downloading files via the API, or there is some form of cap after a certain amount of data transferred, or there was congestion on the AWS network
  • Below is a graph showing the different connection speeds achieved using CloudFiles and S3

As mentioned before, this is a very unscientific test (and these results have not been replicated from as many locations, or as many times, as I’d like, so take them with a pinch of salt), but it does appear that Rackspace CloudFiles UK is noticeably faster than S3 Ireland.

RouterBoard as a Home Router – 7 Months on – Part 1

At the new year I decided that I was fed up with having my main Unix server acting as a router (amongst other things) and decided to bite the bullet and get a full blown router. Herein lay a dilemma. Being a geek, I couldn’t settle for an unhackable “home” router, which instantly ruled out most of the commercially available routers, barring those that run OpenWRT. Now don’t get me wrong, OpenWRT is more than capable, but I just didn’t feel like having to worry about hardware support, fighting with iptables, and ending up with hardware that probably wouldn’t scale. Before anyone starts thinking “Scaling? But this is for a home connection!”, this is true. However, I do sync my DSL at the full 24244kbps downstream and 2550kbps upstream (I live under 200m from the exchange according to my line attenuation; my ISP also doesn’t bandwidth cap, and allows FastPath and similar to be enabled. Go BeThere!). Also, at the time I was seriously considering investing in a secondary connection for additional bandwidth. This meant that I was left with a few choices:

    • Build my own, using something like an ALIX/Soekris board and something like FreeBSD (or something with a web GUI for when I feel rather lazy, such as m0n0wall or pfSense – both of which I’ve used previously with great success)
    • Cisco. Yes, the 800-pound gorilla of networking. A ‘cheap’ 1800 or similar was going to set me back about £400, however this would have provided most of what I needed.
    • RouterBoard. These were, to me at least, relatively unknown. I originally looked at them for building my own system, and then discovered that RouterOS came with the boards. This was an instant sale.

After my first look at RouterOS I was basically sold. The main reasoning behind this was that it is a commercial Linux distribution that actually works well as a router, and ships with both a CLI (Nortel-esque in this case) and a *shock* GUI application. It also met my main criteria:

    • Support for 802.1Q. I have multiple VLANs at home, so support for dot1q was a necessity (see the CLI sketch after this list).
    • Support for 802.3ad. As I have a few machines connecting via the router I needed the throughput, and as I don’t have gigabit switching, LACP support was a necessity.
    • Support for wireless. All good routers for the home (even a geeky one) need support for 802.11(a/b/g).
    • Support for sub-SSIDs. Relating to the above, I didn’t want to have 7 wireless cards for my various networks.
    • Support for WPA2-PSK and WPA2-EAP. I use RADIUS to authenticate all my personal stations against a central authentication system, but I don’t want to have to add guests to this, so PSK should also be supported.
    • Support for OpenVPN. I don’t like having my traffic to / from home going in the clear at all, so I needed to be able to connect via a VPN of some sort. My preference is OpenVPN for c2s VPNs (s2s is still IPsec… which leads onto the next point).
    • Support for IPsec. I connect to various friends’ networks and, yet again, don’t want this sort of traffic in the clear; we made standard IPsec (3DES/MD5) the norm a while back.
    • Support for “unlimited” firewall rules. This may sound silly, but anyone who has worked with the low-end SonicWALLs will know what I mean – only being able to add 20 rules is EXTREMELY restrictive, especially with multiple VLANs! (I’ve got roughly 300 rules.)
    • Support for setting DHCP options. I use VMware ESX at home for my test lab, so I need to be able to set up the DHCP server to send the correct options for PXE (or gPXE).
    • Quick booting. As silly as this may sound, I don’t want boot times of upwards of 30 seconds for my router.
    • Support for bridging of interfaces with firewall rules. This one is rather self explanatory really!
    • Support for UPnP. Let’s face it, UPnP is required for any form of voice/video chat these days over the main IM networks (YIM/AIM/MSNIM).
    • Support for NetFlow or similar. This one is a nice to have, as I like to use flow-tools to generate a rough guess of what type of traffic is flowing through my network.
    • Support for traffic shaping. Ah yes, the holy grail of routers. Unfortunately the likes of tc on Linux requires a degree in astrophysics to get working how you’d like!
    • Easy configuration.
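
For the first two items, the RouterOS CLI looks roughly like this – the interface names and IDs are made up for illustration, and the syntax is from memory, so check it against the MikroTik wiki:

    # tag a VLAN onto a physical port
    /interface vlan add name=vlan10 vlan-id=10 interface=ether2
    # bond two ports together with LACP
    /interface bonding add name=bond1 slaves=ether3,ether4 mode=802.3ad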

After discovering (via the x86 installable and the demo units) that RouterOS would let me do all of the above, I decided to give it a whirl.

SKY on a HTPC

Recently I’ve become more and more annoyed with my SKY-HD box’s disk spinning up and down, and then the power appearing to be cut to the drive, meaning that there’s a rather loud click from it. Not a problem if you’re watching TV, as this only occurs when the box is in standby; very annoying if you’re having problems sleeping and the thing is going clunk every 30 minutes or so. I’ve been told that I can change a disk spin-down setting somewhere on the box, however this doesn’t appear to have made any difference. Another issue compounding the annoyance is that the SKY-HD box is almost impossible to use with a single tuner.

I decided to resurrect my HTPC and attempt to get SKY going into that. There were 4 major requirements for this:

    1. Has to be able to play HD content – I pay a silly amount a month just for 3 HD channels (BBC, Discovery and History) :: This meant that a DVB-S2 receiver was required

    2. Has to be able to decode pay-for channels – I pay a subscription for them, and I’ll be damned if I don’t get my channels! :: This meant that either a SoftCAM, or a CI slot and CAM, were required

    3. Has to be local to the machine – I want a raw MPEG2/h264 stream going to the media PC with no additional transcoding; also, one less set of CPUs is a good thing ™ (this isn’t a poke at a specific Linux based satellite receiver at all) :: This meant internal cards or locally attached devices (USB2/FireWire)

    4. The HTPC must be running software that can play my videos – I don’t want to have my Popcorn Hour AND a HTPC to do my video :: This meant using a Media Centre type application; this does however exclude Microsoft’s Windows Media Center, as it doesn’t play MKVs/OGMs etc.

Relatively small requests, one would think, but apparently not! I was left with a few choices for the card, however the one that seemed to come out on top was the Digital-Everywhere FloppyDTV/S2. This meets requirements 1 & 3 by being able to decode DVB-S2 signals and by sending data over the FireWire bus.

In order to meet requirement 2 I opted for the “Dragon CAM” (specifically the T-Rex 4.1). This is a Conditional Access Module which, along with a valid SKY viewing card, performs the VideoGuard (NDS) decryption. This does have one annoying caveat: the smartcard must go into a SKY box every 4 to 6 weeks to have a “new installation” done, as the CAM will not write the new decryption codes to the card.

The shopping list at the end of all of this was as follows:

    • Digital Everywhere FireDTV S2 External @ £160 (the external version was chosen for various reasons, including the IR remote support)
    • FireWire PCI card @ £10
    • T-Rex CAM @ £60
    • Infinity Unlimited USB card programmer @ £60 – this was required to do the initial loading of the T-Rex CAM, however it can be returned / resold / similar as it’s a one-off requirement

So, all in all, £220 to view/record on a media PC. This is for a single tuner only, as I don’t have access to multiple drops from the building’s satellite distribution system (which is rather amusing, as these are “executive” flats built in the last 3 years, and yet all flats have only a single drop for satellite). Multiple drops can be done using a SoftCAM, where the CAM is replaced with a USB smartcard programmer, but only one programmer is required, meaning that the first channel would be £220, but each after that would be £160 (or £130 if an internal card was used). Of course, the legality of using a SoftCAM is extremely questionable, whereas using a non-official Sky receiver is only marginally so.

I’ll be documenting more on the setup soon.