A brief overview of OpenHAB

What is OpenHAB?

OpenHAB is a Java-based home automation system that allows you to control multiple technologies all from a single location. It is built on the principle of a single “home automation bus”, where different technologies communicate over a common event bus.

This means you can run your home automation on any device that can run the JVM, without the need to purchase expensive controllers. With OpenHAB, however, manual work is required to get the system working, as “batteries are not included”.

OpenHAB is open source, which means the community as a whole is able to fix bugs. Unlike cloud-based services or proprietary controllers, you do not need to rely on the company (or in this case, team of people) still being around to make feature improvements; the community as a whole can do so.

What can OpenHAB connect to?

OpenHAB has a plethora of addons available, ranging from home automation protocols (such as ZWave, EnOcean and RFXCOM, the last of which can control LightwaveRF) to consumer devices (Sonos, Samsung TVs, LG TVs). It also has the ability to integrate with various notification systems, including email, XMPP (Jabber / Google Talk) and NotifyMyAndroid, and with calendaring systems like Google Calendar.

Let's dive in!

For the examples in this document we will use ZWave based equipment.

The hardware that will be in use is as follows

Initial Setup of OpenHAB

As OpenHAB doesn’t have an installer, it has to be installed by hand. We’re going to do this into /opt
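A minimal sketch of that install (the download URL and version number below are placeholders; grab the current runtime zip from the OpenHAB site):

```shell
# Placeholder version and URL; substitute the current release
OPENHAB_VERSION=1.4.0

sudo mkdir -p /opt/openhab
cd /opt/openhab
sudo wget http://example.org/distribution-${OPENHAB_VERSION}-runtime.zip
sudo unzip distribution-${OPENHAB_VERSION}-runtime.zip
```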

Installation of Addons

We’re going to copy the addons mentioned above into place

Installation of HABMIN

OpenHAB does not provide a web UI for administration, so we’re going to install a community project called HABmin to provide one.
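The copy steps for the bindings and HABmin might look like this (the jar names are placeholders and vary with the OpenHAB and HABmin versions you downloaded):

```shell
# The ZWave binding (from the addons zip) goes into the addons folder
sudo cp org.openhab.binding.zwave-*.jar /opt/openhab/addons/

# HABmin ships as a jar for the addons folder plus a webapps directory
sudo cp org.openhab.io.habmin-*.jar /opt/openhab/addons/
sudo cp -r habmin /opt/openhab/webapps/
```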

Configuration of OpenHAB

========================

At this point we have OpenHAB installed; we now need to provide it with a configuration file, which should be at /opt/openhab/configurations/openhab.cfg

 

You will need to use your favourite text editor (vi, nano, emacs) to edit this file.

 

plex:host=192.168.1.2
plex:port=32400
zwave:port=/dev/ttyACM0
zwave:healtime=2
zwave:masterController=true
tcp:refreshinterval=250
folder:items=10,items
folder:sitemaps=10,sitemap
folder:rules=10,rules
folder:scripts=10,script
folder:persistence=10,persist
security:option=OFF
persistence:default=rrd4j
mainconfig:refresh=60
chart:provider=default
logging:pattern=%date{ISO8601} - %-25logger: %msg%n

 

In this file we’ve set the ZWave controller to be the first ACM device (/dev/ttyACM0); if you don’t have any 3G modems or other ZWave controllers plugged into this machine, this will (likely) be the device you need to use.

 

We are also disabling security for the purposes of this demo; you may want to enable it going forward!

 

Starting OpenHAB

===============

This is done using the start.sh script

/opt/openhab/start.sh

 

Setup of ZWave Devices

======================

We’re going to use HABMin to handle the inclusion of devices into the network.

Open a web browser to the machine's IP address and the /habmin/ path. For instance, if the IP is 1.2.3.4, the address you would go to is http://1.2.3.4:8080/habmin/ (8080 being OpenHAB's default HTTP port).

Click the Configuration tab.

Click Bindings (Bottom Left corner)

Click the ZWave binding

Click the Devices Tab

For each device you wish to add, click the include button and follow the instructions for the device to include it into the network.

 

Once this is completed the Devices tab should show the list of devices, similar to below (be aware this has more devices than we’re going to look at configuring!)

Persistence

=========

It is useful to be able to see what an item’s previous values have been, and as such we want to store these so that they survive restarts. We’re going to use the rrd4j addon for this. It is configured by putting the following in /opt/openhab/configurations/persistence/rrd4j.persist

 

// persistence strategies have a name and a definition and are referred to in the "Items" section
Strategies {
    everyHour   : "0 0 * * * ?"
    everyDay    : "0 0 0 * * ?"
    everyMinute : "0 * * * * ?"
    // if no strategy is specified for an item entry below, the default list will be used
    default = everyChange
}

/*
 * Each line in this section defines for which item(s) which strategy(ies) should be applied.
 * You can list single items, use "*" for all items or "groupitem*" for all members of a group
 * item (excl. the group item itself).
 */
Items {
    // persist all items once a day and on every change and restore them from the db at startup
    * : strategy = everyChange, everyMinute, everyDay, restoreOnStartup
}

Items

=====

 

In OpenHAB an item is an individual attribute of a device that is configured within OpenHAB.

 

The first set of items we will configure is for the Aeon Labs MultiSensor.

We need to create a file in /opt/openhab/configurations/items/. Let's create /opt/openhab/configurations/items/livingroom.items

Within this file we define a set of items:

 

Number  sensor_1_temp      "Temperature [%.1f °C]" {zwave="3:command=sensor_multilevel,sensor_type=1"}
Number  sensor_1_humidity  "Humidity [%.0f %%]"    {zwave="3:command=sensor_multilevel,sensor_type=5"}
Number  sensor_1_luminance "Luminance [%.0f Lux]"  {zwave="3:command=sensor_multilevel,sensor_type=3"}
Contact sensor_1_motion    "Motion [%s]"           {zwave="3:command=sensor_binary"}
Number  sensor_1_battery   "Battery [%s %%]"       {zwave="3:command=battery"}

 

The format of these files is as follows

 

ItemType ItemName ItemLabel <ItemIcon> (ItemGroup) {ItemBinding}

We are not going to cover off Icons or Groups in this guide.

The ItemTypes that are available are

  • Color
  • Contact
  • DateTime
  • Dimmer
  • Group
  • Number
  • Rollershutter
  • String
  • Switch

 

The ItemLabel can be formatted using standard Java formatter syntax, which will not be covered here other than to say [%.1f °C] will display the temperature to one decimal place, i.e. 23.4.
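The same %-style conversions are available from shell printf, so you can preview how a label will render (a quick illustration, not part of OpenHAB itself):

```shell
# "%.1f" rounds to one decimal place, as in the Temperature label above
printf '%.1f °C\n' 23.42
# -> 23.4 °C
```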

 

The binding is where we configure which device we are actually querying. In this example I’m using ZWave node 3. We specify the command class that needs to be used, for instance SENSOR_MULTILEVEL, and then the sensor type. This is documented at https://github.com/openhab/openhab/wiki/Z-Wave-Binding

 

When this file is saved, you should now be able to see the items in HABMin under Configuration -> Items and Groups

 

The next devices we will configure are the NorthQ meters. These can be placed in any filename in the items directory. Let's create them as power.items

 

Group Power <energy>
Number power_1_battery     "Electricity Meter Battery [%s %%]" <battery> (Power) {zwave="4:command=BATTERY,refresh_interval=3600"}
Number power_1_usage_total "KWH [%s]"    (Power) {zwave="4:command=METER,meter_scale=E_KWh,refresh_interval=450"}
Number power_1_usage       "Watt [%.2f]" (Power)

 

Let's put the gas items in gas.items

 

Group Gas
Number gas_1_battery     "Gas Meter Battery [%s %%]" (Gas) {zwave="6:command=BATTERY,refresh_interval=3600"}
Number gas_1_usage_total "m3 [%s]"    (Gas) {zwave="6:command=METER,refresh_interval=450"}
Number gas_1_usage       "m3 [%.2f]"  (Gas)

 

You will notice that the power_1_usage and gas_1_usage items do not have a binding. We’ll look at this in a few moments.

 

Our final set of items is for the Qubino relay. Let's create this in relay.items

 

Switch light1_state "light1 [%s]"      {zwave="10:command=switch_binary"}
Number light1_power "light1 Watt [%s]" {zwave="10:command=meter,refresh_interval=60"}

 

Rules

=====

Rules allow you to create logic within the OpenHAB system, for instance when there is movement, turn on the light.

We’re going to create two rules, one for our gas meter and one for our power meter.

Both of these devices will, by default, only return the amount of power or gas used since they were installed, but I find it more interesting / useful to have a “point in time” amount that is being used.

Create the file /opt/openhab/configurations/rules/northq.rules

 

rule "gas1_current_usage_update"
when
    // When the gas meter total is updated
    Item gas_1_usage_total received update
then
    // Update current usage with the difference between this and the previous
    // update to get our spot usage in m3.
    gas_1_usage.postUpdate(
        gas_1_usage_total.deltaSince(now.minusMinutes(5)).value
    )
end

rule "power1_current_usage_update"
when
    // When the power meter total is updated
    Item power_1_usage_total received update
then
    // Update current usage with the difference between this and the previous
    // update, multiplied by 1000 to give us the total in watts.
    power_1_usage.postUpdate(
        power_1_usage_total.deltaSince(now.minusMinutes(5)).value * 1000
    )
end

 

These rules simply take the value that the usage total had 5 minutes ago, and set the usage item to the difference between the two readings.
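As a quick sanity check of the arithmetic (hypothetical readings): if the power total moved from 1234.500 kWh to 1234.525 kWh over the five minutes, the rule posts 0.025 × 1000 = 25 to power_1_usage.

```shell
# Hypothetical successive meter totals (kWh), five minutes apart
prev=1234.500
curr=1234.525

# delta * 1000, mirroring the power rule
awk -v p="$prev" -v c="$curr" 'BEGIN { printf "%.0f\n", (c - p) * 1000 }'
# -> 25
```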

 

Sitemaps

========

Sitemaps provide us a way to display items on a device, such as within OpenHAB’s applications or on the web page.

 

They can get quite complex and as such I’m not going to cover them in detail, but a sample sitemap would be /opt/openhab/configurations/sitemaps/default.sitemap

sitemap default label="Home"
{
    Frame label="Hallway" {
        Switch item=light1_state
        Text item=light1_power
    }
    Frame label="Sensor1" {
        Text item=sensor_1_temp valuecolor=[>25="orange",>15="green",>5="orange",<=5="blue"]
        Text item=sensor_1_humidity
        Text item=sensor_1_luminance
        Text item=sensor_1_battery
        Text item=sensor_1_motion
    }
    Frame label="Energy" {
        Text item=power_1_usage label="Power usage [%.0f Watts]" icon="energy"
        Chart item=power_1_usage period=h refresh=6000
        Text item=power_1_battery
        Text item=gas_1_usage label="Gas usage [%.2f m3]" icon="fire-on"
        Chart item=gas_1_usage period=h refresh=6000
        Text item=gas_1_battery
    }
}

 

On the fly content replacement using F5 Load balancers

In the modern web application world, a large proportion of sites use SSL offloading, be this for the added security of the web servers not having the SSL private key on them (and hence, if compromised, the certificate is not necessarily compromised as well) or for the performance boost associated with using hardware accelerators. This is, however, a double-edged sword. It's more complex for developers to test their applications against this behaviour, as they need to either set up two webservers (or vhosts with proxying) on the same host to emulate it, or they need to have an actual offloading device. Neither of these is always a readily available option, or easy for the development team to do.

With this in mind, I have seen many times applications that “work in development” but don’t work in production. One common issue I’ve seen is developers checking the protocol that the user has connected to the server with. When offloading, this will be HTTP rather than HTTPS. It’s also common practice to run SSL sites on a different port, let’s say port 8080, and if the developer uses the absolute URL of the server, including the port number, when creating URLs, this can cause issues.

The result is that a URL like https://www.withagrainofsalt.co.uk becomes http://web123.internal:8080/. The end user is (usually) unable to access this, and the user experience is less than ideal. The correct way to fix this would be in the application itself, however this can sometimes take weeks or months, and there may not be budget allocated to fix the defect.


Outbound filtering of Web requests using Squid as a Proxy server

Frequently in my line of work I’ll be asked about filtering of outbound traffic from application servers. There are two schools of thought here: one is that an app server can have unfiltered access to the internet, and the other that the app server should have as little access to any resources (both inside and outside of the solution) as it needs to perform its role.

This generally isn’t an issue if site-to-site VPNs, static IPs or similar are being used on the destination side. But what happens if your application requires access to something like YouTube, Facebook or Flickr? As these cloud services are not managed by the customer, we have no idea if they are on static IP addresses (and in the case of Flickr, they do seem to change moderately frequently).

With this in mind, a traditional Layer 3/Layer 4 firewall is only going to be able to handle this if it supports DNS resolution in its access-list set, and unfortunately (but for good reason) this is not a common feature. Cisco did introduce this to the ASA firewalls in 8.4; however, I personally have not used it, so at the moment it’s still a bit of an unknown and I can’t recommend it to a customer.

There is however another way of doing this, whilst it might not be a perfect situation, it does at least allow you to filter outbound traffic.

The Squid proxy server has been around for quite some time and is a stable product, both in the forward (outbound) and reverse (inbound) HTTP proxy space. We’re going to use it to perform our outbound proxying. It is possible to use commercial products like a BlueCoat proxy, however I’m going to concentrate on the FOSS solution here.

Prerequisites

Before we start we need to have the following:

  • A Linux Server (for this example I’m going to be using CentOS 6.4, however any linux distribution should work)

Installing Squid

This is a really simple task on most Linux distributions, as not only has Squid been around since the early 90s, it’s also really popular! You can use the package manager to install Squid on most distributions.

You should get a response similar to below:

We now need to configure Squid to start on boot.
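On CentOS the above boils down to the following (run as root; other distributions have equivalent package and service commands):

```shell
yum install squid
chkconfig squid on
service squid start
```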

SSL Proxying

Squid has a rather nice feature called SSLBump which allows us to perform a man-in-the-middle SSL proxy. Privacy issues with this feature aside (after all, we’re using it for servers, not for end users), this is going to work for us from the server side of things. One key thing to note is that we have to trust the CA, which we’re going to generate, on all applications / servers. I’m not going to cover how to do this in this post.

Normally when we create an SSL certificate we’d do this for a specific domain, however as we’re going to be proxying for all domains we’re going to use a wildcard certificate. For the “Common Name” or server name, we need to choose “*” as the value.

In order to create the CA you can follow the following post. One point of note: ensure that you do not do this on the Squid server, as that would mean that, should the server be compromised, the CA (which is trusted on multiple servers) is compromised as well.

We need to create the certificate using the CA script as per the above post, by running CA -newreq. This will look similar to

Once this is completed you’ll need to sign it with the CA -sign command.

Once this is completed, ensure that newcert.pem and newkey.pem are copied to the Squid server. You will then also need to remove the passphrase from the key, and copy the cert into the same file.
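Assuming the newkey.pem and newcert.pem produced above, the passphrase removal and combining might look like this (openssl will prompt for the key's passphrase):

```shell
# Strip the passphrase from the private key
openssl rsa -in newkey.pem -out newkey.pem

# Combine certificate and key into the single file Squid will read
cat newcert.pem newkey.pem > /etc/squid/newcert.pem
```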

Configuring Squid

We’re going to make a very simple squid config, allowing access from the App servers to youtube.com, but no other hosts. Replace  /etc/squid/squid.conf with the following
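A sketch of such a config (the source network, cert path and SSL-bump directives are assumptions; the ssl-bump syntax differs between Squid versions, so check the documentation for yours):

```
# The app servers that are allowed out
acl appservers src 192.168.1.0/24

# The only destination we permit
acl allowed_sites dstdomain .youtube.com

http_access allow appservers allowed_sites
http_access deny all

# Listen with SSL bumping, using the wildcard cert created earlier
http_port 3128 ssl-bump cert=/etc/squid/newcert.pem
ssl_bump allow all
```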

 

Testing Squid

We’re going to use the curl command to test that the ACLs are working.

First let’s test Google; this should fail. We specify the proxy with the -x flag.

As you can see, we get a 403 on this from Squid.

Let’s now try HTTP access to youtube.com.

This works as expected. Let’s try HTTPS to youtube.com now!

This has failed as we don’t have the CA certificate in the bundle that curl uses, so let’s get curl to ignore the SSL certificate.

Now let’s just make sure that other HTTPS sites don’t work.

Forwarding all traffic via the Proxy server

Now, the way this is done depends on the firewall or router in use. What we need to achieve is to either DNAT or redirect all outbound traffic on ports 80 and 443 to the Squid server.

For a Cisco ASA there is a guide on how to do this with WCCP

For a Linux based device you would want an iptables rule similar to
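As a sketch (the app-server range and the Squid server's IP are placeholders):

```shell
# Redirect outbound web traffic from the app servers to Squid on 3128
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 80  -j DNAT --to-destination 192.168.1.10:3128
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:3128
```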

 

Creating a CA using OpenSSL – with OCSP

SSL certificates are a source of huge amounts of confusion. There are two things that an SSL session provides. The first is encryption, which can be provided with “self-signed” certificates. The second, and arguably the more important, is authentication of the remote server. This is managed by “Certification Authorities”. Web browsers have a set of known CAs that are trusted, and any certificate signed by them is therefore also trusted. Obviously if a CA has had a security breach then all bets are off.

Within an organisation it is usually preferable for NON PUBLIC facing sites and services to use self-signed or internal CA signed certificates. The latter is usually more sensible; however, it requires more administrative time, and all clients must trust this CA.

There are various different ways of creating a CA, Windows Server 2003 and above come with their own CA software, and most UNIX/Linux distributions have OpenSSL available.

In this guide I’m going to walk through the creation of a CA using OpenSSL. I’m also going to look at enabling additional features such as OCSP (a way for clients to confirm whether a certificate is still valid) and go over how to create “Subject Alternative Name” certificates (also known as UC or SAN certs, allowing multiple hostnames/domain names to exist on the same cert).

One key thing to remember here is the security of the CA. You must ensure that no unauthorized access is permitted to the CA, as anyone who gains such access will be able to issue certificates.

I’m also going to ensure that we setup OCSP, which is a way of clients checking to see that certificates are still valid and not revoked.

Prerequisites

Before we start we need to have the following:

  • A Linux Server with openssl installed (for this example I’m going to be using CentOS 6.4, however any linux distribution should work)
  • A Domain name (in this example I’m going to use test.local)

Configure OpenSSL

On a CentOS/RedHat system there is already a basic openssl.cnf file at /etc/pki/tls/openssl.cnf, which the scripts for managing a CA already take into account.

Open this up in whichever editor you like and do the following:

  • Locate countryName_default = XX and change the XX to whichever country code you are in; for example the United Kingdom would be GB
  • Locate #stateOrProvinceName_default = Default Province, remove the # at the start, and set Default Province to your State/Province/County
  • Locate localityName_default = Default City and edit this to be your city
  • Locate 0.organizationName_default and edit this to be your organisation name

At this point we’ve edited the config so that for any new requests you won’t have to type these in!

Whilst still in the Text Editor we need to setup the OCSP side of things.

  • Locate the  [ usr_cert ] section and add 

    In this example I’m going to put this on the CA, but this is *NOT* a good idea from a security perspective. You want the CA to have as little (if indeed any) access from the outside.
  • We also need to create the OCSP ‘extensions’ section. Add this to the end of the file

     
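The two additions might look like this (the responder URL matches the one used later in this post; the section name [ v3_OCSP ] is an assumption, so pick whatever name you will reference when signing):

```
# Inside [ usr_cert ]: tell clients where the OCSP responder lives
authorityInfoAccess = OCSP;URI:http://127.0.0.1:8888

# At the end of the file: extensions for the OCSP signing certificate
[ v3_OCSP ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = OCSPSigning
```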

Create the CA

We’re going to use the OpenSSL CA script to do this.

  • Change directory to  /etc/pki/tls/misc 
  • Run the CA command:  ./CA -newca 
  • Whilst Running it you will be asked
              • File name : Just hit enter here
              • PEM Passphrase  : this is the password you will use for the CA. Make sure it’s secure!
              • Country Name : Hit enter here
              • State or Province Name : Hit enter here
              • Locality Name (eg, city) : Hit enter here
              • Organization Name (eg, company) : Hit enter here
              • Organizational Unit Name : Hit enter here
              • Common Name (eg, your name or your server's hostname) : For this its generally considered best to set this to ca.domain, so in this case ca.test.local
              • Email Address : Hit enter here
              • A challenge password : Hit Enter here
              • An optional company name : Hit enter here
              • Enter pass phrase for /etc/pki/CA/private/./cakey.pem : Enter the CA password here

The end output should look similar to

 

At this point you have a CA set up and ready to go. You will need to ensure that the CA public certificate is installed on the browsers / devices that you will be using. This can be downloaded using an SCP client from /etc/pki/CA/cacert.pem

Creating a OCSP signing certificate

In order to host an OCSP server, we have to generate an OCSP signing certificate. If you’re going to have multiple OCSP servers, you may want multiple certificates.

We’re going to create a directory, and a request for the certificate

At this point we now need to sign the request and make the certificate

You will be asked for

  • CA Passphrase
  • Sign the certificate? [y/n]: Say yes to this
  • 1 out of 1 certificate requests certified, commit? Say yes to this as well
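The request and signing steps above might look like this (directory and file names are illustrative; -extensions names the OCSP extensions section added to openssl.cnf earlier, assumed here to be [ v3_OCSP ]):

```shell
mkdir /etc/pki/CA/ocsp

# Create the key and request for the OCSP signer
openssl req -new -nodes \
    -keyout /etc/pki/CA/ocsp/ocspkey.pem \
    -out /etc/pki/CA/ocsp/ocspreq.pem

# Sign the request with the CA
openssl ca -extensions v3_OCSP \
    -in /etc/pki/CA/ocsp/ocspreq.pem \
    -out /etc/pki/CA/ocsp/ocspcert.pem
```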

Start OCSP server

At this point we now also need to run the OCSP server. Be aware that in this example it is going to run as root, which you should *NOT* do. You will want to set permissions up in a way that a normal user can run it; I’m not going to cover this at the moment though. Start the server with the following
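A typical invocation would be along these lines (the signer cert/key paths are placeholders for wherever you put the OCSP signing certificate; as noted, don't run this as root in production):

```shell
openssl ocsp -index /etc/pki/CA/index.txt \
    -port 8888 \
    -rsigner /etc/pki/CA/ocsp/ocspcert.pem \
    -rkey /etc/pki/CA/ocsp/ocspkey.pem \
    -CA /etc/pki/CA/cacert.pem \
    -text
```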

 

Issuing a Certificate

Now that you’ve got a working CA, you can sign any certificate requests. There are multiple ways of creating these; some software will provide you a CSR, but in this example I’m going to do this all on the CA itself (don’t do this in production!)

  • Change directory to /etc/pki/tls/misc
  • Run the CA command: ./CA -newreq

This will give a result similar to below

We now need to sign the certificate

  • Run the CA command: ./CA -sign

This will give a result similar to

The certificate now exists and can be seen in newcert.pem (and the key in newkey.pem)

Checking OCSP status

We can now check to see if the above certificate is valid via OCSP:

openssl ocsp -CAfile /etc/pki/CA/cacert.pem -issuer /etc/pki/CA/cacert.pem -cert newcert.pem -url http://127.0.0.1:8888 -resp_text

This will return an address similar to below:

Revoking a certificate

Oh no! The certificate above has been compromised and we need to revoke it. This isn’t as difficult as you may think, as we have a copy of all of the certificates on the CA. If we look at the certificate serial number (c5:07:3c:dc:c5:8a:cb:ad in this case), a corresponding file should exist in /etc/pki/CA/newcerts/. To revoke you need to:

  • Revoke the certificate: openssl ca -revoke /etc/pki/CA/newcerts/C5073CDCC58ACBAD.pem
  • Verify that the certificate is revoked: openssl ocsp -CAfile /etc/pki/CA/cacert.pem -issuer /etc/pki/CA/cacert.pem -cert /etc/pki/CA/newcerts/C5073CDCC58ACBAD.pem -url http://127.0.0.1:8888 -resp_text

 

RouterBoard as a Home Router – 4 1/2 years on

A while back I mentioned a follow-up to an old blog post about the RouterBoard that I’d recently purchased and set up for home use. This is a very belated update on that board.

My requirements have since changed from the original post, but not dramatically so. The requirement for LACP has disappeared, IPSec is no longer used, but a requirement for dynamic routing has appeared.

All in all, I have to say that I still cannot recommend RouterOS enough. I’ve been using it for the past 4 1/2 years, and have recommended it to a large number of folks.

The main reason behind this is that it just works, there’s not really any faffing about that needs to be done, and if you’re running the stable release, everything does just work.

Feature-wise, this is right up there with some of the big brands (Cisco, Juniper et al.), however it’s fair to say not with the same price tag.


iSCSI Target Rescanning on VMware

If you’re using iSCSI on VMware but have a requirement to rescan the LUNs after a machine has booted (for example, a VM which has DirectPath to a storage card enabled, which is hosting your iSCSI LUNs), you can simply do so with the following command
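On ESXi 5.x and later that would be something like the following (on older hosts, esxcfg-rescan with the adapter name achieves the same per adapter):

```shell
esxcli storage core adapter rescan --all
```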

F5 LoadBalancing on a per app mountpoint

With some customers' solutions I’ve seen a common requirement to make load balancing decisions based on the actual application server serving the content. This obviously introduces a few issues if you’re using a single base URL.

If we take the example below

With this in mind, it’s not possible to use traditional Layer 3 / Layer 4 load balancers; this requires an L7 load balancer, such as an F5 LTM or Riverbed Stingray (ZTM/ZXTM). I’m going to concentrate on the F5 in this example.

On the F5 you have the ability to use an iRule to perform load balancing actions. On a Virtual Server that has the “http” profile enabled, you would be able to add an iRule similar to below.

There are multiple events that this will trigger.

  1. CLIENT_ACCEPTED
     This event is triggered whenever a new connection is made to the load balancer. In our case the code checks whether the virtual server's name contains "testing"; if it does, it sets the serverpool variable to "testing", otherwise to "liveserver".

  2. HTTP_REQUEST
     This event is triggered on any new HTTP request. In our case this performs a 'switch' (a multiple if/else statement) on the URL. We do however perform two "transformations" on the URL first: we convert it to lower case, and we take only the part between the first two /'s. So for the URL http://www.example.com/app1/test we would use app1 for the switch statement. Based on the path, we then set the NEWPOOL variable, and then set the pool to NEWPOOL.

  3. HTTP_RESPONSE
     This event is triggered when the server sends a response to an HTTP request. We add the "X-AP" header to the response, and set this to the NEWPOOL variable.
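A sketch of such an iRule (the event logic follows the description above; the pool names and URI handling are illustrative):

```tcl
when CLIENT_ACCEPTED {
    # Pick a pool prefix based on the virtual server's name
    if { [virtual name] contains "testing" } {
        set serverpool "testing"
    } else {
        set serverpool "liveserver"
    }
}

when HTTP_REQUEST {
    # Lower-case the URI and keep only the part between the first two /'s
    switch [string tolower [getfield [HTTP::uri] "/" 2]] {
        "app1"  { set NEWPOOL "${serverpool}-app1" }
        "app2"  { set NEWPOOL "${serverpool}-app2" }
        default { set NEWPOOL $serverpool }
    }
    pool $NEWPOOL
}

when HTTP_RESPONSE {
    # Expose which pool served the request
    HTTP::header insert "X-AP" $NEWPOOL
}
```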

‘Instant’ Upload to Rackspace CloudFiles

Using inotify and the ‘swift’ client tools it is possible to automatically upload files to cloudfiles as they are written to disk.

This code is untested and might cause planes to drop from the sky, use it at your own risk!
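A minimal sketch of that idea (the watch directory and container name are placeholders; assumes the swift client is already configured via environment variables):

```shell
WATCH_DIR=/srv/uploads
CONTAINER=my-container

# Upload each file as soon as it has been fully written
inotifywait -m -e close_write --format '%w%f' "$WATCH_DIR" |
while read -r file; do
    swift upload "$CONTAINER" "$file"
done
```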

Google Authenticator F5 IRule

Two Factor authentication is rather hit and miss in terms of support from web apps.

A quick look around the web turns up an article on DevCentral for a solution to implement Google Authenticator with LDAP. As I don’t run an LDAP server at home I needed to hack up the script a bit. This iRule implements the two factor side of things from the above article, but skips the LDAP side, as it’s not needed!

Yubikey and server authentication

After starting to use the Yubikey for LastPass and various other online services, I’ve also started using my Yubikey for SSH access to my server(s).

I’ve touched on google_authenticator and pam_yubico for authentication in a previous post; however, I will be going into this in a bit more detail here.

Taking a machine at home as an example, my requirements are simple:

  • No SSH key access to be allowed, as there is no way to require a second factor with an SSH key (passphrases can be removed or a new key generated)
  • Access from local machines to be allowed without two factor being enabled
  • Yubikey to be the primary TFA
  • Fall back to Google Authenticator should the Yubico servers be down, there be an issue with my keys, or I just don't have a USB port available (i.e. I'm on a phone or whatever)

In order to meet these requirements I'm going to need the following:

  • yubico-pam Yubikey PAM
  • Google Authenticator PAM
  • pam_access

The server is running Arch Linux, and luckily all of these are within the AUR, so I'm not going to cover the install of the modules.

In order to restrict SSHd access as above I need the following auth lines in /etc/pam.d/sshd

The next step is to ensure that the relevant users and IPs are listed in /etc/security/access_yubico.conf

After this is set up we will also need to set up the yubikey file /etc/yubikey

I'm not going to cover configuration of Google Authenticator with the google-authenticator command

The final changes are to /etc/ssh/sshd_config, ensuring that the following are set
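Given the requirements above, the relevant sshd_config settings would be along these lines (challenge-response must be on for the PAM modules to prompt; key and password auth off):

```
ChallengeResponseAuthentication yes
PasswordAuthentication no
PubkeyAuthentication no
UsePAM yes
```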