Internet of Things: Single Channel LoRa Gateway Build

Today the final outstanding item on the Single Channel LoRa Gateway Shopping List arrived in the post, and I now have everything needed to get my Single Channel LoRa Gateway up and running and posting data back to The Things Network. If you’d like to replicate the project, follow the guide below.

First let’s get the hardware side of things sorted out. Take 8 female-to-female jumper wires, chop off the connector at one end, and then tin the wires ready for soldering. The raw ends will be soldered to the LoRa transceiver and the female Dupont-style connectors will plug into the Raspberry Pi’s GPIO header.

Solder a cable to each of the following pins: DIO0, 3.3V, MISO, MOSI, SCK, RESET, NSS and the GND pin next to the MISO pin. Do not solder the GND cable to any other GND point on the board; it must be the one next to the MISO pin. I found I had to switch my soldering iron to a very fine tip, as the pads on the transceiver module are pretty small.

Once you have all the cables for the GPIO header connected to the RFM95W LoRa module, solder an 8cm length of solid core bell wire to the ANT pin on the module, and another 8cm length to the GND pin next to the ANT pin. These wires should point in opposite directions, 180 degrees apart, and will act as a small dipole antenna (8cm is roughly a quarter wavelength at 868 MHz).

Next connect the LoRa transceiver to the Raspberry Pi GPIO header as per the following table of physical pin numbers (pin 1 is marked on the board).

RFM95W    Raspberry Pi (physical pin)
3.3V      1
GND       6
DIO0      7
RESET     11
NSS       22
MOSI      19
MISO      21
SCK       23

I assembled my Raspberry Pi in a case with a gap for the cables to pop out from the GPIO connector. Here is the result, and also an opportunity for you to laugh at my bad soldering…

If you have not already loaded an OS onto your Raspberry Pi SD card you should do so now. I’ll be using Raspbian Jessie Lite, although you are free to use any distribution that supports WiringPi. Once you have the OS on the SD card, connect the Raspberry Pi to your network and boot it up. I’ll be configuring everything over SSH; if you want SSH to start by default on first boot, place an empty file called ssh in the boot partition of the SD card, otherwise SSH will remain in a stopped state on first boot.

After the Pi has booted up, log in, sudo up to the root user and then run the raspi-config utility.

rputt@loragw:~$ sudo su
root@loragw:~# raspi-config

From the menu select Interfacing Options and then select SPI; you need to ensure SPI is enabled before continuing with the configuration of the Pi.

If raspi-config asks whether you’d like to reboot, say yes and reboot your Raspberry Pi. Next, install the wiringpi, gcc and git packages from your OS repository.

root@loragw:~# apt-get update
root@loragw:~# apt-get install -y wiringpi git gcc

Once the packages are installed we can clone the single channel gateway packet forwarder from GitHub.

root@loragw:~# git clone https://github.com/tftelkamp/single_chan_pkt_fwd

Now let’s change directory into the repository we just cloned and check a few items in the source code…

root@loragw:~# cd single_chan_pkt_fwd
root@loragw:~/single_chan_pkt_fwd# vi main.cpp

In the file find the “#define SERVER1” line and replace the IP address with 52.169.76.203. The IP in the repository is for an old The Things Network router which has been retired; the IP provided in this blog is for router.eu.thethings.network, and you can of course customise this value for a more appropriate host in your region. Next find the “uint32_t freq” line in the file and ensure it is set to the relevant frequency for your region; in the EU the frequency is 868100000 for 868.1 MHz.
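After the edits the two lines should read something like this (shown for the EU868 plan; the exact comments in your checkout may differ):

#define SERVER1 "52.169.76.203"    // router.eu.thethings.network
uint32_t freq = 868100000;         // 868.1 MHz

Save the file and run the make command to build a binary from the code.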

root@loragw:~/single_chan_pkt_fwd# make
g++ main.o base64.o -lwiringPi -o single_chan_pkt_fwd
root@loragw:~/single_chan_pkt_fwd# 

Next run the single_chan_pkt_fwd binary and, once it has done its initialisation, press Ctrl+C to stop its execution. If the hardware has been connected correctly and the software compiled successfully then you should see something like the following…

root@loragw:~/single_chan_pkt_fwd# ./single_chan_pkt_fwd 
SX1276 detected, starting.
Gateway ID: ba:12:e3:ff:ff:42:4e:1b
Listening at SF7 on 868.100000 Mhz.
------------------
stat update: {"stat":{"time":"2017-05-06 14:37:10 GMT","lati":0.00000,"long":0.00000,"alti":0,"rxnb":0,"rxok":0,"rxfw":0,"ackr":0.0,"dwnb":0,"txnb":0,"pfrm":"Single Channel Gateway","mail":"","desc":""}}
^C
root@loragw:~/single_chan_pkt_fwd# 

Open a web browser and visit https://www.thethingsnetwork.org. If you already have an account, log in; if you are new to The Things Network, sign up for a new account. Once you are logged in go to the console, click the Gateways button and then click Register Gateway.

Complete the form as per the following:

  • Protocol – Packet Forwarder
  • Gateway EUI – Copy the Gateway ID given when you first ran the single_chan_pkt_fwd binary.
  • Description – A friendly name for your gateway.
  • Frequency Plan – Select the frequency plan for your region; this should match the frequency given when you run single_chan_pkt_fwd. If it does not, you probably have the wrong LoRa module for your region and should order one with the relevant frequency, as running a LoRa module in the wrong frequency band may be illegal in your country.

You should end up with something that looks a bit like this…

Next click Register Gateway. The page will refresh and the gateway status should say “not connected”. Now rerun the single_chan_pkt_fwd binary on your Raspberry Pi; the gateway should check into The Things Network and the webpage should update to show a connected status, with the gateway last seen within the last 30 seconds or so.

That’s it, your LoRa single channel packet forwarder gateway is now up and running and capable of forwarding things like sensor readings to The Things Network. Remember there are a few limitations to running a single channel gateway compared to a full 8 channel gateway…

  • This particular implementation is unidirectional; it can only receive packets from LoRa devices, not transmit data back.
  • The gateway only operates on one channel and one spreading factor, in this case 868.1 MHz and SF7, therefore you should update any LoRa node that you want to submit data from to use only the 868.1 MHz channel rather than the usual 8 channel round robin.
  • If you like, you can change the channel it listens on and the spreading factor by editing the frequency and spreading factor in the main.cpp file and rerunning the make command.

Obviously when you log out of your Pi SSH session the single_chan_pkt_fwd binary stops running; you can either run it under a screen session or configure systemd to run the binary as a service (such as the unit sketched below) to make sure it continues running after you log out.
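A minimal systemd unit along these lines should do the job; this is a sketch which assumes the repository was cloned to /root as above:

[Unit]
Description=Single channel LoRa packet forwarder
After=network-online.target

[Service]
WorkingDirectory=/root/single_chan_pkt_fwd
ExecStart=/root/single_chan_pkt_fwd/single_chan_pkt_fwd
Restart=on-failure

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/single_chan_pkt_fwd.service, then run systemctl enable single_chan_pkt_fwd followed by systemctl start single_chan_pkt_fwd.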

What’s next? Well, it’s all well and good having a LoRa gateway set up in your house, but without any data flowing over it it’s pretty boring. I am currently waiting on another LoRa transceiver module to turn up (I ordered this one from AliExpress, as they are much cheaper direct from China than from a UK seller on eBay, but the delivery times suck); once that arrives I’ll continue converting my ESP8266 DHT22 project to a LoRa DHT22 project and have it submit its readings over my LoRa gateway. Unfortunately until it arrives I have no real way of proving whether my gateway does in fact work. Keep watching the blog for updates.

Internet of Things: Single Channel LoRa Gateway Shopping List

Currently my Internet of Things Things are quite limited; in my house I have a DIY DHT22-powered Temperature & Humidity Sensor and Philips Hue lighting. However, providing everything goes to plan, I should soon be moving out of rented accommodation into my own home, which greatly improves the flexibility for home automation and cool IoT devices. One theme common to most Internet of Things Things appears to be the use of radio back to a hub of some sort, rather than hundreds of ESP8266 modules all over the place. This is an attractive proposition, as low-power, low-bandwidth radio can easily achieve better range than WiFi, and you can multiplex a much higher density of IoT devices on such radio technology than you can connect WiFi devices to consumer-grade WiFi access points.

After a bit of research I decided I should start experimenting with this concept, and I stumbled across LoRa: low power, wide area networking for IoT usage. LoRa is particularly attractive because of The Things Network (TTN), an open network of LoRa gateways which allows your thing to latch on to any TTN LoRa gateway and start transmitting, or indeed receiving, its data. It’s also attractive because LoRa-connected devices can be made uber cheap, with modules as low as around £10 on eBay from China. My first intention was to convert my DHT22 humidity sensor to a LoRa-based device, so I researched whether I could use a nearby TTN LoRa gateway for this purpose. Unfortunately the closest gateway is a good 5 miles from my house, and although LoRa has proven to have very good range when line of sight can be achieved and conditions are carefully planned, I knew it was almost impossible to contact this gateway from my house, purely because of the landscape and the fact that in the UK most TTN LoRa gateways are limited-range DIY devices. Furthermore there are no LoRa gateways in the area I plan on moving to, so to go down the LoRa route I’d need to host my own gateway long term anyway.

Back to the drawing board: before I could convert my DHT22 ESP8266-powered sensor to LoRa I’d need to establish my own LoRa gateway. Initially this seemed to be a huge blocker; although the LoRa modules for use in IoT devices are cheap, a decent LoRa gateway seemed to be in the region of £200 for a compliant 8 channel device. Then I found the saviour: Andreas Spiess, a YouTuber from Switzerland with a selection of LoRa-based experiments on his channel. He documented how he found a blog detailing the creation of a cheap 1 channel LoRa gateway using a Raspberry Pi and a cheap LoRa transceiver module, and he also explains the limitations of a 1 channel gateway; for example, on your connected things you need to force 1 channel operation, and you will be able to multiplex fewer devices. However, for the low number of devices I plan on creating I am sure this will provide a cheap entry point, and the heads-up from Andreas reduced a lot of headaches along the way.

So I ordered the parts to start building a single channel gateway. Here is the shopping list; I already had some of the parts lying around in my electronics box… Of course, if you would like to replicate the project you’ll also need some tools like wire strippers, a soldering iron and so on.

1x Raspberry Pi
1x Raspberry Pi Power Supply
1x MicroSD Card (8GB+)
1x RFM95W LoRa Transceiver Module
8x Female to Female Jumper Wires

These items are due to arrive over the next couple of days, and then I will start to build my single channel LoRa gateway, documenting progress as I go. Once the gateway is online I’ll order a few other LoRa modules and start playing with sending and receiving packets on a breadboard to ensure it is working as expected.

My adventure into SSD caching with ZFS (Home NAS)

Recently I decided to throw away my defunct 2009 MacBook Pro which had been rotting in my cupboard, retrieving the only useful part before doing so: the 80GB Intel SSD I had installed a few years earlier. Initially I thought about simply adding it to my desktop as a bit of extra space, but in 2017 80GB really isn’t worth it, and then I had a brainwave… let’s see if we can squeeze some additional performance out of my HP Microserver Gen8 NAS running ZFS by installing it as a cache disk.

I installed the SSD in the CD-ROM tray of the Microserver using a floppy-power-to-SATA-power converter and a SATA cable; unfortunately it seems the CD-ROM SATA port on the motherboard is only a 3Gbps port, although this didn’t matter so much as it is an older 3Gbps SSD anyway. Next I booted up the machine and, to my surprise, the disk was not found by my FreeBSD install. Then I realised that the SATA port for the CD drive is actually provided by the RAID controller, so I rebooted into Intelligent Provisioning and added an additional RAID0 array with just the one disk to act as my cache. In fact all of the disks in this machine are individual RAID0 arrays, so ZFS sees just a bunch of disks (JBOD); ZFS offers additional functionality over normal RAID (mainly scrubbing, deduplication and compression).

Configuration

Let’s have a look at the zpool before adding the cache drive to make sure there are no errors or ugliness…

[root@netdisk] ~# zpool status vol0
  pool: vol0
 state: ONLINE
  scan: scrub repaired 0 in 3h21m with 0 errors on Wed Apr 26 16:40:16 2017
config:

	NAME                                            STATE     READ WRITE CKSUM
	vol0                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/b8ebf047-25cd-11e7-b7e2-000c29ccceef  ONLINE       0     0     0
	    gptid/c0df3410-25cd-11e7-b7e2-000c29ccceef  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/c8b062cb-25cd-11e7-b7e2-000c29ccceef  ONLINE       0     0     0
	    gptid/355426bd-25ce-11e7-b7e2-000c29ccceef  ONLINE       0     0     0

errors: No known data errors

Now let’s prep the drive for use in the zpool using gpart. I want to split the SSD into two separate partitions, one for L2ARC (read caching) and one for ZIL (write caching); I have decided on 20GB for the ZIL and 50GB for L2ARC. Be warned: using one SSD like this is considered unsafe, because it is a single point of failure in terms of delayed writes (a redundant configuration with two SSDs would be more appropriate), and the heavy write cycles on the SSD from the ZIL are likely to kill it over time.

[root@netdisk] ~# gpart create -s gpt da6
[root@netdisk] ~# gpart show da6
=>       34  150994877  da6  GPT  (72G)
         34  150994877       - free -  (72G)
[root@netdisk] ~# gpart add -t freebsd-zfs -s 20G da6
da6p1 added
[root@netdisk] ~# gpart add -t freebsd-zfs -s 50G da6
da6p2 added
[root@netdisk] ~# gpart show da6
=>       34  150994877  da6  GPT  (72G)
         34   41943040    1  freebsd-zfs  (20G)
   41943074  104857600    2  freebsd-zfs  (50G)
  146800674    4194237       - free -  (2.0G)
[root@netdisk] ~# glabel list da6p1
Geom name: da6p1
Providers:
1. Name: gptid/a3d641e5-2c03-11e7-9146-000c29ccceef
   Mediasize: 21474836480 (20G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
   secoffset: 0
   offset: 0
   seclength: 41943040
   length: 21474836480
   index: 0
Consumers:
1. Name: da6p1
   Mediasize: 21474836480 (20G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
[root@netdisk] ~# glabel list da6p2
Geom name: da6p2
Providers:
1. Name: gptid/a6dd9f4d-2c03-11e7-9146-000c29ccceef
   Mediasize: 53687091200 (50G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
   secoffset: 0
   offset: 0
   seclength: 104857600
   length: 53687091200
   index: 0
Consumers:
1. Name: da6p2
   Mediasize: 53687091200 (50G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0

Now that we have the two partitions configured on the disk as described above, they can be added to the ZFS zpool.

[root@netdisk] ~# zpool add vol0 log gptid/a3d641e5-2c03-11e7-9146-000c29ccceef
[root@netdisk] ~# zpool add vol0 cache gptid/a6dd9f4d-2c03-11e7-9146-000c29ccceef
[root@netdisk] ~# zpool status vol0
  pool: vol0
 state: ONLINE
  scan: scrub repaired 0 in 3h21m with 0 errors on Wed Apr 26 16:40:16 2017
config:

	NAME                                            STATE     READ WRITE CKSUM
	vol0                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/b8ebf047-25cd-11e7-b7e2-000c29ccceef  ONLINE       0     0     0
	    gptid/c0df3410-25cd-11e7-b7e2-000c29ccceef  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/c8b062cb-25cd-11e7-b7e2-000c29ccceef  ONLINE       0     0     0
	    gptid/355426bd-25ce-11e7-b7e2-000c29ccceef  ONLINE       0     0     0
	logs
	  gptid/a3d641e5-2c03-11e7-9146-000c29ccceef    ONLINE       0     0     0
	cache
	  gptid/a6dd9f4d-2c03-11e7-9146-000c29ccceef    ONLINE       0     0     0

errors: No known data errors

Now it’s time to see if adding the cache has made much of a difference. I suspect not, as my Home NAS sucks: it is an HP Microserver Gen8 with the crappy Celeron CPU and only 4GB RAM. Anyway, let’s test it and find out. First off let’s throw fio at the mount point for this zpool and see what happens with the ZIL and L2ARC enabled and disabled.
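For reference, the parameters visible in the fio output below (4k blocks, mixed sequential read/write, posixaio, queue depth 1, a 1GB file, one thread) correspond to a disk_perf.fio job file along these lines; the directory is an assumption, so point it at wherever your pool is mounted:

[random_rw]
rw=rw
bs=4k
size=1024m
ioengine=posixaio
iodepth=1
thread
directory=/mnt/vol0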

Caching Disabled FIO Results

[root@netdisk] ~# fio disk_perf.fio
random_rw: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=1
fio-2.14
Starting 1 thread
random_rw: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [M(1)] [100.0% done] [10806KB/10580KB/0KB /s] [2701/2645/0 iops] [eta 00m:00s]
random_rw: (groupid=0, jobs=1): err= 0: pid=101627: Fri Apr 28 12:21:47 2017
  read : io=522756KB, bw=10741KB/s, iops=2685, runt= 48669msec
    slat (usec): min=2, max=2559, avg= 9.31, stdev=21.63
    clat (usec): min=2, max=69702, avg=34.66, stdev=563.31
     lat (usec): min=12, max=69760, avg=43.97, stdev=567.97
    clat percentiles (usec):
     |  1.00th=[    2],  5.00th=[    2], 10.00th=[    3], 20.00th=[    3],
     | 30.00th=[    3], 40.00th=[    3], 50.00th=[    3], 60.00th=[   14],
     | 70.00th=[   14], 80.00th=[   15], 90.00th=[   15], 95.00th=[   15],
     | 99.00th=[   27], 99.50th=[   59], 99.90th=[ 9536], 99.95th=[10304],
     | 99.99th=[15040]
  write: io=525820KB, bw=10804KB/s, iops=2701, runt= 48669msec
    slat (usec): min=2, max=8124, avg=15.87, stdev=43.54
    clat (usec): min=2, max=132195, avg=305.77, stdev=1973.41
     lat (usec): min=21, max=132485, avg=321.64, stdev=1977.32
    clat percentiles (usec):
     |  1.00th=[    3],  5.00th=[    3], 10.00th=[    3], 20.00th=[    3],
     | 30.00th=[    3], 40.00th=[    3], 50.00th=[    4], 60.00th=[   24],
     | 70.00th=[   24], 80.00th=[   25], 90.00th=[   25], 95.00th=[   35],
     | 99.00th=[10304], 99.50th=[10560], 99.90th=[11712], 99.95th=[19584],
     | 99.99th=[50432]
    lat (usec) : 4=49.44%, 10=4.09%, 20=21.92%, 50=22.63%, 100=0.23%
    lat (usec) : 250=0.08%, 500=0.02%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.90%, 20=0.63%, 50=0.02%
    lat (msec) : 100=0.01%, 250=0.01%
  cpu          : usr=3.08%, sys=2.92%, ctx=265022, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=130689/w=131455/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=522756KB, aggrb=10741KB/s, minb=10741KB/s, maxb=10741KB/s, mint=48669msec, maxt=48669msec
  WRITE: io=525820KB, aggrb=10804KB/s, minb=10804KB/s, maxb=10804KB/s, mint=48669msec, maxt=48669msec
[root@netdisk] ~# 

Caching Enabled FIO Results

[root@netdisk] ~# fio disk_perf.fio
random_rw: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=1
fio-2.14
Starting 1 thread
random_rw: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [M(1)] [100.0% done] [10902KB/10854KB/0KB /s] [2725/2713/0 iops] [eta 00m:00s]
random_rw: (groupid=0, jobs=1): err= 0: pid=101640: Fri Apr 28 12:24:18 2017
  read : io=522756KB, bw=10462KB/s, iops=2615, runt= 49967msec
    slat (usec): min=2, max=23575, avg=12.84, stdev=155.63
    clat (usec): min=2, max=135652, avg=34.24, stdev=644.41
     lat (usec): min=12, max=135654, avg=47.08, stdev=663.37
    clat percentiles (usec):
     |  1.00th=[    2],  5.00th=[    2], 10.00th=[    3], 20.00th=[    3],
     | 30.00th=[    3], 40.00th=[    3], 50.00th=[    3], 60.00th=[    3],
     | 70.00th=[   13], 80.00th=[   14], 90.00th=[   15], 95.00th=[   15],
     | 99.00th=[   27], 99.50th=[   70], 99.90th=[ 9920], 99.95th=[10432],
     | 99.99th=[11968]
  write: io=525820KB, bw=10523KB/s, iops=2630, runt= 49967msec
    slat (usec): min=2, max=139302, avg=22.47, stdev=556.00
    clat (usec): min=2, max=119615, avg=306.30, stdev=1969.21
     lat (usec): min=20, max=139752, avg=328.78, stdev=2046.20
    clat percentiles (usec):
     |  1.00th=[    2],  5.00th=[    3], 10.00th=[    3], 20.00th=[    3],
     | 30.00th=[    3], 40.00th=[    3], 50.00th=[    3], 60.00th=[    3],
     | 70.00th=[   23], 80.00th=[   24], 90.00th=[   25], 95.00th=[   32],
     | 99.00th=[10176], 99.50th=[10560], 99.90th=[19072], 99.95th=[24192],
     | 99.99th=[44800]
    lat (usec) : 4=62.92%, 10=2.93%, 20=15.75%, 50=16.37%, 100=0.28%
    lat (usec) : 250=0.14%, 500=0.03%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.93%, 20=0.57%, 50=0.04%
    lat (msec) : 100=0.01%, 250=0.01%
  cpu          : usr=3.21%, sys=2.51%, ctx=265920, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=130689/w=131455/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=522756KB, aggrb=10462KB/s, minb=10462KB/s, maxb=10462KB/s, mint=49967msec, maxt=49967msec
  WRITE: io=525820KB, aggrb=10523KB/s, minb=10523KB/s, maxb=10523KB/s, mint=49967msec, maxt=49967msec
[root@netdisk] ~# 

Observations

Ok, so the initial result is a little disappointing, but hardly unexpected; my NAS sucks and there are lots of bottlenecks: CPU, memory and the fact that only 2 of the SATA ports are 6Gbps. There is no real performance difference between the results; the IOPS, bandwidth and latency all appear very similar. However, let’s bear in mind fio is a pretty hardcore disk benchmark utility; how about some real world use cases?

Next I decided to test a few typical file transactions that this NAS is used for: Samba shares to my workstation. For the first test I wanted to read a 3GB file over the network with the cache enabled and disabled, running it multiple times to ensure the data was hot in the L2ARC and that the test was somewhat repeatable. The network itself is an uncongested 1Gbit link and I am copying onto the secondary SSD in my workstation. The dataset for these tests has compression and deduplication disabled.

Samba Read Test

Attempt    without caching    with caching
1          48.1MB/s           52.2MB/s
2          49.6MB/s           66.4MB/s
3          47.4MB/s           65.6MB/s

Not bad; once the data becomes hot in the L2ARC, reads appear to gain a decent advantage compared to reading from the disks directly. How does it perform when writing the same file back across the network using the ZIL vs no ZIL?

Samba Write Test

Attempt    without caching    with caching
1          34.2MB/s           57.3MB/s
2          33.6MB/s           55.7MB/s
3          36.7MB/s           57.1MB/s

Another good result in the real world test; this certainly helps the write transfer speed. I do wonder what would happen if you filled the ZIL transferring a very large file; however this is unlikely in my use case, as I typically only deal with a couple of files of several hundred megabytes at any given time, so a 20GB ZIL should suit me reasonably well.

Is ZIL and L2ARC worth it?

I would imagine that for a big beefy ZFS server running in a company somewhere, with a large disk pool and lots of users, multiple enterprise-level SSDs for ZIL and L2ARC would be well worth the investment; however at home I am not so sure. Yes, I did see an increase in read speeds with cached data and a general increase in write speeds, but it is use case dependent. In my use case I rarely access the same file frequently; my NAS primarily serves as a backup and for archived data, and although the write speeds are cool I am not sure it’s a deal-breaker. If I built a new home NAS today I’d probably concentrate the budget on a better CPU, more RAM (for the ARC cache) and more disks. However, if I had a use case where I frequently accessed the same files and needed to do so faster, then yes, I’d probably invest in an SSD for caching. I think if you have a spare SSD lying around and you want something fun to do with it, sure, chuck it in your ZFS-based NAS as a cache mechanism. If you were planning on buying an SSD for caching then I’d really consider your needs and decide whether the money could be spent on alternative stuff which would improve your experience with your NAS. I know my NAS would benefit more from an extra stick of RAM and a more powerful CPU, but as a quick evening project with some parts I had hanging around, adding some SSD cache was worth a go.

Download YouTube Videos using VLC

I recently stumbled across an amazing feature of VLC which I didn’t notice existed until I accidentally pasted a YouTube webpage URL into the VLC open dialog… VLC is fully able to play YouTube videos, and you can even save them using the transcoding feature; pretty cool.

Warning: I am sharing this with you because I think it is a cool feature; however, please do not use this method to download any copyright-protected materials…


Play YouTube Videos in VLC

First off let’s try playing a YouTube video in VLC. Find a YouTube video you’d like to play in VLC and copy its URL directly out of your web browser’s address bar. For this example I’ll use a video of some Creative Commons licensed music.

Next open VLC and select “Open Network…” from the file menu.

Paste the YouTube URL you copied from your browser window into the URL text box and click Open; your YouTube video should now start to play within VLC.

Great, now you can watch your favourite YouTube videos within VLC; you can even add them to playlists, and best of all it seems to skip the ads every time. But what if you want to save videos offline to view later? Keep reading to learn how to save YouTube videos to your computer using VLC.


Downloading YouTube videos using VLC

Downloading YouTube videos using VLC is almost as easy as playing them. Copy the YouTube URL from your browser and open VLC, but this time open the Streaming/Exporting Wizard from the file menu.

On the first page of the wizard select “Transcode/save to file” and click Next.

On the next page paste the YouTube URL into the “Select a stream” textbox and click Next.

Tick the Transcode Video and Transcode Audio checkboxes and select the codec of the file you’d like to save; personally I prefer mpeg4, although if you prefer a different format / bitrate you should select it here. If you don’t know what to select, use the settings from my screenshot and you should get a working mpeg4 file out the other end. Remember to tick Transcode Audio too, else you will end up with a video file with no audio.

Next select a location to save the video file… Clicking the Choose button will open a file save dialog so you can more easily select a location for the file. Once you are happy, click Next.

A brief summary will be displayed with all the details you entered in the wizard; dismiss this by clicking the Finish button. In your main VLC window you’ll now have a new Streaming/Transcode Wizard item in your playlist. Double click on this playlist item to start transcoding the YouTube video; when the transcoding starts, the title of the playlist item will update to the title of the YouTube video. No video will play, but progress will be displayed at the bottom of the VLC window in the area highlighted in the screenshot… Once the progress indicated in the playback position bar is complete and the time returns to 00:00, you know transcoding the YouTube video to the file has finished.

You can now locate the video file you saved from YouTube in the location you selected in the wizard, and you can open it in any video player capable of playing back the codec you selected. It feels a bit clunky, but after you have downloaded a couple of videos the process gets a lot easier.

Congratulations, you have now downloaded a video from YouTube using VLC!


As I previously suggested, please do not download copyrighted materials from YouTube; if you want a video / song, please purchase it through official means. However, if you are after a Creative Commons video, or some old content you uploaded to YouTube but lost the source files for, please go ahead and use this method to pull down video from YouTube.

New Nokia 3310, genius or ludicrous?

A few days ago my better half was most excited to tell me the joyful news that some random company somewhere was “bringing back the Nokia 3310”, and initially I thought, well, that’s kind of cool as a nostalgic kind of idea. But then after reading the hype online I soon came to think it’s an awful idea and a cash cow. The damn thing is going to retail for around £35… £35 for a feature phone, are you having a laugh?

Anyway, since then I have heard a lot of stuff about the new release, especially as my generation were around in the heyday of the 3310. Here is what I’ve been hearing, along with my thoughts…

  1. It looks like a classic Nokia 3310, how cool!
    • Sure, this is kind of nice I guess, but I am sure you’ll soon get annoyed with the tiny screen and clunky buttons.
  2. The Nokia 3310 was the best phone I ever owned!
    • Hang on a minute, you’d probably say that about your first car too, just because of your fond memories of losing your virginity on the back seat at the age of 17 and having a “burn up with your mates” in the local industrial estate after dark. The reality is living with that car today would be really shit, even if you retrofitted a CD player.
  3. But the battery will last for ever…
    • Yep, the battery life is nice, especially if you want a 2G phone for your latest hiking adventure which will last more than 7 hours so you can be airlifted to safety when you freeze at 2 in the morning, or if you want a burner phone to smuggle into prison and you didn’t fancy wedging that horrible 3-pin charger up your butt. But let’s look at the reality: is super long battery life really worth losing all the features you love today, and £35? If you want a phone which will still send an SMS and make a phone call after two and a half days without a charge, check out the other feature phones out there; they are just as bad as the 3310 remake without the nostalgic 3310 case, but only £5 (see here if you’re interested).
  4. They’ve pimped it with a colour screen!
    • Damn, so originally I was sold on the nostalgia, but now it’s just a standard feature phone in a 3310 case. If you really wanna be part of the cool kids, buy an original Nokia 3310 off of eBay and get a new battery for it; then you really get my respect.
  5. It would be good to escape my smart phone and have a simpler phone.
    • Sure, I understand your motivation here; smart phones are a big drain on your time and they interrupt everything with their notifications, apps and short battery life. But there are much cheaper feature phones out there, so you may as well save £30 in the process and buy something else to entertain you, given the lack of apps / internet / anything useful other than calling your mum on your new handset.
  6. SNAKE
    • Damn, you aren’t gonna enjoy this new snake: it’s on a colour screen, and in the pre-release videos it seems to behave differently to the original snake. And don’t you remember how frustrating it was when you mis-pushed one of those crappy rubbery buttons and your snake ran into the side of the screen after you’d carefully curated the spiral of doom for the last 2 minutes of your life? Trust me, everything is capable of running a snake-like game these days; you really don’t need a 3310 remake with a snake remake on it to enjoy a worm-based, dot-eating time filler.


My verdict…

If you want it because it looks like a 3310, fine, for any other reason do not buy this phone.


What do I recommend instead?

If you live within reasonably easy access of a power socket (and let’s face it, most of us do) then for the same money as the 3310 remake you can buy a really low-end Android phone which will do most stuff you want day to day like calling and SMS, normally has a pretty crappy camera, and will run some of the not-so-fancy Android apps, so at least you’ll be able to check up on the awful lives of your friends on Facebook or tweet photos of your dog licking ice cream off of its nose. – Yep, they really are available for the same price as the 3310 remake: Cheap Android Phone

If you need a phone that will last for a few days and be reliable for making calls and SMS, then check out cheaper feature phones, which as mentioned above start from just £5; with the £30 saving you can go and enjoy yourself. – Cheap Feature Phone

If you want a phone with good battery life and smart phone features… Sorry, you’re out of luck; get a time machine and bring some future batteries back with you please, we’d really like to know how they work! – Time Machine

And lastly, when should you buy a 3310 remake? Well, if you want something that looks like a 3310 to take on your trip to some beach in Thailand to “find yourself”, and you are living on the generous donations of mummy and daddy to waste away your gap year at uni, then sure… knock yourself out and get the 3310 remake if that’s your kind of thing. – Nokia 3310 Remake. Of course, if you are the real McCoy you’d look around the Amazon used section, eBay or even your own drawer of crap at home to find an original Nokia 3310 in all its early-2000s glory.

Trying out Openstack Function as a Service (Picasso)

Background

Function as a Service is a very exciting concept, offering a scalable compute resource without having to worry about the underlying platform, and now it could be coming to an OpenStack powered cloud near you (well, actually it’s still very early days, but I decided to have a play with it anyway).

For the last few years I have been running www.0870converter.co.uk completely “serverless” using AWS S3, API Gateway, Lambda and DynamoDB. The site is served up from an S3 bucket using the static website feature; this in turn reaches out to AWS API Gateway via JS when someone searches for a number (try 0800001066 if you don’t know any UK numbers), which invokes a Lambda function running a small Python script to look up any alternative numbers in a DynamoDB table, and the result is passed back through the microservices until the API Gateway returns it to the async JS call. This is a great advantage to me: the website is fully scalable without any complex architectural decisions, and I don’t have to manage any instances, DBs or services, or apply any patches. I just push out code updates that do the interactive bits to Lambda, and any static content / client-side JS to the S3 bucket. This is the power of Function as a Service; the only problem is that until now the only really credible solution has been Amazon Web Services, as the ecosystem surrounding Lambda, such as API Gateway, opens up great possibilities.
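The Lambda side of that flow is genuinely tiny; a rough sketch of the sort of handler involved is below (the table and attribute names are made up for illustration, they are not the real ones):

import boto3

# Hypothetical table of numbers and their cheaper alternatives
table = boto3.resource("dynamodb").Table("numbers")

def handler(event, context):
    # The searched number arrives from API Gateway in the event payload
    number = event["number"]
    item = table.get_item(Key={"number": number}).get("Item")
    # Return any alternatives found; API Gateway hands this back to the JS
    return {"alternatives": item.get("alternatives", []) if item else []}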

However, recently there has been a flurry of development on a relatively new OpenStack project, Picasso. Picasso wraps the IronFunctions Function-as-a-Service engine in an OpenStack-style API, including Keystone integration for multitenancy and user authentication, and an extension to the Python OpenStack Client allowing easy administration. This opens up intriguing possibilities, such as running your own Function as a Service in your own OpenStack private cloud, or potentially one day having Function as a Service included in some OpenStack public clouds. Here are my first impressions of Picasso…

Installation

I performed the installation on “RobCloud”, a small OpenStack playground I maintain for myself to try out stuff with OpenStack. Installation was pretty simple: first I installed an IronFunctions host using the Docker container approach described in their dev docs at http://open.iron.io, and then continued to install the Picasso service. Unlike most OpenStack services it is pretty picky about having the latest and greatest Python release, so I installed Python 3.5.2 from Miniconda; installation was then as easy as installing the requirements via pip and installing Picasso itself using its setuptools file. You can get the current Picasso source from https://github.com/openstack/picasso. It is still very early days for the project and, unlike projects in the big tent, there are no versioned releases or packages; however, the master branch worked fine when I tested it on Valentine’s Day 2017 :-).

So, a few observations… unlike other OpenStack services Picasso does not currently use Oslo.Config, Oslo.Log or Oslo.Db, so configuration and behaviour are slightly different to what we have grown to expect from other OpenStack services. With Picasso there is no config file as far as I can see, and everything the service requires is passed as CLI opts; this is a little annoying, as instead of editing a nice friendly config file you have to bodge your options into your systemd unit. Likewise, as Oslo.Db has not been used the service does not use SQLAlchemy, which means the only backend DB option is MySQL; this will raise some eyebrows, particularly from the PostgreSQL lovers in the OpenStack community. However, the project is still young and I have posted a wealth of blueprints to hopefully steer standardisation.

Usage

Anyway, let’s get on to using the service. I updated my Keystone service catalogue to include my new Picasso functions service and added the relevant endpoints. I then downloaded the Picasso client from https://github.com/openstack/python-picassoclient and installed it in the same virtualenv I run my OpenStack Python client from… After completing this step I can see an additional fn branch of options in the OpenStack client.

So what are these apps and routes things? Well, these map identically to the apps and routes in IronFunctions: before you can run a function you must create an app, and then create a route within the app which maps to your function. But do not be fooled: these are not real REST routes, and in my opinion you cannot build a proper API on top of them like you can with AWS API Gateway, as they lack support for things like variables in the URI, and the only verb used is POST. Having said that, you can make these routes publicly accessible if you set your function to be public, and maybe this is a precursor to another OpenStack project which translates them into RESTful endpoints. So let’s actually run some code…

First I need to create a new app to hold a route to my function.

openstack fn apps create demo
+-------------+--------------------------------------------------+
| Field       | Value                                            |
+-------------+--------------------------------------------------+
| description | App for project 797d956bab374df3ab33ca4ff603e032 |
| created_at  | 2017-02-14 23:03:39.856657                       |
| updated_at  | 2017-02-14 23:03:39.856683                       |
| project_id  | 797d956bab374df3ab33ca4ff603e032                 |
| config      | None                                             |
| id          | 6836c39f0109420593eb77a33e63702d                 |
| name        | demo-797d956bab374df3ab33ca4ff                   |
+-------------+--------------------------------------------------+

and then create a route within the app to launch our function. For the purposes of this blog post I am going to run the example hello world function from Iron.io; functions are essentially just Docker containers hosted on Docker Hub containing the code you wish to execute.

openstack fn routes create demo-797d956bab374df3ab33ca4ff /hello sync iron/hello
+-----------------+------------+
| Field           | Value      |
+-----------------+------------+
| image           | iron/hello |
| memory          | 128        |
| max_concurrency | 1          |
| timeout         | 30         |
| path            | /hello     |
| is_public       | False      |
| type            | sync       |
+-----------------+------------+

You can see from the command above that you can define either synchronous or asynchronous functions, and “iron/hello” is the Docker Hub image name. There are also options to set the maximum memory, concurrency and so on; however, for this test I used the defaults. We can now execute the function, and as we can see, the hello world example does indeed return Hello World.

openstack fn routes execute demo-797d956bab374df3ab33ca4ff /hello
+---------+-------------------------------------------------------------------------+
| Field   | Value                                                                   |
+---------+-------------------------------------------------------------------------+
| message | App demo-797d956bab374df3ab33ca4ff sync route /hello execution finished |
| result  | Hello World!                                                            |
|         |                                                                         |
+---------+-------------------------------------------------------------------------+
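Since routes can be made public, you should in principle also be able to invoke one directly over HTTP, outside the OpenStack client. A sketch of what that might look like is below; note the host, port and the /r/<app>/<path> URL pattern are assumptions based on the IronFunctions docs, not something verified here:

import requests

# Hypothetical direct call to a public IronFunctions route
resp = requests.post(
    "http://functions.example.com:8080/r/demo-797d956bab374df3ab33ca4ff/hello",
    json={},  # the request body is handed to the function as its input
)
print(resp.text)  # expect "Hello World!" from the iron/hello image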

There are a few oddities I noticed along the way. For example, if you create an asynchronous function and kick it off via the API you never get an execution ID back that would let you check in on progress later; instead your function goes off and runs and you never see the return value. You have to implement callbacks yourself in the function to some other thing you manage, like a DB or a queue, and if they don’t work the function essentially goes and does its thing without you ever knowing about it (see the sketch below). Another issue is that your Docker image must be published to Docker Hub; I have not found a way to switch this out for a private Docker repository… It would be uber cool if it were extended to use Glance to store Docker images for use as functions and to eat the OpenStack dog food, as most OpenStack clouds will already be running Glance for Nova images, so it makes sense to have all your images in one place. Other than these little niggles it does what it says on the tin: it runs functions. There is a whole host of additional functionality I can envisage, such as Neutron tenant network integration; it’s all well and good being able to run functions, but if the thing executing your functions can’t talk to your database node on your private network, you could be put in a situation of having MySQL or likewise listening publicly. But the project is still young, and I am hoping to get involved with building out some of the blueprints I have submitted following triage, so watch this space…
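To illustrate the callback point, here is the kind of thing an async function body currently has to do for itself. This is a sketch only; the webhook URL is hypothetical, standing in for whatever DB or queue you manage:

import sys
import requests

def main():
    # IronFunctions hands the request payload to the container on stdin
    payload = sys.stdin.read()
    result = payload.upper()  # stand-in for the real work
    # Ship the result somewhere yourself, since the caller never sees it
    requests.post("https://callbacks.example.com/results",
                  json={"job": "demo-hello", "result": result})

if __name__ == "__main__":
    main()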

Public Demo

If you want to give it a go for yourself, I decided (maybe stupidly) to put up a public demonstration so you can fiddle with it. I have to warn you there is no monitoring and it’s something I put together pretty quickly, so it will probably fall over from time to time and may be down for long periods without me checking in on its health. Also, if it keeps falling over and requiring more attention than just a few service restarts here and there, I’ll probably bin it off… I warn you: do not use this demo for anything serious, it may be down for long periods of time or disappear without notice!


Identity URL: http://104.130.11.76:5000/v3/

Identity User: picasso_demo

Identity Password: d3m0t1m3 (Please do not change the password) – if people play around with this I’ll probably set up a cron to reset it every couple of minutes; let’s see what happens.


or, if you’d prefer to copy and paste, export the environment variables…

export OS_USERNAME=picasso_demo
export OS_PASSWORD=d3m0t1m3
export OS_PROJECT_NAME=picasso_demo
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://104.130.11.76:5000/v3/
export OS_IDENTITY_API_VERSION=3


The easiest way to use the demo is to install the OpenStack client. I’d start by creating a virtualenv to separate your Picasso demo activities from your usual Python work, then install the Python OpenStack Client followed by the Python Picasso Client as per below…

MRF28PG8WN:envs robe8437$ virtualenv picassoclient
Using base prefix '/Library/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/robe8437/Python/envs/picassoclient/bin/python3.5
Also creating executable in /Users/robe8437/Python/envs/picassoclient/bin/python
Installing setuptools, pip, wheel...done.
MRF28PG8WN:envs robe8437$ source picassoclient/bin/activate
(picassoclient) MRF28PG8WN:envs robe8437$ pip install python-openstackclient
(picassoclient) MRF28PG8WN:envs robe8437$ git clone https://github.com/openstack/python-picassoclient.git
Cloning into 'python-picassoclient'...
remote: Counting objects: 149, done.
remote: Total 149 (delta 0), reused 0 (delta 0), pack-reused 149
Receiving objects: 100% (149/149), 43.95 KiB | 0 bytes/s, done.
Resolving deltas: 100% (71/71), done.
Checking connectivity... done.
(picassoclient) MRF28PG8WN:envs robe8437$ cd python-picassoclient/
(picassoclient) MRF28PG8WN:python-picassoclient robe8437$ python setup.py install

Once the client has finished installing, copy and paste the environment variable export statements from above, then run the following to check the client is working; it should return an OpenStack Identity token.

(picassoclient) MRF28PG8WN:python-picassoclient robe8437$ openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2017-02-15T01:13:15+0000         |
| id         | 5d56397c4f884c82b717a5ab038d5890 |
| project_id | bad2137765d046bf8101c61bdc374a16 |
| user_id    | 1ea44bf4f75147e6a6ba017871c0a67e |
+------------+----------------------------------+

Next, providing nobody else playing with the demo has deleted it, try running the hello world function…

(picassoclient) MRF28PG8WN:python-picassoclient robe8437$ openstack fn routes execute demo-bad2137765d046bf8101c61bd /hello
+---------+-------------------------------------------------------------------------+
| Field   | Value                                                                   |
+---------+-------------------------------------------------------------------------+
| message | App demo-bad2137765d046bf8101c61bd sync route /hello execution finished |
| result  | Hello World!                                                            |
|         |                                                                         |
+---------+-------------------------------------------------------------------------+

Providing nobody has broken the demo or removed the function this should work, and now you are free to play with Picasso on my demo environment. As I previously requested, please do not change the password, and if you can help it try not to delete the hello function :-).

Securing DNS Traffic with DNS over HTTPS

Recently I wrote a post around the UK IP Bill and speculated how ISPs may implement the most basic requirement of the bill: keeping a list of every site each subscriber has visited. The simplest and most complete method I speculated about was inspecting DNS traffic passing over the ISP’s routers on port 53; DNS is a very old protocol and is plain text, so snooping on which domains each user visits would be as easy as running a mass tcpdump on port 53 with some metadata-extracting magic. Anyway, this post covers a simple proxy you can run at home to stop your DNS traffic going out over port 53 as plain text, and to have it travel instead over HTTPS encryption.

The Problem

As you well know, if you run tcpdump on port 53 on your machine today and make any DNS lookup, you are in for a treat: the full conversation in plain text in front of you. Check out what loading my blog looks like in terms of DNS traffic; it’s a treasure trove of hostnames.

MRF28PG8WN:https_dns_proxy robe8437$ sudo tcpdump -i en0 port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on en0, link-type EN10MB (Ethernet), capture size 65535 bytes
21:52:36.278484 IP mrf28pg8wn.connect.64494 > fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain: 17578+ A? robertputt.co.uk. (45)
21:52:36.323561 IP fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain > mrf28pg8wn.connect.64494: 17578 1/0/0 A 149.202.161.86 (61)
21:52:40.908298 IP mrf28pg8wn.connect.60815 > fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain: 52589+ A? www-google-analytics.l.google.com. (51)
21:52:40.913541 IP mrf28pg8wn.connect.59876 > fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain: 49807+ A? pagead2.googlesyndication.com. (47)
21:52:41.091339 IP fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain > mrf28pg8wn.connect.60815: 52589 1/0/0 A 216.58.201.46 (67)
21:52:41.465906 IP fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain > mrf28pg8wn.connect.59876: 49807 2/0/0 CNAME pagead46.l.doubleclick.net., A 216.58.213.66 (103)
21:52:42.537092 IP mrf28pg8wn.connect.49631 > fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain: 21419+ A? www.google.com. (32)
21:52:42.537195 IP mrf28pg8wn.connect.54188 > fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain: 48472+ A? pagead.l.doubleclick.net. (42)
21:52:42.789511 IP fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain > mrf28pg8wn.connect.54188: 48472 1/0/0 A 216.58.204.2 (58)
21:52:43.485905 IP mrf28pg8wn.connect.53642 > fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain: 36701+ A? pagead-googlehosted.l.google.com. (50)
21:52:43.611953 IP mrf28pg8wn.connect.49631 > fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain: 21419+ A? www.google.com. (32)
21:52:43.643998 IP fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain > mrf28pg8wn.connect.53642: 36701 1/0/0 A 216.58.201.33 (66)
21:52:43.769184 IP fwdr-8.fwdr-8.fwdr-8.fwdr-8.domain > mrf28pg8wn.connect.49631: 21419 1/0/0 A 172.217.23.36 (48)

From this we can clearly see I visited the domain robertputt.co.uk, plus a load of ad traffic and Google Analytics. This would easily allow ISPs to fulfil one of their responsibilities under the IP Bill, recording every website a user visits; note this says website, not webpage, so domain / subdomain / hostname is sufficient here.

So you are probably thinking: this is DNS, it’s a core bit of how the internet works, you can’t do anything to change that. Well, you may be surprised: Google has launched a new variation of its public DNS product called DNS-over-HTTPS (you can check out the docs for it here). This service essentially allows you to do DNS lookups over an HTTPS session, which as we all know is encrypted and not susceptible to the tcpdump snooping seen above. However, there is a big issue: you cannot normally configure your machine to do DNS over HTTPS, as most machines’ network configuration only allows talking to a traditional DNS server on port 53 in plain text using the standard protocol.

Using DNS-over-HTTPS

Do not despair, there is a way you can use DNS over HTTPS today, although it’s a little ugly to set up. The technique involves running a proxy locally which takes requests like a normal DNS server, using the standard protocol on port 53 in plain text; it then reaches out to Google DNS-over-HTTPS, gets the result and responds to the client in the traditional manner. I wrote a small proxy for this purpose in Python using dnslib and requests; you can fetch it here – Py-DNS-over-HTTPS-Proxy. It’s not a very nice script but it does the job; feel free to raise a PR if you have improvements :-).
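To give you an idea of what the proxy is doing internally, here is a minimal sketch of the same idea using dnslib and requests. It assumes Google’s JSON resolver endpoint at https://dns.google.com/resolve; the real script in the repository differs and handles more record types and edge cases:

import requests
from dnslib import RR, QTYPE
from dnslib.server import DNSServer, BaseResolver

class HTTPSResolver(BaseResolver):
    def resolve(self, request, handler):
        reply = request.reply()
        qname = str(request.q.qname)
        qtype = QTYPE[request.q.qtype]
        # Look the name up over HTTPS instead of plain text port 53
        answer = requests.get("https://dns.google.com/resolve",
                              params={"name": qname, "type": qtype}).json()
        for record in answer.get("Answer", []):
            # Rebuild each answer record from zone-format text
            reply.add_answer(*RR.fromZone("%s %s IN %s %s" % (
                record["name"], record["TTL"],
                QTYPE[record["type"]], record["data"])))
        return reply

if __name__ == "__main__":
    # Plain DNS in on 127.0.0.1:8053, DNS-over-HTTPS out
    DNSServer(HTTPSResolver(), address="127.0.0.1", port=8053).start()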

So how do I get this thing working? First create a virtualenv, install the requirements and check out the script…

MRF28PG8WN:envs robe8437$ virtualenv dns_proxy
Using base prefix '/Library/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/robe8437/Python/envs/dns_proxy/bin/python3.5
Also creating executable in /Users/robe8437/Python/envs/dns_proxy/bin/python
Installing setuptools, pip, wheel...done.
(exequor_api) MRF28PG8WN:envs robe8437$ cd dns_proxy/
(exequor_api) MRF28PG8WN:dns_proxy robe8437$ source bin/activate
(dns_proxy) MRF28PG8WN:dns_proxy robe8437$ pip install dnslib requests
Collecting dnslib
  Using cached dnslib-0.9.6.tar.gz
Collecting requests
  Using cached requests-2.12.4-py2.py3-none-any.whl
Building wheels for collected packages: dnslib
  Running setup.py bdist_wheel for dnslib ... done
  Stored in directory: /Users/robe8437/Library/Caches/pip/wheels/f4/3d/d1/b941767759a29d9a8df99b00c6f4204aeb6e5f12429f9e2e4e
Successfully built dnslib
Installing collected packages: dnslib, requests
Successfully installed dnslib-0.9.6 requests-2.12.4
(dns_proxy) MRF28PG8WN:dns_proxy robe8437$ git clone https://github.com/robputt796/Py-DNS-over-HTTPS-Proxy.git
Cloning into 'Py-DNS-over-HTTPS-Proxy'...
remote: Counting objects: 32, done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 32 (delta 13), reused 23 (delta 7), pack-reused 0
Unpacking objects: 100% (32/32), done.
Checking connectivity... done.
(dns_proxy) MRF28PG8WN:dns_proxy robe8437$ 

Now let’s run the proxy and test it out… By default it runs as a non-privileged user on port 8053. First I start the proxy Python script…

(dns_proxy) MRF28PG8WN:dns_proxy robe8437$ python Py-DNS-over-HTTPS-Proxy/https_dns_proxy/__init__.py

and in a new tab run tcpdump against port 53 on my network device…

sudo tcpdump -i en0 port 53

and then in a third tab I run my DNS query to the proxy listening on the loopback device…

MRF28PG8WN:~ robe8437$ dig @localhost -p8053 A robertputt.co.uk

;  @localhost -p8053 A robertputt.co.uk
; (2 servers found)
;; global options: +cmd
;; Got answer:
;;  opcode: QUERY, status: NOERROR, id: 65000
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;robertputt.co.uk.		IN	A

;; ANSWER SECTION:
robertputt.co.uk.	14399	IN	A	149.202.161.86

;; Query time: 184 msec
;; SERVER: 127.0.0.1#8053(127.0.0.1)
;; WHEN: Fri Jan  6 22:11:23 2017
;; MSG SIZE  rcvd: 50

MRF28PG8WN:~ robe8437$

This time we see no traffic for the DNS query in the tcpdump as the request has been sent via the proxy over HTTPS.

So how can we use the proxy to actually serve DNS requests for the system? To test the proxy as the system’s resolver I quit the instance running as my own user, escalated to root, edited the script to change the listening port from 8053 to 53, and then executed the proxy as root… Obviously in the real world you would never do this; I simply did it to test the theory, and in reality you should use authbind or something similar to run the process under a standard user account. Next I tested the proxy using the dig command in a separate tab…

MRF28PG8WN:~ robe8437$ dig @localhost google.com

;  @localhost google.com
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; opcode: QUERY, status: NOERROR, id: 1391
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;google.com.			IN	A

;; ANSWER SECTION:
google.com.		299	IN	A	216.58.198.174

;; Query time: 184 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri Jan  6 22:41:03 2017
;; MSG SIZE  rcvd: 44

MRF28PG8WN:~ robe8437$ 

As it appears to be working I edited my host’s DNS configuration to use the loopback address (127.0.0.1) as the resolver.

Now time to see if I can browse the web using the DNS-over-HTTPS proxy. Woohoo, the BBC website loaded just fine, and we can see the DNS requests for the page in the logging on the proxy script’s stdout…


The proof is in the pudding…

Now, to test that this actually silences all DNS traffic on port 53 and that all our DNS requests now leave our machine over encrypted HTTPS: with the proxy up and running and your system pointed at localhost for DNS resolution, run tcpdump again listening on port 53 and try surfing a few websites…

MRF28PG8WN:Py-DNS-over-HTTPS-Proxy robe8437$ sudo tcpdump -i en0 port 53
Password:
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on en0, link-type EN10MB (Ethernet), capture size 65535 bytes

Silence! As you can see, there is no plain-text traffic over port 53 anymore and all your DNS queries are being made securely against the Google DNS-over-HTTPS web service. Of course this is in no way configured in a manner you could use day to day: it is running as root, which is uber bad practice; the script is very thrown together and not very robust; and if the web server serving the requests goes down there is no failover or round robin to another Google DNS-over-HTTPS server. However, as a simple experiment to hide your DNS traffic from your ISP it works very well and proves it can be done.

Please feel free to comment your thoughts and as always PRs are welcome to build the script out further if you’d like to add features. Thanks for reading!

 

Simple IoT Dashboards with Dweet.io & Freeboard.io

A few posts ago I wrote an article about making a pretty nifty ESP8266 based Internet of Things temperature & humidity sensor, although I was particularly vague about dashboarding and instead simply threw the results into MySQL. Finally, over the Christmas period, I got around to playing with some free dashboarding services and stumbled across Dweet.io (data collection and time series storage) and Freeboard.io (drawing the actual dashboards) to make relatively aesthetically pleasing dashboards with next to no effort. Here are the results…

Current Situation

Just to recap what we currently have in place… So far there is a small ESP8266 device with a DHT22 sensor submitting results every 5 minutes to a URL of my choosing, hardcoded in the LUA script uploaded to the board; this hits a very simple PHP script which validates a secret key (the “sensor_id”) and throws the results into a MySQL DB. I have always been horrible at making UI related stuff and I particularly hate JavaScript, so using Google Charts with some AJAX magic or likewise was a particular turn off for me; instead I decided to check out what free, out of the box dashboarding stuff existed. Of course we could go down the Graphite / InfluxDB + Grafana route, although that felt like overkill for such a simple and small project, so I wanted to find hosted services suitable for small IoT projects. There were two routes I could envisage for making this happen: changing the hardcoded URL on the device itself to post directly to the dashboarding service, or still going via my existing PHP script and having it Curl the data onwards to the dashboarding service. In the end I opted for the second option because…

  • At the time of writing this article I was away from home and hence uploading the new LUA script wouldn’t be possible until I got back.
  • I still wanted the data to go into a MySQL database for long term warehousing because the free plan from Dweet.io only stores 24 hours of time series data.
  • It felt heavy having the device make two requests over my home internet connection; any “multiplexing” of the data is better suited to my server out on the internet, where I can redirect the results wherever I want just by updating a script rather than reflashing the device with alternative firmware / scripts.

Anyway my essential aim was to go from this…

mysql> select result_id, datetime, temp, humi from sensor_results ORDER BY result_id desc LIMIT 5;
+-----------+---------------------+------+------+
| result_id | datetime            | temp | humi |
+-----------+---------------------+------+------+
|    146931 | 2016-07-23 17:27:36 |   28 | 51.4 |
|    146930 | 2016-07-23 17:32:06 |   28 | 51.1 |
|    146929 | 2016-07-23 17:37:51 |   28 |   51 |
|    146928 | 2016-07-23 17:42:36 |   28 | 51.1 |
|    146927 | 2016-07-23 17:47:06 |   28 |   51 |
+-----------+---------------------+------+------+
5 rows in set (0.00 sec)

to this…

Dweet.io?

Dweet.io is a simple time series storage service designed for IoT devices. It comes in free and paid varieties; the free edition only stores the last 24 hours of data, hence I still wanted to feed the data into MySQL for longer term warehousing. The service is accessed via HTTP(S) and the API is wide open, no accounts needed. First you need to decide a unique name for your thing; it can be anything you like as long as you remember it. Once you have decided on a name, try requesting the last 24 hours of history from the API to see if it is already in use…

curl https://dweet.io/get/dweets/for/<my thing name>

Hopefully you should get a 404 back; this indicates the thing name you chose is not in use. If you get some other response back then you should choose another name rather than wrecking some other guy’s data set. Next we probably want to post some data into our thing’s time series dataset. First decide the key value pairs you want to store; in the case of the DHT22 we want temperature & humidity, so let’s send the Dweet API our first bit of info to test out the creation of the device. Dweet accepts values either as a POST with a body containing JSON key value pairs, or via a GET with the key value pairs in the query string. For simplicity, and because our data is short, let’s use the GET method (quote the URL in your shell, otherwise the & will be treated as a background operator)…

curl "https://dweet.io/dweet/for/<my thing name>?temperature=20.0&humidity=50.0"

You should get a 200 response from the Dweet API; when you submit your first set of data your thing name is registered and the first data point stored. You can now check the data has been stored by repeating the original GET for the last 24 hours of data; you should see the values you just sent as an array inside a JSON payload, paired with the date and time of submission.
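If you'd rather script these checks than fiddle with curl, the same two calls are easy in Python; a minimal sketch, assuming the requests library is installed and using the same thing-name placeholder as above:

import requests

THING = "<my thing name>"  # placeholder, substitute your own thing name

# Post a reading (equivalent to the GET-with-query-string curl above)
requests.get("https://dweet.io/dweet/for/" + THING,
             params={"temperature": 20.0, "humidity": 50.0})

# Read back the last 24 hours of dweets for the thing
print(requests.get("https://dweet.io/get/dweets/for/" + THING).json())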

Now let's extend the original PHP script to post the data to Dweet.io using Curl as well as storing it in MySQL; add the following to the tail end of the script (note it reuses the $temp and $humi values the script already reads from the query string, and the thing name placeholder from earlier).

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://dweet.io/dweet/for/<my thing name>?temperature=$temp&humidity=$humi");
curl_exec($ch);
curl_close($ch);

Wait for the next time your ESP8266 sensor checks in to your PHP script, then query your Dweet.io thing; you should see real world values entering the time series data. You can query your device on Dweet.io and see the recent entries…

Wohooo, data is now being stored in Dweet.io, and we can consume it in Freeboard to create some tasty dashboards.

Freeboard.io?

Freeboard.io is a simple web based dashboard service; you can create as many public dashboards as you like for free, although if you want to protect your dashboards you’ll need a paid account. Head over to freeboard.io and sign up for an account. Once confirmed, log in and create your first dashboard: enter a name for the dashboard in the text box and click “Create Dashboard”, the page will refresh, and you’ll be presented with a blank dashboard with editing tools in the pane at the top of the page. Before you start building the dashboard you’ll want to add your Dweet thing as a datasource. Under the datasources header click “add”; you’ll notice there are many types of data source available, from raw JSON to Dweet and Xively. Select Dweet.io as the datasource and complete the form with a nickname for the data source, your thing name and so on. The key only applies if you have a paid Dweet.io account with a lock on your thing.

Repeat these steps to add another data source of type weather, and enter your zip / postcode, this will allow you to display outside weather conditions from a local weather station to compare with your inside sensor. Now you have both of your data sources defined for your Dweet.io based sensor and outside weather from an internet weather service, next we will go on to building the dashboard’s display components.

Click “add pane” in the tool pane to create a new area to place our dashboard’s components; a blank pane will be added to your dashboard. Click on the spanner icon in the top right of the pane to open its properties window, where you can define its size and name; change the name to “Outside” and click save. Next click the plus symbol in the top right corner of the pane to add a dashboard component: select gauge from the type list, set the name to “Temperature” and set the data source to datasources["Weather"]["current_temp"], which tells the gauge to draw the current temperature from the weather service. Select suitable min and max values depending on your climate, then click save and the gauge will be added to the dashboard.

Now repeat these steps to add another gauge to the outside pane for humidity using the datasources[“Weather”][“humidity”] data source, and add another pane and add gauges for the Dweet.io datasource’s temperature and humidity. You should end up with something that looks like this…

Continue to add components such as gauges, text, maps, sparklines, clocks and other types of widgets until your dashboard has everything you want to display. In my case I decided to add another data source to display a clock, plus some indicator lights to provide alarms for certain conditions. The weather data source contains a fair bit of information if you dig through its raw object, such as wind speed and general conditions.

Conclusion

Overall Dweet.io and Freeboard.io offer an inexpensive route to getting your IoT devices online without having to deploy any servers or write much code; you can prototype decent looking dashboards in a short period of time, and they can be used by almost anyone regardless of skill level. Let me know in the comments what kind of dashboards you come up with for your IoT devices. You can check out my dashboard live at the following URL: https://freeboard.io/board/xVzIXf. If you want to give Freeboard.io a go without having your own IoT device, surf the public “recently updated” listings on Dweet.io and try building some dashboards with other people’s data feeds; who knows what you might find.

Welcome to the World of Software Defined Radio

After a long time of listening to everyone in my circles going on about how super mega awesome Software Defined Radio (SDR) is, I decided I must give it a go. I’ll be honest, I have always thought radio related stuff is a bit of a strange and boring hobby; the electronics side is fine, but sitting and chatting to people in a slow and often poor quality conversation didn’t really seem that interesting to me. That said, I have seen some quite cool stuff come out of the use of SDR, such as Samy Kamkar’s door bell hack / prank. Anyway, as a complete radio novice (literally no knowledge at all) I decided to give some SDR stuff a go, as the cost of entry is now ridiculously cheap, and this is the story of my first steps into the murky waters of SDR.

Hardware

As I previously mentioned, the entry point to SDR is now ridiculously cheap, so there is no excuse not to join in the fun. First off I did a short bit of research, mainly by reading other people’s blogs and watching YouTube videos, and a common theme soon emerged: pretty much everyone is using DVB-T tuner sticks based on the RTL-2832 chip. It appears the demodulator on the chip is highly versatile and can tune into pretty much anything between 25MHz and 1750MHz (VHF & UHF) out of the box if you have the right antenna, and with a fairly inexpensive upconverter it can go as low as the HF range (100kHz – 25MHz). These DVB-T tuner sticks are available for as little as £8 from Amazon and eBay, and most of the software out there for SDR hobbyists is free. In the end I opted for the NooElec NESDR Mini 2; it is slightly more expensive, although it had very good reviews and the company behind the device are SDR fans rather than producing it for the official TV use case. For the time being I haven’t ordered the upconverter, which may seem like a strange step as everyone says all the cool stuff is in the lower frequencies, although I wanted to give SDR a go before I committed to buying lots of hardware.

Stuff you can do in the VHF / UHF range

Even though I opted not to get the upconverter at the moment there is still a decent amount of stuff you can do in the VHF / UHF range, some of these are illegal in some countries so please check your local laws prior to listening in on these broadcasts…

  • Listen to your favourite FM radio stations
  • Find stuff being broadcast on other frequencies such as police, marine, search and rescue and other radios
  • Listen to local Air Traffic Control
  • Find digital data streams which you may or may not be able to decode
  • Decode ADS-B and plot commercial aircraft in your local area on a map

However there are some cool things you cannot do in the VHF / UHF ranges where an upconverter would come in handy such as…

  • Listening in on the conversations of Ham operators
  • Find and decode morse code (these are usually sent over shortwave)
  • Find and listen to number stations
  • Find other weird curiosities such as jammers, radar and other stuff

Unboxing the Hardware

A few days after I ordered the NooElec from Amazon it arrived at my door in a small Amazon shipping box. Inside was an electrostatic bag containing the USB SDR stick, a small telescopic antenna and a remote control; from my understanding none of the SDR software supports the remote control. Unfortunately, as I have no alternative antennas, my initial experimentation will take place with the included antenna. I have heard mixed reviews about these, from “they suck and don’t work at all” to “it works, just about, if you hang out of the window with it”, so I don’t expect my scanning to be very successful, but we shall see.

NooElec Mini 2

Overall the build quality of the stick itself is very good, nice and sturdy, and I like how it comes with an end cap to protect the USB connector when not in use. However, the same cannot really be said for the remote or the antenna; both feel a bit cheap. I guess the main point is that you’d use the stick with an external antenna, and the driver you are likely to be using is an SDR driver rather than a DVB-T driver, so the remote is pretty irrelevant too.

Scanning The Airwaves

After getting the device I was keen to get scanning as soon as possible, so I plugged it all into my Mac and went in search of some software. After a bit of a search I concluded HDSDR was probably the best option; although it’s Windows software, K1FM had already packaged it for Mac OSX using Wine, and you can download the package here. There are no drivers to install on OSX; all you need to do is run the rtl_tcp server and then open HDSDR. It is already configured to work out of the box, so you can just hit the start button and get scanning right away. However, I found that even with a pretty decent Mac (i7, 16GB RAM, SSD, 2015 model) running the software through Wine was particularly laggy / unresponsive, taking ages to retune via the rtl_tcp server, with stuttery updates of the waterfall diagram (the big thing which shows you signal intensity across a portion of frequencies). I tried out some other OSX software, but it was all either running under Mono or a bit finicky. If you want to give this a go, make sure you have a Windows PC / Windows virtual machine with USB passthrough if you want any pretty GUI apps to mess around with, as all the decent ones seem to be Windows based / the software seems to run happier there; if you are happy with the CLI then there are plenty of multiplatform tools out there.

I switched to a Windows machine and bunged in 99.1MHz, a popular local FM radio station here. I could hear some funny noises, which was promising, and when I selected FM I could even make out a little bit of the music; however the signal seemed rather weak and the highlighted space on the spectrum was not the full width of the signal space, which I corrected using the FM bandwidth slider and the gain control. Still quite crackly, but you could easily make out what it was, and after moving upstairs and placing the antenna in the window I received a much better signal and could listen with ease. It was at this point I noticed the antenna has a magnetic base, which was pretty useful as I used it to attach the antenna to the radiator that ran under my window.

HDSDR 99.1MHz Weak FM Signal

Well, that was it working, and although I am having to sit precariously on the edge of the bed in the master bedroom, I seem to be able to tune into stuff (yep, the hanging out of the window thing seems to be a reality), so I decided to Google what else was around in the VHF / UHF frequencies in my area. I came across a useful site which gave a good listing of UK frequencies… http://www.qsl.net/m3gnn/ht/index.htm I tried out some of the frequencies listed there for my area, although a lot seem dead; I think the area I live in doesn’t help much, as there are a lot of trees and I am in a slight dip. Maybe I need to go up the local meadow on top of the hill to get better results (probably one to try on my lunch break tomorrow). One thing I did notice though is I got a pretty pattern on ADS-B (1090MHz), which carries packets sent by planes to ground stations to log their flight information; this would suggest that maybe I can play with this whilst I am in my house, even if it is with limited range.

ADSB Raw

So you know what’s coming next, right? We better see if we can decode some of these packets. I Googled around again for how to decode them and found a few bits of software; I soon realised that in the world of SDR people deprecate and bin off software without cleaning up after themselves, which makes finding stuff that works / is supported / is up to date quite tricky… First up I tried ADSB#, as it had glorious reviews, although when I went to download it the software no longer exists, and I tried to find a mirror with no luck. Next up was RTL1090, another highly regarded bit of software in the community, but I couldn’t get it to work nicely on my Windows 10 machine; personally I think there was something up with the drivers it was using under the hood, as it kept giving ugly 0xc000007b errors at every launch. I carried on searching and came across dump1090, a cross platform ADS-B decoder; you can get the raw source here – https://github.com/antirez/dump1090 – and build it yourself, or download a precompiled Windows binary here – http://sdr.osmocom.org/trac/attachment/wiki/rtl-sdr/RelWithDebInfo.zip. Once installed I tried running it on my machine; initially it loaded a blank command prompt window, but after a few seconds it started to populate a table with details of nearby planes.
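If you build from source, starting it with the interactive table and the network listeners enabled should look something like this (flags as per the dump1090 README; --net is what lets other tools tap the feed, as we’ll do below):

./dump1090 --interactive --net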

ADSB CLI

Ok, this is all well and good, but manually plugging those coordinates into Google Maps isn’t any fun, so let’s get Virtual Radar Server installed; it plucks values from dump1090 or rtl1090 over the TCP/IP socket they expose and plots the planes onto Google Maps on our behalf. You can fetch VRS from here – http://www.virtualradarserver.co.uk. Initially you must configure it to link up to dump1090 or rtl1090: just go to options > receivers, click the wizard button and follow the instructions, and it’ll do the rest for you. Once complete you can visit http://localhost in your browser and check out the planes plotted on the map moving in realtime (well, whenever the SDR gets a new packet from them). In my case you can see the queue of planes landing at London Heathrow, my local airport; obviously the number of planes and the range is very limited, as I am using the small antenna which isn’t tuned for 1090MHz.
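If you fancy poking at that TCP feed yourself rather than through VRS, dump1090’s --net mode exposes a BaseStation (SBS) format stream of comma separated lines on TCP port 30003. Here’s a rough Python sketch of a watcher, assuming dump1090 is already running with --net on the same machine:

import socket

# dump1090 --net publishes SBS/BaseStation messages on TCP port 30003
sock = socket.create_connection(("localhost", 30003))
buffer = b""
while True:
    data = sock.recv(4096)
    if not data:
        break  # dump1090 went away
    buffer += data
    while b"\n" in buffer:
        line, buffer = buffer.split(b"\n", 1)
        fields = line.decode(errors="replace").strip().split(",")
        # fields[4] is the ICAO hex ident; MSG type 3 (airborne position)
        # messages carry latitude/longitude in fields[14] and fields[15]
        if len(fields) > 15 and fields[0] == "MSG" and fields[1] == "3":
            print("aircraft %s at %s,%s" % (fields[4], fields[14], fields[15]))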

Virtual Radar Server

What next?

Obviously this is only the start of my SDR adventure, and so far my results have been limited and pitiful, however there are lots of ways to expand. Some ideas I have come up with so far include…

  • Building a custom antenna for ADS-B to see how much range I can get on spotting aircraft
  • Building a custom antenna for other VHF / UHF frequencies
  • Building an upconverter so I can check out the HF stuff (it doesn’t look that scary to build, but maybe I will get better results if I purchase one)
  • Trying out the SDR in different locations, including maybe on the roof of the car park at work.

Let me know in the comments if there is any more stuff I should add to this list.

WebSDR

So far I have also played with WebSDR; this is currently my only opportunity to get at the HF stuff without an upconverter, with the obvious downside that the radio is not actually in my physical location, which is far less satisfying. The WebSDR I have been playing with alongside my own VHF / UHF stuff above is hosted in the Netherlands by the University of Twente; so far I have managed to listen to a few Ham operators on it as well as a Polish number station broadcast. You too can scan the HF frequencies via their website at http://websdr.ewi.utwente.nl:8901. It is probably a good way to work out whether you are interested or not before making the plunge into buying some hardware.

Anyway, this is it for the start of my adventures into SDR. I’ll post with any further updates as they come.

Internet of Things Temperature & Humidity Monitor

Want to join the Internet of Things device making frenzy? This short guide introduces the ESP8266-01, the brains behind many hobbyist IoT devices, by making a simple temperature and humidity sensor. It offers a chance to practice your soldering skills, lets you get acquainted with flashing the ESP8266-01 with custom firmware, and at the end you’ll be able to build your own swanky dashboard for viewing the sensor readings.

Shopping List

You’ll need the following tools to complete the project; most hobbyists are likely to have these already, and they are available in pretty much all DIY stores.

  • Soldering Iron & Solder (try and get decent solder, it flows much better than the cheap stuff)
  • Small Hack Saw
  • Veroboard / Stripboard Track Cutter (or 4mm drill bit)
  • A computer with a USB port for programming
  • 3.3v USB TTL UART Converter Module
  • WiFi Internet Access
  • Micro USB phone charger capable of >500mA @ 5V (do not power this device directly from your computer’s USB port; it could draw more power than the USB standard allows and damage your PC)
  • Dry & clean washing up sponge
  • A webserver capable of running PHP and a MySQL DB (you can substitute Python / Ruby / Perl and different storage if you prefer)
  • Multimeter

You’ll also need the following components to complete the build, most of these are available at electronics shops although some of them like the ESP8266-01 are probably easier to source on eBay / Amazon.

  • ESP8266-01 Serial WiFi Module – Datasheet – Amazon UK
  • DHT22 Digital Humidity & Temperature Sensor – Datasheet – Amazon UK
  • LD1117AV33 3.3v Voltage Regulator – Datasheet
  • 2x 10µF Electrolytic Capacitors
  • Header pin strip with 4 pins
  • Micro USB breakout module
  • 24AWG “Prototyping” Wire
  • Small(ish) piece of Veroboard / Stripboard

Disclaimer

Working with electronics can be dangerous, please take caution when working on projects described on this website, I cannot be held responsible for any injury to any individual or damage to any equipment as a result of trying these projects. If at any point you are unsure please seek the advice of a professional who is suitably qualified.

Schematic

Apologies for the lack of bridges on the schematic. Please note that not all overlapping traces are joins; only those marked with a dot should be considered connections.

IoT Temperature & Humidity Sensor Schematic

Assembly

First off, clean up the veroboard (stripboard) by lightly rubbing the copper strip side with the washing up sponge; use the abrasive part, which is usually green and more coarse in appearance. Do not rub so hard as to scratch or remove the copper tracks, just enough to remove any oxide or dirt that may have built up whilst it has been in storage. Check all your parts are present and free from any obvious signs of damage, and if the pins on components are not shiny and ready to solder, give those a small rub with the sponge too, to ensure good contact and prevent dry joints when soldering.

Next you should prepare any individual components that require assembly; in my case the Micro USB breakout module came without the pins attached to the board, so I started off by soldering the pins to the USB breakout board. Technically only the VCC and Ground pins are required, however I would recommend soldering all the pins, even though we are not using them, to provide some rigidity to the connection. After soldering I checked the connection by plugging the USB board into a mobile phone charger and verifying 5V DC across the VCC and Ground pins with a multimeter.

MicroUSB Breakout Board

Once any components requiring assembly are complete, decide where you want everything to go on the veroboard; you can probably get it looking a bit cleaner with fewer links compared to my efforts below. Mine looks messy because I wanted to keep the DHT22 on the same side as the USB Micro port so it’s easier to fit in my desired case. For this task I used VeeCAD Free Edition to produce the layout.

IoT Temperature & Humidity Sensor Layout

Bear in mind the layout diagram does not indicate the direction components face, so this needs to be checked against the datasheet prior to soldering components to the board. Start by cutting the veroboard to the required size using the hacksaw and then begin soldering components: begin with the shortest components first, plus any components which may be difficult to solder due to the overhang of other components. Also ensure the track gaps are cut at this stage; it is a lot easier to do this before the board gets cluttered, but make sure you use a check twice, cut once technique, else you’ll end up wasting a lot of board.

Here is a photo of my board after attaching the power supply side of things. Once you get to this stage it’s probably a good idea to check the relevant tracks provide 5V, 3.3V and ground respectively, by plugging the device into a USB phone charger and probing the tracks with a multimeter. You may notice my board looks slightly different to the original plan: I decided to condense the size of the board down a little and to reduce the number of links by stretching component legs further than VeeCAD allows for some components; a good example is the attachment of C1 in this photograph. Remember when placing the capacitors to ensure the polarity is correct; the side indicated with the light stripe and the shorter leg is the negative. Also note the direction the voltage regulator is facing.

IoT Temperature & Humidity Sensor Mid Build

Eventually all links and components are added and the board is ready for programming, you can now perform a power on test. If all has gone well when applying power the ESP8266 power indicator should illuminate.

Flashing

I used jumper cables to connect the 4 pin header to the USB TTL UART converter. Remember to set the jumper or switch on your USB adapter to 3.3v mode to avoid causing damage to your ESP8266. The most northerly pin on the header, from the perspective of the photograph below, is RX, then TX, followed by a blank pin and then Ground. Hook up the ground cable to the ground, the TX to the RX, and the RX to the TX of the USB adapter, as photographed below. Once all the cables are connected, use another jumper cable to short the GPIO0 pin of the ESP8266 to the Ground pin; these can easily be accessed from the top of the ESP8266. With these pins shorted, apply power to the board via the USB Micro port; this puts the board into flash mode so the firmware can be loaded. Do not make the connection between GPIO0 and Ground permanent, else your ESP8266 will boot to flash mode on every power on; this connection is only temporary.

IoT Temperature & Humidity Sensor Flashing

Next, if you are using Windows, download the ESP8266 firmware flashing tool from https://github.com/nodemcu/nodemcu-flasher. If you are a Mac or Linux user you can use the command line Python equivalent, available at https://github.com/themadinventor/esptool or installed from PyPI, e.g. pip install esptool

Next download the following firmware image; it gives you the NodeMCU firmware with some additional extras compiled in, including native SSL, DNS & DHT support – nodemcu-esp8266.bin

Open the flasher, and on the config tab click the settings button and select the bin file you just downloaded; ensure the position is 0x00000, then head back to the operation tab and hit flash. The indicators on your ESP8266 and USB TTL UART adapter should flash violently whilst the firmware is uploaded; if this doesn’t happen, investigate as necessary. The software should auto select the correct COM port, although if it gets it wrong select the correct port from the drop down menu.
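If you went the esptool route on Mac / Linux instead, the equivalent flash command should look roughly like the following; the serial device path is a placeholder, yours will differ:

esptool.py --port <your serial device> write_flash 0x00000 nodemcu-esp8266.bin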

Internet of Things Temperature & Humidity Sensor Flashing

After the firmware has finished uploading and the flasher tool displays a tick logo in the bottom left corner, remove power from the board and reapply it; this should reboot the ESP8266 into the firmware rather than flash mode. When doing this, ensure the jumper from GPIO0 to Ground has been removed. Next, download ESPlorer (GitHub project – https://github.com/4refr0nt/ESPlorer), a relatively nice Java app for dealing with LUA scripts and various other functions of the ESP8266 through a friendly UI.

Once you have ESPlorer open you’ll notice it’s split into two panes: the left hand pane is for commands and LUA scripting, the right hand pane gives output and control for the ESP8266. In the right hand pane select your COM port (Windows) or /dev device (Linux / OSX) from the drop down and click open at the top of the window; if all is good you should see ESPlorer connect to the ESP8266 and provide a serial console.

ESPlorer

Now you can test the ESP8266 is working as expected by issuing a few commands directly into the console. First put the device into client (station) mode, then try connecting to your WiFi network and see if you get an IP from your DHCP server; something like the below should do the trick…

    wifi.setmode(wifi.STATION)
    wifi.sta.config("<WIFI NAME>", "<WIFI PASSWORD>")
    wifi.sta.connect()
    print(wifi.sta.status())
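    -- a status of 5 means the station has connected and obtained an IP (STATION_GOT_IP)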
    print(wifi.sta.getip())

Providing everything has been set up, flashed and connected as required, your console should echo back a status value of 5 and the IP address, subnet mask and default route for your network.

Next let’s upload the code that submits the temperature and humidity to your webserver. I originally had mine performing this operation every 15 seconds, which is uber extreme; the script below uses a saner 300 seconds, and you can edit this as you feel appropriate. Do not make it too frequent though: if the ESP8266 is busy all the time it’s likely you’ll never be able to write to it via ESPlorer again, as it’s constantly blocking, and you may need to reflash it before you can upload new scripts. Providing you used the firmware provided on this page you should have full SSL and DNS support out of the box; if not, you may only be able to submit the data unencrypted or without DNS lookups.

Copy the code into the left hand pane of ESPlorer and edit as required…

function do_update()
    print(wifi.sta.status())
    print(wifi.sta.getip())
    print("Checking DHT22 Sensor")
    pin=4
    status, temp, humi, temp_dec, humi_dec = dht.read(pin)
    print("temp: "..temp)
    print("humi: "..humi)
    print("Submitting result")
    http.get("https://<your webserver>/<your script.php>?sensor_id=<esp8266 device id>&temp="..temp.."&humi="..humi, nil, function(code, data)
        if (code < 0) then
          print("HTTP request failed")
          print("Sleeping until retry")
        else
          print("HTTP request successful")
          print("Status Code:"..code)
          print("Sleeping until retry")
        end
      end)
end

print("Configuring WiFi Connection")
wifi.setmode(wifi.STATION)
wifi.sta.config("<Wifi Name>","<Wifi Password>")
wifi.sta.connect()

tmr.alarm(0, 300000, 1, do_update)

This script configures the wifi upon boot of the ESP8266 with the network name and password you define, and then starts a timer to run the do_update function once every 300 seconds; you can make this shorter or longer if you like, but 300 seconds is probably a reasonable resolution. When the function runs it prints some debug information, fetches the readings from the DHT22 and then submits the results to a URL you define. Whether the HTTP request is successful or not, it exits cleanly and waits until the timer executes the function again. Once you are happy with your script, click the save button and save it as “init.lua”; anything in init.lua will be run immediately on boot of the ESP8266. Once saved, click the “Send to ESP” button in the bottom left of the editor to upload the script to the ESP8266; once the upload completes it will begin to execute.

Configure the Web Server

You should now check that your webserver is receiving requests from your ESP8266 at the interval you specified. Log in to your webserver and cat the access.log file, or if you are using something like cPanel or Plesk, log in to your control panel and check out your website stats in your preferred plugin.

Here is an extract from my Apache log file showing the URL specified in the LUA script being hit at the interval specified…

86.189.174.214 - - [23/Jul/2016:17:04:36 +0100] "GET /submit_metrics.php?sensor_id=555555&temp=27.7&humi=51.8 HTTP/1.1" 200 2878 "-" "ESP8266"
86.189.174.214 - - [23/Jul/2016:17:09:38 +0100] "GET /submit_metrics.php?sensor_id=555555&temp=27.7&humi=51.8 HTTP/1.1" 200 2878 "-" "ESP8266"
86.189.174.214 - - [23/Jul/2016:17:14:34 +0100] "GET /submit_metrics.php?sensor_id=555555&temp=27.5&humi=51.6 HTTP/1.1" 200 2878 "-" "ESP8266"

Notice the user agent is set to “ESP8266”, and that I decided to send my ESP8266’s device ID as well as the temperature and humidity to my webserver so I can see which sensor has been reporting statistics; with this method you can have multiple sensors reporting in to the same script / DB. In my log you can see I have already created my PHP script, as the server is responding with a 200; in your case the file you specified probably doesn’t exist yet, so you’ll likely be sending back 404 responses.

Next it’s time to create your MySQL DB. Log in via your favourite MySQL DB editor; in my case I like to use the CLI client, but you are free to use PHPMyAdmin, MySQL Workbench or any other tool. Create a new database to store your data and create the tables using the following statement. It’s also good practice to configure a user specifically for this application and restrict their permissions to read and write access on this DB only.

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;

DROP TABLE IF EXISTS `sensor_results`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `sensor_results` (
  `result_id` int(11) NOT NULL AUTO_INCREMENT,
  `sensor_id` varchar(30) DEFAULT NULL,
  `datetime` datetime DEFAULT NULL,
  `temp` float DEFAULT NULL,
  `humi` float DEFAULT NULL,
  PRIMARY KEY (`result_id`),
  KEY `fk_sensor_id_idx` (`sensor_id`),
  CONSTRAINT `fk_sensor_id` FOREIGN KEY (`sensor_id`) REFERENCES `sensors` (`sensor_id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `sensors`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `sensors` (
  `sensor_id` varchar(30) NOT NULL,
  `sensor_name` varchar(45) DEFAULT NULL,
  `sensor_description` text,
  PRIMARY KEY (`sensor_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;

/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

Now would be a good time to add your ESP8266 device ID to the sensors table, if you do not add this the foreign key constraint will prevent you from writing your results back to the DB.

INSERT INTO `iot`.`sensors` (`sensor_id`, `sensor_name`) VALUES ('<ESP8266 Device ID>', '<Your Sensor Name e.g. Living Room Sensor>');

Next, either create your DB writing script directly on your webserver or upload it via SFTP / FTP, ensuring it sits at the same path as referenced in your LUA script. You can write the script in any language supported by your webserver; in my case I used PHP as I was feeling lazy… feel free to use my script below.

<?php
// Connect first so the connection handle is available for escaping
$dbconn = mysqli_connect('<DB HOST>','<DB USER>','<DB PASSWORD>','<DB NAME>');

// Escape the raw GET values before they go anywhere near the query
$sensor_id = mysqli_real_escape_string($dbconn, $_GET['sensor_id']);
$temp = mysqli_real_escape_string($dbconn, $_GET['temp']);
$humi = mysqli_real_escape_string($dbconn, $_GET['humi']);

$SQL = "INSERT INTO `iot`.`sensor_results` (`sensor_id`, `datetime`, `temp`, `humi`) VALUES ('$sensor_id', Now(), '$temp', '$humi');";

mysqli_query($dbconn, $SQL);

mysqli_close($dbconn);
?>

It is a very dumb script that does no validation or error detection; however, data types and a valid sensor_id are enforced by the constraints on the DB tables, so no bad data as such can be committed to the DB. Feel free to beef up the script as you feel necessary; for reference, a parameterised Python take on the same endpoint is sketched below.
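Since the shopping list mentioned Python as a substitute, here is what the same endpoint might look like as a quick Flask sketch using a parameterised insert, which avoids the manual escaping dance entirely. This is a hypothetical alternative, not what I run; it assumes the flask and pymysql packages are installed, and the DB credentials are placeholders:

from flask import Flask, request
import pymysql

app = Flask(__name__)

@app.route("/submit_metrics")
def submit_metrics():
    # Placeholders: substitute your own DB host and credentials
    conn = pymysql.connect(host="<DB HOST>", user="<DB USER>",
                           password="<DB PASSWORD>", database="<DB NAME>")
    try:
        with conn.cursor() as cursor:
            # Parameterised query; the driver handles quoting and escaping
            cursor.execute(
                "INSERT INTO sensor_results (sensor_id, datetime, temp, humi) "
                "VALUES (%s, NOW(), %s, %s)",
                (request.args["sensor_id"], request.args["temp"],
                 request.args["humi"]))
        conn.commit()
    finally:
        conn.close()
    return "OK"

if __name__ == "__main__":
    app.run()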

Next, ensure your IoT temperature and humidity sensor is powered up via its USB Micro port, and see if temperature and humidity values are being written back to your DB. Everything should be working now; if values are not being written, go back over the guide and troubleshoot any issues. Remember that checking your webserver’s access.log and error.log files, as well as following the printed output on the ESP8266’s serial connection, can be a great help.

mysql> select result_id, datetime, temp, humi from sensor_results ORDER BY result_id desc LIMIT 5;
+-----------+---------------------+------+------+
| result_id | datetime            | temp | humi |
+-----------+---------------------+------+------+
|    146931 | 2016-07-23 17:27:36 |   28 | 51.4 |
|    146930 | 2016-07-23 17:32:06 |   28 | 51.1 |
|    146929 | 2016-07-23 17:37:51 |   28 |   51 |
|    146928 | 2016-07-23 17:42:36 |   28 | 51.1 |
|    146927 | 2016-07-23 17:47:06 |   28 |   51 |
+-----------+---------------------+------+------+
5 rows in set (0.00 sec)

Conclusion

After everything is confirmed working as expected you can disconnect your programming cables from the debug header. I chose to trim down my veroboard a little so I could fit the project into a small projects case; if you choose to do this, be careful you do not cut through or remove any tracks which are in use. Here is what the board footprint looks like following the cleanup. If you mount the project in a projects box, remember to cut a few holes for the DHT22 and USB Micro port to poke through the case, else airflow to the sensor will be limited and you will not get reasonable humidity / temperature readings.

IoT Temperature & Humidity Sensor Completed

Now you can get creative and make a custom dashboard for displaying your sensor’s stats, rather than viewing them directly from the MySQL DB via the CLI / PHPMyAdmin. You can write this in any language you please / is supported by your webserver; simply select the results you want from the DB and display them however you see fit. Maybe try using gauges from the Google Charts API or some other cool JS libraries to animate changes in temperature / humidity.

Hopefully you now have a pretty good grasp of getting something basic up and running on the ESP8266-01, including flashing the NodeMCU firmware and uploading LUA scripts. How about modding this design to do something different: maybe replace the DHT22 with a reed switch on a door frame to alert you when somebody enters your house, or use a light dependent resistor to measure the luminosity of your room. If you’d like to reuse your ESP8266-01, put it into flash mode by bridging the GPIO0 pin and ground during power up and reflash it using the flash tool; it’ll then be ready to accept a new LUA script via ESPlorer.