Smarter smartmirror

So I also decided to build myself a smartmirror. However, I want it to provide a little more functionality than just displaying some information and telling me that I’m beautiful. Here is the finished build:

Bathroom SmartMirror With LeapMotion

And here is a video of the leap-motion-control in action:

I want to place it in my bathroom, because that’s the only place where I actually spend some time in front of the mirror. I do want some controls, but I do not want to touch buttons or the mirror itself, so I chose a leap motion controller. Below I will detail some of the steps I went through in building this thing.

Update: Speech recognition and the bloody-mary-protocol.



I chose a 28″ screen (Samsung T28D310ES, 71.1 cm) which offers all the needed interfaces and should be slim enough for my build. I stripped it out of its plastic case, but there is a small metal frame that I left attached (works fine).


First I wanted to use a Raspberry Pi, but it is too slow for any real work and the Leap SDK is not available for the ARM platform. Instead I used an Intel Celeron G1610T mini-PC (dual-core 2.3 GHz, 8 GB DDR3, 500 GB HDD, HDMI, H61MV e.Mini) which I had ordered some years ago from Amazon.


I chose to do a custom build with wood from the hardware store and paint it white.

We applied three layers of waterproof glaze, then spray-painted it with a white finish.

Smartmirror - Painting the frame


After about four tries with 2-way mirror film and a nightmare of bubbles and scratches, I ordered a solid glass 2-way mirror instead. (In the meantime, 10 days later, nearly all of the bubbles are gone and it really looks like a mirror.)


The webpage can be found in this git repo; the rest of the scripts are not bundled into a package yet, but have to be copied from this page.

Chromium browser, some JavaScript, PHP and Python

To autostart Chromium in kiosk mode and load the smartmirror webpage, create ~/.config/autostart/chromium.desktop:
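The file contents are not reproduced above; a minimal version could look like the following (the local URL of the mirror page is a placeholder — adjust the Exec line to wherever your page is served):

```ini
[Desktop Entry]
Type=Application
Name=Smartmirror Chromium
Exec=chromium-browser --kiosk --incognito http://localhost/smartmirror/
X-GNOME-Autostart-enabled=true
```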

For the basic framework I started from existing code: an awesome, clean and structured framework that already provided some of the functionality I wanted (time, live and forecast weather, icons). I extended it by writing my own video-player, motion-sensing and bitcoin modules, and integrated Mousetrap to map keypresses to functions.

Input: Leap motion controller

First, the hardware to mount it: I chose a head-mounted setup to keep water and other stuff off the device and to get it out of the way (you can find the scad and stl files here if you want to print it yourself):

For my taste, the classification rate of the more sophisticated gestures is pretty lame, even after calibration. I chose to just use the hand position and put a threshold on x-movement to detect left or right swipes – classification rate 100%! Using pykeyboard/xdotool, I translate those swipes into keypresses for the browser. Also, a short info sound is played as soon as the hand is recognized as such by the Leap device.

To make it autostart with the X server, I created a file called ~/.config/autostart/leap.desktop (I am running GNOME) with the following content:
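The original file is not shown above; it could look like this (the script name and path are placeholders for wherever your swipe script lives):

```ini
[Desktop Entry]
Type=Application
Name=Leap Swipe Control
Exec=/usr/bin/python /home/sarah/scripts/
X-GNOME-Autostart-enabled=true
```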

LibCec and cec-client

Some TVs are able to receive commands over the CEC interface, e.g. for turning on and off. Sadly, my graphics card does not support this, so I ordered a Pulse-Eight USB-CEC adapter. This is a USB bridge with which you can „inject“ CEC commands into the HDMI stream. On Linux, turning the TV on and off (well, standby) is just a matter of:

# turn the TV on (make this device the active source)
echo "as 1" | cec-client -s

# put the TV into standby
echo "standby 0" | cec-client -s

Motion sensing: Arduino with a PIR-sensor

I used a regular Arduino Nano over USB. A PIR motion sensor is wired up to +5V, GND and a digital pin. The Arduino constantly monitors the sensor and relays the reading to its serial interface. To avoid mixed-up device numberings, I created a udev rule for the Arduino (/etc/udev/rules.d/99-arduino.rules):

SUBSYSTEMS=="usb", ATTRS{idProduct}=="7523", ATTRS{idVendor}=="1a86", SYMLINK+="arduino_motionsensor"

This tells the udev system to always create a device called /dev/arduino_motionsensor in addition to the regular /dev/ttyUSB{0-9} link.

int pirSensor = 2;

void setup() {
  pinMode(pirSensor, INPUT);
  digitalWrite(pirSensor, HIGH); // enable the internal pull-up
  Serial.begin(9600);
}

void loop() {
  int pirValue = digitalRead(pirSensor);
  Serial.println(pirValue); // relay the reading over serial
  delay(100);
}

On the PC side, I use a script that is also started via a .desktop autostart file:

Daily news: Tagesschau in 100 Sekunden

To download the „Tagesschau in 100 Sekunden“ (German TV news, distilled to the most important 100 seconds), I used a Python script to parse their website, find the link to the current video, and download it.

I just put the script in a cronjob so it downloads the video during the night:

00 06 * * * /home/sarah/scripts/ > /dev/null 2>&1

Additional ideas and extensions


Including an ESP or Bluetooth to connect to other bathroom accessories (like a scale or lights) -> Like a friend of mine used to say: „I don’t give a rat’s arse“. Really, I couldn’t care less how my weight is behaving, and I cannot imagine any other useful Internet-of-Things application (really useful IoT is scarce, anyway).

Non-invasive health monitoring

That’s certainly an idea that I will pursue -> using a webcam to identify the face and try to measure drowsiness or general health state. The light in my bathroom is artificial only, so it’s constant and controlled, which would make it possible to measure the color of the skin. Plus, with the temporal amplification algorithms from MIT, it could be possible to measure the pulse.
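To give an idea of that last point: once you have the mean green value of the face region for each frame, the pulse can be estimated by picking the dominant frequency in the plausible heart-rate band. A pure-Python sketch (function name, band limits and scan step are my own choices — this is just the basic frequency-analysis idea, not the MIT amplification algorithm):

```python
import math

def estimate_pulse_bpm(green_means, fps=30.0, band=(0.7, 4.0), step=0.02):
    """Estimate the heart rate in bpm from the mean green-channel
    value of the face region in successive webcam frames.

    Scans candidate frequencies in `band` (Hz ~ 42-240 bpm) and
    returns the one with the highest DFT power, converted to bpm."""
    n = len(green_means)
    mean = sum(green_means) / n
    sig = [v - mean for v in green_means]  # remove the DC offset
    best_f, best_power = None, 0.0
    f = band[0]
    while f <= band[1]:
        # single-frequency DFT of the zero-mean signal
        re = sum(s * math.cos(2 * math.pi * f * i / fps) for i, s in enumerate(sig))
        im = sum(s * math.sin(2 * math.pi * f * i / fps) for i, s in enumerate(sig))
        power = re * re + im * im
        if power > best_power:
            best_f, best_power = f, power
        f += step
    return 60.0 * best_f if best_f is not None else None
```

With constant artificial light, 20 seconds of frames at 30 fps should already give a usable frequency resolution for this kind of estimate.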

Scaring guests

That one I got from my fellow researchers when discussing useful extensions over lunch. A „bloody mary protocol“ would be pretty nice: using speech recognition, wait until someone says „bloody mary“ three times, then display a really scary face of a woman. Another possibility would be to use some kind of facial puppetry algorithm (detect facial landmarks of the viewer, map them onto another, saved face and let this face mimic the viewer).

Update (29.04.2016): The bloody-mary-protocol has been implemented!

I implemented this using the CMU Sphinx speech recognition toolkit, the python (swig) wrapper and a custom dictionary.


Create a custom dictionary and the necessary language-model files:

I chose not to integrate this into the official git repo, as it would be a lot of overhead for this small functionality. I might however dedicate an extra article to the speech recognition. If you want details, please ask!

Here is the code I used after pocketsphinx and the above wrappers have been installed:

import os
import time
from os import environ, path

from pocketsphinx.pocketsphinx import *
from sphinxbase.sphinxbase import *

MODELDIR = "/home/sarah/code/pocketsphinx-python/pocketsphinx/model"
DATADIR = "/home/sarah/code/pocketsphinx-python/pocketsphinx/test/data"

config = Decoder.default_config()
config.set_string('-hmm', path.join(MODELDIR, 'en-us/en-us'))
config.set_string('-lm', '/home/sarah/code/pocketsphinx-python/wordlist_model/8879.lm')
config.set_string('-dict', '/home/sarah/code/pocketsphinx-python/wordlist_model/8879.dic') 
config.set_string('-logfn', '/dev/null')
decoder = Decoder(config)

import pyaudio

print "---------------------------------------- Searching and selecting audio device"
pa = pyaudio.PyAudio() 
chosen_device_index = -1
for x in xrange(0,pa.get_device_count()): 
    info = pa.get_device_info_by_index(x)
    print pa.get_device_info_by_index(x)
    if info["name"] == "HD Pro Webcam C920: USB Audio (hw:1,0)":
        chosen_device_index = info["index"]
        print "Chose index ", chosen_device_index
print "-----------------------------------------"

p = pyaudio.PyAudio()
stream =, channels=1, rate=16000,
                input_device_index=chosen_device_index, input=True, output=False)

decoder.start_utt()
in_speech_bf = False
while True:
    buf =
    if buf:
        decoder.process_raw(buf, False, False)
        if decoder.get_in_speech() != in_speech_bf:
            in_speech_bf = decoder.get_in_speech()
            if not in_speech_bf:
                decoder.end_utt()
                hyp = decoder.hyp()
                res = hyp.hypstr if hyp else ""
                print res
                if res == "BLOODY MARY BLOODY MARY BLOODY MARY":
                    os.system("mplayer -fs -volume 100 /home/sarah/code/pocketsphinx-python/bloodymary.mp4")
                decoder.start_utt()
    else:
        print "Meh."
        break
TobiasWeis | 20 April 2016
  • Kip 29 April 2016 at 0:42
    Awesome, this really is the future! Your project could be a nice addition to the Leap Motion Gallery ( as well if you're interested in sharing a little more widely.
  • MantaGTJ 17 May 2016 at 20:17
    NICE! Maybe you could incorporate a Wii Fit pressure pad, for both gestures/pressure and general health monitoring? Different profiles for different people? This inspires me to build PCs into a lot bigger things. Thank you for your inspiration. Best wishes with your next build. Robb
  • Stefan 11 October 2016 at 16:48
    Hi, do you use the mirror from, or the one you laminated yourself? They answered an inquiry that their mirror would not be suitable for this use case .. Regards, Stefan
    • TobiasWeis 13 October 2016 at 10:36
      I use the one from myspiegel, and it works very well for this purpose! (Spionspiegel - custom size (WxH) 63 x 37 cm - 8 mm thick - polished edges.)
  • Balcan Alexandru 20 February 2017 at 15:01
    Hello, I've seen that the 8 mm thick mirror worked fine. Do you think a 12 mm thick solid glass mirror would also work? I would have to choose between a 12 mm solid glass mirror and 3 mm thick plexiglass. What would you propose? Thank you! Alexandru
    • TobiasWeis 20. Februar 2017 at 15:04
      Hi Alexandru, depending on the size, a 12mm thick glass plate can have a lot of weight I guess? I guess 3mm plexi should be fine if it has the one-way-mirror-coating.
