Measuring voltage on a Raspberry Pi and displaying it in style

After I completed the circuitry to power my Raspberry Pi from a battery pack, I wanted a way to display the voltage of the battery pack and to be able to access the voltage level from the Raspberry Pi, so it can shut down automatically when critical voltage levels are reached to prevent damage to the filesystem or draining the battery too much.

The Arduino's analog inputs can stand at most 5V per pin, so how do we measure voltages like the 7.4V of our battery and above (if it's full, it has more than 7.4V)? We have to scale the highest expected voltage down to 5V! The easiest way to do this is by using a voltage divider. Suppose the 9V battery in the schematic below is our battery, and the Arduino is powered over USB. We connect the GND of both and divide the 9V of the battery using a voltage divider. If both resistors in the schematic below are equal, we effectively halve the voltage. Remember the formula for voltage dividers (without load):

U_{out} = U_{in} * \dfrac{R2}{R1 + R2}

In our case, that means if the battery is full and has 9V:

U_{out} = 9V * \dfrac{10k}{10k + 10k} = 4.5V

So the maximal expected voltage at our analog-in is 4.5V!

By using the Arduino function analogRead(A0), we get a value between 0 and 1023, which represents the voltage range from 0-5V. We know that whatever we read will be half of the actual external voltage, so to convert this value back to the actual voltage:

int sensorValue = analogRead(A0);
// 5V reference over 1023 steps, times 2 to undo the divider: 2 * (5.0 / 1023.0) = 10.0 / 1023.0
float voltage = sensorValue * (10.0 / 1023.0);

Now this can be put into a nice little sketch that outputs the value over serial to the Raspberry Pi, where it can be displayed somewhere. I chose a 1.44 inch SPI TFT LCD color screen (ST7735) with 128×160 pixels and wrote some code to save and display measurements at certain time intervals in a rolling fashion, which results in this:
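
On the Raspberry Pi side, the reader can stay very small. Here is a minimal sketch using pyserial, assuming the Arduino prints one voltage reading per line; the device name, baud rate and the 6.5V shutdown threshold are just placeholders for illustration:

#!/usr/bin/python
# Minimal Pi-side reader sketch: the device name, baud rate and the 6.5V
# shutdown threshold are placeholder assumptions, not the actual setup.
import os
import serial

CRITICAL_VOLTAGE = 6.5

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)

while True:
    line = ser.readline().strip()
    if not line:
        continue
    try:
        voltage = float(line)
    except ValueError:
        continue  # ignore garbled lines

    print("Battery voltage: %.2fV" % voltage)

    if voltage < CRITICAL_VOLTAGE:
        # shut down cleanly before the battery drains too far
        os.system("sudo shutdown -h now")
        break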

Powering a Raspberry Pi from a battery

The Raspberry Pi will be the main processing unit of my pypibot. I want to power 6V motors, so I decided on going with a 7.2V battery pack. I had one lying around with 2600mAh, which should be enough for testing the setup right now.

I originally planned on going the easy way and ordered a converter from 8-36V to 5V with a micro-USB connector already wired (from DROK). Without any other load this worked nicely, even though the input voltage was below 8V. But as soon as the motors were wired up, the voltage would drop too low for this thing to still output 5V, and in consequence the Pi went down.

So, here is the definitive way to go if you want to power a Raspberry Pi robustly from a 7.2V battery (or anything above that voltage):

Using an adjustable DC/DC power converter! While these units cost a little over 5 EUR (for 5 units in total), they take anything from 4V to 35V as input, and the output voltage can be configured by turning a little screw on a potentiometer. In my experiments, the input voltage could drop as low as 6.1V and this unit would still supply a rock-steady 5V to the Pi (once set up, it actually delivers a steady 5V output over a wide range of input voltages). They can handle 3A max, which should be enough for the Raspberry Pi and any sensor I hook up to it.

I ended up soldering a micro-USB connector to it myself:

In a first test run with the 7.2V, 2600mAh NiCd battery pack I had lying around (it is quite old, so it probably has a far lower capacity than that), the Raspberry Pi lasted 1 hour and 42 minutes while driving around with the motors from time to time: up 1:42, load average: 0.48, 0.33, 0.33.

Neato XV Laser scanner (LIDAR)

So today my Neato XV LIDAR module arrived, and I had to test it directly with the Raspberry Pi. For everyone who does not know this wonderful piece of hardware yet: it is a low-cost 360-degree spinning laser scanner that is usually scavenged from the Neato XV vacuum robots. In Germany it is quite hard to get your hands on one, so I ordered one via eBay from the US.

Test-Setup:

According to https://xv11hacking.wikispaces.com/LIDAR+Sensor, the wires of the LIDAR-unit have the following pinout:

Red: 5V
Brown: LDS_RX
Orange: LDS_TX
Black: GND

Although the logic unit is supplied with 5V, the interface (RX/TX) is 3.3V. Perfect for talking to a Raspberry Pi!

As stated in the wiki, the sensor (without the motor!) draws ~45mA in idle and ~135mA when in use (rotating).

For these first tests, I wired it up by connecting the power lines of the logic unit to an external 5V power supply (one that can definitely provide the needed current), connecting the TX of the scanner directly to the RX of the Raspberry Pi, and connecting the GNDs. Without connecting the motor yet and just powering the logic unit on while connected to the serial port (115200 baud, 8N1), it greeted me with the following welcome message:

Piccolo Laser Distance Scanner
Copyright (c) 2009-2011 Neato Robotics, Inc.
All Rights Reserved

Loader	V2.5.15295
CPU	F2802x/c001
Serial	KSH14415AA-0358429
LastCal	[5371726C]
Runtime	V2.6.15295
#Spin...3 ESCs or BREAK to abort
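
Reading this output on the Pi takes only a few lines of pyserial. A minimal sketch, assuming the scanner's TX is wired to the Pi's UART and shows up as /dev/serial0 (the device name depends on your Pi model and configuration):

#!/usr/bin/python
# Minimal sketch to dump the LIDAR's serial output on the Pi; the device name
# /dev/serial0 is an assumption and depends on the Pi's UART configuration.
import serial

# 115200 baud, 8N1 as stated above
ser = serial.Serial("/dev/serial0", 115200,
                    bytesize=serial.EIGHTBITS,
                    parity=serial.PARITY_NONE,
                    stopbits=serial.STOPBITS_ONE,
                    timeout=1)

while True:
    line = ser.readline()
    if line:
        print(line.strip())
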
Short: Schema diagram from an existing SQLite database

I have an SQLite database which is just a little too big to keep in my head, so I was searching for a way to create a nice diagram from the existing schema. I tried a lot of tools, but none of them delivered.

Now, with version 14.14.01 of SchemaCrawler, I was able to produce a nice plot!

./schemacrawler.sh -server sqlite -database /home/shared/data/TobisGpsSequence/sequences_960_720_manual.db -infolevel=maximum -password= -command=schema -outputformat=png -outputfile=test.png

(Please ignore the crazy database layout, I am in the middle of a migration and you are looking at the work in progress that caused me to look around for nice visualization tools again.)


Simulating robots with MORSE

It is quite challenging and costly to build up a robot lab, especially if you just want to conduct some experiments with sensors and a moving platform. In today's search for affordable robot platforms, I discovered MORSE, a simulation platform built on the Blender game engine (www.openrobots.org/wiki/morse/). This article will show how to set it up, select an environment, add sensors and read from them.

It already has the infrastructure, several environments and pre-built robots, sensors (camera, GPS, laser scanner, IR, etc.) and actuators to play with, and it can be installed directly via apt (Ubuntu + Debian). It took me less than an hour to skim through the tutorials, set up a basic environment, add a laser range sensor to an existing robot and visualize the results. Pretty amazing! (You can find all of my project files here: https://github.com/TobiasWeis/morse-robot-simulation)
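
For reference, a builder script in the spirit of the MORSE tutorials can be as short as the sketch below; the robot, sensor and environment names are the stock tutorial components, not necessarily what I ended up with (see the GitHub repository above for the actual files):

# my_sim.py - a minimal MORSE builder script in the spirit of the tutorials;
# robot, sensor and environment names are the stock tutorial components.
from morse.builder import *

# a pre-built outdoor robot platform
robot = ATRV()

# attach a laser range scanner and expose its data over a socket
scan = Hokuyo()
scan.translate(z=0.9)
robot.append(scan)
scan.add_interface('socket')

# a simple linear/angular velocity actuator so the robot can be driven around
motion = MotionVW()
robot.append(motion)
motion.add_interface('socket')

# one of the environments shipped with MORSE
env = Environment('indoors-1/indoor-1')

The simulation is then started with morse run my_sim.py, and the laser scans can be fetched over the socket interface (e.g. with the pymorse client).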


 

[Kaggle] Minority Report, or the San Francisco Random Forest Precog

I had a little free time on my hands and decided to quickly complete the Coursera course „Data Science at Scale – Practical Predictive Analytics“ by Bill Howe of the University of Washington. The last assignment was to participate in a Kaggle competition.

Source: https://commons.wikimedia.org/wiki/File:ExpoSYFY_-_Minority_Report_(10825723756).jpg

For this assignment I chose the „San Francisco Crime Classification“ challenge. The task is to predict the Category of a crime given its time and location. The dataset contains incidents from the SFPD Crime Incident Reporting system from 2003 to 2015 (878,049 datapoints for training) with the following variables: the date and time, the crime category (the prediction target), a description, the day of week, the police district, the resolution, the address, and the X/Y coordinates of each incident.
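
A bare-bones version of the random forest hinted at in the title might look like the sketch below; this is not my submitted solution, and the file name train.csv as well as the feature choices are assumptions:

# Bare-bones random forest sketch for the SF crime data; train.csv and the
# feature choices are assumptions, not the submitted solution.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

df = pd.read_csv("train.csv", parse_dates=["Dates"])

# simple time features plus one-hot encoded police districts
df["Hour"] = df.Dates.dt.hour
df["Month"] = df.Dates.dt.month
df["Year"] = df.Dates.dt.year
df["Weekday"] = df.Dates.dt.dayofweek
df = pd.get_dummies(df, columns=["PdDistrict"])

features = ["Hour", "Month", "Year", "Weekday", "X", "Y"] + \
           [c for c in df.columns if c.startswith("PdDistrict_")]

X_train, X_val, y_train, y_val = train_test_split(
    df[features], df.Category, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

# the competition is scored with multi-class log loss
print(log_loss(y_val, clf.predict_proba(X_val), labels=clf.classes_))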

Smarter smartmirror

So I also decided to build myself a smartmirror. However, I want it to provide a little more functionality than just displaying some information and telling me that I’m beautiful. Here is the finished build:

Bathroom SmartMirror With LeapMotion

 

And here is a video of the Leap Motion control in action:

 

I want to place it in my bathroom, because that's the only place where I actually spend some time in front of the mirror. I do want some controls, but I do not want to touch buttons or the mirror itself, so I chose a Leap Motion controller. Below I will detail some of the steps I went through in building this thing.

Groundtruth data for Computer Vision with Blender

In the video below you can see the sequence of a car driving in a city scene and braking. The layers I rendered out as groundtruth data are the rendered image with the bounding box of the car (top left), the emission layer (shows the brake lights when they start to emit light, top right), the optical flow (lower left), and the depth of each pixel in the world scene (lower right).


Render time was about 10h on an Nvidia GeForce GTX 680, tile size 256×256, total image size 960×720.

Setting up groundtruth rendering, saving and reading it again

After you have composed your scene, switch to the Cycles rendering engine and enable the render passes you plan on saving as groundtruth information in the Properties view (under Render Layers). In this case, I selected the Z pass, which represents the depth of the scene:

Now open the Node Editor view, display the compositing node tree and put a checkmark at „Use Nodes“; it should look like this:

Hit „T“, and in the „Output“ tab, select „File Output“. Connect it to the Z output of the Render Layers node. Hit „N“ while the File Output node is selected, enter an appropriate file subpath as the name for the data (I chose „Depth“ here), and change the output format depending on your use case. As I am expecting float values that are not constrained to 0-255, I will save the data in the OpenEXR format:
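
The same setup can also be scripted instead of clicked together. Here is a rough bpy sketch of the steps above, using the Blender 2.7x API names and assuming the default render layer is called "RenderLayer":

# Rough bpy equivalent of the manual steps above (Blender 2.7x API names;
# the render layer name "RenderLayer" is the Blender default and an assumption).
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.use_nodes = True

# enable the Z pass on the render layer
scene.render.layers["RenderLayer"].use_pass_z = True

tree = scene.node_tree
rl = tree.nodes.new('CompositorNodeRLayers')
out = tree.nodes.new('CompositorNodeOutputFile')

# write the depth pass as OpenEXR files to /tmp/Depth####.exr
out.base_path = "/tmp/"
out.format.file_format = 'OPEN_EXR'
out.file_slots[0].path = "Depth"

tree.links.new(rl.outputs['Z'], out.inputs[0])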

After rendering ([F12]), we have a file called /tmp/Depth0001.exr. It can be read and displayed using Python:

 

#!/usr/bin/python

'''
author: Tobias Weis
'''

import OpenEXR
import Imath
import array
import numpy as np
import matplotlib.pyplot as plt

def exr2numpy(exr, maxvalue=1., normalize=True):
    """ converts 1-channel exr-data to a 2D numpy array """
    exrfile = OpenEXR.InputFile(exr)

    # compute the image size from the data window in the header
    dw = exrfile.header()['dataWindow']
    sz = (dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1)

    # read the single "R" channel as 32-bit floats
    FLOAT = Imath.PixelType(Imath.PixelType.FLOAT)
    data = np.array(array.array('f', exrfile.channel("R", FLOAT)))

    # clamp outliers and optionally normalize to [0, 1]
    data[data > maxvalue] = maxvalue
    if normalize:
        data /= np.max(data)

    # reshape the flat pixel buffer to (height, width)
    img = data.reshape(sz[1], sz[0])

    return img

depth_data = exr2numpy("Depth0001.exr", maxvalue=15, normalize=False)

fig = plt.figure()
plt.imshow(depth_data)
plt.colorbar()
plt.show()

The output of this script on the default-scene looks like this:

Further layers to generate groundtruth data

Besides depth information, we can also generate groundtruth data for other modalities:

  • Object/Material index: Can be used to generate pixel-accurate annotations of objects for semantic segmentation tasks, or simply to compute bounding boxes for IoU measures (see the sketch after this list)
  • Normal: Can be used to generate pixel-accurate groundtruth data of surface normals
  • Vector: Will show up as „Speed“ in the node editor and contains information for optical flow between two frames in a sequence
  • Emit: Pixel-accurate information of emission properties
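
As a concrete example for the first point, the exr2numpy() helper from above can be reused to turn a rendered object index pass into a bounding box; the file name IndexOB0001.exr and the pass index value 1 are assumptions:

# Example for the object index pass: reuse exr2numpy() from the script above.
# The file name "IndexOB0001.exr" and the pass index value 1 are assumptions.
import numpy as np

idx = exr2numpy("IndexOB0001.exr", maxvalue=255, normalize=False)

# pixel-accurate mask of the object that was assigned pass index 1 in Blender
mask = (idx == 1)

# tight bounding box (x_min, y_min, x_max, y_max) derived from the mask
ys, xs = np.nonzero(mask)
print("bounding box: (%d, %d, %d, %d)" % (xs.min(), ys.min(), xs.max(), ys.max()))
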
Displacement priors

What is the target of all this? Driving in an automotive scenario with a given speed and turn rate at any moment, we want to predict the displacement of a 2D projection (pixel) between two frames:
p(\vec{uv}_{x,y} | speed, turnrate, camera-matrix, world-geometry)

By using the camera calibration, I can create artificial curves and walls as 3D point sets and project them back to 2D. Using discretized values for speed, turn rate, street width and wall height, I can then simulate the displacement of these 3D points when they are projected to 2D (our image).
(Note for me: this is the backprojection code, main file: main_displacements.py)
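
To illustrate the projection step, here is a stripped-down pinhole model in numpy; the intrinsic matrix and the wall geometry are made-up placeholders, not my actual calibration:

# Stripped-down pinhole projection sketch; the intrinsic matrix K and the
# "wall" points are made-up placeholders, not the calibration from this post.
import numpy as np

K = np.array([[800.,   0., 480.],
              [  0., 800., 360.],
              [  0.,   0.,   1.]])

def project(points_3d, K):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    p = K.dot(points_3d.T).T         # apply the intrinsics
    return p[:, :2] / p[:, 2:3]      # divide by depth

# a straight wall 2m to the right of the camera, 1m above it, 5m-30m ahead
wall = np.array([[2., -1., z] for z in np.arange(5., 30., 1.)])

uv_t0 = project(wall, K)

# move the camera 1m forward (the points come 1m closer) and project again
uv_t1 = project(wall - np.array([0., 0., 1.]), K)

# the difference is the per-point displacement prior for this ego-motion
flow = uv_t1 - uv_t0
print(flow[:3])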


Symmetry detection

This will probably become one of our modalities in the future: symmetry!

Thanks to the guys at hs-niederrhein, there is symmetry-detection code that can already be used for some first estimates:

This software implements the gradient product transform for symmetry
detection that is described in the paper

C. Dalitz, R. Pohle-Froehlich, F. Schmitt, M. Jeltsch:
„The gradient product transform for symmetry detection
and blood vessel extraction.“ International Conference on
Computer Vision Theory and Applications (VISAPP), pp. 177-184,
2015

And the first results look quite promising: