Short: Schema diagram from an existing SQLite database

I have an SQLite database that is just a little too big to keep in my head, so I was searching for a way to create a nice diagram from the existing schema. I tried a lot of tools, but none of them delivered.

Now, with version 14.14.01 of SchemaCrawler, I was able to produce a nice plot!

./schemacrawler.sh -server sqlite -database /home/shared/data/TobisGpsSequence/sequences_960_720_manual.db -infolevel=maximum -password= -command=schema -outputformat=png -outputfile=test.png

(Please ignore the crazy database layout; I am in the middle of a migration, and you are looking at the work in progress that caused me to look around for nice visualization tools again.)

[Image: test.png, the generated schema diagram]

Simulating robots with MORSE

It is quite challenging and costly to build up a robot lab, especially if you just want to conduct some experiments with sensors and a moving platform. While searching for affordable robot platforms, I discovered MORSE, a simulation platform built on the Blender game engine (www.openrobots.org/wiki/morse/). This article will show how to set it up, select an environment, add sensors and read from them.

It already comes with the infrastructure, several environments and pre-built robots, sensors (camera, GPS, laser scanner, IR, etc.) and actuators to play with, and it can be installed directly via apt (Ubuntu and Debian). It took me less than an hour to skim through the tutorials, set up a basic environment, add a laser range sensor to an existing robot and visualize the results. Pretty amazing! (You can find all of my project files here: https://github.com/TobiasWeis/morse-robot-simulation)
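Reading from a sensor is pleasantly simple via the pymorse bindings. Here is a minimal sketch; the component names ("robot", "scanner") and the 'range_list' key are the ones from my builder script and setup, so adapt them to your own scene:

import pymorse

# connect to a running MORSE simulation on localhost
with pymorse.Morse() as simu:
    # "robot" and "scanner" are the names defined in my builder script
    scan = simu.robot.scanner.get()
    # the laser scanner returns a dict; in my setup the measured
    # distances are stored under 'range_list'
    print(scan['range_list'])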

[Image: robot_sim, the simulated robot in MORSE]

[Kaggle] Minority Report, or the San Francisco Random Forest Precog

I had a little free time on my hands and decided to quickly complete the Coursera course "Data Science at Scale – Practical Predictive Analytics" by Bill Howe of the University of Washington. The last assignment was to participate in a Kaggle competition.

Source: https://commons.wikimedia.org/wiki/File:ExpoSYFY_-_Minority_Report_(10825723756).jpg

For this assignment I chose the "San Francisco Crime Classification" challenge. The task is to predict the category of a crime given its time and location. The dataset contains incidents from the SFPD Crime Incident Reporting system from 2003 to 2015 (878,049 data points for training) with the following variables: Dates, Category, Descript, DayOfWeek, PdDistrict, Resolution, Address, X and Y.
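As the title gives away, my weapon of choice was a random forest. Here is a minimal sketch of that kind of pipeline, assuming the column names of the Kaggle training file; it is not my actual submission code (all feature engineering and validation is stripped out):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# load the Kaggle training data and derive simple time features
train = pd.read_csv("train.csv", parse_dates=["Dates"])
train["hour"] = train["Dates"].dt.hour
train["dayofweek"] = train["Dates"].dt.dayofweek

# time and location only, as the task demands
features = train[["hour", "dayofweek", "X", "Y"]]
labels = train["Category"]

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(features, labels)

# Kaggle scores this challenge on predicted class probabilities (log-loss)
probabilities = clf.predict_proba(features)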

Smarter smartmirror

So I also decided to build myself a smartmirror. However, I want it to provide a little more functionality than just displaying some information and telling me that I’m beautiful. Here is the finished build:

[Image: bathroom smartmirror with Leap Motion]

And here is a video of the Leap Motion control in action:

I want to place it in my bathroom, because that is the only place where I actually spend some time in front of a mirror. I do want some controls, but I do not want to touch buttons or the mirror itself, so I chose a Leap Motion controller. Below I will detail some of the steps I went through in building this thing.
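To give an idea of the control part: a minimal sketch of swipe detection with the (now legacy) Leap Motion SDK v2 Python bindings. The mapping of swipes to actions is purely illustrative; the mirror software wires these events into its own UI:

import sys
import Leap

class SwipeListener(Leap.Listener):
    def on_connect(self, controller):
        controller.enable_gesture(Leap.Gesture.TYPE_SWIPE)

    def on_frame(self, controller):
        for gesture in controller.frame().gestures():
            if gesture.type == Leap.Gesture.TYPE_SWIPE:
                swipe = Leap.SwipeGesture(gesture)
                # positive x-direction = swipe right, e.g. next widget page
                if swipe.direction.x > 0:
                    print("swipe right -> next page")
                else:
                    print("swipe left -> previous page")

listener = SwipeListener()
controller = Leap.Controller()
controller.add_listener(listener)
sys.stdin.readline()  # keep running until Enter is pressed
controller.remove_listener(listener)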

Groundtruth data for Computer Vision with Blender

In the video below you can see the sequence of a car driving in a city scene and braking. The layers I rendered out for groundtruth data are the rendered image with the bounding box of the car (top left), the emission layer (shows the brake lights when they start to emit light, top right), the optical flow (lower left), and the depth of each pixel in the world scene (lower right).


Render time was about 10h on an Nvidia GeForce GTX 680, tile size 256×256, total image size 960×720. In this article I will first demonstrate how to set up the depth rendering, and afterwards how to extract, save and recover the optical flow.
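To give a taste of the setup, here is a minimal sketch that enables the needed render passes via Blender's Python API (property names as in Blender 2.7x with Cycles; the render layer is assumed to carry the default name "RenderLayer"):

import bpy

scene = bpy.context.scene
rl = scene.render.layers["RenderLayer"]  # default layer name assumed

rl.use_pass_z = True       # per-pixel depth
rl.use_pass_vector = True  # speed vectors, i.e. the optical flow
rl.use_pass_emit = True    # emission layer (the brake lights)

# store everything in a multilayer EXR so the float data survives
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.image_settings.color_depth = '32'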

LW12 Protocol and Python Package

For my new flat I wanted controllable RGB LED stripes. The problem is that most of the cheap controllable ones only have IR remotes, so the receiver must somehow be in line of sight of the remote. That has several drawbacks: you cannot install it behind furniture without the receiver sticking out, and synchronizing across several rooms is hard.

My solution was to pick some of the RGB LED WiFi controllers (LW12). These come with a neat smartphone app to control them.

However, I wanted to control them with my own home automation system, or my own smartphone app.
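To illustrate how simple the protocol turns out to be, here is a minimal sketch of driving a controller over raw TCP. Port 5577 and the byte sequences (0x56 R G B 0xAA for a color, 0xCC 0x23/0x24 0x33 for on/off) are what is commonly reported for these controllers; verify them against your firmware before relying on them:

import socket

def send(ip, payload, port=5577):
    # the LW12 listens on a plain TCP socket; no handshake required
    with socket.create_connection((ip, port), timeout=2) as s:
        s.sendall(bytes(payload))

def set_color(ip, r, g, b):
    send(ip, [0x56, r, g, b, 0xAA])  # assumed color command framing

def power(ip, on=True):
    send(ip, [0xCC, 0x23 if on else 0x24, 0x33])  # assumed on/off commands

set_color("192.168.1.50", 255, 0, 0)  # hypothetical IP: full red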

SICK PLS 101-312, Python and Linux

After fiddling around with some ultrasonic sensors for S.A.R.A.H. (my home automation system), I was looking for other options. Thanks to eBay, industrial laser scanners are now an option :)
In this article I will describe how I connected the scanner to a regular PC and got the password, and I will provide a Python class that is able to communicate with the scanner and produce nice cv images (and a numpy array containing the measurements).

I paid 80 bucks for this used SICK laser scanner on the bay: the PLS 101-312.
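For a first impression of the communication, here is a minimal sketch of the serial link with pyserial. The framing (STX 0x02, address, little-endian payload length, payload, CRC16) and the CRC routine follow the SICK LMS/PLS telegram documentation as I understand it; the payload byte 0x10 is only a placeholder, not a verified PLS command:

import serial
import struct

def sick_crc(data):
    # CRC16 as listed in the SICK LMS/PLS telegram documentation
    crc = 0
    prev = 0
    for b in data:
        if crc & 0x8000:
            crc = ((crc & 0x7FFF) << 1) ^ 0x8005
        else:
            crc <<= 1
        crc ^= b | (prev << 8)
        crc &= 0xFFFF
        prev = b
    return crc

def telegram(payload, address=0x00):
    # STX, destination address, little-endian payload length, payload, CRC
    head = bytes([0x02, address]) + struct.pack("<H", len(payload))
    body = head + payload
    return body + struct.pack("<H", sick_crc(body))

ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)  # scanner default: 9600 8N1
ser.write(telegram(bytes([0x10])))  # 0x10 is a placeholder payload
print(ser.read(64))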

Displacement priors

What is the target of all this? Driving in an automotive scenario with a given speed and turn rate at any moment, we want to predict the displacement of a 2D projection (pixel) between two frames:

p(\vec{uv}_{x,y} \mid \text{speed}, \text{turn rate}, \text{camera matrix}, \text{world geometry})

By using the camera calibration, I can create artificial curves and walls as 3D point sets and project them back to 2D. Using discretized values for speed, turn rate, street width and wall height, I can then simulate the displacement of these 3D points when they are projected to 2D (our image); a sketch of this projection step follows below.
(Note for me: this is the backprojection-code, main-file: main_displacements.py)
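Here is a minimal numpy sketch of that projection step. The intrinsic matrix K is a placeholder (the real one comes from my calibration), and the motion model is deliberately crude:

import numpy as np

K = np.array([[700.0, 0.0, 480.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])  # assumed intrinsics for a 960x720 image

def project(points_3d):
    # points_3d: (N, 3) in camera coordinates, z > 0
    uvw = (K @ points_3d.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# a wall segment ahead of the car, sampled as a 3D point set
wall = np.array([[2.0, 0.0, z] for z in np.linspace(5, 20, 10)])

# ego-motion over one frame interval dt at the given speed / turn rate
dt, speed, turnrate = 0.04, 10.0, 0.1
yaw = turnrate * dt
R = np.array([[np.cos(yaw), 0, -np.sin(yaw)],
              [0, 1, 0],
              [np.sin(yaw), 0, np.cos(yaw)]])
wall_next = (R @ (wall - np.array([0, 0, speed * dt])).T).T

# per-point displacement prior in pixels
flow = project(wall_next) - project(wall)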

[Image: 2_flows]

Symmetry detection

This will probably become one of our modalities in the future: symmetry!

Thanks to the guys at hs-niederrhein, there is symmetry-detection code that can already be used for some first estimates:

This software implements the gradient product transform for symmetry detection that is described in the paper

C. Dalitz, R. Pohle-Froehlich, F. Schmitt, M. Jeltsch: "The gradient product transform for symmetry detection and blood vessel extraction." International Conference on Computer Vision Theory and Applications (VISAPP), pp. 177-184, 2015.
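To build up some intuition, here is a rough numpy/OpenCV toy sketch of the core idea as I read the paper: a pixel scores high when the gradients at point-symmetric offsets point towards each other. This is a simplified reimplementation for illustration, not the authors' reference code:

import numpy as np
import cv2

def symmetry_score(gray, radius=10):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    score = np.zeros_like(gray, dtype=np.float64)
    # loop over half of the offsets so each symmetric pair is counted once
    for dy in range(-radius, radius + 1):
        for dx in range(0, radius + 1):
            if dx == 0 and dy <= 0:
                continue
            # gradient at p+d times gradient at p-d: opposite directions
            # give a negative product, so subtracting yields a positive score
            a = np.roll(np.roll(gx, -dy, 0), -dx, 1) * np.roll(np.roll(gx, dy, 0), dx, 1)
            b = np.roll(np.roll(gy, -dy, 0), -dx, 1) * np.roll(np.roll(gy, dy, 0), dx, 1)
            score -= a + b
    return score

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)  # placeholder filename
s = symmetry_score(img)
cv2.imwrite("symmetry.png", cv2.normalize(s, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8))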

And the first results look quite promising:

Lane detection

Today I will try to detect some lanes...

Assumptions:
– We know the lane width (plus/minus)
– We are in the middle of a lane
– We know the camera geometry
– Based on the turn rate of the IMU we can estimate the curvature of the street
– A line in pixels can be detected by an upward flank and a downward flank (see the sketch after this list)
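A minimal sketch of that flank idea on a single grayscale scanline; the thresholds are illustrative:

import numpy as np

def marking_candidates(row, min_jump=40, max_width=25):
    # row: 1D array of grayscale values along one image scanline
    diff = np.diff(row.astype(np.int32))
    ups = np.where(diff > min_jump)[0]     # upward flanks (dark -> bright)
    downs = np.where(diff < -min_jump)[0]  # downward flanks (bright -> dark)
    pairs = []
    for u in ups:
        # a marking is an upward flank followed by a downward flank
        # within a plausible marking width (in pixels)
        later = downs[(downs > u) & (downs - u <= max_width)]
        if later.size:
            pairs.append((u, later[0]))
    return pairs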

Here are some exemplary results:

1) Of course, the best one first ;)
[Image: lanedet_00004871]