What is the goal of all this? In an automotive driving scenario, where speed and turnrate are known at every moment, we want to predict the displacement of a 2D projection (pixel) between two frames:
Using the camera calibration, I can create artificial curves and walls as 3D point sets and project them back to 2D. With discretized values for speed, turnrate, street width and wall height, I can then simulate the displacement of these 3D points when they are projected to 2D (our image).
(Note for me: this is the backprojection-code, main-file: main_displacements.py)
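A minimal sketch of that simulation, with a made-up pinhole calibration matrix and a toy ego-motion model (the real calibration and the code in main_displacements.py will of course differ):

```python
import numpy as np

# Hypothetical pinhole intrinsics -- a stand-in for the real camera calibration.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_3d, K):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    p = (K @ points_3d.T).T          # homogeneous image coordinates
    return p[:, :2] / p[:, 2:3]      # divide by depth

def move(points_3d, speed_mps, turnrate_rps, dt):
    """Apply the ego-motion (forward speed + yaw rate) to static world points
    expressed in the camera frame: rotate by the yaw, shift forward by speed*dt."""
    yaw = turnrate_rps * dt
    c, s = np.cos(-yaw), np.sin(-yaw)
    R = np.array([[c,   0.0, -s ],
                  [0.0, 1.0, 0.0],
                  [s,   0.0,  c ]])            # rotation about the vertical (y) axis
    t = np.array([0.0, 0.0, -speed_mps * dt])  # camera moves forward along z
    return points_3d @ R.T + t

# A toy strip of lane-edge points on the road ahead (x right, y down, z forward).
pts = np.array([[x, 1.5, z] for z in np.arange(5.0, 20.0, 2.5)
                for x in (-1.75, 1.75)])

px_before = project(pts, K)
px_after = project(move(pts, speed_mps=50 / 3.6, turnrate_rps=0.1, dt=1 / 30), K)
displacement = px_after - px_before   # per-pixel flow induced by the ego-motion
```

As expected, nearby points produce a much larger displacement than distant ones, which is exactly the structure the prior maps should capture.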
Since these points and their displacements are sparse, I need to interpolate them in order to use them as dense maps for flow or motion-cancellation algorithms.
(Note for me: this is the ipython-notebook in the backproject folder, Sparse\ Interpolation\ of\ flow.ipynb)
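The interpolation step could look roughly like this, sketched here with SciPy's `griddata` on a handful of toy sample points (the notebook may well use a different method; the sample values are invented):

```python
import numpy as np
from scipy.interpolate import griddata

# Sparse samples: pixel positions of projected 3D points and the (toy)
# displacement vector measured at each of them.
xy = np.array([[50, 40], [250, 40], [50, 200], [250, 200], [150, 120]], float)
flow = np.array([[-2.0, 1.0], [2.0, 1.0], [-4.0, 3.0], [4.0, 3.0], [0.0, 2.0]])

h, w = 240, 320
grid_y, grid_x = np.mgrid[0:h, 0:w]

# Linear interpolation inside the convex hull of the samples, nearest
# neighbour as a fallback for pixels outside of it (linear yields NaN there).
dense = np.empty((h, w, 2))
for c in range(2):
    lin = griddata(xy, flow[:, c], (grid_x, grid_y), method="linear")
    near = griddata(xy, flow[:, c], (grid_x, grid_y), method="nearest")
    dense[:, :, c] = np.where(np.isnan(lin), near, lin)
```

The result is a dense per-pixel displacement map with the same shape as the image, ready to be used as a prior.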
Most of the code is in Python, but I want to use these maps as priors in the C++ system. I first tried YAML to transfer the flow matrices from Python to C++; that did not work, because OpenCV in Python uses a different YAML format than OpenCV in C++. Next, I tried XML. This worked well, but the average file was about 4.6 MB. Since at this image size and these speeds I never measured a displacement greater than 255, I decided that the PNG format is good enough ;)
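The quantization that makes PNG viable is just rounding and clipping to uint8; a sketch (the array names are mine, and the actual file I/O would go through `cv2.imwrite`/`cv2.imread` on the two sides):

```python
import numpy as np

# Toy dense displacement magnitudes; in practice I never measured a value
# above 255 at this image size, so uint8 loses at most the sub-pixel part.
disp = np.random.default_rng(0).uniform(0.0, 200.0, size=(240, 320))

# Quantize to the PNG-friendly range. np.clip guards against the (so far
# unobserved) case of a displacement above 255.
png_ready = np.clip(np.rint(disp), 0, 255).astype(np.uint8)

# cv2.imwrite("prior.png", png_ready) would persist it, and the C++ side can
# read it back with cv::imread(..., cv::IMREAD_GRAYSCALE).
restored = png_ready.astype(np.float64)
max_err = np.abs(restored - disp).max()   # at most 0.5 px from rounding
```

Compared to the ~4.6 MB XML files, a single-channel PNG of the same map compresses to a small fraction of that, at the cost of half a pixel of rounding error.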
Here is a video that takes the speed and the turnrate from one of my captured sequences and maps them to the nearest prior map (e.g., if the speed is 55 and the turnrate is 0.121, I choose the map 50kmh_0.1tr_14m.png). That way I do not have to create a myriad of maps, and the approximations should be fine enough. If not, I can always invest the time and recode everything in C++ to simulate it live.
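The nearest-map lookup could be sketched like this; the discretization grids below are invented, only the file-naming scheme and the 55 / 0.121 → 50kmh_0.1tr_14m.png example come from the text:

```python
# Hypothetical discretization grids -- the real set of prior maps may differ.
speeds = [10, 20, 30, 40, 50, 60]        # km/h
turnrates = [0.0, 0.05, 0.1, 0.15, 0.2]  # turnrate values, same unit as the input

def nearest(value, grid):
    """Snap a measured value to the closest entry of a discretized grid."""
    return min(grid, key=lambda g: abs(g - value))

def prior_map_name(speed_kmh, turnrate, streetwidth_m=14):
    """Build the file name of the matching prior map from the snapped values,
    following the naming scheme <speed>kmh_<turnrate>tr_<streetwidth>m.png."""
    s = nearest(speed_kmh, speeds)
    tr = nearest(turnrate, turnrates)
    return f"{s}kmh_{tr:g}tr_{streetwidth_m}m.png"

name = prior_map_name(55, 0.121)  # -> "50kmh_0.1tr_14m.png"
```

Since the maps are only looked up, never generated, at runtime, this keeps the per-frame cost at a single file (or cache) read.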