Example-based 3D Trajectory Extraction

Zeyd Boukhers,  Kimiaki Shirahama and Marcin Grzegorzek

Demo

The above video shows two examples of our method on the KITTI dataset. The red symbol denotes the camera, and each dot denotes an object followed by its trajectory. When an object's actual movement is slow and its projection does not change significantly on the image plane, it is hard to determine its velocity (direction and speed) at the beginning (see example #2).

Abstract

For semantic analysis of activities and events in videos, it is important to capture the spatio-temporal relations among objects in 3D space. In this paper, we present a probabilistic method that extracts 3D trajectories of objects from 2D videos captured by a monocular moving camera. Compared to existing methods that rely on restrictive assumptions, our method extracts 3D trajectories with much less restriction by adopting new example-based techniques which compensate for the lack of information. Here, we estimate the focal length of the camera based on similar candidates and use it to compute the depths of detected objects. Contrary to other 3D trajectory extraction methods, our method is able to process videos taken from a stable camera as well as from a non-calibrated moving camera without restrictions. For this, we modify Reversible Jump Markov Chain Monte Carlo (RJ-MCMC) particle filtering to be more suitable for camera odometry without relying on geometrical feature points. Moreover, our method decreases time consumption by reducing the number of object detections through keypoint matching. Finally, we evaluate our method on well-known datasets, showing the robustness of our system and demonstrating its efficiency in dealing with different kinds of videos.

Overview

Focal Length Estimation

In this process, we estimate the focal length of the camera with which the query image was taken. For this, we collected about 75,000 images from MIRFLICKR. Each image is associated with its scene class and EXIF data (i.e. focal length and CCD width). We then retrieve 100 images from the corresponding class based on spatial characteristics, then 20 images based on semantic characteristics, and finally 7 images based on blur degrees. The focal length can then be estimated from the focal lengths of the selected candidates. This estimate is in mm; however, the focal length in mm cannot be used to relate the image to the real world, since the width of the CCD is also missing. Therefore, we extend our method to estimate the focal length in pixel units, where in practice the ratio of the focal length (mm) to the CCD width is estimated (called Ι). The focal length in pixel units is then the product of Ι and the width of the image.
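The conversion from the candidates' EXIF data to a focal length in pixel units can be sketched as follows. This is a minimal illustration, not the released implementation: the helper function name, the median aggregation over candidates, and the example EXIF values are assumptions made for clarity.

import numpy as np

def estimate_focal_length_px(candidates, query_image_width_px):
    """Estimate the query camera's focal length in pixel units.

    candidates           -- list of (focal_length_mm, ccd_width_mm) pairs taken
                            from the EXIF data of the retrieved candidate images
    query_image_width_px -- width of the query image in pixels
    """
    # Ratio I = focal length (mm) / CCD width (mm) for every candidate.
    ratios = np.array([f_mm / ccd_mm for f_mm, ccd_mm in candidates])

    # Aggregate the candidate ratios; the median is a simple robust choice here
    # (the actual method may combine the candidates differently).
    ratio_I = np.median(ratios)

    # Focal length in pixel units: f_px = I * image width (px).
    return ratio_I * query_image_width_px

if __name__ == "__main__":
    # Hypothetical EXIF values of the 7 finally retrieved candidates.
    candidates = [(5.8, 7.2), (6.0, 7.6), (5.4, 6.9), (6.2, 7.4),
                  (5.9, 7.1), (5.7, 7.3), (6.1, 7.5)]
    f_px = estimate_focal_length_px(candidates, query_image_width_px=1242)
    print("Estimated focal length: %.1f px" % f_px)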

Code

Focal length estimation