GRAM Road-Traffic Monitoring

Introduction

Here we release the GRAM Road-Traffic Monitoring (GRAM-RTM) dataset, a novel benchmark for real-time multi-vehicle tracking. It consists of 3 challenging video sequences, recorded under different conditions and with different platforms. The first video, called M-30 (7520 frames), was recorded on a sunny day with a Nikon Coolpix L20 camera at a resolution of 800x480 @ 30 fps. The second sequence, called M-30-HD (9390 frames), was recorded at a similar location but on a cloudy day and with a higher-resolution camera: a Nikon DX3100 at 1200x720 @ 30 fps. The third video sequence, called Urban1 (23435 frames), was recorded at a busy intersection with a video surveillance traffic camera at a resolution of 600x360 @ 25 fps.
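As a quick illustration, the following Python sketch shows one way to iterate over the frames of each sequence with OpenCV. The video file paths and the use of OpenCV are assumptions for illustration only, not part of the dataset distribution; adjust them to your local copy.

    # Minimal sketch: per-sequence metadata and frame iteration with OpenCV.
    # The file paths below are assumptions; point them at the downloaded videos.
    import cv2

    SEQUENCES = {
        "M-30":    {"frames": 7520,  "resolution": (800, 480),  "fps": 30},
        "M-30-HD": {"frames": 9390,  "resolution": (1200, 720), "fps": 30},
        "Urban1":  {"frames": 23435, "resolution": (600, 360),  "fps": 25},
    }

    def iterate_frames(video_path):
        """Yield frames from one sequence video, in order."""
        capture = cv2.VideoCapture(video_path)
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            yield frame
        capture.release()

    # Example usage (hypothetical path):
    # for frame in iterate_frames("GRAM-RTM/M-30.avi"):
    #     process(frame)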

[Figure: GRAM-RTM example frames]

All the vehicles in the GRAM-RTM dataset have been manually annotated. The following categories are provided: car, truck, van, and big-truck. The total number of different objects in each sequence is 256 for M-30, 235 for M-30-HD, and 237 for Urban1. We provide a unique identifier for each vehicle. All the annotations included in GRAM-RTM are provided in an XML format compatible with PASCAL VOC.
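Because the annotations follow the PASCAL VOC XML layout, they can be read with standard tools. The sketch below uses Python's xml.etree.ElementTree; the standard VOC tags (object, name, bndbox, xmin, ymin, xmax, ymax) are assumed, and the tag holding the unique vehicle identifier ("ID" below) is an assumption to verify against the downloaded files.

    # Minimal sketch: parse one GRAM-RTM annotation file (PASCAL VOC-style XML).
    import xml.etree.ElementTree as ET

    def parse_annotation(xml_path):
        """Return a list of (category, vehicle_id, (xmin, ymin, xmax, ymax))."""
        root = ET.parse(xml_path).getroot()
        objects = []
        for obj in root.findall("object"):
            category = obj.findtext("name")      # car, truck, van, big-truck
            vehicle_id = obj.findtext("ID")      # assumed tag for the unique identifier
            box = obj.find("bndbox")
            bbox = tuple(int(float(box.findtext(tag)))
                         for tag in ("xmin", "ymin", "xmax", "ymax"))
            objects.append((category, vehicle_id, bbox))
        return objects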

For a detailed description of the experimental setup proposed to evaluate your detection and tracking results, please download our paper.

Downloads

Best practice: Recommendations on using the dataset

The GRAM-RTM images must be used only for testing, never for training. We therefore propose the following experimental setup. Any approach reporting results on the GRAM-RTM database must be trained using any data except the provided test images. Furthermore, the test data must be used strictly for reporting results: it must not be used in any way to train or tune systems, for example by running multiple parameter choices and reporting only the best results obtained. For training, we suggest using other datasets that provide vehicles, e.g. PASCAL VOC.

Database Rights

The database has been made publicly available for scientific research purposes.

Acknowledgements

This work was partially supported by projects TIN2010-20845-C03-01, TIN2010-20845-C03-03, IPT-2011-1366-390000 and IPT-2012-0808-370000.

Citing

If you make use of this data and software, please cite the following reference in any publications:

@INPROCEEDINGS{guerrero2013iwinac,
  author = {Guerrero-Gomez-Olmedo, R. and Lopez-Sastre, R.~J. and Maldonado-Bascon, S. and Fernandez-Caballero, A.},
  title = {Vehicle Tracking by Simultaneous Detection and Viewpoint Estimation},
  booktitle = {IWINAC 2013, Part II, LNCS 7931},
  pages = {306--316}, 
  year = {2013}  
}