In scene analysis, the availability of an initial background model that describes the scene without foreground objects is a prerequisite for, or at least helpful to, many applications, including video surveillance, video segmentation, video compression, video inpainting (also called video completion), privacy protection for videos, and computational photography.
Few methods for scene background modeling have specifically addressed initialization, also referred to as bootstrapping, background estimation, background reconstruction, initial background extraction, or background generation. Many challenges remain unsolved, including sudden illumination changes, night videos, low frame rates, and videos taken by PTZ cameras; effective model learning is therefore still required.
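As a point of reference, the simplest background initialization strategy is a pixel-wise temporal median over the input frames; it fails on many of the challenges above (e.g., clutter or intermittent motion) but illustrates the task. The sketch below is a minimal, naive baseline under the assumption that frames are read with OpenCV from a standard video file; it is not one of the benchmarked methods.

```python
import cv2
import numpy as np

def median_background(video_path, max_frames=200):
    """Estimate a background image as the pixel-wise temporal median
    of up to `max_frames` frames. A naive baseline, not an SBMnet method."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    if not frames:
        raise ValueError("no frames read from " + video_path)
    # Median over the temporal axis, one value per pixel and channel.
    return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)
```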
The aim of the contest is to advance the development of algorithms and methods for scene background modeling through objective evaluation on a common dataset.
This website hosts a rigorous and comprehensive academic benchmarking effort for testing and ranking existing and new scene background modeling algorithms, and it will maintain a ranking of submitted methods for years to come.
Dataset
Our dataset provides a diverse set of videos. They have been selected to cover a wide range of scene background modeling challenges and are representative of typical indoor and outdoor visual data captured today in surveillance, smart environment, and video database scenarios. The SBMnet dataset includes the following challenge categories: Basic, Intermittent Motion, Clutter, Jitter, Illumination Changes, Background Motion, Very Long, and Very Short. Please see the OVERVIEW of the dataset for a detailed description of the included video categories and examples of ground truth.
Performance evaluation
In addition to providing a fine-grained video dataset, we also provide tools to compute performance metrics and thus identify algorithms that are robust across various challenges. The source code to compute all performance metrics is provided in UTILITIES. These metrics are reported under the RESULTS tab separately for each category. Details of the evaluation methodology and the specific metrics used can be found in EVALUATION under the RESULTS tab.
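For orientation, the sketch below shows how two of the simpler comparison measures between an estimated background and a ground-truth image (average gray-level error and PSNR, plus a fraction-of-error-pixels measure) could be computed; it is not the official UTILITIES code, the RGB channel order and the error threshold tau=20 are assumptions, and the official programs should be used for any reported scores.

```python
import numpy as np

def to_gray(img):
    """Convert an H x W x 3 image to grayscale float (assumes RGB order);
    pass an already-gray image through unchanged."""
    if img.ndim == 3:
        img = img @ np.array([0.299, 0.587, 0.114])
    return img.astype(np.float64)

def age(gt, est):
    """Average Gray-level Error: mean absolute gray-level difference."""
    return np.mean(np.abs(to_gray(gt) - to_gray(est)))

def error_pixel_fraction(gt, est, tau=20):
    """Fraction of pixels whose absolute gray-level difference exceeds tau
    (tau=20 is an assumed threshold; the official tools define the exact value)."""
    return np.mean(np.abs(to_gray(gt) - to_gray(est)) > tau)

def psnr(gt, est):
    """Peak Signal-to-Noise Ratio in dB, assuming 8-bit images."""
    mse = np.mean((to_gray(gt) - to_gray(est)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```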
Participation
Researchers from both academia and industry are invited to test their scene background modeling algorithms on the SBMnet dataset, and to report their methodology and results (please read the rules and instructions below). Results from all submissions will be reported and maintained on this website.
Instructions for prospective participants:
- The DATASET contains 8 video categories with various video sequences in each category. Results can be reported for one, multiple, or all video categories; however, within any one category, results must be reported for all sequences in that category.
- Only one set of tuning parameters should be used for all videos.
- Numerical scores can be computed using the Matlab or Python programs available in UTILITIES. Both programs take the output produced by an algorithm and the ground truth, and compute the performance metrics described in EVALUATION under the RESULTS tab (a hedged sketch of evaluating an entire category follows this list).
- In order for a method to be ranked on this website, results must be submitted via UPLOAD HERE.
- If you use this facility to test and report results in any publication, we request that you acknowledge this website (www.SceneBackgroundModeling.net).
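As a usage illustration of the per-category rule above, and assuming the hypothetical helper functions from the earlier sketch plus a hypothetical directory layout with one result image per sequence (neither reflects the site's actual tools or layout), evaluating a whole category could look like:

```python
import os
import cv2

def evaluate_category(category_dir, results_dir):
    """Compute AGE for every sequence in a category, using the helpers sketched above.
    Assumes <category_dir>/<sequence>/GT/ holds ground-truth images and
    <results_dir>/<sequence>.png holds the estimated background (hypothetical layout)."""
    scores = {}
    for sequence in sorted(os.listdir(category_dir)):
        gt_dir = os.path.join(category_dir, sequence, "GT")
        if not os.path.isdir(gt_dir):
            continue
        gt = cv2.imread(os.path.join(gt_dir, sorted(os.listdir(gt_dir))[0]))
        est = cv2.imread(os.path.join(results_dir, sequence + ".png"))
        scores[sequence] = age(gt, est)  # age() from the earlier sketch
    return scores
```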
Dataset Organizers
- Pierre-Marc Jodoin (Université de Sherbrooke, Canada)
E-mail: pierre-marc.jodoin@usherbrooke.ca
Home page: http://info.usherbrooke.ca/pmjodoin
- Alfredo Petrosino (Parthenope University of Naples, Italy)
E-mail: alfredo.petrosino@uniparthenope.it
Home page: http://cvprlab.uniparthenope.it/index.php/staff/internal-staff/51-alfredo-petrosino.html
- Lucia Maddalena (National Research Council, Italy)
E-mail: lucia.maddalena@cnr.it
Home page: http://www.na.icar.cnr.it/~maddalena.l/
Acknowledgment
The SBMnet dataset, the original website, and the utilities associated with this benchmarking facility would not have materialized without the tireless efforts of many people. We would like to recognize the following individuals for their contributions to this effort:
- Yi Wang, Ph.D student, Université de Sherbrooke, Canada
Webmaster, software developer
- Martin Cousineau, Université de Sherbrooke, Canada
Webmaster, software developer
- Staff of the CVPRLab (http://cvprlab.uniparthenope.it/), University of Naples Parthenope, Italy
Specifically: Francesco Battistone, Vincenzo De Angelis, Francesco Maiorano, Gianmaria Perillo, Gabriele Perna, Mario Ruggieri, and Pierpaolo Sepe.