We present EvTTC, an event camera dataset for time-to-collision (TTC) estimation in autonomous driving:
- A diverse set of sequences featuring various targets,
such as real vehicles, inflatable vehicles, and dummies,
across a wide range of relative speeds, including both
routine and challenging situations.
- A low-cost and small-scale TTC testbed that facilitates
the generation of quasi-real data at different relative
speeds. The design of the testbed is open-source.
- A dedicated benchmark for the TTC task that can serve
as an evaluation platform for the community to test and
compare different TTC estimation methods.
News
- Dec. 5, 2024: Watch our video on YouTube.
- Nov. 5, 2024: The EvTTC Benchmark goes live!
BibTeX
Please cite the following publication when using this benchmark in an academic context:
@misc{sun2024evttc,
  title         = {EvTTC: An Event Camera Dataset for Time-to-Collision Estimation},
  author        = {Kaizhen Sun and Jinghang Li and Kuan Dai and Bangyan Liao and Wei Xiong and Yi Zhou},
  year          = {2024},
  eprint        = {2412.05053},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}
Abstract
Time-to-Collision (TTC) estimation lies at the core of the
forward collision warning (FCW) functionality, which is key
to all Automatic Emergency Braking (AEB) systems. Although
frame-based camera solutions (e.g., Mobileye's) have proven
successful in ordinary situations, extreme cases, such as
sudden changes in the relative speed of leading vehicles or
the sudden appearance of pedestrians, still pose significant
risks that they cannot handle. This is due to the inherent
imaging principle of frame-based cameras: the time interval
between adjacent exposures introduces considerable system
latency into AEB. Event cameras, a class of novel bio-inspired
sensors, offer ultra-high temporal resolution and can
asynchronously report brightness changes at the microsecond
level. To explore the potential of event cameras in these
challenging cases, we propose EvTTC, which is, to the best of
our knowledge, the first multi-sensor dataset focusing on the
TTC task under high-relative-speed scenarios. EvTTC consists
of data collected with standard cameras and event cameras,
covering various potential collision scenarios in daily
driving and involving multiple collision objects. In addition,
LiDAR and GNSS/INS measurements are provided for computing
ground-truth TTC. Given the high cost of testing TTC
algorithms on full-scale mobile platforms, we also provide a
small-scale TTC testbed for experimental validation and data
augmentation. All the data and the design of the testbed are
open-sourced and can serve as a benchmark that will facilitate
the development of vision-based TTC techniques.
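For context, ground-truth TTC is commonly defined as the distance to the target divided by the closing speed, i.e., TTC = d / (-ḋ), which is why range sensors such as LiDAR (or GNSS/INS relative velocity) suffice to compute it. Below is a minimal Python sketch of this computation from a time series of range measurements; the function name, tolerance, and differentiation scheme are illustrative assumptions, not a description of EvTTC's actual ground-truth pipeline.

```python
import numpy as np

def ttc_from_range(timestamps, ranges):
    """Ground-truth TTC from a time series of target distances.

    TTC(t) = d(t) / (-d'(t)): the distance to the target (e.g., a
    LiDAR range) divided by the closing speed (which could equally
    be taken from GNSS/INS relative velocity).
    """
    timestamps = np.asarray(timestamps, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # Numerical derivative of the range; negative while closing in.
    closing_speed = -np.gradient(ranges, timestamps)
    # TTC is undefined (infinite) when the gap is not shrinking.
    ttc = np.full_like(ranges, np.inf)
    closing = closing_speed > 1e-6
    ttc[closing] = ranges[closing] / closing_speed[closing]
    return ttc

# Toy example: target 20 m ahead, closing at a constant 5 m/s.
t = np.linspace(0.0, 1.0, 11)
d = 20.0 - 5.0 * t
print(ttc_from_range(t, d))  # [4.0, 3.9, ..., 3.0] seconds
```

In practice, the closing speed may be taken directly from GNSS/INS relative velocity rather than differentiated from ranges, since numerical differentiation amplifies range-measurement noise.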
Acknowledgement
This work was supported by the National Key Research and Development
Project of China (grant number 2023YFB4706600) and by the generous support of our industry partner,
Xidi Zhijia (Hunan) Co., Ltd.