Does spatio-temporal information benefit the video summarization task?

Aashutosh Ganesh*, Mirela Popa, Daan Odijk, Nava Tintarev

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

Abstract

An important aspect of summarizing videos is understanding the temporal context of each part of the video in order to grasp what is and is not important. In recent years, video summarization models have modeled spatio-temporal relationships to represent this information, achieving state-of-the-art correlation scores on important benchmark datasets. However, it has not been examined whether spatio-temporal relationships are even required to achieve state-of-the-art results. Previous work in activity recognition has found biases in which models prioritize static cues, such as scenes or objects, over motion information. In this paper, we investigate whether similar spurious relationships influence the task of video summarization. To do so, we analyse the role that temporal information plays in existing benchmark datasets. We first establish a baseline with temporally invariant models to see how well such models rank on the benchmark datasets TVSum and SumMe. We then disrupt the temporal order of the videos to investigate the impact this has on existing state-of-the-art models. One of our findings is that the temporally invariant models achieve competitive correlation scores, close to the human baselines on the TVSum dataset. We also demonstrate that existing models are not affected by temporal perturbations; moreover, certain disruption strategies that shuffle fixed time segments can actually improve their correlation scores. Based on these results, we find that spatio-temporal relationships play a minor role, and we raise the question of whether these benchmarks adequately model the task of video summarization. Code available at:
https://github.com/AashGan/TemporalPerturbSum
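The segment-shuffling perturbation mentioned in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' released code (see the repository above); the function name and parameters are our own. The idea is to break a video's global temporal order while preserving short-range continuity within each fixed-length segment:

```python
import random

def shuffle_fixed_segments(frames, segment_len, seed=None):
    """Shuffle a sequence in fixed-length segments.

    Frames within each segment keep their local order; the
    segments themselves are placed in random order. This
    disrupts global temporal structure while leaving
    short-range temporal continuity intact.
    """
    rng = random.Random(seed)
    # Split the frame sequence into consecutive chunks of segment_len
    # (the final chunk may be shorter).
    segments = [frames[i:i + segment_len]
                for i in range(0, len(frames), segment_len)]
    rng.shuffle(segments)
    # Flatten the reordered segments back into one sequence.
    return [f for seg in segments for f in seg]
```

Feeding such a perturbed frame sequence to a trained summarizer and comparing its correlation scores against the unperturbed input is one way to probe how much the model actually relies on temporal order.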
Original language: English
Title of host publication: AEQUITAS 2024: Workshop on Fairness and Bias in AI | co-located with ECAI 2024
Pages: 10-25
Number of pages: 15
Publication status: Published - 20 Oct 2024
