A Robust Neural Fingerprint of Cinematic Shot-Scale

Gal Raz*, Giancarlo Valente, Michele Svanera, Sergio Benini, András Bálint Kovács

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

This article provides evidence for the existence of a robust “brainprint” of cinematic shot-scales that generalizes across movies, genres, and viewers. We applied a machine-learning method to a dataset of 234 fMRI scans taken during the viewing of a movie excerpt. Based on a manual annotation of shot-scales in five movies, we generated a computational model that predicts time series of this feature. The model was then applied to fMRI data obtained from new participants who either watched excerpts from the movies or clips from new movies. The shot-scale time series predicted by our model correlated significantly with the original annotation in all nine cases. The spatial structure of the model indicates that the empirical experience of cinematic close-ups correlates with the activation of the ventral visual stream, the centromedial amygdala, and components of the mentalization network, while the experience of long shots correlates with the activation of the dorsal visual pathway and the parahippocampus. The shot-scale brainprint is also in line with the notion that this feature is informed, among other factors, by perceived apparent distance. Based on related theoretical and empirical findings, we suggest that the empirical experience of close and far shots implicates different mental models: concrete and contextualized perception dominated by recognition and visual and semantic memory on the one hand, and action-related processing supporting orientation and movement monitoring on the other.
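The validation logic the abstract describes — train a linear decoder to predict a shot-scale time series from fMRI activity, then test it by correlating its predictions on held-out data with the manual annotation — can be illustrated with a toy sketch. Everything below is an assumption for illustration only (synthetic data, dimensions, and a ridge-regularized decoder); it does not reproduce the paper's actual model or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 fMRI time points x 50 voxels, plus a manually
# annotated shot-scale time series (e.g., 1 = close-up ... 5 = long shot).
n_time, n_vox = 200, 50
annotation = rng.integers(1, 6, size=n_time).astype(float)
weights_true = rng.normal(size=n_vox)
fmri = np.outer(annotation, weights_true) + rng.normal(scale=5.0, size=(n_time, n_vox))

# Fit a ridge-regularized linear decoder on the first half of the data ...
lam = 1.0
X_train, y_train = fmri[:100], annotation[:100]
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_vox), X_train.T @ y_train)

# ... and validate on the held-out half by correlating the predicted
# shot-scale time series with the annotated one, as in the paper's test.
X_test, y_test = fmri[100:], annotation[100:]
predicted = X_test @ w
r = np.corrcoef(predicted, y_test)[0, 1]
print(f"correlation between predicted and annotated series: {r:.2f}")
```

On this synthetic data the held-out correlation is high because the signal is linear by construction; the paper's result is the analogous (movie- and viewer-generalizing) correlation on real fMRI data.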
Original language: English
Pages (from-to): 23-52
Number of pages: 30
Journal: Projections: The Journal for Movies and Mind
Volume: 13
Issue number: 3
DOI: 10.3167/proj.2019.130303
Publication status: Published - Dec 2019

Keywords

  • apparent distance
  • fMRI
  • machine learning
  • motion pictures
  • neural decoding
  • shot-scale
  • FUNCTIONAL CONNECTIVITY
  • DYNAMICS
  • DISTANCE
  • AMYGDALA
  • CORTEX
  • SIZE
  • DISSOCIATION
  • PERCEPTION
  • MODELS
  • AREAS

Cite this

Raz, Gal ; Valente, Giancarlo ; Svanera, Michele ; Benini, Sergio ; Kovács, András Bálint. / A Robust Neural Fingerprint of Cinematic Shot-Scale. In: Projections: The Journal for Movies and Mind. 2019 ; Vol. 13, No. 3. pp. 23-52.
@article{da0bad4bca8e41f8ad1411beb0810035,
title = "A Robust Neural Fingerprint of Cinematic Shot-Scale",
abstract = "This article provides evidence for the existence of a robust “brainprint” of cinematic shot-scales that generalizes across movies, genres, and viewers. We applied a machine-learning method to a dataset of 234 fMRI scans taken during the viewing of a movie excerpt. Based on a manual annotation of shot-scales in five movies, we generated a computational model that predicts time series of this feature. The model was then applied to fMRI data obtained from new participants who either watched excerpts from the movies or clips from new movies. The shot-scale time series predicted by our model correlated significantly with the original annotation in all nine cases. The spatial structure of the model indicates that the empirical experience of cinematic close-ups correlates with the activation of the ventral visual stream, the centromedial amygdala, and components of the mentalization network, while the experience of long shots correlates with the activation of the dorsal visual pathway and the parahippocampus. The shot-scale brainprint is also in line with the notion that this feature is informed, among other factors, by perceived apparent distance. Based on related theoretical and empirical findings, we suggest that the empirical experience of close and far shots implicates different mental models: concrete and contextualized perception dominated by recognition and visual and semantic memory on the one hand, and action-related processing supporting orientation and movement monitoring on the other.",
keywords = "apparent distance, fMRI, machine learning, motion pictures, neural decoding, shot-scale, FUNCTIONAL CONNECTIVITY, DYNAMICS, DISTANCE, AMYGDALA, CORTEX, SIZE, DISSOCIATION, PERCEPTION, MODELS, AREAS",
author = "Raz, Gal and Valente, Giancarlo and Svanera, Michele and Benini, Sergio and Kov{\'a}cs, {Andr{\'a}s B{\'a}lint}",
year = "2019",
month = dec,
doi = "10.3167/proj.2019.130303",
language = "English",
volume = "13",
pages = "23--52",
journal = "Projections: The Journal for Movies and Mind",
issn = "1934-9688",
publisher = "Berghahn Journals",
number = "3",

}
