GRAPE-MARS

Multimodal Analysis Research Software

The multimodal annotation software GRAPE-MARS (Multimodal Analysis Research Software), developed by the research group GRAPE, is designed to assist in the detailed analysis of videos and the different semiotic modes used in them. The program allows users to create their own libraries in which the semiotic modes to be analysed are collected (e.g., gestures, head movements, facial expressions, visual effects, sound effects), as well as their different typologies (e.g., types of gestures, or different facial expressions). The software facilitates the annotation of these modes by organizing them into different time-aligned layers. This helps determine the co-occurrence of modes (that is, the instances when they occur in synchrony). Likewise, it facilitates the quantitative analysis of these data and their representation in graphs.
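
As a rough illustration of what co-occurrence across time-aligned layers means, the short Python sketch below checks whether annotations from two hypothetical layers (gestures and head movements) overlap in time. The data and function names are invented for illustration and are not part of GRAPE-MARS; the sketch is meant only to make the idea of synchrony between annotated intervals concrete.

    # Illustrative sketch only: detecting co-occurrence between two
    # hypothetical time-aligned annotation layers.

    def co_occurs(a_start, a_end, b_start, b_end):
        """Return True if two annotated intervals (in seconds) overlap in time."""
        return a_start < b_end and b_start < a_end

    # Hypothetical annotations: (start, end, label)
    gestures = [(2.0, 3.5, "deictic gesture"), (7.0, 8.2, "iconic gesture")]
    head_movements = [(3.0, 4.0, "nod"), (9.0, 9.5, "head shake")]

    # List every gesture / head-movement pair that occurs in synchrony.
    for g_start, g_end, g_label in gestures:
        for h_start, h_end, h_label in head_movements:
            if co_occurs(g_start, g_end, h_start, h_end):
                print(f"{g_label} co-occurs with {h_label} "
                      f"({max(g_start, h_start):.1f}-{min(g_end, h_end):.1f} s)")

Run on the hypothetical data above, this would report that the deictic gesture co-occurs with the nod between seconds 3.0 and 3.5.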

The software consists of the following modules: video library, video player, verbal transcription, audio representation graph, annotation library, annotation layers, and data analytics.

GRAPE-MARS is open-source software, free to use, but we would appreciate a citation if you use it in your research. You can cite GRAPE-MARS as software in APA style:

Ruiz-Madrid, N., Fortanet-Gómez, I., Bernad-Mechó, E., & Valeiras-Jurado, J. (2023). GRAPE-MARS (Multimodal Analysis Research Software). Castelló de la Plana: Universitat Jaume I. https://mars.grape.uji.es