Fundamentals of formal properties of nonmanuals:
A quantitative approach


About the project

The project “Fundamentals of formal properties of nonmanuals: A quantitative approach” (NONMANUAL) is funded by the European Research Council (link to the project in the ERC Datahub). It will run from January 2023 to December 2027.

If you are interested in collaborating on topics related to this project, do not hesitate to email me.


Summary

Sign languages, in addition to using the hands, also use positions and movements of other articulators (the body, the head, the mouth, the eyebrows, the eyes, and the eyelids) to convey lexical, grammatical, and prosodic information. This linguistic use of the nonmanual articulators is known as nonmanuals. Contrary to current assumptions in the field of sign linguistics, this project proposes the hypothesis that all sign languages use the same basic universal building blocks (nonmanual movements), but that each language differs in how it combines these building blocks both sequentially and simultaneously; languages also differ in the regularity, frequency, and alignment properties of the nonmanuals.

To test this hypothesis, the project will investigate the formal properties of nonmanuals in five geographically, historically, and socially diverse sign languages, using data from published naturalistic corpora of the sign languages, Computer Vision to extract measurements of the movements of the nonmanual articulators, and the statistical techniques of Non-linear Mixed Effects Modelling and Functional Data Analysis for a quantitative comparison of dynamic nonmanual contours. This will result in the first quantitative formal typology of nonmanuals grounded in naturalistic corpus data. The novel methodology proposed in this project requires testing, adjustment, and development, which constitutes an important component of the project. The resulting methodological pipeline will be a secondary output, enabling large-scale, reliable quantitative research on nonmanuals in the future.
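
As an illustration of the kind of quantitative comparison involved, here is a minimal Python sketch (not the project's actual pipeline; the function name is ours) of one preparatory step: resampling per-frame measurements of a nonmanual articulator onto a shared normalized time grid, so that dynamic contours from utterances of different durations can be compared as functional data.

    import numpy as np

    def normalize_contour(values, n_points=100):
        """Resample a per-frame measurement (e.g., eyebrow height) onto a
        common 0..1 time grid so that utterances of different durations
        can be compared or averaged as functional data."""
        values = np.asarray(values, dtype=float)
        src = np.linspace(0.0, 1.0, num=len(values))
        dst = np.linspace(0.0, 1.0, num=n_points)
        return np.interp(dst, src, values)

    # Toy example: two utterances of different lengths, same rough shape.
    utt_a = np.sin(np.linspace(0, np.pi, 37))   # 37 frames
    utt_b = np.sin(np.linspace(0, np.pi, 81))   # 81 frames
    mean_contour = (normalize_contour(utt_a) + normalize_contour(utt_b)) / 2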

Finally, the established typology of formal properties of nonmanuals in the five sign languages will serve as the basis for a cross-modal comparison between nonmanuals and prosody/intonation in spoken languages, in order to separate truly universal features of the human linguistic capacity from the effects of the visual vs. auditory modalities.


The team

  • PI: Vadim Kimmelman

  • Postdoctoral researcher: the position will be advertised shortly

  • PhD student: the position will be advertised shortly

  • PhD student: the position will be advertised shortly

  • Statistician: Jan Bulla

Collaborations with Anna Kuznetsova, Anastasiia Chizhikova, Anara Sandygulova, Medet Mukushev, Alfarabi Imashev, Anželika Teresė.


Output

Kimmelman, V. & A. Teresė (2023). Analyzing Literary Texts in Lithuanian Sign Language with Computer Vision: A Proof of Concept. In R. Galimullin & S. Touileb (eds.), CEUR Workshop Proceedings vol. 3431. https://ceur-ws.org/Vol-3431/paper5.pdf (open access)

  • In this study, we demonstrate how Computer Vision (MediaPipe) can be applied to analyze kinetic properties of literary pieces and their non-literary retellings in Lithuanian Sign Language. For example, the graph below shows that the eyebrows move more in the originals than in the retellings of the same pieces.
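
For readers curious about the mechanics, the following is a minimal Python sketch of this kind of measurement with MediaPipe Face Mesh. The landmark indices are illustrative choices (a mid-eyebrow point, an upper-eyelid point, and the two outer eye corners for scale) and should be checked against the canonical Face Mesh topology; the function name is ours, not MediaPipe's.

    import cv2
    import mediapipe as mp
    import numpy as np

    # Illustrative Face Mesh indices: mid-eyebrow, upper eyelid, and the
    # outer eye corners (used to normalize for distance to the camera).
    BROW, EYELID, L_CORNER, R_CORNER = 105, 159, 263, 33

    def eyebrow_height_per_frame(video_path):
        """Return one eyebrow-to-eye distance per video frame (NaN if no face)."""
        heights = []
        cap = cv2.VideoCapture(video_path)
        with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if not res.multi_face_landmarks:
                    heights.append(np.nan)
                    continue
                lm = res.multi_face_landmarks[0].landmark
                scale = np.hypot(lm[L_CORNER].x - lm[R_CORNER].x,
                                 lm[L_CORNER].y - lm[R_CORNER].y)
                # Image y grows downward, so a raised brow gives a larger value.
                heights.append((lm[EYELID].y - lm[BROW].y) / scale)
        cap.release()
        return np.array(heights)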

Previous publications:

Kimmelman, V., A. Imashev, M. Mukushev & A. Sandygulova. (2020). Eyebrow position in grammatical and emotional expressions in Kazakh-Russian Sign Language: A quantitative study. PLOS ONE 15(6). https://doi.org/10.1371/journal.pone.0233731 (open access)

  • In this study, we applied a Computer Vision tool (OpenPose) to quantitatively analyze eyebrow position as affected by three sentence types and three different emotions in utterances produced by ten signers. See the video below demonstrating the application of the tool and the sentence types.
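
OpenPose is a standalone tool whose per-frame JSON output (produced with its --write_json option) is typically post-processed with scripts. A minimal Python sketch, assuming the 70-point face model in which indices 17-26 are the eyebrows; the helper name and the confidence threshold are ours:

    import json
    from pathlib import Path
    import numpy as np

    def eyebrow_y_per_frame(json_dir):
        """Mean eyebrow y-coordinate per frame from OpenPose JSON output
        (one file per frame). Lower y means higher brows in image space."""
        ys = []
        for f in sorted(Path(json_dir).glob("*_keypoints.json")):
            people = json.loads(f.read_text())["people"]
            if not people:
                ys.append(np.nan)
                continue
            kp = np.array(people[0]["face_keypoints_2d"]).reshape(-1, 3)
            pts = kp[17:27]                    # eyebrow points (x, y, conf)
            pts = pts[pts[:, 2] > 0.2]         # drop low-confidence detections
            ys.append(pts[:, 1].mean() if len(pts) else np.nan)
        return np.array(ys)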


Kuznetsova, A., A. Imashev, M. Mukushev, A. Sandygulova & V. Kimmelman (2021). Using Computer Vision to Analyze Non-manual Marking of Questions in KRSL. In D. Shterionov (ed.) Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL), (pp. 49-59). Association for Machine Translation in the Americas. https://aclanthology.org/2021.mtsummit-at4ssl.6/ (open access)

  • In this study, we re-analyzed parts of the data from the previous study using a different CV tool (OpenFace) and applying Machine Learning to improve the measurements of eyebrow movements.
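
OpenFace writes its per-frame estimates, including Action Unit intensities, to a CSV file. Here is a minimal Python sketch of reading out the brow-related Action Units (AU01 inner brow raiser, AU02 outer brow raiser, AU04 brow lowerer); it only illustrates accessing the measurements, not the Machine Learning correction used in the paper, and the helper name is ours:

    import pandas as pd

    def load_brow_aus(csv_path):
        """Read an OpenFace 2.x output CSV and return brow-related
        Action Unit intensities per frame."""
        df = pd.read_csv(csv_path)
        df.columns = df.columns.str.strip()   # OpenFace pads headers with spaces
        df = df[df["success"] == 1]           # keep frames where tracking worked
        return df[["timestamp", "AU01_r", "AU02_r", "AU04_r"]]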

Kuznetsova, A., Imashev, A., Mukushev, M., Sandygulova, A., & Kimmelman, V. (2022). Functional Data Analysis of Non-manual Marking of Questions in Kazakh-Russian Sign Language. In E. Efthimiou, S.-E. Fotinea, T. Hanke, J. A. Hochgesang, J. Kristoffersen, J. Mesch, & M. Schulder (Eds.), Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources (pp. 124-131). European Language Resources Association (ELRA). https://www.sign-lang.uni-hamburg.de/lrec/pub/22024.pdf

  • In this study, we improved the statistical analysis of the data from the previous study in order to account for the dynamic nature of eyebrow and head movements. The figure below shows the application of FDA to head movement and to inner and outer eyebrow movement across different sentence types, with and without landmark registration (time alignment to sign boundaries).
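
The core idea of landmark registration can be illustrated compactly: each contour is time-warped so that its own landmarks (e.g., sign boundaries) are moved to shared target positions before curves are compared. A minimal Python sketch, not the FDA software used in the paper; the function name is ours:

    import numpy as np

    def landmark_register(values, landmarks, targets, n_points=100):
        """Piecewise-linear time warping: resample `values` so that the
        relative time points in `landmarks` (e.g., sign boundaries) land
        on the shared `targets`. Both are fractions in (0, 1)."""
        values = np.asarray(values, dtype=float)
        t = np.linspace(0.0, 1.0, num=len(values))
        out_t = np.linspace(0.0, 1.0, num=n_points)
        # Map each output time back to the trial's own timeline, anchoring
        # 0 -> 0, targets -> landmarks, 1 -> 1, then sample the signal.
        src_t = np.interp(out_t, [0.0, *targets, 1.0], [0.0, *landmarks, 1.0])
        return np.interp(src_t, t, values)

    # Example: a trial whose sign boundary fell at 40% of the utterance
    # is aligned to a shared boundary at 50%.
    registered = landmark_register(np.sin(np.linspace(0, np.pi, 60)),
                                   landmarks=[0.4], targets=[0.5])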

Chizhikova, A., & Kimmelman, V. (2022). Phonetics of Negative Headshake in Russian Sign Language: A Small-Scale Corpus Study. In E. Efthimiou, S.-E. Fotinea, T. Hanke, J. A. Hochgesang, J. Kristoffersen, J. Mesch, & M. Schulder (Eds.), Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources (pp. 29-36). European Language Resources Association (ELRA). https://www.sign-lang.uni-hamburg.de/lrec/pub/22011.pdf

  • In this study, we analyzed the phonetic properties of the headshake expressing negation in Russian Sign Language using OpenFace. An example of a headshake measured as head rotation in OpenFace is shown below.
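
As a rough illustration of how such a rotation signal can be quantified, here is a minimal Python sketch computing the peak-to-peak amplitude and oscillation rate of OpenFace's head yaw estimate (pose_Ry, in radians). The frame rate, the zero-crossing method, and the helper name are our assumptions, not the measurement procedure from the paper:

    import numpy as np
    import pandas as pd

    def headshake_stats(csv_path, fps=25):
        """Crude headshake measures from OpenFace head yaw: peak-to-peak
        amplitude, and rate estimated from zero-crossings of the
        mean-centred yaw signal (two crossings per left-right cycle)."""
        df = pd.read_csv(csv_path)
        df.columns = df.columns.str.strip()
        yaw = df.loc[df["success"] == 1, "pose_Ry"].to_numpy()
        centred = yaw - yaw.mean()
        crossings = np.sum(np.diff(np.sign(centred)) != 0)
        duration = len(yaw) / fps
        return {
            "amplitude_rad": yaw.max() - yaw.min(),  # peak-to-peak yaw
            "rate_hz": crossings / 2 / duration,
        }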