Optimizing the automated extraction of audible mouse vocalizations

Date
2024
Publisher
University of Delaware
Abstract
Animals interact socially to exchange information, coordinate activities, and establish social relationships. These interactions can involve the emission of acoustic signals that facilitate communication between individuals. Mice have a large vocal repertoire, with vocalizations emitted in the audible range of human hearing and extending above it into the ultrasonic range. Higher-frequency mouse vocalizations have been characterized extensively in efforts to understand how these acoustic signals influence behavior. However, knowledge of audible vocalizations, such as their various spectrotemporal structures, remains limited. Spectrotemporal structure refers to the time and frequency content of a vocalization, which gives it a visual shape that can be observed. To address this gap in knowledge, a dataset of audible vocalizations emitted during opposite-sex mouse social interactions was first manually curated and characterized. Then, custom-written software previously optimized for ultrasonic vocalization extraction was adjusted and applied to recordings of dyadic social interactions containing audible vocalizations. Comprehensive statistical tests were used to determine significant differences across recordings. The custom-written software was able to locate and extract parts of whole vocalizations within a recording, demonstrating progress in our efforts to optimize the automated extraction of audible mouse vocalizations.
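
To illustrate the kind of spectrotemporal analysis described above, the sketch below shows one conventional way to locate candidate audible vocalization segments in a recording from spectrogram energy. It is a minimal illustration only, not the thesis's custom-written software: the file name, frequency band, spectrogram parameters, and threshold multiplier are assumptions, and standard SciPy routines stand in for the actual extraction pipeline.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    # Minimal sketch, not the thesis software: flag candidate audible
    # vocalization segments from in-band spectrogram energy.
    rate, audio = wavfile.read("recording.wav")   # hypothetical file name
    if audio.ndim > 1:
        audio = audio[:, 0]                       # use one channel if stereo

    f, t, Sxx = spectrogram(audio, fs=rate, nperseg=1024, noverlap=512)

    # Sum spectral energy in an assumed audible band (1-20 kHz).
    band = (f >= 1_000) & (f <= 20_000)
    energy = Sxx[band].sum(axis=0)

    # Time bins whose energy exceeds an assumed noise-based threshold are
    # flagged; runs of flagged bins form candidate vocalization segments.
    threshold = 5 * np.median(energy)
    active = np.concatenate(([False], energy > threshold, [False]))
    edges = np.flatnonzero(np.diff(active.astype(int)))
    segments = [(t[s], t[e - 1]) for s, e in zip(edges[::2], edges[1::2])]
    print(segments)                               # (start, end) times in seconds

A threshold-crossing rule like this tends to split a single vocalization into several detected pieces when its energy dips briefly, which mirrors the abstract's observation that the software extracted parts of whole vocalizations rather than complete calls.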
Keywords
Automated software, Behavior, Extraction, Mice, Vocalizations