Machine learning application for a case study of synesthesia
Author: Bartulienė, Raminta
Date: 2021-11-26
Poster presentations
ISBN 978-609-07-0679-4 (digital PDF)
Synesthesia is a neurological condition in which one type of sensory stimulus triggers another type of sensory sensation. In this study, we investigated a case of sound-to-colour synesthesia using two methodologies: the classical method of diagnosing synesthesia authenticity, the test of genuineness (TOG), and a multilayer feed-forward neural network applied to the classification of voice-induced colours. The research subject, SB, is a 22-year-old female with partial blindness. She can see only grey silhouettes but claims that, after communicating with a person, their grey silhouette develops a person-specific colour. To investigate SB's case, interviews were conducted with 39 participants (19 males and 20 females). SB talked with each participant until their silhouette developed a colour. The participants' voices were recorded using two-channel audio equipment at a 44.1 kHz sampling rate. To test the genuineness of the case, the TOG was applied: SB was presented with the same stimuli (the participant voice recordings) a year later, without prior warning of a retest, and a female of similar age and education served as a control. The control was instructed to assign random colours to the audio recordings and to memorise them for a retest two weeks later. The TOG confirmed the genuineness of SB's synesthesia. To investigate SB's synesthesia further, a multilayer feed-forward neural network was designed to classify the synesthetically evoked colours. We hypothesised that if SB's synesthesia is indeed sound-to-colour synesthesia, then the neural network should be able to classify different voice signals by colour. A random forest machine-learning algorithm was also used to select relevant features. The neural network was trained on the group with the most data: female voices assigned the colours white and pink. 68 audio features were extracted from the recordings. The trained neural network succes[...].
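The abstract does not specify the toolkit, network architecture, or feature set used, so the following is only a minimal sketch of the kind of classifier it describes: a one-hidden-layer feed-forward network trained by gradient descent to separate 68-dimensional audio-feature vectors into two colour classes (white vs. pink). The data here are synthetic stand-ins, and the layer size, learning rate, and labels are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: 40 voice recordings, each
# described by 68 audio features, labelled 0 (white) or 1 (pink).
n_samples, n_features = 40, 68
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # separable toy labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden tanh layer, logistic output; sizes are illustrative.
hidden = 16
W1 = rng.normal(scale=0.1, size=(n_features, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=hidden)
b2 = 0.0
lr = 0.5

for epoch in range(500):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)      # hidden activations
    p = sigmoid(h @ W2 + b2)      # predicted P(colour = pink)
    # Backward pass: gradients of the mean cross-entropy loss.
    d_out = (p - y) / n_samples
    dW2 = h.T @ d_out
    db2 = d_out.sum()
    d_h = np.outer(d_out, W2) * (1.0 - h ** 2)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)
    # Plain gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In the study itself, a random forest was first used to rank the 68 extracted features; an analogous step here would simply drop the columns of `X` with low importance scores before training.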