Ten students of the Helsinki University of Technology were studied. All reported normal hearing and normal or corrected-to-normal vision. None of the subjects had earlier experience with SWS stimuli. Two subjects were excluded from the subject pool because they reported perceiving the SWS stimuli as speech before being instructed about their speech-like nature.

Stimuli

Four auditory stimuli (natural /omso/ and /onso/ and their sine wave replicas) and digitized video clips of a male face articulating …

Experiment 2

In Experiment 1, the different tasks were always performed in the same order, so that the non-speech mode always preceded the speech mode for the SWS stimuli. The reason for this was that once the subject "enters speech mode", it is impossible to hear the SWS stimuli as non-speech. However, this procedure might have created a learning effect, so that subjects might have become more used to the SWS stimuli.

… Then at least part of the large integration effect observed with the incongruent stimuli could have …

Discussion

Our results demonstrate that acoustic and visual speech were integrated strongly only when the perceiver interpreted the acoustic stimuli as speech. If the SWS stimuli had always been processed in the same way, the influence of visual speech should have been the same in both the speech and non-speech modes. This result does not depend on the amount of practice with listening to SWS stimuli, as confirmed by the results obtained in Experiment 2. We suggest that when SWS stimuli were perceived as …

Acknowledgements

… was supported by the European Union Research Training Network "Multi-modal Human–Computer Interaction". Financial support from the Academy of Finland to the Research Centre for Computational Science and Engineering and to MS is also acknowledged. We thank Ms Reetta Korhonen for help in data collection and Riitta Hari (Low Temperature Lab, HUT) for valuable comments on the manuscript.

Age-related hearing loss is a common disorder with significant consequences for quality of life. This study assessed the Hearing Handicap Inventory for the Elderly (HHIE) and cognition (Mini-Mental State Exam, MMSE; Logical Memory, LM; Symbol Search, SS; Stroop Test, ST; and Mental Rotation, MR) to investigate which cognitive domains are most strongly involved in hearing self-assessment in older adults. The HHIE and cognitive measures were administered to 196 older adults (average age = 67.7 ± 4.3 years; 56 male, 140 female) without cognitive impairment and without severe hearing handicap. We conducted permutation tests of multiple regression analyses of the standardized scores on the HHIE and the cognitive tests. HHIE scores showed significant negative associations with processing-speed performance on the SS (standardized β = −0.095, adjusted p = 0.04) and with visuospatial performance on the MR (standardized β = −0.145, adjusted p = 0.04), but no association with either episodic-memory performance on the LM (standardized β = 0.060, adjusted p = 0.22) or executive-function performance on the ST (standardized β = 0.053, adjusted p = 0.32). Older adults reporting higher hearing handicap should thus be monitored for poorer cognitive function in processing speed and visuospatial abilities.
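The excerpt does not spell out how the permutation tests of the multiple regression were run. The sketch below is a minimal illustration of one standard approach, shuffling the outcome to build an empirical null distribution for each standardized coefficient; the variable names and simulated data are hypothetical, not from the study, and the reported "adjusted p" values imply an additional multiple-comparison correction that is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def standardized_betas(X, y):
    """OLS slopes after z-scoring predictors and outcome (standardized betas)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    design = np.column_stack([np.ones(len(yz)), Xz])   # intercept column
    coef, *_ = np.linalg.lstsq(design, yz, rcond=None)
    return coef[1:]                                    # drop the intercept

def permutation_test(X, y, n_perm=10_000):
    """Two-sided permutation p-values for each standardized beta.

    Shuffling y breaks any association with X, so the betas from
    permuted fits form a null distribution for the observed ones.
    """
    observed = standardized_betas(X, y)
    null = np.empty((n_perm, X.shape[1]))
    for i in range(n_perm):
        null[i] = standardized_betas(X, rng.permutation(y))
    # +1 in numerator and denominator keeps p-values away from exactly 0
    p = ((np.abs(null) >= np.abs(observed)).sum(axis=0) + 1) / (n_perm + 1)
    return observed, p

# Hypothetical demo: four simulated cognitive scores (SS, MR, LM, ST)
# predicting a simulated HHIE score in n = 196 "participants".
n = 196
X = rng.normal(size=(n, 4))
y = -0.10 * X[:, 0] - 0.15 * X[:, 1] + rng.normal(size=n)
betas, pvals = permutation_test(X, y)
for name, b, p in zip(["SS", "MR", "LM", "ST"], betas, pvals):
    print(f"{name}: standardized beta = {b:+.3f}, permutation p = {p:.3f}")
```

Permuting the raw outcome is the simplest scheme; when nuisance covariates such as age must be preserved, residual-based schemes (e.g., Freedman–Lane) are often preferred.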