Journal of Speech and Hearing Research | Research Article | 1 Dec 1968

Confusions Among Visually Perceived Consonants

Cletus G. Fisher, University of Iowa, Iowa City, Iowa

https://doi.org/10.1044/jshr.1104.796

Abstract

Eighteen college students with normal hearing responded to the visual perception of initial and final consonants in an English-like phonetic environment, in a test of the homopheny of the consonant sounds of English. The Multiple-choice Intelligibility Test supplied the stimulus items, but special response sheets allowed each subject to respond with any consonant judged homotypical or homorganic to the stimulus item. Correct answers were deleted from the possible responses to yield a usable number of confusions; subjects remained unaware of the deletion even after the task was completed. The resulting confusion matrices were analyzed for significant confusions among consonants, and these confusions were grouped into mutually exclusive classes termed visemes. The results tend to support previously published linguistic groupings of homophenous sounds rather than the classical listing from the developers of speechreading methodology. Variations from the linguistic groupings are explained in terms of the addition of minimal phonetic redundancy.
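Purely as an illustration of the kind of analysis the abstract describes — tallying a confusion matrix and merging mutually confused consonants into exclusive viseme classes — here is a minimal Python sketch. The trial data and the raw count threshold are invented for demonstration; Fisher's actual procedure tested confusions for statistical significance rather than applying a fixed cutoff.

```python
from collections import defaultdict

# Hypothetical (stimulus, response) pairs from a lipreading test.
# These counts are invented to illustrate the method, not to
# reproduce Fisher's data.
trials = [
    ("p", "b"), ("p", "m"), ("b", "p"), ("b", "m"), ("m", "p"),
    ("f", "v"), ("v", "f"), ("f", "v"), ("t", "d"), ("d", "t"),
]

# Tally the confusion matrix: confusions[stimulus][response] = count.
confusions = defaultdict(lambda: defaultdict(int))
for stimulus, response in trials:
    confusions[stimulus][response] += 1

def viseme_classes(confusions, threshold=1):
    """Group consonants into mutually exclusive classes (visemes) by
    transitively merging any pair confused at least `threshold` times.
    Uses union-find with path halving; the threshold stands in for a
    proper significance test."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for stim, resps in confusions.items():
        for resp, count in resps.items():
            if count >= threshold:
                union(stim, resp)

    # Collect each consonant under its class representative.
    groups = defaultdict(set)
    for phone in parent:
        groups[find(phone)].add(phone)
    return [sorted(g) for g in groups.values()]

print(viseme_classes(confusions, threshold=1))
```

With the toy data above, the bilabials {p, b, m}, the labiodentals {f, v}, and the alveolars {t, d} each collapse into a single class — mirroring how visually similar places of articulation form visemes.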