L2 Pronunciation Intelligibility in Google Voice-to-text Applications

Authors

  • Michael Olayinka GBADEGESIN, PhD, Department of Languages and Literature, Lead City University, Ibadan
  • Deborah Adejumoke ADEJOBI, Department of Languages and Literature, Lead City University, Ibadan

Keywords:

Artificial Intelligence, Human Voice-to-text, Communication Breakdown, Intelligible Pronunciation, L2

Abstract

There is an increase in the application of new technologies in all spheres of human endeavour. Industry 4.0 is one of the most recently birthed industrial revolutions, through which machines understand human speech, think, and comprehend human intentions. It structures critical components for intelligent vehicles, intelligent offices, intelligent service robots, intelligent industries, and so on, and thereby furthers the intelligent ecology of the Internet of Things. At the centre of all these is human speech, which is used to give orders to and ask questions of Artificial Intelligence (AI) and robots. Previous studies on AI and linguistics have discussed the use of AI in the language classroom, where AI models pronunciation to enhance the pronunciation performance of second language learners. This study examines Automatic Speech Recognition (ASR), using Word Error Rate (WER) to measure how intelligible human pronunciation is to the machine. It investigates whether communication breakdown occurs when pronunciation is not intelligible. To achieve this, 30 L2 speakers of English were selected from among Igbo, Hausa and Yoruba speakers to read 135 crafted words, arranged into 5 sentences, into the Google ASR application; these readings constituted the primary data. The secondary data were drawn from journal articles, textbooks and the Internet. The results showed that the pronunciation model used to develop the application makes provision for several L2 varieties of English, in keeping with the reality of the new World Englishes.
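For readers unfamiliar with the metric, the sketch below shows a standard way WER is computed: the word-level edit distance (substitutions, deletions and insertions) between a reference sentence and an ASR transcript, divided by the number of words in the reference. It is an illustrative example only, not the study's actual scoring pipeline, and the sample sentences are hypothetical.

```python
# Illustrative sketch of a standard Word Error Rate (WER) computation.
# Not the exact procedure used in the study; sample sentences are hypothetical.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()

    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Hypothetical prompt sentence vs. hypothetical ASR transcript:
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))  # 1 error / 6 words ≈ 0.167
```

A lower WER between the crafted prompt and the Google ASR transcript indicates that the speaker's pronunciation was more intelligible to the machine.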

Published

2023-11-08
