LIP READING AND PREDICTION SYSTEM BASED ON DEEP LEARNING
Abstract
Speech perception is a multimodal process: it draws on more than one sensory channel rather than on sound alone. Understanding a message can be aided by, and in some cases even depends on, lip reading, which overlays visual cues on top of auditory signals.
Lip reading is a crucial field with many uses, including biometrics, speech recognition in noisy environments, silent dictation, and enhanced hearing aids. It is a challenging research problem in computer vision, whose major goal is to track the movement of human lips in a video and recognize the corresponding textual content. Yet the subtlety of lip movements and the depth of the linguistic information they carry make lip recognition a complex problem, and this has slowed progress in the field.
Nowadays, deep learning has advanced in many sectors, giving us confidence that the task of lip recognition can be tackled. Lip reading based on deep learning typically extracts features from images and interprets them with a network model, in contrast to classical lip recognition, which relies on hand-crafted lip characteristics. The main topic of this discussion is the design of the network framework for data gathering, processing, and recognition in lip reading. In this research, we created a reliable and accurate method for lip reading. We first isolate and segment the mouth region, after which we extract several descriptors from the lip image, namely HOG, SURF, and Haar features. Lastly, we train our deep learning model using Gated Recurrent Units (GRUs).
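As a rough illustration of the first two stages, the sketch below isolates a mouth region and computes a HOG descriptor for it. It assumes OpenCV's stock frontal-face Haar cascade, a lower-third-of-the-face crop heuristic, and particular HOG settings; none of these specifics come from the paper itself, and SURF/Haar feature extraction is only noted in a comment.

```python
# Minimal sketch of mouth-region isolation plus HOG feature extraction.
# Cascade choice, crop heuristic, and HOG parameters are assumptions,
# not the authors' exact configuration.
import cv2
from skimage.feature import hog

# Stock OpenCV face cascade; a dedicated mouth cascade could be swapped in.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_mouth_hog(frame_bgr):
    """Detect a face, crop its lower third as the mouth region,
    and return a HOG descriptor for that crop (None if no face)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Heuristic: the mouth sits roughly in the lower third of the face box.
    mouth = gray[y + 2 * h // 3 : y + h, x : x + w]
    mouth = cv2.resize(mouth, (64, 32))  # normalize crop size
    # SURF keypoints (cv2.xfeatures2d in opencv-contrib) and Haar-like
    # features could be concatenated here; omitted for brevity.
    return hog(mouth, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))
```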
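The per-frame descriptors are then fed to a recurrent classifier. The following Keras sketch shows one plausible GRU arrangement; the layer widths, sequence length, feature dimension, and vocabulary size are illustrative assumptions, since the abstract does not specify them.

```python
# Sketch of a GRU-based sequence classifier over per-frame lip features.
# SEQ_LEN, FEAT_DIM, NUM_CLASSES, and layer sizes are assumed values.
import tensorflow as tf

SEQ_LEN = 30       # frames per utterance (assumed)
FEAT_DIM = 756     # HOG length for the 64x32 crop sketched above
NUM_CLASSES = 10   # vocabulary size (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, FEAT_DIM)),
    # Stacked GRUs read the per-frame features in temporal order.
    tf.keras.layers.GRU(128, return_sequences=True),
    tf.keras.layers.GRU(64),
    # Softmax over the word vocabulary.
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```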
Keywords:
Haar, HOG and SURF features; GRU-based deep learning architecture
License
Copyright (c) 2023 International Journal on Emerging Research Areas

All work published in this journal is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Similar Articles
- Minu Cherian, Elzabeth Bobus, Bala Susan Jacob, M Annapoorna, Ashwin Mathew Zacheria, Empowering Laptop Selection with Natural Language Processing Chatbot and Data Driven Filtering Assistance, International Journal on Emerging Research Areas: Vol. 4 No. 1 (2024): IJERA
- Jyothika Anil, Milan Joseph Mathew, Namitha S Mukkadan, Reshmi Raveendran, Rintu Jose, Driver Drowsiness Detection Using Smartphone Application, International Journal on Emerging Research Areas: Vol. 3 No. 1 (2023): IJERA
- Manjima M A, Soumya Anand, Partial Replacement of Bitumen by Plant Polymer Lignin in Bituminous Pavement, International Journal on Emerging Research Areas: Vol. 3 No. 1 (2023): IJERA
- K Sooraj, Yasim Khan M, A High Speed Low Power 10T SRAM with High Robustness, International Journal on Emerging Research Areas: Vol. 3 No. 1 (2023): IJERA
- Anu Rose Joy, An Overview of Fake News Detection using Bidirectional Long Short-Term Memory (BiLSTM) Models, International Journal on Emerging Research Areas: Vol. 3 No. 1 (2023): IJERA
- P Sathya Narayan, Safad Ismail, Developing an Empathetic Interaction Model for Elderly in Pandemics, International Journal on Emerging Research Areas: Vol. 3 No. 1 (2023): IJERA
- Aiman Shahul, L Pavithra, Eldhose KV, S Thasni, Dany Jennez, S Resmara, Sand Battery Technology: A Promising Solution for Renewable Energy Storage, International Journal on Emerging Research Areas: Vol. 3 No. 1 (2023): IJERA
- Shana Shaji, Jerin Jose, Jeny Jose, Global Issues of Plastics on Environment and Public Health, International Journal on Emerging Research Areas: Vol. 3 No. 1 (2023): IJERA
- C P Athira, Fathima Sithara P.A, Hand Gesture Based Home Automation, International Journal on Emerging Research Areas: Vol. 3 No. 1 (2023): IJERA
- Akhil Shaji, Albin Joshy, M J Athulkrishna, Joel Biju, Bino Thomas, College Bus Security and Management System, International Journal on Emerging Research Areas: Vol. 3 No. 1 (2023): IJERA