Exploring Explainable AI, Security and Beyond: A Comprehensive Review
Abstract
This paper examines the transformation of security through the integration of machine learning (ML) and the challenges that accompany it. It highlights how the incorporation of AI into network security brings both promise and complexity, emphasizing the need to align perceived benefits with actual capabilities. Explainable AI (XAI) emerges as a crucial tool, offering transparency while facing ongoing challenges and requiring continual advancement. Toolkits such as AI Explainability 360 demonstrate clear strengths but grapple with gaps in understanding and methodology. Specific techniques such as LIME, GNNExplainer, and object recognition in deep reinforcement learning show promise but encounter challenges including scalability and adaptability. Across these domains, understanding this multifaceted landscape becomes pivotal for leveraging AI's potential while addressing critical challenges in security and explainability.
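As a brief illustration of the kind of technique the review covers, the sketch below shows how LIME is typically applied to explain a single prediction of a tabular classifier. The dataset, model, and parameter choices here are illustrative assumptions, not drawn from the paper itself.

# A minimal sketch of applying LIME to a tabular classifier.
# The dataset (iris) and model (random forest) are assumptions
# for illustration only; they are not from the reviewed paper.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs the instance locally and fits an interpretable
# surrogate model to approximate the classifier around that point.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # per-feature contributions for this one instance

Because the explanation is fitted only around a single instance, its fidelity is local; this locality is one source of the scalability and adaptability concerns the abstract raises.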
Keywords:
Machine Learning (ML), Big Data Frameworks, Deep Learning, Explainable AI, Reinforcement Learning, Natural Language Processing
License
Copyright (c) 2023 International Journal on Emerging Research Areas

All published work in this journal is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
