
Securing AI: Understanding and Defending Against Adversarial Attacks in Deep Learning Systems

Authors

  • Nikita Niteen

    Amal Jyothi College of Engineering
    Author
  • Juby Mathew

    Amal Jyothi College of Engineering
    Author

Abstract

This review paper surveys security vulnerabilities in deep learning systems, focusing on adversarial attacks and their impact across diverse AI applications. It examines vulnerabilities in neural network models, reinforcement learning policies, Natural Language Processing (NLP) classifiers, cloud-based image detectors, and deep convolutional neural networks (CNNs). The paper illuminates techniques such as adversarial example generation and their applicability in exploiting vulnerabilities in various scenarios, underlining the need for robust defense mechanisms. Additionally, it explores methodologies such as influence functions and outlier detection to improve model understanding, support debugging, and strengthen defenses against adversarial attacks. The paper concludes by emphasizing the importance of addressing these vulnerabilities and of further research into securing AI systems against potential threats.
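
As a minimal illustration of the adversarial example generation the paper surveys, the sketch below applies the Fast Gradient Sign Method (FGSM) to an image classifier; the PyTorch framing, the model, and the epsilon value are illustrative assumptions, not details drawn from the paper itself.

    # Minimal FGSM sketch: nudge an input in the direction that increases the loss,
    # so a correctly classified image becomes misclassified. The classifier `model`
    # and epsilon are hypothetical; this only illustrates the attack family discussed.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, epsilon=0.03):
        """Return an adversarially perturbed copy of image batch `x`."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)  # loss against the true label
        loss.backward()                          # gradient of the loss w.r.t. the input
        x_adv = x + epsilon * x.grad.sign()      # single signed-gradient step
        return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range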

Keywords:

Deep Learning Security, Vulnerabilities in AI Systems, Neural Network Vulnerability, Reinforcement Learning Vulnerabilities, Adversarial Examples, Defense Mechanisms in Deep Learning, Natural Language Processing (NLP) Security, Cloud-Based Image Detectors, Convolutional Neural Networks (CNNs) Vulnerabilities, Machine Learning Security Risks, Adversarial Examples in Physical World, Interpretability of Deep Neural Networks, Obfuscated Gradients, Defense Strategies against Adversarial Attacks

Published

29-12-2023

Issue

Vol. 3 No. 2 (2023)

Section

Articles

How to Cite

[1] N. Niteen and J. Mathew, “Securing AI: Understanding and Defending Against Adversarial Attacks in Deep Learning Systems”, IJERA, vol. 3, no. 2, Dec. 2023, Accessed: Aug. 16, 2025. [Online]. Available: https://ijera.in/index.php/IJERA/article/view/6
