Study Reveals AI in Remote Sensing Is Highly Susceptible to Attacks

Scientists uncover significant vulnerabilities in AI models used in remote sensing, highlighting the need for improved robustness to ensure reliable performance in crucial applications such as disaster response.

A recent study by researchers from Northwestern Polytechnical University and The Hong Kong Polytechnic University has exposed concerning vulnerabilities in artificial intelligence models used for object detection and image classification in remote sensing. These weaknesses could affect essential applications such as environmental monitoring, government intelligence and disaster management.

The team, led by Shaohui Mei, a professor at the School of Electronic Information at Northwestern Polytechnical University, performed a comprehensive review of existing research on deep learning (DL) models to evaluate their robustness against various disturbances.

Their findings, published in the Journal of Remote Sensing, reveal that these AI models are susceptible to both physical and digital attacks.

“We sought to address the lack of comprehensive studies on the robustness of deep learning models used in remote sensing tasks, particularly focusing on image classification and object detection. Our aim was to understand the vulnerabilities of these models to various types of noise, especially adversarial noise, and to systematically evaluate their natural and adversarial robustness,” Mei said in a news release.

The study investigated the impact of natural noise, such as rain, snow and fog, and of adversarial attacks like the Fast Gradient Sign Method (FGSM), on the performance of various DL models. The researchers also examined physical manipulations, such as attaching patches to objects to deceive the AI systems.
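For readers unfamiliar with FGSM, it perturbs each input pixel by a small step in the direction of the sign of the loss gradient. The sketch below is a minimal PyTorch illustration, not code from the study; the model, epsilon value and pixel range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Minimal FGSM sketch: x_adv = x + epsilon * sign(grad_x loss). Assumes inputs in [0, 1]."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the classification loss.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()
```

Because the perturbation is bounded by epsilon per pixel, the adversarial image typically looks unchanged to a human while degrading the model's predictions.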

“Deep learning models, despite their powerful capabilities in remote sensing applications, are susceptible to different kinds of disturbances, including adversarial attacks. It is crucial for developers and users of these technologies to be aware of these vulnerabilities and to work towards improving model robustness to ensure reliable performance in real-world conditions,” co-author Jiawei Lian, a graduate student at the School of Electronic Information at Northwestern Polytechnical University, said in the news release.

Their analysis yielded several key insights:

  • Training an adversarial attack resembles training a neural network and is influenced by factors such as training data and optimization strategy (see the sketch after this list).
  • Weaker detectors, such as YOLOv2, are more easily attacked, whereas more robust models show greater resilience.
  • Techniques such as “momentum” and “dropout” can enhance the effectiveness of attacks.
  • Physical attacks, such as placing patches in critical areas, can be as effective as digital attacks.
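As a loose illustration of the first point, an adversarial patch can be "trained" much like a network: the patch pixels are the parameters, and an optimizer ascends the model's loss over a dataset. The sketch below assumes a generic image classifier and a data loader yielding (images, labels) batches; the patch size, fixed placement, optimizer settings and use of momentum are illustrative assumptions, not details reported in the paper.

```python
import torch
import torch.nn.functional as F

def train_adversarial_patch(model, loader, patch_size=50, steps=500, lr=0.05):
    """Optimize a square patch that, pasted onto inputs, raises the model's loss."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    # SGD with momentum mirrors the observation that momentum can strengthen attacks.
    optimizer = torch.optim.SGD([patch], lr=lr, momentum=0.9)
    for _, (images, labels) in zip(range(steps), loader):
        patched = images.clone()
        # Paste the patch into a fixed corner of every image (differentiable w.r.t. the patch).
        patched[:, :, :patch_size, :patch_size] = patch
        loss = -F.cross_entropy(model(patched), labels)  # negate to maximize the loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0.0, 1.0)  # keep patch pixels in a valid image range
    return patch.detach()
```

A printed version of such a patch, attached to an object in critical areas, is the kind of physical manipulation the study found can rival digital attacks in effectiveness.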

The team aims to refine their benchmarking framework and conduct more extensive tests to further improve the robustness and security of DL models in remote sensing. Their ultimate goal is to enhance the reliability of these technologies in crucial applications, such as environmental monitoring, disaster response and urban planning.