Abstract
Drones are becoming increasingly popular owing to their usefulness in smart cities and other civilian applications. Organized into networks, they can gather various kinds of data, including images and videos with multimedia characteristics, and forward them to processing centers for further handling. They have also become a new target for a variety of attacks, such as GPS spoofing, denial of service, and false data injection, so it is vital to develop new systems and protection mechanisms against these threats. In this work, we highlight the risk associated with False Data Injection (FDI) and present a deep learning-based approach for detecting it. The injection of misleading data into the images gathered by drones is a serious and potent attack that can significantly alter the final judgment made by the processing center. To thwart this attack, our strategy uses deep learning for image analysis and classification. We first scale the incoming image to the classifier's input size using Nearest Neighbor Interpolation (NNI), then feed it to a Convolutional Neural Network (CNN) for classification. Finally, we use the Mahalanobis Distance to compare each class of the classification results to a neighborhood. Numerical results on the current dataset show that our solution performs well irrespective of image size, with an accuracy of 97.71%, a precision of 96.69%, a recall of 94.33%, and an F-score of 0.941. © 2024 IEEE.
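The two non-learned stages of the pipeline described above (NNI resizing to the classifier's input size, and a Mahalanobis Distance check against a class distribution) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the image size, feature dimension, class mean, and inverse covariance below are placeholder assumptions, and the CNN itself is stubbed out with flattened pixels.

```python
import numpy as np

def nni_resize(img, out_h, out_w):
    """Nearest Neighbor Interpolation: pick the nearest source pixel
    for each target pixel, scaling img to (out_h, out_w)."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows][:, cols]

def mahalanobis(x, class_mean, cov_inv):
    """Mahalanobis distance of feature vector x from a class
    distribution with the given mean and inverse covariance."""
    d = x - class_mean
    return float(np.sqrt(d @ cov_inv @ d))

# --- hypothetical usage (shapes and values are assumptions) ---
img = np.random.rand(100, 120)        # incoming drone image (grayscale)
resized = nni_resize(img, 64, 64)     # scaled to match the CNN input size

# Placeholder for CNN-derived features: here just a few flattened pixels.
features = resized.flatten()[:8]
class_mean = np.zeros(8)              # mean of the predicted class's neighborhood
cov_inv = np.eye(8)                   # inverse covariance of that neighborhood
score = mahalanobis(features, class_mean, cov_inv)
```

A large distance would flag the image as inconsistent with the predicted class's neighborhood, which is the intuition behind using the Mahalanobis Distance as an injection check.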