Deep neural networks have recently seen a significant surge in adoption across Artificial Intelligence technologies, driven by the development of powerful computer systems. However, growing security concerns have shown that they are susceptible to dangerous attacks. Adversarial examples were initially discovered in the field of computer vision (CV), where systems were deceived by small alterations to their inputs. They also occur in the field of natural language processing (NLP). Several approaches have been put forward to address this gap and to handle a wide variety of NLP applications. We give an organized survey of these works in this paper. Text is distinct and meaningful in nature, in contrast to images, which makes the creation of adversarial attacks much more challenging. In this study, we present a thorough analysis of adversarial attacks and countermeasures in the textual domain. To make the survey self-contained, we examine related important works in computer vision and cover the fundamentals of NLP. We conclude the survey by exploring unresolved concerns, aiming to close the gap between current advances and increasingly powerful adversarial attacks on NLP DNNs.
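Because textual perturbations must stay discrete and meaningful, most textual attacks search over word-level substitutions rather than adding pixel-level noise. Below is a minimal, hedged Python sketch of that pattern; the greedy_synonym_attack function, the toy keyword classifier, and the synonym table are illustrative assumptions, not an attack from the surveyed literature.

import itertools

def greedy_synonym_attack(sentence, synonyms, classify, max_swaps=2):
    """Toy greedy word-substitution attack, a common pattern in textual
    adversarial-example work: swap words for synonyms until the (hypothetical)
    classifier's label flips. Real attacks add semantic and grammatical
    constraints; this only illustrates why discrete text is harder to perturb
    than pixels."""
    words = sentence.split()
    original = classify(" ".join(words))
    candidates = [(i, s) for i, w in enumerate(words) for s in synonyms.get(w, [])]
    for swaps in range(1, max_swaps + 1):
        for combo in itertools.combinations(candidates, swaps):
            if len({i for i, _ in combo}) < swaps:
                continue                      # do not swap the same position twice
            perturbed = list(words)
            for i, s in combo:
                perturbed[i] = s
            adv = " ".join(perturbed)
            if classify(adv) != original:
                return adv                    # adversarial example found
    return None

# Hypothetical sentiment "classifier" and synonym table, for illustration only.
classify = lambda s: "positive" if "great" in s or "good" in s else "negative"
synonyms = {"great": ["fine", "decent"], "movie": ["film"]}
print(greedy_synonym_attack("a great movie", synonyms, classify))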
Human civilization is built on agriculture, and one of the biggest threats to agricultural productivity is the presence of wild animals on farmland. Animals encroach on some farmland, causing significant crop l...
Longitudinal medical image processing is a significant task for understanding the dynamic changes of disease by acquiring and comparing image series over time, providing insights into how conditions evolve and enabling more ...
With the rapid advancements in technology and science, optimization theory and algorithms have become increasingly important. A wide range of real-world problems is classified as optimization challenges, and meta-heuristic algorithms have shown remarkable effectiveness in solving these challenges across diverse domains, such as machine learning, process control, and engineering design, showcasing their capability to address complex optimization problems. The Stochastic Fractal Search (SFS) algorithm is one of the most popular meta-heuristic optimization methods inspired by the fractal growth patterns of natural materials. Since its introduction by Hamid Salimi in 2015, SFS has garnered significant attention from researchers and has been applied to diverse optimization problems across multiple disciplines. Its popularity can be attributed to several factors, including its simplicity, practical computational efficiency, ease of implementation, rapid convergence, high effectiveness, and ability to address single- and multi-objective optimization problems, often outperforming other established algorithms. This review paper offers a comprehensive and detailed analysis of the SFS algorithm, covering its standard version, modifications, hybridization, and multi-objective implementations. The paper also examines several SFS applications across diverse domains, including power and energy systems, image processing, machine learning, wireless sensor networks, environmental modeling, economics and finance, and numerous engineering challenges. Furthermore, the paper critically evaluates the SFS algorithm's performance, benchmarking its effectiveness against recently published meta-heuristic algorithms. In conclusion, the review highlights key findings and suggests potential directions for future developments and modifications of the SFS algorithm.
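As a rough illustration of the fractal-growth idea behind SFS, the following is a minimal Python sketch of a Gaussian-walk diffusion step with a generation-dependent step size. The function name sfs_diffusion_step, the exact step-size scaling, and the toy sphere benchmark are assumptions for illustration; they simplify the published algorithm's diffusion and updating processes.

import numpy as np

def sfs_diffusion_step(points, best, generation, lower, upper, rng=None):
    """One illustrative diffusion step in the spirit of Stochastic Fractal Search.

    Each candidate point performs a Gaussian walk whose spread shrinks with the
    generation counter, imitating fractal-like growth around promising regions.
    This is a simplified sketch, not the exact update rules of the published SFS."""
    rng = np.random.default_rng() if rng is None else rng
    # Step size decays as the search progresses.
    scale = abs(np.log(generation) / generation) if generation > 1 else 1.0
    sigma = scale * np.abs(points - best)          # per-coordinate spread
    eps, eps2 = rng.random(points.shape), rng.random(points.shape)
    # Gaussian walk around the best point plus a drift toward it.
    new_points = rng.normal(best, sigma + 1e-12) + (eps * best - eps2 * points)
    return np.clip(new_points, lower, upper)

# Toy usage on the sphere function: keep a walk only if it improves the point.
rng = np.random.default_rng(0)
dim, pop, lower, upper = 5, 20, -5.0, 5.0
pts = rng.uniform(lower, upper, size=(pop, dim))
fitness = lambda x: np.sum(x ** 2, axis=-1)
for g in range(1, 101):
    best = pts[np.argmin(fitness(pts))]
    cand = sfs_diffusion_step(pts, best, g, lower, upper, rng)
    improved = fitness(cand) < fitness(pts)
    pts[improved] = cand[improved]
print("best value:", fitness(pts).min())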
This paper investigates the optimization and deployment of the YOLOv7 deep learning model on the NVIDIA Jetson Nano, an AI-focused edge computing platform, for object detection in various computer vision applications. The work...
Whilst Deep Neural Networks (DNNs) have been developing swiftly, most of the research has focused on videos based on RGB frames. RGB data has traditionally been optimised for human vision and is a highly re-elaborated and interpolated version of the collected raw data. In fact, the sensor collects one light intensity value per pixel, but an RGB frame contains three values, for the red, green, and blue colour channels. This conversion to RGB requires computational resources, time, and power, and triples the amount of output data. This work investigates DNN-based detection using Bayer frames (for training and evaluation) generated from a benchmark automotive dataset (the KITTI dataset). A Deep Neural Network (DNN) is deployed in an unmodified form, and also modified to accept only single-channel frames, such as Bayer frames. Eleven different re-trained versions of the DNN are produced and cross-evaluated across different data formats. The results demonstrate that the selected DNN has the same accuracy when evaluating RGB or Bayer data, without significant degradation in perception (the variation in Average Precision is <1%). Moreover, the colour filter array position and the colour correction matrix do not seem to contribute significantly to the DNN's performance. This work demonstrates that Bayer data can be used for object detection in automotive applications without significant loss of perception performance, allowing for more efficient sensing-perception systems.
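One plausible way to obtain single-channel Bayer frames from an RGB dataset such as KITTI is to re-mosaic each frame, keeping only the colour sample that the colour filter array would have captured at each pixel. The sketch below assumes an RGGB pattern and an rgb_to_bayer helper of our own naming; it ignores inverse colour correction and gamma, which the paper's actual pipeline may handle differently.

import numpy as np

def rgb_to_bayer(rgb, pattern="RGGB"):
    """Re-mosaic an RGB frame (H, W, 3) into a single-channel Bayer frame (H, W).

    Each output pixel keeps only the colour sample that the given CFA pattern
    would have captured at that location. Illustrative approximation only."""
    h, w, _ = rgb.shape
    bayer = np.zeros((h, w), dtype=rgb.dtype)
    chan = {"R": 0, "G": 1, "B": 2}            # map CFA letters to channel indices
    cell = np.array([[chan[pattern[0]], chan[pattern[1]]],
                     [chan[pattern[2]], chan[pattern[3]]]])
    for dy in range(2):
        for dx in range(2):
            bayer[dy::2, dx::2] = rgb[dy::2, dx::2, cell[dy, dx]]
    return bayer

# Example on a dummy 4x4 frame; real use would load a KITTI PNG instead.
frame = (np.arange(4 * 4 * 3).reshape(4, 4, 3) % 255).astype(np.uint8)
print(rgb_to_bayer(frame).shape)  # (4, 4): one value per pixel, as a sensor outputs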
Geological hazards in densely vegetated mountainous regions are challenging to detect due to terrain concealment and the limitations of traditional visualization methods. This study introduces the LiDAR image highlighting algorithm (LIHA), a novel approach for enhancing micro-topographical features in digital elevation models (DEMs) derived from airborne LiDAR data. By analogizing terrain profiles to non-stationary spectral signals, LIHA applies locally estimated scatterplot smoothing (Loess smoothing), wavelet decomposition, and high-frequency component amplification to emphasize subtle features such as landslide boundaries, cracks, and gullies. The algorithm was validated using the Mengu landslide case study, where edge detection analysis revealed a 20-fold increase in identified micro-topographical features (from 1907 to 37,452) after enhancement. Quantitative evaluation demonstrated LIHA's effectiveness in improving both human interpretation and automated detection accuracy. The results highlight LIHA's potential to advance early geological hazard identification and mitigation, particularly when integrated with machine learning for future applications. This work bridges signal processing and geospatial analysis, offering a reproducible framework for high-precision terrain feature extraction in complex environments.
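To make the profile-as-signal idea concrete, here is a hedged Python sketch of the three stages named in the abstract, applied to a single 1D DEM profile: Loess smoothing (via statsmodels), wavelet decomposition (via PyWavelets), and amplification of the high-frequency detail coefficients. The gain, wavelet family, decomposition level, and the highlight_profile function itself are illustrative assumptions rather than LIHA's published parameters.

import numpy as np
import pywt                                          # PyWavelets
from statsmodels.nonparametric.smoothers_lowess import lowess

def highlight_profile(elevation, frac=0.1, wavelet="db4", level=3, gain=3.0):
    """Illustrative enhancement of one DEM terrain profile in the spirit of LIHA.

    1) Loess-smooth the profile to estimate the broad terrain trend,
    2) wavelet-decompose the residual (treated as a non-stationary signal),
    3) amplify the high-frequency detail coefficients,
    4) reconstruct and add the enhanced detail back onto the trend."""
    x = np.arange(len(elevation), dtype=float)
    trend = lowess(elevation, x, frac=frac, return_sorted=False)
    residual = elevation - trend
    coeffs = pywt.wavedec(residual, wavelet, level=level)
    # coeffs[0] is the coarse approximation; the rest are detail bands.
    coeffs = [coeffs[0]] + [gain * d for d in coeffs[1:]]
    enhanced = pywt.waverec(coeffs, wavelet)[: len(elevation)]
    return trend + enhanced

# Toy usage: a synthetic slope with a small "crack" superimposed on it.
profile = np.linspace(100, 150, 512) + np.where(np.arange(512) == 256, -0.5, 0.0)
print(highlight_profile(profile)[250:260])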
This paper presents a novel approach for enhancing vehicle safety and navigation through an integrated system for lane detection, vehicle alignment, and automatic braking using visual feedback. Our proposed system emp...
The cultivation of mangoes contributes significantly to the economy and food security of many tropical and subtropical regions. However, mango trees are susceptible to various leaf diseases that can significantly affe...
Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints. However, NeRF's computational requirements are prohibitive for real-time applications: rendering views from a trained NeRF requires querying a multilayer perceptron (MLP) hundreds of times per ray. We present a method to train a NeRF, then precompute and store (i.e., "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering on commodity hardware. To achieve this, we introduce 1) a reformulation of NeRF's architecture and 2) a sparse voxel grid representation with learned feature vectors. The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact (averaging less than 90 MB per scene), and can be rendered in real-time (higher than 30 frames per second on a laptop GPU). Actual screen captures are shown in our video.
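The core of the baking step is replacing hundreds of MLP queries per ray with cheap lookups into a sparse grid of precomputed values. The toy Python sketch below stores baked density, diffuse colour, and a feature vector per occupied voxel in a dictionary; the real SNeRG uses a block-sparse 3D texture atlas, trilinear interpolation, and a tiny MLP for view-dependent effects, so the class and method names here are assumptions for illustration.

import numpy as np

class SparseVoxelGrid:
    """Toy sparse grid holding precomputed (density, diffuse RGB, feature) per voxel.

    Only occupied voxels are stored; empty space returns zero density so a
    renderer can skip it. This shows the lookup idea, not the SNeRG data layout."""

    def __init__(self, resolution, scene_min, scene_max):
        self.res = resolution
        self.min = np.asarray(scene_min, dtype=float)
        self.size = np.asarray(scene_max, dtype=float) - self.min
        self.voxels = {}                       # (i, j, k) -> dict of baked values

    def _index(self, xyz):
        ijk = np.floor((xyz - self.min) / self.size * self.res).astype(int)
        return tuple(np.clip(ijk, 0, self.res - 1))

    def bake(self, xyz, density, rgb, feature):
        self.voxels[self._index(xyz)] = {"sigma": density, "rgb": rgb, "feat": feature}

    def query(self, xyz):
        # Nearest-voxel lookup; a renderer would interpolate and skip empty voxels.
        return self.voxels.get(self._index(xyz),
                               {"sigma": 0.0, "rgb": np.zeros(3), "feat": None})

# Bake one dummy sample and query a point along a ray.
grid = SparseVoxelGrid(resolution=256, scene_min=[-1, -1, -1], scene_max=[1, 1, 1])
grid.bake(np.array([0.1, 0.2, -0.3]), density=5.0,
          rgb=np.array([0.8, 0.2, 0.1]), feature=np.zeros(4))
print(grid.query(np.array([0.1, 0.2, -0.3]))["sigma"])  # 5.0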