1 Introduction On-device deep learning (DL) on mobile and embedded IoT devices drives various applications [1], such as robotic image recognition [2] and drone swarm classification [3]. Efficient local data processing preserves privacy, enhances responsiveness, and saves resources. However, current on-device DL relies on predefined patterns, leading to accuracy and efficiency limitations. It is difficult to provide feedback on data processing performance during the data acquisition stage, as processing typically occurs after data acquisition.
Bat Algorithm (BA) is a nature-inspired metaheuristic search algorithm designed to efficiently explore complex problem spaces and find near-optimal solutions. The algorithm is inspired by the echolocation behavior of bats ...
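The standard BA update rules are compact enough to sketch. Below is a minimal Python sketch of the usual textbook formulation; the frequency range, loudness decay alpha, pulse-rate growth gamma, the bounds, and the sphere objective are all illustrative assumptions, not values from this paper.

```python
import numpy as np

def bat_algorithm(obj, dim=10, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9,
                  lb=-5.0, ub=5.0, seed=0):
    """Minimal Bat Algorithm sketch (standard echolocation-inspired updates)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))              # velocities
    A = np.ones(n_bats)                      # loudness per bat
    r0 = rng.uniform(0.0, 1.0, n_bats)       # target pulse rates
    r = np.zeros(n_bats)                     # pulse rates grow toward r0
    fit = np.array([obj(xi) for xi in x])
    best_i = int(fit.argmin())
    best, best_fit = x[best_i].copy(), fit[best_i]

    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            # Frequency, velocity, and position updates (echolocation model)
            f = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * f
            cand = np.clip(x[i] + v[i], lb, ub)
            # Local random walk around the current best solution
            if rng.random() > r[i]:
                cand = np.clip(best + 0.01 * rng.normal(size=dim) * A.mean(),
                               lb, ub)
            cand_fit = obj(cand)
            # Accept improvements, shrinking loudness and growing pulse rate
            if cand_fit <= fit[i] and rng.random() < A[i]:
                x[i], fit[i] = cand, cand_fit
                A[i] *= alpha
                r[i] = r0[i] * (1.0 - np.exp(-gamma * t))
            if fit[i] < best_fit:
                best, best_fit = x[i].copy(), fit[i]
    return best, best_fit

# Example: minimize the sphere function
best, val = bat_algorithm(lambda z: float(np.sum(z ** 2)))
```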
Background: The synthesis of reversible logic has gained prominence as a crucial research area, particularly in the context of post-CMOS computing devices, notably quantum computing. Objective: To implement the bitonic ...
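For reference, the structure targeted here is the classical bitonic sorting network. The Python sketch below shows its compare-exchange schedule; it is a classical rendition for intuition only, not the paper's reversible or quantum circuit.

```python
def bitonic_sort(a, ascending=True):
    """Classical bitonic sorting network (input length must be a power of two)."""
    n = len(a)
    assert n & (n - 1) == 0, "bitonic networks assume power-of-two input sizes"
    a = list(a)
    k = 2
    while k <= n:          # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:      # compare-exchange distance within this merge stage
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    up = (i & k) == 0  # sort direction alternates per block
                    if (a[i] > a[partner]) == (up == ascending):
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([7, 3, 0, 5, 6, 2, 4, 1]))  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```

Each compare-exchange is exactly the primitive a reversible implementation must realize without erasing information, which is why this network maps naturally onto reversible gate libraries.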
A flood is a significant damaging natural calamity that causes loss of life and property. Much work on the construction of flood prediction models has been intended to reduce risks, suggest policies, reduce mortality, and limit property damage caused by floods. The massive amount of data generated by social media platforms such as Twitter opens the door to flood analysis. Because of the real-time nature of Twitter data, some government agencies and authorities have used it to track natural catastrophe events in order to mount a more rapid rescue response. However, due to the short length of tweets, it is difficult to construct a perfect prediction model for determining floods. Machine learning (ML) and deep learning (DL) approaches can be used to statistically develop flood prediction models. At the same time, the vast volume of tweets necessitates the use of a big data analytics (BDA) tool for flood prediction. In this regard, this work provides an optimal deep learning-based flood forecasting model with big data analytics (ODLFF-BDA) based on Twitter data. The suggested ODLFF-BDA technique intends to anticipate the existence of floods using tweets in a big data environment. The ODLFF-BDA technique comprises data pre-processing to convert the input tweets into a usable format. In addition, a Bidirectional Encoder Representations from Transformers (BERT) model is used to generate emotive contextual embeddings from tweets. Furthermore, a gated recurrent unit (GRU) with a Multilayer Convolutional Neural Network (MLCNN) is used to extract local features and predict the flood. Finally, an Equilibrium Optimizer (EO) is used to fine-tune the hyper-parameters of the GRU and MLCNN models in order to improve prediction outcomes. The memory usage is kept below 3.5 MB, lower than that of the other algorithms compared. The ODLFF-BDA technique's performance was validated using a benchmark Kaggle dataset, and the findings showed that it significantly outperformed other recent approaches.
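As a rough illustration of the BERT-to-GRU-to-CNN pipeline described above, the PyTorch sketch below stacks a bidirectional GRU and a small multilayer CNN head over stand-in BERT token embeddings. All layer sizes are assumptions (the abstract does not fix them), and the Equilibrium Optimizer tuning stage is omitted.

```python
import torch
import torch.nn as nn

class GruMlcnnClassifier(nn.Module):
    """Sketch of a GRU + multilayer CNN head over BERT token embeddings.

    Layer sizes are illustrative assumptions; the EO hyper-parameter
    tuning step from the paper is not shown here.
    """
    def __init__(self, emb_dim=768, hidden=128, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.convs = nn.Sequential(            # "multilayer CNN" over time
            nn.Conv1d(2 * hidden, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),           # collapse the time dimension
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, bert_embeddings):        # (batch, seq_len, emb_dim)
        h, _ = self.gru(bert_embeddings)       # (batch, seq_len, 2*hidden)
        z = self.convs(h.transpose(1, 2))      # (batch, 64, 1)
        return self.fc(z.squeeze(-1))          # flood / no-flood logits

# Example with random tensors standing in for BERT tweet embeddings
logits = GruMlcnnClassifier()(torch.randn(4, 32, 768))
```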
The nonlinear filtering problem has long been an active research topic in both academia and industry due to its ever-growing theoretical importance and practical significance. The main objective of nonlinear filtering is to infer the states of a nonlinear dynamical system of interest based on the available noisy measurements. In recent years, the advance of network communication technology has not only popularized networked systems, with apparent advantages in terms of installation, cost, and maintenance, but also brought about a series of challenges to the design of nonlinear filtering algorithms, among which the communication constraint has been recognized as a dominating concern. In this context, a great number of investigations have been launched towards the networked nonlinear filtering problem with communication constraints, and many sample-based nonlinear filters have been developed to deal with highly nonlinear and/or non-Gaussian scenarios. The aim of this paper is to provide a timely survey of recent advances on the sample-based networked nonlinear filtering problem from the perspective of communication constraints. More specifically, we first review three important families of sample-based filtering methods known as the unscented Kalman filter, the particle filter, and the maximum correntropy filter. Then, the latest developments are surveyed with stress on topics regarding incomplete/imperfect information, limited resources, and cyber attacks. Finally, several challenges and open problems are highlighted to shed some light on possible trends of future research in this realm.
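Of the three filter families reviewed, the particle filter is the most direct to sketch. The following Python code implements a textbook bootstrap particle filter (predict, weight, resample) for a generic scalar model; the functions f and h, the noise levels, and the example measurements are placeholders, not tied to any surveyed paper.

```python
import numpy as np

def bootstrap_particle_filter(y, f, h, q_std, r_std, n_particles=500, seed=0):
    """Bootstrap particle filter for x_k = f(x_{k-1}) + w, y_k = h(x_k) + v,
    with zero-mean Gaussian noises of std devs q_std and r_std."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)    # draw from the prior
    estimates = []
    for yk in y:
        # Propagate particles through the process model
        particles = f(particles) + rng.normal(0.0, q_std, n_particles)
        # Weight each particle by the measurement likelihood
        w = np.exp(-0.5 * ((yk - h(particles)) / r_std) ** 2)
        w /= w.sum()
        estimates.append(np.dot(w, particles))        # posterior-mean estimate
        # Multinomial resampling to fight weight degeneracy
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)

# Example: the classic scalar growth model with a quadratic measurement
f = lambda x: 0.5 * x + 25 * x / (1 + x ** 2)
h = lambda x: x ** 2 / 20
xs = bootstrap_particle_filter(y=np.array([0.1, 1.2, 0.4]), f=f, h=h,
                               q_std=1.0, r_std=1.0)
```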
In recent decades, Cellular Networks (CN) have been used broadly in communication technologies. The most critical challenge in CN is congestion control, owing to the distributed mobile environment. Some approaches, ...
In the data retrieval process of a data recommendation system, matching prediction and similarity identification play a major role in the outcome. For that, there are several methods to improve the retrieval process with improved accuracy and to reduce the searching time. However, in the data recommendation system, this type of data searching becomes complex when seeking the best match for given query data and fails in the accuracy of the query recommendation process. To improve the performance of data validation, this paper proposed a novel model of data similarity estimation and a clustering method to retrieve the relevant data with the best matching in the big data environment. In this paper, an advanced model of the Logarithmic Directionality Texture Pattern (LDTP) method with a Metaheuristic Pattern Searching (MPS) system was used to estimate the similarity between the query data and the entire database. The overall work was implemented for the application of a data recommendation system. Data are all indexed and grouped as clusters to form a paged database structure, which can reduce the computation time at the searching stage, as illustrated by the sketch below. Also, with the help of a neural network, the relevancies of feature attributes in the database are predicted, and the matching index is sorted to provide the recommended data for a given query. This was achieved by using the Distributional Recurrent Neural Network (DRNN), an enhanced neural network model that finds the relevancy based on the correlation factor of the feature attributes. The training process of the DRNN classifier was carried out by estimating the correlation factor of the attributes of the database. These are formed as clusters and paged with proper indexing based on the MPS parameter of similarity metrics. The overall performance of the proposed work is evaluated by varying the size of the training database by 60%, 70%, and 80%. The parameters considered for performance analysis are Precision ...
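The LDTP, MPS, and DRNN components are specific to this paper, but the cluster-and-page indexing idea it relies on can be illustrated generically. The Python sketch below uses KMeans as a stand-in grouping step and plain Euclidean distance as a stand-in similarity measure; it shows how restricting the search to one matching "page" of clustered records cuts the scan cost relative to searching the whole database.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_paged_index(features, n_clusters=8, seed=0):
    """Group feature vectors into clusters ('pages') so a query scans only
    the most similar page instead of the whole database."""
    km = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=seed).fit(features)
    pages = {c: np.where(km.labels_ == c)[0] for c in range(n_clusters)}
    return km, pages

def recommend(query, features, km, pages, top_k=5):
    page = pages[int(km.predict(query[None, :])[0])]   # pick the matching page
    dists = np.linalg.norm(features[page] - query, axis=1)
    return page[np.argsort(dists)[:top_k]]             # best-matching records

db = np.random.default_rng(0).normal(size=(1000, 32))  # stand-in feature DB
km, pages = build_paged_index(db)
print(recommend(db[42], db, km, pages))
```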
Sign language recognition is an important social issue to address, as it can benefit the deaf and hard-of-hearing community by providing easier and faster communication. Some previous studies on sign language recognition ...
The skin is among the most vital organs that protect the human body from the external environment. Early identification of skin illness is essential for reducing mortality, because it can prevent skin cancer and any ot...
We present a novel attention-based mechanism to learn enhanced point features for point cloud processing tasks, e.g., classification and segmentation. Unlike prior studies, which were trained to optimize the weights of a pre-selected set of attention points, our approach learns to locate the best attention points to maximize the performance of a specific task, e.g., point cloud classification. Importantly, we advocate the use of a single attention point to facilitate semantic understanding in point feature learning. Specifically, we formulate a new and simple convolution, which combines convolutional features from an input point and its corresponding learned attention point (LAP). Our attention mechanism can be easily incorporated into state-of-the-art point cloud classification and segmentation networks. Extensive experiments on common benchmarks, such as ModelNet40, ShapeNet Part, and S3DIS, all demonstrate that our LAP-enabled networks consistently outperform the respective original networks, as well as other competitive alternatives that employ multiple attention points, either pre-selected or learned under our LAP framework.
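One plausible minimal rendition of the learned-attention-point idea is sketched below in PyTorch: an MLP predicts, for each input point, where to attend, and the feature at that location is fused with the point's own feature. The offset MLP, the nearest-neighbor feature lookup, and the fusion layer are all illustrative assumptions rather than the paper's exact operator; in particular, the hard argmin lookup is not differentiable, so training the offsets end-to-end would require a soft or interpolated lookup in its place.

```python
import torch
import torch.nn as nn

class LAPConv(nn.Module):
    """Sketch of a convolution pairing each point with one learned attention
    point (LAP). Illustrative reading of the idea, not the paper's operator."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.offset = nn.Sequential(           # predicts where to attend
            nn.Linear(in_dim + 3, 64), nn.ReLU(), nn.Linear(64, 3))
        self.combine = nn.Sequential(          # fuses point + LAP features
            nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, xyz, feat):              # (B, N, 3), (B, N, C)
        # Each point's attention location = its position plus a learned offset
        lap_xyz = xyz + self.offset(torch.cat([xyz, feat], dim=-1))
        # Approximate the feature at the attention point by the feature of
        # its nearest input point (non-differentiable; for brevity only)
        idx = torch.cdist(lap_xyz, xyz).argmin(dim=-1)            # (B, N)
        lap_feat = torch.gather(
            feat, 1, idx.unsqueeze(-1).expand(-1, -1, feat.shape[-1]))
        return self.combine(torch.cat([feat, lap_feat], dim=-1))

out = LAPConv(16, 32)(torch.randn(2, 128, 3), torch.randn(2, 128, 16))
```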