Evaluation of small arms firing plays an important role in the military domain. To date, evaluation has been only partially automated and is still largely performed manually at the ground level, so automating it with artificial intelligence is much needed. This paper focuses on designing and developing an AI-based small arms firing evaluation system for the military environment. First, image processing techniques are used to calculate the target firing score. Additionally, firing errors during shooting are detected using a machine learning algorithm. Consistency in firing requires abundant practice and up-to-date analysis of previous results, and accuracy and precision are the basic requirements of a good shooter. To test the shooting skill of combatants, firing practices that include 'grouping' and 'shoot to hit' scores are held at frequent intervals. A shortage of skilled personnel and a lack of personal interest lead to inefficient evaluation of a firer's standard. This paper introduces a system that automatically fetches the target data and evaluates the standard using a fuzzy approach, predicts shooter performance with linear regression, and compares results against recognized patterns to analyze individual expertise and suggest improvements based on previous values. The resulting Small Arms Firing Skill Evaluation System makes the whole process of firing and target evaluation faster and more accurate. The experiment has been conducted in real-time scenarios in a military setting and shows promising results for automatic evaluation.
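To make the prediction step concrete, the sketch below fits a simple least-squares trend over a shooter's past practice scores, in the spirit of the linear-regression component described above. It is only an illustration, not the paper's implementation; the session numbers and scores are hypothetical.

```python
# Illustrative sketch (not the paper's model): fit a least-squares line over a
# shooter's past practice scores and extrapolate the next one. Data is made up.
import numpy as np

sessions = np.array([1, 2, 3, 4, 5, 6], dtype=float)   # past practice sessions
scores = np.array([62, 65, 71, 70, 76, 79], dtype=float)

slope, intercept = np.polyfit(sessions, scores, deg=1)  # score = slope*s + intercept

next_session = 7
predicted = slope * next_session + intercept
print(f"Predicted score for session {next_session}: {predicted:.1f}")
```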
Temporal knowledge graph (TKG) reasoning has seen widespread use for modeling real-world events, particularly in extrapolation settings. Nevertheless, most previous studies are embedding models, which require both entity and relation embeddings to make predictions and ignore the semantic correlations among different entities and relations within the same timestamp. This can lead to random and nonsensical predictions when unseen entities or relations occur. Furthermore, many existing models exhibit limitations in handling highly correlated historical facts with extensive temporal depth: they often either overlook such facts or overly accentuate the relationships between recurring past occurrences and their current counterparts. Due to the dynamic nature of TKGs, effectively capturing the evolving semantics between different timestamps can be challenging. To address these shortcomings, we propose the recurrent semantic evidence-aware graph neural network (RE-SEGNN), a novel graph neural network that can learn the semantics of entities and relations simultaneously. For the former challenge, our model can predict a possible answer to missing quadruples based on semantics when facing unseen entities or relations. For the latter problem, we build on the well-established observation that both the recency and frequency of semantic history confer a higher reference value on the current prediction. We use the Hawkes process to compute the semantic trend, which allows the semantics of recent facts to gain more attention than those of distant facts. Experimental results show that RE-SEGNN outperforms all SOTA models in entity prediction on 6 widely used datasets, and in relation prediction on 5 datasets. Furthermore, a case study shows how our model deals with unseen entities and relations.
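The Hawkes-style weighting of semantic history can be illustrated with a minimal sketch: each past occurrence of a fact contributes an exponentially decaying amount to the current intensity, so recent and frequent occurrences carry more weight. This shows the general idea only, not RE-SEGNN's actual scoring function; the parameter values are placeholders.

```python
# Illustrative sketch (not RE-SEGNN's scoring function): Hawkes-style intensity
# where each past occurrence decays exponentially with its distance from now,
# so recent and frequent semantic history gets more weight.
import math

def hawkes_intensity(event_times, t_now, mu=0.0, alpha=1.0, beta=0.5):
    """Base rate plus exponentially decayed contributions from past events."""
    return mu + sum(alpha * math.exp(-beta * (t_now - t)) for t in event_times)

# A fact seen at timestamps 1, 2 and 9 scores higher than one seen only at 1.
print(hawkes_intensity([1, 2, 9], t_now=10))
print(hawkes_intensity([1], t_now=10))
```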
Glaucoma is currently one of the most significant causes of permanent blindness. Fundus imaging is the most popular glaucoma screening method because of the favorable trade-offs it offers in portability, size, and cost. In recent years, convolutional neural networks (CNNs) have revolutionized computer vision. Convolution is a "local" operation that covers only a small region of an image at a time. Vision Transformers (ViT) use self-attention, a "global" operation that collects information from the entire image, so a ViT can successfully capture distant semantic relevance within an image. This study examined several optimizers, including Adamax, SGD, RMSprop, Adadelta, Adafactor, Nadam, and Adagrad. We trained and tested the ViT model on the IEEE fundus image dataset (1750 healthy and glaucoma images) and the LAG fundus image dataset (4800 healthy and glaucoma images). During preprocessing, the datasets underwent image scaling, auto-rotation, and auto-contrast adjustment via adaptive equalization. The results demonstrated that preparing the datasets in this way, in combination with the various optimizers, improved accuracy and other performance metrics. In particular, the Nadam optimizer reached accuracies of up to 97.8% on the adaptively equalized IEEE dataset and up to 92% on the adaptively equalized LAG dataset, followed by the auto-rotation and image-resizing pipelines. In addition to integrating our vision transformer with the shift tokenization model, we also combined ViT with a hybrid model consisting of six classifiers (SVM, Gaussian NB, Bernoulli NB, Decision Tree, KNN, and Random Forest), chosen based on which optimizer was most successful for each dataset. Empirical results show that the SVM model worked well, reaching accuracy of up to 93% with precision of up to 94% in the adaptive equalization preprocess
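A minimal sketch of the optimizer comparison described above is shown below, looping over a subset of the named optimizers (those available in torch.optim) on a tiny stand-in classifier with random data. The model, tensors, learning rate, and step count are placeholders, not the study's actual ViT training setup.

```python
# Illustrative sketch (not the study's training code): compare several PyTorch
# optimizers on the same classifier. The tiny linear model and random tensors
# stand in for the ViT and the fundus images.
import torch
from torch import nn

def make_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))

x = torch.randn(64, 3, 32, 32)        # stand-in for fundus images
y = torch.randint(0, 2, (64,))        # healthy vs. glaucoma labels

optimizers = {
    "Adamax": torch.optim.Adamax,
    "SGD": torch.optim.SGD,
    "RMSprop": torch.optim.RMSprop,
    "Adadelta": torch.optim.Adadelta,
    "NAdam": torch.optim.NAdam,
    "Adagrad": torch.optim.Adagrad,
}

loss_fn = nn.CrossEntropyLoss()
for name, opt_cls in optimizers.items():
    model = make_model()
    opt = opt_cls(model.parameters(), lr=1e-3)
    for _ in range(5):                 # a few toy training steps
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"{name}: final loss {loss.item():.3f}")
```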
Fires are becoming one of the major natural hazards, threatening the ecology, the economy, human life, and more worldwide. Therefore, early fire detection systems are crucial to prevent fires from spreading out of co...
Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). The voice in particular carries a great deal of information, revealing details about the speaker's goals and desires as well as their internal state. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while analysis of the spoken words helps the speaker's demands be understood. Voice emotion recognition has therefore become an essential component of modern HCC networks, although integrating findings from the various disciplines involved in identifying vocal emotions remains challenging. Many sound analysis techniques have been developed in the past, and with the development of artificial intelligence (AI), especially deep learning (DL) technology, research incorporating real data is becoming increasingly common. This research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the proposed SHO-LSTM technique. Wiener filter (WF) and Mel-frequency cepstral coefficient (MFCC) techniques are used to remove noise and extract features from the data, respectively. The LSTM is then applied to the extracted features, with SHO optimizing the LSTM network's parameters for effective emotion recognition. The proposed framework was implemented and tested in Python. In the evaluation phase, numerous metrics are used to assess the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The SHO-LSTM's outcomes are contrasted with those of previously conducted research, and these comparative assessments show that our approach outperforms current approaches in vocal emotion recognition.
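As a rough illustration of the MFCC-plus-LSTM pipeline (without the SHO tuning step), the sketch below extracts MFCC features with librosa and feeds them to a plain LSTM classifier in PyTorch. The synthetic audio, hidden size, and eight-class output (matching the RAVDESS emotion labels) are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch (not the SHO-LSTM implementation): MFCC features feeding
# a plain LSTM emotion classifier. The audio is synthetic and the selfish herd
# optimization step that tunes the LSTM parameters is omitted.
import numpy as np
import librosa
import torch
from torch import nn

sr = 22050
signal = np.random.randn(sr * 3).astype(np.float32)        # 3 s placeholder audio
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)     # shape (13, frames)
features = torch.tensor(mfcc.T, dtype=torch.float32).unsqueeze(0)  # (1, T, 13)

class EmotionLSTM(nn.Module):
    def __init__(self, n_mfcc=13, hidden=64, n_emotions=8):  # 8 RAVDESS emotions
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_emotions)

    def forward(self, x):
        _, (h, _) = self.lstm(x)          # final hidden state summarizes the clip
        return self.head(h[-1])

logits = EmotionLSTM()(features)
print(logits.shape)                        # torch.Size([1, 8])
```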
With recent advances in deep learning, an increasing number of deep neural networks have been applied to business process prediction tasks such as remaining time prediction to obtain more accurate results. However, existing deep-learning-based time prediction methods have poor interpretability. We therefore propose an explainable business process remaining time prediction method based on the reachability graph, which consists of prediction model construction and visualization. For model construction, a Petri net is mined and its reachability graph is constructed to obtain the transition occurrence vector. Prefixes and their corresponding suffixes are then generated and clustered into different transition partitions according to the transition occurrence vector. Next, a bidirectional recurrent neural network with attention is applied to each transition partition to encode the prefixes, and deep transfer learning is performed between different transition partitions. For visualization, the evaluation values are attached to the sub-processes of the Petri net to visualize the prediction models. Finally, the proposed method is validated on publicly available event logs.
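The partitioning step can be illustrated with a small sketch: each prefix of a trace is mapped to a binary transition occurrence vector, and prefixes with the same vector fall into the same partition, which would then receive its own remaining-time model. The activity names and traces are made up; this is not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): group event-log prefixes by their
# binary transition occurrence vector. Activity names and traces are made up.
from collections import defaultdict

activities = ["register", "check", "decide", "notify"]
traces = [
    ["register", "check", "decide"],
    ["register", "check", "check", "decide"],
    ["register", "decide", "notify"],
]

partitions = defaultdict(list)
for trace in traces:
    for i in range(1, len(trace) + 1):
        prefix = trace[:i]
        occ = tuple(int(a in prefix) for a in activities)  # occurrence vector
        partitions[occ].append(prefix)

# Each partition would get its own attention-based remaining-time model.
for occ, prefixes in partitions.items():
    print(occ, prefixes)
```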
With the arrival of the 5G era, wireless communication technologies and services are rapidly exhausting the limited spectrum resources. Spectrum auctions came into being to utilize spectrum resources effectively. Because of the complexity of the electronic spectrum auction network environment, the security of spectrum auctions cannot be guaranteed. Most scholars focus on the security of single-sided auctions, ignoring the practical scenario of a secure double spectrum auction whose participants comprise multiple sellers and buyers. Researchers have begun to design secure double spectrum auction mechanisms in which two semi-honest agents are introduced to carry out the auction; however, these two agents may collude with each other or be bribed by buyers and sellers, which creates security risks. Therefore, a secure double spectrum auction is proposed in this paper. Unlike traditional secure double spectrum auctions, this paper uses a spectrum auction server with a Software Guard Extensions (SGX) component on an Ethereum blockchain platform that performs the spectrum auctions. A secure double spectrum protocol is also designed, using SGX technology and cryptographic tools such as the Paillier cryptosystem, stealth addresses, and one-time ring signatures to protect the private information of spectrum participants. In addition, smart contracts provided by the Ethereum blockchain platform are executed to assist offline verification and to verify important spectrum auction information, ensuring the fairness and impartiality of the spectrum auction. Finally, the security analysis and performance evaluation of our protocol are discussed.
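The role of the Paillier cryptosystem in such a protocol can be illustrated by its additive homomorphism: an auctioneer can aggregate encrypted bids without learning individual values. The sketch below assumes the third-party phe (python-paillier) package and hypothetical bid values; it is not the paper's protocol.

```python
# Illustrative sketch of Paillier's additive homomorphism for sealed bids.
# Assumes the third-party `phe` (python-paillier) package; bids are made up.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

bids = [120, 95, 150]                                  # hypothetical bids
encrypted_bids = [public_key.encrypt(b) for b in bids]

# The auctioneer can add ciphertexts without decrypting individual bids.
encrypted_total = sum(encrypted_bids[1:], encrypted_bids[0])
print(private_key.decrypt(encrypted_total))            # 365
```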
Exploration strategy design is a challenging problem in reinforcement learning (RL), especially when the environment has a large state space or sparse rewards. During exploration, the agent tries to discover unexplored (novel) areas or high-reward (quality) states. Most existing methods perform exploration by utilizing only the novelty of states; the novelty and quality in the neighboring area of the current state have not been well exploited to guide the agent's exploration simultaneously. To address this problem, this paper proposes a novel RL framework, called clustered reinforcement learning (CRL), for efficient exploration. CRL adopts clustering to divide the collected states into several clusters, based on which a bonus reward reflecting both novelty and quality in the neighboring area (cluster) of the current state is given to the agent. CRL leverages these bonus rewards to guide the agent toward efficient exploration. Moreover, CRL can be combined with existing exploration strategies to improve their performance, as the bonus rewards employed by those strategies capture only the novelty of states. Experiments on four continuous control tasks and six hard-exploration Atari-2600 games show that our method outperforms other state-of-the-art methods and achieves the best performance.
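A minimal sketch of the cluster-based bonus is given below: visited states are clustered with k-means, and the bonus for the current state mixes the novelty of its cluster (few visits) with the cluster's quality (mean observed reward). The data, cluster count, and weighting coefficients are placeholders, not CRL's actual settings.

```python
# Illustrative sketch (not CRL itself): cluster visited states and compute a
# bonus mixing novelty (rarely visited cluster) and quality (high mean reward).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
states = rng.normal(size=(200, 4))     # collected states (placeholder data)
rewards = rng.random(200)              # rewards observed at those states

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(states)
labels = kmeans.labels_

def bonus(state, beta=0.1, eta=0.1):
    """Bonus for the cluster containing `state`; beta/eta are placeholder weights."""
    c = kmeans.predict(state.reshape(1, -1))[0]
    novelty = 1.0 / np.sqrt((labels == c).sum())   # fewer visits -> larger bonus
    quality = rewards[labels == c].mean()          # mean reward in the cluster
    return beta * novelty + eta * quality

print(bonus(states[0]))
```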
Airplanes play a critical role in global transportation, ensuring the efficient movement of people and goods. Although generally safe, aviation systems occasionally encounter incidents and accidents that underscore th...
Data race is one of the most important concurrency anomalies in multi-threaded programs. Constraint-based techniques have been leveraged for race detection and are able to find all the races that can be found by any other sound race detector. However, this constraint-based approach has serious limitations in helping programmers analyze and understand data races. First, it may report a large number of false positives due to unrecognized dataflow propagation. Second, it recommends a wide range of thread context switches to schedule the reported race (including the false ones) whenever the race is exposed during the constraint-solving process. This ad hoc recommendation imposes too many context switches, which complicates data race analysis. To address these two limitations of state-of-the-art constraint-based race detection, this paper proposes DFTracker, an improved constraint-based race detector that recommends each data race with minimal thread context switches. Specifically, we reduce false positives by analyzing and tracking the dataflow in the program; by this means, DFTracker avoids unnecessary analysis of false race reports. We further propose a novel algorithm to recommend an effective race schedule with minimal thread context switches for each data race. The experimental results on real applications demonstrate that 1) without removing any true data race, DFTracker prunes 68% of the false positives reported by the state-of-the-art constraint-based race detector, and 2) DFTracker recommends as few as 2.6-8.3 (4.7 on average) thread context switches per data race on real-world programs, which is 81.6% fewer per data race than the state-of-the-art constraint-based race detector. Overall, DFTracker can be used as an effective tool for programmers to understand data races.
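The constraint-based idea behind such detectors can be sketched as follows: give each event an integer order variable, encode program order and synchronization as constraints, and ask an SMT solver which orders of two conflicting accesses are feasible. The sketch assumes the z3-solver package and made-up event names; it is not DFTracker's algorithm.

```python
# Illustrative sketch (not DFTracker): encode event ordering constraints and
# query an SMT solver about the feasible orders of two conflicting writes.
from z3 import Int, Solver, sat

# Order variables: thread 1 writes x then releases a lock;
# thread 2 acquires the lock then writes x.
w1, rel1, acq2, w2 = Int("w1"), Int("rel1"), Int("acq2"), Int("w2")

s = Solver()
s.add(w1 < rel1, acq2 < w2)   # program order within each thread
s.add(rel1 < acq2)            # synchronization: release happens before acquire

# If both orders of the conflicting writes were feasible, the accesses would be
# unordered and a race would be reported. Here the lock forces w1 before w2.
for hypothesis, name in [(w1 < w2, "w1 before w2"), (w2 < w1, "w2 before w1")]:
    s.push()
    s.add(hypothesis)
    print(name, "feasible:", s.check() == sat)
    s.pop()
```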