Accurate and robust unmanned aerial vehicle (UAV) localization is essential, given the requirements of safety-critical monitoring and emergency wireless communication in hostile underground environments. Existing range-based localization approaches fundamentally rely on the assumption that the environment is relatively ideal, which enables precise ranging. However, radio propagation in underground environments may be dramatically influenced by various equipment, obstacles, and ambient noise. In this case, inaccurate range measurements and intermittent ranging failures inevitably occur, leading to severe localization performance degradation. To address these challenges, this paper proposes a novel UAV localization scheme that can effectively handle unreliable observations in hostile underground environments. We first propose an adaptive extended Kalman filter (EKF) based on the fusion of ultra-wideband (UWB) and inertial measurement unit (IMU) data to detect and adjust inaccurate range measurements. To deal with intermittent ranging failures, we further design a constraint condition by limiting the system innovation. An auto-regressive model is then proposed to implement localization in ranging blind areas by reconstructing the lost measurements. Extensive simulations have been conducted to verify the scheme, and field experiments were carried out in an underground garage and a coal mine based on P440 UWB radios. Results show that the localization accuracy is improved by 16.9% compared with recent methods in hostile underground environments.
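The two unreliable-observation cases described above (out-of-gate range measurements and ranging dropouts) can be sketched in a minimal, self-contained form. The function names, the 3-sigma innovation gate, and the linear extrapolation standing in for fitted auto-regressive coefficients are illustrative assumptions, not the paper's implementation:

```python
import math

def gate_range(predicted, measured, sigma, threshold=3.0):
    """Gate an incoming range measurement by its normalized innovation.

    Returns (accepted, adjusted). In-gate measurements pass through;
    an outlier is pulled back to the gate boundary instead of being
    fed to the EKF update at full weight.
    """
    innovation = measured - predicted
    if abs(innovation) / sigma <= threshold:
        return True, measured
    adjusted = predicted + math.copysign(threshold * sigma, innovation)
    return False, adjusted

def ar_reconstruct(history):
    """Fill a lost range sample from recent history.

    A linear extrapolation from the last two samples stands in for the
    paper's fitted auto-regressive model (sketch only).
    """
    if len(history) < 2:
        return history[-1]
    return 2 * history[-1] - history[-2]
```

A gated-but-adjusted value keeps the filter update bounded during bursts of multipath error, while `ar_reconstruct` bridges short blind areas where no UWB range arrives at all.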
Protein structure prediction is one of the main research areas in the field of bioinformatics. The importance of proteins in drug design attracts researchers to the problem of finding the accurate tertiary structure of the protein...
Free speech is essential, but it can conflict with protecting marginalized groups from the harm caused by hate speech. Social media platforms have become breeding grounds for this harmful content. While studies exist to detect hate speech, there are significant research gaps. First, most studies used text data instead of other modalities such as videos or audio. Second, most studies explored traditional machine learning algorithms; however, due to the increasing complexity of computational tasks, there is a need to employ more complex techniques and methodologies. Third, the majority of research studies have either been evaluated using very few evaluation metrics or not statistically evaluated at all. Lastly, due to the opaque, black-box nature of complex classifiers, there is a need to use explainability techniques. This research aims to address these gaps by detecting hate speech in the English and Kiswahili languages using videos manually collected from YouTube. The videos were converted to text and used to train various classifiers, whose performance was evaluated using various evaluation and statistical measurements. The experimental results suggest that the random forest classifier achieved the highest results for both languages across all evaluation measurements compared to all classifiers used. The results for English were: accuracy 98%, AUC 96%, precision 99%, recall 97%, F1 98%, specificity 98% and MCC 96%, while the results for Kiswahili were: accuracy 90%, AUC 94%, precision 93%, recall 92%, F1 94%, specificity 87% and MCC 75%. These results suggest that the random forest classifier is robust, effective and efficient in detecting hate speech in both languages. This also implies that the classifier is reliable for detecting hate speech and related problems on social media. However, to understand the classifiers' decision-making process, we used the Local Interpretable Model-agnostic Explanations (LIME) technique to explain the
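The evaluation measurements reported above all follow from a binary confusion matrix; a minimal pure-Python sketch (the function name and any example counts are illustrative, not the study's data):

```python
import math

def metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    # Matthews correlation coefficient: balanced even on skewed classes.
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "mcc": mcc}
```

MCC is the most conservative of these: it only approaches 1.0 when all four confusion-matrix cells are favorable, which is why the Kiswahili MCC (75%) trails its other scores.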
Graph processing has been widely used in many scenarios, from scientific computing to artificial intelligence. Graph processing exhibits irregular computational parallelism and random memory accesses, unlike traditional workloads. Consequently, running graph processing workloads on conventional architectures (e.g., CPUs and GPUs) often shows a significantly low compute-memory ratio with few performance benefits, and can be, in many cases, even slower than a specialized single-thread graph implementation. While domain-specific hardware designs are essential for graph processing, it is still challenging to turn the hardware capability into a performance boost without a coupled software stack. This article presents a graph processing ecosystem from hardware to software. We start by introducing a series of hardware accelerators as the foundation of this ecosystem. Then, the codesigned parallel graph systems and their distributed techniques are presented to support graph applications. Finally, we introduce our efforts on novel graph applications and hardware designs. Evaluation results show that various graph applications can be efficiently accelerated in this graph processing ecosystem.
Unsupervised domain adaptation (UDA) has emerged as a powerful solution for the domain shift problem via transferring the knowledge from a labeled source domain to a shifted unlabeled target domain. Despite the prevalence of UDA for visual applications, it remains relatively less explored for time-series applications. In this work, we propose a novel lightweight contrastive domain adaptation framework called CoTMix for time-series data. Unlike existing approaches that either use statistical distances or adversarial techniques, we leverage contrastive learning solely to mitigate the distribution shift across the different domains. Specifically, we propose a novel temporal mixup strategy to generate two intermediate augmented views for the source and target domains. Subsequently, we leverage contrastive learning to maximize the similarity between each domain and its corresponding augmented view. The generated views consider the temporal dynamics of time-series data during the adaptation process while inheriting the semantics among the two domains. Hence, we gradually push both domains toward a common intermediate space, mitigating the distribution shift across them. Extensive experiments conducted on five real-world time-series datasets show that our approach can significantly outperform all state-of-the-art UDA methods. Impact Statement: Unsupervised domain adaptation (UDA) aims to reduce the gap between two related but shifted domains. Current UDA methods for time-series data are based on adversarial or discrepancy approaches. These methods are complex in training and cannot efficiently address the large domain shift. Therefore, in this work, we propose a time-series UDA framework based purely on contrastive learning, which is simpler in implementation and training. To leverage contrastive learning to mitigate domain shift, we propose a temporal mixup strategy to generate augmentations that are robust to the domain shift and can move both domains towards an intermediate
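The temporal mixup idea (each domain's augmented view is a convex combination dominated by that domain, so its own temporal dynamics are preserved) can be illustrated on plain numeric sequences. The function name and mixing ratio `lam` are hypothetical stand-ins, not the CoTMix implementation:

```python
def temporal_mixup(source, target, lam=0.9):
    """Build one augmented view per domain from two equal-length sequences.

    Each view is a timestep-wise convex combination: the source view is
    dominated by the source sequence (weight lam) but inherits a small
    share of the target's semantics (weight 1 - lam), and vice versa.
    """
    src_view = [lam * s + (1 - lam) * t for s, t in zip(source, target)]
    tgt_view = [lam * t + (1 - lam) * s for s, t in zip(source, target)]
    return src_view, tgt_view
```

Contrastive learning would then pull each sequence toward its own cross-domain view, nudging both domains toward a shared intermediate space.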
The incredible progress in technology has drastically increased the usage of Web applications. Users share their credentials, like user ID and password, or use their smart cards to get authenticated by application servers. Smart cards are handy to use, but they are susceptible to stolen smart card attacks and a few other notable security threats. Users prefer Web applications that guarantee security against several attacks, especially insider attacks, which are hazardous. Cryptanalysis of several existing schemes exposes the pitfalls of those protocols in preventing security attacks, specifically insider attacks. This paper introduces LAPUP: a novel lightweight authentication protocol using physically unclonable functions (PUFs) to prevent security attacks, principally insider attacks. The PUFs are used to generate the security keys, challenge-response pairs (CRPs) and hardware signatures for designing the protocol. The transmitted messages are shared as hash values and encrypted by the keys generated by the PUFs. These messages resist all possible attacks executed by any attacker, including insider attacks. LAPUP is also free from stolen-verifier attacks, as the databases are secured using the hardware signature generated by the PUF. Security analysis of the protocol exhibits the strength of LAPUP in preventing insider attacks and its resistance against several other security attacks. The evaluation results of the communication and computation costs of LAPUP clearly show that it achieves better performance than existing protocols, despite providing enhanced security.
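The enrollment-and-verification flow of a CRP-based scheme can be sketched as follows. An HMAC stands in for the physical PUF circuit, purely as a software simulation; none of the names or message formats come from LAPUP itself:

```python
import hashlib
import hmac
import os

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    """Stand-in for a hardware PUF: a keyed hash models the device's
    unclonable challenge-response behaviour (simulation only)."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def enroll(device_secret: bytes):
    """Server records one fresh challenge-response pair (CRP) at enrollment."""
    challenge = os.urandom(16)
    return challenge, puf_response(device_secret, challenge)

def authenticate(device_secret: bytes, challenge: bytes, expected: bytes) -> bool:
    """Device re-answers the stored challenge; the server compares in
    constant time, so no reusable secret ever sits in the database."""
    return hmac.compare_digest(puf_response(device_secret, challenge), expected)
```

Because the response derives from device-unique physical variation rather than a stored key, a stolen verifier table is useless without the device itself, which is the property the stolen-verifier resistance above relies on.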
Even though various features have been investigated in the detection of figurative language, oxymoron features have not been considered in the classification of sarcastic content. The main objective of this work is to...
The Telecare Medical Information System (TMIS) faces challenges in securely exchanging sensitive health information between TMIS nodes. A Mutual Authenticated Key Agreement (MAKA) scheme is used to eliminate security ...
Task scheduling, which is important in cloud computing, is one of the most challenging issues in this area. Hence, an efficient and reliable task scheduling approach is needed to produce more efficient resource employ...
The effectiveness of facial expression recognition (FER) algorithms hinges on the model's quality and the availability of a substantial amount of labeled expression data. However, labeling large datasets demands significant human, time, and financial resources. Although active learning methods have mitigated the dependency on extensive labeled data, a cold-start problem persists in small to medium-sized expression recognition datasets. This issue arises because the initial labeled data often fails to represent the full spectrum of facial expression features. This paper introduces an active learning approach that integrates uncertainty estimation, aiming to improve the precision of facial expression recognition regardless of dataset scale. The method is divided into two primary stages. First, the model undergoes self-supervised pre-training using contrastive learning and uncertainty estimation to bolster its feature extraction capabilities. Second, the model is fine-tuned using the prior knowledge obtained from the pre-training phase to significantly improve recognition accuracy. In the pre-training phase, the model employs contrastive learning to extract fundamental feature representations from the complete unlabeled dataset. These features are then weighted through a self-attention mechanism with rank regularization. Subsequently, data from the low-weighted set is relabeled to further refine the model's feature extraction capability. The pre-trained model is then utilized in active learning to select and label information-rich samples more efficiently. Experimental results demonstrate that the proposed method significantly outperforms existing approaches, achieving improvements in recognition accuracy of 5.09% and 3.82% over the best existing active learning methods, Margin and Least Confidence, respectively, and a 1.61% improvement compared to the conventional segmented active learning method.
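The Margin and Least Confidence baselines named above are standard uncertainty-sampling criteria; a minimal sketch of how such a criterion ranks unlabeled samples for labeling (helper names are illustrative):

```python
def least_confidence(probs):
    """Uncertainty as 1 minus the top class probability (higher = less sure)."""
    return 1.0 - max(probs)

def margin(probs):
    """Gap between the two highest class probabilities (smaller = less sure)."""
    top = sorted(probs, reverse=True)
    return top[0] - top[1]

def select_uncertain(batch, k, score=least_confidence, reverse=True):
    """Return indices of the k most informative samples in a batch of
    per-class probability vectors. For margin, pass reverse=False so
    small gaps rank first."""
    ranked = sorted(range(len(batch)), key=lambda i: score(batch[i]),
                    reverse=reverse)
    return ranked[:k]
```

Under a cold start these scores are unreliable because the initial model is poorly calibrated, which is the gap the contrastive pre-training stage described above is meant to close before selection begins.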