Due to the probabilistic nature of quantum mechanics, the combination of quantum mechanics and intelligent algorithms has received wide attention. Quantum dynamics theory uses the Schrödinger equation as its quantum dynamics equation. Through three approximations of the objective function, a quantum dynamics framework (QDF) is obtained that describes the basic iterative operations of optimization algorithms. Based on the QDF, this paper proposes a potential barrier estimation (PBE) method that originates from quantum mechanics. With the proposed method, a particle can accept inferior solutions during the sampling process according to a probability governed by the quantum tunneling effect, which improves the global search capacity of optimization algorithms. The effectiveness of the proposed method in escaping local minima was thoroughly investigated on the double well function (DWF), and experiments on two benchmark function sets show that the method significantly improves optimization performance on high-dimensional complex functions. The PBE method is quantized and can easily be transplanted to other algorithms to achieve high performance in the future.
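As a rough illustration of the acceptance rule described above, the sketch below accepts an inferior candidate with a tunneling-style probability instead of rejecting it outright. The WKB-like transmission form in pbe_accept and all parameter names (barrier_width, hbar_eff, mass) are illustrative assumptions, not the paper's exact estimator.

    import math
    import random

    def pbe_accept(delta_e, barrier_width, hbar_eff=1.0, mass=1.0):
        """Tunneling-style acceptance probability for an inferior solution.

        Hypothetical form inspired by the WKB transmission coefficient
        P ~ exp(-2*w*sqrt(2*m*dE)/hbar); the paper's estimator may differ.
        """
        if delta_e <= 0:  # better or equal solution: always accept
            return 1.0
        return math.exp(-2.0 * barrier_width * math.sqrt(2.0 * mass * delta_e) / hbar_eff)

    def sample_step(f, x, sigma, barrier_width):
        """One QDF-style sampling step: Gaussian move, tunneling acceptance."""
        candidate = x + random.gauss(0.0, sigma)
        delta_e = f(candidate) - f(x)
        if random.random() < pbe_accept(delta_e, barrier_width):
            return candidate  # accepted, possibly an inferior solution
        return x  # rejected: stay at the current position

Unlike the Metropolis rule exp(-dE/T), the barrier width rather than a temperature schedule controls how readily the particle crosses into neighboring basins.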
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is annotated with a set of candidate labels. Self-training PLL models achieve state-of-the-art performance but suffer from error accumulation caused by mistakenly disambiguated instances. Although co-training can alleviate this issue by training two networks simultaneously and allowing them to interact with each other, most existing co-training methods train two structurally identical networks on the same task, i.e., they are symmetric, rendering it insufficient for them to correct each other due to their similar limitations. Therefore, in this paper, we propose an asymmetric dual-task co-training PLL model called AsyCo, which forces its two networks, i.e., a disambiguation network and an auxiliary network, to learn from different views explicitly by optimizing distinct tasks. Specifically, the disambiguation network is trained on a self-training PLL task to learn label confidence, while the auxiliary network is trained in a supervised learning paradigm to learn from noisy pairwise similarity labels that are constructed according to the learned label confidence. Finally, the error accumulation problem is mitigated via information distillation and confidence refinement. Extensive experiments on both uniform and instance-dependent partially labeled datasets demonstrate the effectiveness of AsyCo.
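One plausible reading of the auxiliary task is sketched below: pairwise similarity labels are derived from the disambiguation network's label confidence, and the auxiliary network is supervised on them. The agreement rule (matching most-confident labels) and the loss form are assumptions for illustration, not AsyCo's exact construction.

    import torch

    def pairwise_similarity_labels(confidence: torch.Tensor) -> torch.Tensor:
        """Build noisy pairwise similarity labels from label confidence.

        confidence: (N, C) row-stochastic matrix from the disambiguation
        network. Returns an (N, N) 0/1 matrix: 1 if two instances share
        the same most-confident candidate label (an assumed rule).
        """
        pseudo = confidence.argmax(dim=1)  # (N,) pseudo-labels
        return (pseudo.unsqueeze(0) == pseudo.unsqueeze(1)).float()

    def similarity_loss(logits: torch.Tensor, sim: torch.Tensor) -> torch.Tensor:
        """Supervised loss for the auxiliary network on pairwise labels."""
        p = torch.softmax(logits, dim=1)
        agree = p @ p.t()  # probability that two instances share a label
        agree = agree.clamp(1e-6, 1 - 1e-6)
        return -(sim * agree.log() + (1 - sim) * (1 - agree).log()).mean()

Because the two networks optimize different objectives over different label views, their errors are less correlated than in symmetric co-training.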
In the enormous field of Natural Language Processing (NLP), deciphering the intended significance of a word among a multitude of possibilities is referred to as word sense disambiguation. This process is essential for...
Predicting RNA-binding protein (RBP) binding sites on circular RNAs (circRNAs) is a fundamental step toward understanding their interaction mechanism. Numerous computational methods have been developed to solve this problem, but they cannot fully learn the features. Therefore, we propose circ-CNNED, a convolutional neural network (CNN)-based encoding and decoding framework. We first adopt two encoding methods to obtain two original matrices, which we preprocess with CNNs before fusion. To capture feature dependencies, we utilize a temporal convolutional network (TCN) and a CNN to construct the encoding and decoding blocks, respectively. We then introduce global expectation pooling to learn latent information and enhance the robustness of circ-CNNED. We evaluate circ-CNNED across 37 datasets. The comparison and ablation experiments demonstrate that our method is superior. In addition, motif enrichment analysis on four datasets helps us explore the reason for the performance improvement of circ-CNNED.
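A minimal sketch of one way to realize the global expectation pooling mentioned above: each channel is reduced to the expectation of its activations under a softmax position distribution, rather than a plain max or mean. The temperature parameter and this exact formulation are assumptions; the paper's operator may differ.

    import torch
    import torch.nn as nn

    class GlobalExpectationPooling(nn.Module):
        """Pool a (B, C, L) feature map to (B, C) by taking the expectation
        of each channel under a per-channel position distribution."""

        def __init__(self, temperature: float = 1.0):
            super().__init__()
            self.temperature = temperature

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # softmax over sequence positions gives a distribution per channel
            weights = torch.softmax(x / self.temperature, dim=-1)
            # expectation of activations under that distribution
            return (x * weights).sum(dim=-1)

At low temperature this approaches max pooling; at high temperature it approaches average pooling, which is one way such a layer can improve robustness.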
Drug-target interaction (DTI) prediction plays an important role in the process of drug discovery. Many computational methods treat it as a binary prediction problem, determining whether there are connections between drugs and targets while ignoring relational type information. Considering that the positive or negative effects of DTIs will facilitate the study of the comprehensive mechanisms of multiple drugs on a common target, in this work we model DTIs on signed heterogeneous networks, categorizing the interaction patterns of DTIs and additionally extracting interactions within drug pairs and target protein pairs. We propose signed heterogeneous graph neural networks (SHGNNs) and further put forward an end-to-end framework for signed DTI prediction, called SHGNN-DTI, which not only adapts to signed bipartite networks but also naturally incorporates auxiliary information from drug-drug interactions (DDIs) and protein-protein interactions (PPIs). For the framework, we solve the message passing and aggregation problem on signed DTI networks and consider different training modes on the whole network consisting of DTIs, DDIs and PPIs. Experiments are conducted on two datasets extracted from DrugBank and related databases, under different settings of initial inputs, embedding dimensions and training modes. The prediction results show excellent performance in terms of the evaluation metrics, and feasibility is further verified by a case study with two drugs on breast cancer.
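To make the signed message-passing step concrete, the layer below aggregates positive-edge and negative-edge neighbors separately and combines them with opposite signs. This is a generic signed-GNN layer in the spirit of the description above; the layer name, the dense-adjacency interface, and the combination rule are illustrative, not SHGNN-DTI's exact operator.

    import torch
    import torch.nn as nn

    class SignedConv(nn.Module):
        """One signed message-passing layer over a signed (hetero)graph."""

        def __init__(self, dim: int):
            super().__init__()
            self.lin_pos = nn.Linear(dim, dim)
            self.lin_neg = nn.Linear(dim, dim)
            self.lin_self = nn.Linear(dim, dim)

        def forward(self, h, adj_pos, adj_neg):
            # adj_pos / adj_neg: (N, N) row-normalized adjacency matrices
            # for positive and negative interactions, respectively
            m_pos = self.lin_pos(adj_pos @ h)  # messages over positive edges
            m_neg = self.lin_neg(adj_neg @ h)  # messages over negative edges
            # negative neighbors push the embedding away from their own
            return torch.relu(self.lin_self(h) + m_pos - m_neg)

Keeping the two aggregations separate is what lets the model distinguish, say, an agonist from an antagonist acting on the same target.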
Real-time systems are widely deployed in the Internet of Things (IoT) and safety-critical systems, both of which have generated enormous social value. Aiming at the classic schedulability analysis problem in real-time systems, we propose an exact Boolean analysis based on interference (EBAI). EBAI is based on the worst-case interference time (WCIT), which considers both the release jitter and the blocking time of a task. We improve the efficiency of three existing tests and provide a comprehensive summary of related research results in the field. Abundant experiments were conducted to compare EBAI with other related results. Our evaluation shows that in certain cases, the runtime gain achieved by our analysis method may exceed 73% compared to the state-of-the-art schedulability test. Furthermore, the benefit obtained from our tests grows with the number of tasks, reaching a level suitable for practical application. EBAI targets the five-tuple real-time task model, which has stronger expressive ability, and incurs low runtime overhead. These characteristics make it applicable to various real-time systems such as spacecraft, autonomous vehicles, industrial robots, and traffic command systems.
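For context, the sketch below shows the classical fixed-priority response-time iteration with release jitter J and blocking B, i.e., the kind of five-tuple (C, T, D, J, B) test that EBAI accelerates; EBAI's Boolean reformulation itself is not reproduced here.

    import math

    def response_time(task, hp, limit):
        """Iterate R = C + B + sum over higher-priority tasks j of
        ceil((R + J_j) / T_j) * C_j until a fixed point or the limit.

        task and each entry of hp are dicts with keys C (WCET),
        T (period), D (deadline), J (release jitter), B (blocking).
        """
        r = task["C"] + task["B"]
        while True:
            interference = sum(math.ceil((r + j["J"]) / j["T"]) * j["C"] for j in hp)
            r_next = task["C"] + task["B"] + interference
            if r_next > limit:
                return None  # no fixed point within the deadline bound
            if r_next == r:
                return r  # converged worst-case response time
            r = r_next

    def schedulable(task, hp):
        r = response_time(task, hp, task["D"] - task["J"])
        return r is not None and r + task["J"] <= task["D"]

The iteration cost of exactly this kind of fixed-point search is what dominates large task sets, which is where the reported 73% runtime gain matters.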
Software defect prediction (SDP) is considered a dynamic research problem and is beneficial during the testing stage of the software development life cycle. Several artificial-intelligence-based methods are available to predict software defects. However, detection accuracy is still low due to imbalanced datasets, poor feature learning, and inadequate tuning of model parameters. This paper proposes a novel attention-included deep learning (DL) model for SDP with effective feature learning and dimensionality reduction mechanisms. The system comprises six phases: dataset balancing, source code parsing, word embedding, feature extraction, dimensionality reduction, and classification. First, dataset balancing is performed using the density-peak-based k-means clustering (DPKMC) algorithm, which prevents the model from producing biased outcomes. Then, the system parses the source code into abstract syntax trees (ASTs), which capture the structure of and relationships between different elements of the code to enable type checking, and representative nodes on the ASTs are selected to form token vectors. Next, we use bidirectional encoder representations from transformers (BERT), which converts the token vectors into numerical vectors and extracts semantic features from the data. We then feed the embedded vectors to a multi-head-attention-incorporated bidirectional gated recurrent unit (MHBGRU) for contextual feature learning. After that, dimensionality reduction is performed using kernel principal component analysis (KPCA), which transforms the higher-dimensional data into lower dimensions and removes irrelevant features. Finally, the system uses a deep fully connected network with a softmax layer for defect prediction, in which the cross-entropy loss is utilized to minimize the prediction loss. Experiments on the National Aeronautics and Space Administration (NASA) and AEEEM datasets show that the system achieves better outcomes than the existing state-of-the-art models.
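A minimal sketch of the last two stages of the pipeline (KPCA reduction followed by a cross-entropy-trained softmax-style classifier), assuming the earlier stages (DPKMC balancing, AST parsing, BERT embedding, MHBGRU) have already produced feature vectors X; the component count and kernel choice are assumptions.

    from sklearn.decomposition import KernelPCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def build_tail(n_components=32):
        """KPCA dimensionality reduction + a linear softmax/cross-entropy head,
        standing in for the paper's deep fully connected classifier."""
        return make_pipeline(
            KernelPCA(n_components=n_components, kernel="rbf"),
            LogisticRegression(max_iter=1000),
        )

    # Usage, given contextual feature matrix X and defect labels y:
    #   model = build_tail()
    #   model.fit(X_train, y_train)
    #   preds = model.predict(X_test)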
The Internet of Things (IoT) has taken the interconnected world by storm. Due to their immense applicability, IoT devices are being scaled at exponential proportions worldwide. However, very little focus has been given to securing such devices. Because these devices are constrained in numerous aspects, network designers and administrators are left with no choice but to deploy them with minimal or no security at all. We have seen distributed denial-of-service attacks being raised using such devices during the infamous Mirai botnet attack in 2016. Here we propose a lightweight authentication protocol to provide proper access to such devices. We have considered several aspects while designing our authentication protocol, such as scalability, movement, user registration, and device registration. To define the architecture we used a three-layered model consisting of cloud, fog, and edge devices. We have also proposed several pre-existing cipher suites based on post-quantum cryptography for evaluation and comparison. We also provide a fail-safe mechanism for the situation where an authenticating server might fail, in which the deployed IoT devices can self-organize to keep providing services with no human intervention. We find that our protocol works fastest when using ring learning with errors. We prove the safety of our authentication protocol using the automated validation of Internet security protocols and applications (AVISPA) tool. In conclusion, we propose a safe, hybrid, and fast authentication protocol for authenticating IoT devices in a fog computing environment.
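The fail-safe idea above can be sketched as a simple timeout-plus-election loop: if the fog authenticator stops responding, the edge devices deterministically pick a stand-in among themselves. The abstract does not specify the actual mechanism, so the lowest-ID election, the timeout value, and all names here are purely illustrative.

    import time

    def failover_authenticator(devices, fog_alive, timeout=5.0):
        """Illustrative self-organization on fog-server failure.

        devices: list of comparable device IDs at the edge layer.
        fog_alive: callable returning True if the fog server responds.
        """
        deadline = time.time() + timeout
        while time.time() < deadline:
            if fog_alive():
                return "fog"  # server reachable: keep normal operation
            time.sleep(0.5)  # back off before re-probing
        # server presumed down: every device computes the same winner
        return min(devices)  # deterministic local election, no coordination

Because every device evaluates the same rule over the same membership list, no extra messages are needed to agree on the temporary authenticator.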
Significant advancements have been made in natural language processing (NLP), especially in numerous languages. The vast linguistic diversity of India presents unique challenges for automated language processing tasks...
Accurately detecting traffic anomalies is increasingly crucial in network management. Algorithms that model the traffic data as a matrix suffer from low detection accuracy, while the work using the tensor model ...