While the multi-view 3D reconstruction task has made significant progress, existing methods simply fuse multi-view image features without effectively leveraging available auxiliary information, especially the viewpoin...
Feature selection is an important data preprocessing process in artificial intelligence, which aims to eliminate redundant features while retaining essential features. Measuring feature significance and relevance betw...
ISBN (print): 9798331314385
The remarkable capabilities of modern large language models are rooted in the vast repositories of knowledge encoded within their parameters, enabling them to perceive the world and engage in reasoning. The inner workings of how these models store knowledge have long been a subject of intense interest and investigation among researchers. To date, most studies have concentrated on isolated components within these models, such as the Multilayer Perceptrons and attention heads. In this paper, we delve into the computation graph of the language model to uncover the knowledge circuits that are instrumental in articulating specific knowledge. The experiments, conducted with GPT-2 and TinyLlama, have allowed us to observe how certain information heads, relation heads, and Multilayer Perceptrons collaboratively encode knowledge within the model. Moreover, we evaluate the impact of current knowledge editing techniques on these knowledge circuits, providing deeper insights into the functioning and constraints of these editing methodologies. Finally, we utilize knowledge circuits to analyze and interpret language model behaviors such as hallucinations and in-context learning. We believe that knowledge circuits hold potential for advancing our understanding of Transformers and guiding the improved design of knowledge editing. Code and data are available at https://***/zjunlp/KnowledgeCircuits.
Learning with Noisy Labels (LNL) aims to improve the model generalization when facing data with noisy labels, and existing methods generally assume that noisy labels come from known classes, called closed-set noise. H...
Disk failure is one of the most important reliability problems in large-scale network storage systems. Disk failure may lead to serious data loss and even disastrous consequences if the missing data cannot be recovere...
In recent years, reconfigurable intelligent surface (RIS) technologies have been proposed due to its ability in shaping propagation environment for future 6G network development. Existing RIS technologies mainly inclu...
Knowledge Base Question Answering (KBQA) has been a long-standing field to answer questions based on knowledge bases. Recently, the evolving dynamics of knowledge have attracted a growing interest in Temporal Knowledg...
Seasonal features of various kinds of time-varying nodes in large-scale complex networks could facilitate the effective network optimization. It is necessary to find the nodes with seasonal features from the limited l...
ISBN (digital): 9798350380286
ISBN (print): 9798350380293
Aiming at two problems of the multi-objective artificial physics optimization algorithm (MOAPO) in solving multi-objective optimization problems, namely insufficient use of the information carried by elite particles in the archive and unstable particle motion in the population, a multi-objective artificial physics optimization algorithm based on two-phase search (TP-MOAPO) is proposed. First, the algorithm improves the calculation of particle mass, so that the strength or weakness of each particle is accurately reflected in its mass while the efficiency of the mass calculation is improved. Next, a two-phase search strategy is proposed: the algorithm has strong exploration ability in the first phase, and in the second phase the exploitation capability is gradually strengthened over the iterations, which resolves the unstable motion of particles during the search. Finally, the simulated binary crossover (SBX) and polynomial-based mutation (PM) operators are applied in the archive to further enhance the search capability of the algorithm. To verify the performance of TP-MOAPO, 21 benchmark functions were selected for comparison against classical multi-objective particle swarm optimization algorithms (MOPSO, dMOPSO, SMPSO, MMOPSO, and NMPSO), and the experimental results show the superiority of TP-MOAPO on these functions.
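The SBX and PM operators named in this abstract are standard evolutionary-computation components. As a minimal illustrative sketch (not the paper's implementation; the eta values, bounds, and mutation probability below are illustrative defaults, and the archive integration is omitted):

```python
import random

def sbx_crossover(p1, p2, eta=20.0, low=0.0, high=1.0):
    """Simulated binary crossover (SBX) on two real-valued parents.
    Larger eta keeps the children closer to the parents."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2 * u) ** (1.0 / (eta + 1))
        else:
            beta = (1.0 / (2 * (1 - u))) ** (1.0 / (eta + 1))
        y1 = 0.5 * ((1 + beta) * x1 + (1 - beta) * x2)
        y2 = 0.5 * ((1 - beta) * x1 + (1 + beta) * x2)
        c1.append(min(max(y1, low), high))  # clip to the decision bounds
        c2.append(min(max(y2, low), high))
    return c1, c2

def polynomial_mutation(x, eta=20.0, low=0.0, high=1.0, pm=0.1):
    """Polynomial-based mutation (PM): perturb each gene with probability pm."""
    out = []
    for v in x:
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                delta = (2 * u) ** (1.0 / (eta + 1)) - 1
            else:
                delta = 1 - (2 * (1 - u)) ** (1.0 / (eta + 1))
            v = min(max(v + delta * (high - low), low), high)
        out.append(v)
    return out
```

Applying these operators to archive members, as the abstract describes, would generate new candidate solutions near the elite particles without disturbing the population's own physics-based motion.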
The identification of drug-protein interactions (DTIs) is a critical step in drug development and repositioning. However, detecting these interactions experimentally presents a formidable challenge. Existing models often struggle to extract relevant information from drug and protein sequences, and they fail to consider pre-existing interactions between pairs. Here, we introduce a deep learning model, DPAG, based on attention mechanisms and graph neural networks, which overcomes these limitations. DPAG leverages information from both the drug and protein sequences, as well as the associations between drug-protein pairs, to predict DTIs. We conducted multiple experiments to evaluate the performance of the model on benchmark datasets, and the results showed that DPAG achieved excellent performance in terms of AUC, AUPR, precision, and recall. Our findings suggest that DPAG holds significant potential as a promising tool for predicting DTIs in drug development and repositioning. Code is available at https://***/veresse/DPAG.
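The general pattern the abstract describes, attention over each sequence combined with a graph view of known associations, can be illustrated generically. The NumPy sketch below (toy dimensions, random weights; this is not DPAG's actual architecture) pools token embeddings with softmax attention and runs one symmetric-normalized graph-convolution step, then scores a pair with a sigmoid:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(H, w):
    # H: (L, d) per-token embeddings; w: (d,) scoring vector.
    # Softmax over token scores, then a weighted sum -> one (d,) vector.
    s = H @ w
    a = np.exp(s - s.max())
    a /= a.sum()
    return a @ H

def gcn_layer(A, X, W):
    # One graph convolution with self-loops and symmetric normalization.
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(deg, deg))
    return np.maximum(A_norm @ X @ W, 0.0)  # ReLU

# Toy setup: a 5-token drug sequence and an 8-token protein sequence,
# both embedded in d=16, plus a 4-node association graph.
d = 16
drug = attention_pool(rng.standard_normal((5, d)), rng.standard_normal(d))
prot = attention_pool(rng.standard_normal((8, d)), rng.standard_normal(d))

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
G = gcn_layer(A, rng.standard_normal((4, d)), rng.standard_normal((d, d)))

# Interaction score: sigmoid of a dot product between the two pooled views.
score = 1.0 / (1.0 + np.exp(-(drug @ prot)))
```

In a real model the pooled sequence vectors and graph-node features would be learned jointly and fused before scoring; here the dot-product score only demonstrates how the two views meet in a single prediction.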