Recent advances in single-cell RNA sequencing (scRNA-seq) technology provide unprecedented opportunities for reconstructing gene regulatory networks (GRNs). Many models have been proposed to infer GRNs from large amounts of RNA-seq data, but most deep learning models rely on a priori gene regulatory networks to infer potential GRNs. Reconstructing GRNs from scRNA-seq data remains challenging due to the noise and sparsity introduced by the dropout effect. Here, we propose GAALink, a novel unsupervised deep learning method. It first constructs a gene similarity matrix and then refines it with a threshold. It then learns feature representations of genes through a graph attention autoencoder that propagates information across genes with different weights. Finally, the learned gene feature representations are used for matrix completion, from which the GRNs are reconstructed. Compared with seven existing GRN reconstruction methods, GAALink achieves more accurate performance on seven scRNA-seq datasets with four ground-truth networks. GAALink can provide a useful tool for inferring GRNs from scRNA-seq expression data.
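The described pipeline (similarity graph, threshold pruning, graph attention autoencoder, matrix completion) can be sketched as follows. This is a minimal illustration rather than GAALink's actual implementation: the Pearson-correlation similarity, the top-1% edge cutoff, the layer sizes, and the inner-product decoder are all assumptions, and PyTorch Geometric's GATConv stands in for the graph attention encoder.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class GATAutoencoder(nn.Module):
    """Graph attention encoder with an inner-product decoder for link prediction."""
    def __init__(self, in_dim, hid_dim=128, emb_dim=64, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hid_dim, heads=heads)
        self.gat2 = GATConv(hid_dim * heads, emb_dim, heads=1)

    def forward(self, x, edge_index):
        h = torch.relu(self.gat1(x, edge_index))
        z = self.gat2(h, edge_index)                 # gene embeddings
        adj_rec = torch.sigmoid(z @ z.t())           # completed adjacency (edge probabilities)
        return z, adj_rec

# Steps 1-2: gene-gene similarity from expression, pruned by a threshold (assumed: keep strongest 1% of pairs).
expr = torch.randn(300, 500)                         # toy matrix: 300 genes x 500 cells
sim = torch.corrcoef(expr)                           # Pearson correlation between genes (assumed similarity)
thresh = sim.abs().flatten().quantile(0.99)
adj = (sim.abs() >= thresh).float().fill_diagonal_(0)
edge_index = adj.nonzero().t()

# Steps 3-4: train the autoencoder to reconstruct the pruned graph; the dense
# reconstruction serves as the completed (inferred) regulatory network.
model = GATAutoencoder(in_dim=expr.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    _, adj_rec = model(expr, edge_index)
    loss = nn.functional.binary_cross_entropy(adj_rec, adj)
    opt.zero_grad(); loss.backward(); opt.step()
```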
Deep neural networks (DNNs) have recently shown great potential in solving partial differential equations (PDEs). The success of neural network-based surrogate models is attributed to their ability to learn a rich set of solution-related features. However, learning DNNs usually involves tedious training iterations to converge and requires a very large amount of training data, which hinders the application of these models to complex physical systems. To address this problem, we propose to apply the transfer learning approach to DNN-based PDE solving tasks. In our work, we create pairs of transfer experiments on the Helmholtz and Navier-Stokes equations by constructing subtasks with different source terms and Reynolds numbers. We also conduct a series of experiments to investigate the degree of generality of the features between different tasks. The results demonstrate that despite differences in the underlying PDE systems, the transfer methodology can lead to a significant improvement in the accuracy of the predicted solutions and achieve a maximum performance boost of 97.3% on widely used surrogate models.
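A minimal sketch of this transfer setup, assuming a generic fully connected surrogate and synthetic tensors in place of the Helmholtz/Navier-Stokes subtask data; freezing all but the output layer is one possible transfer scheme, not necessarily the one used in the paper.

```python
import torch
import torch.nn as nn

def make_surrogate(in_dim=2, out_dim=1, width=64, depth=4):
    """A generic fully connected surrogate mapping PDE inputs to solution values."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

def fit(model, x, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return model

# synthetic stand-ins for a data-rich source subtask and a data-poor target subtask
x_src, y_src = torch.randn(4096, 2), torch.randn(4096, 1)
x_tgt, y_tgt = torch.randn(256, 2), torch.randn(256, 1)

source_model = fit(make_surrogate(), x_src, y_src)          # 1) pretrain on the source subtask
target_model = make_surrogate()
target_model.load_state_dict(source_model.state_dict())     # 2) transfer the learned features
for p in target_model[:-1].parameters():                    # 3) freeze all but the output layer
    p.requires_grad = False                                  #    (one possible transfer scheme)
target_model = fit(target_model, x_tgt, y_tgt, epochs=100)   # 4) fine-tune on sparse target data
```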
With the continuous deepening of Artificial Neural Network (ANN) research, ANN model structures and functions are developing toward diversification and intelligence. However, models are mostly evaluated by the pros and cons of their problem-solving results, and evaluation from the biomimetic aspect of imitating biological neural networks is not inclusive enough. Therefore, a new ANN model evaluation strategy is proposed from the perspective of bionics in response to this problem. Firstly, four classical neural network models are examined: the Back Propagation (BP) network, the Deep Belief Network (DBN), the LeNet5 network, and the olfactory bionic model (KIII model), and the neuron transmission modes and equations, network structures, and weight updating principles of these models are analyzed in detail. The analysis shows that the KIII model comes closer to the actual biological nervous system than the other models, and the LeNet5 network simulates the nervous system in structure. Secondly, evaluation indexes of ANNs are constructed from the perspective of bionics: small-world, synchronization, and chaotic characteristics. Finally, the network models are quantitatively analyzed with these evaluation indexes. The experimental results show that the DBN network, LeNet5 network, and BP network have synchronization characteristics, and the DBN network and LeNet5 network have certain chaotic characteristics, but there is still a certain distance between the three classical neural networks and actual biological neural networks. The KIII model has certain small-world characteristics in structure, and its network also exhibits synchronization and chaotic characteristics. Compared with the DBN network, LeNet5 network, and BP network, the KIII model is closer to the real biological neural network.
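Of the three bionic indexes, the small-world characteristic is the easiest to illustrate in code. The sketch below computes a standard small-world index sigma against a random reference graph using networkx; it is a generic illustration (the toy graph and the reference construction are assumptions), and the synchronization and chaos indexes would additionally require simulating the network dynamics.

```python
import networkx as nx

def small_world_sigma(G, seed=0):
    """Small-world index sigma = (C / C_rand) / (L / L_rand); values above 1
    indicate high clustering with short path lengths relative to a random graph."""
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=seed)
    if not nx.is_connected(R):                    # keep the giant component so L_rand is defined
        R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
    return (C / nx.average_clustering(R)) / (L / nx.average_shortest_path_length(R))

# toy connectivity graph standing in for a network model's structural graph
G = nx.connected_watts_strogatz_graph(200, k=6, p=0.1, seed=1)
print(small_world_sigma(G))
```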
As the demands for superior agents grow, the training complexity of Deep Reinforcement Learning (DRL) becomes higher. Thus, accelerating training of DRL has become a major research focus. Dividing the DRL training pro...
ISBN (digital): 9798331535087
ISBN (print): 9798331535094
Surgical hemorrhage is a common occurrence in surgeries. Accurate segmentation of hemorrhage regions is important for surgical navigation and post-operative assessment. Some segmentation models focus on medical images, but their performance on hemorrhage data is limited. Moreover, annotating a large amount of hemorrhage data is a significant challenge, and previous segmentation methods struggle with complex hemorrhage characteristics such as unclear boundaries and scattered targets. The Segment Anything Model 2 (SAM2) shows significant zero-shot ability in general image segmentation and can be fine-tuned for downstream tasks. However, it often faces limitations in hemorrhage segmentation tasks that lack annotations. This paper proposes a fine-tuning approach for SAM2 that significantly improves its performance on hemorrhage segmentation with limited data. Our method provides better segmentation performance on few-shot hemorrhage data than SAM- and SAM2-based models.
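A hedged sketch of few-shot fine-tuning in a promptless binary-mask setting: freeze the heavy image encoder and train only the mask decoder with a BCE + Dice objective. The actual SAM2 checkpoint loading and prompt handling are not shown (a tiny stand-in module is used instead), the "mask_decoder" parameter naming and forward signature are assumptions, and this is not necessarily the paper's fine-tuning recipe.

```python
import torch
import torch.nn as nn

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss on probability maps; complements BCE on scattered targets."""
    inter = (prob * target).sum(dim=(-2, -1))
    union = prob.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1))
    return (1 - (2 * inter + eps) / (union + eps)).mean()

def finetune_few_shot(model, loader, epochs=20, lr=1e-4):
    # Freeze the heavy image encoder and adapt only the mask decoder -- a common
    # few-shot recipe; the "mask_decoder" parameter naming is an assumption.
    for name, p in model.named_parameters():
        p.requires_grad = "mask_decoder" in name
    opt = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, masks in loader:
            logits = model(images)                   # assumed: model returns mask logits
            loss = bce(logits, masks) + dice_loss(torch.sigmoid(logits), masks)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# smoke test with a tiny stand-in network (real SAM2 checkpoint loading not shown)
class TinySeg(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_encoder = nn.Conv2d(3, 8, 3, padding=1)
        self.mask_decoder = nn.Conv2d(8, 1, 1)
    def forward(self, x):
        return self.mask_decoder(torch.relu(self.image_encoder(x)))

loader = [(torch.randn(2, 3, 64, 64), torch.randint(0, 2, (2, 1, 64, 64)).float())]
finetune_few_shot(TinySeg(), loader, epochs=2)
```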
Graphs are a fundamental data structure for describing relationships between entities. Many application domains in the real world depend heavily on graph data. However, graph applications differ greatly from traditional applications, and using general-purpose platforms for them is inefficient, which has motivated research on dedicated graph processing platforms. In this survey, we systematically categorize graph workloads and applications and provide a detailed review of existing graph processing platforms, dividing them into general-purpose and specialized systems. We thoroughly analyze their implementation technologies, including programming models, partitioning strategies, communication models, execution models, and fault tolerance strategies. Finally, we analyze recent advances and present four open problems for future research.
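As a concrete illustration of one of the programming models surveyed, the following is a toy vertex-centric (Pregel-style, bulk-synchronous) single-source shortest-paths program; it is a didactic sketch, not code from any of the reviewed systems.

```python
import math

def pregel_sssp(adj, source):
    """adj: {vertex: [(neighbor, weight), ...]}. Returns shortest distances from source."""
    dist = {v: math.inf for v in adj}
    messages = {source: [0.0]}                       # superstep 0: seed the source
    while messages:                                  # run until no vertex receives messages
        next_messages = {}
        for v, incoming in messages.items():
            best = min(incoming)
            if best < dist[v]:                       # the per-vertex "compute" function
                dist[v] = best
                for u, w in adj[v]:                  # send messages along out-edges
                    next_messages.setdefault(u, []).append(best + w)
        messages = next_messages                     # synchronization barrier between supersteps
    return dist

graph = {0: [(1, 1.0), (2, 4.0)], 1: [(2, 1.0)], 2: []}
print(pregel_sssp(graph, 0))   # {0: 0.0, 1: 1.0, 2: 2.0}
```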
Jamming attacks can severely affect the performance of wireless sensor networks (WSNs) due to the broadcast nature of the wireless medium. In order to localize the source of the attack, in this paper we propose a jammer localization algorithm named Minimum-circle-covering based localization (MCCL). Compared with existing solutions that rely on wireless propagation parameters, MCCL depends only on the location information of the sensor nodes at the border of the jammed region. MCCL uses plane geometry, in particular the minimum circle covering technique, to form an approximate jammed region, and the center of the covering circle is taken as the estimated position of the jammer. Simulation results show that MCCL achieves higher accuracy than existing solutions under varying jammer transmission ranges and node densities.
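The geometric core of MCCL, computing the minimum covering circle of the boundary nodes and taking its center as the jammer estimate, can be sketched as follows. This uses a simple Badoiu-Clarkson-style approximation, which may differ from the exact circle-covering procedure used in the paper, and the boundary coordinates are made up.

```python
import math

def min_enclosing_circle_approx(points, iters=1000):
    """Approximate minimum covering circle (Badoiu-Clarkson-style iteration).
    Returns (center_x, center_y, radius); the center is the jammer estimate."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    for i in range(1, iters + 1):
        # find the point currently farthest from the center
        fx, fy = max(points, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
        # move the center toward it with a shrinking step size
        cx += (fx - cx) / (i + 1)
        cy += (fy - cy) / (i + 1)
    r = max(math.hypot(p[0] - cx, p[1] - cy) for p in points)
    return cx, cy, r

# toy boundary nodes of a jammed region; the circle center approximates the jammer location
boundary = [(1.0, 2.0), (4.0, 6.0), (5.0, 1.5), (2.5, 7.0), (6.0, 4.0)]
print(min_enclosing_circle_approx(boundary))
```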
Anomalies in time series appear consecutively, forming anomaly segments. Applying classical point-based evaluation metrics to the detection of segments leads to considerable underestimation, so most related studies resort to point adjustment. This operation treats every point within a segment as a true positive as soon as a single point raises an alarm, resulting in significant overestimation and creating an illusion of superior performance. This paper proposes smoothing point adjustment, a novel range-based evaluation protocol for time series anomaly detection. Our protocol reflects detection performance impartially by carefully considering the specific location and frequency of alarms in the raw results. This is achieved by smoothly determining the adjustment range and rewarding early detection via a ranging function and a rewarding function. Experiments on different datasets show that, compared with other evaluation metrics, our protocol yields a performance ranking of methods that is more consistent with expectations.
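For reference, the classical point adjustment operation criticized above can be sketched as follows; the paper's smoothing point adjustment instead limits this credit through a ranging function and a rewarding function, whose exact forms are not reproduced here.

```python
import numpy as np

def point_adjust(labels, preds):
    """Classical point adjustment: if any point inside a true anomaly segment is
    flagged, every point of that segment is counted as detected. This is the
    operation the paper argues leads to overestimated scores."""
    adjusted = preds.copy()
    i, n = 0, len(labels)
    while i < n:
        if labels[i] == 1:                       # start of an anomaly segment
            j = i
            while j < n and labels[j] == 1:
                j += 1
            if adjusted[i:j].any():              # one alarm anywhere in the segment...
                adjusted[i:j] = 1                # ...credits the whole segment
            i = j
        else:
            i += 1
    return adjusted

labels = np.array([0, 1, 1, 1, 1, 0, 0, 1, 1, 0])
preds  = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0])   # a single late alarm
print(point_adjust(labels, preds))               # the whole first segment becomes "detected"
```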
Event extraction (EE) is a complex natural language processing (NLP) task that aims at identifying and classifying triggers and arguments in raw text. The polysemy of triggers and arguments stands out as one of the key challenges affecting the precise extraction of events. Existing approaches commonly consider the semantic distribution of triggers and arguments to be balanced. However, the sample quantities of different semantics in the same trigger or argument vary in real-world scenarios, leading to a biased semantic distribution. This bias introduces two challenges: (1) low-frequency semantics is difficult to identify; (2) high-frequency semantics is often mistakenly identified. To tackle these challenges, we propose an adaptive learning method with a reward-penalty mechanism for balancing the semantic distribution in polysemous triggers and arguments. The reward-penalty mechanism balances the semantic distribution by enlarging the gap between the target and non-target semantics, rewarding correct classifications and penalizing incorrect ones. In addition, we propose a sentence-level event situation awareness (SA) mechanism to guide the encoder to accurately learn the knowledge of events mentioned in the sentence, thereby enhancing target event semantics in the distribution of polysemous triggers and arguments. Finally, for the various semantics in different tasks, we propose task-specific semantic decoders to precisely identify the boundaries of the predicted triggers and arguments for each task. The experimental results on ACE2005 and its variants, along with the Rich Entities, Relations, and Events (ERE) dataset, demonstrate the superiority of our approach over single-task and multi-task EE baselines.
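One plausible instantiation of such a reward-penalty mechanism is a weighted cross-entropy that down-weights already-correct predictions and up-weights mistakes, thereby widening the gap between target and non-target semantics. The sketch below is an assumption-laden illustration; the reward/penalty weights and the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def reward_penalty_loss(logits, targets, reward=0.5, penalty=2.0):
    """Weighted cross-entropy: correctly classified samples are rewarded with a
    smaller weight, misclassified samples are penalized with a larger one."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    correct = logits.argmax(dim=-1).eq(targets)
    weights = torch.where(correct,
                          torch.full_like(ce, reward),
                          torch.full_like(ce, penalty))
    return (weights * ce).mean()

logits = torch.randn(8, 5, requires_grad=True)   # 8 tokens, 5 candidate semantics
targets = torch.randint(0, 5, (8,))
loss = reward_penalty_loss(logits, targets)
loss.backward()
```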
ISBN (print): 9781665408783
Mixed-type data with both categorical and numerical features are ubiquitous in network security, but few existing methods can deal with them. Existing methods usually process mixed-type data through feature conversion, and their performance is degraded by the information loss and noise caused by the transformation. Meanwhile, existing methods usually superimpose domain knowledge and machine learning with fixed thresholds; they cannot dynamically adjust the anomaly threshold to the actual scenario, so the anomalies obtained are inaccurate and performance suffers. To address these issues, this paper proposes a novel anomaly detection method based on reinforcement learning, termed ADRL, which uses reinforcement learning to dynamically search for thresholds and accurately obtain anomaly candidate sets, fully fusing domain knowledge and machine learning so that they promote each other. Specifically, ADRL uses prior domain knowledge to label known anomalies, and uses entropy and a deep autoencoder in the categorical and numerical feature spaces, respectively, to obtain anomaly scores that incorporate the known anomaly information; these are integrated into overall anomaly scores via a dynamic integration strategy. To obtain accurate anomaly candidate sets, ADRL uses reinforcement learning to search for the best threshold. In detail, it initializes the anomaly threshold to get an initial anomaly candidate set and performs frequent rule mining on that candidate set to form new knowledge. Then, ADRL uses the obtained knowledge to adjust the anomaly scores and compute a score modification rate. According to the modification rate, different threshold modification strategies are executed, and the best threshold, that is, the threshold under the maximum modification rate, is finally obtained, along with the modified anomaly scores. The scores are used to re-run machine learning to improve the algorithm's accuracy for anomalo
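The threshold-search loop can be illustrated with the sketch below, which replaces the reinforcement-learning agent with a simple greedy search over candidate thresholds and uses a hypothetical modification_rate feedback function standing in for the rule-mining feedback described above; it is not ADRL's actual algorithm.

```python
import numpy as np

def search_threshold(scores, modification_rate, candidates=None):
    """Greedy stand-in for a learned threshold search: evaluate candidate
    thresholds and keep the one whose anomaly candidate set yields the highest
    modification rate (an externally supplied feedback function)."""
    if candidates is None:
        candidates = np.quantile(scores, np.linspace(0.80, 0.99, 20))
    best_t, best_rate = None, -np.inf
    for t in candidates:
        anomalies = scores >= t                  # anomaly candidate set for this threshold
        rate = modification_rate(scores, anomalies)
        if rate > best_rate:
            best_t, best_rate = t, rate
    return best_t, best_rate

# toy usage with random scores and a dummy feedback function
rng = np.random.default_rng(0)
scores = rng.normal(size=1000)
dummy_rate = lambda s, a: a.mean() * (s[a].mean() if a.any() else 0.0)
print(search_threshold(scores, dummy_rate))
```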