In synaptic molecular communication (MC), the activation of postsynaptic receptors by neurotransmitters (NTs) is governed by a stochastic reaction-diffusion process. This randomness of synaptic MC contributes to the randomness of the electrochemical downstream signal in the postsynaptic cell, called the postsynaptic membrane potential (PSP). Since the randomness of the PSP is relevant for neural computation and learning, characterizing the statistics of the PSP is critical. However, the statistical characterization of the synaptic reaction-diffusion process is difficult because the reversible bimolecular reaction of NTs with receptors renders the system nonlinear. Consequently, no model is currently available that characterizes the impact of the statistics of postsynaptic receptor activation on the PSP. In this work, we propose a novel statistical model for the synaptic reaction-diffusion process in terms of the chemical master equation (CME). We further propose a novel numerical method that allows the CME to be computed efficiently, and we use this method to characterize the statistics of the PSP. Finally, we present results from stochastic particle-based computer simulations which validate the proposed models. We show that the biophysical parameters governing synaptic transmission shape the autocovariance of the receptor activation and, ultimately, the statistics of the PSP. Our results suggest that the processing of the synaptic signal by the postsynaptic cell effectively mitigates synaptic noise while the statistical characteristics of the synaptic signal are preserved. The results presented in this paper contribute to a better understanding of the impact of the randomness of synaptic signal transmission on neuronal information processing.
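The reversible bimolecular reaction NT + R ⇌ C that makes the system nonlinear can be illustrated with a standard Gillespie-type stochastic simulation of the underlying CME. The sketch below is a minimal illustration with hypothetical rate constants and copy numbers; it is not the paper's numerical CME solver and it ignores diffusion.

```python
import numpy as np

def gillespie_binding(nt, r, k_on, k_off, t_end, rng=None):
    """Stochastic simulation of NT + R <-> C (hypothetical rates and counts).

    nt, r : initial copy numbers of neurotransmitters and free receptors
    k_on  : binding propensity constant, k_off : unbinding rate
    Returns sampled reaction times and bound-receptor (C) counts.
    """
    rng = np.random.default_rng() if rng is None else rng
    t, c = 0.0, 0
    times, bound = [0.0], [0]
    while t < t_end:
        a_bind = k_on * nt * r            # propensity of NT + R -> C
        a_unbind = k_off * c              # propensity of C -> NT + R
        a_total = a_bind + a_unbind
        if a_total == 0.0:
            break
        t += rng.exponential(1.0 / a_total)    # time to the next reaction
        if rng.random() < a_bind / a_total:    # choose which reaction fires
            nt, r, c = nt - 1, r - 1, c + 1
        else:
            nt, r, c = nt + 1, r + 1, c - 1
        times.append(t)
        bound.append(c)
    return np.array(times), np.array(bound)

# Example: 1000 released NTs and 200 postsynaptic receptors (illustrative values)
t, c = gillespie_binding(nt=1000, r=200, k_on=1e-4, k_off=5.0, t_end=10.0)
```

Averaging many such trajectories gives empirical estimates of the mean and autocovariance of the number of activated receptors, which are the quantities the paper characterizes.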
To enhance the capability of identifying unknown emitters in open spaces, an open multiscale attention kernel (MSAK)-convolutional neural network-long short-term memory (CNN-LSTM) structure is proposed. To this end, first, an MSAK module and a CNN-LSTM structure are introduced; then, the depth and complexity of the feature extraction network are increased to enhance its representation capability. To classify unknown emitters accurately, the MSAK-CNN-LSTM model is extended to an open-MSAK-CNN-LSTM model with open-set recognition capability. Additionally, two preprocessing procedures are summarised, and their strengths and weaknesses are compared. Experimental results show that the proposed open-MSAK-CNN-LSTM model achieves satisfactory accuracy in identifying unknown emitters in open spaces and has significant advantages in low signal-to-noise ratio (SNR) scenarios. The challenge of enhancing the capability of identifying unknown emitters in open spaces is thereby addressed. The authors believe that their study significantly contributes to the literature because it introduces an innovative approach to radar emitter identification. By combining the MSAK-CNN-LSTM model with open-set recognition capabilities, the authors extend the boundaries of existing methods, enabling accurate classification of unknown emitters in open environments.
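As a rough illustration of how multi-scale convolution kernels can feed an LSTM for emitter classification, the PyTorch sketch below runs parallel 1-D convolutions with different kernel sizes over a two-channel (e.g., I/Q) sequence. All layer sizes, channel counts, and the class count are assumptions; the attention mechanism and the open-set rejection logic of the open-MSAK-CNN-LSTM model are not reproduced.

```python
import torch
import torch.nn as nn

class MultiScaleCNNLSTM(nn.Module):
    """Illustrative multi-scale CNN + LSTM classifier (hypothetical sizes)."""
    def __init__(self, in_ch=2, n_classes=10, hidden=64):
        super().__init__()
        # Parallel 1-D convolutions with different kernel sizes capture
        # features at several time scales (the multiscale-kernel idea).
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, 16, kernel_size=k, padding=k // 2)
            for k in (3, 7, 15)
        ])
        self.lstm = nn.LSTM(input_size=48, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                   # x: (batch, channels, time)
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        out, _ = self.lstm(feats.transpose(1, 2))   # (batch, time, 48)
        return self.head(out[:, -1])                # logits from the last step

logits = MultiScaleCNNLSTM()(torch.randn(4, 2, 256))
```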
Recently, convolutional neural network (CNN) based approaches have shown remarkable achievements in single image super-resolution (SISR). However, CNN-based SR methods often struggle with the trade-off between image reconstruction quality and model complexity. In this paper, we propose a Partial Convolution Residual Network (PCRN), which improves SR performance in terms of image reconstruction quality and model size through two design aspects. First, we revisit the pixel-shuffle upsampling function commonly used in SR models and demonstrate that the pixel-shuffle operation causes a significant dispersion of the receptive fields of convolutional kernels, reducing the model's ability to extract local texture features. To address this problem, we design a pointwise convolution residual module that focuses the receptive field, allowing the model to capture local features more precisely and thereby effectively enhancing image restoration performance. Second, inspired by FasterNet, we leverage partial convolution to reduce the model's complexity and introduce the partial convolution residual block as the foundation of PCRN. This design significantly reduces redundant channels and memory access operations, further improving model performance. Experimental results demonstrate that PCRN outperforms other typical SR models.
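The two ingredients named in the abstract, a partial convolution residual block and pixel-shuffle upsampling, can be sketched as follows in PyTorch. The channel counts, the partial fraction, and the upscale factor are hypothetical; this is not PCRN's actual architecture.

```python
import torch
import torch.nn as nn

class PartialConvBlock(nn.Module):
    """Illustrative FasterNet-style partial convolution with a residual path.

    Only the first 1/div of the channels passes through the 3x3 convolution;
    the rest is left untouched, which cuts FLOPs and memory accesses.
    Channel counts here are hypothetical, not PCRN's.
    """
    def __init__(self, channels=64, div=4):
        super().__init__()
        self.c_part = channels // div
        self.pconv = nn.Conv2d(self.c_part, self.c_part, 3, padding=1)
        self.pwconv = nn.Conv2d(channels, channels, 1)   # pointwise channel mixing

    def forward(self, x):
        x1, x2 = x[:, :self.c_part], x[:, self.c_part:]
        y = torch.cat([self.pconv(x1), x2], dim=1)
        return x + self.pwconv(torch.relu(y))            # residual connection

# Pixel-shuffle upsampler commonly used at the tail of SR networks
upsampler = nn.Sequential(nn.Conv2d(64, 3 * 4, 3, padding=1), nn.PixelShuffle(2))
sr = upsampler(PartialConvBlock()(torch.randn(1, 64, 32, 32)))  # (1, 3, 64, 64)
```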
ISBN: (Print) 9798350349405; 9798350349399
Understanding which brain regions are associated with specific neurological disorders has been an important area of neuroimaging research, with important implications for biomarker and diagnostic studies. In this paper, we propose an interpretable deep graph neural network (GNN) framework, namely IDGNN, to analyze functional magnetic resonance images (fMRI) and discover neurological biomarkers. Specifically, we design a novel deep graph convolutional layer to better utilize the spatial and functional information of fMRI and to aggregate feature information from multi-hop neighbor regions of interest (ROIs) more efficiently. Considering the need for interpretability in brain image analysis, we adapt the Gradient Class Activation Mapping (Grad-CAM) technique to fMRI brain graphs in order to discover the most significant ROIs identified by IDGNN from brain connectivity patterns. Furthermore, we employ an attentional feature fusion mechanism to better fuse multi-scale features. We apply IDGNN to the ABIDE fMRI dataset. Results show that our method outperforms several competing methods and successfully identifies biomarkers of autism spectrum disorder (ASD).
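A minimal sketch of the kind of graph convolution over ROI features that such a framework builds on is shown below, using the standard symmetrically normalized adjacency. The ROI count, feature dimension, and layer width are assumptions; IDGNN's deep layer design, Grad-CAM extension, and attentional fusion are not reproduced.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """Minimal graph convolution over ROI features (illustrative, not IDGNN).

    adj: (n_roi, n_roi) functional-connectivity adjacency, x: (batch, n_roi, d).
    Uses the standard symmetric normalisation D^{-1/2} (A + I) D^{-1/2}.
    """
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.clamp(min=1e-8).rsqrt())
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return torch.relu(self.lin(a_norm @ x))   # aggregate neighbors, then transform

# 90 ROIs with 90-dimensional connectivity profiles as node features (hypothetical)
x, adj = torch.randn(8, 90, 90), torch.rand(90, 90)
h = GraphConvLayer(90, 64)(x, adj)               # (8, 90, 64)
```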
ISBN: (Print) 9798350344820; 9798350344813
Decentralized Frank-Wolfe methods are suitable for solving decentralized constrained optimization with computationally limited clients, as they solve a linear program to find the descent direction instead of performing the computationally intensive projection operation. However, traditional decentralized Frank-Wolfe approaches may exhibit limitations in scenarios where convergence speed is critical. To address this challenge, we propose an unrolled decentralized Frank-Wolfe method with improved convergence. In particular, each layer learns the best descent direction by training a neural network, facilitating faster convergence through layer-wise updates. Furthermore, we introduce two weight sharing techniques, namely intra-client and inter-client weight sharing, which reduce the training complexity associated with a growing number of network layers and clients, respectively. These techniques ensure the scalability of the model and improve its performance. We demonstrate the efficacy of the proposed unrolled model on various machine-learning tasks with both synthetic and real-world datasets.
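For context, the sketch below shows one round of a plain (non-learned) decentralized Frank-Wolfe update: a gossip-averaging step followed by a projection-free linear minimization oracle, here over an l1-ball. The mixing matrix, constraint set, and step size are illustrative assumptions; the unrolled method in the paper replaces this fixed rule with learned, layer-wise parameters.

```python
import numpy as np

def lmo_l1(grad, radius=1.0):
    """Linear minimisation oracle over an l1-ball: argmin_{||s||_1 <= r} <grad, s>."""
    s = np.zeros_like(grad)
    k = np.argmax(np.abs(grad))
    s[k] = -radius * np.sign(grad[k])
    return s

def decentralized_frank_wolfe(grads, x, W, step, radius=1.0):
    """One hand-crafted decentralized Frank-Wolfe round for illustration.

    grads: list of per-client gradient functions, x: (n_clients, d) iterates,
    W: doubly-stochastic mixing matrix encoding the communication graph.
    """
    x_mix = W @ x                                    # gossip-averaging step
    for i in range(x.shape[0]):
        s = lmo_l1(grads[i](x_mix[i]), radius)       # projection-free direction
        x_mix[i] = (1 - step) * x_mix[i] + step * s  # convex-combination update
    return x_mix

# Toy quadratic objectives on 3 clients (hypothetical data)
A = [np.eye(5) * (i + 1) for i in range(3)]
grads = [lambda z, Ai=Ai: Ai @ z - 1.0 for Ai in A]
W = np.full((3, 3), 1 / 3)
x = decentralized_frank_wolfe(grads, np.zeros((3, 5)), W, step=0.1)
```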
Signal decomposition (analysis) and reconstruction (synthesis) are cornerstones of signal processing and feature recognition tasks. Signal decomposition is traditionally achieved by projecting data onto predefined basis functions, often known as atoms. Coefficient manipulation (e.g., thresholding) combined with signal reconstruction then either provides signals with enhanced quality or permits extraction of desired features only. More recently, dictionary learning and deep learning have also been actively used for similar tasks. The purpose of dictionary learning is to derive the most appropriate basis functions directly from the observed data. In deep learning, neural networks or other transfer functions are taught to perform either feature classification or data enhancement directly, given only some training data. This review first shows how popular signal processing methods, such as basis pursuit and sparse coding, are related to analysis and synthesis. We then explain how dictionary learning and deep learning using neural networks can also be interpreted as generalized analysis and synthesis methods. We introduce the underlying principles of all techniques and then show their inherent strengths and weaknesses using various examples, including two toy examples, a moonscape image, a magnetic resonance image, and geophysical data.
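As a concrete example of the synthesis viewpoint, the sketch below solves a basic sparse-coding problem (an l1-regularized basis-pursuit variant) with iterative soft-thresholding (ISTA) over a random dictionary. The dictionary size, sparsity level, and regularization weight are illustrative choices, not taken from the review.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for the synthesis / sparse-coding problem
    min_x 0.5 * ||y - D x||_2^2 + lam * ||x||_1.

    D: (m, n) dictionary whose columns are atoms, y: (m,) observed signal.
    """
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)                 # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Toy example: recover a 3-sparse code from a random dictionary (illustrative)
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 0.5]
x_hat = ista(D, D @ x_true, lam=0.05)         # analysis: estimate the coefficients
y_rec = D @ x_hat                             # synthesis: reconstruct the signal
```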
360° depth estimation has been extensively studied because 360° images provide a full field of view of the surrounding environment as well as a detailed description of the entire scene. However, most well-studied convolutional neural networks (CNNs) for 360° depth estimation extract local features well but fail to capture rich global features from the panorama due to their fixed receptive field. PCformer, a parallel convolutional transformer network that combines the benefits of CNNs and transformers, is proposed for 360° depth estimation. The transformer naturally models long-range dependencies and extracts global features. With PCformer, both global dependencies and local spatial features can be efficiently captured. To fully incorporate global and local features, a dual attention fusion module is designed. In addition, a distortion-weighted loss function is designed to reduce the effect of distortion in panoramas. Extensive experiments demonstrate that the proposed method achieves competitive results against state-of-the-art methods on three benchmark datasets. Additional experiments also demonstrate that the proposed model has benefits in terms of model complexity and generalisation capability.
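One common way to realize a distortion-weighted loss for equirectangular panoramas is to down-weight per-pixel errors by the cosine of the latitude, since rows near the poles are heavily oversampled. The sketch below shows such a latitude-weighted L1 loss; it mirrors the general idea only and is not PCformer's exact loss formulation.

```python
import torch

def distortion_weighted_l1(pred, target):
    """Latitude-weighted L1 loss for equirectangular depth maps (illustrative).

    Pixels near the poles of a panorama are heavily stretched, so their errors
    are down-weighted by cos(latitude). pred, target: (batch, 1, H, W).
    """
    h = pred.shape[-2]
    # latitude of each row, from +pi/2 (top) to -pi/2 (bottom)
    lat = (0.5 - (torch.arange(h, dtype=pred.dtype, device=pred.device) + 0.5) / h) * torch.pi
    w = torch.cos(lat).view(1, 1, h, 1)
    # weighted mean absolute error over all pixels
    return (w * (pred - target).abs()).sum() / (w.sum() * pred.shape[-1] * pred.shape[0])

loss = distortion_weighted_l1(torch.rand(2, 1, 256, 512), torch.rand(2, 1, 256, 512))
```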
Graph convolutional neural networks (GCNs) have shown promising results in hand gesture recognition based on 3D skeletal data. However, most existing GCN methods rely on manually crafted graph structures based on the physical structure of the human hand. During training, each graph node can only establish connections according to these manual settings and cannot perceive new relationships between skeleton nodes that arise during gesture execution. This limitation leads to inflexible and often suboptimal graph topologies. Shift graph convolutional networks improve the flexibility of the receptive field by altering the graph structure, achieving particularly good results with global shift angles. To address the shortcomings of previous GCN methods, an adaptive shift graph convolutional neural network (AS-GCN) is proposed for hand gesture recognition. AS-GCN draws inspiration from shift graph convolutional networks and uses the characteristics of each action to guide the graph network in performing shift operations, aiming to accurately select the nodes whose receptive field should be expanded. Experiments are conducted on the SHREC'17 dataset for general skeleton-based gesture recognition, both with and without physical constraints on skeletal relationships. Compared to existing state-of-the-art (SOTA) algorithms, AS-GCN demonstrates average improvements of 5.13% and 8.33% in recognition accuracy for the 14-gesture and 28-gesture settings, respectively, under physical constraints. Without physical constraints, AS-GCN achieves average improvements of 4% and 7.97% for the 14-gesture and 28-gesture settings, respectively. The AS-GCN-C variant boosts accuracy for the 14- and 28-gesture settings on the DHG-14/28 dataset, outperforming several SOTA methods by 1.6-13.2% and 6.8-18.4%, respectively. Similarly, the AS-GCN-A variant improves accuracy for both gesture settings, surpassing SOTA by margins ranging f
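For reference, the sketch below implements a plain, non-adaptive graph-shift block of the kind Shift-GCN-style methods build on: channel c of every joint takes its feature from the joint c positions away, after which a 1x1 convolution mixes channels. The joint count (22, as in SHREC'17 hand skeletons), channel count, and residual wiring are illustrative; AS-GCN's adaptive, action-guided shift selection is not reproduced.

```python
import torch
import torch.nn as nn

class GraphShiftBlock(nn.Module):
    """Illustrative global graph-shift block for skeleton features.

    x: (batch, C, T, V) with T frames and V joints. Channel c of every joint
    is rolled c positions along the joint axis, then mixed pointwise.
    """
    def __init__(self, channels=64, n_joints=22):
        super().__init__()
        self.n_joints = n_joints
        self.mix = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        shifted = torch.stack(
            [torch.roll(x[:, c], shifts=c % self.n_joints, dims=-1)
             for c in range(x.shape[1])], dim=1)       # channel-wise joint shift
        return torch.relu(self.mix(shifted)) + x        # residual connection

out = GraphShiftBlock()(torch.randn(2, 64, 32, 22))     # (batch, C, frames, joints)
```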
ISBN: (Print) 9798350343557
Medical imaging techniques are frequently used for tumor detection and diagnosis, and segmentation of tumors from medical images is a popular field of study. To this end, various deep neural network based methods have been introduced for segmenting tumor regions. Within the scope of this study, we first collected a data set of thorax CT (Computed Tomography) images with two class labels, benign and malignant, with the help of chest radiologists and chest disease clinicians. Then, we trained four different deep neural network based segmentation methods, Mask R-CNN, YOLACT, SOLOv2, and U-Net, and compared their accuracies. Finally, we conducted experiments to show which CT image channels are more useful for segmentation. Among the tested methods, the YOLACT algorithm returned the best results in classifying tumors and U-Net yielded the best segmentation masks.
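When comparing segmentation masks from models such as Mask R-CNN, YOLACT, SOLOv2, or U-Net, the Dice coefficient is a standard overlap metric. The sketch below computes it for binary tumor masks; the mask shapes and values are hypothetical, and this is not the study's evaluation code.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice overlap between a predicted and a ground-truth binary tumor mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Toy example on a 512x512 CT slice (hypothetical masks)
pred = np.zeros((512, 512), dtype=np.uint8)
gt = np.zeros((512, 512), dtype=np.uint8)
pred[100:200, 100:200] = 1
gt[120:220, 110:210] = 1
print(f"Dice = {dice_coefficient(pred, gt):.3f}")
```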
Biometric systems play a crucial role in securely recognizing an individual's identity based on physical and behavioral traits. Among these methods, finger vein recognition stands out because the veins' position beneath the skin provides heightened security and individual distinctiveness that cannot be easily manipulated. In our study, we propose a robust biometric recognition system that combines a lightweight architecture with depth-wise separable convolutions and residual blocks, along with a machine-learning algorithm. The system employs two distinct learning strategies, single-instance and multi-instance, which demonstrate the benefits of combining largely independent information. Initially, we address the shading of finger vein images by applying histogram equalization to enhance their quality. We then extract features using a MobileNetV2 model fine-tuned for this task. Finally, our system uses a support vector machine (SVM) to classify the finger vein features into their classes. Our experiments are conducted on two widely recognized datasets, SDUMLA and FV-USM, and the results are promising, showing excellent rank-one identification rates of 99.57% and 99.90%, respectively.
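A minimal sketch of the described pipeline stages, histogram equalization, MobileNetV2 feature extraction, and an SVM classifier, is given below. The pretrained-weights argument, input preprocessing, and the commented training loop are assumptions for illustration; the paper's fine-tuned model, multi-instance fusion, and dataset readers are not reproduced.

```python
import cv2
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

# Pre-trained backbone used as a fixed feature extractor (fine-tuning omitted here).
# The string weights argument requires a recent torchvision version.
backbone = models.mobilenet_v2(weights="IMAGENET1K_V1")
backbone.classifier = torch.nn.Identity()       # keep the 1280-d feature vector
backbone.eval()

to_input = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
])

def vein_features(gray_image):
    """Histogram-equalise an 8-bit grayscale finger-vein image and embed it."""
    eq = cv2.equalizeHist(gray_image)                   # shading correction
    rgb = np.repeat(eq[..., None], 3, axis=-1)          # 1 channel -> 3 channels
    with torch.no_grad():
        return backbone(to_input(rgb).unsqueeze(0)).numpy().ravel()

# Hypothetical training step: images and labels come from SDUMLA / FV-USM readers
# X = np.stack([vein_features(img) for img in train_images])
# clf = SVC(kernel="rbf").fit(X, train_labels)
```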