In this paper, a discrete-time projection neural network with an adaptive step size (DPNN) is proposed for distributed global optimization. The DPNN is proven to be convergent to a Karush-Kuhn-Tucker point. Several DP...
Trait self-report mindfulness scales measure one's disposition to pay nonjudgmental attention to the present moment. Concerns have been raised about the validity of trait mindfulness scales. Despite this, there is extensive literature correlating mindfulness scales with objective brain measures, with the goal of providing insight into mechanisms of mindfulness, and insight into associated positive mental health outcomes. Here, we systematically examined the neural correlates of trait mindfulness. We assessed 68 correlational studies across structural magnetic resonance imaging, task-based fMRI, resting-state fMRI, and EEG. Several consistent findings were identified, associating greater trait mindfulness with decreased amygdala reactivity to emotional stimuli, increased cortical thickness in frontal regions and insular cortex regions, and decreased connectivity within the default-mode network. These findings converged with results from intervention studies and those that included mindfulness experts. On the other hand, the connections between trait mindfulness and EEG metrics remain inconclusive, as do the associations between trait mindfulness and between-network resting-state fMRI metrics. ERP measures from EEG used to measure attentional or emotional processing may not show reliable individual variation. Research on body awareness and self-relevant processing is scarce. For a more robust correlational neuroscience of trait mindfulness, we recommend larger sample sizes; data-driven, multivariate approaches to self-report and brain measures; and careful consideration of test-retest reliability. In addition, we should leave behind simplistic explanations of mindfulness, as there are many ways to be mindful, and leave behind simplistic explanations of the brain, as distributed networks of brain areas support mindfulness.
ISBN: (print) 9798350393194; 9798350393187
The relay channel, consisting of a source-destination pair and a relay, is a fundamental component of cooperative communications. While the capacity of a general relay channel remains unknown, various relaying strategies, including compress-and-forward (CF), have been proposed. For CF, given the correlated signals at the relay and destination, distributed compression techniques, such as Wyner-Ziv coding, can be harnessed to utilize the relay-to-destination link more efficiently. In light of recent advancements in neural network-based distributed compression, we revisit the relay channel problem, where we integrate a learned one-shot Wyner-Ziv compressor into a primitive relay channel with a finite-capacity and orthogonal (or out-of-band) relay-to-destination link. The resulting neural CF scheme demonstrates that our task-oriented compressor recovers binning of the quantized indices at the relay, mimicking the optimal asymptotic CF strategy, although no structure exploiting knowledge of the source statistics was imposed on the design. We show that the proposed neural CF scheme, employing finite-order modulation, operates close to the capacity of a primitive relay channel that assumes a Gaussian codebook. Our learned compressor provides the first proof-of-concept work toward a practical neural CF relaying scheme.
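The binning behavior that the learned compressor recovers can be illustrated with a toy NumPy sketch of classical Wyner-Ziv coding (not the paper's learned scheme; the quantizer size, bin count, and noise level below are illustrative): the relay quantizes its observation but transmits only a bin index, and the destination resolves the ambiguity within the bin using its correlated side information.

```python
import numpy as np

rng = np.random.default_rng(0)
L_LEVELS, B = 16, 4                      # 16 quantizer levels, 4 bins:
levels = np.linspace(-3, 3, L_LEVELS)    # rate drops from log2(16) to log2(4) bits

x = rng.normal(size=1000)                # relay's observation
y = x + 0.2 * rng.normal(size=1000)      # correlated side information at the destination

q = np.abs(x[:, None] - levels[None, :]).argmin(axis=1)   # quantize at the relay
bins = q % B                                              # transmit only the bin index

# Destination: among the quantizer levels that fall in the received bin,
# pick the one closest to the side information y
recon = np.empty_like(x)
for i, (b, yi) in enumerate(zip(bins, y)):
    cands = levels[np.arange(L_LEVELS) % B == b]
    recon[i] = cands[np.abs(cands - yi).argmin()]

mse = np.mean((x - recon) ** 2)
err_rate = np.mean(recon != levels[q])   # how often the bin was resolved wrongly
```

Because same-bin levels are spaced far apart relative to the side-information noise, the destination almost always recovers the exact quantization index despite receiving only a quarter of the bits, which is the intuition behind binning in the asymptotic CF strategy.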
ISBN: (print) 9798350363074; 9798350363081
Super-Resolution via Repeated Refinement (SR3) is a state-of-the-art super-resolution algorithm, based on a diffusion model, that can enhance the resolution of images. The method relies on models pre-trained on large datasets and can be applied to various tasks without requiring training from scratch. Training SR3 from scratch on the ImageNet dataset is a complex process that requires substantial computational resources and expertise. The idea is to apply the trained SR3 model to new images by feeding in low-resolution inputs and obtaining high-resolution outputs. Because training SR3 from scratch is resource-intensive, requiring powerful GPUs and significant computation time, a practical alternative is to take already-available pre-trained models and fine-tune them on specific datasets or tasks. This paper compares the resolution of images obtained by training on a significantly smaller number of images with that obtained using the pre-trained model. The results are acceptable without training on large datasets, minimizing the computation time needed to super-resolve images.
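The repeated-refinement idea can be sketched as a standard DDPM-style ancestral sampling loop conditioned on the upsampled low-resolution image. This is a schematic only: `denoiser` stands in for the trained U-Net, and the noise schedule below is an illustrative assumption, not SR3's exact configuration.

```python
import numpy as np

def sr3_refine(lr_up, denoiser, T=50, rng=None):
    """Schematic SR3 sampling: start from pure Gaussian noise and iteratively
    denoise, conditioning each step on the upsampled low-res image `lr_up`.
    `denoiser(x_t, lr_up, t)` is a stand-in for the trained U-Net that
    predicts the noise component at step t."""
    rng = rng or np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)

    x = rng.normal(size=lr_up.shape)            # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = denoiser(x, lr_up, t)             # predicted noise
        # DDPM posterior mean, then add noise except at the final step
        x = (x - betas[t] / np.sqrt(1.0 - abar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.normal(size=x.shape)
    return x
```

Fine-tuning, as the paper suggests, would update only the weights behind `denoiser` on the target dataset while keeping this sampling loop unchanged.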
With the continuous deepening of artificial neural network (ANN) research, ANN model structures and functions are developing toward diversification and complexity. At present, models are mostly evaluated by the quality of their problem-solving results, while evaluation from the biomimetic aspect, i.e., how faithfully a model imitates biological neural networks, is lacking. Therefore, in response to this problem, a new evaluation strategy for ANN models is proposed in this paper from the perspective of bionics. First, four classical neural network models are examined: the Back Propagation (BP) network, the Deep Belief Network (DBN), the LeNet5 network, and the olfactory bionic model (KIII model); the neuron transmission mode and equations, network structure, and weight-updating principle of each model are analyzed. The analysis results show that the KIII model comes closer to the actual biological nervous system than the other models, and that the LeNet5 network simulates the nervous system to a certain extent. Next, evaluation indexes for ANNs are constructed from the perspective of bionics: small-world, synchronization, and chaotic characteristics. The network models are then quantitatively analyzed using these evaluation indexes. The experimental results show that the DBN, LeNet5, and BP networks have synchronization characteristics, and the DBN and LeNet5 networks have certain chaotic characteristics, but there is still a considerable distance between the three classical neural networks and actual biological neural networks. The KIII model has certain small-world characteristics in structure, and its network also exhibits synchronization and chaotic characteristics. Compared with the DBN, LeNet5, and BP networks, the KIII model is closer to a real biological neural network.
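The small-world index used in such bionic evaluations compares a network's clustering and path length against a randomized reference. A minimal sketch with networkx, using a Watts-Strogatz graph as a stand-in for an ANN connectivity graph (the graph sizes and rewiring probabilities below are illustrative):

```python
import networkx as nx

n, k = 100, 6
ws = nx.connected_watts_strogatz_graph(n, k, 0.1, seed=1)   # small-world candidate
rnd = nx.connected_watts_strogatz_graph(n, k, 1.0, seed=2)  # fully rewired random reference

C, C_r = nx.average_clustering(ws), nx.average_clustering(rnd)
L, L_r = (nx.average_shortest_path_length(g) for g in (ws, rnd))

# Small-world coefficient: clustering far above random while path length
# stays comparable to random yields sigma > 1
sigma = (C / C_r) / (L / L_r)
```

Applied to an ANN, the same computation would run on the graph of neuron-to-neuron connections, which is how a structural claim like "the KIII model has certain small-world characteristics" can be quantified.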
ISBN: (print) 9781665450867
In recent years, millimeter wave (mmWave) radar has played an indispensable role in several applications. mmWave radars can measure the distance, speed, and angle of objects, but the transmit power of a single mmWave radar is limited. By deploying multiple mmWave radars in a distributed manner and fusing their signals, detection results can be improved. Before data fusion, it is necessary to accurately measure the extrinsic parameters between different radars to complete the coordinate calibration of the radar network. Current methods calibrate by jointly observing moving objects in the overlapping fields of view of the radar network. The calibration process requires one target to move within a defined area. Because the radar cross section (RCS) characteristics of the target are usually inconsistent across directions, the error of this method becomes relatively large whenever the target's reflected signal is weak during calibration. This paper proposes a new neural network-based method to estimate the interval distance between different radars without relying on a moving target. The distance estimation error of the proposed network is within 0.1 m, smaller than that of the calibration method based on moving objects. Verification on measured data shows that the proposed network estimates the interval distance between radars more accurately.
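For intuition on what the neural estimator must beat, a classical baseline (not the paper's method) estimates the inter-radar spacing from the direct-path delay of one radar's pulse at the other, via matched filtering. All waveform parameters below are illustrative assumptions.

```python
import numpy as np

c, fs = 3.0e8, 2.0e9                   # speed of light (m/s), sample rate (Hz)
d_true = 4.5                           # true inter-radar spacing (m)
delay = int(round(d_true / c * fs))    # direct-path delay in samples

t = np.arange(256) / fs
pulse = np.sin(2 * np.pi * 50e6 * t) * np.hanning(256)   # windowed tone as the pulse

rng = np.random.default_rng(0)
rx = np.zeros(1024)
rx[delay:delay + 256] = pulse          # copy of the pulse received at radar B
rx += 0.02 * rng.normal(size=1024)     # receiver noise

corr = np.correlate(rx, pulse, mode="valid")   # matched filter
d_hat = corr.argmax() / fs * c                 # range resolution is c/fs = 0.15 m
```

The one-sample range resolution (0.15 m here) shows why a learned estimator reporting sub-0.1 m error is a meaningful improvement when bandwidth or sampling rate is limited.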
In recent years, deep learning models have become ubiquitous in industry and academia alike. Deep neural networks can solve some of the most complex pattern-recognition problems today, but come at the price of massive compute and memory requirements. This makes deploying such large-scale neural networks challenging on resource-constrained mobile edge computing platforms, specifically in mission-critical domains like surveillance and healthcare. A promising solution is to split resource-hungry neural networks into lightweight, disjoint, smaller components for pipelined distributed processing. At present, there are two main approaches: semantic and layer-wise splitting. The former partitions a neural network into parallel disjoint models that each produce a part of the result, whereas the latter partitions it into sequential models that produce intermediate results. However, no intelligent algorithm exists that decides which splitting strategy to use and places such modular splits on edge nodes for optimal performance. To address this, this work proposes a novel AI-driven online policy, SplitPlace, that uses Multi-Armed Bandits to intelligently decide between layer and semantic splitting strategies based on the input task's service deadline demands. SplitPlace places such neural network split fragments on mobile edge devices using decision-aware reinforcement learning for efficient and scalable computing. Moreover, SplitPlace fine-tunes its placement engine to adapt to volatile environments. Our experiments on physical mobile-edge environments with real-world workloads show that SplitPlace can significantly improve the state of the art in terms of average response time, deadline violation rate, inference accuracy, and total reward by up to 46, 69, 3, and 12 percent, respectively.
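The core decision, choosing between layer and semantic splitting per deadline class via a Multi-Armed Bandit, can be sketched with epsilon-greedy bandits. The latency/accuracy numbers and reward model below are hypothetical, not the paper's environment:

```python
import random

random.seed(0)
ARMS, EPS = ["layer", "semantic"], 0.1

def reward(arm, deadline):
    # Hypothetical environment: layer splits are slower but more accurate;
    # semantic splits are faster but less accurate; a missed deadline pays 0.
    latency = {"layer": 0.8, "semantic": 0.5}[arm]
    accuracy = {"layer": 0.95, "semantic": 0.85}[arm]
    return accuracy if latency <= deadline else 0.0

# One epsilon-greedy bandit per deadline class, since the best split depends
# on how tight the task's service deadline is
counts = {d: {a: 0 for a in ARMS} for d in (0.6, 1.0)}
values = {d: {a: 0.0 for a in ARMS} for d in (0.6, 1.0)}

for _ in range(3000):
    d = random.choice([0.6, 1.0])                       # incoming task's deadline
    if random.random() < EPS:
        arm = random.choice(ARMS)                       # explore
    else:
        arm = max(ARMS, key=lambda a: values[d][a])     # exploit
    r = reward(arm, d)
    counts[d][arm] += 1
    values[d][arm] += (r - values[d][arm]) / counts[d][arm]   # incremental mean
```

Under this toy model the bandit learns to prefer semantic splitting for tight deadlines and layer splitting for loose ones, which mirrors the deadline-driven strategy selection SplitPlace automates.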
ISBN: (print) 9789464593617; 9798331519773
Finding sources or leaks of airborne material in Chemical, Biological, Radiological, or Nuclear (CBRN) accidents is crucial for effective disaster response. This paper makes use of sparse Bayesian learning (SBL) to cooperatively estimate source locations based on measurements by multiple robots or a sensor network. The SBL approach facilitates the identification of sparse source support, indirectly providing information about the number of sources and their locations. To achieve this, we introduce a novel method that includes a trained surrogate model for the gas dispersion process described by a Partial Differential Equation (PDE). Namely, a Physics-Guided Neural Network (PGNN) is employed to approximate a parameterized Green's function of the PDE. The obtained approximation is integrated into a gradient-based optimization process. The proposed method estimates arbitrary source locations at super-resolution, eliminating the constraint to a specific grid. Further, the newly proposed PGNN surrogate model comes with the advantage that the approach can be extended to cases where no analytic Green's function is available. Simulation results demonstrate the effectiveness of the proposed approach, showcasing its potential for enhanced airborne material detection in CBRN scenarios.
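The pipeline of "Green's function forward model plus sparse support recovery" can be illustrated with a small sketch. Here a closed-form 2-D heat-kernel Green's function plays the role of the PGNN surrogate, and orthogonal matching pursuit is a simple stand-in for SBL; sensor layout, diffusivity, and source positions are illustrative assumptions.

```python
import numpy as np

D, t = 0.5, 1.0
def greens(x, s):
    # 2-D diffusion (heat-kernel) Green's function; in the paper this role
    # is played by the PGNN surrogate, so no closed form would be required
    r2 = np.sum((x - s) ** 2, axis=-1)
    return np.exp(-r2 / (4 * D * t)) / (4 * np.pi * D * t)

# Regular sensor grid and two ground-truth sources
sgx, sgy = np.meshgrid(np.linspace(0.5, 9.5, 10), np.linspace(0.5, 9.5, 10))
sensors = np.stack([sgx.ravel(), sgy.ravel()], axis=1)        # 100 sensors
true_sources = np.array([[3.0, 7.0], [8.0, 2.0]])
y = sum(greens(sensors, s) for s in true_sources)             # noiseless measurements

# Dictionary of candidate source locations; sparse support recovered greedily
gx, gy = np.meshgrid(np.linspace(0, 10, 51), np.linspace(0, 10, 51))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
A = np.stack([greens(sensors, g) for g in grid], axis=1)      # 100 x 2601 dictionary

residual, support = y.copy(), []
for _ in range(2):
    support.append(int(np.abs(A.T @ residual).argmax()))      # best-matching atom
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on support
    residual = y - A[:, support] @ coef
est = grid[support]
```

The paper's gradient-based refinement through the PGNN then frees the estimates from this candidate grid, which is what enables super-resolution source locations.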
ISBN: (print) 9798350363357; 9798350363340
The field of autonomous driving technology is rapidly advancing, with deep learning being a key component. Particularly in sensing, 3D point cloud data collected by LiDAR is fed to deep neural network models for 3D object detection. However, these state-of-the-art models are complex, leading to longer processing times and increased power consumption on edge devices. The objective of this study is to address these issues by leveraging Split Computing, a distributed machine learning inference method. Split Computing aims to lessen the computational burden on edge devices, thereby reducing processing time and power consumption. Furthermore, it minimizes the risk of data breaches by transmitting only intermediate data from the deep neural network model. Experimental results show that splitting after voxelization reduces the inference time by 70.8% and the edge device execution time by 90.0%. When splitting within the network, the inference time is reduced by up to 57.1%, and the edge device execution time is reduced by up to 69.5%.
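Why splitting after voxelization is attractive can be seen in a toy NumPy sketch: the edge-side head compresses the raw point cloud into per-voxel features, and only that intermediate data crosses the network. The point count, voxel size, and "detection" rule are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic LiDAR frame: 120k points of (x, y, z, intensity) in a 50 m cube
points = rng.uniform(0.0, 50.0, size=(120_000, 4)).astype(np.float32)

def head_voxelize(pts, voxel=2.0):
    """Edge-device head: quantize points into voxels and aggregate one
    feature per occupied voxel. The output is far smaller than the raw
    cloud, and as intermediate data it also reveals less to eavesdroppers."""
    idx = np.floor(pts[:, :3] / voxel).astype(np.int32)
    keys, inv = np.unique(idx, axis=0, return_inverse=True)
    feats = np.zeros(len(keys), dtype=np.float32)
    np.add.at(feats, inv.ravel(), pts[:, 3])     # sum intensity per voxel
    return keys, feats

def tail_detect(keys, feats):
    """Server-side tail: stand-in for the heavy 3-D detection backbone."""
    return keys[feats > np.percentile(feats, 99)]

keys, feats = head_voxelize(points)
raw_bytes = points.nbytes                 # what we'd send without splitting
split_bytes = keys.nbytes + feats.nbytes  # what the split actually transmits
dets = tail_detect(keys, feats)
```

Even in this crude sketch the transmitted volume drops severalfold, which is the mechanism behind the reported reductions in edge execution time.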
The increasing number of Internet-of-Things (IoT) devices will generate unprecedented amounts of data in the upcoming years. Fog computing may prevent the saturation of the network infrastructure by processing data at the edge or within these devices. Consequently, the machine intelligence built almost exclusively on the cloud can be scattered to the edge devices. While deep learning techniques can adequately process massive IoT data volumes, their high resource demands pose a trade-off for execution on resource-constrained devices. This paper proposes and evaluates the performance of PArtitioning networks for COnstrained DEvices (PANCODE), a novel algorithm that employs a multilevel approach to partition large convolutional neural networks for distributed execution on constrained IoT devices. Experimental results with the LeNet and AlexNet models show that our algorithm can produce partitionings that achieve up to 2173.53 times more inferences per second than the Best Fit algorithm and up to 1.37 times less communication than the second-best approach. We also show that the state-of-the-art METIS framework produces only invalid partitionings in more constrained setups. The results indicate that our algorithm achieves higher inference rates and low communication costs for convolutional neural networks distributed among constrained and very constrained devices.
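To make the partitioning problem concrete, here is a minimal sketch of a contiguous layer-to-device assignment under per-device memory limits. It is not PANCODE's multilevel algorithm; the layer footprints and capacities are hypothetical profiling numbers.

```python
# Hypothetical per-layer memory footprints (MB) and device capacities (MB);
# real inputs would come from profiling the CNN on the target hardware.
layers = [("conv1", 12), ("conv2", 30), ("fc1", 25), ("fc2", 8), ("out", 2)]
devices = [40, 40, 40]

def first_fit_contiguous(layers, devices):
    """Assign consecutive layers to devices in order, opening the next
    device when the current one would overflow. Contiguity matters:
    intermediate activations must flow device -> device, so a layer-wise
    partitioning cannot scatter layers arbitrarily the way plain Best Fit
    bin packing does."""
    parts, cur, used, d = [], [], 0, 0
    for name, mem in layers:
        if used + mem > devices[d]:
            if d + 1 >= len(devices):
                raise ValueError("not enough device memory")
            parts.append(cur)
            cur, used, d = [], 0, d + 1
        cur.append(name)
        used += mem
    parts.append(cur)
    return parts

parts = first_fit_contiguous(layers, devices)
```

A communication-aware partitioner like PANCODE additionally weighs the size of the tensors crossing each cut, which is why it can beat both Best Fit (inference rate) and graph partitioners like METIS (validity under tight constraints).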