Ultrahigh field (UHF) Magnetic Resonance Imaging (MRI) offers an elevated signal-to-noise ratio (SNR), enabling exceptionally high spatial resolution that benefits both clinical diagnostics and advanced research. However, the jump to higher fields introduces complications, particularly transmit radiofrequency (RF) field (B1+) inhomogeneities, which manifest as uneven flip angles and image intensity irregularities. These artifacts can degrade image quality and impede broader clinical adoption. Traditional RF shimming methods, such as Magnitude Least Squares (MLS) optimization, effectively mitigate B1+ inhomogeneity but remain time-consuming and typically require the patient's presence to compute solutions. Recent machine learning approaches, including RF Shim Prediction by Iteratively Projected Ridge Regression and other deep learning architectures, suggest alternative pathways. Although these approaches show promise, challenges such as extensive training periods, limited network complexity, and practical data requirements persist. In this paper, we introduce a holistic learning-based framework called Fast-RF-Shimming, which achieves a 5000× speed-up in RF shimming compared to traditional MLS optimization methods. In the initial phase, we employ randomly initialized Adaptive Moment Estimation (Adam) to derive the desired reference shimming weights from multi-channel B1+ fields. Next, we train a Residual Network (ResNet) to map B1+ fields directly to the final RF shimming outputs, incorporating a confidence parameter into its loss function. Finally, we design a Non-uniformity Field Detector (NFD), an optional post-processing step, to ensure that extremely non-uniform outcomes are identified. By leveraging a residual learning paradigm, our approach circumvents the need for pre-computed reference solutions in the testing phase and markedly reduces computational overhead. Comparative evaluations with standard MLS optimization underscore notable gains in both processing s...
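The reference-weight stage described above (randomly initialized Adam minimizing the MLS shimming error) can be sketched as follows. This is a minimal illustrative solver under our own assumptions about array shapes, hyperparameters, and the function name; it is not the paper's implementation, and the ResNet stage that learns to predict these weights is not shown.

```python
import numpy as np

def mls_shim_adam(b1_maps, target=1.0, steps=500, lr=0.05, seed=0):
    """Magnitude-least-squares RF shimming via a hand-rolled Adam loop.

    b1_maps: complex array (n_voxels, n_channels) of per-channel B1+ fields.
    Minimizes sum((|b1_maps @ w| - target)**2) over complex weights w.
    """
    rng = np.random.default_rng(seed)
    n_ch = b1_maps.shape[1]
    w = rng.standard_normal(2 * n_ch) * 0.1        # real/imag parts stacked
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    for t in range(1, steps + 1):
        wc = w[:n_ch] + 1j * w[n_ch:]
        field = b1_maps @ wc
        mag = np.abs(field)
        resid = mag - target
        # Wirtinger gradient of the MLS cost, split into real/imag parts
        g_c = b1_maps.conj().T @ (resid * field / np.maximum(mag, 1e-12))
        g = 2.0 * np.concatenate([g_c.real, g_c.imag])
        # standard Adam update with bias correction
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        w -= lr * (m / (1 - beta1**t)) / (np.sqrt(v / (1 - beta2**t)) + eps)
    return w[:n_ch] + 1j * w[n_ch:]
```

In the paper's framework, weights obtained this way serve as reference labels; at test time the ResNet maps B1+ fields to shims directly, which is where the reported speed-up comes from.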
Machine learning (ML) methods, especially reinforcement learning (RL), have been widely considered for traffic signal optimization in intelligent transportation systems. Most of these ML methods are centralized and lack scalability and adaptability in large traffic networks. Further, it is challenging to train such ML models due to the lack of training platforms and/or the cost of deploying and training in a real traffic network. This paper presents an approach for the integration of decentralized graph-based multi-agent reinforcement learning (DGMARL) with a Digital Twin (DT) to optimize traffic signals for the reduction of traffic congestion and network-wide fuel consumption related to stopping. Specifically, the DGMARL agents learn traffic state patterns and make decisions regarding traffic signal control with assistance from a Digital Twin module, which simulates and replicates the traffic behaviors of a real traffic network. The proposed approach was evaluated using PTV-Vissim [1], a microscopic traffic simulation platform. PTV-Vissim is also the simulation engine of the DT, enabling emulation and optimization of the traffic signals on the MLK Smart Corridor in Chattanooga, Tennessee. Compared to an actuated signal control baseline, experimental results show that Eco_PI, a performance measure developed to capture the impact of stops on fuel consumption, was reduced by 44.27% in a 24-hour scenario and by an average of 29.88% in a PM-peak-hour scenario.
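The abstract does not give the formula for Eco_PI, so the stop-related fuel proxy below is purely an assumed illustration of the *kind* of measure involved (stops weighted by a per-stop fuel penalty), together with the percent-reduction arithmetic behind the reported figures. The function names and the per-stop constant are our inventions, not the paper's definition.

```python
def stop_fuel_impact(stop_counts, per_stop_fuel_l=0.015):
    """Hypothetical proxy for a stop-related fuel measure such as Eco_PI:
    total extra fuel (liters) attributed to vehicle stops over an interval.
    The 0.015 L/stop penalty is an assumed illustrative constant."""
    return sum(stop_counts) * per_stop_fuel_l

def percent_reduction(baseline, optimized):
    """Relative improvement of an optimized measure over a baseline,
    as reported in the abstract (e.g. 44.27% over 24 hours)."""
    return 100.0 * (baseline - optimized) / baseline
```

For example, cutting a baseline measure of 100 units down to 55.73 units corresponds to the 44.27% reduction quoted for the 24-hour scenario.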
This paper investigates using satellite data to improve adaptive sampling missions, particularly for front tracking scenarios such as with algal blooms. Our proposed solution to find and track algal bloom fronts uses ...
Timely updating of Internet of Things (IoT) data is crucial for immersive vehicular metaverse services. However, challenges such as latency caused by massive data transmissions, privacy risks associated with user data...
To reach the next frontier in multimode nonlinear optics, it is crucial to better understand the classical and quantum phenomena of systems with many interacting degrees of freedom – both how they emerge and how they...
The emergence of cloud computing technology has motivated a very high number of users in different organizations to access its services in running and delivering their various operations and services. However, this su...
Low latency inference has many applications in edge machine learning. In this paper, we present a run-time configurable convolutional neural network (CNN) inference ASIC design for low-latency edge machine learning. By implementing a 5-stage pipelined CNN inference model in a 3D ASIC technology, we demonstrate that the model distributed on two dies utilizing face-to-face (F2F) 3D integration achieves superior performance. Our experimental results show that the design based on 3D integration achieves 43% better energy-delay product when compared to the traditional 2D technology.
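The 43% figure above is a comparison of energy-delay products (EDP), a standard joint figure of merit for energy and latency. The arithmetic is simple; the input numbers below are hypothetical placeholders chosen only to illustrate the calculation, not measurements from the paper.

```python
def energy_delay_product(energy_j, delay_s):
    """Energy-delay product (EDP): lower is better for both energy and latency."""
    return energy_j * delay_s

def edp_improvement(edp_base, edp_new):
    """Fractional EDP reduction of a new design versus a baseline."""
    return (edp_base - edp_new) / edp_base

# Hypothetical numbers for illustration only (not the paper's measurements):
edp_2d = energy_delay_product(1.0e-6, 2.0e-3)   # 2D baseline
edp_3d = energy_delay_product(0.76e-6, 1.5e-3)  # F2F 3D design
print(f"EDP reduction: {edp_improvement(edp_2d, edp_3d):.0%}")
```

Because EDP multiplies energy and delay, a design that modestly improves both at once (as the F2F 3D partitioning does here) can show a large combined gain.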
ISBN: 9798350377224 (digital)
ISBN: 9798350377231 (print)
Mathematical modeling of decision networks is crucial in simulating the socio-economic environment, especially as these systems grow increasingly complex. Such models allow for a structured and quantitative analysis of the myriad factors and interactions that define modern economies and societies. These models enable researchers and policymakers to predict outcomes, identify optimal strategies, and evaluate the potential impacts of various decisions. This is particularly important in addressing intricate issues such as market dynamics, social welfare, and policy implementation, where traditional analytical methods fall short. Moreover, mathematical models facilitate the testing of hypothetical scenarios, providing valuable insights into how changes in one part of the system might ripple through the entire network. This paper proposes mathematical modeling of a new programmable decision network that allows simulating the socio-economic environment as composed of dynamically interacting components and relationships. Thanks to its programmability, the proposed network model can capture a broader and deeper range of behaviors than the available traditional decision networks. As a result, these models are indispensable tools for understanding and managing the complexities of contemporary economic and social landscapes, ultimately aiding in the formulation of informed, effective policies and strategies.
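A "programmable" decision network of dynamically interacting components can be illustrated with a tiny graph whose node update rules are pluggable functions. Everything below (class name, methods, synchronous update scheme) is our own illustrative assumption, not the paper's model.

```python
class DecisionNetwork:
    """Minimal sketch: nodes hold a state, weighted edges carry influence,
    and each node's update rule is a programmable (pluggable) function."""

    def __init__(self):
        self.state = {}   # node name -> current value
        self.rules = {}   # node name -> update function over weighted inputs
        self.edges = {}   # node name -> list of (source, weight)

    def add_node(self, name, value=0.0, rule=None):
        self.state[name] = value
        self.rules[name] = rule or (lambda inputs: sum(inputs))
        self.edges[name] = []

    def connect(self, src, dst, weight=1.0):
        self.edges[dst].append((src, weight))

    def step(self):
        """Synchronously apply each node's rule to its weighted inputs,
        simulating one round of interaction; source nodes keep their value."""
        new = {}
        for node, rule in self.rules.items():
            inputs = [self.state[s] * w for s, w in self.edges[node]]
            new[node] = rule(inputs) if inputs else self.state[node]
        self.state = new
```

For instance, connecting a "demand" node to a "price" node with weight 0.5 lets one `step()` propagate a demand shock halfway into price, and swapping the rule function changes the behavior without rebuilding the network.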
ISBN: 9798350390643 (digital)
ISBN: 9798350390650 (print)
To efficiently provide coverage for the machines and robots in remote areas, satellites and unmanned aerial vehicles (UAVs) can be utilized. In such scenarios, UAVs need to integrate different functions to support the control tasks of robots efficiently, while satellites can serve as backhaul to the cloud. To improve the overall performance of the closed-loop control tasks of robots, we jointly optimize the communication and computing resource allocation with data offloading. Specifically, we explore the linear quadratic regulator (LQR) cost to measure the control performance and formulate a sum LQR cost minimization problem. An iterative algorithm is proposed to solve the optimization problem. Simulation results are provided to demonstrate the superiority of the proposed scheme.
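For a discrete-time system x_{t+1} = A x_t + B u_t with stage cost x^T Q x + u^T R u, the infinite-horizon LQR cost from state x0 is x0^T P x0, where P solves the discrete algebraic Riccati equation. The fixed-point scheme and function name below are our own sketch of this standard computation, not the paper's iterative algorithm for the sum-cost problem.

```python
import numpy as np

def lqr_cost_matrix(A, B, Q, R, iters=500):
    """Solve the discrete algebraic Riccati equation by value iteration.

    Returns (P, K): the steady-state LQR cost from x0 is x0.T @ P @ x0,
    and the optimal state feedback is u = -K x.
    """
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        # optimal gain for the current cost-to-go estimate
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        # Riccati update: P = Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
    return P, K
```

In the paper's setting, each robot contributes such a cost term, and the resource allocation shapes the effective loop (e.g. through delay), so minimizing the sum of LQR costs couples control performance to communication and computing decisions.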
This paper studies the transfer reinforcement learning (RL) problem where multiple RL problems have different reward functions but share the same underlying transition dynamics. In this setting, the Q-function of each RL problem (task) can be decomposed into a successor feature (SF) and a reward mapping: the former characterizes the transition dynamics, and the latter characterizes the task-specific reward function. This Q-function decomposition, coupled with a policy improvement operator known as generalized policy improvement (GPI), reduces the sample complexity of finding the optimal Q-function, and thus the SF & GPI framework exhibits promising empirical performance compared to traditional RL methods like Q-learning. However, its theoretical foundations remain largely unestablished, especially when learning the successor features using deep neural networks (SF-DQN). This paper studies provable knowledge transfer using SF-DQN in transfer RL problems. We establish the first convergence analysis with provable generalization guarantees for SF-DQN with GPI. The theory reveals that SF-DQN with GPI outperforms conventional RL approaches, such as deep Q-network, in terms of both faster convergence rate and better generalization. Numerical experiments on real and synthetic RL tasks support the superior performance of SF-DQN & GPI, aligning with our theoretical findings.
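The decomposition above writes each task's Q-function as Q_i(s, a) = psi(s, a) . w_i, where psi is the successor-feature representation (shared dynamics) and w_i the task's reward weights; GPI then acts greedily with respect to the maximum over all previously learned policies' Q-estimates. A tabular sketch of the GPI step (array shapes and the function name are our assumptions; the paper's SF-DQN uses neural networks instead of tables):

```python
import numpy as np

def gpi_action(psi_list, w, state):
    """Generalized policy improvement on a new task.

    psi_list: list of successor-feature tables, each (n_states, n_actions, d),
              one per previously learned policy.
    w:        (d,) reward weights of the new task, so Q_i(s,a) = psi_i(s,a) . w.
    Returns the action maximizing max_i Q_i(state, a).
    """
    # (n_policies, n_states, n_actions): Q-values of every old policy on task w
    q = np.stack([psi @ w for psi in psi_list])
    # take the elementwise max over policies, then act greedily in `state`
    return int(np.argmax(q.max(axis=0)[state]))
```

This makes the transfer mechanism concrete: only w changes across tasks, so a new reward function is handled by re-weighting the stored successor features rather than relearning the dynamics.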