The Internet of Things (IoT) refers to the networked interconnection of devices that collect, exchange, and analyze data to enable intelligent applications. In emerging sixth-generation (6G) networks, batteryless IoT devices have gained significant attention, as they rely on ambient energy harvesting rather than traditional batteries. This paper presents an energy-focused model for a 6G-enabled batteryless IoT network that integrates Vortex Wireless Power Transfer (WPT) with fog node coordination to manage energy harvesting and computation offloading. WPT exploits electromagnetic resonance to deliver energy wirelessly. Our vortex-based model applies exponential attenuation, enhancing energy harvesting for batteryless IoT devices. The system then dynamically assigns IoT devices to optimal WPT zones based on coverage and received power, while simultaneously determining whether tasks should be executed locally or offloaded to Mobile Edge Computing (MEC)-enabled fog nodes, based on real-time energy and latency constraints. To solve the resulting NP-hard optimization problem, we develop an Enhanced Adaptive Quantum Binary Particle Swarm Optimization (EAQBPSO) algorithm that effectively balances workload distribution, energy harvesting, and energy consumption. Simulation results indicate that our approach significantly outperforms traditional methods, achieving improvements of up to 71% in energy efficiency and nearly 87% in energy harvesting efficiency, and reducing average energy consumption per task by over 40%.
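The abstract does not give the exact attenuation model or zone-assignment rule, but the two ideas it names — exponential attenuation and assigning each device to the zone with the highest received power — can be sketched as follows. All parameters (`P_TX`, `ZONE_POS`, `ALPHA`) are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical parameters: transmit power per WPT zone (W) and an
# attenuation coefficient ALPHA (1/m) for the exponential decay model.
P_TX = np.array([3.0, 2.5, 4.0])                        # one entry per WPT zone
ZONE_POS = np.array([[0.0, 0.0], [50.0, 0.0], [25.0, 40.0]])
ALPHA = 0.05

def received_power(device_pos):
    """Received power at a device from each zone: P_rx = P_tx * exp(-alpha * d)."""
    d = np.linalg.norm(ZONE_POS - device_pos, axis=1)   # distance to each zone
    return P_TX * np.exp(-ALPHA * d)

def assign_zone(device_pos):
    """Assign the device to the zone delivering the most power."""
    return int(np.argmax(received_power(device_pos)))

# A device sitting at the centre of zone 1 is assigned to zone 1
print(assign_zone(np.array([50.0, 0.0])))  # → 1
```

The actual model would additionally fold in the vortex beam coverage pattern and couple this assignment with the local-versus-offload decision, which is what makes the joint problem NP-hard.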
With the growing cosmopolitan culture of modern cities, the need for robust Multi-Lingual scene Text (MLT) detection and recognition systems has never been greater. With the goal to systematically benchmark and pu...
Video text detection is considered one of the most difficult tasks in document analysis due to two challenges: 1) the difficulties caused by video scenes, i.e., motion blur, illumination changes, and occlusion; 2) the properties of text, including variants of fonts, languages, orientations, and shapes. Most existing methods attempt to enhance the performance of video text detection by cooperating with video text tracking, but treat these two tasks separately. In this work, we propose an end-to-end video text detection model with online tracking to address these two challenges. Specifically, in the detection branch, we adopt ConvLSTM to capture spatial structure information and motion memory. In the tracking branch, we convert the tracking problem into text instance association, and an appearance-geometry descriptor with a memory mechanism is proposed to generate robust representations of text instances. By integrating these two branches into one trainable framework, they promote each other and the computational cost is significantly reduced. Experiments on existing video text benchmarks, including ICDAR2013 Video, Minetto, and YVT, demonstrate that the proposed method significantly outperforms state-of-the-art methods. Our method improves F-score by about 2% on all datasets and runs in real time at 24.36 fps on a TITAN Xp.
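The core of the tracking branch — associating text instances across frames by blending appearance and geometry cues — can be illustrated with a minimal greedy matcher. This is a simplified sketch, not the paper's learned descriptor or memory mechanism: the affinity weight `w` and threshold `thresh` are hypothetical, and appearance is reduced to cosine similarity between fixed feature vectors:

```python
import numpy as np

def iou(a, b):
    """IoU of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def affinity(desc_a, box_a, desc_b, box_b, w=0.5):
    """Blend appearance (cosine similarity) with geometry (IoU)."""
    cos = np.dot(desc_a, desc_b) / (np.linalg.norm(desc_a) * np.linalg.norm(desc_b) + 1e-8)
    return w * cos + (1 - w) * iou(box_a, box_b)

def associate(prev, curr, thresh=0.4):
    """Greedy one-to-one matching of text instances between frames.
    prev/curr: lists of (descriptor, box); returns (prev_idx, curr_idx) pairs."""
    scores = [(affinity(*p, *c), i, j)
              for i, p in enumerate(prev) for j, c in enumerate(curr)]
    matches, used_p, used_c = [], set(), set()
    for s, i, j in sorted(scores, reverse=True):
        if s >= thresh and i not in used_p and j not in used_c:
            matches.append((i, j))
            used_p.add(i); used_c.add(j)
    return matches
```

In the paper's framework the descriptor is produced by the shared backbone and updated with a memory mechanism, so detection and tracking reinforce each other instead of running as separate systems.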
Brain tumours are highly complex, particularly when it comes to their initial and accurate diagnosis, as this determines patient prognosis. Conventional methods rely on MRI and CT scans and employ generic machine learning techniques, which are heavily dependent on feature extraction and require human intervention. These methods may fail in complex cases and do not produce human-interpretable results, making it difficult for clinicians to trust the model's predictions. Such limitations prolong the diagnostic process and can negatively impact the quality of treatment. The advent of deep learning has made it a powerful tool for complex image analysis tasks, such as detecting brain tumours, by learning advanced patterns from images. However, deep learning models are often considered "black box" systems, where the reasoning behind predictions remains unclear. To address this issue, the present study applies Explainable AI (XAI) alongside deep learning for accurate and interpretable brain tumour prediction. XAI enhances model interpretability by identifying key features such as tumour size, location, and texture, which are crucial for clinicians. This helps build their confidence in the model and enables them to make better-informed decisions. In this research, a deep learning model integrated with XAI is proposed to develop an interpretable framework for brain tumour prediction. The model is trained on an extensive dataset comprising imaging and clinical data and demonstrates a high AUC while leveraging XAI for model explainability and feature selection. The study findings indicate that this approach improves predictive performance, achieving an accuracy of 92.98% and a miss rate of 7.02%. Additionally, interpretability tools such as LIME and Grad-CAM provide clinicians with a clearer understanding of the decision-making process, supporting diagnosis and treatment. This model represents a significant advancement in brain tumour prediction, with the potential to enhance pat...
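LIME and Grad-CAM both belong to the family of perturbation- and attribution-based explanations. The underlying idea — measure how the model's score changes when part of the input is hidden — can be shown with a tiny occlusion-sensitivity sketch that needs no trained network. The `toy_predict` model and all sizes here are hypothetical, chosen only to make the mechanism visible:

```python
import numpy as np

def occlusion_map(image, predict, patch=4, baseline=0.0):
    """Occlusion sensitivity: slide a masking patch over the image and
    record how much the model's score drops when each region is hidden.
    `predict` maps a 2-D image array to a scalar class score."""
    h, w = image.shape
    base_score = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base_score - predict(occluded)
    return heat  # large values = regions the model relied on

# Toy "model": score is the mean intensity inside a fixed region,
# standing in for a network that attends to the tumour area.
def toy_predict(img):
    return float(img[4:8, 4:8].mean())

img = np.zeros((12, 12))
img[4:8, 4:8] = 1.0          # bright "tumour" region
heat = occlusion_map(img, toy_predict)
print(heat.argmax())         # → 4: the centre cell, which covers the bright region
```

Grad-CAM reaches a similar heat map far more cheaply by backpropagating gradients to the last convolutional layer, which is why it is the practical choice for clinical imaging models.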
ISBN (digital): 9781728119908
ISBN (print): 9781728119915
Major depressive disorder (MDD) is a complex mental disorder characterized by a persistent sad feeling and depressed mood. Recent studies have reported differences between healthy controls (HC) and MDD patients by examining brain networks, including the default mode and cognitive control networks. More recently, there has been interest in studying the brain using advanced machine learning-based classification approaches. However, interpreting the models used in the classification between MDD and HC has not yet been explored. In the current study, we classified MDD from HC by estimating whole-brain connectivity using several classification methods, including support vector machine, random forest, XGBoost, and a convolutional neural network. In addition, we leveraged the SHapley Additive exPlanations (SHAP) approach as a feature learning method to model the difference between these two groups. We found consistent results among all classification methods with regard to classification accuracy and feature learning. We also highlighted the role of other brain networks, particularly the visual and sensorimotor networks, in the classification between MDD and HC subjects.
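SHAP attributes a model's output to its input features using Shapley values from cooperative game theory. For a handful of features the value can be computed exactly by enumerating coalitions, which makes the definition concrete; the feature names and contributions below are purely illustrative, not results from the study, and real SHAP implementations approximate this sum at scale:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all coalitions.
    value_fn(subset) returns the model output using only that subset of features.
    Feasible only for a few features; libraries like SHAP approximate this."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        rest = [g for g in features if g != f]
        for k in range(n):
            for s in combinations(rest, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += weight * (value_fn(set(s) | {f}) - value_fn(set(s)))
    return phi

# Toy "classifier": hypothetical connectivity features contribute
# additively to an MDD score, so each Shapley value recovers its weight.
contrib = {"default_mode": 0.6, "cognitive_control": 0.3, "visual": 0.1}
value = lambda s: sum(contrib[f] for f in s)
phi = shapley_values(list(contrib), value)
print(phi)
```

Ranking connectivity features by these attributions is what lets a study point at specific networks, such as the visual and sensorimotor networks, as drivers of the MDD-versus-HC decision.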
Image quality assessment (IQA) is the key factor for the fast development of image restoration (IR) algorithms. The most recent perceptual IR algorithms based on generative adversarial networks (GANs) have brought in ...
详细信息
Prediction models are used amongst others to inform medical decisions on interventions. Typically, individuals with high risks of adverse outcomes are advised to undergo an intervention while those at low risk are adv...
详细信息
The rapid increase in Mobile Internet of Things (IoT) devices requires novel computational frameworks. These frameworks must meet strict latency and energy efficiency requirements in Edge and Mobile Edge Computing (MEC) systems. Spatio-temporal dynamics, which include the position of edge servers and the timing of task schedules, pose a complex optimization problem. These challenges are further exacerbated by the heterogeneity of IoT workloads and the constraints imposed by device mobility. Balancing computational overhead against communication cost adds a further difficulty. To address these issues, advanced methods are needed for resource management and dynamic task scheduling in mobile IoT and edge computing environments. In this paper, we propose a multi-objective Deep Reinforcement Learning (DRL) algorithm: a Double Deep Q-Learning (DDQN) framework enhanced with spatio-temporal mobility prediction, latency-aware task offloading, and energy-constrained IoT device trajectory optimization for federated edge computing networks. DDQN was chosen for its optimization stability and reduced overestimation of Q-values. The framework employs a reward-driven optimization model that dynamically prioritizes latency-sensitive tasks, minimizes task migration overhead, and balances energy efficiency across devices and edge servers. It integrates dynamic resource allocation algorithms to address random task arrival patterns and real-time computational demands. Simulations demonstrate up to a 35% reduction in end-to-end latency, a 28% improvement in energy efficiency, and a 20% decrease in the deadline-miss ratio compared to benchmark algorithms.
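The claimed advantage of DDQN — reduced overestimation of Q-values — comes from one specific change to the bootstrap target: the online network chooses the next action, but the target network evaluates it. A minimal sketch of that target computation, with hypothetical Q-value arrays standing in for the two networks' outputs:

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).
    Decoupling action selection from evaluation is what curbs the
    overestimation bias of vanilla Q-learning's max operator."""
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))   # online net selects
    return reward + gamma * next_q_target[best_action]  # target net evaluates

# Vanilla DQN would bootstrap from max(next_q_target) = 5.0; DDQN instead
# scores the online network's preferred action with the target network.
q_online = np.array([1.0, 3.0, 2.0])   # online net prefers action 1
q_target = np.array([5.0, 2.0, 4.0])   # target net's estimates
print(ddqn_target(1.0, q_online, q_target, gamma=0.9))  # 1.0 + 0.9 * 2.0 = 2.8
```

In the proposed framework the reward fed into this target is the multi-objective signal blending latency, migration overhead, and energy, which is how one Q-learning update serves all three goals.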
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning in color fundus photography (CFP) analysis is also evolving. Although there are some open-source, labeled datasets ...
详细信息
Radiation therapy (RT) is widely employed in the clinic for the treatment of head and neck (HaN) cancers. An essential step of RT planning is the accurate segmentation of various organs-at-risks (OARs) in HaN CT image...