ISBN (Digital): 9798350351255
ISBN (Print): 9798350351262
This paper proposes a joint design of probabilistic constellation shaping (PCS) and precoding to enhance the sum-rate performance of multi-user visible light communications (VLC) broadcast channels subject to a signal amplitude constraint. In the proposed design, the transmission probabilities of bipolar M-ary pulse amplitude modulation (M-PAM) symbols for each user and the transmit precoding matrix are jointly optimized to improve the sum-rate performance. The joint design problem is shown to be non-convex due to the non-convexity of the objective function. To tackle the problem, the firefly algorithm (FA), a nature-inspired heuristic optimization approach, is employed to find a locally optimal solution to the original non-convex optimization problem. The FA-based approach, however, suffers from high computational complexity. Therefore, we propose a low-complexity design based on zero-forcing (ZF) precoding, which is solved using an alternating optimization (AO) approach. Simulation results reveal that the proposed joint design with PCS significantly improves the sum-rate performance compared to the conventional design with uniform signaling. Some insights into the optimal symbol distributions of the two joint design approaches are also provided.
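The abstract stops short of the concrete formulation, so the following is only a rough Python sketch of two ingredients it names: a zero-forcing precoder scaled to respect a per-LED amplitude limit, and a Maxwell-Boltzmann-style distribution over bipolar M-PAM symbols as one common family used for probabilistic shaping. The channel matrix H, the amplitude limit A, and the shaping parameter nu are illustrative placeholders, not values or methods from the paper.

```python
# Illustrative sketch only: zero-forcing (ZF) precoding with a simple amplitude
# back-off and a Maxwell-Boltzmann-style PAM symbol distribution. The paper's
# joint PCS/precoding optimization (FA-based or AO-based) is not reproduced here.
import numpy as np

def zf_precoder(H, A):
    """ZF precoder W = H^H (H H^H)^-1, scaled so every LED output stays within |A|."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    # Bipolar M-PAM symbols lie in [-1, 1]; the worst-case amplitude on each LED
    # is the l1-norm of the corresponding row of W, so scale by the largest one.
    backoff = np.abs(W).sum(axis=1).max()
    return (A / backoff) * W

def mb_pam_distribution(M, nu):
    """Maxwell-Boltzmann-type probabilities over bipolar M-PAM symbols."""
    symbols = np.arange(-(M - 1), M, 2, dtype=float)  # e.g. [-3, -1, 1, 3] for M = 4
    p = np.exp(-nu * symbols**2)
    return symbols, p / p.sum()

# Example: 3 users, 4 LEDs, amplitude limit A = 1, 4-PAM with shaping parameter 0.1
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))
W = zf_precoder(H, A=1.0)
symbols, probs = mb_pam_distribution(M=4, nu=0.1)
print(np.round(H @ W, 6))            # ~scaled identity: inter-user interference removed
print(symbols, np.round(probs, 3))   # shaped symbol probabilities
```

The l1-norm back-off is one simple way to guarantee the amplitude constraint for any symbol realization; jointly tuning the probabilities and the precoder, as the paper does, is a separate optimization step.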
ISBN (Digital): 9798350367003
ISBN (Print): 9798350367225
In the ever-evolving domain of medical imaging, the integration of deep learning techniques holds the promise of transformative advancements. This research delved into the potential of employing data transfer within deep learning architectures for the automated detection of three distinct lung cancer types. Leveraging sophisticated methodologies like linear discriminant analysis (LDA), t-SNE, and PCA, the study aimed to enhance accuracy and efficiency in detecting malignancies from lung CT scan images. On rigorous evaluation, the models demonstrated compelling accuracy rates: salivary gland-type lung tumors at 90.5%, pleomorphic (spindle/giant cell) carcinoma at 88.2%, and primary pulmonary sarcomas at 91.3%. Additionally, ROC curve analysis further highlighted the robust discriminative capability of the models across varied decision thresholds. The promising results accentuate the potential of integrating data transfer techniques with deep learning in a clinical setting. This research not only exhibits a significant stride in lung cancer detection but also paves the path for further innovations in automated medical image analysis.
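For orientation only, the sketch below shows the general shape of such a pipeline: deep features from a pre-trained backbone reduced with PCA and classified with LDA. The feature matrix, labels, and dimensions are synthetic placeholders; the authors' actual models and CT data are not reproduced here.

```python
# Rough sketch of a transfer-feature + PCA/LDA pipeline of the kind described,
# not the authors' implementation. Random data stands in for CNN features
# extracted from lung CT scans (e.g. a pre-trained backbone's penultimate layer).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 512))          # 300 scans, 512-dim deep features
y = rng.integers(0, 3, size=300)             # 3 tumour classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
pca = PCA(n_components=50).fit(X_tr)         # dimensionality reduction
clf = LinearDiscriminantAnalysis().fit(pca.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(pca.transform(X_te))))
```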
In a wireless network using directional transmitters, a typical problem is to schedule a set of directional links to cover all the receivers in a region, such that an adequate data rate and coverage are maintained whi...
Forests cover nearly one-third of the Earth’s land and are some of our most biodiverse ecosystems. Due to climate change, these essential habitats are endangered by increasing wildfires. Wildfires are not just a risk to the environment, but they also pose public health risks. Given these issues, there is an indispensable need for efficient and early detection methods. Conventional detection approaches fall short due to spatial limitations and manual feature engineering, which calls for the exploration and development of data-driven deep learning solutions. This paper, in this regard, proposes 'FireXnet', a tailored deep learning model designed for improved efficiency and accuracy in wildfire detection. FireXnet has a lightweight architecture that exhibits high accuracy with significantly less training and testing time. It contains considerably fewer trainable and non-trainable parameters, which makes it suitable for resource-constrained devices. To make the FireXnet model visually explainable and trustworthy, a powerful explainable artificial intelligence (AI) tool, SHAP (SHapley Additive exPlanations), has been incorporated. It interprets FireXnet’s decisions by computing the contribution of each feature to the prediction. Furthermore, the performance of FireXnet is compared against five pre-trained models (VGG16, InceptionResNetV2, InceptionV3, DenseNet201, and MobileNetV2) to benchmark its efficiency. For a fair comparison, transfer learning and fine-tuning have been applied to retrain these models on our dataset. The test accuracy of the proposed FireXnet model is 98.42%, which is greater than all other models used for comparison. Furthermore, results of reliability parameters confirm the model’s reliability, i.e., a confidence interval of [0.97, 1.00] validates the certainty of the proposed model’s estimates and a Cohen’s kappa coefficient of 0.98 proves that decisions of FireXnet are in considerable accordance with t
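FireXnet itself is not reproduced here; as a hedged illustration of the benchmarking setup the abstract describes, the sketch below builds one of the named baselines (MobileNetV2) with a frozen ImageNet backbone and a new binary fire/no-fire head, assuming 224x224 RGB inputs. The optimizer, dropout rate, and head layout are assumptions, not details from the paper.

```python
# Hedged sketch of a transfer-learning baseline: pre-trained backbone frozen,
# new classification head trained on the wildfire dataset. Hyper-parameters
# here are placeholders, not the paper's settings.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                                   # freeze pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # fire / no-fire
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# Fine-tuning would then call model.fit(...) on the wildfire image dataset,
# optionally unfreezing the top layers of the backbone afterwards.
```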
LiDAR-based localization is valuable for applications like mining surveys and underground facility maintenance. However, existing methods can struggle when dealing with uninformative geometric structures in challengin...
The problem of finding the minimum amount of fanout needed to realize a switching function f is investigated. Fanout-free functions are defined, and necessary and sufficient conditions for a function to be fanout-free...
The utilization of artificial intelligence (AI) and other cutting-edge techniques in the field of medical image analysis has exhibited significant potential. Nevertheless, a significant obstacle that impedes the extensive implementation of these models in the healthcare sector is their restricted interpretability. The concept of explainability is a subject of extensive discussion and debate within the context of utilizing AI in the healthcare domain. Notwithstanding the empirical evidence demonstrating the superior performance of AI-driven systems compared to humans in certain analytical tasks, particularly in the field of medical image computing, these systems still encounter challenges due to their limited explainability. The present study provides a comprehensive assessment of the significance of explainability in medical AI and performs an ethical analysis of the influence of explainability on the incorporation of AI-driven tools in data engineering in medicine and health care. The paper examines various subjects including data security, confidentiality, privacy, fairness, and discrimination, among others.
In this paper, a color edge detection strategy based on collaborative filtering combined with multiscale gradient fusion is proposed. The block-matching and 3D (BM3D) filter is used to enhance the sparse representati...
The need for early detection of diabetes mellitus has led to the development of various intelligent systems using machine learning and artificial intelligence for the recognition of the presence of the disease. However, most of these techniques have yielded comparatively low accuracy. This research applied data science techniques to a diabetes mellitus dataset to improve the accuracy of disease prediction. This was achieved by pre-processing the data with dummy categories and applying principal component analysis for dimensionality reduction. A support vector machine, a random forest classifier, and deep neural networks were then used to train the system, yielding accuracies of 0.76, 0.77, and 0.89 respectively. Of the three, deep neural networks yielded the highest accuracy. The study concluded that better pre-processing will improve the accuracy of machine learning algorithms in the prediction of diabetes mellitus.
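As a rough illustration of the described workflow (dummy-encoded categories, PCA, then an SVM, a random forest, and a neural network), the sketch below uses a small synthetic frame in place of the actual diabetes dataset, whose features and hyper-parameters are not given in the abstract.

```python
# Illustrative sketch only: dummy-encoded categories, PCA, and three classifiers.
# The toy data frame stands in for the diabetes-mellitus dataset.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.DataFrame({
    "glucose": [148, 85, 183, 89, 137, 116, 78, 115],
    "bmi": [33.6, 26.6, 23.3, 28.1, 43.1, 25.6, 31.0, 35.3],
    "smoker": ["yes", "no", "no", "yes", "no", "yes", "no", "no"],
    "outcome": [1, 0, 1, 0, 1, 0, 1, 0],
})
X = pd.get_dummies(df.drop(columns="outcome"))   # dummy categories
y = df["outcome"]

for name, clf in [("SVM", SVC()),
                  ("RF", RandomForestClassifier(random_state=0)),
                  ("DNN", MLPClassifier(hidden_layer_sizes=(32, 16),
                                        max_iter=2000, random_state=0))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=3), clf)
    print(name, cross_val_score(pipe, X, y, cv=2).mean())
```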
ISBN (Digital): 9798331520526
ISBN (Print): 9798331520533
EEG and fMRI are complementary, noninvasive technologies for investigating human brain function. These modalities have been used to uncover large-scale functional networks and their disruptions in clinical populations. Given the high temporal resolution of EEG and high spatial resolution of fMRI, integrating these modalities can provide a more holistic understanding of brain activity. This work explores a multimodal source decomposition technique for extracting shared modes of temporal variation between fMRI BOLD signals and EEG spectral power fluctuations in the resting state. The resulting components are then compared between patients with focal epilepsy and controls, revealing multimodal network differences between groups.
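The abstract does not name the specific decomposition, so the sketch below uses canonical correlation analysis purely as a stand-in for extracting shared temporal modes between fMRI BOLD time courses and EEG band-power series; all data are synthetic placeholders.

```python
# Minimal stand-in example: canonical correlation analysis as one way to find
# shared temporal modes across two modalities. Not the paper's actual method.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
T = 200                                   # time points common to both modalities
shared = rng.standard_normal((T, 2))      # latent shared temporal modes

# Placeholder data: fMRI BOLD time courses (regions) and EEG band-power series
# (channel-band pairs), each mixing the shared modes plus noise.
bold = shared @ rng.standard_normal((2, 30)) + 0.5 * rng.standard_normal((T, 30))
eeg_power = shared @ rng.standard_normal((2, 20)) + 0.5 * rng.standard_normal((T, 20))

cca = CCA(n_components=2).fit(bold, eeg_power)
U, V = cca.transform(bold, eeg_power)     # paired temporal components
for k in range(2):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"component {k}: canonical correlation ~ {r:.2f}")
```

Group comparisons like the one described would then contrast the recovered component time courses or loadings between patients and controls.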