ISBN (Digital): 9798331507022
ISBN (Print): 9798331507039
Maximizing energy efficiency (EE) in Multiple-Input Single-Output (MISO) downlink networks employing Quadrature Rate-Splitting Multiple Access (Q-RSMA) is challenging because the underlying optimization problem, over power allocations, beamforming vectors, and rate allocations under multiple constraints, is non-convex. In this paper, we propose a Deep Reinforcement Learning (DRL) framework based on the Deep Deterministic Policy Gradient (DDPG) algorithm to maximize EE. We handle the minimum-rate constraints by formulating the rate allocation as a linear programming (LP) problem, which admits a computationally efficient solution, and we make the beamforming-vector normalization explicit so that the unit-norm constraints are satisfied. Simulation results demonstrate the effectiveness of the proposed approach in achieving high EE while satisfying all system constraints.
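The abstract does not spell out the LP or the normalization step, so the following is a minimal sketch under assumed details: the RSMA common-stream rate R_common is split into per-user shares c_k so that each user's total rate c_k + r_k meets its minimum-rate target, solved here with scipy's linprog. The function names and the exact constraint set are illustrative, not the authors' implementation.

```python
# Sketch (assumed formulation): allocate the common-stream rate R_common
# across K users so every user's total rate meets its minimum-rate target.
import numpy as np
from scipy.optimize import linprog

def allocate_common_rate(r_private, r_min, R_common):
    """LP over shares c_k: maximize sum_k c_k subject to
    c_k + r_private_k >= r_min_k, sum_k c_k <= R_common, c_k >= 0.
    Returns the shares, or None if the minimum rates are infeasible."""
    K = len(r_private)
    c = -np.ones(K)                      # linprog minimizes, so negate
    A_ub = [np.ones(K)]                  # sum_k c_k <= R_common
    b_ub = [R_common]
    for k in range(K):                   # -c_k <= r_private_k - r_min_k
        row = np.zeros(K)
        row[k] = -1.0
        A_ub.append(row)
        b_ub.append(r_private[k] - r_min[k])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, None)] * K, method="highs")
    return res.x if res.success else None

def normalize_beams(W):
    """Project each raw actor output (row of W) onto the unit sphere,
    enforcing the unit-norm beamforming constraint."""
    return W / np.linalg.norm(W, axis=1, keepdims=True)
```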
Recent advances in Large Language Models (LLMs) have enabled their application to recommender systems (RecLLMs), yet concerns remain regarding fairness across demographic and psychological user dimensions. We introduc...
The discharge of domestic sewage by urban residents has always been a major factor causing water pollution in rivers, and with the continuous improvement of people's living standards, the amount of domestic sewage...
ISBN (Digital): 9798331531935
ISBN (Print): 9798331531942
In medical data classification, improving classification performance is essential when the dataset is small and some attribute values are missing. Classification performance suffers if the training dataset starts with too few samples: small datasets bring challenges such as overfitting, high variance, an inability to measure the model's true performance, biased models, sensitivity to noise, and improper evaluation. The proposed algorithm uses domain-based multiple imputation to create additional data tuples and appends these tuples to the originally available training data, which improves the classification power of the classifier. The technique imputes missing attribute values from a set of domain values, and the accuracy of the imputed tuples is verified using classifier filters. The proposed technique significantly improves the classification performance of the classifier and is suitable for small to medium-sized datasets; data imputation helps develop improved and more accurate classifiers. The proposed method achieves a 4.01% improvement in the classification accuracy of the fuzzy neural network FBNFC. We performed two hypothesis tests, the t-test and the Wilcoxon rank-sum test, and both support the alternative hypothesis that the improvement is significant.
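The abstract describes the technique only at a high level, so here is a hedged Python sketch of the idea: fill each missing attribute value with every value from a per-attribute domain list (the multiple imputations), then keep only those imputed tuples that a classifier filter, trained on the complete rows, labels consistently. The RandomForest filter and the helper names are assumptions for illustration, not the paper's exact components.

```python
# Hedged sketch: domain-based multiple imputation with a classifier filter.
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestClassifier

def impute_and_filter(X, y, domains):
    """X: float array with np.nan for missing values; y: label array;
    domains: list of candidate-value lists, one per attribute (assumed)."""
    complete = ~np.isnan(X).any(axis=1)
    # Filter classifier trained on the originally complete tuples.
    clf = RandomForestClassifier(random_state=0).fit(X[complete], y[complete])
    new_rows, new_labels = [], []
    for xi, yi in zip(X[~complete], y[~complete]):
        miss = np.where(np.isnan(xi))[0]
        # Multiple imputations: try every combination of domain values.
        for combo in product(*(domains[j] for j in miss)):
            cand = xi.copy()
            cand[miss] = combo
            # Keep the tuple only if the filter agrees with its label.
            if clf.predict(cand.reshape(1, -1))[0] == yi:
                new_rows.append(cand)
                new_labels.append(yi)
    if not new_rows:
        return X[complete], y[complete]
    return (np.vstack([X[complete], new_rows]),
            np.concatenate([y[complete], new_labels]))
```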
Medical images are available in small datasets with different modalities and for various organs. Although Transfer learning is a promising approach for training models on small datasets, further studies are required o...
The Internet of Things (IoT) is an architecture for a network that processes and analyzes sensitive data in order to deliver a variety of services to a large number of users. IoT devices are capable of delivering a wi...
Occlusions are a significant challenge to human pose estimation algorithms, often resulting in inaccurate and anatomically implausible poses. Although current occlusion-robust human pose estimation algorithms exhibit ...
ISBN (Digital): 9798331519582
ISBN (Print): 9798331519599
This study applies Machine Learning (ML) techniques to construct an all-encompassing Multiple Disease Prediction System using datasets from Kaggle. Owing to their multifaceted structure, the datasets hold great potential for predicting Parkinson's disease, diabetes, and heart disease. The study incorporates Support Vector Machines for the Parkinson's and diabetes predictions and Logistic Regression for heart disease prediction. This approach gives the system improved reliability, ease of use, and, most importantly, sound predictions across the covered medical conditions. A prominent concern is the intersection of emerging technologies and medicine, with emphasis on the importance of evidence-based medicine for primary healthcare. The project uses intuitive yet advanced ML algorithms to improve diabetes and cardiovascular disease prediction and to narrow the gap toward improved healthcare action. The principal aim is to develop models that are economical and adaptable to a wide range of changes, starting from decision trees but relying on more advanced techniques such as SVM and Logistic Regression. Integrating the open-source Streamlit platform makes the user-facing module easier to use, which improves the operational efficacy of and access to the predictive system and thus broadens the target user population. Together, these components support a better and more proactive healthcare system for early diagnosis and prevention of common health problems.
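As a concrete illustration of the described setup, here is a minimal, hedged sketch of an SVM/Logistic Regression pair behind a Streamlit front end. The CSV file names and column names ("Outcome", "target") are hypothetical stand-ins for the Kaggle datasets; the paper's actual preprocessing and feature sets are not shown here.

```python
# Hedged sketch: SVM + Logistic Regression behind a Streamlit front end.
import pandas as pd
import streamlit as st
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

@st.cache_resource
def train_models():
    # Hypothetical Kaggle file and column names.
    diabetes = pd.read_csv("diabetes.csv")
    heart = pd.read_csv("heart.csv")
    svm = SVC().fit(diabetes.drop(columns="Outcome"), diabetes["Outcome"])
    logreg = LogisticRegression(max_iter=1000).fit(
        heart.drop(columns="target"), heart["target"])
    return svm, logreg

svm, logreg = train_models()
choice = st.selectbox("Predictor", ["Diabetes (SVM)", "Heart disease (LogReg)"])
raw = st.text_input("Comma-separated feature values")
if st.button("Predict") and raw:
    x = [[float(v) for v in raw.split(",")]]
    model = svm if choice.startswith("Diabetes") else logreg
    st.write("Prediction:", int(model.predict(x)[0]))
```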
Background: Computer science significantly influences modern culture, especially with the rapid breakthroughs in social media networking technology. Social media platforms have become significant channels for sharing and exchanging daily news and information on many issues in the current digital environment, which is known for its massive data collection and transmission capabilities. While this environment offers many benefits, it also carries many false reports and pieces of information that deceive readers and users into believing they are receiving correct information. Objective: Nowadays users rely on social media for news content, but malicious users sometimes tamper with real news and spread fake news, which can damage the reputation of social media. Many existing models have been introduced to detect fake news, but they are based on traditional machine learning algorithms such as decision tree (DT), multilayer perceptron (MLP), and random forest (RF), and they lack performance, security, and authorization. Our proposed model addresses these problems using reinforcement learning and blockchain technology. Methods: In this research paper, we present a new way to identify fake news. The key innovation is policy-based heuristic reinforcement learning (PHRL), in which the model dynamically adjusts through iterative learning and gradually improves classification accuracy. Likewise, our smart-contract authorization method ensures that content is posted safely by authorized users and improves the transparency and accountability of information. Results: Our model was tested on real-time information collected from various sources, achieving 70% accuracy and valid authentication. Conclusion: Our proposed model produced better results, with a Mean Absolute Error (MAE) of 0.0811 and a Root Mean Squared Error (RMSE) of 0.2847, both significantly lower values. Our proposed model ...
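PHRL is not a standard textbook algorithm, so the following is only a hedged sketch of the general idea the abstract describes: a simple Bernoulli policy scores an article's feature vector as real or fake, earns a +1/-1 reward against the ground truth, and is updated with a REINFORCE-style gradient so that classification accuracy improves over iterations. The synthetic features and learning rate are assumptions; the smart-contract authorization layer is omitted.

```python
# Hedged sketch of a policy-gradient loop for real/fake classification.
import numpy as np

rng = np.random.default_rng(0)

def policy(w, x):
    """Probability the policy assigns to 'fake' for feature vector x."""
    return 1.0 / (1.0 + np.exp(-w @ x))

def phrl_step(w, x, label, lr=0.1):
    p = policy(w, x)
    action = int(rng.random() < p)        # sample an action from the policy
    reward = 1.0 if action == label else -1.0
    grad = (action - p) * x               # grad of log-prob, Bernoulli policy
    return w + lr * reward * grad         # REINFORCE-style update

# Synthetic stand-in for extracted article features and ground-truth labels.
X = rng.normal(size=(500, 8))
true_w = rng.normal(size=8)
labels = (X @ true_w > 0).astype(int)

w = np.zeros(8)
for x, label in zip(X, labels):
    w = phrl_step(w, x, label)            # accuracy improves over iterations
```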
ISBN (Digital): 9798350357509
ISBN (Print): 9798350357516
Anomaly detection and classification play a vital role in maintaining public safety and security, and an automated system can reduce human effort, cost, and time. We propose a two-stage classifier pipelined within a single model for anomaly detection and classification. The first stage is a Convolutional Neural Network (CNN) based binary classifier that determines whether an event is anomalous or normal. If the first stage finds the event anomalous, it passes to the second stage of our single-pipeline model: a Vision Transformer (ViT) based architecture that further classifies the anomalous event into specific anomaly categories. This research uses the UCF Crime dataset, which is quite large and therefore demands significant computational resources and processing time. To reduce this cost, we also propose a keyframe extraction algorithm that identifies and selects only the relevant frames from each video and discards redundant and irrelevant ones. The methodology combines the CNN and ViT for spatial-temporal feature extraction from complex scenarios and classifies them. The proposed model achieves 98% accuracy for the binary classification stage and 95% accuracy for multi-class classification, and the keyframe extraction algorithm significantly reduces processing time and computational load, requiring only 20 ms of processing time per video. These outcomes suggest that the model can outperform traditional methods for anomaly detection and classification. However, highly correlated and vast amounts of data create problems such as overfitting and increase the complexity of the model.
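The paper's keyframe extraction algorithm is not detailed in the abstract, so the sketch below shows one plausible variant of the idea: keep a frame only when its mean absolute difference from the last kept frame exceeds a threshold, so redundant frames are discarded before classification. The threshold value and OpenCV-based implementation are assumptions for illustration.

```python
# Hedged sketch: keep a frame only when it differs enough from the last kept one.
import cv2
import numpy as np

def extract_keyframes(video_path, threshold=12.0):
    cap = cv2.VideoCapture(video_path)
    keyframes, last = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.int16)
        # Mean absolute difference against the last kept frame.
        if last is None or np.abs(gray - last).mean() > threshold:
            keyframes.append(frame)       # relevant frame: scene changed enough
            last = gray
    cap.release()
    return keyframes                      # redundant frames discarded
```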