Background: Epilepsy is a neurological disorder that leads to seizures, which occur due to excessive electrical discharge by brain cells. An effective seizure prediction model can aid in improving the lifestyle of...
The emergence of the novel COVID-19 virus has had a profound impact on global healthcare systems and economies, underscoring the imperative need for the development of precise and expeditious diagnostic tools. Machine learning techniques have emerged as a promising avenue for augmenting the capabilities of medical professionals in disease diagnosis and classification. In this research, the EFS-XGBoost classifier model, a robust approach for the classification of patients afflicted with COVID-19, is proposed. The key innovation in the proposed model lies in the Ensemble-based Feature Selection (EFS) strategy, which enables the judicious selection of relevant features from the expansive COVID-19 dataset. Subsequently, the power of the eXtreme Gradient Boosting (XGBoost) classifier to make precise distinctions among COVID-19-infected patients is harnessed. The EFS methodology amalgamates five distinctive feature selection techniques, encompassing correlation-based, chi-squared, information gain, symmetric uncertainty-based, and gain ratio approaches. To evaluate the effectiveness of the model, comprehensive experiments were conducted using a COVID-19 dataset procured from Kaggle, and the implementation was executed in Python. The performance of the proposed EFS-XGBoost model was gauged using well-established classification metrics, including accuracy, precision, recall, and the F1-score. Furthermore, an in-depth comparative analysis was conducted by considering the performance of the XGBoost classifier under various scenarios: employing all features within the dataset without any feature selection technique, and utilizing each feature selection technique in isolation. The evaluation reveals that the proposed EFS-XGBoost model excels in performance, achieving an accuracy rate of 99.8% and surpassing the efficacy of other prevailing feature selection techniques. This research not only advances the field of COVID-19 …
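The abstract describes the pipeline but includes no code. As a rough illustration, the following Python sketch wires an ensemble of feature rankers into a majority vote before XGBoost training; the scorer set (three of the five rankers shown, with symmetric uncertainty and gain ratio plugging in as extra entries), the top-k cutoff, the voting threshold, and the synthetic data are all illustrative assumptions, not the paper's implementation.

    # Hypothetical sketch of ensemble-based feature selection (EFS) + XGBoost.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import chi2, mutual_info_classif
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
    X = X - X.min(axis=0)                       # chi2 requires non-negative inputs

    def corr_scores(X, y):
        # Absolute Pearson correlation of each feature with the class label.
        return np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

    scorers = {
        "correlation": corr_scores,
        "chi_squared": lambda X, y: chi2(X, y)[0],
        "info_gain": mutual_info_classif,       # symmetric uncertainty and gain
    }                                           # ratio scorers would be added here

    k = 15                                      # top-k features kept per ranker
    votes = np.zeros(X.shape[1], dtype=int)
    for name, score in scorers.items():
        top_k = np.argsort(score(X, y))[::-1][:k]
        votes[top_k] += 1

    selected = np.where(votes >= 2)[0]          # keep features most rankers agree on
    X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
    model = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, model.predict(X_te)))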
Intrusion detection systems (IDS) are one of the most promising ways of securing data and networks. In recent decades, IDS have used a variety of categorization algorithms. These classifiers, on the other hand, do not work effectively unless they are combined with additional algorithms that can tune the classifier's parameters or select the optimal subset of features for the problem. Such algorithms are used in tandem with classifiers to increase the stability and efficiency of the classifiers in detecting intrusions. These algorithms, however, have a number of limitations, particularly when used to detect new types of attacks. In this paper, the NSL-KDD and KDD Cup 99 datasets are used to measure and compare the performance of the proposed classifier model. The two IDS datasets are preprocessed, then Auto Cryptographic Denoising (ACD) is adopted to remove noise from the features of the datasets; the classifier algorithms, K-Means and a neural network, then classify the data, with the neural network trained using the Adam optimizer. The classifiers are evaluated by measuring performance metrics such as F-measure, recall, precision, detection rate, and accuracy. The neural network obtained the highest classification accuracy, 91.12%, with a drop-out function on the KDD Cup 99 dataset, which shows the efficiency of the classifier model with drop-out. The strengths and limitations of the proposed methodology could inform future work in the IDS area.
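For concreteness, here is a minimal sketch of the drop-out neural network trained with the Adam optimizer described above, assuming a Keras-style stack; the layer widths, dropout rate, epoch count, and placeholder data are illustrative assumptions, and in practice X_train would hold the ACD-denoised features.

    # Illustrative drop-out classifier with the Adam optimizer (not the paper's code).
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    n_features = 41                             # e.g. the 41 KDD Cup 99 / NSL-KDD fields
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),                    # the drop-out credited for 91.12%
        layers.Dense(32, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # attack vs. normal traffic
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Placeholder data with the right shapes; substitute the denoised dataset.
    X_train = np.random.rand(512, n_features).astype("float32")
    y_train = np.random.randint(0, 2, size=(512,))
    model.fit(X_train, y_train, epochs=5, batch_size=64, verbose=0)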
The evolution of the electrical grid from its early centralized structure to today’s advanced "smart grid" reflects significant technological progress. Early grids, designed for simple power delivery from large plants to consumers, faced challenges in efficiency, reliability, and scalability. Over time, the grid has transformed into a decentralized network driven by innovative technologies, particularly artificial intelligence (AI). AI has become instrumental in enhancing efficiency, security, and resilience by enabling real-time data analysis, predictive maintenance, demand-response optimization, and automated fault detection, thereby improving overall operational efficiency. This paper examines the evolution of the electrical grid, tracing its transition from early limitations to the methodologies adopted in present smart grids for addressing those challenges. Current smart grids leverage AI to optimize energy management, predict faults, and seamlessly integrate electric vehicles (EVs), reducing transmission losses and improving performance. However, these advancements are not without limitations. Present grids remain vulnerable to cyberattacks, necessitating the adoption of more robust methodologies and advanced technologies for future grids. Looking forward, emerging technologies such as Digital Twin (DT) models, the Internet of Energy (IoE), and decentralized grid management are set to redefine grid architectures. These advanced technologies enable real-time simulations, adaptive control, and enhanced human–machine collaboration, supporting dynamic energy distribution and proactive risk management. Integrating AI with advanced energy storage, renewable resources, and adaptive access control mechanisms will ensure future grids are resilient, sustainable, and responsive to growing energy demands. This study emphasizes AI’s transformative role in addressing the challenges of the early grid, enhancing the capabilities of the present smart grid, and shaping a secure …
Preserving formal style in neural machine translation (NMT) is essential, yet often overlooked as an optimization objective of the training process. This oversight can lead to translations that, though accurate, lack formality. In this paper, we propose a method to improve NMT formality with large language models (LLMs), combining the style transfer and evaluation capabilities of an LLM with the high-quality translation generation ability of NMT models. The proposed method (namely INMTF) encompasses two approaches. The first involves a revision approach using an LLM to revise the NMT-generated translation, ensuring a formal translation style. The second approach employs an LLM as a reward model for scoring translation formality, and then uses reinforcement learning algorithms to fine-tune the NMT model to maximize the reward score, thereby enhancing the formality of the generated translations. Considering the substantial parameter size of LLMs, we also explore methods to reduce the computational cost of INMTF. Experimental results demonstrate that INMTF significantly outperforms baselines in terms of translation formality and translation quality, with an improvement of +9.19 style accuracy points in the German-to-English task and +2.16 COMET score in the Russian-to-English task. Furthermore, our work demonstrates the potential of integrating LLMs within NMT frameworks to bridge the gap between NMT outputs and the formality required in various real-world translation scenarios.
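The two INMTF approaches can be outlined in a few lines of Python. In the sketch below, translate and llm are hypothetical stand-ins for the NMT model and the instruction-following LLM, and the prompt wording is an illustrative assumption rather than the paper's.

    # Approach 1: LLM revision of the NMT hypothesis to enforce a formal register.
    def formalize(source: str, translate, llm) -> str:
        draft = translate(source)   # high-quality but possibly informal NMT output
        prompt = (
            "Rewrite the following translation in a formal register, changing "
            f"only style, not meaning.\nSource: {source}\n"
            f"Translation: {draft}\nFormal translation:"
        )
        return llm(prompt)

    # Approach 2: the LLM acts as a reward model; a reinforcement-learning
    # algorithm would fine-tune the NMT model to maximize this score.
    def formality_reward(translation: str, llm) -> float:
        return float(llm(
            "Rate the formality of this sentence from 0 to 1. Reply with "
            f"only the number:\n{translation}"
        ))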
Handwritten documents generated in our day-to-day office work, classrooms, and other sectors of society carry vital information. Automatic processing of these documents is a pipeline of many challenging steps. The very...
Recently, rumor spreading over online social media has been found to be a serious issue that causes severe damage to society, organizations, and individuals. To control the spread of rumors, rumor detection is found as on...
ChatGPT is an AI-based Natural Language Generation (NLG) system developed by OpenAI that enables users to interact with virtual agents in a conversational manner. ChatGPT is based on the transformer-based architect...
In today’s digital era, the security of sensitive data such as Aadhaar data is of utmost importance. To ensure the privacy and integrity of this data, a conceptual framework is proposed that employs the Diffie-Hellman...
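For context, the key-agreement step such a framework relies on is standard Diffie-Hellman; the Python sketch below uses deliberately tiny textbook parameters to show the mechanics and is not the proposed Aadhaar scheme itself.

    # Textbook Diffie-Hellman key agreement. Toy parameters for readability only;
    # real deployments use large vetted groups (e.g. RFC 3526 MODP primes) or
    # elliptic-curve Diffie-Hellman.
    import secrets

    p, g = 23, 5                        # public prime modulus and generator

    a = secrets.randbelow(p - 2) + 1    # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1    # Bob's private exponent
    A = pow(g, a, p)                    # public values exchanged in the clear
    B = pow(g, b, p)

    shared_alice = pow(B, a, p)         # both sides derive the same shared secret
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob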
INTRODUCTION: Cloud computing, a still-emerging technology, allows customers to pay for services based on usage. It provides internet-based services, whilst virtualization optimizes a PC’s available resources. OBJECTIVES...