ISBN (print): 9783031756047
The proceedings contain 53 papers. The special focus in this conference is on Intelligent Systems in Computing and Communications. The topics include: Towards Hands-Free Computing: AI Virtual Mouse Interface Powered by Gestures; SmartAgro: Precision Yield Prediction, Crop Insights, and Real-Time Dashboard; Artificial Intelligence with MRI-Guided Radiation Therapy for Cancer Treatment; Software Requirements to UML Class Diagrams Using Machine Learning and Rule-Based Approach; CFALEA_LSTM: Adaptive Lotus Effect Algorithm Enabled Long Short-Term Memory for Rainfall Prediction Using Time Series Data; A Deep Learning Survey on Diseases Prediction and Detection in Health Care; Fault Diagnosis in Belts Using Signal Processing Techniques and Machine Learning; Machine Learning Techniques Based Chronic Kidney Disease Detection with Performance Analysis of Fuzzy Rough Set and Correlation Attribute Selection; Segmentation and Classification of Unharvested Arecanut Bunches Using Deep Learning; Enhanced Satellite Image Fusion Using Deep Learning and Feature Extraction Techniques: A Survey; Intelligent Aircraft Antiskid Braking Systems – A Review; Artificial Neural Networks Applied in the Detection of Breast Cancer; Supervised and Unsupervised Learning Techniques for Malware Classification Based on Opcode Frequency Features; Predictive Analytics for Diagnosing Alzheimer’s Disease Using Artificial Intelligence and Machine Learning Algorithms; A Comprehensive Study on Artificial Intelligence (AI) Driven Internet of Healthcare Things (IoHT); Predictive Models for the Early Diagnosis and Prognosis of Knee Osteoarthritis Using Deep Learning Techniques; Predicting Salinity Resistance of Rice at the Seedling Stage: An Evaluation of Transfer Learning Methods; Multi-camera HD Pedestrian Dataset for Person Detection and Re-identification; Fire Detection System Using Deep CNN.
Recently, urban rail transportation has been steadily evolving toward intelligent urban rail based on the Internet of Things, artificial intelligence, and high-speed communication. Long Term Evolution for Urban Rail Transpo...
Personalized marketing is one of the most important marketing strategies: it drives sales for a business by recommending the right products to the right customers. With a growing volume of consumers, it becomes necessary to divide them into suitable groups so that the appropriate action can be taken for the correct set of customers. The prime objective of this study is to segment customers into meaningful groups, suggest different marketing strategies for the various segments, and recommend suitable products to each customer within a group. The paper analyses the business's historical sales and customer-interaction data to derive the right attributes. The study uses K-Means clustering, an unsupervised machine learning algorithm, to obtain five separable clusters with a silhouette score of 0.57. For product recommendation, a voting-ensemble-based recommendation system suggests up to the top 10 products for each customer. The ensemble consists of four techniques: user-based collaborative filtering, the LightFM algorithm, the KNNWithMeans algorithm, and a neural network. The paper aims to combine the customer segmentation and product recommendation problems into a single solution that helps businesses design personalized marketing strategies.
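The segmentation step described above can be sketched as follows. This is a minimal illustration only: the feature names and data are synthetic stand-ins (the paper's actual attributes are derived from the business's sales data), and scikit-learn's K-Means and silhouette score are assumed as the tooling.

```python
# Hypothetical sketch of K-Means customer segmentation scored with the
# silhouette coefficient. The five "customer archetypes" and the three
# features are synthetic assumptions, not the paper's real attributes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in for per-customer attributes (e.g. recency,
# frequency, monetary value) drawn around five artificial archetypes.
centers = np.array([
    [0.0, 0.0, 0.0],
    [5.0, 5.0, 0.0],
    [0.0, 5.0, 5.0],
    [5.0, 0.0, 5.0],
    [-5.0, 0.0, 0.0],
])
X = np.vstack([c + rng.normal(0, 0.5, size=(200, 3)) for c in centers])

X_scaled = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_scaled)
score = silhouette_score(X_scaled, km.labels_)
print(f"clusters: {km.n_clusters}, silhouette: {score:.2f}")
```

On real customer data the clusters are far less cleanly separated, which is why a silhouette score of 0.57, as reported above, indicates reasonably distinct segments.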
Breast cancer is one of the most extensively researched cancers of the human body. This has led to the creation and development of innumerable methods and techniques to control the growth and symptoms of the cancer well before it can cause severe damage to other parts of the body. Yet breast cancer remains one of the most common cancers and a leading cause of death in women, which has encouraged us to perform our research in this field. While research does exist on the genetic aspect of breast cancer, it is insufficient to derive conclusive decisions and results. Fundamentally, research in this domain is limited to either patients' personal data or genetic data exclusively. This paper aims to provide a unique insight by using both personal and genetic data. The researchers use a variety of innovative data wrangling techniques to accentuate the strengths of the data. In addition, various statistical and machine learning techniques are used to rank the important genes in the dataset; these genes are then selected and their effect on patients analyzed using various machine learning algorithms. This work can aid the healthcare industry in the detection, treatment, and prevention of cancer.
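One common way to rank genes by predictive importance, of the kind the abstract alludes to, is a tree-ensemble importance score. The sketch below is an assumption-laden illustration: the gene names, patient data, and the choice of random-forest importances are all hypothetical, not the paper's actual method.

```python
# Hedged illustration of ranking "genes" by random-forest feature
# importance. All data and gene names here are synthetic; only the first
# two features are constructed to be genuinely informative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_patients, n_genes = 300, 20
X = rng.normal(size=(n_patients, n_genes))
# Make the first two "genes" actually drive the (synthetic) outcome.
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.3, n_patients) > 0).astype(int)

genes = [f"GENE_{i}" for i in range(n_genes)]
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(genes, rf.feature_importances_), key=lambda t: -t[1])
for name, imp in ranking[:5]:
    print(f"{name}: {imp:.3f}")
```

As expected, the two informative features dominate the ranking; on real expression data the signal is noisier, and statistical tests are usually combined with model-based importances.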
In today's age, where data security is a growing concern, it is crucial to develop strong encoding and decoding models. In this paper we introduce a model that combines concepts from compiler design with diverse hashing algorithms. The model aims to offer an efficient solution for transforming and retrieving data. As technology advances rapidly and the need for data security becomes more prominent in real-world scenarios, our model tackles the challenges of safeguarding information while ensuring integrity and confidentiality. By utilizing hashing algorithms such as SHA, BLAKE2, and MD5, we achieve one-way data encoding. Our experiments demonstrate that the model outperforms existing approaches, showcasing its effectiveness across industries where protecting data privacy is of utmost importance. With its user interface, the model provides a solution for secure data transmission and storage, contributing significantly to advancements in data security and encryption techniques.
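The three hash families named above are all available in Python's standard library, which makes the hashing step easy to sketch. Note the caveat in the comments: these are one-way digest functions rather than encryption, and MD5 is broken for security purposes.

```python
# Sketch of the hashing step described above using Python's standard
# hashlib. SHA-256, BLAKE2b, and MD5 are hash functions (one-way digests),
# not encryption; MD5 in particular is no longer considered secure.
import hashlib

message = b"sensitive payload"
digests = {
    "sha256": hashlib.sha256(message).hexdigest(),
    "blake2b": hashlib.blake2b(message).hexdigest(),
    "md5": hashlib.md5(message).hexdigest(),  # legacy comparison only
}
for name, d in digests.items():
    print(f"{name}: {d}")
```

Because each digest is deterministic for a given input, hashes can verify integrity of stored or transmitted data, but recovering the original requires keeping the plaintext mapping elsewhere, which is presumably where the paper's compiler-design transformation comes in.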
Quantum computing has emerged as both a boon and a bane in the realm of network security. This paper delves into the intriguing interplay between quantum computing and the vulnerabilities of Quantum Key Distribution (QKD) protocols, a cornerstone of secure quantum communication. While QKD offers unmatched security through the principles of quantum mechanics, it is not impervious to attacks in the age of rapidly advancing quantum computing. We examine these vulnerabilities, highlighting the challenges of ensuring the long-term security of quantum networks in a post-quantum world. Through a comprehensive exploration of threat models and the practicality of QKD, we uncover potential weaknesses in quantum network security. In particular, we investigate QKD's susceptibility to evolving quantum algorithms, hardware vulnerabilities, and post-processing security concerns. This work aims to provide a roadmap for understanding and addressing these vulnerabilities, ensuring the continued robustness of quantum communication networks, and fostering the development of post-quantum cryptographic solutions. As quantum computing advances, its implications for quantum network security are far-reaching. This study underlines the critical need for research and innovation in securing quantum communication. By unravelling the vulnerabilities within Quantum Key Distribution and exploring post-quantum cryptographic alternatives, this research contributes to the development of more resilient and secure quantum networks, safeguarding sensitive information and communications in an increasingly quantum-enabled world.
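To make the QKD setting concrete, the basis-sifting step of BB84 (the canonical QKD protocol) can be simulated classically. This toy sketch is an illustration only: it assumes an ideal noiseless channel with no eavesdropper, whereas the vulnerabilities discussed above arise precisely in the hardware and post-processing that this model leaves out.

```python
# Toy classical simulation of BB84 basis sifting on an ideal channel
# (no noise, no eavesdropper). Purely illustrative of the protocol step;
# real QKD involves physical qubits, error reconciliation, and privacy
# amplification.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)   # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(0, 2, n)

# When Bob measures in Alice's basis he recovers her bit exactly;
# otherwise his outcome is uniformly random.
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# Sifting: both parties discard positions where their bases differed.
sifted_alice = alice_bits[match]
sifted_bob = bob_bits[match]
print(f"sift rate: {match.mean():.2f}")   # ~0.50 for random bases
qber = (sifted_alice != sifted_bob).mean()
print(f"QBER on sifted key: {qber:.2f}")
```

On this ideal channel the quantum bit error rate (QBER) on the sifted key is zero; an intercept-resend eavesdropper or hardware imperfections would raise it, which is how attacks of the kind surveyed above are detected or masked.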
Data valuation in machine learning is concerned with quantifying the relative contribution of a training example to a model’s performance. Quantifying the importance of training examples is useful for identifying high- and low-quality data when curating training datasets and for addressing data quality issues. Shapley values have gained traction in machine learning for curating training data and identifying data quality issues. While computing the Shapley values of training examples exactly is computationally prohibitive, approximation methods have been used successfully for classification models in computer vision tasks. We investigate data valuation for Automatic Speech Recognition (ASR) models, which perform a structured prediction task, and propose a method for estimating Shapley values for these models. We show that a proxy model can be learned for the acoustic model component of an end-to-end ASR system and used to estimate Shapley values for acoustic frames. We present a method for using the proxy acoustic model to estimate Shapley values for variable-length utterances and demonstrate that the Shapley values provide a signal of example quality.
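The approximation idea referenced above can be sketched with Monte Carlo (permutation-sampling) estimation of data Shapley values. This is a generic illustration, not the paper's ASR-specific method: the "model" here is a 1-nearest-neighbour classifier on toy 2-D data, and the utility is validation accuracy.

```python
# Hedged sketch of Monte Carlo estimation of data Shapley values.
# For each sampled permutation, each example is credited with the
# marginal change in validation accuracy it causes when added.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(20, 2)); y_train = (X_train[:, 0] > 0).astype(int)
X_val = rng.normal(size=(50, 2));   y_val = (X_val[:, 0] > 0).astype(int)

def utility(idx):
    """Validation accuracy of a model trained on the subset `idx`."""
    if len(idx) == 0 or len(set(y_train[i] for i in idx)) < 2:
        return 0.5  # no usable model: score of a random guess
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_train[idx], y_train[idx])
    return clf.score(X_val, y_val)

n = len(X_train)
shapley = np.zeros(n)
n_perms = 50
for _ in range(n_perms):
    perm = rng.permutation(n)
    prev = utility([])
    for k in range(n):
        cur = utility(list(perm[: k + 1]))
        shapley[perm[k]] += (cur - prev) / n_perms
        prev = cur

print("most valuable example:", int(np.argmax(shapley)))
```

A useful sanity check is the efficiency property: the estimated values sum exactly to the utility of the full dataset minus the utility of the empty set, since each permutation's marginal contributions telescope.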
One of the main drawbacks of using ML- and DL-based optimization techniques is that a massive amount of historical data is needed for the algorithms to converge to quasi-optimal solutions. In the scope of data centers (DCs), it is difficult, expensive, and sometimes even dangerous to gather data from all kinds of real situations, as the equipment's physical integrity may be at risk. Thus, the generation of realistic synthetic data in computing environments is crucial to enabling these optimization technologies. This paper proposes to solve these challenges by generating synthetic data and forecasting possible realistic scenarios in a real cloud-based DC environment. For this purpose, we use a technique that has given extraordinary results in recent years: generative adversarial networks (GANs). These generative algorithms based on artificial neural networks have commonly been used to create realistic synthetic multimedia data (mainly images and videos). Some proposals in the state of the art use GANs to generate synthetic time-series data; however, that research has significant limitations, such as difficulty handling multi-variable and categorical data when generating scenarios.

The goal of this paper is to create an automated system that produces near-term forecasts of globally important economic/financial time series. The system will create features from a wide variety of data sources and produce forecasts using an ensemble of time-series models. We will exploit distributed computing and storage technologies to build a system that is always on, evaluating new potential features and constantly updating the deployed forecasting model. We will begin with the daily spot price of crude oil but hope to expand to other important series.
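As a point of contrast with the GAN approach described above, the simplest synthetic time-series generators are parametric: fit a model to an observed series, then sample new scenarios from it. The sketch below fits an AR(1) process by least squares; everything in it (the "observed" series, the parameters) is an illustrative assumption and stands in only as a non-GAN baseline.

```python
# Minimal non-GAN baseline for synthetic time-series generation: fit an
# AR(1) model to an observed series, then sample fresh scenarios from
# the fitted model. The "observed" series here is itself synthetic.
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "observed" series (e.g. a daily spot price), AR(1) with
# phi = 0.9 and unit-variance innovations.
true_phi, true_sigma = 0.9, 1.0
obs = np.zeros(2000)
for t in range(1, len(obs)):
    obs[t] = true_phi * obs[t - 1] + rng.normal(0, true_sigma)

# Fit AR(1) by least squares: x_t ≈ phi * x_{t-1}.
x_prev, x_next = obs[:-1], obs[1:]
phi_hat = (x_prev @ x_next) / (x_prev @ x_prev)
sigma_hat = (x_next - phi_hat * x_prev).std()

def sample_scenario(length, x0=0.0):
    """Draw one synthetic scenario from the fitted AR(1) model."""
    x = np.empty(length)
    x[0] = x0
    for t in range(1, length):
        x[t] = phi_hat * x[t - 1] + rng.normal(0, sigma_hat)
    return x

scenarios = np.stack([sample_scenario(200) for _ in range(5)])
print(f"phi_hat={phi_hat:.2f}, sigma_hat={sigma_hat:.2f}")
```

Such linear baselines cannot capture the multi-variable and categorical structure the paper targets, which is precisely the gap the GAN-based generator is meant to fill.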
In light of the exponential growth of data volumes and the requirement for effective data analysis and processing, Big Data has emerged as a disruptive paradigm in the field of information technology. This study provides a comprehensive review of Big Data technologies, terminology, and data-driven applications. We go into the core concepts and underlying technology and then highlight a number of notable knowledge-centric applications that demonstrate the influence of Big Data across various industries. The paper offers a review of recent Big Data technology developments and attempts to assist users in selecting and implementing the best combination of Big Data technologies for their technological demands and individual application requirements. It additionally provides a global overview of the major Big Data technologies, classifying and explaining their features, benefits, limitations, and applications.
Cloud computing is already part of the daily lives of both people and companies, and as such, there is concern about how it impacts the environment. The union of these two realities provides an opportunity for the emergence of Green Cloud Computing, with new proposals, approaches, and metrics to make data centers more efficient, mainly in terms of energy, and to reduce CO2 emissions and their environmental impact. Since knowing the metrics that can be used to measure the energy cost and environmental impact of data centers is fundamental in Green Computing, this article studies the relationship between energy consumption and the execution times, architecture, and costs of a cloud environment. Using simulators, it was possible to achieve an improvement in energy efficiency above 44%, a cost reduction of at least 17%, and a 55% reduction in the environmental impact from CO2 emissions. It is important to note that the adoption of scheduling algorithms for Virtual Machines had no impact on processing times, even for complex problems.
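Two of the standard metrics this line of work relies on are Power Usage Effectiveness (PUE) and energy-derived CO2 emissions. The arithmetic below is an illustration with made-up input figures, not data from the article.

```python
# Illustrative calculation of two common Green-Cloud metrics: Power
# Usage Effectiveness (PUE = total facility energy / IT equipment
# energy) and CO2 emissions estimated from energy use. All input
# figures are hypothetical assumptions.
total_facility_kwh = 1_500_000   # annual energy of the whole data center
it_equipment_kwh = 1_000_000     # annual energy of IT equipment only
grid_co2_kg_per_kwh = 0.4        # assumed grid carbon intensity

pue = total_facility_kwh / it_equipment_kwh
co2_tonnes = total_facility_kwh * grid_co2_kg_per_kwh / 1000

print(f"PUE: {pue:.2f}")                     # 1.50
print(f"CO2 emissions: {co2_tonnes:.0f} t")  # 600 t
```

A PUE of 1.0 would mean every kilowatt-hour goes to IT work; lowering PUE and shifting load to cleaner grids are the two levers that drive the energy and CO2 reductions reported above.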