The heart is the organ that pumps blood through the human body, and there is a pressing need for early prediction systems for heart-related diseases. Such disease prediction systems support doctors in diagnosing critical conditions as early as possible, and are built by combining domain knowledge from the healthcare profession with machine learning (ML) and artificial intelligence. This paper explores the role of Exploratory Data Analysis (EDA) and pre-processing of heart disease (HD) data in HD prediction. It evaluates three ML classifiers, namely Random Forest (RF), Support Vector Machine (SVM), and Decision Tree (DT), together with missing-value imputers, feature-scaling techniques, data analytics, and visualization tools on four benchmark HD datasets from the UCI repository. The imputation methods mean, median, most frequent, constant, KNN imputer, and iterative imputer were analysed in combination with feature scaling using the three classifiers RF, SVM, and DT. Random Forest with an iterative imputer achieved 86.41% accuracy.
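As a minimal illustration of two of the pre-processing steps the study compares, the following sketch (not the paper's actual pipeline, which uses several imputers and classifiers) shows mean imputation of missing values followed by min-max feature scaling, implemented in plain Python:

```python
# Minimal sketch: mean imputation and min-max scaling, two pre-processing
# steps of the kind compared in the study. Missing values appear as None.

def impute_mean(column):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

def scale_min_max(column):
    """Scale values linearly into [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

# e.g. a hypothetical cholesterol column with one missing reading
chol = [240.0, None, 200.0, 280.0]
imputed = impute_mean(chol)       # -> [240.0, 240.0, 200.0, 280.0]
scaled = scale_min_max(imputed)   # -> [0.5, 0.5, 0.0, 1.0]
```

The scaled features would then be fed to a classifier such as RF, SVM, or DT; the paper's finding is that the choice of imputer measurably affects downstream accuracy.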
The study of B-cell antigen interactions is pivotal for advances in vaccine design, antibody production, and epitope-based structural studies. Epitope determination is a colossal and time-consuming process, and many methods and techniques have been implemented for it. Current-generation prediction methods are highly evolved and incorporate recent advances in machine learning and data processing. This review evaluates predictors with a focus on current-generation methods that integrate techniques such as sequence encoding, fuzzy networks, and feature maps to aid the design of more comprehensive linear B-cell epitope predictors. Further, the tools, techniques, and workings of these current-generation methods, along with details of the methodologies each uses, are discussed. The study analyzes the performance of these methods and provides a comparison of each. It concludes by identifying and highlighting pathways for further enhancement in the design of high-performing linear B-cell epitope predictors.
The proposed method aims to revolutionize data transfer speed and security in medical Internet of Things (IoT) systems by harnessing the high-throughput capabilities of the PRESENT cipher through dedicated hardware. It seamlessly integrates three core algorithms: data packing, PRESENT encryption, and optimized transmission, and presents the steps involved in each algorithm together with their respective mathematical equations. Data packing efficiently organizes medical data into packets, a vital step for subsequent processing and encryption. The PRESENT encryption algorithm, a cornerstone of the approach, secures data packets prior to transmission. Finally, optimized transmission guarantees efficient and swift data transfer within medical IoT systems. By incorporating these algorithms and equations, the method optimizes data transfer speed, energy efficiency, and data security within the high-throughput PRESENT cipher hardware framework.
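The data-packing step can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: it groups 16-bit sensor readings into 8-byte packets matching the 64-bit block size of the PRESENT cipher, zero-padding the final packet; the encryption and transmission steps are omitted.

```python
import struct

BLOCK_BYTES = 8  # PRESENT operates on 64-bit blocks

def pack_readings(readings):
    """Pack unsigned 16-bit sensor readings into 8-byte cipher blocks,
    zero-padding the last block if needed."""
    raw = b"".join(struct.pack(">H", r) for r in readings)
    if len(raw) % BLOCK_BYTES:
        raw += b"\x00" * (BLOCK_BYTES - len(raw) % BLOCK_BYTES)
    return [raw[i:i + BLOCK_BYTES] for i in range(0, len(raw), BLOCK_BYTES)]

# e.g. five hypothetical readings (pulse, SpO2, ...): 5 x 2 bytes = 10 bytes,
# yielding two 64-bit blocks, the second zero-padded
blocks = pack_readings([72, 98, 120, 36, 990])
```

Each resulting block is then ready to be fed to a PRESENT encryption core before transmission.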
The machine learning community has mainly relied on real data to benchmark algorithms as it provides compelling evidence of model applicability. Evaluation on synthetic datasets can be a powerful tool to provide a better understanding of a model’s strengths, weaknesses and overall capabilities. Gaining these insights can be particularly important for generative modeling as the target quantity is completely unknown. Multiple issues related to the evaluation of generative models have been reported in the literature. We argue those problems can be avoided by an evaluation based on ground truth. General criticisms of synthetic experiments are that they are too simplified and not representative of practical scenarios. As such, our experimental setting is tailored to a realistic generative task. We focus on categorical data and introduce an appropriately scalable evaluation method. Our method involves tasking a generative model to learn a distribution in a high-dimensional setting. We then successively bin the large space to obtain smaller probability spaces where meaningful statistical tests can be applied. We consider increasingly large probability spaces, which correspond to increasingly difficult modeling tasks, and compare the generative models based on the highest task difficulty they can reach before being detected as being too far from the ground truth. We validate our evaluation procedure with synthetic experiments on both synthetic generative models and current state-of-the-art categorical generative models.
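The binning idea can be illustrated with a small sketch. This is not the paper's exact procedure: here, samples from a high-dimensional categorical space are binned by projecting onto their first k coordinates, producing a smaller probability space where the empirical distributions of ground truth and model can be compared, e.g. via total variation distance.

```python
from collections import Counter

def bin_samples(samples, k):
    """Merge categorical states that agree on their first k coordinates."""
    return Counter(tuple(s[:k]) for s in samples)

def total_variation(counts_p, counts_q, n_p, n_q):
    """TV distance between two binned empirical distributions."""
    support = set(counts_p) | set(counts_q)
    return 0.5 * sum(abs(counts_p[s] / n_p - counts_q[s] / n_q)
                     for s in support)

# Hypothetical 3-dimensional binary samples from the ground truth and a model
true_samples = [(0, 0, 1), (0, 1, 0), (0, 0, 0), (1, 1, 1)]
model_samples = [(0, 0, 0), (0, 1, 1), (1, 0, 0), (1, 1, 0)]

p = bin_samples(true_samples, 1)   # coarsest binning: first coordinate only
q = bin_samples(model_samples, 1)
tv = total_variation(p, q, len(true_samples), len(model_samples))  # -> 0.25
```

Increasing k enlarges the binned probability space and makes the comparison progressively harder, mirroring the paper's notion of increasing task difficulty.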
In this paper, we developed an artificial intelligence-based data filtering algorithm to improve the accuracy and reliability of MPU6050 sensor measurements. Using the Edge Impulse platform, we trained and optimized t...
Rapid progress in information technologies and in telescope construction produces large volumes of astronomical big data and high-dimensional information, all of which must be analyzed and reviewed in time. In astronomy, such big data can arrive both in real-time/online mode and from historical archives. In this paper we present several aspects of the analysis process for big astronomical data, review them, and provide some general approaches for their analysis. We also present examples of big-data sources, astronomical instruments, large telescopes, and the data-processing methods and algorithms used for big data analysis in astronomy. The paper describes the application of different data-processing methods and algorithms, adapted to various types of astronomical information, within the Collection Light Technology (CoLiTec) project. During this research, the Lemur software was developed for the online processing and analysis of large volumes of astronomical big data. A special OnLine Data Analysis System (OLDAS) mode was created to handle the different tasks of astronomical big-data analysis and processing, such as preparation, extraction, classification, preprocessing, clustering, data mining, transformation, and image processing.
Several studies have proposed deploying flow recording (i.e., flow size counting and sketching algorithms) on programmable switches for high-speed processing, supporting network management tasks such as scheduling. Although programmable switches provide remarkable packet processing speed, their resources are limited and they follow a restrictive pipeline programming model. To fit these limitations, current algorithms either sacrifice recording accuracy or harm switch throughput. In this paper, we propose InheritSketch for further improvement. InheritSketch uses a separated counting scheme that is memory-efficient for resource-constrained switches: it accurately records the more valuable heavy hitters in large key-value counters (the primary table), while only sketching non-heavy flows in a small sentinel table. As recording proceeds, InheritSketch intelligently summarizes the historical recording experience as the basis for flow inheritance; that is, flows with the same IDs as previous heavy hitters are regarded as new heavy hitters and recorded in the primary table. To correct occasional incorrect inheritance, we also propose flow rebellion, which promotes flows that are large but wrongly stored in the sentinel table to the primary table. InheritSketch is also useful in applications such as differentiated scheduling. We compare InheritSketch with six previous recording algorithms on three public traffic datasets, and prototype InheritSketch on a commodity P4 switch. The results demonstrate that InheritSketch reduces recording errors by up to ∼7×, while consuming only 10% of the switch's hardware resources.
Scientific research will increasingly rely on AI and cloud computing (CC); our suggested solution will allow these technologies to be used to solve a number of problems. We have outlined the different issues that may be addressed through the combined efforts of cloud computing and AI and discussed how to implement such an approach. One of the most powerful techniques, for example, is using cloud-based artificial intelligence algorithms to increase productivity. Building applications in the cloud that go beyond basic automation requires the ability to predict scenarios and make continuous decisions online. In this paper, we describe a programming language for intelligent computing that will enable machines to reason and make choices for themselves, in real time.
Cross-lingual self-supervised learning has been a growing research topic in the last few years. However, current works have only explored the use of audio signals to create representations. In this work, we study cross-lingual self-supervised visual representation learning. We use the recently proposed Raw Audio-Visual Speech Encoders (RAVEn) framework to pre-train an audio-visual model with unlabelled multilingual data, and then fine-tune the visual model on labelled transcriptions. Our experiments show that: (1) multilingual models with more data outperform monolingual ones, but, when the amount of data is kept fixed, monolingual models tend to reach better performance; (2) multilingual pre-training outperforms English-only pre-training; (3) using languages that are more similar yields better results; and (4) fine-tuning on unseen languages is competitive with including the target language in the pre-training set. We hope our study inspires future research on non-English-only speech representation learning.
With the rapid development of infrared detectors, high-speed detection data has grown exponentially, which brings great challenges to real-time target detection at the back end. To address this problem, we conduct a fast target detection experiment on sequence-oriented infrared images based on YOLOv5 and an intelligent platform composed of a CPU and a smart chip. According to the experimental results, for 8K×8K 16-bit quantized high-speed detection images the system can basically meet Gb/s-level real-time processing capability, and for conventional images smaller than 2K×2K with 16-bit quantization it can basically process up to ten detection images simultaneously. Based on these results, the proposed high-speed real-time infrared image target detection method on an intelligent platform should find application in actual space-based detection and processing.