Diabetes, a chronic metabolic disease with a rising global prevalence, significantly impacts individuals’ health. Diabetes increases a person’s risk of developing various diseases, including heart disease, stroke, v...
Data mining is an information processing technology that analyzes massive data to uncover hidden information, knowledge, and trends. Cluster analysis is one of the most important data mining techniques, and the K-means algorithm is one of the classic clustering algorithms: it refines the clustering result through step-by-step iteration and offers a sound theoretical basis, simple implementation, and fast convergence. Based on the K-means clustering analysis method, this paper constructs a student biochemical index monitoring system, which provides a reference for formulating students' physical health and development plans. The core work of this study is to build the mathematical model of the K-means clustering algorithm, design the conceptual and logical structure of the database, and complete the design of the student biochemical index cluster analysis software.
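As a hedged illustration of the clustering step this abstract describes, the sketch below applies K-means to a small table of hypothetical student biochemical indicators; the indicator names and values are invented for illustration and are not taken from the paper.

```python
# Minimal K-means sketch on hypothetical student biochemical indicators.
# Feature names and values are illustrative placeholders only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Columns: fasting glucose (mmol/L), total cholesterol (mmol/L), hemoglobin (g/L)
records = np.array([
    [4.8, 4.1, 138.0],
    [5.2, 4.6, 142.0],
    [6.9, 5.8, 150.0],
    [7.4, 6.1, 149.0],
    [4.5, 3.9, 131.0],
    [5.0, 4.3, 136.0],
])

# Standardize so no single indicator dominates the Euclidean distance.
features = StandardScaler().fit_transform(records)

# Step-by-step iterative refinement of cluster centers, as the abstract describes.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print("cluster labels:", model.labels_)
print("cluster centers (standardized):", model.cluster_centers_)
```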
In view of the complexity of current traffic scenes, the rapid development of intelligent transportation applications, and the improvement of domestic platforms, this paper combines mainstream deep learning algorithms under the Baidu PaddlePaddle framework: road lines are recognized with a PilotNet model improved by BatchNorm and Dropout, and traffic signs are detected with a YOLOv3 Tiny model. Because hardware resources are limited and automatic driving must run in real time, both models are deployed to the Edge Board computing box. In intelligent transportation applications a single-thread design cannot meet the real-time requirements of automatic driving and is strongly affected by thread blocking, so after comparison a dual-thread design is adopted. Data processing is carried out by capturing the car's moving images together with the control values, and the experiments are completed on this data. In the experiments, the smart car recognizes traffic signs at a speed of around 0.01 s per frame and finishes model loading and decision making after traffic scene recognition within 34.86 s, which shows that the trained neural network models are lightweight and can meet the needs of a system with limited resources.
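The dual-thread design mentioned above can be sketched with Python's standard threading and queue modules: one thread captures frames while the other runs inference and produces control values. The capture_frame and infer functions are hypothetical stand-ins for the camera interface and the deployed PilotNet/YOLOv3 Tiny models, which the abstract does not detail.

```python
# Sketch of a two-thread capture/inference pipeline (standard library only).
# capture_frame() and infer() are hypothetical placeholders for the camera and deployed models.
import queue
import threading
import time

frame_queue = queue.Queue(maxsize=4)   # small buffer so inference never lags far behind capture
stop_event = threading.Event()

def capture_frame():
    """Placeholder: grab one frame from the on-board camera."""
    time.sleep(0.01)
    return object()

def infer(frame):
    """Placeholder: run the lane-line and traffic-sign models, return control values."""
    time.sleep(0.01)
    return {"steering": 0.0, "throttle": 0.3}

def capture_loop():
    while not stop_event.is_set():
        frame = capture_frame()
        try:
            frame_queue.put(frame, timeout=0.1)   # drop the frame if the buffer is full
        except queue.Full:
            pass

def control_loop():
    while not stop_event.is_set():
        try:
            frame = frame_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        control = infer(frame)   # the control values would be sent to the chassis here

capture_thread = threading.Thread(target=capture_loop, daemon=True)
control_thread = threading.Thread(target=control_loop, daemon=True)
capture_thread.start()
control_thread.start()
time.sleep(1.0)        # run briefly for demonstration
stop_event.set()
```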
This paper proposes improving data transmission for a wallet with a merchant payment system using cluster classification instead of a sequence analysis method. The implementation was carried out by processing the data through MySQL, and the algorithms were tested with NetBeans IDE applications. Cluster classification paved the way for enhancing data transmission. A sample of size 46 was tested around 11 times and compared with the sequence analysis method. Data transmission for wallets with merchant payment systems using cluster classification reached an accuracy of 92%, significantly better than the sequence analysis method, and the difference between cluster classification and sequence analysis was statistically significant. Overall, improving data transmission for a wallet with a merchant payment system using cluster classification over the sequence analysis method provides a higher accuracy of 94.4 percent than the sequence analysis method when analyzing the parameters of time with data and received time.
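The repeated-run comparison and the significance claim can be illustrated with a simple independent-samples t-test; the per-run accuracy values below are hypothetical placeholders, not the study's measurements.

```python
# Hedged sketch: compare per-run accuracies of the two methods with a t-test.
# The accuracy lists are invented placeholders, not the paper's data.
from statistics import mean
from scipy.stats import ttest_ind

cluster_classification_acc = [0.93, 0.94, 0.92, 0.95, 0.94, 0.93, 0.95, 0.94, 0.93, 0.96, 0.94]
sequence_analysis_acc      = [0.88, 0.87, 0.89, 0.86, 0.88, 0.87, 0.86, 0.89, 0.88, 0.87, 0.88]

t_stat, p_value = ttest_ind(cluster_classification_acc, sequence_analysis_acc)
print(f"mean accuracy, cluster classification: {mean(cluster_classification_acc):.3f}")
print(f"mean accuracy, sequence analysis:      {mean(sequence_analysis_acc):.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 would indicate a significant difference
```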
With the development of energy efficiency measurement methods, the target demand for energy utilization in data centers keeps rising, and the study of green energy efficiency (EE) factors in large-scale data centers based on DEA (data envelopment analysis) is becoming increasingly important. In building the overall EE factor analysis model, how to raise the green energy efficiency of the model and improve the efficiency of system power usage is a key issue that urgently needs to be solved. This article applied established research methods for factors affecting green energy efficiency, analyzed the hierarchical composition of EE in large-scale data centers (DC), and calculated the four major energy efficiency indicators in a DC. In the simulation experiments, six data center samples from different regions were selected and the DEA-based green energy efficiency factor analysis model was applied. Compared with traditional schemes, green energy utilization increased by a comprehensive average of 11.1%, while power usage effectiveness also improved, decreasing by a comprehensive average of about 7.7%. This indicates that the DEA-based green energy efficiency factor analysis model for large-scale data centers performs well in practical applications. The study provides guidance and reference for the sustainable development of future data centers and offers innovative solutions to the growing digital demand and environmental challenges.
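As a hedged sketch of how a DEA efficiency score is computed, the code below solves the input-oriented CCR envelopment model with linear programming for six hypothetical data-center samples; the inputs and outputs are invented for illustration and are not the paper's four energy efficiency indicators.

```python
# Input-oriented CCR DEA sketch: efficiency of each DMU (data center) via linear programming.
# The input/output figures are hypothetical, chosen only to illustrate the method.
import numpy as np
from scipy.optimize import linprog

# Inputs per data center: total facility power (MW), cooling power (MW)
X = np.array([[2.0, 0.8],
              [3.5, 1.6],
              [1.2, 0.5],
              [4.0, 1.5],
              [2.8, 1.0],
              [3.0, 1.4]]).T            # shape (num_inputs, num_dmus)

# Output per data center: IT load served (MW)
Y = np.array([[1.4, 2.1, 0.9, 2.9, 2.1, 1.9]])   # shape (num_outputs, num_dmus)

m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(o):
    """min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    c = np.zeros(n + 1)                  # decision variables: [theta, lam_1, ..., lam_n]
    c[0] = 1.0
    A_in = np.hstack([-X[:, [o]], X])    # input rows: -theta*x_io + sum_j lam_j x_ij <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # output rows: -sum_j lam_j y_rj <= -y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.x[0]                      # theta* = CCR efficiency score in (0, 1]

for o in range(n):
    print(f"data center {o + 1}: CCR efficiency = {ccr_efficiency(o):.3f}")
```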
Big data is one of the enabling technologies of the Industry 4.0 vision. Technological evolution generates an ever-increasing amount of data: from the web to social networks, from mobile devices to sensors, data is conveyed through the most disparate products of technology, whether physical or virtual. However, the transition from big data to smart data, providing insights about the information and issues that matter, is neither simple nor obvious to achieve. The greater the amount of data and the more heterogeneous it is, the more complex its processing becomes. Once collected, the data is processed by complex analytics algorithms, which require considerable storage and computing power. In this paper, we describe our approach and experience at the Italian national agency ENEA in architecting a big data-driven software architecture for public street lighting. This software architecture, called the ENEA PELL smart city platform (in brief, PELL SCP), is intended to collect, represent, control, predict, and possibly optimize the behaviour of public street lighting plants. In particular, we provide an overview of the analytics features being developed in collaboration with the University of Bergamo (Italy) to analyze the electric energy data collected by the PELL SCP.
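As a hedged illustration of the kind of analytics such a platform can run over collected energy data, the snippet below aggregates hypothetical hourly readings per lighting plant with pandas; the field names and figures are invented and do not reflect the actual PELL SCP data model.

```python
# Illustrative aggregation of hypothetical street-lighting energy readings.
# Column names and values are invented; they do not mirror the PELL SCP schema.
import pandas as pd

readings = pd.DataFrame({
    "plant_id":   ["P01", "P01", "P01", "P02", "P02", "P02"],
    "timestamp":  pd.to_datetime([
        "2023-06-01 20:00", "2023-06-01 21:00", "2023-06-01 22:00",
        "2023-06-01 20:00", "2023-06-01 21:00", "2023-06-01 22:00",
    ]),
    "energy_kwh": [12.4, 12.1, 11.8, 20.3, 27.9, 20.1],
})

# Consumption summary per plant plus a simple outlier flag against each plant's own mean.
per_plant = readings.groupby("plant_id")["energy_kwh"].agg(["sum", "mean", "std"]).reset_index()
readings = readings.merge(per_plant, on="plant_id")
readings["anomalous"] = (readings["energy_kwh"] - readings["mean"]).abs() > 2 * readings["std"]

print(per_plant)
print(readings[["plant_id", "timestamp", "energy_kwh", "anomalous"]])
```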
People’s usage of smart wearable devices and sensors plays a crucial role in VLSI technology. Wearable devices are embedded in clothes, smartwatches, and accessories. Wearable gadgets such as smart rings, smartwatch...
The objective is to securely store information in the cloud by dividing the encryption among multiple methods and storing the results in the cloud, with the aim of maintaining the confidentiality, integrity, and availability of the data. In the IT sector, many businesses are adopting cloud computing at a rapid pace, which reduces the cost of new software. Cloud computing offers the advantage of easily accessing data at minimal cost; it is a cost-effective and convenient way to access data over the Internet. However, ensuring the integrity of cloud computing is crucial, especially since cloud storage providers may not always be trustworthy. This is particularly challenging when users store sensitive data with unreliable providers, so securely exchanging data and maintaining data integrity remains a complex problem. The proposed approach uses the AES, DES, and RC2 algorithms to store data in a single cloud, ensuring the security and privacy of clients' sensitive information.
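A hedged sketch of the splitting idea using the PyCryptodome library: the plaintext is divided into three parts and each part is encrypted with a different algorithm (AES, DES, RC2) before upload. Key handling, the cloud upload step, and the exact splitting scheme are not specified in the abstract, so the details below are illustrative assumptions.

```python
# Illustrative split-encryption sketch with PyCryptodome (pip install pycryptodome).
# The chunking scheme, key management, and upload step are assumptions for illustration.
from Crypto.Cipher import AES, DES, ARC2
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

def split_into_three(data: bytes):
    """Divide the plaintext into three roughly equal parts."""
    third = (len(data) + 2) // 3
    return data[:third], data[third:2 * third], data[2 * third:]

def encrypt_parts(data: bytes):
    """Encrypt each part with a different block cipher in CBC mode."""
    ciphers = [("AES", AES, get_random_bytes(16)),
               ("DES", DES, get_random_bytes(8)),
               ("RC2", ARC2, get_random_bytes(16))]
    sealed = []
    for part, (name, module, key) in zip(split_into_three(data), ciphers):
        iv = get_random_bytes(module.block_size)
        ct = module.new(key, module.MODE_CBC, iv).encrypt(pad(part, module.block_size))
        sealed.append({"algo": name, "key": key, "iv": iv, "ciphertext": ct})
    return sealed   # ciphertexts would go to the cloud; keys stay on the client side

def decrypt_parts(sealed):
    modules = {"AES": AES, "DES": DES, "RC2": ARC2}
    plain = b""
    for rec in sealed:
        module = modules[rec["algo"]]
        cipher = module.new(rec["key"], module.MODE_CBC, rec["iv"])
        plain += unpad(cipher.decrypt(rec["ciphertext"]), module.block_size)
    return plain

message = b"client-sensitive record to be stored in a single cloud"
sealed = encrypt_parts(message)
assert decrypt_parts(sealed) == message
print([rec["algo"] for rec in sealed], "->", len(sealed), "encrypted parts")
```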
AI games are one of the fields growing along with the advancement of computing technologies, and many computer games have been deployed as AI games. To the best of our knowledge, there is no clear categorization of the algorithms used in AI games. The aim of this paper is to survey and classify AI games according to their algorithms. AI games are classified into two categories: those developed with traditional AI methods and those that use machine learning (ML) approaches. The classification highlights the genres, techniques, and approaches used in game development. Researchers and developers in the field of AI games can use this paper as a quick reference to further explore the use of AI and ML algorithms in games.
This paper improves the accuracy of sales forecasting from the perspective of multi-objective optimization. A two-layer multi-objective optimization model is constructed to optimize the utility of data providers and data consumers based on multidimensional factors such as data quality, data attributes, correlation of attributes, and consumer competition. Then, according to the sales information, a supermarket sales decision support system based on a data warehouse, online analytical processing (OLAP), and data mining is designed and developed. The paper introduces the construction of the data warehouse and the realization of the OLAP technology from the perspective of application. Simulation results show that the proposed algorithm achieves better performance in marketing data analysis and enables more efficient data transactions.
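The online analytical processing part can be sketched as a simple cube roll-up over sales records; the table below is hypothetical and only illustrates the kind of aggregation a decision support system would expose.

```python
# Hedged OLAP-style roll-up over hypothetical supermarket sales records with pandas.
import pandas as pd

sales = pd.DataFrame({
    "month":    ["2023-01", "2023-01", "2023-02", "2023-02", "2023-02", "2023-03"],
    "category": ["dairy", "beverage", "dairy", "beverage", "dairy", "beverage"],
    "store":    ["S1", "S1", "S2", "S1", "S1", "S2"],
    "revenue":  [1200.0, 800.0, 1350.0, 760.0, 1100.0, 900.0],
})

# One slice of the cube: revenue by month and category, with grand totals as margins.
cube = pd.pivot_table(sales, values="revenue", index="month", columns="category",
                      aggfunc="sum", fill_value=0, margins=True, margins_name="Total")
print(cube)
```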