Image inpainting, which aims to reconstruct reasonably clear and realistic images from known pixel information, is one of the core problems in computer vision. However, due to the complexity and variability of the und...
This paper explores the concept of algorithmic hybridization, which involves combining multiple machine learning (ML) algorithms to enhance performance by leveraging the strengths of each simultaneously. This study presents a framework that combines long short-term memory (LSTM) and bidirectional LSTM (BiLSTM) networks with artificial neural networks (ANN) to classify non-functional requirements (NFR). NFR classification is challenging due to the scarcity of supervised learning data. The effectiveness of the proposed approach was assessed by comparing the performance of the integrated model with that of single LSTM and BiLSTM models. To conduct this evaluation, we combined two datasets comprising 1000 non-functional requirements. The experimental findings reveal that the proposed approach is effective, as the hybrid models exhibited better precision, recall, and F1 score than their counterparts.
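A minimal sketch of one way such an LSTM/BiLSTM/ANN hybrid could be wired up in Keras, assuming requirement texts have already been tokenized and padded. The vocabulary size, sequence length, layer widths, and number of NFR classes are illustrative assumptions, not values taken from the paper.

# Hybrid LSTM + BiLSTM branches merged into a dense (ANN) classification head.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, SEQ_LEN, NUM_CLASSES = 10_000, 100, 5  # assumed, not from the paper

inp = layers.Input(shape=(SEQ_LEN,))
emb = layers.Embedding(VOCAB, 128)(inp)

# Two recurrent branches read the same embedded requirement text.
lstm_branch = layers.LSTM(64)(emb)
bilstm_branch = layers.Bidirectional(layers.LSTM(64))(emb)

# Concatenate both encodings and classify with a small dense head.
merged = layers.Concatenate()([lstm_branch, bilstm_branch])
hidden = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(NUM_CLASSES, activation="softmax")(hidden)

model = Model(inp, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()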
Based on computer big data technology, this paper designs a system that realizes unattended live broadcasting through video stitching, hot spot area determination, hot spot area tracking, and related steps. The paper first designs the software and hardware environment of the system and determines the final implementation scheme and parameters through experiments. It then studies the technologies involved in video stitching, proposes the concept of the hot spot region, and designs a detection algorithm for it. The video stitching function is built on SURF feature point detection; after video fusion processing, the video mosaic subsystem is realized. Finally, the paper designs the function and implementation of the live video relay subsystem. Simulation results show that the method can smoothly track hot spot regions and extract them for real-time retransmission.
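A sketch of SURF-based pairwise stitching with OpenCV, under stated assumptions: SURF lives in opencv-contrib and requires a build with the nonfree modules enabled (ORB is a patent-free drop-in otherwise), and the two input frames are hypothetical files.

import cv2
import numpy as np

left = cv2.imread("left.jpg")    # hypothetical input frames
right = cv2.imread("right.jpg")

# SURF requires opencv-contrib-python built with nonfree modules enabled.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
k1, d1 = surf.detectAndCompute(cv2.cvtColor(left, cv2.COLOR_BGR2GRAY), None)
k2, d2 = surf.detectAndCompute(cv2.cvtColor(right, cv2.COLOR_BGR2GRAY), None)

# Lowe ratio-test matching, then a RANSAC homography from right to left.
matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the right frame into the left frame's plane, then paste the left frame.
pano = cv2.warpPerspective(right, H, (left.shape[1] + right.shape[1], left.shape[0]))
pano[:left.shape[0], :left.shape[1]] = left
cv2.imwrite("pano.jpg", pano)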
This paper provides a network security service establishment method and system based on virtualization technology, comprising: obtaining the set of computing nodes and the host devices; determining the link bandwidth according to the host devices; establishing a computing resource model from the set of computing nodes and host devices; and using the computing resource model to deploy the service function chain that establishes the network security service. The method exploits the rapid deployment of container-borne network functions and proposes dynamic function deployment methods. Building on an integer programming formulation of the optimization problem, the overall problem is decomposed into independent subproblems, which reasonably limits the scale of each optimization instance; fast container deployment keeps the response time of service requests low while largely retaining the performance advantage of optimization algorithms over simple heuristics.
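A toy integer program for placing one service function chain (SFC) onto compute nodes, in the spirit of solving each request as an independent subproblem. Node capacities, function demands, and per-node costs are made-up illustration values; PuLP with its default CBC solver is assumed, and this is not the paper's exact formulation.

import pulp

nodes = {"n1": 8, "n2": 4, "n3": 6}            # CPU capacity per node (assumed)
chain = {"firewall": 3, "ids": 2, "nat": 1}    # CPU demand per function (assumed)
cost = {n: c for c, n in enumerate(nodes, 1)}  # assumed per-node placement cost

prob = pulp.LpProblem("sfc_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("place", (chain, nodes), cat="Binary")

# Objective: minimize total placement cost.
prob += pulp.lpSum(cost[n] * x[f][n] for f in chain for n in nodes)
# Each network function is placed on exactly one node.
for f in chain:
    prob += pulp.lpSum(x[f][n] for n in nodes) == 1
# Node CPU capacity is respected.
for n in nodes:
    prob += pulp.lpSum(chain[f] * x[f][n] for f in chain) <= nodes[n]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for f in chain:
    for n in nodes:
        if x[f][n].value() == 1:
            print(f, "->", n)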
The article examines a method for processing experimental data in measurement systems by generating hypotheses about the presence of a given set of parameters and estimating their probability. What distinguishes the method is its use of an error probability distribution function with an independent "distribution scale" parameter, together with a search for the combination of process model parameters at which the posterior probability function attains its maximum. The method is highly resistant to impulse noise, so it can be used in measurement systems. Its main cost is the large number of calculations required to generate hypotheses and test their probability. To address this, a parallel computing system architecture is proposed that generates hypotheses and evaluates them on independent processing elements.
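A sketch of the hypothesis-testing idea: enumerate candidate parameter combinations (hypotheses) for a simple process model, score each under a heavy-tailed error likelihood with an independent scale parameter, and keep the maximum. The linear model, the Cauchy error distribution, and the grid ranges are illustrative assumptions; the inner loops are trivially parallelizable across processing elements.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
y = 2.0 * t + 1.0 + 0.05 * rng.standard_normal(t.size)
y[::25] += 5.0  # inject impulse noise the method should resist

scale = 0.1  # independent "distribution scale" parameter
a_grid = np.linspace(0, 4, 81)
b_grid = np.linspace(-1, 3, 81)

best, best_ll = None, -np.inf
for a in a_grid:            # each (a, b) pair is one hypothesis;
    for b in b_grid:        # the loops can run on independent processors
        r = y - (a * t + b)
        ll = -np.sum(np.log1p((r / scale) ** 2))  # Cauchy log-likelihood (up to a constant)
        if ll > best_ll:
            best, best_ll = (a, b), ll

print("maximum-posterior estimate:", best)  # stays near (2.0, 1.0) despite impulses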
An event or an observation that is statistically different from the others is termed an anomaly, and anomaly detection is the process of identifying such anomalies. Anomaly detection is an effective tool for risk mitigation, fraud detection, and improving system robustness, and it remains an active research area with numerous proposed algorithms. In this paper, we compare the performance of various anomaly detection algorithms on multivariate as well as univariate datasets. The assessment measures generated are important and can be beneficial for predicting anomalies in a timely and accurate manner. Experimental results demonstrate that on a univariate dataset the auto-regressive moving average (ARMA) performs better than the local outlier factor (LOF), while on a multivariate dataset the LOF model performs better. The prototype developed has been extensively tested on publicly available datasets and can be evaluated on larger, more comprehensive datasets for deployment in a real-time anomaly detection setup.
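A minimal sketch comparing the two detector families mentioned above on a synthetic series: a residual-threshold rule from a fitted ARMA model (written as ARIMA(2, 0, 2) in statsmodels) and scikit-learn's Local Outlier Factor. The data, the 3-sigma threshold, and the choice of features are illustrative, not the paper's experimental setup.

import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300)
series[150] += 3.0  # planted anomaly

# Univariate: flag points whose ARMA residual exceeds 3 standard deviations.
res = ARIMA(series, order=(2, 0, 2)).fit().resid
arma_flags = np.where(np.abs(res) > 3 * res.std())[0]

# Multivariate-style: LOF on (value, first difference) pairs.
X = np.column_stack([series[1:], np.diff(series)])
lof_flags = np.where(LocalOutlierFactor(n_neighbors=20).fit_predict(X) == -1)[0] + 1

print("ARMA-flagged indices:", arma_flags)
print("LOF-flagged indices:", lof_flags)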
To address the poor learning performance caused by data heterogeneity among participants in existing federated learning methods, this paper proposes a federated data augmentation algorithm based on heterogeneity assessment (FDA-HA). The algorithm uses generative adversarial networks for data augmentation to reduce the degree of data heterogeneity among participants, while protecting user data privacy and preserving fairness in the augmentation process. Experimental results on the MNIST, FashionMNIST, and Cifar10 datasets show that, compared with mainstream federated learning algorithms, the proposed algorithm improves accuracy by 7.96% and 13.44% under data scenarios with different degrees of heterogeneity, while maintaining a degree of fairness.
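A sketch of one plausible heterogeneity assessment step: measure how far each client's label distribution drifts from the global one via KL divergence, a score that could then steer which clients receive augmented data. The Dirichlet-simulated client partitions and the specific metric are assumptions for illustration, not the paper's exact procedure.

import numpy as np

rng = np.random.default_rng(2)
num_classes = 10
# Simulate non-IID clients by drawing label mixes from a Dirichlet prior.
clients = [rng.choice(num_classes, size=500,
                      p=rng.dirichlet(0.3 * np.ones(num_classes)))
           for _ in range(5)]

def label_dist(labels):
    counts = np.bincount(labels, minlength=num_classes) + 1e-9  # smoothed
    return counts / counts.sum()

global_dist = label_dist(np.concatenate(clients))
for i, c in enumerate(clients):
    p = label_dist(c)
    kl = np.sum(p * np.log(p / global_dist))  # KL(client || global)
    print(f"client {i}: KL to global = {kl:.3f}")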
Artificial Intelligence (AI) systems have seen a meteoric rise in adoption, permeating diverse sectors such as healthcare, finance, and e-commerce. The next step in the evolution of the practical use of AI is the automation of these systems. Since these systems hinge largely on the quality, volume, and careful management of data for training, optimization, and validation of data science models, the promise of AI automation depends significantly on the ability to efficiently handle data ingestion, ensure real-time processing, and respond swiftly to changes in the data. This demands a multi-faceted data engineering approach to help practitioners implement automated AI. The need for robust data engineering practices that ensure scalability, efficiency, and reliability in AI applications and their automation is evident, and comprehensive investigations of the impact of these practices on AI automation are required to create future-proof, reliable AI systems. This paper highlights several areas of data engineering that are important for the automation of AI, the challenges they pose, and their impact on the performance of AI systems. For each problem, a solution section outlines the scope of automation, the human involvement required, and the tools currently in use.
Emerging smart grid applications analyze large amounts of data collected from millions of meters and systems to facilitate distributed monitoring and real-time control tasks. However, current parallel data-processing systems are designed for common applications and are unaware of the massive volume of collected data, causing long data transfer delays during computation and slow response times in smart grid systems. A promising direction for reducing delay is to jointly schedule computation tasks and data transfers. We identify that smart grid data analytic jobs require the intermediate data among different computation stages to be transmitted in an orderly manner to avoid network congestion; this new feature prevents current scheduling algorithms from being efficient. In this work, an integrated computing and communication task scheduling scheme is proposed. The mathematical formulation of the smart grid data analytic job scheduling problem is given, which is unsolvable by existing optimization methods due to its strongly coupled constraints. Several techniques are combined to linearize it so that the Branch and Cut method can be applied. Based on the topological information in the job graph, a Topology Aware Branch and Cut method is further proposed to speed up the search for optimal solutions. Numerical results demonstrate the effectiveness of the proposed method.
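A sketch of how the job graph's topology can order intermediate data transfers: a transfer starts only after all upstream stages finish, which a topological sort of the stage DAG captures. The example DAG is hypothetical, and this ordering heuristic stands in for, rather than reproduces, the paper's Branch and Cut formulation.

from collections import deque

# stage -> downstream stages that receive its intermediate data (assumed DAG)
job_graph = {"ingest": ["clean"], "clean": ["agg", "features"],
             "agg": ["report"], "features": ["report"], "report": []}

def topo_order(graph):
    # Kahn's algorithm: repeatedly emit stages with no unfinished upstreams.
    indeg = {v: 0 for v in graph}
    for outs in graph.values():
        for v in outs:
            indeg[v] += 1
    ready = deque(v for v, d in indeg.items() if d == 0)
    order = []
    while ready:
        v = ready.popleft()
        order.append(v)
        for w in graph[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return order

print("stage launch / transfer order:", topo_order(job_graph))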
The goal of this research is to use data science and machine learning to anticipate and analyze the spread of the Covid-19 pandemic. The main aim is to create a predictive algorithm that anticipates daily Covid-19 cases across various geographies. The proposed model is based on historical Covid-19 case data together with factors such as population density, demographics, weather conditions, and socio-economic indicators, and it uses supervised learning algorithms to analyze the data and build a predictive model. Real-world data will be used to validate the suggested model, and performance measures including accuracy, precision, recall, and F1 score will be used to assess its effectiveness. Public health experts can use the model's findings to better understand the virus's transmission and to choose appropriate preventative measures. The proposed approach may prove to be an effective tool in the fight against the Covid-19 epidemic.
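A sketch of the evaluation loop described above on synthetic stand-in data: a supervised classifier predicting whether a region sees a case surge, scored with accuracy, precision, recall, and F1. The features and labels are randomly generated placeholders, not epidemiological data, and the random forest is one arbitrary choice of supervised learner.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(3)
# Columns stand in for population density, demographics, weather, socio-economics.
X = rng.standard_normal((1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.standard_normal(1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

for name, fn in [("accuracy", accuracy_score), ("precision", precision_score),
                 ("recall", recall_score), ("f1", f1_score)]:
    print(f"{name}: {fn(y_te, pred):.3f}")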