With the continuous growth of cloud computing and virtualization technology, network function virtualization (NFV) techniques have been significantly enhanced. NFV has many advantages, such as service simplification, more flexible service provisioning, and reduced network capital and operational costs. However, it also poses new challenges that need to be addressed. A challenging problem in NFV is resource management, since the resources required by each virtualized network function (VNF) change with dynamic traffic variations, requiring automatic scaling of VNF resources. Given the importance of resource consumption, it is essential to propose an efficient resource auto-scaling method for NFV networks. Inadequate or excessive utilization of VNF resources can degrade the performance of the entire service chain, thereby affecting network performance. Therefore, predicting VNF resource requirements is crucial for meeting traffic demands. VNF behavior in networks is complex and nonlinear, making it challenging to model. Incorporating machine learning methods into resource prediction models can improve network service performance by addressing this complexity. As a result, this paper introduces a new auto-scaling architecture and algorithm to tackle the predictive VNF problem. The proposed architecture contains a predictive VNF auto-scaling engine that comprises two modules: a predictive task scheduler and a predictive VNF auto-scaler. Furthermore, a prediction engine with a VNF resource predictor module has been designed. In addition, the proposed algorithm, called GPAS, is presented in three phases: VNF resource prediction using the genetic programming (GP) technique, task scheduling and decision-making, and auto-scaling execution. The GPAS method is simulated in the KSN framework, a network environment based on NFV/SDN. In the evaluation results, the GPAS method shows better performance in SLA violation rate, resource usage, and response time when compared with existing approaches.
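The abstract describes a predict-then-scale loop but not GPAS's GP-evolved predictor, so the sketch below uses a simple moving average as a stand-in predictor; the thresholds, capacity figure, and all function names are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of a predictive VNF auto-scaling loop in the spirit of GPAS.
# The predictor is a moving-average stand-in for the paper's GP model;
# thresholds and names are hypothetical.

def predict_load(history, window=3):
    """Predict next-interval VNF load from recent observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def scaling_decision(predicted, capacity, upper=0.8, lower=0.3):
    """Return a scaling action based on predicted utilization."""
    utilization = predicted / capacity
    if utilization > upper:
        return "scale_out"   # add a VNF instance before SLA violations occur
    if utilization < lower:
        return "scale_in"    # release resources to cut cost
    return "hold"

history = [40, 55, 70, 85, 90]   # observed load per interval
action = scaling_decision(predict_load(history), capacity=100)
print(action)  # rising load pushes predicted utilization above the upper bound
```

Acting on the prediction rather than the last observed sample is what makes the scaler proactive: capacity is added before utilization actually crosses the SLA-critical level.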
Predicting the metastatic direction of primary breast cancer (BC) assists physicians in precise treatment and strict follow-up, effectively improving the prognosis. The clinical data of 293,946 patients with ...
Matrix minimization techniques that employ the nuclear norm have gained recognition for their applicability in tasks like image inpainting, clustering, classification, and reconstruction. However, they come with inherent biases and computational burdens, especially when used to relax the rank function, making them less effective and efficient in real-world scenarios. To address these challenges, our research focuses on generalized nonconvex rank regularization problems in robust matrix completion, low-rank representation, and robust matrix regression. We introduce innovative approaches for effective and efficient low-rank matrix learning, grounded in generalized nonconvex rank relaxations inspired by various substitutes for the ℓ0-norm relaxed functions. These relaxations allow us to more accurately capture low-rank structures. Our optimization strategy employs a nonconvex and multi-variable alternating direction method of multipliers, backed by rigorous theoretical analysis of complexity and convergence. The algorithm iteratively updates blocks of variables, ensuring efficient convergence. Additionally, we incorporate the randomized singular value decomposition technique and/or other acceleration strategies to enhance the computational efficiency of our approach, particularly for large-scale constrained minimization problems. In conclusion, our experimental results across a variety of image vision-related application tasks unequivocally demonstrate the superiority of our proposed methodologies in terms of both efficacy and efficiency when compared to most other related learning methods.
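The bias the abstract attributes to the nuclear norm can be seen directly in its proximal step on singular values. A minimal sketch, using hard thresholding (the rank/ℓ0 prox) as the simplest nonconvex stand-in for the paper's generalized relaxations; the singular values and threshold are toy numbers.

```python
# Why nonconvex rank relaxations reduce nuclear-norm bias: the nuclear norm's
# prox soft-thresholds EVERY singular value, shrinking even large (informative)
# ones, while an l0/rank-style prox hard-thresholds and leaves them intact.

def soft_threshold(sigmas, lam):
    """Proximal operator of the nuclear norm, applied to singular values."""
    return [max(s - lam, 0.0) for s in sigmas]

def hard_threshold(sigmas, tau):
    """Proximal operator of the rank function: keep or kill each value."""
    return [s if s > tau else 0.0 for s in sigmas]

sigmas = [10.0, 5.0, 0.4, 0.1]       # singular values of a noisy low-rank matrix
print(soft_threshold(sigmas, 0.5))   # all surviving values shrunk by 0.5 -> biased
print(hard_threshold(sigmas, 0.5))   # large values preserved exactly
```

Both operators zero out the small "noise" singular values, but only the nonconvex one recovers the leading values without shrinkage, which is the bias reduction the generalized relaxations exploit.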
The behavior of users on online life service platforms like Meituan and Yelp often occurs within specific fine-grained spatiotemporal contexts (i.e., when and where). Recommender systems, designed to serve millions of users, typically operate in a fully server-based manner, requiring on-device users to upload their behavioral data, including fine-grained spatiotemporal contexts, to the server, which has sparked public concern regarding privacy. Consequently, user devices only upload coarse-grained spatiotemporal contexts for user privacy protection. However, previous research mostly focuses on modeling fine-grained spatiotemporal contexts using knowledge graph convolutional models, which are not applicable to coarse-grained spatiotemporal contexts in privacy-constrained recommender systems. In this paper, we investigate privacy-preserving recommendation by leveraging coarse-grained spatiotemporal contexts. We propose the coarse-grained spatiotemporal knowledge graph for privacy-preserving recommendation (CSKG), which explicitly models spatiotemporal co-occurrences using common-sense knowledge from coarse-grained contexts. Specifically, we begin by constructing a spatiotemporal knowledge graph tailored to coarse-grained spatiotemporal contexts. Then we employ a learnable metagraph network that integrates common-sense information to filter and extract co-occurrences. CSKG evaluates the impact of coarse-grained spatiotemporal contexts on user behavior through the use of a knowledge graph convolutional network. Finally, we introduce joint learning to effectively learn representations. By conducting experiments on two real large-scale datasets, we achieve an average improvement of about 11.0% on two ranking metrics. The results clearly demonstrate that CSKG outperforms state-of-the-art baselines.
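The co-occurrence modeling step can be illustrated with plain counting: coarse-grained (time-slot, region) contexts that repeatedly co-occur with an item category become candidate common-sense edges. This is only a sketch of the statistic behind CSKG's learnable metagraph filter, not the model itself; the records and field names are invented.

```python
# Counting coarse-grained spatiotemporal co-occurrences as candidate
# common-sense edges for a spatiotemporal knowledge graph. Data is invented.

from collections import Counter

behaviors = [
    {"time": "evening", "region": "downtown", "category": "restaurant"},
    {"time": "evening", "region": "downtown", "category": "restaurant"},
    {"time": "morning", "region": "downtown", "category": "cafe"},
    {"time": "evening", "region": "suburb",   "category": "cinema"},
]

cooccur = Counter((b["time"], b["region"], b["category"]) for b in behaviors)

# the strongest spatiotemporal co-occurrence becomes a common-sense edge
edge, weight = cooccur.most_common(1)[0]
print(edge, weight)  # ('evening', 'downtown', 'restaurant') 2
```

Note the privacy point: only coarse buckets such as "evening"/"downtown" ever leave the device, yet the aggregated counts still carry enough regularity for a graph model to exploit.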
Palmprint recognition is an emerging biometrics technology that has attracted increasing attention in recent years. Many palmprint recognition methods have been proposed, including traditional methods and deep learning-based methods. Among the traditional methods, those based on directional features are mainstream because they have high recognition rates and are robust to illumination changes and small noises. However, to date, the stability of the palmprint directional response has not been deeply studied in these methods. In this paper, we analyse the problem of directional response instability in palmprint recognition methods based on directional features. We then propose a novel palmprint directional response stability measurement (DRSM) to judge the stability of the directional feature of each pixel. After filtering the palmprint image with the filter bank, we design DRSM according to the relationship between the maximum response value and the other response values for each pixel. Using DRSM, we can identify pixels with unstable directional responses and use a specially designed encoding mode related to a specific method. We insert the DRSM mechanism into seven classical methods based on directional features, and conduct extensive experiments on six public palmprint databases. The experimental results show that the DRSM mechanism can effectively improve the performance of these methods. In the field of palmprint recognition, this work is the first in-depth study on the stability of the palmprint directional response, so this paper has strong reference value for research on palmprint recognition methods based on directional features.
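The core idea, relating a pixel's maximum directional response to the other responses, can be sketched with a simple dominance ratio: if the winning filter barely beats the runner-up, the pixel's direction is unreliable. The gap-ratio criterion and threshold below are illustrative assumptions, not the paper's exact DRSM definition.

```python
# Sketch of a directional-response stability check in the spirit of DRSM.
# Input: one pixel's responses to a bank of directional filters (e.g. 6
# orientations). The direction is deemed stable when the maximum response
# clearly dominates the second-largest one.

def direction_is_stable(responses, ratio=1.2):
    """Return (dominant direction index, stable?) for one pixel."""
    ranked = sorted(range(len(responses)), key=lambda i: responses[i], reverse=True)
    best, second = responses[ranked[0]], responses[ranked[1]]
    return ranked[0], best >= ratio * second

stable_pixel   = [0.1, 0.9, 0.2, 0.15, 0.1, 0.05]    # one clear winner
unstable_pixel = [0.1, 0.50, 0.48, 0.15, 0.1, 0.05]  # two near-equal directions

print(direction_is_stable(stable_pixel))    # (1, True)
print(direction_is_stable(unstable_pixel))  # (1, False)
```

Pixels flagged unstable are exactly the ones where small noise can flip the winning orientation between enrollment and query images, which is why the paper routes them to a special encoding mode.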
The paper addresses the critical problem of application workflow offloading in a fog environment. Resource-constrained mobile and Internet of Things devices may not possess specialized hardware to run complex workflow...
To enhance the efficiency and accuracy of environmental perception for autonomous vehicles, we propose GDMNet, a unified multi-task perception network for autonomous driving, capable of performing drivable area segmentation, lane detection, and traffic object detection. First, in the encoding stage, features are extracted, and the Generalized Efficient Layer Aggregation Network (GELAN) is utilized to enhance feature extraction and gradient flow. Next, in the decoding stage, specialized detection heads are designed; the drivable area segmentation head employs DySample to expand feature maps, and the lane detection head merges early-stage features and processes the output through the Focal Modulation Network (FMN). Lastly, the Minimum Point Distance IoU (MPDIoU) loss function is employed to compute the matching degree between traffic object detection boxes and predicted boxes, facilitating model training. Experimental results on the BDD100K dataset demonstrate that the proposed network achieves a drivable area segmentation mean intersection over union (mIoU) of 92.2%, lane detection accuracy and intersection over union (IoU) of 75.3% and 26.4%, respectively, and traffic object detection recall and mAP of 89.7% and 78.2%, respectively. The detection performance surpasses that of other single-task or multi-task algorithm models.
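The MPDIoU matching degree mentioned above has a compact closed form: standard IoU penalized by the squared distances between the two boxes' top-left and bottom-right corners, normalized by the image diagonal. A minimal sketch follows; the box coordinates and image size are made-up example values.

```python
# MPDIoU between a predicted and a ground-truth box, both (x1, y1, x2, y2).
# MPDIoU = IoU - d1^2/(w^2+h^2) - d2^2/(w^2+h^2), where d1/d2 are the
# top-left / bottom-right corner distances and (w, h) is the image size.

def mpd_iou(pred, gt, img_w, img_h):
    # intersection over union
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter)
    # squared corner distances, normalized by the image diagonal
    norm = img_w ** 2 + img_h ** 2
    d1 = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2   # top-left corners
    d2 = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2   # bottom-right corners
    return iou - d1 / norm - d2 / norm

score = mpd_iou(pred=(10, 10, 50, 50), gt=(12, 12, 52, 52), img_w=640, img_h=640)
loss = 1 - score  # MPDIoU loss used during training
```

Because the corner-distance terms stay positive even when IoU saturates, the loss keeps providing a gradient that pulls the predicted corners onto the ground-truth corners, which is what "facilitating model training" refers to.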
In modern society, an increasing number of occasions require effective verification of people's identities. Biometrics is the most effective technology for personal identification. The research on automated biometrics recognition mainly started in the 1960s and 1970s. In the following 50 years, the research and application of biometrics achieved fruitful results. In 2014-2015, with the successful applications of some emerging information technologies and tools, such as deep learning, cloud computing, big data, mobile communication, smartphones, location-based services, blockchain, new sensing technology, the Internet of Things and federated learning, biometric technology entered a new development phase. Therefore, taking 2014-2015 as the time boundary, the development of biometric technology can be divided into two periods. In addition, according to our knowledge and understanding of biometrics, we further divide the development of biometric technology into three phases, i.e., biometrics 1.0, 2.0 and 3.0. Biometrics 1.0 is the primary development phase, or the traditional development phase. Biometrics 2.0 is an explosive development phase driven by the breakthroughs caused by some emerging information technologies. At present, we are in the development phase of biometrics 2.0. Biometrics 3.0 is the future development phase of biometrics. In the biometrics 3.0 phase, biometric technology will be fully mature and can meet the needs of various applications. Biometrics 1.0 is the initial phase of the development of biometric technology, while biometrics 2.0 is the advanced phase. In this paper, we provide a brief review of biometrics 1.0. Then, the concept of biometrics 2.0 is defined, and the architecture of biometrics 2.0 is presented. In particular, the application architecture of biometrics 2.0 in smart cities is introduced. The challenges and perspectives of biometrics 2.0 are also discussed.
Mashup developers often need to find open application programming interfaces (APIs) for their composition application development. Although most enterprises and service organizations have encapsulated their businesses or resources online as open APIs, finding the right high-quality open APIs is not an easy task in a library with many open APIs. To solve this problem, this paper proposes a deep learning-based open API recommendation (DLOAR) approach. First, the hierarchical density-based spatial clustering of applications with noise topic model is constructed to build topic models for Mashup clusters. Second, developers' requirement keywords are extracted by the TextRank algorithm, and the language model is built. Third, a neural network-based three-level similarity calculation is performed to find the most relevant open APIs. Finally, we complement the relevant information of open APIs in the recommended list to help developers make better choices. We evaluate the DLOAR approach on a real dataset and compare it with commonly used open API recommendation approaches: term frequency-inverse document frequency, latent Dirichlet allocation, Word2Vec, and Sentence-BERT. The results show that the DLOAR approach has better performance than the other approaches in terms of precision, recall, F1-measure, mean average precision, and mean reciprocal rank.
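The abstract does not spell out DLOAR's three-level neural similarity, so the sketch below instead shows the simplest of its baselines, TF-IDF cosine similarity, matching requirement keywords to API descriptions. The API names, descriptions, and the +1 idf smoothing are invented for illustration.

```python
# TF-IDF baseline for keyword-to-API matching (one of the baselines the
# DLOAR paper compares against). All API data here is made up.

import math
from collections import Counter

apis = {
    "MapView":   "display interactive maps and markers",
    "GeoSearch": "search places near a location on a map",
    "PayNow":    "process online card payments securely",
}
docs = {name: desc.split() for name, desc in apis.items()}

n = len(docs)
df = Counter(w for words in docs.values() for w in set(words))
idf = {w: math.log(n / c) + 1.0 for w, c in df.items()}  # +1 keeps shared words

def tfidf(words):
    """Sparse TF-IDF vector; words unseen in the corpus get zero weight."""
    return {w: tf * idf.get(w, 0.0) for w, tf in Counter(words).items()}

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = tfidf("search a place near my location".split())
ranked = sorted(docs, key=lambda name: cosine(query, tfidf(docs[name])), reverse=True)
print(ranked[0])  # GeoSearch
```

The weakness visible even in this toy run, exact-token matching misses "place" vs "places", is the gap that embedding-based methods (Word2Vec, Sentence-BERT) and DLOAR's neural similarity are designed to close.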
Deep hashing is a technique for large-scale image retrieval that encodes the latent codes of images into binary codes, significantly reducing computational and storage costs in image retrieval. This enabl...