Recommender systems aim to filter information effectively and recommend useful sources to match users' requirements. However, the exponential growth of information in recent social networks may cause low predictio...
The manual process of evaluating answer scripts is strenuous. Evaluators use the answer key to assess the answers in the answer scripts. Advancements in technology and the introduction of new learning paradigms need a...
Wind field forecasting is crucial for human activities, but numerical weather prediction still has room to improve accuracy. In this paper, we formalize wind field forecast correction as a spatiotemporal sequence pred...
Array-geometry-agnostic speech separation (AGA-SS) aims to develop an effective separation method regardless of the microphone array geometry. Conventional methods rely on permutation-free operations, such as summatio...
The rapid development of deep learning provides great convenience for production and daily life. However, the massive labels required for training models limit its further application. Few-shot learning, which can obtain a high-performance model by learning from few samples in new tasks, provides a solution for many scenarios that lack samples. This paper summarizes few-shot learning algorithms of recent years and proposes a taxonomy. First, we introduce the few-shot learning task and its significance. Second, according to their implementation strategies, recent few-shot learning methods are divided into five categories: data augmentation-based methods, metric learning-based methods, parameter optimization-based methods, external memory-based methods, and other methods. Next, we investigate applications of few-shot learning and summarize them along three directions: computer vision, human-machine language interaction, and robot control. Finally, we analyze the existing few-shot learning methods by comparing their evaluation results on miniImageNet, and conclude the paper.
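As a concrete illustration of the metric-learning category above, here is a minimal prototypical-network-style sketch (a generic instance of the technique, not code from any surveyed paper; the toy embeddings and names are invented): each class's prototype is the mean of its support embeddings, and each query takes the label of the nearest prototype.

```python
import numpy as np

def prototype_classify(support, labels, query):
    """Label each query embedding by its nearest class prototype,
    where a prototype is the mean of that class's support embeddings."""
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    # Euclidean distance from every query (rows) to every prototype (cols).
    dists = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode with hand-made 2-D "embeddings".
support = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
query = np.array([[0., 0.5], [5., 5.5]])
print(prototype_classify(support, labels, query))  # -> [0 1]
```

In a real few-shot pipeline the embeddings would come from a trained backbone network; the nearest-prototype decision rule itself is unchanged.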
Researchers have recently created several deep learning strategies for various tasks, and facial recognition has made remarkable progress in employing these techniques. Face recognition is a noncontact, nonintrusive, and widely accepted biometric recognition method with promising applications in national and social security. The purpose of this paper is to improve existing face recognition algorithms, investigate extensive data-driven face recognition methods, and propose an automated face recognition methodology based on generative adversarial networks (GANs) and the center-symmetric multivariable local binary pattern (CS-MLBP). First, this paper employs the CS-MLBP algorithm to extract the texture features of the face, addressing the issue that C2DPCA (column-based two-dimensional principal component analysis) does an excellent job of extracting the global characteristics of the face but struggles to capture its local features under large samples. The extracted texture features are combined with the global features retrieved using C2DPCA to generate a multifeature face representation. The proposed method, GAN-CS-MLBP, combines the power of GANs with the robustness of CS-MLBP, resulting in an accurate and efficient face recognition system. Deep learning algorithms, mainly neural networks, automatically extract discriminative properties from facial images; the learned features capture both low-level information and high-level semantics, permitting the model to distinguish among different persons more successfully. To assess the performance of the proposed GAN-CS-MLBP technique, extensive experiments are performed on benchmark face recognition datasets such as LFW, YTF, and CASIA-WebFace. According to the findings, our method exceeds state-of-the-art facial recognition systems in terms of recognition accuracy and resilience. The proposed automatic face recognition system GAN-CS-MLBP provides a solid basis for a...
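CS-MLBP builds on center-symmetric pixel comparisons. The basic center-symmetric LBP operator can be sketched as follows; the multivariable extension and the GAN components of the paper are not reproduced here, and the threshold `t` and function names are our assumptions:

```python
import numpy as np

def cs_lbp(img, t=0.01):
    """Basic center-symmetric LBP: for each interior pixel, compare its
    4 diametrically opposite neighbour pairs; each comparison sets one
    bit, giving a 4-bit texture code (0..15) per pixel."""
    img = img.astype(np.float64)
    pairs = [
        (img[:-2, 1:-1], img[2:, 1:-1]),   # N  vs S
        (img[:-2, 2:],   img[2:, :-2]),    # NE vs SW
        (img[1:-1, 2:],  img[1:-1, :-2]),  # E  vs W
        (img[2:, 2:],    img[:-2, :-2]),   # SE vs NW
    ]
    code = np.zeros(img[1:-1, 1:-1].shape, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > t).astype(np.uint8) << bit
    return code

# On a simple intensity ramp every interior pixel gets the same code (12).
print(cs_lbp(np.arange(16, dtype=float).reshape(4, 4)))
```

In a setting like the paper's, these per-pixel codes would be histogrammed per block to form a texture descriptor and concatenated with the C2DPCA global features.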
The arduous and costly journey of drug discovery is increasingly intersecting with computational approaches, which promise to accelerate the analysis of bioassays and biomedical literature. The critical role of microRNAs (miRNAs) in disease progression has been underscored in recent studies, elevating them as potential therapeutic targets and emphasizing the need for sophisticated computational models that can effectively identify promising drug targets such as miRNAs. Herein, we present a novel method, termed Duplex Link Prediction (DLP), rooted in subspace segmentation, to pinpoint potential miRNA targets. Our approach starts by applying the Network Enhancement (NE) algorithm to refine the similarity metric between miRNAs. Thereafter, we construct two matrices by pre-loading the association matrix from both the drug and the miRNA perspectives, employing the K-nearest-neighbors (KNN) technique. The DLSR algorithm is then applied to predict potential associations, and the final predicted association scores are obtained as the weighted mean of the two matrices. Our empirical findings suggest that the DLP algorithm outperforms current methodologies at identifying potential miRNA drug targets, and case-study validations further reinforce the real-world applicability and effectiveness of the proposed method. The code of DLP is freely available at https://***/kaizheng-academic/DLP.
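The KNN pre-loading and weighted-mean fusion steps can be sketched as below (the NE and DLSR stages are omitted, and all names, `k`, and the fusion weight `alpha` are our assumptions, not the released implementation):

```python
import numpy as np

def knn_preload(assoc, sim, k=2):
    """Pre-load an association matrix: each row is augmented with the
    similarity-weighted mean of its k most similar rows, so entities
    with no known associations still receive informative scores."""
    out = assoc.astype(float).copy()
    for i in range(assoc.shape[0]):
        nbrs = np.argsort(-sim[i])
        nbrs = nbrs[nbrs != i][:k]          # k nearest, excluding self
        w = sim[i, nbrs]
        if w.sum() > 0:
            out[i] = np.maximum(out[i], w @ assoc[nbrs] / w.sum())
    return out

def fuse(scores_a, scores_b, alpha=0.5):
    """Final scores as the weighted mean of the two per-view matrices."""
    return alpha * scores_a + (1 - alpha) * scores_b

assoc = np.array([[1., 0.], [0., 1.], [0., 0.]])            # miRNA x drug
sim = np.array([[1., 0., .5], [0., 1., .5], [.5, .5, 1.]])  # miRNA sim
print(knn_preload(assoc, sim))  # the empty third row is filled in
```

The same pre-loading would be applied from the drug side with a drug-drug similarity matrix, and the two resulting score matrices fused with `fuse`.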
The current large-scale Internet of Things (IoT) networks typically generate high-velocity network traffic streams. Attackers use IoT devices to create botnets and launch attacks such as DDoS, spamming, cryptocurrency mining, phishing, etc. The service providers of large-scale IoT networks need to set up a data pipeline to collect the vast network traffic data from the IoT devices, store it, analyze it, and report the malicious IoT devices and types of attacks. Moreover, the attacks originating from IoT devices are dynamic, as attackers launch one kind of attack at one time and another kind at another time, and the numbers of attack and benign instances also vary over time. This phenomenon of change in attack patterns is called concept drift. Hence, the attack detection system must learn continuously from the ever-changing real-time attack patterns in large-scale IoT network traffic. To meet this requirement, in this work we propose a data pipeline with Apache Kafka, Apache Spark structured streaming, and MongoDB that can adapt to the ever-changing attack patterns in real time and classify attacks in large-scale IoT networks. When concept drift is detected, the proposed system retrains the classifier with the instances that caused the drift and a representative subsample of instances from the previous training of the model. The proposed approach is evaluated on the latest dataset, IoT23, which consists of benign instances and several attack instances from various IoT devices. The classification accuracy is improved from 97.8% to 99.46% by the proposed approach. The training time of the distributed random forest algorithm is also studied by varying the number of cores in the Apache Spark environment.
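Setting the Kafka/Spark/MongoDB plumbing aside, the drift-triggered retraining logic can be sketched in a single process. This is a minimal sketch under our own assumptions (the thresholds, the majority-class stand-in model, and all names are ours, not the paper's implementation):

```python
import random
from collections import deque

class Majority:
    """Stand-in classifier: predicts the most frequent training label."""
    def __init__(self, data):
        ys = [y for _, y in data]
        self.label = max(set(ys), key=ys.count)
    def predict(self, x):
        return self.label

class DriftAwareClassifier:
    """When accuracy over a sliding window drops below a threshold,
    retrain on the drift window plus a subsample of earlier data."""
    def __init__(self, train_cls, threshold=0.5, window=10, keep=0.5):
        self.train_cls, self.threshold, self.keep = train_cls, threshold, keep
        self.window = deque(maxlen=window)  # recent (x, y, was_correct)
        self.history = []                   # data used at the last retraining
        self.model = None

    def update(self, x, y):
        pred = self.model.predict(x) if self.model else None
        self.window.append((x, y, pred == y))
        acc = sum(ok for _, _, ok in self.window) / len(self.window)
        if self.model is None or (len(self.window) == self.window.maxlen
                                  and acc < self.threshold):
            self._retrain()

    def _retrain(self):
        drift = [(x, y) for x, y, _ in self.window]
        kept = random.sample(self.history, int(self.keep * len(self.history)))
        self.history = drift + kept
        self.model = self.train_cls(self.history)
        self.window.clear()

clf = DriftAwareClassifier(Majority)
for _ in range(20):
    clf.update(x=0, y=0)   # benign-dominated traffic
for _ in range(10):
    clf.update(x=0, y=1)   # drift: a new attack pattern takes over
print(clf.model.label)     # -> 1 (retrained on the drifted window)
```

In the streaming deployment the `update` loop would consume micro-batches from Kafka via Spark structured streaming, with the retrained model and predictions persisted to MongoDB.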
Task scheduling, which is important in cloud computing, is one of the most challenging issues in this area. Hence, an efficient and reliable task scheduling approach is needed to produce more efficient resource employ...
In today's intelligent transportation systems, the effectiveness of image-based analysis relies heavily on image quality. To enhance images while preserving reversibility, this paper proposes a histogram matching-...
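Setting the reversible-embedding machinery aside (the abstract is truncated here), plain histogram matching, which maps source grey levels so their cumulative distribution follows a reference, can be sketched as below (function name and toy images are ours):

```python
import numpy as np

def match_histogram(src, ref):
    """Map src grey levels so their empirical CDF matches ref's CDF
    (classic histogram matching / histogram specification)."""
    s_vals, s_idx, s_counts = np.unique(src, return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(ref, return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # For each source level, take the reference level at the same CDF height.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(src.shape)

src = np.array([[0, 1], [2, 3]])
ref = np.array([[10, 11], [12, 13]])
print(match_histogram(src, ref))  # source levels land on reference levels
```

A reversible variant would additionally have to embed the information needed to invert the mapping, which is the contribution the truncated abstract alludes to.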