Long-term multivariate time series forecasting is an important task in engineering applications. It helps grasp the future development trend of data in real time, which is of great significance for a wide variety of fields. Due to the non-linear and unstable characteristics of multivariate time series, existing methods have difficulty analyzing complex high-dimensional data and capturing latent relationships between variables, which degrades long-term prediction performance. In this paper, we propose a novel time series forecasting model based on the multilayer perceptron that combines spatio-temporal decomposition with doubly residual stacking, named the Spatio-Temporal Decomposition Neural Network (STDNet). We decompose the originally complex and unstable time series into two parts, a temporal term and a spatial term. We design a temporal module based on an auto-correlation mechanism to discover temporal dependencies at the sub-series level, and a spatial module based on a convolutional neural network and a self-attention mechanism to integrate multivariate information from two dimensions, global and local, respectively. We then integrate the results from the different modules to obtain the final forecast. Extensive experiments on four real-world datasets show that STDNet significantly outperforms other state-of-the-art methods, providing an effective solution for long-term time series forecasting.
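As a rough illustration of the decomposition idea only (STDNet's actual temporal and spatial modules use auto-correlation, a CNN, and self-attention, none of which appears here), the following sketch splits a series into a smooth term and a residual term with a moving average; the function name and window size are illustrative assumptions:

```python
import numpy as np

def decompose(series: np.ndarray, window: int = 5):
    """Split a 1-D series into a smooth trend term and a residual term.

    A moving average stands in for the paper's decomposition; the window
    size and padding scheme are illustrative choices, not STDNet's.
    """
    # Pad both ends so the moving average keeps the original length.
    pad = window // 2
    padded = np.concatenate([np.repeat(series[0], pad),
                             series,
                             np.repeat(series[-1], pad)])
    kernel = np.ones(window) / window
    trend = np.convolve(padded, kernel, mode="valid")
    residual = series - trend
    return trend, residual

series = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
trend, residual = decompose(series, window=3)
# By construction the two terms sum back to the original series.
print(np.allclose(trend + residual, series))
```

Whatever modules then model each term, this additive split guarantees the parts can be recombined into the final forecast.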
Stock price prediction is a typical complex time series prediction problem characterized by dynamics and nonlinearity. This paper introduces a generative adversarial network model that incorporates an attention mechanism (GAN-LSTM-Attention) to improve the accuracy of stock price prediction. First, the generator of this model combines the Long Short-Term Memory (LSTM) network, the attention mechanism, and the fully-connected layer, focusing on generating the predicted stock price. The discriminator combines the Convolutional Neural Network (CNN) and the fully-connected layer to discriminate between real stock prices and generated stock prices. Then, to evaluate the practical applicability and generalization ability of the GAN-LSTM-Attention model, four representative stocks in the United States of America (USA) stock market, namely, Standard & Poor's 500 Index stock, Apple Incorporated stock, Advanced Micro Devices Incorporated stock, and Google Incorporated stock, were selected for prediction experiments, and the prediction performance was comprehensively evaluated using three evaluation metrics, namely, mean absolute error (MAE), root mean square error (RMSE), and the coefficient of determination (R²). Finally, the specific effects of the attention mechanism, convolutional layer, and fully-connected layer on the prediction performance of the model are systematically analyzed through ablation experiments. The experimental results show that the GAN-LSTM-Attention model exhibits excellent performance and robustness in stock price prediction.
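The three evaluation metrics named in the abstract have standard definitions, sketched below; the toy arrays are made up purely for illustration:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the prediction errors."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root mean square error: penalizes large errors more than MAE."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([10.0, 11.0, 12.0, 13.0])
y_pred = np.array([10.5, 10.5, 12.5, 12.5])
print(mae(y_true, y_pred), rmse(y_true, y_pred), r2(y_true, y_pred))
# -> 0.5 0.5 0.8
```

Lower MAE/RMSE and an R² closer to 1 indicate better predictions, which is how the four-stock experiments are compared.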
In low-light image enhancement, prevailing Retinex-based methods often struggle with precise illumination estimation and brightness adjustment. This can result in issues such as halo artifacts, blurred edges, and diminished details in bright regions, particularly under non-uniform illumination conditions. We propose an innovative approach that refines low-light images by leveraging an in-depth awareness of local content within the image. By introducing multi-scale effective guided filtering, our method surpasses the limitations of traditional isotropic filters, such as Gaussian filters, in handling non-uniform illumination. It dynamically adjusts regularization parameters in response to local image characteristics and integrates edge perception across different scales. This balanced approach achieves a harmonious blend of smoothing and detail preservation, enabling more accurate illumination estimation. Furthermore, we have designed an adaptive gamma correction function that dynamically adjusts the brightness value based on local pixel intensity, further balancing enhancement effects across different brightness levels in the image. Experimental results demonstrate the effectiveness of our proposed method for non-uniform illumination images across various scenes. It exhibits superior quality and objective evaluation scores compared to existing methods. Our method effectively addresses the issues that existing methods encounter when processing non-uniform illumination images, producing enhanced images with precise details and natural, vivid colors.
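A minimal sketch of what a locally adaptive gamma correction can look like; the mapping from local mean intensity to gamma below is a hypothetical choice for illustration, not the paper's actual function:

```python
import numpy as np

def adaptive_gamma(image: np.ndarray, window: int = 3) -> np.ndarray:
    """Brighten dark neighbourhoods more than bright ones.

    Each pixel's exponent is derived from the local mean intensity: a dark
    neighbourhood (mean near 0) gets gamma well below 1 (strong brightening)
    while a bright one keeps gamma near 1 (almost unchanged). Expects
    intensities normalised to [0, 1]. The [0.5, 1.0] gamma range is an
    assumption made for this sketch.
    """
    pad = window // 2
    padded = np.pad(image, pad, mode="edge")
    # Local mean via an explicit box filter (no SciPy dependency).
    local_mean = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            local_mean[i, j] = padded[i:i + window, j:j + window].mean()
    gamma = 0.5 + 0.5 * local_mean  # darker region -> smaller gamma
    return np.clip(image, 0.0, 1.0) ** gamma

dark = np.full((4, 4), 0.1)
bright = np.full((4, 4), 0.9)
# Dark pixels are lifted substantially; bright pixels barely move.
print(adaptive_gamma(dark)[0, 0], adaptive_gamma(bright)[0, 0])
```

Because the exponent varies per pixel, bright regions are not washed out while dark regions are lifted, which is the balance the abstract describes.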
Graph similarity learning aims to calculate the similarity between pairs of graphs. Existing unsupervised graph similarity learning methods based on contrastive learning encounter challenges related to random graph augmentation strategies, which can harm the semantic and structural information of graphs and overlook the rich structural information present in subgraphs. To address these issues, we propose a graph similarity learning model based on learnable augmentation and multi-level contrastive learning. First, to tackle the problem of random augmentation disrupting the semantics and structure of the graph, we design a learnable augmentation method to selectively choose nodes and edges within the graph. To enrich the contrastive levels, we employ a biased random walk method to generate corresponding subgraphs. Second, to address the fact that previous work does not consider multi-level contrastive learning, we utilize graph convolutional networks to learn node representations of the augmented views and the original graph, and calculate the interaction information between the attribute-augmented and structure-augmented views and the original graph. The goal is to maximize node consistency between different views and learn node matching between different graphs, resulting in node-level representations for each graph. Subgraph representations are then obtained through pooling operations, and we conduct contrastive learning using both node and subgraph representations. Finally, the graph similarity score is computed according to different downstream tasks. We conducted three sets of experiments across eight datasets, and the results demonstrate that the proposed model effectively mitigates the issues of random augmentation damaging the original graph's semantics and structure, as well as the insufficiency of contrastive levels. Moreover, the model achieves the best overall performance.
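The node-level part of such an objective can be sketched with a generic NT-Xent-style contrastive loss (a common choice in graph contrastive learning, not the paper's exact multi-level formulation): matched nodes across two views form positive pairs, all other cross-view nodes are negatives.

```python
import numpy as np

def nt_xent(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    """Toy node-level contrastive loss between two views.

    z1[i] and z2[i] embed the same node under two augmentations (a positive
    pair); every other row of z2 acts as a negative for row i.
    """
    # Cosine similarities between all cross-view pairs, scaled by temperature.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    # Softmax cross-entropy with the diagonal (matched nodes) as targets.
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Identical views should score a lower loss than unrelated views.
aligned = nt_xent(z, z)
random_views = nt_xent(z, rng.normal(size=(8, 16)))
print(aligned, random_views)
```

Minimizing this pulls matched nodes together across views, which is the "maximize node consistency" step; the subgraph-level term would apply the same idea to pooled representations.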
The rapid development of ISAs has brought the issue of software compatibility to the forefront in the embedded domain. To address this challenge, one promising solution is the adoption of a multiple-ISA processor that supports several different ISAs. However, due to constraints in cost and performance, the architecture of a multiple-ISA processor must be carefully optimized to meet the specific requirements of embedded devices. By exploring the RISC-V and ARM Thumb ISAs, this paper proposes RVAM16, an optimized multiple-ISA processor microarchitecture for embedded devices based on hardware binary translation technology. The results show that, when running non-native ARM Thumb programs, RVAM16 achieves a significant speedup of over 2.73× with less area and energy consumption compared to using hardware binary translation alone, reaching more than 70% of the performance of native RISC-V programs.
The Coordinate Descent Method for K-means (CDKM) is an improved K-means algorithm. It identifies better locally optimal solutions than the original K-means algorithm; that is, it achieves solutions that yield smaller objective function values than K-means. However, CDKM is sensitive to initialization, which keeps the K-means objective function values from being small enough. Since selecting suitable initial centers is not always possible, this paper proposes a novel algorithm that modifies the process of CDKM. The proposed algorithm first obtains the partition matrix by CDKM and then optimizes the partition matrix with a split-merge criterion designed to reduce the objective function value further. The split-merge criterion minimizes the objective function value as much as possible while ensuring that the number of clusters remains unchanged. The algorithm avoids the distance calculation in the traditional K-means algorithm because all operations are completed using only the partition matrix. Experiments on ten UCI datasets show that the solution accuracy of the proposed algorithm, measured by the E value, is improved by 11.29% compared with CDKM, and that it retains its efficiency advantage on high-dimensional datasets. The proposed algorithm finds a better locally optimal solution than other tested improved K-means algorithms in less run time.
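The claim that the objective can be handled through the partition alone rests on a standard identity: the within-cluster sum of squared distances to the centroid equals the cluster's total squared norm minus the squared norm of its sum divided by its size, so no point-to-center distances need to be formed. A sketch of that computation (the toy data and labels are made up; this is the identity, not CDKM's full split-merge procedure):

```python
import numpy as np

def kmeans_objective(X: np.ndarray, labels: np.ndarray) -> float:
    """K-means objective computed from the partition alone.

    Uses, per cluster k with members x_i and size n_k:
        sum_i ||x_i - c_k||^2 = sum_i ||x_i||^2 - ||sum_i x_i||^2 / n_k,
    so no explicit point-to-centroid distances are computed.
    """
    total = float(np.sum(X ** 2))
    for k in np.unique(labels):
        members = X[labels == k]
        total -= float(np.sum(members.sum(axis=0) ** 2)) / len(members)
    return total

X = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 0.0], [12.0, 0.0]])
good = kmeans_objective(X, np.array([0, 0, 1, 1]))   # natural split
bad = kmeans_objective(X, np.array([0, 1, 0, 1]))    # mixed split
print(good, bad)
# -> 4.0 100.0
```

A split-merge step can therefore compare candidate partitions by evaluating this quantity directly on the partition matrix.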
Recommendation has been widely used in business scenarios to provide users with personalized and accurate item lists by efficiently analyzing complex user-item interactions. However, existing recommendation methods have significant shortcomings in capturing the dynamic preference changes of users and discovering their true potential interests. To address these problems, a novel framework named Intent-Aware Graph-Level Embedding Learning (IaGEL) is proposed for recommendation. In this framework, potential user interest is explored by capturing the co-occurrence of items in different periods, and user interest is then further refined by an adaptive aggregation algorithm, forming generic intents and specific intents. In addition, to better represent the intents, graph-level embedding learning is designed based on mutual information comparison among positive and negative intents. Finally, an intent-based recommendation strategy is designed to further mine the dynamic changes in user intents. Experiments on three public and industrial datasets demonstrate the effectiveness of the proposed IaGEL in the task of recommendation.
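The co-occurrence signal that seeds the intent discovery can be sketched very simply: count how often two items appear in the same interaction session within a period. The session structure and item names below are made-up illustrations, not IaGEL's data model:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sessions):
    """Count order-independent item pairs appearing in the same session.

    Each session is a list of item ids from one period; pairs are keyed in
    sorted order so (a, b) and (b, a) accumulate together.
    """
    counts = Counter()
    for items in sessions:
        for a, b in combinations(sorted(set(items)), 2):
            counts[(a, b)] += 1
    return counts

sessions = [["milk", "bread", "eggs"], ["milk", "bread"], ["eggs", "jam"]]
co = cooccurrence(sessions)
print(co[("bread", "milk")])
# -> 2
```

Running such counts per period and aggregating them is one plausible way frequently co-purchased items get grouped into candidate intents.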
In a crowd density estimation dataset, the annotation of crowd locations is an extremely laborious task, and these locations are not used in the evaluation process. In this paper, we aim to reduce the annotation cost of crowd datasets and propose a crowd density estimation method based on weakly-supervised learning that, in the absence of crowd position supervision, directly estimates the crowd number by using the number of pedestrians in the image as the supervision signal. For this purpose, we design a new training method that exploits the correlation between global and local image features through incremental learning to train the network. Specifically, we design a parent-child network (PC-Net) focusing on the global and local image respectively, and propose a linear feature calibration structure to train the PC-Net simultaneously: the child network learns feature transfer factors and feature bias weights and uses them to linearly calibrate the features extracted from the parent network, improving the convergence of the network by exploiting local features hidden in the crowd images. In addition, we use the pyramid vision transformer as the backbone of the PC-Net to extract crowd features at different levels, and design a global-local feature loss function (L2). We combine it with a crowd counting loss (LC) to enhance the sensitivity of the network to crowd features during training, which effectively improves the accuracy of crowd density estimation. The experimental results show that the PC-Net significantly reduces the gap between fully-supervised and weakly-supervised crowd density estimation, and outperforms the comparison methods on five datasets: ShanghaiTech Part A, ShanghaiTech Part B, UCF_CC_50, UCF_QNRF, and JHU-CROWD++.
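The linear feature calibration step described above reduces to a channel-wise affine map: the child network predicts a transfer factor and a bias weight per channel, and the parent features are rescaled accordingly. A minimal sketch, assuming a flat (batch, channels) feature layout that is an illustration rather than the paper's exact tensor shapes:

```python
import numpy as np

def linear_calibrate(parent_feat: np.ndarray,
                     transfer: np.ndarray,
                     bias: np.ndarray) -> np.ndarray:
    """Channel-wise linear calibration of parent-network features.

    Applies transfer * feature + bias per channel; `transfer` and `bias`
    stand in for the factors the child network would learn.
    """
    return transfer * parent_feat + bias

feat = np.array([[1.0, 2.0], [3.0, 4.0]])  # (batch, channels)
transfer = np.array([0.5, 2.0])            # learned per-channel scale
bias = np.array([0.1, -1.0])               # learned per-channel shift
print(linear_calibrate(feat, transfer, bias))
```

Because the map is linear, gradients flow cleanly into both the parent features and the calibration parameters, which is consistent with the stated goal of improving convergence.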
Dear Editor, This letter presents a distributed adaptive second-order latent factor (DAS) model for addressing the issue of high-dimensional and incomplete data representation. Compared with first-order optimizers, a second-order optimizer has a stronger ability to approach a better solution when dealing with non-convex optimization problems, thus obtaining better performance in extracting the latent factors (LFs) that well represent the known information in high-dimensional and incomplete data.
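For context, the shared setting can be sketched with a plain first-order latent-factor baseline (the letter's contribution is precisely to replace this with a distributed adaptive second-order optimizer, which is not shown): factor matrices P and Q are fit so that P[u] @ Q[i] matches each known (u, i, value) entry, ignoring the missing entries. All names, shapes, and hyperparameters below are assumptions for illustration:

```python
import numpy as np

def sgd_latent_factors(entries, shape, rank=2, lr=0.05, epochs=500, seed=0):
    """First-order SGD latent-factor fit on known entries only.

    `entries` is a list of (row, col, value) triples; the rest of the
    high-dimensional matrix is treated as missing and never touched.
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(shape[0], rank))
    Q = rng.normal(scale=0.1, size=(shape[1], rank))
    for _ in range(epochs):
        for u, i, v in entries:
            err = v - P[u] @ Q[i]
            # Simultaneous update of both factor rows from the same error.
            P[u], Q[i] = P[u] + lr * err * Q[i], Q[i] + lr * err * P[u]
    return P, Q

entries = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0)]
P, Q = sgd_latent_factors(entries, shape=(2, 2))
# Reconstruction error on the known entries shrinks toward zero.
print(max(abs(v - P[u] @ Q[i]) for u, i, v in entries))
```

A second-order method would additionally use curvature information at each update, which is where the claimed advantage on this non-convex problem comes from.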
Working as aerial base stations, mobile robotic agents can be formed into a wireless robotic network to provide network services for on-ground mobile devices in a target area. However, a challenging issue is how to deploy these mobile robotic agents to provide network services of good quality to more users while considering the mobility of the on-ground devices. In this paper, to solve this issue, we decouple the coverage problem into the vertical dimension and the horizontal dimension without any loss of optimality and introduce a network coverage model with a maximum coverage range. Then, we propose a hybrid deployment algorithm based on an improved quick artificial bee colony algorithm. The algorithm is composed of a centralized deployment algorithm and a distributed one. The proposed deployment algorithm deploys a given number of mobile robotic agents to provide network services for on-ground devices that are independent and identically distributed. Simulation results demonstrate that the proposed algorithm deploys agents appropriately to cover more ground area and provide better coverage uniformity.