Timely and accurate traffic forecasting is essential for urban traffic control and guidance. Owing to the complexity and non-stationary variation of traffic data, traditional forecasting methods cannot meet the requirements of medium- and long-term forecasting tasks and often ignore the spatio-temporal dependencies of traffic flow. This paper adopts a new deep learning framework, the Dynamic Graph Convolution Network (DGCN), to solve time-series prediction in the traffic domain. Instead of conventional convolutional and recurrent units, we formulate the problem on graphs: the network introduces a latent network to extract spatio-temporal features, which are used to adaptively construct the dynamic road-network graph matrix. Experiments show that our model, DGCN, effectively captures comprehensive spatial-temporal correlations and consistently outperforms state-of-the-art baselines on several real-world traffic datasets.
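The abstract's central idea, adaptively constructing the road-network graph matrix rather than using a fixed adjacency, can be illustrated with a minimal numpy sketch. This is an illustrative assumption, not the paper's actual implementation: the function `adaptive_graph_conv` and the embedding matrices `E1`/`E2` are hypothetical names, and the construction shown (a row-normalized adjacency learned from node embeddings) is one common way such dynamic graphs are built.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_graph_conv(X, E1, E2, W):
    """One adaptive graph-convolution step (illustrative sketch).

    X  : (N, F)     node features, one row per road-network node
    E1 : (N, d)     learnable source node embeddings (hypothetical)
    E2 : (N, d)     learnable target node embeddings (hypothetical)
    W  : (F, F_out) feature projection weights
    """
    # Self-adaptive adjacency learned from node embeddings, so no
    # predefined road-network graph is required; rows sum to 1.
    A = softmax(np.maximum(E1 @ E2.T, 0.0), axis=-1)
    # Aggregate neighbour features under A, then project them.
    return A @ X @ W

rng = np.random.default_rng(0)
N, F, d, F_out = 5, 4, 3, 8
out = adaptive_graph_conv(rng.normal(size=(N, F)),
                          rng.normal(size=(N, d)),
                          rng.normal(size=(N, d)),
                          rng.normal(size=(F, F_out)))
print(out.shape)  # (5, 8)
```

In a full model this layer would be stacked with temporal convolution or attention over the input window; the sketch only shows how a graph matrix can be produced from learnable embeddings instead of a static road map.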
Acquiring labeled samples for hyperspectral imagery is typically time-consuming and labor-intensive, so improving classification accuracy with few labeled samples is one of the open challenges in hyperspectral image classification. Existing methods do not sufficiently mine the multi-scale information of the imagery, which leads to poor accuracy under small-sample conditions. To address this problem, this paper proposes an adaptive fusion method of global and local features for small-sample hyperspectral image classification. Built on a dynamic graph convolutional network and a depthwise separable convolutional network, the method mines the latent information of the imagery at the global and local scales respectively, making effective use of the labeled samples. A polarized self-attention mechanism is further introduced to enhance the network's feature representation while reducing information loss, and an adaptive feature-fusion mechanism fuses the global and local features. To verify the effectiveness of the method, classification experiments were conducted on four hyperspectral benchmark datasets: University of Pavia, Salinas, WHU-Hi-LongKou, and WHU-Hi-HanChuan. The results show that, compared with traditional classifiers and advanced deep learning models, the proposed method balances execution efficiency and classification accuracy and achieves superior performance under small-sample conditions. The overall accuracies on the four datasets are 99.01%, 99.42%, 99.18%, and 95.84%; the average accuracies are 99.31%, 99.65%, 98.89%, and 95.49%; and the Kappa coefficients are 98.69%, 99.35%, 98.93%, and 95.14%. The code is released at https://***/IceStreams/GLFAF.
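The adaptive fusion of global and local features described above can be sketched with a simple learned gate: a per-dimension weight in (0, 1) decides how much of each branch enters the fused representation. This is a minimal illustration under stated assumptions, not the paper's actual mechanism: the function `adaptive_fuse` and the parameters `Wg`, `Wl`, `b` are hypothetical, and a gated convex combination is only one common realization of adaptive feature fusion.

```python
import numpy as np

def sigmoid(x):
    """Elementwise logistic function, squashing to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fuse(g, l, Wg, Wl, b):
    """Gate-based adaptive fusion of global and local features (sketch).

    g, l   : (B, F) global / local feature vectors for a batch of pixels
    Wg, Wl : (F, F) gate weights (hypothetical learnable parameters)
    b      : (F,)   gate bias
    """
    # Fusion weight depends on both branches, so the trade-off between
    # global context and local detail is learned per feature dimension.
    alpha = sigmoid(g @ Wg + l @ Wl + b)
    # Convex combination: each fused value lies between the two inputs.
    return alpha * g + (1.0 - alpha) * l

rng = np.random.default_rng(1)
B, F = 2, 6
g = rng.normal(size=(B, F))
l = rng.normal(size=(B, F))
fused = adaptive_fuse(g, l,
                      rng.normal(size=(F, F)),
                      rng.normal(size=(F, F)),
                      np.zeros(F))
print(fused.shape)  # (2, 6)
```

Because the gate output is strictly between 0 and 1, the fused feature is always bounded by the global and local values elementwise, which keeps either branch from being discarded entirely.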