ISBN: (Print) 9798350345032
Remarkable progress has been achieved in generative modeling for time-series data with the introduction of Generative Adversarial Networks (GANs) [1]. GANs generate synthetic instances of data using two neural networks, a generator and a discriminator, trained against each other simultaneously [1]. The generator learns to produce fake data that the discriminator will classify as authentic, while the discriminator attempts to distinguish authentic data from generated data; over the course of training, the generator thus learns to produce realistic data. GANs have made remarkable progress in various tasks, such as the generation of time-series [4], images [5], and videos [3]. In particular, a significant amount of work has applied GANs based on Recurrent Neural Networks (RNNs) to time-series generation [4]. However, careful examination of the samples generated by these models shows that RNN-based GANs, such as LSTM GANs and gated recurrent GANs, cannot handle long sequences. Although RNN-based GANs can generate many realistic samples, they remain difficult to train due to exploding and vanishing gradients and mode collapse, which limits their generation capability. In addition, these RNN-based GANs are typically designed for regular time-series data and thus cannot properly preserve informative varying intervals, a major concern for generating time-series data. In this paper, we propose SparseGAN, a novel sparse self-attention-based GAN that allows for attention-driven, long-memory modeling for regular and irregular time-series generation through a learned embedding space. In this way, it can yield a more informative representation and capture long-range dependencies for time-series generation while using the original data for supervision. SparseGAN comprises two essential sub-networks: the Supervision Network and the Generation Network.
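The generator/discriminator game described above can be made concrete with the standard non-saturating GAN losses. The sketch below is illustrative only, not taken from the paper; `d_real` and `d_fake` are hypothetical names for the discriminator's probability outputs on a real sample and on a generated sample:

```python
import math

def discriminator_loss(d_real, d_fake):
    # Discriminator maximizes log D(x) + log(1 - D(G(z))),
    # i.e. it minimizes the negative of that sum.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # Non-saturating generator objective: minimize -log D(G(z)),
    # pushing the discriminator toward labeling fakes as authentic.
    return -math.log(d_fake)
```

The non-saturating form `-log D(G(z))` is commonly preferred over the original `log(1 - D(G(z)))` because it gives the generator stronger gradients early in training, when the discriminator easily rejects its samples.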
We propose GQFormer, a probabilistic time series forecasting method that models the quantile function of the forecast distribution. Our methodology is rooted in the Implicit Quantile modeling approach, where samples f...
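The excerpt ends before the methodology details, but models of the forecast distribution's quantile function are typically trained with the pinball (quantile) loss; whether GQFormer uses exactly this form is an assumption. A minimal sketch:

```python
def pinball_loss(y_true, y_pred, q):
    # Under-prediction (y_true > y_pred) is penalized with weight q,
    # over-prediction with weight (1 - q), so the expected-loss
    # minimizer is the q-quantile of the target distribution.
    error = y_true - y_pred
    return max(q * error, (q - 1.0) * error)
```

For q = 0.9, under-predicting by 2 costs `pinball_loss(10.0, 8.0, 0.9)` → 1.8, while over-predicting by the same amount costs only 0.2, which is what pulls the prediction toward the upper quantile.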
Used car pricing is a critical aspect of the automotive industry, influenced by many economic factors and market dynamics. With the recent surge in online marketplaces and increased demand for used cars, accurate pric...
详细信息
Click-Through Rate (CTR) prediction is a crucial task for online advertising and recommender systems. It has therefore gained considerable attention in the past few years, as it strongly affects the revenue of several commercial platforms and online systems. Recent research focuses on obtaining meaningful and powerful representations by mining low- and high-order feature interactions with components such as Deep Neural Networks (DNNs), CrossNets, or transformer blocks. However, models that use a single representation of the input fields in each instance limit the model's predictive power, while other models become overly complicated in pursuit of high input-data expressiveness and predictive power. In this work, we propose a simple yet effective Deep Multi-Representation model (DeepMR) that learns informative representations by jointly training a mixture of two powerful feature-representation learning components, namely DNNs and multi-head self-attention. Furthermore, DeepMR integrates the novel residual-with-zero-initialization (ReZero) connections into the DNN and multi-head self-attention components to learn superior input representations. Experiments on three real-world datasets show that the proposed model significantly outperforms all state-of-the-art models, with a relative improvement of up to 16.6% in the click-through rate prediction task. Our implementation code and datasets are available at https://***/Shereen-Elsayed/DeepMR.
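The ReZero connection mentioned above gates each sublayer with a learned scalar alpha initialized to zero, so every block starts as the exact identity and the network gradually "turns on" its sublayers during training. A minimal sketch of the idea in plain Python (not the DeepMR code; `sublayer_out` and `alpha` are illustrative names):

```python
def rezero_residual(x, sublayer_out, alpha):
    # ReZero connection: y = x + alpha * F(x), elementwise.
    # With alpha initialized to 0.0, the block is exactly the
    # identity mapping, which stabilizes early training of deep stacks.
    return [xi + alpha * fi for xi, fi in zip(x, sublayer_out)]
```

At initialization (`alpha = 0.0`) the output equals the input regardless of the sublayer; as training updates alpha away from zero, the DNN or self-attention sublayer's contribution is blended in.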
Probabilistic forecasting of irregularly sampled multivariate time series with missing values is crucial for decision making in various domains, including health care, astronomy, and climate. State-of-the-art methods ...
Hyperbolic deep learning has become a growing research direction in computer vision due to the unique properties afforded by the alternate embedding space. The negative curvature and exponentially growing distance met...
Time series forecasting attempts to predict future events by analyzing past trends and patterns. Although well researched, certain critical aspects pertaining to the use of deep learning in time series forecasting rem...
Sequential recommendation models have recently become a crucial component for next-item recommendation tasks in various online platforms due to their unrivaled ability to capture complex sequential patterns in histori...
Click-Through Rate prediction (CTR) is a crucial task in recommender systems, and it gained considerable attention in the past few years. The primary purpose of recent research emphasizes obtaining meaningful and powe...