Time series prediction is one of the most important applications of fuzzy cognitive maps (FCMs). In general, the state of an FCM in forecasting depends only on the state at the previous moment, but in fact it is also affected by past states. Hence, higher-order fuzzy cognitive maps (HFCMs), which take historical information into account, have been proposed and widely used for time series forecasting. However, using HFCMs to handle sparse and large-scale multivariate time series remains a challenge: large-scale data makes it difficult to determine the causal relationships between variables because of the increased number of nodes, so it is necessary to explore the relationships between concept nodes to guide large-scale HFCM learning. Therefore, a sparse large-scale HFCM learning algorithm guided by the Spearman correlation coefficient, called SG-HFCM, is proposed in this paper. The SG-HFCM model is specified as follows. First, solving the HFCM model is transformed into a regression problem, and an adaptive loss function is utilized to enhance the robustness of the model. Second, the l1-norm is used to improve the sparsity of the weight matrix. Third, in order to characterize the correlations between variables more accurately, the Spearman correlation coefficient is added as a regularization term to guide the learning of the weight matrix. When calculating the Spearman correlation coefficient, splitting the domain into intervals lets us better understand the characteristics of the data, obtain better correlations within the smaller intervals, and characterize the relationships between variables more accurately in order to guide the weight matrix. In addition, the ADMM method and quadratic programming are used to solve the algorithm and obtain better solutions for SG-HFCM, where quadratic programming ensures the range of the weights while obtaining the optimal solution. At last, we compare the proposed SG-HFCM method with ...
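As an illustration of the quantity this abstract uses as a guide (not the authors' code), the sketch below computes a Spearman rank-correlation matrix between time-series variables with NumPy; SG-HFCM adds such correlations as a regularization term for the weight matrix. The synthetic data and the simple rank computation (no tie averaging) are assumptions for demonstration only.

```python
import numpy as np

def spearman_matrix(X):
    """X: (T, N) array of T time steps for N concept nodes.
    Returns the (N, N) Spearman rank-correlation matrix
    (ties broken by order, which is fine for continuous series)."""
    # Rank each variable's observations over time, then compute
    # Pearson correlation on the centered ranks.
    ranks = X.argsort(axis=0).argsort(axis=0).astype(float)
    ranks -= ranks.mean(axis=0)
    cov = ranks.T @ ranks
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
X = np.column_stack([
    np.sin(t),                                   # node 1
    np.sin(t) + 0.1 * rng.normal(size=200),      # node 2: strongly related
    rng.normal(size=200),                        # node 3: unrelated
])
S = spearman_matrix(X)  # S[0, 1] is large, S[0, 2] is near zero
```

The paper additionally splits the domain into intervals and computes correlations per interval; the same function could simply be applied to each sub-range of rows.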
Primordial magnetic fields (PMFs) may significantly influence 21-cm physics via two mechanisms: (i) magnetic heating of the intergalactic medium (IGM) through ambipolar diffusion (AD) and decaying magnetohydrodynamic ...
At present, residual-based deep network models mainly adopt a multi-branch structure, but this structure increases the memory access cost, and its inference time is not as good as that of a single-branch network. ...
CoVaR is an important measure for assessing the systemic risk of a network composed of many systems. To optimize and control the systemic risk of the network, we need to know the sensitivity of CoVaR. In this paper, we derive closed-form expressions of the CoVaR sensitivities and design batched estimators using the infinitesimal perturbation analysis (IPA) and finite-difference methods. We establish the consistency and asymptotic normality of the proposed estimators and show that the convergence rate of the estimators is strictly slower than $n^{-1/6}$ . Numerical experiments show the effectiveness of our estimator and support the theoretical results.
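The paper's closed-form CoVaR sensitivities are not reproduced here, but the finite-difference idea it builds on can be sketched in a minimal form. The toy model below (a loss L(θ) = θ·Z with standard normal Z, plain VaR rather than CoVaR, and the step size h) is an assumption for illustration: a central finite-difference Monte Carlo estimator of a quantile sensitivity using common random numbers.

```python
import numpy as np

def var_estimate(theta, z, alpha=0.95):
    """Empirical VaR (alpha-quantile) of the loss L(theta) = theta * Z."""
    return np.quantile(theta * z, alpha)

def fd_sensitivity(theta, alpha=0.95, h=1e-2, n=100_000, seed=0):
    # Common random numbers: the same sample z is reused at theta - h
    # and theta + h, which greatly reduces the variance of the difference.
    z = np.random.default_rng(seed).normal(size=n)
    return (var_estimate(theta + h, z, alpha)
            - var_estimate(theta - h, z, alpha)) / (2 * h)

# For L = theta * Z with Z ~ N(0, 1), dVaR/dtheta = q_alpha(Z) ~= 1.6449
# at alpha = 0.95, independent of theta > 0.
est = fd_sensitivity(2.0)
```

A CoVaR sensitivity estimator replaces the unconditional quantile with a quantile conditional on another system sitting at its own VaR, which is what makes batching and the slower-than-n^{-1/6} rate analysis in the paper necessary.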
It is commonly believed that 6G will be established on ubiquitous artificial intelligence (AI). Federated learning is recognized as one of the vital solutions for 6G intelligent applications due to its distributed AI ...
Real-world data often have a long-tailed and open-ended distribution. A reliable practical machine learning system needs to learn from the majority classes and also generalize to minority classes. To achieve this, acknowledge ...
ISBN:
(print) 9798331314385
Latent-based image generative models, such as Latent Diffusion Models (LDMs) and Mask Image Models (MIMs), have achieved notable success in image generation tasks. These models typically leverage reconstructive autoencoders like VQGAN or VAE to encode pixels into a more compact latent space and learn the data distribution in the latent space instead of directly from pixels. However, this practice raises a pertinent question: Is it truly the optimal choice? In response, we begin with an intriguing observation: despite sharing the same latent space, autoregressive models significantly lag behind LDMs and MIMs in image generation. This finding contrasts sharply with the field of NLP, where the autoregressive model GPT has established a commanding presence. To address this discrepancy, we introduce a unified perspective on the relationship between latent space and generative models, emphasizing the stability of latent space in image generative modeling. Furthermore, we propose a simple but effective discrete image tokenizer that stabilizes the latent space for image generative modeling by applying K-Means to the latent features of self-supervised learning models. Experimental results show that image autoregressive modeling with our tokenizer (DiGIT) benefits both image understanding and image generation with the next-token-prediction principle, which is inherently straightforward for GPT models but challenging for other generative models. Remarkably, a GPT-style autoregressive model for images outperforms LDMs for the first time, and it also exhibits GPT-like substantial improvement when the model size is scaled up. Our findings underscore the potential of an optimized latent space and the integration of discrete tokenization in advancing the capabilities of image generative models. The code is available at https://***/DAMO-NLP-SG/DiGIT.
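The core tokenization step described above can be sketched without the surrounding model. The code below is an assumed, minimal version (not the DiGIT implementation): fit K-Means centroids on feature vectors standing in for self-supervised latents, then use the centroids as a codebook that maps each feature to a discrete token index. The synthetic two-cluster features and the evenly-spaced centroid initialization are simplifications for demonstration; real use would run k-means++ on actual SSL features.

```python
import numpy as np

def kmeans(X, k, iters=20):
    # Simplified init: evenly spaced samples (real code would use k-means++).
    step = len(X) // k
    centroids = X[::step][:k].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centroids[j] = X[assign == j].mean(0)
    return centroids

def tokenize(feats, codebook):
    """Map each feature vector to the index of its nearest centroid."""
    d = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(-3, 0.3, (50, 8)),   # stand-in for SSL latents
                        rng.normal(3, 0.3, (50, 8))])
codebook = kmeans(feats, k=2)
tokens = tokenize(feats, codebook)  # discrete token ids for a GPT-style model
```

The resulting integer sequence is what a GPT-style model can then fit with plain next-token prediction.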
In recent years, camera-based non-contact heart rate (HR) measurement technology has grown immensely. The system captures the reflection of light from the facial tissues and lead to the formation of a remote photoplet...
ISBN:
(digital) 9798331510831
ISBN:
(print) 9798331510848
Sequential DeepFake detection is an emerging task that predicts the manipulation sequence in order. Existing methods typically formulate it as an image-to-sequence problem, employing conventional Transformer architectures. However, these methods lack dedicated design and consequently achieve limited performance. As such, this paper describes a new Transformer design, called TSOM, by exploring three perspectives: Texture, Shape, and Order of Manipulations. Our method features four major improvements. First, we describe a new texture-aware branch that effectively captures subtle manipulation traces with a Diversiform Pixel Difference Attention module. Then we introduce a Multi-source Cross-attention module to seek deep correlations among spatial and sequential features, enabling effective modeling of complex manipulation traces. To further enhance the cross-attention, we describe a Shape-guided Gaussian mapping strategy that provides initial priors of the manipulation shape. Finally, observing that a subsequent manipulation in a sequence may influence the traces left by the preceding one, we invert the prediction order from forward to backward, leading to notable gains as expected. Extensive experimental results demonstrate that our method outperforms others by a large margin, highlighting its superiority.
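The "Shape-guided Gaussian mapping" idea can be illustrated with a minimal sketch, under assumed details (this is not the TSOM implementation, and the center/sigma parameterization is hypothetical): a Gaussian spatial map built from a rough estimate of the manipulated region, whose log can bias cross-attention logits toward that region as an initial prior.

```python
import numpy as np

def gaussian_prior(h, w, center, sigma):
    """Return an (h, w) map peaking at `center` and decaying with scale `sigma`."""
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

# Hypothetical shape estimate: manipulation centered at (3, 4) on an 8x8 grid.
prior = gaussian_prior(8, 8, center=(3, 4), sigma=2.0)
# np.log(prior + eps) could be added to attention logits as a soft spatial bias.
```

The appeal of such a prior is that it is differentiable and soft: it nudges attention toward the predicted shape without hard-masking the rest of the image.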
Serverless platforms typically adopt an early-binding approach for function sizing, requiring developers to specify an immutable size for each function within a workflow beforehand. Accounting for potential runtime va...