We investigate an inertial forward-backward algorithm for minimizing the sum of a non-smooth, possibly non-convex function and a non-convex differentiable function. The algorithm is formulated in the spirit of the famous FISTA method; however, the setting is non-convex and we allow different inertial terms. Moreover, the inertial parameters in our algorithm may also take negative values. We also treat the case where the non-smooth function is convex, and we show that a better step size can then be allowed. Further, we show that our numerical schemes can successfully be used in DC programming. We prove abstract convergence results which, applied to our numerical schemes, show that the generated sequences converge to a critical point of the objective function, provided a regularization of the objective function satisfies the Kurdyka-Lojasiewicz property. We also obtain a general result which, applied to our numerical schemes, yields convergence rates for the generated sequences and for the objective-function values, formulated in terms of the KL exponent of a regularization of the objective function. Finally, we apply our results to image restoration.
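The inertial forward-backward iteration described above can be sketched as follows for the special case where the non-smooth part is a convex l1 term, so the backward (proximal) step is soft-thresholding. The fixed inertial parameter `beta` and the toy objective 0.5||Ax-b||^2 + lam*||x||_1 are illustrative assumptions, not the paper's general scheme, which allows varying and even negative inertial parameters:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_forward_backward(A, b, lam, beta=0.3, n_iter=200):
    # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 with a fixed
    # inertial parameter beta (illustrative choice).
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    alpha = 1.0 / L                      # step size
    x_prev = np.zeros(A.shape[1])
    x = x_prev.copy()
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)      # inertial (extrapolation) step
        grad = A.T @ (A @ y - b)         # forward step on the smooth part
        x_prev, x = x, soft_threshold(y - alpha * grad, alpha * lam)  # backward step
    return x
```

Setting `beta = 0` recovers plain forward-backward splitting; FISTA corresponds to a particular increasing choice of the inertial sequence.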
ISBN (print): 9798400712005
Convolutional dictionary learning (CDL) is a widely used technique in computer vision for accurately capturing local features and texture information in signals. However, most existing CDL methods are based on batch processing, which requires computing over the entire dataset at once, resulting in a significant memory requirement that limits further development. To address this issue, researchers have turned to online convolutional dictionary learning (OCDL) in recent years. OCDL takes data as a stream and stores all historical information in a pair of fixed-size history arrays, which avoids the problem of memory requirements growing with the number of samples. However, current OCDL methods face two primary issues when handling large datasets or large dictionaries: the high time complexity of updating the history arrays or the dictionary, and the difficulty of selecting the hyperparameters linked to the ADMM algorithm. This paper proposes a slice-based approximate OCDL method, called SAOCDL, to address these problems. The algorithm uses a sparse approximation model in which historical data samples are approximated as the convolution sum of the best local dictionary for a single sample and the corresponding local sparse codes. The current sample, together with a pair of fixed-size history arrays that store this approximation, is used to update the best local dictionary. The resulting optimization problem is solved in the spatial domain by the proposed convergent inertial proximal-gradient algorithm, which combines dry friction with Hessian-driven damping. Extensive experiments on various benchmark datasets demonstrate that the proposed SAOCDL approach is highly competitive with state-of-the-art OCDL and CDL algorithms in terms of performance, efficiency, and memory usage.
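The fixed-size history-array idea that OCDL builds on can be illustrated with the classical (non-convolutional) online dictionary learning scheme in the style of Mairal et al., which is a simpler relative of the method above, not the paper's SAOCDL algorithm. Two arrays of fixed shape accumulate sufficient statistics of all past samples, so memory stays constant as the stream grows; `code_fn` and the function name are hypothetical placeholders:

```python
import numpy as np

def online_dictionary_step(D, A, B, x, code_fn, n_inner=5):
    # One streaming update: D is the (n x k) dictionary; A (k x k) and
    # B (n x k) are the fixed-size history arrays accumulating
    # sum(alpha alpha^T) and sum(x alpha^T) over all past samples.
    alpha = code_fn(D, x)                # sparse/least-squares code of the new sample
    A += np.outer(alpha, alpha)          # history array 1
    B += np.outer(x, alpha)              # history array 2
    # Block-coordinate dictionary update using only (A, B), not past samples.
    for _ in range(n_inner):
        for j in range(D.shape[1]):
            if A[j, j] > 1e-12:
                u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
                D[:, j] = u / max(1.0, np.linalg.norm(u))  # keep column norm <= 1
    return D, A, B
```

The convolutional setting replaces the outer products with convolution-structured statistics, which is precisely where the update cost the abstract mentions becomes a bottleneck.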