Strongly adaptive meta-algorithms (SA-meta) are popular in online portfolio selection due to their resilience in adversarial environments and adaptability to market changes. However, their application is often limited by high variance in errors, stemming from calculations over small intervals with limited observations. To address this limitation, we introduce the Strongly Adaptive Optimistic Follow-the-Regularized-Leader (SAOFTRL), an advanced framework that integrates the Optimistic Follow-the-Regularized-Leader (OFTRL) strategy into SA-meta algorithms to stabilize performance. SAOFTRL is distinguished by its novel regret bound, which provides a theoretical guarantee of worst-case performance in challenging scenarios. Additionally, we reimagine SAOFTRL within a mean-variance portfolio (MVP) framework, enhanced with shrinkage estimators and adaptive rolling windows, thereby ensuring reliable average-case performance. For practical deployment, we present an efficient SAOFTRL implementation utilizing the Successive Convex Approximation (SCA) method. Empirical evaluations demonstrate SAOFTRL's superior performance and expedited convergence when compared to existing benchmarks, confirming its effectiveness and efficiency in dynamic market conditions.
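To make the OFTRL building block concrete, below is a minimal sketch of a single optimistic FTRL step for portfolio selection. It assumes a negative-entropy regularizer on the probability simplex (which yields a closed-form exponential-weights update) and uses the most recent gradient as the optimistic hint; the function name, the learning rate `eta`, and the simulated data are illustrative choices, not the paper's actual SAOFTRL implementation.

```python
import numpy as np

def optimistic_ftrl_step(grad_sum, hint, eta):
    """One OFTRL update on the probability simplex with a negative-entropy
    regularizer, which admits the closed-form exponential-weights solution
    x_{t+1} proportional to exp(-eta * (G_t + m_{t+1})).

    grad_sum : cumulative loss gradients G_t observed so far
    hint     : optimistic guess m_{t+1} of the next gradient
               (a common choice is the most recent gradient)
    eta      : learning rate (illustrative; the paper tunes this differently)
    """
    z = -eta * (grad_sum + hint)
    z -= z.max()                      # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Toy usage: rebalance a 3-asset portfolio against the log-loss -log(x . r),
# whose gradient with respect to x is -r / (x . r).
rng = np.random.default_rng(0)
d, T = 3, 5
grad_sum = np.zeros(d)
last_grad = np.zeros(d)
for t in range(T):
    x = optimistic_ftrl_step(grad_sum, hint=last_grad, eta=0.5)
    price_rel = 1.0 + 0.02 * rng.standard_normal(d)  # simulated price relatives
    grad = -price_rel / (x @ price_rel)              # gradient of -log(x . r)
    grad_sum += grad
    last_grad = grad
print("final portfolio weights:", x)
```

When the hint is accurate (e.g., slowly varying gradients), the optimistic term cancels much of the incoming gradient, which is the stabilizing effect SAOFTRL exploits inside each SA-meta interval.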