Exploiting Curvature in Online Convex Optimization with Delayed Feedback
Conference Proceedings Contribution
Publication Date:
2025
Citation:
Exploiting Curvature in Online Convex Optimization with Delayed Feedback / H. Qiu, E. Esposito, M. Zhang (PROCEEDINGS OF MACHINE LEARNING RESEARCH). - In: International Conference on Machine Learning / [edited by] A. Singh, M. Fazel, D. Hsu, S. Lacoste-Julien, F. Berkenkamp, T. Maharaj, K. Wagstaff, J. Zhu. - [s.l.] : PMLR, 2025. - pp. 50448-50479 (42nd International Conference on Machine Learning, Vancouver, 2025).
Abstract:
In this work, we study the online convex optimization problem with curved losses and delayed feedback. When losses are strongly convex, existing approaches obtain regret bounds of order $d_{\max} \ln T$, where $d_{\max}$ is the maximum delay and $T$ is the time horizon. However, in many cases, this guarantee can be much worse than the $\sqrt{d_{\mathrm{tot}}}$ obtained by a delayed version of online gradient descent, where $d_{\mathrm{tot}}$ is the total delay. We bridge this gap by proposing a variant of follow-the-regularized-leader that obtains regret of order $\min\{\sigma_{\max}\ln T, \sqrt{d_{\mathrm{tot}}}\}$, where $\sigma_{\max}$ is the maximum number of missing observations. We then consider exp-concave losses and extend the Online Newton Step algorithm to handle delays with adaptive learning-rate tuning, achieving regret $\min\{d_{\max} n\ln T, \sqrt{d_{\mathrm{tot}}}\}$, where $n$ is the dimension. To our knowledge, this is the first algorithm to achieve such a regret bound for exp-concave losses. We further consider the problem of unconstrained online linear regression and achieve a similar guarantee by designing a variant of the Vovk-Azoury-Warmuth forecaster with a clipping trick. Finally, we implement our algorithms and conduct experiments under various types of delay and losses, showing improved performance over existing methods.
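For context, the $\sqrt{d_{\mathrm{tot}}}$ baseline mentioned in the abstract refers to online gradient descent where each round's gradient only becomes available after its delay has elapsed. The sketch below is an illustrative, minimal implementation of that standard delayed-OGD scheme (not of the paper's algorithms); all names and the fixed step size `eta` are assumptions for illustration.

```python
import numpy as np

def delayed_ogd(grads_by_round, delays, dim, eta=0.1, radius=1.0):
    """Online gradient descent with delayed feedback (illustrative sketch).

    grads_by_round[t] is a callable returning the gradient of the round-t
    loss at a given point; that gradient only arrives at round t + delays[t].
    Iterates are kept in the Euclidean ball of the given radius.
    """
    T = len(delays)
    x = np.zeros(dim)
    iterates = []
    # arrival[s] lists the rounds whose gradients become available at round s
    arrival = [[] for _ in range(T + 1)]
    for t, d in enumerate(delays):
        arrival[min(t + d, T)].append(t)
    for s in range(T):
        iterates.append(x.copy())
        for t in arrival[s]:
            # gradient is evaluated at the iterate played in round t,
            # but only applied now, after the delay
            g = grads_by_round[t](iterates[t])
            x = x - eta * g
            # project back onto the feasible ball
            norm = np.linalg.norm(x)
            if norm > radius:
                x = x * (radius / norm)
    return iterates
```

For example, with quadratic losses $f_t(x) = \lVert x - c_t \rVert^2$ and a constant delay of one round, the iterates drift toward the common minimizer despite each gradient arriving late.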
IRIS Type:
03 - Contribution in a volume
Keywords:
online learning; delayed feedback; curved losses
List of Authors:
H. Qiu, E. Esposito, M. Zhang
Link to the Complete Record:
Link to the Full Text:
Book Title:
International Conference on Machine Learning