Monday, August 14, 2023

On Provably Faster Gradient Descent via Long Steps

I have not read this paper yet, but it may well trigger a new wave of research on further optimizing and accelerating the training of machine learning and AI models.

From the abstract:
"This work establishes provably faster convergence rates for gradient descent in smooth convex optimization via a computer-assisted analysis technique. Our theory allows nonconstant stepsize policies with frequent long steps potentially violating descent by analyzing the overall effect of many iterations at once rather than the typical one-iteration inductions used in most first-order method analyses. We show that long steps, which may increase the objective value in the short term, lead to provably faster convergence in the long term. A conjecture towards proving a faster O(1/TlogT) rate for gradient descent is also motivated along with simple numerical validation."

[2307.06324] Provably Faster Gradient Descent via Long Steps
