Gadat, Sébastien and Gavra, Ioana (2022) Asymptotic study of stochastic adaptive algorithms in non-convex landscape. Journal of Machine Learning Research, vol. 23 (art. 228). pp. 1-54.

Text: wp_tse_1175.pdf (646kB)

Abstract

This paper studies asymptotic properties of adaptive algorithms widely used in optimization and machine learning, among them Adagrad and RMSProp, which are involved in most blackbox deep learning algorithms. We adopt a non-convex landscape optimization point of view, consider a one-time-scale parametrization, and cover the situations where these algorithms are used with or without mini-batches. From the standpoint of stochastic algorithms, we establish the almost sure convergence of these methods towards the set of critical points of the target function when a decreasing step-size is used. With a mild extra assumption on the noise, we also obtain convergence towards the set of minimizers of the function. Along the way, we also obtain a "convergence rate" of the methods, in the vein of the works of [GL13].
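As a concrete illustration of the class of methods studied, the sketch below runs a generic Adagrad-style update with a decreasing step-size on a noisy non-convex problem. It is a minimal sketch under stated assumptions: the schedule gamma_n = gamma0 / sqrt(n + 1), the test function, and the Gaussian gradient noise are illustrative choices of ours, not the paper's exact parametrization, and no mini-batching is used.

import numpy as np

def adagrad(grad_f, x0, n_steps=5000, gamma0=0.5, eps=1e-8):
    """Adagrad-style update with decreasing step-size gamma0 / sqrt(n + 1)."""
    x = np.array(x0, dtype=float)
    G = np.zeros_like(x)                    # running sum of squared gradients
    for n in range(n_steps):
        g = grad_f(x)                       # noisy gradient sample
        G += g ** 2                         # coordinate-wise accumulation
        gamma_n = gamma0 / np.sqrt(n + 1)   # decreasing step-size
        x -= gamma_n * g / (np.sqrt(G) + eps)
    return x

# Illustrative non-convex target f(x) = (x^2 - 1)^2 with additive Gaussian
# gradient noise; the iterates should approach a critical point ({-1, 0, 1}).
rng = np.random.default_rng(0)
noisy_grad = lambda x: 4 * x * (x ** 2 - 1) + 0.1 * rng.standard_normal(x.shape)
print(adagrad(noisy_grad, x0=[2.0]))

Dividing each step by sqrt(G) rescales every coordinate by its accumulated gradient energy, which is what makes the method "adaptive"; the paper's almost sure convergence results concern updates of this type under decreasing step-sizes.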

Item Type: Article
Language: English
Date: August 2022
Refereed: Yes
Uncontrolled Keywords: Stochastic optimization, Stochastic adaptive algorithm, Convergence of random variables
Subjects: B- ECONOMICS AND FINANCE
Divisions: TSE-R (Toulouse)
Site: UT1
Date Deposited: 17 Nov 2022 09:40
Last Modified: 17 Nov 2022 09:40
OAI Identifier: oai:tse-fr.eu:127256
URI: https://publications.ut-capitole.fr/id/eprint/46256
