
Steplength and Mini-batch Size Selection in Stochastic Gradient Methods

G. Franchini; V. Ruggiero
2020

Abstract

Steplength selection is a crucial issue for the effectiveness of stochastic gradient methods on the large-scale optimization problems arising in machine learning. In a recent paper, Bollapragada et al. [1] propose to include an adaptive subsampling strategy in a stochastic gradient scheme. We propose to combine this approach with a steplength selection rule borrowed from the full-gradient scheme known as the Limited Memory Steepest Descent (LMSD) method [4] and suitably tailored to the stochastic framework. This strategy, based on the Ritz-like values of a suitable matrix, provides a local estimate of the Lipschitz constant of the gradient of the objective function without introducing line-search techniques, while a possible increase of the subsample size used to compute the stochastic gradient keeps the variance of this direction under control. Extensive numerical experiments on convex and non-convex loss functions show that the new rule makes parameter tuning less expensive than selecting a suitable constant steplength in standard and mini-batch stochastic gradient methods. The proposed procedure has also been compared with the Momentum and ADAM methods.
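The mechanism summarized above can be illustrated schematically. The following Python snippet is a minimal toy sketch, not the code or the exact procedure of the paper: it runs mini-batch SGD on an assumed least-squares problem and, every m iterations, refreshes the stack of steplengths with the reciprocals of Ritz-like values computed from the m most recent stochastic gradients, in the spirit of the LMSD method [4]. The problem data, the memory size m, the mini-batch size, the ordering of the steplengths within a sweep and the safeguarding interval are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: least-squares loss f(w) = (1/2N) * ||X w - y||^2 (illustrative assumption).
N, d = 2000, 20
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(N)

def stochastic_grad(w, batch):
    # Mini-batch estimate of the gradient of the least-squares loss.
    Xb, yb = X[batch], y[batch]
    return Xb.T @ (Xb @ w - yb) / len(batch)

def ritz_steplengths(G, g_new, used_alphas, m):
    # Ritz-like values: eigenvalues of T = [R, r] J R^{-1}, where G = QR collects
    # the m most recent stochastic gradients and J is built from the steplengths
    # that produced them (LMSD construction, applied here to noisy gradients).
    J = np.zeros((m + 1, m))
    for j in range(m):
        J[j, j] = 1.0 / used_alphas[j]
        J[j + 1, j] = -1.0 / used_alphas[j]
    Q, R = np.linalg.qr(G)              # thin QR factorization of the gradient matrix
    r = Q.T @ g_new
    T = np.column_stack([R, r]) @ J @ np.linalg.inv(R)
    theta = np.linalg.eigvals(T).real   # with noise these are only curvature estimates
    theta = theta[theta > 1e-12]        # discard non-positive (unusable) values
    if theta.size == 0:
        return np.full(m, used_alphas[-1])   # fallback: keep the last steplength
    steps = np.sort(1.0 / theta)             # reciprocals of the Ritz-like values
    steps = np.clip(steps, 1e-3, 0.5)        # safeguarding interval (illustrative choice)
    return np.resize(steps, m)

# Mini-batch gradient loop: every m iterations the stack of steplengths is refreshed.
w = np.zeros(d)
m, batch_size = 3, 32
alphas = [0.1] * m                      # initial steplengths (illustrative choice)
grads, used = [], []
for k in range(300):
    batch = rng.choice(N, size=batch_size, replace=False)
    g = stochastic_grad(w, batch)
    if len(grads) == m:                 # enough history: compute a new sweep of steplengths
        alphas = list(ritz_steplengths(np.column_stack(grads), g, used, m))
        grads, used = [], []
    alpha = alphas[len(grads)]
    w -= alpha * g
    grads.append(g)
    used.append(alpha)

print("final loss:", 0.5 * np.mean((X @ w - y) ** 2))

In the deterministic quadratic case the eigenvalues of T are exactly the Ritz values of the Hessian; with noisy mini-batch gradients they are only rough local curvature estimates, which is why a safeguarding interval (and, in the paper, the adaptive growth of the subsample size) is needed.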
ISBN: 9783030645793
Keywords: Stochastic gradient methods; Learning rate selection rule; Adaptive subsampling strategies; Variance reduction techniques
Files in this product:

LOD_short_paper.pdf (restricted: archive managers only)
  Description: file
  Type: Pre-print
  License: NOT PUBLIC - Private/restricted access
  Size: 279.79 kB
  Format: Adobe PDF

Franchini2020_Chapter_SteplengthAndMini-batchSizeSel.pdf (restricted: archive managers only)
  Description: Editorial full text
  Type: Full text (published version)
  License: NOT PUBLIC - Private/restricted access
  Size: 285.69 kB
  Format: Adobe PDF

Documents in SFERA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2457881
Citations
  • PubMed Central: n/a
  • Scopus: 8
  • Web of Science: n/a