
A stochastic gradient method with variance control and variable learning rate for Deep Learning

Franchini, G.; Porta, F.; Ruggiero, V.; Trombini, I.; Zanni, L.
2024

Abstract

In this paper we study a stochastic gradient algorithm that increases the mini-batch size according to a predefined schedule and automatically adjusts the learning rate by means of a monotone or non-monotone line search procedure. The mini-batch size is incremented at a suitable a priori rate throughout the iterative process, so that the variance of the stochastic gradients is progressively reduced. The a priori rate is not subject to restrictive assumptions, allowing for the possibility of a slow increase in the mini-batch size. On the other hand, the learning rate can vary non-monotonically throughout the iterations, as long as it remains appropriately bounded. Convergence results for the proposed method are provided for both convex and non-convex objective functions. Moreover, the algorithm can be shown to enjoy a global linear rate of convergence on strongly convex functions. The low per-iteration cost, the limited memory requirements and the robustness with respect to the hyperparameter setting make the suggested approach well suited for implementation within the deep learning framework, including on GPGPU-equipped architectures. Numerical results on training deep neural networks for multiclass image classification show promising behaviour of the proposed scheme with respect to similar state-of-the-art competitors.
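
To make the scheme described in the abstract concrete, the following is a minimal sketch of one plausible realization: stochastic gradient descent on a toy least-squares problem, with the mini-batch size grown at a predefined geometric rate and the learning rate chosen by a non-monotone backtracking (Armijo-type) line search. All names (grow_batch, nonmonotone_armijo), constants, and the toy problem are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' code) of SGD with a predefined
# mini-batch growth schedule and a non-monotone Armijo-type line search.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: least-squares loss f(w) = 0.5/n * ||A w - y||^2 (assumption).
n, d = 2000, 20
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = A @ w_true + 0.1 * rng.standard_normal(n)

def loss_grad(w, idx):
    """Mini-batch loss and gradient on the samples indexed by idx."""
    r = A[idx] @ w - y[idx]
    return 0.5 * np.mean(r ** 2), A[idx].T @ r / len(idx)

def grow_batch(k, b0=16, rate=1.05, b_max=n):
    """Predefined (a priori) mini-batch schedule: slow geometric growth."""
    return min(int(np.ceil(b0 * rate ** k)), b_max)

def nonmonotone_armijo(w, g, f_ref, idx, alpha0=1.0, beta=0.5, c=1e-4, max_bt=20):
    """Backtracking line search: accept alpha when the sampled loss falls
    sufficiently below a reference value f_ref (max of recent losses)."""
    alpha = alpha0
    for _ in range(max_bt):
        f_new, _ = loss_grad(w - alpha * g, idx)
        if f_new <= f_ref - c * alpha * np.dot(g, g):
            return alpha
        alpha *= beta
    return alpha

w = np.zeros(d)
history = []  # recent mini-batch losses, used as the non-monotone reference
for k in range(200):
    b = grow_batch(k)
    idx = rng.choice(n, size=b, replace=False)
    f, g = loss_grad(w, idx)
    history = (history + [f])[-10:]          # keep the last 10 losses
    alpha = nonmonotone_armijo(w, g, max(history), idx)
    w = w - alpha * g

print("final full-sample loss:", 0.5 * np.mean((A @ w - y) ** 2))
```

Keeping the reference value as the maximum over a short window of recent losses is what makes the line search non-monotone: the sampled loss is allowed to increase occasionally, while the step size stays bounded by the backtracking procedure.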
Files for this item:
There are no files associated with this item.

Documents in SFERA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11392/2557052
Warning: the displayed data have not been validated by the university.

Citations
  • PubMed Central: n/a
  • Scopus: 0
  • Web of Science: 0