Parallel block coordinate minimization with application to group regularized regression / Calafiore, Giuseppe Carlo. - In: OPTIMIZATION AND ENGINEERING. - ISSN 1389-4420. - PRINT. - (2016), pp. 1-24. [10.1007/s11081-016-9336-z]
Parallel block coordinate minimization with application to group regularized regression
CALAFIORE, Giuseppe Carlo
2016
Abstract
This paper proposes a method for parallel block coordinate-wise minimization of convex functions. Each iteration involves a first phase where n independent minimizations are performed over the n variable blocks, followed by a phase where the results of the first phase are coordinated to obtain the whole variable update. Convergence of the method to the global optimum is proved for functions composed of a smooth part plus a possibly non-smooth but separable term. The method is also proved to have a linear rate of convergence for functions that are smooth and strongly convex. The proposed algorithm can give a computational advantage over the more standard serial block coordinate-wise minimization methods when run on a parallel, multi-worker computing architecture. The method is suitable for regularized regression problems, such as the group Lasso, group Ridge regression, and group Elastic Net. Numerical tests are run on these types of regression problems to exemplify the performance of the proposed method.

File: parallel_block_descent_5.pdf
Open Access since 30/09/2017
Type: 2. Post-print / Author's Accepted Manuscript
License: PUBLIC - All rights reserved
Size: 581.78 kB
Format: Adobe PDF
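The abstract describes a two-phase iteration: n independent block minimizations followed by a coordination step that combines them into the full update. The paper's exact update rules are not reproduced here; the following is a minimal sketch for a hypothetical group-Lasso instance, assuming per-block proximal-gradient steps as a stand-in for the exact block minimizations and a simple 1/n convex combination as the coordination phase. The function names (`parallel_bcd_group_lasso`, `group_soft_threshold`) are illustrative, not from the paper.

```python
import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: the proximal operator of t * ||.||_2."""
    nrm = np.linalg.norm(v)
    if nrm <= t:
        return np.zeros_like(v)
    return (1.0 - t / nrm) * v

def parallel_bcd_group_lasso(A, b, lam, n_blocks, iters=200):
    """Sketch of a two-phase parallel block update for
    min_x 0.5*||A x - b||^2 + lam * sum_i ||x_i||_2."""
    m, d = A.shape
    blocks = np.array_split(np.arange(d), n_blocks)
    x = np.zeros(d)
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    for _ in range(iters):
        r = A @ x - b              # residual at the shared current iterate
        proposals = []
        # Phase 1: independent block updates, all from the same current point
        # (these n small problems are what a multi-worker setup would parallelize)
        for idx in blocks:
            g = A[:, idx].T @ r    # partial gradient for this block
            z = group_soft_threshold(x[idx] - g / L, lam / L)
            proposals.append((idx, z))
        # Phase 2: coordination -- convex combination with weight 1/n,
        # which keeps the combined step safe for the joint objective
        x_new = x.copy()
        for idx, z in proposals:
            x_new[idx] += (z - x[idx]) / n_blocks
        x = x_new
    return x
```

The 1/n averaging in phase 2 is a conservative coordination choice; the paper's actual coordination phase may use a different (and less damped) combination rule.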
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2651765
Warning: the data displayed have not been validated by the university.