Abstract
We derive a stochastic gradient algorithm for semidefinite optimization using randomization techniques. The algorithm uses subsampling to reduce the computational cost of each iteration, and the subsampling ratio explicitly controls granularity, i.e., the tradeoff between cost per iteration and total number of iterations. Furthermore, the total computational cost is directly proportional to the complexity (i.e., the rank) of the solution. We study numerical performance on some large-scale problems arising in statistical learning.
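To illustrate the subsampling idea the abstract refers to, the sketch below builds a sparse unbiased estimator of a symmetric matrix: each entry is kept with probability p and rescaled by 1/p, so the estimator equals the original matrix in expectation while each draw touches only a fraction of the entries. This is a generic illustration of entrywise subsampling, not the paper's specific algorithm; the function names and the choice of p are ours.

```python
import numpy as np

def subsample(A, p, rng):
    """Sparse unbiased estimator of A: keep each entry with probability p,
    rescale survivors by 1/p so that E[subsample(A, p)] = A."""
    mask = rng.random(A.shape) < p
    return np.where(mask, A / p, 0.0)

rng = np.random.default_rng(0)

# A symmetric test matrix (stand-in for an SDP problem matrix).
n = 20
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

# Averaging many subsampled draws recovers A, confirming unbiasedness;
# a stochastic gradient method would instead use one cheap draw per step.
p = 0.3
est = np.mean([subsample(A, p, rng) for _ in range(2000)], axis=0)
rel_err = np.linalg.norm(est - A) / np.linalg.norm(A)
```

The subsampling ratio p plays the granularity role described above: smaller p makes each draw (and hence each iteration) cheaper but noisier, so more iterations are needed to average the noise away.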
Citation
Alexandre d’Aspremont. "Subsampling algorithms for semidefinite programming." Stoch. Syst. 1 (2) 274 - 305, 2011. https://doi.org/10.1214/10-SSY018