# Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization

```bibtex
@article{Patrascu2017NonasymptoticCO,
  title   = {Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization},
  author  = {Andrei Patrascu and Ion Necoara},
  journal = {arXiv: Optimization and Control},
  year    = {2017}
}
```

A very popular approach for solving stochastic optimization problems is the stochastic gradient descent (SGD) method. Although the SGD iteration is computationally cheap and its practical performance may be satisfactory under certain circumstances, there is recent evidence of convergence difficulties and instability under inappropriate parameter choices. To avoid these drawbacks of the SGD scheme, stochastic proximal point algorithms have been recently…
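The stability contrast between the two schemes can be sketched on a toy least-squares problem (this is an illustrative example, not the paper's algorithm or experiments). For a sample loss f_i(x) = (aᵢᵀx − bᵢ)²/2, the exact stochastic proximal point step argmin_z f_i(z) + ‖z − x‖²/(2η) has a closed form; with a step size far too large for SGD, the proximal update remains stable:

```python
import numpy as np

np.seterr(over="ignore", invalid="ignore")  # let the divergent SGD run overflow quietly

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true  # consistent system, so both methods target x_true

def sgd_step(x, a, bi, eta):
    # plain stochastic gradient step on the sample loss f_i(x) = 0.5 * (a @ x - bi)**2
    return x - eta * (a @ x - bi) * a

def spp_step(x, a, bi, eta):
    # exact stochastic proximal point step:
    #   argmin_z f_i(z) + ||z - x||^2 / (2 * eta)
    # which has a closed form for the quadratic sample loss
    return x - eta * (a @ x - bi) / (1.0 + eta * (a @ a)) * a

eta = 5.0  # deliberately too large for plain SGD on this problem
x_sgd = np.zeros(d)
x_spp = np.zeros(d)
for _ in range(2000):
    i = rng.integers(n)
    x_sgd = sgd_step(x_sgd, A[i], b[i], eta)
    x_spp = spp_step(x_spp, A[i], b[i], eta)

print("SGD error:", np.linalg.norm(x_sgd - x_true))  # diverges
print("SPP error:", np.linalg.norm(x_spp - x_true))  # small
```

The damping factor 1/(1 + η‖aᵢ‖²) in the proximal step is what keeps the iteration bounded for any η > 0, whereas the SGD multiplier (1 − η‖aᵢ‖²) exceeds 1 in magnitude for large η.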

#### 19 Citations

Convergence analysis of stochastic higher-order majorization-minimization algorithms

- Mathematics
- 2021

Majorization-minimization algorithms consist of successively minimizing a sequence of upper bounds of the objective function so that along the iterations the objective function decreases. Such a…

General Convergence Analysis of Stochastic First-Order Methods for Composite Optimization

- Mathematics, Computer Science
- J. Optim. Theory Appl.
- 2021

Stochastic composite convex optimization problems with the objective function satisfying a stochastic bounded gradient condition, with or without a quadratic functional growth property, are considered, covering a large class of objective functions.

Random gradient algorithms for convex minimization over intersection of simple sets

- Computer Science
- 2019 18th European Control Conference (ECC)
- 2019

This paper considers constrained convex optimization problems, where the constraints are described as the intersection of a finite number of convex sets, and derives gradient methods with random projections that can easily address streaming settings and optimization problems having a very large number of given constraints.

Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity

- Computer Science, Mathematics
- SIAM J. Optim.
- 2019

We develop model-based methods for solving stochastic convex optimization problems, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal…

Randomized projection methods for convex feasibility problems: conditioning and convergence rates

- Mathematics
- 2018

Finding a point in the intersection of a collection of closed convex sets, that is, the convex feasibility problem, represents the main modeling strategy for many computational problems. In this paper…

Minibatch stochastic subgradient-based projection algorithms for solving convex inequalities

- Mathematics
- 2019

This paper deals with the convex feasibility problem, where the feasible set is given as the intersection of a (possibly infinite) number of closed convex sets. We assume that each set is specified…

Linear convergence of dual coordinate descent on non-polyhedral convex problems

- Mathematics
- 2019

This paper deals with constrained convex problems, where the objective function is smooth strongly convex and the feasible set is given as the intersection of a large number of closed convex…

Random minibatch projection algorithms for convex feasibility problems

- Computer Science
- 2019 IEEE 58th Conference on Decision and Control (CDC)
- 2019

This work is the first to prove that random minibatch subgradient-based projection updates have better complexity than their single-sample variants; it derives asymptotic convergence results and proves a linear convergence rate.

Stable Robbins-Monro approximations through stochastic proximal updates

- Mathematics
- 2015

The need for parameter estimation with massive data has reinvigorated interest in iterative estimation procedures. Stochastic approximations, such as stochastic gradient descent, are at the forefront…

OR-SAGA: Over-relaxed stochastic average gradient mapping algorithms for finite sum minimization

- Mathematics, Computer Science
- 2018 European Control Conference (ECC)
- 2018

In this paper we derive a family of stochastic average gradient methods for solving unconstrained convex problems with the objective function expressed as a finite sum, that uses a proximal operator…

#### References

Showing 1–10 of 31 references.

Regularized Iterative Stochastic Approximation Methods for Stochastic Variational Inequality Problems

- Computer Science, Mathematics
- IEEE Transactions on Automatic Control
- 2013

This work introduces two classes of stochastic approximation methods, each of which requires exactly one projection step at every iteration, and provides convergence analysis for each of them.

Convergence of Stochastic Proximal Gradient Algorithm

- Mathematics
- Applied Mathematics & Optimization
- 2019

We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, where a convex objective function is given by the…

RES: Regularized Stochastic BFGS Algorithm

- Mathematics, Computer Science
- IEEE Transactions on Signal Processing
- 2014

Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee almost sure convergence of a subsequence generated by RES and convergence of the sequence in expectation to optimal arguments.

Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning

- Computer Science, Mathematics
- NIPS
- 2011

This work provides a non-asymptotic analysis of the convergence of two well-known algorithms: stochastic gradient descent and a simple modification in which iterates are averaged. The analysis suggests that a learning rate proportional to the inverse of the number of iterations, while leading to the optimal convergence rate, is not robust to the lack of strong convexity or to the setting of the proportionality constant.

Stochastic Three-Composite Convex Minimization

- Computer Science, Mathematics
- NIPS
- 2016

This work proves the convergence characterization of the proposed algorithm in expectation under the standard assumptions for the stochastic gradient estimate of the smooth term of the sum of three convex functions.

Random algorithms for convex minimization problems

- Computer Science, Mathematics
- Math. Program.
- 2011

This paper deals with iterative gradient and subgradient methods with random feasibility steps for solving constrained convex minimization problems, where the constraint set is specified as the…

Robust Stochastic Approximation Approach to Stochastic Programming

- Mathematics, Computer Science
- SIAM J. Optim.
- 2009

It is demonstrated that a properly modified SA approach can be competitive with, and even significantly outperform, the SAA method for a certain class of convex stochastic problems.

Randomized projection methods for convex feasibility problems: conditioning and convergence rates

- Mathematics
- 2018

Finding a point in the intersection of a collection of closed convex sets, that is, the convex feasibility problem, represents the main modeling strategy for many computational problems. In this paper…

Towards Stability and Optimality in Stochastic Gradient Descent

- Mathematics, Computer Science
- AISTATS
- 2016

A new iterative procedure termed averaged implicit SGD (AI-SGD) employs an implicit update at each iteration, related to proximal operators in optimization, and achieves performance competitive with other state-of-the-art procedures.

Linear convergence of first order methods for non-strongly convex optimization

- Computer Science, Mathematics
- Math. Program.
- 2019

This paper derives linear convergence rates of several first-order methods for solving smooth non-strongly convex constrained optimization problems, i.e., involving an objective function with a Lipschitz continuous gradient that satisfies some relaxed strong convexity condition.