1. Introduction


Much like Newton's method is a standard tool for solving unconstrained smooth minimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but they turn out to be especially well suited to problems of recent and widespread interest involving large or high-dimensional datasets.

Proximal methods sit at a higher level of abstraction than classical optimization algorithms like Newton's method. In the latter, the base operations are low-level, consisting of linear algebra operations and the computation of gradients and Hessians. In proximal algorithms, the base operation is evaluating the proximal operator of a function, which involves solving a small convex optimization problem. These subproblems can be solved with standard methods, but they often admit closed-form solutions or can be solved very quickly with simple specialized methods. We will also see that proximal operators and proximal algorithms have a number of interesting interpretations and are connected to many different topics in optimization and applied mathematics.

2. Algorithms


Consider the following convex optimization problem

$$\min_{x}f(x)+g(x)$$

where $f$ is smooth and $g:\mathbb{R}^n\rightarrow \mathbb{R}\cup \{+\infty\}$ is closed, proper, and convex.

There are several proximal methods that can be used to solve this problem.

  • Proximal Gradient Method

$$x^{k+1}:=prox_{\lambda^k g}(x^k-\lambda^k \nabla f(x^k))$$

which converges with rate $O(1/k)$ when $\nabla f$ is Lipschitz continuous with constant $L$ and the step sizes are $\lambda^k=\lambda\in(0,1/L]$. If $L$ is not known, we can use a backtracking line search: starting from $\lambda:=\lambda^{k-1}$, repeatedly compute $z:=prox_{\lambda g}(x^k-\lambda\nabla f(x^k))$ and shrink $\lambda:=\beta\lambda$ until $f(z)\le\hat{f}_{\lambda}(z,x^k)$, then set $x^{k+1}:=z$.

A typical value of $\beta$ is $1/2$, and

$$\hat{f}_{\lambda}(x,y)=f(y)+\nabla f(y)^T(x-y)+(1/2\lambda)||x-y||_{2}^2$$
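As a concrete illustration, here is a minimal NumPy sketch of the proximal gradient method with this backtracking line search. The function and argument names (`prox_gradient`, `prox_g`, `lam0`, and so on) are illustrative choices of ours, not from the source; the caller is assumed to supply `f`, its gradient, and the proximal operator of `g`.

```python
import numpy as np

def prox_gradient(f, grad_f, prox_g, x0, lam0=1.0, beta=0.5, max_iter=500, tol=1e-8):
    """Proximal gradient method with backtracking line search (sketch).

    f, grad_f : the smooth part and its gradient
    prox_g    : prox_g(v, lam) should evaluate prox_{lam * g}(v)
    """
    x = np.asarray(x0, dtype=float)
    lam = lam0
    for _ in range(max_iter):
        g = grad_f(x)
        fx = f(x)
        # Backtracking: shrink lam until f(z) <= f_hat_lam(z, x).
        while True:
            z = prox_g(x - lam * g, lam)
            d = z - x
            if f(z) <= fx + g @ d + (d @ d) / (2.0 * lam):
                break
            lam *= beta
        if np.linalg.norm(z - x) <= tol:
            return z
        x = z
    return x
```

For example, with $g=\gamma||\cdot||_{1}$ one would pass a soft-thresholding function as `prox_g` (see the worked example in the next section).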

  • Accelerated Proximal Gradient Method

$$y^{k+1}:=x^k+\omega^k (x^k-x^{k-1})$$

$$x^{k+1}:=prox_{\lambda^kg}(y^{k+1}-\lambda^k \nabla f(y^{k+1}))$$

which works for $\omega^k=k/(k+3)$ and a line search similar to the one above.

This method has a faster $O(1/k^2)$ convergence rate; it originated with Nesterov (1983).
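Below is a similarly minimal sketch of the accelerated variant with a fixed step size $\lambda\in(0,1/L]$. Again, the names (`accel_prox_gradient`, `prox_g`, `lam`) are our own illustrative choices.

```python
import numpy as np

def accel_prox_gradient(grad_f, prox_g, x0, lam, max_iter=500, tol=1e-8):
    """Accelerated proximal gradient sketch with fixed step size lam in (0, 1/L]."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for k in range(1, max_iter + 1):
        omega = k / (k + 3.0)              # momentum weight omega^k = k / (k + 3)
        y = x + omega * (x - x_prev)       # extrapolation step
        x_prev, x = x, prox_g(y - lam * grad_f(y), lam)
        if np.linalg.norm(x - x_prev) <= tol:
            break
    return x
```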

  • ADMM

$$x^{k+1}:=prox_{\lambda f}(z^k-u^k)$$

$$z^{k+1}:=prox_{\lambda g}(x^{k+1}+u^k)$$

$$u^{k+1}:=u^k+x^{k+1}-z^{k+1}$$

Basically, it always works and has an $O(1/k)$ rate in general. If $f$ and $g$ are both indicator functions, we get a variation on the alternating projections method.

This method originates with Gabay, Mercier, Glowinski, and Marrocco in the 1970s.
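For completeness, here is a minimal sketch of these ADMM iterations, assuming the caller supplies both proximal operators; the names `prox_f`, `prox_g`, and `lam` are our own.

```python
import numpy as np

def admm(prox_f, prox_g, x0, lam=1.0, max_iter=500, tol=1e-8):
    """ADMM sketch for minimizing f(x) + g(x).

    prox_f(v, lam) and prox_g(v, lam) should evaluate prox_{lam * f}(v) and prox_{lam * g}(v).
    """
    z = np.asarray(x0, dtype=float)
    u = np.zeros_like(z)
    for _ in range(max_iter):
        x = prox_f(z - u, lam)        # x-update
        z_new = prox_g(x + u, lam)    # z-update
        u = u + x - z_new             # scaled dual update
        if np.linalg.norm(z_new - z) <= tol and np.linalg.norm(x - z_new) <= tol:
            return z_new
        z = z_new
    return z
```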

3. Example


Suppose we are asked to solve the following optimization problem

$$\min_{x}\frac{1}{2}x^TAx+b^Tx+c+\gamma||x||_{1}$$

where

$$A=\begin{pmatrix} 2 & 0.25 \\ 0.25 & 0.2 \end{pmatrix},\;b=\begin{pmatrix} 0.5 \\ 0.5 \end{pmatrix},\; c=-1.5, \; \gamma=0.2$$

For this problem, if $f(x)=\frac{1}{2}x^TAx+b^Tx+c$ and $g(x)=\gamma||x||_{1}$, then

$$\nabla f(x)=Ax+b$$

Since $g$ is a multiple of $||\cdot||_{1}$, its proximal operator is elementwise soft thresholding:

$$prox_{\lambda ||\cdot||_{1}}(v)=(v-\lambda)_{+}-(-v-\lambda)_{+}$$

So the update step is

$$x^{k+1}:=prox_{\lambda^k \gamma||\cdot||_{1}}(x^k-\lambda^k \nabla f(x^k))$$
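Putting the pieces together, here is a minimal NumPy sketch of this update applied to the given data. The helper names and the fixed step size $\lambda=0.4$ (any value in $(0,1/L]$ works, where $L=\lambda_{\max}(A)\approx 2.03$) are our own choices rather than the settings used for the figures below.

```python
import numpy as np

# Problem data from the example; gamma is the l1 weight.
A = np.array([[2.0, 0.25], [0.25, 0.2]])
b = np.array([0.5, 0.5])
c = -1.5
gamma = 0.2

def grad_f(x):
    return A @ x + b

def soft_threshold(v, t):
    # prox_{t * ||.||_1}(v), applied elementwise
    return np.maximum(v - t, 0.0) - np.maximum(-v - t, 0.0)

lam = 0.4                      # fixed step size in (0, 1/L]
x = np.zeros(2)
for _ in range(500):
    x_new = soft_threshold(x - lam * grad_f(x), lam * gamma)
    if np.linalg.norm(x_new - x) < 1e-10:
        x = x_new
        break
    x = x_new

obj = 0.5 * x @ A @ x + b @ x + c + gamma * np.abs(x).sum()
print("solution:", x, "objective:", obj)
```

With these settings the iterates should converge to the sparse point $x\approx(0,-1.5)$, which satisfies the optimality condition $0\in Ax+b+\gamma\,\partial||x||_{1}$.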

Finally, the 2D contour plot of the objective function and the trajectory of the iterates are shown in the following figure.

Additionally, when we use the proximal gradient method with exact line search to optimize the objective function, the result is:

We see that the proximal gradient algorithm solves this nonsmooth convex optimization problem successfully, and that the variant based on exact line search converges faster than the one based on backtracking line search.

To learn more about proximal algorithms, see the monograph "Proximal Algorithms" by N. Parikh and S. Boyd and the accompanying website: http://web.stanford.edu/~boyd/papers/prox_algs.html

References


Parikh, Neal, and Stephen P. Boyd. "Proximal Algorithms." Foundations and Trends in Optimization 1.3 (2014): 127-239.