Academic Report No. 50: Damped Proximal Augmented Lagrangian Method for Weakly-Convex Problems with Convex Constraints

Posted: 2024-06-26 10:52


School of Mathematical Sciences Academic Report [2024] No. 050

(Report No. 930 in the High-Level University Development Series)


Title: Damped Proximal Augmented Lagrangian Method for Weakly-Convex Problems with Convex Constraints

Speaker: Prof. Yangyang Xu (Rensselaer Polytechnic Institute)

Time: July 9, 2024, 16:00-17:00

Venue: Lecture Room 1, Huixing Building, Yuehai Campus

Abstract: In this talk, I will present a damped proximal augmented Lagrangian method (DPALM) for solving problems with a weakly-convex objective and convex linear/nonlinear constraints. Instead of taking a full dual stepsize, DPALM adopts a damped dual stepsize. DPALM can produce a (near) eps-KKT point within O(eps^{-2}) outer iterations if each DPALM subproblem is solved to a proper accuracy. In addition, I will show the overall iteration complexity of DPALM when the objective is either a regularized smooth function or in a regularized compositional form. In the former case, DPALM achieves a complexity of O(eps^{-2.5}) to produce an eps-KKT point by applying an accelerated proximal gradient (APG) method to each DPALM subproblem. In the latter case, the complexity of DPALM is O(eps^{-3}) to produce a near eps-KKT point by using APG to solve a Moreau-envelope-smoothed version of each subproblem. Our outer iteration complexity and overall complexity either generalize the existing best results from unconstrained or linearly constrained problems to convex-constrained ones, or improve over the best-known results for problems with the same structure. Furthermore, numerical experiments on linearly/quadratically constrained nonconvex quadratic programs and linearly constrained robust nonlinear least squares demonstrate the empirical efficiency of the proposed DPALM over several state-of-the-art methods.
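To make the role of the damped dual stepsize concrete, here is a minimal Python sketch of a damped proximal ALM on a linearly constrained toy problem. It is written only from the description above: the plain gradient-descent inner solver (standing in for APG), the parameter choices, and the toy nonconvex quadratic are all illustrative assumptions, not the speaker's implementation.

```python
# Illustrative sketch of a damped proximal augmented Lagrangian method for
#     min_x f(x)  subject to  A x = b,
# where f is weakly convex. All parameter choices and the inner solver are
# assumptions for illustration, not the algorithm analyzed in the talk.
import numpy as np

def dpalm(grad_f, L_f, A, b, x0, rho, beta=10.0, theta=0.5,
          outer_iters=100, inner_iters=300):
    """Each outer step approximately minimizes the proximal augmented
    Lagrangian
        f(x) + z^T (A x - b) + (beta/2)||A x - b||^2 + (rho/2)||x - x_k||^2
    (here by plain gradient descent; the talk uses an APG method), then takes
    a *damped* dual step with factor theta in (0, 1]; theta = 1 would recover
    the usual full dual update of proximal ALM."""
    x, z = x0.copy(), np.zeros(A.shape[0])
    # Gradient Lipschitz constant of the (convexified) subproblem objective.
    step = 1.0 / (L_f + beta * np.linalg.norm(A, 2) ** 2 + rho)
    for _ in range(outer_iters):
        xk = x.copy()
        for _ in range(inner_iters):
            r = A @ x - b
            g = grad_f(x) + A.T @ (z + beta * r) + rho * (x - xk)
            x = x - step * g
        z = z + theta * beta * (A @ x - b)   # damped dual stepsize
    return x, z

# Toy test: an indefinite (hence merely weakly convex) quadratic objective.
rng = np.random.default_rng(0)
n, m = 20, 5
Q = rng.standard_normal((n, n)); Q = 0.5 * (Q + Q.T)
c = rng.standard_normal(n)
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
# Choose rho above the weak-convexity modulus so each subproblem is convex.
rho = 1.1 * max(0.0, -np.linalg.eigvalsh(Q).min())
L_f = np.linalg.norm(Q, 2)
x_sol, z_sol = dpalm(lambda x: Q @ x + c, L_f, A, b, np.zeros(n), rho)
print("feasibility ||Ax - b|| =", np.linalg.norm(A @ x_sol - b))
```

Setting theta = 1 in the sketch recovers the undamped proximal ALM; the effect of taking theta below 1 is exactly what the complexity results in the abstract quantify.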

About the speaker: Yangyang Xu is an associate professor in the Department of Mathematical Sciences at Rensselaer Polytechnic Institute. He received his B.S. in Computational Mathematics from Nanjing University in 2007, his M.S. in Operations Research from the Chinese Academy of Sciences in 2010, and his Ph.D. from the Department of Computational and Applied Mathematics at Rice University in 2014. His research interests lie in optimization theory and methods and their applications in machine learning, statistics, and signal processing. He has developed optimization algorithms for compressed sensing, matrix completion, and tensor factorization and learning. His recent research focuses on first-order methods, stochastic optimization methods, and distributed optimization. His research has been supported by NSF, ONR, and IBM. He is an associate editor of Mathematics of Operations Research.

Faculty and students are welcome to attend!

Invited by: Yaohua Hu (胡耀华)


                       School of Mathematical Sciences

                      June 26, 2024