
Refine Search Results

Document Type

  • 14 journal articles
  • 2 conference papers
  • 1 technical report

Collection Scope

  • 17 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 14 Science
    • 14 Mathematics
  • 6 Management
    • 6 Management Science and Engineering (...
  • 3 Engineering
    • 2 Computer Science and Technology...
    • 1 Information and Communication Engineering
    • 1 Software Engineering

Topics

  • 17 memory gradient ...
  • 10 unconstrained op...
  • 8 global convergen...
  • 6 convergence
  • 2 exact line searc...
  • 2 numerical experi...
  • 2 line search
  • 2 convergence anal...
  • 1 pareto critical
  • 1 curve search rul...
  • 1 curry-altman's s...
  • 1 descent search d...
  • 1 iterations
  • 1 wolfe conditions
  • 1 algorithms
  • 1 nonmonotone armi...
  • 1 large scale prob...
  • 1 synergetic neura...
  • 1 difference equat...
  • 1 non-linear progr...

Institutions

  • 4 univ michigan de...
  • 3 qufu normal univ...
  • 2 qufu normal univ...
  • 1 shanghai univ sc...
  • 1 department insti...
  • 1 china univ petr ...
  • 1 wuhan univ sch m...
  • 1 information cent...
  • 1 shandong univ sci & technol
  • 1 hainan univ dept...
  • 1 sichuan univ col...
  • 1 chinese acad sci...
  • 1 chongqing jiaoto...
  • 1 department of ma...
  • 1 chongqing normal...
  • 1 shanghai jiao tong univ
  • 1 tongji univ dept...
  • 1 huanggang normal...
  • 1 school of manage...
  • 1 tokyo univ sci d...

Authors

  • 3 shi zj
  • 2 shi zhen-jun
  • 2 narushima yasush...
  • 2 shen j
  • 1 zou gang
  • 1 yao wei
  • 1 shen jie
  • 1 sun qingying
  • 1 zhao yong
  • 1 cantrell j. w
  • 1 yabe hiroshi
  • 1 wu baofeng
  • 1 qingguo bai
  • 1 zhang weiguo
  • 1 miele a
  • 1 liu qiu
  • 1 ou yigui
  • 1 liu yuanwen
  • 1 jingyong tang co...
  • 1 guo jinhua

Language

  • 12 English
  • 4 Other
  • 1 Chinese
Search criteria: Subject = "Memory gradient method"
17 records; showing 1-10
Sort by:
Memory gradient method for multiobjective optimization
APPLIED MATHEMATICS AND COMPUTATION, 2023, Vol. 443, No. 1
Authors: Chen, Wang; Yang, Xinmin; Zhao, Yong (Sichuan Univ Coll Math, Chengdu 610065, Peoples R China; Chongqing Normal Univ Natl Ctr Appl Math, Chongqing 401331, Peoples R China; Chongqing Normal Univ Sch Math Sci, Chongqing 401331, Peoples R China; Chongqing Jiaotong Univ Coll Math & Stat, Chongqing 400074, Peoples R China)
In this paper, we propose a new descent method, called the multiobjective memory gradient method, for finding Pareto critical points of a multiobjective optimization problem. The main thought in this method is to select ...
A memory gradient method for non-smooth convex optimization
INTERNATIONAL JOURNAL OF COMPUTER MATHEMATICS, 2015, Vol. 92, No. 8, pp. 1625-1642
Authors: Ou, Yigui; Liu, Yuanwen (Hainan Univ Dept Appl Math, Haikou 570228, Peoples R China)
Based on the Moreau-Yosida regularization and a modified line search technique, this paper presents an implementable memory gradient method for solving a possibly non-differentiable convex minimization problem by conv...
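The Moreau-Yosida regularization mentioned in the abstract above smooths a non-differentiable convex function f by F_lam(x) = min_y { f(y) + ||y - x||^2 / (2*lam) }, so that gradient-type methods become applicable. A minimal sketch for f(t) = |t|, whose envelope has the well-known closed form of the Huber function; this is the standard construction, not the paper's specific algorithm:

```python
import numpy as np

def moreau_envelope_abs(x, lam=1.0):
    """Moreau-Yosida regularization of f(t) = |t|, applied componentwise.

    F_lam(x) = min_y |y| + (y - x)^2 / (2*lam). The minimizer is the
    soft-thresholding (proximal) operator, and the resulting envelope is
    the Huber function: continuously differentiable everywhere, even
    though |t| is not differentiable at 0.
    """
    ax = np.abs(x)
    # Quadratic near the kink, linear (shifted down by lam/2) far from it.
    return np.where(ax <= lam, x**2 / (2 * lam), ax - lam / 2)
```

Because the envelope has a Lipschitz-continuous gradient, a smooth descent method (such as a memory gradient method) can be run on F_lam in place of the original non-smooth objective.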
A new variant of the memory gradient method for unconstrained optimization
OPTIMIZATION LETTERS, 2012, Vol. 6, No. 8, pp. 1643-1655
Authors: Zheng, Yue; Wan, Zhongping (Wuhan Univ Sch Math & Stat, Wuhan 430072, Peoples R China; Huanggang Normal Univ Coll Math & Comp Sci, Huanggang 438000, Peoples R China)
In this paper, we present a new memory gradient method such that the direction generated by this method provides a sufficient descent direction for the objective function at every iteration. Then, we analyze its globa...
A NEW DESCENT MEMORY GRADIENT METHOD AND ITS GLOBAL CONVERGENCE
Journal of Systems Science & Complexity, 2011, Vol. 24, No. 4, pp. 784-794
Authors: Min Sun; Qingguo Bai (Department of Mathematics and Information Science, Zaozhuang University, Zaozhuang 277160, China; School of Management, Qufu Normal University, Rizhao 276826, China)
In this article, a new descent memory gradient method without restarts is proposed for solving large-scale unconstrained optimization problems. The method has the following attractive properties: 1) The search direc...
A Memory Gradient Method with a New Nonmonotone Line Search Rule
The 2010 IEEE International Conference on Progress in Informatics and Computing
Authors: Jingyong Tang (College of Mathematics and Information Science, Xinyang Normal University, Xinyang 464000, China); Yunhong Hu (Department of Applied Mathematics, Yuncheng University, Yuncheng 044000, China)
Based on the nonmonotone Armijo line search, the paper proposes a new nonmonotone line search rule and investigates a memory gradient method with this line search. Its global convergence is also proved under some mild conditions. Compared with...
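As a concrete illustration of the nonmonotone idea (not the paper's exact rule), a max-type nonmonotone Armijo backtracking in the style of Grippo, Lampariello, and Lucidi measures the decrease against the worst of the last few objective values rather than against f(x_k) alone, which allows occasional increases and can escape narrow valleys. Names and constants below are illustrative:

```python
import numpy as np

def nonmonotone_armijo(f, g_x, x, d, f_hist, sigma=1e-4, beta=0.5, max_backtracks=50):
    """Backtracking step size under a max-type nonmonotone Armijo rule:
    accept a if f(x + a*d) <= max(f_hist) + sigma * a * g(x)^T d,
    where f_hist holds the last few objective values (including f(x)).
    """
    gd = g_x @ d                 # directional derivative; d must be a descent direction (gd < 0)
    f_ref = max(f_hist)          # nonmonotone reference value
    a = 1.0
    for _ in range(max_backtracks):
        if f(x + a * d) <= f_ref + sigma * a * gd:
            return a
        a *= beta                # shrink and retry
    return a

# Illustrative use on f(x) = ||x||^2 from x = (1, 1) along the steepest descent direction:
f = lambda v: v @ v
x = np.array([1.0, 1.0])
d = -2 * x
a = nonmonotone_armijo(f, 2 * x, x, d, f_hist=[f(x)])
```

With a history window of length 1 the rule reduces to the classical (monotone) Armijo condition; the nonmonotone behavior appears once f_hist spans several iterations.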
Convergence of the memory gradient method under curve search
Numerical Mathematics: A Journal of Chinese Universities, 2012, Vol. 34, No. 4, pp. 308-315
Authors: Tang Jingyong; He Guoping (College of Mathematics and Information Science, Xinyang Normal University; Department of Mathematics, Shanghai Jiao Tong University; College of Information Science and Engineering, Shandong University of Science and Technology)
1. Introduction. Consider the unconstrained optimization problem (UP): min f(x), x ∈ R^n, where f(x): R^n → R is a continuously differentiable function and g(x) denotes its gradient. Methods for solving (UP) are mainly iterative, with the basic scheme x_{k+1} = x_k + α_k d_k, where d_k is the search direction for f(x) at x_k and α_k is the step size. In this paper, if x_k is the current...
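The basic scheme above, x_{k+1} = x_k + α_k d_k, becomes a memory gradient method when the direction reuses the previous one, e.g. d_k = -g_k + β_k d_{k-1}. A runnable sketch with an ordinary Armijo backtracking step size; the damping rule β_k = η‖g_k‖/‖d_{k-1}‖ is one common choice that guarantees descent for η < 1, not the specific rule of the paper above:

```python
import numpy as np

def memory_gradient_descent(f, grad, x0, eta=0.2, tol=1e-8, max_iter=500):
    """Memory gradient iteration x_{k+1} = x_k + a_k d_k with
    d_k = -g_k + beta_k d_{k-1}, beta_k = eta*||g_k||/||d_{k-1}||.

    Since |beta_k * g_k @ d_prev| <= eta*||g_k||^2, we get
    g_k @ d_k <= -(1 - eta)*||g_k||^2 < 0, so d_k is always a
    descent direction when 0 < eta < 1.
    """
    x = np.asarray(x0, dtype=float)
    d_prev = None
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        if d_prev is None:
            d = -g                                    # first step: steepest descent
        else:
            beta = eta * np.linalg.norm(g) / np.linalg.norm(d_prev)
            d = -g + beta * d_prev                    # memory term
            if g @ d >= 0:                            # safeguard (unreachable for eta < 1)
                d = -g
        a, fx, gd = 1.0, f(x), g @ d
        while f(x + a * d) > fx + 1e-4 * a * gd and a > 1e-12:
            a *= 0.5                                  # Armijo backtracking
        x = x + a * d
        d_prev = d
    return x

x_star = memory_gradient_descent(lambda v: v @ v, lambda v: 2 * v, x0=[3.0, -2.0])
```

On this strongly convex quadratic the iterates drive the gradient to zero; the papers in this list differ mainly in how β_k and α_k are chosen and in the resulting convergence guarantees.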
Memory gradient method with Goldstein line search
COMPUTERS & MATHEMATICS WITH APPLICATIONS, 2007, Vol. 53, No. 1, pp. 28-40
Authors: Shi, Zhen-Jun; Shen, Jie (Univ Michigan Dept Comp & Informat Sci, Dearborn MI 48128, USA; Qufu Normal Univ Coll Operat Res & Management, Shandong 276826, Peoples R China)
In this paper, we present a multi-step memory gradient method with Goldstein line search for unconstrained optimization problems and prove its global convergence under some mild conditions. We also prove the linear co...
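The Goldstein line search brackets the step from both sides: f(x) + (1-c)·a·gᵀd ≤ f(x + a·d) ≤ f(x) + c·a·gᵀd with 0 < c < 1/2, ruling out steps that are too long (upper bound) as well as too short (lower bound). A generic check of the two inequalities; the paper's multi-step variant adds structure beyond this standard form:

```python
import numpy as np

def satisfies_goldstein(f, g_x, x, d, a, c=0.25):
    """Check the Goldstein conditions for step size a along a descent
    direction d (g_x = grad f(x), g_x @ d < 0, 0 < c < 1/2):

        f(x) + (1 - c)*a*g_x@d  <=  f(x + a*d)  <=  f(x) + c*a*g_x@d

    The upper inequality is the Armijo sufficient-decrease condition;
    the lower one rejects overly small steps.
    """
    gd = g_x @ d
    fx, fa = f(x), f(x + a * d)
    return fx + (1 - c) * a * gd <= fa <= fx + c * a * gd

# For f(x) = ||x||^2 at x = (1,) along d = (-1,) with c = 0.25,
# exactly the steps a in [0.5, 1.5] satisfy both inequalities.
f = lambda v: v @ v
x = np.array([1.0])
d = np.array([-1.0])
```

Unlike the Wolfe conditions, the Goldstein test needs only function values (no extra gradient evaluations at the trial point), which is one reason it appears in gradient-type methods.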
Strong global convergence of an adaptive nonmonotone memory gradient method
APPLIED MATHEMATICS AND COMPUTATION, 2007, Vol. 185, No. 1, pp. 681-688
Authors: Yu, Zhensheng; Zhang, Weiguo; Wu, Baofeng (Shanghai Univ Sci & Technol Coll Sci, Shanghai 200093, Peoples R China; Tongji Univ Dept Appl Math, Shanghai 200092, Peoples R China)
In this paper, we develop an adaptive nonmonotone memory gradient method for unconstrained optimization. The novelty of this method is that the stepsize can be adjusted according to the characteristics of the objectiv...
Global convergence of a memory gradient method for unconstrained optimization
COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2006, Vol. 35, No. 3, pp. 325-346
Authors: Narushima, Yasushi; Yabe, Hiroshi (Tokyo Univ Sci Grad Sch Dept Math, Shinjuku-ku, Tokyo 1628601, Japan; Tokyo Univ Sci Dept Math Informat Sci, Shinjuku-ku, Tokyo 1628601, Japan)
Memory gradient methods are used for unconstrained optimization, especially large-scale problems. The first idea of memory gradient methods was proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this ...
On memory gradient method with trust region for unconstrained optimization
NUMERICAL ALGORITHMS, 2006, Vol. 41, No. 2, pp. 173-196
Authors: Shi, Z.J.; Shen, J. (Qufu Normal Univ Coll Operat Res & Management, Shandong 276826, Peoples R China; Univ Michigan Dept Comp & Informat Sci, Dearborn MI 48128, USA)
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines the line search method and the trust region method to generate new iterative points at each ...