
Refine Results

Document Type

  • 113 journal articles
  • 94 conference papers
  • 2 theses

Holdings

  • 209 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 191 Engineering
    • 121 Computer Science and Technology...
    • 71 Electrical Engineering
    • 33 Control Science and Engineering
    • 26 Information and Communication Engineering
    • 16 Software Engineering
    • 13 Instrument Science and Technology
    • 10 Petroleum and Natural Gas Engineering
    • 10 Environmental Science and Engineering (...
    • 8 Mechanical Engineering
    • 8 Materials Science and Engineering (...
    • 6 Electronic Science and Technology (...
    • 6 Chemical Engineering and Technology
    • 5 Mechanics (...
    • 5 Power Engineering and Engineering Ther...
    • 4 Architecture
    • 4 Civil Engineering
    • 4 Hydraulic Engineering
    • 4 Aerospace Science and Tech...
    • 3 Bioengineering
    • 2 Optical Engineering
  • 34 Science
    • 15 Physics
    • 9 Chemistry
    • 8 Mathematics
    • 6 Biology
    • 2 Geophysics
  • 15 Management
    • 14 Management Science and Engineering (...
  • 13 Medicine
    • 11 Clinical Medicine
    • 4 Basic Medicine (...
  • 3 Agriculture
  • 2 Economics
    • 2 Applied Economics
  • 1 Law
  • 1 Literature

Topics

  • 209 training algorit...
  • 32 neural network
  • 27 artificial neura...
  • 15 neural networks
  • 9 particle swarm o...
  • 8 artificial neura...
  • 6 deep learning
  • 6 neural nets
  • 6 machine learning
  • 6 training
  • 5 modeling
  • 5 feature extracti...
  • 5 multilayer perce...
  • 5 forecasting
  • 4 generative adver...
  • 4 process neural n...
  • 4 quasi-newton met...
  • 4 power quality di...
  • 4 learning (artifi...
  • 4 recurrent neural...

Institutions

  • 3 hebei univ engn ...
  • 3 univ dayton dept...
  • 2 lab cent lyonnai...
  • 2 joint inst nucl ...
  • 2 assiut univ dept...
  • 2 univ calif san d...
  • 2 ivanovo state un...
  • 2 univ dubrovnik d...
  • 2 univ tokyo grad ...
  • 2 shandong univ sc...
  • 2 south china univ...
  • 2 univ technol com...
  • 2 forth inst appl ...
  • 1 govt arts coll d...
  • 1 stanford univ de...
  • 1 rabdan acad fac ...
  • 1 univ teknol petr...
  • 1 iasbs dept chem ...
  • 1 dalian ocean uni...
  • 1 pt pln persero e...

Authors

  • 6 ninomiya hiroshi
  • 5 xu shaohua
  • 3 gong n
  • 3 taha tarek m.
  • 3 liu kun
  • 3 paul dipjyoti
  • 3 pantazis yannis
  • 3 zeng xiaoqin
  • 3 hasan raqibul
  • 3 mahboubi shahrza...
  • 3 ng wing w. y.
  • 3 bertrand-krajews...
  • 3 stylianou yannis
  • 3 denoeux t
  • 3 takamichi shinno...
  • 2 stadnik a
  • 2 vilovic ivan
  • 2 subbotin s
  • 2 chen chen
  • 2 saito yuki

Language

  • 190 English
  • 13 Other
  • 6 Chinese

Search query: Subject = "Training algorithm"
209 records in total; results 51-60 are shown below.

GENERAL METHOD FOR TRAINING COMMITTEE MACHINE
PATTERN RECOGNITION, 1978, Vol. 10, No. 4, pp. 255-259
Author: TAKIYAMA, R (Department of Visual Communication Design, Kyushu Institute of Design, Fukuoka 815, Japan)
A method for training the committee machine with an arbitrary logic is described. First, an expression of the discriminant function realized by the committee machine is introduced. By making use of the expression, an ...

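For context, a committee machine combines several linear threshold units through a fixed logic such as a majority vote. The sketch below shows that generic discriminant together with a simple error-correction update; it is a minimal illustration under a majority-vote assumption, not Takiyama's expression or training rule, and the committee size, toy data, and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Committee machine: K linear threshold units whose +/-1 votes are combined
# by a fixed logic -- here a simple majority vote (an assumed choice).
K, d = 5, 2                      # committee size and input dimension (assumed)
W = rng.normal(size=(K, d + 1))  # each row holds one member's weights and bias

def committee_output(x):
    votes = np.sign(W @ np.append(x, 1.0))   # each member votes +1 or -1
    return np.sign(votes.sum())              # majority-vote discriminant

def train_step(x, y, lr=0.1):
    """If the committee misclassifies (x, y), nudge the dissenting member
    whose activation is closest to the decision boundary toward the target."""
    xb = np.append(x, 1.0)
    if committee_output(x) == y:
        return
    acts = W @ xb
    wrong = np.where(np.sign(acts) != y)[0]  # members voting against the target
    j = wrong[np.argmin(np.abs(acts[wrong]))]
    W[j] += lr * y * xb                      # perceptron-style correction

# One pass over a toy +/-1 labelled data set (assumed).
X = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
Y = np.array([1, 1, -1, -1])
for x, y in zip(X, Y):
    train_step(x, y)
```
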
BOOKS ON TAPE AS TRAINING DATA FOR CONTINUOUS SPEECH RECOGNITION
SPEECH COMMUNICATION, 1994, Vol. 14, No. 1, pp. 61-70
Authors: BOULIANNE, G; KENNY, P; LENNIG, M; OSHAUGHNESSY, D; MERMELSTEIN, P (BELL NO RES LTD, MONTREAL, PQ, CANADA)
Training algorithms for natural speech recognition require very large amounts of transcribed speech data. Commercially distributed books on tape constitute an abundant source of such data, but it is difficult to take ...

A Simple Training Strategy for Graph Autoencoder
12th International Conference on Machine Learning and Computing (ICMLC), 2020
Authors: Wang, Yingfeng; Xu, Biyun; Kwak, Myungjae; Zeng, Xiaoqin (Middle Georgia State Univ, 100 Univ Pkwy, Macon, GA 31206, USA; Beijing Kubao Technol Co, 1089 Huihe South St, Beijing, Peoples R China; Hohai Univ, 1 Xikang Rd, Nanjing, Jiangsu, Peoples R China)
A graph autoencoder can map graph data into a low-dimensional space. It is a powerful graph embedding method applied in graph analytics to reduce the computational cost. The training algorithm of a graph autoencoder sea...

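As a point of reference for the record above, the sketch below trains a graph autoencoder of the standard kind: a one-layer linear graph-convolution encoder and an inner-product decoder fitted by full-batch gradient descent on a cross-entropy reconstruction loss. It reflects the general definition of a graph autoencoder, not the training strategy proposed in the paper; the toy graph, embedding size, and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph (adjacency matrix) and one-hot node features -- assumed data.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
N, d_emb = A.shape[0], 2

# Symmetrically normalized adjacency with self-loops, as used by GCN-style encoders.
A_hat = A + np.eye(N)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

W = rng.normal(scale=0.1, size=(N, d_emb))   # encoder weights

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

lr = 0.5
for _ in range(200):
    Z = A_norm @ X @ W                 # encoder: node embeddings
    P = sigmoid(Z @ Z.T)               # decoder: predicted edge probabilities
    # Gradient of the mean binary cross-entropy between P and A.
    grad_S = (P - A) / (N * N)         # d(loss)/d(Z Z^T)
    grad_Z = (grad_S + grad_S.T) @ Z
    grad_W = (A_norm @ X).T @ grad_Z
    W -= lr * grad_W

print("reconstructed edge probabilities:\n", np.round(P, 2))
```
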
Momentum Acceleration of Quasi-Newton Training for Neural Networks
16th Pacific Rim International Conference on Artificial Intelligence (PRICAI)
Authors: Mahboubi, Shahrzad; Indrapriyadarsini, S.; Ninomiya, Hiroshi; Asai, Hideki (Shonan Inst Technol, 1-1-25 Tsujido Nishikaigan, Fujisawa, Kanagawa 2518511, Japan; Shizuoka Univ, Naka Ku, 3-5-1 Johoku, Hamamatsu, Shizuoka 4328011, Japan)
This paper describes a novel acceleration technique for the quasi-Newton method (QN) using momentum terms in neural network training. Recently, Nesterov's accelerated quasi-Newton method (NAQ) has shown that the m...

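To make the idea concrete, the sketch below combines a classical BFGS inverse-Hessian update with a Nesterov-style momentum term: the gradient is taken at the look-ahead point w + mu*v and the resulting quasi-Newton direction is folded into the velocity. It is a generic illustration on a toy quadratic loss with assumed hyperparameters, not the published NAQ algorithm.

```python
import numpy as np

# Toy quadratic "training loss" E(w) = 0.5 * w @ Q @ w - b @ w (assumed stand-in
# for a network's error function, so the sketch stays self-contained).
Q = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda w: Q @ w - b

w = np.zeros(2)          # weights
v = np.zeros(2)          # momentum (velocity) term
H = np.eye(2)            # inverse-Hessian approximation
mu, alpha = 0.8, 0.5     # momentum factor and step size (assumed values)

for _ in range(50):
    p = w + mu * v                   # Nesterov look-ahead point
    g = grad(p)
    v_new = mu * v - alpha * H @ g   # quasi-Newton direction folded into the velocity
    w_new = w + v_new

    # BFGS update of H from the look-ahead secant pair.
    p_new = w_new + mu * v_new
    s = p_new - p
    y = grad(p_new) - g
    if y @ s > 1e-10:                # curvature guard
        rho = 1.0 / (y @ s)
        I = np.eye(2)
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
    w, v = w_new, v_new

print("final weights:", w, "gradient norm:", np.linalg.norm(grad(w)))
```
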
Training Method for a Feed Forward Neural Network Based on Meta-heuristics
13th International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP)
Authors: Melo, Haydee; Zhang, Huiming; Vasant, Pandian; Watada, Junzo (Waseda Univ, Grad Sch Prod Informat & Syst, Tokyo, Japan; Univ Teknol PETRONAS, Fundamental & Appl Sci Dept, Seri Iskandar, Malaysia; Univ Teknol PETRONAS, Department Comp & Informat Sci, Seri Iskandar, Malaysia)
This paper proposes a Gaussian-Cauchy Particle Swarm Optimization (PSO) algorithm to provide the optimized parameters for a Feed Forward Neural Network. The improved PSO trains the Neural Network by optimizing the net...

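For orientation, the sketch below trains a tiny feed-forward network with plain particle swarm optimization: each particle is a flattened weight vector and its fitness is the network's mean squared error on XOR. It illustrates only the standard PSO update, not the Gaussian-Cauchy variant proposed here; the network size, swarm size, and coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR toy data set (assumed stand-in for the training task).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

n_hidden = 3
n_weights = 2 * n_hidden + n_hidden + n_hidden + 1   # W1 (2x3) + b1 (3) + w2 (3) + b2 (1)

def mse(theta):
    """Fitness: mean squared error of the network encoded by the flat vector theta."""
    W1 = theta[:6].reshape(2, 3)
    b1 = theta[6:9]
    w2 = theta[9:12]
    b2 = theta[12]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    return np.mean((out - y) ** 2)

# Standard PSO over the flattened weight vector.
n_particles, iters = 30, 300
w_in, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration coefficients
pos = rng.normal(scale=1.0, size=(n_particles, n_weights))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best training MSE:", pbest_f.min())
```
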
Memristor Crossbar Based Unsupervised Training
IEEE National Aerospace and Electronics Conference (NAECON)
Authors: Hasan, Raqibul; Taha, Tarek M. (Univ Dayton, Dept Elect & Comp Engn, Dayton, OH 45469, USA)
Several big data applications are particularly focused on classification and clustering tasks. The robustness of such a system depends on how well it can extract important features from the raw data. For big data processing... 

Neural Network Training Based on Quasi-Newton Method Using Nesterov's Accelerated Gradient
IEEE Region 10 Conference (TENCON)
Author: Ninomiya, Hiroshi (Shonan Inst Technol, Dept Informat Sci, 1-1-25 Tsujido Nishikaigan, Fujisawa, Kanagawa 2518511, Japan)
This paper describes a novel quasi-Newton (QN) based acceleration technique for training neural networks. Recently, Nesterov's accelerated gradient method has been utilized for training neural networks. In t...

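For comparison with the quasi-Newton sketch above, the plain Nesterov accelerated gradient update that this abstract refers to fits in a few lines: evaluate the gradient at the look-ahead point, then update the velocity and the weights. This is the textbook form with an assumed toy objective and hyperparameters, not the specific scheme developed in the paper.

```python
import numpy as np

def nag_step(w, v, grad_fn, mu=0.9, alpha=0.01):
    """One Nesterov accelerated gradient step: gradient at the look-ahead
    point w + mu*v, then velocity and weight updates."""
    g = grad_fn(w + mu * v)
    v_new = mu * v - alpha * g
    return w + v_new, v_new

# Example on a 1-D quadratic loss E(w) = 0.5 * w**2 (assumed toy objective).
w, v = np.array([5.0]), np.zeros(1)
for _ in range(100):
    w, v = nag_step(w, v, lambda x: x)
print("final w:", w)
```
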
Training Deep Neural Networks with Constrained Learning Parameters
IEEE International Conference on Rebooting Computing (ICRC)
Authors: Date, Prasanna; Carothers, Christopher D.; Mitchell, John E.; Hendler, James A.; Magdon-Ismail, Malik (Rensselaer Polytech Inst, Dept Comp Sci, Troy, NY 12180, USA; Rensselaer Polytech Inst, Dept Math Sci, Troy, NY 12180, USA)
Today's deep learning models are primarily trained on CPUs and GPUs. Although these models tend to have low error, they consume high power and utilize a large amount of memory owing to double precision floating poin...

On-chip Training of Memristor Based Deep Neural Networks
International Joint Conference on Neural Networks (IJCNN)
Authors: Hasan, Raqibul; Taha, Tarek M.; Yakopcic, Chris (Univ Dayton, Dept Elect & Comp Engn, Dayton, OH 45469, USA)
This paper presents on-chip training circuits for memristor based deep neural networks utilizing unsupervised and supervised learning methods. Memristor crossbar circuits allow neural algorithms to be implemented very...

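As background for why crossbar circuits suit neural algorithms, an ideal memristor crossbar performs a vector-matrix multiplication in a single step: row input voltages produce column currents that sum the products of voltages and device conductances. The sketch below models only that idealized behavior; the array size and conductance values are assumptions, and the non-idealities an on-chip training circuit must handle (wire resistance, device variation) are ignored.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ideal memristor crossbar: G[i, j] is the conductance (in siemens) of the
# device connecting row i to column j -- values here are arbitrary assumptions.
rows, cols = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(rows, cols))

def crossbar_vmm(v_in, G):
    """Column output currents I_j = sum_i V_i * G[i, j] (Kirchhoff's current law
    with columns held at virtual ground), i.e. an analog vector-matrix multiply."""
    return v_in @ G

v_in = np.array([0.2, 0.0, 0.1, 0.3])   # row input voltages (volts, assumed)
print(crossbar_vmm(v_in, G))            # currents feeding each output neuron
```
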
Manhattan Rule Training for Memristive Crossbar Circuit Pattern Classifiers
IEEE 9th International Symposium on Intelligent Signal Processing (WISP)
Authors: Zamanidoost, Elham; Bayat, Farnood M.; Strukov, Dmitri; Kataeva, Irina (Univ Calif Santa Barbara, Dept Elect & Comp Engn, Santa Barbara, CA 93106, USA; Denso Corp, Adv Res Div, Komenoki, Nisshin, Japan)
We investigated batch and stochastic Manhattan Rule algorithms for training multilayer perceptron classifiers implemented with memristive crossbar circuits. In Manhattan Rule training, the weights are updated only usi...

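The defining feature named in the abstract is that Manhattan Rule updates use only the sign of the gradient, so every weight change has a fixed amplitude, which maps naturally onto identical programming pulses for memristive devices. The sketch below applies that sign-only update to a single logistic neuron on toy data; it is a generic single-layer illustration with assumed data and update amplitude, not the batch or stochastic variants evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linearly separable data: two Gaussian blobs (assumed stand-in task).
X = np.vstack([rng.normal(-1.0, 0.5, size=(50, 2)),
               rng.normal(+1.0, 0.5, size=(50, 2))])
t = np.hstack([np.zeros(50), np.ones(50)])

w = np.zeros(2)
b = 0.0
delta = 0.01          # fixed update amplitude (one "programming pulse", assumed)

for _ in range(200):
    y = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # single logistic neuron
    grad_w = X.T @ (y - t) / len(t)             # cross-entropy gradient
    grad_b = np.mean(y - t)
    # Manhattan Rule: ignore the gradient magnitude and step by a fixed
    # amount in the direction given by the gradient's sign.
    w -= delta * np.sign(grad_w)
    b -= delta * np.sign(grad_b)

y = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((y > 0.5) == (t == 1)))
```
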