
Refine Search Results

Document Type

  • 212 journal articles
  • 9 conference papers

Holdings

  • 221 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 221 Engineering
    • 215 Control Science and Engineering
    • 209 Electrical Engineering
    • 13 Computer Science and Technology...
    • 1 Mechanical Engineering
    • 1 Information and Communication Engineering
    • 1 Software Engineering
  • 6 Management
    • 6 Management Science and Engineering (may...
  • 1 Medicine
    • 1 Basic Medicine (may confer medical...

Topic

  • 221 deep learning in...
  • 23 visual learning
  • 20 perception for g...
  • 19 task analysis
  • 18 motion and path ...
  • 18 localization
  • 18 computer vision ...
  • 16 visual-based nav...
  • 16 robots
  • 15 object detection
  • 14 segmentation and...
  • 14 semantic scene u...
  • 12 computer vision ...
  • 12 training
  • 11 learning from de...
  • 11 force and tactil...
  • 11 learning and ada...
  • 11 slam
  • 10 grasping
  • 9 robot sensing sy...

Institution

  • 6 hong kong univ s...
  • 5 univ michigan de...
  • 5 georgia inst tec...
  • 5 univ perugia dep...
  • 4 imperial coll lo...
  • 4 stanford univ st...
  • 4 natl univ singap...
  • 3 swiss fed inst t...
  • 3 univ hong kong d...
  • 3 carnegie mellon ...
  • 3 city univ hong k...
  • 3 kth royal inst t...
  • 3 univ michigan de...
  • 2 seoul natl univ ...
  • 2 univ adelaide sc...
  • 2 georgia inst tec...
  • 2 mit comp sci & a...
  • 2 karlsruhe inst t...
  • 2 swiss fed inst t...
  • 2 carnegie mellon ...

Author

  • 8 liu ming
  • 5 costante gabriel...
  • 5 yang guang-zhong
  • 5 bohg jeannette
  • 5 calandra roberto
  • 5 johnson-roberson...
  • 3 kumar vijay
  • 3 chen steven w.
  • 3 sartoretti guill...
  • 3 tai lei
  • 3 rus daniela
  • 3 zhou xiao-yun
  • 3 pan jia
  • 3 vasudevan ram
  • 3 davison andrew j...
  • 3 choi changhyun
  • 3 folkesson john
  • 3 kelly jonathan
  • 3 kawai hisashi
  • 3 magassouba aly
Language

  • 221 English
Search criteria: Subject = "Deep Learning in Robotics and Automation"
221 records; the following are 201-210
Learning Object Grasping for Soft Robot Hands
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, Vol. 3, Issue 3, pp. 2370-2377
Authors: Choi, Changhyun Schwarting, Wilko DelPreto, Joseph Rus, Daniela MIT Comp Sci & Artificial Intelligence Lab 77 Massachusetts Ave Cambridge MA 02139 USA
We present a three-dimensional deep convolutional neural network (3D CNN) approach for grasping unknown objects with soft hands. Soft hands are compliant and capable of handling uncertainty in sensing and actuation, b...
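
As a rough illustration of the kind of model this abstract describes, the minimal PyTorch sketch below scores candidate grasps from a voxelized observation of an unknown object with a small 3D CNN. The 32^3 input resolution, layer widths, and binary grasp-success output are assumptions made for illustration, not the authors' architecture.

    import torch
    import torch.nn as nn

    class VoxelGraspNet(nn.Module):
        """Toy 3D CNN: voxel grid of the object (1 x 32 x 32 x 32) -> grasp success score."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),                       # 32^3 -> 16^3
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),                       # 16^3 -> 8^3
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 8 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, 1),                     # logit of grasp success
            )

        def forward(self, voxels):
            return self.head(self.features(voxels))

    # Example: score a batch of 4 hypothetical voxelized observations.
    net = VoxelGraspNet()
    scores = torch.sigmoid(net(torch.rand(4, 1, 32, 32, 32)))
    print(scores.shape)  # torch.Size([4, 1])
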
GOSELO: Goal-Directed Obstacle and Self-Location Map for Robot Navigation Using Reactive Neural Networks
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, Vol. 3, Issue 2, pp. 696-703
Authors: Kanezaki, Asako Nitta, Jirou Sasaki, Yoko Natl Inst Adv Ind Sci & Technol Tokyo 1350064 Japan
Robot navigation using deep neural networks has been drawing a great deal of attention. Although reactive neural networks easily learn expert behaviors and are computationally efficient, they suffer from generalizatio...
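
As a hedged sketch of the general shape of such a reactive navigation network, the snippet below maps a goal-directed 2-D map (three assumed channels: obstacles, the robot's past path, and the goal location) directly to one of four discrete motion commands. The channel layout, map size, and action set are illustrative guesses rather than the GOSELO design.

    import torch
    import torch.nn as nn

    ACTIONS = ["up", "down", "left", "right"]   # assumed discrete action set

    class ReactivePolicy(nn.Module):
        """Toy reactive CNN: 3 x 64 x 64 map (obstacles, visited cells, goal) -> action logits."""
        def __init__(self, num_actions=len(ACTIONS)):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),   # 64 -> 32
                nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),  # 32 -> 16
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
                nn.Linear(128, num_actions),
            )

        def forward(self, goal_map):
            return self.net(goal_map)

    policy = ReactivePolicy()
    logits = policy(torch.rand(1, 3, 64, 64))
    print(ACTIONS[logits.argmax(dim=1).item()])  # greedy action for one map
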
Distributed Perception by Collaborative Robots
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, Vol. 3, Issue 4, pp. 3709-3716
Authors: Hadidi, Ramyad Cao, Jiashen Woodward, Matthew Ryoo, Michael S. Kim, Hyesoon Georgia Inst Technol Sch Comp Sci Atlanta GA 30332 USA Georgia Inst Technol Dept Elect Engn Atlanta GA 30332 USA EgoVid Inc Ulsan 44919 South Korea
Recognition ability and, more broadly, machine learning techniques enable robots to perform complex tasks and allow them to function in diverse situations. In fact, robots can easily access an abundance of sensor data...
Recurrent-OctoMap: Learning State-Based Map Refinement for Long-Term Semantic Mapping With 3-D-Lidar Data
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, Vol. 3, Issue 4, pp. 3749-3756
Authors: Sun, Li Yan, Zhi Zaganidis, Anestis Zhao, Cheng Duckett, Tom Univ Lincoln L CAS Lincoln LN6 7TS England UTBM Lab Elect Informat & Image CNRS F-90010 Belfort France
This letter presents a novel semantic mapping approach, Recurrent-OctoMap, learned from long-term three-dimensional (3-D) Lidar data. Most existing semantic mapping approaches focus on improving semantic understanding...
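
As a loose illustration of recurrent, per-cell semantic fusion in this spirit, the sketch below runs an LSTM over a sequence of noisy per-scan class scores for a map cell and outputs a refined label distribution. The class count, hidden size, and the per-cell formulation are assumptions, not the Recurrent-OctoMap model itself.

    import torch
    import torch.nn as nn

    NUM_CLASSES = 5  # assumed semantic classes for one lidar map cell

    class CellLabelFuser(nn.Module):
        """Toy recurrent fusion: sequence of per-scan class scores -> refined class logits."""
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=NUM_CLASSES, hidden_size=hidden, batch_first=True)
            self.out = nn.Linear(hidden, NUM_CLASSES)

        def forward(self, obs_seq):
            # obs_seq: (batch, T, NUM_CLASSES) noisy observations of one cell over time
            _, (h_last, _) = self.lstm(obs_seq)
            return self.out(h_last[-1])          # fused logits per cell

    fuser = CellLabelFuser()
    logits = fuser(torch.rand(8, 20, NUM_CLASSES))  # 8 cells, 20 scans each
    print(logits.argmax(dim=1))                     # refined label per cell
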
Real-Time 3-D Shape Instantiation From Single Fluoroscopy Projection for Fenestrated Stent Graft Deployment
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, Vol. 3, Issue 2, pp. 1314-1321
Authors: Zhou, Xiao-Yun Lin, Jianyu Riga, Celia Yang, Guang-Zhong Lee, Su-Lin Imperial Coll London Hamlyn Ctr Robot Surg London SW7 2AZ England St Marys Hosp Reg Vasc Unit London W2 1NY England Imperial Coll London Acad Div Surg London SW7 2AZ England
Robot-assisted deployment of fenestrated stent grafts in fenestrated endovascular aortic repair (FEVAR) requires accurate geometrical alignment. Currently, this process is guided by two-dimensional (2-D) fluoroscopy, ...
Deep Episodic Memory: Encoding, Recalling, and Predicting Episodic Experiences for Robot Action Execution
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, Vol. 3, Issue 4, pp. 4007-4014
Authors: Rothfuss, Jonas Ferreira, Fabio Aksoy, Eren Erdal Zhou, You Asfour, Tamim Karlsruhe Inst Technol Inst Anthropomat & Robot D-76131 Karlsruhe Germany Halmstad Univ Sch Informat Technol S-30118 Halmstad Sweden
We present a novel deep neural network architecture for representing robot experiences in an episodic-like memory that facilitates encoding, recalling, and predicting action experiences. Our proposed unsupervised deep...
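
Purely as an assumed illustration of an episodic-like memory, the snippet below encodes each experience (a flat feature vector standing in for an observed episode) into a latent code, stores it, and recalls the most similar stored experience by cosine similarity. The encoder, feature size, and retrieval rule are placeholders rather than the paper's architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EpisodicMemory:
        """Toy episodic memory: encode experiences to latent codes and recall by similarity."""
        def __init__(self, feat_dim=128, code_dim=16):
            self.encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, code_dim))
            self.codes, self.episodes = [], []

        def store(self, episode_feat):
            with torch.no_grad():
                self.codes.append(self.encoder(episode_feat))
            self.episodes.append(episode_feat)

        def recall(self, query_feat):
            with torch.no_grad():
                q = self.encoder(query_feat)
                sims = torch.stack([F.cosine_similarity(q, c, dim=0) for c in self.codes])
            return self.episodes[int(sims.argmax())]   # most similar stored experience

    mem = EpisodicMemory()
    for _ in range(10):
        mem.store(torch.rand(128))
    closest = mem.recall(torch.rand(128))
    print(closest.shape)  # torch.Size([128])
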
Motion-Based Object Segmentation Based on Dense RGB-D Scene Flow
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, Vol. 3, Issue 4, pp. 3797-3804
Authors: Shao, Lin Shah, Parth Dwaracherla, Vikranth Bohg, Jeannette Stanford Univ Stanford CA 94305 USA
Given two consecutive RGB-D images, we propose a model that estimates a dense three-dimensional (3D) motion field, also known as scene flow. We take advantage of the fact that in robot manipulation scenarios, scenes o...
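
The scene-flow network itself is not reproduced here; as a small assumed example of the downstream motion-based segmentation step, the NumPy snippet below takes an already-estimated per-pixel 3-D scene-flow field, subtracts the dominant (camera-induced) component approximated by the median flow, and thresholds the residual magnitude to obtain a moving-object mask. The threshold and the median heuristic are illustrative choices, not the authors' method.

    import numpy as np

    def motion_mask(scene_flow, thresh_m=0.05):
        """scene_flow: (H, W, 3) per-pixel 3-D motion in meters between two RGB-D frames.
        Returns a boolean (H, W) mask of pixels moving relative to the static background."""
        # Approximate the camera-induced component as the per-axis median flow.
        background = np.median(scene_flow.reshape(-1, 3), axis=0)
        residual = np.linalg.norm(scene_flow - background, axis=2)
        return residual > thresh_m

    flow = np.zeros((120, 160, 3))
    flow[40:60, 50:80] = [0.2, 0.0, 0.0]           # a hypothetical object translating 20 cm in x
    mask = motion_mask(flow)
    print(mask.sum(), "pixels flagged as moving")  # 20 * 30 = 600
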
LS-VO: Learning Dense Optical Subspace for Robust Visual Odometry Estimation
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, Vol. 3, Issue 3, pp. 1735-1742
Authors: Costante, Gabriele Ciarfuglia, Thomas Alessandro Univ Perugia Dept Engn I-06125 Perugia Italy
This work proposes a novel deep network architecture to solve the camera ego-motion estimation problem. A motion estimation network generally learns features similar to optical flow (OF) fields starting from sequences...
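
As a hedged PyTorch sketch of the general idea of coupling a learned optical-flow subspace with ego-motion regression, the snippet below shares one encoder between a flow-reconstruction decoder and a 6-DoF pose head. Layer sizes, the 64x64 input resolution, and the 6-DoF parameterization are assumptions, not the LS-VO architecture.

    import torch
    import torch.nn as nn

    class LatentSubspaceVO(nn.Module):
        """Toy two-branch net: 2-channel optical flow -> (reconstructed flow, 6-DoF ego-motion)."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            )
            self.decoder = nn.Sequential(                               # flow reconstruction branch
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 2, 2, stride=2),
            )
            self.pose_head = nn.Sequential(                             # ego-motion branch
                nn.Flatten(), nn.Linear(32 * 16 * 16, 128), nn.ReLU(), nn.Linear(128, 6),
            )

        def forward(self, flow):
            z = self.encoder(flow)
            return self.decoder(z), self.pose_head(z)

    net = LatentSubspaceVO()
    recon, pose = net(torch.rand(1, 2, 64, 64))
    print(recon.shape, pose.shape)  # torch.Size([1, 2, 64, 64]) torch.Size([1, 6])
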
RANUS: RGB and NIR Urban Scene Dataset for Deep Scene Parsing
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, Vol. 3, Issue 3, pp. 1808-1815
Authors: Choe, Gyeongmin Kim, Seong-Heum Im, Sunghoon Lee, Joon-Young Narasimhan, Srinivasa G. Kweon, In So Korea Adv Inst Sci & Technol Sch Elect Engn Daejeon 34141 South Korea Adobe Res San Jose CA 95110 USA Carnegie Mellon Univ Pittsburgh PA 15213 USA
In this letter, we present a data-driven method for scene parsing of road scenes to utilize single-channel near-infrared (NIR) images. To overcome the lack of data problem in non-RGB spectrum, we define a new color sp...
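
As one assumed way to feed aligned RGB and NIR inputs to a scene-parsing network, the sketch below stacks them into a 4-channel tensor and runs a small encoder-decoder that outputs per-pixel class logits. The backbone, class count, and input size are placeholders; the paper's learned color space is not modeled here.

    import torch
    import torch.nn as nn

    NUM_CLASSES = 10  # assumed number of scene-parsing classes

    class RGBNIRSegNet(nn.Module):
        """Toy 4-channel (RGB + NIR) encoder-decoder for per-pixel scene parsing."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),   # 4 in-channels: R, G, B, NIR
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(32, NUM_CLASSES, 2, stride=2),      # per-pixel class logits
            )

        def forward(self, rgb, nir):
            x = torch.cat([rgb, nir], dim=1)        # (B, 3, H, W) + (B, 1, H, W) -> (B, 4, H, W)
            return self.decoder(self.encoder(x))

    net = RGBNIRSegNet()
    logits = net(torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128))
    print(logits.shape)  # torch.Size([1, 10, 128, 128])
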
Learning to Detect Aircraft for Long-Range Vision-Based Sense-and-Avoid Systems
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, Vol. 3, Issue 4, pp. 4383-4390
Authors: James, Jasmin Ford, Jason J. Molloy, Timothy L. Queensland Univ Technol Sch Elect Engn & Comp Sci Brisbane Qld 4000 Australia
The commercial use of unmanned aerial vehicles (UAVs) would be enhanced by an ability to sense and avoid potential mid-air collision threats. In this letter, we propose a new approach to aircraft detection for long-ra...