
Refine Search Results

Document Type

  • 212 journal articles
  • 9 conference papers

Holdings

  • 221 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 221 Engineering
    • 215 Control Science and Engineering
    • 209 Electrical Engineering
    • 13 Computer Science and Technology...
    • 1 Mechanical Engineering
    • 1 Information and Communication Engineering
    • 1 Software Engineering
  • 6 Management
    • 6 Management Science and Engineering (...
  • 1 Medicine
    • 1 Basic Medicine (...

Topic

  • 221 deep learning in...
  • 23 visual learning
  • 20 perception for g...
  • 19 task analysis
  • 18 motion and path ...
  • 18 localization
  • 18 computer vision ...
  • 16 visual-based nav...
  • 16 robots
  • 15 object detection
  • 14 segmentation and...
  • 14 semantic scene u...
  • 12 computer vision ...
  • 12 training
  • 11 learning from de...
  • 11 force and tactil...
  • 11 learning and ada...
  • 11 slam
  • 10 grasping
  • 9 robot sensing sy...

Institution

  • 6 hong kong univ s...
  • 5 univ michigan de...
  • 5 georgia inst tec...
  • 5 univ perugia dep...
  • 4 imperial coll lo...
  • 4 stanford univ st...
  • 4 natl univ singap...
  • 3 swiss fed inst t...
  • 3 univ hong kong d...
  • 3 carnegie mellon ...
  • 3 city univ hong k...
  • 3 kth royal inst t...
  • 3 univ michigan de...
  • 2 seoul natl univ ...
  • 2 univ adelaide sc...
  • 2 georgia inst tec...
  • 2 mit comp sci & a...
  • 2 karlsruhe inst t...
  • 2 swiss fed inst t...
  • 2 carnegie mellon ...

Author

  • 8 liu ming
  • 5 costante gabriel...
  • 5 yang guang-zhong
  • 5 bohg jeannette
  • 5 calandra roberto
  • 5 johnson-roberson...
  • 3 kumar vijay
  • 3 chen steven w.
  • 3 sartoretti guill...
  • 3 tai lei
  • 3 rus daniela
  • 3 zhou xiao-yun
  • 3 pan jia
  • 3 vasudevan ram
  • 3 davison andrew j...
  • 3 choi changhyun
  • 3 folkesson john
  • 3 kelly jonathan
  • 3 kawai hisashi
  • 3 magassouba aly
Language

  • 221 English

Search query: Subject = "deep learning in robotics and automation"
221 records in total; showing results 71-80
DeepTIO: A Deep Thermal-Inertial Odometry With Visual Hallucination
IEEE Robotics and Automation Letters, 2020, Vol. 5, No. 2, pp. 1672-1679
Authors: Saputra, Muhamad Risqi U.; de Gusmao, Pedro P. B.; Lu, Chris Xiaoxuan; Almalioglu, Yasin; Rosa, Stefano; Chen, Changhao; Wahlstrom, Johan; Wang, Wei; Markham, Andrew; Trigoni, Niki. Affiliation: Univ Oxford Dept Comp Sci, Oxford OX1 3QD, England
Visual odometry shows excellent performance in a wide range of environments. However, in visually-denied scenarios (e.g. heavy smoke or darkness), pose estimates degrade or even fail. Thermal cameras are commonly used...

Enabling Visual Action Planning for Object Manipulation Through Latent Space Roadmap
IEEE Transactions on Robotics, 2023, Vol. 39, No. 1, pp. 57-75
Authors: Lippi, Martina; Poklukar, Petra; Welle, Michael C.; Varava, Anastasia; Yin, Hang; Marino, Alessandro; Kragic, Danica. Affiliations: KTH Royal Inst Technol, S-11428 Stockholm, Sweden; KTH Royal Inst Technol EECS RPL, S-11428 Stockholm, Sweden; KTH Royal Inst Technol Div Robot Percept & Learning, S-11428 Stockholm, Sweden; KTH Royal Inst Technol Sch Elect Engn & Comp Sci, S-11428 Stockholm, Sweden; Roma Tre Univ, I-00154 Rome, Italy; Univ Cassino & Southern Lazio, I-03043 Cassino, Italy
In this article, we present a framework for visual action planning of complex manipulation tasks with high-dimensional state spaces, focusing on manipulation of deformable objects. We propose a latent space roadmap (L...

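The abstract above is cut off before the method details, but the core idea it names, a latent space roadmap, can be illustrated with a minimal, hypothetical sketch: embed observations in a low-dimensional latent space, connect nearby latent states into a graph, and plan over that graph. The encoder, the distance threshold, and all dimensions below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import networkx as nx

# Toy "latent space roadmap": embed observations in a low-dimensional latent
# space, connect latent states that lie close together, then plan on the graph.
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in for a trained encoder

observations = torch.randn(50, 64)            # e.g. flattened images of object states
with torch.no_grad():
    latents = encoder(observations)           # (50, 2) latent states

roadmap = nx.Graph()
roadmap.add_nodes_from(range(len(latents)))
for i in range(len(latents)):
    for j in range(i + 1, len(latents)):
        dist = torch.dist(latents[i], latents[j]).item()
        if dist < 1.0:                        # treat nearby latent states as connected
            roadmap.add_edge(i, j, weight=dist)

# A "visual action plan" here is simply a sequence of latent waypoints.
if nx.has_path(roadmap, 0, 1):
    print(nx.shortest_path(roadmap, source=0, target=1, weight="weight"))
```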
A Deep Learning Approach for Probabilistic Security in Multi-Robot Teams
IEEE Robotics and Automation Letters, 2019, Vol. 4, No. 4, pp. 4262-4269
Authors: Wehbe, Remy; Williams, Ryan K. Affiliation: Virginia Polytech Inst & State Univ Dept Elect & Comp Engn, Blacksburg VA 24061, USA
In this letter, we train a convolutional neural network (CNN) to predict the probability of security of a multi-robot system (MRS) when robot interactions are probabilistic. In the context of MRSs, probabilistic secur...

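The record above describes training a CNN to output the probability that a multi-robot system is secure when robot interactions are probabilistic. A minimal PyTorch sketch of that general pattern (a convolutional classifier over an interaction-matrix input, ending in a sigmoid) is shown below; the interaction-matrix encoding and every layer size are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SecurityCNN(nn.Module):
    """Toy CNN that maps an N x N robot-interaction matrix to P(secure)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to a fixed-size feature
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1))

    def forward(self, interactions: torch.Tensor) -> torch.Tensor:
        # interactions: (batch, 1, N, N), entries = interaction probabilities
        return torch.sigmoid(self.head(self.features(interactions)))

model = SecurityCNN()
batch = torch.rand(4, 1, 8, 8)                # random 8-robot interaction matrices
print(model(batch).shape)                     # -> torch.Size([4, 1])
```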
Learning Affordance Segmentation for Real-World Robotic Manipulation via Synthetic Images
IEEE Robotics and Automation Letters, 2019, Vol. 4, No. 2, pp. 1140-1147
Authors: Chu, Fu-Jen; Xu, Ruinian; Vela, Patricio A. Affiliation: Georgia Inst Technol Inst Robot & Intelligent Machines, Atlanta GA 30332, USA
This letter presents a deep learning framework to predict the affordances of object parts for robotic manipulation. The framework segments affordance maps by jointly detecting and localizing candidate regions within a...

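The snippet above describes predicting object-part affordances by segmenting affordance maps. As a rough illustration of what affordance segmentation means in general, here is a toy fully convolutional model that maps an RGB image to per-pixel affordance class scores; the layer layout and the number of affordance classes are assumptions, not the published architecture (which also detects and localizes candidate regions).

```python
import torch
import torch.nn as nn

class AffordanceSegNet(nn.Module):
    """Toy fully convolutional net: RGB image -> per-pixel affordance scores."""
    def __init__(self, n_affordances: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, n_affordances, 4, stride=2, padding=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W) -> scores: (batch, n_affordances, H, W)
        return self.head(self.backbone(image))

net = AffordanceSegNet()
rgb = torch.rand(1, 3, 128, 128)
affordance_map = net(rgb).argmax(dim=1)       # per-pixel affordance labels
print(affordance_map.shape)                   # -> torch.Size([1, 128, 128])
```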
Learning Object Grasping for Soft Robot Hands
IEEE Robotics and Automation Letters, 2018, Vol. 3, No. 3, pp. 2370-2377
Authors: Choi, Changhyun; Schwarting, Wilko; DelPreto, Joseph; Rus, Daniela. Affiliation: MIT Comp Sci & Artificial Intelligence Lab, 77 Massachusetts Ave, Cambridge MA 02139, USA
We present a three-dimensional deep convolutional neural network (3D CNN) approach for grasping unknown objects with soft hands. Soft hands are compliant and capable of handling uncertainty in sensing and actuation, b...

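The abstract names a 3D CNN for grasping unknown objects with soft hands. Below is a minimal, hypothetical sketch of the general ingredient: a 3D convolutional network that consumes a voxel occupancy grid of the object and scores a small set of discrete grasp options. The voxel resolution and the number of grasp classes are assumptions for illustration only, not the authors' design.

```python
import torch
import torch.nn as nn

class GraspVoxelNet(nn.Module):
    """Toy 3D CNN: voxel occupancy grid -> scores over discrete grasp options."""
    def __init__(self, n_grasp_classes: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_grasp_classes)

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: (batch, 1, D, H, W) occupancy grid of the target object
        return self.classifier(self.encoder(voxels))

net = GraspVoxelNet()
occupancy = (torch.rand(2, 1, 32, 32, 32) > 0.5).float()
print(net(occupancy).shape)                   # -> torch.Size([2, 6])
```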
Radar Instance Transformer: Reliable Moving Instance Segmentation in Sparse Radar Point Clouds
IEEE Transactions on Robotics, 2024, Vol. 40, pp. 2357-2372
Authors: Zeller, Matthias; Sandhu, Vardeep S.; Mersch, Benedikt; Behley, Jens; Heidingsfeld, Michael; Stachniss, Cyrill. Affiliations: CARIAD SE, D-53115 Bonn, Germany; Univ Bonn, D-53115 Bonn, Germany; CARIAD SE, D-71297 Monsheim, Germany; Univ Oxford Dept Engn Sci, Oxford OX1 2JD, England; Lamarr Inst Machine Learning & Artificial Intellig, Dortmund, Germany
The perception of moving objects is crucial for autonomous robots performing collision avoidance in dynamic environments. LiDARs and cameras tremendously enhance scene interpretation but do not provide direct motion i...

Paired Recurrent Autoencoders for Bidirectional Translation Between Robot Actions and Linguistic Descriptions
IEEE Robotics and Automation Letters, 2018, Vol. 3, No. 4, pp. 3441-3448
Authors: Yamada, Tatsuro; Matsunaga, Hiroyuki; Ogata, Tetsuya. Affiliation: Waseda Univ Dept Intermedia Art & Sci, Tokyo 1698555, Japan
We propose a novel deep learning framework for bidirectional translation between robot actions and their linguistic descriptions. Our model consists of two recurrent autoencoders (RAEs). One RAE learns to encode actio...

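The record above describes two recurrent autoencoders, one per modality, trained so that robot actions and linguistic descriptions can be translated in both directions. The sketch below shows the general shape of such a pairing: each GRU autoencoder compresses a sequence into a latent vector, and an extra loss term pulls the paired latents together. The dimensions and the specific alignment loss are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqAutoencoder(nn.Module):
    """Minimal GRU autoencoder: a sequence is compressed to one latent vector."""
    def __init__(self, dim_in: int, dim_latent: int = 32):
        super().__init__()
        self.encoder = nn.GRU(dim_in, dim_latent, batch_first=True)
        self.decoder = nn.GRU(dim_latent, dim_in, batch_first=True)

    def forward(self, seq: torch.Tensor):
        _, h = self.encoder(seq)                  # h: (1, batch, dim_latent)
        latent = h[-1]
        # feed the latent at every step to reconstruct the sequence
        repeated = latent.unsqueeze(1).expand(-1, seq.size(1), -1)
        recon, _ = self.decoder(repeated)
        return latent, recon

action_rae = SeqAutoencoder(dim_in=7)             # e.g. 7-DoF joint trajectories
language_rae = SeqAutoencoder(dim_in=50)          # e.g. word-embedding sequences

actions = torch.randn(4, 20, 7)
sentences = torch.randn(4, 12, 50)
z_act, act_recon = action_rae(actions)
z_lang, lang_recon = language_rae(sentences)

# Pairing idea: reconstruct each modality and align the paired latent vectors.
loss = (F.mse_loss(act_recon, actions)
        + F.mse_loss(lang_recon, sentences)
        + F.mse_loss(z_act, z_lang))
```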
Vision-Based Estimation of Driving Energy for Planetary Rovers Using Deep Learning and Terramechanics
IEEE Robotics and Automation Letters, 2019, Vol. 4, No. 4, pp. 3876-3883
Authors: Higa, Shoya; Iwashita, Yumi; Otsu, Kyohei; Ono, Masahiro; Lamarre, Olivier; Didier, Annie; Hoffmann, Mark. Affiliations: CALTECH Jet Prop Lab, 4800 Oak Grove Dr, Pasadena CA 91109, USA; Univ Toronto Inst Aerosp Studies STARS Lab, Toronto ON M3H 5T6, Canada
This letter presents a prediction algorithm of driving energy for future Mars rover missions. The majority of future Mars rovers would be solar-powered, which would require energy-optimal driving to maximize the range...

Real Time Trajectory Prediction Using Deep Conditional Generative Models
IEEE Robotics and Automation Letters, 2020, Vol. 5, No. 2, pp. 970-976
Authors: Gomez-Gonzalez, Sebastian; Prokudin, Sergey; Schoelkopf, Bernhard; Peters, Jan. Affiliations: Max Planck Intelligent Syst, D-72072 Tubingen, Germany; Tech Univ Darmstadt, D-64289 Darmstadt, Germany
Data driven methods for time series forecasting that quantify uncertainty open new important possibilities for robot tasks with hard real time constraints, allowing the robot system to make decisions that trade off be...

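The abstract points to deep conditional generative models for uncertainty-aware trajectory forecasting. One common member of that family is a conditional VAE; the toy sketch below conditions on a flattened past trajectory and samples multiple candidate futures to expose predictive uncertainty. The CVAE choice, the flattened representation, and all sizes are assumptions made for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class TrajectoryCVAE(nn.Module):
    """Toy conditional VAE: flattened past trajectory -> distribution over futures."""
    def __init__(self, past_dim=20, future_dim=30, z_dim=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(past_dim + future_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + past_dim, hidden), nn.ReLU(), nn.Linear(hidden, future_dim))

    def forward(self, past, future):
        h = self.enc(torch.cat([past, future], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(torch.cat([z, past], dim=-1)), mu, logvar

    @torch.no_grad()
    def sample(self, past, n=20):
        # draw n futures per conditioning input to approximate predictive uncertainty
        z = torch.randn(n, past.size(0), self.mu.out_features)
        past_rep = past.unsqueeze(0).expand(n, -1, -1)
        return self.dec(torch.cat([z, past_rep], dim=-1))

model = TrajectoryCVAE()
past, future = torch.randn(4, 20), torch.randn(4, 30)
recon, mu, logvar = model(past, future)       # training-time pass
samples = model.sample(past)                  # (20, 4, 30) candidate futures
```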
Fully Automated Annotation With Noise-Masked Visual Markers for Deep-Learning-Based Object Detection
IEEE Robotics and Automation Letters, 2019, Vol. 4, No. 2, pp. 1972-1977
Authors: Kiyokawa, Takuya; Tomochika, Keita; Takamatsu, Jun; Ogasawara, Tsukasa. Affiliation: Nara Inst Sci & Technol Div Informat Sci, Nara 6300192, Japan
Automated factories use deep-learning-based vision systems to accurately detect various products. However, training such vision systems requires manual annotation of a significant amount of data to optimize the large ...