
Refine Search Results

Document Type

  • 426 journal articles
  • 11 conference papers

Collection Scope

  • 437 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 436 Engineering
    • 423 Electrical Engineering
    • 421 Control Science and Engineering
    • 12 Computer Science and Technology...
    • 6 Instrument Science and Technology
    • 3 Information and Communication Engineering
    • 1 Electronic Science and Technology (...
    • 1 Civil Engineering
    • 1 Transportation Engineering
    • 1 Biomedical Engineering (...
  • 3 Science
    • 2 Chemistry
    • 2 Biology
    • 1 Physics
  • 2 Medicine
    • 2 Clinical Medicine
  • 1 Management
    • 1 Management Science and Engineering (...

Topic

  • 437 deep learning fo...
  • 77 feature extracti...
  • 67 three-dimensiona...
  • 59 object detection
  • 55 training
  • 51 visual learning
  • 46 task analysis
  • 42 localization
  • 42 computer vision ...
  • 41 cameras
  • 40 semantic scene u...
  • 40 deep learning me...
  • 40 rgb-d perception
  • 39 segmentation and...
  • 39 robots
  • 38 semantics
  • 38 computer vision ...
  • 36 visualization
  • 31 point cloud comp...
  • 30 recognition

Institution

  • 6 google ch-8002 z...
  • 5 univ bonn d-5311...
  • 5 korea adv inst s...
  • 5 zhejiang univ co...
  • 5 tech univ munich...
  • 4 univ tubingen d-...
  • 4 keio univ yokoha...
  • 4 carnegie mellon ...
  • 4 univ chinese aca...
  • 4 nyu brooklyn ny ...
  • 3 univ michigan an...
  • 3 shanghai jiao to...
  • 3 zhejiang univ zh...
  • 3 natl univ def te...
  • 3 univ chinese aca...
  • 3 shanghai jiao to...
  • 3 southeast univ s...
  • 3 toyota res inst ...
  • 3 univ perugia dep...
  • 3 alibaba grp peop...

Author

  • 8 tombari federico
  • 8 giusti alessandr...
  • 8 stachniss cyrill
  • 7 behley jens
  • 7 sugiura komei
  • 6 caputo barbara
  • 5 guzzi jerome
  • 5 seo seung-woo
  • 5 weyler jan
  • 5 navab nassir
  • 5 hutter marco
  • 5 wu jun
  • 5 xiang zhiyu
  • 5 valada abhinav
  • 4 shin ukcheol
  • 4 van gool luc
  • 4 gambardella luca...
  • 4 nava mirko
  • 4 garg sourav
  • 4 wang yue

Language

  • 437 English

Search query: Subject = "Deep Learning for Visual Perception"
437 records found; showing 271-280
Sort by:
E2EK: End-to-End Regression Network Based on Keypoint for 6D Pose Estimation
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, Vol. 7, Issue 3, pp. 6526-6533
Authors: Lin, Shifeng; Wang, Zunran; Ling, Yonggen; Tao, Yidan; Yang, Chenguang
Affiliations: South China Univ Technol Sch Automat Sci & Engn Guangzhou 510000 Guangdong Peoples R China; Tencent Robot X Shenzhen 518054 Peoples R China; Shanghai Jiao Tong Univ Shanghai 200240 Peoples R China; City Univ Hong Kong Hong Kong Peoples R China; Univ West England Bristol Robot Lab Bristol BS16 1QY Avon England
Deep learning-based methods are the mainstream of 6D object pose estimation and mainly include direct regression and two-stage pipelines. The former were initially favored by many scholars due to their simplicity ...
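The record above describes a keypoint-based alternative to direct pose regression. As a generic illustration of the second stage that two-stage 6D pose pipelines share (predicted 2D keypoints matched to known 3D model keypoints and solved with PnP/RANSAC), a minimal OpenCV sketch follows; it is not the E2EK network itself, and the function name and inputs are illustrative assumptions.

```python
# Generic second stage of a keypoint-based 6D pose pipeline: recover the
# rotation and translation from predicted 2D keypoints with PnP + RANSAC.
# This is NOT the E2EK network from the record above.
import numpy as np
import cv2

def pose_from_keypoints(kpts_2d, kpts_3d, K):
    """kpts_2d: (N, 2) detected keypoints in the image.
    kpts_3d: (N, 3) corresponding keypoints on the object model.
    K: (3, 3) camera intrinsic matrix.
    Returns a 3x3 rotation matrix and a 3-vector translation."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        kpts_3d.astype(np.float32),
        kpts_2d.astype(np.float32),
        K.astype(np.float32),
        distCoeffs=None,
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)  # axis-angle -> rotation matrix
    return R, tvec.reshape(3)
```
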
Adaptive Cost Volume Fusion Network for Multi-Modal Depth Estimation in Changing Environments
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, Vol. 7, Issue 2, pp. 5095-5102
Authors: Park, Jinsun; Jeong, Yongseop; Joo, Kyungdon; Cho, Donghyeon; Kweon, In So
Affiliations: Pusan Natl Univ PNU Sch Comp Sci & Engn Busan 46241 South Korea; Korea Adv Inst Sci & Technol Robot Program Daejeon 34141 South Korea; UNIST Artificial Intelligence Grad Sch Ulsan 44919 South Korea; UNIST Dept Comp Sci & Engn Ulsan 44919 South Korea; Chungnam Natl Univ CNU Dept Elect Engn Daejeon 34134 South Korea; Korea Adv Inst Sci & Technol Sch Elect Engn Daejeon 34141 South Korea
In this letter, we propose an adaptive cost volume fusion algorithm for multi-modal depth estimation in changing environments. Our method takes measurements from multi-modal sensors to exploit their complementary char...
SCVP: Learning One-Shot View Planning via Set Covering for Unknown Object Reconstruction
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, Vol. 7, Issue 2, pp. 1463-1470
Authors: Pan, Sicong; Hu, Hao; Wei, Hui
Affiliations: Fudan Univ Sch Comp Sci Shanghai Key Lab Data Sci Lab Algorithms Cognit Models Shanghai 200438 Peoples R China
The view planning (VP) problem in robotic active vision enables a robot system to automatically perform object reconstruction tasks. Lacking prior knowledge, next-best-view (NBV) methods are typically used to plan a v...
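The SCVP record above casts view planning as a set-covering problem. The sketch below shows only the classic greedy set-cover baseline over candidate views, under the assumption that the surface points observable from each candidate view are already known; it is not the one-shot network the paper proposes.

```python
# Minimal greedy set-cover sketch for view planning: each candidate view
# "covers" a set of surface-point IDs; keep picking views until everything
# reachable has been observed. Generic illustration, not SCVP itself.
def greedy_view_cover(view_coverage, targets):
    """view_coverage: dict view_id -> set of surface point IDs it observes.
    targets: set of all surface point IDs to reconstruct.
    Returns a list of selected view IDs."""
    uncovered = set(targets)
    plan = []
    while uncovered:
        # pick the view that covers the most still-uncovered points
        best = max(view_coverage, key=lambda v: len(view_coverage[v] & uncovered))
        gain = view_coverage[best] & uncovered
        if not gain:  # remaining points are not observable from any view
            break
        plan.append(best)
        uncovered -= gain
    return plan

# Example: 3 candidate views over 5 surface points
views = {"v0": {0, 1, 2}, "v1": {2, 3}, "v2": {3, 4}}
print(greedy_view_cover(views, {0, 1, 2, 3, 4}))  # -> ['v0', 'v2']
```
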
BIMS-PU: Bi-Directional and Multi-Scale Point Cloud Upsampling
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, Vol. 7, Issue 3, pp. 7447-7454
Authors: Bai, Yechao; Wang, Xiaogang; Ang Jr, Marcelo H.; Rus, Daniela
Affiliations: Natl Univ Singapore Singapore 119077 Singapore; MIT 77 Massachusetts Ave Cambridge MA 02139 USA
The learning and aggregation of multi-scale features are essential in empowering neural networks to capture the fine-grained geometric details in the point cloud upsampling task. Most existing approaches extract multi...
Uncertainty-Assisted Image-Processing for Human-Robot Close Collaboration
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, Vol. 7, Issue 2, pp. 4236-4243
Authors: Sajedi, Seyedomid; Liu, Wansong; Eltouny, Kareem; Behdad, Sara; Zheng, Minghui; Liang, Xiao
Affiliations: Univ Buffalo Civil Struct & Environm Engn Dept Buffalo NY 14260 USA; Univ Buffalo Mech & Aerosp Engn Dept Buffalo NY 14260 USA; Univ Florida Environm Engn Sci Dept Gainesville FL 32611 USA
The safety of human workers has been the main concern in human-robot close collaboration. Along with rapidly developing artificial intelligence techniques, deep learning models using two-dimensional images have become ...
Partial-to-Partial Point Generation Network for Point Cloud Completion
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, Vol. 7, Issue 4, pp. 11990-11997
Authors: Zhang, Ziyu; Yu, Yi; Da, Feipeng
Affiliations: Southeast Univ Sch Automat Nanjing 211189 Jiangsu Peoples R China
Point cloud completion aims at predicting dense complete 3D shapes from sparse incomplete point clouds captured from 3D sensors or scanners. It plays an essential role in various applications such as autonomous drivin...
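Point cloud completion networks of this kind are commonly trained and evaluated with the Chamfer distance between the predicted and ground-truth point sets. A plain NumPy sketch of that metric follows; it is the standard formulation, not code from the paper above.

```python
# Chamfer distance between two point sets: for every predicted point take the
# squared distance to its nearest ground-truth point, and vice versa, then
# average both directions. Generic sketch, not the paper's implementation.
import numpy as np

def chamfer_distance(pred, gt):
    """pred: (N, 3) predicted points, gt: (M, 3) ground-truth points."""
    d = np.sum((pred[:, None, :] - gt[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

pred = np.random.rand(1024, 3)
gt = np.random.rand(2048, 3)
print(chamfer_distance(pred, gt))
```
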
Visual Attention-Based Self-Supervised Absolute Depth Estimation Using Geometric Priors in Autonomous Driving
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, Vol. 7, Issue 4, pp. 11998-12005
Authors: Xiang, Jie; Wang, Yun; An, Lifeng; Liu, Haiyang; Wang, Zijun; Liu, Jian
Affiliations: Chinese Acad Sci Inst Microelect Beijing 100029 Peoples R China; Univ Chinese Acad Sci Sch Elect Elect & Commun Engn Beijing 100049 Peoples R China
Although existing monocular depth estimation methods have made great progress, predicting an accurate absolute depth map from a single image is still challenging due to the limited modeling capacity of networks and th...
Cross-View and Cross-Domain Underwater Localization Based on Optical Aerial and Acoustic Underwater Images
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, Vol. 7, Issue 2, pp. 4969-4974
Authors: Dos Santos, Matheus M.; De Giacomo, Giovanni G.; Drews-Jr, Paulo L. J.; Botelho, Silvia S. C.
Affiliations: Univ Fed Rio Grande FURG Ctr Computat Sci C3 Intelligent Robot & Automat Grp NAUTEC BR-96203900 Rio Grande Brazil
Cross-view image matching has been widely explored for terrestrial image localization using aerial images from drones or satellites. This study extends the cross-view image matching idea and proposes a cross-domain and cr...
Detaching and Boosting: Dual Engine for Scale-Invariant Self-Supervised Monocular Depth Estimation
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, Vol. 7, Issue 4, pp. 12094-12101
Authors: Jiang, Peizhe; Yang, Wei; Ye, Xiaoqing; Tan, Xiao; Wu, Meng
Affiliations: Northwestern Polytech Univ Sch Marine Sci & Technol Xian 710072 Peoples R China; Baidu Inc Dept Comp Vis Technol VIS Beijing 100085 Peoples R China
Monocular depth estimation (MDE) in the self-supervised setting has emerged as a promising approach because it does not require ground-truth depth. Despite continuous efforts, MDE is still sensitive to scale...
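Because self-supervised monocular depth is recovered only up to an unknown scale, evaluations of such methods typically align each prediction to the ground truth by the ratio of medians before computing errors. A small sketch of that common median-scaling step is given below as general background; it is an assumed evaluation convention, not the paper's own protocol.

```python
# Per-image median scaling: align a scale-ambiguous depth prediction to the
# ground truth before computing the AbsRel error. Generic background sketch.
import numpy as np

def median_scaled_abs_rel(pred_depth, gt_depth, mask):
    """pred_depth, gt_depth: (H, W) arrays; mask: boolean array of valid pixels."""
    pred, gt = pred_depth[mask], gt_depth[mask]
    pred = pred * (np.median(gt) / np.median(pred))  # per-image median alignment
    return np.mean(np.abs(pred - gt) / gt)           # AbsRel error
```
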
On the Coupling of Depth and Egomotion Networks for Self-Supervised Structure from Motion
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, Vol. 7, Issue 3, pp. 6766-6773
Authors: Wagstaff, Brandon; Peretroukhin, Valentin; Kelly, Jonathan
Affiliations: Univ Toronto Inst Aerosp Studies UTIAS Space & Terr Autonomous Robot Syst STARS Lab Toronto ON M3H 5T6 Canada; MIT Comp Sci & Artificial Intelligence Lab Cambridge MA 02139 USA
Structure from motion (SfM) has recently been formulated as a self-supervised learning problem, where neural network models of depth and egomotion are learned jointly through view synthesis. Herein, we address the ope...
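The record above refers to the view-synthesis objective that couples depth and egomotion networks: the source image is warped into the target view using the predicted depth and relative pose, and the photometric difference supervises both networks. A simplified single-scale PyTorch sketch of that loss follows (plain L1 photometric error, no occlusion handling or auto-masking); it is a generic reconstruction of the standard formulation, not the paper's code.

```python
# View-synthesis photometric loss: back-project target pixels with predicted
# depth, transform them by the predicted relative pose, project into the
# source camera, resample the source image, and compare to the target image.
import torch
import torch.nn.functional as F

def photometric_loss(img_tgt, img_src, depth_tgt, T_tgt_to_src, K):
    """img_*: (B, 3, H, W); depth_tgt: (B, 1, H, W);
    T_tgt_to_src: (B, 4, 4) relative pose; K: (B, 3, 3) intrinsics."""
    B, _, H, W = img_tgt.shape
    device = img_tgt.device
    # pixel grid in homogeneous coordinates, shape (B, 3, H*W)
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device), torch.arange(W, device=device), indexing="ij"
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()
    pix = pix.reshape(1, 3, -1).expand(B, -1, -1)
    # back-project to 3D points in the target camera frame
    cam = torch.linalg.inv(K) @ pix * depth_tgt.reshape(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    # transform into the source frame and project with the intrinsics
    src = (T_tgt_to_src @ cam_h)[:, :3]
    uv = K @ src
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)
    # normalise to [-1, 1] for grid_sample and resample the source image
    u = 2 * uv[:, 0] / (W - 1) - 1
    v = 2 * uv[:, 1] / (H - 1) - 1
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)
    img_src_warped = F.grid_sample(img_src, grid, align_corners=True)
    return (img_tgt - img_src_warped).abs().mean()
```
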