
Refine Search Results

Document Type

  • 20,858 conference papers
  • 105 journal articles
  • 43 books

Collection Scope

  • 21,005 electronic documents
  • 1 print holding


Subject Classification

  • 13,618 Engineering
    • 11,054 Computer Science & Technology...
    • 2,651 Mechanical Engineering
    • 2,251 Software Engineering
    • 914 Optical Engineering
    • 885 Electrical Engineering
    • 528 Control Science & Engineering
    • 476 Information & Communication Engineering
    • 216 Surveying & Mapping Science & Technology
    • 135 Bioengineering
    • 127 Biomedical Engineering (awardable in...
    • 98 Electronic Science & Technology (awa...
    • 92 Instrument Science & Technology
    • 46 Safety Science & Engineering
    • 40 Architecture
    • 40 Chemical Engineering & Technology
    • 39 Civil Engineering
    • 37 Transportation Engineering
    • 35 Mechanics (awardable in Engineering, Sci...
    • 33 Aeronautical & Astronautical Science & Tech...
  • 3,494 Medicine
    • 3,489 Clinical Medicine
    • 32 Basic Medicine (awardable in Medicine...
  • 2,246 Science
    • 1,144 Physics
    • 1,081 Mathematics
    • 401 Biology
    • 384 Statistics (awardable in Science...
    • 245 Systems Science
    • 46 Chemistry
  • 343 Management
    • 176 Management Science & Engineering (awa...
    • 168 Library, Information & Archival Manag...
    • 34 Business Administration
  • 31 Law
  • 19 Agronomy
  • 15 Education
  • 8 Economics
  • 5 Art
  • 2 Military Science
  • 1 Literature

Topic

  • 8,141 computer vision
  • 2,886 training
  • 2,839 pattern recognit...
  • 1,809 computational mo...
  • 1,715 visualization
  • 1,492 cameras
  • 1,433 three-dimensiona...
  • 1,433 feature extracti...
  • 1,366 shape
  • 1,360 face recognition
  • 1,242 image segmentati...
  • 1,135 robustness
  • 1,124 semantics
  • 992 computer archite...
  • 984 object detection
  • 982 layout
  • 959 benchmark testin...
  • 935 codes
  • 899 computer science
  • 898 object recogniti...

Institution

  • 174 univ sci & techn...
  • 158 univ chinese aca...
  • 153 carnegie mellon ...
  • 145 chinese univ hon...
  • 109 microsoft resear...
  • 103 zhejiang univ pe...
  • 99 swiss fed inst t...
  • 95 tsinghua univers...
  • 91 microsoft res as...
  • 90 tsinghua univ pe...
  • 88 shanghai ai lab ...
  • 81 zhejiang univers...
  • 77 alibaba grp peop...
  • 74 hong kong univ s...
  • 73 university of sc...
  • 72 peking univ peop...
  • 72 university of ch...
  • 68 shanghai jiao to...
  • 66 univ oxford oxfo...
  • 65 google res mount...

Author

  • 80 van gool luc
  • 70 zhang lei
  • 58 timofte radu
  • 48 yang yi
  • 47 luc van gool
  • 46 xiaoou tang
  • 44 tian qi
  • 43 darrell trevor
  • 42 loy chen change
  • 42 sun jian
  • 41 qi tian
  • 40 li stan z.
  • 38 li fei-fei
  • 37 chen xilin
  • 36 shan shiguang
  • 35 zhou jie
  • 35 vasconcelos nuno
  • 35 liu yang
  • 35 torralba antonio
  • 34 liu xiaoming

Language

  • 20,982 English
  • 10 Chinese
  • 5 Turkish
  • 5 Other
  • 2 Japanese
  • 2 Portuguese
Search query: "Any field = 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016"
21,006 records; showing 4911-4920
Polygonal Point Set Tracking
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Nam, Gunhee; Heo, Miran; Oh, Seoung Wug; Lee, Joon-Young; Kim, Seon Joo (Lunit Inc, Seoul, South Korea; Yonsei Univ, Seoul, South Korea; Adobe Res, Redmond, WA, USA)
In this paper, we propose a novel learning-based polygonal point set tracking method. Compared to existing video object segmentation (VOS) methods that propagate pixel-wise object mask information, we propagate a poly...
RelTransformer: A Transformer-Based Long-Tail Visual Relationship Recognition
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Chen, Jun; Agarwal, Aniket; Abdelkarim, Sherif; Zhu, Deyao; Elhoseiny, Mohamed (King Abdullah Univ Sci & Technol, Thuwal, Saudi Arabia; Indian Inst Technol, Chennai, Tamil Nadu, India)
The visual relationship recognition (VRR) task aims at understanding the pairwise visual relationships between interacting objects in an image. These relationships typically have a long-tail distribution due to their ...
Mask3D: Pre-training 2D Vision Transformers by Learning Masked 3D Priors
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Hou, Ji; Dai, Xiaoliang; He, Zijian; Dai, Angela; Niessner, Matthias (Meta Real Labs, Menlo Pk, CA 94025, USA; Tech Univ Munich, Munich, Germany)
Current popular backbones in computer vision, such as Vision Transformers (ViT) and ResNets, are trained to perceive the world from 2D images. However, to more effectively understand 3D structural priors in 2D backbone...
Learning to Refactor Action and Co-occurrence Features for Temporal Action Localization
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Xia, Kun; Wang, Le; Zhou, Sanping; Zheng, Nanning; Tang, Wei (Xi An Jiao Tong Univ, Inst Artificial Intelligence & Robot, Xian, Peoples R China; Univ Illinois, Chicago, IL, USA)
The main challenge of Temporal Action Localization is to retrieve subtle human actions from various co-occurring ingredients, e.g., context and background, in an untrimmed video. While prior approaches have achieved s...
Unified Language-driven Zero-shot Domain Adaptation
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Yang, Senqiao; Tian, Zhuotao; Jiang, Li; Jia, Jiaya (Chinese Univ Hong Kong, Hong Kong, Peoples R China; Harbin Inst Technol, Shenzhen, Peoples R China; Chinese Univ Hong Kong, Shenzhen, Peoples R China)
This paper introduces Unified Language-driven Zero-shot Domain Adaptation (ULDA), a novel task setting that enables a single model to adapt to diverse target domains without explicit domain-ID knowledge. We identify ...
Deep Hashing Network for Unsupervised Domain Adaptation
30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Venkateswara, Hemanth; Eusebio, Jose; Chakraborty, Shayok; Panchanathan, Sethuraman (Arizona State Univ, Ctr Cognit Ubiquitous Comp, Tempe, AZ 85287, USA)
In recent years, deep neural networks have emerged as a dominant machine learning tool for a wide variety of application domains. However, training a deep neural network requires a large amount of labeled data, which ...
DETRs with Hybrid Matching
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Jia, Ding; Yuan, Yuhui; He, Haodi; Wu, Xiaopei; Yu, Haojun; Lin, Weihong; Sun, Lei; Zhang, Chao; Hu, Han (Peking Univ, Beijing, Peoples R China; Stanford Univ, Stanford, CA, USA; Zhejiang Univ, Hangzhou, Peoples R China; Microsoft Res Asia, Beijing, Peoples R China)
One-to-one set matching is a key design for DETR to establish its end-to-end capability, so that object detection does not require a hand-crafted NMS (non-maximum suppression) to remove duplicate detections. This end-...
ASM-Loc: Action-aware Segment Modeling for Weakly-Supervised Temporal Action Localization
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: He, Bo; Yang, Xitong; Kang, Le; Cheng, Zhiyu; Zhou, Xin; Shrivastava, Abhinav (Univ Maryland, College Pk, MD 20742, USA; Baidu Res, Sunnyvale, CA, USA)
Weakly-supervised temporal action localization aims to recognize and localize action segments in untrimmed videos given only video-level action labels for training. Without the boundary information of action segments,...
InOut: Diverse Image Outpainting via GAN Inversion
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Cheng, Yen-Chi; Lin, Chieh Hubert; Lee, Hsin-Ying; Ren, Jian; Tulyakov, Sergey; Yang, Ming-Hsuan (Carnegie Mellon Univ, Pittsburgh, PA 15213, USA; UC Merced, Merced, CA, USA; Snap Inc, Santa Monica, CA, USA; Yonsei Univ, Seoul, South Korea; Google Res, Mountain View, CA, USA)
Image outpainting seeks a semantically consistent extension of the input image beyond its available content. Compared to inpainting - filling in missing pixels in a way coherent with the neighboring pixels - outpa...
Captioning Images with Diverse Objects
30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Venugopalan, Subhashini; Mooney, Raymond; Hendricks, Lisa Anne; Darrell, Trevor; Rohrbach, Marcus; Saenko, Kate (UT Austin, Austin, TX 78712, USA; Univ Calif Berkeley, Berkeley, CA, USA; Boston Univ, Boston, MA 02215, USA)
Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can d...