
Refine Results

Document Type

  • 12,844 conference papers
  • 13 journal articles
  • 2 books

Holdings

  • 12,859 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 7,573 Engineering
    • 6,863 Computer Science and Technology...
    • 880 Mechanical Engineering
    • 814 Software Engineering
    • 435 Control Science and Engineering
    • 360 Optical Engineering
    • 306 Electrical Engineering
    • 209 Instrument Science and Technology
    • 124 Information and Communication Engineering
    • 91 Bioengineering
    • 62 Biomedical Engineering (...)
    • 39 Electronic Science and Technology (...)
    • 34 Safety Science and Engineering
    • 26 Chemical Engineering and Technology
    • 21 Transportation Engineering
    • 20 Architecture
    • 18 Civil Engineering
  • 2,957 Medicine
    • 2,956 Clinical Medicine
    • 15 Basic Medicine (...)
    • 12 Pharmacy (...)
  • 700 Science
    • 359 Physics
    • 225 Mathematics
    • 175 Systems Science
    • 95 Statistics (...)
    • 93 Biology
    • 22 Chemistry
  • 201 Art
    • 201 Design (...)
  • 84 Management
    • 59 Library, Information and Archives Management...
    • 25 Management Science and Engineering (...)
    • 14 Business Administration
  • 23 Law
    • 21 Sociology
  • 5 Agriculture
  • 4 Education
  • 2 Economics
  • 1 Military Science

Topics

  • 6,464 computer vision
  • 2,688 training
  • 2,437 pattern recognit...
  • 1,780 computational mo...
  • 1,522 visualization
  • 1,348 three-dimensiona...
  • 1,091 computer archite...
  • 1,063 semantics
  • 997 benchmark testin...
  • 976 codes
  • 970 conferences
  • 854 feature extracti...
  • 830 cameras
  • 771 task analysis
  • 707 deep learning
  • 646 image segmentati...
  • 611 object detection
  • 595 shape
  • 554 transformers
  • 538 neural networks

Institutions

  • 132 univ sci & techn...
  • 122 carnegie mellon ...
  • 120 tsinghua univ pe...
  • 114 univ chinese aca...
  • 113 chinese univ hon...
  • 94 tsinghua univers...
  • 91 zhejiang univ pe...
  • 91 swiss fed inst t...
  • 85 peng cheng lab p...
  • 81 university of ch...
  • 80 zhejiang univers...
  • 77 shanghai ai lab ...
  • 77 peng cheng labor...
  • 75 university of sc...
  • 69 shanghai jiao to...
  • 68 shanghai jiao to...
  • 67 alibaba grp peop...
  • 67 stanford univ st...
  • 66 univ hong kong p...
  • 64 sensetime res pe...

Authors

  • 77 timofte radu
  • 63 van gool luc
  • 45 zhang lei
  • 36 yang yi
  • 36 luc van gool
  • 34 tao dacheng
  • 31 loy chen change
  • 29 chen chen
  • 28 sun jian
  • 28 qi tian
  • 25 li xin
  • 24 liu yang
  • 24 tian qi
  • 24 ying shan
  • 23 wang xinchao
  • 23 zha zheng-jun
  • 23 boxin shi
  • 21 zhou jie
  • 21 vasconcelos nuno
  • 20 luo ping

Language

  • 12,851 English
  • 7 Other
  • 1 Chinese

Search query: Any field = "IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops"
12,859 records in total; showing results 4871-4880.
φ-SfT: Shape-from-Template with a Physics-Based Deformation Model
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Kairanda, Navami; Tretschk, Edith; Elgharib, Mohamed; Theobalt, Christian; Golyanik, Vladislav (Max Planck Inst Informat, SIC, Saarbrucken, Germany; Saarland Univ, SIC, Saarbrucken, Germany)
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera while assuming a 3D state known in advance (a template). This is an important yet challenging problem due to the un...
Geometric Anchor Correspondence Mining with Uncertainty Modeling for Universal Domain Adaptation
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Chen, Liang; Lou, Yihang; He, Jianzhong; Bai, Tao; Deng, Minghua (Peking Univ, Sch Math Sci, Beijing, Peoples R China; Huawei Technol, Intelligent Vis Dept, Beijing, Peoples R China)
Universal domain adaptation (UniDA) aims to transfer the knowledge learned from a label-rich source domain to a label-scarce target domain without any constraints on the label space. However, domain shift and category...
Keep it Accurate and Diverse: Enhancing Action Recognition Performance by Ensemble Learning
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Bagheri, Mohammad Ali; Gao, Qigang; Escalera, Sergio; Clapes, Albert; Nasrollahi, Kamal; Holte, Michael B.; Moeslund, Thomas B. (Dalhousie Univ, Fac Comp Sci, Halifax, NS, Canada; UAB, Comp Vis Ctr, Barcelona 08193, Spain; Univ Barcelona, Dept Appl Mathemat, E-08007 Barcelona, Spain; Visual Anal People VAP Lab, DK-9000 Aalborg, Denmark)
The performance of different action recognition techniques has recently been studied by several computer vision researchers. However, the potential improvement in classification through classifier fusion by ensemble-b...
StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Hoellein, Lukas; Johnson, Justin; Niessner, Matthias (Tech Univ Munich, Munich, Germany; Univ Michigan, Ann Arbor, MI 48109, USA)
We apply style transfer on mesh reconstructions of indoor scenes. This enables VR applications like experiencing 3D environments painted in the style of a favorite artist. Style transfer typically operates on 2D image...
ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Cai, Mu; Liu, Haotian; Mustikovela, Siva Karthik; Meyer, Gregory P.; Chai, Yuning; Park, Dennis; Lee, Yong Jae (Univ Wisconsin, Madison, WI 53706, USA; Cruise LLC, San Francisco, CA, USA)
While existing large vision-language multimodal models focus on whole image understanding, there is a prominent gap in achieving region-specific comprehension. Current approaches that use textual coordinates or spatia...
Scan2Cap: Context-aware Dense Captioning in RGB-D Scans
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Chen, Dave Zhenyu; Gholami, Ali; Niesner, Matthias; Chang, Angel X. (Tech Univ Munich, Munich, Germany; Simon Fraser Univ, Burnaby, BC, Canada)
We introduce the task of dense captioning in 3D scans from commodity RGB-D sensors. As input, we assume a point cloud of a 3D scene; the expected output is the bounding boxes along with the descriptions for the underly...
DivCo: Diverse Conditional Image Synthesis via Contrastive Generative Adversarial Network
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Liu, Rui; Ge, Yixiao; Choi, Ching Lam; Wang, Xiaogang; Li, Hongsheng (Chinese Univ Hong Kong, CUHK SenseTime Joint Lab, Hong Kong, Peoples R China; NVIDIA, NVIDIA AI Technol Ctr, Hong Kong, Peoples R China; Xidian Univ, Sch CST, Xian, Peoples R China)
Conditional generative adversarial networks (cGANs) aim to synthesize diverse images given the input conditions and latent codes, but unfortunately, they usually suffer from the issue of mode collapse. To solve t...
GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Bao, Chong; Zhang, Yinda; Li, Yuan; Zhang, Xiyu; Yang, Bangbang; Bao, Hujun; Pollefeys, Marc; Zhang, Guofeng; Cui, Zhaopeng (Zhejiang Univ, State Key Lab CAD & CG, Hangzhou, Peoples R China; Google, Mountain View, CA 94043, USA; Swiss Fed Inst Technol, Zurich, Switzerland; ByteDance, Beijing, Peoples R China)
Recently, we have witnessed the explosive growth of various volumetric representations in modeling animatable head avatars. However, due to the diversity of frameworks, there is no practical method to support high-lev...
SelfSAGCN: Self-Supervised Semantic Alignment for Graph Convolution Network
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Yang, Xu; Deng, Cheng; Dang, Zhiyuan; Wei, Kun; Yan, Junchi (Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China; Shanghai Jiao Tong Univ, Dept CSE, Shanghai, Peoples R China; Shanghai Jiao Tong Univ, MoE Key Lab Artificial Intelligence, Shanghai, Peoples R China)
Graph convolution networks (GCNs) are a powerful deep learning approach and have been successfully applied to representation learning on graphs in a variety of real-world applications. Despite their success, two funda...
Improving Few-Shot User-Specific Gaze Adaptation via Gaze Redirection Synthesis
32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Yu, Yu; Liu, Gang; Odobez, Jean-Marc (Idiap Res Inst, CH-1920 Martigny, Switzerland; Ecole Polytech Fed Lausanne, CH-1015 Lausanne, Switzerland)
As an indicator of human attention, gaze is a subtle behavioral cue which can be exploited in many applications. However, inferring 3D gaze direction is challenging even for deep neural networks given the lack of large...