
Refine Search Results

Document Type

  • 45 journal articles
  • 43 conference papers

Collection Scope

  • 88 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 55 Engineering
    • 41 Computer Science and Technology...
    • 39 Software Engineering
    • 10 Bioengineering
    • 9 Control Science and Engineering
    • 3 Mechanical Engineering
    • 3 Information and Communication Engineering
    • 2 Electrical Engineering
    • 1 Optical Engineering
    • 1 Power Engineering and Engineering Therm...
    • 1 Electronic Science and Technology (...
    • 1 Architecture
    • 1 Civil Engineering
    • 1 Chemical Engineering and Technology
    • 1 Agricultural Engineering
    • 1 Forestry Engineering
  • 45 Science
    • 32 Mathematics
    • 14 Statistics (degree conferrable in Science, ...
    • 10 Biology
    • 8 Systems Science
    • 3 Physics
    • 1 Chemistry
    • 1 Geophysics
  • 12 Management
    • 8 Library, Information and Archives Man...
    • 4 Management Science and Engineering (...
    • 2 Business Administration
  • 2 Law
    • 2 Sociology
  • 1 Education
    • 1 Education
    • 1 Psychology (degree conferrable in Education...
  • 1 Literature
    • 1 Chinese Language and Literature
    • 1 Foreign Languages and Literature
  • 1 Agronomy

Topic

  • 5 generative adver...
  • 4 machine learning
  • 3 deep neural netw...
  • 3 semantics
  • 2 object detection
  • 2 reinforcement le...
  • 2 inference engine...
  • 2 statistics
  • 2 iterative method...
  • 2 optimization
  • 2 gradient methods
  • 2 diffusion
  • 2 molecules
  • 2 decision making
  • 2 crowdsourcing
  • 2 bayesian network...
  • 2 calibration
  • 2 monte carlo meth...
  • 2 supervised learn...
  • 1 image enhancemen...

Institution

  • 9 dept. of comp. s...
  • 8 dept. of comp. s...
  • 6 dept. of comp. s...
  • 5 the hong kong un...
  • 5 key lab of intel...
  • 4 gaoling school o...
  • 4 peng cheng labor...
  • 4 tencent ai lab
  • 4 department of co...
  • 4 beijing key labo...
  • 4 hong kong univer...
  • 4 dept. of physics...
  • 3 school of comput...
  • 3 dept. of comp. s...
  • 3 south china univ...
  • 3 dept. of comp. s...
  • 3 gaoling school o...
  • 3 dept. of comp. s...
  • 2 intel labs china
  • 2 dept. of comp. s...

Author

  • 58 zhu jun
  • 20 jun zhu
  • 19 zhang bo
  • 18 li chongxuan
  • 15 su hang
  • 9 bao fan
  • 7 dong yinpeng
  • 7 chongxuan li
  • 7 xu kun
  • 6 chen jianfei
  • 6 bo zhang
  • 5 zhang lei
  • 5 pang tianyu
  • 5 liu shilong
  • 5 zhang hao
  • 5 li feng
  • 4 du chao
  • 4 hang su
  • 3 ren tongzheng
  • 3 guoqiang wu

Language

  • 88 English
Search condition: "Institution = State Key Lab for Intell. Tech. and Systems"
88 records in total; showing 41-50
Improving black-box adversarial attacks with a transfer-based prior
arXiv 2019
Authors: Cheng, Shuyu Dong, Yinpeng Pang, Tianyu Su, Hang Zhu, Jun Dept. of Comp. Sci. and Tech. BNRist Center State Key Lab for Intell. Tech. and Sys. Institute for AI THBI Lab Tsinghua University Beijing 100084 China
We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients. Previous methods tried to approximate the gradient...
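To make the black-box setting above concrete, here is a minimal sketch of a purely query-based attack that estimates gradients by finite differences; the `query_loss` oracle, step sizes, and query budgets are illustrative assumptions of mine, and this is not the transfer-based prior method the paper itself proposes.

```python
import numpy as np

def estimate_gradient(query_loss, x, sigma=0.01, n_samples=50):
    """Antithetic finite-difference gradient estimate from loss queries only.

    query_loss is a hypothetical black-box oracle returning the scalar attack
    loss at a point; the target model's gradients are never accessed.
    """
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)                     # random probe direction
        delta = query_loss(x + sigma * u) - query_loss(x - sigma * u)
        grad += (delta / (2.0 * sigma)) * u
    return grad / n_samples

def black_box_attack(query_loss, x, eps=0.03, step=0.005, iters=20):
    """Iterative sign ascent on the estimated gradient, projected to an L_inf ball."""
    x_adv = x.copy()
    for _ in range(iters):
        g = estimate_gradient(query_loss, x_adv)
        x_adv = np.clip(x_adv + step * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # keep pixels in a valid range
    return x_adv
```

Each gradient estimate here costs 2 × n_samples queries, which is the kind of query cost a transfer-based prior can help reduce.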
Understanding and stabilizing gans' training dynamics with control theory
arXiv 2019
Authors: Xu, Kun Li, Chongxuan Wei, Huanshu Zhu, Jun Zhang, Bo Department of Computer Science & Technology Institute for Artificial Intelligence BNRist Center THBI Lab State Key Lab for Intell. Tech. & Sys. Tsinghua University
Generative adversarial networks (GANs) have made significant progress on realistic image generation but often suffer from instability during the training process. Most previous analyses mainly focus on the equilibrium...
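For readers unfamiliar with the training process whose stability is analyzed here, the following toy loop shows the alternating generator/discriminator updates; the architectures, data, and hyperparameters are placeholders I chose for illustration, and none of the paper's control-theoretic stabilization is included.

```python
import torch
import torch.nn as nn

# Toy GAN on 1-D synthetic data, only to make the alternating update dynamics concrete.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # stand-in "real" samples
    fake = G(torch.randn(64, 8))               # generator samples

    # Discriminator step: real -> 1, fake -> 0 (fake detached from G's graph).
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: non-saturating loss, push D(fake) toward 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```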
Benchmarking adversarial robustness
arXiv 2019
Authors: Dong, Yinpeng Fu, Qi-An Yang, Xiao Pang, Tianyu Su, Hang Xiao, Zihao Zhu, Jun Dept. of Comp. Sci. and Tech. BNRist Center State Key Lab for Intell. Tech. & Sys. Institute for AI THBI Lab Tsinghua University Beijing 100084 China RealAI
Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning. While much effort has been made in recent years, it is ...
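A benchmark of adversarial robustness ultimately reduces to measuring accuracy under attack. The sketch below shows one such evaluation loop with single-step FGSM as a stand-in attack; the attack choice, epsilon, and data loader are my own assumptions, not the benchmark's actual protocol.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM perturbation; a standard attack used only to make the loop concrete."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return torch.clamp(x + eps * x.grad.sign(), 0.0, 1.0).detach()

def robust_accuracy(model, loader, eps=8 / 255):
    """Fraction of examples still classified correctly after the attack."""
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```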
To relieve your headache of training an MRF, take AdVIL
arXiv 2019
Authors: Li, Chongxuan Du, Chao Xu, Kun Welling, Max Zhu, Jun Zhang, Bo Department of Computer Science & Technology Institute for Artificial Intelligence BNRist Center THBI Lab State Key Lab for Intell. Tech. & Sys. Tsinghua University
We propose a black-box algorithm called Adversarial Variational Inference and Learning (AdVIL) to perform inference and learning on a general Markov random field (MRF). AdVIL employs two variational distributions to a...
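As background on variational inference for MRFs, here is the classic mean-field update for a binary pairwise model; it is deliberately not AdVIL (which the abstract describes as adversarial and black-box), and the coupling matrix `J` and fields `h` are illustrative inputs.

```python
import numpy as np

def mean_field_ising(J, h, iters=100):
    """Classic mean-field inference for a binary (+1/-1) MRF with symmetric
    couplings J (zero diagonal) and unary fields h.

    Background sketch only; this is NOT the AdVIL algorithm from the paper.
    """
    m = np.zeros_like(h, dtype=float)          # mean-field marginals E[s_i]
    for _ in range(iters):
        for i in range(len(h)):
            m[i] = np.tanh(h[i] + J[i] @ m)    # coordinate-ascent update
    return m
```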
DS3L: Deep self-semi-supervised learning for image recognition
arXiv 2019
Authors: Tsai, Tsung Wei Li, Chongxuan Zhu, Jun Dept. of Comp. Sci. & Tech. BNRist Center State Key Lab for Intell. Tech. & Sys. THBI Lab Tsinghua University Beijing 100084 China
Despite the recent progress in deep semi-supervised learning (Semi-SL), the amount of labels still plays a dominant role. The success of self-supervised learning (Self-SL) hints at a promising direction to exploit the va...
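One common way to combine Self-SL with Semi-SL is to add a self-supervised proxy loss on the unlabeled data next to the supervised loss on the labeled data. The sketch below uses rotation prediction as that proxy; the heads, the weighting `w`, and the proxy task itself are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def self_semi_loss(backbone, clf_head, rot_head, x_lab, y_lab, x_unlab, w=1.0):
    """Supervised cross-entropy on labeled data plus a self-supervised
    rotation-prediction loss on unlabeled data (assumes square images)."""
    sup = F.cross_entropy(clf_head(backbone(x_lab)), y_lab)

    # Build 4 rotated copies of each unlabeled image and predict the rotation.
    rots = torch.cat([torch.rot90(x_unlab, k, dims=(2, 3)) for k in range(4)])
    rot_labels = torch.arange(4).repeat_interleave(x_unlab.size(0))
    ssl = F.cross_entropy(rot_head(backbone(rots)), rot_labels)
    return sup + w * ssl
```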
Efficient Decision-based Black-box Adversarial Attacks on Face Recognition
IEEE/CVF Conference on Computer Vision and Pattern Recognition
Authors: Yinpeng Dong Hang Su Baoyuan Wu Zhifeng Li Wei Liu Tong Zhang Jun Zhu Dept. of Comp. Sci. and Tech. BNRist Center State Key Lab for Intell. Tech. & Sys. Institute for AI THBI Lab Tsinghua University Tencent AI Lab Hong Kong University of Science and Technology
Face recognition has made remarkable progress in recent years thanks to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can cause f...
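In the decision-based setting only the model's final decision (e.g., same identity or not) is observable. The following random-walk sketch shows the general shape of such an attack: stay adversarial while shrinking the distance to the original image. The `is_adversarial` oracle, step sizes, and the walk itself are illustrative; the paper's own, more query-efficient method is not reproduced here.

```python
import numpy as np

def decision_based_attack(is_adversarial, x, x_start, steps=1000, sigma=0.05):
    """Decision-based attack: only the hard-label oracle is_adversarial(x') is queried.

    x_start must already be adversarial (e.g., a photo of a different identity).
    """
    x_adv = x_start.copy()
    for _ in range(steps):
        candidate = x_adv + sigma * np.random.randn(*x.shape)   # random perturbation
        candidate = candidate + 0.01 * (x - candidate)          # small step toward the original
        candidate = np.clip(candidate, 0.0, 1.0)
        closer = np.linalg.norm(candidate - x) < np.linalg.norm(x_adv - x)
        if closer and is_adversarial(candidate):                # accept only if still adversarial
            x_adv = candidate
    return x_adv
```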
Efficient decision-based black-box adversarial attacks on face recognition
arXiv 2019
Authors: Dong, Yinpeng Su, Hang Wu, Baoyuan Li, Zhifeng Liu, Wei Zhang, Tong Zhu, Jun Dept. of Comp. Sci. and Tech. BNRist Center State Key Lab for Intell. Tech. & Sys. Institute for AI THBI Lab Tsinghua University Beijing 100084 China Tencent AI Lab Hong Kong University of Science and Technology
Face recognition has made remarkable progress in recent years thanks to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can cause f...
Analyzing the Noise Robustness of Deep Neural Networks
arXiv 2018
Authors: Liu, Mengchen Liu, Shixia Su, Hang Cao, Kelei Zhu, Jun School of Software TNList Lab State Key Lab for Intell. Tech. & Sys. Tsinghua University Dept. of Comp. Sci. & Tech. TNList Lab State Key Lab for Intell. Tech. & Sys. CBICR Center Tsinghua University
Deep neural networks (DNNs) are vulnerable to maliciously generated adversarial examples. These examples are intentionally crafted by adding imperceptible perturbations and often mislead a DNN into making an incorrec...
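A very simple way to probe noise robustness is to sweep a noise magnitude and record accuracy, as sketched below; this uses random Gaussian noise purely for illustration, whereas the paper analyzes adversarial (worst-case) perturbations with visual analytics.

```python
import torch

@torch.no_grad()
def noise_robustness_curve(model, x, y, eps_grid=(0.0, 0.02, 0.05, 0.1)):
    """Accuracy under additive Gaussian noise of increasing magnitude."""
    curve = []
    for eps in eps_grid:
        noisy = torch.clamp(x + eps * torch.randn_like(x), 0.0, 1.0)
        acc = (model(noisy).argmax(dim=1) == y).float().mean().item()
        curve.append((eps, acc))
    return curve
```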
Towards robust detection of adversarial examples
Proceedings of the 32nd International Conference on Neural Information Processing Systems
Authors: Tianyu Pang Chao Du Yinpeng Dong Jun Zhu Dept. of Comp. Sci. & Tech. State Key Lab for Intell. Tech. & Systems BNRist Center THBI Lab Tsinghua University Beijing China
Although recent progress is substantial, deep learning methods can be vulnerable to maliciously generated adversarial examples. In this paper, we present a novel training procedure and a thresholding test stra...
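A thresholding test generally works by scoring each input and flagging it when the score falls on the wrong side of a threshold calibrated on clean data. The sketch below shows that generic pattern; the scoring function and the calibration-by-quantile step are my assumptions, not the specific statistic proposed in the paper.

```python
import numpy as np

def threshold_detector(scores_clean, score_fn, x, fpr=0.05):
    """Flag an input as adversarial when its score falls below a threshold
    chosen from clean validation scores.

    scores_clean: detection scores of held-out clean inputs
    score_fn:     hypothetical scoring function, e.g. a confidence or density score
    """
    tau = np.quantile(scores_clean, fpr)   # allow roughly `fpr` false positives on clean data
    return score_fn(x) < tau
```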
Understanding human behaviors in crowds by imitating the decision-making process
arXiv 2018
Authors: Zou, Haosheng Su, Hang Song, Shihong Zhu, Jun Dept. of Comp. Sci. & Tech. State Key Lab of Intell. Tech. & Sys. TNList Lab CBICR Center Tsinghua University Beijing China
Crowd behavior understanding is crucial yet challenging across a wide range of applications, since crowd behavior is inherently determined by a sequential decision-making process based on various factors, such as the ...
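Imitating a decision-making process from recorded trajectories can be done, in its simplest form, by behavior cloning on (state, action) pairs, as sketched below; the network, loss, and data format are illustrative assumptions rather than the paper's actual model.

```python
import torch
import torch.nn as nn

def behavior_cloning(states, actions, epochs=50, lr=1e-3):
    """Fit a policy to observed (state, action) pairs, e.g. from pedestrian trajectories.

    states:  float tensor of shape (N, state_dim)
    actions: float tensor of shape (N, action_dim)
    """
    policy = nn.Sequential(nn.Linear(states.size(1), 64), nn.ReLU(),
                           nn.Linear(64, actions.size(1)))
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(states), actions)  # regress observed actions
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy
```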