
Refine Results

Document Type

  • 45 journal articles
  • 43 conference papers

Collection Scope

  • 88 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 55 Engineering
    • 41 Computer Science and Technology...
    • 39 Software Engineering
    • 10 Bioengineering
    • 9 Control Science and Engineering
    • 3 Mechanical Engineering
    • 3 Information and Communication Engineering
    • 2 Electrical Engineering
    • 1 Optical Engineering
    • 1 Power Engineering and Engineering Therm...
    • 1 Electronic Science and Technology (...
    • 1 Architecture
    • 1 Civil Engineering
    • 1 Chemical Engineering and Technology
    • 1 Agricultural Engineering
    • 1 Forestry Engineering
  • 45 Science
    • 32 Mathematics
    • 14 Statistics (...
    • 10 Biology
    • 8 Systems Science
    • 3 Physics
    • 1 Chemistry
    • 1 Geophysics
  • 12 Management
    • 8 Library, Information and Archives Manag...
    • 4 Management Science and Engineering (...
    • 2 Business Administration
  • 2 Law
    • 2 Sociology
  • 1 Education
    • 1 Education
    • 1 Psychology (...
  • 1 Literature
    • 1 Chinese Language and Literature
    • 1 Foreign Languages and Literature
  • 1 Agriculture

Topics

  • 5 generative adver...
  • 4 machine learning
  • 3 deep neural netw...
  • 3 semantics
  • 2 object detection
  • 2 reinforcement le...
  • 2 inference engine...
  • 2 statistics
  • 2 iterative method...
  • 2 optimization
  • 2 gradient methods
  • 2 diffusion
  • 2 molecules
  • 2 decision making
  • 2 crowdsourcing
  • 2 bayesian network...
  • 2 calibration
  • 2 monte carlo meth...
  • 2 supervised learn...
  • 1 image enhancemen...

Institutions

  • 9 dept. of comp. s...
  • 8 dept. of comp. s...
  • 6 dept. of comp. s...
  • 5 the hong kong un...
  • 5 key lab of intel...
  • 4 gaoling school o...
  • 4 peng cheng labor...
  • 4 tencent ai lab
  • 4 department of co...
  • 4 beijing key labo...
  • 4 hong kong univer...
  • 4 dept. of physics...
  • 3 school of comput...
  • 3 dept. of comp. s...
  • 3 south china univ...
  • 3 dept. of comp. s...
  • 3 gaoling school o...
  • 3 dept. of comp. s...
  • 2 intel labs china
  • 2 dept. of comp. s...

Authors

  • 58 zhu jun
  • 20 jun zhu
  • 19 zhang bo
  • 18 li chongxuan
  • 15 su hang
  • 9 bao fan
  • 7 dong yinpeng
  • 7 chongxuan li
  • 7 xu kun
  • 6 chen jianfei
  • 6 bo zhang
  • 5 zhang lei
  • 5 pang tianyu
  • 5 liu shilong
  • 5 zhang hao
  • 5 li feng
  • 4 du chao
  • 4 hang su
  • 3 ren tongzheng
  • 3 guoqiang wu

Language

  • 88 English
Search criteria: "Institution = State Key Lab for Intell. Tech. and Systems"
88 records; showing 31-40
On the convergence of prior-guided Zeroth-order optimization algorithms
arXiv
arXiv 2021
Authors: Cheng, Shuyu Wu, Guoqiang Zhu, Jun Dept. of Comp. Sci. and Tech. BNRist Center State Key Lab for Intell. Tech. & Sys. Institute for AI Tsinghua-Bosch Joint Center for ML Tsinghua University Beijing 100084 China Pazhou Lab Guangzhou 510330 China School of Software Shandong University China
Zeroth-order (ZO) optimization is widely used to handle challenging tasks, such as query-based black-box adversarial attacks and reinforcement learning. Various attempts have been made to integrate prior information i...
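
The vanilla random-direction estimator that such prior-guided methods build on can be sketched as follows; this is an illustrative Python example, not code from the paper, and the quadratic objective is a stand-in for a real black-box loss.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_queries=20, rng=None):
    """Vanilla zeroth-order gradient estimator via random finite differences.

    Illustrative only: the paper studies prior-guided variants that bias the
    sampled directions; here the probe directions are plain Gaussian noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    grad = np.zeros_like(x)
    for _ in range(num_queries):
        u = rng.standard_normal(x.shape)          # random probe direction
        grad += (f(x + mu * u) - fx) / mu * u     # forward-difference estimate
    return grad / num_queries

# Toy usage: minimize a quadratic with only function-value (black-box) access.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zo_gradient(f, x)
print(x)  # approaches the optimum at all-ones
```
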
Bi-level score matching for learning energy-based latent variable models
Proceedings of the 34th International Conference on Neural Information Processing Systems
Authors: Fan Bao Chongxuan Li Kun Xu Hang Su Jun Zhu Bo Zhang Dept. of Comp. Sci. & Tech. Institute for AI THBI Lab BNRist Center State Key Lab for Intell. Tech. & Sys. Tsinghua University Beijing China
Score matching (SM) [24] provides a compelling approach to learn energy-based models (EBMs) by avoiding the calculation of partition function. However, it remains largely open to learn energy-based latent variable mod...
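
As a rough illustration of learning an energy-based model without its partition function, the sketch below fits a toy one-parameter EBM with denoising score matching, a tractable relative of SM; it is not the paper's bi-level method for latent-variable models, and all names and settings are illustrative.

```python
import numpy as np

# Toy energy-based model E(x; m) = 0.5 * (x - m)^2, whose score is s(x; m) = m - x.
# Denoising score matching fits m from samples alone, never touching a partition
# function. Assumed toy setup; not the paper's bi-level procedure.

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=5000)   # samples from the unknown density
sigma = 0.1                                        # noise level of the perturbation kernel
m, lr = 0.0, 0.05

for _ in range(500):
    eps = rng.normal(scale=sigma, size=data.shape)
    x_noisy = data + eps
    score = m - x_noisy                            # model score at the noisy points
    target = -eps / sigma**2                       # score of the Gaussian perturbation kernel
    grad_m = np.mean(2 * (score - target))         # d/dm of the mean squared DSM loss
    m -= lr * grad_m

print(m)  # close to the data mean 2.0
```
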
Function space particle optimization for Bayesian neural networks
7th International Conference on Learning Representations, ICLR 2019
Authors: Wang, Ziyu Ren, Tongzheng Zhu, Jun Zhang, Bo Department of Computer Science and Technology Institute for Artificial Intelligence State Key Lab for Intell. Tech. and Sys. BNRist Center THBI Lab Tsinghua University China
While Bayesian neural networks (BNNs) have drawn increasing attention, their posterior inference remains challenging, due to the high-dimensional and over-parameterized nature. Recently, several highly flexible and sc...
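
For intuition about particle-based posterior approximation, here is a plain weight-space SVGD update on a toy Gaussian target; the paper's contribution is to run this kind of update in function space for BNNs, which this assumed toy sketch does not do.

```python
import numpy as np

# Plain (weight-space) Stein variational gradient descent on a 2-D Gaussian target.
# Illustrative only; the paper's function-space variant operates on network outputs.

def rbf_kernel(X, h=0.5):
    diff = X[:, None, :] - X[None, :, :]              # pairwise differences (n, n, d)
    K = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))
    grad_K = -diff / h**2 * K[..., None]               # d k(x_j, x_i) / d x_j
    return K, grad_K

def svgd_step(X, grad_log_p, step=0.2):
    K, grad_K = rbf_kernel(X)
    # phi(x_i) = mean_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K[..., None] * grad_log_p(X)[:, None, :] + grad_K).mean(axis=0)
    return X + step * phi

# Target: N(mu, I) with mu = (1, -1); grad log p(x) = mu - x.
mu = np.array([1.0, -1.0])
grad_log_p = lambda X: mu - X

particles = np.random.default_rng(0).normal(size=(20, 2)) * 2.0
for _ in range(1000):
    particles = svgd_step(particles, grad_log_p)
print(particles.mean(axis=0))  # approximately (1, -1)
```
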
Composite binary decomposition networks
33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Annual Conference on Innovative Applications of Artificial Intelligence, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019
Authors: Qiaoben, You Wang, Zheng Li, Jianguo Dong, Yinpeng Jiang, Yu-Gang Zhu, Jun Dept. of Comp. Sci. and Tech. State Key Lab for Intell. Tech. and Sys. Institute for AI Tsinghua University China School of Computer Science Fudan University China Intel Labs China China
Binary neural networks offer great resource and computing efficiency, but suffer from long training procedures and non-negligible accuracy drops compared with their full-precision counterparts. In this paper, we pr...
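
The general idea of expressing a full-precision tensor as a sum of scaled binary tensors can be sketched with a greedy residual binarization; this is an assumption-laden illustration, not the paper's composite decomposition algorithm, and all names are made up.

```python
import numpy as np

# Greedily approximate a float weight tensor as sum_k alpha_k * B_k with
# B_k in {-1, +1}. Illustrative sketch of the decomposition idea only.

def binary_decompose(w, num_bases=4):
    residual = w.copy()
    bases, scales = [], []
    for _ in range(num_bases):
        b = np.sign(residual)
        b[b == 0] = 1.0                      # keep the basis strictly binary
        alpha = np.mean(np.abs(residual))    # least-squares scale for sign(residual)
        bases.append(b)
        scales.append(alpha)
        residual = residual - alpha * b      # binarize what is left over
    return scales, bases

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
scales, bases = binary_decompose(w, num_bases=4)
w_hat = sum(a * b for a, b in zip(scales, bases))
print(np.linalg.norm(w - w_hat) / np.linalg.norm(w))  # error shrinks as num_bases grows
```
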
Scalable training of hierarchical topic models
44th International Conference on Very Large Data Bases, VLDB 2018
Authors: Chen, Jianfei Zhu, Jun Lu, Jie Liu, Shixia Dept. of Comp. Sci. and Tech. BNRist Center State Key Lab for Intell. Tech. and Sys. Tsinghua University Beijing 100084 China School of Software BNRist Center State Key Lab for Intell. Tech. and Sys. Tsinghua University Beijing 100084 China
Large-scale topic models serve as basic tools for feature extraction and dimensionality reduction in many practical applications. As a natural extension of flat topic models, hierarchical topic models (HTMs) are able to...
Reward shaping via meta-learning
arXiv
arXiv 2019
Authors: Zou, Haosheng Ren, Tongzheng Yan, Dong Su, Hang Zhu, Jun Dept. of Comp. Sci. & Tech. State Key Lab of Intell. Tech. & Sys. TNList Lab CBICR Center Tsinghua University Beijing China
Reward shaping is one of the most effective methods to tackle the crucial yet challenging problem of credit assignment in Reinforcement Learning (RL). However, designing shaping functions usually requires much expert ...
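
For context, classic potential-based shaping, which shaping-function methods build on, adds gamma * phi(s') - phi(s) to the environment reward; the potential below is a made-up example, whereas the paper learns the shaping signal via meta-learning rather than hand-designing it.

```python
# Potential-based reward shaping sketch. The potential function here is a
# hypothetical, hand-written example for illustration only.

GAMMA = 0.99

def potential(state):
    # Hypothetical potential: negative distance to a goal at the origin.
    x, y = state
    return -((x ** 2 + y ** 2) ** 0.5)

def shaped_reward(reward, state, next_state, gamma=GAMMA):
    """Add the shaping term F(s, s') = gamma * phi(s') - phi(s) to the reward."""
    return reward + gamma * potential(next_state) - potential(state)

# Example: a transition that moves closer to the goal earns a small bonus.
print(shaped_reward(0.0, (3.0, 4.0), (2.0, 4.0)))  # positive shaping bonus
```
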
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
IEEE/CVF Conference on Computer Vision and Pattern Recognition
Authors: Yinpeng Dong Tianyu Pang Hang Su Jun Zhu Dept. of Comp. Sci. and Tech. BNRist Center State Key Lab for Intell. Tech. & Sys. Institute for AI THBI Lab Tsinghua University
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making bl...
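
The translation-invariant idea of smoothing the input gradient with a pre-defined kernel before the sign step can be sketched as below; the kernel size, epsilon, and the random surrogate gradient are illustrative stand-ins, since a real attack would backpropagate a classification loss through a model to obtain the gradient.

```python
import numpy as np
from scipy.ndimage import convolve

# Sketch of a kernel-smoothed (translation-invariant) FGSM step.
# The gradient below is a random placeholder for the true input gradient.

def gaussian_kernel(size=15, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def ti_fgsm_step(image, grad, eps=8 / 255, kernel=None):
    kernel = gaussian_kernel() if kernel is None else kernel
    smoothed = np.stack([convolve(g, kernel, mode="constant") for g in grad])  # per-channel smoothing
    adv = image + eps * np.sign(smoothed)          # one FGSM step on the smoothed gradient
    return np.clip(adv, 0.0, 1.0)

# Toy usage with a random image and a random surrogate gradient.
rng = np.random.default_rng(0)
image = rng.uniform(size=(3, 32, 32))
grad = rng.normal(size=(3, 32, 32))
adv = ti_fgsm_step(image, grad)
print(np.max(np.abs(adv - image)) <= 8 / 255 + 1e-9)  # True: stays within the budget
```
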
Multi-objects generation with amortized structural regularization
arXiv
arXiv 2019
Authors: Xu, Kun Li, Chongxuan Zhu, Jun Zhang, Bo Dept. of Comp. Sci. & Tech. Institute for AI THBI Lab BNRist Center State Key Lab for Intell. Tech. & Sys. Tsinghua University Beijing China
Deep generative models (DGMs) have shown promise in image generation. However, most existing work learns the model by simply optimizing a divergence between the marginal distributions of the model and the data, ...
Generative Well-intentioned Networks
arXiv
arXiv 2019
Authors: Cosentino, Justin Zhu, Jun Dept. of Comp. Sci. & Tech. Institute for AI THBI Lab BNRist Center State Key Lab for Intell. Tech. & Sys. Tsinghua University Beijing China
We propose Generative Well-intentioned Networks (GWINs), a novel framework for increasing the accuracy of certainty-based, closed-world classifiers. A conditional generative network recovers the distribution of observ... 详细信息
Evading defenses to transferable adversarial examples by translation-invariant attacks
arXiv
arXiv 2019
Authors: Dong, Yinpeng Pang, Tianyu Su, Hang Zhu, Jun Dept. of Comp. Sci. and Tech. BNRist Center State Key Lab for Intell. Tech. and Sys. Institute for AI THBI Lab Tsinghua University Beijing 100084 China
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making bl...