
Refine Search Results

Document Type

  • 295 journal articles
  • 158 conference papers
  • 6 books

Holdings

  • 459 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 344 Engineering
    • 272 Computer Science and Technology...
    • 190 Software Engineering
    • 45 Control Science and Engineering
    • 44 Information and Communication Engineering
    • 35 Optical Engineering
    • 30 Bioengineering
    • 21 Biomedical Engineering (can confer...
    • 18 Electrical Engineering
    • 18 Electronic Science and Technology (can...
    • 15 Mechanical Engineering
    • 11 Chemical Engineering and Technology
    • 9 Materials Science and Engineering (can...
    • 8 Civil Engineering
    • 7 Mechanics (can confer engineering, sci...
    • 6 Instrument Science and Technology
    • 6 Architecture
    • 6 Safety Science and Engineering
  • 180 Natural Sciences
    • 100 Mathematics
    • 60 Physics
    • 44 Statistics (can confer science...
    • 34 Biology
    • 20 Systems Science
    • 17 Chemistry
    • 7 Geophysics
  • 45 Management
    • 28 Management Science and Engineering (can...
    • 22 Library, Information and Archives Mana...
    • 17 Business Administration
  • 16 Law
    • 16 Sociology
  • 7 Economics
    • 7 Applied Economics
  • 6 Agronomy
  • 6 Medicine
  • 3 Education
  • 2 Literature
  • 1 Philosophy

Topic

  • 15 篇 reinforcement le...
  • 10 篇 semantics
  • 9 篇 deep learning
  • 8 篇 approximation al...
  • 7 篇 decoding
  • 7 篇 machine learning
  • 7 篇 stochastic syste...
  • 6 篇 computer science
  • 6 篇 bayesian inferen...
  • 5 篇 adversarial mach...
  • 5 篇 speech recogniti...
  • 5 篇 complexity theor...
  • 5 篇 artificial intel...
  • 5 篇 accuracy
  • 4 篇 quantum control
  • 4 篇 deep neural netw...
  • 4 篇 quantum algorith...
  • 4 篇 neural networks
  • 4 篇 optimization
  • 4 篇 computational li...

Institution

  • 71 篇 google deepmind ...
  • 48 篇 google
  • 28 篇 google deepmind
  • 26 篇 google research ...
  • 25 篇 mpi for intellig...
  • 21 篇 google research
  • 16 篇 google united st...
  • 13 篇 google inc.
  • 13 篇 deepmind united ...
  • 10 篇 department of co...
  • 10 篇 google inc. unit...
  • 9 篇 department of co...
  • 9 篇 google research ...
  • 8 篇 department of el...
  • 8 篇 department of co...
  • 8 篇 department of co...
  • 7 篇 department of co...
  • 7 篇 deepmind
  • 7 篇 heidelberg
  • 6 篇 department of el...

Author

  • 36 篇 bernhard schölko...
  • 35 篇 kevin murphy
  • 8 篇 müller klaus-rob...
  • 7 篇 farhi edward
  • 6 篇 jiang zhang
  • 6 篇 bakas spyridon
  • 6 篇 leibo joel z.
  • 6 篇 søgaard anders
  • 6 篇 menze bjoern
  • 6 篇 montavon grégoir...
  • 5 篇 summers ronald m...
  • 5 篇 baumgartner mich...
  • 5 篇 veličković petar
  • 5 篇 antonelli michel...
  • 5 篇 kopp-schneider a...
  • 5 篇 sadigh dorsa
  • 5 篇 isensee fabian
  • 5 篇 xia fei
  • 5 篇 demaine erik d.
  • 4 篇 kreshuk anna

Language

  • 395 English
  • 63 Other
  • 1 Chinese
Search condition: "机构=Google DeepMind and Department of Computer Science and Technology"
459 records; showing 51-60
When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models
arXiv, 2024
Authors: You, Haoran Fu, Yichao Wang, Zheng Yazdanbakhsh, Amir Lin, Yingyan School of Computer Science Georgia Institute of Technology Atlanta United States Google DeepMind Mountain View United States
Autoregressive Large Language Models (LLMs) have achieved impressive performance in language tasks but face two significant bottlenecks: (1) quadratic complexity in the attention module as the number of tokens increas...
Variance-reduced gradient estimation via noise-reuse in online evolution strategies
Proceedings of the 37th International Conference on Neural Information Processing Systems
Authors: Oscar Li James Harrison Jascha Sohl-Dickstein Virginia Smith Luke Metz Machine Learning Department School of Computer Science Carnegie Mellon University Google DeepMind
Unrolled computation graphs are prevalent throughout machine learning but present challenges to automatic differentiation (AD) gradient estimation methods when their loss functions exhibit extreme local sensitivity, d...
Three Towers: Flexible Contrastive Learning with Pretrained Image Models
arXiv, 2023
Authors: Kossen, Jannik Collier, Mark Mustafa, Basil Wang, Xiao Zhai, Xiaohua Beyer, Lucas Steiner, Andreas Berent, Jesse Jenatton, Rodolphe Kokiopoulou, Efi OATML Department of Computer Science University of Oxford United Kingdom Google Research Google DeepMind United Kingdom
We introduce Three Towers (3T), a flexible method to improve the contrastive learning of vision-language models by incorporating pretrained image classifiers. While contrastive models are usually trained from scratch,...
Insufficient Statistics Perturbation: Stable Estimators for Private Least Squares
37th Annual Conference on Learning Theory, COLT 2024
Authors: Brown, Gavin Hayase, Jonathan Hopkins, Samuel Kong, Weihao Liu, Xiyang Oh, Sewoong Perdomo, Juan C. Smith, Adam Paul G. Allen School of Computer Science and Engineering University of Washington United States Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology United States Google Research United States Harvard University United States Department of Computer Science Boston University United States
Large Language Monkeys: Scaling Inference Compute with Repeated Sampling
arXiv, 2024
Authors: Brown, Bradley Juravsky, Jordan Ehrlich, Ryan Clark, Ronald Le, Quoc V. Ré, Christopher Mirhoseini, Azalia Department of Computer Science Stanford University United States University of Oxford United Kingdom Google DeepMind United Kingdom
Scaling the amount of compute used to train language models has dramatically improved their capabilities. However, when it comes to inference, we often limit models to making only one attempt at a problem. Here, we ex...
Separating the Wheat from the Chaff with BREAD: An open-source benchmark and metrics to detect redundancy in text
arXiv, 2023
Authors: Caswell, Isaac Wang, Lisa Papadimitriou, Isabel Google Research United States Google DeepMind United Kingdom Computer Science Department Stanford University United States
Data quality is a problem that perpetually resurfaces throughout the field of NLP, regardless of task, domain, or architecture, and remains especially severe for lower-resource languages. A typical and insidious issue...
Interpretability Illusions in the Generalization of Simplified Models
arXiv, 2023
Authors: Friedman, Dan Lampinen, Andrew Dixon, Lucas Chen, Danqi Ghandeharioun, Asma Department of Computer Science Princeton University United States Google DeepMind United Kingdom Google Research United States
A common method to study deep learning systems is to use simplified model representations—for example, using singular value decomposition to visualize the model’s hidden states in a lower dimensional space. This app...
How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?
arXiv, 2024
Authors: Liu, Ryan Sumers, Theodore R. Dasgupta, Ishita Griffiths, Thomas L. Department of Computer Science Princeton University United States Google DeepMind United Kingdom Department of Psychology Princeton University United States
In day-to-day communication, people often approximate the truth - for example, rounding the time or omitting details - in order to be maximally helpful to the listener. How do large language models (LLMs) handle such ...
Decoupling Semantic Similarity from Spatial Alignment for Neural Networks
38th Conference on Neural Information Processing Systems, NeurIPS 2024
Authors: Wald, Tassilo Ulrich, Constantin Köhler, Gregor Zimmerer, David Denner, Stefan Baumgartner, Michael Isensee, Fabian Jaini, Priyank Maier-Hein, Klaus H. Heidelberg Germany Helmholtz Imaging DKFZ Heidelberg Germany Faculty of Mathematics and Computer Science University of Heidelberg Germany Medical Faculty Heidelberg University of Heidelberg Germany Google Deepmind United Kingdom Pattern Analysis and Learning Group Department of Radiation Oncology Heidelberg Germany
What representation do deep neural networks learn? How similar are images to each other for neural networks? Despite the overwhelming success of deep learning methods, key questions about their internal workings still ...
RoboTAP: Tracking Arbitrary Points for Few-Shot Visual Imitation
arXiv, 2023
Authors: Vecerik, Mel Doersch, Carl Yang, Yi Davchev, Todor Aytar, Yusuf Zhou, Guangyao Hadsell, Raia Agapito, Lourdes Scholz, Jon Google DeepMind United Kingdom Department of Computer Science University College London United Kingdom
For robots to be useful outside labs and specialized factories we need a way to teach them new useful behaviors quickly. Current approaches lack either the generality to onboard new tasks without task-specific enginee...