Refine Search Results

Document Type

  • 8 journal articles
  • 5 conference papers

Holdings

  • 13 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 11 Engineering
    • 11 Computer Science and Technology...
    • 8 Software Engineering
    • 3 Information and Communication Engineering
    • 3 Control Science and Engineering
    • 3 Cyberspace Security
    • 1 Electrical Engineering
    • 1 Electronic Science and Technology (...
    • 1 Biomedical Engineering (...
  • 5 Management
    • 4 Library, Information and Archival Management...
    • 1 Management Science and Engineering (...
  • 3 Science
    • 2 Mathematics
    • 1 Biology
    • 1 Statistics (...
  • 1 Law
    • 1 Sociology

Topic

  • 1 modeling languag...
  • 1 digital elevatio...
  • 1 self-supervised ...
  • 1 contrastive lear...
  • 1 graph neural net...
  • 1 semantics
  • 1 economic and soc...
  • 1 distribution tra...

Institution

  • 7 mit csail united...
  • 6 school of eecs p...
  • 5 state key lab of...
  • 5 institute for ar...
  • 4 mit united state...
  • 3 mit eecs csail u...
  • 3 tum cit mcml mds...
  • 3 school of mathem...
  • 2 state key lab of...
  • 2 tum
  • 2 cit mcml mdsi
  • 2 eecs and csail u...
  • 2 peking universit...
  • 2 mit csail
  • 1 harvard universi...
  • 1 eecs csail
  • 1 school of cit
  • 1 cit mcml mdsi tu...
  • 1 university of ox...
  • 1 mcml and mdsi

Author

  • 11 jegelka stefanie
  • 7 wang yisen
  • 7 wang yifei
  • 3 lim derek
  • 2 wu yuyang
  • 2 stefanie jegelka
  • 2 ma george
  • 2 yisen wang
  • 2 wei zeming
  • 2 yifei wang
  • 1 maron haggai
  • 1 karalias nikolao...
  • 1 gatmiry khashaya...
  • 1 gelberg yoav
  • 1 george ma
  • 1 lu eric
  • 1 pan xiang
  • 1 gao jinyang
  • 1 xu jessica
  • 1 liu zhaoyang

Language

  • 8 Other
  • 5 English
Search query: Institution = "TUM CIT/MCML/MDSI & MIT EECS/CSAIL"
13 records, showing 1-10
Are Graph Neural Networks Optimal Approximation Algorithms?
38th Conference on Neural Information Processing Systems, NeurIPS 2024
Authors: Yau, Morris; Karalias, Nikolaos; Lu, Eric; Xu, Jessica; Jegelka, Stefanie. Affiliations: MIT CSAIL, United States; Harvard University, United States; MIT, United States; TUM, Germany; CIT MCML MDSI, United States; EECS and CSAIL, United States
In this work we design graph neural network architectures that capture optimal approximation algorithms for a large class of combinatorial optimization problems, using powerful algorithmic tools from semidefinite prog...
A Canonicalization Perspective on Invariant and Equivariant Learning
38th Conference on Neural Information Processing Systems, NeurIPS 2024
Authors: Ma, George; Wang, Yifei; Lim, Derek; Jegelka, Stefanie; Wang, Yisen. Affiliations: School of EECS, Peking University, China; MIT CSAIL, United States; TUM CIT MCML MDSI, MIT EECS CSAIL, United States; State Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, China; Institute for Artificial Intelligence, Peking University, China
In many applications, we desire neural networks to exhibit invariance or equivariance to certain groups due to symmetries inherent in the data. Recently, frame-averaging methods emerged to be a unified framework for a...
Higher-Order Graphon Neural Networks: Approximation and Cut Distance
arXiv, 2025
Authors: Herbst, Daniel; Jegelka, Stefanie. Affiliations: TUM, Germany; School of CIT, Germany; MCML and MDSI, Germany; MIT Department of EECS and CSAIL, United States
Graph limit models, like graphons for limits of dense graphs, have recently been used to study size transferability of graph neural networks (GNNs). While most literature focuses on message passing GNNs (MPNNs), in th...
On the Role of Depth and Looping for In-Context Learning with Task Diversity
arXiv, 2024
Authors: Gatmiry, Khashayar; Saunshi, Nikunj; Reddi, Sashank J.; Jegelka, Stefanie; Kumar, Sanjiv. Affiliations: MIT, United States; Google Research, United States; TUM* and MIT, United States; CIT MCML MDSI, EECS and CSAIL, United States
The intriguing in-context learning (ICL) abilities of deep Transformer models have lately garnered significant attention. By studying in-context linear regression on unimodal Gaussian data, recent empirical and theore...
Beyond Interpretability: The Gains of Feature Monosemanticity on Model Robustness
arXiv, 2024
Authors: Zhang, Qi; Wang, Yifei; Cui, Jingyi; Pan, Xiang; Lei, Qi; Jegelka, Stefanie; Wang, Yisen. Affiliations: Peking University, China; MIT CSAIL, United States; New York University, United States; TUM CIT MCML MDSI, Germany; MIT EECS CSAIL, United States
Deep learning models often suffer from a lack of interpretability due to polysemanticity, where individual neurons are activated by multiple unrelated semantics, resulting in unclear attributions of model behavior. Re...
Understanding the Role of Equivariance in Self-supervised Learning
arXiv, 2024
Authors: Wang, Yifei; Hu, Kaiwen; Gupta, Sharut; Ye, Ziyu; Wang, Yisen; Jegelka, Stefanie. Affiliations: MIT, United States; Peking University, China; The University of Chicago, United States; TUM CIT MCML MDSI, Germany; MIT EECS CSAIL, United States
Contrastive learning has been a leading paradigm for self-supervised learning, but it is widely observed that it comes at the price of sacrificing useful features (e.g., colors) by being invariant to data augmentation...
A canonicalization perspective on invariant and equivariant learning
Proceedings of the 38th International Conference on Neural Information Processing Systems
Authors: George Ma; Yifei Wang; Derek Lim; Stefanie Jegelka; Yisen Wang. Affiliations: School of EECS, Peking University; MIT CSAIL; TUM CIT/MCML/MDSI & MIT EECS/CSAIL; State Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University; and Institute for Artificial Intelligence, Peking University
In many applications, we desire neural networks to exhibit invariance or equivariance to certain groups due to symmetries inherent in the data. Recently, frame-averaging methods emerged to be a unified framework for a...
A Canonicalization Perspective on Invariant and Equivariant Learning
arXiv, 2024
Authors: Ma, George; Wang, Yifei; Lim, Derek; Jegelka, Stefanie; Wang, Yisen. Affiliations: School of EECS, Peking University, China; MIT CSAIL, United States; TUM CIT/MCML/MDSI & MIT EECS CSAIL, United States; State Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, China; Institute for Artificial Intelligence, Peking University, China
In many applications, we desire neural networks to exhibit invariance or equivariance to certain groups due to symmetries inherent in the data. Recently, frame-averaging methods emerged to be a unified framework for a...
A Theoretical Understanding of Self-Correction through In-context Alignment
38th Conference on Neural Information Processing Systems, NeurIPS 2024
Authors: Wang, Yifei; Wu, Yuyang; Wei, Zeming; Jegelka, Stefanie; Wang, Yisen. Affiliations: MIT CSAIL, United States; School of EECS, Peking University, China; School of Mathematical Sciences, Peking University, China; State Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, China; CIT MCML MDSI, TU Munich, Germany; Institute for Artificial Intelligence, Peking University, China
Going beyond mimicking limited human experiences, recent studies show initial evidence that, like humans, large language models (LLMs) are capable of improving their abilities purely by self-correction, i.e., correcti...
What Is Wrong with Perplexity for Long-Context Language Modeling?
arXiv, 2024
Authors: Fang, Lizhe; Wang, Yifei; Liu, Zhaoyang; Zhang, Chenheng; Jegelka, Stefanie; Gao, Jinyang; Ding, Bolin; Wang, Yisen. Affiliations: State Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, China; MIT CSAIL, United States; Alibaba Group, China; TUM CIT MCML MDSI, MIT EECS CSAIL, United States; Institute for Artificial Intelligence, Peking University, China
Handling long-context inputs is crucial for large language models (LLMs) in tasks such as extended conversations, document summarization, and many-shot in-context learning. While recent approaches have extended the co...