
Refine Results

Document Type

  • 288 journal articles
  • 221 conference papers

Collection

  • 509 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 318 Engineering
    • 263 Computer Science and Technology
    • 224 Software Engineering
    • 67 Information and Communication Engineering
    • 47 Bioengineering
    • 31 Control Science and Engineering
    • 24 Electronic Science and Technology
    • 21 Electrical Engineering
    • 21 Chemical Engineering and Technology
    • 17 Optical Engineering
    • 16 Biomedical Engineering
    • 9 Mechanical Engineering
    • 6 Mechanics
    • 6 Civil Engineering
    • 5 Instrument Science and Technology
    • 5 Materials Science and Engineering
    • 5 Power Engineering and Engineering Thermophysics
  • 211 Science
    • 115 Physics
    • 67 Mathematics
    • 57 Biology
    • 20 Chemistry
    • 18 Statistics
    • 6 Systems Science
    • 4 Geology
  • 65 Management
    • 45 Library, Information and Archival Management
    • 21 Management Science and Engineering
    • 8 Business Administration
  • 13 Medicine
    • 13 Basic Medicine
    • 12 Clinical Medicine
    • 10 Pharmacy
  • 12 Law
    • 12 Sociology
  • 2 Economics
  • 1 Education
  • 1 Literature

Topics

  • 28 speech recognition
  • 26 semantics
  • 23 training
  • 18 signal processing
  • 14 speech enhancement
  • 12 acoustics
  • 12 machine learning
  • 12 embeddings
  • 11 computational linguistics
  • 11 adaptation models
  • 10 computational modeling
  • 10 syntactics
  • 10 neural machine translation
  • 9 speech processing
  • 9 feature extraction
  • 9 degradation
  • 9 robustness
  • 8 self-supervised learning
  • 8 decoding
  • 7 object detection

Institutions

  • 153 moe key lab of a...
  • 131 department of co...
  • 60 key laboratory o...
  • 53 moe key lab of a...
  • 32 department of co...
  • 28 department of co...
  • 28 x-lance lab depa...
  • 23 suzhou laborator...
  • 22 x-lance lab depa...
  • 16 key lab. of shan...
  • 16 research center ...
  • 15 aispeech co. ltd...
  • 15 ji hua laborator...
  • 15 shanghai jiao to...
  • 10 shanghai jiao to...
  • 10 auditory cogniti...
  • 9 kyoto
  • 8 department of co...
  • 8 aispeech ltd
  • 8 microsoft resear...

Authors

  • 106 yu kai
  • 93 zhao hai
  • 61 chen lu
  • 56 qian yanmin
  • 40 zhang zhuosheng
  • 39 yan junchi
  • 38 yanmin qian
  • 36 chen xie
  • 32 li zuchao
  • 28 wu mengyue
  • 23 zhu su
  • 22 guo yiwei
  • 20 kai yu
  • 19 yang xiaokang
  • 18 chen zhengyang
  • 17 xu hongshen
  • 17 du chenpeng
  • 17 junchi yan
  • 16 cao ruisheng
  • 16 ma ziyang

Language

  • 464 English
  • 45 Other
  • 1 Chinese
Search criteria: Institution = "Dep. of Computer Science and Engineering & MoE Key Lab of AI"
509 records; showing 151-160
MULTI: Multimodal Understanding Leaderboard with Text and Images
arXiv, 2024
Authors: Zhu, Zichen; Xu, Yang; Chen, Lu; Yang, Jingkai; Ma, Yichuan; Sun, Yiming; Wen, Hailin; Liu, Jiaqi; Cai, Jinyu; Ma, Yingzi; Zhang, Situo; Zhao, Zihan; Sun, Liangtai; Yu, Kai (X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; Suzhou Laboratory, Suzhou, China)
The rapid development of multimodal large language models (MLLMs) raises the question of how they compare to human performance. While existing datasets often feature synthetic or overly simplistic tasks, some models h...
BWIN: A BILATERAL WARPING METHOD FOR VIDEO FRAME INTERPOLATION
2021 IEEE International Conference on Multimedia and Expo, ICME 2021
Authors: Xue, Fanyong; Li, Jie; Liu, Jiannan; Wu, Chentao (Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China)
Flow-based video frame interpolation approaches typically adopt forward or backward warping to approximate the intermediate frames, and a synthesis network is then used to refine the interpolation results. Optical flows in...
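The warping this snippet refers to is a standard operation; below is a minimal sketch of backward warping a frame with an optical-flow field (a generic illustration, not the paper's BWIN method):

```python
import torch
import torch.nn.functional as F

def backward_warp(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp img (B, C, H, W) by sampling it at pixel positions displaced
    by flow (B, 2, H, W). Generic sketch, not the BWIN scheme itself."""
    _, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys)).float().to(img.device)      # (2, H, W), (x, y) order
    coords = grid.unsqueeze(0) + flow                        # displaced sample positions
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0      # normalize to [-1, 1]
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(img, sample_grid, align_corners=True)
```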
Converging to a Lingua Franca: Evolution of Linguistic Regions and Semantics Alignment in Multilingual Large Language Models
arXiv, 2024
Authors: Zeng, Hongchuan; Han, Senyu; Chen, Lu; Yu, Kai (X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; Suzhou Laboratory, Suzhou, China)
Large language models (LLMs) have demonstrated remarkable performance, particularly in multilingual contexts. While recent studies suggest that LLMs can transfer skills learned in one language to others, the internal ...
PS4: A Low Power SNN Accelerator with Spike Speculative Scheme
IEEE International Conference on Computer Design: VLSI in Computers and Processors (ICCD)
Authors: Zongwu Wang, Fangxin Liu, Xin Tang, Li Jiang (Department of Computer Science and Engineering, Shanghai Jiao Tong University; Shanghai Qi Zhi Institute; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University)
Spiking neural networks (SNNs) offer computational and energy efficiency advantages over traditional artificial neural networks (ANNs) due to their event-driven representations. Unlike ANNs, which use continuous activ...
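The event-driven computation the snippet describes can be seen in a minimal leaky integrate-and-fire (LIF) layer update; this is a generic sketch of SNN dynamics, not the PS4 accelerator's speculative scheme:

```python
import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, v_th=1.0):
    """One timestep of a leaky integrate-and-fire layer (generic sketch).

    v: membrane potentials (N,); spikes_in: binary input spikes (M,);
    weights: synaptic matrix (N, M). Returns updated potentials and output spikes.
    """
    v = leak * v + weights @ spikes_in        # integrate incoming spike events
    spikes_out = (v >= v_th).astype(v.dtype)  # fire where the threshold is crossed
    v = np.where(spikes_out > 0, 0.0, v)      # reset neurons that fired
    return v, spikes_out
```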
Multilingual Brain Surgeon: Large Language Models Can be Compressed Leaving No Language Behind
arXiv, 2024
Authors: Zeng, Hongchuan; Xu, Hongshen; Chen, Lu; Yu, Kai (X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; Suzhou Laboratory, Suzhou, China)
Large Language Models (LLMs) have ushered in a new era in Natural Language Processing, but their massive size demands effective compression techniques for practicality. Although numerous model compression techniques h...
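For context on the compression techniques the snippet mentions, here is a minimal magnitude-pruning sketch, one common compression baseline; it is an illustration, not the paper's multilingual method:

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude fraction of weights (generic baseline)."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values  # k-th smallest magnitude
    return weight * (weight.abs() > threshold)
```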
CoE-SQL: In-Context Learning for Multi-Turn Text-to-SQL with Chain-of-Editions
arXiv, 2024
Authors: Zhang, Hanchong; Cao, Ruisheng; Xu, Hongshen; Chen, Lu; Yu, Kai (X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; Suzhou Laboratory, Suzhou, China)
Recently, Large Language Models (LLMs) have been demonstrated to possess impressive capabilities in a variety of domains and tasks. We investigate the issue of prompt design in the multi-turn text-to-SQL task and atte...
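The prompt-design question can be illustrated with a hypothetical multi-turn text-to-SQL prompt in which each new query is framed as an edit of the previous SQL; the format below is an assumption for illustration, not the paper's actual CoE-SQL prompt:

```python
def build_multiturn_prompt(schema: str, turns: list[tuple[str, str]], question: str) -> str:
    """Assemble a prompt exposing prior (question, SQL) turns so the model
    can express the next query as an edit of the last SQL (hypothetical format)."""
    lines = [f"Database schema:\n{schema}\n"]
    for i, (q, sql) in enumerate(turns, 1):
        lines.append(f"Turn {i} question: {q}")
        lines.append(f"Turn {i} SQL: {sql}")
    lines.append(f"Current question: {question}")
    lines.append("Write SQL for the current question as an edit of the previous SQL:")
    return "\n".join(lines)
```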
HOLES: Boosting Large Language Models Efficiency with Hardware-Friendly Lossless Encoding
IEEE International Conference on Computer Design: VLSI in Computers and Processors (ICCD)
Authors: Fangxin Liu, Ning Yang, Zhiyan Song, Zongwu Wang, Li Jiang (Department of Computer Science and Engineering, Shanghai Jiao Tong University; Shanghai Qi Zhi Institute; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University)
Transformer-based large language models (LLMs) have demonstrated remarkable success; however, their increasing model size poses a challenge due to the widening gap between model size and hardware capacity. To address ...
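As a toy illustration of hardware-friendly lossless encoding (not the HOLES scheme itself), run-length encoding collapses the zero runs common in sparse weight streams into single counts and decodes back bit-exactly:

```python
import numpy as np

def rle_encode(values: np.ndarray):
    """Encode a 1-D array as (zero_run, nonzero_value) pairs plus a
    trailing zero count; decoding reverses it exactly (lossless)."""
    pairs, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            pairs.append((run, float(v)))
            run = 0
    return pairs, run  # run = number of trailing zeros

def rle_decode(pairs, trailing):
    out = []
    for run, v in pairs:
        out.extend([0.0] * run + [v])
    return np.array(out + [0.0] * trailing)
```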
BARTENDER: A simple baseline model for task-level heterogeneous federated learning
IEEE International Conference on Multimedia and Expo (ICME)
Authors: Yuwen Yang, Yuxiang Lu, Suizhi Huang, Shalayiding Sirejiding, Chang Liu, Muyang Yi, Zhaozhi Xie, Yue Ding, Hongtao Lu (Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China)
This study presents Task-level Heterogeneous Federated Learning (TH-FL), a novel paradigm that fuses the principles of Federated Learning (FL) and Multi-Task Learning (MTL). In the TH-FL scenario, each client can ...
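For context on the FL side of this combination, a minimal FedAvg-style aggregation is sketched below; it is a generic baseline, not the BARTENDER model:

```python
import torch

def fedavg(client_states: list[dict], client_sizes: list[int]) -> dict:
    """Size-weighted average of client model state_dicts (generic FedAvg baseline)."""
    total = sum(client_sizes)
    return {
        key: sum((n / total) * state[key].float()
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }
```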
COMPASS: SRAM-Based Computing-in-Memory SNN Accelerator with Adaptive Spike Speculation
IEEE/ACM International Symposium on Microarchitecture (MICRO)
Authors: Zongwu Wang, Fangxin Liu, Ning Yang, Shiyuan Huang, Haomin Li, Li Jiang (Department of Computer Science and Engineering, Shanghai Jiao Tong University; Shanghai Qi Zhi Institute; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University)
Brain-inspired spiking neural networks (SNNs) are considered energy-efficient alternatives to conventional deep neural networks (DNNs). By adopting event-driven information processing, SNNs can significantly reduce th...
Alignment for Efficient Tool Calling of Large Language Models
arXiv, 2025
Authors: Xu, Hongshen; Wang, Zihan; Zhu, Zichen; Pan, Lei; Chen, Xingyu; Chen, Lu; Yu, Kai (X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China; AISpeech Co. Ltd., Suzhou, China)
Recent advancements in tool learning have enabled large language models (LLMs) to integrate external tools, enhancing their task performance by expanding their knowledge boundaries. However, relying on tools often int...