
Refine Results

Document Type

  • 288 journal articles
  • 221 conference papers

Collection Scope

  • 509 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 318 Engineering
    • 263 Computer Science and Technology...
    • 224 Software Engineering
    • 67 Information and Communication Engineering
    • 47 Bioengineering
    • 31 Control Science and Engineering
    • 24 Electronic Science and Technology (...
    • 21 Electrical Engineering
    • 21 Chemical Engineering and Technology
    • 17 Optical Engineering
    • 16 Biomedical Engineering (...
    • 9 Mechanical Engineering
    • 6 Mechanics (...
    • 6 Civil Engineering
    • 5 Instrument Science and Technology
    • 5 Materials Science and Engineering (...
    • 5 Power Engineering and Engineering Therm...
  • 211 Science
    • 115 Physics
    • 67 Mathematics
    • 57 Biology
    • 20 Chemistry
    • 18 Statistics (...
    • 6 Systems Science
    • 4 Geology
  • 65 Management
    • 45 Library, Information and Archives Manag...
    • 21 Management Science and Engineering (...
    • 8 Business Administration
  • 13 Medicine
    • 13 Basic Medicine (...
    • 12 Clinical Medicine
    • 10 Pharmacy (...
  • 12 Law
    • 12 Sociology
  • 2 Economics
  • 1 Education
  • 1 Literature

Topics

  • 28 speech recogniti...
  • 26 semantics
  • 23 training
  • 18 signal processin...
  • 14 speech enhanceme...
  • 12 acoustics
  • 12 machine learning
  • 12 embeddings
  • 11 computational li...
  • 11 adaptation model...
  • 10 computational mo...
  • 10 syntactics
  • 10 neural machine t...
  • 9 speech processin...
  • 9 feature extracti...
  • 9 degradation
  • 9 robustness
  • 8 self-supervised ...
  • 8 decoding
  • 7 object detection

Institutions

  • 153 moe key lab of a...
  • 131 department of co...
  • 60 key laboratory o...
  • 53 moe key lab of a...
  • 32 department of co...
  • 28 department of co...
  • 28 x-lance lab depa...
  • 23 suzhou laborator...
  • 22 x-lance lab depa...
  • 16 key lab. of shan...
  • 16 research center ...
  • 15 aispeech co. ltd...
  • 15 ji hua laborator...
  • 15 shanghai jiao to...
  • 10 shanghai jiao to...
  • 10 auditory cogniti...
  • 9 kyoto
  • 8 department of co...
  • 8 aispeech ltd
  • 8 microsoft resear...

Authors

  • 106 yu kai
  • 93 zhao hai
  • 61 chen lu
  • 56 qian yanmin
  • 40 zhang zhuosheng
  • 39 yan junchi
  • 38 yanmin qian
  • 36 chen xie
  • 32 li zuchao
  • 28 wu mengyue
  • 23 zhu su
  • 22 guo yiwei
  • 20 kai yu
  • 19 yang xiaokang
  • 18 chen zhengyang
  • 17 xu hongshen
  • 17 du chenpeng
  • 17 junchi yan
  • 16 cao ruisheng
  • 16 ma ziyang

Language

  • 464 English
  • 45 Other
  • 1 Chinese

Search query: "Institution = Dep. of Computer Science and Engineering & MoE Key Lab of AI"
509 records; showing 171-180
SciDFM: A Large Language Model with Mixture-of-Experts for Science
arXiv, 2024
Authors: Sun, Liangtai; Luo, Danyu; Ma, Da; Zhao, Zihan; Chen, Baocai; Shen, Zhennan; Zhu, Su; Chen, Lu; Chen, Xin; Yu, Kai. X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; Suzhou Laboratory, Suzhou, China; AISpeech Co., Ltd., Suzhou, China
Recently, there has been a significant upsurge of interest in leveraging large language models (LLMs) to assist scientific discovery. However, most LLMs only focus on general science, while they lack domain-specific k...
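The record above only names a Mixture-of-Experts design; no architectural details are given in the truncated abstract. As a rough illustration of what a sparsely gated MoE feed-forward layer generally looks like, here is a minimal PyTorch sketch. The expert count, hidden sizes, and top-k routing below are generic assumptions, not SciDFM's actual configuration.

```python
# Hypothetical sketch of a top-k gated Mixture-of-Experts feed-forward layer.
# Illustrative only; sizes and routing are NOT taken from the SciDFM paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # router over experts
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                               # x: (tokens, d_model)
        scores = self.gate(x)                           # (tokens, n_experts)
        topk_val, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_val, dim=-1)           # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):                      # dense loop; real systems dispatch/scatter
            idx = topk_idx[:, slot]
            w = weights[:, slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask] * expert(x[mask])
        return out

# usage: y = TopKMoE()(torch.randn(10, 512))
```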
Scale-Aware Task Message Transferring for Multi-Task Learning
IEEE International Conference on Multimedia and Expo (ICME)
Authors: Shalayiding Sirejiding; Yuxiang Lu; Hongtao Lu; Yue Ding. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; Department of Computer Science and Engineering, MOE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
Exploring cross-task interaction has been the mainstream in recent multi-task learning for dense predictions. However, existing works that focus on excavating cross-task contextual information are briefly based on hie...
Neuronal Activation States as Sample Embeddings for Data Selection in Task-Specific Instruction Tuning
arXiv, 2025
Authors: Ma, Da; Shang, Gonghu; Chen, Zhi; Qin, Libo; Luo, Yijie; Pan, Lei; Fan, Shuai; Chen, Lu; Yu, Kai. X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; AISpeech Co., Ltd., Suzhou, China; ByteDance, China; School of Computer Science and Engineering, Central South University, China
Task-specific instruction tuning enhances the performance of large language models (LLMs) on specialized tasks, yet efficiently selecting relevant data for this purpose remains a challenge. Inspired by neural coactiva...
A Simple Framework for Text-Supervised Semantic Segmentation
Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Muyang Yi; Quan Cui; Hao Wu; Cheng Yang; Osamu Yoshie; Hongtao Lu. Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University; Waseda University; ByteDance Inc.
Text-supervised semantic segmentation is a novel research topic that allows semantic segments to emerge with image-text contrasting. However, pioneering methods could be subject to specifically designed network archit...
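The abstract only states that segments "emerge with image-text contrasting" before it is cut off. As a minimal sketch of that general idea, one can score every image-patch embedding against class-name text embeddings and take the best match per patch; the shapes and encoders below are placeholders, not the paper's framework.

```python
# Hypothetical sketch: coarse segmentation from patch-text similarity.
# Not the paper's method; encoders and shapes are assumed for illustration.
import torch
import torch.nn.functional as F

def patch_text_segmentation(patch_emb, text_emb):
    """patch_emb: (H*W, d) image-patch embeddings; text_emb: (C, d) class-name embeddings."""
    patch_emb = F.normalize(patch_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    sim = patch_emb @ text_emb.T          # (H*W, C) cosine similarities
    return sim.argmax(dim=-1)             # per-patch class index

# toy usage: 14x14 patches, 512-d embeddings, 3 classes
seg = patch_text_segmentation(torch.randn(196, 512), torch.randn(3, 512))
print(seg.shape)  # torch.Size([196])
```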
Target Sound Extraction with Variable Cross-Modality Clues
International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Authors: Chenda Li; Yao Qian; Zhuo Chen; Dongmei Wang; Takuya Yoshioka; Shujie Liu; Yanmin Qian; Michael Zeng. Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, X-LANCE Lab, Shanghai Jiao Tong University; Microsoft, Redmond, WA, USA
Automatic target sound extraction (TSE) is a machine learning approach to mimic the human auditory perception capability of attending to a sound source of interest from a mixture of sources. It often uses a model cond...
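The abstract is truncated right where it starts describing the conditioning mechanism. The common TSE pattern is to estimate a mask over the mixture, with the mask estimator modulated by an embedding of the clue; the following is only a minimal sketch of that pattern, where the layer sizes and FiLM-style conditioning are my own assumptions rather than this paper's design.

```python
# Hypothetical sketch of clue-conditioned target sound extraction.
# Not the paper's model: a mixture spectrogram is masked, and the mask
# estimator is modulated by an embedding of the clue (text, audio, video, ...).
import torch
import torch.nn as nn

class ClueConditionedTSE(nn.Module):
    def __init__(self, n_freq=257, d_clue=128, d_hidden=256):
        super().__init__()
        self.encoder = nn.GRU(n_freq, d_hidden, batch_first=True)
        self.film = nn.Linear(d_clue, 2 * d_hidden)       # FiLM: per-channel scale and shift
        self.mask_head = nn.Sequential(nn.Linear(d_hidden, n_freq), nn.Sigmoid())

    def forward(self, mix_spec, clue_emb):
        # mix_spec: (batch, frames, n_freq) magnitude spectrogram of the mixture
        # clue_emb: (batch, d_clue) embedding of the target-sound clue
        h, _ = self.encoder(mix_spec)
        scale, shift = self.film(clue_emb).chunk(2, dim=-1)
        h = h * scale.unsqueeze(1) + shift.unsqueeze(1)    # condition every frame on the clue
        mask = self.mask_head(h)                           # values in [0, 1]
        return mask * mix_spec                             # estimated target spectrogram

# usage: est = ClueConditionedTSE()(torch.randn(2, 100, 257).abs(), torch.randn(2, 128))
```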
ALIGNSUM: Data Pyramid Hierarchical Fine-tuning for Aligning with Human Summarization Preference
arXiv, 2024
Authors: Han, Yang; Wang, Yiming; Wang, Rui; Chen, Lu; Yu, Kai. X-LANCE Lab, Department of Computer Science and Engineering, SJTU, China; MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; Suzhou Laboratory, Suzhou, China
Text summarization tasks commonly employ Pre-trained Language Models (PLMs) to fit diverse standard datasets. While these PLMs excel in automatic evaluations, they frequently underperform in human evaluations, indicat...
On the Effectiveness of Acoustic BPE in Decoder-Only TTS
arXiv, 2024
Authors: Li, Bohan; Shen, Feiyu; Guo, Yiwei; Wang, Shuai; Chen, Xie; Yu, Kai. MoE Key Lab of Artificial Intelligence, AI Institute, X-LANCE Lab, Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; Shenzhen Research Institute of Big Data, CUHK-Shenzhen, China
Discretizing speech into tokens and generating them with a decoder-only model has been a promising direction for text-to-speech (TTS) and spoken language modeling (SLM). To shorten the sequence length of speech tokens,...
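Since the abstract is cut off, the following is only a generic illustration of the "acoustic BPE" idea the title names: treat discrete speech tokens as characters and learn byte-pair merges over them so that the decoder-only model sees shorter sequences. The toy corpus and merge count are placeholders, not this paper's setup.

```python
# Hypothetical sketch of "acoustic BPE": learn byte-pair merges over discrete
# speech-token sequences so frequent pairs collapse into single units.
from collections import Counter

def learn_bpe(corpus, num_merges):
    """corpus: list of lists of speech-token ids; returns the learned merges."""
    corpus = [list(seq) for seq in corpus]
    merges = []
    next_id = max(t for seq in corpus for t in seq) + 1
    for _ in range(num_merges):
        pairs = Counter((seq[i], seq[i + 1]) for seq in corpus for i in range(len(seq) - 1))
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]
        merges.append((best, next_id))
        for seq in corpus:                      # replace every occurrence of the best pair
            i = 0
            while i < len(seq) - 1:
                if (seq[i], seq[i + 1]) == best:
                    seq[i:i + 2] = [next_id]
                else:
                    i += 1
        next_id += 1
    return merges

# toy usage: token ids as produced by a codec / SSL quantizer
print(learn_bpe([[3, 7, 7, 2, 3, 7], [3, 7, 2, 2]], num_merges=2))
```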
A SURVEY ON SPEECH LARGE LANGUAGE MODELS
arXiv, 2024
Authors: Peng, Jing; Wang, Yucheng; Xi, Yu; Li, Xu; Zhang, Xizhuo; Yu, Kai. MoE Key Lab of Artificial Intelligence, AI Institute, X-LANCE Lab, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; AISpeech Co., Ltd., Suzhou, China
Large Language Models (LLMs) exhibit strong contextual understanding and remarkable multi-task performance. Therefore, researchers have been seeking to integrate LLMs in the broad sense of Spoken Language Understandin...
DQ-Whisper: Joint Distillation and Quantization for Efficient Multilingual Speech Recognition
arXiv, 2023
Authors: Shao, Hang; Liu, Bei; Wang, Wei; Gong, Xun; Qian, Yanmin. Auditory Cognition and Computational Acoustics Lab, MoE Key Lab of Artificial Intelligence, AI Institute, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
As a popular multilingual and multitask pre-trained speech model, Whisper suffers from the curse of multilinguality. To enhance multilingual capabilities in small Whisper models, we propose DQ-Whisper, a novel joint...
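Only the high-level idea (joint distillation and quantization) survives the truncation above. As a generic sketch of what combining the two usually involves, here is a knowledge-distillation loss paired with straight-through fake quantization of the student's weights; the temperature, weighting, and int8 scheme are assumptions, not DQ-Whisper's actual recipe.

```python
# Hypothetical sketch of joint distillation + quantization-aware training.
# Not DQ-Whisper's recipe: a small student matches a larger teacher while its
# weights can be fake-quantized to int8 in the forward pass.
import torch
import torch.nn.functional as F

def fake_quantize(w, num_bits=8):
    """Straight-through int quantization of a weight tensor."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
    return w + (w_q - w).detach()            # straight-through estimator

def joint_loss(student_logits, teacher_logits, targets, alpha=0.5, T=2.0):
    """Cross-entropy on labels plus temperature-scaled KL to the teacher."""
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd

# toy usage: apply fake_quantize to student weights before the forward pass,
# then optimize joint_loss on the resulting logits.
s, t = torch.randn(4, 10, requires_grad=True), torch.randn(4, 10)
print(joint_loss(s, t, torch.tensor([1, 2, 3, 4])))
```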
ChemDFM-X: Towards Large Multimodal Model for Chemistry
arXiv, 2024
Authors: Zhao, Zihan; Chen, Bo; Li, Jingpiao; Chen, Lu; Wen, Liyang; Wang, Pengyu; Zhu, Zichen; Zhang, Danyang; Wan, Ziping; Li, Yansi; Dai, Zhongyang; Chen, Xin; Yu, Kai. X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai 200240, China; Suzhou Laboratory, Suzhou 215123, China
Rapid developments of AI tools are expected to offer unprecedented assistance to the research of natural science including chemistry. However, neither existing unimodal task-specific specialist models nor emerging gen...