
Refine Search Results

Document Type

  • 21 conference papers
  • 18 journal articles

Collection Scope

  • 39 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 31 papers: Engineering
    • 31 papers: Computer Science and Technology...
    • 24 papers: Software Engineering
    • 5 papers: Information and Communication Engineering
    • 5 papers: Control Science and Engineering
    • 4 papers: Biological Engineering
    • 2 papers: Electrical Engineering
    • 2 papers: Transportation Engineering
    • 1 paper: Instrument Science and Technology
    • 1 paper: Metallurgical Engineering
    • 1 paper: Electronic Science and Technology...
    • 1 paper: Architecture
    • 1 paper: Civil Engineering
    • 1 paper: Surveying and Mapping Science and Technology
    • 1 paper: Chemical Engineering and Technology
    • 1 paper: Environmental Science and Engineering...
    • 1 paper: Safety Science and Engineering
  • 12 papers: Science
    • 4 papers: Biology
    • 3 papers: Mathematics
    • 2 papers: Physics
    • 1 paper: Chemistry
    • 1 paper: Atmospheric Science
    • 1 paper: Systems Science
  • 11 papers: Management
    • 6 papers: Management Science and Engineering...
    • 6 papers: Library, Information and Archives Management...
    • 1 paper: Business Administration
  • 5 papers: Law
    • 4 papers: Sociology
    • 1 paper: Law
  • 3 papers: Education
    • 3 papers: Education
  • 3 papers: Literature
    • 3 papers: Chinese Language and Literature
    • 3 papers: Foreign Languages and Literatures

Topics

  • 6 papers: computational li...
  • 4 papers: federated learni...
  • 3 papers: throughput
  • 3 papers: costs
  • 2 papers: optimization
  • 2 papers: complexity theor...
  • 2 papers: data structures
  • 2 papers: emotion recognit...
  • 2 papers: energy managemen...
  • 2 papers: privacy
  • 2 papers: renewable energy...
  • 1 paper: global positioni...
  • 1 paper: thermal stabilit...
  • 1 paper: vehicle dynamics
  • 1 paper: file systems
  • 1 paper: scalability
  • 1 paper: indexing
  • 1 paper: distillation
  • 1 paper: visual analytics
  • 1 paper: frequency modula...

Institutions

  • 24 papers: shanghai key lab...
  • 20 papers: department of co...
  • 15 papers: key laboratory o...
  • 10 papers: shanghai jiao to...
  • 6 papers: school of electr...
  • 3 papers: shanghai key lab...
  • 3 papers: tsinghua univers...
  • 3 papers: shanghai key lab...
  • 3 papers: shanghai artific...
  • 2 papers: yancheng blockch...
  • 2 papers: shanghai jiao to...
  • 2 papers: baichuan intelli...
  • 2 papers: hohhot minzu col...
  • 2 papers: department of co...
  • 2 papers: tencent
  • 2 papers: zhejiang normal ...
  • 2 papers: college of zhiyu...
  • 1 paper: school of comput...
  • 1 paper: shanghai key lab...
  • 1 paper: blockchain advan...

Authors

  • 19 papers: zhao hai
  • 7 papers: yang yifei
  • 6 papers: zhang zhuosheng
  • 6 papers: xue guangtao
  • 6 papers: cao zouying
  • 5 papers: ma xinbei
  • 4 papers: yang dongjie
  • 4 papers: guangtao xue
  • 4 papers: chen yi-chao
  • 4 papers: cao jian
  • 3 papers: jie li
  • 3 papers: yi-chao chen
  • 3 papers: ding dian
  • 3 papers: li jie
  • 3 papers: yao yao
  • 2 papers: dian ding
  • 2 papers: liu yang
  • 2 papers: ju tianjie
  • 2 papers: wu chentao
  • 2 papers: chentao wu
Language

  • 34 papers: English
  • 5 papers: Other
  • 1 paper: Chinese
Search criteria: Institution = "Shanghai Key Laboratory of Trusted Data Circulation and Governance and Web3"
39 records; showing items 31-40
LaCo: Large Language Model Pruning via Layer Collapse
arXiv
arXiv, 2024
Authors: Yang, Yifei; Cao, Zouying; Zhao, Hai
Affiliations: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, China; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China
Large language models (LLMs) based on transformers are witnessing a notable trend of size expansion, which brings considerable costs to both model training and inference. However, existing methods such as model quantiz...
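
The abstract above only states the motivation. As a rough, generic illustration of structural depth pruning (not the LaCo layer-collapse procedure itself), the Python sketch below simply drops the last two decoder blocks of a small causal LM; the checkpoint name is a placeholder.

# Generic depth-pruning sketch (not the LaCo layer-collapse algorithm):
# drop the last two decoder blocks of a small causal LM and run it end-to-end.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint; any decoder-only LM is analogous
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# GPT-2 stores its decoder blocks in model.transformer.h (an nn.ModuleList).
model.transformer.h = model.transformer.h[:-2]    # keep 10 of the 12 layers
model.config.n_layer = len(model.transformer.h)   # keep the config consistent

# The shallower model still generates text, just with less compute per token.
inputs = tokenizer("Structured pruning reduces inference cost by", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=12, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
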
VCEMO: Multi-Modal Emotion Recognition for Chinese Voiceprints
arXiv
arXiv, 2024
Authors: Tang, Jinghua; Zhang, Liyun; Lu, Yu; Ding, Dian; Yang, Lanqing; Chen, Yi-Chao; Bian, Minjie; Li, Xiaoshan; Xue, Guangtao
Affiliations: Shanghai Jiao Tong University, Shanghai, China; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China; Shanghai Voicecomm Information Technology Co. Ltd., Shanghai, China; Shanghai Data Group Co. Ltd, Shanghai 200011, China
Emotion recognition can enhance humanized machine responses to user commands, while voiceprint-based perception systems can be easily integrated into commonly used devices like smartphones and stereos. Despite having...
Instruction-Driven Game Engine: A Poker Case Study
arXiv
arXiv, 2024
Authors: Wu, Hongqiu; Liu, Xingyuan; Wang, Yan; Zhao, Hai
Affiliations: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, China; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China; Tencent, China
The Instruction-Driven Game Engine (IDGE) project aims to democratize game development by enabling a large language model (LLM) to follow free-form game descriptions and generate game-play processes. The IDGE allows u...
Adaptive Guidance for Local Training in Heterogeneous Federated Learning
arXiv
arXiv, 2024
Authors: Zhang, Jianqing; Liu, Yang; Hua, Yang; Cao, Jian; Yang, Qiang
Affiliations: Shanghai Jiao Tong University, China; Tsinghua University, China; Shanghai Artificial Intelligence Laboratory, China; Queen’s University Belfast, United Kingdom; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China; Hong Kong University of Science and Technology, Hong Kong
Model heterogeneity poses a significant challenge in Heterogeneous Federated Learning (HtFL). In scenarios with diverse model architectures, directly aggregating model parameters is impractical, leading HtFL methods t...
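
Since the truncated abstract notes that direct parameter averaging is impractical across heterogeneous architectures, a common HtFL workaround, shown here only as a hypothetical sketch rather than this paper's adaptive-guidance method, is to aggregate a small identically shaped module (for example the classifier head) while each client keeps its own backbone.

# Generic HtFL sketch (not this paper's method): clients use different backbones
# but share a small, identically shaped head that the server averages.
import copy
import torch
import torch.nn as nn

class Client(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                      # heterogeneous across clients
        self.head = nn.Linear(feat_dim, num_classes)  # homogeneous, aggregated

    def forward(self, x):
        return self.head(self.backbone(x))

def aggregate_heads(clients):
    """FedAvg restricted to the shared head parameters."""
    avg = copy.deepcopy(clients[0].head.state_dict())
    for key in avg:
        avg[key] = torch.stack([c.head.state_dict()[key] for c in clients]).mean(0)
    for c in clients:
        c.head.load_state_dict(avg)

# Two clients with different backbones projecting to the same feature size.
clients = [
    Client(nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16)), 16, 10),
    Client(nn.Sequential(nn.Linear(32, 16), nn.Tanh()), 16, 10),
]
aggregate_heads(clients)  # one communication round of head-only averaging
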
CoCo-Agent: A Comprehensive Cognitive MLLM Agent for Smartphone GUI Automation
arXiv
arXiv, 2024
Authors: Ma, Xinbei; Zhang, Zhuosheng; Zhao, Hai
Affiliations: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China; Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, China; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China
Multimodal large language models (MLLMs) have shown remarkable potential as human-like autonomous language agents to interact with real-world environments, especially for graphical user interface (GUI) automation. How...
PyramidInfer: Pyramid KV Cache Compression for High-throughput LLM Inference
arXiv
arXiv, 2024
Authors: Yang, Dongjie; Han, Xiaodong; Gao, Yan; Hu, Yao; Zhang, Shilin; Zhao, Hai
Affiliations: Shanghai Jiao Tong University, China; Xiaohongshu Inc., China; South China University of Technology, China; The Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China
Large Language Models (LLMs) have shown remarkable comprehension abilities but face challenges in GPU memory usage during inference, hindering their scalability for real-time applications like chatbots. To accelerate ...
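
The GPU-memory pressure mentioned in the abstract is dominated by the KV cache, which grows linearly with batch size, sequence length, layer count, and head dimension. The back-of-the-envelope estimate below is a generic calculation, not PyramidInfer's compression scheme, and the LLaMA-2-7B-like shapes are assumptions; it shows why the cache becomes the bottleneck.

# Back-of-the-envelope KV-cache size for a decoder-only LLM (generic estimate,
# not PyramidInfer's compression scheme). Assumes fp16, i.e. 2 bytes per value.
def kv_cache_bytes(batch, seq_len, n_layers, n_kv_heads, head_dim, bytes_per_val=2):
    # Factor of 2 for the separate K and V tensors kept per layer.
    return 2 * batch * seq_len * n_layers * n_kv_heads * head_dim * bytes_per_val

# Example with assumed LLaMA-2-7B-like shapes: 32 layers, 32 KV heads, head_dim 128.
gib = kv_cache_bytes(batch=8, seq_len=4096, n_layers=32, n_kv_heads=32, head_dim=128) / 2**30
print(f"KV cache: {gib:.1f} GiB")  # ~16 GiB before any compression
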
On the Robustness of Editing Large Language Models
arXiv
arXiv, 2024
Authors: Ma, Xinbei; Ju, Tianjie; Qiu, Jiyang; Zhang, Zhuosheng; Zhao, Hai; Liu, Lifeng; Wang, Yulong
Affiliations: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China; Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, China; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China; Baichuan Intelligent Technology, China
Large language models (LLMs) have played a pivotal role in building communicative AI, yet they encounter the challenge of efficient updates. Model editing enables the manipulation of specific knowledge memories and th...
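
For readers unfamiliar with model editing, the minimal, hypothetical baseline below shows what "manipulating a specific knowledge memory" can look like in its simplest form: plain fine-tuning on a single corrected statement. This is only a naive stand-in, not one of the editing methods whose robustness the paper studies.

# Naive fine-tuning-style knowledge edit (a simple baseline, not the editing
# methods evaluated in the paper): overfit the LM on one corrected statement.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")   # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()

edit = "The capital of France is Paris."      # the single fact to (re)inject
batch = tok(edit, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for _ in range(10):  # a few steps suffice to overfit one sentence
    out = model(**batch, labels=batch["input_ids"])  # causal-LM loss on the fact
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
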
GKT: A Novel Guidance-Based Knowledge Transfer Framework For Efficient Cloud-edge Collaboration LLM Deployment
arXiv
arXiv, 2024
Authors: Yao, Yao; Li, Zuchao; Zhao, Hai
Affiliations: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, China; National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan 430072, China
The burgeoning size of Large Language Models (LLMs) has led to enhanced capabilities in generating responses, albeit at the expense of increased inference times and elevated resource demands. Existing methods of accel...
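
The title points to a larger cloud model guiding a smaller edge model. The heavily simplified, hypothetical sketch below illustrates that general cloud-edge pattern, not the paper's actual GKT framework: a bigger model drafts a few guidance tokens, and a smaller model sharing the same vocabulary completes the answer. The checkpoint names are placeholders.

# Hypothetical cloud-edge sketch (not the paper's GKT framework): a larger
# "cloud" model drafts a short guidance prefix, a smaller "edge" model finishes it.
from transformers import AutoModelForCausalLM, AutoTokenizer

cloud = AutoModelForCausalLM.from_pretrained("gpt2-medium")  # placeholder "large" model
edge = AutoModelForCausalLM.from_pretrained("gpt2")          # placeholder "small" model
tok = AutoTokenizer.from_pretrained("gpt2")                  # both share GPT-2's vocabulary

prompt = "Q: Why is the sky blue?\nA:"
ids = tok(prompt, return_tensors="pt").input_ids

# The cloud model drafts a few guidance tokens...
guided = cloud.generate(ids, max_new_tokens=8, do_sample=False)
# ...and the edge model continues from the guided context.
answer = edge.generate(guided, max_new_tokens=32, do_sample=False)
print(tok.decode(answer[0], skip_special_tokens=True))
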
GLaPE: Gold Label-agnostic Prompt Evaluation and Optimization for Large Language Model
arXiv
arXiv, 2024
Authors: Zhang, Xuanchang; Zhang, Zhuosheng; Zhao, Hai
Affiliations: College of Zhiyuan, Shanghai Jiao Tong University, China; School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China; Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, China; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China
Despite the rapid progress of large language models (LLMs), their task performance remains sensitive to prompt design. Recent studies have explored leveraging the LLM itself as an optimizer to identify optimal prompts...
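
One gold-label-free signal for scoring a prompt, used here purely as an illustrative stand-in rather than the exact metric GLaPE defines, is self-consistency: sample several answers under the prompt and reward agreement. The LLM call is stubbed out with a toy function so the sketch stays self-contained.

# Illustrative label-free prompt scoring via self-consistency (a stand-in,
# not necessarily the exact GLaPE metric): sample k answers per question and
# score the prompt by how strongly the samples agree, with no gold labels.
import random
from collections import Counter
from typing import Callable, List

def self_consistency_score(prompt: str,
                           questions: List[str],
                           sample_answer: Callable[[str, str], str],
                           k: int = 5) -> float:
    """Average agreement ratio of the modal answer over k samples per question."""
    total = 0.0
    for q in questions:
        answers = [sample_answer(prompt, q) for _ in range(k)]
        _, modal_count = Counter(answers).most_common(1)[0]
        total += modal_count / k
    return total / len(questions)

# `sample_answer` would wrap an LLM call; a toy stub keeps the sketch runnable.
toy = lambda prompt, q: random.choice(["42", "42", "7"])
print(self_consistency_score("Let's think step by step.", ["dummy question"], toy))
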