
Refine Search Results

Document Type

  • 53 journal articles
  • 16 conference papers

Collection Scope

  • 69 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 54 Engineering
    • 42 Computer Science and Technology...
    • 32 Software Engineering
    • 5 Electrical Engineering
    • 4 Information and Communication Engineering
    • 4 Control Science and Engineering
    • 3 Optical Engineering
    • 3 Bioengineering
    • 2 Mechanical Engineering
    • 2 Electronic Science and Technology (...
    • 2 Chemical Engineering and Technology
    • 2 Biomedical Engineering (...
    • 2 Safety Science and Engineering
    • 1 Materials Science and Engineering (...
    • 1 Architecture
  • 25 Management
    • 15 Management Science and Engineering (...
    • 13 Library, Information and Archives Manag...
    • 6 Business Administration
  • 18 Science
    • 11 Mathematics
    • 6 Physics
    • 5 Statistics (...
    • 4 Biology
    • 4 Systems Science
    • 2 Chemistry
    • 1 Marine Science
  • 5 Economics
    • 5 Applied Economics
  • 2 Law
    • 2 Sociology
  • 2 Medicine
    • 2 Basic Medicine (...
    • 2 Clinical Medicine
  • 1 Education
    • 1 Education
  • 1 Military Science

Topics

  • 4 benchmarking
  • 4 linguistics
  • 3 graph neural net...
  • 3 computational li...
  • 3 semantics
  • 2 reliability
  • 2 reinforcement le...
  • 2 signal processin...
  • 2 scattering param...
  • 2 speech processin...
  • 2 acoustics
  • 2 digital storage
  • 2 natural language...
  • 2 generators
  • 2 crowdsourcing
  • 1 satellite naviga...
  • 1 differential sys...
  • 1 knowledge based ...
  • 1 adaptation
  • 1 friction compens...

Institutions

  • 37 institute for ad...
  • 11 school of inform...
  • 11 renmin universit...
  • 7 center for llm i...
  • 5 university of sc...
  • 5 center for machi...
  • 4 institute of nat...
  • 4 northeastern uni...
  • 4 school of mathem...
  • 4 institute of adv...
  • 4 key laboratory o...
  • 4 guanghua school ...
  • 4 xiangjiang labor...
  • 4 peking universit...
  • 4 state key labora...
  • 4 national enginee...
  • 4 institute of com...
  • 3 shanghai jiao to...
  • 3 key lab of high ...
  • 3 meituan

Authors

  • 35 li zhiyu
  • 29 xiong feiyu
  • 23 tang bo
  • 17 song shichao
  • 15 niu simin
  • 12 wang hanyu
  • 10 liang xun
  • 7 yang jiawei
  • 5 zhang zhongwang
  • 5 zheng zifan
  • 5 yu qingchen
  • 5 xu zhi-qin john
  • 5 zhang wentao
  • 4 yu yu
  • 4 chen ding
  • 4 deng haiying
  • 4 zhang sensen
  • 4 wang wenjin
  • 4 weinan e.
  • 4 lin pengxiao

Language

  • 53 English
  • 16 Other

Search condition: Institution = "Institute of Advanced Algorithms Research"
69 records; showing 41-50
CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models
arXiv, 2024
Authors: Lyu, Yuanjie; Li, Zhiyu; Niu, Simin; Xiong, Feiyu; Tang, Bo; Wang, Wenjin; Wu, Hao; Liu, Huanyong; Xu, Tong; Chen, Enhong. Affiliations: University of Science and Technology of China, Hefei, China; Institute for Advanced Algorithms Research, Shanghai, China; Renmin University of China, Beijing, China; 360 AI Research Institute, Beijing, China
Retrieval-Augmented Generation (RAG) is a technique that enhances the capabilities of large language models (LLMs) by incorporating external knowledge sources. This method addresses common LLM limitations, including o...
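To make the retrieve-then-generate pattern described in the abstract concrete, here is a minimal sketch; the `embed` and `generate` functions are illustrative stand-ins for a real embedding model and LLM, not CRUD-RAG's implementation.

```python
import math

def embed(text):
    # Toy bag-of-letters embedding; a real RAG system would use a neural encoder.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ord(ch) < 128:
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, corpus, k=2):
    # Rank documents by cosine similarity to the query and keep the top k.
    q = embed(query)
    return sorted(corpus, key=lambda d: -sum(a * b for a, b in zip(q, embed(d))))[:k]

def generate(prompt):
    # Stand-in for an LLM call; a real system would decode from a model here.
    return "[LLM completion conditioned on]\n" + prompt

corpus = [
    "RAG retrieves external documents and feeds them to the model as context.",
    "Offline reinforcement learning trains policies from logged data.",
]
context = "\n".join(retrieve("How does RAG use external knowledge?", corpus, k=1))
print(generate("Context:\n" + context + "\nQuestion: How does RAG use external knowledge?"))
```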
HiBid: A Cross-Channel Constrained Bidding System with Budget Allocation by Hierarchical Offline Deep Reinforcement Learning
arXiv, 2023
Authors: Wang, Hao; Tang, Bo; Liu, Chi Harold; Mao, Shangqin; Zhou, Jiahong; Dai, Zipeng; Sun, Yaqi; Xie, Qianlong; Wang, Xingxing; Wang, Dong. Affiliations: School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China; Meituan and Institute for Advanced Algorithms Research, Shanghai, China; Meituan, Beijing, China
Online display advertising platforms serve numerous advertisers by providing real-time bidding (RTB) at the scale of billions of ad requests every day. The bidding strategy handles ad requests across multiple channe...
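A hedged sketch of the two-level structure the title suggests, not HiBid itself: a high-level step splits the total budget across channels, and a low-level step paces each channel's bids against its remaining budget. The proportional allocation and linear pacing rule are assumptions for illustration.

```python
def allocate_budget(total_budget, channel_value):
    # High level: split the budget across channels in proportion to each
    # channel's estimated value (an assumed heuristic, not HiBid's policy).
    s = sum(channel_value.values())
    return {ch: total_budget * v / s for ch, v in channel_value.items()}

def paced_bid(predicted_value, remaining, allocated):
    # Low level: shade the bid as the channel's budget is consumed,
    # so spend is spread over time instead of exhausted early.
    pacing = remaining / allocated if allocated > 0 else 0.0
    return predicted_value * pacing

budgets = allocate_budget(1000.0, {"search": 3.0, "feed": 1.0})
remaining = dict(budgets)
for value, channel in [(2.0, "search"), (1.5, "feed"), (2.5, "search")]:
    bid = paced_bid(value, remaining[channel], budgets[channel])
    remaining[channel] -= bid  # assume the bid is spent when it wins
    print(channel, round(bid, 2), round(remaining[channel], 2))
```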
UBENCH: Benchmarking Uncertainty in Large Language Models with Multiple Choice Questions
arXiv, 2024
Authors: Wang, Xunzhi; Zhang, Zhuowei; Li, Qiongyu; Chen, Gaonan; Hu, Mengting; Li, Zhiyu; Luo, Bitong; Gao, Hang; Han, Zhixin; Wang, Haotian. Affiliations: College of Software, Nankai University, China; Institute for Advanced Algorithms Research, Shanghai, China; College of Artificial Intelligence, Tianjin University of Science and Technology, China
The rapid development of large language models (LLMs) has shown promising practical results. However, their low interpretability often leads to errors in unforeseen circumstances, limiting their utility. Many works ha...
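One common way to score uncertainty on multiple-choice items, shown here as an illustrative sketch rather than UBENCH's actual metric, is expected calibration error: bin the model's stated confidences and compare each bin's average confidence with its accuracy. The sample records are fabricated.

```python
def expected_calibration_error(records, n_bins=5):
    # records: (stated confidence in [0, 1], whether the MCQ answer was correct)
    bins = [[] for _ in range(n_bins)]
    for conf, correct in records:
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, correct))
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(ok for _, ok in b) / len(b)
            ece += len(b) / len(records) * abs(avg_conf - accuracy)
    return ece

records = [(0.9, True), (0.85, True), (0.8, False), (0.6, True), (0.3, False)]
print(round(expected_calibration_error(records), 3))
```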
NC-ALG: Graph-Based Active Learning under Noisy Crowd
40th IEEE International Conference on Data Engineering, ICDE 2024
Authors: Zhang, Wentao; Wang, Yexin; You, Zhenbang; Li, Yang; Cao, Gang; Yang, Zhi; Cui, Bin. Affiliations: Center for Machine Learning Research, Peking University, China; Key Lab of High Confidence Software Technologies, Peking University, China; Institute of Advanced Algorithms Research, Shanghai, China; Institute of Computational Social Science, Peking University, Qingdao, China; National Engineering Laboratory for Big Data Analytics and Applications, China; TEG, Tencent Inc., Department of Data Platform, China; Beijing Academy of Artificial Intelligence, China
Graph Neural Networks (GNNs) have achieved great success in various data mining tasks, but they rely heavily on a large number of annotated nodes, requiring considerable human effort. Despite the effectiveness of exis...
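An illustrative sketch of the noisy-crowd setting, not the NC-ALG algorithm: aggregate each node's crowd votes by majority, and spend the next annotation budget on the node whose votes disagree the most.

```python
from collections import Counter

def majority_label(votes):
    # Aggregate noisy crowd annotations by simple majority vote.
    return Counter(votes).most_common(1)[0][0]

def disagreement(votes):
    # Fraction of votes that differ from the majority; higher = noisier node.
    counts = Counter(votes)
    return 1.0 - counts.most_common(1)[0][1] / len(votes)

crowd_votes = {"n1": ["A", "A", "B"], "n2": ["B", "B", "B"], "n3": ["A", "B", "C"]}
labels = {node: majority_label(v) for node, v in crowd_votes.items()}
next_query = max(crowd_votes, key=lambda n: disagreement(crowd_votes[n]))
print(labels, "request more annotations for:", next_query)
```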
Initialization is Critical to Whether Transformers Fit Composite Functions by Reasoning or Memorizing
38th Conference on Neural Information Processing Systems, NeurIPS 2024
Authors: Zhang, Zhongwang; Lin, Pengxiao; Wang, Zhiwei; Zhang, Yaoyu; Xu, Zhi-Qin John. Affiliations: Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University, China; School of Mathematical Sciences, Shanghai Jiao Tong University, China; Key Laboratory of Marine Intelligent Equipment and System, Ministry of Education, China; Shanghai Seres Information Technology Co., Ltd., Shanghai, China; Center for LLM, Institute for Advanced Algorithms Research, Shanghai, China
Transformers have shown impressive capabilities across various tasks, but their performance on compositional problems remains a topic of debate. In this work, we investigate the mechanisms of how transformers behave o...
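The knob at issue is the scale of the initial weights. As a hedged illustration (the plain linear layer and the gamma parameterization below are assumptions, not the paper's exact setup), a rate of gamma = 0.5 recovers the standard 1/sqrt(d) scale, while larger gamma gives a smaller initialization:

```python
import torch

def init_linear(d_in, d_out, gamma=0.5):
    # Initialize weights with std = d_in ** (-gamma); gamma = 0.5 is the
    # standard 1/sqrt(d_in) scale, larger gamma means smaller initial weights.
    layer = torch.nn.Linear(d_in, d_out, bias=False)
    torch.nn.init.normal_(layer.weight, mean=0.0, std=d_in ** (-gamma))
    return layer

standard = init_linear(64, 64, gamma=0.5)
small = init_linear(64, 64, gamma=1.0)
print(standard.weight.std().item(), small.weight.std().item())
```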
FastMem: Fast Memorization of Prompt Improves Context Awareness of Large Language Models
arXiv, 2024
Authors: Zhu, Junyi; Liu, Shuochen; Yu, Yu; Tang, Bo; Yan, Yibo; Li, Zhiyu; Xiong, Feiyu; Xu, Tong; Blaschko, Matthew B. Affiliations: ESAT-PSI, KU Leuven, Belgium; University of Science and Technology of China, China; Institute for Advanced Algorithms Research, Shanghai, China; National University of Singapore, Singapore
Large language models (LLMs) excel at generating coherent text, but they often struggle with context awareness, leading to inaccuracies in tasks requiring faithful adherence to provided information. We introduce FastM...
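A hedged sketch of the general idea the title suggests (briefly maximizing the model's likelihood of the prompt before answering, so the context is "memorized"); the toy model, optimizer, and step count are assumptions, not the paper's recipe.

```python
import torch

def fast_memorize(model, prompt_ids, steps=3, lr=1e-3):
    # Take a few gradient steps on next-token prediction over the prompt itself.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(prompt_ids[:, :-1])
        loss = torch.nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), prompt_ids[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

vocab, dim = 100, 32
toy_model = torch.nn.Sequential(torch.nn.Embedding(vocab, dim),
                                torch.nn.Linear(dim, vocab))
prompt = torch.randint(0, vocab, (1, 16))
fast_memorize(toy_model, prompt)  # model now assigns higher likelihood to the prompt
```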
Memory3: Language Modeling with Explicit Memory
arXiv, 2024
Authors: Yang, Hongkang; Lin, Zehao; Wang, Wenjin; Wu, Hao; Li, Zhiyu; Tang, Bo; Wei, Wenqiang; Wang, Jinbo; Tang, Zeyun; Song, Shichao; Xi, Chenyang; Yu, Yu; Chen, Kai; Xiong, Feiyu; Tang, Linpeng; Weinan, E. Affiliations: Center for LLM, Institute for Advanced Algorithms Research, Shanghai, China; Moqi Inc, China; Center for Machine Learning Research, Peking University, China; School of Mathematical Sciences, Peking University, AI for Science Institute, China
The training and inference of large language models (LLMs) together constitute a costly process that transports knowledge from raw data to meaningful computation. Inspired by the memory hierarchy of the human brain, we reduc...
Proxy-RLHF: Decoupling Generation and Alignment in Large Language Model with Proxy
arXiv, 2024
Authors: Zhu, Yu; Sun, Chuxiong; Yang, Wenfei; Wei, Wenqiang; Tang, Bo; Zhang, Tianzhu; Li, Zhiyu; Zhang, Shifeng; Xiong, Feiyu; Hu, Jie; Yang, Mingchuan. Affiliations: University of Science and Technology of China, Hefei, China; Deep Space Exploration Laboratory, China; Institute for Advanced Algorithms Research, Shanghai, China; Research Institute of China Telecom, China; Sangfor Technologies Inc., China
Reinforcement Learning from Human Feedback (RLHF) is the prevailing approach to ensuring that Large Language Models (LLMs) align with human values. However, existing RLHF methods incur a high computational cost, one main r...
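A loosely hedged sketch of what "decoupling" could look like in code: the generator stays frozen while a small trainable proxy reweights its next-token distribution toward aligned outputs. The gating interface below is an assumption for illustration, not the paper's method.

```python
import torch

torch.manual_seed(0)
vocab_size = 50
generator_logits = torch.randn(vocab_size)        # frozen LLM's next-token scores
proxy = torch.nn.Linear(vocab_size, vocab_size)   # small trainable alignment module

def aligned_next_token(logits):
    # The proxy emits per-token acceptance probabilities in (0, 1);
    # rejected tokens are downweighted before sampling.
    accept = torch.sigmoid(proxy(logits))
    gated = logits + torch.log(accept + 1e-9)
    return torch.distributions.Categorical(logits=gated).sample()

print(aligned_next_token(generator_logits).item())
```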
UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation
arXiv, 2023
Authors: Liang, Xun; Song, Shichao; Niu, Simin; Li, Zhiyu; Xiong, Feiyu; Tang, Bo; Wang, Yezhaohui; He, Dawei; Cheng, Peng; Wang, Zhonghao; Deng, Haiying. Affiliations: School of Information, Renmin University of China, Beijing, China; Institute for Advanced Algorithms Research, Shanghai, China; State Key Laboratory of Media Convergence Production Technology and Systems, Beijing, China
Large language models (LLMs) produce hallucinated text, compromising their practical utility in professional contexts. To assess the reliability of LLMs, numerous initiatives have developed benchmark evaluations for h...
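As a toy illustration of reference-based hallucination checking (not UHGEval's metric; the keyword heuristic and threshold are assumptions): compare the tokens of a freely generated continuation against a reference and flag low overlap.

```python
def keyword_overlap(generated, reference):
    # Fraction of generated tokens that also appear in the reference text.
    gen_tokens = set(generated.lower().split())
    ref_tokens = set(reference.lower().split())
    return len(gen_tokens & ref_tokens) / max(len(gen_tokens), 1)

continuation = "the company reported 12% revenue growth in 2031"
reference = "the company reported 3% revenue growth in 2023"
score = keyword_overlap(continuation, reference)
print("likely hallucination" if score < 0.8 else "consistent", round(score, 2))
```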
NewsBench: A Systematic Evaluation Framework for Assessing Editorial Capabilities of Large Language Models in Chinese Journalism
arXiv, 2024
Authors: Li, Miao; Chen, Ming-Bin; Tang, Bo; Hou, Shengbin; Wang, Pengyu; Deng, Haiying; Li, Zhiyu; Xiong, Feiyu; Mao, Keming; Cheng, Peng; Luo, Yi. Affiliations: School of Computing and Information Systems, The University of Melbourne, Australia; Institute for Advanced Algorithms Research, China; Northeastern University, China; State Key Laboratory of Media Convergence Production Technology and Systems, China
We present NewsBench, a novel evaluation framework to systematically assess the editorial capabilities of Large Language Models (LLMs) in Chinese journalism. Our constructed benchmark dataset is focus...