
Refine search results

Document type

  • 204 journal articles
  • 170 conference papers

Collection

  • 374 electronic documents
  • 0 print holdings

Date distribution

Discipline classification

  • 235 Engineering
    • 192 Computer Science and Technology...
    • 168 Software Engineering
    • 49 Information and Communication Engineering
    • 29 Control Science and Engineering
    • 22 Optical Engineering
    • 18 Mechanical Engineering
    • 16 Architecture
    • 16 Civil Engineering
    • 15 Cyberspace Security
    • 13 Electrical Engineering
    • 11 Electronic Science and Technology (...
    • 11 Surveying and Mapping Science and Technology
    • 11 Bioengineering
    • 10 Chemical Engineering and Technology
    • 8 Safety Science and Engineering
    • 6 Transportation Engineering
    • 6 Biomedical Engineering (...
    • 5 Instrument Science and Technology
    • 4 Power Engineering and Engineering Thermo...
  • 76 Management
    • 43 Library, Information and Archives Manage...
    • 40 Management Science and Engineering (...
    • 8 Business Administration
  • 67 Science
    • 35 Mathematics
    • 17 Statistics (...
    • 12 Physics
    • 12 Biology
    • 10 Chemistry
  • 4 Economics
    • 4 Applied Economics
  • 4 Law
    • 4 Sociology
    • 3 Law
  • 2 Agriculture
  • 2 Medicine
  • 1 Education

Topics

  • 15 篇 semantics
  • 12 篇 image segmentati...
  • 10 篇 training
  • 9 篇 reinforcement le...
  • 9 篇 contrastive lear...
  • 8 篇 visual languages
  • 7 篇 speech processin...
  • 7 篇 convolution
  • 7 篇 computer vision
  • 7 篇 image reconstruc...
  • 6 篇 semantic segment...
  • 6 篇 distillation
  • 6 篇 visualization
  • 6 篇 pipelines
  • 5 篇 costs
  • 5 篇 benchmarking
  • 5 篇 codes
  • 4 篇 object detection
  • 4 篇 signal processin...
  • 4 篇 redundancy

Institutions

  • 214 篇 key laboratory o...
  • 62 篇 institute of art...
  • 40 篇 key laboratory o...
  • 36 篇 tencent youtu la...
  • 33 篇 peng cheng labor...
  • 32 篇 school of inform...
  • 25 篇 key laboratory o...
  • 22 篇 fujian key labor...
  • 20 篇 fujian key labor...
  • 18 篇 youtu lab tencen...
  • 17 篇 key laboratory o...
  • 10 篇 school of inform...
  • 10 篇 national univers...
  • 8 篇 tencent
  • 8 篇 skywork ai
  • 8 篇 school of comput...
  • 8 篇 the key laborato...
  • 8 篇 department of ar...
  • 7 篇 key laboratory o...
  • 6 篇 department of co...

Authors

  • 152 篇 ji rongrong
  • 67 篇 sun xiaoshuai
  • 49 篇 rongrong ji
  • 40 篇 cao liujuan
  • 37 篇 ji jiayi
  • 30 篇 zhou yiyi
  • 30 篇 zhang shengchuan
  • 28 篇 wang cheng
  • 25 篇 ma yiwei
  • 24 篇 zhang yuxin
  • 24 篇 zheng xiawu
  • 22 篇 chao fei
  • 19 篇 luo gen
  • 18 篇 lin mingbao
  • 18 篇 zhang yan
  • 17 篇 xiaoshuai sun
  • 16 篇 wang haowei
  • 15 篇 shen yunhang
  • 15 篇 jiang guannan
  • 15 篇 li hui

Language

  • 318 English
  • 56 Other
Search condition: Affiliation = "Key Laboratory of Multimedia Trusted Perception and Efficient Computing"
374 records in total; showing results 51-60
Signer Diversity-driven Data Augmentation for Signer-Independent Sign Language Translation
2024 Findings of the Association for Computational Linguistics: NAACL 2024
Authors: Fu, Honghao; Zhang, Liang; Fu, Biao; Zhao, Rui; Su, Jinsong; Shi, Xiaodong; Chen, Yidong. Affiliations: Institute of Artificial Intelligence, Xiamen University, China; School of Informatics, Xiamen University, China; Ministry of Culture and Tourism, China; Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China
The primary objective of sign language translation (SLT) is to transform sign language videos into natural sentences. A crucial challenge in this field is developing signer-independent SLT systems which requires model...
Outlier-Aware Slicing for Post-Training Quantization in Vision Transformer
41st International Conference on Machine Learning, ICML 2024
Authors: Ma, Yuexiao; Li, Huixia; Zheng, Xiawu; Ling, Feng; Xiao, Xuefeng; Wang, Rui; Wen, Shilei; Chao, Fei; Ji, Rongrong. Affiliations: Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, School of Informatics, Xiamen University, 361005, China; ByteDance Inc., China; Peng Cheng Laboratory, Shenzhen, China; Institute of Artificial Intelligence, Xiamen University, China
Post-Training Quantization (PTQ) is a vital technique for network compression and acceleration, gaining prominence as model sizes increase. This paper addresses a critical challenge in PTQ: the severe impact of outlie...
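The preceding record motivates outlier handling in post-training quantization (PTQ). As a rough illustration of why a few large outliers are damaging, the sketch below compares plain min-max uniform quantization against percentile clipping on a synthetic tensor. This is a generic PTQ illustration in Python/NumPy, not the outlier-aware slicing method of the paper; the `quantize` helper, the 4-bit setting, and the 99.9th-percentile threshold are all assumed for the example.

```python
import numpy as np

def quantize(x, n_bits=4, clip_pct=None):
    """Symmetric uniform quantization; optionally clip the range to a percentile
    of |x| first. A generic PTQ baseline, not the paper's slicing method."""
    qmax = 2 ** (n_bits - 1) - 1
    bound = np.abs(x).max() if clip_pct is None else np.percentile(np.abs(x), clip_pct)
    scale = bound / qmax
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

rng = np.random.default_rng(0)
acts = rng.normal(size=10_000)
acts[:10] *= 80.0  # a handful of large outliers, as often seen in transformer activations

for clip_pct in (None, 99.9):
    mse = np.mean((quantize(acts, n_bits=4, clip_pct=clip_pct) - acts) ** 2)
    label = "full range" if clip_pct is None else f"clip at {clip_pct}th percentile"
    print(f"{label:>26}: MSE = {mse:.4f}")
```

Covering the full range lets ten outliers dictate the step size for the other 9,990 values; clipping trades a small error on the outliers for much finer resolution everywhere else, which is the basic tension that outlier-aware PTQ methods address.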
Grounded Chain-of-Thought for Multimodal Large Language Models
arXiv, 2025
Authors: Wu, Qiong; Yang, Xiangcong; Zhou, Yiyi; Fang, Chenxin; Song, Baiyang; Sun, Xiaoshuai; Ji, Rongrong. Affiliations: Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, China
Despite great progress, existing multimodal large language models (MLLMs) are prone to visual hallucination, greatly impeding their trustworthy applications. In this paper, we study this problem from the perspective o...
Can LLMs Replace Clinical Doctors? Exploring Bias in Disease Diagnosis by Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhao, Yutian; Wang, Huimin; Liu, Yuqi; Suhuang, Wu; Wu, Xian; Zheng, Yefeng. Affiliations: Jarvis Research Center, Tencent YouTu Lab, Shenzhen, China; Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; JMedical Artificial Intelligence Lab, Westlake University, Hangzhou, China
The bias of disease prediction in Large Language Models (LLMs) is a critical yet underexplored issue, with potential implications for healthcare outcomes and equity. As LLMs increasingly find applications in healthcar...
AffineQuant: Affine Transformation Quantization for Large Language Models
12th International Conference on Learning Representations, ICLR 2024
Authors: Ma, Yuexiao; Li, Huixia; Zheng, Xiawu; Ling, Feng; Xiao, Xuefeng; Wang, Rui; Wen, Shilei; Chao, Fei; Ji, Rongrong. Affiliations: Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, School of Informatics, Xiamen University, 361005, China; ByteDance Inc., China; Peng Cheng Laboratory, Shenzhen, China; Institute of Artificial Intelligence, Xiamen University, China
The significant resource requirements associated with Large-scale Language Models (LLMs) have generated considerable interest in the development of techniques aimed at compressing and accelerating neural networks. Amo...
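The AffineQuant record above concerns quantizing LLM weights after an equivalence-preserving affine transformation. The sketch below only demonstrates the underlying identity X W = (X A^{-1})(A W), with A restricted to a diagonal scaling for simplicity; the actual method optimizes a more general affine matrix, and the quantizer, tensor shapes, and scaling rule here are illustrative assumptions.

```python
import numpy as np

def uniform_quant(w, n_bits=4):
    # Symmetric per-tensor uniform quantizer (illustrative, not the paper's scheme).
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))   # activations
W = rng.normal(size=(16, 4))   # weights of a linear layer
W[3] *= 25.0                   # one outlier input channel dominates the weight range

# Plain PTQ: quantize W directly.
err_plain = np.linalg.norm(X @ uniform_quant(W) - X @ W)

# Affine reparameterization: X W == (X A^{-1}) (A W) for any invertible A.
# Choosing A to even out channel magnitudes makes A W easier to quantize.
a = np.abs(W).max(axis=1)      # per-input-channel magnitude of W
A = np.diag(1.0 / a)           # shrink large channels ...
A_inv = np.diag(a)             # ... and compensate on the activation side
err_affine = np.linalg.norm((X @ A_inv) @ uniform_quant(A @ W) - X @ W)

print(f"plain PTQ output error:  {err_plain:.3f}")
print(f"affine PTQ output error: {err_affine:.3f}")  # typically noticeably smaller
```

Because the reparameterization is exact in floating point, any reduction in the quantization error of A W translates directly into lower layer-output error without changing what the layer computes.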
HRSAM: Efficient Interactive Segmentation in High-Resolution Images
arXiv, 2024
Authors: Huang, You; Lai, Wenbin; Ji, Jiayi; Cao, Liujuan; Zhang, Shengchuan; Ji, Rongrong. Affiliations: Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China
The Segment Anything Model (SAM) has advanced interactive segmentation but is limited by the high computational cost on high-resolution images. This requires downsampling to meet GPU constraints, sacrificing the fine-...
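The HRSAM record above notes that SAM must downsample high-resolution inputs to fit GPU memory. A back-of-the-envelope calculation makes the quadratic attention cost concrete; the 16-pixel patch size, the resolutions, and fp16 storage below are assumptions for illustration, not figures from the paper.

```python
# Memory for a single self-attention score matrix (fp16), assuming ViT-style
# 16x16 patches; real models shard this across heads and use further tricks.
patch = 16
for side in (1024, 2048, 4096):
    tokens = (side // patch) ** 2
    attn_bytes = tokens ** 2 * 2  # one (tokens x tokens) matrix at 2 bytes/entry
    print(f"{side}x{side} px -> {tokens:6d} tokens, "
          f"~{attn_bytes / 2**30:6.2f} GiB per attention map")
```

Token count grows quadratically with image side length and the score matrix quadratically with token count, so a 4x increase in resolution inflates a single attention map by roughly 256x, which is why naive full-resolution interactive segmentation quickly exceeds GPU limits.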
Breaking the Bias: Recalibrating the Attention of Industrial Anomaly Detection
arXiv, 2024
Authors: Chen, Xin; Cao, Liujuan; Zhang, Shengchuan; Zheng, Xiewu; Zhang, Yan. Affiliations: Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China
Due to the scarcity and unpredictable nature of defect samples, industrial anomaly detection (IAD) predominantly employs unsupervised learning. However, all unsupervised IAD methods face a common challenge: the inhere...
VEGA: Learning Interleaved Image-Text Comprehension in Vision-Language Large Models
arXiv, 2024
Authors: Zhou, Chenyu; Zhang, Mengdan; Chen, Peixian; Fu, Chaoyou; Shen, Yunhang; Zheng, Xiawu; Sun, Xing; Ji, Rongrong. Affiliations: Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China
The swift progress of Multi-modal Large Models (MLLMs) has showcased their impressive ability to tackle tasks blending vision and language. Yet, most current models and benchmarks cater to scenarios with a narrow scop...
ERQ: Error Reduction for Post-Training Quantization of Vision Transformers
41st International Conference on Machine Learning, ICML 2024
Authors: Zhong, Yunshan; Hu, Jiawei; Huang, You; Zhang, Yuxin; Ji, Rongrong. Affiliations: Institute of Artificial Intelligence, Xiamen University, China; Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; Department of Artificial Intelligence, School of Informatics, Xiamen University, China; Peng Cheng Laboratory, China
Post-training quantization (PTQ) for vision transformers (ViTs) has garnered significant attention due to its efficiency in compressing models. However, existing methods typically overlook the intricate interdependence between qu...
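The ERQ record above is about reducing quantization error for ViTs by accounting for how quantization decisions interact at the layer output. As a generic illustration of output-error-aware rounding (not the ERQ algorithm), the sketch below compares round-to-nearest against a simple coordinate-descent scheme that re-rounds individual weights whenever that lowers the output error on calibration data; the shapes, 4-bit setting, and calibration set are assumed.

```python
import numpy as np

def output_aware_round(X, w, n_bits=4, passes=2):
    """Start from round-to-nearest, then greedily move single weights to the other
    neighbouring grid point if that reduces ||X (w_hat - w)||.
    A toy output-error-aware quantizer, not the ERQ method."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    w_hat = np.round(w / scale) * scale
    best = np.linalg.norm(X @ (w_hat - w))
    for _ in range(passes):
        for i in range(len(w)):
            for cand in (np.floor(w[i] / scale) * scale, np.ceil(w[i] / scale) * scale):
                trial = w_hat.copy()
                trial[i] = cand
                err = np.linalg.norm(X @ (trial - w))
                if err < best:
                    w_hat, best = trial, err
    return w_hat

rng = np.random.default_rng(1)
X = rng.normal(size=(128, 32))     # calibration activations
w = rng.normal(size=32)            # one output channel of a linear layer

scale = np.abs(w).max() / (2 ** 3 - 1)
rtn = np.round(w / scale) * scale  # plain round-to-nearest baseline
err_rtn = np.linalg.norm(X @ (rtn - w))
err_oar = np.linalg.norm(X @ (output_aware_round(X, w) - w))
print(f"round-to-nearest output error: {err_rtn:.3f}")
print(f"output-aware rounding error:   {err_oar:.3f}")  # never worse, usually lower
```

The point of the toy is only that judging rounding decisions by their effect on the layer output, rather than on each weight in isolation, systematically reduces post-quantization error, which is the general direction the record above pursues.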
DiffusionFace: Towards a Comprehensive Dataset for Diffusion-Based Face Forgery Analysis
arXiv, 2024
Authors: Chen, Zhongxi; Sun, Ke; Zhou, Ziyin; Lin, Xianming; Sun, Xiaoshuai; Cao, Liujuan; Ji, Rongrong. Affiliations: Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China
The rapid progress in deep learning has given rise to hyper-realistic facial forgery methods, leading to concerns related to misinformation and security risks. Existing face forgery datasets have limitations in genera...