Maximum Entropy Model for Example-Based Machine Translation

Authors: YIN CHEN, MUYUN YANG, SHENG LI

Affiliation: MOE-MS Key Laboratory of Natural Language Processing and Speech, Harbin Institute of Technology, P.O. Box 321, No. 92 West Dazhi Street, NanGang, Harbin 150001, China

Publication: International Journal of Computer Processing of Languages

Year/Volume/Issue: 2007, Vol. 20, No. 2&3

Pages: 101-113

Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (Engineering or Science degrees may be conferred)]

Keywords: EBMT; Machine learning; Maximum entropy; Feature space

Abstract: Most example-based machine translation (EBMT) systems handle their translation examples using heuristic measures based on human intuition. However, such heuristic rules are hard to organize effectively, and they scale poorly when diverse features must be incorporated to cover more language phenomena and larger domains. In this paper, we adopt a machine learning approach to EBMT model design instead of relying on human intuition. A maximum entropy (ME) model is introduced to effectively incorporate the different kinds of features inherent in the translation examples. At the same time, a multi-dimensional feature space is formally constructed to include various features covering different aspects. In the experiments, the proposed model shows significant performance improvement.
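For background, a conditional maximum entropy model of the kind the abstract refers to takes the standard log-linear form shown below; the concrete feature functions over the translation examples and the candidate set are defined in the full paper, not here, so this is only the generic formulation:

p(y \mid x) = \frac{1}{Z(x)} \exp\!\left( \sum_{i} \lambda_i f_i(x, y) \right), \qquad Z(x) = \sum_{y'} \exp\!\left( \sum_{i} \lambda_i f_i(x, y') \right)

where x is the input, y a candidate (e.g., a translation example to be selected), f_i(x, y) the feature functions spanning the multi-dimensional feature space, and \lambda_i the feature weights estimated from training data.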
