
Relational information gain

Authors: Lippi, Marco; Jaeger, Manfred; Frasconi, Paolo; Passerini, Andrea

Affiliations: Univ Florence, Dipartimento Sistemi & Informat, Florence, Italy; Aalborg Univ, Dept Comp Sci, Aalborg, Denmark; Univ Trento, Dipartimento Ingn & Sci Informaz, Trento, Italy

Published in: MACHINE LEARNING

Year/Volume/Issue: 2011, Vol. 83, No. 2

Pages: 219-239

Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Keywords: Relational learning; Inductive logic programming; Information gain

Abstract: We introduce relational information gain, a refinement scoring function measuring the informativeness of newly introduced variables. The gain can be interpreted as a conditional entropy in a well-defined sense and can be efficiently approximated. In conjunction with simple greedy general-to-specific search algorithms such as FOIL, it yields an efficient and competitive algorithm in terms of predictive accuracy and compactness of the learned theory. In conjunction with the decision tree learner TILDE, it offers a beneficial alternative to lookahead, achieving similar performance while significantly reducing the number of evaluated literals.
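For context (not part of the original abstract): the score that the paper refines is the classic FOIL information gain, which compares the purity of covered examples before and after a candidate literal is added to a clause. The sketch below is a minimal illustration of that standard baseline only, not of the relational information gain introduced in the paper; the function name foil_gain and the convention of approximating the surviving-positives count t by p1 are assumptions made for illustration.

import math

def foil_gain(p0, n0, p1, n1):
    """Classic FOIL information gain for a candidate literal (baseline sketch).

    p0, n0 -- positive/negative tuples covered by the clause before refinement
    p1, n1 -- positive/negative tuples covered after adding the literal
    """
    if p0 == 0 or p1 == 0:
        return 0.0  # no positives covered: the gain is taken to be zero
    info_before = -math.log2(p0 / (p0 + n0))  # bits needed before refinement
    info_after = -math.log2(p1 / (p1 + n1))   # bits needed after refinement
    # t (positive tuples still covered) is approximated here by p1
    return p1 * (info_before - info_after)

For example, foil_gain(10, 10, 8, 2) rewards a literal that keeps most positives while discarding most negatives, whereas a literal that merely introduces new variables without changing coverage scores zero under this baseline; measuring the informativeness of such newly introduced variables is exactly the gap the paper's relational information gain addresses.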
