Owing to the development of the Internet of Things (IoT), smart electrical appliances bring convenience to daily life. However, smart home appliances remain cumbersome, as their interface operations are mostly unintuitive...
The conventional real estate registry system exhibits numerous shortcomings, being both costly and time-consuming. Vulnerabilities to fraud, particularly through impersonation and false claims of property ownership, further unde...
In response to the critical need for energy conservation and carbon emission reduction globally, this paper introduces an innovative framework for hourly energy forecasting in smart buildings. Utilizing data collected...
This study presents "X-LeafNet," a novel approach for identifying tea leaf diseases using a modified Xception model integrated with Explainable Artificial Intelligence (XAI) techniques. The system aims to cla...
Fog computing suffers performance degradation when trust among nodes is absent, given the large number of fog nodes serving various applications. Further, some fog nodes may mislead and pretend to have more resource...
Cattle activity serves as a critical indicator of animal health and welfare, offering valuable insights into both physical and mental conditions. The livestock industry has embraced cutting-edge AI and computer vision...
Inverse Reinforcement Learning (IRL) and Reinforcement Learning from Human Feedback (RLHF) are pivotal methodologies in reward learning, which involve inferring and shaping the underlying reward function of sequential decision-making problems based on observed human demonstrations and feedback. Most prior work in reward learning has relied on prior knowledge or assumptions about decision or preference models, potentially leading to robustness issues. In response, this paper introduces a novel linear programming (LP) framework tailored for offline reward learning. Utilizing pre-collected trajectories without online exploration, this framework estimates a feasible reward set from the primal-dual optimality conditions of a suitably designed LP, and offers an optimality guarantee with provable sample efficiency. Our LP framework also enables aligning the reward functions with human feedback, such as pairwise trajectory comparison data, while maintaining computational tractability and sample efficiency. We demonstrate that our framework potentially achieves better performance compared to the conventional maximum likelihood estimation (MLE) approach through analytical examples and numerical experiments. Copyright 2024 by the author(s)
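The abstract's key idea, characterizing a feasible reward set through the optimality conditions of a linear program, can be sketched on a toy problem. The snippet below uses the classic Ng-Russell-style LP constraints for a 2-state, 2-action MDP as an illustrative stand-in; it is not the paper's exact primal-dual construction, and all transition numbers and the expert policy are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-state, 2-action MDP (hypothetical numbers, not from the paper).
gamma = 0.9
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # transitions under action 0
     np.array([[0.1, 0.9], [0.8, 0.2]])]   # transitions under action 1

# Suppose the observed expert takes action 0 in state 0 and action 1 in
# state 1.  Stack the expert-action rows into the expert transition matrix.
P_pi = np.vstack([P[0][0], P[1][1]])

# Ng-Russell-style LP characterization of the feasible reward set: a state
# reward vector r keeps the expert policy optimal iff, for every state s
# and each non-expert action a,
#   (P_pi[s] - P_a[s]) @ inv(I - gamma * P_pi) @ r >= 0.
inv_term = np.linalg.inv(np.eye(2) - gamma * P_pi)
M = np.vstack([(P_pi[0] - P[1][0]) @ inv_term,   # state 0, alt action 1
               (P_pi[1] - P[0][1]) @ inv_term])  # state 1, alt action 0

# Any r with M @ r >= 0 lies in the feasible set; pick one element by
# maximizing the total optimality margin, with r bounded in [-1, 1] so
# the LP stays bounded.
res = linprog(c=-M.sum(axis=0),           # linprog minimizes, so negate
              A_ub=-M, b_ub=np.zeros(2),  # -M @ r <= 0  <=>  M @ r >= 0
              bounds=[(-1.0, 1.0)] * 2)
print(res.x)  # one reward vector from the feasible set
```

Because the feasible set is defined by linear inequalities, any LP objective (here, the optimality margin) selects a particular member of it; the paper's framework additionally incorporates human feedback such as pairwise trajectory comparisons into this selection.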
This research discusses an emotion recognition system, which is an important component of many affective computing technologies for natural language processing, based on open-source platforms with automatic speech rec...
To automate GUI testing for Android apps, a popular technique is to use a GUI crawler to systematically explore the GUIs of the apps while detecting possible app crashes. However, during GUI exploration, the crawler ...
Recommender systems have transformed the nature of the online service experience through their rapid growth and widespread use, and they now play a vital role in today's world. At every point of ou...