As artificial intelligence systems, particularly large language models (LLMs), become increasingly integrated into decision-making processes, the ability to trust their outputs is crucial. To earn human trust, LLMs mu...
Learning a locomotion controller for a musculoskeletal system is challenging due to over-actuation and a high-dimensional action space. While many reinforcement learning methods attempt to address this issue, they often...
Agriculture is closely related to the United Nations Sustainable Development Goals (SDGs), especially those on ending hunger and sustainable agriculture. One important factor in agriculture is weather. Weather prediction is very important in agricul...
As learning-to-rank models are increasingly deployed for decision-making in areas with profound life implications, the FairML community has been developing fair learning-to-rank (LTR) models. These models rely on the availability of sensitive demographic features such as race or sex. However, in practice, regulatory obstacles and privacy concerns prevent this data from being collected and used. As a result, practitioners may either need to promote fairness despite the absence of these features or turn to demographic inference tools to attempt to infer them. Given that these tools are fallible, this paper aims to further understand how errors in demographic inference impact the fairness performance of popular fair LTR strategies. In which cases would it be better to keep such demographic attributes hidden from models versus infer them? We examine a spectrum of fair LTR strategies, ranging from fair LTR with demographic features hidden versus inferred to fairness-unaware LTR followed by fair re-ranking. We conduct a controlled empirical investigation that models different levels of inference error by systematically perturbing the inferred sensitive attribute. We also perform three case studies with real-world datasets and popular open-source inference methods. Our findings reveal that as inference noise grows, LTR-based methods that incorporate fairness considerations into the learning process may increase bias. In contrast, fair re-ranking strategies are more robust to inference errors. All source code, data, and artifacts of our experimental study are available here: https://***/sewen007/***
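The abstract above describes modeling inference errors by systematically perturbing the inferred sensitive attribute, but this excerpt does not specify the perturbation mechanism. Purely as an illustration, a minimal sketch of one plausible scheme, assuming a binary sensitive attribute and a hypothetical noise parameter flip_rate (the function and pipeline names below are placeholders, not the paper's code), could look like this:

import numpy as np

def perturb_sensitive_attribute(attr, flip_rate, seed=None):
    """Simulate demographic-inference errors by flipping a binary
    sensitive attribute (0/1) with probability flip_rate."""
    rng = np.random.default_rng(seed)
    attr = np.asarray(attr)
    flip_mask = rng.random(attr.shape[0]) < flip_rate
    return np.where(flip_mask, 1 - attr, attr)

# Example: sweep increasing noise levels before re-running a fair LTR pipeline.
true_attr = np.random.default_rng(0).integers(0, 2, size=1000)
for flip_rate in (0.0, 0.1, 0.2, 0.3):
    noisy_attr = perturb_sensitive_attribute(true_attr, flip_rate, seed=42)
    # train_and_evaluate_fair_ltr(features, relevance, noisy_attr)  # hypothetical pipeline call

Flipping labels independently at a fixed rate is only one way to model inference noise; group-dependent error rates would be another reasonable choice.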
SQL injection has been a harmful and prolific attack against web applications for more than 20 years, yet it still poses a huge threat to the World Wide Web. Rapidly evolving web technology has not eradicated thi...
As cybersecurity threats evolve alarmingly, conventional defense strategies are becoming increasingly ineffective. In response to this urgent challenge, our study presents a transformative approach to red-teaming exer...
Free associations have been extensively used in cognitive psychology and linguistics for studying how conceptual knowledge is organized. Recently, the potential of applying a similar approach for investigating the kno...
Kitchen appliances are essential to accomplish cooking tasks efficiently. Advancements in technology have led to changes in the needs and expectations of the users of commonly used kitchen appliances. Thus, this quali...
This bibliometric analysis aims to explore the trends in research publication topics related to Artificial Intelligence (AI) for medicine in medical education over the last five years (2019-2023). By synthesizing data...
ISBN: (Print) 9798400709241
Automated machine learning (AutoML) creates additional opportunities for less advanced users to build and test their own data mining models. Even though AutoML creates the models for the user, technical knowledge and tools are still needed to evaluate those models, and due to the black-box nature of machine learning models, problems can arise with regard to algorithmic bias and fairness. Such biases can escalate in future applications, necessitating a structured approach to fairness evaluation in AutoML. This involves defining fairness criteria, selecting appropriate metrics, assessing fairness across groups, and addressing biases. In the realm of educational data mining, where AutoML is prevalent, biases related to attributes such as gender or race can lead to unethical outcomes. Fairness metrics vary in definition and strength, and some may even contradict others, making fairness evaluation more complex. In this paper, ten fairness metrics were chosen, explored, and implemented on four AutoML tools: Vertex AI, AutoSklearn, AutoKeras, and PyCaret. We identified two open educational datasets and built both prediction and classification models on those AutoML frameworks. We report our work in evaluating different machine learning models created by AutoML, discuss the challenges in evaluating fairness in those models, and describe our efforts to mitigate and resolve the problems of algorithmic bias in educational data mining.
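The abstract does not enumerate its ten metrics. As an illustration only of the kind of group-fairness computation involved, the sketch below evaluates two widely used metrics, demographic parity difference and equal opportunity difference, from a classifier's predictions; the toy arrays and the binarized group attribute are assumptions, not the paper's data or metric set:

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups (0/1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups (0/1)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

# Example usage with predictions from any AutoML-produced classifier:
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([0, 0, 1, 1, 0, 1, 1, 0])   # e.g., a binarized gender attribute
print(demographic_parity_difference(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))

Predictions exported from any of the four AutoML tools could be passed through functions like these, provided the protected attribute is available for the evaluation split.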