With a focus on toxic recommender algorithms, the article argues that the platform immunities granted in the early 2000s are incompatible with the regulated self-regulation of the EU Digital Services Act 2022 [DSA] and ...
This article reflects on the problem of false belief produced by the integrated psychological and algorithmic landscape humans now inhabit. Following the work of scholars such as Lee McIntyre (Post-Truth, MIT Press, 2018) or Cailin O'Connor and James Weatherall (The Misinformation Age: How False Beliefs Spread, Yale University Press, 2019), it combines recent discussions of fake news, post-truth, and science denialism across the disciplines of political science, computer science, sociology, psychology, and the history and philosophy of science, which variously address the ineffectiveness, in a digital era, of countering individual falsehoods with facts. Truth and falsehood, it argues, rather than being seen as properties or conditions attached to individual instances of content, should now be seen as collective, performative, and above all persuasive phenomena. They should be evaluated practically as networked systems and mechanisms of sharing in which individually targeted actions combine with structural tendencies (both human and mechanical) in unprecedented ways. For example, the persuasive agency of apparent consensus (clicks, likes, bots, trolls) is newly important in a fractured environment that only appears to be, but is no longer, 'public'; the control of narratives, labels, and associations is a live, time-sensitive issue, a continuous contest, or ongoing cusp. Taking a social approach to truth yields observations of new relevance: from how current strategies of negative cohesion, blame, and enemy-creation depend crucially on binary ways of constructing the world, to how the offer of identity/community powerfully cooperates with the structural tendencies of algorithm-driven advertiser platforms towards polarisation. Remedies for these machine-learned and psychological tendencies lie in end-user education. So the Arts and Humanities, whether via comparisons with previous historical periods, or via principles of critical thinking and active reading, offer crucial ...
We conducted a quantitatively coarse-grained but wide-ranging evaluation of how frequently recommender algorithms provide 'good' and 'bad' recommendations, with a focus on the latter. We found 151 algorithmic audits from 33 studies that report suitable risk-utility statistics from YouTube, Google Search, Twitter, Facebook, TikTok, Amazon, and others. Our findings indicate that roughly 8-10% of algorithmic recommendations are 'bad', while about a quarter actively protect users from self-induced harm ('do good'). This average is remarkably consistent across the audits, irrespective of the platform and the kind of risk (bias/discrimination, mental health and child harm, misinformation, or political extremism). Algorithmic audits find negative feedback loops that can ensnare users in spirals of 'bad' recommendations (being 'dragged down the rabbit hole'), but also highlight an even larger likelihood of positive spirals of 'good' recommendations. While our analysis refrains from any judgment of the causal consequences and severity of these risks, the detected levels surpass those associated with many other consumer products. They are comparable to the risk levels of generic food defects monitored by public authorities such as the FDA or FSIS in the United States. Consequently, our findings inform the ongoing discussion regarding regulatory oversight of the potential risks posed by recommender algorithms.
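The pooling step behind such an aggregate figure is simple to illustrate. The Python sketch below is a minimal, hypothetical example (the audit records, counts, and field names are invented) of computing pooled 'bad' and 'good' recommendation rates weighted by audit size; it is not the authors' actual meta-analysis pipeline.

# Minimal sketch, assuming hypothetical audit records: pool 'bad' and 'good'
# recommendation rates across audits, weighted by recommendations examined.
from dataclasses import dataclass

@dataclass
class Audit:
    platform: str          # illustrative label, e.g. "PlatformA"
    n_recommendations: int # recommendations examined in the audit
    n_bad: int             # recommendations judged harmful ('bad')
    n_good: int            # recommendations judged protective ('good')

# Hypothetical numbers, only to show the aggregation logic.
audits = [
    Audit("PlatformA", 1000, 90, 260),
    Audit("PlatformB", 500, 45, 120),
    Audit("PlatformC", 2000, 170, 510),
]

def pooled_rate(audits, field):
    # Share of all audited recommendations falling into the given category.
    total = sum(a.n_recommendations for a in audits)
    hits = sum(getattr(a, field) for a in audits)
    return hits / total

print(f"pooled 'bad' rate:  {pooled_rate(audits, 'n_bad'):.1%}")
print(f"pooled 'good' rate: {pooled_rate(audits, 'n_good'):.1%}")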
ISBN (print): 9781728161754
Online social networks have grown rapidly over the past few years. One of the critical factors driving these networks' success and growth is their friendship recommender algorithms, which are used to suggest relationships between users. Current friending algorithms are designed to recommend friendship connections that are easily accepted. Yet most of these accepted relationships do not lead to any interactions; we refer to them as weak connections. Facebook's Friends-of-Friends (FoF) algorithm is an example of a friending algorithm that generates friendship recommendations with a high acceptance rate. However, a considerable percentage of the Facebook algorithm's recommendations are weak connections. Measuring the accuracy of friendship recommender algorithms by acceptance rate does not correlate with the level of interaction, i.e., how much connected friends interact with one another. Consequently, new metrics and friendship recommenders are needed to form the next generation of social networks by generating better edges instead of merely growing the social graph with weak edges. This paper is a step towards this vision. We first introduce a new metric that measures the accuracy of friending recommendations by the probability that they lead to interactions. We then briefly investigate existing recommender systems and their limitations. We also highlight the consequences of recommending weak relationships within online social networks. To overcome the limitations of current friending algorithms, we present and evaluate a novel approach that generates friendship recommendations with a higher probability of leading to interactions between users than existing friending algorithms.
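As an illustration of the metric shift the abstract describes, the following Python sketch contrasts acceptance rate with an interaction-based rate over a toy log of recommendations; the record fields, data, and interaction threshold are hypothetical, not the paper's definitions.

# Minimal sketch, assuming a hypothetical recommendation log: compare acceptance
# rate with the probability that accepted recommendations lead to interactions.
def acceptance_rate(recs):
    return sum(r["accepted"] for r in recs) / len(recs)

def interaction_rate(recs, min_interactions=1):
    accepted = [r for r in recs if r["accepted"]]
    if not accepted:
        return 0.0
    return sum(r["interactions"] >= min_interactions for r in accepted) / len(accepted)

# Toy log: whether each recommendation was accepted and how many interactions followed.
recommendations = [
    {"accepted": True,  "interactions": 0},   # weak connection: accepted, never interacts
    {"accepted": True,  "interactions": 5},
    {"accepted": False, "interactions": 0},
    {"accepted": True,  "interactions": 0},
]

print("acceptance rate:", acceptance_rate(recommendations))    # looks high
print("interaction rate:", interaction_rate(recommendations))  # much lower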
ISBN (print): 9781509060788
Retrieving items in online e-commerce systems with an abundance of products is time-consuming for users. To deal with this issue, recommender systems (RS) aim to help users by suggesting items of interest from among thousands of products. Generally, RS algorithms are constructed based on similarity between users and/or items (e.g., a user is likely to purchase the same items as his/her most similar users). In this paper, we introduce a novel time-aware recommendation algorithm based on the overlapping community structure between users. Users' interests may change over time, and thus accurately modelling users' dynamic tastes is a challenging issue in designing efficient recommendation systems. Moreover, the user-item interaction network is often highly sparse in real systems, on which many recommenders fail to provide accurate predictions. We apply the proposed algorithm to a benchmark dataset. Our proposed recommendation algorithm overcomes these challenges and shows better precision compared to state-of-the-art recommenders.
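The abstract does not spell out the algorithm, so the Python sketch below only illustrates the general idea of time-aware collaborative filtering: ratings are down-weighted exponentially with age before user similarities are computed. The data, half-life, and decay weighting are assumptions for illustration and stand in for, rather than reproduce, the paper's community-based method.

# Minimal sketch, assuming toy data: time-decayed user-based collaborative filtering.
import math
from collections import defaultdict

# (user, item, rating, timestamp) tuples; values are illustrative.
events = [
    ("u1", "i1", 5, 100), ("u1", "i2", 4, 200),
    ("u2", "i1", 4, 150), ("u2", "i3", 5, 300),
    ("u3", "i2", 5, 250), ("u3", "i3", 4, 310),
]

HALF_LIFE = 200.0   # assumed decay half-life (same units as timestamps)
NOW = 400

def decay(ts):
    return 0.5 ** ((NOW - ts) / HALF_LIFE)

# Build time-weighted user profiles: recent ratings count more than old ones.
profiles = defaultdict(dict)
for user, item, rating, ts in events:
    profiles[user][item] = rating * decay(ts)

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[i] * b[i] for i in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def recommend(user, k=2):
    # Score unseen items by similarity-weighted interest of other users.
    scores = defaultdict(float)
    for other, prof in profiles.items():
        if other == user:
            continue
        sim = cosine(profiles[user], prof)
        for item, w in prof.items():
            if item not in profiles[user]:
                scores[item] += sim * w
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u1"))  # items liked recently by users similar to u1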
Objective: To answer a "grand challenge" in clinical decision support, the authors produced a recommender system that automatically data-mines inpatient decision support from electronic medical records (EMR), analogous to Netflix or ***'s product recommender. Materials and Methods: EMR data were extracted from 1 year of hospitalizations (>18K patients with >5.4M structured items including clinical orders, lab results, and diagnosis codes). Association statistics were counted for the ~1.5K most common items to drive an order recommender. The authors assessed the recommender's ability to predict hospital admission orders and outcomes based on initial encounter data from separate validation patients. Results: Compared to a reference benchmark of using the overall most common orders, the recommender using temporal relationships improves precision at 10 recommendations from 33% to 38% (P < 10^-10) for hospital admission orders. Relative-risk-based association methods improve inverse-frequency-weighted recall from 4% to 16% (P < 10^-16). The framework yields a prediction receiver operating characteristic area under the curve (c-statistic) of 0.84 for 30-day mortality, 0.84 for 1-week need for ICU life support, 0.80 for 1-week hospital discharge, and 0.68 for 30-day readmission. Discussion: Recommender results quantitatively improve on reference benchmarks and qualitatively appear clinically reasonable. The method assumes that aggregate decision making converges appropriately, but ongoing evaluation is necessary to discern common behaviors from "correct" ones. Conclusions: Collaborative filtering recommender algorithms generate clinical decision support that is predictive of real practice patterns and clinical outcomes. Incorporating temporal relationships improves accuracy. Different evaluation metrics satisfy different goals (predicting likely events vs. "interesting" suggestions).
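A co-occurrence-counting order recommender of the kind described can be sketched compactly. The Python below is a hypothetical toy version (item labels and encounter data are invented, and it omits the temporal and relative-risk weighting the paper reports): it counts how often items appear in the same encounter and recommends the strongest co-occurring items given what has already been ordered.

# Minimal sketch, assuming toy encounter data: co-occurrence-based order recommendation.
from collections import defaultdict
from itertools import combinations

# Toy "encounters", each a set of ordered items / results (labels are hypothetical).
encounters = [
    {"cbc", "bmp", "chest_xray"},
    {"cbc", "bmp", "troponin"},
    {"cbc", "troponin", "ecg"},
    {"bmp", "chest_xray"},
]

# Count how often each pair of items appears in the same encounter.
cooccur = defaultdict(lambda: defaultdict(int))
for enc in encounters:
    for a, b in combinations(sorted(enc), 2):
        cooccur[a][b] += 1
        cooccur[b][a] += 1

def recommend(initial_orders, top_n=3):
    # Score candidate items by total co-occurrence with the items already ordered.
    scores = defaultdict(int)
    for item in initial_orders:
        for other, count in cooccur[item].items():
            if other not in initial_orders:
                scores[other] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"cbc"}))  # e.g. ['bmp', 'troponin', ...]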
ISBN (print): 9781450336925
As interactive intelligent systems, recommender systems are developed to give recommendations that match users' preferences. Since the emergence of recommender systems, a large majority of research has focused on objective accuracy criteria, and less attention has been paid to how users interact with the system and to the efficacy of interface designs from users' perspectives. The field has reached a point where it is ready to look beyond algorithms, into users' interactions, decision-making processes, and overall experience. Following from the success of the joint IntRS 2014 workshop and previous workshops on Interfaces and Decisions in Recommender Systems, this workshop will focus on integrating different theories of human decision making into the construction of recommender systems, with particular attention to the impact of interfaces on decision support and overall satisfaction, and to ways of comparing and evaluating novel techniques and applications in this area.
ISBN (print): 9781450326681
As interactive intelligent systems, recommender systems are developed to give predictions that match users' preferences. Since the emergence of recommender systems, a large majority of research has focused on objective accuracy criteria, and less attention has been paid to how users interact with the system and to the efficacy of interface designs from the end-user perspective. The field has reached a point where it is ready to look beyond algorithms, into users' interactions, decision-making processes, and overall experience. Accordingly, the goals of this workshop (IntRS@RecSys) are to explore the human aspects of recommender systems, with a particular focus on the impact of interfaces and interaction design on decision-making and user experiences with recommender systems, and to explore methodologies for evaluating these human aspects of the recommendation process that go beyond traditional automated approaches.
ISBN (print): 9781479935789
Cross-domain scientific collaborations have promoted the rapid development of science and generated many innovative breakthroughs. However, the problem of predicting cross-domain scientific collaborations is rarely studied, and collaboration recommendation methods designed for a single domain cannot be directly utilized to solve cross-domain problems. In this paper, we propose a Hybrid Graph Model, which combines explicit co-author relationships and implicit co-citation relationships to construct a hybrid graph, on which the Random Walk with Restart concept is then used to measure and rank relatedness. Experiments with a large publication dataset show that the Hybrid Graph Model outperforms several baseline approaches on a number of recommendation metrics. Citation information is also demonstrated to be very helpful for scientific collaboration recommendation.
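The ranking primitive named here, Random Walk with Restart (RWR), is easy to show in miniature. The Python sketch below runs RWR on a tiny undirected graph standing in for a hybrid of co-author and co-citation edges; the nodes, edges, and restart probability are illustrative assumptions, not the paper's setup.

# Minimal sketch, assuming a toy hybrid graph: Random Walk with Restart for relatedness ranking.
import numpy as np

nodes = ["a1", "a2", "a3", "a4"]                 # authors (illustrative)
edges = [("a1", "a2"), ("a2", "a3"),             # hypothetical co-author edges
         ("a1", "a3"), ("a3", "a4")]             # hypothetical co-citation edges

idx = {n: i for i, n in enumerate(nodes)}
A = np.zeros((len(nodes), len(nodes)))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0

# Column-normalise the adjacency matrix to obtain the walk's transition matrix.
W = A / A.sum(axis=0, keepdims=True)

def rwr(seed, restart=0.15, iters=100):
    # Steady-state visiting probabilities of a walk that restarts at `seed`.
    e = np.zeros(len(nodes)); e[idx[seed]] = 1.0
    p = e.copy()
    for _ in range(iters):
        p = (1 - restart) * W @ p + restart * e
    return {n: p[idx[n]] for n in nodes}

scores = rwr("a1")
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # candidates ranked by relatedness to a1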
ISBN (print): 9783319105543; 9783319105536
We propose a new approach to collaborative filtering based on Boolean Matrix Factorisation (BMF) and Formal Concept Analysis. In a series of experiments on real data (the MovieLens dataset) we compare the approach with an SVD-based one in terms of Mean Absolute Error (MAE). One of the experimental findings is that binary-scaled rating data are enough for BMF to obtain almost the same MAE as the SVD-based algorithm achieves on non-scaled data.
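To make the evaluation setup concrete, the Python sketch below binary-scales a tiny rating matrix with a threshold and computes MAE against a stand-in prediction. The matrix, threshold, and placeholder predictor are assumptions; neither the BMF model nor the SVD model from the paper is implemented here.

# Minimal sketch, assuming toy ratings: binary scaling of ratings and MAE evaluation.
import numpy as np

ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4]], dtype=float)   # 0 = unrated

observed = ratings > 0
binary = (ratings >= 3).astype(float)             # binary scaling: "liked" vs "not liked"

# Stand-in prediction; the output of a BMF or SVD model would go here instead.
predicted = np.full_like(ratings, binary[observed].mean())

def mae(true, pred, mask):
    # Mean absolute error over the observed entries only.
    return np.abs(true[mask] - pred[mask]).mean()

print("MAE on binary-scaled ratings:", round(mae(binary, predicted, observed), 3))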