ISBN: (Print) 9783319121574; 9783319121567
Formula One (F1) drivers are amongst the most highly skilled drivers in the world, but not every F1 driver is destined to be an F1 World Champion. Discovering new talent or refreshing strategies is a long-term investment for all competitive F1 teams. The F1 world and teams invest vast amounts in developing high-fidelity simulators; however, driving games have seldom been associated with uncovering certain natural abilities. Beyond nature and nurture, certain motor-cognitive aspects are paramount for proficiency and success at the top level. One method of potentially finding talent is studying the behavioral and cognitive patterns associated with learning. Here, an F1 simulation game was used to demonstrate how learning had taken place. The indicative change of interest is the shift from cognitive to motor control via a more skilled, autonomous driving style, a skill synonymous with expert driving and ultimately winning races. Our data show clear patterns of how this skill develops.
ISBN: (Print) 9783031786754; 9783031786761
Decentralization has been touted as the principal security advantage which propelled blockchain systems to the forefront of developments in the financial technology space. Its exact semantics nevertheless remain highly contested and ambiguous, with proponents and critics disagreeing widely on the level of decentralization offered by existing systems. To address this, we put forth a systematization of the current landscape with respect to decentralization and derive a methodology that can help direct future research towards defining and measuring decentralization. Our approach dissects blockchain systems into multiple layers, or strata, each possibly encapsulating multiple categories, and it enables a unified method for measuring decentralization in each one. Our layers are (1) hardware, (2) software, (3) network, (4) consensus, (5) economics ("tokenomics"), (6) client API, (7) governance, and (8) geography. Armed with this stratification, we examine, for each layer, which pertinent properties of distributed ledgers (safety, liveness, privacy, stability) can be at risk due to centralization and in what way. We also introduce a practical test, the "Minimum Decentralization Test" (MDT), which can provide quick insights about the decentralization state of a blockchain system. To demonstrate how our stratified methodology can be used in practice, we apply it fully (layer by layer) to Bitcoin, and we provide examples of systems which comprise one or more "problematic" layers that cause them to fail the MDT. Our work highlights the challenges in measuring and achieving decentralization, and suggests various potential directions where future research is needed.
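To make the layered approach concrete, the following minimal Python sketch checks each layer against a single illustrative criterion: how many independent parties are needed to reach one third of a layer's resources. The metric, the passing threshold, and the example shares are assumptions for illustration only; the paper defines the actual Minimum Decentralization Test criteria.

```python
# Illustrative layer-by-layer decentralization check in the spirit of the
# Minimum Decentralization Test. The metric and threshold are assumptions.

LAYERS = ["hardware", "software", "network", "consensus",
          "tokenomics", "client_api", "governance", "geography"]

def mdt_sketch(resource_shares, min_parties=3):
    """resource_shares maps each layer to a list of fractional resource
    shares held by distinct parties (summing to 1.0 per layer). A layer
    passes if at least `min_parties` parties are needed to reach 1/3."""
    results = {}
    for layer in LAYERS:
        shares = sorted(resource_shares.get(layer, []), reverse=True)
        cumulative, parties = 0.0, 0
        for share in shares:
            cumulative += share
            parties += 1
            if cumulative >= 1 / 3:
                break
        results[layer] = parties >= min_parties
    return results

# Made-up example: every layer is reasonably spread out except consensus,
# where a single party holds 40% of the resources.
shares = {layer: [0.15] * 6 + [0.10] for layer in LAYERS}
shares["consensus"] = [0.40, 0.20, 0.20, 0.10, 0.10]
print(mdt_sketch(shares))  # all layers pass except "consensus"
```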
ISBN: (Print) 9783031785474; 9783031785481
Social network analysis is pivotal for organizations aiming to leverage the vast amounts of data generated from user interactions on social media and other digital platforms. These interactions often reveal complex social structures, such as tightly-knit groups based on common interests, which are crucial for enhancing service personalization or fraud detection. Traditional methods like community detection and graph matching, while useful, often fall short of accurately identifying specific groups of users. This paper introduces a novel framework specifically designed to identify groups of users within transactional graphs by focusing on the contextual and structural nuances that define these groups.
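As an illustration of combining structural and contextual signals when looking for user groups, the sketch below scores a candidate node set by subgraph density and shared node attributes using networkx. The scoring function and attributes are hypothetical examples, not the framework proposed in the paper.

```python
# Illustrative scoring of a candidate user group in a transactional graph,
# combining structural cohesion (subgraph density) with contextual cohesion
# (shared node attributes). Hypothetical sketch, not the paper's framework.
import itertools
import networkx as nx

def group_score(G, nodes, context_key="category"):
    sub = G.subgraph(nodes)
    density = nx.density(sub) if len(nodes) > 1 else 0.0
    # Fraction of node pairs that share the same contextual attribute value.
    pairs = list(itertools.combinations(nodes, 2))
    shared = sum(G.nodes[u].get(context_key) == G.nodes[v].get(context_key)
                 for u, v in pairs)
    context = shared / len(pairs) if pairs else 0.0
    return 0.5 * density + 0.5 * context

G = nx.Graph()
G.add_nodes_from([(1, {"category": "gaming"}), (2, {"category": "gaming"}),
                  (3, {"category": "gaming"}), (4, {"category": "music"})])
G.add_edges_from([(1, 2), (2, 3), (1, 3), (3, 4)])
print(group_score(G, [1, 2, 3]))  # 1.0: tightly knit and shared interest
print(group_score(G, [2, 3, 4]))  # 0.5: weaker on both dimensions
```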
ISBN: (Print) 9783031785535; 9783031785542
How does the public perceive Artificial Intelligence (AI)? We present a sliver of an answer to this question by analyzing discussions on 26 AI-related subreddits from 2005 to early 2023. We apply (1) topic modeling to find latent topics that represent the gist of subreddits as collections of representative keywords, (2) network analysis to identify interaction patterns around these topics, and (3) statistics to test for correlations between AI-related topics and how they are discussed. The identified topics range from high-level AI concepts to applications and societal impacts. The temporal analysis revealed four types of topics: vanishing, resurgent, ephemeral, and emerging ones, representing the dynamic nature of public interest and concerns. We found discussions of vanishing (e.g., self-driving) and ephemeral (e.g., Metaverse) topics to lack dispersion, depth, and controversiality, while most emerging (e.g., Workplace replacement) and resurgent (e.g., Civilization and AI) topics are positively correlated with at least one of these measures. This research advances the understanding of the substance and evolution of AI-related discourse based on Reddit data.
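A minimal sketch of step (1), fitting a simple LDA topic model over a handful of toy posts with scikit-learn, is shown below. The model choice, preprocessing, and example texts are assumptions; the study's actual topic-modeling setup is not specified here.

```python
# Minimal topic-modeling sketch: LDA over a few toy Reddit-style posts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "self driving cars will change commuting forever",
    "the metaverse hype is fading fast",
    "will AI replace my job in the workplace",
    "large language models and the future of civilization",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top keywords representing each latent topic.
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top}")
```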
ISBN: (Print) 9783031785474; 9783031785481
Content monetization on social media fuels a growing influencer economy. Influencer marketing remains largely undisclosed or inappropriately disclosed on social media. Non-disclosure has become a priority for national and supranational authorities worldwide, which are starting to impose increasingly harsh sanctions for non-compliance. This paper proposes a transparent methodology for measuring whether and how influencers comply with disclosure obligations based on legal standards. We introduce a novel distinction between disclosures that are legally sufficient (green) and legally insufficient (yellow). We apply this methodology to an original dataset reflecting the content of 150 Dutch influencers publicly registered with the Dutch Media Authority based on recently introduced registration obligations. The dataset consists of 292,315 posts and is multi-language (English and Dutch) and cross-platform (Instagram, YouTube and TikTok). We find that influencer marketing remains generally underdisclosed on social media, and that bigger influencers are not necessarily more compliant with disclosure standards.
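The green/yellow distinction can be illustrated with a small rule-based classifier over post text, as in the sketch below. The cue lists are hypothetical examples, not the legal standards or keyword sets applied in the paper.

```python
# Illustrative rule-based split of disclosures into "green" (legally
# sufficient) and "yellow" (legally insufficient) cues. Cue lists are
# assumptions for illustration only.
import re

GREEN_CUES = [r"#ad\b", r"#advertisement\b", r"paid partnership", r"#reclame\b"]
YELLOW_CUES = [r"#sp\b", r"#spon\b", r"thanks to @\w+", r"#collab\b"]

def classify_disclosure(text):
    t = text.lower()
    if any(re.search(p, t) for p in GREEN_CUES):
        return "green"
    if any(re.search(p, t) for p in YELLOW_CUES):
        return "yellow"
    return "undisclosed"

print(classify_disclosure("Loving this new serum! #ad"))          # green
print(classify_disclosure("New drop, thanks to @brand #collab"))  # yellow
print(classify_disclosure("My morning routine essentials"))       # undisclosed
```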
ISBN: (Print) 9783031785474; 9783031785481
In machine learning, features play a vital role in modeling and understanding data; their quality and representation essentially determine how accurate the results are. The problem is compounded in the graph-based learning paradigm, where the data are complex and interconnected, and augmenting graph data to achieve more accurate results poses specific challenges. Feature augmentation is a critical aspect of enhancing data, especially since some datasets have limited features and some real datasets have none at all. In this paper, we present our approach, termed Feat-Aug, which extends our previous work on non-parametric approaches. The aim of this work is to augment node features in graphs for parametric approaches such as Graph Neural Networks (GNNs), with the objective of improving performance in node classification tasks. Our approach combines the real features typically associated with nodes, such as a bag of words in citation networks, with structural features extracted at the node level, such as node degree and clustering coefficient. To further enhance these features, we leverage deep learning models to incorporate additional node-level features; the final modified features are the combination of both real and structural features. To evaluate the effectiveness of the approach, we carried out extensive experiments with several real datasets. Our method consistently outperforms or achieves results comparable to popular GNN baselines and their variants. Crucially, our approach deals with the problem of insufficient real features in certain datasets. This study advances the field with an effective node classification model: by integrating both real and structural features, our approach holds promise to improve the performance of node classification models.
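The core idea of concatenating real node features with structural ones can be sketched in a few lines with networkx and NumPy, as below. The toy graph, the random stand-in for bag-of-words features, and the choice of structural features are illustrative assumptions rather than the exact Feat-Aug pipeline.

```python
# Illustrative feature augmentation: concatenate real node features with
# structural features (degree, clustering coefficient) before feeding a GNN.
import numpy as np
import networkx as nx

def augment_features(G, real_feats):
    """real_feats: (num_nodes, d) array aligned with sorted(G.nodes)."""
    nodes = sorted(G.nodes)
    degree = np.array([G.degree(n) for n in nodes], dtype=float)
    clustering = np.array([nx.clustering(G, n) for n in nodes])
    structural = np.stack([degree, clustering], axis=1)
    return np.concatenate([real_feats, structural], axis=1)

G = nx.karate_club_graph()
bow = np.random.rand(G.number_of_nodes(), 16)  # stand-in for bag-of-words
X = augment_features(G, bow)
print(X.shape)  # (34, 18): 16 real + 2 structural features per node
```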
ISBN: (Print) 9783031786785; 9783031786792
We introduce a new way to conduct election audits using untrusted scanners. Post-election audits perform statistical hypothesis testing to confirm election outcomes. However, existing approaches are costly and laborious for close elections, often the most important cases to audit, requiring extensive hand inspection of ballots. We instead propose automated consistency checks, augmented by manual checks of only a small number of ballots. Our protocols scan each ballot twice, shuffling the ballots between scans: a "two-scan" approach inspired by two-prover proof systems. We show that this gives strong statistical guarantees even for close elections, provided that (1) the permutation accomplished by the shuffle is unknown to the scanners and (2) the scanners cannot reliably identify a particular ballot among others cast for the same candidate. Our techniques drastically reduce the time, expense, and labor of auditing close elections, which we hope will promote wider deployment.
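The consistency check at the heart of the two-scan idea can be illustrated as follows: scan, secretly shuffle, scan again, and compare the reported vote multisets. The sketch omits the statistical test and the manual checks of sampled ballots, and its data structures are assumptions.

```python
# Toy illustration of the two-scan consistency check: an honest scanner
# must report the same multiset of votes before and after a secret shuffle.
import random
from collections import Counter

def two_scan_consistent(ballots, scan):
    first = [scan(b) for b in ballots]
    shuffled = ballots[:]
    random.shuffle(shuffled)  # permutation unknown to the scanner
    second = [scan(b) for b in shuffled]
    return Counter(first) == Counter(second)

honest_scan = lambda ballot: ballot["vote"]
ballots = [{"vote": "A"}] * 6 + [{"vote": "B"}] * 4
print(two_scan_consistent(ballots, honest_scan))  # True for an honest scanner
```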
ISBN: (Print) 9783031786754; 9783031786761
This paper presents a partially synchronous BFT consensus protocol powered by BBCA, a lightly modified Byzantine Consistent Broadcast (BCB) primitive. BBCA provides Complete-Adopt semantics through an added probing interface that allows correct nodes either to abort the broadcast or, exclusively, to adopt the message consistently in case of a potential delivery. It does not introduce any extra message types or additional communication costs beyond BCB. BBCA is harnessed into BBCA-CHAIN to make direct commits on a chained backbone of a causally ordered graph of blocks, without any additional voting blocks or artificial layering. With the help of Complete-Adopt, the additional knowledge gained from the underlying BCB completely removes the voting latency in popular DAG-based protocols. At the same time, causal ordering allows nodes to propose blocks in parallel and achieve high throughput. BBCA-CHAIN thus closes the gap between protocols built on consistent broadcast (e.g., Bullshark) and those without such an abstraction (e.g., PBFT/HotStuff), emphasizing their shared fundamental principles. Using a Bracha-style BCB as an example, we give a full yet simple specification of BBCA-CHAIN, serving as a solid basis for high-performance replication systems (and blockchains).
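As a toy illustration of the probing interface's either-abort-or-adopt contract, the sketch below models a single BBCA instance as a local state machine. All networking and quorum logic of the underlying Bracha-style BCB, as well as the distributed consistency guarantees, are elided; the class and method names are assumptions.

```python
# Toy sketch of the Complete-Adopt probing contract: a probe either aborts
# the broadcast or adopts its (potentially delivered) message, never both.
# Local state machine only; no network or quorum logic is modeled.
class BBCAInstance:
    def __init__(self):
        self.state = "pending"   # pending -> delivered | aborted | adopted
        self.message = None

    def on_deliver(self, msg):
        if self.state == "pending":
            self.state, self.message = "delivered", msg

    def probe(self):
        if self.state == "pending":
            self.state = "aborted"
            return ("abort", None)
        if self.state == "delivered":
            self.state = "adopted"
            return ("adopt", self.message)
        return (self.state, self.message)

inst = BBCAInstance()
inst.on_deliver("block-42")
print(inst.probe())  # ('adopt', 'block-42'); would be ('abort', None) otherwise
```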
ISBN: (Print) 9783031785405; 9783031785412
On 24 February 2022, Russia's invasion of Ukraine, now known as the Russo-Ukrainian War, sparked extensive discussions on Online Social Networks (OSN). We collect data using the Twitter API to capture this dynamic environment, and then analyze the topics discussed and detect potential malicious activities. Our dataset consists of 127.2 million tweets originating from 10.9 million users. Given the dataset's diverse linguistic composition and the absence of labeled data, we approach it as a zero-shot learning problem, employing various techniques that require no prior supervised training on the dataset. Our research covers several areas, including sentiment analysis capturing the public's response to the distressing events of the war, topic analysis comparing narratives between social networks and traditional media, and examination of the correlation between message toxicity levels and Twitter suspensions. Furthermore, we explore the potential exploitation of social networks by belligerents to acquire military-related information, presenting a pipeline to classify such communications. The findings of this study provide fresh insights into the role of social media during conflicts, with broad implications for policy, security, and information dissemination. Finally, due to the recent Twitter API changes, we share anonymized data for further research purposes.
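One of the zero-shot techniques alluded to can be illustrated with an off-the-shelf natural-language-inference model used for zero-shot classification, as sketched below. The specific model and label set are assumptions and not necessarily those used in the study.

```python
# Illustrative zero-shot classification of a tweet without any supervised
# training on the dataset. Model name and labels are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

tweet = "Convoys of trucks spotted moving toward the border this morning."
labels = ["military-related information", "humanitarian aid", "unrelated"]
result = classifier(tweet, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))
```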
ISBN: (Print) 9783031692307; 9783031692314
In September 2022, Ethereum transitioned from Proof-of-Work (PoW) to Proof-of-Stake (PoS) during "the merge", making it the largest PoS cryptocurrency in terms of market capitalization. With this work, we present a comprehensive measurement study of the current state of the Ethereum PoS consensus layer on the beacon chain. We perform a longitudinal study of the history of the beacon chain. Our work finds that all dips in network participation are caused by network upgrades, issues with major consensus clients, or issues with service operators controlling a large number of validators. Further, our longitudinal staking power decentralization analysis reveals that Ethereum PoS fares similarly to its PoW counterpart in terms of decentralization and exhibits the immense impact of (liquid) staking services, a cornerstone of decentralized finance (DeFi), on staking power decentralization. Finally, we highlight the heightened security concerns in Ethereum PoS caused by high degrees of centralization.
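One simple way to track staking-power concentration over time, in the spirit of the decentralization analysis, is a Gini coefficient over per-entity stake at epoch snapshots, as sketched below. The snapshot values are made up for illustration and are not measured beacon-chain data.

```python
# Illustrative longitudinal concentration measure: Gini coefficient over
# per-entity stake at two epoch snapshots (made-up example values).
import numpy as np

def gini(shares):
    x = np.sort(np.asarray(shares, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

snapshots = {
    "epoch_100000": [12.0, 9.0, 7.5, 5.0, 4.0, 3.0, 2.5],
    "epoch_200000": [30.0, 10.0, 6.0, 4.0, 3.0, 2.0, 1.5],
}
for epoch, shares in snapshots.items():
    print(epoch, round(gini(shares), 3))  # higher value = more concentrated
```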