It is difficult to handle the database (DB) workload due to the huge increase in data, the functionality demands from users, and the rapid changes in data. It is not easy to manage the DB workload, which therefor...
ISBN: (Print) 9788995004395
Bitcoin transactions are created through the concept of the Unspent Transaction Output (UTXO). Users supply their own UTXOs as inputs to a transaction for a Bitcoin transfer and create multiple outputs, each specifying the recipient's wallet address and the amount to be sent. A UTXO is an output that has not yet been used as an input to any transaction, and each UTXO can be used as an input only once. Attempting to use a UTXO more than once is called a double-spending attack. Although double-spending is ultimately impossible in Bitcoin due to the system's structure, it can occur when a transaction is deemed confirmed and off-chain goods or services are provided before sufficient transaction finality is guaranteed. We consider a double-spending attempt to occur when a UTXO used as an input in a payment transaction coexists on the Bitcoin network with another transaction that uses the same UTXO as an input. In previous research, we randomly deployed observer nodes on the Bitcoin network and proposed a method to detect double-spending attacks using transaction data in the memory pool and a graph neural network model. In this paper, we analyze the impact of adding observer nodes to the Bitcoin network on the performance of graph neural network-based Bitcoin double-spending attack detection. We conducted experiments to examine the performance differences among three strategies for adding observer nodes. However, clear differences were difficult to compare because differences in graph structure between datasets degraded model performance. We therefore provide an analysis of the causes and suggestions for improvement. Copyright 2023 KICS.
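The conflict condition the abstract describes (two transactions on the network consuming the same UTXO as an input) can be sketched as a simple index over transaction inputs. This is an illustrative sketch, not the paper's detection pipeline: the `mempool` structure, `find_double_spends` helper, and the txid/output-index tuples are all hypothetical.

```python
from collections import defaultdict

def find_double_spends(mempool):
    """Group transactions by the UTXOs they consume; any UTXO
    referenced as an input by more than one transaction is a conflict,
    i.e. a candidate double-spending attempt."""
    spenders = defaultdict(set)  # utxo -> set of txids spending it
    for txid, inputs in mempool.items():
        for utxo in inputs:
            spenders[utxo].add(txid)
    return {utxo: txids for utxo, txids in spenders.items() if len(txids) > 1}

# A UTXO is identified by (source txid, output index).
# Here "tx_pay" and "tx_attack" both spend output 0 of "tx0".
mempool = {
    "tx_pay":    [("tx0", 0), ("tx1", 1)],
    "tx_attack": [("tx0", 0)],
    "tx_other":  [("tx2", 0)],
}
conflicts = find_double_spends(mempool)
```

In the paper's setting this pairwise check is replaced by a graph neural network over mempool transaction data collected by observer nodes; the sketch only shows the underlying conflict definition.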
The operating system (OS) of a computer controls both its hardware and software. It handles necessary functions including input and output processing, file and memory management, and peripheral device management, incl...
Storing big data directly on a blockchain poses a substantial burden due to the need to maintain a consistent ledger across all nodes. Numerous studies in decentralized storage systems have been conducted to tackle th...
Attackers are now using sophisticated techniques, like polymorphism, to change the attack pattern for each new attack. Thus, the detection of novel attacks has become the biggest challenge for cyber experts and resear...
The volume of complicated texts and documents that require a deeper understanding of machine learning techniques has expanded rapidly in recent decades. Several machine learning approaches have shown exceptional resul...
Federated learning is an effective method to train a machine learning model without aggregating the potentially sensitive data of agents in a central server. However, the limited communication bandwidth, th...
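The core idea stated in this abstract — training a shared model while raw data never leaves the agents — is commonly realized with federated averaging. The sketch below is a minimal illustration of that general scheme, not this paper's method; the `fedavg` function and the scalar parameter lists are assumptions for the example.

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: the server combines locally trained model
    parameters as a weighted average, proportional to each client's
    local dataset size. Only parameters are communicated, not data."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with 2-parameter models; client 2 holds twice the data,
# so its parameters receive twice the weight in the aggregate.
global_w = fedavg([[1.0, 0.0], [4.0, 3.0]], [100, 200])
# → [3.0, 2.0]
```

The communication-bandwidth limits the abstract mentions are precisely the cost of repeatedly exchanging these parameter vectors between agents and the server.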
By analyzing and studying soil compound data from 10 cities, this project identified two important features. First, the data for all compounds in soil follow a normal distribution with different parameters. Th...
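The first finding — concentrations following a normal distribution whose parameters differ by compound — corresponds to a standard parameter-estimation step. The sketch below uses synthetic stand-in data (the project's actual measurements are not available here) and the Python standard library's `statistics.NormalDist`.

```python
import random
import statistics

random.seed(0)
# Synthetic stand-in for one compound's concentration measurements;
# the true mean and spread would differ per compound and city.
samples = [random.gauss(mu=12.0, sigma=2.5) for _ in range(10_000)]

# Fit a normal distribution by estimating its two parameters
# (sample mean and sample standard deviation).
fitted = statistics.NormalDist.from_samples(samples)
```

With enough samples the estimated mean and standard deviation recover the generating parameters, which is the sense in which each compound gets "a normal distribution with different parameters."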
Vision-language models (VLMs) have emerged as formidable tools, showing strong capability in handling various open-vocabulary tasks in image recognition, text-driven visual content generation, and visual chatbots, to name a few. In recent years, considerable effort and resources have been devoted to adaptation methods for improving the downstream performance of VLMs, particularly parameter-efficient fine-tuning methods like prompt learning. However, a crucial aspect that has been largely overlooked is the confidence calibration problem in fine-tuned VLMs, which can greatly reduce reliability when such models are deployed in the real world. This paper bridges the gap by systematically investigating the confidence calibration problem in the context of prompt learning and reveals that existing calibration methods are insufficient to address the problem, especially in the open-vocabulary setting. To solve the problem, we present a simple and effective approach called Distance-Aware Calibration (DAC), which scales the temperature using the distance between predicted text labels and base classes as guidance. Experiments with 7 distinct prompt learning methods applied across 11 diverse downstream datasets demonstrate the effectiveness of DAC, which achieves high efficacy without sacrificing inference speed. Our code is available at https://***/mlstat-Sustech/CLIP Calibration. Copyright 2024 by the author(s)
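The idea the abstract describes — scaling the softmax temperature by the distance between a predicted text label and the base classes — can be illustrated schematically. This is a loose sketch of the stated idea, not DAC's actual formulation: the cosine-distance choice, the `alpha` scaling factor, and `dac_temperature` are all assumptions for illustration.

```python
import math

def softmax(logits, temperature):
    """Temperature-scaled softmax; larger temperature flattens the
    distribution, reducing overconfidence."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def dac_temperature(pred_embedding, base_embeddings, base_temp=1.0, alpha=1.0):
    """Distance-aware temperature (schematic): labels whose text
    embedding lies far from every base class get a larger temperature,
    i.e. softer, less overconfident probabilities in the
    open-vocabulary setting."""
    d = min(cosine_distance(pred_embedding, b) for b in base_embeddings)
    return base_temp * (1.0 + alpha * d)
```

A predicted label close to a base class keeps the base temperature, while a distant (novel) label gets a higher temperature, so its softmax output is flatter, which matches the abstract's motivation for the open-vocabulary setting.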
Parents today have a greater workload because they are working hard at their careers. As a result, there is less opportunity to give babies proper care, and children's health may suffer greatly. ...