Multimodal brain networks extracted from functional magnetic resonance imaging (fMRI) characterize complex connectivities among brain regions from both structural and functional perspectives, showing great potential for mental hea...
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data. However, GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant. We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates. Given an entity in the graph, CF-GNN produces a prediction set/interval that provably contains the true label with pre-defined coverage probability (e.g. 90%). We establish a permutation invariance condition that enables the validity of CP on graph data and provide an exact characterization of the test-time coverage. Besides valid coverage, it is crucial to reduce the prediction set size/interval length for practical use. We observe a key connection between non-conformity scores and network structures, which motivates us to develop a topology-aware output correction model that learns to update the prediction and produces more efficient prediction sets/intervals. Extensive experiments show that CF-GNN achieves any pre-defined target marginal coverage while significantly reducing the prediction set/interval size by up to 74% over the baselines. It also empirically achieves satisfactory conditional coverage over various raw and network features.
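The abstract above does not walk through the procedure; as background, here is a minimal sketch of the split conformal prediction step that CF-GNN builds on, assuming a generic classifier that outputs class probabilities (the function names and the 1 - p(true class) non-conformity score are illustrative, not the paper's implementation):

import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    # Non-conformity score: 1 - predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    # Finite-sample-corrected quantile yields marginal coverage >= 1 - alpha.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(test_probs, qhat):
    # Keep every class whose non-conformity score is below the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

With alpha = 0.1, the resulting sets contain the true label for at least 90% of exchangeable test points on average, which is the marginal coverage guarantee referred to above; CF-GNN's contribution, per the abstract, is establishing when this validity holds on graph data and shrinking the sets via a topology-aware correction.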
In the era of evolving healthcare demands, the requirement for innovation in record-keeping systems is undeniable. Traditional methods of documentation face difficulties in terms of efficiency, accuracy, and flexibility...
We present two sharp empirical Bernstein inequalities for symmetric random matrices with bounded eigenvalues. By sharp, we mean that both inequalities adapt to the unknown variance in a tight manner: the deviation cap...
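For orientation (the matrix statements themselves are not reproduced in this preview), the scalar empirical Bernstein inequality of Maurer and Pontil, of which results like these are matrix analogues, can be written as:

\Pr\!\left[\,\mathbb{E}[X] \;\ge\; \hat{\mu}_n + \sqrt{\frac{2\,\hat{V}_n \ln(2/\delta)}{n}} + \frac{7\ln(2/\delta)}{3(n-1)}\,\right] \;\le\; \delta,
\qquad X_1,\dots,X_n \in [0,1]\ \text{i.i.d.},

where \hat{\mu}_n is the sample mean and \hat{V}_n the unbiased sample variance; "adapting to the unknown variance" means the leading deviation term scales with \hat{V}_n rather than with the worst-case range.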
Due to the COVID-19 pandemic, research on e-healthcare systems is gaining popularity because, in most cases, an e-healthcare system does not require the patient to be physically present at the doctor's door. The reason behind this...
In this article, we propose a new extension of the Topp–Leone family of distributions. Some important properties of the model are developed, such as the quantile function, stochastic ordering, model series representatio...
The agriculture sector is vital to a country's economic and productive development. With technological advancements, we can identify plant diseases in their early stages, similar to how human diseases are diagnose...
Human activity recognition (HAR) is the process of using mobile sensor data to determine the physical activities performed by individuals. HAR is the backbone of many mobile healthcare applications, such as passive he...
ISBN (digital): 9798350379945
ISBN (print): 9798350379952
At the forefront of the Artificial Intelligence revolution is the Generative AI domain, which is making splashes in the generation of new content from existing Large Language Models. Large Language Models (LLMs) are flexible, effective tools with many uses. On the other hand, they can be tricked by maliciously crafted prompts. This work explores the detection of fraudulent prompts intended to produce false or damaging outputs using Long Short-Term Memory (LSTM) networks. In this study we compared the LSTM model against a simple neural network model. We found 91.69% accuracy for the LSTM model and 92.52% accuracy for the simple neural network. However, the simple neural network had a higher F1 score of 0.73 compared to the LSTM score of 0.69. We found that the standard neural model is more effective at identifying harmful user prompts. Our research highlights the need to consider various model architectures to reduce the risk of malicious prompting in large language models.
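Not the authors' code; a minimal Keras sketch of the kind of LSTM-versus-feed-forward comparison described above, with the vocabulary size, layer widths, and binary (malicious/benign) labels as assumed placeholders:

import tensorflow as tf

def build_lstm(vocab_size=20000):
    # Embedding -> LSTM -> sigmoid head for binary prompt classification.
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, 64),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def build_dense(vocab_size=20000):
    # Simple feed-forward baseline over mean-pooled token embeddings.
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, 64),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

Both models would be compiled with binary cross-entropy and evaluated on accuracy and F1, mirroring the comparison reported above.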
ISBN (print): 9798331314385
High-quality public datasets significantly promote the prosperity of deep neural networks (DNNs). Currently, dataset ownership verification (DOV), which consists of dataset watermarking and ownership verification, is the only feasible solution to protect their copyright by preventing unauthorized use. In this paper, we revisit existing DOV methods and find that they all mainly focus on the first stage, designing different types of dataset watermarks and directly exploiting watermarked samples as the verification samples for ownership verification. As such, their success relies on an underlying assumption that verification is a one-time and privacy-preserving process, which does not necessarily hold in practice. To alleviate this problem, we propose ZeroMark to conduct ownership verification without disclosing dataset-specified watermarks. Our method is inspired by our empirical and theoretical findings on the intrinsic properties of DNNs trained on the watermarked dataset. Specifically, ZeroMark first generates the closest boundary version of given benign samples and calculates their boundary gradients under the label-only black-box setting. After that, it examines whether the given suspicious model has been trained on the protected dataset by performing a hypothesis test based on the cosine similarity between the boundary gradients and the watermark pattern. Extensive experiments on benchmark datasets verify the effectiveness of ZeroMark and its resistance to potential adaptive attacks. The codes for reproducing our main experiments are publicly available at https://***/JunfengGo/***.
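Not the released ZeroMark implementation; a hedged sketch of the final verification step as described above, comparing boundary gradients with the watermark pattern via cosine similarity and a one-sided hypothesis test (the function name, the t-test choice, and the significance level are assumptions):

import numpy as np
from scipy import stats

def verify_ownership(boundary_grads, watermark, tau=0.0, alpha=0.05):
    # boundary_grads: (n, d) gradients estimated at the decision boundary;
    # watermark: (d,) watermark pattern. Tests H0: mean cosine similarity <= tau.
    w = watermark / np.linalg.norm(watermark)
    g = boundary_grads / np.linalg.norm(boundary_grads, axis=1, keepdims=True)
    sims = g @ w
    t_stat, p_value = stats.ttest_1samp(sims, tau, alternative="greater")
    return p_value < alpha, p_value

Rejecting the null hypothesis (small p-value) indicates that the suspicious model's boundary gradients align with the watermark, i.e., it was likely trained on the protected dataset.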