In this manuscript, we develop an efficient algorithm to evaluate the azimuthal Fourier components of the Green's function for the Helmholtz equation in cylindrical coordinates. A computationally efficient algorit...
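The abstract above is truncated, but the quantity it refers to is standard: in cylindrical coordinates the free-space Helmholtz kernel exp(ikR)/(4πR) depends on the azimuthal angle only through the distance R, so each azimuthal Fourier mode is a single integral over that angle. The sketch below is only a naive quadrature baseline for those modes, not the efficient algorithm the manuscript develops; the function name, the 1/(2π) normalization convention, and the quadrature size are assumptions.

import numpy as np

def azimuthal_modes(k, rho, rho_p, dz, m_max, n_quad=2048):
    """Naive trapezoidal quadrature for the azimuthal Fourier modes g_m of the
    free-space Helmholtz Green's function exp(i*k*R)/(4*pi*R), where
    R(phi) = sqrt(rho^2 + rho_p^2 - 2*rho*rho_p*cos(phi) + dz^2) and
    g_m = (1/2pi) * integral_0^{2pi} G(R(phi)) * exp(-i*m*phi) dphi."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_quad, endpoint=False)
    R = np.sqrt(rho**2 + rho_p**2 - 2.0 * rho * rho_p * np.cos(phi) + dz**2)
    G = np.exp(1j * k * R) / (4.0 * np.pi * R)   # kernel sampled on the ring
    m = np.arange(m_max + 1)[:, None]            # modes m = 0, ..., m_max
    # the uniform-grid mean equals the trapezoid rule for a periodic integrand
    return (G[None, :] * np.exp(-1j * m * phi)).mean(axis=1)

# Example: the modal coefficients decay for large |m|
g = azimuthal_modes(k=5.0, rho=1.0, rho_p=0.7, dz=0.3, m_max=8)
print(np.abs(g))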
With the advancement of new technologies, modern society is surrounded by devices and services with internet connectivity. It provides us with a better lifestyle, convenient services, and easy communication. But it o...
Secure data ownership management is significant for realizing personal and private data sharing, which can be widely used with consumer electronics. The notion of decentralized privacy (DP) was introduced by Zyskin...
In recent years, hypergraph generalizations of many graph cut problems and algorithms have been introduced and analyzed as a way to better explore and understand complex systems and datasets characterized by multiway relationships. The standard cut function for a hypergraph H = (V, E) assigns the same penalty to a cut hyperedge, regardless of how its nodes are separated by a partition of V. Recent work in theoretical computer science and machine learning has made use of a generalized hypergraph cut function that can be defined by associating each hyperedge e ∈ E with a splitting function w_e, which assigns a (possibly different) penalty to each way of separating the nodes of e. When each w_e is a submodular cardinality-based splitting function, meaning that w_e(S) = g(|S|) for some concave function g, previous work has shown that a generalized hypergraph cut problem can be reduced to a directed graph cut problem on an augmented node set. However, existing reduction procedures introduce up to O(|e|^2) edges for a hyperedge e. This often results in a dense graph, even when the hypergraph is sparse, which leads to slow runtimes (in theory and practice) for algorithms that run on the reduced graph. We introduce a new framework of sparsifying hypergraph-to-graph reductions, where a hypergraph cut defined by submodular cardinality-based splitting functions is (1+ε)-approximated by a cut on a directed graph. Our techniques are based on approximating concave functions using piecewise linear curves, and we show that they are optimal within an existing strategy for hypergraph reduction. We provide bounds on the number of edges needed to model different types of splitting functions. For ε > 0, in the worst case, we need O(ε^{-1} |e| log |e|) edges to reduce any hyperedge e, which leads to faster runtimes for approximately solving generalized hypergraph s-t cut problems. For the common machine learning heuristic of a clique splitting function on a node set e, our approach requires only...
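As a rough illustration of the piecewise-linear idea mentioned above (and not the paper's actual gadget construction or the source of its edge bounds), the sketch below greedily covers the integer points of a concave cardinality-based splitting function g with chords so that every point is matched within a multiplicative factor of 1+ε; the greedy strategy, function names, and example penalties are assumptions made for the illustration.

import math

def greedy_pieces(g, n, eps):
    """Greedily cover the integer points 1..n-1 with chords of the concave
    function g so that g(x) <= (1 + eps) * chord(x) on every covered point
    (chords of a concave function lie below it, so each piece is a
    (1 + eps)-underapproximation).  Returns the list of chord endpoints."""
    pieces = []
    s = 1
    while s < n - 1:
        t = s + 1
        while t < n - 1:
            cand = t + 1
            slope = (g(cand) - g(s)) / (cand - s)
            within = all(
                g(x) <= (1.0 + eps) * (g(s) + slope * (x - s))
                for x in range(s, cand + 1)
            )
            if not within:
                break
            t = cand
        pieces.append((s, t))
        s = t
    return pieces

# Two submodular cardinality-based examples on a hyperedge of size 1000:
n = 1000
clique = lambda x: x * (n - x)                  # clique expansion penalty g(|S|) = |S|*|e \ S|
sqrt_cb = lambda x: math.sqrt(min(x, n - x))    # square-root cardinality penalty
for name, g in [("clique", clique), ("sqrt", sqrt_cb)]:
    print(name, "pieces:", len(greedy_pieces(g, n, eps=0.1)))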
Blind calibration of sensors arrays (without using calibration signals) is an important, yet challenging problem in array processing. While many methods have been proposed for "classical" array structures, s...
Hepatitis, among prevalent diseases in today's world, affects the health of more than 1.5 million individuals annually worldwide. However, diagnostic accuracy during the early stages of symptom manifestation is insufficient for effective resolution. Fortunately, the researchers behind this study have made significant progress by developing an advanced machine-learning model that surpasses its predecessors in accuracy. To conduct their experiment, they utilized the hepatitis dataset available to the public through the University of California Irvine (UCI) machine learning repository. This dataset comprises 155 instances and serves as a valuable resource for their analysis. The authors employed various classifiers such as Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Adaboost, and Decision Tree (DT). Remarkably, when combined with Recursive Feature Elimination (RFE), SVM achieved an impressive accuracy rate of 94.78%, outperforming all other classifiers.
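A minimal sketch of the kind of pipeline the abstract describes, i.e., a linear SVM wrapped in Recursive Feature Elimination and evaluated by cross-validation on the UCI hepatitis data; the file name, generic column names, imputation strategy, and number of selected features are assumptions rather than the authors' exact setup.

import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# UCI Hepatitis: 155 instances, class label + 19 attributes, '?' marks missing
# values.  Column names and the local file name are placeholders (assumption).
cols = ["Class"] + [f"f{i}" for i in range(1, 20)]
data = pd.read_csv("hepatitis.data", header=None, names=cols, na_values="?")
X, y = data.drop(columns="Class").to_numpy(), data["Class"].to_numpy()

model = make_pipeline(
    SimpleImputer(strategy="median"),   # fill the dataset's missing entries
    StandardScaler(),
    # RFE needs an estimator exposing coef_, hence the linear-kernel SVM
    RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10),
    SVC(kernel="linear", C=1.0),
)

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")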
High-level Petri nets such as Coloured Petri Nets (CPNs) are characterised by the combination of Petri nets and a high-level programming language. In the context of CPNs and CPN Tools, the inscriptions (e.g., arc expre...
Let V_n be a set of n points in the plane and let x ∉ V_n. An x-loop is a continuous closed curve not containing any point of V_n. We say that two x-loops are non-homotopic if they cannot be transformed continuously in...
During manufacturing test, researchers usually overlook the importance of process variation defects and marginal defects, which can seriously affect Early-Life-Failure (ELF) test results. Theoretically, machine lea...
ISBN (electronic): 9781728168289
ISBN (print): 9781728168296
Connectomes are brain networks represented as a graph, with the vertices being the regions of the brain and weighted edges representing the strength of connections between the regions, inferred from brain imaging techniques such as Functional MRI (fMRI). An active line of research is to use connectomes to identify markers for brain disorders, especially neurodevelopmental disorders such as Autism Spectrum Disorder (ASD), by studying the differences between the connectomes of healthy subjects and patients. This paper presents a novel data model for connectome data and analyzes its efficacy in the classification of ASD and Typically Developing (TD) (healthy) connectomes. The proposed data modelling begins by clustering the vertices (brain regions) using Graph Spectral Clustering into a fixed number of clusters, chosen as four based on empirical evidence. The resulting clustering is used to map the vertices into a binary matrix, which is then converted into a binary row vector to form a vector space model of the connectome data that is employed for classification. The developed model is first validated using fMRI-derived connectome data of 812 healthy subjects from the Human Connectome Project (HCP). Binary data models of the UCLA Autism dataset, with fMRI and DTI scans of 42 ASD and 37 TD subjects, are generated and employed for their classification. Different classification algorithms are trained, tested, and their performance evaluated on the resulting dataset. Cross-Validation (CV) estimates identified the best performance, achieved using Logistic Regression: 83% recall and 83% precision for the DTI data, and 73% recall and 89% precision for the fMRI data.
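A minimal sketch of the data model described above, under the assumption that each connectome is given as a region-by-region weight matrix: spectral clustering of the regions into four clusters, one-hot encoding of the cluster labels into a binary matrix, flattening to a row vector, and Logistic Regression evaluated by cross-validation. The helper names and the use of absolute values to obtain non-negative affinities are illustrative choices, not the authors' exact preprocessing.

import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

def binary_vector(adjacency, n_clusters=4):
    """Map one connectome (regions x regions weight matrix) to a binary row
    vector: cluster the regions, then one-hot encode each region's label."""
    affinity = np.abs(adjacency)             # precomputed affinities must be non-negative
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
    onehot = np.eye(n_clusters)[labels]      # (regions x n_clusters) binary matrix
    return onehot.ravel()                    # flatten to a single row vector

def evaluate(connectomes, labels):
    """connectomes: list of (regions x regions) arrays; labels: 0 = TD, 1 = ASD."""
    X = np.stack([binary_vector(A) for A in connectomes])
    clf = LogisticRegression(max_iter=1000)
    cv = cross_validate(clf, X, labels, cv=5, scoring=("precision", "recall"))
    return cv["test_precision"].mean(), cv["test_recall"].mean()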