ISBN (digital): 9781510613553
ISBN (print): 9781510613553; 9781510613546
Presented in this contribution are results gathered by Pi of the Sky during the LSC-Virgo O1 science run. Pi of the Sky took part in the LSC-Virgo Electromagnetic (EM) Follow-up project during the first science run of the Advanced LIGO detectors, between September 2015 and January 2016. The LSC-Virgo EM Follow-up project aims to search for electromagnetic counterparts to gravitational-wave transient candidates. Observing an event in both the EM and gravitational-wave bands would be an important step forward for multi-messenger astronomy. The aim of this paper is to present the algorithms used by Pi of the Sky for analysing the data taken during the first science run and the corresponding results. Concepts for algorithms for the next science run are also discussed.
ISBN (print): 9781510600195
We present a fully automated generative method for simultaneous brain tumor and organs-at-risk segmentation in multi-modal magnetic resonance images. The method combines an existing whole-brain segmentation technique with a spatial tumor prior, which uses convolutional restricted Boltzmann machines to model tumor shape. The method is not tuned to any specific imaging protocol and can simultaneously segment the gross tumor volume, peritumoral edema and healthy tissue structures relevant for radiotherapy planning. We validate the method on a manually delineated clinical data set of glioblastoma patients by comparing segmentations of the gross tumor volume, brainstem and hippocampus. The preliminary results demonstrate the feasibility of the method.
ISBN (print): 9781510600249
An increasing adoption of electronic medical records has made information more accessible to clinicians and researchers through dedicated systems such as HIS, RIS and PACS. The speed and volume at which information is generated in a multi-institutional clinical study make the problem more complicated than the day-to-day hospital workflow. Often, increased access to information does not translate into efficient use of that information. It is therefore crucial to establish models that can be used to organize and visualize multi-disciplinary data; good visualization in turn makes it easy for clinical decision-makers to reach a conclusion within a short span of time. In a clinical study involving multi-disciplinary data and multiple user groups who need access to the same data, presentation states based on the stage of the clinical trial or the task at hand are crucial within the workflow. To demonstrate the conceptual system design and workflow, we present a clinical trial based on the application of proton beams for radiosurgery that utilizes our proposed system. To demonstrate the user roles and visualization design, we focus on three user groups: researchers involved in patient enrollment and recruitment, clinicians involved in treatment and imaging review, and the principal investigators monitoring the progress of the clinical study. We also describe the datasets for each phase of the clinical study, including preclinical and clinical data related to subject enrollment, subject recruitment (classifier), treatment (DICOM), imaging, and pathological analysis (protein staining) of outcomes.
ISBN (print): 9781510600867
A scanning electron microscope (SEM) is a type of electron microscope that produces images of a sample by scanning it with a focused beam of electrons. The electrons interact with the sample atoms, producing various signals that are collected by detectors. The gathered signals contain information about the sample's surface topography and composition. The electron beam is generally scanned in a raster pattern, and the beam's position is combined with the detected signal to produce an image. The most common configuration for an SEM produces a single value per pixel, with the results usually rendered as grayscale images. The captured images may suffer from insufficient brightness, anomalous contrast, jagged edges, and poor quality due to a low signal-to-noise ratio, grainy topography and poor surface detail. The segmentation of SEM images is a challenging problem in the presence of the previously mentioned distortions. In this paper, we focus on the clustering of this type of image. To that end, we evaluate the performance of well-known unsupervised clustering and classification techniques: connectivity-based (hierarchical) clustering, centroid-based clustering, distribution-based clustering and density-based clustering. Furthermore, we propose a new spatial fuzzy clustering technique that works efficiently on this type of image and compare its results against these standard techniques in terms of clustering validation metrics.
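As a point of reference for the centroid-based and fuzzy families discussed above, the sketch below implements plain fuzzy c-means on pixel intensities. It is a generic baseline, not the spatial fuzzy variant proposed in the paper; the data and all names and parameters here are illustrative.

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means on 1-D pixel intensities -- a centroid-based
    baseline, not the spatial fuzzy variant proposed in the paper."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                               # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)            # fuzzy cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))                  # standard FCM update
        u /= u.sum(axis=0)
    return centers, u

# toy 'SEM image': two noisy intensity populations (dark and bright pixels)
rng = np.random.default_rng(1)
x = np.concatenate([np.full(300, 50.0), np.full(300, 200.0)])
x += rng.normal(0.0, 5.0, x.size)
centers, u = fuzzy_cmeans(x, c=2)
labels = u.argmax(axis=0)                            # hard assignment per pixel
```

A spatial variant would typically smooth each pixel's membership over its neighborhood before normalization, which suppresses the isolated noisy assignments that plain FCM produces on low-SNR images.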
ISBN (print): 9781510600980
State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
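The coherence metrics mentioned above are cheap to compute from the measurement matrix. The sketch below computes the mutual coherence and runs a generic greedy pursuit (orthogonal matching pursuit) as a baseline; this is not YAMPA itself, whose coherence-dependent thresholding rule is the paper's contribution, and all names and sizes here are illustrative.

```python
import numpy as np

def omp(A, y, tol=1e-6, max_iter=None):
    """Orthogonal matching pursuit: a generic greedy sparse-recovery
    baseline (not YAMPA; YAMPA's threshold depends on coherence metrics)."""
    n = A.shape[1]
    max_iter = max_iter or n
    r = y.astype(float).copy()
    support = []
    x = np.zeros(n)
    for _ in range(max_iter):
        j = int(np.argmax(np.abs(A.T @ r)))          # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                 # update residual
        if np.linalg.norm(r) < tol:
            break
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100))
A /= np.linalg.norm(A, axis=0)                       # unit-norm columns

# mutual coherence: max absolute inner product between distinct columns --
# one of the computable metrics a coherence-based threshold can use
G = np.abs(A.T @ A)
np.fill_diagonal(G, 0.0)
mu = G.max()

x_true = np.zeros(100)
x_true[[7, 42]] = [1.5, -2.0]                        # 2-sparse signal
y = A @ x_true                                       # noiseless measurements
x_hat = omp(A, y)
```

Note that OMP needs no sparsity-dependent stopping rule here only because the measurements are noiseless; adapting the stopping threshold to the matrix, as YAMPA does, matters precisely when they are not.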
Network motifs are interaction patterns that occur significantly more often in a complex network than in the corresponding randomized networks. They have been found effective in characterizing many real-world networks. A number of network motif detection algorithms have been proposed in the literature, where the interactions in a motif are mostly assumed to be deterministic, i.e., either present or missing. With the conjecture that real-world networks result from interaction patterns that are stochastic in nature, this paper proposes the use of stochastic models to achieve more robust motif detection. In particular, we propose a finite mixture model to detect multiple stochastic network motifs. A component-wise expectation-maximization (CEM) algorithm is derived for the finite mixture of stochastic network motifs so that both the optimal number of motifs and the motif parameters can be estimated automatically. For performance evaluation, we applied the proposed algorithm to both synthetic networks and a number of online social network data sets and demonstrated that it outperformed the deterministic motif detection algorithm FANMOD as well as the conventional EM algorithm in terms of robustness against noise. We also discuss how to interpret the detected stochastic network motifs to gain insights into the interaction patterns embedded in the network data. In addition, the algorithm's computational complexity and runtime performance are presented for efficiency evaluation.
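As a simplified stand-in for the stochastic motif model, the sketch below fits a finite mixture of multivariate Bernoullis with plain EM: each component is a vector of edge-presence probabilities, analogous to one stochastic motif. The paper's component-wise EM additionally prunes redundant components to select the number of motifs automatically; that step is omitted here, and the data and names are illustrative.

```python
import numpy as np

def bernoulli_mixture_em(X, k=2, iters=200, seed=0):
    """Plain EM for a mixture of multivariate Bernoullis: each component is
    a vector of edge-presence probabilities (a stand-in for one stochastic
    motif). The paper's CEM variant also prunes redundant components."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                         # mixing weights
    p = rng.uniform(0.25, 0.75, (k, d))              # component probabilities
    for _ in range(iters):
        # E-step: responsibilities, computed in the log domain for stability
        log_r = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and probabilities from soft counts
        nk = r.sum(axis=0)
        pi = nk / n
        p = np.clip((r.T @ X) / nk[:, None], 1e-3, 1 - 1e-3)
    return pi, p

# illustrative data: two 'motifs' over 4 possible edges
rng = np.random.default_rng(1)
p_true = np.array([[0.9, 0.9, 0.1, 0.1],
                   [0.1, 0.1, 0.9, 0.9]])
X = np.vstack([(rng.random((200, 4)) < p_true[0]).astype(float),
               (rng.random((200, 4)) < p_true[1]).astype(float)])
pi_hat, p_hat = bernoulli_mixture_em(X, k=2)
```

Because edges are modeled as probabilities rather than hard presence/absence, a sample that matches a motif in most but not all edges still contributes to that motif's soft counts, which is the source of the robustness against noise discussed above.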
We propose a novel distributed expectation-maximization (EM) method for non-cooperative RF target localization using a wireless sensor network. We consider the scenario where few or no sensors receive line-of-sight signals from the target. In the case of non-line-of-sight signals, the signal path consists of a single reflection between the transmitter and receiver. Each sensor is able to measure the time difference of arrival of the target's signal with respect to a reference sensor, as well as the angle of arrival of the target's signal. We derive a distributed EM algorithm where each node uses its local information to compute summary statistics and then shares these statistics with its neighbors to improve its estimate of the target location. We show that our distributed algorithm converges, and simulation results suggest that our method achieves an accuracy close to that of the centralized EM algorithm. We apply the distributed EM algorithm to a set of experimental measurements with a network of four nodes, which confirm that the algorithm is able to localize an RF target in a realistic non-line-of-sight scenario.
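The "share summary statistics with neighbors" step can be illustrated with a consensus-averaging toy example: four hypothetical nodes each hold noisy local position measurements, repeatedly average their sufficient statistics over a ring network, and converge to the centralized estimate. The real algorithm interleaves such exchanges with EM updates over TDOA/AOA likelihoods; everything below is an illustrative simplification.

```python
import numpy as np

# Hypothetical 4-node network: each node holds noisy local measurements of
# the target position and repeatedly averages its summary statistic (its
# local mean) with its ring neighbors. This is only the consensus step; the
# real algorithm interleaves it with EM updates over TDOA/AOA likelihoods.
rng = np.random.default_rng(0)
target = np.array([10.0, 5.0])
local = [target + rng.normal(0.0, 1.0, (30, 2)) for _ in range(4)]
stats = np.array([m.mean(axis=0) for m in local])   # per-node summary stats

# doubly stochastic weights for a 4-node ring (self + two neighbors)
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
for _ in range(50):
    stats = W @ stats                               # one consensus round

centralized = np.vstack(local).mean(axis=0)         # fusion-center answer
```

Because the weight matrix is doubly stochastic and the ring is connected, every node's statistic converges to the network-wide average, so no node ever has to transmit its raw measurements.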
ISBN (print): 9781628415803
One significant technological barrier to enabling multi-sensor integrated ISR is obtaining an accurate understanding of the uncertainty present in each sensor. Once the uncertainty is known, data fusion, cross-cueing, and other exploitation algorithms can be performed. However, these algorithms depend on the availability of accurate uncertainty information from each sensor. In many traditional systems (e.g., a GPS/IMU-based navigation system), the uncertainty values for any estimate can be derived by carefully observing or characterizing the uncertainty of its inputs and then propagating that uncertainty through the estimation system. In this paper, we demonstrate that image registration uncertainty, on the other hand, cannot be characterized in this fashion. Much of the uncertainty in the output of a registration algorithm is due not only to the sensors used to collect the data, but also to the data itself and the algorithms used. We present the results of an analysis of feature-based image registration uncertainty, using Monte Carlo analysis to investigate the errors present in an image registration algorithm. We demonstrate that the classical methods of propagating uncertainty from the inputs to the outputs yield significant underestimates of the true uncertainty of the output. We then describe at least two possible sources of additional error present in feature-based methods and demonstrate their importance.
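The Monte Carlo methodology can be sketched on a toy "registration" problem: estimating a 2-D translation as the mean offset between matched, noise-perturbed feature points. In this linear case the empirical spread matches the classically propagated value; the paper's finding is that for real feature-based registration the Monte Carlo spread substantially exceeds it. All parameters below are illustrative.

```python
import numpy as np

# Toy 'registration': estimate a 2-D translation as the mean offset between
# matched feature points, then compare Monte Carlo spread with the
# classically propagated value. Parameters are illustrative.
rng = np.random.default_rng(0)
n_pts, n_trials, sigma = 25, 2000, 0.5
true_t = np.array([3.0, -1.0])
base = rng.uniform(0.0, 100.0, (n_pts, 2))          # reference feature points

estimates = np.empty((n_trials, 2))
for i in range(n_trials):
    noisy = base + true_t + rng.normal(0.0, sigma, (n_pts, 2))
    estimates[i] = (noisy - base).mean(axis=0)      # translation estimate

mc_std = estimates.std(axis=0)                      # Monte Carlo uncertainty
analytic_std = sigma / np.sqrt(n_pts)               # propagated-input value
```

The same loop applied to a full registration pipeline (feature detection, matching, robust model fitting) is where the two values diverge, since matching errors and algorithmic choices inject error that input propagation cannot see.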
ISBN (print): 9781628413625
Computational anatomy is a subdiscipline of anatomy that studies macroscopic details of human body structure using a set of automatic techniques. Different reference systems have been developed for brain mapping and morphometry in functional and structural studies. Several models integrate particular anatomical regions to highlight pathological patterns in structural brain MRI, a challenging task due to the complexity, variability, and nonlinearity of human brain anatomy. In this paper, we present a strategy that aims to find anatomical regions with pathological meaning by using a probabilistic analysis. Our method starts by extracting visual primitives from brain MRI, which are partitioned into small patches and then softly clustered, forming different regions that are not necessarily connected. Each of these regions is described by a co-occurrence histogram of visual features, upon which a probabilistic semantic analysis is used to find the underlying structure of the information, i.e., regions separated by their low-level similarity. The proposed approach was tested on the OASIS data set, which includes 69 Alzheimer's disease (AD) patients and 65 healthy subjects (NC).
ISBN (print): 9781628417609
Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.