ISBN (digital): 9783319265353
ISBN (print): 9783319265353; 9783319265346
Non-negative matrix factorization (NMF) is a popular matrix decomposition technique that has attracted extensive attention from the data mining community. However, NMF suffers from the following deficiencies: (1) it is non-trivial to guarantee that the representation of the data points is sparse, and (2) NMF often achieves unsatisfactory clustering results because it completely neglects the labels of the dataset. This paper therefore proposes semi-supervised non-negative local coordinate factorization (SNLCF) to overcome the above deficiencies. In particular, SNLCF induces sparse coefficients by imposing the local coordinate constraint and propagates the labels of the labeled data to the unlabeled ones by constraining the coefficients of the labeled examples to be the class indicators. Benefiting from the labeled data, SNLCF can outperform NMF in clustering the unlabeled data. Experimental results on UCI datasets and two popular face image datasets suggest that SNLCF outperforms representative methods in terms of both average accuracy and average normalized mutual information.
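Setting aside the local coordinate constraint and the label-indicator terms, the NMF backbone that SNLCF builds on can be sketched with the standard multiplicative updates; the function below is a minimal illustration of plain NMF, not the authors' SNLCF algorithm.

```python
import numpy as np

def nmf(X, k, n_iter=200, seed=0, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates (illustrative only;
    SNLCF additionally imposes a local coordinate constraint and fixes the
    coefficients of labeled examples to class indicators)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps   # non-negative basis
    H = rng.random((k, n)) + eps   # non-negative coefficients
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # coefficient update
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # basis update
    return W, H

# Toy usage on a random non-negative matrix
X = np.abs(np.random.default_rng(1).standard_normal((20, 30)))
W, H = nmf(X, 5)
```

The multiplicative form keeps both factors non-negative throughout, which is why it is the usual starting point for constrained variants such as SNLCF.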
This dissertation addresses the growing challenge of visualizing and modifying massive 3D geometric models in a collaborative workspace by presenting a new scalable data partitioning algorithm in conjunction with a robust system architecture. The goal is to motivate the idea that a distributed architecture may solve many performance-related challenges in the visualization of large 3D data. Drawing data from modeling, simulation, interaction, and data fusion to deliver a starting point for scientific discovery, we present a collaborative visual analytics framework providing the ability to render, display, and interact with data at massive scale on high-resolution collaborative display environments. This framework allows users to connect to data when it is needed, where it is needed, and in a format suitable for productivity, while providing a means to interactively define a workspace that suits one's needs. The presented framework uses a distributed architecture to display content on tiled display walls of arbitrary shape, size, and resolution. These techniques manage the data storage, the communication, and the interaction between the many processing nodes that make up the display wall. This hides the complexity from the user while offering an intuitive means of interacting with the system. Multi-modal methods are presented that enable the user to interact with the system in a natural way, from hand gestures to laser pointers. The combination of this scalable display method with natural interaction modalities provides a robust foundation for a multitude of visualization and interaction applications. The final output from the system is an image on a large display made up of either projection- or LCD-based displays. Such a system has many different components working together in parallel to produce an output. By incorporating computer graphics theory with classical parallel processing techniques, performance limitations typically associated with the display
Background: To report that artifactual microhemorrhages are introduced by the two-dimensional (2D) homodyne filtering method of generating susceptibility weighted images (SWI) when open-ended fringe lines (OEF) are present in phase data. Methods: SWI data from 28 traumatic brain injury (TBI) patients were obtained on a 3 Tesla clinical Siemens scanner using both the product 3D gradient echo (GRE) sequence with generalized autocalibrating partially parallel acquisition (GRAPPA) acceleration and an in-house developed segmented echo planar imaging (sEPI) sequence without GRAPPA acceleration. SWI processing included (i) the 2D homodyne method implemented on the scanner console and (ii) 3D Fourier-based phase unwrapping followed by 3D high-pass filtering. Original and enhanced magnitude and phase images were carefully reviewed for sites of type III OEFs and microhemorrhages by a neuroradiologist on a PACS workstation. Results: Nineteen of 28 (68%) phase datasets acquired using the GRAPPA-accelerated GRE acquisition demonstrated type III OEFs. In SWI images, artifactual microhemorrhages were found in 17 of 19 (89%) cases generated using 2D homodyne processing. Application of the 3D Fourier-based unwrapping method prior to high-pass filtering minimized the appearance of the phase singularities in the enhanced phase and did not generate microhemorrhage-like artifacts in magnitude images. Conclusion: The 2D homodyne filtering method may introduce artifacts mimicking intracranial microhemorrhages in SWI images when type III OEFs are present in phase images. Such artifacts could lead to overestimation of pathology, e.g., in TBI. This work demonstrates that 3D phase unwrapping methods minimize this artifact. However, methods to properly combine phase across coils are needed to eliminate it. J. Magn. Reson. Imaging 2015;41:1695-1700. (c) 2014 Wiley Periodicals, Inc.
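For readers unfamiliar with the processing step at issue: 2D homodyne filtering divides the complex image by a low-pass-filtered copy of itself, leaving only high-frequency phase for the SWI mask. The sketch below is a generic illustration with an assumed central k-space window and the standard negative-phase mask; it is not the Siemens console implementation.

```python
import numpy as np

def homodyne_highpass_phase(img, win=32):
    """High-pass phase via 2D homodyne filtering: divide the complex image
    by its low-pass version (central k-space window of size `win`)."""
    k = np.fft.fftshift(np.fft.fft2(img))
    lp = np.zeros_like(k)
    cy, cx = k.shape[0] // 2, k.shape[1] // 2
    lp[cy - win // 2:cy + win // 2, cx - win // 2:cx + win // 2] = \
        k[cy - win // 2:cy + win // 2, cx - win // 2:cx + win // 2]
    low = np.fft.ifft2(np.fft.ifftshift(lp))
    # Divide out the smooth phase: angle of img * conj(low) / |low|
    return np.angle(img * np.conj(low) / (np.abs(low) + 1e-12))

def swi_phase_mask(hp_phase, power=4):
    """Standard negative-phase mask raised to `power`; it is multiplied into
    the magnitude image to enhance susceptibility contrast."""
    m = np.where(hp_phase < 0, 1.0 + hp_phase / np.pi, 1.0)
    return np.clip(m, 0.0, 1.0) ** power

# Smooth synthetic phase: the high-pass residual should be near zero,
# so the mask stays close to 1 (no artifactual darkening).
y, x = np.mgrid[0:64, 0:64]
img = np.exp(1j * 0.05 * x)
hp = homodyne_highpass_phase(img)
mask = swi_phase_mask(hp)
```

A type III OEF is a point where the phase is non-removable by such smooth division, which is why the residual phase (and hence the mask) picks up a microhemorrhage-like dark spot.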
ISBN (print): 9783662480960; 9783662480953
Synchrotron (x-ray) light sources permit investigation of the structure of matter at extremely small length and time scales. Advances in detector technologies enable increasingly complex experiments and more rapid data acquisition. However, analysis of the resulting data then becomes a bottleneck, preventing near-real-time error detection or experiment steering. We present here methods that leverage highly parallel computers to improve the performance of iterative tomographic image reconstruction applications. We apply these methods to the conventional per-slice parallelization approach and use them to implement a novel in-slice approach that can use many more processors. To address programmability, we implement the introduced methods in a high-performance MapReduce-like computing middleware, which is further optimized for reconstruction operations. Experiments with four reconstruction algorithms and two large datasets show that our methods can scale up to 8K cores on an IBM BG/Q supercomputer with almost perfect speedup and can reduce total reconstruction times for large datasets by more than 95.4% on 32K cores relative to 1K cores. Moreover, the average reconstruction times are improved from ~2 h (256 cores) to ~1 min (32K cores), thus enabling near-real-time use.
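The per-slice approach exploits the fact that, in parallel-beam geometry, each transverse slice reconstructs independently from its own sinogram, so slices map cleanly onto workers. The sketch below uses a thread pool and a toy unfiltered backprojection as a stand-in for the paper's iterative solvers and middleware; all names are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def toy_backproject(sino):
    """Toy unfiltered backprojection for one slice: smear each projection
    row across the image and average (a stand-in for a real iterative
    solver such as SIRT or MLEM)."""
    n_angles, n_det = sino.shape
    img = np.zeros((n_det, n_det))
    for row in sino:
        img += np.tile(row, (n_det, 1))
    return img / n_angles

def reconstruct_volume(sinograms, workers=4):
    """Per-slice parallelization: each slice is an independent work unit."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(toy_backproject, sinograms))

# Three identical toy slices reconstructed on two workers
volume = reconstruct_volume([np.full((8, 16), 2.0)] * 3, workers=2)
```

The in-slice approach described in the abstract goes further by also partitioning the work within a single slice, which is what allows far more processors than there are slices.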
Single-cell RNA sequencing has recently emerged as a powerful tool for mapping cellular heterogeneity in diseased and healthy tissues, yet high-throughput methods are needed for capturing the unbiased diversity of cells. Droplet microfluidics is among the most promising candidates for capturing and processing thousands of individual cells for whole-transcriptome or genomic analysis in a massively parallel manner with minimal reagent use. We recently established a method called inDrops, which has the capability to index >15,000 cells in an hour. A suspension of cells is first encapsulated into nanoliter droplets with hydrogel beads (HBs) bearing barcoding DNA primers. Cells are then lysed and mRNA is barcoded (indexed) by a reverse transcription (RT) reaction. Here we provide details for (i) establishing an inDrops platform (1 d); (ii) performing hydrogel bead synthesis (4 d); (iii) encapsulating and barcoding cells (1 d); and (iv) RNA-seq library preparation (2 d). inDrops is a robust and scalable platform, and it is unique in its ability to capture and profile >75% of cells in even very small samples, on a scale of thousands or tens of thousands of cells.
ISBN (print): 9781479979295
This paper deals with the development and implementation of a cloud screening algorithm for image time series, with a focus on the forthcoming Sentinel-2 satellites to be launched under the ESA Copernicus Programme. The proposed methodology is based on kernel ridge regression and exploits temporal information to detect anomalous changes that correspond to cloud cover. The huge data volumes to be processed when dealing with high temporal, spatial, and spectral resolution datasets motivate implementing the algorithm on distributed computing resources. Consequently, an operational cloud screening service has been specifically designed and implemented in the frame of the Sentinels Synergy Framework (SenSyF). The effectiveness of the proposed method is successfully illustrated using a high-resolution time series dataset with a 5-day revisit derived from SPOT-4, which was collected by ESA in preparation for the exploitation of the Sentinel-2 mission.
ISBN (print): 076951846X
The proceedings contain 87 papers. The topics discussed include: a semi-automatic surface reconstruction framework based on T-surfaces and isosurface extraction methods; DSVOL II - a distributed visualization and sonification application communicating via an XML-based protocol; visualizing inner structures in multimodal volume data; evaluating an adaptive windowing scheme in speckle noise MAP filtering; multispectral image data fusion using projections onto convex sets techniques; texture feature neural classifier for remote sensing image retrieval systems; filtering sparse data with 3D tensorial structuring elements; combining approximate geometry with view-dependent texture mapping - a hybrid approach to 3D video teleconferencing; improvement and invariance analysis of Zernike moments used as a region-based shape descriptor; automatic method for assessment of telangiectasia degree by mathematical morphology; and approximating parametric curves with strip trees using affine arithmetic.
Improving both accuracy and computational performance of numerical tools is a major challenge for seismic imaging and generally requires specialized implementations to make full use of modern parallel architectures. We present a computational strategy for reverse-time migration (RTM) with accelerator-aided clusters. A new imaging condition computed from the pressure and velocity fields is introduced. The model solver is based on a high-order discontinuous Galerkin time-domain (DGTD) method for the pressure-velocity system with unstructured meshes and multirate local time stepping. We adopted the MPI+X approach for distributed programming, where X is a threaded programming model. In this work we chose OCCA, a unified framework that makes use of major multithreading languages (e.g., CUDA and OpenCL) and offers the flexibility to run on several hardware architectures. DGTD schemes are suitable for efficient computations with accelerators thanks to localized element-to-element coupling and the dense algebraic operations required for each element. Moreover, compared to high-order finite-difference schemes, the thin halo inherent to the DGTD method reduces the amount of data to be exchanged between MPI processes and the storage requirements for RTM procedures. The amount of data to be recorded during simulation is reduced by storing only boundary values in memory rather than on disk and recreating the forward wavefields. Computational results are presented indicating that these methods are strongly scalable up to at least 32 GPUs for a three-dimensional RTM case.
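For context, the classical zero-lag cross-correlation imaging condition that pressure-velocity variants extend is I(x) = Σ_t p_s(x, t) p_r(x, t), often normalized by the source illumination Σ_t p_s(x, t)². A compact NumPy sketch of this baseline condition follows; the field arrays are synthetic placeholders, not the paper's DGTD wavefields or its new pressure-velocity condition.

```python
import numpy as np

def rtm_image(fwd, rcv, eps=1e-12):
    """Zero-lag cross-correlation imaging condition with source-illumination
    normalization. fwd, rcv: (n_time, nx, nz) snapshots of the forward
    (source-side) and backward-propagated (receiver-side) wavefields."""
    img = np.einsum('txz,txz->xz', fwd, rcv)    # sum_t p_s * p_r
    illum = np.einsum('txz,txz->xz', fwd, fwd)  # sum_t p_s^2
    return img / (illum + eps)

# Trivial synthetic check: constant fields give a constant image
fwd = np.ones((4, 3, 5))
rcv = 2.0 * np.ones((4, 3, 5))
image = rtm_image(fwd, rcv)
```

Storing only boundary values and recomputing the forward wavefield, as the abstract describes, avoids keeping every `fwd` snapshot in memory during the backward sweep.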