Wound segmentation plays an important supporting role in wound observation and wound healing. Current image segmentation methods include those based on traditional image processing and those based on deep neural networks. The traditional methods rely on hand-crafted image features and do not require large amounts of labeled data. The methods based on deep neural networks, by contrast, can extract image features effectively without manual design, but a lot of training data are required. Combining the advantages of both, this paper presents a composite model for wound segmentation. The model first applies the wounded-skin detection algorithm designed in this paper to highlight image features; the preprocessed images are then segmented by deep neural networks; finally, semantic corrections are applied to the segmentation results. The model shows good performance in our experiments.
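The paper's wounded-skin detection algorithm is not spelled out in the abstract, so the following Python sketch only illustrates the three-stage composite structure it describes: a heuristic HSV skin highlight as preprocessing, an arbitrary trained network as the segmentation stage, and a connected-component cleanup as the semantic correction. The HSV thresholds and the `segment_fn` hook are assumptions, not the authors' method.

```python
# Hypothetical sketch of the three-stage composite pipeline described above.
import cv2
import numpy as np

def skin_highlight(image_bgr):
    """Stage 1: coarse skin detection via HSV thresholding (assumed heuristic)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))  # rough skin-tone band
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask), mask

def semantic_correction(pred_mask, skin_mask):
    """Stage 3: keep only wound pixels lying on detected skin, then retain the
    largest connected component as the final wound region."""
    pred_mask = cv2.bitwise_and(pred_mask, skin_mask)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(pred_mask)
    if n <= 1:
        return pred_mask
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return np.where(labels == largest, 255, 0).astype(np.uint8)

def composite_segmentation(image_bgr, segment_fn):
    highlighted, skin_mask = skin_highlight(image_bgr)
    pred_mask = segment_fn(highlighted)  # Stage 2: any trained segmentation net
    return semantic_correction(pred_mask, skin_mask)
```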
Anomaly detection aims to separate anomalous pixels from the background, and has become an important application of remotely sensed hyperspectral image processing. Anomaly detection methods based on low-rank and sparse representation (LRASR) can accurately detect anomalous pixels. However, with the significant volume increase of hyperspectral image repositories, such techniques consume a significant amount of time (mainly due to the massive amount of matrix computations involved). In this paper, we propose a novel distributed parallel algorithm (DPA) that redesigns the key operators of LRASR in terms of the MapReduce model to accelerate LRASR on cloud computing architectures. Independent computation operators are identified and executed in parallel on Spark. Specifically, we reconstitute the hyperspectral images in a format suited to efficient DPA processing, design an optimized storage strategy, and develop a pre-merge mechanism to reduce data transmission. A repartitioning policy is also proposed to further improve DPA's efficiency. Our experimental results demonstrate that the newly developed DPA achieves very high speedups when accelerating LRASR while maintaining similar accuracies. Moreover, DPA is shown to scale with the number of computing nodes and to be capable of processing big hyperspectral images involving massive amounts of data.
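As a rough illustration of the MapReduce-style redesign (not the authors' DPA code), the PySpark sketch below distributes a pixel-wise residual-scoring operator across partitions and performs the scoring locally per partition, mirroring the pre-merge idea of limiting what crosses the network. The function `detect_anomalies`, the broadcast dictionary `D`, and the least-squares scoring are illustrative assumptions.

```python
# Minimal PySpark sketch: push a per-pixel scoring operator into data-parallel
# map stages, with a per-partition "pre-merge" reduction.
import numpy as np
from pyspark import SparkContext

def detect_anomalies(sc: SparkContext, pixels, D, threshold):
    """pixels: iterable of (pixel_id, spectrum) pairs; D: bands x atoms dictionary."""
    D_b = sc.broadcast(D)  # ship the (small) dictionary once to every executor

    def score_partition(part):
        # Pre-merge: score all pixels of one partition locally, emitting only
        # compact (pixel_id, score) pairs over the network.
        D_local = D_b.value
        for pid, x in part:
            z, *_ = np.linalg.lstsq(D_local, x, rcond=None)
            residual = np.linalg.norm(x - D_local @ z)  # sparse-error magnitude
            yield pid, residual

    scores = sc.parallelize(list(pixels)).mapPartitions(score_partition)
    return scores.filter(lambda kv: kv[1] > threshold).collect()
```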
ISBN (print): 9781728106489
Glaucoma is a disease of the retina. Millions of people currently suffer from it, and early detection can save them from blindness. Various methods have therefore been developed for its detection. In this paper, we survey the reported methods and summarize their performance in terms of detection accuracy.
Existing visual SLAM systems mainly utilize the three-dimensional environmental depth information from RGB-D cameras to complete the robotic simultaneous localization and mapping task. However, the RGB-D camera has a limited working range and cannot accurately measure depth information at far distances. Besides, the RGB-D camera is easily influenced by strong lighting and other external factors, which leads to poor accuracy of the acquired environmental depth information. Recently, deep learning technologies have achieved great success in the visual SLAM area: they can directly learn high-level features from the visual inputs and improve the estimation accuracy of the depth information. Deep learning technologies therefore have the potential to extend the source of depth information and improve the performance of the SLAM system. However, the existing deep learning-based methods are mainly supervised and require a large amount of ground-truth depth data, which is hard to acquire due to realistic constraints. In this paper, we first present an unsupervised learning framework that not only uses image reconstruction for supervision but also exploits pose estimation to strengthen the supervision signal and add training constraints for the task of monocular depth and camera motion estimation. Furthermore, we successfully exploit our unsupervised learning framework to assist the traditional ORB-SLAM system when its initialization module cannot match enough features. Qualitative and quantitative experiments show that our unsupervised learning framework performs the depth estimation task comparably to the supervised methods and outperforms the previous state-of-the-art approach by 13.5% on the KITTI dataset. Besides, our unsupervised learning framework significantly accelerates the initialization process of the ORB-SLAM system and effectively improves the accuracy of the estimated environmental depth information.
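The image-reconstruction supervision described above is in the spirit of standard view-synthesis losses for unsupervised depth and ego-motion learning; a minimal PyTorch sketch of such a photometric loss follows. The tensor shapes, the 4x4 pose convention, and the function name are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def photometric_loss(I_t, I_s, depth_t, T_t2s, K):
    """I_t, I_s: (B,3,H,W) target/source frames; depth_t: (B,1,H,W) predicted
    depth; T_t2s: (B,4,4) target-to-source pose; K: (B,3,3) intrinsics."""
    B, _, H, W = I_t.shape
    dev = I_t.device
    # Homogeneous pixel grid, shared across the batch.
    ys, xs = torch.meshgrid(torch.arange(H, device=dev),
                            torch.arange(W, device=dev), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float()
    pix = pix.view(1, 3, -1).expand(B, 3, -1)
    # Back-project into the target camera, move to the source, re-project.
    cam = torch.linalg.inv(K) @ pix * depth_t.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones_like(cam[:, :1])], dim=1)
    src = K @ (T_t2s @ cam_h)[:, :3]
    uv = src[:, :2] / src[:, 2:3].clamp(min=1e-6)
    # Normalize to [-1, 1] and sample the source frame at the warped locations.
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).view(B, H, W, 2)
    I_rec = F.grid_sample(I_s, grid, align_corners=True)
    return (I_t - I_rec).abs().mean()   # L1 photometric reconstruction error
```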
We introduce a Particle Metropolis-Hastings algorithm driven by several parallel particle filters. The communication with the central node requires the transmission of only a set of weighted samples, one per filter. Furthermore, the marginal version of this scheme, called the distributed Particle Marginal Metropolis-Hastings (DPMMH) method, is also presented. DPMMH can be used for making inference on both dynamic and static variables of interest. Ergodicity is guaranteed, and numerical simulations show the advantages of the novel schemes.
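A schematic NumPy sketch of the underlying mechanism follows: each of several particle filters returns an unbiased marginal-likelihood estimate, and their average drives the Metropolis-Hastings accept/reject step. The filters are run in a plain loop here; in the distributed scheme each would run on its own node and ship back only its weighted samples. All model hooks (`init`, `transition`, `loglik`, `propose`) are user-supplied placeholders.

```python
import numpy as np

def particle_filter_loglik(theta, y, N, init, transition, loglik):
    """Bootstrap filter; returns a log estimate of p(y | theta)."""
    x = init(theta, N)                      # (N,) initial particles
    total = 0.0
    for y_t in y:
        x = transition(theta, x)            # propagate particles
        logw = loglik(theta, x, y_t)        # weight by the observation
        m = logw.max()
        w = np.exp(logw - m)
        total += m + np.log(w.mean())       # running log-likelihood estimate
        x = x[np.random.choice(N, N, p=w / w.sum())]  # multinomial resampling
    return total

def pmmh(y, theta0, n_iters, n_filters, N, propose, log_prior, model):
    """model: (init, transition, loglik) tuple of user-supplied functions."""
    def avg_loglik(th):
        # Average the M filters' likelihood estimates (logsumexp for stability).
        lls = np.array([particle_filter_loglik(th, y, N, *model)
                        for _ in range(n_filters)])
        return np.logaddexp.reduce(lls) - np.log(n_filters)

    theta, ll, chain = theta0, None, []
    ll = avg_loglik(theta)
    for _ in range(n_iters):
        th_new = propose(theta)
        ll_new = avg_loglik(th_new)
        if np.log(np.random.rand()) < (ll_new - ll
                                       + log_prior(th_new) - log_prior(theta)):
            theta, ll = th_new, ll_new      # accept the proposed parameter
        chain.append(theta)
    return chain
```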
ISBN (digital): 9781728143286
ISBN (print): 9781728143293
Community detection is a fundamental step in social network analysis. In this study, we focus on the hybrid identification of overlapping communities using network topology (e.g., user interaction) and edge-induced content (e.g., user messages). Conventional hybrid methods tend to combine topology and node-induced content to explore disjoint communities, but fail to integrate edge-induced content and to derive overlapping communities. Moreover, although several semantic community detection approaches have been developed that generate a semantic description for each community alongside the partition result, they can only incorporate word-level content and thus derive coarse-grained descriptions. We propose a novel Semantic Link Community Detection (SLCD) model based on non-negative matrix factorization (NMF). It adopts the perspective of link communities to integrate topology and edge-induced content, where the partitioned link communities can be further transformed into an overlapping node-induced membership. Moreover, SLCD incorporates both word-level and sentence-level content while considering the diversity of user messages. Hence, it can also derive strongly interpretable, multi-view community descriptions as soon as the community partition is finished. SLCD's superior performance and powerful application ability over other state-of-the-art competitors are demonstrated in experiments on a series of real social networks.
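SLCD's full objective (topology plus word- and sentence-level content terms) is richer than plain NMF, but the core machinery can be illustrated as below: factorize a nonnegative link-similarity matrix with multiplicative updates, then fold the edge-community memberships back onto nodes to obtain overlapping node memberships. The matrix names and the incidence-based folding are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def nmf(A, k, n_iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for min ||A - WH||_F^2 with W, H >= 0."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, k)), rng.random((k, n))
    for _ in range(n_iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

def node_membership(W_edges, incidence):
    """Fold edge-community strengths back onto nodes, yielding overlapping
    memberships. incidence: (n_nodes, n_edges) 0/1 node-edge incidence matrix."""
    M = incidence @ W_edges                 # node x community strengths
    return M / (M.sum(axis=1, keepdims=True) + 1e-9)
```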
A comparison between the processing times of a standard serial algorithm and a parallel algorithm using a Graphics Processing Unit (GPU) is presented here. The analysis was performed on an image of the Shepp–Logan phantom obtained in a computed tomography (CT) study. Time, number of iterations, and the structural similarity index (SSIM) were used to quantify the quality of the images reconstructed by both methods. Parallel algorithms developed in CUDA can reconstruct images in times two orders of magnitude shorter than traditional methods, while keeping the same image quality. In conclusion, this system presents a cost-effective setup for CT image reconstruction, viable for institutions with limited resources and countries of the developing world.
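The evaluation protocol (iterative reconstruction scored by time, iteration count, and SSIM) can be reproduced in miniature with scikit-image; the sketch below times a serial SART reconstruction of the Shepp–Logan phantom as the kind of baseline a CUDA implementation would be compared against. The iteration count and angle sampling are arbitrary choices.

```python
import time
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon_sart
from skimage.metrics import structural_similarity

phantom = shepp_logan_phantom()                      # reference image
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=theta)               # simulated CT projections

t0 = time.perf_counter()
recon = iradon_sart(sinogram, theta=theta)           # first SART pass
for _ in range(2):                                   # a few more iterations
    recon = iradon_sart(sinogram, theta=theta, image=recon)
elapsed = time.perf_counter() - t0

ssim = structural_similarity(phantom, recon,
                             data_range=phantom.max() - phantom.min())
print(f"serial SART: {elapsed:.2f}s over 3 iterations, SSIM = {ssim:.4f}")
```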
Image clustering is one of the most challenging tasks in machine learning and has been used extensively in various applications. Recently, various deep clustering methods have been proposed. These methods take a two-stage approach, feature learning and clustering, carried out either sequentially or jointly. We observe that these works usually focus on the combination of reconstruction loss and clustering loss, while relatively little work has focused on improving the learned representation of the neural network for clustering. In this paper, we propose a deep convolutional embedded clustering algorithm with an inception-like block (DCECI). Specifically, an inception-like block with different types of convolution filters is introduced into the symmetric deep convolutional network to preserve the local structure of the convolution layers. We simultaneously minimize the reconstruction loss of the convolutional autoencoder with the inception-like block and the clustering loss. Experimental results on multiple image datasets exhibit the promising performance of our proposed algorithm compared with other competitive methods.
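As an illustration of what an inception-like block inside a convolutional autoencoder might look like (the channel splits and the loss weighting are assumptions, not the paper's exact configuration), consider this PyTorch sketch:

```python
import torch
import torch.nn as nn

class InceptionLikeBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions whose outputs are concatenated,
    preserving local structure at multiple scales."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch - 2 * branch_ch, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Concatenate multi-scale responses along the channel axis.
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

def joint_loss(x, x_rec, q, p, gamma=0.1):
    """Reconstruction loss plus weighted clustering loss (gamma is assumed)."""
    rec = nn.functional.mse_loss(x_rec, x)
    clu = nn.functional.kl_div(q.log(), p, reduction="batchmean")  # KL(P || Q)
    return rec + gamma * clu
```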
ISBN (print): 9781728111421; 9781728111414
Red blood cell segmentation in microscopic images is the first step for various clinical studies carried out on blood samples, such as cell counting and cell shape identification. Conventional methods, while often showing high accuracy, depend heavily on the acquisition modality. Deep learning approaches have been shown to be more robust to such modalities while achieving comparable accuracy. In this paper, we first investigate the steps necessary to apply a specific type of deep learning method, namely fully convolutional networks, to red blood cell segmentation. Based on the data given and the constraints imposed by our partners, mainly regarding a high data throughput, we then describe an exemplary application. First results show that, even with a focus on high performance, a good accuracy above 90% can be reached.
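A deliberately small fully convolutional network of the kind the paper investigates might look like the PyTorch sketch below: an encoder-decoder mapping a microscopy image to a per-pixel cell probability. The depth and channel widths are placeholders chosen for throughput, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        def conv(i, o):
            return nn.Sequential(nn.Conv2d(i, o, 3, padding=1),
                                 nn.BatchNorm2d(o), nn.ReLU(inplace=True))
        self.enc1, self.enc2 = conv(3, 16), conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # restore resolution
        self.dec = conv(16, 16)
        self.head = nn.Conv2d(16, 1, 1)   # 1-channel cell/background logit map

    def forward(self, x):
        x = self.enc2(self.pool(self.enc1(x)))
        x = self.dec(self.up(x))
        return torch.sigmoid(self.head(x))  # per-pixel probability of "cell"
```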
ISBN (print): 9781728111421; 9781728111414
Searching is a crucial, time-consuming part of many programs, and using a good search method instead of a bad one often leads to a substantial increase in performance. Hash tries are a trie-based data structure with nearly ideal characteristics for the implementation of hash maps. In this paper, we present a novel, simple, concurrent hash map design that fully supports concurrent search, insert, and remove operations on hash tries designed to store sorted keys. To the best of our knowledge, ours is the first concurrent hash map design that combines the following characteristics: (i) fixed-size data structures; (ii) persistent memory references; (iii) lock-freedom; and (iv) sorted keys. Experimental results show that our design is quite competitive when compared against other state-of-the-art designs implemented in Java.
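The paper's design is a lock-free Java implementation; the sequential Python sketch below only illustrates the hash trie layout it builds on, where successive chunks of a key's hash index into fixed-size nodes and collisions push entries one level deeper. The concurrent insert/search/remove protocol and sorted-key storage are not reproduced here.

```python
W = 3                     # bits consumed per trie level -> 8-ary nodes
FANOUT = 1 << W

class Node:
    __slots__ = ("slots",)
    def __init__(self):
        self.slots = [None] * FANOUT   # each slot: None, (key, value), or Node

def insert(root, key, value, level=0):
    # Assumes distinct keys have distinct hashes within the bits consumed.
    idx = (hash(key) >> (level * W)) & (FANOUT - 1)
    slot = root.slots[idx]
    if slot is None:
        root.slots[idx] = (key, value)
    elif isinstance(slot, Node):
        insert(slot, key, value, level + 1)
    elif slot[0] == key:
        root.slots[idx] = (key, value)       # overwrite existing key
    else:                                    # collision: split into a child
        child = Node()
        root.slots[idx] = child
        k2, v2 = slot
        insert(child, k2, v2, level + 1)
        insert(child, key, value, level + 1)

def lookup(root, key, level=0):
    node = root
    while isinstance(node, Node):            # descend by successive hash chunks
        node = node.slots[(hash(key) >> (level * W)) & (FANOUT - 1)]
        level += 1
    return node[1] if node and node[0] == key else None
```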