ISBN:
(Print) 9781728152103
Recently, tremendous advances have been achieved in image denoising thanks to novel convolutional neural network architectures. An effective and efficient multi-level architecture for image denoising restores the latent clean image from a coarser scale to finer scales and passes features through multiple levels of the model. Unfortunately, the bottleneck of applying a multi-level architecture is that multi-scale information from the input image is not effectively captured and the fine-to-coarse feature fusion strategy is ignored in the image denoising task. To solve these problems, we propose a multi-scale & multi-level shuffle-CNN via multi-level attention (DnM3Net), which plugs multi-scale feature extraction, a fine-to-coarse feature fusion strategy, and a multi-level attention module into a new network architecture for image denoising. The advantages of this approach are two-fold: (1) it solves the multi-scale information extraction issue of multi-level network architectures, making them more effective and efficient for the image denoising task; (2) it achieves impressive performance thanks to a better trade-off between denoising and detail preservation. The proposed network architecture is validated on grayscale and RGB images with synthetic Gaussian noise. Experimental results show that DnM3Net effectively improves quantitative metrics and visual quality compared to state-of-the-art denoising methods.
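The coarse-to-fine restoration idea described in this abstract can be sketched without any learned components. The sketch below builds an image pyramid, denoises the coarsest level first, and blends each upsampled coarse estimate with a per-level estimate at the next finer scale; the box filter, the function names, and the `blend` parameter are illustrative assumptions, not part of DnM3Net:

```python
import numpy as np

def box_denoise(img, k=3):
    """Simple box-filter smoothing as a stand-in for a learned denoiser."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def downsample(img):
    # Keep every other row/column to form the next coarser level.
    return img[::2, ::2]

def upsample(img, shape):
    # Nearest-neighbour upsampling back to the finer level's shape.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def coarse_to_fine_denoise(noisy, levels=3, blend=0.5):
    """Restore the coarsest scale first, then propagate the result
    to finer scales, fusing it with a per-level denoised estimate."""
    pyramid = [noisy]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    restored = box_denoise(pyramid[-1])          # coarsest level
    for level in reversed(pyramid[:-1]):         # progressively finer levels
        coarse_up = upsample(restored, level.shape)
        restored = blend * coarse_up + (1 - blend) * box_denoise(level)
    return restored

rng = np.random.default_rng(0)
clean = np.ones((16, 16))
noisy = clean + rng.normal(0.0, 0.5, clean.shape)
restored = coarse_to_fine_denoise(noisy)
```

In a real multi-level network each `box_denoise` call would be a learned sub-network and the fixed `blend` would be replaced by learned fusion (e.g. the attention module the abstract describes); the control flow above only illustrates the coarse-to-fine scheduling.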
Deep learning (DL) has emerged as a key application exploiting the increasing computational power in systems such as GPUs, multicore processors, Systems-on-Chip (SoC), and distributed clusters. It has also attracted much attention for discovering correlation patterns in data in an unsupervised manner and has been applied in various domains including speech recognition, image classification, natural language processing, and computer vision. Unlike traditional machine learning (ML) approaches, DL also enables dynamic discovery of features from data. In addition, a number of commercial vendors (such as Nvidia, Intel, and Huawei) now offer accelerators for deep learning systems.
Several optimization criteria are involved in the job shop scheduling problems encountered in engineering. Multi-objective optimization algorithms are often applied to solve these problems, which become even ...
AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted at optimizing the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service), which allows running AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP across the diversity of ATLAS event-processing workloads on various computing resources: Grid, opportunistic resources and HPC.