Noise in astronomical images significantly impacts observations and analyses. Traditional denoising methods, such as increasing exposure time and image stacking, are limited when dealing with single-shot images or studying rapidly changing astronomical objects. To address this, we developed a novel deep-learning denoising model, CoaddNet, designed to improve the image quality of single-shot images and enhance the detection of faint sources. To train and validate the model, we constructed a dataset containing high and low signal-to-noise ratio (SNR) images, comprising coadded and single-shot types. CoaddNet combines the efficiency of convolutional operations with the advantages of the Transformer architecture, enhancing spatial feature extraction through a multi-branch structure and reparameterization techniques. Performance evaluation shows that CoaddNet surpasses the baseline model, NAFNet, by increasing the Peak Signal-to-Noise Ratio (PSNR) by 0.03 dB and the Structural Similarity Index (SSIM) by 0.005, while also improving throughput by 35.18%. The model significantly improves the SNR of single-shot images, with an average increase of 22.8, surpassing the noise reduction achieved by stacking 70-90 images. By boosting the SNR, CoaddNet significantly enhances the detection of faint sources, enabling SExtractor to detect an additional 22.88% of faint sources. Meanwhile, CoaddNet reduces the Mean Absolute Percentage Error (MAPE) of flux measurements for detected sources by at least 27.74%.
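As a rough illustration of the PSNR/SSIM comparison described above, the following hedged sketch evaluates a denoised single-shot frame against a high-SNR coadd reference; the denoise callable and the input arrays are placeholders, not CoaddNet itself.

    # Sketch: score a denoised single-shot image against a coadded reference
    # using PSNR and SSIM (scikit-image). Inputs are assumed to be 2-D float
    # arrays covering the same field; `denoise` is a hypothetical callable.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_denoising(single_shot, coadd_reference, denoise):
        denoised = denoise(single_shot)
        # astronomical images are floating point, so the data range must be given
        data_range = float(coadd_reference.max() - coadd_reference.min())
        psnr = peak_signal_noise_ratio(coadd_reference, denoised, data_range=data_range)
        ssim = structural_similarity(coadd_reference, denoised, data_range=data_range)
        return psnr, ssim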
High performance computing has been used in various fields of astrophysical research, but most of it is implemented on massively parallel systems (supercomputers) or graphical processing unit clusters. With the advent of multicore processors in the last decade, many serial software codes have been re-implemented in parallel mode to utilize the full potential of these processors. In this paper, we propose parallel processing recipes for multicore machines for astronomical data processing. The target audience is astronomers who use Python as their preferred scripting language and who may be using PyRAF/IRAF for data processing. Three problems of varied complexity were benchmarked on three different types of multicore processors to demonstrate the benefits, in terms of execution time, of parallelizing data processing tasks. The native multiprocessing module available in Python makes it a relatively trivial task to implement the parallel code. We have also compared the three multiprocessing approaches: Pool/Map, Process/Queue, and Parallel Python. Our test codes are freely available and can be downloaded from our website. (c) 2013 Elsevier B.V. All rights reserved.
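The Pool/Map approach named above maps one worker function over a list of independent inputs, one per core. A minimal sketch for a per-file reduction step follows; the file pattern and the body of reduce_frame are placeholders for whatever per-image task is being parallelized.

    # Sketch of the Pool/Map pattern: fan a per-file task out across all cores
    # using only the standard-library multiprocessing module.
    import glob
    from multiprocessing import Pool, cpu_count

    def reduce_frame(path):
        # placeholder for the actual per-file processing (calibration, photometry, ...)
        return path, "done"

    if __name__ == "__main__":
        frames = glob.glob("raw/*.fits")          # hypothetical input location
        with Pool(processes=cpu_count()) as pool:
            results = pool.map(reduce_frame, frames)
        print(f"processed {len(results)} frames")

The Process/Queue approach trades this convenience for finer control over individual worker processes, while Parallel Python applies the same pattern across several machines.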
ISBN (digital): 9783030280611
ISBN (print): 9783030280611; 9783030280604
In modern astronomy, Short-Timescale and Large Field-of-view (STLF) sky surveys produce large volumes of data and face a great challenge in cross identification. Furthermore, transient survey projects are required to select candidates quickly from these large data volumes, yet traditional cross identification methods do not meet the demands of transient survey observations. We present a fast and efficient cross identification system for large-scale astronomical data streams. Receiving a high-frequency star catalog and maintaining a local star catalog, the system partitions the star catalog and cross-identifies it with the object catalog. A coding strategy is used to manage the unique IDs of all-sky stars. After processing the data, all results are stored in Redis and used to generate light curves. Our experiments show that the method meets strict performance requirements and achieves good recognition accuracy on a fast real-time sky survey project. Additionally, our system performs well in low-latency processing of large volumes of astronomical data and has been successfully applied in the Ground-based Wide Angle Camera (GWAC) online data processing pipeline.
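For orientation only, the sketch below shows the core operation such a system performs: matching an incoming catalog against a reference catalog by angular separation. It is not the GWAC pipeline; the catalog partitioning, ID coding and Redis storage described above are omitted, and the column names and 2-arcsecond radius are assumptions.

    # Sketch: nearest-neighbour cross identification of an incoming catalog
    # against a reference catalog using astropy's KD-tree sky matching.
    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord, match_coordinates_sky

    def cross_identify(new_ra, new_dec, ref_ra, ref_dec, radius_arcsec=2.0):
        # return, for each new source, the index of its reference match, or -1
        new = SkyCoord(ra=new_ra * u.deg, dec=new_dec * u.deg)
        ref = SkyCoord(ra=ref_ra * u.deg, dec=ref_dec * u.deg)
        idx, sep2d, _ = match_coordinates_sky(new, ref)
        matched = sep2d < radius_arcsec * u.arcsec
        return np.where(matched, idx, -1)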
Astronomical images from optical photometric surveys are typically contaminated with transient artifacts such as cosmic rays, satellite trails and scattered light. We have developed and tested an algorithm that removes these artifacts using a deep, artifact-free, static-sky coadd image built up through the median combination of point spread function (PSF) homogenized, overlapping single-epoch images. Transient artifacts are detected and masked in each single-epoch image through comparison with an artifact-free, PSF-matched simulated image that is constructed using the PSF-corrected, model-fitting catalog from the artifact-free coadd image together with the position-variable PSF model of the single-epoch image. This approach works well not only for cleaning single-epoch images with worse seeing than the PSF-homogenized coadd, but also for the traditionally much more challenging problem of cleaning single-epoch images with better seeing. In addition to masking transient artifacts, we have developed an interpolation approach that uses the local PSF and performs well in removing artifacts whose widths are smaller than the PSF full width at half maximum, including cosmic rays, the peaks of saturated stars and bleed trails. We have tested this algorithm on Dark Energy Survey Science Verification data and present performance metrics. More generally, our algorithm can be applied to any survey which images the same part of the sky multiple times. (C) 2016 Elsevier B.V. All rights reserved.
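A very reduced sketch of the masking idea described above is given below: pixels in a single-epoch image that deviate strongly from an artifact-free, PSF-matched model of the same field are flagged. The 5-sigma threshold and the flat sky-noise model are illustrative assumptions, not the paper's actual detection criteria.

    # Sketch: flag pixels whose residual against a PSF-matched model image
    # exceeds a noise threshold; such pixels would then be masked/interpolated.
    import numpy as np

    def mask_transient_artifacts(single_epoch, model_image, sky_sigma, nsigma=5.0):
        residual = single_epoch - model_image
        # cosmic rays, satellite trails and bleed trails leave positive residuals
        return residual > nsigma * sky_sigma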
One of the purposes of a Virtual Observatory is to facilitate data sharing. Data products from the Solar Multi-Channel Telescope and the Solar Radio Telescope at Huairou Solar Observing Station, Beijing, are not only used for solar research but also for solar activity and space environment predictions. To provide these services, we have exploited a number of technologies, which we discuss in this article. These include the setting up of a WWW server, a local area network, a network security facility, data processing software, etc. We discuss the implementation of a Virtual Solar Observatory (VSO) and show how it meets various user requirements for unified international metadata. We also discuss future plans for further development of the system.
ISBN (print): 0819446157
The Sloan Digital Sky Survey (SDSS) data handling presents two challenges: large data volume and timely production of spectroscopic plates from imaging data. A data processing factory, using technologies both old and new, handles this flow. Distribution to end users is via disk farms, to serve corrected images and calibrated spectra, and a database, to efficiently process catalog queries. For distribution of modest amounts of data from Apache Point Observatory to Fermilab, scripts use rsync to update files, while larger data transfers are accomplished by shipping magnetic tapes commercially. All data processing pipelines are wrapped in scripts to address consecutive phases: preparation, submission, checking, and quality control. We constructed the factory by chaining these pipelines together while using an operational database to hold processed imaging catalogs. The science database catalogs all imaging and spectroscopic objects, with pointers to the various external files associated with them. Diverse computing systems address particular processing phases. UNIX computers handle tape reading and writing, as well as calibration steps that require access to a large amount of data with relatively modest computational demands. Commodity CPUs process steps that require access to a limited amount of data with more demanding computational requirements. Disk servers optimized for cost per Gbyte serve terabytes of processed data, while servers optimized for disk read speed run SQLServer software to process queries on the catalogs. This factory produced data for the SDSS Early Data Release in June 2001, and it is currently producing Data Release One, scheduled for January 2003.
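The phase structure described above (preparation, submission, checking, quality control) can be pictured with the hedged sketch below; the phase executables and naming convention are hypothetical stand-ins, not the actual SDSS factory scripts.

    # Sketch: drive one pipeline through four consecutive phases, stopping on
    # the first failure so operators can inspect the run.
    import subprocess

    PHASES = ["prepare", "submit", "check", "qa"]

    def run_pipeline(pipeline, run_id):
        for phase in PHASES:
            cmd = [f"{pipeline}-{phase}", str(run_id)]   # e.g. "frames-prepare 1234"
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                raise RuntimeError(f"{pipeline} {phase} failed: {result.stderr}")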