X-ray diffraction data processing proceeds through indexing, pre-refinement of camera parameters and crystal orientation, intensity integration, post-refinement, and scaling. The DENZO program has set new standards for autoindexing, but no publication has appeared which describes the algorithm. In the development of the new Data Processing Suite (DPS), one of the first aims has been the development of an autoindexing procedure at least as powerful as that used by DENZO. The resulting algorithm will be described. Another major problem which has arisen in recent years is the scaling and post-refinement of data from different images when there are few, if any, full reflections. This occurs when the mosaic spread approaches or exceeds the oscillation angle, as is usually the case for frozen crystals. A procedure which obtains satisfactory results in such a situation will be described.
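One common Fourier-analysis idea behind such autoindexing can be sketched as follows (an illustrative toy, not the DPS or DENZO code): projections of the observed reciprocal-lattice vectors onto a trial direction are periodic when that direction corresponds to a real-space cell axis, and the discrete Fourier transform of a histogram of those projections peaks at the corresponding frequency.

```python
import numpy as np

def fundamental_frequency(proj, max_x=1.0, n_bins=512):
    """Return the smallest strong Fourier frequency (cycles per
    reciprocal-space unit) in a histogram of projected vectors."""
    hist, _ = np.histogram(proj, bins=n_bins, range=(-max_x, max_x))
    mag = np.abs(np.fft.rfft(hist))
    mag[0] = 0.0                                # discard the DC term
    strong = np.nonzero(mag > 0.5 * mag.max())[0]
    return strong[0] / (2.0 * max_x)            # index k -> k/(2*max_x) cycles/unit

# Synthetic row of reciprocal-lattice projections spaced 0.1 units apart
proj = np.arange(-0.9, 0.95, 0.1)
freq = fundamental_frequency(proj)              # -> 10.0 cycles per unit
spacing = 1.0 / freq                            # -> 0.1, the lattice spacing
```

In a real indexing run this scan would be repeated over many trial directions, and the strongest periodicities combined into candidate basis vectors of the crystal lattice.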
In the era of big data, processing information stored in multilinear arrays is a challenge. In this paper, a parallel singular value decomposition algorithm based on the one-sided Jacobi method is proposed, and a data-processing model combining a long short-term memory (LSTM) network with a parallel tensor-train decomposition algorithm is constructed. The results show that the parallel efficiency of the algorithm reaches 0.95 on 30 cores, the compression ratio is 10, and the accuracy and recall are 0.98 and 0.96, respectively. On the ImageNet dataset, all model metrics exceed 0.9, showing excellent performance. The research not only improves the efficiency of data processing but also provides new solutions for high-dimensional data analysis, especially for feature extraction and dimensionality reduction. Combined with the advantages of LSTM in processing time-series data, the overall performance of the model is improved.
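A serial reference version of the one-sided Jacobi SVD at the heart of such a parallel algorithm can be sketched as follows (an illustrative implementation, not the authors' parallel code): column pairs are rotated until all columns are mutually orthogonal, after which the column norms are the singular values.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Serial one-sided Jacobi SVD: rotate column pairs of A until all
    columns are mutually orthogonal; then A = U @ np.diag(s) @ Vt."""
    A = np.array(A, dtype=float)
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue                     # pair already orthogonal
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                if zeta == 0.0:
                    t = 1.0                      # 45-degree rotation
                else:
                    t = np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                for M in (A, V):                 # rotate columns p and q
                    mp, mq = M[:, p].copy(), M[:, q].copy()
                    M[:, p] = c * mp - s * mq
                    M[:, q] = s * mp + c * mq
        if converged:
            break
    sing = np.linalg.norm(A, axis=0)             # singular values (unsorted)
    U = A / sing
    return U, sing, V.T

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))
U, s, Vt = one_sided_jacobi_svd(A)               # (U * s) @ Vt reconstructs A
```

The algorithm parallelizes naturally because disjoint column pairs can be rotated concurrently within a sweep, which is the property the paper's parallel variant exploits.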
Fifty-one papers by various authors deal with management, business, and technical aspects of the data processing industry; major topics are problems and development in software, new programming applications, systems analysis techniques, management and control of computer-based operations, education and personnel management, and legal aspects. The following papers were presented -- Operating Systems and Multi-Programming Overview, ***; Continuous Education Needed, ***; Problems of Evaluating Operating Systems, M. Trachtenberg; GENESIS -- Compiler-Compiler, R. ***; Automated Management Information Systems, L.L. Cook, ***; U.S. Army RAPID System, ***; Time-Shared Data Management System -- New Approach to Data Management, ***; COBOL Standards (USASI), ***, ***; Why Another Programming Language, P. ***; PL/1 -- Pros and Cons, ***; Systems Investigation, ***; Unified Operations Management, A.O. Putnam; Design Techniques -- with Emphasis on Real-Time, M. Kornbluh; Computer Time Allocation -- What's Fair and Practical, ***; Planning and Control for Computer Systems Projects, ***; Computerized Techniques of Project Management, ***; Private EDP Schools -- Some Observations, ***; Private Data Processing School, ***; Evolution of a "Revolution", ***; Man, Machine, Systems and Education, ***; Focus on Programming Education, ***; What Management Should Know about Systems, ***; Planning of Data Systems, ***, ***; Scheduling Work Flow in Third-Generation Environment, ***; Location Planning for Data Processing Facility, ***; Managing Systems and Programming Organization, ***; Programming Training Programs, ***; Organization of In-House Education, ***; Producing Programmed Instruction, ***; Selection, Hiring and Training of Programmers, ***; Scheduling in a Research Environment, ***; Chrysler's Parts Division Computer System.
This study aims to model the seismic reflection responses of channel, bioherm, and anticlinal hydrocarbon traps, which are important in hydrocarbon exploration, in the shot domain and to obtain migration sections. The model data were calculated with the two-dimensional acoustic finite-difference method because of the ease and flexibility with which arbitrary trap models can be defined. The effects of the geometry and discontinuities of the trap structures on the reflected waves were examined, the extent to which data processing applications addressed the noise problems was tested, the initial geological model and zero-offset sections were compared, and the causes of the problems were discussed. Consequently, in hydrocarbon exploration in complex geological environments, pre-stack shot data modeling, rather than post-stack modeling, contributes to the development of data processing workflows with optimal processing parameters. This approach significantly improves the reliability of interpretations by increasing the consistency of seismic sections with the real geologic structure. In addition, it reduces the risks in determining hydrocarbon reserves in complex geological environments, enables safer and more efficient resource extraction, and provides the basis for sustainable and economically viable innovative approaches in the energy sector.
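The core of such 2-D acoustic finite-difference modeling can be sketched in a few lines (an illustrative second-order scheme on a regular grid, not the authors' code; the grid size, velocity, and source wavelet below are assumptions for the demonstration):

```python
import numpy as np

def acoustic_fd_2d(c, dx, dt, n_steps, src_iz, src_ix, wavelet):
    """Second-order 2-D acoustic wave equation p_tt = c^2 * laplacian(p):
    leapfrog in time, 5-point Laplacian in space (stable for c*dt/dx <= 1/sqrt(2),
    with reflecting zero-pressure boundaries)."""
    p_prev = np.zeros_like(c)
    p = np.zeros_like(c)
    coef = (c * dt / dx) ** 2
    for it in range(n_steps):
        lap = np.zeros_like(p)
        lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] +
                           p[1:-1, 2:] + p[1:-1, :-2] - 4.0 * p[1:-1, 1:-1])
        p_next = 2.0 * p - p_prev + coef * lap
        if it < len(wavelet):                  # inject the source wavelet
            p_next[src_iz, src_ix] += wavelet[it]
        p_prev, p = p, p_next
    return p

# Constant-velocity demo with a Ricker wavelet source at the grid center
c = np.full((101, 101), 2000.0)                # velocity model, m/s
dt, dx, f0 = 0.002, 10.0, 25.0                 # s, m, Hz (CFL number = 0.4)
t = np.arange(100) * dt
arg = (np.pi * f0 * (t - 0.04)) ** 2
wavelet = (1.0 - 2.0 * arg) * np.exp(-arg)     # Ricker wavelet
snap = acoustic_fd_2d(c, dx, dt, 300, 50, 50, snapshot_wavelet := wavelet)
```

Replacing the constant model with a velocity grid containing a channel, bioherm, or anticline, and recording the top row of the wavefield at each step, yields the synthetic shot gathers that the study processes and migrates.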
With the launch of NISAR, available open-access Earth Observation data reaches a new milestone, with an estimated ~70 terabytes of NISAR science data produced daily. This additional data creates many opportunities within the science and user community for new and more in-depth analyses, yet at the same time most organizations lack the budget, resources, and skills to analyze all this data on-premise. This article introduces concepts on how SAR processing software from commercial and open-source providers can be coupled with highly scalable cloud processing techniques to digest SAR data at various input levels (from raw to value-added products). Included are examples of the global-scale processing of Sentinel-1 data for InSAR coherence estimation, the input to global biomass modeling, and the routine operational processing of SAR time-series data for national and international near-real-time flood mapping. Finally, we introduce SEPPO, the cloud-scaling Software for Earth big data processing, Prediction Modeling, and Organization, which is used to meet these processing challenges on Amazon Web Services (AWS) for large-volume, operational SAR data processing in complex workflows.
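The InSAR coherence estimation mentioned above reduces, per estimation window, to the normalized cross-correlation of two co-registered single-look complex (SLC) images. A minimal single-patch sketch (an assumed illustration; production processors compute this over a moving window across the full scene):

```python
import numpy as np

def coherence(s1, s2):
    """Coherence magnitude |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>) of two
    co-registered complex SLC patches (1.0 = identical up to a phase)."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(0)
s1 = rng.normal(size=1000) + 1j * rng.normal(size=1000)
g_same = coherence(s1, s1 * np.exp(1j * 0.3))   # constant phase shift -> 1.0
s2 = rng.normal(size=1000) + 1j * rng.normal(size=1000)
g_noise = coherence(s1, s2)                      # independent speckle -> near 0
```

High coherence indicates stable scatterers between acquisitions, which is why global coherence maps are a useful input to biomass and change modeling.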
On August 5, 2022, South Korea's first lunar probe, the Korea Pathfinder Lunar Orbiter (KPLO, DANURI), was launched, and it entered lunar orbit in December of the same year. The KPLO is currently conducting a mission with the KPLO Gamma-Ray Spectrometer (KGRS), developed by the Korea Institute of Geoscience and Mineral Resources (KIGAM). The primary objective of the KGRS is to collect gamma-ray spectral information on the lunar surface and create an elemental map. This paper discusses the basic processing steps for the KGRS data currently being acquired in lunar orbit and the data processing results for approximately one year. KGRS data is currently maintained in two categories: TM2 data and TM3 data. We are also preparing the calibration (CAL) data. In the TM2 stage, time conversion and correction are performed. In the TM3 stage, the SPICE (Spacecraft Planet Instrument C-matrix Events) system is used to obtain satellite positional information. All KGRS data transmitted to KIGAM is monitored daily. The monitoring program displays engineering data, such as device temperature, and science data, such as gamma-ray counts. Monitoring gamma-ray counts allows us to detect events such as solar flares or gamma-ray bursts that could affect the data. Because such data do not represent the lunar surface, the affected time intervals should be excluded from the mapping data. To verify the result of the data selection, we compare the gamma-ray count maps before and after the exclusion of these data.
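The exclusion step described above can be sketched as a simple robust outlier test on the binned count rate (a hypothetical illustration, not the KIGAM pipeline; the threshold constant is an assumption):

```python
import numpy as np

def quiet_time_mask(counts, k=5.0):
    """True for time bins to keep for mapping: total count rate within
    k robust standard deviations (1.4826 * MAD) of the median. Spikes
    from solar flares or gamma-ray bursts are flagged False."""
    med = np.median(counts)
    mad = np.median(np.abs(counts - med))
    return counts <= med + k * 1.4826 * mad

counts = np.array([100., 102., 98., 101., 99., 100., 103., 5000., 97., 100.])
mask = quiet_time_mask(counts)   # only the 5000-count flare bin is excluded
```

Bins flagged False would be dropped before accumulating spectra into the elemental map, which is exactly the before/after comparison the paper uses for verification.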
Authors: Wang, Menghua; Jiang, Lide
Affiliations: NOAA Center for Satellite Applications and Research, National Environmental Satellite, Data, and Information Service, College Park, MD 20740, USA; Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, CO 80523, USA
The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the NOAA-21 satellite was launched in November 2022 as a new VIIRS instrument added to the constellation of the Joint Polar Satellite System (JPSS) mission, which includes the Suomi National Polar-orbiting Partnership (SNPP) (October 2011 to present) and NOAA-20 (November 2017 to present). For satellite ocean color (OC) remote sensing, on-orbit system vicarious calibration (SVC) for deriving sensor spectral gain factors must be carried out. In this article, we document our work to obtain SVC gains for the three VIIRS sensors covering the visible, near-infrared (NIR), and shortwave infrared (SWIR) spectral bands using a consistent NIR-SWIR SVC approach. Specifically, we derive SVC gains using the in situ normalized water-leaving radiance nLw(lambda) spectra from the Marine Optical Buoy (MOBY) in the Hawaii ocean region with the updated NOAA Multi-Sensor Level-1 to Level-2 (MSL12) data processing system. For the VIIRS moderate (M) resolution and imaging (I) bands M1-M4, I1, M5-M8, M10, and M11, the SVC gain sets for SNPP, NOAA-20, and NOAA-21 are (0.9752, 0.9732, 0.9772, 0.9685, 1.0090, 0.9750, 0.9765, 1.0000, 1.0050, 0.9960, and 1.0230), (1.0044, 1.0098, 1.0051, 1.0073, 1.0301, 1.0136, 1.0052, 1.0000, 1.0435, 1.0235, and 1.0330), and (1.0284, 1.0317, 1.0165, 1.0231, 1.0236, 1.0137, 1.0051, 1.0000, 0.8982, 0.8779, and 0.8434), respectively. The SVC gains derived using the NIR and SWIR data processing approaches are highly consistent. For example, the SVC gain differences at the VIIRS M2 blue band between the NIR (M6 and M7) and SWIR (M8 and M10) SVC methods are 0.021%, -0.010%, and 0.010% for SNPP, NOAA-20, and NOAA-21, respectively. With the new SVC gain sets, the mission-long OC data from all three VIIRS instruments can be reprocessed. Results over the MOBY site show that the reprocessed VIIRS OC products are accurate and consistent compared with in situ measurements. In addition, we have used Rayleigh-corrected reflectance
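Operationally, SVC gains are per-band multiplicative factors applied to the sensor radiances before atmospheric correction, and the quoted NIR-versus-SWIR consistency is a relative gain difference. A minimal sketch (hypothetical radiances and gain pairs, not the MSL12 implementation):

```python
import numpy as np

def apply_svc_gains(lt, gains):
    """Apply per-band vicarious gain factors to top-of-atmosphere radiances."""
    return np.asarray(lt) * np.asarray(gains)

def percent_gain_difference(g_a, g_b):
    """Relative difference between two gain estimates, in percent."""
    return 100.0 * (g_a - g_b) / g_b

lt = apply_svc_gains([80.0, 60.0], [0.9752, 0.9732])   # hypothetical radiances
d = percent_gain_difference(1.0101, 1.0099)            # hypothetical gain pair
```

Because the gains enter multiplicatively, sub-0.1% agreement between the NIR- and SWIR-derived values translates directly into consistent water-leaving radiance retrievals.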
Due to its ability to address label ambiguity, label distribution learning (LDL) has received wide attention from the community in applications such as image classification, emotion recognition, and big data processing. To process data with label distributions efficiently, researchers have proposed learning label-specific features (LSFs), the discriminative features for each class label. Although the LDL literature contains many algorithms for learning LSFs, most of them ignore the characteristics of label distributions. Label distributions lie in a real-valued vector space with specific characteristics. In this article, we propose to learn label-distribution-specific features (LDSFs) for processing label distribution data by considering the structures of label distributions. We design a novel LDL method called LDL-LDSF that exploits LDSFs by considering the fuzzy cluster structures of label distribution data. First, LDL-LDSF learns LDSFs for the whole label distribution by jointly learning the label distribution and fuzzy C-means clustering. Second, it learns LDSFs for each label in a similar way. Third, it concatenates the learned LDSFs with the original features to deduce an LDL model. Finally, we conduct extensive experiments showing that LDL-LDSF statistically outperforms several state-of-the-art LDL methods, and we validate the advantages of LDSFs for processing label distribution data.
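The fuzzy C-means building block used to expose cluster structure can be sketched as follows (a standard FCM implementation for illustration, not the LDL-LDSF optimization itself):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means: alternate soft-membership and centroid
    updates minimizing sum_{i,k} u_ik^m * ||x_i - v_k||^2."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # rows are soft memberships
    for _ in range(n_iter):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]                  # centroids
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)                # memberships
    return U, V

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),    # two well-separated blobs
               rng.normal(10.0, 0.1, (10, 2))])
U, V = fuzzy_c_means(X, c=2)
```

Unlike hard k-means, the fractional membership rows of U capture exactly the kind of graded, real-valued structure that label distributions exhibit, which motivates its use inside LDL-LDSF.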
The uniaxial compressive strength (UCS) of rocks is a vital geomechanical parameter widely used for rock mass classification, stability analysis, and engineering design in rock engineering. UCS testing methods and apparatuses have been proposed over the past few decades. The objective of the present study is to summarize the status of and developments in the theories, test apparatuses, and data processing of the existing testing methods for UCS measurement. It starts with elaborating the theories of these test methods. Then the test apparatuses and development trends for UCS measurement are summarized, followed by a discussion on rock specimens for the test apparatuses and data processing methods. Finally, the method selection for UCS measurement is discussed. It reveals that the rock failure mechanism in the UCS testing methods can be divided into compression-shear, compression-tension, composite failure mode, and no obvious failure mode. The trends of these apparatuses are towards automation, digitization, precision, and multi-modal ***. Two size correction methods are commonly used. One is to develop an empirical correlation between the measured indices and the specimen size. The other is to use a standard specimen to calculate the size correction factor. *** to five input parameters are commonly utilized in soft computation models to predict the UCS of rocks. The selection of the test methods for UCS measurement can be carried out according to the testing scenario and the specimen ***. Engineers can gain a comprehensive understanding of the UCS testing methods and their potential developments in various rock engineering endeavors.
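The first kind of size correction can be illustrated with one widely cited empirical relation, the Hoek-Brown (1980) size-effect formula UCS_d = UCS_50 * (50/d)^0.18, which normalizes a strength measured on a core of diameter d (in mm) to the 50 mm reference diameter (a sketch of the general idea; other correlations use different exponents):

```python
def ucs_to_50mm(ucs_d, d_mm):
    """Convert a UCS measured on a core of diameter d_mm to the equivalent
    50 mm value via the Hoek-Brown size-effect relation
    UCS_d = UCS_50 * (50 / d)**0.18, i.e. UCS_50 = UCS_d * (d / 50)**0.18."""
    return ucs_d / (50.0 / d_mm) ** 0.18

u50 = ucs_to_50mm(120.0, 25.0)   # a 25 mm core overstates 50 mm strength
```

Smaller cores measure systematically higher strengths, so the corrected value for a 25 mm core is lower than the raw measurement; at d = 50 mm the correction is the identity.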
Real-time or nearly real-time (nearline) data processing methods are critical tools as detector technologies and data acquisition (DAQ) systems allow for higher data rates and volumes. The introduction of the Energy Sciences Network (ESnet), a U.S. Department of Energy (DOE)-supported high-speed network for scientific research, creates opportunities to leverage the computing power of DOE facilities such as the National Energy Research Scientific Computing Center (NERSC). As a first step toward realizing a DOE Office of Science Integrated Research Infrastructure (IRI) pattern, an automated workflow was developed to process, remotely at NERSC, data obtained from a nuclear physics experiment at the Facility for Rare Isotope Beams (FRIB), with the data transferred between FRIB and NERSC over ESnet. The workflow demonstrated the ability to process one week's worth of experimental data in approximately 90 min and was used successfully for nearline analysis during a recently completed FRIB experiment. A summary of the workflow development and the results of recent demonstrations will be presented.