Grinding is commonly used for machining parts made of hard or brittle materials with the intent of ensuring a better surface finish. The material removal ability of a grinding wheel depends on whether the wheel surface is populated with a sufficiently high number of randomly distributed active abrasive grains. This condition is ensured by performing dressing operations at regular time intervals. The effectiveness of a dressing operation is determined by measuring the surface topography of the wheel (the regions on the grinding wheel work surface where the active abrasive grains reside, and their distributions). In many cases, image processing methods are employed to determine the surface topography. However, such procedures must be able to remove the regions where the abrasive grains do not reside while keeping, at the same time, the regions where the abrasive grains reside. Thus, special kinds of image processing techniques are needed to distinguish the non-grain regions from the grain regions, which requires a heavy computing load and a long duration. As an alternative, in the framework of the "Biologicalisation in Manufacturing" paradigm, this study employs a bio-inspiration-based computing method known as DNA-based computing (DBC). It is shown that DBC can eliminate non-grain regions while keeping grain regions with significantly lower computational effort and time. On a surface of size 706.5 μm in the circumferential direction and 530 μm in the width direction, there are about 7000 potential regions where grains might reside, as the image processing results show. After performing DBC, this number is reduced to about 300 (representing a realistic estimate). Thus, the outcomes of this study can help develop an intelligent image processing system to optimize dressing operations and, thereby, grinding operations.
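As a rough illustration of the image-processing stage this abstract refers to (not the authors' DBC procedure), the sketch below thresholds a grayscale image of the wheel surface and counts candidate grain regions by connected-component labeling; the threshold value, image source, and pixel scale are assumptions made for the example.

```python
# Hypothetical illustration: extracting candidate grain regions from a
# grayscale image of the wheel surface before any DBC-based filtering.
# The brightness threshold and image source are assumptions, not from the paper.
import numpy as np
from scipy import ndimage

def candidate_grain_regions(gray: np.ndarray, threshold: float = 0.6):
    """Label bright connected regions as potential active-grain sites."""
    binary = gray > threshold * gray.max()          # crude brightness threshold
    labeled, num_regions = ndimage.label(binary)    # connected-component labeling
    sizes = ndimage.sum(binary, labeled, range(1, num_regions + 1))
    return num_regions, sizes

# A 706.5 um x 530 um field digitized at roughly 1 um/pixel would give an
# array of about 707 x 530 pixels; the abstract reports ~7000 candidate
# regions at this stage, which DBC then reduces to ~300.
```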
ISBN:
(Print) 9781510636040
Sensors used in intelligence, surveillance and reconnaissance (ISR) operations and activities have the ability to generate vast amounts of data. High-volume analytical capabilities are needed to process data from multi-modal sensors to develop and test complex computational and deep learning models in support of U.S. Army Multi-Domain Operations (MDO). The Army Research Laboratory designs, develops and tests Artificial Intelligence and Machine Learning (AI/ML) algorithms employing large repositories of in-house data. To efficiently process the data as well as design, build, train and deploy models, parallel and distributed algorithms are needed. Deep learning frameworks provide language-specific, container-based building blocks associated with deep learning neural networks applied to specific target applications. This paper discusses applications of AI/ML deep learning frameworks and Software Development Kits (SDKs) and demonstrates and compares specific multi-core processor and NVIDIA Graphics Processing Unit (GPU) implementations for desktop and cloud environments. Frameworks selected for this research include PyTorch and MATLAB. Amazon Web Services (AWS) SageMaker was used to launch machine learning instances ranging from general purpose computing to GPU instances. Detailed processes, example code, performance enhancements, best practices and lessons learned are included for publicly available acoustic and image datasets. Research results indicate that parallel implementations of data preprocessing steps saved significant time, but more expensive GPUs did not provide any processing time advantages for the machine learning algorithms tested.
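As a hedged illustration of the kind of parallel preprocessing the paper reports on (not the authors' code), the sketch below uses PyTorch DataLoader worker processes to parallelize image preprocessing and selects CPU or GPU at runtime; the dataset path, transform pipeline, and batch size are placeholders.

```python
# Minimal sketch: parallel data preprocessing with DataLoader workers and
# runtime CPU/GPU selection. Dataset location and transforms are placeholders.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# "data/images" is a placeholder for a locally staged image dataset.
dataset = datasets.ImageFolder("data/images", transform=transform)

# num_workers > 0 runs preprocessing in parallel worker processes,
# which is where the reported preprocessing time savings come from.
loader = DataLoader(dataset, batch_size=64, shuffle=True,
                    num_workers=8, pin_memory=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)
    # ... forward/backward pass of the model under test ...
    break
```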
ISBN:
(Print) 9781510635241
Image set annotation is an important task in the supervised training of deep neural networks. Manual and data-driven dataset annotation methods are commonly used approaches. Both have shortcomings, especially for datasets that require professional knowledge, which leads to high cost with manual annotation methods and poorly diversified annotation samples with data-driven annotation methods. Although the recommendation annotation method based on cosine similarity over deep neural network features takes advantage of both manual annotation and data-driven methods, problems such as low accuracy and low click-through rate remain. In order to improve recommendation accuracy and click-through rate, we propose a confusion graph recommendation annotation method, which builds a confusion graph based on the Largest Margin Nearest Neighbor (LMNN) distance among deep neural network features to recommend the most confusing images to annotators. In this paper, we conducted ablation studies on a self-built child face dataset in terms of Precision, mAP (mean Average Precision), and CTR (click-through rate). The experimental results show that the proposed method achieves superior performance compared with the cosine similarity recommendation annotation method and the manual annotation method.
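A minimal sketch of the recommendation idea described above, with plain Euclidean distance standing in for the LMNN-learned metric; the neighbourhood size, scoring rule, and function names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: connect images whose deep features are close under a metric,
# then recommend the most "confusing" (tightly clustered) images to annotators.
# Plain Euclidean distance stands in for the LMNN-learned Mahalanobis metric.
import numpy as np

def most_confusing(features: np.ndarray, k: int = 5, top_n: int = 10):
    """features: (N, D) array of deep-network embeddings for unlabeled images."""
    sq = np.sum(features ** 2, axis=1)
    dists = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.fill_diagonal(dists, np.inf)

    # Confusion graph: edges to each sample's k nearest neighbours; a node's
    # confusion score is the inverse mean distance to those neighbours.
    knn = np.argsort(dists, axis=1)[:, :k]
    scores = 1.0 / (np.take_along_axis(dists, knn, axis=1).mean(axis=1) + 1e-9)
    return np.argsort(scores)[::-1][:top_n]   # indices to show annotators first
```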
NASA has committed to open-source science that enables Earth observation data transparency, inclusivity, accessibility, and reproducibility – all fundamental to the pace and quality of scientific progress. We have embraced this vision by producing standard InSAR science products that are freely available to the public through NASA Data Active Archive Centers (DAACs) and are generated using state-of-the-art open-source and openly-developed methods. The Advanced Rapid Imaging and Analysis (ARIA) project's Sentinel-1 Geocoded Unwrapped Phase product (ARIA-S1-GUNW) is a 90-meter InSAR product that spans major land-based fault systems, the US coasts, and active volcanic regions through the complete Sentinel-1 record. The products enable the measurement of centimeter-scale surface displacement with applications across the solid earth, hydrology, and sea-level disciplines. The ARIA-S1-GUNW also enables rapid response mapping of surface motion after earthquakes, landslides, and subsidence. The ARIA-S1-GUNW products are freely available through the Alaska Satellite Facility (ASF) DAAC. In the last year, we have successfully grown the archive to over 1.1 million products, a 6-fold increase, through NASA ACCESS by improving our processing workflow and leveraging HyP3, an AWS-based cloud processing environment. We are continuing to partner with researchers to generate more products over relevant areas of scientific interest. All the processing software and cloud infrastructure are open-source to ensure reproducibility and enable other scientists to modify, improve upon, and scale their own cloud workflows for related InSAR analyses. We have, in parallel, developed and supported open-source, well-documented tools to further streamline time-series analysis from the ARIA-S1-GUNW into deformation analysis workflows.
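As a hedged example of how a user might discover ARIA-S1-GUNW products at the ASF DAAC programmatically, the sketch below uses the open-source asf_search Python client; the processingLevel keyword and the area of interest are assumptions, so the exact product-type string should be checked against ASF's documentation.

```python
# Hedged example of searching the ASF DAAC for GUNW interferograms with the
# asf_search client. The processingLevel keyword and polygon are assumptions.
import asf_search as asf

aoi = "POLYGON((-118.6 33.9, -117.9 33.9, -117.9 34.4, -118.6 34.4, -118.6 33.9))"

results = asf.geo_search(
    platform=asf.PLATFORM.SENTINEL1,
    processingLevel="GUNW_STD",      # assumed keyword for the 90 m GUNW product
    intersectsWith=aoi,
    start="2023-01-01",
    end="2023-12-31",
)
print(f"Found {len(results)} interferograms over the area of interest")
```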
Uncertainty in medical image segmentation tasks, especially inter-rater variability, arising from differences in interpretations and annotations by various experts, presents a significant challenge in achieving consis...
ISBN:
(Print) 9783030377311; 9783030377304
Helping people find relevant video content of interest in massive collections of sports videos has become an urgent demand. We have designed and developed a sports video search engine based on a distributed architecture, which aims to provide users with content-based video analysis and retrieval services. In the sports video search engine, we focus on event detection, highlights analysis, and image retrieval. Our work has several advantages: (i) CNNs and RNNs are used to extract features and integrate dynamic information, and a new sliding window model is used for multi-length event detection. (ii) For highlights analysis, an improved method based on a self-adapting dual threshold and dominant color percentage is used to detect shot boundaries, and an affect arousal method is used for highlights extraction. (iii) For image indexing and retrieval, a hyperspherical soft assignment method is proposed to generate image descriptors, enhanced residual vector quantization is presented to construct a multi-inverted index, and two adaptive retrieval methods based on hyperspherical filtration are used to improve time efficiency. (iv) All of the above algorithms are implemented on the distributed platform we developed for massive video data processing.
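To illustrate the dual-threshold idea mentioned for shot-boundary detection (a simplified sketch, not the authors' self-adapting implementation), the code below flags abrupt cuts with a high threshold on frame-difference scores and accumulates evidence for gradual transitions above a low threshold; both thresholds are fixed, assumed values here.

```python
# Illustrative dual-threshold shot-boundary detection on per-frame
# histogram-difference scores; the paper adapts both thresholds automatically.
import numpy as np

def detect_shot_boundaries(diffs, t_high=0.6, t_low=0.2):
    """diffs: per-frame difference scores in [0, 1]."""
    boundaries, acc, start = [], 0.0, None
    for i, d in enumerate(diffs):
        if d >= t_high:                      # abrupt cut
            boundaries.append(i)
            acc, start = 0.0, None
        elif d >= t_low:                     # candidate gradual transition
            if start is None:
                start = i
            acc += d
            if acc >= t_high:                # accumulated change -> boundary
                boundaries.append(start)
                acc, start = 0.0, None
        else:
            acc, start = 0.0, None
    return boundaries

print(detect_shot_boundaries(np.array([0.05, 0.7, 0.1, 0.25, 0.3, 0.28, 0.02])))
# -> [1, 3]: one abrupt cut at frame 1, one gradual transition starting at frame 3
```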
Purpose: To report the current practice pattern of proton stereotactic body radiation therapy (SBRT) for prostate treatments. Materials and methods: A survey was designed to inquire about the practice of proton SBRT treatment for prostate cancer. The survey was distributed in February 2023 to all 30 proton therapy centers in the United States that participate in the National Clinical Trial Network. The survey focused on usage, patient selection criteria, prescriptions, target contours, dose constraints, treatment plan optimization and evaluation methods, patient-specific QA, and image-guided radiation therapy (IGRT) methods. Results: We received responses from 25 centers (83% participation). Only 8 respondent proton centers (32%) reported performing SBRT of the prostate. The remaining 17 centers cited 3 primary reasons for not offering this treatment: no clinical need, lack of volumetric imaging, and/or lack of clinical evidence. Only 1 center cited the reduction in overall reimbursement as a concern for not offering prostate SBRT. Several common practices among the 8 centers offering SBRT for the prostate were noted, such as using hydrogel spacers, fiducial markers, and magnetic resonance imaging (MRI) for target delineation. Most proton centers (87.5%) utilized pencil beam scanning (PBS) delivery and completed Imaging and Radiation Oncology Core (IROC) phantom credentialing. Treatment planning typically used parallel opposed lateral beams, and consistent parameters for setup and range uncertainties were used for plan optimization and robustness evaluation. Measurement-based patient-specific QA, beam delivery every other day, fiducial contours for IGRT, and total doses of 35 to 40 GyRBE were consistent across all centers. However, there was no consensus on the risk levels for patient selection. Conclusion: Prostate SBRT is used in about one-third of proton centers in the US. There was significant consistency in practices among proton centers treating with proton SBRT.
The upgrade of the Belle II vertex detector (VTX) at the SuperKEKB accelerator in Japan is foreseen to improve tracking performance at the expected high beam backgrounds at the target luminosity of 6 × 10³⁵ cm⁻²s⁻¹. ...
Industrial robots can replace human labour to perform a variety of tasks. Among these tasks, robotic grasping is the most fundamental industrial robot operation. However, conventional robotic grasping methods could become inapplicable for cluttered and occluded objects. To address this issue, we adopt object pose estimation (OPE) to facilitate robotic grasping of cluttered and occluded objects and propose an object detection model based on 2D-RGB multi-view features. The proposed model is built by adding four transpose convolution layers to the ResNet backbone to obtain desirable 2D feature maps of object keypoints in each image. In addition, we design a feature-fusion model to produce 3D coordinates of keypoints from 2D multi-view features based on the volumetric aggregation method, along with a keypoint-detection confidence for each view to assist the optimality judgment of the robotic grasping. Extensive experiments are conducted to verify the accuracy of OPE, and the experimental results indicate substantial performance improvements of the proposed approach over conventional methods in various scenarios.
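A hedged sketch of the described per-view architecture: a ResNet backbone followed by four transposed-convolution layers producing 2D keypoint heatmaps for each camera view. The backbone choice (ResNet-50), channel widths, and keypoint count are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of a per-view keypoint-heatmap network: ResNet features upsampled by
# four transpose-convolution blocks, then a 1x1 head producing K heatmaps.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class KeypointHeatmapNet(nn.Module):
    def __init__(self, num_keypoints: int = 8):
        super().__init__()
        backbone = resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # (B, 2048, H/32, W/32)
        deconv_channels = [2048, 256, 256, 256, 256]
        layers = []
        for c_in, c_out in zip(deconv_channels[:-1], deconv_channels[1:]):
            layers += [nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                       nn.BatchNorm2d(c_out), nn.ReLU(inplace=True)]
        self.deconv = nn.Sequential(*layers)                  # four transpose-conv blocks
        self.head = nn.Conv2d(256, num_keypoints, kernel_size=1)

    def forward(self, x):                                     # x: (B, 3, H, W) RGB view
        return self.head(self.deconv(self.features(x)))       # (B, K, H/2, W/2) heatmaps

heatmaps = KeypointHeatmapNet()(torch.randn(2, 3, 256, 256))
print(heatmaps.shape)   # torch.Size([2, 8, 128, 128])
```

Heatmap peaks from each calibrated view would then be aggregated volumetrically into 3D keypoint coordinates, weighted by the per-view detection confidence, as the abstract describes.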
ISBN:
(Print) 9781728160955
Distributed deep learning has been proposed to train large-scale neural network models on huge datasets using multiple workers. Since workers need to frequently communicate with each other to exchange gradients for updating parameters, communication overhead is always a major challenge in distributed deep learning. To cope with this challenge, gradient compression has been used to reduce the amount of data to be exchanged. However, existing compression methods, including both gradient quantization and gradient sparsification, either hurt model performance significantly or suffer from inefficient compression. In this paper, we propose a novel approach, called Standard Deviation based Adaptive Gradient Compression (SDAGC), which can simultaneously achieve model training with low communication overhead and high model performance in synchronous training. SDAGC uses the standard deviation of the gradients in each layer of the neural network to dynamically calculate a suitable threshold according to the training process. Moreover, several associated methods, including residual gradient accumulation, local gradient clipping, adaptive learning rate revision and momentum compensation, are integrated to guarantee the convergence of the model. We verify the performance of SDAGC on various machine learning tasks: image classification, language modeling and speech recognition. The experimental results show that, compared with other existing works, SDAGC can achieve a gradient compression ratio from 433× to 2021× with similar or even better accuracy.
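A simplified sketch of the core thresholding idea (not the full SDAGC algorithm): gradients whose magnitude exceeds a multiple of the layer's standard deviation are transmitted, while the remainder is accumulated locally as a residual for later rounds; the multiplier k is an assumed hyperparameter.

```python
# Sketch of standard-deviation-based gradient sparsification with residual
# accumulation; clipping, learning-rate revision and momentum compensation
# from the paper are omitted. The multiplier k is an assumed hyperparameter.
import torch

class StdDevCompressor:
    def __init__(self, k: float = 2.5):
        self.k = k
        self.residuals = {}   # per-layer accumulation of untransmitted gradients

    def compress(self, name: str, grad: torch.Tensor) -> torch.Tensor:
        grad = grad + self.residuals.get(name, torch.zeros_like(grad))
        threshold = self.k * grad.std()              # layer-wise adaptive threshold
        mask = grad.abs() >= threshold
        self.residuals[name] = grad * (~mask)        # keep small gradients locally
        return grad * mask                           # sparse tensor to exchange

compressor = StdDevCompressor()
g = torch.randn(1000)
sparse = compressor.compress("layer1.weight", g)
print(f"kept {int((sparse != 0).sum())} of {g.numel()} gradient entries")
```

In a real synchronous setting the sparse tensor would be encoded (for example as index/value pairs) before the exchange step, which is what yields the high compression ratios reported.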