ISBN: (Print) 9781959025030
Natural gas extraction systems often encounter manufacturing defects or develop defects over time, leading to gas leaks. These leaks pose challenges, causing revenue losses and environmental pollution. Detecting gas leaks across the vast array of extraction, transfer, and storage equipment within these systems can be arduous, allowing leaks to persist unnoticed. Additionally, natural gas leaks are not visible to the naked eye, further complicating their detection. We developed a novel deep learning image-processing model that utilizes videos captured by a specialized Optical Gas Imaging (OGI) camera to detect natural gas leaks. The temporal deep learning algorithm is designed to identify patterns associated with gas leaks and to improve its performance through supervised learning. Our model incorporates algorithms to detect background environments, motion, and equipment, and to classify gas leaks. Leak-identification algorithms then determine whether a gas leak is present: they calculate the probability that detected motion indicates a gas leak based on long-term and short-term background subtraction, the detected motion, its duration, equipment location, and telemetry data. To minimize false positives, we developed image segmentation and object detection models to identify known objects, such as equipment, people, and cars, within the video footage. To train our model, we collected more than 10,000 short videos from real fields and included simulated data with known, rate-controlled gas releases in different situations. The data cover a wide range of weather conditions, including different temperatures, wind speeds, and humidity levels in sunny, rainy, and snowy fields. We validated our model by conducting experiments involving actual footage from the field. The model achieved a 98% true-positive rate and a 100% true-negative rate, correctly refraining from sending an alarm for all non-releases. Additionally, we developed a post-processing algorithm capable of estimating the...
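The abstract does not give implementation details, but the dual-timescale background subtraction it mentions can be illustrated with a minimal OpenCV sketch. The learning rates, thresholds, and file name below are assumptions for illustration, not values from the paper:

```python
import cv2
import numpy as np

# Hypothetical sketch: maintain two running-average background models at
# different timescales and flag pixels that depart from the slow model.
ALPHA_LONG = 0.001   # slow model: adapts over minutes (stable scene); assumed value
ALPHA_SHORT = 0.05   # fast model: adapts over seconds (recent motion); assumed value
THRESHOLD = 25       # per-pixel intensity difference threshold; assumed value

cap = cv2.VideoCapture("ogi_clip.mp4")  # placeholder OGI video path
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
bg_long, bg_short = gray.copy(), gray.copy()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Update both background models with exponential moving averages.
    cv2.accumulateWeighted(gray, bg_long, ALPHA_LONG)
    cv2.accumulateWeighted(gray, bg_short, ALPHA_SHORT)

    # Motion masks relative to each timescale.
    diff_long = cv2.absdiff(gray, bg_long) > THRESHOLD
    diff_short = cv2.absdiff(gray, bg_short) > THRESHOLD

    # A slow, persistent plume departs from the long-term background
    # while largely blending into the short-term one.
    plume_candidate = diff_long & ~diff_short
    score = plume_candidate.mean()  # fraction of pixels flagged this frame
    if score > 0.01:                # illustrative alarm threshold
        print("possible leak-like motion, score =", round(float(score), 4))
```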
ISBN: (Print) 9798350314557
The increasing spread of data and text documents on the Internet, such as articles, web pages, books, and posts on social networks, creates a fundamental challenge for various fields of text processing under the title of "automatic text summarization". Manual processing and summarization of large volumes of textual data is a very difficult, expensive, and time-consuming process, and is effectively impossible for human users. Text summarization systems are divided into extractive and abstractive categories. In the extractive method, the final summary of a text document is assembled from the important sentences of that same document without any changes; as a result, sentences may be repeated and unresolved pronouns can harm readability. In the abstractive method, by contrast, the final summary is derived from the meaning of the sentences and words of the same document or other documents. Many prior works have used extractive or abstractive methods to summarize collections of web documents, each with advantages and disadvantages in the results obtained in terms of similarity or size. In this research, by developing a crawler, extracting popular text posts from the Instagram social network, applying suitable pre-processing, and combining a set of extractive and abstractive algorithms, the researcher showed how each abstractive algorithm can be used with extraction as a supplement to increase the accuracy of the other algorithm. Observations on 820 popular text posts from the Instagram social network show an accuracy of 80% for the proposed system.
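As an illustration of the extractive-plus-abstractive pipeline described above, here is a minimal sketch: a TF-IDF extractive pre-filter feeding a generic seq2seq summarizer. The naive sentence splitter, the `facebook/bart-large-cnn` checkpoint, and the length limits are assumptions; the paper does not specify its components:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import pipeline

def extractive_filter(text, keep=5):
    """Score sentences by mean TF-IDF weight and keep the top ones,
    preserving their original order (a simple extractive stage)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    tfidf = TfidfVectorizer().fit_transform(sentences)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()
    top = sorted(np.argsort(scores)[-keep:])
    return ". ".join(sentences[i] for i in top) + "."

# Abstractive stage: any seq2seq summarizer works; this checkpoint is illustrative.
abstractive = pipeline("summarization", model="facebook/bart-large-cnn")

def hybrid_summary(post_text):
    condensed = extractive_filter(post_text)          # extractive pre-filter
    out = abstractive(condensed, max_length=60, min_length=15)
    return out[0]["summary_text"]                     # abstractive rewrite
```

The design idea is that the extractive stage discards low-information sentences before the abstractive model runs, so the generator paraphrases only material already judged important.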
The fundamental components of automated retinal blood vessel segmentation for eye disease screening systems are segmentation algorithms, retinal blood vessel datasets, classification algorithms, performance measure pa...
Over the years there have been huge improvements in the performance of image processing algorithms due to the increase in the computational power of devices as well as the use of neural networks. This paper focuses on a comparison of ...
With the rapid development of artificial intelligence technology, visual inspection and image processing algorithms have been continuously improved in accuracy and efficiency, and intelligent inspection systems based ...
The quality of images and videos plays a vital role in real-time systems. Images captured without sufficient illumination suffer from low dynamic range and a high propensity for generating high noise levels. The...
To address issues of insufficient sensitivity and weak anti-noise characteristics in existing image sharpness evaluation algorithms, we propose a method combining local variance and gradient analysis. Traditional meth...
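The full text is truncated here, so the exact formulation is unknown; the sketch below shows one plausible way to combine local variance with gradient analysis into a no-reference sharpness score. The box-filter window, the variance weighting, and the file path are assumptions:

```python
import cv2
import numpy as np

def sharpness_score(gray, win=7):
    """Illustrative no-reference sharpness score combining local variance
    with gradient magnitude; the weighting scheme is an assumption."""
    gray = gray.astype(np.float32)

    # Local variance via box filtering: Var = E[x^2] - E[x]^2.
    mean = cv2.boxFilter(gray, -1, (win, win))
    mean_sq = cv2.boxFilter(gray * gray, -1, (win, win))
    local_var = np.maximum(mean_sq - mean * mean, 0)

    # Gradient magnitude from Sobel derivatives.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    grad_mag = np.sqrt(gx * gx + gy * gy)

    # Weight gradients by local variance so flat, noisy regions
    # contribute less than genuinely textured edges.
    weights = local_var / (local_var.max() + 1e-8)
    return float((weights * grad_mag).mean())

img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
print("sharpness:", sharpness_score(img))
```

Weighting the gradient field by local variance is one way to obtain the anti-noise behavior the abstract mentions, since uniform noise produces gradients but little structured variance.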
A comprehensive benchmark is yet to be established in the Image Manipulation Detection & Localization (IMDL) field. The absence of such a benchmark leads to insufficient and misleading model evaluations, severely ...
ISBN: (Print) 9798400717024
In the realm of Printed Circuit Board (PCB) manufacturing, the alignment process is pivotal for ensuring the functional integrity of the final product. Traditional image measurement techniques, while foundational, often fall short of the high degree of accuracy and precision necessary for today's complex PCB designs. This research presents a novel approach that significantly enhances measurement accuracy through the application of optimized image processing techniques. By leveraging advanced algorithms within the OpenCV library, we introduce a methodology that accurately transforms pixel coordinates into real-world measurements, crucial for precise PCB alignment. Our technique employs edge detection algorithms such as the Canny, Sobel, and Prewitt filters, combined with a machine learning model that adapts to variations in real-time imaging conditions. The study delineates the development of a user-friendly graphical interface that streamlines the measurement process, making it accessible for practical industrial application. Results from experimental validations indicate a substantial improvement in measurement precision, with a demonstrable reduction in alignment errors compared to conventional methods. This leap forward not only promises to elevate the standards of PCB manufacturing but also opens avenues for similar advancements in other domains where image measurement is essential. The implications of this work are far-reaching, with the potential to significantly boost the efficiency and reliability of electronic manufacturing processes globally.
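The paper's exact calibration procedure is not given in the abstract; the sketch below illustrates the core step it describes, converting pixel coordinates into real-world measurements, using OpenCV's Canny edge detector and a fiducial of known width. The fiducial-based scale recovery, thresholds, and file path are assumptions:

```python
import cv2
import numpy as np

# Hypothetical calibration: a fiducial of known physical width is located
# in the image, giving a millimetres-per-pixel scale factor.
FIDUCIAL_WIDTH_MM = 10.0  # illustrative known reference width

img = cv2.imread("pcb.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
edges = cv2.Canny(img, 50, 150)  # typical thresholds, not from the paper

# Find the fiducial as the largest external contour (an assumption;
# a real system would match a specific marker pattern).
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
fiducial = max(contours, key=cv2.contourArea)
_, _, w_px, _ = cv2.boundingRect(fiducial)
mm_per_px = FIDUCIAL_WIDTH_MM / w_px

def px_to_mm(p1, p2):
    """Convert a pixel-space distance between two points to millimetres."""
    return float(np.hypot(p1[0] - p2[0], p1[1] - p2[1])) * mm_per_px

print("pad spacing:", px_to_mm((120, 80), (320, 80)), "mm")
```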
ISBN: (Print) 9781713899921
Language models have been successfully used to model natural signals, such as images, speech, and music. A key component of these models is a high quality neural compression model that can compress high-dimensional natural signals into lower dimensional discrete tokens. To that end, we introduce a high-fidelity universal neural audio compression algorithm that achieves 90x compression of 44.1 kHz audio into tokens at just 8 kbps bandwidth. We achieve this by combining advances in high-fidelity audio generation with better vector quantization techniques from the image domain, along with improved adversarial and reconstruction losses. We compress all domains (speech, environment, music, etc.) with a single universal model, making it widely applicable to generative modeling of all audio. We compare with competing audio compression algorithms, and find our method outperforms them significantly. We provide thorough ablations for every design choice, as well as open-source code and trained model weights. We hope our work can lay the foundation for the next generation of high-fidelity audio modeling.
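The abstract does not spell out the quantizer, but neural audio codecs in this line of work typically use residual vector quantization (RVQ), sketched below with random codebooks; the stage count, codebook size, and latent dimension are illustrative, and a trained codec would learn the codebooks jointly with the encoder and decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal residual vector quantization (RVQ) sketch: each stage quantizes
# the residual left by the previous stage. All sizes below are assumptions.
N_STAGES, CODEBOOK_SIZE, DIM = 4, 256, 64
codebooks = [rng.normal(size=(CODEBOOK_SIZE, DIM)) for _ in range(N_STAGES)]

def rvq_encode(latent):
    """Return one code index per stage for a single latent vector."""
    residual, codes = latent.copy(), []
    for cb in codebooks:
        idx = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))
        codes.append(idx)
        residual -= cb[idx]          # pass the leftover to the next stage
    return codes

def rvq_decode(codes):
    """Sum the selected codewords across stages to reconstruct the latent."""
    return sum(cb[i] for cb, i in zip(codebooks, codes))

z = rng.normal(size=DIM)             # stand-in for one encoder frame
codes = rvq_encode(z)
z_hat = rvq_decode(codes)
print("codes:", codes, "| reconstruction error:", np.linalg.norm(z - z_hat))
```

Because each later stage refines the error left by the earlier ones, dropping trailing codes trades fidelity for bitrate, which is how such codecs serve multiple bandwidths with one model.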