ISBN:
(print) 9781665446075
Agriculture is considered one of the strongest pillars of the Indian economy. The agriculture sector contributes significantly to the country and provides employment to many people living in rural areas. In our work, various image-processing methods are utilized to detect healthy and unhealthy paddy leaves. This paper aims to apply image analysis and classification methods to detect and classify paddy leaf diseases. We have built an image-processing system for identifying spots in paddy images by applying segmentation techniques. The system comprises four components: pre-processing; segmentation to extract the infected region using the FCM and SLIC algorithms; feature extraction using statistical Gray-Level Co-occurrence Matrix (GLCM) features, with color features extracted from the HSV planes; and, finally, classification using an artificial neural network (ANN).
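As a hedged sketch of the GLCM texture features this abstract relies on, the co-occurrence statistics can be computed in plain NumPy; the toy patches, the quantization to 8 levels, and the single (1, 0) displacement are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def glcm_features(gray, levels=8, dx=1, dy=0):
    """Normalized Gray-Level Co-occurrence Matrix and two classic
    Haralick statistics (contrast, homogeneity)."""
    # Quantize the 8-bit image to a small number of gray levels.
    q = np.clip((gray.astype(np.float64) / 256.0 * levels).astype(int),
                0, levels - 1)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

# Toy 8-bit "leaf patches": a flat region versus a noisy region.
flat = np.full((32, 32), 128, dtype=np.uint8)
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (32, 32)).astype(np.uint8)
c_flat, h_flat = glcm_features(flat)
c_noisy, h_noisy = glcm_features(noisy)
```

On real paddy-leaf images one would evaluate several displacements and angles and feed the resulting feature vector, together with HSV color statistics, to the classifier.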
ISBN:
(print) 9783030666255; 9783030666262
The proliferation of easy-to-use multimedia editing tools has eroded trust in what we see. Forensic techniques have been proposed to detect forgeries unnoticeable to the naked eye. In this paper, we focus on a specific copy-move forgery attack that alters portions within an image; it may be aimed at hiding sensitive information contained in a particular image portion or at misrepresenting the facts. Here, we propose to exploit the image's statistical properties, specifically the mean and variance, to detect the forged portions. A block-wise comparison based on these properties localizes the forged region in a prediction mask. Post-processing methods are proposed to reduce false positives and improve the accuracy (F-score) of the prediction mask. This decrease in the false positive rate in the final result comes from a post-processing step that overlays multiple masks computed with different thresholds and sliding-window block sizes.
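The block-wise mean/variance comparison described above can be sketched directly; the block size, the matching tolerance, and the synthetic forged image are assumptions for illustration:

```python
import numpy as np

def copy_move_mask(img, block=8, tol=1e-6):
    """Flag blocks whose (mean, variance) pair repeats elsewhere in
    the image -- a crude indicator of copy-moved regions."""
    h, w = img.shape
    stats, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = img[y:y + block, x:x + block].astype(np.float64)
            stats.append((b.mean(), b.var()))
            coords.append((y, x))
    mask = np.zeros((h, w), dtype=bool)
    for i in range(len(stats)):
        for j in range(i + 1, len(stats)):
            if (abs(stats[i][0] - stats[j][0]) < tol
                    and abs(stats[i][1] - stats[j][1]) < tol):
                for (y, x) in (coords[i], coords[j]):
                    mask[y:y + block, x:x + block] = True
    return mask

# Forge an image: copy one 8x8 patch onto another location.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32)).astype(np.uint8)
img[16:24, 16:24] = img[0:8, 0:8]   # the copy-move forgery
mask = copy_move_mask(img)
```

The paper's actual post-processing overlays masks from several thresholds and block sizes to suppress false positives; this sketch uses a single configuration.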
Authors:
Noel, Romain; Navarro, Laurent; Courbebaisse, Guy
Univ Gustave Eiffel, INRIA, COSYS, SII, I4S, F-44344 Bouguenais, France
Univ Lyon, Univ Jean Monnet, INSERM U1059, Mines St Etienne, Ctr Ingn Sante (CIS), SAINBIOSE, 158 Cours Fauriel, F-42023 St Etienne, France
Univ Lyon, Univ Claude Bernard Lyon 1, INSA Lyon, INSERM, UJM St Etienne, CNRS, CREATIS, UMR 5220, U1294, F-69621 Lyon, France
ISBN:
(digital) 9781510644052
ISBN:
(print) 9781510644052
The Lattice Boltzmann Method (LBM) is often used in computational fluid dynamics (CFD) for efficient fluid flow simulations. Computing the permeability of a porous medium from direct simulations is a common application that benefits from the LBM's ability to embed porosity parameters. Mathematical Morphology (MM) is widely used in image processing, as its theoretical foundations guarantee robust algorithms for the geometrical characterization of shapes appearing in images; MM is commonly used to compute porosity from porous-media images. These two methods have recently been unified in the LB3M (Lattice Boltzmann Method for Mathematical Morphology). The present work extends the LB3M to the extraction of porosity and to pore segmentation from images. To benefit from the full capacity of the LB3M, the algorithms must be reformulated and adjusted within a new paradigm; thus, the underlying concepts and algorithms required to compute this information are detailed. Moreover, a comparison is provided between the permeabilities resulting from CFD and from MM, both implemented using the LBM. In summary, this work demonstrates the capacity of the LB3M to realize complex transformations and operations from MM theory through entirely new algorithms. The challenge here is to highlight the ability of the LB3M to match physical phenomena. Indeed, the LB3M keeps the advantages of MM, such as a complete theory, fast convergence, scalability, and robustness, while adding the power of the LBM: its origins in statistical physics, its role as a partial differential equation solver, its intrinsic parallelizability, and its efficiency.
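The morphological side of this pipeline (porosity as the pore-space fraction, erosion as the basic MM operator) can be illustrated without the LBM machinery; this NumPy sketch is not the LB3M algorithm itself, only the classical image-domain operations it reformulates, on an assumed toy pore image:

```python
import numpy as np

def erode(binary, r=1):
    """Binary erosion with a (2r+1)x(2r+1) square structuring element,
    implemented by shifting and AND-ing (zero padding at the borders)."""
    h, w = binary.shape
    padded = np.zeros((h + 2 * r, w + 2 * r), dtype=bool)
    padded[r:r + h, r:r + w] = binary
    out = np.ones_like(binary, dtype=bool)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out &= padded[r + dy:r + dy + h, r + dx:r + dx + w]
    return out

# Toy porous medium: True marks pore space.
pore = np.zeros((20, 20), dtype=bool)
pore[4:12, 4:12] = True        # one large square pore
porosity = pore.mean()         # pore volume fraction
eroded = erode(pore, r=1)      # pore shrunk by one pixel on each side
```

Successive erosions with growing structuring elements yield the granulometry that MM uses for pore-size characterization.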
ISBN:
(print) 9781728171685
We consider the problem of optimizing the performance of an active imaging system by automatically discovering the illuminations it should use and the way to decode them. Our approach tackles two seemingly incompatible goals: (1) "tuning" the illuminations and the decoding algorithm precisely to the devices at hand (to their optical transfer functions, non-linearities, spectral responses, and image-processing pipelines), and (2) doing so without modeling or calibrating the system, without modeling the scenes of interest, and without prior training data. The key idea is to formulate a stochastic gradient descent (SGD) optimization procedure that puts the actual system in the loop: projecting patterns, capturing images, and calculating the gradient of the expected reconstruction error. We apply this idea to structured-light triangulation to "auto-tune" several devices, from smartphones and laser projectors to advanced computational cameras. Our experiments show that despite being model-free and automatic, optical SGD can boost system 3D accuracy substantially over state-of-the-art coding schemes.
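The system-in-the-loop idea can be caricatured with a simultaneous-perturbation (SPSA-style) gradient estimate, which, like optical SGD, needs only black-box evaluations of the device; the toy quadratic "system", its hidden optimum, and all hyper-parameters are assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.2, -0.5, 0.9])  # hidden optimum of the toy system

def system_error(pattern):
    """Stand-in for the real capture pipeline: project `pattern`,
    capture an image, return a noisy reconstruction error. We can
    only evaluate this black box, never differentiate it analytically."""
    return np.sum((pattern - TARGET) ** 2) + rng.normal(0.0, 0.01)

# SPSA-style stochastic gradient: perturb the pattern along a random
# sign vector and difference two noisy system evaluations.
pattern = np.zeros(3)
lr, eps = 0.05, 1e-2
for step in range(500):
    delta = rng.choice([-1.0, 1.0], size=pattern.shape)
    g = (system_error(pattern + eps * delta)
         - system_error(pattern - eps * delta)) / (2 * eps) * delta
    pattern -= lr * g
```

Despite never seeing a model of the system, the loop drives the pattern toward the hidden optimum using only paired noisy evaluations.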
ISBN:
(print) 9781665429825
In recent years, deep learning has been widely used in the field of medical image processing, for tasks such as symptom identification and organ detection. Due to the complexity of medical images, deep learning models for image classification have many parameters and take a long time to train; the training process therefore needs to be optimized. Stochastic gradient descent with momentum (SGD) is a common optimization algorithm in deep learning, and particle swarm optimization (PSO) is a classical and effective swarm-intelligence optimization algorithm; the two methods have complementary advantages and disadvantages. Combining the two algorithms to compute the parameters, this paper proposes a novel particle swarm optimization-stochastic gradient descent with momentum (PSO-SGD) algorithm, which finds the network's optimal solution more quickly and improves efficiency while preserving classification accuracy. The algorithm is validated on two data sets, the Blood Cell Images Data Set (BCIDS) and the COVID-19 Radiography Data Set (COVID19RDS), and the experiments demonstrate its effectiveness.
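A minimal sketch of such a hybrid update, assuming a toy quadratic loss in place of the network (the paper's actual coupling of PSO and momentum SGD may differ; all hyper-parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Toy stand-in for the network's loss surface.
    return np.sum((w - 3.0) ** 2)

def grad(w):
    return 2.0 * (w - 3.0)

# PSO-SGD sketch: classic PSO velocity update plus a gradient pull.
n, dim = 10, 2
pos = rng.uniform(-5.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
w_inertia, c1, c2, lr = 0.7, 1.5, 1.5, 0.05
for it in range(200):
    r1 = rng.random((n, dim))
    r2 = rng.random((n, dim))
    vel = (w_inertia * vel               # momentum-like inertia
           + c1 * r1 * (pbest - pos)     # cognitive PSO term
           + c2 * r2 * (gbest - pos)     # social PSO term
           - lr * grad(pos))             # SGD gradient pull
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

The inertia term plays the role of momentum, while the gradient pull accelerates each particle's local descent beyond what the swarm terms alone provide.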
ISBN:
(print) 9781450389532
A new metamodeling approach within the isogeometric analysis (IGA) scheme is proposed in this paper to perform stochastic structural reliability analysis of shell structures. A T-spline model is applied for low-cost computation and exact geometric representation in the analysis. Spatially dependent random material properties are considered, and a density-dependent spatial-clustering-based multi-scale support vector regression (DCMSSVR) is developed for uncertainty quantification. A Cassegrain antenna example is investigated thoroughly; the results demonstrate that the proposed DCMSSVR approach performs well in estimating the statistical characteristics of the structural displacement of interest, and thus the structural reliability, with a relatively small number of training samples.
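The surrogate-based reliability step can be sketched with a stand-in metamodel; here an ordinary polynomial least-squares fit replaces the DCMSSVR, and a scalar toy "simulation" replaces the IGA shell analysis, so every number below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def displacement(e):
    """Expensive 'simulation' stand-in: displacement under a random
    material stiffness e (the paper uses IGA shell analysis here)."""
    return 100.0 / e

# Train a cheap surrogate on a handful of expensive runs.
e_train = rng.normal(10.0, 1.0, 30)
d_train = displacement(e_train)
coef = np.polyfit(e_train, d_train, 2)   # stand-in for the SVR metamodel
surrogate = np.poly1d(coef)

# Monte Carlo reliability: P(displacement > threshold) via surrogate.
e_mc = rng.normal(10.0, 1.0, 100_000)
threshold = 12.0
pf_surrogate = np.mean(surrogate(e_mc) > threshold)
pf_exact = np.mean(displacement(e_mc) > threshold)
```

The point of the metamodel is exactly this trade: 30 expensive evaluations buy a failure-probability estimate that would otherwise need 100,000 of them.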
Speech synthesis methods can create realistic-sounding speech, which may be used for fraud, spoofing, and misinformation campaigns. Forensic methods that detect synthesized speech are important for protection against such attacks. Forensic attribution methods provide even more information about the nature of synthesized speech signals because they identify the specific speech synthesis method (i.e., the speech synthesizer) used to create a speech signal. Due to the increasing number of realistic-sounding speech synthesizers, we propose a speech attribution method that generalizes to new synthesizers not seen during training. To do so, we investigate speech synthesizer attribution in both a closed-set and an open-set scenario. In other words, we consider some speech synthesizers to be "known" synthesizers (i.e., part of the closed set) and others to be "unknown" synthesizers (i.e., part of the open set). We represent speech signals as spectrograms and train our proposed method, the compact attribution transformer (CAT), on the closed set for multi-class classification. Then, we extend our analysis to the open set to attribute synthesized speech signals to both known and unknown synthesizers. We use a t-distributed stochastic neighbor embedding (t-SNE) of the trained CAT's latent space to differentiate between the unknown synthesizers. Additionally, we explore poly-1 loss formulations to improve attribution results. Our proposed approach successfully attributes synthesized speech signals to their respective speech synthesizers in both closed- and open-set scenarios.
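A hedged caricature of open-set attribution in a latent space, using nearest-centroid distances with a reject threshold; the actual CAT embeddings and decision rule are learned, so the 2-D clusters, threshold, and test clips below are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D stand-in for the CAT latent space: clips from three known
# synthesizers form tight clusters (real embeddings are learned).
true_centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
emb = np.vstack([c + rng.normal(0, 0.3, (50, 2)) for c in true_centers])
labels = np.repeat([0, 1, 2], 50)
centroids = np.array([emb[labels == k].mean(axis=0) for k in range(3)])

def attribute(z, centroids, tau=1.5):
    """Nearest-centroid attribution with an open-set reject option:
    return -1 ('unknown synthesizer') when no centroid lies within tau."""
    d = np.linalg.norm(centroids - z, axis=1)
    k = int(d.argmin())
    return k if d[k] < tau else -1

known_clip = np.array([4.9, 0.2])      # near the second known cluster
unknown_clip = np.array([5.0, 5.0])    # far from every known cluster
```

Clips embedding close to a known cluster inherit its label; clips far from all clusters are rejected as coming from an unseen synthesizer, which is the open-set behavior the abstract describes.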
ISBN:
(print) 9781728131870
The gradual advent of machine learning has been shifting the field of computer vision from statistical methods to deep neural networks. These networks should be able to process high-resolution video streams from HD camera sources in real time. However, because of the fixed network input size and the need to maintain processing speed, high-resolution frames must be resized and down-sampled before being fed into the networks, resulting in a loss of feature information that hampers recognition accuracy. This motivated us to propose a methodology that creates and processes active regions of interest in the foreground image through an active region generator (ARG) module, eliminating the need to traverse the entire frame and down-sample the resolution before feeding it to the neural network. This preserves 25x more image feature information while maintaining a person-detection accuracy of 92% mAP at distances of up to 30-35 metres, executing in real time, compared with its classical counterpart based on a single-shot detector model. In addition, our proposed pipeline architecture, utilizing a multi-core Tesla GPU, increases execution throughput by a factor of 3x, verified on an NVIDIA DGX system.
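A minimal sketch of the active-region idea, assuming simple background differencing as the foreground model (the paper's ARG module is more elaborate; the frame, threshold, and padding are illustrative):

```python
import numpy as np

def active_regions(frame, background, thresh=25, pad=4):
    """Sketch of an active-region generator: difference the frame
    against a background model and return the bounding box of the
    changed pixels, so only that crop is sent to the detector."""
    diff = np.abs(frame.astype(int) - background.astype(int)) > thresh
    if not diff.any():
        return None
    ys, xs = np.nonzero(diff)
    h, w = frame.shape
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, h)
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, w)
    return (y0, x0, y1, x1)

# A 'person' appears as a bright blob on a static background.
bg = np.full((240, 320), 50, dtype=np.uint8)
frame = bg.copy()
frame[100:180, 200:230] = 200
box = active_regions(frame, bg)
```

Cropping to the active region lets the detector consume the foreground at full resolution instead of down-sampling the entire frame.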
Despeckling is a key tool for SAR image understanding and is the first pre-processing step for other applications such as classification, segmentation, and detection. Having a filter able to suppress noise without losing spatial details is fundamental. Among the different approaches, several deep-learning-based algorithms for SAR despeckling have recently been proposed. Most of them rely on simulated datasets built under speckle hypotheses that do not fully reflect the characteristics of real SAR images, leading to methods that produce artefacts in areas whose characteristics differ from those seen in training. The aim of this work is to propose a multi-step despeckling process: in the first step, a convolutional neural network trained under the fully developed speckle hypothesis with a statistical loss function is used for despeckling; then, by means of a statistical test and a ratio edge detector, the noise predicted by the network is used to detect the areas of not fully developed speckle where the network would produce artefacts. Once this detection is done, an ad hoc filtering policy can be applied.
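The fully developed speckle hypothesis and the ratio-image check behind the artefact detection can be sketched as follows; the single-look exponential model and the "perfect" network estimate are simplifying assumptions, not the paper's network or test:

```python
import numpy as np

rng = np.random.default_rng(0)

# Multiplicative speckle model: intensity = reflectivity * speckle,
# with single-look speckle ~ Exponential(1) (fully developed case).
reflectivity = np.full((128, 128), 10.0)
speckle = rng.exponential(1.0, reflectivity.shape)
observed = reflectivity * speckle

# Pretend the network recovered the clean reflectivity; the ratio
# image observed / estimate should then look like pure speckle.
estimate = reflectivity
ratio = observed / estimate

# Statistical check: for fully developed single-look speckle the
# ratio has mean ~1 and coefficient of variation ~1. Large local
# deviations flag areas where the hypothesis breaks down and where
# the network is likely to produce artefacts.
cv = ratio.std() / ratio.mean()
```

In practice these statistics would be computed in local windows, combined with a ratio edge detector, to localize the not-fully-developed areas.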
A wide variety of cancer subtypes have been discovered. Early cancer research today requires that cancer forms be screened and treated in a timely manner, because this aids in the medical treatment of patients. In the fields of biomedicine and bioinformatics, various studies have investigated the use of machine learning and deep learning to classify cancer patients as having a high or low risk of relapse, and these methods have influenced cancer treatment and research. When working with enormous datasets, machine learning techniques are needed to determine the most essential properties. Artificial neural networks (ANNs), support vector machines (SVMs), and decision trees (DTs) can be used to construct cancer-cure prediction models. While ML techniques can help us better understand how cancer grows, they are not yet ready to be applied in daily clinical practice. In this study, the ML and DL approaches for cancer progression modeling are surveyed, along with the supervision schemes, inputs, and data samples they commonly use.