Bearings are machine elements used in a wide variety of applications, including transportation. Accurate prediction of bearing failure is important in sensitive applications to ensure safety throughout the service life. Bearing failure prediction is useful both during the bearing testing phase and during lifetime use. Real-time audio signal analysis combined with advanced algorithms can identify incipient failure caused by defects, fatigue, overload or poor maintenance. Audio signal analysis and processing remains a domain where techniques and algorithms still need to be developed. This paper presents a proof-of-concept technique and equipment developed to predict bearing failure during the testing phase. For this study, acoustic emission signals were measured and analyzed during life testing of a bearing while other sound sources were also recorded. Correlations between the acoustic emission patterns were identified in order to separate noise signals from the signal associated with bearing degradation. The developed solution for isolating other sound signals means that the technique could also be used during the lifetime of the bearings. The results of this study provide evidence that accurate estimation of the failure of various bearings is possible by processing the vibration signal acquired from a single point, even when multiple sound sources are present and introduce noise into the signal processing. The SVM classifier provides at least 92% mean accuracy. The influence of the model on prediction accuracy is also discussed in the work.
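A minimal sketch of the kind of SVM-based classification of acoustic-emission windows described above. The abstract does not specify the features or library; the windowed time-domain features (RMS, crest factor, kurtosis) and the scikit-learn pipeline below are assumptions, not details from the paper.

```python
# Sketch: SVM classification of bearing condition from acoustic-emission windows.
# Feature choices and library (scikit-learn) are illustrative assumptions.
import numpy as np
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def window_features(signal, fs, win_s=1.0):
    """Split an acoustic-emission signal into windows and extract simple features."""
    n = int(win_s * fs)
    windows = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    feats = []
    for w in windows:
        rms = np.sqrt(np.mean(w ** 2))
        crest = np.max(np.abs(w)) / rms
        feats.append([rms, crest, kurtosis(w)])
    return np.array(feats)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# With a labelled feature matrix X and labels y (0 = healthy, 1 = degrading):
# scores = cross_val_score(clf, X, y, cv=5)  # mean accuracy; the paper reports >= 92%
```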
A novel MIT Direct Digitising Signal Measurement (DDSM) module has been developed with the aim of replacing the centralised NI PXI system and PC processing of the Cardiff Mk2 MIT system, thus offering potentially faster measurement cycles. The proposed module replaces the signal acquisition and offers local processing. The core of the system is a Xilinx Spartan-3 FPGA, paired with a dual 14-bit ADC capable of 120 MS/s. The FPGA provides a flexible and fast platform for data acquisition and processing. The phase is measured using two-channel phase-sensitive detection via I/Q demodulation. Built-in averaging reduces the data to a single signal period of 12 samples before multiplying and accumulating the data with the I/Q signals. The system provides the I/Q values for both channels directly, eliminating the long download and processing times required by the centralised NI PXI system currently used in the Cardiff Mk2 MIT system. The module acquires both the measurement and the reference signal and has a phase noise as low as 7.5 m° with a measurement time constant of 6 ms. The phase drift over 6 hours is considerable at 119 m°. Details of the module, circuits and algorithms employed are provided, as are the results of the performance measurements.
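The following sketch illustrates the two-channel phase-sensitive detection via I/Q demodulation described above: each channel is averaged down to a single 12-sample signal period, multiplied and accumulated with quadrature references, and the phase difference between measurement and reference channels is taken from the resulting I/Q pairs. The reference waveforms and unit amplitudes are assumptions for illustration; the actual module implements this on the FPGA.

```python
# Sketch: I/Q demodulation on one averaged 12-sample signal period per channel.
import numpy as np

N = 12  # samples per signal period after built-in averaging (from the abstract)
n = np.arange(N)
i_ref = np.cos(2 * np.pi * n / N)
q_ref = np.sin(2 * np.pi * n / N)

def iq_demodulate(period_avg):
    """Multiply-accumulate one averaged signal period with the I/Q references."""
    I = np.dot(period_avg, i_ref)
    Q = np.dot(period_avg, q_ref)
    return I, Q

def phase_difference(meas_period, ref_period):
    """Phase of the measurement channel relative to the reference channel, in degrees."""
    Im, Qm = iq_demodulate(meas_period)
    Ir, Qr = iq_demodulate(ref_period)
    return np.degrees(np.arctan2(Qm, Im) - np.arctan2(Qr, Ir))
```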
Particularly offshore, there is a trend to cluster wind turbines in large wind farms and, in the near future, to operate such a farm as an integrated power production plant. Predictability of individual turbine behavior across the entire fleet is key to such a strategy. Failure of turbine subcomponents should be detected well in advance to allow early planning of all necessary maintenance actions, so that they can be performed during periods of low wind and low electricity demand. In order to obtain the insights needed to predict component failure, it is necessary to have an integrated, clean dataset spanning all turbines of the fleet for a sufficiently long period of time. This paper illustrates our big-data approach to achieving this. In addition, advanced failure detection algorithms are necessary to detect failures in this dataset. This paper discusses a multi-level monitoring approach that combines machine learning with advanced physics-based signal-processing techniques. The advantage of combining different data sources to detect system degradation is the higher certainty obtained from multivariable criteria. In order to be able to perform long-term, high-frequency acceleration signal processing, a streaming processing approach is necessary. This allows the data to be analysed as the sensors generate it. This paper illustrates this streaming concept on 5 kHz acceleration data. A continuous spectrogram is generated from the data stream. Real-life offshore wind turbine data is used. Using this streaming approach to calculate bearing failure features on continuous acceleration data will support detection of failure propagation.
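A minimal sketch of the streaming-spectrogram idea outlined above: blocks of 5 kHz acceleration samples are windowed and transformed as they arrive, so spectrogram columns are emitted continuously without storing the full history. The block length, overlap, and window are assumptions; the paper's actual streaming framework is not specified in the abstract.

```python
# Sketch: continuous spectrogram from a stream of 5 kHz acceleration blocks.
import numpy as np

FS = 5000      # sampling rate of the acceleration channel [Hz]
NFFT = 1024    # samples per spectrogram column (assumed)
HOP = 512      # 50% overlap (assumed)
window = np.hanning(NFFT)

def streaming_spectrogram(sample_blocks):
    """Yield one power-spectrum column per hop from an iterable of incoming blocks."""
    buf = np.empty(0)
    for block in sample_blocks:
        buf = np.concatenate([buf, np.asarray(block, dtype=float)])
        while len(buf) >= NFFT:
            frame = buf[:NFFT] * window
            yield np.abs(np.fft.rfft(frame)) ** 2  # one spectrogram column
            buf = buf[HOP:]                        # keep the overlapping tail
```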
Speech is a one-dimensional, quasi-non-stationary, time-varying signal produced by a sequence of sounds. Speech signals are random in nature and are easily corrupted by noise, so recognition plays an important role in speech processing. Many researchers have designed recognition systems under challenging conditions: a speech corpus can vary with environment, region, dialect, age, and the rate at which words are spoken. Pre-processing is the first step and includes framing, de-noising and filtering. This paper focuses on speech recognition techniques and statistical open-source toolkits such as HTK, Julius, CMUSphinx and Kaldi. The word error rates obtained with all the toolkits on the WSJ1 corpus give a clear indication that Kaldi stands out, offering the most advanced recipes and scripts for speech recognition systems. An Indian English corpus by IITM implemented in Kaldi yields a WER of 6.41 and is compared against other Indian and international languages and well-known corpora.
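For reference, the word error rate (WER) used to compare the toolkits above is the edit distance between the reference and hypothesis transcripts, normalised by the reference length. The following sketch computes it with a standard dynamic program; it is an illustrative implementation, not the scoring script used in the paper.

```python
# Sketch: WER = (substitutions + deletions + insertions) / reference word count.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# wer("the cat sat on the mat", "the cat sat on mat")  # -> 16.67 (one deletion)
```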
The Large Hadron Collider at CERN generates enormous amounts of raw data, which presents a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to over 40 Tb/s. Advanced and characteristically expensive Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) are currently used to process this quantity of data. It is proposed that a cost-effective, high-data-throughput Processing Unit (PU) can be developed by using several ARM Systems on Chip in a cluster configuration, providing aggregated processing performance and data throughput while keeping software design difficulty minimal for the end user. ARM is a cost-effective and energy-efficient alternative CPU architecture to the long-established x86 architecture. This PU could be used for a variety of high-level algorithms on the high-throughput raw data. An Optimal Filtering algorithm has been implemented in C++ and several ARM platforms have been tested. Optimal Filtering is currently used in the ATLAS Tile Calorimeter front-end for basic energy reconstruction and is currently implemented on DSPs.
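As a rough illustration of the Optimal Filtering energy reconstruction mentioned above, the pulse amplitude is reconstructed as a weighted linear combination of the digitized samples. The weights and 7-sample readout below are placeholders for illustration only; in practice the coefficients are derived from the known pulse shape and noise autocorrelation, and the tested implementation is in C++.

```python
# Sketch: Optimal Filtering amplitude as a weighted sum of pedestal-subtracted samples.
import numpy as np

def optimal_filter_amplitude(samples, a_weights, pedestal):
    """Energy-proportional amplitude from one digitized pulse."""
    return np.dot(a_weights, np.asarray(samples, dtype=float) - pedestal)

# Example with placeholder weights for an assumed 7-sample readout:
# a = np.array([-0.05, 0.10, 0.35, 0.50, 0.35, 0.10, -0.05])
# amplitude = optimal_filter_amplitude(adc_samples, a, pedestal=50.0)
```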
Composites exhibit higher strength and stiffness, greater design flexibility and greater corrosion resistance compared to metallic materials. However, composites are susceptible to impact damage, and the typical damage behaviour in laminated composites is fibre breakage and delamination. Detection of failure in laminated composites is more complicated than ordinary non-destructive testing of metallic materials, as the echoes of interest can be drowned in noise due to the properties of the constituent materials and the multi-layered structure of the composites. In the current study, the detection of failure in multi-layered composite materials is investigated. To obtain a high probability of defect detection in composite materials, signal-processing algorithms were used to resolve echoes associated with defects in glass fibre-reinforced plastics (GRP) detected using ultrasonic testing. A pulse-echo method with a single transducer was used to transmit and receive ultrasound. The acquired signals were processed to reduce noise and to extract suitable features. Results were validated on GRP specimens with and without defects in order to demonstrate the feasibility of the method for defect detection in composites.
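One common way to resolve defect echoes buried in noise in pulse-echo ultrasonic data, sketched below, is matched filtering against the emitted pulse followed by Hilbert-envelope extraction and thresholding. The abstract does not detail the study's actual processing chain, so this chain and the threshold criterion are assumptions for illustration.

```python
# Sketch: matched filtering + envelope detection on an ultrasonic A-scan.
import numpy as np
from scipy.signal import correlate, hilbert

def echo_envelope(a_scan, emitted_pulse):
    """Matched-filter an A-scan with the emitted pulse and return its envelope."""
    filtered = correlate(a_scan, emitted_pulse, mode="same")
    return np.abs(hilbert(filtered))

def detect_echoes(envelope, threshold_factor=4.0):
    """Flag samples whose envelope exceeds a noise-based threshold (assumed criterion)."""
    noise_level = np.median(envelope)
    return np.flatnonzero(envelope > threshold_factor * noise_level)
```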
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multicenter study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and post-processing (66%). The “typical” lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.