With the widespread deployment of wavelength division multiplexing (WDM), optical transceivers increasingly use many glass micro-optical components (GMOC). Visual inspection of these GMOCs is a critical manufacturing step to ensure quality and reliability. However, manual inspection is often labor-intensive and time-consuming because the glass components are transparent and the defects are small and randomly located in three dimensions. Although automated optical inspection (AOI) exists, it has not yet provided the desired level of accuracy and efficiency. This paper reports the development of an AOI platform for 3D defect detection on GMOCs. The platform incorporates 3D video acquisition and a novel two-stage neural-network machine-learning algorithm. It includes a robotic arm for moving parts in 3D, a camera with an illumination module for video acquisition, and a video-stream processing unit with a machine vision algorithm for real-time defect detection on a production line. The robotic arm enables multi-perspective video capture of a test sample without refocusing. The two-stage machine-learning network uses a modified YOLOv4 architecture with color channel separation (CCS) convolution, an image quality evaluation (IQE) module, and a frame fusion module that integrates the single-frame detection results. This network can process multi-perspective video streams in real time for defect detection in a coarse-to-fine manner. The AOI platform was trained with only 30 samples and achieved promising performance, with a recall of 1.0, a detection accuracy of 97%, and an inspection time of 48 s per part.
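The frame-fusion step named in this abstract can be illustrated with a small, hedged sketch. The detector internals (the modified YOLOv4 with CCS convolution and the IQE module) are not reproduced here; the snippet only assumes that per-frame detections from several good-quality perspectives are already available and shows one plausible way to fuse them by overlap voting. All function names and thresholds below are illustrative, not the authors' implementation.

# Hedged sketch of multi-perspective frame fusion: single-frame detections are
# clustered by overlap and a defect is reported only if confirmed in several
# frames that passed the (not shown) image-quality filter.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse_detections(per_frame: List[List[Box]], iou_thr=0.5, min_votes=3) -> List[Box]:
    """Group boxes from different frames and keep those seen in >= min_votes frames."""
    clusters: List[List[Box]] = []
    for boxes in per_frame:              # one list of boxes per good-quality frame
        for box in boxes:
            for cluster in clusters:
                if iou(cluster[0], box) >= iou_thr:
                    cluster.append(box)
                    break
            else:
                clusters.append([box])
    return [c[0] for c in clusters if len(c) >= min_votes]

# Example: a defect seen in 3 of 4 views is kept; a spurious single-frame hit is not.
frames = [[(10, 10, 20, 20)], [(11, 10, 21, 20)], [(10, 11, 20, 21), (80, 80, 90, 90)], []]
print(fuse_detections(frames, min_votes=3))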
The preparation process of polyurethane (PU)-modified bitumen involves numerous design parameters and performance response indexes. Because polyurethane modifiers vary widely, no single preparation process is universally applicable. Traditional approaches such as the response surface method and the orthogonal design method suffer from low accuracy and require a large number of samples during process design. Therefore, determining the process parameters of polyurethane-modified bitumen accurately and efficiently for different application environments is a problem that urgently needs to be solved. Using a Kriging-Particle Swarm Optimization (PSO) algorithm, this paper proposes an efficient process design method for the preparation of polyurethane-modified bitumen. Combined with a sensitivity analysis, the most sensitive response indexes are screened out to reduce the number of samples and improve the design accuracy. Among these indexes, the dispersion coefficient, which characterizes the uniformity of the dispersed polyurethane phase, was measured in a fluorescence microscopy test using the Christiansen coefficient method. According to the target performance, the Kriging-PSO algorithm yielded the main process parameters of the PU-modified bitumen: a shear time of 86 min, a shear speed of 2450 rpm, a shear temperature of 148 degrees C, and a polyurethane content of 18.6%. The polyurethane-modified bitumen prepared with this optimal process met the expected performance indicators. The expected results were achieved with a small number of samples, indicating that this method can design the ideal process parameters of polyurethane-modified bitumen efficiently.
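The surrogate-plus-optimizer workflow named in this abstract (a Kriging model searched by particle swarm optimization) can be sketched as follows. The bounds, objective, and 30-sample training set below are synthetic placeholders standing in for the paper's laboratory data; only the structure of the method is shown.

# Hedged sketch: fit a Gaussian-process (Kriging) surrogate to a small sample
# set, then let a simple PSO search the process-parameter space on the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Design variables: shear time (min), shear speed (rpm), temperature (C), PU content (%)
lower = np.array([30.0, 1000.0, 120.0, 5.0])
upper = np.array([120.0, 4000.0, 180.0, 25.0])

# Placeholder training set standing in for laboratory samples
X = rng.uniform(lower, upper, size=(30, 4))
y = -np.sum(((X - (lower + upper) / 2) / (upper - lower)) ** 2, axis=1)  # fake response

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=np.ones(4)),
                              normalize_y=True).fit(X, y)

def pso(objective, lower, upper, n_particles=40, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm maximizing `objective` within box bounds."""
    pos = rng.uniform(lower, upper, size=(n_particles, len(lower)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmax(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lower, upper)
        val = objective(pos)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest

best = pso(lambda p: gp.predict(p), lower, upper)
print("surrogate-optimal parameters:", np.round(best, 1))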
To address the practical problems in life-cycle health monitoring and diagnosis of large, complex equipment, a machine-learning algorithm is applied to mine the equipment's operational big data: an expert knowledge base is established, fault-related diagnosis rules are obtained, and intelligent online monitoring and remote diagnosis of the equipment's health condition are realised. The system uses an uncertainty-based fault prediction method and a hybrid intelligent algorithm to discover hierarchical associations between operational feature data and operational faults, to extract fault features, and to diagnose faults intelligently, which effectively improves the sensitivity, robustness, and accuracy of monitoring and diagnosis. On a cloud service platform based on the Internet of Things, the system realises intelligent fault prediction and diagnosis, establishes a proactive maintenance system, improves production efficiency, and ensures production safety.
The coconut mite, Aceria guerreronis Keifer (Acari: Eriophyidae), is a destructive mite pest of coconut, causing significant economic losses. However, an effective pest management strategy requires a clear understanding of the geographical areas at risk from the target pest. Therefore, we predicted the potential global distribution of A. guerreronis using a machine-learning algorithm based on maximum entropy. The potential future distribution of A. guerreronis was projected for the 2040 and 2060 periods under two climate change emission scenarios (SSP1-2.6 and SSP5-8.5) in the context of the sixth assessment report (AR6) of the Intergovernmental Panel on Climate Change. The MaxEnt model predicts habitat suitability for A. guerreronis outside its present distribution, with suitable habitats in Oceania, Asia, Africa, and the Americas. The habitat suitability for the pest will decrease from 2040 to 2060. The areas at the highest risk of A. guerreronis are those with an annual average temperature of around 25 degrees C, a mean annual precipitation of about 1459 mm, a mean precipitation seasonality close to 64%, an average diurnal temperature variation of about 8.6 degrees C, and a mean temperature seasonality of about 149.7 degrees C. Our findings provide information for quarantine measures and policymaking, especially where A. guerreronis is presently still absent.
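MaxEnt modelling is normally run through dedicated software with presence records and bioclimatic rasters; the sketch below is only a rough, hedged illustration of the presence-versus-background workflow behind such a model, with a penalized logistic regression standing in for the MaxEnt fit and synthetic points standing in for occurrence data. The variable list mirrors the predictors highlighted in the abstract.

# Hedged illustration of a presence/background species-distribution workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Columns: annual mean temperature (C), annual precipitation (mm),
# precipitation seasonality (%), mean diurnal temperature range (C),
# temperature seasonality -- synthetic values only.
presence = np.column_stack([
    rng.normal(25, 1.5, 200), rng.normal(1459, 200, 200),
    rng.normal(64, 8, 200), rng.normal(8.6, 1.0, 200), rng.normal(149.7, 20, 200)])
background = np.column_stack([
    rng.uniform(0, 35, 2000), rng.uniform(0, 4000, 2000),
    rng.uniform(10, 150, 2000), rng.uniform(4, 18, 2000), rng.uniform(20, 800, 2000)])

X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]

sdm = make_pipeline(StandardScaler(), LogisticRegression(C=1.0, max_iter=1000)).fit(X, y)

# Relative habitat suitability for a hypothetical grid cell under a future scenario
cell = np.array([[26.0, 1300.0, 60.0, 9.0, 160.0]])
print("relative suitability:", sdm.predict_proba(cell)[0, 1])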
It is incontrovertible that bioelectronics-enabled bio-integrated systems are among the most promising technologies of the upcoming decades. Driven by the great versatility of emerging artificial intelligence, a machine-learning-supported bio-integrated intelligent sensing system (BISS) can perform data processing and intelligent recognition while conducting multimodal human-centered sensing; such systems capitalize on the benefits of both bioelectronics and the supporting algorithms, enabling accurate physiological and somatosensory recognition at the cost of certain computing resources. Herein, an overview of recent progress in BISS is presented, with an emphasis on high-tech applications enabled by innovations in combining flexible bioelectronic sensors with supporting algorithmic systems. The main applications can be divided into three categories, namely implantable, skin-mounted, and wearable BISS, which have different requirements for materials, fabrication methods, and algorithms, respectively. Advances in these areas open new avenues for employing BISS as future human-machine interfaces for personalized healthcare, human enhancement, and other broad applications.
Titrating tacrolimus concentration in liver transplantation recipients remains a challenge in the early post-transplant period. This multicenter retrospective cohort study aimed to develop and validate a machine-learning algorithm to predict tacrolimus concentration. Data from 443 patients undergoing liver transplantation between 2017 and 2020 at an academic hospital in South Korea were collected to train the machine-learning models. Long short-term memory (LSTM) and gradient-boosted regression tree (GBRT) models were developed using time-series doses and concentrations of tacrolimus with covariates of age, sex, weight, height, liver enzymes, total bilirubin, international normalized ratio, albumin, serum creatinine, and hematocrit. We compared their performance with linear regression and population pharmacokinetic models, followed by external validation using the eICU Collaborative Research Database collected in the United States between 2014 and 2015. In the external validation, the LSTM outperformed the GBRT, linear regression, and population pharmacokinetic models in median performance error (8.8%, 25.3%, 13.9%, and -11.4%, respectively; P < 0.001) and median absolute performance error (22.3%, 33.1%, 26.8%, and 23.4%, respectively; P < 0.001). Dosing based on the LSTM model's suggestions achieved therapeutic concentrations more frequently (chi-square test, P < 0.001). Patients who received doses outside the suggested range had ICU stays that were longer by an average of 2.5 days (P = 0.042). In conclusion, the machine-learning models showed excellent performance in predicting tacrolimus concentration in liver transplantation recipients and can be useful for concentration titration in these patients.
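Two pieces of this study can be sketched from the abstract alone: the performance-error metrics used for comparison and the general shape of a many-to-one LSTM that maps a patient's dose/lab time series to the next concentration. The snippet below is a hedged sketch only: the performance-error convention follows a common pharmacokinetic definition, and the feature count and network layout are assumptions, not the authors' exact implementation.

# Hedged sketch of median (absolute) performance error and a generic LSTM regressor.
import numpy as np
import torch
import torch.nn as nn

def median_performance_errors(measured, predicted):
    """Return (MDPE %, MDAPE %) over paired tacrolimus concentrations."""
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    pe = (measured - predicted) / predicted * 100.0
    return np.median(pe), np.median(np.abs(pe))

class TacroLSTM(nn.Module):
    """Generic many-to-one LSTM: a sequence of dose/lab features -> next concentration."""
    def __init__(self, n_features=12, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict from the last time step

# Toy usage with random tensors; real inputs would be per-patient time series.
model = TacroLSTM()
pred = model(torch.randn(8, 5, 12))
print(pred.shape)                         # torch.Size([8, 1])
print(median_performance_errors([7.2, 9.5, 6.1], [8.0, 9.0, 6.5]))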
We use a machine-learning algorithm known as boosted regression trees (BRT) to implement an orthogonality test of the rationality of aggregate stock market forecasts. The BRT algorithm endogenously selects the predictor variables used to proxy the information set of forecasters so as to maximize the predictive power for the forecast error. The BRT algorithm also accounts for a potential non-linear dependence of the forecast error on the predictor variables and for interdependencies between the predictor variables. Our main finding is that, given our set of predictor variables, the rational expectations hypothesis (REH) cannot be rejected for short-term forecasts and that there is evidence against the REH for longer term forecasts. Results for three different groups of forecasters corroborate our main finding.
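The orthogonality-test logic is straightforward to sketch: under the rational expectations hypothesis the forecast error should be unpredictable from variables in the forecasters' information set, so a boosted regression tree regressing the error on candidate predictors should show no out-of-sample explanatory power. The snippet below illustrates this with synthetic data and scikit-learn's gradient boosting as a stand-in for the authors' BRT setup; the predictors and error processes are placeholders.

# Hedged sketch of an orthogonality test with boosted regression trees.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500

# Candidate predictors proxying the information set (e.g. dividend yield,
# term spread, past returns) -- synthetic here.
X = rng.normal(size=(n, 6))

# Case 1: rational forecasts -> error is pure noise, orthogonal to X.
err_rational = rng.normal(size=n)
# Case 2: irrational forecasts -> error depends non-linearly on a predictor.
err_biased = 0.8 * np.tanh(X[:, 0]) + rng.normal(scale=0.5, size=n)

brt = GradientBoostingRegressor(n_estimators=300, max_depth=2, learning_rate=0.05)
for label, err in [("rational", err_rational), ("biased", err_biased)]:
    r2 = cross_val_score(brt, X, err, cv=5, scoring="r2").mean()
    print(f"{label}: out-of-sample R^2 = {r2:.3f}")  # ~<=0 if REH holds, >0 otherwise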