Semantic indexing of biomedical articles is difficult due to the extensive use of domain-specific terminology. The task is even more difficult when the corpus is not in English and when there are only a limited number...
The purpose of this study is to investigate mass valuation of unimproved land using machine learning techniques. The study was conducted in Nairobi County, one of the 47 Kenyan counties established under the 2010 constitution. A total of 1440 geocoded data points containing the market selling price of vacant land in Nairobi were web scraped from major property listing websites. These points served as the dependent variable, expressed as the unit price of vacant land per square meter. The covariates used in this study were categorized into accessibility, environmental, physical, and socio-economic factors. Because of multicollinearity among the covariates, PLS and PCA were used to transform the observed features, yielding an uncorrelated set of components for training the machine learning algorithms. The dependent variable and the uncorrelated components derived from these feature reduction methods were used to train three regression models: Random Forest, Support Vector Regression, and Extreme Gradient Boosting (XGBoost) regression. PLS performed better than PCA because the former maximizes the covariance between the dependent and independent variables, whereas the latter maximizes variance among the independent variables only and ignores the relationship between predictors and response. The first nine components were identified as significant by both PLS and PCA. The spatial distribution of vacant land value within Nairobi County was consistent across all three machine learning models. Land values were highest in the central business district, with the pattern spreading northwards and westwards from the CBD; relatively low vacant land values were observed on the eastern side of the county and at the extreme periphery of the Nairobi County boundary. From the accuracy metrics of R-squared and MAPE, Random Forest Reg...
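As a rough illustration of the pipeline this abstract describes, the sketch below chains a scikit-learn PLS transform (nine components, mirroring the abstract) into a Random Forest regressor and reports R-squared and MAPE. The synthetic data, feature count, and hyperparameters are placeholders, not the study's.

```python
# Minimal sketch (not the authors' code): reduce collinear covariates with
# PLS, then fit a Random Forest on the uncorrelated component scores.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1440, 20))  # stand-in for accessibility/environmental/physical/socio-economic covariates
y = X[:, :5].sum(axis=1) + 10 + rng.normal(scale=0.5, size=1440)  # stand-in for price per square meter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PLS maximizes covariance between X and y, producing uncorrelated scores.
pls = PLSRegression(n_components=9).fit(X_tr, y_tr)
T_tr, T_te = pls.transform(X_tr), pls.transform(X_te)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(T_tr, y_tr)
pred = rf.predict(T_te)
print("R^2:", r2_score(y_te, pred), "MAPE:", mean_absolute_percentage_error(y_te, pred))
```

Swapping `PLSRegression` for `sklearn.decomposition.PCA` in the same pipeline reproduces the comparison the study draws, since PCA transforms the covariates without reference to the response.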
In the dynamic field of biomedical engineering, the pervasive integration of machine learning into physiological signal processing serves various purposes, from diagnostics to Brain-Computer Interface (BCI) and Human-Machine Interface (HMI) using techniques such as Electroencephalography (EEG), Electromyography (EMG), Electrocardiography (ECG), and others. Nonetheless, the inherent scientific diversity within biomedical research often poses challenges, with practices sometimes misaligned with machine learning and standard statistical principles. This review analyzes 82 influential articles (2018–2023) from IEEE Xplore, aiming to identify weaknesses and assess overall rigor. It emphasizes the need for enhanced research quality and reproducibility. The key findings reveal that in over half of the articles, the ratio of female-to-male participants recruited for data collection is below 50%. Additionally, nearly 30% of the studies involve fewer than 10 subjects in data collection, with only 7% providing justification for their sample size. Moreover, only about 34% of the articles provide access to their data, and a mere 26% report performance using a confusion matrix. These insights underscore critical areas for improvement, enhancing the robustness and transparency of applications in the physiological signal processing domain.
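Since the review singles out confusion-matrix reporting as rare (26% of articles), here is a minimal sketch of what such reporting could look like with scikit-learn; the labels and predictions below are invented purely for illustration.

```python
# Illustrative confusion-matrix reporting for a binary physiological-signal
# classifier; the data here are made up for demonstration.
from sklearn.metrics import confusion_matrix, classification_report

y_true = [0, 1, 1, 0, 1, 0, 1, 1]  # e.g., rest vs. movement in a hypothetical EMG task
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

print(confusion_matrix(y_true, y_pred))       # rows: true class, columns: predicted class
print(classification_report(y_true, y_pred))  # per-class precision, recall, F1
```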
Malaria is a communicable disease spread by mosquito bites. Existing diagnostic approaches involve manually counting the number of infected red blood cells (RBCs) through microscopic examination of the patient's stained blood cells. This is a demanding task that requires careful visual and intellectual attention, and a thorough manual examination must be undertaken by a knowledgeable expert. Deep learning algorithms could simplify disease diagnosis and make this analysis more straightforward. Diagnosing malaria also entails managing a vast quantity of data, including photographs of microscopic blood smears. Achieving higher accuracy requires training deeper models, which demands fast computing and substantial computational resources. The Accelerated Malaria Diagnosis System (AMDS) developed in this work therefore uses a convolutional neural network technique accelerated on a graphics processing unit (GPU) to diagnose malaria.
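As a loose illustration of GPU-accelerated CNN inference of the kind AMDS relies on, the sketch below defines a small PyTorch classifier and moves it onto the GPU when one is available. The architecture, patch size, and class names are assumptions for demonstration, not the published AMDS design.

```python
# Minimal sketch: a small CNN for blood-smear patches, run on a GPU if present.
import torch
import torch.nn as nn

class SmearCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # hypothetical classes: parasitized vs. uninfected

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SmearCNN().to(device)                     # weights live on the GPU when available
batch = torch.randn(8, 3, 64, 64, device=device)  # stand-in for 64x64 smear patches
logits = model(batch)
print(logits.shape)  # torch.Size([8, 2])
```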
Active learning (AL) has found wide applications in medical image segmentation, aiming to alleviate the annotation workload and enhance performance. Conventional uncertainty-based AL methods, such as entropy-based and Bayesian approaches, often rely on an aggregate of all pixel-level metrics. However, in imbalanced settings, these methods tend to neglect the significance of target regions, e.g., lesions and tumors. Moreover, uncertainty-based selection introduces redundancy. These factors lead to unsatisfactory performance and, in many cases, even underperform random sampling. To solve this problem, we introduce a novel approach called Selective Uncertainty-based AL, which avoids the conventional practice of summing up the metrics of all pixels. Through a filtering process, our strategy prioritizes pixels within target areas and those near decision boundaries, resolving the aforementioned disregard for target areas and the redundancy. Our method showed substantial improvements across five different uncertainty-based methods and two distinct datasets, utilizing fewer labeled data to reach the supervised baseline and consistently achieving the highest overall performance. Our code is available at https://***/HelenMa9998/Selective_Uncertainty_AL.
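A minimal sketch of the selective idea, under assumed thresholds: score each unlabeled image by per-pixel entropy summed only over predicted target pixels and pixels near the decision boundary, rather than over the whole image. This is a simplification for intuition, not the authors' exact filter.

```python
# Selective uncertainty scoring sketch: filter pixels before aggregating
# entropy, instead of summing the metric over every pixel in the image.
import numpy as np

def selective_uncertainty(prob_fg, boundary_band=0.1):
    """prob_fg: (H, W) foreground probabilities from a segmentation model."""
    p = np.clip(prob_fg, 1e-7, 1 - 1e-7)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # per-pixel binary entropy

    target = p > 0.5                                 # predicted target region (e.g., lesion)
    near_boundary = np.abs(p - 0.5) < boundary_band  # pixels close to the decision boundary
    mask = target | near_boundary
    return entropy[mask].sum()                       # score used to rank unlabeled images

# Rank unlabeled images and pick the highest-scoring ones for annotation.
rng = np.random.default_rng(0)
probs = [rng.random((128, 128)) for _ in range(5)]  # stand-ins for model outputs
ranked = sorted(range(5), key=lambda i: selective_uncertainty(probs[i]), reverse=True)
print(ranked)
```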
We extend the relation between univariate polynomial optimization in one complex variable and the polynomial eigenvalue problem to the multivariate case. The first-order necessary conditions for optimality of the multivariate polynomial optimization problem, which are computed using Wirtinger derivatives, constitute a system of multivariate polynomial equations in the complex variables and their complex conjugates. Wirtinger calculus provides an elegant way to differentiate real-valued (cost) functions in complex variables. An elimination of the complex conjugate variables, via the Macaulay matrix, results in a (rectangular) multiparameter eigenvalue problem, (some of) the eigentuples of which correspond to the stationary points of the original real-valued cost function. We illustrate our novel globally optimal optimization approach with several (didactical) examples.
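A one-variable illustration of the mechanism (the cost function below is chosen here for exposition and does not come from the paper): Wirtinger stationarity turns the optimization problem into polynomial equations in the variable and its conjugate.

```latex
% Wirtinger derivative, treating z and \bar{z} as independent variables.
\[
  \frac{\partial}{\partial \bar{z}}
  := \frac{1}{2}\!\left(\frac{\partial}{\partial x}
        + i\,\frac{\partial}{\partial y}\right),
  \qquad z = x + iy .
\]
% Example real-valued cost and its first-order condition:
\[
  f(z,\bar{z}) = (z\bar{z})^{2} - 2\,z\bar{z}
  \quad\Longrightarrow\quad
  \frac{\partial f}{\partial \bar{z}} = 2z\,(z\bar{z} - 1) = 0 ,
\]
% so the stationary points are z = 0 and the circle z\bar{z} = 1. In several
% variables, the analogous polynomial system in the variables and their
% conjugates is what the Macaulay matrix eliminates.
```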
Recent Face Super-resolution (FSR) based on iterative collaboration between a facial image recovery network and landmark estimation has succeeded in super-resolving facial images. However, noise in the coarse features produced by low-level feature extraction leads to inaccurate facial priors such as landmarks and component maps, degrading the super-resolved face image at large scales. This paper proposes a Non-local technique for a deep attentive face super-resolution network (NLDA). A non-local module is placed before the residual channel attention block (RCAB) to effectively suppress noise degradation in the coarse features. The proposed model optimizes feature extraction and improves facial landmark fusion to yield higher-quality super-resolved images. This approach facilitates more accurate landmark estimation and boosts the performance of our model at large scales and across various face poses. Quantitative and qualitative experiments on the CelebA and Helen face image datasets show that the proposed method outperforms other state-of-the-art FSR methods in recovering high-quality face images across various face poses and at large scales.
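For concreteness, the sketch below implements a generic embedded-Gaussian non-local block of the kind placed before the RCAB; it is an assumed, simplified variant for illustration, not the NLDA release.

```python
# Generic non-local block sketch: every spatial position attends to every
# other position, letting the network suppress locally inconsistent noise.
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        inter = max(channels // 2, 1)
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.phi(x).flatten(2)                    # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)           # pairwise affinities over all positions
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                        # residual connection preserves the input features

feat = torch.randn(1, 64, 32, 32)     # stand-in coarse feature map
print(NonLocalBlock(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```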
In this paper, we describe the epilepsy detection grand challenge, in association with ICASSP 2023. The challenge was centered on seizure detection using wearable behind-the-ear EEG. Two separate tasks were set for th...
In times of crisis, it is vital to construct a simple form of person-detecting robot in order to discover casualties, and the purpose of this project is to provide a prototype of a design that is capable of being p...
The lack of detailed process information such as part shape, process setups, and process efficiencies limits the accuracy of life cycle inventory models used for estimating the environmental impacts of manufacturing processes. Such limitations are particularly true for processes such as additive manufacturing, where part shape significantly influences the resulting process characteristics. To address this knowledge gap, this paper investigates the influence of part shape on the process inventory of fused deposition modeling 3D printing. To this end, we experimentally measured the resource consumption for printing 68 different isovolumetric mechanical components with identical print settings. These results are presented in a structured form, titled "Resource Use for 3D Printing Isovolumetric Mechanical Components (r3DiM) Benchmark", which includes our experimental data, 3D model files, the NC code used for printing the models, and, for each object, the material use and the layer-wise energy consumption of the bed and extruder heaters and other printing peripherals. It is our hope that the r3DiM benchmark will serve as a reference for future researchers interested in creating shape-aware, predictive life cycle inventory models for fused deposition modeling 3D printing processes. The r3DiM benchmark can be accessed from the following url: https://***/r3DiMBenchmark.
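A sketch of how one might aggregate layer-wise energy records of the kind the benchmark provides. The file name and column schema below are invented for illustration and are not the benchmark's actual format.

```python
# Hypothetical aggregation of per-layer energy records into per-part totals.
import csv
from collections import defaultdict

per_part = defaultdict(float)
with open("r3dim_layerwise_energy.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        # assumed columns: part_id, layer, bed_J, extruder_J, peripherals_J
        per_part[row["part_id"]] += (
            float(row["bed_J"]) + float(row["extruder_J"]) + float(row["peripherals_J"])
        )

for part, joules in sorted(per_part.items()):
    print(f"{part}: {joules / 1000:.1f} kJ total")
```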