Nonnegative Matrix Factorization (NMF) provides an important approach to unsupervised learning, but it faces computational challenges when applied to the clustering of high-dimensional datasets. In this paper, a class of novel nonmonotone gradient-descent algorithms is developed for solving box-constrained NMF problems. Unlike existing algorithms in the literature, which update each matrix factor individually while fixing the other, our algorithms update the paired matrix factors simultaneously by leveraging adaptive projected Barzilai-Borwein directions and appropriate step sizes generated by the developed nonmonotone line search rules. Theoretically, the developed algorithms are proved to be well defined and globally convergent. Extensive numerical tests on public image datasets demonstrate that the developed algorithms outperform state-of-the-art ones in terms of clustering performance, computational efficiency, and robustness in mining noisy data.
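The simultaneous paired update described above can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the nonmonotone line search is replaced by a conservative step cap, and the box bound `u`, the iteration count, and the BB1 step variant are assumptions.

```python
import numpy as np

def projected_bb_nmf(X, r, u=10.0, iters=300, seed=0):
    """Sketch: simultaneous projected-gradient updates of the factor pair
    (W, H) for min ||X - W H||_F^2 s.t. 0 <= W, H <= u, with a
    Barzilai-Borwein (BB1) step computed over the stacked pair.
    The paper's nonmonotone line search is omitted; the step is
    conservatively capped instead."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.uniform(0, 1, (m, r))
    H = rng.uniform(0, 1, (r, n))
    alpha = 1e-2
    W_old = H_old = gW_old = gH_old = None
    for k in range(iters):
        R = W @ H - X
        gW, gH = R @ H.T, W.T @ R            # gradients of both factors at once
        if k > 0:
            sy = (np.sum((W - W_old) * (gW - gW_old))
                  + np.sum((H - H_old) * (gH - gH_old)))
            ss = np.sum((W - W_old) ** 2) + np.sum((H - H_old) ** 2)
            if sy > 1e-12:
                alpha = min(max(ss / sy, 1e-4), 0.1)   # safeguarded BB1 step
        W_old, H_old, gW_old, gH_old = W, H, gW, gH
        W = np.clip(W - alpha * gW, 0.0, u)  # paired projected update
        H = np.clip(H - alpha * gH, 0.0, u)
    return W, H
```

The key point is that `W` and `H` move in the same iteration using one shared BB step, rather than alternating block updates with the other factor held fixed.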
Computing the approximation quality is a crucial step in every iteration of sandwiching algorithms (also called Benson-type algorithms) used for the approximation of convex Pareto fronts, sets or functions. Two quality indicators often used in these algorithms are polyhedral gauge and epsilon indicator. In this article, we develop an algorithm to compute the polyhedral gauge and epsilon indicator approximation quality more efficiently. We derive criteria that assess whether the distance between a vertex of the outer approximation and the inner approximation needs to be recalculated. We interpret these criteria geometrically and compare them to a criterion developed by Dörfler et al. for a different quality indicator using convex optimization theory. For the bi-criteria case, we show that only two linear programs need to be solved in each iteration. We show that for more than two objectives, no constant bound on the number of linear programs to be checked can be derived. Numerical examples illustrate that incorporating the developed criteria into the sandwiching algorithm leads to a reduction in the approximation time of up to 94% and that the approximation time increases more slowly with the number of iterations and the number of objective space dimensions.
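In the bi-criteria case the inner approximation is a polyline through nondominated points, so the vertex-to-inner-approximation distance that the criteria above decide whether to recalculate has a simple geometric form. The sketch below computes a plain Euclidean point-to-polyline distance; the article's actual indicators (polyhedral gauge, epsilon indicator) use different norms and are computed via linear programs, so this is only an assumed illustration of the underlying geometry.

```python
import numpy as np

def point_to_polyline_distance(p, verts):
    """Euclidean distance from a point p (e.g. an outer-approximation
    vertex) to the polyline through verts (e.g. an inner approximation
    of a bi-criteria Pareto front)."""
    p = np.asarray(p, float)
    verts = np.asarray(verts, float)
    best = np.inf
    for a, b in zip(verts[:-1], verts[1:]):
        ab = b - a
        # parameter of the orthogonal projection, clamped to the segment
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.linalg.norm(p - (a + t * ab)))
    return best
```

For example, the distance from the ideal point (0, 0) to the segment joining (0, 1) and (1, 0) is sqrt(2)/2.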
Hand gesture recognition (HGR) is a convenient and natural form of human-computer interaction. It is suitable for various applications. Much research has already focused on wearable device-based HGR. By contrast, this paper gives an overview focused on device-free HGR. That means we evaluate HGR systems that do not require the user to wear something like a data glove or hold a device. HGR systems are explored regarding technology, hardware, and algorithms. We clearly demonstrate how timing and power requirements interact with the hardware, pre-processing algorithm, classification approach, and sensing technology, and how these choices permit more or less granularity, accuracy, and numbers of gestures. Sensor modalities evaluated are Wi-Fi, vision, radar, mobile networks, and ultrasound. The pre-processing techniques explored are stereo vision, multiple-input multiple-output (MIMO) processing, spectrograms, phased arrays, range-Doppler maps, range-angle maps, Doppler-angle maps, and multilateration. Classification approaches with and without ML are studied. Among those with ML, assessed algorithms range from simple tree structures to transformers. All applications are evaluated taking into account their level of integration. This encompasses determining whether the application presented is suitable for edge integration, their real-time capability, whether continuous learning is implemented, what robustness was achieved, whether ML is applied, and the accuracy level. Our survey aims to provide a thorough understanding of the current state of the art in device-free HGR on edge devices and in general. Finally, on the basis of present-day challenges and opportunities in this field, we outline the further research we suggest for improving HGR. Our goal is to promote the development of efficient and accurate gesture recognition systems.
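Of the pre-processing techniques surveyed, the spectrogram is the simplest to sketch: frame the received signal, window each frame, and take the magnitude of its FFT. The following minimal version assumes nothing about the sensing modality; real radar or ultrasound HGR pipelines would add Doppler processing, normalization, and modality-specific windowing choices.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Minimal magnitude spectrogram: overlapping Hann-windowed frames,
    one real FFT per frame. Returns an array of shape
    (n_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))
```

With sampling rate fs, frequency bin k of each frame corresponds to k * fs / frame_len; a gesture's micro-Doppler signature appears as a time-varying ridge across frames.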
We propose a randomized data-driven solver for multiscale mechanics problems which improves accuracy by escaping local minima and reducing dependency on metric parameters, while requiring minimal changes relative to non-randomized solvers. We additionally develop an adaptive data-generation scheme to enrich data sets in an effective manner. This enrichment is achieved by utilizing material tangent information and an error-weighted k-means clustering algorithm. The proposed algorithms are assessed by means of three-dimensional test cases with data from a representative volume element model.
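The enrichment step can be pictured with a small weighted k-means sketch: each data point carries a weight (here standing in for a local error measure), and centroids are weight-averaged, so new sampling sites are pulled toward high-error regions. The weighting scheme and initialization below are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def weighted_kmeans(points, weights, k, iters=50, seed=0):
    """Lloyd-style k-means where each point has a nonnegative weight
    (e.g. a local data-driven error); centroids are weighted means."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(points[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids as weighted means of their members
        for j in range(k):
            mask = labels == j
            if mask.any():
                w = weights[mask][:, None]
                centers[j] = (w * points[mask]).sum(axis=0) / w.sum()
    return centers, labels
```

With uniform weights this reduces to ordinary k-means; with error weights, a cluster's centroid shifts toward its worst-approximated members.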
This article challenges the dominant 'black box' metaphor in critical algorithm studies by proposing a phenomenological framework for understanding how social media algorithms manifest themselves in user experience. While the black box paradigm treats algorithms as opaque, self-contained entities that exist only 'behind the scenes', this article argues that algorithms are better understood as genetic phenomena that unfold temporally through user-platform interactions. Recent scholarship in critical algorithm studies has already identified various ways in which algorithms manifest in user experience: through affective responses, algorithmic self-reflexivity, disruptions of normal experience, points of contention, and folk theories. Yet, while these studies gesture toward a phenomenological understanding of algorithms, they do so without explicitly drawing on phenomenological theory. This article demonstrates how phenomenology, particularly a Husserlian genetic approach, can further conceptualize these already-documented algorithmic encounters. Moving beyond both the paradigm of artifacts and static phenomenological approaches, the analysis shows how algorithms emerge as inherently relational processes that co-constitute user experience over time. By reconceptualizing algorithms as genetic phenomena rather than black boxes, this paper provides a theoretical framework for understanding how algorithmic awareness develops from pre-reflective affective encounters to explicit folk theories, while remaining inextricably linked to users' self-understanding. This phenomenological framework contributes to a more nuanced understanding of algorithmic mediation in contemporary social media environments and opens new pathways for investigating digital technologies.
Given the limited potential of conventional statistical models, machine learning (ML) techniques in the field of energy poverty have attracted growing interest, especially during the last five years. The present paper adds new insights to the existing literature by exploring the capacity of ML algorithms to successfully predict energy poverty, as defined by different indicators, for the case of the "Urban Region of Athens" in Greece. More specifically, five energy poverty indicators were predicted on the basis of socio-economic/technical variables through training different machine learning classifiers. The analysis showed that almost all classifiers managed to successfully predict three out of five energy poverty indicators with a remarkably good level of accuracy, i.e., 81-94% correct predictions of energy-poor households for the best models and an overall accuracy rate of over 94%. The most successful classifier in terms of energy poverty prediction proved to be the "Random Forest" classifier, closely followed by "Trees J48" and "Multilayer Perceptron" classifiers (decision tree and neural network approaches). The impressively high accuracy scores of the models confirmed that ML is a promising tool towards understanding energy poverty drivers and shaping appropriate energy policies.
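The study's best classifier, Random Forest, rests on bootstrap aggregation: fit many small trees on resampled data and take a majority vote. The toy sketch below uses depth-1 stumps instead of full trees and invented data, so it only illustrates the mechanism; the study itself used full Random Forest, J48, and Multilayer Perceptron models on real socio-economic variables.

```python
import numpy as np

def fit_stump(X, y):
    """Best single-feature threshold split by training error (a depth-1 tree)."""
    best, best_err = (0, X[0, 0], 0), np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            side = (X[:, f] > t).astype(int)
            for flip in (0, 1):
                pred = side if flip == 0 else 1 - side
                err = int(np.sum(pred != y))
                if err < best_err:
                    best, best_err = (f, t, flip), err
    return best

def forest_predict(X, y, Xq, n_trees=25, seed=0):
    """Bagged stumps: fit each stump on a bootstrap resample of (X, y),
    then predict query points Xq by majority vote."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(Xq))
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))   # bootstrap resample
        f, t, flip = fit_stump(X[idx], y[idx])
        pred = (Xq[:, f] > t).astype(int)
        votes += (1 - pred) if flip else pred
    return (votes * 2 > n_trees).astype(int)
```

Averaging over bootstrap resamples is what gives the ensemble its robustness relative to any single tree.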
This paper examines five regularization algorithms for sound source identification, particularly in near-field acoustic holography applications that employ the equivalent source method (ESM) based on ℓ1 norm and ℓ2 norm. Both simulations and experimental tests were conducted to evaluate the performance of Tikhonov regularization, ℓ1-CVX, Bregman iteration (BI), fast iterative shrinkage-thresholding algorithm for ESM (FISTESM), and iterative reweighted least squares (IRLS). These algorithms were assessed based on their accuracy in localizing both single and coherent sound sources across a range of frequencies. Findings reveal that ℓ1-CVX and BI achieve high levels of resolution and stability, especially for coherent sources, while FISTESM proves to be highly efficient at higher frequencies. In contrast, Tikhonov regularization exhibits limitations when applied to sparse sound sources, and IRLS demonstrates particular effectiveness at lower frequencies. This comparative study provides critical guidance for selecting the most suitable algorithm according to specific frequency and source characteristics.
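The core of a FISTESM-style solver is FISTA applied to the ℓ1-regularized least-squares problem min_x 0.5*||Ax − b||² + λ||x||₁, where A would be the acoustic transfer matrix and x the equivalent source strengths. The sketch below shows only that generic optimization core; the acoustic modeling, complex-valued transfer functions, and the paper's ESM-specific adaptations are omitted.

```python
import numpy as np

def fista(A, b, lam, iters=300):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1: a gradient step
    followed by soft-thresholding, with Nesterov momentum."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        g = A.T @ (A @ z - b)               # gradient of the smooth part at z
        v = z - g / L
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft-threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum
        x, t = x_new, t_new
    return x
```

The soft-thresholding step is what drives small coefficients exactly to zero, producing the sparse source maps that ℓ1-based methods are valued for.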
The increasing demand for high-precision real-time data processing in satellite clusters requires efficient algorithms to manage inherent uncertainties in space-based systems. We propose an innovative framework that integrates Quantum Neural Network (QNN) architecture into Kalman filtering processes, specifically tailored for Low Earth Orbit satellite clusters. Our quantum computing-based approach achieves a significant improvement in prediction accuracy and a reduction in mean absolute error compared to classical Kalman filtering techniques. These advances significantly improve computational efficiency and error handling, making the method highly scalable under varying noise levels. A comparative analysis demonstrates the superior performance of the Quantum Kalman Filter in processing speed, resource utilization, and prediction accuracy, all evaluated within the constraints of LEO satellite constellations. These findings highlight the potential of quantum computing to optimize data processing strategies for future missions, including deep space explorations.
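The classical baseline against which the Quantum Kalman Filter is compared is the textbook linear Kalman filter, alternating predict and update steps. The sketch below shows that baseline on a generic linear state-space model; the matrices in the usage example (a 1D constant-velocity tracker) are illustrative assumptions, not the paper's satellite dynamics.

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Textbook linear Kalman filter. zs: sequence of measurements;
    F, H: state-transition and measurement matrices; Q, R: process and
    measurement noise covariances. Returns the filtered state estimates."""
    x, P = x0, P0
    estimates = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (np.atleast_1d(z) - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

Tracking a constant-velocity target from noiseless position measurements, the estimate converges to the true position and unit velocity within a few steps.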
Aim: The planning target volume (PTV) homogeneity objective was developed for previous-generation dose calculation algorithms. Advanced algorithms report doses to medium-in-medium (Dm,m) and their values depend on the medium considered, breaking the link between uniform irradiation and dose homogeneity. This work revises the PTV homogeneity objective when high-density heterogeneities are involved. We evaluated robust versus PTV-based planning, and a dose reporting method that removes composition dependencies to express doses to muscle in muscle-like medium (Dmuscle,muscle*). Methods: Four cases featuring bone or metal within the PTV were selected and planned in RayStation with Monte Carlo. Three plans were created for each case: robust optimization for Dm,m (Robust-Dm,m), and PTV-based optimization for Dm,m (PTV-Dm,m) and Dmuscle,muscle* (PTV-Dmuscle,muscle*). The plans were reported in Dm,m and Dmuscle,muscle*, and their dosimetric parameters, robustness, complexities, and optimization times were assessed. Results: Robust-Dm,m and PTV-Dmuscle,muscle* plans presented similar Dm,m distributions with inhomogeneous PTV doses due to cold spots in high-density regions. PTV-Dm,m plans achieved homogeneous PTV doses but required local fluence compensations that impaired robustness, with significant hot spots and increased complexity. Robust optimization was 7 to 11 times slower. Reporting in Dmuscle,muscle* restored consistency between PTV-based and robust evaluations. Conclusions: Robust optimization shows that PTV homogeneity should not be prioritized when advanced algorithms reporting Dm,m are used with high-density heterogeneities. PTV-Dmuscle,muscle* optimization offers a practical alternative for maintaining PTV-based planning and the PTV homogeneity objective while ensuring consistency with robust optimization. Dmuscle,muscle* reporting simplifies plan evaluation and aligns with clinical practice, facilitating decision-making in treatment planning.
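The PTV homogeneity objective discussed above is commonly quantified by the ICRU Report 83 homogeneity index, HI = (D2% − D98%) / D50%, where Dx% is the dose received by the hottest x% of the target volume. The sketch below computes it from PTV voxel doses; this is a standard metric used for illustration here, not necessarily the exact objective formulation used in the study.

```python
import numpy as np

def homogeneity_index(ptv_doses):
    """ICRU-83 homogeneity index HI = (D2% - D98%) / D50% from an array
    of PTV voxel doses. D2% (near-maximum) is the 98th percentile of
    voxel doses; D98% (near-minimum) is the 2nd percentile. HI = 0 for
    a perfectly uniform dose; cold spots (e.g. in high-density regions
    under Dm,m reporting) increase HI."""
    d2, d50, d98 = np.percentile(ptv_doses, [98, 50, 2])
    return (d2 - d98) / d50
```

A cold spot covering 10% of the PTV at 45 Gy in an otherwise uniform 60 Gy target raises HI from 0 to 0.25, which is the effect Dm,m reporting produces in bone or metal.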
Introduction: Fourier transform infrared (FT-IR) spectroscopy is an innovative diagnostic technique for improving early detection and personalized care for breast cancer patients. It allows rapid and accurate analysis of biological samples. Therefore, the purpose of this study was to assess the diagnostic accuracy of FT-IR spectroscopy for breast cancer, based on a comprehensive literature review. Methods: A systematic search of online electronic databases was conducted using PubMed/Medline and the Cochrane Library, supplemented by hand searching, from March 28, 2024, to April 10, 2024. We included peer-reviewed journal articles in which FT-IR spectroscopy was used to acquire data on breast cancers and manuscripts published in English. All eligible studies were assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. Results: Serum, breast biopsy, blood plasma, tissue specimen, and saliva samples were included in this study. This study revealed that breast cancer diagnosis using FT-IR spectroscopy with diagnostic algorithms had a sensitivity and specificity of approximately 98% and 100%, respectively. Almost all studies used more than one algorithm to analyze spectral data. This finding showed that the sensitivity of FT-IR spectroscopy reported in six included studies was greater than 90%. Conclusion: Employing multivariate analysis coupled with FT-IR spectroscopy has shown promise in differentiating between healthy and cancerous breast tissue. This review suggests that FT-IR spectroscopy has the potential to become the next gold standard for breast cancer diagnosis. However, to draw definitive conclusions, larger-scale studies, external validation, real-world clinical trials, legislative considerations, and alternative methods such as Raman spectroscopy should be considered.
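The sensitivity and specificity figures reported by the reviewed studies follow directly from the classifiers' confusion matrices: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). A minimal sketch of that computation (with an invented label encoding of 1 = cancer, 0 = healthy):

```python
def sensitivity_specificity(y_true, y_pred):
    """Diagnostic sensitivity and specificity from binary labels
    (1 = disease present, 0 = disease absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

A sensitivity of 98% therefore means that 98% of true cancer cases were flagged, while a specificity of 100% means no healthy sample was misclassified as cancerous.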