Surface reconstruction is a classical process in industrial engineering and manufacturing, particularly in reverse engineering, where the goal is to obtain a digital model from a physical object. For that purpose, the real object is typically scanned or digitized and the resulting point cloud is then fitted to mathematical surfaces through numerical optimization. The choice of the approximating functions is crucial for the accuracy of the process. Real-world objects such as manufactured workpieces often require complex nonlinear approximating functions, which are not well suited for standard numerical methods. In a previous paper presented at the ISMSI 2023 conference, we addressed this issue by using manually selected approximation functions via optimization through the cuckoo search algorithm with Lévy flights. Building upon that work, this paper presents an enhanced and extended method for surface reconstruction by using height-map surfaces obtained through a combination of exponential, polynomial and logarithmic functions. A feasible approach for this goal is to consider continuous bivariate distribution functions, which ensures consistent models along with good mathematical properties for the output shapes, such as smoothness and integrability. However, this approach leads to a difficult multivariate, constrained, multimodal continuous nonlinear optimization problem. To tackle this issue, we apply particle swarm optimization, a popular swarm intelligence technique for continuous optimization. The method is hybridized with a local search procedure for further improvement of the solutions and applied to a benchmark of 15 illustrative examples of point clouds fitted to different surface models. The performance of the method is analyzed through several computational experiments. The numerical and graphical results show that the method is able to recover the shape of the point clouds accurately and automatically. Furthermore, our approach outperforms other alternati
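The fitting loop described in the abstract can be sketched with a minimal global-best particle swarm optimizer. The exponential height-map model, search bounds, and coefficients below are illustrative placeholders, not the paper's benchmark surfaces, and the sketch omits the local-search hybridization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic point cloud sampled from a height-map z = a*exp(-b*(x^2+y^2)) + c
true_params = np.array([2.0, 1.5, 0.5])
xy = rng.uniform(-1.0, 1.0, size=(200, 2))

def height(params, xy):
    a, b, c = params
    return a * np.exp(-b * (xy[:, 0] ** 2 + xy[:, 1] ** 2)) + c

z = height(true_params, xy)

def mse(params):
    return np.mean((height(params, xy) - z) ** 2)

# Global-best PSO over the 3 model parameters, box-constrained to [0, 3]
n_particles, dim, iters = 30, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 3.0, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 3.0)          # enforce the search box
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(gbest, mse(gbest))
```

In the paper's method, a local search would refine `gbest` further; here the swarm alone already recovers the generating parameters closely on this smooth toy problem.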
The aim in many sciences is to understand the mechanisms that underlie the observed distribution of variables, starting from a set of initial hypotheses. Causal discovery allows us to infer mechanisms as sets of cause...
Unmanned Aerial Vehicles (UAVs) provide a reliable and energy-efficient solution for data collection from Narrowband Internet of Things (NB-IoT) devices. However, the UAV's deployment optimization, including the locations of the UAV's stop points, is a necessity to minimize the energy consumption of the UAV and the NB-IoT devices and also to conduct the data collection efficiently. In this regard, this paper proposes the Gaining-Sharing Knowledge (GSK) algorithm for optimizing the UAV's deployment. In GSK, the UAV's stop points in three-dimensional space are encapsulated into a single individual with a fixed length representing an entire deployment. The superiority of using GSK on the tackled problem is verified by simulation in seven scenarios. It provides significant results in all seven scenarios compared with four other optimization algorithms previously used on the same problem. Moreover, NB-IoT is proposed as the wireless communication technology between the UAV and the IoT devices.
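The fixed-length encoding mentioned above can be illustrated in a few lines. GSK, like most real-coded metaheuristics, evolves flat real-valued vectors, so a set of 3-D stop points is flattened into one individual; the number of stop points, the bounds, and the toy travel-distance objective below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

K = 5  # hypothetical number of UAV stop points
rng = np.random.default_rng(1)
lo = np.array([0.0, 0.0, 10.0])     # x, y, altitude lower bounds (illustrative)
hi = np.array([100.0, 100.0, 50.0])
stop_points = rng.uniform(lo, hi, size=(K, 3))

individual = stop_points.ravel()    # one fixed-length decision vector of length 3*K
decoded = individual.reshape(K, 3)  # decoding step used during fitness evaluation

# Toy fitness: total travel distance between consecutive stop points,
# a stand-in for the UAV's energy consumption in the real objective.
travel = np.sum(np.linalg.norm(np.diff(decoded, axis=0), axis=1))
print(individual.shape, travel)
```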
In this paper we present a new antithetic multilevel Monte Carlo (MLMC) method for the estimation of expectations with respect to laws of diffusion processes that can be elliptic or hypo-elliptic. In particular, we co...
Understanding neural networks is crucial to creating reliable and trustworthy deep learning models. Most contemporary research in interpretability analyzes just one model at a time via causal intervention or activatio...
Physics-Informed Neural Networks (PINNs) have triggered a paradigm shift in scientific computing, leveraging mesh-free properties and robust approximation capabilities. While proving effective for low-dimensional partial differential equations (PDEs), the computational cost of PINNs remains a hurdle in high-dimensional scenarios. This is particularly pronounced when computing high-order and high-dimensional derivatives in the physics-informed loss. Randomized Smoothing PINN (RS-PINN) introduces Gaussian noise for stochastic smoothing of the original neural net model, enabling the use of Monte Carlo methods for derivative approximation, which eliminates the need for costly automatic differentiation. Despite its computational efficiency, especially in the approximation of high-dimensional derivatives, RS-PINN introduces biases in both loss and gradients, negatively impacting convergence, especially when coupled with stochastic gradient descent (SGD) algorithms. We present a comprehensive analysis of biases in RS-PINN, attributing them to the nonlinearity of the Mean Squared Error (MSE) loss as well as the intrinsic nonlinearity of the PDE itself. We propose tailored bias correction techniques, delineating their application based on the order of PDE nonlinearity. The derivation of an unbiased RS-PINN allows for a detailed examination of its advantages and disadvantages compared to the biased version. Specifically, the biased version has a lower variance and runs faster than the unbiased version, but it is less accurate due to the bias. To optimize the bias-variance trade-off, we combine the two approaches in a hybrid method that balances the rapid convergence of the biased version with the high accuracy of the unbiased version. In addition to methodological contributions, we present an enhanced implementation of RS-PINN. Extensive experiments on diverse high-dimensional PDEs, including Fokker-Planck, Hamilton-Jacobi-Bellman (HJB), viscous Burgers’, Allen-Cahn, and Sine
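The randomized-smoothing idea, and the bias it introduces, can be seen in a one-dimensional toy example (this is an illustration of the principle, not the RS-PINN implementation): smooth a function with Gaussian noise, then estimate the smoothed derivative by Monte Carlo via Stein's identity instead of automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)

def u(x):
    # Toy "model"; in RS-PINN this would be the neural network output.
    return np.sin(x)

sigma, n = 0.1, 200_000
x0 = 0.7
eps = rng.standard_normal(n)

# Stein identity: d/dx E[u(x + sigma*eps)] = E[eps * u(x + sigma*eps)] / sigma.
# Subtracting u(x0) is a control variate: E[eps * u(x0)] = 0, so the mean is
# unchanged while the Monte Carlo variance drops sharply.
grad_mc = np.mean(eps * (u(x0 + sigma * eps) - u(x0))) / sigma

# For u = sin, the Gaussian-smoothed function is exp(-sigma^2/2) * sin(x), so
# the exact smoothed derivative is exp(-sigma^2/2) * cos(x); its gap to
# cos(x0) is precisely the smoothing bias the abstract discusses.
grad_exact_smoothed = np.exp(-sigma**2 / 2) * np.cos(x0)
print(grad_mc, grad_exact_smoothed, np.cos(x0))
```

Note that the Monte Carlo estimator targets the derivative of the *smoothed* model, not of `u` itself, which is the source of the bias that the paper's correction techniques address.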
ISBN (digital): 9789463968119
ISBN (print): 9798350359497
In recent years, advancements in plasmonic devices have enabled the localization of electromagnetic fields at subwavelength scales, contributing to breakthroughs in various scientific and engineering fields (J. C. Ndukaife, et al., ACS Nano, 2014,8,9,9035-9043). These structures are often made of metals and their electric properties are typically represented by the classical Drude model (S. A. Maier, 2007, Springer, New York). As the size of the plasmonic structures approaches the nanoscale, the classical Drude model falls short in explaining the collective movement of free electrons in metals. The interaction between moving charges and the electromagnetic fields can be described by a coupled system of the Maxwell and hydrodynamic equations (X. Zheng, et al., IEEE Trans. Antennas Propag., 2018,66,9, 4759-4771).
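The classical Drude permittivity referenced above is eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w). A minimal sketch, with plasma frequency and damping rate chosen as order-of-magnitude placeholders for a generic metal rather than fitted material data:

```python
import numpy as np

def drude_eps(omega, eps_inf=1.0, omega_p=1.37e16, gamma=1.0e14):
    # Classical Drude permittivity; parameter values are illustrative only.
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

omega_visible = 2.5e15  # rad/s, a visible-range angular frequency
eps = drude_eps(omega_visible)
print(eps)  # negative real part: the metallic response that enables plasmonics
```

Below the plasma frequency the real part of eps is negative, which is what confines surface plasmons at a metal-dielectric interface; the nanoscale corrections discussed in the abstract enter through the coupled hydrodynamic equations, not through this local model.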
Physics-Informed Neural Networks (PINNs) have proven effective in solving partial differential equations (PDEs), especially when some data are available by seamlessly blending data and physics. However, extending PINNs to high-dimensional and even high-order PDEs encounters significant challenges due to the computational cost associated with automatic differentiation in the residual loss function calculation. Herein, we address the limitations of PINNs in handling high-dimensional and high-order PDEs by introducing the Hutchinson Trace Estimation (HTE) method. Starting with the second-order high-dimensional PDEs, which are ubiquitous in scientific computing, HTE is applied to transform the calculation of the entire Hessian matrix into a Hessian vector product (HVP). This approach not only alleviates the computational bottleneck via Taylor-mode automatic differentiation but also significantly reduces memory consumption from the Hessian matrix to an HVP's scalar output. We further showcase HTE's convergence to the original PINN loss and its unbiased behavior under specific conditions. Comparisons with the Stochastic Dimension Gradient Descent (SDGD) highlight the distinct advantages of HTE, particularly in scenarios with significant variability and variance among dimensions. We further extend the application of HTE to higher-order and higher-dimensional PDEs, specifically addressing the biharmonic equation. By employing tensor-vector products (TVP), HTE efficiently computes the colossal tensor associated with the fourth-order high-dimensional biharmonic equation, saving memory and enabling rapid computation. The effectiveness of HTE is illustrated through experimental setups, demonstrating comparable convergence rates with SDGD under memory and speed constraints. Additionally, HTE proves valuable in accelerating the Gradient-Enhanced PINN (gPINN) version as well as the Biharmonic equation. Overall, HTE opens up a new capability in scientific machine learning for tackl
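The core of Hutchinson Trace Estimation is that tr(H) = E[v^T H v] for random probe vectors v with identity covariance, so only Hessian-vector products are needed and the full Hessian is never materialized. A minimal sketch on a quadratic toy function (in a real PINN the `hvp` call would be Taylor-mode autodiff, not an explicit matrix product):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
A = rng.standard_normal((d, d))
H = A + A.T  # symmetric Hessian of the quadratic f(x) = x^T A x

def hvp(v):
    # Hessian-vector product; here trivially H @ v, but in HTE this is the
    # only Hessian access pattern ever required, at O(d) memory per probe.
    return H @ v

m = 5000  # number of probe vectors
est = 0.0
for _ in range(m):
    v = rng.choice([-1.0, 1.0], size=d)  # Rademacher probe, E[v v^T] = I
    est += v @ hvp(v)
est /= m

print(est, np.trace(H))
```

Rademacher probes are the standard choice because they minimize the estimator's variance among distributions with identity covariance; the same pattern extends to the fourth-order tensor of the biharmonic equation via tensor-vector products.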
Quantum kernel methods are considered a promising avenue for applying quantum computers to machine learning problems. Identifying hyperparameters controlling the inductive bias of quantum machine learning models is expected to be crucial given the central role hyperparameters play in determining the performance of classical machine learning methods. In this work we introduce the hyperparameter controlling the bandwidth of a quantum kernel and show that it controls the expressivity of the resulting model. We use extensive numerical experiments with multiple quantum kernels and classical data sets to show consistent change in the model behavior from underfitting (bandwidth too large) to overfitting (bandwidth too small), with optimal generalization in between. We draw a connection between the bandwidth of classical and quantum kernels and show analogous behavior in both cases. Furthermore, we show that optimizing the bandwidth can help mitigate the exponential decay of kernel values with qubit count, which is the cause behind recent observations that the performance of quantum kernel methods decreases with qubit count. We reproduce these negative results and show that if the kernel bandwidth is optimized, the performance instead improves with growing qubit count and becomes competitive with the best classical methods.
High-quality fluorescence imaging of biological systems is limited by processes like photobleaching and phototoxicity, and also in many cases, by limited access to the latest generations of microscopes. Moreover, low ...