Polishing is an effective process for improving the surface quality of aero-engine blades. Clamping error between the fixture and the blade leads to position error of the blade surface and reduces machining quality. In this paper, a laser calibration system and a calibration algorithm are proposed to compensate for the clamping error. Firstly, theoretical point data of the blade surface are obtained from the CAD model, and measurement point data are obtained through a laser sensor. A modified ICP algorithm is designed to calculate the coordinate transformation between the theoretical points and the measurement points. Secondly, Bezier surfaces are introduced to evaluate the calibration accuracy based on the distance between fitted surfaces rather than between points. Finally, the point cloud registration is optimized by a particle swarm optimization (PSO) algorithm based on the Bezier surface evaluation. The calibration algorithm combines the ICP algorithm, Bezier surface evaluation, and the PSO algorithm to realize clamping error compensation, and the quality of the polished blade surface is consequently improved.
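The core of any ICP-based calibration is the rigid transform that best aligns the measured points with their theoretical counterparts. Below is a minimal NumPy sketch of that standard SVD-based alignment step, assuming correspondences are already paired; the paper's modified ICP, Bezier surface evaluation, and PSO refinement are not reproduced, and the synthetic blade points and clamping offset are purely illustrative.

import numpy as np

def best_fit_transform(measured, theoretical):
    """Rotation R and translation t minimising ||R @ p + t - q|| over paired points."""
    mu_m, mu_t = measured.mean(axis=0), theoretical.mean(axis=0)
    H = (measured - mu_m).T @ (theoretical - mu_t)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_m
    return R, t

# Synthetic check: apply a known rotation and offset (a stand-in for clamping error)
# to CAD points, then recover the aligning transform.
rng = np.random.default_rng(0)
cad_pts = rng.uniform(-1.0, 1.0, size=(200, 3))
angle = np.deg2rad(5.0)
R_err = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0,            0.0,           1.0]])
measured_pts = cad_pts @ R_err.T + np.array([0.10, -0.02, 0.05])
R_est, t_est = best_fit_transform(measured_pts, cad_pts)
print(np.allclose(measured_pts @ R_est.T + t_est, cad_pts, atol=1e-8))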
Optimization algorithms are used to improve model accuracy, and the optimization process runs through multiple cycles until convergence. A variety of optimization strategies have been developed to overcome the obstacles involved in the learning process, and several of them are considered in this study to learn more about their complexities. It is crucial to analyse and summarise optimization techniques methodically from a machine learning standpoint, since this can provide direction for future work in both machine learning and optimization. The approaches under consideration include Stochastic Gradient Descent (SGD), Stochastic Gradient Descent with Momentum, Runge-Kutta, Adaptive Learning Rate, Root Mean Square Propagation, Adaptive Moment Estimation, Deep Ensembles, Feedback Alignment, Direct Feedback Alignment, Adafactor, AMSGrad, and Gravity. Experiments on two image datasets were conducted to prove the ability of each optimizer applied to machine learning models. Firstly, tests on skin cancer detection using the ISIC standard dataset were carried out with three common optimizers (Adaptive Moment Estimation, SGD, and Root Mean Square Propagation) to explore the effect of the algorithms on the skin images. The training results indicate that performance is enhanced using the Adam optimizer, which achieved 97.30% accuracy. The second dataset is the COVIDx CT images, on which 99.07% accuracy was achieved with the Adam optimizer. The results indicate that the use of optimizers such as SGD and Adam improved accuracy in the training, testing, and validation stages.
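As a concrete reference for the update rules compared above, here is a minimal NumPy sketch of the Adam update with the commonly used default hyperparameters; the toy quadratic loss in the usage lines is illustrative and not taken from the study.

import numpy as np

def adam_step(params, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) EMA
    v = beta2 * v + (1 - beta2) * grad**2       # second-moment (uncentred variance) EMA
    m_hat = m / (1 - beta1**t)                  # bias corrections for the zero init
    v_hat = v / (1 - beta2**t)
    return params - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Usage on the toy loss 0.5 * ||x||^2, whose gradient is simply x.
x = np.array([3.0, -2.0])
m, v = np.zeros_like(x), np.zeros_like(x)
for t in range(1, 501):
    x, m, v = adam_step(x, x, m, v, t, lr=0.1)  # larger step size for this toy problem
print(x)   # x has been driven toward the minimiser at the origin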
AdaBelief, one of the current best optimizers, demonstrates superior generalization ability over the popular Adam algorithm by treating the exponential moving average of observed gradients as a prediction of the next gradient and adapting the step size accordingly. AdaBelief is theoretically appealing in that it has a data-dependent O(√T) regret bound when the objective functions are convex, where T is the time horizon. It remains an open problem, however, whether the convergence rate can be further improved without sacrificing generalization ability. To this end, we make a first attempt in this work and design a novel optimization algorithm called FastAdaBelief that exploits strong convexity to achieve an even faster convergence rate. In particular, by adjusting the step size so that it better accounts for strong convexity and prevents fluctuation, the proposed FastAdaBelief demonstrates excellent generalization ability and superior convergence. As an important theoretical contribution, we prove that FastAdaBelief attains a data-dependent O(log T) regret bound, which is substantially lower than that of AdaBelief in strongly convex cases. On the empirical side, we validate the theoretical analysis with extensive experiments in both strongly convex and nonconvex scenarios using three popular baseline models. The experimental results are very encouraging: FastAdaBelief converges the fastest of all mainstream algorithms while maintaining excellent generalization ability, in cases of both strong convexity and nonconvexity. FastAdaBelief is thus posited as a new benchmark model for the research community.
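For context on the mechanism being improved, below is a minimal NumPy sketch of the AdaBelief-style update that FastAdaBelief builds on: the second moment tracks the squared deviation of the gradient from its exponential moving average rather than the raw squared gradient. This is a sketch under common default hyperparameters; FastAdaBelief's strong-convexity step-size schedule is not reproduced here.

import numpy as np

def adabelief_step(params, grad, m, s, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * (grad - m)**2   # "belief": deviation from the gradient EMA
    m_hat = m / (1 - beta1**t)                    # bias corrections
    s_hat = s / (1 - beta2**t)
    return params - lr * m_hat / (np.sqrt(s_hat) + eps), m, s

When successive gradients are consistent, the deviation term stays small and the effective step grows, which is the usual intuition for AdaBelief's faster progress compared with Adam's raw second moment.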
In this paper, we propose a low-complexity spectrum resource allocation scheme across the access points (APs) for ultra dense networks (UDNs), in which all the APs are divided into several AP groups (APGs) and the total bandwidth is divided into several narrowband spectrum resources, each of which is allocated to the APGs independently to decrease the interference among the APs. We then investigate the joint spectrum and power allocation problem in UDNs to maximize the overall throughput. The problem is formulated as a mixed-integer nonconvex optimization (MINCP) problem, which is difficult to solve in general, so the joint optimization problem is decomposed into two subproblems: spectrum allocation and power allocation. The spectrum allocation is modelled as an auction problem and a combinatorial auction approach is proposed to tackle it. In addition, the DC programming method is adopted to optimize the power allocation. To decrease the signaling and computational overhead, we propose a distributed algorithm based on the Lagrangian dual method. Simulation results illustrate that the proposed algorithm can effectively improve the system throughput.
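As a toy illustration of auction-style spectrum assignment (not the paper's combinatorial auction, and without the DC-programming power step), the sketch below greedily awards each narrowband resource to the AP group with the highest bid; the bid values and the per-APG resource cap are hypothetical.

from typing import Dict, List, Tuple

def greedy_spectrum_auction(bids: Dict[Tuple[int, int], float],
                            max_resources_per_apg: int) -> Dict[int, List[int]]:
    """bids maps (apg_id, resource_id) -> estimated throughput gain for that pairing."""
    allocation: Dict[int, List[int]] = {}
    taken = set()
    for (apg, res), _gain in sorted(bids.items(), key=lambda kv: kv[1], reverse=True):
        if res in taken or len(allocation.get(apg, [])) >= max_resources_per_apg:
            continue                       # resource already sold or APG at its cap
        allocation.setdefault(apg, []).append(res)
        taken.add(res)
    return allocation

# Example: two AP groups bidding on three narrowband resources.
bids = {(0, 0): 5.1, (0, 1): 2.0, (0, 2): 1.1, (1, 1): 4.3, (1, 2): 3.2}
print(greedy_spectrum_auction(bids, max_resources_per_apg=2))   # {0: [0], 1: [1, 2]}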
This work proposes a high-dimensional multiple fractional-order optimization algorithm (HMFOA) to tune the controller parameters of the rotor-side converter of doubly-fed induction generator-based wind turbines to achieve higher control performance. The case studies are verified on eight benchmark mathematical optimization problems; the results illustrate that the proposed optimizer has fast convergence speed and high computational precision and avoids falling into local optima. Compared with four other algorithms, HMFOA obtains the optimal controller parameters and achieves more accurate power point tracking capability and a certain fault ride-through capability, which verifies the feasibility and effectiveness of the algorithm.
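Purely to illustrate what evaluating an optimizer on a benchmark mathematical optimization problem looks like, the sketch below runs a generic random-search loop on the Rastrigin test function; it is a placeholder loop, not the HMFOA algorithm, and the dimension, population size, and iteration count are arbitrary.

import numpy as np

def rastrigin(x: np.ndarray) -> float:
    """Classic multimodal benchmark; global minimum 0 at the origin."""
    return 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

rng = np.random.default_rng(1)
dim, pop, iters = 8, 30, 300
best_x = rng.uniform(-5.12, 5.12, dim)
best_f = rastrigin(best_x)
for _ in range(iters):
    candidates = np.clip(best_x + rng.normal(scale=0.3, size=(pop, dim)), -5.12, 5.12)
    scores = [rastrigin(c) for c in candidates]
    i = int(np.argmin(scores))
    if scores[i] < best_f:                 # keep the best candidate found so far
        best_x, best_f = candidates[i], scores[i]
print(best_f)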
Molecular docking (MD) is one of the core steps in the expensive and time-consuming process of drug design, and it is essentially an optimization problem based on scoring functions. The AutoDock series of MD software is widely accepted by academia and industry, among which AutoDock Vina (Vina) is the latest and most popular version due to its accuracy and relatively high speed. However, in contrast to its prior version, AutoDock4, hardware acceleration approaches for Vina are rarely reported. In this article, we propose Vina-FPGA, a hardware-accelerated Vina implementation on a field-programmable gate array (FPGA) that exploits low-level parallelism. First, fixed-point quantization is analyzed and realized to accelerate the MD algorithm with better energy efficiency in hardware. To boost the performance of the module-level computation, multiple in-module hardware pipelines have been designed and implemented. Besides, a strategy for fast access to block RAM (BRAM) is implemented by exploiting the layout of the data, which brings a fourfold improvement in memory access speed to the intermolecular and intramolecular energy computing modules. On the same 140 ligand-receptor benchmarks, Vina-FPGA performs up to 6.9 times (on average 3.7 times) faster than a state-of-the-art CPU while consuming only 2.5% of the energy, with similar docking accuracies. Compared to the GPU-accelerated implementation, Vina-GPU, the average energy consumption of Vina-FPGA is merely 45%.
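The sketch below illustrates the general idea behind fixed-point quantization of floating-point energy terms: scale by 2**frac_bits, round, and clamp to the word width. The 16-bit word and 10 fractional bits are illustrative choices, not Vina-FPGA's actual number format.

import numpy as np

def to_fixed(x: np.ndarray, frac_bits: int = 10, word_bits: int = 16) -> np.ndarray:
    """Quantize floats to signed fixed-point integers with the given format."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

def from_fixed(q: np.ndarray, frac_bits: int = 10) -> np.ndarray:
    return q.astype(np.float64) / (1 << frac_bits)

energies = np.array([-7.3125, 0.0421, 3.9])        # example energy terms
q = to_fixed(energies)
print(q, from_fixed(q))                            # round-trip error <= 2**-11 when nothing clips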
The creation of sustainable artificial tropical forests is of utmost importance for addressing critical environmental issues like deforestation, habitat loss, and climate change. This research paper presents an innovative approach that considers ecological, socioeconomic, and technological factors. The proposed approach integrates multi-criteria decision-making (MCDM) methods that take into account the preferences and opinions of stakeholders. To analyze the various criteria involved, cubic picture fuzzy numbers (CPFNs) are utilized. To ensure fairness in evaluating CPFNs, the study focuses on developing operational laws and aggregation operators that promote neutrality and fairness. Specifically, the paper introduces the "cubic picture fuzzy fairly weighted average (CPFFWA) operator" and the "cubic picture fuzzy fairly ordered weighted averaging (CPFFOWA) operator." By involving stakeholders and prioritizing long-term sustainability and resilience, the approach is highly valuable for policymakers, forest managers, and other relevant parties. This research bridges a gap in the current literature, where fairly aggregation operators combined with picture fuzzy sets have seen little exploration in tropical forest contexts. Ultimately, the research strives to contribute to the practical management of these vital ecosystems, fostering environmental conservation and sustainability through a cross-disciplinary approach amalgamating ecology, fuzzy logic, decision theory, and environmental science.
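For readers unfamiliar with this family of operators, the sketch below implements the standard picture fuzzy weighted average over flat picture fuzzy numbers (membership, neutrality, non-membership). It only illustrates the style of aggregation being generalized; it is not the paper's CPFFWA or CPFFOWA fairly operators on cubic picture fuzzy numbers, and the example assessments and weights are made up.

from math import prod
from typing import List, Tuple

def pfwa(pfns: List[Tuple[float, float, float]],
         weights: List[float]) -> Tuple[float, float, float]:
    """Aggregate picture fuzzy numbers (mu, eta, nu); weights are assumed to sum to 1."""
    mu = 1 - prod((1 - m) ** w for (m, _, _), w in zip(pfns, weights))
    eta = prod(e ** w for (_, e, _), w in zip(pfns, weights))
    nu = prod(n ** w for (_, _, n), w in zip(pfns, weights))
    return mu, eta, nu

# Example: three stakeholder assessments of one criterion, with expert weights.
print(pfwa([(0.6, 0.2, 0.1), (0.5, 0.3, 0.1), (0.7, 0.1, 0.1)], [0.5, 0.3, 0.2]))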
The establishment of secure data transmission between two communicating parties has become challenging due to unexpected attacks during communication over an unsecured transmission channel. The security of secret data is ensured by cryptography and steganography. Hence, this work introduces a secure data transmission model that strengthens hybrid cryptography and steganography approaches, based on an optimized substitution box (S-box) and a texture-based distortion function. The secret data is initially encrypted using a substitution-permutation-network-based chaotic system. This cryptographic algorithm contains two stages, confusion and diffusion: the substitution box provides confusion, and the permutation box provides diffusion. However, the security of this cryptosystem strongly depends on the quality of the S-boxes. Hence, a new arithmetic optimization algorithm (AOA) is proposed to create a robust S-box with a high non-linearity score. After encrypting the secret data, the cover elements are extracted from the embedding domain created using the sign of the DCT (discrete cosine transform) coefficients. Here, the cover element selection is optimized by re-compressing the cover JPEG image to exclude the zero-valued DCT coefficients. Then, the embedding cost is calculated by developing a new texture-based distortion function so that more secret data is embedded in the texture portions, whose visual quality is not affected by the embedding process. Finally, the encrypted data is embedded into the cover elements using Syndrome-Trellis Coding (STC) to obtain the stego elements. The overall implementation is carried out on the MATLAB platform, and the developed model is evaluated based on imperceptibility and undetectability.
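A common fitness measure in S-box optimization of the kind described above is the nonlinearity score, computed from the Walsh spectrum of every nonzero component Boolean function of the S-box. The sketch below is a minimal pure-Python version for an 8-bit S-box; the AOA search, the chaotic cipher, and the STC embedding themselves are not shown.

def walsh_spectrum(f):
    """Fast Walsh-Hadamard transform of (-1)**f for a Boolean truth table f."""
    w = [1 - 2 * bit for bit in f]
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def sbox_nonlinearity(sbox, n=8):
    size = 1 << n
    nl = size // 2
    for mask in range(1, size):            # every nonzero output mask
        f = [bin(mask & sbox[x]).count("1") & 1 for x in range(size)]
        nl = min(nl, size // 2 - max(abs(w) for w in walsh_spectrum(f)) // 2)
    return nl

# The identity S-box is affine and scores 0; the AES S-box scores 112.
print(sbox_nonlinearity(list(range(256))))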