Artificial intelligence (AI) and machine learning (ML) are rapidly becoming key parts of today's applications, supporting diverse human activities and decision making. New training models are used requiring an expl...
This work demonstrates the significant potential of GaO/ZnO hybrid nanostructures for high-performance hydrogen gas sensing. The wavy grains of GaO support the structural transformation of ZnO nanorods into nanosheet...
In the restructured electricity market, a microgrid (MG), with the incorporation of smart grid technologies, distributed energy resources (DERs), a pumped-storage-hydraulic (PSH) unit, and a demand response program (DRP), is a smarter and more reliable electricity system. The MG consists of gas turbines and renewable energy sources such as photovoltaic systems and wind turbines. Optimal bidding strategies, prepared by MG operators, decrease the electricity cost and emissions from the upstream grid and conventional and renewable energy sources (RES). But this is inefficient due to the highly sporadic characteristics of RES and the very high outage cost. To solve these issues, this study suggests the non-dominated sorting genetic algorithm II (NSGA-II) for an optimal bidding strategy considering pumped hydroelectric energy storage and DRP based on outage conditions and uncertainties of renewable energy sources. The uncertainty related to solar and wind units is modeled using lognormal and Weibull probability distribution functions. Price-based DRP is used, especially considering the time of outages along with the time of peak loads and prices, to enhance the reliability of the MG and reduce costs and emissions.
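The abstract models solar and wind uncertainty with lognormal and Weibull probability distribution functions. As a rough illustration of what that sampling step might look like, the NumPy sketch below draws wind speeds from a Weibull distribution and irradiance from a lognormal one, then maps them to power; the distribution parameters, power-curve breakpoints, and irradiance-to-power map are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Wind uncertainty: Weibull-distributed wind speed (illustrative parameters).
shape_k, scale_c = 2.0, 8.0                  # Weibull shape and scale (m/s), assumed
wind_speed = scale_c * rng.weibull(shape_k, size=10_000)

def wind_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
    """Piecewise turbine power curve (MW); cut-in/rated/cut-out speeds assumed."""
    p = np.zeros_like(v)
    ramp = (v >= v_in) & (v < v_rated)
    p[ramp] = p_rated * (v[ramp] - v_in) / (v_rated - v_in)
    p[(v >= v_rated) & (v <= v_out)] = p_rated   # rated output until cut-out
    return p

# Solar uncertainty: lognormal-distributed irradiance (illustrative parameters).
mu, sigma = 5.5, 0.5                         # log-space mean and std, assumed
irradiance = rng.lognormal(mu, sigma, size=10_000)        # W/m^2
pv_power = 1e-4 * np.clip(irradiance, 0.0, 1000.0)        # toy irradiance-to-MW map

print(f"expected wind power: {wind_power(wind_speed).mean():.3f} MW")
print(f"expected PV power:   {pv_power.mean():.3f} MW")
```

In a bidding study of this kind, samples like these would feed the NSGA-II cost and emission objective evaluations for each candidate bid.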
By creating multipath backscatter links and amplifying signal strength, reconfigurable intelligent surfaces (RIS) and decode-and-forward (DF) relaying are shown to reduce the latency of the ultrareliable low-latency com...
Deep learning models for computer vision applications specifically, and for machine learning generally, are now the state of the art. The growth in size and complexity of neural networks has made them more and more reliable, yet in greater need of computational power and memory, as is evident from the heavy reliance on graphical processing units and cloud computing for training them. As the complexity of deep neural networks increases, the need for fast processing of neural networks in real-time embedded applications at the edge also increases, and accelerating them using reconfigurable hardware offers a solution. In this work, a convolutional neural network based on the Inception net architecture is first optimized in software and then accelerated by taking advantage of field programmable gate array (FPGA) parallelism. Genetic-algorithm-augmented training is proposed and used on the neural network to produce an optimum model from the first training run without re-training iterations. Quantization of the network parameters is performed according to the weights of the network. The resulting neural network is then transformed into hardware by writing the register transfer level (RTL) code for FPGAs, exploiting layer parallelism and a simple trial-and-error allocation of resources with the help of the roofline model. The approach is simple and easy to use compared to many complex existing methods in the literature, and relies on trial and error to customize the FPGA design to the model needed for any computer vision or multimedia deep learning application. Simulation and synthesis are performed. The results show that the genetic algorithm reduces the number of back-propagation epochs in software and brings the network closer to the global optimum in terms of performance. Quantization to 16 bits also shows a reduction in network size by almost half with no performance drop. The synthesis of our design also shows that the Inception-based classifier is cap...
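The abstract reports that quantizing the parameters to 16 bits roughly halves the network size with no performance drop, with quantization performed "according to the weights". A minimal sketch of one plausible reading, symmetric fixed-point quantization with a per-tensor scale taken from the largest weight magnitude, is below; the scaling rule is an assumption, since the paper does not spell it out.

```python
import numpy as np

def quantize_int16(w):
    """Symmetric fixed-point quantization of a weight tensor to 16 bits.

    The scale maps the largest weight magnitude to the int16 range,
    so the grid adapts to each tensor's own weights (an assumption).
    """
    scale = np.max(np.abs(w)) / 32767.0
    q = np.round(w / scale).astype(np.int16)   # quantized integer weights
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 3, 3, 3).astype(np.float32)   # toy conv kernel
q, s = quantize_int16(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"size: {w.nbytes} -> {q.nbytes} bytes, max abs error {err:.2e}")
```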
An imbalanced dataset often challenges machine learning, particularly classification methods. Underrepresented minority classes can result in biased and inaccurate models. The Synthetic Minority Over-Sampling Technique (SMOTE) was developed to address the problem of imbalanced data. Over time, several weaknesses of the SMOTE method have been identified in generating synthetic minority class data, such as overlapping, noise, and small disjuncts. However, these studies generally focus on only one of SMOTE's weaknesses: noise or overlapping. Therefore, this study addresses both issues simultaneously by tackling noise and overlapping in SMOTE-generated data. This study proposes a combined approach of filtering, clustering, and distance modification to reduce the noise and overlapping produced by SMOTE. Filtering removes minority class data (noise) located in majority class regions, with the k-NN method applied for filtering. The use of Noise Reduction (NR), which removes data considered noise before applying SMOTE, has a positive impact on overcoming data imbalance. Clustering establishes decision boundaries by partitioning data into clusters, allowing SMOTE with modified distance metrics to generate minority class data within each cluster. This SMOTE clustering and distance modification approach aims to minimize overlap in synthetic minority data that could introduce noise. The proposed method is called "NR-Clustering SMOTE," which has several stages in balancing data: (1) filtering by removing minority classes close to majority classes (data noise) using the k-NN method; (2) clustering data using K-means to establish decision boundaries by partitioning data into several clusters; (3) applying SMOTE oversampling with Manhattan distance within each cluster. Test results indicate that the proposed NR-Clustering SMOTE method achieves the best performance across all evaluation metrics for classification methods such as Random Forest, SVM, and Naïve Bayes, compared t...
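The three stages the abstract enumerates translate fairly directly into code. The sketch below is a compact rendering using scikit-learn for the k-NN filter and K-means clustering, with SMOTE-style interpolation under the Manhattan metric inside each cluster; the function name, the 0.5 noise threshold, and the cluster count are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def nr_clustering_smote(X_min, X_maj, k=5, n_clusters=3, n_new=120, rng=None):
    """Sketch of the three NR-Clustering SMOTE stages described above."""
    rng = np.random.default_rng(0) if rng is None else rng

    # (1) Noise reduction: drop minority points whose k nearest neighbours
    #     in the combined data are mostly majority-class points.
    X_all = np.vstack([X_min, X_maj])
    y_all = np.array([1] * len(X_min) + [0] * len(X_maj))
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_all)
    idx = nn.kneighbors(X_min, return_distance=False)[:, 1:]   # skip self
    X_min = X_min[y_all[idx].mean(axis=1) >= 0.5]              # assumed threshold

    # (2) Partition the filtered minority class into clusters with K-means.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_min)

    # (3) SMOTE inside each cluster, with Manhattan (L1) neighbour search.
    synthetic = []
    for c in range(n_clusters):
        Xc = X_min[labels == c]
        if len(Xc) < 2:
            continue
        nn_c = NearestNeighbors(n_neighbors=min(k, len(Xc) - 1) + 1,
                                metric="manhattan").fit(Xc)
        nbrs = nn_c.kneighbors(Xc, return_distance=False)[:, 1:]
        for _ in range(n_new // n_clusters):
            i = rng.integers(len(Xc))
            j = rng.choice(nbrs[i])
            synthetic.append(Xc[i] + rng.random() * (Xc[j] - Xc[i]))
    return np.vstack([X_min, np.asarray(synthetic)])

rng = np.random.default_rng(1)
X_maj = rng.normal(0.0, 1.0, (200, 2))       # toy majority class
X_min = rng.normal(2.5, 1.0, (40, 2))        # toy minority class
print(nr_clustering_smote(X_min, X_maj, rng=rng).shape)
```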
Component aging has become a significant concern worldwide, and the resulting frequent failures pose a serious threat to the reliability of modern power systems. In light of this issue, this paper presents a power system reliability evaluation method based on sequential Monte Carlo simulation (SMCS) to quantify system reliability considering multiple failure modes of components. First, a three-state component reliability model is established to explicitly describe the state transition process of a component subject to both aging failure and random failure modes. In this model, the impact of each failure mode is decoupled and characterized as the combination of two state duration variables, which are separately modeled using specific probability distributions. Then, SMCS is used to integrate the three-state component reliability model for state transition sequence generation and system reliability evaluation. Finally, various reliability metrics, including the probability of load curtailment (PLC), expected frequency of load curtailment (EFLC), and expected energy not supplied (EENS), can be obtained. To ensure the applicability of the proposed method, Hash table grouping and maximum feasible load level judgment techniques are jointly adopted to enhance its computational efficiency. Case studies are conducted on different aging scenarios to illustrate and validate the effectiveness and practicality of the proposed method.
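To make the three metrics concrete, the sketch below runs a toy sequential Monte Carlo simulation for a single component whose up-state duration is the earlier of a random failure (exponential) and an aging failure (Weibull), then estimates PLC, EFLC, and EENS for a constant load. Treating the two failure modes as competing durations is a simplification of the paper's three-state model, and every parameter here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

HOURS = 8760 * 50              # simulated horizon: 50 years (assumed)
LOAD = 80.0                    # load lost during outages (MW, assumed)
MTTR = 24.0                    # mean time to repair (h, assumed)
RANDOM_MTTF = 2000.0           # mean time to random failure (h, assumed)
WEIB_K, WEIB_C = 3.0, 6000.0   # aging-failure Weibull shape/scale (assumed)

t, down_h, events, ens = 0.0, 0.0, 0, 0.0
while t < HOURS:
    # Up state: the next failure is whichever mode fires first.
    t += min(rng.exponential(RANDOM_MTTF), WEIB_C * rng.weibull(WEIB_K))
    if t >= HOURS:
        break
    # Down state: repair duration, clipped to the simulation horizon.
    ttr = min(rng.exponential(MTTR), HOURS - t)
    down_h += ttr
    events += 1
    ens += LOAD * ttr          # energy not supplied during this outage
    t += ttr

years = HOURS / 8760
print(f"PLC  ~ {down_h / HOURS:.4f}")          # probability of load curtailment
print(f"EFLC ~ {events / years:.2f} occ./yr")  # expected frequency of curtailment
print(f"EENS ~ {ens / years:.0f} MWh/yr")      # expected energy not supplied
```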
Recently, deep learning has been widely employed across various domains. The Convolutional Neural Network (CNN), a popular deep learning algorithm, has been successfully utilized in object recognition tasks, such as fac...
Hand gestures have been used as a significant mode of communication since the advent of human civilization. In facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free communication. This technology is pivotal in healthcare and communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) limited and common gestures are considered, and (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model named HVCNNM offers several benefits, notably enhanced accuracy, robustness to variations, real-time performance, reduced channels, and efficiency. Moreover, these models can be optimized for real-time performance, learn from large amounts of data, and are scalable to handle complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, namely the Massey University Dataset (MUD) and the American Sign Language (ASL) Alphabet Dataset (ASLAD). On the MUD and ASLAD datasets, HVCNNM achieved scores of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNNs as a promising HGRoc approach. The findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
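The abstract does not specify the HVCNNM architecture, so as a neutral placeholder the PyTorch sketch below shows the general shape of a compact vision-based gesture classifier with a 26-way head (for example, the ASL alphabet); all layer sizes and the input resolution are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class SmallGestureCNN(nn.Module):
    """Toy stand-in for a compact hand-gesture CNN classifier."""
    def __init__(self, n_classes=26):               # e.g. ASL alphabet
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # fixed-size head input
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallGestureCNN()
logits = model(torch.randn(8, 3, 64, 64))            # batch of 8 RGB crops
print(logits.shape)                                   # torch.Size([8, 26])
```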
This study introduces a data-driven approach for state and output feedback control addressing the constrained output regulation problem in unknown linear discrete-time systems. Our method ensures effective tracking performance while satisfying the state and input constraints, even when the system matrices are not available. We first establish a sufficient condition for the existence of a solution pair to the regulator equation and propose a data-based approach to obtain the feedforward and feedback control gains for state feedback control using linear programming. Furthermore, we design a refined Luenberger observer to accurately estimate the system state while keeping the estimation error within a predefined set. Combining this with output regulation theory, we develop an output feedback control strategy. The closed-loop system is rigorously proved to be asymptotically stable by further leveraging the concept of λ-contractive sets.
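The paper obtains its gains from data via linear programming; the abstract does not give the equations, so as a model-based stand-in that only makes the regulator equation concrete, the sketch below solves Π E = A Π + B Γ, C Π = F for the feedforward pair (Π, Γ) by Kronecker vectorization and least squares. All matrices are illustrative assumptions, and least squares is swapped in for the paper's LP formulation.

```python
import numpy as np

# Illustrative system and exosystem matrices (assumed, not from the paper).
A = np.array([[0.9, 0.2], [0.0, 0.8]])   # state matrix
B = np.array([[0.0], [1.0]])             # input matrix
C = np.array([[1.0, 0.0]])               # output matrix
E = np.array([[1.0]])                    # exosystem: constant reference
F = np.array([[1.0]])                    # tracking error e = C x - F w

n, m, p, q = 2, 1, 1, 1                  # states, inputs, outputs, exo-states

# Regulator equations:  Pi E = A Pi + B Gamma,   C Pi = F.
# Stack the unknowns z = [vec(Pi); vec(Gamma)] using vec(XY) identities.
M1 = np.hstack([np.kron(E.T, np.eye(n)) - np.kron(np.eye(q), A),
                -np.kron(np.eye(q), B)])
M2 = np.hstack([np.kron(np.eye(q), C), np.zeros((p * q, m * q))])
M = np.vstack([M1, M2])
rhs = np.concatenate([np.zeros(n * q), F.flatten(order="F")])

z, *_ = np.linalg.lstsq(M, rhs, rcond=None)
Pi = z[:n * q].reshape(n, q, order="F")
Gamma = z[n * q:].reshape(m, q, order="F")
print("Pi =", Pi.ravel(), " Gamma =", Gamma.ravel())
```

With a stabilizing state feedback gain K, the tracking controller would then take the familiar form u = K x + (Γ - K Π) w, which is where the data-driven gains described in the abstract would enter.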