In recent years, AI and UAVs have progressed significantly across many applications. This article analyzes applications of UAVs combined with modern green computing in various sectors. It addresses cutting-edge technologies such ...
Recent advancements in deep neural networks (DNNs) have made them indispensable for numerous commercial applications. These include healthcare systems and self-driving cars. Training DNN models typically demands subst...
Classification and regression algorithms based on k-nearest neighbors (kNN) are often ranked among the top-10 machine learning algorithms, due to their performance, flexibility, interpretability, non-parametric nature, and computational efficiency. Nevertheless, in existing kNN algorithms, the kNN radius, which plays a major role in the quality of kNN estimates, is independent of any weights associated with the training samples in a kNN neighborhood. This omission, besides limiting the performance and flexibility of kNN, causes difficulties in correcting for covariate shift (e.g., selection bias) in the training data, taking advantage of unlabeled data, domain adaptation, and transfer learning. We propose a new weighted kNN algorithm that, given training samples each associated with two weights, called consensus and relevance (which may also depend on the query at hand), and a request for an estimate of the posterior at a query, works as follows. First, it determines the kNN neighborhood as the training samples within the kth relevance-weighted order statistic of the distances of the training samples from the query. Second, it uses the training samples in this neighborhood to produce the desired estimate of the posterior (output label or value) via consensus-weighted aggregation, as in existing kNN rules. Furthermore, we show that kNN algorithms are affected by covariate shift, and that the commonly used sample reweighting technique does not correct covariate shift in existing kNN algorithms. We then show how to mitigate covariate shift in kNN decision rules by using instead our proposed consensus-relevance kNN algorithm, with relevance weights determined by the amount of covariate shift (e.g., the ratio of sample probability densities before and after the shift). Finally, we provide experimental results, using 197 real datasets, demonstrating that the proposed approach is slightly better (in terms of F-1 score) on average than competing benchmark approaches for mitigating covariate shift.
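The two-step rule described above can be sketched in a few lines of Python. This is a hedged illustration, not the authors' implementation: the function name cr_knn_predict, the Euclidean distance, and the plain weighted majority vote are assumptions; the abstract only fixes the two steps (relevance-weighted radius, then consensus-weighted aggregation).

```python
import math

def cr_knn_predict(X, y, relevance, consensus, query, k):
    """Consensus-relevance kNN (sketch).

    Step 1: grow the neighborhood outward from the query until the
    accumulated *relevance* of the included samples reaches k; the
    radius is thus the kth relevance-weighted order statistic of the
    distances, as the abstract describes.
    Step 2: aggregate the labels in that neighborhood with the
    *consensus* weights (weighted majority vote for classification).
    """
    d = [math.dist(x, query) for x in X]
    order = sorted(range(len(X)), key=lambda i: d[i])

    acc, neighborhood = 0.0, []
    for i in order:
        neighborhood.append(i)
        acc += relevance[i]
        if acc >= k:
            break

    votes = {}
    for i in neighborhood:
        votes[y[i]] = votes.get(y[i], 0.0) + consensus[i]
    return max(votes, key=votes.get)
```

With all relevance and consensus weights equal to 1 this reduces to the standard kNN majority vote; setting the relevance weights to a density ratio implements the covariate-shift correction the abstract proposes.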
In the contemporary era, the global expansion of electrical grids is propelled by various renewable energy sources (RESs). Efficient integration of stochastic RESs and optimal power flow (OPF) management are critical for network operation. This study introduces an innovative solution, the Gaussian Bare-Bones Levy Cheetah Optimizer (GBBLCO), addressing OPF challenges in power generation systems with stochastic RESs. The primary objective is to minimize the total operating costs of RESs, considering four functions: overall operating costs, voltage deviation management, emissions reduction, voltage stability index (VSI) and power loss reduction. Additionally, a carbon tax is included in the objective function to reduce carbon emissions. Thorough scrutiny, using modified IEEE 30-bus and IEEE 118-bus systems, validates GBBLCO's superior performance in achieving optimal solutions. The results demonstrate GBBLCO's efficacy in six optimization scenarios: total cost with valve point effects, total cost with emission and carbon tax, total cost with prohibited operating zones, active power loss optimization, voltage deviation optimization and enhancing voltage stability index (VSI). GBBLCO outperforms conventional techniques in each scenario, showcasing rapid convergence and superior solution quality. Notably, GBBLCO navigates complexities introduced by valve point effects, adapts to environmental constraints, optimizes costs while considering prohibited operating zones, minimizes active power losses, and optimizes voltage deviation by enhancing the voltage stability index (VSI). This research significantly contributes to advancing OPF, emphasizing GBBLCO's improved global search capabilities and ability to address challenges related to local optima. GBBLCO emerges as a versatile and robust optimization tool for diverse challenges in power systems, offering a promising solution for the evolving needs of renewable energy-integrated power grids.
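The abstract does not spell out GBBLCO's update equations, so the sketch below only illustrates two ingredients it names: a Gaussian bare-bones sampling step (as used in bare-bones PSO variants, with each coordinate drawn around the midpoint between a solution and the global best) and a carbon-tax-augmented cost function. The function names, the greedy acceptance rule, and the absence of the Levy-flight component are all illustrative assumptions.

```python
import random

def opf_objective(P, fuel_cost, emissions, carbon_tax):
    """Abstract-level OPF cost: operating cost plus a carbon tax term."""
    return fuel_cost(P) + carbon_tax * emissions(P)

def gaussian_barebones_step(x, best, rng):
    """Sample coordinate j from N(mean=(x_j + best_j)/2, std=|x_j - best_j|)."""
    return [rng.gauss((xj + bj) / 2.0, abs(xj - bj)) for xj, bj in zip(x, best)]

def minimize(f, pop, iters, seed=0):
    """Greedy population search with Gaussian bare-bones moves (sketch)."""
    rng = random.Random(seed)
    best = min(pop, key=f)
    for _ in range(iters):
        new_pop = []
        for x in pop:
            cand = gaussian_barebones_step(x, best, rng)
            new_pop.append(cand if f(cand) < f(x) else x)  # keep improvements only
        pop = new_pop
        best = min(pop + [best], key=f)
    return best
```

Because candidates are accepted only when they improve, the best objective value is nonincreasing over iterations; the real optimizer additionally handles the power-flow equality constraints and prohibited operating zones, which are out of scope for this sketch.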
In this article, the legend of Fig. 6 was presented without a reference. The legend of Fig. 6 has been changed from "The general framework for knowledge distillation involving a teacher-student relationship" ...
Originally, protocols for multi-agent systems (MAS) were designed using information about the network that might not be available. Recently, there has been a focus on scale-free synchronization, where the protocol is ...
An Internet of Mobile Things (IoMT) refers to an internetworked group of pervasive devices that coordinate their motion and task execution through frequent status and data exchange. An IoMT could be serving critical a...
This paper proposes a new clustering method combined with Dynamic Mode Decomposition with Control (DMDc) and Proper Orthogonal Decomposition (POD) to construct more accurate reduced-order models. DMDc and POD are po...
Coping with noise in computing is an important problem to consider in large systems. With applications in fault tolerance (Hastad et al., 1987; Pease et al., 1980; Pippenger et al., 1991), noisy sorting (Shah and Wainwright, 2018; Agarwal et al., 2017; Falahatgar et al., 2017; Heckel et al., 2019; Wang et al., 2024a; Gu and Xu, 2023; Kunisky et al., 2024), noisy searching (Berlekamp, 1964; Horstein, 1963; Burnashev and Zigangirov, 1974; Pelc, 1989; Karp and Kleinberg, 2007), among many others, the goal is to devise algorithms with the minimum number of queries that are robust enough to detect and correct the errors that can happen during the computation. In this work, we consider the noisy computing of the threshold-k function. For n Boolean variables x = (x1, ..., xn) ∈ {0, 1}^n, the threshold-k function THk(·) computes whether the number of 1's in x is at least k or not, i.e., THk(x) = 1 if x1 + ... + xn ≥ k, and THk(x) = 0 otherwise. The noisy queries correspond to noisy readings of the bits, where at each time step, the agent queries one of the bits, and with probability p, the wrong value of the bit is returned. It is assumed that the constant p ∈ (0, 1/2) is known to the agent. Our goal is to characterize the optimal query complexity for computing the THk function with error probability at most δ. This model for noisy computation of the THk function has been studied by Feige et al. (1994), where the order of the optimal query complexity is established; however, the exact tight characterization of the optimal number of queries is still open. In this paper, our main contribution is tightening this gap by providing new upper and lower bounds for the computation of the THk function, which simultaneously improve the existing upper and lower bounds. The main result of this paper can be stated as follows: for any 1 ≤ k ≤ n, there exists an algorithm that computes the THk function with an error probability at most δ = o(1), and the algorithm uses at most (Equation presented) queries in expectation. Here we define m ...
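A simple baseline that achieves the right order O(n log(n/δ)) of queries, though not the tight constants the paper is after, is repetition coding: read each bit O(log(n/δ)) times, take a majority vote, and threshold the estimated count. The constant 10 in the repetition count below is an arbitrary illustrative choice, not a bound from the paper.

```python
import math
import random

def noisy_query(x, i, p, rng):
    """The noisy oracle: return x[i], flipped with probability p."""
    return x[i] ^ 1 if rng.random() < p else x[i]

def noisy_threshold_k(x, k, p, delta, rng):
    """Naive repetition strategy for TH_k under noisy bit readings.

    Each bit is queried m = O(log(n/delta)) times and estimated by
    majority vote; the estimated number of 1's is then compared to k.
    A Chernoff bound makes each per-bit estimate wrong with probability
    well below delta/n, so the overall error is at most delta.
    """
    n = len(x)
    m = max(1, math.ceil(10 * math.log((n + 1) / delta)))  # repetitions per bit
    est = 0
    for i in range(n):
        ones = sum(noisy_query(x, i, p, rng) for _ in range(m))
        est += 1 if 2 * ones > m else 0
    return 1 if est >= k else 0
```

The paper's contribution is precisely to beat this uniform repetition scheme in the constant factor, by adapting the number of queries per bit to how close the running count is to the threshold.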
The rapidly advancing Convolutional Neural Networks (CNNs) have brought about a paradigm shift in various computer vision tasks, while also garnering increasing interest and application in sensor-based Human Activity Recognition (HAR). However, the significant computational demands and memory requirements hinder the practical deployment of deep networks in resource-constrained devices. This paper introduces a novel network pruning method based on the energy spectral density of data in the frequency domain, which reduces the model's depth and accelerates activity recognition. Unlike traditional pruning methods that focus on the spatial domain and the importance of filters, this method converts sensor data, such as HAR data, to the frequency domain for processing. It emphasizes the low-frequency components by calculating their energy spectral density values. Subsequently, filters that meet the predefined thresholds are retained, and redundant filters are removed, leading to a significant reduction in model size without compromising performance or incurring additional computational overhead. Moreover, the proposed algorithm's effectiveness is empirically validated on a standard five-layer CNN backbone network. The computational feasibility and data sensitivity of the proposed scheme are thoroughly examined. Notably, the classification accuracy on three benchmark HAR datasets, UCI-HAR, WISDM, and PAMAP2, reaches 96.20%, 98.40%, and 92.38%, respectively. Meanwhile, our strategy achieves a reduction in Floating Point Operations (FLOPs) by 90.73%, 93.70%, and 90.74%, respectively, along with a corresponding decrease in memory consumption by 90.53%, 93.43%, and 90.05%.
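The selection step can be sketched as follows. The exact scoring rule used in the paper is not given in the abstract, so the score here, the fraction of a filter response's spectral energy that falls in the lowest frequency bins, is an assumption, and the direct DFT keeps the example dependency-free.

```python
import cmath

def energy_spectral_density(signal):
    """ESD via a direct DFT: |X[f]|^2 for each frequency bin f."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t, x in enumerate(signal))) ** 2
            for f in range(n)]

def select_filters(filter_responses, low_freq_bins, threshold):
    """Keep the indices of filters whose low-frequency energy share
    meets the predefined threshold; the rest are pruned as redundant."""
    keep = []
    for idx, resp in enumerate(filter_responses):
        esd = energy_spectral_density(resp)
        share = sum(esd[:low_freq_bins]) / max(sum(esd), 1e-12)
        if share >= threshold:
            keep.append(idx)
    return keep
```

For instance, a filter whose response is nearly constant (all energy in the DC bin) is retained, while one that responds only to high-frequency oscillations is pruned, matching the paper's emphasis on low-frequency components of sensor data.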