Coping with noise in computing is an important problem to consider in large systems. With applications in fault tolerance (Hastad et al., 1987; Pease et al., 1980; Pippenger et al., 1991), noisy sorting (Shah and Wainwright, 2018; Agarwal et al., 2017; Falahatgar et al., 2017; Heckel et al., 2019; Wang et al., 2024a; Gu and Xu, 2023; Kunisky et al., 2024), and noisy searching (Berlekamp, 1964; Horstein, 1963; Burnashev and Zigangirov, 1974; Pelc, 1989; Karp and Kleinberg, 2007), among many others, the goal is to devise algorithms with the minimum number of queries that are robust enough to detect and correct the errors that can happen during the computation. In this work, we consider the noisy computing of the threshold-k function. For n Boolean variables x = (x_1, ..., x_n) ∈ {0,1}^n, the threshold-k function TH_k(·) computes whether the number of 1's in x is at least k or not, i.e., TH_k(x) = 1 if ∑_{i=1}^n x_i ≥ k, and TH_k(x) = 0 otherwise. The noisy queries correspond to noisy readings of the bits: at each time step, the agent queries one of the bits, and with probability p, the wrong value of the bit is returned. It is assumed that the constant p ∈ (0, 1/2) is known to the agent. Our goal is to characterize the optimal query complexity for computing the TH_k function with error probability at most δ. This model for noisy computation of the TH_k function was studied by Feige et al. (1994), where the order of the optimal query complexity was established; however, an exact tight characterization of the optimal number of queries remains open. In this paper, our main contribution is tightening this gap by providing new upper and lower bounds for the computation of the TH_k function, which simultaneously improve the existing upper and lower bounds. The main result of this paper can be stated as follows: for any 1 ≤ k ≤ n, there exists an algorithm that computes the TH_k function with an error probability at most δ = o(1), and the algorithm uses at most (1 + o(1)) · n log(m/δ) / D_KL(p ‖ 1−p) queries in expectation. Here we define m ≜ min{k, n−k+1}, and D_KL(p ‖ 1−p) denotes the Kullback–Leibler divergence between Bern(p) and Bern(1−p).
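The abstract above does not spell out its algorithm, so the following is only a minimal sketch of the classical repetition baseline for this query model: read every bit O(log(n/δ)) times, take per-bit majorities, and compare the count with k. The NoisyOracle class and all names are hypothetical, and this baseline spends O(n log(n/δ)) queries, more than the paper's refined bound.

```python
import math
import random

class NoisyOracle:
    """Hypothetical oracle: each query to bit i returns the wrong value w.p. p."""
    def __init__(self, bits, p):
        self.bits = bits
        self.p = p
        self.num_queries = 0

    def query(self, i):
        self.num_queries += 1
        flip = random.random() < self.p
        return self.bits[i] ^ flip

def noisy_threshold_k(oracle, n, k, delta):
    """Repetition baseline: estimate each bit by majority vote over r repeated
    queries, then compare the estimated count of 1's with k."""
    # By a Chernoff bound, the majority of r queries is wrong with probability
    # at most exp(-2r(1/2 - p)^2); choosing r as below makes each bit wrong
    # w.p. <= delta/n, so a union bound over the n bits gives error <= delta.
    p = oracle.p
    r = math.ceil(math.log(n / delta) / (2 * (0.5 - p) ** 2))
    count = 0
    for i in range(n):
        ones = sum(oracle.query(i) for _ in range(r))
        count += 1 if 2 * ones > r else 0
    return 1 if count >= k else 0

# Example usage on a toy instance.
random.seed(0)
bits = [1] * 6 + [0] * 4                    # 6 ones among n = 10 bits
oracle = NoisyOracle(bits, p=0.1)
print(noisy_threshold_k(oracle, n=10, k=5, delta=0.01))  # expect 1
print("queries used:", oracle.num_queries)
```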
This paper proposes a Poor and Rich Squirrel Algorithm (PRSA)-based Deep Maxout network to detect fraudulent transactions in credit card systems. Initially, the input transaction data is passed to the data transformation...
The earthquake early warning (EEW) system provides advance notice of potentially damaging ground shaking. In EEW, early estimation of magnitude is crucial for timely rescue operations. A set of thirty-four features is extracted using the primary-wave earthquake precursor signal and site-specific information. In Japan's earthquake magnitude dataset, there is a strong imbalance with respect to earthquakes above strong impact, and this imbalance causes high prediction error when training advanced machine learning or deep learning models. In this work, Conditional Tabular Generative Adversarial Networks (CTGAN), a deep machine learning tool, is utilized to learn the characteristics of the first arrival of earthquake P-waves and generate a synthetic dataset based on this information. The results obtained using the actual and mixed (synthetic and actual) datasets are used for training the stacked ensemble magnitude prediction model, MagPred, designed specifically for this study. There are 13,295, 3,989, and 1,710 records designated for training, testing, and validation, respectively. The mean absolute error on the test dataset for single-station magnitude detection using the first three, four, and five seconds of the P wave is 0.41, 0.40, and 0.38 MJMA, respectively. The study demonstrates that Generative Adversarial Networks (GANs) can provide good results for single-station magnitude prediction, and the approach can be effective where little seismic data is available. The study shows that the machine learning method yields better magnitude detection results compared with several regression models. A multi-station magnitude prediction study has been conducted on the prominent Osaka, Off Fukushima, and Kumamoto earthquakes. Furthermore, to validate the performance of the model, an inter-region study has been performed on earthquakes of the India/Nepal region. The study demonstrates that GANs can achieve effective magnitude estimation compared with non-GAN-based methods. This has a high potential
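As a rough, hedged illustration of the augmentation step described above, the sketch below fits CTGAN (via the open-source ctgan package) on the under-represented strong-motion rows of a synthetic stand-in table and mixes the generated rows back in. The feature columns, the 5.5 magnitude cutoff, and all hyperparameters are assumptions for illustration, not the MagPred configuration.

```python
# Sketch: CTGAN-based augmentation for an imbalanced magnitude table.
# Columns, thresholds, and hyperparameters are illustrative assumptions.
import numpy as np
import pandas as pd
from ctgan import CTGAN

rng = np.random.default_rng(0)
n = 500
real = pd.DataFrame({                      # stand-in for P-wave features
    "peak_amplitude": rng.lognormal(0.0, 1.0, n),
    "dominant_period": rng.uniform(0.1, 2.0, n),
    "station_id": rng.choice(["S1", "S2", "S3"], n),
    "magnitude": rng.normal(4.5, 1.0, n).clip(2.0, 7.5),
})

# Fit CTGAN on the under-represented strong-motion rows and oversample them.
strong = real[real["magnitude"] >= 5.5]
model = CTGAN(epochs=50)
model.fit(strong, discrete_columns=["station_id"])
synthetic = model.sample(300)

# The mixed dataset (actual + synthetic) then feeds the ensemble regressor.
mixed = pd.concat([real, synthetic], ignore_index=True)
print(mixed.shape, (mixed["magnitude"] >= 5.5).mean())
```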
This paper comprehensively analyzes the Manta Ray Foraging Optimization (MRFO) algorithm and its integration into diverse academic fields. Introduced in 2020, the MRFO stands as a novel metaheuristic algorithm, drawing inspiration from manta rays' unique foraging behaviors, specifically cyclone, chain, and somersault foraging. These biologically inspired strategies allow for effective solutions to intricate physical problems. With its potent exploitation and exploration capabilities, MRFO has emerged as a promising solution for complex optimization problems, and its utility and benefits have found traction in numerous academic fields. Since its inception in 2020, a plethora of MRFO-based research has been featured in esteemed international journals such as IEEE, Wiley, Elsevier, Springer, MDPI, Hindawi, and Taylor & Francis, as well as at international conference proceedings. This paper consolidates the available literature on MRFO applications, covering various adaptations like hybridized, improved, and other MRFO variants, alongside optimization problems. Recent trends indicate that 12%, 31%, 8%, and 49% of MRFO studies are distributed across these four categories, respectively.
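The abstract names MRFO's three foraging behaviors without giving their update rules; the sketch below implements the commonly published chain, cyclone, and somersault updates (after Zhao et al., 2020) on a toy sphere function. Population size, iteration budget, bounds, and the leader-selection details are illustrative simplifications.

```python
# Hedged sketch of the MRFO update rules (chain, cyclone, somersault
# foraging), minimizing a toy sphere function. Parameters are illustrative.
import numpy as np

def mrfo(obj, dim=5, pop=30, iters=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.apply_along_axis(obj, 1, X)
    best = X[fit.argmin()].copy()

    for t in range(1, iters + 1):
        for i in range(pop):
            r = rng.random(dim)
            prev = X[i - 1] if i > 0 else best      # leader for chained moves
            if rng.random() < 0.5:                  # cyclone foraging
                r1 = rng.random(dim)
                beta = 2.0 * np.exp(r1 * (iters - t + 1) / iters) \
                       * np.sin(2 * np.pi * r1)
                if t / iters < rng.random():        # explore: random reference
                    ref = rng.uniform(lb, ub, dim)
                else:                               # exploit: best-so-far
                    ref = best
                X[i] = ref + r * (prev - X[i]) + beta * (ref - X[i])
            else:                                   # chain foraging
                alpha = 2.0 * r * np.sqrt(np.abs(np.log(r)))
                X[i] = X[i] + r * (prev - X[i]) + alpha * (best - X[i])
        X = np.clip(X, lb, ub)

        # Somersault foraging: pivot each ray around the best position found.
        S = 2.0
        r2, r3 = rng.random((pop, dim)), rng.random((pop, dim))
        X = np.clip(X + S * (r2 * best - r3 * X), lb, ub)

        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < obj(best):
            best = X[fit.argmin()].copy()
    return best, obj(best)

best, val = mrfo(lambda x: float(np.sum(x ** 2)))   # sphere function
print(val)  # should land near 0
```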
Delay/disruption tolerant networking (DTN) is proposed as a networking architecture to overcome challenging space communication characteristics and provide reliable data transmission service in the presence of long propagation delays and/or lengthy link disruptions. Bundle Protocol (BP) and Licklider Transmission Protocol (LTP) are the key technologies for DTN. LTP red transmission offers a reliable transmission mechanism for space networks. One of the key metrics used to measure the performance of LTP in space applications is the end-to-end data delivery delay, which is influenced by factors such as the quality of spatial channels and the size of cross-layer packets. In this paper, an end-to-end reliable data delivery delay model of LTP red transmission is proposed using a roulette wheel algorithm, which better matches the typical random characteristics of space networks. The proposed models are validated through real data transmission experiments on a semi-physical testing platform. Furthermore, the impact of cross-layer packet size on the performance of LTP reliable transmission is analyzed, with a focus on bundle size, block size, and segment size. The analysis and results presented in this paper offer valuable contributions towards enhancing the reliability of LTP transmission in space communication scenarios.
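The paper's fitted delay model is not reproduced in the abstract; below is only a minimal sketch of roulette-wheel sampling, the randomization primitive named above, used here to draw a hypothetical number of LTP retransmission rounds for a red block. The probability table is an assumption for illustration.

```python
# Minimal roulette-wheel sampling sketch: draw an outcome in proportion to
# its probability. The outcomes are hypothetical LTP retransmission-round
# counts; the probability table is assumed, not the paper's fitted model.
import random

def roulette_wheel(outcomes, probs, rng=random):
    """Select one outcome with probability proportional to probs."""
    spin = rng.random() * sum(probs)
    cumulative = 0.0
    for outcome, p in zip(outcomes, probs):
        cumulative += p
        if spin <= cumulative:
            return outcome
    return outcomes[-1]  # guard against floating-point round-off

# Hypothetical distribution of retransmission rounds for one red block.
rounds = [1, 2, 3, 4]
probs = [0.70, 0.20, 0.07, 0.03]

random.seed(42)
samples = [roulette_wheel(rounds, probs) for _ in range(10000)]
print(f"mean retransmission rounds ≈ {sum(samples) / len(samples):.2f}")  # ~1.43
```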
Battery Energy Storage Systems (BESS) are critical for addressing the intermittent nature of Distributed Energy Resources (DERs) in power distribution networks. By enabling real-time monitoring and remote control, Int...
Authors: A. E. M. Eljialy, Mohammed Yousuf Uddin, Sultan Ahmad
Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia; Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia, and also with the University Center for Research and Development (UCRD), Department of Computer Science and Engineering, Chandigarh University, Punjab, India
Intrusion detection systems (IDSs) are deployed to detect anomalies in real time. They classify a network's incoming traffic as benign or anomalous (attack). An efficient and robust IDS in software-defined networks is an inevitable component of network security. The main challenges of such an IDS are achieving zero or extremely low false positive rates and high detection rates. Internet of Things (IoT) networks run on devices with minimal resources, which makes deploying traditional IDSs in IoT networks infeasible. Machine learning (ML) techniques are extensively applied to build robust IDSs, and many researchers have utilized different ML methods and techniques to address the above challenges. The development of an efficient IDS starts with a good feature selection process to avoid overfitting the ML model. This work proposes a multiple feature selection process followed by classification. In this study, a software-defined networking (SDN) dataset is used to train and test the proposed model. The model applies multiple feature selection techniques to select high-scoring features from the full feature set; highly relevant features for anomaly detection are selected on the basis of their scores to generate the candidate dataset. Multiple classification algorithms are then applied to the candidate dataset to build models. The proposed model exhibits considerable improvement in the detection of attacks, with high accuracy and low false positive rates, even with few features selected.
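As a hedged sketch of the select-by-multiple-scorers-then-classify pattern described above: the snippet scores features with two selectors, keeps the features that both rank highly, and trains a classifier on the reduced set. The scoring functions, the voting rule, the classifier, and the synthetic stand-in data are assumptions, not the paper's exact configuration.

```python
# Sketch: score features with several selectors, keep those ranked highly
# by both (a simple vote), then classify. All choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=0)   # stand-in for an SDN traffic dataset

# Two scoring methods; keep only features selected by both.
k = 15
mask_f = SelectKBest(f_classif, k=k).fit(X, y).get_support()
mask_mi = SelectKBest(mutual_info_classif, k=k).fit(X, y).get_support()
selected = mask_f & mask_mi
print("features kept:", selected.sum())

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```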
Cloud computing is an emerging field in information technology, enabling users to access a shared pool of computing resources. Despite its potential, cloud technology presents various challenges, with one of the most ...
Sentiment analysis plays an important role in distilling and clarifying content from movie reviews, aiding the audience in understanding universal views towards the movies. However, the abundance of reviews and the risk of encountering spoilers pose challenges for efficient sentiment analysis, particularly in Arabic movie reviews. This study proposed a Stochastic Gradient Descent (SGD) machine learning (ML) model tailored for sentiment analysis in Arabic and English movie reviews. SGD allows for flexible model complexity adjustments, which can adapt well to the particularities of the Arabic language. This adaptability ensures that the model can capture the nuances and specific local patterns of Arabic text, leading to better performance. Two distinct language datasets were utilized, and extensive pre-processing steps were employed to optimize the datasets for analysis. The proposed SGD model, designed to accommodate the nuances of each language, aims to surpass existing models in terms of accuracy and efficiency. The SGD model achieves an accuracy of 84.89% on the Arabic dataset and 87.44% on the English dataset, making it the top-performing model in terms of accuracy on both datasets. This indicates that the SGD model consistently demonstrates high accuracy levels across Arabic and English content. This study helps deepen the understanding of sentiments across various linguistic contexts. Unlike many studies that focus solely on movie reviews, the Arabic dataset utilized here includes hotel reviews, offering a broader perspective.
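A minimal sketch of an SGD-trained sentiment classifier over TF-IDF features, in the spirit of the model described above, is given below. The four-document corpus and every hyperparameter are illustrative; character n-grams are used here only because they sidestep language-specific tokenization for mixed Arabic/English text.

```python
# Sketch: SGD-based sentiment classifier over TF-IDF features. The tiny
# inline corpus and all hyperparameters are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

train_texts = [
    "a wonderful, moving film",          # positive (English)
    "terrible plot and worse acting",    # negative (English)
    "فيلم رائع ومؤثر",                    # positive (Arabic)
    "فيلم ممل وسيئ",                      # negative (Arabic)
]
train_labels = [1, 0, 1, 0]

# Character n-grams avoid committing to one language's tokenizer, which
# helps when a single pipeline must serve both Arabic and English text.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    SGDClassifier(alpha=1e-4, random_state=0),
)
model.fit(train_texts, train_labels)
print(model.predict(["ممتع ورائع", "boring and bad"]))
```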
Semantic segmentation is an important sub-task for many applications. However, pixel-level ground-truth labeling is costly, and there is a tendency to overfit to training data, thereby limiting the generalization ability. Unsupervised domain adaptation can potentially address these problems by allowing systems trained on labelled datasets from the source domain (including less expensive synthetic domains) to be adapted to a novel target domain. The conventional approach involves automatic extraction and alignment of the representations of source and target domains globally. One limitation of this approach is that it tends to neglect the differences between classes: representations of certain classes can be more easily extracted and aligned between the source and target domains than others, limiting the adaptation over all classes. Here, we address this problem by introducing a Class-Conditional Domain Adaptation (CCDA) method. It incorporates a class-conditional multi-scale discriminator and class-conditional losses for both segmentation and adaptation. Together, they measure the segmentation, shift the domain in a class-conditional manner, and equalize the loss over classes. Experimental results demonstrate that the performance of our CCDA method matches, and in some cases surpasses, that of state-of-the-art methods.
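The CCDA architecture itself is not specified in the abstract; the following is a speculative, single-scale sketch of one way a class-conditional adversarial loss can be equalized over classes, which is the core idea named above. The toy discriminator, the tensor shapes, and the weighting scheme are assumptions, not the paper's design.

```python
# Hedged sketch: a class-conditional adversarial loss averaged per class so
# that easy-to-align classes do not dominate the adaptation gradient. The
# single-scale toy discriminator is a simplification of the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 5

# Toy discriminator: maps segmentation logits to per-pixel, per-class domain
# scores (one channel per class instead of a single real/fake output).
disc = nn.Sequential(
    nn.Conv2d(NUM_CLASSES, 64, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, NUM_CLASSES, 3, padding=1),
)

def class_conditional_adv_loss(seg_logits, domain_label):
    """Average the adversarial loss within each class, then over classes."""
    probs = F.softmax(seg_logits, dim=1)          # soft class assignment
    scores = disc(seg_logits)                     # (B, C, H, W) domain scores
    target = torch.full_like(scores, float(domain_label))
    per_pixel = F.binary_cross_entropy_with_logits(scores, target,
                                                   reduction="none")
    # Weight each pixel's loss by its class probability, normalize per class.
    per_class = (per_pixel * probs).sum(dim=(0, 2, 3)) \
        / probs.sum(dim=(0, 2, 3)).clamp(min=1e-6)
    return per_class.mean()                       # equalize over classes

# Example: target-domain logits labelled as "source" (1) to fool the disc.
logits = torch.randn(2, NUM_CLASSES, 32, 32, requires_grad=True)
loss = class_conditional_adv_loss(logits, domain_label=1)
loss.backward()
print(float(loss))
```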