As network security threats arising from the Internet of Things (IoT) continue to increase, the attack surface keeps expanding, network heterogeneity grows, and the use of virtualization technology and...
Labelled data for training sequence labelling models can be collected from multiple annotators or workers in crowdsourcing. However, these labels could be noisy because of the varying expertise and reliability of annotators. To ensure high data quality, it is crucial to infer the correct labels by aggregating the noisy ones. Although label aggregation is a well-studied topic, only a few studies have investigated how to aggregate sequence labels. Recently, neural network models have attracted research attention for this task. In this paper, we explore two neural-network-based methods. The first combines Hidden Markov Models with neural networks while also learning distributed representations of annotators (i.e., annotator embeddings); the second combines BiLSTM with autoencoders. Experimental results on three real-world datasets demonstrate the effectiveness of using neural networks for sequence label aggregation. Moreover, our analysis shows that annotator embeddings not only make our model applicable to real-time applications but are also useful for studying annotator behaviour.
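Neither neural aggregator is spelled out in the abstract; as a point of reference for the task itself, a minimal token-level majority-vote baseline for aggregating noisy sequence labels might look like the following sketch (the function name and BIO tags are illustrative, not the paper's):

from collections import Counter

def aggregate_sequences(annotations):
    """Token-level majority vote over noisy sequence labels.

    annotations: list of label sequences, one per annotator,
                 all of equal length (e.g. BIO tags for one sentence).
    Returns a single aggregated label sequence.
    """
    seq_len = len(annotations[0])
    aggregated = []
    for t in range(seq_len):
        votes = Counter(seq[t] for seq in annotations)
        aggregated.append(votes.most_common(1)[0][0])
    return aggregated

# Example: three annotators label the same 4-token sentence.
ann = [
    ["B-PER", "I-PER", "O", "O"],
    ["B-PER", "O",     "O", "O"],
    ["B-PER", "I-PER", "O", "B-LOC"],
]
print(aggregate_sequences(ann))  # ['B-PER', 'I-PER', 'O', 'O']

Unlike this baseline, the paper's models can weight annotators by their learned reliability rather than counting every vote equally.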
Real-world adversarial attacks on machine learning models often feature an asymmetric structure wherein adversaries only attempt to induce false negatives (e.g., classify a spam email as not spam). We formalize the as...
With the rapid growth in the scale and number of photovoltaic (PV) power stations being constructed, more and more PV arrays face operation and maintenance challenges. In recent years, many artificial intelligence...
ISBN: (Print) 9781713871088
Decentralized optimization is playing an important role in applications such as training large machine learning models. Despite its superior practical performance, there has been a lack of fundamental understanding of its theoretical properties. In this work, we address the following open research question: to train an overparameterized model over a set of distributed nodes, what is the minimum communication overhead (in terms of bits exchanged) that the system needs to sustain while still achieving (near) zero training loss? We show that for a class of overparameterized models where the number of parameters D is much larger than the total number of data samples N, the best possible communication complexity is Omega(N), which is independent of the problem dimension D. Further, for a few specific overparameterized models (i.e., linear regression, and certain multi-layer neural networks with one wide layer), we develop a set of algorithms which use linear compression followed by adaptive quantization, and show that they achieve dimension-independent, near-optimal communication complexity. To our knowledge, this is the first time that dimension-independent communication complexity has been shown for distributed optimization.
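The abstract does not spell out the compression-plus-quantization construction; as a minimal sketch of the adaptive-quantization half alone (the linear compression step is omitted, and all names are illustrative), a node could cut a gradient message from 64 bits per entry to a few bits, with the scale adapted to each message:

import numpy as np

def quantize(grad, num_bits=4):
    # Scale adapts to the current message's magnitude ("adaptive"):
    # only num_bits per entry plus one float (the scale) are sent.
    levels = 2 ** (num_bits - 1) - 1
    scale = max(float(np.abs(grad).max()), 1e-12)
    return np.round(grad / scale * levels).astype(np.int8), scale

def dequantize(q, scale, num_bits=4):
    levels = 2 ** (num_bits - 1) - 1
    return q.astype(np.float64) * scale / levels

rng = np.random.default_rng(0)
g = rng.standard_normal(10_000)   # a node's local gradient, D = 10,000
q, s = quantize(g)                # ~4 bits/entry instead of 64
g_hat = dequantize(q, s)
print(np.linalg.norm(g - g_hat) / np.linalg.norm(g))  # small relative error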
Authors:
Al-Hawawreh, Muna; Hossain, M. Shamim
Deakin Univ, Sch Informat Technol, Geelong Waurn Ponds Campus, Waurn Ponds, Vic 3216, Australia
King Saud Univ, Coll Comp & Informat Sci, Res Chair Pervas & Mobile Comp, Riyadh 11543, Saudi Arabia
King Saud Univ, Coll Comp & Informat Sci, Dept Software Engn, Riyadh 11543, Saudi Arabia
The widespread use of intelligent consumer electronics, specifically autonomous vehicles, has increased exponentially. The key enablers of this pervasiveness are the Internet of Things (IoT), Artificial Intelligence (AI), and satellite communications, which provide consumers with highly precise and reliable self-driving vehicles. However, autonomous vehicles also come with significant cybersecurity concerns: attackers can use satellite links to launch cyberattacks against them. An Intrusion Detection System (IDS) is one of the most effective mechanisms for securing autonomous vehicles. However, existing IDSs based on machine and deep learning train their models on a centralized server, uploading data or parameters to that server for training. This structure struggles with vehicle mobility, introduces processing delays, and increases privacy and security risks, affecting vehicle performance. Therefore, for the first time, this paper proposes a new federated learning-assisted distributed IDS that uses a mesh satellite network to protect autonomous vehicles. We construct a local model using a deep neural network. Then, we provide a mesh federated learning approach that keeps model training local and lets satellites exchange their parameters in a privacy-preserving way. Simulation results show that our proposed model performs well while keeping the computation cost reasonable.
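The paper's exact protocol is not given in the abstract; the sketch below shows the general shape of such mesh federated averaging under stated assumptions: a logistic-regression stand-in for the deep neural network, a three-node ring mesh, and synthetic data. Only parameters cross the links; raw data stays local.

import numpy as np

rng = np.random.default_rng(42)

def local_update(w, X, y, lr=0.1):
    # One pass of logistic-regression training on a vehicle's own data
    # (a stand-in for the paper's deep-neural-network local model).
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return w - lr * X.T @ (p - y) / len(y)

def mesh_average(weights, adjacency):
    # Each node averages its parameters with its mesh neighbours';
    # raw data never leaves a vehicle.
    return [np.mean([weights[j] for j in adjacency[i]] + [w], axis=0)
            for i, w in enumerate(weights)]

adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # three nodes, ring mesh
dim = 5
weights = [np.zeros(dim) for _ in range(3)]
for _ in range(10):                              # federated rounds
    X = [rng.standard_normal((32, dim)) for _ in range(3)]
    y = [(x.sum(axis=1) > 0).astype(float) for x in X]
    weights = [local_update(w, X[i], y[i]) for i, w in enumerate(weights)]
    weights = mesh_average(weights, adjacency)
print(weights[0])   # all nodes converge toward a shared model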
Sentiment analysis is the field of research that identifies useful information from people's sentiments shared through social media such as Twitter and Facebook. However, some sentiment analysis approaches attain mini...
Weapon detection systems play a significant role in ensuring public safety, specifically in surveillance scenarios, but prevailing research methodologies perform poorly on occluded and customized weapons. To resolve this issue, an efficient weapon detection system for surveillance video based on a Non-linear and Scalar Growing Cosine Unit based Deep Convolutional Neural Network (NSGCU-DCNN) is proposed. First, the input video is converted into frames and each frame is pre-processed: using Gaussian Kernelized Double Plateau Histogram Equalization (GKDPHE), contrast enhancement is performed based on the frame's brightness. After that, motion in each frame is estimated with the Frobenius Norm-based Three Step Search (FN-TSS) algorithm. Thereafter, objects are detected using Weibull-distributed Viola-Jones (WDVJ), and the detected objects are cropped for further analysis. Next, the edge information of each object is extracted with the Canny Edge Detection (CED) algorithm, and the frame is divided into 3x3 blocks. Afterward, the silhouette coefficient is calculated between the frames and the mask images. Features are then extracted and selected from the silhouette-estimated frame, and the NSGCU-DCNN classifies weapons based on the selected features. Experimental outcomes show that, compared with prevailing techniques, the proposed mechanism achieves higher accuracy.
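The named components (GKDPHE, WDVJ, NSGCU-DCNN) are specific to this paper and not reproduced here. As an illustration of just the motion-estimation step, a plain three-step search with a Frobenius-norm matching cost, in the spirit of FN-TSS, might look like this sketch (block size, step size, and the synthetic frames are assumptions):

import numpy as np

def frobenius_cost(a, b):
    # Frobenius norm of the block difference: the matching cost that a
    # FN-TSS-style estimator plugs into the three-step search.
    return np.linalg.norm(a - b, ord="fro")

def three_step_search(prev, curr, top, left, block=16, step=4):
    ref = curr[top:top + block, left:left + block]
    by, bx = top, left
    while step >= 1:
        best = frobenius_cost(ref, prev[by:by + block, bx:bx + block])
        cand = (by, bx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = by + dy, bx + dx
                if (0 <= y <= prev.shape[0] - block
                        and 0 <= x <= prev.shape[1] - block):
                    c = frobenius_cost(ref, prev[y:y + block, x:x + block])
                    if c < best:
                        best, cand = c, (y, x)
        by, bx = cand
        step //= 2
    return by - top, bx - left   # estimated motion vector (dy, dx)

yy, xx = np.mgrid[0:64, 0:64]
prev = np.sin(yy / 5.0) + np.cos(xx / 7.0)       # smooth synthetic frame
curr = np.roll(prev, (2, 3), axis=(0, 1))        # shift content down/right
print(three_step_search(prev, curr, 16, 16))     # approximately (-2, -3)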
A plethora of data from sources like IoT, social websites, health, business, and many more has revolutionized the digital world in recent years. To make effective use of data for analysis, prediction, or automation of applications, the demand for machine learning and artificial intelligence has grown over time. With the growing capability of neural networks, they are now used in real-time applications related to medical diagnosis, weather forecasting, speech and facial recognition, stock markets, etc. Despite the undoubted processing and intelligence capabilities of neural networks, key challenges must be addressed for their effective deployment in real-time applications. One of these challenges is their vulnerability to fooling, that is, making networks classify wrongly by inducing very small changes in their inputs. How information is distributed in the network might be a predictor of fooling, so the role of information distribution on fooling robustness is investigated here. Specifically, we use dropout, a known regularization technique, to induce more distributed representations, and test network robustness to fooling induced by the Fast Gradient Sign Method (FGSM). The findings show that information smearedness is a better predictor of robustness to fooling than dropout.
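FGSM itself is standard; a minimal sketch of the attack against a small dropout-regularized classifier (the model, shapes, and epsilon are illustrative, not the study's setup) could look like:

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    # Fast Gradient Sign Method: step each input in the direction of
    # the sign of the loss gradient to try to induce misclassification.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

model = torch.nn.Sequential(          # toy classifier, not the study's
    torch.nn.Flatten(),
    torch.nn.Linear(28 * 28, 128),
    torch.nn.ReLU(),
    torch.nn.Dropout(p=0.5),          # the regularizer under study
    torch.nn.Linear(128, 10),
)
model.eval()                          # dropout inactive at attack time
x = torch.rand(8, 1, 28, 28)          # dummy batch standing in for images
y = torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x, y)
same = (model(x).argmax(1) == model(x_adv).argmax(1)).float().mean()
print(f"predictions unchanged after attack: {same:.2f}")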
ISBN: (Print) 9798350385939; 9798350385922
To meet the high throughput and stringent Quality of Service (QoS) requirements of fifth- and sixth-generation (5G and 6G) users, operators have explored technologies such as Multi-Input Multi-Output (MIMO), massive MIMO (mMIMO), Base Station (BS) densification, Non-Orthogonal Multiple Access (NOMA), etc. The Central Unit (CU) and Distributed Unit (DU) split architectures defined in the Open Radio Access Network (ORAN) and 3rd Generation Partnership Project (3GPP) specifications open up software-based methods that were previously restricted by hardware-centric systems. With the software-defined Radio Access Network (RAN) in the cloud and artificial intelligence being a promising domain for solving various problems, the two can be combined to solve or optimize existing challenges such as channel estimation in wireless receivers, which is computationally expensive. In this work, we attempt to improve the performance of the compute-intensive channel estimation module using machine learning. Convolutional Neural Network (CNN) based image-processing architectures, Super-Resolution CNN (SRCNN) and Denoising CNN (DnCNN), were implemented for the channel estimation module, along with various spatial arrangements of the input signal. The spatial arrangement of the complex input signal's real and imaginary components outperformed the alternatives, yielding better accuracy than traditional methods such as estimated Minimum Mean Square Error (MMSE) and Approximate Linear Minimum Mean Square Error (ALMMSE).
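As a rough illustration of the SRCNN-style arrangement described, with the real and imaginary parts of the channel grid stacked as two input channels (layer sizes and grid dimensions are assumptions, not the paper's configuration):

import torch
import torch.nn as nn

class SRCNNChannelEstimator(nn.Module):
    # SRCNN-style three-layer CNN that refines a coarse channel estimate,
    # treating the time-frequency grid as a 2-channel (real/imag) image.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, kernel_size=9, padding=4),   # patch extraction
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1),             # non-linear mapping
            nn.ReLU(),
            nn.Conv2d(32, 2, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, h_coarse):
        return self.net(h_coarse)

# Dummy grid: batch of 4, (real, imag) x 72 subcarriers x 14 OFDM symbols.
h_ls = torch.randn(4, 2, 72, 14)      # e.g. an interpolated LS estimate
model = SRCNNChannelEstimator()
print(model(h_ls).shape)              # torch.Size([4, 2, 72, 14])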