Illegitimate intelligent reflective surfaces (IRSs) can pose significant physical layer security risks to multi-user multiple-input single-output (MU-MISO) systems. Recently, a DISCO approach has been proposed an ille...
Facial action unit (AU) detection, which aims to classify the AUs present in a facial image, has long suffered from insufficient AU annotations. In this paper, we aim to mitigate this data scarcity issue by learning AU repre...
ISBN (digital): 9781728187808
ISBN (print): 9781728187815
Most existing resource provisioning methods are designed for traditional Web applications with linear structures. However, Web systems with meshed topologies are becoming widespread, and meshed connections among different tiers make Virtual Machine (VM) provisioning and bottleneck elimination complex. In this paper, a Jackson network based Proactive and Reactive VM auto-scaling Method (JPRM) is proposed. In JPRM, request transition behaviors among tiers are modeled as a finite-state Markov stochastic process. A transition probability matrix is learned online to predict resource requirements based on M/M/N queuing models as proactive control. For reactive provisioning, the final increased request rate of each tier is determined by checking stable states and solving the Jackson equilibrium equations, so as to eliminate bottleneck tiers and avoid bottleneck shifting. JPRM is evaluated in a simulation environment built with CloudSim. Experimental results show that JPRM avoids bottleneck shifting with reasonable additional VM rental costs compared with existing methods.
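As a rough illustration of the queueing reasoning above (not the authors' implementation), the sketch below solves the Jackson traffic equations for a small hypothetical meshed system and then sizes each tier as an M/M/N queue; the tier count, rates, routing matrix, and target utilization are all assumed values.

```python
# Minimal sketch of Jackson-network-based tier sizing (illustrative only).
import numpy as np

def effective_arrival_rates(external_rates, P):
    """Solve the Jackson traffic equations: lambda = external + P^T * lambda."""
    n = len(external_rates)
    return np.linalg.solve(np.eye(n) - P.T, external_rates)

def vms_needed(lam, mu, target_util=0.7):
    """Smallest VM count N keeping M/M/N utilization lam / (N * mu) below target."""
    n = 1
    while lam / (n * mu) > target_util:
        n += 1
    return n

# Hypothetical 3-tier meshed Web system: requests enter tier 0 and may be
# forwarded between tiers according to the (made-up) transition matrix P.
external = np.array([100.0, 0.0, 0.0])      # external requests/s per tier
P = np.array([[0.0, 0.6, 0.3],              # P[i][j]: prob. a request at tier i
              [0.1, 0.0, 0.5],              # is routed next to tier j
              [0.0, 0.2, 0.0]])
mu = np.array([40.0, 25.0, 30.0])           # per-VM service rate of each tier

lam = effective_arrival_rates(external, P)
plan = [vms_needed(l, m) for l, m in zip(lam, mu)]
print("effective rates:", lam.round(1), "VMs per tier:", plan)
```

In JPRM the transition matrix is estimated online from observed request transitions and combined with reactive bottleneck checks, whereas here it is simply fixed by hand to show the equilibrium-solving step.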
ISBN (print): 9798331314385
Federated learning (FL) has emerged as a prominent machine learning paradigm in edge computing environments, enabling edge devices to collaboratively optimize a global model without sharing their private data. However, existing FL frameworks suffer from efficacy deterioration due to the system heterogeneity inherent in edge computing, especially in the presence of domain shifts across local data. In this paper, we propose a heterogeneous FL framework DapperFL, to enhance model performance across multiple domains. In DapperFL, we introduce a dedicated Model Fusion Pruning (MFP) module to produce personalized compact local models for clients to address the system heterogeneity challenges. The MFP module prunes local models with fused knowledge obtained from both local and remaining domains, ensuring robustness to domain shifts. Additionally, we design a Domain Adaptive Regularization (DAR) module to further improve the overall performance of DapperFL. The DAR module employs regularization generated by the pruned model, aiming to learn robust representations across domains. Furthermore, we introduce a specific aggregation algorithm for aggregating heterogeneous local models with tailored architectures and weights. We implement DapperFL on a real-world FL platform with heterogeneous clients. Experimental results on benchmark datasets with multiple domains demonstrate that DapperFL outperforms several state-of-the-art FL frameworks by up to 2.28%, while significantly achieving model volume reductions ranging from 20% to 80%. Our code is available at: https://***/jyzgh/DapperFL.
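The abstract does not spell out the aggregation algorithm for heterogeneous local models, so the following is only one plausible reading: each pruned client contributes the weights it kept (tracked by a binary mask), and the server averages each parameter entry over the clients that retained it. The layer names, tensors, and masks are made up for illustration.

```python
# Illustrative sketch of mask-aware aggregation of heterogeneously pruned models.
import torch

def aggregate_heterogeneous(client_states, client_masks):
    """Average each tensor entry over the clients whose pruning mask kept it."""
    global_state = {}
    for name in client_states[0]:
        stacked = torch.stack([s[name] for s in client_states])        # [K, ...]
        masks = torch.stack([m[name].float() for m in client_masks])   # [K, ...]
        counts = masks.sum(dim=0).clamp(min=1.0)                       # avoid /0
        global_state[name] = (stacked * masks).sum(dim=0) / counts
    return global_state

# Two hypothetical clients sharing one linear layer; client 1 pruned half of it.
w = torch.randn(4, 8)
states = [{"fc.weight": w + 0.1}, {"fc.weight": w - 0.1}]
masks = [{"fc.weight": torch.ones(4, 8, dtype=torch.bool)},
         {"fc.weight": torch.rand(4, 8) > 0.5}]
print(aggregate_heterogeneous(states, masks)["fc.weight"].shape)
```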
Implicit neural representations (INRs) aim to learn a continuous function (i.e., a neural network) to represent an image, where the input and output of the function are pixel coordinates and RGB/gray values, respectively. However, images tend to consist of many objects whose colors are not perfectly consistent, so an image is actually a discontinuous piecewise function that cannot be well estimated by a single continuous function. In this paper, we empirically show that if a neural network is forced to fit a discontinuous piecewise function to within a fixed small error, the time cost increases exponentially with the number of boundaries in the spatial domain of the target signal. We name this phenomenon the exponential-increase hypothesis. Under the exponential-increase hypothesis, learning INRs for images with many objects converges very slowly. To address this issue, we first prove that partitioning a complex signal into several sub-regions and fitting it with piecewise INRs can significantly speed up convergence. Based on this fact, we introduce a simple partition mechanism to boost the performance of two INR methods for image reconstruction: one for learning INRs, and the other for learning-to-learn INRs. In both cases, we partition an image into different sub-regions and dedicate a smaller network to each part. In addition, we propose two partition rules based on regular grids and semantic segmentation maps, respectively. Extensive experiments validate the effectiveness of the proposed partitioning methods for learning an INR of a single image (the ordinary learning framework) and for the learning-to-learn framework. Code is released here.
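A minimal sketch of the regular-grid partition rule described above, assuming small coordinate-MLPs as the sub-region INRs; the network width, grid size, and optimizer settings are illustrative rather than taken from the paper.

```python
# Fit one small coordinate-MLP per grid cell so each network only has to model
# a piece of the (discontinuous) image signal. Illustrative sketch only.
import torch
import torch.nn as nn

def make_inr(hidden=64):
    return nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, 3))  # (y, x) -> RGB

def fit_partitioned_inr(image, grid=4, steps=200):
    """image: [H, W, 3] in [0, 1]; returns one fitted INR per grid cell."""
    H, W, _ = image.shape
    inrs = []
    for gi in range(grid):
        for gj in range(grid):
            ys = torch.arange(gi * H // grid, (gi + 1) * H // grid)
            xs = torch.arange(gj * W // grid, (gj + 1) * W // grid)
            yy, xx = torch.meshgrid(ys, xs, indexing="ij")
            coords = torch.stack([yy, xx], -1).reshape(-1, 2).float()
            coords = coords / torch.tensor([H - 1, W - 1])   # normalize to [0, 1]
            target = image[yy, xx].reshape(-1, 3)
            net = make_inr()
            opt = torch.optim.Adam(net.parameters(), lr=1e-3)
            for _ in range(steps):
                opt.zero_grad()
                loss = ((net(coords) - target) ** 2).mean()
                loss.backward()
                opt.step()
            inrs.append(net)
    return inrs

inrs = fit_partitioned_inr(torch.rand(64, 64, 3), grid=2, steps=50)
print(len(inrs), "sub-region INRs fitted")
```

The semantic-segmentation partition rule would replace the regular grid cells with per-segment coordinate sets but otherwise follow the same per-region fitting loop.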
The Internet of Things (IoT) has brought the dream of ubiquitous data access from physical environments into reality. IoT embeds sensors and actuators in physical objects so that they can communicate and exchange data...
The development of the Internet of Things (IoT) calls for a comprehensive information security evaluation framework to quantitatively measure the safety score and risk (S&R) value of the network. In this paper, we summarize the architecture and vulnerabilities of IoT and propose a comprehensive information security evaluation model based on multi-level decomposition. The evaluation model provides an approach to information security evaluation for IoT and guides security decision makers dynamically. First, we establish an overall evaluation indicator system that includes four primary indicators: threat information, asset, vulnerability, and management. It also includes eleven secondary indicators: system protection rate, attack detection rate, confidentiality, availability, controllability, identifiability, number of vulnerabilities, vulnerability hazard level, staff organization, enterprise grading, and service continuity. Second, we build the core algorithm that enables the evaluation model, in which a novel weighting technique is developed and a quantitative method is proposed to measure the S&R value. Finally, to better supervise the performance of the proposed evaluation model, we present four novel indicators: residual risk, continuous conformity of residual risk, head-to-tail consistency, and decrease ratio. The results show the advantages of the proposed model in the evaluation of information security for IoT.
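The abstract does not give the weighting technique or the quantification formula, so the following is only an illustrative sketch of a multi-level weighted roll-up from secondary to primary indicators; all weights and scores are invented, and taking risk as 1 − S is our simplifying assumption rather than the paper's definition.

```python
# Illustrative multi-level weighted scoring: secondary indicators roll up into
# primary indicators, which combine into a safety score S (risk R assumed 1 - S).
def weighted_score(scores, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))

secondary = {  # primary indicator -> (secondary scores in [0, 1], their weights)
    "threat information": ([0.8, 0.7], [0.5, 0.5]),           # protection, detection
    "asset":              ([0.9, 0.8, 0.7, 0.8], [0.3, 0.3, 0.2, 0.2]),
    "vulnerability":      ([0.6, 0.5], [0.6, 0.4]),           # count, hazard level
    "management":         ([0.7, 0.8, 0.9], [0.4, 0.3, 0.3]),
}
primary_weights = {"threat information": 0.3, "asset": 0.3,
                   "vulnerability": 0.25, "management": 0.15}

safety = sum(primary_weights[k] * weighted_score(*v) for k, v in secondary.items())
risk = 1.0 - safety
print(f"safety score S = {safety:.3f}, risk R = {risk:.3f}")
```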
Attack design is indispensable for analyzing potential risks of networked control systems (NCSs). However, the remote control center is usually well protected and its knowledge is difficult to be disclosed, which beco...
With the increasing emphasis on data privacy, federated learning (FL) networks show great potential through distributed training without directly sharing the raw data. However, the coverage of FL terrestrial servers i...
The robustness of deep neural networks (DNNs) has caused great concern in the academic and industrial communities, especially in safety-critical applications. Instead of verifying whether the robustness property holds in a given neural network, this paper focuses on training neural networks that are robust with respect to given perturbations. State-of-the-art training methods, interval bound propagation (IBP) and CROWN-IBP, perform well for small perturbations, but their performance declines significantly for large perturbations, which is termed the "drawdown risk" in this paper. Drawdown risk refers to the phenomenon that IBP-family training methods cannot provide the expected robust neural networks in larger perturbation cases as they do in smaller ones. To alleviate this unexpected drawdown risk, we propose a global and monotonically decreasing robustness training strategy that takes multiple perturbations into account during each training epoch (global robustness training) and combines the corresponding robustness losses with monotonically decreasing weights (monotonically decreasing robustness training). Experiments demonstrate that our strategy maintains performance on small perturbations while greatly alleviating the drawdown risk on large perturbations. It is also noteworthy that our training method achieves higher model accuracy than the original training methods, which means that our strategy gives more balanced consideration to robustness and accuracy.
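A minimal sketch of how the combined loss might look under this strategy (our reading of the abstract, not the authors' code): robust losses at several perturbation radii are evaluated in one epoch and mixed with monotonically decreasing weights. The `robust_loss` below is only a placeholder for a real IBP/CROWN-IBP certified loss, and the radii and model are made up.

```python
# Illustrative global, monotonically decreasing robustness loss.
import torch

def robust_loss(model, x, y, eps):
    """Placeholder for an IBP-style verified loss at perturbation radius eps."""
    return torch.nn.functional.cross_entropy(model(x + eps * torch.randn_like(x)), y)

def global_monotone_robust_loss(model, x, y, epsilons=(2 / 255, 4 / 255, 8 / 255)):
    # Monotonically decreasing weights: smaller eps gets a larger weight.
    raw = torch.tensor([1.0 / (i + 1) for i in range(len(epsilons))])
    weights = raw / raw.sum()
    losses = [robust_loss(model, x, y, eps) for eps in epsilons]
    return sum(w * l for w, l in zip(weights, losses))

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x, y = torch.randn(16, 1, 28, 28), torch.randint(0, 10, (16,))
loss = global_monotone_robust_loss(model, x, y)
loss.backward()
print(float(loss))
```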