Action segmentation, typically implemented as frame-wise action classification, plays an important role in video understanding. Recent works on action segmentation capture long-term dependencies by increasing temporal c...
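The frame-wise formulation can be made concrete with a minimal sketch: a stack of dilated 1-D convolutions (in the spirit of temporal convolutional networks, not the specific model of this paper) maps per-frame features to per-frame action logits. The feature dimension, number of classes, and layer count below are illustrative assumptions.

```python
# Minimal sketch of frame-wise action classification with dilated temporal
# convolutions; all hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn

class FrameWiseSegmenter(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=19, hidden=64, num_layers=4):
        super().__init__()
        self.proj = nn.Conv1d(feat_dim, hidden, kernel_size=1)
        # Exponentially growing dilation enlarges the temporal receptive field
        # without pooling, so every frame keeps its own prediction.
        self.blocks = nn.ModuleList([
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2**i, dilation=2**i)
            for i in range(num_layers)
        ])
        self.head = nn.Conv1d(hidden, num_classes, kernel_size=1)

    def forward(self, x):                    # x: (batch, feat_dim, num_frames)
        h = self.proj(x)
        for conv in self.blocks:
            h = h + torch.relu(conv(h))      # residual keeps per-frame alignment
        return self.head(h)                  # (batch, num_classes, num_frames)

logits = FrameWiseSegmenter()(torch.randn(1, 2048, 300))  # one logit vector per frame
```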
Experiments on a synthetic log of the non-secondary hypertension MTP and empirical findings demonstrate the effectiveness of our approach. The results show that the process mining in our framework can automatically generate more accurate MTP models, and the subprocess models based on treatment patterns make the models easy to understand.
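As a rough illustration of how process mining derives a model from an event log, the toy sketch below performs generic directly-follows discovery over patient traces; it is not the mining algorithm or the treatment-pattern subprocess construction used in this work, and the activity names are hypothetical.

```python
# Toy directly-follows discovery from an event log; the activities are
# hypothetical placeholders, not taken from the hypertension MTP data.
from collections import Counter

event_log = [
    ["register", "blood_pressure_test", "prescribe_drug_A", "follow_up"],
    ["register", "blood_pressure_test", "prescribe_drug_B", "follow_up"],
    ["register", "lab_test", "prescribe_drug_A", "follow_up"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b."""
    pairs = Counter()
    for trace in log:
        pairs.update(zip(trace, trace[1:]))
    return pairs

for (a, b), n in directly_follows(event_log).most_common():
    print(f"{a} -> {b}: {n}")
```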
There is considerable interest in the use of fractional programming (FP) for communication system design because many problems in this area are fractionally structured. Notably, max-FP and min-FP are not interchangeable in general when there are multiple ratios, so the two types of FP are often dealt with separately in the existing literature. As a result, an FP method for maximizing the signal-to-interference-plus-noise ratios (SINRs) typically cannot be used for minimizing the Cramér-Rao bounds (CRBs). In contrast, this work proposes a unified approach that bridges the gap between max-FP and min-FP. In particular, we examine the theoretical basis of this unified approach from a minorization-maximization (MM) perspective, and in turn obtain a matrix extension of this new FP technique. Moreover, this work presents two application cases: (i) joint radar sensing and (ii) multi-cell secure transmission, neither of which can be efficiently addressed by the existing FP tools.
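For context, the classical quadratic transform, a standard multi-ratio max-FP tool (shown here only as background; the unified max/min-FP transform and its matrix extension proposed in this work are different), decouples a sum of ratios as
$$\max_{x \in \mathcal{X}} \sum_{i=1}^{n} \frac{A_i(x)}{B_i(x)} \;\Longleftrightarrow\; \max_{x \in \mathcal{X},\, y \in \mathbb{R}^n} \sum_{i=1}^{n} \Bigl( 2 y_i \sqrt{A_i(x)} - y_i^2\, B_i(x) \Bigr), \qquad A_i(x) \ge 0,\; B_i(x) > 0,$$
where the auxiliary variable admits the closed-form optimum $y_i^\star = \sqrt{A_i(x)}/B_i(x)$, so the two blocks can be updated alternately.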
Federated learning is a distributed machine learning method that can solve the increasingly serious problems of data islands and user data privacy, as it allows training data to be kept locally and not shared with other clients. It trains a global model by aggregating locally computed models of clients rather than their data. However, the divergence of local models caused by the data heterogeneity of different clients may lead to slow convergence of the global model. To address this problem, we focus on client selection in federated learning, which can affect the convergence performance of the global model with the selected local models. We propose FedChoice, a client selection method based on loss function optimization, to select appropriate local models to improve the convergence of the global model. FedChoice first sets a selection probability for each client based on the value of its loss function, assigning higher selection probabilities to clients with high loss so that they are more likely to participate in training. Then, it introduces a local control vector and a global control vector to predict the local gradient direction and the global gradient direction, respectively, and calculates a gradient correction vector to correct the gradient direction and reduce the cumulative deviation of the local gradients caused by data heterogeneity. We conduct experiments to verify the validity of FedChoice on the CIFAR-10, CINIC-10, MNIST, EMNIST, and FEMNIST datasets, and the results show that the convergence of FedChoice is significantly improved compared with FedAvg, FedProx, and FedNova.
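A minimal sketch of the loss-proportional selection idea described above is given below; it is illustrative only, and neither the exact FedChoice probability rule nor the control-vector gradient correction is reproduced.

```python
# Toy loss-based client selection: clients with higher local loss get a higher
# probability of being sampled for the next round. Illustrative sketch only;
# not the exact FedChoice selection rule.
import numpy as np

def select_clients(local_losses, num_selected, temperature=1.0, rng=None):
    """Sample clients with probability increasing in their last local loss."""
    rng = rng or np.random.default_rng()
    losses = np.asarray(local_losses, dtype=float)
    weights = np.exp(losses / temperature)        # emphasize high-loss clients
    probs = weights / weights.sum()
    return rng.choice(len(losses), size=num_selected, replace=False, p=probs)

round_losses = [0.31, 1.20, 0.85, 0.40, 2.10, 0.95]   # hypothetical local losses
print(select_clients(round_losses, num_selected=3))
```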
Stochastic distributed optimization methods that solve an optimization problem over a multi-agent network have played an important role in a variety of large-scale signal processing and machine learning applications. A...
详细信息
Effective vulnerability detection of large-scale smart contracts is critical because smart contract attacks frequently bring about tremendous economic loss. However, code analysis requiring traversal paths and learnin...
We consider the optimization problem of the form $\min_{x \in \mathbb{R}^d} f(x) := \mathbb{E}_{\Xi}[F(x;\Xi)]$, where the component $F(x;\Xi)$ is $L$-mean-squared Lipschitz but possibly nonconvex and nonsmooth. The recently proposed gradient-free method req...
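For background, gradient-free methods of this kind typically replace the unavailable gradient with a randomized finite-difference estimator; a common two-point version (shown as a generic example, not necessarily the exact estimator analyzed in this work) is
$$\hat{g}_\delta(x;\Xi,u) \;=\; \frac{d}{2\delta}\Bigl(F(x+\delta u;\Xi) - F(x-\delta u;\Xi)\Bigr)\,u, \qquad u \sim \mathrm{Unif}(\mathbb{S}^{d-1}),$$
which is an unbiased estimator of the gradient of a randomized-smoothing surrogate of $f$ and requires only function evaluations.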
Due to the convenience and popularity of Web applications, they have become a prime target for attackers. Since JavaScript is the main programming language for Web applications, many methods have been proposed for detecting malicious JavaScript, among which static analysis-based methods play an important role because of their high effectiveness and efficiency. However, obfuscation techniques are commonly used in JavaScript, which makes the features extracted by static analysis contain many useless and disguised features, leading to many false positives and false negatives in the detection results. In this paper, we propose a novel method to identify the essential features related to the semantics of JavaScript code. Specifically, we develop JSRevealer, a robust, effective, scalable, and interpretable detector for malicious JavaScript. To test the capabilities of JSRevealer, we conduct comparative experiments with four other state-of-the-art malicious JavaScript detection tools. The experimental results show that JSRevealer achieves an average F1 of 84.8% on data obfuscated by different obfuscators, which is 21.6%, 22.3%, 18.7%, and 22.9% higher than CUJO, ZOZZLE, JAST, and JSTAP, respectively. Moreover, the detection results of JSRevealer can be interpreted, which can provide meaningful insights for further security research.
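As a rough illustration of the kind of static, lexical features such detectors commonly start from (a generic sketch, not JSRevealer's semantics-related feature extraction), one can scan a script for obfuscation-sensitive constructs; the feature list below is hypothetical.

```python
# Toy static feature extraction for a JavaScript snippet: counts of a few
# constructs often seen in obfuscated or malicious code. Hypothetical feature
# set, not the semantic features used by JSRevealer.
import re

FEATURES = {
    "eval_calls":     r"\beval\s*\(",
    "unescape_calls": r"\bunescape\s*\(",
    "fromCharCode":   r"String\.fromCharCode",
    "hex_literals":   r"\\x[0-9a-fA-F]{2}",
    "long_strings":   r"[\"'][^\"']{200,}[\"']",
    "document_write": r"document\.write",
}

def extract_features(js_source: str) -> dict:
    return {name: len(re.findall(pattern, js_source))
            for name, pattern in FEATURES.items()}

sample = 'var p = unescape("%75%72%6c"); eval(p);'
print(extract_features(sample))
```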
Self-supervised learning usually uses a large amount of unlabeled data to pre-train an encoder that can be used as a general-purpose feature extractor, such that downstream users only need to perform fine-tuning operations to enjoy the benefit of a "large model". Despite this promising prospect, the security of pre-trained encoders has not been thoroughly investigated yet, especially when the pre-trained encoder is publicly available for commercial use. In this paper, we propose AdvEncoder, the first framework for generating downstream-agnostic universal adversarial examples based on the pre-trained encoder. AdvEncoder aims to construct a universal adversarial perturbation or patch for a set of natural images that can fool all the downstream tasks inheriting the victim pre-trained encoder. Unlike traditional adversarial example works, the pre-trained encoder only outputs feature vectors rather than classification labels. Therefore, we first exploit the high-frequency component information of the image to guide the generation of adversarial examples. Then we design a generative attack framework to construct adversarial perturbations/patches by learning the distribution of the attack surrogate dataset to improve their attack success rates and transferability. Our results show that an attacker can successfully attack downstream tasks without knowing either the pre-training dataset or the downstream dataset. We also tailor four defenses for pre-trained encoders, the results of which further prove the attack ability of AdvEncoder. Our code is available at https://***/CGCL-codes/AdvEncoder.
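The core idea of a downstream-agnostic universal perturbation can be sketched as follows: optimize a single perturbation that pushes the encoder features of perturbed images away from those of the clean images under an L-infinity budget. This is a simplified feature-divergence formulation; AdvEncoder's high-frequency guidance and generative attack framework are not reproduced here.

```python
# Simplified universal adversarial perturbation against a frozen encoder:
# one shared delta is optimized to push perturbed-image features away from
# clean-image features. Not AdvEncoder itself; its frequency guidance and
# generative framework are omitted.
import torch
import torch.nn.functional as F

def universal_perturbation(encoder, loader, eps=8 / 255, steps=100, lr=1e-2):
    encoder.eval()
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)   # shared across images
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for images, _ in loader:                               # labels are unused
            with torch.no_grad():
                clean_feat = encoder(images)
            adv_feat = encoder((images + delta).clamp(0, 1))
            # Maximize feature dissimilarity -> minimize cosine similarity.
            loss = F.cosine_similarity(adv_feat, clean_feat, dim=-1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                        # L-inf budget
    return delta.detach()
```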
ISBN (digital): 9798350326581
ISBN (print): 9798350326598
Personalized recommendation systems have become one of the most important Internet services nowadays. A critical challenge of training and deploying the recommendation models is their high memory capacity and bandwidth demands, with the embedding layers occupying hundreds of GBs to TBs of storage. The advent of memory disaggregation technology and Compute Express Link (CXL) provides a promising solution for memory capacity scaling. However, relocating memory-intensive embedding layers to CXL memory incurs noticeable performance degradation due to its limited transmission bandwidth, which is significantly lower than the host memory bandwidth. To address this, we introduce ReCXL, a CXL memory disaggregation system that utilizes near-memory processing for scalable, efficient recommendation model training. ReCXL features a unified, hardware-efficient NMP architecture that processes the entire embedding training within CXL memory, minimizing data transfers over the bandwidth-limited CXL and enhancing internal bandwidth. To further improve the performance, ReCXL incorporates software-hardware co-optimizations, including sophisticated dependency-free prefetching and fine-grained update scheduling, to maximize hardware utilization. Evaluation results show that ReCXL outperforms the CPU-GPU baseline and the naïve CXL memory by 7.1×–10.6× (9.4× on average) and 12.7×–31.3× (22.6× on average), respectively.
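To make the embedding-layer workload concrete, the operations that dominate recommendation training, and that near-memory processing of this kind would execute inside CXL memory, are essentially sparse gathers, pooling, and sparse updates over very large tables. The sketch below is a plain host-side illustration with made-up sizes; it is not ReCXL's NMP hardware path.

```python
# Host-side illustration of the embedding operations that dominate
# recommendation training: sparse gather + pooling in the forward pass and a
# sparse update in the backward pass. Table and batch sizes are made up.
import numpy as np

rows, dim = 1_000_000, 64
table = np.random.randn(rows, dim).astype(np.float32)    # one embedding table

def forward(batch_indices):
    # Gather rows for each sample's sparse features, then sum-pool per sample.
    return np.stack([table[idx].sum(axis=0) for idx in batch_indices])

def sparse_update(batch_indices, pooled_grads, lr=0.01):
    # Each gathered row receives the gradient of its sample's pooled output;
    # duplicate indices are written once here (a simplification).
    for idx, g in zip(batch_indices, pooled_grads):
        table[idx] -= lr * g

batch = [np.random.randint(0, rows, size=30) for _ in range(128)]
out = forward(batch)                                      # shape (128, 64)
sparse_update(batch, np.random.randn(*out.shape).astype(np.float32))
```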