The Dantzig selector is a popular $\ell_1$-type variable selection method widely used across various research fields. However, $\ell_1$-type methods may not perform well for variable selection without complex irrepresentable conditions. In this article, we introduce a nonconvex Dantzig selector for ultrahigh-dimensional linear models. We begin by demonstrating that the oracle estimator serves as a local optimum for the nonconvex Dantzig selector. In addition, we propose a one-step local linear approximation estimator, called the Dantzig-LLA estimator, for the nonconvex Dantzig selector, and establish its strong oracle property. The proposed regularization method avoids the restrictive conditions imposed by $\ell_1$ regularization methods to guarantee model selection consistency. Furthermore, we propose an efficient and parallelizable computing algorithm based on feature splitting to address the computational challenges associated with the nonconvex Dantzig selector in high-dimensional settings. A comprehensive numerical study is conducted to evaluate the performance of the nonconvex Dantzig selector and the computing efficiency of the feature-splitting algorithm. The results demonstrate that the Dantzig selector with nonconvex …
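For context, the classical $\ell_1$ Dantzig selector and the one-step LLA idea can be written as follows. This is a standard textbook formulation, not taken verbatim from the article; the article's exact penalty function and tuning choices are not shown here.

```latex
% Classical l1 Dantzig selector for the linear model y = X*beta + eps:
\min_{\beta \in \mathbb{R}^p} \|\beta\|_1
\quad \text{subject to} \quad
\|X^\top (y - X\beta)\|_\infty \le \lambda .

% Nonconvex variant: replace the l1 norm by a folded-concave
% penalty p_lambda (e.g., SCAD or MCP):
\min_{\beta \in \mathbb{R}^p} \sum_{j=1}^{p} p_\lambda(|\beta_j|)
\quad \text{subject to} \quad
\|X^\top (y - X\beta)\|_\infty \le \lambda .

% One-step local linear approximation (LLA) around an initial
% estimate \tilde\beta: solve a weighted l1 Dantzig problem
\min_{\beta \in \mathbb{R}^p} \sum_{j=1}^{p}
  p'_\lambda(|\tilde\beta_j|)\,|\beta_j|
\quad \text{subject to} \quad
\|X^\top (y - X\beta)\|_\infty \le \lambda .
```

The LLA step linearizes the concave penalty at the initial estimate, so each iteration is again a convex weighted $\ell_1$ problem.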
Due to the inherently insecure nature of the Internet, it is crucial to ensure the secure transmission of image data over this medium. Moreover, given the limitations of computers, it becomes even more important to employ efficient and fast image encryption schemes. While 1D chaotic maps offer a practical approach to real-time image encryption, their limited flexibility and increased vulnerability restrict their practical use. In this research, we have utilized a 3D Hindmarsh-Rose model to construct a secure cryptosystem. The randomness of the chaotic map is assessed through standard tests. The proposed system enhances security by incorporating an increased number of system parameters and a wide range of chaotic parameters, as well as ensuring a uniform distribution of chaotic signals across the entire value range. Moreover, a fast image encryption technique utilizing the new chaotic system is proposed. The novelty of the approach is confirmed through time complexity analysis. To further strengthen the resistance against cryptanalysis attacks and differential attacks, the SHA-256 algorithm is employed for secure key generation. Experimental results across a number of parameters demonstrate the strong cryptographic performance of the proposed image encryption approach, highlighting its exceptional suitability for secure applications. Finally, the security of the proposed scheme has been compared with state-of-the-art image encryption schemes, and all comparison metrics indicate the superior performance of the proposed scheme.
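To illustrate the key-generation step, the sketch below derives chaotic initial conditions from a SHA-256 digest. The mapping from digest bytes to parameters in (0, 1) is an assumption for illustration; the paper's exact derivation is not reproduced.

```python
import hashlib


def derive_key_params(key: bytes, n_params: int = 4):
    """Derive chaotic initial conditions in [0, 1) from a secret key.

    Hypothetical sketch: SHA-256 is used for secure key generation,
    and the 32-byte digest is split into n_params chunks, each
    normalized into [0, 1) to seed a chaotic system.
    """
    digest = hashlib.sha256(key).digest()  # 32 bytes, key-sensitive
    chunk = len(digest) // n_params
    params = []
    for i in range(n_params):
        block = digest[i * chunk:(i + 1) * chunk]
        value = int.from_bytes(block, "big")
        params.append(value / 2 ** (8 * chunk))  # normalize to [0, 1)
    return params
```

Because SHA-256 is highly sensitive to its input, a one-bit change in the key yields entirely different chaotic parameters, which supports resistance to differential attacks.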
The grid-based Xin'anjiang model (GXM) has been widely applied to flood forecasting. However, when the model warm-up period is long and the amount of input data is large, the computational efficiency of the GXM is low. Therefore, a GXM parallel algorithm based on grid flow direction division is proposed from the perspective of spatial parallelism, which realizes parallel computing of the GXM by extracting the parallel routing sequence of the watershed grids. To address data skew, a DAG scheduling algorithm based on dynamic priority is proposed for task scheduling. The proposed GXM parallel algorithm is verified in the Qianhe River watershed of Shaanxi Province and the Tunxi watershed of Anhui Province. The results show that the GXM parallel algorithm based on grid flow direction division achieves good flood forecasting accuracy and higher computational efficiency than the traditional serial computing method. In addition, the DAG scheduling algorithm effectively improves the parallel efficiency of the GXM.
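A priority-driven DAG scheduler of the kind described above can be sketched as a greedy list scheduler. Here a ready task's priority is its "bottom level" (its own cost plus the longest downstream chain); the paper's exact dynamic-priority rule for mitigating data skew is not reproduced, so treat this as an illustrative stand-in.

```python
import heapq


def schedule_dag(deps, cost, n_workers):
    """Greedy list scheduling of a task DAG onto n_workers workers.

    deps maps a task to the tasks it depends on; cost maps a task to
    its execution time. Returns the finish time of every task.
    """
    tasks = list(cost)
    children = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            children[d].append(t)
            indeg[t] += 1

    # Priority = bottom level: task cost + longest downstream chain.
    blevel = {}
    def bottom_level(t):
        if t not in blevel:
            blevel[t] = cost[t] + max(
                (bottom_level(c) for c in children[t]), default=0.0)
        return blevel[t]

    heap = [(-bottom_level(t), t) for t in tasks if indeg[t] == 0]
    heapq.heapify(heap)
    free_at = [0.0] * n_workers          # time each worker becomes free
    finish = {}
    while heap:
        _, t = heapq.heappop(heap)       # highest priority first
        w = min(range(n_workers), key=free_at.__getitem__)
        start = max(free_at[w],
                    max((finish[d] for d in deps.get(t, [])), default=0.0))
        finish[t] = start + cost[t]
        free_at[w] = finish[t]
        for c in children[t]:            # release satisfied successors
            indeg[c] -= 1
            if indeg[c] == 0:
                heapq.heappush(heap, (-blevel[c], c))
    return finish
```

For a diamond-shaped DAG (a before b and c, both before d) with unit costs on two workers, b and c run in parallel, so d finishes at time 3.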
A computational fluid dynamics (CFD) solver for a GPU/CPU heterogeneous-architecture parallel computing platform is developed to simulate incompressible flows on billion-level grid scales. To solve the Poisson equation, the conjugate gradient method is used as the basic solver, and a Chebyshev method in combination with a Jacobi sub-preconditioner is used as a preconditioner. The developed CFD solver shows good parallel efficiency, which exceeds 90% in the weak-scalability test when the number of grid points allocated to each GPU card is greater than … . In the acceleration test, it is found that running a simulation with 1040^3 grid points on 125 GPU cards accelerates by 203.6x over the same number of CPU cores. The developed solver is then tested in the context of a two-dimensional lid-driven cavity flow and a three-dimensional Taylor-Green vortex flow. The results are consistent with previous results in the literature.
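The core of the Poisson step above is a preconditioned conjugate gradient iteration. The minimal sketch below shows plain Jacobi (diagonal) preconditioning on a small dense system in pure Python; the paper's Chebyshev/Jacobi combination and GPU implementation are not reproduced.

```python
def pcg_jacobi(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for a dense SPD matrix A.

    A is a list of rows, b a list. Returns an approximate solution x
    of A x = b starting from x = 0.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual (x starts at 0)
    minv = [1.0 / A[i][i] for i in range(n)]   # Jacobi preconditioner M^-1
    z = [minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

CG is attractive on GPUs because each iteration is dominated by a matrix-vector product and dot products, both of which parallelize well.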
Mobile-edge computing (MEC) with wireless power transfer has recently emerged as a viable concept for improving the data processing capacity of limited-power networks such as wireless sensor networks (WSNs) and the Internet of Things (IoT). In this research, we explore a wireless MEC network with a binary offloading strategy: each mobile device's (MD's) computation task is either performed locally or entirely offloaded to an MEC server. We aim to develop an online system that adapts task offloading decisions and resource allocations to changing wireless channel conditions in real time. This necessitates solving difficult combinatorial optimization problems quickly within the channel coherence time, which is hard to achieve with traditional optimization approaches. To address this issue, we offer a parallel computing architecture in which several parallel offloading actors, implemented as deep neural networks (DNNs), are used as a scalable method to learn binary offloading decisions from experience. It avoids the need to solve combinatorial optimization problems, significantly reducing computational complexity, especially in large networks. Compared with existing optimization approaches, numerical results demonstrate that the proposed algorithm can achieve optimal performance while substantially reducing computing time. For instance, our algorithm achieves a latency of 0.033 s in a network of 30 MDs.
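One common way to turn a DNN actor's continuous outputs into binary offload/local decisions is order-preserving quantization: threshold the scores, then generate alternative candidates by flipping the most ambiguous entries and pick the best by evaluating a system utility. The sketch below is a hypothetical illustration of that idea, not the paper's exact scheme.

```python
def candidate_offloading_actions(scores, k):
    """Generate up to k binary offloading candidates from relaxed scores.

    scores: per-device values in [0, 1] from a relaxed actor output
            (1 = offload to the MEC server, 0 = compute locally).
    The first candidate thresholds at 0.5; the remaining candidates
    each flip one of the entries closest to the threshold.
    """
    n = len(scores)
    base = [1 if s > 0.5 else 0 for s in scores]
    candidates = [base]
    # Indices ordered from most to least ambiguous score.
    order = sorted(range(n), key=lambda i: abs(scores[i] - 0.5))
    for i in order[:max(0, k - 1)]:
        flipped = base[:]
        flipped[i] = 1 - flipped[i]
        candidates.append(flipped)
    return candidates
```

Each candidate would then be scored with the network's utility model (channel gains, energy, latency), and the best binary action is applied, avoiding an explicit combinatorial search over all 2^n actions.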
The computation of the Moore-Penrose generalized inverse is a commonly used operation in various fields such as the training of neural networks based on random weights. Therefore, a fast computation of this inverse is important for problems where such neural networks provide a solution. However, due to the growth of databases, the matrices involved have large dimensions, thus requiring a significant amount of processing and execution time. In this paper, we propose a parallel computing method for the computation of the Moore-Penrose generalized inverse of large-size full-rank rectangular matrices. The proposed method employs the Strassen algorithm to compute the inverse of a nonsingular matrix and is implemented on a shared-memory architecture. The results show a significant reduction in computation time, especially for high-rank matrices. Furthermore, in a sequential computing scenario (using a single execution thread), our method achieves a reduced computation time compared with other previously reported algorithms. Consequently, our approach provides a promising solution for the efficient computation of the Moore-Penrose generalized inverse of large-size matrices employed in practical scenarios.
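For a full-column-rank matrix A, the Moore-Penrose inverse reduces to the closed form A+ = (AᵀA)⁻¹Aᵀ, so the expensive step is inverting the square matrix AᵀA. The pure-Python sketch below uses a plain Gauss-Jordan inverse as a stand-in for the Strassen-based inverse used in the paper, and omits all parallelization.

```python
def matmul(A, B):
    """Dense matrix product (matrices as lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]


def transpose(A):
    return [list(col) for col in zip(*A)]


def inverse(M):
    """Gauss-Jordan inverse with partial pivoting (stand-in for the
    Strassen-based inversion described in the paper)."""
    n = len(M)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        scale = aug[col][col]
        aug[col] = [v / scale for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]


def pinv_full_column_rank(A):
    """Moore-Penrose inverse of a full-column-rank A: (A^T A)^-1 A^T."""
    At = transpose(A)
    return matmul(inverse(matmul(At, A)), At)
```

A quick sanity check is the left-inverse property: for full column rank, A+ A equals the identity.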
A heterogeneous parallel computing method for 3D transient nonlinear thermomechanical problems is proposed based on CPU-GPU platforms. The method builds on the stable node-based smoothed finite element method (SNS-FEM) and an element-by-element (EBE) strategy. The SNS-FEM ensures the accuracy of the temperature and displacement solutions. To obtain higher computational efficiency, a series of parallel computing strategies are proposed in this paper. A one-to-one mapping is established between smoothing-domain calculations and GPU threads. To improve the efficiency of constructing the smoothing domains, a semiparallel construction method is developed, and a preindexing method is proposed to improve the efficiency of solving the linear equations with the parallel preconditioned conjugate gradient (PCG) method. The developed heterogeneous parallel computing method is accurate and efficient for models composed of tetrahedral elements, achieving a speedup ratio of 52.31 on a model with 67,204 degrees of freedom.
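The EBE strategy mentioned above avoids assembling the global stiffness matrix: each element (or smoothing domain, in SNS-FEM) applies its local matrix to its local degrees of freedom and scatters the result back, which is exactly the per-thread work unit on a GPU. A minimal serial sketch of this gather-compute-scatter pattern:

```python
def ebe_matvec(element_matrices, connectivity, x):
    """Element-by-element global matrix-vector product y = K x.

    element_matrices: local matrix for each element (list of rows).
    connectivity: global dof indices of each element's local dofs.
    The global K is never formed; each element contributes directly.
    In the paper's GPU version each element maps to one thread.
    """
    y = [0.0] * len(x)
    for Ke, nodes in zip(element_matrices, connectivity):
        xe = [x[n] for n in nodes]          # gather local dofs
        for i, ni in enumerate(nodes):      # local product + scatter
            y[ni] += sum(Ke[i][j] * xe[j] for j in range(len(nodes)))
    return y
```

On a GPU, the scatter step needs atomic adds (or coloring) because elements sharing a node write to the same entry of y; the serial loop above hides that detail.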
In data-intensive parallel computing clusters, it is important to provide deadline-guaranteed service to jobs while minimizing resource usage (e.g., network bandwidth and energy). Under the current computing framework (which first allocates data and then schedules jobs), in a busy cluster with many jobs it is difficult to achieve high data locality (hence low bandwidth consumption), deadline guarantees, and high energy savings simultaneously. We model the problem of simultaneously achieving these three objectives using integer programming. Due to the NP-hardness of the problem, we propose a heuristic Cooperative job Scheduling and data Allocation (CSA) method. CSA reverses the order of data allocation and job scheduling in the current computing framework. Scheduling jobs first enables CSA to proactively consolidate tasks that request common data onto the same server when conducting deadline-aware scheduling, and to consolidate tasks onto as few servers as possible to maximize energy savings. This facilitates the subsequent data allocation step, which allocates a data block to the server hosting most of that block's requester tasks, thus maximally enhancing data locality. To achieve the tradeoff between data locality and energy savings with specified weights, CSA has a cooperative recursive refinement process that recursively adjusts the job schedule and the data allocation schedule. We further propose two enhancement algorithms (a minimum k-cut data reallocation algorithm and a bipartite-based task reassignment algorithm) to further improve the performance of CSA through additional data reallocation and task reassignment, respectively. Trace-driven experiments in simulation and on a real cluster show that CSA outperforms other schedulers in supplying deadline-guaranteed and resource-efficient services, and that each enhancement algorithm is effective in improving CSA.
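The data-allocation step described above, with task placements already fixed, reduces to a simple majority vote: each block goes to the server running the most tasks that request it. The sketch below is a hypothetical simplification of that step (it ignores capacity limits and the recursive refinement).

```python
from collections import Counter, defaultdict


def allocate_blocks(task_to_server, task_blocks):
    """Place each data block on the server hosting most of its requesters.

    task_to_server: task -> server decided by the scheduling step.
    task_blocks:    task -> list of data blocks the task reads.
    Returns block -> server, maximizing per-block data locality.
    """
    votes = defaultdict(Counter)
    for task, blocks in task_blocks.items():
        for block in blocks:
            votes[block][task_to_server[task]] += 1
    # most_common(1) picks the server with the most requester tasks
    return {b: c.most_common(1)[0][0] for b, c in votes.items()}
```

Running the scheduler first is what makes this vote meaningful: once tasks with common data are consolidated, the majority server for a block hosts nearly all of its readers, so most reads become local.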
In this paper, we propose three novel image encryption algorithms. Separable moments and parallel computing are combined in order to enhance the security aspect and time performance. The three proposed algorithms are based on TKM (Tchebichef-Krawtchouk moments), THM (Tchebichef-Hahn moments) and KHM (Krawtchouk-Hahn moments) respectively. A novel chaotic scheme is introduced, that enhances security by adding a layer of block permutation on top of the classical confusion/diffusion scheme, and reduces time cost through parallel computing. This approach offers improved security and faster performance compared to classical encryption schemes. The proposed algorithms are tested under several criteria and the experimental results show a remarkable resilience against all well-known attacks. Furthermore, the novel parallel encryption scheme exhibits a drastic improvement in the time performance. The proposed algorithms are compared to the state-of-the-art methods and they stand out as a promising choice for reliable use in real world applications.
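The extra block-permutation layer described above can be illustrated with a simple key-driven scheme: rank a chaotic sequence and reorder the blocks accordingly. This is a hypothetical sketch using the logistic map; the paper's actual chaotic scheme and moment-based transforms are not reproduced.

```python
def logistic_sequence(x0, r, n):
    """Generate n values of the logistic map x <- r * x * (1 - x)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq


def permute_blocks(blocks, x0=0.3141, r=3.99):
    """Key-driven block permutation: blocks are reordered by ranking
    a chaotic sequence seeded by the key (x0, r)."""
    seq = logistic_sequence(x0, r, len(blocks))
    order = sorted(range(len(blocks)), key=lambda i: seq[i])
    return [blocks[i] for i in order], order


def unpermute_blocks(permuted, order):
    """Invert permute_blocks given the same permutation order."""
    out = [None] * len(permuted)
    for pos, i in enumerate(order):
        out[i] = permuted[pos]
    return out
```

Because block permutations are independent of each other and of the per-block confusion/diffusion, blocks can be processed by parallel workers, which is the source of the reported speedup.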
In the era of Big Data, the computational demands of machine learning (ML) algorithms have grown exponentially, necessitating the development of efficient parallel computing techniques. This research paper delves into...