Distributed computing applications provide concurrent processing and services executed from different systems through a common cloud platform. However, without modification or adaptable security measures, such concurrency poses a great challenge to the proper administration of security. This paper introduces a hybrid secure equivalent computing model to address this security issue. The proposed security model was designed using a genetic algorithm for equivalent measure distribution over the processing systems. With this model, variations in security management caused by differences in the processing times of various services can be mitigated using a probabilistic annealing method. This method helps to preserve the stability of the security method without decreasing its robustness. For robust processing, the model exploits the cloud's parallel security-as-a-service feature with a non-tokenized key-sharing method. The key-sharing and revocation processes are determined using the probabilistic outcomes of the annealing method. The genetic process verifies the distribution of security measures in the key sequence of any possible processing combination without compromise. The performance of the proposed model was verified using the metrics of process failure, computational complexity, time delay, false rate, and computing level.
Distributed computing systems, such as Hadoop, have been widely studied and used for executing and analyzing large datasets. In this paper, we investigate an emerging resource allocation problem for wireless distributed computing systems consisting of multifunctional nodes in charge of both numerical computation and wireless communication with master nodes. We focus on a computation power consumption model based on CMOS devices and a communication power consumption model involving multiple-antenna transceivers subject to mutual interference. We formulate a joint optimization problem over workload scheduling and power allocation to achieve maximum computational speed under a total power constraint. We decompose the joint optimization into two sub-problems. For the workload scheduling sub-problem, an integer program, we relax the integer constraint and establish the equivalence between the relaxed and original problems. For the power allocation sub-problem, we maximize a difference of convex functions using the concave-convex procedure. We prove that our proposed algorithm converges to a stationary point of the original program. Simulation results confirm the efficiency and near-optimal performance of our proposed algorithms.
Distributed computing has become one of the most important frameworks for dealing with large computation tasks. In this paper, we propose a systematic construction of coded computing schemes for MapReduce-type distributed systems. The construction builds upon placement delivery arrays (PDAs), originally proposed by Yan et al. for coded caching schemes. The main contributions of our work are three-fold. First, we identify a class of PDAs, called Comp-PDAs, and show how to obtain a coded computing scheme from any Comp-PDA. We also characterize the normalized number of stored files (storage load), computed intermediate values (computation load), and communicated bits (communication load) of the obtained schemes in terms of the Comp-PDA parameters. Then, we show that the performance achieved by Comp-PDAs describing Maddah-Ali and Niesen's coded caching schemes matches a new information-theoretic converse, thus establishing the fundamental region of all achievable performance triples. In particular, we characterize all the Comp-PDAs achieving the Pareto-optimal storage, computation, and communication (SCC) loads of the fundamental region. Finally, we investigate the file complexity of the proposed schemes, i.e., the smallest number of files required for implementation. In particular, we describe Comp-PDAs that achieve Pareto-optimal SCC triples with significantly lower file complexity than the originally proposed Comp-PDAs.
ISBN (Print): 9789811996962; 9789811996979
Distributed computing decomposes a user's tasks into many small tasks and distributes them to multiple servers for processing to save overall computing time. An important challenge is how to divide the tasks efficiently to achieve this goal. In this paper, aiming to resolve the complex relationship between multiple tasks and multiple servers, we propose a resource-task joint optimization model for a distributed computing environment. The optimization goal of this model is to minimize the total computation time while accounting for the transmission time and execution time of task data. Based on the task size, task intensity, and computing power of the servers, we propose a task-resource joint optimization algorithm to solve the proposed model. Numerical simulations verify the feasibility of the model and the correctness of the algorithm.
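To illustrate the kind of task-resource assignment this abstract describes, the sketch below implements a simple greedy list-scheduling heuristic, not the authors' algorithm: each task, characterized by a hypothetical size (bits) and intensity (cycles per bit), is assigned to the server that finishes it earliest, counting both transmission time and execution time. All names and parameters are illustrative assumptions.

```python
def schedule(tasks, servers):
    """Greedy list scheduling over (size, intensity) tasks and
    (bandwidth, cpu_speed) servers. A task's cost on a server is the
    server's current backlog plus transmission time (size / bandwidth)
    plus execution time (size * intensity / cpu_speed)."""
    finish = [0.0] * len(servers)            # per-server completion time so far
    assignment = [None] * len(tasks)
    # Place the heaviest tasks (most total cycles) first.
    order = sorted(range(len(tasks)), key=lambda i: -tasks[i][0] * tasks[i][1])
    for i in order:
        size, intensity = tasks[i]
        costs = [finish[j] + size / bw + size * intensity / cpu
                 for j, (bw, cpu) in enumerate(servers)]
        j = min(range(len(servers)), key=costs.__getitem__)
        assignment[i] = j
        finish[j] = costs[j]
    return assignment, max(finish)           # placement and makespan
```

Greedy placement is only a heuristic; it bounds, but generally does not attain, the optimum of a joint model like the one proposed in the paper.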
ISBN (Print): 9798350304831
The increasing interest in serverless computation and ubiquitous wireless networks has led to numerous connected devices in our surroundings. Such IoT devices have access to an abundance of raw data, but their inadequate computing resources limit their capabilities. With the emergence of deep neural networks (DNNs), the demand on the computing power of IoT devices is increasing. To overcome inadequate resources, several studies have proposed distribution methods for IoT devices that harvest the aggregated computing power of idle IoT devices in an environment. However, since such a distributed system relies strongly on each device, unstable latency and intermittent failures, which are common characteristics of IoT devices and wireless networks, cause high recovery overheads. To reduce this overhead, we propose a novel robustness method with close-to-zero recovery latency for DNN computations. Our solution never loses a request or spends time recovering from a failure. To do so, we first analyze how matrix computations in DNNs are affected by distribution. Then, we introduce a novel coded distributed computing (CDC) method whose cost, unlike that of modular redundancies, is constant as the number of devices increases. Our method is applied at the library level, without requiring extensive changes to the program, while still ensuring a balanced work assignment during distribution.
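The constant-cost redundancy idea behind coded distributed computing can be shown with a minimal sketch, which is an assumption-laden toy and not the paper's actual scheme: split a weight matrix into k row blocks, add one parity block equal to their sum, and reconstruct any single lost worker's partial product from the parity output, regardless of how large k grows.

```python
import numpy as np

def encode_tasks(A, k):
    """Split A row-wise into k blocks and append one parity block (their sum).
    One failed block product can be recovered from the remaining k outputs,
    at the constant cost of a single extra worker."""
    blocks = np.array_split(A, k, axis=0)
    # Zero-pad so all blocks share one shape and the parity sum is well-defined.
    rows = max(b.shape[0] for b in blocks)
    padded = [np.vstack([b, np.zeros((rows - b.shape[0], A.shape[1]))])
              for b in blocks]
    parity = sum(padded)
    return padded, parity

def recover(results, parity_result, failed):
    """Reconstruct the failed worker's output B_failed @ x from the parity
    output, exploiting linearity: parity @ x = sum_i (B_i @ x)."""
    return parity_result - sum(r for i, r in enumerate(results)
                               if i != failed and r is not None)
```

Because matrix-vector products are linear, the parity worker's output is exactly the sum of all block products, so subtraction recovers the missing one with no recomputation.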
The title of this special section, Parallel and Distributed Computing Techniques for Non-Von Neumann Technologies, is a bit misleading. Technically, parallel and distributed computers already diverge from the basic architecture that John von Neumann proposed in 1945, although they are still based on processors that execute a sequence of instructions, each of which performs a simple action such as computing an arithmetic result, reading or writing memory, or branching to a new location in the instruction sequence. But what would a computer that is not based on this model of execution look like? The technologies discussed in the following articles are more exotic, more innovative, and more intriguing than what you are likely to encounter in a typical collection of peer-reviewed computer science articles. We hope that these articles will help you view computing in a new light and give you a sense of what the future of computing may look like. Silent-PIM: Realizing the Processing-in-Memory Computing with Standard Memory Requests, by Kim et al. [A1], presents a new approach to performing computation within the computer's memory system.
ISBN (Print): 9798350301496
The multi-user linearly-separable distributed computing problem is considered here, in which N servers help to compute the real-valued functions requested by K users, where each function can be written as a linear combination of up to L (generally non-linear) subfunctions. Each server computes a fraction of the subfunctions, then communicates a function of its computed outputs to some of the users, and each user then combines its received data to recover its desired function. Our goal is to bound the ratio between the computation workload done by all servers and the number of datasets. To this end, we reformulate the real-valued distributed computing problem as a matrix factorization problem and then as a basic sparse recovery problem, where sparsity implies computational savings. Building on this, we first give a simple probabilistic scheme for subfunction assignment, which allows us to upper bound by γ ≤ K/N the optimal normalized computation cost that a generally intractable ℓ₀-minimization would give. To bypass the intractability of such optimal schemes, we show that if these optimal schemes satisfy γ ≤ -(rK/N)·W₋₁⁻¹(-2K/(eNr)) (where W₋₁(·) is the Lambert function and r calibrates the communication between servers and users), then they can actually be derived using a tractable Basis Pursuit ℓ₁-minimization. This newly revealed connection opens up the possibility of designing practical distributed computing algorithms by employing tools and methods from compressed sensing.
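The Basis Pursuit step mentioned in the abstract can be sketched generically, with no claim to match the paper's actual factorization: min ‖c‖₁ subject to Dc = f is a linear program once c is split into nonnegative parts c = u - v. The dictionary D and target f below are hypothetical toy values, and `scipy.optimize.linprog` stands in for any LP solver.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, f):
    """Solve min ||c||_1  s.t.  D c = f  via the standard LP reformulation
    c = u - v with u, v >= 0 and objective 1^T (u + v)."""
    m, n = D.shape
    cost = np.ones(2 * n)
    A_eq = np.hstack([D, -D])                 # D (u - v) = f
    res = linprog(cost, A_eq=A_eq, b_eq=f, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v
```

On a small over-complete dictionary, the ℓ₁ objective prefers the sparse representation over dense ones, which is exactly the computational-savings mechanism the abstract exploits.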
ISBN (Print): 9783031327322; 9783031327339
This article (written for the celebration of the 30th Anniversary of the SIROCCO conference series) is a non-technical article that presents a personal view of what Informatics, distributed computing, and our job are. While it does not claim objectivity, its aim is not to launch a controversy on the topics addressed. More modestly, it intends to encourage readers to form their own views on these important topics.
Coded distributed computing is used to mitigate the adverse effect of slow workers on the computation time in distributed computing systems. However, using error-correction codes results in encoding and decoding delays. In this work, we consider a systematic maximum-distance separable (MDS) coded matrix-vector multiplication problem with multi-message communication (MMC), where the master assigns multiple sub-tasks to each worker. In this setup, we show that the received systematic outputs can be used to reduce the decoding time by implementing a proper decoding algorithm. To further reduce the decoding time, we use the MMC property that sub-tasks are executed sequentially to propose an allocation of the systematic sub-tasks that significantly increases the number of received systematic outputs. Our results further demonstrate that the reduction in the decoding time is even more significant in applications that require only a partial recovery. In these applications, it suffices to complete a certain percentage of the computation, and using our approach, we show that decoding may be completely avoided.
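A generic systematic MDS coded matrix-vector multiplication, the setting this abstract builds on, can be sketched as follows. This is a minimal real-valued construction using a Vandermonde-derived generator, not the paper's MMC scheme or its decoding algorithm; block sizes and worker counts are illustrative assumptions.

```python
import numpy as np

def systematic_mds_encode(A, k, n):
    """Encode row-blocks of A with a systematic (n, k) MDS code: the first k
    coded blocks are the original blocks, so their outputs need no decoding.
    G = V @ inv(V[:k]) with a distinct-node Vandermonde V keeps every k x k
    row-submatrix invertible, hence any k worker outputs suffice."""
    B = np.array_split(A, k, axis=0)          # assumes k divides A's rows
    V = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)
    G = V @ np.linalg.inv(V[:k])              # top k rows become the identity
    coded = [sum(G[i, j] * B[j] for j in range(k)) for i in range(n)]
    return coded, G

def decode(outputs, G, k):
    """Recover the k block products [B_j @ x] from any k worker outputs.
    `outputs` maps worker index -> that worker's computed block product."""
    idx = sorted(outputs)[:k]
    Gs = G[idx]                               # k x k, invertible by MDS property
    Y = np.stack([outputs[i] for i in idx])
    return np.linalg.solve(Gs, Y)
```

When all k systematic workers respond, `Gs` is the identity and decoding is free, which is the effect the paper amplifies by steering systematic sub-tasks to be computed first.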
Data explosion poses many challenges to state-of-the-art systems, applications, and methodologies. It has been reported that 181 zettabytes of data are expected to be generated in 2025, an increase of over 150% compared with the volume expected in 2023. However, while system manufacturers are consistently developing devices with larger storage spaces and providing alternative storage capacities in the cloud at affordable rates, another key challenge is how to effectively process large fractions of this stored data on time-critical conventional systems. One transformative paradigm revolutionizing the processing and management of such large data is distributed computing, whose application requires deep understanding. This dissertation explores the potential impact of applying efficient distributed computing concepts to long-standing challenges in (i) a widely used data-intensive scientific application, (ii) applying homomorphic encryption (HE) to data-intensive workloads found in outsourced databases, and (iii) the security of a tokenized incentive mechanism for federated learning (FL) systems. The first part of the dissertation tackles the microelectrode array (MEA) parameterization problem from an orthogonal viewpoint enlightened by algebraic topology, which allows us to algebraically parametrize MEAs whose structure and intrinsic parallelism are otherwise hard to identify. We implement a new paradigm, namely Parma, to demonstrate the effectiveness of the proposed approach and report how it outperforms the state of the practice in time, scalability, and memory usage. The second part discusses our work on introducing the concept of parallel caching of secure aggregation to mitigate the performance overhead incurred by the HE module in outsourced databases. The key idea of this optimization approach is caching selected radix-ciphertexts in parallel without violating the existing security guarantees of the primitive/base