We present the project Asteroids@home, which uses distributed computing to solve the time-consuming inverse problem of asteroid shape reconstruction. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework to distribute, collect, and validate small computational units that are solved independently on the individual computers of volunteers connected to the project. Shapes, rotational periods, and spin-axis orientations of asteroids are reconstructed from their disk-integrated photometry by the lightcurve inversion method. (C) 2015 Elsevier B.V. All rights reserved.
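The work units in such a project can be formed, for example, by splitting the trial rotation-period interval into independent sub-ranges that volunteers search in parallel. The sketch below illustrates only this decomposition idea; the interval bounds, unit count, and function name are hypothetical and not taken from the Asteroids@home implementation.

# Hypothetical sketch: splitting a rotation-period search into independent
# work units, the kind of coarse-grained decomposition a BOINC project can
# distribute. Interval bounds and the unit count are illustrative only.

def period_work_units(p_min_hours, p_max_hours, units):
    """Divide the trial-period interval into `units` sub-intervals that
    volunteer machines can search independently of each other."""
    width = (p_max_hours - p_min_hours) / units
    return [(p_min_hours + i * width, p_min_hours + (i + 1) * width)
            for i in range(units)]

if __name__ == "__main__":
    # e.g. scan 2-100 h of trial periods as 1000 independent work units
    for lo, hi in period_work_units(2.0, 100.0, 1000)[:3]:
        print(f"work unit: search periods {lo:.3f}-{hi:.3f} h")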
This special issue of Concurrency and Computation: Practice and Experience provides a forum for presenting advances in current research and development in all aspects of parallel and distributed computing and communications.
In recent years, with the rapid development of computer networks, distributed computing systems have gained broad application prospects and potential value, opening up a wealth of opportunities for different applications. Because such environments are dynamic, heterogeneous, distributed, open, voluntary, uncertain, and prone to deception, obtaining trustworthy computing resources has become a key issue in large-scale distributed computing research. Taking these complex characteristics of trust in distributed computing environments into account, we first construct an STE architecture to rank and observe trust, consisting of an STE Broker, Monitoring, and an STE Catalogue. Second, a more comprehensive dynamic trust evaluation model is constructed based on a Bayesian network. Finally, we use a simulation platform to imitate the trust evolution process and collect the related data; the proposed method has been deployed in a complex simulation system, and the results indicate that the model is unbiased and effective. The first part of the paper covers the research status and related problems, the second part establishes the evaluation model, and the last part presents the experimental analysis and conclusion.
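As a simple illustration of Bayesian trust updating, the sketch below uses a Beta-Bernoulli model in which successful and failed interactions update a posterior reliability estimate. This is not the paper's Bayesian-network model or its STE architecture; the class and its parameters are purely illustrative of the general idea.

# Minimal sketch of a Bayesian trust update using a Beta-Bernoulli model.
# This is NOT the paper's full Bayesian-network model; it only shows how
# observed interactions can update a trust score used for ranking resources.

class BetaTrust:
    def __init__(self, alpha=1.0, beta=1.0):
        # Uniform Beta(1, 1) prior over the resource's reliability.
        self.alpha = alpha
        self.beta = beta

    def update(self, success: bool):
        # Each observed interaction shifts the posterior.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Posterior mean reliability, usable as a trust score.
        return self.alpha / (self.alpha + self.beta)

t = BetaTrust()
for outcome in [True, True, False, True]:
    t.update(outcome)
print(f"estimated trust: {t.trust:.2f}")  # 0.67 with these observations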
As an alternative to deep learning models, deep forest outperforms deep neural networks in many aspects, with fewer hyperparameters and better robustness. To improve the computing performance of deep forest, ForestLayer proposes an efficient task-parallel algorithm, S-FTA, at a fine sub-forest granularity, but the granularity of the sub-forest cannot be adjusted adaptively. BLB-gcForest further proposes an adaptive sub-forest splitting algorithm to dynamically adjust the sub-forest granularity. However, with distributed storage, its BLB method needs to scan the whole dataset when sampling, which generates considerable communication overhead. Moreover, BLB-gcForest's tree-based vector aggregation produces extensive redundant transfers and significantly degrades the system's performance in the vector aggregation stage. To address these issues and further improve the computing efficiency and scalability of the distributed deep forest, in this paper we propose a novel computing-Efficient and RobusT distributed Deep Forest framework, named CERT-DF. CERT-DF integrates three customized schemes: block-level pre-sampling, two-stage pre-aggregation, and system-level backup. Specifically, CERT-DF adopts block-level pre-sampling to perform local sampling of data blocks, eliminating frequent remote data access and maximizing parallel efficiency; applies two-stage pre-aggregation to adjust the class-vector aggregation granularity and greatly decrease the communication overhead; and leverages system-level backup to enhance the system's disaster tolerance and substantially accelerate task recovery with minimal system resource overhead. Comprehensive experimental evaluations on multiple datasets show that CERT-DF significantly outperforms state-of-the-art approaches, with higher computing efficiency, lower system resource overhead, and better system robustness, while maintaining good accuracy.
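To illustrate the pre-aggregation idea, the sketch below averages the class-probability vectors of local trees on each worker before sending a single summary per worker to the coordinator, which then computes a weighted global average. Function names, the aggregation granularity, and the weighting are assumptions for illustration and are not the CERT-DF API.

# Hedged sketch of two-stage class-vector aggregation: each worker first
# averages the class-probability vectors of its local trees (stage 1), so
# only one vector per worker crosses the network before the final global
# average (stage 2). Names are illustrative, not the CERT-DF interfaces.

import numpy as np

def local_pre_aggregate(tree_vectors):
    """Stage 1 (on each worker): average class vectors of local trees."""
    return np.mean(np.asarray(tree_vectors), axis=0), len(tree_vectors)

def global_aggregate(worker_summaries):
    """Stage 2 (on the coordinator): weighted average of worker summaries."""
    total = sum(n for _, n in worker_summaries)
    return sum(vec * (n / total) for vec, n in worker_summaries)

# Example: two workers, each holding a few trees' 3-class probability vectors.
w1 = local_pre_aggregate([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]])
w2 = local_pre_aggregate([[0.5, 0.4, 0.1]])
print(global_aggregate([w1, w2]))  # single fused class vector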
This work aims to provide effective technological means for evaluating a pilot's information processing capacity for combat missions, tactical capability, command capability, and cognitive decision-making capacity, compensating for the deficiencies of paper-and-pencil tests, psychological dynamoscopy, and other techniques traditionally used to evaluate a pilot's cognitive decision-making capacity. Based on distributed computing technology, we build a topological structure for the evaluation system, design a typical combat mission scenario, and simulate the combat control interface and process; based on software engineering, we establish techniques for recording, managing, and analyzing the evaluation process data. The conclusion of this study is that a scientific method and objective measurement means need to be provided for a "real" evaluation of the pilot's cognitive decision-making capacity.
ISBN (print): 9781467380584
The number of Internet-connected sensing and control devices is growing; some anticipate more than 212 billion such devices by 2020. Inherently, these devices generate continuous data streams, many of which need to be stored and processed. Traditional approaches, whereby all data are shipped to the cloud, may not remain effective, as the cloud infrastructure may not be able to handle myriads of data streams and their associated storage and processing needs. Using cloud infrastructure alone for data processing significantly increases latency and contributes to unnecessary energy inefficiencies, including potentially unnecessary data transmission over constrained wireless networks and processing on cloud computing facilities increasingly known to be significant consumers of energy. In this paper we present a distributed platform for wireless sensor networks which allows computation to be shifted from the cloud into the network. This reduces the traffic in the sensor network, intermediate networks, and cloud infrastructure. The platform is fully distributed, allowing every node in a homogeneous network to accept continuous queries from a user, find all nodes satisfying the user's query, find an optimal node (the Fermat-Weber point) in the network on which to process the query, and provide the result to the user. Our results show that the number of required messages can be decreased by up to 49% and processing latency by 42% in comparison with state-of-the-art approaches, including Innet.
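The Fermat-Weber point is the location minimizing the sum of distances to a set of points, and the sketch below computes it with the standard Weiszfeld iteration. The paper selects this point in-network in a distributed fashion, so the centralized loop here only illustrates the objective being optimized; the iteration count and tolerance are illustrative assumptions.

# Sketch of the Fermat-Weber point (geometric median) via the standard
# Weiszfeld iteration. Centralized, for illustration only; the paper's
# platform determines this point with distributed, in-network messaging.

import numpy as np

def fermat_weber(points, iters=100, eps=1e-9):
    pts = np.asarray(points, dtype=float)
    x = pts.mean(axis=0)                      # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        d = np.maximum(d, eps)                # avoid division by zero
        w = 1.0 / d
        x_new = (pts * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x

# Positions of nodes satisfying a query; the returned point suggests where
# to place the query processing to minimize total communication distance.
print(fermat_weber([[0, 0], [10, 0], [0, 10], [10, 10], [5, 6]]))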
ISBN (print): 9781479970162
This paper proposes a Distributed Computing and Centralized Determination (DCCD) method to solve the multi-robot task allocation problem, with the goal of maximizing the utility of the whole robot system. First, a utility model is presented that takes into consideration the cost of executing tasks and the quality of task completion time. DCCD has each robot compute and provide sub-plans for executing one or multiple tasks. The task manager then forms allocations for accomplishing the tasks from all the sub-plans and determines the optimal one according to the utility model. Compared with fully centralized allocation, this method greatly reduces the computation required of the task manager. Theoretical analysis and simulation verify the effectiveness of DCCD and show that DCCD can obtain the globally optimal allocation, whereas the widely used single-item and combinatorial auction methods obtain only locally optimal solutions.
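A minimal sketch of the distributed-computing/centralized-determination split might look as follows: each robot locally scores the tasks it could execute with a simplified utility (completion-time quality minus cost), and the task manager then picks, per task, the robot with the highest utility. The utility formula, the greedy per-task selection, and all names are illustrative simplifications, not the paper's actual utility model or its optimal determination procedure.

# Hedged sketch of the "distributed computing, centralized determination"
# split. Robots score tasks locally; the manager picks the best bid per task.
# The paper's determination step reasons over whole allocations; this greedy
# pick is only an illustration of where each computation happens.

def robot_subplans(robot, tasks):
    """Run on each robot: propose a utility for every task it could do."""
    plans = {}
    for task, demand in tasks.items():
        cost = demand * robot["cost_rate"]
        time_quality = 1.0 / (1.0 + demand / robot["speed"])
        plans[task] = time_quality - cost
    return plans

def centralized_determination(all_plans):
    """Run on the task manager: choose, per task, the highest-utility robot."""
    allocation = {}
    for robot_id, plans in all_plans.items():
        for task, utility in plans.items():
            if task not in allocation or utility > allocation[task][1]:
                allocation[task] = (robot_id, utility)
    return allocation

tasks = {"t1": 4.0, "t2": 1.0}
robots = {"r1": {"speed": 2.0, "cost_rate": 0.05},
          "r2": {"speed": 1.0, "cost_rate": 0.02}}
plans = {rid: robot_subplans(r, tasks) for rid, r in robots.items()}
print(centralized_determination(plans))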
Nowadays, due to the rapid growth of data in organizations, extensive data processing is a central concern of information technology. Mining association rules in large databases is a challenging task. The Apriori algorithm is widely used to find frequent itemsets in a database, but it becomes inefficient on large databases because it requires a heavy I/O load. This drawback of the Apriori algorithm has been addressed by many later algorithms and parallel algorithms (models), but these are also unable to find frequent itemsets in large databases quickly and efficiently. Hence, a hybrid architecture is proposed that integrates distributed and parallel computing concepts. The main idea of the new architecture is to combine distributed and parallel computing in such a way that frequent itemsets can be found in large databases in less time. It also handles large databases more efficiently than existing algorithms. (C) 2016 Published by Elsevier B.V.
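One common way to combine distributed data partitioning with parallel counting, in the spirit of the proposed hybrid architecture, is count distribution: each node counts candidate itemsets over its local partition of the transaction database, and the partial counts are merged before pruning by minimum support. The sketch below illustrates only this generic pattern under assumed names; the abstract does not specify the architecture's actual algorithm.

# Minimal sketch of count-distribution-style parallel Apriori counting:
# local k-itemset counts per data partition, merged and pruned centrally.
# Names and structure are illustrative, not the proposed architecture.

from collections import Counter
from itertools import combinations

def local_counts(partition, k):
    """On each node: count all k-itemsets occurring in local transactions."""
    counts = Counter()
    for transaction in partition:
        for itemset in combinations(sorted(transaction), k):
            counts[itemset] += 1
    return counts

def merge_and_prune(all_counts, min_support):
    """On the coordinator: sum partial counts and keep frequent itemsets."""
    total = Counter()
    for c in all_counts:
        total.update(c)
    return {iset: n for iset, n in total.items() if n >= min_support}

partitions = [[{"a", "b", "c"}, {"a", "c"}], [{"a", "c", "d"}, {"b", "c"}]]
print(merge_and_prune([local_counts(p, 2) for p in partitions], min_support=3))
# {('a', 'c'): 3}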