The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.
The pervasive introduction of the Internet into robotics and automation systems pushes forward an evolution that began when the computer was introduced in the enterprise in the middle of the last century, and that continued with the interconnection of shop-floor workstations in local networks in the 1980s. Today, the Internet represents a challenge both for research and development in the area of distributed robotics and automation. In order to gain a better understanding and evaluation of recent results in distributed computing, this paper classifies the most promising technological approaches, provides examples of how they are applied in robotics and automation, and discusses available standards and commercial solutions.
It is generally accepted that the ability to develop large-scale distributed applications has lagged seriously behind other developments in cyberinfrastructure. In this paper, we provide insight into how such applications have been developed and an understanding of why developing applications for distributed infrastructure is hard. Our approach is unique in the sense that it is centered around half a dozen existing scientific applications; we posit that these scientific applications are representative of the characteristics, requirements, as well as the challenges of the bulk of current distributed applications on production cyberinfrastructure (such as the US TeraGrid). We provide a novel and comprehensive analysis of such distributed scientific applications. Specifically, we survey existing models and methods for large-scale distributed applications and identify commonalities, recurring structures, patterns and abstractions. We find that there are many ad hoc solutions employed to develop and execute distributed applications, which result in a lack of generality and the inability of distributed applications to be extensible and independent of infrastructure details. In our analysis, we introduce the notion of application vectors: a novel way of understanding the structure of distributed applications. Important contributions of this paper include identifying patterns that are derived from a wide range of real distributed applications, as well as an integrated approach to analyzing applications, programming systems and patterns, resulting in the ability to provide a critical assessment of the current practice of developing, deploying and executing distributed applications. Gaps and omissions in the state of the art are identified, and directions for future research are outlined. Copyright (c) 2012 John Wiley & Sons, Ltd.
Interactive scientific visualization and distributed computing are emerging as increasingly important research areas. In the computer graphics community, most attention has been focused on rendering algorithms. The distributed software architecture of a graphics application system, which may be exploited by interactive scientific visualization applications, has been ignored to some extent. We propose a distributed interactive scientific visualization system for nonprogrammer users, particularly in medical applications. The system consists of many software modules that isolate each functionality of the system and provide a flexible architecture for future development. We focus on the overall architecture of the integrated system, which provides end-users and application programmers with powerful visualization tools, a distributed computation, control, and communication platform, and a load-balancing algorithm for distributed scientific visualization computation. We will discuss how all these modules of the system are articulated to provide various services of interactive visualization, network-transparent interprocess communications, and group discussions.
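The abstract above does not spell out its load-balancing algorithm, but the general idea of dispatching visualization work to the least-loaded node can be sketched as follows. This is an illustrative greedy scheme under our own assumptions (function and variable names are hypothetical, not the paper's):

```python
def assign_tasks(task_costs, num_workers):
    """Greedy least-loaded assignment: each incoming task is placed on
    the worker with the smallest accumulated load. Illustrative only."""
    loads = [0.0] * num_workers           # current load per worker
    assignment = []                       # worker index chosen for each task
    for cost in task_costs:
        w = min(range(num_workers), key=loads.__getitem__)
        assignment.append(w)
        loads[w] += cost
    return assignment, loads

# Hypothetical rendering-task costs dispatched across two compute nodes
assignment, loads = assign_tasks([5, 3, 8, 2, 4], 2)
print(assignment)  # [0, 1, 1, 0, 0]
print(loads)       # [11.0, 11.0]
```

Greedy least-loaded dispatch is a common baseline for this kind of system; a production balancer would also account for network cost and node heterogeneity.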
This paper proposes a new concurrent data structure, called a parallel hash table, for synchronizing the access of multiple threads to resources stored in a shared buffer. We prove theoretically the complexity of the operations and the upper limit on the thread conflict probability of the parallel hash table. To empirically evaluate the proposed concurrent data structure, we compare the performance of a TCP multi-threaded parallel hash table-based server to a conventional TCP multi-threaded shared buffer-based server implemented in Java. The experimental results on a network of 36 workstations running Windows NT demonstrate that the parallel hash table-based server outperforms the conventional multi-threaded server. (C) 2006 Elsevier B.V. All rights reserved.
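The abstract does not give the parallel hash table's internals, but the core idea of reducing thread conflicts relative to a single shared buffer can be sketched with per-bucket locking (lock striping), so that threads hashing to different buckets never contend. A minimal sketch, with names and structure of our own choosing rather than the paper's:

```python
import threading

class ParallelHashTable:
    """Illustrative hash table with one lock per bucket: threads that
    hash to different buckets proceed in parallel, unlike a single
    globally-locked shared buffer."""

    def __init__(self, num_buckets=16):
        self.buckets = [dict() for _ in range(num_buckets)]
        self.locks = [threading.Lock() for _ in range(num_buckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:            # only this bucket is locked
            self.buckets[i][key] = value

    def get(self, key, default=None):
        i = self._index(key)
        with self.locks[i]:
            return self.buckets[i].get(key, default)

# Many threads insert concurrently without a global lock.
table = ParallelHashTable()
threads = [threading.Thread(target=table.put, args=(k, k * k))
           for k in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(table.get(7))  # 49
```

With b buckets and uniform hashing, two concurrent operations conflict with probability roughly 1/b, which is the kind of bound the paper's conflict-probability analysis addresses.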
The rapid increase in the speed and capacity of commonly available PCs is providing an opportunity to use distributed computing to tackle major modeling tasks such as climate simulation. The *** project has developed the software necessary to carry out such a project in the public domain. The paper describes the development of the demonstration release software, along with computational challenges such as data mining, visualization, and distributed database management.
This special section investigates the state-of-the-art in the analysis and processing of remotely sensed big data employing distributed computing architectures.
With the increasing interest in the Internet and intranets, and the use of web browsers and applets for the creation of networked and distributed applications, Java is being touted as the language of choice. However, attacks of the denial-of-service nature by malicious applets could bring down a network. Through the investigation and implementation of a thread manipulator, we are able to demonstrate that the current Java security model is not entirely perfect. (C) 1998 Elsevier Science B.V.
Modern-day proteomics generates ever more complex data, causing the requirements on the storage and processing of such data to outgrow the capacity of most desktop computers. To cope with the increased computational demands, distributed architectures have gained substantial popularity in recent years. In this review, we provide an overview of the current techniques for distributed computing, along with examples of how these techniques are currently being employed in the field of proteomics. We thus underline the benefits of distributed computing in proteomics, while also pointing out the potential issues and pitfalls involved.
A distributed computing model for channel bandwidth allocation and optimization can involve multiple components working together to efficiently allocate and optimize the available bandwidth in a distributed system. The efficient allocation of channel bandwidth in the distributed computing model is crucial for optimizing resource utilization and improving system performance. This paper proposes the Imperialist Competitive Spline Interpolation (ICSI) scheme, which combines computational intelligence and deep learning techniques to address the challenge of channel bandwidth allocation. The ICSI scheme optimizes resource allocation by considering user requirements and resource availability, utilizing polynomial equations and spline interpolation. The Imperialist Competitive Optimization model evaluates and optimizes the available resources in the distributed environment. With the optimized resources, spline interpolation is implemented for the computation of the available resources. Extensive simulations and performance analysis demonstrate the effectiveness of the ICSI scheme in terms of resource utilization, throughput, latency, fairness index, and energy efficiency. The ICSI model achieves a minimal waiting time of 3 ms and a minimal latency of 6.4 ms. Comparative analysis with the Round Robin scheme further confirms the superiority of the ICSI scheme in terms of task scheduling efficiency. The findings of this paper contribute to the advancement of distributed computing models for channel bandwidth allocation, offering a promising solution for optimizing resource allocation and improving system performance in modern computing environments.
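The ICSI scheme's own equations are not given in the abstract, but the role that polynomial interpolation plays there, estimating demand between sampled measurements to guide allocation, can be illustrated with a Lagrange interpolating polynomial. All data and names below are hypothetical, not taken from the paper:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    points (xs[i], ys[i]) at x. Illustrative polynomial interpolation,
    not the ICSI scheme itself."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # Lagrange basis factor
        total += term
    return total

# Hypothetical bandwidth demand (Mbps) sampled at times t = 0..3
times = [0.0, 1.0, 2.0, 3.0]
demand = [10.0, 14.0, 13.0, 9.0]
# Estimate demand between samples to guide allocation decisions
print(round(lagrange_interpolate(times, demand, 1.5), 2))  # 14.0
```

A full spline would fit low-degree polynomials piecewise with smoothness constraints at the joints, which avoids the oscillation a single high-degree polynomial can exhibit over many samples.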