ISBN (Print): 9781618397850
The proceedings contain 17 papers. The topics discussed include: a simulation model for evaluating the impact of communication services in a functional-block-based network protocol graph; using fuzzy inference to improve TCP congestion control over wireless networks; backhaul optimization for traffic aggregation; using cloud computing for medical applications; a use of matrix with GVT computation in the optimistic time warp algorithm for parallel simulation; Internet traffic classification using a multifractal analysis approach; simulation studies of OpenFlow-based in-network caching strategies; simulation of anti-relay attack schemes for RFID ETC systems; simulation of reducing re-association and re-authentication phases for low handoff latency; middleware architecture for sensor-based bridge infrastructure management; different facets of security in the cloud; and a distributed fuzzy recommendation system.
ISBN (Print): 9781467309752
Summary form only given, as follows. The defining characteristic of IT applications in the coming decades will be computing for the masses. Datacenters will have to deal with high numbers of active users, applications, and parallel requests, as well as massive amounts of data. This challenge is especially serious for China, given its huge population. The emergence of the Internet of Things keeps multiplying the classes of applications, and the ossified computer architecture is not suitable for these many niche applications. To address these big issues, the Chinese Academy of Sciences (CAS) has started the Future Information Technology (FIT) Initiative, a 10-year frontier research project targeting the applications and markets of 2020-2030. The State Key Lab on Computer Architecture (CARCH), located at the Institute of Computing Technology (ICT) and the only state key laboratory in the area of computer architecture in China, is one of the major undertakings of the FIT project. The research directions of CARCH include building a billion-threads computer, an elastic processor, cloud-sea computing, etc. In this talk, we survey the motivations and basic ideas of these projects. Moreover, we briefly introduce another foresighted research effort under way at ICT: a service-oriented future Internet architecture.
The availability and utility of large numbers of Graphics Processing Units (GPUs) have enabled parallel computations using extensive multi-threading. Sequential access to global memory and contention at the size-limited shared memory have been the main impediments to fully exploiting potential performance in architectures with a massive number of GPUs. We propose novel memory storage and retrieval techniques that enable parallel graph computations to overcome the above issues. More specifically, given a graph G = (V, E) and an integer k
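Since the abstract is truncated here, the authors' actual storage and retrieval scheme is not shown; the sketch below is only a hedged illustration of the baseline layout such techniques typically improve on. It packs a graph into compressed sparse row (CSR) arrays, a format whose flat neighbor array lets consecutive GPU threads issue coalesced reads from global memory rather than chasing scattered pointers; all function and variable names here are invented for illustration.

```python
# Illustrative CSR packing of a graph for GPU-friendly, coalesced traversal.
import numpy as np

def to_csr(num_vertices, edges):
    """Pack an edge list (u, v) into CSR arrays: offsets[u]..offsets[u+1]
    delimits u's neighbor slice in the flat `targets` array."""
    counts = np.zeros(num_vertices + 1, dtype=np.int64)
    for u, _ in edges:
        counts[u + 1] += 1                 # out-degree histogram, shifted by 1
    offsets = np.cumsum(counts)            # prefix sums -> slice boundaries
    targets = np.empty(len(edges), dtype=np.int64)
    cursor = offsets[:-1].copy()           # next free slot per vertex
    for u, v in edges:
        targets[cursor[u]] = v
        cursor[u] += 1
    return offsets, targets

offsets, targets = to_csr(4, [(0, 1), (0, 2), (1, 3), (2, 3)])
print(targets[offsets[0]:offsets[1]])      # neighbors of vertex 0 -> [1 2]
```

On a GPU, one thread (or warp) per vertex would scan its contiguous slice of `targets`, so adjacent threads touch adjacent global-memory words.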
The first edition of the International Workshop on Large Scale Distributed Service-oriented Systems, LSDSS-2012, is dedicated to the dissemination and evaluation of original contributions to the algorithms, architectures, techniques, protocols, and components of large-scale distributed service-oriented systems, with special emphasis on Cloud, Web-based, and pervasive computing systems. The purpose of the workshop is to provide an open forum for researchers from academia and industry to present, discuss, and exchange ideas, results, and expertise in the area of large-scale distributed service-oriented systems. We received high-quality papers from different universities. Each paper was reviewed by at least three referees and selected based on its originality, significance, correctness, relevance, and clarity of presentation.
The Hydra project offers version control for distributed case files in alpha-Flow. Available version control systems lack support for independently versioning multiple logical units within a single repository, each with its own version history and head. Our use case also requires mechanisms for labeling versions by their validity and for validity-based navigational access. Hydra is a multi-module, validity-aware version control system.
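The abstract does not spell out Hydra's data model or API, so the following is only a minimal sketch, with invented names, of the two requirements it states: independent per-unit histories and heads inside one repository, and validity labels with validity-based navigation.

```python
# Hypothetical sketch of a multi-unit, validity-aware repository (not Hydra's
# actual implementation): each logical unit versions independently.
class LogicalUnit:
    def __init__(self, name):
        self.name = name
        self.history = []                  # list of (content, validity) pairs

    def commit(self, content, validity="valid"):
        self.history.append((content, validity))

    @property
    def head(self):
        """Each unit has its own head, independent of other units."""
        return self.history[-1] if self.history else None

    def latest_valid(self):
        """Validity-based navigation: newest version labeled 'valid'."""
        for content, validity in reversed(self.history):
            if validity == "valid":
                return content
        return None

class Repository:
    def __init__(self):
        self.units = {}                    # many logical units, one repository

    def unit(self, name):
        return self.units.setdefault(name, LogicalUnit(name))

repo = Repository()
doc = repo.unit("case-file/anamnesis")
doc.commit("draft v1", validity="invalid")
doc.commit("reviewed v2", validity="valid")
print(doc.latest_valid())                  # -> "reviewed v2"
```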
This paper describes the Infrastructure and Network Description Language (INDL). The aim of INDL is to provide technology-independent descriptions of computing infrastructures. These descriptions include the physical resources and the network infrastructure that connects them. The language also provides the vocabulary needed to describe the virtualization of resources and the services these resources offer. Furthermore, the language can easily be extended to describe federations of existing computing infrastructures, specific types of (optical) equipment, and behavioral aspects of resources, for example their energy consumption. Before introducing INDL, we first discuss a number of modeling efforts that have led to its development, namely the Network Description Language, the Network Markup Language, and the CineGrid Description Language. We also show current applications of INDL in two EU-FP7 projects, NOVI and GEYSERS, and demonstrate the flexibility and extensibility of INDL in catering to the specific needs of these two projects.
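INDL itself is a description language, not an API; the Python structure below is only an invented approximation, with hypothetical field names rather than INDL vocabulary, of the kinds of facts the abstract says INDL captures: physical resources, the network links between them, virtualization, and offered services.

```python
# Invented, simplified rendering of an infrastructure description; the class
# and property names are placeholders, not INDL terms.
infrastructure = {
    "nodes": [
        {"id": "node1", "type": "ComputeNode", "cpu_cores": 16,
         "implements": ["vm1", "vm2"]},    # virtualization of a resource
        {"id": "vm1", "type": "VirtualNode", "cpu_cores": 4},
        {"id": "vm2", "type": "VirtualNode", "cpu_cores": 4},
    ],
    "links": [
        {"id": "link1", "type": "NetworkLink",
         "endpoints": ["node1", "node2"], "capacity_gbps": 10},
    ],
    "services": [
        {"id": "svc1", "type": "StorageService", "offered_by": "node1"},
    ],
}

def virtual_nodes_on(infra, physical_id):
    """Resolve which virtual resources a physical node implements."""
    for node in infra["nodes"]:
        if node["id"] == physical_id:
            return node.get("implements", [])
    return []

print(virtual_nodes_on(infrastructure, "node1"))   # -> ['vm1', 'vm2']
```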
Scientists are dealing with larger and more complex IT infrastructures in their daily work. With the advent of Grid and Cloud technologies, this trend continues. Scientists will deal not only with locally managed resources, but also with resources provided by arbitrary Cloud service providers. To support scientists in their daily work, a scientific workbench is required for interacting with IT resources and performing scientific analyses on them. The g-Eclipse framework is such a platform for Grid/Cloud computing based on the Eclipse ecosystem, and new scientific applications and tools can be integrated into this workbench on top of the g-Eclipse/Eclipse architecture. The g-Eclipse workbench is an ideal starting point for building such a workbench for astroparticle physics. The use of remote Cloud resources also requires the development of an interoperable communication and software provisioning system. Using OSGi as an interoperable, provider-independent platform for Cloud computing simplifies the management of Cloud resources significantly. With a ScienceStore/ScienStore component, delivering software to the scientist's computer and Cloud resources will be as easy as deploying small applications to end customers' smart phones.
ISBN (Print): 9781467309745
This paper describes the efforts at Ohio University to incorporate selected topics from the IEEE-TCPP Curriculum Initiative into the Computer Science/Computer Engineering curriculum prior to the university's transition to semesters in the Fall of 2012. In particular, it describes our efforts to incorporate and evaluate selected elements of the IEEE-TCPP Curriculum Initiative in three courses in order to best determine the appropriate placement of topics related to parallel and distributed computing in the new CS/CpE curriculum under the semester calendar. Specifically, we plan to add or revise modules and assignments for CS2 (CS 240B and CS 240C at Ohio University, CS 2401 under semesters), DS/A (CS 361 Data Structures, CS 3610 under semesters), and Systems (CS 442 Operating Systems and Computer Architecture I, CS 4420 under semesters). This will help us determine which curricular recommendations belong in those three courses in the new semester curriculum and which topics are more appropriately placed in the new required courses EE 3613 Computer Organization and CS 4000 Introduction to Parallel, Distributed, and Web-Centric Computing, or in other existing advanced courses such as CS 4040 Design and Analysis of Algorithms or CS 4100 Formal Languages and Compilers.
Recently, the computational requirements for large-scale data-intensive analysis of scientific data have grown significantly. In High Energy Physics (HEP), for example, the Large Hadron Collider (LHC) produced 13 petabytes of data in 2010. This huge amount of data is processed at more than 140 computing centers distributed across 34 countries. The MapReduce paradigm has emerged as a highly successful programming model for large-scale data-intensive computing applications. However, current MapReduce implementations are developed to operate in single-cluster environments and cannot be leveraged for large-scale distributed data processing across multiple clusters. Workflow systems, on the other hand, are used for distributed data processing across data centers, but the workflow paradigm has been reported to have limitations for distributed data processing, such as reliability and efficiency. In this paper, we present the design and implementation of G-Hadoop, a MapReduce framework that aims to enable large-scale distributed computing across multiple clusters. G-Hadoop uses the Gfarm file system as its underlying file system and executes MapReduce tasks across distributed clusters. Experiments with the G-Hadoop framework on distributed clusters show encouraging results.
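As a reminder of the programming model the abstract builds on, here is a minimal single-process rendering of the map, shuffle, and reduce phases; it is not G-Hadoop's API. G-Hadoop's contribution is executing these same phases across multiple clusters on top of the Gfarm file system.

```python
# Toy word-count in the MapReduce model: map emits (key, value) pairs,
# a shuffle groups them by key, and reduce aggregates each group.
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # map: one input record -> a list of intermediate (key, value) pairs
    return [(word, 1) for word in record.split()]

def reduce_phase(key, values):
    # reduce: (key, all values for that key) -> one aggregated result
    return key, sum(values)

def map_reduce(records):
    # shuffle: group intermediate pairs by key before reducing
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map(map_phase, records)):
        groups[key].append(value)
    return dict(reduce_phase(k, vs) for k, vs in groups.items())

print(map_reduce(["higgs boson event", "boson event"]))
# -> {'higgs': 1, 'boson': 2, 'event': 2}
```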
The Content Delivery Network paradigm with a centralized approach shows its limits in large and dynamic systems, where decentralized algorithms and protocols, such as peer-to-peer (P2P) and multi-agent systems, can be usefully employed. In this work, an algorithm that exploits bio-inspired agents to organize the content in Content Delivery Networks is proposed. Agents move and reorganize the metadata that describe the contents in order to improve information retrieval. Preliminary results confirm the efficacy of the approach.
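The abstract does not give the algorithm's details, so the following is only a hedged sketch, with invented names, of a classic ant-inspired pick/drop rule of the kind such systems often use: an agent tends to pick up a metadata item that is dissimilar to its neighbors and drop it where similar items already reside, so related descriptors gradually cluster and retrieval improves.

```python
# Ant-like pick/drop clustering of metadata across peers (illustrative only).
import random

def similarity(a, b):
    """Toy similarity between metadata keyword sets (Jaccard index)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def step(peers, carried=None):
    """One move of a single agent; returns the item it carries afterwards."""
    here = random.randrange(len(peers))
    local = peers[here]
    if carried is None and local:
        item = random.choice(local)
        rest = [x for x in local if x is not item]
        fit = max((similarity(item, o) for o in rest), default=0.0)
        if random.random() > fit:          # poorly placed here -> pick it up
            local.remove(item)
            return item
    elif carried is not None:
        fit = max((similarity(carried, o) for o in local), default=0.0)
        if random.random() < fit:          # similar items here -> drop it
            local.append(carried)
            return None
    return carried

# Three peers, each holding keyword-set metadata describing one content item.
peers = [[{"video", "sport"}], [{"video", "news"}], [{"music"}]]
carried = None
for _ in range(100):
    carried = step(peers, carried)
print(peers, carried)
```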