ISBN: (Print) 9781467377010
End users today run many kinds of mobile applications on their smartphones, from calendar apps to high-end mobile games, and these applications differ in how heavily they use local resources. Heavy use of a smartphone's local resources, as when playing a high-end mobile game, drains its limited battery within a few hours. To prevent quick exhaustion of this limited energy resource, smartphones benefit from executing resource-intensive application parts on a remote server in the cloud (code offloading). During remote execution, the smartphone waits in idle mode until it receives a result. However, code offloading introduces computation and communication overhead, which decreases energy efficiency and incurs monetary cost. For instance, sending execution state information to a remote server, or receiving it back, consumes energy; moreover, executing code on a server instance in a commercial cloud costs money. To keep both energy consumption and monetary cost low, we present in this paper the concept of remote-side caching for code offloading, which increases the efficiency of offloading. The remote-side cache serves as a collective store of results for application parts that have already been executed on remote servers, avoiding repeated execution of previously run parts. The smartphone queries the remote-side cache for the result of a resource-intensive application part. On a cache hit, the smartphone immediately receives the result and continues the application execution; otherwise, it migrates the application part and waits for the result of the remote execution. Our evaluation shows that a remote-side cache decreases the energy consumption and monetary cost of mobile applications by up to 97% and 99%, respectively.
ISBN: (Print) 9781467377027 (same abstract as the preceding entry).
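The caching protocol this abstract describes — query the remote-side cache, return immediately on a hit, migrate and execute remotely only on a miss — can be sketched as follows. This is an illustrative model, not the authors' implementation; deriving the cache key from the method name and its arguments is an assumption.

```python
import hashlib

class RemoteSideCache:
    """Collective store of results for previously executed application parts."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def key(method_name, args):
        # Identify an application part by its method and input state
        # (hypothetical key scheme; the paper does not specify one).
        blob = repr((method_name, args)).encode()
        return hashlib.sha256(blob).hexdigest()

    def lookup(self, method_name, args):
        return self._store.get(self.key(method_name, args))

    def put(self, method_name, args, result):
        self._store[self.key(method_name, args)] = result


def offload(cache, method_name, args, remote_execute):
    """Query the cache first; only migrate and execute remotely on a miss."""
    hit = cache.lookup(method_name, args)
    if hit is not None:
        return hit                      # cache hit: no remote execution cost
    result = remote_execute(*args)      # cache miss: pay the offloading overhead
    cache.put(method_name, args, result)
    return result
```

On a hit the smartphone skips both the state transfer and the paid server execution, which is where the reported energy and cost savings come from.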
ISBN: (Print) 9780769535227
Network simulators play a vital role in the research and development of networks. Simulations are usually computationally intensive, and distributed execution can improve their performance significantly. However, due to the sequential nature of the communication process modeled by network simulators, and to limited access to source code, it is not possible to completely isolate the sub-processes for distributed execution. An alternative is to identify independent modules within a sequential simulation environment and execute those modules in parallel. This paper presents such a mechanism for OPNET, enabling distributed execution of network simulation scenarios on a cluster of PCs using Sun Grid Engine. The proposed framework separates the local configuration of software simulators from their remote execution. This separation points to future possibilities in which computationally intensive tasks are configured on devices with insufficient computational resources and executed remotely. Extensive experiments in a Sun Grid Engine cluster with a variety of OPNET simulation scenarios show substantial reductions in simulation run-time.
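The core idea — treating independent simulation scenarios as parallelizable units while keeping their configuration local — can be sketched with a thread pool standing in for the Sun Grid Engine cluster. The scenario format and the arithmetic workload are hypothetical; a real deployment would submit roughly one cluster job per OPNET scenario.

```python
from concurrent.futures import ThreadPoolExecutor

def run_scenario(scenario):
    """Stand-in for launching one simulation scenario.

    In the paper's setting this would be a Sun Grid Engine job submission;
    the workload below is purely illustrative.
    """
    name, n = scenario
    return name, sum(i * i for i in range(n))

def run_all(scenarios, workers=4):
    # The scenarios are independent, so they can be dispatched concurrently,
    # mirroring the demarcation between local configuration and remote runs.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_scenario, scenarios))
```

The speedup reported in the paper follows from exactly this independence: no scenario waits on another's results.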
ISBN: (Print) 9781479978816
Discrete event simulations (DES) provide a powerful means for modeling complex systems and analyzing their behavior. DES capture all possible interactions between the entities they manage, which makes them highly expressive but also compute-intensive. These computational requirements often impose limitations on the breadth and/or depth of research that can be conducted with a discrete event simulation. This work describes our approach for leveraging the vast quantity of computing and storage resources available in both private organizations and public clouds to enable real-time exploration of a discrete event simulation. Rather than considering the execution speed of a single simulation run, we autonomously generate novel scenario variants to explore an entire subset of the simulation parameter space. These workloads are orchestrated in a distributed fashion across a wide range of commodity hardware. The resulting outputs are analyzed to produce models that accurately forecast simulation outcomes in real time, providing interactive feedback and bolstering research possibilities.
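The scenario-variant generation outlined here can be illustrated with a toy single-server discrete event simulation swept over a parameter grid; its outputs are the kind of data one would fit a forecasting model to. The simulator logic and parameter names are invented for illustration, not taken from the paper.

```python
import itertools

def simulate(arrival_gap, service_time, jobs=50):
    """Tiny deterministic single-server DES: returns the mean job wait time."""
    free_at, total_wait = 0.0, 0.0
    for k in range(jobs):
        arrival = k * arrival_gap
        start = max(arrival, free_at)   # wait if the server is still busy
        total_wait += start - arrival
        free_at = start + service_time
    return total_wait / jobs

def explore(grid):
    """Run every variant in the parameter grid (the subset of parameter
    space to explore); the results would feed a surrogate model that
    forecasts outcomes in real time."""
    keys = sorted(grid)
    results = {}
    for values in itertools.product(*(grid[k] for k in keys)):
        results[tuple(values)] = simulate(**dict(zip(keys, values)))
    return results
```

In the paper's setting each variant would be orchestrated across distributed commodity hardware rather than run in a local loop.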
The partitioning of resources such as pipelines and register files among clusters has proven to be an effective way to improve performance and scalability. However, the improvements are limited by traditional binary instruction encoding schemes and by centralized instruction execution control. Meanwhile, clustered processors may suffer performance degradation from limited data locality caused by a shortage of available registers and functional units. This paper introduces a Highly Scalable Clustered Architecture (HiSCA) to improve the scalability and performance of clustered processors. HiSCA's hardware/software instruction encoding scheme splits the instruction stream into chains of instructions (packs) and encodes information common to a pack in dedicated instruction words, reducing the amount of information that must be encoded in each instruction word. The HiSCA pipeline, which features in-order issue, out-of-order execution, and parallel but in-order commitment, relieves instruction issue of the heavy burden of dynamic scheduling and allows functional units to fetch data and manage their own execution. HiSCA scales efficiently to 32 clusters with 1024 general-purpose registers. Experimental results also show that, for a 4-cluster/8-issue configuration, HiSCA achieves an average 13.3% performance speedup and a 4.6% improvement in frequency with minimal hardware overhead, compared to a traditional clustered processor of nearly the same hardware complexity.
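The pack-based encoding idea — hoisting information shared by a chain of instructions out of the individual instruction words — can be sketched as follows. The instruction representation and the choice of the cluster id as the shared field are assumptions for illustration, not HiSCA's actual bit-level encoding.

```python
def encode_packs(instructions, pack_size=4):
    """Group instructions into packs; if every instruction in a pack targets
    the same cluster, record that once in the pack header and strip it from
    the individual instruction words."""
    packs = []
    for i in range(0, len(instructions), pack_size):
        chunk = instructions[i:i + pack_size]
        clusters = {ins["cluster"] for ins in chunk}
        common = clusters.pop() if len(clusters) == 1 else None
        body = [{k: v for k, v in ins.items() if common is None or k != "cluster"}
                for ins in chunk]
        packs.append({"cluster": common, "body": body})
    return packs

def decode_packs(packs):
    """Reconstruct the flat instruction stream, reattaching the hoisted field."""
    out = []
    for pack in packs:
        for ins in pack["body"]:
            ins = dict(ins)
            if pack["cluster"] is not None:
                ins["cluster"] = pack["cluster"]
            out.append(ins)
    return out
```

The round trip is lossless, which is the property any such factored encoding must preserve while shrinking the per-instruction word.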
ISBN: (Print) 9781467308946; 9788994364261
This study describes preliminary research on Information Retrieval (IR) systems. We developed a prototype tool that runs on a master-slave model and uses distributed processing to decentralize the workload of retrieving information from the Internet. We then demonstrate the tool's viability with a set of execution runs and analyze the overload of the master process as the number of connected slaves grows.
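The master-slave decomposition the study describes can be sketched with a shared task queue: the master enqueues work items and aggregates results while slave workers pull and process them. Threads stand in for the networked slave processes, and the `fetch` callable stands in for the prototype's actual HTTP retrieval.

```python
import queue
import threading

def master_slave_retrieve(items, fetch, num_slaves=4):
    """Master fills a shared queue; each slave drains it and applies `fetch`,
    decentralizing the retrieval workload across the slaves."""
    tasks = queue.Queue()
    results = {}
    lock = threading.Lock()

    def slave():
        while True:
            try:
                item = tasks.get_nowait()
            except queue.Empty:
                return                  # queue drained: this slave is done
            content = fetch(item)
            with lock:
                results[item] = content # master-side aggregation

    for item in items:
        tasks.put(item)
    workers = [threading.Thread(target=slave) for _ in range(num_slaves)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results
```

The study's observed master overload corresponds to the aggregation step here growing with the number of connected slaves.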
ISBN: (Print) 9781424493128
This paper presents a development environment that enables automatic code generation for the interconnection of components obtained by partitioning a Petri net model, addressing its distributed execution on networked controllers (including microcontrollers and FPGA devices, as well as specific controllers based on PLCs and general-purpose PCs). The proposed interconnection is based on a Network-on-Chip solution whose communications follow the RS-232 serial protocol (although it can operate at higher transmission rates), for intra-circuit as well as inter-circuit interconnectivity. Because RS-232 is well accepted in industry, the proposed solution integrates existing modules that use this interface without redesigning the whole communication system. Implementation platforms used for testing include Xilinx reconfigurable platforms, namely Spartan-3 and Virtex FPGAs, as well as low-cost microcontrollers, namely the Microchip PIC18F4620, and general-purpose PCs; industrial PCs, PLCs, or other platforms with an RS-232 serial port can be integrated easily. A ring topology was selected to allow greater flexibility. The proposed architecture and protocol are described, and finally an example presents the flow of development starting from the Petri net model.
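The chosen ring topology can be illustrated by the hop-by-hop forwarding a frame undergoes between controllers. The node names are hypothetical, and a real frame would carry RS-232-level framing and addressing rather than Python values; this only sketches the routing behavior of a unidirectional ring.

```python
def ring_deliver(nodes, src, dst, payload):
    """Forward a frame hop by hop around a unidirectional ring from src to
    dst; returns the traversed node sequence and the delivered payload."""
    if src not in nodes or dst not in nodes:
        raise ValueError("unknown node")
    path = [src]
    i = nodes.index(src)
    while nodes[i] != dst:
        i = (i + 1) % len(nodes)        # single direction: next hop in the ring
        path.append(nodes[i])
    return path, payload
```

A ring keeps each controller's wiring to two neighbors, which is what makes adding heterogeneous devices (FPGA, PIC, PLC, PC) flexible.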
ISBN: (Print) 9781424466757
This paper introduces the Object Interaction Language (OIL), which allows programming and coordination of distributed, heterogeneous sensor-actuator networks such as sensor networks and multi-robot systems. OIL is an interpreted, object-oriented language and is contained in an OIL environment. An OIL environment provides communication between agents and allows agents to exchange code snippets with one another. Possible implementations of OIL environments range from, in the simplest case, a sheet of paper with OIL code literally printed on it, to a computational agent endowed with sensors, actuators, and wireless communication. The atomic primitive in OIL is the intent, whose implementation is resolved at runtime, potentially using code from other OIL environments and leading to distributed execution. We develop the structure of the language and demonstrate its key properties using a distributed computation task that is parallelized via an OIL environment. We evaluate the algorithm empirically by running OIL code on a team of six computational agents that communicate wirelessly. We then show experimentally how OIL can be used to allocate sensing and mobility in a multi-robot system using a case study in navigation, where one robot dynamically provides laser range data to another robot that is blind to its environment.
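Runtime intent resolution — try the local environment first, then fall back to code obtained from a peer environment — can be sketched as follows. The class and method names are invented for illustration; OIL's actual syntax and environment API are not given in the abstract.

```python
class OILEnvironment:
    """Hypothetical sketch of an OIL environment: an intent is bound to an
    implementation at runtime, possibly using a code snippet obtained from
    another environment (mirroring OIL's snippet exchange)."""

    def __init__(self, name, implementations=None):
        self.name = name
        self.implementations = dict(implementations or {})
        self.peers = []                 # other environments reachable by comms

    def resolve(self, intent):
        if intent in self.implementations:
            return self.implementations[intent]
        for peer in self.peers:         # ask peer environments for the code
            if intent in peer.implementations:
                impl = peer.implementations[intent]
                self.implementations[intent] = impl   # keep the snippet locally
                return impl
        raise LookupError(f"no implementation for intent {intent!r}")

    def execute(self, intent, *args):
        return self.resolve(intent)(*args)
```

Resolving through a peer is what lets execution spread across agents: the intent's caller need not know in advance where the implementation lives.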
In multi-agent domains, the generation and coordinated execution of plans in the presence of adversaries is a significant challenge. In our research, a special "coach" agent works with a team of distributed agents. The coach has a global view of the world but has no actions other than occasionally communicating with the team over a limited-bandwidth channel. The coach is given a set of predefined opponent models that predict future states of the world caused by the opponents' actions. It observes the world-state changes resulting from the execution of its team and the opponents, and selects the best-matching opponent model based on these observations. The coach then uses the recognized model to predict the opponent's behavior. Upon opportunities to communicate, the coach generates a plan for the team using the predictions of the opponent model; the centralized coach thus generates a plan for distributed execution. We introduce (i) the probabilistic representation and recognition algorithm for the opponent models; (ii) a multi-agent plan representation, Multi-Agent Simple Temporal Networks; and (iii) a plan execution algorithm that allows robust distributed execution in the presence of noisy perception and actions. The complete approach is implemented in a complex simulated robot soccer environment. We present the contributions as developed in this domain, carefully highlighting their generality, along with a series of experiments validating the effectiveness of our coach approach.
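Contribution (ii) builds on Simple Temporal Networks, which admit a standard consistency check: an STN is consistent iff its distance graph has no negative cycle, verifiable with Floyd-Warshall. This is the textbook formulation, not the paper's multi-agent extension; the constraint encoding below is the usual one.

```python
def stn_consistent(num_events, constraints):
    """`constraints` maps (i, j) -> w, encoding t_j - t_i <= w as an edge
    i -> j of weight w in the distance graph. Consistent iff Floyd-Warshall
    leaves no negative self-distance (i.e., no negative cycle)."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(num_events)]
         for i in range(num_events)]
    for (i, j), w in constraints.items():
        d[i][j] = min(d[i][j], w)
    for k in range(num_events):
        for i in range(num_events):
            for j in range(num_events):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(num_events))
```

For example, requiring an action to start at least 5 time units but at most 3 time units after another is inconsistent, and the check detects it as a negative cycle.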