An FPGA router is developed using both the PathFinder and A* algorithms. Instead of the routing channel and track model widely adopted by the research community, a modified routing model based on the ar...
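The preview above ends mid-sentence, but the named algorithms are well documented: PathFinder iteratively rips up and reroutes signals under a negotiated-congestion cost, and A* supplies the per-signal search. The sketch below is a minimal illustration of that combination; the graph/occupancy/capacity data model is an assumption made for the example, not the router described in the paper.

```python
import heapq, itertools

def pathfinder_astar(graph, src, dst, history, occupancy, capacity, heuristic):
    """One A* routing pass with a PathFinder-style negotiated-congestion cost.

    graph:     {node: [(neighbor, base_cost), ...]}
    history:   {node: accumulated historical congestion}
    occupancy: {node: signals currently using the node}
    capacity:  {node: signals the node may legally carry}
    heuristic: admissible estimate of remaining cost to dst
    (All field names here are illustrative, not the paper's data model.)
    """
    tie = itertools.count()                     # tiebreaker for the heap
    frontier = [(heuristic(src), next(tie), 0.0, src, None)]
    came_from, best_g = {}, {src: 0.0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == dst:
            break
        for nxt, base in graph.get(node, ()):
            # PathFinder node cost: (base + history) * present-congestion
            # factor, which rises when adding this signal exceeds capacity.
            present = 1.0 + max(0, occupancy[nxt] + 1 - capacity[nxt])
            ng = g + (base + history[nxt]) * present
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier,
                               (ng + heuristic(nxt), next(tie), ng, nxt, node))
    path, n = [], dst                           # walk parents back, then reverse
    while n is not None:
        path.append(n)
        n = came_from.get(n)
    return path[::-1]
```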
Particle swarm optimization (PSO) is an effective evolutionary algorithm for global optimization. Based on an analysis of particle movements during evolution, a parameter p is introduced to control the values of the acceleration coefficients C1 and C2, which affects the convergence rate of PSO. For different problems, a corresponding value of p is adopted to improve performance. A particle confidence coefficient q is applied to weight the emphasis a particle places on its own best solution versus the global best solution. An adaptive value of q is introduced so that PSO can accommodate the specific situation of each particle. Finally, the performance of PSO with parameters p and q is validated by optimizing benchmark functions.
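The abstract does not give the exact formulas for p and q, so the sketch below makes two labeled assumptions: p splits a fixed coefficient budget between C1 (cognitive) and C2 (social), and a per-particle q[i] scales how strongly particle i trusts its own best position.

```python
import random

def pso_step(positions, velocities, pbest, gbest, p, q, w=0.7, c_total=4.0):
    """One PSO velocity/position update.

    Assumed scheme: p in [0, 1] divides the budget c_total between the
    cognitive coefficient C1 and the social coefficient C2, and q[i]
    further scales particle i's attraction to its own best solution.
    """
    c1, c2 = p * c_total, (1.0 - p) * c_total
    for i, x in enumerate(positions):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + q[i] * c1 * r1 * (pbest[i][d] - x[d])
                                + c2 * r2 * (gbest[d] - x[d]))
            x[d] += velocities[i][d]            # move the particle in place
```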
ISBN (print): 9781424444106
The execution of applications in dependable systems requires a high level of instrumentation for automatic control. We present in this paper a monitoring solution for complex application execution. The monitoring solution is dynamic, offering real-time information about systems and applications. The complex applications are described using workflows. We show that the management process for application execution is improved by using monitoring information. The environment consists of distributed dependable systems that offer flexible support for complex application execution. Our experimental results highlight the performance of the proposed monitoring tool, the MonALISA framework.
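The abstract leaves the monitoring interface unspecified; as one hedged reading of the workflow-plus-monitoring design, the sketch below polls task state and lets monitoring information drive dispatch decisions. The task names, states, and poll_state stand-in are hypothetical and are not the MonALISA API.

```python
import time

# Hypothetical workflow: each task has dependencies and a state that a
# monitor refreshes; the manager dispatches tasks whose deps are done.
workflow = {
    "stage-in":  {"deps": [],           "state": "pending"},
    "compute":   {"deps": ["stage-in"], "state": "pending"},
    "stage-out": {"deps": ["compute"],  "state": "pending"},
}

def poll_state(task):
    # Stand-in for a real-time monitoring query to a MonALISA-like service.
    return workflow[task]["state"]

def run(dispatch):
    while any(t["state"] != "done" for t in workflow.values()):
        for name, task in workflow.items():
            ready = all(poll_state(d) == "done" for d in task["deps"])
            if task["state"] == "pending" and ready:
                task["state"] = "running"
                dispatch(name)          # management decision driven by monitoring
                task["state"] = "done"  # sketch: assume synchronous completion
        time.sleep(0.1)                 # monitoring interval

run(print)                              # prints tasks in dependency order
```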
Describes a software/hardware architectural transformation of a single-threaded, cyclic-executive-based missile application into a multitasking, distributed application using MetaH (which builds a multiprocessor executive, based on rate monotonic theory, that binds with hand-generated Ada code and with code generated from the ControlH algorithm specification language and code generator). The benefits of this process are: it provides a traceable path to the original language implementation; it achieves data encapsulation and data-flow understanding; it separates out concurrent processes; it results in an object-based design; and MetaH provides a robust mechanism for multiprocessor distribution.
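Since the MetaH executive is built on rate monotonic theory, the classic sufficient schedulability test is worth making concrete: n periodic tasks are guaranteed schedulable under rate-monotonic priorities if total utilization does not exceed the Liu and Layland bound n(2^(1/n) - 1). The task set below is a made-up example, not data from the paper.

```python
def rm_schedulable(tasks):
    """Sufficient rate-monotonic test: sum(C_i / T_i) <= n * (2**(1/n) - 1).

    tasks: list of (compute_time, period) pairs. Passing the bound
    guarantees schedulability; failing it is inconclusive (an exact
    response-time analysis would then be needed).
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Made-up example: three periodic tasks (C, T) in milliseconds.
print(rm_schedulable([(10, 50), (15, 100), (20, 200)]))  # U = 0.45 <= ~0.78 -> True
```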
As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100nm devices, the complexity of optical proximity correction (OPC) increases severely, and OPC is applied to non-critical layers as well. The transformation of designed pattern data by the OPC operation adds complexity, which causes runtime overheads in subsequent steps such as mask data preparation (MDP) and collapses the existing design hierarchy. Therefore, many mask shops exploit distributed computing to reduce the runtime of mask data preparation rather than exploiting the design hierarchy. Distributed computing uses a cluster of computers connected to a local network. However, two factors limit the benefit of distributed computing in MDP. First, running every sequential MDP job with the maximum number of available CPUs is inefficient compared to parallel MDP job execution, because of the characteristics of the input data. Second, the runtime improvement relative to the invested cost is insufficient, since the scalability of fracturing tools is limited. In this paper, we discuss an optimal load-balancing environment that increases the uptime of a distributed computing system by assigning an appropriate number of CPUs to each input design data set. We also describe distributed processing (DP) parameter optimization to obtain maximum throughput in MDP job processing.
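One way to make the load-balancing idea concrete: because fracturing jobs scale poorly past some CPU count, each job should receive only as many CPUs as it can profitably use, freeing the rest for other jobs running in parallel. The sketch below models per-job scalability with Amdahl's law; the serial fractions and the marginal-gain cutoff are illustrative assumptions, not measurements from the paper.

```python
def speedup(serial_frac, cpus):
    # Amdahl's law: models the limited scalability of a fracturing job.
    return 1.0 / (serial_frac + (1.0 - serial_frac) / cpus)

def assign_cpus(jobs, total_cpus, min_gain=0.05):
    """Greedy allocation: repeatedly give one more CPU to whichever job
    gains the most, stopping once marginal speedups fall below min_gain.
    jobs: {name: serial_fraction}. Returns {name: cpu_count}.
    """
    alloc = {name: 1 for name in jobs}
    spare = total_cpus - len(jobs)
    while spare > 0:
        gains = {n: speedup(f, alloc[n] + 1) - speedup(f, alloc[n])
                 for n, f in jobs.items()}
        best = max(gains, key=gains.get)
        if gains[best] < min_gain:      # no job scales well enough anymore
            break
        alloc[best] += 1
        spare -= 1
    return alloc

# Illustrative inputs: a highly parallel design and a mostly serial one.
print(assign_cpus({"layerA": 0.05, "layerB": 0.5}, total_cpus=16))
```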
A problem in overlay network research is the lack of a testing platform. Recent research has shown that distributed hash tables (DHTs) can be used to build scalable, robust, and efficient applications. However, large-scale distributed systems are hard to deploy, and DHTs are no exception. OpenDHT is a publicly accessible DHT service. In contrast to the usual DHT model, clients of OpenDHT can issue put and get operations to any DHT node, which processes the operations on their behalf. This paper introduces OpenDHT and concentrates on how to perform network testing based on OpenDHT. Testing of resource location is also described.
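To make the "put/get to any node" model concrete, here is a toy in-memory sketch of that semantics: a client hands an operation to an arbitrary gateway node, which routes it to the node responsible for the key. The hashing, class names, and ring layout are invented for illustration and are not OpenDHT's actual implementation.

```python
import hashlib

class DhtNode:
    """Toy model of the OpenDHT-style service: a client can hand a put or
    get to ANY node, which routes it to the node responsible for the key.
    Consistent-hashing details are simplified to modular bucketing.
    """
    def __init__(self, node_id, ring):
        self.node_id, self.ring, self.store = node_id, ring, {}

    def _responsible(self, key):
        digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return self.ring[digest % len(self.ring)]

    def put(self, key, value):
        # Keys may hold several values, so puts append rather than overwrite.
        self._responsible(key).store.setdefault(key, []).append(value)

    def get(self, key):
        return self._responsible(key).store.get(key, [])

# Build a small ring; the client may contact any node as its gateway.
ring = []
ring.extend(DhtNode(i, ring) for i in range(8))
ring[3].put("service:render", "host-a:9000")    # issued via node 3
print(ring[6].get("service:render"))            # retrieved via node 6
```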
ISBN (print): 9781424465330
Coarse-grained reconfigurable array architectures have drawn increasing attention due to their good performance and flexibility. In general, they show high performance on compute-intensive kernel code but cannot handle control-intensive parts efficiently, which degrades overall performance. In this paper, we present automatic mapping of control-intensive kernels onto a coarse-grained reconfigurable array architecture using kernel-level speculative execution. Experimental results show that our automatic mapping tool successfully handles control-intensive kernels for the coarse-grained reconfigurable array architecture. In particular, it improves the performance of the H.264 deblocking filters for luma and chroma by more than 26x and 16x respectively, compared to a conventional software implementation. Compared to the approach using predicated execution, the proposed approach achieves a 2.27x performance improvement.
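The contrast with predicated execution can be sketched in plain software terms: predication pays for both sides of a branch and selects per element, while kernel-level speculation commits the whole kernel to a predicted variant and re-executes only on misprediction. The filter functions and the prediction check below are illustrative assumptions, not the paper's mapping flow.

```python
def deblock_predicated(pixels, strong, weak, cond):
    # Predicated style: BOTH filter variants are computed for every pixel
    # and one result is selected per element, so both paths are always paid.
    out = []
    for p, c in zip(pixels, cond):
        s, w = strong(p), weak(p)            # both sides evaluated regardless
        out.append(s if c else w)
    return out

def deblock_speculative(pixels, strong, weak, predict_strong, actual_strong):
    # Kernel-level speculation: run the predicted kernel variant for the
    # whole block; only a misprediction triggers a recovery run.
    guess = strong if predict_strong else weak
    result = [guess(p) for p in pixels]      # speculative execution
    if predict_strong != actual_strong:      # misspeculation detected
        other = weak if predict_strong else strong
        result = [other(p) for p in pixels]  # recovery re-execution
    return result

strong = lambda p: p // 2
weak = lambda p: p - 1
print(deblock_speculative([8, 10, 12], strong, weak, True, True))  # [4, 5, 6]
```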
ISBN (print): 9781424400546
Distributed virtual environments (DVEs) are distributed systems that allow multiple geographically distributed clients (users) to interact simultaneously in a computer-generated, shared virtual world. Applications of DVEs can be seen in many areas today, such as online games, military simulations, and collaborative design. To support large-scale DVEs with real-time interactions among thousands or more distributed clients, a geographically distributed server architecture (GDSA) is generally needed, and the virtual world can be partitioned into many distinct zones to distribute the load among the servers. Due to the geographic distribution of clients and servers in such architectures, it is essential to assign the participating clients to servers efficiently to enhance users' experience in interacting within the DVE. This problem is termed the client assignment problem. In this paper, we propose a two-phase approach, consisting of an initial assignment phase and a refined assignment phase, to address this problem. Both phases are shown to be NP-hard, and several heuristic assignment algorithms are devised based on this two-phase approach. Via extensive simulation studies with realistic settings, we evaluate these algorithms in terms of their performance in enhancing the interactivity of the DVE.
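The abstract does not spell out the heuristics, but a natural baseline for the initial assignment phase is a capacity-constrained greedy pass that sends each client to its lowest-latency server with spare capacity; the latencies and capacities below are made-up inputs.

```python
def greedy_assign(latency, capacity):
    """Initial-phase baseline: assign each client to the reachable server
    with the smallest latency that still has capacity.

    latency:  {client: {server: rtt_ms}}  (made-up inputs)
    capacity: {server: max_clients}
    """
    load = {s: 0 for s in capacity}
    assignment = {}
    # Handle clients with the best single option first to limit regret.
    for client in sorted(latency, key=lambda c: min(latency[c].values())):
        for s in sorted(latency[client], key=latency[client].get):
            if load[s] < capacity[s]:
                assignment[client], load[s] = s, load[s] + 1
                break
    return assignment

latency = {"c1": {"s1": 20, "s2": 90},
           "c2": {"s1": 25, "s2": 30},
           "c3": {"s1": 80, "s2": 15}}
print(greedy_assign(latency, {"s1": 1, "s2": 2}))
# {'c3': 's2', 'c1': 's1', 'c2': 's2'}
```

A refined assignment phase could then locally swap or migrate clients whenever a swap lowers overall interaction latency.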
Since the spring of 1988, Carnegie Mellon University and the Naval Air Development Center have been working together to implement several large signal processing systems on the Warp parallel computer. In the course of...
An open-source parallel 3D Crystal Plasticity Finite Element (CPFE) software package, PRISMS-Plasticity, is presented here as a part of the overarching PRISMS Center i...