ISBN: (Print) 9781479910694
As technology scaling allows multiple processing components to be integrated on a single chip, modern computing systems have led to the advent of Multiprocessor System-on-Chip (MPSoC) and Chip Multiprocessor (CMP) designs. Networks-on-Chip (NoCs) have been proposed as a promising solution to the complex on-chip communication problems of these multicore platforms. To optimize NoC-based multicore system design, it is essential to evaluate NoC performance over numerous configurations in a large design space. Taking traffic characteristics into account and using an appropriate latency model are therefore crucial for accurate and fast evaluation. In this tutorial, we survey current progress in these areas. We first review NoC workload modeling and traffic analysis techniques. We then discuss the mathematical formalisms for evaluating performance under a given traffic model, for both average and worst-case latency prediction. Finally, we discuss the advantages of combining analytical and simulation-based techniques and review new attempts at bridging the two approaches.
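As a toy illustration of the kind of analytical latency model this tutorial surveys (not the specific formalisms it covers), the following Python sketch estimates average packet latency on a NoC path using an M/M/1 queueing approximation at each router; the hop count, service rate, arrival rates, and link delay are all assumed inputs.

# Toy analytical NoC latency estimate (illustrative only, assumed parameters).
# Average end-to-end latency = sum over hops of (router service + queueing wait)
# plus link traversal, using an M/M/1 approximation at each router.

def mm1_wait(arrival_rate, service_rate):
    """Mean waiting time in an M/M/1 queue; requires arrival_rate < service_rate."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("queue is unstable (utilization >= 1)")
    return rho / (service_rate * (1.0 - rho))

def avg_packet_latency(hops, arrival_rates, service_rate, link_delay=1.0):
    """arrival_rates: per-hop packet arrival rates (packets/cycle), one per router on the path."""
    latency = 0.0
    for lam in arrival_rates[:hops]:
        service_time = 1.0 / service_rate          # router service delay
        latency += service_time + mm1_wait(lam, service_rate) + link_delay
    return latency

# Example: 4-hop path, routers serving 1 flit/cycle, moderate load at each hop.
print(avg_packet_latency(hops=4, arrival_rates=[0.3, 0.5, 0.4, 0.2], service_rate=1.0))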
Nonlinear sensors and digital solutions are used in many embedded system designs. Because the input/output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor with an optimized device has always been a design challenge. This paper proposes a new FPGA-based Adaptive Neuro-Fuzzy Inference System (ANFIS) digital architecture to linearize the sensor's characteristic. The ANFIS linearizer is synthesized and optimized for digital linearization. The performance of the developed architecture is examined in comparison with an ANFIS software model and with two other FPGA-based architectures designed using classical techniques. The results demonstrate the success of the proposed architecture by showing its ability to linearize a temperature sensor characteristic.
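A minimal sketch of how a first-order Sugeno ANFIS can linearize a sensor reading follows; it is not the paper's FPGA architecture, and all membership and consequent parameters below are hypothetical placeholders rather than trained values.

import math

# Hypothetical two-rule, one-input first-order Sugeno ANFIS used as a sensor linearizer.
# Trained parameters would normally come from offline learning; values here are placeholders.
MF_PARAMS = [(0.2, 0.15), (0.8, 0.15)]      # (center, width) of Gaussian membership functions
CONSEQUENTS = [(1.4, -0.05), (0.7, 0.30)]   # (p, r): rule output = p*x + r

def gaussian_mf(x, center, width):
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def anfis_linearize(x):
    """Map a raw (nonlinear) sensor reading x to a linearized estimate."""
    # Layers 1-2: firing strength of each rule (single input, so strength = membership degree)
    w = [gaussian_mf(x, c, s) for (c, s) in MF_PARAMS]
    total = sum(w) or 1e-12
    # Layer 3: normalized firing strengths
    w_norm = [wi / total for wi in w]
    # Layers 4-5: weighted sum of linear consequents
    return sum(wn * (p * x + r) for wn, (p, r) in zip(w_norm, CONSEQUENTS))

print(anfis_linearize(0.45))   # linearized reading for a raw input of 0.45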
This paper builds a simulation model of MTU396 diesel generating sets in SIMULINK, then uses the MATLAB Real-Time Workshop (RTW) to transfer the SIMULINK simulation model into portable embedded C++ code, and develops a semi-physical simulation system for diesel rotational speed in the Visual C++ integrated development environment. This method makes full use of SIMULINK's rich and convenient modeling environment while at the same time exploiting the powerful hardware control capabilities of VC and the advantages of its flexible human-computer interface. The paper introduces the specific steps and realization method for developing the semi-physical simulation system with this approach.
ISBN: (Print) 9781479910694
Simulation is the state-of-the-art analysis technique for distributed thermal management schemes. Due to the numerous parameters involved and the distributed nature of these schemes, such non-exhaustive verification may fail to catch functional bugs in the algorithm or may report misleading performance characteristics. To overcome these limitations, we propose a methodology to perform formal verification of distributed dynamic thermal management for many-core systems. The proposed methodology is based on the SPIN model checker and the Lamport timestamps algorithm. Our methodology allows specification and verification of both functional and timing properties in a distributed many-core system. In order to illustrate the applicability and benefits of our methodology, we perform a case study on a state-of-the-art agent-based distributed thermal management scheme.
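The Lamport timestamps algorithm cited above orders events in a distributed system without a global clock. A minimal Python sketch of the logical-clock rules (not the SPIN/Promela model used in the paper) is:

# Minimal Lamport logical clock: each process keeps a counter, increments it on
# every local event and send, and on receive takes max(local, received) + 1.

class LamportProcess:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock            # timestamp carried by the message

    def receive(self, msg_timestamp):
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

# Example: core A's thermal agent reports its state to core B's agent.
a, b = LamportProcess("A"), LamportProcess("B")
ts = a.send()                        # A's clock becomes 1
print(b.receive(ts))                 # B's clock becomes max(0, 1) + 1 = 2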
ISBN: (Print) 9781467322973; 9781467322966
Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound on the worst-case execution time. To compare different approaches we would like to quantify time predictability; that means we need to measure it. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.
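Since the paper concludes that only worst-case execution time bounds can be compared, a toy Python sketch of such a comparison might look as follows; the benchmark names and cycle counts are hypothetical and serve only to illustrate the idea.

# Toy comparison of statically derived WCET bounds across two architectures.
# All numbers are hypothetical cycle counts for illustration only.
wcet_bounds = {
    "arch_A": {"bsort": 12000, "fir": 4500, "crc": 8000},
    "arch_B": {"bsort": 15000, "fir": 3900, "crc": 7200},
}

def geometric_mean_ratio(bounds, baseline="arch_A", candidate="arch_B"):
    """Geometric mean of per-benchmark WCET-bound ratios (candidate / baseline)."""
    ratios = [bounds[candidate][b] / bounds[baseline][b] for b in bounds[baseline]]
    product = 1.0
    for r in ratios:
        product *= r
    return product ** (1.0 / len(ratios))

print(geometric_mean_ratio(wcet_bounds))   # < 1 means arch_B has tighter bounds on average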
This paper begins with a discussion of the role and value of industrial control system (ICS) test beds, which provide a universal, controllable, realistic, and repeatable experimental platform for SCADA control system cyber security research. Following the ICS layered architecture, an ICS test bed based on emulation, physical, and simulation components (the EPS-ICS Test bed) is designed and implemented. The EPS-ICS Test bed enables experimenters to create experiments with varying levels of fidelity and is widely used for vulnerability discovery, comprehensive security training, facilitating the development of security standards, and developing advanced control system architectures and technologies that are more secure and robust.
ISBN: (Print) 9781479914173
The large body of research in perceptual computing has enabled, and will continue to enable, many intriguing applications such as augmented reality, driver assistance, and personal analytics. Moreover, as wearable first-person computing devices become increasingly popular, the demand for highly interactive perceptual computing applications will increase rapidly. Applications including first-person assistance and analytics will be pervasive across the retail, automotive, and medical domains. However, the computational requirements of future perceptual computing applications will far exceed the capabilities of traditional vision algorithms executed on sequential CPUs and GPUs. Hardware accelerators are recognized as key to surpassing the limits of existing sequential architectures. In particular, brain-inspired, or neuromorphic, vision accelerators have the potential to support computationally intensive perception algorithms on resource- and power-constrained devices.
This paper discusses workload utilization dissemination for grid computing. The CPU is a well-known resource item (RI) and an integral part of most of the literature, while other RIs may include memory, network, and I/O overhead. The selection of resource variables and the number of RIs involved result in different definitions of the workload. Various combinations of computer RIs have been explored to study the style of usage, the techniques embedded, and their capabilities. Through this exploration, the study describes the pattern of workload dissemination in terms of RI usage and elicits the enhancement factors for system performance. Among these factors are the manipulation of computer RIs, the type of workload information and how it is used, the workload dissemination direction together with its implementation method, and the use of certain algorithms to arrive at new integrated scheduling with load-balancing capability. A combination of these factors will help in developing an optimized scheduling or load balancing algorithm.
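As a hedged sketch of combining several resource items into a single workload figure for load balancing, the following Python fragment weights normalized utilizations of CPU, memory, network, and I/O; the weights and node data are assumptions for illustration, not values from the surveyed work.

# Combine normalized utilizations of several resource items (RIs) into one workload index,
# then pick the least-loaded node. Weights are illustrative assumptions.
WEIGHTS = {"cpu": 0.5, "memory": 0.2, "network": 0.2, "io": 0.1}

def workload_index(utilization):
    """utilization: dict of RI name -> utilization in [0, 1]."""
    return sum(WEIGHTS[ri] * utilization.get(ri, 0.0) for ri in WEIGHTS)

def pick_least_loaded(nodes):
    """nodes: dict of node name -> per-RI utilization dict."""
    return min(nodes, key=lambda n: workload_index(nodes[n]))

cluster = {
    "node1": {"cpu": 0.80, "memory": 0.40, "network": 0.30, "io": 0.20},
    "node2": {"cpu": 0.55, "memory": 0.60, "network": 0.20, "io": 0.10},
}
print(pick_least_loaded(cluster))   # dispatch the next job to this node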
In areas such as psychology and neuroscience a common approach to study human behavior has been the development of theoretical models of cognition. In fields such as artificial intelligence, these cognitive models are usually translated into computational implementations and incorporated into the architectures of intelligent autonomous agents (AAs). The main assumption is that this design approach contributes to the development of intelligent systems capable of displaying very believable and human-like behaviors. Decision Making is one of the most investigated and computationally implemented cognitive functions. The literature reports several computational models designed to allow AAs to make decisions that help achieve their personal goals and needs. However, most models disregard crucial aspects of human decision making such as other agents' needs, ethical values, and social norms. In this paper, we propose a biologically inspired computational model of Moral Decision Making (MDM). This model is designed to enable AAs to make decisions based on ethical and moral judgment. The simulation results demonstrate that the model helps to improve the believability of virtual agents when facing moral dilemmas.
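A deliberately simplified sketch of the kind of decision rule such a model might implement follows; it is a toy utility aggregation, not the authors' biologically inspired MDM architecture, and all weights, criteria, and scores are hypothetical.

# Toy moral decision rule: score each candidate action by its benefit to the agent,
# its benefit to other agents, and its compliance with ethical values and social norms,
# then pick the highest-scoring action. Weights and scores are illustrative only.
WEIGHTS = {"own_needs": 0.3, "others_needs": 0.3, "ethical_values": 0.25, "social_norms": 0.15}

def moral_score(action):
    return sum(WEIGHTS[k] * action[k] for k in WEIGHTS)

def decide(actions):
    """actions: dict of action name -> per-criterion scores in [0, 1]."""
    return max(actions, key=lambda a: moral_score(actions[a]))

dilemma = {
    "keep_resource":  {"own_needs": 0.9, "others_needs": 0.1, "ethical_values": 0.3, "social_norms": 0.4},
    "share_resource": {"own_needs": 0.5, "others_needs": 0.9, "ethical_values": 0.8, "social_norms": 0.9},
}
print(decide(dilemma))   # -> "share_resource" under these weights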
ISBN: (Print) 9781479903719
In recent years the area of High Performance Computing (HPC) has received outstanding support both from users and from computer system designers. This support is mainly due to the increasing complexity and density of data processing in several advanced research areas. These areas include, but are not limited to, climate modeling, weather forecasting, and improving the performance measurement and precision of nuclear weapon systems as well as space programs. This rise in complexity imposes additional pressure on clients to demand further improvements in system features such as speed, reliability, performance, fault tolerance, compatibility, size, and cost. This is one of the determining factors that encourage researchers to model new HPC architectures in order to satisfy the clients' scientific and engineering requirements. With this motivation in mind, the author introduces a new architecture coined the Master-Slave Multi-Super Hypercube DXTree architecture ((MS)²HDX-T). For this architecture the total system cost, obtained through mathematical modeling and simulation, is compared with similar parameters of existing HPC systems. The result highlights the advantages, or otherwise, of the proposed architecture from a scientific research and/or commercial point of view.
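As an illustrative sketch of how total system cost can be compared through mathematical modeling, the following Python fragment uses a generic binary-hypercube link-count model with assumed per-node and per-link costs; it is not the author's (MS)²HDX-T cost model.

# Generic cost model for a d-dimensional binary hypercube: 2^d nodes and d * 2^(d-1) links.
# Per-node and per-link costs are assumed placeholders.

def hypercube_cost(dimension, node_cost=100.0, link_cost=10.0):
    nodes = 2 ** dimension
    links = dimension * 2 ** (dimension - 1)
    return nodes * node_cost + links * link_cost

# Compare two system sizes: a 6-cube (64 nodes) vs. an 8-cube (256 nodes).
for d in (6, 8):
    print(f"dimension {d}: {2**d} nodes, cost = {hypercube_cost(d):.0f}")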