Matlab/Simulink is an industrial tool that is widely used to design and validate control algorithms for embedded control systems using numerical simulation. A Simulink model of a control system typically defines one or more control algorithms together with their environment. Such models exhibit both discrete and continuous dynamics, simulated by discretizing time. On the other hand, a colored Petri net (CPN) is a well-known formalism for modeling the behavior of discrete event systems. In this paper, we give a formal semantics to Simulink using the CPN formalism, by describing how Simulink models can be expressed as CPNs. We also show how Petri nets can be simulated in Simulink. Finally, we show how a CPN model can be used for performance analysis of a Simulink model.
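To make the encoding idea concrete, the sketch below plays the token game of a tiny colored Petri net standing in for a Simulink unit delay fed by a constant source: places model signals, token colors carry signal values, and one transition firing corresponds to one discretized time step. The net structure and step discipline here are illustrative assumptions, not the paper's actual translation rules.

```python
# Minimal CPN token game: places hold multisets of colored tokens (Counter),
# a transition consumes/produces token multisets. The net models a unit delay.
from collections import Counter

def enabled(marking, consume):
    """A transition is enabled if every input place holds the tokens it needs."""
    return all(marking[p][tok] >= n
               for p, need in consume.items() for tok, n in need.items())

def fire(marking, consume, produce):
    """Fire: remove consumed tokens, add produced tokens (multiset semantics)."""
    for p, need in consume.items():
        marking[p] -= need
    for p, out in produce.items():
        marking[p] += out

# Places model Simulink signals; token colors carry the signal values.
marking = {"src_out": Counter({1.0: 1}), "delay_state": Counter({0.0: 1}),
           "sink_in": Counter()}

for step in range(3):  # discretized time, as in a fixed-step Simulink solver
    old = next(iter(marking["delay_state"]))   # current delay state (color)
    new = next(iter(marking["src_out"]))       # incoming signal value (color)
    consume = {"src_out": Counter({new: 1}), "delay_state": Counter({old: 1})}
    produce = {"sink_in": Counter({old: 1}), "delay_state": Counter({new: 1}),
               "src_out": Counter({new: 1})}   # constant source re-emits
    if enabled(marking, consume):
        fire(marking, consume, produce)
    print(f"t={step}: sink sees {sorted(marking['sink_in'].items())}")
```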
Cloud computing is one of the leading technologies of the current era. It has gained importance because it optimizes IT resources, delivered as on-demand services to consumers with high availability, flexibility, reliability, and scalability. Cloud computing essentially converts desktop computing into service-level computing using remote servers or large datacenters. While cloud computing offers many user-friendly features through services available over the internet, the harsh reality is that clouds are prone to security attacks. One of the major security threats is the Distributed Denial of Service (DDoS) attack. A DDoS attack destroys the ability of the system to provide service by overwhelming the bandwidth of network devices and saturating the computational resources of the service provider. This paper describes an experimental analysis of the impact of a flooding DDoS attack in a cloud environment through simulation in CloudSim. In this experiment, we define Infrastructure as a Service (IaaS) in CloudSim, and this IaaS is deployed in the Eucalyptus cloud suite. Attacks are simulated on a few VM instances defined in CloudSim, and the computational parameters of each VM are observed under normal operation and under attack.
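As a rough illustration of why a flood degrades the observed VM parameters, the following self-contained sketch models a single VM as a slotted queue with fixed service capacity and compares normal load against flooding load. The rates, buffer size, and delay proxy are assumed toy numbers; the actual experiment measures real CloudSim VM metrics under an IaaS deployment in Eucalyptus.

```python
# Toy stand-in for the CloudSim experiment: per tick, Poisson arrivals join a
# bounded buffer (overflow = drops) and up to `capacity` requests are served.
import numpy as np

def run_vm(arrival_rate, capacity=50, buffer=500, ticks=2_000, seed=0):
    rng = np.random.default_rng(seed)
    queue, dropped, waits = 0, 0, []
    for _ in range(ticks):
        arrivals = rng.poisson(arrival_rate)      # requests this tick
        admitted = min(arrivals, buffer - queue)  # buffer overflow => drops
        dropped += arrivals - admitted
        queue += admitted
        waits.append(queue / capacity)            # rough queueing delay (ticks)
        queue -= min(queue, capacity)             # serve up to capacity
    return np.mean(waits), dropped

for label, rate in [("normal", 30), ("under DDoS flood", 400)]:
    delay, dropped = run_vm(rate)
    print(f"{label:>16}: mean delay {delay:5.2f} ticks, {dropped} dropped")
```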
The developed Hardware-in-the-Loop (HIL) simulation was used to evaluate the performance of electrical machine drive systems in cases that are very difficult or impossible to test in a laboratory environment. Using this simulation environment, embedded control systems (ECS) can easily be tested in a practical, inexpensive, and safe way. In this study, we have developed a novel FPGA-based model of an induction machine on a single chip for use in real-time simulation applications. During development, we chose Altera tools and used a floating-point number system for the model equations. Thanks to the FPGA-based system modeling, the cycle time of the developed HIL simulation environment is shorter than in previous studies: based on the clock source of the FPGA development board, it is about 1 microsecond.
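For readers unfamiliar with what such a model computes each cycle, here is a floating-point software reference for a standard stationary-frame induction machine model, stepped with explicit Euler at the 1-microsecond cycle time the study reports. The machine parameters and supply voltage are assumed values, and the paper's FPGA pipeline may use a different formulation of the machine equations.

```python
# Stationary-frame induction machine: state = stator/rotor flux linkages
# (complex alpha + j*beta) and rotor speed, stepped with explicit Euler.
import cmath, math

# Assumed parameters (typical small machine, SI units).
Rs, Rr, Ls, Lr, Lm = 2.0, 2.0, 0.3, 0.3, 0.28
p, J, TL = 2, 0.01, 0.0              # pole pairs, inertia, load torque
sigma = 1.0 - Lm * Lm / (Ls * Lr)    # leakage coefficient
dt = 1e-6                            # 1 us, the reported HIL cycle time

psi_s = psi_r = 0j                   # flux linkages
w = 0.0                              # electrical rotor speed (rad/s)
for n in range(200_000):             # 0.2 s of simulated time
    vs = 311.0 * cmath.exp(1j * 2 * math.pi * 50 * n * dt)  # 50 Hz supply
    i_s = (psi_s - (Lm / Lr) * psi_r) / (sigma * Ls)        # currents
    i_r = (psi_r - (Lm / Ls) * psi_s) / (sigma * Lr)        #   from fluxes
    Te = 1.5 * p * (Lm / Lr) * (psi_r.conjugate() * i_s).imag
    psi_s += dt * (vs - Rs * i_s)                # stator voltage equation
    psi_r += dt * (-Rr * i_r + 1j * w * psi_r)   # rotor voltage equation
    w += dt * (p / J) * (Te - TL)                # mechanical equation

print(f"speed ~ {w / p:.1f} mech rad/s, torque ~ {Te:.2f} N*m after 0.2 s")
```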
ISBN (print): 9781479939541
In recent years, thermal management, which improves the reliability, performance, power leakage, etc. of modern microprocessors, has been the subject of numerous computer architecture and system software studies. Determining the detailed thermal distribution of a microprocessor is among the critical tasks for thermal management. However, because thermal modeling tools require considerable computation time and memory to simulate fine-grained thermal information, they may be unsuitable for dynamic thermal management and hardware implementation. This study proposes a novel model based on reduced resistance-capacitance (RC) networks for efficiently calculating the temperature of a microprocessor. The proposed model is compared with two existing thermal simulation tools, namely HotSpot [1] and Temptor [2]. The experimental studies show that the results generated using the proposed model differ from those of the existing tools by only 0.5 to 1.5%, while the proposed model runs 5 to 9 times faster than Temptor and 98 to 161 times faster than HotSpot. For memory usage, the proposed model consumes merely 0.45% of the space used by the existing tools.
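The core computation behind such a reduced-RC model fits in a few lines: node temperatures obey C·dT/dt = P − G·(T − T_amb), integrated here with forward Euler. The three-node floorplan, conductance values, and power numbers below are assumptions for illustration; the paper's contribution lies in how the full network is reduced, which this sketch does not reproduce.

```python
# Reduced RC thermal network: 3 nodes (core0, core1, cache). Off-diagonals of
# G couple adjacent blocks; diagonals also include each block's path to ambient.
import numpy as np

C = np.array([3e-3, 3e-3, 6e-3])      # thermal capacitances (J/K)
G = np.array([[ 2.5, -0.8, -0.5],     # thermal conductances (W/K)
              [-0.8,  2.5, -0.5],
              [-0.5, -0.5,  1.8]])
T_amb, dt = 45.0, 1e-4                # ambient (C), time step (s)

T = np.full(3, T_amb)                 # start at ambient
P = np.array([12.0, 2.0, 3.0])        # core0 running hot (W)
for _ in range(int(0.5 / dt)):        # 0.5 s of simulated time
    T += dt * (P - G @ (T - T_amb)) / C
print("steady temperatures (C):", np.round(T, 1))
```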
This work concerns a dedicated mixed-signal power system dynamic simulator. The equations that describe the behavior of a power system can be decoupled into a large linear system, which is handled by the analog part of the hardware, and a set of differential equations. The latter are solved using numerical integration algorithms implemented in dedicated pipelines on a field-programmable gate array (FPGA). This data path operates in a precision-starved environment, since it is synthesized using fixed-point arithmetic and relies on low-precision solutions coming from the analog linear solver. In this paper, the pipelined integration scheme is presented and different numerical integration algorithms are assessed based on their effect on the final results. It is concluded that in low-precision environments, higher-order integration algorithms should be preferred when the time step is large, since simpler algorithms result in unacceptable artifacts (extraneous instabilities).
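The conclusion can be reproduced in miniature: integrating an undamped oscillator (a proxy for a poorly damped power-system mode) with a large step in quantized arithmetic, forward Euler grows without bound while classical RK4 stays bounded. The quantization width, step size, and test system below are assumed and are not the paper's benchmark.

```python
# Compare forward Euler and RK4 on x'' = -w2*x with state rounded to a fixed
# grid, mimicking a fixed-point datapath.
import math

Q = 2.0 ** -12                       # quantize to 12 fractional bits
fx = lambda v: round(v / Q) * Q      # mimic fixed-point storage

def euler_step(x, v, h, w2):
    return fx(x + h * v), fx(v - h * w2 * x)

def rk4_step(x, v, h, w2):
    f = lambda x, v: (v, -w2 * x)    # x' = v, v' = -w2*x
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + h/2*k1x, v + h/2*k1v)
    k3x, k3v = f(x + h/2*k2x, v + h/2*k2v)
    k4x, k4v = f(x + h*k3x, v + h*k3v)
    return (fx(x + h/6*(k1x + 2*k2x + 2*k3x + k4x)),
            fx(v + h/6*(k1v + 2*k2v + 2*k3v + k4v)))

w2, h = (2 * math.pi) ** 2, 0.05     # 1 Hz mode, large 50 ms step
for name, step in [("euler", euler_step), ("rk4", rk4_step)]:
    x, v = 1.0, 0.0
    for _ in range(400):             # 20 s of simulated time
        x, v = step(x, v, h, w2)
    print(f"{name:>5}: |x| after 20 s = {abs(x):.3g} (exact: <= 1)")
```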
In this paper we propose a generic approach to statistically model the leakage variation of devices with steep sub-threshold slopes caused by random threshold-voltage variations. Monte Carlo simulation results based on our model show less than 11% error in 6σ leakage current estimation, compared to 65% error using the conventional square-root method. A design example based on the SRAM bit-line leakage issue is also presented to show the correctness of our model in a realistic circuit scenario. This general-purpose modeling technique could be a useful tool for estimating leakage in a variety of emerging device technologies.
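The underlying effect is easy to demonstrate generically: because sub-threshold leakage is exponential in the threshold voltage, Gaussian Vth variation yields a lognormal leakage distribution, and a normal mean-plus-k-sigma extrapolation misjudges the far tail. The device numbers (sub-threshold swing, nominal Vth, sigma) below are assumptions, and this sketch is not the paper's model.

```python
# Monte Carlo over Gaussian Vth; leakage I ~ 10^(-Vth/S) is lognormal, so a
# normal mean + k*sigma estimate underestimates the tail corner badly.
import numpy as np

rng = np.random.default_rng(1)
S, vth_nom, vth_sigma = 0.070, 0.30, 0.030   # 70 mV/dec swing, 30 mV sigma
vth = rng.normal(vth_nom, vth_sigma, 2_000_000)
I = 1e-6 * 10.0 ** (-vth / S)                # sub-threshold leakage (A)

k = 4.0                                      # tail point, in sigmas
# Tail leakage from the model itself, evaluated at the k-sigma Vth corner.
i_corner = 1e-6 * 10.0 ** (-(vth_nom - k * vth_sigma) / S)
# Naive normal extrapolation from the sampled mean and standard deviation.
i_normal = I.mean() + k * I.std()
print(f"corner leakage  : {i_corner:.3e} A")
print(f"normal estimate : {i_normal:.3e} A "
      f"({100 * (i_normal / i_corner - 1):+.0f}% error)")
```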
The real-time simulation (RTS) of an aerospace interceptor is an integrated approach to modeling and simulating the dynamic onboard subsystems by putting them in a hardware-in-the-loop simulation. To achieve the design, development, testing, and validation of such vehicles, each flight subsystem has to be modeled with great accuracy by studying and simulating the various target trajectories. Setting up a simulation test bench involves deciding on the key hardware (H/W) and software (S/W) architectures based on the sampling needs and time criticality. Choosing the distributed simulation approach clearly brings out the system behavior and allows the system performance to be evaluated in a real-time environment. Different methods and techniques applied in developing such a real-time simulation system are discussed in this paper.
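At the heart of any such test bench is a hard-framed timing loop. The sketch below shows that discipline only: each subsystem model is stepped once per fixed frame, the loop sleeps to the next frame boundary, and overruns are flagged. The 1 ms frame and the placeholder models are assumptions; a real bench distributes these models across nodes over a deterministic interconnect.

```python
# Fixed-step real-time loop with overrun detection.
import time

FRAME = 0.001                          # 1 ms major frame (assumed)

def airframe(t):  pass                 # placeholder subsystem models
def seeker(t):    pass
def autopilot(t): pass

subsystems = [airframe, seeker, autopilot]
next_deadline = time.perf_counter() + FRAME
overruns = 0
for frame in range(1000):              # 1 s of real time
    t = frame * FRAME
    for model in subsystems:
        model(t)                       # step every onboard model this frame
    slack = next_deadline - time.perf_counter()
    if slack < 0:
        overruns += 1                  # frame overrun: models too slow for RT
    else:
        time.sleep(slack)              # hold the hard frame boundary
    next_deadline += FRAME
print(f"{overruns} overruns in 1000 frames")
```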
The concept that scientists and engineers should be able to monitor and control simulations running on supercomputers has been discussed and implemented since the late 1980s. The recent explosion in the variety and capabilities of mobile devices allows this access to be taken to a new level, since simulations running on supercomputers can be accessed as users move around in their normal work activities. This means that users must be able to connect to and disconnect from the simulation running on the supercomputer at will, without disturbing its execution. We present a general framework for such "mobile supercomputing" built on Web services standards. To illustrate the potential of our method, we present a particular application of this framework that provides a rich mobile interface to a lattice-Boltzmann simulation of complex fluids running on an IBM Blue Gene/Q supercomputer.
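The interaction pattern (connect and disconnect at will, without disturbing the run) can be sketched with standard-library pieces: a background thread stands in for the simulation while an HTTP handler lets a client poll state or post a steering update at any time. This is a REST-flavored stand-in for illustration; the paper's framework is built on Web services standards and drives a real lattice-Boltzmann code.

```python
# Background "simulation" thread plus an HTTP endpoint for monitor/steer.
import json, threading, time
from http.server import BaseHTTPRequestHandler, HTTPServer

state = {"step": 0, "viscosity": 0.1}   # shared; "viscosity" is an assumed knob

def simulate():
    while True:                          # stands in for the simulation loop
        state["step"] += 1
        time.sleep(0.01)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):                    # monitor: client may poll any time
        body = json.dumps(state).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):                   # steer: update a parameter in flight
        n = int(self.headers["Content-Length"])
        state.update(json.loads(self.rfile.read(n)))
        self.send_response(204)
        self.end_headers()

threading.Thread(target=simulate, daemon=True).start()
HTTPServer(("", 8080), Handler).serve_forever()
```

A client could then poll with `curl localhost:8080` or steer with a POST body such as `{"viscosity": 0.05}`, attaching and detaching without ever pausing the simulation thread.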
Cache prediction for real-time systems in a preemptive scheduling context is still an open issue despite its practical importance. In this paper, we propose a modeling approach for taking the cache memory into account in real-time scheduling analysis. The goal is a simple but practical implementation for handling the cache memory with a real-time scheduling analyzer. The proposed contribution consists of three main parts: (1) modeling the targeted system with the Architecture Analysis and Design Language (AADL), (2) applying the cache analysis methods in a real-time scheduling analysis tool, and (3) performing scheduling simulation to assess schedulability. For this purpose, we present an extension of both the scheduling analysis tool Cheddar and the AADL modeling language in order to integrate the cache modeling and analysis methodology we propose. Experiments are presented to illustrate our propositions. Their results show examples of the timing impact of task preemption as well as the increase in the overall response times of the task set. This impact is significant, and the developed tool provides the means to assess it precisely.
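A compact version of the kind of analysis such a tool automates is the classic response-time iteration for fixed-priority tasks, with a cache-related preemption delay (CRPD) charged per preemption. The task set and per-preemption penalty below are assumed, and charging one full CRPD per higher-priority release is only one of several bounds from the literature, not necessarily the one implemented in Cheddar.

```python
# Response-time analysis: R_i = C_i + sum over higher-priority tasks j of
# ceil(R_i / T_j) * (C_j + gamma_j), iterated to a fixed point.
import math

# (period T, WCET C, CRPD gamma charged when this task preempts),
# listed highest priority first; deadlines assumed equal to periods.
tasks = [(10, 2, 0.3), (20, 4, 0.5), (50, 9, 0.0)]

def response_time(i):
    T_i, C_i, _ = tasks[i]
    R = C_i
    while True:
        R_new = C_i + sum(math.ceil(R / T_j) * (C_j + g_j)
                          for T_j, C_j, g_j in tasks[:i])
        if R_new == R:
            return R            # fixed point reached: worst-case response
        if R_new > T_i:
            return None         # deadline (= period) missed
        R = R_new

for i in range(len(tasks)):
    print(f"task {i}: worst-case response time = {response_time(i)}")
```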
Prototyping a distributed embedded system involves a collection of requirements covering many domains. System designers and developers need to describe both functional and non-functional requirements. Building distributed systems is a very tedious task, since the application has to be verifiable and analyzable. The Architecture Analysis and Design Language (AADL) provides adequate syntax and semantics to express and support distributed embedded systems. This paper studies a general methodology for prototyping distributed applications using the Precision Time Protocol (PTP), building and translating AADL systems into a distributed application that uses a network communication protocol. This allows systems specified in AADL to be simulated to fully assess system viability, and to refine and correct the behavior of the system using the BIP (Behavior Interaction Priority) toolset.
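The clock-correction arithmetic that a PTP-synchronized distributed application relies on fits in a few lines: from one Sync/Delay_Req exchange, the slave recovers its clock offset and the path delay, assuming a symmetric link. The timestamps below are made-up numbers for illustration.

```python
# PTP two-way exchange: offset and delay from four timestamps.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync; t2: slave receives it;
       t3: slave sends Delay_Req; t4: master receives it."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay  = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay (symmetric link)
    return offset, delay

# Example: slave clock runs 250 us ahead, true one-way delay is 40 us.
t1 = 1_000_000.0          # master clock (us)
t2 = t1 + 40 + 250        # arrival stamped by the (fast) slave clock
t3 = t2 + 10              # slave replies 10 us later (slave clock)
t4 = t3 - 250 + 40        # arrival stamped by the master clock
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(f"estimated offset = {offset} us, path delay = {delay} us")
```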