Operating industrial plants at maximum energy and resource efficiency often means operating them close to their design limits. Due to strict safety and process-parameter constraints, it is challenging to control them close to the optimal setpoints. Currently, process-performance improvement projects are realized by applying PC-based model predictive controllers as add-ons. State-of-the-art optimizing control algorithms place high demands on software architecture, numerical computing power, and real-time behavior. These requirements cannot be met when running optimizing control algorithms on Programmable Logic Controllers (PLCs) with currently available software architectures. Therefore, the aim of the Embedded Energy Efficiency Industrial Controller Platform (E3ICP) project is to develop an infrastructure for running state-of-the-art model predictive controllers on industrial embedded PLCs.
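The kind of optimizing control the project targets can be sketched in miniature. The following Python snippet is a hypothetical illustration (names, plant parameters, and the grid-search "solver" are assumptions, not the E3ICP implementation): a receding-horizon controller that, like an MPC on a resource-limited PLC, optimizes the next input over a short prediction horizon and re-plans at every step.

```python
# Minimal receding-horizon controller sketch for a scalar plant x[k+1] = a*x[k] + b*u[k].
# A coarse grid search stands in for the QP solver an embedded MPC would use.

def simulate(a, b, x0, r, horizon, u_candidates, steps):
    x = x0
    traj = [x]
    for _ in range(steps):
        best_u, best_cost = None, float("inf")
        for u in u_candidates:              # cheap enumeration over admissible inputs
            xf, cost = x, 0.0
            for _ in range(horizon):        # predict with a constant input (move blocking)
                xf = a * xf + b * u
                cost += (xf - r) ** 2 + 0.01 * u ** 2   # tracking error + input penalty
            if cost < best_cost:
                best_cost, best_u = cost, u
        x = a * x + b * best_u              # apply only the first move, then re-optimize
        traj.append(x)
    return traj
```

Even this toy version shows why real-time behavior matters: the inner optimization must finish within one sampling period, every period.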
Timed testing, i.e. a method of testing where timing plays a crucial role in the test verdict of the test cases, is an important quality-assurance strategy in the development of embedded systems. Tool support is essential for timed testing, as timed test cases cannot be executed manually with the required precision. In case studies, we found that existing timed testing tools do not fulfill all requirements of real-world timed testing: they lack both features and responsiveness. Therefore, we have started the implementation of a timed testing tool called TripleT. TripleT is based on the notion of timed ioco as its conformance relation and builds on algorithms already present in TorX. However, we have improved the responsiveness and execution time of our test cases, two major points when it comes to timeliness in timed testing. This has been achieved by using look-ahead and by parallelizing the test-case execution of TripleT. In this paper we report on our approach and the results achieved.
This paper presents a methodology for designing guidance and control laws for a four-stage launch vehicle. Designing guidance and control systems for nonlinear dynamic systems is an arduous task, and various approaches have been attempted in the past to address coupled system dynamics and nonlinearities. In this paper, the guidance and control approach is based exclusively on a fuzzy rule-based mechanism. Since fuzzy logic closely resembles human decision making, this strategy enhances the robustness and reliability of the guidance and control mechanism, meets the flight objectives, and manages vehicle energy. For the inner loop, a fuzzy rule-based autopilot is designed that precisely follows the reference pitch-attitude profile. The reference pitch profile is constantly reshaped online by a fuzzy rule-based guidance scheme to achieve the desired altitude. Since reshaping the attitude has little influence on velocity, an engine shutoff mechanism is implemented, triggered by the magnitude of the semi-major axis during the third and final stage, to attain the required orbital velocity. To design the optimal fuzzy rule-based control and guidance algorithm, the distribution of the membership functions of the fuzzy inputs and outputs is obtained by solving a constrained optimization problem using Genetic Algorithms (GA). To obtain nominal trajectory profiles for the proposed scheme, offline trajectory optimization is performed first. To acquire the optimal angle-of-attack (AOA) profile, the optimization problem is solved using a Genetic Algorithm. To analyze the flight path of the vehicle, a point-mass trajectory model is developed; constant thrust and mass flow rate are assumed, while the aerodynamic coefficients are calculated by DATCOM. For performance evaluation and validation of the proposed guidance and control algorithm, six-degree-of-freedom software is developed and simulated in SIMULINK. Numerous simulations are conducted to test the proposed scheme under a variety of disturbances and modeling uncertainties.
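The mechanics of a fuzzy rule-based control law of this kind can be illustrated with a minimal sketch. The membership functions, rule base, and variable names below are assumptions for illustration only, not the paper's GA-optimized design: three triangular memberships fire over a pitch error and a weighted-average defuzzification yields the command.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pitch_command(error):
    # Each rule: (firing strength over the pitch error, crisp consequent command).
    rules = [
        (tri(error, -2.0, -1.0, 0.0), -1.0),  # error negative -> command down
        (tri(error, -1.0,  0.0, 1.0),  0.0),  # error near zero -> hold
        (tri(error,  0.0,  1.0, 2.0),  1.0),  # error positive -> command up
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, u in rules)
    return num / den if den else 0.0       # weighted-average defuzzification
```

In the paper's scheme, the corner points of such memberships are the decision variables that the GA tunes against the constrained optimization problem.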
High availability, reliability, and scalability are basic prerequisites for cloud applications. Due to dynamically varying workloads, it is necessary to provide resource guarantees to cloud applications to meet QoS requirements. However, it is not trivial to generate a precise scalability policy for multi-tier cloud applications that adapts to dynamically varying workloads and simultaneously satisfies QoS requirements with minimal resource consumption. In this paper, we present an online scalability controller for multi-tier applications that handles unpredictably changing workloads. The solution is based on a layered queuing network model combined with our dynamic resource-allocation techniques. Our performance model relates application performance to resource requirements and dynamically varying workloads. We evaluate our approach via experiments with real-world workloads in different scenarios. The results indicate that our performance model faithfully captures the behavior of multi-tier applications over a wide range of workloads and configuration schemes. Moreover, our techniques judiciously obtain an optimized configuration scheme with modest computation.
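As a toy illustration of deriving a scaling decision from a queuing model (a drastic simplification of the layered queuing network the paper uses, with assumed function names and parameters), one can treat a single tier as n parallel M/M/1 queues and pick the smallest replica count whose modeled mean response time meets the SLA:

```python
def min_servers(arrival_rate, service_rate, sla_latency):
    """Smallest replica count n such that the modeled mean response time of a
    tier, treated as n parallel M/M/1 queues, stays within the SLA."""
    n = 1
    while True:
        per_server = arrival_rate / n
        if per_server < service_rate:                  # queue must be stable
            t = 1.0 / (service_rate - per_server)      # M/M/1 mean response time
            if t <= sla_latency:
                return n
        n += 1
```

A real controller would chain such models across tiers and re-solve them online as the measured arrival rate drifts.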
Energy efficiency is a major concern in modern high-performance computing. Still, few studies provide deep insight into the power consumption of scientific applications. Especially for algorithms running on hybrid platforms equipped with hardware accelerators, such as graphics processors, a detailed energy analysis is essential to identify the most costly parts and to evaluate possible improvement strategies. In this paper we analyze the computational and power performance of iterative linear solvers applied to sparse systems arising in several scientific applications. We also study the gains yielded by dynamic voltage/frequency scaling (DVFS), and illustrate that this technique alone cannot reduce the energy cost of iterative linear solvers by a considerable amount. We then apply techniques that set the (multi-core processor in the) host system to a low-consuming state for the time that the GPU is executing. Our experiments conclusively reveal how the combination of these two techniques delivers a notable reduction in energy consumption without a noticeable impact on computational performance.
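The accounting behind such comparisons is simple: energy is power integrated over time, so lowering the host's power draw while the GPU computes directly cuts the host's share of the bill. A sketch with purely hypothetical power figures (not measurements from the paper):

```python
def energy_joules(phases):
    """Total energy for a list of (power_watts, duration_seconds) phases."""
    return sum(p * t for p, t in phases)

# Assumed numbers for illustration: the host CPU waits 10 s for a GPU solver,
# either busy-waiting at 95 W, DVFS-throttled at 70 W, or in a 25 W idle state.
busy_wait = energy_joules([(95.0, 10.0)])
dvfs_only = energy_joules([(70.0, 10.0)])
idle_wait = energy_joules([(25.0, 10.0)])
```

The example mirrors the paper's finding qualitatively: DVFS alone trims the host's energy, but parking the host in a low-consuming state during GPU execution saves far more.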
Practical face recognition systems are sometimes confronted with low-resolution (LR) images. Most existing feature-extraction algorithms aim to preserve the relational structure among objects of the input space in a linear embedding space. However, there is a consensus that such complex visual learning tasks are better solved by adopting multiple descriptors to characterize the data more precisely and improve performance. In this paper, we address the problem of matching LR and high-resolution images, which is difficult for conventional methods in practice due to the lack of an efficient similarity measure, and propose a multiple kernel criterion (MKC) for LR face recognition without any super-resolution (SR) preprocessing. Different image descriptors, including RsL2, LBP, Gradientface, and IMED, are considered as the multiple kernel generators, and the Gaussian function is exploited as the distance-induced kernel. MKC solves this problem by minimizing the inconsistency between the similarities captured by the multiple kernels, and the nonlinear objective function can be minimized alternately by a constrained eigenvalue decomposition. Experiments on benchmark databases show that our MKC method indeed improves recognition performance.
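The building block of such a scheme, a distance-induced Gaussian kernel per descriptor, can be sketched as follows. This is a simplified illustration with fixed, assumed weights; the MKC itself learns the combination by minimizing inter-kernel inconsistency via constrained eigenvalue decomposition.

```python
import math

def gaussian_kernel(distance, sigma):
    """Distance-induced Gaussian kernel: similarity decays with distance."""
    return math.exp(-distance ** 2 / (2 * sigma ** 2))

def combined_similarity(distances, weights, sigma=1.0):
    """Weighted combination of per-descriptor kernels (e.g. one distance each
    from LBP, Gradientface, ...), normalized by the total weight."""
    assert len(distances) == len(weights)
    total = sum(weights)
    return sum(w * gaussian_kernel(d, sigma)
               for d, w in zip(distances, weights)) / total
```

Identical images (all distances zero) score 1.0, and a face pair that is closer under every descriptor scores higher than a farther pair, which is the monotonicity a recognition criterion needs.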
This study presents a speed-controller design for a switched reluctance (SR) motor that achieves minimum torque ripple and high control performance. First, an SR motor converter designed for soft chopping is chosen. Besides producing less torque ripple, this converter provides more degrees of freedom for the SR motor drive controller. A PID controller and a switching algorithm for the turn-on and turn-off angles of each motor phase form the speed-control loop of the SR motor drive. The initial controller parameters are obtained by trial and error, but an optimization problem capturing the goals and constraints at different set points is then defined, and its parameters are optimized with a Genetic Algorithm (GA). This algorithm optimizes the turn-on and turn-off angles of each phase, the parameters of the PID controller in the transient state, and the parameters of the PID controller used to reduce torque ripple in the steady state. The GA also simultaneously obtains the optimum parameters of three nonlinear gains used for fuzzy switching between the two PID controllers. The proposed control algorithm was simulated in the MATLAB/Simulink software package on an 8/6 SRM application example to validate the performance of the designed algorithm.
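The GA-based tuning loop can be sketched on a toy problem: a small evolutionary search over (Kp, Ki, Kd) that minimizes the integrated absolute error of a PID loop on a first-order plant standing in for the SR-motor speed loop. Plant model, GA settings, and all names here are assumptions for illustration, not the paper's setup.

```python
import random

def step_cost(kp, ki, kd, steps=200, dt=0.01):
    """Integral of absolute error for a PID loop on dx/dt = -x + u, setpoint 1."""
    x, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        e = 1.0 - x
        if abs(e) > 1e6:
            return 1e9                      # unstable gains -> huge penalty
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        x += (-x + u) * dt                  # explicit Euler step of the plant
        cost += abs(e) * dt
    return cost

def ga_tune(pop_size=20, gens=30, seed=1):
    """Tiny GA: elitist selection, averaging crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 10.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: step_cost(*g))
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            child = [(u + v) / 2 + rng.gauss(0.0, 0.3) for u, v in zip(a, b)]
            children.append([max(0.0, g) for g in child])   # keep gains non-negative
        pop = elite + children
    return min(pop, key=lambda g: step_cost(*g))
```

The paper's GA searches a much larger vector (switching angles, two PID parameter sets, and the fuzzy-switching gains), but the select-recombine-mutate loop is the same.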
It is well known that the main drawback of distributed applications that require high performance is the data transfer speed between system nodes. High-speed networks are never enough: the application has to employ special techniques and algorithms for optimizing data availability. This is increasingly needed for some categories of distributed applications, such as computational steering applications, which must permanently allow users to interactively monitor and control the progress of their applications. Following our previous research, which focused on the development of a set of frameworks and platforms for distributed simulation and computational steering, we introduce in this paper a new model for distributed file systems, supporting data steering, that provides optimization for data acquisition, data output, and load balancing while reducing development effort and improving the scalability and flexibility of the system. Data partitioning is performed at a logical level, allowing multimedia applications to define custom data chunks such as frames of a video, phrases of text, regions of an image, etc.
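Logical-level partitioning plus load balancing can be illustrated in a few lines (hypothetical helper names; the paper's model is considerably richer): split a data stream into application-defined chunks, then spread the chunks across storage nodes.

```python
def partition(items, chunk_size):
    """Cut a sequence into logical chunks (e.g. N video frames per chunk),
    independent of any on-disk block layout."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def assign_round_robin(chunks, nodes):
    """Spread logical chunks across nodes for simple load balancing."""
    placement = {n: [] for n in nodes}
    for i, chunk in enumerate(chunks):
        placement[nodes[i % len(nodes)]].append(chunk)
    return placement
```

Because the chunk boundary is defined by the application (a frame, a phrase, a region), a steering client can fetch exactly the unit it wants to monitor without reading neighboring data.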
We consider an implementation of the recursive multilevel trust-region algorithm proposed by Gratton et al. (A recursive trust-region method in infinity norm for bound-constrained nonlinear optimization, IMA J. Numer. Anal. 28(4) (2008), pp. 827-861) for bound-constrained nonlinear problems, and provide numerical experience on multilevel test problems. A suitable choice of the algorithm's parameters is identified on these problems, yielding a satisfactory compromise between reliability and efficiency. The resulting default algorithm is then compared with alternative optimization techniques such as mesh refinement and direct solution of the fine-level problem. It is also shown that its behaviour is similar to that of multigrid algorithms for linear systems.
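A single bound-constrained, infinity-norm trust-region step can be sketched as follows: a much-simplified, single-level, coordinate-wise cousin of the recursive multilevel method, with hypothetical names and a diagonal Hessian model.

```python
def tr_step(x, grad, hess_diag, radius, lower, upper):
    """One model-minimizing step clipped to an infinity-norm trust region
    and projected onto the bound constraints (coordinate-wise sketch)."""
    new = []
    for xi, gi, h, lo, up in zip(x, grad, hess_diag, lower, upper):
        step = -gi / h if h > 0 else -gi           # per-coordinate Newton step
        step = max(-radius, min(radius, step))     # infinity-norm trust region
        new.append(max(lo, min(up, xi + step)))    # project onto [lo, up]
    return new
```

The infinity norm makes the trust region a box, so intersecting it with the bound constraints stays a box; this is precisely why the norm choice is convenient for bound-constrained problems. The recursive method additionally minimizes the model on coarser levels instead of coordinate-wise.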
In this paper, we describe a fast algorithmic trading research and simulation system (called ATRSS) that can trade various types of Chinese-market securities and stock index futures, and supports all aspects of trading, such as obtaining market prices, making trading decisions, placing orders, monitoring order executions, and controlling risk, with fast speed and high reliability. The architecture of ATRSS is based on an industry-level message bus and an event-driven software framework, and is compliant with the practical infrastructure of the Chinese securities market. The features of ATRSS include strategy back-test optimization, matching simulation for different equity exchanges, and a strategy interface and framework for developing algorithmic trading strategies. The design and implementation of ATRSS concern a research and simulation trading system, intended to provide both a platform for future experiments with real-time trading system architectures and modules, and a framework for research on trading strategies and algorithms.