Many sophisticated formalisms exist for specifying complex system behaviors, but methods for specifying performance and dependability variables have remained quite primitive. To cope with this problem, modelers often must augment system models with extra state information and event types to support particular variables. This often leads to models that are nonintuitive and must be changed to support different variables. To address this problem, we extend the array of performance measures that may be derived from a given system model by developing new performance measure specification and model construction techniques. Specifically, we introduce a class of path-based reward variables and show how various performance measures may be specified using these variables. Path-based reward variables extend previous work on reward structures by allowing rewards to be accumulated based on sequences of states and transitions. To maintain the relevant history, we introduce the concept of a path automaton, whose state transitions are driven by the states and transitions of the system model. Furthermore, we present a new procedure for constructing state spaces and the associated transition rate matrices that support path-based reward variables. Our new procedure takes advantage of the path automaton to allow a single system model to serve as the basis of multiple performance measures that would otherwise require separate models or a single, more complicated model. (C) 1999 Published by Elsevier Science B.V. All rights reserved.
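The product construction the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the paper's actual algorithm: the toy three-state availability model, the two-state automaton, and all names are assumptions chosen for the example. The automaton advances on transition labels of the system model, and the reward is defined on pairs of (system state, automaton state), which lets it depend on history (here, "has a failure ever occurred?") that the base model alone cannot express.

```python
# Hypothetical system model: each state maps to a list of
# (action_label, next_state, rate) transitions.
system = {
    "up":   [("fail", "down", 0.1)],
    "down": [("repair", "up", 1.0), ("fail", "dead", 0.01)],
    "dead": [],
}

# Path automaton: advances on the labels of system transitions.
# It remembers whether a "fail" has ever occurred (states q0, q1).
def automaton_step(q, label):
    if q == "q0" and label == "fail":
        return "q1"
    return q

# Reward structure on the product: rate reward earned only when the
# system is up again *after* at least one failure -- a path-dependent
# measure, atomic on the product state space.
def rate_reward(s, q):
    return 1.0 if (s == "up" and q == "q1") else 0.0

def build_product(system, q0="q0"):
    """Explore the product state space and collect its rate transitions,
    starting from every system state paired with the initial automaton
    state q0."""
    states, transitions = set(), []
    frontier = [(s, q0) for s in system]
    while frontier:
        s, q = frontier.pop()
        if (s, q) in states:
            continue
        states.add((s, q))
        for label, s2, rate in system[s]:
            q2 = automaton_step(q, label)
            transitions.append(((s, q), (s2, q2), rate))
            frontier.append((s2, q2))
    return states, transitions

states, transitions = build_product(system)
```

For this toy model the product has six reachable states (three system states, each paired with at most two automaton states), and the resulting transition list can feed a standard transition-rate-matrix construction; changing the measure only means swapping in a different automaton and reward function, while the system model stays fixed.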
Today many formalisms exist for specifying complex Markov chains. In contrast, formalisms for specifying rewards, enabling the analysis of long-run average performance properties, have remained quite primitive. Basically, they only support the analysis of relatively simple performance metrics that can be expressed as long-run averages of atomic rewards, i.e. rewards that are derivable directly from the individual states of the initial Markov chain specification. To deal with complex performance metrics that depend on the accumulation of atomic rewards over sequences of states, the initial specification has to be extended explicitly to provide the required state information. To solve this problem, we introduce in this paper a new formalism of temporal rewards that allows complex quantitative properties to be expressed in terms of temporal reward formulas. Together, an initial (discrete-time) Markov chain and the temporal reward formulas implicitly define an extended Markov chain that allows the quantitative property to be determined by traditional techniques for computing long-run averages. A method to construct the extended chain is given, and it is proved that this method leaves long-run averages invariant for atomic rewards. We further establish conditions that guarantee the preservation of ergodicity. The construction method can build the extended chain in an on-the-fly manner, allowing for efficient simulation. (C) 2002 Elsevier Science B.V. All rights reserved.
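The core idea of the extended-chain construction can be illustrated with a minimal example. This sketch is my own, under assumed numbers and a one-step-memory reward, not the paper's construction: a reward that depends on a *pair* of consecutive states is not atomic on the base DTMC, but becomes atomic once the chain is extended with one step of memory, after which an ordinary long-run average computation applies. The example also checks the invariance claim numerically: marginalizing the extended chain's stationary distribution recovers the base chain's.

```python
import numpy as np

# Base DTMC on states {0, 1} with transition matrix P (illustrative values).
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])

# Sequence-dependent reward: 1 unit whenever the chain moves from 0 to 1.
# This is not expressible as a function of a single base-chain state.
def pair_reward(prev, cur):
    return 1.0 if (prev, cur) == (0, 1) else 0.0

# Extended chain: states are (prev, cur) pairs; (p, c) -> (c, n) with
# probability P[c, n]. On these states the reward is atomic.
ext_states = [(p, c) for p in range(2) for c in range(2)]
n = len(ext_states)
P_ext = np.zeros((n, n))
for i, (p, c) in enumerate(ext_states):
    for j, (c2, nxt) in enumerate(ext_states):
        if c2 == c:
            P_ext[i, j] = P[c, nxt]

def stationary(M, iters=10_000):
    """Stationary distribution by power iteration (both chains here
    are ergodic, so iteration from the uniform distribution converges)."""
    pi = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(iters):
        pi = pi @ M
    return pi

pi = stationary(P)
pi_ext = stationary(P_ext)

# Long-run average reward, computed atomically on the extended chain...
avg_ext = sum(pi_ext[i] * pair_reward(p, c)
              for i, (p, c) in enumerate(ext_states))
# ...agrees with the direct base-chain computation pi(0) * P[0, 1].
avg_base = pi[0] * P[0, 1]
```

The same (prev, cur) states can equally be generated lazily during a simulation run, since each extended state and its successors depend only on the current base state, which is what makes an on-the-fly construction natural.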