An extension of the conventional object structuring approach, called the TMO structuring scheme, has been formulated as a unified scheme for object-oriented structuring of both real-time and non-real-time applications while enabling the system designer to provide design-time guarantees of the timely service capabilities of the objects designed. In another area, the DRB/PSP scheme has been established as a concrete scheme for achieving scalable, time-bounded fault tolerance against both software and hardware faults in distributed and parallel computer systems. A recent integration of the TMO structuring scheme and the basic principle of the DRB/PSP scheme is the primary-shadow TMO replication (PSTR) scheme. The TMO scheme and the PSTR scheme show good potential for realizing a quantum jump in design productivity and system reliability in the field of real-time distributed computing applications. We first present a modular implementation model of the PSTR scheme that can be incorporated into most commercial real-time operating systems. This modular implementation model is amenable to a rigorous analysis of the recovery time bounds, a measure of great importance in complex systems. In addition, a new style of testing both control algorithms and time-bounded fault tolerance protocols by use of real-time simulation components is presented. The implementation model and the testing approach have been validated by a non-trivial experiment.
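A minimal Python sketch of the primary-shadow idea behind PSTR, under assumed names (ReplicaNode, execute_with_shadow, and the deadline value are illustrative, not the paper's implementation model): a shadow replica of the same time-bounded computation takes over when the primary fails its acceptance test or misses its deadline.

```python
import time

class ReplicaNode:
    """One replica of a time-bounded method execution (illustrative only)."""
    def __init__(self, name, compute, acceptance_test):
        self.name = name
        self.compute = compute                   # the method body
        self.acceptance_test = acceptance_test   # validates the result

    def run(self, request):
        result = self.compute(request)
        return result if self.acceptance_test(result) else None


def execute_with_shadow(primary, shadow, request, deadline_s):
    """The primary executes first; the shadow's result is used only if the
    primary fails its acceptance test or exceeds the deadline."""
    start = time.monotonic()
    result = primary.run(request)
    if result is not None and (time.monotonic() - start) <= deadline_s:
        return result, primary.name
    # Bounded-time recovery: fall back to the shadow replica.
    return shadow.run(request), shadow.name


if __name__ == "__main__":
    primary = ReplicaNode("primary", lambda x: x * 2, lambda r: r >= 0)
    shadow = ReplicaNode("shadow", lambda x: x + x, lambda r: r >= 0)
    print(execute_with_shadow(primary, shadow, 21, deadline_s=0.005))
```

The sequential fallback above is only for illustration; in the PSTR scheme the shadow runs concurrently on a separate node so that the recovery time can be bounded.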
The present state of communication networks with respect to speed and reliability, and the recent growth of distributed applications, have created a need for a global enterprise solution to the legality checking and attribute evaluation requirement. Traditionally, mainframe systems provided the cohesion of all processes with respect to the company regulations. When decentralized systems and applications became widely used, the legality checking mechanism lost its central role and became a necessary component of every decentralized system. In this paper a methodology for reconnecting these systems with respect to their legality checking and attribute evaluation needs is presented. A generic Legality Checking system has been developed and integrated with scheduling systems of the airline domain. It is shown that the client-server model adopted can restore, in a flexible manner, the lost homogeneity of the central legacy systems.
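As an illustration of this client-server style of legality checking (the names LegalityServer and Rule, and the airline rules shown, are hypothetical, not the paper's system), a generic checker can expose a single evaluation entry point that decentralized scheduling clients call with their request attributes:

```python
class Rule:
    """A single company regulation: a predicate over request attributes."""
    def __init__(self, rule_id, predicate, message):
        self.rule_id = rule_id
        self.predicate = predicate
        self.message = message


class LegalityServer:
    """Central, generic legality-checking service (illustrative sketch)."""
    def __init__(self, rules):
        self.rules = rules

    def check(self, attributes):
        """Return the list of violated rules for a client request."""
        return [(r.rule_id, r.message)
                for r in self.rules if not r.predicate(attributes)]


if __name__ == "__main__":
    rules = [Rule("MAX_DUTY", lambda a: a["duty_hours"] <= 10,
                  "Duty period exceeds 10 hours"),
             Rule("MIN_REST", lambda a: a["rest_hours"] >= 12,
                  "Rest period below 12 hours")]
    server = LegalityServer(rules)
    print(server.check({"duty_hours": 11, "rest_hours": 14}))
```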
This paper discusses a performance and reliability optimization approach for distributed systems under a given budget constraint using the genetic algorithm (GA). The overall effectiveness of a distributed system is measured in terms of the average network throughput. This measure captures performance and reliability from a network-connectivity point of view. To carry out this optimization, a distributed genetic algorithm (DGA) based scheme is developed. To demonstrate the effectiveness of the proposed approach, the results obtained from the distributed genetic algorithm are compared with those of a single-machine genetic algorithm and with optimal solutions computed using exhaustive search. Moreover, a brief discussion of the speedup over the single-machine implementation is also included.
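A minimal sketch of a GA for this kind of budget-constrained selection problem, assuming a simple bit-string encoding in which each gene chooses whether a candidate link is included; the link data, the fitness form, and all names are illustrative, not the paper's formulation.

```python
import random

# Candidate links: (cost, estimated throughput contribution) -- illustrative data.
LINKS = [(4, 10), (6, 14), (3, 7), (8, 20), (5, 9), (7, 15)]
BUDGET = 20

def fitness(chromosome):
    """Total throughput of the selected links, zeroed if the budget is exceeded."""
    cost = sum(c for bit, (c, _) in zip(chromosome, LINKS) if bit)
    throughput = sum(t for bit, (_, t) in zip(chromosome, LINKS) if bit)
    return throughput if cost <= BUDGET else 0

def evolve(pop_size=30, generations=100, mutation=0.05):
    """Single-population GA loop; in a distributed GA each island would run
    this loop and periodically exchange its best individuals with the others."""
    pop = [[random.randint(0, 1) for _ in LINKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(LINKS))    # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < mutation else g
                             for g in child])        # bit-flip mutation
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("selection:", best, "throughput:", fitness(best))
```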
Modern-day computing system design and development is characterized by increasing system complexity and ever-shortening time to market. For modeling techniques to be deployed successfully, they must conveniently deal with complex system models, and must be quick and easy to use by non-specialists. In this paper we introduce "action models", a modeling formalism that tries to achieve the above goals for reliability evaluation of fault-tolerant distributed computing systems, including both software and hardware in the analysis. The metric of interest in action models is the job success probability, and we will argue why the traditional availability metric is insufficient for the evaluation of fault-tolerant distributed systems. We formally specify action models, and introduce path-based solution algorithms to deal with the potential solution complexity of the created models. In addition, we show several examples of action models, and use a preliminary tool implementation to obtain reliability results for a reliable clustered computing platform.
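To make the job success probability metric concrete, here is a small sketch (the component names, reliabilities, and paths are invented) that computes the probability that at least one execution path of a job succeeds, assuming independent component failures and enumerating paths by inclusion-exclusion rather than using the paper's path-based algorithms:

```python
from itertools import combinations

# Reliability of each component a job may depend on (illustrative values).
RELIABILITY = {"client": 0.999, "net": 0.995, "server_a": 0.98, "server_b": 0.97}

# Each path is the set of components that must all work for the job to succeed.
PATHS = [{"client", "net", "server_a"},
         {"client", "net", "server_b"}]

def path_prob(components):
    """Probability that every component in the set works (independence assumed)."""
    p = 1.0
    for c in components:
        p *= RELIABILITY[c]
    return p

def job_success_probability(paths):
    """Inclusion-exclusion over paths: P(at least one path works)."""
    total = 0.0
    for k in range(1, len(paths) + 1):
        for subset in combinations(paths, k):
            union = set().union(*subset)
            total += (-1) ** (k + 1) * path_prob(union)
    return total

if __name__ == "__main__":
    print(round(job_success_probability(PATHS), 6))
```

Brute-force enumeration like this grows exponentially with the number of paths, which is exactly the solution-complexity problem the paper's path-based algorithms are meant to address.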
MEADEP (measure dependability) is a user-friendly dependability evaluation tool for measurement-based analysis of computing systems, including both hardware and software. Features of MEADEP are: a data processor for converting data in various formats (records with a number of fields stored in a commercial database format) to the MEADEP format, a statistical analysis module for graphical data presentation and parameter estimation, a graphical modeling interface for constructing reliability block and Markov diagrams, and a model solution module for availability/reliability calculation with graphical parametric analysis. Use of the tool on failure data from measurements can provide quantitative assessments of dependability for critical systems, while greatly reducing the requirements for specialized skills in data processing, analysis, and modeling on the part of the user. MEADEP has been applied to evaluate dependability for several air traffic control (ATC) systems, and results produced by MEADEP have provided valuable feedback to the program management of these critical systems.
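As a small, self-contained example of the kind of model such a tool solves (the two-state model, the rates, and the block availabilities below are assumed illustrations, not MEADEP output), steady-state availability of a repairable component follows directly from its failure rate λ and repair rate μ, and series reliability-block availabilities multiply:

```python
# Two-state Markov model: UP --lambda--> DOWN, DOWN --mu--> UP.
# Steady-state availability A = mu / (lambda + mu) = MTTF / (MTTF + MTTR).

failure_rate = 1 / 1000.0   # lambda: one failure per 1000 hours (assumed)
repair_rate = 1 / 4.0       # mu: four-hour mean time to repair (assumed)

availability = repair_rate / (failure_rate + repair_rate)
print(f"Steady-state availability: {availability:.5f}")

# For a series reliability block diagram, block availabilities multiply.
blocks = [availability, 0.9995, 0.9999]   # assumed per-block availabilities
series_availability = 1.0
for a in blocks:
    series_availability *= a
print(f"Series system availability: {series_availability:.5f}")
```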
There are at least two major security challenges in mobile-code distributed systems. First, hosts have to be protected from potentially malicious actions of the mobile code they are executing. Many techniques are known which address this problem. Second, mobile agents themselves have to be protected against malicious hosts. Satisfactory ways to deal with the second problem are necessary features for many of the (security-sensitive) tasks mobile agent (MA) systems are envisioned for; for example, in e-commerce applications where a shopping agent is sent out on the Internet, vendors have a strong incentive to tamper with these agents. The key point is that the originator of the agent needs sound guarantees that the results the agent brings back are correct. Why else should he send out the agent in the first place?
Online analytical processing (OLAP) techniques are used for data analysis and decision support systems. The multidimensionality of the underlying data is well represented by multidimensional databases. OLAP calculations can be used effectively for data mining in knowledge discovery. For these calculations, high-performance parallel systems are required to provide interactive analysis. Precomputed aggregate calculations in a data cube can provide efficient query processing for OLAP applications. We present parallel data cube construction on distributed-memory parallel computers from a relational database. The data cube is used for data mining of associations using attribute focusing. Results for these applications are presented on the IBM-SP2; they show that our algorithms and techniques scale to a large number of processors, providing a high-performance platform for such applications.
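To illustrate what a precomputed data cube provides (the fact table, dimensions, and measure below are invented), the sketch materializes every group-by of a small relation on a single machine; the paper's contribution is constructing such cubes in parallel on distributed-memory machines.

```python
from itertools import combinations
from collections import defaultdict

# A tiny fact table: (product, region, quarter) dimensions with a sales measure.
ROWS = [("pen", "east", "Q1", 10), ("pen", "west", "Q1", 7),
        ("ink", "east", "Q2", 5), ("ink", "west", "Q1", 8)]
DIMS = ("product", "region", "quarter")

def data_cube(rows, dims):
    """Aggregate the measure over every subset of the dimensions (2^d group-bys)."""
    cube = {}
    for k in range(len(dims) + 1):
        for group in combinations(range(len(dims)), k):
            agg = defaultdict(int)
            for row in rows:
                key = tuple(row[i] for i in group)
                agg[key] += row[-1]           # last column is the measure
            cube[tuple(dims[i] for i in group)] = dict(agg)
    return cube

if __name__ == "__main__":
    cube = data_cube(ROWS, DIMS)
    print(cube[("region",)])   # sales by region
    print(cube[()])            # grand total
```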
This paper outlines a human-centered virtual machine of problem solving agents, intelligent agents, software agents and objects. It deals with issues related to high assurance (e.g., reliability, availability, real-time and others) through the design of a human-centered system architecture in which technology is a primitive. The human-centered virtual machine is based on a number of human-centered perspectives, including the distributed cognition approach. It has been applied to complex, data-intensive, time-critical problems such as real-time alarm processing and fault diagnosis, air combat simulation, and business (decision support).
The confluence of computers, communications, and databases is quickly creating a global virtual database where many applications require real-time access to both temporally accurate and multimedia data. This is particularly true in military and intelligence applications, but these features are needed in many commercial applications as well. We are developing a distributed database, called BeeHive, which could offer features along different types of requirements: real time, fault tolerance, security, and quality of service for audio and video. Support of these features, and of the potential trade-offs between them, could provide a significant improvement in performance and functionality over current distributed database and object management systems. We present a high-level design for the BeeHive architecture and sketch the design of the BeeHive Object Model (BOM), which extends object-oriented data models by incorporating time and other features into objects.
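As a rough illustration of incorporating time into an object model, in the spirit of what the abstract attributes to BOM (the class and field names below are invented, not BeeHive's API), each attribute can carry an absolute validity interval so a real-time transaction can check temporal accuracy before using the value:

```python
import time
from dataclasses import dataclass

@dataclass
class TemporalAttribute:
    """A data value tagged with the interval during which it is temporally valid."""
    value: object
    recorded_at: float          # wall-clock time of the sensor reading
    absolute_validity: float    # seconds the value stays accurate

    def is_fresh(self, now=None):
        now = time.time() if now is None else now
        return now - self.recorded_at <= self.absolute_validity

class TrackedObject:
    """An object whose attributes are temporal; stale reads raise an error."""
    def __init__(self):
        self._attrs = {}

    def write(self, name, value, validity_s):
        self._attrs[name] = TemporalAttribute(value, time.time(), validity_s)

    def read(self, name):
        attr = self._attrs[name]
        if not attr.is_fresh():
            raise ValueError(f"{name} is temporally stale")
        return attr.value

if __name__ == "__main__":
    target = TrackedObject()
    target.write("position", (12.0, 7.5), validity_s=0.5)
    print(target.read("position"))
```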
NTT Software Labs is producing a distributed, self-configuring information navigation infrastructure designed to scale to global proportions. For reasons of large scale, unreliability (of the Internet, its connected computers, and the implementations), and the complete autonomy of the participants, a number of difficult database and cache consistency problems arise that are not solved by techniques commonly used either for the Internet (i.e., DNS) or for existing distributed database systems. This paper describes a set of strategies designed to solve these problems. In particular, it focuses on the use of third-party detection and notification of database and cache inconsistency.
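The sketch below illustrates the third-party detection-and-notification idea in its simplest form (all class names and the version-number scheme are invented): a detector compares what a cache holds against the authoritative copy and notifies the cache of the inconsistency, instead of requiring caches to poll the origin.

```python
class OriginDatabase:
    """Authoritative store: the current version of each item."""
    def __init__(self):
        self.versions = {}          # key -> version number

    def publish(self, key, version):
        self.versions[key] = version


class Cache:
    """An autonomous cache that may silently fall out of date."""
    def __init__(self, name):
        self.name = name
        self.entries = {}           # key -> cached version

    def store(self, key, version):
        self.entries[key] = version

    def invalidate(self, key):
        self.entries.pop(key, None)
        print(f"{self.name}: invalidated {key}")


class ThirdPartyDetector:
    """Observes cached entries and compares them with the origin."""
    def __init__(self, origin):
        self.origin = origin

    def observe(self, cache, key):
        cached = cache.entries.get(key)
        current = self.origin.versions.get(key)
        if cached is not None and cached != current:
            cache.invalidate(key)   # notification of the inconsistency


if __name__ == "__main__":
    origin, cache = OriginDatabase(), Cache("edge-1")
    origin.publish("/doc", 1)
    cache.store("/doc", 1)
    origin.publish("/doc", 2)                 # origin moves ahead of the cache
    ThirdPartyDetector(origin).observe(cache, "/doc")
```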