LH* is a scalable distributed data structure that extends linear hashing to support file manipulations in a distributed environment. The purpose of the paper is to investigate the behavior of concurrent transactions in the context of LH*. We present an algorithm to synchronize concurrent transactions in LH*. The algorithm exploits the semantics of LH* and verifies the values of the addressing parameters during two consecutive reads to detect any harmful interference. After an operation completes its manipulation, it still holds the key lock until the transaction commits or aborts. However, simply moving the lock information along with the relocated keys cannot ensure correctness during a split. A locking protocol is therefore proposed to resolve the inconsistency. Furthermore, a causal relationship is established by associating a timestamp with each range query, eliminating the need for atomic broadcast.
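As a rough illustration of the addressing-parameter check described above, the sketch below (in Python, with invented names such as lh_address and verified_read) re-reads the LH* file level and split pointer after an access and treats any change as possible interference; it is a sketch of the general idea, not the paper's algorithm.

class InterferenceDetected(Exception):
    pass

def lh_address(key: int, level: int, split_ptr: int) -> int:
    # LH*-style addressing (one initial bucket assumed): hash with h_level,
    # rehash with h_(level+1) if the target bucket has already been split.
    addr = key % (2 ** level)
    if addr < split_ptr:
        addr = key % (2 ** (level + 1))
    return addr

def verified_read(key, read_params, read_bucket):
    # Read the addressing parameters (file level, split pointer) before and
    # after accessing the key; a change signals a possibly harmful split.
    level1, split1 = read_params()
    value = read_bucket(lh_address(key, level1, split1), key)
    level2, split2 = read_params()
    if (level1, split1) != (level2, split2):
        raise InterferenceDetected("addressing parameters changed; retry the read")
    return value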
Object version propagation is a means of managing versions of complex objects automatically. Complex objects are objects linked to other objects by dependence relations such as composition, inheritance, association, equivalence, etc. The object version propagation model presented in the paper associates version propagation capabilities, called version propagation strategies, with relations, thus making version propagation uniform across all types of complex objects. This model is distinguished from other version propagation models by its genericity: it allows the definition and use of multiple propagation strategies, making version propagation itself user-customizable, and it can be applied to all types of relations. The operations propagated are those of object version creation and destruction. The model has, for now, been applied to version propagation in both the composition and inheritance graphs.
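To make the strategy-per-relation idea concrete, the following sketch (illustrative Python, not the paper's model; all class and function names are invented) attaches a propagation strategy to each relation and lets version creation propagate along exactly those relations whose strategy says so.

class VersionedObject:
    def __init__(self, name):
        self.name = name
        self.version = 1

    def new_version(self):
        self.version += 1

class Relation:
    # A dependence relation (composition, inheritance, association, ...) that
    # carries its own version-propagation strategy.
    def __init__(self, source, target, strategy):
        self.source, self.target, self.strategy = source, target, strategy

def propagate_creation(obj, relations):
    # Create a new version of obj; each outgoing relation's strategy decides
    # whether the dependent object is versioned as well (assumes no cycles).
    obj.new_version()
    for rel in relations:
        if rel.source is obj and rel.strategy(rel):
            propagate_creation(rel.target, relations)

# Example strategies: always propagate (e.g. along composition), never propagate.
always = lambda rel: True
never = lambda rel: False

engine = VersionedObject("engine")
car = VersionedObject("car")
links = [Relation(engine, car, always)]   # car depends on engine via composition
propagate_creation(engine, links)         # car.version is bumped to 2 as well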
ISBN:
(Print) 0818676620
The authors propose the use of multiple disks to increase disk I/O bandwidth so as to alleviate the I/O bottleneck problem existing in OODB applications. However, a major difficulty in using multiple disks to support OODBs is that an object may reference or be referenced by other objects through associations. This requires that objects be searched sequentially through the association links, which is sometimes referred to as the navigational search of objects in an object base. This restriction limits the concurrency in object retrieval and defeats the purpose of using multiple disks. Techniques such as clustering/indexing of objects do not resolve this problem. By using a two-phase query processing strategy previously proposed by them, objects of interest can be accessed in parallel from multiple disks. The aim of the paper is to design a probability-based evaluation model to estimate the performance of using multiple disks and to compare it with a single-disk system. The results show that the scheme is viable and that a significant improvement is achieved when multiple disks are used.
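A minimal sketch of the parallelism claim, assuming the qualifying object identifiers are already known from the first query-processing phase and that each object's disk placement is recorded; the disk API and all names below are assumptions, not the paper's design.

from concurrent.futures import ThreadPoolExecutor

def fetch_from_disk(disk, oids):
    # One sequential pass over a single disk for the OIDs placed on it.
    return [disk.read(oid) for oid in oids]

def parallel_fetch(disks, placement, qualifying_oids):
    # placement maps each OID to the disk it resides on; the qualifying OIDs
    # are the output of the first (set-oriented) query-processing phase.
    per_disk = {d: [] for d in disks}
    for oid in qualifying_oids:
        per_disk[placement[oid]].append(oid)
    with ThreadPoolExecutor(max_workers=max(1, len(disks))) as pool:
        batches = pool.map(fetch_from_disk, per_disk.keys(), per_disk.values())
        return [obj for batch in batches for obj in batch]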
Although most state-of-the-art database systems have no inherent limitations with respect to the amount of data they can handle, the huge data quantities typically found in scientific database applications often exceed what is practically feasible when query performance is the issue. One theoretically well-known approach to improving query response time in scientific database applications is to use the categorization and classification facilities commonly found in scientific computing domains to store data aggregations, allowing expensive access to raw data to be replaced by the use of stored aggregated values. The results of an empirical performance study carried out in the application domain of market research are presented, which substantiate the practical importance of such work. Using real market research data, it is shown that query response time can be shortened by an order of magnitude if a proper data aggregation concept is used. If the data aggregates are designed properly, the overhead of generating and managing materializations of data aggregates is far outweighed by the improved query performance in realistic scenarios.
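The following toy sketch (invented data and names, not taken from the study) illustrates the aggregation idea: a materialized aggregate keyed by the classification hierarchy answers a query with a single lookup instead of a scan over the raw records.

# Raw detail records, as they might appear in a market research data set.
raw_sales = [
    {"category": "beverages", "period": "1995-Q1", "revenue": 120.0},
    {"category": "beverages", "period": "1995-Q1", "revenue": 80.0},
    {"category": "snacks",    "period": "1995-Q1", "revenue": 50.0},
]

# Build (and later incrementally maintain) the materialized aggregate once.
aggregate = {}
for row in raw_sales:
    key = (row["category"], row["period"])
    aggregate[key] = aggregate.get(key, 0.0) + row["revenue"]

def total_revenue(category, period):
    # The expensive raw-data scan is replaced by a single aggregate lookup.
    return aggregate.get((category, period), 0.0)

print(total_revenue("beverages", "1995-Q1"))  # 200.0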