We propose a large-scale ATM switch fabric that can be constructed with currently feasible technology. A modular design that is carefully matched to technological design constraints makes large ATM switches (> 1000 x 1000) feasible. Based on our analysis of the technology, we find that module interconnection, rather than the topology of the interconnection network, becomes the bottleneck for a large fast packet switch. For a large switch, reliability becomes critical, so particular attention is paid to fault tolerance, which is achieved by dynamic reconfiguration of the module interconnection network. The proposed design significantly improves system reliability with relatively low hardware overhead. An abstract model of the replacement problem for our design is presented and the problem is transformed into a well-known assignment problem. The maximum fault tolerance is found and a fast replacement algorithm is presented. The reconfiguration capability can also be used to ameliorate imbalanced traffic flows. We formulate this traffic flow assignment problem for our switch fabric and show that the problem is NP-hard. We then propose a simple heuristic algorithm and give an example.
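As a rough illustration of the assignment-problem reduction described in this abstract, the sketch below maps each faulty module to a distinct spare by brute-force minimum-cost assignment; the module names, spare pool, and reconfiguration costs are invented for the example and are not taken from the paper.

```python
# Hypothetical illustration (module names, spares, and costs invented):
# replacing faulty modules with spares as a minimum-cost assignment,
# solved here by brute force over permutations of spare positions.
from itertools import permutations

def assign_spares(faulty, spares, cost):
    """cost[i][j] is the assumed reconfiguration cost of letting spare j
    take over for faulty module i. Returns the cheapest one-to-one mapping."""
    best, best_cost = None, float("inf")
    for cols in permutations(range(len(spares)), len(faulty)):
        total = sum(cost[i][j] for i, j in enumerate(cols))
        if total < best_cost:
            best, best_cost = cols, total
    return {faulty[i]: spares[j] for i, j in enumerate(best)}, best_cost

mapping, total = assign_spares(
    faulty=["M2", "M7", "M9"],
    spares=["S0", "S1", "S2", "S3"],
    cost=[[4, 1, 3, 5],
          [2, 0, 5, 4],
          [3, 2, 2, 1]],
)
print(mapping, total)   # {'M2': 'S1', 'M7': 'S0', 'M9': 'S3'} 4
```

A practical implementation would use a polynomial-time method such as the Hungarian algorithm rather than enumerating permutations; the brute force is shown only to make the reduction concrete.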
A local variable-size memory management policy has been developed. The variable-interval sampled working set (VSWS) is a working-set-like algorithm that exhibited static performance similar to that of the sampled working set (SWS) policy and dynamic performance better than SWS. Simpler hardware support requirements and a lower number of process suspensions for use-bit scanning are two advantages of VSWS over SWS. Because memory demands are increasing in important applications, along with the technology-driven trend toward smaller page sizes, use-bit scanning overhead is apt to become a prime characteristic of memory policies that base their estimation of the current locality on the information contained in the use bits. Before local variable-size policies become truly workable alternatives to the global policies used in most virtual memory systems today, schemes for reducing the overhead of VSWS or SWS must be studied in depth.
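The sketch below gives one simplified reading of a variable-interval sampling policy in the spirit of VSWS: use bits are scanned at interval boundaries that adapt to the page-fault rate, with a minimum interval M, a maximum interval L, and a fault threshold Q. The parameter values and loop structure are assumptions made for illustration, not the authors' implementation.

```python
# Simplified VSWS-style simulation (an interpretation, not the paper's code):
# an interval ends when L virtual time units have elapsed, or when Q faults
# have occurred and at least M units have elapsed; at each interval boundary
# the use bits are scanned and unused pages leave the resident set.

def simulate_vsws(reference_string, M=3, L=10, Q=2):
    resident = set()        # pages currently in the resident set
    used = set()            # pages whose use bit is set in this interval
    faults = 0
    elapsed = 0             # virtual time since the last scan
    interval_faults = 0

    for page in reference_string:
        elapsed += 1
        if page not in resident:
            faults += 1
            interval_faults += 1
            resident.add(page)
        used.add(page)      # referencing a page sets its use bit

        sample = (elapsed >= L) or (interval_faults >= Q and elapsed >= M)
        if sample:
            resident &= used            # drop pages not used this interval
            used.clear()                # reset all use bits
            elapsed = 0
            interval_faults = 0
    return faults, resident

faults, ws = simulate_vsws([1, 2, 1, 3, 4, 1, 2, 5, 1, 2])
print(faults, sorted(ws))
```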
Previous work on paging behavior has concentrated on procedures rather than data. Recent research: 1. clarifies the paging behavior of data referenced by a newly developed language processor, and 2. analyzes theoretically the performance of several page replacement algorithms with no loss of generality. Experimental analyses demonstrate that the optimum page size for these data is small, that locality is evident (though not as high as that for procedures), and that the replacement algorithms can be ranked in descending order of efficiency as LRU, simplified LRUs, and first-in-first-out (FIFO). The language processor is improved on the basis of these results. Assuming the existence of locality, the performance of the algorithms is analyzed theoretically. The difference in performance between LRU and FIFO is evaluated by upper and lower bound functions and is found to increase at low fault rates. The difference between LRU and simplified LRUs is analyzed in terms of the period over which information about recent references to pages is collected. It is concluded that the performance of simplified LRUs is consistent with this measure and strongly depends on the reset timing of the LRU flags.
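For reference, the sketch below counts page faults for LRU and FIFO on the same reference string with a fixed number of frames; this is the kind of empirical comparison whose bounds the paper analyzes, with the reference string and frame count chosen arbitrarily for the example.

```python
# Illustrative only (not from the paper): page-fault counts for LRU and FIFO
# on an arbitrary reference string with a fixed number of page frames.
from collections import OrderedDict, deque

def lru_faults(refs, frames):
    cache = OrderedDict()
    faults = 0
    for p in refs:
        if p in cache:
            cache.move_to_end(p)            # refresh recency
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)   # evict least recently used
            cache[p] = True
    return faults

def fifo_faults(refs, frames):
    queue, resident, faults = deque(), set(), 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(queue.popleft())   # evict oldest arrival
            queue.append(p)
            resident.add(p)
    return faults

refs = [1, 2, 3, 2, 1, 4, 2, 1, 3, 2, 4, 1]
print(lru_faults(refs, 3), fifo_faults(refs, 3))
```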
In most large computer installations, files migrate from on-line disk to mass storage, returning to disk only when they are needed by an interactive text editor. A study of file usage has determined that the average file is used two or fewer times; the average number of references per file, however, is 10.6. This translates to a highly skewed distribution: most files are used very little, and a few are accessed a large number of times. Data produced by the study have formed the basis for the construction and evaluation of a number of file migration algorithms. The size of a file and the time since it was last used are most important in determining when to migrate the file from disk to mass storage.
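A minimal sketch of that conclusion, assuming a hypothetical record format: migration candidates are files idle beyond some threshold, ordered so that moving the largest files first frees the most disk space. The threshold and field names are illustrative only.

```python
# Hypothetical illustration of "size and time since last use" as the
# migration criteria; the record layout and threshold are invented.
from dataclasses import dataclass

@dataclass
class FileInfo:
    name: str
    size_kb: int
    days_since_last_use: float

def migration_candidates(files, min_idle_days=30):
    """Return files idle longer than min_idle_days, largest first,
    so that moving a few files frees the most disk space."""
    idle = [f for f in files if f.days_since_last_use >= min_idle_days]
    return sorted(idle, key=lambda f: f.size_kb, reverse=True)

files = [FileInfo("report.txt", 40, 2.0),
         FileInfo("trace.dat", 9000, 95.0),
         FileInfo("old_src.tar", 2500, 400.0)]
for f in migration_candidates(files):
    print(f.name)
```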
The file system and the components of the computer system associated with it (disks, drums, channels, mass storage tapes and tape drives, controllers, I/O drivers, etc.) comprise a very substantial fraction of most computer systems: substantial in several respects, including the amount of operating system code, the expense of components, physical size, and effect on performance. In a companion paper, we surveyed the traditional methods for optimizing the I/O system. We then examined disk and I/O system architecture in IBM-type systems, and indicated shortcomings and future directions. In this paper we go one step further and summarize research by the author on two topics: cache disks and file migration. Cache disks are disks with an associated cache that buffers recently used tracks of data. The case for cache disks is presented, and some of the design issues are discussed. Parameter values for some aspects of the cache design are suggested. The second part of this paper summarizes the author's work on file migration, by which files are migrated between disk and mass storage as needed in order to effectively maintain on-line a much larger amount of information than the disks can hold. Some of the algorithms investigated are discussed, and the basic results are presented.
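The cache-disk idea can be sketched as an LRU buffer of recently used tracks placed in front of the disk, as below; the capacity, track format, and hit/miss accounting are assumptions for the example, not the parameter values suggested by the author.

```python
# Minimal sketch of a track-level cache in front of a disk (illustrative).
from collections import OrderedDict

class TrackCache:
    def __init__(self, capacity_tracks, read_track_from_disk):
        self.capacity = capacity_tracks
        self.read_track_from_disk = read_track_from_disk   # backing store
        self.tracks = OrderedDict()                        # track_id -> data
        self.hits = self.misses = 0

    def read(self, track_id):
        if track_id in self.tracks:
            self.hits += 1
            self.tracks.move_to_end(track_id)     # mark most recently used
        else:
            self.misses += 1
            if len(self.tracks) >= self.capacity:
                self.tracks.popitem(last=False)   # evict least recently used
            self.tracks[track_id] = self.read_track_from_disk(track_id)
        return self.tracks[track_id]

cache = TrackCache(capacity_tracks=2,
                   read_track_from_disk=lambda t: b"data-for-%d" % t)
for t in [1, 2, 1, 3, 1, 2]:
    cache.read(t)
print(cache.hits, cache.misses)
```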
The steady increase in the power and complexity of modern computer systems has encouraged the implementation of automatic file migration systems, which move files dynamically between mass storage devices and disk in response to user reference patterns. Using information describing 13 months of user disk data set file references, we develop and evaluate (replacement) algorithms for the selection of files to be moved from disk to mass storage. Our approach demonstrates a general methodology for this type of problem. We find that algorithms based on both the file size and the time since the file was last used work well. The best realizable algorithms tested condition on the empirical distribution of the times between file references. Acceptable results are also obtained by selecting for replacement the file whose size times time to most recent reference is maximal. Comparisons are made with a number of standard algorithms developed for paging, such as Working Set, VMIN, and GOPT. Sufficient information (parameter values, fitted equations) is provided so that our algorithms may be easily implemented on other systems.
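The "size times time since most recent reference is maximal" rule mentioned above can be sketched as follows; the tuple layout, time units, and space target are assumptions made for the example, not the paper's parameter values.

```python
# Sketch of selecting migration victims by maximal size * age
# (the record format and space target are invented for illustration).
def select_for_migration(files, now):
    """files: iterable of (name, size, last_reference_time) tuples.
    Returns the name whose size * (now - last_reference_time) is largest."""
    return max(files, key=lambda f: f[1] * (now - f[2]))[0]

def free_space(files, now, bytes_needed):
    """Repeatedly apply the rule until enough space has been reclaimed."""
    files = list(files)
    moved, freed = [], 0
    while files and freed < bytes_needed:
        name = select_for_migration(files, now)
        victim = next(f for f in files if f[0] == name)
        files.remove(victim)
        moved.append(name)
        freed += victim[1]
    return moved

print(free_space([("a.log", 500, 10.0), ("b.db", 200, 1.0),
                  ("c.tmp", 50, 90.0)], now=100.0, bytes_needed=600))
```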
Replacement algorithms for virtual memory systems are typically based on temporal measures of locality, while predictive loading and program restructuring are based on spatial measures of locality. This paper suggests...
In [1] the authors use a lemma to prove their Theorem 1 (pp. 352-353). While the lemma is true, there is a defect in its proof, for one cannot assume that each of the symbols appearing as a type IV reference is in t(E_k, s). Using their example (p. 350) as a reference string with k = 12, t(E_12, 3) is 534 and lacks the two symbols (1 and 2) which appear as type IV references.
The running time of programs in a paging machine generally increases as the store in which programs are constrained to run decreases. Experiments, however, have revealed cases in which the reverse is true: a decrease in the size of the store is accompanied by a decrease in running time. An informal discussion of the anomalous behavior is given, and for the case of the FIFO replacement algorithm a formal treatment is presented.
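The anomaly is easy to reproduce for FIFO. On the classic reference string used below, a FIFO-managed store with three frames incurs nine page faults, while four frames incur ten.

```python
# Worked example of the FIFO anomaly: more frames, more page faults.
from collections import deque

def fifo_faults(refs, frames):
    queue, resident, faults = deque(), set(), 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(queue.popleft())   # evict oldest page
            queue.append(p)
            resident.add(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults with 3 frames
print(fifo_faults(refs, 4))   # 10 faults with 4 frames
```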