Large-scale data storage systems rely on magnetic tape cartridges to store millions of data objects. As these tapes age, the resident data objects become invalid; consequently, less and less of the tape's potential capacity is effectively utilized. To address this problem, data storage systems have a facility, called "recycle" in this paper, that transfers valid data objects from sparsely populated tapes onto new tapes, thus creating empty tapes for reuse. A high-performance recycle process is needed to keep the number of tape cartridges to a minimum, and to maintain a continuous supply of empty tapes for storing newly created data objects. The performance of such processes is not easy to determine, and depends strongly on the data stored on the tapes, the speed and characteristics of the computer on which recycle is executed, and the nature of the algorithms themselves. This paper documents an extensive effort to evaluate a proposed recycle algorithm, using field workload data, laboratory measurements, and modeling. The results of the study were used to improve the recycle process, and were later verified through field trials. In addition to yielding the results themselves, the effort illustrated that modeling and measurement, in an industrial setting, can indeed be used successfully in the design process.
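The consolidation idea in this abstract can be sketched roughly as follows. This is an illustrative model only, not the paper's algorithm: the function name, the valid-fraction threshold, and the packing rule are all assumptions.

```python
def recycle_pass(tapes, capacity, threshold=0.3):
    """Hypothetical sketch of one 'recycle' pass.

    tapes: list of valid-byte counts, one per cartridge.
    capacity: bytes per cartridge.
    threshold: valid fraction below which a tape is consolidated (assumed).
    Returns the number of cartridges freed for reuse.
    """
    # Tapes whose valid data has shrunk below the threshold are candidates.
    sparse = [v for v in tapes if v < threshold * capacity]
    moved = sum(sparse)
    # Pack the surviving valid objects onto the fewest possible new tapes.
    new_tapes = -(-moved // capacity) if moved else 0  # ceiling division
    return len(sparse) - new_tapes

# Two mostly-empty tapes (10 and 20 valid bytes) fit on one new tape,
# freeing one cartridge:
print(recycle_pass([10, 20, 90], 100))  # -> 1
```

The real process must also account for the tape-drive seek and transfer characteristics the abstract mentions, which this sketch ignores.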
Model-based evaluation of reliable distributed and parallel systems is difficult due to the complexity of these systems and the nature of the dependability measures of interest. The complexity creates problems for analytical model solution techniques, and the fact that reliability and availability measures are based on rare events makes traditional simulation methods inefficient. Importance sampling is a well-known technique for improving the efficiency of rare event simulations. However, finding an importance sampling strategy that works well in general is a difficult problem. The best strategy for importance sampling depends on the characteristics of the system and the dependability measure of interest. This fact motivated the development of an environment for importance sampling that would support a wide variety of model characteristics and measures of interest. The environment is based on stochastic activity networks, and importance sampling strategies are specified using the new concept of the importance sampling governor. The governor supports dynamic importance sampling strategies by allowing the stochastic elements of the model to be redefined based on the evolution of the simulation. The utility of the new environment is demonstrated by evaluating the unreliability of a highly dependable fault-tolerant unit used in the well-known MARS architecture. The model is non-Markovian, with Weibull distributed failure times and uniformly distributed repair times.
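The core rare-event problem the abstract describes can be illustrated on a toy example: estimating a tail probability P(X > t) for an Exp(1) variable. This is a minimal importance-sampling sketch under assumed parameters (the tilted rate `lam_is` is illustrative), not the paper's stochastic-activity-network or governor machinery.

```python
import math
import random

def p_rare_naive(t, n=100_000, lam=1.0):
    """Standard Monte Carlo: for large t almost no samples hit the event."""
    hits = sum(1 for _ in range(n) if random.expovariate(lam) > t)
    return hits / n

def p_rare_is(t, n=100_000, lam=1.0, lam_is=0.1):
    """Importance sampling: draw from a heavier-tailed Exp(lam_is)
    and reweight each hit by the likelihood ratio f(x)/g(x)."""
    total = 0.0
    for _ in range(n):
        x = random.expovariate(lam_is)
        if x > t:
            w = (lam * math.exp(-lam * x)) / (lam_is * math.exp(-lam_is * x))
            total += w
    return total / n

# Exact answer is exp(-20) ~= 2.1e-9: naive simulation returns 0,
# while the reweighted estimate is close to the true value.
print(p_rare_naive(20), p_rare_is(20))
```

A "governor" in the paper's sense goes further: it changes the sampling distribution dynamically as the simulated system evolves, rather than fixing one tilted distribution up front as here.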
In numerical simulations of fluid-dynamics problems, solution-adaptive methods have proven to be very powerful. The implementation of the modified Shepard’s interpolation to the structured grids used in CFD is sugges...
Broadband ISDN has made possible a variety of new multimedia services, but also created new problems for congestion control, due to the bursty nature of traffic sources. Lazar and Pacifici (1991) showed that traffic prediction is able to alleviate this problem. The traffic prediction model in their framework is a special case of the Box-Jenkins ARIMA model. In this paper, we propose a neural network approach for traffic prediction. A (1,5,1) backpropagation feedforward neural network is trained to capture the linear and nonlinear regularities in several time series. A comparison between the results from the neural network approach and the Box-Jenkins approach is also provided. The nonlinearity used in this paper is chaotic. We have designed a set of experiments to show that a neural network's prediction performance is only slightly affected by the intensity of the stochastic component (noise) in a time series. We have also demonstrated that a neural network's performance should be measured against the variance of the noise, in order to gain more insight into its behavior and prediction performance. Based on experimental results, we then conclude that the neural network approach is an attractive alternative to traditional regression techniques as a tool for traffic prediction.
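The (1,5,1) architecture named in the abstract can be sketched directly: one input, five hidden units, one output, trained by plain backpropagation for one-step-ahead prediction. The training series below is the chaotic logistic map, a stand-in for the paper's traffic data (the abstract notes its nonlinearity is chaotic); the activation, learning rate, and initialization are all assumptions.

```python
import math
import random

random.seed(1)

H, LR = 5, 0.05  # hidden width and learning rate (assumed)
w1 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def predict(x):
    """Forward pass of the (1,5,1) net; returns output and hidden values."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

# Chaotic training series: the logistic map x[t+1] = 4 x[t] (1 - x[t]).
series = [0.3]
for _ in range(199):
    series.append(4 * series[-1] * (1 - series[-1]))

# Online backpropagation on the squared one-step prediction error.
for epoch in range(1000):
    for t in range(len(series) - 1):
        x, target = series[t], series[t + 1]
        y, h = predict(x)
        err = y - target  # gradient of 0.5 * (y - target)**2 w.r.t. y
        for j in range(H):
            dh = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            w2[j] -= LR * err * h[j]
            w1[j] -= LR * dh * x
            b1[j] -= LR * dh
        b2 -= LR * err
```

After training, the net approximates the map's one-step dynamics; the paper's experiments add a stochastic noise component on top of such a chaotic signal and measure prediction error against the noise variance.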
Yield analysis for reconfigurable structures is often difficult, due to the defect distribution and irregularity of reconfiguration algorithms. In this paper, the authors give a method to analyze the yield of reconfigurable pipelines for the following model: Given n pipelines with m stages, where each stage of a pipeline is defective with constant probability and spare wires are provided for reconfiguration, the authors calculate the expected percentage of pipelines they can harvest after reconfiguration. By modeling the 'shifting' reconfiguration as weighted chains in a lattice and applying poset theory, they give upper and lower bounds for the harvest rate as a function of m and n.
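The model's baseline is easy to state: without any reconfiguration, a pipeline survives only if all m of its own stages are defect-free, so the expected harvest rate is (1 - p)^m. The sketch below is a Monte Carlo check of that floor under assumed parameters; the shifting reconfiguration the paper bounds with poset theory can only improve on it.

```python
import random

def harvest_no_spares(n, m, p, trials=20_000):
    """Monte Carlo harvest rate with NO reconfiguration (baseline only).

    n pipelines of m stages each; every stage is independently
    defective with probability p. Expected value is (1 - p)**m.
    """
    good = 0
    for _ in range(trials):
        for _ in range(n):
            # A pipeline is harvested only if all of its stages work.
            if all(random.random() > p for _ in range(m)):
                good += 1
    return good / (trials * n)

# With p = 0.1 and m = 5 stages, about 0.9**5 ~= 59% of pipelines
# survive without spares; shifting via spare wires raises this.
print(harvest_no_spares(4, 5, 0.1))
```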
In switch-level simulation, nodes carry a charge on their parasitic capacitance from one evaluation to the next, which gives them a memory quality. A node is classified as temporary if its memory aspect is lost and cannot affect the circuit operation, whereas a node is classified as a memory node if the memory of the node is maintained and can affect the circuit operation. Accurate classification of nodes into temporary and memory nodes increases the performance of compiled simulators and high-level model generators. An approach for reliable automatic classification of nodes in a switch-level description is introduced. Both an exhaustive, exponential-time algorithm and a polynomial-time heuristic are presented. The heuristic was implemented and tested for several large circuits, including a commercial microprocessor. For this processor, the proposed heuristic identified an average of 92% of all nodes as temporary nodes. The heuristic was applied in a high-level model generator and significantly increased its performance.
ISBN (digital): 9783540323099
ISBN (print): 9783540258636
The four volume set assembled following The 2005 International Conference on Computational Science and Its Applications, ICCSA 2005, held in Suntec International Convention and Exhibition Centre, Singapore, from 9 May 2005 till 12 May 2005, represents the fine collection of 540 refereed papers selected from nearly 2,700 submissions. Computational science has firmly established itself as a vital part of many scientific investigations, affecting researchers and practitioners in areas ranging from applications such as aerospace and automotive, to emerging technologies such as bioinformatics and nanotechnologies, to core disciplines such as mathematics, physics, and chemistry. Due to the sheer size of many challenges in computational science, the use of supercomputing, parallel processing, and sophisticated algorithms is inevitable and becomes a part of fundamental theoretical research as well as endeavors in emerging fields. Together, these far-reaching scientific areas contribute to shape this conference in the realms of state-of-the-art computational science research and applications, encompassing the facilitating theoretical foundations and the innovative applications of such results in other areas.