The authors consider a flow control mechanism that dynamically regulates the rate of data flow into a network based on feedback information about the network state. Such mechanisms have been introduced in a variety of...
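A minimal sketch of one common feedback-driven rate regulation scheme, additive-increase/multiplicative-decrease (AIMD), is given below; it is illustrative only, and the parameters and the assumption of binary congestion feedback are not taken from the paper.

    def update_rate(rate, congested, incr=1.0, decr=0.5, max_rate=100.0):
        """Adjust the sending rate after one round of network feedback."""
        if congested:                      # feedback reports a loaded network
            return max(rate * decr, 1.0)   # back off multiplicatively
        return min(rate + incr, max_rate)  # probe for bandwidth additively

    rate = 10.0
    for feedback in (False, False, True, False):
        rate = update_rate(rate, feedback)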
In this paper, we develop a formal logical foundation for secure deductive databases. This logical foundation is based on an extended logic involving several modal operators. We develop two models of interaction betwe...
Data parallel processing on processor array architectures has gained popularity in data intensive applications, such as image processing and scientific computing, as massively parallel processor array machines became commercially feasible. The data parallel paradigm of assigning one processing element to each data element results in inefficient utilization of a large processor array when a relatively small data structure is processed on it. For a problem involving relatively small data structures, the large degree of parallelism of a massively parallel processor array machine yields no faster a solution than the modest degree of parallelism of a machine that is only as large as the data structure. We have presented a data replication technique to speed up the processing of small data structures on large processor arrays. In this paper, we present replicated data algorithms for digital image convolution and median filtering, and compare their performance with that of conventional data parallel algorithms on three popular array interconnection networks, namely, the 2-D mesh, the 3-D mesh, and the hypercube.
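As a rough illustration of the replicated data idea, the sketch below simulates it serially with NumPy: a small image is replicated so that each replica accumulates the partial sums for a disjoint subset of kernel offsets, and the replicas' partial images are summed in a final reduction. The function and parameter names are hypothetical, and on a real processor array the replicas would run in parallel.

    import numpy as np

    def replicated_convolve(image, kernel, n_replicas):
        kh, kw = kernel.shape
        offsets = [(i, j) for i in range(kh) for j in range(kw)]
        padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
        # One partial image per replica; on an array machine each replica
        # is a copy of the image spread over its own block of processors.
        partial = np.zeros((n_replicas,) + image.shape)
        for idx, (i, j) in enumerate(offsets):
            r = idx % n_replicas  # assign this kernel offset to a replica
            partial[r] += kernel[i, j] * padded[i:i + image.shape[0],
                                                j:j + image.shape[1]]
        # Final reduction: sum the replicas' partial results. (Correlation
        # indexing; identical to convolution for symmetric kernels.)
        return partial.sum(axis=0)

    # Example: 3x3 mean filter of a small image using 3 replicas.
    out = replicated_convolve(np.ones((8, 8)), np.ones((3, 3)) / 9, 3)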
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low from high fault density components so that testing and verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. The authors present an alternative approach for constructing such models that is intended to fulfil specific software engineering needs, i.e. dealing with partial/incomplete information and creating models that are easy to interpret. The approach to classification is to measure the software system under consideration and to build multivariate stochastic models for prediction. The authors present experimental results obtained by classifying FORTRAN components into two fault density classes: low and high. They also evaluate the accuracy of the model and the insights it provides into the software process.
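The abstract does not give the model itself; as a generic stand-in only, the sketch below classifies components into the two fault density classes with logistic regression over a few hypothetical code metrics. It is not the authors' multivariate stochastic model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [lines_of_code, cyclomatic_complexity, n_parameters]
    # (illustrative metrics, not the paper's measurement set)
    X = np.array([[120, 4, 2], [800, 25, 7], [60, 2, 1], [950, 31, 9]])
    y = np.array([0, 1, 0, 1])  # 0 = low fault density, 1 = high

    model = LogisticRegression().fit(X, y)
    print(model.predict([[400, 15, 5]]))  # class for a new component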
The authors address the issue of assessing the difficulty of a change based on known or predictable data. They consider the present work a first step towards the construction of customized economic models for maintainers. They propose a modeling approach, based on standard statistical techniques, that can be used in a variety of software maintenance environments. The approach can be easily automated and is simple to use for people with limited statistical experience. Moreover, it deals effectively with the uncertainty usually associated with both model inputs and outputs. The modeling approach is validated on a data set provided by the NASA Goddard Space Flight Center, showing it to be effective in classifying changes with respect to the effort involved in implementing them. Other advantages of the approach are discussed, along with additional steps to improve the results.
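As an illustration of the kind of classification the approach performs, the sketch below uses a small decision tree over hypothetical change attributes and reports class probabilities, one simple way to surface the output uncertainty the authors emphasize; it is not their modeling technique.

    from sklearn.tree import DecisionTreeClassifier

    # Each row: [n_components_touched, developer_familiarity (0-2),
    # change_type] -- hypothetical attributes of a maintenance change.
    X = [[1, 2, 0], [6, 0, 1], [2, 1, 0], [8, 0, 1], [3, 2, 0]]
    y = ['easy', 'hard', 'easy', 'hard', 'easy']

    clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(clf.predict_proba([[4, 1, 1]]))  # probability per effort class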
In this paper, we report on an actual implementation of the external sorting problem on a multicomputer with careful attention paid to the overlap between computation and I/O in order to minimize total execution time....
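The paper's multicomputer implementation is not reproduced here, but the double-buffering sketch below shows the basic overlap idea in miniature: a background thread prefetches the next chunk while the current chunk is sorted into a run. Chunk size and file layout are assumptions.

    import threading

    CHUNK = 1 << 20  # bytes per run; illustrative, not the paper's value

    def read_chunk(f, out):
        out.append(f.read(CHUNK))  # file reads release the GIL, so this
                                   # genuinely overlaps the sort below

    def sort_runs(path):
        runs = []
        with open(path, 'rb') as f:
            buf = []
            read_chunk(f, buf)                 # synchronous first read
            while buf[-1]:                     # empty bytes means EOF
                current, buf = buf[-1], []
                t = threading.Thread(target=read_chunk, args=(f, buf))
                t.start()                            # prefetch next (I/O)
                runs.append(bytes(sorted(current)))  # sort current (CPU)
                t.join()
        return runs  # the sorted runs would then be merged externally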
Implementation bias in a specification is an arbitrary constraint on the solution space. The authors describe the problem of bias and then present a model of the specification and design processes that describes individual subprocesses in terms of precision/detail diagrams, along with a model of bias in multi-attribute software specifications. While studying how bias is introduced into a specification, the authors found that software defects and bias are dual problems of a single phenomenon. This duality has been used to explain the large proportion of faults found during the coding phase at the Software Engineering Laboratory at NASA Goddard Space Flight Center.
Polynomial hash functions are well studied and widely used in various applications. They have gained popularity because of certain performance properties they exhibit. It has been shown that even linear hash functions are expec...
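For context, a standard polynomial hash over a prime field is sketched below using Horner's rule; the constants shown are common illustrative choices, not those analyzed in the paper.

    def poly_hash(key, a, p):
        """Evaluate h(x) = (x_0*a^(n-1) + ... + x_(n-1)) mod p by Horner's rule."""
        h = 0
        for byte in key:       # iterating over bytes yields integers
            h = (h * a + byte) % p
        return h

    # Example: hash a byte string into a table of size m with a prime p > m.
    p, a, m = (1 << 61) - 1, 131, 1024
    print(poly_hash(b"example", a, p) % m)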
As the angle of gaze changes, so does the retinal location of the visual image of a stationary object. Since the object is correctly perceived as stationary, the retinotopic coordinates of the object must somehow be transformed into craniotopic coordinates using eye position information. Neurons in area 7a of the posterior parietal cortex in macaque monkeys are thought to contribute to this transformation. The author describes a model of area 7a that incorporates a topographic map, and trains networks using competitive backpropagation learning to learn the transformation task and develop this topographic map. The trained networks generalized well to previously unseen patterns. The study showed that a competitive backpropagation learning rule can train networks employing competitive activation mechanisms to learn continuous-valued functions, and that it is possible, at least computationally, to construct a topographic map in area 7a which might be used for eye-position-independent spatial coding.
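The sketch below is a plain-backpropagation stand-in, not the competitive backpropagation rule or the area 7a architecture from the paper; it only illustrates the underlying transformation task, where the head-centered (craniotopic) position is the sum of a retinal position and an eye position in a toy 1-D setup.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (500, 2))      # columns: retinal pos, eye pos
    y = X.sum(axis=1, keepdims=True)      # craniotopic = retinal + eye

    W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

    for _ in range(2000):
        h = np.tanh(X @ W1 + b1)          # hidden layer
        out = h @ W2 + b2
        err = out - y
        # backpropagate the mean-squared error
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        for param, grad in ((W2, gW2), (b2, gb2), (W1, gW1), (b1, gb1)):
            param -= 0.1 * grad           # gradient descent step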