Pattern recognition in proteins has become of central importance in Molecular Biology. Proteins are macromolecules composed of an ordered sequence of amino acids, also referred to as residues. The sequence of residues...
Presents near constant time associative parallel lexing (APL) algorithms. The best time complexity claimed thus far is O(log n) (n denotes the number of input characters) for the parallel prefix lexing (PPL) algorithm. The linear state-recording step in the PPL algorithm, which needs to be done only once for each grammar, has been ignored in claiming the O(log n) time complexity for the PPL algorithm. Furthermore, the PPL algorithm does not consider recording line numbers for the tokens or distinguishing identifier tokens as keywords or user-identifiers. The APL algorithms perform all of these functions. Thus, without counting the effort spent on these functions, the APL algorithm takes constant time, since every step depends on the length of the tokens, not on the length of the input. Including these extra functions in the analysis, the APL algorithm takes near constant time.
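As a point of reference for the extra functions mentioned above, the following sequential sketch records a line number for each token and classifies identifiers as keywords or user-identifiers; the keyword set and token pattern are invented, and the APL algorithms perform this bookkeeping associatively in parallel rather than token by token as done here.

```python
# Minimal sequential sketch of the bookkeeping the abstract mentions:
# a line number per token and keyword vs. user-identifier classification.
# The keyword set and token pattern are illustrative assumptions; the
# paper's APL/PPL algorithms do this work associatively and in parallel.
import re

KEYWORDS = {"if", "else", "while", "return"}  # hypothetical keyword set
TOKEN = re.compile(r"[A-Za-z_]\w*|\d+|\S")

def lex(source: str):
    tokens = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for lexeme in TOKEN.findall(line):
            if lexeme in KEYWORDS:
                kind = "keyword"
            elif lexeme[0].isalpha() or lexeme[0] == "_":
                kind = "user-identifier"
            elif lexeme.isdigit():
                kind = "number"
            else:
                kind = "symbol"
            tokens.append((lexeme, kind, line_no))
    return tokens

print(lex("if x1 > 0\nreturn x1"))
```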
The topics discussed here are network models of object recognition; a computational theory of recognition; psychophysical support for a view-interpolation model; and an open issue, the features of recognition. The authors survey a successful replication of central characteristics of performance in 3-D object recognition by a computational model based on interpolation among a number of stored views of each object. Network models of 3-D object recognition based on interpolation among specific stored views behave, in several respects, similarly to human observers in a number of recognition tasks. Even closer replication of human performance in recognition should be expected once the issue of the features used to represent object views is resolved.
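For concreteness, here is a minimal sketch of recognition by interpolation among stored views, using Gaussian radial basis functions of the kind often employed in such network models; the stored views, kernel width, and output encoding are assumptions for illustration, not the authors' model.

```python
# Rough sketch of recognition by interpolation among stored 2-D views,
# using Gaussian radial basis functions. Stored views, kernel width, and
# output encoding are illustrative assumptions, not the authors' model.
import numpy as np

def rbf_recognizer(stored_views, targets, sigma=1.0):
    """Fit weights so that each stored view maps exactly to its target output."""
    X = np.asarray(stored_views, dtype=float)          # (m, d) view feature vectors
    Y = np.asarray(targets, dtype=float)               # (m, k) desired outputs
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma ** 2))                 # (m, m) kernel matrix
    W = np.linalg.solve(G + 1e-8 * np.eye(len(X)), Y)  # interpolation weights

    def recognize(view):
        v = np.asarray(view, dtype=float)
        g = np.exp(-((X - v) ** 2).sum(-1) / (2 * sigma ** 2))
        return g @ W                                    # interpolated response
    return recognize

# Two stored views of "object A" map to 1; one distractor view maps to 0.
recognize = rbf_recognizer([[0, 0], [1, 1], [5, 5]], [[1], [1], [0]])
print(recognize([0.5, 0.5]))   # intermediate view of object A -> high response
print(recognize([4.5, 4.5]))   # view near the distractor -> response near 0
```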
The Distributed Consensus problem involves n processors each of which holds an initial binary value. At most t processors may be faulty and ignore any protocol (even behaving maliciously), yet it is required that the ...
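The abstract is truncated before stating the requirement; the sketch below only illustrates the agreement and validity conditions usually imposed on Distributed Consensus, as general background rather than this paper's formulation.

```python
# Sketch of the correctness conditions commonly required of a Distributed
# Consensus protocol (agreement and validity). The truncated abstract does
# not state them, so this is standard background, not the paper's text.
def check_consensus(inputs, decisions, faulty):
    """inputs[i], decisions[i]: binary values of processor i; faulty: set of ids."""
    correct = [i for i in range(len(inputs)) if i not in faulty]
    # Agreement: all correct processors decide on the same value.
    agreement = len({decisions[i] for i in correct}) == 1
    # Validity (one common form): if all correct processors start with the
    # same value, that value must be the decision.
    start = {inputs[i] for i in correct}
    validity = (len(start) > 1) or (start == {decisions[correct[0]]})
    return agreement and validity

print(check_consensus([1, 1, 0, 1], [1, 1, 1, 1], faulty={2}))  # True
```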
The subcubic (O(n^ω) for ω < 3) algorithms to multiply Boolean matrices do not provide the witnesses; namely, they compute C = A·B, but if Cij = 1 they do not find an index k (a witness) such that Aik = Bkj = 1. The authors design...
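For reference, here is a naive cubic-time sketch of what a witness matrix contains: for every entry Cij = 1 it records some index k with Aik = Bkj = 1. The paper's contribution is computing such witnesses subcubically; the version below only fixes the specification.

```python
# Cubic-time reference sketch of Boolean matrix product with witnesses:
# each output entry equal to 1 also gets a witness index k with
# A[i][k] = B[k][j] = 1. The paper is about doing this subcubically;
# this naive version only illustrates what is being computed.
def boolean_product_with_witnesses(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    W = [[-1] * n for _ in range(n)]   # -1 means "no witness" (C[i][j] = 0)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] and B[k][j]:
                    C[i][j] = 1
                    W[i][j] = k        # any such k is a valid witness
                    break
    return C, W

A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]
print(boolean_product_with_witnesses(A, B))
```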
The authors extend R.B. Boppana's results (1989) in two ways. They first show that his two lower bounds hold for general read-once formulae, not necessarily monotone, that may even include exclusive-or gates. They are then able to join his two lower bounds together and show that any read-once, not necessarily monotone, formula that amplifies (p − 1/n, p + 1/n) to (2^−n, 1 − 2^−n) has size at least Ω(n^(α+2)). This result does not follow from Boppana's arguments, and it shows that the amount of amplification achieved by L.G. Valiant (1984) is the maximal achievable using read-once formulae.
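The abstract references Valiant's (1984) amplification; as a numeric sketch of what amplification means here, the read-once step commonly used in that construction applies (x1 ∧ x2) ∨ (x3 ∧ x4) to independent copies, mapping acceptance probability p to 1 − (1 − p²)², and iterating it drives probabilities slightly below the fixed point (√5 − 1)/2 toward 0 and slightly above it toward 1. The choice of n and the number of layers below is illustrative only.

```python
# Numeric sketch of gap amplification in the style of Valiant's (1984)
# read-once construction: the 4-input formula (x1 AND x2) OR (x3 AND x4)
# on independent copies maps acceptance probability p to 1 - (1 - p^2)^2.
# Around the fixed point b = (sqrt(5) - 1)/2 the map has slope > 1, so
# iterating it pushes b - 1/n and b + 1/n apart, toward 0 and 1.
# The values of n and the layer count are purely illustrative.
from math import sqrt

def amplify_step(p: float) -> float:
    """Acceptance probability after one (x1&x2)|(x3&x4) layer on independent copies."""
    return 1.0 - (1.0 - p * p) ** 2

b = (sqrt(5) - 1) / 2           # fixed point of the map
n = 100
lo, hi = b - 1 / n, b + 1 / n
for layer in range(12):
    lo, hi = amplify_step(lo), amplify_step(hi)
print(lo, hi)                   # gap widened: lo near 0, hi near 1
```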
An associative data-parallel compilation model of logic programs, capable of answering queries with unspecified relations concerning the given objects, is described. The model benefits from the synergy resulting from associative search, data parallelism during goal reduction, the use of low-level code to invoke the subgoals, and savings in data transfers resulting from the presence of global registers. The use of associative tables extends the power of logic programming to answer a large class of queries and to derive unspecified relation names for the given objects. In contrast to interpretation based on a pure data-parallel model, this model does not suffer from data sequentiality caused by the presence of multiple-occurrence variables in the goals. The model also handles variable aliasing in the clauses efficiently, using associative data-parallel search and the data-parallel assignment property.
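A toy sketch of the kind of query with an unspecified relation name mentioned above: given two objects, an associative search over a fact table returns every relation that links them. The facts and query are invented for illustration; the paper's model performs this search associatively and in a data-parallel fashion over compiled clauses.

```python
# Toy sketch of answering a query whose relation name is left unspecified:
# an associative search over a fact table returns every relation linking
# the two given objects. The fact table is invented for illustration.
FACTS = [
    ("parent", "ann", "bob"),
    ("teaches", "ann", "bob"),
    ("parent", "bob", "carol"),
]

def relations_between(x, y, facts=FACTS):
    """Return all relation names R such that R(x, y) is a stored fact."""
    return [rel for (rel, a, b) in facts if a == x and b == y]

print(relations_between("ann", "bob"))   # ['parent', 'teaches']
```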
The authors discuss concordance compression using the framework now customary in compression theory. They begin by creating a mathematical model of concordance generation, and then use optimal compression engines, such as Huffman or arithmetic coding, to do the actual compression. It should be noted that in the context of a static information retrieval system, compression and decompression are not symmetrical tasks. Compression is done only once, while building the system, whereas decompression is needed during the processing of every query and directly affects the response time. One may thus use extensive and costly preprocessing for compression, provided reasonably fast decompression methods are possible. Moreover, compression is applied to the full files (text, concordance, etc.), but decompression is needed only for (possibly many) short pieces, which may be accessed at random by means of pointers to their exact locations. Therefore the use of adaptive methods based on tables that systematically change from the beginning to the end of the file is ruled out. However, their concern is less the speed of encoding or decoding than relating concordance compression conceptually to the modern approach of data compression, and testing the effectiveness of their models.
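As a minimal illustration of the two-stage scheme described above, the sketch below gap-encodes a hypothetical concordance entry (the sorted list of positions at which a word occurs) and builds a static Huffman code over the resulting gap values; the data and the particular gap model are assumptions, and arithmetic coding could replace Huffman.

```python
# Sketch: model a concordance entry (sorted word positions) as a sequence
# of gaps, then build a static Huffman code over the observed gap values.
# The positions are invented; the point is the two-stage scheme described
# in the abstract: a generation model feeding an optimal coder.
import heapq
from collections import Counter

def gaps(positions):
    return [b - a for a, b in zip([0] + positions[:-1], positions)]

def huffman_code(symbols):
    """Return {symbol: bitstring} for a static Huffman code over `symbols`."""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tie = len(heap)
    if len(heap) == 1:                      # degenerate single-symbol case
        return {next(iter(heap[0][2])): "0"}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

positions = [3, 7, 8, 15, 16, 17, 30]       # hypothetical occurrences of one word
g = gaps(positions)                         # [3, 4, 1, 7, 1, 1, 13]
code = huffman_code(g)
encoded = "".join(code[x] for x in g)
print(g, code, encoded, sep="\n")
```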