In this work, we have developed an elegant algorithm to study the cosmological consequences of a large class of quantum field theories (e.g. superstring theory, supergravity, extra-dimensional theories, modified gravity, etc.), which are equivalently described by soft attractors in the effective field theory framework. In this description we have restricted our analysis to two scalar fields, the dilaton and the Higgsotic field, minimally coupled with Einstein gravity; the analysis can be generalized to any number of scalar fields with generalized non-canonical and non-minimal interactions. We have explicitly used R^2 gravity, for which we have studied the attractor and non-attractor phases by exactly computing the two-point, three-point and four-point correlation functions of scalar fluctuations using the In-In (Schwinger-Keldysh) and δN formalisms. We have also presented theoretical bounds on the amplitude, tilt and running of the primordial power spectrum, and on the amplitudes of the various shapes (equilateral, squeezed, folded, kite or counter-collinear) obtained from the three- and four-point scalar correlation functions, which are consistent with observed data. The results for the two-point tensor fluctuations and the field excursion formula are also presented explicitly for the attractor and non-attractor phases. Further, reheating constraints, the scale-dependent behavior of the couplings, and the dynamical solutions for the dilaton and Higgsotic fields are presented. New sets of consistency relations between the two-, three- and four-point observables are also derived, which show significant deviations from canonical slow-roll models. Additionally, three possible theoretical proposals have been presented to overcome the tachyonic instability during late-time acceleration. Finally, we have also provided the bulk interpretation of the three- and four-point scalar correlation functions for completeness.
In this paper we initiate the study of discrete random variables over domains. Our work is inspired by that of Daniele Varacca, who devised indexed valuations as models of probabilistic computation within domain theory. Our approach relies on new results about commutative monoids defined on domains that also allow actions of the non-negative reals. Using our approach, we define two such families of real domain monoids, one of which allows us to recapture Varacca's construction of the Plotkin indexed valuations over a domain. Each of these families leads to the construction of a family of discrete random variables over domains, the second of which forms the object level of a continuous endofunctor on the category RB (domains that are retracts of bifinite domains) and on the category FS (domains where the identity map is the directed supremum of deflations finitely separated from the identity). The significance of this last result lies in the fact that there is no known category of continuous domains that is closed under the probabilistic power domain, which forms the standard approach to modelling probabilistic choice over domains. The fact that RB and FS are Cartesian closed and also closed under a power domain of discrete random variables means we can now model, e.g., the untyped lambda calculus extended with a probabilistic choice operator, implemented via random variables. (C) 2007 Elsevier B.V. All rights reserved.
This paper proposes efficient fixed-point and floating-point implementations of the radix-10 decimal logarithm on Xilinx FPGA devices. The technique is based on the digit-recurrence method and supports the three decimal floating-point (DFP) types specified in the IEEE 754-2008 standard. The novelty of this proposal is that it avoids the implementation of redundant carry-save logic by direct selection (i.e. via scaling). The designs involve novel techniques based on efficient use of dedicated resources in the programmable devices. Implementations were made on Xilinx 7-series devices. The fixed-point logarithm designs operate at up to 145 MHz for p = 7, 124 MHz for p = 16, and 108 MHz for p = 34; for the DFP logarithm the operating frequencies obtained were 123 MHz for p = 7, 104 MHz for p = 16, and 93 MHz for p = 34. In contrast to other related works, the proposed architecture achieves better computation times and lower LUT area occupation. (C) 2018 Elsevier B.V. All rights reserved.
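The abstract does not reproduce the paper's FPGA architecture, but the underlying digit-recurrence idea for a decimal logarithm can be sketched in software. The toy Python function below is an illustration only, not the proposed hardware design: raising the residual to the 10th power shifts the next digit of log10(x) into the integer position, and counting factor-of-10 reductions selects that digit.

```python
def log10_digits(x, ndigits):
    """Return the integer part of log10(x) and its first `ndigits`
    decimal digits, for x > 0, via a toy decimal digit recurrence."""
    int_part = 0
    while x >= 10:          # normalize x into [1, 10)
        x /= 10
        int_part += 1
    while x < 1:
        x *= 10
        int_part -= 1
    digits = []
    for _ in range(ndigits):
        # log10(x**10) = 10 * log10(x): the next digit moves to the
        # integer position, and dividing by 10 repeatedly extracts it.
        x = x ** 10
        d = 0
        while x >= 10:
            x /= 10
            d += 1
        digits.append(d)
    return int_part, digits
```

For example, `log10_digits(2.0, 6)` recovers the leading digits of log10(2) ≈ 0.301029…; in the paper's hardware, the per-digit selection is done with the scaled direct-selection logic rather than a full power computation, which is what keeps the critical path short.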
We consider the following problem: given three partitions A, B, C of a finite set Ω, do there exist two permutations α and β such that A, B, C are induced by α, β and αβ, respectively? This problem is NP-complete. However, it turns out that it can be solved by a polynomial-time algorithm when certain relations between the numbers of classes of A, B, C hold.
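For tiny ground sets the problem can be stated executably: a permutation induces the partition of Ω into its cycles, so a brute-force search over permutation pairs decides realizability. The sketch below is exponential-time and purely illustrative (the paper's polynomial-time algorithm for the special case is not reproduced); it uses the convention that αβ means α applied after β.

```python
from itertools import permutations

def cycle_partition(perm):
    """Partition of {0..n-1} into the orbits (cycles) of a permutation
    given as a tuple perm, where i maps to perm[i]."""
    n = len(perm)
    seen, parts = set(), set()
    for i in range(n):
        if i in seen:
            continue
        orbit, j = [], i
        while j not in seen:
            seen.add(j)
            orbit.append(j)
            j = perm[j]
        parts.add(frozenset(orbit))
    return parts

def realizable(A, B, C):
    """Do permutations alpha, beta exist whose cycle partitions are A, B,
    and (for alpha∘beta) C?  Brute force, only for tiny ground sets."""
    n = sum(len(block) for block in A)
    A, B, C = set(map(frozenset, A)), set(map(frozenset, B)), set(map(frozenset, C))
    for alpha in permutations(range(n)):
        if cycle_partition(alpha) != A:
            continue
        for beta in permutations(range(n)):
            if cycle_partition(beta) != B:
                continue
            comp = tuple(alpha[beta[i]] for i in range(n))  # alpha after beta
            if cycle_partition(comp) == C:
                return True
    return False
```

For instance, a 3-cycle composed with its inverse is the identity, so A = B = {Ω} with C all singletons is realizable, while A and B all singletons force αβ to be the identity, so C = {Ω} is not.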
The work reported in this paper refers to Massey's proof of the surface classification theorem based on the standard word-rewriting treatment of surfaces. We arrange this approach into a formal rewriting system R and provide a new version of Massey's argument. Moreover, we study the computational properties of two subsystems of R: R-or for dealing with words denoting orientable surfaces and R-nor for dealing with words denoting non-orientable surfaces. We show how such properties induce an alternative proof for the surface classification in which the basic homeomorphism between the connected sum of three projective planes and the connected sum of a torus with a projective plane is not required.
This paper studies the multiple-split load sharing mechanism of gears in the two-stage external meshing planetary transmission system of an aeroengine. Taking into account the eccentric error, gear tooth thickness error, pitch error, installation error, and bearing manufacturing error, we performed the equivalent-angle meshing error analysis for each, and we also considered the floating meshing error caused by the variation of the meshing backlash, which results from the simultaneous floating of all the gears. Finally, we obtained the comprehensive angular meshing error of the two-stage meshing line, established a refined mathematical model of the two-stage external three-split load sharing coefficient in consideration of displacement compatibility, obtained the curves of the load sharing coefficient and the load sharing characteristic curve of the fully floating multiple-split, multiple-stage system, and derived the variation law of the floating track and floating quantity of the center wheel. These results provide a theoretical basis for determining the load sharing coefficient, achieving reasonable load distribution, and controlling tolerances in aviation design and manufacturing.
Let f(x_1, …, x_k) be a Boolean function that k parties wish to collaboratively evaluate, where each x_i is a bit-string of length n. The i-th party knows every input argument except x_i, and each party has unlimited computational power. They share a blackboard, viewed by all parties, where they can exchange messages. The objective is to minimize the number of bits written on the board. We prove lower bounds of the form Ω(n · c^(−k)) on the number of bits that need to be exchanged in order to compute some (explicitly given) polynomial-time computable functions. Our bounds hold even if the parties only wish to have a 1% advantage at guessing the value of f on random inputs. The lower bound proofs are based on discrepancy upper bounds for specific functions over "cylinder intersection" sets. These results may be of independent interest. We give several applications of the lower bounds. The first application is a pseudorandom generator for Logspace. We explicitly construct (in polynomial time) pseudorandom sequences of length n from a random seed of length exp(c·√(log n)) that no Logspace Turing machine will be able to distinguish from truly random sequences. As a corollary we give an explicit construction of a universal traversal sequence of length exp(exp(c·√(log n))) for arbitrary undirected graphs on n vertices. We then apply the multiparty protocol lower bounds to derive several new time-space trade-offs. We give a tight time-space trade-off of the form TS = Θ(n^2) for general k-head Turing machines; the bounds hold for a function that can be computed in linear time and constant space by a (k+1)-head Turing machine. We also give a new length-width trade-off for oblivious branching programs; in particular, our bound implies new lower bounds on the size of arbitrary branching programs, or on the size of Boolean formulas (over an arbitrary finite base). Using universal hashing, Nisan has recently constructed considerably improved random generators for Logspace.
We extend the notion of linearity testing to the task of checking linear consistency of multiple functions. Informally, functions are "linear" if their graphs form straight lines on the plane. Two such functions are "consistent" if the lines have the same slope. We propose a variant of a test of M. Blum et al. (J. Comput. System Sci. 47 (1993), 549-595) to check the linear consistency of three functions f_1, f_2, f_3 mapping a finite Abelian group G to an Abelian group H: pick x, y ∈ G uniformly and independently at random and check whether f_1(x) + f_2(y) = f_3(x + y). We analyze this test for two cases: (1) G and H are arbitrary Abelian groups and (2) G = F_2^n and H = F_2. Questions bearing a close relationship to linear-consistency testing seem to have been implicitly considered in recent work on the construction of PCPs, in particular in the work of J. Hastad [9] (in "Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, El Paso, Texas, 4-6 May 1997," pp. 1-10). It is abstracted explicitly for the first time here. As an application of our results we give yet another new and tight characterization of NP, namely: for all ε > 0, NP = MIP_{1−ε,1/2}[O(log n), 3, 1]. That is, every language in NP has 3-prover 1-round proof systems in which the verifier tosses O(log n) coins and asks each of the three provers one question each. The provers respond with one bit each such that the verifier accepts instances of the language with probability at least 1 − ε and rejects noninstances with probability at least 1/2. Such a result is of some interest in the study of probabilistically checkable proofs. (C) 2001 Academic Press.
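The test itself is simple enough to run in code. The sketch below uses the toy choice G = H = Z_n (an assumption for illustration; the paper treats general Abelian groups and G = F_2^n, H = F_2), with hypothetical affine functions whose slopes either agree or disagree.

```python
import random

def linear_consistency_test(f1, f2, f3, n, trials=200, seed=0):
    """Linear-consistency test over G = H = Z_n: pick x, y uniformly at
    random and check f1(x) + f2(y) == f3(x + y) in Z_n.  Returns the
    fraction of passing trials."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(trials):
        x, y = rng.randrange(n), rng.randrange(n)
        if (f1(x) + f2(y)) % n == f3((x + y) % n):
            passed += 1
    return passed / trials

n = 101
f1 = lambda x: (7 * x) % n          # slope 7, offset 0
f2 = lambda y: (7 * y + 3) % n      # slope 7, offset 3
f3 = lambda z: (7 * z + 3) % n      # slope 7, offsets add up: consistent
g3 = lambda z: (8 * z + 3) % n      # slope 8: inconsistent with f1, f2
```

The consistent triple (f1, f2, f3) passes every trial, since 7x + 7y + 3 = 7(x+y) + 3 in Z_101; the triple (f1, f2, g3) passes only when x + y ≡ 0 (mod 101), i.e. with probability about 1/101, so the test separates the two cases sharply.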
The highlights of the Association for Computing Machinery's (ACM) annual report for FY06 are presented. The ACM Job Migration Task Force, one of the most ambitious initiatives the association has undertaken, released its highly anticipated report, Globalization and Offshoring of Software. ACM also began working with a number of organizations, including CRA, NCWIT and Microsoft, on issues related to the image and health of the computing profession. The ACM Policy and Procedures on Plagiarism also defines self-plagiarism and pledges the highest respect for maintaining intellectual property rights and confidentiality. The ACM Java Task Force, convened by the Education Board, concluded its work with a comprehensive report reviewing the Java language, APIs and tools from the perspective of introductory computing education.
This paper introduces and reviews stochastic phonographic transduction (SPT), a trainable ("data-driven") technique for letter-to-phoneme conversion based on formal language theory, as well as describing and detailing one particularly simple realization of SPT. The spellings and pronunciations of English words are modelled as the productions of a stochastic grammar, inferred from example data in the form of a pronouncing dictionary. The terminal symbols of the grammar are letter-phoneme correspondences, and the rewrite (production) rules of the grammar specify how these are combined to form acceptable English word spellings and their pronunciations. Given the spelling of a word as input, a pronunciation can then be produced as output by parsing the input string according to the letter-part of the terminals and selecting the "best" sequence of corresponding phoneme-parts according to some well-motivated criteria. Although the formalism is in principle very general, restrictive assumptions must be made if practical, trainable systems are to be realized. We have assumed at this stage that the grammar is regular. Further, word generation is modelled as a Markov process in which terminals (correspondences) are simply concatenated. The SPT learning task then amounts to the inference of a set of correspondences and estimation from the training data of their associated transition probabilities. Transduction to produce a pronunciation for a word given its spelling is achieved by Viterbi decoding, using a maximum likelihood criterion. Results are presented for letter-phoneme alignment and transduction for the dictionary training data, unseen dictionary words, unseen proper nouns and novel (pseudo-)words. Two different ways of inferring correspondences are described and compared. It is found that the provision of quite limited information about the alternating vowel/consonant structure of words aids the inference process significantly. Best transduction performance obtained
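The Viterbi decoding step can be illustrated with a toy correspondence table. The entries and probabilities below are invented for illustration and are not drawn from the paper's dictionary-trained grammar; the dynamic program picks the maximum-likelihood concatenation of letter-phoneme correspondences that exactly covers the spelling.

```python
import math

# Hypothetical correspondence table (letter chunk -> phoneme, probability).
CORRESPONDENCES = {
    "c":  [("k", 0.7), ("s", 0.3)],
    "a":  [("ae", 0.6), ("ei", 0.4)],
    "t":  [("t", 1.0)],
    "ch": [("ch", 0.8)],
    "h":  [("h", 1.0)],
}

def transduce(spelling):
    """Viterbi-style DP: best[i] holds the (log-probability, phonemes) of
    the most likely parse of spelling[:i] into concatenated correspondences."""
    best = {0: (0.0, [])}
    for i in range(1, len(spelling) + 1):
        for j in range(max(0, i - 2), i):      # letter chunks of length 1 or 2
            chunk = spelling[j:i]
            if j not in best or chunk not in CORRESPONDENCES:
                continue
            base_lp, base_ph = best[j]
            for phon, p in CORRESPONDENCES[chunk]:
                cand = (base_lp + math.log(p), base_ph + [phon])
                if i not in best or cand[0] > best[i][0]:
                    best[i] = cand
    return best.get(len(spelling), (None, None))[1]
```

For example, `transduce("chat")` prefers the two-letter correspondence "ch" over "c" followed by "h", because that path has the higher likelihood (0.8 versus 0.7 for the competing prefix).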