We consider here the use of the solution of the first multiplicity-of-types equation to compute exact probability distributions of statistical values, together with their exact approximations. As exact approximations we take Δ-exact distributions, which differ from the exact distributions by no more than a predetermined, arbitrarily small value Δ. We show that the method for computing exact distributions rests on enumerating the elements of the search area for solutions of a linear first multiplicity-of-types equation composed of multiplicity-type vectors; each such element gives the number of occurrences of elements of a certain type (a sign of the alphabet) in the sample under consideration. We also show that a method restricting the search area for solutions of the first multiplicity-of-types equation applies to computing the exact approximations. We give an expression for the algorithmic complexity of computing exact distributions by the first-multiplicity solution method; this expression is finite and, for each alphabet cardinality, determines the maximum sample size for which exact distributions can be computed with limited computing power. To estimate the algorithmic complexity of computing the exact approximations, we use an expression, obtained here for the first time, for the number of solutions of the first multiplicity equation under a constraint on the values of the coordinates of the solution vectors, and from it we derive an expression for the algorithmic complexity of computing the exact approximations by the constrained solution method. The maximal-frequency statistic serves as the parameter restricting the solution-vector coordinates; the probability of exceeding it is less than a predetermined, arbitrarily small value Δ. This permits calculating the exact approximations of …
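The enumeration underlying both methods can be made concrete. The sketch below is a minimal illustration, not the paper's algorithm: it enumerates all solutions of n_1 + ... + n_k = n with 0 <= n_i <= m (the alphabet size k, sample size n, and coordinate bound m are hypothetical parameters) and cross-checks the count against the standard inclusion-exclusion formula for bounded compositions, the kind of closed-form expression used above to bound the complexity of computing exact approximations.

```python
from math import comb

def bounded_solutions(n, k, m):
    """Yield all vectors (n_1, ..., n_k) with n_1 + ... + n_k = n and 0 <= n_i <= m."""
    if k == 1:
        if 0 <= n <= m:
            yield (n,)
        return
    for first in range(min(n, m) + 1):
        for rest in bounded_solutions(n - first, k - 1, m):
            yield (first,) + rest

def count_bounded_solutions(n, k, m):
    """Inclusion-exclusion count of the same solution set:
    sum_j (-1)^j * C(k, j) * C(n - j*(m+1) + k - 1, k - 1)."""
    total, j = 0, 0
    while j <= k and n - j * (m + 1) >= 0:
        total += (-1) ** j * comb(k, j) * comb(n - j * (m + 1) + k - 1, k - 1)
        j += 1
    return total

if __name__ == "__main__":
    n, k, m = 10, 4, 5  # hypothetical: sample size 10, alphabet of 4 types, bound 5
    sols = list(bounded_solutions(n, k, m))
    assert len(sols) == count_bounded_solutions(n, k, m)
    print(len(sols), "bounded solutions; e.g.", sols[0])
```

Restricting the coordinates by a bound m shrinks the search area from the C(n + k - 1, k - 1) unbounded solutions to the bounded count above, which is what makes computing the Δ-exact approximation cheaper than the exact distribution.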
Author: Djelloul, Selma (LRI, UMR 8623, Université de Paris-Sud, 91405 Orsay Cedex, Bât 490, France)
In this paper we describe an algorithmic construction that, given a tree-decomposition of a graph G and a path-decomposition of a graph H, provides a tree-decomposition of the Cartesian product of G and H. From the la...
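For context (the abstract above is truncated), here is a minimal sketch of the product object the construction operates on. It builds the Cartesian product of two graphs from adjacency dicts; it is not the paper's decomposition algorithm, which the excerpt does not reproduce.

```python
def cartesian_product(G, H):
    """Cartesian product of two graphs given as adjacency dicts {vertex: set_of_neighbours}.

    (u, v) is adjacent to (u', v') iff u == u' and vv' is an edge of H,
    or v == v' and uu' is an edge of G.
    """
    prod = {(u, v): set() for u in G for v in H}
    for u in G:
        for v in H:
            for v2 in H[v]:      # a copy of H hanging at each vertex of G
                prod[(u, v)].add((u, v2))
            for u2 in G[u]:      # a copy of G hanging at each vertex of H
                prod[(u, v)].add((u2, v))
    return prod

# Example: P3 (a path) times K2 gives a 3x2 grid (the "ladder" graph).
P3 = {0: {1}, 1: {0, 2}, 2: {1}}
K2 = {"a": {"b"}, "b": {"a"}}
grid = cartesian_product(P3, K2)
print(len(grid), "vertices;", sum(len(nbrs) for nbrs in grid.values()) // 2, "edges")
```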
ISBN (Print): 9783642257339
In the present paper we introduce a new sorting algorithm named Partition Sort and perform an empirical study of it together with the necessary theoretical analysis. The algorithm was subjected to data drawn from various standard probability distributions. Average-case analysis revealed that for some standard distributions it worked even more efficiently than Hoare's popular quicksort algorithm. The relative order of execution time for the different distributions, in descending order, came out to be: Discrete Uniform < Binomial < Poisson < Standard Normal < Continuous Uniform.
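The excerpt does not reproduce the algorithm itself, so the following is a hypothetical sketch: a quicksort-family, Hoare-partitioning sort standing in for Partition Sort, plus a small timing harness over several of the distributions named above. All sizes and parameters are assumptions, and Poisson is omitted because Python's standard library has no generator for it.

```python
import random
import timeit

def partition_sort(a, lo=0, hi=None):
    """In-place partition-based sort (quicksort-style, Hoare partitioning)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:
        while a[i] < pivot: i += 1
        while a[j] > pivot: j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1; j -= 1
    partition_sort(a, lo, j)
    partition_sort(a, i, hi)
    return a

samples = {  # hypothetical sample generators for the distributions in the study
    "Discrete Uniform": lambda n: [random.randint(0, 9) for _ in range(n)],
    "Binomial": lambda n: [sum(random.random() < 0.5 for _ in range(20)) for _ in range(n)],
    "Standard Normal": lambda n: [random.gauss(0, 1) for _ in range(n)],
    "Continuous Uniform": lambda n: [random.random() for _ in range(n)],
}
for name, gen in samples.items():
    data = gen(20_000)
    t = timeit.timeit(lambda: partition_sort(data.copy()), number=5)
    print(f"{name:20s} {t:.3f}s")
```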
This is a contribution to the discussion of the paper by Dorie et al. (Statist. Sci. 34 (2019) 43-68), which reports the lessons learned from the 2016 Atlantic Causal Inference Conference Competition. My comments strongly support the authors' focus on empirical evaluation, drawing on examples and experience from machine learning research, particularly the problem of algorithmic complexity. I argue that even broader and deeper empirical evaluation should be undertaken by researchers who study causal inference. Finally, I highlight a few key conclusions that suggest where future research should focus.
ISBN (Print): 9783031124297; 9783031124280
A great number of methods and accounts of rationality place some form of Bayesian inference at their foundations. Yet Bayes' rule, because it relies upon probability theory, requires specific axioms to hold (e.g. a measurable space of events). This short document hypothesizes that Bayes' rule can be seen as a specific instance of a more general inferential template that can also be expressed in terms of algorithmic complexities, namely through the measure of unexpectedness proposed by Simplicity Theory.
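The hypothesized correspondence rests on the standard identification of code length with negative log-probability. The sketch below illustrates only that identification, not Simplicity Theory's own definitions: in log space, Bayes' rule becomes an additive trade-off of complexity-like quantities (the toy probabilities are arbitrary).

```python
import math

# Bayes' rule: P(h|e) = P(e|h) * P(h) / P(e).
# Writing complexity-like code lengths C(x) = -log2 P(x), it becomes additive:
#     C(h|e) = C(e|h) + C(h) - C(e)
# which has the shape of a trade-off between algorithmic complexities.
p_h, p_e_given_h, p_e = 0.3, 0.8, 0.5   # hypothetical toy probabilities
p_h_given_e = p_e_given_h * p_h / p_e

C = lambda p: -math.log2(p)
lhs = C(p_h_given_e)
rhs = C(p_e_given_h) + C(p_h) - C(p_e)
assert math.isclose(lhs, rhs)
print(f"C(h|e) = {lhs:.4f} bits = C(e|h) + C(h) - C(e)")
```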
ISBN (Print): 9783030112929; 9783030112912
Inverse kinematic analysis is required in many technical applications. Due to the nonlinearity of the system, resolving the Inverse Kinematics (IK) of a robotic system is a challenging problem. In this chapter, the IK problem is addressed for a medical application, treating the human lower limb as a three-link robotic system subject to physiological constraints. A system analysis is carried out in three-dimensional space. The developed forward kinematic model is used to define the feasible workspace of the human leg in the considered configuration. A constructive algorithm to compute the optimal IK of the robot system is outlined, based on a human performance measure that incorporates a new objective function of musculoskeletal discomfort. The effectiveness of the proposed approach is tested, and the algorithmic complexity is finally discussed.
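A minimal numerical sketch in the same spirit: IK posed as constrained optimisation over a planar three-link chain, minimising a toy discomfort objective while reaching a target. The link lengths, neutral angles, joint limits, and quadratic discomfort penalty are all assumptions for illustration, not the chapter's model.

```python
import numpy as np
from scipy.optimize import minimize

L = np.array([0.45, 0.42, 0.20])          # hypothetical link lengths (thigh, shank, foot), metres
NEUTRAL = np.radians([0.0, -20.0, 10.0])  # hypothetical "comfortable" joint angles
BOUNDS = [np.radians((-45, 120)), np.radians((-140, 0)), np.radians((-30, 45))]  # joint limits

def fk(q):
    """Forward kinematics of a planar 3-link chain: end-effector position (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def discomfort(q):
    """Toy musculoskeletal-discomfort objective: squared deviation from neutral posture."""
    return float(np.sum((q - NEUTRAL) ** 2))

def solve_ik(target, q0=None):
    """IK as constrained optimisation: reach `target` while minimising discomfort."""
    q0 = NEUTRAL if q0 is None else q0
    reach = {"type": "eq", "fun": lambda q: fk(q) - target}
    res = minimize(discomfort, q0, bounds=BOUNDS, constraints=[reach])
    return res.x if res.success else None

q = solve_ik(np.array([0.6, -0.5]))
print("joint angles (deg):", None if q is None else np.degrees(q).round(1))
```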
How does children's limited processing capacity affect the cultural transmission of complex information? We show that, over the course of iterated reproduction of two-dimensional random dot patterns, transmission accuracy increased to a similar extent in 5- to 8-year-old children and in adults, whereas algorithmic complexity decreased faster in children. Thus, children require more structure to render complex inputs learnable. In line with the Less-Is-More hypothesis, we interpret this as evidence that children's processing limitations affecting working memory capacity and executive control constrain the ability to represent and generate complexity, which, in turn, facilitates the emergence of structure. This underscores the importance of investigating the role of children in the transmission of complex cultural traits. (C) 2014 Elsevier B.V. All rights reserved.
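Algorithmic (Kolmogorov) complexity is uncomputable, so work of this kind relies on computable proxies; one common stand-in is compressed length. The sketch below applies that proxy to serialized 0/1 grids (the grid size and the two example patterns are hypothetical, not the study's stimuli):

```python
import random
import zlib

def complexity_proxy(grid):
    """Compressed length of a serialized 0/1 grid: a crude upper-bound proxy
    for algorithmic complexity (shorter output means more structure)."""
    blob = "".join("".join(map(str, row)) for row in grid).encode()
    return len(zlib.compress(blob, level=9))

def random_grid(n, density=0.5):
    return [[int(random.random() < density) for _ in range(n)] for _ in range(n)]

def structured_grid(n):
    """A highly regular pattern: vertical stripes."""
    return [[j % 2 for j in range(n)] for _ in range(n)]

n = 32
print("random pattern    :", complexity_proxy(random_grid(n)), "bytes")
print("structured pattern:", complexity_proxy(structured_grid(n)), "bytes")
# Iterated reproduction that regularizes patterns drives this proxy downward.
```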
ISBN (Print): 9783642318368
It is a well-known fact that the use of Bayesian Networks (BNs) as classifiers in different fields of application has recently witnessed noticeable growth. Yet the application of Naive Bayes, and even of augmented Naive Bayes, to classifier-structure learning suffers from certain limits, which explains practitioners' resort to other, more sophisticated types of algorithms. The use of such algorithms, however, raises the problem of a super-exponential increase in the computational complexity of learning the Bayesian classifier structure as the number of descriptive variables grows. In this context, the present work's major objective is to set up a further solution that remedies the intricate algorithmic complexity incurred when learning the structure of Bayesian classifiers with such sophisticated algorithms. The paper is organized as follows. We first propose a novel approach designed to reduce the algorithmic complexity, without engendering any loss of information, when learning the structure of a Bayesian classifier. We then test our approach on a car-diagnosis database and a lymphography-diagnosis database. We close with an exposition of the interest of the conducted work.
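The super-exponential blow-up referred to is a standard combinatorial fact, independent of the paper's own approach: the number of labelled DAGs on n nodes satisfies Robinson's recurrence, evaluated below.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_dags(n):
    """Robinson's recurrence for the number of labelled DAGs on n nodes:
    a(n) = sum_{k=1..n} (-1)^(k+1) * C(n, k) * 2^(k*(n-k)) * a(n-k), a(0) = 1."""
    if n == 0:
        return 1
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
               for k in range(1, n + 1))

for n in range(1, 9):
    print(n, num_dags(n))
# 1, 3, 25, 543, 29281, 3781503, ... -- exhaustive structure search quickly becomes infeasible.
```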
ISBN (Print): 9780769543642
We present a robust localization algorithm for multiple radiation sources using a network of sensors under random underlying physical processes and measurement errors. The proposed solution uses a hybrid formulation of particle filter and mean-shift techniques to achieve several important features that address major challenges faced by existing localization algorithms. First, our algorithm is able to maintain a constant number of estimation (source) parameters even as the number of radiation sources K increases. In existing algorithms, the number of estimation parameters is proportional to K and thus the algorithm complexity grows exponentially with K. Second, to decide the number of sources K, existing algorithms either require the information to be known in advance or rely on expensive statistical estimations that do not scale well with K. Instead, our algorithm efficiently learns the number of sources from the estimated source parameters. Third, when obstacles are present, our algorithm can exploit the obstacles to achieve better isolation between the source signatures, thereby increasing the localization accuracy in complex deployment environments. In contrast, incompletely specified obstacles will significantly degrade the accuracy of existing algorithms due to their unpredictable effects on the source signatures. We present extensive simulation results to demonstrate that our algorithm has robust performance in complex deployment environments, and its efficiency is scalable to many radiation sources in these environments.
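A minimal sketch of the mean-shift ingredient alone, not of the full particle-filter hybrid: flat-kernel mean shift over a 2-D particle cloud, where the number of recovered modes estimates the number of sources K. The synthetic two-source cloud, bandwidth, and tolerances are assumptions.

```python
import numpy as np

def mean_shift_modes(points, bandwidth=0.5, tol=1e-4, max_iter=200):
    """Flat-kernel mean shift: move every point to the mean of its neighbours
    until convergence, then merge nearby limits into modes (source estimates)."""
    shifted = points.copy()
    for _ in range(max_iter):
        moved = 0.0
        for i, p in enumerate(shifted):
            nbrs = points[np.linalg.norm(points - p, axis=1) < bandwidth]
            new = nbrs.mean(axis=0)
            moved = max(moved, np.linalg.norm(new - p))
            shifted[i] = new
        if moved < tol:
            break
    modes = []
    for p in shifted:
        if not any(np.linalg.norm(p - m) < bandwidth / 2 for m in modes):
            modes.append(p)
    return np.array(modes)  # the number of modes estimates the number of sources K

rng = np.random.default_rng(0)
particles = np.vstack([rng.normal((0, 0), 0.2, (200, 2)),   # particles near source 1
                       rng.normal((3, 1), 0.2, (200, 2))])  # particles near source 2
print(mean_shift_modes(particles).round(2))
```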
ISBN (Print): 9781479934591
Data delivery across multi-hop low-power and lossy networks (LLNs) is a challenging task: devices participating in such a network have strictly limited computational power and storage, and the communication channels are of low capacity, time-varying and with high loss rates. Consequently, routing protocols finding paths through such a network must be frugal in their control traffic and state requirements, as well as in algorithmic complexity; and even once paths have been found, they may be usable only intermittently, or for a very short time, due to changes on the channel. Routing protocols exist for such networks, balancing reactivity to topology and channel variation against frugality in resource requirements. Complementary components to routing protocols for such LLNs also exist, intended not to manage global topology, but to react rapidly to local data delivery failures and to (attempt to) deliver data successfully while giving the routing protocol time to recover globally from such a failure. Specifically, this paper studies the "Depth-First Forwarding (DFF) in Unreliable Networks" protocol, standardised within the IETF in June 2013. Moreover, this paper proposes optimisations to that protocol, denoted DFF++, for improved performance and reactivity, whilst remaining fully interoperable with DFF as standardised and requiring neither additional data sets nor additional protocol signals to be generated.
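A toy model of the depth-first idea only (loop-free exploration with back-tracking on per-hop delivery failure); the topology, failed-link set, and neighbour ordering are hypothetical, and the sketch omits DFF's actual packet fields and the details of RFC 6971:

```python
def dff_forward(graph, src, dst, failed_links=frozenset()):
    """Depth-first forwarding sketch: try neighbours in order, back-track on
    failure, and never revisit a node (a simplified model of RFC 6971's idea)."""
    visited = set()

    def try_node(node, path):
        if node == dst:
            return path
        visited.add(node)
        for nxt in graph[node]:
            if nxt in visited:
                continue
            if (node, nxt) in failed_links:   # lossy link: delivery attempt fails
                continue
            result = try_node(nxt, path + [nxt])
            if result:                        # success propagated back
                return result
        return None                           # dead end: back-track to predecessor

    return try_node(src, [src])

topology = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dff_forward(topology, "A", "D", failed_links={("B", "D")}))
# -> ['A', 'C', 'D']: the A-B-D path fails at B->D, so forwarding back-tracks via C.
```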