Earlier literature introduced a network algorithm for computing an exact test of independence in a two-way contingency table. This article adapts that algorithm to tests of quasi-symmetry in square tables. The algorithm is generally faster than competing Monte Carlo methods, and essentially eliminates the need for asymptotic approximation of P values for assessing goodness-of-fit of the quasi-symmetry model. A macro written for the R computing package is available for implementing the method.
Theorems are proved for the maxima and minima of ∏Ri! ∏Cj! / (T! ∏yij!) over r × c contingency tables Y = (yij) with row sums R1,…,Rr, column sums C1,…,Cc, and grand total T. These results are implemented in the network algorithm of Mehta and Patel (1983) for computing the P-value of Fisher's exact test for unordered r × c contingency tables. The decrease in computing time can be substantial when the column sums are very different.
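For readers unfamiliar with the quantity being bounded: ∏Ri! ∏Cj! / (T! ∏yij!) is the multiple hypergeometric probability of a table Y given its margins, and Fisher's exact P-value sums these probabilities over all tables with the same margins that are no more probable than the observed one. The Python sketch below is an illustration only (not the network algorithm of Mehta and Patel); it simply evaluates this probability for a given table.

```python
# Minimal sketch: evaluate prod_i R_i! * prod_j C_j! / (T! * prod_ij y_ij!),
# the multiple hypergeometric probability of a table given its margins,
# i.e. the quantity whose maxima and minima the theorems above concern.
from math import factorial, prod

def table_probability(y):
    """y: list of rows of a non-negative integer r x c table."""
    row_sums = [sum(row) for row in y]
    col_sums = [sum(col) for col in zip(*y)]
    total = sum(row_sums)
    numerator = prod(factorial(r) for r in row_sums) * prod(factorial(c) for c in col_sums)
    denominator = factorial(total) * prod(factorial(cell) for row in y for cell in row)
    return numerator / denominator

# Example: a 2 x 3 table with row sums (7, 5) and column sums (4, 5, 3).
print(table_probability([[3, 2, 2],
                         [1, 3, 1]]))
```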
ISBN: (Print) 9781450391290
As a class of approximate measurement approaches, sketching algorithms have significantly improved the estimation of network flow information using limited resources. While these algorithms enjoy sound error-bound analysis under worst-case scenarios, their actual errors can vary significantly with the incoming flow distribution, making their traditional error bounds too "loose" to be useful in practice. In this paper, we propose a simple yet rigorous error estimation method to more precisely analyze the errors for posterior sketch queries by leveraging the knowledge from the sketch counters. This approach will enable network operators to understand how accurate the current measurements are and make appropriate decisions accordingly (e.g., identify potential heavy users or answer "what-if" questions to better provision resources). Theoretical analysis and trace-driven experiments show that our estimated bounds on sketch errors are much tighter than previous ones and match the actual error bounds in most cases.
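As a concrete illustration of how loose worst-case guarantees can be, the sketch below builds a minimal Count-Min sketch, one common sketching algorithm (the paper's framework is more general, and its posterior error estimator is not shown here), and compares the classical additive worst-case bound with the errors actually observed on a skewed stream; all parameters are illustrative choices.

```python
# Illustration only: a minimal Count-Min sketch. It contrasts the classical
# worst-case additive bound eps * N, with eps = e / width, against the
# per-item errors actually observed on a skewed stream.
import math, random
from collections import Counter

class CountMin:
    def __init__(self, width, depth, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.seeds = [rng.getrandbits(32) for _ in range(depth)]
        self.rows = [[0] * width for _ in range(depth)]

    def _buckets(self, x):
        return [hash((s, x)) % self.width for s in self.seeds]

    def add(self, x, count=1):
        for row, b in zip(self.rows, self._buckets(x)):
            row[b] += count

    def query(self, x):
        # Count-Min never underestimates, so query(x) >= true count of x.
        return min(row[b] for row, b in zip(self.rows, self._buckets(x)))

random.seed(1)
stream = [int(random.paretovariate(1.2)) % 1000 for _ in range(50_000)]  # skewed item ids
truth = Counter(stream)

cm = CountMin(width=256, depth=4)
for x in stream:
    cm.add(x)

n = len(stream)
worst_case = math.e / cm.width * n                      # classical additive bound
actual = max(cm.query(x) - truth[x] for x in truth)     # largest observed error
print(f"worst-case additive bound: {worst_case:.1f}, largest observed error: {actual}")
```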
ISBN: (Print) 3540734198
We study the problem of the amount of information (advice) about a graph that must be given to its nodes in order to achieve fast distributed computations. The required size of the advice provides a measure of the information sensitivity of a network problem. A problem is information sensitive if little advice is enough to solve the problem rapidly (i.e., much faster than in the absence of any advice), whereas it is information insensitive if it requires giving a lot of information to the nodes in order to ensure fast computation of the solution. In this paper, we study the information sensitivity of distributed graph coloring.
ISBN: (Print) 9781467364324
Operators are continuously expanding their Internet Protocol Television (IPTV) service, driven both by the growth in the number of subscribed users and by the growing variety of devices used by customers. When a user connects to the IPTV service with multiple devices simultaneously, each connection consumes its own bandwidth. This can create serious problems for operators, such as insufficient bandwidth and a drop in Quality of Service. This paper presents a study carried out on a basic topology over which a video is broadcast while varying several network parameters and network conditions. The received video quality is assessed using the Mean Opinion Score. Taking all the gathered information into account, we propose a new protocol and algorithm to improve the Quality of Experience of IPTV service customers.
This paper discusses the simultaneous approach for generating two-stage and multistage tests from an item bank. Most previous test generation problems involved binary programming models, and the efficiency of available solution algorithms is the major concern for these problems. Therefore, this study considers two important aspects of the solution process: alternative ways to formulate the mathematical models and alternative solution algorithms. Based on these two aspects, the contribution of this paper is twofold. First, two binary programming models with a special network structure that can be exploited computationally are presented for modeling these problems: the first model maximizes the test information function at one specified ability point, and the second matches a target test information function at several specified ability points as closely as possible. Second, an efficient special-purpose network algorithm is used to solve these two models. Hence, the test construction process is improved in terms of both computational effort and test quality. An empirical study shows results in line with these two criteria.
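As a hedged sketch of what the core of the first model looks like, one standard way to write an item-selection program that maximizes the test information function at a single ability point is given below; the paper's actual formulations additionally encode the special network structure and the stage/panel constraints of two-stage and multistage designs, which are omitted here.

```latex
% Hedged sketch of the core of the first model: choose n items from a bank of
% I items so as to maximize the test information function at one ability
% point \theta_0, where I_i(\theta) is the Fisher information of item i at
% ability \theta and x_i = 1 iff item i is selected.
\[
  \max_{x \in \{0,1\}^I} \ \sum_{i=1}^{I} I_i(\theta_0)\, x_i
  \qquad \text{subject to} \qquad \sum_{i=1}^{I} x_i = n .
\]
```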
ISBN: (Print) 9781457707391
We study a capacitated dynamic lot sizing problem with a special cost structure involving fixed setup cost, freight cost, production cost, and inventory holding cost. The freight cost is proportional to the number of containers used. We investigate the case in which the maximal production capacity in one period is an integral multiple of the capacity of a container and reveal the special structure of the optimal solution. We transform the lot sizing problem into a shortest path problem and propose a network algorithm to solve it. The T-period problem is solved in O(T^4) time by the network algorithm.
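To make the shortest-path viewpoint concrete, the following Python sketch solves the simpler uncapacitated analogue, in which node t means "period t starts with zero inventory" and an arc (s, t) means one production run in period s covers the demand of periods s through t-1, with arc cost equal to setup plus container freight plus production plus holding. The paper's capacitated model and its O(T^4) network algorithm are more involved; this is an illustration of the reformulation idea only.

```python
# Hedged sketch of the Wagner-Whitin-style shortest-path reformulation for
# the uncapacitated case with container-proportional freight. best[t] is the
# minimum cost to satisfy the demand of periods 0..t-1.
import math

def lot_sizing_shortest_path(demand, setup, freight, container, unit_cost, hold):
    T = len(demand)
    best = [float("inf")] * (T + 1)
    best[0] = 0.0
    for s in range(T):                      # production period s
        for t in range(s + 1, T + 1):       # arc (s, t): cover demand[s:t]
            qty = sum(demand[s:t])
            containers = math.ceil(qty / container)
            holding = sum(hold * demand[k] * (k - s) for k in range(s, t))
            arc = setup + freight * containers + unit_cost * qty + holding
            best[t] = min(best[t], best[s] + arc)
    return best[T]

# Example: 4 periods of demand, container capacity 50 (illustrative numbers).
print(lot_sizing_shortest_path([30, 20, 60, 40],
                               setup=100, freight=40, container=50,
                               unit_cost=2, hold=1))
```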
ISBN: (Print) 9781467364324
Currently, one of the main problems facing video broadcast on the Internet is the limited available bandwidth. Media elements contain large amounts of multiplexed data, and encoding them is necessary for smooth delivery through the new Application Programming Interface (API) in HTML5 or through third-party plugins. The Internet offers wide compatibility with many formats and video codecs, but delivery focuses mainly on the three principal ones: MPEG-4, Ogg and WebM. In this paper, we study different encodings of a video as a function of bitrate and encoding time. This study allows us to characterize each of these three video formats. We then use this information to design a server-side algorithm that sends the appropriate video type according to the studied characteristics.
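A hypothetical example of the kind of server-side selection rule described above is sketched below; the per-format bitrate and encoding-time figures are made-up placeholders, not the measurements reported in the paper.

```python
# Purely illustrative, hypothetical sketch of a server-side selection rule:
# pick among MPEG-4, Ogg and WebM based on the client's available bandwidth
# and on per-format characteristics measured beforehand. The numbers below
# are placeholders, not the paper's results.
FORMAT_PROFILES = {
    # format: (typical bitrate in kbit/s for acceptable quality, relative encoding time)
    "mpeg-4": (1200, 1.0),
    "webm":   (1000, 1.6),
    "ogg":    (1100, 1.3),
}

def choose_format(client_bandwidth_kbps, live_stream=False):
    """Prefer formats whose typical bitrate fits the client's bandwidth;
    for live streams, break ties in favor of faster encoding."""
    feasible = [(name, br, enc) for name, (br, enc) in FORMAT_PROFILES.items()
                if br <= client_bandwidth_kbps]
    if not feasible:
        return min(FORMAT_PROFILES, key=lambda f: FORMAT_PROFILES[f][0])
    key = (lambda x: x[2]) if live_stream else (lambda x: -x[1])
    return min(feasible, key=key)[0]

print(choose_format(1100))                     # highest-bitrate format that fits
print(choose_format(1500, live_stream=True))   # fastest-encoding feasible format
```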
ISBN: (Print) 9781728183268
After the seminal paper by L. Lamport, which introduced (scalar) logical clocks, several other data structures for keeping track of causality in distributed systems have been proposed, including vector and matrix clocks. These are able to capture causal dependencies with more detail but, unfortunately, also consume a substantially larger amount of network bandwidth and storage space than Lamport clocks. This raises the question of whether the benefits of these more complex structures are worth their cost. We address this question in the context of partially replicated systems. We show that for some workloads the use of more expensive clocks does bring significant benefits and that for other workloads no visible benefits can be observed. The paper provides a characterization of the scenarios where each type of clock is more beneficial, helping designers to develop more efficient distributed storage systems.
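For readers unfamiliar with the two data structures, the sketch below uses the standard textbook definitions (not anything specific to the paper's partially replicated setting) to show the trade-off: a vector clock with one entry per process can detect that two events are concurrent, whereas a single Lamport scalar cannot.

```python
# Minimal sketch of Lamport clocks vs. vector clocks for two processes,
# using the standard definitions only.

def lamport_recv(local, received):
    # Lamport rule on receive: local clock becomes max(local, received) + 1.
    return max(local, received) + 1

def vc_tick(vc, pid):
    vc = list(vc); vc[pid] += 1; return vc

def vc_merge(local, received, pid):
    # Vector-clock rule on receive: entry-wise max, then tick own entry.
    return vc_tick([max(a, b) for a, b in zip(local, received)], pid)

def vc_concurrent(a, b):
    return (not all(x <= y for x, y in zip(a, b)) and
            not all(y <= x for x, y in zip(a, b)))

# Two processes each perform one local event without communicating.
vc_p0 = vc_tick([0, 0], pid=0)   # [1, 0]
vc_p1 = vc_tick([0, 0], pid=1)   # [0, 1]
print(vc_concurrent(vc_p0, vc_p1))   # True: vector clocks detect concurrency

lamport_p0, lamport_p1 = 1, 1
print(lamport_p0, lamport_p1)    # equal scalars: Lamport timestamps alone
                                 # cannot distinguish concurrency from causality
```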
The vast and complex wealth of information available to researchers often leads to a systematic review, which involves a detailed and comprehensive plan and search strategy with the goal of identifying, appraising, and synthesizing all relevant studies on a particular topic. A meta-analysis, conducted ideally as part of a comprehensive systematic review, statistically synthesizes evidence from multiple independent studies to produce one overall conclusion. The increasingly widespread use of meta-analysis has led to growing interest in meta-analytic methods for rare events and sparse data. Conventional approaches tend to perform very poorly in such settings. Recent work in this area has provided options for sparse data, but these are still often hampered when heterogeneity across the available studies differs based on treatment group. Heterogeneity arises when participants within a study are more correlated than participants across studies, often stemming from differences in the administration of the treatment, study design, or measurement of the outcome. We propose several new exact methods that accommodate this common contingency, providing more reliable statistical tests when such patterns of heterogeneity are observed. First, we develop a permutation-based approach that can also be used as a basis for computing exact confidence intervals when estimating the effect size. Second, we extend the permutation-based approach to the network meta-analysis setting. Third, we develop a new exact confidence distribution approach for effect size estimation. We show these new methods perform markedly better than traditional methods when events are rare and heterogeneity is present.
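To illustrate the general permutation idea only (the authors' exact methods, their confidence intervals, and the network meta-analysis and confidence distribution extensions are substantially more involved), the sketch below permutes treatment/control labels within each study and recomputes a pooled statistic to obtain a permutation P-value; the data and the choice of statistic are illustrative.

```python
# Purely illustrative sketch of a within-study label-permutation test for a
# pooled effect in a meta-analysis of rare binary events. Labels are permuted
# within each study; the P-value is the share of permutations whose pooled
# statistic is at least as extreme as the observed one.
import random

def pooled_risk_difference(studies):
    # studies: list of (treatment_outcomes, control_outcomes), each a 0/1 list
    diffs = [sum(t) / len(t) - sum(c) / len(c) for t, c in studies]
    return sum(diffs) / len(diffs)

def permutation_pvalue(studies, n_perm=10_000, seed=0):
    rng = random.Random(seed)
    observed = abs(pooled_risk_difference(studies))
    hits = 0
    for _ in range(n_perm):
        permuted = []
        for t, c in studies:
            pool = list(t) + list(c)
            rng.shuffle(pool)                         # permute labels within the study
            permuted.append((pool[:len(t)], pool[len(t):]))
        if abs(pooled_risk_difference(permuted)) >= observed:
            hits += 1
    return hits / n_perm

# Example: three small studies with rare events (1 = event occurred).
studies = [([1, 0, 0, 0, 1], [0, 0, 0, 0, 0]),
           ([0, 1, 0, 0],    [0, 0, 0, 0]),
           ([1, 0, 0, 0, 0], [0, 0, 0, 0, 1])]
print(permutation_pvalue(studies, n_perm=2000))
```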