Congestion is a major Internet problem. TCP has evolved in attempting to deal with this and now employs congestion control algorithms that effectively limit the bandwidth available to any one connection. However, as TCP is not the only transport protocol used on the Internet, the growth of non-TCP traffic has led to the formulation of TCP-friendly formulae. It is recommended that all best-effort traffic conform to these formulae, and they act as a benchmark against which the success of any congestion control algorithm can be judged. Unfortunately, key assumptions behind the formulation of TCP's algorithms - low bandwidth and bulk data transfers - are often no longer valid. In this paper it is shown that the combination of increasing bandwidth and short web transfers means that a significant amount of TCP traffic fails to obtain the bandwidth implied by the TCP-friendly formulae. An alternative, measurement-based approach that predicts a fairer starting window size for a connection is presented and evaluated. Information about the characteristics of particular network paths is dynamically maintained and used to suggest a fairer starting window size for new connections. Minor modifications to TCP allow these suggestions to be used to set start-up control variables, and thereby strengthen the negative correlation between the bandwidth used by a connection and the level of congestion on its network path. The measurement-based approach is realised through the use of a Location Information Server (LIS). The LIS performs centralised passive monitoring of transport headers in order to derive network-level path information. Location Information Packets (LIPs) are used to communicate suggested start-up variables to local hosts. The design, implementation and evaluation of an LIS, LIP and participating host's TCP software are presented.
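A minimal sketch of the core idea - sizing a new connection's starting window from cached per-path measurements - is given below. The data structure, field names, and clamping policy are assumptions for illustration, not the paper's actual LIS/LIP design.

```python
# Hypothetical sketch: deriving a suggested TCP initial window from cached
# per-path measurements, in the spirit of the LIS/LIP idea described above.
# Field names and the clamping policy are assumptions, not the paper's design.

from dataclasses import dataclass

@dataclass
class PathInfo:
    """Per-destination-network state a passive monitor might maintain."""
    fair_share_bps: float   # estimated fair bandwidth share on the path (bits/s)
    rtt_s: float            # smoothed round-trip time (seconds)
    loss_rate: float        # recent loss event rate on the path

MSS_BYTES = 1460

def suggest_initial_window(info: PathInfo,
                           min_segments: int = 2,
                           max_segments: int = 64) -> int:
    """Suggest an initial congestion window (in segments) for a new connection.

    Idea: size the window to roughly one fair-share bandwidth-delay product,
    and back off when recent measurements indicate congestion on the path.
    """
    bdp_bytes = info.fair_share_bps * info.rtt_s / 8.0
    segments = int(bdp_bytes // MSS_BYTES)
    if info.loss_rate > 0.01:       # be conservative on a congested path
        segments //= 2
    return max(min_segments, min(segments, max_segments))

if __name__ == "__main__":
    # Example: a 2 Mbit/s fair share with a 100 ms RTT and low loss.
    print(suggest_initial_window(PathInfo(2e6, 0.1, 0.001)))  # -> 17
```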
Most fast block matching algorithms ignore the efficiency of motion compensation within each checking step. In order to achieve better compensation performance, the limited computational complexity should be allocated more carefully to each block. This means that a fast block matching algorithm can be viewed as a kind of rate-distortion optimization problem. The complexity-distortion optimal fast block matching algorithm should find the maximum quality of the compensated image under a target computational complexity. In order to approach the optimal complexity-distortion solution, several strategies are developed. For example, a domination-based motion vector prediction technique is developed to set the initial motion vector for each block. A predictive complexity-distortion benefit list is established to predict the compensation benefit for each block. Also, a three-level pattern search is employed to check candidate motion vectors. Experimental results show that our proposed algorithm significantly outperforms the three-step search. For example, in "Salesman," the average number of checking points per block is 33 using the three-step search, whereas it is 1.75 using our proposed algorithm under the same average PSNR condition. (C) 2002 Wiley Periodicals, Inc.
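For reference, the baseline mentioned above is the classic three-step search. The sketch below shows that baseline (not the proposed complexity-distortion algorithm); the SAD cost and frame layout are generic assumptions.

```python
# Minimal sketch of the classic three-step search (TSS) baseline named above.
# Frames are 2-D numpy arrays of luminance samples; cost is plain SAD.

import numpy as np

def sad(cur_blk, ref, y, x, n):
    """Sum of absolute differences between a current block and a candidate."""
    h, w = ref.shape
    if y < 0 or x < 0 or y + n > h or x + n > w:
        return np.inf                      # candidate falls outside the frame
    return np.abs(cur_blk.astype(int) - ref[y:y+n, x:x+n].astype(int)).sum()

def three_step_search(cur, ref, by, bx, n=16, init_step=4):
    """Return the motion vector (dy, dx) for the n x n block at (by, bx)."""
    blk = cur[by:by+n, bx:bx+n]
    best = (0, 0)
    best_cost = sad(blk, ref, by, bx, n)
    step = init_step
    while step >= 1:
        cy, cx = best                      # fixed search centre for this step
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                cand = (cy + dy, cx + dx)
                cost = sad(blk, ref, by + cand[0], bx + cand[1], n)
                if cost < best_cost:
                    best_cost, best = cost, cand
        step //= 2                         # steps of 4, 2, 1 -> "three steps"
    return best
```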
The element connectivity problem falls in the category of survivable network design problems - it is intermediate to the versions that ask for edge-disjoint and vertex-disjoint paths. The edge version is by now well understood from the viewpoint of approximation algorithms [Williamson et al., Combinatorica 15 (1995) 435-454; Goemans et al., in: SODA '94, 223-232; Jain, Combinatorica 21 (2001) 39-60], but very little is known about the vertex version. In our problem, vertices are partitioned into two sets: terminals and nonterminals. Only edges and nonterminals can fail - we refer to them as elements - and only pairs of terminals have connectivity requirements, specifying the number of element-disjoint paths required. Our algorithm achieves an approximation guarantee of factor 2H(k), where k is the largest requirement and H(n) = 1 + 1/2 + ... + 1/n. Besides providing possible insights for solving the vertex-disjoint paths version, the element connectivity problem is of independent interest, since it models a realistic situation. (C) 2002 Elsevier Science (USA). All rights reserved.
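To put the 2H(k) guarantee in perspective, the factor grows only logarithmically with the largest requirement k; the snippet below simply evaluates it for a few values.

```python
# Evaluating the 2*H(k) approximation factor quoted above for small k.

from fractions import Fraction

def harmonic(n: int) -> Fraction:
    """H(n) = 1 + 1/2 + ... + 1/n as an exact fraction."""
    return sum((Fraction(1, i) for i in range(1, n + 1)), Fraction(0))

for k in (1, 2, 4, 8, 16):
    print(f"k={k:2d}  2*H(k) = {float(2 * harmonic(k)):.3f}")
# k= 1 -> 2.000, k= 2 -> 3.000, k= 4 -> 4.167, k= 8 -> 5.436, k=16 -> 6.761
```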
The convergence properties of iterative learning control (ILC) algorithms are considered. The analysis is carried out in a framework using linear iterative systems, which enables several results from the theory of linear systems to be applied. This makes it possible to analyse both first-order and high-order ILC algorithms in both the time and frequency domains. The time and frequency domain results can also be tied together in a clear way. Results are also given for the iteration-variant case, i.e. when the dynamics of the system to be controlled or the ILC algorithm itself changes from iteration to iteration.
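As a concrete illustration of iteration-domain convergence, the sketch below runs a first-order ILC update on a simple stable discrete-time plant. The plant, learning gain, and update law u_{k+1}(t) = u_k(t) + gamma*e_k(t+1) are illustrative assumptions, not the paper's specific algorithms or analysis.

```python
# Minimal first-order ILC sketch on an assumed plant y(t+1) = a*y(t) + b*u(t).
# The tracking error shrinks from trial to trial because the iteration-domain
# dynamics are a contraction for this choice of learning gain.

import numpy as np

def simulate_plant(u, a=0.8, b=1.0):
    """Run one trial of y(t+1) = a*y(t) + b*u(t) from y(0) = 0."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y[1:]                              # outputs y(1..N)

N = 50
t = np.arange(1, N + 1)
r = np.sin(2 * np.pi * t / N)                 # reference trajectory to track
u = np.zeros(N)                               # control input for the first trial
gamma = 0.5                                   # learning gain (assumption)

for k in range(20):
    y = simulate_plant(u)
    e = r - y
    u = u + gamma * e                         # first-order ILC update
    print(f"iteration {k:2d}: max |error| = {np.abs(e).max():.4f}")
```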
As networks become pervasive, the importance of efficient information gathering for purposes such as monitoring, fault diagnosis, and performance evaluation increases. Distributed monitoring systems based on either management protocols such as SNMP or distributed object technologies such as CORBA can cope with scalability problems only to a limited extent. They are not well suited to systems that are both very large and highly dynamic because the monitoring logic, although possibly distributed, is statically predefined at design time. This article presents an active distributed monitoring system based on mobile agents. Agents act as area monitors that are not bound to any particular network node: they can "sense" the network, estimate better locations, and migrate in order to pursue location optimality. Simulations demonstrate the capability of this approach to cope with large-scale systems and changing network conditions.
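A hypothetical sketch of the "area monitor" idea follows: an agent periodically scores candidate nodes by how cheaply it could observe its assigned targets from there and migrates when another node looks sufficiently better. The cost model, hysteresis threshold, and migrate() hook are illustrative assumptions, not the article's platform.

```python
# Hypothetical mobile-agent sketch: score candidate nodes and migrate toward a
# cheaper monitoring location. All names and the cost model are assumptions.

from typing import Callable, List

class MonitoringAgent:
    def __init__(self, location: str, targets: List[str],
                 link_cost: Callable[[str, str], float],
                 migrate: Callable[[str], None],
                 hysteresis: float = 0.8):
        self.location = location
        self.targets = targets          # nodes this agent must observe
        self.link_cost = link_cost      # e.g. measured latency between nodes
        self.migrate = migrate          # platform hook that moves the agent
        self.hysteresis = hysteresis    # only move if clearly cheaper

    def monitoring_cost(self, node: str) -> float:
        """Total cost of observing every target from a given node."""
        return sum(self.link_cost(node, t) for t in self.targets)

    def reconsider_location(self, candidates: List[str]) -> None:
        """Migrate if some candidate node is markedly cheaper than the current one."""
        current = self.monitoring_cost(self.location)
        best = min(candidates, key=self.monitoring_cost)
        if self.monitoring_cost(best) < self.hysteresis * current:
            self.migrate(best)
            self.location = best
```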
This paper presents the development and real-time implementation of an auto-white balancing algorithm named scoring. The spectral distributions of the Macbeth reference colors, together with the spectral distributions of several color temperature light sources, are used to set up a number of reference color points in the CbCr color space. A number of representative color points are obtained from a captured image by using a previously developed multi-scale clustering algorithm. A match is then established between the reference set of colors and the representative set of colors. The matching scheme generates the most likely color temperature under which the image was captured. Furthermore, this paper discusses the real-time implementation of the developed auto-white balancing algorithm on the TI TMS320DSC platform, a power-efficient single-chip processor that has been specifically designed for digital still cameras. It is shown how the algorithm is modified to allow a processing rate of 30 frames/s. (C) 2002 Published by Elsevier Science Ltd.
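The matching step can be sketched as a nearest-point scoring between per-illuminant reference points and the representative points extracted from the image. The reference CbCr coordinates below are placeholders, not the Macbeth-derived values used in the paper, and the distance-based score is an assumed stand-in for the paper's scoring rule.

```python
# Sketch of the matching step: score each candidate illuminant by how closely
# its reference CbCr points are matched by the image's representative points,
# then pick the best-scoring colour temperature. Reference values are placeholders.

import numpy as np

REFERENCES = {  # hypothetical reference CbCr points per colour temperature (K)
    3200: np.array([[ 20.0,  35.0], [-10.0,  25.0], [  5.0,  15.0]]),
    5500: np.array([[  2.0,   3.0], [ -8.0,   6.0], [ 10.0,  -2.0]]),
    7500: np.array([[-15.0, -20.0], [ -5.0, -12.0], [  8.0, -18.0]]),
}

def score(references: np.ndarray, image_points: np.ndarray) -> float:
    """Sum, over reference points, of the distance to the nearest image point."""
    d = np.linalg.norm(references[:, None, :] - image_points[None, :, :], axis=2)
    return d.min(axis=1).sum()

def estimate_colour_temperature(image_points: np.ndarray) -> int:
    """Return the candidate illuminant whose references the image matches best."""
    return min(REFERENCES, key=lambda ct: score(REFERENCES[ct], image_points))

if __name__ == "__main__":
    pts = np.array([[1.0, 4.0], [-7.0, 5.0], [9.0, -1.0]])  # from clustering
    print(estimate_colour_temperature(pts))  # -> 5500 for this example
```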
Several of EDN's recent multimedia articles have pointed out the bloated sizes of high-fidelity audio files and the consequent appeal of compressing them in lossless or lossy ways for storage and transmission. In comparison to video, though, uncompressed audio seems downright diminutive. Consider the following uncompressed-video bit-rate examples. (Multiply by 60 and divide by 8 to get the required per-minute storage capacity in bytes.): 1. Cellular phones: 15 frames/sec, 8-bit color, QCIF (176x144-pixel) resolution = 3.1 Mbps; 2. PDAs: 30 frames/sec, 16-bit color, CIF (352x288-pixel) resolution = 48.7 Mbps; 3. PCs: 30 frames/sec, 24-bit color, VGA (640x480-pixel) resolution = 221.2 Mbps; 4. HDTV: 60 frames/sec, 24-bit color, 720P (1280x720-pixel, progressive-scan) resolution = 1.4 Gbps (1000 times higher than the audio-CD bit rate); 5. Digital cinema: 24 frames/sec, 30-bit color, 1080P (1920x1080-pixel, progressive-scan) resolution = 1.5 Gbps.
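The arithmetic behind those figures is simply width x height x bits-per-pixel x frames-per-second; the short script below reproduces it and also prints the per-minute storage the article asks the reader to derive (the article's Mbps/Gbps values are rounded).

```python
# Uncompressed video bit rates: rate = width * height * bits/pixel * frames/sec;
# per-minute storage in bytes = rate * 60 / 8.

FORMATS = [
    ("Cellular phone (QCIF)",   176,  144,  8, 15),
    ("PDA (CIF)",               352,  288, 16, 30),
    ("PC (VGA)",                640,  480, 24, 30),
    ("HDTV (720P)",            1280,  720, 24, 60),
    ("Digital cinema (1080P)", 1920, 1080, 30, 24),
]

for name, w, h, bits, fps in FORMATS:
    bps = w * h * bits * fps
    per_minute_bytes = bps * 60 / 8
    print(f"{name:24s} {bps/1e6:8.1f} Mbit/s  {per_minute_bytes/1e9:6.2f} GB/min")
# PC (VGA), for example: 640*480*24*30 = 221.2 Mbit/s, about 1.66 GB per minute.
```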
We consider the problem of scheduling n unit-length tasks on m identical parallel processors, when outforest precedence relations and unit interprocessor communication delays exist. Two algorithms have been proposed in the literature for the exact solution of this problem: a linear time algorithm for the special case of m = 2, and a dynamic programming algorithm which runs in O(n^(2m-2)) time. In this paper we give a new linear time algorithm for instances with m = 3. (C) 2002 Elsevier Science (USA). All rights reserved.
Background: Prucalopride is a selective and specific 5-hydroxytryptamine(4) receptor agonist that is known to increase stool frequency and to accelerate colonic transit. Aim: To investigate the effect of prucalopride on high-amplitude propagated contractions and segmental pressure waves in healthy volunteers. Methods: After 1 week of dosing (prucalopride or placebo in a double-blind, randomized, crossover fashion), colonic pressures were recorded in 10 healthy subjects using a solid-state pressure catheter with six sensors spaced 10 cm apart. Subjects kept diary records of their bowel habits (frequency, consistency and straining). High-amplitude propagated contractions were analysed visually, comparing their total numbers and using 10-min time windows. Segmental pressure waves were analysed using computer algorithms, quantifying the incidence, amplitude, duration and area under the curve of all detected peaks. Results: When taking prucalopride, stool frequency increased, consistency decreased and subjects strained less. Prucalopride just failed to increase the total number of high-amplitude propagated contractions (P = 0.055). The number of 10-min time windows containing high-amplitude propagated contractions was increased by prucalopride (P = 0.019). Prucalopride increased the area under the curve per 24 h (P = 0.026). Conclusions: The 5-hydroxytryptamine(4) receptor agonist prucalopride stimulates high-amplitude propagated contractions and increases segmental contractions, which is likely to be the underlying mechanism of its effect on bowel habits in healthy volunteers.
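The kind of automated quantification described (peak detection plus area under the curve on a pressure trace) can be sketched as below; the thresholds, windowing, and library choices are illustrative assumptions, not the study's actual analysis software.

```python
# Hedged sketch: detect pressure peaks in one manometry sensor trace and sum
# the area under the curve around each detected peak.

import numpy as np
from scipy.signal import find_peaks

def quantify_trace(pressure: np.ndarray, fs: float,
                   min_amplitude: float = 10.0, window_s: float = 10.0):
    """Return peak count, mean peak amplitude, and total AUC for one trace."""
    peaks, props = find_peaks(pressure, height=min_amplitude)
    half = int(window_s * fs / 2)
    auc = 0.0
    for p in peaks:
        seg = pressure[max(0, p - half):p + half]
        auc += np.trapz(seg, dx=1.0 / fs)      # pressure * time under each peak
    mean_amp = float(props["peak_heights"].mean()) if len(peaks) else 0.0
    return len(peaks), mean_amp, auc
```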
Combinatorial peptide libraries are a versatile tool for drug discovery. On-bead assays identify reactive peptides by enzyme-catalyzed staining and, usually, sequencing by Edman degradation. Unfortunately, the latter method is expensive and time-consuming and requires free N termini of the peptides. A method for rapid and unambiguous peptide sequencing is introduced here, utilizing synthesis-implemented generation of termination sequences with subsequent matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometric analysis. The required capped sequences are determined and optimized for a specific peptide library by a computer algorithm implemented in the program Biblio. A total of 99.7% of the sequences of a heptapeptide library sample could be decoded utilizing a single bead for each spectrum. To synthesize these libraries, an optimized capping approach has been introduced.
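The underlying ladder-sequencing principle can be sketched numerically: the capped truncation products of a peptide form a mass ladder, and the difference between successive rungs identifies each residue. The residue masses below are standard monoisotopic values; the capping chemistry itself (and any mass offset it adds) is not modelled, and the example peptide is hypothetical.

```python
# Sketch of ladder sequencing: cumulative masses of truncation products, and
# recovery of the sequence from successive mass differences.

MONOISOTOPIC = {  # standard monoisotopic residue masses in Da
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
    "D": 115.02694, "K": 128.09496, "E": 129.04259, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056

def mass_ladder(peptide: str):
    """Neutral masses of the truncation products of the growing chain."""
    total, ladder = WATER, []
    for residue in peptide:
        total += MONOISOTOPIC[residue]
        ladder.append(round(total, 4))
    return ladder

def read_ladder(masses):
    """Recover the sequence from successive mass differences in the ladder."""
    seq, prev = "", WATER
    for m in masses:
        delta = m - prev
        seq += min(MONOISOTOPIC, key=lambda r: abs(MONOISOTOPIC[r] - delta))
        prev = m
    return seq

ladder = mass_ladder("YGGFLRK")   # a hypothetical heptapeptide
print(read_ladder(ladder))        # -> "YGGFLRK"
```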