Virtual output queuing (VOQ) is an efficient architecture for high-speed switches because it combines the low cost of input queuing with the high performance of output queuing. The achievable throughput and delay performance depend heavily on the scheduling algorithm used to resolve contention for the same output ports in each cell slot. Most VOQ scheduling algorithms, as exemplified by PIM and iSLIP, are based on parallel and iterative request-grant-accept arbitration schemes. Conventional performance evaluation of these scheduling algorithms does not consider issues inherent to their implementation on a modular and scalable VOQ switch whose input ports and switch matrix reside on separate cards. One of the main issues is the round-trip delay (RTD), defined as the latency between the moment a connection is requested from the switch matrix card and the moment the associated acceptance notification is received on the input port card. This paper presents the effect of RTD on the performance of the PIM and iSLIP algorithms, an effect not considered in depth in previous works in the literature. Simulation results demonstrate that RTD significantly increases contention on output ports and mean queuing delay, and thus degrades the performance of cell-based VOQ switches.
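The request-grant-accept handshake can be sketched for a single arbitration iteration as follows. This is a hypothetical simplification in Python: real iSLIP runs multiple iterations per cell slot and updates the round-robin pointers only on grants accepted in the first iteration, which is omitted here.

```python
# One request-grant-accept iteration of an iSLIP-like VOQ arbiter
# (simplified sketch; pointer updates and re-iteration are omitted).
def islip_iteration(requests, grant_ptr, accept_ptr):
    """requests[i][j] is True if input i has a cell queued for output j."""
    n = len(requests)
    # Grant phase: each output grants the requesting input closest to
    # (at or after) its round-robin grant pointer.
    grants = {}  # output j -> granted input i
    for j in range(n):
        for k in range(n):
            i = (grant_ptr[j] + k) % n
            if requests[i][j]:
                grants[j] = i
                break
    # Accept phase: each input accepts the granting output closest to
    # (at or after) its round-robin accept pointer.
    granting = {}  # input i -> outputs that granted it
    for j, i in grants.items():
        granting.setdefault(i, []).append(j)
    accepts = {}  # input i -> accepted output j
    for i, outs in granting.items():
        accepts[i] = min(outs, key=lambda j: (j - accept_ptr[i]) % n)
    return accepts

# 2x2 example: both inputs request output 0; the pointers break the tie,
# so only one input-output pair is matched in this iteration.
matches = islip_iteration([[True, False], [True, False]], [0, 0], [0, 0])
```

In a card-based implementation, the accept notification computed here would only reach the input port card one RTD later, which is exactly the latency the paper studies.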
ISBN:
(Print) 9781424411894; 1424411890
A new class of iterative bit flipping (BF) decoding algorithms adapted for low-density parity check (LDPC) convolutional codes is proposed. Compared with Gallager's original BF algorithm, the new BF algorithms improve both the coding gain and the error correction speed. At a bit error rate (BER) of 10, the best of the new bit-flipping algorithms achieves, after only 6 iterations and with much simpler decoding hardware, a coding gain within 3.5 dB of that of the conventional min-sum belief propagation decoding algorithm.
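For reference, Gallager's original hard-decision BF loop, the baseline the new algorithms improve on, can be sketched on a small block code. This toy does not capture the convolutional structure or the paper's refinements:

```python
import numpy as np

# Gallager-style hard-decision bit flipping on a block code (illustrative
# sketch only; the paper adapts BF to LDPC *convolutional* codes).
def bit_flip_decode(H, r, max_iters=6):
    """H: parity-check matrix (0/1 entries), r: received hard bits."""
    x = r.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            break  # all parity checks satisfied
        # For each bit, count the unsatisfied checks it participates in.
        fails = syndrome @ H
        # Flip the bit(s) involved in the most unsatisfied checks
        # (flipping all tied bits is one common BF variant).
        x[fails == fails.max()] ^= 1
    return x

# (7,4) Hamming code parity-check matrix; corrupt one bit of the
# all-zero codeword and decode it back.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
r = np.zeros(7, dtype=int)
r[2] = 1
decoded = bit_flip_decode(H, r)
```

The per-iteration work is only syndrome computation and counting, which is why BF decoders need much simpler hardware than min-sum belief propagation.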
This paper focuses on the use of an enhanced equalization-based receiver for WCDMA (wideband code-division multiple access) MIMO (multiple input, multiple output) BLAST (Bell Labs layered space time)-type systems. The receiver is based on the MMSE (minimum mean square error) algorithm coupled with an IPC (iterative partial cancellation) scheme. The scheme is tested in both uncoded and coded settings, using the UMTS (Universal Mobile Telecommunications System) HSDPA (high speed downlink packet access) standard and the reference UMTS environments as a basis.
Five iterative algorithms for reconstructing images from partial noisy data have been formulated and compared using the theory of projections onto convex sets (POCS). This formulation clearly shows the dependence on the initial solution, and it allows both the introduction of a prior distribution into the distance to be minimized and the use of additional convex constraints.
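A minimal POCS iteration can be sketched with two illustrative convex sets, a data-consistency hyperplane and a nonnegativity constraint, rather than the paper's five formulations:

```python
import numpy as np

# POCS sketch: alternately project onto two convex sets, starting from an
# initial estimate x0 (whose choice the formulation shows matters).
def pocs(a, b, x0, iters=50):
    """Seek x in the intersection of {x : a @ x = b} and {x : x >= 0}."""
    x = x0.astype(float)
    for _ in range(iters):
        # Orthogonal projection onto the hyperplane a @ x = b.
        x = x + (b - a @ x) / (a @ a) * a
        # Projection onto the nonnegative orthant (a convex constraint).
        x = np.maximum(x, 0.0)
    return x

a = np.array([1.0, 2.0])
x = pocs(a, b=3.0, x0=np.array([-1.0, -1.0]))
```

Any additional convex constraint (e.g. a support mask or an intensity bound) can be added as one more projection inside the loop, which is what makes the POCS formulation convenient for comparison.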
Iterative learning control (ILC) is a technique used to improve the tracking performance of systems carrying out repetitive tasks that are affected by deterministic disturbances. The achievable performance is greatly degraded, however, when non-repeating, stochastic disturbances are present. This paper compares a number of different ILC algorithms proposed to be more robust to such disturbances, first through a statistical analysis and then through their application to a linear motor. Expressions for the expected value and variance of the error are developed for each algorithm. The different algorithms are then applied to the linear motor system to test their performance in practice. A filtered ILC algorithm is proposed for the case where the noise and desired-output spectra are separated; otherwise, an algorithm with a decreasing gain gives good robustness to noise and good achievable precision, but at a slower convergence rate.
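The basic first-order ILC update that the compared algorithms build on can be sketched on a toy lifted-system model. The learning gain and the plant below are hypothetical; the filtered and decreasing-gain variants from the paper are not shown:

```python
import numpy as np

# First-order ILC law u_{k+1} = u_k + gamma * e_k on a toy repetitive
# system y = G u + d with a repeating (deterministic) disturbance d.
# Error dynamics: e_{k+1} = (I - gamma * G) e_k, so the error contracts
# when the spectral radius of (I - gamma * G) is below one.
def ilc(G, d, y_ref, gamma=0.5, trials=30):
    u = np.zeros_like(y_ref)
    for _ in range(trials):
        y = G @ u + d        # run one repetition of the task
        e = y_ref - y        # tracking error of this trial
        u = u + gamma * e    # learn a better input for the next trial
    return u, e

G = np.array([[1.0, 0.0], [0.3, 1.0]])  # lower-triangular (causal) plant
u, e = ilc(G, d=np.array([0.1, -0.2]), y_ref=np.array([1.0, 0.5]))
```

With purely deterministic d the error is driven to zero; injecting fresh noise into y each trial is exactly what breaks this law and motivates the robust variants the paper compares.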
Though various theoretical results and algorithms have been proposed in one-bit compressed sensing (1-bit CS), there are few studies on more structured signals, such as block sparse signals. We address the problem of recovering block sparse signals from one-bit measurements. We first propose two recovery schemes, one based on second-order cone programming and the other based on hard thresholding, for common non-adaptively thresholded one-bit measurements. Note that the worst-case error in recovering sparse signals from non-adaptively thresholded one-bit measurements is bounded below by a polynomial of the oversampling factor. To break this limit, we introduce a recursive strategy that allows the thresholds in quantization to be adaptive to previous measurements at each iteration. Using this scheme, we propose two iterative algorithms and show that the corresponding recovery errors are both exponential functions of the oversampling factor. Several simulations are conducted to demonstrate the superiority of our methods over existing approaches.
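The hard-thresholding route can be illustrated with a plain (non-block) sketch in the spirit of binary iterative hard thresholding, using fixed zero thresholds rather than the adaptive ones introduced in the paper:

```python
import numpy as np

# Hard-thresholding recovery from one-bit measurements y = sign(A x).
# Toy sparse sketch only: the paper's schemes handle *block* sparsity
# and recursively adaptive quantization thresholds, omitted here.
def one_bit_ht(A, y, k, tau=0.1, iters=100):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient-like step toward sign consistency with y.
        x = x + tau * A.T @ (y - np.sign(A @ x))
        # Hard thresholding: keep only the k largest-magnitude entries.
        x[np.argsort(np.abs(x))[:-k]] = 0.0
    n = np.linalg.norm(x)
    # One-bit data determines x only up to scale, so return a unit vector.
    return x / n if n > 0 else x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
x_true = np.zeros(20)
x_true[[3, 7]] = [1.0, -0.5]
x_true /= np.linalg.norm(x_true)
x_hat = one_bit_ht(A, np.sign(A @ x_true), k=2)
```

Because each measurement carries a single bit, only the direction of x is identifiable; the recursive adaptive-threshold strategy is what lets the error decay exponentially in the oversampling factor instead of polynomially.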
The notion of fuzzy connectedness captures the idea of "hanging-togetherness" of image elements in an object by assigning a strength of connectedness to every possible path between every possible pair of image elements. In a previous framework the authors presented, a fuzzy connected object was defined with a threshold on the strength of connectedness. Relative fuzzy connectedness provides a framework in which objects compete among each other and an image element is grabbed by the object within which the element has the largest fuzzy connectedness strength. Here, the authors introduce the notion of iterative relative fuzzy connectedness, which leads to more effective segmentations using relative connectedness. The idea is to identify the "core" of the object through relative connectedness in the first iteration. Subsequently, this region is excluded from consideration by other co-objects when tracking their connectivity paths, which effectively suppresses moderately strong paths seeping through the object of interest. The authors present a theoretical and algorithmic framework for defining objects via iterative relative fuzzy connectedness and demonstrate that the objects defined are independent of the reference elements chosen, as long as they are not in the fuzzy boundary between objects. The effectiveness of the method is demonstrated using both qualitative examples and a quantitative phantom analysis.
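The underlying strength-of-connectedness computation, where a path is as strong as its weakest affinity and the connectedness of two elements is the strength of their best path, can be sketched with a max-min variant of Dijkstra's algorithm. The affinity values below are hypothetical:

```python
import heapq

# Fuzzy connectedness sketch: strength of a path = minimum affinity along
# it; kappa(seed, t) = maximum such strength over all paths. Computed with
# a max-min Dijkstra (negated priorities make heapq act as a max-heap).
def connectedness(affinity, seed, n):
    """affinity: dict (u, v) -> strength in [0, 1] for undirected edges."""
    adj = {}
    for (u, v), w in affinity.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    strength = {seed: 1.0}
    heap = [(-1.0, seed)]
    while heap:
        s, u = heapq.heappop(heap)
        s = -s
        if s < strength.get(u, 0.0):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            cand = min(s, w)  # path strength = weakest link so far
            if cand > strength.get(v, 0.0):
                strength[v] = cand
                heapq.heappush(heap, (-cand, v))
    return [strength.get(i, 0.0) for i in range(n)]

# Chain 0 -0.9- 1 -0.4- 2 plus a direct weak edge 0 -0.3- 2: the indirect
# path wins because its weakest link (0.4) beats the direct edge (0.3).
k = connectedness({(0, 1): 0.9, (1, 2): 0.4, (0, 2): 0.3}, seed=0, n=3)
```

Iterative relative fuzzy connectedness repeats this competition after fixing the core: paths of competing objects are no longer allowed to pass through the region already won, which is what suppresses the moderately strong seeping paths.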
Penalised PET image reconstruction methods are often accelerated by using only a subset of the data at each update. It is known that many subset algorithms, such as Ordered Subset Expectation Maximisation, do not converge to a single solution but to a limit cycle, which can lead to variations between subsequent image estimates. A new class of stochastic variance reduction optimisation algorithms has recently been proposed for general optimisation problems. These methods aim to reduce the subset-update variance by incorporating previous subset gradients into the computation of the update direction. This work applies three of these algorithms to iterative penalised PET reconstruction and demonstrates performance superior to standard deterministic reconstruction methods after only a few epochs.
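One such variance-reduction method, SVRG, can be sketched on a toy least-squares problem. This is an illustration of the update rule only; the work applies these updates to the penalised PET objective, which is not reproduced here:

```python
import numpy as np

# SVRG sketch: each inner update uses a subset (here, single-sample)
# gradient corrected by the gradient at an epoch "anchor" point, so the
# update direction is unbiased but has much lower variance near the
# solution -- avoiding the limit cycle of plain subset methods.
def svrg(A, b, epochs=50, lr=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        x_tilde = x.copy()
        g_full = A.T @ (A @ x_tilde - b) / n  # full gradient at anchor
        for _ in range(n):
            i = int(rng.integers(n))
            gi = lambda z, i=i: A[i] * (A[i] @ z - b[i])  # sample gradient
            # Variance-reduced direction: g_i(x) - g_i(x_tilde) + g_full.
            x = x - lr * (gi(x) - gi(x_tilde) + g_full)
    return x

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
x = svrg(A, b)  # consistent system; the least-squares solution is [1, 1]
```

At the solution, the correction term cancels the sample gradient exactly, so the update variance vanishes instead of sustaining a limit cycle.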
ISBN:
(Print) 0780332598
This paper proposes a new regularized constrained iterative image restoration algorithm that applies three new space-adaptive methods to a degraded image, and analyzes the convergence condition of the proposed algorithm. First, we introduce space-adaptive regularization operators that change according to the edge characteristics of local image regions in order to effectively preserve edges and boundaries in the restored images. Second, an adaptive noise reduction filter is applied to the plain regions so that the salt-and-pepper artifacts resulting from noise amplification can be eliminated effectively. Finally, a pseudo-projection operator is used to reduce ringing artifacts. The proposed algorithm also adopts momentum in the steepest descent formulation, which improves the convergence performance in both speed and accuracy. Experimental results for various signal-to-noise ratios (SNR) show that the proposed image restoration algorithm outperforms other methods and is especially robust to noise effects and to edge re-blurring caused by regularization.
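The momentum-accelerated steepest-descent step can be sketched on a toy quadratic restoration objective. The space-adaptive operators, the adaptive noise filter, and the pseudo-projection steps from the paper are omitted in this sketch:

```python
import numpy as np

# Steepest descent with momentum on the regularized restoration objective
# ||H x - y||^2 + lam * ||C x||^2 (H: blur operator, C: regularization
# operator). The momentum term accelerates convergence versus plain
# steepest descent on the same quadratic.
def restore(H, C, y, lam=0.01, lr=0.1, beta=0.9, iters=200):
    x = np.zeros(H.shape[1])
    v = np.zeros_like(x)
    for _ in range(iters):
        grad = H.T @ (H @ x - y) + lam * (C.T @ (C @ x))
        v = beta * v - lr * grad  # accumulate a velocity (momentum)
        x = x + v
    return x

H = np.array([[1.0, 0.2], [0.2, 1.0]])  # toy two-pixel "blur"
C = np.eye(2)                           # identity regularizer
y = H @ np.array([1.0, -1.0])           # blurred observation
x = restore(H, C, y)
```

The regularization term pulls the solution slightly away from the exact deblurred signal, which is the trade-off the paper's space-adaptive operators are designed to manage near edges.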
Iterative machine learning algorithms, e.g., k-means (KM) and expectation maximization (EM), become overwhelmed by big data, since all data points are continually and indiscriminately visited while a cost is being minimized. In this work, we demonstrate (1) an optimization approach to reduce the training run-time complexity of iterative machine learning algorithms and (2) an implementation of this framework on top of the KM algorithm. We call this extended KM algorithm KM*. The experimental results show that KM* outperforms KM on big real-world and synthetic data sets. Lastly, we present the theoretical elements of our work.
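For reference, plain Lloyd's k-means, the baseline that KM* accelerates, can be sketched as follows. The KM* optimization of not revisiting every point in every iteration is not reproduced here:

```python
import numpy as np

# Plain Lloyd's k-means: every iteration visits *all* points in the
# assignment step -- exactly the indiscriminate cost that motivates KM*.
def kmeans(X, k, iters=20, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center moves to the mean of its points.
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

# Two well-separated Gaussian blobs; k-means recovers them.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(5.0, 0.1, (10, 2))])
centers, labels = kmeans(X, k=2, rng=rng)
```

The assignment step is O(nk) per iteration over all n points; reducing how many points are actually re-examined per iteration is where the claimed run-time savings of KM* come from.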