ISBN:
(print) 9780897919098
Rate-based congestion control has been considered desirable, both to deal with the high bandwidth-delay products of today's high-speed networks and to match the needs of emerging multimedia applications. Explicit rate control achieves low loss because sources transmit smoothly at a rate adjusted through feedback to be within the capacity of the resources in the network. However, large feedback delays, the presence of higher-priority traffic, and varying transient situations make it difficult to ensure feasibility (i.e., keep the aggregate arrival rate below the bottleneck resource's capacity) while also maintaining high resource utilization. These conditions, along with the "fast start" desired by data applications, often result in substantial queue buildup. We describe a scheme that manages the queue buildup at a switch even under the most aggressive patterns of sources, in the context of the Explicit Rate option of the Available Bit Rate (ABR) congestion control scheme. A switch observes the buildup of its queue and uses it to reduce the portion of the link capacity allocated to sources bottlenecked at that link. We use the concept of a "virtual" queue, which tracks the amount of queue that has been "reduced" but has not yet taken effect at the switch. We take advantage of the natural timing of "resource management" (RM) cells transmitted by sources. The scheme is elegant in its simplicity, and we show that it reduces the queue buildup, in some cases, by more than two orders of magnitude, with the queue size remaining around a desired target. It maintains max-min fairness even when the queue is being drained. The scheme is scalable and is as responsive as can be expected within the constraints of the feedback delay. Finally, no changes are needed to the ATM Forum-defined source/destination policies.
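The virtual-queue idea in the abstract above can be illustrated with a minimal sketch. All class names, parameters, and constants here are our own assumptions, not the paper's: a switch asks bottlenecked sources to give back part of the link capacity in proportion to the queue excess, while the "virtual" queue remembers reductions already requested (but not yet visible, due to feedback delay) so the switch does not over-react.

```python
class Switch:
    """Hypothetical sketch of virtual-queue-based rate reduction (our naming)."""

    def __init__(self, capacity, target, gain=0.5):
        self.capacity = capacity  # link capacity, cells per interval
        self.target = target      # desired steady-state queue (cells)
        self.gain = gain          # fraction of uncovered excess drained per feedback
        self.queue = 0.0          # real queue at the switch
        self.virtual = 0.0        # reduction already requested via RM cells,
                                  # not yet visible in the real queue

    def feedback_rate(self, n_sources):
        # Only queue excess NOT already covered by in-flight reductions
        # triggers a further rate cut.
        excess = max(0.0, self.queue - self.target - self.virtual)
        reduction = self.gain * excess
        self.virtual += reduction
        # Max-min share of the capacity left after the drain allowance.
        return max(0.0, self.capacity - reduction) / n_sources

    def advance(self, arrivals):
        # One interval: enqueue arrivals, serve at link capacity, and credit
        # any real drain against the virtual queue.
        before = self.queue
        self.queue = max(0.0, self.queue + arrivals - self.capacity)
        drained = max(0.0, before - self.queue)
        self.virtual = max(0.0, self.virtual - drained)


def simulate(capacity=100.0, target=20.0, n=4, delay=2, burst=200.0, steps=50):
    """Drain an initial burst with sources obeying the feedback-delayed rate."""
    sw = Switch(capacity, target)
    sw.queue = burst                  # initial queue buildup ("fast start")
    rates = [capacity / n] * delay    # per-source rates already in flight
    for _ in range(steps):
        sw.advance(n * rates[0])      # sources transmit at the delayed rate
        rates = rates[1:] + [sw.feedback_rate(n)]
    return sw.queue, rates[-1]
```

Under these assumptions the queue settles near the target while the per-source rate returns to the fair share, which is the qualitative behavior the abstract claims; the paper's actual control law and RM-cell mechanics are richer than this toy.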
ISBN:
(print) 9780897919098
One of the open questions in the design of multimedia storage servers is in what order to serve incoming requests. Given the capability provided by the disk layout and scheduling algorithms to serve multiple streams simultaneously, improved request scheduling algorithms can reduce customer waiting times. This results in better service and/or lower customer loss. In this paper we define a new class of request scheduling algorithms, called Group-Guaranteed Server Capacity (GGSC), that preassign server channel capacity to groups of objects. We also define a particular formal method for computing the assigned capacities to achieve a given performance objective. We observe that the FCFS policy can provide the precise time of service to incoming customer requests. Under this assumption, we compare the performance of one of the new GGSC algorithms, GGSCW-FCFS, against FCFS and against two other recently proposed scheduling algorithms: Maximum Factored Queue length (MFQ), and the FCFS-n algorithm that preassigns capacity only to each of the n most popular objects. The algorithms are compared for both competitive market and captive audience environments. The findings of the algorithm comparisons are that: (1) FCFS-n has no advantage over FCFS if FCFS gives time-of-service guarantees to arriving customers, (2) FCFS and GGSCW-FCFS are superior to MFQ for both competitive and captive audience environments, (3) for competitive servers that are configured for customer loss less than 10%, FCFS is superior to all other algorithms examined in this paper, and (4) for captive audience environments that have objects with variable playback length, GGSCW-FCFS is the most promising of the policies considered in this paper. The conclusions for FCFS-n and MFQ differ from previous work because we focus on competitive environments with customer loss under 10%, we assume FCFS can provide time-of-service guarantees to all arriving customers, and we consider the distribution of customer waiting times.
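The capacity preassignment at the heart of GGSC can be sketched as follows. The paper's formal method for computing the assigned capacities is not reproduced here; as a stand-in we assume a simple proportional split over hypothetical per-group demand weights, rounded with a largest-remainder rule so whole channels sum to the server total.

```python
def preassign_capacity(total_channels, group_demand):
    """Split server channels across object groups in proportion to demand.

    A hypothetical stand-in for GGSC's capacity-assignment step:
    `group_demand` maps group name -> demand weight (our assumption).
    Returns integer channel counts summing to `total_channels`.
    """
    total = sum(group_demand.values())
    # Ideal fractional share for each group.
    quotas = {g: total_channels * d / total for g, d in group_demand.items()}
    # Floor each share, then hand out the leftover channels to the groups
    # with the largest fractional remainders (largest-remainder rounding).
    assigned = {g: int(q) for g, q in quotas.items()}
    leftover = total_channels - sum(assigned.values())
    for g in sorted(quotas, key=lambda g: quotas[g] - assigned[g], reverse=True):
        if leftover == 0:
            break
        assigned[g] += 1
        leftover -= 1
    return assigned
```

For example, `preassign_capacity(10, {"popular": 70, "mid": 20, "rare": 10})` yields 7, 2, and 1 channels. Within each group, requests would then be served FCFS against the group's reserved channels, which is what lets the server quote a time of service at arrival.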
ISBN:
(print) 0897917936
The proceedings contain 30 papers. The topics discussed include: bringing real-time scheduling theory and practice closer for multimedia computing; exploiting process lifetime distributions for dynamic load balancing; effective distributed scheduling of parallel workloads; limits on the performance benefits of multithreading and prefetching; fast message assembly using compact address relations; coordinated allocation of memory and processors in multiprocessors; Embra: fast and flexible machine simulation; experiences with network simulation; asynchronous updates in large parallel systems; design and analysis of frame-based fair queueing: a new traffic scheduling algorithm for packet-switched networks; and networking support for large scale multiprocessor servers.
Simulation is a critical tool in developing, testing, and evaluating network protocols and architectures. This paper describes x-Sim, a network simulator based on the x-kernel that is able to fully simulate the topol...
This paper presents an analysis of closed, balanced, fork-join queueing networks with exponential service time distributions. The fork-join queue is mapped onto two non-parallel networks, namely, a serial-join model a...
An important issue in multiprogrammed multiprocessor systems is the scheduling of parallel jobs. Most research in the area has focussed solely on the allocation of processors to jobs. However, since memory is also a c...