Scalable Web servers can be built using a network of workstations, where server capacity can be extended by adding new workstations as the workload increases. The topic of our article is a comparison of different methods for load-balancing HTTP traffic for scalable Web servers. We present a classification framework for the different load-balancing methods and compare their performance. In addition, we evaluate in detail one class of methods using a prototype implementation with instruction-level analysis of processing overhead. The comparison is based on a trace-driven simulation of traces from a large ISP (Internet Service Provider) in Norway. The simulation model is used to analyze different load-balancing schemes based on redirection of requests in the network and redirection in the mapping between a canonical name (CNAME) and an IP address. The latter is vulnerable to spatial and temporal locality, although for the set of traces used, the impact of locality is limited. The best performance is obtained with redirection in the network.
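As a rough illustration of the two families of schemes compared above (not the authors' trace-driven simulator), the following sketch contrasts DNS-style redirection, where a resolved CNAME-to-IP mapping sticks to a client and is therefore exposed to locality, with per-request redirection in the network, which can rebalance every request. The server names, the synthetic trace, and the least-loaded policy are assumptions made only for this example.

```python
import random
from collections import Counter

SERVERS = ["ws1", "ws2", "ws3", "ws4"]             # hypothetical workstation pool
TRACE = [("client%d" % random.randrange(50), None)  # synthetic (client, request) trace
         for _ in range(10_000)]

def dns_redirection(trace):
    """Round-robin at name-resolution time; the CNAME->IP mapping sticks per client."""
    assignment, rr, load = {}, 0, Counter()
    for client, _ in trace:
        if client not in assignment:               # resolver cache miss
            assignment[client] = SERVERS[rr % len(SERVERS)]
            rr += 1
        load[assignment[client]] += 1              # locality: the cached mapping is reused
    return load

def network_redirection(trace):
    """A front-end redirects every single request to the least-loaded server."""
    load = Counter({s: 0 for s in SERVERS})
    for _ in trace:
        target = min(load, key=load.get)
        load[target] += 1
    return load

print("DNS-based :", dns_redirection(TRACE))
print("in-network:", network_redirection(TRACE))
```

Running the sketch keeps the in-network loads exactly equal, while the DNS-based loads vary with how many requests each client happens to issue through its cached mapping.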
ISBN (print): 0769505716
Improving end-to-end performance on the Internet is a common and critical requirement because the Internet has become an infrastructure of our daily life. There have been many research studies to understand the characteristics of the inside of the Internet and to try to improve end-to-end performance with caching technologies in application programs. This paper proposes a new framework to improve end-to-end performance on the Internet. The approach taken in this paper is to establish an alternative path between end hosts by installing relay hosts at intermediate points. The major advantage over previous caching studies is that the framework is easily applicable to all applications on a network path. The paper also describes the results of an experiment that measured the path quality of the intermediate links in order to select an alternative path that is better than the original path. This framework can be easily extended to construct a virtual network for better performance on the Internet, providing better performance for all Internet users.
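The core mechanism, selecting a detour through a relay host when its measured quality beats the direct path, can be sketched as below. The relay names and RTT figures are invented for illustration; a real deployment would obtain them by probing the links, as in the experiment described in the paper.

```python
# Hypothetical measured round-trip times in milliseconds; a real system
# would probe these links rather than hard-code them.
RTT_MS = {
    ("A", "B"): 180.0,                      # congested direct path
    ("A", "R1"): 40.0, ("R1", "B"): 60.0,   # detour via relay host R1
    ("A", "R2"): 90.0, ("R2", "B"): 120.0,  # detour via relay host R2
}

def path_cost(hops):
    """Sum the per-link RTTs along a candidate path."""
    return sum(RTT_MS[(a, b)] for a, b in zip(hops, hops[1:]))

def best_path(src, dst, relays):
    """Pick the direct path or the cheapest relay detour."""
    candidates = [[src, dst]] + [[src, r, dst] for r in relays]
    return min(candidates, key=path_cost)

print(best_path("A", "B", ["R1", "R2"]))    # -> ['A', 'R1', 'B']
```

Because a relay simply forwards traffic on the network path, any application between the two end hosts can benefit from the alternative path without modification.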
ISBN (print): 0780365364
The proliferation of demand for deploying real-time multimedia applications on the Internet fuels next-generation Internet development. The Integrated Services architecture has been designed by the Internet Engineering Task Force (IETF) to extend the best-effort service delivery model currently in place on the Internet by introducing guaranteed delivery services to provide QoS performance guarantees to real-time and near-real-time multimedia applications with media playback and edge error control. This paper proposes a novel ubiquitous guaranteed service to extend QoS performance guarantees to highly interactive real-time multimedia applications, which have to exclude media playback and edge error control due to their sensitivity to delay. Without media playback to synchronize received data and edge error control to recover data lost during traffic flow rerouting, the proposed service maintains performance guarantees at all times and under all network conditions via the support of an explicit seamless flow rerouting signaling service, which provides a robust fast reservation protocol and a packet sequence synchronization protocol during flow rerouting.
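One piece of the proposed service, packet sequence synchronization during flow rerouting, can be illustrated with a small sketch. Since the target applications cannot rely on a playback buffer, the receiver must deduplicate packets that may arrive over both the old and the new path and release them strictly in order. The class and field names below are assumptions, not the protocol defined in the paper.

```python
class RerouteSynchronizer:
    """Deliver packets in sequence while old- and new-path packets overlap."""

    def __init__(self, first_seq=0):
        self.expected = first_seq      # next sequence number owed to the application
        self.pending = {}              # out-of-order packets held very briefly

    def on_packet(self, seq, payload):
        """Return the payloads that can be handed to the application right now."""
        if seq < self.expected or seq in self.pending:
            return []                  # duplicate, e.g. already seen on the old path
        self.pending[seq] = payload
        delivered = []
        while self.expected in self.pending:   # release the in-order prefix at once
            delivered.append(self.pending.pop(self.expected))
            self.expected += 1
        return delivered

sync = RerouteSynchronizer()
for seq in (0, 2, 1, 1, 3):            # packet 1 arrives late, then duplicated
    print(seq, sync.on_packet(seq, "pkt%d" % seq))
```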
ISBN (print): 1581131941
Current Internet congestion control protocols operate independently on a per-flow basis. Recent work has demonstrated that cooperative congestion control strategies between flows can improve performance for a variety of applications, ranging from aggregated TCP transmissions to multiple-sender multicast applications. However, in order for this cooperation to be effective, one must first identify the flows that are congested at the same set of resources. In this paper, we present techniques based on loss or delay observations at end hosts to infer whether or not two flows experiencing congestion are congested at the same network resources. We validate these techniques via queueing analysis, simulation, and experimentation within the Internet.
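A much-simplified stand-in for these inference techniques is to correlate the delay samples that two flows observe: flows congested at the same resource tend to see the same queue buildups. The threshold and the synthetic data below are assumptions for illustration only; the paper validates its techniques with queueing analysis, simulation, and Internet experiments.

```python
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def share_bottleneck(delays_a, delays_b, threshold=0.5):
    """Declare shared congestion when the delay samples are strongly correlated."""
    return pearson(delays_a, delays_b) > threshold

# Synthetic example: flows a and b traverse the same queue, flow c does not.
queue = [random.uniform(0, 50) for _ in range(200)]
flow_a = [10 + q + random.gauss(0, 2) for q in queue]
flow_b = [25 + q + random.gauss(0, 2) for q in queue]
flow_c = [15 + random.uniform(0, 50) for _ in queue]

print(share_bottleneck(flow_a, flow_b))   # almost certainly True
print(share_bottleneck(flow_a, flow_c))   # almost certainly False
```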
Interactive error control to mitigate error propagation from packet loss is one of the serious challenges for Internet video applications to date. Performance of an error control scheme is highly dependent on how to coord...
ISBN (print): 0769507689
With the vast advances of Internet services, large-scale and high-performance servers, such as CC-NUMA multiprocessors, are gaining importance in network computing. In a CC-NUMA multiprocessor, the key component connecting a computing node to the interconnection network is the node controller. Node controllers perform protocol processing to exchange messages with other nodes in the system. As new-generation CC-NUMA multiprocessors move towards application-specific protocol processing, a node controller will require very powerful protocol processors or engines to provide the flexibility of processing different kinds of protocols. In this paper, we study the design of a thread-based node controller in which protocol engines have a multithreaded architecture. Multithreading allows protocol processing of different requests to proceed in parallel, thereby reducing blocking and improving response time. Four important design parameters for a multithreaded protocol engine are examined: (1) the number of thread context storages, (2) the number of protocol operation units, (3) the scheduling policy, and (4) the thread allocation scheme. From the application-driven simulation of six representative applications we conclude that the numbers of thread contexts and protocol operation units have a great impact on the overall system performance. An appropriate thread allocation scheme for invalidation traffic is needed, and prioritizing a thread and scheduling it accordingly are also important for the system performance.
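A toy model (a simplification of my own, not the paper's application-driven simulator) helps make the four design parameters concrete: requests occupy thread contexts, at most a fixed number of protocol operation units dispatch at a time, and the scheduling policy here simply favors higher-priority traffic such as invalidations.

```python
import heapq
from collections import deque

NUM_CONTEXTS = 4    # design parameter (1): thread context storages
NUM_OP_UNITS = 2    # design parameter (2): protocol operation units

def run(requests):
    """requests: (priority, name) pairs; a lower number means higher priority."""
    waiting = deque(requests)
    ready, completed = [], []                    # ready is a priority heap
    while waiting or ready:
        # Thread allocation (4): admit requests while thread contexts are free.
        while waiting and len(ready) < NUM_CONTEXTS:
            heapq.heappush(ready, waiting.popleft())
        # Scheduling policy (3): dispatch the highest-priority ready threads
        # onto the available protocol operation units.
        for _ in range(min(NUM_OP_UNITS, len(ready))):
            completed.append(heapq.heappop(ready)[1])
    return completed

reqs = [(1, "invalidation-1"), (2, "read-1"), (2, "read-2"),
        (1, "invalidation-2"), (3, "writeback-1")]
print(run(reqs))   # invalidations among the admitted requests are served first
```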
ISBN (print): 0769508960
Single System Image is a desirable property for all services provided by a cluster environment. Coherently with this principle, cluster file systems should make data distribution transparent to the application, giving rise to a heavy communication workload induced by I/O activity. Hence, I/O performance greatly depends on communication performance and on how the storage and network subsystems interact at all levels. In this paper we present a detailed performance analysis of the storage and network subsystems of a properly configured cluster node. In particular we consider the case of an Ultra2 SCSI controller with multiple attached disks sharing the I/O bus with a Gigabit LAN adapter. The analysis helps to understand how the several hardware and software components interact, where the potential bottlenecks are, and how these bottlenecks affect the overall performance.
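The flavor of this bottleneck reasoning can be conveyed with a back-of-envelope sketch: data served from disk to the LAN is limited by the disk aggregate, the SCSI bus, the NIC, and the shared I/O bus it must cross twice. The bandwidth figures below are nominal assumptions, not the paper's measured values.

```python
DISK_MBPS = 20       # assumed sustained per-disk transfer rate
N_DISKS = 4
SCSI_BUS_MBPS = 80   # Ultra2 SCSI nominal bus bandwidth
PCI_BUS_MBPS = 133   # 32-bit/33-MHz PCI bus shared by the SCSI controller and the NIC
NIC_MBPS = 125       # Gigabit Ethernet nominal payload ceiling

def serve_from_disk_over_lan():
    """Data read from disk and sent to the LAN crosses the shared I/O bus twice."""
    storage_side = min(DISK_MBPS * N_DISKS, SCSI_BUS_MBPS)
    shared_bus = PCI_BUS_MBPS / 2
    return min(storage_side, NIC_MBPS, shared_bus)

print("end-to-end ceiling: about %.0f MB/s" % serve_from_disk_over_lan())
```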
ISBN (print): 076950860X
In this paper we describe a new class of tools for protecting computer systems from security attacks. Their distinguishing feature is the principle they are based on: host or network protection is achieved not by strengthening their defenses but by weakening the enemy's offensive capabilities. A prototype tool has been implemented that demonstrates that such an approach is feasible and effective. We show that some of the most popular DoS attacks are effectively blocked, with limited impact on the sender's performance. Measurements of the implemented prototype show that controlling the outgoing traffic does not affect performance at the sender machine when the traffic is not hostile. If the traffic is hostile, the limited slowdown experienced at the source is the price to pay to make the Internet a safer place for all its users. The limited performance impact and the efficacy in attack prevention make tools like the one presented in this paper a new component of security architectures. Furthermore, such tools represent an effective way to address security problems that are still unsolved or for which only partial solutions are available, such as the liability problem, intranet security, security tool performance, and the use of distributed tools for intrusion.
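The underlying idea, constraining a would-be attacker's outgoing traffic at its source, can be approximated with a token-bucket egress limiter. This is a generic stand-in for the tool described in the paper, and the rate and burst values are purely illustrative.

```python
import time

class EgressLimiter:
    """Token-bucket limiter applied to a host's outgoing packets."""

    def __init__(self, rate_pps, burst):
        self.rate, self.capacity = rate_pps, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True        # benign traffic stays below the rate and is untouched
        return False           # a flood exhausts the bucket and is dropped at the source

limiter = EgressLimiter(rate_pps=200, burst=50)
sent = sum(limiter.allow() for _ in range(10_000))   # simulate a packet flood
print("%d of 10000 flood packets allowed out" % sent)
```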