We study the effectiveness of different parallel architectures for achieving the high throughputs and low latencies needed in processing signaling protocols for high-speed networks. A key performance issue is the trade-off between the load balancing gains and the call record management overhead. Arranging processors in large groups potentially yields higher load balancing gains but also incurs higher overhead in maintaining consistency among the replicated copies of the call records. We study this trade-off and its impact on the design of protocol processing systems for two generic classes of parallel architectures, namely, shared memory and distributed memory architectures. In shared memory architectures, maintaining a common message queue in the shared memory can provide the maximal load balancing gains. We show, however, that in order to optimize performance it is necessary to organize the processors in small groups, since large groups result in higher call record management overhead. In distributed memory architectures, with each processor maintaining its own message queue, there is no inherent provision for load balancing. Based on a detailed simulation analysis, we show that organizing the processors into small groups and using a simple distributed load balancing scheme yields modest performance gains even after call record management overheads are taken into account. We find that the common message queue architecture outperforms the distributed architecture in terms of lower response time due to its improved load balancing capability. Finally, we perform a fault-tolerance analysis with respect to the call record data structure. Using a simple failure recovery model of the processors and the local memory, we show that in the case of the shared memory architecture, availability is also optimized when processors are organized in small groups. This is because, when comparing architectures, the higher call record management overhead incurred for larger group sizes must be accounted for.
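The load balancing advantage of a common message queue can be illustrated with a standard queueing-theoretic comparison (not taken from the paper, which relies on detailed simulation): a group of c processors serving one shared queue behaves as an M/M/c system, whereas per-processor queues with random message assignment behave as c independent M/M/1 systems. The sketch below, with illustrative parameter values, computes the mean response time in both cases using the Erlang C formula:

```python
from math import factorial

def mm1_response(lam, mu):
    # Mean response time of a single M/M/1 queue (requires lam < mu):
    # each processor handles its own arrival stream in isolation.
    return 1.0 / (mu - lam)

def mmc_response(lam_total, mu, c):
    # Mean response time of an M/M/c queue: c processors drain one
    # shared message queue, so no processor idles while work waits.
    a = lam_total / mu                      # offered load in Erlangs
    rho = a / c                             # per-server utilization (< 1)
    top = a**c / factorial(c) / (1 - rho)
    bottom = sum(a**n / factorial(n) for n in range(c)) + top
    erlang_c = top / bottom                 # P(arriving message must wait)
    wq = erlang_c / (c * mu - lam_total)    # mean queueing delay
    return wq + 1.0 / mu

# Illustrative values: 4 processors, service rate 1, 80% utilization each.
mu, lam, c = 1.0, 0.8, 4
separate = mm1_response(lam, mu)        # per-processor queues -> 5.0
shared = mmc_response(c * lam, mu, c)   # common queue -> about 1.75
```

At equal utilization the shared queue cuts mean response time by roughly a factor of three here, which is the gain the call record consistency overhead must be weighed against as group size grows.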