Designing packet buffers in high-bandwidth switches and routers

PhD Qualifying Examination


Title: "Designing packet buffers in high-bandwidth switches and routers"

Mr. Dong LIN


Abstract:

The phenomenal growth of the Internet has been fuelled by the rapid
increase in the communication link bandwidth. Internet routers play a
crucial role in sustaining this growth by being able to switch packets
extremely fast to keep up with the growing bandwidth (line rate). This
demands that sophisticated packet switching and buffering techniques
be adopted in the router design. In particular, all routers rely on
well-designed packet buffers, with large capacity but with short
response times for each packet, to deliver high throughput and deal
with temporary congestion. Current memory technologies like SRAM and
DRAM cannot meet the two requirements (capacity and response time)
simultaneously. SRAM is fast enough but cannot be built with large
capacity and is power-hungry; DRAM can be built with a large capacity,
but its access time is too long. This has prompted some
researchers to suggest a combined SRAM/DRAM hierarchical buffer
architecture that achieves good throughput and latency with large
capacity. Interleaved and parallel buffers further improve upon the
scalability. In all these approaches both the SRAM and DRAM need to
maintain a large number of queues. In practice, maintaining so many
dynamic queues is a real challenge and limits the buffer scalability
of these centralized approaches.
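The hybrid SRAM/DRAM idea above can be made concrete with a small
illustrative sketch (not from the talk itself; the class name, batch
size B, and all details below are assumptions for illustration): a
small, fast head cache and tail cache stand in for SRAM, a large, slow
bulk store stands in for DRAM, and packets move between them in
batches of B so each slow DRAM access is amortized over B packets.

```python
from collections import deque

B = 4  # assumed DRAM batch size: one DRAM access moves B packets

class HybridQueue:
    """Toy model of one queue in a hierarchical SRAM/DRAM buffer."""

    def __init__(self):
        self.tail_sram = deque()  # fast cache for arriving packets
        self.head_sram = deque()  # fast cache for departing packets
        self.dram = deque()       # large, slow bulk storage

    def enqueue(self, pkt):
        self.tail_sram.append(pkt)
        # Flush a full batch to DRAM with a single (slow) DRAM write.
        if len(self.tail_sram) >= B:
            for _ in range(B):
                self.dram.append(self.tail_sram.popleft())

    def dequeue(self):
        if not self.head_sram:
            if self.dram:
                # Refill the head cache with one (slow) DRAM read.
                for _ in range(min(B, len(self.dram))):
                    self.head_sram.append(self.dram.popleft())
            else:
                # Short queue: bypass DRAM entirely.
                while self.tail_sram:
                    self.head_sram.append(self.tail_sram.popleft())
        return self.head_sram.popleft() if self.head_sram else None
```

FIFO order is preserved because packets only ever flow tail cache ->
DRAM -> head cache; the SRAM caches hide the long DRAM access time
from the line rate.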

In this survey, we review previous works and seek more efficient and
novel ways of achieving scalability by introducing a distributed
packet buffer architecture. This new strategy raises two fundamental
issues: (a) how to design scalable packet buffers using independent
buffer subsystems; and (b) how to dynamically balance the workload
among multiple subsystems without any blocking. We address these
issues by first designing a basic framework that allows flows to
switch dynamically from one subsystem to another without any blocking.
In particular, this framework defines a series of states that all
flows must follow. It minimizes the overhead of state maintenance and
avoids DRAM fragmentation. Based on this framework, we further devise a
load-balancing algorithm to meet the overall system requirements. Both
theoretical analysis and experimental results demonstrate that our
load-balancing algorithm and its corresponding memory hierarchy
outperform existing approaches on high-bandwidth links with a large
number of active connections (e.g., the high-speed Internet).
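One simple way such flow-level load balancing could work (a hedged
sketch only; the abstract does not specify the algorithm, and the
class, method names, and binding rule here are assumptions) is to pin
each active flow to one subsystem while it has packets buffered, which
preserves per-flow packet order, and let an idle flow rebind to the
least-loaded subsystem when it next becomes active:

```python
class Dispatcher:
    """Toy flow-to-subsystem dispatcher for a distributed buffer."""

    def __init__(self, n_subsystems):
        self.load = [0] * n_subsystems  # total packets per subsystem
        self.flow_sub = {}              # active flow -> subsystem index
        self.flow_pkts = {}             # active flow -> packets buffered

    def on_arrival(self, flow):
        sub = self.flow_sub.get(flow)
        if sub is None:
            # Idle flow: bind it to the least-loaded subsystem.
            sub = min(range(len(self.load)), key=self.load.__getitem__)
            self.flow_sub[flow] = sub
            self.flow_pkts[flow] = 0
        self.flow_pkts[flow] += 1
        self.load[sub] += 1
        return sub

    def on_departure(self, flow):
        sub = self.flow_sub[flow]
        self.flow_pkts[flow] -= 1
        self.load[sub] -= 1
        if self.flow_pkts[flow] == 0:
            # Flow drained: release the binding so it may switch later.
            del self.flow_sub[flow]
            del self.flow_pkts[flow]
        return sub
```

The two per-flow states here (bound while buffered, free once drained)
loosely mirror the abstract's idea of a series of states every flow
must follow, in a much simplified form.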


Date:     		Friday, 29 January 2010

Time:                   4:00pm - 6:00pm

Venue:                  Room 3494
 			Lifts 25/26

Committee Members:      Prof. Mounir Hamdi (Supervisor)
 			Dr. Lin Gu (Chairperson)
 			Dr. Yunhao Liu
 			Dr. Jogesh Muppala


**** ALL are Welcome ****