
Network World lab test: High-end enterprise switch/routers

Scheduled for publication in late March 2001

Test methodology

 

v. 1.22 Copyright © 2000 by Network Test Inc. Vendors are encouraged to comment on this document and any other aspect of test methodology. Network Test reserves the right to change the parameters of this test at any time.

 

By David Newman

 

Please forward comments to dnewman@networktest.com

 

1         Executive summary

This document describes the methodology to be used in a comparison of high-end enterprise switch/routers. This test has four main areas:

·        baseline forwarding rate, latency, and jitter;

·        link aggregation;

·        high availability and failover;

·        quality-of-service capabilities.

This document is organized as follows. This section introduces the test. Section 2 describes device requirements and test equipment. Section 3 describes test procedures. Section 4 describes the change history of this document.

 

2         The test bed

This section discusses requirements of systems under test and introduces the test equipment to be used.

 

2.1        Devices under test

Participating vendors must supply two chassis, each equipped as follows:

·        at least 32 gigabit Ethernet (1000Base-SX) interfaces (see section 3.1.2).

2.2        Test Hardware

2.2.1        Spirent SmartBits

The principal test instrument for this project is the SmartBits traffic generator/analyzer manufactured by Spirent Communications Inc. (Chatsworth, Calif.). Spirent’s SmartBits 6000 chassis will be equipped with the company’s new LAN-3201 and/or LAN-6201 interfaces. These cards have 1000Base-SX gigabit Ethernet interfaces and layer 2/3/4 capabilities.

 

The SmartBits hardware will use SmartFlow and SmartWindow software and scripts custom-developed for this project.

 

3         Test procedures

For each routine in this section, this document describes:

·        the test objective;

·        the test bed configuration;

·        the test procedure;

·        the metrics to be reported.

A primary design goal of this methodology is to complete all events in 2.5 working days per vendor.

3.1        Baseline forwarding rate, latency, and jitter

3.1.1        Objectives

Determine the forwarding rate/packet loss, latency, and jitter of a single blade

Determine the forwarding rate/packet loss, latency, and jitter of a single chassis

 

3.1.2        Test bed configuration

One chassis equipped with at least 32 gigabit Ethernet (1000Base-SX) interfaces

Separate IP subnet configured for each interface, as follows:

 

Interface 1: 10.1.0.0/16

Interface 2: 10.2.0.0/16

Interface 3: 10.3.0.0/16

...

Interface 32: 10.32.0.0/16

 

Test traffic will represent 224 unique host IP addresses per subnet. Hosts on each subnet will use IP addresses 10.x.0.2 through 10.x.0.225.

 

The test traffic shall consist of 64- and 1,518-byte IP packets[1] (offered in separate runs) using a bidirectional traffic orientation and fully meshed distribution. See RFC 2285 for definitions of traffic orientation and distribution.
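
For reference, the maximum offered load at each packet size follows directly from gigabit Ethernet framing: beyond the measured packet length, every packet carries 20 bytes of wire overhead (an 8-byte preamble plus a 12-byte interframe gap). A minimal sketch of the arithmetic, in Python:

# Theoretical line rate of a gigabit Ethernet interface at a given packet size.
# Wire overhead per packet: 8-byte preamble + 12-byte interframe gap.
LINK_BPS = 1_000_000_000
OVERHEAD_BYTES = 8 + 12

def max_pps(packet_bytes):
    """Maximum packets per second at 100 percent offered load."""
    return LINK_BPS / ((packet_bytes + OVERHEAD_BYTES) * 8)

for size in (64, 1518):
    print(f"{size}-byte packets: {max_pps(size):,.0f} pps per interface")
# 64-byte packets: 1,488,095 pps per interface
# 1518-byte packets: 81,274 pps per interface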

 

3.1.3        Procedure

Offer traffic to 8 interfaces on the same blade. Determine throughput, latency, jitter, packets received in sequence, and packet loss at maximum offered load.

 

Repeat the test with fully meshed traffic offered to all 32 interfaces on all blades. This test will use 217 (not 224) unique host IP addresses per subnet. (In a 32-interface full mesh, each interface addresses 31 destination interfaces. The SmartBits requires that the number of hosts be an integer multiple of the number of destination interfaces; hence 217 rather than 224.)
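
The host-count adjustment is simple arithmetic: trim 224 down to the largest integer multiple of the number of destination interfaces each source must address. A minimal sketch in Python:

# Hosts per subnet for a fully meshed test. The SmartBits requires the host
# count to be an integer multiple of the number of destination interfaces.
def hosts_per_subnet(max_hosts, total_interfaces):
    destinations = total_interfaces - 1   # full mesh: every other interface
    return (max_hosts // destinations) * destinations

print(hosts_per_subnet(224, 8))    # 8-port blade test:    7 destinations -> 224
print(hosts_per_subnet(224, 32))   # 32-port chassis test: 31 destinations -> 217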

 

3.1.4        Metrics

Throughput

Latency

Jitter

Packets received in sequence

Packet loss at maximum forwarding rate

3.2        Link Aggregation

3.2.1        Objectives

Determine maximum number of gigabit Ethernet links that can be aggregated

Determine throughput, latency, and jitter using aggregated links

Determine ability to dynamically add individual links to an existing aggregate link

Determine ability to dynamically drop individual links from an existing aggregate link

 

3.2.2        Test bed configuration

On each of two chassis, aggregate as many links as the device will support into a single aggregated link, up to a maximum of one-half the number of interfaces per chassis.

 

Use remaining interfaces for input and output of test traffic.

 

The test traffic shall consist of 64- and 1,518-byte IP packets (offered in separate runs) using a bidirectional traffic orientation and partially meshed distribution. See RFC 2285 for definitions of traffic orientation and partial mesh.

 

3.2.3        Procedure

Configure the device under test to create an aggregate link consisting of the maximum number of individual links supported. Offer test traffic to all non-aggregated interfaces of each chassis, destined for all non-aggregated interfaces of the opposite chassis. Measure throughput by following the procedure described in RFC 2544, section 26.1.
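
RFC 2544 defines throughput as the highest offered load at which the device forwards all packets without loss; the search is commonly implemented as a binary search on offered load. A minimal sketch in Python, where run_trial is a hypothetical stand-in for a SmartFlow trial at a given load (not a Spirent API):

# RFC 2544-style throughput search: binary search on offered load (as a
# percentage of line rate) until the zero-loss point is bracketed.
def find_throughput(run_trial, resolution=0.1):
    """run_trial(load_pct) returns the number of packets lost in one trial."""
    low, high = 0.0, 100.0            # known-good and known-bad loads
    while high - low > resolution:
        load = (low + high) / 2
        if run_trial(load) == 0:
            low = load                # no loss: search higher
        else:
            high = load               # loss: search lower
    return low                        # highest load observed with zero loss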

 

If a device supports multiple aggregated links, repeat above procedure with maximum number of active aggregated links.

 

To test addition of an individual link to an existing aggregated link: Configure an aggregated link consisting of L-1 links, where L represents the maximum number of individual links that can be aggregated into one larger unit. Offer test traffic to all non-aggregated interfaces of each chassis, destined for all non-aggregated interfaces of the opposite chassis. The offered load shall not exceed the channel capacity of the aggregated link. While test traffic is being offered, configure the device to add one link to the aggregated link. The test duration must extend at least 30 seconds beyond the link addition. Record any packet loss and/or drop in throughput during the test duration.

 

To test removal of an individual link from an existing aggregated link: Configure an aggregated link consisting of L links, where L represents the maximum number of individual links that can be aggregated into one larger unit. Offer test traffic to all non-aggregated interfaces of each chassis, destined for all non-aggregated interfaces of the opposite chassis. The offered load shall not exceed the capacity of L-1 links in the aggregated link. While test traffic is being offered, configure the device to remove one link from the aggregated link. The test duration must extend at least 30 seconds beyond the link removal. Record any packet loss and/or drop in throughput during the test duration.

 

3.2.4        Metrics

Maximum number of links that can be aggregated

Throughput

Latency

Jitter

Packets received in sequence

 

3.3        High availability and failover

3.3.1        Objectives

To determine failover time upon failure of primary link

To determine failover time upon failure of primary aggregated link

 

3.3.2        Test bed configuration

Two chassis with one primary link (a single gigabit Ethernet circuit) and one secondary link (a single gigabit Ethernet circuit) configured between the chassis

For devices that support failover across aggregated links, two chassis with one primary aggregated link and one secondary aggregated link

 

3.3.3        Procedure

Offer traffic at a rate of 1,000,000 pps. Physically remove the primary backbone link. Note the time for traffic to be rerouted over the secondary link.

 

Repeat the procedure for devices supporting failover across aggregated links.

 

3.3.4        Metrics

Packet loss

Switchover time (determined from packet loss)
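
Because traffic is offered at a known constant rate, switchover time falls directly out of the loss count. A minimal sketch of the calculation, with an illustrative loss figure:

# Failover time inferred from packet loss at a constant offered rate.
OFFERED_PPS = 1_000_000               # offered load from section 3.3.3

def switchover_seconds(packets_lost):
    return packets_lost / OFFERED_PPS

print(switchover_seconds(250_000))    # hypothetical loss of 250,000 packets -> 0.25 s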

 

3.4        Quality-of-service capabilities

3.4.1        Objectives

Case 1: Determine whether devices under test allocate fixed amounts of bandwidth to given traffic classes during periods of congestion

 

Case 2: Determine whether devices under test allocate any available bandwidth to a given traffic class during periods of congestion

 

3.4.2        Test bed configuration

Two chassis connected via one gigabit Ethernet circuit. For devices that support link aggregation, this test will be repeated with an aggregated link consisting of 8 gigabit Ethernet links.

 

The test traffic shall consist of 128-byte IP packets offered in a bidirectional orientation and a partially meshed distribution. See RFC 2285 for definitions of traffic orientation and partial mesh.

 

The test traffic shall be divided into three classes, distinguished by different TCP port numbers:

 

Priority     Traffic type     Destination TCP port
High         DLSw             2065
Medium       Web              80
Low          FTP              20

 

The SmartBits will offer all packets with an IP precedence value of 1. The devices under test must reclassify traffic using IP precedence 7 for high-priority traffic, 3 for medium-priority traffic, and 1 for low-priority traffic.
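
IP precedence occupies the top three bits of the IP type-of-service byte (RFC 791), so reclassification amounts to rewriting that field. A minimal sketch of the mapping:

# IP precedence is the top 3 bits of the IP TOS byte (RFC 791).
def tos_for_precedence(precedence):
    return precedence << 5

for name, prec in (("high (DLSw)", 7), ("medium (Web)", 3), ("low (FTP)", 1)):
    print(f"{name}: precedence {prec} -> TOS 0x{tos_for_precedence(prec):02X}")
# high (DLSw):  precedence 7 -> TOS 0xE0
# medium (Web): precedence 3 -> TOS 0x60
# low (FTP):    precedence 1 -> TOS 0x20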

 

To create congestion, the aggregate offered load of all classes of test traffic will exceed the capacity of the backbone link (or aggregated link) by a ratio of 2:1.

 

For all tests, traffic will be offered to the device under test in a high/medium/low ratio of 1:10:10.

 

Devices under test must be configured to enforce the following rules:

 

  1. During periods of congestion when all three traffic classes are present, the device shall deliver the classes in a ratio of 2:12:7 for high, medium, and low priorities, respectively.

  2. High-priority traffic (DLSw) must be delivered with 0 percent loss.

  3. When high-priority traffic is not present, medium- and low-priority traffic shall be able to use the additional bandwidth made available by the lack of high-priority traffic. Medium-priority traffic retains a higher priority than low-priority traffic; the two classes shall share the bandwidth in a 12:7 ratio, as stated in rule 1 above.

  4. Vendors must not change device configuration between cases 1 and 2.

 

3.4.3        Procedure

Case 1: Offer all three classes of traffic to 7 interfaces on chassis A in a high/medium/low ratio of 1:10:10, at a rate that oversubscribes channel capacity by a factor of 2. Observe the output ratio on destination interfaces; an ideal result is a ratio of approximately 2:12:7 among the high, medium, and low classes of traffic.

 

The traffic classes should be distributed as follows:

 

 

Traffic class                                Aggregate offered load (Mbit/s)    Target aggregate output load (Mbit/s)
High                                         82.367                             82.367
Medium                                       823.668                            494.201
Low                                          823.668                            288.284
Total (packet data)                          1,729.704                          864.852
Total (incl. preamble and interframe gap)    2,000.000                          1,000.000

 

Case 2: Offer medium- and low-priority classes of traffic to two interfaces on chassis A in a medium/low ratio of 10:10, at a rate that oversubscribes channel capacity by a factor of 2. Observe the output ratio on destination interfaces; an ideal result is a ratio of 5:3 between the medium and low classes of traffic.

 

The traffic classes should be distributed as follows:

 

 

Traffic class                                Aggregate offered load (Mbit/s)    Target aggregate output load (Mbit/s)
Medium                                       864.852                            540.532
Low                                          864.852                            324.320
Total (packet data)                          1,729.704                          864.852
Total (incl. preamble and interframe gap)    2,000.000                          1,000.000
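
In both cases the target output loads are simply the deliverable packet-data capacity of the backbone split according to the delivery ratio. A minimal sketch in Python reproducing the output columns of both tables (the 864.852 Mbit/s capacity figure is taken from the tables above):

# Split the deliverable packet-data capacity (Mbit/s) among traffic classes
# according to the configured delivery ratio.
def split_by_ratio(total_mbps, weights):
    parts = sum(weights.values())
    return {cls: round(total_mbps * w / parts, 3) for cls, w in weights.items()}

CAPACITY = 864.852   # packet-data capacity of the 1 Gbit/s backbone (from the tables)

print(split_by_ratio(CAPACITY, {"high": 2, "medium": 12, "low": 7}))
# case 1: high ~82.367, medium ~494.201, low ~288.284
print(split_by_ratio(CAPACITY, {"medium": 5, "low": 3}))
# case 2: medium ~540.532, low ~324.320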

 

3.4.4        Metrics

Case 1: Throughput of high-, medium-, and low-priority traffic

Ratio of high-, medium-, and low-priority traffic

Packets delivered in sequence

 

Case 2: Throughput of medium- and low-priority traffic

Ratio of medium- and low-priority traffic

Packets delivered in sequence

 


4         Change History

Version 1.22

Date: 22 May 2001

Fixed typo in HTML title

 

Version 1.21

Date: 27 February 2001

Fixed typo in introduction

Changed text of section 3.4.3, case 2, to indicate only medium- and low-priority traffic is offered

 

Version 1.20

Date: 9 February 2001

Corrected a partial mesh/full mesh error in section 3.1.2

Changed target data rates in section 3.4 to compensate for Ethernet interframe gap and packet preamble

 

Version: 1.12

Date: January 2001

Corrected typos in version 1.11

 

Version: 1.11

Date: January 2001

Changed section 3.1.2 to reflect full mesh

 

Version 1.10

Date: December 2000

Added target data rates in section 3.4 on QOS

 

Version 1.0

Date: November 2000

Initial release



[1] All references to packet lengths in this document refer to IP over Ethernet. We measure packet length from the first byte of the Ethernet MAC header to the last byte of the CRC. Unless otherwise specified, IP packets consist of IP headers but not upper-layer headers (e.g., TCP, UDP, HTTP, etc.).

 
