
Network World Lab Test: Enterprise Backbone Switch/Routers

Published 3 February 2003

Test Methodology

 

v. 1.52 Copyright © 2002-2003 by Network Test Inc. Vendors are encouraged to comment on this document and any other aspect of test methodology. Network Test reserves the right to change the parameters of this test at any time.

 

By David Newman

 

Please forward comments to dnewman at networktest.com

 

1         Executive summary

This document describes the methodology to be used in measuring performance of high-end enterprise switch/routers with 10-gigabit and gigabit Ethernet interfaces. This test has the following main areas:

 

 

This document is organized as follows. This section introduces the test. Section 2 describes device requirements and test equipment. Section 3 describes test procedures. Section 4 describes the change history of this document.

 

2         The test bed

This section discusses requirements of systems under test and introduces the test equipment to be used.

2.1        Devices under test

Participating vendors must supply the following:

 

 

We strongly encourage vendors to supply 20 percent additional spare interfaces.

 

NOTE: The test instruments will use conventional full-size GBICs, and we will supply single-mode and multimode cabling with SC connectors. Vendors are welcome to supply other physical interface types such as SFP “mini-GBICs” but must also supply their own SC-to-SFP cabling if they choose to do so.

 

2.2        Test Hardware

2.2.1        Spirent SmartBits

The principal test instrument for this project is the SmartBits traffic generator/analyzer manufactured by Spirent Communications Inc. (Calabasas, Calif.). Spirent’s SmartBits 6000B chassis will be equipped with the company’s XLW-3720A 10-gigabit Ethernet cards and SmartBits LAN-3311 TeraMetrics gigabit Ethernet cards.

 

The 10-gigabit Ethernet cards use XENPAK MSA modules with 1,310-nm optics.

 

The SmartBits hardware will run SAI scripts custom-developed for this project as well as Spirent’s SmartWindow application.

 

2.2.2        Spirent AX-4000

We use the Spirent AX-4000 analyzer to capture traffic at gigabit line rates.

 

3         Test procedures

For each routine in this section, this document describes:

 

·        the test objective(s);

·        the configuration to be used;

·        the procedure to be used;

·        the test metrics to be recorded.

 

A primary goal of this methodology is to complete all events in 2 working days per vendor.

3.1        Baseline 10-gigabit performance

3.1.1        Objectives

Determine throughput, delay, jitter, sequencing, and frame loss at maximum offered load for 10-gigabit Ethernet interfaces forwarding unicast IPv4 traffic

 

3.1.2        Test bed configuration

Figure 1 below shows the physical test bed topology for the 10-gigabit Ethernet baseline tests. The device under test (DUT) is one chassis equipped with at least 2, and preferably 4, 10-gigabit Ethernet interfaces. We attach SmartBits 10-Gbit/s Ethernet test interfaces to the DUT.

 

 

Figure 1: 10-Gbit/s baseline physical topology

 

Figure 2 below shows the logical test bed topology for the 10-gigabit Ethernet baseline tests. We emulate 510 IP hosts on each of four subnets, and offer traffic in a fully meshed pattern among all subnets.

 

 

Figure 2: 10-Gbit/s baseline logical topology

The following table lists the IP addressing to be used by the DUT and the test instrument. Test traffic will represent 510 unique host IP addresses per subnet.  The host IP addresses on each subnet will begin at IP address 10.x.0.3/16. 


 

Chassis number | DUT interface type | DUT port IP address/prefix length | Test instrument interface type | Test instrument port IP address/prefix length | Hosts emulated
1 | 10GE | 10.101.0.1/16 | 10GE | 10.101.0.2/16 | 10.101.0.3-10.101.2.4
1 | 10GE | 10.102.0.1/16 | 10GE | 10.102.0.2/16 | 10.102.0.3-10.102.2.4
1 | 10GE | 10.103.0.1/16 | 10GE | 10.103.0.2/16 | 10.103.0.3-10.103.2.4
1 | 10GE | 10.104.0.1/16 | 10GE | 10.104.0.2/16 | 10.104.0.3-10.104.2.4

 

We will use SAI scripts to generate and analyze traffic.

 

The test traffic shall consist of 64-, 256- and 1,518-byte Ethernet frames carrying UDP/IP headers[1] (offered in separate runs) using a bidirectional traffic orientation and a fully meshed distribution. See RFC 2285 for definitions of traffic orientation and distribution.
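The frame sizes above imply well-defined theoretical maximums against which throughput results can be checked: on Ethernet, each frame also occupies 20 bytes of wire overhead (8-byte preamble plus 12-byte minimum interframe gap). The short sketch below is illustrative arithmetic, not part of the test scripts; it computes the 10-Gbit/s line-rate frame rates for the three sizes.

# Theoretical line-rate frame rates for 10-Gbit/s Ethernet at the three
# test frame sizes. Frame length is measured from MAC header through CRC
# (see footnote 1); the wire adds an 8-byte preamble and a 12-byte
# minimum interframe gap per frame.
LINE_RATE_BPS = 10_000_000_000
WIRE_OVERHEAD_BYTES = 20

for frame_bytes in (64, 256, 1518):
    fps = LINE_RATE_BPS / ((frame_bytes + WIRE_OVERHEAD_BYTES) * 8)
    print(f"{frame_bytes:5d}-byte frames: {fps:13,.0f} frames/s")

# Prints approximately:
#    64-byte frames:    14,880,952 frames/s
#   256-byte frames:     4,528,986 frames/s
#  1518-byte frames:       812,744 frames/s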

 

Because traffic such as OSPF hellos, hot standby messages, and other management messages may interfere with test traffic, vendors should either disable these protocols or set their timers to values high enough that such traffic does not degrade data-plane performance.

 

3.1.3        Procedure

Using a binary search algorithm, we offer traffic to each interface in a fully meshed pattern to determine the throughput rate.
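The search logic is the conventional RFC 2544 procedure: halve the interval between the highest loss-free rate seen and the lowest lossy rate seen until the interval is acceptably small. A simplified sketch follows; offer_load is a hypothetical stand-in for the SAI script that actually drives the SmartBits, not a Spirent API.

def find_throughput(offer_load, resolution=0.1):
    """Binary search for throughput, as a percentage of line rate.

    offer_load(rate_pct) runs one 60-second iteration at rate_pct percent
    of line rate and returns the number of frames lost. It is a hypothetical
    stand-in for the SAI test script, not a Spirent API call.
    """
    low, high = 0.0, 100.0          # known-good and known-bad bounds
    best = 0.0
    while high - low > resolution:
        rate = (low + high) / 2.0
        if offer_load(rate) == 0:   # loss-free: raise the lower bound
            best = low = rate
        else:                       # frames lost: lower the upper bound
            high = rate
    return best                     # highest loss-free rate found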

 

At the throughput rate, we also run tests to determine average delay, jitter, and frames received in sequence.
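Average delay is the mean of the per-frame latencies; jitter here is read as the variation between successive frames' delays, in the spirit of the dsterm Internet-Draft cited in the change history. A rough sketch of that reduction, assuming the instrument exports one transmit/receive timestamp pair per test frame (an assumption about the data format, not a description of the SmartBits output):

def delay_and_jitter(timestamp_pairs):
    """Reduce per-frame (tx_time, rx_time) pairs, in seconds, to average
    delay and jitter, the latter taken as the mean absolute difference
    between consecutive frames' delays."""
    delays = [rx - tx for tx, rx in timestamp_pairs]
    avg_delay = sum(delays) / len(delays)
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    jitter = sum(diffs) / len(diffs)
    return avg_delay, jitter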

 

If throughput is less than line rate, we also offer traffic at line rate and measure frame loss at maximum offered load, as defined in RFCs 2285 and 2889.

 

We repeat this test with 64-, 256-, and 1,518-byte frames.

 

Test duration is 60 seconds per iteration.

 

The precision of delay and jitter measurements is +/- 100 nanoseconds.

 

3.1.4        Metrics

Throughput

Average delay

Jitter

Frames received in sequence

Frame loss at maximum forwarding rate

3.2        Bandwidth Aggregation

3.2.1        Objectives

Determine throughput, delay, and jitter of traffic from 1-Gbit/s interfaces aggregated onto a 10-Gbit/s backbone

 

3.2.2        Test bed configuration

Figure 3 below shows the physical test bed topology for the bandwidth aggregation tests. The system under test (SUT) comprises two chassis, each configured with ten gigabit Ethernet interfaces and one 10-gigabit Ethernet interface.

 

Figure 3: Bandwidth Aggregation Physical Topology

 

We attach SmartBits 1-Gbit/s Ethernet test interfaces to the DUT.

 

The following table lists the IP addressing to be used by the DUT and the test instrument. Test traffic will represent 510 unique host IP addresses per subnet.  The host IP addresses on each subnet will begin at IP address 10.x.0.3/16. 


 

Chassis number | DUT interface type | DUT port IP address/prefix length | Test instrument interface type | Test instrument port IP address/prefix length | Hosts emulated
1 | GE | 10.1.0.1/16 | GE | 10.1.0.2/16 | 10.1.0.3/16-10.1.2.4
1 | GE | 10.2.0.1/16 | GE | 10.2.0.2/16 | 10.2.0.3/16-10.2.2.4
1 | GE | 10.3.0.1/16 | GE | 10.3.0.2/16 | 10.3.0.3/16-10.3.2.4
1 | GE | 10.4.0.1/16 | GE | 10.4.0.2/16 | 10.4.0.3/16-10.4.2.4
1 | GE | 10.5.0.1/16 | GE | 10.5.0.2/16 | 10.5.0.3/16-10.5.2.4
1 | GE | 10.6.0.1/16 | GE | 10.6.0.2/16 | 10.6.0.3/16-10.6.2.4
1 | GE | 10.7.0.1/16 | GE | 10.7.0.2/16 | 10.7.0.3/16-10.7.2.4
1 | GE | 10.8.0.1/16 | GE | 10.8.0.2/16 | 10.8.0.3/16-10.8.2.4
1 | GE | 10.9.0.1/16 | GE | 10.9.0.2/16 | 10.9.0.3/16-10.9.2.4
1 | GE | 10.10.0.1/16 | GE | 10.10.0.2/16 | 10.10.0.3/16-10.10.2.4
1 | 10GE | 10.101.0.1/16 | NA | NA | NA
2 | 10GE | 10.101.0.254/16 | NA | NA | NA
2 | GE | 10.21.0.1/16 | GE | 10.21.0.2/16 | 10.21.0.3/16-10.21.2.4
2 | GE | 10.22.0.1/16 | GE | 10.22.0.2/16 | 10.22.0.3/16-10.22.2.4
2 | GE | 10.23.0.1/16 | GE | 10.23.0.2/16 | 10.23.0.3/16-10.23.2.4
2 | GE | 10.24.0.1/16 | GE | 10.24.0.2/16 | 10.24.0.3/16-10.24.2.4
2 | GE | 10.25.0.1/16 | GE | 10.25.0.2/16 | 10.25.0.3/16-10.25.2.4
2 | GE | 10.26.0.1/16 | GE | 10.26.0.2/16 | 10.26.0.3/16-10.26.2.4
2 | GE | 10.27.0.1/16 | GE | 10.27.0.2/16 | 10.27.0.3/16-10.27.2.4
2 | GE | 10.28.0.1/16 | GE | 10.28.0.2/16 | 10.28.0.3/16-10.28.2.4
2 | GE | 10.29.0.1/16 | GE | 10.29.0.2/16 | 10.29.0.3/16-10.29.2.4
2 | GE | 10.30.0.1/16 | GE | 10.30.0.2/16 | 10.30.0.3/16-10.30.2.4

 

We will use custom-developed Spirent SAI scripts to generate and analyze traffic. Our test pattern can also be generated using Spirent’s SmartFlow application.

 

The test traffic shall consist of 64-, 256-, and 1,518-byte Ethernet frames carrying UDP/IP headers (offered in separate runs) using a bidirectional traffic orientation and a partially meshed multiple device distribution. See RFC 2285 and RFC 2889 for descriptions of traffic orientation and distribution.
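One consequence of this pattern is worth spelling out: because the distribution is partially meshed across the two chassis, every test frame crosses the 10-Gbit/s backbone, and ten gigabit Ethernet ports per chassis offered at line rate exactly fill that backbone in each direction. The sketch below is just that bookkeeping, not part of any test script.

# Bandwidth-aggregation bookkeeping: ten 1-Gbit/s ingress ports per chassis
# versus one 10-Gbit/s backbone link. All traffic crosses the backbone in
# the partially meshed, multiple-device pattern, so the offered load per
# direction equals backbone capacity (no oversubscription) and line-rate
# throughput is at least theoretically attainable.
GE_PORTS_PER_CHASSIS = 10
GE_RATE_BPS = 1_000_000_000
BACKBONE_RATE_BPS = 10_000_000_000

offered_per_direction_bps = GE_PORTS_PER_CHASSIS * GE_RATE_BPS
oversubscription = offered_per_direction_bps / BACKBONE_RATE_BPS
print(oversubscription)   # 1.0 -- the backbone is exactly, not over, subscribed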

 

Because traffic such as OSPF hellos, hot standby messages, and other management messages may interfere with test traffic, vendors should either disable these protocols or set their timers to values high enough that such traffic does not degrade data-plane performance.


3.2.3        Procedure

Using a binary search algorithm, we offer traffic in a partially meshed pattern to each gigabit Ethernet interface to determine the throughput rate.

 

At the throughput rate, we also run tests to determine average delay, jitter, and frames received in sequence.

 

If throughput is less than line rate, we also offer traffic at line rate and measure frame loss at maximum offered load, as defined in RFCs 2285 and 2889.

 

We repeat this test with 64-, 256-, and 1,518-byte frames.

 

Test duration is 60 seconds per iteration.

 

The precision of delay and jitter measurements is +/- 100 nanoseconds.

 

3.2.4        Metrics

Throughput

Delay

Jitter

Frames received in sequence

Frame loss at maximum offered load

3.3        Failover

3.3.1        Objective

To determine failover time upon loss of a primary link

 

3.3.2        Test bed configuration

2 chassis with one primary link (10GE) and one secondary link (also 10GE) configured between them, and at least 1 gigabit Ethernet interface per chassis

 

For devices that support failover across aggregated links, 2 chassis with one primary aggregated link and one secondary aggregated link

 

Vendors may choose which failover mechanism to use. This may be OSPF equal cost multipath (ECMP), rapid spanning tree, or some proprietary mechanism. The method chosen must be declared at test time.

 

We use Spirent SmartWindow v. 7.40 to generate and analyze traffic for this test.

 

3.3.3        Procedure

We offer 64-byte frames to one gigabit Ethernet interface at a rate of 1,000,000 pps. Approximately 10 seconds into the test, we physically remove the primary backbone link. We note the time for traffic to be rerouted over the secondary link.
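Since the offered rate is held constant at 1,000,000 pps, switchover time falls straight out of the frame-loss count: each lost frame corresponds to one microsecond of outage. A minimal sketch of the conversion (the loss figure in the example is hypothetical):

def switchover_time_seconds(frames_lost, offered_rate_pps=1_000_000):
    """Failover time inferred from frame loss at a constant offered rate.
    At 1,000,000 pps, each lost frame represents one microsecond of outage."""
    return frames_lost / offered_rate_pps

# Hypothetical example: 250,000 frames lost during the run implies a
# switchover time of roughly 0.25 seconds.
print(switchover_time_seconds(250_000))   # 0.25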

 

We repeat the procedure for devices supporting failover across aggregated links.

 

The test duration is 60 seconds.

 

3.3.4        Metrics

Frame loss

Switchover time (determined from frame loss)

3.4        Quality-of-service enforcement

3.4.1        Objectives

Case 1: Determine whether devices under test enforce fixed bandwidth allocations for given traffic classes during periods of congestion

 

Case 2: Determine whether devices under test allocate any available bandwidth to a given traffic class during periods of congestion

 

Note: The QOS tests replicate the methodology previously used for the Network World article “The Trouble With Trunking.” A primary goal of these tests is to compare results obtained with a 10-gigabit backbone vs. those obtained with multiple 1-Gbit/s circuits using link aggregation.

 

3.4.2        Test bed configuration

Figure 4 below shows the physical test bed topology for the QOS tests. The system under test (SUT) comprises two chassis, each configured with twelve gigabit Ethernet interfaces and one 10-gigabit Ethernet interface. This configuration presents the system under test with 12:10 oversubscription of the 10-Gbit/s backbone.

 

Figure 4: QOS Test Bed Physical Topology

 

 

The following table lists the IP addressing to be used by the DUT and the test instrument. Test traffic will represent 252 unique host IP addresses per subnet.  The host IP addresses on each subnet will begin at IP address 10.x.0.3/16. 

 

Chassis number | DUT interface type | DUT port IP address/prefix length | Test instrument interface type | Test instrument port IP address/prefix length | Hosts emulated
1 | GE | 10.1.0.1/16 | GE | 10.1.0.2/16 | 10.1.0.3-10.1.0.254
1 | GE | 10.2.0.1/16 | GE | 10.2.0.2/16 | 10.2.0.3-10.2.0.254
1 | GE | 10.3.0.1/16 | GE | 10.3.0.2/16 | 10.3.0.3-10.3.0.254
1 | GE | 10.4.0.1/16 | GE | 10.4.0.2/16 | 10.4.0.3-10.4.0.254
1 | GE | 10.5.0.1/16 | GE | 10.5.0.2/16 | 10.5.0.3-10.5.0.254
1 | GE | 10.6.0.1/16 | GE | 10.6.0.2/16 | 10.6.0.3-10.6.0.254
1 | GE | 10.7.0.1/16 | GE | 10.7.0.2/16 | 10.7.0.3-10.7.0.254
1 | GE | 10.8.0.1/16 | GE | 10.8.0.2/16 | 10.8.0.3-10.8.0.254
1 | GE | 10.9.0.1/16 | GE | 10.9.0.2/16 | 10.9.0.3-10.9.0.254
1 | GE | 10.10.0.1/16 | GE | 10.10.0.2/16 | 10.10.0.3-10.10.0.254
1 | GE | 10.11.0.1/16 | GE | 10.11.0.2/16 | 10.11.0.3-10.11.0.254
1 | GE | 10.12.0.1/16 | GE | 10.12.0.2/16 | 10.12.0.3-10.12.0.254
1 | 10GE | 10.101.0.1/16 | NA | NA | NA
2 | 10GE | 10.101.0.254/16 | NA | NA | NA
2 | GE | 10.21.0.1/16 | GE | 10.21.0.2/16 | 10.21.0.3-10.21.0.254
2 | GE | 10.22.0.1/16 | GE | 10.22.0.2/16 | 10.22.0.3-10.22.0.254
2 | GE | 10.23.0.1/16 | GE | 10.23.0.2/16 | 10.23.0.3-10.23.0.254
2 | GE | 10.24.0.1/16 | GE | 10.24.0.2/16 | 10.24.0.3-10.24.0.254
2 | GE | 10.25.0.1/16 | GE | 10.25.0.2/16 | 10.25.0.3-10.25.0.254
2 | GE | 10.26.0.1/16 | GE | 10.26.0.2/16 | 10.26.0.3-10.26.0.254
2 | GE | 10.27.0.1/16 | GE | 10.27.0.2/16 | 10.27.0.3-10.27.0.254
2 | GE | 10.28.0.1/16 | GE | 10.28.0.2/16 | 10.28.0.3-10.28.0.254
2 | GE | 10.29.0.1/16 | GE | 10.29.0.2/16 | 10.29.0.3-10.29.0.254
2 | GE | 10.30.0.1/16 | GE | 10.30.0.2/16 | 10.30.0.3-10.30.0.254
1 | GE | 10.31.0.1/16 | GE | 10.31.0.2/16 | 10.31.0.3-10.31.0.254
1 | GE | 10.32.0.1/16 | GE | 10.32.0.2/16 | 10.32.0.3-10.32.0.254

 

We will use custom-developed Spirent SAI scripts to generate and analyze traffic. Our test pattern can also be generated using Spirent’s SmartFlow application.

 

The test traffic shall consist of 128-byte Ethernet frames carrying TCP/IP headers using a bidirectional traffic orientation and a partially meshed multiple device distribution. See RFC 2285 and RFC 2889 for descriptions of traffic orientation and distribution.

 

Because traffic such as OSPF hellos, hot standby messages, and other management messages may interfere with test traffic, vendors should either disable these protocols or set their timers to values high enough that such traffic does not degrade data-plane performance.

 

The test traffic shall be divided into three classes, distinguished by different diff-serv code points (DSCPs) and TCP port numbers:

 

 

Priority | Traffic type | Desired DSCP | Destination TCP port
High | DLSw | 0x110000 | 2065
Medium | Web | 0x100000 | 80
Low | FTP | 0x000000 | 20

 

The test instrument will offer all packets with a diff-serv codepoint (DSCP) value set to 0x000000. The DUT must re-mark DSCPs as needed. We will spot-check the ability of the DUT to remark DSCPs by capturing and decoding traffic using Spirent’s AX-4000 analyzer.

 

The devices under test must reclassify traffic using DSCP value 0x110000 for high priority traffic and 0x100000 for medium-priority traffic. Low-priority traffic should retain its 0x000000 DSCP marking.
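The DSCP values in this section appear to be six-bit binary patterns (the rules below refer to "DSCP 110xxx"); read that way, they map to the decimal code points and DS-byte values shown in the sketch below, which is offered as a decoding aid for the AX-4000 spot checks rather than as instrument configuration.

# DSCP patterns from the traffic-class table, read as 6-bit binary values.
# The DS byte in the IP header carries the DSCP in its upper six bits.
CLASS_DSCPS = {
    "high (DLSw)":  "110000",
    "medium (Web)": "100000",
    "low (FTP)":    "000000",
}

for traffic_class, bits in CLASS_DSCPS.items():
    dscp = int(bits, 2)          # decimal code point
    ds_byte = dscp << 2          # value of the full DS/ToS byte
    print(f"{traffic_class:13s} DSCP {bits} = {dscp:2d} (DS byte 0x{ds_byte:02X})")

# high (DLSw)   DSCP 110000 = 48 (DS byte 0xC0)
# medium (Web)  DSCP 100000 = 32 (DS byte 0x80)
# low (FTP)     DSCP 000000 =  0 (DS byte 0x00)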

 

To create congestion, the aggregate offered load of all classes of test traffic will exceed the capacity of the backbone link (or aggregated link) by a ratio of 12:10.

 

For all tests, we offer traffic to the DUT in a ratio of 1:7:4 for the high, medium, and low traffic classes, respectively.

 

Devices under test must be configured to enforce the following rules:

 

  1. High-priority traffic (DSCP 110xxx) must be delivered with zero loss.

  2. Low-priority traffic must not consume more than 2 Gbit/s of bandwidth.

  3. When high-priority traffic is not present, medium-priority traffic shall be able to use the additional bandwidth made available by the lack of high-priority traffic.

  4. Vendors must not change device configuration between test cases 1 and 2.
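Taken together, the 128-byte frame size, the 12:10 oversubscription, the class ratios, and the rules above determine the ingress and egress rates expected in the Case 1 and Case 2 tables below. The sketch that follows reproduces that arithmetic (20 bytes of per-frame wire overhead assumed, as elsewhere in this document); it is a cross-check, not a test script.

# Expected QOS rates for 128-byte frames: each frame occupies
# 128 + 20 = 148 bytes (1,184 bits) on the wire.
BITS_PER_FRAME = (128 + 20) * 8

ingress_pps = 12 * 1_000_000_000 / BITS_PER_FRAME   # 12 GE ports at line rate: 10,135,135.14 pps
egress_pps = 10 * 1_000_000_000 / BITS_PER_FRAME    # 10GE backbone capacity:    8,445,945.95 pps
low_cap_pps = 2 * 1_000_000_000 / BITS_PER_FRAME    # rule 2, low <= 2 Gbit/s:   1,689,189.19 pps

# Case 1: high/medium/low offered 1:7:4. High passes untouched, low is
# clipped to its 2-Gbit/s cap, and medium fits within the remaining capacity.
high, medium, low = (ingress_pps * share / 12 for share in (1, 7, 4))
case1_egress = high + medium + min(low, low_cap_pps)
print(round(case1_egress))          # 8445946, matching the Case 1 egress total

# Case 2: medium/low offered 9:3 with no high-priority traffic. Low is
# still clipped; medium absorbs the rest of the backbone.
medium2, low2 = (ingress_pps * share / 12 for share in (9, 3))
case2_medium_egress = egress_pps - min(low2, low_cap_pps)
print(round(case2_medium_egress))   # 6756757, matching the Case 2 medium egress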

 

3.4.3        Procedure

Case 1: Offer all three classes of traffic to 12 interfaces on chassis A in a high/medium/low ratio of 1:7:4, at a rate to oversubscribe channel capacity by a factor of 12:10. Observe output rates on destination interfaces.

 

The traffic classes should be distributed as follows:

 

Case 1
Traffic class | Ingress: aggregate offered load (pps) | Ingress: aggregate offered load (bit/s) | Egress: expected aggregate forwarding rate (pps) | Egress: expected aggregate forwarding rate (bit/s)
High | 844,594.59 | 864,864,865 | 844,594.59 | 864,864,865
Medium | 5,912,162.16 | 6,054,054,054 | 5,912,162.16 | 6,054,054,054
Low | 3,378,378.38 | 3,459,459,459 | 1,689,189.19 | 1,729,729,730
Total | 10,135,135.14 | 10,378,378,378 | 8,445,945.95 | 8,648,648,649

 

Case 2: Offer medium- and low-priority classes of traffic to 12 interfaces on chassis A in a medium/low ratio of 9:3, at a rate to oversubscribe channel capacity by a factor of 12:10. Observe output rates on destination interfaces.

 

The traffic classes should be distributed as follows:

 

Case 2
Traffic class | Ingress: aggregate offered load (pps) | Ingress: aggregate offered load (bit/s) | Egress: expected aggregate forwarding rate (pps) | Egress: expected aggregate forwarding rate (bit/s)
High | N/A | N/A | N/A | N/A
Medium | 7,601,351.35 | 7,783,783,784 | 6,756,756.76 | 6,918,918,919
Low | 2,533,783.78 | 2,594,594,595 | 1,689,189.19 | 1,729,729,730
Total | 10,135,135.14 | 10,378,378,378 | 8,445,945.95 | 8,648,648,649

 

3.4.4        Metrics

Case 1: Forwarding rate of high-, medium-, and low-priority traffic

Case 2: Forwarding rate of medium- and low-priority traffic

 

4         Change History

Version 1.52

3 February 2003

Title bar: Changed publication date to 3 February 2003

 

Version 1.51

10 January 2003

Title bar: Changed scheduled publication date to February 2003

 

Version 1.5

29 October 2002

QOS enforcement: Corrected egress frame and bit rates for cases 1 and 2

 

Version 1.4

14 October 2002

Product requirements: Dropped requirement for multiple gigabit Ethernet line cards per chassis

Test hardware: Deleted reference to SmartFlow; specified that all tests except failover will use SAI scripts

10GE baseline test: Deleted reference to SmartFlow; added reference to SAI scripts

Bandwidth aggregation: Dropped requirement for multiple gigabit Ethernet line cards per chassis

 

Version 1.3

3 September 2002

Test hardware: Changed to 3311 cards for gigabit Ethernet

Test hardware: Added description of AX-4000

10GE baseline test: Deleted reference to SmartFlow version number (will use engineering build of forthcoming SmartFlow 2.0 for 10GE tests)

10GE baseline test and bandwidth aggregation tests: Revised number of emulated hosts to 510

 

Version 1.2

29 August 2002

Changed “latency” to “delay” throughout to avoid conflict with RFC 1242 definition

Changed “delay variation” to “jitter” throughout, following terms defined in dsterm Internet-Draft

Bandwidth aggregation and QOS tests: Changed “fully meshed” to “partially meshed multiple device” to conform with RFCs 2285, 2889

10GE baseline test: Revised number of emulated hosts back to 4,095

Bandwidth aggregation test: Revised emulated host IP addresses to start on .3

QOS test: Minor wording changes in procedure section

 

Version 1.1

26 August 2002

Test procedures intro: Revised testing time to 2 days

10GE baseline test: Revised number of emulated hosts to 510

 

Version 1.0

Date: 23 August 2002

Initial public release



[1] All frame length references in this document cover IP over Ethernet. We measure frame length from the first byte of the Ethernet MAC header to the last byte of the CRC. Unless otherwise specified, IP packets contain IP and UDP headers.

 
