Network World Lab Test: Enterprise Backbone Switch/Routers
Published 3 February 2003
Test Methodology
v. 1.52. Copyright © 2002-2003 by Network Test Inc. Vendors are encouraged to comment on this document and any other aspect of test methodology. Network Test reserves the right to change the parameters of this test at any time.
By David Newman
Please forward comments to dnewman at networktest.com
This document describes the methodology to be used in measuring performance of high-end enterprise switch/routers with 10-gigabit and gigabit Ethernet interfaces. The test has four main areas: baseline performance of 10-gigabit Ethernet interfaces, aggregation of gigabit Ethernet traffic onto a 10-Gbit/s backbone, failover, and quality of service (QOS) enforcement.
This document is organized as follows. This section introduces the test. Section 2 describes device requirements and test equipment. Section 3 describes test procedures. Section 4 describes the change history of this document.
This section discusses requirements of systems under test and introduces the test equipment to be used.
Participating vendors must supply the systems under test, including all chassis, line cards, and interfaces required for the configurations described in Section 3.
We strongly encourage vendors to supply 20 percent additional spare interfaces.
NOTE: The test instruments will use conventional full-size GBICs, and we will supply single-mode and multimode cabling with SC connectors. Vendors are welcome to supply other physical interface types such as SFP “mini-GBICs” but must also supply their own SC-to-SFP cabling if they choose to do so.
The principal test instrument for this project is the SmartBits traffic generator/analyzer manufactured by Spirent Communications Inc. (Calabasas, Calif.). Spirent’s SmartBits 6000B chassis will be equipped with the company’s XLW-3720A 10-gigabit Ethernet cards and SmartBits LAN-3311 TeraMetrics gigabit Ethernet cards.
The 10-gigabit Ethernet cards use XENPAK MSA modules with 1,310-nm optics.
The SmartBits hardware will run SAI scripts custom-developed for this project as well as Spirent’s SmartWindow application.
We use the Spirent AX-4000 analyzer to capture traffic at gigabit line rates.
For each routine in this section, this document describes:
· the test objective(s);
· the configuration to be used;
· the procedure to be used;
· the test metrics to be recorded.
A primary goal of this methodology is to complete all events in 2 working days per vendor.
Objective: Determine throughput, delay, jitter, in-sequence delivery, and frame loss at maximum offered load for 10-gigabit Ethernet interfaces forwarding unicast IPv4 traffic.
Figure 1 below shows the physical test bed topology for the 10-gigabit Ethernet baseline tests. The device under test (DUT) is one chassis equipped with at least two, and preferably four, 10-gigabit Ethernet interfaces. We attach SmartBits 10-Gbit/s Ethernet test interfaces to the DUT.
Figure 1: 10-Gbit/s baseline physical topology
Figure 2 below shows the logical test bed topology for the 10-gigabit Ethernet baseline tests. We emulate 510 IP hosts on each of four subnets, and offer traffic in a fully meshed pattern among all subnets.
Figure 2: 10-Gbit/s baseline logical topology
The following table lists the IP addressing to be used by the DUT and the test instrument. Test traffic will represent 510 unique host IP addresses per subnet. The host IP addresses on each subnet will begin at IP address 10.x.0.3/16.

| Chassis | DUT interface type | DUT port IP address/mask | Test instrument interface type | Test instrument port IP address/mask | Hosts emulated |
|---|---|---|---|---|---|
| 1 | 10GE | 10.101.0.1/16 | 10GE | 10.101.0.2/16 | 10.101.0.3-10.101.2.4 |
| 1 | 10GE | 10.102.0.1/16 | 10GE | 10.102.0.2/16 | 10.102.0.3-10.102.2.4 |
| 1 | 10GE | 10.103.0.1/16 | 10GE | 10.103.0.2/16 | 10.103.0.3-10.103.2.4 |
| 1 | 10GE | 10.104.0.1/16 | 10GE | 10.104.0.2/16 | 10.104.0.3-10.104.2.4 |
We will use SAI scripts to generate and analyze traffic.
The test traffic shall consist of 64-, 256-, and 1,518-byte Ethernet frames carrying UDP/IP headers[1] (offered in separate runs), using a bidirectional traffic orientation and a fully meshed distribution. See RFC 2285 for definitions of traffic orientation and distribution.
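For reference, the maximum theoretical frame rate for each of these frame sizes follows from standard Ethernet overhead: beyond the frame itself, every frame on the wire carries an 8-byte preamble and a 12-byte minimum inter-frame gap. A minimal sketch of the arithmetic (not part of the test procedure):

```python
# Maximum theoretical Ethernet frame rates for the frame sizes in this test.
# Beyond the frame itself, each frame on the wire carries 20 bytes of
# overhead: an 8-byte preamble/SFD and a 12-byte minimum inter-frame gap.
PREAMBLE_AND_GAP = 20  # bytes

def line_rate_pps(link_bps: float, frame_bytes: int) -> float:
    """Frames per second at 100 percent utilization of the link."""
    return link_bps / ((frame_bytes + PREAMBLE_AND_GAP) * 8)

for frame in (64, 256, 1518):
    print(f"{frame:>5}-byte frames: {line_rate_pps(10e9, frame):>13,.0f} pps on 10GE, "
          f"{line_rate_pps(1e9, frame):>12,.0f} pps on GE")
# 64-byte frames: 14,880,952 pps on 10GE; 1,518-byte frames: 812,744 pps on 10GE
```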
Because control traffic such as OSPF hellos, hot-standby messages, and other management messages may interfere with test traffic, vendors should either disable these protocols or set their timers to values high enough that such traffic does not degrade data-plane performance.
Using a binary search algorithm, we offer traffic to each interface in a fully meshed pattern to determine the throughput rate.
At the throughput rate, we also run tests to determine average delay, jitter, and frames received in sequence.
If throughput is less than line rate, we also offer traffic at line rate and measure frame loss at maximum offered load, as defined in RFCs 2285 and 2889.
We repeat this test with 64-, 256-, and 1,518-byte frames.
Test duration is 60 seconds per iteration.
The precision of delay and jitter measurements is +/- 100 nanoseconds.
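The binary search converges on the throughput as defined in RFC 2544: the highest offered load at which the DUT forwards all frames without loss. A minimal sketch of the search logic, where run_trial is a hypothetical stand-in for driving one 60-second trial on the test instrument:

```python
def find_throughput(run_trial, resolution=0.001):
    """Binary search for throughput (RFC 2544): the highest offered load,
    as a fraction of line rate, at which no frames are lost.

    run_trial(load) -> frames lost during one 60-second trial at that load.
    It is a hypothetical stand-in for driving the test instrument."""
    if run_trial(1.0) == 0:
        return 1.0           # line rate with zero loss: done
    low, high, best = 0.0, 1.0, 0.0
    while high - low > resolution:
        load = (low + high) / 2
        if run_trial(load) == 0:
            best, low = load, load   # no loss: search higher
        else:
            high = load              # loss: search lower
    return best
```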
Metrics recorded:
· Throughput
· Average delay
· Jitter
· Frames received in sequence
· Frame loss at maximum offered load
Objective: Determine throughput, delay, and jitter of traffic from 1-Gbit/s interfaces aggregated onto a 10-Gbit/s backbone.
Figure 3 below shows the physical test bed topology for the bandwidth aggregation tests. The system under test (SUT) comprises two chassis, each configured with ten gigabit Ethernet interfaces and one 10-gigabit Ethernet interface.
Figure 3: Bandwidth aggregation physical topology
We attach SmartBits 1-Gbit/s Ethernet test interfaces to the SUT.
The following table lists the IP addressing to be used by the SUT and the test instrument. Test traffic will represent 510 unique host IP addresses per subnet. The host IP addresses on each subnet will begin at IP address 10.x.0.3/16.

| Chassis | SUT interface type | SUT port IP address/mask | Test instrument interface type | Test instrument port IP address/mask | Hosts emulated |
|---|---|---|---|---|---|
| 1 | GE | 10.1.0.1/16 | GE | 10.1.0.2/16 | 10.1.0.3-10.1.2.4 |
| 1 | GE | 10.2.0.1/16 | GE | 10.2.0.2/16 | 10.2.0.3-10.2.2.4 |
| 1 | GE | 10.3.0.1/16 | GE | 10.3.0.2/16 | 10.3.0.3-10.3.2.4 |
| 1 | GE | 10.4.0.1/16 | GE | 10.4.0.2/16 | 10.4.0.3-10.4.2.4 |
| 1 | GE | 10.5.0.1/16 | GE | 10.5.0.2/16 | 10.5.0.3-10.5.2.4 |
| 1 | GE | 10.6.0.1/16 | GE | 10.6.0.2/16 | 10.6.0.3-10.6.2.4 |
| 1 | GE | 10.7.0.1/16 | GE | 10.7.0.2/16 | 10.7.0.3-10.7.2.4 |
| 1 | GE | 10.8.0.1/16 | GE | 10.8.0.2/16 | 10.8.0.3-10.8.2.4 |
| 1 | GE | 10.9.0.1/16 | GE | 10.9.0.2/16 | 10.9.0.3-10.9.2.4 |
| 1 | GE | 10.10.0.1/16 | GE | 10.10.0.2/16 | 10.10.0.3-10.10.2.4 |
| 1 | 10GE | 10.101.0.1/16 | NA | NA | NA |
| 2 | 10GE | 10.101.0.254/16 | NA | NA | NA |
| 2 | GE | 10.21.0.1/16 | GE | 10.21.0.2/16 | 10.21.0.3-10.21.2.4 |
| 2 | GE | 10.22.0.1/16 | GE | 10.22.0.2/16 | 10.22.0.3-10.22.2.4 |
| 2 | GE | 10.23.0.1/16 | GE | 10.23.0.2/16 | 10.23.0.3-10.23.2.4 |
| 2 | GE | 10.24.0.1/16 | GE | 10.24.0.2/16 | 10.24.0.3-10.24.2.4 |
| 2 | GE | 10.25.0.1/16 | GE | 10.25.0.2/16 | 10.25.0.3-10.25.2.4 |
| 2 | GE | 10.26.0.1/16 | GE | 10.26.0.2/16 | 10.26.0.3-10.26.2.4 |
| 2 | GE | 10.27.0.1/16 | GE | 10.27.0.2/16 | 10.27.0.3-10.27.2.4 |
| 2 | GE | 10.28.0.1/16 | GE | 10.28.0.2/16 | 10.28.0.3-10.28.2.4 |
| 2 | GE | 10.29.0.1/16 | GE | 10.29.0.2/16 | 10.29.0.3-10.29.2.4 |
| 2 | GE | 10.30.0.1/16 | GE | 10.30.0.2/16 | 10.30.0.3-10.30.2.4 |
We will use custom-developed Spirent SAI scripts to generate and analyze traffic. Our test pattern can also be generated using Spirent’s SmartFlow application.
The test traffic shall consist of 64-, 256-, and 1,518-byte Ethernet frames carrying UDP/IP headers (offered in separate runs), using a bidirectional traffic orientation and a partially meshed multiple-device distribution. See RFCs 2285 and 2889 for descriptions of traffic orientation and distribution.
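In the partially meshed multiple-device pattern, every gigabit Ethernet port on chassis 1 exchanges traffic with every gigabit Ethernet port on chassis 2 across the backbone, and no traffic stays local to either chassis. A short sketch that enumerates the flow pairs, using the access subnets from the addressing table above:

```python
# Enumerate the flow pairs of the partially meshed multiple-device
# distribution (RFC 2285): every chassis-1 access subnet exchanges traffic
# with every chassis-2 access subnet, and no flows stay local to a chassis.
chassis1_subnets = [f"10.{n}.0.0/16" for n in range(1, 11)]   # 10.1 - 10.10
chassis2_subnets = [f"10.{n}.0.0/16" for n in range(21, 31)]  # 10.21 - 10.30

flows = [(src, dst) for src in chassis1_subnets for dst in chassis2_subnets]
flows += [(dst, src) for (src, dst) in flows]  # bidirectional orientation

print(len(flows))  # 200 directed subnet pairs, all crossing the backbone
```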
Because control traffic such as OSPF hellos, hot-standby messages, and other management messages may interfere with test traffic, vendors should either disable these protocols or set their timers to values high enough that such traffic does not degrade data-plane performance.
Using a binary search algorithm, we offer traffic in a partially meshed pattern to each gigabit Ethernet interface to determine the throughput rate.
At the throughput rate, we also run tests to determine average delay, jitter, and frames received in sequence.
If throughput is less than line rate, we also offer traffic at line rate and measure frame loss at maximum offered load, as defined in RFCs 2285 and 2889.
We repeat this test with 64-, 256-, and 1,518-byte frames.
Test duration is 60 seconds per iteration.
The precision of delay and jitter measurements is +/- 100 nanoseconds.
Metrics recorded:
· Throughput
· Average delay
· Jitter
· Frames received in sequence
· Frame loss at maximum offered load
Objective: Determine failover time upon loss of a primary link.
Configuration: two chassis with one primary 10-gigabit Ethernet link and one secondary 10-gigabit Ethernet link configured between them, and at least one gigabit Ethernet interface per chassis.
For devices that support failover across aggregated links, we also test two chassis with one primary aggregated link and one secondary aggregated link.
Vendors may choose which failover mechanism to use. This may be OSPF equal cost multipath (ECMP), rapid spanning tree, or some proprietary mechanism. The method chosen must be declared at test time.
We use Spirent SmartWindow v. 7.40 to generate and analyze traffic for this test.
We offer 64-byte frames to one gigabit Ethernet interface at a rate of 1,000,000 pps. Approximately 10 seconds into the test, we physically remove the primary backbone link. We note the time for traffic to be rerouted over the secondary link.
We repeat the procedure for devices supporting failover across aggregated links.
The test duration is 60 seconds.
Metrics recorded:
· Frame loss
· Switchover time (derived from frame loss)
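Because frames are offered at a constant, known rate, switchover time falls directly out of the loss count: at 1,000,000 pps, each lost frame represents one microsecond of outage. A minimal sketch of the conversion:

```python
OFFERED_RATE_PPS = 1_000_000  # constant offered load during the failover test

def switchover_time_ms(frames_lost: int) -> float:
    """Failover time inferred from frame loss at a known constant rate:
    at 1,000,000 pps, each lost frame corresponds to one microsecond."""
    return frames_lost / OFFERED_RATE_PPS * 1000.0

# Example: 250,000 lost frames would imply a 250-ms switchover.
print(switchover_time_ms(250_000))  # 250.0
```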
Objectives:
Case 1: Determine whether devices under test allocate fixed bandwidth to given traffic classes during periods of congestion.
Case 2: Determine whether devices under test allocate any available bandwidth to a given traffic class during periods of congestion.
Note: The QOS tests replicate the methodology previously used for the Network World article “The Trouble With Trunking.” A primary goal of these tests is to compare results obtained with a 10-gigabit backbone vs. those obtained with multiple 1-Gbit/s circuits using link aggregation.
Figure 4 below shows the physical test bed topology for the QOS tests. The system under test (SUT) comprises two chassis, each configured with twelve gigabit Ethernet interfaces and one 10-gigabit Ethernet interface. This configuration presents the SUT with 12:10 oversubscription of the backbone.
Figure 4: QOS test bed physical topology
The following table lists the IP addressing to be used by the SUT and the test instrument. Test traffic will represent 252 unique host IP addresses per subnet. The host IP addresses on each subnet will begin at IP address 10.x.0.3/16.

| Chassis | SUT interface type | SUT port IP address/mask | Test instrument interface type | Test instrument port IP address/mask | Hosts emulated |
|---|---|---|---|---|---|
| 1 | GE | 10.1.0.1/16 | GE | 10.1.0.2/16 | 10.1.0.3-10.1.0.254 |
| 1 | GE | 10.2.0.1/16 | GE | 10.2.0.2/16 | 10.2.0.3-10.2.0.254 |
| 1 | GE | 10.3.0.1/16 | GE | 10.3.0.2/16 | 10.3.0.3-10.3.0.254 |
| 1 | GE | 10.4.0.1/16 | GE | 10.4.0.2/16 | 10.4.0.3-10.4.0.254 |
| 1 | GE | 10.5.0.1/16 | GE | 10.5.0.2/16 | 10.5.0.3-10.5.0.254 |
| 1 | GE | 10.6.0.1/16 | GE | 10.6.0.2/16 | 10.6.0.3-10.6.0.254 |
| 1 | GE | 10.7.0.1/16 | GE | 10.7.0.2/16 | 10.7.0.3-10.7.0.254 |
| 1 | GE | 10.8.0.1/16 | GE | 10.8.0.2/16 | 10.8.0.3-10.8.0.254 |
| 1 | GE | 10.9.0.1/16 | GE | 10.9.0.2/16 | 10.9.0.3-10.9.0.254 |
| 1 | GE | 10.10.0.1/16 | GE | 10.10.0.2/16 | 10.10.0.3-10.10.0.254 |
| 1 | GE | 10.11.0.1/16 | GE | 10.11.0.2/16 | 10.11.0.3-10.11.0.254 |
| 1 | GE | 10.12.0.1/16 | GE | 10.12.0.2/16 | 10.12.0.3-10.12.0.254 |
| 1 | 10GE | 10.101.0.1/16 | NA | NA | NA |
| 2 | 10GE | 10.101.0.254/16 | NA | NA | NA |
| 2 | GE | 10.21.0.1/16 | GE | 10.21.0.2/16 | 10.21.0.3-10.21.0.254 |
| 2 | GE | 10.22.0.1/16 | GE | 10.22.0.2/16 | 10.22.0.3-10.22.0.254 |
| 2 | GE | 10.23.0.1/16 | GE | 10.23.0.2/16 | 10.23.0.3-10.23.0.254 |
| 2 | GE | 10.24.0.1/16 | GE | 10.24.0.2/16 | 10.24.0.3-10.24.0.254 |
| 2 | GE | 10.25.0.1/16 | GE | 10.25.0.2/16 | 10.25.0.3-10.25.0.254 |
| 2 | GE | 10.26.0.1/16 | GE | 10.26.0.2/16 | 10.26.0.3-10.26.0.254 |
| 2 | GE | 10.27.0.1/16 | GE | 10.27.0.2/16 | 10.27.0.3-10.27.0.254 |
| 2 | GE | 10.28.0.1/16 | GE | 10.28.0.2/16 | 10.28.0.3-10.28.0.254 |
| 2 | GE | 10.29.0.1/16 | GE | 10.29.0.2/16 | 10.29.0.3-10.29.0.254 |
| 2 | GE | 10.30.0.1/16 | GE | 10.30.0.2/16 | 10.30.0.3-10.30.0.254 |
| 2 | GE | 10.31.0.1/16 | GE | 10.31.0.2/16 | 10.31.0.3-10.31.0.254 |
| 2 | GE | 10.32.0.1/16 | GE | 10.32.0.2/16 | 10.32.0.3-10.32.0.254 |
We will use custom-developed Spirent SAI scripts to generate and analyze traffic. Our test pattern can also be generated using Spirent’s SmartFlow application.
The test traffic shall consist of 128-byte Ethernet frames carrying TCP/IP headers, using a bidirectional traffic orientation and a partially meshed multiple-device distribution. See RFCs 2285 and 2889 for descriptions of traffic orientation and distribution.
Because control traffic such as OSPF hellos, hot-standby messages, and other management messages may interfere with test traffic, vendors should either disable these protocols or set their timers to values high enough that such traffic does not degrade data-plane performance.
The test traffic shall be divided into three classes, distinguished by different diff-serv code points (DSCPs) and TCP port numbers:

| Priority | Traffic type | DSCP (binary) | Destination TCP port |
|---|---|---|---|
| High | DLSw | 110000 | 2065 |
| Medium | Web | 100000 | 80 |
| Low | FTP | 000000 | 20 |
The test instrument will offer all packets with a DSCP value of 000000 (binary). The DUT must re-mark DSCPs as needed; we will spot-check the DUT's remarking by capturing and decoding traffic with Spirent's AX-4000 analyzer.
The devices under test must reclassify traffic to DSCP 110000 for high-priority traffic and 100000 for medium-priority traffic. Low-priority traffic should retain its 000000 DSCP marking.
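The DSCP occupies the upper six bits of the IPv4 type-of-service byte, so decoding a captured packet's marking requires only a shift. A minimal sketch of the spot-check logic on a raw IPv4 header (retrieving the capture from the analyzer is left aside):

```python
def dscp_of(ipv4_header: bytes) -> str:
    """Return a packet's DSCP as a six-bit binary string. The DSCP is the
    top six bits of the second IPv4 header byte (the old TOS field)."""
    return format(ipv4_header[1] >> 2, "06b")

# A TOS byte of 0xC0 carries DSCP 110000, the high-priority marking above.
header = bytes([0x45, 0xC0]) + bytes(18)  # minimal 20-byte IPv4 header stub
print(dscp_of(header))  # '110000'
```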
To create congestion, the aggregate offered load of all classes of test traffic will exceed the capacity of the backbone link (or aggregated link) by a ratio of 12:10.
For all tests, we offer traffic to the DUT in a ratio of 1:7:4 for the high, medium, and low traffic classes, respectively.
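The per-class offered loads in the tables below follow from the 1:7:4 ratio and the 128-byte line rate of twelve gigabit Ethernet links (844,594.59 frames per second per link). A worked check of that arithmetic:

```python
# Reproduce the aggregate ingress rates for the QOS tests: twelve gigabit
# Ethernet links at line rate with 128-byte frames, split 1:7:4.
FRAME = 128                            # bytes, excluding preamble and gap
PPS_PER_GE = 1e9 / ((FRAME + 20) * 8)  # 844,594.59 frames/s per GE link
TOTAL_PPS = 12 * PPS_PER_GE            # 10,135,135.14 frames/s offered

ratio = {"high": 1, "medium": 7, "low": 4}
for cls, share in ratio.items():
    pps = TOTAL_PPS * share / sum(ratio.values())
    print(f"{cls:>6}: {pps:,.2f} pps, {pps * FRAME * 8:,.0f} bit/s")
# high: 844,594.59 pps; medium: 5,912,162.16 pps; low: 3,378,378.38 pps
```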
Devices under test must be configured to enforce the following rules, which correspond to the expected egress rates in the tables below: allocate 10 percent of backbone capacity to high-priority traffic, 70 percent to medium-priority traffic, and 20 percent to low-priority traffic. In case 2, capacity left unused by the high-priority class must be made available to the remaining classes.
Case 1: Offer all three classes of traffic to 12 interfaces on chassis A in a high/medium/low ratio of 1:7:4, at a rate that oversubscribes backbone capacity by a ratio of 12:10. Observe output rates on destination interfaces.
The traffic classes should be distributed as follows:
Case 1:

| Traffic class | Ingress: aggregate offered load (pps) | Ingress: aggregate offered load (bit/s) | Egress: expected aggregate forwarding rate (pps) | Egress: expected aggregate forwarding rate (bit/s) |
|---|---|---|---|---|
| High | 844,594.59 | 864,864,865 | 844,594.59 | 864,864,865 |
| Medium | 5,912,162.16 | 6,054,054,054 | 5,912,162.16 | 6,054,054,054 |
| Low | 3,378,378.38 | 3,459,459,459 | 1,689,189.19 | 1,729,729,730 |
| Total | 10,135,135.14 | 10,378,378,378 | 8,445,945.95 | 8,648,648,649 |
Case 2: Offer medium- and low-priority classes of traffic to 12 interfaces on chassis A in a medium/low ratio of 9:3, at a rate that oversubscribes backbone capacity by a ratio of 12:10. Observe output rates on destination interfaces.
The traffic classes should be distributed as follows:
Case 2:

| Traffic class | Ingress: aggregate offered load (pps) | Ingress: aggregate offered load (bit/s) | Egress: expected aggregate forwarding rate (pps) | Egress: expected aggregate forwarding rate (bit/s) |
|---|---|---|---|---|
| High | N/A | N/A | N/A | N/A |
| Medium | 7,601,351.35 | 7,783,783,784 | 6,756,756.76 | 6,918,918,919 |
| Low | 2,533,783.78 | 2,594,594,595 | 1,689,189.19 | 1,729,729,730 |
| Total | 10,135,135.14 | 10,378,378,378 | 8,445,945.95 | 8,648,648,649 |
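The expected egress rates in both tables are consistent with a 10/70/20 percent split of the backbone's 128-byte capacity among the high, medium, and low classes, with capacity left idle by an absent class cascading to the next class; that split is inferred from the tables rather than stated explicitly. A sketch under that assumption:

```python
# Expected egress rates assuming the 10-Gbit/s backbone's 128-byte capacity
# (8,445,945.95 pps) is split 10/70/20 among high/medium/low, with any share
# left idle by an absent class cascading to the next class down.
BACKBONE_PPS = 10e9 / ((128 + 20) * 8)
SHARES = {"high": 0.10, "medium": 0.70, "low": 0.20}  # inferred, not vendor-stated

def expected_egress(offered_pps: dict) -> dict:
    egress, spare = {}, 0.0
    for cls in ("high", "medium", "low"):
        allowed = BACKBONE_PPS * SHARES[cls] + spare
        egress[cls] = min(offered_pps.get(cls, 0.0), allowed)
        spare = allowed - egress[cls]  # unused share falls through
    return egress

# Case 2: no high-priority traffic; medium absorbs the idle 10 percent.
result = expected_egress({"medium": 7_601_351.35, "low": 2_533_783.78})
print({cls: round(pps, 2) for cls, pps in result.items()})
# {'high': 0.0, 'medium': 6756756.76, 'low': 1689189.19}
```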
Metrics recorded:
· Case 1: Forwarding rate of high-, medium-, and low-priority traffic
· Case 2: Forwarding rate of medium- and low-priority traffic
Version 1.52
3 February 2003
Title bar: Changed publication date to 3 February 2003
Version 1.51
10 January 2003
Title bar: Changed scheduled publication date to February 2003
Version 1.5
29 October 2002
QOS enforcement: Corrected egress frame and bit rates for cases 1 and 2
Version 1.4
14 October 2002
Product requirements: Dropped requirement for multiple gigabit Ethernet line cards per chassis
Test hardware: Deleted reference to SmartFlow; specified that all tests except failover will use SAI scripts
10GE baseline test: Deleted reference to SmartFlow; added reference to SAI scripts
Bandwidth aggregation: Dropped requirement for multiple gigabit Ethernet line cards per chassis
Version 1.3
3 September 2002
Test hardware: Changed to 3311 cards for gigabit Ethernet
Test hardware: Added description of AX-4000
10GE baseline test: Deleted reference to SmartFlow version number (will use engineering build of forthcoming SmartFlow 2.0 for 10GE tests)
10GE baseline test and bandwidth aggregation tests: Revised number of emulated hosts to 510
Version 1.2
29 August 2002
Changed “latency” to “delay” throughout to avoid conflict with RFC 1242 definition
Changed “delay variation” to “jitter” throughout, following terms defined in dsterm Internet-Draft
Bandwidth aggregation and QOS tests: Changed “fully meshed” to “partially meshed multiple device” to conform with RFCs 2285, 2889
10GE baseline test: Revised number of emulated hosts back to 4,095
Bandwidth aggregation test: Revised emulated host IP addresses to start on .3
QOS test: Minor wording changes in procedure section
Version 1.1
26 August 2002
Test procedures intro: Revised testing time to 2 days
10GE baseline test: Revised number of emulated hosts to 510
Version 1.0
23 August 2002
Initial public release
[1] All frame length references in this document cover IP over Ethernet. We measure frame length from the first byte of the Ethernet MAC header to the last byte of the CRC. Unless otherwise specified, IP packets contain IP and UDP headers.