Network World Clear Choice Test: Cisco Nexus 7000 Series Switch
Scheduled for Publication in Late Summer 2008
Draft Test Methodology
v. 2008072201 Copyright © 1999-2008 by Network Test Inc. Vendors are encouraged to comment on this document and any other aspect of test methodology. Network Test reserves the right to change the parameters of this test at any time.
This document's URL is http://networktest.com/dc308/dc308meth.html. A PDF version of this document is available at http://networktest.com/dc308/dc308meth.pdf.
By David Newman
Please forward comments to dnewman at networktest.com
This document describes the methodology for testing the Cisco Nexus 7000 Series switch equipped with 256 10-gigabit Ethernet interfaces. Results from these tests will be published in an exclusive Network World article.
Tests cover two main areas: performance (layer-2 and layer-3 forwarding) and high availability/resiliency.
The article describing the Nexus 7000 will be fairly short, approximately 1,200-1,500 words, and will also include some coverage of the switch's features. Accordingly, a key goal of this methodology is to keep testing simple. The tests described here should be completable within two working days.
This document is organized as follows. This section introduces the test. Section 2 presents guidelines for scoring and other testing ground rules. Section 3 describes requirements for the device under test and test equipment. Section 4 describes test procedures. Section 5 describes the change history of this document.
Reviews published in Network World present test results in three formats: in charts or tabular form; in discussion in the article text; and in a "NetResults" scorecard. This section discusses the weightings used to produce that scorecard.
Scorecards have a maximum rating of 5.0, where 5 = excellent, 4 = very good, 3 = average, 2 = below average and 1 = consistently subpar.
The following table lists the weightings we use to score test events.
10G Ethernet L2 performance                  | 15%
10G Ethernet L3 performance (IPv4 unicast)   | 15%
10G Ethernet L3 performance (IPv4 multicast) | 15%
High availability and resiliency             | 25%
Device features                              | 20%
Device management and usability              | 10%
TOTAL                                        | 100%
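The scorecard arithmetic is a straightforward weighted average of per-category ratings on the 1-to-5 scale. The sketch below illustrates it; the weights come from the table above, while the ratings are made up for the example.

```python
# Hypothetical illustration of the NetResults scorecard arithmetic.
# Weights come from the table above; the ratings below are invented.
WEIGHTS = {
    "10G Ethernet L2 performance": 0.15,
    "10G Ethernet L3 performance (IPv4 unicast)": 0.15,
    "10G Ethernet L3 performance (IPv4 multicast)": 0.15,
    "High availability and resiliency": 0.25,
    "Device features": 0.20,
    "Device management and usability": 0.10,
}

def netresults_score(ratings):
    """Weighted average of per-category ratings (each 1.0-5.0)."""
    return sum(WEIGHTS[cat] * ratings[cat] for cat in WEIGHTS)

ratings = {cat: 4.0 for cat in WEIGHTS}            # "very good" across the board
ratings["High availability and resiliency"] = 5.0  # "excellent" HA result
print(round(netresults_score(ratings), 2))         # 4.25
```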
For tests of high-end backbone equipment, Network World does not factor price into its scorecard ratings. We may discuss pricing in the article text and/or in the pros-and-cons section of the summary.
Vendors may supply unreleased versions of hardware and software for testing, provided the device under test will be available to Network World readers within 60 days of the test's scheduled publication date.
We make no distinction between released products and alpha-, beta-, or gold-code products. If an unreleased version of a product catches fire (an extreme case, perhaps) or produces otherwise undesirable results (a more common outcome), we will note such results in the review.
We ask vendors to supply hardware and software version information along with all configurations for all tests. This ensures repeatability and helps answer questions about what we tested.
We strongly encourage vendors to prepare for all tests by running them in their labs before coming into ours. We will supply test scripts and consult with vendors on any and all aspects of test methodology. After testing we also share complete results and solicit vendor input on the results. The last thing we want is any result that surprises or embarrasses any vendor.
That said, Network World's policy is to publish all results from all tests. This may include results a vendor may perceive as negative. Network World will not honor requests to publish only the "good" results from a set of tests.
Network World maintains a standing open invitation to run the tests described here. Vendors are welcome to schedule retests to showcase new features or to correct previous problematic results.
This section discusses requirements of systems under test and introduces the test equipment to be used.
For this project, the Cisco Nexus 7000 Series switch will be equipped as follows:
We strongly encourage vendors to supply 10 percent additional spare interfaces in the event of card and/or transceiver failure.
To demonstrate the ability of the DUT to perform while also handling common security and management tasks, the following device parameters will be in effect during all tests:
• Apply a 7,000-line security access control list (ACL) on each of eight line cards
• Apply a 500-line QoS ACL on each of eight line cards
• Enable NetFlow reporting (up to a maximum of 512,000 flows)
The principal test instrument for this project is the TestCenter traffic generator/analyzer manufactured by Spirent Communications Inc.
Unless Cisco requests otherwise, the 10-gigabit Ethernet cards will use XFP MSA modules with 10GBase-SR 850-nm optics.
Unless otherwise noted, the device under test and test instruments will use addresses as given in the table below.
Note that IP routes will be advertised only in L3 tests, not in L2 tests. Also note that multicast addressing will not be assigned, nor groups joined, in unicast-only tests.
Number | DUT port | DUT IP address (all ports /16) | Spirent TestCenter address (all ports /16) | Spirent TestCenter port | IPv4 routes advertised (all /24) | IPv4 multicast sources | IPv4 multicast groups
1 | e1/1 | 10.0.0.1 | 10.0.0.2 | Port1 | 11.0.0.0-11.0.119.0 | 10.0.0.2-10.0.0.51 | 225.0.0.1-225.0.0.200
2 | e1/2 | 10.1.0.1 | 10.1.0.2 | Port2 | 11.1.0.0-11.1.119.0 | | 225.0.0.1-225.0.0.200
3 | e1/3 | 10.2.0.1 | 10.2.0.2 | Port3 | 11.2.0.0-11.2.119.0 | | 225.0.0.1-225.0.0.200
.. | .. | .. | .. | .. | .. | .. | ..
32 | e1/32 | 10.31.0.1 | 10.31.0.2 | Port32 | 11.31.0.0-11.31.119.0 | | 225.0.0.1-225.0.0.200
33 | e2/1 | 10.32.0.1 | 10.32.0.2 | Port33 | 11.32.0.0-11.32.119.0 | | 225.0.0.1-225.0.0.200
.. | .. | .. | .. | .. | .. | .. | ..
64 | e2/32 | 10.63.0.1 | 10.63.0.2 | Port64 | 11.63.0.0-11.63.119.0 | | 225.0.0.1-225.0.0.200
65 | e3/1 | 10.64.0.1 | 10.64.0.2 | Port65 | 11.64.0.0-11.64.119.0 | | 225.0.0.1-225.0.0.200
.. | .. | .. | .. | .. | .. | .. | ..
256 | e10/32 | 10.255.0.1 | 10.255.0.2 | Port256 | 11.255.0.0-11.255.119.0 | | 225.0.0.1-225.0.0.200
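The address plan above is algorithmic, so it can be regenerated rather than typed. The sketch below does so; it assumes the eight 32-port line cards occupy chassis slots 1-4 and 7-10 (with the two supervisors in slots 5 and 6), which is consistent with the table's e3/1 and e10/32 entries.

```python
# Sketch that regenerates the port-to-address mapping in the table above.
# Assumption: line cards sit in slots 1-4 and 7-10; slots 5-6 hold the
# supervisor modules, so port 256 lands on e10/32 rather than e8/32.
LINE_CARD_SLOTS = [1, 2, 3, 4, 7, 8, 9, 10]

def address_plan(num_ports=256, ports_per_card=32):
    rows = []
    for n in range(1, num_ports + 1):
        card, port = divmod(n - 1, ports_per_card)
        rows.append({
            "number": n,
            "dut_port": f"e{LINE_CARD_SLOTS[card]}/{port + 1}",
            "dut_ip": f"10.{n - 1}.0.1",     # /16 on the DUT side
            "tester_ip": f"10.{n - 1}.0.2",  # /16 on the TestCenter side
            "tester_port": f"Port{n}",
            "routes": f"11.{n - 1}.0.0-11.{n - 1}.119.0",  # /24s advertised
        })
    return rows

plan = address_plan()
print(plan[0]["dut_port"], plan[0]["dut_ip"])      # e1/1 10.0.0.1
print(plan[255]["dut_port"], plan[255]["dut_ip"])  # e10/32 10.255.0.1
```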
For each routine in this section, this document describes:
• the test objective(s);
• the configuration to be used;
• the procedure to be used;
• the test metrics to be recorded;
• reporting requirements.
Determine throughput, average forwarding delay, maximum forwarding delay and sequencing for 10-gigabit Ethernet interfaces forwarding layer-2 Ethernet traffic
This device under test (DUT) is one chassis equipped with at least 256 10-gigabit Ethernet interfaces. We attach TestCenter 10-Gbit/s Ethernet test interfaces to 256 10G interfaces on the DUT.
The DUT must be configured with all 256 test interfaces in a single VLAN. The VLAN ID and the VLAN's IP address, if any, are unimportant for purposes of this test. The IP addresses given in section 3.3 of this document should be used, and Spirent TestCenter must be configured to disable the use of layer-3 ARP exchanges. Test traffic will represent 20 unique host IP addresses per port, or 5,120 hosts in all.
The DUT should be configured so that all data-plane management traffic is disabled. This means disabling spanning tree, CDP, dynamic routing and any other protocols that may contend for bandwidth during the test.
To speed testing, MAC aging timers should be disabled or set to extremely high values (e.g., at least 24 hours greater than the test duration).
The test traffic shall consist of 64-, 128-, 256-, 1,518- and 9,216-byte frames carrying IP headers[1] (offered in separate runs) using a bidirectional traffic orientation and a fully meshed distribution. See RFC 2285 for definitions of traffic orientation and distribution.
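The "theoretical maximum throughput" metric recorded below is simply the line-rate frame rate of 10-Gbit/s Ethernet at each frame size, since every frame also occupies 8 bytes of preamble and a 12-byte inter-frame gap on the wire. A quick sketch of that arithmetic:

```python
# Line-rate frame rate for 10-Gbit/s Ethernet: each frame occupies its
# own length plus 8 bytes of preamble and a 12-byte inter-frame gap.
LINE_RATE_BPS = 10_000_000_000
PREAMBLE_AND_GAP = 8 + 12  # bytes of per-frame overhead on the wire

def max_frames_per_second(frame_size):
    return LINE_RATE_BPS / ((frame_size + PREAMBLE_AND_GAP) * 8)

for size in (64, 128, 256, 1518, 9216):
    per_port = max_frames_per_second(size)
    print(f"{size:>5}-byte frames: {per_port:>13,.0f} fps/port, "
          f"{per_port * 256:>16,.0f} fps aggregate")
```

At 64 bytes this works out to the familiar 14,880,952 frames per second per 10G port.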
Using a binary search algorithm, we offer traffic to each interface in a fully meshed pattern to determine the throughput rate and frames received in sequence.
The test instrument also measures average and maximum forwarding delay at 10 percent of line rate, as well as counting frames in and out of sequence.
We repeat all tests with 64-, 128-, 256-, 1,518- and 9,216-byte frames.
Test duration is 300 seconds per iteration.
The precision of delay measurements is +/- 100 nanoseconds.
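The binary search Spirent TestCenter automates can be sketched in a few lines: halve the interval between the highest lossless rate and the lowest lossy rate until it converges. In the sketch below, `trial` is a hypothetical stand-in for one 300-second traffic run that returns the count of lost frames.

```python
# Minimal sketch of a binary search for throughput (in the RFC 2544
# sense: the highest offered load with zero frame loss). `trial` is a
# stand-in for one full traffic run at a given percent of line rate.
def throughput_search(trial, resolution=0.1):
    """Return throughput as a percent of line rate (0-100)."""
    lo, hi = 0.0, 100.0
    best = 0.0
    while hi - lo > resolution:
        rate = (lo + hi) / 2
        if trial(rate) == 0:   # no frame loss at this offered load
            best = lo = rate   # raise the lower bound
        else:
            hi = rate          # lower the upper bound
    return best

# Example against a hypothetical device that drops frames above 87.5%:
print(throughput_search(lambda rate: 0 if rate <= 87.5 else 1))  # 87.5
```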
Theoretical maximum throughput (64, 128, 256, 1,518, 9,216-byte frames)
Nexus throughput (64, 128, 256, 1,518, 9,216-byte frames)
Average forwarding delay at 10 percent of line rate (64, 128, 256, 1,518, 9,216-byte frames)
Maximum forwarding delay at 10 percent of line rate (64, 128, 256, 1,518, 9,216-byte frames)
Frames received out of sequence (all tests)
DUT configuration
DUT hardware and software version
TestCenter configuration
Test results
Determine throughput, average forwarding delay, maximum forwarding delay and sequencing for 10-gigabit Ethernet interfaces forwarding IPv4 packets across subnet boundaries
This device under test (DUT) is one chassis equipped with at least 256 10-gigabit Ethernet interfaces. We attach TestCenter 10-Gbit/s Ethernet test interfaces to 256 10G interfaces on the DUT.
The DUT must be configured with each of 256 test interfaces in a unique IPv4 subnet. The IP address assignments are given in the table below.
This test uses OSPFv2 and one adjacency per 10G Ethernet interface test port. All interfaces on both the Nexus and the routers emulated by Spirent TestCenter will be in OSPF area 1 except for the loopback interface on the Nexus, which will be in OSPF area 0. We will configure all 256 of the Nexus 7000 10G Ethernet ports to bring up adjacencies with Spirent TestCenter. Then TestCenter will advertise 51,200 unique type 5 (external) LSAs (200 routes per port) and offer traffic destined to all routes.
With the exception of OSPF management traffic, the DUT should be configured so that all other management traffic is disabled. This includes spanning tree, CDP and any other protocols that may contend for bandwidth during the test.
To speed testing, MAC and ARP aging timers should be disabled or set to extremely high values (e.g., at least 24 hours greater than the test duration).
The DUT should use IP addressing as given in the table in section 3.3 of this document.
The test traffic shall consist of 64-, 128-, 256-, 1,518- and 9,216-byte frames carrying IP headers (offered in separate runs) using a bidirectional traffic orientation and a fully meshed distribution. See RFC 2285 for definitions of traffic orientation and distribution.
For purposes of this test, "fully meshed" means one source on each test port will offer traffic to all routes on all other test ports. For example, the Spirent TestCenter port attached to DUT port e1/1 will offer traffic from 10.0.0.3 to all routes advertised on ports e1/2 through e10/32.
As described in the test bed configuration section, we bring up OSPFv2 adjacencies with all 256 10G Ethernet ports on Nexus. Spirent TestCenter then offers 200 unique type-5 LSAs to each port, or 51,200 routes total. After the routes have been installed in the Nexus database, TestCenter offers traffic destined to all routes.
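The route and flow counts implied by this setup follow from simple arithmetic, sketched below with the figures from the procedure above.

```python
# Route and flow arithmetic for the IPv4 unicast test (figures taken
# from the procedure above).
ports = 256
routes_per_port = 200  # type-5 (external) LSAs advertised per adjacency

total_routes = ports * routes_per_port
print(total_routes)    # 51,200 routes installed in the Nexus database

# Fully meshed: the single source on each port sends to every route
# advertised on every *other* port.
dest_routes_per_source = (ports - 1) * routes_per_port
total_flows = ports * dest_routes_per_source
print(dest_routes_per_source, total_flows)
```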
Using a binary search algorithm, we offer traffic to each interface in a fully meshed pattern to determine the throughput rate and frames received in sequence.
The test instrument also measures average and maximum forwarding delay at 10 percent of line rate, as well as counting frames in and out of sequence.
We repeat all tests with 64-, 128-, 256-, 1,518- and 9,216-byte frames.
Test duration is 300 seconds per iteration.
The precision of delay measurements is +/- 100 nanoseconds.
Theoretical maximum throughput (64, 128, 256, 1,518, 9,216-byte frames)
Nexus throughput (64, 128, 256, 1,518, 9,216-byte frames)
Average forwarding delay at 10 percent of line rate (64, 128, 256, 1,518, 9,216-byte frames)
Maximum forwarding delay at 10 percent of line rate (64, 128, 256, 1,518, 9,216-byte frames)
Frames received out of sequence (all tests)
DUT configuration
DUT hardware and software version
TestCenter configuration
Test results
Determine throughput, average forwarding delay, maximum forwarding delay and sequencing for 10-gigabit Ethernet interfaces when forwarding IP multicast traffic
This device under test (DUT) is one chassis equipped with at least 256 10-gigabit Ethernet interfaces. We attach TestCenter 10-Gbit/s Ethernet test interfaces to 256 10G interfaces on the DUT.
The DUT must be configured with each of 256 test interfaces in a unique IPv4 subnet.
IP address assignments are given in section 3.3 of this document. Note that the first Spirent TestCenter port (attached to interface e1/1 of the DUT) will offer traffic from 50 multicast sources. Emulated hosts on all other ports will join 200 multicast groups.
PIM-SM multicast routing should be enabled for this test. All other management protocols should be disabled. This includes spanning tree, CDP and any other protocols that may contend for bandwidth during the test.
To speed testing, MAC and ARP aging timers should be disabled or set to extremely high values (e.g., at least 24 hours greater than the test duration).
The test traffic shall consist of 64-, 128-, 256-, 1,518- and 9,216-byte frames carrying IP headers (offered in separate runs) using a unidirectional traffic orientation and a partially meshed distribution. See RFC 2285 for definitions of traffic orientation and distribution.
Emulated hosts attached to ports e1/2 through e10/32 of the DUT will use IGMPv3 join messages to subscribe to 200 multicast groups as described in section 3.3 of this document.
After group membership is verified and all tables are populated (with a learning run if necessary as described in RFC 2544 section 23 and RFC 3918 section 4.1), we will offer traffic from 50 emulated sources attached to port e1/1, destined to all multicast receivers on all other ports. Using a binary search algorithm in a partially meshed pattern (from all sources to all subscribers), we will determine the throughput rate and frames received in sequence.
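The replication load implied by this topology can be sanity-checked with simple arithmetic: traffic enters on one port and must be copied to every receiver port. The sketch below uses the figures from this section; the ingress rate is the 64-byte line rate of one 10G port.

```python
# Replication arithmetic for the multicast test: frames enter on e1/1
# and the DUT replicates each one to every port whose emulated hosts
# joined the destination group.
sources = 50         # emulated senders on e1/1 (10.0.0.2-10.0.0.51)
groups = 200         # groups joined via IGMPv3 (225.0.0.1-225.0.0.200)
receiver_ports = 255  # e1/2 through e10/32

sg_state = sources * groups  # worst-case (S,G) forwarding entries
print(sg_state)              # 10,000

# Each ingress frame fans out to all 255 receiver ports, so aggregate
# egress load is 255x the offered ingress load (one full copy per port).
ingress_fps = 14_880_952     # 64-byte line rate on the one source port
egress_fps = ingress_fps * receiver_ports
print(egress_fps)
```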
The test instrument also measures average and maximum forwarding delay at 10 percent of line rate, as well as counting frames in and out of sequence.
We repeat all tests with 64-, 128-, 256-, 1,518- and 9,216-byte frames.
Test duration is 300 seconds per iteration.
The precision of delay measurements is +/- 100 nanoseconds.
Theoretical maximum throughput (64, 128, 256, 1,518, 9,216-byte frames)
Nexus throughput (64, 128, 256, 1,518, 9,216-byte frames)
Average forwarding delay at 10 percent of line rate (64, 128, 256, 1,518, 9,216-byte frames)
Maximum forwarding delay at 10 percent of line rate (64, 128, 256, 1,518, 9,216-byte frames)
Frames received out of sequence (all tests)
DUT configuration
DUT hardware and software version
TestCenter configuration
Test results
To determine the effect, if any, on data- or control-plane forwarding during and after an upgrade and process restart of the Nexus 7000 OSPF routing software
To determine the effect, if any, on data- or control-plane forwarding during and after an upgrade and process restart of the Nexus 7000's entire system image
To validate that the loss of one Nexus 7000 switch fabric does not interrupt the flow of data-plane traffic
The test bed for this event is similar to that used above in the IPv4 unicast performance tests described in section 4.2 of this document. Test traffic will consist of 64-byte frames offered at the throughput rate as determined in section 4.2.
1. We begin with an IPv4 baseline test. TestCenter brings up OSPFv2 adjacencies on each of 256 Nexus ports and advertises 51,200 type 5 LSAs.
2. After allowing sufficient time for routes to be installed in the Nexus database, TestCenter offers 64-byte frames to all routes at the throughput rate for 300 seconds. At the conclusion of the test, we determine what frame loss, if any, has occurred.
3. We repeat the previous step, this time performing an OSPF process restart. While the Nexus routes traffic, we kill the device's OSPF process, forcing the Nexus to start a new OSPF process. This should not affect current OSPF adjacencies or traffic forwarding.
4. At the end of the test, we compare results with those from step 2. Any delta in frame loss between the two tests can be attributed to the process restart. No difference in frame loss indicates a seamless OSPF process restart.
5. We repeat step 2, this time performing a complete upgrade of the system image and extending the test duration to 2,700 seconds, the time needed to upgrade both supervisors and eight line cards. While the Nexus routes traffic, the in-service software upgrade (ISSU) begins by upgrading and rebooting the secondary supervisor. Once the secondary supervisor comes back online, it takes over as the active supervisor, and the primary supervisor begins its upgrade. From this point to the end of the test, the secondary supervisor forwards traffic and maintains the OSPF process. When the primary supervisor has completed upgrading, the seamless upgrade begins for the eight line cards.
6. At the end of the test, we compare results from step 2. Any delta in frame loss between the two tests can be attributed to the change in software images. No difference in frame loss indicates a seamless in-service software upgrade.
7. The previous six steps validate an in-service software upgrade and OSPF process restart. To demonstrate high availability, we repeat step 2 (again using a 300-second test duration) and remove three out of the five switch fabric cards from service while the Nexus device forwards test traffic.
8. At the end of the test, we compare results from step 2. Any delta in frame loss between the two tests can be attributed to the fabric removal. No difference in frame loss indicates seamless fabric load balancing.
The test duration is 300 seconds for the baseline and process restart tests, and 2,700 seconds for the system image upgrade test.
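Because the offered load is constant and known, any frame-loss delta from the steps above translates directly into outage time: outage equals lost frames divided by the offered frame rate. A sketch of the conversion, with a hypothetical loss figure:

```python
# Converting a frame-loss delta into outage time. At a fixed offered
# load, every lost frame accounts for a fixed slice of time.
def outage_seconds(lost_frames, offered_fps):
    return lost_frames / offered_fps

# 64-byte line rate across all 256 ports (14,880,952 fps per 10G port):
offered_fps = 14_880_952 * 256

# Hypothetical example: losing ~3.81 billion frames during the
# 2,700-second upgrade run would equate to one second of outage.
print(outage_seconds(3_809_523_712, offered_fps))  # 1.0
```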
Frame loss during baseline test
Frame loss during OSPF process restart
Frame loss during system image upgrade
Frame loss during fabric redundancy test
DUT configuration
DUT hardware and software version
TestCenter configuration
Test results
Version 2008072201
Sections 4.4.2, 4.4.3
Increased intended load from 10 percent of line rate to throughput rate
Version 2008071801
Section 4.2.2
Put all Nexus and Spirent TestCenter interfaces in OSPF area 1, except for Nexus loopback interface in area 0. Previously all interfaces were in area 0
Version 2008071602
Section 4.4.3
Fixed typos
Version 20080716
Section 2.1
Changed scoring to 15% each for L2 and L3 performance tests, and 25% for HA/resiliency tests (previously was 20% each for performance and 10% for HA/resiliency)
Section 3.1.1
Added "500-line" to description of QoS ACL
Section 4.3.2
Deleted OSPF reference (not used in multicast testing)
Section 4.3.3
Added requirement to verify learning and table population before measuring test traffic
Corrected typo about interface 10/32
Section 4.3.4
Corrected typo about frame sizes
Section 4.4.3
Changed test from 32 to 256 ports
Rewrote HA/resiliency procedure to use 300-second duration for baseline and process restart tests and 2700-second duration for system image upgrade.
Version 20080707
Executive summary
Added upgrade to OSPF only (process restart)
Sections 4.1, 4.2, 4.3
Changed latency metric to forwarding delay at an intended load (iload) of 10 percent of line rate
Sections 4.1.3, 4.2.3, 4.3.3
Changed test durations from 60 to 300 seconds
Section 4.2
Changed LSA type from type 3 (inter-area) to type 5 (external)
Sections 4.2.2, 4.3.2
Removed references to VLANs
Section 4.2.3
Corrected typo in OSPF version number
Section 4.4
Added process restart event for upgrade of OSPF component
Version 2008061401
Sections 1, 4.3
Replaced IPv6 testing with IPv4 multicast testing
Section 3.3
Added section on test bed addressing
Sections 4.1, 4.2, 4.3
Added 128-byte frames
Section 4.1.1
Added "layer-2" to test objective
Section 4.1.2
Added language requiring that ARP be disabled in L2 tests
Sections 4.2.2 and 4.2.3
Clarified OSPF routing parameters; added pointer to table in section 3.3
Changed hosts/port from 256 to 20 to work within DUT L2 table capacity
Section 4.3
Added new section on multicast testing
Version 2008022501
Sections 4.1, 4.2, 4.3
Reduced host count from 8,000/port to 256/port to avoid overrun of 96,000-entry Nexus MAC address capacity
Section 4.2
Added OSPFv2 routing
Section 4.3
Added OSPFv3 routing
Section 4.4
Added switch fabric redundancy tests for IPv4 and IPv6
Version 2008022201
Initial public release
[1] All frame length references in this document cover IP over Ethernet. We measure frame length from the first byte of the Ethernet MAC header to the last byte of the CRC. Unless otherwise specified, IP packets contain IP headers but no TCP or UDP headers. Also note that current IEEE 802.3 specifications do not recognize jumbo frames (currently, any length above 2,048 bytes) as a legal Ethernet frame, although many vendors' devices implement jumbo support.