Network World Clear Choice Test: 3Com 7900E Switch/Router

Scheduled for publication in Network World in November 2008

Test Methodology

 

Version 2008102401. Copyright 1999-2008 by Network Test Inc. Vendors are encouraged to comment on this document and any other aspect of test methodology. Network Test and Network World reserve the right to change test parameters at any time.

 

PDF version: http://networktest.com/3c08/3c08meth.pdf

1       Executive summary

This document describes benchmarking procedures for the 3Com 7900E enterprise switch/router. Test results are scheduled for publication in Network World in November 2008.

 

Given that Network World's readership comprises enterprise network managers, the key emphases of this project will be performance and features in an enterprise context. As described in detail below, tests cover the following areas:

•      L2 unicast performance (gigabit and 10-gigabit Ethernet)
•      L3 unicast performance
•      L3 multicast performance
•      power consumption
•      switch management and usability
•      switch features

 

 

This document is organized as follows. This section introduces the tests to be conducted. Section 2 describes the test bed. Section 3 describes the tests to be performed. Section 4 provides a change log.

2       The test bed

This section discusses requirements of systems under test and introduces the test equipment to be used.

 

2.1     Devices under test

3Com should supply the following:

•      a 7900E switch/router chassis equipped with 288 gigabit Ethernet (1000Base-T) interfaces
•      two 10-gigabit Ethernet interfaces with XFP SR optics
•      the software image, licenses and cabling needed to run the tests described in section 3

 

 

2.2     Test instruments

2.2.1    Spirent TestCenter

The primary instrument for performance assessment in this project is Spirent TestCenter.

 

We use Spirent TestCenter Application version 2.30 and Spirent ScriptMate 2.0.74 to generate test instrument configurations.

2.2.2    Fluke True-rms Clamp Meter 335

The power consumption measurement instrument for this project is a Fluke True-rms Clamp Meter 335. Power consumption tests also use a WaveTek Meterman ELS2 line splitter to avoid the need to split power cords.

 

3       Test procedures

 

This section describes the test procedures. For each procedure in this section, this document describes:

 

•      the test objective(s);

•      the configuration to be used;

•      the procedure to be used;

•      the test metrics to be recorded;

•      reporting requirements.

 

3.1     L2 unicast performance (gigabit Ethernet)

3.1.1    Objectives

To determine throughput, latency and sequencing of the DUT when forwarding unicast Ethernet frames based on L2 forwarding criteria across 288 gigabit Ethernet ports

 

3.1.2    Test bed configuration

This device under test (DUT) is equipped with 288 gigabit Ethernet interfaces configured to perform layer-2 switching.

 

We assume the use of 1000Base-T copper interfaces with RJ-45 connectors for gigabit Ethernet interfaces.

 

All 288 ports may be assigned IP addresses; however, since the switch uses layer-2 switching for this test, ARP and other mechanisms that allow traffic to cross subnet boundaries are not required.

 

We configure Spirent TestCenter to offer fully meshed traffic between the gigabit Ethernet interfaces. RFC 2285 describes traffic orientation and distribution.

 

Test traffic offered to all ports will have 600 MAC addresses per port, and will use pseudorandom MAC addresses as described in RFC 4814.
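
RFC 4814 calls for pseudorandom rather than sequential MAC addresses. The sketch below shows one minimal way to build such an address pool in Python, generating locally administered unicast addresses at random. It is illustrative only: Spirent TestCenter generates the actual addresses, and the precise bit-pattern recommendations should be taken from RFC 4814 itself.

    # Sketch: pseudorandom, locally administered unicast MAC addresses in the spirit
    # of RFC 4814. Illustrative only; Spirent TestCenter generates the real pool.
    import random

    def random_mac(rng: random.Random) -> str:
        octets = [rng.randrange(256) for _ in range(6)]
        octets[0] = (octets[0] | 0x02) & 0xFE    # locally administered, unicast first octet
        return ":".join(f"{o:02x}" for o in octets)

    def unique_macs(count: int, seed: int = 4814) -> list:
        rng = random.Random(seed)                # fixed seed keeps runs repeatable
        macs = set()
        while len(macs) < count:                 # keep drawing until the pool has `count` unique addresses
            macs.add(random_mac(rng))
        return sorted(macs)

    port_macs = unique_macs(600)                 # e.g., the 600 MAC addresses used on one gigabit port
    print(len(port_macs), port_macs[0])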

 

The DUT must be configured so that entries in its bridging table will not age out during the test.

 

The DUT must be configured to disable spanning tree, routing protocols, multicast and any other protocols that might put control-plane traffic on the wire during the test duration. The goal of this test is to determine maximum data-plane performance, and the existence of even one extra frame other than test traffic can lead to frame loss.

 

The duration for all tests is 300 seconds.

 

3.1.3    Procedure

1.     Perform a learning run to populate the DUT's bridging table.

2.     Using a binary search algorithm, we offer fully meshed streams of 64-byte test traffic to all 288 gigabit Ethernet interfaces for 300 seconds to determine the throughput rate, latency, and frames received out of sequence (if any); a sketch of the search appears after this list.

3.     We repeat the previous step for each of the following Ethernet frame lengths: 256, 1518 and 9216 bytes.
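
The binary search in step 2 follows the usual RFC 2544 pattern: offer traffic at a trial rate, narrow the search interval based on whether any frames were lost, and converge on the highest zero-loss rate. The sketch below is a minimal, generic illustration in Python; send_trial() stands in for the instrument-control code and is not a Spirent TestCenter API.

    # Minimal RFC 2544-style binary search for throughput. Illustrative only;
    # send_trial() is a placeholder for the instrument-control code.

    def send_trial(rate_pct: float, duration_s: int = 300) -> int:
        """Offer traffic at rate_pct percent of line rate for duration_s seconds; return frames lost."""
        raise NotImplementedError("drive the test instrument here")

    def find_throughput(resolution_pct: float = 0.1) -> float:
        low, high = 0.0, 100.0                   # search window, in percent of line rate
        best = 0.0
        while high - low > resolution_pct:
            trial = (low + high) / 2.0
            if send_trial(trial) == 0:           # zero loss: throughput is at least the trial rate
                best, low = trial, trial
            else:                                # loss seen: the zero-loss rate lies below the trial
                high = trial
        return best                              # highest offered rate with zero frame loss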

 

3.1.4    Metrics

Theoretical maximum throughput

Throughput (64, 256, 1518, and 9216 byte frames)

Average and maximum latency (64, 256, 1518, and 9216 byte frames)

Out of sequence frames

 

3.1.5    Reporting requirements

DUT configuration

DUT software version

Spirent TestCenter configuration

Test results

 

3.2     L2 unicast performance (10 gigabit Ethernet)

3.2.1    Objectives

To determine throughput, latency and sequencing of the DUT when forwarding unicast Ethernet frames based on L2 forwarding criteria across two 10-gigabit Ethernet ports

 

3.2.2    Test bed configuration

This device under test (DUT) is equipped with two 10G Ethernet ports configured to perform layer-2 switching.

 

We assume the use of XFP SR optics for the 10G interfaces unless otherwise specified.

 

Both 10G Ethernet ports may be assigned IP addresses; however, since the switch uses layer-2 switching for this test, ARP and other mechanisms that allow traffic to cross subnet boundaries are not required.

 

We configure Spirent TestCenter to offer bidirectional traffic between the 10G interfaces. RFC 2285 describes traffic orientation and distribution.

 

Test traffic offered to each port will use 8,192 pseudorandom MAC addresses per port, as described in RFC 4814.

 

The DUT must be configured so that entries in its bridging table will not age out during the test.

 

The DUT must be configured to disable spanning tree, routing protocols, multicast and any other protocols that might put control-plane traffic on the wire during the test duration. The goal of this test is to determine maximum data-plane performance, and the existence of even one extra frame other than test traffic can lead to frame loss.

 

The duration for all tests is 300 seconds.

 

3.2.3    Procedure

1.     Perform a learning run to populate the DUT's bridging table.

2.     Using a binary search algorithm, we offer fully meshed streams of 64-byte test traffic to both 10-gigabit Ethernet interfaces for 300 seconds to determine the throughput rate, latency, and frames received out of sequence (if any).

3.     We repeat the previous step for each of the following Ethernet frame lengths: 256, 1518 and 9216 bytes.

 

3.2.4    Metrics

Theoretical maximum throughput

Throughput (64, 256, 1518, and 9216 byte frames)

Average and maximum latency (64, 256, 1518, and 9216 byte frames)

Out of sequence frames

 

3.2.5    Reporting requirements

DUT configuration

DUT software version

Spirent TestCenter configuration

Test results

 

3.3      L3 unicast performance

3.3.1    Objectives

To determine throughput, latency and sequencing of the DUT when forwarding unicast IPv4 traffic among ports using static and dynamic routing

 

3.3.2    Test bed configuration

The device under test (DUT) is equipped with 288 gigabit Ethernet interfaces.

 

The first 144 DUT ports will run OSPF, with each port forming a separate adjacency. All OSPF routers will be in area 0. OSPF routers will use MD5 authentication with a password of "Spirent". We will offer 20,736 summary (type 3) LSAs to the routed ports; with 144 routed ports, this is equivalent to 144 LSAs per port.
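
As a sanity check on the LSA arithmetic, and as a sketch of how advertised networks might be partitioned per routed port, consider the Python fragment below. The 20.0.0.0/8 space and /24 prefix length are assumptions for illustration only; the methodology does not dictate which networks the emulated OSPF routers advertise, and the actual LSAs are generated by Spirent TestCenter.

    # Sanity check on the LSA arithmetic and an illustrative per-port route partition.
    # The 20.0.0.0/8 space and /24 prefixes are assumptions for illustration only.
    import ipaddress

    TOTAL_LSAS = 20736
    ROUTED_PORTS = 144
    LSAS_PER_PORT = TOTAL_LSAS // ROUTED_PORTS            # 144 summary LSAs per routed port
    assert LSAS_PER_PORT * ROUTED_PORTS == TOTAL_LSAS     # 144 x 144 = 20,736

    ALL_PREFIXES = list(ipaddress.ip_network("20.0.0.0/8").subnets(new_prefix=24))

    def routes_for_port(port: int):
        """Return the /24 networks advertised on routed port `port` (1-based, 1-144)."""
        start = (port - 1) * LSAS_PER_PORT
        return ALL_PREFIXES[start:start + LSAS_PER_PORT]

    print(len(routes_for_port(1)), routes_for_port(1)[0])  # 144 20.0.0.0/24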

 

The following table lists the IPv4 addressing in use on the DUT and test instrument. Note that the first 240 ports are in 10/8 space, and the remaining 48 ports are in 11/8 space.


 

 

 

Interface type   DUT port IP address/prefix length   Interface type   Spirent TestCenter IP address/prefix length
GE               10.1.0.1/16                         GE               10.1.0.2/16
GE               10.2.0.1/16                         GE               10.2.0.2/16
GE               10.3.0.1/16                         GE               10.3.0.2/16
...              ...                                 ...              ...
GE               10.240.0.1/16                       GE               10.240.0.2/16
GE               11.1.0.1/16                         GE               11.1.0.2/16
GE               11.2.0.1/16                         GE               11.2.0.2/16
GE               11.3.0.1/16                         GE               11.3.0.2/16
...              ...                                 ...              ...
GE               11.48.0.1/16                        GE               11.48.0.2/16

 

In this test, we offer bidirectional partially meshed traffic between routed ports (those running OSPF) and non-routed ports. On the routed ports, traffic will be sourced from and destined to networks advertised to those ports. On the non-routed ports, traffic will be sourced from and destined to a single host defined on each Spirent TestCenter port. RFC 2285 describes traffic orientation and distribution.
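
To make the addressing table and the traffic description above concrete, the sketch below reproduces the 10.N.0.x/16 and 11.N.0.x/16 pattern and enumerates one plausible reading of "bidirectional partially meshed": every routed port (the first 144) exchanges traffic with every non-routed port. This is an illustration only; the actual stream blocks are built in Spirent TestCenter, and the exact pairing should follow the instrument configuration.

    # Illustrative reconstruction of the addressing plan and traffic pairing;
    # the real stream blocks are configured in Spirent TestCenter.

    def port_addresses(port: int):
        """Return (DUT address, TestCenter address) for 1-based ports 1-288."""
        if port <= 240:
            net = f"10.{port}"                   # first 240 ports live in 10/8 space
        else:
            net = f"11.{port - 240}"             # remaining 48 ports live in 11/8 space
        return f"{net}.0.1/16", f"{net}.0.2/16"

    ROUTED = range(1, 145)                       # ports running OSPF
    NON_ROUTED = range(145, 289)                 # layer-2 only ports

    # One reading of "bidirectional partially meshed": every routed port exchanges
    # traffic with every non-routed port.
    pairs = [(r, n) for r in ROUTED for n in NON_ROUTED]
    print(len(pairs), port_addresses(1), port_addresses(288))   # 20736 port pairs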

 

The DUT must be configured so that entries in its ARP and bridging tables will not age out during the test duration. This can be done either by disabling aging or setting it to a value larger than the test duration.

 

The DUT must be configured to disable spanning tree, multicast and any other protocols (other than OSPF) that might put control-plane traffic on the wire during the test duration. The goal of this test is to determine maximum data-plane performance, and the existence of even one extra frame other than test traffic can lead to frame loss.

 

The duration is 300 seconds for all tests.

3.3.3    Procedure

1.     Start all OSPF routers and advertise 20,736 type 3 (summary) LSAs. Verify that all LSAs have been received before offering test traffic.

2.     Using a binary search algorithm, we offer bidirectional streams of 64-byte test traffic between all routed and non-routed interfaces for 300 seconds to determine the throughput rate, latency and frames received out of sequence (if any).

3.     We repeat the previous step for each of the following Ethernet frame lengths: 256, 1518 and 9216 bytes.

 

3.3.4    Metrics

Throughput (64, 256, 1518, and 9216 byte frames)

Average and maximum latency (64, 256, 1518, and 9216 byte frames)

Out of sequence frames

 

3.3.5    Reporting requirements

DUT configuration

DUT software version

Spirent TestCenter configuration

Test results

 

3.4     L3 multicast performance

3.4.1    Objectives

 

Determine throughput, average and maximum latency, and sequencing for 288 gigabit Ethernet interfaces when forwarding IP multicast traffic (RFC 3918 aggregated multicast throughput and multicast forwarding latency)

 

3.4.2    Test bed configuration

The device under test (DUT) is equipped with 288 gigabit Ethernet interfaces.

 

The DUT must be configured with each of 288 test interfaces in a unique IPv4 subnet, as described in the L3 unicast performance test. Additionally, the DUT must run PIM-SM with its loopback address configured as the rendezvous point. IGMPv3 also must be enabled on the DUT.

 

The first 48 Spirent TestCenter ports will each offer traffic from 1 multicast source, for a total of 48 transmitters per multicast group. Emulated hosts on all other ports (ports 49 through 288) will join 41 multicast groups, or a higher number if supported.
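
The membership layout above can be summarized with a short sketch. The group addresses shown are arbitrary picks from administratively scoped (239/8) space and are assumptions for illustration; the methodology fixes only the counts (48 source ports, receivers on ports 49 through 288, and 41 groups joined per receiver).

    # Illustrative layout of multicast sources and receivers; group addresses are
    # arbitrary picks from administratively scoped (239/8) space.

    GROUPS = [f"239.1.1.{g}" for g in range(1, 42)]          # 41 multicast groups
    SOURCE_PORTS = range(1, 49)                              # ports 1-48: one source each, sending to every group
    RECEIVER_PORTS = range(49, 289)                          # ports 49-288: emulated hosts joining every group

    joins = {port: list(GROUPS) for port in RECEIVER_PORTS}  # IGMPv3 membership per receiver port
    flows = len(SOURCE_PORTS) * len(GROUPS) * len(RECEIVER_PORTS)
    print(len(joins), flows)                                 # 240 receiver ports; 472,320 source/group/receiver tuples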

 

Aside from PIM-SM and IGMPv3, all other management protocols should be disabled. This includes spanning tree and any other protocols that may contend for bandwidth during the test.

 

To speed testing, MAC and ARP aging timers should be disabled or set to extremely high values (e.g., at least 24 hours greater than the test duration).

 

Test traffic shall consist of 64-, 256-, 1518- and 9216-byte frames carrying IP headers (offered in separate runs) using a unidirectional traffic orientation and a partially meshed distribution. See RFC 2285 for definitions of traffic orientation and distribution.

 

3.4.3    Procedure

Emulated hosts attached to ports 49 through 288 of the DUT will send IGMPv3 reports (join messages) to subscribe to all multicast groups.

 

After group membership is verified and all tables are populated (with a learning run if necessary as described in RFC 2544 section 23 and RFC 3918 section 4.1), we will offer traffic from 48 emulated sources, destined to all multicast receivers on 240 receiver ports. Using a binary search algorithm in a partially meshed pattern (from all sources to all subscribers), we will determine the throughput rate and frames received in sequence.

 

The test instrument also measures average and maximum latency at the throughput rate, as well as counting frames in and out of sequence.

 

We repeat all tests with 64-, 256-, 1518- and 9216-byte frames.

 

Test duration is 300 seconds per iteration.

 

3.4.4    Metrics

Throughput (64, 256, 1518, and 9216 byte frames)

Average and maximum latency (64, 256, 1518, and 9216 byte frames)

Out of sequence frames

 

3.4.5    Reporting requirements

DUT configuration

DUT software version

Spirent TestCenter configuration

Test results

 

3.5     Power consumption

3.5.1    Objectives

To determine the power consumption of the DUT when idle

To determine the power consumption of the DUT when fully loaded

 

3.5.2    Test bed configuration

This test uses the following equipment:

 

•      Fluke 335 True-RMS clamp meter

•      WaveTek ELS2 AC line splitter

•      Spirent TestCenter chassis

 

The DUT plugs into the line splitter and the clamp meter measures power consumption through the line splitter. The Spirent TestCenter chassis attaches to 288 gigabit Ethernet interfaces of the DUT.

 

This test will measure power consumption when idle and again when fully loaded. "Fully loaded" in this context means maximum utilization of the DUT's control and data planes.

 

The addressing for both the DUT and Spirent TestCenter is similar to that used in the L3 unicast performance test.

 

Test traffic will comprise 64-byte UDP/IP frames with at least one IP option set to force "slow-path" processing by the DUT. The tester should verify that CPU utilization rises when IP options are in use; if not, other mechanisms such as management requests or flooding may be used, provided they have the effect of maximizing CPU utilization.
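
One common way to force slow-path handling is to carry a Router Alert option (RFC 2113) in the IPv4 header, since routers are expected to examine such packets more closely. The sketch below hand-builds such a header in plain Python purely to illustrate the frame layout; in practice the option is configured in the Spirent TestCenter stream definition, and any option that demonstrably raises CPU utilization is acceptable.

    # Sketch: IPv4 header carrying a Router Alert option (RFC 2113), one way to force
    # slow-path processing. Illustrative only; the real frames come from Spirent TestCenter.
    import socket
    import struct

    def checksum(data: bytes) -> int:
        """Ones-complement sum over 16-bit words (the IPv4 header checksum)."""
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack(f"!{len(data) // 2}H", data))
        total = (total & 0xFFFF) + (total >> 16)
        total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def ipv4_with_router_alert(src: str, dst: str, payload_len: int) -> bytes:
        option = b"\x94\x04\x00\x00"             # Router Alert: type 148, length 4, value 0
        ihl = 6                                  # 20-byte base header plus the 4-byte option
        total_len = ihl * 4 + payload_len
        header = struct.pack("!BBHHHBBH4s4s",
                             (4 << 4) | ihl, 0, total_len,
                             0, 0,               # identification, flags/fragment offset
                             64, 17, 0,          # TTL, protocol (UDP), checksum placeholder
                             socket.inet_aton(src), socket.inet_aton(dst)) + option
        return header[:10] + struct.pack("!H", checksum(header)) + header[12:]

    # A 22-byte payload (8-byte UDP header + 14 data bytes) keeps the Ethernet frame at
    # 64 bytes: 14 (Ethernet) + 46 (IP total length) + 4 (FCS).
    print(ipv4_with_router_alert("10.1.0.2", "10.2.0.2", payload_len=22).hex())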

 

3.5.3    Procedure

1.     Using the clamp meter and leads, measure AC voltage from the power outlet. We refer to this measurement as V.

2.     Plug the DUT into the line splitter and verify the system has booted up.

3.     Place the clamp meter jaws around the "10X" receptacle of the line splitter. The clamp meter will display AC amps drawn by the DUT times 10. We refer to this figure as 10A.

4.     Derive idle-DUT power consumption in watts (W) using the formula W = V * (10A/10); a worked example appears after this list.

5.     Using Spirent TestCenter, offer 64-byte frames to all interfaces at the throughput rate determined in the L3 unicast performance test. The traffic orientation must be fully meshed between all gigabit Ethernet interfaces. Also, see the comments about setting IP options in "Test bed configuration" above.

6.     Repeat steps 3-4 to determine maximum-load power consumption.

7.     For devices with multiple power supplies, multiply wattage by power supply count to determine total system power consumption (this assumes uniform distribution of power across all power supplies; if the load is asymmetric, measure each power supply separately and add the measurements).
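
The arithmetic in steps 1 through 7 reduces to a single formula. The helper below works through it with illustrative numbers (120 V at the outlet, a 6.0 A reading on the 10X receptacle, two evenly loaded power supplies); these values are assumptions for the example, not measurements.

    # Worked example of the power computation in steps 1-7; the numbers are
    # illustrative assumptions, not measurements.

    def dut_watts(volts_ac: float, reading_10x_amps: float, power_supplies: int = 1) -> float:
        amps = reading_10x_amps / 10.0           # the 10X receptacle multiplies the current reading by 10
        return volts_ac * amps * power_supplies  # assumes evenly loaded power supplies

    # e.g., 120 V at the outlet, 6.0 A shown on the 10X receptacle, two power supplies:
    print(dut_watts(120.0, 6.0, power_supplies=2))   # 120 * 0.6 * 2 = 144.0 watts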

 

3.5.4    Metrics

Supplied power (volts AC)

Idle power consumption (watts)

Maximum-load power consumption (watts)

 

3.5.5    Reporting requirements

DUT configuration

DUT software version

Spirent TestCenter configuration

Test results

 

3.6     Switch management and usability

3.6.1    Objectives

To determine the types of device management supported by the DUT

To determine which cleartext and encrypted management methods are supported by default

To determine all supported management methods

To determine whether any management method is vulnerable to published exploits

 

3.6.2    Test bed configuration

The DUT should be tested in its default factory configuration. If the DUT already has been configured, it should be reset to the configuration state a first-time user would encounter.

 

3.6.3    Procedure

  1. Attach a serial console and attempt to give the device at least one IP address for management. (serial pass/fail)
  2. Over an IPv4 connection, determine which of the following management methods are enabled by default (a port-probe sketch follows this list):
    1. SSHv2
    2. SSHv1
    3. Telnet
    4. HTTP
    5. HTTPS
    6. SNMPv1
    7. SNMPv2C
    8. SNMPv3
    9. proprietary GUI
    10. proprietary CLI
    11. Other (note)
  3. Repeat the previous step to determine which methods are not enabled by default, but can be enabled through user configuration. Also determine whether the DUT can write log entries to an external syslog server or other external auditing platform.
  4. Determine whether IPv6 management is possible for each of the previous three steps.
  5. During this and all other tests, testers will record subjective comments about the relative ease of device management for common tasks. These tasks include initial setup; L2 and L3 configuration; and configuration reloads and system reloads.
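
Step 2 can be partly automated with a simple reachability probe of the usual TCP management ports, as sketched below. This is only a rough aid built on assumed port numbers: an open TCP port merely suggests a listener, SNMP (UDP 161) needs an actual query, and every result must still be confirmed against the DUT configuration and documentation.

    # Rough probe of common TCP management ports on the DUT. TCP connect checks only:
    # an open port merely suggests a listener, SNMP (UDP 161) needs a real query, and
    # results must still be confirmed against the DUT configuration.
    import socket

    MGMT_TCP_PORTS = {"SSH": 22, "Telnet": 23, "HTTP": 80, "HTTPS": 443}

    def probe(dut_ip: str, timeout: float = 2.0) -> dict:
        results = {}
        for name, port in MGMT_TCP_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                results[name] = (s.connect_ex((dut_ip, port)) == 0)   # True if the port accepts a connection
        return results

    print(probe("10.1.0.1"))   # e.g. {'SSH': True, 'Telnet': False, 'HTTP': False, 'HTTPS': True}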

3.6.4    Metrics

Default cleartext management methods

Default encrypted management methods

Supported management methods

Exportability to external log server

Usability

 

3.6.5    Reporting requirements

DUT configuration

Test results

3.7     Switch features

3.7.1    Objective

To determine the feature set supported by the DUT

 

3.7.2    Test bed configuration

Not applicable

 

3.7.3    Procedure

We ask participating vendors to complete a features questionnaire listing various attributes supported by the DUT. Examples of such attributes include the number and type of physical interfaces; routing protocols; VLAN support; spanning tree support; discovery protocol support; anti-spoofing and anti-DoS protection mechanisms; and management methods.

 

The questionnaire includes space for vendors to describe features not covered by the various questions.

 

Network World will publish the results of the features questionnaire, usually in its online edition. The publication should include a caveat that responses are supplied by vendors, and not all features have been verified by Network World.            

3.7.4    Metrics

Features supported

 

3.7.5    Reporting requirements

Features questionnaire

 

4       Change log

Version 2008102401

24 October 2008

Initial public release