
Network World Lab Test: 10G Ethernet Edge Switch/Routers

Draft Test Methodology

 

v. 1.20 Copyright © 2004 by Network Test Inc. Vendors are encouraged to comment on this document and any other aspect of test methodology. Network Test reserves the right to change the parameters of this test at any time.

 

By David Newman

 

This document’s URL is http://networktest.com/04edge10g/04edge10gmeth.html. A Microsoft Word version of this document is available at ftp://public.networktest.com/pub/04edge10g/04edge10gmeth.zip.

 

Please forward comments to dnewman at networktest.com

 

1         Executive summary

This document describes the methodology used in assessing edge switch/routers with 10-gigabit Ethernet and gigabit Ethernet interfaces. This emerging product category is generally characterized by a large number of copper 10/100/1000 Ethernet interfaces and one or more 10G Ethernet uplink interfaces. Such products are intended for use in both wiring closet and data center applications.

 

This project involves comparative testing of multiple vendors’ products. Each vendor gets a fixed test allotment in the lab, assigned on a first-come, first-served basis.

 

This document describes how we test the following events:

 

  • 10G Ethernet baseline performance
  • Gigabit Ethernet baseline performance
  • Performance with maximum VLANs
  • Performance with maximum ACEs
  • Rapid spanning tree performance
  • SSH and 802.1x support
  • Features

 

Vendors may also suggest extra tests beyond these mandatory events. We will attempt to conduct such tests on a time-permitting basis. Since optional extra events will differ across vendors, we do not include them in scoring. Section 2 discusses scoring rules.

 

This document is organized as follows. This section introduces the test. Section 2 presents guidelines for scoring and other testing ground rules. Section 3 describes requirements for the device under test and test equipment. Section 4 describes test procedures. Section 5 describes the change history of this document.

 

2         Scoring and testing ground rules

2.1        Test scoring

Reviews published in Network World present test results in three formats: in charts or tables, in the article text, and in a “NetResults” scorecard. This section discusses the weightings used to produce that scorecard and other ground rules.

 

Scorecards have a maximum rating of 5.0, where 5 = excellent, 4 = very good, 3 = average, 2 = below average, and 1 = consistently subpar.

 

This methodology has several mandatory events. For this project, we give 100 percent weighting to the mandatory tests. For example, a device with perfect results in all mandatory events and no optional tests would have a score of 5.0.

 

  • 10G Ethernet baseline performance (covers throughput, delay, jitter, and sequencing): 15%
  • Gigabit Ethernet baseline performance (covers throughput, delay, jitter, and sequencing): 15%
  • Performance with maximum VLANs (covers throughput, delay, jitter, and sequencing with maximum VLANs): 10%
  • Performance with maximum ACEs (covers throughput, delay, jitter, and sequencing with maximum ACEs applied): 10%
  • Rapid spanning tree performance (measures failover time using 802.1w rapid spanning tree): 10%
  • SSH and 802.1x support: 15%
  • Features: 25%
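As a worked illustration of these weightings (the helper below and its event labels are our own, not part of the methodology), the scorecard arithmetic reduces to a weighted average of per-event ratings:

```python
# Sketch: combine per-event ratings on the 1-to-5 scale with the
# weightings above. Weights are integer percentages summing to 100.
WEIGHTS = {
    "10G Ethernet baseline": 15,
    "Gigabit Ethernet baseline": 15,
    "Maximum VLANs": 10,
    "Maximum ACEs": 10,
    "Rapid spanning tree": 10,
    "SSH and 802.1x": 15,
    "Features": 25,
}

def netresults_score(ratings):
    """Weighted average of per-event ratings (1 = subpar ... 5 = excellent)."""
    assert sum(WEIGHTS.values()) == 100
    return sum(WEIGHTS[e] * ratings[e] for e in WEIGHTS) / 100

# A device with perfect results in every mandatory event scores 5.0:
print(netresults_score({e: 5 for e in WEIGHTS}))  # 5.0
```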

 

2.2        Unreleased product testing

Vendors may supply unreleased versions of hardware and software for testing, provided the device under test will be available to Network World readers within 30 days of the test’s scheduled publication date.

 

We make no distinction between released products and alpha-, beta-, or gold-code products. If an unreleased version of a product catches fire (an extreme case, perhaps) or produces otherwise undesirable results (a more common outcome), we will note such results in the review.

 

We ask vendors to supply hardware and software version information along with all configurations for all tests. This ensures repeatability and helps answer questions about what we tested.

 

2.3        No selective publication, no withdrawals

We strongly encourage vendors to prepare for all tests by running them in their labs before coming into ours. We will supply test scripts and consult with vendors on any and all aspects of test methodology. After testing we also share complete results and solicit vendor input on the results. The last thing we want is any result that surprises or embarrasses any vendor.

 

That said, Network World’s policy is to publish all results from all tests. This may include results a vendor may perceive as negative. Network World will not honor requests to publish only the “good” results from a set of tests.

 

3         The test bed

This section discusses requirements of systems under test and introduces the test equipment to be used.

3.1        Devices under test

Participating vendors must supply the following:

 

  • 1 switch with at least 16 10/100/1000 Ethernet switch ports and at least 1 10G Ethernet interface, including XENPAK module

 

  •  (optional) 1 switch with at least 48 10/100/1000 Ethernet switch ports and at least 2 10G Ethernet interfaces, including XENPAK modules

 

We will test 16x1 and 48x2 switches separately. Vendors may enter multiple products in this test.

 

We strongly encourage vendors to supply 20 percent additional spare interfaces.

 

3.2        Test Hardware

3.2.1        Spirent SmartBits

The principal test instrument for this project is the SmartBits traffic generator/analyzer manufactured by Spirent Communications Inc. Spirent’s SmartBits 6000B and 6000C chassis will be equipped with the company’s XLW-3720A 10-gigabit Ethernet cards and SmartBits LAN-3325 TeraMetrics XD gigabit Ethernet cards.

 

The 10-gigabit Ethernet cards use XENPAK MSA modules with 1,310-nm optics.

 

The 3325 gigabit Ethernet cards have dual copper and fiber interfaces. We assume copper gigabit Ethernet interfaces for this project. Please notify us as soon as possible if you intend to use fiber gigabit Ethernet interfaces instead.

 

Siemon Co. has furnished multimode cabling with all combinations of LC-LC, SC-SC, and LC-SC connector types.

 

4         Test procedures

For each routine in this section, this document describes:

 

·        the test objective(s);

·        the configuration to be used;

·        the procedure to be used;

·        the test metrics to be recorded;

·        reporting requirements.

 

4.1        Baseline 10-gigabit performance

4.1.1        Objectives

Determine throughput, delay, jitter, and sequencing for 10 gigabit Ethernet interfaces forwarding unicast IPv4 traffic

 

4.1.2        Test bed configuration

Figure 1 below shows the physical test bed topology for the 10-gigabit Ethernet baseline tests. The device under test (DUT) is one switch equipped with at least one 10-gigabit Ethernet interface and at least 10 gigabit Ethernet interfaces. We attach SmartBits test interfaces to the DUT.

 

[Figure: SmartBits test ports attached to the DUT via 10 x 1G edge links (optional 20 x 1G) and the 10G Ethernet uplink.]
Figure 1: 10-Gbit/s baseline physical topology

 

For devices with multiple 10G Ethernet interfaces, we conduct separate tests with single and multiple interfaces.

 

We emulate a single host with a unique source MAC and IP address on each SmartBits interface and offer traffic in a partially meshed (backbone) pattern among all interfaces.

Note this is a layer-2 test. While we offer UDP/IP packets, all interfaces are members of the same IP subnet. This layer-2 emphasis reflects the way these DUTs are most often used in customer deployments – as traffic aggregators in wiring closets or data centers. In many enterprise network designs, devices other than this type of DUT handle IP routing duties.

 

The MAC addresses we use take the form 00:PP:PP:RR:RR:RR, where PP is the test instrument’s port number (expressed in hexadecimal format) and RR are pseudorandom hexadecimal numbers.

 

We will use Spirent SmartFlow and/or SAI scripts to generate and analyze traffic.

 

The test traffic shall consist of 64-, 256-, and 1,518-byte frames and 9,000-byte jumbo frames carrying UDP/IP headers[1] (offered in separate runs), using a bidirectional traffic orientation and a partially meshed distribution between the 10G Ethernet and gigabit Ethernet interfaces. See RFC 2285 for definitions of traffic orientation and distribution.
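For reference, the maximum theoretical frame rate at a given line rate follows from the frame length plus the 20 bytes of preamble and inter-frame gap that accompany every Ethernet frame on the wire. A quick helper (our own, useful for sanity-checking offered loads):

```python
def line_rate_fps(bits_per_second, frame_bytes):
    """Theoretical maximum frames/s: each frame occupies its own length
    plus 8 bytes of preamble and a 12-byte inter-frame gap."""
    return bits_per_second // ((frame_bytes + 20) * 8)

for size in (64, 256, 1518, 9000):
    print(size, line_rate_fps(10_000_000_000, size))
# 64-byte frames: 14,880,952 fps at 10 Gbit/s; 1,488,095 fps at 1 Gbit/s.
```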

 

Management traffic such as OSPF hellos, discovery protocols, or VRRP messages may interfere with test traffic, and may adversely affect throughput. Vendors should either disable such messages or set timers high enough so that management traffic does not degrade data-plane performance.

 

Vendors should either disable aging or set timers to high values for layer-2 forwarding databases.

 

On the edge (gigabit Ethernet) interfaces, vendors should disable autonegotiation if possible and configure for 1000-Mbit/s full-duplex operation.

 

4.1.3        Procedure

1. Using a binary search algorithm, we offer traffic to each interface in a bidirectional orientation and partially meshed distribution between the 10G Ethernet and gigabit Ethernet interfaces to determine the throughput rate and frames received in sequence.

 

2. At an intended load of 10 percent of line rate, we also run tests to determine average delay, maximum delay, and jitter.

 

Note that this procedure varies from the RFC 2544 recommendation to measure latency at the throughput level. Experience suggests that devices with less-than-line-rate throughput will have difficulty delivering all traffic without loss across multiple iterations. Latency measurements in the presence of loss are invalid. Since it may not be practical to measure latency at the throughput level, we instead use an offered load of 10 percent of line rate.

 

3. We repeat this test with 64-, 256-, 1,518- and 9,000-byte frames.

 

4. For devices with two 10G Ethernet interfaces, we repeat steps 1-3 with traffic exchanged between the two 10G Ethernet interfaces and 20 gigabit interfaces in a bidirectional orientation and partially meshed distribution.

 

Test duration is 60 seconds per iteration.

 

The precision of delay and jitter measurements is +/- 100 nanoseconds.
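The binary search in step 1 can be sketched as follows. This is a simplified stand-in for the Spirent SmartFlow logic, not the actual tool: offer_load represents one 60-second trial and reports whether the DUT forwarded every frame without loss.

```python
def throughput_search(offer_load, lo=0.0, hi=100.0, resolution=0.1):
    """RFC 2544-style binary search for throughput: the highest offered
    load (percent of line rate) the DUT forwards with zero loss."""
    best = 0.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if offer_load(mid):      # no loss at this load
            best = lo = mid      # search higher
        else:
            hi = mid             # loss: search lower
    return best

# Toy DUT that starts dropping frames above 87.5 percent of line rate:
print(throughput_search(lambda pct: pct <= 87.5))  # 87.5
```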

 

4.1.4        Metrics

Throughput (64, 256, 1518, 9000-byte frames)

Average delay (64, 256, 1518, 9000-byte frames)

Maximum delay (64, 256, 1518, 9000-byte frames)

Jitter (64, 256, 1518, 9000-byte frames)

Frames received out of sequence (all tests)

 

4.1.5        Reporting requirements

DUT configuration

DUT hardware and software version

SmartBits configuration

Test results

4.2        Baseline gigabit Ethernet performance

4.2.1        Objective

Determine throughput, delay, jitter, and sequencing for gigabit Ethernet interfaces forwarding unicast IPv4 traffic

 

4.2.2        Test bed configuration

We attach SmartBits 1-Gbit/s Ethernet test interfaces to all 10/100/1000 interfaces of the DUT.

 

We recognize that port counts will differ among DUTs. For example, one DUT may have 24 gigabit Ethernet interfaces, while another may have 48.

 

Since there is no standard number of edge interfaces, we conduct this test in two configurations:

 

  1. Lowest common denominator (LCD): We use the smallest number of 1000-Mbit/s interfaces supported by any participating vendor.
  2. Maximum interfaces: We use the maximum number of 1000-Mbit/s interfaces for a given switch.

This two-tier approach ensures an apples-to-apples comparison across all products (the LCD case) while also describing the performance limits of each device (the maximum interfaces case).

 

We emulate one IP host with a unique source MAC and IP address on each SmartBits interface and offer traffic in a partially meshed (backbone) pattern among all interfaces.

 

Note this is a layer-2 test. While we offer UDP/IP packets, all interfaces are members of the same IP subnet. This layer-2 emphasis reflects the way these DUTs are most often used in customer deployments – as traffic aggregators in wiring closets or data centers. In many enterprise network designs, devices other than this type of DUT handle IP routing duties.

 

The MAC addresses we use take the form 00:PP:PP:RR:RR:RR, where PP is the test instrument’s port number (expressed in hexadecimal format) and RR are pseudorandom hexadecimal numbers.

 

We will use Spirent SmartFlow and/or SAI scripts to generate and analyze traffic.

 

The test traffic shall consist of 64-, 256-, 1,518-, and 9,000-byte frames carrying UDP/IP headers (offered in separate runs), using a bidirectional traffic orientation and a fully meshed distribution. See RFC 2285 for definitions of traffic orientation and distribution.

 

Management traffic such as OSPF hellos, discovery protocols, or VRRP messages may interfere with test traffic, and may adversely affect throughput. Vendors should either disable such messages or set timers high enough so that management traffic does not degrade data-plane performance.

 

Vendors should either disable aging or set timers to high values for ARP and layer-2 forwarding databases.

 

4.2.3        Procedure

1. Using a binary search algorithm, we offer traffic to each interface in a fully meshed pattern to determine the throughput rate and frames received in sequence.

 

2. At an intended load of 10 percent of line rate, we also run tests to determine average delay, maximum delay, and jitter.

 

Note that this procedure varies from the RFC 2544 recommendation to measure latency at the throughput level. Experience suggests that devices with less-than-line-rate throughput will have difficulty delivering all traffic without loss across multiple iterations. Latency measurements in the presence of loss are invalid. Since it may not be practical to measure latency at the throughput level, we instead use an offered load of 10 percent of line rate.

 

3. We repeat this test with 64-, 256-, 1,518- and 9,000-byte frames.

 

4. We repeat steps 1-3 on the lowest common number of interfaces supported by any participating vendor, and on the maximum number of 1000-Mbit/s interfaces supported by the DUT.

 

Test duration is 60 seconds per iteration.

 

The precision of delay and jitter measurements is +/- 100 nanoseconds.

 

4.2.4        Metrics

Throughput (64, 256, 1518, 9000-byte frames)

Average delay (64, 256, 1518, 9000-byte frames)

Maximum delay (64, 256, 1518, 9000-byte frames)

Jitter (64, 256, 1518, 9000-byte frames)

Frames received out of sequence (all tests)

 

4.2.5        Reporting requirements

DUT configuration

DUT hardware and software version

SmartBits configuration

Test results

 

4.3        Performance with maximum VLANs

4.3.1        Objectives

Determine throughput, delay, jitter, and sequencing for 10 gigabit Ethernet and gigabit Ethernet interfaces forwarding unicast IPv4 traffic from the maximum number of VLANs the DUT will support

 

4.3.2        Test bed configuration

The physical setup for this event is identical to that of the baseline test in section 4.1, where a single 10-gigabit Ethernet uplink exchanges traffic with 10 gigabit Ethernet edge interfaces.

 

The only difference is that in this event, we enable 802.1q VLAN tagging and use the maximum number of VLANs the DUT will support.

 

Vendors must declare to Network Test the maximum number of VLANs the DUT supports.

 

We configure the 10G Ethernet interfaces as trunk ports, capable of carrying traffic to or from all VLANs.

 

We divide VLANs evenly across all edge interfaces. For example, if the DUT supports a maximum of 1000 VLANs, we assign 100 VLANs to each of 10 edge interfaces. In cases where the maximum VLAN count is not an integer multiple of 10, we assign the integer quotient to each of the first nine interfaces and the quotient plus the remainder to the tenth. For example, if a DUT supports a maximum of 4072 VLANs, we assign 407 VLANs to interfaces 1-9 and 409 VLANs to interface 10.
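The division works out as follows (a worked example in code; the helper name is ours):

```python
def vlans_per_interface(max_vlans, edge_ports=10):
    """Divide max_vlans across the edge ports, assigning any remainder
    to the last (tenth) interface."""
    base, remainder = divmod(max_vlans, edge_ports)
    return [base] * (edge_ports - 1) + [base + remainder]

print(vlans_per_interface(1000))  # ten interfaces with 100 VLANs each
print(vlans_per_interface(4072))  # 407 on interfaces 1-9, 409 on interface 10
```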

 

As in the baseline tests, we emulate a single host with a unique source MAC and IP address on each SmartBits interface and offer traffic in a partially meshed (backbone) pattern among all interfaces.

 

Note this is a layer-2 test. While we offer UDP/IP packets, all interfaces are members of the same IP subnet. This layer-2 emphasis reflects the way these DUTs are most often used in customer deployments – as traffic aggregators in wiring closets or data centers. In many enterprise network designs, devices other than this type of DUT handle IP routing duties.

 

The MAC addresses we use take the form 00:PP:PP:RR:RR:RR, where PP is the test instrument’s port number (expressed in hexadecimal format) and RR are pseudorandom hexadecimal numbers.

 

We will use Spirent SmartFlow and/or SAI scripts to generate and analyze traffic.

 

The test traffic shall consist of 68-, 260-, 1,522-, and 9004-byte frames[2] carrying VLAN tags and UDP/IP headers (offered in separate runs) using a bidirectional traffic orientation and a partially meshed distribution between the 10G Ethernet and gigabit Ethernet interfaces. See RFC 2285 for definitions of traffic orientation and distribution.

 

Management traffic such as OSPF hellos, discovery protocols, or VRRP messages may interfere with test traffic, and may adversely affect throughput. Vendors should either disable such messages or set timers high enough so that management traffic does not degrade data-plane performance.

 

Vendors should either disable aging or set timers to high values for layer-2 forwarding databases.

 

On the edge (gigabit Ethernet) interfaces, vendors should disable autonegotiation if possible and configure for 1000-Mbit/s full-duplex operation.

 

4.3.3        Procedure

1. Using a binary search algorithm, we offer traffic to each interface in a bidirectional orientation and partially meshed distribution between the 10G Ethernet and gigabit Ethernet interfaces to determine the throughput rate and frames received in sequence.

 

2. At an intended load of 10 percent of line rate, we also run tests to determine average delay, maximum delay, and jitter.

 

Note that this procedure varies from the RFC 2544 recommendation to measure latency at the throughput level. Experience suggests that devices with less-than-line-rate throughput will have difficulty delivering all traffic without loss across multiple iterations. Latency measurements in the presence of loss are invalid. Since it may not be practical to measure latency at the throughput level, we instead use an offered load of 10 percent of line rate.

 

3. We repeat this test with 68-, 260-, 1,522- and 9,004-byte frames. We compare results from these VLAN tests with those from the baseline tests.

 

Test duration is 60 seconds per iteration.

 

The precision of delay and jitter measurements is +/- 100 nanoseconds.

 

4.3.4        Metrics

Throughput (68, 260, 1522, 9004-byte frames)

Average delay (68, 260, 1522, 9004-byte frames)

Maximum delay (68, 260, 1522, 9004-byte frames)

Jitter (68, 260, 1522, 9004-byte frames)

Frames received out of sequence (all tests)

 

4.3.5        Reporting requirements

DUT configuration

DUT hardware and software version

SmartBits configuration

Test results

 

4.4        Performance with maximum ACEs

4.4.1        Objectives

Determine throughput, delay, jitter, and sequencing for 10 gigabit Ethernet interfaces forwarding unicast IPv4 traffic with the maximum number of access control entries (ACEs) defined

 

4.4.2        Test bed configuration

The physical setup for this event is identical to that of the baseline test in section 4.1, where a single 10-gigabit Ethernet uplink exchanges traffic with 10 gigabit Ethernet edge interfaces.

 

The only difference is that in this event, we enable the maximum number of access control entries (ACEs) per port the DUT will support.

 

Vendors must declare to Network Test the maximum number of ACEs and ACLs per port the DUT supports.

 

All but one of the ACEs will deny access based on source IP address. The final ACE will allow access based on source IP address. Prior to running the performance test, we will spot-check the ACEs to verify that the DUT drops traffic from prohibited addresses.

 

The traffic to be blocked will come from 1.1.1.1/32, 1.1.1.3/32, 1.1.1.5/32, and so on. Note that we deliberately skip every other address to prevent the DUT from aggregating the entries.
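The address sequence can be generated as below (our own sketch, included only to make the skip pattern concrete):

```python
import ipaddress

def deny_addresses(count, start="1.1.1.1"):
    """Source addresses to block: every other /32 starting at 1.1.1.1,
    so adjacent entries cannot be aggregated into a shorter prefix."""
    base = int(ipaddress.IPv4Address(start))
    return [str(ipaddress.IPv4Address(base + 2 * i)) for i in range(count)]

print(deny_addresses(3))  # ['1.1.1.1', '1.1.1.3', '1.1.1.5']
```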

 

As in the baseline tests, we emulate a single host with a unique source MAC and IP address on each SmartBits interface and offer traffic in a partially meshed (backbone) pattern among all interfaces.

 

Note this is a layer-2 test. While we offer UDP/IP packets, all interfaces are members of the same IP subnet. This layer-2 emphasis reflects the way these DUTs are most often used in customer deployments – as traffic aggregators in wiring closets or data centers. In many enterprise network designs, devices other than this type of DUT handle IP routing duties.

 

The MAC addresses we use take the form 00:PP:PP:RR:RR:RR, where PP is the test instrument’s port number (expressed in hexadecimal format) and RR are pseudorandom hexadecimal numbers.

 

We will use Spirent SmartFlow and/or SAI scripts to generate and analyze traffic.

 

The test traffic shall consist of 64-, 256-, and 1,518-byte frames and 9,000-byte jumbo frames carrying UDP/IP headers (offered in separate runs), using a bidirectional traffic orientation and a partially meshed distribution between the 10G Ethernet and gigabit Ethernet interfaces. See RFC 2285 for definitions of traffic orientation and distribution.

 

Management traffic such as OSPF hellos, discovery protocols, or VRRP messages may interfere with test traffic, and may adversely affect throughput. Vendors should either disable such messages or set timers high enough so that management traffic does not degrade data-plane performance.

 

Vendors should either disable aging or set timers to high values for layer-2 forwarding databases.

 

On the edge (gigabit Ethernet) interfaces, vendors should disable autonegotiation if possible and configure for 1000-Mbit/s full-duplex operation.

 

4.4.3        Procedure

1. To verify ACEs are in effect, we offer traffic from each of the prohibited source addresses. The DUT should drop all traffic. If it does not, the DUT fails this event and we do not proceed.

 

2. Using a binary search algorithm, we offer traffic to each interface in a bidirectional orientation and partially meshed distribution between the 10G Ethernet and gigabit Ethernet interfaces to determine the throughput rate and frames received in sequence.

 

3. At an intended load of 10 percent of line rate, we also run tests to determine average delay, maximum delay, and jitter.

 

Note that this procedure varies from the RFC 2544 recommendation to measure latency at the throughput level. Experience suggests that devices with less-than-line-rate throughput will have difficulty delivering all traffic without loss across multiple iterations. Latency measurements in the presence of loss are invalid. Since it may not be practical to measure latency at the throughput level, we instead use an offered load of 10 percent of line rate.

 

4. We repeat this test with 64-, 256-, 1,518- and 9,000-byte frames. We compare results with those from the baseline tests.

 

Test duration is 60 seconds per iteration.

 

The precision of delay and jitter measurements is +/- 100 nanoseconds.

 

4.4.4        Metrics

Throughput (64, 256, 1518, 9000-byte frames)

Average delay (64, 256, 1518, 9000-byte frames)

Maximum delay (64, 256, 1518, 9000-byte frames)

Jitter (64, 256, 1518, 9000-byte frames)

Frames received out of sequence (all tests)

 

4.4.5        Reporting requirements

DUT configuration

DUT hardware and software version

SmartBits configuration

Test results

 

4.5        Rapid spanning tree performance

4.5.1        Objective

To determine the failover time of 802.1w rapid spanning tree upon failure of a primary link

 

4.5.2        Test bed configuration

We set three of the DUT’s 10/100/1000 interfaces to full-duplex, 1000-Mbit/s operation if possible. We use autonegotiation if manual settings are not possible.

 

We configure the DUT to support rapid spanning tree bridging. We configure two DUT interfaces as members of the spanning tree, setting one as the root interface and one in blocking mode.

 

We attach SmartBits interfaces to all three interfaces of the DUT. During the test, one SmartBits interface offers traffic, and one of the two interfaces attached to spanning tree ports should receive it.

 

The MAC addresses we use take the form 00:PP:PP:RR:RR:RR, where PP is the test instrument’s port number (expressed in hexadecimal format) and RR are pseudorandom hexadecimal numbers.

 

The test traffic shall consist of 64-byte frames offered in a unidirectional pattern.

 

We verify the DUT has learned all MAC addresses in use before collecting results.

 

We use Spirent SmartFlow and/or SAI as the traffic generator.

 

4.5.3        Procedure

1. To validate spanning tree operation, we offer a single flow (one source and destination MAC address) of 64-byte frames to one interface of the DUT at a rate of 1 million fps and verify that all frames are received through one of the spanning tree interfaces.

 

We verify the DUT has learned all MAC addresses in use before collecting results, and conduct a “learning run” to populate the DUT’s address table if necessary.


2. To measure failover time, we repeat step 1. At least 10 seconds into the test duration, we remove the cable of the DUT’s root interface in the spanning tree, forcing spanning tree to reconverge.

 

At the end of the test duration, we verify that the DUT changed the other spanning tree interface from blocking to active state, and we measure total frames received. With an offered rate of 1 million fps, each dropped frame represents 1 microsecond of convergence time.
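The conversion from frame loss to convergence time can be written directly (a trivial helper of our own):

```python
def convergence_time_us(frames_offered, frames_received, rate_fps=1_000_000):
    """At a constant offered rate, each lost frame accounts for
    1/rate_fps seconds of outage; at 1 million fps, 1 microsecond."""
    return (frames_offered - frames_received) * 1_000_000 / rate_fps

# 60 seconds at 1 million fps with 59,950,000 frames received:
print(convergence_time_us(60_000_000, 59_950_000))  # 50000.0 us (50 ms)
```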

 

4.5.4        Metrics

Convergence time

 

4.5.5        Reporting requirements

DUT configuration

DUT hardware and software version

SmartBits configuration

Test results

4.6        SSH and 802.1x support

4.6.1        Objective

To determine the level of secure shell (SSH) server support in the DUT

To determine the granularity of 802.1x authentication support in the DUT

 

4.6.2        Test bed configuration

We enable SSH using the DUT’s default settings. For example, if the DUT supports network access via SSHv1 and SSHv2 by default, this is how we configure the DUT.

 

The test instrument in this event is a host running OpenSSH v. 3.6.1p1 or later, or VanDyke Software’s SecureCRT v. 4.00 or later.

 

This test explicitly examines remote access to the DUT over a network; access via a directly attached serial console is out of scope.

 

For 802.1x testing, we direct authentication requests to the Internet Authentication Service of Windows 2000 Advanced Server. The W2KAS box defines usernames user1, user2, … user10. Clients (supplicants) will run Windows XP with included 802.1x client software.

 

4.6.3        Procedure

1. Using a host running SSH client software, we attempt to connect to the DUT over a network using SSH’s version and verbose flags. The IP addressing used is unimportant for purposes of this test.

 

2. To determine SSHv1 support, we issue this command:

 

ssh -1 -v -v -v <DUT address>

 

3. To determine SSHv2 support, we issue this command:

 

ssh -2 -v -v -v <DUT address>

 

In both cases, we save the output of the session for analysis.

 

4. We also check whether the DUT’s default configuration supports three other insecure remote access methods: Telnet (port 23), Web (port 80), and SNMPv1/v2C writes (typically port 161).
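The TCP portion of that check can be sketched as below (our own helper, not part of the methodology; SNMP runs over UDP 161 and requires an actual SNMP query, so it is omitted here):

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP service answers on host:port, e.g. Telnet
    on port 23 or a Web server on port 80."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example against a hypothetical DUT management address:
# for port in (23, 80):
#     print(port, tcp_port_open("192.0.2.1", port))
```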

 

The SSH event is a simple pass/fail test. To obtain a passing grade, a DUT must:

 

a.       support SSHv2 by default

b.      not support SSHv1 by default

c.       use an SSH implementation with no known vulnerabilities at test time

d.      not support remote access via Telnet, Web, SNMPv1, or SNMPv2C by default when SSH is enabled

 

Inability to meet any of these criteria will result in a failing grade.

 

To determine known vulnerabilities, we compare test results against advisories from public security databases including the following:

 

CERT

ISS X-Force database

SecurityFocus

ThreatFocus

 

 

5. For 802.1x authentication, we configure the DUT to allow supplicants to attempt authentication. We check for various access means supported: user ID, port-based, MAC address based, and other.

 

This is a pass/fail functional test: Either a given authentication works or it does not.

 

4.6.4        Metrics

SSH implementation and version number

SSH protocol version(s) supported

802.1x authentication criteria supported

4.7        Features

4.7.1        Objective

To determine the feature set supported by the DUT

4.7.2        Test bed configuration

Not applicable

 

4.7.3        Procedure

We ask participating vendors to complete a features questionnaire listing various attributes supported by the DUT. Examples of such attributes include the number and type of physical interfaces; routing protocols; and management methods.

 

The questionnaire includes a space for vendors to describe features not covered by the various questions.

 

Network World will publish the results of the features questionnaire, usually in its online edition. The publication should include a caveat that responses are supplied by vendors, and not all features have been verified by Network World.

             

4.7.4        Metrics

Features supported

 

4.7.5        Reporting requirements

Features questionnaire

 

 


5         Change history

 

Version 1.2

29 June 2004

In sections 4.1, 4.2, 4.3, and 4.5, reduced emulated host count to one per port (was 1,024 or 1,000)

 

Version 1.1

11 June 2004

In section 2.1, added scoring criteria to table

In section 2.1, deleted text on testing extra features

In section 2.2, changed beta cutoff date to 30 days post publication

In sections 4.2.2 and 4.2.3, added LCD and maximum interfaces test cases

In section 4.4.2, specified that ACEs and ACLs are applied on a per-port basis, and not globally

 

Version 1.0

28 May 2004

Initial public release

 



[1] All frame length references in this document cover IP over Ethernet. We measure frame length from the first byte of the Ethernet MAC header to the last byte of the CRC. Unless otherwise specified, IP packets contain IP and UDP headers.

 

[2] These frame lengths include the 4-byte VLAN tag. If any DUT does not support 9004-byte jumbo frames, we will reduce frame length to 8096 bytes to maintain a 9000-byte maximum, and note the difference in reporting results.
