Light Reading Lab
Test: Internet Core Routers
Scheduled for
Publication in 1Q2001
Test Methodology
v. 2.0.2 Copyright © 2000 by Network Test Inc. Vendors are
encouraged to comment on this document and any other aspect of test methodology. Network
Test reserves the right to change the parameters of this test at any time.
By David Newman, Glenn Chagnot, and Jerry Perser
Please forward comments to dnewman@networktest.com, glenn.chagnot@spirentcom.com, and jerry.perser@spirentcom.com
This document describes a series of
tests to be conducted on IP routers intended for use in the core of Internet service
provider networks. Results of these tests will be published by Light Reading, the online periodical
of optical networking.
The tests described here include: baseline IP forwarding; longest-match lookups; BGP4 table capacity; packet filtering; route flapping; BGP convergence; class of service; baseline MPLS forwarding; and label-switched path capacity.
These tests involve two test beds, both using packet-over-Sonet (POS) interfaces: one with an OC-48 (2.4 Gbit/s) core, and another with an OC-192 (9.6 Gbit/s) core.
Participating vendors MUST participate in the OC-48 events and MAY participate in the OC-192 events. Light Reading plans to give separate awards in the OC-48 and OC-192 events. Nonparticipation in the OC-192 tests in no way diminishes a vendor's chance of winning award(s) in the OC-48 events.
Section 2 of this document describes the test bed in general terms and introduces the test equipment to be used.
Section 3 describes the specific tests to be performed.
The URL for this document is http://www.networktest.com/LR_router_00/meth.html
A list of changes for each version of this methodology is available at http://www.networktest.com/LR_router_99/changelog.html
Three companion spreadsheets describing test traffic are available via anonymous FTP.
The first spreadsheet describes the IP prefixes, prefix length distribution, and packet length distribution to be used on the test bed with an OC-48c core. The URL is:
ftp://public.networktest.com/LR_router_00/LR_router_prefixes_OC48.ZIP
The second spreadsheet describes the IP prefixes, prefix length distribution, and packet length distribution to be used on the test bed with an OC-192 core. The URL is:
ftp://public.networktest.com/LR_router_00/LR_router_prefixes_OC192.ZIP
The third spreadsheet describes the AS_PATH length distribution to be used in the BGP routing table entries. The URL is:
ftp://public.networktest.com/LR_router_00/AS-PATH_distro.ZIP
This section discusses the topologies of the test beds; offers configuration instructions for participating vendors; and introduces the test equipment to be used.
This test involves two test beds: one with a core operating at OC-48 rates and one with a core operating at OC-192 rates. The tests described in Section 3 will be conducted on both the OC-48 and OC-192 test beds.
Figure 1 below shows the basic configuration of equipment to be used in the OC-48 test bed.
Figure 1: The Internet Core Router OC-48 Test Bed
For the OC-48 test bed, each vendor must supply four routers, each equipped with six OC-48c POS interfaces. Each router must support BGP4 and OSPFv2 routing. We define a router as a device that supports BGP4 and OSPFv2 and contains a unique L2 forwarding database and L3 routing table for each set of six POS interfaces.
Support for MPLS is preferable but not mandatory. As noted in the invitation letter mailed to prospective participants, Light Reading plans to give separate awards for best BGP device, best MPLS device, and best overall device. Nonparticipation in the MPLS tests does not preclude winning awards in the other events.
In addition, vendors must supply any cabling necessary to interconnect their core interfaces. Network Test and/or Spirent Communications will supply cabling to link the tester devices to the routers.
Figure 2 below shows the basic configuration of equipment to be used in the OC-192 test bed.
Figure 2: The Internet Core Router OC-192 Test Bed
For the OC-192 test bed, each vendor must supply four routers, each equipped with 12 OC-48c POS interfaces at the edge and three OC-192 interfaces (or equivalent capacity) in the core. Each router must support BGP4 and OSPFv2 routing. We define a router as a device that supports BGP4 and OSPFv2 and contains a unique L2 forwarding database and L3 routing table for each set of 12 OC-48c interfaces plus three OC-192 interfaces (or equivalent capacity).
Support for MPLS is preferable but not mandatory. As noted in the invitation letter mailed to prospective participants, Light Reading plans to give separate awards for best BGP device, best MPLS device, and best overall device. Nonparticipation in the MPLS tests does not preclude winning awards in the other events.
In addition, vendors must supply
any cabling necessary to interconnect their core interfaces. Network Test and/or Spirent
Communications will supply cabling to link the tester devices to the routers.
Router A will be the clock source; all other devices will use loop timing from Router A.
The Sonet layer on the SmartBits edge interfaces will be configured as follows:
Framing: Sonet (not SDH)
Rate: OC-48c
Path signal label: 0xCF
FCS: 32-bit CRC
The PPP layer on the SmartBits edge interfaces will be configured as follows:
Encapsulation: PPP (RFC 1662; header = FF 03 00 21)
MRU: 1500 bytes
Maximum configurations: 10
Maximum failures: 5
Maximum terminations: 2
Magic number: 2
Restart timer: 3
Retry count: 1
LCP enabled
IPCP enabled
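The PPP encapsulation configured above (RFC 1662; header = FF 03 00 21) can be illustrated with a short sketch. The four header bytes are the HDLC-like all-stations address (0xFF), the unnumbered-information control field (0x03), and the two-byte PPP protocol number for IPv4 (0x0021); the function name here is illustrative, not part of any tester API.

```python
# Sketch: the PPP (RFC 1662) encapsulation header configured above.
# FF 03 00 21 = HDLC address (0xFF), control (0x03), PPP protocol 0x0021 (IPv4).

def ppp_encapsulate(ip_packet: bytes) -> bytes:
    """Prepend the HDLC-like address/control and PPP protocol fields."""
    ADDRESS = 0xFF        # all-stations address
    CONTROL = 0x03        # unnumbered information
    PROTO_IPV4 = 0x0021   # PPP protocol number for IPv4
    header = bytes([ADDRESS, CONTROL]) + PROTO_IPV4.to_bytes(2, "big")
    return header + ip_packet

# A minimal 40-byte IPv4 packet (version/IHL byte 0x45, rest zero-filled):
frame = ppp_encapsulate(b"\x45\x00" + b"\x00" * 38)
assert frame[:4] == b"\xff\x03\x00\x21"
assert len(frame) == 44  # 40-byte IP packet plus 4-byte PPP header
```

Note that, per footnote [1], the 40-byte packet length counts only the IP header and payload; the 4 bytes of PPP encapsulation are link-layer overhead.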
This test bed models part of the core network for a single service provider. Accordingly, one autonomous system (AS) number will be used for all four devices under test.
All Smartbits interfaces will represent ASs external to the devices under test.
For all tests except maximum BGP table capacity (section 3.3), vendors should set BGP hold timers to zero (infinite, no keepalives exchanged) to avoid conflicts with offered test traffic.
The core interfaces (that is, the interfaces on the devices under test used to connect routers A, B, C, and D) must exchange topology update information using OSPF. Vendors must set OSPF hello and link state update intervals as high as possible to avoid conflicts with offered test traffic.
Vendors must configure the edge interfaces with the following IP addresses:
Router   Interface   Address/mask
A        1           217.0.1.2/24
A        2           217.0.2.2/24
A        3           217.0.3.2/24
B        4           217.0.4.2/24
B        5           217.0.5.2/24
B        6           217.0.6.2/24
C        7           217.0.7.2/24
C        8           217.0.8.2/24
C        9           217.0.9.2/24
D        10          217.0.10.2/24
D        11          217.0.11.2/24
D        12          217.0.12.2/24
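The addressing plan follows a simple pattern: edge interface n takes 217.0.n.2/24, with routers A through D owning three consecutive interface numbers apiece. A short sketch reproduces the assignments (the function name is illustrative only):

```python
# Sketch: reproducing the edge-interface addressing plan above.
# Interface n (1-12) gets 217.0.n.2/24; routers A-D own three
# consecutive interfaces each (A: 1-3, B: 4-6, C: 7-9, D: 10-12).

import ipaddress

def edge_addresses():
    plan = {}
    routers = "ABCD"
    for iface in range(1, 13):
        router = routers[(iface - 1) // 3]  # three interfaces per router
        plan[(router, iface)] = ipaddress.ip_interface(f"217.0.{iface}.2/24")
    return plan

plan = edge_addresses()
assert str(plan[("A", 1)]) == "217.0.1.2/24"
assert str(plan[("D", 12)]) == "217.0.12.2/24"
```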
The prefixes and test traffic to be offered are described in a separate document. This document, an Excel spreadsheet, is available for anonymous FTP here:
ftp://public.networktest.com/LR_router_00/LR_router_prefixes.zip
The principal test instrument for this project is the SmartBits traffic generator/analyzer manufactured by Spirent Communications Inc. (Chatsworth, Calif.). Spirent's SmartBits 6000 chassis will be equipped with the company's new POS-6505A TeraMetrics SmartModules. The TeraMetrics SmartModule offers line-rate traffic generation and analysis over OC-48c interfaces.
For the tests with an OC-48c core, three OC-48c SmartBits interfaces will be attached to each of four routers under test. Thus, a total of 12 OC-48c SmartBits interfaces will be attached to the system under test.
For the tests with an OC-192 core, 12 OC-48c SmartBits interfaces will be attached to each of four routers under test. Thus, a total of 48 OC-48c SmartBits interfaces will be attached to the system under test.
In addition to the SmartBits, an Adtech AX/4000 Broadband Test System equipped with OC-48c interfaces will be used for troubleshooting and protocol capture and decode purposes.
All interfaces will be connected using single-mode fiber-optic cabling and SC (rectangular) connectors.
For each routine in this section, this document describes:
· the test objective;
· the test bed configuration;
· the test procedure; and
· the results to be reported.
A primary design goal of this methodology was to accomplish all events on each test bed in five working days.
To keep to the five-day schedule, early versions of this methodology contained relatively few events and relatively simple configurations.
After the methodology was first circulated, vendors offered numerous excellent suggestions and additions. We have attempted to accommodate as many of these additions as possible, keeping in mind that each vendor still must complete all the OC-48c tests within five working days (with an additional five days for the OC-192 tests).
We will make every effort to conduct all procedures on all products. However, all tests will be conducted on a time-permitting basis.
To determine baseline packet loss, latency, and jitter for routed IP traffic
--Test bed topology shown in section 2.1
--Total BGP prefixes advertised to the SUT: 201,165 (approximately 2.3 times the 88,500 prefixes shown in core routers on 21 August 2000, according to Telstra)
--No overlapping prefixes
--For the OC-48c test bed, prefix lengths will range from /13 to /26, with distribution following Mae-East and Mae-West statistics taken from http://www.merit.edu/ipma/routing_table/#prefix_length
--For the OC-192 test bed, prefix lengths will range from /12 to /26, with distribution following Mae-East and Mae-West statistics taken from http://www.merit.edu/ipma/routing_table/#prefix_length
--AS_PATH length ranges from 1 to 24
--AS_PATH length distribution follows measurements from 35 major providers, 28 August
2000-13 September 2000; see ftp://public.networktest.com/LR_router_00/AS-PATH_distro.zip
Using the Spirent SmartBits to offer incrementally heavier traffic loads, Network Test will determine the maximum forwarding rate each system under test can sustain with zero packet loss.
Two iterations will be run: one consisting exclusively of 40-byte IP packets[1], and one with a quad-modal distribution of packet sizes.
In the quad-modal distribution, the lengths and distribution of the IP packets are as follows:
IP packet length (bytes)   Streams per SmartBits interface   Percentage of total streams[2]
40                         37                                56.06%
1,500                      15                                22.73%
576                        11                                16.67%
52                         3                                 4.55%
The packet length distribution is a composite of 22 samples taken from Merit, a consortium of Michigan-based ISPs, between 28 August 2000 and 13 September 2000. The raw data is available from http://moat.nlanr.net/Datacube. The packet lengths shown here are the four highest medians of all IP packet lengths between 20 and 1,500 bytes.
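The percentages in the quad-modal distribution follow directly from the per-interface stream counts (37 + 15 + 11 + 3 = 66 streams). A quick check of the arithmetic:

```python
# Sketch: the quad-modal percentages derive from the per-interface
# stream counts; 66 streams total per SmartBits interface.

streams = {40: 37, 1500: 15, 576: 11, 52: 3}   # packet length -> stream count
total = sum(streams.values())                   # 66 streams
pct = {size: round(100 * n / total, 2) for size, n in streams.items()}

assert total == 66
assert pct == {40: 56.06, 1500: 22.73, 576: 16.67, 52: 4.55}
# The rounded percentages sum to 100.01, hence footnote [2].
```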
Maximum forwarding rate with zero loss (percent of media rate)
Average latency (microseconds)
Average jitter (microseconds)
To determine the impact of route lookups for mixed-length prefixes on device forwarding rates, latency, and jitter
--Test bed topology shown in section 2.1
--Total BGP prefixes advertised to the SUT: 251,165 (the 201,165 prefixes from the baseline tests in section 3.1, plus 50,000 additional, shorter prefixes to force longest-match lookups)
--Overlap of 50,000 prefixes, about 20 percent of total prefixes
Network Test will offer the same maximum offered load as in the baseline test (section 3.1).
Network Test will compare results of this test with results from the baseline test and determine any differences in forwarding rates, latency, and jitter.
Maximum forwarding rate with zero loss (percent of media rate)
Average latency (microseconds)
Average jitter (microseconds)
This test will determine the maximum number of BGP4 prefixes one router will learn and propagate.
--Test bed topology shown in section 2.1, although only routers A and B are used
--Vendors should set BGP hold timers to 30 seconds for this test only
The Spirent SmartBits tester will be attached to one interface of routers A and B. The routing tables of the devices under test will be cleared. Then 80,000 /22 prefixes will be advertised by the tester to Router A. The correct learning of all prefixes will be verified on the BGP4 instance running on the SmartBits tester attached to Router B.
The number of prefixes to be injected will begin at 80,000 and increase in increments of 40,000 until the system under test fails to propagate one or more prefixes. The routing table of the devices under test will be cleared between iterations.
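The ramp procedure above can be sketched as a simple search loop: start at 80,000 prefixes and step by 40,000 until propagation fails. The advertise-and-verify callback is a hypothetical stand-in for the SmartBits control operations (clear tables, advertise to Router A, verify on the peer attached to Router B), not a real tester API.

```python
# Sketch of the table-capacity ramp: advertise an increasing number of
# prefixes until the system under test fails to propagate one or more.

def find_capacity(advertise_and_verify, start=80_000, step=40_000):
    """Return the largest prefix count that propagated without loss."""
    count, last_good = start, 0
    while advertise_and_verify(count):  # clear tables, advertise, verify
        last_good = count
        count += step
    return last_good

# Example with a simulated device that holds at most 250,000 prefixes:
assert find_capacity(lambda n: n <= 250_000) == 240_000
```

The reported result, "maximum prefixes learned," is the last count that propagated fully; the resolution of the answer is therefore one 40,000-prefix step.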
Maximum prefixes learned
This test will determine the point at which packet filtering rules degrade forwarding rate, latency, or jitter.
Vendors will configure at least twice as many filters on the devices under test as the devices have interfaces (i.e., at least 12 filters per router for tests on the OC-48 test bed and at least 30 filters per router for tests on the OC-192 test bed). Half the filters will be used for filtering on ingress and the other half for filtering on egress.
We use the term filter to describe a set of policies that either allow or deny specific patterns found in offered packets. One filter may contain many subcomponents, each of which may describe different prefixes, source or destination IP addresses, or source or destination TCP or UDP port numbers. The names given to these subcomponents are vendor-specific.
The filters must cover as many items as there are prefixes of each length; in this case, prefixes from /13 to /26. Note that to create the most stressful test case, we will offer traffic that hits every prefix in the routing table. (See the prefix length distribution document for an exact list of the prefixes to be used. That document is available at ftp://public.networktest.com/LR_router_00/LR_router_prefixes.zip.)
Because prefixes do not cover contiguous address space and specific prefixes will not be hit in sequential order, it will not be possible to use filters that aggregate multiple prefixes. For example, the prefixes 190.x.62.0/24 and 190.x.64.0/24 must each be covered by separate filters, and not by a more general rule covering a 190.x/16 prefix.
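The non-aggregation requirement can be demonstrated with Python's ipaddress module. The "x" octet is elided in the methodology; the value 20 below is a hypothetical placeholder used only for illustration.

```python
# Sketch: why the two example prefixes cannot share one aggregate filter.
# The second octet (20) is a hypothetical stand-in for the elided "x".

import ipaddress

a = ipaddress.ip_network("190.20.62.0/24")
b = ipaddress.ip_network("190.20.64.0/24")

# The prefixes are not adjacent, so they do not collapse into one network...
collapsed = list(ipaddress.collapse_addresses([a, b]))
assert len(collapsed) == 2

# ...and any common supernet (e.g., 190.20.0.0/17) matches address space
# belonging to neither prefix, so it cannot serve as a precise filter.
super_ = a.supernet(new_prefix=17)
assert b.subnet_of(super_)
assert super_.num_addresses > a.num_addresses + b.num_addresses
```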
In all filtering tests, the tester will offer a known number of packets hitting each prefix.
All tests will be run with the 40-byte packets as described in Section 3.1.3.
Two subroutines of this test will be run, one apiece for allow and deny filters. For both subroutines, the filters are applied to an increasing number of interfaces: first output, then input, then input and output.
In the first subroutine, the action at the end of the match should be an "accept" or "allow" for the test case. The filters must match the individual prefixes of the traffic offered by the SmartBits, and the devices under test should forward all test traffic after inspection by the filter.
A known quantity of packets will be offered by the SmartBits to all edge interfaces of the devices under test. The receiving SmartBits interface will note the packet count, latency, and jitter of all traffic. These results will be compared with results from baseline measurements in section 3.1 to note any increase in packet loss, latency, or jitter.
In the second subroutine, the action at the end of the match should be "deny" or "drop" for the test case. The filters must not match any prefixes of the traffic offered by the SmartBits, and the devices under test should forward all test traffic after inspection by the filter.
A known quantity of packets will be offered by the SmartBits to all edge interfaces of the devices under test. The receiving SmartBits interface will note the packet count, latency, and jitter of all traffic. These results will be compared with results from baseline measurements in section 3.1 to note any increase in packet loss, latency, or jitter.
For accepted traffic with ingress filtering:
Packet loss (percent of offered packets)
Average latency (microseconds)
Average jitter (microseconds)
For accepted traffic with egress filtering:
Packet loss (percent of offered packets)
Average latency (microseconds)
Average jitter (microseconds)
For accepted traffic with ingress and egress filtering:
Packet loss (percent of offered packets)
Average latency (microseconds)
Average jitter (microseconds)
For denied traffic with ingress filtering:
Packet loss (percent of offered packets)
Average latency (microseconds)
Average jitter (microseconds)
For denied traffic with egress filtering:
Packet loss (percent of offered packets)
Average latency (microseconds)
Average jitter (microseconds)
For denied traffic with ingress and egress filtering:
Packet loss (percent of offered packets)
Average latency (microseconds)
Average jitter (microseconds)
This test will determine the effect, if any, on forwarding rate during route flapping.
Test bed topology shown in section 2.1.
All devices under test will be configured to run BGP4 and OSPF. The Spirent SmartBits tester will be attached to one interface of routers A and B to generate route flapping and to verify table entries at every flap event.
The Spirent SmartBits 6000 will offer 40-byte packets to all POS interfaces in a partially meshed pattern at line rate. Then this procedure will be followed:
Step 1. Advertise 201,165 prefixes to Router A. Each prefix has primary, secondary, and tertiary routes. The routes are distinguished through use of BGP's AS_PATH attribute, with primary routes having the shortest AS_PATH length.
Step 2. Withdraw 50,291 primary routes from the routing table. (This is 25 percent of all prefixes.)
Step 3. Verify that traffic is routed over 50,291 secondary routes.
Step 4. Verify that traffic is routed on all other available routes.
Step 5. Re-advertise the original 50,291 primary routes after an interval of 30 seconds.
Step 6. Verify that all traffic is routed over the primary paths.
Step 7. Remove and re-advertise 50,291 routes from the routing table continuously in 30-second intervals for a period of 120 seconds.
The routes removed in each iteration will represent noncontiguous entries in the routing table.
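Steps 1 through 7 above can be sketched as an event schedule: 25 percent of the 201,165 prefixes (50,291 routes) are withdrawn and re-advertised every 30 seconds for 120 seconds. The withdraw/readvertise labels below are illustrative stand-ins for tester operations, not a real API.

```python
# Sketch of the continuous-flap phase (step 7): withdraw and re-advertise
# 25 percent of the prefixes in 30-second intervals for 120 seconds.

PREFIXES = 201_165
FLAP_SET = PREFIXES // 4           # 50,291 primary routes (25 percent)
INTERVAL, DURATION = 30, 120       # seconds

def flap_schedule():
    """Yield (time, action, route_count) events for the flap phase."""
    t = 0
    while t < DURATION:
        yield (t, "withdraw", FLAP_SET)
        yield (t + INTERVAL, "readvertise", FLAP_SET)
        t += 2 * INTERVAL

events = list(flap_schedule())
assert FLAP_SET == 50_291
assert events[0] == (0, "withdraw", 50_291)
assert events[-1] == (90, "readvertise", 50_291)
```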
Forwarding rate of stable paths (packets per second)
Forwarding rate of unstable paths (packets per second)
This test will establish the performance of the routing engine under stress.
Test bed topology shown in section 2.1.
All devices under test will be configured to run BGP4 and OSPFv2. The Spirent SmartBits tester will be attached to one interface of routers A and B to generate route flapping and to verify table entries at every flap event.
This test begins by clearing the routing tables of all devices under test (DUTs).
Next, the SmartBits testers will offer data-plane traffic to edge interfaces of all DUTs at 90 percent of line rate.
While data-plane traffic is active, the tester will bring up all the BGP peers (also SmartBits), measuring time until all peers are established and routing has converged. Convergence is calculated as the interval from beginning to bring up peers to the time all offered traffic is forwarded by the DUTs.
Next, the SmartBits will drop all BGP peering sessions while continuing to offer data-plane traffic. Convergence time will be considered as the interval from dropping all the peers to the time all traffic ceases to be forwarded by the DUTs.
Finally, the SmartBits will bring up the BGP peers again, measuring the time until all peers are established and routing has converged.
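The convergence definition used here can be expressed as a small calculation: the interval from the start of peer bring-up to the first moment the DUTs forward all offered traffic. The timestamps and rates below are illustrative values, not measured data.

```python
# Sketch of the convergence-time calculation: interval from the start of
# peer bring-up until all offered traffic is forwarded by the DUTs.

def convergence_time(t_start, samples, offered_rate):
    """samples: (timestamp, forwarded_pps) pairs taken while peers come up."""
    for t, forwarded in samples:
        if forwarded >= offered_rate:   # all offered traffic is forwarded
            return t - t_start
    return None                         # never converged

# Illustrative forwarding-rate samples against a 900,000-pps offered load:
samples = [(0.0, 0), (2.5, 400_000), (6.0, 810_000), (8.2, 900_000)]
assert convergence_time(0.0, samples, offered_rate=900_000) == 8.2
```

The peer-disconnect case is the mirror image: the interval from dropping all peers until forwarded traffic reaches zero.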
Packet loss (percent of offered load)
Convergence time for BGP peer formation (seconds)
Convergence time for BGP peer disconnect (seconds)
This test will demonstrate the ability to prioritize routed IP traffic into different service classes.
Test bed topology and configuration as in section 2.1, with one exception: to create congestion, the links between routers A and B and routers C and D will be disabled.
Vendors will configure queues and drop profiles on the devices under test to provide Gold, Silver, and Bronze levels of service, and to assign weighted round-robin ratios of 70, 20, and 5 for the three queues. Under congestion, with an offered load consisting of equal amounts of all three service levels, packets should be output in the ratio 70:20:5.
Classification of packets into these service levels must be done on ingress.
In the OC-48 tests, the aggregate input load will be equivalent to 28.8 Gbit/s (12 OC-48c edge interfaces, each operating at 2.4 Gbit/s). Since the core capacity is equivalent to 19.2 Gbit/s (four OC-48 interfaces, each operating at 2.4 Gbit/s in full-duplex mode), the resulting load will produce an overload of 150 percent, inducing congestion.
In the OC-192 tests, the aggregate input load will be equivalent to 115.2 Gbit/s (48 OC-48c edge interfaces, each operating at 2.4 Gbit/s). Since the core capacity is equivalent to 76.8 Gbit/s (four OC-192 interfaces, each operating at 9.6 Gbit/s in full-duplex mode), the resulting load will produce an overload of 150 percent, inducing congestion.
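The congestion arithmetic for both test beds follows the same pattern: aggregate edge input divided by full-duplex core capacity. A quick check, in Mbit/s:

```python
# Sketch of the overload arithmetic above: aggregate edge input load
# versus full-duplex core capacity, in Mbit/s, for both test beds.

def overload_percent(edge_ifaces, edge_mbps, core_ifaces, core_mbps):
    offered = edge_ifaces * edge_mbps        # aggregate input load
    capacity = core_ifaces * core_mbps * 2   # full-duplex core links
    return 100 * offered / capacity

assert overload_percent(12, 2_400, 4, 2_400) == 150.0   # OC-48 test bed
assert overload_percent(48, 2_400, 4, 9_600) == 150.0   # OC-192 test bed
```

Under this 150 percent load, with equal amounts of Gold, Silver, and Bronze traffic offered, the methodology expects output in the configured 70:20:5 weighted round-robin ratio.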
Aggregate forwarding rate for Gold, Silver, and Bronze packets (packets per second)
Ratio of forwarded Gold, Silver, and Bronze traffic
To determine baseline packet loss, latency, and jitter for label-switched traffic
Test bed topology shown in section 2.1.
The SmartBits tester will initiate nine label-switched paths (LSPs) at each edge interface, one label apiece for each of the nine possible destination interfaces. The nine labels aggregate all the IP prefixes described in section 3.1.3. The total number of LSPs for this test is 108 (12 edge interfaces, each with nine LSPs).
The SmartBits update interval will be an average of 30 seconds, +/- 15 seconds.
Using the Spirent SmartBits to offer incrementally heavier traffic loads, Network Test will determine the maximum forwarding rate each system under test can sustain with zero packet loss.
This test will be run exclusively with 40-byte IP packets (with packet lengths increased by MPLS labels).
Maximum forwarding rate with zero loss (percent of media rate)
Average latency (microseconds)
Average jitter (microseconds)
This test will determine the maximum number of label-switched paths one router can maintain.
Test bed topology shown in section 2.1, although only routers A and B are used
The Spirent SmartBits tester will be attached to one interface of routers A and B. The forwarding tables of the devices under test will be cleared. Then the SmartBits attached to Router A, acting as a label edge router (LER), will offer one label to Router A, establishing a label-switched path (LSP) to a destination interface on Router B. The SmartBits will create primary paths using RSVP-TE (Resource Reservation Protocol with Traffic Engineering extensions). The SmartBits will continue to initiate LSPs at a low rate (approximately 10 per second) until traffic fails to reach the destination interface on Router B.
Maximum LSPs maintained
[1] References to packet lengths in this document cover IP packets: the length from the first byte of the IP header to the last byte of the packet payload before the CRC. Packet lengths do not include link-layer headers, MPLS labels, or CRC fields.
[2] Percentages do not add to 100.00 percent due to rounding.