
Light Reading Lab Test: Edge Routers

Published 10 December 2002

Test Methodology

 

v. 2.26 Copyright © 2002 by Network Test. Vendors are encouraged to comment on this document and any other aspect of test methodology. Network Test reserves the right to change the parameters of this test at any time.

 

Please send comments to David Newman of Network Test (dnewman at networktest.com)

1         Executive Summary

This document describes the test procedures we use to compare edge routers for service provider networks. We plan to publish results of this test in Light Reading.

 

The tests described here include:

·        IP baseline forwarding and latency
·        Resiliency/redundancy
·        BGP RIB capacity
·        BGP peering session capacity
·        Device FIB capacity
·        OSPF routing capacity
·        IS-IS routing capacity
·        QoS enforcement
·        MPLS Martini VC scalability (optional)
·        MPLS VPN scalability (optional)

Note that both MPLS events are optional. We recognize prospective entrants may support the Martini drafts but not RFC 2547bis, or vice-versa. Accordingly, participating vendors may opt to take part in tests of either, neither, or both of these technologies.

1.1        Organization of This Document

This document is organized as follows. This section introduces the project. Section 2 describes product requirements, the test bed, and test equipment. Section 3 describes test procedures. Section 4 logs the changes to this document.

2          The Test Bed

This section describes the device under test, the basic test bed topology, and the test instruments.

2.1        Device Requirements

Participating vendors must supply at least one chassis, equipped in total with the following:

 

·        Edge (Customer-Facing) Interfaces:

12 x gigabit Ethernet (1000Base-SX, multimode fiber) distributed across at least two line cards

32 x DS3 Frame Relay (BNC connector)

 

 

·        Core (Network-Facing) Interfaces:
4 x OC-48 POS (single-mode fiber)

 

·        Support for BGPv4, OSPFv2 and IS-IS

·        Support for traffic classification based on diff-serv code points (DSCPs)

·       Support for traffic classification based on 802.1q VLANs

·        Optional: Support for MPLS Martini drafts

·        Optional: Support for BGP/MPLS VPNs per RFC 2547bis

  

 

We strongly urge vendors to supply spare line cards and management modules to avoid delays if a component should fail during testing. We recommend 20 percent sparing.

2.2       The Test Bed

The following figure shows the basic configuration we use. Test instruments on the left and right of the device under test (DUT) emulate customer and core networks, respectively.

 

We describe the test instrument to be used in the next section. Note that this is a generic illustration, and that instruments may be used in different combinations and port densities in different tests. We describe specific configurations in Section 3 of this document.  

 

2.3      Test Instrument

The primary test instrument for this project is the Spirent Adtech AX/4000, supplied by Spirent Communications Inc.

 

The test instrument is configured as follows:

 

MaxIP generator/analyzer modules

4 x OC-48 POS line cards (single-mode fiber)

32 x DS-3 line cards (RG-58 coaxial cable)

12 x gigabit Ethernet cards (1000Base-SX, multimode fiber)

AX/4000 GUI and BIOS version 4.21

custom-developed TCL scripts

 

2.4        Test Traffic

2.4.1       Packet Size Distribution

In this test, we use two types of traffic:

 

2.4.1.1        40-byte IP Packets

40-byte IP packets represent the shortest possible TCP/IP length, and therefore the most stressful test case for networking devices.

 

All references to packet length in this document refer to the IP packet only -- the length from the first byte of the IP header to the last byte of the packet payload. IP packet lengths do not include link-layer data such as Sonet or Ethernet headers, 802.1q VLAN tags, MPLS labels, or CRC fields.

 

2.4.1.2       Internet Mix of Packet Sizes

This distribution represents the three most commonly found IP packet lengths on Internet backbone circuits. The data is a composite of 22 samples taken from Merit, a consortium of Michigan-based ISPs, between 28 August 2000 and 13 September 2000. The raw data is available from http://moat.nlanr.net/Datacube. The packet lengths shown here are the three highest medians of all IP packet lengths between 20 and 1,500 bytes.

 

In the tri-modal distribution, the lengths and distribution of the IP packets are as follows: 

 

IP packet length (bytes)    Percentage of total streams
40                          58.96%
1,500                       22.96%
576                         18.08%
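As a quick cross-check, the weights above imply a weighted-average IP packet length of roughly 472 bytes. A minimal Python sketch of the calculation (our own illustration, not part of the test tooling):

```python
# Weighted average IP packet length for the tri-modal Internet mix
# (lengths and weights taken from the distribution table above).
IMIX = {40: 0.5896, 1500: 0.2296, 576: 0.1808}

avg_len = sum(length * weight for length, weight in IMIX.items())
print(round(avg_len, 2))  # 472.12
```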

3         Test Events

This section describes the tests to be performed. For each routine in this section, this document describes:

 

·         the test objective;

·        the configuration to be used;

·        the procedure to be used;

·        the test metrics to be recorded.

 

3.1      IP Baseline Forwarding and Latency

3.1.1      Objective

To determine baseline throughput, latency, and jitter for routed IP traffic through the DUT when forwarding traffic on all interfaces.

 

3.1.2      Test Bed Configuration

The following figure shows the test bed topology for the IP baseline tests. On the customer-facing side of the DUT, there are 12 gigabit Ethernet and 32 DS-3 interfaces. On the core-facing side of the DUT, there are 4 OC-48 POS interfaces.

 

We will offer traffic in a partially meshed bidirectional (backbone) pattern.

 

DUT interfaces:

core: 4 x OC-48 POS; edge: 12 x 1000Base-SX, 32 x DS-3 (line card parameters given below, under “Test Instrument Interfaces”)

 

Clocking:

DUT provides line clocking; test instrument configured in loop-timed mode.

 

DUT IP:

core: 223.0.[1-4].1/30, where 1-4 identifies each OC-48 interface

 

edge:

gigabit Ethernet: 222.0.[1-12].1/30

DS-3: 222.0.[100-131].1/30

 

DUT ASN: 254

 

Test instrument interfaces:

core: 4 x OC-48 POS; edge: 12 x 1000Base-SX, 32 x DS-3

 

Clocking:

DUT provides line clocking; test instrument configured in loop-timed mode.

 

Sonet interface configuration:

SONET (not SDH) framing

Transmit an S1 byte of 15, expect to receive an S1 byte of 15

PPP mode

Local magic number: 1

LCP_FCS: 32 bits

No PPP authentication

FCS: 32 bits

 

Gigabit Ethernet configuration:

Autonegotiation disabled

Speed: 1000 Mbit/s

Mode: Full-duplex

 

DS-3 line configuration:

DS-3 framing: C bit clear
Layer 2 framing: PPP
No PPP authentication
Local magic number: 1
MTU: 4470 bytes
LCP FCS: 32 bits
HDLC framing: 16 bits

 

Test instrument IP:

core: 223.0.[1-4].2/30, where 1-4 identifies each OC-48 interface

edge: 222.0.[ASN].2/30, where ASN identifies the AS number of the edge interfaces of the test instrument (see below)

 

Test instrument ASN:

Core:  254

Edge:

gigabit Ethernet: 1-12

DS-3: 100-131

 

Test duration:

At least 60 seconds

 

3.1.3      Procedure

1. Prior to offering data-plane traffic, the test instrument establishes 1 BGP peering session per interface. The OC-48 core interfaces of the test instrument also establish OSPF adjacencies with the DUT (area 0).

 

2. The test instrument advertises a BGP table with 512,000 prefixes. The prefixes to be advertised are available here in zipped .csv format:

 

ftp://ftp.networktest.com/edge02/edge02prefixes.zip

 

The BGP table approximates the Internet distribution of prefix and AS-PATH lengths. 

 

The core (OC-48) interfaces of the test instrument advertise all but the final 4,400 prefixes. Each of the 44 edge interfaces of the test instrument will advertise 100 prefixes, making up the final 4,400 prefixes.

 

The 100 edge prefixes advertised by each interface will all have a prefix length of /24, and all will have an AS_PATH length of 1.

 

Per RFC 1771, the test instrument on each interface advertises only those prefixes “behind” it. 
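The arithmetic behind this split of the BGP table can be sketched as:

```python
# Split of the 512,000-prefix BGP table between core and edge
# advertisers, per the procedure above.
TOTAL_PREFIXES = 512_000
EDGE_INTERFACES = 44          # 12 gigabit Ethernet + 32 DS-3
PREFIXES_PER_EDGE = 100

edge_total = EDGE_INTERFACES * PREFIXES_PER_EDGE   # final prefixes, edge side
core_total = TOTAL_PREFIXES - edge_total           # advertised by core side

print(edge_total)  # 4400
print(core_total)  # 507600
```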

 

3. The test instrument offers traffic in a partially meshed bidirectional pattern to all interfaces. All traffic offered to the DUT’s edge interfaces is destined for all networks advertised behind the core interfaces, and vice versa.  Traffic hits all prefixes advertised in step 2.

 

Using a step or binary search algorithm, we vary the offered load to determine the throughput level.
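The search loop can be sketched as follows; `forwards_without_loss` is a hypothetical stand-in for one full trial run on the test instrument:

```python
def find_throughput(forwards_without_loss, max_rate_pps, resolution_pps=1000):
    """Binary-search the highest offered load (pps) the DUT forwards
    with zero loss. forwards_without_loss(rate) runs one trial at the
    given rate and returns True if no packets were dropped."""
    lo, hi = 0, max_rate_pps
    while hi - lo > resolution_pps:
        mid = (lo + hi) // 2
        if forwards_without_loss(mid):
            lo = mid   # no loss: throughput is at least mid
        else:
            hi = mid   # loss: throughput is below mid
    return lo

# Example with a simulated DUT that drops packets above 1.35M pps
# (hypothetical figure); the result lands within 1,000 pps of that limit.
rate = find_throughput(lambda r: r <= 1_350_000, 2_000_000)
```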

 

The test duration is 60 seconds.

 

4. We run the test twice: once with 40-byte IP packets, once with the Internet mix of packet lengths.

 

Because we use different types and numbers of interfaces in this test, oversubscription is possible, and we compensate for this to avoid congesting the DUT. With a load of 40-byte IP packets, it is possible to congest the core side of the DUT by 7.93 percent. With the Imix load,  it is possible to congest the edge side of the DUT by 33.84 percent.

 

We compensate by reducing the maximum offered load so that we never induce congestion. Note that we “cap” the offered load only when offering traffic at higher rates would cause congestion. There is no cap needed at any lower rate.

 

The following table details the compensation we use:

 

Load      Customer-side max     Core-side max         Potential   Where?   Adjusted max          Perfect aggregate
          offered load (pps)    offered load (pps)    overload             offered load (pps)    throughput (pps)
40-byte   22,512,527.15         24,450,612.24         7.93%       Core     22,512,527.15         45,025,054.30
Imix      3,332,391.28          2,489,858.74          33.84%      Edge     2,489,858.74          4,979,717.48
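The adjusted and perfect-aggregate figures follow directly from the per-side maximums; a sketch of the arithmetic:

```python
# Adjusted maximum offered load: never offer more than the more
# constrained side of the DUT can carry, so neither side is congested.
# Per-side maximums (pps) are the figures from the compensation table.
loads = {
    "40-byte": (22_512_527.15, 24_450_612.24),   # (customer side, core side)
    "Imix":    (3_332_391.28, 2_489_858.74),
}

for name, (customer_pps, core_pps) in loads.items():
    adjusted = min(customer_pps, core_pps)
    perfect_aggregate = 2 * adjusted   # traffic flows in both directions
    print(name, adjusted, perfect_aggregate)
```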

 

 

5. We measure average latency and jitter at the throughput level and at 10, 50, 90, and 95 percent of the throughput rate of the POS interfaces.

 

3.1.4            Metrics

Throughput

Average latency at throughput level and at 10, 50, 90, and 95 percent of POS line rate

Average jitter at throughput level and at 10, 50, 90, and 95 percent of POS line rate

Packets in sequence

Packets out of sequence

 

3.2      Resiliency/Redundancy

3.2.1      Objective

To determine failover time upon failure of a primary circuit of the DUT.

 

3.2.2      Test Bed Configuration

The following diagram shows the physical layout of the test bed. We attach the test instrument to the DUT with 1 gigabit Ethernet and 2 OC-48 interfaces.

 

The following diagram shows the logical topology of the test bed. Test instrument interfaces AX1, AX2, and AX3 establish OSPF adjacencies with the DUT, with AX1 as the preferred path to “emulated router 4.” “Emulated router 3” establishes an I-BGP session with the DUT.

 

DUT: 3 x 1000Base-SX

Test Instrument: 3 x 1000Base-SX

 

DUT ASN = 254

Test instrument ASN = 254

 

DUT OSPF area = 0

Test instrument area = 0

 

DUT IP addresses:

gigabit Ethernet: 222.0.1.1/30

OC-48: 223.0.254.1/30, 223.1.254.1/30

 

Test instrument IP addresses:

gigabit Ethernet: 222.0.1.2/30

OC-48: 223.0.254.2/30, 223.1.254.2/30

BGP loopback: 100.100.100.100/32

 

3.2.3      Procedure

 

1. The test instrument establishes OSPF adjacencies on all 3 of its interfaces with the DUT.

 

2. Using OSPF, the interfaces labeled AX1 and AX2 advertise topology information about the emulated routers “behind” them.

 

OSPF advertises AX2 as the preferred path to emulated router 3, and AX1 as the preferred path to emulated router 4.

 

3. Emulated router 3 on the test instrument establishes an I-BGP session with the DUT via AX2.

 

4. Using I-BGP, emulated router 3 advertises the same set of 512,000 prefixes used in the baseline forwarding test. For all routes, emulated router 4 is the next hop to the Internet.


5. The test instrument on AX3 offers a unidirectional stream of 40-byte IP packets at 100,000 packets per second to the DUT. The traffic hits all BGP routes advertised. We offer traffic for at least 60 seconds.

 

Because of the OSPF metrics established in step 2, the DUT should send this traffic via AX1.

 

6. After at least 10 seconds, we physically remove the cable from the lower-cost interface (AX1). The DUT SHOULD reroute all traffic onto the secondary link (AX2).

 

7. We derive failover time from packet loss. We calculate failover time by taking the number of packets lost and dividing by the packet-per-second rate.

 

At an offered rate of 100,000 pps, each dropped packet corresponds to 10 microseconds of failover time.
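The failover-time calculation can be sketched as (the 4,200-packet loss figure is a hypothetical example):

```python
def failover_time_seconds(packets_lost, offered_rate_pps):
    # Each lost packet represents 1/rate seconds of outage.
    return packets_lost / offered_rate_pps

# At 100,000 pps, one dropped packet corresponds to 10 microseconds:
print(failover_time_seconds(1, 100_000))      # 1e-05
# 4,200 dropped packets would indicate a 42-millisecond failover:
print(failover_time_seconds(4_200, 100_000))  # 0.042
```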

 

3.2.4      Metrics

Packet loss

Failover time (derived from packet loss)

 

3.3      BGP RIB Capacity

3.3.1      Objective

To determine the maximum number of BGP4 prefixes the DUT will learn and propagate.

 

3.3.2      Test Bed Configuration

DUT interfaces: 2 x 1000Base-SX

 

DUT AS = 254

Test instrument AS = 1 (transmitter), 2 (receiver)

DUT IP = 222.0.1.1/30, 222.0.2.1/30

Test instrument IP = 222.0.1.2/30, 222.0.2.2/30

 

3.3.3      Procedure

1. We attach the test instrument to two interfaces of the DUT and clear the routing table(s) of the DUT.

2. The test instrument establishes 1 EBGP peering session with each interface of the DUT. Each interface of the test instrument represents a separate AS, and the DUT represents one AS.


3. The test instrument advertises 32,000 unique prefixes to one interface of the DUT.

 

4. The DUT SHOULD propagate all prefixes advertised to the other (receiver) interface of the test instrument.

 

5. The receiving interface of the test instrument reports the number of prefixes propagated by the DUT. We consider the iteration a success if the DUT propagates all advertised prefixes.

 

6. Between test iterations, the test instrument MUST clear the routing table of the DUT by withdrawing all previously advertised routes.

 

7. Using a binary search algorithm, the test instrument advertises progressively larger numbers of prefixes until the DUT fails to propagate one or more prefixes.

 

The binary search algorithm has a resolution of 1,000 prefixes; we report results to the nearest 1,000 prefixes.

  

3.3.4      Test Metrics

Maximum BGP4 prefixes propagated

 

3.4      BGP Peering Session Capacity

3.4.1      Objective

To determine the maximum number of BGP peering sessions the DUT can sustain.

 

3.4.2      Test Bed Configuration

DUT interfaces: 12 x 1000Base-SX, 6 x DS-3 (DS-3 optional, if needed)

 

Note: This configuration can support up to 4,335 concurrent EBGP sessions. Please let us know ASAP if you would like to scale your DUT to higher levels.
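The 4,335 figure follows from 255 peering sessions per EBGP-speaking test instrument interface:

```python
# Maximum concurrent EBGP sessions in this configuration: each
# EBGP-speaking interface of the test instrument supports 255 sessions.
GIGE_INTERFACES = 11   # 12 gigabit Ethernet total, minus 1 core-facing
DS3_INTERFACES = 6
SESSIONS_PER_INTERFACE = 255

print((GIGE_INTERFACES + DS3_INTERFACES) * SESSIONS_PER_INTERFACE)  # 4335
```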

 

DUT ASN = 254

 

DUT IP:

gigabit Ethernet: 222.0.1.1/16, 222.0.2.1/24 … 222.0.12.1/24

DS-3  (optional, if needed): 222.0.100.1/24, 222.0.101.1/24 … 222.0.105.1/24

 

Test instrument IP/ASN:

Customer side:

The following table lists IP addresses and ASNs for the test instrument’s customer-facing interfaces:

 

                    Test Instrument                        DUT
Interface type      IP address/prefix length   ASN         Location   IP address/prefix length   ASN
Gigabit Ethernet    223.1.254.2/30             254         Core       223.1.254.1/30             254
Gigabit Ethernet    222.1.1-255.2/16           1001-1255   Customer   222.1.1.1/16               254
Gigabit Ethernet    222.2.1-255.2/16           1501-1755   Customer   222.2.1.1/16               254
Gigabit Ethernet    222.3.1-255.2/16           2001-2255   Customer   222.3.1.1/16               254
Gigabit Ethernet    222.4.1-255.2/16           2501-2755   Customer   222.4.1.1/16               254
Gigabit Ethernet    222.5.1-255.2/16           3001-3255   Customer   222.5.1.1/16               254
Gigabit Ethernet    222.6.1-255.2/16           3501-3755   Customer   222.6.1.1/16               254
Gigabit Ethernet    222.7.1-255.2/16           4001-4255   Customer   222.7.1.1/16               254
Gigabit Ethernet    222.8.1-255.2/16           4501-4755   Customer   222.8.1.1/16               254
Gigabit Ethernet    222.9.1-255.2/16           5001-5255   Customer   222.9.1.1/16               254
Gigabit Ethernet    222.10.1-255.2/16          5501-5755   Customer   222.10.1.1/16              254
Gigabit Ethernet    222.11.1-255.2/16          6001-6255   Customer   222.11.1.1/16              254
DS-3                222.100.1-255.2/16         7001-7255   Customer   222.100.1.1/16             254
DS-3                222.101.1-255.2/16         7501-7755   Customer   222.101.1.1/16             254
DS-3                222.102.1-255.2/16         8001-8255   Customer   222.102.1.1/16             254
DS-3                222.103.1-255.2/16         8501-8755   Customer   222.103.1.1/16             254
DS-3                222.104.1-255.2/16         9001-9255   Customer   222.104.1.1/16             254
DS-3                222.105.1-255.2/16         9501-9755   Customer   222.105.1.1/16             254

 

Core side:

223.1.254.2/30, ASN 254

 

3.4.3      Procedure

1. The first gigabit Ethernet interface of the test instrument establishes 1 IBGP session with one of the DUT interfaces.  The test instrument advertises 100,000 prefixes to the DUT over IBGP.

 

2. Over each of the remaining gigabit Ethernet interfaces, the test instrument will initially establish 8 EBGP peering sessions, for a total of 88 EBGP peering sessions.  The test instrument will advertise 100 unique prefixes per EBGP peering session to the DUT. 

 

3. The DUT SHOULD propagate all prefixes advertised from the EBGP peers to the IBGP peer on the test instrument.

 

The DUT SHOULD propagate the 100,000 routes from the IBGP peer and 100 routes from each EBGP peer to each of the other EBGP peers.

 

4. Using a binary search algorithm, the test instrument establishes a progressively larger number of peer sessions, each advertising 100 prefixes, until the DUT fails to propagate one or more prefixes to the IBGP peer, or drops one or more peering sessions.

 

Each EBGP speaker on the test instrument will support up to 255 peering sessions.

 

In the event that all EBGP-speaking gigabit Ethernet interfaces on the test instrument reach the maximum number of sessions supported (11 x 255 = 2,805), we will add DS-3 interfaces as needed. In this case, we would repeat steps 2-4 using up to 6 additional DS-3 interfaces.

 

The resolution of the test instrument is 1 peering session; we report the maximum number of peering sessions.

 

3.4.4      Test Metrics

Maximum BGP peering sessions supported

 

3.5      Device FIB Capacity

3.5.1      Objective

To determine the maximum number of unique routes to which the DUT can forward traffic.

 

3.5.2      Test Bed Configuration

DUT interfaces: 2 x 1000Base-SX

 

DUT AS = 254

Test instrument AS = 1 (transmitter), 2 (receiver)

DUT IP = 222.0.1.1/30, 222.0.2.1/30

Test instrument IP = 222.0.1.2/30, 222.0.2.2/30

 

3.5.3      Procedure

1. We attach the test instrument to two interfaces of the DUT and clear the routing table(s) of the DUT.

2. The test instrument establishes 1 EBGP peering session with each interface of the DUT. Each interface of the test instrument represents a separate AS, and the DUT represents one AS.


3. The test instrument advertises 32,000 unique prefixes to one interface of the DUT.

 

4. The DUT SHOULD propagate all prefixes advertised to the other (receiver) interface of the test instrument.

 

5. The receiving interface of the test instrument records the number of prefixes propagated by the DUT.  The test will stop if the number of prefixes propagated is less than the number advertised, or if either peering session drops.

 

6. The test instrument offers 40-byte IP packets at 1 percent of line rate. This traffic will use every prefix advertised. The test duration MUST be sufficient to ensure that the test instrument offers at least 1 packet to each prefix advertised.
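A sketch of the duration check; the 14,880 pps figure is our own example (roughly 1 percent of gigabit Ethernet line rate for minimum-size frames), not a value mandated by this document:

```python
import math

def min_test_duration(prefix_count, offered_rate_pps):
    """Shortest duration (seconds) that guarantees at least one packet
    per advertised prefix when traffic cycles through every prefix."""
    return math.ceil(prefix_count / offered_rate_pps)

# Hypothetical example: 512,000 prefixes at 14,880 pps.
print(min_test_duration(512_000, 14_880))  # 35
```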

 

7. The test instrument records any packet loss on the receiving interface. We consider the iteration a success if the DUT forwards all offered traffic without packet loss.

 

8. Between test iterations, the test instrument clears the routing table of the DUT by withdrawing all previously advertised routes.

 

9. Using a binary search algorithm, the test instrument advertises progressively larger numbers of prefixes and offers data-plane traffic until the DUT fails to forward traffic without loss. We consider the FIB capacity to be the maximum number of routes to which traffic can be forwarded without loss.

 

The binary search algorithm has a resolution of 1,000 prefixes; we report results to the nearest 1,000 prefixes.

3.5.4      Test Metrics

FIB capacity

3.6        OSPF Routing Capacity

3.6.1       Objective

To determine the maximum number of OSPF LSAs the DUT will learn and propagate.

3.6.2      Test Bed Configuration

DUT: 8 x 1000Base-SX

Test instrument: 8 x 1000Base-SX

 

For OSPF LSAs, we use an LSA type distribution taken from Area 0 of a tier-1 ISP. The distribution is as follows:

 

LSA type   Absolute number of LSAs   Relative percentage of LSAs
1          200                       0.39%
2          1,000                     1.95%
3          40,000                    78.13%
4          5,000                     9.77%
5          5,000                     9.77%

 

Note that the actual number of LSAs we offer will change depending on the DUT’s capacity. We will hold the ratio of LSA types constant.
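A sketch of how the mix scales while the ratio stays fixed; `scaled_lsa_mix` is our own illustrative helper, not part of the test tooling:

```python
# Scale the LSA mix to a target total while holding the type ratio
# constant (baseline counts from the table above; total = 51,200).
BASELINE = {1: 200, 2: 1_000, 3: 40_000, 4: 5_000, 5: 5_000}

def scaled_lsa_mix(target_total):
    base_total = sum(BASELINE.values())
    return {lsa_type: round(count * target_total / base_total)
            for lsa_type, count in BASELINE.items()}

# Doubling the offered load doubles every LSA type:
print(scaled_lsa_mix(102_400))  # {1: 400, 2: 2000, 3: 80000, 4: 10000, 5: 10000}
```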

 

3.6.3      Procedure

1. We attach the test instrument to 8 interfaces of the DUT and clear the routing table(s) of the DUT.

2. The test instrument establishes 1 OSPF adjacency with each interface of the DUT. All interfaces belong to backbone area 0.


3. Up to 7 interfaces of the test instrument advertise LSAs to the DUT using the LSA type distribution given in “test bed configuration” above. The initial number of LSAs is 1,000; subsequent iterations use larger or smaller numbers depending on outcome of the first iteration.

 

The test instrument will advertise up to 100,000 LSAs on each of the 7 transmitting interfaces, or 700,000 LSAs total. An 8th interface of the test instrument will act as a receiver as described below.

 

4. The DUT SHOULD propagate all LSAs received to the 8th interface of the test instrument. The test instrument verifies this by determining if the number of LSAs propagated is equal to or greater than the number advertised.

 

5. The test instrument destroys the LSAs and drops the adjacencies on all interfaces.

 

6. Using a binary search algorithm, the test instrument repeats steps 2-5 to advertise progressively larger numbers of LSAs until the DUT fails to propagate one or more LSAs.

 

3.6.4      Test Metrics

Maximum OSPF LSAs propagated without loss

 

3.7       IS-IS Routing Capacity

3.7.1       Objective

To determine the maximum number of IS-IS LSPs and IS-IS routes the DUT will learn and propagate.

3.7.2      Test Bed Configuration

DUT: 2 x 1000Base-SX

Test instrument: 2 x 1000Base-SX

 

For IS-IS tests, we assume all routers are part of a Level 1 network.

 

DUT IP: 222.0.[1-2].1/30

Test instrument IP: 222.0.[1-2].2/30

 

DUT IS-IS system ID = 00 00 01

Test instrument system ID = 00 00 02, 00 00 03 … 00 00 0D

 

We configure the test instrument to use multinode IS-IS. It emulates 12 IS-IS nodes on a single interface.

 

We set the test instrument IS-IS priorities to 1, forcing the DUT to elect itself as DR (pseudonode).

 

3.7.3      Procedure

1. We attach the test instrument to two interfaces of the DUT and clear the routing table(s) of the DUT.

2. Each of the 12 IS-IS nodes on the test instrument establishes 1 IS-IS session with the DUT.


3. Each of the 12 IS-IS nodes on the test instrument advertises LSPs to the DUT. The initial number of LSPs is 1,000; subsequent iterations use larger or smaller numbers depending on outcome of the first iteration.

 

Each LSP will contain 121 routes.

 

4. The DUT SHOULD propagate all LSPs to the other interface of the test instrument. The test instrument verifies this by determining if the number of LSPs propagated is equal to or greater than the number advertised.

 

5. The test instrument deletes the routes learned and drops the IS-IS session on both interfaces.

 

6. Using a binary search algorithm, the test instrument repeats steps 2-5 to advertise progressively larger numbers of LSPs until the DUT fails to propagate one or more LSPs.

 

3.7.4      Test Metrics

Maximum LSPs propagated without loss

Maximum routes propagated without loss

 

3.8      QoS Enforcement

3.8.1      Objective

To demonstrate the ability of the device under test to enforce loss boundaries for unicast and multicast traffic classes.

 

To demonstrate the ability of the device under test to control excess bursts of traffic on a per-customer basis

 

To demonstrate the ability of the device under test to allocate predefined amounts of bandwidth to premium customers

 

 

3.8.2      Test Bed Configuration

The following figure shows the test bed topology for the QoS enforcement tests:

 

 

 

The DUT has 4 gigabit Ethernet links, representing customer interfaces, and 4 OC-48 links, representing the backbone interfaces. On the customer side, note that each physical interface is attached to 10 logical interfaces on the test instrument, each representing one customer.

 

We distinguish traffic for each customer with VLAN tags.  The VLAN_ID tags are 1-40.

 

We use three traffic classes in this test:

 

Gold: (unicast, TCP destination port 2002, DSCP xxx110): This class represents a premium service for customers’ mission-critical traffic. Not all customers receive the gold traffic class.

 

Silver: (multicast, UDP destination port 16384, DSCP xxx100): This class represents a multicast service. All customers receive the silver traffic class.

 

Particle board: (unicast, TCP destination port 80, DSCP xxx000): This class represents best-effort Web traffic. All customers receive the particle board traffic class.

 

We offer 1,500-byte IP packets for all traffic classes. The duration for all QOS tests is at least 60 seconds.

 

3.8.3      Procedure

We measure QOS enforcement in three different tests: Mixed-class forwarding, rate limiting, and rate-shaping.

 

1. Mixed-class forwarding test: The test instrument establishes 10 OSPF adjacencies on each of the four customer-facing gigabit Ethernet interfaces. Each customer advertises 10 type 3 LSAs. With 10 customers per interface and 4 customer-facing interfaces, the total number of OSPF routes advertised is 400.

 

2. The test instrument’s gigabit Ethernet interfaces send multicast group join requests from customers. Each OC-48 interface of test instrument represents one multicast transmitter, and the 10 “customers” on each gigabit Ethernet interface join all four multicast groups.

 

3. The test instrument offers silver (multicast) traffic to the OC-48 backbone links. Traffic offered to all core interfaces is destined for all the multicast addresses of all 40 customers at a rate of 202.7 packets per second. With four multicast transmitters, the aggregate offered load is approximately 810.6 pps (~10 Mbit/s).

 

4. The test instrument offers particle-board (unicast) traffic to all four OC-48 backbone links. The test instrument offers traffic in a partially meshed pattern. Traffic is destined for all routes of all 40 different customers at a rate of 7,295.7 packets per second (~90 Mbit/s) per customer. (The aggregate offered load per OC-48 is 72,957 pps.)

 

Note that the ratio of silver to particle board traffic per customer is about 10:90.

 

Using forwarding rate per customer as a metric, we verify that each customer receives a 10:90 ratio of silver to particle board traffic.

 

5. Rate limiting test: We define four special customers out of the 40 (one per gigabit Ethernet interface). For each of these customers, the test instrument bursts particle board traffic to consume all remaining capacity of the backbone links.  Each customer should still receive an equal amount of traffic.  This test measures an edge router’s ability to provide separate queuing per customer, and deliver a fair-share of best effort (particle board) traffic.

 

We use forwarding rate per traffic class per customer as the metric.

 

6. Rate shaping test: The test instrument returns the offered traffic load to the same 10:90 ratio of silver and particle board traffic as described in steps 1-4. 

 

Now we define two groups of premium customers with four customers in each group:

 

Gold Group A: VLAN_ID 5, 11, 23, and 31 have all subscribed to a gold service at 30 Mbit/s; and

Gold Group B: VLAN_ID 7, 13, 29, and 37 have all subscribed to a gold service at 15 Mbit/s

 

We offer an additional 4,063 pps (~50 Mbit/s) per customer of Gold traffic to each customer in Gold Groups A and B.

 

We use forwarding rate per traffic class per customer as a metric.

 

We verify the following:

 

Gold Group A customers receive 2,438.2 pps (~30 Mbit/s) gold, 812.7 pps (~10 Mbit/s) silver and 4,876.5 pps (~60 Mbit/s) particle board

 

Gold Group B customers receive 1,219.1 pps (~15 Mbit/s) gold, 812.7 pps (~10 Mbit/s) silver and 6,095.5 pps (~75 Mbit/s) particle board

 

All other customers receive the same 10:90 ratio of silver and particle board traffic as described in steps 1-4.
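The pps figures above follow from the subscribed rates if we assume each 1,500-byte IP packet occupies 1,538 bytes on the gigabit Ethernet wire (1,500 IP + 18 Ethernet framing + 20 preamble and interframe gap); the wire-frame assumption is ours, inferred from the figures:

```python
# Convert a subscribed rate in Mbit/s to packets per second for
# 1,500-byte IP packets in 1,538-byte gigabit Ethernet wire frames.
WIRE_BYTES = 1538  # assumed: 1,500 IP + 18 framing + 20 preamble/IFG

def pps(mbps):
    return round(mbps * 1_000_000 / (WIRE_BYTES * 8), 1)

print(pps(30))  # 2438.2  (Gold Group A gold rate)
print(pps(15))  # 1219.1  (Gold Group B gold rate)
print(pps(10))  # 812.7   (silver rate)
```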

 

7. We repeat the previous step but offer only silver and particle board traffic. This final step acts as a check against “nailing up” bandwidth for gold traffic so that the bandwidth cannot be used even when no gold traffic is present.

 

Forwarding rates should be identical to those in the mixed-class forwarding test described in steps 1-4.

 

3.8.4      Metrics

Per customer forwarding rate (silver and particle board traffic, steady state)

Per customer forwarding rate (silver and particle board traffic, burst particle board to 4 customers)

Per customer forwarding rate (Group A gold, silver, and particle board traffic)

Per customer forwarding rate (Group B gold, silver, and particle board traffic)

Per customer forwarding rate (non-premium silver, and particle board traffic)

 

3.9      MPLS Martini VC Scalability (Optional)

3.9.1      Objective

To determine the maximum number of Martini-draft virtual circuits the DUT can establish and use under load.

 

3.9.2      Test Bed Configuration

The following figure shows the test bed topology for the Martini scalability tests. We use 12 gigabit Ethernet and 32 DS-3 circuits in this test.  On the customer-facing side of the DUT, we use 2 gigabit Ethernet interfaces and 32 DS-3 interfaces. On the core-facing side of the DUT, we use 10 gigabit Ethernet interfaces.

 

Each of the 2 gigabit Ethernet interfaces supports up to 4,000 VLAN IDs. Each of the 32 DS-3 interfaces supports up to 976 frame relay data-link circuit identifiers (DLCIs). In this topology, we can establish up to 39,232 bidirectional virtual circuits.
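The 39,232 figure is the sum of the per-interface identifier spaces:

```python
# Maximum bidirectional virtual circuits in the Martini topology:
GIGE_CUSTOMER_IFS = 2
VLANS_PER_GIGE = 4_000
DS3_IFS = 32
DLCIS_PER_DS3 = 976

print(GIGE_CUSTOMER_IFS * VLANS_PER_GIGE + DS3_IFS * DLCIS_PER_DS3)  # 39232
```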

 

DUT IP: 222.0.[3-12].1/30

Test instrument IP: 222.0.[3-12].2/30

DUT OSPF area: 0

 

For each customer-facing gigabit Ethernet interface, the DUT must be preconfigured to map 4,000 Martini virtual circuits (VCs) to 4,000 802.1q VLAN tags. The VLAN IDs will run from 1 to 4,000 on each interface. For each DS-3 interface, the DUT must be preconfigured to map 976 Martini virtual circuits (VCs) to 976 DLCIs. The DLCIs will run from 16 to 991 on each interface.

 

The following table illustrates the VLAN IDs, DLCIs, and virtual circuit IDs (VCIDs) to be used:

 

 

 

3.9.3      Procedure

1. The test instrument establishes an LDP session with each of the core-facing interfaces of the DUT.

 

Additionally, the test instrument establishes an OSPF adjacency with each of the core-facing interfaces of the DUT. Both the DUT and the test instrument interfaces are in OSPF area 0.

 

2. Each core-facing interface of the test instrument provisions a single Martini connection by distributing 1 VC label using LDP in downstream unsolicited mode.

 

The VC label will uniquely identify 1 VC, provisioned using the first configured Ethernet 802.1q VLAN tag or DLCI destined for a given core-facing interface. In each case, there is one forwarding equivalency class (FEC) created. 

 

The following table illustrates the VLAN IDs, DLCIs, and virtual circuit IDs (VCIDs) to be used when determining initial throughput.

3. Once a VC has been established on all core-facing interfaces, all interfaces of the test instrument offer streams of Ethernet frames with 802.1q tags and frame relay frames over unique DLCIs.

 

We offer traffic using a bidirectional “port pair” topology. The table in “test bed configuration” above identifies destinations for the Ethernet and frame relay traffic.

 

For example, the test instrument offers traffic with VLAN_ID X to a customer-facing gigabit Ethernet interface, destined for an emulated subinterface behind a core-facing gigabit Ethernet interface of the DUT, identified with VLAN_ID Y. At the same time, the test instrument offers the core-facing gigabit Ethernet interface packets destined for VLAN_ID X, sourced from VLAN_ID Y.

 

The same port-pair pattern holds true for the DS-3 interfaces, although here each group of four DS-3 interfaces forwards traffic to one gigabit Ethernet interface on the core side of the DUT.

 

We offer traffic using a step or binary search algorithm to determine the throughput level. This traffic will use the same VLAN IDs and DLCIs distributed during VC setup. 
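
The binary-search variant of this procedure can be sketched as follows, in the spirit of RFC 2544 throughput testing. Here `dut_loss_free` stands in for an actual trial at a given offered load, and the 0.1 percent search resolution is an assumption; the document does not specify one.

```python
# Sketch of a binary search for the throughput level: the highest offered
# load (as a percentage of line rate) the DUT forwards with zero loss.
# dut_loss_free() is a hypothetical stand-in for running one trial.

def binary_search_throughput(dut_loss_free, lo=0.0, hi=100.0, resolution=0.1):
    """Return the highest loss-free offered load found, in % of line rate."""
    best = 0.0
    while hi - lo > resolution:
        rate = (lo + hi) / 2.0
        if dut_loss_free(rate):    # trial passed: no frames dropped
            best, lo = rate, rate  # search the upper half
        else:
            hi = rate              # trial failed: search the lower half
    return best

# Example: a DUT that forwards loss-free up to 73.4% of line rate.
print(round(binary_search_throughput(lambda r: r <= 73.4), 1))
```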

 

We run this test twice: once with 40-byte IP packets, and again with the Internet traffic mix.

 

4. Once this single-tunnel baseline is established, vendors declare the maximum number of VC instances they wish to attempt. Let us call this N instances.

 

We repeat steps 1-3, this time distributing up to N VC labels, each with a corresponding 802.1q VLAN or frame relay DLCI.  We note any difference in throughput between the single-VC and max-VC cases.

 

3.9.4      Metrics

Maximum VCs established

Throughput, 1 VC, 40-byte IP packets

Throughput, 1 VC, Internet mix

Throughput, maximum VCs, 40-byte IP packets

Throughput, maximum VCs, Internet mix

3.10 MPLS VPN Scalability (Optional)

3.10.1 Objectives

To determine the maximum number of RFC 2547bis VRF (virtual routing and forwarding) instances a single PE can establish and use.

To determine the maximum number of routes each VRF instance can use.

To determine whether the DUT preserves OSPF LSA type across the MPLS cloud.

 

3.10.2 Test Bed Configuration

The following figure shows the physical test bed topology for the VRF scalability tests.

 

 

 

The test instrument on the customer-facing side of the DUT emulates up to 2,420 CE (customer edge) routers. The DUT acts as a provider edge (PE) device. The test instrument on the core side of the DUT emulates another PE device, a P device, and multiple CE devices behind the emulated PE.

 

The following figure shows the logical test bed topology for the VRF scalability tests.

 

 

 

DUT: 12 x 1000Base-SX

Test Instrument: 12 x 1000Base-SX

 

DUT ASN = 254

DUT IP:

customer-facing gigabit Ethernet: 222.PN.[1-220].1/24, where PN = port number (1-12)

 (customers will use logical subinterfaces distinguished by VLAN IDs, described below)

core-facing gigabit Ethernet: 223.1.254.1/30

 

DUT/test instrument route distinguishers/route targets:

254: <VLAN_ID>, where:

254 = DUT ASN

<VLAN_ID> = VLAN ID used by an emulated CE

 

Test instrument ASNs:

core-facing gigabit Ethernet: 254

 

Test instrument IP:

customer-facing gigabit Ethernet: 222.PN.[1-220].2/24, where PN = port number (1-12)

OSPF LSAs advertised for: 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24 … to limit as described in test procedure

core-facing gigabit Ethernet: 223.1.254.2/30

core-side loopback address: 100.100.100.100/32

 

Test instrument VLAN IDs:

customer-facing gigabit Ethernet: 1 - 2,420

 

To emulate CE devices behind its own emulated P and PE instances, the test instrument uses PHP. The test instrument also runs OSPF (Area 0, all type 1 LSAs); LDP (advertises FECs for loopback addresses on the core side to identify remote PEs); and M-BGP (makes use of route distinguishers to advertise customer VPN routes).
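
The customer-side addressing scheme above can be enumerated as a short sketch. It assumes the 11 customer-facing gigabit Ethernet ports used in the procedure below; nothing here is DUT configuration.

```python
# Illustrative enumeration of the VRF test-bed customer addressing:
# DUT side 222.PN.[1-220].1/24, test instrument side 222.PN.[1-220].2/24.
# Assumes 11 customer-facing ports, matching the procedure's 2,420 CEs.

def vrf_addressing(ports=range(1, 12), subscribers=range(1, 221)):
    """Yield one (port, subscriber, dut_ip, instrument_ip) per emulated CE."""
    for pn in ports:
        for sub in subscribers:
            yield (pn, sub,
                   f"222.{pn}.{sub}.1/24",   # DUT side of the /24
                   f"222.{pn}.{sub}.2/24")   # test instrument (CE) side

entries = list(vrf_addressing())
print(len(entries))    # 11 ports x 220 subscribers = 2,420 emulated CEs
print(entries[0])      # (1, 1, '222.1.1.1/24', '222.1.1.2/24')
```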

 

3.10.3 Procedure

1. Prior to running the test, vendors must declare the number of VRF instances they wish to attempt. Let us call this N instances.

 

2. The test instrument will establish up to 220 OSPF adjacencies on each customer-facing interface of the DUT.  With 220 OSPF adjacencies per interface and 11 customer-facing interfaces, we will create a maximum of 2,420 VRF instances. Please let us know ASAP if you would like to scale your DUT to higher levels.

 

We use all 11 customer-facing interfaces in this test. If N instances cannot be divided evenly across I interfaces, we will run the result of (N / I), rounded to the nearest 10, on the first (I - 1) interfaces, and the remainder on the final interface.

 

For example, suppose a vendor wishes to attempt 2,000 VRF instances. Dividing 2,000 by 11 gives us 181.81. Rounding to the nearest 10, we would distribute the load as 180 instances on each of the first 10 interfaces, and the remaining 200 instances on the final interface.
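
The distribution rule can be checked with a few lines of code. Python's `round()` is used here as a stand-in for "nearest 10"; the methodology does not specify a tie-breaking rule.

```python
# Sketch of the VRF distribution rule: round N / I to the nearest 10,
# run that count on the first I - 1 interfaces, and place the remainder
# on the final interface.

def distribute_vrfs(n, interfaces=11):
    """Return the per-interface VRF instance counts for N total instances."""
    per_iface = round(n / interfaces / 10) * 10   # nearest multiple of 10
    counts = [per_iface] * (interfaces - 1)
    counts.append(n - per_iface * (interfaces - 1))  # remainder on last port
    return counts

print(distribute_vrfs(2000))   # ten interfaces at 180, final interface at 200
```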

 

All “subscribers” on a given customer-facing interface will use the same IP addressing. We uniquely identify each subscriber by 802.1q Ethernet VLAN tag.

 

3. The test instrument will establish one OSPF adjacency and one LDP session with the DUT’s core-facing interface.

 

4. The DUT will establish one multiprotocol I-BGP (M-BGP) session with the PE device emulated by the test instrument.

 

5. Each OSPF adjacency on the customer-facing interfaces advertises 100 unique type 3 LSAs per customer. With 2,420 possible adjacencies and 100 LSAs per adjacency, the PE’s tables may hold an aggregate of 242,000 routing entries for customers.

 

The DUT will export these LSAs into M-BGP, which will propagate routes to the emulated PE devices.

 

6. The test instrument offers unidirectional streams of 40-byte IP packets at 1 percent of line rate (14,880 pps) to the core-facing interface of the DUT. This traffic will use every LSA advertised.

 

The test duration MUST be sufficient to ensure that the test instrument offers at least 1 packet to each prefix advertised.
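
As a worked check of this requirement under the maximum configuration above: with 2,420 adjacencies advertising 100 LSAs each, and traffic offered at 14,880 pps, the minimum duration works out to roughly 16 seconds.

```python
# Worked check of the minimum test duration: the instrument must run long
# enough at 14,880 pps to offer at least one packet to every advertised
# prefix. Figures are taken from steps 2, 5, and 6 above.

adjacencies = 2420          # maximum OSPF adjacencies (step 2)
lsas_per_adjacency = 100    # unique LSAs advertised per adjacency (step 5)
prefixes = adjacencies * lsas_per_adjacency
pps = 14_880                # 1 percent of gigE line rate, 40-byte IP packets

min_duration = prefixes / pps
print(prefixes)                 # 242,000 routing entries in aggregate
print(round(min_duration, 1))   # about 16.3 seconds minimum
```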

 

Because we use the PHP core tunnel established to the DUT, the traffic offered will only make use of the inner labels advertised from the DUT for these LSAs.

 

We consider a VRF to be usable if it is capable of forwarding traffic sent to it.

 

7. Using a step or binary search algorithm, we repeat steps 3-6, incrementing the number of LSAs to determine the maximum number of routes the DUT can propagate and use.

 

We consider an LSA usable if the DUT can forward traffic to the route it advertises. This is essentially a FIB test.

 

3.10.4 Metrics

Maximum number of usable VRFs

Maximum number of usable LSAs per VRF

4         Change History

Version 2.26

Date: 10 December 2002

Added publication date to title bar

Corrected date of version 2.25 revision

 

Version 2.25

Date: 31 July 2002

Corrected number of interfaces in OSPF capacity test

 

Version 2.24

Date: 30 July 2002

Corrected core-side addressing in BGP peering test

 

Version 2.23

Date: 24 July 2002

Corrected core-side addressing in BGP peering test

 

Version 2.22

Date: 23 July 2002

Corrected DS-3 addressing in BGP peering test

 

Version 2.21

Date: 12 July 2002

Corrected DLCI range in Martini test

 

Version 2.2

Date: 5 July 2002

Fourth public release

Corrected scalability limits in BGP peering capacity test

Added ratios in QOS enforcement test

Added anti-TDM check at end of QOS enforcement test

 

Version 2.1

Date: 26 June 2002

Third public release

 

Version 2.0

Date: 19 June 2002

Second public release

 

Version 1.20

Date: 15 May 2002

Internal interim release

 

Version 1.0

Date: 11 April 2002

Initial public release

 

 



 

2 Some traffic may be dropped due to the small degree of oversubscription that occurs when passing from a POS encapsulated link to an Ethernet link with VLAN tagging.
