Network World Lab Test: Filtering on Enterprise Routers
Scheduled for Publication in Summer 2003
Draft Test Methodology v. 3.0
Copyright © 2003 by Network Test.
Vendors are encouraged to comment on this document and any other aspect of the test methodology. Network Test reserves the right to change the parameters of this test at any time. Please send comments to David Newman of Network Test (dnewman at networktest.com).
This document describes the test procedures we use to measure the performance effect of access control lists (ACLs) on enterprise routers.
We control for three variables in this test: the number of filters used, the size of the routing tables in use, and the use of filter optimization techniques intended to improve filtering performance.
In this test we begin with baseline measurements of latency and throughput with no filters configured. Then we add successively larger numbers of filters to determine the impact, if any, on the router’s forwarding and delay performance.
We repeat this exercise with progressively larger numbers of BGP and OSPF routing table entries, and with filtering optimizations (if supported) applied.
While filtering performance is our principal evaluation criterion, we also assess products based on filtering features (the various types of filtering criteria that can be used), other features, and price. Participating vendors will be asked to complete a features and pricing questionnaire as part of this test.
This document is organized as follows. This section introduces the project. Section 2 describes product requirements, the test bed, and test equipment. Section 3 describes test procedures. Section 4 logs changes to this document.
This section discusses requirements of systems under test and introduces the test equipment to be used.
Participating vendors must supply two enterprise-class routers, equipped as follows:
The following diagram shows the general layout of the test bed we use for performance evaluation.
The router on the left is the device under test (DUT). The router on the right should be identical to the DUT; it is used to provide Ethernet connections to the SmartBits test instrument.
As shown in the test bed diagram, the DUT routes traffic among four subnets. Here are the DUT interfaces:
Interface | Address/prefix length
LAN 0 | 1.1.1.1/26
LAN 1 | 1.1.1.65/26
WAN 0 | 3.3.3.1/25
WAN 1 | 3.3.3.129/25
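As a quick sanity check on this addressing plan, here is a short sketch (ours, purely illustrative; not part of the methodology) that uses Python's ipaddress module to derive the subnet and usable host range behind each DUT interface.

import ipaddress

# DUT interface addresses from the table above
interfaces = {
    "LAN 0": "1.1.1.1/26",
    "LAN 1": "1.1.1.65/26",
    "WAN 0": "3.3.3.1/25",
    "WAN 1": "3.3.3.129/25",
}

for name, addr in interfaces.items():
    net = ipaddress.ip_interface(addr).network
    hosts = list(net.hosts())
    # e.g. "LAN 0: 1.1.1.0/26 (hosts 1.1.1.1 - 1.1.1.62)"
    print("%s: %s (hosts %s - %s)" % (name, net, hosts[0], hosts[-1]))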
We begin with baseline tests of throughput and latency. Vendors should not configure any filters for these baseline tests. We give filter parameters in section 3.2 on test procedures.
For 10/100Base-T interfaces, we use the following parameters:
Autonegotiation: Disabled
Speed: 10 Mbit/s
Duplex: Full
For T1 interfaces, we use the following parameters:
Layer 2 framing: PPP
MRU: 1500
Local IP: 3.3.3.x (where x = 1 or 129)
Remote IP: any
Authentication: None
Line physical mode: CPE
Loopback: Disabled
Line framing: ESF
Line encoding: B8ZS
Encoding: NRZ
Minimum interframe flags: 1 byte
CRC (FCS) size: 16 bits
Zero insertion/deletion: Enabled
DS-0 channels defined: 1-24, all 64 kbit/s
For T1 clocking, we use the following parameters:
DUT provides clocking; back-to-back router gets clocking from DUT.
The primary test instrument for this project is the Spirent SmartBits, supplied by Spirent Communications Inc. The test instrument is equipped as follows:
Hardware:
2 x LAN-3301A 10/100/1000 TeraMetrics Ethernet line cards (RJ-45 connectors, copper cabling)
2 x WN-3415 T1 cards for troubleshooting
Software:
TeraRouting 2.0
SmartBits API v. 1.40 with SAI scripting for SmartFlow and TeraRouting
SmartWindow 7.60
The TeraMetrics line cards emulate BGP and OSPF routers in addition to offering test traffic.
This section describes the tests to be performed. For each routine in this section, this document describes:
· the test objective;
· the configuration to be used;
· the procedure to be used;
· the test metrics to be recorded.
To determine the throughput and latency of the device under test with no filtering and only direct routes configured
We use the test bed configuration shown in section 2.2 and device configuration parameters from section 2.3.
Vendors must not enable dynamic routing protocols for the baseline tests. Vendors must configure static routes as follows:
Router | Network/prefix length | Gateway
Left | 1.1.1.128/26 | 3.3.3.1 (serial 0)
Left | 1.1.1.192/26 | 3.3.3.129 (serial 1)
Right | 1.1.1.0/26 | 3.3.3.2 (serial 0)
Right | 1.1.1.64/26 | 3.3.3.129 (serial 1)
We attach each router interface to the test bed’s 10Base-T and T1 interfaces.
We offer 40-, 238-, and 1500-byte UDP/IP packets[1] in all tests.
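Because these lengths exclude link-layer framing (see footnote 1), a given packet length translates into a different on-wire rate per medium. As a worked example for the Ethernet side only, the sketch below (ours, illustrative) converts IP datagram length to the theoretical maximum packet rate, assuming standard 802.3 overhead: 14-byte header, 4-byte FCS, 46-byte minimum payload with padding, 8-byte preamble/SFD, and a 12-byte minimum interframe gap. The T1 side uses PPP framing with different overhead, which we do not compute here.

ETH_HEADER = 14          # destination MAC, source MAC, EtherType
ETH_FCS = 4              # frame check sequence
ETH_MIN_PAYLOAD = 46     # shorter payloads are padded up
PREAMBLE_IFG = 8 + 12    # preamble/SFD plus minimum interframe gap

def wire_bits_per_packet(ip_len):
    # On-wire bits consumed by one IP datagram carried in Ethernet
    payload = max(ip_len, ETH_MIN_PAYLOAD)
    return (payload + ETH_HEADER + ETH_FCS + PREAMBLE_IFG) * 8

def max_pps(ip_len, link_bps=10 * 10**6):
    # Theoretical maximum packet rate at the given link speed
    return link_bps // wire_bits_per_packet(ip_len)

# 40-byte IP datagrams pad out to 64-byte frames, giving the classic
# 14,880 packets/s ceiling at 10 Mbit/s; 238 -> 4,528; 1500 -> 812
for size in (40, 238, 1500):
    print(size, max_pps(size))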
The test duration is 60 seconds for all tests.
1. We offer the DUT various loads of 40-byte UDP/IP packets to determine the throughput level (the highest offered load at which the DUT forwards packets with zero loss) as defined in RFC 2544, section 26.1. We vary the offered load using a step or binary search algorithm to determine the throughput level (see the sketch after this list). We offer traffic in a bidirectional, fully meshed pattern and note any frame loss.
2. We repeat step 1 with 238- and 1500-byte UDP/IP packets.
3. For all three packet lengths, we measure average and maximum delay.
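For readers who want the search spelled out, here is a minimal sketch of the binary search in Python. It is illustrative only: trial_passes, a callable that would drive one 60-second trial at a given load and report whether any frames were lost, is a hypothetical placeholder, not a Spirent API.

def find_throughput(trial_passes, line_rate_pps, resolution_pps=100):
    # Binary search for the RFC 2544 throughput level: the highest
    # offered load (packets/s) at which a trial sees zero frame loss.
    low, high = 0, line_rate_pps
    best = 0
    while high - low > resolution_pps:
        load = (low + high) // 2
        if trial_passes(load):       # no loss: remember it, search higher
            best, low = load, load
        else:                        # loss: search lower
            high = load
    return best

# Toy usage: pretend the DUT starts dropping above 9,000 packets/s
print(find_throughput(lambda pps: pps <= 9000, line_rate_pps=14880))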
Throughput (aggregate packets per second for all DUT interfaces)
Average latency (microseconds)
Maximum latency (microseconds)
To determine the throughput and latency of the device under test with various numbers of filtering rules applied and direct routes configured
We use the test bed configuration shown in section 2.2.
We attach each router interface to the test bed’s 10Base-T and T1 interfaces.
For this filtering test, vendors must not configure dynamic routing on their devices. Vendors must configure static routes as follows:
Router | Network/prefix length | Gateway
Left | 1.1.1.128/26 | 3.3.3.1 (serial 0)
Left | 1.1.1.192/26 | 3.3.3.129 (serial 1)
Right | 1.1.1.0/26 | 3.3.3.2 (serial 0)
Right | 1.1.1.64/26 | 3.3.3.129 (serial 1)
We require vendors to configure filtering on their devices for this test. If supported, logging SHOULD be enabled.
The following table offers an example with 8 filters:
src_ip | dst_ip | protocol | src_port | dst_port | action | log
5.0.0.0/24 | 6.0.1.0/24 | 6 | 1025 | 1 | deny | yes
5.0.2.0/24 | 6.0.3.0/24 | 6 | 1027 | 3 | deny | yes
5.0.4.0/24 | 6.0.5.0/24 | 6 | 1029 | 5 | deny | yes
5.0.6.0/24 | 6.0.7.0/24 | 6 | 1031 | 7 | deny | yes
5.0.8.0/24 | 6.0.9.0/24 | 6 | 1033 | 9 | deny | yes
5.0.10.0/24 | 6.0.11.0/24 | 6 | 1035 | 11 | deny | yes
5.0.12.0/24 | 6.0.13.0/24 | 6 | 1037 | 13 | deny | yes
1.1.1.0/24 | 1.1.1.0/24 | 6 | 111 | 111 | allow | yes
Some other comments regarding the filters in use here:
1. Note that prefixes, protocols, and port numbers all increment by 2. We do this to prevent summarization of networks or the use of ranges, thus forcing the use of the desired number of filters. (A sketch that generates this pattern appears after this list.)
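To make the numbering pattern concrete, here is a sketch that generates a rule set of any size. It is our own illustrative helper, not vendor configuration, and make_filters is a name we made up; the filter lists published with this methodology govern the actual tests (for the larger sets this sketch ignores the third octet rolling past 255).

def make_filters(n):
    # Build n-1 "deny" rules in the increment-by-2 pattern of the
    # table above, plus the final "allow" rule the test traffic matches.
    rules = []
    for i in range(n - 1):
        rules.append({
            "src_ip": "5.0.%d.0/24" % (2 * i),
            "dst_ip": "6.0.%d.0/24" % (2 * i + 1),
            "protocol": 6,                 # TCP
            "src_port": 1025 + 2 * i,
            "dst_port": 1 + 2 * i,
            "action": "deny",
            "log": True,
        })
    rules.append({"src_ip": "1.1.1.0/24", "dst_ip": "1.1.1.0/24",
                  "protocol": 6, "src_port": 111, "dst_port": 111,
                  "action": "allow", "log": True})
    return rules

# make_filters(8) reproduces the eight-filter example table above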
The test duration is 60 seconds for all tests.
1. We select one filter at random from the configured list and offer traffic matching it to verify the filters are properly configured.
2. We begin with 8 filters configured. We offer the DUT various loads of 40-byte UDP/IP packets to determine the throughput level (the highest offered load at which the DUT forwards packets with zero loss) as defined in RFC 2544, section 26.1. We vary the offered load using a step or binary search algorithm to determine the throughput level. We offer traffic in a bidirectional, fully meshed pattern.
3. We repeat step 2 with 238- and 1500-byte UDP/IP packets.
4. For all three packet lengths, we measure latency at the throughput level as defined in RFC 2544, section 26.2 (see the sketch after this list).
5. We repeat steps 1-4 with 16, 64, and 256 filters applied.
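The test instrument reports the latency metrics below directly; purely to illustrate the aggregation, this sketch (ours, with made-up timestamps) computes average and maximum latency in microseconds from matched per-packet transmit and receive times given in seconds.

def latency_stats_us(tx_times, rx_times):
    # Per-packet latency samples in microseconds, from matched
    # transmit/receive timestamp pairs (seconds)
    samples = [(rx - tx) * 1e6 for tx, rx in zip(tx_times, rx_times)]
    return sum(samples) / len(samples), max(samples)

avg_us, max_us = latency_stats_us([0.000000, 0.001000],
                                  [0.000120, 0.001135])
print(avg_us, max_us)   # about 127.5 and 135.0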
Aggregate throughput (aggregate packets per second for all DUT interfaces) with 8, 16, 64, and 256 filters in place
Average latency (microseconds) with 8, 16, 64, and 256 filters in place
Maximum latency (microseconds) with 8, 16, 64, and 256 filters in place
To determine the throughput and latency of the device under test with various numbers of filtering rules applied and dynamic routes configured
We use the test bed configuration shown in section 2.2.
We attach each router interface to the test bed’s 10Base-T and T1 interfaces.
For this filtering test, vendors should configure dynamic routing on their devices.
We require vendors to configure BGP and OSPF on the Ethernet interfaces of their devices for this test. For BGP, the DUT is ASN 1 and the two LAN interfaces of the test instrument are ASNs 2 and 3.
For OSPF, the test instrument establishes 1 OSPF adjacency with each LAN interface of the DUT. All interfaces belong to area 0.
We require vendors to configure filtering on their devices for this test. If supported, logging SHOULD be enabled.
The following table offers an example with 8 filters:
src_ip | dst_ip | protocol | src_port | dst_port | action | log
5.0.0.0/24 | 6.0.1.0/24 | 6 | 1025 | 1 | deny | yes
5.0.2.0/24 | 6.0.3.0/24 | 6 | 1027 | 3 | deny | yes
5.0.4.0/24 | 6.0.5.0/24 | 6 | 1029 | 5 | deny | yes
5.0.6.0/24 | 6.0.7.0/24 | 6 | 1031 | 7 | deny | yes
5.0.8.0/24 | 6.0.9.0/24 | 6 | 1033 | 9 | deny | yes
5.0.10.0/24 | 6.0.11.0/24 | 6 | 1035 | 11 | deny | yes
5.0.12.0/24 | 6.0.13.0/24 | 6 | 1037 | 13 | deny | yes
1.1.1.0/24 | 1.1.1.0/24 | 6 | 111 | 111 | allow | yes
Some other comments regarding the filters in use here:
1. Note that prefixes, protocols, and port numbers all increment by 2. We do this to prevent summarization of networks or the use of ranges, thus forcing the use of the desired number of filters.
2. Filters must be configured on LAN interfaces as ingress.
3. Vendors must configure static routes to LAN interface 0 (1.1.1.1/26) for all traffic destined to networks in the 6.0.0.0/16 space, and to LAN interface 1 (1.1.1.65/26) for all traffic destined to networks in the 6.1.0.0/16 space.
4. As a spot-check to verify filters are configured, we may offer traffic to networks covered by “deny” filters. We will choose the destination networks at random.
5. We rerun this test with various numbers of filters applied. A complete list of filters for each rule set is available here.
6. Regardless of the number of filters in use, the test traffic always hits the last filter configured (see the sketch after this list).
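Comment 6 is what makes throughput sensitive to rule count: in a naive linear first-match implementation, traffic matching the last rule forces the device to evaluate every rule before it. Here is a minimal sketch of such first-match evaluation (ours, illustrative; real routers typically use optimized lookups, which section 3.4 tests).

import ipaddress

def first_match(rules, src, dst, proto, sport, dport):
    # Scan rules in order; traffic matching the final rule -- the test
    # traffic here -- touches every preceding rule first.
    for rule in rules:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(rule["src_ip"])
                and ipaddress.ip_address(dst) in ipaddress.ip_network(rule["dst_ip"])
                and proto == rule["protocol"]
                and sport == rule["src_port"]
                and dport == rule["dst_port"]):
            return rule
    return None   # no match; the device's default policy applies

rules = [
    {"src_ip": "5.0.0.0/24", "dst_ip": "6.0.1.0/24", "protocol": 6,
     "src_port": 1025, "dst_port": 1, "action": "deny"},
    {"src_ip": "1.1.1.0/24", "dst_ip": "1.1.1.0/24", "protocol": 6,
     "src_port": 111, "dst_port": 111, "action": "allow"},
]
print(first_match(rules, "1.1.1.10", "1.1.1.20", 6, 111, 111)["action"])  # allow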
The test duration is 60 seconds for all tests.
1. We begin with a “small table” routing configuration. We establish BGP sessions and OSPF adjacencies with the DUT’s LAN interfaces. Once the routing sessions are established, we advertise 64 prefixes using BGP and 64 type 3 link-state advertisements (LSAs) using OSPF. The routing table entries are divided evenly across interfaces; for example, each LAN interface of the test instrument advertises 32 prefixes via BGP, to make 64 total.
2. We select one filter at random from the configured list and offer traffic matching it to verify the filters are properly configured.
3. We begin with 8 filters configured. We offer the DUT various loads of 40-byte UDP/IP packets to determine the throughput level (the highest offered load at which the DUT forwards packets with zero loss) as defined in RFC 2544, section 26.1. We vary the offered load using a step or binary search algorithm to determine the throughput level. We offer traffic in a bidirectional, fully meshed pattern.
4. We repeat step 3 with 238- and 1500-byte UDP/IP packets.
5. For all three packet lengths, we measure latency at the throughput level as defined in RFC 2544, section 26.2.
6. We repeat steps 2-5 with 16, 64, and 256 filters applied.
7. We repeat steps 1-6 with a “large table” routing configuration. We clear the routing tables of the small table configuration. Then we advertise 20,000 BGP prefixes (10,000 on each interface) plus 4,096 type 3 LSAs to the LAN interfaces of the DUT (2,048 on each interface). (A sketch of this route-count arithmetic appears after this list.)
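To make the route counts in steps 1 and 7 concrete, this sketch (ours, purely illustrative) prints the even split of advertised routes across the test instrument's two LAN interfaces for each configuration.

# Advertised-route counts per configuration, split evenly across the
# two LAN interfaces of the test instrument (values from steps 1 and 7)
CONFIGS = {
    "small": {"bgp_prefixes": 64, "ospf_type3_lsas": 64},
    "large": {"bgp_prefixes": 20000, "ospf_type3_lsas": 4096},
}

for name, counts in CONFIGS.items():
    print("%s table: %d BGP prefixes and %d type 3 LSAs per LAN interface"
          % (name, counts["bgp_prefixes"] // 2, counts["ospf_type3_lsas"] // 2))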
Aggregate throughput (aggregate packets per second for all DUT interfaces) with 8, 16, 64, and 256 filters and routing in place
Average latency (microseconds) with 8, 16, 64, and 256 filters and routing in place
Maximum latency (microseconds) with 8, 16, 64, and 256 filters and routing in place
To determine the throughput and latency of the device under test with various numbers of filtering rules applied and dynamic routes configured when the device under test uses a filtering optimization technique
We use the test bed configuration shown in section 2.2.
We attach each router interface to the test bed’s 10Base-T and T1 interfaces.
For this filtering test, vendors should configure dynamic routing on their devices.
We require vendors to configure BGP and OSPF on their devices for this test. For BGP, the DUT is ASN 1 and the two LAN interfaces of the test instrument are ASNs 2 and 3.
For OSPF, the test instrument establishes 1 OSPF adjacency with each LAN interface of the DUT. All interfaces belong to area 0.
We require vendors to configure filtering on their devices for this test. If supported, vendors should use optimization techniques such as precompiling filters or “fast switching.” If supported, logging SHOULD be enabled.
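We do not prescribe any particular optimization technique. Purely as an illustration of what precompiling can buy, the sketch below (ours; compile_filters and lookup are made-up names) builds a one-time exact-match index over rules in the dictionary format of the earlier sketches, so per-packet lookup no longer scans the whole list. A Python dict stands in for what real implementations do with TCAMs, tries, or compiled decision trees, and it deliberately glosses over prefix and range matching.

def compile_filters(rules):
    # One-time pass: index each rule by its exact 5-tuple fields.
    # The cost of a large rule set is paid here, not per packet.
    index = {}
    for rule in rules:
        key = (rule["src_ip"], rule["dst_ip"], rule["protocol"],
               rule["src_port"], rule["dst_port"])
        index.setdefault(key, rule)   # keep first match, as a linear scan would
    return index

def lookup(index, src_net, dst_net, proto, sport, dport):
    # Average-case O(1) per packet, regardless of how many rules exist
    return index.get((src_net, dst_net, proto, sport, dport))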
The following table offers an example with 8 filters:
src_ip | dst_ip | protocol | src_port | dst_port | action | log
5.0.0.0/24 | 6.0.1.0/24 | 6 | 1025 | 1 | deny | yes
5.0.2.0/24 | 6.0.3.0/24 | 6 | 1027 | 3 | deny | yes
5.0.4.0/24 | 6.0.5.0/24 | 6 | 1029 | 5 | deny | yes
5.0.6.0/24 | 6.0.7.0/24 | 6 | 1031 | 7 | deny | yes
5.0.8.0/24 | 6.0.9.0/24 | 6 | 1033 | 9 | deny | yes
5.0.10.0/24 | 6.0.11.0/24 | 6 | 1035 | 11 | deny | yes
5.0.12.0/24 | 6.0.13.0/24 | 6 | 1037 | 13 | deny | yes
1.1.1.0/24 | 1.1.1.0/24 | 6 | 111 | 111 | allow | yes
Some other comments regarding the filters in use here:
1. Note that prefixes, protocols, and port numbers all increment by 2. We do this to prevent summarization of networks or the use of ranges, thus forcing the use of the desired number of filters.
2. Filters must be configured on LAN interfaces as ingress.
3. Vendors must configure static routes to LAN interface 0 (1.1.1.1/26) for all traffic destined to networks in the 6.0.0.0/16 space, and to LAN interface 1 (1.1.1.65/26) for all traffic destined to networks in the 6.1.0.0/16 space.
4. As a spot-check to verify filters are configured, we may offer traffic to networks covered by “deny” filters. We will choose the destination networks at random.
5. We rerun this test with various numbers of filters applied. A complete list of filters for each rule set is available here.
6. Regardless of the number of filters in use, the test traffic always hits the last filter configured.
The test duration is 60 seconds for all tests.
1. We begin with a “small table” routing configuration. We establish BGP sessions and OSPF adjacencies with the DUT’s LAN interfaces. Once the routing sessions are established, we advertise 64 prefixes using BGP and 64 type 3 link-state advertisements (LSAs) using OSPF. The routing table entries are divided evenly across interfaces; for example, each LAN interface of the test instrument advertises 32 prefixes via BGP, to make 64 total.
2. We select one filter at random from the configured list and offer traffic matching it to verify the filters are properly configured.
3. We begin with 8 filters configured. We offer the DUT various loads of 40-byte UDP/IP packets to determine the throughput level (the highest offered load at which the DUT forwards packets with zero loss) as defined in RFC 2544, section 26.1. We vary the offered load using a step or binary search algorithm to determine the throughput level. We offer traffic in a bidirectional, fully meshed pattern.
4. We repeat step 3 with 238- and 1500-byte UDP/IP packets.
5. For all three packet lengths, we measure latency at the throughput level as defined in RFC 2544, section 26.2.
6. We repeat steps 2-5 with 16, 64, and 256 filters applied.
7. We repeat steps 1-6 with a “large table” routing configuration. We clear the routing tables of the small table configuration. Then we advertise 20,000 BGP prefixes (10,000 on each interface) plus 4,096 type 3 LSAs to the LAN interfaces of the DUT (2,048 on each interface).
Aggregate throughput (aggregate packets per second for all DUT interfaces) with 8, 16, 64, and 256 filters, routing, and filter optimization in place
Average latency (microseconds) with 8, 16, 64, and 256 filters, routing, and filter optimization in place
Maximum latency (microseconds) with 8, 16, 64, and 256 filters, routing, and filter optimization in place
Version: 3.0
Date: 13 May 2003
Removed Cisco 2611 as test bed infrastructure; added requirement for second device to be tested in back-to-back configuration
Corrected IP addresses in PPP setup parameters
Added required static routes for baselines and filtering tests
Specified use of average and maximum latency as metrics
Clarified use of logging with filters
Version: 2.1
Date: 2 May 2003
Changed net-7 error in first filtering table to net-1
Version: 2.0
Date: 24 April 2003
Added Cisco 2600 as test bed infrastructure
Changed IP addresses in section 2.2
Deleted WAN interfaces as test connections
Corrected errors in filtering rules
Version: 1.3
Date: 22 April 2003
Changed WAN interfaces from WN-3445A to WN-3415
Changed SmartFlow/SAI to SmartWindow
Corrected WAN setup parameters
Added static routes for net-7 sources
Corrected minimum frame length
Version: 1.2
Date: 8 April 2003
Changed WAN interfaces from WN-3441A to WN-3445A for SmartFlow/SAI support
Version: 1.1
Date: 1 April 2003
Changed WAN interfaces from T3 to T1
Changed test instrument from AX/4000 to SmartBits
Version: 1.0
Date: 31 March 2003
Initial public release
[1] All references to datagram length in this document cover the IP packet, from the first bit of the IP header to the last bit of the IP payload inclusive. We do not include link-layer (Ethernet or PPP) framing overhead.