Light Reading Lab Test: Carrier-Class IPSec VPN Gateways
Published 5 June 2002
Test Methodology
v. 2.2.01
Copyright (c) 2002 Network Test Inc. and Light Reading Inc.
We welcome your comments and suggestions about this document and any other aspect of test methodology. We make every effort to incorporate suggestions into this test plan, but also reserve the right to change test parameters at any time.
By David Newman (dnewman@networktest.com)
This document describes the test procedures we use to compare IPSec virtual private network (VPN) equipment for service providers. Results of this test are scheduled for publication in Light Reading.
In this project, we focus on devices situated inside the service provider’s cloud, not on customer premises equipment (CPE). For purposes of this project, we assume that last-mile circuits from customer sites are private and secure; whether that assumption is valid is out of scope. A growing number of service providers offer CO-based managed VPN services, and the main objective of this test is to assess the devices used to provide those services.
We evaluate products using the following criteria:
We determine performance results and some management/provisioning capabilities through lab testing. We determine other aspects of management, provisioning, features, and price from the responses to a questionnaire we ask each vendor to complete.
This document is organized as follows. This section introduces the project. Section 2 describes product requirements, the test bed, and test equipment. Section 3 describes test procedures. Section 4 logs the changes to this document.
This section discusses requirements of systems under test and introduces the test equipment to be used.
Participating vendors must supply at least three VPN security gateways, each equipped as follows:
Participating vendors must supply management/provisioning software and should supply a host on which to run the software.
Vendors may supply configurations that use load-balancing or clustering of multiple devices. In such cases, configurations must hide any addresses other than those listed in this document.
The following diagram shows the general layout of the test bed we use for performance evaluation. All interfaces are gigabit Ethernet (1000Base-T or 1000Base-SX) except for those connecting the management console and Sniffer; these are 100Base-T.
Because we want each device to perform as well as it can, we ask vendors to optimize their device configurations prior to test time. We believe we have given all necessary configuration parameters in this document; if not, please contact Network Test with any questions.
We configure Ethernet interfaces on the test bed to run at 1000 Mbit/s in full-duplex mode. Vendors must disable autonegotiation on device under test (DUT) interfaces.
Prior to testing, vendors must configure devices with IP addresses and subnet masks as shown. Vendors also must preconfigure all tunnel definitions.
For all tunnel establishment attempts, we use the following parameters:
· Phase 1 mode: Main mode
· Phase 1 encryption algorithm: 3DES
· Phase 1 hashing algorithm: SHA-1
· Phase 1 Diffie-Hellman group: 2
· Phase 1 life type, duration: time-based, 28,800 seconds
· PRF: Not defined
· Phase 2 mode: Quick mode
· Phase 2 life type, duration: time-based, 28,800 seconds
· Phase 2 PFS: enabled
· Phase 2 encapsulation mode: tunnel mode
· Phase 2 encryption algorithm: 3DES
· Phase 2 message authentication algorithm: HMAC-SHA
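For reference, the parameter set above can be captured in a small data structure, as a test script might consume it. The sketch below is purely illustrative; the field names are our own shorthand, not any vendor’s configuration syntax or the SmartBits API:

    # Illustrative only: the IKE/IPSec proposal used for every tunnel
    # establishment attempt. Field names are our own shorthand.
    IKE_PROPOSAL = {
        "phase1": {
            "mode": "main",
            "encryption": "3des",
            "hash": "sha1",
            "dh_group": 2,
            "life_type": "time-based",
            "life_seconds": 28800,
            "prf": None,  # not defined
        },
        "phase2": {
            "mode": "quick",
            "life_type": "time-based",
            "life_seconds": 28800,
            "pfs": True,
            "encapsulation": "tunnel",
            "encryption": "3des",
            "auth": "hmac-sha",
        },
    }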
The principal test instrument for this project is the SmartBits traffic generator/analyzer from Spirent Communications Inc. (Chatsworth, Calif.). Spirent’s SmartBits 6000 chassis is equipped with the company’s LAN-3201B gigabit Ethernet and TeraMetrics LAN-3301 10/100/1000 Ethernet cards.
We use two SmartBits applications in this project: SmartFlow v. 1.4 and SAI (Spirent Application Interface) v. 3.0.
SmartFlow is capable of generating any packet type in any traffic pattern at line rate across an arbitrarily large number of interfaces, using an arbitrarily large number of subnets.
SAI uses the same underlying API as SmartFlow, but takes its instructions from user-developed scripts instead of the SmartFlow GUI. Functionally, SAI and SmartFlow are equivalent; both produce identical packets on the wire. We use SAI because it automates some tasks that are repetitive in SmartFlow.
We use the Sniffer Pro 4.6 analyzer from Network Associates Inc. (NAI, Santa Clara, Calif.) for troubleshooting and traffic analysis. The Sniffer performs decodes of all standard IKE header messages.
To forward traffic between subnets, the devices under test attach to Summit48 switches from Extreme Networks Inc. (Santa Clara, Calif.). In previous tests, the Summit48 has proven itself capable of forwarding any traffic pattern among all ports at line rate.
In all tests, one Summit48 connects the devices under test. We may elect to attach the SmartBits directly to the devices under test in some procedures involving only one private subnet per virtual site.
IKE -- Internet Key Exchange. The key management mechanism defined for use with IPSec. An IKE negotiation precedes any IPSec session. In it, peers agree on the security parameters to be used in exchanging keys and in exchanging user data.
IKE Phase 1 -- The exchange of keying and other information between security gateways. The purpose of Phase 1 negotiations is to establish an environment for the secure exchange of session information that occurs in Phase 2.
IKE Phase 2 -- The exchange between security gateways of encryption, authentication, and other parameters to be used in IPSec SAs.
Tunnel -- The combination resulting from one successful IKE Phase 1 negotiation, one Phase 2 negotiation, and two one-way SAs for IPSec data transfer.
In some tests, this document refers to multiple pairs of IPSec SAs being passed over a connection negotiated by one IKE SA. Except where explicitly noted as such, references to “tunnels” mean the combination of IKE SA plus pair of one-way IPSec SAs.
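To make that counting convention concrete, the following minimal Python sketch (our own illustration, not part of the methodology) shows how tunnel counts translate into SA counts:

    def sa_counts(ike_sas, ipsec_pairs_per_ike=1):
        # One "tunnel" = one IKE SA plus one pair of one-way IPSec SAs.
        # Setting ipsec_pairs_per_ike > 1 models many Phase 2 SAs
        # negotiated over a single IKE session (see Section 3.3.3).
        return {
            "ike_sas": ike_sas,
            "ipsec_sas": ike_sas * ipsec_pairs_per_ike * 2,  # two one-way SAs per pair
        }

    print(sa_counts(20000))                       # 20,000 tunnels -> 40,000 IPSec SAs
    print(sa_counts(1, ipsec_pairs_per_ike=500))  # 1 IKE SA -> 1,000 IPSec SAs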
To determine provisioning and management capabilities of the VPN security gateways under test.
Most questions in this section do not require hands-on testing. For those that do, a management console must securely monitor/manage at least two VPN security gateways, as shown in Section 2.
We verify the following functions. Most categories are simply pass/fail checks; a management platform either does or does not perform a given function. For categories labeled “subjective,” we rate management platforms on a 1-to-5 scale, where 1 = poor; 2 = fair; 3 = good; 4 = very good; and 5 = excellent.
· Communications between VPN security gateways and management console are encrypted (verified with a Sniffer) (pass/fail)
· Encryption type used between VPN security gateways and management console (proprietary/SSH/3DES/other)
· Communications between VPN security gateways and remote management software (e.g., a home or notebook system) are encrypted (verified with a Sniffer) (pass/fail)
· Encryption type used between VPN security gateways and remote management software (proprietary/SSH/3DES/other)
· Note ease of tunnel provisioning (subjective -- rated 1 to 5, where 1 = poor and 5 = excellent)
· Note ease of remote security gateway configuration (subjective -- rated 1 to 5, where 1 = poor and 5 = excellent)
· Note richness of hierarchy of management roles (subjective -- rated 1 to 5, where 1 = poor and 5 = excellent)
· Note availability of customer “template” for rapid provisioning of multiple tunnels (pass/fail)
· Assess capability for change accounting (subjective -- rated 1 to 5, where 1 = poor and 5 = excellent)
· Note capability for storing multiple customers’ policies as a single data store (pass/fail)
· Note capability to view all of a given customer’s network from the management platform (pass/fail)
· Note capability to view part of a given customer’s network from the management platform (pass/fail)
· Note maximum number of devices managed by a single data store
· Note maximum number of policies managed by a single data store
· Note ability to count packets entering/leaving individual tunnels in real time (pass/fail)
· Note ability to count packets dropped on individual tunnels in real time (pass/fail)
· Note ability to count bytes entering/leaving individual tunnels in real time (pass/fail)
· Note ability to log fault mode for packets dropped on individual tunnels in real time (pass/fail)
· Note capability for dedicated log server (not simply Unix syslog) (pass/fail)
· Note capability for log server redundancy (pass/fail)
· Note capability for automated log rotation and archiving (pass/fail)
· Note capability for log archiving/retrieval per customer (pass/fail)
· Note capability for log archiving/retrieval per customer site (pass/fail)
· Note capability for log archiving/retrieval per customer tunnel (pass/fail)
· Note back-end export formats (OSS, Oracle, etc.)
For each routine in this section, this document describes:
· the test objective;
· the configuration to be used;
· the procedure to be used;
· the test metrics to be recorded.
To determine the basic forwarding and delay characteristics of the devices under test.
The test bed is shown below. It comprises the following subnets:
Vendors’ representatives must configure all devices under test (DUTs) to disable compression, if supported, and to enable network address translation. Time permitting, vendors may elect to conduct retests with compression enabled.
After bringing up one tunnel between sites 1 and 2, we use the SmartBits traffic generator/analyzer running SmartFlow and/or SAI to offer a bidirectional stream of UDP/IP packets. The SmartBits uses a binary search algorithm to determine the highest rate at which the device forwards traffic without loss (throughput, as defined in RFC 1242). We also measure average latency at the throughput level.
We repeat these tests with 64-, 256-, 1,400-, and 1,518-byte UDP/IP packets.[1]
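The binary search above can be sketched in a few lines of Python. This is our own schematic illustration, not SmartFlow or SAI code; trial() is a hypothetical stand-in for one 30-second SmartBits run at a given offered load, returning the number of frames lost. (Per the footnote, stated packet lengths span Ethernet header through CRC, so a 64-byte frame carries an 18-byte UDP payload: 64 minus 14-byte Ethernet header, 4-byte CRC, 20-byte IP header, and 8-byte UDP header.)

    def find_throughput(line_rate_mbps, trial, resolution_mbps=1.0):
        # RFC 1242-style throughput: the highest offered load at which
        # the DUT forwards all frames without loss. trial(rate) stands
        # in for one 30-second test run and returns frames lost.
        lo, hi, best = 0.0, line_rate_mbps, 0.0
        while hi - lo > resolution_mbps:
            mid = (lo + hi) / 2
            if trial(mid) == 0:   # no loss: throughput is at least mid
                best = lo = mid
            else:                 # loss observed: back off
                hi = mid
        return best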
Time permitting, we may repeat the same test with up to 20,000 concurrent tunnels (one through each pair of subnets, as described in Section 3.3.3), and note any difference in throughput. We offer the ability to test up to 20,000 tunnels; please advise Network Test ASAP if your device supports more than 20,000 concurrent tunnels.
Time permitting, we may rerun tests with compression enabled in devices that support it. All vendors must complete baseline tests with compression disabled.
The test duration is 30 seconds.
Throughput (Mbit/s)
Average latency
To determine failover time upon failure of a primary security gateway.
To determine session integrity upon failure of a primary security gateway.
As shown in the figure below, this test requires three VPN security gateways. Two of the devices will either load-share traffic or act as primary and hot-standby units.
The test bed is shown in Section 2. It comprises the following subnets:
· 3.3.3.0/24 - Private-line subnet
· 10.0.2.0/24 - Private-line subnet, site 2
We establish one tunnel between the control and primary VPN security gateways. Using the SmartBits and SmartFlow applications, we offer traffic at a rate of 10,000 packets per second. Then we physically remove the cabling connecting the primary and control security gateways. Traffic flow should resume across the secondary security gateway.
We use 64-byte UDP/IP frames for this test.
We derive failover time from frame loss. At a rate of 10,000 frames/s, each lost frame represents 100 microseconds of failover time. If the DUT cannot forward packets at 10,000 frames/s, we reduce the offered load to the throughput rate and note the result.
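Since the derivation is simple arithmetic, a minimal sketch (our own illustration):

    def failover_time_seconds(frames_lost, offered_fps=10_000):
        # At 10,000 frames/s, each lost frame represents 1/10,000 s
        # (100 microseconds) of outage. If the DUT's throughput is
        # below 10,000 frames/s, substitute that throughput rate.
        return frames_lost / offered_fps

    print(failover_time_seconds(2500))  # 2,500 lost frames -> 0.25 s failover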
We also note whether IPSec SA information is migrated onto the secondary security gateway or whether rekeying is required.
Frame loss
Failover time (derived from frame loss)
Session loss (pass/fail)
To determine the maximum number of concurrent tunnels an IPSec device will support.
This test requires two security gateways. If supported, a pair of security gateways should allow establishment of multiple IKE Phase 1 tunnels.
The test bed is shown below. It comprises T pairs of subnets, where T is the number of tunnels the vendor wishes to attempt.
This configuration models a service provider’s network in which the service provider sets up tunnels between a customer’s headquarters office and multiple branch offices.
Devices on the test bed must be configured as follows:
Vendors’ representatives must declare the number of tunnels they wish to attempt, as follows:
· maximum intended tunnels (one IKE Phase 1 SA plus one pair of Phase 2 IPSec SAs each)
· one IKE Phase 1 SA, with the maximum intended number of Phase 2 IPSec SAs
Vendors’ representatives must preconfigure their devices with all necessary tunnel definitions. Each tunnel definition should use a different pair of subnets, as noted above.
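The sketch below shows one way to enumerate disjoint subnet pairs, one pair per tunnel definition. The 10.0.0.0/9 and 10.128.0.0/9 blocks are our own illustrative choice, not the actual test-bed addressing plan:

    import ipaddress

    def tunnel_subnet_pairs(t):
        # Enumerate t disjoint pairs of /24 subnets, one pair per tunnel.
        # Each /9 block yields 32,768 /24s, enough for 20,000 tunnels.
        left = ipaddress.ip_network("10.0.0.0/9").subnets(new_prefix=24)
        right = ipaddress.ip_network("10.128.0.0/9").subnets(new_prefix=24)
        return [(next(left), next(right)) for _ in range(t)]

    for a, b in tunnel_subnet_pairs(3):
        print(a, "<->", b)   # 10.0.0.0/24 <-> 10.128.0.0/24, ...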
We use the following parameters in all tunnel establishment attempts:
· Phase 1 mode: Main mode
· Phase 1 encryption algorithm: 3DES
· Phase 1 hashing algorithm: SHA-1
· Phase 1 Diffie-Hellman group: 2
· Phase 1 life type, duration: time-based, 28,800 seconds
· PRF: Not defined
· Phase 2 mode: Quick mode
· Phase 2 life type, duration: time-based, 28,800 seconds
· Phase 2 PFS: enabled
· Phase 2 encapsulation mode: tunnel mode
· Phase 2 encryption algorithm: 3DES
· Phase 2 message authentication algorithm: HMAC-SHA
Using SmartFlow, we offer bidirectional traffic between each pair of subnets to establish an IKE Phase 1/IPSec Phase 2 tunnel.
We begin with one tunnel establishment attempt and continue to increment the number of tunnel establishment attempts until the number attempted no longer equals the number of tunnels established.
We repeat the test by attempting a large number of IPSec SAs through a single IKE session.
We note the maximum numbers of established tunnels and IPSec SAs. We use the Sniffer to verify the use of unique SPIs for each tunnel.
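The ramp can be sketched as follows. This is our own schematic illustration; attempt() is a hypothetical stand-in for offering traffic on n subnet pairs and returning the number of tunnels that actually establish (verified by unique SPIs on the Sniffer):

    def max_tunnels(attempt, start=1, step=1):
        # Increment the number of tunnel establishment attempts until
        # the number attempted no longer equals the number established.
        n, last_good = start, 0
        while attempt(n) == n:
            last_good = n
            n += step
        return last_good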
Maximum tunnels established
Maximum pairs of IPSec SAs established
Version 2.2.01
Date: 8 April 2003
Title bar: Inserted actual publication date of 5 June 2002
Version 2.2.00
Date: 26 February 2002
Sections 2.2, 3.3.1.2, 3.3.2.2, 3.3.3.2:
Corrected addressing on test bed to indicate use of 4.4.4.0/24 subnet
Version 2.1.01
Date: 22 February 2002
Section 1:
Deleted erroneous publication date
Version 2.1.00
Date: 21 February 2002
Section 2.2:
Corrected addressing on test bed to indicate use of 3.3.3.0/24 subnet
Section 3.3.1.2:
Corrected addressing on test bed to indicate use of 3.3.3.0/24 subnet
Section 3.3.2.2:
Corrected addressing on test bed to indicate use of 3.3.3.0/24 subnet
Section 3.3.3.2:
Corrected addressing on test bed to indicate use of 3.3.3.0/24 subnet
Version 2.0.00
Date: 20 February 2002
Section 2.2:
Changed test bed addressing to put subnets x.x.1.x on left, all others on right
Section 2.3.1:
Deleted TeraVPN from test
Added LAN-3201B (fiber gigE) interfaces to test bed
Added SAI scripting to test bed
Section 3.2.3.3:
Added checks of real-time tunnel monitoring statistics
Section 3.3.1.2:
Changed test bed addressing to put subnets x.x.1.x on left, all others on right
Section 3.3.1.3:
Added reference to SAI
Added 256-byte frame size
Revised scalability text to clarify that up to 20,000 tunnels are possible
Section 3.3.2.2:
Changed test bed addressing to put subnets x.x.1.x on left, subnets x.x.2.x on right
Section 3.3.3:
Deleted references to TeraVPN. Reworked configuration and procedure for back-to-back topology.
Version 1.00.29
Date: 30 January 2002
Initial public release
[1] References to packet length in this document cover the distance from the first byte of the Ethernet header to the final byte of the Ethernet CRC, inclusive, prior to the packet entering an IPSec tunnel.