
Network Computing Lab Test: Bandwidth Managers

Publication Date: June 12, 2000

Preliminary Test Plan

 

v. 1.0. Copyright © 2000 by Network Test Inc. Vendors are encouraged to comment on this document and any other aspect of test methodology. However, Network Test reserves the right to change the parameters of this test at any time.

 

This document describes the procedures to be used in comparing devices with bandwidth-management capabilities. Products will be evaluated in four areas: ease of configuration/management; performance; features; and co-existence with other QOS devices or mechanisms. The relative weighting of the categories is as follows: configuration/management, 25 percent; performance, 40 percent; features, 25 percent; co-existence with other QOS devices/mechanisms, 10 percent.
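
For reference, a minimal sketch of how the weighted overall score could be combined, assuming each category has first been normalized to a common 0-to-100 scale (the normalization method itself is not part of this plan):

    # Hypothetical illustration of the category weighting described above.
    # Assumes each category score is already normalized to 0-100.
    WEIGHTS = {
        "configuration/management": 0.25,
        "performance": 0.40,
        "features": 0.25,
        "coexistence": 0.10,
    }

    def overall_score(scores):
        """Weighted sum of normalized category scores."""
        return sum(WEIGHTS[name] * value for name, value in scores.items())

    # Example: category scores of 80, 90, 70 and 60 yield an overall score of 79.5.
    print(overall_score({"configuration/management": 80, "performance": 90,
                         "features": 70, "coexistence": 60}))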

 

The general goal of this test is to determine whether bandwidth managers ensure that high-priority flows (translation: those carrying revenue) always fall within acceptable latency/response time or throughput boundaries. One area of special interest is determining how well these devices enforce traffic management contracts on low-bandwidth WAN links and/or links used by large numbers of users.

 

To benchmark device performance, independent consultancy Network Test (Hoboken, NJ) has teamed up with Netcom Systems Inc. (Calabasas, Calif.) to develop a new test application for bandwidth managers. As with prior tests, the new application runs on Netcom’s SmartBits traffic analyzer. Unlike previous applications, however, it runs stateful TCP connections. In this test, we plan to offer up to 200 concurrent TCP connections, even through low-bandwidth T1 (1.5-Mbit/s) circuits.
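
For readers unfamiliar with the distinction, the following rough Python sketch illustrates what "stateful" load means here: each emulated client completes a real TCP handshake, sends an HTTP 1.0 GET and reads the response, rather than replaying stateless packets. It is illustrative only; it is not the SmartBits test application, and the target address is a placeholder.

    # Illustrative stateful TCP load generation (not the SmartBits application).
    # Each worker opens a real TCP connection, issues an HTTP/1.0 GET, and
    # drains the response before the connection closes.
    import socket
    import threading

    TARGET = ("192.0.2.10", 80)   # placeholder web server address
    CONNECTIONS = 200             # concurrent sessions, as planned for this test

    def one_session():
        with socket.create_connection(TARGET, timeout=30) as s:
            s.sendall(b"GET / HTTP/1.0\r\nHost: example.test\r\n\r\n")
            while s.recv(4096):   # read until the server closes the connection
                pass

    threads = [threading.Thread(target=one_session) for _ in range(CONNECTIONS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()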

 

1. Ease of configuration/ease of management

Lab personnel will grade each device, on a 1 to 5 scale (where 5 = excellent and 1 = poor), on ease of accomplishing each of the following tasks.

 

  1. A bandwidth management device is deployed on the local side of a site’s WAN router. The site uses net-10 addressing internally (10.0.1.0/24) and one public IP address on the external interface of its router. The local interface’s address is 10.0.1.1/24; the external interface is 1.1.1.1/24. The internal site does not currently run any bridging protocols (like spanning tree), and for purposes of this test all nodes use static routes.

    The lab will determine what steps are needed to deploy and configure the bandwidth manager. Among the attributes to be compared are:
    --whether the device functions as a bridge, a router, or both;
    --whether the device can be configured solely from a command-line interface, a Web browser, or a proprietary GUI;
    --what modes of IP address allocation the device supports (static, DHCP);
    --whether configuration changes require a reboot;
    --whether software upgrades require a reboot;
    --the length of time required per reboot;
    --whether the device holds secondary configurations and software images.
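
    Purely as an illustration of this addressing plan (it is not one of the graded tasks), a short Python check using the standard ipaddress module; the variable names are assumptions:

      # Sanity check of the addressing described above (standard library only).
      from ipaddress import ip_interface, ip_network

      site_lan = ip_network("10.0.1.0/24")           # net-10 internal addressing
      router_local = ip_interface("10.0.1.1/24")     # WAN router, local interface
      router_external = ip_interface("1.1.1.1/24")   # WAN router, external interface

      assert router_local.ip in site_lan             # local side sits on the internal LAN
      assert router_local.ip.is_private              # RFC 1918 (net-10) address space
      assert not router_external.ip.is_private       # public address on the outside
      assert router_local.network != router_external.network  # two distinct subnets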

  2. The lab will determine the amount of “help” the device (or its documentation) offers in terms of defining the right criterion to use for shaping a given class of traffic. For example, if a user wants to protect voice-over-IP (VoIP) traffic from being swamped by lower-priority, higher-bandwidth FTP flows, does the device (or its documentation) caution the user that latency, not bandwidth, is the key metric to use?

  3. The lab will attempt to configure a policy supporting the following five rules (a rough sketch of such a policy follows the list):

    --Inbound and outbound HTTP traffic must never use more than 50 percent of available bandwidth.
    --Inbound traffic using secure sockets layer (SSL) will be serviced before regular HTTP traffic.
    --Inbound and outbound SNA traffic encapsulated into IP using Data Link Switching (DLSw) will be serviced before SSL and regular HTTP traffic, and must always be given up to 30 percent of available bandwidth.
    --Microsoft NetMeeting must never use more than 20 percent of available bandwidth.
    --All other types of inbound and outbound traffic should be blocked.
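
    Because each product has its own configuration language, the following is only a rough, vendor-neutral sketch of how these five rules could be expressed as data; the field names and the enforcement engine that would consume them are assumptions:

      # Hypothetical, vendor-neutral representation of the five rules above.
      # Field names ("match", "max_bw_pct", and so on) are assumptions; every
      # device under test uses its own configuration syntax.
      POLICY = [
          {"match": "http",       "direction": "both",    "max_bw_pct": 50},
          {"match": "ssl",        "direction": "inbound", "priority": 2},  # before plain HTTP
          {"match": "dlsw",       "direction": "both",    "priority": 1,   # before SSL and HTTP
           "guaranteed_bw_pct": 30},
          {"match": "netmeeting", "direction": "both",    "max_bw_pct": 20},
          {"match": "any",        "direction": "both",    "action": "block"},
      ]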

    Scoring will depend on the ease of setting up the rules; the breadth and quality of configuration methods available (command-line, Web, proprietary GUI); the availability of remote management/configuration options; and the security of remote management methods.

  4. The lab will determine whether a policy change can be implemented globally (with one action across multiple devices), or whether each device's configuration must be changed individually.

 

2. Performance tests

There are four performance tests: baselines, TCP rate enforcement, TCP latency enforcement, and mixed-class traffic handling.

 

Each of these four tests will be run twice, with each iteration modeling a different scenario where bandwidth managers might be used. Vendors are free to participate in either or both scenarios; there is no penalty for electing not to participate in both scenarios.

 

Scenario 1 models a branch office with 200 users. A single T1 (1.5-Mbit/s) circuit provides all Internet connectivity.

 

Scenario 2 models a server farm run by a content hosting service. In this case, a customer of the service is allocated a 100-Mbit/s circuit to handle traffic to and from that customer’s servers.

 

All tests will be conducted via 100Base-T physical interfaces; we will rate-control the offered load for the T1 tests.
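
To emulate the T1 scenario over Fast Ethernet, the generator is shaped to a small fraction of the physical rate. Assuming the nominal 1.544-Mbit/s T1 line rate, the fraction works out as follows:

    # Approximate fraction of 100Base-T line rate used to emulate a T1 circuit.
    # Assumes the nominal 1.544-Mbit/s T1 line rate (usable payload is slightly less).
    T1_BPS = 1_544_000
    FAST_ETHERNET_BPS = 100_000_000
    print(f"{100 * T1_BPS / FAST_ETHERNET_BPS:.3f}% of line rate")  # about 1.544%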

 

As in the configuration/management tests, the device under test will use 10.0.1.254/24 on its local interface and 1.1.1.1/24 on its external interface. If the device operates as a bridge, the second interface should be 10.0.1.253/24.

 

a. Baseline tests

Objective: To determine each device’s basic traffic-handling capabilities before bandwidth management is enabled.

 

Procedure: Offer line-rate loads of 64- and 1518-byte UDP packets to each device in both unidirectional and bidirectional flows.
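
For reference, the theoretical maximum Fast Ethernet frame rates for these two frame sizes (a common point of comparison for line-rate baselines) can be computed as below; the 20 bytes of per-frame overhead are the 8-byte preamble and the 12-byte minimum interframe gap.

    # Theoretical maximum frame rate on 100-Mbit/s Ethernet for a given frame size.
    LINE_RATE_BPS = 100_000_000
    OVERHEAD_BYTES = 8 + 12   # preamble plus minimum interframe gap

    def max_frames_per_second(frame_bytes):
        return LINE_RATE_BPS / ((frame_bytes + OVERHEAD_BYTES) * 8)

    print(round(max_frames_per_second(64)))    # about 148,810 frames/s
    print(round(max_frames_per_second(1518)))  # about 8,127 frames/s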

 

Metrics:

Forwarding rate

Packet loss

Packet-by-packet latency

 

b. TCP rate enforcement

Objective: To determine the ability of the device under test to guarantee specific amounts of bandwidth to high-priority flows.

 

Procedure:  Configure device to allocate 512 kbit/s (for T1 tests) or 30 Mbit/s (for 100-Mbit/s tests) for high-priority flows.

 

Offer up to 200 concurrent HTTP 1.0 requests to each device, plus enough background UDP traffic to present an aggregate load of 150 percent of line rate, thus creating congestion. Of the 200 HTTP requests, 20 are high-priority flows. (This type of 10/90 split models traffic at an e-commerce site, where 90 percent of requests represent customers surfing around and 10 percent are placing orders.)

 

Verify that high-priority requests get the 512 kbit/s or 30 Mbit/s reserved for them, but not more.
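
One rough way to make that check from per-flow throughput measurements is sketched below; the flow-record format and the 10 percent tolerance are assumptions for illustration, not formal pass/fail criteria.

    # Hypothetical check that high-priority flows collectively receive their
    # reservation (512 kbit/s or 30 Mbit/s) but not substantially more.
    def check_reservation(flows, reserved_bps, tolerance=0.10):
        """flows: iterable of (priority, measured_bps) pairs from the analyzer."""
        high_bps = sum(bps for prio, bps in flows if prio == "high")
        return reserved_bps * (1 - tolerance) <= high_bps <= reserved_bps * (1 + tolerance)

    # Example for the T1 scenario: 20 high-priority flows averaging 26 kbit/s each.
    flows = [("high", 26_000)] * 20 + [("low", 5_000)] * 180
    print(check_reservation(flows, 512_000))   # True: 520 kbit/s is within 10% of 512 kbit/s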

 

Stop all traffic generation, and restart with low-priority HTTP 1.0 requests only. Verify that the low-priority sessions are able to use the bandwidth available, including that previously reserved for the high-priority flows. (This step verifies that the bandwidth managers don’t use static, TDM-like bandwidth reservation.)

 

Metrics:

Forwarding rate

Packet loss

 

c. TCP latency enforcement

(Note: Not all devices can enforce latency bounds. This test will only be run on those devices supporting such capability.)

 

Objective: To determine the ability of the device under test to enforce specific latency boundaries.

 

Procedure: Configure device to ensure that response time for high-priority HTTP 1.0 GET requests never rises above 500 milliseconds, and never above 2,000 ms for low-priority HTTP 1.0 GET requests.

 

Offer up to 200 concurrent HTTP requests to each device, plus enough background UDP traffic to present an aggregate load of 150 percent of line rate, thus creating congestion.  Of the 200 HTTP requests, 20 are high-priority flows. (This type of 10/90 split models traffic at an e-commerce site, where 90 percent of requests represent customers surfing around and 10 percent are placing orders.)

 

In this test the background load will consist of 1,518-byte frames, and the HTTP requests will be for large objects, also forcing a predominance of 1,518-byte frames. This combination of long frames and congestion will force latency to rise, creating a stressful environment for evaluating latency shaping. For bandwidth managers that rely on queuing, long frames mean longer intervals before the device can service packets in each queue. For bandwidth managers that rely on tuning TCP window sizes, the overload of traffic should force the device to reduce window sizes in an effort to reduce congestion.

 

Verify that high-priority requests are serviced within 500 ms, as per device configuration.

 

Stop all traffic generation, and restart with low-priority HTTP 1.0 requests only. Verify that low-priority requests are serviced within 2,000 ms, as per device configuration. Restart high-priority requests and observe both high- and low-priority latency measurements. Withdraw high-priority flows once again. Measure the interval between the withdrawal of high-priority flows and the point where low-priority flows are once again serviced within 2,000 ms.
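
One rough way to reduce the raw latency measurements to the checks above is sketched here; the sample format is an assumption, and the recovery interval is taken as the time from withdrawal of the high-priority flows until the first low-priority request is again serviced within its 2,000-ms bound.

    # Hypothetical reduction of per-request latency samples; each sample is
    # (timestamp_s, priority, latency_ms), a format assumed for illustration.
    HIGH_BOUND_MS = 500
    LOW_BOUND_MS = 2_000

    def fraction_within_bound(samples, priority, bound_ms):
        """Fraction of requests of the given priority serviced within the bound."""
        latencies = [l for _, p, l in samples if p == priority]
        return sum(l <= bound_ms for l in latencies) / len(latencies)

    def recovery_interval(samples, withdrawal_time_s):
        """Seconds from withdrawal of high-priority flows until the first
        low-priority request is again serviced within 2,000 ms."""
        for t, p, latency in sorted(samples):
            if t >= withdrawal_time_s and p == "low" and latency <= LOW_BOUND_MS:
                return t - withdrawal_time_s
        return None

    # Example usage (with 'samples' collected by the analyzer):
    # fraction_within_bound(samples, "high", HIGH_BOUND_MS) should be 1.0
    # fraction_within_bound(samples, "low", LOW_BOUND_MS) should be 1.0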

 

Metrics:

Packet-by-packet latency

 

d. Mixed-class traffic handling

 

Objective: To determine the ability of devices to define and enforce multiple priority levels while concurrently handling TCP and UDP flows.

 

Procedure: Configure the device under test to allocate bandwidth in a 3:2:1 ratio to high-, medium-, and low-priority flows. The high-priority traffic will consist of UDP packets on port 111 (portmap). The medium-priority traffic will consist of HTTP 1.0 GET requests and responses with URL B. The low-priority traffic will consist of HTTP 1.0 GET requests and responses with URL C.

 

Offer approximately 50 Mbit/s of each priority class to the device under test, thus creating an aggregate load of 150 percent of line rate.

 

To conform to the 3:2:1 ratio (the arithmetic is worked through after this list):

 

--the device will forward all of the high-priority traffic without loss

--the device will drop about 1/3 of the medium-priority traffic

--the device will drop about 2/3 of the low-priority traffic
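
The expected numbers above follow directly from the ratio and the 100-Mbit/s line rate:

    # Expected per-class forwarding for a 3:2:1 allocation on a 100-Mbit/s link
    # with 50 Mbit/s offered per class (150 percent aggregate load).
    LINE_RATE_MBPS = 100
    OFFERED_MBPS = 50
    RATIO = {"high": 3, "medium": 2, "low": 1}
    total_shares = sum(RATIO.values())

    for cls, share in RATIO.items():
        allowed = LINE_RATE_MBPS * share / total_shares   # bandwidth the class may use
        forwarded = min(OFFERED_MBPS, allowed)
        dropped_pct = 100 * (OFFERED_MBPS - forwarded) / OFFERED_MBPS
        print(f"{cls}: forward {forwarded:.1f} Mbit/s, drop {dropped_pct:.0f}%")
    # high: forward 50.0 Mbit/s, drop 0%
    # medium: forward 33.3 Mbit/s, drop 33%
    # low: forward 16.7 Mbit/s, drop 67%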

 

Metrics:

Forwarding rate

Packet loss

 

3. Features comparison

To assemble a table comparing device features, participating vendors will answer the following questions:

 

What types of physical interfaces does the device support?

 

What is the maximum number of physical interfaces the device supports?

 

Does the device support definition of multiple virtual interfaces on a single physical interface?

 

What criteria does each device use to classify traffic, and why?

 

--URL/cookie (new feature, and only on some devices)

--application signature (new feature)

--voice over IP identifiers like RTP headers (new feature)

--TCP/UDP port numbers

--IP precedence field

--diff-serv codepoints

--IP source/destination address

--MAC addresses

--other (specify)

 

What type of bandwidth enforcement mechanism does the device use?

--queuing

--TCP window control

--other (specify)

 

What is the cost of the device as tested?

 

4. Coexistence with other QOS devices

Given the growing importance of QOS and policy management in enterprise and service provider networks, integration with other QOS devices may be a requirement. No formal testing of integration/interoperability with other devices is planned. However, vendors will be asked to describe how their devices interoperate with policy servers, directory servers, or other pre-existing QOS mechanisms.

 
