Data Comm Lab
Test: Proxy Caches
Scheduled Publication Date: September 21, 1999

Test Description

Copyright (c) 1999 CMP Media Inc. Vendors are encouraged to comment on any part of this
project. However, Data Comm and the Ircache team reserve the right to change parameters
of this test at any time.
TERMINOLOGY
Box: Vendor-supplied equipment, including cache appliance, computer system, and
networking equipment (routers, switches).

Cluster: Vendor equipment plus the testing equipment (Polygraph Unix systems).

Run: A concurrent execution of at least one Polygraph client and one Polygraph server
within a cluster.

Experiment: A sequence of Runs with similar configurations and purpose, but with a
change of at least one configuration parameter (e.g., "load").
Transaction error: Any failure to submit a complete request and/or to receive a
complete "valid" reply (within time limits and other conditions specified by an
experiment), as detected by the benchmarking software. A valid reply must include all
the headers generated by the server and may include other headers, and its content must
be exactly the same as that generated by the server.
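To make this definition concrete, here is a minimal sketch (in Python) of the kind of
validity check described above; the data structures and comparison shown are an
illustration of the rule, not Polygraph's actual implementation.

    # Illustrative only: a reply is "valid" if it carries every header the origin
    # server generated (extra headers, e.g. added by the proxy, are allowed) and
    # byte-identical content. Names and types here are hypothetical.
    def reply_is_valid(server_headers: dict, server_body: bytes,
                       received_headers: dict, received_body: bytes) -> bool:
        # Every server-generated header must be present (assumed: with the same value).
        for name, value in server_headers.items():
            if received_headers.get(name) != value:
                return False
        # Content must match exactly what the server generated.
        return received_body == server_body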
WORKLOAD
The "Datacomm-1" workload is similar to the "PolyMix-1" workload for the Polygraph
cache testing tool, with modifications and additions described below.

A general description of Polygraph, along with source code and documentation, is
available here:

http://polygraph.ircache.net/

and a description of PolyMix-1 is available here:

http://polygraph.ircache.net/Workloads/PolyMix-1/index.html#PolyMix-1

The Datacomm-1 workload differs from PolyMix-1 in these ways:
* HTTP persistent connections will be used. The number of requests handled per client
TCP connection is determined by a Zipf distribution with an upper limit of 64; on the
server side, a Zipf distribution with a limit of 16 is used (see the sketch following
this list).
* Runs will be four hours in duration instead of one hour.
* An "object lifecycle model" will mimic PolyMix-1 behavior. This model determines the
values of the Date, Last-Modified, and Expires headers.
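As promised above, here is a minimal sketch (in Python) of sampling a Zipf-distributed,
upper-limited request count per persistent connection. The exact parameters behind
Polygraph's --pconn_use_lmt zipf:N option are not specified here, so the distribution
below is an assumption for illustration, not the tool's implementation.

    import random

    # Hypothetical sketch: sample how many requests one persistent connection carries,
    # assuming a Zipf-like distribution truncated at an upper limit (64 for client
    # connections, 16 for server connections in this workload).
    def zipf_truncated(limit: int, alpha: float = 1.0) -> int:
        weights = [1.0 / (k ** alpha) for k in range(1, limit + 1)]
        r = random.uniform(0.0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights, start=1):
            acc += w
            if r <= acc:
                return k
        return limit

    # Example: request counts for five client and five server connections.
    client_counts = [zipf_truncated(64) for _ in range(5)]
    server_counts = [zipf_truncated(16) for _ in range(5)]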
Because of the increased run duration, there will be fewer runs overall. Each Box will
be tested at TWO different request rates; the request rates to be tested will be
supplied by the vendor.
To execute this workload, Polygraph version 1.2.1 or later will be used, with the
following command lines:
Client:

polyclt \
  --verb_lvl 4 \
  --ports 1024:30000 \
  --proxy $Proxy \
  --origin $Server \
  --launch_win 1min \
  --rep_cachable 80p \
  --pconn_use_lmt zipf:64 \
  --nagle off \
  --robots 1 \
  --req_rate $RR/sec \
  --dhr 55p \
  --pop_model unif \
  --tmp_loc none \
  --cool_phase 1min \
  --goal -1:5hr:0.30
Server:

polysrv \
  --port 80 \
  --verb_lvl 4 \
  --idle_tout 135sec \
  --pconn_use_lmt zipf:16 \
  --nagle off \
  --xact_think norm:3s,1.5s \
  --obj_with_lmt 100p \
  --obj_life_cycle const:2year \
  --obj_bday const:-1year \
  --obj_expire 100p=lmt+const:1 \
  --goal 5.1hr
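As a rough illustration of the "object lifecycle model" configured by the --obj_*
options above, the sketch below (in Python) fills in Date, Last-Modified, and Expires
headers for an object born one year ago with a constant two-year lifecycle that expires
just after its last modification. The interpretation of --obj_life_cycle, --obj_bday,
and --obj_expire used here is an assumption for illustration, not a statement of
Polygraph's exact semantics.

    from datetime import datetime, timedelta, timezone
    from email.utils import format_datetime

    # Assumed reading of the server options above:
    #   --obj_bday const:-1year       -> the object was "born" one year ago
    #   --obj_life_cycle const:2year  -> it is modified every two years
    #   --obj_expire 100p=lmt+const:1 -> it expires shortly after its last modification
    #                                    (unit assumed to be seconds)
    def lifecycle_headers(now: datetime) -> dict:
        birthday = now - timedelta(days=365)
        last_modified = birthday  # with a two-year cycle, no modification since birth
        expires = last_modified + timedelta(seconds=1)
        return {
            "Date": format_datetime(now, usegmt=True),
            "Last-Modified": format_datetime(last_modified, usegmt=True),
            "Expires": format_datetime(expires, usegmt=True),
        }

    print(lifecycle_headers(datetime.now(timezone.utc)))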
EXPERIMENTS
1) No-Proxy: A no-proxy test will be executed to verify the Cluster equipment. The
no-proxy test will be one hour in duration. Results of the no-proxy test will not be
included in the article.
2) Filling-the-cache: The cache must be in a full state before the Datacomm-1
experiments may be executed. A cache is "full" when its disk utilization stops
increasing (see the sketch following this list). Results of filling-the-cache will not
be included in the article.
3) Datacomm-1: TWO runs of the Datacomm-1 workload, at two different request rates.
Failed runs may be repeated, if time allows.
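The "full" criterion in experiment 2 can be checked mechanically. The sketch below (in
Python) shows one way to decide that disk utilization has stopped increasing; the
poll_disk_utilization() routine is a hypothetical placeholder, since how utilization is
actually read varies by Box and is not specified by this test plan.

    import time

    def poll_disk_utilization() -> float:
        # Hypothetical placeholder: in practice this would query the Box's own
        # management interface (SNMP, CLI, web statistics page, ...).
        raise NotImplementedError

    def wait_until_cache_full(poll_interval_s: int = 600, stable_polls: int = 3,
                              epsilon: float = 0.001) -> None:
        # Declare the cache "full" once utilization has not grown by more than
        # epsilon for `stable_polls` consecutive polls.
        last = poll_disk_utilization()
        stable = 0
        while stable < stable_polls:
            time.sleep(poll_interval_s)
            current = poll_disk_utilization()
            stable = stable + 1 if current - last <= epsilon else 0
            last = current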
REPORTING
The following metrics may be reported in the Data Comm article:

* Request rates specified by the vendor.

* Success or failure of a run. If a run initially fails and later succeeds (at the same
request rate), the failed run is not reported.
* Response time mean and median values, averaged over intervals of no less than one
hour (see the sketch following this list).

* Document-hit-ratio and byte-hit-ratio values, averaged over intervals of no less than
one hour.
* Percentage of transaction errors.
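As an illustration of the interval reporting above, here is a minimal sketch (in
Python) that aggregates per-hour mean and median response times plus document and byte
hit ratios from a list of transaction records. The (timestamp, response_ms, hit, bytes)
record format is invented for the example and does not correspond to Polygraph's actual
log format.

    from collections import defaultdict
    from statistics import mean, median

    # records: iterable of hypothetical (timestamp_s, response_ms, hit, nbytes) tuples.
    def hourly_report(records):
        buckets = defaultdict(list)
        for ts, resp_ms, hit, nbytes in records:
            buckets[int(ts // 3600)].append((resp_ms, hit, nbytes))
        report = []
        for hour in sorted(buckets):
            rows = buckets[hour]
            resp = [r for r, _, _ in rows]
            total_bytes = sum(b for _, _, b in rows)
            hit_bytes = sum(b for _, h, b in rows if h)
            report.append({
                "hour": hour,
                "mean_ms": mean(resp),
                "median_ms": median(resp),
                "dhr": sum(1 for _, h, _ in rows if h) / len(rows),      # document hit ratio
                "bhr": hit_bytes / total_bytes if total_bytes else 0.0,  # byte hit ratio
            })
        return report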
RULES
* Once testing has begun (i.e., with the no-proxy test), the vendor is not allowed to
make any changes to the Box configuration or software versions.
* In the event of hardware failures, vendors will be allowed to swap in new equipment
and continue testing, strictly on a time-permitting basis. If a hardware failure occurs
and time constraints prevent retesting, this will be noted in the article. Vendors are
strongly encouraged to bring spare interfaces, chassis, disks, etc., as a contingency
for any hardware failure.
* Participants will receive all test results and have a chance to comment on their
results before publication.
* Participants are not permitted to withdraw or suppress results once testing begins.