Re: workload, request rate and robots.

From: Alex Rousskov (rousskov@ircache.net)
Date: Wed Dec 01 1999 - 09:48:07 MST


On Wed, 1 Dec 1999, Serge Ayoun wrote:

> I have used your scripts and they seem to work fine, however a little bit
> too slow.
> ...
> Do I need to increase the number of robots?

Yes, here is a blurb from an e-mail sent to the list a few days ago:

    To get high request rates, you will need to remove the
    "open_conn_lmt" limit in your Robot definition(s) OR, better, create
    many robots using IP aliases (one robot per alias). Check out the
    "aka" tool if you are creating a lot of aliases.

The "open_conn_lmt" setting prevents your robots from opening too many
connections, effectively limiting the request rate a single robot is
allowed to produce. The limit comes from the desire to simulate
browser behavior. For example, Netscape has a limit of 13-15
connections.
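
For what it's worth, here is a rough PGL-style sketch of the two
options; "open_conn_lmt" is the real field discussed above, while the
per-robot rate, the connection limit, and the alias subnet are made-up
illustrations rather than values from any bundled workload:

    // rough sketch only; numbers and the address range are illustrative
    Robot R = {
        req_rate = 0.4/sec;           // per-robot request rate
        open_conn_lmt = 4;            // browser-like cap; raise it (or drop
                                      // the line) to let one robot push more
        addresses = ['10.0.1.1-250']; // one robot per IP alias (e.g., "aka")
        // other required fields (origins, pop_model, etc.) omitted here
    };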

> or is it done automatically with the request rate?
> I understand that the robot is a thread which sends requests, but why does
> the user need to control this parameter?

The "user" has to be involved because increasing the number of robots
        - is a serious step with many consequences
        - is not always required (in general)
        - can be done in many ways (see above for an example).
Besides, some of the methods depend on your IP allocation scheme and
Polygraph does not control that.

When we decide on the pending issues (delayed ACKs, pop_model, and
DummyNet), we will be able to correlate the number of robots with the
request rate. For the bake-off, we will configure the number of robots
based on the desired request rate, using R req/sec per robot. If
nothing changes, R will be equal to 0.4 req/sec, producing a load of at
most 400 req/sec with 1000 robots per client machine.
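
To put the same arithmetic in config terms, a hypothetical sizing
sketch (the names below are mine, not from any bundled workload):

    // hypothetical sizing sketch; names are illustrative
    rate RobotRate = 0.4/sec;   // R req/sec per robot
    int  RobotCnt  = 1000;      // robots per client machine
    // aggregate load = RobotCnt * RobotRate = at most 400 req/sec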

In general, we expect Polygraph users to understand the configuration
rather than use Polygraph as a plug-and-play toy with one or two knobs.
While the latter is certainly desirable, the complexities of the
workloads _and_ the benchmarking environment make that approach too
risky at the moment. Personally, I would prefer to temporarily lose 50%
of the "user base" rather than watch a 10% increase in crappy
benchmarking results.

Alex.


