On Wed, 1 Dec 1999, Y. Charles Hu wrote:
> > open_conn_lmt * mean_response_time * total_req_rate
> > where
> > total_req_rate = number_of_robots * req_rate_per_robot
> I am a little confused by the above formula.
That's OK. "Realistic" workloads are supposed to be confusing. :)
> why is open_conn_lmt a multiplicative term?
> shouldn't the formula simply be mean_response_time * total_req_rate?
The open connections limit specifies how many connections a robot can
open. The connections may not be closed after they are used; some of
them become idle persistent connections. Thus, open_conn_lmt determines
the maximum number of open connections a robot can have at any given
time. The response time factor is actually somewhat less important in
this case. If response time is low, a robot may not have to open all
open_conn_lmt connections. If response time is high, the robot may
reach the limit and have to wait for an open connection to become free
before submitting new requests.
As you can see, the formula above is just an approximation. It can be
made more precise for any given environment.
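For concreteness, here is a tiny Python sketch of the two estimates
being compared in this exchange; every number in it is an assumption
chosen for illustration, not a prescribed bake-off setting:

```python
# Back-of-the-envelope comparison of the two estimates discussed above.
# All concrete numbers here are illustrative assumptions.

open_conn_lmt = 4            # per-robot open connection limit (assumed)
mean_response_time = 2.5     # seconds (assumed)
number_of_robots = 1250      # robots (assumed)
req_rate_per_robot = 0.4     # req/sec per robot (assumed)

total_req_rate = number_of_robots * req_rate_per_robot

# The approximation from the formula above:
formula_estimate = open_conn_lmt * mean_response_time * total_req_rate

# The simpler estimate from the question (connections busy serving
# in-flight requests only, via Little's law):
simple_estimate = mean_response_time * total_req_rate

print(total_req_rate)     # 500.0 req/sec
print(simple_estimate)    # 1250.0 connections busy with requests
print(formula_estimate)   # 5000.0 with the open_conn_lmt factor
```

The gap between the two estimates is exactly the point made above:
idle persistent connections can keep the number of open connections
well above the number of connections actively serving requests.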
> I thought to get high req rate for polymix-2, we should simply remove
> open_conn_lmt? or did I misunderstand?
You _can_ remove open_conn_lmt. However, during the bake-off, we intend
to use the open_conn_lmt option and increase the number of robots
instead. The latter is what I would recommend if you are preparing for
the bake-off. [ Removing the limit may make the workload "easier" on the
cache because the proxy will have to deal with fewer IP addresses, etc. ]
> What shall I set it to if, say, I want req_rate_per_robot to be 500 req/sec?
To be safe, use 2000 robots. You should use at least 1250 robots
(0.4 req/sec per robot).
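The robot-count arithmetic above can be checked directly; the 500
req/sec figure is read here as an aggregate target rate:

```python
# Minimum robot count for a target aggregate request rate, using the
# 0.4 req/sec per-robot rate quoted above.

target_total_req_rate = 500.0   # req/sec, aggregate target
req_rate_per_robot = 0.4        # req/sec per robot

min_robots = target_total_req_rate / req_rate_per_robot
print(min_robots)  # 1250.0 -- matches the "at least 1250 robots" above
```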
> open_conn_lmt certainly limits the effective req_rate_per_robot?
Yes, it may limit the effective req_rate_per_robot. To avoid turning the
workload into a best-effort one, we use many robots and a low request
rate per robot.
This archive was generated by hypermail 2b29 : Tue Jul 10 2001 - 12:00:10 MDT