Re: Too many open files problem

From: Jens-S. Voeckler
Date: Wed Dec 01 1999 - 10:11:59 MST

On Wed, 1 Dec 1999, Daniel Chandran wrote:

]>o If you are using Solaris, set in /etc/system "set rlim_fd_max=8192" and
]> reboot (you might also want to set the rlim_fd_cur soft limit):
]The OS is Solaris 2.7, and I have already increased the FD limit to
]4096 through the /etc/system file. And just now I am noticing that I am
]getting the errors even while using 1 robot, after I increased the
]Req_Rate to 100/sec.
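For reference, the /etc/system fragment described above might look like this (the 8192/1024 values are examples only; Solaris reads the file at boot, hence the reboot):

```
* /etc/system fragment: raise the per-process FD limits (reboot required)
set rlim_fd_max=8192
set rlim_fd_cur=1024
```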

Still, you should check whether the FDs are actually available. The last
column of the console output gives a rough idea of how many FDs were in
use at the moment of the log line. If you don't trust the console, you can
always try "netstat -nf inet | wc -l" -- though that number also includes
sockets in states where they no longer consume FDs, and it assumes your
test equipment is dedicated to the test.
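Since TIME_WAIT sockets still appear in netstat output but no longer hold an FD, you can filter them out before counting. A minimal sketch with made-up sample lines (in practice the input would come from `netstat -nf inet`):

```shell
# Hypothetical netstat-style lines; only the non-TIME_WAIT entries still
# hold a file descriptor, so count the rest.
sample='10.0.0.1.80   10.0.0.2.1025  ESTABLISHED
10.0.0.1.80   10.0.0.2.1026  TIME_WAIT
10.0.0.1.80   10.0.0.2.1027  ESTABLISHED'
printf '%s\n' "$sample" | grep -cv TIME_WAIT   # prints 2
```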

If the number in the last column increases steadily without reaching some
point of dynamic balance, you might first try raising your FD limit
further (e.g. for my tests, I set a hard limit of 16k FDs).
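In the shell that launches the test processes you can check what the children will inherit; `ulimit -n <N>` raises the soft limit up to the hard limit established by rlim_fd_max (a sketch; the actual numbers depend on your /etc/system settings):

```shell
# Show the FD limits that processes started from this shell inherit.
ulimit -Hn   # hard limit (bounded by rlim_fd_max from /etc/system)
ulimit -Sn   # soft limit (rlim_fd_cur; raise with: ulimit -n <N>)
```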

I still don't know how Polygraph handles the selected request rate
internally, but if your cache in the middle (or the servers) can only
deliver 90/s, the queue will grow until you hit the wall. Also, I
sometimes make the mistake of forgetting to propagate changes in my
client workload file to my server host.
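The back-of-the-envelope arithmetic, assuming the 100/s offered vs. 90/s served figures above and the 4096-FD limit mentioned earlier:

```shell
# If robots offer 100 req/s but the path only serves 90 req/s, the
# backlog -- and the FDs held by queued connections -- grows linearly.
offered=100; served=90; fd_limit=4096
growth=$((offered - served))            # 10 new outstanding FDs per second
echo "$growth $((fd_limit / growth))"   # prints "10 409": ~409 s to exhaustion
```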

OK, I just saw that Alex has already summed this up.

Le deagh dhùrachd,
Dipl.-Ing. Jens-S. Vöckler
Institute for Computer Networks and Distributed Systems
University of Hanover, Germany; +49 511 762 4726

This archive was generated by hypermail 2b29 : Tue Jul 10 2001 - 12:00:10 MDT