Now this thing has been eating my head since yesterday, and I have done all I could to stop it from coming up again on my server.

The issue is with the Checkpoint production server, where the following error keeps popping up twice a day and the server is practically held up till you restart it –

May 18, 2010 3:03:34 PM org.apache.tomcat.util.net.PoolTcpEndpoint acceptSocket
SEVERE: Endpoint ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=19638]
ignored exception: java.net.SocketException: Too many open filesjava.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
at java.net.ServerSocket.implAccept(ServerSocket.java:453)

Having Googled enough, we checked whether any jars were reloading themselves throughout the application, reviewed the whole codebase to make sure every file stream was being closed, blah blah blah. Nothing worked.
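For what it is worth, the pattern we were hunting for in that review is the usual close-in-finally: a stream that is opened but never closed keeps its file descriptor until the object happens to be garbage collected. A minimal sketch, with a made-up class and method name just to show the shape:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamAudit {

    // Dumps a file's bytes to stdout. The finally block guarantees the
    // descriptor is released even when read() throws; drop the close()
    // and every call leaks one descriptor.
    static void dump(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
        } finally {
            in.close();
        }
    }

    public static void main(String[] args) throws IOException {
        dump(args[0]);
    }
}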

Finally, running a small shell script that passes the process id to lsof to list its open files, we found one properties file open a bunch of times. On checking further, I realized the ResourceBundle backing it was an instance variable in a frequently used utility class. Changed it to a static variable, problem solved.
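For the curious, the change amounted to something like the sketch below. The class and bundle names here are made up for illustration; the point is just that the bundle moves from a per-instance field to a single static one, so the heavily used utility class no longer loads the same properties file for every new instance (which is exactly the pattern lsof was showing).

import java.util.ResourceBundle;

public class MessageUtil {

    // Before: one bundle per MessageUtil instance; on a busy server the
    // backing properties file kept showing up in lsof again and again.
    // private final ResourceBundle messages = ResourceBundle.getBundle("messages");

    // After: one bundle per classloader, loaded the first time the class is
    // used. "messages" is assumed to resolve to a messages.properties on the
    // classpath.
    private static final ResourceBundle MESSAGES = ResourceBundle.getBundle("messages");

    public static String get(String key) {
        return MESSAGES.getString(key);
    }
}

Every call site then shares the one bundle through MessageUtil.get("some.key") instead of paying for a fresh load.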


On Unix/Linux, the number of files a user is allowed to open is typically governed by ulimit, and the hard limits can be checked with

ulimit -aH


On the other hand, the per-process limit on the number of open files for any user is defined somewhere else, i.e. in limits.h, and can be checked with

grep OPEN_MAX /usr/include/sys/limits.h


This IBM technote, though, is handy with commands to get to the open files quickly.
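As an aside, the per-process count can also be watched from inside the application: on the Sun/Oracle HotSpot JVM the OperatingSystemMXBean can be downcast to com.sun.management.UnixOperatingSystemMXBean, which reports the process's own descriptor usage. A small sketch, with the caveat that the interface is vendor-specific and Unix-only:

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class FdMonitor {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // The Unix-specific subinterface is only there on Sun/Oracle HotSpot
        // running on a Unix-like OS, hence the instanceof check.
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unixOs =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            System.out.println("Open file descriptors: "
                    + unixOs.getOpenFileDescriptorCount()
                    + " of max " + unixOs.getMaxFileDescriptorCount());
        } else {
            System.out.println("Descriptor counts not available on this JVM/OS.");
        }
    }
}

Logging those two numbers periodically would have flagged the creeping descriptor count long before the endpoint started rejecting connections.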