I’ve switched to JRE 1.7. I think it is a good change: the memory usage now
fluctuates and sometimes drops back down. Under JRE 1.6, memory usage pretty
much only went up; I don’t recall it ever dropping. We were starting to think
our huge CPU spikes might be related to Java garbage collection. Hopefully
JRE 1.7 is a workable solution for us.
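
To confirm whether GC really is behind the spikes, HotSpot GC logging can be
turned on with something like the following added to JAVA_OPTS (these are
standard HotSpot flags; the log path is just a placeholder):

  # GC logging; the log path below is a placeholder
  JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails \
    -XX:+PrintGCTimeStamps -Xloggc:/path/to/tomcat/logs/gc.log"
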
On Dec 12, 2013, at 10:29 AM, John Caron <caron@xxxxxxxxxxxxxxxx> wrote:
> Hi Jay:
>
> Debugging Tomcat server performance is complicated.
>
> The first question is: how many simultaneous requests are there? Each one
> consumes a thread until the request completes, so very slow connections can
> shut down a server. The default number of threads depends on the Tomcat
> version; I think it is 200 for Tomcat 7. This can be changed in
> tomcat/conf/server.xml.
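>
> For example, the setting is the maxThreads attribute on the HTTP Connector
> in server.xml, roughly like this (the other attributes shown are just the
> stock defaults and may differ on your install):
>
>   <!-- maxThreads added for illustration; other attributes are stock defaults -->
>   <Connector port="8080" protocol="HTTP/1.1"
>              connectionTimeout="20000"
>              redirectPort="8443"
>              maxThreads="200" />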
>
> 1. Start up the Tomcat manager application and see what it has to say about
> the connections.
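>
> (If the manager app is not already enabled, it usually just needs a user
> with the manager role in tomcat/conf/tomcat-users.xml; the username and
> password here are placeholders:)
>
>   <tomcat-users>
>     <!-- placeholder credentials; use something real -->
>     <role rolename="manager-gui"/>
>     <user username="admin" password="changeme" roles="manager-gui"/>
>   </tomcat-users>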
>
> 2. Better, run jvisualvm on the same machine as Tomcat (it can also be done
> remotely) and connect to the running Tomcat server.
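>
> (For a remote connection, the Tomcat JVM has to be started with JMX enabled,
> along these lines; the port is arbitrary, and you would want authentication
> and SSL turned on for a production server:)
>
>   # placeholder port; enable auth/SSL in production
>   -Dcom.sun.management.jmxremote
>   -Dcom.sun.management.jmxremote.port=9010
>   -Dcom.sun.management.jmxremote.authenticate=false
>   -Dcom.sun.management.jmxremote.ssl=false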
>
> There are more involved things you can do; there are commercial versions of
> Tomcat that add visibility and performance metrics. You might also google
> "tomcat performance monitoring tools".
>
> You want to upgrade to Java 7; also make sure you are on the latest version
> of Tomcat 7.
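>
> (To check what you are running now, something like this works; CATALINA_HOME
> is just your Tomcat install directory:)
>
>   java -version
>   $CATALINA_HOME/bin/version.sh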
>
> Send me the threddsServlet.log file for one of the hours where the server
> bogged down and I’ll see if there’s anything obvious.
>
> John
>
>
>
> On 12/12/2013 10:47 AM, Gerry Creager - NOAA Affiliate wrote:
>> Jay,
>>
>> What OS, release, and Tomcat version are you running? I’ve seen a similar
>> issue on another piece of software (Ramadda). Since I’ve seen this
>> behavior with both the standalone server and the Tomcat-based server, I’m
>> beginning to suspect my Java installation, but I have not had sufficient
>> time to investigate yet.
>>
>> There may be an OS correlation here, so I’m interested. I’m running RHEL
>> 6 and the various updated flavors of OpenJDK and Tomcat 6.
>>
>> gerry
>>
>>
>>
>>
>> On Wed, Dec 11, 2013 at 4:33 PM, Jay Alder <alderj@xxxxxxxxxxxxxxxxxxxx> wrote:
>>
>> Hi, we’ve recently released a web application that uses TDS for
>> mapping, and it is getting a lot of traffic. At one point the server
>> stopped responding altogether, which is a major problem. A quick
>> restart of Tomcat got it going again, so I’m starting to dig into
>> the logs. Normally each GET is followed by a "request complete"
>> entry in the log, but occasionally we’ll have:
>>
>> GET …url…
>> GET …url…
>> GET …url…
>> GET …url…
>> GET …url…
>> GET …url…
>> GET …url…
>> GET …url…
>>
>> meanwhile we have a 100% CPU spike (with 12 CPUs) for a minute or more, then:
>>
>> request complete
>> request complete
>> request complete
>> request cancelled by client
>> request cancelled by client
>> request complete
>> request complete
>>
>> The few times I’ve watched this happen in the logs, the server seems
>> to pull out of it OK. However, the time the server failed, requests
>> were never returned: from the logs, requests came in for roughly 40
>> minutes without being completed. Unfortunately, due to the high
>> visibility, we started to get emails from users and the press about
>> the application no longer working.
>>
>> Has anyone experienced this before and/or can you give guidance on
>> how to diagnose or prevent this?
>>
>> Here are some config settings:
>> CentOS 5.7
>> Java 1.6
>> TDS 4.3.17
>> only WMS is enabled
>> Java -Xmx set to 8 GB, currently using 5.3 GB (see the heap settings sketch
>> below); the dataset is 600 GB of 30-arcsecond grids for the continental US,
>> 3.4 GB per file
>> For better or worse we are configured to use 2 instances of TDS to
>> keep the catalogs and configuration isolated. I’m not sure if this
>> matters, but I didn’t want to omit it. Since it is a live server I
>> can’t easily change to the preferred proxy configuration.
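>>
>> (For completeness, the heap size above is set through the usual Tomcat
>> JAVA_OPTS mechanism, roughly like this; the exact flags on our server may
>> differ:)
>>
>>   # heap settings sketch; actual flags may differ
>>   JAVA_OPTS="$JAVA_OPTS -Xmx8g"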
>>
>> I am trying not to panic yet. However, if the server goes
>> unresponsive again, staying calm may no longer be an option.
>>
>> Jay Alder
>> US Geological Survey
>> Oregon State University
>> 104 COAS Admin Building
>> Office Burt Hall 166
>> http://ceoas.oregonstate.edu/profile/alder/
>>
>>
>> --
>> Gerry Creager
>> NSSL/CIMMS
>> 405.325.6371
>> ++++++++++++++++++++++
>> “Big whorls have little whorls,
>> That feed on their velocity;
>> And little whorls have lesser whorls,
>> And so on to viscosity.”
>> Lewis Fry Richardson (1881-1953)
>>
>>
>
> _______________________________________________
> thredds mailing list
> thredds@xxxxxxxxxxxxxxxx
> For list information or to unsubscribe, visit:
> http://www.unidata.ucar.edu/mailing_lists/
Jay Alder
US Geological Survey
Oregon State University
104 COAS Admin Building
Office Burt Hall 166
http://ceoas.oregonstate.edu/profile/alder/