
Anomalies Management

The WebLogic collector lets you insert the following types of anomalies in an anomalies profile. Supported versions: WebLogic 6, 7, 8, 9, 10, 10.3, and 12.


WebLogic 6, 7, 8

EJB cache hit

The cache hit ratio is the ratio of the number of times a bean is looked up and found in the cache to the total number of times the bean cache is accessed.

An improperly sized cache may be the cause of a low cache hit ratio. If some beans are used more frequently than others, size your cache large enough that they can remain in the cache. To increase the maximum size of your cache, raise the "max-beans-in-cache" parameter in your weblogic-ejb-jar.xml file.
However, if your application does not use some beans more frequently than others, increasing the maximum cache size may not affect the cache hit ratio.
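As a sketch, the cache size of an entity bean can be raised in weblogic-ejb-jar.xml as follows (the bean name "AccountBean" and the value 2000 are placeholders, not taken from this document):

```xml
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>AccountBean</ejb-name>
    <entity-descriptor>
      <entity-cache>
        <!-- raise the cache ceiling so frequently used beans stay cached -->
        <max-beans-in-cache>2000</max-beans-in-cache>
      </entity-cache>
    </entity-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```

Pick a value based on the number of beans your application actually keeps hot; an oversized cache only wastes heap.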

The default anomaly conditions are:

Severity 1: The cache hit ratio is less than 75%
Severity 2: The cache hit ratio is less than 50%
Severity 3: The cache hit ratio is less than 30%

EJB transaction timeout

Percentage of transactions which timed out in the module. This is based on the ratio of the number of timed out transactions to the number of completed transactions.

An incorrect transaction timeout value can cause a high transaction timeout ratio.
If the timeout value is set too low, transactions may time out before the thread can complete the necessary work. Increase the transaction timeout value to reduce the number of transaction timeouts.
If the timeout value is set too high, threads can wait a long time for a resource before timing out, and request times may increase because requests wait longer before timing out.
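For container-managed transactions, the timeout can be tuned per bean in weblogic-ejb-jar.xml; in this sketch the bean name and the 60-second value are illustrative assumptions:

```xml
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>OrderBean</ejb-name>
    <transaction-descriptor>
      <!-- allow up to 60 seconds before the container rolls the transaction back -->
      <trans-timeout-seconds>60</trans-timeout-seconds>
    </transaction-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```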

The default anomaly conditions are:

Severity 1: The transaction timeout ratio is greater than 10%
Severity 2: The transaction timeout ratio is greater than 15%
Severity 3: The transaction timeout ratio is greater than 20%

EJB transaction rollback

Percentage of transactions which rolled back in the module. This is based on the ratio of the number of rolled back transactions to the number of completed transactions.

If the transaction timeout ratio is also higher than expected, try to resolve the timeout problem first; that may eliminate the rollbacks as well. If not, locate the transactions that are rolled back in your application.

The default anomaly conditions are:

Severity 1: The transaction rollback ratio is greater than 25%
Severity 2: The transaction rollback ratio is greater than 50%
Severity 3: The transaction rollback ratio is greater than 60%

Servlet response time

A high ratio indicates that one or more JSPs/servlets in the web module are experiencing a higher than normal degradation in response time.

The default anomaly conditions are:

Severity 1: Ratio is greater than 5.
Severity 2: Ratio is greater than 50.
Severity 3: Ratio is greater than 100.

JDBC pool used

If one of the JDBC connection pools is heavily used, an alert is raised.
Size the pool so that, even during peak usage, no threads have to wait for a connection; performance should then improve.
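In WebLogic 6–8, pool capacity is set on the JDBCConnectionPool element in config.xml. The fragment below is a sketch in the 8.x attribute style; the pool name, driver, URL, and capacity values are all placeholders:

```xml
<!-- config.xml fragment (WebLogic 8.x attribute style); names and values are illustrative -->
<JDBCConnectionPool
    Name="demoPool"
    Targets="myserver"
    DriverName="oracle.jdbc.OracleDriver"
    URL="jdbc:oracle:thin:@dbhost:1521:ORCL"
    InitialCapacity="10"
    MaxCapacity="50"
    CapacityIncrement="5"/>
```

MaxCapacity is the value to raise when threads are observed waiting for connections at peak load.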

The default anomaly condition is:

Severity 1: Utilization is greater than 90%.

JTA rollback

If a large number of transactions are rolled back, system performance is strongly degraded.
Try to increase the resources whose shortage is causing the rollbacks.
If there are many application-initiated rollbacks, try to modify the application so that it handles the problem differently and avoids rolling back.

The default anomaly conditions are:

Severity 1: More than 25% of transactions were rolled back.
Severity 2: More than 50% of transactions were rolled back.
Severity 3: More than 60% of transactions were rolled back.

JVM heap used

An alert is raised if the heap utilization is very high. To improve the system performance, increase the heap size.

The default anomaly values are:

Severity 1: Heap utilization is greater than 75%
Severity 2: Heap utilization is greater than 85%
Severity 3: Heap utilization is greater than 90%


Thread utilization

If a thread pool is heavily used it could degrade the response time of the system.

Most of the time, the execute queue is heavily used because a problem occurs in another part of the system than the execute queue itself. Try to locate and fix the origin of the problem; that should resolve the execute queue problem. If the problem persists, try to increase the thread count of the problematic execute queue.

Warning: A high execute thread count causes more memory to be used and increases context switching, which can degrade performance.
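In WebLogic 6–8, the thread count of an execute queue is configured on the ExecuteQueue element in config.xml. A sketch in the 8.x attribute style, with illustrative server name, port, and count:

```xml
<!-- config.xml fragment (WebLogic 8.x attribute style); values are illustrative -->
<Server Name="myserver" ListenPort="7001">
  <ExecuteQueue Name="default" ThreadCount="25"/>
</Server>
```

Increase ThreadCount gradually and re-measure: as the warning above notes, too many threads costs memory and context switches.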

The default anomaly conditions are:

Severity 1: 75% or more of the threads are in use.
Severity 2: 100% of the threads are in use.

Pending Requests

This anomaly occurs when the number of pending requests in the execute queue exceeds the predefined threshold for the application.

Requests to a server instance are placed in an execute queue. Threads assigned to that queue remove requests in the order they are received and process them; when a request has finished processing, the thread returns to the queue to retrieve the next request.
Unless you configure additional execute queues and assign applications to them, web applications use the weblogic.kernel.Default execute queue.
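Assigning a web application to a dedicated execute queue is done with the dispatch policy in weblogic.xml. This sketch assumes a queue named "AppQueue" has already been defined in config.xml; the name is a placeholder:

```xml
<weblogic-web-app>
  <!-- route this web application's requests to the "AppQueue" execute queue -->
  <wl-dispatch-policy>AppQueue</wl-dispatch-policy>
</weblogic-web-app>
```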

Most of the time, requests are pending because a problem occurs in another part of the system. Try to locate and fix the origin of the problem; that should resolve the pending requests problem. If the problem persists, try to increase the thread count of the problematic execute queue.

Warning: A high execute thread count causes more memory to be used and increases context switching, which can degrade performance.

The default anomaly conditions are:

Severity 1: The number of pending requests is greater than zero.
Severity 2: More than 25% of the threads are pending.
Severity 3: More than 50% of the threads are pending.



WebLogic 9, 10, 10.3, 12

EJB cache hit

The cache hit ratio is the ratio of the number of times a bean is looked up and found in the cache to the total number of times the bean cache is accessed.

An improperly sized cache may be the cause of a low cache hit ratio. If some beans are used more frequently than others, size your cache large enough that they can remain in the cache. To increase the maximum size of your cache, raise the "max-beans-in-cache" parameter in your weblogic-ejb-jar.xml file.
However, if your application does not use some beans more frequently than others, increasing the maximum cache size may not affect the cache hit ratio.

The default anomaly conditions are:

Severity 1: The cache hit ratio is less than 75%
Severity 2: The cache hit ratio is less than 50%
Severity 3: The cache hit ratio is less than 30%

EJB transaction timeout

Percentage of transactions which timed out in the module. This is based on the ratio of the number of timed out transactions to the number of completed transactions.

An incorrect transaction timeout value can cause a high transaction timeout ratio.
If the timeout value is set too low, transactions may time out before the thread can complete the necessary work. Increase the transaction timeout value to reduce the number of transaction timeouts.
If the timeout value is set too high, threads can wait a long time for a resource before timing out, and request times may increase because requests wait longer before timing out.

The default anomaly conditions are:

Severity 1: The transaction timeout ratio is greater than 10%
Severity 2: The transaction timeout ratio is greater than 15%
Severity 3: The transaction timeout ratio is greater than 20%

EJB transaction rollback

Percentage of transactions which rolled back in the module. This is based on the ratio of the number of rolled back transactions to the number of completed transactions.

If the transaction timeout ratio is also higher than expected, try to resolve the timeout problem first; that may eliminate the rollbacks as well. If not, locate the transactions that are rolled back in your application.

The default anomaly conditions are:

Severity 1: The transaction rollback ratio is greater than 25%
Severity 2: The transaction rollback ratio is greater than 50%
Severity 3: The transaction rollback ratio is greater than 60%

Servlet response time

A high ratio indicates that one or more JSPs/servlets in the web module are experiencing a higher than normal degradation in response time.

The default anomaly conditions are:

Severity 1: Ratio is greater than 5.
Severity 2: Ratio is greater than 50.
Severity 3: Ratio is greater than 100.

JTA rollback

If a large number of transactions are rolled back, system performance is strongly degraded.
Try to increase the resources whose shortage is causing the rollbacks.
If there are many application-initiated rollbacks, try to modify the application so that it handles the problem differently and avoids rolling back.

The default anomaly conditions are:

Severity 1: More than 25% of transactions were rolled back.
Severity 2: More than 50% of transactions were rolled back.
Severity 3: More than 60% of transactions were rolled back.

JVM heap used

An alert is raised if the heap utilization is very high. To improve the system performance, increase the heap size.
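When servers are started through the Node Manager, the heap can be raised in the domain's config.xml. The fragment below is a sketch in the WebLogic 9.x+ element style; the server name and heap values are illustrative assumptions:

```xml
<server>
  <name>myserver</name>
  <server-start>
    <!-- set initial and maximum heap; values are illustrative -->
    <arguments>-Xms1024m -Xmx1024m</arguments>
  </server-start>
</server>
```

Setting -Xms equal to -Xmx avoids heap resizing pauses; servers started from scripts instead take these flags via JAVA_OPTIONS.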

The default anomaly values are:

Severity 1: Heap utilization is greater than 75%
Severity 2: Heap utilization is greater than 85%
Severity 3: Heap utilization is greater than 90%

JDBC pool used

If one of the JDBC connection pools is heavily used, an alert is raised.
Size the pool so that, even during peak usage, no threads have to wait for a connection; performance should then improve.


The default anomaly values are:

Severity 1: Utilization is greater than 90%.

Thread utilization

If a thread pool is heavily used it could degrade the response time of the system.

Most of the time, the execute queue is heavily used because a problem occurs in another part of the system than the execute queue itself. Try to locate and fix the origin of the problem; that should resolve the execute queue problem. If the problem persists, try to increase the thread count of the problematic execute queue.

Warning: A high execute thread count causes more memory to be used and increases context switching, which can degrade performance.
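From WebLogic 9 onward, the server uses a single self-tuning thread pool, and per-application thread limits are usually expressed as work managers with constraints rather than execute queue thread counts. A config.xml sketch in the 9.x+ element style, where the constraint name, work manager name, count, and target are all placeholders:

```xml
<!-- config.xml fragment (WebLogic 9.x+); names and values are illustrative -->
<self-tuning>
  <max-threads-constraint>
    <name>AppMaxThreads</name>
    <count>50</count>
    <target>myserver</target>
  </max-threads-constraint>
  <work-manager>
    <name>AppWorkManager</name>
    <max-threads-constraint>AppMaxThreads</max-threads-constraint>
    <target>myserver</target>
  </work-manager>
</self-tuning>
```

An application then references the work manager by name through its dispatch policy.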

The default anomaly conditions are:

Severity 1: 75% or more of the threads are in use.
Severity 2: 100% of the threads are in use.

Pending Requests

This anomaly occurs when the number of pending requests in the execute queue exceeds the predefined threshold for the application.

Requests to a server instance are placed in an execute queue. Threads assigned to that queue remove requests in the order they are received and process them; when a request has finished processing, the thread returns to the queue to retrieve the next request.
Unless you configure additional execute queues and assign applications to them, web applications use the weblogic.kernel.Default execute queue.

Most of the time, requests are pending because a problem occurs in another part of the system. Try to locate and fix the origin of the problem; that should resolve the pending requests problem. If the problem persists, try to increase the thread count of the problematic execute queue.

Warning: A high execute thread count causes more memory to be used and increases context switching, which can degrade performance.

The default anomaly conditions are:

Severity 1: The number of pending requests is greater than zero.
Severity 2: More than 25% of the threads are pending.
Severity 3: More than 50% of the threads are pending.





Copyright © AgileLoad. All rights reserved.