This module provides capabilities that are rare on the market. Built around a fully configurable engine, it reads your test results and detects possible anomalies through threshold or topological analysis. Three severity levels are available, and the threshold that triggers an anomaly can be set for each of them. The rule the engine uses to raise an anomaly can also be modified: seven formulations (threshold or topological analysis) determine when an anomaly is created. The anomalies, ordered chronologically or by application component, let you see at a glance their severity, the offending component, and the period during which the affected values remained above the limits.
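AgileLoad's engine itself is proprietary, but the threshold side of the idea can be sketched in a few lines. The metric, the three severity levels, and the threshold values below are illustrative assumptions, not the product's actual configuration:

```python
# Minimal sketch of threshold-based anomaly detection with three
# severity levels. Metric, thresholds, and severity names are
# illustrative assumptions, not AgileLoad's actual configuration.

SEVERITY_THRESHOLDS = [
    ("critical", 5.0),  # response time of 5 s or more
    ("major", 3.0),
    ("minor", 1.5),
]

def detect_anomalies(samples):
    """Return (timestamp, value, severity) for each sample that
    crosses a threshold; the highest matching severity wins."""
    anomalies = []
    for ts, value in samples:
        for severity, limit in SEVERITY_THRESHOLDS:
            if value >= limit:
                anomalies.append((ts, value, severity))
                break  # stop at the most severe matching level
    return anomalies

samples = [(0, 0.8), (1, 1.9), (2, 5.4), (3, 0.7)]
print(detect_anomalies(samples))  # samples at t=1 and t=2 are flagged
```

Listing the thresholds from most to least severe keeps the classification a single ordered scan per sample.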
Agileload's automated anomaly detection helps identify trends or exception conditions that may be the cause of performance issues.
Several statistics are collected during test execution; these numbers are analyzed to generate a report, verify the main performance goals (speed, scalability, and stability), and identify possible performance problems. Examples of statistics collected during a test are:
- Transaction response times (average, standard deviation): The average time taken to perform transactions during the test. This statistic helps determine whether server performance falls within the minimum and maximum transaction response time ranges defined for your system.
- Hits per second: The number of hits users make on the web server. This statistic helps evaluate the amount of load users generate, in terms of the number of hits.
- Throughput: The amount of data (in bytes) served by the web server during the test, representing how much data users received from the server in any given second. This statistic helps evaluate the amount of load users generate, in terms of server throughput.
- Transactions per second: The number of completed transactions (both successful and failed) performed during a test. This statistic helps determine the actual transaction load on the system.
- CPU: CPU utilization (%) during the test.
- Memory: Memory utilization during the test.
- Disk: Disk utilization during the test.
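To make the first few statistics concrete, here is an illustrative computation from raw transaction records. The record layout, field order, and test duration are assumptions made for this sketch, not AgileLoad's data model:

```python
# Illustrative computation of the statistics listed above from raw
# transaction records. Record layout and test duration are assumed
# for this sketch only.
from statistics import mean, stdev

# Each record: (start_time_s, duration_s, bytes_received, success)
records = [
    (0.0, 0.42, 2048, True),
    (0.5, 0.55, 4096, True),
    (1.1, 1.30, 1024, False),
    (1.6, 0.48, 2048, True),
]

test_length_s = 2.0  # assumed total test duration in seconds
durations = [r[1] for r in records]

avg_response = mean(durations)           # average response time
std_response = stdev(durations)          # response-time standard deviation
hits_per_second = len(records) / test_length_s
throughput_bps = sum(r[2] for r in records) / test_length_s
# Both successful and failed transactions count toward the load:
transactions_per_second = len(records) / test_length_s

print(f"avg={avg_response:.4f}s std={std_response:.4f}s "
      f"hits/s={hits_per_second} throughput={throughput_bps} B/s")
```

Note that transactions per second deliberately includes failed transactions, matching the definition above: it measures the load placed on the system, not its success rate.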
Whatever type of test you run, whether benchmarking, load balancing, or scalability testing, the results deliver far more value when you can quickly and flexibly view them from different angles.
Merge graphs to make them more meaningful:
- Transaction response time under load
- Response time per virtual users versus CPU utilization
- Cross scenarios graphs
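The core of any merged graph is aligning two metric series on a common timeline. A minimal sketch of that step, using assumed series names and sampling points (the basis of a "response time versus CPU utilization" view):

```python
# Sketch of merging two metric series onto a shared timeline, the
# basis of a merged "response time vs. CPU utilization" graph.
# Series names and sampling points are illustrative assumptions.

response_time = {0: 0.4, 10: 0.6, 20: 1.9}  # seconds, keyed by elapsed s
cpu_percent   = {0: 22,  10: 41,  20: 93}   # %, keyed by elapsed s

# Keep only timestamps present in both series, in time order.
merged = [
    (t, response_time[t], cpu_percent[t])
    for t in sorted(response_time.keys() & cpu_percent.keys())
]

for t, rt, cpu in merged:
    print(f"t={t:>3}s  response={rt:.1f}s  cpu={cpu}%")
```

With the series aligned like this, any plotting tool can draw both on one chart with two y-axes, which is what makes correlations (rising CPU driving rising response time) immediately visible.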
Agileload tightly integrates data analysis and reporting, letting you leverage the same test results through the creation of different views.