The Checks View is very useful for determining whether a test has run successfully. Ideally no checks will fail; if there are failures (choose ‘Select All’ in the middle right of the window to include all checks in the pie chart), it is useful to be able to find out why. The upper part of the Checks View lists any checks that have failed, displayed as hyperlinks. If the ‘Save on Check Failed’ option was selected when the test was set up, analysis is straightforward: simply click the hyperlink. If this option was not set, the failed check is not displayed as a hyperlink and no further troubleshooting is possible.
Clicking the hyperlink opens the Virtual Users Details View, as shown below. The relevant request is automatically selected in the upper part of the window; at the upper left it can be seen that the request was made by Virtual User #1. In the lower half of the screen the HTTP request is displayed above the HTTP response in HTML. Clicking the Preview button shows a rendered version of the HTML received in the HTTP response. The HTTP response shown on the right is rendered as HTML only, without graphics, so while the screen is not entirely clear it is possible to start to understand why the check has failed. In this case the script was trying to log in to a web application, and the ‘Login’ check is looking for the text ‘Welcome’; this can be seen in part in the checks screen and in the script itself. It now becomes obvious that, because the server is still offering the user a login form, the login has failed.
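The ‘Login’ check described above amounts to a simple substring test on the response body: the check passes only if the expected text appears in the HTML the server returned. As a minimal sketch (this is illustrative Python, not the tool’s actual API; the function name and sample HTML are invented):

```python
def check_contains(response_body: str, expected_text: str) -> bool:
    """Return True if the check passes, False if it fails."""
    return expected_text in response_body

# A failed login typically returns the login form again, so the
# 'Welcome' text is absent and the check fails.
login_form_html = "<html><body><form action='/login'>...</form></body></html>"
welcome_html = "<html><body><h1>Welcome, user!</h1></body></html>"

assert check_contains(welcome_html, "Welcome") is True
assert check_contains(login_form_html, "Welcome") is False
```

This is why the rendered preview is so helpful: seeing the login form where a welcome page was expected explains the failed check at a glance.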
The next, and possibly most important, view on the result set is created by drag and drop actions using the relevant folders; these will usually include the HTTP, Timers and Page Timers folders and, if configured, some of the Monitoring folders. As stated earlier, infrastructure monitoring will not be covered in this section. An example of its use, however, would be to extend a graph like the one built below to plot the number of active users against both page response time and CPU time.
The screenshot above shows the actions required to build a graph plotting the number of users active in the test against the page response time. First, double-click a value – in this case ‘Active Users’ is chosen; a dialog box will open if there are further questions, otherwise the graph is created immediately. The next step is to add the desired metrics: here ‘Page Timers / Time’ is dragged onto the chart, a dialog box opens asking which timers should be included, and the plot is then added to the graph.
The graph is plotted over elapsed time. This is a very effective way to understand how the performance of the site reacts to user load on the system. The left-hand axis denotes page response time in milliseconds; the right-hand axis denotes the number of users. The X axis is real elapsed time. The blue plot shows the number of users at each point in the test, and the red line shows the response time observed at that point.
In this case response time is stable after an initial peak, showing that the system is managing the load. The peak is caused by the initial requests downloading uncached content, such as graphics; subsequent requests by the same user are served from the cache. Cache behaviour can be controlled in the test setup and is documented in the earlier section.
Another important metric is Transactions Per Second. The graph below shows that the web test drives the web application at a rate of 1 transaction per second. The application may be able to handle more transactions, and further testing would prove or disprove that. One way to increase the transaction rate is to increase the number of users; another is to decrease the user wait time while maintaining the same number of concurrent virtual users.
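The relationship between user count, wait time and transaction rate follows from Little’s Law: throughput is roughly the number of concurrent users divided by the time each user takes per transaction (response time plus wait time). A hypothetical illustration in Python (the numbers are invented, not taken from the test above):

```python
def transactions_per_second(users: int, response_time_s: float, wait_time_s: float) -> float:
    # Little's Law rearranged: throughput = concurrency / time per iteration
    return users / (response_time_s + wait_time_s)

# 10 users, each completing one transaction every 10 seconds -> 1 TPS
assert transactions_per_second(10, 1.0, 9.0) == 1.0
# Doubling the users doubles the rate...
assert transactions_per_second(20, 1.0, 9.0) == 2.0
# ...and so does halving the per-user cycle time with the same user count
assert transactions_per_second(10, 1.0, 4.0) == 2.0
```

This is only a steady-state approximation; it holds while the server’s response time stays flat as load increases.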
This graph is particularly useful to stakeholders, as it often relates directly to business performance requirements. For example, if an insurance company must be able to sell X policies per hour, this graph will show whether that is possible.
The Page Timer Values Distribution graph is useful for evaluating the percentage of requests that were handled within agreed limits, such as those specified by a service level agreement. The graph above shows that nearly 90% of pages were served in less than 4 seconds.
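The SLA percentage shown by the distribution graph can be reproduced directly from raw page timings. A small illustrative sketch (the timing values below are made up for the example, not taken from the test):

```python
def fraction_within_sla(timings_ms, limit_ms):
    """Fraction of page timings at or below the SLA limit."""
    return sum(1 for t in timings_ms if t <= limit_ms) / len(timings_ms)

# Hypothetical page response times in milliseconds
timings = [800, 1200, 2100, 3500, 3900, 4200, 1500, 2600, 3100, 5400]

# 8 of the 10 sample pages were served within the 4-second limit
assert fraction_within_sla(timings, 4000) == 0.8
```

The same calculation, applied to the full result set, gives the figure a service level agreement would be judged against.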