Windows anomalies

The Windows collector lets you insert the following types of anomalies into an Anomalies Profile:

Memory: % Used memory

100 * Committed Bytes / (Committed Bytes + Available Bytes)

Severity 1: > 75%
Severity 2: > 80%
Severity 3: > 95%
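
As an illustration only, here is a minimal Python sketch of this formula and the severity thresholds above; the function names and the sample counter values are hypothetical and not part of the collector:

    def used_memory_percent(committed_bytes, available_bytes):
        # % Used memory = 100 * Committed Bytes / (Committed Bytes + Available Bytes)
        return 100.0 * committed_bytes / (committed_bytes + available_bytes)

    def used_memory_severity(percent):
        # Map the percentage to the severity levels listed above.
        if percent > 95:
            return 3
        if percent > 80:
            return 2
        if percent > 75:
            return 1
        return 0

    # Hypothetical sample: 13 GB committed, 3 GB available -> 81.25%, Severity 2.
    pct = used_memory_percent(13 * 2**30, 3 * 2**30)
    print(round(pct, 2), used_memory_severity(pct))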

Memory: Page Faults

Average number of pages faulted per second. It is measured in pages faulted per second; because only one page is faulted in each fault operation, it is also equal to the number of page fault operations. This counter includes both hard faults (those that require disk access) and soft faults (where the faulted page is found elsewhere in physical memory). Most processors can handle large numbers of soft faults without significant consequence, but hard faults, which require disk access, can cause significant delays.

Severity 1: > 350 pages/s

Memory: Paging

Rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays. It is the sum of "Pages Input/sec" and "Pages Output/sec". It is counted in numbers of pages, so it can be compared to other counts of pages, such as "Page Faults/sec", without conversion. It includes pages retrieved to satisfy faults in the file system cache (usually requested by applications) and in non-cached mapped memory files.

Severity 1: > 60 pages/s
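
As a small illustration, the sum can be written in Python as follows; the function name and sample values are hypothetical:

    def pages_per_sec(pages_input_per_sec, pages_output_per_sec):
        # Paging = Pages Input/sec + Pages Output/sec, counted in pages,
        # so it is directly comparable to Page Faults/sec.
        return pages_input_per_sec + pages_output_per_sec

    # Hypothetical sample: 45 pages read in + 25 pages written out per second
    # gives 70 pages/s, which exceeds the Severity 1 threshold above.
    print(pages_per_sec(45, 25))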

PhysicalDisk: % Used disk

Percentage of elapsed time that the selected disk drive was busy servicing read or write requests.

Severity 1: > 90%

PhysicalDisk: Queue Length

Number of requests outstanding on the disk at the time the performance data is collected, including requests in service at the time of the collection. Multi-spindle disk devices can have multiple requests active at one time, while other concurrent requests await service. This counter might reflect a transitory high or low queue length, but if there is a sustained load on the disk drive, it is likely to be consistently high. Requests experience delays proportional to the length of this queue minus the number of spindles on the disks. For good performance, this difference should average less than two.

Severity 1: > 3
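
The "queue length minus spindles" rule of thumb can be illustrated with a short Python sketch; the spindle count and sample values are assumptions made up for the example:

    def disk_queue_excess(avg_queue_length, spindles):
        # Requests experience delays roughly proportional to the queue length
        # minus the number of spindles; this difference should average
        # less than two for good performance.
        return avg_queue_length - spindles

    # Hypothetical sample: an average queue of 5 requests on a 2-spindle device.
    excess = disk_queue_excess(5.0, 2)
    print(excess, "-> investigate" if excess >= 2 else "-> acceptable")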

PhysicalDisk: Time per transfer

The average time, in milliseconds, of a disk transfer.
In general, this counter is high when there are too few disks, slow disks, a poor physical disk layout, or disk fragmentation.

Severity 1: > 15 ms
Severity 2: > 20 ms
Severity 3: > 30 ms

Processor: % Used Processor

Percentage of elapsed time that the processor spends executing non-Idle threads. It is calculated by measuring the time the Idle thread is active during the sample interval and subtracting that time from the interval duration. (Each processor has an Idle thread that consumes cycles when no other threads are ready to run.) This counter is the primary indicator of processor activity and displays the average percentage of busy time observed during the sample interval. In other words, it monitors the time the processor is inactive and subtracts that value from 100%.

Severity 1: > 80%
Severity 2: > 90%
Severity 3: > 95%
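
As an illustration of the calculation described above, a minimal Python sketch; the idle time and interval are hypothetical sample values:

    def processor_used_percent(idle_seconds, interval_seconds):
        # Busy time is the sample interval minus the time the Idle thread ran,
        # expressed as a percentage of the interval.
        return 100.0 * (1.0 - idle_seconds / interval_seconds)

    # Hypothetical sample: 1.2 s of Idle thread time in a 10 s interval -> 88%,
    # which falls under Severity 1 of the thresholds above.
    print(processor_used_percent(1.2, 10.0))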

NetworkInterface: Output Queue Length

Length of the output packet queue (in packets). If this is longer than two, there are delays and the bottleneck should be found and eliminated, if possible.

Severity 1: > 3

NetworkInterface: % Bytes per second

100 * (KBytes Sent/sec + KBytes Received/sec) / BandWidth

Severity 1: > 75%
Severity 2: > 85%
Severity 3: > 95%
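
A minimal Python sketch of this formula, assuming the bandwidth is expressed in the same unit (KBytes/sec) as the traffic counters; the sample values are hypothetical:

    def network_used_percent(kbytes_sent_per_sec, kbytes_received_per_sec,
                             bandwidth_kbytes_per_sec):
        # 100 * (KBytes Sent/sec + KBytes Received/sec) / BandWidth
        return (100.0 * (kbytes_sent_per_sec + kbytes_received_per_sec)
                / bandwidth_kbytes_per_sec)

    # Hypothetical sample: 60,000 + 50,000 KB/s on a ~1 Gbit/s link
    # (125,000 KB/s) -> 88%, Severity 2 under the thresholds above.
    print(network_used_percent(60000, 50000, 125000))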

TCP: Segments Retransmitted/sec

Rate at which segments are retransmitted, that is, segments transmitted containing one or more previously transmitted bytes.

Severity 1: > 1

System: Congestion

Processor Queue Length / number of CPUs.
Processor Queue Length is the number of threads in the processor queue. A sustained processor queue of fewer than 10 threads per processor is normally acceptable, depending on the workload.

Severity 1: > 10
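
For illustration, a minimal Python sketch of this ratio and the threshold above; the sample values are hypothetical:

    def congestion(processor_queue_length, cpu_count):
        # Congestion = Processor Queue Length / number of CPUs.
        return processor_queue_length / cpu_count

    # Hypothetical sample: 48 queued threads on a 4-CPU host -> 12 threads per CPU,
    # which exceeds the Severity 1 threshold above.
    print(congestion(48, 4))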
