Test Run Details
For each test run, we provide a detailed report page that visualizes all of the data collected during the run.
We measure a number of metrics per test run and process the collected data into a presentable information layer. The metrics include the number of active clients, response times, and request/response traffic.
Per-request metrics are based on measurements from the individual requests, aggregated to per-second granularity. Non-request-specific metrics (e.g. active clients or network traffic) are based on snapshots taken every 10 seconds.
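As an illustration of this aggregation — a simplified sketch, not the engine's actual implementation — per-request measurements can be bucketed by the second in which each request started:

```javascript
// Sketch: aggregate individual request measurements into per-second buckets.
// The input shape ({ startedAtMs, durationMs }) is assumed for illustration.
function perSecond(requests) {
  const buckets = new Map();
  for (const r of requests) {
    const sec = Math.floor(r.startedAtMs / 1000);
    const b = buckets.get(sec) || { count: 0, totalMs: 0 };
    b.count += 1;
    b.totalMs += r.durationMs;
    buckets.set(sec, b);
  }
  return buckets;
}

const buckets = perSecond([
  { startedAtMs: 0, durationMs: 40 },
  { startedAtMs: 500, durationMs: 60 },
  { startedAtMs: 1200, durationMs: 80 },
]);
console.log(buckets.get(0).count); // 2 requests started in the first second
console.log(buckets.get(1).count); // 1 request started in the second second
```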
Active Clients, Active Connections, and New Connection Rate
The Active Clients metric displays the number of clients that were active at the time of the snapshot.
Depending on the defined test case, each client can make one or more requests.
The Request Rate metric shows the total number of requests made by the active clients per second.
The Active Connections metric displays the number of open TCP connections at the time of the snapshot.
Each active client keeps a connection to each target open after an initial request is performed and closes it after an idle timeout is reached or the client's session ends.
The New Connection Rate metric displays the number of newly established TCP connections at the time of the snapshot.
Snapshots are taken every 10 seconds, so these metrics may not reflect every detail that occurred during the 10-second window.
Note: Very fast client sessions may start and finish before they are picked up by a snapshot and thus may not appear in the Active Clients metric. You can work around this by adding a short wait to the session.
Request Rate shows the number of requests started in each specific second.
Response Times (Percentile)
This chart shows the distribution of response times as percentiles (e.g. median, 95th, 99th) over the duration of the test run.
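As a rough sketch of how such percentiles can be computed — here with the nearest-rank method; the report's exact percentile algorithm is not documented in this section:

```javascript
// Nearest-rank percentile on a list of response times (in milliseconds).
// This is a generic illustration, not the engine's actual implementation.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

const times = [12, 15, 20, 22, 30, 41, 55, 90, 120, 300];
console.log(percentile(times, 50)); // 30 (nearest-rank median)
console.log(percentile(times, 95)); // 300
```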
The Response Traffic metric shows the measured incoming traffic, whereas the Request Traffic metric shows the outgoing traffic.
Request and response traffic are always measured and displayed in transferred bytes (not bits). The traffic data is based on the 10-second interval snapshots.
All of the above metrics are also displayed together in one graph to help you find correlations.
The HTTP Codes chart shows all HTTP codes and their frequency of occurrence. You can filter the HTTP codes by request tag.
Other Errors and Events
Some events and errors are not related to HTTP errors. This chart presents those events over time.
connect: retry limit reached: The retry limit was reached and no TCP connection could be established
connect: ECONNREFUSED: The connection was refused
connect: ETIMEOUT: A timeout occurred before the connection could be established
connect: ENXDOMAIN: The target (host) could not be resolved.
request: url forbidden: The engine denied performing a request to an endpoint which is not defined as a Target
request: url malformed: The engine could not perform a request because it was not able to parse the request URL
request: connection closed while sending: The connection was closed while the request was being sent
request: send retry limit reached: The engine gave up sending the request
response: body size, Content-Length mismatch: The size of the received content body from your endpoint did not match the Content-Length response HTTP header
response: JSON expected: The response could not be parsed as JSON although JSONPath content extraction was requested
connect: EADDRINUSE: These errors mostly occur when the load-generating cluster is overloaded; increase the cluster size and try again
DataSource is exhausted: These errors (for mode exclusive) may occur when picking rows from a data source via pickFrom() but no rows are available. See Data Sources for details.
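To illustrate why this error occurs, here is a minimal sketch of exclusive-mode row picking. The class name and method are assumptions for illustration only, not the engine's actual data source API:

```javascript
// Sketch: in "exclusive" mode each row is handed out at most once, so the
// source can run dry if sessions pick more rows than the source contains.
class ExclusiveDataSource {
  constructor(rows) {
    this.rows = [...rows];
  }
  pickFrom() {
    if (this.rows.length === 0) {
      throw new Error("DataSource is exhausted");
    }
    return this.rows.shift(); // remove the row so it cannot be picked again
  }
}

const users = new ExclusiveDataSource([{ login: "alice" }, { login: "bob" }]);
console.log(users.pickFrom().login); // alice
console.log(users.pickFrom().login); // bob
// A third pick would throw "DataSource is exhausted".
```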
Statistics by Arrival Phase
If you’ve defined more than one arrival phase, you get a statistics summary for each arrival phase with information about request count, Apdex, mean & stddev, median, percentiles (95th, 99th), and errors.
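The Apdex score in these summaries follows the standard Apdex formula: requests faster than a threshold T are "satisfied", those up to 4T are "tolerating", and the score is (satisfied + tolerating / 2) / total. How T is configured for your test run is not covered here; the snippet below is a generic illustration:

```javascript
// Generic Apdex calculation (not the engine's internal code).
// thresholdMs is the Apdex threshold T; responses up to 4T are "tolerating".
function apdex(responseTimesMs, thresholdMs) {
  const satisfied = responseTimesMs.filter((t) => t <= thresholdMs).length;
  const tolerating = responseTimesMs.filter(
    (t) => t > thresholdMs && t <= 4 * thresholdMs
  ).length;
  return (satisfied + tolerating / 2) / responseTimesMs.length;
}

// Example with T = 500 ms: two satisfied, one tolerating, one frustrated.
console.log(apdex([120, 300, 900, 2500], 500)); // 0.625
```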
Statistics by Request Tag
If you’ve defined tags on your requests, you get a statistics summary for each tag with information about request count, Apdex, mean & stddev, median, percentiles (95th, 99th), and errors. Requests that are not tagged will appear as -default-.
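As a self-contained sketch of how tagging groups requests in the report, the example below uses a stub in place of the engine; the real test-definition API and the exact option name for tags may differ:

```javascript
// Stub standing in for the engine so this sketch runs on its own.
// It only records which tag each request would be reported under.
const definition = {
  session(name, fn) {
    const requests = [];
    fn({
      get: (url, opts = {}) =>
        requests.push({ url, tag: opts.tag || "-default-" }),
    });
    return requests;
  },
};

const requests = definition.session("browse", (session) => {
  session.get("/", { tag: "homepage" });
  session.get("/search?q=demo", { tag: "search" });
  session.get("/health"); // untagged: reported under -default-
});

console.log(requests.map((r) => r.tag)); // [ 'homepage', 'search', '-default-' ]
```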
The last table in the report shows a list of all performed checks and assertions. It presents the name, the total number of performed checks (across all phases), and their success/failure counters and rate.
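The rate shown there is simple arithmetic — successes divided by the total number of performed checks. A minimal sketch, with assumed input names:

```javascript
// Sketch: success rate of a check/assertion from its counters.
function successRate(successes, failures) {
  const total = successes + failures;
  return total === 0 ? null : successes / total;
}

console.log(successRate(980, 20)); // 0.98
```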
If you have defined transactions via session.transaction(), the Transaction Timings table shows the mean & stddev, median, and percentiles across all executions of the transaction with that name. For each transaction execution, we measure the duration of all its requests.
Note: Aborted sessions (e.g. via abort()) are excluded from this data to avoid skewing the results, as aborted sessions are usually faster but irrelevant.
The report also contains statistics for sessions and the requests within those sessions, shown as Session Timings. These are measured like a transaction wrapped around each session.
Like for transaction timings, we do not include sessions that have failed, only successful ones.