The sensenet server environment reaches its maximum performance when the served request count can no longer increase, even though the number of received requests keeps growing. This expectation is a prerequisite for the algorithm to work: after maximum performance has been reached, an increasing load must produce a constant served-requests-per-second plateau that follows the growing phase.
Every second the benchmark tool records the req/sec value and much more data. The trend graph looks like the following:
Two complementary behaviours appear on the graph.
The most relevant parameter is the served request count. Its graph is very noisy, but the growing and constant phases are easy to recognise.
Endpoint detection with rulers: draw a line over the estimated average of the growing phase, and another one over the constant phase. The intersection of the two lines is the measuring endpoint. This point marks the limit of the tested system; beyond it the benchmark software can increase the load only in vain.
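The ruler method above can be sketched numerically: fit a least-squares line to each phase and compute where the two lines intersect. The sample data, the phase split, and all function names below are illustrative assumptions, not part of the actual benchmark tool.

```python
# Hypothetical sketch of the "ruler" endpoint detection: one line fitted
# to the growing phase, one to the plateau, intersection = endpoint.

def fit_line(xs, ys):
    """Least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def intersection(line1, line2):
    """x where a1*x + b1 == a2*x + b2."""
    (a1, b1), (a2, b2) = line1, line2
    return (b2 - b1) / (a1 - a2)

# Synthetic req/sec samples: a growing phase, then a constant plateau.
growing = [(t, 10 * t) for t in range(0, 20)]
plateau = [(t, 200) for t in range(20, 40)]

g = fit_line([t for t, _ in growing], [v for _, v in growing])
p = fit_line([t for t, _ in plateau], [v for _, v in plateau])

print(intersection(g, p))  # → 20.0, where growth meets the plateau
```

On real, noisy data the two fits would of course be taken over hand-picked (or estimated) sample ranges rather than an exact split point.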
In the algorithmic version the benchmark tool applies a noise filter to the req/sec values (blue line) and computes the (also noise-filtered) differential function (red line).
The graph of the differential function indicates the direction in which the main function changes at every point. When the growth stops, the differential function crosses zero coming from the positive values. At this point the benchmark measurement result is the value of the active profile count (190 in this case).
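The algorithmic version described above can be sketched as follows: smooth the req/sec series with a moving average, take its discrete differential, smooth that too, and find where it crosses zero from the positive side. The window size, the synthetic data, and all names below are assumptions for illustration, not the tool's actual implementation.

```python
import random

def moving_average(values, window):
    """Centered moving average; the window is truncated at the edges."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def differential(values):
    """Discrete derivative: difference of neighbouring samples."""
    return [b - a for a, b in zip(values, values[1:])]

def zero_crossing_from_positive(diffs):
    """Index of the first point where the series falls from > 0 to <= 0."""
    for i in range(1, len(diffs)):
        if diffs[i - 1] > 0 and diffs[i] <= 0:
            return i
    return None

# Synthetic req/sec series: a noisy growing phase (samples 0..29),
# then an idealised flat plateau at 300 req/sec (samples 30..59).
random.seed(1)
req_per_sec = [10 * t + random.uniform(-5, 5) for t in range(30)]
req_per_sec += [300.0] * 30

smoothed = moving_average(req_per_sec, 7)          # blue line
diffs = moving_average(differential(smoothed), 7)  # red line

endpoint = zero_crossing_from_positive(diffs)
print(endpoint)  # → 36: plateau starts at 30, the smoothing windows add lag
```

In the real tool the detected index would be mapped back to the active profile count recorded at that moment (190 in the example above).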
Color codes on all graphs: