WSO2 API-M Performance and Capacity Planning

The following sections analyze the results of WSO2 API Manager performance tests.

Summary

During each release, WSO2 executes various automated performance test scenarios and publishes the results.

The tests cover the following scenarios:

  • Passthrough: A secured API that directly invokes the back-end service.
  • Transformation: A secured API that uses a mediation extension to modify the message.

WSO2 uses Apache JMeter as the test client. Each scenario is run for a fixed duration. The results are then split into warm-up and measurement parts, and only the measurement part is used to compute the performance metrics.

The test scenarios use a Netty-based back-end service that echoes back any request posted to it after a configurable delay.
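The actual back-end service is a Netty-based Java service. As a rough stand-in for local experimentation only, the following minimal Python sketch echoes the request body back after a fixed delay; the port and delay values are arbitrary examples, not the settings used by WSO2.

```python
# Minimal stand-in for a delayed echo back-end service.
# This is an illustrative sketch, not WSO2's actual Netty-based service.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

DELAY_SECONDS = 0.03   # e.g. a 30 ms back-end delay (arbitrary example value)
PORT = 8688            # arbitrary example port

class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        time.sleep(DELAY_SECONDS)          # simulate back-end processing time
        self.send_response(200)
        self.send_header("Content-Type",
                         self.headers.get("Content-Type", "application/octet-stream"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)             # echo the request payload back

    def log_message(self, fmt, *args):     # keep the load-test console output quiet
        pass

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", PORT), EchoHandler).serve_forever()
```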

WSO2 runs the performance tests with different concurrent user loads, message sizes (payloads), and back-end service delays.

The main performance metrics:

  • Throughput: The number of requests that the WSO2 API Manager processes during a specific time interval (e.g. per second).
  • Response Time: The end-to-end latency of an API invocation. The complete distribution of response times is recorded.

In addition to the above metrics, WSO2 measures the load average and several memory-related metrics.
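Load average is an operating-system level metric. As a hedged illustration of how such a value could be sampled alongside a test run on Linux, the snippet below reads /proc/loadavg periodically; the sampling interval and output file are arbitrary choices, not part of WSO2's test framework.

```python
# Illustrative sketch: periodically sample the 1-minute load average on Linux
# by reading /proc/loadavg. This is not WSO2's actual measurement tooling.
import time

def sample_load_average(output_path="loadavg.csv", interval_s=5, duration_s=900):
    with open(output_path, "w") as out:
        out.write("timestamp,load_1min\n")
        end = time.time() + duration_s
        while time.time() < end:
            with open("/proc/loadavg") as f:
                load_1min = f.read().split()[0]   # first field is the 1-minute average
            out.write(f"{time.time():.0f},{load_1min}\n")
            out.flush()
            time.sleep(interval_s)

if __name__ == "__main__":
    sample_load_average()
```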

The duration of each test is 900 seconds. The warm-up period is 300 seconds. The measurement results are collected after the warm-up period.
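As a rough sketch of how the warm-up split and the metrics listed under Measurements collected can be derived from raw test output, the following assumes a JMeter results file in CSV (.jtl) format with the default timeStamp, elapsed, and success columns; the file name, the percentile indexing, and the throughput calculation are illustrative assumptions rather than WSO2's actual tooling.

```python
# Illustrative sketch: drop the warm-up part of a JMeter CSV results file and
# compute the metrics listed under "Measurements collected".
# Assumes the default JMeter CSV columns: timeStamp (ms), elapsed (ms), success.
import csv
import statistics

WARM_UP_SECONDS = 300  # warm-up period used in these tests

def compute_metrics(jtl_path):
    with open(jtl_path, newline="") as f:
        rows = list(csv.DictReader(f))
    start_ms = min(int(r["timeStamp"]) for r in rows)
    cutoff_ms = start_ms + WARM_UP_SECONDS * 1000
    measured = [r for r in rows if int(r["timeStamp"]) >= cutoff_ms]  # drop warm-up

    elapsed = sorted(int(r["elapsed"]) for r in measured)
    errors = sum(1 for r in measured if r["success"].lower() != "true")
    duration_s = (max(int(r["timeStamp"]) for r in measured) - cutoff_ms) / 1000.0

    return {
        "Error %": 100.0 * errors / len(measured),
        "Average Response Time (ms)": statistics.mean(elapsed),
        "Standard Deviation of Response Time (ms)": statistics.pstdev(elapsed),
        "99th Percentile of Response Time (ms)": elapsed[int(0.99 * len(elapsed)) - 1],
        "Throughput (Requests/sec)": len(measured) / duration_s,
    }

if __name__ == "__main__":
    for name, value in compute_metrics("results.jtl").items():
        print(f"{name}: {value:.2f}")
```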

WSO2 API Manager was installed on a c5.large Amazon EC2 instance.

Test parameters

  • Scenario Name: The name of the test scenario.
  • Heap Size: The amount of memory allocated to the application. Value: 2G
  • Concurrent Users: The number of users accessing the application at the same time. Values: 50, 100, 200, 300, 500, 1000
  • Message Size (Bytes): The size of the request payload, in bytes. Values: 50, 1024, 10240, 102400
  • Back-end Delay (ms): The delay added by the back-end service. Value: 0

Measurements collected

The following are the measurements collected from each performance test conducted for a given combination of test parameters.

  • Error %: The percentage of requests that resulted in errors.
  • Average Response Time (ms): The average response time of the measured samples.
  • Standard Deviation of Response Time (ms): The standard deviation of the response time.
  • 99th Percentile of Response Time (ms): 99% of the requests completed within this time; the remaining 1% took at least this long.
  • Throughput (Requests/sec): The number of requests processed per second.
  • Average Memory Footprint After Full GC (M): The average memory consumed by the application after a full garbage collection event.
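The memory footprint figure can be derived from the JVM's garbage collection log. As a hedged illustration, the sketch below assumes a simple Java 8-style -verbose:gc log in which full collections appear as lines such as [Full GC 123456K->65432K(2097152K), 0.53 secs]; the log format and the regular expression are assumptions, not a description of WSO2's measurement tooling.

```python
# Illustrative sketch: estimate "Average Memory Footprint After Full GC (M)"
# from a Java 8-style -verbose:gc log. The assumed line format is e.g.
#   [Full GC (Ergonomics)  123456K->65432K(2097152K), 0.53 secs]
# WSO2's actual measurement tooling may differ.
import re

# Captures the heap occupancy figures; group 2 is the heap used after collection.
FULL_GC_LINE = re.compile(r"\[Full GC.*?(\d+)K->(\d+)K\((\d+)K\)")

def average_footprint_after_full_gc(gc_log_path):
    after_kb = []
    with open(gc_log_path) as log:
        for line in log:
            match = FULL_GC_LINE.search(line)
            if match:
                after_kb.append(int(match.group(2)))  # heap used after full GC, in KB
    if not after_kb:
        return None
    return sum(after_kb) / len(after_kb) / 1024.0  # average footprint in MB

if __name__ == "__main__":
    print(average_footprint_after_full_gc("gc.log"))
```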

For a detailed analysis of the performance of API-M 3.2.0, see the API-M 3.2.0 Performance graphs on GitHub.

Observations from all results

The key observations below are for the average user scenario: accessing APIs with 1 KiB messages against a back-end service with a 30 ms delay.

The following are the key observations from all performance tests done with different message sizes and different back-end delays. (See the Comparison of 3.1.0 and 3.2.0 section for the charts used to derive the points mentioned below.)

Throughput comparison:

A throughput increase is observed in the transformation scenario in API-M 3.2.0, in comparison to API-M 3.1.0.

  • The throughput increases up to a certain limit as the number of concurrent users increases. The rate of increase for the Mediation API is much lower than that of the Echo API.
  • The throughput decreases when the message size increases.
  • The throughput decreases when the back-end sleep time increases. This applies to both APIs: if the back-end takes more time to respond, the request processing rate at the API Manager Gateway is lower.

Average response time comparison:

  • The average response time increases when the number of concurrent users increases. The rate of increase is similar for API-M 3.2.0 and API-M 3.1.0.
  • The average response time of the Mediation API increases considerably when the message size increases, due to the message processing involved. The average response time of the Echo API does not increase as much.
  • The average response time increases when the back-end sleep time increases. This applies to both APIs.

GC throughput comparison:

  • The GC throughput decreases when the number of concurrent users increases, because the object allocation rate rises with more concurrent users.
  • The GC throughput increases when the message size increases: the request processing rate slows down due to the time taken to process large messages, so the object allocation rate decreases.
  • The GC throughput increases when the back-end sleep time increases, because the object allocation rate is lower when the back-end takes more time to respond.

Comparison of 3.1.0 and 3.2.0

The following charts compare API-M 3.1.0 and API-M 3.2.0. Only the chart titles are listed here.

  • Average response time comparison: average response time vs concurrent users
  • GC throughput comparison: GC throughput vs concurrent users, GC throughput with 0 ms back-end delay, GC throughput vs message size, GC throughput vs sleep time
  • Load average comparison: load average vs concurrent users, load average vs message size, load average vs sleep time
  • Throughput comparison: throughput vs concurrent users, throughput vs message size, throughput vs sleep time
  • Response time comparison: percentile comparison

For more comparisons, see the comparison graphs on GitHub.
