Introduction

This post demonstrates an approach to measuring data ingestion throughput via Azure Event Hubs using Application Insights. It also includes results from comparing implementations of the so-called Event Processor Host (EPH) in three different programming languages (.NET Core, Python and Java), and links to the source code of the experiments.

This work and the measurements were done to find optimal virtual machine sizes and the right number of EPH instances to achieve certain throughput levels.

For similar benchmarking when using Azure Functions, I recommend checking the great post by Paul Batum.

Test setup

The test was performed by first loading an Event Hub with messages using a tool developed for that purpose. The payload was a static JSON document, 12 B in size in one test and 2 KB in the others.
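
As an illustration of what such a load tool can look like, here is a minimal Python sketch using the azure-eventhub SDK; the connection string, hub name, payload shape and batch count are placeholders rather than the actual tool:

import json
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder connection details - replace with your own namespace and hub.
CONNECTION_STR = "<event-hubs-connection-string>"
EVENT_HUB_NAME = "<event-hub-name>"

# Static JSON payload, padded to roughly the desired size (here ~2 KB).
payload = json.dumps({"id": 1, "data": "x" * 2000})

producer = EventHubProducerClient.from_connection_string(
    CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
)

with producer:
    for _ in range(1000):  # number of batches to send
        batch = producer.create_batch()
        try:
            while True:
                batch.add(EventData(payload))  # fill the batch until it is full
        except ValueError:
            pass  # batch reached its maximum size
        producer.send_batch(batch)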

The EPH was implemented to checkpoint every 10 seconds, which was considered a reasonable tradeoff between checkpointing overhead and the amount of duplicate messages that might need to be reprocessed after a failure.
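
A rough sketch of that logic is below: the per-partition event handler tracks when it last checkpointed and only checkpoints again once the interval has elapsed. The handler signature, process_message stub and update_checkpoint call are illustrative placeholders, not the exact EPH APIs used in the three implementations:

import time

CHECKPOINT_INTERVAL_SECONDS = 10

class CheckpointTimer:
    """Decides whether enough time has passed to checkpoint again."""

    def __init__(self, interval_seconds=CHECKPOINT_INTERVAL_SECONDS):
        self.interval = interval_seconds
        self.last_checkpoint = time.monotonic()

    def due(self):
        return time.monotonic() - self.last_checkpoint >= self.interval

    def mark(self):
        self.last_checkpoint = time.monotonic()

def process_message(event):
    pass  # application-specific processing goes here

timer = CheckpointTimer()

def on_events(partition_context, events):
    # partition_context.update_checkpoint is a placeholder for whatever
    # checkpointing call the concrete EPH/SDK implementation provides.
    for event in events:
        process_message(event)
    if timer.due():
        partition_context.update_checkpoint()  # persist the current offset
        timer.mark()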

After the Event Hub was loaded with messages, the EPH implementations were deployed on a Kubernetes cluster in the same region as the Event Hub. The deployments were done one after another so that only one language implementation was processing at any given time.

While running, the implementations sent telemetry about the current throughput to Application Insights - see here for an example of how it was done in the .NET Core implementation.
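
As an illustration of the same idea in Python, the applicationinsights package can be used to track the processed bytes as a custom metric on a timer; the metric name, reporting interval and counter wiring below are assumptions for the sketch, not the actual implementation:

import time
from applicationinsights import TelemetryClient

# Placeholder - use the instrumentation key of your Application Insights resource.
telemetry = TelemetryClient("<instrumentation-key>")

bytes_processed = 0          # incremented by the event handler as messages arrive
REPORT_INTERVAL_SECONDS = 10

def report_throughput():
    """Send the bytes processed since the last report as a custom metric."""
    global bytes_processed
    telemetry.track_metric("bytesProcessed", bytes_processed)
    telemetry.flush()
    bytes_processed = 0

# In a real implementation this would run on a timer alongside the processing loop.
while True:
    time.sleep(REPORT_INTERVAL_SECONDS)
    report_throughput()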

Results

Once the data is in Application Insights, you can use custom queries to get insights into what was going on during the test.

For example, the following query:

customMetrics
| where timestamp > datetime("2017-11-20T15:19:43")
| summarize sum_by_name=sum(value) by name, bin(timestamp, 10s)
| summarize avg(sum_by_name) by name, bin(timestamp, 30s)
| render timechart

This would result in a graph like the following:

[Chart: Application Insights timechart rendered from the query above]

In the chart above, I was using a 2 KB payload in an Event Hub with 100 partitions and reading the data with 20 EPH instances. Those instances were scheduled by Kubernetes according to the resource definition, which effectively meant that two instances fit on a single Standard_D2_v2 virtual machine (since that size has 2 vCPUs).

The table below summarizes the main tests that I ran. The column headers are in the format partitions / EPH instances / payload size. In all tests, the number of Event Hubs Throughput Units was 100. Each result is the peak throughput in MB/s.

Language     4 / 4 / 2KB    100 / 20 / 2KB    2 / 2 / 12B
Java         29.7 MB/s      269.0 MB/s        0.44 MB/s
.NET Core    12.2 MB/s      246.6 MB/s        0.71 MB/s
Python       12.7 MB/s      39.5 MB/s         0.06 MB/s

It should be noted that the above does not tell any absolute truth about the speed of these languages; it should be considered a snapshot of how the current versions of the implementations behaved with the given parameters. For example, changing the payload size or the size of the virtual machines might give different results.

There are also built-in metrics in Event Hubs that can be queried with the Azure CLI command az monitor metrics list. During testing, I used a command like:

az monitor metrics list --resource "<full-resource-id>" --metric-names OutgoingBytes --time-grain PT1M

I used these built-in metrics partly to compare them to the results I saw via Application Insights, to gain confidence that there were no mistakes in the way I measured.