Azure Serverless Performance / Cost Monitoring – Retrieving CosmosDB Request Charges per Function

Introduction

For me, one of the major advantages of the cloud is that you become very aware of the costs involved. While this might seem like an odd thing to say, it also forces you to think in terms of costs and develop accordingly. In every session I have with customers, I’m focused on keeping the costs as low as possible, given the requirements at hand.

While Virtual Machines have their place, I -really- love the PaaS services, and especially the entire “Serverless” space. A typical combination you see here is Azure Functions & Azure CosmosDB. Both have a nice modular pricing model, where Azure Functions even goes towards a sub-second billing model based on actual execution time and memory consumption. Yet however granular the pricing might be, if we look at the resource costs, we see that the cost is still linked to an “App Service” (being a consumption plan here) ;

The same goes for my Azure Cosmos DB Account (which hosts several collections)…

To go even a step further, I wanted to be able to see which functions (low-level code) I need to focus on in terms of costs. If we succeed in this, then suddenly refactoring has a tangible ROI.

 

CosmosDB : Request Units / Charges

When you are using CosmosDB, your budget is determined by the number of Request Units per second (RU/s) your collection can process.

The throughput of this collection is provisioned at the minimum of 400 RU/s. Now let’s run a query on this collection ;

You’ll notice that the results are accompanied by a statement that indicates the “Request Charge”. This is the number of RUs that were needed to process the query. So the objective here is to keep this number as low as possible… In this case, the collection is provisioned at 400 RU/s and the query costs 45 RUs, which means this collection can process about 8 of these queries per second (400 / 45 ≈ 8.9). So I immediately know the impact on my costs if the number of requests increases.
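To make that extrapolation concrete, here is a minimal back-of-the-envelope sketch (plain Node.js, using the numbers from this example) ;

    // Back-of-the-envelope capacity estimate, using the numbers from this example.
    var provisionedRUs = 400; // RU/s provisioned on the collection
    var requestCharge = 45;   // RUs consumed by one execution of the query

    // Theoretical number of these queries the collection can absorb per second
    // before CosmosDB starts throttling (HTTP 429).
    var queriesPerSecond = Math.floor(provisionedRUs / requestCharge);
    console.log('Max ~' + queriesPerSecond + ' queries/sec at ' + provisionedRUs + ' RU/s');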

 

CosmosDB : Monitoring RU/second

The first view of the RU consumption is the “Metrics” view on the Azure Monitoring pane ;

Now this provides us with a view at the collection level. Wouldn’t it be great if we could link the consumption to an individual function?

 

Monitoring Request Charges per Function

As I’m using the MongoDB API in this case (sidenote ; the same is possible when using, for instance, the DocumentDB API), I can use the “getLastRequestStatistics” command to retrieve the request charges from CosmosDB. Let’s take a look at the following sample (taken from this function) ;

    // Note: assert, index, output and region are declared elsewhere in the function;
    // they are repeated here so the sample is self-contained.
    var assert = require('assert');
    var index = 0;
    var output = [];

    var findVms = function(db, callback) {
      var cursor = db.collection('vmchooserdb').find({ "type": "disk", "region": region });
      cursor.each(function(err, doc) {
        assert.equal(err, null);
        if (doc != null) {
          // Retrieve the request statistics only once (on the first document),
          // as the charge applies to the query as a whole.
          if (index === 0) {
            db.command({ getLastRequestStatistics: 1 }).then(result => {
              // Send the charge to Application Insights as a custom metric...
              context.log.metric("RequestCharge", result.RequestCharge);
              // ...and to the default function log.
              context.log("RequestCharge: " + result.RequestCharge);
            })
            .catch(error => {
              context.log(error);
            });
          }
          index++;
          output[index] = doc;
        } else {
          // Cursor exhausted; hand control back to the caller.
          callback();
        }
      });
    };

What can you see here? Via the “db.command…” call I retrieve the request charge, and I send it to both “context.log.metric” (which is the Application Insights integration) and “context.log” (which is the default logging). Let’s take a look at the monitoring tab of our function and at the logs of a certain invocation. Here we can see that the request charge was 11.1 ;

Now let’s take a look at Application Insights… Here we’ll go to the “Analytics” view ;

Here I can filter the “customMetrics” based on the name of the function (getVmSize/getDiskSize) and the “RequestCharge” indicator.
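For reference, the kind of query I use here looks roughly like this (written in the Application Insights Analytics query language; treat it as a sketch, as the exact operation name depends on how your function is invoked) ;

    customMetrics
    | where name == "RequestCharge"
    | where operation_Name == "getVmSize"
    | project timestamp, value
    | render timechart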


This provides me with the ability to monitor the evolution of the request charges of my function. And this is also the first part of gathering the data I need about the end-to-end cost of a single function.

 

Function Charges

The same logic as what we just saw with CosmosDB can be applied to functions too… Thanks to the integration with Application Insights, we do not need to send any custom metrics, as they are gathered by default. Here we can see what the memory consumption was (in buckets of 128 MB, conforming to the billing model) ;

But we can also summarize the duration of a given function ;
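To give an idea, a minimal sketch of such a duration query (again in the Analytics query language; “getVmSize” is just an example function from my project, and a similar query can be built for the memory metric) ;

    requests
    | where name == "getVmSize"
    | summarize avg(duration), percentile(duration, 95) by bin(timestamp, 1h)
    | render timechart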

 

Closing Thoughts

In today’s post, we discussed how we can monitor costs & performance at the granular level of an individual function. When you have this knowledge at your fingertips, you can make very informed decisions on refactoring. In addition, sizing in terms of scaling is also easily extrapolated; you can calculate what impact an increase in load will have on your budget…

I hope you liked today’s post. For me this journey was very insightful! It has provided me with yet another set of metrics that can guide me towards well-informed decisions. 😉


Windows Storage Performance Benchmarking : a predefined set of benchmarks & analytics!

Introduction
A while ago we were looking into a way to benchmark storage performance on Windows systems. This started out with the objective of seeing how Storage Spaces held up under certain configurations, and eventually moved towards benchmarking existing on-premises workloads against Azure deployments. For this we created a wrapper script for SQLIO that was heavily based upon previous work from both Jose Baretto & Mikael Nystrom. Adaptations were made to make the code a bit cleaner and to add a back-end for visualization purposes. At this point, I feel the tool has reached a level of maturity where it can be publicly shared for everyone to use.

Storage Performance Benchmarker Script
The first component is the “Storage Performance Benchmarker Script”, which you can download from the following location ; https://bitbucket.org/kvaes/storage-performance-benchmarker

I won’t be quoting all the options/parameters, as the BitBucket page clearly describes them. By default it will do a “quick test” (-QuickTest true). This will trigger one run (with 16 outstanding IO) for four scenarios ; LargeIO Read, SmallIO Read, LargeIO Write & SmallIO Write.

The difference between the “Read” & “Write” parts will be clear, I presume… 🙂 The difference between “LargeIO” & “SmallIO” resides in the block size (8 Kbyte for SmallIO, 512 Kbyte for LargeIO) and the access method (Random for SmallIO & Sequential for LargeIO). The tests are meant to mimic a typical database behaviour (SmallIO) and a large datastore / backup workload (LargeIO). When doing an “extended test” (-QuickTest false), a multitude of runs will be done to benchmark different “Outstanding IO” scenarios, as sketched below.
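To give an idea, an extended test run could be kicked off along these lines (a sketch on my part; the script file name, the scenario name and the exact value syntax are assumptions, so check the BitBucket page for the authoritative parameter list) ;

    # Extended test (multiple Outstanding IO runs), with a private result link mailed to you.
    # Parameter names are the ones mentioned in this post; the script file name is assumed.
    .\storage-performance-benchmarker.ps1 -QuickTest false -TestScenario "MyStorageSpacesPool" -Private true -Email you@domain.tld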

Website Backend
You can choose not to send the information to the backend server (-TestShareBenchmarks false). In that case you will only have the CSV output, as the backend system is used to parse the information into charts for you ; Example.


By default, your information will be shown publicly, though you can choose to have a private link (-Private true) and even have the link emailed to you (-Email you@domain.tld).

On the backend, you will have the option to see individual test scenarios (-TestScenario *identifying name*) and to compare all scenarios against each other.

For each benchmark scenario, you will see the following graphs ;

  • MB/s : The throughput measured in MB/s. This is often the metric people know… Though be aware that the MB/s figure is realised by multiplying the IO/s by the block size. So the “SmallIO” test will show a smaller throughput compared to the “LargeIO” test, even though the processing power (IOPS or IO/s) of the “SmallIO” test may sometimes be better on certain systems.
  • IO/s : This is the number of IOPS measured during the test. This provides you with an insight into the number of requests a system can handle concurrently. The higher the number, the better… To provide assistance, marker zones were added to indicate what other systems typically reach. This gives you an idea of what is to be expected, and a reference to compare against.
  • Latency : This is the latency that was measured in milliseconds. Marker zones are added to this chart to indicate what is to be considered a healthy, risky or bad zone.

The X-axis shows the different “Outstanding IO” situations ;

Number of outstanding I/O requests per thread. When attempting to determine the capacity of a given volume or set of volumes, start with a reasonable number for this and increase until disk saturation is reached (that is, latency starts to increase without an additional increase in throughput or IOPs). Common values for this are 8, 16, 32, 64, and 128. Keep in mind that this setting is the number of outstanding I/Os per thread. (Source)