Azure Serverless Performance / Cost Monitoring – Retrieving CosmosDB Request Charges per Function


For me, one of the major advantages of the cloud is that you become very aware of the costs involved. While this might seem like an odd thing to say, it also forces you to think in terms of cost and to develop accordingly. In every session I have with customers, I focus on keeping the costs as low as possible, given the requirements at hand.

While Virtual Machines have their place, I -really- love the PaaS services, and especially the entire “Serverless” space. A typical combination here is Azure Functions & Azure CosmosDB. Both have a nice modular pricing model, where Azure Functions even moves towards sub-second billing based on actual resource consumption. Yet, however granular the pricing might be, if we look at the resource costs… we see that they are still linked to an “App Service” (a consumption plan in this case) ;

The same goes for my Azure Cosmos DB Account (which hosts several collections)…

To go even a step further, I wanted to be able to see which functions (low-level code) I need to focus on in terms of costs. If we succeed in this, then suddenly refactoring has a tangible ROI.


CosmosDB : Request Units / Charges

When you are using CosmosDB, your budget is determined by the amount of Request Units per second (RU/s) your collection can process.

The throughput of this collection is sized at the minimum of 400 RU/s. Now let’s run a query on this collection ;

You’ll notice that the results are accompanied by a statement that indicates the “Request Charge”. This is the amount of RUs that were needed to process the query. So the objective here is to keep this number as low as possible… In this case, the collection is provisioned at 400 RU/s and the query needs 45 RUs, which means the collection can process 8 concurrent requests of this query per second. So I immediately know the impact on my costs if the amount of requests increases.
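To make that capacity math concrete, here is a small sketch. The 400 RU/s and 45 RU figures come from the example above; the doubled request rate is a hypothetical input:

```javascript
// Provisioned throughput of the collection (from the example above)
const provisionedRuPerSecond = 400;
// Request charge of the example query
const requestChargeRu = 45;

// How many of these queries the collection can absorb per second
const maxQueriesPerSecond = Math.floor(provisionedRuPerSecond / requestChargeRu);
console.log(maxQueriesPerSecond); // 8

// Hypothetical: if the load doubles, the throughput you would need to provision
const expectedQueriesPerSecond = 16;
const requiredRuPerSecond = expectedQueriesPerSecond * requestChargeRu;
console.log(requiredRuPerSecond); // 720
```

This is exactly the extrapolation that makes the request charge such a useful number: charge per query times expected queries per second gives the RU/s you need to provision.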


CosmosDB : Monitoring RU/second

The first view towards the RU consumption is in the “Metrics”-view on the Azure Monitoring pane ;

Now this provides us with a view on collection level. Wouldn’t it be great if we could link consumption towards an individual function?


Monitoring Request Charges per Function

As I’m using the MongoDB API in this case (sidenote ; the same is possible when using, for instance, the DocumentDB API), I can use the “getLastRequestStatistics” command to retrieve the request charges from CosmosDB. Let’s take a look at the following sample (taken from this function) ;

    var findVms = function(db, callback) {
      var cursor = db.collection('vmchooserdb').find( { "type": "disk", "region": region } );
      var index = 0;
      cursor.each(function(err, doc) {
        assert.equal(err, null);
        if (doc != null) {
            if (index === 0) {
              // Retrieve the request charge of the last operation (MongoDB API)
              db.command({getLastRequestStatistics:1}).then(result => {
                // Send it to Application Insights and to the default log
                context.log.metric("RequestCharge", result.RequestCharge);
                context.log("RequestCharge: " + result.RequestCharge);
              }).catch(error => {
                context.log.error(error);
              });
            }
            output[index] = doc;
            index++;
        } else {
            callback();
        }
      });
    };
What can you see here? Via “db.command(…)” I get the request charge and I send it to both “context.log.metric” (which is the Application Insights integration) and “context.log” (which is the default logging). Let’s take a look at the monitoring tab of our function, and at the logs of a certain invocation. Here we can see that the request charge was 11.1 ;

Now let’s take a look at Application Insights… Here we’ll go to the “Analytics” view ;

Here I can filter the “customMetrics” based on the name of the function (getVmSize/getDiskSize) and the “RequestCharge” indicator.
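For reference, the Analytics query behind such a view could look roughly like the following. The metric name matches the one sent from the code above; the function name and the one-hour binning are illustrative choices:

```
customMetrics
| where name == "RequestCharge"
| where operation_Name == "getVmSize"
| summarize avg(value) by bin(timestamp, 1h)
```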


This provides me with the ability to monitor the evolution of the request charges of my function. And this is also the first part of gathering the data I need about my end-to-end cost of a single function.


Function Charges

The same logic as what we just saw with CosmosDB can be applied to functions too… Thanks to the integration with Application Insights, we do not even need to send any customMetrics, as they are gathered by default. Here we can see what the memory consumption was (in buckets of 128MB, conform the billing model) ;
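As a sketch of how those two metrics drive the bill: the consumption plan rounds memory up to the nearest 128MB bucket and charges per GB-second of execution time. The per-invocation values and the rate below are assumptions for illustration; verify the rate against the current Azure Functions pricing page:

```javascript
// Hypothetical per-invocation metrics, as observed in Application Insights
const observedMemoryMb = 200;   // peak memory of one invocation
const durationMs = 250;         // duration of that invocation

// Memory is rounded up to the nearest 128 MB bucket, then expressed in GB
const billedMemoryGb = (Math.ceil(observedMemoryMb / 128) * 128) / 1024;

// Billing unit is the GB-second: billed memory times execution time
const gbSeconds = billedMemoryGb * (durationMs / 1000);

// Assumed consumption-plan rate per GB-second; check the pricing page
const pricePerGbSecond = 0.000016;
const costPerInvocation = gbSeconds * pricePerGbSecond;
```

Multiply `costPerInvocation` by the invocation count from the monitoring tab and you have the execution cost of that one function, which is the second half of the end-to-end picture.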

But also summarize the duration of a given function ;


Closing Thoughts

In today’s post, we discussed how we can monitor costs & performance at the granular level of an individual function. When you have this knowledge at your fingertips, you can make very informed decisions on refactoring. In addition, sizing in terms of scaling is also easily extrapolated: you can calculate what impact an increase in load will have on your budget…

I hope you liked today’s post. For me, this journey was very insightful! It has provided me with yet another set of metrics that can guide me towards well-informed decisions. 😉
