If we go several years back, I already leveraged the instant scaling of CosmosDB… Recently a new plan has been introduced to cover this behavior: the Consumption Based / Serverless option! For a new project I immediately started using it, and I am very happy with it. So happy that I came to the point where I said to myself: let’s migrate the other databases (where it fits) to this option too. In today’s post, I’ll go into the differences I noticed… and hopefully save you some time looking things up. 😉 Be aware, though, that I have been leveraging the MongoDB API/endpoint.
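To give a rough idea of the migration itself, here is a minimal Python sketch (using pymongo) that copies a collection from an existing account into a freshly created serverless one. The connection strings, database & collection names are placeholders; for anything beyond small collections you’d rather reach for mongodump/mongorestore or Azure Data Factory.

```python
from pymongo import MongoClient

# Placeholder connection strings; grab the real ones from the
# "Connection String" blade of each Cosmos DB account in the portal.
SOURCE_URI = "mongodb://source-account:<key>@source-account.mongo.cosmos.azure.com:10255/?ssl=true"
TARGET_URI = "mongodb://serverless-account:<key>@serverless-account.mongo.cosmos.azure.com:10255/?ssl=true"

source = MongoClient(SOURCE_URI)["mydb"]["mycollection"]
target = MongoClient(TARGET_URI)["mydb"]["mycollection"]

# Copy the documents over in batches to keep memory usage flat.
BATCH_SIZE = 100
batch = []
for doc in source.find():
    batch.append(doc)
    if len(batch) == BATCH_SIZE:
        target.insert_many(batch)
        batch = []
if batch:
    target.insert_many(batch)
```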
Continue reading “Moving an existing CosmosDB database/collection to CosmosDB Serverless when using MongoDB”
There are a lot of scenarios where organizations are leveraging Azure to process their data at scale. In today’s post I’m going to go through the various pieces that can complete the puzzle for such a workflow: starting with ingesting the data into Azure, and afterwards processing it in a scalable & sustainable manner.
High Level Architecture
As always, let’s start with a high level architecture to frame what we’ll be discussing today ;
- Ingest : The entire story starts here, where the data is being ingested into Azure. This can be done via an offline transfer (Azure DataBox), or online (via Azure DataBox Edge/Gateway, the REST API, AzCopy, …).
- Staging Area : No matter what ingestion method you’re using, the data will end up in a storage location (which we’ll now dub “Staging Area”). From there on we’ll be able to transfer it to its “final destination”.
- Processing Area : This is the “final destination” for the ingested content. Why does this differ from the staging area? Because there are a variety of reasons to put data in another location. Ranging from business rules and the linked conventions (like naming, folder structure, etc.) to more technical reasons like proximity to other systems or spreading the data across different storage accounts/locations.
- Azure Data Factory : This service provides a low/no-code way of modelling out your data workflow & an awesome way of following up on your jobs in operations. It’ll serve as the key orchestrator for all your workflows.
- Azure Functions : While there is already a good set of activities (“tasks”) available in ADF (Azure Data Factory), the ability to link Functions into it extends the possibilities for your organization even more. Now you can link your custom business logic right into the workflows.
- Cosmos DB : As you probably want to keep some metadata on your data, we’ll be using Cosmos DB for that one. Azure Functions will serve as the front-end API layer to connect to that data (a small sketch follows after this list).
- Azure Batch & Databricks : Both Batch & Databricks can be directly called upon from ADF, providing key processing power in your workflows!
- Azure Key Vault : Having secrets lying around & possibly being exposed is never a good idea. Therefore it’s highly recommended to leverage the Key Vault integration for storing your secrets!
- Azure DevOps : Next to the above, we’ll be relying on Azure DevOps as our core CI/CD pipeline and trusted code repository. We can use it to build & deploy our Azure Functions & Batch applications, as well as for storing our ADF templates & Databricks notebooks.
- Application Insights : Key to any successful application is collecting the much needed telemetry, and Application Insights is more than suited for this task.
- Log Analytics : ADF provides native integration with Log Analytics. This will provide us with an awesome way to take a look at the status of our pipelines & activities.
- PowerBI : In terms of reporting, we’ll be using PowerBI to collect the data that was pumped into Log Analytics and join it with the metadata from Cosmos DB. Thus providing us with live data on the status of our workflow!
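To make the Functions & Cosmos DB pieces a bit more tangible, here is a minimal Python sketch (with hypothetical names) of a blob-triggered function that registers a metadata document for every file landing in the staging area. The blob trigger and the Cosmos DB output binding (“doc”) are assumed to be wired up in function.json.

```python
import azure.functions as func

# Hypothetical blob-triggered function: whenever a file lands in the
# staging area, register a metadata document in Cosmos DB so the ADF
# pipeline (and the PowerBI report) can track it.
def main(myblob: func.InputStream, doc: func.Out[func.Document]):
    metadata = {
        "fileName": myblob.name,       # path of the ingested blob
        "sizeInBytes": myblob.length,  # size as reported by the trigger
        "status": "staged",            # to be updated as the workflow progresses
    }
    doc.set(func.Document.from_dict(metadata))
```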
Now let’s take a look at that End-to-End flow!
Continue reading “Data Workflows in Azure : Taking an end-to-end look from ingest to reporting!”
Today’s post will be on what I see as the smoothest way to do prototyping & hobby projects in regards to IoT. What is my main principle in deciding this? I only want to spend time on “business logic” and not waste time on the nuts & bolts of the engine.
So what’s the architecture we’ll be using for this?
- Device : Particle Photon + Grove Expansion Board + Grove Sensors (Temperature & Air Quality)
- Particle Platform : Used for the development
- Azure IoT Hub : Basically a 1:1 link with Particle, which will take over once we go to a production-grade setup (see the telemetry sketch after this list).
- Azure Stream Analytics : Streaming the ingested data from our IoT Hub towards our various landing zones.
- Azure CosmosDB : For storing the data we’ll use in our reports.
- Azure Storage Account : Cheap storage where we keep all the data we collected, and which we could use for our analytics.
- PowerBI : To make nice reports of the data we collected. 😉
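As a taste of the telemetry flow, here is a minimal Python sketch that simulates the Photon’s readings using the azure-iot-device SDK. The connection string and the payload fields (temperature, airQuality) are assumptions based on the sensors above; on the real device you’d fire a Particle event & let the IoT Hub integration forward it.

```python
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder device connection string from the IoT Hub device registry.
CONN_STR = "HostName=myhub.azure-devices.net;DeviceId=photon-01;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)

for _ in range(10):
    # Simulated readings for the Grove temperature & air quality sensors.
    payload = {"deviceId": "photon-01", "temperature": 21.5, "airQuality": 42}
    msg = Message(json.dumps(payload))
    msg.content_type = "application/json"
    msg.content_encoding = "utf-8"
    client.send_message(msg)  # Stream Analytics picks this up downstream
    time.sleep(60)
```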
Now let’s delve into these parts one by one!
Continue reading “IoT Prototyping in Azure with Particle & Grove”
A lot of workloads are driven by peak consumption. From my experience, the workloads that have a constant performance need are in the minority. Now here comes the interesting opportunity when leveraging serverless architectures… Here you only pay for your actual consumption. So if you tweak your architecture to leverage this, you can get huge gains!
For today’s post, I’ll be using VMchooser once again as an example. A lot has changed since the last post on the anatomy of this application. Here is an updated drawing of the high level architecture ;
Underneath you can see the flow that’ll be used when doing a “Bulk Mapping” (aka “CSV Upload”). The webapp (“frontend”) will store the CSV as a blob on the storage account. Once a new blob arrives, a function will be triggered that examines the CSV file and puts every entry onto a queue. Once a message is published onto the queue, another function will start processing it. By using this pattern, I’m transforming this job into a parallel processing job where each entry is handled (about) simultaneously. The downside is that there will be contention/competition for the back-end resources (being the data store). Luckily, CosmosDB can scale on the fly too… We can adapt the request units as needed; up or down! So let’s do a small PoC and see how this could work…
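To make that scaling concrete, here is a minimal Python sketch (assuming the collection is exposed via the MongoDB API, as elsewhere on this blog, and with placeholder names) that uses the Cosmos DB extension commands to bump the request units on a collection before the bulk job & dial them back afterwards.

```python
from pymongo import MongoClient

# Placeholder connection string for the VMchooser data store.
client = MongoClient("mongodb://vmchooser:<key>@vmchooser.mongo.cosmos.azure.com:10255/?ssl=true")
db = client["vmchooser"]

def set_throughput(collection: str, request_units: int):
    # Cosmos DB (MongoDB API) extension command to update the RUs
    # provisioned on a collection.
    return db.command({
        "customAction": "UpdateCollection",
        "collection": collection,
        "offerThroughput": request_units,
    })

set_throughput("entries", 10000)  # push the pedal for the bulk import
# ... let the parallel processing drain the queue ...
set_throughput("entries", 400)    # and ease off afterwards
```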
Continue reading “Serverless On-Demand Scaling : Pushing the pedal when you need it…”