IoT Prototyping in Azure with Particle & Grove

Introduction

Today’s post will be about what I see as the smoothest way to do prototyping & hobby projects when it comes to IoT. What is my main principle in deciding this? I only want to spend time on the “business logic” and not waste time on the nuts & bolts of the engine.

Architecture

So what’s the architecture we’ll be using for this?

  1. Device : Particle Photon + Grove Expansion Board + Grove Sensors (Temperature & Air Quality)
  2. Particle Platform : Used for the development
  3. Azure IoT Hub : Basically a 1:1 link with Particle, which will take over once we go to a production-grade setup (see the sketch after this list).
  4. Azure Stream Analytics : Streaming the ingested data from our IoT Hub towards our various landing zones.
  5. Azure CosmosDB : For storing the data we’ll use in our reports.
  6. Azure Storage Account : Cheap storage where we keep all the data we collected, and which we could use for our analytics.
  7. PowerBI : To make nice reports of the data we collected. 😉
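
To get a feel for what item 3 above will be receiving once the Particle integration forwards the Photon’s events into IoT Hub, here is a minimal Python sketch that reads from the hub’s built-in Event Hub-compatible endpoint. Treat it as a sketch: the connection string, the Event Hub-compatible name and the JSON payload shape are placeholders/assumptions, not values taken from this setup.

```python
# Minimal sketch: peek at the telemetry landing in Azure IoT Hub.
# Requires: pip install azure-eventhub
import json

from azure.eventhub import EventHubConsumerClient

# Placeholders: copy these from the IoT Hub "Built-in endpoints" blade.
CONNECTION_STRING = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=..."
EVENTHUB_NAME = "<event-hub-compatible-name>"


def on_event(partition_context, event):
    # Assumed payload shape: the integration forwards JSON such as
    # {"temperature": 21.4, "airquality": 12} per message.
    payload = json.loads(event.body_as_str())
    print(f"partition {partition_context.partition_id}: {payload}")


client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STRING,
    consumer_group="$Default",
    eventhub_name=EVENTHUB_NAME,
)

with client:
    # Start from the oldest available event and print everything that arrives.
    client.receive(on_event=on_event, starting_position="-1")
```

Running this while the Photon is publishing should show the raw telemetry before Stream Analytics (item 4) picks it up.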

Now let’s delve into these parts one by one!

Continue reading “IoT Prototyping in Azure with Particle & Grove”

Serverless On-Demand Scaling : Pushing the pedal when you need it…

Introduction

A lot of workloads are driven by peak consumption. In my experience, workloads with a constant performance need are in the minority. Now here comes the interesting opportunity when leveraging serverless architectures… here you only pay for your actual consumption. So if you tweak your architecture to leverage this, you can get huge gains!

For today’s post, I’ll be using VMchooser once again as an example. A lot has changed since the last post on the anatomy of this application. Here is an updated drawing of the high-level architecture:

Underneath you can see the flow that’ll be used when doing a “Bulk Mapping” (aka “CSV Upload”). The webapp (“frontend”) will store the CSV as a blob on the storage account. Once a new blob arrives, a function will be triggered that will examine the CSV file and put every entry onto a queue. Once a message is published onto the queue, another function will start processing this message. By using this pattern, I’m transforming the job into a parallel processing job where each entry is handled (more or less) simultaneously. The downside of this is that there will be contention/competition for the back-end resources (being the data store). Luckily, CosmosDB can scale on the fly too… We can adapt the request units as needed; up or down! So let’s do a small PoC and see how this could work…
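
As a small illustration of that last point, here is a minimal Python sketch (using the azure-cosmos SDK) of bumping a container’s request units before the bulk job and easing off afterwards. The endpoint, key, database/container names and RU values are placeholders for the example, not necessarily what VMchooser itself runs with, and it assumes the container has dedicated (not shared database-level) throughput.

```python
# Minimal sketch: scale a CosmosDB container's request units up before a bulk
# job and back down afterwards. All names, keys and RU values are placeholders.
# Requires: pip install azure-cosmos
from azure.cosmos import CosmosClient

ENDPOINT = "https://<account>.documents.azure.com:443/"
KEY = "<primary-key>"

client = CosmosClient(ENDPOINT, credential=KEY)
container = client.get_database_client("vmchooser").get_container_client("vmsizes")

# What are we provisioned at right now?
current = container.get_throughput()
print(f"Current throughput: {current.offer_throughput} RU/s")

# Push the pedal before kicking off the bulk mapping...
container.replace_throughput(2000)

# ... let the queue-triggered functions chew through the entries here ...

# ... and ease off again once the queue has drained.
container.replace_throughput(400)
```

In a real setup you could trigger the scale-up from the same function that splits the CSV and scale back down once the queue is empty, so the higher RU count is only paid for during the burst.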

Continue reading “Serverless On-Demand Scaling : Pushing the pedal when you need it…”