Introduction
In my current role at Microsoft, I often talk about the possibilities regarding application modernization. A typical question in this space is what kind of service one should use as the underlying platform for their own services, which commonly results in a (brief) discussion about VMs vs Containers vs Serverless/FaaS. Today’s post is my personal take on the matter.
Setting the scene
First, let’s set the scene a bit… For today I’ll focus on the application modernization landscape, though the same reasoning applies to the data platform stack. There you can pretty much interchange “Functions” with “Data Lake Analytics” and “Containers” with “HD Insights”. We won’t go into that level of detail, in order to keep the complexity of the post down. 😉
When looking at the spectrum, the first thing to acknowledge is the difference in service models. Here we mainly have two service models in play ;
- Infrastructure-as-a-Service (IaaS) : Here you are responsible for everything that happens from the OS upwards.
- Platform-as-a-Service (PaaS) : Here you are provided with a managed platform. Anything that you “drop into” the platform is your responsibility to manage.
That being said… I’ve also seen discussions where the statement is that “FaaS” (Function-as-a-Service) is actually what PaaS should have been: the total abstraction of any server-related concepts. In my mind, there are two dimensions to that discussion ;
- Cost Granularity : The first thing to consider is the cost aspect. If we look at the different services, we see two types here: the ones that relate back to a “VM” (cores + memory), and the ones that charge per CPU cycle… As I’m typing this, I’m actually pondering that a CPU cycle also relates back to a “VM” (a very granular slice of “core + memory”), though it feels totally different in terms of cost sizing / approach. With FaaS/Serverless, and to an extent PaaS, I’ve noticed the cost discussion really happening at the business level. A rough sketch of the two pricing shapes follows this list.
- Management Depth : The second thing to consider is the “management depth”. Thinking out loud… I’m not sure that’s a commonly accepted term, but what I mean by it is the ability to control the underlying platform. The more you go to the left side of the drawing above, the more you have full control over / customization of the entire platform. The more you go to the right, the more you’ll have to abide by certain standards. But the more you can control, the more you will need to manage yourself. Does that mean there is no management on the left side? Of course not! I hope, by now, everyone has realized that there are no silver bullets. Not XX years ago, and neither YY years in the future!
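To make the cost granularity point a bit more tangible, below is a minimal Python sketch comparing the two pricing shapes. Every price in it is made up purely for illustration (check your provider’s actual rate card); the interesting part is that a spiky, low-volume workload favors the consumption model, while a steady, high-volume one can make the flat “VM” price the cheaper option.

```python
# Hypothetical prices, for illustration only -- not any provider's rate card.
VM_MONTHLY_COST = 150.00          # flat fee for a reserved VM, paid even when idle
FAAS_COST_PER_MILLION = 0.40      # per-invocation price, per 1M requests
FAAS_GB_SECOND_COST = 0.000016    # price per GB-second of execution time

def faas_monthly_cost(invocations: int, avg_duration_s: float, memory_gb: float) -> float:
    """Consumption model: pay per invocation and per GB-second actually consumed."""
    execution_cost = invocations * avg_duration_s * memory_gb * FAAS_GB_SECOND_COST
    request_cost = (invocations / 1_000_000) * FAAS_COST_PER_MILLION
    return execution_cost + request_cost

print(f"FaaS, 2M calls/month:  {faas_monthly_cost(2_000_000, 0.2, 0.5):8.2f}")   # 4.00
print(f"FaaS, 80M calls/month: {faas_monthly_cost(80_000_000, 0.2, 0.5):8.2f}")  # 160.00
print(f"VM, any volume:        {VM_MONTHLY_COST:8.2f}")                          # 150.00
```

Where exactly the crossover sits depends entirely on your real rates and traffic, which is why this conversation quickly becomes a business-level one.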
“WARNING : Vendor Lock-In Approaching. Make a legal U-Turn”
When looking at the serverless stack, I typically see two archetypes of organizations in terms of their response to it ;
- “Time-to-Market” : The first group is focused on the time-to-market. They are outright fans of serverless/functions as they can get straight to the business logic and create value from their project.
- “Portability” : The second group wants full portability of their solution. They want to avoid vendor lock-in at all times. Here I typically see that the container road is the one they want to take.
First of all, let’s talk “semantics”… 😉 If we want to get nitty-gritty about things, be aware that there is a difference between vendor lock-in and technology lock-in. One could argue that serverless isn’t vendor lock-in, as we cannot speak of a monopoly; one could still adapt the code to run on other serverless platforms.
So actually, the discussion should be about the cost of migration for a given solution/implementation. Because, let’s take the example of containers, if we were to leverage those for portability… The technology has a clear vision of “build once, run anywhere”. That does not mean “move anywhere, without a cost”!
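To illustrate where that migration cost tends to live, here is a minimal, hypothetical sketch (none of these names are a real SDK; the payload shapes are invented): the business logic is kept vendor-neutral, and each platform only gets a thin adapter that translates its event shape. Migrating then means rewriting the adapters, not the core.

```python
from dataclasses import dataclass

@dataclass
class OrderEvent:
    """Vendor-neutral event shape used by the business logic."""
    order_id: str
    amount: float

def process_order(event: OrderEvent) -> str:
    """Pure business logic -- knows nothing about any serverless platform."""
    return f"processed {event.order_id} for {event.amount:.2f}"

# Thin, vendor-specific adapters: the only code touched during a migration.
# Both payload shapes below are made up for the example.
def handle_platform_a(req: dict) -> str:
    return process_order(OrderEvent(req["id"], float(req["value"])))

def handle_platform_b(event: dict, context: object) -> str:
    return process_order(OrderEvent(event["orderId"], float(event["total"])))
```

The cost of migration then roughly equals the cost of the adapter layer, which is a number you can actually estimate up front.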
Portability = Business Requirement
Portability is a business requirement and has been around for ages. I used to work on various large Java-based projects. While Java (and others, like Python) always touted portability, one always needed to consider this in the code implementation. It was very easy to use an OS- or platform-specific (WebLogic, JBoss, WebSphere, …) implementation and be “locked” to a given platform (and thus lose the portability). A tiny illustration follows below.
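As a small Python-flavored example of how easily that happens, compare a hard-coded, platform-specific path with its portable equivalent:

```python
import tempfile
from pathlib import Path

# Locked to one platform: hard-coded drive letter and separator.
log_path_windows_only = "C:\\Temp\\app.log"

# Portable: let the standard library resolve the platform differences.
log_path_portable = Path(tempfile.gettempdir()) / "app.log"

print(log_path_windows_only)  # only meaningful on Windows
print(log_path_portable)      # correct on Windows, Linux and macOS
```

The same pattern scales up: every platform-specific API call is the enterprise-sized version of that hard-coded path.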
Polynimbus isn’t for everyone
So the question is: how far is your business willing to go in terms of business continuity? The Valhalla here is a polynimbus architecture. I’ve seen organizations where that is a very sound strategy, yet I must say that the majority of organizations are not ready to commit to it. In their hearts they want to go that route, though they truly underestimate the strain it puts on the organization, aside from the needed talent pool and the costs involved… Also consider that you are “limiting” yourself by refraining from using any vendor-specific features. Because if you were to use those, you would lose compliance with the statement you made in terms of “multi-cloud” / “portability”.
The logic applies to both “OnPrem” & “Cloud”
Granted, I work for Microsoft, mainly due to my affection for everything the public cloud brings. But the same logic of this post applies to “OnPrem / On-Premises”! For example, did you mix vSphere & Hyper-V? Were you running OpenStack and…? What about WebLogic, JBoss & WebSphere? Or MSSQL, Postgres, MySQL & Oracle? AX & SAP ERP? 😉
If you weren’t doing it before, why do you want to do it now? I can understand the choice if it is a business strategy, though often it is not. Often it is a technical ask (a “nice-to-have”) that is not backed by the rest of the organisation. My apologies for stating this so bluntly, but I can only imagine that you’ll concur.
Focus => Cost of the Full Application Lifecycle
In my mind, you should focus on the potential cost of migration and incorporate it into the total cost of the entire lifecycle of the application. Let’s imagine a flow where we would need to migrate… Portability typically revolves around the assumption that the “Migrate” cost, in combination with the period of “dual operations”, will be very low.
So if we imagine going “multi technology” (“technology”, as it covers all grounds), then we typically see that the costs (aside from the migration itself) are higher. From a business perspective, you should therefore factor in three scenarios (a rough sketch of the comparison follows this list) ;
- Single Technology Stack : Here you assume the best case scenario: you stay with one technology stack during the entire lifecycle.
- Migration Scenario : Here you assume a worst case scenario where you would need to migrate between technology stacks.
- Dual Technology Stack : Here you assume that you need to stay compatible with two (or more?) stacks and be able to switch at any time.
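As a back-of-the-envelope sketch of how those three scenarios might compare over a lifecycle, here is a small model with placeholder numbers (plug in your own estimates; the relative shape is the point, not the figures):

```python
YEARS = 5
BUILD = 100.0           # hypothetical initial build cost
RUN_PER_YEAR = 40.0     # hypothetical yearly run cost on a single stack
MIGRATE = 80.0          # hypothetical one-off migration cost
DUAL_OVERHEAD = 1.5     # hypothetical multiplier for staying dual-stack compatible

# Best case: one stack for the entire lifecycle.
single = BUILD + RUN_PER_YEAR * YEARS
# Worst case: migrate once, plus roughly a year of "dual operations".
migration = BUILD + RUN_PER_YEAR * YEARS + MIGRATE + RUN_PER_YEAR
# Permanent portability: every year carries the compatibility overhead.
dual = BUILD + RUN_PER_YEAR * DUAL_OVERHEAD * YEARS

print(f"single stack : {single:6.1f}")     # 300.0
print(f"migration    : {migration:6.1f}")  # 420.0
print(f"dual stack   : {dual:6.1f}")       # 400.0
```

With these made-up numbers, permanently staying dual-stack costs almost as much as the worst-case migration, which is exactly the trade-off the three scenarios are meant to expose.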
Closing Thoughts
Being cautious is far from a bad thing! But don’t be blinded by portability without seeing the broader picture. In the end everything revolves around business requirements in combination with a certain budget. All too often, I’ve seen organisations factor in only certain cost aspects of the full application lifecycle.
It’s kind of the same as with Disaster Recovery… People have learned over the years that stating an RPO of 0 and an RTO of 0 comes with a huge cost. The same goes for being fully portable. You have to factor in the costs needed to achieve it, and decide whether that is worth it or not. For some that might prove a good business case; for others it might not.
I hope you enjoyed the read! Let me know what your thoughts are on the matter.