Comparing Costs : Is Cloud more expensive than an On Premises setup?

Introduction

In my role as a Cloud Solution Architect, I’m often confronted with the statement that cloud is expensive. My reply is always that cloud is not more expensive than an On Premises setup if you take into account all the costs involved. As this is an easy statement to make… I made an effort to create a cost comparison for four different scenarios (in terms of deployment size) and stacked “On Premises” against “Cloud”.

[Image: comparing apples and oranges]

In this post we’ll discuss this calculation and ensure that we are comparing apples to apples!

 

Design Decisions


Rancher End-to-End Service Example using an ownCloud-plus-MySQL Deployment

Introduction

So what will we be doing today? We are going to leverage the power of the combination of Docker containers and the Rancher ecosystem. As a demonstration, we’ll be publishing “ownCloud” with a “MySQL” backend. As we tend to like it a bit more secure, we’ll introduce a load balancer service as SSL termination; this way we keep our “ownCloud” as “vanilla” as possible. We’ll point that service towards the outside world and make it accessible via the “external DNS”. A minimal compose sketch of the core stack follows after the list below.

[Image: Rancher service layout for ownCloud with a MySQL backend and an SSL-terminating load balancer]

What can we optimize further about the design? (but is out-of-scope for today)

  • Add sidekick containers for backup purposes
  • Add data volume containers
  • Introduce scalable worker containers (“ownCloud”)
  • Introduce Convoy for our data containers
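
As a reference point, here is a minimal docker-compose sketch of the core stack; the image names, tag, password, and port mapping are illustrative assumptions, and the SSL-terminating load balancer plus external DNS services are layered on top of this in Rancher:

db:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: changeme   # illustrative secret, replace in a real setup
    MYSQL_DATABASE: owncloud
owncloud:
  image: owncloud
  links:
    - db          # the "vanilla" ownCloud worker talks to the MySQL backend
  ports:
    - "80:80"     # SSL termination is handled by the load balancer service, not here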


Microsoft Azure Support Lifecycle

As an architect, one always looks at the supportability of the entire stack during the projected lifecycle of the solution that will be built. So when we are using Azure components, we should also be aware that these cloud services have a support lifecycle of their own… Anyhow, let’s dig in!

Azure has four different support categories when looking at the support lifecycle ;

  • Azure Virtual Machines – The Microsoft software supported on Azure Virtual Machines (Infrastructure as a Service) will follow the existing Mainstream and Extended Support phase of the on-premises lifecycle support policy.
  • Azure Cloud Services – Microsoft Azure Cloud Services (Web and Worker Roles / Platform as a Service) allows developers to easily deploy and manage application services while delegating the management of the underlying Role Instances and Operating System to the Azure Platform. A separate lifecycle policy details the support for the Guest OS provided by Azure for Cloud Services.
  • Azure Services – All other Azure Services follow the Online Services Support Lifecycle Policy for Business and Developer.
  • Support for Custom Applications using Open Source on Azure – For all scenarios that are eligible for support through an Azure purchased support plan, Microsoft will provide commercially reasonable efforts to support custom applications and services built using open source software running on Azure.

Source : https://support.microsoft.com/en-us/lifecycle#gp/azure-cloud-lifecycle-faq

Is there anything specific you should note?

  • Microsoft Azure will provide 12 months of notice to retire the oldest Guest OS family from the list of supported OS families.
  • At the end of the 12 month notification period, Microsoft Azure will stop providing the latest/patched version for a retired OS family. The cloud services using the retired OS family will be unsupported and will be stopped.
  • Each Guest OS Version is normally disabled 60 days after its release. After this grace period expires, Cloud Services using the retired OS version will be force upgraded to a supported Guest OS version.

Database variants explained : SQL or NoSQL? Is that really the question?

A first glance beyond the religion

When taking a look at the database landscape, one can only accept that there has been a lot of commotion about “SQL vs NoSQL” in recent years. But what is it really about?

SQL, which stands for “Structured Query Language”, has been around since the seventies and is commonly used in relational databases. It consists of a data definition language to define the structure and a data manipulation language to alter the data within that structure. An RDBMS therefore has a defined structure, and it has been a common choice for storing information in new databases used for financial records, manufacturing and logistical information, personnel data, and other applications since the 1980s.
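
To make the DDL/DML distinction concrete, here is a minimal sketch using Python’s built-in sqlite3 module (the table and data are made up for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database

# Data definition language (DDL): define the structure up front
conn.execute("CREATE TABLE employee "
             "(id INTEGER PRIMARY KEY, name TEXT, department TEXT)")

# Data manipulation language (DML): alter the data within that structure
conn.execute("INSERT INTO employee (name, department) VALUES (?, ?)",
             ("Bob Smith", "Logistics"))

for row in conn.execute("SELECT name FROM employee WHERE department = ?",
                        ("Logistics",)):
    print(row[0])  # -> Bob Smith

conn.close()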


NoSQL, which stands for “Not only SQL”, departs from the standard relational model and saw its first introduction in the nineties. The primary focus of these databases was performance, or a given niche, with less focus on consistency/transactions. These databases provide a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, horizontal scaling, and finer control over availability. The data structures used by NoSQL databases (e.g. key-value, graph, or document) differ from those used in relational databases, making some operations faster in NoSQL and others faster in relational databases. The particular suitability of a given NoSQL database depends on the problem it must solve.

So it depends on your need…

Do you want NoSQL, NoSQL, NoSQL or NoSQL?

NoSQL comes in various flavors. The most common types of NoSQL databases (as portrayed by Wikipedia) ;

There have been various approaches to classify NoSQL databases, each with different categories and subcategories. Because of the variety of approaches and overlaps it is difficult to get and maintain an overview of non-relational databases. Nevertheless, a basic classification is based on data model. A few examples in each category are:

  • Column: Accumulo, Cassandra, Druid, HBase, Vertica
  • Document: Clusterpoint, Apache CouchDB, Couchbase, MarkLogic, MongoDB, OrientDB
  • Key-value: Dynamo, FoundationDB, MemcacheDB, Redis, Riak, FairCom c-treeACE, Aerospike, OrientDB
  • Graph: Allegro, Neo4J, InfiniteGraph, OrientDB, Virtuoso, Stardog
  • Multi-model: OrientDB, FoundationDB, ArangoDB, Alchemy Database, CortexDB

Column

A column of a distributed data store is a NoSQL object of the lowest level in a keyspace. It is a tuple (a key-value pair) consisting of three elements:

  • Unique name: Used to reference the column
  • Value: The content of the column. It can have different types, like AsciiType, LongType, TimeUUIDType, or UTF8Type, among others.
  • Timestamp: The system timestamp used to determine the valid content.

Example

{
    street: {name: "street", value: "1234 x street", timestamp: 123456789},
    city: {name: "city", value: "san francisco", timestamp: 123456789},
    zip: {name: "zip", value: "94107", timestamp: 123456789},
}
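
For a hands-on feel, the same address columns could be written to Cassandra, a column-family store from the list above. Below is a minimal sketch with the DataStax Python driver; the keyspace and table names are assumptions, and writetime() exposes the timestamp element described above:

from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# DDL: a keyspace and a table for the address columns shown above
session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 1}")
session.execute("CREATE TABLE IF NOT EXISTS demo.addresses "
                "(id text PRIMARY KEY, street text, city text, zip text)")

session.execute("INSERT INTO demo.addresses (id, street, city, zip) "
                "VALUES ('user1', '1234 x street', 'san francisco', '94107')")

# writetime() returns the per-column system timestamp mentioned above
row = session.execute("SELECT street, writetime(street) "
                      "FROM demo.addresses WHERE id = 'user1'").one()
print(row[0], row[1])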

Document

A document-oriented database is designed for storing, retrieving, and managing document-oriented information, also known as semi-structured data. The central concept of a document-oriented database is that Documents, in largely the usual English sense, contain vast amounts of data which can usefully be made available. Document-oriented database implementations differ widely in detail and functionality. Most accept documents in a variety of forms, and encapsulate them in a standardized internal format, while extracting at least some specific data items that are then associated with the document.

Example

<Article>
   <Author>
       <FirstName>Bob</FirstName>
       <Surname>Smith</Surname>
   </Author>
   <Abstract>This paper concerns....</Abstract>
   <Section n="1"><Title>Introduction</Title>
       <Para>...
   </Section>
 </Article>
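
In a document store such as MongoDB, a similar article can be stored and queried as one JSON-like document. A minimal sketch using pymongo (the database and collection names are assumptions):

from pymongo import MongoClient  # pip install pymongo

client = MongoClient("localhost", 27017)
articles = client.demo.articles  # hypothetical database/collection names

# The whole semi-structured document is stored as-is...
articles.insert_one({
    "author": {"firstname": "Bob", "surname": "Smith"},
    "abstract": "This paper concerns....",
    "sections": [{"n": 1, "title": "Introduction", "para": "..."}],
})

# ...while specific data items inside it remain directly queryable
doc = articles.find_one({"author.surname": "Smith"})
print(doc["abstract"])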

Key-Value

A key-value store (also known as an associative array, map, symbol table, or dictionary) is an abstract data type composed of a collection of key/value pairs, such that each possible key appears just once in the collection.

Example

{
    "Pride and Prejudice": "Alice",
    "The Brothers Karamazov": "Pat",
    "Wuthering Heights": "Alice"
}
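
Such a collection maps directly onto a key-value store like Redis. A minimal sketch with the redis-py client, reusing the keys from the example above:

import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)

# Each possible key appears just once in the collection
r.set("Pride and Prejudice", "Alice")
r.set("The Brothers Karamazov", "Pat")
r.set("Wuthering Heights", "Alice")

print(r.get("Wuthering Heights"))  # b'Alice' (values are returned as bytes)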

Graph

A graph database is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A graph database is any storage system that provides index-free adjacency. This means that every element contains a direct pointer to its adjacent elements and no index lookups are necessary. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.

Example

[Image: property graph example]
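
To illustrate index-free adjacency in practice, here is a minimal sketch with the official Neo4j Python driver; the connection details, labels, and relationship type are assumptions. It creates two connected nodes and walks the relationship:

from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

with driver.session() as session:
    # Two nodes with properties, connected by a typed edge
    session.run("CREATE (:Person {name: 'Alice'})-[:READ]->"
                "(:Book {title: 'Wuthering Heights'})")

    # Walk the relationship from node to adjacent node
    result = session.run("MATCH (p:Person)-[:READ]->(b:Book) "
                         "RETURN p.name AS reader, b.title AS book")
    for record in result:
        print(record["reader"], "read", record["book"])

driver.close()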

Multi-Model

Most database management systems are organized around a single data model that determines how data can be organized, stored, and manipulated. In contrast, a multi-model database is designed to support multiple data models against a single, integrated backend. Document, graph, relational, and key-value models are examples of data models that may be supported by a multi-model database.

And what flavor do I want?

Each type and implementation has its own advantages… The following chart from Shankar Sahai provides a good overview ;

[Image: NoSQL database comparison table by Shankar Sahai]

Any other considerations I should take into account?

Be aware that most implementations were designed less around consistency and integrity and more towards performance. Transactions and referential integrity are not supported by most implementations. High availability designs (including on a geographic level) are possible with some implementations, though this often implies a performance impact (as one would expect).

Also check out the research done by Altoros ;

5. Conclusion
As you can see, there is no perfect NoSQL database. Every database has its advantages and disadvantages that become more or less important depending on your preferences and the type of tasks.
For example, a database can demonstrate excellent performance, but once the amount of records exceeds a certain limit, the speed falls dramatically. It means that this particular solution can be good for moderate data loads and extremely fast computations, but it would not be suitable for jobs that require a lot of reads and writes. In addition, database performance also depends on the capacity of your hardware.

They did a very decent job in performance testing various implementations!

[Image: benchmark results from the Altoros report “A Vendor-independent Comparison of NoSQL Databases: Cassandra, HBase, MongoDB, Riak”]

Microsoft Azure : How to connect my Enterprise? Expressroute or VPN?


Microsoft has been going at warp speed over the last year (and it looks like this pace will be kept) with the features they have been adding to Azure. When I first came into contact with Azure, one of my first questions was: “How can I hook up Azure to my Wide Area Network (WAN)?” The answer at that point was a kinda flaky VPN connection. About half a year ago, Microsoft released “ExpressRoute”. This was the answer Enterprise customers were looking for in terms of hooking up Azure to their WAN. So let’s take a look at your options…

Basically, you have five options to connect to Azure ;

Internet (public)

  • Medium : Public
  • Network : Public
  • Capacity: No explicit cap
  • Connection Resilience : Active / Active
  • High Level Solution : Your “typical” enterprise internet
  • Typical Usage : Almost everything in Azure that isn’t covered by the services mentioned below.

Virtual Network – Point-to-site

  • Medium : Public
  • Network : Private
  • Capacity: Typically 100 Mbit Aggregates
  • Connection Resilience : Active / Passive
  • High Level Solution : A point-to-site VPN also allows you to create a secure connection to your virtual network. In a point-to-site configuration, the connection is configured individually on each client computer that you want to connect to the virtual network. Point-to-site connections do not require a VPN device. They work by using a VPN client that you install on each client computer. The VPN is established by manually starting the connection from the on-premises client computer. You can also configure the VPN client to automatically restart.
  • Typical Usage : Proof-of-Concept, Prototyping, Evaluation, …

Virtual Network – Site-to-site

  • Medium : Public
  • Network : Private
  • Capacity: Typically 100 Mbit Aggregates
  • Connection Resilience : Active / Passive
  • High Level Solution : A site-to-site VPN allows you to create a secure connection between your on-premises site and your virtual network. To create a site-to-site connection, a VPN device that is located on your on-premises network is configured to create a secure connection with the Azure Virtual Network Gateway. Once the connection is created, resources on your local network and resources located in your virtual network can communicate directly and securely. Site-to-site connections do not require you to establish a separate connection for each client computer on your local network to access resources in the virtual network.
  • Typical Usage : Small scale production workloads, development/test environments, …

ExpressRoute – Exchange Provider

  • Medium : Private
  • Network : Private
  • Capacity: up to 10Gbps
  • Connection Resilience : Active / Active (customer managed)
  • High Level Solution : Azure ExpressRoute lets you create private connections between Azure datacenters and infrastructure that’s on your premises or in a co-location environment. ExpressRoute connections do not go over the public Internet, and offer more reliability, faster speeds, lower latencies and higher security than typical connections over the Internet. In some cases, using ExpressRoute connections to transfer data between on-premises and Azure can also yield significant cost benefits. With ExpressRoute Exchange Provider, you can establish connections to Azure at an ExpressRoute location (Exchange Provider facility).
  • Typical Usage : Mission Critical Workloads

ExpressRoute – Network Service Provider

  • Medium : Private
  • Network : Private
  • Capacity : up to 1Gbps
  • Connection Resilience : Active / Active (telecom provider managed)
  • High Level Solution : Azure ExpressRoute lets you create private connections between Azure datacenters and infrastructure that’s on your premises or in a co-location environment. ExpressRoute connections do not go over the public Internet, and offer more reliability, faster speeds, lower latencies and higher security than typical connections over the Internet. In some cases, using ExpressRoute connections to transfer data between on-premises and Azure can also yield significant cost benefits. With ExpressRoute Service Provider, you can directly connect to Azure from your existing WAN network (such as an MPLS VPN) provided by a network service provider.
  • Typical Usage : Mission Critical Workloads

Network Segregation

So if I get ExpressRoute, how will my network flows go?

Basically, the private solutions ensure that your company communication will not traverse the public internet. You can configure your service to break out towards public services either via Azure’s internet connection or via your own hop. For instance, if you want to download updates, you could set it up so that those are fetched via Azure, instead of going back over your ExpressRoute link to break out from within your own premises.

Decision Chart

So what does this mean for a typical Enterprise?

It depends on your scenario…

  • Looking to do some raw testing?
      Isolated Test : Internet only
      Integrated : Point-to-site / Site-to-site VPN
  • Hook up your development/test environment in a lean manner? Site-to-site VPN
  • Azure as a Disaster Recovery location? Depends on your size…
      Small IT Landscape : Site-to-site VPN
      From a few TB onwards : ExpressRoute
  • Azure as a Primary Datacenter : ExpressRoute Service Provider

[Image: connectivity decision chart]


A roadmap to the cloud… Where should I focus on?

Cloud is here to stay!
A lot of questions about “THE Cloud” have been raised in recent years. In the beginning, most responses were that it was a hype or a rebranded solution from the past (“ASP“). At this point in time, though, it is safe to say that “Cloud Services” are here to stay and that there is no way back; an IT department has no choice but to embrace them. My personal sentiment is that the current market leaders “Amazon” & “Microsoft” will continue to grow and eventually dominate this market. As Google has enough cashflow, I suspect that they will join in this battle. So the current conundrum is: how to move your current landscape from an “on premises” way of working towards the cloud…?

Cloud Maturity Model
For organisations that are stuck with this question, I would like to point to a fine document (“Cloud Maturity Model“) by the Open Data Center Alliance. It describes the different stages, even from different perspectives, that you will traverse in your journey.

Quote about the cloud maturity model ;


Progression through the various maturity levels is based on the evolution of a number of parallel capabilities, as described in the following figures. The result is represented by an inferred resulting maturity, roughly mapped as follows:

  • CMM 1. (Initial / Ad Hoc) The existing environment is analyzed and documented for initial cloud potential. Pockets of virtualized systems exist, for limited systems, without automation tooling, operated under the traditional IT and procurement processes. Most of the landscape still runs on physical infrastructure. The focus is on the private cloud, although the public cloud is used for niche applications.
  • CMM 2. (Repeatable / Opportunistic) IT and procurement processes and controls are updated specifically to deal with cloud and who may order services and service elements and how. Private cloud is fully embraced with physical-to-virtual movement of apps and the emergence of cloud-aware apps.
  • CMM 3. (Defined / Systematic) Tooling is introduced and updated to facilitate the ordering, control, and management of cloud services. Risk and governance controls are integrated into this control layer, ensuring adherence to corporate and country requirements. Complementary service management interfaces are operational. More sophisticated use of SaaS is evident, and private PaaS emerges.
  • CMM 4. (Measured / Measurable) Online controls exist to manage federated system landscapes, distributed data and data movement, federated and distributed application transactions, and the cross-boundary transitions and interactions. Defined partners and integration exist, enabling dynamic movement of systems and data, with supporting tool layer integration (for example, service desk, alerting, commercial systems, governances). Cloud-aware apps are the norm and PaaS is pervasive. Hybrid apps develop across cloud delivery models.
  • CMM 5. (Optimized) All service and application deployments are automated, with orchestration systems automatically locating data and applications in the appropriate cloud location and migrating them according to business requirements, transparently (for example, to take advantage of carbon targets, cost opportunities, quality, or functionality).

So far, so good… yeah? I know, this all still sounds a bit “fluffy“. The basic thing to remember is that there are various stages involved, so you can keep track of where you are. For me, though, there are three focus points that every organisation should embrace in order to be ready for a future with cloud services.

  • IAAS has become commodity
  • Federation is the new black
  • Interoperability is mandatory

IAAS has become commodity
I do NOT believe in on-premises virtualisation farms anymore… for the majority of organisations. I must concede that there are use cases that still require them, but for the majority of organisations this is not the case. I can see you pondering “But we are special!”, and I must disappoint you: most organisations are not. Internal IT should focus on the things that deliver real value to an organisation. An Infrastructure-as-a-Service layer has become a basic commodity in the market and you should embrace it. The time you spend maintaining the lowest layers is better invested in real business value. I concede, yet again, that this will imply a shift in the skills needed…

“When the winds of change blow, some people build walls and others build windmills.” -Chinese Proverb

Federation is the new black
Let’s start with a quote from the maturity model ;

Federation refers to the ability of identity and access management software to be able to securely share user identities and profiles. This ability allows users within a specific organization to utilize resources located in multiple clouds without having to generate separate credentials in each cloud individually. IT is able to manage one set of identities, authorizations, and set of security review processes. From the user perspective, this enables seamless integration with systems and applications.

For most organisations: start with setting up a federation service… Active Directory Federation Services, or a SAML provider; pick whatever best fits your current technology stack. But be aware that federation is a key, if not THE key, component of a successful cloud roadmap!

Interoperability is mandatory
And, yet again, let’s start with a quote ;

There are two key concepts of interoperability: (1) the ability to connect two systems that are concurrently running in cloud environments, and (2) the ability to easily port a system from one cloud to another. Both involve the use of standard mechanisms for service orchestration and management, enabling elastic operation and flexibility for dynamic business models, while minimizing vendor lock-in.

Your high level architecture should consist of “islands”, which are linked together via APIs and/or abstraction layers and where authentication is done via federation mechanisms.

In addition, keep in mind that you will move systems around. Interoperability for migrating systems is therefore a key requirement and should always be a focal point in your decision-making. For instance: think about exit scenarios with a specific cloud provider. How will you handle those?

Conclusion (TL;DR)

  • Cloud is here to stay. In a few years, it will be the de facto standard.
  • Infrastructure-as-a-Service has become commodity. In a few years, this segment will be dominated by Amazon, Microsoft & Google.
  • Federation is the new black. If you haven’t set up a federation system… DO IT NOW!
  • Interoperability is mandatory. Always keep in mind that systems should be portable islands which are built for data interaction.

The DTAP-Street : a phased approach to a development / deployment cycle

The acronym DTAP finds its origin in the words Development, Testing, Acceptance and Production. The DTAP-street is a commonly accepted method for a phased approach to software development and deployment.

A typical flow works as follows :

  • Development – This environment is where the software is developed. It is the first environment that is used. Changes are very frequent here, as this is the first area where creativity is forged into a product.
  • Test – A developer is (hopefully) not alone. In the test environment, the complete code base is merged and forged into one single product. The first attempts at standardization and alignment towards the future production environment are made here.
  • Acceptance – Once the development team feels that the product is ready, it will be deployed to acceptance. This is a look-alike of the production environment and is used by operations as a staging area for production releases.
  • Production – The real deal… Here the product surely needs to be ready for prime-time.

[Image: DTAP street flow]

Sometimes the following are also added ;

  • Education / Training – Sometimes a dedicated environment is needed where people can test drive the software in a safe sandbox. For efficiency reasons, this environment is often time-shared with acceptance.
  • Backup / Disaster Recovery – Disasters can happen… Therefore some disaster recovery plans may rely on a dedicated backup / disaster recovery location.
  • Integration – An environment that is sometimes located between “Test” & “Acceptance” as an intermediate step to test certain partner integrations. Just as with the “Education” environment, this environment is often time-shared with acceptance.

What are the most commonly used formations?

  • Live – Production – Many companies rely solely on a production environment. The risk reduction is often neglected in favor of the cost benefit of having one environment.
  • Staging – Production/Test – If no real customizations are done to the implemented software, then two environments may suffice.
  • DTAP – Development/Test/Acceptance/Production – Once customizations hit… then a full DTAP-street is needed to reduce the risks involved with software development.
  • DTAPB – Development/Test/Acceptance/Production/Backup – This is an enhanced DTAP-street that is capable of handling a disaster recovery. (Sidenote ; the Test/Development environment is often shared with the backup location. This provides the advantage that the resources of Test/Development can be sacrificed during a disaster.)

What Code / Data flows occur between the environments?

  • Software Versions – Software releases go from Development to Test to Acceptance to Production… The timing varies with the chosen release management cycle, though typical cadences are as follows ; Development (continuous builds), Test (daily build), Acceptance (once per quarter, three weeks before production), Production (once per quarter).
  • Data – Data flows in the opposite direction of software versions. Data is taken from production and copied to Acceptance / Test / Development. Depending on the environment (and its security compliance requirements), the data may be anonymized or reduced to a representative workload of limited size, as illustrated in the sketch below.
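
As an illustration of that anonymization step, here is a minimal Python sketch; the file layout, column names, and masking rules are assumptions, and real compliance requirements will be stricter:

import csv
import hashlib

def pseudonymize(value):
    # Replace a sensitive value with a stable, non-reversible token
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

# Copy a production export into a test data set, masking personal data on the way
with open("prod_customers.csv") as src, \
     open("test_customers.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["name"] = pseudonymize(row["name"])
        row["email"] = pseudonymize(row["email"]) + "@example.invalid"
        writer.writerow(row)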