Azure : Using PHP to go all oauth2 on the management API!

Introduction

As a hobby effort, I wanted to create a small PoC where any user would be able to log in with their AAD (Azure Active Directory) account and grant access to an application, after which that application could query their subscriptions.

In all honesty, I’ve been struggling more than I like to admit to get this working… So this post will cover all the steps you need to take!

 

Oauth & Azure AD

Before getting our hands dirty, read up on the following post ; Authorize access to web applications using OAuth 2.0 and Azure Active Directory

Read it thoroughly! To be honest, I didn’t at first and it cost me a lot of time. 😉

Anyhow, the flow looks as follows…

active-directory-oauth-code-flow-native-app

So basically;

  • We’ll redirect the user to sign in (and, if this hasn’t been done before, grant our application access)
  • If all went well, we’ll receive an authorization code
  • We’ll use this code to get a bearer (and refresh) token
  • Next up, we’ll use the bearer token to call the Azure REST API and retrieve the list of subscriptions for that user (see the sketch below)
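To make those steps a bit more tangible, here is a minimal sketch of steps two through four. The full post does this in PHP, but the HTTP exchange is the same; treat the endpoint paths, the api-version and all placeholder values as assumptions to double-check against the documentation linked above ;

# Sketch only ; placeholder values, endpoint & api-version picked for illustration
$clientId     = "<application-client-id>"
$clientSecret = "<application-key>"
$redirectUri  = "https://localhost/callback"
$authCode     = "<authorization-code-returned-to-the-redirect-uri>"

# Swap the authorization code for a bearer (and refresh) token
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/common/oauth2/token" `
    -Body @{
        grant_type    = "authorization_code"
        client_id     = $clientId
        client_secret = $clientSecret
        code          = $authCode
        redirect_uri  = $redirectUri
        resource      = "https://management.azure.com/"
    }

# Use the bearer token against the management API to list the user's subscriptions
$subs = Invoke-RestMethod -Method Get `
    -Uri "https://management.azure.com/subscriptions?api-version=2015-01-01" `
    -Headers @{ Authorization = "Bearer $($token.access_token)" }
$subs.value | Select-Object subscriptionId, displayName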

Continue reading “Azure : Using PHP to go all oauth2 on the management API!”

Azure Resource Manager : Deployment variants within one script

Introduction
Today a quick post to show you that you can set up a deployment with several variants using the current template functions.
So for this post we’ll be combining the deployment of the Rancher server & nodes (https://kvaes.wordpress.com/2016/01/22/deploying-rancher-server-via-an-azure-resource-manager-template/) into one script.

2016-02-19 10_53_14-Parameters - Microsoft Azure

Continue reading “Azure Resource Manager : Deployment variants within one script”

Microsoft Azure : Budget Automation for your Development / Test Environment

Billing-per-minute

What is one of the biggest business advantages of Azure? You are only charged for your actual usage per minute. For many organizations, the cost of a development/test environment is a sore spot, as it costs a handful of cash. Today I will introduce you to Azure Automation, which will let you orchestrate things such as stopping/starting your environment.

What are we going to do?

  • Setup a dedicated account for our scheduled runbooks
  • Configure two runbooks ; “stop all servers” & “start all servers”
  • Schedule those runbooks

 

Setup a dedicated account for our scheduled runbooks

In my opinion, you always need to set up dedicated accounts for services. They should not be running under anyone’s “personal” account. At a given point that person will leave the company. At that time, if the system is still active and the user account gets decommissioned, the system will grind to a halt. In addition, this will also give you traceability of the actions of the given service.

So how do you set up a dedicated account for the scheduled runbooks? Check the following post ; Azure Automation: Authenticating to Azure using Azure Active Directory

In summary, the steps you will need to take ;

  • Create an additional user in your Azure Active Directory
    2015-01-27 08_15_14-Active Directory - Windows Azure
  • Add the user as a co-administrator to your account
    2015-01-27 08_13_21-Settings - Windows Azure

It’s also advised to note down both the full username (e.g. username@account.onmicrosoft.com) and the password you have assigned. After the creation, be sure to log in with the account. You will be asked to change your password. If you “forget” (too lazy huh?) to do this step, you will get an authentication error when trying to use this account for your automations (So yes, I tried to be lazy too…).

 

Configure two runbooks ; “stop all servers” & “start all servers”

In this phase, we’ll do the following

  • Create the Automation account (“folder”) under which the Runbooks will be stored
  • Create a “start all servers” runbook from the gallery
  • Create a “stop all servers” runbook from the gallery

 

Browse to “Automation”, select “Runbook” and then choose “From Gallery”

2015-01-27 08_21_38-Automation - Windows Azure

 

In the gallery, go to “VM Lifecycle Management”, and select “Azure Automation Workflow to Schedule starting of all Azure Virtual Machines”

2015-01-27 08_22_12-Automation - Windows Azure

Press next and review the code. The code is pretty straightforward… But we’ll get into that later on.

2015-01-27 08_22_29-Automation - Windows Azure

Now enter the name of your runbook, and choose “Create a new automation account”. Give the account a name and choose your subscription & region.

2015-01-27 08_23_19-Automation - Windows Azure

Now we’ll repeat the process for the “stop all servers” runbook.

2015-01-27 08_28_22-Automation - Windows Azure 2015-01-27 08_28_37-Automation - Windows Azure 2015-01-27 08_28_49-Automation - Windows Azure

Now browse back to the “Automation” screen ;

2015-01-27 08_29_51-Automation - Windows Azure

Before we can go on with these steps, we’ll need to add our user to the “Assets” of our “Automation Account”. Browse to “Assets” and select “Add settings”.

2015-01-28 10_43_07-Automation - Windows Azure

Select “Add credential”… Then use “Windows Powershell Credential” as “Credential Type” and name the credential.

2015-01-28 10_43_34-Automation - Windows Azure

Now enter the user information you noted down earlier… and press save.

2015-01-28 10_44_32-Automation - Windows Azure
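
A small aside: the name you give this credential asset is how your runbooks will reference it later on. Inside a runbook that boils down to something along these lines (“AutomationUser” is just an example name) ;

# "AutomationUser" is an example asset name - use whatever you named the credential above
$cred = Get-AutomationPSCredential -Name "AutomationUser"
Add-AzureAccount -Credential $cred   # authenticate against Azure with that account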

You are now good to go!

2015-01-28 10_42_48-Automation - Windows Azure

Select “Runbooks”, now you can see both runbooks we just created.

 

2015-01-27 08_30_09-Automation - Windows Azure

Select the “Stop-AllAzureVM” runbook, adjust the two parameters and press save ;

  • -Name “username@domain.onmicrosoft.com”
  • -Subscriptionname “Subscription Name”

2015-01-27 08_30_48-Automation - Windows Azure

Select the “Start-AllAzureVM” runbook, adjust the three parameters and press save (a rough sketch of what these runbooks do follows below) ;

  • -Name “username@domain.onmicrosoft.com”
  • -Subscriptionname “Subscription Name”
  • -Name “Your Most Important Server”

2015-01-27 08_33_48-Automation - Windows Azure
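
Under the hood, both gallery runbooks follow the same pattern: authenticate with the credential asset, select the subscription, then loop over the virtual machines. The snippet below is a rough, untested sketch of that pattern (cmdlet names come from the classic Azure module of that era; the actual gallery code differs in detail), so treat it as an illustration rather than the published runbook ;

workflow Start-AllAzureVM
{
    param (
        [string]$CredentialAssetName = "username@domain.onmicrosoft.com",
        [string]$SubscriptionName    = "Subscription Name",
        [string]$MostImportantServer = "Your Most Important Server"
    )

    # Authenticate with the credential asset we stored earlier
    $cred = Get-AutomationPSCredential -Name $CredentialAssetName
    Add-AzureAccount -Credential $cred
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Start the "most important" server first (assuming its cloud service carries the same name)
    Start-AzureVM -ServiceName $MostImportantServer -Name $MostImportantServer

    # Then start whatever is not running yet
    $vms = Get-AzureVM
    foreach ($vm in $vms) {
        if ($vm.Status -ne "ReadyRole") {
            Start-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name
        }
    }
}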

What did we just do for both scripts? We entered the user account & subscription under which the script will be executed. This is a mandatory step, and understandably so. Now let us test the “Start-AllAzureVM” runbook… I’ve prepared two virtual machines, which are currently shut down.

2015-01-27 08_34_03-Virtual machines - Windows Azure

So we’ll press “Test” on the runbook…

2015-01-27 08_34_20-Automation - Windows Azure

And yes, we are sure. Azure Automation will save the runbook one more time to be safe.

2015-01-27 08_34_33-Automation - Windows Azure

 

The output pane will show the status “starting”.

2015-01-27 08_34_52-Automation - Windows Azure

And it will change to “running” after a while.

2015-01-27 08_35_40-Automation - Windows Azure.

Once you see the output below, you will know that you have been authenticated. So all our hard work with creating the user paid off! If you do not see this, that is the part you should be debugging…

2015-01-27 08_35_56-Automation - Windows Azure

Suddenly our “most important server” will be showing the status “Starting”…

2015-01-27 08_36_31-Virtual machines - Windows Azure

 

And the output pane will verify this status!

2015-01-27 08_36_41-Automation - Windows Azure

So basically, it is safe to say that our script works. Let’s publish the runbooks so that we can schedule them later on.


 

For each runbook, press the “publish”-button

2015-01-27 08_48_32-Automation - Windows Azure

We are sure, and you will see the runbook shift from “draft” to “published”.

 

2015-01-27 08_48_59-Automation - Windows Azure

Congrats so far! We are now ready to schedule those babies!

 

Schedule those runbooks

So which steps will we be doing in this phase?

  • Create two schedules ; “start of business day” & “end of business day”
  • Attach the “start” runbook to the “start of business day” schedule
  • Attach the “stop” runbook to the “end of business day” schedule

 

Let us start creating the two schedules ;

 

Go to our “Automation Account” and select “Assets”. Here you press the “Add Setting”-button.

2015-01-27 08_54_49-Automation - Windows Azure 2015-01-27 08_55_04-

Choose “Add Schedule”

2015-01-27 08_55_16-Automation - Windows Azure

Enter the name…

2015-01-27 08_55_28-Automation - Windows Azure

The schedule…

2015-01-27 08_56_14-Automation - Windows Azure

Rinse & repeat…

2015-01-27 08_58_01-Automation - Windows Azure

Now we have both schedules: one that will run at 08:00 and another that will run at 17:00 (5 pm). Now let’s link our runbooks…

Go to our “Automation Account”, and select “Runbooks”. Click on one of them.

2015-01-27 09_01_15-Automation - Windows Azure

Go to “Schedule”, and press “Link to an existing schedule”.

2015-01-27 09_01_29-Automation - Windows Azure

Select the schedule…

2015-01-27 09_01_41-Automation - Windows Azure

And you will see the schedule attached.

 

2015-01-27 09_02_04-Automation - Windows Azure

Rinse & repeat for the other one.
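
If you’d rather script these last steps than click through the portal, the classic Azure PowerShell module also exposes Automation cmdlets for it. Consider the snippet below a sketch only: the cmdlet and parameter names are assumptions from memory (some module versions use slightly different names, e.g. -RunbookName), so verify them with Get-Help before relying on this ;

# Sketch only - cmdlet/parameter names are assumptions, verify against your module version
$account = "MyAutomationAccount"

New-AzureAutomationSchedule -AutomationAccountName $account -Name "Start of business day" -StartTime (Get-Date "08:00").AddDays(1) -DayInterval 1
New-AzureAutomationSchedule -AutomationAccountName $account -Name "End of business day" -StartTime (Get-Date "17:00").AddDays(1) -DayInterval 1

Register-AzureAutomationScheduledRunbook -AutomationAccountName $account -Name "Start-AllAzureVM" -ScheduleName "Start of business day"
Register-AzureAutomationScheduledRunbook -AutomationAccountName $account -Name "Stop-AllAzureVM" -ScheduleName "End of business day"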

 

Summary

With the power of automation & a gallery of pre-made runbooks, we were able to save our business tons of money by only running the servers during business hours. Be aware that the above example does not account for holidays / weekends (see the sketch below)… In addition, the cost saving is “limited” to the “compute”, as the storage of your machines will remain “active” (on disk) and thus billed.
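
As a rough illustration of that weekend caveat: a small guard at the top of the “start” runbook could keep the environment down on Saturday and Sunday. This is an untested sketch (holidays would still require a list of dates) ;

# Untested sketch: keep the environment down during the weekend
$today = (Get-Date).DayOfWeek
if ($today -eq "Saturday" -or $today -eq "Sunday") {
    Write-Output "Weekend - not starting the environment."
}
else {
    # ... existing start-up logic of the runbook goes here ...
}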

Microsoft Azure : How-to setup a site-to-site VPN using OpenSwan (on a Telenet SOHO subscription)

Objective of the day?

We’ll be setting up an IPSec VPN tunnel between Microsoft Azure and a development/management environment using the commodity internet connection of a Belgian ISP.

Azure-Site_to_Site_VPN

What will our test environment look like?

  • Private Network : 192.168.0.0/24
  • System running Openswan : 192.168.0.226
  • Private Internet Connection : 81.82.83.84
  • Azure VPN Gateway : 104.40.149.247
  • Test System on Azure : 10.0.0.4
  • Azure Network : 10.0.0.0/24

The steps we’ll be going through?

 

  • Configure Virtual Network on Azure
  • Configure VPN Gateway
  • Configure Openswan
  • Configure NAT Rules on the ISP (Telenet) Router
  • Activate IPSec VPN Tunnel
  • Test Connectivity

Continue reading “Microsoft Azure : How-to setup a site-to-site VPN using OpenSwan (on a Telenet SOHO subscription)”

Database variants explained : SQL or NoSQL? Is that really the question?

A first glance beyond the religion

When taking a look at the database landscape, one can only acknowledge that there has been a lot of commotion about “SQL vs NoSQL” in recent years. But what is it really about?

SQL, which stands for “Structured Query Language”, has been around since the seventies and is commonly used in relational databases. It consists of a data definition language to define the structure and a data manipulation language to alter the data within that structure. An RDBMS therefore has a defined structure, and it has been a common choice for storing information in new databases used for financial records, manufacturing and logistical information, personnel data, and other applications since the 1980s.

1401269083847

NoSQL, which stands for “Not only SQL”, has departed from the standard relational model since its first introduction in the nineties. The primary focus of these databases was performance, or a given niche, with less focus on consistency/transactions. These databases provide a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, horizontal scaling, and finer control over availability. The data structures used by NoSQL databases (e.g. key-value, graph, or document) differ from those used in relational databases, making some operations faster in NoSQL and others faster in relational databases. The particular suitability of a given NoSQL database depends on the problem it must solve.

So it depends on your need…

Do you want NoSQL, NoSQL, NoSQL or NoSQL?

NoSQL comes in various flavors. The most common types of NoSQL databases (as portrayed by Wikipedia) ;

There have been various approaches to classify NoSQL databases, each with different categories and subcategories. Because of the variety of approaches and overlaps it is difficult to get and maintain an overview of non-relational databases. Nevertheless, a basic classification is based on data model. A few examples in each category are:

  • Column: Accumulo, Cassandra, Druid, HBase, Vertica
  • Document: Clusterpoint, Apache CouchDB, Couchbase, MarkLogic, MongoDB, OrientDB
  • Key-value: Dynamo, FoundationDB, MemcacheDB, Redis, Riak, FairCom c-treeACE, Aerospike, OrientDB
  • Graph: Allegro, Neo4J, InfiniteGraph, OrientDB, Virtuoso, Stardog
  • Multi-model: OrientDB, FoundationDB, ArangoDB, Alchemy Database, CortexDB

Column

A column of a distributed data store is a NoSQL object of the lowest level in a keyspace. It is a tuple (a key-value pair) consisting of three elements:

  • Unique name: Used to reference the column
  • Value: The content of the column. It can have different types, like AsciiType, LongType, TimeUUIDType, UTF8Type among others.
  • Timestamp: The system timestamp used to determine the valid content.

Example

{
    street: {name: "street", value: "1234 x street", timestamp: 123456789},
    city: {name: "city", value: "san francisco", timestamp: 123456789},
    zip: {name: "zip", value: "94107", timestamp: 123456789},
}

Document

A document-oriented database is designed for storing, retrieving, and managing document-oriented information, also known as semi-structured data. The central concept of a document-oriented database is that Documents, in largely the usual English sense, contain vast amounts of data which can usefully be made available. Document-oriented database implementations differ widely in detail and functionality. Most accept documents in a variety of forms, and encapsulate them in a standardized internal format, while extracting at least some specific data items that are then associated with the document.

Example

<Article>
   <Author>
       <FirstName>Bob</FirstName>
       <Surname>Smith</Surname>
   </Author>
   <Abstract>This paper concerns....</Abstract>
   <Section n="1"><Title>Introduction</Title>
       <Para>...</Para>
   </Section>
 </Article>

Key-Value

A key-value store (also known as an associative array, map, symbol table, or dictionary) is an abstract data type composed of a collection of key/value pairs, such that each possible key appears just once in the collection.

Example

{
    "Pride and Prejudice": "Alice",
    "The Brothers Karamazov": "Pat",
    "Wuthering Heights": "Alice"
}

Graph

A graph database is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A graph database is any storage system that provides index-free adjacency. This means that every element contains a direct pointer to its adjacent elements and no index lookups are necessary. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.

Example

GraphDatabase_PropertyGraph

Multi-model

Most database management systems are organized around a single data model that determines how data can be organized, stored, and manipulated. In contrast, a multi-model database is designed to support multiple data models against a single, integrated backend. Document, graph, relational, and key-value models are examples of data models that may be supported by a multi-model database.

And what flavor do I want?

Each type and implementation has its own advantages… The following chart from Shankar Sahai provides a good overview ;

nosql-comparison-table

Any other considerations I should take into account?

Be wary that most implementations were not designed around consistency & integrity, but rather around performance. Transactions and referential integrity are not supported by most implementations. High availability designs (including on a geographic level) are possible with some implementations, though this often implies a performance impact (as one would expect).

Also check out the research made by Altoros ;

5. Conclusion
As you can see, there is no perfect NoSQL database. Every database has its advantages and disadvantages that become more or less important depending on your preferences and the type of tasks.
For example, a database can demonstrate excellent performance, but once the amount of records exceeds a certain limit, the speed falls dramatically. It means that this particular solution can be good for moderate data loads and extremely fast computations, but it would not be suitable for jobs that require a lot of reads and writes. In addition, database performance also depends on the capacity of your hardware.

They did a very decent job in performance testing various implementations!

2015-01-21 09_08_23-A_Vendor_independent_Comparison_of_NoSQL_Databases_Cassandra_HBase_MongoDB_Riak.

Web Development : A step up with Automated Deployment

Developing a website… ; Open up “notepad++”, browse to your web server via FTP and edit the files. Then refresh to see the changes…

Sounds familiar? Probably… It’s a very straightforward and easy process. The downside, however, is that you have no tracking of your changes (version control) and that the process is pretty manual. So this becomes a problem when you aren’t the only one on the job or if something goes wrong.

So let’s step it up and introduce “version control”… Now we have an overview of all the revisions we made to our code and we are able to revert back to any of them. Yet suddenly, we need to do a lot more to get our code onto the web server. This brings us to the point where we want a kind of helper that does the “deployment” for us.

The basic process
automated-web-development-kvaes.be

  • Local Development : The development will happen here. Have fun… When you (think you) are happy with what you have produced, you update the files via your version control system.
  • Source Repository : The source repository will contain all the versions of your code. Here you can configure it to send a notification to your deployment system whenever a new version has been introduced.
  • Deployment System : The deployment system will query the source repository and retrieve the latest code. This code will be packaged, transmitted and deployed onto the target system(s).
  • Target Systems : The systems that will actually host your code and deliver the (web) service!

Real Life Example?
Ingredients

  • A (private) Git repository, e.g. at BitBucket
  • A local Git client, e.g. SourceTree
  • A deployment service, e.g. DeployHQ
  • A target (web) server to deploy to

Recipe

  • Create a private repository at BitBucket
  • Pull/push the repository between BitBucket & your local SourceTree
  • In BitBucket, go to “Settings”, “Deployment Keys” and generate a key for your automation. Copy it to your clipboard…
    2015-01-12 15_33_53-kvaes _ kvaes.be - 2015 _ Admin _ Deployment keys — Bitbucket
  • In DeployHQ, go to “Settings”, “General Settings” and copy the key into the “Public Key Authentication” textbox.
    2015-01-12 15_31_19-Website 2015 - LogiTouch - Deploy
  • In DeployHQ, go to “Settings”, “Servers & Group” and create a new server.
    2015-01-12 15_36_53-Website 2015 - LogiTouch - Deploy
  • In the same screen, enable “Auto Deploy” and copy the hook URL.
    2015-01-12 15_38_19-Website 2015 - LogiTouch - Deploy
  • Now go to “Settings” in BitBucket, and then “Hooks”. Add a “POST” hook containing the hook URL you just copied.
    2015-01-12 15_39_11-kvaes _ kvaes.be - 2015 _ Admin _ Hooks — Bitbucket
  • Now every time you commit & push from your workstation, the code will be deployed to your server (see the example below)!
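
To make that last bullet concrete, the day-to-day flow once the hook is in place looks roughly like this (plain git from any shell; the commit message and branch name are just examples) ;

# Example only - commit message and branch name are placeholders
git add .
git commit -m "Tweak homepage styling"
git push origin master   # the push triggers the POST hook, DeployHQ picks it up and deploys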

In fact, this is the mechanism I utilize for my own (hobby) development projects. An example hereof is my own homepage, which is deployed via the system described above.

The DTAP-Street : a phased approach to a development / deployment cycle

The acronym DTAP finds its origin in the words Development, Testing, Acceptance and Production. The DTAP-street is a commonly accepted method to have a phased approach to software development / deployment.

A typical flow works as follows :

  • Development – This environment is where the software is developed. It is the first environment that is used. Changes are very frequent here, as this is the first area where creativity is forged into a product.
  • Test – A developer is (hopefully) not alone. In the test environment, the complete code base is merged and forged into one single product. The first attempts at standardization and alignment towards the future production environment are made here.
  • Acceptation – Once the development team feels that the product is ready, it will be deployed to acceptance. This is a look-alike of the production environment and is used by operations as a staging area for production releases.
  • Production – The real deal… Here the product surely needs to be ready for prime-time.

dtap-kvaes.be-271014

Sometimes the following are also added ;

  • Education / Training – Sometimes a dedicated environment is needed where people can test drive the software in a safe sandbox. For efficiency reasons, this environment is oftentimes shared with acceptation.
  • Backup / Disaster Recovery – Disasters can happen… Therefore some disaster recovery plans may rely on a dedicated backup / disaster recovery location.
  • Integration – An environment that is sometimes located between “Test” & “Acceptance” as an intermediate step to test certain partner integrations. Just as with the “education” environment, this environment is oftentimes shared with acceptation.

What are the most commonly used formations?

  • Live – Production – Many companies rely solely on a production environment. The risk reduction is often neglected in favor of the cost benefit of having one environment.
  • Staging – Production/Test – If no real customizations are done to the implemented software, then two environments may suffice.
  • DTAP – Development/Test/Acceptation/Production – Once customizations hit, a full DTAP-street is needed to reduce the risks involved with software development.
  • DTAPB – Development/Test/Acceptation/Production/Backup – This is an enhanced DTAP-street that is capable of doing a disaster recovery. (Sidenote ; The Test/Development environment is often shared with the backup location. This provides the advantage that the resources of the Test/Development can be sacrificed during a disaster.)

What Code / Data flows occur between the environments?

  • Software Versions – Software releases go from Development to Test to Acceptation to Production… The timing varies with the chosen release management cycle, though typical times are as follows ; Development (Continuous Builds), Test (Daily Build), Acceptation (Once per quarter, three weeks before production), Production (Once per quarter)
  • Data – Data flows in the opposite direction of software versions. Data is taken from production and copied to Acceptance / Test / Development. Depending on the environment (and its relative security compliance), the data may be anonymized or even reduced to provide a representative production workload of a limited size.