Azure Automation : Adding modules via PowerShell

Over the last few days I was troubleshooting an issue where I was unable to deploy modules to Azure Automation via PowerShell… What did I want to do? Add the xActiveDirectory module to my Automation Account.

So let’s look at the documentation of the “New-AzureRmAutomationModule” cmdlet ;

Specifies the URL of the .zip file that contains a module that this cmdlet imports.

Which would give something like…

$dscActiveDirectoryLink = "<URL of the xActiveDirectory .zip file>"
New-AzureRmAutomationModule -ResourceGroupName $ResourceGroupNameAutomationAccount -AutomationAccountName $automationAccountName -Name xActiveDirectory -ContentLink $dscActiveDirectoryLink

So what was my logic here? I went to the project website and took the latest release (bundled as a zip file). Sounds good, right? It failed… Every time I got the following error ;

Error extracting the activities from module xActiveDirectory- Extraction failed with the following error: Orchestrator.Shared.AsyncModuleImport.ModuleImportException: Cannot import the module of name xActiveDirectory-, as the module structure was invalid.

After a hint by Joe Levy, it struck me… The command was expecting a NuGet package! Underneath, this is also a zip file. So when obtaining that package and using it for the -ContentLink, everything went smoothly!


Update with Code Snippet (with help from Joe Levy!) ;


# Requires that authentication to Azure is already established before running
# Note: the PowerShell Gallery URLs and response properties below follow the
# standard Gallery v2 OData API (they were stripped from the original post)

param(
    [Parameter(Mandatory = $true)]
    [String] $ResourceGroupName,

    [Parameter(Mandatory = $true)]
    [String] $AutomationAccountName,

    [Parameter(Mandatory = $true)]
    [String] $ModuleName,

    # If not specified, the latest version will be imported
    [Parameter(Mandatory = $false)]
    [String] $ModuleVersion
)

# Search the PowerShell Gallery for the module
$Url = "https://www.powershellgallery.com/api/v2/Search()?`$filter=IsLatestVersion&searchTerm=%27$ModuleName $ModuleVersion%27&targetFramework=%27%27&includePrerelease=false&`$skip=0&`$top=40"
$SearchResult = Invoke-RestMethod -Method Get -Uri $Url

if(!$SearchResult) {
    Write-Error "Could not find module '$ModuleName' on PowerShell Gallery."
}
elseif($SearchResult.Count -and $SearchResult.Count -gt 1) {
    Write-Error "Module name '$ModuleName' returned multiple results. Please specify an exact module name."
}
else {
    $PackageDetails = Invoke-RestMethod -Method Get -Uri $SearchResult.id
    if(!$ModuleVersion) {
        $ModuleVersion = $PackageDetails.entry.properties.version
    }

    $ModuleContentUrl = "https://www.powershellgallery.com/api/v2/package/$ModuleName/$ModuleVersion"

    # Test if the module/version combination exists
    try {
        Invoke-RestMethod $ModuleContentUrl -ErrorAction Stop | Out-Null
        $Stop = $False
    }
    catch {
        Write-Error "Module with name '$ModuleName' of version '$ModuleVersion' does not exist. Are you sure the version specified is correct?"
        $Stop = $True
    }

    if(!$Stop) {

        # Find the actual blob storage location of the module by following the redirects
        do {
            $ActualUrl = $ModuleContentUrl
            $ModuleContentUrl = (Invoke-WebRequest -Uri $ModuleContentUrl -MaximumRedirection 0 -ErrorAction Ignore).Headers.Location
        } while($ModuleContentUrl -ne $Null)

        New-AzureRmAutomationModule `
            -ResourceGroupName $ResourceGroupName `
            -AutomationAccountName $AutomationAccountName `
            -Name $ModuleName `
            -ContentLink $ActualUrl
    }
}
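Invoking the snippet then looks something like this (the script filename and resource names are just placeholders for your own environment):

.\Import-GalleryModuleToAutomation.ps1 `
    -ResourceGroupName "rg-automation" `
    -AutomationAccountName "my-automation-account" `
    -ModuleName "xActiveDirectory"

As no -ModuleVersion is passed, the latest version from the gallery gets imported.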

Azure : Traffic Manager in Classic mode vs Resource Manager


Today I was setting up a Traffic Manager deployment in Resource Manager. I wanted a rather “simple” failover scenario where my secondary site would only take over when my primary site was down. As you might know, there are several routing methods, where “failover” is one ;

Failover: Select Failover when you have endpoints in the same or different Azure datacenters (known as regions in the Azure classic portal) and want to use a primary endpoint for all traffic, but provide backups in case the primary or the backup endpoints are unavailable.

Though I was surprised that the naming between the “classic mode” (“the old portal“) and “resource manager” (“the new portal“) were different!


“Classic Mode” / Service Management

So when taking a look at “classic mode”, we see three methods ;

[Screenshot: Traffic Manager - Microsoft Azure]

They are described fairly in-depth on the documentation page, though in short ;

  • Performance : You’ll be redirected to the closest endpoint (based on network response in ms)
  • Round Robin : The load will be distributed between all nodes. Depending on the weight of a node, one might get more or less requests.
  • Failover : A picking order will be in place. The highest ranking system alive will receive the requests.


“New Portal” / Resource Manager

When taking a look at “Resource Manager”, we’ll see (again) three methods ;

[Screenshot: Create Traffic Manager profile - Microsoft Azure]

Though the naming differs… When going into the technical details, it’s more a naming change than a technical one. The functionality is (give or take) the same. Where “Round Robin” previously had the option of weights (1-1000), this is now the focal point. Where “Failover” worked with an ordered list (visualization), you can now directly alter the “priority” (1-1000) of each endpoint.

The info when checking out the routing method from within the portal ;

  • Performance: Use this method when your endpoints are deployed in different geographic locations, and you want to use the one with the lowest latency.
  • Priority: Use this method when you want to select an endpoint which has highest priority and is available.
  • Weighted: Use this method when you want to distribute traffic across a set of endpoints as per the weights provided.



While the naming differs between the two stacks, the functionality remains the same ;

  • Performance didn’t get renamed
  • Round Robin became “weighted”
  • Failover became “priority”
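To illustrate the “priority” method on the Resource Manager side, here is a minimal sketch using the AzureRm Traffic Manager cmdlets (all resource names, DNS labels, and endpoint targets are made-up placeholders):

# Create a Traffic Manager profile using the "Priority" (formerly "Failover") routing method
$profile = New-AzureRmTrafficManagerProfile `
    -Name "demo-tm-profile" `
    -ResourceGroupName "rg-trafficmanager" `
    -TrafficRoutingMethod Priority `
    -RelativeDnsName "demo-tm-failover" `
    -Ttl 30 `
    -MonitorProtocol HTTP `
    -MonitorPort 80 `
    -MonitorPath "/"

# Lower priority value = preferred endpoint; the secondary only receives traffic when the primary is down
Add-AzureRmTrafficManagerEndpointConfig -EndpointName "primary" -TrafficManagerProfile $profile `
    -Type ExternalEndpoints -Target "primary.example.com" -EndpointStatus Enabled -Priority 1
Add-AzureRmTrafficManagerEndpointConfig -EndpointName "secondary" -TrafficManagerProfile $profile `
    -Type ExternalEndpoints -Target "secondary.example.com" -EndpointStatus Enabled -Priority 2

Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $profile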

Azure Resource Manager : Marketplace image requires Plan information in the request

Today I encountered an error I was unfamiliar with ;

New-AzureRmResourceGroupDeployment : 14:18:22 – Creating a virtual machine from Marketplace image requires Plan information in the request. OS disk name is osdisk.

After searching a bit, I encountered various posts regarding PowerShell deployment. Though, as I was using ARM templates (JSON), it was apparent that there is a difference between deploying regular OS images (like Windows, Ubuntu, …) and marketplace items (like Kemp, NetScaler, Barracuda, CheckPoint, …). The latter require an additional parameter ; “plan”.

Let’s take a look at the following code snippet…


    {
      "apiVersion": "2015-06-15",
      "type": "Microsoft.Compute/virtualMachines",
      "name": "[concat(variables('node1XVirtualMachineName'), copyindex(1))]",
      "plan": "[variables('node1XimagePlan')]",
      "copy": {
        "name": "virtualMachineLoop",
        "count": "[variables('node1XCount')]"
      },
      "location": "[resourceGroup().location]",
      "tags": {
        "displayName": "VirtualMachines"
      },
      "dependsOn": [
        "[concat('Microsoft.Storage/storageAccounts/', variables('vhdStorageName'))]",
        "[concat('Microsoft.Compute/availabilitySets/', variables('availabilitySet1XName'))]"
      ],
      "properties": {
        "availabilitySet": {
          "id": "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySet1XName'))]"
        },
        "hardwareProfile": {
          "vmSize": "[variables('node1XSize')]"
        },
        "osProfile": {
          "computerName": "[concat(variables('node1XVirtualMachineName'), copyIndex())]",
          "adminUsername": "[variables('node1XAdminUsername')]",
          "adminPassword": "[variables('node1XAdminPassword')]"
        },
        "storageProfile": {
          "imageReference": "[variables('node1XimageReference')]",
          "osDisk": {
            "name": "osdisk",
            "vhd": {
              "uri": "[concat('http://', variables('vhdStorageName'), '', 'osdisk', copyindex(), '.vhd')]"
            },
            "caching": "ReadWrite",
            "createOption": "FromImage"
          }
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('networkInterface1XNamePrefix'), copyindex()))]"
            }
          ]
        },
        "diagnosticsProfile": {
          "bootDiagnostics": {
            "enabled": true,
            "storageUri": "[concat('http://', variables('vhdStorageName'), '')]"
          }
        }
      }
    }

In a regular deployment we would only need the “imageReference” part of the storage profile. When working with marketplace items we’ll also need to add the “plan” element at the top of the resource. What’s the content of those two parameters? Let’s check the variables section…

    "node1XimageReference": {
      "publisher": "[variables('node1XimagePublisher')]",
      "offer": "[variables('node1XimageOffer')]",
      "sku": "[variables('node1XimageSku')]",
      "version": "[variables('node1XimageVersion')]"
    },
    "node1XimagePlan": {
      "name": "[variables('node1XimageSku')]",
      "product": "[variables('node1XimageOffer')]",
      "publisher": "[variables('node1XimagePublisher')]"
    }
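If you’re unsure which publisher/offer/SKU values to plug into those variables, you can query the marketplace catalogue with the AzureRm compute cmdlets. A sketch; the location, the “kemp” filter, and the offer name are just example values, not confirmed identifiers:

$location = "West Europe"

# List image publishers in a region, filtered on a (hypothetical) vendor name
Get-AzureRmVMImagePublisher -Location $location | Where-Object { $_.PublisherName -like "*kemp*" }

# Drill down: offers for a publisher, then SKUs for an offer
Get-AzureRmVMImageOffer -Location $location -PublisherName "kemptech"
Get-AzureRmVMImageSku -Location $location -PublisherName "kemptech" -Offer "vlm-azure"

The SKU value returned by the last call is what goes into the plan’s “name”.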

The documentation on this is quite scarce… Though the name for the plan isn’t something you can choose yourself: it’s the SKU of the marketplace item! I hope this helped, as it got me distracted from my end goal for a bit. 🙂

Azure : Debugging VPN Connectivity on Resource Manager

Debugging failed VPN tunnels can be quite annoying… Today we had an issue with a new deployment that had us on a wild goose chase for a while. So a quick post to give all of you some tracking points ;

  • The first VPN gateway that receives a packet in need of the tunnel will initiate the connection. In ARM you have no way to manually initiate the connection.
    As a side effect, the destination gateway is typically the one with the most useful information regarding the VPN connection. So when debugging, look towards that gateway.
    Therefore I would suggest starting a ping from an Azure VM (within the VNET) towards the local network. This will kickstart the connection process.
  • The diagnostics on the Azure side are quite “basic” and well hidden… Actually, the commands to get diagnostics are only available in “classic”-mode. Though you can work your way around it. Check out the following post for more information on getting diagnostics for the VNET gateway on Resource Manager.
  • With the change from “Classic” to “Resource Manager”, there was also a change in the naming of the VPN types. Previously we had “static” and “dynamic”. The “static” connection was “policy-based” and the “dynamic” one was “route-based”. When looking at the effect, a “route-based” deployment relies on IKEv2, where a “policy-based” deployment relies on IKEv1. This is VERY important to know, as it affects the number of tunnels you can build. In addition, there are a lot of VPN gateways that do not support IKEv2 (at this moment).
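One quick sanity check you can do from the Resource Manager side is verifying the VPN type and the connection status via PowerShell (gateway and connection names here are placeholders):

# Inspect the gateway; VpnType will be "RouteBased" (IKEv2) or "PolicyBased" (IKEv1)
$gateway = Get-AzureRmVirtualNetworkGateway -Name "vnet-gateway" -ResourceGroupName "rg-network"
$gateway.VpnType

# The connection object also exposes its status and traffic counters
Get-AzureRmVirtualNetworkGatewayConnection -Name "s2s-connection" -ResourceGroupName "rg-network" |
    Select-Object ConnectionStatus, EgressBytesTransferred, IngressBytesTransferred

If the counters stay at zero while the status claims “Connected”, look towards the on-premises side.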

Good luck troubleshooting!

Autoscaling Docker hosts on Azure with Virtual Machine Scale Sets & Rancher


A while back Mark Russinovich announced the public preview of the “Virtual Machine Scale Sets“;

VM Scale Sets are an Azure Compute resource you can use to deploy and manage a collection of virtual machines as a set. Scale sets are well suited for building large-scale services targeting big compute, big data, and containerized workloads – all of which are increasing in significance as cloud computing continues to evolve. Scale set VMs are configured identically, you just choose how many you need, which enables them to scale out and in rapidly and automatically.


So here we have a cloud service that enables us to autoscale our hosts based on the load of the underlying systems. Now imagine combining this feature with Docker… I don’t know about you, but I’m excited about this premise! When combining this with Rancher, you could build your own Containers-as-a-Service (CaaS)! Today we’ll be delving into the matter to see how to implement this…


The Design

A quick extract from the ARM Resource Visualizer when loading the ARM template I have prepared for this deep dive.

[Screenshot: Azure Resource Visualizer]

Continue reading “Autoscaling Docker hosts on Azure with Virtual Machine Scale Sets & Rancher”

How to manually size your Azure Virtual Machine Scale Sets (vmss)?

Whilst exploring the Azure Container Service preview, I ran into one of its key components: the “virtual machine scale set“. But as you can see, at the moment there isn’t much to configure via the portal…

[Screenshot: Properties - Microsoft Azure]

So how can you (manually) update your virtual machine scale set? Use the following ARM template :

[Screenshot: Parameters - Microsoft Azure]

Here you can enter the name of your virtual machine scale set and the new scale set capacity (or size). Execute the deployment and wait for a bit…

[Screenshot: Microsoft.Template deployment - Microsoft Azure]

Even before the deployment finishes, you’ll see that the capacity has been changed.

[Screenshot: Properties - Microsoft Azure]
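If you prefer PowerShell over redeploying a template, the same resize can be sketched with the AzureRm VMSS cmdlets (resource names and the target capacity are placeholders):

# Fetch the scale set, bump the capacity, and push the update
$vmss = Get-AzureRmVmss -ResourceGroupName "rg-containers" -VMScaleSetName "my-scale-set"
$vmss.Sku.Capacity = 5
Update-AzureRmVmss -ResourceGroupName "rg-containers" -VMScaleSetName "my-scale-set" -VirtualMachineScaleSet $vmss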

Azure Resource Manager : Deployment variants within one script

Today a quick post to show you that you can setup a deployment with several variants within the current template functions.
So for this post we’ll be combining the deployment for the Rancher server & nodes into one script.

2016-02-19 10_53_14-Parameters - Microsoft Azure

Continue reading “Azure Resource Manager : Deployment variants within one script”