Reverse engineering the “AADLoginForLinux” in order to tweak proactive user configuration

Introduction

Last summer I posted about taking a look under the hood of the Azure Active Directory integration for a Linux virtual machine. Today, let’s take it a bit further… What if we wanted to pre-provision a set of UIDs (User IDs) & GIDs (Group IDs) on a range of virtual machines for cross-machine consistency? Say we want to use an NFS share and keep the same UID/GID across all those boxes. Can we do that with the AAD extension? If so, how? Let’s hope we can… Otherwise this will be a rather short blog post.
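To make the goal concrete, here is a purely hypothetical sketch (made-up account, made-up numeric IDs) of what cross-machine consistency would look like ;

# The same account should resolve to the same numeric UID/GID on every VM
# that mounts the shared NFS export.
vm-we$ getent passwd alice@contoso.com
alice@contoso.com:x:1556208321:1556208321::/home/alice@contoso.com:/bin/bash
vm-ne$ getent passwd alice@contoso.com
alice@contoso.com:x:1556208321:1556208321::/home/alice@contoso.com:/bin/bash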

 

Disclaimer

This post is based upon my personal experience reverse engineering how this extension works. It is by no means a support statement. If you’re a technical nut (like myself) and want to know how you can tweak this yourself… then this post is for you. 😉

Continue reading “Reverse engineering the “AADLoginForLinux” in order to tweak proactive user configuration”

Taking a look under the hood of the Linux VM Authentication

Introduction

Today we’ll do a deep-dive into how you can log into an Azure Linux VM with Azure Active Directory (AAD). In essence, we’ll go through the following documentation flow, and then take a look at how it works under the hood.

 

Part one : “Creation”

The part on creating & integrating the VM is VERY straightforward…

  • Create a resource group
  • Create a Linux virtual machine
  • Add the “Azure AD login VM”-extension

And that’s it! Really, that’s it…
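For reference, a minimal sketch of those three steps with the Azure CLI. The resource names are placeholders, and the publisher/extension names below are what I know them to be for the AADLoginForLinux extension, so double-check them against the current documentation ;

az group create --name myResourceGroup --location westeurope

az vm create --resource-group myResourceGroup --name myLinuxVM --image UbuntuLTS --admin-username azureuser --generate-ssh-keys

az vm extension set --publisher Microsoft.Azure.ActiveDirectory.LinuxSSH --name AADLoginForLinux --resource-group myResourceGroup --vm-name myLinuxVM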

Continue reading “Taking a look under the hood of the Linux VM Authentication”

Managing Linux hosts with Desired State Configuration via Azure Automation

Introduction

For this post I’ll be assuming you know the basics of Desired State Configuration (or DSC for short). The objective today is to test what Azure Automation can bring to the table in terms of managing Linux hosts. We all know about Puppet, Chef, Ansible, … but is Azure Automation a viable alternative?


 

First things first… Azure Automation Account

When getting started with DSC on Linux, check out this documentation page as a reference. First up, we’ll create an Azure Automation account.


Copy one of the keys and the URL, as we’ll need them to manually register our “OnPremise” host.
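On the Linux host itself, that manual registration boils down to something like the following sketch (assuming the DSC for Linux agent is already installed; the key and URL are the placeholders you just copied) ;

sudo /opt/microsoft/dsc/Scripts/Register.py &lt;automation-account-key&gt; &lt;automation-account-url&gt;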

Continue reading “Managing Linux hosts with Desired State Configuration via Azure Automation”

Azure File Share : Issue mounting outside of the Azure region from Ubuntu Linux

Today I was setting up a deployment with two hosts ;

  • One in West Europe (“WE”)
  • One in North Europe (“NE”)

The objective was to have a shared mount point between both. So I created a storage account in the West Europe region, created a file share in it, and mounted it on the VM located in WE. However, when using the exact same config in NE, I got the following error message ;


mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
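For context, both hosts used a mount command of this general form (account, share and key are placeholders). Mounting Azure Files from outside the storage account’s region requires SMB 3.0 with encryption, which the Ubuntu kernels of that era did not yet support, and that is the usual culprit for this error ;

sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/myshare -o vers=2.1,username=mystorageaccount,password=&lt;storage-account-key&gt;,dir_mode=0777,file_mode=0777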

Continue reading “Azure File Share : Issue mounting outside of the Azure region from Ubuntu Linux”

Rancher : Docker Lifecycle Management – Or how to upgrade containers?

Introduction

It’s all fun & games to create & deploy containers. And the “pets vs cattle” thingie is also cool… But what about lifecycle management? That’s what we’ll be handling today!

What will we be doing today?

  • Create a small dummy container
  • Set up a source repository (at Bitbucket) for that dummy container
  • Set up an automated build (linked to the source repository) on your Docker Hub repository
  • Deploy a service on Rancher
  • Update the source
  • Upgrade the service to the latest version
  • Enjoy life even more!
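As a rough sketch of the loop the list above describes (stack and service names are made up, the upgrade can just as well be driven from the Rancher UI, and the CLI variant assumes rancher-compose is configured against your environment) ;

# Change the dummy container's source and push; the Docker Hub automated build rebuilds the image.
git commit -am "update dummy page" && git push

# Once the new image is available, upgrade the running service to it.
rancher-compose -p dummy-stack up --upgrade dummyservice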

What will already need to be set up?

Continue reading “Rancher : Docker Lifecycle Management – Or how to upgrade containers?”

Microsoft Azure : Benchmark Tests – Storage – How do the different series relate to each other?

Azure currently has different “series” of machines. The A-series are seen as “general purpose” machines, while the D-series are targeted towards compute optimization. In the US, the G-series have even seen the light! Today I want to know what effect this has on storage performance… Typical IT organizations are worried about storage performance in the cloud, as their ERP/BI implementation is “quite eager” to obtain the maximum storage performance.

So what will we be covering today?

  • A bit of theory concerning the differences
  • Test Environment Explained
  • Test Method Explained
  • Display of test results
  • Conclusion / analysis of the test results

 

A bit of theory concerning the differences

What does Microsoft say about their series ;

  • General purpose compute (A) – Basic tier : An economical option for development workloads, test servers, and other applications that don’t require load balancing, auto-scaling, or memory-intensive virtual machines.
  • General purpose compute (A) – Standard tier : Offers the most flexibility. Supports all virtual machine configurations and features.
  • Optimized compute (D) : 60% faster CPUs, more memory, and local SSD – D-series virtual machines feature solid state drives (SSDs) and 60% faster processors than the A-series and are also available for web or worker roles in Azure Cloud Services. This series is ideal for applications that demand faster CPUs, better local disk performance, or more memory.
  • Performance optimized compute (G) : unparalleled computational performance with the latest CPUs, more memory, and more local SSD – G-series virtual machines feature the latest Intel® Xeon® processor E5 v3 family, two times more memory and four times more solid state drive storage (SSD) than the D-series. The G-series provides unparalleled computational performance, more memory and more local SSD storage than any current VM size in the public cloud, making it ideal for your most demanding applications.

Sidenote ; Azure has also released “DS” (“Premium Storage“). We won’t be looking into this area, as it is currently still in preview.

Today we’ll be checking what we can get out of those machines via benchmarking. Be aware that Microsoft is transparent about the IOPS delivered by each machine. Be sure to check out the support article “Virtual Machine and Cloud Service Sizes for Azure“. Depending on the type of virtual machine, you can attach a maximum number of disks. Per disk, you are granted a given number of IOPS, and that number differs by “series”. An “A – Basic” is granted 300 IOPS per disk. An “A – Standard”, “D” & “G” are granted 500 IOPS per disk, whereas the “DS” is granted 1600 IOPS per disk.
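A quick worked example of what those per-disk numbers mean (assuming an A4-sized machine, which allows up to 16 data disks) ;

# Standard tier : 16 data disks x 500 IOPS = 8000 IOPS (theoretical ceiling)
# Basic tier    : 16 data disks x 300 IOPS = 4800 IOPS (theoretical ceiling)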

 

Test Environment Explained

We’ll be creating four machines today ;

  • TEST-BSC-A1 : A1 Basic (West Europe)
  • TEST-STD-A1 : A1 Standard (West Europe)
  • TEST-STD-D1 : D1 Standard (West Europe)
  • TEST-STD-G1 : G1 Standard (West US*)

Each machine will be installed with Ubuntu 14.04, using the Azure image of 23/01/2015. Each system will then be provided with two benchmarking tools : Bonnie++ and IOzone.

These packages will be installed from the Azure Ubuntu repositories as follows. First, make sure to uncomment all “multiverse” repositories.

sudo vi /etc/apt/sources.list

Then update the package list and install both tools ;

sudo apt-get update && sudo apt-get install bonnie++ iozone3

 

(Disclaimer : For the test with the G1, I created an additional disk, as the base OS disk was not large enough to fit the test file. Bonnie++ advises creating a test file that is twice the size of the memory, to counter caching mechanisms. / Update : One error I made was overlooking that host caching is disabled by default on that disk, so some results on the G1 are not aligned with the other tests. This is only relevant for the Bonnie++ tests, not for the IOzone tests.)
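For completeness, preparing that extra disk looked roughly like this (a sketch assuming the attached data disk shows up as /dev/sdc) ;

# Format and mount the additional data disk, then point Bonnie++ at it
sudo mkfs.ext4 /dev/sdc
sudo mkdir -p /data
sudo mount /dev/sdc /data
sudo chown $(whoami) /data
bonnie++ -d /data > /tmp/bonnie-g1.txt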

 

Test Method Explained

Now we are ready to go… On each system the following commands were executed ;

# Bonnie++ against the OS disk (test files in /tmp)
bonnie++ -d /tmp > /tmp/bonnie.txt

# IOzone : 5 concurrent processes, 4k records, 100 MB files, against the OS disk (/tmp)
iozone -R -l 5 -u 5 -r 4k -s 100m -F /tmp/f1 /tmp/f2 /tmp/f3 /tmp/f4 /tmp/f5 > /tmp/iozone_results.txt

# IOzone : the same run against the local (temporary) disk mounted at /mnt
iozone -R -l 5 -u 5 -r 4k -s 100m -F /mnt/f1 /mnt/f2 /mnt/f3 /mnt/f4 /mnt/f5 > /tmp/iozone_results-mnt.txt

So what are we basically going to do? A good description of what IOzone does can be found in the article “I Feel the Need for Speed: Linux File System Throughput Performance, Part 1” in Linux Magazine. The highlights ;

IOzone

IOzone is open-source and written in ANSI C. It is capable of single thread, multi-threaded, and multi-client testing. The basic idea behind IOzone is to break up a file of a given size into records. Records are written or read in some fashion until the file size is reached. Using this concept, IOzone has a number of tests that can be performed:

  • Write : This is a fairly simple test that simulates writing to a new file. Because of the need to create new metadata for the file, the writing of a new file can often be slower than rewriting an existing file. The file is written using records of a specific length (either specified by the user or chosen automatically by IOzone) until the total file length has been reached.

  • Re-write : This test is similar to the write test but measures the performance of writing to a file that already exists. Since the file already exists and the metadata is present, it is commonly expected for the re-write performance to be greater than the write performance. This particular test opens the file, puts the file pointer at the beginning of the file, and then writes to the open file descriptor using records of a specified length until the total file size is reached. Then it closes the file, which updates the metadata.

  • Read : This test reads an existing file. It reads the entire file, one record at a time.

  • Re-read : This test reads a file that was recently read. This test is useful because operating systems and file systems will maintain parts of a recently read file in cache. Consequently, re-read performance should be better than read performance because of the cache effects. However, sometimes the cache effect can be mitigated by making the file much larger than the amount of memory in the system.

  • Random Read : This test reads a file with the accesses being made to random locations within the file. The reads are done in record units until the total reads are the size of the file. The performance of this test is impacted by many factors, including the OS cache(s), the number of disks and their configuration, disk seek latency, and disk cache, among others.

  • Random Write : The random write test measures the performance when writing a file with the accesses being made to random locations within the file. The file is opened to the total file size and then the data is written in record sizes to random locations within the file.

  • Backwards Read : This is a unique file system test that reads a file backwards. There are several applications, notably MSC Nastran, that read files backwards. There are some file systems and even OS’s that can detect this type of access pattern and enhance the performance of the access. In this test a file is opened and the file pointer is moved 1 record forward, and then the file is read backward one record. Then the file pointer is moved 2 records backward in the file, and the process continues.

  • Record Rewrite : This test measures the performance when writing and re-writing a particular spot within a file. The test is interesting because it can highlight “hot spot” capabilities within a file system and/or an OS. If the spot is small enough to fit into the various cache sizes (CPU data cache, TLB, OS cache, file system cache, etc.), then the performance will be very good.

  • Strided Read : This test reads a file in what is called a strided manner. For example, you could read data starting at a file offset of zero, for a length of 4 KB, then seek 200 KB forward, then read for 4 KB, then seek 200 KB, and so on. The constant pattern is important and the “distance” between the reads is called the stride (in this simple example it is 200 KB). This access pattern is used by many applications that are reading certain data structures. This test can highlight interesting issues in file systems and storage because the stride could cause the data to miss any striping in a RAID configuration, resulting in poor performance.

  • Fwrite : This test measures the performance of writing a file using the library function fwrite(). It is a binary stream function (examine the man pages on your system to learn more). Equally important, the routine performs a buffered write operation. This buffer is in user space (i.e. not part of the system caches). This test is performed with a record length buffer being created in a user-space buffer and then written to the file. This is repeated until the entire file is created. This test is similar to the “write” test in that it creates a new file, possibly stressing the metadata performance.

  • Frewrite : This test is similar to the “rewrite” test but uses the fwrite() library function. Ideally the performance should be better than “Fwrite” because it uses an existing file, so the metadata performance is not stressed in this case.

  • Fread : This is a test that uses the fread() library function to read a file. It opens a file, and reads it in record lengths into a buffer that is in user space. This continues until the entire file is read.

  • Freread : This test is similar to the “reread” test but uses the fread() library function. It reads a recently read file, which may allow file system or OS cache buffers to be used, improving performance.

When taking a look at Bonnie++, check out this article by Textuality. My objective is to gain proper insight into the latencies with Bonnie++ and use IOzone for the actual throughput.

 

Display of test results

Latency

(Charts : Bonnie++ latency results)

Throughput

(Charts : throughput results)

Download Raw Results Files

Conclusion / analysis of the test results

So what have we learned today?

  • The latency of the A-series is significantly higher than that of the D/G-series.
  • There is a performance difference between the “Basic” and “Standard” tiers of the A-series.
  • Whilst the D-series outperform the A-series, the G-series put all of the others in the shade.
  • There is a performance answer to every load… Just choose wisely!

 

Debugging your PulseAudio/Alsa sound via Nvidia HDMI?

Check out the following posts; they will surely help you!

For those who also use a Zotac GT 430 (under Ubuntu) ;

  • The following channel was a success :

    aplay -Dplughw:1,9 /usr/share/sounds/alsa/Front_Center.wav

  • Added “load-module module-alsa-sink device=plughw:1,9” to /etc/pulseaudio/default.pa
  • Changed “/usr/share/alsa/alsa.conf” as follows :

    defaults.ctl.card NVidia
    defaults.pcm.card NVidia
    defaults.pcm.device 9

Getting PHP syntaxing working in vi on Ubuntu

If you want the syntax highlighting to work with Vim on Ubuntu, then simply install the vim-full package:

sudo apt-get install vim-full

Edit the /etc/vim/vimrc file and uncomment (remove the leading “) the following line :

syntax on

Easy as that… (all you need is a few minutes and a bit of bandwidth to download the packages)

Linux Kernel 2.6.17 – 2.6.24.1 vmsplice Local Root Exploit


A proof of concept for a local root exploit affecting Linux kernels between version 2.6.17 and 2.6.24.1 has been released via ‘milw0rm’. I guess I won’t be the only one who says “feck…” to this.

$ gcc exploit.c -o exploit
$ whoami
heikki
$ ./exploit
———————————–
Linux vmsplice Local Root Exploit
By qaaz
———————————–
[+] mmap: 0x0 .. 0x1000
[+] page: 0x0
[+] page: 0x20
[+] mmap: 0x4000 .. 0x5000
[+] page: 0x4000
[+] page: 0x4020
[+] mmap: 0x1000 .. 0x2000
[+] page: 0x1000
[+] mmap: 0xb7d90000 .. 0xb7dc2000
[+] root
$ whoami
root
Kernel 2.6.22-14-generic

References:
LKML
milw0rm.com
Launchpad
Debian Bugs

The basics behind the Completely Fair Scheduler

The GNU/Linux kernel, version 2.6.23, comes with a modular scheduler core and a Completely Fair Scheduler (CFS), which is implemented as a scheduling module. If you’re interested in the workings of this scheduler, be sure to check out the following article at DevWorks. Below you can find an excerpt of the article, which is in my opinion the core of the article.

How CFS works
The CFS scheduler uses an appeasement policy that guarantees fairness. As a task gets into the runqueue, the current time is recorded, and while the process waits for the CPU, its wait_runtime value gets incremented by an amount depending on the number of processes currently in the runqueue. The priority values of different tasks are also considered while doing these calculations. When this task gets scheduled to the CPU, its wait_runtime value starts decrementing, and as soon as this value falls to such a level that another task becomes the new left-most task of the red-black tree, the current one gets preempted. This way CFS strives for the ideal situation where wait_runtime is zero!

Continue reading “The basics behind the Completely Fair Scheduler”