Azure : Storage Explorer Preview


Another Build announcement was the “Storage Explorer Preview”.

  • Main Features
    • Mac OS X, Linux, and Windows versions
    • Sign in to view your Storage Accounts – use your Org Account, Microsoft Account, 2FA, etc.
    • Connect to Storage Accounts using:
      • Account name and Key
      • Custom endpoints (including Azure China)
      • SAS URI for Storage Accounts
    • ARM and Classic Storage support
    • Generate SAS keys for blobs, blob containers, queues, or tables
    • Connect to blob containers, queues, or tables with Shared Access Signatures (SAS) key
    • Manage Stored Access Policies for blob containers, queues, and tables
    • Local development storage with Storage Emulator (Windows-only)
    • Create and delete blob containers, queues, or tables
    • Search for specific blobs, queues, or tables
  • Blobs
    • View blobs and navigate through directories
    • Upload, download, and delete blobs and folders
    • Open and view the contents of text and picture blobs
    • View and edit blob properties and metadata
    • Search for blobs by prefix
    • Drag ‘n drop files to upload
  • Tables
    • View and query entities with ODATA
    • Insert prewritten queries with “Add Query” button
    • Add, edit, delete entities
  • Queues
    • Peek most recent 32 messages
    • Add, dequeue, view messages
    • Clear queue


Storage Performance Benchmarker 0.3 – DISKSPD option!

With this, I’m happy to announce the new release of the “Storage Performance Benchmarker”! The previous version relied heavily on “SQLIO”, whereas this version offers you the ability to choose between “DISKSPD” (default) and “SQLIO”. The output will still be aggregated in the same manner towards the backend web interface, though the local output of the individual runs will be in the native format of the tool used.

[Screenshot: the benchmarker running in Windows PowerShell]

Parameters added ;

  • -TestMethod : Either “DISKSPD” or “SQLIO”, depending on your preference
  • -TestWarmup : The warmup time used if you use “DISKSPD”
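Underneath, these options map onto DISKSPD’s own command-line flags (-o for outstanding IO, -W for warmup seconds, -b for block size). As a sketch only, the helper below composes such a command line; the exact invocation the benchmarker script builds is an assumption, and the helper name is hypothetical.

```shell
# Hypothetical helper: compose a DISKSPD command line for one test pass.
# Flag meanings per the DISKSPD documentation:
#   -b block size, -o outstanding IOs, -W warmup seconds,
#   -w write percentage, -d duration seconds, -r random access
build_diskspd_cmd() {
  local block_kb=$1 outstanding=$2 warmup=$3 write_pct=$4 access=$5
  local flags="-b${block_kb}K -o${outstanding} -W${warmup} -w${write_pct} -d60"
  if [ "$access" = "random" ]; then
    flags="$flags -r"
  fi
  echo "diskspd.exe $flags testfile.dat"
}

# A "SmallIO Read" pass: 8K blocks, 16 outstanding IO, 5s warmup, 0% writes, random
build_diskspd_cmd 8 16 5 0 random
```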

If you have any suggestions/comments, feel free to let me know!


Windows Storage Performance Benchmarking : a predefined set of benchmarks & analytics!

A while ago we were looking into a way to benchmark storage performance on Windows systems. This started out with the objective to see how Storage Spaces held up under certain configurations, and eventually moved towards benchmarking existing on-premise workloads against Azure deployments. For this we created a wrapper script for SQLIO that was heavily based upon previous work from both Jose Barreto & Mikael Nystrom. Adaptations were made to clean up the code and to add a back-end for visualization purposes. At this point, I feel the tool has reached a level of maturity where it can be publicly shared for everyone to use.

Storage Performance Benchmarker Script
The first component is the “Storage Performance Benchmarker Script“, which you can download from the following location ;

I won’t be quoting all the options/parameters, as the BitBucket page clearly describes them. By default it will do a “quick test” (-QuickTest true). This will trigger one run (with 16 outstanding IO) for four scenarios ; LargeIO Read, SmallIO Read, LargeIO Write & SmallIO Write.

The difference between the “Read” & “Write” part will be clear I presume… 🙂 The difference between “LargeIO” & “SmallIO” resides in the block size (8 KByte for SmallIO, 512 KByte for LargeIO) and the access method (Random for SmallIO & Sequential for LargeIO). The tests are foreseen to mimic a typical database behaviour (SmallIO) and a large datastore / backup workload (LargeIO). When doing an “extended test” (-QuickTest false), a multitude of runs will be foreseen to benchmark different “Outstanding IO” scenarios.

Website Backend
You can choose not to send the information to the backend server (-TestShareBenchmarks false); then you will only have the CSV output, as the backend system is used to parse the information into charts for you ; Example.

[Screenshot: Storage Performance Benchmarker backend]

By default, your information will be shown publicly, though you can choose to have a private link (-Private true) and even have the link emailed to you (-Email you@domain.tld).

On the backend, you will have the option to see individual test scenarios (-TestScenario *identifying name*) and to compare all scenarios against each other.

For each benchmark scenario, you will see the following graphs ;

  • MB/s : The throughput measured in MB/s. This is often the metric people know… Though be aware that the MB/s is realised by multiplying the IO/s by the block size. So the “SmallIO” test will show a smaller throughput compared to the “LargeIO”, though the processing power (IOPS or IO/s) of the “SmallIO” may sometimes even be better on certain systems.
  • IO/S : This is the number of IOPS measured during the test. This provides you with an insight into the number of requests a system can handle concurrently. The higher the number, the better… To provide assistance, marker zones were added to indicate what other systems typically reach, giving you an insight into what is to be expected or what you can reference against.
  • Latency : This is the latency that was measured in milliseconds. Marker zones are added to this chart to indicate what is to be considered a healthy, risk or bad zone.
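The MB/s versus IO/s relation above can be sketched with some quick arithmetic; the IOPS figures below are made up purely to illustrate why SmallIO shows less MB/s yet more IO/s.

```shell
# Throughput (MB/s) = IOPS x block size. Illustrative numbers only.
iops_small=20000; block_small_kb=8     # SmallIO: 8 KByte blocks, random
iops_large=800;   block_large_kb=512   # LargeIO: 512 KByte blocks, sequential

mbps_small=$(( iops_small * block_small_kb / 1024 ))   # 156 MB/s
mbps_large=$(( iops_large * block_large_kb / 1024 ))   # 400 MB/s
echo "SmallIO: ${mbps_small} MB/s  LargeIO: ${mbps_large} MB/s"
```

So the SmallIO system here handles 25x more requests per second, yet reports far less throughput.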

The X-axis will show the difference between different “Outstanding IO” situations ;

Number of outstanding I/O requests per thread. When attempting to determine the capacity of a given volume or set of volumes, start with a reasonable number for this and increase until disk saturation is reached (that is, latency starts to increase without an additional increase in throughput or IOPs). Common values for this are 8, 16, 32, 64, and 128. Keep in mind that this setting is the number of outstanding I/Os per thread. (Source)

Permanently disable rightside toolbar in Adobe Acrobat Reader

Recent versions of Adobe Acrobat Reader have an annoying pane on the right that everyone always closes immediately to increase readability. Yet the next time you open a PDF, the pane is back! All changes to the state of the application are saved, except for this one… Extremely annoying and a waste of precious time when you need to close down that pane every single time!

So for those who are also annoyed by this, apply the following steps for permanent removal ;

  • Close Acrobat Reader
  • Browse to the program folder of Acrobat Reader and then go to “Acrobat Reader DC\Reader\AcroApp” and then your language code
  • Delete the following files : AppCenter_R.aapp & Home.aapp & Viewer.aapp
  • Start up Acrobat Reader
  • Rejoice!
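Scripted, the delete step looks something like this; the function name is mine, and the install path and language folder (e.g. ENU) vary per system, so treat both as placeholders.

```shell
# Remove the three pane-related .aapp files from the AcroApp language folder.
# $1 = full path to that folder, e.g.
#   "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroApp\ENU"
disable_reader_pane() {
  local dir=$1 f
  for f in AppCenter_R.aapp Home.aapp Viewer.aapp; do
    rm -f "$dir/$f"
  done
}
```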

Web Development : A step up with Automated Deployment

Developing a website… ; Open up “notepad++”, browse to your web server via FTP and edit the files. Then refresh to see the changes…

Sounds familiar? Probably… It’s a very straightforward and easy process. The downside however is that you have no tracking of your changes (Version Control) and that the process is pretty manual. So this becomes a problem when you aren’t the only one on the job or if something goes wrong.

So let’s step it up and introduce “version control”… Now we have an overview of all the revisions we made to our code and we are able to revert back to any of them. Yet suddenly, we need to do a lot more to get our code onto the web server. This brings us to the point where we want a kind of helper that does the “deployment” for us.

The basic process

  • Local Development : The development will happen here. Have fun… When you (think you) are happy with what you have produced, you update the files via your version system.
  • Source Repository : The source repository will contain all the versions of your code. Here you can configure it to send a notification to your deployment system whenever a new version has been introduced.
  • Deployment System : The deployment system will query the source repository and retrieve the latest code. This code will be packaged, transmitted and deployed onto the target system(s).
  • Target Systems : The systems that will actually host your code and deliver the (web) service!
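To make the “Deployment System” step above concrete, here is a minimal sketch; a plain copy stands in for whatever packaging and transfer your real deployment tool does, and the paths are illustrative.

```shell
# Toy version of the deploy step: take the latest checkout and sync it
# into the target's web root. Real tools add packaging, atomic swaps,
# rollbacks, and remote transfer on top of this.
deploy() {
  local checkout=$1 webroot=$2
  mkdir -p "$webroot"
  cp -R "$checkout"/. "$webroot"/
}
```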

Real Life Example?


  • Create a private repository at BitBucket
  • Pull/push the repository between BitBucket & your local SourceTree
  • In BitBucket, go to “Settings”, “Deployment Keys” and generate a key for your automation. Copy it to your clipboard…
  • In DeployHQ, go to “Settings”, “General Settings” and paste the key into the “Public Key Authentication” textbox.
  • In DeployHQ, go to “Settings”, “Servers & Group” and create a new server.
  • In the same screen, Enable “Auto Deploy” and copy the url hook.
  • Now go to “Settings” in BitBucket, and then “Hooks”. Add a “POST” hook containing the url hook you just copied.
  • Now every time you do a commit on your workstation, the code will be deployed to your server!

In fact, this is the mechanism I utilize for my own (hobby) development projects. One example is my own homepage, which is deployed via the system described above.

Tool Suggestion : Amahi

Link : Amahi

Feature Overview
The Amahi Linux Home Server makes your home networking simple. We like to call the Amahi servers HDAs, for “Home Digital Assistants.” Each HDA delivers all the functionality you would want in a home server, while being as easy to use as a web browser.

The core functionality available in the base Amahi HDA install includes:

  • Protect Your Computers : Back up all your networked PCs simply and easily on your home network. If one of your PCs “dies” you can easily restore it!
  • Organize Your Files : Access, share and search your files from any machine on your network, making it easy to share and find your photos, music and videos.
  • Internet Wide Access : Automatically set up your own VPN so you can access your network from anywhere: safely and securely.
  • Private Internet Applications : Shared applications like calendaring, private wiki and more to come, will help you manage your home and your family!

Tool Suggestion : TeamLab

Link : TeamLab

Project Management
Build teams and assign tasks. Schedule project milestones, track project activity and generate reports.

Business Collaboration
View employee details, create posts in blogs and forums. Share photos, bookmarks and Wiki pages.

Instant Messaging
Chat with colleagues in real time. Get contact list automatically updated. Receive “what’s new” notifications.

  • Document Editing
  • E-Mail Management
  • HR Administration

Basecamp Import
Import your Basecamp projects into TeamLab. Make a seamless move to free project management and collaboration software.

Data Backup & Restore
Create data backup directly from your portal. Deploy TeamLab on your own server and restore data from backup in one click.

Wrapperscript around ghettoVCB

This script is handy if you want to back up a (free) ESXi server without console access. You set this script up in a cron on a separate server, which will initiate the backups. It heavily relies on ghettoVCB, a great tool! If you’re looking for a more professional tool, check out esXpress (~1k / esx host).
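For the “cron on a separate server” part, a crontab entry could look like the fragment below; the wrapper script name and path are placeholders for wherever you saved it.

```shell
# Run the wrapper every night at 02:00 from the backup server's crontab
0 2 * * * /root/esx-backup/backup-wrapper.sh
```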

# Heavily depends on ghettoVCB (ghettoVCB.sh as the script name below is
# an assumption; use the name of your copy)

# Backup Variables
WORK=/root/esx-backup           # work directory (local); one subdirectory per ESXi host
LOG=/root/esx-backup/logs       # log directory (local)
RTMP=/tmp                       # temp directory (remote)
BACKUPALL=YES                   # YES to backup all VMS, and ignore the "serverlist"

# Mailing Variables
SUBJECT="Myhost- ghettoVCB"
EMAIL="you@domain.tld"          # recipient of the backup report

cd $WORK

# Kill any backup process still lingering from a previous run
for PID in $(ps -ef | grep -v "grep" | grep -v "$$" | grep -i "backup" | awk '{ print $2 }'); do
        kill $PID
done

# Every subdirectory (except "logs") represents one ESXi host to back up
for SERVER in $(ls -l | grep ^d | grep -v logs | awk '{print $8}'); do
        DATE=`date +"%Y%m%d"`
        echo "STARTING - $SERVER" >> $LOG/$DATE-$SERVER.log
        date >> $LOG/$DATE-$SERVER.log
        if [ "$BACKUPALL" = "YES" ] ; then
                # Ask the host for all registered VMs and overwrite the static serverlist
                ssh root@$SERVER -C vim-cmd vmsvc/getallvms | awk '{ print $2 " " $3 }' | grep -v "Name File" > $SERVER/serverlist
        fi
        # Copy the ghettoVCB script & serverlist to the host and run the backup
        scp $SERVER/* root@$SERVER:$RTMP >> $LOG/$DATE-$SERVER.log
        ssh root@$SERVER -C chmod 755 $RTMP/ghettoVCB.sh
        ssh root@$SERVER -C $RTMP/ghettoVCB.sh -f $RTMP/serverlist >> $LOG/$DATE-$SERVER.log
        echo "ENDED - $SERVER" >> $LOG/$DATE-$SERVER.log
        # Mail a trimmed report (the verbose "Clone" progress lines are dropped)
        grep -v "Clone" $LOG/$DATE-$SERVER.log > $LOG/latest-$SERVER.log
        mailx -s "$SUBJECT" "$EMAIL" < $LOG/latest-$SERVER.log
done