Wrapper script around ghettoVCB

This script is handy if you want to back up a (free) ESXi server without console access. You set this script up in a cron on a separate server, which will initiate the backups. It heavily relies on ghettoVCB, a great tool! If you’re looking for a more professional tool, check out esxpress (~1k / esx host).
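For instance, a nightly crontab entry on the backup server could look like this (the script path and the 01:30 schedule are placeholders, not the author's actual setup):

```shell
# m h dom mon dow  command -- run the wrapper nightly at 01:30
# (/root/esx-backup/backup.sh is a hypothetical path)
30 1 * * * /root/esx-backup/backup.sh
```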

#!/bin/sh
# Heavily depends on ghettoVCB: http://communities.vmware.com/docs/DOC-8760

# Backup Variables
WORK=/root/esx-backup           # working directory (one subdirectory per ESXi host)
LOG=/root/esx-backup/logs       # log directory (local)
RTMP=/tmp                       # temp directory (remote)
BACKUPALL=YES                   # YES to back up all VMs and ignore the existing "serverlist"

# Mailing Variables
SUBJECT="Myhost - ghettoVCB"
EMAIL="root@localhost"          # recipient of the backup report (placeholder, set your own)

cd $WORK || exit 1

# Kill any backup run still lingering from a previous invocation
for PID in $(ps -ef | grep -v "grep" | grep -v "$$" | grep -i "backup" | awk '{ print $2 }'); do
        kill $PID
done

# Every subdirectory (except "logs") is named after an ESXi host; the
# directory name is the last column of "ls -l"
for SERVER in $(ls -l | grep ^d | grep -v logs | awk '{print $NF}'); do
        DATE=`date +"%Y%m%d"`
        echo "STARTING - $SERVER" >> $LOG/$DATE-$SERVER.log
        date >> $LOG/$DATE-$SERVER.log
        if [ "$BACKUPALL" = "YES" ] ; then
                # Regenerate the serverlist from all VMs registered on the host
                ssh root@$SERVER -C vim-cmd vmsvc/getallvms | awk '{ print $2 " " $3 }' | grep -v "Name File" > $SERVER/serverlist
        fi
        scp $SERVER/* root@$SERVER:$RTMP >> $LOG/$DATE-$SERVER.log
        ssh root@$SERVER -C chmod 755 $RTMP/ghettoVCB.sh
        ssh root@$SERVER -C $RTMP/ghettoVCB.sh $RTMP/serverlist >> $LOG/$DATE-$SERVER.log
        echo "ENDED - $SERVER" >> $LOG/$DATE-$SERVER.log
        # Strip the verbose "Clone" progress lines from the mailed report
        grep -v "Clone" $LOG/$DATE-$SERVER.log > $LOG/latest-$SERVER.log || echo "BACKUP FAILED" > $LOG/latest-$SERVER.log
        mailx -s "$SUBJECT" "$EMAIL" < $LOG/latest-$SERVER.log
done
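The serverlist generation above leans on the column layout of `vim-cmd vmsvc/getallvms`. A minimal sketch of that parsing step, using abridged, made-up sample output (the VM names and datastore paths are invented):

```shell
# Abridged, invented sample of `vim-cmd vmsvc/getallvms` output
SAMPLE='Vmid Name File Guest_OS
16 vm01 [datastore1] vm01/vm01.vmx winNetStandardGuest
32 vm02 [datastore1] vm02/vm02.vmx otherLinuxGuest'

# Same pipeline as the wrapper script: keep columns 2 and 3, drop the header
printf '%s\n' "$SAMPLE" | awk '{ print $2 " " $3 }' | grep -v "Name File"
# prints:
# vm01 [datastore1]
# vm02 [datastore1]
```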

Using SSH keys with ESXi

  • First “unlock” your ESXi (enable SSH access, as described in the next section)
  • Create your SSH key (PuTTYgen or ssh-keygen) on the client machine
  • Place the key file (for example: id_rsa.pub) from the client on the host
  • Create a “.ssh” directory in the root of the ESXi device
  • cat id_rsa.pub >> /.ssh/authorized_keys
  • chmod -R 0600 /.ssh on the ESXi
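The steps above can be sketched as client-side commands (the hostname `esxi01` is a placeholder, and this assumes SSH is already enabled on the host):

```shell
# Generate a key pair on the client (no passphrase, for unattended backups)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key over and append it on the ESXi host
scp ~/.ssh/id_rsa.pub root@esxi01:/tmp/
ssh root@esxi01 'mkdir -p /.ssh && cat /tmp/id_rsa.pub >> /.ssh/authorized_keys && chmod -R 0600 /.ssh'
```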

Enabling SSH on an ESXi

  • Go to the ESXi console and press Alt+F1
  • Type: unsupported
  • Enter the root password
  • At the prompt type “vi /etc/inetd.conf”
  • Look for the line that starts with “#ssh” (you can search by pressing “/”)
  • Remove the “#” (press “x” while the cursor is on the character)
  • Save “/etc/inetd.conf” by typing “:wq!”
  • Restart the server. Many guides say that you just have to restart the services, but this fails…
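The same edit the vi steps perform can be done non-interactively from the Tech Support Mode prompt. This is only a sketch: it assumes a stock /etc/inetd.conf with the ssh line commented out, and it deliberately avoids `sed -i`, since I’m not certain the busybox sed on ESXi supports it:

```shell
# Keep a backup, then uncomment the "#ssh" service line
cp /etc/inetd.conf /etc/inetd.conf.bak
sed 's/^#ssh/ssh/' /etc/inetd.conf.bak > /etc/inetd.conf
reboot    # restarting the services alone does not pick this up
```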

Log in through SSH after the reboot and you should see the following:

login as: root
root@ESXi’s password:
Tech Support Mode successfully accessed.
The time and date of this access have been sent to the system logs.
WARNING – Tech Support Mode is not supported unless used in
consultation with VMware Tech Support.
~ #

Innotek (VirtualBox) Acquired by Sun

Press announcement

SANTA CLARA, CA, February 12, 2008: Sun Microsystems, Inc. (NASDAQ: JAVA) today announced that it has entered into a stock purchase agreement to acquire innotek, the provider of the leading edge, open source virtualization software called VirtualBox. By enabling developers to more efficiently build, test and run applications on multiple platforms, VirtualBox will extend the Sun xVM platform onto the desktop and strengthen Sun’s leadership in the virtualization market. This software is available for all major operating systems at http://www.virtualbox.org and http://www.openxvm.org.

So Sun strengthens its product portfolio by adding a virtualization option.

The Virtualization options in Linux

Check out the following article at TechThrob.com.
An excerpt of the intro:

This week Canonical, the company behind Ubuntu Linux, announced a partnership with Parallels, maker of the Virtualization products Parallels Workstation and Parallels Desktop for Mac. Consequently, the Parallels Workstation virtualization software is now available to download and install in Ubuntu Linux, completely supported by Canonical, and done entirely through the Add/Remove programs interface. This makes four different virtualization programs — three of which are installable via the package repositories — that run on Ubuntu Linux. (See the Correction: in the Installing VirtualBox section for more information)

This article compares four virtualization products available for Ubuntu Linux: the free, open source x86 emulator Qemu; the closed-but-free versions of VirtualBox and VMware-Server, and the commercial Parallels Workstation.

What we often forget when implementing virtualization solutions

“The beginning is the half of every action”
Someone once told me “There is nothing more permanent than temporary.” (roughly translated), and it’s something you often see in the IT world. A server goes down, so let’s do the quick fix now and the in-depth analysis/coding later. That last step is often moved to the refrigerator: the “on hold”, “TODO” or “when we have time” box.

The following situation might ring some bells:

X : How do we save on infrastructure costs?
Y : Maybe by virtualizing our infrastructure?
X : Sounds good, how do we do this?
Y : Let’s first try our lab/development/staging environment?
X : And if that works move all servers to it!

Help!!! My virtual servers are breeding like rabbits
Most companies that have started with virtualization, for example with VMware or Xen, have found themselves rushing (or stumbling) way too fast into this new environment. The virtual infrastructure needs the same amount of thought as your physical infrastructure. A virtual server may be created in a fraction of the time a physical one takes, but that is no reason to skip the same steps.

Perhaps the sexiest aspect of virtualization is its speed: you can create VMs in minutes, move them around easily, and deliver new computing power to the business side in a day instead of weeks. It’s fun to drive fast, but slow down long enough to think about making virtualization part of your existing IT processes.

It’s not because it’s virtual that it doesn’t need to be managed
Continue reading “What we often forget when implementing virtualization solutions”

Vmmon issues with Ubuntu Gutsy Gibbon

The Problem

kvaes@ubuntu:~$ sudo vmware
[sudo] password for kvaes:
vmware is installed, but it has not been (correctly) configured
for this system. To (re-)configure it, invoke the following command:

So yet again: new kernel version, new issues with VMware Server. Here is the “HOWTO” to get your VMware Server working again.

The Fix

  • Download the latest vmware-any-any update (currently: 114)
  • Unpack it :
    tar xzf vmware-any-any-update???.tar.gz
  • Run the any-any update (press -enter- on all defaults) :
    sudo ./runme.pl
  • Run the install update (press -enter- on all defaults) :
    sudo /usr/bin/vmware-config.pl
  • “Done!!!”
    sudo vmware

The Past
This was an update on the previous threads for Feisty:

Running your dual boot windows inside Vmware Server within Ubuntu

I guess I’m a linux evangelist… Ubuntu is my main operating system, yet (for work interoperability) windows is sometimes needed. At such times I usually have multiple workspaces open, along with a lot of processes running. Rebooting into windows would mean a loss of time & productivity. Or just too much work, as everyone is kinda lazy by nature…

So my research began… First I used VMware Converter to run my windows inside my linux. Yet having two windows machines meant twice the space/maintenance. After browsing through the options, I saw the option to boot straight from a physical disk/partition. After some experiments, I got it working. Below you can find a small guide on how to get it done.

Continue reading “Running your dual boot windows inside Vmware Server within Ubuntu”

Performance impact of the VMware Virtual Switch

Let’s start out with the basics. VMware has several products that can be used for virtualization. The most commonly known products are “VMware Workstation”, “VMware Server” & “VMware Player”. They should actually be classed under “emulation” rather than device sharing. In my “hobby environment” I use VMware Server; it’s free, and it’s solid.

Yet for enterprise needs, ESX is the way to go. ESX is a kernel of its own, and it enables the virtual machines to really share the resources. This gives ESX an extreme advantage over the other products, yet be aware that it also implies technical restrictions/difficulties.

As you can probably guess, adding an extra “emulation layer” results in some performance loss. Those products will most likely suffice for functional test/development environments, yet servers at enterprise production level require more performance and better resource sharing.

Another thing you need to consider is the infrastructure architecture you’re going to build. This is what this article comes down to… The network sharing in ALL VMware products is done through a kind of “virtual switch”. This switch is software, and it costs CPU. When several servers share an environment within a VMware product and one of them starts consuming a lot of bandwidth, all servers will notice it, because the virtual switch needs CPU power to handle that traffic.

Don’t get me wrong here… I don’t want to bash the product, but I want to make you aware of this situation so that you can design your server farms accordingly.
For example: organise your farm so that the intensive servers share their environment with some “light” servers.
Also make sure your system architects know this limitation! That gives them the opportunity to design a system that suits a shared hosting environment. It’s just awful if everybody’s hard work goes down the drain due to a design issue that could have been tackled early on.