Performance impact of the VMware Virtual Switch

Let’s start with the basics. VMware has several products that can be used for virtualization. The most commonly known are VMware Workstation, VMware Server and VMware Player. These should actually be classed as “emulation” rather than real device sharing. In my “hobby environment” I used VMware Server; it’s free, and it’s solid.

Yet for enterprise needs, ESX is the way to go. ESX is a kernel of its own, which lets the virtual machines truly share the hardware resources. This gives ESX a big advantage over the other products, but be aware that it also brings technical restrictions and difficulties.

As you can probably guess, adding an extra “emulation layer” results in some performance loss. Those products will most likely suffice for functional test and development environments, but servers that have to run at enterprise production level need more performance and better resource sharing.

Another thing you need to consider is the infrastructure architecture you’re going to build, and that is what this article comes down to. The network sharing in ALL VMware products is handled by a kind of “virtual switch”. That switch is software, so it costs CPU cycles. When several servers share an environment within a VMware product and one of them starts pushing a lot of bandwidth, all the other servers will notice it, because the virtual switch needs CPU power to move that traffic.
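
To make that effect concrete, here is a rough back-of-envelope model. This is my own illustrative sketch, not VMware code; the switch capacity and the per-VM demands are invented assumptions. The point is simply that once the CPU-bound switch saturates, every guest on the host gets squeezed.

```python
# Illustrative model only -- the switch capacity and the per-VM demands are
# invented assumptions, not VMware measurements.

def effective_throughput(demands_mbps, switch_capacity_mbps=900.0):
    """Throughput each VM actually gets when all guests share one
    CPU-bound virtual switch with a fixed aggregate capacity."""
    total = sum(demands_mbps)
    if total <= switch_capacity_mbps:
        return list(demands_mbps)          # enough headroom: everyone is happy
    scale = switch_capacity_mbps / total   # saturated: traffic is squeezed proportionally
    return [round(d * scale, 1) for d in demands_mbps]

# Three quiet guests next to one guest doing a big transfer:
print(effective_throughput([50, 50, 50, 800]))  # the quiet guests drop below 50 Mbps
```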

Don’t get me wrong here… I don’t want to bash the product, but I do want to make you aware of this behaviour so that you can design your server farms around it.
For example: organise your farm so that the network-intensive servers share their host with some “light” servers (see the sketch below).
Also make sure your system architects know about this limitation! That gives them the opportunity to design a system that suits a shared hosting environment. It’s just awful if everybody’s hard work goes down the drain due to a design issue that could easily have been tackled up front.
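
As a sketch of that design rule, here is a simple greedy placement that spreads the network-heavy guests across hosts and pads them with light ones. This is my own illustration; the host names, VM names and bandwidth figures are invented assumptions, not output of any VMware tool.

```python
# Illustrative placement sketch -- host names, VM names and Mbps figures are
# invented assumptions, not taken from a real inventory.

def place_vms(vms, hosts):
    """Greedy placement: assign each VM (heaviest network demand first) to the
    host carrying the least network load so far, so heavy guests end up spread
    out and surrounded by light ones."""
    load = {h: 0 for h in hosts}
    plan = {h: [] for h in hosts}
    for name, mbps in sorted(vms, key=lambda vm: vm[1], reverse=True):
        target = min(hosts, key=lambda h: load[h])
        plan[target].append(name)
        load[target] += mbps
    return plan

vms = [("db-1", 500), ("file-srv", 450), ("web-1", 80),
       ("web-2", 70), ("dev-box", 30), ("monitoring", 10)]
print(place_vms(vms, ["esx01", "esx02"]))
# Each heavy guest lands on its own host, padded with light guests.
```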

2 thoughts on “Performance impact of the VMware Virtual Switch”

  1. Karim,

    I stumbled across your article because I work in performance analysis for VMware. Your observations about overhead with network virtualization are generally correct. But in truth the overhead for the virtual switch isn’t a dominating factor in planning for network virtualization.

    The dominating factor is the efficiency of network virtualization. The very worst implementations out there (which don’t need naming here) have throughput throttled at 1/3 of wire speed with 100% CPU utilization. The best out there (ours, frankly) maintains wire speed and leaves plenty of CPU free for application use.

    Anyway, planning around network virtualization is important. Planning for the additional capabilities provided by a virtual switch isn’t.

  2. Scott,

    Karim is right in stating that the virtual switch adds a virtualization layer that will slow down network traffic, unless you use a very old 10 Mbit network and a server equipped with the fastest CPUs. I’ve been designing, installing and servicing several VMware ESX servers for more than 3 years, so I can talk with some authority too.

    Unlike in competing virtualization products, there is always a virtual switch sitting between the guest OS NIC and the physical network. VMware supports a limited number of drivers that run in the console OS and connect the hardware NICs to one or more virtual switches. The virtual machines can have vlance or vmxnet NICs that connect to the virtual switch. Most guest OSes recognize the (v)lance network card by default, but using vlance has a large emulation overhead. vmxnet is a card that has no hardware equivalent and for which VMware delivers drivers for most popular guests; it is more CPU friendly.

    The virtual switch runs on CPU0, core 0. So even if you have a large 4-CPU, quad-core server, all network traffic of all virtual machines is processed by a single core.

    Is this a major problem? No.
    But there are some things you shouldn’t do with virtual machines running on top of VMware ESX.
    1. Don’t access NAS from your virtual machines. So no drive mappings.
    2. No backup clients inside the guests.
    3. Equip your ESX servers with few CPUs and cores, and select the fastest CPUs available (a rough illustration of why this matters follows below).
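
    As a rough illustration of point 3: this is a toy model of my own, and the Mbps-per-GHz factor and the server specs below are invented assumptions rather than benchmark results. With the switch pinned to a single core, the traffic ceiling of the whole host scales with that one core’s clock speed, while extra cores only help the guests’ own workloads.

    ```python
    # Toy comparison only -- the Mbps-per-GHz factor and the server specs are
    # invented assumptions used to illustrate the single-core limit.

    MBPS_PER_GHZ = 300.0  # assumed traffic one core can switch per GHz of clock speed

    def vswitch_ceiling(core_ghz):
        """Aggregate network throughput a host can reach when all switching
        runs on a single core of the given clock speed."""
        return core_ghz * MBPS_PER_GHZ

    # A 16-core 2.0 GHz box versus an 8-core 3.0 GHz box:
    print(vswitch_ceiling(2.0))  # ~600 Mbps ceiling, despite having more cores
    print(vswitch_ceiling(3.0))  # ~900 Mbps ceiling with half the cores
    ```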
