Monday, 10 July 2017

Down the OVA compatibility rabbit hole

I recently volunteered to create a B2R (boot-to-root) CTF for SecTalks_BNE. It was fairly simple to create the content within the machine; however, I came across a few hurdles when trying to make the machine as portable as possible. I wanted it to be easily usable on VirtualBox as well as VMware Fusion, Player and Workstation.

Before embarking on this project I had foolishly assumed I could just create the VM in VirtualBox and then "Export Appliance..." to create a portable OVA. If only it were that simple!

The OVA files created by VirtualBox worked fine for other VirtualBox users, but VMware users had varying levels of success; Fusion wouldn't play nice at all.

I've created this post so that I remember what to do again down the track, and as a side bonus hopefully someone else will benefit or learn from it!

Let me explain some acronyms first

An OVA file is an Open Virtualisation Appliance. It's essentially a tarball containing an OVF, one or more disk images (usually VMDK files) and a manifest (checksum) file.

The OVF (Open Virtualisation Format) specifies the configuration of the virtual machine. The disk images contain data held by the virtual drives.
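
Since an OVA is just a tarball, you can list its contents without unpacking it; for example (the file name here is illustrative):
$ tar tf appliance.ova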

Gathering test data

To get some VMware test data I dragged my old HP N54L out of the cupboard and installed ESXi 6.5 on it. The disk performance was horrendously slow until I disabled the problematic AHCI driver as per this blog.

After creating a few OVA files from ESXi, my testing concluded that VirtualBox happily accepted a VMware OVA but VMware had a hard time working with a VirtualBox OVA.

One solution would be to do all my development on ESXi, but I quite like using VirtualBox on my laptop!

My VirtualBox solution

I decided to keep things simple and use ESXi to generate the initial OVA. I chose to target VMware virtual hardware version 4 to keep it compatible with pretty much everything. After this step ESXi was no longer required.

I then unpacked said OVA, prepared the replacement disk image with VirtualBox and rolled my own OVA using a few commands.

The initial OVA contained the following:
$ tar xvf covfefe.ova
covfefe.ovf
covfefe.mf
disk-0.vmdk

To prepare the replacement disk-0.vmdk file, I ran through the steps in my earlier blog post and converted from VDI to VMDK with clonemedium (also mentioned in the same post).
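
The conversion itself is a VBoxManage one-liner; something like this, with the source and target file names assumed:
$ VBoxManage clonemedium covfefe.vdi disk-0.vmdk --format VMDK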

After replacing the VMDK file, I edited the size entry in the OVF to reflect the new file:
<File ovf:href="disk-0.vmdk" ovf:id="file1" ovf:size="464093696"/>
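
The value is the new file's size in bytes, which you can grab with stat (-c%s is the GNU/Linux form; on macOS it's stat -f%z):
$ stat -c%s disk-0.vmdk
464093696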

Once I finished editing the OVF I had to create the correct checksums to use in the manifest file:
$ shasum covfefe.ovf disk-0.vmdk
249eef04df64f45a185e809e18fb285cadfcd6f0  covfefe.ovf
ae1718beb7d5eb7dfb5158718b0eceda812512a2  disk-0.vmdk

After the changes my manifest file looked like this:
$ cat covfefe.mf 
SHA1 (covfefe.ovf)= 249eef04df64f45a185e809e18fb285cadfcd6f0
SHA1 (disk-0.vmdk)= ae1718beb7d5eb7dfb5158718b0eceda812512a2

I then reassembled the OVA file, making sure the OVF descriptor came first in the archive (the OVF specification requires it to be the first file):
$ tar cf covfefe.ova covfefe.ovf covfefe.mf disk-0.vmdk

Just as a test I also did the assembly using OVF Tool, since it performs some extra validation while packaging:
$ /Applications/VMware\ OVF\ Tool/ovftool covfefe.ovf covfefe.ova

The OVA has worked flawlessly on everything I've tested it on so far: VirtualBox 5.1.22, VMware ESXi 6.5, Fusion 8.5.8 and Player 6.0.1.

Prepping a Linux VM for OVA export

These are the steps I recommend to prepare a Linux VM for OVA export. It should keep the size down to a minimum and prevent headaches and confusion down the track!

I'm using VirtualBox, but the info applies equally to VMware; you'll just have to read the VMware documentation for the compacting section.

I am running these commands from a Debian Stretch live CD inside the guest, and have mounted the destination filesystem (/dev/sda1) as /mnt:
$ sudo mount /dev/sda1 /mnt

Disable systemd from renaming network interfaces

If you leave this enabled, you'll end up with different network interface names under VirtualBox and VMware, so your interface definitions won't work in both!

I disable this by adding the kernel parameter "net.ifnames=0"; you can do this within /mnt/etc/default/grub:
GRUB_CMDLINE_LINUX="net.ifnames=0"

Then run update-grub from within a chroot:
$ sudo mount --bind /dev /mnt/dev
$ sudo mount --bind /proc /mnt/proc
$ sudo mount --bind /sys /mnt/sys
$ sudo chroot /mnt
# update-grub
# exit
$ sudo umount /mnt/dev /mnt/proc /mnt/sys

You'll now want to adjust /etc/network/interfaces (or your distribution's equivalent) to use eth0 instead of enp0s17 or whatever it was called before.
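
For example, a minimal DHCP stanza in /etc/network/interfaces would look like this (assuming eth0 is your only interface):
auto eth0
iface eth0 inet dhcp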

Sanitise the log directory

Nuke the contents but leave files in place:
$ sudo find /mnt/var/log -type f -exec truncate -s 0 {} +

Discard unallocated blocks

Unmount the filesystem, then discard the unallocated blocks:
$ sudo umount /mnt
$ sudo e2fsck -E discard /dev/sda1
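
Note that the discard only helps if the underlying virtual disk honours it. If yours doesn't, zero-filling the free space achieves the same end, since the compacting step below reclaims zeroed blocks. This fallback is my own assumption, not part of the original workflow; the dd is expected to stop with a "No space left on device" error:
$ sudo mount /dev/sda1 /mnt
$ sudo dd if=/dev/zero of=/mnt/zerofill bs=1M
$ sudo rm /mnt/zerofill
$ sudo umount /mnt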

Compact the disk image

This is done from the host, not the guest.

If you're using a VDI file, you can use modifymedium --compact:
https://www.virtualbox.org/manual/ch08.html#vboxmanage-modifyvdi

If you're using a VMDK file, you can use clonemedium:
https://www.virtualbox.org/manual/ch08.html#vboxmanage-clonevdi
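
For reference, the corresponding commands look something like this (disk names assumed):
$ VBoxManage modifymedium disk.vdi --compact
$ VBoxManage clonemedium disk-0.vmdk disk-0-compact.vmdk --format VMDK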

Friday, 13 April 2012

A potential backup solution for small sites running VMware ESXi

Today, external consumer USB3 and/or eSATA drives can be a great low-cost alternative to tape. For most small outfits, they fulfil the speed and capacity requirements for nightly backups. I use the same rotation scheme with these drives as I did with tape, with great success.

Unfortunately these drives can't easily be utilised by those running virtualised servers on top of ESXi. VMware offers SCSI pass-through as a supported option; however, the tape drives and media are quite expensive by comparison.

VMware offered a glimpse of hope with the USB pass-through introduced in ESXi 4.1, but it proved to have extremely poor throughput (~7MB/sec), so it can realistically shift only a couple of hundred GB per night.

I have trialled some USB over IP devices; the best of these can lift the throughput from ~7MB/sec to ~25MB/sec, but the drivers can be problematic and are often only available for Windows platforms.

This got me thinking about presenting a USB3 controller via ESXi's VMDirectPath I/O feature.

VMDirectPath I/O requires a CPU and motherboard capable of Intel Virtualization Technology for Directed I/O (VT-d) or AMD I/O Virtualization Technology (IOMMU). It also requires that your target VM is at a hardware level of 7 or greater. A full list of requirements can be found at http://kb.vmware.com/kb/1010789.

I tested pass-through on a card with the NEC/Renesas uPD720200A chipset (Lindy part # 51122) running firmware 4015. The test VM runs Windows Server 2003R2 with the Renesas 2.1.28.1 driver. I had to configure the VM with pciPassthru0.msiEnabled = "FALSE" as per http://www.vmware.com/pdf/vsp_4_vmdirectpath_host.pdf or the device would show up with a yellow bang in Device Manager and would not function.

The final result - over 80MB/sec throughput (both read and write) from a Seagate 2.5" USB3 drive!

Tuesday, 29 July 2008

Moving from VMware Server to ESXi

At home I'm currently using VMware Server with Windows 2003 as the host OS. In addition to running 5 guest operating systems, the host OS performs the following tasks:

  • Shuts down the server in the event of an extended power outage thanks to APC PowerChute.
  • Backs up the VMDK files to locally attached USB hard drives.
  • Allows remote administration via Terminal Services (Remote Desktop).
  • Hosts my complex virtual networking services including NAT (with port forwards) and routing for the virtual machines.

Lately I've been reading up on VMware ESXi, and it appears my existing hardware will work; however, I'm having a hard time deciding whether the extra efficiency is worth the hassle. From what I've read, I'll have to find a different way to perform backups, since local USB devices aren't supported, and I'll also have to provision a VM to take over the NAT and routing duties.

On the other hand, I/O struggles at times under VMware Server, so the extra performance and stability of ESXi would be welcome; I've had VMware Server's NAT implementation crash twice during 18 months of use.

If anyone out there has made the move, I'd love to hear their experiences and feedback!