Friday, May 6, 2011

EagerZeroedThick disk issues during Storage vMotion

During a large LUN migration project I discovered that vSphere 4.1 does not preserve EagerZeroedThick (EZT) disks when doing a Storage vMotion. When the VM reaches the new LUN, the disks are provisioned as ordinary Thick instead of EZT. The vmkfstools command line utility can be used to restore the EZT format; I have as yet been unable to find a way to resolve this issue using PowerCLI.

1) Enable Local Tech Support Mode on the ESXi host where the VM resides.
2) Log into the console of the ESXi host and press ALT F2 to enter unsupported mode.
3) Log in as root and enter the password.
4) Browse to the VM's folder; this should be /vmfs/volumes/datastorename *1
5) Run the following commands to create a copy of the virtual disk as eagerzeroedthick, move the original disk to a backup name (_old.vmdk) and finally rename the new disk to the name of the original disk.

a) vmkfstools -i servername_2.vmdk servername_2_new.vmdk -d eagerzeroedthick
b) mv servername_2.vmdk servername_2_old.vmdk
c) mv servername_2_new.vmdk servername_2.vmdk

Perform steps a, b and c for each disk that needs to be EZT, then power on the VM. Because the new EZT disk has been renamed to match the original, nothing needs to change in the VM's configuration, and the original thick disk is kept as a backup. Once you have confirmed that the server is working, delete the old disks.
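Steps a, b and c above can be sketched as a small loop. The helper below is a hypothetical dry run: it only echoes the commands for each disk name you pass it, so you can review them before pasting them into the ESXi console (the servername_2.vmdk name is the example from this post).

```shell
# Hypothetical dry-run helper: for each .vmdk name given, print the
# clone/backup/rename sequence from steps a, b and c. It echoes the
# commands rather than running them, so nothing is modified.
ezt_commands() {
  for disk in "$@"; do
    base="${disk%.vmdk}"
    echo "vmkfstools -i $disk ${base}_new.vmdk -d eagerzeroedthick"
    echo "mv $disk ${base}_old.vmdk"
    echo "mv ${base}_new.vmdk $disk"
  done
}

# Example: print the sequence for one disk.
ezt_commands servername_2.vmdk
```

Run from the VM's folder on the datastore, this makes it harder to mistype a disk name when several disks need converting.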

*1 Note that the syntax for the datastore name requires that spaces in the name be escaped with a backslash. For example, to access a datastore named North Prod LUN 009 XIV 02 you would enter the following at the command line. Please note that the syntax is case sensitive.

cd /vmfs
cd volumes
cd North\ Prod\ LUN\ 009\ XIV\ 02

This is a good argument for not having spaces in your datastore names at the vSphere level. Once I find a way to do this through PowerCLI it probably won't be as annoying.

*** UPDATE - 20-07-2011 ***

I have discovered that the disks are converted because the LUNs in question use different VMFS block sizes. If both the source and destination LUN use the same block size then this issue does not occur.
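The block sizes can be compared before migrating: on ESXi 4.x, vmkfstools -P queries a VMFS volume's attributes, and the block size appears on the "file block size" line of its output. The datastore path below reuses the example name from this post; substitute your own source and destination datastores.

```shell
# Print the file block size of a datastore so source and destination
# can be compared before the Storage vMotion. If the two values differ,
# expect the EZT format to be lost during the migration.
vmkfstools -P /vmfs/volumes/North\ Prod\ LUN\ 009\ XIV\ 02 | grep -i "block size"
```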


  1. Hi, I know this is a couple of years old now, but I wanted to thank you for this post, as it saved me several hours of hair pulling.

    I was doing an SVmotion from an HP EVA to an HP 3Par, and came across this issue.

    1. I'm really pleased you found value in this post and that I was able to help!