
Thursday, 8 November 2012

VMware: Remove datastore problem - "The resource 'xxxxxxxxxxxxxxxxxxxxx' is in use"

Today I faced a situation where I had removed all VMs from a datastore but still couldn't remove it, because of the following error:

Status: The resource 'xxxxxxxxxxxxxxxxxx' is in use.

The first thing I did was unmount the datastore on all hosts in the cluster. This allowed me to find the two hosts which had locked this datastore. While all the other hosts let me unmount the datastore with no problem, those two returned an error:

Status: The resource 'xxxxxxxx' is in use.
Error Stack: Cannot unmount volume 'xxxxxxxx' because file system is busy. Correct the problem and retry the operation.

At this moment I remembered that some time ago, in order to install some upgrade, I had needed to configure the ScratchConfig option on those two hosts to use an external location.

Tip:
Go to "Inventory > Hosts and Clusters", select host that uses a datastore, goto "Configuration" tab, and click on "Advanced Settings" option. Find "ScratchConfig" section and change to something else (e.g. /tmp). Restart the host. Now you will be able to remove the datastore.

PS: Of course, this tip is kind of useless if it was you who configured this option in the first place. But it may help if you've inherited a legacy setup which you didn't configure yourself.

PPS: I didn't run into this myself, but I'd advise checking the Syslog.global.logDir option as well.
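
On ESXi 5.x the syslog destination can be checked and moved from the shell too; a rough sketch (this esxcli namespace appeared in 5.0, and the path below is only an example):

# esxcli system syslog config get
(shows the current log directory, among other settings)
# esxcli system syslog config set --logdir=/scratch/log
# esxcli system syslog reload
(applies the new log directory without a reboot)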

VMware Cluster - Remove datastore failed - The vSphere HA agent on host '10.0.0.1' failed to quiesce file activity on datastore '/vmfs/volumes/XXXXXXXXXXXXX'. To proceed with the operation to unmount or remove a datastore, ensure that the datastore is accessible, the host is reachable and its vSphere HA agent is running.

For the last few days I have been doing some reorganization of our Virtual Infrastructure. One of the steps of this reorganization is an upgrade from VMFS 3 to VMFS 5 for all our storage connected to the main HA cluster.

Although it is possible in ESXi to upgrade to VMFS 5 in place, I decided to completely remove and re-create the datastores. The main reason was that previously all the arrays were sliced into several 2 TiB (minus 1 MiB) logical drives, and I wanted to create a single logical unit for each storage subsystem.
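
Just for reference, the in-place upgrade I decided against is a one-liner from the host's shell. A hedged sketch for ESXi 5.0, where vmkfstools got the VMFS-5 upgrade switch (the datastore name is only an example):

# vmkfstools -T /vmfs/volumes/old-datastore
(upgrades the VMFS-3 volume to VMFS-5 in place, keeping its existing partition layout and block size)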

But almost every time I tried to remove an old datastore from ESXi, I received an error:


Status: The vSphere HA agent on host '10.0.0.1' failed to quiesce file activity on datastore '/vmfs/volumes/XXXXXXXXXXXXX'. To proceed with the operation to unmount or remove a datastore, ensure that the datastore is accessible, the host is reachable and its vSphere HA agent is running.


Tip:
Well, the solution is quite simple. Just go to the host which generated this error (in my example it's 10.0.0.1) in the "Inventory > Hosts and Clusters" view, and remove the datastore from the Summary tab of that host.
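
If the client keeps refusing, the same unmount can also be attempted from the shell of the affected host. This is only a hedged sketch for ESXi 5.x (the esxcli storage namespace is not there on 4.x), and 'old-datastore' is just a placeholder label:

# esxcli storage filesystem list
(shows which volumes the host still has mounted)
# esxcli storage filesystem unmount --volume-label=old-datastore
(unmounts the volume on this host only; it will refuse if something on the host still keeps the volume busy)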

Monday, 16 April 2012

My Projects: Virtualization - part 2: P2V

Here is the next post in the "My Projects" series, and the second one about virtualization.

After the migration from XEN was finished, we started to migrate all existing physical machines to virtual ones. Generally, this was one of the lightest projects of the last couple of years. First of all, VMware Converter 4.0 had been released, so all the CentOS machines were migrated smoothly, without an issue. Second, while migrating VMs from XEN I had collected so much experience that some tasks which seemed very complicated before were now kind of obvious to me (e.g. playing with LVM volumes).

What we had:

A bunch of physical servers (IBM) with a range of OSes installed (mainly RH-based).

What was done:

  • Most of the servers were migrated using VMware Converter.
  • Some of the older ones were migrated manually, as in the previous project.
  • Some servers were reinstalled as VMs, and only the services were migrated.

Result:

The result is pretty obvious for this kind of migration. Nevertheless, I'll list a couple of the benefits:
  • Increased reliability. Very small system outage after a H/W failure (which has even happened twice; HA automatically migrated all VMs to another ESXi host).
  • Uninterrupted maintenance. I just migrate all VMs to another host in the cluster during an upgrade.
  • Energy savings. I cannot provide exact figures, but I was really surprised when I reviewed the last report. We save really a lot.
  • Convenience. Adding or removing some disk space/RAM/vCPUs is just several mouse clicks now.

Saturday, 13 August 2011

Preconfigured ESXi on USB flash drive

It has been quite a while since my last post. Too much work and almost no free time. However, I've learned a few interesting things, and I'm going to share them. The first one is about ESXi deployment.

We have up to 20 ESX nodes in our offices worldwide, and sometimes it was hard to deploy them remotely with the help of local support staff. Moreover, even in the datacenter at the site where we're sitting, it's quite uncomfortable to do maintenance if you forgot to take warm clothes.

I knew there was a possibility to install ESXi on a USB flash drive, but I was interested in pre-deployment. The way turned out to be very easy.
To deploy a clean installation, all you need to do is:
a) download the ESXi ISO, for example here.
b) extract the file named imagedd.bz2 from it
c) bunzip2 it :)
d) write it to your flash drive. I used dd under Linux, like the following:

# dd if=imagedd of=/dev/sdb bs=1M
(Please check which /dev/sd? device your flash drive actually is. If you just run the line above as-is, you can destroy existing data! Use it only if you understand what it does, and at your own risk!)
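
Before touching dd, I'd double-check which device node the stick actually got. A quick sketch on an ordinary Linux box (the device names here are only examples):

# dmesg | tail
(right after plugging the stick in, the kernel log shows which sdX it was attached as)
# fdisk -l
(list all disks and make sure the size of the chosen /dev/sd? matches your flash drive, not a hard disk)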

Done! Now you have a fresh bootable flash drive with a still-unconfigured ESXi.
The next thing I did was simply connect it to a laptop (I had a Lenovo G550) and boot it up. You can use any PC or laptop with H/W virtualization support. The only issue I had with this budget laptop running ESXi is that its keyboard doesn't work, so I just plugged in an external one. Now you can configure the network and then make, remotely with the vSphere Client, the configuration changes that are not possible from the console.

The main problem we had is that after you select several interfaces to be used for NIC teaming (a port channel on the Cisco switch side), ESXi by default uses "Route based on the originating virtual port ID". And that doesn't work well with the Cisco port-channel configuration, which requires "Route based on IP hash" to be set (src-dst-ip hashing on the switch side).
An important note for those who have found that NIC teaming is not working correctly! In contrast to ESX 4.0, which we had used before, we found that ESXi 4.1 does not leave the NIC teaming configuration of the management interface at the default (inherited from the virtual switch config), but configures it independently! This strange behavior cost us two hours before we found it, because we checked the NIC teaming configuration of the vSwitch many times and had no idea that the Management Network had a different one.
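
For what it's worth, on later ESXi releases (5.x) both policies can also be checked and forced to IP hash from the command line. This is only a sketch of what I would try there, not what we did on 4.1 (we used the vSphere Client); vSwitch0 and "Management Network" are the usual default names, adjust them to your setup:

# esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
(sets the vSwitch-level teaming policy to "Route based on IP hash")
# esxcli network vswitch standard portgroup policy failover get --portgroup-name="Management Network"
(shows whether the Management Network port group overrides the vSwitch policy)
# esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --load-balancing=iphash
(forces the port group to the same policy, which is exactly the override that bit us on 4.1)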

Next I configured all the DNS settings, all VM and VMkernel networks, NTP, etc. Then I closed the vSphere Client and made the final network configuration to be used in the datacenter. Shutdown... Ready!

All I needed to do after that was go to the server room (or send the flash drive to the remote office), plug the flash drive into the server (in the case of the IBM x3650 M2 and M3 there is a special USB port on the RAID controller for that purpose), boot from USB (for the mentioned IBM servers, boot from HardDisk1), and select the network interfaces for the Management Network after ESXi boots.

That is the way I do ESXi deployments now. If you have any suggestions, you're welcome to share them.