As a result of Hurricane Matthew, our business shut down all of its servers for two days.
One of those servers was an ESXi host with an attached HP StorageWorks MSA60.
When we logged into the vSphere client, we noticed that none of our guest VMs are available (they are all listed as "inaccessible"). When we look at the hardware status in vSphere, the array controller and all attached drives appear as "Normal", but the drives all show up as "unconfigured disk".
We rebooted the host and tried going into the RAID configuration utility to see what things look like from there, but we received the following message:
"An invalid drive movement was reported during POST. Changes to the array configuration following an invalid drive movement can lead to loss of old configuration information and contents of the original logical drives."
Needless to say, we are really confused by this, because nothing was "moved"; absolutely nothing changed. We simply powered up the MSA and the host, and we have been having this problem ever since.
We have two main questions/concerns:
Since we did nothing more than power the devices off and back on, what could have caused this to happen? We of course have the option to rebuild the array and start over, but I'm leery about the possibility of this happening again (especially since I have no idea what caused it).
Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore from our VM backups?
- Since we did nothing more than power the devices off and back on, what could have caused this to happen? We of course have the option to rebuild the array and start over, but I'm leery about the chance of this happening again (especially since I don't know what caused it).
Any number of things. Do you schedule reboots on all of your gear? If not, you should, for this reason alone. On the one host we have, XS decided the array wasn't ready in time and didn't mount the main storage volume on boot. Always good to find these things out ahead of time, right?
- Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore from our VM backups?
Possibly, but I've never seen that particular error. We're talking very limited experience here. Depending on which RAID controller the MSA is connected to, you may be able to read the array information from the drives on Linux using the md utilities, but at that point it's faster just to restore from backups.
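For what it's worth, if you do go down that road, and assuming the drives show up on a Linux box as plain block devices with md-compatible metadata on them (a big if; /dev/sdb below is just a placeholder for whichever disk you're inspecting), the usual starting point would be something like:

mdadm --examine /dev/sdb
mdadm --examine --scan
mdadm --assemble --scan --readonly

--examine dumps whatever RAID superblock it finds on a disk, --examine --scan summarizes any arrays it can identify, and --assemble --scan tries to piece arrays back together from that metadata (--readonly keeps it from writing anything while you look). Keep in mind that if the drives were configured through an HP Smart Array controller, the on-disk metadata is HP's own format and mdadm won't recognize it, so this sketch only really applies to md-style software RAID.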
I actually rebooted this host multiple times about a month ago when I installed updates on it. The reboots went fine. I also completely powered that server down at around the same time because I added more RAM to it. Again, after powering everything back on, the server and RAID array information was all intact.
Does your normal reboot routine for the host include a reboot of the MSA? Could it be that they were powered back on in the wrong order? MSAs are notoriously flaky; that's likely where the problem is.
I would call HPE support. The MSA is a flaky unit, but HPE support is very good.
We unfortunately don't have a "normal" reboot routine for any of our servers :-/.
I'm not really sure what the proper order is :-S. I'd assume that the MSA would get powered on first, then the ESXi host. If that's correct, we have already tried doing that since we first discovered this issue today, and the problem persists :(.
We don't have a support contract on this server or the attached MSA, and they are probably way out of warranty (ProLiant DL360 G8 and a StorageWorks MSA60), so I'm not sure how much we would have to spend to get HP to "help" us :-S.