One of our VPSes became unbootable for some reason, so I am trying to mount its disk image, unfortunately without much success:
root@ns1:/root#
cloudmin mount-system --host ns1.domain.tld
Mounting filesystem for ns1.domain.tld …
… failed : Failed to mount /home/servers/ns1.domain.tld.img on /mnt/kvm-ns1.domain.tld : mount: you must specify the filesystem type
cloudmin mount-system --host name
[--dir mount-point]
[--want-dir directory]
The problem is that the Graphical Console link is missing. When starting the VM, it gives:
… started, but a problem was detected : KVM instance was started, but could not be pinged after 60 seconds
I know the situation is bad, but I really hope the .img file is still usable in some way. I need to crack it open and get into it.
root@ns1:/home/servers#
cloudmin mount-system --host ns1.domain.tld
Mounting filesystem for ns1.domain.tld ..
.. failed : Failed to mount /home/servers/ns1.domain.tld.img on /mnt/kvm-ns1.domain.tld : mount: you must specify the filesystem type
root@ns1:/home/servers#
mount /home/servers/ns1.domain.tld /mnt/test -t ext3
mount: /home/servers/ns1.domain.tld is not a block device (maybe try `-o loop'?)
root@ns1:/home/servers#
mount /home/servers/ns1.domain.tld /mnt/test -t ext3 -o loop
loop: can't delete device /dev/loop1: Device or resource busy
mount: wrong fs type, bad option, bad superblock on /dev/loop1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
I wonder what kind of other methods are there to try to open this file?
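One thing worth checking: "you must specify the filesystem type" often means the .img is a whole-disk image (partition table plus partitions) rather than a bare filesystem, so the ext3 superblock does not sit at byte 0 of the file. A minimal sketch of how to probe and mount it at a partition offset (the start sector 2048 below is a hypothetical example; use whatever fdisk actually reports for the real image):

```shell
#!/bin/sh
# Inspect the image first (run as root on the real file):
#   file /home/servers/ns1.domain.tld.img
#   fdisk -lu /home/servers/ns1.domain.tld.img   # note the partition's start sector
#
# If fdisk reports a partition starting at, say, sector 2048, the
# filesystem begins start_sector * 512 bytes into the image:
START_SECTOR=2048   # hypothetical -- replace with the value fdisk prints
SECTOR_SIZE=512
OFFSET=$((START_SECTOR * SECTOR_SIZE))
echo "mount -o loop,offset=$OFFSET -t ext3 /home/servers/ns1.domain.tld.img /mnt/test"
#
# Alternatively, kpartx maps each partition inside the image to its own
# device node, which can then be mounted normally:
#   kpartx -av /home/servers/ns1.domain.tld.img
#   mount /dev/mapper/loop0p1 /mnt/test
```

If the image really is a whole-disk image, the plain `-o loop` mount attempted above would fail exactly as shown, because it points the loop device at the partition table instead of the filesystem.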
For some reason this system was missing its .swap file, so I copied one from another VM (I know this may not be right, but I don't know how or where to obtain the missing swap file) and rebooted. I finally got the Graphical Console, but it shows the system is not bootable, as in the screenshot.
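For what it's worth, a swap image does not need to be copied from another VM; a fresh one can be created from scratch. The path and size below are purely illustrative (check how Cloudmin names the swap files of your working VMs, and match the original size):

```shell
#!/bin/sh
# Create a fresh swap image instead of reusing another VM's.
# Location and 64 MiB size are illustrative only -- a real VM swap
# would live next to the .img file and be sized like the original.
SWAP_IMG="${TMPDIR:-/tmp}/ns1.domain.tld.swap"
dd if=/dev/zero of="$SWAP_IMG" bs=1M count=64 2>/dev/null
chmod 600 "$SWAP_IMG"
mkswap "$SWAP_IMG"
```

Copying another VM's swap file is mostly harmless to data (swap contents are discarded at boot), but a freshly created one avoids any size or UUID mismatch.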
I also noticed that working VMs show:
Current use Mounted on / as Linux EXT3
and the corrupted one is not mounted as ext3; it only shows:
We’ve already contacted Jamie and he said he will take a look at our system.
dmesg | tail -30 command is giving:
root@ns1:/root#
dmesg | tail -30
br0: port 14(tap12) entering disabled state
SELinux: 2048 avtab hash slots, 278061 rules.
SELinux: 2048 avtab hash slots, 278061 rules.
SELinux: 9 users, 12 roles, 3900 types, 205 bools, 1 sens, 1024 cats
SELinux: 81 classes, 278061 rules
EXT3-fs (loop0): error: can't find ext3 filesystem on dev loop0.
device tap12 entered promiscuous mode
br0: port 14(tap12) entering forwarding state
br0: port 14(tap12) entering disabled state
device tap12 left promiscuous mode
br0: port 14(tap12) entering disabled state
device tap12 entered promiscuous mode
br0: port 14(tap12) entering forwarding state
br0: port 14(tap12) entering disabled state
device tap12 left promiscuous mode
br0: port 14(tap12) entering disabled state
EXT3-fs (loop1): error: can't find ext3 filesystem on dev loop1.
EXT3-fs (loop1): error: can't find ext3 filesystem on dev loop1.
device tap12 entered promiscuous mode
br0: port 14(tap12) entering forwarding state
tap12: no IPv6 routers present
br0: port 14(tap12) entering forwarding state
br0: port 14(tap12) entering disabled state
device tap12 left promiscuous mode
br0: port 14(tap12) entering disabled state
EXT3-fs (loop2): error: can't find ext3 filesystem on dev loop2.
device tap12 entered promiscuous mode
br0: port 14(tap12) entering forwarding state
tap12: no IPv6 routers present
br0: port 14(tap12) entering forwarding state
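The repeated "can't find ext3 filesystem" lines mean the kernel found no ext3 superblock at the start of the loop device, which is consistent with the filesystem starting at some offset inside the image. The ext2/3/4 magic number 0xEF53 sits at bytes 56–57 of the superblock, which itself begins 1024 bytes into the filesystem, i.e. byte 1080 from wherever the filesystem starts (stored little-endian: 53 ef). A sketch of probing candidate offsets for that signature (the offsets in the comments are guesses to try, not known values):

```shell
#!/bin/sh
# Probe a disk image for the ext2/3/4 superblock magic (0xEF53) at a
# given byte offset. The magic lives 1080 bytes past the start of the
# filesystem, stored little-endian as the two bytes 53 ef.
probe() {
    img=$1; fs_start=$2
    magic=$(dd if="$img" bs=1 skip=$((fs_start + 1080)) count=2 2>/dev/null \
            | od -An -tx1 | tr -d ' \n')
    [ "$magic" = "53ef" ] && echo "ext superblock at byte offset $fs_start"
}
# On the real image (path from this post; offsets are guesses to try):
#   probe /home/servers/ns1.domain.tld.img 0          # bare filesystem
#   probe /home/servers/ns1.domain.tld.img 1048576    # partition at sector 2048
```

If neither offset hits, `fdisk -lu` on the image (or `dumpe2fs` once an offset is found) would be the next step; if no signature exists anywhere, the superblock itself may be damaged.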
There were no outages or other unusual events; besides, the several other VMs on this Cloudmin host are running just fine. However, I have a theory about what might have happened.
This particular VM was initially created with the hostname '000', to which Cloudmin appends the domain part, making it '000.domain.tld'. Later it was assigned a client domain name, and the hostname in the Cloudmin interface was changed to 'ns1.clientdomain.tld'.
Some time later, we created another VM with the hostname '000' (just to keep it at the top of the list of VMs), which didn't come up and caused 'ns1.clientdomain.tld' to go down, apparently because Cloudmin created files for two different VMs with the same names. Our mistake was to go ahead and delete the new '000'; I am afraid the swap file for the earlier client VM was deleted together with '000'.
If this was the cause, then it is a very serious bug, and Cloudmin should have checks to avoid file-name collisions when a user tries to add or delete two VMs with identical names.
Cloudmin should have prevented this by blocking creation of a VM with the same underlying name as an existing one … but just in case, I'll add a check to the next Cloudmin release to prevent a VM from being created that would overwrite the disks of an existing one.
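In spirit, the check described above can be as simple as refusing to create a VM whose backing files already exist. A purely illustrative sketch (the paths and `.img`/`.swap` naming mimic this post, not Cloudmin's actual internals):

```shell
#!/bin/sh
# Refuse VM creation if its backing files already exist, so a hostname
# collision cannot silently overwrite another VM's storage.
# Illustrative only -- naming convention assumed from this post.
vm_files_free() {
    base=$1
    for f in "$base.img" "$base.swap"; do
        if [ -e "$f" ]; then
            echo "refusing: $f already exists" >&2
            return 1
        fi
    done
    return 0
}
# Usage: vm_files_free /home/servers/000.domain.tld || exit 1
```

The same guard on deletion (only remove files recorded as belonging to that specific VM, rather than anything matching the hostname) would have protected the swap file in this scenario.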