Unable to re-mount storage after VM reboot

  • Question

  • I have a VM on Azure with a 200 GB disk attached. After the server restarted, I wasn't able to mount the disk. I don't know if I'm missing something or doing something wrong. I was able to follow the instructions here [1] up to a point, but then I'm asked to format the disk and I don't want to lose the data I have on it:

    ~$ azure vm disk list healthwitz-crawler-1
    info:    Executing command vm disk list
    + Fetching disk images                                                         
    + Getting virtual machines                                                     
    + Getting VM disks                                                             
    data:    Lun  Size(GB)  Blob-Name                                 OS   
    data:    ---  --------  ----------------------------------------  -----
    data:         30        healthwitz-crawler-1-os-1024.vhd          Linux
    data:    0    200       healthwitz-crawler-1-20160913-163107.vhd       
    info:    vm disk list command OK

    ~$ sudo fdisk -l
    Disk /dev/sdc: 200 GiB, 214748364800 bytes, 419430400 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    (there's no entry for /dev/sdc1)

    ~$ sudo fdisk /dev/sdc

    Welcome to fdisk (util-linux 2.27.1).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.

    /dev/sdc: device contains a valid 'ext4' signature; it is strongly recommended to wipe the device with wipefs(8) if this is unexpected, in order to avoid possible collisions

    Device does not contain a recognized partition table.
    Created a new DOS disklabel with disk identifier 0xca1eef67.

    Command (m for help): ^C

    [1] https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-classic-attach-disk
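
    I haven't written anything to the disk yet (I quit fdisk before issuing the w command). Would a read-only check like the following be safe to confirm the ext4 signature fdisk mentioned? (/dev/sdc is the device name from the fdisk -l output above.)

```shell
# Read-only inspection of the raw device; neither command writes to the disk.
sudo blkid /dev/sdc     # should report TYPE="ext4" if the signature is intact
sudo file -s /dev/sdc   # should print something like "Linux rev 1.0 ext4 filesystem data"
```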

    Monday, November 28, 2016 4:09 PM

All replies

  • Hi,

    Normally, rebooting the server will not detach the attached disk. To troubleshoot this issue more efficiently, we could run a few tests:

    1. Regarding this entry in the disk list output:

       data:    0    200       healthwitz-crawler-1-20160913-163107.vhd

    Is this the blob you stored your data on, or did you create a new disk with the CLI cmdlet (azure vm disk attach-new myVM 200)?

    We should check the storage account in the Azure portal and confirm that healthwitz-crawler-1-20160913-163107.vhd is the disk you stored your data on.

    If we find another VHD, we could try attaching that disk to the VM and use fdisk -l to show the details.

    2. We should also collect logs about the disk. You can use this command:

    $sudo grep SCSI /var/log/messages
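
    Note that the log location depends on the distribution: RHEL/CentOS write to /var/log/messages, while Debian/Ubuntu use /var/log/syslog, and systemd-based systems also keep the kernel messages in the journal. A sketch of the alternatives (the device name sdc is taken from your fdisk output):

```shell
# Search whichever syslog file exists on this distribution
sudo grep -i scsi /var/log/messages /var/log/syslog 2>/dev/null
# On systemd systems, search the kernel messages in the journal as well
sudo journalctl -k --no-pager | grep -i -e scsi -e sdc
```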

    If you still have questions, feel free to post back here. Thanks.

    Best Regards,

    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Tuesday, November 29, 2016 4:24 AM
  • Hi Andrei,

    Thanks for posting here!

    The output of the fdisk utility indicates that, at this point, you have a file system without any partitions, which is why you don't see an entry for /dev/sdc1. Can you confirm whether you had that partition prior to the server reboot? Also, were any changes made to the server before the reboot?
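
    If the ext4 signature fdisk reported is still valid, one cautious thing you could try (a sketch only; /mnt/recovery is just an example mount point) is mounting the whole device read-only, since the file system appears to live directly on /dev/sdc rather than on a partition:

```shell
# Mount the raw device read-only so nothing further is written to it
sudo mkdir -p /mnt/recovery
sudo mount -o ro /dev/sdc /mnt/recovery
ls /mnt/recovery    # if your data appears here, the file system is intact
```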



    Md. Shihab

    Tuesday, November 29, 2016 4:55 AM
  • Hello and thank you for helping.

    Jason_ye, regarding #1 in your post: I only have one VM, with its 30 GB OS VHD and the extra 200 GB VHD, so I can rule out a mix-up. I would have attached a screenshot of the Azure interface, but it seems new users aren't allowed to post photos or links. Regarding #2, unfortunately the logs only go back 7 days and this happened before that, so I couldn't find anything.

    Md Shihab, before the reboot I did have /dev/sdc1 mounted on that VM, from the 200 GB VHD. I don't know whether any changes were made, as I didn't reboot the server myself and don't know what caused the restart. I only found a mention in the Mongo log file (MongoDB was storing its database on that drive) that the server got restarted and that MongoDB then couldn't access its data files.

    Tuesday, November 29, 2016 8:33 AM
  • Hi Andrei,

    Could you try mounting that disk on another VM, if possible, to see if it helps? Meanwhile, we will check this with our internal teams and get back to you.
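
    Before mounting it elsewhere, it may also be worth running a read-only consistency check; with the -n option, e2fsck answers "no" to every repair prompt, so it makes no changes to the disk:

```shell
# Read-only file system check: -n means never modify the device
sudo fsck.ext4 -n /dev/sdc
```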


    Md. Shihab 

    Tuesday, November 29, 2016 11:59 AM