Disks larger than 1023 GB

    Question

  • Hello Guys,

    We are currently practicing and learning VM migration from on-premises to Azure, and we have run into the following issue:

    We don't have a problem with "normal"-sized servers (physical or virtual), but we do have a problem with disks larger than 1023 GB. We are not able to migrate these disks directly because of Azure's blob size limitation.

    There is a workaround I found for when a bigger disk is needed: we add more data disks to the VM in Azure and aggregate them with Storage Spaces inside the VM. (Do you have a better idea at this point?) Then we copy the data over from on-premises. But the problem is this: we can spend as long as necessary copying the base data, but we would like to keep the cutover time as short as possible. (The data involved can be simple file data as well as SQL databases.)

    So I would like to ask: in your opinion, what is the best solution to the issue above? What is your recommendation? Your help is appreciated!

    Best Regards,

    Gabor
    Monday, February 20, 2017 3:20 PM

All replies

  • Azure VHD sizes are limited to 1023 GB at the moment and there isn't really any way around this. The solution is, as you mention, to use software to combine multiple disks: Storage Spaces, or software RAID in Disk Management. By doing so you also get a performance benefit, since you combine the performance of the individual disks.

    Bear in mind that each VM size has a limit on the number of VHDs that can be attached, so pick the right size for your needs.
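
    In a Windows VM the Storage Spaces side can be scripted. Here is a minimal in-guest PowerShell sketch, assuming the extra data disks are already attached and still empty (the pool, disk and volume names are just examples):

        # Gather the attached data disks that are eligible for pooling
        $disks = Get-PhysicalDisk -CanPool $true

        # Create one pool from all of them
        New-StoragePool -FriendlyName "DataPool" `
            -StorageSubSystemFriendlyName "Windows Storage*" `
            -PhysicalDisks $disks

        # Stripe across every disk (Simple = no redundancy, best throughput;
        # Azure storage already keeps replicas of each underlying VHD)
        New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" `
            -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize

        # Initialize, partition and format the resulting large volume
        Get-VirtualDisk -FriendlyName "DataDisk" | Get-Disk |
            Initialize-Disk -PartitionStyle GPT -PassThru |
            New-Partition -AssignDriveLetter -UseMaximumSize |
            Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"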

    • Proposed as answer by Bhushan Gawale Tuesday, February 21, 2017 6:16 AM
    Monday, February 20, 2017 3:27 PM
  • Thanks for the answer, Sam! Yes, the VHD sizes are unfortunately limited. :-( But what can we do if we need more storage because the server has (at least one) data disk > 1023 GB? How can we migrate its data into the Azure VM?

    Actually, my question consists of two parts:

    1. How can we provide large storage? (Storage Spaces is a possible workaround.)
    2. How can we move the large data from on-premises to Azure while minimizing the cutover time? (What is the optimal solution to reach this goal?)

    Has anybody had a similar issue?


    • Edited by Gabor Futo Tuesday, February 21, 2017 8:52 AM
    Tuesday, February 21, 2017 8:46 AM
  • It sounds like you really have two unrelated questions here.

    For the first, making a larger disk, the answer is as I mentioned: use software to combine multiple disks. If you are using Windows VMs, then Storage Spaces is the best option.

    As for moving data, this really depends on the type of data. The best bet would be to do your initial upload whilst still using your on-prem servers, then do a smaller upload to add any new data just prior to cutover; a sketch of that pattern is below.
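
    With Robocopy, for example, the seed-then-delta pattern looks roughly like this (the paths and share name are just examples):

        # Initial seed: run while the on-prem server is still live.
        # D:\Data is the on-prem source, \\azurevm\Data a share on the Azure VM.
        robocopy D:\Data \\azurevm\Data /MIR /COPYALL /R:3 /W:5 /MT:32 /LOG:C:\seed.log

        # At cutover: stop writes on the source, then run the same command again.
        # /MIR re-copies only what changed since the seed (and mirrors deletions),
        # so this final pass is short.
        robocopy D:\Data \\azurevm\Data /MIR /COPYALL /R:3 /W:5 /MT:32 /LOG:C:\delta.log

    (Databases are a different story: for SQL you would normally use backup/restore or log shipping rather than a plain file copy.)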

    Tuesday, February 21, 2017 9:34 AM
  • Yes, you're correct, these are two questions, although I think they are related. (We need to move more than 1023 GB of data into the Azure VM.)

    So the only thing we can do to solve the second question is to use a file copy tool like Robocopy, AzCopy, or similar?

    Tuesday, February 21, 2017 1:41 PM
  • There are various ways to get the data into the environment. Assuming you want the data inside a VM rather than just in blob storage, you could look at something like SFTP. Alternatively, you could move the data to blob storage first and then into your VM, using AzCopy or one of the many third-party tools for this. You could also look at the Azure Import/Export service, which lets you ship a physical disk of data to be imported.
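
    For example, the blob-storage route might look like this with the classic AzCopy command line, assuming AzCopy is installed on both ends (the account name, container, paths and key are all placeholders):

        # Hop 1, run on-premises: upload the data set into a blob container.
        $key = "your-storage-account-key"
        AzCopy /Source:D:\Data /Dest:https://mystorageacct.blob.core.windows.net/migration /DestKey:$key /S

        # Hop 2, run inside the Azure VM: pull the blobs down onto the
        # volume you built with Storage Spaces.
        AzCopy /Source:https://mystorageacct.blob.core.windows.net/migration /Dest:F:\Data /SourceKey:$key /S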


    Tuesday, February 21, 2017 3:02 PM