docker commit > invalid tar header

  • Question

  • Hi,

    After importing a couple of databases into the MS SQL Server Developer image, then stopping the container and committing it, I get the following error:

    PS > docker commit ContainerXY my/tag
    Error response from daemon: re-exec error: exit status 1: output: archive/tar: invalid tar header

    The databases are quite big, but the commit still should not fail like this.
    Is there any more information I can provide to help resolve the issue?

    PS C:\Windows\system32> docker info
    Containers: 2
     Running: 0
     Paused: 0
     Stopped: 2
    Images: 209
    Server Version: 17.03.1-ee-3
    Storage Driver: windowsfilter
    Logging Driver: json-file
     Volume: local
     Network: l2bridge l2tunnel nat null overlay transparent
    Swarm: inactive
    Default Isolation: process
    Kernel Version: 10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)
    Operating System: Windows Server 2016 Standard
    OSType: windows
    Architecture: x86_64
    CPUs: 12
    Total Memory: 63.92 GiB
    Name: S6
    Docker Root Dir: D:\System\Data\Docker
    Debug Mode (client): false
    Debug Mode (server): true
     File Descriptors: -1
     Goroutines: 22
     System Time: 2017-06-02T09:56:38.4659153+02:00
     EventsListeners: 0
    Experimental: false
    Insecure Registries:
    Live Restore Enabled: false
    PS C:\Windows\system32> docker version
    Client:
     Version:      17.03.1-ee-3
     API version:  1.27
     Go version:   go1.7.5
     Git commit:   3fcee33
     Built:        Thu Mar 30 19:31:22 2017
     OS/Arch:      windows/amd64
    Server:
     Version:      17.03.1-ee-3
     API version:  1.27 (minimum version 1.24)
     Go version:   go1.7.5
     Git commit:   3fcee33
     Built:        Thu Mar 30 19:31:22 2017
     OS/Arch:      windows/amd64
     Experimental: false
    PS C:\Windows\system32> 

    Friday, June 2, 2017 7:57 AM

All replies

  • The container filesystem is 124 GB in size.
    Friday, June 2, 2017 8:07 AM
  • Is there nothing I can do about this?

    I added a ticket on moby:

    Thursday, June 8, 2017 10:29 AM
  • Yeah, we had the same issue.

    That is a real show stopper; we really hope the single-file limitation will be removed one day.

    What you can do is recreate the biggest tables on a secondary filegroup (where you limit the files to 7 GB),

    and after you have recreated the tables and copied all the data over, remove the original tables from the primary filegroup and shrink the database when you are done.

    This takes forever, but it is a solution.
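    The steps above could be sketched in T-SQL roughly as follows. All database, file, path, and table names here are illustrative (nothing in this thread names them), and the real schema of course depends on your tables:

    ```sql
    -- Add a secondary filegroup whose files are capped at 7 GB each,
    -- so no single .ndf file exceeds the size docker commit chokes on.
    ALTER DATABASE MyDb ADD FILEGROUP SecondaryFG;
    ALTER DATABASE MyDb ADD FILE (
        NAME = MyDb_Secondary1,
        FILENAME = 'D:\Data\MyDb_Secondary1.ndf',
        SIZE = 1GB,
        MAXSIZE = 7GB,
        FILEGROWTH = 512MB
    ) TO FILEGROUP SecondaryFG;
    GO

    -- Recreate the big table on the new filegroup and copy the data over.
    CREATE TABLE dbo.BigTable_New (
        Id   INT NOT NULL PRIMARY KEY,
        Data NVARCHAR(400) NULL
    ) ON SecondaryFG;

    INSERT INTO dbo.BigTable_New (Id, Data)
    SELECT Id, Data FROM dbo.BigTable;
    GO

    -- Drop the original, rename the copy, then reclaim the freed space.
    DROP TABLE dbo.BigTable;
    EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
    DBCC SHRINKDATABASE (MyDb);
    GO
    ```

    Because the files in SecondaryFG have MAXSIZE = 7GB, SQL Server spreads the table across additional files as it grows rather than letting any single file cross the limit.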

    Monday, June 12, 2017 8:57 AM
  • Can you give me a bit more insight here?

    Which of the systems involved has a single-file limitation? Is it the tar format?
    And are you suggesting duplicating the databases into a storage layout where the database files are split into 7 GB parts?
    Isn't it possible to just restore them into such a layout?

    Wednesday, June 14, 2017 5:27 PM
  • Hi Christian,

    We didn't find out where the limitation comes from,

    but we did find out that it is a single-file size limitation somewhere between 7 and 8 GB.

    Try it yourself: use 7-Zip, take a folder, create an archive of it split into 8 GB parts, put one of the parts into a container, and try to create an image from it. You won't be able to.

    We weren't able to restore the databases in 7 GB parts. We had to completely rebuild the biggest tables with the 7 GB limitation right from the start, copy the data over, and shrink the database afterwards.

    Hope this helps you.
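    One plausible explanation for the 7–8 GB boundary (an assumption on my part, not confirmed anywhere in this thread) is the classic ustar tar header: it stores a member's size as an 11-digit octal field, and Docker image layers are tar archives, so a file larger than that field can represent could yield an unreadable header. A quick sketch of where the limit falls:

    ```python
    # The classic ustar tar header encodes a member's size as an
    # 11-digit octal number (12 bytes including the terminating NUL),
    # so the largest file size it can describe is 0o77777777777 bytes.
    MAX_USTAR_SIZE = 0o77777777777

    print(MAX_USTAR_SIZE)          # → 8589934591, i.e. 2**33 - 1
    print(MAX_USTAR_SIZE / 2**30)  # just under 8 GiB
    ```

    That ceiling of just under 8 GiB matches the observed failure for files between 7 and 8 GB; newer tar variants (GNU, pax) encode larger sizes differently, which the layer exporter apparently did not use here.
    
    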



    Thursday, June 15, 2017 7:14 AM