Azure Backup - ERROR 0x07EF9

    Question

  • Hi,

    Does anyone know what the 0x07EF9 error code stands for? I have deployed a backup agent on the machine and I am unable to complete the initial backup.

    I have searched the internet for this error message and tried all the suggestions, but none of them solved my problem. These are the steps I have already performed:

    1. Tried backing up a non-system drive
    2. Tried deleting the registration in the portal and re-registering the server
    3. Tried running the backup agent elevated as administrator
    4. Tried disabling the antivirus (Kaspersky)
    5. Moved the Scratch folder to a separate partition

    I have filtered the CBEngineCurr.errlog file for lines where Level != 'NORMAL' and got the output below (a sketch of the filter script follows the log):

    1900	2088	02/11	12:35:19.301	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	1988	02/11	12:35:20.824	22	vdshelper.cpp(923)	[000000001A800230]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x80990fb0] Disk [\\?\PHYSICALDRIVE4] is not found as unknown	
    1900	2088	02/11	12:35:21.301	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:23.301	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:25.302	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:27.303	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:29.304	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:31.304	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:33.304	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:35.305	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:37.305	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	1988	02/11	12:35:37.814	03	registryutils.cpp(314)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x80070002] : Encountered Failure: : lVal : registry.GetValueEx(pwstrValueName, pllValue)	
    1900	2088	02/11	12:35:39.305	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:41.305	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:43.305	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	1988	02/11	12:35:43.867	18	fsutils.cpp(4133)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x80070002] : GetFileAttributes failed for \\?\Volume{a376f9cf-6bc3-4709-ade7-0fab4a1b1f24}\System Volume Information\Dedup	
    1900	1988	02/11	12:35:43.867	18	fsutils.cpp(657)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x80070002] : Failed to get attributes for \\?\Volume{a376f9cf-6bc3-4709-ade7-0fab4a1b1f24}\System Volume Information\Dedup	
    1900	1988	02/11	12:35:44.202	22	agentutils.cpp(306)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x80070002] : Encountered Failure: : lVal : registry.GetValue(hook, &dwRegValue)	
    1900	1988	02/11	12:35:44.202	70	onlinesubtask.cpp(436)	[000000001A946D90]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	ACTIVITY	COnlineSubTask::DeactivateSubTask => Deactivating SubTask	
    1900	1988	02/11	12:35:44.202	32	fileprovider.cpp(1361)	[000000001A902FC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	FileProvider:UploadMetadata:: Error while uploading Metadata. Hr: = [0x1d2bb720]. volume path = \\?\Volume{a376f9cf-6bc3-4709-ade7-0fab4a1b1f24}\	
    1900	1988	02/11	12:35:44.202	32	extentutils.cpp(639)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	ExtentUtils: Error while setting data bit map for volume [\\?\Volume{a376f9cf-6bc3-4709-ade7-0fab4a1b1f24}\]. Hr: = [0x80070020]	
    1900	1988	02/11	12:35:44.202	32	extentutils.cpp(441)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	ExtentUtils:AddStreamName:: Error occured Hr: = [0x80070020]	
    1900	1988	02/11	12:35:44.202	32	fileprovider.cpp(1763)	[000000001A902FC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	FileProvider: Error while setting up DataBitMap. Hr: = [0x80070020]. MountInfoId= {F7C37748-8026-496F-B28B-4E49949C69C2}	
    1900	1988	02/11	12:35:44.202	70	acceptdatasetsubtask.cpp(1145)	[000000001A946D90]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x80070020] CAcceptDatasetSubTask Failed: Error in EndMetadata Phase	
    1900	1988	02/11	12:35:44.202	32	fileprovider.cpp(1021)	[000000001A902FC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x80070020] Setting error on File provider 000000001A902FC0. DlsErrorCode = 0x7ef9	
    1900	1988	02/11	12:35:44.202	32	extentutils.cpp(570)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x80070020] ExtentUtils:IterateDirectory- Failure occured while iterating over a path \\?\Volume{a376f9cf-6bc3-4709-ade7-0fab4a1b1f24}\System Volume Information	
    1900	1988	02/11	12:35:44.289	18	dsmsendersubtaskbase.cpp(155)	[000000001A8AB7E0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	CDsmSenderSubTaskBase received session closed completion in WAIT state	
    1900	1BF8	02/11	12:35:44.289	70	onlinesubtask.cpp(436)	[000000001A8A3E80]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	ACTIVITY	COnlineSubTask::DeactivateSubTask => Deactivating SubTask	
    1900	1988	02/11	12:35:44.289	18	dsmsubtaskbase.cpp(226)	[000000001A8AB7E0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Session closed before data move completed	
    1900	2088	02/11	12:35:45.306	71	dscontext.cpp(157)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Last completed state for Ds Id (105553193863727) is 18	
    1900	2088	02/11	12:35:45.306	71	replicator.cpp(257)	[000000001D17B200]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	FATAL	Hr: = [0x80070020] Replication for DS Id (105553193863727) failed. Found error set in status.	
    1900	2088	02/11	12:35:45.306	71	dscontext.cpp(163)	[00000000199E8EC0]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Ds Id (105553193863727) failed. DLS: 32505 HRESULT: 0x80070020	
    1900	2088	02/11	12:35:45.671	71	backupasync.cpp(1326)	[000000001D095D20]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x80070020] Backup Progress: Failed	
    1900	2088	02/11	12:35:46.122	22	vdshelper.cpp(923)	[000000001A800130]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x80990fb0] Disk [\\?\PHYSICALDRIVE4] is not found as unknown	
    1900	2088	02/11	12:35:46.461	03	dynamicloadedmodule.cpp(91)	[000000001A0DEE00]	F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Failed: Hr: = [0x8007007e] : Failed to load DLL [C:\Windows\system32\WSBOnline.dll]	
    1900	2088	02/11	12:35:46.461	79	bpbackupstoreutils.cpp(404)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Error trying to add regkey for completion success ignoring	
    1900	2088	02/11	12:35:46.461	79	bpbackupstoreutils.cpp(431)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Snap-In Updation skipped because dll not loaded	
    1900	2088	02/11	12:35:46.461	79	bpbackupstoreutils.cpp(171)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	WARNING	Snap-In Updation Skipped because WSBOnline.dll was not found.	
    1900	2088	02/11	12:35:47.358	71	async.cpp(960)		F6D5C7B7-79AC-4559-82C0-D3369D7BF457	FATAL	Hr: = [0x80070020] DoAsync operation failed.	
    1900	20E4	02/11	12:39:37.931	03	workitem.cpp(233)			WARNING	Idle Timer FIRED But WorkItem 000000001A97C410 Doesnt Exist! TimerOrWaitFired: True	
    1900	20E4	02/11	12:39:55.178	03	workitem.cpp(233)			WARNING	Idle Timer FIRED But WorkItem 000000001D1647E0 Doesnt Exist! TimerOrWaitFired: True	
    1900	20E4	02/11	12:39:55.238	03	workitem.cpp(233)			WARNING	Idle Timer FIRED But WorkItem 000000001D136EC0 Doesnt Exist! TimerOrWaitFired: True	
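
    In case anyone wants to reproduce the filter, here is a rough Python sketch. The log path is the default MARS agent location on my machine, and the tab-separated column layout is an assumption inferred from the lines above, so treat both as assumptions:

    # Print every CBEngineCurr.errlog line whose severity column is not
    # NORMAL (severity is one of NORMAL/ACTIVITY/WARNING/FATAL and sits
    # in its own tab-separated field).
    from pathlib import Path

    LOG = Path(r"C:\Program Files\Microsoft Azure Recovery Services Agent"
               r"\Temp\CBEngineCurr.errlog")  # assumed default; adjust to your install

    for line in LOG.read_text(errors="replace").splitlines():
        fields = line.split("\t")
        if line.strip() and "NORMAL" not in fields:
            print(line)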
    

    Thursday, February 11, 2016 1:33 PM

All replies

  • P.S. I am using the latest agent version, and no other backup solutions are in place.
    Thursday, February 11, 2016 8:47 PM
  • Hello JustinasB,

    This question should be posted in the "Azure Backup Forum" so that the right folks can get to it. Note that this has already been asked and answered there, with multiple resolutions. Please see if any of those answers help you out.

    Thanks,

    Sriprasad


    Friday, February 12, 2016 12:09 AM
  • Hi Sriprasad,

    Thank you for pointing out that this question has been asked before and that there are a number of resolutions. It would have been really helpful to have a link to the forum post that actually contained the resolution. You also proposed that this post was the answer to the question, which I don't believe it is, because I am now looking through the Azure Backup Forum for an answer!

    Matt


    Matthew Woollard

    Friday, April 22, 2016 3:11 PM
  • Same problem here, and the same frustration with these online communities; they are so fragmented.

    Try 'Unable to find changes in a file - Error 0x07EF8'. I would give the link, but I am informed that I cannot post links until my account has been validated; it has been an hour since registering with this forum and no request to validate has arrived yet. It is a pity whoever produces this resource cannot do a bit of joined-up thinking and validate my MSFT account in the backend. This is typical obstruction in the name of security: if I can log in with a valid MSFT account, then surely that automatically identifies me.

    There is also still a running sore in this thread, 'Azure Backup Error 0x07EF8' - again, the forum elects to prevent links from being posted.


    Everyone keeps blaming some kind of file locking, without any clear direction beyond that old whipping boy, AV. We are running Intune, so I would expect this not to be an issue, although past experience with MSFT product conflicts suggests this may still be a conflict. So far I have tried:

    1. Uninstalling and re-installing the Backup Agent (yes, using the latest version downloaded from the Azure Management Portal) and cleaning out the directories in Program Files.

    2. Creating a new, dedicated Recovery Services Vault.

    Still getting the same error. I would post a screenshot of the dialog, but this forum once again finds that too much of a risk...


    Monday, May 16, 2016 4:14 PM
  • Hi JustinasB,

    I am getting the same error messages, did you find a resolution for this problem?

    Regards

    Nick

    Thursday, June 23, 2016 12:42 PM
  • Hi Nick,

    Unfortunately, no. I was advised that this may be caused by the antivirus, but turning it off did not help. It is still failing daily.

    Sunday, June 26, 2016 6:29 PM
  • Hi Justin,

    Following multiple attempts over six months to get this working again, I finally appear to have had a breakthrough.

    My initial problems appeared to stem from the fact that my DPM 2012 R2 installation was running on Server 2008 R2, which had a data source size limit of 1700 GB, and the volume that was failing had steadily grown past this limit to 1850 GB.

    My only option was to upgrade to Server 2012 R2, which has a data source size limit of 54,400 GB. I did this following the instructions under "In-place upgrade of the operating system on a server running DPM isn’t supported" at https://technet.microsoft.com/en-us/library/dn554221(v=sc.12).aspx

    Once the upgrade was complete and all data sources had run their consistency checks, I was still experiencing problems with my 1850 GB data source, and the message I was getting in CBEngineCurr.errlog was [0x80990fb0] Disk [\\?\PHYSICALDRIVE4] is not found as unknown. Following investigation, I found that the VHD file being created when the job started was failing to mount. To confirm this, I looked in Disk Management after the job started and was prompted with a message to initialize Disk 4.
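
    For anyone who wants to check this without opening Disk Management, below is a rough Python sketch that shells out to PowerShell's Get-Disk cmdlet (available on Server 2012 and later); an uninitialized disk shows a PartitionStyle of RAW. Treat the exact column selection as an assumption:

    # List disks Windows is prompting to initialize (PartitionStyle stays
    # RAW until the disk is initialized as MBR or GPT).
    import subprocess

    cmd = [
        "powershell", "-NoProfile", "-Command",
        "Get-Disk | Where-Object PartitionStyle -eq 'RAW' | "
        "Format-Table Number, FriendlyName, OperationalStatus, Size",
    ]
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)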

    After multiple attempts at retrying the job, all of which failed, I decided that there was nothing to lose by deleting the VHDs and any other files in my scratch drive that were associated with the Azure backup of this data source. So I searched my Scratch/VHDs folder and identified the folder that this backup related to by looking at the "Date Modified" attribute (I knew that this was the only Online Recovery Point I had attempted on the current day). The folder was called {37c0ef58-7651-4cd8-ea3f-3d8ffe8304ef}. I then searched the entire scratch folder for "37c0ef58-7651-4cd8-ea3f-3d8ffe8304ef" and moved any files returned by the search to an alternative location so that I could restore them if I needed to. (I did have to restart the server before I could move all the files, as some were locked by a previously failed backup job.)
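
    In case it saves someone the manual hunt, here is a rough Python sketch of that search-and-move step. The scratch path, quarantine path, and GUID below are from my environment and are assumptions for anyone else; substitute your own:

    # Find every file/folder under the scratch location whose path contains
    # the GUID of the failing backup and move it aside so it can be
    # restored later if needed.
    import os
    import shutil

    SCRATCH = r"C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch"
    GUID = "37c0ef58-7651-4cd8-ea3f-3d8ffe8304ef"
    QUARANTINE = r"D:\AzureBackupQuarantine"  # anywhere outside the scratch folder

    # Collect matches first so the tree is not modified while os.walk
    # is still iterating over it.
    matches = []
    for root, dirs, files in os.walk(SCRATCH):
        for name in dirs + files:
            path = os.path.join(root, name)
            if GUID.lower() in path.lower():
                matches.append(path)
        # Don't descend into a folder that was already matched whole.
        dirs[:] = [d for d in dirs if GUID.lower() not in d.lower()]

    os.makedirs(QUARANTINE, exist_ok=True)
    for path in matches:
        try:
            shutil.move(path, QUARANTINE)
            print("moved", path)
        except (OSError, shutil.Error) as err:
            # Locked by a failed backup job; a restart freed these for me.
            print("locked or failed, skipping:", path, err)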

    I also did a search of the registry for "37c0ef58-7651-4cd8-ea3f-3d8ffe8304ef" and deleted any references that also appeared to relate to Azure or DPM. All references were found under "HKLM\Software\Microsoft\Windows Azure Backup\Config\CloudBackupProvider\". I made sure that I created a backup of these registry items first and exported the keys so that I could restore them to their original state if it turned out I had broken anything.
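
    If you want to locate the registry references without clicking through regedit, here is a minimal Python sketch that prints (but does not delete) every key or value mentioning the GUID, so you can review before removing anything. Export the key first as a backup (e.g. reg export "HKLM\Software\Microsoft\Windows Azure Backup" backup.reg), and run it from a 64-bit Python so the 64-bit registry view is searched:

    # Walk HKLM\Software\Microsoft\Windows Azure Backup and report every
    # key name or value that mentions the GUID, for manual review before
    # deleting anything in regedit.
    import winreg

    GUID = "37c0ef58-7651-4cd8-ea3f-3d8ffe8304ef"
    ROOT = r"Software\Microsoft\Windows Azure Backup"

    def walk(path):
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            return
        with key:
            n_subkeys, n_values, _ = winreg.QueryInfoKey(key)
            for i in range(n_values):
                name, data, _ = winreg.EnumValue(key, i)
                if GUID.lower() in (name + str(data)).lower():
                    print("HKLM\\" + path, "->", name, "=", data)
            for i in range(n_subkeys):
                sub = winreg.EnumKey(key, i)
                if GUID.lower() in sub.lower():
                    print("HKLM\\" + path + "\\" + sub)
                walk(path + "\\" + sub)

    walk(ROOT)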

    The first couple of attempts I made to create an Online Recovery Point after deleting these files still failed, so I thought that deleting them had had no impact on the success or failure of the job. As I had exhausted all the options I could think of at the time, I just left the server to get on with successfully backing up my other data sources and left the failing one to fail until I had other ideas or picked up anything from these forums.

    However, when I came to check on the status of all the jobs 24 hours later, I was surprised to see a job still running for my previously failing data source, showing that data was being transferred. I checked in Disk Management and the VHD had successfully mounted. The job took over 60 hours to complete and transferred over 0.5 TB of data, as it had six months of changed data to transfer, but it completed successfully, and so have all subsequent Online Recovery Points for this data source.

    I am still slightly at a loss as to which of the changes I made actually caused the Online Recovery Point to start working. I don't know what would have happened if I had left DPM to its own devices after upgrading the Windows server to 2012 R2, but I thought I would share everything I did just in case it helps.

    Monday, June 27, 2016 11:04 AM