400 bad request when Sync ...

  • Question

  • Hi all

    When I sync multiple pieces of data (I don't think the data is too big), the sync service returns error 400.

    I searched the forum; someone said it happens because the primary keys of the table contain the table name (http://social.msdn.microsoft.com/Forums/en-US/synclab/thread/0b49060b-4e6b-4165-862b-8bf62b5670d6), but I don't think that is right for my case.

    Because when I create one piece of data (a piece of data includes multiple records across multiple tables), sync works fine.

    I don't see any error in the response except error 400.

    So, can anybody explain this and show me how to fix this issue?

    Thank you.

    Friday, February 11, 2011 9:10 AM

All replies

  • Hi, does anybody have the same issue as me?

    Please help. Thank you.

    Monday, February 14, 2011 4:29 AM
  • Bruce, I wasn't getting exactly the same error you are, but try reducing the data size to a minimum and trying again. I was trying to sync a larger dataset from the client to the server and was getting errors; once I limited the size and did it incrementally, it worked.
    Monday, February 14, 2011 4:10 PM
  • Thanks for your answer.

    Can you show me more detail about limiting the sync size? For example:

    how to limit the sync size, and

    how much data sync can handle? I don't see any documentation about the sync size.

    Thank you.

    Tuesday, February 15, 2011 2:02 AM
  • I am using Siaqodb on the client, and its implementation of the Sync Framework provider follows the official guidelines, providing a field called IsDirty; that is the field that is analyzed to decide what gets sent up. I was making too many changes on the client and my sync was failing, hence my guess.

    What I did was iterate over my entities that had the IsDirty property set to true and mark only some of them per call to Synchronize, as in the sketch below.
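
    Roughly, the idea is the sketch below. Everything in it is illustrative: the Order entity, the Synchronize delegate, and the assumption that the provider clears IsDirty on whatever it uploads; it is not the actual Siaqodb API.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical entity; the real provider only requires an IsDirty field.
        public class Order
        {
            public int Id { get; set; }
            public bool IsDirty { get; set; }
        }

        public static class BatchedSync
        {
            // Upload dirty entities a few at a time so no single
            // Synchronize() call exceeds the service's request size limit.
            public static void SyncInBatches(IEnumerable<Order> entities,
                                             Action synchronize,
                                             int batchSize = 40)
            {
                var pending = entities.Where(e => e.IsDirty).ToList();

                // Clear every flag up front, then re-mark one batch at a
                // time; the provider only sends rows whose flag is set.
                foreach (var e in pending)
                    e.IsDirty = false;

                for (int i = 0; i < pending.Count; i += batchSize)
                {
                    foreach (var e in pending.Skip(i).Take(batchSize))
                        e.IsDirty = true;   // mark just this batch for upload

                    synchronize();          // one small upload per batch
                }
            }
        }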

    About your questions: I don't think there is any documentation yet. You could take a look at interceptors, but I don't know how to properly use them beyond hitting them with a breakpoint :).

    Tuesday, February 15, 2011 6:57 AM
  • Thanks, TestingSync.

    But in my opinion, using IsDirty is not a solution, because we may have a big data set to sync in which everything is IsDirty.

    So I want to know: does sync have any data limit when it calls the service to copy from Isolated Storage to SQL Server?

    I think the sync service is WCF, so it will have a size limit, but I cannot find a place to configure this size (e.g. web.config...).

    And if the sync service limits the size, that should be documented, and some methods should be provided to prevent users from syncing big data.

    If sync doesn't limit the size, how can I fix my issue? :(

    Can anybody help?

    Thank you.

    Tuesday, February 15, 2011 9:33 AM
  • I have been testing this now, and if you try to sync up more than 40 entities or so (about 40 KB of data), you will get the 400 Bad Request response. I have no idea why. Let's hope someone from the CTP team takes the time to look at this, as well as all the other problems that are floating around. Unfortunately, I feel the CTP 4.0 forum is lacking a little official love, but let's hope that's because some more great stuff is coming, especially if it is documentation.
    Wednesday, February 16, 2011 7:51 PM
  • Hi TestingSync,

    Please have a look at the following thread and see if it fixes your issue.

    http://social.msdn.microsoft.com/Forums/en-US/synclab/thread/9ef93c62-ec05-438e-8473-83c371b70d4e
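
    In short, that class of 400 errors usually comes from WCF's default 64 KB request limit; raising the limits in the service's Web.config is the usual fix. A minimal sketch, in which the binding choice, the binding and service names, and the 10 MB limits are all illustrative placeholders, not values taken from the linked thread:

        <system.serviceModel>
          <bindings>
            <webHttpBinding>
              <!-- illustrative binding: raise the 64 KB defaults to 10 MB -->
              <binding name="largeSync"
                       maxReceivedMessageSize="10485760"
                       maxBufferSize="10485760">
                <readerQuotas maxArrayLength="10485760"
                              maxStringContentLength="10485760" />
              </binding>
            </webHttpBinding>
          </bindings>
          <services>
            <!-- "MySyncService" is a placeholder for your .svc service -->
            <service name="MySyncService">
              <endpoint binding="webHttpBinding"
                        bindingConfiguration="largeSync"
                        contract="MySyncService" />
            </service>
          </services>
        </system.serviceModel>
        <system.web>
          <!-- ASP.NET's own request cap, in KB; often must be raised too -->
          <httpRuntime maxRequestLength="10240" />
        </system.web>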


    SDE, Sync Framework - http://www.giyer.com
    • Proposed as answer by Ganeshan Wednesday, February 16, 2011 10:01 PM
    • Unproposed as answer by BruceDo Thursday, February 17, 2011 9:49 AM
    Wednesday, February 16, 2011 10:01 PM
  • That works, but it goes to show how obscure things are right now to someone new to the whole thing. I, for one, did not find any information by looking at the documentation. I am totally new to WCF; my Web.Config, for instance, did not even have a <system.serviceModel> element. And since feedback is the only way to get information until the docs are ready: how do we properly use WCF here? What are the available contracts? What binding options can we choose? And how do we configure the client to use those configurations?
    Thursday, February 17, 2011 6:08 AM
  • I think configuring the size is a temporary solution, because, as you know, sync supports working offline. Suppose the network is unavailable for a long time and the user creates a lot of data locally (maybe exceeding the configured limit). When the network becomes available again, the sync fails and the user will lose their data.

    Currently, as far as I know, when you call RefreshAsync, sync uploads all data from local storage to the server. So why don't we support partial sync? I mean, we should check the sync size, separate the data into many packages, and sync them one by one.

    I tried the config solution, but my application still fails because of big data.

    What do you think?

    Thank you

    Thursday, February 17, 2011 9:49 AM
  • Hello, does anybody have any suggestions?
    Monday, February 28, 2011 8:24 AM
  • Hi,

    I really need your support on this case.

    Thursday, March 10, 2011 10:29 AM
  • Bruce,
    Sorry for the delayed response. You are right that at this moment the sync isolated provider uploads all changes in one batch and does not break them up. It is a feature we are planning, but it brings up interesting issues around relationships. Without information on which entities are related to which, and by what key, the CacheController/Provider cannot group them; hence, on referential-integrity (RI) errors on the server, the forced changes flow back to the clients and you lose your changes. The sketch below shows the kind of grouping that would be needed.

    The only workaround I can recommend for you is to set the size to the maximum possible. BTW, out of curiosity, how much data are you uploading to the server now?
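
    To make the RI concern concrete: any batching scheme has to keep a parent row and all of its children in the same upload. A minimal sketch of that grouping, using hypothetical Invoice/InvoiceLine entities; this is application-level code, not anything the CacheController actually exposes.

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical parent/child pair; a child row must never land in
        // a different batch than the parent row it references.
        public class Invoice     { public int Id; public bool IsDirty; }
        public class InvoiceLine { public int Id; public int InvoiceId; public bool IsDirty; }

        public static class RiBatcher
        {
            // Each batch is one parent plus all of its dirty children,
            // so referential integrity holds for every upload.
            public static IEnumerable<List<object>> Batches(
                IEnumerable<Invoice> invoices, IEnumerable<InvoiceLine> lines)
            {
                var linesByInvoice = lines.Where(l => l.IsDirty)
                                          .ToLookup(l => l.InvoiceId);

                foreach (var invoice in invoices.Where(i => i.IsDirty))
                {
                    var batch = new List<object> { invoice };
                    batch.AddRange(linesByInvoice[invoice.Id]); // children ride with their parent
                    yield return batch;
                }
            }
        }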


    Maheshwar Jayaraman - http://blogs.msdn.com/mahjayar
    Thursday, March 10, 2011 7:36 PM
  • Yes, I know this is a difficult problem: splitting a sync into many parts interacts with the database relationships. (I have thought about this and have no solution yet.)

    But, as you know, the whole point of sync is saving data to local storage; if we cannot support this case, sync has no meaning, in my opinion.

    Actually, we don't know, and don't need to know, exactly how much data will be uploaded to the server. Because we support offline storage, users can create as much data as they want on the client side. (Do you agree with me?)

    Thank you.

    Friday, March 11, 2011 2:16 AM
  • How about creating multiple scopes for your data and then uploading them in parallel? Is that something you are willing to consider?
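
    A rough sketch of what that could look like on the client, using the RefreshAsync call mentioned earlier in this thread; the CacheController constructor, the two offline-context objects, and the RefreshCompleted event are assumed shapes, not the verbatim CTP 4.0 API, so verify the real signatures before copying.

        // Sketch only: two scopes synced concurrently, each as its own
        // (smaller) upload.
        var serviceUri = new Uri("http://example.com/SyncService.svc"); // placeholder endpoint

        // ordersContext / catalogContext: the offline storage contexts
        // for the two scopes (assumed to exist already).
        var ordersSync  = new CacheController(serviceUri, "OrdersScope",  ordersContext);
        var catalogSync = new CacheController(serviceUri, "CatalogScope", catalogContext);

        ordersSync.RefreshCompleted  += (s, e) => Console.WriteLine("orders scope done");
        catalogSync.RefreshCompleted += (s, e) => Console.WriteLine("catalog scope done");

        // Both uploads run at the same time.
        ordersSync.RefreshAsync();
        catalogSync.RefreshAsync();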
    Maheshwar Jayaraman - http://blogs.msdn.com/mahjayar
    Friday, March 11, 2011 6:03 PM
  • Thanks for your suggestion, but I don't think that is a good solution.

    It looks like we would be manually separating the data into two packages :). And what happens if one of them fails to sync, and...

    I'm looking for a solution that can automatically split the sync package when it is too big.

    As you know, I am developing a project that has a large amount of data that needs to be saved.

    And we have a function called copy data, which means a customer can copy their data into many new instances, so a large data set can be multiplied many times.

    Monday, March 14, 2011 9:04 AM