Updating VFP Data Files From Remote Locations

  • Question

  • Greetings,

    We have a VFP9 SP2 database on a server that needs to be updated by users in the remote offices. Each location has a router and sees the server as a mapped drive.

    I used to have them RDP to the server and run the app.  That is no longer an option.  The remote offices must run the app locally since it is now tied to a piece of equipment connected to the local workstation. 

    My problem is that when they try to update the tables on the mapped drive, the latency is so bad, it becomes unusable.

    I rewrote the local app to xcopy the tables they need to update down to their local drive.  This is done one time when they first launch the session and takes less than 20 seconds. From this point on, they are working with local tables and there is zero latency.  When they close the app, a routine starts that appends changed records and adds new records to the live tables on the server and then closes the connection.  Error checking is in place to handle the various problems that would exist with a system like this.

    My problem is that once the tables are xcopied down to the local drive, they are still part of the DBC on the server.  I issue a FREE TABLE command for each table that I open. Regardless of what I do, it still locks the DBC on the server.

    The local app has a DBC identical to the one on the server.

    My question is: what is the best way to get data from the tables of a DBC on the server without locking the DBC?

    Any and all suggestions will be greatly appreciated.

    Dick Day


    • Edited by DickDay0 Tuesday, April 12, 2016 6:25 PM
    Tuesday, April 12, 2016 5:25 PM

Answers

    And once more, the short and fast version:

    DBCs and their DBFs are movable, whether within a LAN or to a totally different location. If you move all the files together, the relative paths don't change, and since there is no absolute link back to the DBC a DBF originally came from, it will work fine in other locations, too.

    That's one of the overall concepts of VFP data: it's movable. The only time you need to FREE a DBF is when you want to use it without a DBC, and then you lose all the features stored by the DBC, e.g. long field names and many more.

    This does not compare at all to the situation with SQL Server, where you need to detach database files to be able to move them and then attach them in another location. And even there you wouldn't move data that way for an application like yours; you'd use replication.

    If you still see problems, please post back. 


    Olaf Doschke - TMN Systemberatung GmbH

    http://www.tmn-systemberatung.de


    • Edited by Olaf Doschke Thursday, April 14, 2016 7:05 AM
    • Marked as answer by DickDay0 Thursday, April 14, 2016 1:57 PM
    Thursday, April 14, 2016 7:04 AM

All replies

    My kudos for developing the merge back into the server. That's a hard part. What about concurrent changes of data at headquarters and the remote locations?

    Detail problems.

    I wonder what really gets locked. When you copy files, nothing is locked against shared read/write access. Does your application open tables exclusively? Then that can't be done while a file is being copied.

    If the relative paths are the same, there is no need to FREE the DBFs. The header entry in a DBF that points to its DBC does so with a relative path, unless the DBC is on another drive, which shouldn't be the case.

    Overall this solution doesn't have a bright future, as copying all the data will take longer and longer. But in general I am an advocate of local installations of software and data with syncing of data only, if just in light of the latency, the speed of local apps, and the cost of CALs for terminal servers.

    Bye, Olaf.


    Olaf Doschke - TMN Systemberatung GmbH

    http://www.tmn-systemberatung.de

    Tuesday, April 12, 2016 6:42 PM
  • Hi Dick.

    We also have multiple businesses using RDP; that would be the ideal option. This might sound silly, but how would upgrading your internet speed affect your situation?


    Mike z

    Tuesday, April 12, 2016 7:06 PM
  • Thank you for such a quick reply.

    What about concurrent changes of data at headquarters and the remote locations?

    Each record has a time-stamp of when it was last edited. When the local app grabs the data from the server, it also creates a time-stamp. If a record is changed at the local level, the program compares the date/time of the download to the date/time that the record was changed by someone at headquarters. If a record was modified at HQ after the download, the live record is not modified and a supervisor receives an internal message detailing what changes would have been made had they been posted. That may happen, but not very often.
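
    A minimal sketch of that merge-back, with hypothetical field names (pk for the primary key, lastedit for the edit time-stamp), hypothetical paths and a variable tDownloaded holding the download time - not the actual production code - could look like this:

    LOCAL tDownloaded, loRec
    tDownloaded = {^2016-04-12 08:00:00}        && captured when the copy was taken (assumed)
    USE ("Z:\livedata\tables\any.dbf") IN 0 ALIAS liveany SHARED    && live table on the mapped drive (hypothetical path)
    USE ("C:\appdata\tables\any.dbf") IN 0 ALIAS localany SHARED    && local working copy (hypothetical path)
    SELECT localany
    SCAN FOR lastedit > tDownloaded             && only records changed locally
        SCATTER MEMO NAME loRec                 && current local record as an object
        SELECT liveany
        LOCATE FOR pk = loRec.pk
        DO CASE
        CASE !FOUND()
            INSERT INTO liveany FROM NAME loRec && new record: add it to the live table
        CASE liveany.lastedit <= tDownloaded    && untouched at HQ since the download
            GATHER NAME loRec MEMO              && safe to overwrite the live record
        OTHERWISE
            * conflict: leave the live record alone and message a supervisor instead
        ENDCASE
        SELECT localany
    ENDSCAN
    USE IN liveany
    USE IN localany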

    I wonder what really gets locked. When you copy files, nothing is locked against shared read/write access. Does your application open tables exclusively? Then that can't be done while a file is being copied.

    The only time a file is opened exclusively is when there is a problem and a re-index is being done. There is a system-wide flag that is set when maintenance is being done, and the local app checks this flag before attempting the download.

    Overall this solution doesn't have a bright future, as copying all the data will take longer and longer.

    The data files are archived every 8 weeks, so the file sizes are generally quite small. Copying the files needed, when they are at their largest, takes about 20 seconds. When I tried to open the live tables on the server for editing by the local app, it took 20 seconds between keystrokes. The latency made that approach unusable.

    I think what I need to be able to do is get the records down to the local app  so there is no link back to the DBC on the server.  It's that connection that causes the latency.  I had hoped to be able to copy the tables down to the local app with no connection back to the DBC on the server but am running into problems.

    I tried copying the DBC and tables down to the local app so the tables would see a local DBC instead of the one on the server. I've tried a number of different ways, but I always end up with tables linked back to the server.

    Thank you!


    Dick Day

    Tuesday, April 12, 2016 7:22 PM
    Most of the remote locations are in very small towns and have no access to high-speed connections. We have two 100/12 Mbps connections here, but the remote sites typically have 3/1 or 5/2. RDP worked great for years since all we were doing was sending mouse/key clicks. Hardware prevents us from using RDP. I understand that the latest RDP version offers resource sharing 'if' the hardware is on the approved list. Ours didn't make the list :)

    I do not like the approach we are taking... download, add/edit and then upload changes.  But I don't know how else to handle it.

    Thanks for the suggestions!



    Dick Day

    Tuesday, April 12, 2016 7:32 PM
    If the data never grows large, fine. You also seem to misunderstand me: though the size is a problem in general with an ever-growing database, I am an advocate of local installations and data syncing (I'm pro, not contra).

    I think what I need to be able to do is get the records down to the local app  so there is no link back to the DBC on the server.  

    Why would that be the case?

    The links are not to an absolute file location.

    If you copy a DBC plus its DBFs, or - as in your case - already have a similar DBC at the remote locations and just update the DBF copies, there is nothing more to do.

    Really nothing.

    Just do this:

    CD D:                                                 && not strictly needed, all paths below are absolute
    MKDIR D:\temp
    MKDIR D:\temp\tables
    CREATE DATABASE D:\temp\whatever.dbc                  && creates and opens the DBC
    CREATE TABLE D:\temp\tables\any.dbf (id int)          && the table is added to the open DBC
    USE                                                   && close the table again
    CLOSE DATABASE
    DO (ADDBS(HOME())+"Tools\Hexedit\Hexedit.app") WITH "D:\temp\tables\any.dbf"

    At address 0x00000040 you'll see that the link from the DBF to its DBC is ..\whatever.dbc and not D:\temp\whatever.dbc.

    So the DBF continues to work when copied to another location, as long as the DBC is one folder above it. Don't FREE the DBF, simply copy it as is.
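
    If you want to check that backlink without the hex editor, a small sketch like this reads it straight from the DBF header with the low-level file functions (the table name is just the example from above; for a VFP table the 263-byte backlink area sits right before the first data record):

    LOCAL lnHandle, lcHeader, lnFirstRecord, lcBacklink
    lnHandle = FOPEN("D:\temp\tables\any.dbf")         && read-only
    lcHeader = FREAD(lnHandle, 32)                     && fixed 32-byte header record
    * bytes 8-9 hold the offset of the first data record (little endian)
    lnFirstRecord = ASC(SUBSTR(lcHeader, 9, 1)) + ASC(SUBSTR(lcHeader, 10, 1)) * 256
    = FSEEK(lnHandle, lnFirstRecord - 263)             && start of the 263-byte backlink area
    lcBacklink = FREAD(lnHandle, 263)
    = FCLOSE(lnHandle)
    ? CHRTRAN(lcBacklink, CHR(0), "")                  && prints ..\whatever.dbc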

    Only the relative locations need to match. A copy of the database can also be at F:\comingfromheadquarter\dbcs\whatever.dbc, if the table goes into F:\comingfromheadquarter\dbcs\tables\any.dbf.

    The absolute path doesn't matter in the normal case.
    Do you have your DBC on a different drive than the DBFs?

    Bye, Olaf.


    Olaf Doschke - TMN Systemberatung GmbH

    http://www.tmn-systemberatung.de

    Tuesday, April 12, 2016 7:53 PM
  • Thank you for explaining it so clearly.   I'm approaching 70, so things are not as clear as they once were :)

    Just to be clear, the local drive does not need to have a dbc on it since it will be connected to the dbc on the server, correct?  Prior to doing the xcopy, should I delete the old tables that were copied down from the server in the last session?

    Thank you.


    Dick Day

    Wednesday, April 13, 2016 3:30 PM
  • OK, I'll try to write this slower.

    The DBF and the DBC have to exist in the right relative paths. That includes that a DBC has to exist in the first place.

    There is no absolute link in a DBF, so the copies don't stay connected to their original DBC. There is no path in a DBF connecting it to your original server. Simply reread what I already wrote.

    Since the composition of a database typically only changes with application updates, you only need to recopy the DBC/DCT/DCX files when that happens. You have to copy them at least once, and they have to go into the same relative location, or, put the other way around: the DBF copies have to go into the same location relative to the copied DBC.
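
    A minimal sketch of that download step - with hypothetical server and local paths and a hypothetical table name, not your actual file names - is nothing more than reproducing the relative layout locally:

    LOCAL lcServer, lcLocal
    lcServer = "Z:\livedata\"          && mapped drive at the remote office (assumed)
    lcLocal  = "C:\appdata\"           && local working folder, incl. a tables subfolder (assumed to exist)
    * DBC/DCT/DCX only need recopying when the database structure changes:
    COPY FILE (lcServer + "whatever.dbc") TO (lcLocal + "whatever.dbc")
    COPY FILE (lcServer + "whatever.dct") TO (lcLocal + "whatever.dct")
    COPY FILE (lcServer + "whatever.dcx") TO (lcLocal + "whatever.dcx")
    * the tables (plus CDX/FPT where present) are copied at every session start,
    * into the same subfolder relative to the DBC:
    COPY FILE (lcServer + "tables\any.dbf") TO (lcLocal + "tables\any.dbf")
    COPY FILE (lcServer + "tables\any.cdx") TO (lcLocal + "tables\any.cdx")
    * from here on the app works purely locally, no FREE TABLE needed:
    OPEN DATABASE (lcLocal + "whatever.dbc")
    USE any IN 0 SHARED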

    Since you bring the data back and merge it into the headquarters DBC/DBFs, everything on the remote sites can be overwritten. You could also profit from syncing only changed data in the direction from headquarters to the remote locations; it's just the same idea the other way around. But since it only takes 20 seconds, you'd best copy full files to the remote sites and then bring back the changes for merging in. That way you won't have trouble with any branch location drifting off into a totally different state of data.

    Obviously you can't just bring back the remote data and overwrite the central data; then only one location could work at a time. But since the central data is the master, the remote locations can simply be overwritten by it for the sake of simplicity.

    What I personally plan to do is replication, but that's much too complex for your situation. As the DBC/DCT/DCX files are typically small, you can copy them anyway within the 20 seconds you have measured as the maximum time.

    Bye, Olaf.


    Olaf Doschke - TMN Systemberatung GmbH

    http://www.tmn-systemberatung.de


    Wednesday, April 13, 2016 7:46 PM
  • Olaf, you made it very clear, thank you.  I'm nearly 70 years old and very much appreciate "slower".

    Thank you again.


    Dick Day

    Thursday, April 14, 2016 2:00 PM