My client manufactures a device that takes various measurements of a given sample and writes the results to a database. The amount of data generated is relatively small.
In the current configuration, each device has its own computer, and that computer runs an instance of SQL Server. The devices are not networked.
The client wants to modify the device such that roughly fifty of them can be connected to a local area network.
The devices use various consumables that are lot numbered; once a consumable has been used, it cannot be used again. These lot numbers are written to the database when a sample is measured. This requirement is notable because in the current configuration a device has no way of knowing whether a consumable has already been used by a different device. In the proposed network configuration, the expectation is that each device will have immediate visibility into the consumables used by every other device.
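To make the consistency requirement concrete, here is a rough sketch of the check each device must perform before using a consumable (Python for brevity; the names are illustrative, and in practice this would be a query against the lot-number table). The key point is that the check-and-record step must be atomic across the whole fleet, which is trivial against one central database but only as current as the last completed sync in a peer-to-peer design:

```python
# Illustrative sketch only: used_lots stands in for the lot-number table.
used_lots = set()

def try_consume(lot_number):
    """Record a lot number as used; return False if it was already
    used anywhere. Must be atomic with respect to all devices."""
    if lot_number in used_lots:
        return False
    used_lots.add(lot_number)
    return True
```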
The client wants a recommendation on which of the two architectures should be used:
1.) Each device will write data to its own local database, as it does now. Microsoft Sync Framework will be installed on each device and synchronization will be performed in near real-time. Each device will periodically broadcast a heartbeat (intervals of 1 to 5 minutes have been proposed) containing a CRC checksum of its data. Every device on the network will listen for heartbeats, and a device will initiate a sync when a received heartbeat CRC differs from its own.
2.) The local database will be removed from each device, and all devices will write to a central database server instead.
This question is largely about the feasibility of the first option. Specific questions are:
Does Sync Framework provide tooling for generating a CRC, broadcasting a heartbeat, or queueing sync requests while a device is busy syncing with another device?
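If Sync Framework does not provide this, here is a rough sketch of the comparison logic we would otherwise have to build ourselves (Python for brevity; in reality this would be .NET code, and the heartbeat itself would go out over something like UDP broadcast, which is omitted here):

```python
import zlib

def state_checksum(rows):
    """CRC-32 over a canonical serialization of the local lot-number
    table. Rows must be sorted first, or two devices holding identical
    data in different physical order would compute different CRCs."""
    payload = "\n".join(repr(r) for r in sorted(rows)).encode("utf-8")
    return zlib.crc32(payload)

def needs_sync(local_rows, heartbeat_crc):
    """A device initiates a sync when a peer's heartbeat CRC differs
    from the CRC of its own data."""
    return state_checksum(local_rows) != heartbeat_crc
```

One design point this exposes: a CRC only says "something differs somewhere," not which rows, so every mismatch triggers a full sync session; it also says nothing about which peer has the newer data.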
The client is concerned that if a central database server is used, a server failure will render every device on the network unusable. Does using Sync Framework effectively mitigate this risk?