I have had to force quit FMP — which I hate doing — on occasion. I believe that when the connection to FMS is severed, FMS somehow “protects” the affected file. Perhaps by writing to temp files? Can anybody confirm and discuss how this works?
Force-quitting FMP while an FMS-hosted file is open does not trigger an integrity check of the file at the next opening, whereas it does when the file was hosted on FMP itself.
I would also be interested to learn how FMS deals with such cases in order to protect the affected file(s).
This is where FileMaker's use of a local cache, both on the client device and on the server itself, helps.
Hosted File:
- Changes made on the local client device are saved to this temp cache.
- When committed ( or when hitting the cache limit ), the changes are flushed to the server.
- If a client disconnects, it just can't flush its changes to the server. FileMaker Server waits for a timeout on the connection and then disconnects the user if there is no response during its normal interval.
- The net effect is that the user's changes didn't make it to the server, but the file on the server is never force-closed in an unexpected state, so there is no damage. ( see the sketch after this list )
- Exception: you are committing changes to the schema, and the network connection gets severed while that is happening. That can lead to unexpected issues. ( see the effect on a local file below )
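To make that sequence concrete, here is a minimal sketch in Python of the flow described above. It is NOT Claris's actual implementation; every class, method, and constant ( HostedClientSession, CACHE_LIMIT, TIMEOUT, etc. ) is hypothetical, invented only to illustrate why a force-quit client leaves the hosted file intact.

```python
# A minimal sketch of the hosted-file flow described above. All names and
# values here are hypothetical; this is not Claris's implementation.

class HostedClientSession:
    CACHE_LIMIT = 64 * 1024  # hypothetical cache size, in bytes

    def __init__(self, server, session_id):
        self.server = server
        self.session_id = session_id
        self.cache = []        # uncommitted changes live only on the client
        self.cache_bytes = 0

    def record_change(self, change: bytes):
        # Edits accumulate in the local temp cache first.
        self.cache.append(change)
        self.cache_bytes += len(change)
        if self.cache_bytes >= self.CACHE_LIMIT:
            self.flush()       # hitting the cache limit forces a flush

    def commit(self):
        self.flush()           # an explicit commit also flushes

    def flush(self):
        # Only at this point do changes cross the wire to the server.
        for change in self.cache:
            self.server.apply(change)
        self.cache.clear()
        self.cache_bytes = 0


class Server:
    TIMEOUT = 60.0  # hypothetical; stands in for FMS's normal interval

    def __init__(self):
        self.file_state = []      # the hosted file sees only flushed changes
        self.last_heartbeat = {}  # session_id -> last time client responded

    def apply(self, change: bytes):
        self.file_state.append(change)

    def heartbeat(self, session_id: str, now: float):
        self.last_heartbeat[session_id] = now

    def disconnect_unresponsive(self, now: float):
        # A force-quit client simply stops responding. After the timeout the
        # server drops the session; unflushed client-side changes are lost,
        # but the hosted file was never left half-written, so it stays intact.
        dead = [sid for sid, t in self.last_heartbeat.items()
                if now - t >= self.TIMEOUT]
        for sid in dead:
            del self.last_heartbeat[sid]
        return dead
```

The point of the shape: the server's file only ever receives complete, flushed changes, so a vanished client can cost you unflushed work but can't corrupt the hosted file.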
Local Files Crashing:
- The only component here is the local cache. Since there is no other air gap to protect the file, a force close on a local file can have a much wider range of effects.
- A local file crash can cause undetected problems that may not surface for days, months, or even years.
- The rule of thumb for many devs is: NEVER use a crashed FileMaker file ( local, or if the server itself crashed ). Revert to a known-good backup, import any "new" data, and trash the crashed file. ( see the sketch after this list )
- Some people may use a crashed file, but you really need proof that the file is fine vs just assuming it "looks" ok.
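For concreteness, here is that revert-and-import rule sketched as a small Python script. The paths and backup layout are made up, and the actual import of "new" records happens inside FileMaker itself; the sketch only covers the file shuffling around it.

```python
# A sketch of the post-crash workflow above, under an assumed backup layout.
# All paths are hypothetical; the record import happens inside FileMaker.

import shutil
from pathlib import Path

CRASHED = Path("/data/live/Sales.fmp12")           # hypothetical live file
BACKUP = Path("/backups/2024-06-01/Sales.fmp12")   # last known-good backup
QUARANTINE = Path("/quarantine")

def recover():
    # 1. Take the crashed file out of service -- never keep using it.
    QUARANTINE.mkdir(exist_ok=True)
    shutil.move(str(CRASHED), str(QUARANTINE / (CRASHED.name + ".crashed")))

    # 2. Promote the last known-good backup in its place.
    shutil.copy2(BACKUP, CRASHED)

    # 3. In FileMaker, import any records created after the backup from the
    #    quarantined copy, then discard it. The crashed file is used as a
    #    read-only data source and never goes back into production.

if __name__ == "__main__":
    recover()
```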
This is a great answer Josh#1, and I have nothing to add except a theoretical musing. Doesn't it seem like it should be possible for the schema commit payload to be uploaded first to a cache on the server and then, after being verified as well-formed, saved by the server? That way the server would no longer depend on a valid client connection for the operation to complete.
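For what it's worth, the staged commit I'm imagining would look roughly like this. It is only a sketch of the idea, not anything FMS actually does, and every name in it is hypothetical.

```python
# A sketch of a staged, verify-then-apply schema commit. Not FMS behavior;
# all names are hypothetical.

import hashlib

class StagedSchemaCommit:
    def __init__(self):
        self.staging = {}  # commit_id -> payload bytes accumulated so far
        self.schema = b""  # stands in for the live schema

    def upload_chunk(self, commit_id: str, chunk: bytes):
        # The client streams the payload into a server-side staging area.
        # A dropped connection only ever leaves an incomplete staged blob.
        self.staging[commit_id] = self.staging.get(commit_id, b"") + chunk

    def finalize(self, commit_id: str, expected_sha256: str) -> bool:
        payload = self.staging.pop(commit_id, b"")
        # Verify the payload arrived complete and well-formed BEFORE touching
        # the live schema. If the client vanished mid-upload, this check
        # fails and the live schema stays untouched.
        if hashlib.sha256(payload).hexdigest() != expected_sha256:
            return False
        self.schema = payload  # atomic swap; no client needed from here on
        return True
```

The client would send the checksum up front ( or with the finalize call ), so a half-delivered payload could never be mistaken for a complete one.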
This mid-commit disconnect statement makes it sound like the local cache is getting sent up to the server in little pieces and committed bit by bit, which may be true ( I don't know ), but it seems like a fragile design choice on Claris's part. Do you have any more details about this aspect?
I've never found the risk ( likelihood of it happening ) to be all that high. The threat ( potential harm ) is clearly very high, and the window for it increases as network lag increases. As for the exact mechanism they use to send the changes to the server and then process them, I don't have all of the details. Though this would be a great topic for an under-the-hood session; it's even possible it is discussed in one of them already.
I've worried about this much less than about a local file crashing. I NEVER run a business-critical file locally; that I have seen cause a lot of damage.
Agreed, this is the real lesson, thx.