Memory / Cache assignment experience for FMP Client

The current FMP 18 cache setting can be raised to a maximum of 2048 MB. To get things done faster we always increase the default from 128 MB to 512 MB, which speeds up PDF creation and improves overall stability, provided the host’s RAM configuration allows it.

What is your take on this?
THANKS!

Confirmed, 512 MB is the standard setting here.


We sit at 512 as well. Personally, I’ve run into some weird issues and performance degradation when I’ve pushed it to the max.

We try to leave the cache at the default where possible, bearing in mind that our users get very fast bandwidth between our hosted virtual FileMaker machines and our FileMaker Server machines. For users with more complex roles we will increase it to 256 MB or 512 MB, but no more.

Interestingly, if we were to increase every user from 128 MB to 256 MB it would cost us an extra £700/month, which is £8,400 per year, and double that if we standardised everyone on 512 MB. So it is pretty critical to us, particularly as we’ve not increased our prices since we started hosting eight years ago.


Is that because of the RAM allocated to the Citrix environment?

Hi Josh

Sort of, although we’ve abandoned Citrix for RemoteApp; the principle is exactly the same. We never put more than 50 users on any one streaming server, and we have to estimate how much RAM to allocate to each server to service all of its users.
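To put numbers on that estimating exercise, here is a back-of-envelope sketch in Python. The base footprint and headroom figures are illustrative assumptions, not our actual sizing model:

```python
# Rough RAM sizing for a streaming server hosting FileMaker users.
# All figures are illustrative assumptions, not a real sizing model.

USERS_PER_SERVER = 50     # we cap each streaming server at 50 users
BASE_MB_PER_USER = 300    # assumed non-cache footprint per user

def server_ram_gb(cache_mb: int, headroom: float = 0.25) -> float:
    """Estimate server RAM if every user's cache is set to cache_mb.

    Assumes the cache eventually fills up on top of the base footprint,
    which matches the jump we see when the setting is raised.
    """
    per_user_mb = BASE_MB_PER_USER + cache_mb
    return USERS_PER_SERVER * per_user_mb * (1 + headroom) / 1024

for cache in (128, 256, 512, 2048):
    print(f"{cache:>4} MB cache -> ~{server_ram_gb(cache):3.0f} GB per server")
```

Even under these rough assumptions, standardising on 2048 MB would multiply the per-server RAM requirement several times over, which is where the costs above come from.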

If you look at the following:
[Screenshot: RemoteApp per-user RAM usage]

You can see the average usage is between 300 MB and 400 MB per user. This varies wildly depending on the user, with multi-window users taking up much, much more RAM. The worst offenders are our developers, where heavy layout and Script Debugger/Data Viewer use can easily push the allocation over 2 GB. It only ever goes up; it never goes down until the user logs out. Our developers are encouraged to log out at lunch to free up memory (FileMaker also performs better after this).
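For anyone who wants to gather the same per-user numbers, here is a minimal sketch using Python’s psutil package. Matching on the process name is an assumption; adjust it for your platform and FileMaker version:

```python
# Sum resident memory of FileMaker processes, grouped by user.
# Requires `pip install psutil`. Matching on a "filemaker" substring
# in the process name is an assumption -- adjust for your install.
from collections import defaultdict
import psutil

usage_mb: dict[str, float] = defaultdict(float)
for proc in psutil.process_iter(["name", "username", "memory_info"]):
    try:
        name = proc.info["name"] or ""
        if "filemaker" in name.lower():
            user = proc.info["username"] or "unknown"
            usage_mb[user] += proc.info["memory_info"].rss / 1024**2
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue

for user, mb in sorted(usage_mb.items(), key=lambda kv: -kv[1]):
    print(f"{user:<24} {mb:8.0f} MB")
```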

As far as my entry in the screenshot above is concerned: I opened FMPA 18 with no files open, and the process was consuming about 250 MB of RAM at the 128 MB default cache. I changed the cache to 2048 MB and, as you can see, the RAM allocated to my account immediately shot up to nearly 1.8 GB. With RAM and vCPU being our most expensive resources, we could easily need to double or quadruple our server RAM specifications, hence the cost mentioned previously.

Regards
Andy


Thanks for chiming in @jormond. Could you describe a little more precisely what the drawbacks were when you increased it further? Given an abundance of fast RAM and fast SSD connectivity, shouldn’t it fly?
(I have a client who always buys top-of-the-line hardware and is contemplating maxing out the FM memory setting for a faster user experience.)

To be honest, I can’t say I understand the mechanism well enough to comment extensively. For the most part, the behavior is only seen under specific circumstances that I can’t readily reproduce.

My tests were essentially: turn up half the users’ cache 100 MB at a time and see if there is a difference. Most of the time there was no difference, I presume because not much data was being held in memory; the actions weren’t long and drawn out, and they didn’t involve a ton of data.
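In pseudo-Python, the protocol looked roughly like the sketch below. `run_task_suite` is a hypothetical placeholder; the real measurements were taken by hand against the actual solution:

```python
# Sketch of the stepped test: raise the test group's cache 100 MB at a
# time and compare task timings. run_task_suite() is a hypothetical
# placeholder for timing the real workflows by hand.
import statistics
import time

def run_task_suite() -> float:
    """Hypothetical hook: run the representative workflows, return seconds."""
    start = time.perf_counter()
    # ... perform the real CRUD / report / PDF tasks here ...
    return time.perf_counter() - start

def median_time(samples: int = 5) -> float:
    return statistics.median(run_task_suite() for _ in range(samples))

for cache_mb in range(128, 2049, 100):
    # At each step: set the test group's cache to cache_mb, then measure.
    print(f"cache {cache_mb:>4} MB -> median task time {median_time():.4f}s")
```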

What I do know is that past 1024 MB, that half of the users in that database started seeing slowdowns on some actions, specifically CRUD operations involving loops and other heavy data processing. The rest of the time they were fine (small data sets, commits, etc.), so not much was being held in the cache anyway.

The big misunderstanding some people have is thinking that the cache setting determines how much RAM FileMaker uses on their machine. That’s not exactly how it works.

Sorry I can’t be more specific. I’ve literally only tuned it up and down to see the impact. Too small or too large both seem to lead to excessive requerying of data, if that makes sense.
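To illustrate the “too small” side of that, here is a toy LRU-cache simulation. It is purely conceptual (FileMaker’s actual caching strategy isn’t documented), but it shows how the hit rate, and therefore the amount of requerying, depends on cache size:

```python
# Toy LRU cache: hit rate vs cache size for a skewed workload.
# Conceptual only -- FileMaker's real caching strategy is not documented.
from collections import OrderedDict
import random

def hit_rate(cache_slots: int, accesses: list[int]) -> float:
    cache: OrderedDict[int, None] = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)          # mark as recently used
        else:
            cache[block] = None
            if len(cache) > cache_slots:
                cache.popitem(last=False)     # evict least-recently-used
    return hits / len(accesses)

random.seed(1)
# 80/20 workload: most accesses hit a small hot set, the rest are scattered.
workload = [random.randrange(100) if random.random() < 0.8
            else random.randrange(10_000) for _ in range(50_000)]
for slots in (50, 500, 5_000):
    print(f"{slots:>5} slots -> hit rate {hit_rate(slots, workload):.1%}")
```

Every miss stands in for a round trip back to the server, which is where the “too small” penalty shows up; the “too large” penalty presumably comes from the cost of maintaining the cache itself.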


The cache has to be maintained somehow. But FileMaker raised the maximum to 2 GB, so if the hardware can handle it, surely it must help? Otherwise, why would they allow the increase?

Keep in mind there are two separate mechanisms:

  1. The cache file. The file FileMaker Pro Advanced maintains in sync with the server, based on data you have accessed.
  2. The memory cache. These are changes you have made that haven’t yet been written to disk, and back to the server. This is the cache that the setting controls (see the toy model below).
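Here is a toy model of that second mechanism, as I understand it. This is my mental model only, not Claris’s documented internals:

```python
# Toy write-back memory cache: edits accumulate in RAM and only become
# durable on flush(). A mental model of mechanism 2, not Claris's design.
class MemoryCache:
    def __init__(self, limit_mb: float):
        self.limit_mb = limit_mb
        self.pending: dict[str, bytes] = {}   # record id -> unsaved change
        self.used_mb = 0.0

    def write(self, record_id: str, data: bytes) -> None:
        self.pending[record_id] = data
        self.used_mb += len(data) / 1024**2
        if self.used_mb >= self.limit_mb:
            self.flush()                       # cache full: forced write-back

    def flush(self) -> None:
        # Stands in for the real write back to disk and to the server.
        # Anything still in self.pending when the client crashes is lost.
        self.pending.clear()
        self.used_mb = 0.0
```

A bigger limit just means flushes happen less often, so more unsaved work can be sitting in memory at any one time.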

There are definitely cases where a user may require their memory cache to be 2 GB, but I would think you’d already be stressing FileMaker at that point.

Isn’t no. 1 the same as the temp file?

From my understanding of hardware and the OS, if the machine is fast it should not stress FM to max out the FileMaker cache/memory setting… I wish Claris would tell us…

Thanks!

Yes, the cache file is the temp file.

“Should not” is the key phrase here. It depends 100% on what is happening. The trouble with explaining it is that that level of computer science is a step beyond me. There is so much going on that, without seeing it, I can’t even come up with a theoretical scenario.

The thought of some data being in the cache file and some being held in memory… well, I can see that causing some cross-stream queries to happen. However, I have no idea how it works. Maybe shoot an email to Claris and see if they are able to shed some light on it.


Looking back over some older threads on FMForums.com, increasing the cache should help FileMaker Pro Advanced. The risk is lost data: any crash, network interruption, etc. that prevents FMPA from sending data back to the server will cause that data to be lost.
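In terms of the toy model earlier, the failure window looks like this (illustrative only, not FM internals):

```python
# The data-loss window in miniature: a change held only in the memory
# cache never survives a crash. Illustrative toy code, not FM internals.
pending = {}                              # in-memory cache of unsaved edits

pending["invoice 42"] = "status = paid"   # user commits an edit...
# ...then a crash or network drop happens before the write-back:
pending.clear()                           # what the crash effectively does

print(pending.get("invoice 42"))          # None -- the edit is gone
```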

This is likely why I’ve never gone too far down the “how much can I use” path. This is, in my opinion, the bigger issue.
