FMS 19.6.3 poor performance on M1 mini / Ventura

I recently upgraded this server to Ventura 13.6.3 and FileMaker Server 20.3.1.31 and have re-run the test.

  • FMS 19.6.3 (Server Script), M1 Ventura 13.5: 51 seconds
  • FMS 20.3.1.31 (Server Script), M1 Ventura 13.6.3: 50 seconds

So, updating to FMS20 does not seem to help; whatever slowdown was introduced between FMS18 and FMS19 seems to persist in FMS20.

I'm still seeing the same pathological behavior, where both the FileMaker Server process and the FileMaker Scripting process never use more than about 20% of the CPU.


Why don't you update macOS? I saw speed increase after updating. (FM 20.3.1 is now compatible with Sonoma.)

That's part of the plan, I'm just going step-by-step.

The next thing I tried was the new FM 20 feature which allows you to choose how data is flushed during loops:

Using Loop Flush: Defer, with or without wrapping the loop in a transaction, has no effect on performance in this test.

I also find that Replace Field Contents vs. Set Field inside a Loop has no meaningful performance difference either.


Aha! Fixed it! :joy:

As background, this table has a ton of Auto Enter indexed fields, many of which reference data from related tables. They are all triggered by a single field named TriggerCalc.

This time, I tried a different approach, removing the trigger from these fields one by one and doing a benchmark after each.

I quickly found that one single calculation causes the problem:

  • it pulls data from a related table (A-->B which many of these calculations do)
  • it also pulls data from a table that is related to the related table (A-->B-->C).

It was fairly easy to replace this A-->B-->C relationship with a direct relationship (A-->C) and now it's much faster:

  • FMS 19.6.3 (Server Script), using the A-->B-->C calculation: 50 seconds
  • FMS 19.6.3 (Server Script), using the A-->C calculation: 9 seconds

I also notice the CPU usage is significantly better.

And the test where I run 4 PSOS scripts simultaneously works much better:

  • 1 PSOS script (1000 records) completes in 10 seconds (about 100 records/second)
  • 4 PSOS scripts (4000 records) complete in 20 seconds (about 200 records/second)
  • 10 PSOS scripts (10k records) complete in 60 seconds (about 166 records/second) -- a bit slower, which isn't surprising, since this is an M1 CPU with only 4 performance cores.
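As a quick arithmetic check on those throughput figures (a trivial sketch; the record counts and timings are the ones reported in this post):

```python
# Throughput check for the PSOS runs reported above.
def rate(records, seconds):
    """Whole records processed per second."""
    return int(records / seconds)

print(rate(1000, 10))    # 1 script   -> 100/sec
print(rate(4000, 20))    # 4 scripts  -> 200/sec
print(rate(10_000, 60))  # 10 scripts -> 166/sec
```

Going from 1 to 4 scripts doubles aggregate throughput; going to 10 actually drops it slightly, consistent with only 4 performance cores being available.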

And now CPU usage on the server is hitting (or exceeding) 100%.



@xochi

I wouldn't expect RFC and Loop > Set Field constructs to differ much in performance, but the loop structure provides many more opportunities to do multiple things, as well as to collect missed records that RFC would simply skip.

Now that you have this "resolved", a few questions about what the code itself is doing:

  • If the TOGs involved in this script are extensive, the number of relationships radically impacts performance, regardless of whether the code touches each one, because each relationship gets evaluated with every record edit. If you can create a TOG that includes only the data tables you need, that would be a big step toward optimizing performance.
  • Is there a reason to run through 100% of the records and run an update?
  • If data in child TOs is being denormalized into the parent TO, is this part of the batch change? Is there a reason NOT to do this on record commit at edit time?

Thanks for posting the results and for explaining how you achieved them.

Whenever I see posts that highlight poor performance, I'm always curious to know which link in the chain is causing the problem.


@Kirk

  • I agree, there seems little or no difference in speed between a Loop/Set Field vs. Replace Field Contents (RFC) and the Loop is much more flexible.
  • My TOGs are somewhat of a mess, as this is an app that originally started in FileMaker 6. Do you have more information about what

"each [TOG] gets evaluated with every record edit"

means, and whether anything has changed with the new FM 20 features that claim to cache relationships? Claris says about FM 20:

"To improve performance, the database engine now caches relationships to evaluate dependencies."

  • The reason for this batch process is so that as many fields as possible can be indexed. For example, imagine a field TotalSalaryThisMonth = Sum ( Payroll::PaycheckAmount ). This could be done as a normal calculated field, but it gives terrible performance if you try to use that field in a Find or in a relationship.
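That trade-off exists outside FileMaker too: an aggregate computed at query time can't be indexed, while a denormalized copy can. A minimal sketch of the idea in Python/SQLite (table and column names are invented for illustration, not taken from the actual solution):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE payroll (employee_id INTEGER, paycheck_amount REAL)")
cur.executemany("INSERT INTO payroll VALUES (?, ?)",
                [(1, 1000.0), (1, 250.0), (2, 900.0)])

# "Unstored calc" equivalent: the aggregate is computed at query time
# and cannot be indexed.
total = cur.execute(
    "SELECT SUM(paycheck_amount) FROM payroll WHERE employee_id = 1"
).fetchone()[0]

# "Batch update" equivalent: denormalize the total into a column that
# CAN be indexed, so finds and joins on it become index lookups.
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, total_salary REAL)")
cur.execute("INSERT INTO employee SELECT employee_id, SUM(paycheck_amount) "
            "FROM payroll GROUP BY employee_id")
cur.execute("CREATE INDEX idx_total ON employee(total_salary)")

print(total)  # 1250.0
```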

My next step is to see if I can create a tiny demo database which illustrates the problem, and send it to Claris.

Caching relationships is an attempt - TBD how successful - to help with the fact that in FM each relationship is, in effect, a query. As a result, any change to any record in any TO in a given TOG will evaluate all relationships to see whether the change impacts anything in other TOs.

Great low-code technique; suffers performance issues at scale. Honza @ 24u did a bunch of benchmarking on TOG size impacts.
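The cost argument can be sketched as a toy model (pure assumption for illustration; real FileMaker internals are more nuanced): if every record edit re-evaluates every relationship in the TOG, batch work grows linearly with TOG size.

```python
# Toy cost model: total relationship evaluations = edits x relationships.
def batch_cost(edits, relationships_in_tog, cost_per_eval=1):
    """Total relationship evaluations triggered by a batch of record edits."""
    return edits * relationships_in_tog * cost_per_eval

print(batch_cost(1000, 3))   # lean, purpose-built TOG -> 3000 evaluations
print(batch_cost(1000, 60))  # sprawling legacy TOG    -> 60000 evaluations
```

Under this model, trimming a 60-relationship TOG down to the 3 tables a script needs is a 20x reduction in work per batch.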

If you avoid unstored calculations - which includes summary fields (although they are some of the most performance-optimized calculations in FM) - on layouts, particularly list view rows and portals, you can get some outstanding performance improvements.

A “fake” performance gain can be had by hiding unstored calcs until the list is displayed, then flipping the hide condition. It is no faster, but to the user, seeing the data quickly gives the perception of performance, and the math occurs secondarily.

If you can script the results into standard data fields, that can yield performance gains. The function GetSummary ( summaryField ; breakField ) can be your friend.
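In spirit, the GetSummary-into-data-field technique is just grouping sorted records and storing each subtotal as plain data. A rough analogy in Python (field names invented; this is not FileMaker syntax):

```python
from itertools import groupby

# Records sorted by the break field, as GetSummary requires.
records = sorted(
    [{"dept": "Sales", "amount": 100},
     {"dept": "Sales", "amount": 50},
     {"dept": "Ops", "amount": 75}],
    key=lambda r: r["dept"],
)

# Equivalent of looping over the sorted records and writing the
# GetSummary result for each break group into a regular data field.
subtotals = {dept: sum(r["amount"] for r in group)
             for dept, group in groupby(records, key=lambda r: r["dept"])}

print(subtotals)  # {'Ops': 75, 'Sales': 150}
```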

OK, this is interesting: I made a simple benchmark test, and not only does it demonstrate the problem, it shows a pure performance delta of roughly 100x.

The benchmark has a fairly simple setup: 3 tables (A, B, and C) and two triggered auto-enter calc fields. One pulls data from A->C; the other pulls data from A->B->C.

The TOG looks like this:

The results are dramatic:


Begin Test 1 (PSOS Server 20.3.1)
Updated 1000 records in .44 seconds.
Rate = 2268 per second

Begin Test 2 (PSOS Server 20.3.1)
Updated 1000 records in 50.88 seconds.
Rate = 20 per second

I decided to play around with the relationships. The B->C and A->C relationships are both multi-predicate:

  • primaryKey = primaryKey (both are indexed Number fields)
  • kgDeceased = EventKind.

EventKind is an indexed text field, but what about the kgDeceased field?

It's a Calculation field with type Text, with Storage set to Global.

And that seems to be the problem.

If I change that field to either an Unstored Calculation or a Stored Calculation (with or without indexing), then the performance hit disappears:

(Unstored Calculation)
Rate = 1695 per second

(Stored, Indexed Calculation)
Rate = 1675 per second

True, 1700 records per second is not as fast as 2300 per second, but that's fair and to be expected, since it's tunneling through the A->B->C relationship.


To summarize the issue:

  • when using FileMaker Server 19 or 20...
  • a field that has an AutoEnter calculation that references a field in a table 2 relationships away...
  • and the 2nd relationship has a match field that is a Calculation field set to Global storage

Performance is nearly 100 times slower. Diagnostics on the FMS machine show that the FMSERVERD and FMSAED processes are using very little CPU, suggesting some sort of pathological situation with poor thread usage.

Solutions / Workarounds:

1. Stay with FileMaker Server 18 or earlier.

2. Change the match field type.

  • Global Calculation: Bad
  • Unstored Calculation: OK
  • Stored Calculation (no index): OK
  • Stored Calculation (with index): OK

3. Redesign the relationship.
Having a Global Calculation key field does not cause a problem when the relationship is direct (A-->C); it only causes trouble in an indirect relationship (A-->B-->C).

4. Use an intermediate calc field.
If you add a Calculation field in table B that is equal to the field value from table C, then when table A references this field in table B, there is no speed penalty.
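Workarounds 3 and 4 share one idea: don't resolve the second hop at evaluation time. A toy model with plain Python dicts standing in for tables (all names invented for illustration):

```python
# Tables as dicts keyed by primary key.
table_c = {10: {"status": "Deceased"}}
table_b = {5: {"c_id": 10}}
table_a = {1: {"b_id": 5}}

def via_two_hops(a_rec):
    """A -> B -> C: resolve both relationships at evaluation time."""
    b = table_b[a_rec["b_id"]]
    return table_c[b["c_id"]]["status"]

# Workaround 4: materialize the C value as a field on B up front...
for b in table_b.values():
    b["c_status"] = table_c[b["c_id"]]["status"]

def via_intermediate(a_rec):
    """A -> B only: the calc in A now reads a field that lives on B."""
    return table_b[a_rec["b_id"]]["c_status"]

print(via_two_hops(table_a[1]), via_intermediate(table_a[1]))  # Deceased Deceased
```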


If anyone wants to test this out, I made a demo file:
Download
username=Admin
password=benchmark

Note: I'm seeing the problem when testing only on FMS - if you run the file locally in FMP, the bug doesn't manifest.


Now THAT definitely justifies a bug report to Claris.


@xochi

This might be obvious, but I need to ask anyway: PSoS is its own user, regardless of whether it's initiated from a client script or an admin console scheduled script. As such, the globals are unique and specific to that "user".

Are you setting the global calculation AFTER you enter the script? If not, maybe it is recalculating with each OnRecordLoad operation as it traverses the records. (This is a SWAG - Software Wild-Ass Guess - no tangible basis for the question beyond gross speculation on my part.)

@kirk, good question - it's not a global field that gets set by a script; rather, it's a calculated field with global storage.

I get that - and that is the "rub": DOES a global calculation exist for all records for the PSoS "user", or does it recalc as each record is touched?

IDK - pure speculation. IF you set that global as a static value, does the performance issue resolve?

Globals and PSoS/scheduled scripts are always grey areas...

There is no client per se on FMS, only the script engine (FMSE), so maybe global calcs don't exist until a record is touched? The clue here is the difference in FMP vs. FMS execution times.

Based on what you find, maybe setting the global value AFTER the PSoS starts, rather than using a global calc, would resolve it... another SWAG.

I can say that even when it's terribly slow, the calculation results are correct, which suggests to me that the script engine does get the correct global values. Also, the fact that it only seems to manifest with an A->B->C relationship suggests to me it's some sort of weird edge case.

I'm only testing on macOS ARM servers; I would be quite interested if anyone could run the test on a Windows or Linux server to see whether it's macOS-only.

I would expect that, but if it sets the calc with each record touched, and re-evaluates the relationship, that might be the culprit.

On FMS/Linux (fmcloud.fm)
Begin Test 1 (PSOS Server 19.6.3)
Updated 1000 records in .82 seconds.
Rate = 1217 per second

Begin Test 2 (PSOS Server 19.6.3)
Updated 1000 records in 49.99 seconds.
Rate = 20 per second


Quick test: I replaced your table B calculated global with a non-global text field, and in your PSoS2 script, went to layout B and did an RFC on that field with "Deceased":

Begin Test 2 (PSOS Server 19.6.3)
Updated 1000 records in 1.09 seconds.
Rate = 918 per second

Turning that field back into a global and setting it by the PSoS2 script resulted in the same 50-second runtime.

SO: Table B, kgDeceased as a NON-global text field, not calculated. In your PSoS2 script loop, set B::kgDeceased to "Deceased" as the step before the trigger calc. Result:

Begin Test 2 (PSOS Server 19.6.3)
Updated 1000 records in 1.54 seconds.
Rate = 651 per second

BUT just remove the GLOBAL condition from the kgDeceased field - the calc becomes a STORED value for each record, without the global. Result:

Begin Test 2 (PSOS Server 19.6.3)
Updated 1000 records in 1.24 seconds.
Rate = 810 per second


Thanks, those tests are consistent with my results.

I tried another variation where the auto enter calc field in Table A grabs a calc field from table B (which itself grabs a value from table C). I thought that perhaps by breaking up the A->B->C into two parts, it would be fast, and in fact it is.

So it seems very specific to the combination of "at least 2 relationships away, and the key field on the 2nd relationship is a Calculation with Global storage".