Performance benchmarking FM18 Intel to FM19 on M1

I have some FileMaker processes which can take 3-4 hours to run. Can I speed them up?

My current setup:

  • FM18 Server on a 2012 Mac Mini with a Core i7

Is it worth upgrading to:

  • FM19 Server on an M1 Mac mini

According to Geekbench, the M1's single-core score is roughly 2.4x that of the 2012 i7.

So in a pure-compute workload I could get a 2.4x speedup (in theory)
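As a back-of-envelope check, here's what that ratio would mean for the job time. This is a sketch only: it assumes the process is purely CPU-bound on a single core, which is optimistic.

```python
# Rough estimate of job duration after a hypothetical single-core speedup.
# The 2.4x figure is the Geekbench single-core ratio mentioned above;
# real workloads are rarely purely CPU-bound, so treat this as an upper bound.
def scaled_hours(current_hours: float, speedup: float) -> float:
    return current_hours / speedup

for hours in (3, 4):
    print(f"{hours}h job -> about {scaled_hours(hours, 2.4):.2f}h")
```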

Also, the M1's SSD speed, memory bandwidth, and multicore performance are all much better than the Intel Mini's.

FM19 might be faster than FM18 (or could be slower?)

My question: has anyone done a similar upgrade (FM18 to FM19, Intel to M1)?

Roughly what sort of real-world performance gains have you found on long-running FileMaker Server processes?

There are so many things you can do to speed up FileMaker on your current processor. For example, just by moving loops out of scripts, you can often get a 300X+ performance improvement.
If you post more information, I'll try to help.

Thanks Oliver - I agree, one can often get 10x to 100x speedups with optimization within FileMaker (triggered stored auto-enter calc fields, better relationships, etc.).

For this discussion, I'd like to stick only to the topic of FM18 vs FM19, and Intel vs. M1.

That's fine. My jaw dropped and hit the floor when you said 3 - 4 hours! :slight_smile:

I have a script that sends out bulk emails, often about 8000 in a blast.
On a Quad-Core i7 Mac mini with FMP18 it took about 70 minutes to complete.
On an M1 Mac mini with FMP19 or FMP18 it takes about 34 minutes.

do you use HDDs or SSDs in your Mac Mini from 2012?

Hey Dave -- Why still so slow? 34 minutes??? Where is the bottleneck?

SSDs only.

Bulk processes operating on wide tables with millions of records with complicated relationships between them:

(screenshot omitted)

Lots of complicated logic such as:

  • if a person in the last 6 months had status X, and Salary Y, then for the next 5 months they get Status Z (unless A, B, or C were true in the last 12 months...)
  • if their home mailing address is valid, use it, unless it hasn't been validated in X months, in which case use their NCOA address (but only if it's newer than Y months and has a match score greater than Z)...

Many of these have already been optimized as much as possible - not to say there isn't further gain to be had, just that it may not be easy.

If I could get a 2x speedup just from hardware, that probably pays for itself instantly.

How about a 300X speed up for free?
I'd need to know more about what you're doing, and for some things I'm sure it wouldn't be 300X. :slight_smile:
If-logic, no matter how complex, isn't much of a speed factor by itself, however. With millions of records, I'd also be tempted to move some of the data to a faster data engine, like MySQL or SQL Server. That alternate data engine might not be in the cards, depending on how many scripts you have, of course. If you have a lot of scripts with loops over millions of records, that's what I'd try to refactor first.

The new Mac Studio looks nice.
Apple's Mac Studio Is the iMac Pro Reboot You've Been Waiting For | PCMag

These are individualized emails being sent through the 360Works Email plugin.
When we used fmSpark to send the blasts, it took 5 hours or more. Sometimes much longer.
Since we run these at 5 am I'm perfectly happy with 34 minutes.

It's all good.

34 minutes is really fast for 8,000 emails!

Extrapolating from a rough test (100 emails actually sent), I'd estimate 8,000 emails would take about 26 - 30 minutes using REST and no plug-in. Your plug-in is probably using similar code. With that many emails, other factors come into play, like how fast the server itself is. Anything less than 50 minutes is excellent.
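That kind of linear extrapolation can be sketched as follows. The 21-second sample time is an illustrative assumption, not a number from this thread:

```python
# Linear extrapolation from a small timed sample to a full blast.
# Assumes per-email cost is constant (ignores connection reuse, throttling).
def extrapolate_minutes(sample_count: int, sample_seconds: float,
                        total_count: int) -> float:
    per_email = sample_seconds / sample_count
    return total_count * per_email / 60

# Illustrative: if 100 emails took ~21 s, 8,000 would take ~28 minutes.
print(round(extrapolate_minutes(100, 21, 8000)))  # -> 28
```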

For the email body in my test, I sent a normal-sized paragraph and also a small attachment. (I was trying to think of a way to benchmark 8,000 email sends the way I'd do it, anyway...)

I'm way too deep into this project to make a change like this at this point, though I agree with the idea in principle.

FileMaker is great, but in certain operations it's amazingly slow - like 100x to 1000x slower than other database products. I hope that Claris keeps making incremental improvements, but sometimes I wonder if they need to consider adding new features to boost performance.

I do have an M1 mini available for testing, and since at least one person here has reported about a 2x speedup, I may do some testing to see if I get similar improvements.

Totally understand.

Can you identify where the slowness is? Loops?

I just ran a quick test

  • copied my giant database to three different Macs
  • ran one of the long-running batch processes (updating the trigger field in about 35,000 records in a complex join table, which fires a ton of auto-enter calcs that pull from multiple related tables). Stopped each one at 300 seconds.
  • I ran this test locally in FileMaker Pro (rather than on FileMaker Server), so it's not an exact test, but it should give a rough idea of cross-machine speed

Results (records processed in 300 seconds : speedup relative to baseline):

  • 3000 : 1.0x : Mac Mini 2012 (Intel i7) macOS 10.13, FileMaker Pro 18
  • 6000 : 2.0x : Macbook Pro 2019 (Intel i9) macOS 12.2, FileMaker Pro 19
  • 9000 : 3.0x : Mac Mini 2020 (M1) macOS 12.2, FileMaker Pro 19
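The relative figures above are just each machine's record count (over the fixed 300-second window) divided by the 2012 baseline; a minimal script reproducing them:

```python
# Records processed in a fixed 300-second window on each machine,
# taken from the benchmark results above.
results = {
    "Mac Mini 2012 (i7, FMP 18)": 3000,
    "MacBook Pro 2019 (i9, FMP 19)": 6000,
    "Mac Mini 2020 (M1, FMP 19)": 9000,
}
baseline = results["Mac Mini 2012 (i7, FMP 18)"]
for machine, records in results.items():
    rate = records / 300  # records per second
    print(f"{machine}: {rate:.0f} rec/s, {records / baseline:.1f}x")
```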

These results suggest I may see a roughly 3x speedup by moving to an M1 mini - whether that's purely hardware or a combination of hardware/software, I don't know (and in some sense, don't care).

No loops. The main batch process is as described in my prior post: updating a Trigger field in a large summary table, which then causes about 50 stored, indexed auto-enter calc fields to update - many of these pull data from related tables, some of which use complex relationships (e.g. 2 or 3 foreign keys in the relationship).

So I think it's pretty much just measuring the raw speed of the database engine and calculation engine.

N.B.: why do this batch process at all? By using triggered auto-enter calc fields, the table can have stored, indexed fields, which (once they are all updated) speed up further FileMaker database operations by 10x to 1000x. Normally in FileMaker, calculation fields that operate on related data cannot be stored or indexed, so performance is painfully slow.

Maybe you're just at the FileMaker limit for your complex application and faster hardware is really the easiest option. Very interesting discussion.

Another idea:

It feels like these computations are mostly CPU-bound since they are happening in a single thread. The records are logically isolated (e.g. data is stored per customer, but there are no cross-customer calculations).

I've wondered if I could speed things up by breaking the records into batches, and launching multiple batches in parallel using PSOS scripts...
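A minimal sketch of what the batch split might look like (the partitioning only; the actual parallel launch would be FileMaker's Perform Script on Server step, and the 35,000-record figure comes from my earlier test):

```python
# Split a list of record IDs into n roughly equal batches, one per
# planned PSOS call. Assumes records are independent (no cross-batch data).
def make_batches(record_ids, n_batches):
    size = -(-len(record_ids) // n_batches)  # ceiling division
    return [record_ids[i:i + size] for i in range(0, len(record_ids), size)]

batches = make_batches(list(range(35000)), 4)
print([len(b) for b in batches])  # -> [8750, 8750, 8750, 8750]
```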

PSOS was a bust, unfortunately.
Using FM18 Server, I set up a benchmark that lets me choose the number of threads.

  • Running 1 PSOS script ran at a speed of 6.6 records per second
  • Running 4 PSOS scripts, each ran at a speed of 1.1 records per second (4.4 total).
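Spelled out, aggregate throughput actually dropped when going parallel:

```python
# Numbers from the PSOS benchmark above: aggregate throughput with
# 4 scripts is lower than with 1, i.e. negative parallel scaling.
single_thread = 6.6          # rec/s with 1 PSOS script
per_thread = 1.1             # rec/s each with 4 PSOS scripts
aggregate = 4 * per_thread   # 4.4 rec/s total
efficiency = aggregate / single_thread
print(f"4 scripts: {aggregate:.1f} rec/s, "
      f"{efficiency:.0%} of single-script throughput")
```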

Watching the CPU in Activity Monitor, there's no evidence that multiple PSOS scripts are running in parallel (or, if they are, perhaps there's another limiting process that only runs in a single thread).

Does FM19 Server improve the PSOS threading behavior?

You could also create ACTUAL threads that would run in parallel. That's more programming, I know, but it's all doable (outside FileMaker).

Sounds like there are some big performance changes in 19 - including non-exclusive table Read locks, which should allow multiple users to read tables concurrently. See https://www.soliantconsulting.com/blog/filemaker-server-performance/#sharing-lock

I don't have FMS 19 running right now, so I can't benchmark just yet, but that sounds promising.
