I had a client last year whose MySQL server was horribly slow for some queries.
For example, 2000 records were done in a second, but 20000 records needed 2 minutes and 50000 records needed 10 minutes.
We wondered why things got so slow with more data, until we realized that the server's cache was smaller than that table. For every pass over the records to find something like a related record, MySQL had to re-read the whole table into memory. Once the cache size setting was raised so the table fits into the cache, the searches got much faster. Now it's about 5 seconds for 50000 records.
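For what it's worth, on a typical InnoDB setup (an assumption here, since the post only says "cache size") the relevant server variable is innodb_buffer_pool_size, and a rough way to check and raise it looks like this ('your_database' is a placeholder):

-- How big is the buffer pool right now, and how big are the tables it needs to hold?
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SELECT table_name, (data_length + index_length) / 1024 / 1024 AS size_mb
  FROM information_schema.tables
 WHERE table_schema = 'your_database';

-- Raise the pool so the hot tables fit (resizable at runtime since MySQL 5.7);
-- make the change permanent in my.cnf afterwards.
SET GLOBAL innodb_buffer_pool_size = 2 * 1024 * 1024 * 1024;  -- 2 GB, adjust to your RAM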
The same applies to FileMaker. If all the data needed for the query fits in the in-memory cache, the query is much faster than if FileMaker has to evict data from the cache to make room for new data.
On the other hand, all that cache management itself slows down data access.
You need to find a good balance between too much cache and not enough.
30k records is still a lot of records to sum. FileMaker's SQL is not as optimised as you might think. To sum records, the records have to be downloaded from the server to the client, so if you have 400 fields of data per record, that's a lot of data. You do say local, but I'm not sure whether you mean a local server or local on your machine?
Also, if you are editing any of those records at the time, SQL will perform far worse.
Also, what are you summing? Make sure it's not a summary field.
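One quick way to check is in the Data Viewer or any calculation (the table and field names here are just the ones from the test posted further down; substitute your own):

// Returns something like "Standard Number Indexed 1" for a stored, indexed number field
FieldType ( Get ( FileName ) ; "WorkerDate::Salary" )
// If the first word is "Summary" or "UnstoredCalc", the value has to be worked out per row,
// which is usually where the ExecuteSQL slowdown comes from.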
Does using PSOS obviate the problem of everything getting downloaded to the client? Also, while I'm asking dumb questions, if not using PSOS, does the data need to be downloaded to the client for each and every eSQL call? Say they are seconds apart and that the underlying data has not changed. I'm assuming the answer is yes, but I'm not sure.
Yes. This can be a significant factor in the speed bump you get with PSOS.
On the other side of the coin, any PSOS call means spinning up a session on the server. That requires resources (nothing is free). If you make many PSOS calls, you must ensure that your server has the capacity.
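For reference, a minimal sketch of that pattern (the script name and the hard-coded date are made up; the table and field names echo the test posted below). The rows stay on the server and only the result travels back:

# Client script: hand the SUM off to the server and wait for the result
Perform Script on Server [ Specified: From list ; "Sum Salaries (server)" ; Wait for completion: On ]
Set Variable [ $sum ; Value: Get ( ScriptResult ) ]

# Server script "Sum Salaries (server)": runs in its own session on the server
Set Variable [ $result ; Value: ExecuteSQL ( "SELECT SUM ( Salary ) FROM WorkerDate WHERE \"Date\" = DATE '2025-04-01'" ; "" ; "" ) ]
Exit Script [ Text Result: $result ]

As noted above, each call like this creates a session on the server, so batching work into fewer PSOS calls is usually better than firing off many small ones.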
Running some tests on a large test system. This is a table with about 200 fields and over 8 million records.
I'm running this query, which gets about 30k records:
Set Variable [ $result ; Value: ExecuteSQL ( "SELECT SUM ( Salary ) FROM WorkerDate WHERE \"Date\" = '4/1/2025'" ; "," ; "¶" ) ]
Running the script on my Mac; the server is on the LAN with Gigabit Ethernet
Pro 21.1.1, first run: SELECT SUM of 33236 records took 0:00:09 for a rate of 3693 records/second
Pro 21.1.1, second run (records already cached locally): SELECT SUM of 33236 records took 0:00:01 for a rate of 33236 records/second
Running script on server using PSOS
Server 20.3.4, first run: SELECT SUM of 33236 records took 0:00:02 for a rate of 16618 records/second
Server 20.3.4, second run: SELECT SUM of 33236 records took 0:00:02 for a rate of 16618 records/second
Interesting findings:
running on FMPro, record caching is very important: the second time I run the script the records are already cached locally, so the speed is nearly 10x higher
running on FMServer, using PSOS is much faster, but there also appears to be no benefit from caching (which makes sense, as each PSOS script run is basically its own separate login session).
Notes:
both the Date field I'm searching on and the Salary field I'm SUM()ing are stored, indexed fields.
We want to sum the entire table as it only has 30K rows. No WHERE clause used for that reason.
I think the problem is that what I thought was a numeric field was actually a calculation field, possibly unstored, so SQL has to wait for each row's calculation to be evaluated.
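If that is the cause, one common fix (a sketch only; the table and field names are illustrative, since the actual schema isn't shown in the thread) is to sum a stored number field, e.g. the calculation made stored or mirrored into an auto-enter number field, so the full-table SUM never has to evaluate anything per row:

# Full-table sum over a stored number field; no WHERE clause needed for ~30K rows
Set Variable [ $total ; Value: ExecuteSQL ( "SELECT SUM ( Salary ) FROM WorkerDate" ; "" ; "" ) ]
# If Salary is an unstored calculation, every row must be evaluated before it can be summed;
# storing the value removes that per-row cost.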