I have a SQL call which is a little slow to run locally, so I wrapped it up and sent it off to the server. I have a script on the server which sets a variable to the SQL command (see below), then exits the script and returns the result.
Then I have a second "Stub" script which takes the script result and does what I need with it.
It seemed to get slower when I offloaded the SQL to the server, so I timed it: it's consistently taking around 28 seconds on the server and around 11 seconds if I just run the server script locally.
I had some substitutions, some JSON conversions, and some custom functions, and I thought they might be the problem, so I stripped them out of the code below.
Is this normal? We're co-hosting and switched from Windows to Linux. My benchmarks have shown that while some interactions are faster, others are slower, so maybe FileMaker Server on Linux just can't handle SQL commands well?
Have others experienced this? My FileMaker Server is running 19.5.2, so it's a little out of date, but I checked the release notes for the couple of more recent updates and none of them seem to include speed improvements or mention SQL. I'll probably run an update within a few days, but I was wondering whether this is a "known thing" and whether there's a workaround or "best practice"?
I was reusing that code snippet... does that affect speed? If so I'll definitely change it, but I can't imagine that would cause it to be slower on the server than when I run the same script on the client, would it?
LIKE is a pattern comparison; it is much fuzzier than an exact match and takes more effort to evaluate.
When you do things on the server you are doing a lot more than performing the task at hand. The server opens a new client session and allocates system resources to it. The database is opened, which invokes the OnFirstWindowOpen script. If you use that script trigger, you may be performing a lot of unwanted actions when it runs on the server. If so, test for the client type and exit the script early.
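For example, something like this at the top of the OnFirstWindowOpen script (a rough sketch; checking Get ( ApplicationVersion ) for "Server" is one common way to detect a server-side session):

    # Skip the rest of the startup work for server-side sessions (PSOS, server schedules)
    If [ PatternCount ( Get ( ApplicationVersion ) ; "Server" ) > 0 ]
        Exit Script [ Text Result: "" ]
    End If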
LIKE is bad; try to avoid it if you can. You certainly don't need it when searching on a primary key.
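As a rough sketch, with a made-up Interactions table and id / summary fields, the difference looks like this:

    // fuzzy pattern match: the engine has to test every id against the pattern
    ExecuteSQL ( "SELECT summary FROM Interactions WHERE id LIKE ?" ; "" ; "" ; $id & "%" )

    // exact match on the primary key: far cheaper to evaluate
    ExecuteSQL ( "SELECT summary FROM Interactions WHERE id = ?" ; "" ; "" ; $id )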
Also, regarding your SQLGetTableName / FieldName functions: I know this sounds bad, but if you can help it, hard-coding is faster. The internal SQL tables used to obtain base table names and field names become slower the more table occurrences you have in your file.
If you need to get field names, you can use a non-SQL approach instead.
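A minimal sketch of that pattern, with Customers::Name standing in as a placeholder field:

    // GetFieldName returns "TableOccurrence::FieldName"; keep just the field part
    GetValue ( Substitute ( GetFieldName ( Customers::Name ) ; "::" ; ¶ ) ; 2 )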
BaseTableName does require either hard-coding the name (which is relatively safe; how often do you change base table names?) or the use of the internal tables accessed via SQL. However, those queries work by looking up the BaseTableName where the table occurrence name matches something, so again, the larger the TO listing, the slower this is.
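For reference, that internal-table lookup is roughly the following (a sketch; Get ( LayoutTableName ) supplies the current table occurrence name, and the whole FileMaker_Tables listing gets scanned, which is why a large TO count hurts):

    // look up the base table behind the current layout's table occurrence
    ExecuteSQL (
        "SELECT BaseTableName FROM FileMaker_Tables WHERE TableName = ?" ;
        "" ; "" ; Get ( LayoutTableName )
    )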
--
Finally, if you are running this on the server via PSOS, are you bypassing all of your startup script logic in doing so? Unnecessarily running startup procedures can add overhead.
I think you're saying that instead of having a custom function that I pass the field into and that performs the operation you mention, it will run faster if I just put the entire operation in the call. Am I understanding correctly?
You mentioned benchmarks and I'm wondering if you're timing the whole PSOS call or if you're timing the query itself.
If you haven't already, I'd recommend benchmarking the time the query by itself takes when run locally vs. on the server (a rough sketch follows below). Also, you may need to delete the local cache to get a realistic idea of how slow a local query is when the user runs it for the first time.
With those numbers you can get a clearer picture of what specifically is slow about the workflow, and it may not be the SQL query itself (others mentioned things like the startup script running, which is a common culprit).
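One rough way to time just the query, with $query standing in for whatever SQL you send:

    Set Variable [ $start ; Value: Get ( CurrentTimeUTCMilliseconds ) ]
    Set Variable [ $result ; Value: ExecuteSQL ( $query ; "" ; "" ) ]
    Set Variable [ $queryMs ; Value: Get ( CurrentTimeUTCMilliseconds ) - $start ]
    # compare $queryMs on the server against the same steps run locally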
Thanks Josh... it seems like most of my problem is caching.
Running PSOS, entire script takes 52 seconds
Timing JUST the SQL on server is 48 seconds
So my startup script is significant and I have to cut that down.
Run locally, the first run is 64 seconds, slower than the server, but subsequent local queries are around 7 seconds each (local caching), which is probably why I was seeing such a discrepancy earlier in terms of speed.
I changed the LIKE to = and the total time on the server was 3-4 seconds, of which about 2.7 seconds was the actual query, compared to 3.5 seconds locally for the same query.
I’m not sure how sending this to the server could help. If a record is in an open state, all of the data required for the ExecuteSQL is exchanged between the server and FMP, with the associated network traffic overhead. However, if the record is in a committed state, ExecuteSQL is performed on the server.
All of the above recommendations regarding startup scripts and PSOS are valid, but ensuring the record is not open before running the SQL is essential as far as ExecuteSQL is concerned.
The difference you’ll see with SQL calls on the client versus the server is that the data has to be transferred: not just the found data, but the whole set of data being queried.
When your data sets are large, Claris recommends using relationships in the graph, as that allows the server to trim the data being transferred and reduce the overhead.
It sounds like you're saying that "sometimes" when you run ExecuteSQL locally, it actually runs on the server anyway... which is good to know; I won't try to push that through a PSOS script to save bandwidth... but I got lost at "record is not open".
I'm querying a database of interactions and finding a subset of them. At any given time there's probably about a 50% chance that at least one staff member is adding or editing one of the records I'm querying. The layout that I'm on when I call this query doesn't have any of those records "open", but that's not to say someone else doesn't.
You’ve raised a really good question; I can only provide my understanding, and I'm prepared to be corrected.
A record being open means it is being edited, or ‘owned’, by the user. If the record is in this state, data is exchanged between that user and the server when ExecuteSQL runs. I have not heard of a situation where another user ‘owning’ the record would trigger network traffic between the ‘executor’ of the SQL and the server; it only happens if the ‘executor’ is also the ‘owner’ of the record.
Get ( RecordOpenState ) returns a value of 1 or 2 for a new/modified record that hasn’t been committed by the editing user. However, it returns 0 for other users. If FileMaker applies this same rule to ExecuteSQL, then I believe my assumption above should be correct.
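If that holds, a cheap guard before the call would be something like this sketch, committing only when the current record is actually uncommitted:

    # 0 = committed, 1 = new and uncommitted, 2 = modified and uncommitted
    If [ Get ( RecordOpenState ) > 0 ]
        Commit Records/Requests [ With dialog: Off ]
    End If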
Commit records (if possible) prior to an ExecuteSQL call. Why? If the current user has an open record in the target table, then FMP must download the entire record set to evaluate it before processing the call.
@Malcolm, there are always possible variations, but working on the assumption that the user is working in a single window and the active record is in a committed state, then the SQL will be executed on the server and the entire record set will not be downloaded for evaluation.
OK, thanks... this is good to know. In this particular case it pulls data for a chart which is in a floating window, so I guess my options are "cycle through open windows and commit them all before the SQL" or "run the SQL and hope it's quick"... both have downsides.
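A rough sketch of that first option, committing every open window in the current file before running the SQL (the variable names here are just placeholders):

    Set Variable [ $originalWindow ; Value: Get ( WindowName ) ]
    Set Variable [ $windows ; Value: WindowNames ( Get ( FileName ) ) ]
    Set Variable [ $i ; Value: 0 ]
    Loop
        Set Variable [ $i ; Value: $i + 1 ]
        Exit Loop If [ $i > ValueCount ( $windows ) ]
        Select Window [ Name: GetValue ( $windows ; $i ) ; Current file ]
        Commit Records/Requests [ With dialog: Off ]
    End Loop
    Select Window [ Name: $originalWindow ; Current file ]
    # now run the ExecuteSQL call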