Yes, at some point IsValid was given extra capabilities, and the documentation addresses this scenario specifically: IsValid() returns false when "FileMaker Pro Advanced cannot locate (temporarily or permanently) the related table in which the referenced field is defined".
I forget which version introduced the extra functionality. Prior to that, IsValid()'s behaviour was limited to testing the validity of data.
Once again, we've been involved in an upgrade from FMS15/FMP15 to FMS18/FMPA18 and, just as we experienced with an equivalent v17 upgrade, the client immediately complained that it runs much slower.
These solutions are more old school: they have been in use for years and contain loads of unstored calculations, many of them in wide portals.
We spent 4 hours carrying out like-for-like tests yesterday, mostly navigating to layouts containing portals and scrolling portals.
The results were:
- Navigation: FM18 typically 25% slower than FM15
- Portal scrolling, first portal-height scroll (i.e. clicking in the grey area of the scroll bar): FM18 21% slower than v15
- Second portal-height scroll: FM18 over 100% slower than v15
Hence, major structural work is required on old-school FileMaker solutions before upgrading. The trouble is, a lot of these are very complex systems, written before many of the tools we rely on now were released. It is very difficult to explain to customers that the newer software is much slower than the old.
I appreciate Claris are optimising for cloud and making heavy use of caching, but it's a difficult sell to their existing customer base nonetheless.
We hardly use any calculation fields these days; everything is script driven. The above wasn't one of our solutions, but we still have long-term customers out there with this type of solution.
@AndyHibbs is this with StartupRestoration on or off? Many things are slower with 18 as a result of how StartupRestoration works.
We did not notice it as much with calcs because we don't have a lot of complex unstored calcs. Our users significantly noticed the difference in find performance when we turned SR off.
We tried all the available settings, StartupRestoration on and off; we also doubled the cache size in FMS18 and upped the memory allocation in FMPA18.
None of these made any difference in our particular tests. Almost certainly we will disable startup restoration on the production server when we are able to restart the service.
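(For anyone needing to do the same: if I remember correctly, FMS 18 lets you toggle this from the command line with something like `fmsadmin set serverprefs StartupRestorationEnabled=false`, but do check the exact preference name against the Claris documentation for your version before relying on it.)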
As mentioned above, this is not our solution; we were testing in conjunction with the developer, so we have no detailed knowledge of its structure.
We also tried the FMP v16 client; again, no change in performance.
Our experience of this happening started with FMS16.
If you're experiencing slow response from GTRR, the first thing I would look at is Top Call Stats to see what exactly FileMaker Server is doing at that time. GTRR is a complex script step that does many things behind the scenes, so without the possibility of debugging Draco's code directly, Top Call Stats is the easiest way to get more information. The other potential source of information is sniffing the communication with Wireshark or a similar tool.
You can also simply hire someone to analyze and optimize it for you, but I don't want to post advertisements here...
The developers know what they have to do, but the hardest thing to understand is the impact of FileMaker Server since v16.
Most customers would expect things to get faster when investing in new hardware and software, not slower, or bringing an existing system to a standstill.
Due to circumstances, today I've had to recover 3 files as a result of the above using an iPhone and Jump Desktop. You really discover the iOS 13 limitations, but it's equally impressive that this can be achieved on a standard-sized phone. Still, a system that ran fine on FMS15 is now giving major problems in v18.
There are now legacy features in FileMaker that really shouldn't be used because of whatever changes have been made in FMS.
I often use GTRR with a relationship where the left side is a global field containing a list of record UIDs and the right side is the UID field of the table that I want to select records in.
What would be the equivalent Find request in such a use case?
I would be curious to test this (collect keys, go to layout, perform finds) vs the global GTRR (collect keys, drop them in the global, and then GTRR). I have a feeling which would be faster, but, as I love to repeat wise words I heard years ago (it was either Wim or Daniel Wood, or both): "unless it's tracked and measured it's just an opinion".
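Purely as a sketch of the two things I'd put head to head (all names are invented for illustration: Globals::gIDList is a global text field related to Invoices_byIDList::uid, the "Invoices" layout sits on the target table, and $keyList is assumed to hold a non-empty, return-delimited list of UIDs):

```
# Variant 1: global multi-key field + GTRR
# Assumes Globals::gIDList (global text) = Invoices_byIDList::uid in the relationship graph
Set Field [ Globals::gIDList ; $keyList ]
Go to Related Record [ Show only related records ; From table: "Invoices_byIDList" ; Using layout: "Invoices" (Invoices) ]

# Variant 2: scripted find on the same key list
Go to Layout [ "Invoices" (Invoices) ]
Enter Find Mode [ Pause: Off ]
Set Variable [ $i ; Value: 0 ]
Loop
  Set Variable [ $i ; Value: $i + 1 ]
  # One find request per UID, using the exact-match operator
  Set Field [ Invoices::uid ; "==" & GetValue ( $keyList ; $i ) ]
  Exit Loop If [ $i >= ValueCount ( $keyList ) ]
  New Record/Request
End Loop
Perform Find []
```

Both variants should end on the same layout with the same found set, so wrapping each in a pair of Get ( CurrentTimeUTCMilliseconds ) calls ought to make for a fair comparison.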
I believe it was in v.16 when I was working on a project where we needed to reproduce a found set where we had a collection of all the primary keys available. I recall that I was eager to try an approach that scripted the creation of the found set with Perform Find and Extend Found Set script steps. It was my hope that using that script could possibly be equally, or more, performant than using a multi-key field, a relationship, and GTRR. As it turns out, GTRR was a bit faster in all the examples I tried.
I believe that I set up the Find-based scripts to have a configurable batch size for how many IDs would be added as criteria before executing the Find/Extend step. What I don't recall is whether I discovered any sweet spot where performance was best.
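From memory, the shape of it was roughly the following. This is only a sketch: the names (Invoices::uid, $keyList, $batchSize) are invented, and the real script had error capture and an empty-list guard around it.

```
# $keyList = return-delimited primary keys, $batchSize = e.g. 500
Go to Layout [ "Invoices" (Invoices) ]
Set Variable [ $total ; Value: ValueCount ( $keyList ) ]
Set Variable [ $i ; Value: 0 ]
Set Variable [ $batch ; Value: 0 ]
Loop
  Exit Loop If [ $i >= $total ]
  Set Variable [ $batch ; Value: $batch + 1 ]
  Set Variable [ $batchEnd ; Value: Min ( $i + $batchSize ; $total ) ]
  Enter Find Mode [ Pause: Off ]
  # Build one find request per key in this batch
  Loop
    Set Variable [ $i ; Value: $i + 1 ]
    Set Field [ Invoices::uid ; "==" & GetValue ( $keyList ; $i ) ]
    Exit Loop If [ $i >= $batchEnd ]
    New Record/Request
  End Loop
  # The first batch establishes the found set; later batches add to it
  If [ $batch = 1 ]
    Perform Find []
  Else
    Extend Found Set []
  End If
End Loop
```

Varying $batchSize between runs was where I had hoped to find that sweet spot.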
It occurred to me to mention this earlier on in this thread, but since that was back in v.16, and thus no longer recent, I held off until now to mention it.
Since my attempt to script the found-set creation didn't yield any performance gains, I let the scripted-find idea sit on the shelf. I haven't ruled it out, however -- I just haven't (yet) had a situation where it made sense to use it.
EDIT:
When I posted this a day or two ago, I neglected to mention the solution environment:
The above was in the context of an FMS-hosted solution across a LAN, i.e. not a locally running file, nor a WAN setup.
Back in v9 or v10 it was quite easy to overload GTRR when going to all related records from all records in the current found set. I'm sure the behaviour has improved, but I still script these actions so that they run in batches. This isn't going to be the fastest method, but we want to ensure we have 100% reliability before we tune for speed.
This is the only scenario in which I've personally experienced major slowdowns. But it's also easy to implement and enables searches that clients love and competing products (in our sector) never seem to offer.