R.I.P - GTRR

@nicklightbody Great to see you around here, Nick. Hoping we will hear more and more from you.


2 posts were split to a new topic: Let's deep dive and explore Performance

The problem is that, in a normalized data structure, a normal find doesn't provide the needed functionality for anything beyond a simple find.

I'm not sure what you are saying here, Josh?

I was replying to Nick's comment.

As an example, if you need to find related records based on the current found set, a simple find typically won't work. On the other hand, a list of UUIDs and a GTRR from a global field can actually be significantly faster than multiple scripted finds.
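
As a rough sketch of that pattern (the tables and fields here - Orders, Contacts, Globals::gUUIDList - are invented for illustration, not from any particular file):

    # Gather the UUIDs of the current found set into a global field.
    Set Variable [ $uuids ; Value: "" ]
    Go to Record/Request/Page [ First ]
    Loop
        Set Variable [ $uuids ; Value: List ( $uuids ; Orders::ContactUUID ) ]
        Go to Record/Request/Page [ Next ; Exit after last: On ]
    End Loop
    Set Field [ Globals::gUUIDList ; $uuids ]
    # Relationship: Globals::gUUIDList = Contacts::UUID (multi-line key).
    # A single GTRR then resolves the whole set in one round trip.
    Go to Related Record [ Show only related records ; From table: "Contacts" ; Using layout: "Contacts" ]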


Josh - I guess the best way to test this would be to build a model and run some timed tests?
My recent experience developing ecoWeb was as follows.

  • Table of web pages
  • Table of paragraphs
    Any paragraph can be used on any page, or on none, via a multi-key field in the page record.
    To build the page of HTML with inline CSS, I needed to get the paragraphs used on the page in the correct order, then run down that set of paragraphs and build the page.

First try: I originally used a portal and processed it using Go to Portal Row.

Second try: I reduced the portal to 1 row and didn’t display anything in it - this was several times faster (say 5x), because it reduced the amount of work required to render the portal at each step.

Third try: I picked up the UUIDs of the paragraphs from the multi-key field, went to the table of paragraphs, found the correct paragraphs by performing a find on the UUID, sorted them, built the page, and returned to the UI. This was several times faster than the second try, and about 20 times faster than the first.
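
A sketch of that third approach, with made-up table and field names standing in for the ecoWeb schema:

    # Pick up the paragraph UUIDs from the page's multi-key field
    # (assumes at least one UUID in the list).
    Set Variable [ $uuids ; Value: Pages::ParagraphUUIDs ]
    Go to Layout [ "Paragraphs" ]
    # Build one find request per UUID, performed as a single find.
    Enter Find Mode [ Pause: Off ]
    Set Variable [ $i ; Value: 1 ]
    Loop
        Set Field [ Paragraphs::UUID ; "==" & GetValue ( $uuids ; $i ) ]
        Exit Loop If [ $i = ValueCount ( $uuids ) ]
        New Record/Request
        Set Variable [ $i ; Value: $i + 1 ]
    End Loop
    Perform Find []
    # Sort into page order, however that is defined in the schema.
    Sort Records [ Restore ; With dialog: Off ]
    # Run down the sorted set, appending each paragraph's HTML.
    Go to Record/Request/Page [ First ]
    Loop
        Set Variable [ $html ; Value: $html & Paragraphs::HTML ]
        Go to Record/Request/Page [ Next ; Exit after last: On ]
    End Loop
    Go to Layout [ original layout ]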

My development work has been focused on running ecoWeb on FileMaker Server for remote shared access. The key advantage of the third try is, of course, that it can be run server side, whereas the portal-based approach isn’t really suitable for that.

Although FileMaker was described 25 years ago as a relational database, the relationship part of it is the slowest these days, in my view. I am seeing much better performance using Find. A chap I am coaching said to me on Friday that when he started with FM he was told that GTRR was the fastest way of getting subsets of records. That was true, of course, before Perform Find was upgraded in about 2012, but I would suggest it is no longer true?

When I was recording the Fireside FileMaker podcast with John Mark Osborne and Michael Rocharde a few weeks ago (https://www.podbean.com/eu/pb-grwe2-fa499d), we talked about this question, and JMO said he always tells the people he is teaching that relationships are slow and that there are faster ways of finding data.

However, these are just subjective opinions. A reproducible test is required, I suggest. Would you like to collaborate on building one? We could give people on this list full access to it and build it on deskspace.fmcloud.fm.

What do you think?

Cheers, Nick


The dsBenchmark research that we and others have done since FM13 shows the user activity load capacity of each version improving, except, I think, for FMS17 or 18? Now that the record restoration feature is turned off by default in 19, everyone I have talked to, including @AndyHibbs, has said that FMS19 is stable and generally good.
Most performance issues arise, in my view, from poor system design that is not obvious until the system becomes big enough to test FileMaker’s capacity.
The big challenge is that FileMaker is very permissive. It permits users to make poor design decisions, and its very accessibility results in users then complaining about FileMaker when in reality the responsibility is their own?
Cheers, Nick

I’m struggling with this now. My POS calculates the current inventory quantities for 25,000 items every time I open the file. Fixing that is on my list of planned enhancements, but I’m concerned that implementing the changes will be very time-consuming and that I’ll break something in the process.

If your POS script call is set to NOT wait for completion, then it becomes a background activity that you should not notice, and you can log in without delay. Just bear in mind that the stock levels may not be correct initially. Maybe add a visible last-modified timestamp so you can check whether the stock is up to date?
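
If the recalculation really is run via Perform Script on Server, the call would look something like this ("Recalculate Stock" is a hypothetical script name):

    # Fire the recalculation on the server and carry on; the user
    # logs in without waiting for the result.
    Perform Script on Server [ Specified: From list ; "Recalculate Stock" ; Parameter: ; Wait for completion: Off ]
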
Cheers, Nick

I suspect that, in this case, "POS" may be referring to Point of Sale, and not Perform On Server.

p.s. Like others, happy to have you sharing your performance insights here, @nicklightbody


I agree that a find, if you are looking to get a single set of results, is faster if the fields are stored text.

However, for a complex find, where the found set depends on data from multiple tables, GTRR is often necessary. Gathering the UUIDs of the current found set, dropping them into a global field, and doing a GTRR through a relationship based on that global field, for the current record only (not the found set), is nearly instant. Depending on the complexity of the graph, the first GTRR may sometimes take a second or two... but after that, it's crazy fast.
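
For anyone unfamiliar with the setup, the pieces might look something like this (the names are illustrative):

    # Relationship (multi-line key):
    #   Interface::gUUIDList  =  Jewelry::UUID
    # where gUUIDList is a global text field holding a return-delimited list.
    Set Field [ Interface::gUUIDList ; $uuidList ]
    # "Match current record only": one GTRR from the single current
    # record, rather than one per record in the found set.
    Go to Related Record [ Show only related records ; Match current record only ; From table: "Jewelry" ; Using layout: "Jewelry" ]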

So take the example of a piece of jewelry: finding all the rings with a 2-carat diamond as the centerpiece that were sold in the past 3 months to clients in the Northeast.

Or you want to find pendant necklaces that have a round, emerald-cut, or cushion-cut stone, or one of 3 specific sizes.

Those finds will require finding data across 3-5 different tables in a custom app that has any level of normalization.

GTRR on a single record using a global field is what I've found to be the fastest approach in every case except a local, non-hosted file. The difference in performance between a LAN and a WAN deployment is almost negligible; the only difference seems to be the size of the raw data and how long it takes to travel 100 miles vs 100 feet.

Finds across relationships are highly variable based on the structure, and the data.
GTRR has definitely gotten slow if you are trying to GTRR the whole found set.
And for finding related records in a single table based on a single parent record, I agree 100%: a find is faster.


Hmm - yes, you must be correct - so I guess the answer is that the operation should logically be done server side, since doing it from the point-of-sale machine is clearly not the desired method.
Alternatively, add an old Mac mini or similar as a robot and set it to run the script on a schedule?

The deeper question is how should such a process be designed so that this large update is not required?

Create a second repetition of the stock level field, repeat [2].

Add that field to the necessary screens.

When a stock item is allocated, the stock level in repeat [2] should be updated; ditto when it is returned to stock or deliveries are received.

That sort of function could initially be added only to a few special test stock items, with the overall update of repeat [1] retained as a method of validating repeat [2]; the full update would then be required less frequently once the new method is proven?
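
A minimal sketch of the incremental update, assuming StockLevel is a repeating number field with repeat [1] as the existing calculated figure and repeat [2] as the running total (names are my own, not from the actual file):

    # When stock is allocated to a sale:
    Set Field [ Items::StockLevel[2] ; Items::StockLevel[2] - $quantityAllocated ]
    # When stock is returned or a delivery is received:
    Set Field [ Items::StockLevel[2] ; Items::StockLevel[2] + $quantityReceived ]
    # Validation: recalculate repeat [1] the old way occasionally and
    # compare it with repeat [2]; any mismatch flags a missed update.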

Cheers, Nick

@jormond - great, this is cool. I think that, based on this, it should be possible to create a simple, brief technical guidance note that tells people which method is preferable for their requirement? Do you want to do the first draft and share it? Then we can challenge others to improve on it?


Yes. Completely agree.

I'm assuming you are running on FMS of some flavour? That sort of processing should be done (1) at the POS or (1) in batches overnight. In scenario 1 you can use PSOS without waiting for a result so that there is no slowdown. In scenario 2 it is a scheduled script on the server that occurs in the small hours of the morning.


Yes, I’m using FMS19 hosted on AWS (managed by Productive Computing). The in-stock quantity calculation really does not take that long, so I’ve let it go, as I have more pressing issues to address. Mainly, it’s an annoyance, and slightly baffling, that the developer used a calculated field rather than just adding and subtracting as transactions occur.
