Confusing 1 to Many Find behavior with relationship sorted by Date

I have two tables and TOs:

  • Worker (ID, name, gender)
  • Info (ID, date, sex, cGenderUpdate)

The TOs are related:

  • Worker::ID = Info::ID
  • sorted by Info::Date descending, so from the Worker table, the first related Info record should be the most recent one (the one with the latest Date).

The Info table has a calculated field indicating if the gender field in the Worker table needs updating:

  • Info::cGenderUpdate = If(Worker::Gender ≠ Info::Sex, 1, "")

I have a layout based on Worker, which shows the related Info fields.

When I do a search from the Worker table, searching on the Info::cGenderUpdate field:

[screenshot: the find request on Info::cGenderUpdate and the resulting found set]

FileMaker finds 7 matching records, but as you can see, none of them actually have the correct value in the search field.

If I check the Info table, I can see that each of these Workers does have a 1 value in the cGenderUpdate calc field, but only on older Info records, which should not be found by this search (since the relationship is sorted by date descending).

This is with FileMaker Server 18, using a FM19 client.

Very confused by this. Ideas?

  • FileMaker Bug?
  • Corrupt index in one of the fields
  • bad data? (e.g. could there be some unusual data in the Date field which is sometimes being interpreted as a date and sometimes as text?)
  • multi-line data in one of the fields?

Digging into the data more, I can see those 7 records are the only ones where the gender changed at some point.

What I thought I was asking FileMaker to do:

  • in the Worker table, find any records where the most recent Info record (as determined by Date sorted Descending) has a cGenderUpdate value of 1

What FileMaker is actually doing:

  • in the Worker table, find any records where ANY related Info record has a cGenderUpdate value of 1.

It's as if FileMaker is ignoring the "sorted by Date Descending" predicate in the relationship.

A sorted relationship has no impact on search results compared to an unsorted relationship… so yes, FileMaker is ignoring the relationship sort when performing a search.


Interesting. Been doing this for 20+ years and somehow I never knew that.

What features of a relationship do matter then?

  • equality (=)
  • comparison operators? (less than, greater than...)
  • the Cartesian (X) operator?

Also, as a workaround, if instead I define a calculated field in the Worker table:

Worker::cGenderUpdate = Info::cGenderUpdate

and then search on that Worker field, it works as expected.

Predicates are all that matter. The Cartesian (X) join, however, is useless in a search: all records always match.

What about searching on a field in a portal that has a filter or sort?

No and no. Filters don't omit related records from the relationship. They only hide related records after they match the relationship. They give the illusion that there are fewer or no matched records.

Sorts only reorder related records. A re-ordered record is still a matched record.

Only predicates apply to relationships. Same with searches.


The problem comes from the fact that a child field, in the context of the parent table, only shows the value from the first related child record, whereas a search using child fields searches ALL the related child records.

If the CONTEXT of the search layout is the parent table, and the search is on fields in the child table, you won't get the data you expect. ANY FIND is performed independently of the relationship's filtered and sorted view of the data.

If you want to run that FIND from that context, you would need to FIND on the parent fields, GTRR, then do a constrained FIND on the child table fields within the found set resulting from the GTRR.
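A sketch of that sequence as script steps, assuming a child layout named "Info List" based on the Info TO (the layout name and the parent criteria are illustrative, not from the original post):

# Find on the parent fields from the Worker context
Enter Find Mode [ Pause: Off ]
Set Field [ Worker::name ; "Smith" ]    # illustrative parent criteria
Perform Find []
# GTRR to the related child records for the whole found set
Go to Related Record [ Show only related records ; Match found set ; From table: "Info" ; Using layout: "Info List" ]
# Constrain within those child records
Enter Find Mode [ Pause: Off ]
Set Field [ Info::cGenderUpdate ; 1 ]
Constrain Found Set []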

Alternatively, change the context to the child table, and search there. You will likely show redundant parent record fields, but it will be the right data. You can then do a GTRR back to your original layout in the parent context, and it will reflect the found set from an appropriately run FIND.
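And a sketch of that child-context alternative, with the same illustrative layout names:

Go to Layout [ "Info List" (Info) ]
Enter Find Mode [ Pause: Off ]
Set Field [ Info::cGenderUpdate ; 1 ]    # search directly in the child context
Perform Find []
# GTRR back to the parent context, carrying the found set across the relationship
Go to Related Record [ Show only related records ; Match found set ; From table: "Worker" ; Using layout: "Worker" ]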

I had this conversation a couple years ago with the Beezwax LogiCator author - when child fields are placed on a parent-context layout, searching those fields gives erroneous results.

Denormalizing the data is one common approach, although it can create significant overhead, especially if done with a regular calculated field, which, because it references data across a relationship, is by definition unstored.

If you want to denormalize the data to support this, I might suggest either scripting the denormalization update process (though it's often hard to ensure you always catch every change), or creating an auto-enter field in the parent that copies the first child record's field, and then triggering the parent to force an update: either through a Let that references a completely independent trigger field as part of the auto-enter calc, or by resetting a parent-side relationship field to itself, forcing the auto-enter calc to re-evaluate.

I make heavy use of stored, indexed auto-enter calc fields when I want to denormalize results across a relationship. They work great and are very fast, and they automatically update on any change to a referenced field in the same table; but to update in response to changes in other related tables, you have to trigger them manually.

When I'm using this pattern, I tend to do this:

TableA::TriggerField  // set this field to any value to trigger updates
TableA::TaxRate
TableA::TotalTaxOwed
   [Auto-enter calculated field, stored, indexed]
   = Let ( x = TriggerField ;   // setting TriggerField will cause this calculation to update
           TaxRate * Sum ( TableB::SalesItems )   // changing TableA::TaxRate will also update this field
     )

Then, any change to fields in TableA will automatically update the auto-enter, stored, indexed calc field. But changes to TableB will require a manual update.

I typically do this with a bunch of fields that all rely on the same TriggerField, so that updating the entire table is as easy as:

Show All Records
Replace Field Contents [ With dialog: Off ; TableA::TriggerField ; 1 ]

Also, you can set the trigger field from another table, across a relationship, if you only need to update a subset of records.

Indexed and stored fields in FileMaker are very fast -- I can do a search on ten million records and get results in a second or two. If these same fields were regular calculated (non-stored, non-indexed) fields, the search could be 10x, 100x, or even 1000x slower.

Same here - or at least VERY similar. My trigger field is a timestamp field, so I also have data indicating when the action happened, which helps with debugging.

Let ( [ ~trigger = TriggerTimeStampField ] ; <unrelated calc> )

I also do NOT use Replace Field Contents (RFC), as a record that is open will just be skipped by RFC, whereas a loop - which is just as fast - can trap for open records and recover.
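A minimal sketch of such a loop, assuming the found set to update is already established (these are standard script steps; the field names are the illustrative ones from above):

Set Error Capture [ On ]
Go to Record/Request/Page [ First ]
Loop
   Open Record/Request                        # returns error 301 if another user has the record locked
   If [ Get ( LastError ) = 0 ]
      Set Field [ TableA::TriggerField ; Get ( CurrentTimeStamp ) ]
      Commit Records/Requests [ With dialog: Off ]
   Else
      # locked record: collect its ID here so it can be retried later
   End If
   Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop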


@Kirk - agree, I've done some of those myself:

  • use a Timestamp for the trigger field, very useful in debugging
  • If this is a table where records could be locked / in use, add an additional "TriggerUpdateRequest" timestamp field which can be set by the client...
  • then, a regularly scheduled server-side script processes the updates: it finds the records needing updates, processes them in a loop (which can trap for errors if records are in use), skips any locked records, and comes back to update the missed ones on the next run.
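A rough sketch of that server-scheduled script, reusing the hypothetical TriggerField / TriggerUpdateRequest names from above (the layout name is illustrative):

Go to Layout [ "TableA" (TableA) ]
Set Error Capture [ On ]
Enter Find Mode [ Pause: Off ]
Set Field [ TableA::TriggerUpdateRequest ; "*" ]    # find every record a client has flagged
Perform Find []
If [ Get ( FoundCount ) > 0 ]
   Go to Record/Request/Page [ First ]
   Loop
      Open Record/Request
      If [ Get ( LastError ) = 0 ]
         Set Field [ TableA::TriggerField ; Get ( CurrentTimeStamp ) ]
         Set Field [ TableA::TriggerUpdateRequest ; "" ]   # clear the request once processed
         Commit Records/Requests [ With dialog: Off ]
      End If      # a locked record keeps its request flag and is retried on the next run
      Go to Record/Request/Page [ Next ; Exit after last: On ]
   End Loop
End If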