If I check the Info table, I can see that all of these records do have a 1 in the cGenderUpdate calc field, but only on older records, which should not be found by this search (since the relationship is sorted by date descending).
This is with FileMaker Server 18, using a FM19 client.
Very confused by this. Ideas?
FileMaker Bug?
Corrupt index in one of the fields?
bad data? (e.g. could there be some unusual data in the Date field which is sometimes being interpreted as a date, and sometimes as text?)
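If you want to test the bad-data theory, one quick check is a throwaway calc (a sketch; IsValid() returns 0 when a field holds data that is invalid for its type, and the field names here are guesses based on the post):

/* unstored calculation in the Info table, result type Number;
   returns 1 for records whose Date field does not hold a valid date */
Not IsValid ( Info::Date )

Search that calc for 1 (or drop it on a list layout) to surface any records where the "date" is really un-coercible text.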
A sorted relationship has no impact on search results compared to an unsorted relationship… so yes, FileMaker is ignoring the relationship sort when performing a search.
No and no. Filters don't omit related records from the relationship. They only hide related records after they match the relationship, giving the illusion that there are fewer (or no) matched records.
Sorts only reorder related records. A reordered record is still a matched record.
Only the match predicates determine which records a relationship matches. The same goes for searches.
The problem comes from the fact that a child field displayed in the context of a parent table only shows the value from the first related child record, whereas a search on child fields searches ALL the related child records.
If the CONTEXT of the search layout is the parent table, and the search is on fields from the child table, you won't get the data you expect. ANY FIND is performed independent of the relationship-filtered view of the data.
If you want to run that FIND from that context, you would need to FIND on the parent fields, GTRR (Go to Related Records), then run a constrained FIND on the child table fields within the found set resulting from the GTRR.
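Roughly, that script looks like this (layout and table occurrence names here are made up; the child field is the one from the original post):

Go to Layout [ "Parent" ]
Enter Find Mode [ Pause: Off ]
# criteria on parent fields only
Set Field [ Parent::SomeParentField ; "some value" ]
Perform Find []
# jump to the related child records for the whole found set
Go to Related Record [ Show only related records ; Match found set ; From table: "Info" ; Using layout: "Info" ]
# now apply the child-side criteria against real child records
Enter Find Mode [ Pause: Off ]
Set Field [ Info::cGenderUpdate ; 1 ]
Constrain Found Set []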
Alternatively, change the context to the child table, and search there. You will likely show redundant parent record fields, but it will be the right data. You can then do a GTRR back to your original layout in the parent context, and it will reflect the found set from an appropriately run FIND.
I had this conversation a couple of years ago with the Beezwax LogiCator author: when child record fields are placed in a parent context, searching those fields gives erroneous results.
Denormalizing the data is one common approach, although it can create significant overhead, especially with a calculated field, which, because it reaches across a relationship, is by definition unstored.
If you want to denormalize the data to support this, I might suggest either scripting the denormalization update process (it is often hard to ensure you always catch every change), or creating an auto-enter field in the parent that captures the first child record's field. You then trigger the parent to force an update, either through a Let() in the auto-enter calc that references a completely independent trigger field, or by resetting any parent-side relationship field to itself, forcing the auto-enter calc to re-evaluate.
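The second variant is a one-line script step; as a sketch (assuming the parent's auto-enter calc references a hypothetical Parent::id field):

Set Field [ Parent::id ; Parent::id ]
# re-writing the field counts as a modification, so every
# auto-enter calc in Parent that references id re-evaluates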
I make heavy use of stored, indexed auto-enter calc fields when I want to denormalize results across a relationship. They work great and are very fast, and they will automatically update on any change to a referenced field in the same table; but to update in response to changes in other, related tables, you have to trigger them manually.
When I'm using this pattern, I tend to do this:
TableA::TriggerField // set this field to any value to trigger updates
TableA::TaxRate
TableA::TotalTaxOwed
[Auto-enter calculated field, stored, indexed]
= Let ( x = TriggerField ; // setting TriggerField will cause this calculation to update
TaxRate * Sum ( TableB::SalesItems ) // changing TableA::TaxRate will also update this field
)
Then, any change to fields in TableA will automatically update the auto-enter, stored, indexed calc field. But changes to TableB will require a manual update.
I typically do this with a bunch of fields that all rely on the same TriggerField, so that updating the entire table is as easy as:
Show All Records
Replace Field Contents [ With dialog: Off ; TableA::TriggerField ; 1 ]
Also, you can set the trigger field from another table, across a relationship, if you only need to update a subset of records.
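As a sketch of that (using the table names from the example above, and assuming each TableB record matches a single TableA parent):

# from a found set on a TableB layout:
Replace Field Contents [ With dialog: Off ; TableA::TriggerField ; Get ( CurrentTimestamp ) ]
# when the target is a related field, Replace Field Contents touches the
# first related record for each record in the found set, so only the
# TableA parents of the current TableB found set get re-triggered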
Indexed and stored fields in FileMaker are very fast -- I can do a search on ten million records and get results in a second or two. If these same fields were regular calculated (non-stored, non-indexed) fields, the search could be 10x, 100x, or even 1000x slower.
Same here - or at least VERY similar. My trigger field is a timestamp field, so I have data showing the time of the action, which helps with debugging.
Let ( [ ~trigger = TriggerTimeStampField ] ; <unrelated calc> )
I also do NOT use RFC (Replace Field Contents), as a record that is open will just be skipped by RFC, whereas a loop - which is just as fast - can trap for open records and recover.
Use a timestamp for the trigger field; it's very useful in debugging.
If this is a table where records could be locked / in use, add an additional timestamp field, "TriggerUpdateRequest", which can be set by the client...
Then a regularly scheduled server-side script processes the updates: it finds the records needing updates and works through them in a loop, which can trap for errors when records are in use, skip those, and come back to update the missed records on the next run.
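Sketched out, that server script might look like this (the layout name is made up; error 301 is FileMaker's record-in-use error):

Go to Layout [ "TableA_server" ]
Enter Find Mode [ Pause: Off ]
# "*" finds every record with a non-empty update request
Set Field [ TableA::TriggerUpdateRequest ; "*" ]
Perform Find []
If [ Get ( FoundCount ) > 0 ]
  Go to Record/Request/Page [ First ]
  Loop
    # try to lock the record; a record open elsewhere returns error 301
    Open Record/Request
    If [ Get ( LastError ) = 0 ]
      Set Field [ TableA::TriggerField ; Get ( CurrentTimestamp ) ]
      Set Field [ TableA::TriggerUpdateRequest ; "" ]
      Commit Records/Requests [ With dialog: Off ]
    End If
    # skipped records keep their request flag and are caught on the next run
    Go to Record/Request/Page [ Next ; Exit after last: On ]
  End Loop
End If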