@steve_ssh is correct, it's the date predicate that is the killer. The >, <, ≥ and ≤ predicates are always the most expensive, worst with timestamps, but dates are right up there with poor performance. The reason is simply that an = is an easy lookup on the index, whereas any ≥ or ≤ takes more time to traverse the index.
These predicates perform worse the more records you have in table B, so you will notice performance degrading over time. It actually makes no difference how many records are found through your relationship; the performance has more to do with the overall number of records in the table. More records = larger index = slower evaluation of the relationship.
Sorting is probably playing a smaller role in the performance. Max or Min functions aren't going to help at all, because they still require ordering the records to return a result, which is comparable in cost to the sort on the relationship.
One thing you could try is not using a sort at all, and instead using the Last function to get the last record in the relationship. Obviously this is fraught with some danger: it assumes every record in B is created in chronological order. If you create records out of order, then Last may not return the most recent date and this technique won't work.
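To make that concrete, here is a rough sketch of the two approaches side by side (B_Sorted, B_Unsorted and TransactionDate are placeholder names for your own table occurrences and date field):

```
// Sorted relationship (or Max) — FileMaker still has to order the related records:
Max ( B_Sorted::TransactionDate )

// Unsorted relationship + Last — just grabs the last related record in creation order:
Last ( B_Unsorted::TransactionDate )
```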
In my experience, FileMaker relationships aren't very well optimised. In your case, you would expect the two "=" predicates to be evaluated first, reducing the number of potential results, and the ≥ predicate to be applied last to that smaller set, but based on what I have seen that is not how it behaves.
Another option is deferring the performance hit to an area where it has less of an impact. Here is an example of this.
Consider making the calc on table A a stored value via an auto-enter calculation. It still uses the same calculation and relationship, but it is triggered by a special "refresh" field, referenced in the auto-enter purely for triggering purposes. Each time "refresh" is changed, the auto-enter fires and the stored value is recalculated.
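A minimal sketch of what that auto-enter calculation could look like (field names such as RefreshFlag and LatestDate, and the B table occurrence, are placeholders for whatever you actually have); the Let wrapper references RefreshFlag purely so that touching it forces the auto-enter to re-evaluate:

```
// Auto-enter calculation on TableA::LatestDate
// (make sure "Do not replace existing value of field" is unchecked)
Let (
    trigger = TableA::RefreshFlag ;    // referenced only to trigger re-evaluation
    Last ( B::TransactionDate )        // your original calc/relationship goes here
)
```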
The key now is ensuring that this auto-entered value is kept current. Any point in the solution that could alter the value of this calculation must instead update the refresh flag. So if a record is added or deleted, or the date is changed in table B, the refresh flag on the appropriate record in table A is updated, keeping the stored result accurate.
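As an illustration only (the A_FromB table occurrence and RefreshFlag field are assumed names), a small script run after adding, editing or deleting a record in B could bump the flag on the related A record; incrementing rather than setting a fixed value guarantees the field actually changes each time, so the auto-enter always fires:

```
# Runs in the context of the B record that was just added/edited/deleted
Set Field [ A_FromB::RefreshFlag ; A_FromB::RefreshFlag + 1 ]
Commit Records/Requests [ With dialog: Off ]
```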
This defers the performance hit to the create/delete/modify actions on records in B, rather than paying it every time the calc in A is evaluated.
Often this approach, while more work, can yield noticeable perceived performance benefits. If, for example, you are running a big report that uses this calculation, you might have to evaluate it for hundreds or thousands of records in order to produce the report, resulting in a long run time. With the stored approach the report runs almost immediately. The actual time saved is instead distributed across the many smaller add/edit/delete operations, which occur over a period of weeks or months and so have far less of an impact.
Outside of that, going back to my earlier point about performance, you might find this calculation will run faster if you are able to cull the number of records in table B. If it is a large table, consider archiving old data. The fewer records you have in this table, the better.