The killer is latency.
To understand what's happening, you need to consider what's required each time you update a single record in the found set.
First, that record must be downloaded, requiring a request/response.
Next, to update the field, any relationships used in the calculation must be evaluated. Once evaluated, the found related record(s) are also downloaded to the client. The update happens, and then the record is written back to the server.
So in a very basic sense (and this is abridged):
1x request to download the record
1x response sending the record
1x request to query the relationship
1x response with the related record(s)
(the update itself happens locally)
1x request to commit the record
1x response acknowledging the update
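To put that in concrete terms, even a single hypothetical update like the one below (table and field names invented) incurs every round trip in that list, because the calculation reaches through the Customers relationship:

    Set Field [ Invoices::SerialNumber ; Customers::SerialPrefix & "-" & Invoices::InvoiceNumber ]
    Commit Records/Requests [ With dialog: Off ]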
From there it all depends on your latency, that being the time required to send a request. Let's say your latency is 20ms.
Applying that to the above, it's about 6 requests, so 6 x 20ms = 120ms per record. And that's with just one relationship in your update; the more you add, the more queries are required per record.
Multiply that by 15,000 records and you quickly end up with 1,800 seconds, or 30 minutes. So yeah, slow.
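If you want to sanity-check that math for your own environment, here's a minimal sketch as a FileMaker calculation; the 6-requests-per-record figure is the abridged estimate from the list above, and the real number grows with every extra relationship:

    Let ( [
        ~requestsPerRecord = 6 ;    // abridged estimate from above
        ~latencySeconds = .02 ;     // 20ms per request
        ~recordCount = 15000
    ] ;
        ~requestsPerRecord * ~latencySeconds * ~recordCount    // = 1800 seconds
    )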
I'm assuming here, obviously, that your solution is hosted on a server.
Factor in also that the string concatenation has a small overhead, and there is index writing to consider. In that situation I would expect a loop to be slower. The reason is that each time you iterate a loop, the index (if there is one) is updated after each record commit. On a replace, however, index writing is batched into groups of 25 records at a time, so there are fewer index writes.
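To make the comparison concrete, here are the two approaches as script steps (again, table and field names are invented, and the exact step rendering varies by FileMaker version):

    # Replace: one operation, index writes batched ~25 records at a time
    Replace Field Contents [ With dialog: Off ; Invoices::SerialNumber ;
        Customers::SerialPrefix & "-" & Invoices::InvoiceNumber ]

    # Loop: index updated after every single commit
    Go to Record/Request/Page [ First ]
    Loop
        Set Field [ Invoices::SerialNumber ; Customers::SerialPrefix & "-" & Invoices::InvoiceNumber ]
        Commit Records/Requests [ With dialog: Off ]
        Go to Record/Request/Page [ Next ; Exit after last: On ]
    End Loop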
Overall, a replace on 15,000 records will be slow on any hosted solution; it's just the nature of latency over the network and the sheer number of requests/responses. This is the reason anything in FileMaker is slow on a hosted solution: it really comes down to the number of requests multiplied by the latency per request.
One solution is to perform the operation on the server using Perform Script on Server (PSOS) instead. The script then runs on the same machine as the database, so you eliminate the latency overhead entirely; especially for an operation of this size, that's what I would recommend.
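A minimal PSOS sketch (script name hypothetical) might look like the following. One caveat: the server-side session does not inherit the client's found set, so the server script has to re-establish it itself, e.g. via Perform Find or a list of keys passed as the script parameter:

    # Client side
    Perform Script on Server [ Specified: From list ; "Replace Serials (Server)" ; Wait for completion: On ]

    # Server side: "Replace Serials (Server)"
    Go to Layout [ "Invoices" (Invoices) ]
    Perform Find [ Restore ]    # rebuild the found set on the server
    Replace Field Contents [ With dialog: Off ; Invoices::SerialNumber ;
        Customers::SerialPrefix & "-" & Invoices::InvoiceNumber ]
    Exit Script [ Text Result: Get ( FoundCount ) ]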
Other factors come into play when backups are occurring: a backup will pause operations like a replace while the databases are paused to carry out the backup. Even other users doing things on the server will affect the performance of a replace.
Finally, any other actions occurring as a result of the replace will have an impact. Consider other fields in the table with auto-enter calculations which themselves update as a result of the field being updated. These must also evaluate, and if any of those auto-enters themselves reference relationships, then all of those must also be queried, with records downloaded. Very quickly things begin to spiral.
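As a hypothetical example of that spiral (field and relationship names invented): suppose the same table has an auto-enter calculation like the one below on another field, with "Do not replace existing value" unchecked so it re-evaluates whenever a referenced field changes. Every one of the 15,000 writes now also triggers a query on the InvoiceLines relationship and a download of those related records:

    // Auto-enter calculation on Invoices::DisplayLabel
    Invoices::SerialNumber & " / " & Sum ( InvoiceLines::Amount )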