Real-time capabilities

Recently, we had an interesting discussion about real-time capabilities.
This brings me back to the days of computers with M68000 processors and their real-time capabilities, and to a control and data-capture system that I programmed in C and assembler.
Hardware was slow then (the processor ran at 8 MHz), and meeting hard real-time criteria was a cycle-counting business.
Today we have ample computing power at our disposal and our platform of choice is not a dedicated real-time system.
However, system response time is relevant, both in development and operations.
Apart from the obvious (producing list views and reports should not take hours or even days), what time frames (ms, s, h, d) are relevant to your development work and to the operation of your solutions, and what are your typical time-critical tasks?


It is very well defined here: Real-time computing - Wikipedia

And in FM terms: an environmental change to any data relevant to the RT response would be reflected accordingly, without outside interference.


The only time requirement we get is As Soon As Possible (ASAP).


I think it tends to vary based on the frequency of the operation. Users are more willing to wait for something if they are not running that process constantly. Just like with my own computer: I want its boot time to be short, but since I am not rebooting it 15 times a day, it is OK that booting is not instantaneous.

Something else that can influence perception is comparison. To stay with the same example: picture one computer with the OS on a spinning disk and another with the OS on an SSD. If, in a conference room (in a non-Covid era), I boot in 60 seconds and the person next to me boots in 30 seconds, I will surely start rethinking that boot time, even if 60 seconds was no problem by itself until I saw someone achieving better performance.

If there is something that gets done over and over during the day (with or without the awareness of the user), bringing the time down and tuning things up can be well worth it.

I think the key to any successful optimization is to focus on bottlenecks. Being able to identify them can be challenging.
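
One low-tech way to find bottlenecks is simply to time each step of a process and sort by cost. Here is a minimal sketch in Python (not FileMaker-specific; the step names and the simulated work are placeholders):

```python
# Minimal step-level timing to spot bottlenecks.
# The step names and the sleeps standing in for real work are placeholders.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(step):
    """Accumulate wall-clock time for a named step."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[step] = timings.get(step, 0.0) + time.perf_counter() - start

with timed("fetch records"):
    time.sleep(0.12)   # stand-in for a database query
with timed("build report"):
    time.sleep(0.45)   # stand-in for report generation
with timed("send output"):
    time.sleep(0.03)   # stand-in for delivery

# Slowest steps first; the top entries are the optimization candidates.
for step, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{step:15s} {seconds * 1000:7.1f} ms")
```

Whatever the tooling, the point is to measure before optimizing: the slowest step is often not the one you would have guessed.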

I think for the best user experience, we should aim for things that are as fast as possible. This study from 2004 seems to suggest:

  • 0.1 second as the limit for the user to discern between something that feels instantaneous and something where some processing is taking place
  • 1 second as the limit to keep the user's flow of thought
  • 10 seconds as the limit to remain focused on the task, shifting to something else while letting the computer finish for anything longer

I think the context was for web users circa 2004. Today the 10-second mark may be even shorter.
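
Those thresholds are easy to turn into instrumentation. A toy sketch (the bucket wording is mine, paraphrasing the list above):

```python
# Map a measured response time onto the three thresholds quoted above
# (0.1 s, 1 s, 10 s). Purely illustrative.
def perceived_responsiveness(seconds: float) -> str:
    if seconds <= 0.1:
        return "feels instantaneous"
    if seconds <= 1.0:
        return "noticeable delay, but the user's flow of thought is kept"
    if seconds <= 10.0:
        return "user stays on task, but expects progress feedback"
    return "user will switch to something else; show progress and allow cancelling"

for t in (0.05, 0.4, 3.0, 25.0):
    print(f"{t:5.2f} s -> {perceived_responsiveness(t)}")
```

Logging which bucket each user-facing operation falls into is a cheap way to see where a solution sits against those limits.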

Again, for something the user will run just once a week or once a month, I'm not very likely to go all out on optimization. For something that impacts all users countless times a day, I'll look into it; otherwise they have a bad experience, and it is our job to make that better (or they will find someone else who will solve it for them).

There are not many as far as I am concerned. We are not talking about "I forgot to do this an hour ago and now I need the output in 2 minutes". I tend to think of time-critical activities as tied to data capture or communication, rarely to data processing (even if in some circumstances faster data processing can be a game changer). You may be monitoring weather conditions, or anything else giving you a steady stream of incoming data. Having the right data at the right time intervals cannot be compensated for by data processing (if you need readings every second and only have readings every minute, you cannot manufacture data you do not have). You may also need to relay information to another component almost instantaneously.

Processing data takes time, and the volume of data influences how long it takes. If you need real-time, you cannot rely much on cached data for performance gains, but luckily none of my customers has ever asked me to chart weather conditions in real time. Some processes are also more efficient because of batching, so seeking fast response times can be good, but if it ends up taxing your system for something frivolous, it should be avoided.
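
To illustrate the batching point, here is a rough sketch comparing row-by-row writes against batched writes. It uses an in-memory SQLite database with made-up data; the exact numbers will vary, but the shape of the trade-off is the same in most systems:

```python
# Rough comparison of per-row inserts (one transaction each) versus batched
# inserts. In-memory SQLite; table layout and data are invented.
import sqlite3
import time

rows = [(i, f"reading-{i}") for i in range(50_000)]

def insert_one_by_one():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE readings (id INTEGER, value TEXT)")
    start = time.perf_counter()
    for row in rows:
        con.execute("INSERT INTO readings VALUES (?, ?)", row)
        con.commit()                       # one transaction per row
    return time.perf_counter() - start

def insert_batched(batch_size=5_000):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE readings (id INTEGER, value TEXT)")
    start = time.perf_counter()
    for i in range(0, len(rows), batch_size):
        con.executemany("INSERT INTO readings VALUES (?, ?)",
                        rows[i:i + batch_size])
        con.commit()                       # one transaction per batch
    return time.perf_counter() - start

print(f"row by row: {insert_one_by_one():.2f} s")
print(f"batched:    {insert_batched():.2f} s")
```

The batched version is cheaper for the system; the per-row version gives fresher data. Which one is right depends on whether anyone actually needs the data that fresh.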

I would be curious to hear from people who deployed something where processing the data faster (near-instantaneously or not) ended up being a game changer, opening new doors to enhance the whole business process. Those are the best stories.

We were contacted by a telephone services reseller. The upstream supplier sent them CSV data of all calls made, regularly up to 2 GB. The processing was all done in FileMaker and it was taking as much as 48 hours. If an error was found, the processing had to start again from the beginning. And reports had to go out weekly, so in the worst case they could only correct three errors in a week before they breached their reporting obligations.

We used command-line tools to build a data processor that began by splitting the data into known errors, possible errors, and positively clean data. The possible errors were passed to an operator for analysis. By leveraging the command-line tools as a complementary toolset we were able to verify the data and have it imported within four or five hours. In business terms that meant they now required only one day for processing and had six days for reporting.
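
For readers who want the flavour of that triage step: the real pipeline used command-line tools, but the idea translates to any language that can stream a file once and route each row into one of three buckets. A minimal Python sketch, with invented file names, column layout, and validation rules:

```python
# Stream a large CSV once and split rows into "clean", "possible error",
# and "known error" files before importing anything.
# File names, columns, and validation rules are invented for illustration.
import csv
import re

DURATION = re.compile(r"^\d+$")           # call duration in whole seconds
NUMBER = re.compile(r"^\+?\d{6,15}$")     # very rough phone-number check

def triage(row):
    """Classify one call record as 'clean', 'suspect', or 'error'."""
    if len(row) != 4 or not row[0]:
        return "error"                    # structurally broken: reject outright
    caller, callee, duration, charge = row
    if not (NUMBER.match(caller) and NUMBER.match(callee) and DURATION.match(duration)):
        return "error"
    try:
        if float(charge) < 0:
            return "suspect"              # unusual but possibly legitimate
    except ValueError:
        return "suspect"                  # charge not numeric: let a human decide
    return "clean"

with open("calls.csv", newline="") as src, \
     open("clean.csv", "w", newline="") as clean, \
     open("suspect.csv", "w", newline="") as suspect, \
     open("errors.csv", "w", newline="") as errors:
    writers = {"clean": csv.writer(clean),
               "suspect": csv.writer(suspect),
               "error": csv.writer(errors)}
    for row in csv.reader(src):
        writers[triage(row)].writerow(row)
```

Only the "suspect" rows need human eyes, which is what keeps the operator's workload manageable.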