Hosting and Bandwidth

We're in Massachusetts and have been hosting in Virginia, and some of our more complex screens sometimes take a few seconds to load. We set up a Remote Desktop session, also located in Virginia, and speed increased as much as 6-fold for some operations... which makes me think our problem is latency.

I used Network Link Conditioner to add some uplink and downlink delay, and even modest delays significantly impacted performance, so I'm looking to see whether we can host the solution nearer to us.
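To see why latency dominates here, it helps to model an operation as a chain of sequential server round trips: total wait scales with RTT times round-trip count, regardless of bandwidth. This is a rough sketch only; the round-trip count and RTT figures below are illustrative assumptions, not measurements from this thread.

```python
# Rough model: per-operation wait is dominated by sequential round trips,
# not payload size. ROUND_TRIPS and the RTT values are assumptions for
# illustration, not measured numbers.

def load_time_ms(rtt_ms: float, round_trips: int, payload_ms: float = 0.0) -> float:
    """Approximate wall-clock time for an operation that makes
    `round_trips` sequential server round trips at `rtt_ms` latency."""
    return rtt_ms * round_trips + payload_ms

ROUND_TRIPS = 40  # assumed round trips for one complex screen load

for label, rtt in [("same metro (~2 ms RTT)", 2), ("MA -> VA (~20 ms RTT)", 20)]:
    print(f"{label}: ~{load_time_ms(rtt, ROUND_TRIPS):.0f} ms")
```

Under those assumed numbers, moving the server from ~20 ms away to ~2 ms away cuts a 40-round-trip screen load by a factor of 10, which makes a 6-fold real-world improvement entirely plausible.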

I found a local host who's offering to put us on their fiber network at 30/30 Mbps, which seems slow to me, but maybe latency is really the only thing that matters?

I wrote a script that ran through a large batch of my records while I watched the server's network traffic, and it peaked at about 4,500,000 bits per second (4.5 Mbps). We have 20 staff members, and students sometimes use the web view to check their data, but realistically there are very few cases where we'd have more than 15 active users at a time, and most are sitting there with open screens and clicking every 20-200 seconds.
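For anyone who wants to reproduce this kind of measurement: the thread doesn't say which tool was used, but one portable way on a Linux server is to sample the cumulative byte counters in /proc/net/dev twice and take the difference. The interface name "eth0" is an assumption; swap in yours.

```python
# Sketch: measure server throughput by sampling Linux's /proc/net/dev
# counters over an interval. Assumes a Linux host; "eth0" is a placeholder.
import time

def rx_tx_bytes(proc_net_dev_text: str, iface: str) -> tuple[int, int]:
    """Extract cumulative (rx_bytes, tx_bytes) for `iface` from /proc/net/dev text.
    Column 0 after the colon is rx bytes; column 8 is tx bytes."""
    for line in proc_net_dev_text.splitlines():
        if line.strip().startswith(iface + ":"):
            fields = line.split(":", 1)[1].split()
            return int(fields[0]), int(fields[8])
    raise ValueError(f"interface {iface} not found")

def sample_mbps(iface: str = "eth0", interval: float = 1.0) -> tuple[float, float]:
    """Return (rx_mbps, tx_mbps) averaged over `interval` seconds."""
    with open("/proc/net/dev") as f:
        rx1, tx1 = rx_tx_bytes(f.read(), iface)
    time.sleep(interval)
    with open("/proc/net/dev") as f:
        rx2, tx2 = rx_tx_bytes(f.read(), iface)
    to_mbps = lambda delta_bytes: delta_bytes * 8 / interval / 1_000_000
    return to_mbps(rx2 - rx1), to_mbps(tx2 - tx1)
```

Note the bytes-to-bits conversion (`* 8`) when reporting Mbps; mixing those units up by a factor of 8 is the most common mistake in exactly this kind of capacity estimate.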

My IT guy is pushing me to buy a machine with a 10-gig GBIC, which feels so over the top.

The colocation place in our little neck of the woods is offering 30/30. If my math is right, that means six users could each max out that 4.5 Mbps without saturating the link. So in a staff meeting, when I'm doing training and everyone clicks on the same link at the same time, there might be a hiccup, but otherwise the 30/30 is fine.
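A quick sanity check of the 30/30 arithmetic (note 7 × 4.5 = 31.5 Mbps, which slightly oversubscribes the link, so six is the safe full-rate number). Treating the measured 4.5 Mbps peak as a per-user worst case is itself an assumption, since it came from a scripted batch run rather than one typical user; the click-interval and burst-duration figures below are likewise illustrative guesses.

```python
# Sanity check of the colocation math. PEAK_USER_MBPS comes from the poster's
# measurement; treating it as a per-user worst case, and the duty-cycle
# figures below, are assumptions for illustration.

LINK_MBPS = 30.0        # symmetric colocation uplink/downlink
PEAK_USER_MBPS = 4.5    # measured worst-case burst

full_rate_users = int(LINK_MBPS // PEAK_USER_MBPS)
print(f"{full_rate_users} users can burst at {PEAK_USER_MBPS} Mbps simultaneously")

# Average load is far lower: users clicking every 20-200 s spend most of
# their time idle, so the only realistic pinch point is an all-hands click storm.
avg_click_interval_s = 60   # assumed rough midpoint of the 20-200 s range
burst_duration_s = 2        # assumed seconds of traffic generated per click
duty_cycle = burst_duration_s / avg_click_interval_s
avg_total_mbps = 15 * PEAK_USER_MBPS * duty_cycle
print(f"~{avg_total_mbps:.2f} Mbps average with 15 active users")
```

Under those assumptions, 15 active users average a couple of Mbps against a 30 Mbps link, which supports the conclusion that only simultaneous training-session clicks would ever pinch.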

My general conclusion: for a small app like this, which isn't moving images or the like through container fields, 30/30 is fine and there's absolutely no need for a 10-gig GBIC.

It also seems like my biggest slowdowns now (except for my code, of course, lol) are latency and server speed, as some server-side scripts take 5-10 seconds to run, and a nearby Mac mini would fix both.

I just don't want to trade those for a bandwidth problem.


Re: server speed: Disk I/O is one of the most important factors for FMS.

Latency: YES, this can be a total killer. However, I live in Los Angeles and host modest client servers on AWS in Oregon, ~900 miles away, and solution performance feels acceptable even with that amount of latency.

Bandwidth: this can also be a problem.

You can certainly try a closer server and see if that provides a good enough improvement, and if it does, that's likely the cheapest solution! But my hunch is that your app is simply fetching too much data. I recommend Little Snitch for measuring network speed and volume when you perform different actions and visit different layouts. But beware that FM caches data, so to get an accurate representation, you may have to clear your FM cache between tests.

If you want a deeper dive into network measurement and changes you can make to address bottlenecks, this video is amazing.


The design of your FileMaker solution may also be a factor. We’ve been providing Citrix XenApp and Microsoft RemoteApp workspace (RDP protocol) access to our servers for over 10 years now. In our case, the FileMaker Servers and the RemoteApp servers streaming FileMaker are on the same VLAN within our infrastructure supplier, hence very fast bandwidth between them.

This arrangement allows traditional FileMaker design to work fine. However, if you run FileMaker Pro locally on each computer and use tables containing a large number of fields, or features such as summary fields, cross-table sorting, Replace Field Contents, etc., then the constraining factor of each user’s Internet connection can produce horrible results.

We (in the UK) have users based in Asia, Europe, USA and Australia, where latency is going to be at its worst, who operate at acceptable levels as only the RDP traffic is being transmitted/received. These users would not be able to work using a traditional FileMaker installation.

I’d suggest reading through Nick Lightbody’s ‘Performance core principles’ postings within this forum, starting at Performance core principles: 1. Unstored calculations, before concluding that bandwidth/latency alone is the solution (both remain factors, of course).

Our personal opinion is that FileMaker now carries legacy features that should not be used in a WAN scenario, hence the coding can be somewhat more laborious to squeeze the features and speed out of the system. That said, if your server-side scripts are taking that long to run, then perhaps the server is a factor as well?



The difference in performance between two different FM solutions doing the same thing - in the same environment - can be very large.
You can write FM stuff that is fast over WAN, probably most people write stuff that is not so fast. It takes time and knowledge to write fast stuff.
For example: I am coaching someone in performance, a dev with over 20 years of experience. He improved the speed of a core part of his system 6-fold, so what used to take 6+ seconds now takes 1 second. I did nothing except talk with him and open his eyes to what he could do and how to do it.
Here are several examples of a FM App that handles images on a Linux server in France - each one is using different image options as a comparison - as an example of good WAN performance with potentially a fairly heavy data load.
Compare it with other methods of handling images with FM over WAN.
The Performance discussions to which Andy refers are probably quite authoritative, as they have received contributions from a fairly wide range of developers and there seems to be a general level of agreement on most of the main subject areas, so you should read them with some confidence.
Cheers, Nick


Thanks, this is helpful. I'm aware that there's lots that can be done to improve database speed from a code/structure point of view. For example this summer I did the "arachnophobia" thing and went from a spider web of interconnected tables to nothing deeper than 4 levels. It took 20 hours and I got a solid 5% speed improvement.

That being said, I KNOW a Remote Desktop that's physically close to my solution makes my app up to 6 times faster, depending on what I'm doing. We're currently paying $200 a month for 20 Remote Desktop connections for my staff, on top of $150 a month in hosting... but Remote Desktop is a pain. So if I can get that same speed improvement with 20 hours of work by moving the server closer to us, it seems like a no-brainer as a next step: a guaranteed speed improvement for my team.

This is a solid point. If you have good evidence that a closer server improves performance on your solution specifically, then that's a fantastic first step, and likely cheaper than diagnosing and redesigning all the slow bits of your app.

The app performance tuning can come after and be done over time. It's still very important long-term, but getting an immediate improvement by throwing more "metal" at the problem is likely worth it to make it usable meanwhile.

Out of curiosity: do all users work on-site or do you have a mix of on-site and off-site users (i.e. over VPN)?

Most work on site, though not everyone when it's not COVID. If we go back into lockdown, everyone will be remote.

In addition, students access the portal from offsite. Our school network is SUPER locked down, and we only have a part-time networking person, so they don't drill holes in the firewall (I don't blame them); otherwise we could just host it on our internal fiber network.

The district VPN is only open to about 12 people in the entire district for security reasons and wouldn't be an option, and wouldn't work for students accessing the web portal anyway.

That being said, everyone who works with us lives within 20 miles of the school, so hosting anywhere in our region should provide the same benefits we're getting with Remote Desktop, without the hassle and cost of Remote Desktop... unless I screw up the server purchase or the host selection... (cue ironic music)
