This weekend, I set up Llama 3.2 running locally and connected to it from FileMaker Pro to build an AI chat with no external dependencies. The Llama 3.2 model is optimized to run on standard hardware, without the need for a powerful GPU. I ran it on my M1 MacBook with 16GB of RAM with no issues, and the model responded quickly.
The model is only a 2GB download yet contains 3 billion parameters, and it's surprisingly knowledgeable about a wide range of topics for that size.
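To give a feel for the plumbing, here's a minimal sketch of the kind of request FileMaker sends to the local model. This assumes Llama 3.2 is served through Ollama, which exposes an HTTP API on localhost port 11434 by default; the endpoint, port, and model tag are assumptions about a typical setup, so adjust them to match yours. The full details are in the tutorial.

```python
# A minimal sketch of a chat request to a locally running Llama 3.2,
# assuming it is served by Ollama on its default port (11434).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # assumed local endpoint

def ask(prompt: str) -> str:
    payload = {
        "model": "llama3.2",  # the 3B model, pulled with `ollama pull llama3.2`
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON response instead of a stream
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The assistant's reply comes back under message.content
    return body["message"]["content"]

if __name__ == "__main__":
    print(ask("In one sentence, what is FileMaker Pro?"))
```

From FileMaker's side, the same POST can be made with the Insert from URL script step and cURL options, which keeps everything on your machine.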
Check out the full tutorial here: