Integrating Llama3 for Local, Offline AI

This weekend I set up the Llama3.2 model running locally and connected to it from FileMaker Pro to build an AI chat with no external dependencies. Llama3.2 is optimized to run on standard hardware without needing a powerful GPU. I was able to run it on my M1 MacBook with 16GB of RAM with no issues, and the model responded quickly.
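The thread doesn't include the FileMaker script itself, but assuming the model is served locally with Ollama (which a later post here mentions), the integration boils down to POSTing JSON to Ollama's default local endpoint. Here is a minimal Python sketch of that request; the endpoint, model tag, and prompt are assumptions for illustration, not code from the tutorial:

```python
# Minimal sketch: the same JSON request a FileMaker integration would send to a
# local Ollama server. Assumes Ollama's default endpoint (http://localhost:11434)
# and the "llama3.2" model tag.
import json
import urllib.request

def ask_llama(prompt: str, model: str = "llama3.2") -> str:
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON reply instead of a stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/chat",  # Ollama's local chat endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]

if __name__ == "__main__":
    print(ask_llama("In one sentence, what is FileMaker Pro?"))
```

On the FileMaker side, the equivalent would typically be an Insert from URL script step with cURL options posting this same JSON body and parsing the response with the built-in JSON functions.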

The model is only a 2GB download and contains 3 billion parameters! It's surprisingly knowledgeable about a wide range of topics for such a small footprint.

Check out the full tutorial here:

Really cool. Thank you for sharing this here.

UPDATE: Meta just released Llama3.2-vision! I've updated the app to accept image prompts and added it to the FileMaker Experiments repo.
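The post doesn't show how the image prompts are passed, but Ollama's chat API accepts base64-encoded images in an `images` array on the message for vision models. Here is a minimal sketch, assuming the `llama3.2-vision` model tag and the same default local endpoint; the file name and question are placeholders:

```python
# Minimal sketch of an image prompt against a local Ollama server. Assumes the
# "llama3.2-vision" model tag and the default http://localhost:11434 endpoint.
import base64
import json
import urllib.request

def describe_image(image_path: str, question: str) -> str:
    # Ollama expects images as base64 strings in the message's "images" array.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = json.dumps({
        "model": "llama3.2-vision",
        "messages": [{
            "role": "user",
            "content": question,
            "images": [image_b64],
        }],
        "stream": False,
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

print(describe_image("receipt.jpg", "What is the total on this receipt?"))
```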

Hey! Someone from the Reddit community built on my app to create a Docker container with Ollama and FileMaker! This takes it from a proof of concept to a production-ready setup for local, offline AI integrations.
