> Changing your LLM inference provider is the easiest switch in technology I can think of.
That's true right up until you're working with confidential info in a corporate context. Then it's a multi-month, cross-discipline, cross-jurisdiction project, not an edit in a config file.
Try Mistral-Nemo-2407-12B-Thinking-Claude-Gemini-GPT5.2-Uncensored-HERETIC_Q4_k_m.gguf. This 7.5 GB model runs well under llama.cpp on my 2021 MacBook Pro, and it handles both coding and business-document analysis tasks.
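For anyone who wants to try this locally, a minimal llama.cpp invocation might look like the sketch below. The model filename is taken from the comment above; the flag values (context size, GPU layers, thread count) are illustrative assumptions, not tuned recommendations:

```shell
# Run an interactive session against the local GGUF file with llama.cpp's CLI.
# -m    path to the quantized model file
# -c    context window size in tokens (assumed value)
# -ngl  layers to offload to the GPU/Metal backend (99 = offload everything)
# -p    the prompt to start from
./llama-cli \
  -m Mistral-Nemo-2407-12B-Thinking-Claude-Gemini-GPT5.2-Uncensored-HERETIC_Q4_k_m.gguf \
  -c 8192 \
  -ngl 99 \
  -p "Summarize the key risks in the attached contract clause:"
```

On Apple Silicon, llama.cpp uses the Metal backend by default, so `-ngl 99` offloads all layers to the GPU; drop the flag to run CPU-only.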