I would like to be able to point AI Assist at Ollama using its standard OpenAI-compatible API (https://ollama.com/blog/openai-compatibility?ref=upstract.com).
This would let me run AI Assist entirely locally, avoiding both the privacy concerns of sending requests over the web to third-party services and the costs of running models in the cloud.
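For context, Ollama's OpenAI compatibility means any client that lets you override the API base URL can talk to a local server. Here is a minimal sketch using the standard OpenAI Python client, assuming Ollama is running on its default port 11434 and a model such as `llama3` has already been pulled; how AI Assist itself would expose this configuration is exactly what this request is about:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local Ollama server.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint (default port)
    api_key="ollama",  # the client requires a key, but Ollama ignores its value
)

response = client.chat.completions.create(
    model="llama3",  # any locally pulled model name (assumption for illustration)
    messages=[{"role": "user", "content": "Hello from a fully local model!"}],
)
print(response.choices[0].message.content)
```

If AI Assist simply allowed configuring the base URL (and accepted a dummy API key), no other changes should be needed to support this.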