@nyarchlinux I have a question about this. What AI does this run on?
If it runs on Ollama, I would totally try it because I could run everything locally.
@nyarchlinux Ah, very cool. Thank you very much! I'll try and get it working!!!
@mrmasterkeyboard it runs on any AI you want, we have 10+ providers and Ollama is supported. You can also run custom GGUF models using gpt4all.
There are also offline/self-hostable TTS providers, translation providers and speech recognition providers.
Assuming you have good enough hardware, you can run it with full privacy without losing any functionality. Before the final release I will likely write some guides on how to fine-tune the settings for privacy-focused use.
For the assistant part of the program, we suggest running a model at least as capable as Llama 3.1 70B, but you are totally free to use any model and you have a lot of control over it.
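For anyone wondering what "custom GGUF models using gpt4all" means in practice, here is a minimal sketch using the gpt4all Python bindings directly. This is generic gpt4all usage, not code from the assistant itself, and the model file name and path are placeholders.

```python
# Minimal sketch: load a local GGUF file with the gpt4all Python bindings.
# The file name and directory below are placeholders, not defaults of the app.
from gpt4all import GPT4All

model = GPT4All(
    model_name="my-custom-model.gguf",  # any GGUF file you have downloaded
    model_path="/path/to/models",       # directory containing that file
    allow_download=False,               # stay fully offline
)

with model.chat_session():
    reply = model.generate("Hello, are you running locally?", max_tokens=128)
    print(reply)
```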
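And for the Ollama route, a fully local setup just means the app (or any client) talks to the Ollama server on your own machine. The sketch below uses Ollama's standard REST endpoint on localhost with the suggested Llama 3.1 70B tag; it is ordinary Ollama API usage, not the assistant's internal code, and the prompt is only an example.

```python
# Minimal sketch: query a local Ollama server over its standard REST API.
import json
import urllib.request

payload = {
    "model": "llama3.1:70b",  # the suggested model tag; use whatever you have pulled
    "prompt": "Say hello from a fully local model.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```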