To go into more detail:
- Exllama is faster than llama.cpp, all other things being equal.
- Exllama's quantized KV cache implementation is also far superior: it's nearly lossless at Q4 (see the sketch after this list), while llama.cpp is nearly unusable at Q4 and has to be turned up to Q5_1/Q4_0 or Q8_0/Q4_1 for good quality.
- With ollama specifically, you get locked out of a lot of knobs like this: llama.cpp's own enhanced KV cache quantization, more advanced quantization schemes (like the iMatrix IQ quants or the ARM/AVX-optimized Q4_0_4_4/Q4_0_8_8 quants), advanced sampling like DRY, batched inference, and so on.
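For anyone curious what the Q4 cache looks like on the exllama side, here's a rough sketch using exllamav2's Python API (tabbyAPI exposes the same thing as a cache mode setting in its config). The model path is a placeholder and exact names may have shifted between exllamav2 versions, so check the current README before copying:

```python
# Rough sketch: exllamav2 with a Q4 quantized KV cache.
# The model path is a placeholder; class names may differ slightly by version.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/path/to/your-model-exl2-4.5bpw"  # placeholder exl2 quant

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)

# Q4 cache instead of the default FP16 cache: roughly a quarter of the VRAM
# for context, and in my experience near-lossless (unlike llama.cpp at q4_0).
cache = ExLlamaV2Cache_Q4(model, max_seq_len=32768, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

print(generator.generate(prompt="Why is the sky blue?", max_new_tokens=200))
```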
It's not about evidence or options… it's missing features, and that's my big issue with ollama. I simply get far worse, and far slower, LLM responses out of ollama than out of tabbyAPI/EXUI on the same hardware, and there's no way around it.
Also, I've been frustrated with implementation bugs in llama.cpp specifically, like how Llama 3.1 (for instance) was bugged past 8K context at launch because llama.cpp didn't properly support its RoPE scaling. Ollama inherits all these quirks.
I don't want to go into the issues I have with the ollama devs' behavior, though, as that's way more subjective.
To actually answer this: you could look into the free APIs for open-source models, which have daily limits but are otherwise largely catch-free. You could even mirror those endpoints on your VPS if you need to, or host "middleware" there like prompt formatters and enhancers (a rough sketch of that is below).
I say this because, as others said, you can't actually run the model itself on a typical VPS…
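If you do go the middleware-on-a-VPS route, it's really just a thin HTTP proxy. Here's a minimal sketch in Python/Flask that injects a system prompt and forwards the request to a free OpenAI-compatible endpoint; the upstream URL, model name, and env var are placeholders for whatever provider you actually pick:

```python
# Minimal "middleware" sketch for a VPS: reformat/enhance the prompt, then
# forward to a free OpenAI-compatible API. URL, model, and key are placeholders.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

UPSTREAM_URL = "https://your-free-provider.example/v1/chat/completions"  # placeholder
UPSTREAM_KEY = os.environ.get("UPSTREAM_API_KEY", "")                    # placeholder
MODEL_NAME = "some-open-weights-model"                                   # placeholder

SYSTEM_PROMPT = "You are a helpful assistant."  # the "enhancer" part lives here


@app.post("/generate")
def generate():
    # Take a bare prompt from the client and wrap it in a proper chat template.
    user_prompt = request.get_json(force=True).get("prompt", "")
    payload = {
        "model": MODEL_NAME,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    }
    r = requests.post(
        UPSTREAM_URL,
        json=payload,
        headers={"Authorization": f"Bearer {UPSTREAM_KEY}"},
        timeout=120,
    )
    r.raise_for_status()
    return jsonify(r.json())


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The VPS only shuffles JSON around, so even the cheapest tier handles it fine; the actual inference happens on the provider's hardware.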