Indeed, this model is working on my machine. Can you explain the difference between this one and the one I tried before?
I have a MacBook Pro (M1 Pro) with 16GB of RAM. I closed a lot of apps and managed to free up 10GB, but that still doesn't seem to be enough to run the 7B model. As for the answers being truncated, it turned out to be a frontend issue: I tried open-webui connected to llama-server and it works great, thank you!
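In case it helps anyone else, here's roughly my setup (the model path is just an example, and 8080 is llama-server's default port):

    # start llama-server, which exposes an OpenAI-compatible API at http://localhost:8080/v1
    ./llama-server -m ./models/qwen2.5-coder-1.5b-instruct-q4_k_m.gguf --port 8080

Then in open-webui's connection settings, I pointed the OpenAI API base URL at http://localhost:8080/v1 (I just put a dummy API key, since llama-server doesn't check it by default).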
I tried llama.cpp with llama-server and Qwen2.5 Coder 1.5B. Models with more parameters just output garbage, and I can see an OutOfMemory error in the logs. With the 1.5B model, I have an issue where the model just stops outputting the answer, cutting off mid-sentence or in the middle of a class. Is it an issue of my hardware not being performant enough, or something I can tweak with some parameters? (My launch command is below for reference.)
Thanks!
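If I understand the docs correctly, `-c` sets the context window and `-n` caps the number of tokens generated per response (-1 meaning no cap), so a low value for either could explain the cutoff. The values here are just guesses for my machine:

    # -c: context window in tokens; -n: max tokens per response (-1 = unlimited)
    ./llama-server -m ./models/qwen2.5-coder-1.5b-instruct-q4_k_m.gguf -c 8192 -n -1 --port 8080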
For people on macOS, is there a better alternative than croco.cpp?
I’m new to this, and I was wondering why you don’t recommend ollama. It’s the first one I managed to run and it seemed decent, but if there are better alternatives, I’m interested.
Edit: it seems the other two don’t have an API. What would you recommend if you need an API?
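For context, what I mean by an API is an HTTP endpoint like the OpenAI-compatible one llama-server exposes, which I can hit with a plain curl (the prompt is just a smoke test):

    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "Write a haiku about RAM."}], "max_tokens": 128}'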
Well, this is the part I don't quite understand: I was trying to run the Q3_K_M, which is 3.81GB, and it was failing with an OutOfMemory error. The IQ4_XS one you provided is 4.22GB and works fine.
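My naive mental model, which may be exactly where I'm going wrong, was something like:

    RAM needed ≈ model file size + KV cache (scales with the -c context size) + compute buffers

so a 3.81GB file failing where a 4.22GB one succeeds is the part that surprises me.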