I probably have to wait for my (noob-friendly) client to support MTP. So until then I'll play around with what I have. I'm not even that deep into AI anyway; I mostly experiment and only use it occasionally for help. But thanks for the suggestion.
I'm still experimenting and have just started doing some custom settings. What makes these "bigger" models more usable is lowering the context size to free up some VRAM and, in exchange, loading more of the model itself into VRAM. For example, I'm trying this with a 31B Unsloth Gemma 4 model at Q3_K_M and get 4 tok/sec. It's slow and doesn't have a huge context, but for occasional questions it's tolerable given the hardware I have.
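As a rough sketch of that trade-off (assuming a llama.cpp-based backend via the llama-cpp-python bindings; the model filename and the exact context/layer numbers are just placeholders, not my actual settings):

```python
# Sketch of trading context size for GPU offload with llama-cpp-python.
# Assumes a llama.cpp-based backend; model path and numbers are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-31b-Q3_K_M.gguf",  # hypothetical Q3_K_M quant file
    n_ctx=4096,        # smaller context window -> less VRAM used by the KV cache
    n_gpu_layers=40,   # ...which leaves room to offload more model layers to the GPU
)

out = llm(
    "Explain the trade-off between context size and GPU offload in one sentence.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```

The idea is simply that the KV cache and the offloaded layers compete for the same VRAM, so shrinking one leaves room for the other.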
My main models are still the previously mentioned 35B-A3B and 26B-A4B (where only a few billion parameters out of the bigger pool are active at a time), as they're pretty fast at 17 to 50 tok/sec, while the quality is acceptable and not really much different from the "bigger" models I can run.