This is one of the “smartest” models you can fit on a 24GB GPU now, with no offloading and very little quantization loss. It feels big and insightful, like a better (albeit dry) Llama 3.3 70B with thinking, and with more STEM world knowledge than QwQ 32B, but it comfortably fits thanks to the new exl3 quantization!
You need to use a backend that supports exl3, like (at the moment) text-gen-web-ui or (soon) TabbyAPI.
What are the benefits of EXL3 over the more common quantizations? I have 16 GB of VRAM on an AMD card. Would I be able to benefit from this quant yet?
AFAIK ROCm isn’t yet supported: https://github.com/turboderp-org/exllamav3
I hope the word “yet” means that it might come at some point, but for now it doesn’t seem to be developed in any form or fashion.
There’s a “What’s missing” section there that lists ROCm, so I’m pretty sure it’s planned to be added
That, and exl2 has ROCm support.
There was always the bugaboo of uttering a prayer to get ROCm flash attention working (come on, AMD…), but exl3 has plans to switch to flashinfer, which should eliminate that issue.
^ What they said: not supported yet, though you could theoretically give it a shot.
Basically exl3 means you can run 32B models totally on GPU without a ton of quantization loss, if you can get it working on your computer. But exl2/exl3 are less popular largely because they’re PyTorch based, hence more finicky to set up (no GGUF single files, no Macs, no easy install, especially on AMD).
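If you want a rough sense of why a 32B model at ~4–5 bpw lands on a 24GB card, here’s a quick back-of-envelope script. The overhead factor and KV-cache allowance are my guesses, not measured numbers, and real usage depends on the model, context length, and backend:

```python
# Back-of-envelope VRAM estimate for a 32B-parameter model at various bit rates.
# The 1.05 overhead factor and the 2.5 GB cache/activation allowance are rough
# assumptions for illustration only.

def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in GB."""
    return params_billion * bits_per_weight / 8  # billions of params -> GB

for bpw in (3.0, 4.0, 5.0, 6.0):
    w = weights_gb(32, bpw) * 1.05   # small overhead for embeddings, norms, etc.
    kv = 2.5                         # rough allowance for KV cache + activations
    print(f"{bpw:.1f} bpw: ~{w:.1f} GB weights, ~{w + kv:.1f} GB total")
```

By that math, ~5 bpw is about the ceiling for 24GB with a usable context, and 6 bpw already blows the budget on weights alone.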