Scientists used "knowledge distillation" to condense Stable Diffusion XL into a much leaner, more efficient AI image generation model that can run on low-cost hardware.
I think they got their numbers wrong. It says they shrank it down to 700 million parameters; that would make it smaller than SD 1.5, which means it should take far less than 8GB of RAM.
I’m guessing there’s a mix-up. The smallest version is 700 million parameters, possibly the one used to generate the reported timing data, but the largest (or not?) still needs 8GB. If I remember correctly, SD3 is supposed to come in multiple versions, starting at 800 million parameters and going up, so this is going to be interesting.
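The 700M-vs-8GB question comes down to simple arithmetic on parameter count times bytes per parameter. A rough sketch (the byte widths are standard fp16/fp32 sizes; the 8GB figure is from the article, not derived here):

```python
# Back-of-envelope check of weight memory -- illustrative, not from the article.
def param_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """Raw weight memory in GiB for a model with n_params parameters."""
    return n_params * bytes_per_param / (1024 ** 3)

# 700 million parameters at fp16 (2 bytes each): weights alone are ~1.3 GiB.
print(round(param_memory_gb(700e6, 2), 2))

# Even at fp32 (4 bytes each) the weights are only ~2.6 GiB; any 8GB figure
# would also be covering activations, other components (VAE, text encoder),
# and framework overhead -- or a larger model variant.
print(round(param_memory_gb(700e6, 4), 2))
```

So the commenters' intuition checks out: 700M parameters alone don't come close to 8GB, which supports the multiple-model-sizes reading.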