Scientists used "knowledge distillation" to condense Stable Diffusion XL into a much leaner, more efficient AI image generation model that can run on low-cost hardware.
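For anyone unfamiliar with the technique mentioned in the headline: here's a minimal, hypothetical sketch of what knowledge distillation looks like for a diffusion denoiser: train a small "student" network to reproduce the outputs of a frozen "large" teacher. The `TinyUNet` modules here are stand-ins I made up for illustration, not the actual architectures from the article.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Stand-in denoiser; a real diffusion UNet is far larger."""
    def __init__(self, channels: int, width: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

teacher = TinyUNet(channels=4, width=64)   # frozen "large" model
student = TinyUNet(channels=4, width=16)   # much smaller model
teacher.eval()

opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

for step in range(100):
    # Random noisy latents; a real pipeline would noise encoded images.
    latents = torch.randn(8, 4, 32, 32)
    with torch.no_grad():
        target = teacher(latents)          # teacher's noise prediction
    # Student learns to match the teacher's output on the same input.
    loss = nn.functional.mse_loss(student(latents), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The payoff is that the student keeps most of the teacher's behavior at a fraction of the parameter count, which is what lets the distilled model fit on low-cost hardware.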
This is odd reporting: AFAIK Stable Diffusion XL already runs on a GPU with 8 GB of VRAM, and it usually doesn't take that long to generate an image either (depending on the GPU).
I think they got their numbers wrong. It says they shrank it down to 700 million parameters; that would make it smaller than SD 1.5, which means it should need far less than 8 GB of RAM. Quick back-of-the-envelope math below.
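To put numbers on that: weights alone for 700M parameters at fp16 are well under 2 GiB. This ignores activations, the text encoder, and the VAE, and the parameter counts for the comparison models are approximate.

```python
# Raw weight memory only, for a given parameter count and precision.
def weight_gib(params: float, bytes_per_param: int) -> float:
    return params * bytes_per_param / 2**30

for name, params in [("distilled model, 0.7B", 0.7e9),
                     ("SD 1.5 UNet, ~0.86B", 0.86e9),
                     ("SDXL UNet, ~2.6B", 2.6e9)]:
    print(f"{name}: fp16 ~ {weight_gib(params, 2):.1f} GiB, "
          f"fp32 ~ {weight_gib(params, 4):.1f} GiB")
```

So a 0.7B model is around 1.3 GiB of weights at fp16; even at full fp32 it's under 3 GiB, nowhere near an 8 GB requirement.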
I'm guessing there's a mix-up. The smallest version is 700 million parameters, possibly the one used for the reported timing data, while the largest (or maybe not the largest?) still runs in 8 GB. If I remember correctly, SD3 is supposed to come in multiple sizes, starting at 800 million parameters and going up, so this is going to be interesting.