This is odd reporting: Stable Diffusion XL AFAIK already runs on a GPU with 8 GB of VRAM and usually doesn’t need that much time to generate an image either (it depends on the GPU).
I think they got their numbers wrong. It says they shrank it down to 700 million parameters, which would make it smaller than SD 1.5, meaning it should take far less than 8 GB of RAM.
I’m guessing there’s a mix. The smallest version is 700 million parameters, possibly the one used to generate the reported timing data, but the largest (or not?) still runs in 8 GB. If I remember correctly, SD3 is supposed to come in multiple versions, starting at 800 million parameters and going up, so this is going to be interesting.
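For context, here’s a rough back-of-envelope sketch of why a 700-million-parameter model shouldn’t need anywhere near 8 GB just for its weights. The parameter counts are assumptions from memory, and this ignores activations, the VAE, and the text encoder, which all add overhead on top:

```python
# Rough estimate of memory needed to hold model weights alone,
# assuming fp16 (2 bytes per parameter). Activations, VAE, and
# text encoders are NOT included, so real usage is higher.
def weight_memory_gb(params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory for storing the weights, in GiB."""
    return params * bytes_per_param / 1024**3

# Parameter counts below are ballpark figures, not from the article.
print(f"700M-param model:  ~{weight_memory_gb(700e6):.1f} GiB")
print(f"SD 1.5 (~860M):    ~{weight_memory_gb(860e6):.1f} GiB")
print(f"SDXL UNet (~2.6B): ~{weight_memory_gb(2.6e9):.1f} GiB")
```

So even a few times the weight footprint for inference overhead still fits comfortably under 8 GB for the small model, which is why the "needs roughly 8 GB" claim reads like it refers to a larger variant.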
Is that feasible on a Raspberry pi?
Probably. FastSD CPU already runs on a Raspberry Pi 4.
No, lol. Well, at least I’m not 100% familiar with the Pi’s new offerings, but I have doubts about their PCIe capabilities. Direct quote:
The tool can run on low-cost graphics processing units (GPUs) and needs roughly 8GB of RAM to process requests — versus larger models, which need high-end industrial GPUs.
Your question seems silly when I try to imagine hooking up my GPU, which is probably bigger than a Pi, to a Pi.
I’ve been running all the image generation models on a 2060 Super (8 GB VRAM) up to this point, including SDXL, the model they “distilled” theirs from… Reading the article, I’m not really sure what exactly they think differentiates them…
There are three models and the smallest one is 700M parameters.
Jeff Geerling has entered the chat
Lol, read the article: it cites “8 GB VRAM”, and if I had to guess, it will only support Nvidia out of the gate.