“Better quality” is an interesting concept. Increasing steps, depending on the sampler, changes the image. The seed mode usually changes the image when you change the size.
So, what exactly do you mean by “better quality”?
This isn’t what you asked specifically, but it’s related enough… have a look at https://apps.apple.com/it/app/draw-things-ai-generation/id6444050820?l=en-GB as it’s free, ad-free, free from tracking, and really well optimized. With it I can run Schnell on my iPhone 13 Pro!
I enjoy this 1.5 LoRA: https://civitai.com/models/165876/2d-pixel-toolkit-2d. It’s pretty neat!
The floor is a carpet, and the shoes are harder to tell but might be a similar situation? Velvet maybe?
I’m guessing there’s a mix. The smallest version is 700 million parameters, possibly the one used to generate the reported timing data, but the largest (or not?) still runs in 8 GB. If I remember correctly, SD3 is supposed to have multiple versions, starting at 800 million parameters and going up, so this is going to be interesting.
Cool, looks simple enough. Can’t test it on my phone, but for anything with the A12 and up (although RAM can and will be a problem with less than 6 GB) there’s https://apps.apple.com/it/app/draw-things-ai-generation/id6444050820?l=en-GB
Can I offer what I believe is a better option? 1.5 LCM models. 5 steps for a good image, and they’re 5 steps at 1.5 speeds :)
I like this one, but obviously you can find other LCM models.
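If you’re working in diffusers rather than an app, a minimal sketch of the LCM workflow looks something like this. It assumes the commonly used LCM-LoRA for SD 1.5 (the model ids and prompt here are just illustrative, swap in whatever checkpoint you actually use):

```python
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

# Any SD 1.5 checkpoint works; this is just the standard example id.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Swap in the LCM scheduler and load the LCM-LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# LCM needs very few steps and low guidance.
image = pipe(
    "a fox piloting a crocodile mech, detailed illustration",  # placeholder prompt
    num_inference_steps=5,
    guidance_scale=1.0,
).images[0]
image.save("lcm_fox.png")
```

Same idea as the app workflow: 5 steps instead of 20+, with guidance kept around 1, and you still get a usable image at 1.5 speeds.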
How old of a Xeon? It won’t be a fast result, but maybe you’re fine with that. Back when I tried this, SD 1.5 could do 20 steps at 512x512 on my Ryzen 5 3600 in roughly 7 minutes…
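If you want to try it anyway, a rough sketch of CPU-only inference with diffusers looks like this (the checkpoint id and prompt are just placeholders; expect minutes per image, not seconds):

```python
import torch
from diffusers import StableDiffusionPipeline

# fp32 on CPU; half precision generally isn't worth it there.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float32,
).to("cpu")

image = pipe(
    "a cozy cabin in a snowy forest",  # placeholder prompt
    num_inference_steps=20,
    height=512,
    width=512,
).images[0]
image.save("cabin.png")
```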
Unless they’re aiming for a specialized model? I don’t have insight on the matter, just a guess.
Is it? The authors all have names that (in my ignorance!) sound Russian, and Kandinsky was a Russian painter…
Yup, same reason you can ask for a fox using a crocodile as a mech and get a good result. The model has the concept of all the things requested and mixes them (with varying success).
I have a few examples that I hope retain their metadata.
Seed mode is… basically, I stopped using Automatic1111 a long time ago and kinda lost track of what goes on there, but in the app I use (Draw Things) there’s a seed mode called Scale Alike. Could be exclusive, could be the standard everywhere for all I know. It does what it says: changing the resolution keeps things looking close enough.
Edit: obviously at some point they had to lose the bloody metadata….