SAI could have easily avoided the massive confusion by providing information on how to use it. Instead, SAI employees are, quite rudely, claiming it's a skill issue on Discord, while still not providing any info on how to use it properly.
If you release some software and users are struggling, telling them to "git gud" and similar is very odd.
I'm not sure that "company which is hemorrhaging money releases yet another open model that won't make them any money" contradicts the financial turmoil, it just means the rate at which they are incinerating money hasn't quite caught up with the rate that investor hype is replenishing their money pile.
The model looks excellent. Complex arrangements, high quality text, and easier fine-tuning right in time for campaign season... this election year is going to be a fun one.
The images look very detailed, and I don't see the typical artifacts in the textures, probably thanks to the better VAE. I'll wait for an anime-finetuned model.
5 gigabytes of VRAM in its minimum configuration, though various things can be done to decrease that. Quantization and distillation might theoretically reduce resource needs further, and even as-is it's small enough to get halfway decent CPU generation times.
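As a rough back-of-the-envelope (my own arithmetic, assuming the 2B parameter count mentioned below and ignoring activations, text encoders, and the VAE), weight memory scales directly with bytes per parameter, which is why quantization helps:

```python
# Illustrative weight-memory estimate for a ~2B-parameter model.
# Real VRAM use is higher: activations, text encoders, and the
# VAE all add on top of the raw weights.
PARAMS = 2e9  # assumed parameter count

def weight_gb(bytes_per_param: float) -> float:
    """Gigabytes needed just to hold the weights."""
    return PARAMS * bytes_per_param / 1024**3

for name, bpp in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{name}: ~{weight_gb(bpp):.1f} GB")
```

Dropping from fp16 to int8 roughly halves the weight footprint again, which is consistent with a ~5 GB fp16 pipeline fitting into much smaller cards once quantized.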
It's hard to infer relative performance from parameter count alone. SD3 and SDXL are quite different architecturally. The only way to really tell is to compare them on examples. Even this lobotomized 2B model seems to perform better on prompt adherence and text compared to the base SDXL model, so I think it has potential once fine-tuned.
The Stable Diffusion subreddit is throwing a hissy fit over censorship, but that is par for the course.