Image generation models are generally more than capable of doing that; they're just not trained to do it.
That is, with just a bit of hand-holding, showing SDXL appropriately tagged images, you get quite sensible results. Under normal circumstances it simply never gets to associate any input tokens with the text in the pixels, because people rarely, if ever, describe verbatim what's written in an image. "Hooters" is an exception: it's hard to find a model on Civitai that can't spell it.
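The "appropriately tagged images" part could look something like this: a minimal sketch of building caption sidecar files for a LoRA-style fine-tune, where each caption quotes, verbatim, the text rendered in the image. The filenames and the `images` mapping here are made-up examples, and the image.png/image.txt pairing is just the common convention many trainers use.

```python
# Sketch: caption files that quote the text in each image verbatim,
# so the model can associate input tokens with rendered text.
# The filenames and texts below are hypothetical examples.
from pathlib import Path

images = {  # image file -> text visibly written in it
    "sign_001.png": "CLOUD FIRST SOLUTIONS",
    "sign_002.png": "BLOW THE COMPETITION AWAY",
}

out = Path("dataset")
out.mkdir(exist_ok=True)

for name, written_text in images.items():
    # Many trainers pair image.png with image.txt as its caption.
    caption = f'a corporate sign, the text "{written_text}" written on it'
    (out / name).with_suffix(".txt").write_text(caption)
```

The point isn't the exact caption template, it's that the written text appears as tokens in the caption at all, which normal scraped captions almost never do.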
I will probably use these images in a corporate PowerPoint. I'm not asking for your permission, I'm warning you. Sorry, it's too good. (I will credit you as the CTO of some company ending in -SYS or -LEA if you want.)
I had to run that through Bing AI real quick lol
Ataliative 😮
Adetvi Learning 😲
Blowv the ciompetittio 🤯
BlocklBerach
Clould frist lustion
Agíee
Clould FRIST
I’m surprised it’s able to make readable text.
Blovw it hard!