I ran a test between two AI image generators for a client project in Austin

We needed 50 custom icons for a new app, so I tried giving one tool basic text prompts versus feeding it a few hand-drawn sketches first. The sketch method gave us usable results in about 3 hours, while the text-only version took all day and needed far more edits. The AI that saw the rough drawings just understood the style we wanted much faster. It felt like giving it a visual anchor made all the difference. Has anyone else found that starting with a visual reference, even a bad one, speeds up AI image work?
2 comments

hayes.lee · 3d ago
That part about the "visual anchor" making all the difference? I see it the opposite way. For me, throwing a sketch at the AI just gives it more wrong stuff to copy. It locks you into your own bad drawing. Starting with just words lets the machine do its own thing from scratch, which is the whole point. My best results come from writing a tight text prompt and then letting it run wild for a while. Feeding it a sketch feels like trying to teach a dog to paint by holding its paw. You just get a messy picture.
dylan_anderson
Hayes has a point about the machine doing its own thing. But calling it a messy picture is a bit much. It's just a different tool, not a total failure.