Aside from the fact that your comment applies to photography as well, I think it’s fair to point out image generation can also be a complex pipeline instead of a simple prompt.
I use ComfyUI on my own hardware and frequently include steps for ControlNet, depth maps, Canny edge detection, segmentation, LoRAs, and more. The text prompts, both positive and negative, are personally the least important parts of my workflow.
Hell, sometimes I use my own photos as one of the dozens of inputs for the workflow, so in a sense photography was included.
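For anyone unfamiliar with the pipeline steps above: the Canny edge-detection pass is a preprocessing step that converts an input photo into an edge map, which ControlNet then uses to constrain the composition of the generated image. A minimal sketch of the core idea in plain NumPy (simplified to Sobel gradient magnitude; full Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding, and in practice you'd use a library like OpenCV — the threshold value here is an illustrative choice):

```python
import numpy as np

def sobel_edges(img: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Approximate an edge map from a grayscale image with values in [0, 1].

    This keeps only the Sobel gradient magnitude, which is the core of
    the edge signal that ControlNet-style preprocessors consume.
    """
    # Standard 3x3 Sobel kernels for horizontal and vertical gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # Naive 3x3 convolution over the interior (borders left at zero).
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    peak = mag.max()
    if peak > 0:
        mag /= peak
    return (mag > threshold).astype(np.uint8)  # binary edge map
```

The resulting binary map is what gets fed into the ControlNet conditioning input instead of (or alongside) the text prompt, which is why the prompt itself carries so little of the compositional work.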
These are not comparable to AI image generation.
Even tracing has more artistic input than typing “artist name cool thing I like lighting trending on artstation” into a text box.
So about the same as a photograph then