A group of hackers that says it believes “AI-generated artwork is detrimental to the creative industry and should be discouraged” is hacking people who use a popular interface for the AI image generation software Stable Diffusion, via a malicious extension for that interface shared on GitHub.
ComfyUI is an extremely popular graphical user interface for Stable Diffusion, shared freely on GitHub, that makes it easier for users to generate images and modify their image generation models. ComfyUI_LLMVISION, the extension that was compromised to hack users, let users integrate the large language models GPT-4 and Claude 3 into the same interface.
The ComfyUI_LLMVISION GitHub page is currently down, but a Wayback Machine archive of it from June 9 states that it was “COMPROMISED BY NULLBULGE GROUP.”
Based on the discussion I’ve seen, it looks like the “Anti-AI” motive was just an excuse, since all the hack actually did was steal API keys, potentially to sell them. Here’s a discussion thread on Reddit that goes into this more.
Looting API keys makes way more sense. They must have been stuck using GPT-2 to write that incoherent statement.
AI-generated artwork is detrimental to the creative industry and should be discouraged
Man, you wouldn’t believe how airbrush artists felt when Photoshop came around.
I really don’t understand this. All these search engine companies give millions of users a single button to create the most soulless art you’ve ever seen, but instead of caring about that they attack the tool that most enables the user to have control over their generation. You can argue that unlimited competition is bad for commission artists, but this attack is not “Pro Art”.
Using Creative Cloud isn’t a sin, but helping maintain Adobe’s industry stranglehold should be.
Honestly I still don’t understand the “stealing” argument. Does the stealing occur during training? From everything I’ve learned about the technology, the training, in terms of the data given and the end result, isn’t any different than me scrolling through Google images to get a concept of how to draw something. It’s not like they have a copy of the whole Internet on their servers to make it work.
Does it occur during the image generation? Because try as I might, I’ve never been able to get it to output copyrighted material. I know overfitting used to be an issue, but we figured out how to solve that a long time ago. “But the signatures!!” Yeah, it’s never output a recognizable/legible signature; it just associates signatures with art.
Shouldn’t art theft be judged like any other copyright matter? It doesn’t matter how it was created, it matters if it violates fair use. I really don’t think training crosses that line, and I’ve yet to see these models output a copy of another image outside of image-to-image models.
It’s theft of labor without any compensation, aimed at cheapening the very value of that labor.
A human artist can, and often does, train simply by looking at the real world. The art they then produce is a result of that knowledge being interpreted and stylized by their own brain and perception. The decision making on how to represent a given subject, what details to add and leave out to achieve an effect, is done by the artist themselves. It’s a product of their internal mental laboring.
By contrast, if you trained an AI on photos alone it would never, ever produce anything that looks like a drawing or a piece of art, it would never create a stylized piece of art or make a creative decision of its own.
In order to produce art the AI must be fueled with human created art, that humans labored to produce. The human artists are not being compensated for the use of that labor, and even worse the AI is leveraging that to make the human labor worth less. And what’s more, that AI’s ability will stagnate without further theft of newer, more novel art and concepts.
Without that keystone of human labor the AI simply can’t function.
Ripping off so many people at once and so chaotically that you can’t distinguish exactly how any given individual is being exploited doesn’t mean those people aren’t still being ripped off. The machine that the tech bros created could not exist without the stolen labor of the artists.
I get the sentiment but I don’t think anything here addresses anything I haven’t already mentioned. The labor is certainly being used and it’s certainly for profit, but not in any way that humans don’t already do.
I really am sympathetic towards artists, though. Like I get that a lot of demand for their work could one day be taken by what generative AI is working towards. I just don’t understand how we can reasonably call it theft/crime when a computer figures out how to make an image by looking at other images but not when humans do it. The whole thing seems like an appeal to emotion.
Have you read this article by Cory Doctorow yet?
For me the funniest moment of this whole saga was when the AI bros were claiming that they weren’t stealing anyone’s art, but then flipped shit when a FOSS tool released that let people reformat their art pieces specifically so that it’d be harmful to AI art generators that copied them.
You’ve got it backwards. Glaze and Nightshade aren’t FOSS, and Ben Zhao, the University of Chicago professor behind them, stole GPLv3 code for Glaze. GPLv3 is a copyleft license that requires you to share your source code and license your project under the same terms as the code you used; you also can’t distribute your project as binary-only or proprietary software. When pressed, they released only the code for their front end, remaining in violation of the terms of the GPLv3 license.
Moreover, Nightshade and Glaze only work against open source models, because the only open models are Stable Diffusion’s; companies like Midjourney and OpenAI, whose models are closed source, aren’t affected by this. Attacking a tool that the public can inspect, collaborate on, and obtain free of cost isn’t something that should be celebrated.