Popular drawing tablet manufacturer Wacom is the latest target of backlash from the digital art community after appearing to use AI-generated images in its ads. Over the weekend, creatives across X (formerly Twitter) and TikTok noticed that Wacom was promoting its Intuos pen tablet with a dragon illustration that showed telltale signs of AI generation — such as questionable scale designs and fur blending unnaturally into other sections of the image.

Wacom deleted the images without explanation, fueling speculation that an industry-standard brand for artists was using tools widely criticized for replacing them. And it wasn’t the only AI controversy this weekend. Wizards of the Coast (WotC), the publisher behind Magic: The Gathering and Dungeons & Dragons, also issued an apology on Sunday for using an ad with AI-generated elements. The controversies have escalated mistrust around an already complicated question: how can creatives, and the companies that work with them, navigate a flood of images that are easy to create and hard to conclusively detect?

Many artists have already rallied against companies using generative AI. They fear it could impact job security across a wide range of creative professions like graphic design, illustration, animation, and voice acting. But there’s a particular sense of betrayal around brands like Wacom, whose main audience is artists. The company has long been considered the industry standard supplier for drawing tablets, thanks to the success of products like its professional Cintiq lineup. At the same time, it’s facing more competition than ever from rival brands like XPPen and Xencelabs.

Wacom initially didn’t respond to artists’ complaints. Several days later, and after this article was published, the company issued a contrite statement saying that the images in question had been purchased from a third-party vendor and had evaded the online AI detection tools it used for vetting.

“We want to assure you that using AI-generated images in these assets was not our intent,” Wacom wrote. The company said it is now unsure whether AI was used in their creation. “For this reason, we immediately discontinued their use.”

WotC’s issues are more straightforward — but in some ways, they point to a tougher problem. The company announced in August that it would prohibit AI-generated imagery in its products after confirming AI was used to create some artwork for the Bigby Presents: Glory of the Giants D&D sourcebook. In December, WotC also denied claims that AI-generated imagery was included in the upcoming 2024 Player’s Handbook for D&D.

Despite this, the company shared a new marketing campaign for its Magic: The Gathering card game on January 4th that was quickly scrutinized for containing strangely deformed elements commonly associated with AI-generated imagery. The company initially denied AI was involved, insisting that the image was made by a human artist, only to back down three days later and acknowledge that it did in fact contain AI-generated components. In the apology that followed, WotC implied the issue was tied to the increasing prevalence of generative AI integrations in widely used creative tools, such as Adobe Photoshop’s Generative Fill feature.

“We can’t promise to be perfect in such a fast-evolving space, especially with generative AI becoming standard in tools such as Photoshop,” said WotC in its online statement. “But our aim is to always come down on the side of human made art and artists.”

WotC says it’s examining how it’ll work with vendors to better detect unauthorized AI usage within any marketing materials they submit — but that’s easier said than done these days. There is currently no truly reliable way to check whether a given image was generated using AI. AI detectors are notoriously unreliable and regularly produce false positives, and other methods, like the Content Credentials metadata backed by Adobe, can only provide information for images created using specific software or platforms.
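The limitation is structural: Content Credentials are opt-in metadata, so their absence proves nothing about an image. As a rough illustration (this is not official C2PA tooling — the byte-scan heuristic and command-line handling here are our own assumptions), a few lines of Python can check whether a file even carries the “c2pa” manifest label that Content Credentials embed:

```python
# Crude presence check for C2PA / Content Credentials metadata.
# NOT a spec-compliant parser: it only scans the raw bytes for the
# "c2pa" JUMBF label that embedded manifests carry. A hit suggests
# provenance data is present; a miss proves nothing, since screenshots,
# re-saves, and most existing images strip or never had the metadata.
import sys
from pathlib import Path

def has_c2pa_marker(path: str, scan_limit: int = 4 * 1024 * 1024) -> bool:
    """Return True if the file's leading bytes contain a 'c2pa' label."""
    data = Path(path).read_bytes()[:scan_limit]  # cap the read for huge files
    return b"c2pa" in data

if __name__ == "__main__":
    for image in sys.argv[1:]:
        verdict = "carries" if has_c2pa_marker(image) else "lacks"
        print(f"{image}: {verdict} a Content Credentials marker")
```

Even a check like this only works one way: it can confirm that provenance metadata exists, never that an image was made without AI.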

Even defining AI-generated content is getting harder. Tools like Firefly, the AI model integrated into Photoshop and Illustrator, allow users to make prompt-driven alterations on individual layers. Some creative professionals argue these are simply tools that artists can benefit from, but others believe any generative AI features are exploitative because they’re often trained on masses of content collected without creators’ knowledge or consent. Adobe has assured users that Firefly-powered AI tools are only trained on content that Adobe owns, but that doesn’t mean they pass everyone’s ethical sniff test.

The situation has left bystanders visually checking projects for telltale signs of inhuman origin and organizations trusting artists to not lie about how their content was made. Neither option is exactly infallible.

That uncertainty has spread paranoia and anxiety throughout the online creative community as artists desperately attempt to avoid contributing to — or being exploited by — the growing AI infestation. The rapid deployment of generative AI technology has made it incredibly difficult to avoid. The inability to trust artists or brands to disclose how their content is produced has also sparked AI “witch hunts” led by creatives. The goal is to hold companies accountable for using the technology instead of paying designers, but in some cases, accusations are entirely speculative and end up harming human artists.

Even when companies insist that AI hasn’t been involved, creatives are understandably skeptical. A poster for Marvel’s Loki TV series on Disney Plus was also criticized last year after some creatives claimed it contained AI-generated stock assets from Shutterstock. Following its own investigation, Shutterstock said that the stock image was human-made, and that a “software tool” had instead been used to create the “subtle creative imperfections most often associated with AI generated art.” Shutterstock declined, however, to share what the software tool in question was.

The headache of trying to avoid generative AI entirely — including by unknowingly promoting it or having online portfolios scraped to train it — has proved too much for some creatives. Artists have cited it as a reason for leaving the industry or contemplating abandoning their studies. At least one art competition, the annual Self-Published Fantasy Blog-Off (SPFBO) cover contest, was shut down entirely after last year’s winner admitted under pressure to using banned AI tools.

But even creatives who don’t expect to stop generative AI’s development want something more substantive from companies than “trust me bro,” especially when those companies rely on their patronage. Artists value honesty and accountability over stonewalling and evasiveness about whether AI tools are being used.

Wacom and WotC eventually provided similar responses to their respective situations: that the offending images had come from a third-party vendor, that the companies were unaware that AI had been used to make them, and that they promised to do better in the future. That hasn’t reassured some artists, however, who questioned how apparent hallucinations within the images had gone undetected, and why these creative companies weren’t hiring artists directly.

Both cases suggest that anti-AI pressure campaigns will remain an ongoing force in the creative world. Generative AI might not be going anywhere — but for many companies, using it has turned into a PR nightmare.

Update January 9th, 5:11PM ET: Added Wacom’s response to accusations that it used AI-generated images.
