Preserving Creative Identity: Artists Denounce AI Intrusion

Artists whose work is studied and mimicked by artificial intelligence (AI) models have teamed up with academic researchers to fight back against the growing threat.
US illustrator Paloma McClain went into defense mode after learning that several AI models had been trained on her art with no credit or compensation. "It bothered me," McClain said. "I believe truly meaningful technological advancement is done ethically and elevates all people instead of functioning at the expense of others."

The artist turned to free software called Glaze, created by researchers at the University of Chicago. Glaze essentially outsmarts AI models during training: it subtly tweaks pixels in ways imperceptible to human viewers but which make a digitized piece of art appear dramatically different to the AI.

"We're essentially providing technical tools to assist in safeguarding human creators against intrusive and unscrupulous AI models," said Ben Zhao, a professor of computer science on the Glaze team.

Glaze was built in just four months, spun off from technology originally designed to disrupt facial recognition systems. "We were working at super-fast speed because we knew the problem was serious," Zhao said. "A lot of people were in pain."
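Glaze's actual method involves adversarial optimization against a surrogate model and is far more sophisticated than anything shown here. Purely as a toy illustration of the core idea of bounded, near-invisible pixel changes, here is a minimal Python sketch; the file names and the EPSILON bound are hypothetical, and the perturbation below is random rather than adversarially computed:

```python
# Toy illustration of a bounded, near-imperceptible pixel perturbation.
# This is NOT Glaze's algorithm: a real cloaking tool optimizes the
# perturbation direction against a model, rather than drawing it at random.
import numpy as np
from PIL import Image

EPSILON = 3  # max change per colour channel (out of 255): hard to see

def cloak(path_in: str, path_out: str, seed: int = 0) -> None:
    rng = np.random.default_rng(seed)
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    # Bounded random offsets; the real tool would choose these so a model's
    # feature extractor "sees" a different artistic style.
    delta = rng.integers(-EPSILON, EPSILON + 1, size=img.shape, dtype=np.int16)
    cloaked = np.clip(img + delta, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(path_out)

if __name__ == "__main__":
    cloak("artwork.png", "artwork_cloaked.png")  # hypothetical file names
```

Because each channel moves by at most a few units out of 255, the cloaked copy looks identical to a person; the point of the real tool is that the direction of the change is chosen adversarially, so a model ingesting the image learns misleading stylistic features.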

While large generative AI players do have agreements to use some data, most of the digitized images, audio and text used to train the sophisticated software have been scraped from the internet without explicit consent.

Glaze has been downloaded more than 1.6 million times since its release in March 2023, according to Zhao. The team is now working on an enhancement called Nightshade, which strengthens the defense by confusing the AI, for example by getting it to interpret a picture of a dog as a cat.

"I believe Nightshade will have a noticeable effect if enough artists use it and put enough 'poisoned' images into the wild," McClain said, meaning easily available online. "According to Nightshade's research, it wouldn't take as many 'poisoned' images as one might think."

Several companies have expressed interest in using Nightshade, according to Zhao, whose team's goal is for people and companies to be able to protect their intellectual property.
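Nightshade's real attack perturbs the images themselves; as a back-of-the-envelope illustration of why a modest number of poisoned samples can skew what a model learns, here is a toy numeric sketch in which the cluster positions, sizes, and fractions are all invented for demonstration:

```python
# Toy numeric illustration of data poisoning, in the spirit of (but far
# simpler than) Nightshade: a small fraction of "dog"-labelled samples
# with cat-like features shifts what a naive model learns "dog" means.
import numpy as np

rng = np.random.default_rng(1)
dogs = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))  # clean "dog" features
cats = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(500, 2))  # clean "cat" features

def dog_centroid(poison_fraction: float) -> np.ndarray:
    n_poison = int(len(dogs) * poison_fraction)
    # Poisoned samples: labelled "dog" but placed in the "cat" region.
    poison = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(n_poison, 2))
    return np.vstack([dogs, poison]).mean(axis=0)

for frac in (0.0, 0.05, 0.2):
    print(f"poison={frac:.0%}  learned 'dog' centroid={dog_centroid(frac).round(2)}")
```

Even at 5 percent poison, the learned "dog" centroid drifts measurably toward the "cat" cluster, which is the intuition behind McClain's point that it wouldn't take as many poisoned images as one might think.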
In other developments, the startup Spawning has created Kudurru software that detects attempts to harvest large numbers of images from an online venue. Artists can then block access or send images that don't match what was requested, tainting the pool of data being gathered to train the AI. More than a thousand websites have already been integrated into the Kudurru network, according to Spawning cofounder Jordan Meyer.

Spawning has also launched haveibeentrained.com, a website featuring an online tool that lets artists find out whether their digitized works have been fed into an AI model and opt out of such use in the future.

Alongside these image-focused defenses, researchers at Washington University in St. Louis, Missouri, have developed AntiFake software to keep AI from copying human voices. AntiFake enriches digital recordings of people speaking with noises inaudible to listeners, but which make it "impossible" to synthesize the voice, said Zhiyuan Yu, the PhD student behind the project (a toy sketch of the inaudible-noise idea appears below).

The program aims to go beyond just blocking unauthorized AI training, to preventing the creation of "deepfakes", bogus audio or video recordings of people doing or saying things they never did. A popular podcast recently asked the AntiFake team for help protecting its productions from being hijacked, Zhiyuan Yu said. While the software is so far used for spoken recordings, it could also be applied to songs.

"The best solution would be a world in which all data used for AI is subject to consent and payment," Meyer said. "We hope to push developers in this direction."
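AntiFake's actual perturbation is computed adversarially against speech-synthesis models. Purely as a toy sketch of the general idea of adding low-level noise to a voice recording, here is a minimal Python example; the file names, the 30 dB margin, and the use of plain white noise are illustrative assumptions, not the real tool:

```python
# Toy sketch: add very quiet noise to a 16-bit PCM WAV recording.
# This is NOT AntiFake's algorithm: the real tool shapes the perturbation
# adversarially so voice-cloning models fail, not with white noise.
import wave
import numpy as np

def add_quiet_noise(path_in: str, path_out: str, snr_db: float = 30.0) -> None:
    with wave.open(path_in, "rb") as w:
        params = w.getparams()
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    signal = samples.astype(np.float64)
    # Scale the noise so it sits snr_db decibels below the signal's RMS level.
    noise_rms = np.sqrt(np.mean(signal**2)) / (10 ** (snr_db / 20))
    noisy = signal + np.random.default_rng(0).normal(0, noise_rms, signal.shape)
    out = np.clip(noisy, -32768, 32767).astype(np.int16)
    with wave.open(path_out, "wb") as w:
        w.setparams(params)
        w.writeframes(out.tobytes())

if __name__ == "__main__":
    add_quiet_noise("voice.wav", "voice_protected.wav")  # hypothetical files
```

In the real system, the added signal is optimized so that synthesis models fed the recording produce garbled output, while a human listener hears the original voice essentially unchanged.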
With AFP.