
Study of AI as a creative writing helper finds that it works, but there's a catch

A new experiment in writing shows the limits of AI-driven ‘creativity.’

Researchers are testing the limits of AI-generated creativity. Credit: Didem Mente / Anadolu via Getty Images

Researchers are exploring the existential implications of generative AI, including whether the advancing technology will actually make humans more creatively capable or instead narrow our creative range.

The new study, published in Science Advances by two researchers from University College London and the University of Exeter, tested hundreds of short stories written by humans alone against those written with creative help from ChatGPT. One group of writers worked solely from their own ideas, a second group could ask ChatGPT for a single story idea, and a third could draw on a set of five ChatGPT-generated prompts. The stories were then rated on “novelty, usefulness (i.e. likelihood of publishing), and emotional enjoyment,” reported TechCrunch.

“These results point to an increase in individual creativity at the risk of losing collective novelty,” the study reads. “This dynamic resembles a social dilemma: With generative AI, writers are individually better off, but collectively a narrower scope of novel content is produced.”


Before the writing session, participants' creativity was measured with a commonly used word-production task that establishes a baseline creativity score across respondents. Those who scored lower on this creativity proxy received better ratings on their own writing when given access to AI-generated ideas. But for those who already scored high on creativity, AI ideas added little to no benefit to their story ratings.

Additionally, the pool of stories aided by AI-generated prompts was deemed less diverse and displayed fewer distinctive writing characteristics, suggesting the limits of ChatGPT's all-around ingenuity. The study's literary findings add to concerns about AI's self-consuming training loops, the problem of AI models degrading when they are trained only on AI outputs, as Mashable's Cecily Mauran has reported.

Study author Oliver Hauser said in a comment to TechCrunch: “Our study represents an early view on a very big question on how large language models and generative AI more generally will affect human activities, including creativity… It will be important that AI is actually being evaluated rigorously — rather than just implemented widely, under the assumption that it will have positive outcomes.”

