As AI-generated text continues to evolve, distinguishing it from human-authored content has become increasingly difficult. This study examined whether non-expert readers could reliably differentiate between AI-generated poems and those written by well-known human poets. We conducted two experiments with non-expert poetry readers and found that participants performed below chance levels in identifying AI-generated poems (46.6% accuracy, χ²(1, N = 16,340) = 75.13, p < 0.0001). Notably, participants were more likely to judge AI-generated poems as human-authored than actual human-authored poems (χ²(2, N = 16,340) = 247.04, p < 0.0001). We found that AI-generated poems were rated more favorably in qualities such as rhythm and beauty, and that this contributed to their mistaken identification as human-authored. Our findings suggest that participants employed shared yet flawed heuristics to differentiate AI from human poetry: the simplicity of AI-generated poems may be easier for non-experts to understand, leading them to prefer AI-generated poetry and misinterpret the complexity of human poems as incoherence generated by AI.
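The headline statistic is easy to sanity-check: a goodness-of-fit test of the observed correct/incorrect split against a 50/50 chance split. A minimal sketch with scipy follows; since the abstract's 46.6% accuracy is rounded, the reconstructed count is approximate and the statistic comes out near, but not exactly at, the reported 75.13.

    from scipy.stats import chisquare

    N = 16_340
    # 46.6% is rounded in the abstract, so this count is approximate
    correct = round(0.466 * N)           # ~7,614 correct identifications
    observed = [correct, N - correct]
    expected = [N / 2, N / 2]            # 50% chance performance under the null

    stat, p = chisquare(observed, f_exp=expected)
    print(f"chi2(1, N={N}) = {stat:.2f}, p = {p:.3g}")
    # -> chi2(1, N=16340) = 75.68, p ~ 1e-18, consistent with the reported result

Running it gives a statistic of about 75.7, in line with the paper's 75.13 once rounding of the accuracy figure is accounted for.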
They specify in the study that the participants were “non-expert poetry readers.” I’d be interested to see the same experiment repeated with English professors, or even just English majors: folks with a lot of experience reading poetry, with exposure to its history, its notable works, and its different styles.
Good luck finding people willing to deal with English majors long enough to conduct the study.
This. Marvel superhero movies are also more popular with the general public than art films, but that doesn’t necessarily mean they’re better.
You can get whatever result you want if you’re able to define what “better” means.