Are we witnessing the end of plagiarism?

Jul 14, 2025 · Academia, AI Integration

TL;DR: In the age of AI-assisted writing, our traditional understanding of plagiarism no longer fits. Professor Eaton’s idea of postplagiarism offers a new framework for authorship, integrity, and trust in education.


It started with something simple. I’d written a paragraph for a class announcement and wanted it to sound clearer. So I pasted it into ChatGPT, added “Refine the following message to make it friendlier and more professional”, and hit enter. A few seconds later, the words returned, still mine, but just a little better. Cleaner. More confident. But then something strange happened. I hesitated.

Should I send this out? Did I still write this?

That tiny moment cracked open a much larger question, one I’ve since heard echoed in classrooms, faculty meetings, and online forums: If AI helped, even a little, is it still my work?

This dilemma isn’t just about “cheating.” It cuts to the heart of what it means to create, to take ownership, and to be recognised for what we do. For decades, the answer was simple. Plagiarism was the ultimate academic sin, and the best way to avoid it was to submit work that was entirely your own. But in a world of generative AI, where we can collaborate with machines at every stage of the creative process, from outlining and rewriting to ideating and image-making, that logic begins to fall apart.

That’s why I found Professor Sarah Eaton’s concept of postplagiarism so timely and, I think, transformative.

Rethinking academic integrity

In her editorial, Eaton suggests that we’re entering a new era of intellectual and educational life. She calls it the postplagiarism era. At its core is the idea that much of what we produce today — especially in writing — is neither fully human nor fully AI. It’s hybrid. We draft, the machine rewrites. The machine generates, we edit. The boundaries blur. And as Eaton (2023, p. 3) puts it, “trying to determine where the human ends and where the AI begins is pointless.”

That idea might sound alarming, especially in fields like education or research that hinge on originality and attribution. But Eaton isn’t arguing for looser standards. She’s arguing for a deeper reckoning with the technologies of our time. Rather than doubling down on detection tools or suspicion, she invites us to ask different questions: What does authorship mean in a hybrid world? How can we uphold integrity without clinging to outdated definitions? What kind of learning environment emerges when we shift from surveillance to trust?

That last question stuck with me the most.

Suspicion vs. trust: What are we teaching students to value?

So much of academic life has been shaped by what Eaton calls a culture of suspicion. We train students to cite everything “properly,” not necessarily because they value attribution, but because they fear being accused of plagiarism. Entire software ecosystems are built to detect dishonesty, not to nurture ethical awareness. But when we approach integrity through punishment, we miss the bigger opportunity: helping learners think critically about their own choices, responsibilities, and relationships to knowledge.

In contrast, Eaton’s postplagiarism framing asks us to cultivate a culture of responsibility and care, envisioning a world where students and educators are transparent about their use of tools, take ownership of the ideas they develop, and attribute ideas out of respect rather than obligation.

This approach resonates deeply with design education. In studio-based learning, we already navigate murky boundaries between individual and collaborative work, between remix and innovation. We value process as much as product. And we understand that good ideas rarely emerge in isolation. So maybe, rather than resisting hybrid authorship, we’re actually well positioned to model what ethical AI use could look like.

Hybrid writing is already here

I’ve started realising how deeply we’re already living in a postplagiarism world. I see students polishing a reflective journal with ChatGPT and Grammarly before handing it in. A colleague of mine used Claude to reshape a research abstract. Even I find myself co-creating a workshop prompt through back-and-forth iterations with a GPT assistant.

None of this is “cheating.” But it does challenge the simplistic notion that work is either “ours” or “not ours.” It’s more accurate to say: it’s ours, and it’s entangled with tools, platforms, datasets, and invisible collaborators. Eaton names this clearly: “Humans can relinquish control, but not responsibility.”

Even when AI contributes to our process, we remain accountable for the outcome. For the truthfulness of our claims. For the rigour of our research. For the impact of our teaching. And maybe that’s the real lesson here:

We don’t need new plagiarism policies so much as new literacies of authorship, agency, and accountability. We need to help learners name the choices they’re making, understand the tools they’re using, and take ownership of the knowledge they’re shaping.

Beyond writing: Postplagiarism in a neurotechnological future

In the final sections of her article, Eaton looks past GenAI and ahead towards brain-computer interfaces (BCIs) and neurotechnology. These tools are still emerging, but they raise even more complex questions about consent, cognition, and creativity.

If students begin to use embedded or wearable neurotools to aid memory, synthesis, or communication, how will educators respond? Will it still make sense to assess individual performance, or will we need to redefine what constitutes a fair, meaningful demonstration of learning?

Rather than panic, Eaton calls for pre-emptive, transdisciplinary research. Her argument isn’t that we should let go of integrity. It’s that we should update our frameworks so they remain relevant.

As someone teaching and researching in design, this resonates. We’re already dealing with augmented ideation tools, AI-generated mood boards, and adaptive feedback systems. But the ethics haven’t caught up yet. If we don’t define what ethical co-creation looks like now, we risk defaulting to the logic of control. Or worse, apathy.

Rethinking learning for a hybrid future

If Eaton is right (and I believe she is), then postplagiarism isn’t a threat to education but an invitation to transform it. We have a chance to design assessments that reward thinking, not just typing. We can teach authorship as a relational practice, rather than a mere technical requirement. And we can model transparency in our own use of AI, giving students the opportunity to learn from how we make judgments.

And perhaps most importantly, we can re-centre learning as an act of care. For knowledge. For others. For the world we’re creating together with our tools.

What’s one way you’re already navigating hybrid writing? Could you be more open or more intentional about it?

Hello! I'm Linus, an academic researching cognition, behaviour, and technologies in design. I am currently writing about AI in Design, academia, and life.