The Quiet Politics of Using AI in Universities

Aug 18, 2025 | Behaviours, Culture, Talent

TL;DR: In academia, AI use is often kept quiet, likely to preserve intellectual credibility. This post explores the rise of stealth AI-ing and the quiet politics of competence that keep AI use hidden.


When we talk about AI adoption in education, the conversation usually starts with access, skills, or ethics. But another, quieter force is at play: reputation management.

AI is supposed to make us better at our jobs. Faster. Smarter. More creative. But a strange thing is happening in universities and studios. Those of us who use AI frequently, perhaps even fluently, are going quiet about it. Not because we’re ashamed. But because we’re managing how our intelligence is interpreted. We’re navigating a culture where how we work matters just as much as what we produce.

We’re entering the era of stealth AI-ing.

This is AI discretion. You use AI to develop a paper outline, shape a paragraph, or prompt a visual. Then you move that output into a cleaner format before sharing it. You tweak it, refine it, maybe strip out anything too obviously machine-made. Not because it’s bad, but because announcing “AI helped me” can distort how work is received. Or worse, how you are perceived.

I found myself doing this just last week. I was drafting a research paper outline with Claude, trying to land the framing, sharpen the research question, and test a few different flows. Once I got something useful, I copied the text into a Word document, formatted it neatly, and showed it to my collaborator. When they gave feedback, I noted their comments in the Word doc. After the meeting, I keyed the notes back into Claude and iterated further in private.

I was not deliberately concealing my AI use. But I do think I was subconsciously avoiding something. I didn’t want to sidetrack the conversation into a debate about tools, or prompt a discussion about how I was working. I just wanted to keep the focus on the paper.

That, I think, is the real shape of stealth AI-ing. It’s not hiding, but quietly rerouting the actual process to protect momentum or, sometimes, to protect perception.

When credibility and AI don’t mix

Acar and colleagues wrote about “The Hidden Penalty of Using AI at Work” in Harvard Business Review. In their study of 28,000 software engineers, they found that when people believed AI had helped with the code, they rated the engineer as less competent, even when the work was identical. The impact was worse for women and older workers. But in academia, where authorship and originality are sacred, the penalty takes on a different form.

So when we’re “stealth AI-ing”, we are, in a way, protecting our intellectual credibility. Our standing is built on a performance of effort, and AI unsettles that performance. A few months ago, a younger academic I know used AI to craft more thoughtful and personalised student feedback. But his senior convenor, less familiar with AI, forced him to scale it back. Why? The senior convenor could not match the younger academic’s performance. It’s a reminder that AI challenges not only our workflows, but our hierarchies.

Why we hide our process

Stealth AI-ing becomes a way to manage those dynamics, especially in fields like design and architecture, where critique culture still privileges the hand-drawn, the authored, and the original over the automated, the digital, and the fast. When we’re not sure how AI use will be received by students, colleagues, and reviewers, we quietly de-emphasise it. We “clean up” the output. This is no longer an AI ethics problem. It’s a work culture problem.

Students do it too, just differently

And students? They learn from us.

When educators treat AI as a private advantage, students internalise that it’s something to get right in solitude. They don’t bring questions about bad prompts into critique. They don’t show iterations. And they certainly don’t admit when the AI confused them.

But design education thrives on visible process, and that is where this becomes a problem. Students need to understand, and educators should reinforce, that prompting AI is not cheating but a new way for designers to train and to be explicit in their decision-making. Otherwise, we create “a culture of surface polish without process memory.” The work becomes cleaner, faster, but hollower.

This tension is especially sharp in AI-mediated studio environments. The most effective students are fluent in the tools, and they’re willing to show unfinished work, test ideas in public, and reflect on how the AI did or didn’t help. That kind of learner confidence is a cultural confidence.

From mastery to shared play

So what would it take to shift the tone?

Instead of presenting AI as a skill to be quietly mastered, what if we framed it as something playful, something teachers and students could explore together? This shift has already started. Educators are prompting live in class, showing where outputs fail, and inviting critique of them. The focus moves from polish to discernment, from outputs to how we make decisions with AI along the way. That, to me, is the heart of design learning.

The real opportunity here is making space for AI to rearrange what counts as creative labour.

Building credibility with AI, not in spite of it

If you’re quietly using AI in your academic work, you’re not alone. But ask yourself: are you being discreet to protect efficiency, or to protect how you’re seen? That’s a heavy load to carry. And over time, it shapes a culture where stealth AI-ing becomes the unspoken norm. Not because AI use is wrong, but because being seen using AI still risks being read as lazy, inauthentic, or unoriginal.

But what if we changed that? We could start by modelling a transparent process. By showing that prompting, editing, and curating are intellectual acts. By treating AI as a collaborator worth discussing. Because when we make that shift, AI stops being something to hide and becomes something to think with.

And reputation? It won’t be based on whether you used AI. It’ll be based on what you did with it.

Hello! I'm Linus, an academic researching cognition, behaviour and technologies in design. I am currently writing about AI in Design, academia, and life.