
Research in the Age of Algorithms

Is using AI a smart move or a shortcut?


A long time ago, at least according to Plato, a god and a king had a conversation that still echoes today. In his Phaedrus, Plato tells the story of Theuth, the Egyptian god of invention, who presented King Thamus with a gift: the written word. Theuth believed it would make people wiser and improve their memory. But Thamus wasn't impressed. No, he said, this new skill would produce forgetfulness, not memory.

People would look bright, sure—but only because they’d stop remembering and start depending on what was written. It would give them the illusion of knowledge, not the real thing.

That warning sounds oddly familiar, especially now that artificial intelligence is being folded into research everywhere, from literature reviews to lab analysis.

Like writing, AI promises speed, scale, and easy access to knowledge. But what if the machine starts thinking for you? What happens to the thinking you were supposed to do yourself?

AI should be used like pen and paper, not like a brain. It should help capture and organise thoughts—not replace them. That distinction, though, is getting increasingly blurry.

There’s no doubt AI can lighten the load. Tools like GPT-4, citation managers, and machine learning models can process data far faster than a human can.

As Luciano Floridi points out in The Fourth Revolution: How the Infosphere is Reshaping Human Reality (2014), the information environment has shifted so dramatically that old methods can barely keep up.

In many cases, AI fills a practical gap. In big-data disciplines—such as climate science, linguistics, and even sociology—it can help researchers spot patterns that would be impossible to detect manually.

But that advantage comes with weight. As Kate Crawford outlines in Atlas of AI (2021), AI systems don’t exist in a vacuum. They’re built on data—data that reflects human assumptions, prejudices, and power structures. If researchers unquestioningly accept what the algorithm offers, they risk reinforcing bias instead of questioning it. And the worst part? That bias doesn’t always reveal itself.

There’s another layer, too—one that’s less technical and more philosophical. In The Shallows (2010), Nicholas Carr warns about the slow erosion of deep thinking when people rely too heavily on digital shortcuts.

His concern wasn’t directly with AI, but the logic still applies. Research isn’t just gathering information; it’s wrestling with it, shaping it, and sometimes getting stuck in it. That cognitive struggle is part of what makes original thinking possible. When AI removes the battle, it can also flatten the result.

Still, it’s not all warning signs. AI, if used correctly, could also make research more inclusive. In New Laws of Robotics (2020), Frank Pasquale argues for AI that serves the public interest.

For researchers in underfunded universities or remote locations, AI tools may provide access to resources and computing power that they’d otherwise lack.

Used responsibly, AI has the potential to close the academic gap between the Global North and South. It could offer researchers the equivalent of a digital research assistant—fast, responsive, and relatively cheap.

But there's a fine line between help and dependency. As Debora Weber-Wulff reminds us in False Feathers: A Perspective on Academic Plagiarism (2014), when the source of knowledge becomes unclear, when it's a blur of machine output and human input, academic integrity starts to wobble. If AI drafts a chunk of a paper or proposes citations, who owns that thinking? And, more importantly, who is held accountable if something goes wrong?

There’s also the question of what doesn’t get seen. As Cathy O’Neil notes in Weapons of Math Destruction (2016), algorithmic systems often loop back on themselves.

If AI tools are trained on the most cited papers and then recommend those same papers to new researchers, it creates a feedback loop. The same ideas get repeated while less popular (but potentially groundbreaking) voices are silenced. That’s not just bad for research—it’s bad for knowledge itself.
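To see how quickly such a loop closes, consider a minimal toy sketch in Python. Everything in it is invented for illustration (the paper names, the citation counts, the hypothetical recommend_top_cited function); it is not how any real recommender works, only a crude picture of the dynamic described above.

```python
import random

# Toy model: a recommender that always surfaces the most-cited papers.
# Paper names and numbers are invented purely for illustration.
papers = {f"paper_{i}": 0 for i in range(20)}

# Give a handful of "classic" papers a small head start.
for name in list(papers)[:3]:
    papers[name] = 5

def recommend_top_cited(citations, k=3):
    """Return the k most-cited papers -- the engine of the feedback loop."""
    return sorted(citations, key=citations.get, reverse=True)[:k]

# Each new researcher cites one of the papers the system recommends.
for _ in range(200):
    shortlist = recommend_top_cited(papers)
    papers[random.choice(shortlist)] += 1

top = recommend_top_cited(papers, k=5)
print({name: papers[name] for name in top})
# The three papers that began with a head start collect all 200 new
# citations; the other seventeen are never recommended at all.
```

Even in this crude sketch, the loop is self-sealing: being recommended is what earns citations, and citations are what earn recommendation.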

Even so, turning away from AI entirely isn't a real option. Stuart Russell and Peter Norvig, in Artificial Intelligence: A Modern Approach (2021), emphasise that AI, like any tool, reflects how it's used. It's not inherently good or bad. The danger lies in confusing the tool with the thinker.

The pen doesn’t write the story. The paper doesn’t argue the theory. And AI doesn’t generate insight—it helps manage the process, nothing more.

So, is using AI in research good or bad? Maybe that’s the wrong question. It’s not a yes-or-no situation. It’s a matter of how it’s used, who’s guiding the process, and whether the person using it is still thinking critically—or just clicking “generate.”

Plato’s warning still stands. Tools can help, but only if the mind stays in charge. AI belongs on the researcher’s desk—not in their head.

raiyanjuir@gmail.com
