Posted on 15 Apr 2025 12:16 in Artificial Intelligence (AI)
by Siddharth Deshmukh

AI Can Be Misled — That’s Why Critical Thinking Still Belongs to Us

As language models like ChatGPT, DeepSeek, and others find their way into daily workflows—from writing emails to analyzing research—many professionals are asking a valid question:

Am I about to be replaced?

It’s a fair worry. But here’s a timely and critical reminder:

Even AI can be misled. And critical thinking still belongs to humans.

AI Mirrors Us—It Doesn’t Understand Us

Generative Pre-trained Transformers (GPTs) and other large language models (LLMs) are now widely used in content creation, translation, customer support, education, research, coding, data analysis, and even image processing.

But these models are only as good as the data they’ve been trained on. That data comes from human-created content: books, articles, interviews, news, opinions, speeches, and history.

So, if the source material is biased, incomplete, or one-sided—AI will reflect that.

What AI Lacks—and Why It Matters

Here’s the difference: Humans think. AI mimics.

AI can reflect, summarise, and even spark ideas. But truth-seeking? That’s still our job.

Recognising the limits of so-called “unbiased machines” is essential. It’s why we still need human professionals asking tough questions, especially in a world where data isn’t always clean or fair.

3 Truths to Remember When Working with AI

1. AI Relies on Data—Not Intent

AI generates responses by identifying statistical patterns in its training data: books, articles, debates, and more. That means:

No agenda: It doesn’t choose sides or have beliefs.

No intent: It doesn’t promote a narrative. It reflects what it’s seen.

2. Why It Feels Like “Controlled Narratives”

Ask AI about a public figure, and it might sound overly cautious or diplomatic.

That’s not a hidden agenda; it’s a reflection of what dominates the training data. If certain views flood the internet, those views will surface more often. And when AI tries to present both sides, it may sound like it’s hedging, even when it’s just trying not to overcommit.

3. Can AI Be Misled? Yes—But Differently

AI lacks independent judgment. It doesn’t verify facts or understand nuance the way humans do. It generates language that looks informed, but it can’t recognise what’s missing, false, or oversimplified.

That’s why your judgment matters.

You have to cross-check, dig deeper, and think critically. If AI sometimes sounds like it’s pushing a narrative, it’s a sign of how crucial human vigilance is—especially in a world flooded with misinformation.

The Human Role Isn’t Going Anywhere

AI is a tool, not an authority.

Your ability to think, pause, and ask, “Is this true?” is irreplaceable, especially when the topics get complex.

So, no, AI isn’t replacing humans. Not the ones who still ask better questions.

Stay curious. Stay thoughtful. Stay human.


