The Hidden Anomaly in Today’s Debate About AI and Human Development
By Hiran de Silva
A curious contradiction is unfolding in the global conversation about AI, creativity, and learning. It erupted again today on LinkedIn, in a thread beneath a thoughtful post by MB Vishwanath and a comment by Carl Seidman.
Here’s the contradiction in its simplest form:
- When person A learns from person B, then applies or monetises what they learned, it is universally accepted.
- But when a machine learns from A, it suddenly becomes controversial.
- And when a person learns with the assistance of a machine, that too becomes questionable.
This is the anomaly we need to expose—because until we resolve this contradiction, our debates about AI will remain emotional, inconsistent, and intellectually incoherent.
1. The Human Learning Economy Has Always Been Based on Borrowing, Adapting, and Monetising
Every profession, every craft, every field is built on exactly one principle:
We learn from others, we build on it, and we use it to generate value.
Every author has read other authors.
Every consultant has learned frameworks, examples, and case studies from someone else.
Every accountant, designer, engineer, and strategist has absorbed, adapted, and monetised the accumulated knowledge of those who came before.
We call this “education,” “training,” “apprenticeship,” “professional development,” “career progression,” “innovation,” “best practice,” or simply “experience.”
It is the backbone of human civilisation.
No one objects.
In fact, whole industries exist to encourage it.
If you don’t learn from others, you’re considered untrained, unskilled, or obsolete.
2. But When Machines Learn… We Suddenly Object
Yet the moment a machine does what humans do naturally—learn from the collective output of others—we react as though something immoral has happened.
Suddenly:
- It’s “unfair.”
- It’s “stealing.”
- It’s “plagiarism.”
- It’s “exploitation.”
- It’s a “threat.”
But why is it acceptable for a human being to learn from thousands of books, articles, and videos… yet unacceptable for a machine?
If learning from others is noble, productive, and essential when humans do it, why is it problematic only when machines do it?
The inconsistency reveals the truth:
The ethical discomfort is not about the learning.
It’s about the implications of its efficiency.
Machines can learn faster.
Machines can summarise patterns instantly.
Machines can accelerate the learning of millions of people at a scale unmatched in human history.
This challenges existing economic structures, power dynamics, and identity markers.
So the objection is not that machines learn.
It’s that machines learn too well.
3. And Then We Go Further: We Don’t Want Humans Learning Through Machines Either
Here’s where the contradiction deepens.
There is now a growing sentiment that:
If a human uses an AI assistant to help them learn, produce, create, or perform—then the result is somehow invalid.
Why?
Because the human was “assisted.”
But we are always assisted.
We learn with:
- books
- teachers
- mentors
- calculators
- search engines
- IDEs
- libraries
- design tools
- frameworks
- templates
- spreadsheets
- methodologies
- entire knowledge industries
The idea of “pure, unassisted creativity or learning” is a myth.
It has never existed.
Yet suddenly, when the assistant is a machine capable of pattern recognition, people react as though the learner is “cheating.”
But using advanced tools to amplify ability is the story of human progress:
- The calculator didn’t make mathematics invalid.
- The typewriter didn’t make writing illegitimate.
- CAD didn’t make architecture fake.
- Google didn’t make research fraudulent.
- Excel didn’t make finance dishonest.
- Spellcheck didn’t make authors lazy.
- Aviation autopilot didn’t make pilots unskilled.
But somehow, AI helping people learn and create is unacceptable?
This is an emotional reaction, not a logical one.
4. What We Are Really Seeing: Fear of Displacement, Loss, and Identity
People aren’t afraid that machines will learn.
People are afraid that machines will learn faster than humans, and therefore upset:
- status
- hierarchy
- job security
- intellectual ownership
- expertise signalling
- the value of accumulated experience
In other words:
The resistance is not philosophical—it’s economic and psychological.
But to disguise that discomfort, we invent moral arguments about “fairness,” “integrity,” or “authenticity.”
The problem is that these arguments do not stand up to scrutiny.
They apply only when machines are involved.
They never apply to humans learning from humans.
5. The Inevitable Shift: Humans + Machines Will Become the New Centre of Expertise
Just as calculators redefined mathematical competence, AI will redefine intellectual competence.
The future professional will be someone who:
- understands the questions to ask
- knows how to frame a problem
- uses AI to explore possibilities
- synthesises judgment, experience, ethics, and context
- delivers value enhanced by machine-supported acceleration
This does not diminish human capability.
It elevates it.
In fact, AI does not remove human expertise.
It exposes a deeper truth:
The real value has never been in the information—
It has always been in the interpretation, the framing, the judgment, and the execution.
AI handles the pattern extraction.
Humans handle the meaning.
6. Conclusion: We Need Consistency in Our Ethics of Learning
If it is morally acceptable for everyone to learn from everyone…
If it is morally acceptable for people to learn, commercialise, and monetise what they’ve been taught…
If it is morally acceptable for human knowledge to be shared, copied, remixed, improved, and passed on…
…then it cannot suddenly be immoral when machines do it,
and it cannot suddenly be disallowed when humans do it with machines.
Because that is not an ethical argument.
That is a fear.
And the role of thought leaders—as Carl Seidman wisely pointed out—is to name the phenomenon clearly.
The uncomfortable truth:
We accept learning when it preserves existing hierarchies.
We resist learning when it threatens them.
But progress has never waited for emotional comfort.
It always moves where learning is allowed.
And the next wave belongs to those who embrace the tools that amplify human capability—not those who fear them.
Hiran de Silva’s LinkedIn post
Why Is Learning Allowed… Except When It Isn’t?
Inspired by today’s excellent post from @MB Vishwanath and a sharp observation from @Carl Seidman, CSP, CPA.
Something strange is happening in today’s AI debate.
We all accept that humans learn from each other.
A learns from B. A monetises what they learned from B.
That’s called education, training, career development, expertise.
It’s the foundation of every profession.
No controversy.
No objections.
But when a machine learns from human knowledge… suddenly it’s unacceptable.
And when a human learns with the assistance of a machine, that too becomes questionable.
Why?
We’re essentially saying:
- Humans may learn from humans.
- Machines must not learn from humans.
- Humans must not learn with machines.
You see the contradiction?
As Carl Seidman pointed out in the thread:
We’re perfectly comfortable with people learning from each other and monetising what they’ve learned.
We just don’t want machines to do the same.
And increasingly, we don’t want people to learn with machine assistance either.
The resistance isn’t about “ethics.”
It’s not about “fairness.”
It’s not about “authenticity.”
It’s about fear:
- Fear that machines learn faster.
- Fear that experience gaps shrink.
- Fear that hierarchies get disrupted.
- Fear that capability becomes widely accessible.
But here’s the truth:
We’ve always used tools to accelerate learning—books, calculators, Google, templates, IDEs, frameworks, mentors, and whole industries built on knowledge transfer.
AI is simply the next—and most powerful—tool.
If it’s acceptable for humans to learn, adapt, and monetise what they’ve learned…
it cannot suddenly become immoral when machines do the same,
or when humans use machines to do it better.
This isn’t an ethical issue.
It’s an emotional one.
And the future will belong to those who embrace tools that amplify human capability, not those who fear them.