
Are We Thinking, or Just Well-Trained?

  • Ildiko Almasi Simsic
  • Jun 12

What myESRA taught me about human and machine intelligence


When I was building myESRA, one of the things that fascinated me most was how we built the knowledge base. I modeled the exercise on my own experience of acquiring industry knowledge as an early-career professional. The sources I turned to, or that colleagues pointed me to, became the 'training data' for myESRA. We didn't pull it from abstract rules or a clean dataset. We used reports. Project documents. Work done by colleagues. Some of it might not have been perfect - some even biased - but it still held patterns, informed best practice, and embodied the industry's knowledge. It had structure: choices, problems, and responses.


After launching myESRA, one of the main criticisms I heard was: What if those documents are low quality?

My answer: we still read them - without the technology. We still use them to identify patterns and best practice. We just apply our judgment when writing our own reports. Maybe highlighting our industry-wide biases and shortcomings will lead to higher-quality outputs in the future, too.


And that’s where the idea clicked for me: humans are trained too.

This is not new for a sociologist who's been enthusiastic about anthropology. It is well documented that we carry biases based on our upbringing and lived experiences. That's when it occurred to me that maybe those experiences are the 'training data' for our natural intelligence. We learn by observing what's around us, drawing conclusions from culture, language, and past examples - flawed as they may be. An honorable mention here goes to unconscious bias training, which is specifically designed to make us rethink our 'training data'.


So how different is that from training a machine?


Pattern Recognition Is Not Just for Coders


Reading Range by David Epstein opened this up even further. The idea that generalists thrive by connecting dots across disciplines helped me understand my own process. I didn’t come from a tech background. But I knew how people in my field think. How we absorb information. How we repeat patterns.


I just mirrored those patterns to build a tool.


And suddenly I was asking myself: Is intelligence just recognising patterns well enough to adapt them to something new?


If so, machines are doing something eerily similar. They’re trained. They correlate. They generate based on past patterns.


Just like us.


Knowledge vs. Intelligence


This is where I think we often confuse the two.


Knowledge is what you've absorbed. The facts, experience, data. It's what fills the training set: school for humans, model training for AI.


Intelligence, on the other hand, is what you do with it. The ability to see relationships, spot contradictions, form judgements, or pivot when the context changes.


A model can hold far more knowledge than a person ever could - but does that make it more intelligent?


In human life, intelligence often shows up in nuance.

Knowing when to act.

When to stay silent.

How to read the room.

How to challenge a bad idea even if it sounds convincing.

These aren't facts. They're patterns we feel. And that might be the real edge we have - for now!


Intuition, Originality, and the Remix Illusion


Of course, intuition feels like a different kind of intelligence. It’s fuzzy. Emotional. Sometimes it’s exactly what leads us to a breakthrough - scientists I’ve spoken to even admit they’ve followed gut feelings when writing algorithms, and those hunches worked.


But is that intuition truly original, or is it inferred knowledge (even if subconscious)?

I can’t count how many times I’ve had an idea I felt was totally new - until I Googled it and found someone else had said it 10 years ago in a different context. And what about the times I 'just knew' how something was supposed to be without being able to explain why - only to find out I had been part of a discussion about a similar issue years ago, and my brain subconsciously 'remembered' the right information at the right time?


So maybe we’re not inventing.

Maybe we’re just remixing in ways that feel new because the combinations are more niche, more personal, more “now.”


And if that’s the case, the question becomes:

Is creativity just recombination? And if so, can machines be creative too?


Are We Really That Different?


We worry about AI being biased. About AI art being cliché. But let’s be honest - we are the ones who created the dataset. The stereotypes. The limited views.


So if a machine reflects them back to us, is that its flaw - or ours?


And what about self-awareness?

Machines “know” they’re machines. Humans “know” they’re humans. But not all humans are self-aware in the way we like to believe. Some live in total denial of their limitations, patterns, and programming.


So again, I ask: Are we really that different?


If I Were to Rewrite the Turing Test…


The original Turing Test asked whether a machine could fool us into thinking it’s human.


I’d flip the question.


Can a machine surprise us in a way that feels meaningful?


Can it generate something that makes us reflect, question, feel curious - even if it’s not “thinking” the way we do?


If yes, maybe that’s not a failure of humanity.

Maybe that’s a new form of intelligence.


And maybe our own is not as unique - or as original - as we thought.
