
Smart Tools, Stupid Results
My jaw dropped.
“Can you please say that again?”
“I checked your idea with ChatGPT — and you’re wrong.”
“So, let me get this straight. You’re telling me that weeks of meticulous work — calculated, refined, structured in matrices and tables, and thoroughly double-checked — work grounded in my education and 30 years of experience as an economist with a strong penchant for mathematics, logic, and statistics… you’re saying it’s worthless? Just because you asked ChatGPT? And tell me — how exactly did you formulate your question? Was it hypothetical, open, closed, or leading? Did you funnel and probe it?”
She hesitated.
“No, that’s not what I meant — I just asked ChatGPT, and it says…”
I listened. Incredulous.
Well, here we are.
Frankly, I don’t care — because I have no idea how you framed your question to the tool.
And for the sake of peace and clarity, I will check it again. I will even prepare a “dummy” version for everyone to understand — just as I did twenty years ago at the bank, where I developed a financial mathematics tool that was later implemented as an official tool on our banking website.
The results were correct. Of course.
Out of curiosity — and perhaps a touch of mischief — I presented my results to ChatGPT. Ironically, it validated every single step of my reasoning.
Irony on: What a relief! Irony off.
Let me say this clearly: I use AI. Almost every day. I use it to help me solve complex questions, to expand my perspective, to test my ideas. To search for the right tools for my ideas.
And yes — I deeply regret that I can’t (yet) afford a full IT team to build the digital tools that fuel my ideas. They too have to fill their fridges, and for now, I simply can’t contribute to that segment of the GDP. So, I work alone — supported by what I have. And that includes AI tools, especially in a field where I lack expertise: IT.
But: AI should never replace your own thinking.
You must check every single step.
You must consider every misstep, every nuance, every exception — because human behavior is flexible, emotional, sometimes flawed, and that’s exactly what makes it human.
AI draws from the collective knowledge of the world — a mass of data, weighted by algorithms designed to statistically determine what is “most likely” correct or helpful.
But as Carlo M. Cipolla reminds us in The Basic Laws of Human Stupidity:
- There will always be more stupid people than you think.
- The proportion of stupid people is constant — regardless of education, social class, or geography. Even Nobel Prize winners don’t escape the ratio.
And the same goes for the group of people who feed tools like ChatGPT.
So please — I beg you — never treat ChatGPT or any other AI tool as a substitute for your own reasoning.
Use it, yes.
Question it, always.
And then — most importantly — think.
And only feed it with what you know to be right.
Because you checked it, questioned it, and did not rely on assumptions.
Otherwise?
It will be fed with a lot of BS.
And that BS will be repeated — again and again — amplified like a snowball. (*)
Until in the end… BS will have taken over.
And an entire society will be left believing in it.
The loss for humanity?
I do not even dare to think about it.
(*) PS: The same applies to investment ideas: as more people rely on AI-generated advice, many end up following identical tips. Inevitably, this drives stock prices beyond the companies’ true value, which is ultimately tied to their capacity to generate the cash flow that justifies the price you pay.
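For readers who want to see what “value tied to cash flow” means in practice, here is a minimal sketch of discounting future cash flows to a present value. All figures and the function name are my own illustrative assumptions, not investment advice:

```python
# Minimal sketch: the present value of hypothetical future cash flows.
# Every number here is illustrative only.

def present_value(cash_flows, discount_rate):
    """Sum each year's cash flow, discounted back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Five years of a flat 100 cash flow, discounted at 8% per year:
pv = present_value([100] * 5, 0.08)
print(round(pv, 2))  # ~399.27: the most those cash flows justify paying today
```

If the market price climbs well above that discounted sum, the gap is sentiment, not value.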
