The Threat of AI?
- Igor Ageyev


A response to "Something Big Is Happening" by Matt Shumer.
Matt Shumer's post sparked a heated online debate. Matt, an expert in building AI agents, shared his fears about threats to humanity, drawing on his experience and recent progress in AI.
In my opinion, that vision extrapolates without justification from particular cases to the whole, drawing far-reaching conclusions. History offers many such examples: the prediction that London would be buried in horse manure, that the invention of the machine gun would end wars, that nuclear weapons would end the world, and so on.
My experience with AI, including the paid versions of Gemini 3 Pro with Thinking, ChatGPT 5.2, Grok 4.1, and Claude Sonnet 4.5, leads me to conclude that AI remains, for now, a magnificent mediocrity. I don't use it for programming; my area is less formalised. Even in my tasks, if I configure it, give it context, and teach it how to approach problems, the AI can solve standard problems faster and with fewer errors. It can even surface something that wasn't in the original material, something I didn't know or had forgotten, by compiling it from the internet and other sources. I've experienced this only twice: once the result was completely new to me, and the second time it significantly expanded the scope of my work. Getting the AI to depart from the commonly used approach and produce a genuinely non-standard solution, however, is far from a trivial task.
However, there is a price to pay for this increased independence: hallucinations have become much more realistic and harder to detect. My tasks have no simple, quick, and clear "switch it on and it works" criterion of correctness. Therefore, my AI configuration includes a substantial section of additional checks, covering everything from internet links to citations, figures, and the AI's own conclusions. The AI's conclusions are always marked as such, and I check them to minimise the risk of serious disputes later.
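The kind of post-check described above can be sketched roughly as follows. This is a minimal, hypothetical illustration (the function name and regular expressions are my own, not part of any actual configuration): it merely flags the items a human reviewer should verify rather than verifying them itself.

```python
import re

# Hypothetical sketch: flag parts of an AI answer that need human verification.
URL_RE = re.compile(r"https?://[^\s,)\]]+")          # internet links
NUMBER_RE = re.compile(r"\d[\d,.]*%?")               # figures and percentages
CITATION_RE = re.compile(r"\[\d+\]|\([A-Z][A-Za-z]+,? \d{4}\)")  # [3] or (Smith 2020)

def flag_for_review(text: str) -> dict:
    """Return the spans a reviewer should check before trusting the output."""
    return {
        "links": URL_RE.findall(text),
        "figures": NUMBER_RE.findall(text),
        "citations": CITATION_RE.findall(text),
    }

flags = flag_for_review("See https://example.com and 42% growth [3].")
# flags["links"] -> ["https://example.com"], flags["citations"] -> ["[3]"]
```

A real setup would go further, for instance actually resolving each link and comparing cited figures against the source, but the point stands: the checks live outside the model, because the model cannot be trusted to audit itself.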
All that AI can currently offer is some cost reduction under certain conditions. For example, to integrate AI into an operating model and achieve tangible benefits, that model and, most likely, the business processes behind it must be restructured, and this is not just a technological challenge. Even then, the goal is achieved only if competitors don't do the same, if the transformation succeeds, if the costs and time are acceptable, if the resulting advantage can actually be exploited, and so on through many other "ifs".
AI has no initiative or desires of its own. It wants nothing and needs nothing. All it can do is fulfil human requests, and replacing all humans with AI is not among human goals.
Can that be considered a breakthrough?
But that's not the most important thing. The most important thing is that the problems of the vast majority of companies, organisations, and countries are caused neither by bad code nor by poor or incompetent lower-level workers.
There's another important factor: feedback loops. Even if everything unfolds as the author assumes, the market and the economy will decline, and society will either revert to a stable state without AI or ban it outright. Something like Dune, but without the spice.
If you add political risks and the prospect of secular stagnation, which politicians are actively provoking and which could well lead to a depression and a transformation into a new social system, then the threat of AI becomes almost illusory.
That is not to say I don't see the potential and the risks associated with AI. There are many opportunities, including in coding. Those who can exploit opportunities promptly and manage the risks will benefit from short-term waves of demand.
