Author says:
- “centaurs” are people who get to choose when to use AI, applying it to the tasks where it actually helps them
- “reverse centaurs” are workers forced by their employers to use AI under bad conditions, serving as the “accountability sink” for the machine’s mistakes
Sometimes this goes obviously awry, like when the AI tells you to put glue or gravel on your pizza. More often, though, AI’s errors are precisely, expensively calculated to blend in perfectly with the scenery.
AIs, after all, are statistical guessing programs that infer the most plausible next word based on the words that came before.
AIs are conservative: they can only output a version of the future that is predicted by the past.
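To make that concrete, here’s a toy next-word predictor (a deliberately crude stand-in of my own, not any real model’s code): the “model” is just counts of which word followed which in its training text, and generation greedily emits whatever was most statistically plausible in the past.

```python
# Toy next-word predictor: counts of observed word pairs stand in for
# learned weights, and decoding always picks the most plausible continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count word -> next-word frequencies from the "training data".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    # Greedy decoding: emit whichever continuation was most common in the past.
    return follows[word].most_common(1)[0][0]

seq = ["the"]
for _ in range(5):
    seq.append(predict(seq[-1]))
print(" ".join(seq))  # only ever recombines fragments it has already seen
```

Scale the counts up to billions of weighted parameters and you get something like an LLM, but the conservatism is the same: nothing comes out that the training data didn’t make plausible.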
Author references slopsquatting here: attackers publish malicious packages under the plausible-but-nonexistent dependency names that LLMs tend to hallucinate, so AI-generated code quietly pulls the malware in.
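A minimal defensive sketch (my own illustration, not from the article): before installing a dependency an assistant suggested, ask PyPI whether the name even resolves. Note the limit, though: a slopsquatted package does exist, precisely because the attacker registered the hallucinated name, so a real check would also vet maintainers, release age, and download counts.

```python
# Check whether an AI-suggested dependency name actually resolves on PyPI.
# (The second package name below is made up for illustration.)
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI's JSON API knows this package name."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: unregistered, i.e. likely hallucinated

for name in ["requests", "reqeusts-turbo-pro"]:
    print(name, "->", "exists" if exists_on_pypi(name) else "not on PyPI")
```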
Automation blindness is the name for what happens when you’re asked to vigilantly inspect the output of a machine that is almost always right, over and over, for a long time. Humans are just not good at this.
Author points to the inefficacy of the TSA as an example (no source cited). The gist: screeners are great at finding water bottles, because those turn up constantly, but they routinely miss the guns and bombs that test teams smuggle through, because those are so rare.
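The base-rate math behind this is brutal, and a toy simulation shows it (every number here is invented for illustration): model a reviewer whose vigilance decays as correct outputs pile up, then push a rare error rate through.

```python
# Toy simulation of automation blindness; all parameters are invented.
import random

random.seed(0)
ITEMS = 100_000
ERROR_RATE = 0.001  # only 1 in 1,000 machine outputs is actually wrong

def catch_prob(clean_streak: int) -> float:
    # Vigilance decays as correct outputs pile up: the longer everything
    # looks fine, the less carefully the next item gets inspected.
    return max(0.10, 0.95 - 0.005 * clean_streak)

errors = missed = streak = 0
for _ in range(ITEMS):
    if random.random() < ERROR_RATE:  # a rare genuine error arrives
        errors += 1
        if random.random() >= catch_prob(streak):
            missed += 1
        streak = 0
    else:
        streak += 1

print(f"{errors} rare errors; {missed} slipped past ({missed / errors:.0%})")
```

With errors this rare, the reviewer spends nearly all their time at the vigilance floor, so most of the errors sail through, and the reverse centaur is the one holding the blame.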
Aside, from this article: “Stack Overflow data reveals the hidden productivity tax of ‘almost right’ AI code”
As VentureBeat reports, while usage of AI coding assistants is up (from 76% last year to 84% this year), trust in these tools is plummeting (down to 33%, with no bottom in sight). 45% of coders say that debugging AI code takes longer than writing the code without AI would have, and only 29% believe that AI tools can handle complex coding problems.
VentureBeat concludes that some code shops “solve the ‘almost right’ problem” and see real dividends from AI tools. What it doesn’t say is that the coders for whom “almost right” isn’t a problem are centaurs, not reverse centaurs.
The AI bubble is driven by the promise of firing workers and replacing them with automation. Investors and AI companies are tacitly (and sometimes explicitly) betting that a boss who can replace a worker with a chatbot will pay the chatbot’s maker an appreciable slice of that former worker’s salary.
AI exists to replace workers, not empower them. Even if AI can make you more productive, there is no business model in increasing your pay and decreasing your hours.
AI companies are not pitching a future of AI-enabled centaurs. They’re colluding with bosses to build a world of AI-shackled reverse centaurs.
Some people are using AI tools (often standalone tools derived from open models, running on their own computers) to do some fun and exciting centaur stuff.
But for the AI companies, these centaurs are a bug, not a feature – and they’re the kind of bug that’s far easier to spot and crush than the bugs that AI code-bots churn out in volumes no human can catalog, let alone understand.