We're Cooked: A Personal Musing on LLM-bros

(February 2026)

Introduction

For brevity I will start with this and not repeat it as often as it deserves: all of this is my (John Mount’s) personal (un)informed opinion. And yes the TL;DR is, as always, “old man yells at cloud.”

It seems clear we are several years into the throes of a financially leveraged, large language model (LLM) facilitated crisis. A common feeling for techies in the Silicon Valley bubble is: “we’re cooked.” Sure, they themselves can use LLMs to appear to be doing their job for now. However, the feeling is that even that will not last.

Some of the reasoning is: if “tech-bro” plus an LLM is a “100x engineer,” then the “bro” isn’t needed for much longer, as the LLM alone must be a “99x engineer.” However, I don’t think “bro plus LLM” is often really a 100x engineer, and the LLM alone isn’t a 99x engineer. Still, “bro plus LLM” may outlast peers who make the mistake of trying to do the actual work instead of talking LLMs up.

The above may or may not be the case. But if it is, then it is the LLM-bros (a group that includes non-technologists, con artists, financiers, men and women) who are destroying everything, not the LLMs. Of course this has the limited comfort of “it’s not the fall that kills you; it’s the sudden stop at the end.”

Orientation

Some of the key features I see in the current LLM mania include:

  • The players are heavily financialized. We see Nvidia and their partners and contenders being highly leveraged with circular financing and making up much of the claimed value of the S&P 500. The financial play is so aggressive that not only are the primary players (Nvidia, OpenAI, Anthropic, Microsoft, Google, Oracle, Amazon, Meta) obviously determined to capture almost all the value of success, they are convincingly setting up that the rest of us will not survive a failure. This is a variation on the usual "privatizing profit and socializing risk" play.
  • LLM-based services artificially replace previously working services such as search, text editors, and spreadsheets. Take search as an example. In my opinion LLMs are superior to classic search, as they can match more broadly and specialize responses to different users. However, we can't test this comparison, as non-LLM search is no longer offered by the larger players such as Google.
  • The technology is non-consensual. Usage is up because usage is unavoidable. LLM technology is pre-packed into an obscene range of previously working products: Notepad, spreadsheets, search, IDEs, cell phones, web browsers, and more. It doesn't matter if you mark an email as private: the Microsoft Copilot opt-out was never real, and your recipient probably shared it with an LLM anyway.
  • The observed strategies are incredibly unreflective or anti-philosophic. That is: there are no concerns to ponder, just opportunities. A few examples:
    • Claiming that the inability to always, quickly, and superficially distinguish between two things implies they are in fact the same. This was a dumb idea when Turing wrote about it, and remains so. There are important differences between signifiers and realizations. For some interesting writing on the erasure of women, meaning, and status from the Turing test, please see Olivia Guest, "Turing Test".
    • Using Searle's room as an implementation blueprint instead of as a critique.
    • Using thinly veiled variations of Roko's basilisk and Pascal's wager as marketing strategies. It isn't going to be one tick better to be enslaved by Sam Altman than to be enslaved by whomever else he claims should not get to the throne he desires.
    A philosophical attitude at least knows to treat these as important and dangerous distinctions, even before worrying if there are solutions. For example: consider the current "it is best if X or Y seized everything and leased bits of it back to you" LLM arguments. You can give these friendly names such as utilitarianism or effective altruism. Or you can recognize them as variations of "the ends justify the means," and realize we already have tools for evaluating such arguments. One of my favorite tools is Kantian "universalizability", or "what if everybody did that?" The concept: an action isn't moral if everybody else also attempting it would generate a hell.
  • LLMs kill all neighboring structures by imitation and cheapening. For example: there is in fact little point in doing actual biological research, when an LLM can write a paper that is superficially indistinguishable from the description of actual research results. Anybody who is paid for publications is wasting their time doing actual research. The imitation paper may not have any correct or usable results, but that turns out to not matter in a market for lemons. Using Goodhart's law as a blueprint instead of a warning speeds us to our end.
  • Gouging out alternatives. It is getting more difficult to build a local non-LLM alternative to LLM-infested remote services. Hoarding by the big players has made first graphics cards and now RAM much more expensive and harder to obtain. Microsoft purchased more GPUs than they can power up (ref). That is hoarding to drive up prices and drive out alternatives. Hopefully they bought some of them from a company like MiniScribe and actually are only hoarding bricks.
  • LLM ethics washing. LLM interaction systems often seem to be used to supply deniability: "It can't be unethical if the intent is undocumented." This lets LLMs deny medical and insurance payments at a certain profitable rate.

The doom sale

Key points in promoting the LLM crisis include:

  • Convincing everybody this situation is about technology. The bros claim we are on the verge of an auto-catalytic, self-improving LLM factory, when in fact they are building an unsupported money feedback loop.

    We have in fact seen great technological innovation before, and survived as a society. However, those times either had fewer technological jumps or less financing. The problem with this iteration is the full-court press of finance and technology. The major players are using financing to dump results at a price way below production costs. This isn't charity; it is to demoralize and kill competition.

  • Convincing people they may survive the crisis, if they cooperate.

    For example: claiming "after we take over the world we will consider adding Universal Basic Income (UBI)." The LLM-bros already have a lot of the money, and they are not even rehearsing diverting it into basic income now. Why would one believe they would do that when they also have all of the power?

Have we been here before?

Have we been here before? In part yes:

  • Trick example: Wolfgang von Kempelen's Mechanical Turk.

    A Wizard of Oz "Pay no attention to the man behind the curtain!" puppet with a human chess player hidden inside. Not the last time we see falsified technology claims.

  • Technological example: the IBM 704 of 1954.

    One of the last vacuum tube computers (before transistors and large scale integrated circuits through lithography). This computer:

    • Introduced: FORTRAN (the basis of much scientific computing).
    • Introduced: LISP (the basis for much artificial intelligence work).
    • Was used by Edward O. Thorp to build blackjack strategies.
    • Allowed Alex Bernstein to develop a chess program that plays on lichess.org at a mid-amateur level to this day.

    This was a lot very quickly. I imagine people were rightly impressed. This opportunity is part of why the Dartmouth workshop of 1956 made such exciting claims. What was different: the Dartmouth workshop was funded at only about $7,500 in 1956 US dollars (about $90,000 in 2026 US dollars), not hundreds of billions, and it was not claiming values in the trillions. Also, only about 123 IBM 704s were produced, a far cry from the approximately 40 million graphics cards a year Nvidia produces.

    So here we had the technology, but not the money, scale, or leverage.

  • Financial example: 19th century railways.

    The railroads were a technological leap, but the scale and rapidity of deployment was due to financing.

    This is a much-cited example, but it is relevant. In the US, railway construction eventually captured 2.6 percent of the gross domestic product (GDP). Much of the investment control, and later wealth, was captured by Vanderbilt, Gould, Stanford, Huntington, Hopkins, Crocker, Flagler, and others (alternately called magnates or the "robber barons" who ruled the Gilded Age).

What about AI?

We have had AI for a very long time now, at least since the 1940s.

There are solid arguments that AI is improving quickly over time. For example Andy Jones shared a celebrated horsepower analogy with example data. He compares the replacement of horses by machine-horsepower to the increasing dominance of chess playing AIs. This gives an economics-style overview of what steady progress looks like.

The issue being: the progress shown is for classical AI (rule systems, knowledge representation, search) and deep neural nets, not for LLMs; Jones’s charts do not cover them. Let’s dig into the chess analogy.

Take Bernstein’s 1957 chess program. Let’s consider it as an artificial intelligence, just one for the narrow domain of playing chess. On a modern computer Bernstein’s program routinely plays chess at a mid-amateur level (around 1400 Elo as measured in the lichess.org population). This puts it at around the 35th percentile of active players on lichess.org in classical time controls, and closer to the 50th percentile in rapid time controls. The program moves instantly, is downright unnerving to play, and you lose if you attempt to move as fast as it does. From the amateur’s point of view: chess was solved in 1957; we just needed to wait for faster computers. And AI chess continued to improve for quite a while, resulting in champion-to-superhuman performance from Deep Blue, AlphaZero, Stockfish, and Leela Chess Zero. The world isn’t afraid of these artificial intelligences, as their domain of intelligence is so specific (just playing chess).
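For context on what a 1400 rating means in practice, Elo ratings map to win expectancy via the standard logistic formula. A minimal sketch (note that lichess actually rates players with Glicko-2, so these numbers are illustrative rather than exact):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# The 1400-rated program against opponents at its own level and above:
print(expected_score(1400, 1400))            # 0.5: even odds at its own level
print(round(expected_score(1400, 1600), 2))  # 0.24: clear underdog to a 1600
```

Mid-amateur, in other words: it holds its own against typical club players while losing badly to experts, which matches the 35th-to-50th percentile figures above.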

The above is a common trajectory for AI.

Now back to the magic LLMs that “can do anything.” LLMs can’t even replicate Bernstein’s result to this day. They can’t play amateur-level chess because they don’t seem good at maintaining state and performing multi-step reasoning. And they don’t play champion-level chess, and most certainly do not play super-human chess. GothamChess (Levy Rozman) has some hilarious roastings of LLMs failing to play chess.

LLMs are really good at claiming things and arguing, but they are less good at executing in the face of consequences.

LLMs vastly under-perform many already existing AIs. LLMs look great if they can claim other work as their own, and remain un-criticizable (“you wouldn’t expect them to do that”).

You wouldn't say that to the singularity

One of the claimed LLM threats is: they will very soon be able to usefully re-design themselves. This would then trigger a technological singularity, as improved LLMs could then design even more improved LLMs. The assumed rate of improvement would be proportional to the abilities of the AI, leading to an exponential explosion in AI ability. In fact, Anthropic shares an article that would imply this has already happened.
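The claimed dynamic is easy to state as a toy recurrence: if each generation's improvement is proportional to its current ability, ability grows geometrically. A sketch, where the improvement rate k is entirely made up for illustration, not a measured quantity:

```python
def run(generations: int, k: float, ability: float = 1.0) -> float:
    """Toy model of the claimed self-improvement feedback loop."""
    for _ in range(generations):
        ability += k * ability  # improvement proportional to current ability
    return ability

print(run(10, 0.5))  # prints 57.6650390625 (i.e. 1.5**10): geometric explosion
print(run(10, 0.0))  # prints 1.0: no self-improvement, no explosion
```

The whole argument hinges on k being reliably positive generation after generation; the rest of this section argues LLMs show no such reliable self-improvement.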

However, I don’t think this is the case. This is, as above, claiming success from other fields that LLMs are not able to adopt. And it again taps the insidious Roko’s basilisk “you had better be nice to me before the tipping point” argument.

LLMs don’t appear to work like previous deep learning, search, and rule-based AIs. What LLMs do is fill in missing bits: next word, desired conclusion, summary. They also take in your input and reflect it back. They have aspects of being a “text mirror.” LLMs are stochastic parrots, just very good ones. We have gotten to the point where they can start from a blank document and convincingly fill in or fabricate the whole thing.

Older AIs did compose reliably and even self-improved through self-training and reinforcement learning.

LLMs do not currently reliably compose or self-improve. Sure, it is amusing to watch the absolute clown-show composition attempts such as “agentic AI” and “claws.” But these systems are dominant more due to patronage than to ability. Once it was clear a single LLM could not reliably execute tasks, it became time to claim a federation of them could.

But they are popular

A lot of the LLM proponents are popular technologists talking about how they can solve some of the current engineering problems by appealing to:

  • Orbital data centers.

    Probably not a good idea at all, and probably not in our near-term technological reach.

    Orbital deployment makes all of radiation tolerance, connectivity, power, maintenance, and heat dissipation much harder and much more expensive. We are still at a time where putting an oven or air-fryer in space is considered noteworthy (China 2025, NASA 2019 ref).

  • Dyson Spheres.

    Probably impossible without magic.

    Explained by Angela Collier in "Dyson spheres are a joke" as possibly a deliberately impossible structure proposed by Freeman Dyson. Collier's theory is that Dyson felt SETI embarrassed itself by using anthropomorphic reasoning to say we should look for extra-terrestrial intelligences by observing specific radio wavelengths. His Swiftian parody was to assume one overly specific, impossible sort of alien: the sphere builders. For these aliens the only observable signal would be the small amount of infra-red they could not harvest for thermodynamic reasons.

It is no coincidence that these are both taken from popular science fiction, and trivially impractical. Consider:

  • The story of Achilles on Skyros.

    Described in "Astounding: John W. Campbell, Isaac Asimov, Robert A. Heinlein, L. Ron Hubbard, and the Golden Age of Science Fiction" by Alec Nevala-Lee. At Skyros Achilles, disguised as a woman, was found out when he picked up a sword (betraying his true nature). Asimov, Campbell and Hubbard described Astounding magazine as being such a test, where the Nietzschean supermen hidden in the pulp reading population would involuntarily reveal themselves by picking smart science fiction. However the science in "Astounding" is routinely wrong, and the stories all share very similar male oriented power fantasies. So it is not clear what was being filtered for, other than good prospects for Dianetics/Scientology and adoring fans to grope.

  • Why Do Nigerian Scammers Say They are From Nigeria?

    This paper describes a sweet spot for scams. They need to be plausible enough to attract potential victims, but flawed enough that those who would fight back self-reject. You want to attract marks, not debate partners.

The “big thinkers” refer to far-out science fiction ideas as if they were near-term achievable exactly to attract the credulous. That isn’t to say science fiction doesn’t translate to reality; communications satellites did. But there is a great difference between communication satellites and near-term Dyson spheres. Those that can’t tell the difference are good marks.

Conclusion

I am more worried about the LLM-bros and their auto-catalytic money doomsday machine than about the LLMs themselves.