Why this analyst says the AI bubble is 17 times bigger than the dot-com bust

A version of this story appeared in CNN Business’ Nightcap newsletter.


New York —

At this point, even the concept of an “AI bubble” seems to be a bubble. (In fact, Deutsche Bank analysts said last month that the “AI bubble” bubble has already burst.)

Perhaps some corners of the internet are bored of the bubble talk. That’s not making the market any less, um, bubbly.

Just this week, the Financial Times wrote that 10 AI startups — not a dollar in profit among them — have gained nearly $1 trillion in market value over the past 12 months. (That is, to use a technical term, bananas.)

Even as Wall Street analysts and tech media increasingly question the hype, drawing uneasy comparisons to the late 1990s, the AI industry’s response has been to shrug and watch its valuations tick higher and higher. The AI faithful believe the technology will disrupt (in a good way, hopefully!) virtually every aspect of modern life, from phone operating systems to pharmaceuticals to finance. And even if there is a bubble, proponents say, the dot-com bubble gave us companies like Amazon, and the internet became, well, the internet.

There are plenty of skeptics countering the AI hype machine, though few professional market analysts have done so as stridently as Julien Garran, a researcher and partner at the UK firm MacroStrategy Partnership.

Earlier this month, Garran published a report claiming that we are in “the biggest and most dangerous bubble the world has ever seen.” Garran concludes that there is a “misallocation of capital in the US” that makes the current frenzy 17 times bigger than the dot-com bubble and four times bigger than the 2008 real-estate bubble.

That is, needless to say, a bold claim about a phenomenon that is famously hard to predict.

I sat down (virtually) with Garran earlier this week to talk bubbles and why he thinks the AI fervor is, to quote his report, not just “a bit bad” but rather “the antithesis of socio-economic progress.”

The following interview has been edited for length and clarity.

Nightcap: Your latest AI report made a lot of waves among finance and tech media junkies like myself. Can you walk me through the top lines?

Garran: At the heart of the note is a golden rule I’ve developed, which is that if you use large language model AI to create an application or a service, it can never be commercial.

One of the reasons is the way they were built. The original large language model AIs were built using vectors to estimate the statistical likelihood that words follow each other in a sentence. And while they’re very clever, and the engineering required to do it is very good, they’re also very limited.
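For a concrete picture of what “the statistical likelihood that words follow each other” means, here is a minimal, purely illustrative sketch in Python. It is not from Garran’s report or from any production system; it uses a toy bigram counter, whereas real LLMs learn dense vector representations rather than raw counts, but the underlying goal is the same: estimating the probability of the next word given the words before it.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which words follow which in a
# tiny corpus, then rank candidates by relative frequency.
# Real LLMs replace these raw counts with learned vector
# representations, but both estimate P(next word | context).
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word, k=2):
    """Return the k most likely next words with their probabilities."""
    counts = follows[word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common(k)]

print(predict("the"))  # [('cat', 0.5), ('mat', 0.25)]
```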

The second thing is the way LLMs were applied to coding. What they’ve learned from — the code that’s out there, both in and outside the public domain — means they’re effectively showing you rote-learned pieces of code. That’s, again, going to be limiting if you want to start developing new applications.

And the third set of problems, in terms of how it’s built, is around the idea of scaling. There’s a real problem at a certain point in terms of how much you have to spend to improve them. I’d say it’s definite that (developers) have hit a scaling wall. Otherwise they’d be releasing demonstrably better and better models each time they came to market with a new product. And since GPT-4 came out in March of 2023, they haven’t actually raised the bar significantly.

Nightcap: What about the argument that ChatGPT, while not perfect, is capable of doing some low-level grunt work that could increase productivity?

Garran: There are certain bullsh*t jobs out there — some parts of management, consultancy, jobs where people don’t check if you’re getting it right or don’t know if you’ve got it right. So you can argue that you can replace bullsh*t with bullsh*t, and, yes, OK, I’m prepared to accept that you probably can, but that doesn’t really make it more broadly useful.

Nightcap: So how should regular people think about all the huge sums of money churning through the industry?

Garran: The AI ecosystem can’t really sustain itself. You have Nvidia making a ton of money … Anybody else — the data centers, the LLM developers, the software developers that use LLMs — they’re all heavily loss-making.

Consequently, to maintain the process, you need continued funding, which is why it looks like a permanent funding tour. But despite all of this, there’s no obvious way that they actually turn this around to a profit. It’s hope over realistic expectation … When you run out of investors, the whole thing is going to roll over.

Nightcap: Are investors actually pulling back?

Garran: The amount of (venture capital) willingness to fund some of these startups, especially the software developers, is beginning to diminish because they’re valued so highly. That leaves you with basically SoftBank, which has had to raise a lot of debt against its shares to fund the first tranche of the OpenAI commitment that it made, and it’s still got a bigger second tranche to fund.

You’ve got foreign states like, say, Saudi Arabia. But there aren’t many countries that have unlimited spending power. And that leaves Nvidia as sort of the last man standing.

Large language models, Garran argues, can be impressive text predictors but are not the economy disruptors AI evangelists claim them to be.

Nightcap: Are we in the process of deflating, or is the bubble still growing?

Garran: With AI, I can’t say it’s starting to deflate. We’re just a week past all-time highs, so it’d be kind of arrogant to say that was definitely the top. But it’s certainly drawing closer.

Nightcap: I have to ask, because I ask myself this all the time: What if you’re wrong? What if the hype is real?

Garran: Well, there are two ways I’d be wrong.

One is that it just takes longer to break than I thought. Which, to be honest, it already has. And what happens if I’m wrong in that way is simply that they continue building things that aren’t fundamentally useful to the economy or to society.

If it carries on for another year or two because they managed to persuade someone to provide funding, then more people are doing stuff that’s not going to make a return. The future is not going to be as bright. Future (gross domestic product) will be lower than it would be if they didn’t do this stuff and just went about doing some mundane things that people actually valued.

And if I’m completely wrong … (if) someone comes up with “superintelligence,” well, that completely changes the world, and we’d be depending on whoever controlled the systems. We could be in a utopia of sorts, or we could be in “Brave New World.” Or we could be in a utopia like “Player Piano,” the Kurt Vonnegut book where everyone’s employed, apart from some people living in their ivory towers.

To be honest, I think that is beyond our current capability as an industrial society to pull off. If that starts changing, then I’ll change my mind very quickly. I just haven’t seen it.
