We look to past models, like the early growth of the internet, to explain AI adoption.
What the models don't take into account is that LLMs are the first technology I've seen that actively makes things worse.
Used for UX research, it will invent user needs that weren't expressed in the interview but look plausible, which could result in the team building the wrong thing.
Used for generating code, it hallucinates entire functions, but the functions look plausible, so you're only likely to spot them if you already know how to code well enough not to need AI.
Used for creative writing - or creative anything - it's hot garbage. It's like that kid on the X-Factor who proudly takes the stage while the rest of us wince: you might think it looks good, but honey, it doesn't.
That's before we get to the mass exploitation of underpaid labour that goes into training these things, the intrinsic bias, the blatant copyright theft (yes, there are lawsuits), and the fact that, even if the thing worked at all (which it often doesn't), it would lead to mass layoffs and even more inequality.
Oh, and the environmental damage, but we don't care about that.
We have been falling over ourselves to attach ourselves to a technology that makes life worse on almost every conceivable dimension, other than the use cases at which it was already pretty good before the current gold rush (cleaning up photos and retrieving search results).
Of all the insane hype cycles I have ever seen, this is unquestionably the most depressing.