It's really worrying to me, even as a self-proclaimed "LLM <-> AI" skeptic, to see what kind of stuff people claim to get out of an LLM. Typewriter monkeys as a service, almost.
Still useful for the odd task here and there, but not useful enough to justify all the money being invested in it (except for the companies receiving that money, that is).
> Typewriter monkeys as a service almost.
> Still useful for the odd task here and there
1) The paramount task: searching in natural language, as opposed to keywords (see the sketch after this list). The odd tasks: oh yes, several fuzzy, sophisticated text-processing tasks previously unsolved.
2) They translate NN encodings into natural language! The issue of the quality of /what/ they translate into natural language remains, but one important chunk of the problem* is, in a way, solved...
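To make point 1 concrete, here is a minimal sketch of natural-language search via embeddings, as opposed to keyword matching. The library (sentence-transformers), the model name, and the toy corpus are all my illustrative choices, not anything from this thread:

    # Semantic search: rank documents by embedding similarity to a
    # natural-language query, instead of by keyword overlap.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

    docs = [
        "How to reset a forgotten account password",
        "Steps for recovering access when you cannot log in",
        "Quarterly revenue report, fiscal year 2023",
    ]
    query = "I can't get into my account anymore"  # shares almost no keywords

    doc_emb = model.encode(docs, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)

    # Cosine similarity surfaces the log-in recovery doc despite the
    # near-total lack of shared vocabulary with the query.
    scores = util.cos_sim(query_emb, doc_emb)[0]
    best = scores.argmax().item()
    print(docs[best], float(scores[best]))

A keyword engine would rank all three documents near zero for that query; the embedding model matches on meaning.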
Now, I've probably been one of the most vocal here, shouting "That's the opposite of intelligence!" - even in the past 24 hours - but be objective: there has also been progress...
(* Around five years ago we were still stuck on Hinton's problem of interpreting pronouns as pointers, as in "the item won't fit in the case: it's too big" vs. "the item won't fit in the case: it's too small" - look at it now...)
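You can probe that pronoun problem yourself today. A sketch, assuming the openai Python client with an API key in the environment; the model name is illustrative, and the schema pair is the point, not the particular API:

    # Probing the Winograd-style pronoun problem from the footnote above.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    for adjective in ("big", "small"):
        sentence = f"The item won't fit in the case: it's too {adjective}."
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{
                "role": "user",
                "content": f'In "{sentence}", what does "it" refer to? '
                           "Answer with one word.",
            }],
        )
        print(adjective, "->", resp.choices[0].message.content)
        # Resolving "big" -> the item and "small" -> the case requires
        # the physical reasoning that stumped systems five years ago.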
Of course I see progress, but I feel like the gap between "thinks" and "regurgitates" is still far from bridged, if bridging it is even on the horizon with the current approach. IMHO.
edit: furthermore, LLMs probably occupy very little "real estate" in "make machines THINK" land. But they are a crucial piece of the overall puzzle.
edit: an actual example of something I'd expect a real AI to be able to solve by itself, but at which current LLMs fail miserably: https://x.com/RadishHarmers/status/1885884032220643587