Sounds more like hope to me. There's nothing special about creativity.
The real reason chefs won't be replaced by robots sooner than software devs are replaced by AI is a lot more boring: sensorimotor control and physical perception are harder than reasoning and "creativity".
It doesn't help that we don't know how to digitize taste, so any model with a good sense of taste will have to develop it indirectly, incentivized by something else (e.g. a language model training on recipes).
GPT is a predictor. It will just continue to reduce loss until it has modelled the data entirely.
I see GPT and similar LLMs as basically like managing an over-eager intern. Have you ever tried to do that?
They're overflowing with ideas and knowledge and passion, and "all" you have to do is point them in the right direction. Except that when you review their code, you find that they didn't consider about a thousand different edge cases. Oh, and they didn't follow the style guide. Oh, and they are way over-focused on the wrong parts of the problem, prematurely optimizing performance in places where it doesn't matter. Oh, and their code is nigh-on unreadable. Oh, and they wrote a bunch of code that is already provided in libraries, but they just didn't know about it, and their implementation is probably full of subtle bugs so they should just use the library version. Oh, and they forgot to update the CI because now they need to pull in a new dependency to run their tests. Oh, and... the list goes on.
I don't ever see AI progressing past the regurgitation stage; you can give it as much knowledge as you want, and it can rearrange and reproduce and restate that knowledge in a thousand different ways; but we're so far from AI being able to handle all the details of our work that, as I said above, fully specifying the problem is going to be just as much work as doing the work yourself. And you'll still need expertise to do so, because our software systems are complex and full of nuance that can't be easily communicated to a machine.
AI doomers always strike me as people who have never had to try to corral interns; maybe that's an overly specific life experience to expect someone to have, but it's a really useful proxy for how much "more productive" we're going to get with AI. A really good intern can solve simple problems given exact constraints, but anything requiring lateral thinking will take them a lot of coaxing to get to the right solution. And that's okay! People can learn and get better. But I don't see AI getting better "enough" to take on anything beyond that first rung of the ladder of complexity.
>I see GPT and similar LLMs as basically like managing an over-eager intern. Have you ever tried to do that?
So? Assuming you're right, GPT-3 was not at the level of an intern. GPT-2 could not even write coherent text.
I bet you didn't expect any of those developments either.
It's interesting how much people struggle to look forward. I guess we never needed to for nearly all of our evolutionary history. Like very few people genuinely think they'll be replaced right before they are.
Your argument basically boils down to "I have a hunch language models will stop improving".
Your hunch is unfounded, backed by nothing except vague assertions that "surely this time" these goals won't be reached.
Assertions that many people parroted just a few years ago, and that have been proven wrong again and again since.
The text is data. Gradient descent and prediction will strive to model the data. The data will be modelled. That's really all there is to it.
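To spell out what "reduce loss until it has modelled the data" means, here's a toy next-token training loop in PyTorch. The tiny character-level model and sample text are hypothetical stand-ins for illustration, not GPT's actual architecture or data:

    # Toy sketch of next-token prediction training (assumed setup, not GPT's real architecture).
    import torch
    import torch.nn as nn

    text = "the text is data. the data will be modelled. "
    vocab = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(vocab)}
    ids = torch.tensor([stoi[ch] for ch in text])

    # A deliberately tiny "language model": embed each character, predict the next one.
    model = nn.Sequential(nn.Embedding(len(vocab), 32), nn.Linear(32, len(vocab)))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    for step in range(200):
        logits = model(ids[:-1])                              # prediction at each position
        loss = nn.functional.cross_entropy(logits, ids[1:])  # next-token cross-entropy
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The loss keeps falling as gradient descent fits the model to the text;
    # scaled up by many orders of magnitude, that's the entire training signal.

Nothing in that loop knows about interns or edge cases; it just keeps driving prediction error down on whatever data you feed it.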