4) If computers get good enough at 1) or 2), then there'd be much bigger problems, and essentially all humans would become the starving artists.
Also, I'm not so sure that generative models like SD, Imagen, GPT-3, and PaLM are purely copycats. And I'm not so sure that most human artists aren't mostly copycats either.
My suspicion is that there's much more overlap between how these models work and what artists do (and how humans think in general) than we generally admit, but we elevate creative work so much that it's difficult to acknowledge the size of the overlap. The reason I lean this way is the supposed role of language in the evolution of human cognition (https://en.m.wikipedia.org/wiki/Origin_of_language).
And the reason I'm not certain that the NN-based models are purely copycats is that they have internal state; they can and do perform computations, invent algorithms, and can almost perform "reasoning". I'm very much a layperson, but I found this "chain of thought" approach (https://ai.googleblog.com/2022/05/language-models-perform-re...) very interesting: the reasoning task given to the model is made much more explicit. My guess is that some iterative construction like this is how the reasoning ability of language/image models will improve.
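For context, the core trick of chain-of-thought prompting is simply to include a worked example in the prompt whose answer spells out its intermediate reasoning, so the model imitates that step-by-step style on the new question. A minimal sketch (the generate function below is a hypothetical stand-in for whatever text-completion API you'd actually call, not a real library):

    # Minimal illustration of chain-of-thought vs. direct prompting.
    # NOTE: generate() is a hypothetical placeholder, not a real API.

    def generate(prompt: str) -> str:
        """Stand-in for a call to a language-model completion endpoint."""
        return "<model output would appear here>"

    question = ("Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
                "How many tennis balls does he have now?")

    # Direct prompting: just ask for the answer.
    direct_prompt = f"Q: {question}\nA:"

    # Chain-of-thought prompting: the in-context example writes out its
    # reasoning steps, nudging the model to do the same before answering.
    cot_prompt = (
        "Q: A juggler has 16 balls. Half are golf balls, and half of the golf balls "
        "are blue. How many blue golf balls are there?\n"
        "A: There are 16 / 2 = 8 golf balls, and 8 / 2 = 4 of them are blue. "
        "The answer is 4.\n\n"
        f"Q: {question}\nA:"
    )

    print(generate(direct_prompt))
    print(generate(cot_prompt))

The only difference between the two prompts is the worked example; that alone is what makes the "reasoning" more explicit.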
But at a high level, the only thing we humans have going for us is the anthropic principle. Hopefully there's some magic going on in our brains that's so complicated and unlikely that no one will ever figure out how it works.
BTW, I am a layperson. I am just curious when we will all be killed off by our robot overlords.
> and essentially all humans will become the starving artists
All of these assumptions miss something so huge that it surprises me how many people miss it: WHO is doing the art purchasing? WHO is evaluating the "value" of... well... anything, really? It is us. Humans. Machines can't value anything properly (example: find an algorithm that can spot, or create, the next music hit BEFORE any humans hear it). Only humans can, because "things" (such as artistic works, which are barely even "things" and much more like "arbitrary forms" when considered objectively, from the universe's perspective) only have meaning and value to US.
> when we will all be killed off by our robot overlords
We won't. Not unless those robots are directed or programmed by humans who have passionate, malicious intent. Because machines don't have will, don't have need, and don't have passion. Put bluntly and somewhat sentimentally, machines don't have love (or hate), except that which is simulated or given by a human. So it's always ultimately the human's "fault".
> we won't be killed off by AGI unless humans with malicious intent direct it
I wouldn't say malice is necessary. It's just economics. Humans are lazy, inefficient GIs that fart. The only reason the global economy feeds 8 billion of us is that we are the best, cheapest (and only) GI.
If we manage to create life capable of doing 1) and 2), but also capable of self-improvement and of designing its own intelligence, then I think what we've done is create the next step in the universe understanding itself, which is a good thing. Bacteria didn't panic when multicellular life evolved. Bacteria are still around; they're just a thriving part of a more complex system.
At some point biological humans will either merge with their technology or stop being the forefront of intelligence in our little corner of the universe. Either of those is perfectly acceptable as far as I am concerned and hopefully one or both of them come to pass. The only way they don't IMO is if we manage to exterminate ourselves first.
Bacteria obviously lack the capacity to panic about the emergence of multicellular life.
A vast number of species are no longer around, and we are relatively unusual in being a species that can even contemplate its own demise, so it's entirely reasonable for us to think about, and be concerned about, our own technological creations supplanting us, possibly maliciously.