Hah! I love the one in the bottom-left that looks like it's flipping us off.
This reminds me of playing around with Craiyon a few weeks back. It seemed to think of itself as a coyote who loves soccer. Of course, the training datasets aren't going to have data on DALL-E or Craiyon themselves, so those prompts will be driven more by the other words, plus some randomness.
> I asked DALL-E to draw a self portrait of itself and its family one time, and the results still make my hairs stand up.
DALL-E and other image-generation models are not conscious, and they aren't even intelligent. Stop anthropomorphizing them; it's not helpful. We will likely face this problem for real in the coming decades, but there's no sense doing so with current models.
I don't subscribe to the AI boogeyman theories. Consciousness is some made-up philosophical MacGuffin. I don't care if a bunch of PhDs can align on a definition for hokum and then tell me whether one robot has it or not.
Asking things to draw themselves is fun; I could just as easily ask an elephant holding a paintbrush to paint itself and then enjoy the outcome. That's pretty much all there is to it.
> Consciousness is some made-up philosophical MacGuffin.
Well, it's not totally made up. There is definitely something there; there is a real difference between the experience of being a rock and being a human.
The idea then of a P-zombie or some other version of a major intelligence operating with the lights off internally really is spooky.
> Asking things to draw themselves is fun; I could just as easily ask an elephant holding a paintbrush to paint itself and then enjoy the outcome. That's pretty much all there is to it.
Agreed. But asking DALL-E or whatever model to draw "the meaning of life" and thinking there is some kind of enlightenment in what it draws is ridiculous.
> Well it's not totally made up. There is definitely something there
"Consciousness" can be a (fairly vague) term encompassing concretely realizable things like train of thought, the ability to introspect those thoughts, an internal model of self, etc. Humans have these whereas a rock does not, and there's nothing in particular preventing AI from eventually having these.
"Consciousness" as something incorporeal that a physically identical being could lack is nonsense territory IMO (but already heavily debated by people far smarter than me). I don't see how whatever we mean by consciousness can make no physical difference when we're directly aware of it and talking about it in the physical world.
If you are having this conversation with me then you are a consciousness and I am a consciousness and that's as good a definition of consciousness as we are ever going to get. Consciousness thus defined exists entirely within the communicative medium.
If I understand it correctly, it's more like a big association model. So the word "pig" is linked to a lot of images tagged with "pig", and those in turn carry weighted associations with other things that have to do with pigs.
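To make the "weighted association" idea concrete, here's a toy sketch. This is NOT how DALL-E actually works internally (real models learn dense embeddings from text-image pairs, not tag counts); the captions and tags below are made-up illustration data, assuming only that prompt words co-occur with image tags more or less often:

```python
# Toy weighted-association model: count how often each image tag
# co-occurs with each prompt word. Purely illustrative.
from collections import Counter

# Hypothetical training data: (prompt word, tags on the paired image).
pairs = [
    ("pig", ["pig", "farm", "mud"]),
    ("pig", ["pig", "pink", "snout"]),
    ("farm", ["barn", "farm", "tractor"]),
]

associations: dict[str, Counter] = {}
for word, tags in pairs:
    associations.setdefault(word, Counter()).update(tags)

# "pig" is now most strongly associated with the tag "pig",
# with weaker links to "farm", "mud", "pink", and "snout".
print(associations["pig"].most_common(3))
```

A prompt containing a word the model has never seen (like "DALL-E" itself) would have no associations at all, which is why such prompts lean on the other words in the prompt plus noise.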
DALL-E itself was Clippy-like, blue, and bean shaped with eyes and mouth more expressive than a Zuckerberg VR avatar.
As for the family? There was none. DALL-E rendered itself on a pure black background.