You could make the argument that two things that we don’t understand are the same thing because we’re equally ignorant of both in the same way that you could make the argument that Jimmy Hoffa and Genghis Khan are probably buried in the same place, since we have equal knowledge of their locations.
Clearly there is a difference between a small person hidden inside playing chess and a fully mechanical chess automaton, but as observers we might not be able to tell the difference. The observer's perception of the facts doesn't change the actual facts, or the implications of those facts.
The Mechanical Turk, however, was not a simulation of human consciousness, reasoning, chess-playing, or any other human ability: it was the real thing, somewhat artfully dressed up so as to appear otherwise.
Is it meaningful to say that AlphaGo Zero does not play Go, it just simulates something that does?
Well, I do not proclaim consciousness: only the subjective feeling of consciousness. I really 'feel' conscious, but I can't prove or 'know' that I am in fact 'conscious' and making choices. To be conscious is to 'make choices' instead of just obeying the rules of chemistry and physics, which you would HAVE TO BREAK in order to be conscious at all (how can you make a choice at all if you are fully obeying the rules of chemistry, which allow no choice?).
A choice does not apply to chemistry or physics. Where does choice come from? I suspect from our fantasies and not from objective reality (for I do not see humans consistently breaking the way chemistry works in their brains); it probably comes from nowhere.
If you can first explain the lack of choice available in chemistry (and how that doesn't interfere with our being able to make a choice), then I'll entertain the idea that we are conscious creatures. But if choice doesn't exist at the chemical level, it can't magically emerge from following deterministic rules. And chemistry is deterministic, not probabilistic (H2 + O never magically makes neon, or two water molecules instead of one).
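To make that determinism point concrete, here is a minimal sketch. The update rule below is a toy stand-in I picked for illustration, not a model of chemistry: the point is just that a system evolving by fixed rules produces the identical trajectory on every run, with no step where a 'choice' could enter.

```python
# Toy illustration of determinism: fixed rules, no room for choice.
# The update rule is an arbitrary stand-in, not a model of chemistry.

def step(state: int) -> int:
    """Deterministic update: same input always yields the same output."""
    return (state * 1103515245 + 12345) % (2**31)

def trajectory(seed: int, n: int) -> list[int]:
    """Evolve the system n steps from an initial condition."""
    states = [seed]
    for _ in range(n):
        states.append(step(states[-1]))
    return states

# Two "runs" from the same initial conditions are indistinguishable;
# nowhere in the process is there a point where the system could choose.
assert trajectory(42, 10) == trajectory(42, 10)
```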
Experience and choice are adjacent when they are not the same.
I specifically mean to say the experience of choice is the root of conscious thought - if you do not experience choice, you're experiencing the world the exact same way a robot would.
Compare pretending you are the fictional character in a movie with being the fictional character in a video game: one experience has more choice, making conscious decisions versus having a passive experience.
Merely having an experience is not enough to be conscious. You have to actively be making choices to be considered conscious.
Consciousness is about making choices. Choices are a measure of consciousness.
I don't think this is clear at all. What I am experiencing is mostly the inner narrator, the ongoing stream of chatter about how I feel, what I see, what I think about what I see, etc.
What I experience is self-observation, largely directed through or by language processing.
So, one LLM is hooked up to sound and vision and can understand speech. It is directed to “free associate” an output, which is fed to another AI. When you ask it things, the monitoring AI evaluates the truthfulness and helpfulness of what it says, and its potential to insult or harm others. It then feeds that back as inputs to the main AI, which incorporates the feedback. The supervisory AI is responsible for what it says to the outside world, modulating and structuring the output of the central AI. Meanwhile, when not answering or conversing, it “talks to itself” about what it is experiencing. Now if it can search and learn incrementally, uh, I don’t know. It begins to sound like assigning an Id AI, an Ego AI, and a Superego AI.
But it feels intuitive to me that general AI is going to require subunits, systems, and some kind of internal monitoring and feedback.
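A rough sketch of that loop, just to pin down the moving parts. All the names here (IdAI, EgoAI, SuperegoAI, free_associate, and the scoring scheme) are hypothetical stand-ins, not any real framework:

```python
# Sketch of the Id/Ego/Superego loop described above. Everything here
# is a hypothetical stand-in: the scores are placeholders for what
# would really be another model's judgments.

from dataclasses import dataclass, field

@dataclass
class IdAI:
    """The central model: free-associates over its sensory inputs."""
    memory: list[str] = field(default_factory=list)

    def free_associate(self, stimulus: str) -> str:
        thought = f"thinking about: {stimulus}"
        self.memory.append(thought)  # the inner monologue accumulates
        return thought

@dataclass
class SuperegoAI:
    """The monitoring model: scores candidate output for feedback."""
    def evaluate(self, candidate: str) -> dict:
        return {"truthful": 1.0, "helpful": 0.8, "harmful": 0.0}

@dataclass
class EgoAI:
    """The supervisory model: modulates what reaches the outside world."""
    def modulate(self, candidate: str, scores: dict) -> str:
        return "[withheld]" if scores["harmful"] > 0.5 else candidate

def respond(stimulus: str, id_ai: IdAI, superego: SuperegoAI, ego: EgoAI) -> str:
    candidate = id_ai.free_associate(stimulus)
    scores = superego.evaluate(candidate)
    id_ai.memory.append(f"feedback: {scores}")  # feedback is incorporated
    return ego.modulate(candidate, scores)

print(respond("what do you see?", IdAI(), SuperegoAI(), EgoAI()))
```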
Not seeing X is not proof that X doesn’t exist. Here, X may or may not exist.
X = difference between simulated and real consciousness
Black holes were posited before they were detected empirically. We didn't declare them non-existent when the theory came out just because we couldn't yet detect them.
Throwing all the paintings made prior to 1937 into an LLM would never get Guernica out of it. As long as it's an LLM this stands, not just today but all the way into the future.
This empty sophistry of presuming that automated bullshit generators can somehow mimic a human brain is laughable.
The author fails to provide any argument other than one from incredulity, plus some bad reasoning built on bad-faith examples.
The dollar bill copying example is a faulty metaphor. He claims humans are not information processors, then tries to demonstrate this by having a human process information (drawing from reference is processing an image and producing an output).
His argument sounds like one from 'It's Always Sunny'. As if metaphors never improve or get more accurate over time, and as if this latest metaphor isn't the most accurate one we have. It is. When we have something better, we'll all start talking about the brain in that frame of reference.
This is an idiot who can write in a way that masks some deep bigotries (in favor of the mythical 'human spirit').
I do not take this person seriously. I'm glossing over all the casual incorrectness of his statements; a good number of them just aren't true. The ones I just scrolled to include statements like 'the brain keeps functioning or we disappear', or 'This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms' in the description of the 'linear optical trajectory' ALGORITHM (a set of simple steps to follow, in this case visual pattern matching).
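For what it's worth, the 'linear optical trajectory' strategy really is expressible as a loop of simple steps, which is exactly the point. A minimal sketch; the toy drift model and the gain constant are my illustrative assumptions, not the published LOT model from the vision literature:

```python
# The "linear optical trajectory" idea as an algorithm: a simple
# perceptual feedback loop. The drift model is a toy assumption.

def run_lot_loop(initial_drift: float, gain: float = 0.5, steps: int = 20) -> float:
    """Each step: observe how far the ball's optical image has drifted off
    a straight line in the visual field, then move to cancel that drift."""
    drift = initial_drift
    for _ in range(steps):
        drift -= gain * drift  # move against the observed drift (toy model)
    return drift

print(run_lot_loop(1.0))  # ~0: the image is held on a straight line
```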
If the answer is no, could you make an argument that they are the same?