I don't understand how you can be so confident of this. How are you defining consciousness? How are you measuring it? What makes you believe with such emphatic certainty that I am a conscious being and not a p-zombie? (or, if you prefer, a bot that easily passes the Turing test)
> They are not unrelated ideas, and in fact, the ability of an organism to survive is closely tied to this kind of behavior.
That's what I'm saying, consciousness is not a "kind of behavior". There is nothing behavioral about your inner experience as a conscious entity.
I think the TL;DR of the argument against p-zombies goes like this: if you have two things that are by definition indistinguishable by any possible measurement, even in principle, then they are by that very definition the same. Since there is, by definition, no way to tell whether someone is a p-zombie or not, the introduction of the term "p-zombie" doesn't make any sense at all -- so why would you ever introduce it?
The people who argue for p-zombies often do so because they want to keep consciousness as something fundamentally different from the material world, something inaccessible to science. But that's wrong. Even magic is accessible to science. By the very definition and idea of science, anything that has any causal influence on the observable universe can be studied, and is therefore in the domain of science.
The TL;DR argument against philosophical zombies is more like this: if consciousness is non-causal (the consequence if p-zombies can exist), then the answer to the question "Why do I think I'm conscious?" cannot in any way make reference to the fact that you actually are conscious. Suppose we take the two parallel universes and run the same experiment in each, where the conscious and non-conscious doppelgangers are both asked the question "are you a conscious, self-aware human being?" Both of them will answer "yes", of course, and we can record and observe whatever we want about their brain states and so on, and get exactly the same results for both.
So only one of the versions is correct, and only by coincidence! All the reasons the conscious brain has to think it's a conscious human being, and to answer "yes" to the question, are also in play in the zombie universe, which answers "yes" as well. The only difference is that in the "real" world the non-zombie brain happens to be right, for literally no reason at all.
And I think it's around this point you're supposed to realize the absurdity of the thought experiment.
That's a very bad argument. Indistinguishability doesn't entail identity. One obvious way to show this is to note that only the latter is a transitive relation. In other words, if A = B and B = C, then A = C; but if A is indistinguishable from B and B is indistinguishable from C, it doesn't follow that A is indistinguishable from C.
Yes, I know. Indistinguishability in that sense is not a transitive relation. Imagine e.g. that we have detectors which can distinguish As from Cs, but no detectors which can distinguish As from Bs or Bs from Cs. There is no contradiction in that scenario. In contrast, there is no consistent scenario in which A = B and B = C but A != C.
Imagine that we have a bunch of As, Bs and Cs in one place. Start testing every one against every other. You'll quickly sort them into three groups: an A tests positive with other As and with Bs, but tests negative with Cs. A C tests negative with As, but tests positive with Bs and other Cs. A B is the one that tests positive with everything.
Here, I distinguished them all. Doesn't that contradict your argument about indistinguishability not being transitive in general?
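To make the trick concrete, here's a quick Python sketch (the detector function is just my stand-in for the scenario's detectors):

```python
# Stand-in detector for the scenario above: the only pair it can tell
# apart is an A and a C; every other pairing "tests positive"
# (i.e. looks indistinguishable).
def tests_positive(x, y):
    return {x, y} != {"A", "C"}

population = ["A", "B", "C"]

# An item's "signature" is its pattern of results against everything.
signatures = {x: tuple(tests_positive(x, y) for y in population)
              for x in population}

print(signatures)
# {'A': (True, True, False), 'B': (True, True, True), 'C': (False, True, True)}
# Three distinct signatures, so the As, Bs and Cs sort themselves out even
# though no single test separates an A from a B, or a B from a C.
```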
Yeah, that strategy would work in the scenario I sketched, but it's easy to change it so that you couldn't do that. Just say we have As, Bs, Cs and Ds and that all pairings are indistinguishable except As with Ds.
But at this point I have to ask: how do you define identity? I'm pretty sure that I could use the strategy I outlined above to separate our objects into three groups - As, Ds and the rest. So how do you define that Bs are not Cs, if there is no possible way of telling the difference?
I'd define identity as the smallest relation holding between all things and themselves.
If you want, you can redefine identity in terms of some notion of indistinguishability, but then you'll end up with the odd consequence that identity is not transitive. In other words, you'd have to say that if A is identical to B, B is identical to C, and C is identical to D, it doesn't necessarily follow that A is identical to D.
There are even semi-realistic examples of this, I think. Suppose that two physical quantities X and Y are indistinguishable by any physically possible test whenever |X - Y| < 3. Then i(1,2), i(2,3), i(3,4), but clearly not i(1,4).
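That's just a tolerance relation, and the failure of transitivity is easy to see directly; a minimal sketch with the threshold of 3 from the example:

```python
# Indistinguishability as a tolerance relation: reflexive and symmetric,
# but not transitive.
def i(x, y):
    return abs(x - y) < 3

print(i(1, 2), i(2, 3), i(3, 4))  # True True True
print(i(1, 4))                    # False: the chain doesn't compose
```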
I'll have to think a bit more about this. Thanks for all those scenarios and making my brain do some work :).
So at this point I'm not sure if your example is, or is not an issue for a working definition of identity. To circle back to p-zombies, as far as I understand, they are not supposed to be distinguishable from non-p-zombies by any possible means, which includes testing everything against everything.
What if I define the identity test I(a,b) in this way: I(a,b) ↔ ∀i : i(a,b), where i(a,b) is an "indistinguishability" test? This should establish a useful definition of identity that works in my scenario, and also in your last example, unless you limit the domain of X and Y to integers from 1 to 4. But in this last case there's absolutely no way to tell there's a difference between 2 and 3, so they may as well be just considered as one thing.
As I said, I need to think this through a bit more, but what my intuition is telling me right now is that the very point of having a thing called "identity" is to use it to distinguish between things - if two things are identical under any possible test, there's no point in not thinking about them as one thing.
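Here's a rough sketch of what I mean, reading "every test" to include comparing a and b against every third thing z (the sorting trick from before); the tiny finite domain is just for illustration:

```python
# The only primitive test available, as in your example.
def i(x, y):
    return abs(x - y) < 3

# Operational identity: a and b count as identical iff no available test
# result ever separates them, probing against the whole domain.
def I(a, b, domain):
    return all(i(a, z) == i(b, z) for z in domain)

domain = [1, 2, 3, 4]
print(I(2, 3, domain))  # True: nothing in 1..4 can tell 2 from 3
print(I(1, 2, domain))  # False: z = 4 separates them (i(1,4) != i(2,4))
```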
>But in this last case there's absolutely no way to tell there's a difference between 2 and 3, so they may as well be just considered as one thing.
Yes, that's the point. But then you lose the transitivity property, since although 2 and 3 are indistinguishable, 3 and 4 are indistinguishable, and 4 and 5 are indistinguishable, 2 and 5 are not. So the kind of operational definition of identity you have in mind yields a relation that's so radically unlike the standard characterization of the identity relation that I don't see any reason to call it "identity" at all.
Here's one way of drawing this out. Suppose that X linearly increases from 2 to 5 over a period of 3 seconds. Do we really want to say that there was no change in the value of X between t=0 and t=1, no change between t=1 and t=2, no change between t=2 and t=3, and yet a change between t=0 and t=3? (?!)
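Spelled out in code (my rendering of the same example, sampling X once per second):

```python
def i(x, y):              # "no detectable change" relation, threshold 3
    return abs(x - y) < 3

X = {t: t + 2 for t in range(4)}   # X(0)=2, X(1)=3, X(2)=4, X(3)=5

for t in range(3):
    print(f"t={t}..{t+1}: no change? {i(X[t], X[t+1])}")   # all True
print(f"t=0..3:  no change? {i(X[0], X[3])}")              # False (?!)
```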
As far as I understand you, you have some kind of positivist skepticism about non-operationalizable notions, and so you want to come up with some kind of stand-in for identity which can play largely the same role in philosophical/scientific discourse as the ordinary, non-operationalizable notion of identity. That's a coherent project, but it rests on assumptions that anyone who's interested in P-zombies is likely to reject.
> Here's one way of drawing this out. Suppose that X linearly increases from 2 to 5 over a period of 3 seconds. Do we really want to say that there was no change in the value of X between t=0 and t=1, no change between t=1 and t=2, no change between t=2 and t=3, and yet a change between t=0 and t=3? (?!)
Yeah, I get that, but what I meant in my previous comment is that you either limit the domain of t to 0-3 (and X to 2-5), in which case there is indeed no way to tell that anything changed between t=1 and t=2, or you don't limit yourself to that and can distinguish the intermediate values by means of the trick I described before. In other words, either you have transitive identity, or you have every reason to treat the non-transitive cases as one thing (if the identity test is like the one I described in my previous comment).
> positivist skepticism about non-operationalizable notions
I think it's too late in the night for me to understand this, I'll need to come back to it in the morning. Could you ELI5 to me the meaning of "non-operationalizable" in this context?
Again, thanks for making me think and showing me the limits of my understanding.
>Again, thanks for making me think and showing me the limits of my understanding.
Yes this was a fun discussion, thanks.
Your objection stands if you have (and know you have) at least one instance of every value for the quantity. So suppose that we are given a countably infinite set of variables and told that each integer is denoted by at least one of these variables, and then further given a function over pairs of variables f(x,y), such that f(x,y) = 1 if x and y differ by less than 3 and = 0 otherwise. Then, yes, we can figure out which variables are exactly identical to which others.
However, I would regard this as an irrelevant scenario, in the sense that we could never know, via observation, that we had obtained such a set of variables (even if we allow the possibility of making a countably infinite number of observations). Suppose that we make an infinite series of observations and end up with at least one variable denoting each member of the following set (with the ellipses counting up/down to +/- infinity):
..., 0, 1, 3, 4, 5, 6, 8, 9, ...
In other words, we have variables with every integer value except 2 and 7. Then for any variable x with the value 4 and any variable y with the value 5, f(x,z) = f(y,z) for all variables z. In other words, there'll be no way to distinguish 4-valued variables from 5-valued variables. It's only in the case where some oracle tells us that we have a variable for every integer value that we can figure out which variables have exactly the same values as which others.
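If it helps, here's a finite stand-in in Python (the window of integers is my simplification; the real scenario is infinite):

```python
# One variable per integer in a window, except that the values 2 and 7
# never occur among our observations.
def f(x, y):
    return 1 if abs(x - y) < 3 else 0

observed = [v for v in range(-10, 16) if v not in (2, 7)]

def signature(x):
    return [f(x, z) for z in observed]

print(signature(4) == signature(5))  # True: f can't tell them apart
print(signature(3) == signature(4))  # False: z = 1 (or z = 6) separates them
```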
Of course it does, by Voevodsky's Univalence Axiom ;-).
>One obvious way to show this is to note that only the latter is a transitive relation. In other words, if A = B and B = C, then A = C; but if A is indistinguishable from B and B is indistinguishable from C, it doesn't follow that A is indistinguishable from C.
In this case, you seem to be envisioning A, B, and C as points along a spectrum, and talking about ways to classify them as separate from each other, in which we can classify {A, B}->+1 or {B, C}->+1, but {A, C}->-1 always holds.
That's fine, but when we say "indistinguishable" in the p-zombie argument, we're talking about a physical isomorphism, which doesn't really allow for the kinds of games you can get away with when classifying sections of a spectrum.
>Of course it does, by Voevodsky's Univalence Axiom ;-).
I think this was a joke, right? Just asking because it's hard to tell sometimes on the internet. I didn't see how VUA was particularly relevant but I may be missing something.
It is question-begging in this context to assert that the existence of a physical isomorphism between A and B entails that A and B are identical, since precisely the question at issue in the case of P-zombies is whether or not that's the case.
I took OP to be making an attempt to avoid begging the question by arguing that in general, indistinguishability in a certain very broad sense entails identity, so that without question-beggingly assuming that the existence of a physical isomorphism entails identity, we could non-question-beggingly argue from indistinguishability to identity. In other words, rather than arguing that P-zombies couldn't differ in any way from us because they're physically identical to us (which just begs the question), the argument would be that they couldn't differ in any way from us because they're indistinguishable from us.
This isn't really germane to the p-zombie thought experiment, but:
Indistinguishability does entail identity. If I have a sphere of iron X, and a sphere of iron Y which is atom-for-atom, electron-for-electron, subatomic-particle-for-subatomic-particle identical to sphere X, and I place sphere X in position A, and sphere Y in position B, then they are still distinguishable, because one is in position A and one is in position B.
Basically, I'm not sure what the two of you mean by "the same", but I suspect you're not in agreement on it.
I think we're talking about a sense of indistinguishable/identical for which the two spheres would be indistinguishable/identical, since we're comparing a person to a P-zombie, so it's clear that we're dealing with two different individuals. I think identity in that sense is still transitive on the ordinary understanding. So e.g. if I can show that sphere A has exactly the same physical constitution as sphere B, and that sphere B has exactly the same physical constitution as sphere C, then presumably sphere A must have exactly the same physical constitution as sphere C.
The human and the p-zombie are distinguishable because one is in the zombie universe and one isn't. For the purposes of the experiment, you're not supposed to be able to tell which universe is which by observation of the universe itself (i.e. there is no property of p-zombies that gives them away as p-zombies), but from the outside looking in I guess you have a label for one and a label for the other.
Like I said, it doesn't seem germane to the thought experiment anyway, which doesn't allow for epsilons, at least none that could have a causative effect on anything. Like, if you have universe A with no consciousness, and universe B with orange-flavored consciousness, and universe C with grape-flavored consciousness, and finally universe D with cherry-flavored consciousness, and none of them are distinguishable from the others except for universe A and universe D, then you're violating the terms of the thought experiment because you have two supposedly physically identical universes which are nonetheless distinguishable by dint of their underlying consciousness substrates (or lack thereof).
Anyway, you're right, it is a weak argument, but only because it doesn't go far enough in outlining why p-zombies are ridiculous (which, IMO, the argument I presented instead does).
Identity isn't what we're measuring here; it's "humanness" or "consciousness" -- things that are behaviorally distinguishable, up to an abstract categorical similarity.
Thus they only need to be indistinguishable up to some feature of similarity that allows them to be classified in the same group. That's why, for example, we don't have to worry about "A is the same as B except that it is 2 meters to the left."
OP was saying that P-zombies are "the same" as us in virtue of being indistinguishable from us. I was just pointing out that this inference doesn't go through, since two non-identical things can be indistinguishable.
>I don't understand how you can be so confident of this. [...] How are you measuring it? What makes you believe with such emphatic certainty that I am a conscious being and not a p-zombie?
Because p-zombies are self-contradictory. The definition of a p-zombie is a contradiction. It's like saying "suppose 1 = 2 and 1 != 2. Call this a p-zombie quality."
When you suppose that the behavior of a thing is separate from the reality of a thing, you are failing to account for how the words 'behavior' and 'reality' acquire meaning -- through observation. They cannot be different because the processes that establish their meaning are identical.
To suppose that a p-zombie could be different from a person, yet measurably identical in all aspects is a contradiction.
>How are you defining consciousness?
There is a big difference between meaning and definition. I don't have to define consciousness, I only need to know what it means. I only need to identify the use-cases where it is appropriate.
>There is nothing behavioral about your inner experience as a conscious entity.
Yes there is: behavior is the activity that you measure, and you can measure brain activity.
> behavior is the activity that you measure, and you can measure brain activity.
You've shifted your definition of "behavior" now. I thought we were talking about behaviors that impact survival and are acted on by natural selection, not minute differences in MRI scans. For purposes of the thought experiment, I certainly don't care if the p-zombie has a slightly different brain-wave. Let's say they're permanently sleepwalking, then.
I really feel like you're hand-waving at supposed contradictions here, rather than engaging with why this is a difficult problem. If you firmly reject the idea of a p-zombie, let's leave that aside for now.
Do you believe that it would be possible, in principle, to build a robot that looked and acted extremely similar to a human being? It could carry on conversations, make decisions, defend itself against antagonists, etc. in a similar manner to a human being? In your view, would such a robot be necessarily a conscious entity?
> Do you believe that it would be possible, in principle, to build a robot that looked and acted extremely similar to a human being? It could carry on conversations, make decisions, defend itself against antagonists, etc. in a similar manner to a human being? In your view, would such a robot be necessarily a conscious entity?
I don't even know that other humans are conscious entities. At least not with the level of rigor you seem to be demanding I apply to this hypothetical robot. However, if you and I were to agree upon a series of tests such that, if a human passed them, we would assume for the sake of argument that that human was a conscious entity, and if we then subjected your robot to those same tests and it passed as well, then I would assume the robot was also conscious.
You might have noticed I made a hidden assumption in those tests, though, which is that in establishing the consciousness or non-consciousness of a human, they do not rely on the observable fact that the subject is a human. Is that reasonable?
Sure, absolutely. I agree that we could construct a battery of tests such that any entity passing should be given the benefit of the doubt and treated as though it were conscious: granted human (or AI) rights, allowed self-determination, etc.
> I don't even know that other humans are conscious entities
Exactly. Note that the claim Retra is making (to which I was responding) was very much stronger than this. He is arguing not just that we should generally treat beings that seem conscious (including other people) as if they are, but that they must by definition be conscious, and in fact that it is a self-contradictory logical impossibility to speak of a hypothetical intelligent-but-not-conscious creature.
>For purposes of the thought experiment, I certainly don't care if the p-zombie has a slightly different brain-wave.
Yes, you do. Because if the p-zombie has a slightly different brain-wave, it remains logically possible that p-zombies and a naturalistic consciousness can both exist. The goal of the thought-experiment is to prove that consciousness must be non-natural -- that there is a Hard Problem of Consciousness rather than a Pretty Hard Problem. Make the p-zombie physically different from the conscious human being and the whole thing fails to go through.
Of course, Chalmers' argument starts by assuming that consciousness is epiphenomenal, which is nonsense from a naturalistic, scientific point of view -- we can clearly observe it, which means it interacts causally, which renders epiphenomenalism a non-predictive, unfalsifiable hypothesis.
Do you believe that it would be possible, in principle, to build a robot that looked and acted extremely similar to a human being? It could carry on conversations, make decisions, defend itself against antagonists, etc. in a similar manner to a human being? In your view, would such a robot be necessarily a conscious entity?
>I thought we were talking about behaviors that impact survival and are acted on by natural selection, not minute differences in MRI scans.
I was talking about the stupidity of p-zombies. Either way, those 'minute' differences in MRI scans build up in such a way as to determine the survival of the mind being scanned.
>Do you believe [...] such a robot be necessarily a conscious entity?
Yes, it would. Because in order to cause such behavior to be physically manifest, you must actually construct a machine of sufficient complexity to mimic the behavior of a human brain exactly. It must consume and process information in the same manner. And that's what consciousness is: the ability to process information in a particular manner.
Even a "sleepwalking zombie" must undergo the same processing. That processing is the only thing necessary for consciousness, and it doesn't matter what hardware you run it on. As in Searle's problem: even if you run your intelligence on a massive lookup table, it is still intelligence. Because you've defined the behavior to exactly match a target, without imposing realistic constraints on the machinery.
> Yes, it would. [...] that's what consciousness is: the ability to process information in a particular manner.
Then this is our fundamental disagreement. You believe consciousness is purely a question of information processing, and you're throwing your lot in with Skinner and the behaviorists.
I believe that you're neglecting "the experience of what it's like to be a human being"[0] (or maybe you yourself are a p-zombie ;) and you don't feel that it's like anything to be you). There are many scientists who agree with you, and think that consciousness is an illusion or a red herring because we haven't been able to define it or figure out how to measure it, but that's different from sidestepping the question entirely by defining consciousness down until it's something we can measure (e.g. information processing). I posted this elsewhere, but I highly recommend reading Chalmers' essay "Facing Up to the Problem of Consciousness"[1] if you want to understand why many people consider this one of the most difficult and fundamental questions for humanity to attempt to answer.
>You believe consciousness is purely a question of information processing, and you're throwing your lot in with Skinner and the behaviorists.
No, that is not at all what is happening. That's not even on the same level of discourse.
>I believe that you're neglecting "the experience of what it's like to be a human being"
That experience is the information processing. They are the same thing, just different words. Like "the thing you turn to open a door" and "doorknob" are the same thing. I'm not neglecting the experience of being human by talking about information processing. What is human is encoded by information that you experience by processing it.
>There are many scientists who agree with you, and think that consciousness is an illusion or a red herring because we haven't been able to define it or figure out how to measure it [...]
No, this is not agreement with me. This is not at all what I'm saying.
In that case, I'm really struggling to understand your position.
> What is human is encoded by information that you experience by processing it.
So you're saying that it's impossible to process information without experiencing it? That the act of processing and the act of experiencing are one and the same? Do you think that computers are conscious? What about a single neuron that integrates and responds to neural signals? What about a person taking Ambien who walks, talks and responds to questions in their sleep (literally while "unconscious")?
>So you're saying that it's impossible to process information without experiencing it? That the act of processing and the act of experiencing are one and the same?
Yes, exactly.
>Do you think that computers are conscious? What about a single neuron that integrates and responds to neural signals?
This is a different question. No, computers aren't conscious. You need to have the 'right kind' of information processing for consciousness, and it's not clear what kind of processing that is.
This is essentially the Sorites Paradox: how many grains of sand are required for a collection to be called a heap? How much information has to be processed? How many neurons are needed? What are the essential features of information processing that must be present before you have consciousness?
These are the interesting questions. So far, we know that there must be continual self-feedback (self-awareness), enough abstract flexibility to recover from arbitrary information errors (identity persistence), a process of modeling counterfactuals and evaluating them (morality), a mechanism for adjusting to new information (learning), a mechanism for combining old information in new ways (creativity), and other kinds of heuristics like emotion, goal-creating, social awareness, and flexible models of communication.
You don't need all of this, of course. You can have it in part or in full, to varying levels of impact. "Consciousness" is not well-defined in this way; it is a spectrum of related information-processing capabilities. So maybe you could consider computers to be conscious. They are "conscious in a very loose approximation."