I like the pragmatic nature of pragmatism (pun intended!), but this is the best critique against it. By its nature it is descriptive and not normative, so it could be argued that it is, in fact, not a philosophy at all, but instead "proto-science", or perhaps science for the common man.
I don’t get that feeling from reading James himself. He offers a VERY good philosophical and logical basis for the golden rule, and one that clearly points toward an ever-expanding “inclusion” of other beings to be taken care of/treated with respect.
Far more sound and defensible than any other moral system I’ve encountered.
TLDR of that basis: minds depend first and foremost on categorization and distinction. They must have a built-in way to discriminate between types of things, and they must have a preference for treating like things alike. As a conscious person develops, they understand other people to be like themselves and therefore ought to have a preference for treating others like themselves.
That’s not to say this imperative can’t be overridden by other concerns, but those other concerns all seem quite obviously more superficial than this extremely fundamental one of “how do I even discern this thing from that thing.”
Pair this with some of the work on “what is stuff” that the Buddhists made a ton of progress on, and you have a logically sound moral system that should compel you to treat everything (and certainly all living things) with respect (using “respect” as a catch-all for “the range of behaviors you’d expect from following the golden rule”).
>they must have a preference for treating like things alike
Suppose I recognize a type of thing, "dollars". I spend some of my dollars to buy food. Given this "preference for treating like things alike", does that mean I now have to spend all my dollars to buy food?
>As a conscious person develops, they understand other people to be like themselves and therefore ought to have a preference for treating others like themselves.
I imagine an egoist/solipsist could respond by saying "well, I put myself in a special category, a category with just one object in it".
- - -
In my view, the is/ought boundary is fairly inescapable here.
Imagine a hypothetical intelligent species that's predatory, 100% carnivorous, and solitary. Like a coyote, but with the reasoning and philosophical ability of a human. It lacks mirror neurons, and it kills daily or weekly for its very survival.
Can you think of an argument that would actually work to persuade a coyote philosopher that it should adopt a vegan diet? I can't.
IMO, human compassion is downstream of us being a social species. Once you have compassion for at least one fellow human, you can at least argue that restricting that compassion to just a subset of humans, or to just a single species, is rather arbitrary.
But bootstrapping compassion from an egoist perspective, and convincing a sentient coyote philosopher to go vegan, seems a bit harder. I imagine that, from the coyote philosopher's perspective, "suffering is bad when it happens to you, and you're not special, therefore suffering is also bad when it happens to other beings" would come across as a rather trippy and counter-intuitive argument.
- - -
An interesting question on the boundary between is and ought: "Are shrimp capable of suffering?" (See HN discussion: https://news.ycombinator.com/item?id=42172705 ) People generally treat it as a factual question that can be resolved through ordinary scientific discourse, but it's also a key values question -- if shrimp can suffer, that ought to change our values re: the importance of humane treatment. And there's a chance we'll never know the 'fact of the matter' regarding whether shrimp can suffer. I'm not sure an experiment can, in principle, provide a definitive answer.
The same line of argument goes for advanced AI models. If an AI passes the Turing test, can one argue that it's morally irrelevant merely because it runs on a silicon substrate? Seems dubious. (If I gradually replace the neurons in your brain with transistors, do you gradually become morally irrelevant?)
So if we're going to regard advanced AI as non-sentient, what's the key differentiator supposed to be? Is there actually any way to definitively answer this question, even in principle?