The three-page paper that shook philosophy: Gettiers in software engineering (jsomers.net)
204 points by FigurativeVoid 16 hours ago | 121 comments





Relevant (deleted, as far as I can tell) tweet:

> When I talk to Philosophers on zoom my screen background is an exact replica of my actual background just so I can trick them into having a justified true belief that is not actually knowledge.

https://old.reddit.com/r/PhilosophyMemes/comments/gggqkv/get...


Hmm. That seems like a better example of the problem than either of the examples at https://en.wikipedia.org/wiki/Gettier_problem .

The cases cited in the article don't seem to raise any interesting issues at all, in fact. The observer who sees the dark cloud and 'knows' there is a fire is simply wrong, because the cloud can serve as evidence of either insects or a fire and he lacks the additional evidence needed to resolve the ambiguity. Likewise, the shimmer in the distance observed by the desert traveler could signify an oasis or a mirage, so more evidence is needed there as well before the knowledge can be called justified.

I wonder if it would make sense to add predictive power as a prerequisite for "justified true belief." That would address those two examples as well as Russell's stopped-clock example. If you think you know something but your knowledge isn't sufficient to make valid predictions, you don't really know it. The Zoom background example would be satisfied by this criterion, as long as intentional deception wasn't in play.


It’s not super clear there, but those are examples of a pre-Gettier type of argument that originally motivated strengthening, and externalizing, the J in JTB knowledge— just like you’re doing!

Gettier’s contribution — the examples with Smith — sharpens it to a point by making the “knowledge” a logical proposition — in one example a conjunction, in one a disjunction — such that we can assert that Smith’s belief in the premise is justified, while allowing the premise to be false in the world.

It’s a fun dilemma: the horns are, you can give up justification as sufficient, or you can give up logical entailment of justification.

But it’s also a bit quaint, these days. To your typical 21st century epistemologist, that’s just not a very terrifying dilemma.

One can even keep buying original recipe JTB, as long as one is willing to bite the bullet that we can flip the “knowledge” bit by changing superficially irrelevant states of the world. And hey, why not?


Well ... obviously any Gettier-style example will not have enough evidence, because someone came to the wrong conclusion. But there is a subtle flaw in your objections to Wikipedia's examples - to have a proper argument you would need to provide a counterexample where there is enough evidence to be certain of a conclusion. And the problem is that that isn't possible - no amount of evidence is enough to come to a certain conclusion.

The issue that Gettier & friends are pointing to is that there are no examples where there is enough evidence. So under the formal definition it isn't possible to have a JTB. If you've seen enough evidence to believe something ... maybe you'd misinterpreted the evidence but still came to the correct conclusion. That scenario can play out at any evidence threshold. All else failing, maybe you're having an episode of insanity and all the information your senses are reporting is a wild hallucination, but some of the things you imagine happening are, nonetheless, happening.


The example is also a joke. Many things shown on screens are CGI or AI-generated, so the belief here is not justified.

I believe traders call this Lambda.

I'm not sure I see the big deal. Justification is on a scale of 0 to 1, and at 1 you are omniscient. We live in a complicated world; no one has time to be God, so you just accept your 0.5 JTB and move on.

Or for the belief part, well, "it's not a lie if you believe it".

And as for the true bit, let's assume that there really is a cow, but before you can call someone over to verify your JTB, an alien abducts the cow and leaves a crop circle. Now all anyone sees is a paper-mache cow so you appear the fool but did have a true JTB - Schroedinger's JTB. Does it really matter unless you can convince others of that? On the flip side, even if the knowledge is wrong, if everyone agrees it is true, does it even matter?

JTBs only exist to highlight bad assumptions, like being on the wrong side of a branch predictor. If you have a 0.9 JTB but get the right answer only 0.1 of the time and don't update your assumptions, then you have a problem. One statue in a field? Not a big deal! *

* Unless it's a murder investigation and you're Sherlock Holmes (a truly powerful branch predictor).


edit: Then there's the whole "what is a cow" thing. Like if you stuffed a cow carcass with a robot and no one could tell the difference, would that still be a cow? Or what if you came across a horrifying cow-horse hybrid, what would you call that? Or if the cow in question had a unique mutation possessed by no other cow - does it still fit the cow archetype? For example, what if the cow couldn't produce milk? Or was created in a lab? Which features are required to inherit cow-ness? This is an ambiguity covered by language, too. For example, "cow" is a pejorative not necessarily referring to a bovine animal.

edit: And also the whole "is knowledge finite or infinite?". Is there ever a point at which we can explain everything, science ends and we can rest on our laurels? What then? Will we spend our time explaining hypotheticals that don't exist? Pure theoretical math? Or can that end too?


A robot in a cow carcass is not a cow, it's a "robot in a cow carcass". Someone might believe it's a cow because they lack crucial information but that's on them, doesn't change the fact.

A cow-horse hybrid is not a cow, it's a cow-horse hybrid.

A cow with a genetic mutation is a cow with a genetic mutation.

A cow created in a lab, perhaps even grown 100% by artificial means in-vitro is of course still a cow since it has the genetic makeup of a cow.

The word cow is the word cow, its meaning can differ based on context.

Things like this is why philosophers enjoy zero respect from me and why I'm an advocate for abolishing philosophy as a subject of study and also as a profession. Anyone can sit around thinking about things all day. If you spend money on studying it at a university you're getting scammed.

Also, knowledge is finite based purely on the assumption that the universe is finite. An observer outside the universe would be able to see all information in the universe, and they would conclude: you can't pack infinite amounts of knowledge into a finite volume.


While I also tend to wave away philosophers, since it always boils down to unclear definitions, I don't think your argument answers the question at all.

From “it has the genetic makeup of a cow”, you’re saying that what makes a cow a cow is the genetic makeup. But then which part of that DNA defines the cow? What can vary, and by how much, before a cow stops being a cow?

The point is that you can give any definition of “cow”, and we can imagine a thing that fits this definition yet you’d probably not consider a cow. It’s a reflection on how language relates to reality. Whether it’s an interesting point or not is left to the reader (I personally don’t think it is)


You've called J and T into question, so let's do B as well. Physicists know that QM and relativity can't both be true, so it's fair to say that they don't believe in these theories, in a naive sense at least. In general, anyone who takes Box's maxim that all models are wrong (but some are useful) to heart doesn't fully believe in any straightforward sense. But clearly we'd say physicists do have knowledge.

Your view is more in line with the philosophy of science, which holds that nothing can ever be justified.

https://www.wikiwand.com/en/articles/Karl_Popper

Read "The problem of induction and demarcation": https://www.wikiwand.com/en/articles/Falsifiability

Basically, to sum it all up: because we aren't "omniscient", nothing can in actuality ever be known.


Does the philosophy of science theorize anything about the end or limits of science and knowledge? I find that topic fascinating.

Bayesian epistemology is indeed one of the developments in the field that avoids Gettier cases.

Biology makes it even more complicated. If you see your mother, you consider her to be an imposter. But if you hear your mother's voice, you consider her to be real.

Ramachandran Capgras Delusion Case

https://www.youtube.com/watch?v=3xczrDAGfT4

> On the flip side, even if the knowledge is wrong, if everyone agrees it is true, does it even matter?

This is a case of consensus reality (an intuition pump I borrowed from somewhere). Consensus reality is also respected in the quantum realm.

https://youtu.be/vSnq5Hs3_wI?t=753

While individual particles remain in quantum superposition, their relative positions create a collective consensus in the entanglement network. This consensus defines the structure of macroscopic objects, making them appear well-defined to observers, including Schrödinger's cat.


An old timer I worked with during my first internship called these kinds of issues "the law of coincidental failures" and I took it to heart.

I try a lot of obvious things when debugging to ascertain the truth. Like, does undoing my entire change fix the bug?


I wonder if there is a law of coincidental successes too. (if you're an old timer, you might call this some sort of Mr. Magoo law, or maybe "it seems to work for me")

Yeah, good times. I just recently had one that was a really strong misdirection; it ended up being two other simultaneous, unrelated things that conspired to make it look convincingly like my code was not doing what it was supposed to. I even wrote tests to see if I had found a corner-case compiler bug or some broken library code. I was halfway through opening an issue on the library when the real culprit became apparent. It was actually a subtle bug in the testing setup, combined with me errantly defining a hardware interface on an ancient protocol as an HREG instead of an IREG, which just so happened to work fine until it created a callback loop inside the library through some kind of stack smashing or wayward pointer. I was really starting to doubt my sanity on this one.

> corner-case compiler bug

They say never to blame the compiler, and indeed it's pretty much never the compiler. But DNS on the other hand... :-)


Unless you wrote the compiler

The joys of modbus PLCs, I take it?

Ah, yes. But a roll-your-own device with C++ on bare metal, so lots more fun.

(we’ll need a few thousand of these, and the off-the-shelf solution is around $1k vs $1.50 for RYO)

By the way, the RISC-V Espressif ESP32-C3 is a really amazing device for < $1. It's actually cheaper to go modbus-tcp over WiFi than to put RS485 on the board with a MAX485 and the associated components. It also does Zigbee and BT, and the Espressif libraries for the radio stack are pretty good.

Color me favorably impressed with this platform.


“I am sitting with a philosopher in the garden; he says again and again 'I know that that’s a tree', pointing to a tree that is near us. Someone else arrives and hears this, and I tell him: 'This fellow isn’t insane. We are only doing philosophy.'”

― Ludwig Wittgenstein


I often wonder if LLMs would have made Wittgenstein cry...

It's remarkable how LLMs have skipped any kind of philosophical grounding for "how do we know that this output is valid?" and just gone straight to "looks good to me". Very postmodernist. Also why LLMs are going to turn us into a low-trust society.

A tool for filling the fields with papier-mache cows.


> A tool for filling the fields with papier-mache cows.

Cargo culting as a service.


I don't know, but I reckon he would have been unimpressed.

After Godel published his landmark incompleteness proof, that a sufficiently powerful logical system can't be both complete and free of internal inconsistencies, I would have expected this to trickle into philosophical arguments of this type.

I see no practical usefulness in all of these examples, except as instances of the rule that you can get correct results from incorrect reasoning.


Philosophy is too far away from pure math for Godel's argument to really matter.

Maybe it shook analytic philosophy, or some subdiscipline thereof, but this really did not register beyond that within philosophy. Analytic philosophy likes to imagine itself to be philosophy proper, which is nonsense. It's just an overconfident, aggressively territorial branch which hogs all the resources, even though the majority of students yearn for the richness and breadth of something more akin to what today goes by the moniker of Post-Kantian Philosophy (formerly Continental or Modern European philosophy).

Which texts would you recommend students read to learn about Post-Kantian philosophy?

Gettier cases tell us something interesting about truth and knowledge. This is that a factual claim should depict the event that was the effective cause of the claim being made. Depiction is a picturing relationship: a correspondence between the words and a possible event (eg a cow in a field). Knowledge is when the depicted event was the effective cause of the belief. Since the paper mache cow was the cause of the belief, not a real cow, our intuitions tell us this is not normal knowledge. Therefore, true statements must have both a causal and depictional relationship with something in the world. Put another way, true statements implicitly describe a part of their own causal history.

Mathematicians already explored exactly what you describe: this is the difference between classical logic and intuitionistic logic:

In classical logic, statements can be true in and of themselves even if there is no proof of them, but in intuitionistic logic statements are true only if there is a proof: the proof is the cause of the statement being true.

In intuitionistic logic, things are not as simple as "either there is a cow in the field, or there is none" because as you said, for the knowledge of "a cow is in the field" to be true, you need a proof of it. It brings lots of nuance, for example "there isn't no cow in the field" is a weaker knowledge than "there is a cow in the field".
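A minimal sketch of that asymmetry in Lean 4 (the theorem names are mine): P → ¬¬P is provable constructively, but the converse has to reach for a classical axiom.

    -- Constructively fine: from P we can refute "not P".
    theorem p_implies_not_not_p (P : Prop) : P → ¬¬P :=
      fun hp hnp => hnp hp

    -- Not provable in pure intuitionistic logic; here we fall back on
    -- Classical.byContradiction, i.e. a classical proof-by-contradiction axiom.
    theorem not_not_p_implies_p (P : Prop) : ¬¬P → P :=
      fun hnnp => Classical.byContradiction (fun hnp => hnnp hnp)

So "there isn't no cow in the field" (¬¬P) really is weaker than "there is a cow in the field" (P) unless you allow yourself classical reasoning.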


Causal Bayesian networks are one way to formalize the causality requirement: https://en.wikipedia.org/wiki/Bayesian_network
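As a rough illustration (all numbers invented, plain Python, no libraries): a tiny two-node network for the cow story, where the sighting can be caused by a real cow or by a decoy, and the posterior shows how much of the "justification" actually rests on the true cause.

    # Toy causal network for the cow-in-the-field story (made-up numbers).
    # Cause node: what's actually in the field. Effect node: seeing a cow shape.
    p_field = {"real_cow": 0.30, "decoy": 0.01, "nothing": 0.69}    # priors
    p_shape = {"real_cow": 0.90, "decoy": 0.95, "nothing": 0.02}    # P(see cow shape | cause)

    # Posterior over causes, given that a cow shape is seen.
    joint = {cause: p_field[cause] * p_shape[cause] for cause in p_field}
    evidence = sum(joint.values())
    posterior = {cause: p / evidence for cause, p in joint.items()}
    print(posterior)  # most of the belief lands on "real_cow", but not all of it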

Dumb counterpoint: if it’s not a true belief, is it a false negative or a false positive? Any third option I can think of starts with “true”…

QED - proof by terminological convention!


Physics has kinda-solved what it means to know something.

- JTB is not enough: for something to be “true” it needs _testability_. In other words, make a prediction from your knowledge-under-test which would be novel information (for example, “we’ll find fresh cow dung in the field”).

- Nothing is really ever considered “true”; there are only theories that describe reality increasingly correctly.

In fact, physics did away with the J: it doesn’t matter whether your belief is justified, as long as it’s tested. You could make up a theory with zero justification (which doesn’t contradict existing knowledge ofc), make predictions, and if they survive testing, that’s still knowledge. The J is just the way that beliefs are formed (inference).


"Testability" to me sounds like a type of "justified" - I can be justified for many reasons, and testability is just one of those. But there are reasons where I might be justified but where testability is impossible.

For example, if I toss a coin and it comes up heads, put the coin in my pocket and then go about my day, and later on say to somebody "I tossed a coin earlier, and it came up heads", that is a JTB, but it's not testable. You might assume I'm lying, but we're not talking about whether you have a JTB in whether I tossed a heads or not, we're talking about if I have one.

There are many areas of human experience where JTB is about as good as we are going to get, and testability is off-limits. If somebody tells me they saw an alien climb out of a UFO last night, I have lots of reasons to not believe them, but if this is a very trustworthy individual who has never lied to me about anything in my decades of experience of knowing them, I might have a JTB that they think this is true, even if it isn't. But none of it is testable.

Physics - the scientific method as a whole - is a superb way to think about and understand huge swathes of the World, but it has mathematically proven limits, and that's fine, but let's not assume that just because something isn't testable it can't be true.


Why did the physicist stop hanging out with philosophers?

Because every time they said, "I've found the absolute truth," the philosophers just replied, "Only in your frame of reference!"


I enjoy this metaphor of the cow and the papier-mâché.

Presumably, there is a farmer who raised the cow, then purchased the papier-mâché, then scrounged for a palette of paints, and meticulously assembled everything in a field -- all for the purpose of entertaining distant onlookers.

That is software engineering. In Gettier's story, we're not the passive observers. We're the tricksters who thought papier-mâché was a good idea.


> true, because it doesn't make sense to "know" a falsehood

That's a problem right there. Maybe that made sense to the Greeks, but it definitely doesn't make any sense in the 21st century. "Knowing" falsehoods is something we broadly acknowledge that we all do.


False propositions are not knowledge, only true propositions are knowledge. Therefore you cannot know something true that is actually false, you can only believe something true that is actually false. Precisely describing how one moves from belief to knowledge is exactly what epistemology is about.

> False propositions are not knowledge, only true propositions are knowledge

From my point of view, "to know" is a subjective feeling, an assessment on the degree of faith we put on a statement. "Knowledge" instead is an abstract concept, a corpus of statements, similar to "science". People "know" false stuff all the time (for some definition of "true" and "false", which may also vary).


Precisely, but I think the feeling of knowing may be defined differently for the person having the feeling and from the viewpoint of others.

A flat-earther may feel they "know" the earth is flat. I feel that I "know" that their feeling isn't "true" knowledge.

This is the simple case where we all (in this forum, or at least I hope so) agree. If we consider controversial beliefs, such as the existence of God, where Covid-19 originated or whether we have free will, people will often still feel they "know" the answer.

In other words, the experience of "knowing" is not only personal, but also interpersonal, and often a source of conflicts. Which may be why people fight over the definition.

In reality, there are very few things (if any) that can be "known" with absolute certainty. Anyone who has studied modern Physics would "know" that our intuition is a very poor guide to fundamental knowledge.

The scientific method may be better in some ways, but even that can be compromised. Also, it's not really useful for people outside the specific scientific field. For most people, scientific findings are only "known" second hand, from seeing the scientists as authorities.

A bigger problem, though, is that a lot of people are misusing the label "scientific" to justify beliefs or propaganda that has only weak (if any) support from the use of hard science.

In the end, I don't think the word "knowledge" has any fundamental correspondence to something essential.

Instead, I see the ability to "know" something as a characteristic of the human brain. It's an ability that causes the brain to lock onto one belief and disregard all others. It appears to be a tendency we all have, which means it probably evolved because it provides some evolutionary advantage.

The types of "knowledge" that we feel we "know", to the extent that we learn them from others, seem to evolve in parallel to this as memes/memeplexes (using Dawkins's original sense of "meme").

Such memes spread in part virally, by pure replication. But if they convey advantages to their hosts they may spread more effectively.

For example, after Galilei/Newton, Physics provided several types of competitive advantage to those who saw it as "knowledge". Some economic, some military (like calculating artillery trajectories). This was especially the case in a politically and religiously fragmented Europe.

The memeplex of "Science" seems to have grown out of that. Not so much because it produces absolute truths, but more because those who adopted a belief in science could reap benefits from it that allowed them to dominate their neighbours.

In other areas, religious/cultural beliefs (also seen as "knowledge" by the believers) seem to have granted similar power to the believers.

And it seems to me that this is starting to become the case again, especially in areas of the world where the government provides a welfare state to all, which prevents scientific knowledge from granting a differential survival/reproductive advantage to those who still base their knowledge on Science.

If so, Western culture may be heading for another Dark Age....


> False propositions are not knowledge, only true propositions are knowledge.

This is something that a lot of Greeks would have had issues with, most probably Heraclitus, and Protagoras for sure. Restricting ourselves to Aristotelian logic back in the day has been extremely limiting, so much so that a lot of modern philosophers cannot even comprehend what it is to look outside that logic.


No, I think many people use a definition of "know" that doesn't include "knowing" falsehoods. Possibly you and they have fundamentally different beliefs about the nature of reality, or possibly you are just using different definitions for the same word.

I agree that it is not often helpful to avoid the issue by redefining a term in a way not originally intended (though it may be justified if the original definition is predicated on an unjustifiable (and sometimes tacit) assumption.)

Furthermore, OP’s choice of putting “know” in quotes seems to suggest that author is not using the word as conventionally understood (though, of course, orthography is not an infallible guide to intent.)

IMHO, Gettier cases are useful only in that they raise the issue of what constitutes an acceptable justification for a belief to become knowledge.

Gettier cases are specifically constructed to be about true beliefs, and so do not challenge the idea that facts are true. Instead, one option to resolve the paradox is to drop the justification requirement altogether, but that opens the question of what, if anything, we can know we know. At this point, I feel that I am just following Hume’s footsteps…


And, do they know if their definition is the right one? And how do they know it? And, is it actually true?

This is the true analytic answer! More fundamentally, “know” is a move in whatever subtype of the English language game you’re playing at the moment, and any discussions we have about what it “really” or “truly” means should be based on those instrumental concerns.

E.g. a neurologist would likely be happy to speak of a brain knowing false information, but a psychologist would insist that that’s not the right word. And that’s not even approaching how this maps to close-but-not-quite-exact translations of the word in other languages…


> That's a problem right there. Maybe that made sense to the Greeks, but it definitely doesn't make any sense in the 21st century. "Knowing" falsehoods is something we broadly acknowledge that we all do.

I think the philosophical claim is that, when we think we know something, and the thing we think we know turns out to be false, what has happened isn't that we knew something false, but rather that we didn't actually know the thing in the first place. That is, not our knowledge, but our belief that we had knowledge, was mistaken.

(Of course, one can say that we did after all know it in any conventional sense of the word, and that such a distinction is at the very best hair splitting. But philosophy is willing to split hairs however finely reason can split them ….)


The problem with the hair splitting is that it requires differentiating between different brain states over time where the only difference is the content.

On Jan 1 2024 I "know" X. Time passes. On Jan 1 2028, I "know" !X. In both cases, there is

(a) something it is like to "know" either X or !X

(b) discernible brain states that correspond to "knowing" either X or !X and that are distinct from "knowing" neither

Thus, even if you don't want to call "knowing X" actually "knowing", it is in just about every sense indistinguishable from "knowing !X".

Also, a belief that we had the knowledge that relates to X is indistinguishable from a belief that we had the knowledge that relates to !X. In both cases, we possess knowledge which may be true or false. The knowledge we have at different times alters; at all times we have a belief that we have the knowledge that justifies X or !X, and we are correct in that belief - it is only the knowledge itself that is false.


Maybe the people who use "know" in the way you don't are talking about something other than brain states or qualia. There are lots of propositions like this; if I say, "I fathered Alston", that may be true or false for reasons that are independent of my brain state. Similarly with "I will get home tomorrow before sunset". It may be true or false; I can't actually tell. The same is true of the proposition "I know there are coins in the pocket of the fellow who will get the job", if by "know" we mean something other than a brain state, something we can't directly observe.

You evidently want to use the word "know" exclusively to describe a brain state, but many people use it to mean a different thing. Those people are the ones who are having this debate. It's true that you can render this debate, like any debate, into nonsense by redefining the terms they are using, but that in itself doesn't mean that it's inherently nonsense.

Maybe you're making the ontological claim that your beliefs about X don't actually become definitely true or false until you have a way to tell the difference? A sort of solipsistic or idealistic worldview? But you seem to reject that claim in your last sentence, saying, "it is only the knowledge itself that is false."


"I know I fathered Alston" .. the reasons it may be true or false are indeed independent of brain state. But "knowing" is not about whether it is true or false, otherwise this whole question becomes tautological.

If someone is just going to say "It is not possible to know false things", then sure, by that definition of "know" any brain state that involves a justified belief in a thing that is false is not "knowing".

But I consider that a more or less useless definition of "knowing" in context of both Gettier and TFA.


How about "beliefs that seem to be true are not necessarily true, and the causes of those beliefs may not be valid, especially if examined more closely"?

Or, try renaming the variables and see if it still bothers you identically.


Could you elaborate what you mean by that?

We all carry around multiple falsehoods in our heads that we are convinced are true for a variety of reasons.

To say that this is not "knowing" is (as another commenter noted) hair-splitting of the worst kind. In every sense it is a justified belief that happens to be false (we just do not know that yet).


> In every sense it is a justified belief that happens to be false

Not to mention what does it even mean for something to be false. For the hypothetical savage the knowledge that the moon is a piece of cheese just beyond reach is as true as it is for me the knowledge that it's a celestial body 300k km away. Both statements are false for the engineer that needs to land a probe there (the distance varies and 300k km is definitely wrong).


What exactly does it mean to know something then? As distinct from believing it. Just the justification, and then, I guess it doesn't have to be a very good justification if it can be wrong?

I think, like many things, “know” and “believe” are just shorthand for convenient communication that makes binary something that is really a continuum of probability. That continuum might run from loose theory to fundamental truth about the universe in our minds. Justifications and evidence move things along the continuum, such that we might assign a probability that a thing is true; things can approach 100% probability but never get there, but we as mortals need to operate in the world as if we know things, so anything close to 100% we say we “know”. Even though history tells us that even some things we believe to be fundamental truths can be discovered to be wrong.

I think I would say that knowing means that your belief can resist challenges (to some degree) and that it is capable of driving behavior that changes others' beliefs.

The strength of the justification is, I would suggest, largely subjective.


> What exactly does it mean to know something then?

This is one of the best questions ever, not just for philosophers, but for all us regular plebes to ponder often. The number of things I know is very very small, and the number of things I believe dramatically outnumbers the things I know. I believe, but don’t know, that this is true for everyone. ;) It seems pretty apparent, however, that we can’t know everything we believe, or nothing would ever get done. We can’t all separately experience all things known first-hand, so we rely on stories and the beliefs they invoke in order to survive and progress as a species.


> "Knowing" falsehoods is something we broadly acknowledge that we all do.

Only in abstract discussions like this one. And in some concrete discussions on certain topics, not "knowing" seems to be essentially impossible for most non-silent participants.


I always come back to this saying:

“Debugging is the art of figuring out which of your assumptions are wrong.”

(Attribution unknown)


I always thought of what I learned in some philosophy class, that there are only two ways to generate a contradiction.

One way is to reason from a false premise, or as I would put it, something we think is true is not true.

The other way is to mix logical levels (“this sentence is false”).

I don’t think I ever encountered a bug from mixing logical levels, but the false premise was a common culprit.


I'm not sure if it qualifies as mixing logical levels but I once tracked down a printer bug where the PDF failed to print.

The culprit was an embedded TrueType font that had what (I think) was a strange but valid glyph name with a double forward slash instead of the typical single (IIRC whatever generated the PDF just named the glyphs after characters so /a, /b and then naturally // for slash). Either way it worked fine in most viewers and printers.

The larger scale production printer, on the other hand, like many, converted to PostScript in the processor as one of its steps. In PostScript, // introduces an immediately evaluated name, so when it came through unchanged, parsing it crashed the printer.

So we have a font, in a PDF, which got turned into Postscript, by software, on a certain machine which presumably advertised printing PDF but does it by converting to PS behind the scenes.

A lot of layers there and different people working on their own piece of the puzzle should have been 'encapsulated' from the others but it leaked.


some possible examples:

security with cryptography is mostly about logical level problems, where each key or operation forms a layer or box. treating these as discrete states or things is also an abstraction over a sequential folding and mixing process.

debugging a service over a network has the whole stack as logical layers.

most product management is solving technical problems at a higher level of abstraction.

a sequence diagram can be a multi-layered abstraction rotated 90 degrees, etc.


As long as "your assumptions" includes "I know what I am doing", then OK.

But most people tend not to include that in the "your assumptions" list, and frequently it is the source of the bug.


What if you never believed that in the first place?

Then, to be consistent, you should not trust either your deductions or even your choice of axioms.

In other words, it looks like a form of solipsism.


You can not know what you are doing and still trust in logic.

But what a world it would be if you could flip a coin on any choice and still survive! If the world didn't follow any self-consistent logic, like in a Roger Zelazny novel, that would be fantastic. Not sure that qualifies as solipsism, but still. Would society even be possible? Or even life?

Here, as long as you follow cultural norms, every choice has pretty good outcomes.


I trust things to varying degrees until I test them. Then I trust them more or less.

Then you're good to ignore that as a possible source of the problem.

It's likely that I'm wrong, I need to look deeper into it.

But isn't the paper-mache cow case solved by simply adding that the evidence for the justification also needs to be true?

The definition already requires the belief to be true, that's a whole other rabbit hole, but assuming that's valid, it's rather obvious that if your justification is based on false evidence then it is not justified, if it's true by dumb luck of course it doesn't count as knowing it.

EDIT: Okay I see how it gets complicated... The evidence in this case is "I see something that looks like a cow", which I guess is not false evidence? Should your interpretation of the evidence be correct? Should we include into the definition that the justification cannot be based on false assumptions (existing false beliefs)? I can see how this would lead to more papers.

EDIT: I have read the paper and it didn't really change my view of the problem. I think Gettier is just using a sense of "justified" that is somewhat colloquial and ill defined. To me a proposition is not justified if it is derived from false propositions. This kind of solves the whole issue, doesn't it?

To Gettier it is more fuzzy, something like having reasonably sufficient evidence, even if it is false in the end. More like "we wouldn't blame him for being wrong about that, from his point of view it was reasonable to believe that".

I understand that making claims of the absolute truthfulness of things makes the definition rather useless, we always operate on incomplete evidence, then we can never know that we know anything (ah deja vu). But Gettier is not disputing the part of the definition that claims that the belief needs to be true to be known.

EDIT: Maybe the only useful definition is that know = believe, but in speech you tend to use "he knows P" to hint that you also believe P. No matter the justification or truthfulness.

EDIT: I guess that's the whole point that Gettier was trying to make: that all accepted definitions at the time were ill-defined, incomplete and rather meaningless, and that we should look at it closer. It's all quite a basic discussion on semantics. The paper is more flamebait (I did bite) than a breakthrough, but it is a valid point.


Indeed, there needs to be some sort of connection between the truth and the justification, not just "true && justified".

The problem is that when you're working at such a low level as trying to define what it means to know something, even simple inferences become hellishly complicated. It's like trying to bootstrap a web app in assembly.


The non-bovine examples are in a way more complex (but also more common), since they involve multiple possible causes for an event. In software engineering, bugs and outages and so on are not just caused by lack of testing, but by lack of failsafes, lack of fallbacks, coding on a Monday morning, cosmic background radiation, too many/few meetings, etc. etc. And it's hard to pinpoint "the cause" (but perhaps we shed some light on a graph of causes, some of which may be "blocked" by other causes standing in the way).

Are there any good examples of gettiers in software engineering that don't rely on understanding causality, where we're just talking about "what's there" not explaining "how it got there"?


I would even say the two real-life examples given in the blog are not Gettier cases at all. Gettier is about the cause of the "knowledge," not about knowledge of the cause.

For the autofocus example, if the statement in question was "my patch broke the autofocus," it would not be Gettier because it is not true (the unrelated pushed changes did); if the statement in question was "my PR broke the autofocus," it would not be Gettier because it is JTB, and the justification (it was working before the PR, but not after) is correct, i.e., the cause of the belief, the perception, and deduction, are correct; Same if the statement in question was "the autofocus is broken."

It would be Gettier if the person reporting the bug was using an old (intact) version of the app but was using Firefox with a website open in another window on another screen, which was sending alerts stealing the focus.

The most common example of true Gettier cases in software dev is probably the following: A user reports a bug but is using an old version, and while the new version should have the bug fixed, it's still there.

The statement is "the current version has the bug." The reporter has Justified Belief because they see the bug and recently updated, but the reporter cannot know, as they are not on the newest version.


The link to the paper seems to be down, here's an alternative one.

https://fitelson.org/proseminar/gettier.pdf

It's really worth a read, it's remarkably short and is written in very plain language. It takes less than 10 mins to go through it.


The link to the actual paper [1] seems to be overloaded.

[1] http://www-bcf.usc.edu/~kleinsch/Gettier.pdf


Alternative host [no paywall]: https://fitelson.org/proseminar/gettier.pdf

This seems to essentially be saying that coincidences will happen and if you’re fooled by them, sometimes it’s not your fault - they are “justified.” But they may be caused by enemy action: who put that decoy cow there? I guess they even made it move a little?

How careful do you have to be to never be fooled? For most people, a non-zero error rate is acceptable. Their level of caution will be adjusted based on their previous error rate. (Seen in this sense, perfect knowledge in a philosophical sense is a quest for a zero error rate.)

In discussions of how to detect causality, one example is flipping a light switch to see if it makes the light go on and off. How many flips do you need in order to be sure it’s not coincidence?
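A back-of-the-envelope version of the switch question (the 50% per-flip coincidence rate is an arbitrary assumption, i.e. the light was equally likely to have changed state on its own each time):

    import math

    # How many matching flips before "it was all coincidence" drops below err?
    def flips_needed(err, per_flip_coincidence=0.5):
        return math.ceil(math.log(err) / math.log(per_flip_coincidence))

    for err in (0.05, 0.01, 1e-6):
        print(err, flips_needed(err))  # -> 5, 7, and 20 flips respectively

The interesting part is that "sure" never arrives; you only pick an error rate you can live with, which is the point above about acceptable non-zero error rates.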


> How careful do you have to be to never be fooled?

This is where Contextualism comes into play. Briefly, your epistemic demands are determined by your circumstances.

https://plato.stanford.edu/entries/contextualism-epistemolog...


Would a merge instead of a rebase have made it easier to find the bug? (Serious question)

Yes, most likely. Rebase hides the fact that the two changes happened separately; a merge would make it much easier to see the different avenues that may lead to the bug.

We purposefully try not to do rebases in my team for this reason.


I'm not sure I follow, do you have an example where merging makes things more obvious?

Seems somehow related to "parallel" construction of evidence.

It absolutely is. A fake cow or whatever, provides the evidence for justified belief.

The "programmer's model" is their mental model of what's happening. You're senior and useful when you not only understand the system, but can diagnose based on a symptom/bug what aspect of the system is implicated.

But you're staff and above when you can understand when your programming model is broken, and how to experiment to find out what it really is. That almost always goes beyond the specified and tested behaviors (which might be incidentally correct) to how the system should behave in untested and unanticipated situations.

Not surprisingly, problems here typically stem from gaps in the programming model between developers or between departments, who have their own perspective on the elephant, and their incidence in production is an inverse function of how well people work together.


You are defining valid steps in the understanding of software, but attaching them to job titles is just going to lead to very deceptive perspectives. If your labeling were accurate, every organization I've ever worked at would have at least triple the number of staff engineers it does.

Hmm, are there better cases that disprove JTB? Couldn't one argue that the reliance on a view that can't tell papermache from a cow is simply not a justified belief?

Is the crux of the argument that justification is an arbitrary line and ultimately insufficient?


Yes, the paper itself is much more unambiguous (and very short): https://courses.physics.illinois.edu/phys419/sp2019/Gettier....

These are correct but contrived and unrealistic, so later examples are more plausible (e.g. being misled by a mislabelled television program from a station with a strong track record of accuracy).

The point is not disproving justified true belief so much as showing the inadequacy of any one formal definition: at some point we have to elevate evidence to assumption and there's not a one-size-fits-all way to do that correctly. And, similarly to the software engineering problems, a common theme is the ways you can get bitten by looking at simple and seemingly true "slices" of a problem which don't see a complex whole.

It is worth noting that Gettier himself was cynical and dismissive of this paper, claiming he only wrote it to get tenure, and he never wrote anything else on the topic. I suspect he didn't find this stuff very interesting, though it was fashionable.


I like the example of seeing a clock as you walk past. It says it's 2:30. You believe that the time is 2:30. That seems like a perfectly reasonable level of justification -- you looked at a clock and read the time. If unbeknownst to you, that clock is broken and stuck at 2:30, but you also just happened to walk by and read it at 2:30, then do you "know" that it's 2:30?

I think a case can't so much "disprove" JTB, so much as illustrate that adopting a definition of knowledge is more complex than you might naively believe.


I was thinking that one solution might be to specify that the "justification" also has to be a justified true belief. In this case, the justification that you see a cow isn't true, so it isn't a JTB.

Of course that devolves rapidly into trying to find the "base case" of knowledge that is inherent.


This and many other suggestions have been explored, and usually found wanting (see e.g. https://plato.stanford.edu/entries/knowledge-analysis/#NoFal...).

I believe that schrodinger's cat also applies to software bugs. Every time I go looking, I find bugs that I don't believe existed until I observed them.

I have a similar belief, but only when it comes to bugs that make me look foolish.

The more likely a bug is to make me look dumb, the more likely it is to appear only once I ask for help.


The bugs that always disappear when I try to demonstrate them to others - my favorite. Reminds me of the "Tom Knight and the Lisp Machine" koan. This is largely true, but I remember a failing piece of authentication hardware that I didn't understand but was convinced was failing. Every time I called someone over, it would work, so I couldn't get it replaced. Eventually the failure rate got so high that everyone agreed the damn thing had failed, "dead as a brick". But until then I was SoL. What are the chances that something 99% of the way to "dead as a brick" would always work around a superior?

From the examples he mentions, aren't these just "race conditions"?

Oh this is such a better and more useful idea than some other common ones like “yak shaving” or DRY

Love it


I think those other two are also very useful. I've actually had a lot of traction with introducing "yak shaving" in everyday life situations to non-programmers -- it applies to all kinds of things.

EDIT: Deleted paragraph on DRY that wasn't quite right.


This is very common in finance. Knowing when finance research that made right predictions with good justifications falls into the "Gettier category" or not is extremely hard.

I usually just call these red herrings: https://en.wikipedia.org/wiki/Red_herring

I just called it a red herring or a false signal and everyone understood what I meant.

Gettier cases are fun, although the infinite regress argument is a much clearer way to show that JTB is a false epistemology.

Well this is a roundabout way of justified thinking about a belief that just happens to align with some actual facts..

On a more serious note: populist politicians seem to like making Gettier claims; they cost a lot of time to refute and are free publicity. Aka the worst kind of fake news.

A rather ambitious claim considering the context!

The impossibility of solving the Gettier problem meshes nicely with the recent trend toward Bayesianism and Pragmatism. Instead of holding out for justified true beliefs and "bang-bang" labeling them either True or False, give them degrees of belief, which are most useful for prediction and control.

I don't understand the Gettier problem. Take the cow example: you do not have a justified belief that there is a cow there; all you can justify is that there is the likeness of a cow there.

To be able to claim there is a cow there requires additional evidence.


Meh, these are just examples of the inability to correctly root cause issues. There is a good lesson in here about the real cause being lack of testing (the teammate’s DOM change should have never merged) and lack of monitoring (upstream mail provider failure should have been setting off alerts a long time ago).

The changes only had adjacency to the causes and that’s super common on any system that has a few core pieces of functionality.

I think the core lesson here is that if you can’t fully explain the root cause, you haven’t found the real reason, even if it seems related.


Yeah why did he rebase unreleased code?

And the “right” RC only has to be right enough to solve the issue.


Seems like the terminology of calling it a ‘true’ belief led to some confusion. Of course there is a huge difference between evidence and proof. Correlation is not causation, Godel’s incompleteness theorem, all abstractions are leaky, etc.

Desperation to ‘know’ something for certain can be misleading when coincidence is a lot more common than proof.

Worse yet is extending the feeling of ‘justified’ to somehow ‘lessen’ any wrongness, perhaps instead of a more informative takeaway.


Question I like to ask my colleagues. Suppose you have a program that passes all the tests. Suppose also that in that program there is a piece of code that performs an operation incorrectly. The result of that operation is used in another part of the code that also performs an operation incorrectly, but in such a way that the tested outcome is correct.

Does the code have 0 defects, 1 defect, or 2 defects?
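A minimal sketch of the setup (invented arithmetic, purely for illustration): two operations that are each wrong relative to their intent, with errors that cancel so the only test still passes.

    # Both functions are "wrong" against their intent, yet the test passes.
    def scale(x):
        return x * 3           # intended: x * 2

    def total(x):
        return scale(x) - x    # intended: just scale(x)

    assert total(5) == 10      # 10 is what the intended code would return; it passes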


I can understand 0 and 2, but what's 1? The "I don't know how to count" answer?

You can also download the paper from [1] since the link on the article seems unavailable.

[1] https://fitelson.org/proseminar/gettier.pdf


I wasn’t aware there was a term for this or that this was not common knowledge - for me I refer to them as “if I fix this, it will break EVERYTHING” cases that come up frequently in my particular line of work, and my peers generally tend to understand as well. Cause/effect in complex systems is of course itself complex, which is why the first thing I typically do in any environment is set up metrics and monitoring. If you have no idea what is going on at a granular level, you’ll quickly jump to bad conclusions and waste a lot of time aka $.

I’ve come across (possibly written) code that upon close examination seems to only work accidentally — that there are real failures which are somehow hidden by behavior of other systems.

The classic and oft heard “How did this ever work?”


Many years ago I was grading my students’ C programs when I found a program that worked without global variables or passing parameters. Instead, every function had the same variables declared in the same order.

In at least a few cases I can think of, the answer was almost definitely "it actually never did work, we just didn't notice how it was broken in this case".

I think this stuff is really funny when I find it, and I have a whole list of the funniest bugs like this I have found. Particularly when I get into dealing with proxies and response/error handling between backend systems and frontend clients - sometimes the middle layer has been silently handling errors forever, in a way no one understood, or the client code has adapted to them in a way where fixing it will break things badly - big systems naturally evolve in this way and can take a long time to ever come to a head. When it does, that’s when I try to provide consulting, lol.

This is horrifying, and needs a trigger warning lol. It gave me a sense of panic to read it. It’s always bad when you get so lost in the codebase that it’s just a dark forest of hidden horrors.

When this kind of thing tries to surface, it’s a warning that you need to 10x your understanding of the problem space you are adjacent to.


The surest way to get yourself into a mess like this is to assume that a sufficiently complex codebase can be deeply understood in the first place.

By all means you can gain a lot by making things easier to understand, but only in service of shortcuts while developing or debugging. But this kind of understanding is not the foundation your application can safely stand on. You need detailed visibility into what the system is genuinely doing, and our mushy brains do a poor job of emulating any codebase, no matter how elegant.


I guess I’ve worked in a lot of ancient legacy systems that develop over multiple decades - there’s always haunted forests and swaths of arcane or forgotten knowledge. One time I inherited a kubernetes cluster in an account no one knew how to access and when finally hacking into it discovered troves of crypto mining malware shit. It had been serving prod traffic quietly untouched for years. This kind of thing is crazy common, I find disentangling these types of projects to be fun, personally, depending on how much agency I have. But I’m not really a software developer.



