Developer Philosophy (qntm.org)

To me, software development is many things, but the inner loop is mainly characterized by two alternating modes: Trying things and Learning things.

The biggest hidden assumption I see is that we expect other people to be trying and learning things in the same way we do.

Experienced devs will write code to improve their own understanding, they will create tests to improve their own understanding, they will do research, they will do a lot of learning and all of that rarely gets talked about at standup or in a pull request. This work never shows up in a planning or budget meeting.

I see junior devs getting stuck hard on delivering production ready code - which is the WORK PRODUCT that we want; but that is not what the actual work of software development looks like. A lot of junior devs are hesitant to do throw-away work that does not get committed to source control - they feel like someone out there knows the answer and they just have to find them. And for a long time, that works - they can find someone to provide the answer they need, they can find a senior dev who glances at the code and fixes it effortlessly.

This whole thing is exacerbated by managers with no or minimal dev experience, who also confuse work product and tracked work items with the actual work of producing software.

(And if you're wondering who 'junior devs' is, it's me. I'm junior devs.)


Something which has been lost is RTFM. Any tool, library or language you want to use, you'd better get comfortable with the idea that you'll have to read documentation (or RFCs). Pasting code you don't really understand from anywhere is not how you'll master something; that's how you'll brick a VM trying to extend some partition. Multiple times.

But telling people to RTFM instead of giving the answer is rude now. Also, tutorials should take more time to show viewers/readers how to find the information they distill.

Simple example with our friend k8s: most tutorials will just give you some generic Deployment yaml, never explaining (or only at surface level) those labels and selectors, especially why you want your spec.selector to match spec.template.labels. Tutorials should link first to the Deployment documentation https://kubernetes.io/docs/reference/kubernetes-api/workload... where you'll find a link to the selector specification https://kubernetes.io/docs/reference/kubernetes-api/common-d... explaining how it matches labels. On the Deployment page you also see that your spec.template is in fact a PodTemplate, which is why you want the deployment's selector to match those Pods' labels. 90% of examples you'll find use the same names for everything because they do simple things, so when you try something a little more complex they don't help you learn which names can be changed and what changing them entails. Traefik gets a gold star for making part of their annotation names meaningful.
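
To make the relationship concrete, here's a minimal Deployment sketch (the app: web labels and the nginx image are placeholders, not from any particular tutorial):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web        # must match the pod template labels below
      template:
        metadata:
          labels:
            app: web      # stamped onto every Pod this Deployment creates
        spec:
          containers:
            - name: web
              image: nginx:1.25   # placeholder image

Rename app: web in one place but not the other and the apiserver rejects the manifest with a "selector does not match template labels" error, which is exactly the detail most tutorials skip.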


> But telling people to RTFM instead of giving the answer is rude now.

I wish the standard answer was linking to relevant documentation. Not quite "RTFM", since it can be hard to find the right part of the manual. But the reason humanity has gotten where it is is scalable knowledge transmission: say or write something once, and it can be read many times. It's embarrassing to regress from that.


I believe your description of software development is highly aligned with the ideas of Peter Naur in "Programming as Theory Building".

https://pages.cs.wisc.edu/~remzi/Naur.pdf


I love when this paper pops up, because now I get to recommend a relevant episode of a really great, really nerdy podcast:

https://futureofcoding.org/episodes/061.html


Thanks for the pointer, lots of good-looking topics in the backlog!

> they can find someone to provide the answer they need, they can find a senior dev who glances at the code and fixes it effortlessly.

When something like this falls inside my area of expertise, I have often taken pains to point out that the reason I can fix it so quickly is usually that I've made the same mistake enough times before and remember how to solve it.


"I apologise for writing such a long letter, but I didn't have time to write a short one."The quote, " is generally credited to Blaise Pascal, a French mathematician and philosopher. In his work "Lettres Provinciales" (Provincial Letters), published in 1657, Pascal wrote in French: "Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte," which translates to, "I have made this letter longer than usual because I lack the time to make it shorter."

First time hearing about the 90/50 rule, and it's something I've seen play out so many times. It's easy to get caught up in just getting the code to work on a limited case, but true quality comes from spending extra time on testing and handling edge cases. It's often those overlooked details that take up more time than we expect. This is particularly crucial for junior developers to grasp: your task is not done when you make your first push to git.

I also agree wholeheartedly with automating best practices. Relying on manual reviews for everything is just not scalable. Setting up automated tests to enforce format, lints and testable code has the side effect of creating a code base with clear expectations and minimal side effects.


The version of this quote I've heard the most is "The last 10% of a project takes 90% of the time." It's a bit different, but the idea is the same: the real work of implementing a feature comes in handling unexpected behavior and edge cases. It can look finished from the outside, but imagining and preparing for every conceivable outcome takes much of the time.

I think the solution depends on the type of problem you have. If your problem is reliability then increased testing and handling edge cases (or avoiding code that even hits those edge cases) will help. But if the problem is uncertain requirements, then more testing will just add to the amount of code you're going to throw away and refactoring will only become harder.

The problem is that oftentimes in software engineering both problems happen in tandem - you're improving reliability while also trying to change how your system works as new feature requests come in. The solutions are almost diametrically opposed.


More testing can help show the uncertain requirements in more concrete ways.

What should happen in these cases? Now that we have this list in one place, does it seem unusual or inconsistent?

Bonus points if you can have more general tests (like property based ones) that let you test broader statements.

I always fall back to this example, but I built a testing lib for a stack that had a UI framework in it. It helped because we could write a test like:

"For any UI, if the user moves right and the focus changes then when the user moves left the focus goes back to the original item"

And then we found a bug in the spec with:

For any series of API calls that build a UI, and any series of user interactions, one of the following is always true

* There are no items in the UI

* There is exactly one item with focus

Being able to specify things at this level resulted in us being able to define the major behaviours succinctly and that made it easier to have them be consistent.
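
Their lib was internal, but to give a flavor of the same idea in ordinary Python with Hypothesis (FocusList is a toy model invented for this sketch, not their framework):

    # Property: after any sequence of operations, either there are no
    # items, or exactly one item has focus. Hypothesis generates random
    # operation sequences and shrinks any failure to a minimal repro.
    from hypothesis import given, strategies as st

    class FocusList:
        def __init__(self):
            self.items = []
            self.focus = None          # index of the focused item, or None

        def add(self, item):
            self.items.append(item)
            if self.focus is None:     # first item grabs focus
                self.focus = 0

        def move(self, delta):
            if self.focus is not None: # clamp focus to valid indices
                self.focus = max(0, min(len(self.items) - 1, self.focus + delta))

    ops = st.lists(st.one_of(
        st.tuples(st.just("add"), st.text()),
        st.tuples(st.just("move"), st.sampled_from([-1, 1])),
    ))

    @given(ops)
    def test_focus_invariant(operations):
        ui = FocusList()
        for op, arg in operations:
            if op == "add":
                ui.add(arg)
            else:
                ui.move(arg)
        assert (not ui.items and ui.focus is None) or (
            ui.items and ui.focus is not None and 0 <= ui.focus < len(ui.items))

The payoff is exactly what the parent describes: you state the major behaviour once, succinctly, and the tool hunts for sequences that violate it.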


I see the parent commenter's point as being more of the “can’t see the forest for the trees” type of uncertain requirements.

To go into more detail, it is the type of uncertainty about whether you are actually solving the right problem, or whether it really is a problem at all. While what you suggest does help ferret this out, it can also prolong the process of getting the code in front of a real end user to see if they use or adopt it the way you are expecting. Getting it in front of users sooner allows you to quickly throw away ALL the code and get to solving the right problem.

Another way to put it, the more the problem space is unknown, the faster you should get the code in front of someone and delay the edge-case discoveries.


True quality comes from avoiding edge case traps.

Some approaches/feature types create edge case explosions.

For example, I'm often hell-bent on avoiding bidirectional syncing or writing mini parsers. If there is another, simpler way, it's always better.


> It's easy to get caught up in just getting the code work on a limited case, but true quality comes from spending extra time on testing and handling edge cases

The frustrating thing is how this feels so incompatible with the way most software companies work in two week sprints and judge productivity by tickets closed / story points completed

You can't take your time to do it well


I think any attempt to distil hard earned experience and domain awareness will eventually devolve into misplaced generalisms.

This isn't to say that the article isn't good. It's well written and the teachings are valuable.

This comment is for the inexperienced dev who arrives at these posts looking for ideological prescriptions: don't.

Give yourself time. Let yourself fail and learn from your mistakes. Keep reading the masters' works - Clean Code, SICP, Working Effectively with Legacy Code, Software Architecture: The Hard Parts, The Mythical Man-Month, etc. - but don't let anyone prescribe to you how to do your job.

Developing is ultimately managing an unmanageable and ever evolving complexity and making it work. Developing is art and experience. Developing requires great peace of mind.


The author has a few words about Clean Code https://qntm.org/clean

A better recommendation is "A Philosophy of Software Design" by Ousterhout.


> Let yourself fail and learn from your mistakes.

In my experience, trying and failing is the very best teaching/learning method.

If you're observant and lucky during your career, you'll gain the skill of learning from other people's failures as well as your own.

(And from a management and mentoring perspective, it's important to assign tasks/projects to junior and mid level devs that have real risks of failure while shielding those devs from blame or blowback if/when they fail. All the very best devs I've worked with in my 30+ years in this game have a deep oral history of war stories they can dig into when explaining why a particular approach might not be as good as it seems on the surface.)


generalisms are fine as long as you don't try to turn them into hard and fast rules.

generalisms contain knowledge that generally applies, but i think it's well understood that there will always be times where it doesn't apply.


the last three entries are great and need to be drilled into folks' heads. too much of CS training (and leet-code driven interviewing) encourages people to get clever. but reading somebody else's clever code sucks.

write code that's meant to be easily read and understood. that's not just about algorithms and code comments -- name variables/functions descriptively, keep formatting consistent, don't let things get too nested, design things to be modular in a way that lets you ignore irrelevant sources of complexity, etc


Kernighan's Law - Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

That, of course, is only valid if you are using your cleverness to optimize for something that isn't readability.

A lot of clever code thinks it's optimizing for readability when it's really optimizing for the feeling of cleverness itself.

I feel, as an outlier to this, I need to make a comment.. Debugging (with source), to me at least, is so much easier because you have the whole stack with you along the chain.. It’s very rare, though not impossible, to find crazy behavior during correct debugging.. This law is new to me though.

I think it depends on how thorny the thing you need to debug is. Race conditions, intermittent bugs that crash the process leaving no trace, etc. Debugging is much more than using a debugger

I like to differentiate between “bugs that stem from a recent code change” (easy to fix) and “bugs that suddenly appear out of nowhere” (hard to fix)

"Do not write today, that what you cannot debug tomorrow."

I'd be a bit wary of taking 'simplicity' as a goal. Some people read that and think to write shorter, cleverer code instead of longer, more straightforward code which might have a few redundancies. (E.g., people often praise FP languages and Lisp-like languages for their very flexible control flow, but adding custom abstractions can result in shorter code that is difficult for newcomers to understand.)

On the other hand, too much modularity and factored-out code can also obscure the behavior of a program, even if each module has a well-defined role and satisfies its interface. (E.g., big Java programs can have hundreds of classes with small method bodies and lots of calls to other classes and interfaces. If you see an interface call, but the object came from somewhere several classes away, then good luck figuring out which implementation gets called.)

I'd say that the ultimate moral is "Keep the flow of logic and data relatively linear (or at least, no more complex than it needs to be to solve the problem). Avoid dicing it up between a dozen different spots, or performing crazy stunts with it, so that only an expert can understand what's going on. And just because some logical constructs (like dynamic dispatch) are unwritten doesn't mean they don't create mental load."


I think it's fine if code is difficult for newcomers to understand (of course, to a point). Most programmers are taught in C-like languages and paradigms. Using FP (ML-like) languages is already "difficult to understand".

The question then becomes: how large is the disconnect between the "theory" in the mind of the newcomer(s) vs. the "theory" they need to be useful in the codebase -- and is this gap worth it?

For example, programming with explicit effects (i.e. `IO`) or even just type-safe Futures. There's not too much difficulty with simply getting started, and it builds up a general theory of effects in the newcomer, which would presumably be useful in many contexts of the codebase even outside of async effects, e.g. error-handling with `Either`.
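
Not the ML-family version this comment has in mind, but as a rough Python sketch of the Either idea (Ok/Err are hand-rolled here for illustration, not from any standard library; match/case needs Python 3.10+):

    # Errors become ordinary values that the signature advertises,
    # instead of exceptions that propagate invisibly.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Ok:
        value: object

    @dataclass(frozen=True)
    class Err:
        error: str

    def parse_port(raw: str) -> Ok | Err:
        if not raw.isdigit():
            return Err(f"not a number: {raw!r}")
        port = int(raw)
        return Ok(port) if 0 < port < 65536 else Err(f"out of range: {port}")

    match parse_port("8080"):          # the caller must consider both cases
        case Ok(value=port):
            print(f"listening on {port}")
        case Err(error=msg):
            print(f"bad config: {msg}")

The "theory" a newcomer builds from this - failure is data, handle both branches - transfers to Futures, IO and the rest.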


One dev's "you've just gotta learn how we do things" is another dev's "holy hell why does this codebase have its own mind-bending do-everything hyper-abstractions that don't exist outside this organization". (I'm thinking not of the basic concepts of popular FP languages so much as the powerful abstractions they can be used to create. Though if a company invents its own language, that can very quickly enter the 'overly large gap' range.)

I agree that there's a spectrum here, but IME it's very easy for existing devs familiar with everything to underestimate the gap between newcomers' knowledge and their own, or to overestimate how much it's needed. In the worst case, you end up with brittle spaghetti 'abstractions' where the old-timers swear up and down that everything is perfectly sensible and necessary.

(I've personally seen this kind of overestimated necessity twice, in the context of API design. In both cases, the author of a public library published a new version incompatible with the old version, under the justification that users really ought to learn and pay attention to the fine details of the problem space that the library is solving, and that changing the API is a good way to force their hand. But from an outside perspective, the fine details weren't really important for 99.9% of use cases, and it isn't practical for every dev to be conscious of every tradeoff and design decision in the whole stack.)


> In the worst case, you end up with brittle spaghetti 'abstractions' where the old-timers swear up and down that everything is perfectly sensible and necessary.

i had this at last place. all previous members of the original team were gone. it was myself and a part time dev (1/2 days per week).

writing new tests would take longer than writing a change - not because we were writing a significant amount of tests. because the “factory” spaghetti code god class used to create all tests was just a nightmare.

got to the point where i just focussed on manual QA. was faster for each change (small team helped there).

and rewriting from scratch on 20k existing LoC for that repo wasn’t gonna work as we just didn’t have the people or time.

basically — we didn’t have time to deal with the bullshit.

keep it stupid.

for the love of everything good and sacred in the world, please, i beg you, keep it stupid (addressing this generally, not at the parent).

it’s easier to get it right, quickly, when it’s done stupidly.

i now want to have a moan into the vast empty space of the interwebs:

the data/mental model for the whole thing was just wrong. split tables for things that had a 1:1 relationship. code scheduling for worker tasks spread across multiple services. multiple race conditions i had to redesign to fix.

oh and no correct documentation. what was there was either wrong or so high level that it was essentially useless.

and roll-your-own-auth.

apparently this was all done under the clean code cargo cult. which tracks cos there were so many 10 line methods everywhere which meant jumping around constantly.


Everything is difficult for newcomers to understand. Newcomers should be helped to learn new things. Every programming language will be incomprehensible to non-programmers. That's not the target audience.

> but adding custom abstractions can result in shorter code that is difficult for newcomers to understand

I’ve worked at places where the lead would abstract everything possible; it made tracing the flow not just difficult but almost intentionally obfuscated. When I called him out on it he would recite the principles and say it’s the source of robust code. I’m sure our 100 customers appreciated that.

I do appreciate your comment on keeping the flow right. I would add: make sure your domains and boundaries are well established and respected.. mistakes will always happen, but if they’re “gated” by domains, the person who fixes one will definitely buy you a beer/coffee.


A great rule of thumb I found useful while learning under great tutorship (way back when): look at your check-in (SVN at the time); if you think you will still understand it in 5 months in less than 15 mins, then it’s okay.. But still have someone put eyes on it before clicking. SVN made us careful, maybe..

i inherited spaghetti code written by a geography phd and some fresh-out-of-coding-bootcamp flunkies. i really enjoy finding "clever" code, even code they only think is clever. it's entertaining! might as well be entertained at work.

if it's legit clever code, just add a link to the source for the theory behind it and that's enough for me


The (linked) Ratchet is genius! A near perfect mechanism to enforce the future you want without undue work in the meantime.

https://qntm.org/ratchet
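
The mechanism fits in a few lines. A rough sketch in Python, assuming a deprecated OldHttpClient you want to squeeze out of a src/ tree (both names are placeholders; qntm's post describes the idea, not this exact script):

    # Minimal ratchet: fail CI if a deprecated pattern appears more often
    # than the recorded ceiling; tighten the ceiling whenever it drops.
    import pathlib, re, sys

    PATTERN = re.compile(r"\bOldHttpClient\b")   # placeholder deprecated API
    CEILING_FILE = pathlib.Path("ratchet_ceiling.txt")

    count = sum(len(PATTERN.findall(p.read_text(errors="ignore")))
                for p in pathlib.Path("src").rglob("*.py"))
    ceiling = int(CEILING_FILE.read_text()) if CEILING_FILE.exists() else count

    if count > ceiling:
        sys.exit(f"ratchet: {count} uses of OldHttpClient, ceiling is {ceiling}")
    if count < ceiling:
        CEILING_FILE.write_text(str(count))      # the ratchet clicks tighter
    print(f"ratchet ok: {count}/{ceiling}")

Run it in CI: new uses of the pattern fail the build, and every removal permanently lowers the ceiling.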


One useful tool beyond regular grep for counting the number of times a pattern appears is ast-grep [1], which allows for more sophisticated rules.

[1] https://ast-grep.github.io/


Thanks for pointing that out, I absolutely love this idea. I’m going to see what kinda stuff I could ratchet at work. It’s probably a lot

The author cited this quote:

> The first 90% of the job takes 90% of the time. The last 10% of the job takes the other 90% of the time.

When I report my work done I always prefer the ironic version:

> I've already done 90%, now there's just the other 90%.

It is fun, but most importantly, for non-developers, it conveys a reality of our work. Doing the simplest case is almost easy, but when you have to factor in exceptions, errors, usability, logging, robustness, security, etc., there's a lot of "unexpected" work.


This reminds me of Joel Spolsky's tried and true article on rewrites: https://www.joelonsoftware.com/2000/04/06/things-you-should-...

"By the time the ground-up rewrite starts to seem like a good idea, avoidable mistakes have already been made. "

This is the wisdom of devops. Do it well, and things will go smoothly.


Someone below already commented [0]

> I think any attempt to distil hard earned experience and domain awareness will eventually devolve into misplaced generalisms.

Ground-up rewrites are a gamble. The classic Spolsky essay [1] about Netscape losing their lead to Internet Explorer is a must-read.

Briefly:

1. You think you know how to build the new version, but you really don't. Years of improvements of the old version, years of bug fixes, years of business knowledge are below the surface.

2. The new version is going to have new bugs.

3. While you're rebuilding everything from scratch your competition is improving their product.

0: https://news.ycombinator.com/item?id=42921426

1: https://www.joelonsoftware.com/2000/04/06/things-you-should-...


All abstractions are leaky, and all pithy sayings have a contradictory and equally pithy counter-saying.

True wisdom is knowing which wisdoms to look at and when.


I heard someone once say “All that ‘cruft’ you complain about, I call ‘bug fixes.’”

A mature codebase represents a lot of “tribal investment,” which is sort of like “tribal knowledge.”

It’s something that can’t easily be quantified, but, is, nonetheless, a big deal, and represents a really significant investment of resources. Throwing it away, means tossing out that investment, as well. That’s why many large software codebases are still in “non-buzzword-compliant” languages.

At the same time, we don’t want to throw good money after bad, so experience gives us the tools we need, to figure out when it’s time to “clear the decks.”


>> You think you know how to build the new version, but you really don't. Years of improvements of the old version, years of bug fixes, years of business knowledge are below the surface.

This point has always baffled me. The old adage is "if it ain't broke, don't fix it," yet the past decades saw everybody jumping on JS frameworks and wholeheartedly believing that by moving to framework "X" they could eliminate all the issues they have or had with the older version.

In my own experience, it never goes this way. Oh sure, do you want to be working in legacy code that's 10-plus years old? Probably not, but the idea that every new shiny thing has all the answers to all the issues you had before is a surefire way to get a lot of people fired for overpromising and underdelivering.


I like the list. I'd add:

* Optimize for readability.

* The cost of a bug grows exponentially with the time between when it was introduced and when it was discovered (per some NIST study): prevent bugs from reaching the next phase (requirements -> design -> dev't -> staging -> prod).

* Favor FP principles such as: separate data and logic (functions); immutability; explicit nulls; prefer pure functions; etc. Even in non-FP languages (see the sketch after this list).

* Strong typing disciplines in languages are worth it.
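
A small illustration of those FP points in a non-FP language; Python here, and the order-total example is mine, not the commenter's:

    # Immutable data + explicit nulls + pure functions, in plain Python.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)            # immutability: no in-place mutation
    class LineItem:
        name: str
        unit_price_cents: int
        quantity: int

    def total_cents(items: tuple[LineItem, ...],
                    discount_cents: Optional[int] = None) -> int:
        """Pure function: output depends only on inputs, no side effects."""
        subtotal = sum(i.unit_price_cents * i.quantity for i in items)
        return subtotal - (discount_cents or 0)   # explicit null via Optional

    items = (LineItem("widget", 250, 4), LineItem("gadget", 999, 1))
    assert total_cents(items) == 1999
    assert total_cents(items, discount_cents=199) == 1800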


Same qntm that wrote There Is No Antimemetics Division

Also Ra, Fine Structure and a number of short stories. They are incredible.

See also their post on Clean Code: https://qntm.org/clean


Whoa! Small world. That was a fun story.

Some of these items were such elegant distillations of my thoughts and feelings that it made me laugh when I read them. To the author: thank you so much for sharing this wisdom.

The only thing that I've found difficult to reconcile is the push and pull between those that think they're fighting the "you are 90% done" fight and those who think they're fighting the "think about pathological data" fight. Essentially, I've personally found the clash between speed and preparedness/safety to be a difficult one to solve.


I'm not being snarky, really. I was a little struck by:

>After the session, I felt that it might be valuable to write my own thoughts up, and add a little more detail. So here we are.

did the senior developers at the session stay to listen to each other? do you all think alike, did you learn anything from them, or feel that anything they said should be written up?

I guess I'm thinking, what would a junior developer take away after listening to all the presentations, or what should they, from the pov of the senior developers?


As has been noted in the previous comments, I have learned that “It depends” is really the most important artifact of my experience.

Every project, every challenge, every interpersonal relationship, is a unique context. There are global policies, strategies, and structures that apply to most contexts, but there’s always an “outlier,” that throws a curveball, and that won’t fit into the plan.

Experience helps us to handle these out-of-band anomalies. We realize that we have a library of heuristics, as opposed to rules.

“Breaking the rules” is serious stuff. When we are younger, we often lack awareness of the ramifications of our deviations, and can easily make things worse. When we have experience, we can “play the tape through to the end,” and anticipate the results of our decisions, much more clearly, as well as develop long-term transition strategies.


These are all good ideas, particularly the last 5 which are mostly technical. The first 2 are often influenced by things beyond developers' control. For example, developers might be pushed on a schedule that encourages accumulation of technical debt, leading to the inevitable bug bankruptcy and ground-up rewrite.

I think every developer should strive to do the right thing, and also be flexible when the right outcome didn't happen.


Seeing the domain name, I really hoped this was some way-out-there SF story about computers philosophising about their developers or something.

Optimizing what I do to give the team and decision makers optionality has been a big one for me. I don't want to have a fragile plan that minimizes my effort if it forces things to be done in sequence. Multiple independent changesets solve the problem faster and better.

Reduction of coupling is a related concept - it's always better if two things can be two things, not one interconnected hairball.


> I had one other thing for this list but I don't remember it right now.

Write it down. That's what I would put on my list. The ability to write things down is like an extension of the brain, a way to offload and document the models that you're constantly building in your head. (Also helps with recall.)


Great hard-won insight! Looking forward to many HN comments on this thread.

Well, there's a gauntlet being thrown down if I ever saw one.

For me? Maybe something like:

1. All rules have exceptions - there are places that should be exceptions. Rigidly enforcing a rule may be better than having no rule, but thoughtfully following the rule (almost all the time) is even better. Know your rules, know why they're the rules, and know when it might be reasonable to make an exception. (Maybe this says that "rules" are really more like "guidelines".)

Rules often come in opposite pairs. I think of it like this: In Zion National Park, there's a hike called "Angel's Landing". You wind up going up this ridge. On one side is a 1000-foot drop-off. On the other side is a 500 foot drop-off. The ridge isn't all that wide, either. If you look at one cliff, and you think "I need to be careful not to fall off of that cliff", and you back too far away from it, then you fall off of the other cliff.

I think software engineering can be like that. There is more than one mistake that you could make. Don't be so busy avoiding one mistake that you make the opposite mistake. This takes thoughtful understanding, not just blindly following rules.

2. In general, don't repeat yourself. Early in my career, a coworker and I learned to ask ourselves, "Did you fix it everyplace?" It's even better if there's only one place.

But... It is common that things start out the same, and then become slightly different (and it gets covered with an "if"), and then become more different (and now we have several "if"s), and then become even more different. Eventually it can become better to decide that these are actually different things, and split them. Knowing when to do so is an art rather than a science, but it's important.

3. The most general problem cannot be solved. The trick is to do something simple enough that you can actually finish it, but hard enough that it's actually worth doing. You cannot address this just by changing the lines of code you write; the problem is at the specification level (though it can affect the specification of a module or a function, not just a project).

4. Code that is "close to perfect", delivered now, may well be better than code that is absolutely perfect, an uncertain amount of time in the future. It depends on how many imperfections there are, how likely they are to be hit, and how damaging they are when they are hit. Code that is close to perfect may be perfectly usable to many people; code that hasn't shipped yet is currently usable to nobody. (Note well: This is not an excuse for sloppiness! It is an excuse for not going completely hog-wild on perfectionism.)

5. You've got a perfect design? Great. As part of delivering the design, deliver an explanation/roadmap/tour guide for it. (If it were an electrical design, it might be called a "theory of operation".) Consider checking it in to the version control system, right beside the code - like, in the top level directory.

6. All these things take maintenance. You have to revisit your "don't repeat yourself" decisions. You have to revisit what is within scope and out of scope. You have to revisit which bugs are tolerable and which are not. You have to update your design documents. Take the time and do the work. If you don't, your program will slowly become more and more brittle.


Love this. Would love to see some type of grounded-theory qualitative analysis of Developer Philosophies of many senior software practitioners - can imagine what common themes would emerge, but I'm more curious to see which pieces feel intuitive and right but aren't supported by the majority of senior practitioners.

The #1 rule in my home is "Be kind" (not to be confused with "be nice".)

The #1 rule I try to abide by when coding is "Be empathetic" (to others and my future self.)

In practice, this means things like valuing clarity over cleverness (unless I can manage to be both!) and documenting unless I can really justify not (i.e., is it realistically "self-documenting"), but of course also extends to empathy for the user—I'm a front-end dev so much of my work is UI.

You may read this and think "duh" but trust me when I say that in the 20+ years of doing this professionally, those are very clearly not obvious guidelines (even for myself.)


The point about edge cases intersects with the one about automating good practice: automate finding edge cases! Property based testing is way more fun than trying to dream up all the gnarly ways things could go wrong, and it’s better at finding them than you are.
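
For anyone who hasn't tried it, the whole technique in miniature with Python's Hypothesis (the quote/unquote roundtrip is just a convenient example property): state an invariant, and let the library hunt for inputs that break it.

    # Hypothesis searches for inputs that violate the roundtrip property,
    # then shrinks any failure it finds to a minimal counterexample.
    from hypothesis import given, strategies as st
    from urllib.parse import quote, unquote

    @given(st.text())
    def test_url_quote_roundtrip(s):
        assert unquote(quote(s)) == s

A shrunk counterexample is usually far more informative than whatever gnarly case you would have dreamed up by hand.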

I'd add this: if you do end up with a ground-up rewrite, make sure the original authors of the current code are involved in the rewrite. If they are not, it's not a rewrite, just another version.

This post really verbalized my own similar experiences so much better than I ever could have:

- I always think what could go wrong, but of course, it's all about the edge cases!

- what the last 10% really consists of.


"Nobody cares about the golden path. Edge cases are our entire job."

This is an obvious exaggeration; if you ignore the golden path, the code doesn't solve the problem it's meant to solve. But yes; writing reliable code is all about the edge cases. Eliminate them if you can; code for them if you can't.


Back in the 1970s, when I was just a high school kid reading computer magazines, I saw references to a study (I believe by IBM) that said that in production code, 70 or 80% of the lines of code were error handling; only 20 or 30% were the "golden path". I'm not sure I saw an actual reference even then, and I certainly cannot give a reference now.

Does anybody know the study in question? (I have never seen another study on the topic. Does anyone else know of one?)

This was almost certainly either done in assembly or PL/I or Algol or something. Do more modern languages change it? Exceptions? Dual-track programming (options or Maybes)?

Regardless of exact numbers, yes, error cases are something you have to think about all the time.


Maybe this applies to Go code?

Snark aside, I am somebody who is very concerned about edge cases, but those ratios seem completely wrong to me for the kind of code I write. And perhaps one should say "corner cases" instead of "error cases". Corner cases aren't necessarily errors. What I find is that a good algorithm that is properly "in tune" with the problem space will often implicitly handle corner cases correctly. While an algorithm that was hacked together by somebody who is "coding to the test" without really understanding the problem space tends to not handle corner cases, and then the developer tries to handle them by adding if-statements to patch their handling.

In the end, devoting 70% or 80% of thinking time to corner cases seems entirely plausible to me. 70% or 80% of lines of code dedicated to corner cases may be a smell.


Depends on the kind of code. If you're automating a process, or handling large quantities of user input, for example, problem cases are everywhere. Or if you're working with a complex problem space where things can compose in unexpected ways (e.g., a language interpreter).

Surprisingly boring advice from such an imaginative writer.

The creative writing and boring advice are both because he's smart.


