I believe the first step when creating something new ought to be answering honestly for yourself why existing solutions won't work for you. Your time is valuable and much better spent building something no one else has yet.
I was once heavily afflicted by NIH, but with experience I realized how much time I was wasting stubbornly trying to reinvent the wheel. That's actually a great analogy if you think about it: the original wheels were fashioned out of wood and are quite simple in concept, but a modern wheel requires tons of specialized tools and knowledge to build. It doesn't make sense to attempt this if I can just buy a premade one that suits my needs. That premade wheel also has the benefit of huge amounts of iteration in response to problems encountered over time, and I would probably need to replicate at least some of that in order to build something competitive with the existing solution.
In the realm of code, using other people's solutions when available lets me focus on the original problem I want to solve. It also lets me minimize the amount of code I actually have to maintain myself, since typically there's a group of people, collectively much more knowledgeable than me, doing that for free. If I ever do have a reason to know how the library works (patches, bug fixes, behavioral questions, curiosity), I've found it much quicker to figure out how the existing code works than to write my own.
Earlier in my career I could just use any old tool that got the job done, with plenty of dependencies and without a care in the world. Then three things happened. First, I got exposed to some of the highest quality tools and libraries in the industry. As in, the ones that only a few people are lucky enough to get to use, and it made using anything less polished afterwards painful. Sort of like living in a mansion and then needing to move back into a slum. The second was finding out about too many of the nasty corner cases that exist in technology, which in most cases are safe to ignore, but are hard to un-know once you've seen them. The third was having too many things I depended on break over time or abandon the principles that attracted me to the project in the first place.
So now, later in my career, my biases are more tuned towards reinventing things than ever before. I'd rather minimize risk by having something perfectly tailored to my specific needs. If it breaks, I'll only have myself to blame, and I view that as a fun learning experience. Much better than the alternative, which is nagging folks in the open source community and filing issues expecting them to support me, and then feeling guilty afterwards. The only thing I'm worried about is that my standards will keep rising until the joy of programming is so hard to find that I'll just do management and let other people have fun with all the churn.
Well, there's also something related, but distinct, that I've found.
Namely, the amount of understanding and effort needed to solve my problem correctly and safely...is frequently less than the amount of understanding and effort needed to use some opinionated library or framework.
Not always; there are plenty of times a library or framework is a better choice. But what tilts the scale in their favor is keeping them very simple: minimal assumptions, minimal state, and simple APIs. That can make using them simpler than writing the thing myself.
It all really depends on the example. I know tons of developers who skip Unity or Unreal because there is a learning curve and they can already render a cube themselves. They think "I'll just write a model loader" and ship, not realizing how many man-years of work they're ignoring in all the other parts: input systems across 10+ platforms, graphics APIs across those same platforms, physics libraries, post-processing libraries, all the importer edge cases that took man-years to find, game networking libraries, and on and on.
Of course it's their choice, and if they enjoy reinventing the wheel, good for them. But they'll spend 6 to 18 months reproducing some fraction of the features they'd have had for free, instead of working on what makes their thing unique.
As I said, "there are plenty of times a library or framework is a better choice".
An egregious example of what I'm referring to is what we saw with the left-pad library. That always blew my mind, because the amount of effort to find a library, learn it, include it, and use it seemed really high relative to the functionality. I don't understand why it saw such adoption - I wouldn't even -think- to Google for that functionality as a library; I'd just write it (if I was unaware of the string method, which, to be fair, I only know of because of the left-pad kerfuffle).
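For the record, the entire functionality is roughly this (a sketch, not the actual left-pad package, which handled a couple more edge cases):

```typescript
// A minimal left-pad sketch: prepend `ch` to `str` until it reaches `len`.
function leftPad(str: string, len: number, ch: string = " "): string {
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

// Since ES2017 the built-in string method does the same job:
console.log(leftPad("5", 3, "0"));  // "005"
console.log("5".padStart(3, "0"));  // "005"
```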
A game engine is an example where learning the framework is assuredly going to be faster; you'd need a better reason to write your own, and as a dev manager I'd be pushing back on it hard (whether it was coming from the devs or the business).
I was excited for Unity's input system rewrite and hoped that it would allow games to put input events on the wire to the opponent/server as soon as they were received, independently of the game loop. I asked the developer about it and he said "no, it only does part of what you want, and input events for some devices are still processed on the main thread anyway." So it seems like I have to wait decades for Unity to catch up with the kind of thing one can hack into love2d in an afternoon.
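Roughly the sort of thing I mean (a hypothetical, browser-flavored sketch standing in for love2d's callbacks; the endpoint and message shape are made up): forward each input event the moment it fires, rather than buffering it until the next tick of the game loop.

```typescript
// Hypothetical sketch: ship input to the server as soon as the event fires,
// decoupled from the render/update loop.
const socket = new WebSocket("wss://game.example/match"); // made-up endpoint

window.addEventListener("keydown", (e: KeyboardEvent) => {
  // Timestamp and send immediately; don't wait for the next frame.
  socket.send(JSON.stringify({ type: "input", key: e.key, t: performance.now() }));
});

// The game loop runs independently and only renders the latest known state.
function gameLoop() {
  // ...update/render from state received back from the server...
  requestAnimationFrame(gameLoop);
}
requestAnimationFrame(gameLoop);
```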
Just as a quippy counterpoint (without engaging with GP's actual argument), I would say that the more experienced I've become as a developer, the more I've realized how much of a liability my own code is (and that using libraries allows me to write less code).
> First, I got exposed to some of the highest quality tools and libraries in the industry. As in, the ones that only a few people are lucky enough to get to use, and it made using anything less polished afterwards painful.
If any Hackernews wondered why anyone would want to live and do all their work from within Emacs, this alone would explain it.
A good reason to implement things yourself is to learn. I recently implemented a PNG loader/saver. All in all it took me 3-4 days.
I now have a complete grasp of the file format and of my code base. I will never need to find a library for it or take on a dependency, the code won't change without me knowing about it, and I can very accurately decide when PNG is the right format. I can speak about the format with authority.
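To give a sense of scale, the container format really is approachable. Here's a sketch of a loader's outer loop - the 8-byte signature followed by length/type/data/CRC chunks (the real work is decoding the IDAT data: zlib plus the filter types):

```typescript
import { readFileSync } from "node:fs";

// Sketch of a PNG chunk walker. A PNG is an 8-byte signature followed by
// chunks of the form [4-byte big-endian length][4-byte type][data][4-byte CRC].
function listChunks(path: string): void {
  const buf = readFileSync(path);
  const sig = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];
  if (!sig.every((b, i) => buf[i] === b)) throw new Error("not a PNG");

  let off = 8;
  while (off < buf.length) {
    const len = buf.readUInt32BE(off);                     // data length only
    const type = buf.toString("ascii", off + 4, off + 8);  // IHDR, IDAT, IEND...
    console.log(`${type}: ${len} bytes`);
    off += 12 + len;  // 4 (length) + 4 (type) + data + 4 (CRC)
    if (type === "IEND") break;
  }
}
```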
I guess it depends on the problem domains you work in because I've never once needed to speak on the PNG format with authority.
Similarly, finding authoritative comparisons of which image formats do what best is pretty easy.
I've also never run up against shortcomings of existing PNG solutions. Even in assembly for the Z80 I remember using a PNG converter that would handle greyscale on a calculator!
But 3 or 4 days of work! Man, I can think of a lot of things I could do with that.
If it was for the fun of the experience I'd totally get it, but spinning it as "I needed to do this"? I don't think I could go that far in the areas I've worked in.
I think of it as an investment in my career. If you can do close to 30 of these small projects each year, it starts adding up. You become an expert in many domains, and you start to build up a large code base.
If you look at my codebase (www.gamepipeline.org) you can see that it looks more like that of a platform holder like Microsoft/Apple/Google than like most individuals' GitHub profiles.
The most common critique of my coding ethos is that it's not effective (I write everything in C with zero dependencies, so I write everything from scratch). But looking at what I have been able to accomplish over the years (http://www.quelsolaar.com), I can't think of many people who have produced as many applications as I have. If writing everything in higher-level languages and using lots of dependencies were as effective as people say, there would be plenty of people running circles around me in terms of productivity, and I just don't see that.
Looking at your codebase, it looks like each of these files could make an excellent online tutorial. Many of the APIs you're using are tricky to figure out, and it helps so much being able to Google concrete usage examples that are well organized and provide brief explanations and screenshots. It reminds me of the days when I used to use Beej's tutorials on socket programming. That's the kind of content we sadly don't see as much anymore; online programming resources seem to have become polarized between (a) direct answers to specific questions on Stack Overflow and (b) low-effort, SEO-optimized blind-leading-the-blind.
quelsolaar.com is really bad on Safari (iPad). It took >10 seconds to load, and then I couldn't even read most of it, as animations clearly meant to appear at a different point kept popping up over the text.
I agree. Years ago, over a Christmas break, I hand-ported the Chipmunk-2d physics engine to JavaScript because I wanted to understand how it worked (and wanted a physics engine I could use for small games).
It only took me about two weeks, and it was one of my most memorable experiences of the year. I learned a few great data structures, I learned about JS micro-optimization (I got an 8x speedup from the first port to the final version of the code), and I learned all sorts of practical physics: moments of inertia, rotational momentum, restitution, solvers, etc.
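For a taste, the heart of collision response is the textbook impulse-with-restitution step (a simplified, linear-only sketch; Chipmunk's actual solver also handles rotation, friction, and iterative constraint solving):

```typescript
// Textbook collision impulse with restitution, linear terms only.
interface Body { mass: number; vx: number; vy: number; }

// (nx, ny) is the unit contact normal pointing from a to b;
// e is the coefficient of restitution (0 = inelastic, 1 = perfectly bouncy).
function resolveCollision(a: Body, b: Body, nx: number, ny: number, e: number): void {
  // Relative velocity of b with respect to a, projected onto the normal.
  const relVel = (b.vx - a.vx) * nx + (b.vy - a.vy) * ny;
  if (relVel > 0) return; // bodies already separating

  // Impulse magnitude: j = -(1 + e) * (v_rel . n) / (1/m_a + 1/m_b)
  const j = (-(1 + e) * relVel) / (1 / a.mass + 1 / b.mass);

  // Apply equal and opposite impulses along the normal.
  a.vx -= (j / a.mass) * nx; a.vy -= (j / a.mass) * ny;
  b.vx += (j / b.mass) * nx; b.vy += (j / b.mass) * ny;
}
```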
It was a silly thing to do by most standards, but acts like that make me a much better engineer.
At some point in your career - after you get sick of the treadmill of trying to "keep up", have experienced or come close to burnout at least once, and have really asked yourself "why am I doing this?" - it's projects like these that help you reconnect with what you love about the craft in the first place.
Learning is nice and all, but if you didn't document well, then the first time the next person needs a feature that already exists in off-the-shelf solutions, that code will likely be ripped out PDQ (given the problem space was non-trivial).
> Your time is valuable and much better spent building something no one else has yet.
I strongly disagree with this. I believe the exact opposite is true. Creation springs from learning, so if you want to be creative it pays off to spend most of your time learning things that already exist. In the context of programming, this means implementing algorithms for which several other implementations already exist.
Sportsmen spend nearly all of their time training and just a tiny (but very focused) amount of time competing. Likewise, scientists spend most of their time preparing experiments and reproducing the results of others, and a tiny amount of time "creating". I do not believe that programmers can successfully escape this scheme. The best programmers I know spend an inordinate amount of time rewriting well-known algorithms (and oftentimes improving them with slight variations).
How would one justify building something that someone else has already done very well, just so you can learn, when the result will probably not be as good?
I mean, on one's own time, this seems like rather a good thing to do.
But to say "Yeah, instead of something that works now 0 days work let's spend 3 months building something that may not work very well and we have to maintain" - is not a reasonable premise in most scenarios.
It defies the very point of trade specialization upon literally which the economy is built.
So while we should always be learning, learning in and of itself in most instances is not a good enough reason to do something at least at work.
It's simpler to justify if the development process is also seen as a knowledge-acquisition process. Does the knowledge acquired by re-invention, and its effect on subsequent effectiveness and/or productivity, outweigh the cost of the time expended? In many cases, especially in a mature or stable environment, the answer is yes. It's also an attractive method of learning for people who like to think from first principles, as it involves developing a bottom-up understanding of whatever "the wheel" is in this case, via practice.
I find strong opinions either way on this question to be indicative of time preference. My own view is that there's a time to do it and a time to avoid it. It becomes very valuable when one is navigating an unfamiliar area or is in the midst of a fundamental shift in the landscape. It's best avoided when the team or project is in a do-or-die scenario where certain milestones must be met to ensure it remains viable.
"as a knowledge acquisition process" so yes, if the module is a little bit core and you would rather have it internally, the added risk of time/quality/maintenance may be worth it. Sure.
>I mean, on one's own time, this seems like rather a good thing to do.
This is an interesting example of Conway's Law. If I hire a contractor for something, I'll want them to just import something that works, not try to explore things deeply. If I have a long term employee or cofounder, I'll want to encourage that experimentation and growth.
"What I cannot create, I do not understand" - What he meant by this was re-implement more than re-invent... re-invent is an oxymoron I just realized; you cannot re- something that needs to be new.
In addition to this, I've found that if it's something too big for my current capabilities and I don't mind not owning it, just putting it out into the world and being patient yields wonders! It's almost a superpower.
For instance, if you want a new material or something fabricated that doesn't exist but is physically capable of being manufactured, just steamrolling Alibaba, eBay, and other Chinese manufacturers with searches from different IP addresses will convince the powers that be that sufficient market potential exists for whatever you want. Do this enough, sit back, wait... and voila! Dichroic fabric, holographic fabric, weird whatever-you-want will magically appear on store shelves in a matter of a year. Wait another year and it'll be 40% off!
I would say "code it yourself, if it doesn't significantly slow you down". There's definitely something to be said for minimizing your dependencies, but if you're spending weeks implementing things completely tangential to what you're trying to do, you made a wrong turn somewhere. It's all about picking your battles, and I think you can go too far in both directions.
I'm personally starting to take the view of "sharpen your tools". That is, I'm fine with taking dependencies where appropriate, but I'm starting to contribute fixes and improvements to them as a way to derisk them. I think that benefits everyone.
I don't think that's a useful question unless you know how to answer it within specific contexts. In other words, what does "work" mean? There are so many dimensions to consider there, including technical, social, business, developmental, and legal ones.
Sometimes the most extreme NIH tendencies are perfectly justified due to these constraints, and sometimes they're absurd. I don't think any point on the continuum is generally "correct" nor should be generally advocated for. Instead we should be discussing how to figure out where the point lies on the continuum, within the context of the software we're currently writing.
When I code something myself I am not reinventing any wheels and I am not building a creaky wooden wheel. As you say, the world is full of technically marvelous wheels that I can crib off of and I have access to fabulous tools that allow me to easily synthesize new wheels.
Unlike actual wheels, the ability to quickly and easily tear down, reconfigure, and rebuild my digital tool box is the defining hallmark of software.
I see the philosophy of "don't reinvent the wheel" taken to extremes in the React world, and before that, in the jQuery world. There is simply a plugin/component library for anything and everything you need to do, so relatively simple applications have monstrous dependency trees, because it's easier to `npm install` the whole kitchen sink and use the little bits you need than to take some time to understand and implement a smaller, focused alternative that could live in a `util.js` file in your project.
At least thanks to tree-shaking and bundler innovations, the end-users don't suffer from bloated bundle downloads, but your node_modules folder is still 600MB large.
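For example, here's the sort of helper that can live in `util.js` instead of adding another entry to package.json (a sketch, with debounce standing in for the typical micro-dependency; library versions add options like leading/trailing-edge firing):

```typescript
// util.ts: a small, focused debounce instead of a package for it.
export function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);                           // cancel any pending call
    timer = setTimeout(() => fn(...args), waitMs); // reschedule
  };
}

// Usage: run the search only after the user stops typing for 300ms.
// input.addEventListener("input", debounce(() => runSearch(input.value), 300));
```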
While I appreciate the sentiment behind this, with experience comes a sixth sense of what should be rebuilt, and what is better reused/modified. Experience also brings with it an ability to scan through foreign code and get a "feel" for how usable it is.
For example, I've chosen to replace one of the foundational data communication building blocks [1] because, after extensive research, the existing systems can't be modified to handle all of the general use cases I want handled. And it won't stop there, because I also have a number of additional requirements that HTTP won't handle (so there's also a new protocol in the works that will be based off this technology). If I could have simply modified an existing system to do what I want, I certainly wouldn't have spent the last two years on this! I've got plenty of other things demanding my attention...
It is generally good to depend on well-tested, large, and reliable libraries. I have done the code-it-yourself thing with smaller libraries, though. The quality of the maintenance work done on smaller libraries is by no means guaranteed. They can acquire bugs at any point in time, and when they do, it may be best to just replace them with home-written replacements.
A large, complicated, well-maintained, and widely used library is infinitely preferable to a large, complicated library that you need to maintain yourself and that only you use. In a similar vein, a well-known standard format (or encoding) will always be a better choice than some ad-hoc format you create yourself: not only will that standard have encountered and dealt with problems you haven't even considered, but there are also likely to be a plethora of libraries, frameworks, and tools that support it, whereas if you create something yourself, you end up having to build everything you need as well.
Your time is generally better spent working on solving your core problem rather than the dozens of ancillary problems that end up needing to be solved along the way (particularly where a whole bunch of other people have spent a whole bunch of time already solving those problems).
> A large complicated well maintained and widely used library is infinitely preferable to a large complicated library you need to maintain yourself and used only by you.
Yes, but a large complicated well maintained and widely used library is not necessarily preferable to a small not so complicated library that does exactly what you need and nothing else. And that goes for formats too.
Recently I was involved in a project where order numbers had to be sent from one system to another. Some colleagues insisted that we bake them into a large XML document and then use libraries to both create the documents and parse them. In this case the economical thing to do was to write them out, each separated by an EOL. Even the code we would have written ourselves would have been larger with the XML solution, not to mention everything that would have needed to be included in builds and deploys.
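To make that concrete, the entire "serializer" and "parser" for the newline approach is a few lines (a sketch; the real code would presumably also validate the values):

```typescript
// Order numbers on the wire, one per line.
function serialize(orderNumbers: string[]): string {
  return orderNumbers.join("\n");
}

function parse(payload: string): string[] {
  return payload.split("\n").filter((line) => line.length > 0);
}
```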
There's an important difference between using something because it's popular and using something because it's a standard designed for your problem. For something simple like sending some numbers across the wire, XML is massive overkill (as an aside, there's actually very little that XML is a good solution to). CSV, TSV, JSON arrays, one of dozens of serialization formats, or even a simple EOL-separated format like you proposed are all both standardized and very simple solutions to the problem. On the other hand, had they proposed inventing some new binary serialization protocol and using that to transmit the numbers, that would be even worse than using XML.
You should always pick the simplest solution that meets all your requirements, but when considering solutions you should favor standards-compliant ones. A common example is date formats: lots of places roll their own date format string when sending dates, but using ISO-8601 will save you (and your clients) so many headaches in the long run.
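For instance (assuming a JavaScript environment here; most languages have an equivalent):

```typescript
// ISO-8601 round-trips cleanly, sorts lexicographically, and is unambiguous
// about timezone; a home-grown "03/04/2021" leaves every reader guessing.
const sent = new Date().toISOString(); // e.g. "2021-03-04T12:34:56.789Z"
const received = new Date(sent);       // parses unambiguously everywhere
```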
Honestly, for your example, not knowing all the details I can't say for sure whether an EOL-separated format was the right call, but based on the description alone I probably would have gone with CSV, or possibly a JSON array. I definitely would not have used XML (dear god, why would anyone pick XML in this day and age?), although if they were concerned about needing to add more data down the line, I could maybe see an argument for something a bit more involved than CSV.
One thing I would add is that it's much easier to appreciate existing solutions once you've tried to build your own. Often those "I could code something better in a day" thoughts turn into "it's a tougher problem than I thought".
Other times, though, I either learn something or end up making something that is much better for me to use. I would suggest that at some point every programmer try building their own limited-scope library if they find existing solutions don't meet their requirements.
I like the idea of a "yeoman" programmer/engineer movement. It's like homesteading, except for consumer appliances/software: subsistence coding. I was happy to see a step-by-step guide to building a doorbell camera on the front page yesterday.
If the code is for you, and for you exclusively, go ahead.
If you're writing code as part of a team for a customer, then it isn't for you, and its whole purpose is to solve the problem at hand.
Secondly:
The primary issue I have with a NIH code-it-yourself approach is that it doesn't scale over time. Professionally, over two decades, I have seen several teams go through a technical evaluation and decide, in the end, that no open or closed source solution exactly fit their needs. So they coded it themselves.
Fast forward three to five years and everyone regretted the NIH approach. Those open or closed source solutions had matured and easily surpassed the home grown feature set, which still required a team of engineers to invest in.
There are exceptions, of course. Sometimes you have to build it yourself. But more often than not, it's much, much more effective to let go of your ego and collaborate with others, particularly on open source solutions in which you always have the option to fork the codebase and bring it, effectively, in house.
Generally, I find those that strongly advocate for NIH overly discount the long term costs of maintaining software.
> Fast forward three to five years and everyone regretted the NIH approach. Those open or closed source solutions had matured and easily surpassed the home grown feature set, which still required a team of engineers to invest in.
This is the root of the problem: a small team generally can't keep up with an open source project over time.
The NIH approach works well at larger companies who can dedicate a lot of engineers to working on in-house infrastructure projects full time. At smaller companies it is a distraction from building the core product.
While I certainly would encourage, and do encourage, people to code (it) themselves, the claim that not doing so
> mean[s] endless seeking, evaluating and further deviation from our goals.
is simply not true. Only if software were written to serve ultra-particular and individual goals would that be the case.
Software is written to serve needs - of fewer or of many. And while it may serve a more constrained set of needs more optimally, it typically serves wide enough needs well enough that the vast majority of people have most of their software needs met by software written by others (albeit with room for improvement).
Also, almost no person, even a proficient coder, has enough time and attention span to code most of the software they use. On the contrary, we absolutely and necessarily _won't and can't_ code "it" ourselves, where "it" is the main bulk of the software we use over the course of our lives.
----
Instead of this manifesto, I would suggest a "Write good, robust, widely-usable libraries" manifesto - because that's how other people will be realistically able to code "it" themselves when and if they need to.
I don't know, it's certainly been the pattern I've experienced with Slashdot, reddit, Facebook, Twitter, Instagram, Windows, Debian, Ubuntu, Mac, Firefox, Chrome, Hotmail, Yahoo Mail, Gmail, Android, iOS, and so many other services and software that I'd be sitting here all day counting them all.
First, I find something which suits me.
Then, it starts changing out from under me, typically in the direction of bloat and feature removal.
Then, the usefulness to abuse ratio drops gradually.
Then, I'm faced with having to migrate or simply abandon yet another platform.
It's a serious issue, but I believe we can overcome it with just this sort of approach combining FLOSS and dogfooding.
After dealing with the same sort of rot pattern in both software and services for years, I've set out to replace as much as I can with my own tooling.
Mostly this has meant developing a hybrid blogging, forum, note-taking and writing, DAG database, information archiving and retrieval system and dogfooding it as much as possible.
In business this is known as the build/buy dichotomy. Best practice is to focus on your core competencies, meaning: buy, unless the available solutions don't fit your needs or are cost-prohibitive.
As always, it all boils down to cost. "Code it yourself" may make your programmers happy, and give them interesting work to do, but if it costs the business a big chunk of money they could have saved or put to more profitable efforts, it's a very bad idea.
There is a huge missing point which outweighs everything else: it's really expensive (in time, or maybe in money) to develop your own software, even if you are a good developer.
Doesn’t this quickly become untenable? Sure, you might be able to code some of your own tools, but what about the OS those tools run on? Hardware? Drivers?
This is kind of covered in the manifesto itself, in that you only go down the road of making something yourself when the tools you use no longer match your use case.
A lot of tools will be fine or will meet your goals, such as your OS. If your OS doesn't match your goals, then you're left with the three options detailed. In the case of an OS, the "big" OSes probably meet most people's general computing needs. It's not uncommon to write a custom "OS" for embedded devices, and some people do feel like making their own OS; there are plenty of examples of them out there, and that's perfectly fine.
Nothing like this is written as law, either. Your text editor annoys you under certain conditions? You don't have to rewrite it, fix it, or switch to another editor. But if you are compelled to write a replacement, that shouldn't be stigmatized, as rewriting existing software frequently is.
It _immediately_ becomes untenable, actually. It's like telling people to "manufacture it yourself". That might be possible for a perfectly fit person living a Robinson Crusoe life in some remote region; otherwise it's just divorced from reality.
We are a social animal - "zoon politikon", in Aristotle's original Greek - and our activities are social. For better and for worse, we can't declare that not to be the case or claim that we just want to be "left alone".