When the topic of "great" developers comes up, I always challenge anyone to define precisely, in quantifiable terms, what a "great developer" is exactly.
How do you know someone is a "great developer" if you can't, in tangible terms, say exactly what a "great developer" is?
This is even more of an issue when recruiting because recruiting is an assessment process to determine if someone meets some requirement. But just like software testing, if you can't quantify the requirement, then you can't quantify a test to determine if someone meets that requirement, and therefore the interview/assessment process is simply voodoo recruiting.
Voodoo recruiting is when you have the appearance of a scientific process in recruiting:
- a process,
- a set of questions that appear good,
- a coding test that seems technical enough that it must be telling you something meaningful about the candidate,
- the opinions of many people on the candidate.
But in fact all of this is voodoo, because you still can't directly define the requirement, and thus you never know whether you found what you were looking for, since what you are looking for was never actually defined.
The final decision in recruiting a "great developer", as always, comes down to some subjective opinion based on a range of criteria that can't be proven to have had any meaning in the first place.
> The final decision in recruiting a "great developer", as always, comes down to some subjective opinion based on a range of criteria that can't be proven to have had any meaning in the first place.
Careful, this road can lead to nihilism.
Subjectivity is an important part of reality. Subjective things are also messy. We can acknowledge that these aspects are subjective and wrestle with them the best we can, or we can deny they exist at all, and end up pushing those problems somewhere else, where they might fester and become worse.
> I always challenge anyone to define precisely, in quantifiable terms, what a "great developer" is exactly.
Replace "great developer" with words like "democracy", "truth", "love", "good music", "healthy diet", "pornography".
We can't reliably quantify or strictly define any of those things. Sometimes "I know it when I see it" and sometimes not, when cognitive biases and instincts lead us astray. But to deny there is such a thing as a good developer, or a "great" developer, because we don't have perfect metrics, seems like going too far.
I agree that having a "pseudoscientific" process can be bad. When you are dealing with something that is inherently subjective, you should resist both urges: one is false precision, a system that claims to cleanly settle once and for all what is inherently messy and difficult; and the other is the denial that that 'something' exists at all.
I feel like empiricists routinely and easily perform the partially opinionated act of operationalization, and what's being critiqued here is the _lack_ of opinion in operationalization, except for the mocking of scientific appearance. Likewise, many discussions on consciousness are worthy of critique for lacking operationalization.
We will always question whether the operationalization is a good proxy, but there's a difference when firms lack even the opinion to do so.
You're just saying that abstract nouns have no explicit referent like a brick or apple.
I take exception to democracy as a noun or attribute like 'truth', however. Truth is an abstract attribute. Sometimes a noun. Democracy is a process: two wolves and a sheep voting on what's up for dinner.
> You're just saying that abstract nouns have no explicit referent like a brick or apple.
I think smallnamespace has the right idea but maybe chose some weak analogies in his examples; you've also picked the most well-defined one (although it's popularly contorted into many things, its original definition is actually quite strict).
Here's a better analogy. Actually it's not really even an analogy, just a generalisation that countless people have attempted and failed to define and quantify throughout history: intelligence. Define it; scientists can't, and IQ doesn't cut it. We can quantify some specific, easily measurable abilities, but there are many others we can't: seemingly vague things like intuition, which are vital ingredients in the umbrella of "intelligence", are just too difficult to nail down individually, let alone in concert. Likewise you can measure different aspects of developers (implement this generic algorithm, define the complexity of this code in big O notation), and they can sometimes be indicators, like IQ, but none of them really point to actual brilliance; that can only be measured on the job.
Some will argue against me here, but I don't think these types of things can be measured (except externally, by overall results when applied), because in order to measure we need something to compare to, and everyone's own bag of tricks is nuanced and intricately different from everyone else's in subtle ways... if we all thought exactly the same way but with different "abilities", then everyone would be making the same insights at different speeds and with different effort. That's not what people do; people just think in their own way, and the more unique that is, the more "clever" they can appear.
Where this analogy breaks down is that programming is not all about being clever and unique (although that will of course differentiate you); it's also a craft that it's possible to be good at without being particularly unique... it's quite possible to have one and not the other: you could come up with original solutions to problems but be terrible at architecting and implementing them.
Thanks, and you are right - I digressed into that analogy a bit too much... I am agreeing with and supporting what smallnamespace said. In summary:
"Great developer" Is extremely subjective from both perspectives. Specifically the role, what are you developing? but more importantly the person, clones are not useful in cognitively demanding roles (otherwise it would have been automated already).
Highly subjective things are messy and often impossible to quantify (as smallnamespace was arguing), but his analogies were weak in this respect so I furthered this argument by extending it into the familiar difficulties of defining and quantifying intelligence (also highly subjective by application and individual).
I'm curious about your claim that GI is hard to define (true) and measure (not true?).
GI is whatever the standard IQ test measures. Higher numbers seem to correlate with better outcomes in certain settings. Right? What's controversial about that?
IQ is far from comprehensive; the aspects of the mind it measures are discrete and relatively low level, which is why two people with the same IQ can have different intelligence as perceived by others. As you traverse the higher levels of abstraction it becomes impossible to compare: some abstractions just don't exist in other people's minds, because they are not a fundamental part of the biological structure (subjectivity starts to define the quanta, not biology, so how do you compare something that does not exist in all individuals, not to mention that the number of possible abstractions multiplies as the level of abstraction increases?).
As I said before, IQ is an indicator of potential at best (the potential for the abstractions built on top of it), and even of the low-level abilities it does not include all of them; we cannot externally measure all of them.
Here's another way of looking at it: if IQ _was_ comprehensive, the number of individual questions would be astronomical; it would be impossible to construct or take such a test, because it would have to enumerate and quantify all of the different learned abstractions above the inherently low-level ones, across all of humanity.
Taking that fictitious test a bit further, note that if we imagine all humans actually taking it, their scores would be extremely low, because it's impossible to get 100% (you would need a massive brain that could simultaneously learn all of the _different_ ways of thinking).
> GI is hard to define (true) and measure (not true?)
Doesn't this seem like a contradiction to you? How do you empirically measure something that you cannot define?
Certainly one number is vaguely ridiculous (yet effective for the original purpose), but you are conflating learned models (which are content) with capacity (such as raw short-term memory).
> you are conflating learned models (which are content) with capacity (such as raw short-term memory).
There is no correct definition; however, most people agree that intelligence is more than innate ability. If you are focusing on the latter as a definition then you can indeed measure it, but consider that there are many people with an IQ as high as Einstein's, yet no such deep understanding of quantum mechanics or of any subject.
I would argue that casting out "learned" as something superficial is incorrect; learning is not simply the acquisition of knowledge (as education systems have very slowly begun to understand since the mid-20th century).
I often tell people that before they think about their hiring process, they should think about their performance reviews. You're doing those with multiple orders of magnitude more data, yet most managers will say their performance reviews have a lot of uncertainty.
I spent a lot of time at my last job developing ways to measure performance across my team. One big surprise was that I had to significantly change the way we assigned work in order to make it at all measurable. Rather than giving people well-defined tasks, I switched to assigning projects that had some amount of (expected) business value. This meant the projects had to include things like deployment, talking to marketing, and whatever needed to happen to make it useful. Generally each one was 3-6 weeks of work.
This provided a good first-order measure of effectiveness. Importantly, it helped people change their habits to contribute more business value. I found massive variation in the junior engineers, easily with 5-10x differences between the most and least effective. The differences in senior engineers were smaller, but still ~2x.
Of course, there are other major factors you have to take into account, like code quality and mentoring. I developed similarly detailed systems for measuring those. Once again, you have to change the way you do things if you want to make them measurable, often accepting local inefficiencies for global insight.
With all that data in hand, I correlated that to our hiring ratings. The results? Our two best engineers were both borderline in the hiring process, and would have been rejected except for a strong vote of confidence by our senior engineer, who, when we checked, turned out to have a basically perfect track record on hiring judgments, much better than anyone else in the company, me included.
IME it's hard to precisely rank average devs who are in the middle. There's a lot of clumping.
It's a lot less hard to identify overperformers or extreme underperformers. You mention the big possible range, but I've never had to dive that deeply into data to spot them - I've used the data to check my instincts, but have generally been right. And if you don't trust your instincts, just ask your team! They know whose code reviews they dread, who never offers helpful suggestions, who seems to do just the minimum. It's sometimes tricky (people often don't want to say bad things about their peers), but generally people already know.
I know what I'm looking for in my performance reviews, and so I know what I'm looking for in my candidates. A competent coding baseline, and skills in the non-coding areas (knowing what not to build, anticipating edge cases, ability to distill business logic, ability to explain complex things clearly...). But this list is quite different from the "standard" tech interview loop: algorithms after algorithms with a sprinkling of "design a URL shortener."
* When I took over, I was a lot more permissive about letting people work from home. Upper management pushed back and was worried about productivity. I was able to show that people were getting much more done than before (though admittedly there were other factors, I changed a lot of things.)
* After a few months of monitoring, it became clear that one coworker had a predictable but volatile pattern: he'd be very productive for a few weeks, then very unproductive for a few weeks. So I asked him to take on urgent projects during the "up" periods and avoid them during the "down" periods.
* One particular person improved so fast it was almost unbelievable. They were already strong to start, but after 12 months they were literally getting about 4 times as much done as they had at the 3 month mark. I would never have believed it if I hadn't been keeping close track...and this helped me fight for promoting them twice in a year.
Not everyone is great at interviews. One of the best engineers I worked with was very bad at interviewing (even I wasn't initially sure whether we should hire him).
The problem is that most interviews are very different from actual day-to-day work. Candidates do not perform the same tasks in the same way as they would while working. The same goes for evaluation of results: very rarely is a candidate rated using the same criteria as they would be in a performance review.
Would you mind sharing more details on how you measured the performance of your team? Thanks
The company I work for just started the process of completely reworking how the development team's impact/effectiveness is measured (along with that of each member of the team, as you would imagine), so your comment really piqued my interest.
What did you end up measuring? How did you measure it? We've been debating for the last few days how to measure the outputs and their real business impact without getting caught up measuring the inputs that may or may not really matter (commits, loc, story points, etc...).
For context this is an org that has been, historically, in the dysfunctional area of the operating spectrum, high employee churn, lots of technical debt, no testing to speak of in flagship product, high concentrations of knowledge held by single individuals.
We have somewhat coalesced around the idea of dynamically assigning business impact metrics on a per-feature/product basis (if we build this thing, we would expect to see metrics x, y, z go in said direction). In addition to those metrics we are thinking of also doing something along the lines of an NPS (net promoter score) that would be given to the feature/product by the end user. Taking both of these into account would then score the development team's effectiveness/impact.
In addition to the outputs mentioned above we would also be tracking the inputs, but more as a historical data set, to see if there are any correlations between our inputs (commits, loc, story points, etc...) and better NPS and business impact metrics.
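For concreteness, this is roughly the shape of the correlation check we have in mind once a few quarters of per-feature data exist. It's only a sketch: the column names, the numbers, and the choice of Spearman rank correlation are all hypothetical placeholders, not anything we've actually settled on.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-feature records: inputs (commits, loc, story points)
# alongside outputs (end-user NPS, a business impact score).
features = pd.DataFrame({
    "commits":      [40, 120, 65, 30, 210],
    "loc":          [1500, 8000, 3200, 900, 12000],
    "story_points": [13, 34, 21, 8, 55],
    "nps":          [30, -10, 45, 60, 5],
    "impact_score": [7, 3, 8, 9, 4],
})

# Rank correlation is forgiving of outliers and arbitrary scales, which is
# why we'd likely reach for Spearman rather than Pearson here.
for inp in ["commits", "loc", "story_points"]:
    for out in ["nps", "impact_score"]:
        rho, p = spearmanr(features[inp], features[out])
        print(f"{inp:>13} vs {out:<13} rho={rho:+.2f}  p={p:.2f}")
```

The point would just be to see whether any of the inputs we already track carry signal about the outcomes we actually care about, before we invest more in collecting them.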
I'd love to hear any feedback, experiences, advice.
P.S. Team size is 10 devs, core team of 4 in U.S.A co-located, all others remote international.
> For context this is an org that has been, historically, in the dysfunctional area of the operating spectrum, high employee churn, lots of technical debt, no testing to speak of in flagship product
> P.S. Team size is 10 devs, core team of 4 in U.S.A co-located, all others remote international.
My advice would be to bring in a good dev manager and stay away from trying to narrowly define dev productivity metrics.
A good dev manager will be able to bring you up to average and fix the obvious problems.
If you are part of a larger company then the business side is probably using OKRs (objectives and key results) or something similar to track at a higher level. Start looking at these and making sure your team is contributing to them.
As a senior manager, your team's self-assigned dev metrics are meaningless to me. They're not going to be enough to justify more staff, pay rises, different work, etc.
For code quality, the main thing I did was read every pull request and keep track of how people did, including cases where people improved existing code. I also kept track of times when someone made a contribution to how we thought about quality – e.g. when one person used type-level programming to essentially combine a bunch of tests into one and significantly reduce the amount of code needed.
We had ongoing training – we'd meet every week to go over a chapter of some book – so everyone converged on the same code style which made monitoring quality relatively easy.
With mentoring, I primarily paid attention to what people said about others in terms of mentoring. I also monitored mentoring through code reviews & slack. People's reviews were very consistent – "Bob is the most helpful mentor, Charles is also pretty helpful" – so again, relatively easy in that specific case.
Are you saying there's no such thing as developers significantly better than the norm, or rather that it's infeasible to identify them?
I don't think your point about precisely quantifying the definition of a "great developer" is a good one. I can capture an observation of superior capability in several dimensions without having to rigorously quantify the relative differences in capability or precisely why one is more capable than the other. If one developer accomplishes in a few days what takes another two weeks with code that is at least as maintainable and performant, that developer is better. If that developer is better when put next to most of the other developers you have around you, then they're a "great developer."
Your focus on quantifiable definition seems to imply that significant differences in human capital can only be ascertained mathematically, but that doesn't reflect quite a lot that we're already familiar with in everyday life. If I'm placed in front of two walls which are both much taller than I am, I can see which of the two is taller than the other if the difference is significant as long as the tops aren't out of sight. I don't need to quantify this; I could just say, "One looks to be a lot taller than the other, but I'm not sure by how much exactly, or what their respective heights are."
I'll happily agree that a lot of tech interviewing (and interviewing in general) is dreadfully broken. I'll also happily agree that figuring out which skills to prioritize and how to judge the level of those skills in candidates is very difficult work. But I strongly disagree that the fundamental premise of that work is intrinsically "voodoo." If your bar is a mathematical formalization then of course you're going to be disappointed, but that also holds true for the majority of our professional and personal activity.
A developer becomes a "great developer" when the company, team, resources, projects, recognition, etc., are compatible with that person. Under that logic, I believe any programmer can be great if they desire to do so and find the environment and motivation to thrive.
Most technical interviews fail to find the right people because interviewers and hiring managers usually go at it with an "idea" of what a "great developer" looks like to them. In most cases, everyone ends up hiring people who don't work out and missing out on people who could have become the "great developers" they were looking for in the first place.
I will second this because I've seen it happen, both to me and to other people -- an environment that wasn't a good fit changes and suddenly someone excels, or someone excels and the environment changes and suddenly it's like they can't do anything right.
Also, having clear criteria set up in advance is essential. You should know before you go into the room (or pick up the phone, or whatever) not only what constitutes a pass versus a fail, but also what kinds of things in the interview signal more than just passing. And the pass/fail can't just be "regurgitated an algorithm we wanted them to regurgitate", because that tells you nothing about whether somebody's actually good to work with.
I have a strong belief that the best way to interview a candidate is to take a literal real-world problem the team experienced and distill it down to its essence so it can be tackled in a 2-4 hour pairing session. The candidate should have full access to the internet, a whiteboard, etc.
If the candidate is having to review CS algos and doing whiteboard prep, you're doing it wrong (unless that stuff truly is what the job entails). If the candidate is nervous because they feel like they're giving a dissertation, you're doing it wrong. It should mimic as closely as possible what the actual job will be like.
You're not going to fully know whether or not they will produce in practice, but you'll be able to tell that you can work with them and that's half the battle.
In a sentence, I am saying that pretty much every recruiting process at every company, when it assesses developers, results in a meaningless decision, despite the universal certainty at every company that their recruiting process absolutely definitely results in making accurate decisions about people.
Did the interview process find a great developer? Maybe. Did the recruiting process reject great developers? How would you ever know? I'm pretty much certain that most companies reject many more developers who would be really productive/effective, than they choose to hire.
That still seems like a pretty strong claim. Inefficient? Sure. Misaligned incentives? Definitely. But actually meaningless? If a company turns away many developers who would be excellent hires but consistently meets its product/engineering goals and hires more good developers than bad developers overall, their recruiting process certainly has a positive signal. It's obviously inefficient, but it couldn't be meaningless.
I also don't think it's fair to characterize most companies as believing their recruiting practices infallible. Google's former director of recruiting has frequently talked about how their recruiting processes (and particularly interviewer feedback) seemed very noisy and fairly inconsistent in a variety of dimensions. The common recruiting-page-positive-vibes spiel might indicate a lot of confidence, but that's not what I would use to determine it. A lot of companies also seem pretty happy to try out new programs like e.g. Triplebyte or otherwise innovate on their sourcing processes.
Not a direct response to your comment, but I once sent someone who I thought was a really bright developer to a company, who assessed him and rejected him.
That same developer soon after got a job at Amazon. When I told the original employer who had rejected him, he said "So Amazon are hiring shit people now?".
That hiring manager I have never ever known to question any hiring decision he ever made - he has always been certain that the rejection was correct.
And what I have observed in recruiting is that any rejection decision made by a company is made with 110% certainty.
The typical tech company interview measures things which have no relationship to someone's quality as a programmer or their effectiveness as a member of a dev team. Any success of such a process (success meaning hires someone who turns out to be competent and a good fit in the team) is the result of factors unrelated to the interview process, and probably has a significant component of chance. But it's treated as an effective arbiter of "qualified" versus "utterly and totally incompetent to work anywhere".
(I firmly believe that Joel Spolsky and Jeff Atwood have done more active harm to tech hiring than perhaps any two other people, by the way)
Here are slides from a talk I gave last year with a former co-worker about doing better tech interviews, which go into a bit more detail on the issue:
I really don't think you can declare that any success of the process is due to chance or "other factors." That particular point is completely unsubstantiated. The process is inefficient, yes, but I really don't see any basis to call it meaningless except for personal dislike.
Even if the process is inefficient (many false negatives), if it yields a population with fewer false positives than the general population it is meaningful. This is an important distinction, because many people seem to be unilaterally declaring these hiring practices useless due to dislike and anecdata. Of course they're not ideal and there is a lot of room for improvement, but there's no empirical reason for us to act as though the entire system is arbitrary just because we don't like it.
The primary questions we should be looking at to inject some empiricism into this discussion are how many engineers end up being fired once hired, how long it takes to fill each technical role on average, and how many candidates are turned down. There seems to be a false dichotomy at play, where people are only able to damn the process in its entirety, but it can still be bad and retain some signal. That shouldn't be a controversial point; it should be a minor preliminary observation.
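To put toy numbers on that distinction (every figure below is invented purely for illustration, not data from any real pipeline): even an interview with a very high false-negative rate can leave the hired pool well above the applicant base rate, which is exactly the difference between "inefficient" and "meaningless".

```python
# Toy back-of-the-envelope: what fraction of hires work out under a noisy
# interview, versus hiring applicants at random? All numbers are made up.
base_rate   = 0.30   # assumed share of applicants who would work out
sensitivity = 0.40   # P(pass | good)  -> a 60% false-negative rate
specificity = 0.85   # P(fail | not good)

p_pass = base_rate * sensitivity + (1 - base_rate) * (1 - specificity)
good_given_pass = base_rate * sensitivity / p_pass

print(f"Random hiring:   {base_rate:.0%} of hires work out")
print(f"Noisy interview: {good_given_pass:.0%} of hires work out")
# ~53% vs 30%: the filter rejects most good candidates, yet the hired
# population is still clearly better than a random sample.
```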
Again: the metrics being measured in typical tech interview processes are not the same metrics those same companies would use to measure success on the job. They are not useful predictors of success or failure on the job.
If the interview metrics are that disconnected, there is no meaning to the interview metrics.
> If a company turns away many developers who would be excellent hires but consistently meets its product/engineering goals and hires more good developers than bad developers overall, their recruiting process certainly has a positive signal. It's obviously inefficient, but it couldn't be meaningless.
Actually, that result would be precisely what you would expect from a process that fails to disprove the null hypothesis, and is therefore meaningless. You'd see them hiring essentially a random sample of engineers with the same distribution as the whole population.
I leave it to someone better at statistics than I am to figure out what an actually effective hiring process looks like, but I think the correlation between "is a relatable person" and hiring is much higher than "is a good engineer".
> Actually, that result would be precisely what you would expect from a process that fails to disprove the null hypothesis, and is therefore meaningless. You'd see them hiring essentially a random sample of engineers with the same distribution as the whole population.
This implies that the capability distribution at Google is approximately equal to the capability distribution of the entire set of eligible developers. I don't personally think that's true, and in any case even if it is true it's not obvious.
Do companies really believe their hiring processes are all great? From people posting online it seems like everyone knows their process sucks but doesn’t have ideas for improving it
> the universal certainty at every company that their recruiting process absolutely definitely results in making accurate decisions about people
The companies I've worked at have always been very aware their interviewing process could be better, and so I've been looped into many, many conversations over the years along "how can we reduce false negatives" or "why are we rejecting all our senior candidates?"
This is the case for basically all superlatives that apply to human emotions, though.
What's a "great company to work for"? It's the company you want to keep working for. That's not going to be the same for every person - I have had great experiences at companies that are commonly known as "great companies to work for", but know coworkers that hated them.
What's a "great investment"? It's something you can sell for a lot more than you paid for it. Why can you sell it for more than you paid for it? Because the person you're selling it to also believes it's a "great investment", and so you have a circular definition.
What does it mean to "Make America Great Again"? Half the country believes we're doing it right now, and the other half believes we're in a bizarro throwback to the 1950s that isn't going to end well for anyone.
So what does it mean to recruit a "great developer"? Well, it's when you hire the person that you wanted to hire. Yes, it's subjective - it's always subjective. That doesn't mean it has no meaning, though, because people ascribe meaning to subjective qualities all the time.
The distinction I'd elucidate, beyond the gp's, is that "great" is a problem both because it's a superlative and because it can't be quantified.
You can ask for a developer who produces a large amount of average code. You can ask for a developer who produces very few terrible errors. You can ask for a developer who produces decent code and happens to know domain X very well. You can ask for a developer who can mentor younger developers. You can ask for a developer who needs virtually no supervision to succeed. And so forth. They could all be different people. Perhaps you can sift until you somehow find someone who's all of that, but they'd be a fool to join you, because you're looking for perfection rather than looking to fulfill your actual needs.
Superlatives are a terrible way to make any kind of serious decision. Salespeople love them specifically because they tend to be unmeasurable, and irrelevant when they are measurable.
Thus the more superlatives enter a decision process, the more you can tell you're being sold shine rather than substance.
Not only that, but is anyone really "great" across multiple/all domains? I've worked across a pretty diverse set of businesses and functions in management, and I can say pretty confidently that "greatness" is very dependent upon fit for the problem space, and I do mean that to work both ways: Fit is incredibly dependent upon the individual giving a shit about that problem space. No doubt someone could be very good working on something they don't care that much about, but we're talking great here, right? That means that talking/writing in broad strokes about great developers is nearly impossible.
I also don't think you need "great" developers in most roles. Sometimes average is exactly what you need, and hiring above that just leads to having a great person leave because they're not particularly challenged or enthusiastic about the role.
And to wrap all of that up, I don't even know that I can objectively define what "great" really means anyway, so maybe I have no place in this discussion :)
>Fit is incredibly dependent upon the individual giving a shit about that problem space.
Amen.
That should be obvious, but I don't think I've ever had any recruiter or phone screen where they acknowledged that in any way.
I responded to a recruiter recently by asking what the company actually did, and they reacted like I was just hamfistedly demanding the company name or prematurely trying to research for the interview, but I really just wanted to know - what would I actually be contributing to the world in this job, never mind the technical challenges, the buzzwords, and the paycheck.
It's like everyone is assumed to be at best apathetic to the larger context of their job, apart from delusions of grandeur about "changing the world".
I've posted this a few times before (taken shamelessly from tokenadult), but I'll post it again:
There are three main criteria for hiring (in knowledge work environments):
1. General mental ability (are they generally smart?)
2. Work sample test (of real, representative work, no hazing!)
3. Integrity (because 1 and 2 don't matter if you hire a psychopath or narcissist (cough)).
For 1 and 3 there are well known tests and you can find psychologists to administer them. Apply these tests evenly to everyone, even the people you're sure you want.
The other approach you can take is the IBM/'60s corporate approach, which is to apply the above approach to the "office girls", i.e. the general population, and train the smart ones for the job. This can work surprisingly well if you have good people in leadership.
I strongly agree. This problem applies both to the interview process and to evaluating the performance of engineers after hiring them.
A lot of companies try to use supposedly "data-oriented" approaches to hiring. They assume that a "successful" interview process hires candidates who get good performance reviews a year later. But those reviews are just as subjective as the hiring process.
If your metric for "success" is whether you still like someone a year from now, you might as well just hire whomever you like today!
I still think you can have an effective recruiting process which attempts to identify someone's attributes across a range of criteria - I attempted to identify some of those characteristics here: http://www.supercoders.com.au/blog/50characteristicsofagreat...
You would fail entirely in your recruiting effort if you just hired "whomever you like today" - you must make some attempt to measure a person's capabilities.
What I am attempting to dispel is the idea that it is possible to measure someone as a "great developer" because it is a matter of opinion - person A may think someone is a "great developer", person B may not.
I also think that pretty much every recruiting process at every company is "Voodoo Recruiting", or "Recruiting Theatre": the appearance of rigor and a scientific approach, but really just a show that allows the assessor to feel like some scientific, justifiable process has been executed and has resulted in a "correct" hiring "go/no go" decision, when in fact it does nothing more than make people feel comfortable with making a subjective decision in the end.
Whether or not someone is a great developer also changes according to time and context. What I mean by this is that some developer X who got amazing results in some job they had might be judged a shit developer when put into a job where they weren't inspired by the project, were working with unfamiliar technologies, were in a life phase dealing with depression, or where their colleagues or boss treated them without respect. Suddenly the "great developer" from some previous context is seen as a shit developer in some other context... again, it is simply not possible to evaluate a given person in a job interview and determine if they are "a great developer".
I had two jobs doing maintenance on very large (millions of LoC) projects that brought in significant revenue. There, success was making sure your code was bulletproof, and resisting the temptation to do any refactoring. Everything should work as close as possible to how it had always worked. So a good fix was a very specific if statement that covered a very small edge case.
This was a very different measure of success from the couple of large enterprise companies where I worked on new projects, where success was managing 4 different stakeholders with conflicting sets of requirements, and meticulously following large sets of rules and processes around code.
Which was different again from building an MVP for a startup, where shipping as much code as possible mattered, technical debt didn't matter, breaking things was OK, and there was only one stakeholder to manage.
Sometimes you can find one person who fits all these roles, but there are a lot of developers who would be great developers in some of the roles, and not others.
You create a strawman. Nobody denies what you claim. The issue is not at all the existence of "great people" (for anything).
The problem is how do you find them. You have a thousand people in front of you. Now please make a prediction that actually works reliably for big numbers of people.
Your argument suffers from the same problem as any of the others that always come up after something happened: "why didn't they...". You look backwards. You make a "prediction" about something in the past, and of course you get 100% accuracy and think "that's easy". Okay, now please make that prediction for the future. You have people in front of you whom you know zilch about. Now make, en masse, statements about their future work that will still look good when looking back from the future.
About your
> Was Heaviside a great engineer?
Now please make that prediction about his greatness for the same person before it turned out that he was good. And not just for one person either; do it for thousands of not-famous people with some degree of consistency.
And then report about all your predictions, not just the few that worked, and don't forget to mention if you did some extreme filtering beforehand in order to ensure you get a lot of "hits", leaving a large part of potential candidates without any chance to begin with. That kind of thing works for some (like so many "recipes for success") but not for society.
> You have a thousand people in front of you. Now please make a prediction that actually works reliably for big numbers of people.
The simplest answer is to predict that 0 of the 1000 are great developers. That will be reliable for most cohorts of 1000 people and most active industry applicants for developer positions.
You can only reliably find greatness by filtering through many college applicants, training and mentoring them, and keeping, motivating, and incenting the best, or by using networking and references to find the standout performers. It's so rare for these industry-experienced people to need to apply for a job cold that, in my experience, the cold-applicant pool is a tremendously adverse one from which to select if greatness is the goal.
If good enough is the goal, there are plenty more of those devs available.
So your plan would be to hire all the college applicants that applied (or a random sample) and then mentor and train them to see who works out the best?
No, you hire the ones who seem interested, skilled “enough”, and intelligent (possibly in reverse order of importance). You can very safely make a cut against some very likely poor candidates.
But yeah, you have a light filter and then let them grow and evolve. You can’t possibly learn in 4 hours anywhere near what you can in 2 years. In college recruiting, you’re looking for “could be solid” and you get the raw material for that a lot. Some happen to turn out great.
Finding greatness in industry is driven mostly by asking some of your best employees to search their networks for their best ex-colleagues and otherwise by dumb luck (someone tried a startup that went bust and no one in their network had an opening, someone is moving to a new city with no network, etc)
Those three traits are exactly what I use for college recruiting. I guess some people look at great developers on an absolute scale but I consider it on a relative scale within the role under consideration. I consider those three traits make a junior candidate great. Actually those same traits are important for all roles. It is just that for junior roles I don't expect much beyond that.
I'll chime in with the sister posts expressing vehement disagreement.
I can tell you what "great" means to me, in 2(.5) very clear words.
"You (fucking) Ship."
To me a great dev has a track record of delivering the end result. A greater dev delivers the result faster, with less debt attached.
One can fairly say that this is still subjective, but my criteria are _quite_ meaningful and even somewhat empirical/measurable, although it's admittedly hard to do in a short interview format. I certainly know a "Great dev" when I've met/worked with them, because they ship faster than I do, with less friction or need to revisit post-factum.
Edit:
Since half the responses seem to elide a very key part of my statement, let me echo: _debt matters_. Having someone who ships but burns your team down, not good. Someone who ships but leaves you with years of bugs or unmaintainable code, not good. As I'm sure we all know, you get optimizations against the KPIs you measure, and if you don't measure "delivery" holistically, you'll absolutely get people who excel at one component while salting the fields.
Trying to map your statement to my own experiences with coders who have mojo but whom I still don't want to work with: the debt in those cases was often social, organizational, team-wise, etc.
I'd be curious whether, if you read my statement about "delivery with debt attached" to encompass those "soft" types of debt, you have still found outliers.
There are things that you don't use as your single, one-dimensional metric, but they are still pre-requisites. "Getting things done" is always a pre-requisite. It's like salary from the other side; you don't (I don't) rate jobs by ordering them all by salary, but it has to meet a threshold (which may vary based on other factors).
A great dev can even be great at fixing problems/performing ops/maintenance, by my same qualification. _They ship fixes._ Perfection has very little to do with greatness. Aspiring to it might, but that's a second discussion about unrealistic goals and setting oneself up for disappointment.
You're the one who said you could describe "great" in two words that didn't include maintenance or quality. If it turns out those two words aren't enough, or need lots of asterisks and clarifying words, that's on you.
I don't get the feeling that you and the parent are taking my comment in good faith; clarification is of course necessary if the parent response is effectively a non-sequitur/strawman vs what the other respondents took my meaning to be; but I'd rather not point fingers here, as that's not useful to a productive discussion.
I chose not to use additional clarification because that unnecessarily constrains "shipping" to me. One can deliver value, and have a track record for delivering value, across a very wide set of variables, and I've found my heuristic to be far more elegant (if perhaps not precise enough to bear the rough seas of internet discourse) at mapping post-factum "Was this a successful business relationship" than a much more hair-splitting definition, as well as helping me keep personal biases out of my judgement re: someone else's success.
> I don't get the feeling that you and the parent are taking my comment in good faith.
I'm only playing by the rules you yourself set out. People in this thread were discussing how measuring "greatness" is subtle and very difficult, and then you came in asserting you could solve it with a snappy 2.5-word mantra. If you're now claiming that additional clarification is needed, well, yeah, that's what everyone was saying to begin with.
The problem honestly seems like less of a debate about defining greatness and more about defining shipping, at this point.
Maybe this is nitpicking, but I've had this conversation in person more times than I can count during loops, review cycles, and over beers, and I'm hard pressed to think of the last time I got such pushback against something that seemed pretty cut-and-dry; namely "did you get done what you needed to get done without undue pain and suffering."
I'll openly concede that I very well could be in a "communication bubble" where words like shipping have loaded context. I'd still defend my point that if one chose the isomorphic terms for within their space, the "intent" of my message holds water as a useful heuristic, if a rather reductionist one. Tautologically, someone who I can trust to fulfill their role without "fires everywhere", two thumbs up in my book.
That being said, I'm honestly blown away by the amount of downvotes I've been getting for what I typically saw as the pillar of "meritocracy": that you get your job done without burning down the house. I wish more of the opposition would at least take the time to express _why_, as opposed to just burying this. At this point I feel like I'd do better to "save my account" and stop commenting, but alas, this is a topic close to my heart.
Are you sure we're not debating pedanticisms here? Clearly he delivered _something_ that let you derive those gains. Whether it was a new feature, an algo/data analysis that pointed out optimizations, whether it was 2 years training new employees who ended up being powerhouses, whether it was simply being a good lead for his team and running interference/making sure they're productive, to me those are all forms of delivering a result.
Nostrademons put it better than my perhaps-too-succinct summary. You needed this dev to do something for you, and they accomplished it. They shipped. (This gets even more vague when you get higher level employees who set their own mission, but at the end of the day you're still judging them by some metric of long-term success. Do they have a track record of meeting that? Then yes, they're probably pretty solid by my standard.)
And for the record, yes, my peers who "just ship" have regularly returned <large number> values here. I have the advantage of working for a bigCo so it's easy to generate outsized returns; so I'd say to not let the number distract from my point. I want devs I can rely on without babysitting, open and closed. Over the last 10-15 years, I've come to realize that nothing has come to mean as much to me as knowing I have a team I can honestly _trust_ to get the job done. That's pretty exclusively why I define "great" the way I do; nothing else feels quite as good to me nowadays as when a peer says "I got that handled" and I just know 100% it is. Give me a team made up of people like that and I'd tilt at any windmill :)
People are poking holes in your machismo bullshit claims about software development, and then you redefined your initial claim to simply be that great developers "do their job." It contains zero information.
Given that you:
1. consider an attempt at simplification "machismo bullshit",
1.5. don't realize the irony in responding in that fashion,
2. consider "doing one's job without creating debt" to contain zero information, and deride my attempt to engage others in discussion,
and 3. are taking this aggressive and ad hominem a response to said discussion, I'm very glad we apparently live in different worlds of software development.
I respect that you'd take the time to tell me why you disagree. I don't respect the way you've conveyed your stance.
> Are you sure we're not debating pedanticisms here?
I'm not sure. Maybe.
> Clearly he delivered _something_ that let you derive those gains.
> to me those are all forms of delivering a result.
But then by "shipped" you simply mean productive over the course of their employment - that's a pretty context-sensitive metric to hire by.
For example I've found through my own failures that being highly productive at a big corporate does not translate to being highly productive at a small startup.
To the latter point, I certainly had a similar experience, and that's in fact why I value track record so highly. It took a very different set of skills when I made my own transition from contracting -> academia -> bigCo, but I'd loop this back to my "thesis", namely that someone observing my weakness after each step would rightfully observe I don't have a track record within that context.
To some extent this seems logical: would we call a home carpenter "great" within the context of industrial skyscraper construction? The moment you step back from a rather tight focus (and this may be the root question/debate about "what does great mean", and why my approach has generated some highly opposed responses, because I try to dodge that problem by limiting the scope), any commonalities become much harder to define, and limited mostly to "meta-skills."
So basically, you're right; for me it is very context sensitive, and as I said upfront, I have no silver bullet for identifying greatness without digging into that context quite a bit. In prior threads about the trouble with interviewing I've been very eager both to share the techniques I use and to find out how others approach the problem, since summarizing after the fact what I found valuable is much, much easier than determining it in a loop. I would love techniques that let me determine how well someone's skills in context A transfer to B, but that is a _massively_ dimensioned question that I can't even begin to hand-wave at, beyond that it might go back to my "meta-skills" categorization.
This is one of the signals a large percentage of video game companies use. I'm not sure it's that great, but here we are - at a lot of small shops this led to a Lord of the Flies mentality around mentions in the credits, from what I was told.
I can remember working on a shipped product that had a lot of bugs; it was one guy's several-hundred-thousand-line DOS program in one C file that I had to fix bugs and add features to - the guy quit before stooping to fix bugs/add features requested by the customers. Did get to practice a lot of refactoring on that one.
Want to know a great strategy for hiring lemons? Ask only Leetcode-style questions. Don't ask anything relevant to the job. Don't ask about past experiences, architecture, tech-specific questions, or anything like that.
Only. Ask. Leetcode. Questions. If it's not a stupid algorithm disguised as something cute like a dog jumping over a river, then don't ask it.
You will be in the exact situation this article describes: Passing up on good engineers, and not being able to tell if the people you do hire can actually perform on the job.
I hold Joel Spolsky in very high regard, but I just think he's fundamentally wrong here. I've seen (first hand) many times when great developers stagnate at various companies and they leave. This can happen for a few reasons, but we're deluding ourselves if we think every single job is "solving the big one" -- no, sometimes we need people to write the plumbing, the boring stuff, the unit tests, and so on. Great developers, more often than not, get bored. If they have any sort of intellectual curiosity, they will leave. In most cases, corporate programming is just not very interesting. So first, we have the curious geniuses. I know many of these people that hop around from job to job and they say "meh Google sucked" or "Facebook was boring so I'm at this new startup now."
Second, we have the people like me. I hope my hubris isn't showing, but I think I'm a pretty decent developer. I wrote a few books, had some contributions to a few big-boy OSS projects, am in the top 1.5% of Stack Overflow (even though I haven't really been contributing for like 4 years now), etc. I also built things -- many, many things -- some have a few hundred stars on Github, many have none. My goal in life is to "do my own thing" and, eventually, get that FU money. That is never going to happen working a regular 9-5 job and, in my early 30s now, I've completely accepted that. So I work for a few years (2-3) and then take a sabbatical where I dedicate my entire time on a project or two, trying to get it off the ground. When it inevitably fails, I go back to a "regular" job. The gap, contrary to popular belief, doesn't hurt you. When you explain and show what you built, people are more likely to be blown away rather than derisive.
Finally, we have people like Max Howell who famously got rejected by Google even though literally everyone at Google uses his software. Max Howell is most definitely a great developer, and yet Google tripped all over their own corporate shoelaces. Let me put it this way: if I did a startup, I would 100% want Max Howell on my founding team. I mean just take a look at not only the code quality, but also how PRs/reviews are handled in homebrew.
> Great developers, more often than not, get bored. If they have any sort of intellectual curiosity, they will leave.
This is a good point. When you're skilled and have a good network, there's no point in putting up with a bad manager or an uninteresting or underpaid role.
This kind of bias is a prime example of people valuing their own preconceived notions of how the world works to the point where they are incapable of updating that belief with hard data.
What do you believe is so special about swapping branches of a tree? Couldn't you just as easily say "if you can't make and maintain something more than a few thousand people use reliably, then you're not a great developer?"
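To put the question in perspective, the whole exercise is a few lines. A minimal sketch in Python, with a toy Node class since the original problem statement isn't in front of me:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def swap_branches(node):
    """Recursively swap the left and right child of every node."""
    if node is not None:
        node.left, node.right = swap_branches(node.right), swap_branches(node.left)
    return node

# Example:      1                1
#             /   \    -->     /   \
#            2     3          3     2
root = swap_branches(Node(1, Node(2), Node(3)))
print(root.left.value, root.right.value)  # 3 2
```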
So, is your point that if someone can't do something that's "simple", then they're not "great"? I've seen Nobel Prize winning physicists make trivial math errors. I'm talking adding 8 + 9.
If you can't say that it is true for all cases, is it not worthless as a metric?
It wasn't that he screwed up a minor thing off the top of his head, he had fundamental misunderstandings on how to approach the problem. I doubt that's the case for your 8+9 example.
What exactly is your point then? Do you not think "great" people occasionally have "fundamental misunderstandings"? Who determines what is simple? Who determines what's fundamental? I also find it strange that you give the physicist a pass in my example, but not the developer.
If someone makes a mistake adding any two 3 digit numbers in their head are they not great? After all, there are people in grade school who can do this easily.
There's a huge difference between messing up a small piece, and not knowing how to even approach a trivial problem in the field. Do you not see the difference? Like, this is way more like Francis Crick asking 'what's a cell?' than screwing up a math problem.
Hahaha, I totally agree. In fact, despite having written a half dozen CAN device drivers professionally, I would be an absolutely terrible choice for a team of 'great mechanics.' Hell, I pay someone else to change my oil.
You can do really good work adjacent to a field, while not being a great contributor in the field that you're adjacent to.
I wrote some great software for a major corporation, and saw my income triple. I went into a google interview and also failed, and was rejected. Mostly for missing edge cases on a few problems. What would you classify me as?
Everyone wants a Ferrari, but most work is ordinary enough that all they really need is a Kia. It's very unlikely that you're the next Google etc., better to hire according to your needs and stature. Maybe you know what you're doing, and if you need a Ferrari to get it done, get a Ferrari.
But let's be honest, most jobs out there are "sell this to people, faster". How much intellect are we supposed to spend on that? I digress. All you really need for that shit is a Kia.
Also, it's really hard to hide both competence and incompetence. I wouldn't worry about passing resume screens or coding interviews if you can actually make something. If you have the skills to make things, you can even start your own business.
Most people are average skilled, though. To be honest, that's all you really need. Let's not pretend otherwise.
I was really enthusiastic about a recent interview, because the process started with a take home coding assignment that I thought I did well on and got me invited for a 4+ hour interview. But then the in person interview session ended with asking me to write a simple program in an unfamiliar IDE and language while they talked at me - they called it "pair programming". I am utterly unable to work anything out if I can't have intermittent peace and quiet. So, the verdict was accurate (in a tautological sort of way) that I wasn't a good fit, but clearly they have preconceptions that are preventing them from hiring useful people.
One thing to keep in mind though, is that whenever you're dealing with a lot of similar trials of something, avoiding all the big mistakes is more important than getting all the big successes. I think this applies to hiring people, choosing investments, relationships, probably other things. So having irrational reasons for rejecting people is less harmful than having irrational reasons for accepting people. Which is probably an important reason why free markets don't automatically eliminate the prejudices we consider unacceptable.
Did they tell you it was going to be pair programming or was it a surprise? I find things like this are better to practice beforehand or if one knows beforehand one would prefer not to do it, to just reject it before it gets that far along in the process.
"Pair programming" in an interview wasn't something I'd seen before, so I didn't prejudge it. I'm still not sure what to think of the exercise - because it may or may not resemble what is actually done on the job.
From a previous HN thread on pair programming[1]:
"Both partners must be fairly versed in the language and methodology of the programming."
I got into the weeds with the minutiae of the editor pretty quickly. Figuring that out and the language and the program in a few minutes while maintaining a conversation about all of the above is far beyond my limit of simultaneous tasks. It's not about the difficulty of any of them, it's about the contention for locks, if you will.
Also from the article it links to:
"Navigator knows the system well while the driver does not" is described as a fairly good situation. This jumps out at me because (presuming I was the driver and interviewers were navigating) in actual driving situations, I really don't function well with someone telling me what to do and carrying on a conversation at the same time as I the driver am trying to exercise higher brain functions.
It may be there is a segment of the programming population that uses mathematical processing brainpower for programming, and another segment that uses verbal processing power, and the disconnect here is that I am in the latter category. Thus, while talking is very helpful to me to work out a problem, I can't talk about one thing while doing another because they both require my verbal faculties. Maybe people who multitask better are using different parts of the brain that are inherently more independent.
I've done it before, and it can be difficult to get into the flow state that comes more naturally to me when working alone. But like all things it takes practice to acquire and maintain the skill, and also, as you say, familiarity with the tools and problem space; otherwise it's not a very good assessment of your ability beyond that one very particular set of circumstances of going in blind on the tools/editor!
I do freelancing. Three times in the last few months I’ve had to sit in front of coderpad to attack a problem I’m not familiar with and come up with a solution—with one or two folks watching, the clock ticking, my family’s livelihood on the line, next to ~two hundred grand in a suitcase. I write well tested code every day but can’t perform under these conditions.
It’s basically the scene from Swordfish, and just as ridiculous.
Two of the three groups seemed to realize it’s a poor test, didn’t have high expectations, but went through with it anyway. One seemed to think I was incompetent because I didn’t have his trick memorized, the one about using keyword args to make closures in a loop work in Python.
I don’t write clever code on purpose, and certainly not enough to have it memorized to use under the gun.
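(For what it’s worth, the "trick" in question is presumably the usual workaround for Python’s late-binding closures in loops: a default keyword argument captures the loop variable’s value at definition time. A minimal sketch:)

    # Without the trick, every closure sees the loop variable's final value:
    fns = [lambda: i for i in range(3)]
    print([f() for f in fns])        # [2, 2, 2]

    # The keyword-arg trick snapshots i when each lambda is defined:
    fns = [lambda i=i: i for i in range(3)]
    print([f() for f in fns])        # [0, 1, 2]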
The 'shortage' is a myth and I blame time-wasting hiring practices such as these. I'm thankful a smart guy like Luu is highlighting the bullshit.
> needing a weird combination of skills, can be solved by hiring people with half or a third of the expertise you need and training people.
I'll add that companies don't mention their internal tools/libraries/current code base (which is most likely of pretty crap quality and poorly documented), and that is much more of a hurdle than whatever language/toolset they put in a job ad as required experience.
This is a frustrating part. Even if you ask about it in an interview (there's almost always internal tools/libs/code), there's not really any compensation if they lie about their code quality and then put you on that stuff full-time.
Rewriting that stuff is not great for a career. It's always been the stuff I've chosen to leave out, because I don't want to signal to anyone that I'm willing to rewrite your stuff full-time.
"Joel's claim is basically that "great" developers won't have that many jobs compared to "bad" developers because companies will try to keep "great" developers. "
This is not what Joel said. Joel said that great developers are not on the market very often. They don't need to go on the open market to find their next job.
Most teams should not waste their time searching for a "great" developer (whatever that is, exactly). Instead they would be better off hiring the first candidate who comes along whom they are reasonably confident will soon be a productive contributor to the team's goals and makeup, and then moving on with business. Everyone involved in the recruiting, screening, and interview process should keep this objective in sight, and resist any temptation to get sidetracked by silliness like "pedigree" and "trendiness".
What I've tried to do over my career has been to hire people who showed the ability to learn and grow, and then offer them the opportunity to step up to challenges. What I've gotten has been a rather bad mixture of incompetence, cluelessness, etc. They were eager to pass the interviews and get the job. They were not eager to do the work under the timelines we needed.
This isn't just devs. This was sales, ops, and finance.
I don't believe in any hard and fast rules about good vs non-good. I do believe that past performance is no indicator of future ability. That at least has been my experience.
Even more relevant, pedigree is of no value in estimating potential success or failure of an employee at their mission. The best dev I'd ever worked with had a GED (high school equivalency diploma), and never went to college. The worst had a top-of-heap pedigree.
Hiring is hard. You have to be an optimist. In the case of a small company, a bad hire can be a fatal mistake.
If everyone you hire in every department is "not eager to do the work under the timelines you need", then perhaps your timelines are not realistic and you either need to adjust them, hire more staff, or both?
A commander can blame his soldiers for losing him the war, but is it more their fault or his?
This is sort of like the venture capital market. Entrepreneurs will rank their investor list based on each investor's perceived value and work their way down. Most VCs simply never see the great companies, which is why top firms consistently outperform while the overall asset class has fared poorly. In short, deal flow is everything.
I think there's a really key point with nontraditional developers. I'm also terrible at interviews because of both anxiety and lack of formal education (I can usually get optimal answers, but come across as "not confident" in them), but consistently get really high ratings and reviews from wherever I work.
I've been thinking about this for a while and have an interesting idea -- feedback welcome!
Imagine a fully open company -- code, tools, library, marketing data, strategy, etc. Literally anything about the company you can think of is out in the open and ready to interact with, except for the underlying database data.
Following along? Cool. Now, imagine all of this stuff being on some open repository (not necessarily Git, but for the sake of example let's suppose so). As employees create their backlog of items they add four properties:
- A 1-100 score on how much difficulty they would have completing the item, 100 being extreme, 1 being none
- A 1-100 score on how much difficulty they think an entry-level employee would have with the item. An entry-level employee would be clearly defined as the lowest level on their ladder, e.g. Software Developer I.
- A time estimate for how long it would take them to complete the item.
- A time estimate for how long they think an entry-level employee would take.
So, knowing all of this: wouldn't the ideal hiring process simply be to take the items, pick the ones that should take a reasonable amount of time (less than 8 hours) and have the highest reported entry difficulty, and then scale that by the average self difficulty? With this system you could even anonymize the dummy interview pull requests and have employees rate their quality. Differences in quality and difficulty could be a positive signal (e.g. high quality on a low-difficulty item is good, obviously, but high quality on a low-difficulty item completed in less time than the average employee estimate is even better).
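A rough sketch of that selection logic, with made-up field names (the scoring is just one guess at what "scaling by the average self difficulty" could mean):

    MAX_ENTRY_HOURS = 8  # "a reasonable amount of time"

    # Hypothetical backlog items carrying the four properties described above
    items = [
        {"id": 1, "self_difficulty": 20, "entry_difficulty": 70,
         "self_hours": 3, "entry_hours": 6},
        {"id": 2, "self_difficulty": 55, "entry_difficulty": 90,
         "self_hours": 5, "entry_hours": 12},
        {"id": 3, "self_difficulty": 10, "entry_difficulty": 40,
         "self_hours": 1, "entry_hours": 2},
    ]

    avg_self = sum(i["self_difficulty"] for i in items) / len(items)

    # Keep items an entry-level hire could plausibly finish in under a day,
    # then rank them: hard for a newcomer, but easy relative to current staff.
    candidates = [i for i in items if i["entry_hours"] <= MAX_ENTRY_HOURS]
    candidates.sort(
        key=lambda i: i["entry_difficulty"] * (avg_self / i["self_difficulty"]),
        reverse=True,
    )

    print([i["id"] for i in candidates])  # items to hand to the candidate, best first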
---
So the purpose of this is less to identify who is objectively great and more to figure out how good the candidate is relative to people in your own organization. Personally I think finding an objective measure is an intractable problem.
Domain knowledge is the uncontrollable here. A junior is still probably a junior 3 months in, but they are a lot faster because of their domain knowledge (codebase, business rules, etc.). On day 1 (i.e. your proposed interview) there would be little observable difference between various experience levels/qualities of engineer, since the overhead of domain knowledge will dwarf anything they already have in their head. That is, unless the problem you pick is so isolated that there is no domain knowledge required. In that case you have expensively reproduced fizzbuzz, without the mass memorization of code-golf-style answers.
I think it's possible for both of these to be true if you back off of the extreme positions.
1) Great developers are seldom looking for new jobs, but
2) There are a ton of inefficiencies in hiring that make finding great developers possible if you think outside the box.
The problem with this is that so many companies (especially startups with young CEOs) have _no idea_ how valuable an employee is. They negotiate hard and underpay often because that's just a general prescription in business. From my personal experience, "great" developers change jobs all the time.
The article paraphrases Spolsky incorrectly: great developers don’t look for jobs more than 4x in their career not because they stay in their jobs longer (i.e. career length / 4), but because they are headhunted for most of their jobs, so they don’t have to look themselves.
The "Bob" scenario happens all too often. Sometimes the "slow and sure" developer gets a lot of grief from pathological management that doesn't realize that developers who seem faster as short tasks might go in circles for years on bigger projects.
Whether your personality negatively/positively impacts your effectiveness does matter. If people were robots or Spock or pure math then this wouldn't be an issue.
I will say that the reverse is true as well. The culture of a team can either increase or decrease the effectiveness of a developer.
This is just not persuasive - here's a key point from it:
"If we put aside Joel's argument and look at the job market, there's incomplete information, but both current and prospective employers have incomplete information, and whose information is better varies widely. It's actually quite common for prospective employers to have better information than current employers!"
Ok, sure, maybe there are employers who don't know what they've got. But just based on the raw amount of information on either side, all else being equal, current employers should have a much better idea of actual performance.