I'm always curious about the senior engineers who "can't code their way out of a cardboard box". I often wonder if I'm one of those, or come across as one of those.
Can you give an example of something you've seen. I just keep hearing about people who can't do fizzbuzz, but how bad are people really? Or what is it that makes these engineers so bad? How have they lasted in the industry?
They're really bad. I've interviewed people who couldn't fill in a simple FizzBuzz even after I'd defined the function body and return type for them. They last because they can last; a lot of these jobs really just require code monkeys. I have seen if statements literally hundreds of words long written by these kinds of people.
That said, I feel you. To be honest, I've managed to impress everyone I've worked with. Most were people I met at hackathons, people who wanted to bootstrap their startups, or general referrals. On the other hand, I've failed most interviews I've taken (with a few noteworthy exceptions).
It once took me until after the interview to figure out that the guy just wanted to find out whether I understood how variable scopes work.
A lot of interview questions are questionable at best, and most code challenges are even worse: they test exactly the way universities teach programming. The problem is that universities are bad at teaching programming. I know a lot of really awesome people who went to university because they thought knowing the subject they were studying was a good idea, but it turned out to be the opposite.
But one lady once asked me, "Is there a special command to find out where traffic goes?" I said route. "Is there any special flag you would use?" she continued. What she wanted to hear was route -n.
I'm pretty sure that in those kinds of interviews they think I'm an idiot.
I have two examples to share. A simple whiteboard question I ask is: "Write a function to compute interest on your money. You have $X, and I'm going to give you Y% interest for Z years."
I'm generally happy if they don't ask any questions and write a simple interest calculator. Most people ask some questions about when interest is calculated during the year. Then, if the questions haven't gotten us there yet, I ask them to modify it to a compound interest calculator where you can also contribute some money every year.
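For reference, here's a minimal sketch of the kind of answer I'd accept. I'm assuming annual compounding and end-of-year contributions; the function names and signatures are mine, not anything a candidate has to match:

```python
def simple_interest(principal, rate, years):
    # Flat interest on the original principal each year.
    return principal * (1 + rate * years)

def compound_interest(principal, rate, years, contribution=0):
    # Interest compounds once per year; a fixed contribution is
    # added at the end of each year (my assumption about timing).
    balance = principal
    for _ in range(years):
        balance = balance * (1 + rate) + contribution
    return balance
```

The follow-up modification is literally one extra parameter and one extra term in the loop, which is exactly why it works well as an incremental whiteboard question.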
I've never had anyone who didn't understand the concept or what I was after. Or at least, I've never had anyone who admitted to still not understanding how it worked.
Two people stand out.
One of them immediately tried to write a recursive function and got lost. Then they started over and tried to write a one-liner to compute compound interest without looping. When that failed, they started over with a for-loop and failed to return in the proper place: the code returned a value, and the immediately following line was more calculation.
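For what it's worth, the one-liner that candidate was groping for does exist: the standard annuity formula gives compound interest without a loop. A sketch, assuming end-of-year contributions (same assumption as a loop-based version would make):

```python
def compound_closed_form(principal, rate, years, contribution=0):
    # P*(1+r)^n grows the principal; the annuity term sums the
    # end-of-year contributions, each compounded for the years left.
    if rate == 0:
        return principal + contribution * years
    growth = (1 + rate) ** years
    return principal * growth + contribution * (growth - 1) / rate
```

Knowing it exists and deriving it cold at a whiteboard are very different things, which is presumably why the looping version is the expected answer.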
The second let me explain the problem and then said "I'm sure there's some library call for calculating compound interest, I'm just not sure what it is." Full stop. I explained that I wasn't asking for the library call, I was asking them to work up the algorithm and write code that did it.
They repeated the statement that there was a library call for it but they didn't know it and that if the interview was going to be about the specifics of library functions then it wasn't going to be a good interview.
In both cases, these people passed a phone screen focusing mainly on experience, algorithms, design, etc. Since then I've adapted the phone screen to cover more of Steve Yegge's points [1], and I now include a small coding question on the phone.
I don't think my system is perfect, but in the last round of hiring, the people who made it through the screen were almost all good technically; those who weren't offered positions missed out because of fit with the team. I'm sure a few people who might have been good were filtered out by having to write code over the phone, but at the time I was willing to risk that versus the time commitment of having the rest of my team do on-site interviews for someone who could be DOA.
"Engineers" who use so many nested ifs that I have to scroll horizontally on my 2560px-wide monitor.
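The usual cure for that kind of nesting is early returns (guard clauses). A hypothetical before-and-after sketch, with made-up field names:

```python
# Hypothetical deeply nested original:
def ship_order_nested(order):
    if order is not None:
        if order["paid"]:
            if order["in_stock"]:
                return "shipped"
            else:
                return "backordered"
        else:
            return "awaiting payment"
    else:
        return "no order"

# Same logic flattened with guard clauses: each failure case
# exits early, so the happy path stays at one indent level.
def ship_order_flat(order):
    if order is None:
        return "no order"
    if not order["paid"]:
        return "awaiting payment"
    if not order["in_stock"]:
        return "backordered"
    return "shipped"
```

Three levels of nesting is survivable; the horizontal-scroll cases are the same pattern taken to ten or fifteen levels, where the flattened form is the only readable one.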
They've lasted because they've usually only had one or two employers, as permanent employees (rather than contractors), and no one code-reviews their work until it's too late.
So you are saying that you have interviewed hundreds of engineers and found many, many wanting, according to whatever standards you set up in your interviews? Including many who came with strong, verified recommendations from other workplaces?
Have you also followed up on those failed interviewees to verify that in a real world setting they cannot perform?
If you haven't, then all I can see is a statement of "I have a standard very few can meet".
The more pernicious senior engineer is one who can code their way out of a box in an interview, but cannot deliver working code in a production environment for various reasons; perfectionism seems to be a common thread to not delivering, in my (limited) experience.
Do you have any pointers on weeding these candidates out? Our country's employment laws make it hard to fire net negative producing programmers.
> can code their way out of a box in an interview, but cannot deliver working code in a production environment
> perfectionism seems to be a common thread to not delivering in my (limited) experience
I'm afraid I'm the engineer being described here. I pride myself on technical excellence, theoretical understanding and practical experience with lots of different technologies.
Yet my work history mostly consists of an unbroken trail of unfinished projects and undelivered deliverables stretching back decades.
How can I cure myself of this? Or should I switch careers?
I suppose the answer can only be supplied if we know more about you as a developer, and as a person. If you're a perfectionist, then it could well be that your previous projects are perceived by others to be of better quality than you yourself perceive them. A perfectionist could well consider a project 'unfinished' if it doesn't cover improbable edge cases, for example.
However, assuming that you're accurate in your statement that your projects were unfinished, then it would appear that you need to question why that's the case, as we cannot infer that the traits you've ascribed to yourself are the cause of project failures without knowing more about the other variables in the equation. For instance, the projects may well all have been failures through no fault of your own: scope creep, management interference, poor teamwork (or poor team members), or any number of other variables could be responsible. But since this is not something we can address at this point, let's assume that your project failures are indeed a result of your perfectionism.
The easiest way (in my experience) to 'cure' perfectionism in software is to follow the YAGNI ("You Ain't Gonna Need It") mantra, coupled with approaching the project from a divide-and-conquer perspective (yeah, pg would pretty much hate this).

First, break the project into its key deliverables and its 'nice to haves'. Once all the key deliverables are complete, the project is complete; 'nice to haves' aren't factored into the completeness equation. Once you know what your key deliverables are, order them by dependency: if deliverable A requires that deliverable B be completed before A can be, then focus on deliverable B first.

Now that you know what you're supposed to be working on, you can start on the deliverable at the root of your dependency tree, but while working on it, ensure that you're only adding the functionality necessary to complete the current deliverable. As you work your way through your deliverables, refactor the previous ones as necessary to make the current one work. For instance, if your root deliverable wasn't required to be thread-safe when you first wrote it, but your new deliverable requires it to be, then work back through your deliverables, refactoring as necessary to meet the requirements of the current one.
By sorting your deliverables into a dependency tree, you can ensure that you properly apply YAGNI. It's very tempting for a perfectionist to turn into an architecture astronaut and spend weeks or months mapping out the entire solution and all its edge cases before starting on a single deliverable; by keeping your deliverables in a dependency tree and working from the least to the most dependent, you force yourself out of this mindset. This method also steers you away from analysis paralysis, because you don't permit yourself to consider patterns or architecture until the project's current deliverable necessitates it.
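The dependency ordering described here is just a topological sort, which the standard library can do for you. A toy sketch with made-up deliverable names (requires Python 3.9+ for `graphlib`):

```python
from graphlib import TopologicalSorter

# Hypothetical deliverables: each maps to the deliverables it depends on.
deps = {
    "reporting": {"persistence", "auth"},
    "persistence": {"schema"},
    "auth": {"schema"},
    "schema": set(),
}

# static_order() yields deliverables root-first, so you always work on
# the least-dependent item next and only add what it actually needs.
order = list(TopologicalSorter(deps).static_order())
```

Here `order` starts with `schema` and ends with `reporting`; anything in between only depends on items already finished, which is exactly the discipline being described.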
It's worth noting that this method won't work for all projects, as it requires a good understanding of what the deliverables for the project actually are before you can apply it. It also may not work for you at all, since it's all down to personal experience, but I've personally found that forcing yourself to focus only on the concrete deliverables ensures that you don't end up stalling yourself in analysis, and gives you lots of little pep boosts in the form of finished deliverables and refactorings.
Nonetheless, good luck finding a cure that works for you.
Sure, but that's what code samples are for. If I send you code samples, and I can't explain to you what they mean or how I built them, or describe a variation of your choice about them, then you can assume that I don't know what I'm talking about. You can do this over the phone. There are endless, quality, revealing questions you can ask about provided code samples that are far, far, far, far superior to asking the equivalent of reciting the ABCs.
No code sample? Get some. Won't provide some? Don't bring them in for an interview.
The point I'm making is that you can pre-qualify candidates in a LOT of ways before you even bring someone into the office. If I get into your office and you ask me dumb, basic questions, then you haven't screened me effectively. You should theoretically be able to make those determinations before I ever step through the front door of your office.
> Sure, but that's what code samples are for. If I send you code samples, and I can't explain to you what they mean or how I built them, or describe a variation of your choice about them, then you can assume that I don't know what I'm talking about. You can do this over the phone. There are endless, quality, revealing questions you can ask about provided code samples that are far, far, far, far superior to asking the equivalent of reciting the ABCs.
That seems like a reasonable idea, but we've tried that before, and it's just a terrible filter. Let me try to make a more general claim: anything that lets the candidate have outside resources or unlimited time is going to be a bad filter. In this case, the candidate can prep the code sample and be able to explain it well.
How do I bring in code samples when all of the code I write has been for my previous employers and therefore is owned by them? Can't I be good at what I do without spending free time away from family and hobbies doing it even more?
You'd be surprised how many 'senior' engineers can't code their way out of a cardboard box.
Background: I've interviewed hundreds (if not thousands) of engineers.